author    Miroslav Los <miroslav.los@pantheon.tech>  2019-12-18 13:10:24 +0100
committer Miroslav Los <miroslav.los@pantheon.tech>  2020-01-07 10:25:09 +0100
commit    e0b1912b608523881e43e5d7e22610fafba8fac4 (patch)
tree      544d4d830a40c0ebe790ca24fb539ca203fa1492
parent    f31bd85266b8bdb7d95bb6f6e2f6d48967278f9a (diff)
Remove obsolete cdap and docker plugins
Issue-ID: DCAEGEN2-1987
Change-Id: I7e7114458a2931b8f1baf915f1714ee8465b86e5
Signed-off-by: Miroslav Los <miroslav.los@pantheon.tech>
-rw-r--r--  cdap/.gitignore                                          3
-rw-r--r--  cdap/Changelog.md                                       60
-rw-r--r--  cdap/LICENSE.txt                                        32
-rw-r--r--  cdap/README.md                                         178
-rwxr-xr-x  cdap/cdap_types.yaml                                   129
-rw-r--r--  cdap/cdapplugin/.coveragerc                             25
-rw-r--r--  cdap/cdapplugin/cdapcloudify/__init__.py                30
-rw-r--r--  cdap/cdapplugin/cdapcloudify/cdap_plugin.py            288
-rw-r--r--  cdap/cdapplugin/cdapcloudify/discovery.py              132
-rw-r--r--  cdap/cdapplugin/requirements.txt                         3
-rw-r--r--  cdap/cdapplugin/setup.py                                40
-rw-r--r--  cdap/cdapplugin/tests/test_cdap_plugin.py              103
-rw-r--r--  cdap/cdapplugin/tests/test_discovery.py                114
-rw-r--r--  cdap/cdapplugin/tox-local.ini                           30
-rw-r--r--  cdap/cdapplugin/tox.ini                                 29
-rw-r--r--  cdap/demo_blueprints/cdap_hello_world.yaml              68
-rwxr-xr-x  cdap/demo_blueprints/cdap_hello_world_reconfigure.sh    21
-rw-r--r--  cdap/demo_blueprints/cdap_hello_world_with_dmaap.yaml  165
-rw-r--r--  cdap/demo_blueprints/cdap_hello_world_with_laika.yaml   97
-rw-r--r--  cdap/demo_blueprints/cdap_hello_world_with_mr.yaml     151
-rw-r--r--  cdap/pom.xml                                           167
-rw-r--r--  docker/.coveragerc                                      21
-rw-r--r--  docker/.gitignore                                       68
-rw-r--r--  docker/ChangeLog.md                                     81
-rw-r--r--  docker/LICENSE.txt                                      32
-rw-r--r--  docker/README.md                                       214
-rw-r--r--  docker/docker-node-type.yaml                           386
-rw-r--r--  docker/dockerplugin/__init__.py                         31
-rw-r--r--  docker/dockerplugin/decorators.py                      102
-rw-r--r--  docker/dockerplugin/discovery.py                       257
-rw-r--r--  docker/dockerplugin/exceptions.py                       29
-rw-r--r--  docker/dockerplugin/tasks.py                           672
-rw-r--r--  docker/dockerplugin/utils.py                            44
-rw-r--r--  docker/examples/blueprint-laika-dmaap-pubs.yaml        165
-rw-r--r--  docker/examples/blueprint-laika-dmaap-pubsub.yaml      167
-rw-r--r--  docker/examples/blueprint-laika-dmaap-subs.yaml        173
-rw-r--r--  docker/examples/blueprint-laika-policy.yaml            138
-rw-r--r--  docker/examples/blueprint-laika.yaml                    97
-rw-r--r--  docker/examples/blueprint-registrator.yaml              64
-rw-r--r--  docker/pom.xml                                         165
-rw-r--r--  docker/requirements.txt                                  5
-rw-r--r--  docker/setup.py                                         38
-rw-r--r--  docker/tests/test_decorators.py                         36
-rw-r--r--  docker/tests/test_discovery.py                          69
-rw-r--r--  docker/tests/test_tasks.py                             277
-rw-r--r--  docker/tests/test_utils.py                              32
-rw-r--r--  docker/tox.ini                                          29
47 files changed, 0 insertions(+), 5257 deletions(-)
diff --git a/cdap/.gitignore b/cdap/.gitignore
deleted file mode 100644
index f5e9b16..0000000
--- a/cdap/.gitignore
+++ /dev/null
@@ -1,3 +0,0 @@
-cfyhelper.sh
-cdapplugin/coverage-reports/
-cdapplugin/xunit-reports/*
diff --git a/cdap/Changelog.md b/cdap/Changelog.md
deleted file mode 100644
index 53220e7..0000000
--- a/cdap/Changelog.md
+++ /dev/null
@@ -1,60 +0,0 @@
-# Change Log
-All notable changes to this project will be documented in this file.
-
-The format is based on [Keep a Changelog](http://keepachangelog.com/)
-and this project adheres to [Semantic Versioning](http://semver.org/).
-
-## [14.3.0]
-* DCAEGEN2-1956 support python3 in all plugins
-
-## [14.2.5] - Sep 21 2017
-* Use the public pypi version of policy lib
-
-## [14.2.4] - Sep 15 2017
-* Add decorator usage to DRY up code
-
-## [14.2.3] - Sep 14 2017
-* Remove the raise for status from discovery into tasks, allows for unit testing
-* Unit test discovery
-
-## [14.2.2] - MISSING
-
-## [14.2.1] - MISSING
-
-## [14.2.0]
-* Integrate with Policy handler. Policy handling for CDAP is done.
-
-## [14.1.0]
-* Merge the broker deleter function into here; no need for separate plugin
-
-## [14.0.2]
-* Start a tox/pytest unit test suite
-
-## [14.0.1]
-* Type file change to move reconfiguration defaults into the type file so each blueprint doesn't need them.
-
-## [14.0.0]
-* Better type speccing in the type file
-* Simplify the component naming
-* Remove the unused (after two years) location and service-id properties
-* Add more demo blueprints and reconfiguration tests
-
-## [13.0.0]
-* Support for data router publication. Data router subscription is a problem, see README.
-* Fixes `services_calls` to have the same format as streams. This is an API break but users are aware.
-
-## [12.1.0]
-* Support for message router integration. Data router publish to come in next release.
-
-## [12.0.1]
-* Use "localhost" instead of solutioning Consul host.
-
-## [12.0.0]
-* Add in functions for policy to call (execute_workflows) to reconfigure CDAP applications
-* Remove "Selected" Nonsense.
-
-FAILURE TO UPDATE
-
-## [10.0.0]
-* Update to support broker API 3.X. This is a breaking change, involving the renaming of Node types
-* Cut dependencies over to Nexus
diff --git a/cdap/LICENSE.txt b/cdap/LICENSE.txt
deleted file mode 100644
index cb8008a..0000000
--- a/cdap/LICENSE.txt
+++ /dev/null
@@ -1,32 +0,0 @@
-============LICENSE_START=======================================================
-org.onap.dcae
-================================================================================
-Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-================================================================================
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-============LICENSE_END=========================================================
-
-ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-
-Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-===================================================================
-Licensed under the Creative Commons License, Attribution 4.0 Intl. (the "License");
-you may not use this documentation except in compliance with the License.
-You may obtain a copy of the License at
- https://creativecommons.org/licenses/by/4.0/
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
diff --git a/cdap/README.md b/cdap/README.md
deleted file mode 100644
index b1601a7..0000000
--- a/cdap/README.md
+++ /dev/null
@@ -1,178 +0,0 @@
-# cdap-cloudify
-Contains a plugin and type file for deploying CDAP and related artifacts.
-
-# service component name
-When the cdap plugin deploys an application, it generates a service component name. That service component name is injected
-into the node's runtime dictionary under the key "service_component_name" and also made available as an output under this key.
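-
-A minimal sketch of how that name is generated, mirroring the `create` operation in `cdap_plugin.py` later in this diff (the example type is illustrative):
-```
-import uuid
-
-def make_service_component_name(service_component_type):
-    # The config binding service distinguishes CDAP components by the
-    # literal "cdap_app" embedded in the name.
-    return "{0}_cdap_app_{1}".format(str(uuid.uuid4()).replace("-", ""),
-                                     service_component_type)
-
-# e.g. make_service_component_name("hello_world")
-# -> "3f2c...e1_cdap_app_hello_world"
-```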
-
-# Demo blueprints
-There is a subfolder in this repo called `demo_blueprints` that contains (templatized) example blueprints.
-
-# Connections
-Since you cannot type-spec complicated objects in a cloudify node type, I have to explain this here. This is a requirement on all blueprints that use this node type.
-
-There is a property at the top level of the CDAP node called `connections` that expects a specific structure, best shown by example.
-
-## DMaaP
-
-### Message Router
-Message router publication
-```
- connections:
-   streams_publishes: # is a list
-     - name: topic00 # THIS NAME MUST MATCH THE NODE NAME IN BLUEPRINT, SEE BELOW
-       location: mtc5
-       client_role: XXXX
-       type: message_router
-       config_key: "myconfigkey0" # from spec
-       aaf_username: { get_input: aafu0 }
-       aaf_password: { get_input: aafp0 }
-     - name: topic01 # THIS NAME MUST MATCH THE NODE NAME IN BLUEPRINT, SEE BELOW
-       location: mtc5
-       client_role: XXXX
-       type: message_router
-       config_key: "myconfigkey1" # from spec
-       aaf_username: { get_input: aafu1 }
-       aaf_password: { get_input: aafp1 }
-```
-Message router subscription is the exact same format, except change `streams_publishes` to `streams_subscribes`:
-```
- streams_subscribes:
- - name: topic00 #MEANT FOR DEMO ONLY! Subscribing and publishing to same topic. Not real example.
- location: mtc5
- client_role: XXXX
- type: message_router
- config_key: "myconfigkey2"
- aaf_username: { get_input: aafu2 }
- aaf_password: { get_input: aafp2 }
- - name: topic01
- location: mtc5
- client_role: XXXX
- type: message_router
- config_key: "myconfigkey3"
- aaf_username: { get_input: aafu3 }
- aaf_password: { get_input: aafp3 }
-```
-The terms `streams_publishes` and `streams_subscribes` come from the component specification.
-
-### Data Router
-For publication, data router does not have the notion of AAF credentials, and there is no `client_role`. So the expected blueprint input is simpler than the MR case:
-```
- streams_publishes:
- ...
- - name: feed00
- location: mtc5
- type: data_router
- config_key: "mydrconfigkey"
-```
-
-Data router subscription is not supported because there is an impedance mismatch between DR and CDAP.
-CDAP streams expect a POST but DR outputs a PUT.
-Some future platform capability needs to fill this hole; either something like the AF team's DR Sub or DMD.
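-
-A hypothetical shim illustrating the mismatch (the host, port, and endpoint paths below are assumptions, not a real platform component):
-```
-# Sketch: accept Data Router's PUT delivery and re-issue it as the POST
-# that CDAP stream ingestion expects. Purely illustrative.
-from flask import Flask, request
-import requests
-
-app = Flask(__name__)
-
-# assumed CDAP stream ingest endpoint for a stream named "who"
-CDAP_STREAM_URL = "http://cdap-host:11015/v3/namespaces/default/streams/who"
-
-@app.route("/delivery/<file_id>", methods=["PUT"])
-def relay(file_id):
-    # DR delivers with PUT; re-POST the payload into the CDAP stream.
-    resp = requests.post(CDAP_STREAM_URL, data=request.get_data())
-    return "", resp.status_code
-```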
-
-### Bound configuration
-The above blueprint snippets will lead to the cdap application's `app_config` getting an entry that looks like this:
-```
-{
- "streams_subscribes":{
- "myconfigkey3":{
- "type":"message_router",
- "aaf_username":"foo3",
- "aaf_password":"bar3",
- "dmaap_info":{
- "client_role":"XXXX",
- "client_id":"XXXX",
- "location":"XXXX",
- "topic_url":"XXXX"
- }
- },
- "myconfigkey2":{
- "type":"message_router",
- "aaf_username":"foo2",
- "aaf_password":"bar2",
- "dmaap_info":{
- "client_role":"XXXX",
- "client_id":"XXXX",
- "location":"XXXX",
- "topic_url":"XXXX"
- }
- }
- },
- "streams_publishes":{
- "myconfigkey1":{
- "type":"message_router",
- "aaf_username":"foo1",
- "aaf_password":"bar1",
- "dmaap_info":{
- "client_role":"XXXX",
- "client_id":"XXXX",
- "location":"XXXX",
- "topic_url":"XXXX"
- }
- },
- "mydrconfigkey":{
- "type":"data_router",
- "dmaap_info":{
- "username":"XXXX",
- "location":"XXXX",
- "publish_url":"XXXX",
- "publisher_id":"XXXX",
- "log_url":"XXXX",
- "password":"XXXX"
- }
- },
- "myconfigkey0":{
- "type":"message_router",
- "aaf_username":"foo0",
- "aaf_password":"bar0",
- "dmaap_info":{
- "client_role":"XXXX",
- "client_id":"XXXX",
- "location":"XXXX",
- "topic_url":"XXXX"
- }
- }
- }
-}
-```
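-
-A component (or a quick test) can then pull connection details out of that bound structure; a minimal sketch, assuming the JSON above has been loaded as a Python dict named `app_config`:
-```
-def topic_url(app_config, config_key):
-    # Look up the resolved DMaaP topic URL for a message_router publish key.
-    entry = app_config["streams_publishes"][config_key]
-    assert entry["type"] == "message_router"
-    return entry["dmaap_info"]["topic_url"]
-
-# e.g. topic_url(app_config, "myconfigkey1")
-```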
-## HTTP
-In addition to DMaaP, we support HTTP services.
-
-### Services Calls
-In a blueprint, to express that one component `A` calls an asynchronous HTTP service of another component `B` (written `A -> B`), you need:
-
-1. `A` to have a `connections/services_calls` entry:
-```
- connections:
- services_calls:
- - service_component_type: laika
- config_key: "laika_handle"
-```
-2. A relationship of type `dcae.relationships.component_connected_to` from A to B.
-
-3. The `B` node's `service_component_type` property must match the `service_component_type` given in #1
-
-See the demo blueprint `cdap_hello_world_with_laika.yaml`
-
-### Bound Configuration
-
-The above (without having defined streams) will lead to:
-```
-{
- "streams_subscribes":{
-
- },
- "streams_publishes":{
-
- },
- "services_calls":{
- "laika_handle":[
-      "some_ip:some_port"
- ]
- }
-}
-```
-Note that the value is always a list of `ip:port` entries because there could be multiple identical services that satisfy the client (`A` in this case). This is client-side load balancing.
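-
-A minimal sketch of that client-side choice, assuming the bound config above is a Python dict named `app_config`:
-```
-import random
-
-def pick_address(app_config, config_key):
-    # All entries are equivalent providers of the service; pick one at random.
-    addresses = app_config["services_calls"][config_key]
-    return random.choice(addresses)
-
-# e.g. pick_address(app_config, "laika_handle") -> "some_ip:some_port"
-```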
-
-# Tests
-To run the tests, you need `tox`. You can get it with `pip install tox`. After that, simply run `tox -c tox-local.ini` from inside the `cdapplugin` directory to run the tests and generate a coverage report.
diff --git a/cdap/cdap_types.yaml b/cdap/cdap_types.yaml
deleted file mode 100755
index ae0f146..0000000
--- a/cdap/cdap_types.yaml
+++ /dev/null
@@ -1,129 +0,0 @@
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-tosca_definitions_version: cloudify_dsl_1_3
-
-imports:
- - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
-plugins:
- cdap_deploy:
- executor: central_deployment_agent
- package_name: cdapcloudify
- package_version: 14.3.0
-
-data_types:
- cdap_connections:
- properties:
- services_calls:
- default: []
- streams_publishes:
- default: []
- streams_subscribes:
- default: []
-
-node_types:
- dcae.nodes.MicroService.cdap:
- derived_from: cloudify.nodes.Root
- properties:
- service_component_type:
- type: string
- #####
- #For the following parameters in this block, see the Broker API
- #####
- jar_url:
- type: string
- artifact_name:
- type: string
- artifact_version:
- type: string
- connections:
- type: cdap_connections
- app_config:
- default: {}
- app_preferences:
- default: {}
- program_preferences:
- default: []
- programs:
- default: []
- streamname:
-      #currently, we only support CDAP apps that read from a
-      #stream. This is not the only ingest mechanism for CDAP; this may have to change.
- type: string
- namespace:
- #the namespace to deploy the CDAP app into
- #defaults to the default cdap namespace which is called "default"
- type: string
- default : "default"
- service_endpoints:
- default: []
-
- interfaces:
- cloudify.interfaces.lifecycle:
- create:
- implementation: cdap_deploy.cdapcloudify.cdap_plugin.create
- inputs:
- connected_broker_dns_name:
- type: string
-            description: This is the broker's DNS name. There could be multiple brokers/clusters at a site. Could be populated via an intrinsic function in a blueprint, or manually via an inputs file.
- default: "cdap_broker"
- start:
- cdap_deploy.cdapcloudify.cdap_plugin.deploy_and_start_application
- delete:
- cdap_deploy.cdapcloudify.cdap_plugin.stop_and_undeploy_application
-
- dcae.interfaces.policy:
- policy_update:
- implementation:
- cdap_deploy.cdapcloudify.cdap_plugin.policy_update
- inputs:
- updated_policies:
- description: "list of policy objects"
- default: []
-
- #TODO: These can probably go away after policy_update is implemented
- reconfiguration:
- app_config_reconfigure:
- implementation: cdap_deploy.cdapcloudify.cdap_plugin.app_config_reconfigure
- inputs:
- new_config_template:
- description: "new unbound config for the CDAP AppConfig as a JSON"
- default: {}
- app_preferences_reconfigure:
- implementation: cdap_deploy.cdapcloudify.cdap_plugin.app_preferences_reconfigure
- inputs:
- new_config_template:
- description: "new bound config for the CDAP AppPreferences as a JSON"
- default: {}
- app_smart_reconfigure:
- implementation: cdap_deploy.cdapcloudify.cdap_plugin.app_smart_reconfigure
- inputs:
- new_config_template:
- description: "new unbound config for the CDAP AppConfig as a JSON"
- default: {}
-
- dcae.nodes.broker_deleter:
- derived_from: cloudify.nodes.Root
- interfaces:
- cloudify.interfaces.lifecycle:
- delete: #stop better than delete? not sure it matters much. Think all source interfaces are operated on before target on uninstall.
- implementation: cdap_deploy.cdapcloudify.cdap_plugin.delete_all_registered_apps
- inputs:
- connected_broker_dns_name:
- type: string
-            description: This is the broker's DNS name. There could be multiple brokers/clusters at a site. Could be populated via an intrinsic function in a blueprint, or manually via an inputs file.
- default: "cdap_broker"
diff --git a/cdap/cdapplugin/.coveragerc b/cdap/cdapplugin/.coveragerc
deleted file mode 100644
index 7e87c60..0000000
--- a/cdap/cdapplugin/.coveragerc
+++ /dev/null
@@ -1,25 +0,0 @@
-# .coveragerc to control coverage.py
-[run]
-branch = True
-cover_pylib = False
-include = */cdapcloudify/*.py
-
-[report]
-# Regexes for lines to exclude from consideration
-exclude_lines =
- # Have to re-enable the standard pragma
- pragma: no cover
-
- # Don't complain about missing debug-only code:
- def __repr__
- if self\.debug
-
- # Don't complain if tests don't hit defensive assertion code:
- raise AssertionError
- raise NotImplementedError
-
- # Don't complain if non-runnable code isn't run:
- if 0:
- if __name__ == .__main__.:
-
-ignore_errors = True
diff --git a/cdap/cdapplugin/cdapcloudify/__init__.py b/cdap/cdapplugin/cdapcloudify/__init__.py
deleted file mode 100644
index 388ac55..0000000
--- a/cdap/cdapplugin/cdapcloudify/__init__.py
+++ /dev/null
@@ -1,30 +0,0 @@
-# org.onap.dcae
-# ================================================================================
-# Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-import logging
-
-def get_module_logger(mod_name):
- logger = logging.getLogger(mod_name)
- handler = logging.StreamHandler()
- formatter = logging.Formatter(
- '%(asctime)s [%(name)-12s] %(levelname)-8s %(message)s')
- handler.setFormatter(formatter)
- logger.addHandler(handler)
- logger.setLevel(logging.DEBUG)
- return logger
diff --git a/cdap/cdapplugin/cdapcloudify/cdap_plugin.py b/cdap/cdapplugin/cdapcloudify/cdap_plugin.py
deleted file mode 100644
index a64354c..0000000
--- a/cdap/cdapplugin/cdapcloudify/cdap_plugin.py
+++ /dev/null
@@ -1,288 +0,0 @@
-# org.onap.dcae
-# ================================================================================
-# Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-from onap_dcae_dcaepolicy_lib import Policies
-
-import requests
-from cloudify import ctx
-from cloudify.decorators import operation
-from cloudify.exceptions import NonRecoverableError
-import time
-import uuid
-import re
-from cdapcloudify import discovery
-import json
-import requests
-
-# Property keys
-SERVICE_COMPONENT_NAME = "service_component_name"
-SELECTED_BROKER = "selected_broker"
-PUB_C = "streams_publishes_for_config"
-SUB_C = "streams_subscribes_for_config"
-SER_C = "services_calls_for_config"
-STREAMS_PUBLISHES = "streams_publishes"
-STREAMS_SUBSCRIBES = "streams_subscribes"
-SERVICES_CALLS = "services_calls"
-
-# Custom Exception
-class BadConnections(NonRecoverableError):
- pass
-
-
-def _trigger_update(updated_policies):
-    """
-    Helper function for reconfiguring after a policy update
-
-    updated_policies is assumed to be a list of JSONs that are applicable to the broker's smart interface
-    """
-    response = None
-    for p in updated_policies:
-        ctx.logger.info("Reconfiguring CDAP application via smart interface")
-        response = discovery.reconfigure_in_broker(
-            cdap_broker_name = ctx.instance.runtime_properties[SELECTED_BROKER],
-            service_component_name = ctx.instance.runtime_properties[SERVICE_COMPONENT_NAME],
-            config = p,
-            reconfiguration_type = "program-flowlet-smart",
-            logger = ctx.logger)
-    return response #apply every policy; callers raise_for_status on the returned (last) response
-
-def _validate_conns(connections):
- """
-    Cloudify allows you to type-spec a data type in a type file, but it does not appear to do strict checking of blueprints against it.
-    Sad!
-    The "connections" block has a structure that is important to this plugin, so we validate it here and fail fast if it is not correct.
- """
- try:
- def _assert_ks_in_d(ks,d):
- for k in ks:
- assert(k in d)
- assert STREAMS_PUBLISHES in connections
- assert STREAMS_SUBSCRIBES in connections
- for s in connections[STREAMS_PUBLISHES] + connections[STREAMS_SUBSCRIBES]:
- _assert_ks_in_d(["name", "location", "type", "config_key"], s)
- assert(s["type"] in ["message_router", "data_router"])
- if s["type"] == "message_router":
-                _assert_ks_in_d(["aaf_username", "aaf_password", "client_role"], s) #I am not checking that these are not blank. I will leave it possible for you to put empty values for these, but force you to acknowledge that you are doing so by not allowing these to be omitted.
- #nothing extra for DR; no AAF, no client role.
- except:
-        raise BadConnections("Bad Connections definition in blueprint") #is a NonRecoverableError
-
-def _streams_iterator(streams):
- """
- helper function for iterating over streams_publishes and subscribes
- note! this is an impure function. it also sets the properties the dmaap plugin needs into runtime properties
- """
- for_config = {}
- for s in streams:
- if s["type"] == "message_router":
- #set the properties the DMaaP plugin needs
- ctx.instance.runtime_properties[s["name"]] = {"client_role" : s["client_role"], "location" : s["location"]}
- #form (or append to) the dict the component will get, including the template for the CBS
-            for_config[s["config_key"]] = {"aaf_username" : s["aaf_username"], "aaf_password" : s["aaf_password"], "type" : s["type"], "dmaap_info" : "<<" + s["name"] + ">>"} #will get bound by CBS
- if s["type"] == "data_router":
-            #set the properties the DMaaP plugin needs
-            ctx.instance.runtime_properties[s["name"]] = {"location" : s["location"]}
-            #form (or append to) the dict the component will get, including the template for the CBS
- for_config[s["config_key"]] = {"type" : s["type"], "dmaap_info" : "<<" + s["name"] + ">>"} #will get bound by CBS
-
- return for_config
-
-def _services_calls_iterator(services_calls):
- """
- helper function for iterating over services_calls
- """
- for_config = {}
- for s in services_calls:
- #form (or append to) the dict the component will get, including the template for the CBS
- for_config[s["config_key"]] = "{{ " + s["service_component_type"] + " }}" #will get bound by CBS
- return for_config
-
-######################
-# Decorators
-######################
-def try_raise_nonr(func):
- def inner(*args, **kwargs):
- try:
- return func(*args, **kwargs)
- except Exception as e:
- raise NonRecoverableError(e)
- return inner
-
-######################
-# Cloudify Operations
-######################
-
-@operation
-@try_raise_nonr
-def create(connected_broker_dns_name, **kwargs):
- """
- setup critical runtime properties
- """
-
- #fail fast
- _validate_conns(ctx.node.properties["connections"])
-
-    #The config binding service needs to know whether the component is cdap or docker. Currently (Aug 1 2018) it looks for "cdap_app" in the name
- service_component_name = "{0}_cdap_app_{1}".format(str(uuid.uuid4()).replace("-",""), ctx.node.properties["service_component_type"])
-
- #set this into a runtime dictionary
- ctx.instance.runtime_properties[SERVICE_COMPONENT_NAME] = service_component_name
-
- #fetch the broker name from inputs and set it in runtime properties so other functions can use it
- ctx.instance.runtime_properties[SELECTED_BROKER] = connected_broker_dns_name
-
-    #set the properties the DMaaP plugin expects for message router
-    #see the README for the structures of these keys
-    #NOTE! This has to be done in create because Jack's DMaaP plugin expects to do its thing in preconfigure,
- # and we need to get this key into consul before start
- #set this as a runtime property for start to use
- ctx.instance.runtime_properties[PUB_C] = _streams_iterator(ctx.node.properties["connections"][STREAMS_PUBLISHES])
- ctx.instance.runtime_properties[SUB_C] = _streams_iterator(ctx.node.properties["connections"][STREAMS_SUBSCRIBES])
- ctx.instance.runtime_properties[SER_C] = _services_calls_iterator(ctx.node.properties["connections"][SERVICES_CALLS])
-
-@operation
-@try_raise_nonr
-@Policies.gather_policies_to_node
-def deploy_and_start_application(**kwargs):
- """
- pushes the application into the workspace and starts it
- """
- #parse TOSCA model params
- config_template = ctx.node.properties["app_config"]
-
-    #there is a typed section in the node type called "connections", but the broker expects these keys at the top level of app_config, so add them here
-    #In Cloudify you can't have a custom data type and then specify unknown properties, the validation will fail, so type-speccing just part of app_config doesn't work
- #the rest of the CDAP app's app_config is app-dependent
- config_template[SERVICES_CALLS] = ctx.instance.runtime_properties[SER_C]
- config_template[STREAMS_PUBLISHES] = ctx.instance.runtime_properties[PUB_C]
- config_template[STREAMS_SUBSCRIBES] = ctx.instance.runtime_properties[SUB_C]
-
- #register with broker
- ctx.logger.info("Registering with Broker, config template was: {0}".format(json.dumps(config_template)))
- response = discovery.put_broker(
- cdap_broker_name = ctx.instance.runtime_properties[SELECTED_BROKER],
- service_component_name = ctx.instance.runtime_properties[SERVICE_COMPONENT_NAME],
- namespace = ctx.node.properties["namespace"],
- streamname = ctx.node.properties["streamname"],
- jar_url = ctx.node.properties["jar_url"],
- artifact_name = ctx.node.properties["artifact_name"],
- artifact_version = ctx.node.properties["artifact_version"],
- app_config = config_template,
- app_preferences = ctx.node.properties["app_preferences"],
- service_endpoints = ctx.node.properties["service_endpoints"],
- programs = ctx.node.properties["programs"],
- program_preferences = ctx.node.properties["program_preferences"],
- logger = ctx.logger)
-
- response.raise_for_status() #bomb if not 2xx
-
- #TODO! Would be better to do an initial merge first before deploying, but the merge is complicated for CDAP
- #because of app config vs. app preferences. So, for now, let the broker do the work with an immediate reconfigure
- #get policies that may have changed prior to this blueprint deployment
- policy_configs = Policies.get_policy_configs()
- if policy_configs is not None:
- ctx.logger.info("Updated policy configs: {0}".format(policy_configs))
- response = _trigger_update(policy_configs)
- response.raise_for_status() #bomb if not 2xx
-
-@operation
-def stop_and_undeploy_application(**kwargs):
-    #per Jack Lucas, do not raise NonRecoverableErrors on any delete operation. Keep going on them all, cleaning up as much as you can.
-    #bombing would also bomb the deletion of the rest of the blueprint
- ctx.logger.info("Undeploying CDAP application")
- try: #deregister with the broker, which will also take down the service from consul
- response = discovery.delete_on_broker(
- cdap_broker_name = ctx.instance.runtime_properties[SELECTED_BROKER],
- service_component_name = ctx.instance.runtime_properties[SERVICE_COMPONENT_NAME],
- logger = ctx.logger)
- response.raise_for_status() #bomb if not 2xx
- except Exception as e:
- ctx.logger.error("Error deregistering from Broker, but continuing with deletion process: {0}".format(e))
-
-@operation
-def delete_all_registered_apps(connected_broker_dns_name, **kwargs):
- """
- Used in the cdap broker deleter node.
- Deletes all registered applications (in the broker)
-    per Jack Lucas, do not raise NonRecoverableErrors on any delete operation. Keep going on them all, cleaning up as much as you can.
- """
- ctx.logger.info("Undeploying CDAP application")
- try:
- response = discovery.delete_all_registered_apps(
- cdap_broker_name = connected_broker_dns_name,
- logger = ctx.logger)
- response.raise_for_status() #bomb if not 2xx
- except Exception as e:
- ctx.logger.error("Error deregistering from Broker, but continuing with deletion process: {0}".format(e))
-
-############
-#RECONFIGURATION
-# These calls works as follows:
-# 1) it expects "new_config_template" to be a key in kwargs, i.e., passed in using execute_operations -p parameter
-# 2) it pushes the new unbound config down to the broker
-# 3) broker deals with the rest
-############
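-# Example invocation (see demo_blueprints/cdap_hello_world_reconfigure.sh):
-#   cfy executions start -d cdap_hello_world -w execute_operation \
-#     -p '{"operation": "reconfiguration.app_smart_reconfigure", "node_ids": ["hw_cdap_app"], "operation_kwargs": {"new_config_template": {"foo": "bar"}}, "allow_kwargs_override": true}'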
-@operation
-@try_raise_nonr
-def app_config_reconfigure(new_config_template, **kwargs):
- """
- reconfigure the CDAP app's app config
- """
- ctx.logger.info("Reconfiguring CDAP application via app_config")
- response = discovery.reconfigure_in_broker(
- cdap_broker_name = ctx.instance.runtime_properties[SELECTED_BROKER],
- service_component_name = ctx.instance.runtime_properties[SERVICE_COMPONENT_NAME],
- config = new_config_template, #This keyname will likely change per policy handler
- reconfiguration_type = "program-flowlet-app-config",
- logger = ctx.logger)
- response.raise_for_status() #bomb if not 2xx
-
-@operation
-@try_raise_nonr
-def app_preferences_reconfigure(new_config_template, **kwargs):
- """
- reconfigure the CDAP app's app preferences
- """
- ctx.logger.info("Reconfiguring CDAP application via app_preferences")
- response = discovery.reconfigure_in_broker(
- cdap_broker_name = ctx.instance.runtime_properties[SELECTED_BROKER],
- service_component_name = ctx.instance.runtime_properties[SERVICE_COMPONENT_NAME],
- config = new_config_template, #This keyname will likely change per policy handler
- reconfiguration_type = "program-flowlet-app-preferences",
- logger = ctx.logger)
- response.raise_for_status() #bomb if not 2xx
-
-@operation
-@try_raise_nonr
-def app_smart_reconfigure(new_config_template, **kwargs):
- """
- reconfigure the CDAP app via the broker smart interface
- """
- ctx.logger.info("Reconfiguring CDAP application via smart interface")
- response = _trigger_update([new_config_template])
- response.raise_for_status() #bomb if not 2xx
-
-@operation
-@try_raise_nonr
-@Policies.update_policies_on_node(configs_only=True)
-def policy_update(updated_policies, **kwargs):
-    #it's already delivered through policy
-    ctx.logger.info("Policy update received. updated policies: {0}".format(updated_policies))
- #TODO! In the future, if we really have many different policies, would be more efficient to do a single merge here.
- #However all use cases today are a single policy so OK with this for loop for now.
- response = _trigger_update(updated_policies)
- response.raise_for_status() #bomb if not 2xx
-
diff --git a/cdap/cdapplugin/cdapcloudify/discovery.py b/cdap/cdapplugin/cdapcloudify/discovery.py
deleted file mode 100644
index e20258f..0000000
--- a/cdap/cdapplugin/cdapcloudify/discovery.py
+++ /dev/null
@@ -1,132 +0,0 @@
-# org.onap.dcae
-# ================================================================================
-# Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-import requests
-import json
-
-CONSUL_HOST = "http://localhost:8500"
-
-def _get_connection_info_from_consul(service_component_name, logger):
- """
- Call consul's catalog
- TODO: currently assumes there is only one service
- """
- url = "{0}/v1/catalog/service/{1}".format(CONSUL_HOST, service_component_name)
- logger.info("Trying to query: {0}".format(url))
- res = requests.get(url)
- res.raise_for_status()
- services = res.json()
- return services[0]["ServiceAddress"], services[0]["ServicePort"]
-
-def _get_broker_url(cdap_broker_name, service_component_name, logger):
- """
- fetch the broker connection information from Consul
- """
- broker_ip, broker_port = _get_connection_info_from_consul(cdap_broker_name, logger)
- broker_url = "http://{ip}:{port}/application/{appname}".format(ip=broker_ip, port=broker_port, appname=service_component_name)
- logger.info("Trying to connect to broker endpoint: {0}".format(broker_url))
- return broker_url
-
-"""
-decorators
-"""
-def run_response(func):
- """
-    decorator for a generic http call: log the response and return the requests response
-
-    make sure you call the functions below using logger as a kwarg!
- """
- def inner(*args, **kwargs):
- logger = kwargs["logger"]
- response = func(*args, **kwargs)
- logger.info((response, response.status_code, response.text))
- return response #let the caller deal with the response
- return inner
-
-"""
-public
-"""
-@run_response
-def put_broker(cdap_broker_name,
- service_component_name,
- namespace,
- streamname,
- jar_url,
- artifact_name,
- artifact_version,
- app_config,
- app_preferences,
- service_endpoints,
- programs,
- program_preferences,
- logger):
- """
- Conforms to Broker API 4.X
- """
-
- data = dict()
- data["cdap_application_type"] = "program-flowlet"
- data["namespace"] = namespace
- data["streamname"] = streamname
- data["jar_url"] = jar_url
- data["artifact_name"] = artifact_name
- data["artifact_version"] = artifact_version
- data["app_config"] = app_config
- data["app_preferences"] = app_preferences
- data["services"] = service_endpoints
- data["programs"] = programs
- data["program_preferences"] = program_preferences
-
- #register with the broker
- return requests.put(_get_broker_url(cdap_broker_name, service_component_name, logger),
- json = data,
- headers = {'content-type':'application/json'})
-
-@run_response
-def reconfigure_in_broker(cdap_broker_name,
- service_component_name,
- config,
- reconfiguration_type,
- logger):
- #trigger a reconfiguration with the broker
- #man am I glad I broke the broker API from 3 to 4 to standardize this interface because now I only need one function here
- return requests.put("{u}/reconfigure".format(u = _get_broker_url(cdap_broker_name, service_component_name, logger)),
- headers = {'content-type':'application/json'},
- json = {"reconfiguration_type" : reconfiguration_type,
- "config" : config})
-
-@run_response
-def delete_on_broker(cdap_broker_name, service_component_name, logger):
- #deregister with the broker
- return requests.delete(_get_broker_url(cdap_broker_name, service_component_name, logger))
-
-@run_response
-def delete_all_registered_apps(cdap_broker_name, logger):
- #get the broker connection
- broker_ip, broker_port = _get_connection_info_from_consul(cdap_broker_name, logger)
- broker_url = "http://{ip}:{port}".format(ip=broker_ip, port=broker_port)
-
- #binge and purge
- logger.info("Trying to connect to broker called {0} at {1}".format(cdap_broker_name, broker_url))
- registered_apps = json.loads(requests.get("{0}/application".format(broker_url)).text) #should be proper list of strings (appnames)
- logger.info("Trying to delete: {0}".format(registered_apps))
- return requests.post("{0}/application/delete".format(broker_url),
- headers = {'content-type':'application/json'},
- json = {"appnames" : registered_apps})
-
diff --git a/cdap/cdapplugin/requirements.txt b/cdap/cdapplugin/requirements.txt
deleted file mode 100644
index 43a0ea1..0000000
--- a/cdap/cdapplugin/requirements.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-onap-dcae-dcaepolicy-lib==1.0.0
-cloudify-common>=5.0.0; python_version<"3"
-cloudify-common @ git+https://github.com/cloudify-cosmo/cloudify-common@cy-1374-python3#egg=cloudify-common==5.0.0; python_version>="3"
diff --git a/cdap/cdapplugin/setup.py b/cdap/cdapplugin/setup.py
deleted file mode 100644
index 5ef6cc6..0000000
--- a/cdap/cdapplugin/setup.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# org.onap.dcae
-# ================================================================================
-# Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-# Copyright (c) 2019 Pantheon.tech. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-import os
-from setuptools import setup, find_packages
-
-setup(
- name = "cdapcloudify",
- version = "14.3.0",
- packages=find_packages(),
- author = "Tommy Carpenter",
- author_email = "tommy@research.att.com",
- description = ("Cloudify plugin for CDAP"),
- license = "",
- keywords = "",
- url = "https://gerrit.onap.org/r/#/admin/projects/dcaegen2/platform/plugins",
- zip_safe=False,
- install_requires = [
- # FIXME: not compatible with latest version
- 'onap-dcae-dcaepolicy-lib==1.0.0',
- 'cloudify-common>=5.0.0',
- ]
-)
diff --git a/cdap/cdapplugin/tests/test_cdap_plugin.py b/cdap/cdapplugin/tests/test_cdap_plugin.py
deleted file mode 100644
index f28485d..0000000
--- a/cdap/cdapplugin/tests/test_cdap_plugin.py
+++ /dev/null
@@ -1,103 +0,0 @@
-# org.onap.dcae
-# ================================================================================
-# Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-from cdapcloudify.cdap_plugin import _validate_conns, BadConnections, try_raise_nonr
-import pytest
-from cloudify.exceptions import NonRecoverableError
-
-#todo.. add more tests.. #shame
-
-def _get_good_connection():
- connections = {}
- connections["streams_publishes"] = [
- {"name" : "test_n",
- "location" : "test_l",
- "client_role" : "test_cr",
- "type" : "message_router",
- "config_key" : "test_c",
- "aaf_username": "test_u",
- "aaf_password": "test_p"
- },
- {"name" : "test_n2",
- "location" : "test_l",
- "client_role" : "test_cr",
- "type" : "message_router",
- "config_key" : "test_c",
- "aaf_username": "test_u",
- "aaf_password": "test_p"
- },
- {"name" : "test_feed00",
- "location" : "test_l",
- "type" : "data_router",
- "config_key" : "mydrconfigkey"
- }
- ]
- connections["streams_subscribes"] = [
- {"name" : "test_n",
- "location" : "test_l",
- "client_role" : "test_cr",
- "type" : "message_router",
- "config_key" : "test_c",
- "aaf_username": "test_u",
- "aaf_password": "test_p"
- },
- {"name" : "test_n2",
- "location" : "test_l",
- "client_role" : "test_cr",
- "type" : "message_router",
- "config_key" : "test_c",
- "aaf_username": "test_u",
- "aaf_password": "test_p"
- }
- ]
- return connections
-
-def test_validate_cons():
- #test good streams
- good_conn = _get_good_connection()
- _validate_conns(good_conn)
-
-    #mutate each required piece and check that validation fails
-    #(note: dict.pop returns the removed value, so mutate first, then validate the dict)
-    nosub = _get_good_connection()
-    nosub.pop("streams_subscribes")
-    with pytest.raises(BadConnections) as excinfo:
-        _validate_conns(nosub)
-
-    nopub = _get_good_connection()
-    nopub.pop("streams_publishes")
-    with pytest.raises(BadConnections) as excinfo:
-        _validate_conns(nopub)
-
-    noloc = _get_good_connection()
-    noloc["streams_publishes"][0].pop("location")
-    with pytest.raises(BadConnections) as excinfo:
-        _validate_conns(noloc)
-
-def test_nonr_dec():
- def blow():
- d = {}
- d["emptyinside"] += 1
- return d
- #apply decorator
- blow = try_raise_nonr(blow)
- with pytest.raises(NonRecoverableError):
- blow()
-
- def work():
- return 666
- work = try_raise_nonr(work)
- assert work() == 666
-
diff --git a/cdap/cdapplugin/tests/test_discovery.py b/cdap/cdapplugin/tests/test_discovery.py
deleted file mode 100644
index 7354f4e..0000000
--- a/cdap/cdapplugin/tests/test_discovery.py
+++ /dev/null
@@ -1,114 +0,0 @@
-# org.onap.dcae
-# ================================================================================
-# Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-from cdapcloudify import get_module_logger
-from cdapcloudify import discovery
-import pytest
-import requests
-import collections
-import json
-
-logger = get_module_logger(__name__)
-
-_TEST_BROKER_NAME = "test_broker"
-_TEST_SCN = "test_scn"
-
-
-class FakeResponse:
- def __init__(self, status_code, text, json = {}):
- self.status_code = status_code
-        self.json = json #this is kind of misleading as the broker doesn't return the input as output but this cheat makes testing easier
- self.text = text
-
-def _fake_putpost(url, json, headers):
- return FakeResponse(status_code = 200,
- json = json,
- text = "URL: {0}, headers {1}".format(url, headers))
-
-def _fake_delete(url):
- return FakeResponse(status_code = 200, text = "URL: {0}".format(url))
-
-def _fake_get_broker_url(cdap_broker_name, service_component_name, logger):
- return "http://{ip}:{port}/application/{appname}".format(ip="666.666.666.666", port="666", appname=service_component_name)
-
-def test_put_broker(monkeypatch):
- monkeypatch.setattr('requests.put', _fake_putpost)
- monkeypatch.setattr('cdapcloudify.discovery._get_broker_url', _fake_get_broker_url)
- R = discovery.put_broker(
- _TEST_BROKER_NAME,
- _TEST_SCN,
- "test_ns",
- "test_sn",
- "test_ju",
- "test_an",
- "test_av",
- "test_ac",
- "test_ap",
- "test_se",
- "test_p",
- "test_pp",
- logger = logger)
-
- assert R.text == "URL: http://666.666.666.666:666/application/test_scn, headers {'content-type': 'application/json'}"
- assert R.json == {'app_preferences': 'test_ap', 'services': 'test_se', 'namespace': 'test_ns', 'programs': 'test_p', 'cdap_application_type': 'program-flowlet', 'app_config': 'test_ac', 'streamname': 'test_sn', 'program_preferences': 'test_pp', 'artifact_name': 'test_an', 'jar_url': 'test_ju', 'artifact_version': 'test_av'}
- assert R.status_code == 200
-
-def test_reconfigure_in_broker(monkeypatch):
- monkeypatch.setattr('requests.put', _fake_putpost)
- monkeypatch.setattr('cdapcloudify.discovery._get_broker_url', _fake_get_broker_url)
- R = discovery.reconfigure_in_broker(
- _TEST_BROKER_NAME,
- _TEST_SCN,
- {"redome" : "baby"},
- "program-flowlet-app-config",
- logger = logger)
- assert R.text == "URL: http://666.666.666.666:666/application/test_scn/reconfigure, headers {'content-type': 'application/json'}"
- assert R.json == {'reconfiguration_type': 'program-flowlet-app-config', 'config': {'redome': 'baby'}}
- assert R.status_code == 200
-
-def test_delete_on_broker(monkeypatch):
- monkeypatch.setattr('requests.delete', _fake_delete)
- monkeypatch.setattr('cdapcloudify.discovery._get_broker_url', _fake_get_broker_url)
- R = discovery.delete_on_broker(
- _TEST_BROKER_NAME,
- _TEST_SCN,
- logger = logger)
- assert R.text == "URL: http://666.666.666.666:666/application/test_scn"
- assert R.status_code == 200
-
-def test_multi_delete(monkeypatch):
- pretend_appnames = ['yo1', 'yo2']
-
- def fake_get(url):
- #return a fake list of app names
- return FakeResponse(status_code = 200,
- text = json.dumps(pretend_appnames))
- def fake_get_connection_info_from_consul(broker_name, logger):
- return "666.666.666.666", "666"
-
- monkeypatch.setattr('requests.get', fake_get)
- monkeypatch.setattr('cdapcloudify.discovery._get_connection_info_from_consul', fake_get_connection_info_from_consul)
- monkeypatch.setattr('requests.post', _fake_putpost)
- R = discovery.delete_all_registered_apps(
- _TEST_BROKER_NAME,
- logger = logger)
-
- assert R.text == "URL: http://666.666.666.666:666/application/delete, headers {'content-type': 'application/json'}"
- assert R.status_code == 200
- assert R.json == {'appnames': pretend_appnames}
diff --git a/cdap/cdapplugin/tox-local.ini b/cdap/cdapplugin/tox-local.ini
deleted file mode 100644
index d14c8a1..0000000
--- a/cdap/cdapplugin/tox-local.ini
+++ /dev/null
@@ -1,30 +0,0 @@
-# tox -c tox-local.ini
-[tox]
-envlist = py27,py36,cov
-
-[testenv]
-# coverage can only find modules if pythonpath is set
-setenv=
- PYTHONPATH={toxinidir}
- COVERAGE_FILE=.coverage.{envname}
-deps=
- -rrequirements.txt
- pytest
- coverage
- pytest-cov
-commands=
- coverage erase
- pytest --junitxml xunit-results.{envname}.xml --cov cdapcloudify
-
-[testenv:cov]
-skip_install = true
-deps=
- coverage
-setenv=
- COVERAGE_FILE=.coverage
-commands=
- coverage combine
- coverage html
-
-[pytest]
-junit_family = xunit2
diff --git a/cdap/cdapplugin/tox.ini b/cdap/cdapplugin/tox.ini
deleted file mode 100644
index 5c399a7..0000000
--- a/cdap/cdapplugin/tox.ini
+++ /dev/null
@@ -1,29 +0,0 @@
-[tox]
-envlist = py27,py36,cov
-
-[testenv]
-# coverage can only find modules if pythonpath is set
-setenv=
- PYTHONPATH={toxinidir}
- COVERAGE_FILE=.coverage.{envname}
-deps=
- -rrequirements.txt
- pytest
- coverage
- pytest-cov
-commands=
- coverage erase
- pytest --junitxml xunit-results.{envname}.xml --cov cdapcloudify
-
-[testenv:cov]
-skip_install = true
-deps=
- coverage
-setenv=
- COVERAGE_FILE=.coverage
-commands=
- coverage combine
- coverage xml
-
-[pytest]
-junit_family = xunit2
diff --git a/cdap/demo_blueprints/cdap_hello_world.yaml b/cdap/demo_blueprints/cdap_hello_world.yaml
deleted file mode 100644
index e154cf7..0000000
--- a/cdap/demo_blueprints/cdap_hello_world.yaml
+++ /dev/null
@@ -1,68 +0,0 @@
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-tosca_definitions_version: cloudify_dsl_1_3
-
-imports:
- - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/cdap/14.2.5/cdap_types.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/dcaepolicy/1.0.0/node-type.yaml
-
-inputs:
- hello_world_jar_url:
- type: string
- connected_broker_dns_name:
- type: string
- default : "cdap_broker"
-
-node_templates:
- hw_app_policy_test:
- type: dcae.nodes.policy
- properties:
- policy_id : DCAE_alex.Config_test_cdap_policy
-
- hw_cdap_app:
- type: dcae.nodes.MicroService.cdap
- properties:
- service_component_type:
- 'hello_world'
- jar_url: { get_input : hello_world_jar_url }
- artifact_name: "HelloWorld"
- artifact_version: "3.4.3"
- namespace: "cloudifyhwtest"
- programs:
- [{"program_type" : "flows", "program_id" : "WhoFlow"}, {"program_type" : "services", "program_id" : "Greeting"}]
- streamname:
- 'who'
- service_endpoints:
- [{"service_name" : "Greeting", "service_endpoint" : "greet", "endpoint_method" : "GET"}]
- app_config: {"foo" : "you should never see this; it should be overwritten by policy"}
- app_preferences: {"foo_updated" : "you should never see this; it should be overwritten by policy"}
-
- interfaces:
- cloudify.interfaces.lifecycle:
- create:
- inputs:
- connected_broker_dns_name: { get_input: connected_broker_dns_name }
- relationships:
- - target: hw_app_policy_test
- type: cloudify.relationships.depends_on
-
-outputs:
- hw_cdap_app_name:
- value:
- {get_attribute:[hw_cdap_app, service_component_name]}
diff --git a/cdap/demo_blueprints/cdap_hello_world_reconfigure.sh b/cdap/demo_blueprints/cdap_hello_world_reconfigure.sh
deleted file mode 100755
index 7731160..0000000
--- a/cdap/demo_blueprints/cdap_hello_world_reconfigure.sh
+++ /dev/null
@@ -1,21 +0,0 @@
-#!/bin/bash
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-cfy executions start -d cdap_hello_world -w execute_operation -p '{"operation" : "reconfiguration.app_config_reconfigure", "node_ids" : ["hw_cdap_app"], "operation_kwargs" : {"new_config_template" : {"foo":"bar-manual-update"}}, "allow_kwargs_override": true}'
-cfy executions start -d cdap_hello_world -w execute_operation -p '{"operation" : "reconfiguration.app_preferences_reconfigure", "node_ids" : ["hw_cdap_app"], "operation_kwargs" : {"new_config_template" : {"foo_updated":"foo-pref-manual-update"}}, "allow_kwargs_override": true}'
-cfy executions start -d cdap_hello_world -w execute_operation -p '{"operation" : "reconfiguration.app_smart_reconfigure", "node_ids" : ["hw_cdap_app"], "operation_kwargs" : {"new_config_template" : {"foo_updated":"SO SMARTTTTTT", "foo":"SO SMART AGAINNNNN"}}, "allow_kwargs_override": true}'
diff --git a/cdap/demo_blueprints/cdap_hello_world_with_dmaap.yaml b/cdap/demo_blueprints/cdap_hello_world_with_dmaap.yaml
deleted file mode 100644
index 11c6f75..0000000
--- a/cdap/demo_blueprints/cdap_hello_world_with_dmaap.yaml
+++ /dev/null
@@ -1,165 +0,0 @@
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-tosca_definitions_version: cloudify_dsl_1_3
-
-imports:
- - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/dmaap/1.1.0/dmaap.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/cdap/14.2.5/cdap_types.yaml
-
-inputs:
- hello_world_jar_url:
- type: string
- connected_broker_dns_name:
- type: string
- default : "cdap_broker"
-
- #aaf inputs
- client_role:
- type: string
- topic00fqtn:
- type: string
- topic01fqtn:
- type: string
- aafu0:
- type: string
- default: "foo0"
- aafp0:
- type: string
- default: "bar0"
- aafu1:
- type: string
- default : "foo1"
- aafp1:
- type: string
- default : "bar1"
- aafu2:
- type: string
- default: "foo2"
- aafp2:
- type: string
- default: "bar2"
- aafu3:
- type: string
- default : "foo3"
- aafp3:
- type: string
- default : "bar3"
-
-node_templates:
- topic00:
- type: dcae.nodes.ExistingTopic
- properties:
- fqtn: { get_input : topic00fqtn }
- topic01:
- type: dcae.nodes.ExistingTopic
- properties:
- fqtn: { get_input : topic01fqtn }
- feed00:
- type: dcae.nodes.Feed
- properties:
- feed_name: "FEEDME-12"
- feed_description: "Tommy Test feed for CDAP Publishes"
- feed_version: 6.6.6
- aspr_classification: "unclassified"
-
- hw_cdap_app:
- type: dcae.nodes.MicroService.cdap
- properties:
- service_component_type: 'hello_world'
- jar_url: { get_input : hello_world_jar_url }
- artifact_name: "HelloWorld"
- artifact_version: "3.4.3"
- namespace: "cloudifyhwtest"
- programs:
- [{"program_type" : "flows", "program_id" : "WhoFlow"}, {"program_type" : "services", "program_id" : "Greeting"}]
- streamname:
- 'who'
- service_endpoints:
- [{"service_name" : "Greeting", "service_endpoint" : "greet", "endpoint_method" : "GET"}]
-
- #special key for CDAP plugin
- connections:
- streams_publishes:
- - name: topic00 #MR pub 1
- location: mtc5
- client_role: { get_input: client_role }
- type: message_router
- config_key: "myconfigkey0"
- aaf_username: { get_input: aafu0 }
- aaf_password: { get_input: aafp0 }
- - name: topic01 #MR pub 2
- location: mtc5
- client_role: { get_input: client_role }
- type: message_router
- config_key: "myconfigkey1"
- aaf_username: { get_input: aafu1 }
- aaf_password: { get_input: aafp1 }
- - name: feed00 #Feed pub 1
- location: mtc5
- type: data_router
- config_key: "mydrconfigkey"
- streams_subscribes:
- - name: topic00 #MEANT FOR DEMO ONLY! Subscribing and publishing to same topic. Not real example.
- location: mtc5
- client_role: { get_input: client_role }
- type: message_router
- config_key: "myconfigkey2"
- aaf_username: { get_input: aafu2 }
- aaf_password: { get_input: aafp2 }
- - name: topic01
- location: mtc5
- client_role: { get_input: client_role }
- type: message_router
- config_key: "myconfigkey3"
- aaf_username: { get_input: aafu3 }
- aaf_password: { get_input: aafp3 }
-
- relationships:
- - type: dcae.relationships.publish_events
- target: topic00 #MEANT FOR DEMO ONLY! Subscribing and publishing to same topic. Not real example.
- - type: dcae.relationships.publish_events
- target: topic01
- - type: dcae.relationships.subscribe_to_events
- target: topic00
- - type: dcae.relationships.subscribe_to_events
- target: topic01
- - type: dcae.relationships.publish_files
- target: feed00
-
- interfaces:
- cloudify.interfaces.lifecycle:
- create:
- inputs:
- connected_broker_dns_name: { get_input: connected_broker_dns_name }
-
-outputs:
- hw_cdap_app_name:
- value: {get_attribute:[hw_cdap_app, service_component_name]}
-
- topic00_data:
- description: "Topic 00 data"
- value: { get_attribute: [hw_cdap_app, topic00]}
-
- topic01_data:
- description: "Topic 01 data"
- value: { get_attribute: [hw_cdap_app, topic01]}
-
-
-
-
diff --git a/cdap/demo_blueprints/cdap_hello_world_with_laika.yaml b/cdap/demo_blueprints/cdap_hello_world_with_laika.yaml
deleted file mode 100644
index fc84f8b..0000000
--- a/cdap/demo_blueprints/cdap_hello_world_with_laika.yaml
+++ /dev/null
@@ -1,97 +0,0 @@
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-tosca_definitions_version: cloudify_dsl_1_3
-
-imports:
- - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/cdap/14.2.5/cdap_types.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/docker/2.3.0/node-type.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/relationship/1.0.0/node-type.yaml
-
-inputs:
- hello_world_jar_url:
- type: string
- laika_image:
- type: string
- connected_broker_dns_name:
- type: string
- default : "cdap_broker"
-
-node_templates:
-
- hw_cdap_app:
- type: dcae.nodes.MicroService.cdap
- properties:
- service_component_type:
- 'hello_world'
- jar_url: { get_input : hello_world_jar_url }
- artifact_name: "HelloWorld"
- artifact_version: "3.4.3"
- namespace: "cloudifyhwtest"
- programs:
- [{"program_type" : "flows", "program_id" : "WhoFlow"}, {"program_type" : "services", "program_id" : "Greeting"}]
- streamname:
- 'who'
- service_endpoints:
- [{"service_name" : "Greeting", "service_endpoint" : "greet", "endpoint_method" : "GET"}]
-
- connections:
- services_calls:
- - service_component_type: laika
- config_key: "laika_handle"
-
- relationships:
- - type: dcae.relationships.component_connected_to
- target: laika-one
-
- interfaces:
- cloudify.interfaces.lifecycle:
- create:
- inputs:
- connected_broker_dns_name: { get_input: connected_broker_dns_name }
-
- laika-one:
- type: dcae.nodes.DockerContainerForComponents
- properties:
- service_component_type: 'laika'
- service_id: 'this_is_dumb'
- location_id: 'this_is_dumb'
- image: { get_input : laika_image }
- # Trying without health check
- relationships:
- - type: dcae.relationships.component_contained_in
- target: docker_host
- interfaces:
- cloudify.interfaces.lifecycle:
- stop:
- inputs:
- cleanup_image:
- False
-
- docker_host:
- type: dcae.nodes.SelectedDockerHost
- properties:
- location_id: 'this_is_dumb'
- docker_host_override: 'platform_dockerhost'
-
-outputs:
- hw_cdap_app_name:
- value: {get_attribute:[hw_cdap_app, service_component_name]}
-
-
-
diff --git a/cdap/demo_blueprints/cdap_hello_world_with_mr.yaml b/cdap/demo_blueprints/cdap_hello_world_with_mr.yaml
deleted file mode 100644
index f52edad..0000000
--- a/cdap/demo_blueprints/cdap_hello_world_with_mr.yaml
+++ /dev/null
@@ -1,151 +0,0 @@
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-tosca_definitions_version: cloudify_dsl_1_3
-
-imports:
- - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/cdap/14.2.5/cdap_types.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/dmaap/1.1.0/dmaap.yaml
-
-inputs:
- hello_world_jar_url:
- type: string
- connected_broker_dns_name:
- type: string
- default : "cdap_broker"
-
- #aaf inputs
- client_role:
- type: string
- topic00fqtn:
- type: string
- topic01fqtn:
- type: string
- aafu0:
- type: string
- default: "foo0"
- aafp0:
- type: string
- default: "bar0"
- aafu1:
- type: string
- default : "foo1"
- aafp1:
- type: string
- default : "bar1"
- aafu2:
- type: string
- default: "foo2"
- aafp2:
- type: string
- default: "bar2"
- aafu3:
- type: string
- default : "foo3"
- aafp3:
- type: string
- default : "bar3"
-
-node_templates:
- topic00:
- type: dcae.nodes.ExistingTopic
- properties:
- fqtn: { get_input : topic00fqtn }
-
- topic01:
- type: dcae.nodes.ExistingTopic
- properties:
- fqtn: { get_input : topic01fqtn }
-
- hw_cdap_app:
- type: dcae.nodes.MicroService.cdap
- properties:
- service_component_type:
- 'hello_world'
- jar_url: { get_input : hello_world_jar_url }
- artifact_name: "HelloWorld"
- artifact_version: "3.4.3"
- namespace: "cloudifyhwtest"
- programs:
- [{"program_type" : "flows", "program_id" : "WhoFlow"}, {"program_type" : "services", "program_id" : "Greeting"}]
- streamname:
- 'who'
- service_endpoints:
- [{"service_name" : "Greeting", "service_endpoint" : "greet", "endpoint_method" : "GET"}]
-
- #special key for CDAP plugin
- connections:
- streams_publishes:
- - name: topic00 #MR pub 1
- location: mtc5
- client_role: { get_input: client_role }
- type: message_router
- config_key: "myconfigkey0"
- aaf_username: { get_input: aafu0 }
- aaf_password: { get_input: aafp0 }
- - name: topic01 #MR pub 2
- location: mtc5
- client_role: { get_input: client_role }
- type: message_router
- config_key: "myconfigkey1"
- aaf_username: { get_input: aafu1 }
- aaf_password: { get_input: aafp1 }
- streams_subscribes:
- - name: topic00 #MEANT FOR DEMO ONLY! Subscribing and publishing to same topic. Not real example.
- location: mtc5
- client_role: { get_input: client_role }
- type: message_router
- config_key: "myconfigkey2"
- aaf_username: { get_input: aafu2 }
- aaf_password: { get_input: aafp2 }
- - name: topic01
- location: mtc5
- client_role: { get_input: client_role }
- type: message_router
- config_key: "myconfigkey3"
- aaf_username: { get_input: aafu3 }
- aaf_password: { get_input: aafp3 }
-
- relationships:
- - type: dcae.relationships.publish_events
- target: topic00 #MEANT FOR DEMO ONLY! Subscribing and publishing to same topic. Not real example.
- - type: dcae.relationships.publish_events
- target: topic01
- - type: dcae.relationships.subscribe_to_events
- target: topic00
- - type: dcae.relationships.subscribe_to_events
- target: topic01
-
- interfaces:
- cloudify.interfaces.lifecycle:
- create:
- inputs:
- connected_broker_dns_name: { get_input: connected_broker_dns_name }
-
-outputs:
- hw_cdap_app_name:
- value: {get_attribute:[hw_cdap_app, service_component_name]}
-
- topic00_data:
- description: "Topic 00 data"
- value: { get_attribute: [hw_cdap_app, topic00]}
-
- topic01_data:
- description: "Topic 01 data"
- value: { get_attribute: [hw_cdap_app, topic01]}
-
diff --git a/cdap/pom.xml b/cdap/pom.xml
deleted file mode 100644
index 6c34227..0000000
--- a/cdap/pom.xml
+++ /dev/null
@@ -1,167 +0,0 @@
-<?xml version="1.0"?>
-<!--
-================================================================================
-Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-================================================================================
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-============LICENSE_END=========================================================
-
-ECOMP is a trademark and service mark of AT&T Intellectual Property.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
- <modelVersion>4.0.0</modelVersion>
- <parent>
- <groupId>org.onap.dcaegen2.platform</groupId>
- <artifactId>plugins</artifactId>
- <version>1.2.0-SNAPSHOT</version>
- </parent>
- <groupId>org.onap.dcaegen2.platform.plugins</groupId>
- <artifactId>cdap</artifactId>
- <name>cdap-plugin</name>
- <version>1.2.0-SNAPSHOT</version>
- <url>http://maven.apache.org</url>
-
- <properties>
- <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
- <sonar.sources>.</sonar.sources>
- <sonar.modules>cdapplugin</sonar.modules>
- <cdapplugin.sonar.junit.reportsPath>xunit-results.xml</cdapplugin.sonar.junit.reportsPath>
- <cdapplugin.sonar.python.coverage.reportPath>coverage.xml</cdapplugin.sonar.python.coverage.reportPath>
- <sonar.language>py</sonar.language>
- <sonar.pluginname>Python</sonar.pluginname>
- <cdapplugin.sonar.inclusions>**/*.py</cdapplugin.sonar.inclusions>
- <cdapplugin.sonar.exclusions>tests/*,setup.py</cdapplugin.sonar.exclusions>
- </properties>
- <build>
- <finalName>${project.artifactId}-${project.version}</finalName>
- <plugins>
- <!-- plugin>
- <artifactId>maven-assembly-plugin</artifactId>
- <version>2.4.1</version>
- <configuration>
- <descriptors>
- <descriptor>assembly/dep.xml</descriptor>
- </descriptors>
- </configuration>
- <executions>
- <execution>
- <id>make-assembly</id>
- <phase>package</phase>
- <goals>
- <goal>single</goal>
- </goals>
- </execution>
- </executions>
- </plugin -->
- <!-- now we configure custom action (calling a script) at various lifecycle phases -->
- <plugin>
- <groupId>org.codehaus.mojo</groupId>
- <artifactId>exec-maven-plugin</artifactId>
- <version>1.2.1</version>
- <executions>
- <execution>
- <id>clean phase script</id>
- <phase>clean</phase>
- <goals>
- <goal>exec</goal>
- </goals>
- <configuration>
- <arguments>
- <argument>${project.artifactId}</argument>
- <argument>clean</argument>
- </arguments>
- </configuration>
- </execution>
- <execution>
- <id>generate-sources script</id>
- <phase>generate-sources</phase>
- <goals>
- <goal>exec</goal>
- </goals>
- <configuration>
- <arguments>
- <argument>${project.artifactId}</argument>
- <argument>generate-sources</argument>
- </arguments>
- </configuration>
- </execution>
- <execution>
- <id>compile script</id>
- <phase>compile</phase>
- <goals>
- <goal>exec</goal>
- </goals>
- <configuration>
- <arguments>
- <argument>${project.artifactId}</argument>
- <argument>compile</argument>
- </arguments>
- </configuration>
- </execution>
- <execution>
- <id>package script</id>
- <phase>package</phase>
- <goals>
- <goal>exec</goal>
- </goals>
- <configuration>
- <arguments>
- <argument>${project.artifactId}</argument>
- <argument>package</argument>
- </arguments>
- </configuration>
- </execution>
- <execution>
- <id>test script</id>
- <phase>test</phase>
- <goals>
- <goal>exec</goal>
- </goals>
- <configuration>
- <arguments>
- <argument>${project.artifactId}</argument>
- <argument>test</argument>
- </arguments>
- </configuration>
- </execution>
- <execution>
- <id>install script</id>
- <phase>install</phase>
- <goals>
- <goal>exec</goal>
- </goals>
- <configuration>
- <arguments>
- <argument>${project.artifactId}</argument>
- <argument>install</argument>
- </arguments>
- </configuration>
- </execution>
- <execution>
- <id>deploy script</id>
- <phase>deploy</phase>
- <goals>
- <goal>exec</goal>
- </goals>
- <configuration>
- <arguments>
- <argument>${project.artifactId}</argument>
- <argument>deploy</argument>
- </arguments>
- </configuration>
- </execution>
- </executions>
- </plugin>
- </plugins>
- </build>
-</project>
diff --git a/docker/.coveragerc b/docker/.coveragerc
deleted file mode 100644
index 088c2da..0000000
--- a/docker/.coveragerc
+++ /dev/null
@@ -1,21 +0,0 @@
-# .coveragerc to control coverage.py
-[run]
-branch = True
-
-[report]
-# Regexes for lines to exclude from consideration
-exclude_lines =
- # Have to re-enable the standard pragma
- pragma: no cover
-
- # Don't complain about missing debug-only code:
- def __repr__
- if self\.debug
-
- # Don't complain if tests don't hit defensive assertion code:
- raise AssertionError
- raise NotImplementedError
-
- # Don't complain if non-runnable code isn't run:
- if 0:
- if __name__ == .__main__.:
diff --git a/docker/.gitignore b/docker/.gitignore
deleted file mode 100644
index 8f0f9ba..0000000
--- a/docker/.gitignore
+++ /dev/null
@@ -1,68 +0,0 @@
-cfyhelper
-.cloudify
-*.swp
-*.swn
-*.swo
-.DS_Store
-.project
-.pydevproject
-venv
-
-
-# Byte-compiled / optimized / DLL files
-__pycache__/
-*.py[cod]
-
-# C extensions
-*.so
-
-# Distribution / packaging
-.Python
-env/
-build/
-develop-eggs/
-dist/
-downloads/
-eggs/
-.eggs/
-lib/
-lib64/
-parts/
-sdist/
-var/
-*.egg-info/
-.installed.cfg
-*.egg
-
-# PyInstaller
-# Usually these files are written by a python script from a template
-# before PyInstaller builds the exe, so as to inject date/other infos into it.
-*.manifest
-*.spec
-
-# Installer logs
-pip-log.txt
-pip-delete-this-directory.txt
-
-# Unit test / coverage reports
-htmlcov/
-.tox/
-.coverage
-.coverage.*
-.cache
-nosetests.xml
-coverage.xml
-*,cover
-
-# Translations
-*.mo
-*.pot
-
-# Django stuff:
-*.log
-
-# Sphinx documentation
-docs/_build/
-
-# PyBuilder
-target/
diff --git a/docker/ChangeLog.md b/docker/ChangeLog.md
deleted file mode 100644
index 7b27363..0000000
--- a/docker/ChangeLog.md
+++ /dev/null
@@ -1,81 +0,0 @@
-# Change Log
-
-All notable changes to this project will be documented in this file.
-
-The format is based on [Keep a Changelog](http://keepachangelog.com/)
-and this project adheres to [Semantic Versioning](http://semver.org/).
-
-## [3.3.0]
-* DCAEGEN2-1956 support python3 in all plugins
-
-## [3.2.1]
-* DCAEGEN2-1086 update onap-dcae-dcaepolicy-lib version to avoid Consul storing entries under the old service_component_name
-
-## [3.2.0]
-
-* Change requirements.txt to use a version range for dcaepolicylib
-* DCAEGEN2-442
-
-## [3.1.0]
-
-* DCAEGEN2-415 - Change requirements.txt to use dcaepolicy 2.3.0. *Apparently* this constitutes a version bump.
-
-## [3.0.0]
-
-* Update docker plugin to use dcaepolicy 2.1.0. This involved all sorts of updates in how policy is expected to work for the component; the updates are not backwards compatible.
-
-## [2.4.0]
-
-* Change *components* to be policy reconfigurable:
- - Add policy execution operation
- - Add policy decorators to task so that application configuration will be merged with policy
-* Fetch Docker logins from Consul
-
-## [2.3.0+t.0.3]
-
-* Enhance `SelectedDockerHost` node type with `name_search` and add default to `docker_host_override`
-* Implement the functionality in the `select_docker_host` task to query Consul given location id and name search
-* Deprecate `location_id` on the `DockerContainerForComponents*` node types
-* Change `service_id` to be optional for `DockerContainerForComponents*` node types
-* Add deployment id as a tag for registration on the component
-
-## [2.3.0]
-
-* Rip out dockering and use common python-dockering library
- - Using 1.2.0 of python-dockering supports Docker exec based health checks
-* Support mapping ports and volumes when provided in docker config
-
-## [2.2.0]
-
-* Add `dcae.nodes.DockerContainerForComponentsUsingDmaap` node type and parse streams_publishes and streams_subscribes to be used by the DMaaP plugin.
- - Handle message router wiring in the create operation for components
- - Handle data router wiring in the create and in the start operation for components
-* Refactor the create operations and the start operations for components. Refactored to be functional to enable for better unit test coverage.
-* Add decorators for common cross cutting functionality
-* Add example blueprints for different dmaap cases
-
-## [2.1.0]
-
-* Add the node type `DockerContainerForPlatforms` which is intended for platform services who are to have well known names and ports
-* Add backdoor for `DockerContainerForComponents` to statically map ports
-* Add hack fix to allow this plugin access to the research nexus
-* Add support for dns through the local Consul agent
-* Free this plugin from the CentOS bondage
-
-## [2.0.0]
-
-* Remove the magic env.ini code. It's no longer needed because we are now running local agents of Consul.
-* Save and use the docker container id
-* `DockerContainer` is now a different node type that is much simpler than `DockerContainerForComponents`. It is targeted for the use case of registrator. This involved overhauling the create and start container functionality.
-* Classify connection and docker host not found error as recoverable
-* Apply CONSUL_HOST to point to the local Consul agent
-
-## [1.0.0]
-
-* Implement health checks - expose health checks on the node and register Docker containers with it. Note that health checks are currently optional.
-* Add option to remove images in the stop operation
-* Verify that the container is running and healthy before finishing the start operation
-* Image names passed in are now required to be the fully tagged names including registry
-* Remove references to rework in the code namespaces
-* Application configuration is now a YAML map to accommodate future blueprint generation
-* Update blueprints and cfyhelper.sh
diff --git a/docker/LICENSE.txt b/docker/LICENSE.txt
deleted file mode 100644
index cb8008a..0000000
--- a/docker/LICENSE.txt
+++ /dev/null
@@ -1,32 +0,0 @@
-============LICENSE_START=======================================================
-org.onap.dcae
-================================================================================
-Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-================================================================================
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-============LICENSE_END=========================================================
-
-ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-
-Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-===================================================================
-Licensed under the Creative Commons License, Attribution 4.0 Intl. (the "License");
-you may not use this documentation except in compliance with the License.
-You may obtain a copy of the License at
- https://creativecommons.org/licenses/by/4.0/
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
diff --git a/docker/README.md b/docker/README.md
deleted file mode 100644
index 6a9ce70..0000000
--- a/docker/README.md
+++ /dev/null
@@ -1,214 +0,0 @@
-# docker-cloudify
-
-This repository contains Cloudify artifacts used to orchestrate the deployment of Docker containers. See the example blueprints in the [`examples` directory](examples).
-
-More details about what is expected from Docker components can be found in the DCAE ONAP documentation.
-
-## Pre-requisites
-
-### Docker logins
-
-The Docker plugin requires a key-value entry in Consul that holds all the Docker login credentials needed to access remote registries. The expected key is `docker_plugin/docker_logins` and the corresponding value is a JSON array of JSON objects:
-
-```json
-[
- { "username": "bob", "password": "123456", "registry": "this-docker-registry.com" },
- { "username": "jane", "password": "7890ab", "registry": "that-docker-registry.com" }
-]
-```
-
-If there are no required Docker logins, set the value to an empty list: `[]`.
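-
-As a rough illustration only (not part of the plugin), the entry could be seeded with the `consul` Python client that the plugin itself uses; the host, port, and credentials below are placeholders:
-
-```python
-import json
-
-import consul
-
-logins = [
-    {"username": "bob", "password": "123456", "registry": "this-docker-registry.com"},
-]
-
-# "consul" resolves to the local Consul agent (see the DNS note below)
-c = consul.Consul(host="consul", port=8500)
-c.kv.put("docker_plugin/docker_logins", json.dumps(logins))
-```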
-
-### "consul" DNS query
-
-The Docker plugin assumes that the DNS query for "consul" will resolve. Make sure the Cloudify installation includes any steps (e.g. adding a line to `/etc/hosts`) to ensure this.
-
-## Input parameters
-
-### start
-
-These input parameters are for the `start` operation of `cloudify.interfaces.lifecycle` and are passed into the variant task operations `create_and_start_container*`.
-
-#### `envs`
-
-A map of environment variables to be forwarded to the Docker container. Example:
-
-```yaml
-envs:
- EXTERNAL_IP: '10.100.1.99'
-```
-
-These environment variables will be forwarded in addition to the *platform-related* environment variables like `CONSUL_HOST`.
-
-#### `volumes`
-
-List of maps used for setting up Docker volume mounts. Example:
-
-```yaml
-volumes:
- - host:
- path: '/var/run/docker.sock'
- container:
- bind: '/tmp/docker.sock'
- mode: 'ro'
-```
-
-This information is passed into the [`docker-py` create container call](http://docker-py.readthedocs.io/en/1.10.6/volumes.html).
-
-key | description
---- | -----------
-path | Full path to the file or directory on the host machine to be mounted
-bind | Full path to the file or directory in the container where the volume should be mounted to
-mode | Access mode: `ro` (read-only) or `rw` (read-write)
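-
-The plugin itself drives Docker through the `python-dockering` library, but as a rough sketch, here is what such a volume entry maps to in a raw docker-py 1.x call (the host URL and image below are placeholders):
-
-```python
-import docker
-
-client = docker.Client(base_url="tcp://some-docker-host:2376")
-container = client.create_container(
-    image="some-registry/some-image:1.0.0",
-    volumes=["/tmp/docker.sock"],  # container-side mount point
-    host_config=client.create_host_config(binds={
-        "/var/run/docker.sock": {
-            "bind": "/tmp/docker.sock",
-            "mode": "ro",
-        },
-    }),
-)
-```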
-
-#### `ports`
-
-List of strings - used to bind container ports to host ports. Each item has the format `<container port>:<host port>`.
-
-Note that `DockerContainerForPlatforms` has the property pair `host_port` and `container_port`. This pair will be merged with the `ports` input parameter.
-
-```yaml
-ports:
- - '8000:8000'
-```
-
-Default is `None`.
-
-#### `max_wait`
-
-Integer - the number of seconds to wait for the Docker container to come up healthy before a `NonRecoverableError` is raised.
-
-```yaml
-max_wait:
- 60
-```
-
-Default is 300 seconds.
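-
-Conceptually the wait behaves like the sketch below, which assumes the `is_healthy` helper from this plugin's `discovery` module; the actual task implementation may differ:
-
-```python
-import time
-
-from dockerplugin import discovery as dis
-
-def wait_until_healthy(consul_host, instance, max_wait=300):
-    deadline = time.time() + max_wait
-    while time.time() < deadline:
-        # is_healthy asks Consul whether the instance's checks are passing
-        if dis.is_healthy(consul_host, instance):
-            return True
-        time.sleep(5)
-    # in the plugin this surfaces as a NonRecoverableError
-    raise RuntimeError("container did not come up healthy within max_wait")
-```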
-
-### stop
-
-These input parameters are for the `stop` `cloudify.interfaces.lifecycle` and are inputs into the task operation `stop_and_remove_container`.
-
-#### `cleanup_image`
-
-Boolean that controls whether to attempt to remove the associated Docker image (true) or not (false).
-
-```yaml
-cleanup_image:
- True
-```
-
-Default is false.
-
-## Using DMaaP
-
-The node type `dcae.nodes.DockerContainerForComponentsUsingDmaap` is intended to be used by components that use DMaaP and is expected to be connected to the DMaaP node types found in the DMaaP plugin.
-
-### Node properties
-
-The properties `streams_publishes` and `streams_subscribes` are both lists of objects that are used to create additional parameters passed into the DMaaP plugin.
-
-#### Message router
-
-For message router publishers and subscribers, the objects look like:
-
-```yaml
-name: topic00
-location: mtc5
-client_role: XXXX
-type: message_router
-```
-
-Where `name` is the node name of `dcae.nodes.Topic` or `dcae.nodes.ExistingTopic` that the Docker node is connecting with via the relationships `dcae.relationships.publish_events` for publishing and `dcae.relationships.subscribe_to_events` for subscribing.
-
-#### Data router
-
-For data router publishers, the object looks like:
-
-```yaml
-name: feed00
-location: mtc5
-type: data_router
-```
-
-Where `name` is the node name of `dcae.nodes.Feed` or `dcae.nodes.ExistingFeed` that the Docker node is connecting with via the relationships `dcae.relationships.publish_files`.
-
-For data router subscribers, the object looks like:
-
-```yaml
-name: feed00
-location: mtc5
-type: data_router
-username: king
-password: "123456"
-route: some-path
-scheme: https
-```
-
-Where the relationship to use is `dcae.relationships.subscribe_to_files`.
-
-If `username` and `password` are not provided, then the plugin will generate a username and password pair.
-
-`route` and `scheme` are parameters used in the dynamic construction of the delivery URL, which is passed to the DMaaP plugin when setting up the subscriber to the feed.
-
-`route` is the HTTP path endpoint of the subscriber that will handle files from the associated feed.
-
-`scheme` is either `http` or `https`. If not specified, then the plugin will default to `http`.
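-
-Illustratively, the delivery URL is assembled along these lines (the helper below is hypothetical, although `DEFAULT_SCHEME = "http"` does appear in the plugin's `tasks.py`):
-
-```python
-DEFAULT_SCHEME = "http"
-
-def build_delivery_url(host, port, route, scheme=None):
-    # e.g. http://subscriber-host:8080/some-path
-    return "{0}://{1}:{2}/{3}".format(scheme or DEFAULT_SCHEME, host, port, route)
-```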
-
-### Component configuration
-
-The DMaaP plugin is responsible for provisioning the feed/topic and storing the resulting DMaaP connection details in Consul. Here is an example:
-
-```json
-{
- "topic00": {
- "client_role": "XXXX",
- "client_id": "XXXX",
- "location": "XXXX",
- "topic_url": "https://some-topic-url.com/events/abc"
- }
-}
-```
-
-This is merged with the templatized application configuration:
-
-```json
-{
- "some-param": "Lorem ipsum dolor sit amet",
- "streams_subscribes": {
- "topic-alpha": {
- "type": "message_router",
- "aaf_username": "user-foo",
- "aaf_password": "password-bar",
- "dmaap_info": "<< topic00 >>"
- }
- },
- "streams_publishes": {},
- "services_calls": {}
-}
-```
-
-The result is the final application configuration:
-
-```json
-{
- "some-param": "Lorem ipsum dolor sit amet",
- "streams_subscribes": {
- "topic-alpha": {
- "type": "message_router",
- "aaf_username": "user-foo",
- "aaf_password": "password-bar",
- "dmaap_info": {
- "client_role": "XXXX",
- "client_id": "XXXX",
- "location": "XXXX",
- "topic_url": "https://some-topic-url.com/events/abc"
- }
- },
- }
- "streams_publishes": {},
- "services_calls": {}
-}
-```
-
-This also applies to data router feeds.
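-
-A minimal sketch of that merge step (the function below is illustrative, not the plugin's actual implementation):
-
-```python
-def merge_dmaap_info(app_config, dmaap_entries):
-    """Replace "<< name >>" placeholders in streams_publishes/streams_subscribes
-    with the connection details the DMaaP plugin stored in Consul."""
-    for section in ("streams_publishes", "streams_subscribes"):
-        for stream in app_config.get(section, {}).values():
-            ref = stream.get("dmaap_info")
-            if isinstance(ref, str) and ref.startswith("<<"):
-                stream["dmaap_info"] = dmaap_entries[ref.strip("<> ")]
-    return app_config
-```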
diff --git a/docker/docker-node-type.yaml b/docker/docker-node-type.yaml
deleted file mode 100644
index fba4ccb..0000000
--- a/docker/docker-node-type.yaml
+++ /dev/null
@@ -1,386 +0,0 @@
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-tosca_definitions_version: cloudify_dsl_1_3
-
-imports:
- - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
-
-plugins:
- docker:
- executor: 'central_deployment_agent'
- package_name: dockerplugin
- package_version: 3.3.0
-
-
-data_types:
- # NOTE: These data types were copied from the k8s node type in order to have
- # consistent node properties between docker and k8s. Perhaps we should make
- # these data_types common somehow?
- dcae.types.MSBRegistration:
- description: >
- Information for registering an HTTP service into MSB. It's optional to do so,
- but if MSB registration is desired at least the port property must be provided.
- If 'port' property is not provided, the plugin will not do the registration.
- (The properties all have to be declared as not required, otherwise the
- 'msb_registration' property on the node would also be required.)
- properties:
- port:
- description: The container port at which the service is exposed
- type: string
- required: false
- version:
- description: The version identifier for the service
- type: string
- required: false
- url_path:
- description: The URL path (e.g., "/api", not the full URL) to the service endpoint
- type: string
- required: false
- uses_ssl:
- description: Set to true if service endpoint uses SSL (TLS)
- type: boolean
- required: false
-
- dcae.types.LoggingInfo:
- description: >
- Information for setting up centralized logging via ELK using a "sidecar" container.
- If 'log_directory' is not provided, the plugin will not set up ELK logging.
- (The properties all have to be declared as not required, otherwise the
- 'log_info' property on the node would also be required.)
- properties:
- log_directory:
- description: >
- The path in the container where the component writes its logs.
- If the component is following the EELF requirements, this would be
- the directory where the four EELF files are being written.
- (Other logs can be placed in the directory--if their names end in '.log',
- they'll also be sent into ELK.)
- type: string
- required: false
- alternate_fb_path:
- description: >
- Hope not to use this. By default, the plugin will mount the log volume
- at /var/log/onap/<component_type> in the sidecar container's file system.
- 'alternate_fb_path' allows overriding the default. Will affect how the log
- data can be found in the ELK system.
- type: string
- required: false
-
-
-node_types:
- # The DockerContainerForComponents node type is to be used for DCAE service components that
-# are to be run in a Docker container. This node type goes beyond an ordinary Docker
-# plugin in that it has DCAE platform-specific functionality:
- #
- # * Generation of the service component name
- # * Managing of service component configuration information
- #
- # The Docker run command arguments are intentionally not visible. This node type is
-# not intended to be a generic all-purpose Docker container abstraction. It should be thought
-# of as an interface to how Docker containers are to be run in the rework context.
- dcae.nodes.DockerContainerForComponents:
- derived_from: cloudify.nodes.Root
- properties:
- service_component_type:
- type: string
- description: Service component type of the application being run in the container
-
- service_id:
- type: string
- description: >
- Unique id for this DCAE service instance this component belongs to. This value
- will be applied as a tag in the registration of this component with Consul.
- default: Null
-
- location_id:
- type: string
- description: >
- Location id of where to run the container.
- DEPRECATED - No longer used. Infer the location from the docker host service
- and/or node.
- default: Null
-
- service_component_name_override:
- type: string
- description: >
- Manually override and set the name for this Docker container node. If this
- is set, then the name will not be auto-generated. Platform services are the
- specific use case for this parameter because they have static
- names, for example the CDAP broker.
- default: Null
-
- application_config:
- default: {}
- description: >
- Application configuration for this Docker component. The data structure is
- expected to be a complex map (native YAML) and to be constructed and filled
- by the creator of the blueprint.
-
- docker_config:
- default: {}
- description: >
- This is the auxiliary portion of the component spec that contains things
- like healthcheck definitions for the Docker component. Health checks are
- optional.
-
- image:
- type: string
- description: Full uri of the Docker image
-
- # The following properties are copied from k8s node type to be consistent.
- # However, these properties are not currently being used within the docker
- # plugin.
- log_info:
- type: dcae.types.LoggingInfo
- description: >
- Information for setting up centralized logging via ELK.
- required: false
-
- replicas:
- type: integer
- description: >
- The number of instances of the component that should be launched initially
- default: 1
-
- always_pull_image:
- type: boolean
- description: >
- Set to true if the orchestrator should always pull a new copy of the image
- before deploying. By default the orchestrator pulls only if the image is
- not already present on the Docker host where the container is being launched.
- default: false
-
- interfaces:
- cloudify.interfaces.lifecycle:
- create:
- # Generate service component name and populate config into Consul
- implementation: docker.dockerplugin.create_for_components
- start:
- # Create Docker container and start
- implementation: docker.dockerplugin.create_and_start_container_for_components
- stop:
- # Stop and remove Docker container
- implementation: docker.dockerplugin.stop_and_remove_container
- delete:
- # Delete configuration from Consul
- implementation: docker.dockerplugin.cleanup_discovery
- dcae.interfaces.policy:
- # This is to be invoked by the policy handler upon policy updates
- policy_update:
- implementation: docker.dockerplugin.policy_update
-
-
- # This node type is intended for DCAE service components that use DMaaP and must use the
- # DMaaP plugin.
- dcae.nodes.DockerContainerForComponentsUsingDmaap:
- derived_from: dcae.nodes.DockerContainerForComponents
- properties:
- streams_publishes:
- description: >
- List of DMaaP streams used for publishing.
-
- Message router items look like:
-
- name: topic00
- location: mtc5
- client_role: XXXX
- type: message_router
-
- Data router items look like:
-
- name: feed00
- location: mtc5
- type: data_router
-
- This information is forwarded to the dmaap plugin to provision the topics/feeds.
- default: []
- streams_subscribes:
- description: >
- List of DMaaP streams used for subscribing.
-
- Message router items look like:
-
- name: topic00
- location: mtc5
- client_role: XXXX
- type: message_router
-
- Data router items look like:
-
- name: feed00
- location: mtc5
- type: data_router
- username: king
- password: 123456
- route: some-path
- scheme: https
-
- Note that username and password are optional. If not provided or null, the
- plugin will generate them.
-
- default: []
- interfaces:
- cloudify.interfaces.lifecycle:
- create:
- # Generate service component name and populate config into Consul
- implementation: docker.dockerplugin.create_for_components_with_streams
- start:
- # Create Docker container and start
- implementation: docker.dockerplugin.create_and_start_container_for_components_with_streams
-
-
- # DockerContainerForPlatforms is intended for DCAE platform services. Unlike the components,
- # platform services have well-known names and well-known ports.
- dcae.nodes.DockerContainerForPlatforms:
- derived_from: cloudify.nodes.Root
- properties:
- name:
- description: >
- Container name used to register with Consul
-
- application_config:
- default: {}
- description: >
- Application configuration for this Docker component. The data structure is
- expected to be a complex map (native YAML) and to be constructed and filled
- by the creator of the blueprint.
-
- docker_config:
- default: {}
- description: >
- This is the auxiliary portion of the component spec that contains things
- like healthcheck definitions for the Docker component. Health checks are
- optional.
-
- image:
- type: string
- description: Full uri of the Docker image
-
- host_port:
- type: integer
- description: >
- Network port that the platform service is expecting to expose on the host
- default: 0
-
- container_port:
- type: integer
- description: >
- Network port that the platform service exposes in the container
- default: 0
-
- # The following properties are copied from k8s node type to be consistent.
- # However, these properties are not currently being used within the docker
- # plugin.
- msb_registration:
- type: dcae.types.MSBRegistration
- description: >
- Information for registering with MSB
- required: false
-
- log_info:
- type: dcae.types.LoggingInfo
- description: >
- Information for setting up centralized logging via ELK.
- required: false
-
- replicas:
- type: integer
- description: >
- The number of instances of the component that should be launched initially
- default: 1
-
- always_pull_image:
- type: boolean
- description: >
- Set to true if the orchestrator should always pull a new copy of the image
- before deploying. By default the orchestrator pulls only if the image is
- not already present on the Docker host where the container is being launched.
- default: false
-
- interfaces:
- cloudify.interfaces.lifecycle:
- create:
- # Populate config into Consul
- implementation: docker.dockerplugin.create_for_platforms
- start:
- # Create Docker container and start
- implementation: docker.dockerplugin.create_and_start_container_for_platforms
- stop:
- # Stop and remove Docker container
- implementation: docker.dockerplugin.stop_and_remove_container
- delete:
- # Delete configuration from Consul
- implementation: docker.dockerplugin.cleanup_discovery
-
-
- # DockerContainer is intended to be more of an all-purpose Docker container node
- # for non-componentized applications.
- dcae.nodes.DockerContainer:
- derived_from: cloudify.nodes.Root
- properties:
- name:
- type: string
- description: Name of the Docker container to be given
- image:
- type: string
- description: Full uri of the Docker image
- interfaces:
- cloudify.interfaces.lifecycle:
- start:
- # Create Docker container and start
- implementation: docker.dockerplugin.create_and_start_container
- stop:
- # Stop and remove Docker container
- implementation: docker.dockerplugin.stop_and_remove_container
-
-
- # TODO: Revisit using Docker swarm
- # The DockerSwarm node type provides the connection information of an available Docker swarm
-# cluster to be used to run Docker containers given search constraints like location.
- # This node type is not responsible for instantiating and managing the Docker swarm clusters.
-
- # The DockerHost node is responsible for selecting a pre-existing Docker host to run
- # Docker containers on. It is not responsible for instantiating new Docker hosts or expanding
- # more resources.
- dcae.nodes.SelectedDockerHost:
- derived_from: cloudify.nodes.Root
- properties:
- location_id:
- type: string
- description: Location id of the Docker host to use
-
- name_search:
- type: string
- description: String to use when matching for names
- default: component-dockerhost
-
- # REVIEW: This field should really be optional but because there's no functionality
- # that provides the dynamic solution sought after yet, it has been promoted to be
- # required.
- docker_host_override:
- type: string
- description: Docker hostname here is used as a manual override
- default: Null
-
- interfaces:
- cloudify.interfaces.lifecycle:
- create:
- # Provide the Docker host to use for containers
- implementation: docker.dockerplugin.select_docker_host
- delete:
- implementation: docker.dockerplugin.unselect_docker_host
diff --git a/docker/dockerplugin/__init__.py b/docker/dockerplugin/__init__.py
deleted file mode 100644
index 669e196..0000000
--- a/docker/dockerplugin/__init__.py
+++ /dev/null
@@ -1,31 +0,0 @@
-# ============LICENSE_START=======================================================
-# org.onap.dcae
-# ================================================================================
-# Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-# REVIEW: Tried to source the version from here but you run into import issues
-# because "tasks" module is loaded. This method seems to be the PEP 396
-# recommended way and is listed #3 here https://packaging.python.org/single_source_version/
-# __version__ = '0.1.0'
-
-from .tasks import create_for_components, create_for_components_with_streams, \
- create_and_start_container_for_components_with_streams, \
- create_for_platforms, create_and_start_container, \
- create_and_start_container_for_components, create_and_start_container_for_platforms, \
- stop_and_remove_container, cleanup_discovery, select_docker_host, unselect_docker_host, \
- policy_update
diff --git a/docker/dockerplugin/decorators.py b/docker/dockerplugin/decorators.py
deleted file mode 100644
index f83263b..0000000
--- a/docker/dockerplugin/decorators.py
+++ /dev/null
@@ -1,102 +0,0 @@
-# ============LICENSE_START=======================================================
-# org.onap.dcae
-# ================================================================================
-# Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-import copy
-from cloudify import ctx
-from cloudify.exceptions import NonRecoverableError, RecoverableError
-from dockering import utils as doc
-from dockerplugin import discovery as dis
-from dockerplugin.exceptions import DockerPluginDeploymentError, \
- DockerPluginDependencyNotReadyError
-from dockerplugin import utils
-
-
-def monkeypatch_loggers(task_func):
- """Sets up the dependent loggers"""
-
- def wrapper(**kwargs):
- # Ouch! Monkeypatch loggers
- doc.logger = ctx.logger
- dis.logger = ctx.logger
-
- return task_func(**kwargs)
-
- return wrapper
-
-
-def wrap_error_handling_start(task_start_func):
- """Wrap error handling for the start operations"""
-
- def wrapper(**kwargs):
- try:
- return task_start_func(**kwargs)
- except DockerPluginDependencyNotReadyError as e:
- # You are here because things we need, like a working docker host, are not
- # available yet, so let Cloudify try again later.
- raise RecoverableError(e)
- except DockerPluginDeploymentError as e:
- # Container failed to come up in the allotted time. This is deemed
- # non-recoverable.
- raise NonRecoverableError(e)
- except Exception as e:
- ctx.logger.error("Unexpected error while starting container: {0}"
- .format(str(e)))
- raise NonRecoverableError(e)
-
- return wrapper
-
-
-def _wrapper_merge_inputs(task_func, properties, **kwargs):
- """Merge Cloudify properties with input kwargs before calling task func"""
- inputs = copy.deepcopy(properties)
- # Recursively update
- utils.update_dict(inputs, kwargs)
-
- # Apparently kwargs contains "ctx" which is cloudify.context.CloudifyContext
- # This has to be removed and not copied into runtime_properties else you get
- # JSON serialization errors.
- if "ctx" in inputs:
- del inputs["ctx"]
-
- return task_func(**inputs)
-
-def merge_inputs_for_create(task_create_func):
- """Merge all inputs for start operation into one dict"""
-
- # Needed to wrap the wrapper because I was seeing issues with
- # "RuntimeError: No context set in current execution thread"
- def wrapper(**kwargs):
- # NOTE: ctx.node.properties is an ImmutableProperties instance which is
- # why it is passed into a mutable dict so that it can be deep copied
- return _wrapper_merge_inputs(task_create_func,
- dict(ctx.node.properties), **kwargs)
-
- return wrapper
-
-def merge_inputs_for_start(task_start_func):
- """Merge all inputs for start operation into one dict"""
-
- # Needed to wrap the wrapper because I was seeing issues with
- # "RuntimeError: No context set in current execution thread"
- def wrapper(**kwargs):
- return _wrapper_merge_inputs(task_start_func,
- ctx.instance.runtime_properties, **kwargs)
-
- return wrapper
diff --git a/docker/dockerplugin/discovery.py b/docker/dockerplugin/discovery.py
deleted file mode 100644
index 563693c..0000000
--- a/docker/dockerplugin/discovery.py
+++ /dev/null
@@ -1,257 +0,0 @@
-# ============LICENSE_START=======================================================
-# org.onap.dcae
-# ================================================================================
-# Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-# Copyright (c) 2019 Pantheon.tech. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-from functools import partial
-import json
-import logging
-import uuid
-import requests
-import consul
-
-
-logger = logging.getLogger("discovery")
-
-
-class DiscoveryError(RuntimeError):
- pass
-
-class DiscoveryConnectionError(RuntimeError):
- pass
-
-class DiscoveryServiceNotFoundError(RuntimeError):
- pass
-
-class DiscoveryKVEntryNotFoundError(RuntimeError):
- pass
-
-
-def _wrap_consul_call(consul_func, *args, **kwargs):
- """Wrap Consul call to map errors"""
- try:
- return consul_func(*args, **kwargs)
- except requests.exceptions.ConnectionError as e:
- raise DiscoveryConnectionError(e)
-
-
-def generate_service_component_name(service_component_type):
- """Generate service component id used to pass into the service component
- instance and used as the key to the service component configuration.
-
- Format:
- <service component id>_<service component type>
- """
- # Random generated
- # Copied from cdap plugin
- return "{0}_{1}".format(str(uuid.uuid4()).replace("-",""),
- service_component_type)
-
-
-def create_kv_conn(host):
- """Create connection to key-value store
-
- Returns a Consul client to the specified Consul host"""
- try:
- [hostname, port] = host.split(":")
- return consul.Consul(host=hostname, port=int(port))
- except ValueError:
- return consul.Consul(host=host)
-
-def push_service_component_config(kv_conn, service_component_name, config):
- config_string = config if isinstance(config, str) else json.dumps(config)
- kv_put_func = partial(_wrap_consul_call, kv_conn.kv.put)
-
- if kv_put_func(service_component_name, config_string):
- logger.info("Added config for {0}".format(service_component_name))
- else:
- raise DiscoveryError("Failed to push configuration")
-
-def remove_service_component_config(kv_conn, service_component_name):
- kv_delete_func = partial(_wrap_consul_call, kv_conn.kv.delete)
- kv_delete_func(service_component_name)
-
-
-def get_kv_value(kv_conn, key):
- """Get a key-value entry's value from Consul
-
- Raises DiscoveryKVEntryNotFoundError if entry not found
- """
- kv_get_func = partial(_wrap_consul_call, kv_conn.kv.get)
- (index, val) = kv_get_func(key)
-
- if val:
- return json.loads(val['Value']) # will raise ValueError if not JSON, let it propagate
- else:
- raise DiscoveryKVEntryNotFoundError("{0} kv entry not found".format(key))
-
-
-def _create_rel_key(service_component_name):
- return "{0}:rel".format(service_component_name)
-
-def store_relationship(kv_conn, source_name, target_name):
- # TODO: Rel entry may already exist in a one-to-many situation. Need to
- # support that.
- rel_key = _create_rel_key(source_name)
- rel_value = [target_name] if target_name else []
-
- kv_put_func = partial(_wrap_consul_call, kv_conn.kv.put)
- kv_put_func(rel_key, json.dumps(rel_value))
- logger.info("Added relationship for {0}".format(rel_key))
-
-def delete_relationship(kv_conn, service_component_name):
- rel_key = _create_rel_key(service_component_name)
- kv_get_func = partial(_wrap_consul_call, kv_conn.kv.get)
- index, rels = kv_get_func(rel_key)
-
- if rels:
- rels = json.loads(rels["Value"].decode("utf-8"))
- kv_delete_func = partial(_wrap_consul_call, kv_conn.kv.delete)
- kv_delete_func(rel_key)
- return rels
- else:
- return []
-
-def lookup_service(kv_conn, service_component_name):
- catalog_get_func = partial(_wrap_consul_call, kv_conn.catalog.service)
- index, results = catalog_get_func(service_component_name)
-
- if results:
- return results
- else:
- raise DiscoveryServiceNotFoundError("Failed to find: {0}".format(service_component_name))
-
-
-# TODO: Note these functions have been (for the most part) shamelessly lifted from
-# dcae-cli and should really be shared.
-
-def _is_healthy_pure(get_health_func, instance):
- """Checks to see if a component instance is running healthy
-
- Pure function edition
-
- Args
- ----
- get_health_func: func(string) -> complex object
- Look at unittests in test_discovery to see examples
- instance: (string) fully qualified name of component instance
-
- Returns
- -------
- True if instance has been found and is healthy else False
- """
- index, resp = get_health_func(instance)
-
- if resp:
- def is_passing(instance):
- return all([check["Status"] == "passing" for check in instance["Checks"]])
-
- return any([is_passing(instance) for instance in resp])
- else:
- return False
-
-def is_healthy(consul_host, instance):
- """Checks to see if a component instance is running healthy
-
- Impure function edition
-
- Args
- ----
- consul_host: (string) host string of Consul
- instance: (string) fully qualified name of component instance
-
- Returns
- -------
- True if instance has been found and is healthy else False
- """
- cons = create_kv_conn(consul_host)
-
- get_health_func = partial(_wrap_consul_call, cons.health.service)
- return _is_healthy_pure(get_health_func, instance)
-
-
-def add_to_entry(conn, key, add_name, add_value):
- """
- Find 'key' in consul.
- Treat its value as a JSON string representing a dict.
- Extend the dict by adding an entry with key 'add_name' and value 'add_value'.
- Turn the resulting extended dict into a JSON string.
- Store the string back into Consul under 'key'.
- Watch out for conflicting concurrent updates.
-
- Example:
- Key 'xyz:dmaap' has the value '{"feed00": {"feed_url" : "http://example.com/feeds/999"}}'
- add_to_entry('xyz:dmaap', 'topic00', {'topic_url' : 'http://example.com/topics/1229'})
- should result in the value for key 'xyz:dmaap' in consul being updated to
- '{"feed00": {"feed_url" : "http://example.com/feeds/999"}, "topic00" : {"topic_url" : "http://example.com/topics/1229"}}'
- """
- while True: # do until update succeeds
- (index, val) = conn.kv.get(key) # index gives version of key retrieved
-
- if val is None: # no key yet
- vstring = '{}'
- mod_index = 0 # Use 0 as the cas index for initial insertion of the key
- else:
- vstring = val['Value']
- mod_index = val['ModifyIndex']
-
- # Build the updated dict
- # Exceptions just propagate
- v = json.loads(vstring)
- v[add_name] = add_value
- new_vstring = json.dumps(v)
-
- updated = conn.kv.put(key, new_vstring, cas=mod_index) # if the key has changed since retrieval, this will return false
- if updated:
- return v
-
-
-def _find_matching_services(services, name_search, tags):
- """Find matching services given search criteria"""
- tags = set(tags)
- return [srv_name for srv_name in services
- if name_search in srv_name and tags <= set(services[srv_name])]
-
-def search_services(conn, name_search, tags):
- """Search for services that match criteria
-
- Args:
- -----
- name_search: (string) Name to search for as a substring
- tags: (list) List of strings that are tags. A service must match **all** the
- tags in the list.
-
- Returns:
- --------
- List of names of services that matched
- """
- # srvs is dict where key is service name and value is list of tags
- catalog_get_services_func = partial(_wrap_consul_call, conn.catalog.services)
- index, srvs = catalog_get_services_func()
-
- if srvs:
- matches = _find_matching_services(srvs, name_search, tags)
-
- if matches:
- return matches
-
- raise DiscoveryServiceNotFoundError(
- "No matches found: {0}, {1}".format(name_search, tags))
- else:
- raise DiscoveryServiceNotFoundError("No services found")
diff --git a/docker/dockerplugin/exceptions.py b/docker/dockerplugin/exceptions.py
deleted file mode 100644
index 0d8a341..0000000
--- a/docker/dockerplugin/exceptions.py
+++ /dev/null
@@ -1,29 +0,0 @@
-# ============LICENSE_START=======================================================
-# org.onap.dcae
-# ================================================================================
-# Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-class DockerPluginDeploymentError(RuntimeError):
- pass
-
-
-class DockerPluginDependencyNotReadyError(RuntimeError):
- """Error to use when something that this plugin depends upon, e.g. the
- docker api or consul, is not ready"""
- pass
-
diff --git a/docker/dockerplugin/tasks.py b/docker/dockerplugin/tasks.py
deleted file mode 100644
index 8a15319..0000000
--- a/docker/dockerplugin/tasks.py
+++ /dev/null
@@ -1,672 +0,0 @@
-# ============LICENSE_START=======================================================
-# org.onap.dcae
-# ================================================================================
-# Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-# Copyright (c) 2019 Pantheon.tech. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-# Lifecycle interface calls for DockerContainer
-
-import json, time, copy, random
-from cloudify import ctx
-from cloudify.decorators import operation
-from cloudify.exceptions import NonRecoverableError, RecoverableError
-import dockering as doc
-from onap_dcae_dcaepolicy_lib import Policies
-from dockerplugin import discovery as dis
-from dockerplugin.decorators import monkeypatch_loggers, wrap_error_handling_start, \
- merge_inputs_for_start, merge_inputs_for_create
-from dockerplugin.exceptions import DockerPluginDeploymentError, \
- DockerPluginDependencyNotReadyError
-from dockerplugin import utils
-
-# TODO: Remove this Docker port hardcoding and query for this port instead
-DOCKER_PORT = 2376
-# Rely on the setup of the cloudify manager host to resolve "consul" for the
-# plugin. NOTE: This variable is not passed to components.
-CONSUL_HOST = "consul"
-
-# Used to construct delivery urls for data router subscribers. Data router in FTL
-# requires https, but this author believes ONAP is meant to default to http.
-DEFAULT_SCHEME = "http"
-
-# Property keys
-SERVICE_COMPONENT_NAME = "service_component_name"
-SELECTED_CONTAINER_DESTINATION = "selected_container_destination"
-CONTAINER_ID = "container_id"
-APPLICATION_CONFIG = "application_config"
-
-
-# Utility methods
-
-def _get_docker_logins(consul_host=CONSUL_HOST):
- """Get Docker logins
-
- The assumption is that all Docker logins to be used will be available in
- Consul's key-value store under "docker_plugin/docker_logins" as a list of
- json objects where each object is a single login:
-
- [{ "username": "dcae_dev_ro", "password": "att123ro",
- "registry": "nexus01.research.att.com:18443" }]
- """
- # REVIEW: The error handling may have to be re-examined. The current thought is
- # that the logins *must* be set up, even as an empty list, otherwise the task
- # will fail (fail fast). One alternative is to pass back an empty list upon any
- # issue, but this would push potential problems to a later point of the
- # deployment.
- kv_conn = dis.create_kv_conn(consul_host)
- return dis.get_kv_value(kv_conn, "docker_plugin/docker_logins")
-
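A sketch of how the expected key might be seeded, assuming python-consul; the login values are purely illustrative:

    import json
    import consul

    conn = consul.Consul(host="consul")
    logins = [{"username": "example_ro", "password": "example_pw",
               "registry": "registry.example.com:18443"}]  # illustrative
    # An empty list is also valid and still satisfies the fail-fast
    # expectation described above.
    conn.kv.put("docker_plugin/docker_logins", json.dumps(logins))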
-
-# Lifecycle interface calls for dcae.nodes.DockerContainer
-
-def _setup_for_discovery(**kwargs):
- """Setup for config discovery"""
- try:
- name = kwargs['name']
- application_config = kwargs[APPLICATION_CONFIG]
-
- # NOTE: application_config is no longer a json string; it is input as a
- # YAML map, which translates to a dict. We don't have to do any
- # preprocessing anymore.
- conn = dis.create_kv_conn(CONSUL_HOST)
- dis.push_service_component_config(conn, name, application_config)
- return kwargs
- except dis.DiscoveryConnectionError as e:
- raise RecoverableError(e)
- except Exception as e:
- ctx.logger.error("Unexpected error while pushing configuration: {0}"
- .format(str(e)))
- raise NonRecoverableError(e)
-
-def _generate_component_name(**kwargs):
- """Generate component name"""
- service_component_type = kwargs['service_component_type']
- name_override = kwargs['service_component_name_override']
-
- kwargs['name'] = name_override if name_override \
- else dis.generate_service_component_name(service_component_type)
- return kwargs
-
-def _done_for_create(**kwargs):
- """Wrap up create operation"""
- name = kwargs['name']
- kwargs[SERVICE_COMPONENT_NAME] = name
- # All updates to the runtime_properties happen here. I don't see a reason
- # why we shouldn't do this: the context is not being mutated by
- # anything else, and it keeps the other functions pure (pure in the sense
- # of not dealing with CloudifyContext).
- ctx.instance.runtime_properties.update(kwargs)
- ctx.logger.info("Done setting up: {0}".format(name))
- return kwargs
-
-
-@merge_inputs_for_create
-@monkeypatch_loggers
-@Policies.gather_policies_to_node()
-@operation
-def create_for_components(**create_inputs):
- """Create step for Docker containers that are components
-
- This interface is responsible for:
-
- 1. Generating service component name
- 2. Populating config information into Consul
- """
- _done_for_create(
- **_setup_for_discovery(
- **_generate_component_name(
- **create_inputs)))
-
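The create operations here compose small steps that all share one calling convention: take the accumulated inputs as **kwargs, add or rewrite keys, and return the same dict for the next stage. A stripped-down illustration of the pattern (function and key names hypothetical):

    def _pick_name(**kwargs):
        kwargs["name"] = kwargs["service_component_type"] + "_xyz"
        return kwargs

    def _mark_configured(**kwargs):
        kwargs["configured"] = True
        return kwargs

    result = _mark_configured(**_pick_name(service_component_type="laika"))
    # {'service_component_type': 'laika', 'name': 'laika_xyz',
    #  'configured': True}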
-
-def _parse_streams(**kwargs):
- """Parse streams and setup for DMaaP plugin"""
- # The DMaaP plugin requires this plugin to set the runtime properties
- # keyed by the node name.
- for stream in kwargs["streams_publishes"]:
- kwargs[stream["name"]] = stream
-
- # NOTE: The delivery url is constructed and set up in the start operation
- for stream in kwargs["streams_subscribes"]:
- if stream["type"] == "data_router":
- # If either username or password is missing then generate it. The
- # DMaaP plugin doesn't generate them for subscribers.
- # The code and length of username/password are lifted from the DMaaP
- # plugin.
-
- # Don't want to mutate the source
- stream = copy.deepcopy(stream)
- if not stream.get("username", None):
- stream["username"] = utils.random_string(8)
- if not stream.get("password", None):
- stream["password"] = utils.random_string(10)
-
- kwargs[stream["name"]] = stream
-
- return kwargs
-
-
-def _setup_for_discovery_streams(**kwargs):
- """Setup for discovery of streams
-
- Specifically, there's a race condition this call addresses for the data router
- subscriber case. The component needs its feed subscriber information but the
- DMaaP plugin doesn't provide this until after the docker plugin start
- operation.
- """
- dr_subs = [kwargs[s["name"]] for s in kwargs["streams_subscribes"] \
- if s["type"] == "data_router"]
-
- if dr_subs:
- dmaap_kv_key = "{0}:dmaap".format(kwargs["name"])
- conn = dis.create_kv_conn(CONSUL_HOST)
-
- def add_feed(dr_sub):
- # delivery url and subscriber id will be filled in by the dmaap plugin later
- v = { "location": dr_sub["location"], "delivery_url": None,
- "username": dr_sub["username"], "password": dr_sub["password"],
- "subscriber_id": None }
- return dis.add_to_entry(conn, dmaap_kv_key, dr_sub["name"], v)
-
- try:
- for dr_sub in dr_subs:
- if add_feed(dr_sub) is None:
- raise NonRecoverableError(
- "Failure updating feed streams in Consul")
- except Exception as e:
- raise NonRecoverableError(e)
-
- return kwargs
-
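After this step, the Consul key '<component name>:dmaap' holds one entry per data router subscriber, written via add_to_entry in discovery.py. Illustratively, the value under a key like "laika_abc123:dmaap" (name and credentials made up) would be:

    {"feed01": {"location": "mtc5",
                "delivery_url": null,
                "username": "xKpQrStU",
                "password": "aB9dEfGh12",
                "subscriber_id": null}}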
-
-@merge_inputs_for_create
-@monkeypatch_loggers
-@Policies.gather_policies_to_node()
-@operation
-def create_for_components_with_streams(**create_inputs):
- """Create step for Docker containers that are components that use DMaaP
-
- This interface is responsible for:
-
- 1. Generating service component name
- 2. Setup runtime properties for DMaaP plugin
- 3. Populating application config into Consul
- 4. Populating DMaaP config for data router subscribers in Consul
- """
- _done_for_create(
- **_setup_for_discovery(
- **_setup_for_discovery_streams(
- **_parse_streams(
- **_generate_component_name(
- **create_inputs)))))
-
-
-@merge_inputs_for_create
-@monkeypatch_loggers
-@operation
-def create_for_platforms(**create_inputs):
- """Create step for Docker containers that are platform components
-
- This interface is responsible for:
-
- 1. Populating config information into Consul
- """
- _done_for_create(
- **_setup_for_discovery(
- **create_inputs))
-
-
-def _lookup_service(service_component_name, consul_host=CONSUL_HOST,
- with_port=False):
- conn = dis.create_kv_conn(consul_host)
- results = dis.lookup_service(conn, service_component_name)
-
- if with_port:
- # Just grab first
- result = results[0]
- return "{address}:{port}".format(address=result["ServiceAddress"],
- port=result["ServicePort"])
- else:
- return results[0]["ServiceAddress"]
-
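A quick sketch of the two lookup modes (service names illustrative):

    # Address only, e.g. "10.0.0.5":
    ip = _lookup_service("component_dockerhost")
    # "address:port" form, e.g. "10.0.0.5:8080":
    addr = _lookup_service("laika_abc123", with_port=True)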
-
-def _verify_container(service_component_name, max_wait, consul_host=CONSUL_HOST):
- """Verify that the container is healthy
-
- Args:
- -----
- max_wait (integer): limit on how many attempts to make, which translates to
- seconds because each sleep is one second. 0 means infinite.
-
- Return:
- -------
- True if the component is healthy; otherwise a DockerPluginDeploymentError
- exception will be raised.
- """
- num_attempts = 1
-
- while True:
- if dis.is_healthy(consul_host, service_component_name):
- return True
- else:
- num_attempts += 1
-
- if max_wait > 0 and max_wait < num_attempts:
- raise DockerPluginDeploymentError("Container never became healthy")
-
- time.sleep(1)
-
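A caller sketch; with the loop above this polls Consul once per second, so max_wait=300 means roughly five minutes before giving up:

    # Blocks until Consul reports the service healthy, or raises
    # DockerPluginDeploymentError after ~300 one-second polls.
    _verify_container("laika_abc123", 300)  # service name illustrative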
-
-def _create_and_start_container(container_name, image, docker_host,
- consul_host=CONSUL_HOST, **kwargs):
- """Create and start Docker container
-
- This function does most of the heavy lifting, including resolving the
- docker host to connect to and handling common container setup, such as
- making sure CONSUL_HOST gets set to the local docker host ip.
-
- This method raises DockerPluginDependencyNotReadyError
- """
- try:
- # Setup for Docker operations
-
- docker_host_ip = _lookup_service(docker_host, consul_host=consul_host)
-
- logins = _get_docker_logins(consul_host=consul_host)
- client = doc.create_client(docker_host_ip, DOCKER_PORT, logins=logins)
-
- hcp = doc.add_host_config_params_volumes(volumes=kwargs.get("volumes",
- None))
- hcp = doc.add_host_config_params_ports(ports=kwargs.get("ports", None),
- host_config_params=hcp)
- hcp = doc.add_host_config_params_dns(docker_host_ip,
- host_config_params=hcp)
-
- # NOTE: The critical env variable CONSUL_HOST is being assigned the
- # docker host ip itself because there should be a local Consul agent. We
- # want services to register with their local Consul agent.
- # CONFIG_BINDING_SERVICE is here for backwards compatibility. This is a
- # well-known name now.
- platform_envs = { "CONSUL_HOST": docker_host_ip,
- "CONFIG_BINDING_SERVICE": "config_binding_service" }
- # NOTE: The order of the envs being passed in is **important**. The
- # kwargs["envs"] getting passed in last ensures that manual overrides
- # will override the hardcoded envs.
- envs = doc.create_envs(container_name, platform_envs, kwargs.get("envs", {}))
-
- # Do Docker operations
-
- container = doc.create_container(client, image, container_name, envs, hcp)
- container_id = doc.start_container(client, container)
-
- return container_id
- except (doc.DockerConnectionError, dis.DiscoveryConnectionError,
- dis.DiscoveryServiceNotFoundError) as e:
- raise DockerPluginDependencyNotReadyError(e)
-
-
-def _parse_cloudify_context(**kwargs):
- """Parse Cloudify context
-
- Extract what is needed. This is an impure function because it requires ctx.
- """
- kwargs["deployment_id"] = ctx.deployment.id
- return kwargs
-
-def _enhance_docker_params(**kwargs):
- """Setup Docker envs"""
- docker_config = kwargs.get("docker_config", {})
-
- envs = kwargs.get("envs", {})
- # NOTE: Healthchecks are optional until we are prepared to handle use
- # cases that don't necessarily use http
- envs_healthcheck = doc.create_envs_healthcheck(docker_config) \
- if "healthcheck" in docker_config else {}
- envs.update(envs_healthcheck)
-
- # Set tags on this component for its Consul registration as a service
- tags = [kwargs.get("deployment_id", None), kwargs["service_id"]]
- tags = [ str(tag) for tag in tags if tag is not None ]
- # Registrator will use this to register this component with tags. Must be
- # comma delimited.
- envs["SERVICE_TAGS"] = ",".join(tags)
-
- kwargs["envs"] = envs
-
- def combine_params(key, docker_config, kwargs):
- v = docker_config.get(key, []) + kwargs.get(key, [])
- if v:
- kwargs[key] = v
- return kwargs
-
- # Add the lists of ports and volumes unintelligently - meaning just add the
- # lists together with no deduping.
- kwargs = combine_params("ports", docker_config, kwargs)
- kwargs = combine_params("volumes", docker_config, kwargs)
-
- return kwargs
-
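For example, with an illustrative deployment id and service id, the Registrator-facing SERVICE_TAGS env comes out comma-delimited and the docker_config ports are merged into the kwargs:

    params = _enhance_docker_params(deployment_id="dep01", service_id="svc01",
                                    docker_config={"ports": ["8080:8080"]})
    assert params["envs"]["SERVICE_TAGS"] == "dep01,svc01"
    assert params["ports"] == ["8080:8080"]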
-def _create_and_start_component(**kwargs):
- """Create and start component (container)"""
- image = kwargs["image"]
- service_component_name = kwargs[SERVICE_COMPONENT_NAME]
- docker_host = kwargs[SELECTED_CONTAINER_DESTINATION]
- # Need to be picky and manually select pieces, because passing the full
- # kwargs (which contains everything) confused the execution of
- # _create_and_start_container with duplicate variables
- sub_kwargs = { "volumes": kwargs.get("volumes", []),
- "ports": kwargs.get("ports", None), "envs": kwargs.get("envs", {}) }
-
- container_id = _create_and_start_container(service_component_name, image,
- docker_host, **sub_kwargs)
- kwargs[CONTAINER_ID] = container_id
-
- # TODO: Use regular logging here
- ctx.logger.info("Container started: {0}, {1}".format(container_id,
- service_component_name))
-
- return kwargs
-
-def _verify_component(**kwargs):
- """Verify component (container) is healthy"""
- service_component_name = kwargs[SERVICE_COMPONENT_NAME]
- # TODO: "Consul doesn't make its first health check immediately upon registration.
- # Instead it waits for the health check interval to pass."
- # Possible enhancement is to read the interval (and possibly the timeout) from
- # docker_config and multiply that by a number to come up with a more suitable
- # max_wait.
- max_wait = kwargs.get("max_wait", 300)
-
- # Verify that the container is healthy
-
- if _verify_container(service_component_name, max_wait):
- container_id = kwargs[CONTAINER_ID]
- service_component_name = kwargs[SERVICE_COMPONENT_NAME]
-
- # TODO: Use regular logging here
- ctx.logger.info("Container is healthy: {0}, {1}".format(container_id,
- service_component_name))
-
- return kwargs
-
-def _done_for_start(**kwargs):
- ctx.instance.runtime_properties.update(kwargs)
- ctx.logger.info("Done starting: {0}".format(kwargs["name"]))
- return kwargs
-
-@wrap_error_handling_start
-@merge_inputs_for_start
-@monkeypatch_loggers
-@operation
-def create_and_start_container_for_components(**start_inputs):
- """Create Docker container and start for components
-
- This operation method is to be used with the DockerContainerForComponents
- node type. After launching the container, the plugin will verify with Consul
- that the app is up and healthy before terminating.
- """
- _done_for_start(
- **_verify_component(
- **_create_and_start_component(
- **_enhance_docker_params(
- **_parse_cloudify_context(**start_inputs)))))
-
-
-def _update_delivery_url(**kwargs):
- """Update the delivery url for data router subscribers"""
- dr_subs = [kwargs[s["name"]] for s in kwargs["streams_subscribes"] \
- if s["type"] == "data_router"]
-
- if dr_subs:
- service_component_name = kwargs[SERVICE_COMPONENT_NAME]
- # TODO: Should NOT be setting up the delivery url with ip addresses;
- # in the https case this will not work, because data router does
- # certificate validation using the fqdn.
- subscriber_host = _lookup_service(service_component_name, with_port=True)
-
- for dr_sub in dr_subs:
- scheme = dr_sub["scheme"] if "scheme" in dr_sub else DEFAULT_SCHEME
- if "route" not in dr_sub:
- raise NonRecoverableError("'route' key missing from data router subscriber")
- path = dr_sub["route"]
- dr_sub["delivery_url"] = "{scheme}://{host}/{path}".format(
- scheme=scheme, host=subscriber_host, path=path)
- kwargs[dr_sub["name"]] = dr_sub
-
- return kwargs
-
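With the defaults above, a subscriber resolved to 10.0.0.5:8080 with route 'identity' (illustrative values, matching the example blueprints) ends up with:

    scheme, host, path = "http", "10.0.0.5:8080", "identity"
    delivery_url = "{scheme}://{host}/{path}".format(
        scheme=scheme, host=host, path=path)
    # -> "http://10.0.0.5:8080/identity"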
-@wrap_error_handling_start
-@merge_inputs_for_start
-@monkeypatch_loggers
-@operation
-def create_and_start_container_for_components_with_streams(**start_inputs):
- """Create Docker container and start for components that have streams
-
- This operation method is to be used with the DockerContainerForComponents
- node type. After launching the container, the plugin will verify with Consul
- that the app is up and healthy before terminating.
- """
- _done_for_start(
- **_update_delivery_url(
- **_verify_component(
- **_create_and_start_component(
- **_enhance_docker_params(
- **_parse_cloudify_context(**start_inputs))))))
-
-
-@wrap_error_handling_start
-@monkeypatch_loggers
-@operation
-def create_and_start_container_for_platforms(**kwargs):
- """Create Docker container and start for platform services
-
- This operation method is to be used with the DockerContainerForPlatforms
- node type. After launching the container, the plugin will verify with Consul
- that the app is up and healthy before terminating.
- """
- image = ctx.node.properties["image"]
- docker_config = ctx.node.properties.get("docker_config", {})
- service_component_name = ctx.node.properties["name"]
-
- docker_host = ctx.instance.runtime_properties[SELECTED_CONTAINER_DESTINATION]
-
- envs = kwargs.get("envs", {})
- # NOTE: Healthchecks are optional until we are prepared to handle use
- # cases that don't necessarily use http
- envs_healthcheck = doc.create_envs_healthcheck(docker_config) \
- if "healthcheck" in docker_config else {}
- envs.update(envs_healthcheck)
- kwargs["envs"] = envs
-
- host_port = ctx.node.properties["host_port"]
- container_port = ctx.node.properties["container_port"]
-
- # Cloudify properties are all required and Cloudify complains that None
- # is not a valid type for integer. Defaulting to 0 indicates not to
- # use this and not to set a specific port mapping, e.g. for the service
- # change handler.
- if host_port != 0 and container_port != 0:
- # Doing this because other nodes might want to use this property
- port_mapping = "{cp}:{hp}".format(cp=container_port, hp=host_port)
- ports = kwargs.get("ports", []) + [ port_mapping ]
- kwargs["ports"] = ports
- if "ports" not in kwargs:
- ctx.logger.warn("No port mappings defined. Will randomly assign port.")
-
- container_id = _create_and_start_container(service_component_name, image,
- docker_host, **kwargs)
- ctx.instance.runtime_properties[CONTAINER_ID] = container_id
-
- ctx.logger.info("Container started: {0}, {1}".format(container_id,
- service_component_name))
-
- # Verify that the container is healthy
-
- max_wait = kwargs.get("max_wait", 300)
-
- if _verify_container(service_component_name, max_wait):
- ctx.logger.info("Container is healthy: {0}, {1}".format(container_id,
- service_component_name))
-
-
-@wrap_error_handling_start
-@monkeypatch_loggers
-@operation
-def create_and_start_container(**kwargs):
- """Create Docker container and start"""
- service_component_name = ctx.node.properties["name"]
- ctx.instance.runtime_properties[SERVICE_COMPONENT_NAME] = service_component_name
-
- image = ctx.node.properties["image"]
- docker_host = ctx.instance.runtime_properties[SELECTED_CONTAINER_DESTINATION]
-
- container_id = _create_and_start_container(service_component_name, image,
- docker_host, **kwargs)
- ctx.instance.runtime_properties[CONTAINER_ID] = container_id
-
- ctx.logger.info("Container started: {0}, {1}".format(container_id,
- service_component_name))
-
-
-@monkeypatch_loggers
-@operation
-def stop_and_remove_container(**kwargs):
- """Stop and remove Docker container"""
- try:
- docker_host = ctx.instance.runtime_properties[SELECTED_CONTAINER_DESTINATION]
-
- docker_host_ip = _lookup_service(docker_host)
-
- logins = _get_docker_logins()
- client = doc.create_client(docker_host_ip, DOCKER_PORT, logins=logins)
-
- container_id = ctx.instance.runtime_properties[CONTAINER_ID]
- doc.stop_then_remove_container(client, container_id)
-
- cleanup_image = kwargs.get("cleanup_image", False)
-
- if cleanup_image:
- image = ctx.node.properties["image"]
-
- if doc.remove_image(client, image):
- ctx.logger.info("Removed Docker image: {0}".format(image))
- else:
- ctx.logger.warn("Could not remove Docker image: {0}".format(image))
- except (doc.DockerConnectionError, dis.DiscoveryConnectionError,
- dis.DiscoveryServiceNotFoundError) as e:
- raise RecoverableError(e)
- except Exception as e:
- ctx.logger.error("Unexpected error while stopping container: {0}"
- .format(str(e)))
- raise NonRecoverableError(e)
-
-@monkeypatch_loggers
-@Policies.cleanup_policies_on_node
-@operation
-def cleanup_discovery(**kwargs):
- """Delete configuration from Consul"""
- service_component_name = ctx.instance.runtime_properties[SERVICE_COMPONENT_NAME]
-
- try:
- conn = dis.create_kv_conn(CONSUL_HOST)
- dis.remove_service_component_config(conn, service_component_name)
- except dis.DiscoveryConnectionError as e:
- raise RecoverableError(e)
-
-
-def _notify_container(**kwargs):
- """Notify container using the policy section in the docker_config"""
- dc = kwargs["docker_config"]
-
- if "policy" in dc:
- if dc["policy"]["trigger_type"] == "docker":
- # REVIEW: Need to finalize the docker config policy data structure
- script_path = dc["policy"]["script_path"]
- updated_policies = kwargs["updated_policies"]
- removed_policies = kwargs["removed_policies"]
- policies = kwargs["policies"]
- cmd = doc.build_policy_update_cmd(script_path, use_sh=False,
- msg_type="policies",
- updated_policies=updated_policies,
- removed_policies=removed_policies,
- policies=policies
- )
-
- docker_host = kwargs[SELECTED_CONTAINER_DESTINATION]
- docker_host_ip = _lookup_service(docker_host)
- logins = _get_docker_logins()
- client = doc.create_client(docker_host_ip, DOCKER_PORT, logins=logins)
-
- container_id = kwargs["container_id"]
-
- doc.notify_for_policy_update(client, container_id, cmd)
- # else the default is no trigger
-
- return kwargs
-
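A minimal docker_config policy section that would take the docker trigger path above; the script path mirrors the blueprint-laika-policy.yaml example later in this change:

    docker_config = {
        "policy": {
            "trigger_type": "docker",  # anything else means no trigger
            "script_path": "/bin/echo"  # script run inside the container
        }
    }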
-@monkeypatch_loggers
-@Policies.update_policies_on_node()
-@operation
-def policy_update(updated_policies, removed_policies=None, policies=None, **kwargs):
- """Policy update task
-
- This method is responsible for updating the application configuration and
- notifying the applications that the change has occurred. This is to be used
- for the dcae.interfaces.policy.policy_update operation.
-
- :updated_policies: contains the list of changed policy-configs when configs_only=True
- (default) Use configs_only=False to bring the full policy objects in :updated_policies:.
- """
- update_inputs = copy.deepcopy(ctx.instance.runtime_properties)
- update_inputs["updated_policies"] = updated_policies
- update_inputs["removed_policies"] = removed_policies
- update_inputs["policies"] = policies
-
- _notify_container(**update_inputs)
-
-
-# Lifecycle interface calls for dcae.nodes.DockerHost
-
-
-@monkeypatch_loggers
-@operation
-def select_docker_host(**kwargs):
- selected_docker_host = ctx.node.properties['docker_host_override']
- name_search = ctx.node.properties['name_search']
- location_id = ctx.node.properties['location_id']
-
- if selected_docker_host:
- ctx.instance.runtime_properties[SERVICE_COMPONENT_NAME] = selected_docker_host
- ctx.logger.info("Selected Docker host: {0}".format(selected_docker_host))
- else:
- try:
- conn = dis.create_kv_conn(CONSUL_HOST)
- names = dis.search_services(conn, name_search, [location_id])
- ctx.logger.info("Docker hosts found: {0}".format(names))
- # Randomly choose one
- ctx.instance.runtime_properties[SERVICE_COMPONENT_NAME] = random.choice(names)
- except (dis.DiscoveryConnectionError, dis.DiscoveryServiceNotFoundError) as e:
- raise RecoverableError(e)
- except Exception as e:
- raise NonRecoverableError(e)
-
-@operation
-def unselect_docker_host(**kwargs):
- del ctx.instance.runtime_properties[SERVICE_COMPONENT_NAME]
- ctx.logger.info("Unselected Docker host")
-
diff --git a/docker/dockerplugin/utils.py b/docker/dockerplugin/utils.py
deleted file mode 100644
index 6475aaa..0000000
--- a/docker/dockerplugin/utils.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# ============LICENSE_START=======================================================
-# org.onap.dcae
-# ================================================================================
-# Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-# Copyright (c) 2019 Pantheon.tech. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-import string
-import random
-import collections
-
-
-def random_string(n):
- """Randomly generate an ASCII string of length n"""
- corpus = string.ascii_lowercase + string.ascii_uppercase + string.digits
- return ''.join(random.choice(corpus) for x in range(n))
-
-
-def update_dict(d, u):
- """Recursively update dict
-
- Update dict d with the values from dict u
- """
- for k, v in u.items():
- if isinstance(v, collections.Mapping):
- r = update_dict(d.get(k, {}), v)
- d[k] = r
- else:
- d[k] = u[k]
- return d
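A usage sketch, mirroring the merge expectations in tests/test_decorators.py (note that collections.Mapping above assumes a Python version before 3.10, where it was removed in favor of collections.abc.Mapping):

    d = {"nested": {"a": 123, "b": 456}, "foo": "duh"}
    u = {"nested": {"a": 789, "c": "zyx"}}
    update_dict(d, u)
    # d == {"nested": {"a": 789, "b": 456, "c": "zyx"}, "foo": "duh"}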
diff --git a/docker/examples/blueprint-laika-dmaap-pubs.yaml b/docker/examples/blueprint-laika-dmaap-pubs.yaml
deleted file mode 100644
index 6462227..0000000
--- a/docker/examples/blueprint-laika-dmaap-pubs.yaml
+++ /dev/null
@@ -1,165 +0,0 @@
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-tosca_definitions_version: cloudify_dsl_1_3
-
-description: >
- This Blueprint installs a chain of two laika instances on a Docker cluster
-
-imports:
- - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/docker/2.2.0/node-type.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/relationship/1.0.0/node-type.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/dmaap/1.1.0/dmaap.yaml
-
-inputs:
-
- service_id:
- description: Unique id used for an instance of this DCAE service. Use deployment id
- default: 'foobar'
-
- topic00_aaf_username:
- topic00_aaf_password:
- topic00_location:
- default: mtc5
- topic00_client_role:
-
- topic01_aaf_username:
- topic01_aaf_password:
- topic01_location:
- default: mtc5
- topic01_client_role:
-
- feed00_location:
- default: mtc5
-
- feed01_location:
- default: mtc5
-
- topic00fqtn:
- type: string
- topic01fqtn:
- type: string
- laika_image:
- type: string
-
-node_templates:
-
- topic00:
- type: dcae.nodes.ExistingTopic
- properties:
- fqtn: { get_input : topic00fqtn }
-
- topic01:
- type: dcae.nodes.ExistingTopic
- properties:
- fqtn: { get_input : topic01fqtn }
-
- feed00:
- type: dcae.nodes.Feed
- properties:
- # NOTE: Had to manually make unique feed names per test because I've been told there's
- # an issue with feeds not being deleted by uninstall.
- feed_name: "feed00-pub-laika"
- feed_description: "Feed00 to test pub for laika"
- feed_version: 1.0.0
- aspr_classification: "unclassified"
-
- feed01:
- type: dcae.nodes.Feed
- properties:
- feed_name: "feed01-pub-laika"
- feed_description: "Feed01 to test pub for laika"
- feed_version: 1.0.0
- aspr_classification: "unclassified"
-
- laika-one:
- type: dcae.nodes.DockerContainerForComponentsUsingDmaap
- properties:
- service_component_type:
- 'laika'
- service_id:
- { get_input: service_id }
- location_id:
- 'rework-central'
- application_config:
- some-param: "Lorem ipsum dolor sit amet"
- streams_publishes:
- topic-alpha:
- aaf_username: { get_input: topic00_aaf_username }
- aaf_password: { get_input: topic00_aaf_password }
- type: "message_router"
- dmaap_info: "<< topic00 >>"
- topic-beta:
- aaf_username: { get_input: topic01_aaf_username }
- aaf_password: { get_input: topic01_aaf_password }
- type: "message_router"
- dmaap_info: "<< topic01 >>"
- feed-gamma:
- type: "data_router"
- dmaap_info: "<< feed00 >>"
- feed-kappa:
- type: "data_router"
- dmaap_info: "<< feed01 >>"
- streams_subscribes: {}
- services_calls: {}
- image: { get_input : laika_image }
- docker_config:
- healthcheck:
- type: "http"
- endpoint: "/health"
- streams_publishes:
- - name: topic00
- location: { get_input: topic00_location }
- client_role: { get_input: topic00_client_role }
- type: message_router
- - name: topic01
- location: { get_input: topic01_location }
- client_role: { get_input: topic01_client_role }
- type: message_router
- - name: feed00
- location: { get_input: feed00_location }
- type: data_router
- - name: feed01
- location: { get_input: feed01_location }
- type: data_router
- streams_subscribes: []
- relationships:
- - type: dcae.relationships.component_contained_in
- target: docker_host
- - type: dcae.relationships.publish_events
- target: topic00
- - type: dcae.relationships.publish_events
- target: topic01
- - type: dcae.relationships.publish_files
- target: feed00
- - type: dcae.relationships.publish_files
- target: feed01
- interfaces:
- cloudify.interfaces.lifecycle:
- stop:
- inputs:
- cleanup_image:
- False
-
- docker_host:
- type: dcae.nodes.SelectedDockerHost
- properties:
- location_id:
- 'rework-central'
- docker_host_override:
- 'component_dockerhost'
diff --git a/docker/examples/blueprint-laika-dmaap-pubsub.yaml b/docker/examples/blueprint-laika-dmaap-pubsub.yaml
deleted file mode 100644
index c6099a2..0000000
--- a/docker/examples/blueprint-laika-dmaap-pubsub.yaml
+++ /dev/null
@@ -1,167 +0,0 @@
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-tosca_definitions_version: cloudify_dsl_1_3
-
-description: >
- This Blueprint installs a chain of two laika instances on a Docker cluster
-
-imports:
- - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/docker/2.2.0/node-type.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/relationship/1.0.0/node-type.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/dmaap/1.1.0/dmaap.yaml
-
-inputs:
-
- service_id:
- description: Unique id used for an instance of this DCAE service. Use deployment id
- default: 'foobar'
-
- topic00_aaf_username:
- topic00_aaf_password:
- topic00_location:
- default: mtc5
- topic00_client_role:
-
- topic01_aaf_username:
- topic01_aaf_password:
- topic01_location:
- default: mtc5
- topic01_client_role:
-
- feed00_location:
- default: mtc5
-
- feed01_location:
- default: mtc5
-
- topic00fqtn:
- type: string
- topic01fqtn:
- type: string
- laika_image:
- type: string
-
-node_templates:
-
- topic00:
- type: dcae.nodes.ExistingTopic
- properties:
- fqtn: { get_input : topic00fqtn }
-
- topic01:
- type: dcae.nodes.ExistingTopic
- properties:
- fqtn: { get_input : topic01fqtn }
-
- feed00:
- type: dcae.nodes.Feed
- properties:
- # NOTE: Had to manually make unique feed names per test because I've been told there's
- # an issue with feeds not being deleted by uninstall.
- feed_name: "feed00-pub-laika"
- feed_description: "Feed00 to test pub for laika"
- feed_version: 1.0.0
- aspr_classification: "unclassified"
-
- feed01:
- type: dcae.nodes.Feed
- properties:
- feed_name: "feed01-sub-laika"
- feed_description: "Feed01 to test sub for laika"
- feed_version: 1.0.0
- aspr_classification: "unclassified"
-
- laika-one:
- type: dcae.nodes.DockerContainerForComponentsUsingDmaap
- properties:
- service_component_type:
- 'laika'
- service_id:
- { get_input: service_id }
- location_id:
- 'rework-central'
- application_config:
- some-param: "Lorem ipsum dolor sit amet"
- streams_publishes:
- my-publishing-topic:
- aaf_username: { get_input: topic00_aaf_username }
- aaf_password: { get_input: topic00_aaf_password }
- type: "message_router"
- dmaap_info: "<< topic00 >>"
- my-publishing-feed:
- type: "data_router"
- dmaap_info: "<< feed00 >>"
- streams_subscribes:
- my-subscribing-topic:
- aaf_username: { get_input: topic01_aaf_username }
- aaf_password: { get_input: topic01_aaf_password }
- type: "message_router"
- dmaap_info: "<< topic01 >>"
- my-subscribing-feed:
- type: "data_router"
- dmaap_info: "<< feed01 >>"
- services_calls: {}
- image: { get_input : laika_image }
- docker_config:
- healthcheck:
- type: "http"
- endpoint: "/health"
- streams_publishes:
- - name: topic00
- location: { get_input: topic00_location }
- client_role: { get_input: topic00_client_role }
- type: message_router
- - name: feed00
- location: { get_input: feed00_location }
- type: data_router
- streams_subscribes:
- - name: topic01
- location: { get_input: topic01_location }
- client_role: { get_input: topic01_client_role }
- type: message_router
- - name: feed01
- location: { get_input: feed01_location }
- type: data_router
- route: identity
- scheme: https
- relationships:
- - type: dcae.relationships.component_contained_in
- target: docker_host
- - type: dcae.relationships.publish_events
- target: topic00
- - type: dcae.relationships.subscribe_to_events
- target: topic01
- - type: dcae.relationships.publish_files
- target: feed00
- - type: dcae.relationships.subscribe_to_files
- target: feed01
- interfaces:
- cloudify.interfaces.lifecycle:
- stop:
- inputs:
- cleanup_image:
- False
-
- docker_host:
- type: dcae.nodes.SelectedDockerHost
- properties:
- location_id:
- 'rework-central'
- docker_host_override:
- 'component_dockerhost'
diff --git a/docker/examples/blueprint-laika-dmaap-subs.yaml b/docker/examples/blueprint-laika-dmaap-subs.yaml
deleted file mode 100644
index ec28668..0000000
--- a/docker/examples/blueprint-laika-dmaap-subs.yaml
+++ /dev/null
@@ -1,173 +0,0 @@
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-tosca_definitions_version: cloudify_dsl_1_3
-
-description: >
- This Blueprint installs a chain of two laika instances on a Docker cluster
-
-imports:
- - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/docker/2.2.0/node-type.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/relationship/1.0.0/node-type.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/dmaap/1.1.0/dmaap.yaml
-
-
-inputs:
-
- service_id:
- description: Unique id used for an instance of this DCAE service. Use deployment id
- default: 'foobar'
-
- topic00_aaf_username:
- topic00_aaf_password:
- topic00_location:
- default: mtc5
- topic00_client_role:
-
- topic01_aaf_username:
- topic01_aaf_password:
- topic01_location:
- default: mtc5
- topic01_client_role:
-
- feed00_location:
- default: mtc5
-
- feed01_location:
- default: mtc5
-
- topic00fqtn:
- type: string
- topic01fqtn:
- type: string
- laika_image:
- type: string
-
-node_templates:
-
- topic00:
- type: dcae.nodes.ExistingTopic
- properties:
- fqtn: { get_input : topic00fqtn }
-
- topic01:
- type: dcae.nodes.ExistingTopic
- properties:
- fqtn: { get_input : topic01fqtn }
-
- feed00:
- type: dcae.nodes.Feed
- properties:
- # NOTE: Had to manually make unique feed names per test because I've been told there's
- # an issue with feeds not being deleted by uninstall.
- feed_name: "feed00-sub-laika"
- feed_description: "Feed00 to test sub for laika"
- feed_version: 1.0.0
- aspr_classification: "unclassified"
-
- feed01:
- type: dcae.nodes.Feed
- properties:
- feed_name: "feed01-sub-laika"
- feed_description: "Feed01 to test sub for laika"
- feed_version: 1.0.0
- aspr_classification: "unclassified"
-
- laika-one:
- type: dcae.nodes.DockerContainerForComponentsUsingDmaap
- properties:
- service_component_type:
- 'laika'
- service_id:
- { get_input: service_id }
- location_id:
- 'rework-central'
- application_config:
- some-param: "Lorem ipsum dolor sit amet"
- streams_publishes: {}
- streams_subscribes:
- topic-alpha:
- aaf_username: { get_input: topic00_aaf_username }
- aaf_password: { get_input: topic00_aaf_password }
- type: "message_router"
- dmaap_info: "<< topic00 >>"
- topic-beta:
- aaf_username: { get_input: topic01_aaf_username }
- aaf_password: { get_input: topic01_aaf_password }
- type: "message_router"
- dmaap_info: "<< topic01 >>"
- feed-gamma:
- type: "data_router"
- dmaap_info: "<< feed00 >>"
- feed-kappa:
- type: "data_router"
- dmaap_info: "<< feed01 >>"
- services_calls: {}
- image: { get_input : laika_image }
- docker_config:
- healthcheck:
- type: "http"
- endpoint: "/health"
- streams_publishes: []
- streams_subscribes:
- - name: topic00
- location: { get_input: topic00_location }
- client_role: { get_input: topic00_client_role }
- type: message_router
- - name: topic01
- location: { get_input: topic01_location }
- client_role: { get_input: topic01_client_role }
- type: message_router
- - name: feed00
- location: { get_input: feed00_location }
- type: data_router
- username: king
- password: !!str 123456
- route: identity
- scheme: https
- # This feed should have username/password generated
- - name: feed01
- location: { get_input: feed01_location }
- type: data_router
- route: identity
- scheme: https
- relationships:
- - type: dcae.relationships.component_contained_in
- target: docker_host
- - type: dcae.relationships.subscribe_to_events
- target: topic00
- - type: dcae.relationships.subscribe_to_events
- target: topic01
- - type: dcae.relationships.subscribe_to_files
- target: feed00
- - type: dcae.relationships.subscribe_to_files
- target: feed01
- interfaces:
- cloudify.interfaces.lifecycle:
- stop:
- inputs:
- cleanup_image:
- False
-
- docker_host:
- type: dcae.nodes.SelectedDockerHost
- properties:
- location_id:
- 'rework-central'
- docker_host_override:
- 'component_dockerhost'
diff --git a/docker/examples/blueprint-laika-policy.yaml b/docker/examples/blueprint-laika-policy.yaml
deleted file mode 100644
index f6b6925..0000000
--- a/docker/examples/blueprint-laika-policy.yaml
+++ /dev/null
@@ -1,138 +0,0 @@
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-tosca_definitions_version: cloudify_dsl_1_3
-
-description: >
- This Blueprint installs a chain of two laika instances on a Docker cluster
-
-imports:
- - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/docker/3.0.0/node-type.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/relationship/1.0.0/node-type.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/dcaepolicy/1.0.0/node-type.yaml
-
-inputs:
- laika_image:
- type: string
-
- host_capacity_policy_id:
- type: string
- default: DCAE_alex.Config_host_capacity_policy_id_value
-
- host_location_policy_id:
- type: string
- default: DCAE_alex.Config_host_location_policy_id_value
-
- db_server_policy_id:
- type: string
- default: DCAE_alex.Config_db_server_policy_id_value
-
-node_templates:
-
- host_capacity_policy:
- type: dcae.nodes.policy
- properties:
- policy_id: { get_input: host_capacity_policy_id }
- # Use this property to toggle whether policy is required meaning errors are considered
- # critical otherwise errors are silenced.
- policy_required: true
-
- host_location_policy:
- type: dcae.nodes.policy
- properties:
- policy_id: { get_input: host_location_policy_id }
- policy_required: true
-
- db_server_policy:
- type: dcae.nodes.policy
- properties:
- policy_id: { get_input: db_server_policy_id }
-
- laika-zero:
- type: dcae.nodes.DockerContainerForComponents
- properties:
- service_component_type:
- 'laika'
- location_id:
- 'rework-central'
- service_id:
- 'foo-service'
- application_config:
- some-param: "Lorem ipsum dolor sit amet"
- downstream-laika: "{{ laika }}"
- image: { get_input : laika_image }
- docker_config:
- healthcheck:
- type: "http"
- endpoint: "/health"
- policy:
- trigger_type: "docker"
- script_path: "/bin/echo"
- relationships:
- # Link to downstream laika
- - type: dcae.relationships.component_connected_to
- target: laika-one
- - type: dcae.relationships.component_contained_in
- target: docker_host
- - type: cloudify.relationships.depends_on
- target: host_capacity_policy
- - type: cloudify.relationships.depends_on
- target: host_location_policy
- interfaces:
- cloudify.interfaces.lifecycle:
- start:
- inputs:
- ports:
- - "8080:5432"
- envs:
- SOME-ENV: "BAM"
- max_wait:
- 120
- stop:
- inputs:
- cleanup_image:
- False
-
- laika-one:
- type: dcae.nodes.DockerContainerForComponents
- properties:
- service_component_type:
- 'laika'
- application_config:
- some-param: "Lorem ipsum dolor sit amet"
- image: { get_input : laika_image }
- # Trying without health check
- relationships:
- - type: dcae.relationships.component_contained_in
- target: docker_host
- - type: cloudify.relationships.depends_on
- target: db_server_policy
- interfaces:
- cloudify.interfaces.lifecycle:
- stop:
- inputs:
- cleanup_image:
- False
-
- docker_host:
- type: dcae.nodes.SelectedDockerHost
- properties:
- location_id:
- 'rework-central'
- docker_host_override:
- 'component_dockerhost'
diff --git a/docker/examples/blueprint-laika.yaml b/docker/examples/blueprint-laika.yaml
deleted file mode 100644
index 8ef6f0c..0000000
--- a/docker/examples/blueprint-laika.yaml
+++ /dev/null
@@ -1,97 +0,0 @@
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-tosca_definitions_version: cloudify_dsl_1_3
-
-description: >
- This Blueprint installs a chain of two laika instances on a Docker cluster
-
-imports:
- - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/docker/2.3.0/node-type.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/relationship/1.0.0/node-type.yaml
-
-inputs:
- laika_image:
- type: string
-
-node_templates:
-
- laika-zero:
- type: dcae.nodes.DockerContainerForComponents
- properties:
- service_component_type:
- 'laika'
- location_id:
- 'rework-central'
- service_id:
- 'foo-service'
- application_config:
- some-param: "Lorem ipsum dolor sit amet"
- downstream-laika: "{{ laika }}"
- image: { get_input : laika_image }
- docker_config:
- healthcheck:
- type: "http"
- endpoint: "/health"
- relationships:
- # Link to downstream laika
- - type: dcae.relationships.component_connected_to
- target: laika-one
- - type: dcae.relationships.component_contained_in
- target: docker_host
- interfaces:
- cloudify.interfaces.lifecycle:
- start:
- inputs:
- ports:
- - "8080:5432"
- envs:
- SOME-ENV: "BAM"
- max_wait:
- 120
- stop:
- inputs:
- cleanup_image:
- False
-
- laika-one:
- type: dcae.nodes.DockerContainerForComponents
- properties:
- service_component_type:
- 'laika'
- application_config:
- some-param: "Lorem ipsum dolor sit amet"
- image: { get_input : laika_image }
- # Trying without health check
- relationships:
- - type: dcae.relationships.component_contained_in
- target: docker_host
- interfaces:
- cloudify.interfaces.lifecycle:
- stop:
- inputs:
- cleanup_image:
- False
-
- docker_host:
- type: dcae.nodes.SelectedDockerHost
- properties:
- location_id:
- 'rework-central'
- docker_host_override:
- 'component_dockerhost'
diff --git a/docker/examples/blueprint-registrator.yaml b/docker/examples/blueprint-registrator.yaml
deleted file mode 100644
index fbfd7d9..0000000
--- a/docker/examples/blueprint-registrator.yaml
+++ /dev/null
@@ -1,64 +0,0 @@
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-tosca_definitions_version: cloudify_dsl_1_3
-
-description: >
- This Blueprint installs registrator on a Docker host
-
-imports:
- - http://www.getcloudify.org/spec/cloudify/3.4/types.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/docker/2.3.0/node-type.yaml
- - {{ ONAPTEMPLATE_RAWREPOURL_org_onap_dcaegen2 }}/type_files/relationship/1.0.0/node-type.yaml
-
-inputs:
- registrator-image:
- type: string
- external_ip:
- type: string
-
-node_templates:
-
- registrator:
- type: dcae.nodes.DockerContainer
- properties:
- name:
- 'test-registrator'
- image: { get_input : registrator-image }
- relationships:
- - type: dcae.relationships.component_contained_in
- target: docker_host
- interfaces:
- cloudify.interfaces.lifecycle:
- start:
- inputs:
- envs:
- EXTERNAL_IP: { get_input : external_ip }
- volumes:
- - host:
- path: '/var/run/docker.sock'
- container:
- bind: '/tmp/docker.sock'
- mode: 'ro'
-
- docker_host:
- type: dcae.nodes.SelectedDockerHost
- properties:
- location_id:
- 'rework-central'
- docker_host_override:
- 'platform_dockerhost'
diff --git a/docker/pom.xml b/docker/pom.xml
deleted file mode 100644
index 6811392..0000000
--- a/docker/pom.xml
+++ /dev/null
@@ -1,165 +0,0 @@
-<?xml version="1.0"?>
-<!--
-================================================================================
-Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
-================================================================================
-Licensed under the Apache License, Version 2.0 (the "License");
-you may not use this file except in compliance with the License.
-You may obtain a copy of the License at
-
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software
-distributed under the License is distributed on an "AS IS" BASIS,
-WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-See the License for the specific language governing permissions and
-limitations under the License.
-============LICENSE_END=========================================================
-
-ECOMP is a trademark and service mark of AT&T Intellectual Property.
--->
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
- <modelVersion>4.0.0</modelVersion>
- <parent>
- <groupId>org.onap.dcaegen2.platform</groupId>
- <artifactId>plugins</artifactId>
- <version>1.2.0-SNAPSHOT</version>
- </parent>
- <groupId>org.onap.dcaegen2.platform.plugins</groupId>
- <artifactId>docker</artifactId>
- <name>docker-plugin</name>
- <version>3.3.0-SNAPSHOT</version>
- <url>http://maven.apache.org</url>
- <properties>
- <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
- <sonar.sources>.</sonar.sources>
- <sonar.junit.reportsPath>xunit-results.xml</sonar.junit.reportsPath>
- <sonar.python.coverage.reportPath>coverage.xml</sonar.python.coverage.reportPath>
- <sonar.language>py</sonar.language>
- <sonar.pluginName>Python</sonar.pluginName>
- <sonar.inclusions>**/*.py</sonar.inclusions>
- <sonar.exclusions>tests/*,setup.py</sonar.exclusions>
- </properties>
- <build>
- <finalName>${project.artifactId}-${project.version}</finalName>
- <plugins>
- <!-- plugin>
- <artifactId>maven-assembly-plugin</artifactId>
- <version>2.4.1</version>
- <configuration>
- <descriptors>
- <descriptor>assembly/dep.xml</descriptor>
- </descriptors>
- </configuration>
- <executions>
- <execution>
- <id>make-assembly</id>
- <phase>package</phase>
- <goals>
- <goal>single</goal>
- </goals>
- </execution>
- </executions>
- </plugin -->
- <!-- now we configure custom action (calling a script) at various lifecycle phases -->
- <plugin>
- <groupId>org.codehaus.mojo</groupId>
- <artifactId>exec-maven-plugin</artifactId>
- <version>1.2.1</version>
- <executions>
- <execution>
- <id>clean phase script</id>
- <phase>clean</phase>
- <goals>
- <goal>exec</goal>
- </goals>
- <configuration>
- <arguments>
- <argument>${project.artifactId}</argument>
- <argument>clean</argument>
- </arguments>
- </configuration>
- </execution>
- <execution>
- <id>generate-sources script</id>
- <phase>generate-sources</phase>
- <goals>
- <goal>exec</goal>
- </goals>
- <configuration>
- <arguments>
- <argument>${project.artifactId}</argument>
- <argument>generate-sources</argument>
- </arguments>
- </configuration>
- </execution>
- <execution>
- <id>compile script</id>
- <phase>compile</phase>
- <goals>
- <goal>exec</goal>
- </goals>
- <configuration>
- <arguments>
- <argument>${project.artifactId}</argument>
- <argument>compile</argument>
- </arguments>
- </configuration>
- </execution>
- <execution>
- <id>package script</id>
- <phase>package</phase>
- <goals>
- <goal>exec</goal>
- </goals>
- <configuration>
- <arguments>
- <argument>${project.artifactId}</argument>
- <argument>package</argument>
- </arguments>
- </configuration>
- </execution>
- <execution>
- <id>test script</id>
- <phase>test</phase>
- <goals>
- <goal>exec</goal>
- </goals>
- <configuration>
- <arguments>
- <argument>${project.artifactId}</argument>
- <argument>test</argument>
- </arguments>
- </configuration>
- </execution>
- <execution>
- <id>install script</id>
- <phase>install</phase>
- <goals>
- <goal>exec</goal>
- </goals>
- <configuration>
- <arguments>
- <argument>${project.artifactId}</argument>
- <argument>install</argument>
- </arguments>
- </configuration>
- </execution>
- <execution>
- <id>deploy script</id>
- <phase>deploy</phase>
- <goals>
- <goal>exec</goal>
- </goals>
- <configuration>
- <arguments>
- <argument>${project.artifactId}</argument>
- <argument>deploy</argument>
- </arguments>
- </configuration>
- </execution>
- </executions>
- </plugin>
- </plugins>
- </build>
-</project>
diff --git a/docker/requirements.txt b/docker/requirements.txt
deleted file mode 100644
index 36602c8..0000000
--- a/docker/requirements.txt
+++ /dev/null
@@ -1,5 +0,0 @@
-python-consul>=0.6.0
-onap-dcae-dockering>=1.4.0
-onap-dcae-dcaepolicy-lib>=2.4.1
-cloudify-common>=5.0.0; python_version<"3"
-cloudify-common @ git+https://github.com/cloudify-cosmo/cloudify-common@cy-1374-python3#egg=cloudify-common==5.0.0; python_version>="3"
diff --git a/docker/setup.py b/docker/setup.py
deleted file mode 100644
index b6a914b..0000000
--- a/docker/setup.py
+++ /dev/null
@@ -1,38 +0,0 @@
-# ============LICENSE_START=======================================================
-# org.onap.dcae
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# Copyright (c) 2019 Pantheon.tech. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-import os
-from setuptools import setup
-
-setup(
- name='dockerplugin',
- description='Cloudify plugin for applications run in Docker containers',
- version="3.3.0",
- author='Michael Hwang, Tommy Carpenter',
- packages=['dockerplugin'],
- zip_safe=False,
- install_requires=[
- 'python-consul>=0.6.0',
- 'onap-dcae-dockering>=1.4.0',
- 'onap-dcae-dcaepolicy-lib>=2.4.1',
- 'cloudify-common>=5.0.0',
- ]
-)
diff --git a/docker/tests/test_decorators.py b/docker/tests/test_decorators.py
deleted file mode 100644
index 403e39f..0000000
--- a/docker/tests/test_decorators.py
+++ /dev/null
@@ -1,36 +0,0 @@
-# ============LICENSE_START=======================================================
-# org.onap.dcae
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-from dockerplugin import decorators as dec
-
-
-def test_wrapper_merge_inputs():
- properties = { "app_config": {"nested": { "a": 123, "b": 456 }, "foo": "duh"},
- "image": "some-docker-image" }
- kwargs = { "app_config": {"nested": {"a": 789, "c": "zyx"}} }
-
- def task_func(**inputs):
- return inputs
-
- expected = { "app_config": {"nested": { "a": 789, "b": 456, "c": "zyx" },
- "foo": "duh"}, "image": "some-docker-image" }
-
- assert expected == dec._wrapper_merge_inputs(task_func, properties, **kwargs)
-
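
The deleted test pins down the merge semantics of `_wrapper_merge_inputs`: workflow inputs override node properties key-by-key, recursing into nested dicts so sibling keys survive. A minimal sketch consistent with those assertions (`merge_inputs` is an illustrative name, not the plugin's actual helper):

    def merge_inputs(properties, **kwargs):
        # Deep merge: kwargs win per-key, untouched keys survive,
        # and nested dicts are merged rather than replaced wholesale.
        result = dict(properties)
        for key, value in kwargs.items():
            if isinstance(value, dict) and isinstance(result.get(key), dict):
                result[key] = merge_inputs(result[key], **value)
            else:
                result[key] = value
        return result

Applied to the test's inputs, this yields exactly the expected dict: "a" is overridden to 789, "c" is added, and "b", "foo", and "image" pass through.
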
diff --git a/docker/tests/test_discovery.py b/docker/tests/test_discovery.py
deleted file mode 100644
index f3aed66..0000000
--- a/docker/tests/test_discovery.py
+++ /dev/null
@@ -1,69 +0,0 @@
-# ============LICENSE_START=======================================================
-# org.onap.dcae
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-import pytest
-from functools import partial
-import requests
-from dockerplugin import discovery as dis
-
-
-def test_wrap_consul_call():
- def foo(a, b, c="default"):
- return " ".join([a, b, c])
-
- wrapped_foo = partial(dis._wrap_consul_call, foo)
- assert wrapped_foo("hello", "world") == "hello world default"
- assert wrapped_foo("hello", "world", c="new masters") == "hello world new masters"
-
- def foo_connection_error(a, b, c):
- raise requests.exceptions.ConnectionError("simulate failed connection")
-
- wrapped_foo = partial(dis._wrap_consul_call, foo_connection_error)
- with pytest.raises(dis.DiscoveryConnectionError):
- wrapped_foo("a", "b", "c")
-
-
-def test_generate_service_component_name():
- component_type = "some-component-type"
- name = dis.generate_service_component_name(component_type)
- assert name.split("_")[1] == component_type
-
-
-def test_find_matching_services():
- services = { "component_dockerhost_1": ["foo", "bar"],
- "platform_dockerhost": [], "component_dockerhost_2": ["baz"] }
- assert sorted(["component_dockerhost_1", "component_dockerhost_2"]) \
- == sorted(dis._find_matching_services(services, "component_dockerhost", []))
-
- assert ["component_dockerhost_1"] == dis._find_matching_services(services, \
- "component_dockerhost", ["foo", "bar"])
-
- assert ["component_dockerhost_1"] == dis._find_matching_services(services, \
- "component_dockerhost", ["foo"])
-
- assert [] == dis._find_matching_services(services, "unknown", ["foo"])
-
-
-def test_is_healthy_pure():
- def fake_is_healthy(name):
- return 0, [{ "Checks": [{"Status": "passing"}] }]
-
- assert dis._is_healthy_pure(fake_is_healthy, "some-component") is True
-
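
Taken together, the deleted discovery tests describe a thin wrapper that forwards any Consul call unchanged and translates `requests` connection failures into the plugin's `DiscoveryConnectionError`, which upstream code maps to a Cloudify RecoverableError. A sketch of that wrapper under those assumptions (the body is illustrative, not the deleted implementation):

    import requests

    class DiscoveryConnectionError(RuntimeError):
        """Stand-in for the plugin's exception type."""

    def _wrap_consul_call(consul_func, *args, **kwargs):
        # Forward the call unchanged; only connection-level failures
        # are rewrapped, which is what test_wrap_consul_call asserts.
        try:
            return consul_func(*args, **kwargs)
        except requests.exceptions.ConnectionError as e:
            raise DiscoveryConnectionError("Consul call failed: {0}".format(e))
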
diff --git a/docker/tests/test_tasks.py b/docker/tests/test_tasks.py
deleted file mode 100644
index c58d02c..0000000
--- a/docker/tests/test_tasks.py
+++ /dev/null
@@ -1,277 +0,0 @@
-# ============LICENSE_START=======================================================
-# org.onap.dcae
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-import copy
-import pytest
-from cloudify.exceptions import NonRecoverableError, RecoverableError
-import dockerplugin
-from dockerplugin import tasks
-from dockerplugin.exceptions import DockerPluginDeploymentError
-
-
-def test_generate_component_name():
- kwargs = { "service_component_type": "doodle",
- "service_component_name_override": None }
-
- assert "doodle" in tasks._generate_component_name(**kwargs)["name"]
-
- kwargs["service_component_name_override"] = "yankee"
-
- assert "yankee" == tasks._generate_component_name(**kwargs)["name"]
-
-
-def test_parse_streams(monkeypatch):
- # Good case for streams_publishes
- test_input = { "streams_publishes": [{"name": "topic00", "type": "message_router"},
- {"name": "feed00", "type": "data_router"}],
- "streams_subscribes": {} }
-
- expected = {'feed00': {'type': 'data_router', 'name': 'feed00'},
- 'streams_publishes': [{'type': 'message_router', 'name': 'topic00'},
- {'type': 'data_router', 'name': 'feed00'}],
- 'streams_subscribes': {},
- 'topic00': {'type': 'message_router', 'name': 'topic00'}
- }
-
- assert expected == tasks._parse_streams(**test_input)
-
- # Good case for streams_subscribes (password provided)
- test_input = { "streams_publishes": {},
- "streams_subscribes": [{"name": "topic01", "type": "message_router"},
- {"name": "feed01", "type": "data_router", "username": "hero",
- "password": "123456"}] }
-
- expected = {'feed01': {'type': 'data_router', 'name': 'feed01',
- 'username': 'hero', 'password': '123456'},
- 'streams_publishes': {},
- 'streams_subscribes': [{'type': 'message_router', 'name': 'topic01'},
- {'type': 'data_router', 'name': 'feed01', 'username': 'hero',
- 'password': '123456'}],
- 'topic01': {'type': 'message_router', 'name': 'topic01'}}
-
- assert expected == tasks._parse_streams(**test_input)
-
- # Good case for streams_subscribes (password generated)
- test_input = { "streams_publishes": {},
- "streams_subscribes": [{"name": "topic01", "type": "message_router"},
- {"name": "feed01", "type": "data_router", "username": None,
- "password": None}] }
-
- def not_so_random(n):
- return "nosurprise"
-
- monkeypatch.setattr(dockerplugin.utils, "random_string", not_so_random)
-
- expected = {'feed01': {'type': 'data_router', 'name': 'feed01',
- 'username': 'nosurprise', 'password': 'nosurprise'},
- 'streams_publishes': {},
- 'streams_subscribes': [{'type': 'message_router', 'name': 'topic01'},
- {'type': 'data_router', 'name': 'feed01', 'username': None,
- 'password': None}],
- 'topic01': {'type': 'message_router', 'name': 'topic01'}}
-
- assert expected == tasks._parse_streams(**test_input)
-
-
-def test_setup_for_discovery(monkeypatch):
- test_input = { "name": "some-name",
- "application_config": { "one": "a", "two": "b" } }
-
- def fake_push_config(conn, name, application_config):
- return
-
- monkeypatch.setattr(dockerplugin.discovery, "push_service_component_config",
- fake_push_config)
-
- assert test_input == tasks._setup_for_discovery(**test_input)
-
- def fake_push_config_connection_error(conn, name, application_config):
- raise dockerplugin.discovery.DiscoveryConnectionError("Boom")
-
- monkeypatch.setattr(dockerplugin.discovery, "push_service_component_config",
- fake_push_config_connection_error)
-
- with pytest.raises(RecoverableError):
- tasks._setup_for_discovery(**test_input)
-
-
-def test_setup_for_discovery_streams(monkeypatch):
- test_input = {'feed01': {'type': 'data_router', 'name': 'feed01',
- 'username': 'hero', 'password': '123456', 'location': 'Bedminster'},
- 'streams_publishes': {},
- 'streams_subscribes': [{'type': 'message_router', 'name': 'topic01'},
- {'type': 'data_router', 'name': 'feed01', 'username': 'hero',
- 'password': '123456', 'location': 'Bedminster'}],
- 'topic01': {'type': 'message_router', 'name': 'topic01'}}
- test_input["name"] = "some-foo-service-component"
-
- # Good case
- def fake_add_to_entry(conn, key, add_name, add_value):
- """
- This fake method checks all the pieces that are used to store
- details in Consul
- """
- if key != test_input["name"] + ":dmaap":
- return None
- if add_name != "feed01":
- return None
- if add_value != {"location": "Bedminster", "delivery_url": None,
- "username": "hero", "password": "123456", "subscriber_id": None}:
- return None
-
- return "SUCCESS!"
-
- monkeypatch.setattr(dockerplugin.discovery, "add_to_entry",
- fake_add_to_entry)
-
- assert tasks._setup_for_discovery_streams(**test_input) == test_input
-
- # Good case - no data router subscribers
- test_input = {"streams_publishes": [{"name": "topic00", "type": "message_router"}],
- 'streams_subscribes': [{'type': 'message_router', 'name': 'topic01'}]}
- test_input["name"] = "some-foo-service-component"
-
- assert tasks._setup_for_discovery_streams(**test_input) == test_input
-
- # Bad case - something went wrong in the Consul call
- test_input = {'feed01': {'type': 'data_router', 'name': 'feed01',
- 'username': 'hero', 'password': '123456', 'location': 'Bedminster'},
- 'streams_publishes': {},
- 'streams_subscribes': [{'type': 'message_router', 'name': 'topic01'},
- {'type': 'data_router', 'name': 'feed01', 'username': 'hero',
- 'password': '123456', 'location': 'Bedminster'}],
- 'topic01': {'type': 'message_router', 'name': 'topic01'}}
- test_input["name"] = "some-foo-service-component"
-
- def barf(conn, key, add_name, add_value):
- raise RuntimeError("Barf")
-
- monkeypatch.setattr(dockerplugin.discovery, "add_to_entry",
- barf)
-
- with pytest.raises(NonRecoverableError):
- tasks._setup_for_discovery_streams(**test_input)
-
-
-def test_lookup_service(monkeypatch):
- def fake_lookup(conn, scn):
- return [{"ServiceAddress": "192.168.1.1", "ServicePort": "80"}]
-
- monkeypatch.setattr(dockerplugin.discovery, "lookup_service",
- fake_lookup)
-
- assert "192.168.1.1" == tasks._lookup_service("some-component")
- assert "192.168.1.1:80" == tasks._lookup_service("some-component",
- with_port=True)
-
-
-def test_verify_container(monkeypatch):
- def fake_is_healthy_success(ch, scn):
- return True
-
- monkeypatch.setattr(dockerplugin.discovery, "is_healthy",
- fake_is_healthy_success)
-
- assert tasks._verify_container("some-name", 3)
-
- def fake_is_healthy_never_good(ch, scn):
- return False
-
- monkeypatch.setattr(dockerplugin.discovery, "is_healthy",
- fake_is_healthy_never_good)
-
- with pytest.raises(DockerPluginDeploymentError):
- tasks._verify_container("some-name", 2)
-
-
-def test_update_delivery_url(monkeypatch):
- test_input = {'feed01': {'type': 'data_router', 'name': 'feed01',
- 'username': 'hero', 'password': '123456', 'location': 'Bedminster',
- 'route': 'some-path'},
- 'streams_publishes': {},
- 'streams_subscribes': [{'type': 'message_router', 'name': 'topic01'},
- {'type': 'data_router', 'name': 'feed01', 'username': 'hero',
- 'password': '123456', 'location': 'Bedminster',
- 'route': 'some-path'}],
- 'topic01': {'type': 'message_router', 'name': 'topic01'}}
- test_input["service_component_name"] = "some-foo-service-component"
-
- def fake_lookup_service(name, with_port=False):
- if with_port:
- return "10.100.1.100:8080"
- else:
- return
-
- monkeypatch.setattr(dockerplugin.tasks, "_lookup_service",
- fake_lookup_service)
-
- expected = copy.deepcopy(test_input)
- expected["feed01"]["delivery_url"] = "http://10.100.1.100:8080/some-path"
-
- assert tasks._update_delivery_url(**test_input) == expected
-
-
-def test_enhance_docker_params():
- # Good - Test empty docker config
-
- test_kwargs = { "docker_config": {}, "service_id": None }
- actual = tasks._enhance_docker_params(**test_kwargs)
-
- assert actual == {'envs': {"SERVICE_TAGS": ""}, 'docker_config': {}, "service_id": None }
-
- # Good - Test just docker config ports and volumes
-
- test_kwargs = { "docker_config": { "ports": ["1:1", "2:2"],
- "volumes": [{"container": "somewhere", "host": "somewhere else"}] },
- "service_id": None }
- actual = tasks._enhance_docker_params(**test_kwargs)
-
- assert actual == {'envs': {"SERVICE_TAGS": ""}, 'docker_config': {'ports': ['1:1', '2:2'],
- 'volumes': [{'host': 'somewhere else', 'container': 'somewhere'}]},
- 'ports': ['1:1', '2:2'], 'volumes': [{'host': 'somewhere else',
- 'container': 'somewhere'}], "service_id": None}
-
- # Good - Test just docker config ports and volumes with overrides
-
- test_kwargs = { "docker_config": { "ports": ["1:1", "2:2"],
- "volumes": [{"container": "somewhere", "host": "somewhere else"}] },
- "ports": ["3:3", "4:4"], "volumes": [{"container": "nowhere", "host":
- "nowhere else"}],
- "service_id": None }
- actual = tasks._enhance_docker_params(**test_kwargs)
-
- assert actual == {'envs': {"SERVICE_TAGS": ""}, 'docker_config': {'ports': ['1:1', '2:2'],
- 'volumes': [{'host': 'somewhere else', 'container': 'somewhere'}]},
- 'ports': ['1:1', '2:2', '3:3', '4:4'], 'volumes': [{'host': 'somewhere else',
- 'container': 'somewhere'}, {'host': 'nowhere else', 'container':
- 'nowhere'}], "service_id": None}
-
- # Good
-
- test_kwargs = { "docker_config": {}, "service_id": "zed",
- "deployment_id": "abc" }
- actual = tasks._enhance_docker_params(**test_kwargs)
-
- assert actual["envs"] == {"SERVICE_TAGS": "abc,zed"}
-
-
-def test_notify_container():
- test_input = { "docker_config": { "trigger_type": "unknown" } }
- assert test_input == tasks._notify_container(**test_input)
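
test_verify_container above implies a bounded polling loop: return as soon as the health check passes, and raise DockerPluginDeploymentError once the attempt budget is spent. A sketch of that loop; the injected health-check argument and the sleep interval are assumptions for illustration:

    import time

    class DockerPluginDeploymentError(Exception):
        """Stand-in for dockerplugin.exceptions.DockerPluginDeploymentError."""

    def _verify_container(is_healthy, service_component_name, max_wait, interval=1):
        # Poll up to max_wait times, matching the test's use of 3
        # (check passes) and 2 (check never passes, so we raise).
        for _ in range(max_wait):
            if is_healthy(service_component_name):
                return True
            time.sleep(interval)
        raise DockerPluginDeploymentError(
            "never healthy: {0}".format(service_component_name))
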
diff --git a/docker/tests/test_utils.py b/docker/tests/test_utils.py
deleted file mode 100644
index 4578dae..0000000
--- a/docker/tests/test_utils.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# ============LICENSE_START=======================================================
-# org.onap.dcae
-# ================================================================================
-# Copyright (c) 2017-2018 AT&T Intellectual Property. All rights reserved.
-# ================================================================================
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# ============LICENSE_END=========================================================
-#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property.
-
-from dockerplugin import utils
-
-
-def test_random_string():
- target_length = 10
- assert len(utils.random_string(target_length)) == target_length
-
-
-def test_update_dict():
- d = { "a": 1, "b": 2 }
- u = { "a": 2, "b": 3 }
- assert utils.update_dict(d, u) == u
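
The update_dict assertion above only holds if the helper applies u on top of d and returns the result, so every key of d that u covers is overwritten. A one-liner consistent with that behavior (the body is an assumption, not the deleted implementation):

    def update_dict(d, u):
        # Overwrite d's entries with u's and return the merged dict;
        # when u covers every key of d, the result equals u.
        d.update(u)
        return d
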
diff --git a/docker/tox.ini b/docker/tox.ini
deleted file mode 100644
index 96c0e1f..0000000
--- a/docker/tox.ini
+++ /dev/null
@@ -1,29 +0,0 @@
-[tox]
-envlist = py27,py36,cov
-
-[testenv]
-# coverage can only find modules if pythonpath is set
-setenv=
- PYTHONPATH={toxinidir}
- COVERAGE_FILE=.coverage.{envname}
-deps=
- -rrequirements.txt
- pytest
- coverage
- pytest-cov
-commands=
- coverage erase
- pytest --junitxml xunit-results.{envname}.xml --cov dockerplugin
-
-[testenv:cov]
-skip_install = true
-deps=
- coverage
-setenv=
- COVERAGE_FILE=.coverage
-commands=
- coverage combine
- coverage xml
-
-[pytest]
-junit_family = xunit2