 k8s/ChangeLog.md            |   6
 k8s/README.md               |  69
 k8s/k8s-node-type.yaml      | 154
 k8s/k8sclient/__init__.py   |   3
 k8s/k8sclient/k8sclient.py  | 104
 k8s/k8splugin/__init__.py   |   2
 k8s/k8splugin/decorators.py |  14
 k8s/k8splugin/tasks.py      |  93
 k8s/pom.xml                 |   2
 k8s/requirements.txt        |   2
 k8s/setup.py                |   2
 k8s/tests/test_tasks.py     |   6
 12 files changed, 285 insertions(+), 172 deletions(-)
diff --git a/k8s/ChangeLog.md b/k8s/ChangeLog.md
index 28df171..bf3353e 100644
--- a/k8s/ChangeLog.md
+++ b/k8s/ChangeLog.md
@@ -2,13 +2,15 @@
All notable changes to this project will be documented in this file.
-The format is based on [Keep a Changelog](http://keepachangelog.com/)
+The format is based on [Keep a Changelog](http://keepachangelog.com/)
and this project adheres to [Semantic Versioning](http://semver.org/).
+## [1.3.0]
+* Enhancement: Add support for changing the image running in the application container. ("Rolling upgrade")
## [1.2.0]
-* Enhancement: Use the "healthcheck" parameters from node_properties to set up a
+* Enhancement: Use the "healthcheck" parameters from node_properties to set up a
Kubernetes readiness probe for the container.
## [1.1.0]
diff --git a/k8s/README.md b/k8s/README.md
index 79ce303..5b2d0da 100644
--- a/k8s/README.md
+++ b/k8s/README.md
@@ -1,10 +1,10 @@
# ONAP DCAE Kubernetes Plugin for Cloudify
-This directory contains a Cloudify plugin used to orchestrate the deployment of containerized DCAE platform and service components into a Kubernetes ("k8s")
+This directory contains a Cloudify plugin used to orchestrate the deployment of containerized DCAE platform and service components into a Kubernetes ("k8s")
environment. This work is based on the [ONAP DCAE Docker plugin](../docker).
This plugin is *not* a generic Kubernetes plugin that exposes the full set of Kubernetes features.
-In fact, the plugin largely hides the fact that we're using Kubernetes from both component developers and blueprint authors,
+In practice, the plugin largely hides the fact that we're using Kubernetes from both component developers and blueprint authors.
The Cloudify node type definitions are very similar to the Cloudify type definitions used in the ONAP DCAE Docker plugin.
For the node types `ContainerizedPlatformComponent`, `ContainerizedServiceComponent`, and `ContainerizedServiceComponentUsingDmaap`, this plugin
@@ -17,14 +17,14 @@ creates the following Kubernetes entities:
running the `filebeat` logging sidecar that ships logging information to the ONAP ELK stack. The `Deployment` will include
some additional volumes needed by filebeat.
- If the blueprint indicates that the component exposes any ports, the plugin will create a Kubernetes `Service` that allocates an address
- in the Kubernetes network address space that will route traffic to a container that's running the component. This `Service` provides a
+ in the Kubernetes network address space that will route traffic to a container that's running the component. This `Service` provides a
fixed "virtual IP" for the component.
- If the blueprint indicates that the component exposes a port externally, the plugin will create an additional Kubernetes `Service` that opens up a
port on the external interface of each node in the Kubernetes cluster.
Through the `replicas` property, a blueprint can request deployment of multiple instances of the component. The plugin will still create a single `Deployment` (and,
if needed, one or two `Services`), but the `Deployment` will cause multiple instances of the container to run. (Specifically, the `Deployment` will create
-a Kubernetes `Pod` for each requested instance.) Other entities connect to a component via the IP address of a `Service`, and Kubernetes takes care of routing
+a Kubernetes `Pod` for each requested instance.) Other entities connect to a component via the IP address of a `Service`, and Kubernetes takes care of routing
traffic to an appropriate container instance.
## Pre-requisites
@@ -52,19 +52,18 @@ The configuration is provided as JSON object with the following properties:
- image: Docker image to use for filebeat
#### Kubernetes access information
-The plugin accesses a Kubernetes cluster. The information and credentials for accessing a cluster are stored in a "kubeconfig"
+The plugin accesses a Kubernetes cluster. The information and credentials for accessing a cluster are stored in a "kubeconfig"
file. The plugin expects to find such a file at `/etc/cloudify/.kube/config`.
#### Additional Kubernetes configuration elements
The plugin expects certain elements to be provided in the DCAE/ONAP environment, namely:
- Kubernetes secret(s) containing the credentials needed to pull images from Docker registries, if needed
- A Kubernetes ConfigMap containing the filebeat.yml file used by the filebeat logging container
- - An ExternalName service
-
+ - An ExternalName service
## Input parameters
-### start
+### `start` operation parameters
These input parameters are for the `start` operation of the `cloudify.interfaces.lifecycle` interface and are inputs into the variant task operations `create_and_start_container*`.
@@ -120,7 +119,7 @@ of every Kubernetes host in the cluster. (This uses the Kubernetes `NodePort` s
#### `max_wait`
-Integer - seconds to wait for Docker to come up healthy before throwing a `NonRecoverableError`.
+Integer - seconds to wait for the component to become ready before throwing a `NonRecoverableError`. For example:
```yaml
max_wait:
@@ -129,7 +128,6 @@ max_wait:
Default is 300 seconds.
-
## Using DMaaP
The node type `dcae.nodes.ContainerizedServiceComponentUsingDmaap` is intended to be used by components that use DMaaP and expects to be connected with the DMaaP node types found in the DMaaP plugin.
@@ -242,3 +240,54 @@ To form the application configuration:
```
This also applies to data router feeds.
+
+## Additional Operations Supported by the Plugin
+In addition to supporting the Cloudify `install` and `uninstall` workflows, the plugin provides two standalone operations that can be invoked using the Cloudify [`execute_operation` workflow](https://docs.cloudify.co/4.3.0/working_with/workflows/built-in-workflows/). The `dcae.nodes.ContainerizedApplication`, `dcae.nodes.ContainerizedPlatformComponent`, `dcae.nodes.ContainerizedServiceComponent`, and `dcae.nodes.ContainerizedServiceComponentUsingDmaap` node types support these operations.
+
+Currently, there's no convenient high-level interface to trigger these operations, but they could potentially be exposed through some kind of dashboard.
+
+### Scaling Operation (`scale`)
+The `scale` operation changes the number of replicas running for a node instance. The operation is implemented by modifying the replica count in the Kubernetes Deployment specification associated with the node instance and submitting the updated specification to the Kubernetes API. The operation works for both increasing and decreasing the number of replicas. The minimum number of replicas is 1.
+
+The `scale` operation takes two parameters:
+- `replicas`: Number of desired replicas. Integer, required.
+- `max_wait`: Number of seconds to wait for successful completion of the operation. Integer, optional, defaults to 300 seconds.
+
+One way to trigger a `scale` operation is by using the Cloudify command line. For example:
+```
+cfy executions start -d my_deployment -p scale_params.yaml execute_operation
+```
+where `my_deployment` is the name of an existing Cloudify deployment and
+`scale_params.yaml` is a file containing the operation parameters:
+```
+operation: scale
+operation_kwargs:
+ replicas: 3
+node_ids:
+ - "web_server"
+```
+Note that the `node_ids` list is required by the `execute_operation` workflow. The list contains all of the nodes that are being targeted by the workflow. If a blueprint contains more than one node, it's possible to scale all of them--or some subset--with a single workflow execution.
+
+### Image Update Operation (`update_image`)
+The `update_image` operation provides a way to change the Docker image running for a node instance, using the Kubernetes _rolling update_ strategy. (See this [tutorial](https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/) and this [discussion of the concept](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment) in the Kubernetes documentation.) The operation is implemented by modifying the image property in the Kubernetes Deployment specification associated with a node instance and submitting the updated specification to the Kubernetes API.
+
+The `update_image` operation takes two parameters:
+- `image`: Full name (including registry, if not the default Docker registry, and tag) of the new image to use for the component. String, required.
+- `max_wait`: Number of seconds to wait for successful completion of the operation. Integer, optional, defaults to 300 seconds.
+
+The `update_image` operation can be triggered using the Cloudify command line. For example:
+```
+cfy executions start -d my_deployment -p update_params.yaml execute_operation
+```
+where `my_deployment` is the name of an existing Cloudify deployment and
+`update_params.yaml` is a file containing the operation parameters:
+```
+operation: update_image
+operation_kwargs:
+ image: myrepository.example.com/server/web:1.15
+node_ids:
+ - "web_server"
+```
+Note that the `node_ids` list is required by the `execute_operation` workflow. The list contains all of the nodes that are being targeted by the workflow. For an `update_image` operation, the list typically has only one element.
+
+Note also that the `update_image` operation targets the container running the application code (i.e., the container running the image specified in the `image` node property). This plugin may deploy "sidecar" containers running supporting code--for example, the "filebeat" container that relays logs to the central log server. The `update_image` operation does not touch any "sidecar" containers. \ No newline at end of file
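A note on the examples above: the parameter files shown omit the optional `max_wait`. A hypothetical variant that also overrides the timeout (image name, tag, and node id are placeholders) might look like:
```yaml
operation: update_image
operation_kwargs:
  image: myrepository.example.com/server/web:1.16   # new image to roll out
  max_wait: 600                                     # allow up to 600 seconds for the rollout
node_ids:
  - "web_server"
```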
diff --git a/k8s/k8s-node-type.yaml b/k8s/k8s-node-type.yaml
index 00f8c8d..4810f3a 100644
--- a/k8s/k8s-node-type.yaml
+++ b/k8s/k8s-node-type.yaml
@@ -25,7 +25,7 @@ plugins:
k8s:
executor: 'central_deployment_agent'
package_name: k8splugin
- package_version: 1.2.0
+ package_version: 1.3.0
data_types:
@@ -80,7 +80,57 @@ data_types:
required: false
node_types:
- # The ContainerizedServiceComponent node type is to be used for DCAE service components that
+ dcae.nodes.ContainerizedComponent:
+ # Base type for all containerized components
+ # Captures common properties and interfaces
+ derived_from: cloudify.nodes.Root
+ properties:
+ image:
+ type: string
+ description: Full uri of the Docker image
+
+ application_config:
+ default: {}
+ description: >
+ Application configuration for this Docker component. The data structure is
+ expected to be a complex map (native YAML) and to be constructed and filled
+ by the creator of the blueprint.
+
+ docker_config:
+ default: {}
+ description: >
+ This is the auxiliary portion of the component spec, containing items such as
+ healthcheck definitions for the Docker component. Health checks are
+ optional.
+
+ log_info:
+ type: dcae.types.LoggingInfo
+ description: >
+ Information for setting up centralized logging via ELK.
+ required: false
+
+ replicas:
+ type: integer
+ description: >
+ The number of instances of the component that should be launched initially
+ default: 1
+
+ always_pull_image:
+ type: boolean
+ description: >
+ Set to true if the orchestrator should always pull a new copy of the image
+ before deploying. By default the orchestrator pulls only if the image is
+ not already present on the Docker host where the container is being launched.
+ default: false
+
+ interfaces:
+ dcae.interfaces.update:
+ scale:
+ implementation: k8s.k8splugin.scale
+ update_image:
+ implementation: k8s.k8splugin.update_image
+
+ # The ContainerizedServiceComponent node type is to be used for DCAE service components that
# are to be run in a Docker container. This node type goes beyond that of an ordinary Docker
# plugin in that it has DCAE platform-specific functionality:
#
@@ -88,11 +138,11 @@ node_types:
# * Managing of service component configuration information
#
# The plugin deploys the container into a Kubernetes cluster with a very specific choice
- # of Kubernetes elements that are deliberately not under the control of the blueprint author.
+ # of Kubernetes elements that are deliberately not under the control of the blueprint author.
# The idea is to deploy all service components in a consistent way, with the details abstracted
# away from the blueprint author.
dcae.nodes.ContainerizedServiceComponent:
- derived_from: cloudify.nodes.Root
+ derived_from: dcae.nodes.ContainerizedComponent
properties:
service_component_type:
type: string
@@ -121,44 +171,6 @@ node_types:
names for example the CDAP broker.
default: Null
- application_config:
- default: {}
- description: >
- Application configuration for this Docker component. The data structure is
- expected to be a complex map (native YAML) and to be constructed and filled
- by the creator of the blueprint.
-
- docker_config:
- default: {}
- description: >
- This is what is the auxilary portion of the component spec that contains things
- like healthcheck definitions for the Docker component. Health checks are
- optional.
-
- image:
- type: string
- description: Full uri of the Docker image
-
- log_info:
- type: dcae.types.LoggingInfo
- description: >
- Information for setting up centralized logging via ELK.
- required: false
-
- replicas:
- type: integer
- description: >
- The number of instances of the component that should be launched initially
- default: 1
-
- always_pull_image:
- type: boolean
- description: >
- Set to true if the orchestrator should always pull a new copy of the image
- before deploying. By default the orchestrator pulls only if the image is
- not already present on the Docker host where the container is being launched.
- default: false
-
interfaces:
cloudify.interfaces.lifecycle:
create:
@@ -177,12 +189,8 @@ node_types:
# This is to be invoked by the policy handler upon policy updates
policy_update:
implementation: k8s.k8splugin.policy_update
- dcae.interfaces.scale:
- scale:
- implementation: k8s.k8splugin.scale
-
- # This node type is intended for DCAE service components that use DMaaP and must use the
+ # This node type is intended for DCAE service components that use DMaaP and must use the
# DMaaP plugin.
dcae.nodes.ContainerizedServiceComponentUsingDmaap:
derived_from: dcae.nodes.ContainerizedServiceComponent
@@ -240,11 +248,10 @@ node_types:
# Create Docker container and start
implementation: k8s.k8splugin.create_and_start_container_for_components_with_streams
-
# ContainerizedPlatformComponent is intended for DCAE platform services. Unlike the components,
# platform services have well-known names and well-known ports.
dcae.nodes.ContainerizedPlatformComponent:
- derived_from: cloudify.nodes.Root
+ derived_from: dcae.nodes.ContainerizedComponent
properties:
name:
description: >
@@ -252,30 +259,12 @@ node_types:
dns_name:
required: false
description: >
- Name to be registered in the DNS for the service provided by the container.
+ Name to be registered in the DNS for the service provided by the container.
If not provided, the 'name' field is used.
This is a work-around for the Kubernetes restriction on having '_' in a DNS name.
Having this field allows a component to look up its configuration using a name that
includes a '_' while providing a legal Kubernetes DNS name.
- application_config:
- default: {}
- description: >
- Application configuration for this Docker component. The data strcture is
- expected to be a complex map (native YAML) and to be constructed and filled
- by the creator of the blueprint.
-
- docker_config:
- default: {}
- description: >
- This is what is the auxilary portion of the component spec that contains things
- like healthcheck definitions for the Docker component. Health checks are
- optional.
-
- image:
- type: string
- description: Full uri of the Docker image
-
host_port:
type: integer
description: >
@@ -294,26 +283,6 @@ node_types:
Information for registering with MSB
required: false
- log_info:
- type: dcae.types.LoggingInfo
- description: >
- Information for setting up centralized logging via ELK.
- required: false
-
- replicas:
- type: integer
- description: >
- The number of instances of the component that should be launched initially
- default: 1
-
- always_pull_image:
- type: boolean
- description: >
- Set to true if the orchestrator should always pull a new copy of the image
- before deploying. By default the orchestrator pulls only if the image is
- not already present on the Docker host where the container is being launched.
- default: false
-
interfaces:
cloudify.interfaces.lifecycle:
create:
@@ -328,9 +297,6 @@ node_types:
delete:
# Delete configuration from Consul
implementation: k8s.k8splugin.cleanup_discovery
- dcae.interfaces.scale:
- scale:
- implementation: k8s.k8splugin.tasks.scale
# ContainerizedApplication is intended to be more of an all-purpose Docker container node
# for non-componentized applications.
@@ -351,3 +317,9 @@ node_types:
stop:
# Stop and remove Docker container
implementation: k8s.k8splugin.stop_and_remove_container
+ dcae.interfaces.scale:
+ scale:
+ implementation: k8s.k8splugin.scale
+ dcae.interfaces.update:
+ update_image:
+ implementation: k8s.k8splugin.update_image
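For orientation, a minimal blueprint fragment using the refactored type hierarchy might look as follows. This is an illustrative sketch only (node name, image, and config values are placeholders), showing the properties now inherited from `dcae.nodes.ContainerizedComponent`:
```yaml
node_templates:
  web_server:
    type: dcae.nodes.ContainerizedServiceComponent
    properties:
      service_component_type: 'web-server'
      image: 'myrepository.example.com/server/web:1.15'
      replicas: 3                # launch three instances initially
      always_pull_image: true    # always re-pull the image before deploying
      application_config:
        some_key: some_value
```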
diff --git a/k8s/k8sclient/__init__.py b/k8s/k8sclient/__init__.py
index b986659..1ba4553 100644
--- a/k8s/k8sclient/__init__.py
+++ b/k8s/k8sclient/__init__.py
@@ -16,4 +16,5 @@
# limitations under the License.
# ============LICENSE_END=========================================================
#
-# ECOMP is a trademark and service mark of AT&T Intellectual Property. \ No newline at end of file
+# ECOMP is a trademark and service mark of AT&T Intellectual Property.
+from .k8sclient import deploy, undeploy, is_available, scale, upgrade, rollback \ No newline at end of file
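With these package-level re-exports, callers can `import k8sclient` directly instead of `from k8sclient import k8sclient`, as the updated `tasks.py` below does. A minimal sketch of the resulting call style, with placeholder values standing in for the `deployment_description` that `deploy()` returns:
```python
import k8sclient

# Mimics the deployment_description dict that deploy() returns and the
# plugin stores in runtime properties; all values here are placeholders.
deployment_description = {
    "namespace": "dcae",
    "deployment": "dep-web-server",
    "services": [],
}

if k8sclient.is_available("dcae", "web-server"):   # namespace, component name
    k8sclient.scale(deployment_description, 3)     # resize to three replicas
```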
diff --git a/k8s/k8sclient/k8sclient.py b/k8s/k8sclient/k8sclient.py
index 7ca7b03..c4ba67d 100644
--- a/k8s/k8sclient/k8sclient.py
+++ b/k8s/k8sclient/k8sclient.py
@@ -64,7 +64,7 @@ def _create_probe(hc, port):
probe = None
period = hc.get('interval', PROBE_DEFAULT_PERIOD)
timeout = hc.get('timeout', PROBE_DEFAULT_TIMEOUT)
- if probe_type == 'http' or probe_type == 'https':
+ if probe_type in ['http', 'https']:
probe = client.V1Probe(
failure_threshold = 1,
initial_delay_seconds = 5,
@@ -74,9 +74,9 @@ def _create_probe(hc, port):
path = hc['endpoint'],
port = port,
scheme = probe_type.upper()
- )
+ )
)
- elif probe_type == 'script' or probe_type == 'docker':
+ elif probe_type in ['script', 'docker']:
probe = client.V1Probe(
failure_threshold = 1,
initial_delay_seconds = 5,
@@ -84,9 +84,9 @@ def _create_probe(hc, port):
timeout_seconds = timeout,
_exec = client.V1ExecAction(
command = [hc['script']]
- )
+ )
)
- return probe
+ return probe
def _create_container_object(name, image, always_pull, env={}, container_ports=[], volume_mounts = [], readiness = None):
# Set up environment variables
@@ -99,8 +99,8 @@ def _create_container_object(name, image, always_pull, env={}, container_ports=[
# If a health check is specified, create a readiness probe
# (For an HTTP-based check, we assume it's at the first container port)
probe = None
-
- if (readiness):
+
+ if readiness:
hc_port = None
if len(container_ports) > 0:
hc_port = container_ports[0]
@@ -121,7 +121,7 @@ def _create_deployment_object(component_name,
containers,
replicas,
volumes,
- labels,
+ labels,
pull_secrets=[]):
# pull_secrets is a list of the names of the k8s secrets containing docker registry credentials
@@ -144,14 +144,14 @@ def _create_deployment_object(component_name,
replicas=replicas,
template=template
)
-
+
# Create deployment object
deployment = client.ExtensionsV1beta1Deployment(
kind="Deployment",
metadata=client.V1ObjectMeta(name=_create_deployment_name(component_name)),
spec=spec
)
-
+
return deployment
def _create_service_object(service_name, component_name, service_ports, annotations, labels, service_type):
@@ -185,7 +185,7 @@ def _parse_ports(port_list):
port_map[container] = hport
except:
pass # if something doesn't parse, we just ignore it
-
+
return container_ports, port_map
def _parse_volumes(volume_list):
@@ -198,7 +198,7 @@ def _parse_volumes(volume_list):
vro = (v['container']['mode'] == 'ro')
volumes.append(client.V1Volume(name=vname, host_path=client.V1HostPathVolumeSource(path=vhost)))
volume_mounts.append(client.V1VolumeMount(name=vname, mount_path=vcontainer, read_only=vro))
-
+
return volumes, volume_mounts
def _service_exists(namespace, component_name):
@@ -209,9 +209,26 @@ def _service_exists(namespace, component_name):
exists = True
except client.rest.ApiException:
pass
-
+
return exists
+def _patch_deployment(namespace, deployment, modify):
+ '''
+ Gets the current spec for 'deployment' in 'namespace',
+ uses the 'modify' function to change the spec,
+ then sends the updated spec to k8s.
+ '''
+ _configure_api()
+
+ # Get deployment spec
+ spec = client.ExtensionsV1beta1Api().read_namespaced_deployment(deployment, namespace)
+
+ # Apply changes to spec
+ spec = modify(spec)
+
+ # Patch the deployment with the updated spec
+ client.ExtensionsV1beta1Api().patch_namespaced_deployment(deployment, namespace, spec)
+
def deploy(namespace, component_name, image, replicas, always_pull, k8sconfig, **kwargs):
'''
This will create a k8s Deployment and, if needed, one or two k8s Services.
@@ -219,7 +236,7 @@ def deploy(namespace, component_name, image, replicas, always_pull, k8sconfig, *
We're not exposing k8s to the component developer and the blueprint author.
This is a conscious choice. We want to use k8s in a controlled, consistent way, and we want to hide
the details from the component developer and the blueprint author.)
-
+
namespace: the Kubernetes namespace into which the component is deployed
component_name: the component name, used to derive names of Kubernetes entities
image: the docker image for the component being deployed
@@ -359,11 +376,10 @@ def deploy(namespace, component_name, image, replicas, always_pull, k8sconfig, *
return dep, deployment_description
def undeploy(deployment_description):
- # TODO: do real configuration
_configure_api()
namespace = deployment_description["namespace"]
-
+
# remove any services associated with the component
for service in deployment_description["services"]:
client.CoreV1Api().delete_namespaced_service(service, namespace)
@@ -375,20 +391,54 @@ def undeploy(deployment_description):
def is_available(namespace, component_name):
_configure_api()
dep_status = client.AppsV1beta1Api().read_namespaced_deployment_status(_create_deployment_name(component_name), namespace)
- # Check if the number of available replicas is equal to the number requested
- return dep_status.status.available_replicas >= dep_status.spec.replicas
+ # Check if the number of available replicas is equal to the number requested and that the replicas match the current spec
+ # This check can be used to verify completion of an initial deployment, a scale operation, or an update operation
+ return dep_status.status.available_replicas == dep_status.spec.replicas and dep_status.status.updated_replicas == dep_status.spec.replicas
def scale(deployment_description, replicas):
- # TODO: do real configuration
- _configure_api()
+ ''' Trigger a scaling operation by updating the replica count for the Deployment '''
- namespace = deployment_description["namespace"]
- name = deployment_description["deployment"]
+ def update_replica_count(spec):
+ spec.spec.replicas = replicas
+ return spec
- # Get deployment spec
- spec = client.ExtensionsV1beta1Api().read_namespaced_deployment(name, namespace)
+ _patch_deployment(deployment_description["namespace"], deployment_description["deployment"], update_replica_count)
+
+def upgrade(deployment_description, image, container_index = 0):
+ ''' Trigger a rolling upgrade by sending a new image name/tag to k8s '''
+
+ def update_image(spec):
+ spec.spec.template.spec.containers[container_index].image = image
+ return spec
+
+ _patch_deployment(deployment_description["namespace"], deployment_description["deployment"], update_image)
+
+def rollback(deployment_description, rollback_to=0):
+ '''
+ Undo upgrade by rolling back to a previous revision of the deployment.
+ By default, go back one revision.
+ rollback_to can be used to supply a specific revision number.
+ Returns the image for the app container and the replica count from the rolled-back deployment
+ '''
+ '''
+ 2018-07-13
+ Currently this does not work due to a bug in the create_namespaced_deployment_rollback() method.
+ The k8s python client code throws an exception while processing the response from the API.
+ See:
+ - https://github.com/kubernetes-client/python/issues/491
+ - https://github.com/kubernetes/kubernetes/pull/63837
+ The fix has been merged into the master branch but is not in the latest release.
+ '''
+ _configure_api()
+ deployment = deployment_description["deployment"]
+ namespace = deployment_description["namespace"]
- # Update the replica count in the spec
- spec.spec.replicas = replicas
- client.ExtensionsV1beta1Api().patch_namespaced_deployment(name, namespace, spec)
+ # Initiate the rollback
+ client.ExtensionsV1beta1Api().create_namespaced_deployment_rollback(
+ deployment,
+ namespace,
+ client.AppsV1beta1DeploymentRollback(name=deployment, rollback_to=client.AppsV1beta1RollbackConfig(revision=rollback_to)))
+ # Read back the spec for the rolled-back deployment
+ spec = client.ExtensionsV1beta1Api().read_namespaced_deployment(deployment, namespace)
+ return spec.spec.template.spec.containers[0].image, spec.spec.replicas \ No newline at end of file
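The new `_patch_deployment` helper factors the read-modify-patch cycle out of `scale` and `upgrade`: each caller supplies only a modifier function. Another mutation could reuse it in the same style; the sketch below is hypothetical (not part of this changeset) and assumes it lives inside `k8sclient.py` next to the other helpers:
```python
def set_pull_policy(deployment_description, policy, container_index=0):
    ''' Hypothetical helper: set imagePullPolicy ("Always" or "IfNotPresent")
        on one container of an existing Deployment. '''
    def update_pull_policy(spec):
        # Modifier functions receive the full Deployment object and must
        # return it after changing the fields of interest.
        spec.spec.template.spec.containers[container_index].image_pull_policy = policy
        return spec

    _patch_deployment(deployment_description["namespace"],
                      deployment_description["deployment"],
                      update_pull_policy)
```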
diff --git a/k8s/k8splugin/__init__.py b/k8s/k8splugin/__init__.py
index 28306ee..7f721b2 100644
--- a/k8s/k8splugin/__init__.py
+++ b/k8s/k8splugin/__init__.py
@@ -27,4 +27,4 @@ from .tasks import create_for_components, create_for_components_with_streams, \
create_and_start_container_for_components_with_streams, \
create_for_platforms, create_and_start_container, \
create_and_start_container_for_components, create_and_start_container_for_platforms, \
- stop_and_remove_container, cleanup_discovery, policy_update, scale \ No newline at end of file
+ stop_and_remove_container, cleanup_discovery, policy_update, scale, update_image \ No newline at end of file
diff --git a/k8s/k8splugin/decorators.py b/k8s/k8splugin/decorators.py
index 2edcc0d..59d14d8 100644
--- a/k8s/k8splugin/decorators.py
+++ b/k8s/k8splugin/decorators.py
@@ -98,3 +98,17 @@ def merge_inputs_for_start(task_start_func):
ctx.instance.runtime_properties, **kwargs)
return wrapper
+
+def wrap_error_handling_update(update_func):
+ """ Wrap error handling for update operations (scale and upgrade) """
+
+ def wrapper(**kwargs):
+ try:
+ return update_func(**kwargs)
+ except DockerPluginDeploymentError:
+ raise NonRecoverableError ("Update operation did not complete successfully in the allotted time")
+ except Exception as e:
+ ctx.logger.error ("Unexpected error during update operation: {0}".format(str(e)))
+ raise NonRecoverableError(e)
+
+ return wrapper \ No newline at end of file
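For reference, the decorator is meant to sit outermost on the update-style task operations; the stacking below mirrors what the `tasks.py` changes in this changeset do:
```python
from cloudify.decorators import operation
from k8splugin.decorators import monkeypatch_loggers, wrap_error_handling_update

@wrap_error_handling_update
@monkeypatch_loggers
@operation
def scale(replicas, **kwargs):
    ...  # body as in k8s/k8splugin/tasks.py
```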
diff --git a/k8s/k8splugin/tasks.py b/k8s/k8splugin/tasks.py
index 50087fb..4205122 100644
--- a/k8s/k8splugin/tasks.py
+++ b/k8s/k8splugin/tasks.py
@@ -30,11 +30,11 @@ from cloudify.exceptions import NonRecoverableError, RecoverableError
from onap_dcae_dcaepolicy_lib import Policies
from k8splugin import discovery as dis
from k8splugin.decorators import monkeypatch_loggers, wrap_error_handling_start, \
- merge_inputs_for_start, merge_inputs_for_create
+ merge_inputs_for_start, merge_inputs_for_create, wrap_error_handling_update
from k8splugin.exceptions import DockerPluginDeploymentError
from k8splugin import utils
from configure import configure
-from k8sclient import k8sclient
+import k8sclient
# Get configuration
plugin_conf = configure.configure()
@@ -245,7 +245,7 @@ def _verify_container(service_component_name, max_wait):
will be raised.
"""
num_attempts = 1
-
+
while True:
if k8sclient.is_available(DCAE_NAMESPACE, service_component_name):
return True
@@ -256,7 +256,7 @@ def _verify_container(service_component_name, max_wait):
raise DockerPluginDeploymentError("Container never became healthy")
time.sleep(1)
-
+
return True
def _create_and_start_container(container_name, image, **kwargs):
@@ -266,7 +266,7 @@ def _create_and_start_container(container_name, image, **kwargs):
We're not exposing k8s to the component developer and the blueprint author.
This is a conscious choice. We want to use k8s in a controlled, consistent way, and we want to hide
the details from the component developer and the blueprint author.)
-
+
kwargs may have:
- volumes: array of volume objects, where a volume object is:
{"host":{"path": "/path/on/host"}, "container":{"bind":"/path/on/container","mode":"rw_or_ro"}
@@ -287,21 +287,21 @@ def _create_and_start_container(container_name, image, **kwargs):
ctx.logger.info("Deploying {}, image: {}, env: {}, kwargs: {}".format(container_name, image, env, kwargs))
ctx.logger.info("Passing k8sconfig: {}".format(plugin_conf))
replicas = kwargs.get("replicas", 1)
- _,dep = k8sclient.deploy(DCAE_NAMESPACE,
- container_name,
+ _,dep = k8sclient.deploy(DCAE_NAMESPACE,
+ container_name,
image,
- replicas = replicas,
+ replicas = replicas,
always_pull=kwargs.get("always_pull_image", False),
k8sconfig=plugin_conf,
- volumes=kwargs.get("volumes",[]),
+ volumes=kwargs.get("volumes",[]),
ports=kwargs.get("ports",[]),
- msb_list=kwargs.get("msb_list"),
+ msb_list=kwargs.get("msb_list"),
env = env,
labels = kwargs.get("labels", {}),
log_info=kwargs.get("log_info"),
readiness=kwargs.get("readiness"))
- # Capture the result of deployment for future use
+ # Capture the result of deployment for future use
ctx.instance.runtime_properties["k8s_deployment"] = dep
ctx.instance.runtime_properties["replicas"] = replicas
ctx.logger.info ("Deployment complete: {0}".format(dep))
@@ -337,11 +337,11 @@ def _enhance_docker_params(**kwargs):
Set up Docker environment variables and readiness check info
and inject into kwargs.
'''
-
+
# Get info for setting up readiness probe, if present
docker_config = kwargs.get("docker_config", {})
if "healthcheck" in docker_config:
- kwargs["readiness"] = docker_config["healthcheck"]
+ kwargs["readiness"] = docker_config["healthcheck"]
envs = kwargs.get("envs", {})
@@ -364,7 +364,7 @@ def _enhance_docker_params(**kwargs):
# lists together with no deduping.
kwargs = combine_params("ports", docker_config, kwargs)
kwargs = combine_params("volumes", docker_config, kwargs)
-
+
return kwargs
@@ -375,15 +375,15 @@ def _create_and_start_component(**kwargs):
# Need to be picky and manually select out pieces because just using kwargs
# which contains everything confused the execution of
# _create_and_start_container because duplicate variables exist
- sub_kwargs = {
+ sub_kwargs = {
"volumes": kwargs.get("volumes", []),
"ports": kwargs.get("ports", None),
- "envs": kwargs.get("envs", {}),
+ "envs": kwargs.get("envs", {}),
"log_info": kwargs.get("log_info", {}),
"labels": kwargs.get("labels", {}),
"readiness": kwargs.get("readiness",{})}
_create_and_start_container(service_component_name, image, **sub_kwargs)
-
+
# TODO: Use regular logging here
ctx.logger.info("Container started: {0}".format(service_component_name))
@@ -392,12 +392,7 @@ def _create_and_start_component(**kwargs):
def _verify_component(**kwargs):
"""Verify component (container) is healthy"""
service_component_name = kwargs[SERVICE_COMPONENT_NAME]
- # TODO: "Consul doesn't make its first health check immediately upon registration.
- # Instead it waits for the health check interval to pass."
- # Possible enhancement is to read the interval (and possibly the timeout) from
- # docker_config and multiply that by a number to come up with a more suitable
- # max_wait.
-
+
max_wait = kwargs.get("max_wait", 300)
# Verify that the container is healthy
@@ -407,7 +402,7 @@ def _verify_component(**kwargs):
# TODO: Use regular logging here
ctx.logger.info("Container is healthy: {0}".format(service_component_name))
-
+
return kwargs
def _done_for_start(**kwargs):
@@ -499,7 +494,7 @@ def create_and_start_container_for_platforms(**kwargs):
image = ctx.node.properties["image"]
docker_config = ctx.node.properties.get("docker_config", {})
if "healthcheck" in docker_config:
- kwargs["readiness"] = docker_config["healthcheck"]
+ kwargs["readiness"] = docker_config["healthcheck"]
if "dns_name" in ctx.node.properties:
service_component_name = ctx.node.properties["dns_name"]
else:
@@ -564,7 +559,7 @@ def create_and_start_container(**kwargs):
image = ctx.node.properties["image"]
_create_and_start_container(service_component_name, image,**kwargs)
-
+
ctx.logger.info("Component deployed: {0}".format(service_component_name))
@@ -580,6 +575,7 @@ def stop_and_remove_container(**kwargs):
ctx.logger.error("Unexpected error while stopping container: {0}"
.format(str(e)))
+@wrap_error_handling_update
@monkeypatch_loggers
@operation
def scale(replicas, **kwargs):
@@ -587,15 +583,44 @@ def scale(replicas, **kwargs):
if replicas > 0:
current_replicas = ctx.instance.runtime_properties["replicas"]
ctx.logger.info("Scaling from {0} to {1}".format(current_replicas, replicas))
- try:
- deployment_description = ctx.instance.runtime_properties["k8s_deployment"]
- k8sclient.scale(deployment_description, replicas)
- ctx.instance.runtime_properties["replicas"] = replicas
- except Exception as e:
- ctx.logger.error ("Unexpected error while scaling {0}".format(str(e)))
+ deployment_description = ctx.instance.runtime_properties["k8s_deployment"]
+ k8sclient.scale(deployment_description, replicas)
+ ctx.instance.runtime_properties["replicas"] = replicas
+
+ # Verify that the scaling took place as expected
+ max_wait = kwargs.get("max_wait", 300)
+ service_component_name = ctx.instance.runtime_properties["service_component_name"]
+ if _verify_container(service_component_name, max_wait):
+ ctx.logger.info("Scaling complete : {0} from {1} to {2} instance(s)".format(service_component_name, current_replicas, replicas))
+
else:
ctx.logger.info("Ignoring request to scale to zero replicas")
-
+
+@wrap_error_handling_update
+@monkeypatch_loggers
+@operation
+def update_image(image, **kwargs):
+ if image:
+ current_image = ctx.instance.runtime_properties["image"]
+ ctx.logger.info("Updating application container image from {0} to {1}".format(current_image, image))
+ deployment_description = ctx.instance.runtime_properties["k8s_deployment"]
+ k8sclient.upgrade(deployment_description, image)
+ ctx.instance.runtime_properties["image"] = image
+
+ # Verify that the update took place as expected
+ max_wait = kwargs.get("max_wait", 300)
+ service_component_name = ctx.instance.runtime_properties["service_component_name"]
+ if _verify_container(service_component_name, max_wait):
+ ctx.logger.info("Update complete : {0} from {1} to {2} instance(s)".format(service_component_name, current_image, image))
+
+ else:
+ ctx.logger.info("Ignoring update_image request with unusable image '{0}'".format(str(image)))
+
+#TODO: implement rollback operation when kubernetes python client fix is available.
+# (See comments in k8sclient.py.)
+# In the meantime, it's possible to undo an update_image operation by doing a second
+# update_image that specifies the older image.
+
@monkeypatch_loggers
@Policies.cleanup_policies_on_node
@operation
@@ -628,7 +653,7 @@ def _notify_container(**kwargs):
pods could be changing. We can query to get all the pods, but
there's no guarantee the list won't change while we're trying to
execute the script.
-
+
In ONAP R2, all of the policy-driven components rely on polling.
"""
"""
diff --git a/k8s/pom.xml b/k8s/pom.xml
index d51eae5..bfaa3f5 100644
--- a/k8s/pom.xml
+++ b/k8s/pom.xml
@@ -28,7 +28,7 @@ ECOMP is a trademark and service mark of AT&T Intellectual Property.
<groupId>org.onap.dcaegen2.platform.plugins</groupId>
<artifactId>k8s</artifactId>
<name>k8s-plugin</name>
- <version>1.2.0-SNAPSHOT</version>
+ <version>1.3.0-SNAPSHOT</version>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
diff --git a/k8s/requirements.txt b/k8s/requirements.txt
index f5cac20..833039f 100644
--- a/k8s/requirements.txt
+++ b/k8s/requirements.txt
@@ -1,5 +1,5 @@
python-consul>=0.6.0,<1.0.0
uuid==1.30
-onap-dcae-dcaepolicy-lib==2.1.0
+onap-dcae-dcaepolicy-lib==2.4.0
kubernetes==4.0.0
cloudify-plugins-common==3.4 \ No newline at end of file
diff --git a/k8s/setup.py b/k8s/setup.py
index 5de6a76..190d603 100644
--- a/k8s/setup.py
+++ b/k8s/setup.py
@@ -23,7 +23,7 @@ from setuptools import setup
setup(
name='k8splugin',
description='Cloudify plugin for containerized components deployed using Kubernetes',
- version="1.2.0",
+ version="1.3.0",
author='J. F. Lucas, Michael Hwang, Tommy Carpenter',
packages=['k8splugin','k8sclient','msb','configure'],
zip_safe=False,
diff --git a/k8s/tests/test_tasks.py b/k8s/tests/test_tasks.py
index 4d0aa90..077a940 100644
--- a/k8s/tests/test_tasks.py
+++ b/k8s/tests/test_tasks.py
@@ -190,10 +190,10 @@ def test_lookup_service(monkeypatch, mockconfig):
def test_verify_container(monkeypatch, mockconfig):
- from k8sclient import k8sclient
+ import k8sclient
from k8splugin import tasks
from k8splugin.exceptions import DockerPluginDeploymentError
-
+
def fake_is_available_success(ch, scn):
return True
@@ -273,7 +273,7 @@ def test_enhance_docker_params(mockconfig):
assert actual == {'envs': {"SERVICE_TAGS": ""}, 'docker_config': {'ports': ['1:1', '2:2'],
'volumes': [{'host': 'somewhere else', 'container': 'somewhere'}]},
- 'ports': ['1:1', '2:2', '3:3', '4:4'], 'volumes': [{'host': 'somewhere else',
+ 'ports': ['1:1', '2:2', '3:3', '4:4'], 'volumes': [{'host': 'somewhere else',
'container': 'somewhere'}, {'host': 'nowhere else', 'container':
'nowhere'}], "service_id": None}
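The monkeypatch pattern in `test_verify_container` extends naturally to the failure path that the new `wrap_error_handling_update` decorator relies on. A hedged sketch (this test is not part of the changeset; fixture names follow the existing tests):
```python
def test_verify_container_gives_up(monkeypatch, mockconfig):
    import pytest
    import k8sclient
    from k8splugin import tasks
    from k8splugin.exceptions import DockerPluginDeploymentError

    def fake_is_available_never(ch, scn):
        # Simulate a deployment that never becomes ready
        return False

    monkeypatch.setattr(k8sclient, 'is_available', fake_is_available_never)

    # With max_wait=1, _verify_container should raise after its first retry
    with pytest.raises(DockerPluginDeploymentError):
        tasks._verify_container('some-component', 1)
```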