authorJack Lucas <jflucas@research.att.com>2019-03-06 17:58:58 -0500
committerJack Lucas <jflucas@research.att.com>2019-03-06 18:31:14 -0500
commita0a88be5349019496ff1027cc67fc79abd2bb476 (patch)
tree6066d59e77c90c11e29efb0e46e6b68b233fc263
parent8c58bb2c4fb000208070f88cbd2e6998c9866fa8 (diff)
Fix cleanup script
Document use of cleanup script

Issue-ID: DCAEGEN2-1317
Change-Id: Ibd5766c94a939e9086e0b9724c21eaf1178bd5d3
Signed-off-by: Jack Lucas <jflucas@research.att.com>
-rw-r--r--                cm-container/README.md          61
-rwxr-xr-x [-rw-r--r--]   cm-container/dcae-cleanup.sh    10
-rw-r--r--                cm-container/pom.xml             4
3 files changed, 64 insertions, 11 deletions
diff --git a/cm-container/README.md b/cm-container/README.md
index 6e1e26e..7d90aeb 100644
--- a/cm-container/README.md
+++ b/cm-container/README.md
@@ -20,7 +20,7 @@ docker run --name cfy-mgr -d --restart unless-stopped \
-p <some_external_port>:80 \
--tmpfs /run \
--tmpfs /run/lock \
- --security-opt seccomp:unconfined
+ --security-opt seccomp:unconfined
--cap-add SYS_ADMIN \
-v <path_to_kubeconfig_file>:/etc/cloudify/.kube/config
-v <path_to_config_file>:/opt/onap/config.txt
@@ -34,12 +34,12 @@ needed.
## Persistent Storage
In an ONAP deployment driven by OOM, Cloudify Manager will store data related to its state
-in a Kubernetes PersistentVolume. If the Cloudify Manager pod is destroyed and recreated,
+in a Kubernetes PersistentVolume. If the Cloudify Manager pod is destroyed and recreated,
the new instance will have all of the state information from the previous run.
To set up persistence, we replace the command run by the container (`CMD` in the Dockerfile) with
our own script `start-persistent.sh`. This script checks to see if a persistent volume has been
-mounted in a well-known place (`/cfy-persist` in the container's file system). If so, the script
+mounted in a well-known place (`/cfy-persist` in the container's file system). If so, the script
then checks to see if the persistent volume has been populated with data. There are two possibilities:
1. The persistent volume hasn't been populated, indicating that this is the first time Cloudify Manager is
being run in the current environment. In this case, the script copies state data from several directories in
@@ -61,4 +61,57 @@ which then brings up the many other processes needed for a working instance of C
## The `setup-secret.sh` script
When Kubernetes starts a container, it mounts a directory containing the credentials that the container needs to access the Kubernetes API on the local Kubernetes cluster. The mountpoint is `/var/run/secrets/kubernetes.io/serviceaccount`. Something about the way that Cloudify Manager is started (possibly because `/sbin/init` is run) causes this mountpoint to be hidden. `setup-secret.sh` will recreate the directory if it's not present and symbolically link it to a copy of the credentials mounted at `/secret` in the container file system. This gives Cloudify Manager the credentials that the Kubernetes plugin needs to deploy Kubernetes-based DCAE components.
-`setup-secret.sh` needs to run after '/sbin/init'. The Dockerfile installs it in the `rc.local` script that runs at startup.
\ No newline at end of file
+`setup-secret.sh` needs to run after '/sbin/init'. The Dockerfile installs it in the `rc.local` script that runs at startup.
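+
+For illustration only, the core idea of `setup-secret.sh` can be sketched in a few lines of shell (a simplified, hypothetical version, not the exact script installed in the image):
+```
+#!/bin/bash
+# Recreate the serviceaccount mountpoint hidden at startup and point it at the
+# copy of the credentials that Kubernetes mounts at /secret.
+SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount
+if [ ! -e "$SA_DIR" ]
+then
+    mkdir -p "$(dirname "$SA_DIR")"
+    ln -s /secret "$SA_DIR"
+fi
+```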
+
+## Cleaning up Kubernetes components deployed by Cloudify Manager
+Running `helm undeploy` (or `helm delete`) destroys the Kubernetes components deployed via helm. In an ONAP deployment
+driven by OOM, this includes destroying Cloudify Manager. helm will *not* delete Kubernetes components deployed by Cloudify Manager.
+These include components ("microservices") deployed as part of the ONAP installation process by the DCAE bootstrap container as well as
+components deployed after the initial installation using CLAMP. Removing *all* of DCAE, including any components deployed by Cloudify
+Manager, requires running the cleanup command below before running `helm undeploy` or `helm delete`:
+
+```kubectl -n _namespace_ exec _cloudify_manager_pod_ /scripts/dcae-cleanup.sh```
+where _namespace_ is the namespace in which ONAP was deployed and _cloudify_manager_pod_ is the ID of the pod running Cloudify Manager.
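+
+One way to find the pod ID is to list the pods in the namespace and filter on the Cloudify Manager name (the `grep` pattern here assumes the standard OOM pod naming shown in the example below):
+```
+kubectl -n onap get pods | grep dcae-cloudify-manager
+```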
+
+For example:
+```
+$ kubectl -n onap exec dev-dcaegen2-dcae-cloudify-manager-bf885f5bd-hm97x /scripts/dcae-cleanup.sh
++ set +e
+++ grep admin_password: /etc/cloudify/config.yaml
+++ cut -d : -f2
+++ tr -d ' '
++ CMPASS=admin
++ TYPENAMES='[\"dcae.nodes.ContainerizedServiceComponent\",\"dcae.nodes.ContainerizedServiceComponentUsingDmaap\",\"dcae.nodes.ContainerizedPlatformComponent\",\"dcae.nodes.ContainerizedApplication\"]'
++ xargs -I % sh -c 'cfy executions start -d % -p '\''{'\''\"type_names\":[\"dcae.nodes.ContainerizedServiceComponent\",\"dcae.nodes.ContainerizedServiceComponentUsingDmaap\",\"dcae.nodes.ContainerizedPlatformComponent\",\"dcae.nodes.ContainerizedApplication\"],\"operation\":\"cloudify.interfaces.lifecycle.stop\"'\''}'\'' execute_operation'
++ /bin/jq '.items[].id'
++ curl -Ss --user admin:admin -H 'Tenant: default_tenant' 'localhost/api/v3.1/deployments?_include=id'
+Executing workflow execute_operation on deployment pgaas_initdb [timeout=900 seconds]
+2019-03-06 23:06:06.838 CFY <pgaas_initdb> Starting 'execute_operation' workflow execution
+2019-03-06 23:06:07.518 CFY <pgaas_initdb> 'execute_operation' workflow execution succeeded
+Finished executing workflow execute_operation on deployment pgaas_initdb
+* Run 'cfy events list -e c88d5a0a-9699-4077-961b-749384b1e455' to retrieve the execution's events/logs
+Executing workflow execute_operation on deployment hv-ves [timeout=900 seconds]
+2019-03-06 23:06:14.928 CFY <hv-ves> Starting 'execute_operation' workflow execution
+2019-03-06 23:06:15.535 CFY <hv-ves> [hv-ves_dlkit2] Starting operation cloudify.interfaces.lifecycle.stop
+2019-03-06 23:06:15.535 CFY <hv-ves> [hv-ves_dlkit2.stop] Sending task 'k8splugin.stop_and_remove_container'
+2019-03-06 23:06:16.554 CFY <hv-ves> [hv-ves_dlkit2.stop] Task started 'k8splugin.stop_and_remove_container'
+2019-03-06 23:06:20.163 CFY <hv-ves> [hv-ves_dlkit2.stop] Task succeeded 'k8splugin.stop_and_remove_container'
+2019-03-06 23:06:20.561 CFY <hv-ves> [hv-ves_dlkit2] Finished operation cloudify.interfaces.lifecycle.stop
+2019-03-06 23:06:21.570 CFY <hv-ves> 'execute_operation' workflow execution succeeded
+Finished executing workflow execute_operation on deployment hv-ves
+* Run 'cfy events list -e b4ea6608-befd-421d-9851-94527deab372' to retrieve the execution's events/logs
+Executing workflow execute_operation on deployment datafile-collector [timeout=900 seconds]
+2019-03-06 23:06:27.471 CFY <datafile-collector> Starting 'execute_operation' workflow execution
+2019-03-06 23:06:28.593 CFY <datafile-collector> [datafile-collector_j2b0r4] Starting operation cloudify.interfaces.lifecycle.stop
+2019-03-06 23:06:28.593 CFY <datafile-collector> [datafile-collector_j2b0r4.stop] Sending task 'k8splugin.stop_and_remove_container'
+2019-03-06 23:06:28.593 CFY <datafile-collector> [datafile-collector_j2b0r4.stop] Task started 'k8splugin.stop_and_remove_container'
+2019-03-06 23:06:32.078 CFY <datafile-collector> [datafile-collector_j2b0r4.stop] Task succeeded 'k8splugin.stop_and_remove_container'
+2019-03-06 23:06:32.609 CFY <datafile-collector> [datafile-collector_j2b0r4] Finished operation cloudify.interfaces.lifecycle.stop
+2019-03-06 23:06:32.609 CFY <datafile-collector> 'execute_operation' workflow execution succeeded
+Finished executing workflow execute_operation on deployment datafile-collector
+* Run 'cfy events list -e 24749c7e-591f-4cac-b127-420b0932ef09' to retrieve the execution's events/logs
+Executing workflow execute_operation on deployment ves [timeout=900 seconds]
+```
+The exact content of the output will depend on what components have been deployed. Note that in the example output
+above, the `pgaas_initdb` deployment was visited, but no 'stop' operation was sent because `pgaas_initdb` does not contain a Kubernetes node.
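+
+Putting the steps together, removing all of DCAE might look like this (placeholders as above; the release name is whatever was used when ONAP was installed with helm, so it is illustrative here):
+```
+# Stop everything that Cloudify Manager deployed
+kubectl -n _namespace_ exec _cloudify_manager_pod_ /scripts/dcae-cleanup.sh
+# Then remove the helm-deployed components, including Cloudify Manager itself
+helm delete _release_name_
+```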
+
diff --git a/cm-container/dcae-cleanup.sh b/cm-container/dcae-cleanup.sh
index a072dd4..a9779be 100644..100755
--- a/cm-container/dcae-cleanup.sh
+++ b/cm-container/dcae-cleanup.sh
@@ -1,6 +1,6 @@
#!/bin/bash
# ================================================================================
-# Copyright (c) 2018 AT&T Intellectual Property. All rights reserved.
+# Copyright (c) 2018-2019 AT&T Intellectual Property. All rights reserved.
# ================================================================================
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -25,7 +25,7 @@
# Rather than using the 'cfy uninstall' command to run a full 'uninstall' workflow
# against the deployments, this script uses 'cfy executions' to run a 'stop'
# operation against the nodes in each deployment. The reason for this is that,
-# at the time this script run, we have no# guarantees about what other components are
+# at the time this script runs, we have no guarantees about what other components are
# still running. In particular, a full 'uninstall' will cause API requests to Consul
# and will raise RecoverableErrors if it cannot connect. RecoverableErrors send Cloudify
# into a long retry loop. Instead, we invoke only the 'stop'
@@ -33,7 +33,7 @@
# present) but not the Consul API.
#
# Note that the script finds all of the deployments known to Cloudify and runs the
-# 'stop' operation on every node
+# 'stop' operation on every k8s node.
# The result of the script is that all of the k8s entities deployed by Cloudify
# should be destroyed. Cloudify Manager itself isn't fully cleaned up (the deployments and
# blueprints are left), but that doesn't matter because Cloudify Manager will be
@@ -47,7 +47,7 @@ set +e
# Brittle, but the container is built with an unchanging version of CM,
# so no real risk of a breaking change
CMPASS=$(grep 'admin_password:' /etc/cloudify/config.yaml | cut -d ':' -f2 | tr -d ' ')
-TYPENAMES='[dcae.nodes.ContainerizedServiceComponent,dcae.nodes.ContainerizedServiceComponent,dcae.nodes.ContainerizedServiceComponent,dcae.nodes.ContainerizedServiceComponent]'
+TYPENAMES=[\\\"dcae.nodes.ContainerizedServiceComponent\\\",\\\"dcae.nodes.ContainerizedServiceComponentUsingDmaap\\\",\\\"dcae.nodes.ContainerizedPlatformComponent\\\",\\\"dcae.nodes.ContainerizedApplication\\\"]
# Uninstall components managed by Cloudify
# Get the list of deployment ids known to Cloudify via curl to Cloudify API.
@@ -59,4 +59,4 @@ TYPENAMES='[dcae.nodes.ContainerizedServiceComponent,dcae.nodes.ContainerizedSer
curl -Ss --user admin:$CMPASS -H "Tenant: default_tenant" "localhost/api/v3.1/deployments?_include=id" \
| /bin/jq .items[].id \
-| xargs -I % sh -c 'cfy executions start -d % -p type_names=${TYPENAMES} -p operation=cloudify.interfaces.lifecycle.stop execute_operation'
\ No newline at end of file
+| xargs -I % sh -c "cfy executions start -d % -p '{'\\\"type_names\\\":${TYPENAMES},\\\"operation\\\":\\\"cloudify.interfaces.lifecycle.stop\\\"'}' execute_operation"
diff --git a/cm-container/pom.xml b/cm-container/pom.xml
index b245f4c..4cac26d 100644
--- a/cm-container/pom.xml
+++ b/cm-container/pom.xml
@@ -1,7 +1,7 @@
<?xml version="1.0"?>
<!--
================================================================================
-Copyright (c) 2018 AT&T Intellectual Property. All rights reserved.
+Copyright (c) 2018-2019 AT&T Intellectual Property. All rights reserved.
================================================================================
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -27,7 +27,7 @@ limitations under the License.
<groupId>org.onap.dcaegen2.deployments</groupId>
<artifactId>cm-container</artifactId>
<name>dcaegen2-deployments-cm-container</name>
- <version>1.5.1</version>
+ <version>1.5.2</version>
<url>http://maven.apache.org</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>