author     Jack Lucas <jflucas@research.att.com>  2018-12-04 15:02:06 -0500
committer  Jack Lucas <jflucas@research.att.com>  2018-12-04 15:03:25 -0500
commit     8ad4f6db9865a9a9fb9076c9ce9e07e91a1519ea
tree       9995d6441f1c378dc3a56fea4eb2f93aae53d164 /cm-container/README.md
parent     e24fb188c483acce93fc5419690792c2300161cf
Add persistent storage for CM state information
Issue-ID: DCAEGEN2-990
Change-Id: I122e541d6ea0fa6bca06157d6ae7a330048d2ed7
Signed-off-by: Jack Lucas <jflucas@research.att.com>
Diffstat (limited to 'cm-container/README.md')
 cm-container/README.md | 31
 1 file changed, 31 insertions, 0 deletions
diff --git a/cm-container/README.md b/cm-container/README.md
index a29423d..6e1e26e 100644
--- a/cm-container/README.md
+++ b/cm-container/README.md
@@ -31,3 +31,34 @@ In a Kubernetes environment, we expect that the <path_to_kubeconfile_file> and t
 We also expect that in a Kubernetes environment the external port mapping would not be needed.
+
+## Persistent Storage
+In an ONAP deployment driven by OOM, Cloudify Manager will store data related to its state
+in a Kubernetes PersistentVolume. If the Cloudify Manager pod is destroyed and recreated,
+the new instance will have all of the state information from the previous run.
+
+To set up persistence, we replace the command run by the container (`CMD` in the Dockerfile) with
+our own script `start-persistent.sh`. This script checks to see if a persistent volume has been
+mounted in a well-known place (`/cfy-persist` in the container's file system). If so, the script
+then checks to see if the persistent volume has been populated with data. There are two possibilities:
+1. The persistent volume hasn't been populated, indicating that this is the first time Cloudify Manager is
+being run in the current environment. In this case, the script copies state data from several directories in
+the container file system into directories in the persistent volume. This is data (such as database schemas for
+Cloudify Manager's internal postgres instance) that was generated when the original Cloudify Manager image was
+created by Cloudify.
+2. The persistent volume has been populated, indicating that this is not the first time Cloudify Manager is being
+run in the current environment. The data in the persistent volume reflects the state that Cloudify Manager was in
+when it exited at some point in the past. There's no need to copy data in this case.
+In either case, the script will create symbolic links from the original data directories to the corresponding directories
+in the persistent volume.
+
+If there is no persistent volume mounted, the script does nothing to set up persistent data, and the container will have
+no persistent storage.
+
+The last command in the script is the command from the original Cloudify version of the Cloudify Manager image. It runs `/sbin/init`,
+which then brings up the many other processes needed for a working instance of Cloudify Manager.
+
+## The `setup-secret.sh` script
+When Kubernetes starts a container, it mounts a directory containing the credentials that the container needs to access the Kubernetes API on the local Kubernetes cluster. The mountpoint is `/var/run/secrets/kubernetes.io/serviceaccount`. Something about the way that Cloudify Manager is started (possibly because `/sbin/init` is run) causes this mountpoint to be hidden. `setup-secret.sh` will recreate the directory if it's not present and symbolically link it to a copy of the credentials mounted at `/secret` in the container file system. This gives Cloudify Manager the credentials that the Kubernetes plugin needs to deploy Kubernetes-based DCAE components.
+
+`setup-secret.sh` needs to run after `/sbin/init`. The Dockerfile installs it in the `rc.local` script that runs at startup.
\ No newline at end of file
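
For illustration only, here is a minimal sketch of the persistence logic that the README above describes for `start-persistent.sh`. This is not the script from the commit: the well-known mountpoint `/cfy-persist` and the use of `/sbin/init` come from the text, but the state directories listed are hypothetical placeholders for the several Cloudify Manager data directories the real image copies.

```bash
#!/bin/bash
# Sketch of the behavior described in the README -- not the actual start-persistent.sh.
PERSIST=/cfy-persist                                        # well-known persistent volume mountpoint
STATE_DIRS="/var/lib/pgsql/9.5/data /opt/manager/resources" # hypothetical example directories

if [ -d "$PERSIST" ]; then
  for dir in $STATE_DIRS; do
    tgt="${PERSIST}${dir}"
    if [ ! -d "$tgt" ]; then
      # First run in this environment: seed the volume with the state data baked into the image
      mkdir -p "$(dirname "$tgt")"
      cp -rp "$dir" "$tgt"
    fi
    # In either case, replace the original directory with a symlink into the persistent volume
    rm -rf "$dir"
    ln -s "$tgt" "$dir"
  done
fi

# Last command: the one from the original Cloudify-supplied image
exec /sbin/init
```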
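
Similarly, a sketch of what the `setup-secret.sh` fix-up could look like; the only paths taken from the text above are the service-account mountpoint and `/secret`.

```bash
#!/bin/bash
# Sketch of the behavior described in the README -- not the actual setup-secret.sh.
SA_DIR=/var/run/secrets/kubernetes.io/serviceaccount

if [ ! -d "$SA_DIR" ]; then
  # Recreate the mountpoint hidden at startup and point it at the copy of the credentials
  mkdir -p "$(dirname "$SA_DIR")"
  ln -s /secret "$SA_DIR"
fi
```

Because this must happen after `/sbin/init` has started, the README notes the script is hooked into `rc.local` rather than run as the container command.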