author     Lusheng Ji <lji@research.att.com>  2017-09-21 18:55:55 +0000
committer  Lusheng Ji <lji@research.att.com>  2017-09-21 19:14:51 +0000
commit     eb92775498e1e43ba24b9d1065930cf9e619670b (patch)
tree       1bf7450769540d5d148cfff74332252d6d4f3bab /bootstrap
parent     9bedea048390ef26a9dd0b02ea703b8d0def7906 (diff)
Update bootstrap README
Issue-Id: DCAEGEN2-115
Change-Id: Ia2745d0d4c1ceaa361424d44d28bbdcffb591937
Signed-off-by: Lusheng Ji <lji@research.att.com>
Diffstat (limited to 'bootstrap')
-rw-r--r--  bootstrap/README-docker.md  |  75
1 file changed, 14 insertions(+), 61 deletions(-)
diff --git a/bootstrap/README-docker.md b/bootstrap/README-docker.md
index af4edff..0fce3fc 100644
--- a/bootstrap/README-docker.md
+++ b/bootstrap/README-docker.md
@@ -1,11 +1,15 @@
## Dockerized bootstrap for Cloudify Manager and Consul cluster
1. Preparations
-a) Add a public key to openStack, note its name (we will use KEYNAME as example for below). Save the private key (we will use KAYPATH as its path example), make sure it's permission is globally readable.
-b) Load the folowing base VM images to OpenStack: a CentOS 7 base image and a Ubuntu 16.04 base image.
-c) Obatin the resource IDs/UUIDs for resources needed by the inputs.yaml file, as explained belowi, from OpenStack.
-d) DCAEGEN2 boot straping assumes that VMs are assigned private IP addresses from a network. Each VM can also be assigned a floating public IP address from another network.
+a) The current DCAEGEN2 bootstrapping process assumes that networking in OpenStack is based on the following model:
+a private network interconnecting the VMs, and an external network that provides "floating" IP addresses for the VMs. A router
+connects the two networks. Each VM is assigned two IP addresses, one allocated from the private network when the VM is launched.
+A floating IP is then assigned to the VM from the external network. The UUIDs of the private and external networks are needed for
+preparing the inputs.yaml file used to run the bootstrap container.
+b) Add a public key to OpenStack and note its name (we will use KEYNAME as the example below). Save the private key (we will use KEYPATH as its example path) and make sure its permissions leave it globally readable.
+c) Load the following base VM images into OpenStack: a CentOS 7 base image and an Ubuntu 16.04 base image.
+d) Obtain from OpenStack the resource IDs/UUIDs for the resources needed by the inputs.yaml file, as explained below (see the example commands after this list).
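The key registration and the resource IDs/UUIDs mentioned above can be handled with the standard `openstack` CLI. A minimal sketch, assuming the CLI is installed and the tenant credentials have been sourced; KEYNAME, KEYPATH, and the public key path are placeholders:
```
# register the public key under KEYNAME and make the saved private key globally readable
openstack keypair create --public-key ~/.ssh/id_rsa.pub KEYNAME
chmod 0644 KEYPATH

# note the UUIDs of the private and external networks for inputs.yaml
openstack network list

# note the image IDs of the CentOS 7 and Ubuntu 16.04 base images
openstack image list
```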
2. On the dev machine, edit an inputs.yaml file at INPUTSYAMLPATH
```
@@ -24,9 +28,9 @@ d) DCAEGEN2 boot straping assumes that VMs are assigned private IP addresses fro
13 keypair: 'KEYNAME'
14 key_filename: '/opt/dcae/key'
15 location_prefix: 'onapr1'
-16 location_domain: 'onap-f.onap.homer.att.com'
-17 codesource_url: 'https://nexus01.research.att.com:8443/repository'
-18 codesource_version: 'solutioning01-mte2'
+16 location_domain: 'onapdevlab.onap.org'
+17 codesource_url: 'https://nexus.onap.org/service/local/repositories/raw/content'
+18 codesource_version: 'org.onap.dcaegen2.deployments/releases/scripts'
```
Here is a line-by-line explanation of the parameters
1 UUID of the OpenStack CentOS 7 VM image
@@ -47,68 +51,17 @@ Here is a line-by-line explanation of the arameters
16 Domain name of the OpenStack tenant, for example: 'onapr1.playground.onap.org'
17 Location of the raw artifact repo hosting additional boot scripts called by DCAEGEN2 VMs' cloud-init, for example:
'https://nexus.onap.org/service/local/repositories/raw/content'
-18 Path to the boot scripts within the raw artifact repo, for example: 'org.onap.dcaegen2.deployments.scripts/releases/'
+18 Path to the boot scripts within the raw artifact repo, for example: 'org.onap.dcaegen2.deployments/releases/scripts'
3. Pull and run the Docker container
```
docker pull nexus3.onap.org:10003/onap/org.onap.dcaegen2.deployments.bootstrap:1.0
-
-docker run -d -v /home/ubuntu/JFLucasBootStrap/utils/platform_base_installation/key:/opt/app/installer/config/key -v /home/ubuntu/JFLucasBootStrap/utils/platform_base_installation/inputs.yaml:/opt/app/installer/config/inputs.yaml -e "LOCATION=dg2" bootstrap
-
-docker run -d -v KEYPATH:/opt/app/installer/config/key -v INPUTSYAMLPATH:/opt/app/installer/config/inputs.yaml -e "LOCATION=dg2" nexus3.onap.org:10003/onap/org.onap.dcaegen2.deployments.bootstrap:1.0
+docker run -v KEYPATH:/opt/app/installer/config/key -v INPUTSYAMLPATH:/opt/app/installer/config/inputs.yaml -e "LOCATION=dg2" nexus3.onap.org:10003/onap/org.onap.dcaegen2.deployments.bootstrap:1.0
```
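If you prefer to run the container detached, giving it a name makes it easy to follow the installation output and to exec into it later. A sketch, reusing the container name bsexec and the detached form from the previous version of this README; KEYPATH, INPUTSYAMLPATH, and the LOCATION value are placeholders as above:
```
docker run -d --name bsexec \
  -v KEYPATH:/opt/app/installer/config/key \
  -v INPUTSYAMLPATH:/opt/app/installer/config/inputs.yaml \
  -e "LOCATION=dg2" \
  nexus3.onap.org:10003/onap/org.onap.dcaegen2.deployments.bootstrap:1.0

# watch the installation progress
docker logs -f bsexec
```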
-
-R
-`expand.sh` expands the blueprints and the installer script so they
-point to the repo where the necessary artifacts (plugins, type files)
-are store.
-
-`docker build -t bootstrap .` builds the image
-
-`docker run -d -v /path/to/worldreadable_private_key:/opt/app/installer/config/key -v /path/to/inputs_file:/opt/app/installer/config/inputs.yaml -e "LOCATION=location_id_here" --name bsexec bootstrap` runs the container and (if you're lucky) does the deployment.
-
-(
-1. the private key is THE private key for the public key added to OpenStack
-2. the path to inputs and key file are FULL path starting from /
-3. --name is optional. if so the container name will be random
-)
-
-
-`example-inputs.yaml` is, as the name suggests, an example inputs file. The values in it work in the ONAP-Future environment, except for the
-user name and password.
-
-To watch the action use
-`docker logs -f bsexec`
-
-The container stays up even after the installation is complete.
-To enter the running container:
-`docker exec -it bsexec /bin/bash`
-Once in the container, to uninstall CM and the host VM and its supporting entities
-`source dcaeinstall/bin/active`
-`cfy local uninstall`
-
-(But remember--before uninstalling CM, be sure to go to CM first and uninstall the Consul cluster.)
+The container stays up even after the installation is complete. Use the docker exec command to get inside the container, then run cfy commands to interact with the Cloudify Manager, as shown in the example below.
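For example, a sketch based on the commands from the previous version of this README (the name bsexec assumes the container was started with `--name bsexec` as shown above; the virtualenv path inside the container may differ):
```
# enter the running bootstrap container
docker exec -it bsexec /bin/bash

# inside the container: activate the installer's virtualenv, then drive Cloudify
source dcaeinstall/bin/activate
cfy status            # check that the Cloudify Manager is reachable
cfy local uninstall   # tear down CM and its host VM (uninstall the Consul cluster from CM first)
```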
-####TODOS:
-- Integrate with the maven-based template expansion.
-- Integrate with maven-based Docker build and push to LF Docker repo
-- Add full list of plugins to be installed onto CM
-- Separate the Docker stuff from the non-Docker installation. (The blueprints are common to both methods.)
-- Get rid of any AT&T-isms
-- (Maybe) Move the installation of the Cloudify CLI and the sshkeyshare and dnsdesig plugins into the Dockerfile,
-so the image has everything set up and can just enter the vevn and start the Centos VM installation.
-- Figure out what (if anything) needs to change if the container is deployed by Kubernetes rather than vanilla Docker
-- Make sure the script never exits, even in the face of errors. We need the container to stay up so we can do uninstalls.
-- Figure out how to add in the deployments for the rest of the DCAE platform components. (If this container deploys all of DCAE,
-should it move out of the CCSDK project and into DCAE?)
-- Figure out the right way to get the Cloudify OpenStack plugins and the Cloudify Fabric plugins onto CM. Right now there are
-handbuilt wagons in the Nexus repo. (In theory, CM should be able to install these plugins whenever a blueprint calls for them. However,
-they require gcc, and we're not installing gcc on our CM host.)
-- Maybe look at using a different base image--the Ubuntu 16.04 image needs numerous extra packages installed.
-- The blueprint for Consul shows up in Cloudify Manager with the name 'blueprints'. I'll leave it as an exercise for the reader to figure why
-and to figure out how to change it. (See ~ line 248 of installer-docker.sh-template.)