authorLusheng Ji <lji@research.att.com>2017-11-05 19:24:05 -0500
committerLusheng Ji <lji@research.att.com>2017-11-05 19:24:17 -0500
commita73548f4eb6ba4bdb2b2064c7e24a4b74b467f5a (patch)
tree38fa662d90000f88df368d53ced9333a5e36428b
parent20cb0239275b2c1c6276507943ca341d932b0378 (diff)
Add documentation
Issue-Id: DCAEGEN2-194 Change-Id: Ic70d210e91a0d964861e287d6c2affd4e098b919 Signed-off-by: Lusheng Ji <lji@research.att.com>
-rw-r--r--   docs/sections/build.rst                101
-rw-r--r--   docs/sections/configuration.rst          2
-rw-r--r--   docs/sections/humaninsterfaces.rst      32
-rw-r--r--   docs/sections/installation.rst         501
-rw-r--r--   docs/sections/installation_heat.rst    127
-rw-r--r--   docs/sections/installation_manual.rst  500
-rw-r--r--   docs/sections/installation_test.rst     52
-rw-r--r--   docs/sections/logging.rst               32
8 files changed, 824 insertions, 523 deletions
diff --git a/docs/sections/build.rst b/docs/sections/build.rst
index 99a061c2..0290b2bb 100644
--- a/docs/sections/build.rst
+++ b/docs/sections/build.rst
@@ -4,20 +4,107 @@
Build
=====
-.. note::
- * This section is used to describe how a software component is built from source
- into something ready for use either in a run-time environment or to build other
- components.
+Build
+=====
+
+
+Description
+-----------
+DCAE has multiple code repos, written in several different languages. All DCAE projects are built in a similar fashion, as Maven projects following the Maven framework. Although many DCAE projects are not written in Java, adopting the Maven framework helps include DCAE projects in the overall ONAP build methodology and CI/CD process.
+
+All DCAE projects use the ONAP oparent project POM as their ancestor. That is, DCAE projects inherit all parameters defined in the oparent project, which include many ONAP-wide configuration parameters such as the locations of the various artifact repos.
+
+A number of DCAE projects are not written in Java. For these projects we use the Codehaus Maven Exec plugin to trigger a Bash script at various phases of the Maven lifecycle. The script is mvn-phase-script.sh, located at the root of each non-Java DCAE project. It is in this script that the actual build operations are performed for the different Maven phases. For example, for a Python project, the Maven test phase actually triggers a call to tox to run the project's unit tests.
+
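+A minimal sketch of how such a phase script may dispatch on the Maven phase (illustrative only; the actual mvn-phase-script.sh in each repo differs)::
+
+  #!/bin/bash
+  # The exec plugin passes the current Maven phase as an argument.
+  PHASE="$1"
+  case "$PHASE" in
+    test)
+      # For Python projects, unit tests are run through tox.
+      tox
+      ;;
+    deploy)
+      # Push the built artifacts (scripts, wagons, images) to the ONAP repos.
+      echo "uploading artifacts ..."
+      ;;
+    *)
+      echo "no action for phase $PHASE"
+      ;;
+  esac
+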
+Below is a list of the repos, their sub-modules, and the languages they are written in.
+
+* dcaegen2
+
+ - docs (rst)
+ - platformdoc (mkdoc)
+
+* dcaegen2.analytics
+
+* dcaegen2.analytics.tca
+
+ - dcae-analytics-aai (Java)
+ - dcae-analytics-cdap-common (Java)
+ - dcae-analytics-cdap-plugins (Java)
+ - dcae-analytics-cdap-tca (Java)
+ - dcae-analytics-common (Java)
+ - dcae-analytics-dmaap (Java)
+ - dcae-analytics-it (Java)
+ - dcae-analytics-model (Java)
+ - dcae-analytics-tca (Java)
+ - dcae-analytics-test (Java)
+ - dpo (Java)
+
+* dcaegen2.collectors
+
+ - dcaegen2.collectors.snmptrap (Java)
+ - dcaegen2.collectors.ves (Python)
+
+* dcaegen2.deployments
+
+ - bootstrap (bash)
+ - cloud_init (bash)
+ - scripts (bash, python)
+
+* dcaegen2.platform
- * This section is typically provided for a platform-component, application, and sdk; and
- referenced in developer guides.
+* dcaegen2.platform.blueprints
+
+ - blueprints (yaml)
+ - check-blueprint-vs-input (yaml)
+ - input-templates (yaml)
+
+* dcaegen2.platform.cdapbroker (Erlang)
+
+* dcaegen2.platform.cli
+
+ - component-json-schemas (yaml)
+ - dcae-cli (Python)
+
+* dcaegen2.platform.configbinding (Python)
+
+* dcaegen2.platform.deployment-handler (Python)
+
+* dcaegen2.platform.inventory-api (Clojure)
+
+* dcaegen2.platform.plugins
+
+ - cdap (Python)
+ - dcae-policy (Python)
+ - docker (Python)
+ - relationships (Python)
+
+* dcaegen2.platform.policy-handler (Python)
+
+* dcaegen2.platform.servicechange-handler (Python)
+
+* dcaegen2.utils
+
+ - onap-dcae-cbs-docker-client (Python)
+ - onap-dcae-dcaepolicy-lib (Python)
+ - python-discovery-client (Python)
+ - python-dockering (Python)
+ - scripts (bash)
- * This note must be removed after content has been added.
Environment
-----------
+Building is conducted in a Linux environment that has the basic build tools such as JDK 8, Maven 3, Python 2.7 and 3.6, the Docker engine, etc.
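+
+For example, the presence of the required tools can be verified with the following commands (a sketch; exact versions may vary by release)::
+
+  java -version      # expect a JDK 8
+  mvn -version       # expect Maven 3.x
+  python2 --version  # expect Python 2.7
+  python3 --version  # expect Python 3.6
+  docker --version
+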
Steps
-----
+Because of the uniform adoption of the Maven framework, each project can be built by running the standard Maven build commands such as mvn clean, mvn install, and mvn deploy. For projects with submodules, the POM file in the project root descends into the submodules and builds each of them.
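+
+For example, from the root of a cloned DCAE repo (a sketch; the exact goals used depend on the repo)::
+
+  mvn clean install   # build and run unit tests, descending into any submodules
+  mvn deploy          # additionally push the built artifacts to the configured repos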
+
+
+Artifacts
+---------
+Building DCAE projects produces four different kinds of artifacts: Java jar files, raw file artifacts (including yaml files, scripts, wagon packages, etc.), PyPI packages, and Docker container images.
+
+
+
diff --git a/docs/sections/configuration.rst b/docs/sections/configuration.rst
index 770da288..80541b98 100644
--- a/docs/sections/configuration.rst
+++ b/docs/sections/configuration.rst
@@ -5,7 +5,7 @@ Configuration
=============
DACEGEN2 platform deploys its components via Cloudify Blueprints. Below is the list of Blueprints included in ONAP DCAEGEN2
-and details for how to configure them.
+and details on how to configure them. For how to configure the deployment of the DCAE platform and service components, please see the Installation document: ./installation.rst.
.. toctree::
:maxdepth: 1
diff --git a/docs/sections/humaninsterfaces.rst b/docs/sections/humaninsterfaces.rst
index 42928460..3afcac30 100644
--- a/docs/sections/humaninsterfaces.rst
+++ b/docs/sections/humaninsterfaces.rst
@@ -4,14 +4,30 @@
Human Interfaces
================
-.. note::
- * This section is used to describe a software component's command line and graphical
- user interfaces.
-
- * This section is typically: provided for a platform-component and application; and
- referenced from user guides.
-
- * This note must be removed after content has been added.
+DCAE provides a number of interfaces for users to interact with the DCAE system.
+
+1. DCAE Bootstrap VM
+ * The DCAE bootstrap VM accepts ssh connections with the standard access key.
+ * After ssh-ing into the VM, the DCAE bootstrap Docker container can be accessed via the "docker exec" command.
+
+2. DCAE Cloudify Manager
+ * The DCAE Cloudify Manager VM accepts ssh connections with the standard access key. The access account is **centos** because this is a CentOS 7 VM.
+ * The Cloudify Manager GUI can be accessed from http://{{CLOUDIFY_MANAGER_VM_IP}} .
+ * The standard Cloudify CLI is available, as specified here: http://cloudify.co/guide/3.2/cli-general.html .
+
+3. DCAE Consul Cluster
+ * The DCAE Consul Cluster VMs accept ssh connections with the standard access key.
+ * The Consul GUI can be accessed from http://{{ANY_CONSUL_CLUSTER_VM_IP}}:8500 .
+ * The standard Consul HTTP API is available, as specified here: https://www.consul.io/api/index.html .
+ * The standard Consul CLI is available, as specified here: https://www.consul.io/docs/commands/index.html .
+
+4. DCAE Docker hosts
+ * The DCAE Docker host VMs accept ssh connections with the standard access key.
+ * After ssh-ing into a VM, the running Docker containers can be accessed via the "docker exec" command (see the example at the end of this list).
+
+5. DCAE CDAP
+ * The CDAP VMs accept ssh connections with the standard access key.
+ * The CDAP GUI can be accessed from http://{{CDAP02_VM_IP}}:11011 .
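+
+As an example of ssh-ing into a Docker host VM and using the "docker exec" command (a sketch; the key file, login user, and container name are illustrative)::
+
+  ssh -i onap_key ubuntu@{{DOCKER_HOST_VM_IP}}
+  sudo docker ps                                  # list the running containers
+  sudo docker exec -it <container-name> /bin/sh   # open a shell inside a container
+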
diff --git a/docs/sections/installation.rst b/docs/sections/installation.rst
index 070e36ab..f6c7d0d9 100644
--- a/docs/sections/installation.rst
+++ b/docs/sections/installation.rst
@@ -1,500 +1,11 @@
DCAE mS Installation
====================
-The below steps covers manual setup of DCAE VM’s and DCAE service
-components.
+.. toctree::
+ :maxdepth: 1
+ :titlesonly:
-VESCollector
-------------
-
-
-DCAE VES Collector can be configured on VM with ubuntu-16.04 image
-(m1.small should suffice if this is only service) and 20Gb cinder
-storage
-
-1. Install docker
-
-  sudo apt-get update
-
-  sudo apt install `docker.io <http://docker.io/>`__
-
-2. Pull the latest container from onap nexus
-
- sudo docker login -u docker -p docker
- `nexus.onap.org <http://nexus.onap.org/>`__:10001
-
- sudo docker pull
- `nexus.onap.org <http://nexus.onap.org/>`__:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1
-
-3. Start the VESCollector with below command
-
- sudo docker run -d --name vescollector -p 8080:8080/tcp -p
- 8443:8443/tcp -P -e DMAAPHOST='<dmaap IP>'
- `nexus.onap.org <http://nexus.onap.org/>`__:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1
-
-  Note:  Change the dmaaphost to required DMAAP ip. To change the
- dmaap information for a running container,  stop the active
- container and rerun above command changing the dmaap IP.
-
-4. Verification
-
-i. Check logs under container /opt/app/VESCollector/logs/collector.log
- for errors
-
-ii. If no active feed, you can simulate an event into collector via curl
-
- curl -i  -X POST -d @<sampleves> --header "Content-Type:
- application/json" http://localhost:8080/eventListener/v5 -k
-
- Note: If DMAAPHOST provided is invalid, you will see exception
- around publish on the collector.logs (collector queues and attempts
- to resend the event hence exceptions reported will be periodic). 
-
-i. Below two topic configuration are pre-set into this container.  When
- valid DMAAP instance ip was provided and VES events are received,
- the collector will post to below topics.
-
- Fault -
-  http://<dmaaphost>:3904/events/unauthenticated.SEC\_FAULT\_OUTPUT
-
- Measurement
- -http://<dmaaphost>:3904/events/unauthenticated.SEC\_MEASUREMENT\_OUTPUT
-
-VM Init
-~~~~~~
-
-To address windriver server in-stability, the below **init.sh** script
-was used to start the container on VM restart.  
-
-+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-| #!/bin/sh |
-| |
-| sudo docker ps \| grep "vescollector" |
-| |
-| if [ $? -ne 0 ]; then |
-| |
-|         sudo docker login -u docker -p docker nexus.onap.org:10001 |
-| |
-|         sudo docker pull nexus.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1 |
-| |
-|         sudo docker rm -f vescollector |
-| |
-|         echo "Collector process not running - $(date)" >> /home/ubuntu/startuplog |
-| |
-|         sudo docker run -d --name vescollector -p 8080:8080/tcp -p 8443:8443/tcp -P -e DMAAPHOST='10.12.25.96' nexus.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1 |
-| |
-| else |
-| |
-|         echo "Collector process running - $(date)" >> /home/ubuntu/startuplog |
-| |
-| fi |
-+==============================================================================================================================================================================================+
-+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
-
-This script was invoked via VM init script (rc.d).
-
-ln -s /home/ubuntu/init.sh /etc/init.d/init.sh
-
-sudo  update-rc.d init.sh start 2
-
-
-ThresholdCrossingAnalysis (TCA/CDAP)
-------------------------------------
-
-The platform deploys CDAP as cluster and instantiates TCA. For the
-manual setup, we will leverage the CDAP SDK docker container to deploy
-TCA instances.  To setup TCA, choose VM with ubuntu-16.04 image,
-m1.medium size and 50gb cinder volumes.
-
-1. Install docker
-
-  sudo apt-get update
-
-  sudo apt install `docker.io <http://docker.io/>`__
-
-2. Pull CDAP SDK container
-
-sudo docker pull caskdata/cdap-standalone:4.1.2
-
-3. Deploy and run the CDAP container
-
- sudo docker run -d --name cdap-sdk-2 -p 11011:11011 -p 11015:11015
- caskdata/cdap-standalone:4.1.2
-
-4. Create Namespace on CDAP application
-
-curl -X PUT http://localhost:11015/v3/namespaces/cdap_tca_hi_lo
-
-5. Create TCA app config file - "tca\_app\_config.json" under ~ubuntu as
- below
-
-+------------------------------------------------------------------------------+
-| { |
-| |
-|  "artifact": { |
-| |
-|   "name": "dcae-analytics-cdap-tca", |
-| |
-|   "version": "2.0.0", |
-| |
-|   "scope": "user" |
-| |
-|  }, |
-| |
-|  "config": { |
-| |
-|   "appName": "dcae-tca", |
-| |
-|   "appDescription": "DCAE Analytics Threshold Crossing Alert Application", |
-| |
-|   "tcaVESMessageStatusTableName": "TCAVESMessageStatusTable", |
-| |
-|   "tcaVESMessageStatusTableTTLSeconds": 86400.0, |
-| |
-|   "tcaAlertsAbatementTableName": "TCAAlertsAbatementTable", |
-| |
-|   "tcaAlertsAbatementTableTTLSeconds": 1728000.0, |
-| |
-|   "tcaVESAlertsTableName": "TCAVESAlertsTable", |
-| |
-|   "tcaVESAlertsTableTTLSeconds": 1728000.0, |
-| |
-|   "thresholdCalculatorFlowletInstances": 2.0, |
-| |
-|   "tcaSubscriberOutputStreamName": "TCASubscriberOutputStream" |
-| |
-|  } |
-| |
-| } |
-+==============================================================================+
-+------------------------------------------------------------------------------+
-
-6. Create TCA app preference file under ~ubuntu as below
-
-+--------------------------------------------------------------------------------------------------------------------------------------------+
-| { |
-| |
-|   "publisherContentType" : "application/json", |
-| |
-|   "publisherHostName" : "10.12.25.96", |
-| |
-|   "publisherHostPort" : "3904", |
-| |
-|   "publisherMaxBatchSize" : "1", |
-| |
-|   "publisherMaxRecoveryQueueSize" : "100000", |
-| |
-|   "publisherPollingInterval" : "20000", |
-| |
-|   "publisherProtocol" : "http", |
-| |
-|   "publisherTopicName" : "unauthenticated.DCAE\_CL\_OUTPUT", |
-| |
-|   "subscriberConsumerGroup" : "OpenDCAE-c1", |
-| |
-|   "subscriberConsumerId" : "c1", |
-| |
-|   "subscriberContentType" : "application/json", |
-| |
-|   "subscriberHostName" : "10.12.25.96", |
-| |
-|   "subscriberHostPort" : "3904", |
-| |
-|   "subscriberMessageLimit" : "-1", |
-| |
-|   "subscriberPollingInterval" : "20000", |
-| |
-|   "subscriberProtocol" : "http", |
-| |
-|   "subscriberTimeoutMS" : "-1", |
-| |
-|   "subscriberTopicName" : "unauthenticated.SEC\_MEASUREMENT\_OUTPUT", |
-| |
-|   "enableAAIEnrichment" : false, |
-| |
-|   "aaiEnrichmentHost" : "10.12.25.72", |
-| |
-|   "aaiEnrichmentPortNumber" : 8443, |
-| |
-|   "aaiEnrichmentProtocol" : "https", |
-| |
-|   "aaiEnrichmentUserName" : "DCAE", |
-| |
-|   "aaiEnrichmentUserPassword" : "DCAE", |
-| |
-|   "aaiEnrichmentIgnoreSSLCertificateErrors" : false, |
-| |
-|   "aaiVNFEnrichmentAPIPath" : "/aai/v11/network/generic-vnfs/generic-vnf", |
-| |
-|   "aaiVMEnrichmentAPIPath" :  "/aai/v11/search/nodes-query", |
-| |
-|   "tca\_policy" : "{ |
-| |
-|         \\"domain\\": \\"measurementsForVfScaling\\", |
-| |
-|         \\"metricsPerEventName\\": [{ |
-| |
-|                 \\"eventName\\": \\"vFirewallBroadcastPackets\\", |
-| |
-|                 \\"controlLoopSchemaType\\": \\"VNF\\", |
-| |
-|                 \\"policyScope\\": \\"DCAE\\", |
-| |
-|                 \\"policyName\\": \\"DCAE.Config\_tca-hi-lo\\", |
-| |
-|                 \\"policyVersion\\": \\"v0.0.1\\", |
-| |
-|                 \\"thresholds\\": [{ |
-| |
-|                         \\"closedLoopControlName\\": \\"ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a\\", |
-| |
-|                         \\"version\\": \\"1.0.2\\", |
-| |
-|                         \\"fieldPath\\": \\"$.event.measurementsForVfScalingFields.vNicUsageArray[\*].receivedTotalPacketsDelta\\", |
-| |
-|                         \\"thresholdValue\\": 300, |
-| |
-|                         \\"direction\\": \\"LESS\_OR\_EQUAL\\", |
-| |
-|                         \\"severity\\": \\"MAJOR\\", |
-| |
-|                         \\"closedLoopEventStatus\\": \\"ONSET\\" |
-| |
-|                 }, { |
-| |
-|                         \\"closedLoopControlName\\": \\"ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a\\", |
-| |
-|                         \\"version\\": \\"1.0.2\\", |
-| |
-|                         \\"fieldPath\\": \\"$.event.measurementsForVfScalingFields.vNicUsageArray[\*].receivedTotalPacketsDelta\\", |
-| |
-|                         \\"thresholdValue\\": 700, |
-| |
-|                         \\"direction\\": \\"GREATER\_OR\_EQUAL\\", |
-| |
-|                         \\"severity\\": \\"CRITICAL\\", |
-| |
-|                         \\"closedLoopEventStatus\\": \\"ONSET\\" |
-| |
-|                 }] |
-| |
-|         }, { |
-| |
-|                 \\"eventName\\": \\"vLoadBalancer\\", |
-| |
-|                 \\"controlLoopSchemaType\\": \\"VM\\", |
-| |
-|                 \\"policyScope\\": \\"DCAE\\", |
-| |
-|                 \\"policyName\\": \\"DCAE.Config\_tca-hi-lo\\", |
-| |
-|                 \\"policyVersion\\": \\"v0.0.1\\", |
-| |
-|                 \\"thresholds\\": [{ |
-| |
-|                         \\"closedLoopControlName\\": \\"ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3\\", |
-| |
-|                         \\"version\\": \\"1.0.2\\", |
-| |
-|                         \\"fieldPath\\": \\"$.event.measurementsForVfScalingFields.vNicUsageArray[\*].receivedTotalPacketsDelta\\", |
-| |
-|                         \\"thresholdValue\\": 300, |
-| |
-|                         \\"direction\\": \\"GREATER\_OR\_EQUAL\\", |
-| |
-|                         \\"severity\\": \\"CRITICAL\\", |
-| |
-|                         \\"closedLoopEventStatus\\": \\"ONSET\\" |
-| |
-|                 }] |
-| |
-|         }, { |
-| |
-|                 \\"eventName\\": \\"Measurement\_vGMUX\\", |
-| |
-|                 \\"controlLoopSchemaType\\": \\"VNF\\", |
-| |
-|                 \\"policyScope\\": \\"DCAE\\", |
-| |
-|                 \\"policyName\\": \\"DCAE.Config\_tca-hi-lo\\", |
-| |
-|                 \\"policyVersion\\": \\"v0.0.1\\", |
-| |
-|                 \\"thresholds\\": [{ |
-| |
-|                         \\"closedLoopControlName\\": \\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\\", |
-| |
-|                         \\"version\\": \\"1.0.2\\", |
-| |
-|                         \\"fieldPath\\": \\"$.event.measurementsForVfScalingFields.additionalMeasurements[\*].arrayOfFields[0].value\\", |
-| |
-|                         \\"thresholdValue\\": 0, |
-| |
-|                         \\"direction\\": \\"EQUAL\\", |
-| |
-|                         \\"severity\\": \\"MAJOR\\", |
-| |
-|                         \\"closedLoopEventStatus\\": \\"ABATED\\" |
-| |
-|                 }, { |
-| |
-|                         \\"closedLoopControlName\\": \\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\\", |
-| |
-|                         \\"version\\": \\"1.0.2\\", |
-| |
-|                         \\"fieldPath\\": \\"$.event.measurementsForVfScalingFields.additionalMeasurements[\*].arrayOfFields[0].value\\", |
-| |
-|                         \\"thresholdValue\\": 0, |
-| |
-|                         \\"direction\\": \\"GREATER\\", |
-| |
-|                         \\"severity\\": \\"CRITICAL\\", |
-| |
-|                         \\"closedLoopEventStatus\\": \\"ONSET\\" |
-| |
-|                 }] |
-| |
-|         }] |
-| |
-| }" |
-| |
-| } |
-+============================================================================================================================================+
-+--------------------------------------------------------------------------------------------------------------------------------------------+
-
-  Note: Dmaap configuration are specified on this file on
- publisherHostName and subscriberHostName. To be changed as
- required\*\*
-
-7. Copy below script to CDAP server (this gets latest image from nexus
- and deploys TCA application) and execute it
-
-+--------------------------------------------------------------------------------------------------------------------------------------------------+
-| #!/bin/sh |
-| |
-| TCA\_JAR=dcae-analytics-cdap-tca-2.0.0.jar |
-| |
-| rm -f /home/ubuntu/$TCA\_JAR |
-| |
-| cd /home/ubuntu/ |
-| |
-| wget https://nexus.onap.org/service/local/repositories/staging/content/org/onap/dcaegen2/analytics/tca/dcae-analytics-cdap-tca/2.0.0/$TCA\_JAR |
-| |
-| if [ $? -eq 0 ]; then |
-| |
-|         if [ -f /home/ubuntu/$TCA\_JAR ]; then |
-| |
-|                 echo "Restarting TCA CDAP application using $TCA\_JAR artifact" |
-| |
-|         else |
-| |
-|                 echo "ERROR: $TCA\_JAR missing" |
-| |
-|                 exit 1 |
-| |
-|         fi |
-| |
-| else |
-| |
-|         echo "ERROR: $TCA\_JAR not found in nexus" |
-| |
-|         exit 1 |
-| |
-| fi |
-| |
-| # stop programs |
-| |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/stop |
-| |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/stop |
-| |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/stop |
-| |
-| # delete application |
-| |
-| curl -X DELETE http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca |
-| |
-| # delete artifact |
-| |
-| curl -X DELETE http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/artifacts/dcae-analytics-cdap-tca/versions/2.0.0 |
-| |
-| # load artifact |
-| |
-| curl -X POST --data-binary @/home/ubuntu/$TCA\_JAR http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/artifacts/dcae-analytics-cdap-tca |
-| |
-| # create app |
-| |
-| curl -X PUT -d @/home/ubuntu/tca\_app\_config.json http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca |
-| |
-| # load preferences |
-| |
-| curl -X PUT -d @/home/ubuntu/tca\_app\_preferences.json http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/preferences |
-| |
-| # start programs |
-| |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/start |
-| |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/start |
-| |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/start |
-| |
-| echo |
-| |
-| # get status of programs |
-| |
-| curl http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/status |
-| |
-| curl http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/status |
-| |
-| curl http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/status |
-| |
-| echo |
-+==================================================================================================================================================+
-+--------------------------------------------------------------------------------------------------------------------------------------------------+
-
-8. Verify TCA application and logs via CDAP GUI processes
-
- The overall flow can be checked here
-
-TCA Configuration Change
-~~~~~~~~~~~~~~~~~~~~~~~
-
-Typical configuration changes include changing DMAAP host and/or Policy configuration. If necessary, modify the file on step #6 and run the script noted as step #7 to redeploy TCA with updated configuration.
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-VM Init
-~~~~~~
-
-To address windriver server in-stability, the below **init.sh** script
-was used to restart the container on VM restart.  This script was
-invoked via VM init script (rc.d).
-
-+------------------------------------------------------------------------------------------------------------------------------+
-| #!/bin/sh |
-| |
-| #docker run -d --name cdap-sdk -p 11011:11011 -p 11015:11015 caskdata/cdap-standalone:4.1.2 |
-| |
-| sudo docker restart cdap-sdk-2 |
-| |
-| sleep 30 |
-| |
-| # start program |
-| |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/start |
-| |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/start |
-| |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/start |
-+==============================================================================================================================+
-+------------------------------------------------------------------------------------------------------------------------------+
-
-
-This script was invoked via VM init script (rc.d).
-
-ln -s /home/ubuntu/init.sh /etc/init.d/init.sh
-
-sudo  update-rc.d init.sh start 2
+ ./installation_heat.rst
+ ./installation_manual.rst
+ ./installation_test.rst
diff --git a/docs/sections/installation_heat.rst b/docs/sections/installation_heat.rst
new file mode 100644
index 00000000..b90ebb41
--- /dev/null
+++ b/docs/sections/installation_heat.rst
@@ -0,0 +1,127 @@
+OpenStack Heat Template Based ONAP Deployment
+=============================================
+
+For ONAP R1, ONAP is deployed using an OpenStack Heat template. DCAE is also deployed through this process. This document describes the details of the Heat template deployment process and how to configure DCAE-related parameters in the Heat template and its parameter file.
+
+
+ONAP Deployment
+---------------
+
+ONAP supports an OpenStack Heat template based system deployment. When a new "stack" is created using the template, the following virtual resources will be launched in the target OpenStack tenant:
+
+* A four-character alphanumerical random text string, to be used as the ID of the deployment. It is denoted as {{RAND}} in the remainder of this document.
+* A private OAM network interconnecting all ONAP VMs, named oam_onap_{{RAND}}.
+* A virtual router interconnecting the private OAM network with the external network of the OpenStack installation.
+* A key-pair named onap_key_{{RAND}}.
+* A security group named onap_sg_{{RAND}}.
+* A list of VMs for ONAP components. Each VM has one NIC connected to the OAM network and is assigned a fixed IP. Each VM is also assigned a floating IP address from the external network. The VM hostnames are named consistently across different ONAP deployments: a user-defined prefix, denoted as {{PREFIX}}, followed by a descriptive string for the ONAP component this VM runs, optionally followed by a sub-function name. The VMs of the same ONAP role across different ONAP deployments will always have the same OAM network IP address. For example, the Message Router will always have the OAM network IP address of 10.0.11.1.
+
+ ============== ========================== ==========================
+ ONAP Role      VM (Neutron) hostname      OAM IP address(s)
+ ============== ========================== ==========================
+ A&AI           {{PREFIX}}-aai-inst1       10.0.1.1
+ A&AI           {{PREFIX}}-aai-inst2       10.0.1.2
+ APPC           {{PREFIX}}-appc            10.0.2.1
+ SDC            {{PREFIX}}-sdc             10.0.3.1
+ DCAE           {{PREFIX}}-dcae-bootstrap  10.0.4.1
+ SO             {{PREFIX}}-so              10.0.5.1
+ Policy         {{PREFIX}}-policy          10.0.6.1
+ SDN-C          {{PREFIX}}-sdnc            10.0.7.1
+ VID            {{PREFIX}}-vid             10.0.8.1
+ Portal         {{PREFIX}}-portal          10.0.9.1
+ Robot TF       {{PREFIX}}-robot           10.0.10.1
+ Message Router {{PREFIX}}-message-router  10.0.11.1
+ CLAMP          {{PREFIX}}-clamp           10.0.12.1
+ MultiService   {{PREFIX}}-multi-service   10.0.14.1
+ Private DNS    {{PREFIX}}-dns-server      10.0.100.1
+ ============== ========================== ==========================
+
+* A list of DCAE VMs, launched by the {{PREFIX}}-dcae-bootstrap VM. These VMs are also connected to the OAM network and associated with floating IP addresses on the external network. What's different is that their OAM IP addresses are DHCP assigned, not statically assigned. The table below lists the DCAE VMs that are deployed for the R1 use cases.
+
+ ===================== ============================
+ DCAE Role             VM (Neutron) hostname(s)
+ ===================== ============================
+ Cloudify Manager      {{DCAEPREFIX}}orcl{00}
+ Consul cluster        {{DCAEPREFIX}}cnsl{00-02}
+ Platform Docker Host  {{DCAEPREFIX}}dokp{00}
+ Service Docker Host   {{DCAEPREFIX}}doks{00}
+ CDAP cluster          {{DCAEPREFIX}}cdap{00-06}
+ Postgres              {{DCAEPREFIX}}pgvm{00}
+ ===================== ============================
+
+DNS
+===
+
+ONAP VMs deployed by the Heat template are all registered with the private DNS server under the domain name **simpledemo.onap.org**. This domain cannot be exposed anywhere outside of the ONAP deployment because all ONAP deployments use the same domain name and the same address space. Hence these hostnames are only resolvable within the same ONAP deployment.
+
+On the other hand, DCAE VMs, although attached to the same OAM network as the rest of the ONAP VMs, all have dynamic IP addresses allocated by the DHCP server and rely on a DNS-based solution for registering the hostname-to-IP-address mapping. DCAE VMs of different ONAP deployments are registered under different zones, named **{{RAND}}.dcaeg2.onap.org**. The API that DCAE calls to request the DNS zone registration and record registration is provided by Designate, OpenStack's DNS-as-a-Service technology.
+
+To enable VMs spun up by the ONAP Heat template and by DCAE's bootstrap process to communicate with each other using hostnames, all VMs are configured to use the private DNS server launched by the Heat template as their name resolution server. In the configuration of this private DNS server, the DNS server that backs the Designate API frontend is used as the DNS forwarder.
+
+For simpledemo.onap.org VM to simpledemo.onap.org VM communications and {{RAND}}.dcaeg2.onap.org VM to simpledemo.onap.org VM communications, the resolution is completed by the private DNS server itself. For simpledemo.onap.org VM to {{RAND}}.dcaeg2.onap.org VM communications and {{RAND}}.dcaeg2.onap.org VM to {{RAND}}.dcaeg2.onap.org VM communications, the resolution request is forwarded from the private DNS server to the Designate DNS server and resolved there. Communications to the outside world are also resolved by the Designate DNS server if the hostname belongs to a zone registered under it, or are forwarded to the next DNS server, either an organizational DNS server or a DNS server higher in the global DNS hierarchy.
+
+For OpenStack installations where there is no existing DNS service, a "proxied" Designate solution is supported. In this arrangement, the DCAE bootstrap process uses the MultiCloud service node as its Keystone API endpoint. For non-Designate API calls, the MultiCloud service node forwards them to the underlying cloud provider. However, for Designate API calls, the MultiCloud service node forwards them to an off-stack Designate server.
+
+Heat Template Parameters
+========================
+
+Here we list Heat template parameters that are related to DCAE operation. Bold values are the default values that should be used "as-is".
+
+* public_net_id: the UUID of the external network where floating IPs are assigned from. For example: 971040b2-7059-49dc-b220-4fab50cb2ad4
+* public_net_name: the name of the external network where floating IPs are assigned from. For example: external
+* openstack_tenant_id: the ID of the OpenStack tenant/project that will host the ONAP deployment. For example: dd327af0542e47d7853e0470fe9ad625.
+* openstack_tenant_name: the name of the OpenStack tenant/project that will host the ONAP deployment. For example: Integration-SB-01.
+* openstack_username: the username for accessing the OpenStack tenant specified by openstack_tenant_id/ openstack_tenant_name.
+* openstack_api_key: the password for accessing the OpenStack tenant specified by openstack_tenant_id/ openstack_tenant_name.
+* openstack_auth_method: **password**
+* openstack_region: **RegionOne**
+* cloud_env: **openstack**
+* dns_forwarder: This is the DNS forwarder for the ONAP deployment private DNS server. It must point to the IP address of the Designate DNS. For example '10.12.25.5'.
+* dcae_ip_addr: **10.0.4.1**. The static IP address on the OAM network that is assigned to the DCAE bootstrapping VM.
+* dnsaas_config_enabled: Whether a proxied Designate solution is used. For example: **true**.
+* dnsaas_region: The region of the Designate providing OpenStack. For example: RegionOne
+* dnsaas_tenant_name: The tenant/project name of the Designate providing OpenStack. For example Integration-SB-01.
+* dnsaas_keystone_url: The keystone URL of the Designate providing OpenStack. For example http://10.12.25.5:5000/v3.
+* dnsaas_username: The username for accessing the Designate providing OpenStack.
+* dnsaas_password: The password for accessing the Designate providing OpenStack.
+* dcae_keystone_url: This is the API endpoint of the MultiCloud service node. **"http://10.0.14.1/api/multicloud-titanium_cloud/v0/pod25_RegionOne/identity/v2.0"**
+* dcae_centos_7_image: The name of the CentOS-7 image.
+* dcae_domain: The domain under which ONAP deployment zones are registered. For example: 'dcaeg2.onap.org'.
+* dcae_public_key: the public key of the onap_key_{{RAND}} key-pair.
+* dcae_private_key: The private key of the onap_key_{{RAND}} key-pair (put a literal \n at the end of each line of text).
+
+Heat Deployment
+===============
+
+The Heat template can be deployed using the OpenStack CLI. For more details, please visit the ONAP demo project. All files referenced in this section can be found under the **demo** project.
+
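+A minimal sketch of such a deployment (assuming the OpenStack CLI with Heat support is installed; the stack name and environment file name are illustrative)::
+
+  cd demo/heat/ONAP
+  openstack stack create -t onap_openstack.yaml -e onap_openstack.env onap-r1
+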
+In the Heat template file **heat/ONAP/onap_openstack.yaml**, there is one block of specification towards the end of the file that defines the dcae_bootstrap VM. This block follows the same approach as the other VMs defined in the same template. That is, a number of parameters within the Heat context, such as the floating IP addresses of the VMs and parameters provided in the user-defined parameter env file, are written to disk files under the /opt/config directory of the VM at cloud-init time. Then a script found under the **boot** directory of the **demo** project, **{{VMNAME}}_install.sh**, is called to prepare the VM. At the end of this script, another script, **{{VMNAME}}_vm_init.sh**, is called.
+
+For DCAE bootstrap VM, the dcae2_vm_init.sh script completes the following steps:
+
+* If the proxied Designate solution is used:
+ * Wait for A&AI to become ready
+ * Register MultiCloud proxy information into A&AI
+ * Wait for the MultiCloud proxy node to become ready
+ * Register the DNS zone for the ONAP installation, **{{RAND}}.dcaeg2.onap.org**
+* Runs the DCAE bootstrap Docker container, which will:
+ * Install Cloudify locally
+ * Launch the Cloudify Manager VM
+ * Launch the Consul cluster
+ * Launch the platform component Docker host
+ * Launch the service component Docker host
+ * Launch the CDAP cluster
+ * Install Config Binding Service onto platform component Docker host
+ * Launch the Postgres VM
+ * Install Platform Inventory onto platform component Docker host
+ * Install Deployment Handler onto platform component Docker host
+ * Install Policy Handler onto platform component Docker host
+ * Install CDAP Broker onto platform component Docker host
+ * Install VES collector onto service component Docker host
+ * Install TCA analytics onto CDAP cluster
+ * Install Holmes Engine onto service component Docker host
+ * Install Holmes Rules onto service component Docker host
+* Starts an Nginx Docker container to proxy the healthcheck API to Consul
+* Enters an infinite sleep loop to keep the bootstrap container up
+
+
diff --git a/docs/sections/installation_manual.rst b/docs/sections/installation_manual.rst
new file mode 100644
index 00000000..070e36ab
--- /dev/null
+++ b/docs/sections/installation_manual.rst
@@ -0,0 +1,500 @@
+DCAE mS Installation
+====================
+
+The steps below cover the manual setup of DCAE VMs and DCAE service
+components.
+
+VESCollector
+------------
+
+
+The DCAE VES Collector can be configured on a VM with an ubuntu-16.04 image
+(m1.small should suffice if this is the only service) and 20 GB Cinder
+storage.
+
+1. Install docker
+
+  sudo apt-get update
+
+  sudo apt install `docker.io <http://docker.io/>`__
+
+2. Pull the latest container from onap nexus
+
+ sudo docker login -u docker -p docker
+ `nexus.onap.org <http://nexus.onap.org/>`__:10001
+
+ sudo docker pull
+ `nexus.onap.org <http://nexus.onap.org/>`__:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1
+
+3. Start the VESCollector with below command
+
+ sudo docker run -d --name vescollector -p 8080:8080/tcp -p
+ 8443:8443/tcp -P -e DMAAPHOST='<dmaap IP>'
+ `nexus.onap.org <http://nexus.onap.org/>`__:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1
+
+  Note: Change the dmaaphost to the required DMaaP IP. To change the
+ DMaaP information for a running container, stop the active
+ container and rerun the above command with the new DMaaP IP.
+
+4. Verification
+
+i. Check logs under container /opt/app/VESCollector/logs/collector.log
+ for errors
+
+ii. If there is no active feed, you can simulate an event into the collector via curl
+
+ curl -i  -X POST -d @<sampleves> --header "Content-Type:
+ application/json" http://localhost:8080/eventListener/v5 -k
+
+ Note: If the DMAAPHOST provided is invalid, you will see exceptions
+ around publish in collector.log (the collector queues and attempts
+ to resend the event, hence the exceptions reported will be periodic).
+
+iii. The two topic configurations below are pre-set into this container. When
+ a valid DMaaP instance IP is provided and VES events are received,
+ the collector will post to the below topics.
+
+ Fault -
+  http://<dmaaphost>:3904/events/unauthenticated.SEC\_FAULT\_OUTPUT
+
+ Measurement
+ -http://<dmaaphost>:3904/events/unauthenticated.SEC\_MEASUREMENT\_OUTPUT
+
+VM Init
+~~~~~~~
+
+To address WindRiver server instability, the below **init.sh** script
+was used to start the container on VM restart.
+
++----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| #!/bin/sh |
+| |
+| sudo docker ps \| grep "vescollector" |
+| |
+| if [ $? -ne 0 ]; then |
+| |
+|         sudo docker login -u docker -p docker nexus.onap.org:10001 |
+| |
+|         sudo docker pull nexus.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1 |
+| |
+|         sudo docker rm -f vescollector |
+| |
+|         echo "Collector process not running - $(date)" >> /home/ubuntu/startuplog |
+| |
+|         sudo docker run -d --name vescollector -p 8080:8080/tcp -p 8443:8443/tcp -P -e DMAAPHOST='10.12.25.96' nexus.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1 |
+| |
+| else |
+| |
+|         echo "Collector process running - $(date)" >> /home/ubuntu/startuplog |
+| |
+| fi |
++==============================================================================================================================================================================================+
++----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+
+This script was invoked via VM init script (rc.d).
+
+ln -s /home/ubuntu/init.sh /etc/init.d/init.sh
+
+sudo  update-rc.d init.sh start 2
+
+
+ThresholdCrossingAnalysis (TCA/CDAP)
+------------------------------------
+
+The platform deploys CDAP as a cluster and instantiates TCA. For the
+manual setup, we will leverage the CDAP SDK Docker container to deploy
+TCA instances. To set up TCA, choose a VM with an ubuntu-16.04 image,
+m1.medium size, and 50 GB Cinder volumes.
+
+1. Install docker
+
+  sudo apt-get update
+
+  sudo apt install `docker.io <http://docker.io/>`__
+
+2. Pull CDAP SDK container
+
+sudo docker pull caskdata/cdap-standalone:4.1.2
+
+3. Deploy and run the CDAP container
+
+ sudo docker run -d --name cdap-sdk-2 -p 11011:11011 -p 11015:11015
+ caskdata/cdap-standalone:4.1.2
+
+4. Create Namespace on CDAP application
+
+curl -X PUT http://localhost:11015/v3/namespaces/cdap_tca_hi_lo
+
+5. Create TCA app config file - "tca\_app\_config.json" under ~ubuntu as
+ below
+
++------------------------------------------------------------------------------+
+| { |
+| |
+|  "artifact": { |
+| |
+|   "name": "dcae-analytics-cdap-tca", |
+| |
+|   "version": "2.0.0", |
+| |
+|   "scope": "user" |
+| |
+|  }, |
+| |
+|  "config": { |
+| |
+|   "appName": "dcae-tca", |
+| |
+|   "appDescription": "DCAE Analytics Threshold Crossing Alert Application", |
+| |
+|   "tcaVESMessageStatusTableName": "TCAVESMessageStatusTable", |
+| |
+|   "tcaVESMessageStatusTableTTLSeconds": 86400.0, |
+| |
+|   "tcaAlertsAbatementTableName": "TCAAlertsAbatementTable", |
+| |
+|   "tcaAlertsAbatementTableTTLSeconds": 1728000.0, |
+| |
+|   "tcaVESAlertsTableName": "TCAVESAlertsTable", |
+| |
+|   "tcaVESAlertsTableTTLSeconds": 1728000.0, |
+| |
+|   "thresholdCalculatorFlowletInstances": 2.0, |
+| |
+|   "tcaSubscriberOutputStreamName": "TCASubscriberOutputStream" |
+| |
+|  } |
+| |
+| } |
++==============================================================================+
++------------------------------------------------------------------------------+
+
+6. Create TCA app preference file under ~ubuntu as below
+
++--------------------------------------------------------------------------------------------------------------------------------------------+
+| { |
+| |
+|   "publisherContentType" : "application/json", |
+| |
+|   "publisherHostName" : "10.12.25.96", |
+| |
+|   "publisherHostPort" : "3904", |
+| |
+|   "publisherMaxBatchSize" : "1", |
+| |
+|   "publisherMaxRecoveryQueueSize" : "100000", |
+| |
+|   "publisherPollingInterval" : "20000", |
+| |
+|   "publisherProtocol" : "http", |
+| |
+|   "publisherTopicName" : "unauthenticated.DCAE\_CL\_OUTPUT", |
+| |
+|   "subscriberConsumerGroup" : "OpenDCAE-c1", |
+| |
+|   "subscriberConsumerId" : "c1", |
+| |
+|   "subscriberContentType" : "application/json", |
+| |
+|   "subscriberHostName" : "10.12.25.96", |
+| |
+|   "subscriberHostPort" : "3904", |
+| |
+|   "subscriberMessageLimit" : "-1", |
+| |
+|   "subscriberPollingInterval" : "20000", |
+| |
+|   "subscriberProtocol" : "http", |
+| |
+|   "subscriberTimeoutMS" : "-1", |
+| |
+|   "subscriberTopicName" : "unauthenticated.SEC\_MEASUREMENT\_OUTPUT", |
+| |
+|   "enableAAIEnrichment" : false, |
+| |
+|   "aaiEnrichmentHost" : "10.12.25.72", |
+| |
+|   "aaiEnrichmentPortNumber" : 8443, |
+| |
+|   "aaiEnrichmentProtocol" : "https", |
+| |
+|   "aaiEnrichmentUserName" : "DCAE", |
+| |
+|   "aaiEnrichmentUserPassword" : "DCAE", |
+| |
+|   "aaiEnrichmentIgnoreSSLCertificateErrors" : false, |
+| |
+|   "aaiVNFEnrichmentAPIPath" : "/aai/v11/network/generic-vnfs/generic-vnf", |
+| |
+|   "aaiVMEnrichmentAPIPath" :  "/aai/v11/search/nodes-query", |
+| |
+|   "tca\_policy" : "{ |
+| |
+|         \\"domain\\": \\"measurementsForVfScaling\\", |
+| |
+|         \\"metricsPerEventName\\": [{ |
+| |
+|                 \\"eventName\\": \\"vFirewallBroadcastPackets\\", |
+| |
+|                 \\"controlLoopSchemaType\\": \\"VNF\\", |
+| |
+|                 \\"policyScope\\": \\"DCAE\\", |
+| |
+|                 \\"policyName\\": \\"DCAE.Config\_tca-hi-lo\\", |
+| |
+|                 \\"policyVersion\\": \\"v0.0.1\\", |
+| |
+|                 \\"thresholds\\": [{ |
+| |
+|                         \\"closedLoopControlName\\": \\"ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a\\", |
+| |
+|                         \\"version\\": \\"1.0.2\\", |
+| |
+|                         \\"fieldPath\\": \\"$.event.measurementsForVfScalingFields.vNicUsageArray[\*].receivedTotalPacketsDelta\\", |
+| |
+|                         \\"thresholdValue\\": 300, |
+| |
+|                         \\"direction\\": \\"LESS\_OR\_EQUAL\\", |
+| |
+|                         \\"severity\\": \\"MAJOR\\", |
+| |
+|                         \\"closedLoopEventStatus\\": \\"ONSET\\" |
+| |
+|                 }, { |
+| |
+|                         \\"closedLoopControlName\\": \\"ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a\\", |
+| |
+|                         \\"version\\": \\"1.0.2\\", |
+| |
+|                         \\"fieldPath\\": \\"$.event.measurementsForVfScalingFields.vNicUsageArray[\*].receivedTotalPacketsDelta\\", |
+| |
+|                         \\"thresholdValue\\": 700, |
+| |
+|                         \\"direction\\": \\"GREATER\_OR\_EQUAL\\", |
+| |
+|                         \\"severity\\": \\"CRITICAL\\", |
+| |
+|                         \\"closedLoopEventStatus\\": \\"ONSET\\" |
+| |
+|                 }] |
+| |
+|         }, { |
+| |
+|                 \\"eventName\\": \\"vLoadBalancer\\", |
+| |
+|                 \\"controlLoopSchemaType\\": \\"VM\\", |
+| |
+|                 \\"policyScope\\": \\"DCAE\\", |
+| |
+|                 \\"policyName\\": \\"DCAE.Config\_tca-hi-lo\\", |
+| |
+|                 \\"policyVersion\\": \\"v0.0.1\\", |
+| |
+|                 \\"thresholds\\": [{ |
+| |
+|                         \\"closedLoopControlName\\": \\"ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3\\", |
+| |
+|                         \\"version\\": \\"1.0.2\\", |
+| |
+|                         \\"fieldPath\\": \\"$.event.measurementsForVfScalingFields.vNicUsageArray[\*].receivedTotalPacketsDelta\\", |
+| |
+|                         \\"thresholdValue\\": 300, |
+| |
+|                         \\"direction\\": \\"GREATER\_OR\_EQUAL\\", |
+| |
+|                         \\"severity\\": \\"CRITICAL\\", |
+| |
+|                         \\"closedLoopEventStatus\\": \\"ONSET\\" |
+| |
+|                 }] |
+| |
+|         }, { |
+| |
+|                 \\"eventName\\": \\"Measurement\_vGMUX\\", |
+| |
+|                 \\"controlLoopSchemaType\\": \\"VNF\\", |
+| |
+|                 \\"policyScope\\": \\"DCAE\\", |
+| |
+|                 \\"policyName\\": \\"DCAE.Config\_tca-hi-lo\\", |
+| |
+|                 \\"policyVersion\\": \\"v0.0.1\\", |
+| |
+|                 \\"thresholds\\": [{ |
+| |
+|                         \\"closedLoopControlName\\": \\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\\", |
+| |
+|                         \\"version\\": \\"1.0.2\\", |
+| |
+|                         \\"fieldPath\\": \\"$.event.measurementsForVfScalingFields.additionalMeasurements[\*].arrayOfFields[0].value\\", |
+| |
+|                         \\"thresholdValue\\": 0, |
+| |
+|                         \\"direction\\": \\"EQUAL\\", |
+| |
+|                         \\"severity\\": \\"MAJOR\\", |
+| |
+|                         \\"closedLoopEventStatus\\": \\"ABATED\\" |
+| |
+|                 }, { |
+| |
+|                         \\"closedLoopControlName\\": \\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\\", |
+| |
+|                         \\"version\\": \\"1.0.2\\", |
+| |
+|                         \\"fieldPath\\": \\"$.event.measurementsForVfScalingFields.additionalMeasurements[\*].arrayOfFields[0].value\\", |
+| |
+|                         \\"thresholdValue\\": 0, |
+| |
+|                         \\"direction\\": \\"GREATER\\", |
+| |
+|                         \\"severity\\": \\"CRITICAL\\", |
+| |
+|                         \\"closedLoopEventStatus\\": \\"ONSET\\" |
+| |
+|                 }] |
+| |
+|         }] |
+| |
+| }" |
+| |
+| } |
++============================================================================================================================================+
++--------------------------------------------------------------------------------------------------------------------------------------------+
+
+  Note: The DMaaP configuration is specified in this file via
+ publisherHostName and subscriberHostName. To be changed as
+ required.
+
+7. Copy below script to CDAP server (this gets latest image from nexus
+ and deploys TCA application) and execute it
+
++--------------------------------------------------------------------------------------------------------------------------------------------------+
+| #!/bin/sh |
+| |
+| TCA\_JAR=dcae-analytics-cdap-tca-2.0.0.jar |
+| |
+| rm -f /home/ubuntu/$TCA\_JAR |
+| |
+| cd /home/ubuntu/ |
+| |
+| wget https://nexus.onap.org/service/local/repositories/staging/content/org/onap/dcaegen2/analytics/tca/dcae-analytics-cdap-tca/2.0.0/$TCA\_JAR |
+| |
+| if [ $? -eq 0 ]; then |
+| |
+|         if [ -f /home/ubuntu/$TCA\_JAR ]; then |
+| |
+|                 echo "Restarting TCA CDAP application using $TCA\_JAR artifact" |
+| |
+|         else |
+| |
+|                 echo "ERROR: $TCA\_JAR missing" |
+| |
+|                 exit 1 |
+| |
+|         fi |
+| |
+| else |
+| |
+|         echo "ERROR: $TCA\_JAR not found in nexus" |
+| |
+|         exit 1 |
+| |
+| fi |
+| |
+| # stop programs |
+| |
+| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/stop |
+| |
+| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/stop |
+| |
+| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/stop |
+| |
+| # delete application |
+| |
+| curl -X DELETE http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca |
+| |
+| # delete artifact |
+| |
+| curl -X DELETE http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/artifacts/dcae-analytics-cdap-tca/versions/2.0.0 |
+| |
+| # load artifact |
+| |
+| curl -X POST --data-binary @/home/ubuntu/$TCA\_JAR http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/artifacts/dcae-analytics-cdap-tca |
+| |
+| # create app |
+| |
+| curl -X PUT -d @/home/ubuntu/tca\_app\_config.json http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca |
+| |
+| # load preferences |
+| |
+| curl -X PUT -d @/home/ubuntu/tca\_app\_preferences.json http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/preferences |
+| |
+| # start programs |
+| |
+| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/start |
+| |
+| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/start |
+| |
+| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/start |
+| |
+| echo |
+| |
+| # get status of programs |
+| |
+| curl http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/status |
+| |
+| curl http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/status |
+| |
+| curl http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/status |
+| |
+| echo |
++==================================================================================================================================================+
++--------------------------------------------------------------------------------------------------------------------------------------------------+
+
+8. Verify the TCA application and its logs via the CDAP GUI
+
+ The overall processing flow can be checked there
+
+TCA Configuration Change
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+Typical configuration changes include changing the DMaaP host and/or the Policy configuration. If necessary, modify the file from step #6 and run the script from step #7 to redeploy TCA with the updated configuration.
+
+VM Init
+~~~~~~~
+
+To address WindRiver server instability, the below **init.sh** script
+was used to restart the container on VM restart. This script was
+invoked via the VM init script (rc.d).
+
++------------------------------------------------------------------------------------------------------------------------------+
+| #!/bin/sh |
+| |
+| #docker run -d --name cdap-sdk -p 11011:11011 -p 11015:11015 caskdata/cdap-standalone:4.1.2 |
+| |
+| sudo docker restart cdap-sdk-2 |
+| |
+| sleep 30 |
+| |
+| # start program |
+| |
+| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/start |
+| |
+| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/start |
+| |
+| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/start |
++==============================================================================================================================+
++------------------------------------------------------------------------------------------------------------------------------+
+
+
+This script was invoked via VM init script (rc.d).
+
+ln -s /home/ubuntu/init.sh /etc/init.d/init.sh
+
+sudo  update-rc.d init.sh start 2
+
diff --git a/docs/sections/installation_test.rst b/docs/sections/installation_test.rst
new file mode 100644
index 00000000..2c49a957
--- /dev/null
+++ b/docs/sections/installation_test.rst
@@ -0,0 +1,52 @@
+Testing and Debugging ONAP DCAE Deployment
+===========================================
+
+
+Check Component Status
+======================
+
+Testing of a DCAE system starts with checking the health of the deployed components. This can be done by accessing Consul, because all DCAE components register their status with Consul. The Consul UI and API are accessible at http://{{ANY_CONSUL_VM_IP}}:8500 .
+
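+For example, the overall registration and health state can also be queried through Consul's standard HTTP API (a sketch using the documented catalog and health endpoints)::
+
+  # list all services registered with Consul
+  curl http://{{ANY_CONSUL_VM_IP}}:8500/v1/catalog/services
+  # list health checks that are currently in the "critical" state
+  curl http://{{ANY_CONSUL_VM_IP}}:8500/v1/health/state/critical
+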
+In addition, more detailed status information can be obtained in the following ways.
+
+1. Check VES Status
+ VES status and running logs can be found on the {{RAND}}doks00 VM. The detailed API and access methods can be found in the logging and human interface sections.
+
+2. Check TCA Status
+ TCA has its own GUI that provides detailed operation information. Point a browser to http://{{CDAP02_VM_IP}}:11011/oldcdap/ns/cdap_tca_hi_lo/apps/, select the application with the description "DCAE Analytics Threshold Crossing Alert Application", then select "TCAVESCollectorFlow". This leads to a flow display where all stages of processing are illustrated and the number inside each stage icon shows the number of events/messages processed.
+
+
+3. Check Message Router Status
+ Run **curl {{MESSAGE_ROUTER_IP}}:3904/topics** to check the status of the Message Router. It should return a list of message topics currently active on the Message Router;
+ * Among the topics, find one called "unauthenticated.SEC_MEASUREMENT_OUTPUT", which is the topic the VES collector publishes its data to, and another called "unauthenticated.DCAE_CL_OUTPUT", which is used by TCA to publish analytics events.
+
+
+Check Data Flow
+===============
+After the platform is assessed as healthy, the next step is to check the functionality of the system. This can be monitored at a number of "observation" points.
+
+1. Check incoming VNF Data
+
+ For R1 use cases, VNF data enters the DCAE system via the VES collector. This can be verified in the following steps:
+
+ 1. ssh into the {{RAND}}doks00 VM;
+ 2. Run: **sudo docker ps** to see that the VES collector container is running;
+ * Optionally run: **docker logs -f {{ID_OF_THE_VES_CONTAINER}}** to check the VES container log information;
+ 3. Run: **netstat -ln** to see that port 8080 is open;
+ 4. Run: **sudo tcpdump dst port 8080** to see incoming packets (from VNFs) into the VM's 8080 port, which is mapped to the VES collector's 8080 port.
+
+
+2. Check VES Output
+
+ VES publishes received VNF data, after authentication and syntax checking, onto the DMaaP Message Router. To check that this is happening, we can subscribe to the publishing topic.
+
+ 1. Run the subscription command to subscribe to the topic: **curl -H "Content-Type:text/plain" -X GET http://{{MESSAGE_ROUTER_IP}}:3904/events/unauthenticated.SEC_MEASUREMENT_OUTPUT/group19/C1?timeout=50000**. The actual format and use of Message Router API can be found in DMaaP project documentation.
+ * When there are messages being published, this command returns with the JSON array of messages;
+ * If no messages are published within the timeout value (i.e. 50000 milliseconds as in the example above), the call returns with an empty JSON array;
+ * It may be useful to run this command in a loop: **while :; do curl -H "Content-Type:text/plain" -X GET http://{{MESSAGE_ROUTER_IP}}:3904/events/unauthenticated.SEC_MEASUREMENT_OUTPUT/group19/C1?timeout=50000; echo; done**;
+
+3. Check TCA Output
+ TCA also publishes its events to Message Router under the topic of "unauthenticated.DCAE_CL_OUTPUT". The same Message Router subscription command can be used for checking the messages being published by TCA;
+ * Run the subscription command to subscribe to the topic: **curl -H "Content-Type:text/plain" -X GET http://{{MESSAGE_ROUTER_IP}}:3904/events/unauthenticated.DCAE_CL_OUTPUT/group19/C1?timeout=50000**.
+ * Or run the command in a loop: **while :; do curl -H "Content-Type:text/plain" -X GET http://{{MESSAGE_ROUTER_IP}}:3904/events/unauthenticated.DCAE_CL_OUTPUT/group19/C1?timeout=50000; echo; done**;
+
diff --git a/docs/sections/logging.rst b/docs/sections/logging.rst
index 39eabfba..a6da9518 100644
--- a/docs/sections/logging.rst
+++ b/docs/sections/logging.rst
@@ -4,19 +4,27 @@
Logging
=======
-.. note::
- * This section is used to describe the informational or diagnostic messages emitted from
- a software component and the methods or collecting them.
-
- * This section is typically: provided for a platform-component and sdk; and
- referenced in developer and user guides
-
- * This note must be removed after content has been added.
+DCAE logging is available at several levels.
+Platform VM Logging
+-------------------
+1. DCAE bootstrap VM:
+ * /var/log directory containing various system logs including cloud init logs.
+ * The /tmp/dcae2_install.log file provides the installation log.
+ * **docker logs** command for DCAE bootstrap container logs.
+2. Cloudify Manager VM:
+ * /var/log directory containing various system logs including cloud init logs.
+ * Cloudify Manager GUI provides viewing access to Cloudify's operation logs.
+3. Consul cluster:
+ * /var/log directory containing various system logs including cloud init logs.
+ * Consul GUI provides viewing access to Consul registered platform and service components healthcheck logs.
+4. Docker hosts
+ * /var/log directory containing various system logs including cloud init logs.
+ * **docker logs** command for Docker container logs.
-Where to Access Information
----------------------------
+Component Logging
+-----------------
+
+In general, the logs of a service component can be accessed under the /opt/log directory of its container, whether a Docker container or a VM. Deployment logs can be found at the deployment engine and the deployment location, e.g. Cloudify Manager, CDAP, and the Docker hosts.
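+
+For example, for a Dockerized service component (a sketch; the container name is illustrative)::
+
+  sudo docker ps                                     # find the container of interest
+  sudo docker exec -it <container-name> ls /opt/log  # list its log files
+  sudo docker exec <container-name> tail -n 100 /opt/log/<logfile>
+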
-Error / Warning Messages
-------------------------