authorVijay VK <vv770d@att.com>2019-05-22 22:10:29 +0100
committerVENKATESH KUMAR <vv770d@att.com>2019-05-22 23:01:13 -0400
commitf9e4344e9c9834face106b6c153ede020ff5410f (patch)
tree84250de8fcffc70275664bf4316ed2eafc280e13 /docs
parenteb9b422dc1b980e9a9128ab0e8fece20653f8279 (diff)
dcae r4 doc updates (4.0.0-ONAP)
Change-Id: I341f10a9eace4facac5aec3a3ff56cb89ef005e8
Issue-ID: DCAEGEN2-1505
Signed-off-by: VENKATESH KUMAR <vv770d@att.com>
Diffstat (limited to 'docs')
-rw-r--r--docs/sections/architecture.rst37
-rw-r--r--docs/sections/configuration.rst50
-rw-r--r--docs/sections/healthcheck.rst2
-rw-r--r--docs/sections/installation.rst4
-rw-r--r--docs/sections/installation_MS_ondemand.rst32
-rw-r--r--docs/sections/installation_oom.rst24
-rw-r--r--docs/sections/installation_test.rst6
-rw-r--r--docs/sections/offeredapis.rst4
-rw-r--r--docs/sections/release-notes.rst6
-rw-r--r--docs/sections/services/bbs-event-processor/index.rst5
-rw-r--r--docs/sections/services/dfc/consumedapis.rst4
-rw-r--r--docs/sections/services/dfc/troubleshooting.rst3
-rw-r--r--docs/sections/services/heartbeat-ms/installation.rst46
-rw-r--r--docs/sections/services/serviceindex.rst3
-rw-r--r--docs/sections/services/son-handler/index.rst2
-rw-r--r--docs/sections/services/son-handler/installation.rst (renamed from docs/sections/services/son-handler/son_handler_installation.rst)5
-rw-r--r--docs/sections/services/tca-cdap/development_info.rst71
-rw-r--r--docs/sections/services/tca-cdap/functionality.rst92
-rw-r--r--docs/sections/services/tca-cdap/index.rst27
-rw-r--r--docs/sections/services/tca-cdap/installation.rst51
20 files changed, 429 insertions, 45 deletions
diff --git a/docs/sections/architecture.rst b/docs/sections/architecture.rst
index fc101159..92ed40f6 100644
--- a/docs/sections/architecture.rst
+++ b/docs/sections/architecture.rst
@@ -5,12 +5,12 @@
Architecture
============
-Data Collection Analytics and Events (DCAE) is the primary data collection and analysis system of ONAP. DCAE architecture comprises of DCAE Platform and DCAE Service components so that the DCAE system is flexible, elastic, and expansive enough for supporting the potentially infinite number of ways of constructing intelligent and automated control loops on distributed and heterogeneous infrastructure.
-
-DCAE Service components are the functional entities that realize the collection and analytics needs of ONAP control loops. They include the collectors for various data collection needs, event processors for data standardization, analytics that assess collected data, and various auxiliary microservices that assist data collection and analytics, and support other ONAP functions. Service components and DMaaP buses form the "data plane" for DCAE, where DCAE collected data is transported among different DCAE service components.
+Data Collection Analytics and Events (DCAE) is the primary data collection and analysis system of ONAP. The DCAE architecture comprises DCAE Platform and DCAE Service components, making DCAE flexible, elastic, and expansive enough to support the potentially infinite number of ways of constructing intelligent and automated control loops on distributed and heterogeneous infrastructure.
DCAE Platform supports the functions to deploy, host and perform life cycle management (LCM) of Service components. DCAE Platform components enable model-driven deployment of service components and of the middleware infrastructures that service components depend upon, such as special storage and computation platforms. When triggered by an invocation call (such as from CLAMP or via the DCAE Dashboard), DCAE Platform follows the TOSCA model of the control loop that is specified by the triggering call, and interacts with the underlying networking and computing infrastructure, such as OpenStack installations and Kubernetes clusters, to deploy and configure the virtual apparatus (i.e. the collectors, the analytics, and auxiliary microservices) that are needed to form the control loop, at the locations requested. DCAE Platform also provisions DMaaP topics and manages the distribution scopes of the topics following the prescription of the control loop model, by interacting with the controlling function of DMaaP.
+DCAE Service components are the functional entities that realize the collection and analytics needs of ONAP control loops. They include the collectors for various data collection needs, event processors for data standardization, analytics that assess collected data, and various auxiliary microservices that assist data collection and analytics, and support other ONAP functions. Service components and DMaaP buses form the "data plane" for DCAE, where DCAE collected data is transported among different DCAE service components.
+
DCAE service component configurations are stored under a Key-Value store service, embodied by a Consul cluster. During deployment, the DCAE platform (via the Cloudify plugin) stores service component configuration under Consul for each deployment/instance (identified by ServiceComponentName). All DCAE components, during startup, access this configuration through the ConfigBindingService APIs to load their deployment configuration and watch for any subsequent update.
DCAE components use Consul's distributed K-V store service to distribute and manage component configurations, where each key is based on the unique identity of a DCAE component and the value is the configuration for the corresponding component. The DCAE platform creates and updates the K-V pairs based on information provided as part of the control loop blueprint, or received from other ONAP components such as the Policy Framework and CLAMP. Either through periodic polling or proactive pushing, the DCAE components get configuration updates in real time and apply them. DCAE Platform also offers dynamic template resolution for configuration parameters that are dynamic and only known by the DCAE platform, such as dynamically provisioned DMaaP topics. This approach standardizes component deployment and configuration management for DCAE service components in multi-site deployment.
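+
+As an illustration of this mechanism, a component (or an operator debugging one) can fetch its resolved configuration from the Config Binding Service. The sketch below is hedged: the service address and the ServiceComponentName are assumptions based on a default OOM deployment and will differ per installation.
+
+.. code-block:: bash
+
+   # Assumed Config Binding Service address inside the ONAP Kubernetes cluster
+   CBS=http://config-binding-service.onap:10000
+
+   # Illustrative ServiceComponentName assigned to a component at deployment time
+   COMPONENT=s908c36f4424c4d10a1d97f2ba6f27c7f-dcae-ves-collector
+
+   # Retrieve the component's resolved configuration (JSON); CBS reads it from Consul
+   curl -s "${CBS}/service_component/${COMPONENT}"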
@@ -24,7 +24,7 @@ The following list displays the details of what are included in ONAP DCAE R4. A
- DCAE Platform
- Core Platform
- Cloudify Manager: TOSCA model executor. Materializes TOSCA models of control loop, or Blueprints, into properly configured and managed virtual DCAE functional components.
- - Plugins
+ - Plugins (K8s, DMaaP, Policy, CLAMP, Pg)
- Extended Platform
- Configuration Binding Service: Agent for service component configuration fetching; providing configuration parameter resolution.
- Deployment Handler: API for triggering control loop deployment based on control loop's TOSCA model.
@@ -43,17 +43,21 @@ The following list displays the details of what are included in ONAP DCAE R4. A
- SNMP Trap collector
- High-Volume VES collector (HV-VES)
- DataFile collector
+ - RESTConf collector
- Analytics
- Holmes correlation analytics
- CDAP based Threshold Crossing Analytics application (tca)
- - Dockerized standalone TCA (tca-gen2)
+ - Heartbeat Services
+ - SON-Handler Service
- Microservices
- PNF Registration Handler
- - Missing Heartbeat analytics
- - Universal Data Mapper service
+ - VES Mapper Service
+ - PM-Mapper Service
+ - BBS-EventProcessor Service
+
-The figure below shows the DCAE R3 architecture and how the components work with each other. The components on the right constitute the Platform/controller components which are statically deployed. The components on the right represent the services which can be both deployed statically or dynamically (via CLAMP)
+The figure below shows the DCAE R4 architecture and how the components work with each other. The components on the left constitute the Platform/controller components, which are statically deployed. The components on the right represent the services, which can be deployed either statically or dynamically (via CLAMP)
.. image:: images/R4_architecture_diagram.png
@@ -61,19 +65,21 @@ The figure below shows the DCAE R3 architecture and how the components work with
Deployment Scenarios
--------------------
-Because DCAE service components are deployed on-demand following the control loop needs for managing ONAP deployed services, DCAE must support dynamic and on-demand deployment of service components based on ONAP control loop demands. This is why all other ONAP components are launched from the ONAP level method, DCAE only deploys a subset of its components during this ONAP deployment process and rest of DCAE components will be deployed either as TOSCA executor launches a series of Blueprints, or deployed by control loop request originated from CLAMP, or even by operator manually invoking DCAE's deployment API call.
+Because DCAE service components are deployed on-demand following the control loop needs for managing ONAP deployed services, DCAE must support dynamic and on-demand deployment of service components based on ONAP control loop demands. This is why, while all other ONAP components are launched by the ONAP-level deployment method, DCAE only deploys a subset of its components during this ONAP deployment process; the rest of the DCAE components are deployed on demand based on use case needs, triggered either by a control loop request originating from CLAMP or by an operator manually invoking DCAE's deployment API.
-For R3, ONAP supports two deployment methodologies: Heat Orchestration Template method, or Helm Chart method. No matter which method, DCAE is deployed following the same flow. At its minimum, only the TOSCA model executor, the DCAE Cloudify Manager, needs to be deployed through the ONAP deployment process. Once the Cloudify Manager is up and running, all the rest of DCAE platform can be deployed by a bootstrap script, which makes a number of calls into the Cloudify Manager API with Blueprints for various DCAE components, first the DCAE Platform components, then the service components that are needed for the built-in control loops, such as vFW/vDNS traffic throttling. It is also possible that additional DCAE components are also launched as part of the ONAP deployment process using the ONAP level method instead of TOSCA model based method.
+For R4, ONAP supports deployment via the OOM Helm Chart method; Heat deployment support is discontinued. DCAE Platform components are deployed via Helm charts - this includes Cloudify Manager, Config Binding Service, ServiceChange Handler, Policy Handler and Inventory. Once the DCAE platform components are up and running, the rest of the DCAE service components required for ONAP flows are deployed via the bootstrap POD, which invokes the Cloudify Manager API with blueprints for the various DCAE components that are needed for the built-in collection and control loop flows.
+
+To keep the ONAP footprint minimal, only a minimal set of MS (those required for ONAP Integration use cases) is deployed via the bootstrap pod. The rest of the service blueprints are available for operators to deploy on demand as required.
The PNDA platform service is an optional component that can be installed when using the OOM Helm Chart installation method on Openstack based Kubernetes infrastructure.
-More details of the DCAE R3 deployment will be covered by the Installation section.
+More details of the DCAE deployment can be found under the Installation section.
Usage Scenarios
---------------
-For ONAP R3 DCAE participates in the following use cases.
+For ONAP R4 DCAE participates in the following use cases.
- vDNS: VES collector, TCA analytics
@@ -83,7 +89,12 @@ For ONAP R3 DCAE participates in the following use cases.
- vVoLTE: VES collector, Holmes analytics
-- OSAM/PNF: VES Collector, PRH
+- CCVPN : RestConf Collector, Holmes
+
+- BBS : VES Collector, PRH, BBS-Event Processor, VES-Mapper, RESTConf Collector
+
+- 5G : DataFile Collector, PM-Mapper, HV-VES
+
In addition, DCAE supports on-demand deployment and configuration of service components via CLAMP. In such cases, CLAMP invokes the deployment and configuration of additional TCA instances.
diff --git a/docs/sections/configuration.rst b/docs/sections/configuration.rst
index 85d74a72..22e77d18 100644
--- a/docs/sections/configuration.rst
+++ b/docs/sections/configuration.rst
@@ -21,4 +21,54 @@ ConfigBindingService
"Invetory", "https://git.onap.org/oom/tree/kubernetes/dcaegen2/charts/dcae-servicechange-handler/charts/dcae-inventory-api"
+
+Deployment time configuration of DCAE components is defined in several places.
+
+ * Helm Chart templates:
+ * Helm/Kubernetes template files can contain static values for configuration parameters;
+ * Helm Chart resources:
+ * Helm/Kubernetes resource files can contain static values for configuration parameters;
+ * Helm values.yaml files:
+ * The values.yaml files supply the values that the Helm templating engine uses to expand any templates defined in Helm templates;
+ * In a Helm chart hierarchy, values defined in higher-level values.yaml files supersede values defined in lower-level values.yaml files;
+ * Values supplied on the Helm command line supersede values defined in any values.yaml files.
+
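+As an example of this precedence, values supplied on the Helm command line override anything in the chart's values.yaml files, and values in a user-supplied override file sit in between. The release name, chart reference and file names below are illustrative assumptions, not required values.
+
+.. code-block:: bash
+
+   # Chart values.yaml  <  -f override file  <  --set on the command line
+   helm upgrade --install dev local/onap \
+       --namespace onap \
+       -f ~/overrides/onap-overrides.yaml \
+       --set dcaegen2.enabled=true
+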
+In addition, for DCAE components deployed through Cloudify Manager blueprints, their configuration parameters are defined in the following places:
+
+ * The blueprint files can contain static values for configuration parameters;
+ * The blueprint files are defined under the ``blueprints`` directory of the ``dcaegen2/platform/blueprints`` repo, named with a "k8s" prefix.
+ * The blueprint files can specify input parameters and the values of these parameters will be used for configuring parameters in Blueprints. The values for these input parameters can be supplied in several ways as listed below in the order of precedence (low to high):
+ * The blueprint files can define default values for the input parameters;
+ * The blueprint input files can contain static values for input parameters of blueprints. These input files are provided as config resources under the dcae-bootstrap chart;
+ * The blueprint input files may contain Helm templates, which are resolved into actual deployment time values following the rules for Helm values.
+
+
+Now we walk through an example: how to configure the Docker image for the DCAE dashboard, which is deployed by Cloudify Manager.
+
+In the ``k8s-dashboard.yaml-template`` blueprint template, the Docker image to use is defined as an input parameter with a default value:
+
+.. code-block:: yaml
+
+ dashboard_docker_image:
+ description: 'Docker image for dashboard'
+ default: 'nexus3.onap.org:10001/onap/org.onap.ccsdk.dashboard.ccsdk-app-os:1.1.0-SNAPSHOT-latest'
+
+Then in the input file, ``oom/kubernetes/dcaegen2/charts/dcae-bootstrap/resources/inputs/k8s-dashboard-inputs.yaml``,
+it is defined again as:
+
+.. code-block:: yaml
+
+ dashboard_docker_image: {{ include "common.repository" . }}/{{ .Values.componentImages.dashboard }}
+
+Thus, when ``common.repository`` and ``componentImages.dashboard`` are defined in the ``values.yaml`` files,
+their values will be plugged in here and the resulting ``dashboard_docker_image`` value
+will be passed to the dashboard blueprint as the Docker image tag to use instead of the default value in the blueprint.
+
+Indeed the ``componentImages.dashboard`` value is provided in the ``oom/kubernetes/dcaegen2/charts/dcae-bootstrap/values.yaml`` file:
+
+.. code-block:: yaml
+
+ componentImages:
+ dashboard: onap/org.onap.ccsdk.dashboard.ccsdk-app-os:1.1.0
+
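+Because ``componentImages.dashboard`` is an ordinary Helm value, it can also be overridden at deployment time without editing any files. The command below is a hedged sketch; the release name, chart reference and value path are assumptions that depend on how the dcaegen2 charts are deployed in your environment.
+
+.. code-block:: bash
+
+   # Override the dashboard image consumed by the dcae-bootstrap chart at install/upgrade time
+   helm upgrade --install dev-dcaegen2 local/dcaegen2 \
+       --namespace onap \
+       --set dcae-bootstrap.componentImages.dashboard=onap/org.onap.ccsdk.dashboard.ccsdk-app-os:1.1.0
+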
DCAE Service components are deployed via Cloudify Blueprints. Instruction for deployment and configuration are documented under https://docs.onap.org/en/latest/submodules/dcaegen2.git/docs/sections/services/serviceindex.html
diff --git a/docs/sections/healthcheck.rst b/docs/sections/healthcheck.rst
index d9b5e1f2..9fec3a80 100644
--- a/docs/sections/healthcheck.rst
+++ b/docs/sections/healthcheck.rst
@@ -26,7 +26,7 @@ blueprints after the initial DCAE installation.
The healthcheck service is exposed as a Kubernetes ClusterIP Service named
`dcae-healthcheck`. The service can be queried for status as shown below.
-.. code-block::
+.. code-block:: json
$ curl dcae-healthcheck
{
diff --git a/docs/sections/installation.rst b/docs/sections/installation.rst
index 0b60c1de..35e20d25 100644
--- a/docs/sections/installation.rst
+++ b/docs/sections/installation.rst
@@ -9,6 +9,6 @@ DCAE Deployment (Installation)
:titlesonly:
./installation_oom.rst
+ ./installation_MS_ondemand.rst
./installation_pnda.rst
- ./installation_test.rst
-
+ ./installation_test.rst \ No newline at end of file
diff --git a/docs/sections/installation_MS_ondemand.rst b/docs/sections/installation_MS_ondemand.rst
new file mode 100644
index 00000000..ee5e639e
--- /dev/null
+++ b/docs/sections/installation_MS_ondemand.rst
@@ -0,0 +1,32 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+DCAE MS Deployment
+==================
+
+The DCAE MS catalog includes a number of collectors, analytics and event processor services. For Dublin, not all MS are available in the default ONAP/DCAE deployment.
+
+The following services are deployed via DCAE Bootstrap:
+
+
+.. toctree::
+ :maxdepth: 1
+
+ ./services/snmptrap/index.rst
+ ./services/ves-http/index.rst
+ ./services/ves-hv/index.rst
+ ./services/prh/index.rst
+ ./services/tca-cdap/index.rst
+
+The following additional MS are available for on-demand deployment as necessary for any use case; instructions for deployment are provided under each MS.
+
+.. toctree::
+ :maxdepth: 1
+
+ Mapper MS Installation <./services/mapper/installation>
+ DFC MS Installation <./services/dfc/installation>
+ Heartbeat MS Installation <./services/heartbeat/installation>
+ PM-Mapper MS Installation <./services/pm-mapper/installation>
+ BBS EventProcessor MS Installation <./services/bbs-event-processor/installation>
+ Son-Handler MS Installation <./services/son-handler/installation>
+ RESTconf MS Installation <./services/restconf/installation>
diff --git a/docs/sections/installation_oom.rst b/docs/sections/installation_oom.rst
index 1715237c..bd1b752d 100644
--- a/docs/sections/installation_oom.rst
+++ b/docs/sections/installation_oom.rst
@@ -1,14 +1,14 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-Helm Chart Based DCAE Deployment
-================================
+DCAE Deployment (using Helm and Cloudify)
+=========================================
This document describes the details of the Helm chart based deployment process for R4 ONAP and how DCAE is deployed through this process.
-ONAP Deployment Overview
-------------------------
+Deployment Overview
+-------------------
ONAP R4 extends the Kubernetes deployment method introduced in R2 and continued in R3.
Kubernetes is a container orchestration technology that organizes containers into composites of various patterns for easy deployment, management, and scaling.
@@ -24,10 +24,16 @@ and invokes Kubernetes deployment operations for all the resources.
All ONAP Helm charts are organized under the **kubernetes** directory of the **OOM** project, where roughly each ONAP component occupies a subdirectory.
DCAE charts are placed under the **dcaegen2** directory.
+In Dublin, all DCAE platform components (with the exception of Dashboard) have a corresponding Helm chart which is used to trigger the deployment.
+All DCAE services are deployed through Cloudify blueprints. The default ONAP DCAE deployment includes a small subset of DCAE services deployed through the bootstrap pod to meet
+ONAP Integration use cases. Optionally, operators can deploy on demand other MS required for their use cases as described in `On-demand MS Installation
+<installation_MS_ondemand>`_.
+
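+A quick way to see this split on a running system is to compare the Helm-deployed platform pods with the deployments created by the bootstrap pod through Cloudify Manager. The namespace and pod-name patterns below are assumptions based on a default OOM install.
+
+.. code-block:: bash
+
+   # DCAE platform and service pods running in the ONAP namespace
+   kubectl get pods -n onap | grep dcae
+
+   # Cloudify deployments created by the bootstrap pod (cfy CLI runs inside that pod)
+   BOOTSTRAP_POD=$(kubectl get pods -n onap -o name | grep dcae-bootstrap | head -1)
+   kubectl exec -n onap "${BOOTSTRAP_POD}" -- cfy deployments list
+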
The PNDA data platform is an optional DCAE component that is placed under the **pnda**
directory. Details for how to configure values to enable PNDA installation during Helm install
-are described in `Installing PNDA During Helm Chart Based DCAE Deployment
-<installation_pnda>`.
+are described in `Installing PNDA through Helm Chart
+<installation_pnda>`_.
+
DCAE Chart Organization
-----------------------
@@ -143,7 +149,7 @@ Now we walk through an example, how to configure the Docker image for the DCAE d
In the ``k8s-dashboard.yaml-template`` blueprint template, the Docker image to use is defined as an input parameter with a default value:
-.. code-block::
+.. code-block:: yaml
dashboard_docker_image:
description: 'Docker image for dashboard'
@@ -152,7 +158,7 @@ In the ``k8s-dashboard.yaml-template`` blueprint template, the Docker image to u
Then in the input file, ``oom/kubernetes/dcaegen2/charts/dcae-bootstrap/resources/inputs/k8s-dashboard-inputs.yaml``,
it is defined again as:
-.. code-block::
+.. code-block:: yaml
dashboard_docker_image: {{ include "common.repository" . }}/{{ .Values.componentImages.dashboard }}
@@ -162,7 +168,7 @@ will be passed to the Policy Handler blueprint as the Docker image tag to use in
Indeed the ``componentImages.dashboard`` value is provided in the ``oom/kubernetes/dcaegen2/charts/dcae-bootstrap/values.yaml`` file:
-.. code-block::
+.. code-block:: yaml
componentImages:
dashboard: onap/org.onap.ccsdk.dashboard.ccsdk-app-os:1.1.0
diff --git a/docs/sections/installation_test.rst b/docs/sections/installation_test.rst
index c39923f5..2d2f357d 100644
--- a/docs/sections/installation_test.rst
+++ b/docs/sections/installation_test.rst
@@ -1,5 +1,5 @@
-ONAP DCAE Deployment Validation
-===============================
+DCAE Deployment Validation
+==========================
Check Deployment Status
@@ -8,7 +8,7 @@ Check Deployment Status
The healthcheck service is exposed as a Kubernetes ClusterIP Service named
`dcae-healthcheck`. The service can be queried for status as shown below.
-.. code-block::
+.. code-block:: json
$ curl dcae-healthcheck
{
diff --git a/docs/sections/offeredapis.rst b/docs/sections/offeredapis.rst
index 969684a0..921ba54c 100644
--- a/docs/sections/offeredapis.rst
+++ b/docs/sections/offeredapis.rst
@@ -1,5 +1,5 @@
-DCAEGEN2 Components Offered APIs
-================================
+Offered APIs
+============
.. toctree::
:maxdepth: 1
diff --git a/docs/sections/release-notes.rst b/docs/sections/release-notes.rst
index 0f5245e3..251d5cc6 100644
--- a/docs/sections/release-notes.rst
+++ b/docs/sections/release-notes.rst
@@ -26,7 +26,7 @@ DCAE R4 improves upon previous release with the following new features:
- Collectors
- RESTConf collector 
- Event Processors
- - VES/Universal Mapper
+ - VES Mapper
- 3gpp PM-Mapper
- BBS Event processor
- Analytics/RCA
@@ -133,7 +133,7 @@ The following components are introduced in R4
- Docker container tag: onap/org.onap.dcaegen2.services.components.bbs-event-processor:1.0.0
- Description: Handles PNF-Reregistration and CPE authentication events and generate CL events
- SON-Handler
- - Docker container tag: onap/org.onap.dcaegen2.services.son-handler:1.0.1
+ - Docker container tag: onap/org.onap.dcaegen2.services.son-handler:1.0.2
- Description: Supports PC-ANR optimization analysis and generating CL events output
- Heartbeat MS
- Docker container tag: onap/org.onap.dcaegen2.services.heartbeat:2.1.0
@@ -171,7 +171,7 @@ The following components are upgraded from R3
- Docker container image tag: onap/org.onap.dcaegen2.deployments.tca-cdap-container:1.1.2
- Description: Config updates. Replaced Hadoop VM Cluster based file system with regular host file system; repackaged full TCA-CDAP stack into Docker container; transactional state separation from TCA in-memory to off-node Redis cluster for supporting horizontal scaling.
- DataFile Collector
- - Docker container tag: onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server:1.1.2
+ - Docker container tag: onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server:1.1.3
- Description : Code optimization, bug fixes, logging and performance improvement
- PNF Registrator handler
- Docker container tag: onap/org.onap.dcaegen2.services.prh.prh-app-server:1.2.3
diff --git a/docs/sections/services/bbs-event-processor/index.rst b/docs/sections/services/bbs-event-processor/index.rst
index bcaa700b..f9fc2d8b 100644
--- a/docs/sections/services/bbs-event-processor/index.rst
+++ b/docs/sections/services/bbs-event-processor/index.rst
@@ -5,11 +5,6 @@
BBS-EventProcessor
==================
-:Date: 2019-06-06
-
-.. contents::
- :depth: 3
-..
Overview
========
diff --git a/docs/sections/services/dfc/consumedapis.rst b/docs/sections/services/dfc/consumedapis.rst
index 0ab10498..2fe63b46 100644
--- a/docs/sections/services/dfc/consumedapis.rst
+++ b/docs/sections/services/dfc/consumedapis.rst
@@ -1,8 +1,8 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-Paths
-=====
+API
+===
GET /events/unauthenticated.VES_NOTIFICATION_OUTPUT
---------------------------------------------------
diff --git a/docs/sections/services/dfc/troubleshooting.rst b/docs/sections/services/dfc/troubleshooting.rst
index 7d2ddede..c905802b 100644
--- a/docs/sections/services/dfc/troubleshooting.rst
+++ b/docs/sections/services/dfc/troubleshooting.rst
@@ -1,3 +1,6 @@
+Troubleshooting
+===============
+
In order to find the origin of an error, we suggest using the logs resulting from tracing, which needs to be activated.
Activate tracing: Spring actuator
diff --git a/docs/sections/services/heartbeat-ms/installation.rst b/docs/sections/services/heartbeat-ms/installation.rst
new file mode 100644
index 00000000..df50dfb1
--- /dev/null
+++ b/docs/sections/services/heartbeat-ms/installation.rst
@@ -0,0 +1,46 @@
+Installation
+============
+
+
+The following steps apply if manual deployment/undeployment is required.
+
+Steps to deploy are shown below:
+
+- Transfer the blueprint component file into the DCAE bootstrap POD under the /blueprints directory (see the sketch after this list for one way to do this with kubectl). The Heartbeat blueprint can be found at https://git.onap.org/dcaegen2/services/heartbeat/tree/dpo/k8s-heartbeat.yaml?h=dublin
+
+- Transfer the blueprint inputs file into the DCAE bootstrap POD under the /inputs directory. A sample input file can be found at https://git.onap.org/dcaegen2/services/heartbeat/tree/dpo/k8s-heartbeat-inputs.yaml
+
+
+- Enter the Bootstrap POD
+- Validate blueprint
+ .. code-block:: bash
+
+ cfy blueprints validate /blueprints/k8s-heartbeat.yaml
+- Upload validated blueprint
+ .. code-block:: bash
+
+
+ cfy blueprints upload -b heartbeat /blueprints/k8s-heartbeat.yaml
+- Create deployment
+ .. code-block:: bash
+
+
+ cfy deployments create -b heartbeat -i /k8s-heartbeat-input.yaml heartbeat
+- Deploy blueprint
+ .. code-block:: bash
+
+
+ cfy executions start -d heartbeat install
+
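+One hedged way to perform the file transfer and enter the pod (referenced in the steps above) is with kubectl. The namespace and pod-name pattern are assumptions based on a default OOM install, and the local file names are illustrative.
+
+.. code-block:: bash
+
+   # Locate the DCAE bootstrap pod in the ONAP namespace
+   BOOTSTRAP_POD=$(kubectl get pods -n onap | grep dcae-bootstrap | awk '{print $1}')
+
+   # Copy the blueprint and inputs files into the pod
+   kubectl cp k8s-heartbeat.yaml onap/"${BOOTSTRAP_POD}":/blueprints/
+   kubectl cp k8s-heartbeat-inputs.yaml onap/"${BOOTSTRAP_POD}":/inputs/
+
+   # Open a shell inside the bootstrap pod to run the cfy commands above
+   kubectl exec -it -n onap "${BOOTSTRAP_POD}" -- /bin/bash
+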
+To undeploy heartbeat, the steps are shown below:
+
+- Uninstall running heartbeat and delete deployment
+ .. code-block:: bash
+
+
+ cfy uninstall heartbeat
+- Delete blueprint
+ .. code-block:: bash
+
+
+ cfy blueprints delete heartbeat \ No newline at end of file
diff --git a/docs/sections/services/serviceindex.rst b/docs/sections/services/serviceindex.rst
index fdc8aedd..c046157c 100644
--- a/docs/sections/services/serviceindex.rst
+++ b/docs/sections/services/serviceindex.rst
@@ -20,4 +20,5 @@ DCAE Service components
./pm-mapper/index.rst
./bbs-event-processor/index.rst
./son-handler/index.rst
- ./restconf/index.rst \ No newline at end of file
+ ./restconf/index.rst
+ ./tca-cdap/index.rst \ No newline at end of file
diff --git a/docs/sections/services/son-handler/index.rst b/docs/sections/services/son-handler/index.rst
index 5bc4fbd2..79bcb2d1 100644
--- a/docs/sections/services/son-handler/index.rst
+++ b/docs/sections/services/son-handler/index.rst
@@ -28,5 +28,5 @@ SON-Handler MS Installation Steps, Configurations, Troubleshooting Tips and Logg
.. toctree::
:maxdepth: 1
- ./son_handler_installation.rst
+ ./installation.rst
./son_handler_troubleshooting.rst
diff --git a/docs/sections/services/son-handler/son_handler_installation.rst b/docs/sections/services/son-handler/installation.rst
index b7807e46..f529bc4a 100644
--- a/docs/sections/services/son-handler/son_handler_installation.rst
+++ b/docs/sections/services/son-handler/installation.rst
@@ -1,6 +1,5 @@
-
-Instalation Steps
------------------
+Installation
+============
The SON handler microservice can be deployed using a Cloudify blueprint via the bootstrap container of an existing DCAE deployment
diff --git a/docs/sections/services/tca-cdap/development_info.rst b/docs/sections/services/tca-cdap/development_info.rst
new file mode 100644
index 00000000..afb240ef
--- /dev/null
+++ b/docs/sections/services/tca-cdap/development_info.rst
@@ -0,0 +1,71 @@
+Compiling TCA
+=============
+
+TCA code is maintained under https://gerrit.onap.org/r/#/admin/projects/dcaegen2/analytics/tca
+To build just the TCA component, run the following maven command:
+``mvn clean install``
+
+
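+A complete local build might look like the following sketch; the clone URL points at the repository above, while the optional flags are illustrative assumptions rather than project requirements.
+
+.. code-block:: bash
+
+   # Clone the TCA analytics repository and build all modules
+   git clone https://gerrit.onap.org/r/dcaegen2/analytics/tca
+   cd tca
+   mvn clean install
+
+   # Optionally skip the unit tests for a faster local build
+   mvn clean install -DskipTests
+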
+Maven GroupId:
+==============
+
+org.onap.dcaegen2.analytics.tca
+
+Maven Parent ArtifactId:
+------------------------
+dcae-analytics
+
+Maven Children Artifacts:
+-------------------------
+1. dcae-analytics-test: Common test code for all DCAE Analytics Modules
+2. dcae-analytics-model: Contains models (e.g. Common Event Format) which are common to DCAE Analytics
+3. dcae-analytics-common: Contains Components common to all DCAE Analytics Modules - contains high level abstractions
+4. dcae-analytics-dmaap: DMaaP (Data Movement as a Platform) MR (Message Router) API using AAF (Authentication and Authorization Framework)
+5. dcae-analytics-tca: DCAE Analytics TCA (THRESHOLD CROSSING ALERT) Core
+6. dcae-analytics-cdap-common: Common code for all cdap modules
+7. dcae-analytics-cdap-tca: CDAP Flowlet implementation for TCA
+8. dcae-analytics-cdap-plugins: CDAP Plugins
+9. dcae-analytics-cdap-it: Cucumber and CDAP Pipeline integration tests
+
+
+API Endpoints
+=============
+
+.. code-block:: bash
+
+   # create namespace
+   curl -X PUT http://<k8s-clusterIP>:11015/v3/namespaces/cdap_tca_hi_lo
+
+   # load artifact
+   curl -X POST --data-binary @/c/usr/tmp/dcae-analytics-cdap-tca-2.0.0-SNAPSHOT.jar http://<k8s-clusterIP>:11015/v3/namespaces/cdap_tca_hi_lo/artifacts/dcae-analytics-cdap-tca
+
+   # create app
+   curl -X PUT -d @/c/usr/docs/ONAP/tca_app_config.json http://<k8s-clusterIP>:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca
+
+   # load preferences
+   curl -X PUT -d @/c/usr/docs/ONAP/tca_app_preferences.json http://<k8s-clusterIP>:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/preferences
+
+   # start program
+   curl -X POST http://<k8s-clusterIP>:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/start
+   curl -X POST http://<k8s-clusterIP>:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/start
+   curl -X POST http://<k8s-clusterIP>:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/start
+
+   # check status
+   curl http://<k8s-clusterIP>:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/status
+   curl http://<k8s-clusterIP>:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/status
+   curl http://<k8s-clusterIP>:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/status
+
+   # Delete namespace (and all its content)
+   # curl -X DELETE http://<k8s-clusterIP>:11015/v3/unrecoverable/namespaces/cdap_tca_hi_lo
+
+   # Delete artifact
+   # curl -X DELETE http://<k8s-clusterIP>:11015/v3/namespaces/cdap_tca_hi_lo/artifacts/dcae-analytics-cdap-tca/versions/2.0.0.SNAPSHOT
+
+
+TCA CDAP Container
+==================
+
+If a new jar is generated, the corresponding version should be updated in https://git.onap.org/dcaegen2/deployments/tree/tca-cdap-container.
+
+The following files should be revised:
+
+- tca_app_config.json
+- tca_app_preferences.json
+- restart.sh
+
diff --git a/docs/sections/services/tca-cdap/functionality.rst b/docs/sections/services/tca-cdap/functionality.rst
new file mode 100644
index 00000000..35515f97
--- /dev/null
+++ b/docs/sections/services/tca-cdap/functionality.rst
@@ -0,0 +1,92 @@
+Functionality
+=============
+
+TCA is a CDAP application driven by the VES collector events published into Message Router. This Message Router topic is the source for the CDAP application, which reads each incoming message. If a message meets the Common Event Format (CEF, v28.3) as specified by the VES 5.3 standard, it is parsed, and if it contains a metric that matches the policy configuration (denoted primarily by the "eventName" and the "fieldPath"), the value of the metric is compared to the "thresholdValue". If that comparison indicates that a Control Loop Event Message should be generated, the application publishes the alarm to the Message Router topic in a format that matches the interface spec defined for Control Loop by ONAP Policy.
+
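+When a threshold rule is met, the resulting Control Loop Event Message can be observed by reading the DMaaP Message Router topic that TCA publishes to. The sketch below is hedged: the Message Router address and the topic/consumer names are assumptions for a default ONAP deployment and may differ in yours.
+
+.. code-block:: bash
+
+   # Poll the (assumed) TCA output topic on DMaaP Message Router for Control Loop Event Messages
+   MR=http://message-router.onap:3904
+   curl -s "${MR}/events/unauthenticated.DCAE_CL_OUTPUT/tca-doc-example/1?timeout=15000"
+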
+Assumptions:
+
+TCA output will be similar to R0 implementation, where CL event will be triggered each time threshold rules are met.
+In the context of the vCPE use case, the CLEAR event (aka ABATED event) is driven by a measured metric (i.e. packet loss equal to 0) rather than by the lapse of a threshold crossing event over some minimum number of measured intervals. Thus, this requirement can be accommodated by use of the low threshold with a policy of "direction = 0". Hence, for this release, the cdap-tca-hi-lo implementation will keep only the minimal state needed to correlate an ABATED event with the corresponding ONSET event. This correlation will be indicated by the requestID in the Control Loop Event Message.
+
+CDAP Programming Paradigm
+
+Since Amsterdam/ONAP R1, the code has been refactored to allow for delivery as either a flowlet or a batch pipeline. Current implementation will stay with the flowlet version.
+
+
+TCA has been used in multiple ONAP use cases since the ONAP Amsterdam release. A single TCA instance can be deployed to support all three use cases:
+
+- vFirewall
+- vDNS
+- vCPE
+
+
+The following is the default configuration set for TCA during deployment.
+
+.. code-block:: json
+
+ {
+ "domain": "measurementsForVfScaling",
+ "metricsPerEventName": [{
+ "eventName": "measurement_vFirewall-Att-Linkdownerr",
+ "controlLoopSchemaType": "VM",
+ "policyScope": "DCAE",
+ "policyName": "DCAE.Config_tca-hi-lo",
+ "policyVersion": "v0.0.1",
+ "thresholds": [{
+ "closedLoopControlName": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
+ "version": "1.0.2",
+ "fieldPath": "$.event.measurementsForVfScalingFields.vNicPerformanceArray[*].receivedTotalPacketsDelta",
+ "thresholdValue": 300,
+ "direction": "LESS_OR_EQUAL",
+ "severity": "MAJOR",
+ "closedLoopEventStatus": "ONSET"
+ }, {
+ "closedLoopControlName": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
+ "version": "1.0.2",
+ "fieldPath": "$.event.measurementsForVfScalingFields.vNicPerformanceArray[*].receivedTotalPacketsDelta",
+ "thresholdValue": 700,
+ "direction": "GREATER_OR_EQUAL",
+ "severity": "CRITICAL",
+ "closedLoopEventStatus": "ONSET"
+ }]
+ }, {
+ "eventName": "vLoadBalancer",
+ "controlLoopSchemaType": "VM",
+ "policyScope": "DCAE",
+ "policyName": "DCAE.Config_tca-hi-lo",
+ "policyVersion": "v0.0.1",
+ "thresholds": [{
+ "closedLoopControlName": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
+ "version": "1.0.2",
+ "fieldPath": "$.event.measurementsForVfScalingFields.vNicPerformanceArray[*].receivedTotalPacketsDelta",
+ "thresholdValue": 300,
+ "direction": "GREATER_OR_EQUAL",
+ "severity": "CRITICAL",
+ "closedLoopEventStatus": "ONSET"
+ }]
+ }, {
+ "eventName": "Measurement_vGMUX",
+ "controlLoopSchemaType": "VNF",
+ "policyScope": "DCAE",
+ "policyName": "DCAE.Config_tca-hi-lo",
+ "policyVersion": "v0.0.1",
+ "thresholds": [{
+ "closedLoopControlName": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
+ "version": "1.0.2",
+ "fieldPath": "$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value",
+ "thresholdValue": 0,
+ "direction": "EQUAL",
+ "severity": "MAJOR",
+ "closedLoopEventStatus": "ABATED"
+ }, {
+ "closedLoopControlName": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
+ "version": "1.0.2",
+ "fieldPath": "$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value",
+ "thresholdValue": 0,
+ "direction": "GREATER",
+ "severity": "CRITICAL",
+ "closedLoopEventStatus": "ONSET"
+ }]
+ }]
+ }
+
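+Because this configuration is delivered to TCA through Consul (see the Architecture section), it can be inspected or updated at runtime through the Consul K-V store. The sketch below is hedged: the Consul address and the key (the component's ServiceComponentName) are assumptions that depend on your deployment.
+
+.. code-block:: bash
+
+   CONSUL=http://consul-server.onap:8500
+   COMPONENT=dcae-tca-analytics   # illustrative ServiceComponentName
+
+   # Read the configuration currently stored for the component
+   curl -s "${CONSUL}/v1/kv/${COMPONENT}?raw"
+
+   # Push an updated configuration document
+   curl -s -X PUT --data-binary @tca_config.json "${CONSUL}/v1/kv/${COMPONENT}"
+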
+For more details about the exact flows, please refer to the use cases wiki.
diff --git a/docs/sections/services/tca-cdap/index.rst b/docs/sections/services/tca-cdap/index.rst
new file mode 100644
index 00000000..9d184f5b
--- /dev/null
+++ b/docs/sections/services/tca-cdap/index.rst
@@ -0,0 +1,27 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+==================================
+Threshold Crossing Analytics (TCA)
+==================================
+
+
+.. contents::
+ :depth: 3
+..
+
+Overview
+========
+
+The TCA (cdap-tca-hi-lo) app was first delivered as part of ONAP R0. In that release, it was intended to establish a software architecture for building CDAP applications that demonstrate sufficient unit test coverage and reusable libraries for ingesting DMaaP MR feeds formatted according to the VES standard. Functionally, it compares incoming performance metrics against defined high and low thresholds and generates CL events when thresholds are exceeded.
+
+In the Amsterdam release, TCA was deployed into a 7-node CDAP cluster. However, since the Beijing release, as ONAP requires applications to be containerized and deployable into K8s, a wrapper TCA CDAP container was built using the CDAP SDK base image, on which the TCA application is deployed.
+
+
+
+.. toctree::
+ :maxdepth: 1
+
+ ./installation
+ ./functionality
+ ./development_info
diff --git a/docs/sections/services/tca-cdap/installation.rst b/docs/sections/services/tca-cdap/installation.rst
new file mode 100644
index 00000000..84b4d1e6
--- /dev/null
+++ b/docs/sections/services/tca-cdap/installation.rst
@@ -0,0 +1,51 @@
+Installation
+============
+
+TCA is deployed by the DCAE deployment among the bootstrapped services. This is mainly to facilitate automated deployment of the services required by ONAP regression test cases.
+
+As the TCA jar is packaged into a Docker container, the container can be deployed standalone or via a Cloudify blueprint.
+
+
+
+The following steps apply if manual deployment/undeployment is required.
+
+Steps to deploy are shown below:
+
+- Transfer the blueprint component file into the DCAE bootstrap POD under the /blueprints directory. The TCA blueprint can be found under the /blueprints directory; the same is also available on gerrit: https://git.onap.org/dcaegen2/platform/blueprints/tree/blueprints/k8s-tca.yaml-template
+
+- Modify the blueprint inputs file in the DCAE bootstrap POD under the /inputs directory. Copy this file to / and update it as necessary.
+
+
+- Enter the Bootstrap POD
+- Validate blueprint
+ .. code-block:: bash
+
+ cfy blueprints validate /blueprints/k8s-tca.yaml
+- Upload validated blueprint
+ .. code-block:: bash
+
+
+ cfy blueprints upload -b tca /blueprints/k8s-tca.yaml
+- Create deployment
+ .. code-block:: bash
+
+
+ cfy deployments create -b tca -i /k8s-tca-input.yaml tca
+- Deploy blueprint
+ .. code-block:: bash
+
+
+ cfy executions start -d tca install
+
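+After the install execution completes, a hedged way to verify the result is to list the Cloudify executions and check for the TCA pod; the deployment name matches the steps above, while the namespace and pod-name pattern are assumptions for a default OOM install.
+
+.. code-block:: bash
+
+   # From inside the bootstrap pod: confirm the install execution finished successfully
+   cfy executions list -d tca
+
+   # From outside the cluster: confirm the TCA pod is running in the ONAP namespace
+   kubectl get pods -n onap | grep tca
+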
+To undeploy TCA, the steps are shown below:
+
+- Uninstall running TCA and delete deployment
+ .. code-block:: bash
+
+
+ cfy uninstall tca
+- Delete blueprint
+ .. code-block:: bash
+
+
+ cfy blueprints delete tca \ No newline at end of file