-rw-r--r--  docs/index.rst  1
-rw-r--r--  docs/sections/apis/PNDA.rst  1
-rw-r--r--  docs/sections/apis/SDK.rst  6
-rw-r--r--  docs/sections/apis/configbinding.rst  4
-rw-r--r--  docs/sections/apis/deployment-handler.rst  4
-rw-r--r--  docs/sections/apis/inventory.rst  4
-rw-r--r--  docs/sections/apis/ves.rst  6
-rw-r--r--  docs/sections/architecture.rst  22
-rw-r--r--  docs/sections/blueprints/DockerHost.rst  23
-rw-r--r--  docs/sections/blueprints/PGaaS.rst  166
-rw-r--r--  docs/sections/blueprints/cbs.rst  23
-rw-r--r--  docs/sections/blueprints/cdap.rst  130
-rw-r--r--  docs/sections/blueprints/cdapbroker.rst  23
-rw-r--r--  docs/sections/blueprints/centos_vm.rst  145
-rw-r--r--  docs/sections/blueprints/consul.rst  23
-rw-r--r--  docs/sections/blueprints/deploymenthandler.rst  23
-rw-r--r--  docs/sections/blueprints/holmes.rst  23
-rw-r--r--  docs/sections/blueprints/inventoryapi.rst  23
-rw-r--r--  docs/sections/blueprints/policyhandler.rst  23
-rw-r--r--  docs/sections/blueprints/servicechangehandler.rst  23
-rw-r--r--  docs/sections/blueprints/tca.rst  23
-rw-r--r--  docs/sections/blueprints/ves.rst  23
-rw-r--r--  docs/sections/build.rst  16
-rw-r--r--  docs/sections/components/component-development.rst  4
-rw-r--r--  docs/sections/configuration.rst  30
-rw-r--r--  docs/sections/consumedapis.rst  6
-rw-r--r--  docs/sections/images/R4_architecture_diagram.png  bin 0 -> 111709 bytes
-rw-r--r--  docs/sections/installation.rst  1
-rw-r--r--  docs/sections/installation_heat.rst  138
-rw-r--r--  docs/sections/installation_pnda.rst  10
-rw-r--r--  docs/sections/installation_test.rst  140
-rw-r--r--  docs/sections/offeredapis.rst  1
-rw-r--r--  docs/sections/release-notes.rst  188
-rw-r--r--  docs/sections/sdk/architecture.rst (renamed from docs/sections/services/sdk/architecture.rst)  0
-rw-r--r--  docs/sections/sdk/index.rst  16
-rw-r--r--  docs/sections/services/bbs-event-processor/index.rst  8
-rw-r--r--  docs/sections/services/dfc/index.rst  4
-rw-r--r--  docs/sections/services/heartbeat-ms/index.rst  4
-rw-r--r--  docs/sections/services/mapper/index.rst  9
-rw-r--r--  docs/sections/services/restconf/index.rst  6
-rw-r--r--  docs/sections/services/sdk/index.rst  19
-rw-r--r--  docs/sections/services/serviceindex.rst  7
-rw-r--r--  docs/sections/services/snmptrap/index.rst  4
-rw-r--r--  docs/sections/services/ves-http/architecture.rst  6
44 files changed, 391 insertions(+), 968 deletions(-)
diff --git a/docs/index.rst b/docs/index.rst
index 0c73b5eb..69d43e81 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -18,6 +18,7 @@ Data Collection, Analytics, and Events (DCAE)
./sections/logging.rst
./sections/healthcheck.rst
./sections/tls_enablement.rst
+ ./sections/sdk/index.rst
./sections/configuration.rst
./sections/humaninterfaces.rst
./sections/components/component-development.rst
diff --git a/docs/sections/apis/PNDA.rst b/docs/sections/apis/PNDA.rst
index 4423e7c6..9e54cba0 100644
--- a/docs/sections/apis/PNDA.rst
+++ b/docs/sections/apis/PNDA.rst
@@ -9,4 +9,3 @@ PNDA has several APIs that are documented as part of the PNDA project.
* https://github.com/pndaproject/platform-package-repository#repository-api
* https://github.com/pndaproject/platform-deployment-manager#api-documentation
* https://github.com/pndaproject/platform-data-mgmnt/blob/develop/data-service/README.md#dataset-apis
-
diff --git a/docs/sections/apis/SDK.rst b/docs/sections/apis/SDK.rst
index 40e7f4d6..567b8a7f 100644
--- a/docs/sections/apis/SDK.rst
+++ b/docs/sections/apis/SDK.rst
@@ -1,12 +1,10 @@
.. This work is licensed under a
Creative Commons Attribution 4.0 International License.
-========
+
DCAE SDK
========
-:Date: 2019-04-29
-
.. contents::
:depth: 3
..
@@ -26,7 +24,7 @@ Introduction
.. code-block:: XML
<properties>
- <sdk.version>1.1.4</sdk.version>
+ <sdk.version>1.1.6</sdk.version>
</properties>
<dependencies>
diff --git a/docs/sections/apis/configbinding.rst b/docs/sections/apis/configbinding.rst
index 0b947dbf..5eb026ba 100644
--- a/docs/sections/apis/configbinding.rst
+++ b/docs/sections/apis/configbinding.rst
@@ -1,5 +1,5 @@
-Config Binding Service 2.2.3
-============================
+Config Binding Service
+======================
.. toctree::
:maxdepth: 3
diff --git a/docs/sections/apis/deployment-handler.rst b/docs/sections/apis/deployment-handler.rst
index dc172a29..ab4c0c5c 100644
--- a/docs/sections/apis/deployment-handler.rst
+++ b/docs/sections/apis/deployment-handler.rst
@@ -1,8 +1,8 @@
.. This work is licensed under a
Creative Commons Attribution 4.0 International License.
-deployment-handler API 3.0.3
-============================
+deployment-handler
+==================
.. toctree::
:maxdepth: 3
diff --git a/docs/sections/apis/inventory.rst b/docs/sections/apis/inventory.rst
index e392bab4..7420102d 100644
--- a/docs/sections/apis/inventory.rst
+++ b/docs/sections/apis/inventory.rst
@@ -1,8 +1,8 @@
.. This work is licensed under a
Creative Commons Attribution 4.0 International License.
-DCAE Inventory API 3.0.4
-========================
+Inventory API
+=============
.. toctree::
:maxdepth: 3
diff --git a/docs/sections/apis/ves.rst b/docs/sections/apis/ves.rst
index f7f88020..f444c273 100644
--- a/docs/sections/apis/ves.rst
+++ b/docs/sections/apis/ves.rst
@@ -1,8 +1,8 @@
.. This work is licensed under a
Creative Commons Attribution 4.0 International License.
-VES Collector 1.3.2
-===================
+VES Collector
+=============
.. toctree::
:maxdepth: 3
@@ -23,7 +23,7 @@ against VES schema before distributing to DMAAP MR topics.
Contact Information
~~~~~~~~~~~~~~~~~~~
-dcae@lists.openecomp.org
+onap-discuss@lists.onap.org
Security
~~~~~~~~
diff --git a/docs/sections/architecture.rst b/docs/sections/architecture.rst
index 62b4e9b5..fc101159 100644
--- a/docs/sections/architecture.rst
+++ b/docs/sections/architecture.rst
@@ -5,29 +5,26 @@
Architecture
============
-Data Collection Analytics and Events (DCAE) is the data collection and analysis subsystem of ONAP. Its tasks include collecting measurement, fault, status, configuration, and other types of data from network entities and infrastructure that ONAP interacts with, applying analytics on collected data, and generating intelligence (i.e. events) for other ONAP components such as Policy, APPC, and SDNC to operate upon; hence completing the ONAP's close control loop for managing network services and applications.
+Data Collection Analytics and Events (DCAE) is the primary data collection and analysis system of ONAP. The DCAE architecture comprises DCAE Platform and DCAE Service components, so that the DCAE system is flexible, elastic, and expansive enough to support a potentially unlimited number of ways of constructing intelligent and automated control loops on distributed and heterogeneous infrastructure.
-The design of DCAE separates DCAE Services from DCAE Platform so that the DCAE system is flexible, elastic, and expansive enough for supporting the potentially infinite number of ways of constructing intelligent and automated control loops on distributed and heterogeneous infrastructure.
+DCAE Service components are the functional entities that realize the collection and analytics needs of ONAP control loops. They include collectors for various data collection needs, event processors for data standardization, analytics that assess collected data, and various auxiliary microservices that assist data collection and analytics and that support other ONAP functions. Service components and DMaaP buses form the "data plane" for DCAE, where DCAE-collected data is transported among different DCAE service components.
-DCAE Service components are the virtual functional entities that realize the collection and analysis needs of ONAP control loops. They include the collectors for various data collection needs, the analytics that assess collected data, and various auxiliary microservices that assist data collection and analytics, and support other ONAP functions. Service components and DMaaP buses form the "data plane" for DCAE, where DCAE collected data is transported among different DCAE service components.
+DCAE Platform supports the functions to deploy, host, and perform lifecycle management (LCM) of Service components. DCAE Platform components enable model-driven deployment of service components and of the middleware infrastructures that service components depend upon, such as special storage and computation platforms. When triggered by an invocation call (such as from CLAMP or via the DCAE Dashboard), DCAE Platform follows the TOSCA model of the control loop that is specified by the triggering call, and interacts with the underlying networking and computing infrastructure, such as OpenStack installations and Kubernetes clusters, to deploy and configure the virtual apparatus (i.e. the collectors, the analytics, and auxiliary microservices) needed to form the control loop, at the requested locations. DCAE Platform also provisions DMaaP topics and manages the distribution scopes of the topics, following the prescription of the control loop model, by interacting with the controlling function of DMaaP.
-On the other hand DCAE Platform components enable model driven deployment of service components and middleware infrastructures that service components depend upon, such as special storage and computation platforms. That is, when triggered by an invocation call, DCAE Platform follows the TOSCA model of the control loop that is specified by the triggering call, interacts with the underlying networking and computing infrastructure such as OpenSatck installations and Kubernetes clusters to deploy and configure the virtual apparatus (i.e. the collectors, the analytics, and auxiliary microservices) that are needed to form the control loop, at locations that are requested by the requirements of the control loop model. DCAE Platform also provisions DMaaP topics and manages the distribution scopes of the topics following the prescription of the control loop model by interacting with controlling function of DMaaP.
+DCAE service component configuration is stored in a Key-Value store service, embodied by a Consul cluster. During deployment, the DCAE platform (via a Cloudify plugin) stores each service component's configuration in Consul per deployment/instance (identified by ServiceComponentName). During startup, all DCAE components access this configuration through the ConfigBindingService APIs to load their deployment configuration and to watch for any subsequent updates.
-DCAE service components operate following a service discovery model. A highly available and distributed service discovery and Key-Value store service, embodied by a Consul cluster, is the foundation for this approach. DCAE components register they identities and service endpoint access parameters with the Consul service so that DCAE components can locate the API endpoint of other DCAE components by querying Consul with the well know service identities of other components.
+DCAE components use Consul's distributed K-V store service to distribute and manage component configurations, where each key is based on the unique identity of a DCAE component and the value is the configuration for the corresponding component. The DCAE platform creates and updates the K-V pairs based on information provided as part of the control loop blueprint, or received from other ONAP components such as the Policy Framework and CLAMP. Either through periodic polling or proactive pushing, the DCAE components receive configuration updates in real time and apply them. DCAE Platform also offers dynamic template resolution for configuration parameters that are dynamic and known only to the DCAE platform, such as dynamically provisioned DMaaP topics. This approach standardizes component deployment and configuration management for DCAE service components in multi-site deployments.
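+
+As an illustrative sketch (the hostnames and the service component name below are assumptions, not fixed values), a component instance can read its resolved configuration either directly from Consul's K-V API or through the Config Binding Service::
+
+    # raw K-V lookup against Consul (8500 is Consul's default HTTP port)
+    curl "http://consul-server:8500/v1/kv/dep-dcae-ves-collector?raw"
+
+    # resolved configuration via the Config Binding Service (default port 10000)
+    curl "http://config-binding-service:10000/service_component/dep-dcae-ves-collector"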
-During the registration process, DCAE components also register a health-check API with the Consul so that the operational status of the components are verified. Consul's health check offers a separate path for DACE and ONAP to learn about module operation status that would still be applicable even when the underlying infrastructure does not provide native health-check methods.
-More over, Consul's distributed K-V store service is the foundation for DCAE to distribute and manage component configurations where each key is based on the unique identity of a DACE component, and the value is the configuration for the corresponding component. DCAE platform creates and updates the K-V pairs based on information provided as part of the control loop blueprint, or received from other ONAP components such as Policy Framework and SDC. Either through periodically polling or proactive pushing, the DCAE components get the configuration updates in realtime and apply the configuration updates. DCAE Platform also offers dynamic template resolution for configuration parameters that are dynamic and only known by the DCAE platform, such as dynamically provisioned DMaaP topics.
-
-
-DCAE R3 Components
+DCAE R4 Components
------------------
-The following list displays the details of what are included in ONAP DCAE R3. All DCAE R3 components are offered as Docker containers. Following ONAP level deployment methods, these components can be deployed as Docker containers running on Docker host VM that is launched by OpenStack Heat Orchestration Template; or as Kubernetes Deployments and Services by Helm.
+The following list details what is included in ONAP DCAE R4. All DCAE components are offered as Docker containers. Following ONAP-level deployment methods, these components can be deployed as Kubernetes Deployments and Services.
- DCAE Platform
- Core Platform
- Cloudify Manager: TOSCA model executor. Materializes TOSCA models of control loop, or Blueprints, into properly configured and managed virtual DCAE functional components.
+ - Plugins
- Extended Platform
- Configuration Binding Service: Agent for service component configuration fetching; providing configuration parameter resolution.
- Deployment Handler: API for triggering control loop deployment based on control loop's TOSCA model.
@@ -58,9 +55,8 @@ The following list displays the details of what are included in ONAP DCAE R3. A
The figure below shows the DCAE R4 architecture and how the components work with each other. The components on the right constitute the Platform/controller components, which are statically deployed. The components on the left represent the services, which can be deployed either statically or dynamically (via CLAMP).
-.. image:: images/R3_architecture_diagram.gif
+.. image:: images/R4_architecture_diagram.png
-Note: Missing Heartbeat, Universal Data-mapper, PM-Mapper descoped from R3
Deployment Scenarios
--------------------
diff --git a/docs/sections/blueprints/DockerHost.rst b/docs/sections/blueprints/DockerHost.rst
deleted file mode 100644
index 25a96904..00000000
--- a/docs/sections/blueprints/DockerHost.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-DCAE Docker Host
-======================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that help other people understanding and using your blueprint
diff --git a/docs/sections/blueprints/PGaaS.rst b/docs/sections/blueprints/PGaaS.rst
deleted file mode 100644
index eedcfe56..00000000
--- a/docs/sections/blueprints/PGaaS.rst
+++ /dev/null
@@ -1,166 +0,0 @@
-PostgreSQL as a Service
-=======================
-
-PostgreSQL as a Service (PGaaS) comes in two flavors: all-in-one blueprint, and
-separate disk/cluster/database blueprints to separate the management of
-the lifetime of those constituent parts. Both are provided for use.
-
-Why Three Flavors?
-------------------
-
-The reason there are three flavors of blueprints lays in the difference in
-lifetime management of the constituent parts and the number of VMs created.
-
-For example, a database usually needs to have persistent storage, which
-in these blueprints comes from Cinder storage volumes. The primitives
-used in these blueprints assume that the lifetime of the Cinder storage
-volumes matches the lifetime of the blueprint deployment. So when the
-blueprint goes away, any Cinder storage volume allocated in the
-blueprint also goes away.
-
-Similarly, a database's lifetime may be the same time as an application's
-lifetime. When the application is undeployed, the associated database should
-be deployed too. OR, the database should have a lifetime beyond the scope
-of the applications that are writing to it or reading from it.
-
-Blueprint Files
----------------
-
-The Blueprints for PG Services and Cinder
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The all-in-one blueprint ``pgaas.yaml`` assumes that the PG servers and Cinder volumes can be allocated and
-deallocated together. The ``pgaas.yaml`` blueprint creates a cluster of two VMs named "``pstg``" by default.
-
-The ``pgaas-onevm.yaml`` blueprint creates a single-VM instance named "``pgvm``" by default.
-
-Alternatively, you can split them apart into separate steps, using ``pgaas-disk.yaml`` to allocate the
-Cinder volume, and ``pgaas-cluster.yaml`` to allocate a PG cluster. Create the Cinder volume first using
-``pgaas-disk.yaml``, and then use ``pgaas-cluster.yaml`` to create the cluster. The PG cluster can be
-redeployed without affecting the data on the Cinder volumes.
-
-The Blueprints for Databases
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The ``pgaas-database.yaml`` blueprint shows how a database can be created separately from any application
-that uses it. That database will remain present until the pgaas-database.yaml blueprint is
-undeployed. The ``pgaas-getdbinfo.yaml`` file demonstrates how an application would access the credentials
-needed to access a given database on a given PostgreSQL cluster.
-
-If the lifetime of your database is tied to the lifetime of your application, use a block similar to what
-is in ``pgaas-database.yaml`` to allocate the database, and use the attributes as shown in ``pgaas-getdbinfo.yaml``
-to access the credentials.
-
-Both of these blueprints use the ``dcae.nodes.pgaas.database`` plugin reference, but ``pgaas-getdbinfo.yaml``
-adds the ``use_existing: true`` property.
-
-
-What is Created by the Blueprints
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Each PostgreSQL cluster has a name, represented below as ``${CLUSTER}`` or ``CLNAME``. Each cluster is created
-with two VMs, one VM used for the writable master and the other as a cascaded read-only secondary.
-
-
-There are two DNS A records added, ``${LOCATIONPREFIX}${CLUSTER}00.${LOCATIONDOMAIN}`` and
-``${LOCATIONPREFIX}${CLUSTER}01.${LOCATIONDOMAIN}``. In addition,
-there are two CNAME entries added:
-``${LOCATIONPREFIX}-${CLUSTER}-write.${LOCATIONDOMAIN} ``
-and
-``${LOCATIONPREFIX}-${CLUSTER}.${LOCATIONDOMAIN}``. The CNAME
-``${LOCATIONPREFIX}-${CLUSTER}-write.${LOCATIONDOMAIN}`` will be used by further
-blueprints to create and attach to databases.
-
-
-Parameters
-------------
-
-The blueprints are designed to run using the standard inputs file used for all of the blueprints,
-plus several additional parameters that are given reasonable defaults.
-
-How to Run
-------------
-
-
-
-To install the PostgreSQL as a Service
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Installing the all-in-one blueprint is straightforward:
-
-::
-
- cfy install -p pgaas.yaml -i inputs.yaml
-
-By default, the all-in-one blueprint creates a cluster by the name ``pstg``.
-
-You can override that name using another ``-i`` option.
-(When overriding the defaults, it is also best to explicitly
-set the -b and -d names.)
-
-::
-
- cfy install -p pgaas.yaml -b pgaas-CLNAME -d pgaas-CLNAME -i inputs.yaml -i pgaas_cluster_name=CLNAME
-
-
-Separating out the disk allocation from the service creation requires using two blueprints:
-
-::
-
- cfy install -p pgaas-disk.yaml -i inputs.yaml
- cfy install -p pgaas-cluster.yaml -i inputs.yaml
-
-By default, these blueprints create a cluster named ``pgcl``, which can be overridden the same
-way as shown above:
-
-::
-
- cfy install -p pgaas-disk.yaml -b pgaas-disk-CLNAME -d pgaas-disk-CLNAME -i inputs.yaml -i pgaas_cluster_name=CLNAME
- cfy install -p pgaas-cluster.yaml -b pgaas-disk-CLNAME -d pgaas-disk-CLNAME -i inputs.yaml -i pgaas_cluster_name=CLNAME
-
-
-You must use the same pgaas_cluster_name for the two blueprints to work together.
-
-For the disk, you can also specify a ``cinder_volume_size``, as in ``-i cinder_volume_size=1000``
-for 1TiB volume. (There is no need to override the ``-b`` and ``-d`` names when changing the
-volume size.)
-
-
-You can verify that the cluster is up and running by connecting to the PostgreSQL service
-on port 5432. To verify that all of the DNS names were created properly and that PostgreSQL is
-answering on port 5432, you can use something like this:
-
-::
-
- sleep 1 | nc -v ${LOCATIONPREFIX}${CLUSTER}00.${LOCATIONDOMAIN} 5432
- sleep 1 | nc -v ${LOCATIONPREFIX}${CLUSTER}01.${LOCATIONDOMAIN} 5432
- sleep 1 | nc -v ${LOCATIONPREFIX}-${CLUSTER}-write.${LOCATIONDOMAIN} 5432
- sleep 1 | nc -v ${LOCATIONPREFIX}-${CLUSTER}.${LOCATIONDOMAIN} 5432
-
-
-Once you have the cluster created, you can then allocate databases. An application that
-wants a persistent database not tied to the lifetime of the application blueprint can
-use the ``pgaas-database.yaml`` blueprint to create the database;
-
-::
-
- cfy install -p pgaas-database.yaml -i inputs.yaml
-
-By default, the ``pgaas-database.yaml`` blueprint creates a database with the name ``sample``, which
-can be overridden using ``database_name``.
-
-
-::
-
- cfy install -p pgaas-database.yaml -b pgaas-database-DBNAME -d pgaas-database-DBNAME -i inputs.yaml -i database_name=DBNAME
- cfy install -p pgaas-database.yaml -b pgaas-database-CLNAME-DBNAME -d pgaas-database-CLNAME-DBNAME -i inputs.yaml -i pgaas_cluster_name=CLNAME -i database_name=DBNAME
-
-
-The ``pgaas-getdbinfo.yaml`` blueprint shows how an application can attach to an existing
-database and access its attributes:
-
-::
-
- cfy install -p pgaas-getdbinfo.yaml -d pgaas-getdbinfo -b pgaas-getdbinfo -i inputs.yaml
- cfy deployments outputs -d pgaas-getdbinfo
- cfy uninstall -d pgaas-getdbinfo
diff --git a/docs/sections/blueprints/cbs.rst b/docs/sections/blueprints/cbs.rst
deleted file mode 100644
index 79136d2e..00000000
--- a/docs/sections/blueprints/cbs.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-Config Binding Service
-======================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that help other people understanding and using your blueprint
diff --git a/docs/sections/blueprints/cdap.rst b/docs/sections/blueprints/cdap.rst
deleted file mode 100644
index cff25617..00000000
--- a/docs/sections/blueprints/cdap.rst
+++ /dev/null
@@ -1,130 +0,0 @@
-CDAP
-======================
-
-Note: This blueprint is intended to be deployed, automatically, as part of the
-DCAE bootstrap process, and is not normally invoked manually.
-
-The ONAP DCAEGEN2 CDAP blueprint deploys a 7 node Cask Data Application
-Platform (CDAP) cluster (version 4.1.x), for running data analysis
-applications. The template for the blueprint is at
-``blueprints/cdapbp7.yaml-template`` in the ONAP
-``dcaegen2.platform.blueprints`` project. The ``02`` VM in the cluster
-will be the CDAP master.
-
-Blueprint Input Parameters
---------------------------
-
-This blueprint has the following required input parameters:
-
-* ``ubuntu1604image_id``
-
- This is the OpenStack image ID of the Ubuntu 16.04 VM image that will be
- used to launch the 7 VMs making up the cluster.
-
-* ``flavor_id``
-
- This is the OpenStack flavor ID specifying the amount of memory, disk, and
- CPU available to each VM in the cluster. While the required values will be
- largely application dependent, a minimum of 32 Gigabytes of memory is
- strongly recommended.
-
-* ``security_group``
-
- This is the OpenStack security group specifying permitted inbound and
- outbound IP connectivity to the VMs in the cluster.
-
-* ``public_net``
-
- This is the name of the OpenStack network from which floating IP addresses
- for the VMs in the cluster will be allocated.
-
-* ``private_net``
-
- This is the name of the OpenStack network from which fixed IP addresses for
- the VMs in the cluster will be allocated.
-
-* ``openstack``
-
- This is the JSON object / YAML associative array providing values necessary
- for accessing OpenStack. The keys are:
-
- * ``auth_url``
-
- The URL for accessing the OpenStack Identity V2 API. (The version of
- Cloudify currently being used, and the associated OpenStack plugin do
- not currently support Identity V3).
-
- * ``tenant_name``
-
- The name of the OpenStack tenant/project where the VMs will be launched.
-
- * ``region``
-
- The name of the OpenStack region within the deployment. In smaller
- OpenStack deployments, where there is only one region, the region is
- often named ``RegionOne``.
-
- * ``username``
-
- The name of the OpenStack user used as a credential for accessing
- OpenStack.
-
- * ``password``
-
- The password of the OpenStack user. (The version of Cloudify currently
- being used does not provide a mechanism for encrypting this value).
-
-* ``keypair``
-
- The name of the ssh "key pair", within OpenStack, that will be given access,
- via the ubuntu login, to the VMs. Note: OpenStack actually stores only the
- public key.
-
-* ``key_filename``
-
- The full file path, on the Cloudify Manager VM used to deploy this blueprint,
- of the ssh private key file corresponding to the ``keypair`` input parameter.
-
-* ``location_domain``
-
- The DNS domain/zone for DNS entries associated with the VMs in the cluster.
- If, for example, location_domain is ``dcae.example.com`` then the FQDN for
- a VM with hostname ``abcd`` would be ``abcd.dcae.example.com`` and a DNS
- lookup of that FQDN would lead an A (or AAAA) record giving the floating
- IP address assigned to that VM.
-
-* ``location_prefix``
-
- The hostname prefix for hostnames of VMs in the cluster. The hostnames
- assigned to the VMs are created by concatenating this prefix with a suffix
- identifying the individual VMs in the cluster (``cdap00``, ``cdap01``, ...,
- ``cdap06``). If the location prefix is ``jupiter`` then the hostname of
- the CDAP master in the cluster would be ``jupitercdap02``.
-
-* ``codesource_url`` and ``codesource_version``
-
- ``codesource_url`` is the base URL for downloading DCAE specific project
- installation scripts. The intent is that this URL may be environment
- dependent, (for example it may, for security reasons, point to an internal
- mirror). This is used in combination with the ``codesource_version`` input
- parameter to determine the URL for downloading the scripts. There are 2
- scripts used by this blueprint - ``cdap-init.sh`` and
- ``instconsulagentub16.sh`` These scripts are part of the
- dcaegen2.deployments ONAP project. This blueprint assumes that curl/wget
- can find these scripts at
- *codesource_url/codesource_version*\ ``/cloud_init/cdap-init.sh`` and
- *codesource_url/codesource_version*\ ``/cloud_init/instconsulagentub16.sh``
- respectively. For example, if codesource_url is
- ``https://mymirror.example.com`` and codesource_version is ``rel1.0``,
- then the installation scripts would be expected to be stored under
- ``https://mymirror.example.com/rel1.0/raw/cloud_init/``
-
-This blueprint has the following optional inputs:
-
-* ``location_id`` (default ``solutioning-central``)
-
- The name of the Consul cluster to register this CDAP cluster with.
-
-* ``cdap_cluster_name`` (default ``cdap``)
-
- The name of the service to register this cluster as, in Consul.
diff --git a/docs/sections/blueprints/cdapbroker.rst b/docs/sections/blueprints/cdapbroker.rst
deleted file mode 100644
index 59ed5d37..00000000
--- a/docs/sections/blueprints/cdapbroker.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-CDAP Broker
-======================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that help other people understanding and using your blueprint
diff --git a/docs/sections/blueprints/centos_vm.rst b/docs/sections/blueprints/centos_vm.rst
deleted file mode 100644
index cd2660e4..00000000
--- a/docs/sections/blueprints/centos_vm.rst
+++ /dev/null
@@ -1,145 +0,0 @@
-CentOS VM
-======================
-
-Note: This blueprint is intended to be deployed, automatically, as part of the
-DCAE bootstrap process, and is not normally invoked manually.
-
-This blueprint controls the deployment of a VM running the CentOS 7 operating system, used to
-run an instance of the Cloudify Manager orchestration engine.
-
-This blueprint is used to bootstrap an installation of Cloudify Manager. All other DCAE
-components are launched using Cloudify Manager. The Cloudify Manager VM and the Cloudify Manager
-software are launched using the Cloudify command line software in its local mode.
-
-Blueprint files
-----------------------
-
-The blueprint file is stored under source control in the ONAP ``dcaegen2.platform.blueprints`` project, in the ``blueprints``
-subdirectory of the project, as a template named ``centos_vm.yaml-template``. The build process expands
-the template to fill in certain environment-specific values. In the ONAP integration environment, the build process
-uploads the expanded template, using the name ``centos_vm.yaml``, to a well known-location in a Nexus artifact repository.
-
-Parameters
----------------------
-
-This blueprint has the following required input parameters:
-* ``centos7image_id``
-
- This is the OpenStack image ID of the Centos7 VM image that will be
- used to launch the Cloudify Manager VM.
-
-* ``ubuntu1604image_id``
-
- This is not used by the blueprint but is specified here so that the blueprint
- can use the same common inputs file as other DCAE VMs (which use an Ubuntu 16.04 image).
-
-* ``flavor_id``
-
- This is the OpenStack flavor ID specifying the amount of memory, disk, and
- CPU available to the Cloudify Manager VM. While the required values will be
- largely application dependent, a minimum of 16 Gigabytes of memory is
- strongly recommended.
-
-* ``security_group``
-
- This is the OpenStack security group specifying permitted inbound and
- outbound IP connectivity to the VM.
-
-* ``public_net``
-
- This is the name of the OpenStack network from which a floating IP address
- for the VM will be allocated.
-
-* ``private_net``
-
- This is the name of the OpenStack network from which fixed IP addresses for
- the VM will be allocated.
-
-* ``openstack``
-
- This is the JSON object / YAML associative array providing values necessary
- for accessing OpenStack. The keys are:
-
- * ``auth_url``
-
- The URL for accessing the OpenStack Identity V2 API. (The version of
- Cloudify currently being used, and the associated OpenStack plugin do
- not currently support Identity V3).
-
- * ``tenant_name``
-
- The name of the OpenStack tenant/project where the VM will be launched.
-
- * ``region``
-
- The name of the OpenStack region within the deployment. In smaller
- OpenStack deployments, where there is only one region, the region is
- often named ``RegionOne``.
-
- * ``username``
-
- The name of the OpenStack user used as a credential for accessing
- OpenStack.
-
- * ``password``
-
- The password of the OpenStack user. (The version of Cloudify currently
- being used does not provide a mechanism for encrypting this value).
-
-* ``keypair``
-
- The name of the ssh "key pair", within OpenStack, that will be given access,
- via the ubuntu login, to the VMs. Note: OpenStack actually stores only the
- public key.
-
-* ``key_filename``
-
- The full file path, on the Cloudify Manager VM,
- of the ssh private key file corresponding to the ``keypair`` input parameter.
-
-* ``location_domain``
-
- The DNS domain/zone for DNS entries associated with the VM.
- If, for example, location_domain is ``dcae.example.com`` then the FQDN for
- a VM with hostname ``abcd`` would be ``abcd.dcae.example.com`` and a DNS
- lookup of that FQDN would lead an A (or AAAA) record giving the floating
- IP address assigned to that VM.
-
-* ``location_prefix``
-
- The hostname prefix for hostname of the VM. The hostname
- assigned to the VM is created by concatenating this prefix with a suffix
- identifying the Cloudify Manager VM (``orcl00``). If the location prefix is ``jupiter`` then the hostname of
- the Cloudify Manager VM would be ``jupiterorcl00``.
-
-* ``codesource_url`` and ``codesource_version``
-
- This is not used by the blueprint but is specified here so that the blueprint
- can use the same common inputs file as other DCAE VMs. Some of the other VMs use
- combination of ``codesource_url`` and ``codesource_version`` to locate scripts
- that are used at installation time.
-* ``datacenter``
-
- The datacenter name that is used by the DCAE Consul installation. This is needed so that the Consul agent
- installed on the Cloudify Manager VM can be configured to register itself to the Consul service discovery system.
-
-This blueprint has the following optional inputs:
-
-* ``cname`` (default ``dcae-orcl``)
-
- A DNS alias name for the Cloudify Manager VM. In addition to creating a DNS A record for the Cloudify Manager VM,
- the installation process also creates a CNAME record, using ``dcae-orcl`` by default as the alias.
- For example, if the ``location_domain`` input is ``dcae.example.com``, the ``location_prefix`` input is ``jupiter``,
- and the ``cname`` input is the default ``dcae-orcl``, then the installation process will create an A record for
- ``jupiterorcl00.dcae.example.com`` and a CNAME record for ``dcae-orcl.dcae.example.com`` that points to
- ``jupiterorcl00.dcae.example.com``.
-
-
-How To Run
----------------------
-
-This blueprint is run as part of the bootstrapping process. (See the ``dcaegen2.deployments`` project.)
-Running it manually requires setting up a Cloudify 3.4 command line environment--something that's handled
-automatically by the bootstrap process.
-
-
diff --git a/docs/sections/blueprints/consul.rst b/docs/sections/blueprints/consul.rst
deleted file mode 100644
index f036b345..00000000
--- a/docs/sections/blueprints/consul.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-Consul Cluster
-======================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that help other people understanding and using your blueprint
diff --git a/docs/sections/blueprints/deploymenthandler.rst b/docs/sections/blueprints/deploymenthandler.rst
deleted file mode 100644
index 427182c5..00000000
--- a/docs/sections/blueprints/deploymenthandler.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-Deployment Handler
-======================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that help other people understanding and using your blueprint
diff --git a/docs/sections/blueprints/holmes.rst b/docs/sections/blueprints/holmes.rst
deleted file mode 100644
index 94ca80fc..00000000
--- a/docs/sections/blueprints/holmes.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-Holmes Correlation Analytics
-============================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that help other people understanding and using your blueprint
diff --git a/docs/sections/blueprints/inventoryapi.rst b/docs/sections/blueprints/inventoryapi.rst
deleted file mode 100644
index ab998b2d..00000000
--- a/docs/sections/blueprints/inventoryapi.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-Inventory API
-======================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that help other people understanding and using your blueprint
diff --git a/docs/sections/blueprints/policyhandler.rst b/docs/sections/blueprints/policyhandler.rst
deleted file mode 100644
index 99637204..00000000
--- a/docs/sections/blueprints/policyhandler.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-Policy Handler
-======================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that help other people understanding and using your blueprint
diff --git a/docs/sections/blueprints/servicechangehandler.rst b/docs/sections/blueprints/servicechangehandler.rst
deleted file mode 100644
index 979948ba..00000000
--- a/docs/sections/blueprints/servicechangehandler.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-Service Change Handler
-======================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that help other people understanding and using your blueprint
diff --git a/docs/sections/blueprints/tca.rst b/docs/sections/blueprints/tca.rst
deleted file mode 100644
index 85fe70fb..00000000
--- a/docs/sections/blueprints/tca.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-Threshold Crossing Analytics
-============================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
-----------------------
-
-List where we can find the blueprints
-
-Parameters
----------------------
-
-The input parameters needed for running the blueprint
-
-How To Run
----------------------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that help other people understanding and using your blueprint
diff --git a/docs/sections/blueprints/ves.rst b/docs/sections/blueprints/ves.rst
deleted file mode 100644
index 1df74253..00000000
--- a/docs/sections/blueprints/ves.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-VNF Event Streaming Collector
-=============================
-
-Overview of my blueprint and the part it plays in DCAE.
-
-Blueprint files
----------------
-
-List where we can find the blueprints
-
-Parameters
-----------
-
-The input parameters needed for running the blueprint
-
-How To Run
-----------
-
-Cfy command for running the blueprint
-
-Additional Information
-----------------------
-Any additional information that help other people understanding and using your blueprint
diff --git a/docs/sections/build.rst b/docs/sections/build.rst
index 6b3a1140..4b9db930 100644
--- a/docs/sections/build.rst
+++ b/docs/sections/build.rst
@@ -46,31 +46,33 @@ Below is a list of the repos and their sub-modules, and the language they are wr
- dcaegen2.collectors.ves (Java)
- dcaegen2.collectors.hv-ves (Kotlin)
- dcaegen2.collectors.datafile (Java)
+ - dcaegen2.collectors.restconf (Java)
* dcaegen2.services
- dcaegen2.services.heartbeat (Python)
- dcaegen2.services.prh (Java)
-
+ - dcaegen2.services.bbs-eventprocessor (Java)
+ - dcaegen2.services.pm-mapper (Java)
+ - dcaegen2.services.ves-mapper (Java)
+ - dcaegen2.services.son-handler (Java)
* dcaegen2.deployments
- - bootstrap (bash)
- - cloud_init (bash)
- scripts (bash, python)
- tls-init-container (bash)
- k8s-bootstrap-container (bash)
- healthcheck-container (Node.js)
- k8s-bootstrap-container (bash)
- - pnda-bootstrap-container (bash)
- - pnda-mirror-container (bash)
+ - tca-cdap-container (bash)
+ - multisite-init-container (python)
+ - dcae-remote-site (helm chart)
* dcaegen2.platform
* dcaegen2.platform.blueprints
- blueprints (yaml)
- - check-blueprint-vs-input (yaml)
- input-templates (yaml)
* dcaegen2.platform.cli (Python)
@@ -90,6 +92,7 @@ Below is a list of the repos and their sub-modules, and the language they are wr
- dcae-policy (Python)
- docker (Python)
- relationships (Python)
+ - k8splugin (Python)
* dcaegen2.platform.policy-handler (Python)
@@ -104,7 +107,6 @@ Below is a list of the repos and their sub-modules, and the language they are wr
- scripts (bash)
-
Environment
-----------
Building is conducted in a Linux environment that has the basic building tools such as JDK 8, Maven 3, Python 2.7 and 3.6, docker engine, etc.
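As a sketch, a local build of one of the Java repos looks like the following (the repo URL is the standard ONAP gerrit path; having ONAP's Maven settings.xml configured is an assumption)::

    # clone and build the VES collector; needs JDK 8 and Maven 3
    git clone https://gerrit.onap.org/r/dcaegen2/collectors/ves
    cd ves
    mvn clean install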
diff --git a/docs/sections/components/component-development.rst b/docs/sections/components/component-development.rst
index 14a2d470..24463902 100644
--- a/docs/sections/components/component-development.rst
+++ b/docs/sections/components/component-development.rst
@@ -1,8 +1,8 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-DCAE Component Development
-==========================
+Onboarding Pre-requisite (Service Component)
+============================================
.. toctree::
:maxdepth: 1
diff --git a/docs/sections/configuration.rst b/docs/sections/configuration.rst
index 0e6fade4..85d74a72 100644
--- a/docs/sections/configuration.rst
+++ b/docs/sections/configuration.rst
@@ -4,19 +4,21 @@
Configuration
=============
-DACEGEN2 platform deploys its components via Cloudify Blueprints. Below is the list of Blueprints included in ONAP DCAEGEN2
-and details for how to configure them. For how to configure the deployment of the DCAE platform and service components, please see the Installation document: ./installation.rst.
+The DCAEGEN2 platform is deployed via Helm charts. The configuration is maintained in values.yaml and can be updated for deployment if necessary.
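+
+As a minimal sketch (the release name, chart reference, and overridden key are illustrative of an OOM-style install, not prescribed values), a values.yaml setting can be overridden at deployment time::
+
+    # redeploy the dcaegen2 charts with an overridden value
+    helm upgrade --install dev-dcaegen2 local/dcaegen2 --namespace onap \
+      --set dcae-config-binding-service.replicaCount=1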
-.. toctree::
- :maxdepth: 1
- :titlesonly:
+The following components were migrated to Helm charts as part of the Dublin release.
- ./blueprints/cbs.rst
- ./blueprints/deploymenthandler.rst
- ./blueprints/servicechangehandler.rst
- ./blueprints/inventoryapi.rst
- ./blueprints/policyhandler.rst
- ./blueprints/PGaaS.rst
- ./blueprints/ves.rst
- ./blueprints/tca.rst
- ./blueprints/holmes.rst
\ No newline at end of file
+
+.. csv-table::
+ :header: "Component", "Charts"
+ :widths: 22,100
+
+ "ConfigBinding Service", "https://git.onap.org/oom/tree/kubernetes/dcaegen2/charts/dcae-config-binding-service"
+ "Deployment Handler", "https://git.onap.org/oom/tree/kubernetes/dcaegen2/charts/dcae-deployment-handler"
+ "Policy Handler", "https://git.onap.org/oom/tree/kubernetes/dcaegen2/charts/dcae-policy-handler"
+ "ServiceChangeHandler", "https://git.onap.org/oom/tree/kubernetes/dcaegen2/charts/dcae-servicechange-handler"
+ "Invetory", "https://git.onap.org/oom/tree/kubernetes/dcaegen2/charts/dcae-servicechange-handler/charts/dcae-inventory-api"
+
+
+DCAE Service components are deployed via Cloudify blueprints. Instructions for deployment and configuration are documented under https://docs.onap.org/en/latest/submodules/dcaegen2.git/docs/sections/services/serviceindex.html
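+
+For example, a single service component blueprint can be deployed with the Cloudify CLI (a sketch mirroring the cfy usage shown elsewhere in these docs; the blueprint, deployment, and input file names are illustrative)::
+
+    cfy install -p k8s-ves.yaml -b ves-collector -d ves-collector -i inputs.yaml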
diff --git a/docs/sections/consumedapis.rst b/docs/sections/consumedapis.rst
index 46229fd8..17850c7e 100644
--- a/docs/sections/consumedapis.rst
+++ b/docs/sections/consumedapis.rst
@@ -16,6 +16,12 @@ Consumed APIs
DCAEGEN2 components make the following API calls into other ONAP components.
DMaaP Message Router
+* https://docs.onap.org/en/latest/submodules/dmaap/messagerouter/messageservice.git/docs/offeredapis/offeredapis.html
+DMaaP Data Router
+* https://docs.onap.org/en/latest/submodules/dmaap/datarouter.git/docs/offeredapis.html
Policy
+* https://docs.onap.org/en/latest/submodules/policy/engine.git/docs/platform/offeredapis.html
SDC
+* https://docs.onap.org/en/latest/submodules/sdc.git/docs/offeredapis.html
A&AI
+* https://docs.onap.org/en/latest/submodules/aai/aai-common.git/docs/platform/offeredapis.html
\ No newline at end of file
diff --git a/docs/sections/images/R4_architecture_diagram.png b/docs/sections/images/R4_architecture_diagram.png
new file mode 100644
index 00000000..1f63503b
--- /dev/null
+++ b/docs/sections/images/R4_architecture_diagram.png
Binary files differ
diff --git a/docs/sections/installation.rst b/docs/sections/installation.rst
index da87f529..0b60c1de 100644
--- a/docs/sections/installation.rst
+++ b/docs/sections/installation.rst
@@ -8,7 +8,6 @@ DCAE Deployment (Installation)
:maxdepth: 1
:titlesonly:
- ./installation_heat.rst
./installation_oom.rst
./installation_pnda.rst
./installation_test.rst
diff --git a/docs/sections/installation_heat.rst b/docs/sections/installation_heat.rst
deleted file mode 100644
index 20242c02..00000000
--- a/docs/sections/installation_heat.rst
+++ /dev/null
@@ -1,138 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-OpenStack Heat Orchestration Template Based DCAE Deployment
-===========================================================
-
-This document describes the details of the OpenStack Heat Orchestration Template deployment process and how to configure DCAE related parameters in the Heat template and its parameter file.
-
-
-ONAP Deployment Overview
-------------------------
-
-ONAP R3 supports an OpenStack Heat template based system deployment. The Heat Orchestration Template file and its parameter input file can be found under the **heat/ONAP** directory of the **demo** repo.
-
-When a new "stack" is created using the template, the following virtual resources will be launched in the target OpenStack tenant:
-
-* A four-character alphanumerical random text string, to be used as the ID of the deployment. It is denoted as {{RAND}} in the remainder of this document.
-* A private OAM network interconnecting all ONAP VMs, named oam_onap_{{RAND}}.
-* A virtual router interconnecting the private OAM network with the external network of the OpenStack installation.
-* A key-pair named onap_key_{{RAND}}.
-* A security group named onap_sg_{{RAND}}.
-* A list of VMs for ONAP components. Each VM has one NIC connected to the OAM network and assigned a fixed IP. Each VM is also assigned a floating IP address from the external network. The VM hostnames are name consistently across different ONAP deployments, a user defined prefix, denoted as {{PREFIX}}, followed by a descriptive string for the ONAP component this VM runs, and optionally followed by a sub-function name. In the parameter env file supplied when running the Heat template, the {{PREFIX}} is defined by the **vm_base_name** parameter. The VMs of the same ONAP role across different ONAP deployments will always have the same OAM network IP address. For example, the Message Router will always have the OAM network IP address of 10.0.11.1.
-
-
-The list below provides the IP addresses and hostnames for ONAP components that are relevant to DCAE.
-
-============== ========================== ==========================
-ONAP Role VM (Neutron) hostname OAM IP address(s)
-============== ========================== ==========================
-A&AI {{PREFIX}}-aai-inst1 10.0.1.1
-SDC {{PREFIX}}-sdc 10.0.3.1
-DCAE {{PREFIX}}-dcae 10.0.4.1
-Policy {{PREFIX}}-policy 10.0.6.1
-SD&C {{PREFIX}}-sdnc 10.0.7.1
-Robot TF {{PREFIX}}-robot 10.0.10.1
-Message Router {{PREFIX}}-message-router 10.0.11.1
-CLAMP {{PREFIX}}-clamp 10.0.12.1
-Private DNS {{PREFIX}}-dns-server 10.0.100.1
-============== ========================== ==========================
-
-(Each of the above VMs will also be associated with a floating IP address from the external network.)
-
-
-DCAE Deployment
----------------
-
-Within the Heat template yaml file, there is a section which specifies the DCAE VM as a "service". Majority of the service block is the script that the VM will execute after being launched. This is known as the "cloud-init" script. This script writes configuration parameters to VM disk files under the /opt/config directory of the VM file system, one parameter per file, with the file names matching with the parameter names. At the end, the cloud-init script invokes DCAE's installtioan script dcae2-install.sh, and DCAE deployment script dcae2_vm_init.sh. While the dace2_install.sh script installs the necessary software packages, the dcae2_vm_init.sh script actually deploys the DCAE Docker containers to the DCAE VM.
-
-Firstly, during the execution of the dcae2_vm_init.sh script, files under the **heat** directory of the **dcaegen2/deployments** repo are downloaded and any templates in these files referencing the configuration files under the /opt/config directories are expanded by the contents of the corresponding files. For example, a template of {{ **dcae_ip_addr** }} is replaced with the contents of the file /opt/config/**dcae_ip_addr**.txt file. The resultant files are placed under the /opt/app/config directory of the DCAE VM file system.
-
-In addition, the dcae2_vm_init.sh script also calls the scripts to register the components with Consul about their health check APIs, and their default configurations.
-
-Next, the dcae2_vm_init.sh script deploys the resources defined in the docker-compose-1.yaml and docker-compose-2.yaml files, with proper waiting in between to make sure the resource in docker-compose-1.yaml file have entered ready state before deploying the docker-compose-2.yaml file because the formers are the dependencies of the latter. These resources are a number of services components and their minimum supporting platform components (i.e. Consul server and Config Binding Service). With these resources, DCAE is able to provide a minimum configuration that supports the ONAP R2 use cases, namely, the vFW/vDNS, vCPE, cVoLTE use cases. However, lacking the DCAE full platform, this configuration does not support CLAMP and Policy update from Policy Framework. The only way to change the configurations of the service components (e.g. publishing to a different DMaaP topic) can only be accomplished by changing the value on the Consul for the KV of the service component, using Consul GUI or API call.
-
-For more complete deployment, the dcae2_vm_init.sh script further deploys docker-compose-3.yaml file, which deploys the rest of the DCAE platform components, and if configured so docker-compose-4.yaml file, which deploys DCAE R3 stretch goal service components such as PRH, Missing Heartbeat,HV-VES, DataFile etc.
-
-After all DCAE components are deployed, the dcae2_vm_init.sh starts to provide health check results. Due to the complexity of the DCAE system, a proxy is set up for returning a single binary result for DCAE health check instead of having each individual DCAE component report its health status. To accomplish this, the dcae2_vm_init.sh script deploys a Nginx reverse proxy then enters an infinite health check loop.
-
-During each iteration of the loop, the script checks Consul's service health status API and compare the received healthy service list with a pre-determined list to assess whether the DACE system is healthy. The list of services that must be healthy for the DCAE system to be assessed as healthy depends on the deployment profile which will be covered in the next subsection. For example, if the deployment profile only calls for a minimum configuration for passing use case data, whether DCAE platform components such as Deployment Handler are heathy does not affect the result.
-
-If the DCAE system is considered healthy, the dcae2_vm_init.sh script will generate a file that lists all the healthy components and the Nginx will return this file as the body of a 200 response for any DCAE health check. Otherwise, the Nginx will return a 404 response.
-
-
-Heat Template Parameters
-------------------------
-
-In DCAE R3, the configuration for DCAE deployment in Heat is greatly simplified. In addition to paramaters such as docker container image tags, the only parameter that configures DCAE deployment behavior is dcae_deployment_profiles.
-
-* dcae_deployment_profile: the parameter determines which DCAE components (containers) will be deployed. The following profiles are supported for R2:
- * R3MVP: This profile includes a minimum set of DACE components that will support the vFW/vDNS, vCPE. and vVoLTE use cases. It will deploy the following components:
- * Consul server,
- * Config Binding Service,
- * Postgres database,
- * VES collector
- * TCA analytics
- * Holmes rule management
- * Holmes engine management.
- * R3: This profile also deploys the rest of the DCAE platform. With R3 deployment profile, DCAE supports CLAMP and full control loop functionalities. These additional components are:
- * Cloudify Manager,
- * Deployment Handler,
- * Policy Handler,
- * Service Change Handler,
- * Inventory API.
- * R3PLUS: This profile deploys the DCAE R2 stretch goal service components, namely:
- * PNF Registration Handler,
- * SNMP Trap collector,
- * HV-VES Collector
- * Missing Heartbeat Detection analytics,
- * Universal Mapper
-
-Note: Missing Heartbeat and Universal Mapper are not part of official Casablanca release
-
-Tips for Manual Interventions
------------------------------
-
-During DCAE deployment, there are several places where manual interventions are possible:
-
-* Running dcae2_install.sh
-* Running dcae2_vm_init.sh
-* Individual docker-compose-?.yaml file
-
-All these require ssh-ing into the dcae VM, then change directory or /opt and sudo.
-Configurations injected from the Heat template and cloud init can be found under /opt/config.
-DCAE run time configuration values can be found under /opt/app/config. After any parameters are changed, the dcae2_vm_init.sh script needs to be rerun.
-
-Redpeloying/updating resources defines in docker-compose-?.yaml files can be achieved by running the following:
-
- $ cd /opt/app/config
- $ /opt/docker/docker-compose -f ./docker-compose-4.yaml down
- $ /opt/docker/docker-compose -f ./docker-compose-4.yaml up -d
-
-
-Some manual interventions may also require interaction with the OpenStack environment. This can be
-done by using the OpenStack CLI tool. OpenStack CLI tool comes very handy for various uses in deployment and maintenance of ONAP/DCAE.
-
-It is usually most convenient to install OpenStack CLI tool in a Python virtual environment. Here are the steps and commands::
-
- # create and activate the virtual environment, install CLI
- $ virtualenv openstackcli
- $ . openstackcli/bin/activate
- $ pip install --upgrade pip python-openstackclient python-designateclient python-novaclient python-keystoneclient python-heatclient
-
- # here we need to download the RC file form OpenStack dashboard:
- # Compute->Access & Security_>API Aceess->Download OpenStack RC file
-
- # activate the environment variables with values point to the taregt OpenStack tenant
- (openstackcli) $ . ./openrc.sh
-
-Now we are all set for using OpenStack cli tool to run various commands. For example::
-
- # list all tenants
- (openstackcli) $ openstack project list
-
-Finally to deactivate from the virtual environment, run::
-
- (openstackcli) $ deactivate
-
-
diff --git a/docs/sections/installation_pnda.rst b/docs/sections/installation_pnda.rst
index d1c0a383..1cafc01f 100644
--- a/docs/sections/installation_pnda.rst
+++ b/docs/sections/installation_pnda.rst
@@ -1,14 +1,20 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-Installing PNDA During Helm Chart Based DCAE Deployment
-=======================================================
+Installing PNDA Platform through Helm Chart
+===========================================
PNDA is integrated into ONAP as a component system of DCAE. It is possible to deploy PNDA as
part of an ONAP OOM deployment on Openstack infrastructure. This is achieved by using a
pnda-bootstrap container in kubernetes to deploy Openstack VMs and then install a PNDA cluster
onto those VMs.
+Note: The docker images used for PNDA deployments are maintained in a registry outside of ONAP, currently under pndareg.ctao6.net.
+These will be moved to the ONAP nexus3 repository as part of future release work.
+
+* onap/org.onap.dcaegen2.deployments.pnda-bootstrap-container:5.0.0
+* onap/org.onap.dcaegen2.deployments.pnda-mirror-container:5.0.0
+
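+A quick way to confirm that the PNDA bootstrap container has been scheduled (a sketch; the ``onap`` namespace and the ``pnda-bootstrap`` substring in the pod name are assumptions to verify against your deployment)::
+
+    kubectl -n onap get pods | grep pnda-bootstrap
+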
Requirements
------------
diff --git a/docs/sections/installation_test.rst b/docs/sections/installation_test.rst
index 83d4c8e3..c39923f5 100644
--- a/docs/sections/installation_test.rst
+++ b/docs/sections/installation_test.rst
@@ -1,53 +1,115 @@
-Testing and Debugging ONAP DCAE Deployment
-===========================================
-
-
-Check Component Status
+ONAP DCAE Deployment Validation
+===============================
+
+
+Check Deployment Status
+-----------------------
+
+The healthcheck service is exposed as a Kubernetes ClusterIP Service named
+``dcae-healthcheck``. The service can be queried for status as shown below.
+
+.. code-block::
+
+ $ curl dcae-healthcheck
+ {
+ "type": "summary",
+ "count": 14,
+ "ready": 14,
+ "items": [
+ {
+ "name": "dev-dcaegen2-dcae-cloudify-manager",
+ "ready": 1,
+ "unavailable": 0
+ },
+ {
+ "name": "dev-dcaegen2-dcae-config-binding-service",
+ "ready": 1,
+ "unavailable": 0
+ },
+ {
+ "name": "dev-dcaegen2-dcae-inventory-api",
+ "ready": 1,
+ "unavailable": 0
+ },
+ {
+ "name": "dev-dcaegen2-dcae-servicechange-handler",
+ "ready": 1,
+ "unavailable": 0
+ },
+ {
+ "name": "dev-dcaegen2-dcae-deployment-handler",
+ "ready": 1,
+ "unavailable": 0
+ },
+ {
+ "name": "dev-dcaegen2-dcae-policy-handler",
+ "ready": 1,
+ "unavailable": 0
+ },
+ {
+ "name": "dep-dcae-ves-collector",
+ "ready": 1,
+ "unavailable": 0
+ },
+ {
+ "name": "dep-dcae-tca-analytics",
+ "ready": 1,
+ "unavailable": 0
+ },
+ {
+ "name": "dep-dcae-prh",
+ "ready": 1,
+ "unavailable": 0
+ },
+ {
+ "name": "dep-dcae-hv-ves-collector",
+ "ready": 1,
+ "unavailable": 0
+ },
+ {
+ "name": "dep-dcae-dashboard",
+ "ready": 1,
+ "unavailable": 0
+ },
+ {
+ "name": "dep-dcae-snmptrap-collector",
+ "ready": 1,
+ "unavailable": 0
+ },
+ {
+ "name": "dep-holmes-engine-mgmt",
+ "ready": 1,
+ "unavailable": 0
+ },
+ {
+ "name": "dep-holmes-rule-mgmt",
+ "ready": 1,
+ "unavailable": 0
+ }
+ ]
+ }
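+
+Note that the ``dcae-healthcheck`` service name resolves only from inside the cluster. A minimal sketch for querying it from outside, assuming the ``onap`` namespace (the service port below is an assumption; check what ``kubectl get svc`` reports and adjust):
+
+.. code-block::
+
+    # discover the ClusterIP and port of the healthcheck service
+    $ kubectl -n onap get svc dcae-healthcheck
+
+    # forward a local port to the service, then query it locally
+    $ kubectl -n onap port-forward svc/dcae-healthcheck 8080:80 &
+    $ curl http://localhost:8080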
+
+
+Data Flow Verification
----------------------
-Testing of a DCAE system starts with checking the health of the deployed components. This can be done by accessing Consul, because all DCAE components register their status with Consul. The API is accessible at http://{{ANY_CONSUL_VM_IP}}:8500.
-
-In addition, more detailed status information can be obtained in the following ways.
-
-1. Check VES Status
- VES status and running logs can be found on the {{RAND}}doks00 VM. The detailed API and access methods can be found in the logging and human interface sections.
-
-2. Check TCA Status
- TCA has its own GUI that provides detailed operation information. Point browser to http://{{CDAP02_VM_IP}}:11011/oldcdap/ns/cdap_tca_hi_lo/apps/, select the application with Description "DCAE Analytics Threshold Crossing Alert Application"; then select "TCAVESCollectorFlow". This leads to a flow display where all stages of processing are illustrated and the number inside of each stage icon shows the number of events/messages processed.
-
-
-3. Check Message Router Status
- Run **curl {{MESSAGE_ROUTER_IP}}:3904/topics** to check the status of the message router. It should return with a list of message topics currently active on the Message Router;
-   * Among the topics, find one called "unauthenticated.SEC_MEASUREMENT_OUTPUT", which is the topic the VES collector publishes its data to, and another called "unauthenticated.DCAE_CL_OUTPUT", which is used by TCA to publish analytics events.
-
-
-Check data Flow
----------------
-
-After the platform is assessed as healthy, the next step is to check the functionality of the system. This can be monitored at a number of "observation" points.
-
-1. Check incoming VNF Data
-
- For R1 use cases, VNF data enters the DCAE system via the VES collector. This can be verified in the following steps:
-
- 1. ssh into the {{RAND}}doks00 VM;
- 2. Run: **sudo docker ps** to see that the VES collector container is running;
- * Optionally run: **docker logs -f {{ID_OF_THE_VES_CONTAINER}}** to check the VES container log information;
- 3. Run: **netstat -ln** to see that port 8080 is open;
-   4. Run: **sudo tcpdump dst port 8080** to see incoming packets (from VNFs) into the VM's 8080 port, which is mapped to the VES collector's 8080 port.
+After the platform is assessed as healthy, the next step is to check the functionality of the system. This can be monitored at a number of "observation" points.
+
+1. Incoming VNF data into the VES collector can be verified through the container logs using kubectl:
+
+       kubectl logs -f -n onap <vescollectorpod> dcae-ves-collector
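+
+   A hedged way to resolve the pod name placeholder above, assuming the VES collector pod name contains ``ves-collector``:
+
+       kubectl -n onap get pods | grep ves-collector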
2. Check VES Output
   VES publishes received VNF data, after authentication and syntax check, onto the DMaaP Message Router. To check that this is happening, we can subscribe to the publishing topic.
- 1. Run the subscription command to subscribe to the topic: **curl -H "Content-Type:text/plain" -X GET http://{{MESSAGE_ROUTER_IP}}:3904/events/unauthenticated.SEC_MEASUREMENT_OUTPUT/group19/C1?timeout=50000**. The actual format and use of Message Router API can be found in DMaaP project documentation.
+ 1. Run the subscription command to subscribe to the topic: **curl -H "Content-Type:text/plain" -X GET http://{{K8S_NODEIP}}:30227/events/unauthenticated.VES_MEASUREMENT_OUTPUT/group1/C1?timeout=50000**. The actual format and use of Message Router API can be found in DMaaP project documentation.
* When there are messages being published, this command returns with the JSON array of messages;
   * If no message is being published, then up to the timeout value (i.e. 50000 milliseconds, or 50 seconds, as in the example above), the call returns with an empty JSON array;
- * It may be useful to run this command in a loop: **while :; do curl -H "Content-Type:text/plain" -X GET http://{{MESSAGE_ROUTER_IP}}:3904/events/unauthenticated.SEC_MEASUREMENT_OUTPUT/group19/C1?timeout=50000; echo; done**;
+   * It may be useful to run this command in a loop: **while :; do curl -H "Content-Type:text/plain" -X GET http://{{K8S_NODEIP}}:30227/events/unauthenticated.VES_MEASUREMENT_OUTPUT/group1/C1?timeout=50000; echo; done**;
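+   * The raw JSON array can be hard to read; assuming ``jq`` is installed on the client, the output can be pretty-printed: **curl -s -H "Content-Type:text/plain" -X GET http://{{K8S_NODEIP}}:30227/events/unauthenticated.VES_MEASUREMENT_OUTPUT/group1/C1?timeout=50000 | jq .**;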
3. Check TCA Output
TCA also publishes its events to Message Router under the topic of "unauthenticated.DCAE_CL_OUTPUT". The same Message Router subscription command can be used for checking the messages being published by TCA;
- * Run the subscription command to subscribe to the topic: **curl -H "Content-Type:text/plain" -X GET http://{{MESSAGE_ROUTER_IP}}:3904/events/unauthenticated.DCAE_CL_OUTPUT/group19/C1?timeout=50000**.
- * Or run the command in a loop: **while :; do curl -H "Content-Type:text/plain" -X GET http://{{MESSAGE_ROUTER_IP}}:3904/events/unauthenticated.DCAE_CL_OUTPUT/group19/C1?timeout=50000; echo; done**;
+   * Run the subscription command to subscribe to the topic: **curl -H "Content-Type:text/plain" -X GET http://{{K8S_NODEIP}}:30227/events/unauthenticated.DCAE_CL_OUTPUT/group1/C1?timeout=50000**.
+   * Or run the command in a loop: **while :; do curl -H "Content-Type:text/plain" -X GET http://{{K8S_NODEIP}}:30227/events/unauthenticated.DCAE_CL_OUTPUT/group1/C1?timeout=50000; echo; done**;
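+
+4. Inject a Test Event (optional)
+   As an end-to-end smoke test, a minimal VES event can be posted directly to the VES collector and then observed on the corresponding output topic with the subscription commands above (the output topic depends on the event domain and the collector's DMaaP configuration). This is a sketch only: the NodePort (30235) and API version (v5) are assumptions to verify against your deployment, and credentials must be added if collector authentication is enabled.
+
+       curl -i -X POST http://{{K8S_NODEIP}}:30235/eventListener/v5 \
+         -H "Content-Type: application/json" \
+         -d '{"event":{"commonEventHeader":{"version":3.0,"eventName":"heartbeat_Test","domain":"heartbeat","eventId":"test-0001","sequence":0,"priority":"Normal","reportingEntityName":"test","sourceName":"test","startEpochMicrosec":1560000000000000,"lastEpochMicrosec":1560000000000000}}}'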
diff --git a/docs/sections/offeredapis.rst b/docs/sections/offeredapis.rst
index 46114252..969684a0 100644
--- a/docs/sections/offeredapis.rst
+++ b/docs/sections/offeredapis.rst
@@ -10,7 +10,6 @@ DCAEGEN2 Components Offered APIs
apis/inventory.rst
apis/ves.rst
apis/ves-hv/index.rst
- apis/dcaecdap.rst
apis/PRH.rst
apis/DFC.rst
apis/PNDA.rst
diff --git a/docs/sections/release-notes.rst b/docs/sections/release-notes.rst
index b6d1210b..0f5245e3 100644
--- a/docs/sections/release-notes.rst
+++ b/docs/sections/release-notes.rst
@@ -3,6 +3,188 @@
Release Notes
=============
+Version: 4.0.0
+--------------
+
+:Release Date: 2019-06-06
+
+**New Features**
+
+DCAE R4 improves upon the previous release with the following new features:
+
+- DCAE Platform Enhancement
+ - Multisite K8S cluster deployment support for DCAE services (via K8S plugin)
+ - Support helm chart deployment in DCAE using new Helm cloudify plugin
+ - DCAE Healthcheck enhancement to cover static and dynamic deployments
+  - Dynamic AAF-based topic provisioning support through the DMaaP cloudify plugin
+  - Dashboard Integration (UI for deployment/verification)
+  - PolicyHandler enhancement to support the new Policy Lifecycle APIs
+ - Blueprint generator tool to simplify deployment artifact creation
+ - Cloudify Manager resiliency
+
+- The following new services are delivered with Dublin
+  - Collectors
+    - RESTConf collector
+ - Event Processors
+ - VES/Universal Mapper
+ - 3gpp PM-Mapper
+ - BBS Event processor
+ - Analytics/RCA
+ - SON-Handler
+ - Heartbeat MS
+
+Most platform components have been migrated to Helm charts. The following is the complete list of DCAE components available as part of the default ONAP/DCAE installation.
+ - Platform components
+ - Cloudify Manager (helm chart)
+ - Bootstrap container (helm chart)
+ - Configuration Binding Service (helm chart)
+ - Deployment Handler (helm chart)
+    - Policy Handler (helm chart)
+ - Service Change Handler (helm chart)
+ - Inventory API (helm chart)
+ - Dashboard (Cloudify Blueprint)
+ - Service components
+ - VES Collector
+ - SNMP Collector
+ - Threshold Crossing Analytics
+ - HV-VES Collector
+ - PNF-Registration Handler
+ - Holmes Rule Management *
+ - Holmes Engine Management *
+ - Additional resources that DCAE utilizes:
+ - Postgres Database
+ - Redis Cluster Database
+ - Consul Cluster *
+
+ Notes:
+ \* These components are delivered by the Holmes project.
+
+
+Under OOM (Kubernetes) deployment, all DCAE component containers are deployed as Kubernetes Pods/Deployments/Services into the Kubernetes cluster. DCAE R3 introduced an enhancement to the Cloudify Manager plugin (k8splugin) that is capable of expanding a blueprint node specification written for a Docker container into a full Kubernetes specification, with additional enhancements such as replica scaling, a sidecar for logging to the ONAP ELK stack, registering services to MSB, etc.
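+
+A quick way to observe the result of such an expansion, assuming the ``onap`` namespace and the ``dep-`` prefix used for dynamically deployed DCAE service components::
+
+    kubectl -n onap get deployments,services | grep dep-dcae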
+
+- All DCAE components are designed to support platform maturity requirements.
+
+
+**Source Code**
+
+Source code of DCAE components is released under the following repositories on gerrit.onap.org:
+ - dcaegen2
+ - dcaegen2.analytics.tca
+ - dcaegen2.collectors.snmptrap
+ - dcaegen2.collectors.ves
+ - dcaegen2.collectors.hv-ves
+ - dcaegen2.collectors.datafile
+ - dcaegen2.collectors.restconf
+ - dcaegen2.deployments
+ - dcaegen2.platform.blueprints
+ - dcaegen2.platform.cli
+ - dcaegen2.platform.configbinding
+ - dcaegen2.platform.deployment-handler
+ - dcaegen2.platform.inventory-api
+ - dcaegen2.platform.plugins
+ - dcaegen2.platform.policy-handler
+ - dcaegen2.platform.servicechange-handler
+ - dcaegen2.services.heartbeat
+ - dcaegen2.services.mapper
+ - dcaegen2.services.pm-mapper
+ - dcaegen2.services.prh
+ - dcaegen2.services.son-handler
+ - dcaegen2.services
+ - dcaegen2.services.sdk
+ - dcaegen2.utils
+ - ccsdk.platform.plugins
+ - ccsdk.dashboard
+
+**Bug Fixes**
+
+**Known Issues**
+
+**Security Notes**
+
+DCAE code has been formally scanned during build time using NexusIQ and all Critical vulnerabilities have been addressed; items that remain open have been assessed for risk and determined to be false positives. The DCAE open Critical security vulnerabilities and their risk assessment have been documented as part of the `project <https://wiki.onap.org/pages/viewpage.action?pageId=51282478>`_.
+
+Quick Links:
+ - `DCAE project page <https://wiki.onap.org/display/DW/Data+Collection+Analytics+and+Events+Project>`_
+
+ - `Passing Badge information for DCAE <https://bestpractices.coreinfrastructure.org/en/projects/1718>`_
+
+ - `Project Vulnerability Review Table for DCAE <https://wiki.onap.org/pages/viewpage.action?pageId=51282478>`_
+
+
+**New Component Notes**
+
+The following components are introduced in R4:
+
+ - Dashboard
+ - Docker container tag: onap/org.onap.ccsdk.dashboard.ccsdk-app-os:1.1.0
+   - Description: Dashboard provides a UI for users/operators to deploy and manage service components in DCAE
+ - Blueprint generator
+ - Java artifact : /org/onap/dcaegen2/platform/cli/blueprint-generator/1.0.0/blueprint-generator-1.0.0.jar
+ - Description: Tool to generate the deployment artifact (cloudify blueprints) based on component spec
+  - RESTConf collector
+ - Docker container tag: onap/org.onap.dcaegen2.collectors.restconfcollector:1.1.1
+ - Description: Provides RESTConf interfaces to events from external domain controllers
+ - VES/Universal Mapper
+ - Docker container tag: onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:1.0.0
+   - Description: Standardizes events received from the SNMP and RESTConf collectors into VES format for further processing by DCAE analytics services
+ - 3gpp PM-Mapper
+ - Docker container tag: onap/org.onap.dcaegen2.services.pm-mapper:1.0.1
+   - Description: Transforms the 3GPP data feed received from DMaaP-DR into VES events
+ - BBS Event processor
+ - Docker container tag: onap/org.onap.dcaegen2.services.components.bbs-event-processor:1.0.0
+   - Description: Handles PNF-Reregistration and CPE authentication events and generates CL events
+ - SON-Handler
+ - Docker container tag: onap/org.onap.dcaegen2.services.son-handler:1.0.1
+   - Description: Supports PCI-ANR optimization analysis and generates CL event output
+ - Heartbeat MS
+ - Docker container tag: onap/org.onap.dcaegen2.services.heartbeat:2.1.0
+   - Description: Generates missing-heartbeat CL events based on a configured threshold for VES heartbeats per VNF type.
+
+
+**Upgrade Notes**
+
+The following components are upgraded from R3
+ - Cloudify Manager:
+ - Docker container tag: onap/org.onap.dcaegen2.deployments.cm-container:1.6.2
+ - Description: DCAE's Cloudify Manager container is based on Cloudify Manager Community Version 19.01.24, which is based on Cloudify Manager 4.5.
+ - K8S Bootstrap container:
+ - Docker container tag: onap/org.onap.dcaegen2.deployments.k8s-bootstrap-container:1.4.18
+ - Description: K8s bootstrap container updated to include new plugin and remove DCAE Controller components which have been migrated to Helm chart.
+ - Configuration Binding Service:
+ - Docker container tag: onap/org.onap.dcaegen2.platform.configbinding.app-app:2.3.0
+ - Description: Code optimization and bug fixes
+ - Deployment Handler
+ - Docker container image tag: onap/org.onap.dcaegen2.platform.deployment-handler:4.0.1
+    - Description: Includes updates for health and service endpoint checks and bug fixes
+ - Policy Handler
+ - Docker container image tag: onap/org.onap.dcaegen2.platform.policy-handler:5.0.0
+    - Description: Policy Handler supports the new lifecycle APIs from the Policy framework
+ - Service Change Handler
+ - Docker container image tag: onap/org.onap.dcaegen2.platform.servicechange-handler:1.1.5
+ - Description: No update from R3
+ - Inventory API
+ - Docker container image tag: onap/org.onap.dcaegen2.platform.inventory-api:3.2.0
+ - Description: Refactoring and updates for health and service endpoint check
+ - VES Collector
+ - Docker container image tag: onap/org.onap.dcaegen2.collectors.ves.vescollector:1.4.4
+   - Description: Authentication enhancements, refactoring and bug fixes
+ - Threshold Crossing Analytics
+ - Docker container image tag: onap/org.onap.dcaegen2.deployments.tca-cdap-container:1.1.2
+ - Description: Config updates. Replaced Hadoop VM Cluster based file system with regular host file system; repackaged full TCA-CDAP stack into Docker container; transactional state separation from TCA in-memory to off-node Redis cluster for supporting horizontal scaling.
+ - DataFile Collector
+ - Docker container tag: onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server:1.1.2
+   - Description: Code optimization, bug fixes, logging and performance improvements
+ - PNF Registrator handler
+ - Docker container tag: onap/org.onap.dcaegen2.services.prh.prh-app-server:1.2.3
+   - Description: Code optimization, SDK integration, PNF-UPDATE flow support
+ - HV-VES Collector
+ - Docker container tag: onap/org.onap.dcaegen2.collectors.hv-ves.hv-collector-main:1.1.0
+   - Description: Code optimization, bug fixes, and SASL enablement for the Kafka interface
+ - SNMP Trap Collector
+ - Docker container tag: onap/org.onap.dcaegen2.collectors.snmptrap:1.4.0
+   - Description: Code coverage improvements
+
+
+
Version: 3.0.1
--------------
@@ -47,6 +229,12 @@ The following containers are updated in R3.0.1
- An issue related to VESCollector basic authentication was noted and tracked under DCAEGEN2-1130. This configuration is not enabled by default for R3.0.1; the fix will be handled in Dublin
+- Certificates under onap/org.onap.dcaegen2.deployments.tls-init-container:1.0.0 expired in March 2019, impacting CL deployment from CLAMP. Follow the workaround below to update the certificate::
+
+    kubectl get deployments -n onap | grep deployment-handler
+    kubectl edit deployment -n onap dev-dcaegen2-dcae-deployment-handler
+
+  In the editor, search for the tag onap/org.onap.dcaegen2.deployments.tls-init-container:1.0.0 and change it to onap/org.onap.dcaegen2.deployments.tls-init-container:1.0.3.
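+
+  A non-interactive alternative sketch using kubectl patch, assuming the tls-init container is the first (index 0) entry in the deployment's initContainers list; verify the index first with **kubectl -n onap get deployment dev-dcaegen2-dcae-deployment-handler -o yaml**::
+
+    kubectl patch deployment -n onap dev-dcaegen2-dcae-deployment-handler --type json \
+      -p '[{"op":"replace","path":"/spec/template/spec/initContainers/0/image","value":"onap/org.onap.dcaegen2.deployments.tls-init-container:1.0.3"}]'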
+
+
Version: 3.0.0
diff --git a/docs/sections/services/sdk/architecture.rst b/docs/sections/sdk/architecture.rst
index 3f3cdf55..3f3cdf55 100644
--- a/docs/sections/services/sdk/architecture.rst
+++ b/docs/sections/sdk/architecture.rst
diff --git a/docs/sections/sdk/index.rst b/docs/sections/sdk/index.rst
new file mode 100644
index 00000000..c5d27a2d
--- /dev/null
+++ b/docs/sections/sdk/index.rst
@@ -0,0 +1,16 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+DCAE SDK
+========
+
+With the Dublin release, DCAE has introduced a new SDK to aid component development. The **SDK** is a common software development kit written in Java. It contains various utilities and clients which may be used for getting configuration from CBS, consuming messages from DMaaP, interacting with A&AI, etc.
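+
+For illustration, the configuration lookup performed by the SDK's CBS client corresponds to the following Config Binding Service call (a sketch assuming the in-cluster service name and default port 10000; the component name is a hypothetical placeholder)::
+
+    curl http://config-binding-service:10000/service_component/<component-name>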
+
+
+SDK Overview
+------------
+
+.. toctree::
+ :maxdepth: 1
+
+ ./architecture.rst \ No newline at end of file
diff --git a/docs/sections/services/bbs-event-processor/index.rst b/docs/sections/services/bbs-event-processor/index.rst
index 6d54474d..bcaa700b 100644
--- a/docs/sections/services/bbs-event-processor/index.rst
+++ b/docs/sections/services/bbs-event-processor/index.rst
@@ -1,11 +1,11 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-=====================================
-BBS-EP (BBS use case event processor)
-=====================================
+==================
+BBS-EventProcessor
+==================
-:Date: 2019-04-11
+:Date: 2019-06-06
.. contents::
:depth: 3
diff --git a/docs/sections/services/dfc/index.rst b/docs/sections/services/dfc/index.rst
index cad36f56..0979bfe4 100644
--- a/docs/sections/services/dfc/index.rst
+++ b/docs/sections/services/dfc/index.rst
@@ -2,8 +2,8 @@
.. http://creativecommons.org/licenses/by/4.0
-DATAFILE COLLECTOR MS (DFC)
-=============================
+DataFile Collector(DFC)
+=======================
.. Add or remove sections below as appropriate for the platform component.
diff --git a/docs/sections/services/heartbeat-ms/index.rst b/docs/sections/services/heartbeat-ms/index.rst
index 148b8da8..d8a77fa5 100644
--- a/docs/sections/services/heartbeat-ms/index.rst
+++ b/docs/sections/services/heartbeat-ms/index.rst
@@ -1,8 +1,8 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-Heartbeat Microservice (version 2.1.0)
-======================================
+Heartbeat Microservice
+======================
The main objective of **Heartbeat Microservice** is to receive the periodic
heartbeat from the configured eventNames and report the loss of heartbeat
diff --git a/docs/sections/services/mapper/index.rst b/docs/sections/services/mapper/index.rst
index f88f4214..de534be2 100644
--- a/docs/sections/services/mapper/index.rst
+++ b/docs/sections/services/mapper/index.rst
@@ -3,11 +3,12 @@
.. Copyright 2018-2019 Tech Mahindra Ltd.
-Mapper
-=====================
+VES-Mapper
+==========
-| **Problem:** Different VNF vendors generate event and telemetry data in different formats. Out of the box, all VNF vendors may not support VES format.
-| **Solution**: A generic adapter which can convert different formats of event and telemetry data to VES format can be of use here.
+Different VNF vendors generate event and telemetry data in different formats, and not all VNFs support the VES format out of the box.
+VES-Mapper provides a generic adapter to convert different formats of event and telemetry data into VES structure that can be consumed by existing DCAE analytics applications.
+
| *Note*: Currently mapping files are available for SNMP collector and RESTConf collector.
**VES-Mapper** converts the telemetry data into the required VES format and publishes it to DMaaP for further action by the DCAE analytics applications.
diff --git a/docs/sections/services/restconf/index.rst b/docs/sections/services/restconf/index.rst
index ec98c959..4b81b211 100644
--- a/docs/sections/services/restconf/index.rst
+++ b/docs/sections/services/restconf/index.rst
@@ -1,9 +1,9 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-=====================================
-RestConfCollector
-=====================================
+==================
+RestConf Collector
+==================
.. contents::
:depth: 3
diff --git a/docs/sections/services/sdk/index.rst b/docs/sections/services/sdk/index.rst
deleted file mode 100644
index 0af47ec5..00000000
--- a/docs/sections/services/sdk/index.rst
+++ /dev/null
@@ -1,19 +0,0 @@
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-DCAE SDK
-========
-
-**SDK** is a common software development kit written in Java. It contains various utilities and clients which may be used for getting configuration from CBS, consuming messages from DMaaP, interacting with A&AI, etc.
-
-
-SDK Overview
-------------
-
-.. toctree::
- :maxdepth: 1
-
- ./architecture.rst
-
-
-API reference \ No newline at end of file
diff --git a/docs/sections/services/serviceindex.rst b/docs/sections/services/serviceindex.rst
index 9825fe9a..fdc8aedd 100644
--- a/docs/sections/services/serviceindex.rst
+++ b/docs/sections/services/serviceindex.rst
@@ -2,8 +2,8 @@
.. http://creativecommons.org/licenses/by/4.0
-Service components under DCAE
-=============================
+DCAE Service components
+=======================
.. Add or remove sections below as appropriate for the platform component.
@@ -19,6 +19,5 @@ Service components under DCAE
./heartbeat-ms/index.rst
./pm-mapper/index.rst
./bbs-event-processor/index.rst
- ./sdk/index.rst
./son-handler/index.rst
- ./restconf/index.rst
+ ./restconf/index.rst \ No newline at end of file
diff --git a/docs/sections/services/snmptrap/index.rst b/docs/sections/services/snmptrap/index.rst
index 0a27cdd8..bee611c1 100644
--- a/docs/sections/services/snmptrap/index.rst
+++ b/docs/sections/services/snmptrap/index.rst
@@ -2,8 +2,8 @@
.. http://creativecommons.org/licenses/by/4.0
-SNMP TRAP COLLECTOR MS (DCAE)
-=============================
+SNMP Trap Collector
+===================
.. Add or remove sections below as appropriate for the platform component.
diff --git a/docs/sections/services/ves-http/architecture.rst b/docs/sections/services/ves-http/architecture.rst
index 5994255b..29077afe 100644
--- a/docs/sections/services/ves-http/architecture.rst
+++ b/docs/sections/services/ves-http/architecture.rst
@@ -2,7 +2,7 @@
.. http://creativecommons.org/licenses/by/4.0
VES Architecture
-===================
+================
.. image:: ./ves-deployarch.png
@@ -32,8 +32,8 @@ Schema definition files are contained within VES collector gerrit repo - https:/
Features Supported
==================
- VES collector deployed as docker containers
-- Acknowledged the sender with appropriate response code (both successful and failure)
-- Authentication of the events posted to collector
+- Acknowledgement to the sender with an appropriate response code (both success and failure)
+- Authentication of the events posted to the collector (supports 4 types of authentication settings)
- Support single or batch JSON events input
- Schema validation (against standard VES definition)
- Multiple schema support and backward compatibility