-rw-r--r--   docs/requirements-docs.txt                          4
-rw-r--r--   docs/sections/architecture.rst                     76
-rw-r--r--   docs/sections/images/R10_architecture_diagram.png   bin 0 -> 95706 bytes
-rw-r--r--   docs/sections/images/R9_architecture_diagram.png    bin 0 -> 130764 bytes
-rw-r--r--   docs/sections/installation_oom.rst                 176
-rw-r--r--   docs/sections/services/snmptrap/installation.rst    4
6 files changed, 78 insertions(+), 182 deletions(-)
diff --git a/docs/requirements-docs.txt b/docs/requirements-docs.txt
index 5a3d2f17..3b3441a8 100644
--- a/docs/requirements-docs.txt
+++ b/docs/requirements-docs.txt
@@ -1 +1,3 @@
-lfdocs-conf \ No newline at end of file
+lfdocs-conf
+sphinx>=4.2.0 # BSD
+sphinx-rtd-theme>=1.0.0 # MIT
diff --git a/docs/sections/architecture.rst b/docs/sections/architecture.rst
index 31164d62..04d41daa 100644
--- a/docs/sections/architecture.rst
+++ b/docs/sections/architecture.rst
@@ -6,46 +6,26 @@
Architecture
============
-Data Collection Analytics and Events (DCAE) is the primary data collection and analysis system of ONAP. DCAE architecture comprises of DCAE Platform and
-DCAE Service components making DCAE flexible, elastic, and expansive enough for supporting the potentially infinite number of ways of constructing intelligent
-and automated control loops on distributed and heterogeneous infrastructure.
+The DCAE project provides the intelligence for ONAP to support automation (via open loops and closed loops) by performing network data collection, analytics, and correlation, and by triggering actionable root-cause events.
-DCAE Platform supports the functions to deploy, host and perform LCM applications of Service components. DCAE Platform components enable model driven deployment of
-service components and middleware infrastructures that service components depend upon, such as special storage and computation platforms. When triggered by an
-invocation call (such as CLAMP or via DCAE Dashboard), DCAE Platform follows the TOSCA model of the control loop that is specified by the triggering call,
-interacts with the underlying networking and computing infrastructure such as OpenSatck installations and Kubernetes clusters to deploy and configure the virtual
-apparatus (i.e. the collectors, the analytics, and auxiliary microservices) that are needed to form the control loop, at locations that requested.
-DCAE Platform also provisions DMaaP topics and manages the distribution scopes of the topics following the prescription of the control loop model by interacting
-with controlling function of DMaaP.
+Prior to the Jakarta release, the DCAE architecture comprised DCAE Platform and DCAE Service components; the DCAE Platform supported the functions to deploy, host, and perform lifecycle management (LCM) of Service components.
-DCAE Service components are the functional entities that realize the collection and analytics needs of ONAP control loops. They include the collectors for various
-data collection needs, event processors for data standardization, analytics that assess collected data, and various auxiliary microservices that assist data
-collection and analytics, and support other ONAP functions. Service components and DMaaP buses form the "data plane" for DCAE, where DCAE collected data is
-transported among different DCAE service components.
+With the Jakarta release, the DCAE Platform components centered around Cloudify have been deprecated. All microservice orchestration and lifecycle management is now handled through Helm/Kubernetes.
-DCAE use Consul's distributed K-V store service to manage component configurations where each key is based on the unique identity of a DCAE component (identified by ServiceComponentName), and the value is the configuration for the corresponding component. The K-V store for each service components is created during deployment. DCAE platform creates and updates the K-V pairs based on information provided as part of the control loop blueprint deployment, or through a notification/trigger received from other ONAP components such as Policy Framework and CLAMP. Either through periodically polling or proactive pushing, the DCAE components get the configuration updates in realtime and apply the configuration updates. DCAE Platform also offers dynamic template resolution for configuration parameters that are dynamic and only known by the DCAE platform, such as dynamically provisioned DMaaP topics. This approach standardizes component deployment and configuration management for DCAE service components in multi-site deployment.
+The DCAE Service components include all the microservices - collectors, analytics, and event processors - that support the active data flows and processing required by ONAP use cases. These Service components are the functional entities that realize the various
+collection and analytics needs: collectors for data acquisition, event processors for data standardization, analytics that assess collected data, and auxiliary microservices that assist automated closed-loop flows.
+
+The DCAE architecture after the Helm transformation is more flexible and microservice oriented, and supports model-based component design and deployment through DCAE-MOD. With the migration to Helm, each DCAE microservice can also be deployed independently, with its dependencies captured in its own Helm chart.
+
+Prior to the Jakarta release, DCAE components relied on Consul's distributed K-V store to manage and store component configuration. With the Jakarta release, the Consul dependency has been removed completely across all DCAE service components.
+All microservice configuration is resolved through files mounted via a ConfigMap created as part of the
+dcae-services Helm chart deployment.
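+
+As a minimal sketch of this mechanism (key names and mount path are
+illustrative assumptions, not taken from a specific chart), the
+``applicationConfig`` map in a component chart's ``values.yaml`` is rendered
+into a ConfigMap and mounted as a file that the microservice reads at startup:
+
+.. code-block:: yaml
+
+    # values.yaml of a hypothetical dcaegen2-services component chart
+    applicationConfig:
+      streams_publishes:
+        ves-output:
+          topic: unauthenticated.VES_OUTPUT
+
+    # The chart templates render this map into a ConfigMap that is mounted
+    # into the pod, e.g. as /app-config/application_config.yaml.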
DCAE Components
---------------
-The following lists the components included in ONAP DCAE . All DCAE components are offered as Docker containers. Following ONAP level deployment methods, these components can be deployed as Kubernetes Deployments and Services.
-
-- DCAE Platform
- - Core Platform
- - Cloudify Manager: TOSCA model executor. Materializes TOSCA models of control loop, or Blueprints, into properly configured and managed virtual DCAE functional components.
- - Plugins (K8S, Dmaap, Policy, Clamp, Postgres)
- - Extended Platform
- - Configuration Binding Service: Agent for service component configuration fetching; providing configuration parameter resolution.
- - Deployment Handler: API for triggering control loop deployment based on control loop's TOSCA model.
- - Policy Handler: Handler for fetching policy updates from Policy engine; and updating the configuration policies of KV entries in Consul cluster KV store for DCAE components.
- - Service Change Handler: Handler for interfacing with SDC; receiving new TOSCA models; and storing them in DCAE's own inventory.
- - DCAE Inventory-API: API for DCAE's TOSCA model store.
- - VES OpenApi Manager: Optional validator of VES_EVENT type artifacts executed during Service distributions.
- - Platform services
- - Consul: Distributed service discovery service and KV store.
- - Postgres Database: DCAE's TOSCA model store.
- - Redis Database: DCAE's transactional state store, used by TCA for supporting persistence and seamless scaling.
+The following lists the components included in ONAP DCAE. All DCAE components are offered as Docker containers. Following ONAP level deployment methods, these components can be deployed as Kubernetes Deployments and Services.
- DCAE Services
- Collectors
@@ -68,6 +48,8 @@ The following lists the components included in ONAP DCAE . All DCAE components
- BBS-EventProcessor Service
 - PM Subscription Handler
- DataLake Handlers (DL-Admin, DL-Feeder, DES)
+ - Misc Services
+ - VES OpenApi Manager (Optional validator of VES_EVENT type artifacts executed during Service distributions)
The figure below shows the DCAE architecture and how the components work with each other. The Platform/controller components are statically deployed, while the service components can be deployed either statically or dynamically (via CLAMP).
@@ -76,20 +58,29 @@ The figure below shows the DCAE architecture and how the components work with ea
The following diagram has been created on https://app.diagrams.net/. There is an editable version of the diagram
in the repository under the path docs/sections/images/architecture_diagram. Import this file into that page to edit the diagram.
-.. image:: images/R8_architecture_diagram.png
+.. image:: images/R10_architecture_diagram.png
Deployment Scenarios
--------------------
-Because DCAE service components are deployed on-demand following the control loop needs for managing ONAP deployed services, DCAE must support dynamic and on-demand deployment of service components based on ONAP control loop demands. This is why all other ONAP components are launched from the ONAP level method, DCAE only deploys a subset of its components during this ONAP deployment process and rest of DCAE components will be deployed on-demand based on usecase needs triggered by control loop request originated from CLAMP, or even by operator manually invoking DCAE's deployment API call.
+Because DCAE service components are deployed on demand to meet the control loop needs of ONAP-deployed services, DCAE must
+support dynamic, on-demand deployment of service components based on ONAP control loop demands.
-ONAP supports deployment through OOM Helm Chart currently (Heat deployment support is discontinued since R3). Hence all DCAE Platform components are deployed via Helm charts - this includes Cloudify Manager, ConfigBinding service, ServiceChange Handler, Policy Handler, Dashboard and Inventory, each with corresponding Helm charts under OOM (https://git.onap.org/oom/tree/kubernetes/dcaegen2/components). Once DCAE platform components are up and running, rest of DCAE service components required for ONAP flow are deployed via bootstrap POD, which invokes Cloudify Manager API with Blueprints for various DCAE components that are needed for the built-in collections and control loops flow support.
-
-To keep the ONAP footprint minimal, only minimal set of MS (required for ONAP Integration usecases) are deployed via bootstrap pod. Rest of service blueprints are available for operator to deploy on-demand as required.
+With the DCAE transformation to Helm completed in the Jakarta/R10 release, deployment of all DCAE components is supported only via Helm.
+Charts for individual microservices are available under the **dcaegen2-services** directory of the OOM project
+(https://git.onap.org/oom/tree/kubernetes/dcaegen2-services/components). To keep the ONAP footprint minimal, only a minimal set of
+microservices (required for ONAP integration use cases) is enabled by default in an ONAP/DCAE deployment; this includes four DCAE
+services (HV-VES collector, VES collector, PNF Registration Handler, and TCA (Gen2) analytics service).
More details of the DCAE deployment can be found under the Installation section.
+Architectural Reference
+-----------------------
+
+ - `ARC DCAE Component Description <https://wiki.onap.org/display/DW/ARC+DCAE+Component+Description+-+Jakarta-R10>`_
+ - `R10 M2 ARC Proposal <https://wiki.onap.org/display/DW/DCAE+R10+M2+Architecture+Review>`_
+
Usage Scenarios
---------------
@@ -106,13 +97,18 @@ For ONAP DCAE participates in the following use cases.
- CCVPN : RestConf Collector, Holmes
-- BBS : VES Collector, PRH, BBS-Event Processor, VES-Mapper, RESTConf Collector
+- PNF Registration: VES Collector, PRH
-- 5G Bulk PM : DataFile Collector, PM-Mapper, HV-VES
+- 5G Bulk PM : DataFile Collector, PM-Mapper, HV-VES, PMSH
- 5G OOF SON: VES collector, SON-Handler
- 5G E2E Network Slicing: VES collector, Slice Analysis, DES, PM-Mapper, DFC, Datalake feeder
+
+- IBN/CCVPN : VES collector, Slice Analysis, DES, Datalake feeder
-In addition, DCAE supports on-demand deployment and configuration of service components via CLAMP. In such case CLAMP invokes the deployment and configuration of additional TCA instances.
+DCAE supports on-demand deployment and configuration of all its microservices via Helm charts. As components can also be onboarded
+through MOD, the flow output is distributed as Helm charts, which operators can likewise install on demand.
+
+The Policy/CLAMP K8s participant is another ONAP client that can trigger deployment of DCAE microservice charts.
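+
+A hypothetical on-demand installation of a single microservice chart (release
+name and chart path are illustrative) could look like:
+
+.. code-block:: bash
+
+    # install one DCAE microservice independently of the rest of ONAP/DCAE
+    helm install dev-dcae-ves-mapper oom/kubernetes/dcaegen2-services/components/dcae-ves-mapper \
+        --namespace onap -f my-overrides.yaml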
diff --git a/docs/sections/images/R10_architecture_diagram.png b/docs/sections/images/R10_architecture_diagram.png
new file mode 100644
index 00000000..c862abd1
--- /dev/null
+++ b/docs/sections/images/R10_architecture_diagram.png
Binary files differ
diff --git a/docs/sections/images/R9_architecture_diagram.png b/docs/sections/images/R9_architecture_diagram.png
new file mode 100644
index 00000000..0f6bdc1c
--- /dev/null
+++ b/docs/sections/images/R9_architecture_diagram.png
Binary files differ
diff --git a/docs/sections/installation_oom.rst b/docs/sections/installation_oom.rst
index f71675bf..41cf34ab 100644
--- a/docs/sections/installation_oom.rst
+++ b/docs/sections/installation_oom.rst
@@ -1,8 +1,8 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-DCAE Deployment (using Helm and Cloudify)
-=========================================
+DCAE Deployment (using Helm)
+============================
This document describes the details of the Helm chart based deployment process for ONAP and how DCAE is deployed through this process.
@@ -21,18 +21,22 @@ At deployment time, with a single **helm deploy** command, Helm resolves all the
and invokes Kubernetes deployment operations for all the resources.
All ONAP Helm charts are organized under the **kubernetes** directory of the **OOM** project, where roughly each ONAP component occupies a subdirectory.
-DCAE platform components are deployed using Helm charts under the **dcaegen2** directory.
-With DCAE Transformation to Helm in Istabul, all DCAE components are supported for both helm and Cloudify/Blueprint deployments. Charts for individual MS are available under **dcaegen2-services** directory under OOM project (https://git.onap.org/oom/tree/kubernetes/dcaegen2-services/components). With ONAP deployment, four DCAE services (HV VES collector, VES collector, PNF Registration Handler, and TCA (Gen2) analytics service) are bootstrapped via Helm charts.
-Other DCAE Services are deployed on-demand, after ONAP/DCAE installation, through Cloudify Blueprints or helm-charts. For on-demand helm chart, refer to steps described in :ref:`Helm install/upgrade section <dcae-service-deployment>`.
-Operators can deploy on-demand other MS required for their usecases also via Cloudify as described in :doc:`On-demand MS Installation <./installation_MS_ondemand>`.
+With the DCAE transformation to Helm completed in the Jakarta/R10 release, deployment of all DCAE components is supported only via Helm. Charts for individual microservices are available under the **dcaegen2-services** directory of the OOM project (https://git.onap.org/oom/tree/kubernetes/dcaegen2-services/components). With an ONAP deployment, four DCAE services (HV-VES collector, VES collector, PNF Registration Handler, and TCA (Gen2) analytics service) are bootstrapped via Helm charts.
+
+Other DCAE services can be deployed on demand via their independent Helm charts; refer to the steps described in the :ref:`Helm install/upgrade section <dcae-service-deployment>`.
+
+
+.. note::
+   Deployment of the DCAE platform components is optionally available through Helm charts under the **dcaegen2** directory; however, this mode is not supported with the Jakarta release. These charts will be removed in a subsequent release.
+
DCAE Chart Organization
-----------------------
-Following Helm conventions, the DCAE Helm chart directory (``oom/kubernetes/dcaegen2``) consists of the following files and subdirectories:
+Following Helm conventions, the DCAE Helm chart directories (``oom/kubernetes/dcaegen2-services`` and ``oom/kubernetes/dcaegen2``) consist of the following files and subdirectories:
* ``Chart.yaml``: metadata.
* ``requirements.yaml``: dependency charts.
@@ -41,19 +45,6 @@ Following Helm conventions, the DCAE Helm chart directory (``oom/kubernetes/dcae
* ``Makefile``: make file to build DCAE charts
* ``components``: subdirectory for DCAE sub-charts.
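+
+An abbreviated sketch of the resulting layout (consult the OOM repository for
+the actual tree):
+
+.. code-block:: text
+
+    oom/kubernetes/dcaegen2-services/
+    |-- Chart.yaml
+    |-- requirements.yaml
+    |-- values.yaml
+    |-- Makefile
+    |-- common/         # shared templates used by the component sub-charts
+    `-- components/
+        |-- dcae-ves-collector/
+        |-- dcae-prh/
+        `-- ...         # one sub-chart per microservice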
-The dcaegen2 chart has the following sub-charts:
-
-* ``dcae-bootstrap``: deploys the DCAE bootstrap service that performs some DCAE initialization and deploys additional DCAE components.
-* ``dcae-cloudify-manager``: deploys the DCAE Cloudify Manager instance.
-* ``dcae-config-binding-service``: deploys the DCAE config binding service.
-* ``dcae-deployment-handler``: deploys the DCAE deployment handler service.
-* ``dcae-healthcheck``: deploys the DCAE healthcheck service that provides an API to check the health of all DCAE components.
-* ``dcae-policy-handler``: deploys the DCAE policy handler service.
-* ``dcae-redis``: deploys the DCAE Redis cluster.
-* ``dcae-dashboard``: deploys the DCAE Dashboard for managing DCAE microservices deployments
-* ``dcae-servicechange-handler``: deploys the DCAE service change handler service.
-* ``dcae-inventory-api``: deploys the DCAE inventory API service.
-* ``dcae-ves-openapi-manager``: deploys the DCAE service validator of VES_EVENT type artifacts from distributed services.
The dcaegen2-services chart has the following sub-charts:
@@ -79,6 +70,7 @@ The dcaegen2-services chart has the following sub-charts:
* ``dcae-snmptrap-collector``: deploys the DCAE SNMPTRAP collector service.
* ``dcae-son-handler``: deploys the DCAE SON-Handler microservice.
* ``dcae-ves-mapper``: deploys the DCAE VES Mapper microservice.
+* ``dcae-ves-openapi-manager``: deploys the DCAE service validator of VES_EVENT type artifacts from distributed services.
The dcaegen2-services sub-charts depend on a set of common templates, found under the ``common`` subdirectory under ``dcaegen2-services``.
@@ -86,64 +78,48 @@ The dcaegen2-services sub-charts depend on a set of common templates, found unde
Information about using the common templates to deploy a microservice can be
found in :doc:`Using Helm to deploy DCAE Microservices <./dcaeservice_helm_template>`.
+The dcaegen2 chart has the following sub-charts:
+
+* ``dcae-bootstrap``: deploys the DCAE bootstrap service that performs some DCAE initialization and deploys additional DCAE components.
+* ``dcae-cloudify-manager``: deploys the DCAE Cloudify Manager instance.
+* ``dcae-config-binding-service``: deploys the DCAE config binding service.
+* ``dcae-deployment-handler``: deploys the DCAE deployment handler service.
+* ``dcae-healthcheck``: deploys the DCAE healthcheck service that provides an API to check the health of all DCAE components.
+* ``dcae-policy-handler``: deploys the DCAE policy handler service.
+* ``dcae-redis``: deploys the DCAE Redis cluster.
+* ``dcae-dashboard``: deploys the DCAE Dashboard for managing DCAE microservice deployments.
+* ``dcae-servicechange-handler``: deploys the DCAE service change handler service.
+* ``dcae-inventory-api``: deploys the DCAE inventory API service.
+
+These components are disabled by default under ONAP for the Jakarta release, and the charts will be removed in the next release.
+
DCAE Deployment
---------------
At deployment time for ONAP, when the **helm deploy** command is executed,
-DCAE resources defined within the subcharts - "dcaegen2" above are deployed
-along with subset of DCAE Microservices (based on override file configuration
-defined in `values.yaml <https://git.onap.org/oom/tree/kubernetes/dcaegen2-services/values.yaml>`_
-
+only the DCAE resources defined within the "dcaegen2-services" subcharts above are deployed
+(based on the override file configuration defined in `values.yaml <https://git.onap.org/oom/tree/kubernetes/dcaegen2-services/values.yaml>`_).
+
These include:
-* DCAE bootstrap service
-* DCAE healthcheck service
-* DCAE platform components:
-
- * Cloudify Manager
- * Config binding service
- * Deployment handler
- * Policy handler
- * Service change handler
- * Inventory API service
- * Inventory postgres database service (launched as a dependency of the inventory API service)
- * DCAE postgres database service (launched as a dependency of the bootstrap service)
- * DCAE Mongo database service (launched as a dependency of the bootstrap service)
- * VES OpenAPI Manager
-
* DCAE Service components:
* VES Collector
* HV-VES Collector
* PNF-Registration Handler Service
* Threshold Crossing Analysis (TCA-gen2)
+* DCAE-Services healthcheck
+* VES OpenAPI Manager
Some of the DCAE subcharts include an initContainer that checks to see if
other services that they need in order to run have become ready. The installation
of these subcharts will pause until the needed services are available.
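+
+One way to watch the deployment progress (analogous to the pod queries used
+elsewhere in this document) is:
+
+.. code-block:: bash
+
+    # list DCAE pods and their readiness state
+    kubectl -n onap get pods | grep dcae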
-In addition, DCAE operations depends on a Consul server cluster.
-For ONAP OOM deployment, the Consul cluster is provided as a shared
-resource. Its charts are defined under the ``oom/kubernetes/consul``
-directory, not as part of the DCAE chart hierarchy.
-
-With Istanbul release, DCAE bootstrapped Microservice deployment are managed completely under Helm. The Cloudify
-Bootstrap container preloads the microservice blueprints into DCAE Inventory, thereby making them available
-for On-Demand deployment support (trigger from CLAMP or external projects).
-
-The dcae-bootstrap service has a number of prerequisites because the subsequently deployed DCAE components depends on a number of resources having entered their normal operation state. DCAE bootstrap job will not start before these resources are ready. They are:
-
- * dcae-cloudify-manager
- * consul-server
- * msb-discovery
- * kube2msb
- * dcae-config-binding-service
- * dcae-db
- * dcae-mongodb
- * dcae-inventory-api
+Since the Istanbul release, DCAE bootstrapped microservice deployments are managed completely under Helm.
Additionally, the tls-init-container invoked during component deployment relies on AAF to generate the required certificates; hence AAF
-must be enabled under OOM deployment configuration.
+must be enabled under the OOM deployment configuration.
+As the majority of DCAE services rely on DMAAP (MR and DR) interfaces, ONAP/DMAAP must also be enabled under the OOM deployment configuration.
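+
+In an OOM override file this corresponds to keeping both prerequisites enabled
+(top-level flag names as used by the ONAP umbrella chart):
+
+.. code-block:: yaml
+
+    # onap-overrides.yaml (excerpt)
+    aaf:
+      enabled: true
+    dmaap:
+      enabled: true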
DCAE Configuration
------------------
@@ -159,64 +135,15 @@ Deployment time configuration of DCAE components are defined in several places.
 * In a Helm chart hierarchy, values defined in values.yaml files at a higher level supersede values defined in values.yaml files at a lower level;
 * Values supplied on the Helm command line supersede values defined in any values.yaml file.
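+
+A sketch of this precedence, assuming a hypothetical ``logLevel`` parameter:
+
+.. code-block:: bash
+
+    # component chart values.yaml:  logLevel: INFO
+    # parent chart values.yaml:     logLevel: WARN   (supersedes the component default)
+    # a command-line value supersedes both:
+    helm upgrade dev-dcaegen2-services local/dcaegen2-services \
+        --namespace onap --reuse-values --set dcae-prh.logLevel=DEBUG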
-In addition, for DCAE components deployed through Cloudify Manager blueprints, their configuration parameters are defined in the following places:
-
- * The blueprint files can contain static values for configuration parameters;
- * The blueprint files are defined under the ``blueprints`` directory of the ``dcaegen2/platform/blueprints`` repo, named with "k8s" prefix.
- * The blueprint files can specify input parameters and the values of these parameters will be used for configuring parameters in Blueprints. The values for these input parameters can be supplied in several ways as listed below in the order of precedence (low to high):
- * The blueprint files can define default values for the input parameters;
- * The blueprint input files can contain static values for input parameters of blueprints. These input files are provided as config resources under the dcae-bootstrap chart;
- * The blueprint input files may contain Helm templates, which are resolved into actual deployment time values following the rules for Helm values.
-
-
-Now we walk through an example, how to configure the Docker image for the DCAE VESCollector, which is deployed by Cloudify Manager.
-
-(*Note: Beginning with the Istanbul release, VESCollector is no longer deployed using Cloudify Manager during bootstrap. However, the example is still
-useful for understanding how to deploy other components using a Cloudify blueprint.*)
-
-In the `k8s-ves.yaml <https://git.onap.org/dcaegen2/platform/blueprints/tree/blueprints/k8s-ves.yaml>`_ blueprint, the Docker image to use is defined as an input parameter with a default value:
-
-.. code-block:: yaml
-
- tag_version:
- type: string
- default: "nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.5.4"
-
-The corresponding input file, ``https://git.onap.org/oom/tree/kubernetes/dcaegen2/components/dcae-bootstrap/resources/inputs/k8s-ves-inputs-tls.yaml``,
-it is defined again as:
-
-.. code-block:: yaml
- {{ if .Values.componentImages.ves }}
- tag_version: {{ include "common.repository" . }}/{{ .Values.componentImages.ves }}
- {{ end }}
-
-
-Thus, when ``common.repository`` and ``componentImages.ves`` are defined in the ``values.yaml`` files,
-their values will be plugged in here and the resulting ``tag_version`` value
-will be passed to the blueprint as the Docker image tag to use instead of the default value in the blueprint.
-
-The ``componentImages.ves`` value is provided in the ``oom/kubernetes/dcaegen2/charts/dcae-bootstrap/values.yaml`` file:
-
-.. code-block:: yaml
-
- componentImages:
- ves: onap/org.onap.dcaegen2.collectors.ves.vescollector:1.5.4
-
-
-The final result is that when DCAE bootstrap calls Cloudify Manager to deploy the DCAE VES collector, the 1.5.4 image will be deployed.
-
.. _dcae-service-deployment:
On-demand deployment/upgrade through Helm
-----------------------------------------
-Under DCAE Transformation to Helm, all DCAE components has been delivered as helm charts under
+Under the DCAE transformation to Helm, all DCAE components have been delivered as Helm charts under the
OOM repository (https://git.onap.org/oom/tree/kubernetes/dcaegen2-services).
-Blueprint deployment is also available to support regression usecases; ``Istanbul will be final release where
-Cloudify blueprint for components/microservices will be supported.``
-
All DCAE component charts follow the standard Helm structure. Each microservice chart has a predefined configuration defined under
``applicationConfig`` which can be modified or overridden at deployment time.
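+
+For example, a deployment-time override file (the key shown is an illustrative
+assumption) can replace a predefined value and be passed with ``-f`` on
+install or upgrade:
+
+.. code-block:: yaml
+
+    # my-overrides.yaml -- passed via: helm upgrade ... -f my-overrides.yaml
+    applicationConfig:
+      collector.schema.checkflag: 0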
@@ -306,15 +233,7 @@ Below is a table of default hostnames and ports for DCAE component service endpo
HV-VES dcae-hv-ves-collector:6061 dcae-hv-ves-collector.onap:30222
TCA-Gen2 dcae-tcagen2:9091 NA
PRH dcae-prh:8100 NA
- Policy Handler policy-handler:25577 NA
- Deployment Handler deployment-handler:8443 NA
- Inventory inventory:8080 NA
- Config binding config-binding-service:10000/10001 NA
- DCAE Healthcheck dcae-healthcheck:80 NA
DCAE MS Healthcheck dcae-ms-healthcheck:8080 NA
- Cloudify Manager dcae-cloudify-manager:80 NA
- DCAE Dashboard dashboard:8443 dashboard:30418
- DCAE mongo dcae-mongo-read:27017 NA
=================== ================================== =======================================================
In addition, a number of ONAP service endpoints that are used by DCAE components are listed as follows
@@ -323,15 +242,10 @@ for reference by DCAE developers and testers:
==================== ============================ ================================
Component Cluster Internal (host:port) Cluster external (svc_name:port)
==================== ============================ ================================
- Consul Server consul-server-ui:8500 NA
Robot robot:88 robot:30209 TCP
Message router message-router:3904 NA
Message router message-router:3905 message-router-external:30226
Message router Kafka message-router-kafka:9092 NA
- MSB Discovery msb-discovery:10081 msb-discovery:30281
- Logging log-kibana:5601 log-kibana:30253
- AAI aai:8080 aai:30232
- AAI aai:8443 aai:30233
==================== ============================ ================================
Uninstalling DCAE
@@ -341,7 +255,7 @@ All of the DCAE components deployed using the OOM Helm charts will be
deleted by the ``helm undeploy`` command. This command can be used to
uninstall all of ONAP by undeploying the top-level Helm release that was
created by the ``helm deploy`` command. The command can also be used to
-uninstall just DCAE, by having the command undeploy the `top_level_release_name`-``dcaegen2``
+uninstall just DCAE, by having the command undeploy the `top_level_release_name`-``dcaegen2-services``
Helm sub-release.
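+
+Assuming the top-level release was named ``dev``, the command would be:
+
+.. code-block:: bash
+
+    # remove only the DCAE services sub-release
+    helm undeploy dev-dcaegen2-services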
Helm will undeploy only the components that were originally deployed using
@@ -353,22 +267,6 @@ used for the deployment (typically ``onap``) after running the undeploy
operation. Deleting the namespace will get rid of any remaining resources
in the namespace, including the components deployed by Cloudify Manager.
-When uninstalling DCAE alone, deleting the namespace would delete the
-rest of ONAP as well. To delete DCAE alone, and to make sure all of the
-DCAE components deployed by Cloudify Manager are uninstalled:
-
-* Find the Cloudify Manager pod identifier, using a command like:
-
- ``kubectl -n onap get pods | grep dcae-cloudify-manager``
-* Execute the DCAE cleanup script on the Cloudify Manager pod, using a command like:
-
- ``kubectl -n onap exec`` `cloudify-manager-pod-id` ``-- /scripts/dcae-cleanup.sh``
-* Finally, run ``helm undeploy`` against the DCAE Helm subrelease
-
-The DCAE cleanup script uses Cloudify Manager and the DCAE Kubernetes
-plugin to instruct Kubernetes to delete the components deployed by Cloudify
-Manager. This includes the components deployed when the DCAE bootstrap
-service ran and any components deployed after bootstrap.
To undeploy the DCAE services deployed via Helm (the hv-ves-collector, ves-collector, tcagen2,
and prh), use the ``helm undeploy`` command against the `top_level_release_name`-``dcaegen2-services``
diff --git a/docs/sections/services/snmptrap/installation.rst b/docs/sections/services/snmptrap/installation.rst
index d134a895..9c549948 100644
--- a/docs/sections/services/snmptrap/installation.rst
+++ b/docs/sections/services/snmptrap/installation.rst
@@ -24,7 +24,7 @@ configuration assets to instantiated containers as needed.
Also required is a working DMAAP/MR environment. trapd
publishes traps to DMAAP/MR as JSON messages and expects the host
-resources and publishing credentials to be included in the *Config Binding Service*
+resources and publishing credentials to be included in the *Config Binding Service*
config.
Installation
@@ -33,7 +33,7 @@ Installation
The following command will download the latest trapd container from
nexus and launch it in the container named "trapd":
- ``docker run --detach -t --rm -p 162:6162/udp -P --name=trapd nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.snmptrap:2.0.3 ./bin/snmptrapd.sh start``
+ ``docker run --detach -t --rm -p 162:6162/udp -P --name=trapd nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.snmptrap:2.0.6 ./bin/snmptrapd.sh start``
Running an instance of **trapd** will result in arriving traps being published
to the topic specified by the Config Binding Service.