Diffstat (limited to 'docs/sections/services')
-rw-r--r--   docs/sections/services/datalake-handler/index.rst                    2
-rw-r--r--   docs/sections/services/datalake-handler/installation.rst            18
-rw-r--r--   docs/sections/services/datalake-handler/overview.rst                 8
-rw-r--r--   docs/sections/services/datalake-handler/userguide.rst               26
-rw-r--r--   docs/sections/services/dfc/certificates.rst                          1
-rw-r--r--   docs/sections/services/heartbeat-ms/build_setup.rst                  7
-rw-r--r--   docs/sections/services/heartbeat-ms/testprocedure.rst                2
-rw-r--r--   docs/sections/services/mapper/SampleSnmpTrapConversion.rst           2
-rw-r--r--   docs/sections/services/mapper/delivery.rst                           4
-rw-r--r--   docs/sections/services/mapper/flow.rst                              39
-rw-r--r--   docs/sections/services/mapper/installation.rst                       9
-rw-r--r--   docs/sections/services/mapper/mappingfile.rst                        5
-rw-r--r--   docs/sections/services/mapper/troubleshooting.rst                    4
-rw-r--r--   docs/sections/services/pm-mapper/configuration.rst                   2
-rw-r--r--   docs/sections/services/snmptrap/offeredapis.rst                      2
-rw-r--r--   docs/sections/services/snmptrap/release-notes.rst                   19
-rw-r--r--   docs/sections/services/son-handler/son_handler_troubleshooting.rst   4
-rw-r--r--   docs/sections/services/tcagen2-docker/installation.rst               6
-rw-r--r--   docs/sections/services/ves-http/installation.rst                     3
-rw-r--r--   docs/sections/services/ves-hv/index.rst                              2
20 files changed, 84 insertions, 81 deletions
diff --git a/docs/sections/services/datalake-handler/index.rst b/docs/sections/services/datalake-handler/index.rst
index 56ada5f2..3b445a55 100644
--- a/docs/sections/services/datalake-handler/index.rst
+++ b/docs/sections/services/datalake-handler/index.rst
@@ -3,7 +3,7 @@
DataLake-Handler MS
-==============
+===================
**DataLake-Handler MS** is a software component of ONAP that can systematically persist the events from DMaaP into supported Big Data storage systems.
It has an Admin UI, where a system administrator configures which Topics are to be monitored and to which data storage the data is persisted.
diff --git a/docs/sections/services/datalake-handler/installation.rst b/docs/sections/services/datalake-handler/installation.rst
index 5d8b3341..16294b98 100644
--- a/docs/sections/services/datalake-handler/installation.rst
+++ b/docs/sections/services/datalake-handler/installation.rst
@@ -3,17 +3,17 @@ Deployment Steps
DL-handler consists of two pods: the feeder and the admin UI. It can be deployed using a Cloudify blueprint through the DCAE Cloudify Manager. The following steps guide you through launching DataLake via Cloudify Manager.
Pre-requisite
-----------------
+-------------
- Make sure mariadb-galera from OOM is properly deployed and functional.
- An external database, such as Elasticsearch or MongoDB, is deployed.
After DataLake is deployed, the admin UI can be used to configure the sink database address and credentials.
Log-in to the DCAE Bootstrap POD
----------------------------------------------------
+--------------------------------
First, we should find the bootstrap pod name through the following command and make sure that the DCAE Cloudify Manager is properly deployed.
- .. image :: .images/bootstrap-pod.png
+ .. image :: ./images/bootstrap-pod.png
Login to the DCAE bootstrap pod through the following command.
.. code-block :: bash
@@ -38,17 +38,17 @@ After validating, we can start to proceed blueprints uploading.
Verify Uploaded Blueprints
--------------------------
-Using "cft blueprint list" to varify your work.
+Using "cfy blueprint list" to verify your work.
.. code-block :: bash
#cfy blueprint list
The following returned message shows that the blueprints have been correctly uploaded.
- .. image :: ./imagesblueprint-list.png
+ .. image :: ./images/blueprint-list.png
Verify Plugin Versions
-------------------------------------------------------------------------------
+----------------------
If the version of the plugin used is different, update the blueprint import to match.
.. code-block :: bash
@@ -74,11 +74,13 @@ Next, we are going to launch the datalake.
Verify the Deployment Result
-----------------------------
The following command can be used to list the datalake logs.
+
.. code-block :: bash
+
#kubectl logs <datalake-pod> -n onap
The output should look like this.
- .. image :: ./feeder-log.png
+ .. image :: ./images/feeder-log.png
If you find any Java exceptions in the log, make sure that the external database and the DataLake configuration are properly set up.
The Admin UI can be used to configure the external database settings.
@@ -97,4 +99,4 @@ Delete Blueprint
.. code-block :: bash
#cfy blueprints delete datalake-feeder
- #cfy blueprints deltet datalake-admin-ui
+ #cfy blueprints delete datalake-admin-ui
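
For convenience, the verification steps above can be run together from the DCAE bootstrap pod. This is only a sketch built from the commands already shown on this page; the grep patterns and the pod-name placeholder are assumptions to be adapted to your deployment.

.. code-block :: bash

    # both DataLake blueprints should be listed after uploading
    cfy blueprint list | grep datalake

    # locate the feeder pod (name pattern is an assumption) and scan its log for Java exceptions
    kubectl get pods -n onap | grep datalake
    kubectl logs <datalake-feeder-pod> -n onap | grep -i exception
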
diff --git a/docs/sections/services/datalake-handler/overview.rst b/docs/sections/services/datalake-handler/overview.rst
index 51dab104..09e41a5b 100644
--- a/docs/sections/services/datalake-handler/overview.rst
+++ b/docs/sections/services/datalake-handler/overview.rst
@@ -30,6 +30,7 @@ Note that not all data storage systems in the picture are supported. In R6, the
- Couchbase
- Elasticsearch and Kibana
- HDFS
+
Depending on demands, new systems may be added to the supported list. In the following we use the term database for the storage,
even though HDFS is a file system (but with simple settings, it can be treated as a database, e.g. Hive).
@@ -61,12 +62,9 @@ Features
- Read data directly from Kafka for performance.
  - Support for pluggable databases. To add a new database, we only need to implement its corresponding service.
- - Support REST API for inter-component communications. Besides managing DatAlake settings in MariaDB,
- Admin UI also use this API to start/stop Feeder, query Feeder status and statistics.
+  - Support REST API for inter-component communications. Besides managing DataLake settings in MariaDB, the Admin UI also uses this API to start/stop the Feeder and to query Feeder status and statistics.
- Use MariaDB to store settings.
- - Support data processing features. Before persisting data, data can be massaged in Feeder.
- Currently two features are implemented: Correlate Cleared Message (in org.onap.datalake.feeder.service.db.ElasticsearchService)
- and Flatten JSON Array (org.onap.datalake.feeder.service.StoreService).
+ - Support data processing features. Before persisting data, data can be massaged in Feeder. Currently two features are implemented: Correlate Cleared Message (in org.onap.datalake.feeder.service.db.ElasticsearchService) and Flatten JSON Array (org.onap.datalake.feeder.service.StoreService).
  - Connections to Kafka and DBs are secured
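
To make the Flatten JSON Array feature concrete, the idea is that a message whose body is a JSON array is split into one document per element before persistence. The snippet below is only a conceptual illustration using jq, not the Feeder's actual implementation.

.. code-block:: bash

    # one incoming message carrying an array of two events ...
    echo '[{"event":"a"},{"event":"b"}]' | jq -c '.[]'
    # ... becomes two separate JSON documents:
    # {"event":"a"}
    # {"event":"b"}
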
diff --git a/docs/sections/services/datalake-handler/userguide.rst b/docs/sections/services/datalake-handler/userguide.rst
index b3be9491..f1de54d0 100644
--- a/docs/sections/services/datalake-handler/userguide.rst
+++ b/docs/sections/services/datalake-handler/userguide.rst
@@ -1,8 +1,8 @@
Admin UI User Guide
----------------------
+-------------------
Introduction
-~~~~~~~~
+~~~~~~~~~~~~
DataLake Admin UI aims to provide a user-friendly dashboard to easily monitor and
manage DataLake configurations for the involved components, ONAP topics, databases,
and 3rd-party tools. The Admin UI portal can be accessed
@@ -10,8 +10,9 @@ via http://datalake-admin-ui:30479
DataLake Feeder Management
-******************
+**************************
.. image:: ./images/adminui-feeder.png
+
Click the "DataLake Feeder" on the menu bar, and the dashboard will show
the overview DataLake Feeder information, such as the numbers of topics.
Also, you can enable or disable DataLake Feeder process backend process
@@ -19,12 +20,14 @@ by using the toggle switch.
Kafka Management
-******************
+****************
.. image:: ./images/adminui-kafka.png
+
Click the "Kafka" on the menu bar, and it provides the kafka resource settings
including add, modify and delete in the page to fulfill your management demand.
.. image:: ./images/adminui-kafka-edit.png
+
You can modify a Kafka resource by clicking its card,
and add a new Kafka resource by clicking the plus button.
Then you will need to fill in the required information, such as an identifying name,
@@ -32,11 +35,12 @@ message router address and zookeeper address, and so on to build it up.
Topics Management
-******************
+*****************
.. image:: ./images/adminui-topics.png
.. image:: ./images/adminui-topic-edit1.png
.. image:: ./images/adminui-topic-edit2.png
.. image:: ./images/adminui-topic-edit3.png
+
The Topic page lists all the topics that have been configured
by topic management. You can edit a topic's settings by double-clicking the specific row.
The settings include the DataLake Feeder status - whether to catch the topic or not.
@@ -45,37 +49,41 @@ And choose one or more Kafka items as topic resource
and define the databases that store the topic data.
.. image:: ./images/adminui-topic-config.png
+
For the default configuration of Topics, you can click the "Default configurations" button.
When you add a new topic, these configurations will be filled into the form automatically.
.. image:: ./images/adminui-topic-new.png
+
To add a new topic for the DataLake Feeder, you can click the "plus icon" button
to have its data captured into the 3rd-party database.
Please note that only topics that already exist in Kafka can be added.
Database Management
-******************
+*******************
.. image:: ./images/adminui-dbs.png
.. image:: ./images/adminui-dbs-edit.png
+
The Database Management page allows you to add, modify and delete the database resources
where messages from topics will be stored.
DataLake supports several databases, including Couchbase DB, Apache Druid, Elasticsearch, HDFS, and MongoDB.
3rd-Party Tools Management
-******************
+**************************
.. image:: ./images/adminui-tools.png
+
The Tools page allows you to manage the resources of 3rd-party tools for data visualization.
Currently, DataLake supports two tools: Kibana and Apache Superset.
3rd-Party Design Tools Management
-******************
+*********************************
.. image:: ./images/adminui-design.png
.. image:: ./images/adminui-design-edit.png
+
After setting up the 3rd-party tools, you can import templates in JSON, YAML or other formats
for data exploration, data visualization and dashboarding. DataLake supports Kibana dashboards,
Kibana searches, Kibana visualizations, Elasticsearch field mapping templates,
and the Apache Druid Kafka indexing service.
-
diff --git a/docs/sections/services/dfc/certificates.rst b/docs/sections/services/dfc/certificates.rst
index 2dc557b6..350cda63 100644
--- a/docs/sections/services/dfc/certificates.rst
+++ b/docs/sections/services/dfc/certificates.rst
@@ -1,5 +1,6 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
+
Certificates (From AAF)
=======================
DCAE service components will use common certificates generated from the AAF/test instance and made available during deployment of the DCAE TLS init container.
diff --git a/docs/sections/services/heartbeat-ms/build_setup.rst b/docs/sections/services/heartbeat-ms/build_setup.rst
index 6033affc..5df47234 100644
--- a/docs/sections/services/heartbeat-ms/build_setup.rst
+++ b/docs/sections/services/heartbeat-ms/build_setup.rst
@@ -95,7 +95,8 @@ CBS polling. The following environment variables are to be set.**
The sample consul KV is as below.
::
- http://10.12.6.50:8500/ui/#/dc1/kv/mvp-dcaegen2-heartbeat-static
+
+ http://10.12.6.50:8500/ui/#/dc1/kv/mvp-dcaegen2-heartbeat-static
Go to the above link and click on KEY/VALUE tab
@@ -164,18 +165,21 @@ CBS polling. The following environment variables are to be set.**
To check whether the image has been built, run the below command
::
+
    sudo docker images | grep heartbeat.test1
**Run the Docker container using the below command, which uses the environment file
mentioned in the above section.**
::
+
    sudo docker run -d --name hb1 --env-file env.list
    heartbeat.test1:latest
To check the logs, run below command
::
+
    sudo docker logs -f hb1
**To stop the Docker run**
@@ -198,6 +202,7 @@ mentioned in the above section.**
To run the Maven build, execute either of the following.
::
+
sudo mvn -s settings.xml deploy
OR
sudo mvn -s settings.xml -X deploy
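
The individual steps above can be combined into one short session. This is a sketch only: the Consul address and KV key are the sample values shown earlier on this page, and the build command assumes the Dockerfile is in the current directory.

.. code-block:: bash

    # read the sample static configuration from Consul (sample host from this page)
    curl -s "http://10.12.6.50:8500/v1/kv/mvp-dcaegen2-heartbeat-static?raw"

    # build the image, run it with the environment file, and follow the logs
    sudo docker build -t heartbeat.test1 .
    sudo docker run -d --name hb1 --env-file env.list heartbeat.test1:latest
    sudo docker logs -f hb1
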
diff --git a/docs/sections/services/heartbeat-ms/testprocedure.rst b/docs/sections/services/heartbeat-ms/testprocedure.rst
index a7c6f799..c312ee51 100644
--- a/docs/sections/services/heartbeat-ms/testprocedure.rst
+++ b/docs/sections/services/heartbeat-ms/testprocedure.rst
@@ -12,6 +12,7 @@ Login into postgres DB
Run the below commands to log into the postgres DB and connect to the HB microservice DB.
::
+
sudo su postgres
psql
\l hb_vnf
@@ -19,6 +20,7 @@ Run below commands to login into postgres DB and connect to HB Micro service DB.
Sample output is as below
::
+
ubuntu@r3-dcae:~$ sudo su postgres
postgres@r3-dcae:/home/ubuntu$ psql
psql (9.5.14)
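
The same check can also be done non-interactively. The database name ``hb_vnf`` comes from the listing above; the tables it contains depend on the deployed schema.

.. code-block:: bash

    sudo -u postgres psql -c '\l' | grep hb_vnf     # confirm the heartbeat DB exists
    sudo -u postgres psql -d hb_vnf -c '\dt'        # list its tables
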
diff --git a/docs/sections/services/mapper/SampleSnmpTrapConversion.rst b/docs/sections/services/mapper/SampleSnmpTrapConversion.rst
index 71f5718b..b6ba41e4 100644
--- a/docs/sections/services/mapper/SampleSnmpTrapConversion.rst
+++ b/docs/sections/services/mapper/SampleSnmpTrapConversion.rst
@@ -3,7 +3,7 @@
.. Copyright 2018 Tech Mahindra Ltd.
Sample Snmp trap Conversion:
-===========================
+============================
Following is the **Sample SNMP Trap** that will be received by the Universal VES Adapter from the SNMP Trap Collector:
diff --git a/docs/sections/services/mapper/delivery.rst b/docs/sections/services/mapper/delivery.rst
index 6cb2cf2f..3f667635 100644
--- a/docs/sections/services/mapper/delivery.rst
+++ b/docs/sections/services/mapper/delivery.rst
@@ -9,7 +9,7 @@ Mapper is delivered with **1 Docker container** having spring boot microservice,
| In the current release, the UniversalVesAdapter is integrated with DCAE's config binding service. On start, it fetches the initial configuration from CBS and uses it. It currently does not have the functionality to refresh configuration changes made in the Consul KV store.
Docker Containers
----------------
+-----------------
Docker images can be pulled from ONAP Nexus repository with below commands:
- ``docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:1.0.0-SNAPSHOT``
+ ``docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:latest``
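
A quick way to confirm the pull succeeded is to list the local images; the grep pattern simply matches the image name used above.

.. code-block:: bash

    docker pull nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:latest
    docker images | grep universalvesadaptor    # the image should now appear locally
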
diff --git a/docs/sections/services/mapper/flow.rst b/docs/sections/services/mapper/flow.rst
index 9fe3bfbb..ed05332c 100644
--- a/docs/sections/services/mapper/flow.rst
+++ b/docs/sections/services/mapper/flow.rst
@@ -2,38 +2,33 @@
.. http://creativecommons.org/licenses/by/4.0
.. Copyright 2018-2019 Tech Mahindra Ltd.
-============
-Flow for converting Rest Conf Collector notification
-============
-.. [1] RestConf Collector generates rcc-notication in JSON format and publishes it on DMaaP topic **unathenticated.DCAE_RCC_OUTPUT**
-.. [2] The Universal VES Adapter(UVA) microservice has subscribed to this DMaaP topic.
-.. [3] On receiving an event from DMaaP, the adapter uses the corresponding mapping file and converts received notification into the VES event. It uses the notification-id from the received notification to find the required mapping file.
-.. [4] Those notifications for which no mapping file is identified, a default mapping file is used with generic mappings to create the VES event.
-.. [5] The VES formatted Event will be then published on DMaaP topic **unauthenticated.VES_PNFREG_OUTPUT**.
+
+Flow for converting RestConf Collector notification
+===================================================
+[1] RestConf Collector generates rcc-notication in JSON format and publishes it on DMaaP topic **unathenticated.DCAE_RCC_OUTPUT**
+[2] The Universal VES Adapter(UVA) microservice has subscribed to this DMaaP topic.
+[3] On receiving an event from DMaaP, the adapter uses the corresponding mapping file and converts the received notification into the VES event. It uses the notification-id from the received notification to find the required mapping file.
+[4] For those notifications for which no mapping file is identified, a default mapping file with generic mappings is used to create the VES event.
+[5] The VES-formatted event will then be published on DMaaP topic **unauthenticated.VES_PNFREG_OUTPUT**.
.. image:: ./flow-rest-conf.png
- :height: 200px
- :width: 300 px
- :scale: 50 %
:alt: alternate text
:align: left
- ============
+
Flow for converting SNMP Collector notification
-============
-.. [1] VNF submits SNMP traps to the SNMP collector.
-.. [2] Collector converts the trap into JSON format and publishes it on DMaaP topic **unauthenticated.ONAP-COLLECTOR-SNMPTRAP**
-.. [3] The Universal VES Adapter(UVA) microservice has subscribed to this DMaaP topic.
-.. [4] On receiving an event from DMaaP, the adapter uses the corresponding mapping file and converts received event into the VES event. It uses the enterprise ID from the received event to find the required mapping file.
-.. [5] Those SNMP Traps for which no mapping file is identified, a default mapping file is used with generic mappings to create the VES event.
-.. [6] The VES formatted Event will be then published on DMaaP topic **unauthenticated.SEC_FAULT_OUTPUT**.
+===============================================
+
+[1] VNF submits SNMP traps to the SNMP collector.
+[2] Collector converts the trap into JSON format and publishes it on DMaaP topic **unauthenticated.ONAP-COLLECTOR-SNMPTRAP**
+[3] The Universal VES Adapter(UVA) microservice has subscribed to this DMaaP topic.
+[4] On receiving an event from DMaaP, the adapter uses the corresponding mapping file and converts the received event into the VES event. It uses the enterprise ID from the received event to find the required mapping file.
+[5] For those SNMP traps for which no mapping file is identified, a default mapping file with generic mappings is used to create the VES event.
+[6] The VES-formatted event will then be published on DMaaP topic **unauthenticated.SEC_FAULT_OUTPUT**.
.. image:: ./flow.png
- :height: 200px
- :width: 300 px
- :scale: 50 %
:alt: alternate text
:align: left \ No newline at end of file
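
To spot-check either flow end to end, the VES output topics named above can be read straight from the Message Router. This is a sketch only: the ``message-router:3904`` address matches the in-cluster service used elsewhere in these docs, and the consumer group/id (``g1``/``c1``) are arbitrary.

.. code-block:: bash

    # VES events produced by the RestConf flow
    curl -s "http://message-router:3904/events/unauthenticated.VES_PNFREG_OUTPUT/g1/c1?timeout=15000"

    # VES events produced by the SNMP flow
    curl -s "http://message-router:3904/events/unauthenticated.SEC_FAULT_OUTPUT/g1/c1?timeout=15000"
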
diff --git a/docs/sections/services/mapper/installation.rst b/docs/sections/services/mapper/installation.rst
index 2add9d92..da0bcdc1 100644
--- a/docs/sections/services/mapper/installation.rst
+++ b/docs/sections/services/mapper/installation.rst
@@ -17,7 +17,7 @@ VES-Mapper can be deployed individually though it will throw errors if it can't
VES-Mapper blueprint is available @
https://git.onap.org/dcaegen2/services/mapper/tree/UniversalVesAdapter/dpo/blueprints/k8s-vesmapper.yaml-template.yaml?h=elalto
-VES-Mapper docker image is available in Nexus repo @ `nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:1.0.1 <nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:1.0.1>`_
+VES-Mapper docker image is available in Nexus repo @ `nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:latest <nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:latest>`_
@@ -27,11 +27,8 @@ VES-Mapper docker image is available in Nexus repo @ `nexus3.onap.org:10001/onap
*a. Verify DMaaP configurations in the blueprint as per setup*
DMaaP configuration consists of a subscribe URL to fetch notifications from the respective collector and a publish URL to publish the VES event.
-
-
-``streams_publishes`` and ``streams_subscribes`` points to the publishing topic and subscribe topic respectively.
-
-update these ``urls`` as per your DMaaP configurations in the blueprint.
+
+``streams_publishes`` and ``streams_subscribes`` point to the publishing topic and subscribe topic respectively. Update these ``urls`` as per your DMaaP configurations in the blueprint.
*b. Verify the Smooks mapping configuration in the blueprint as per the use case.* The blueprint contains a default mapping for each supported collector (SNMP Collector and RESTConf collector currently), which may serve the purpose for the use case. The ``mapping-files`` entry in ``collectors`` contains the contents of the mapping file.
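
Before deploying, a quick way to review the stream URLs in the downloaded blueprint template is to grep for them; the file name below is the template linked at the top of this page.

.. code-block:: bash

    # show the subscribe/publish stream definitions and the few lines that follow each
    grep -A3 -E 'streams_(publishes|subscribes)' k8s-vesmapper.yaml-template.yaml
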
diff --git a/docs/sections/services/mapper/mappingfile.rst b/docs/sections/services/mapper/mappingfile.rst
index daf515d3..7333963c 100644
--- a/docs/sections/services/mapper/mappingfile.rst
+++ b/docs/sections/services/mapper/mappingfile.rst
@@ -12,7 +12,7 @@ The Adapter uses Smooks Framework to do the data format conversion by using the
| http://www.smooks.org/guide
SNMP Collector Default Mapping File
-============
+===================================
Following is the default SNMP mapping file, which is used when no mapping file is found while processing an event from the SNMP Trap Collector.
.. code-block:: xml
@@ -60,7 +60,8 @@ Following is the default snmp mapping file which is used when no mapping file is
</jb:bean></smooks-resource-list>
RestConf Collector Default Mapping File
-============
+=======================================
+
Following is the default RestConf Collector mapping file, which is used when no mapping file is found while processing a notification from the RestConf Collector.
.. code-block:: xml
diff --git a/docs/sections/services/mapper/troubleshooting.rst b/docs/sections/services/mapper/troubleshooting.rst
index 5d524e5c..859bf6e4 100644
--- a/docs/sections/services/mapper/troubleshooting.rst
+++ b/docs/sections/services/mapper/troubleshooting.rst
@@ -34,7 +34,7 @@ Error and warning logs contain also:
**Do not rely on exact log messages or their presence, as they are often subject to change.**
Deployment/Installation errors
---------------------
+------------------------------
**Missing Default Config File in case of using local config instead of Consul**
@@ -45,10 +45,10 @@ Deployment/Installation errors
|13:04:37.537 [main] ERROR errorLogger - Application stoped due to missing default Config file
|13:04:37.538 [main] INFO o.s.s.c.ThreadPoolTaskExecutor - Shutting down ExecutorService 'applicationTaskExecutor'
|15:40:43.982 [main] WARN debugLogger - All Smooks objects closed
+
**These log messages are printed when the default configuration file "kv.json" is not present.**
-
**Invalid Default Config File in case of using local config instead of Consul**
If the default config file is an invalid JSON file, the below exception will be seen
diff --git a/docs/sections/services/pm-mapper/configuration.rst b/docs/sections/services/pm-mapper/configuration.rst
index a9f4f5bf..c699a35b 100644
--- a/docs/sections/services/pm-mapper/configuration.rst
+++ b/docs/sections/services/pm-mapper/configuration.rst
@@ -5,7 +5,7 @@ Configuration and Performance
=============================
PM Mapper Filtering
-"""""""""
+"""""""""""""""""""
The PM Mapper performs data reduction by filtering the PM telemetry data it receives.
This filtering information is provided to the service as part of its configuration, and is used to identify desired PM measurements (measType) contained within the data.
The service can accept an exact match to the measType or a regex (java.util.regex) identifying multiple measTypes (it is possible to use both simultaneously).
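
As an illustration of mixing the two matching styles, a filter can list an exact measType alongside a java.util.regex pattern. The snippet below is only a sketch: the file name and the JSON field names are illustrative, and the authoritative schema is the PM Mapper configuration itself.

.. code-block:: bash

    # illustrative filter: one exact measType plus one regex covering several measTypes
    cat > pm-mapper-filter.json <<'EOF'
    {
      "filters": [
        { "measTypes": [ "attTCHSeizures", "succTCH.*" ] }
      ]
    }
    EOF
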
diff --git a/docs/sections/services/snmptrap/offeredapis.rst b/docs/sections/services/snmptrap/offeredapis.rst
index 33a2c821..fabaff5f 100644
--- a/docs/sections/services/snmptrap/offeredapis.rst
+++ b/docs/sections/services/snmptrap/offeredapis.rst
@@ -1,6 +1,6 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-.. _offeredapis:
+.. _snmpofferedapis:
Offered APIs
============
diff --git a/docs/sections/services/snmptrap/release-notes.rst b/docs/sections/services/snmptrap/release-notes.rst
index 5c46d606..98ea3d40 100644
--- a/docs/sections/services/snmptrap/release-notes.rst
+++ b/docs/sections/services/snmptrap/release-notes.rst
@@ -15,11 +15,9 @@ Version: 2.3.0
**New Features**
- - `https://jira.onap.org/browse/DCAEGEN2-2020`
- Eliminate use of consul service discovery in snmptrap
+ - `https://jira.onap.org/browse/DCAEGEN2-2020` Eliminate use of consul service discovery in snmptrap
- - `https://jira.onap.org/browse/DCAEGEN2-2068`
- Updated dependency library version; stormwatch support
+ - `https://jira.onap.org/browse/DCAEGEN2-2068` Updated dependency library version; stormwatch support
**Bug Fixes**
@@ -45,12 +43,10 @@ Version: 1.4.0
**New Features**
- - `https://jira.onap.org/browse/DCAEGEN2-630`
- Added support for SNMPv3 traps with varying levels of privacy and authentication support.
+ - `https://jira.onap.org/browse/DCAEGEN2-630` Added support for SNMPv3 traps with varying levels of privacy and authentication support.
**Bug Fixes**
- - `https://jira.onap.org/browse/DCAEGEN2-842`
- Remove additional RFC3584 (Sec 3.1 (4)) varbinds from published/logged SNMPv1 messages, fix DMAAP publish error for traps with no varbinds present.
+ - `https://jira.onap.org/browse/DCAEGEN2-842` Remove additional RFC3584 (Sec 3.1 (4)) varbinds from published/logged SNMPv1 messages, fix DMAAP publish error for traps with no varbinds present.
**Known Issues**
@@ -77,9 +73,9 @@ Support for config binding services.
**Bug Fixes**
- `https://jira.onap.org/browse/DCAEGEN2-465`
+
**Known Issues**
- - `https://jira.onap.org/browse/DCAEGEN2-465`
- Default config causes standalone instance startup failure.
+ - `https://jira.onap.org/browse/DCAEGEN2-465` Default config causes standalone instance startup failure.
**Security Issues**
- None
@@ -91,6 +87,3 @@ Support for config binding services.
**Other**
-===========
-
-End of Release Notes
diff --git a/docs/sections/services/son-handler/son_handler_troubleshooting.rst b/docs/sections/services/son-handler/son_handler_troubleshooting.rst
index 98dde1d1..644b0826 100644
--- a/docs/sections/services/son-handler/son_handler_troubleshooting.rst
+++ b/docs/sections/services/son-handler/son_handler_troubleshooting.rst
@@ -6,13 +6,15 @@ Troubleshooting steps
Possible reasons & Solutions:
1. Microservice is not registered with the consul
   - Check in Consul whether the microservice is registered and whether the MS is able to fetch the app config from CBS. Check that CBS and Consul are deployed properly and try to redeploy the MS.
+
The below logs will be seen if CBS is not reachable by the MS
- 15:14:13.861 [main] WARN org.postgresql.Driver - JDBC URL port: 0 not valid (1:65535)
+ 15:14:13.861 [main] WARN org.postgresql.Driver - JDBC URL port: 0 not valid (1:65535)
15:14:13.862 [main] WARN o.s.b.w.s.c.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration': Unsatisfied dependency expressed through constructor parameter 0; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'dataSource' defined in org.onap.dcaegen2.services.sonhms.Application: Initialization of bean failed; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.boot.autoconfigure.jdbc.DataSourceInitializerInvoker': Invocation of init method failed; nested exception is org.springframework.jdbc.datasource.init.UncategorizedScriptException: Failed to execute database script; nested exception is java.lang.RuntimeException: Driver org.postgresql.Driver claims to not accept jdbcUrl, jdbc:postgresql://null:0/sonhms
15:14:13.865 [main] INFO o.a.catalina.core.StandardService - Stopping service [Tomcat]
15:14:13.877 [main] INFO o.s.b.a.l.ConditionEvaluationReportLoggingListener - Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
15:14:13.880 [main] ERROR o.s.boot.SpringApplication - Application run failed
+
2. MS is not able to fetch the config policies from the policy handler.
- Check if the config policy for the MS is created and pushed into the policy module. The below logs will be seen if the config policies are not available.
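
For case 1 above, Consul registration and CBS availability can be checked directly. This is a sketch only: the Consul address and the pod-name grep patterns are assumptions for a typical OOM deployment.

.. code-block:: bash

    # is the SON handler registered in Consul? (service/host names are assumptions)
    curl -s http://consul-server.onap:8500/v1/catalog/services | grep -i son

    # is config-binding-service itself running before redeploying the MS?
    kubectl get pods -n onap | grep config-binding
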
diff --git a/docs/sections/services/tcagen2-docker/installation.rst b/docs/sections/services/tcagen2-docker/installation.rst
index 792f8a48..e0a5b738 100644
--- a/docs/sections/services/tcagen2-docker/installation.rst
+++ b/docs/sections/services/tcagen2-docker/installation.rst
@@ -18,10 +18,9 @@ Following are steps if manual deployment/undeployment is required. Steps to dep
Enter the Cloudify Manager kubernetes pod
- - Tca-gen2 blueprint directory (/blueprints/k8s-tcagen2.yaml). The blueprint is also maintained in gerrit and can be downloaded from
- https://git.onap.org/dcaegen2/platform/blueprints/tree/blueprints/k8s-tcagen2.yaml
+ - Tca-gen2 blueprint directory (/blueprints/k8s-tcagen2.yaml). The blueprint is also maintained in gerrit and can be downloaded from https://git.onap.org/dcaegen2/platform/blueprints/tree/blueprints/k8s-tcagen2.yaml
- - Create input file required for deployment
+ - Create input file required for deployment
Configuration of the service consists of generating an inputs file (YAML) which will be used as part of the
Cloudify install. The tca-gen2 blueprint was designed with known defaults for the majority of the fields.
@@ -34,6 +33,7 @@ Enter the Cloudify Manager kuberenetes pod
:widths: auto
:delim: ;
:header: Property , Sample Value , Description , Required
+
   tca_handle_out_publish_url ; http://message-router:3904/events/unauthenticated.TCAGEN2_OUTPUT/; DMaaP topic to publish CL event output ; No
   tca_handle_in_subscribe_url ; http://message-router:3904/events/unauthenticated.VES_MEASUREMENT_OUTPUT/; DMaaP topic to subscribe VES measurement feeds ; No
tag_version ; nexus3.onap.org:10001/onap/org.onap.dcaegen2.analytics.tca-gen2.dcae-analytics-tca-web:1.0.1 ; The tag of the Docker image will be used when deploying the tca-gen2. ; No
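
Putting the table above to use, a minimal inputs file plus install command might look like the sketch below, run inside the Cloudify Manager pod. The property values are the samples from the table; the inputs path and the ``tcagen2`` blueprint/deployment ids are arbitrary choices.

.. code-block:: bash

    cat > /tmp/tcagen2-inputs.yaml <<'EOF'
    tca_handle_in_subscribe_url: "http://message-router:3904/events/unauthenticated.VES_MEASUREMENT_OUTPUT/"
    tag_version: "nexus3.onap.org:10001/onap/org.onap.dcaegen2.analytics.tca-gen2.dcae-analytics-tca-web:1.0.1"
    EOF

    cfy install -b tcagen2 -d tcagen2 -i /tmp/tcagen2-inputs.yaml /blueprints/k8s-tcagen2.yaml
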
diff --git a/docs/sections/services/ves-http/installation.rst b/docs/sections/services/ves-http/installation.rst
index 5ecafcee..6c976684 100644
--- a/docs/sections/services/ves-http/installation.rst
+++ b/docs/sections/services/ves-http/installation.rst
@@ -41,8 +41,7 @@ If VESCollector instance need to be deployed with authentication disabled, follo
- Execute into Bootstrap POD using kubectl command
-- VES blueprint is available under /blueprints directory ``k8s-ves-tls.yaml``. A corresponding input files is also pre-loaded into bootstrap
-pod under /inputs/k8s-ves-inputs.yaml
+- VES blueprint is available under /blueprints directory ``k8s-ves-tls.yaml``. A corresponding input file is also pre-loaded into bootstrap pod under /inputs/k8s-ves-inputs.yaml
- Deploy blueprint
.. code-block:: bash
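
Once the blueprint has been deployed, the collector can be checked from the bootstrap pod. This is a sketch only; the pod-name pattern and placeholder are assumptions for a typical OOM install.

.. code-block:: bash

    kubectl get pods -n onap | grep ves-collector            # expect the pod in Running state
    kubectl logs <dcae-ves-collector-pod> -n onap --tail=50  # review recent collector logs
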
diff --git a/docs/sections/services/ves-hv/index.rst b/docs/sections/services/ves-hv/index.rst
index 8c1105a1..94703119 100644
--- a/docs/sections/services/ves-hv/index.rst
+++ b/docs/sections/services/ves-hv/index.rst
@@ -26,7 +26,7 @@ High Volume VES Collector overview and functions
.. toctree::
:maxdepth: 1
-
+
architecture
design
repositories