author | Eric Debeau <eric.debeau@orange.com> | 2017-11-13 22:16:07 +0000
---|---|---
committer | Eric Debeau <eric.debeau@orange.com> | 2017-11-13 22:19:02 +0000
commit | de4209833da37685f6f05a6732366e38dfadbed6 (patch) |
tree | 271b34edf2e4c63bc3b2bbbe39a211eea019aa4f |
parent | 09f2cc5eb10c6193ac8406555da983479576e672 (diff) |
Fix editorial errors
Replace ONPA with ONAP
Correct some JSON file examples
Add code-blocks for code snippets
Change-Id: Ia313b6adc615bcd353f4e3c3e363c3f854d25181
ISSUE-ID: DCAEGEN2-199
Signed-off-by: Eric Debeau <eric.debeau@orange.com>
-rw-r--r-- | docs/sections/architecture.rst | 2
-rw-r--r-- | docs/sections/installation.rst | 4
-rw-r--r-- | docs/sections/installation_heat.rst | 42
-rw-r--r-- | docs/sections/installation_manual.rst | 645
-rw-r--r-- | docs/sections/installation_test.rst | 8
5 files changed, 272 insertions, 429 deletions
diff --git a/docs/sections/architecture.rst b/docs/sections/architecture.rst
index 8014ed49..b45e1055 100644
--- a/docs/sections/architecture.rst
+++ b/docs/sections/architecture.rst
@@ -44,7 +44,7 @@ in ONAP R1
 Usage Scenarios
 ---------------
-For ONPA R1 DCAE participates in all use cases.
+For ONAP R1 DCAE participates in all use cases.
 vDNS/vFW: VES collector, TCA analytics
 vCPE: VES collector, TCA analytics
diff --git a/docs/sections/installation.rst b/docs/sections/installation.rst
index f6c7d0d9..02e7a9ca 100644
--- a/docs/sections/installation.rst
+++ b/docs/sections/installation.rst
@@ -1,5 +1,5 @@
-DCAE mS Installation
-====================
+DCAE Installation
+=================
 .. toctree::
    :maxdepth: 1
diff --git a/docs/sections/installation_heat.rst b/docs/sections/installation_heat.rst
index b90ebb41..35057e6f 100644
--- a/docs/sections/installation_heat.rst
+++ b/docs/sections/installation_heat.rst
@@ -4,7 +4,7 @@ OpenStack Heat Template Based ONAP Deployment
 For ONAP R1, ONAP is deployed using an OpenStack Heat template. DCAE is also deployed through this process. This document describes the details of the Heat template deployment process and how to configure DCAE related parameters in the Heat template and its parameter file.
-ONAP Deployment 
+ONAP Deployment
 ---------------
 ONAP supports an OpenStack Heat template based system deployment. When a new "stack" is created using the template, the following virtual resources will be launched in the target OpenStack tenant:
@@ -14,7 +14,7 @@ ONAP supports an OpenStack Heat template based system deployment. When a new "s
 * A virtual router interconnecting the private OAM network with the external network of the OpenStack installation.
 * A key-pair named onap_key_{{RAND}}.
 * A security group named onap_sg_{{RAND}}.
-* A list of VMs for ONAP components. Each VM has one NIC connected to the OAM network and assigned a fixed IP. Each VM is also assigned a floating IP address from the external network. The VM hostnames are name consistently across different ONAP deployments, a user defined prefix, denoted as {{PREFIX}}, followed by a descriptive string for the ONAP component this VM runs, and optionally followed by a sub-function name. The VMs of the same ONAP role across different ONAP deployments will always have the same OAM network IP address. For example, the Message Router will always have the OAM network IP address of 10.0.11.1. 
+* A list of VMs for ONAP components. Each VM has one NIC connected to the OAM network and assigned a fixed IP. Each VM is also assigned a floating IP address from the external network. The VM hostnames are named consistently across different ONAP deployments: a user-defined prefix, denoted as {{PREFIX}}, followed by a descriptive string for the ONAP component this VM runs, and optionally followed by a sub-function name. The VMs of the same ONAP role across different ONAP deployments will always have the same OAM network IP address. For example, the Message Router will always have the OAM network IP address of 10.0.11.1.
 ============== ========================== ==========================
 ONAP Role      VM (Neutron) hostname      OAM IP address(es)
 ============== ========================== ==========================
@@ -38,29 +38,29 @@ ONAP supports an OpenStack Heat template based system deployment. When a new "s
 * A list of DCAE VMs, launched by the {{PREFIX}}-dcae-bootstrap VM. These VMs are also connected to the OAM network and associated with floating IP addresses on the external network. What's different is that their OAM IP addresses are DHCP assigned, not statically assigned.
 The table below lists the DCAE VMs that are deployed for R1 user stories.
-   ===================== ============================
-   DCAE Role             VM (Neutron) hostname(s)
-   ===================== ============================
+   ===================== ============================
+   DCAE Role             VM (Neutron) hostname(s)
+   ===================== ============================
    Cloudify Manager      {{DCAEPREFIX}}orcl{00}
    Consul cluster        {{DCAEPREFIX}}cnsl{00-02}
    Platform Docker Host  {{DCAEPREFIX}}dokp{00}
-   Service Docker Host   {{DCAEPREFIX}}dokp{00}
+   Service Docker Host   {{DCAEPREFIX}}doks{00}
    CDAP cluster          {{DCAEPREFIX}}cdap{00-06}
    Postgres              {{DCAEPREFIX}}pgvm{00}
-   ===================== ============================
+   ===================== ============================
 DNS
 ===
-ONAP VMs deployed by Heat template are all registered with the private DNS server under the domain name of **simpledemo.onap.org**. This domain can not be exposed to any where outside of the ONAP deployment because all ONAP deployments use the same domain name and same address space. Hence these host names remain only resolvable within the same ONAP deployment. 
+ONAP VMs deployed by the Heat template are all registered with the private DNS server under the domain name **simpledemo.onap.org**. This domain cannot be exposed anywhere outside of the ONAP deployment because all ONAP deployments use the same domain name and same address space. Hence these host names remain resolvable only within the same ONAP deployment.
-On the other hand DCAE VMs, although attached to the same OAM network as the rest of ONAP VMs, all have dynamic IP addresses allocated by the DHCP server and resort to a DNS based solution for registering the hostname and IP address mapping. DCAE VMs of different ONAP deployments are registered under different zones named as **{{RAND}}.dcaeg2.onap.org**. The API that DCAE calls to request the DNS zone registration and record registration is provided by OpenStack's DNS as a Service technology Designate. 
+On the other hand, DCAE VMs, although attached to the same OAM network as the rest of the ONAP VMs, all have dynamic IP addresses allocated by the DHCP server and resort to a DNS based solution for registering the hostname and IP address mapping. DCAE VMs of different ONAP deployments are registered under different zones named **{{RAND}}.dcaeg2.onap.org**. The API that DCAE calls to request the DNS zone registration and record registration is provided by Designate, OpenStack's DNS-as-a-Service technology.
-To enable VMs spun up by ONPA Heat template and DCAE's bootstrap process communicate with each other using hostnames, all VMs are configured to use the private DNS server launched by the Heat template as their name resolution server. In the configuration of this private DNS server, the DNS server that backs up Designate API frontend is used as the DNS forwarder.
+To enable VMs spun up by the ONAP Heat template and DCAE's bootstrap process to communicate with each other using hostnames, all VMs are configured to use the private DNS server launched by the Heat template as their name resolution server. In the configuration of this private DNS server, the DNS server that backs the Designate API frontend is used as the DNS forwarder.
-For simpledemo.onap.org VM to simpledemo.onap.org VM communications and {{RAND}}.dcaeg2.onap.org VM to simpledemo.onap.org VM communications, the resolution is completed by the private DNS server itself. For simpledemo.onap.org VM to {{RAND}}.dcaeg2.onap.org VM communications and {{RAND}}.dcaeg2.onap.org VM to {{RAND}}.dcaeg2.onap.org VM communications, the resolution request is forwarded from the private DNS server to the Designate DNS server and resolved there. Communications to outside world are resolved also by the Designate DNS server if the hostname belongs to a zone registered under the Designate DNS server, or forwarded to the next DNS server, either an organizational DNS server or a DNS server even higher in the global DNS server hierarchy.
+For simpledemo.onap.org VM to simpledemo.onap.org VM communications and {{RAND}}.dcaeg2.onap.org VM to simpledemo.onap.org VM communications, the resolution is completed by the private DNS server itself. For simpledemo.onap.org VM to {{RAND}}.dcaeg2.onap.org VM communications and {{RAND}}.dcaeg2.onap.org VM to {{RAND}}.dcaeg2.onap.org VM communications, the resolution request is forwarded from the private DNS server to the Designate DNS server and resolved there. Communications to the outside world are also resolved by the Designate DNS server if the hostname belongs to a zone registered under the Designate DNS server, or forwarded to the next DNS server, either an organizational DNS server or a DNS server even higher in the global DNS server hierarchy.
-For OpenStack installations where there is no existing DNS service, a "proxyed" Designate solution is supported. In this arrangement, DCAE bootstrap process will use MultiCloud service node as its Keystone API endpoint. For non Designate API calls, the MultiCloud service node forwards to the underlying cloud provider. However, for Designate API calls, the MultiCloud service node forwards to an off-stack Designate server.
+For OpenStack installations where there is no existing DNS service, a "proxied" Designate solution is supported. In this arrangement, the DCAE bootstrap process will use the MultiCloud service node as its Keystone API endpoint. For non-Designate API calls, the MultiCloud service node forwards to the underlying cloud provider. However, for Designate API calls, the MultiCloud service node forwards to an off-stack Designate server.
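After a deployment completes, this resolution chain can be sanity-checked by querying the private DNS server directly. A minimal sketch, assuming the private DNS server's OAM address is 10.0.100.1 and using illustrative hostnames; substitute the values from your own deployment:

.. code-block:: bash

    # simpledemo.onap.org names should be answered by the private DNS server itself
    dig +short @10.0.100.1 vm1.mr.simpledemo.onap.org

    # {{RAND}}.dcaeg2.onap.org names should be forwarded to the Designate DNS server
    dig +short @10.0.100.1 {{DCAEPREFIX}}orcl00.{{RAND}}.dcaeg2.onap.org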
 Heat Template Parameters
 ========================
@@ -69,21 +69,21 @@ Here we list Heat template parameters that are related to DCAE operation. Bold
 * public_net_id: the UUID of the external network where floating IPs are assigned from. For example: 971040b2-7059-49dc-b220-4fab50cb2ad4
 * public_net_name: the name of the external network where floating IPs are assigned from. For example: external
-* openstack_tenant_id: the ID of the OpenStack tenant/project that will host the ONPA deployment. For example: dd327af0542e47d7853e0470fe9ad625.
-* openstack_tenant_name: the name of the OpenStack tenant/project that will host the ONPA deployment. For example: Integration-SB-01.
-* openstack_username: the username for accessing the OpenStack tenant specified by openstack_tenant_id/ openstack_tenant_name.
-* openstack_api_key: the password for accessing the OpenStack tenant specified by openstack_tenant_id/ openstack_tenant_name.
+* openstack_tenant_id: the ID of the OpenStack tenant/project that will host the ONAP deployment. For example: dd327af0542e47d7853e0470fe9ad625.
+* openstack_tenant_name: the name of the OpenStack tenant/project that will host the ONAP deployment. For example: Integration-SB-01.
+* openstack_username: the username for accessing the OpenStack tenant specified by openstack_tenant_id/openstack_tenant_name.
+* openstack_api_key: the password for accessing the OpenStack tenant specified by openstack_tenant_id/openstack_tenant_name.
 * openstack_auth_method: **password**
 * openstack_region: **RegionOne**
 * cloud_env: **openstack**
-* dns_forwarder: This is the DNS forwarder for the ONAP deployment private DNS server. It must point to the IP address of the Designate DNS. For example '10.12.25.5'. 
+* dns_forwarder: This is the DNS forwarder for the ONAP deployment private DNS server. It must point to the IP address of the Designate DNS. For example '10.12.25.5'.
 * dcae_ip_addr: **10.0.4.1**. The static IP address on the OAM network that is assigned to the DCAE bootstrapping VM.
-* dnsaas_config_enabled: Whether a proxy-ed Designate solution is used. For example: **true**. 
-* dnsaas_region: The region of the Designate providing OpenStack. For example: RegionOne 
+* dnsaas_config_enabled: Whether a proxied Designate solution is used. For example: **true**.
+* dnsaas_region: The region of the Designate providing OpenStack. For example: RegionOne
 * dnsaas_tenant_name: The tenant/project name of the Designate providing OpenStack. For example Integration-SB-01.
 * dnsaas_keystone_url: The keystone URL of the Designate providing OpenStack. For example http://10.12.25.5:5000/v3.
-* dnsaas_username: The username for accessing the Designate providing OpenStack. 
-* dnsaas_password: The password for accessing the Designate providing OpenStack. 
+* dnsaas_username: The username for accessing the Designate providing OpenStack.
+* dnsaas_password: The password for accessing the Designate providing OpenStack.
 * dcae_keystone_url: This is the API endpoint for the MultiCloud service node. **"http://10.0.14.1/api/multicloud-titanium_cloud/v0/pod25_RegionOne/identity/v2.0"**
 * dcae_centos_7_image: The name of the CentOS-7 image.
 * dcae_domain: The domain under which ONAP deployment zones are registered. For example: 'dcaeg2.onap.org'.
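Once the parameter file has been filled in, the stack can be created with the OpenStack CLI. A minimal sketch, assuming the template and environment file names used by the ONAP demo Heat repository (onap_openstack.yaml / onap_openstack.env); adjust the paths to your checkout:

.. code-block:: bash

    # Create the ONAP stack from the Heat template and its parameter file
    openstack stack create -t onap_openstack.yaml -e onap_openstack.env ONAP

    # Poll the stack until it reaches CREATE_COMPLETE
    openstack stack show ONAP -c stack_status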
diff --git a/docs/sections/installation_manual.rst b/docs/sections/installation_manual.rst
index 070e36ab..d308028e 100644
--- a/docs/sections/installation_manual.rst
+++ b/docs/sections/installation_manual.rst
@@ -1,5 +1,5 @@
-DCAE mS Installation
-====================
+DCAE Installation
+=================
 The below steps cover manual setup of DCAE VMs and DCAE service components.
@@ -15,27 +15,32 @@ storage
 1. Install docker
-    sudo apt-get update
-    sudo apt install `docker.io <http://docker.io/>`__
+.. code-block:: bash
+
+    sudo apt-get update
+    sudo apt install docker.io
 2. Pull the latest container from onap nexus
-    sudo docker login -u docker -p docker `nexus.onap.org <http://nexus.onap.org/>`__:10001
-    sudo docker pull `nexus.onap.org <http://nexus.onap.org/>`__:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1
+.. code-block:: bash
+
+    sudo docker login -u docker -p docker nexus.onap.org:10001
+
+    sudo docker pull nexus.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1
 3. Start the VESCollector with below command
-    sudo docker run -d --name vescollector -p 8080:8080/tcp -p
-    8443:8443/tcp -P -e DMAAPHOST='<dmaap IP>'
-    `nexus.onap.org <http://nexus.onap.org/>`__:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1
+.. code-block:: bash
+
+    sudo docker run -d --name vescollector -p 8080:8080/tcp -p 8443:8443/tcp -P -e DMAAPHOST='<dmaap IP>' nexus.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1
-    Note: Change the dmaaphost to required DMAAP ip. To change the
-    dmaap information for a running container, stop the active
-    container and rerun above command changing the dmaap IP.
+.. note:: Change DMAAPHOST to the required DMaaP IP. To change the
+   DMaaP information for a running container, stop the active
+   container and rerun the above command with the new DMaaP IP.
 4. Verification
 i. Check logs under container /opt/app/VESCollector/logs/collector.log
 ii. If no active feed, you can simulate an event into collector via curl
-    curl -i -X POST -d @<sampleves> --header "Content-Type:
-    application/json" http://localhost:8080/eventListener/v5 -k
+.. code-block:: bash
+
+    curl -i -X POST -d @<sampleves> --header "Content-Type: application/json" -k http://localhost:8080/eventListener/v5
-    Note: If DMAAPHOST provided is invalid, you will see exception
-    around publish on the collector.logs (collector queues and attempts
-    to resend the event hence exceptions reported will be periodic).
+.. note:: If the DMAAPHOST provided is invalid, you will see exceptions
+   around publish in collector.log (the collector queues and attempts
+   to resend the event, hence the exceptions reported will be periodic).
 i. Below two topic configurations are pre-set into this container. When
    valid DMAAP instance ip was provided and VES events are received,
@@ -62,41 +68,32 @@ i. Below two topic configuration are pre-set into this container. When
-http://<dmaaphost>:3904/events/unauthenticated.SEC\_MEASUREMENT\_OUTPUT
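For the verification step above, the collector log can also be followed from the host without entering the container. A minimal sketch, using the container name from step 3 and the log path noted in step 4:

.. code-block:: bash

    # Tail the collector log inside the running container
    sudo docker exec vescollector tail -f /opt/app/VESCollector/logs/collector.log

    # Or inspect the container's stdout/stderr
    sudo docker logs --tail 100 vescollector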
 VM Init
-~~~~~~
+~~~~~~~
 To address windriver server instability, the below **init.sh** script was used to start the container on VM restart.
-+--------------------------------------------------------------------------------------------+
-| #!/bin/sh |
-| sudo docker ps \| grep "vescollector" |
-| if [ $? -ne 0 ]; then |
-| sudo docker login -u docker -p docker nexus.onap.org:10001 |
-| sudo docker pull nexus.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1 |
-| sudo docker rm -f vescollector |
-| echo "Collector process not running - $(date)" >> /home/ubuntu/startuplog |
-| sudo docker run -d --name vescollector -p 8080:8080/tcp -p 8443:8443/tcp -P -e DMAAPHOST='10.12.25.96' nexus.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1 |
-| else |
-| echo "Collector process running - $(date)" >> /home/ubuntu/startuplog |
-| fi |
-+--------------------------------------------------------------------------------------------+
+.. code-block:: bash
+
+    #!/bin/sh
+    sudo docker ps | grep "vescollector"
+    if [ $? -ne 0 ]; then
+        sudo docker login -u docker -p docker nexus.onap.org:10001
+        sudo docker pull nexus.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1
+        sudo docker rm -f vescollector
+        echo "Collector process not running - $(date)" >> /home/ubuntu/startuplog
+        sudo docker run -d --name vescollector -p 8080:8080/tcp -p 8443:8443/tcp -P -e DMAAPHOST='10.12.25.96' nexus.onap.org:10001/onap/org.onap.dcaegen2.collectors.ves.vescollector:1.1
+    else
+        echo "Collector process running - $(date)" >> /home/ubuntu/startuplog
+    fi
 This script was invoked via VM init script (rc.d).
-ln -s /home/ubuntu/init.sh /etc/init.d/init.sh
-sudo update-rc.d init.sh start 2
+.. code-block:: bash
+
+    ln -s /home/ubuntu/init.sh /etc/init.d/init.sh
+    sudo update-rc.d init.sh start 2
@@ -110,391 +107,237 @@ m1.medium size and 50gb cinder volumes.
 1. Install docker
-    sudo apt-get update
-    sudo apt install `docker.io <http://docker.io/>`__
+.. code-block:: bash
+
+    sudo apt-get update
+    sudo apt install docker.io
 2. Pull CDAP SDK container
-sudo docker pull caskdata/cdap-standalone:4.1.2
+.. code-block:: bash
+
+    sudo docker pull caskdata/cdap-standalone:4.1.2
 3. Deploy and run the CDAP container
-    sudo docker run -d --name cdap-sdk-2 -p 11011:11011 -p 11015:11015
-    caskdata/cdap-standalone:4.1.2
+.. code-block:: bash
+
+    sudo docker run -d --name cdap-sdk-2 -p 11011:11011 -p 11015:11015 caskdata/cdap-standalone:4.1.2
 4. Create Namespace on CDAP application
-curl -X PUT http://localhost:11015/v3/namespaces/cdap_tca_hi_lo
+.. code-block:: bash
+
+    curl -X PUT http://localhost:11015/v3/namespaces/cdap_tca_hi_lo
-5. Create TCA app config file - "tca\_app\_config.json" under ~ubuntu as
-   below
+5. Create TCA app config file - "tca\_app\_config.json" under ~ubuntu as below
-+------------------------------------------------------------------------------+
-| { |
-| "artifact": { |
-| "name": "dcae-analytics-cdap-tca", |
-| "version": "2.0.0", |
-| "scope": "user" |
-| }, |
-| "config": { |
-| "appName": "dcae-tca", |
-| "appDescription": "DCAE Analytics Threshold Crossing Alert Application", |
-| "tcaVESMessageStatusTableName": "TCAVESMessageStatusTable", |
-| "tcaVESMessageStatusTableTTLSeconds": 86400.0, |
-| "tcaAlertsAbatementTableName": "TCAAlertsAbatementTable", |
-| "tcaAlertsAbatementTableTTLSeconds": 1728000.0, |
-| "tcaVESAlertsTableName": "TCAVESAlertsTable", |
-| "tcaVESAlertsTableTTLSeconds": 1728000.0, |
-| "thresholdCalculatorFlowletInstances": 2.0, |
-| "tcaSubscriberOutputStreamName": "TCASubscriberOutputStream" |
-| } |
-| } |
-+------------------------------------------------------------------------------+
+.. code-block:: json
+
+    {
+      "artifact": {
+        "name": "dcae-analytics-cdap-tca",
+        "version": "2.0.0",
+        "scope": "user"
+      },
+      "config": {
+        "appName": "dcae-tca",
+        "appDescription": "DCAE Analytics Threshold Crossing Alert Application",
+        "tcaVESMessageStatusTableName": "TCAVESMessageStatusTable",
+        "tcaVESMessageStatusTableTTLSeconds": 86400.0,
+        "tcaAlertsAbatementTableName": "TCAAlertsAbatementTable",
+        "tcaAlertsAbatementTableTTLSeconds": 1728000.0,
+        "tcaVESAlertsTableName": "TCAVESAlertsTable",
+        "tcaVESAlertsTableTTLSeconds": 1728000.0,
+        "thresholdCalculatorFlowletInstances": 2.0,
+        "tcaSubscriberOutputStreamName": "TCASubscriberOutputStream"
+      }
+    }
+
 6. Create TCA app preference file under ~ubuntu as below
-+--------------------------------------------------------------------------------------------+
-| { |
-| "publisherContentType" : "application/json", |
-| "publisherHostName" : "10.12.25.96", |
-| "publisherHostPort" : "3904", |
-| "publisherMaxBatchSize" : "1", |
-| "publisherMaxRecoveryQueueSize" : "100000", |
-| "publisherPollingInterval" : "20000", |
-| "publisherProtocol" : "http", |
-| "publisherTopicName" : "unauthenticated.DCAE\_CL\_OUTPUT", |
-| "subscriberConsumerGroup" : "OpenDCAE-c1", |
-| "subscriberConsumerId" : "c1", |
-| "subscriberContentType" : "application/json", |
-| "subscriberHostName" : "10.12.25.96", |
-| "subscriberHostPort" : "3904", |
-| "subscriberMessageLimit" : "-1", |
-| "subscriberPollingInterval" : "20000", |
-| "subscriberProtocol" : "http", |
-| "subscriberTimeoutMS" : "-1", |
-| "subscriberTopicName" : "unauthenticated.SEC\_MEASUREMENT\_OUTPUT", |
-| "enableAAIEnrichment" : false, |
-| "aaiEnrichmentHost" : "10.12.25.72", |
-| "aaiEnrichmentPortNumber" : 8443, |
-| "aaiEnrichmentProtocol" : "https", |
-| "aaiEnrichmentUserName" : "DCAE", |
-| "aaiEnrichmentUserPassword" : "DCAE", |
-| "aaiEnrichmentIgnoreSSLCertificateErrors" : false, |
-| "aaiVNFEnrichmentAPIPath" : "/aai/v11/network/generic-vnfs/generic-vnf", |
-| "aaiVMEnrichmentAPIPath" : "/aai/v11/search/nodes-query", |
-| "tca\_policy" : "{ |
-| \\"domain\\": \\"measurementsForVfScaling\\", |
-| \\"metricsPerEventName\\": [{ |
-| \\"eventName\\": \\"vFirewallBroadcastPackets\\", |
-| \\"controlLoopSchemaType\\": \\"VNF\\", |
-| \\"policyScope\\": \\"DCAE\\", |
-| \\"policyName\\": \\"DCAE.Config\_tca-hi-lo\\", |
-| \\"policyVersion\\": \\"v0.0.1\\", |
-| \\"thresholds\\": [{ |
-| \\"closedLoopControlName\\": \\"ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a\\", |
-| \\"version\\": \\"1.0.2\\", |
-| \\"fieldPath\\": \\"$.event.measurementsForVfScalingFields.vNicUsageArray[\*].receivedTotalPacketsDelta\\", |
-| \\"thresholdValue\\": 300, |
-| \\"direction\\": \\"LESS\_OR\_EQUAL\\", |
-| \\"severity\\": \\"MAJOR\\", |
-| \\"closedLoopEventStatus\\": \\"ONSET\\" |
-| }, { |
-| \\"closedLoopControlName\\": \\"ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a\\", |
-| \\"version\\": \\"1.0.2\\", |
-| \\"fieldPath\\": \\"$.event.measurementsForVfScalingFields.vNicUsageArray[\*].receivedTotalPacketsDelta\\", |
-| \\"thresholdValue\\": 700, |
-| \\"direction\\": \\"GREATER\_OR\_EQUAL\\", |
-| \\"severity\\": \\"CRITICAL\\", |
-| \\"closedLoopEventStatus\\": \\"ONSET\\" |
-| }] |
-| }, { |
-| \\"eventName\\": \\"vLoadBalancer\\", |
-| \\"controlLoopSchemaType\\": \\"VM\\", |
-| \\"policyScope\\": \\"DCAE\\", |
-| \\"policyName\\": \\"DCAE.Config\_tca-hi-lo\\", |
-| \\"policyVersion\\": \\"v0.0.1\\", |
-| \\"thresholds\\": [{ |
-| \\"closedLoopControlName\\": \\"ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3\\", |
-| \\"version\\": \\"1.0.2\\", |
-| \\"fieldPath\\": \\"$.event.measurementsForVfScalingFields.vNicUsageArray[\*].receivedTotalPacketsDelta\\", |
-| \\"thresholdValue\\": 300, |
-| \\"direction\\": \\"GREATER\_OR\_EQUAL\\", |
-| \\"severity\\": \\"CRITICAL\\", |
-| \\"closedLoopEventStatus\\": \\"ONSET\\" |
-| }] |
-| }, { |
-| \\"eventName\\": \\"Measurement\_vGMUX\\", |
-| \\"controlLoopSchemaType\\": \\"VNF\\", |
-| \\"policyScope\\": \\"DCAE\\", |
-| \\"policyName\\": \\"DCAE.Config\_tca-hi-lo\\", |
-| \\"policyVersion\\": \\"v0.0.1\\", |
-| \\"thresholds\\": [{ |
-| \\"closedLoopControlName\\": \\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\\", |
-| \\"version\\": \\"1.0.2\\", |
-| \\"fieldPath\\": \\"$.event.measurementsForVfScalingFields.additionalMeasurements[\*].arrayOfFields[0].value\\", |
-| \\"thresholdValue\\": 0, |
-| \\"direction\\": \\"EQUAL\\", |
-| \\"severity\\": \\"MAJOR\\", |
-| \\"closedLoopEventStatus\\": \\"ABATED\\" |
-| }, { |
-| \\"closedLoopControlName\\": \\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\\", |
-| \\"version\\": \\"1.0.2\\", |
-| \\"fieldPath\\": \\"$.event.measurementsForVfScalingFields.additionalMeasurements[\*].arrayOfFields[0].value\\", |
-| \\"thresholdValue\\": 0, |
-| \\"direction\\": \\"GREATER\\", |
-| \\"severity\\": \\"CRITICAL\\", |
-| \\"closedLoopEventStatus\\": \\"ONSET\\" |
-| }] |
-| }] |
-| }" |
-| } |
-+--------------------------------------------------------------------------------------------+
-    Note: Dmaap configuration are specified on this file on
-    publisherHostName and subscriberHostName. To be changed as
-    required\*\*
-7. Copy below script to CDAP server (this gets latest image from nexus
-   and deploys TCA application) and execute it
-+--------------------------------------------------------------------------------------------+
-| #!/bin/sh |
-| TCA\_JAR=dcae-analytics-cdap-tca-2.0.0.jar |
-| rm -f /home/ubuntu/$TCA\_JAR |
-| cd /home/ubuntu/ |
-| wget https://nexus.onap.org/service/local/repositories/staging/content/org/onap/dcaegen2/analytics/tca/dcae-analytics-cdap-tca/2.0.0/$TCA\_JAR |
-| if [ $? -eq 0 ]; then |
-| if [ -f /home/ubuntu/$TCA\_JAR ]; then |
-| echo "Restarting TCA CDAP application using $TCA\_JAR artifact" |
-| else |
-| echo "ERROR: $TCA\_JAR missing" |
-| exit 1 |
-| fi |
-| else |
-| echo "ERROR: $TCA\_JAR not found in nexus" |
-| exit 1 |
-| fi |
-| # stop programs |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/stop |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/stop |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/stop |
-| # delete application |
-| curl -X DELETE http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca |
-| # delete artifact |
-| curl -X DELETE http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/artifacts/dcae-analytics-cdap-tca/versions/2.0.0 |
-| # load artifact |
-| curl -X POST --data-binary @/home/ubuntu/$TCA\_JAR http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/artifacts/dcae-analytics-cdap-tca |
-| # create app |
-| curl -X PUT -d @/home/ubuntu/tca\_app\_config.json http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca |
-| # load preferences |
-| curl -X PUT -d @/home/ubuntu/tca\_app\_preferences.json http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/preferences |
-| # start programs |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/start |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/start |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/start |
-| echo |
-| # get status of programs |
-| curl http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/status |
-| curl http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/status |
-| curl http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/status |
-| echo |
-+--------------------------------------------------------------------------------------------+
+.. code-block:: json
+
+    {
+      "publisherContentType" : "application/json",
+      "publisherHostName" : "10.12.25.96",
+      "publisherHostPort" : "3904",
+      "publisherMaxBatchSize" : "1",
+      "publisherMaxRecoveryQueueSize" : "100000",
+      "publisherPollingInterval" : "20000",
+      "publisherProtocol" : "http",
+      "publisherTopicName" : "unauthenticated.DCAE_CL_OUTPUT",
+      "subscriberConsumerGroup" : "OpenDCAE-c1",
+      "subscriberConsumerId" : "c1",
+      "subscriberContentType" : "application/json",
+      "subscriberHostName" : "10.12.25.96",
+      "subscriberHostPort" : "3904",
+      "subscriberMessageLimit" : "-1",
+      "subscriberPollingInterval" : "20000",
+      "subscriberProtocol" : "http",
+      "subscriberTimeoutMS" : "-1",
+      "subscriberTopicName" : "unauthenticated.SEC_MEASUREMENT_OUTPUT",
+      "enableAAIEnrichment" : false,
+      "aaiEnrichmentHost" : "10.12.25.72",
+      "aaiEnrichmentPortNumber" : 8443,
+      "aaiEnrichmentProtocol" : "https",
+      "aaiEnrichmentUserName" : "DCAE",
+      "aaiEnrichmentUserPassword" : "DCAE",
+      "aaiEnrichmentIgnoreSSLCertificateErrors" : false,
+      "aaiVNFEnrichmentAPIPath" : "/aai/v11/network/generic-vnfs/generic-vnf",
+      "aaiVMEnrichmentAPIPath" : "/aai/v11/search/nodes-query",
+      "tca_policy" : {
+        "domain": "measurementsForVfScaling",
+        "metricsPerEventName": [{
+          "eventName": "vFirewallBroadcastPackets",
+          "controlLoopSchemaType": "VNF",
+          "policyScope": "DCAE",
+          "policyName": "DCAE.Config_tca-hi-lo",
+          "policyVersion": "v0.0.1",
+          "thresholds": [{
+            "closedLoopControlName": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
+            "version": "1.0.2",
+            "fieldPath": "$.event.measurementsForVfScalingFields.vNicUsageArray[*].receivedTotalPacketsDelta",
+            "thresholdValue": 300,
+            "direction": "LESS_OR_EQUAL",
+            "severity": "MAJOR",
+            "closedLoopEventStatus": "ONSET"
+          }, {
+            "closedLoopControlName": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
+            "version": "1.0.2",
+            "fieldPath": "$.event.measurementsForVfScalingFields.vNicUsageArray[*].receivedTotalPacketsDelta",
+            "thresholdValue": 700,
+            "direction": "GREATER_OR_EQUAL",
+            "severity": "CRITICAL",
+            "closedLoopEventStatus": "ONSET"
+          }]
+        }, {
+          "eventName": "vLoadBalancer",
+          "controlLoopSchemaType": "VM",
+          "policyScope": "DCAE",
+          "policyName": "DCAE.Config_tca-hi-lo",
+          "policyVersion": "v0.0.1",
+          "thresholds": [{
+            "closedLoopControlName": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
+            "version": "1.0.2",
+            "fieldPath": "$.event.measurementsForVfScalingFields.vNicUsageArray[*].receivedTotalPacketsDelta",
+            "thresholdValue": 300,
+            "direction": "GREATER_OR_EQUAL",
+            "severity": "CRITICAL",
+            "closedLoopEventStatus": "ONSET"
+          }]
+        }, {
+          "eventName": "Measurement_vGMUX",
+          "controlLoopSchemaType": "VNF",
+          "policyScope": "DCAE",
+          "policyName": "DCAE.Config_tca-hi-lo",
+          "policyVersion": "v0.0.1",
+          "thresholds": [{
+            "closedLoopControlName": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
+            "version": "1.0.2",
+            "fieldPath": "$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value",
+            "thresholdValue": 0,
+            "direction": "EQUAL",
+            "severity": "MAJOR",
+            "closedLoopEventStatus": "ABATED"
+          }, {
+            "closedLoopControlName": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
+            "version": "1.0.2",
+            "fieldPath": "$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value",
+            "thresholdValue": 0,
+            "direction": "GREATER",
+            "severity": "CRITICAL",
+            "closedLoopEventStatus": "ONSET"
+          }]
+        }]
+      }
+    }
+
+.. note:: DMaaP configuration is specified in this file via
+   publisherHostName and subscriberHostName. Change these as required.
+
+7. Copy the below script to the CDAP server (it fetches the latest artifact from Nexus and deploys the TCA application) and execute it
+
+.. code-block:: bash
+
+    #!/bin/sh
+    TCA_JAR=dcae-analytics-cdap-tca-2.0.0.jar
+    rm -f /home/ubuntu/$TCA_JAR
+    cd /home/ubuntu/
+    wget https://nexus.onap.org/service/local/repositories/staging/content/org/onap/dcaegen2/analytics/tca/dcae-analytics-cdap-tca/2.0.0/$TCA_JAR
+    if [ $? -eq 0 ]; then
+        if [ -f /home/ubuntu/$TCA_JAR ]; then
+            echo "Restarting TCA CDAP application using $TCA_JAR artifact"
+        else
+            echo "ERROR: $TCA_JAR missing"
+            exit 1
+        fi
+    else
+        echo "ERROR: $TCA_JAR not found in nexus"
+        exit 1
+    fi
+    # stop programs
+    curl -X POST http://localhost:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/stop
+    curl -X POST http://localhost:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/stop
+    curl -X POST http://localhost:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/stop
+    # delete application
+    curl -X DELETE http://localhost:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca
+    # delete artifact
+    curl -X DELETE http://localhost:11015/v3/namespaces/cdap_tca_hi_lo/artifacts/dcae-analytics-cdap-tca/versions/2.0.0
+    # load artifact
+    curl -X POST --data-binary @/home/ubuntu/$TCA_JAR http://localhost:11015/v3/namespaces/cdap_tca_hi_lo/artifacts/dcae-analytics-cdap-tca
+    # create app
+    curl -X PUT -d @/home/ubuntu/tca_app_config.json http://localhost:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca
+    # load preferences
+    curl -X PUT -d @/home/ubuntu/tca_app_preferences.json http://localhost:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/preferences
+    # start programs
+    curl -X POST http://localhost:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/start
+    curl -X POST http://localhost:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/start
+    curl -X POST http://localhost:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/start
+    echo
+    # get status of programs
+    curl http://localhost:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/status
+    curl http://localhost:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/status
+    curl http://localhost:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/status
+    echo
+
 8. Verify TCA application and logs via CDAP GUI processes. The overall flow can be checked here.
 TCA Configuration Change
-~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~
 Typical configuration changes include changing DMAAP host and/or Policy configuration. If necessary, modify the file in step #6 and run the script noted in step #7 to redeploy TCA with the updated configuration.
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 VM Init
-~~~~~~
+~~~~~~~
 To address windriver server instability, the below **init.sh** script was used to restart the container on VM restart. This script was invoked via VM init script (rc.d).
-+------------------------------------------------------------------------------------------------------------------------------+
-| #!/bin/sh |
-| #docker run -d --name cdap-sdk -p 11011:11011 -p 11015:11015 caskdata/cdap-standalone:4.1.2 |
-| sudo docker restart cdap-sdk-2 |
-| sleep 30 |
-| # start program |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/start |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/start |
-| curl -X POST http://localhost:11015/v3/namespaces/cdap\_tca\_hi\_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/start |
-+------------------------------------------------------------------------------------------------------------------------------+
+.. code-block:: bash
+
+    #!/bin/sh
+    #docker run -d --name cdap-sdk -p 11011:11011 -p 11015:11015 caskdata/cdap-standalone:4.1.2
+    sudo docker restart cdap-sdk-2
+    sleep 30
+    # start program
+    curl -X POST http://localhost:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/workers/TCADMaaPMRPublisherWorker/start
+    curl -X POST http://localhost:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/workers/TCADMaaPMRSubscriberWorker/start
+    curl -X POST http://localhost:11015/v3/namespaces/cdap_tca_hi_lo/apps/dcae-tca/flows/TCAVESCollectorFlow/start
-This script was invoked via VM init script (rc.d).
-ln -s /home/ubuntu/init.sh /etc/init.d/init.sh
-sudo update-rc.d init.sh start 2
+This script was invoked via VM init script (rc.d).
+
+.. code-block:: bash
+
+    ln -s /home/ubuntu/init.sh /etc/init.d/init.sh
+    sudo update-rc.d init.sh start 2
diff --git a/docs/sections/installation_test.rst b/docs/sections/installation_test.rst
index 2c49a957..641a8616 100644
--- a/docs/sections/installation_test.rst
+++ b/docs/sections/installation_test.rst
@@ -5,15 +5,15 @@ Testing and Debugging ONAP DCAE Deployment
 Check Component Status
 ======================
-Testing of a DCAE system starts with checking the health of the deployed components. This can be done by accessing the Consul becsue all DCAE components register their staus with Consul. Such API is accessible at http://{{ANY_CONSUL_VM_IP}}:8500 .
+Testing of a DCAE system starts with checking the health of the deployed components. This can be done by accessing Consul because all DCAE components register their status with Consul. This API is accessible at http://{{ANY_CONSUL_VM_IP}}:8500.
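Beyond browsing the Consul GUI, the same health information can be pulled from Consul's HTTP API with curl. A minimal sketch; the service name used below is illustrative, so list the registered services first to find the actual names:

.. code-block:: bash

    # List all services registered with Consul
    curl http://{{ANY_CONSUL_VM_IP}}:8500/v1/catalog/services

    # Show the health checks for one registered service (name is illustrative)
    curl http://{{ANY_CONSUL_VM_IP}}:8500/v1/health/checks/ves-collector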
-In addition, more details status information can be obtained in additional ways. 
+In addition, more detailed status information can be obtained in the following ways.
 1. Check VES Status
-   VES status and running logs can be found on the {{RAND}}doks00 VM. The detailed API and access methods can be found in the logging and human interface sections. 
+   VES status and running logs can be found on the {{RAND}}doks00 VM. The detailed API and access methods can be found in the logging and human interface sections.
 2. Check TCA Status
-   TCA has its own GUI that provides detailed operation information. Point browser to http://{{CDAP02_VM_IP}}:11011/oldcdap/ns/cdap_tca_hi_lo/apps/, select the application with Description "DCAE Analytics Threshold Crossing Alert Application"; then select "TCAVESCollectorFlow". This leads to a flow display where all stages of processing are illustrated and the number inside of each stage icon shows the number of events/messages processed.
+   TCA has its own GUI that provides detailed operation information. Point a browser to http://{{CDAP02_VM_IP}}:11011/oldcdap/ns/cdap_tca_hi_lo/apps/, select the application with the Description "DCAE Analytics Threshold Crossing Alert Application"; then select "TCAVESCollectorFlow". This leads to a flow display where all stages of processing are illustrated and the number inside each stage icon shows the number of events/messages processed.
 3. Check Message Router Status
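The Message Router can be checked with the same DMaaP API used earlier in this guide. A minimal sketch, assuming the Message Router's OAM address (10.0.11.1 per the table above); the consumer group/id pair is illustrative:

.. code-block:: bash

    # List the topics known to the Message Router
    curl http://10.0.11.1:3904/topics

    # Read recent events from the TCA output topic
    curl "http://10.0.11.1:3904/events/unauthenticated.DCAE_CL_OUTPUT/g1/c1?timeout=10000"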