Diffstat (limited to 'docs/sections')
-rw-r--r--   docs/sections/healthcheck.rst                    |  6
-rw-r--r--   docs/sections/installation_test.rst              | 11
-rw-r--r--   docs/sections/services/mapper/installation.rst   | 38
-rw-r--r--   docs/sections/tls_enablement.rst                 |  4
4 files changed, 47 insertions, 12 deletions
diff --git a/docs/sections/healthcheck.rst b/docs/sections/healthcheck.rst
index f7fcba15..b85a1dea 100644
--- a/docs/sections/healthcheck.rst
+++ b/docs/sections/healthcheck.rst
@@ -26,6 +26,12 @@ blueprints after the initial DCAE installation.
 The healthcheck service is exposed as a Kubernetes ClusterIP Service named
 `dcae-healthcheck`. The service can be queried for status as shown below.
 
+.. note::
+   Run the commands below before running "curl dcae-healthcheck":
+
+   * To get the dcae-healthcheck pod name, run: kubectl get pods -n onap | grep dcae-healthcheck
+   * Then enter the shell of the container by running: kubectl exec -it <dcae-healthcheck pod> -n onap bash
+
 .. code-block:: json
 
    $ curl dcae-healthcheck
diff --git a/docs/sections/installation_test.rst b/docs/sections/installation_test.rst
index 2d2f357d..1c36bf37 100644
--- a/docs/sections/installation_test.rst
+++ b/docs/sections/installation_test.rst
@@ -99,17 +99,20 @@ After the platform is assessed as healthy, the next step is to check the functio
 
       kubectl logs -f -n onap <vescollectorpod> dcae-ves-collector
 
+.. note::
+   To get the "vescollectorpod" name, run: kubectl -n onap get pods | grep dcae-ves-collector
+
 2. Check VES Output
 
    VES publishes received VNF data, after authentication and syntax check, onto DMaaP Message Router. To check this is happening we can subscribe to the publishing topic.
 
-   1. Run the subscription command to subscribe to the topic: **curl -H "Content-Type:text/plain" -X GET http://{{K8S_NODEIP}}:30227/events/unauthenticated.VES_MEASUREMENT_OUTPUT/group1/C1?timeout=50000**. The actual format and use of the Message Router API can be found in the DMaaP project documentation.
+   1. Run the subscription command to subscribe to the topic: **curl -H "Content-Type:text/plain" -k -X GET https://{{K8S_NODEIP}}:30226/events/unauthenticated.VES_MEASUREMENT_OUTPUT/group1/C1?timeout=50000**. The actual format and use of the Message Router API can be found in the DMaaP project documentation.
 
      * When there are messages being published, this command returns with a JSON array of messages;
      * If no messages are being published, the call returns an empty JSON array once the timeout expires (i.e. 50000 milliseconds in the example above);
-     * It may be useful to run this command in a loop: **while :; do curl -H "Content-Type:text/plain" -X GET http://{{K8S_NODEIP}}:3904/events/unauthenticated.VES_MEASUREMENT_OUTPUT/group1/C1?timeout=50000; echo; done**;
+     * It may be useful to run this command in a loop: **while :; do curl -H "Content-Type:text/plain" -k -X GET https://{{K8S_NODEIP}}:30226/events/unauthenticated.VES_MEASUREMENT_OUTPUT/group1/C1?timeout=50000; echo; done**;
 
 3. Check TCA Output
 
    TCA also publishes its events to Message Router under the topic "unauthenticated.DCAE_CL_OUTPUT". The same Message Router subscription command can be used for checking the messages being published by TCA;
 
-   * Run the subscription command to subscribe to the topic: **curl -H "Content-Type:text/plain" -X GET http://{{K8S_NODEIP}}:3904/events/unauthenticated.DCAE_CL_OUTPUT/group1/C1?timeout=50000**.
-   * Or run the command in a loop: **while :; do curl -H "Content-Type:text/plain" -X GET http://{{K8S_NODEIP}}:3904/events/unauthenticated.DCAE_CL_OUTPUT/group1/C1?timeout=50000; echo; done**;
+   * Run the subscription command to subscribe to the topic: **curl -H "Content-Type:text/plain" -k -X GET https://{{K8S_NODEIP}}:30226/events/unauthenticated.DCAE_CL_OUTPUT/group1/C1?timeout=50000**.
+   * Or run the command in a loop: **while :; do curl -H "Content-Type:text/plain" -k -X GET https://{{K8S_NODEIP}}:30226/events/unauthenticated.DCAE_CL_OUTPUT/group1/C1?timeout=50000; echo; done**;
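
The verification flow above can be exercised end to end with a small script. A minimal sketch, assuming the onap namespace, that DMaaP Message Router is reachable on NodePort 30226 as in the updated commands above, and that K8S_NODEIP is exported in the environment:

    #!/usr/bin/env bash
    # Sketch of the VES output check: find the collector pod, then poll
    # the Message Router topic it publishes to.
    set -euo pipefail

    # Step 1: get the VES collector pod name.
    VES_POD=$(kubectl -n onap get pods | grep dcae-ves-collector | awk '{print $1}')
    echo "VES collector pod: ${VES_POD}"

    # Step 2: subscribe to the publishing topic in a loop (Ctrl-C to stop).
    while :; do
      curl -sk -H "Content-Type:text/plain" \
        "https://${K8S_NODEIP}:30226/events/unauthenticated.VES_MEASUREMENT_OUTPUT/group1/C1?timeout=50000"
      echo
    done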
diff --git a/docs/sections/services/mapper/installation.rst b/docs/sections/services/mapper/installation.rst
index d8d00396..af4189fe 100644
--- a/docs/sections/services/mapper/installation.rst
+++ b/docs/sections/services/mapper/installation.rst
@@ -11,12 +11,10 @@ Installation
 
 VES-Mapper can be deployed individually, though it will throw errors if it can't reach the DMaaP instance's APIs. To test it functionally, DMaaP is the only required prerequisite outside DCAE. As VES-Mapper is integrated with Consul / CBS, it fetches the initial configuration from Consul.
 
-**Note:** Currently VES-Mapper fetches configuration from Consul only during initialization. It does not periodically refresh the local configuration by getting updates from Consul. This is planned for E release.
-
 **Blueprint/model/image**
 
 VES-Mapper blueprint is available @
-https://git.onap.org/dcaegen2/services/mapper/tree/UniversalVesAdapter/dpo/blueprints/k8s-vesmapper.yaml-template.yaml?h=elalto
+https://git.onap.org/dcaegen2/platform/blueprints/tree/blueprints/k8s-ves-mapper.yaml?h=guilin
 
 VES-Mapper docker image is available in Nexus repo @ `nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:latest <nexus3.onap.org:10001/onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:latest>`_
@@ -37,13 +35,26 @@ VES-Mapper docker image is available in Nexus repo @ `nexus3.onap.org:10001/onap
 
 For this step, DCAE's Cloudify instance should be in running state. Transfer the blueprint file into the DCAE bootstrap POD under the /blueprints directory. Log in to the DCAE bootstrap POD's main container.
 
+.. note::
+   To do this, run the commands below:
+
+   * To get the bootstrap pod name, run: kubectl get pods -n onap | grep bootstrap
+   * To transfer the blueprint file into the bootstrap pod, run: kubectl cp <source file path> <bootstrap pod>:/blueprints -n onap
+   * To log in to the bootstrap pod, run: kubectl exec -it <bootstrap pod> -n onap bash
+
+.. note::
+   Verify the following before validating the blueprint:
+
+   * If the version of the plugin used differs from the output of "cfy plugins list", update the blueprint import to match.
+   * If the tag_version under inputs is old, update it to the latest.
+
 Validate blueprint
-  ``cfy blueprints validate /blueprints/k8s-vesmapper.yaml-template.yaml``
+  ``cfy blueprints validate /blueprints/k8s-ves-mapper.yaml``
 
 Use the following command to upload the validated blueprint:
-  ``cfy blueprints upload -b ves-mapper /blueprints/k8s-vesmapper.yaml-template.yaml``
+  ``cfy blueprints upload -b ves-mapper /blueprints/k8s-ves-mapper.yaml``
 
 *d. Create the Deployment*
 
 After VES-Mapper's validated blueprint is uploaded, create the Cloudify Deployment with the following command
@@ -54,10 +65,25 @@ After VES-Mapper's validated blueprint is uploaded, create Cloudify Deployment
 
 ``cfy executions start -d ves-mapper install``
 
+To undeploy a running ves-mapper, follow the steps below.
+
+*a. cfy uninstall ves-mapper -f*
+
+.. note::
+   The deployment uninstall will also delete the blueprint. In some cases you might see a 400 error indicating that an active deployment exists, such as the one below.
+
+   Ex: An error occurred on the server: 400: Can't delete deployment ves-mapper - There are running or queued executions for this deployment. Running executions ids: d89fdd0c-8e12-4dfa-ba39-a6187fcf2f18
+
+*b. In that case, cancel the execution ID, then run uninstall as below*
+
+.. code-block:: bash
+
+   cfy executions cancel <Running executions ID>
+   cfy uninstall ves-mapper
+
 **2. To run in standalone mode**
 
 Though this is not the preferred way, the following docker run command can be used to run the VES-Mapper container in standalone mode, using the local configuration file carried in the docker image.
 
-  ``docker run -d nexus3.onap.org:10003/onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:1.0.1``
+  ``docker run -d nexus3.onap.org:10003/onap/org.onap.dcaegen2.services.mapper.vesadapter.universalvesadaptor:1.1.0``
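
Taken together, the deploy and undeploy steps above reduce to the following sequence, to be run inside the DCAE bootstrap pod. A sketch, assuming the blueprint has already been copied to /blueprints; the "cfy deployments create" call is an assumption based on the standard cfy CLI, since that step is elided by the hunk boundary above:

    # Deploy ves-mapper (run inside the DCAE bootstrap pod).
    BLUEPRINT=/blueprints/k8s-ves-mapper.yaml
    cfy blueprints validate "${BLUEPRINT}"
    cfy blueprints upload -b ves-mapper "${BLUEPRINT}"
    cfy deployments create ves-mapper -b ves-mapper
    cfy executions start -d ves-mapper install

    # Undeploy; uninstall also deletes the blueprint. If a 400 error
    # reports running or queued executions, cancel the reported id first.
    if ! cfy uninstall ves-mapper -f; then
      cfy executions cancel d89fdd0c-8e12-4dfa-ba39-a6187fcf2f18  # use the id from the error message
      cfy uninstall ves-mapper
    fi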
diff --git a/docs/sections/tls_enablement.rst b/docs/sections/tls_enablement.rst
index 85ba13d0..4a049039 100644
--- a/docs/sections/tls_enablement.rst
+++ b/docs/sections/tls_enablement.rst
@@ -176,7 +176,7 @@ From k8splugin 3.4.1 when external TLS is enabled (use_external_tls=true), keyst
   * A string (``output_type``) that indicates the certificate output type.
   * A dictionary (``external_certificate_parameters``) with two elements:
     * A string (``common_name``) that indicates the common name which should be present in the certificate. Specific for every blueprint (e.g. dcae-ves-collector for VES).
-    * A string (``sans``) that indicates the list of Subject Alternative Names (SANs) which should be present in the certificate. Delimiter - : Should contain the common_name value and other FQDNs under which the given component is accessible.
+    * A string (``sans``) that indicates the list of Subject Alternative Names (SANs) which should be present in the certificate. Delimiter - , Should contain the common_name value and other FQDNs under which the given component is accessible.
 
 As a final step of the plugin the generated CMPv2 truststore entries will be appended to the AAF CA truststore (see certificate artifacts below).
@@ -191,7 +191,7 @@ From k8splugin 3.4.1 when external TLS is enabled (use_external_tls=true), keyst
       cert_type: "P12"
       external_certificate_parameters:
         common_name: "simpledemo.onap.org"
-        sans: "simpledemo.onap.org;ves.simpledemo.onap.org;ves.onap.org"
+        sans: "simpledemo.onap.org,ves.simpledemo.onap.org,ves.onap.org"
 
 For this example the certificates are mounted into the ``/opt/app/dcae-certificate/external`` directory within the container.
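
To confirm that the CMPv2 certificate was issued with the requested SANs, the mounted artifacts can be inspected from inside the component's container. A sketch, assuming the onap namespace, a dcae-ves-collector pod, and that openssl and a PEM-format copy of the certificate are available in the container (the file name cert.pem is an assumption; check what ls reports):

    # Locate the pod and list the externally issued certificate artifacts.
    POD=$(kubectl -n onap get pods | grep dcae-ves-collector | awk '{print $1}')
    kubectl -n onap exec "${POD}" -- ls /opt/app/dcae-certificate/external

    # Print the SAN extension of the issued certificate (cert.pem is hypothetical).
    kubectl -n onap exec "${POD}" -- \
      openssl x509 -in /opt/app/dcae-certificate/external/cert.pem -noout -text \
      | grep -A1 "Subject Alternative Name"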