Diffstat (limited to 'docs/development/devtools')
-rw-r--r--docs/development/devtools/apex-s3p.rst2
-rw-r--r--docs/development/devtools/clamp-cl-participant-protocol-smoke.rst144
-rw-r--r--docs/development/devtools/clamp-dcae.rst115
-rw-r--r--docs/development/devtools/clamp-policy.rst124
-rw-r--r--docs/development/devtools/clamp-s3p.rst70
-rw-r--r--docs/development/devtools/clamp-smoke.rst357
-rw-r--r--docs/development/devtools/db-migrator-smoke.rst413
-rw-r--r--docs/development/devtools/devtools.rst18
-rw-r--r--docs/development/devtools/images/cl-commission.pngbin0 -> 161307 bytes
-rw-r--r--docs/development/devtools/images/cl-create.pngbin0 -> 226752 bytes
-rw-r--r--docs/development/devtools/images/cl-instantiation.pngbin0 -> 230788 bytes
-rw-r--r--docs/development/devtools/images/cl-passive.pngbin0 -> 206486 bytes
-rw-r--r--docs/development/devtools/images/cl-running-state.pngbin0 -> 226765 bytes
-rw-r--r--docs/development/devtools/images/cl-running.pngbin0 -> 206577 bytes
-rw-r--r--docs/development/devtools/images/cl-uninitialise.pngbin0 -> 206284 bytes
-rw-r--r--docs/development/devtools/images/cl-uninitialised-state.pngbin0 -> 227934 bytes
-rw-r--r--docs/development/devtools/images/create-instance.pngbin0 -> 209643 bytes
-rw-r--r--docs/development/devtools/images/update-instance.pngbin0 -> 129767 bytes
-rw-r--r--docs/development/devtools/tosca/pairwise-testing.yml996
19 files changed, 2199 insertions, 40 deletions
diff --git a/docs/development/devtools/apex-s3p.rst b/docs/development/devtools/apex-s3p.rst
index bfed24e0..ce61e55e 100644
--- a/docs/development/devtools/apex-s3p.rst
+++ b/docs/development/devtools/apex-s3p.rst
@@ -102,7 +102,7 @@ The following steps can be used to configure the parameters of test plan.
wait Wait time if required after a request (in milliseconds)
threads Number of threads to run test cases in parallel
threadsTimeOutInMs Synchronization timer for threads running in parallel (in milliseconds)
-=================== ================================================================================
+=================== ===============================================================================
Run Test
--------
diff --git a/docs/development/devtools/clamp-cl-participant-protocol-smoke.rst b/docs/development/devtools/clamp-cl-participant-protocol-smoke.rst
new file mode 100644
index 00000000..98d7fcda
--- /dev/null
+++ b/docs/development/devtools/clamp-cl-participant-protocol-smoke.rst
@@ -0,0 +1,144 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+.. _clamp-gui-controlloop-smoke-tests:
+
+CLAMP Participant Protocol Smoke Tests
+--------------------------------------
+1. Introduction
+***************
+The CLAMP Control Loop Participant protocol is an asynchronous protocol that is used by the CLAMP runtime
+to coordinate life cycle management of Control Loop instances.
+This document serves as a guide for doing smoke tests on the different use cases involved in
+working with the Participant protocol, and outlines how they operate.
+It will also show a developer how to set up their environment for carrying out smoke tests on the participants.
+
+2. Setup Guide
+**************
+This section shows the developer how to set up their environment to start testing participants, with some instructions on how to carry out the tests. There are a number of prerequisites. Note that this guide is written by a Linux user, although the majority of the steps shown will be exactly the same on Windows or other systems.
+
+2.1 Prerequisites
+=================
+- Java 11
+- Docker
+- Maven 3
+- Git
+- Refer to this guide for basic environment setup `Setting up dev environment <https://wiki.onap.org/display/DW/Setting+Up+Your+Development+Environment>`_
+
+2.2 Setting up the components
+=============================
+- Controlloop runtime component docker image is started and running.
+- Participant docker images policy-clamp-cl-pf-ppnt, policy-clamp-cl-http-ppnt, policy-clamp-cl-k8s-ppnt are started and running.
+- Dmaap simulator for communication between components.
+- mariadb docker container for policy and controlloop database.
+- policy-api for communication between policy participant and policy-framework
+In this setup guide, we will be setting up all the components required for a convenient working dev environment. We will not be setting up all of the participants; we will set up only the policy participant as an example.
+
+2.2.1 MariaDB Setup
+===================
+We will be using Docker to run our mariadb instance. It will have a total of two databases running in it.
+- controlloop: the runtime-controlloop db
+- policyadmin: the policy-api db
+
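A minimal sketch of starting such an instance with Docker is shown below. The image tag is the one used elsewhere in this guide; the root password and init-script path are illustrative assumptions. The command is composed into a variable first so it can be inspected or adapted before running.

```shell
#!/bin/sh
# Sketch: run a local MariaDB container hosting the controlloop and
# policyadmin databases. Root password and init-script path are
# illustrative assumptions.
INIT_SQL="$HOME/git/mariadb.sql"                 # init script creating both databases
DB_IMAGE="nexus3.onap.org:10001/mariadb:10.5.8"  # image tag used elsewhere in this guide

RUN_CMD="docker run -d --name mariadb -p 3306:3306 \
 -e MYSQL_ROOT_PASSWORD=policy \
 -v ${INIT_SQL}:/docker-entrypoint-initdb.d/mariadb.sql ${DB_IMAGE}"

echo "$RUN_CMD"
# eval "$RUN_CMD"   # uncomment to actually start the container
```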
+3. Running Tests of protocol dialogues
+**************************************
+In this section, we will run through the functionalities mentioned in section 1 at the start of this document. Each functionality will be tested and we will confirm that it was carried out successfully. There is a tosca service template that can be used for this test:
+:download:`Tosca Service Template <tosca/tosca-for-gui-smoke-tests.yaml>`
+
+3.1 Participant Registration
+============================
+Action: Bring up the participant
+Test result:
+- Observe PARTICIPANT_REGISTER going from participant to runtime
+- Observe PARTICIPANT_REGISTER_ACK going from runtime to participant
+- Observe PARTICIPANT_UPDATE going from runtime to participant
+
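The registration dialogue can be observed by polling the message router simulator for the participant topic. A hedged sketch, assuming the topic name POLICY-CLRUNTIME-PARTICIPANT and the simulator's 3905 port listed later in this guide; both are assumptions to adjust for your environment.

```shell
#!/bin/sh
# Sketch: poll the DMaaP simulator for participant protocol messages.
# Topic name and port are assumptions; adjust for your environment.
DMAAP_HOST=localhost
DMAAP_PORT=3905
TOPIC=POLICY-CLRUNTIME-PARTICIPANT
POLL_URL="http://${DMAAP_HOST}:${DMAAP_PORT}/events/${TOPIC}/cg1/c1"

echo "$POLL_URL"
# curl -s "$POLL_URL"   # each poll returns a JSON array of protocol messages
```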
+3.2 Participant Deregistration
+==============================
+Action: Bring down the participant
+Test result:
+- Observe PARTICIPANT_DEREGISTER going from participant to runtime
+- Observe PARTICIPANT_DEREGISTER_ACK going from runtime to participant
+
+3.3 Participant Priming
+=======================
+When a control loop is primed, the portion of the Control Loop Type Definition and Common Property values for the participants
+of each participant type mentioned in the Control Loop Definition are sent to the participants.
+Action: Invoke a REST API to prime controlloop type definitions and set values of common properties
+Test result:
+- Observe PARTICIPANT_UPDATE going from runtime to participant with controlloop type definitions and common property values for participant types
+- Observe that the controlloop type definitions and common property values for participant types are stored on ParticipantHandler
+- Observe PARTICIPANT_UPDATE_ACK going from participant to runtime
+
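A hedged sketch of the priming REST call, assuming the runtime's 6969 port from this guide; the endpoint path and credentials are assumptions and should be checked against the runtime's API documentation.

```shell
#!/bin/sh
# Sketch: prime controlloop type definitions via the runtime REST API.
# The endpoint path and credentials are assumptions.
RUNTIME="http://localhost:6969"
PRIME_URL="${RUNTIME}/onap/controlloop/v2/commission"

echo "$PRIME_URL"
# curl -s -u 'user:pass' -X POST \
#      -H 'Content-Type: application/yaml' \
#      --data-binary @tosca-service-template.yaml "$PRIME_URL"
```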
+3.4 Participant DePriming
+=========================
+When a control loop is de-primed, the portion of the Control Loop Type Definition and Common Property values for the participants
+of each participant type mentioned in the Control Loop Definition are deleted on participants.
+Action: Invoke a REST API to deprime controlloop type definitions
+Test result:
+- If controlloop instances exist in the runtime database, the REST API returns an error response saying "Cannot decommission controlloop type definition"
+- If no controlloop instances exist in the runtime database, observe PARTICIPANT_UPDATE going from runtime to participant with definitions as null
+- Observe that the controlloop type definitions and common property values for participant types are removed on ParticipantHandler
+- Observe PARTICIPANT_UPDATE_ACK going from participant to runtime
+
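Depriming can be sketched the same way, as a DELETE against the same hypothetical commissioning endpoint; the path is an assumption, as above.

```shell
#!/bin/sh
# Sketch: deprime (decommission) controlloop type definitions.
# The endpoint path and credentials are assumptions.
RUNTIME="http://localhost:6969"
DEPRIME_URL="${RUNTIME}/onap/controlloop/v2/commission"

echo "$DEPRIME_URL"
# curl -s -u 'user:pass' -X DELETE "$DEPRIME_URL"
```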
+3.5 Control Loop Update
+=======================
+Control Loop Update handles creation, change, and deletion of control loops on participants.
+Action: Trigger controlloop instantiation from GUI
+Test result:
+- Observe CONTROL_LOOP_UPDATE going from runtime to participant
+- Observe that the controlloop type instances and respective property values for participant types are stored on ControlLoopHandler
+- Observe that the controlloop state is UNINITIALISED
+- Observe CONTROL_LOOP_UPDATE_ACK going from participant to runtime
+
+3.6 Control Loop state change to PASSIVE
+========================================
+Control Loop State Change handles state changes of control loops on participants.
+Action: Change state of the controlloop to PASSIVE
+Test result:
+- Observe CONTROL_LOOP_STATE_CHANGE going from runtime to participant
+- Observe that the ControlLoopElements state is PASSIVE
+- Observe that the controlloop state is PASSIVE
+- Observe CONTROL_LOOP_STATE_CHANGE_ACK going from participant to runtime
+
+3.7 Control Loop state change to RUNNING
+========================================
+Control Loop State Change handles state changes of control loops on participants.
+Action: Change state of the controlloop to RUNNING
+Test result:
+- Observe CONTROL_LOOP_STATE_CHANGE going from runtime to participant
+- Observe that the ControlLoopElements state is RUNNING
+- Observe that the controlloop state is RUNNING
+- Observe CONTROL_LOOP_STATE_CHANGE_ACK going from participant to runtime
+
+3.8 Control Loop state change to PASSIVE
+========================================
+Control Loop State Change handles state changes of control loops on participants.
+Action: Change state of the controlloop to PASSIVE
+Test result:
+- Observe CONTROL_LOOP_STATE_CHANGE going from runtime to participant
+- Observe that the ControlLoopElements state is PASSIVE
+- Observe that the controlloop state is PASSIVE
+- Observe CONTROL_LOOP_STATE_CHANGE_ACK going from participant to runtime
+
+3.9 Control Loop state change to UNINITIALISED
+==============================================
+Control Loop State Change handles state changes of control loops on participants.
+Action: Change state of the controlloop to UNINITIALISED
+Test result:
+- Observe CONTROL_LOOP_STATE_CHANGE going from runtime to participant
+- Observe that the ControlLoopElements state is UNINITIALISED
+- Observe that the controlloop state is UNINITIALISED
+- Observe that the ControlLoopElements undeploy the instances from respective frameworks
+- Observe that the control loop instances are removed from participants
+- Observe CONTROL_LOOP_STATE_CHANGE_ACK going from participant to runtime
+
+3.10 Control Loop monitoring and reporting
+==========================================
+This dialogue is used as a heartbeat mechanism for participants, to monitor the status of Control Loop Elements, and to gather statistics on control loops. The ParticipantStatus message is sent periodically by each participant. The reporting interval for sending the message is configurable.
+Action: Bring up participant
+Test result:
+- Observe that the PARTICIPANT_STATUS message is sent from participants to runtime at regular intervals
+- Trigger a PARTICIPANT_STATUS_REQ from runtime and observe a PARTICIPANT_STATUS message, with tosca definitions of control loop type definitions, sent from all the participants to runtime
+
+This concludes the required smoke tests.
+
diff --git a/docs/development/devtools/clamp-dcae.rst b/docs/development/devtools/clamp-dcae.rst
new file mode 100644
index 00000000..c0cd41bf
--- /dev/null
+++ b/docs/development/devtools/clamp-dcae.rst
@@ -0,0 +1,115 @@
+.. This work is licensed under a
+.. Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _clamp-pairwise-testing-label:
+
+.. toctree::
+ :maxdepth: 2
+
+CLAMP <-> DCAE
+~~~~~~~~~~~~~~
+
+The pairwise testing is executed against a default ONAP installation in the OOM.
+CLAMP-Control loop interacts with DCAE to deploy dcaegen2 services like PMSH.
+This test verifies that the interaction between DCAE and controlloop works as expected.
+
+General Setup
+*************
+
+The kubernetes installation allocated all policy components across multiple worker node VMs.
+The worker VM hosting the policy components has the following spec:
+
+- 16GB RAM
+- 8 VCPU
+- 160GB Ephemeral Disk
+
+
+The ONAP components used during the pairwise tests are:
+
+- CLAMP control loop runtime, policy participant, kubernetes participant.
+- DCAE for running dcaegen2-service via kubernetes participant.
+- ChartMuseum service from platform, initialised with DCAE helm charts.
+- DMaaP for the communication between Control loop runtime and participants.
+- Policy Gui for instantiation and commissioning of control loops.
+
+
+ChartMuseum Setup
+*****************
+
+The ChartMuseum helm chart from the platform is deployed in the same cluster. The chart server is then initialized with the helm charts of dcaegen2-services by running the below script from the OOM repo.
+The script accepts the directory path as an argument where the helm charts are located.
+
+.. code-block:: bash
+
+ #!/bin/sh
+ ./oom/kubernetes/contrib/tools/registry-initialize.sh -d /oom/kubernetes/dcaegen2-services/charts/
+
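After initialization, the uploaded charts can be verified through ChartMuseum's standard chart-listing API; the service host and port below are assumptions for an in-cluster deployment.

```shell
#!/bin/sh
# Sketch: list the charts held by ChartMuseum to confirm the DCAE charts
# were uploaded. Host and port are assumptions.
CM_HOST=chart-museum
CM_PORT=80
LIST_URL="http://${CM_HOST}:${CM_PORT}/api/charts"

echo "$LIST_URL"
# curl -s "$LIST_URL" | grep -i pmsh   # the PMSH chart should appear
```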
+Testing procedure
+*****************
+
+The test set focused on the following use cases:
+
+- Deployment and Configuration of DCAE microservice PMSH
+- Undeployment of PMSH
+
+Creation of the Control Loop:
+-----------------------------
+A Control Loop is created by commissioning a Tosca template with Control loop definitions and instantiating the Control Loop with the state "UNINITIALISED".
+
+- Upload a TOSCA template from the POLICY GUI. The definitions include a kubernetes participant and control loop elements that deploy and configure a microservice in the kubernetes cluster.
+  The control loop element for the kubernetes participant includes the helm chart information of the DCAE microservice, and the element for the Http Participant includes the configuration entity for the microservice.
+ :download:`Sample Tosca template <tosca/pairwise-testing.yml>`
+
+ .. image:: images/cl-commission.png
+
+ Verification: The template is commissioned successfully without errors.
+
+- Instantiate the commissioned Control loop definitions from the Policy Gui under 'Instantiation Management'.
+
+ .. image:: images/create-instance.png
+
+ Update instance properties of the Control Loop Elements if required.
+
+ .. image:: images/update-instance.png
+
+ Verification: The control loop is created with default state "UNINITIALISED" without errors.
+
+ .. image:: images/cl-instantiation.png
+
+
+Deployment and Configuration of DCAE microservice (PMSH):
+---------------------------------------------------------
+The Control Loop state is changed from "UNINITIALISED" to "PASSIVE" from the Policy Gui. The kubernetes participant deploys the PMSH helm chart from the DCAE chartMuseum server.
+
+.. image:: images/cl-passive.png
+
+Verification:
+
+- DCAE service PMSH is deployed into the kubernetes cluster. PMSH pods are in RUNNING state.
+ `helm ls -n <namespace>` - The helm deployment of the dcaegen2 service PMSH is listed.
+ `kubectl get pod -n <namespace>` - The PMSH pods are deployed, up and running.
+
+- The subscription configuration for PMSH microservice from the TOSCA definitions are updated in the Consul server. The configuration can be verified on the Consul server UI `http://<CONSUL-SERVER_IP>/ui/#/dc1/kv/`
+
+- The overall state of the Control Loop is changed to "PASSIVE" in the Policy Gui.
+
+.. image:: images/cl-create.png
+
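The checks above can be collected into one small script; the namespace is a placeholder to replace with the one used by the kubernetes participant.

```shell
#!/bin/sh
# Collected verification commands from this section. The namespace is a
# placeholder; replace it before running.
NS="onap"
HELM_CHECK="helm ls -n ${NS}"
POD_CHECK="kubectl get pod -n ${NS}"

echo "$HELM_CHECK"   # the dcaegen2 service PMSH release should be listed
echo "$POD_CHECK"    # the PMSH pods should be up and Running
# eval "$HELM_CHECK"; eval "$POD_CHECK"   # uncomment to run the checks
```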
+
+Undeployment of DCAE microservice (PMSH):
+-----------------------------------------
+The Control Loop state is changed from "PASSIVE" to "UNINITIALISED" from the Policy Gui.
+
+.. image:: images/cl-uninitialise.png
+
+Verification:
+
+- The kubernetes participant uninstalls the DCAE PMSH helm chart from the kubernetes cluster. The pods are removed from the cluster.
+
+- The overall state of the Control Loop is changed to "UNINITIALISED" in the Policy Gui.
+
+.. image:: images/cl-uninitialised-state.png
+
+
+
diff --git a/docs/development/devtools/clamp-policy.rst b/docs/development/devtools/clamp-policy.rst
new file mode 100644
index 00000000..72a9a1b1
--- /dev/null
+++ b/docs/development/devtools/clamp-policy.rst
@@ -0,0 +1,124 @@
+.. This work is licensed under a
+.. Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _clamp-policy-pairwise-testing-label:
+
+.. toctree::
+ :maxdepth: 2
+
+CLAMP <-> Policy Core
+~~~~~~~~~~~~~~~~~~~~~
+
+The pairwise testing is executed against a default ONAP installation in the OOM.
+CLAMP-Control loop interacts with Policy framework to create and deploy policies.
+This test verifies that the interaction between policy and controlloop works as expected.
+
+General Setup
+*************
+
+The kubernetes installation allocated all policy components across multiple worker node VMs.
+The worker VM hosting the policy components has the following spec:
+
+- 16GB RAM
+- 8 VCPU
+- 160GB Ephemeral Disk
+
+
+The ONAP components used during the pairwise tests are:
+
+- CLAMP control loop runtime, policy participant, kubernetes participant.
+- DMaaP for the communication between Control loop runtime and participants.
+- Policy API to create (and delete at the end of the tests) policies for each
+ scenario under test.
+- Policy PAP to deploy (and undeploy at the end of the tests) policies for each scenario under test.
+- Policy Gui for instantiation and commissioning of control loops.
+
+
+Testing procedure
+*****************
+
+The test set focused on the following use cases:
+
+- Creation/Deletion of policies
+- Deployment/Undeployment of policies
+
+Creation of the Control Loop:
+-----------------------------
+A Control Loop is created by commissioning a Tosca template with Control loop definitions and instantiating the Control Loop with the state "UNINITIALISED".
+
+- Upload a TOSCA template from the POLICY GUI. The definitions include a policy participant and a control loop element that creates and deploys the required policies. :download:`Sample Tosca template <tosca/pairwise-testing.yml>`
+
+ .. image:: images/cl-commission.png
+
+ Verification: The template is commissioned successfully without errors.
+
+- Instantiate the commissioned Control loop from the Policy Gui under 'Instantiation Management'.
+
+ .. image:: images/create-instance.png
+
+ Update instance properties of the Control Loop Elements if required.
+
+ .. image:: images/update-instance.png
+
+ Verification: The control loop is created with default state "UNINITIALISED" without errors.
+
+ .. image:: images/cl-instantiation.png
+
+
+Creation of policies:
+---------------------
+The Control Loop state is changed from "UNINITIALISED" to "PASSIVE" from the Policy Gui. Verify the POLICY API endpoint for the creation of policy types that are defined in the TOSCA template.
+
+.. image:: images/cl-passive.png
+
+Verification:
+
+- The policy types defined in the tosca template are created by the policy participant and listed in the policy Api.
+ Policy Api endpoint: `<https://<POLICY-API-IP>/policy/api/v1/policytypes>`
+
+- The overall state of the Control Loop is changed to "PASSIVE" in the Policy Gui.
+
+.. image:: images/cl-create.png
+
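The Policy Api check above can be scripted; the endpoint path is the one quoted in this section, while the host and credentials are assumptions.

```shell
#!/bin/sh
# Sketch: list policy types on the Policy Api (endpoint quoted above).
# Host and credentials are assumptions.
POLICY_API="https://localhost"
API_URL="${POLICY_API}/policy/api/v1/policytypes"

echo "$API_URL"
# curl -sk -u 'user:pass' "$API_URL"
```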
+
+Deployment of policies:
+-----------------------
+The Control Loop state is changed from "PASSIVE" to "RUNNING" from the Policy Gui.
+
+.. image:: images/cl-running.png
+
+Verification:
+
+- The policy participant deploys the policies of Tosca Control loop elements in Policy PAP for all the pdp groups.
+ Policy PAP endpoint: `<https://<POLICY-PAP-IP>/policy/pap/v1/pdps>`
+
+- The overall state of the Control Loop is changed to "RUNNING" in the Policy Gui.
+
+.. image:: images/cl-running-state.png
+
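The Policy PAP check above can be scripted the same way; the endpoint path is the one quoted in this section, while the host and credentials are assumptions.

```shell
#!/bin/sh
# Sketch: list pdp groups on Policy PAP (endpoint quoted above) to
# confirm the deployed policies. Host and credentials are assumptions.
POLICY_PAP="https://localhost"
PAP_URL="${POLICY_PAP}/policy/pap/v1/pdps"

echo "$PAP_URL"
# curl -sk -u 'user:pass' "$PAP_URL"
```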
+Deletion of Policies:
+---------------------
+The Control Loop state is changed from "RUNNING" to "PASSIVE" from the Policy Gui.
+
+Verification:
+
+- The policy participant deletes the created policy types which can be verified on the Policy Api. The policy types created as part of the control loop should not be listed on the Policy Api.
+ Policy Api endpoint: `<https://<POLICY-API-IP>/policy/api/v1/policytypes>`
+
+- The overall state of the Control Loop is changed to "PASSIVE" in the Policy Gui.
+
+.. image:: images/cl-create.png
+
+Undeployment of policies:
+-------------------------
+The Control Loop state is changed from "PASSIVE" to "UNINITIALISED" from the Policy Gui.
+
+Verification:
+
+- The policy participant undeploys the policies of the control loop element from the pdp groups. The policies deployed as part of the control loop should not be listed on the Policy PAP.
+ Policy PAP endpoint: `<https://<POLICY-PAP-IP>/policy/pap/v1/pdps>`
+
+- The overall state of the Control Loop is changed to "UNINITIALISED" in the Policy Gui.
+
+.. image:: images/cl-uninitialised-state.png
diff --git a/docs/development/devtools/clamp-s3p.rst b/docs/development/devtools/clamp-s3p.rst
index e01848da..08f0953c 100644
--- a/docs/development/devtools/clamp-s3p.rst
+++ b/docs/development/devtools/clamp-s3p.rst
@@ -48,14 +48,14 @@ The following steps can be used to configure the parameters of test plan.
- **HTTP Header Manager** - used to store headers which will be used for making HTTP requests.
- **User Defined Variables** - used to store following user defined parameters.
-=========== ===================================================================
- **Name** **Description**
-=========== ===================================================================
- RUNTIME_HOST IP Address or host name of controlloop runtime component
- RUNTIME_PORT Port number of controlloop runtime components for making REST API calls
- POLICY_PARTICIPANT_HOST IP Address or host name of policy participant
- POLICY_PARTICIPANT_HOST_PORT Port number of policy participant
-=========== ===================================================================
+============================= ========================================================================
+ **Name** **Description**
+============================= ========================================================================
+ RUNTIME_HOST IP Address or host name of controlloop runtime component
+ RUNTIME_PORT Port number of controlloop runtime components for making REST API calls
+ POLICY_PARTICIPANT_HOST IP Address or host name of policy participant
+ POLICY_PARTICIPANT_HOST_PORT Port number of policy participant
+============================= ========================================================================
The test was run in the background via "nohup", to prevent it from being interrupted:
@@ -88,17 +88,17 @@ Stability test plan was triggered for 72 hours.
**Controloop component Setup**
-================ ======================= ================== ==========================
-**CONTAINER ID** **IMAGE** **PORTS** **NAMES**
-================ ======================= ================== ================================== ==========================
- a9cb0cd103cf onap/policy-clamp-cl-runtime:latest 6969/tcp policy-clamp-cl-runtime
- 886e572b8438 onap/policy-clamp-cl-pf-ppnt:latest 6973/tcp policy-clamp-cl-pf-ppnt
- 035707b1b95f nexus3.onap.org:10001/onap/policy-api:latest 6969/tcp policy-api
- d34204f95ff3 onap/policy-clamp-cl-http-ppnt:latest 6971/tcp policy-clamp-cl-http-ppnt
- 4470e608c9a8 onap/policy-clamp-cl-k8s-ppnt:latest 6972/tcp, 8083/tcp policy-clamp-cl-k8s-ppnt
- 62229d46b79c nexus3.onap.org:10001/onap/policy-models-simulator:latest 3905/tcp, 6666/tcp, 6668-6670/tcp, 6680/tcp simulator
- efaf0ca5e1f0 nexus3.onap.org:10001/mariadb:10.5.8 3306/tcp mariadb
-======================= ================= ================== ====================================== ===========================
+================ ========================================================= =========================================== =========================
+**CONTAINER ID** **IMAGE** **PORTS** **NAMES**
+================ ========================================================= =========================================== =========================
+ a9cb0cd103cf onap/policy-clamp-cl-runtime:latest 6969/tcp policy-clamp-cl-runtime
+ 886e572b8438 onap/policy-clamp-cl-pf-ppnt:latest 6973/tcp policy-clamp-cl-pf-ppnt
+ 035707b1b95f nexus3.onap.org:10001/onap/policy-api:latest 6969/tcp policy-api
+ d34204f95ff3 onap/policy-clamp-cl-http-ppnt:latest 6971/tcp policy-clamp-cl-http-ppnt
+ 4470e608c9a8 onap/policy-clamp-cl-k8s-ppnt:latest 6972/tcp, 8083/tcp policy-clamp-cl-k8s-ppnt
+ 62229d46b79c nexus3.onap.org:10001/onap/policy-models-simulator:latest 3905/tcp, 6666/tcp, 6668-6670/tcp, 6680/tcp simulator
+ efaf0ca5e1f0 nexus3.onap.org:10001/mariadb:10.5.8 3306/tcp mariadb
+================ ========================================================= =========================================== =========================
.. Note::
@@ -108,11 +108,11 @@ Stability test plan was triggered for 72 hours.
**JMeter Screenshot**
-.. image:: clamp-s3p-results/controlloop_stability_jmeter.PNG
+.. image:: clamp-s3p-results/controlloop_stability_jmeter.png
**JMeter Screenshot**
-.. image:: clamp-s3p-results/controlloop_stability_table.PNG
+.. image:: clamp-s3p-results/controlloop_stability_table.png
**Memory and CPU usage**
@@ -120,11 +120,11 @@ The memory and CPU usage can be monitored by running "docker stats" command. A s
Memory and CPU usage before test execution:
-.. image:: clamp-s3p-results/Stability_before_stats.PNG
+.. image:: clamp-s3p-results/Stability_before_stats.png
Memory and CPU usage after test execution:
-.. image:: clamp-s3p-results/Stability_after_stats.PNG
+.. image:: clamp-s3p-results/Stability_after_stats.png
Performance Test of Controlloop components
@@ -180,18 +180,18 @@ Test results are shown as below.
**Controloop component Setup**
-================ ======================= ================== ==========================
-**CONTAINER ID** **IMAGE** **PORTS** **NAMES**
-================ ======================= ================== ================================== ==========================
- a9cb0cd103cf onap/policy-clamp-cl-runtime:latest 6969/tcp policy-clamp-cl-runtime
- 886e572b8438 onap/policy-clamp-cl-pf-ppnt:latest 6973/tcp policy-clamp-cl-pf-ppnt
- 035707b1b95f nexus3.onap.org:10001/onap/policy-api:latest 6969/tcp policy-api
- d34204f95ff3 onap/policy-clamp-cl-http-ppnt:latest 6971/tcp policy-clamp-cl-http-ppnt
- 4470e608c9a8 onap/policy-clamp-cl-k8s-ppnt:latest 6972/tcp, 8083/tcp policy-clamp-cl-k8s-ppnt
- 62229d46b79c nexus3.onap.org:10001/onap/policy-models-simulator:latest 3905/tcp, 6666/tcp, 6668-6670/tcp, 6680/tcp simulator
- efaf0ca5e1f0 nexus3.onap.org:10001/mariadb:10.5.8 3306/tcp mariadb
-======================= ================= ================== ====================================== ===========================
+================ ========================================================= =========================================== =========================
+**CONTAINER ID** **IMAGE** **PORTS** **NAMES**
+================ ========================================================= =========================================== =========================
+ a9cb0cd103cf onap/policy-clamp-cl-runtime:latest 6969/tcp policy-clamp-cl-runtime
+ 886e572b8438 onap/policy-clamp-cl-pf-ppnt:latest 6973/tcp policy-clamp-cl-pf-ppnt
+ 035707b1b95f nexus3.onap.org:10001/onap/policy-api:latest 6969/tcp policy-api
+ d34204f95ff3 onap/policy-clamp-cl-http-ppnt:latest 6971/tcp policy-clamp-cl-http-ppnt
+ 4470e608c9a8 onap/policy-clamp-cl-k8s-ppnt:latest 6972/tcp, 8083/tcp policy-clamp-cl-k8s-ppnt
+ 62229d46b79c nexus3.onap.org:10001/onap/policy-models-simulator:latest 3905/tcp, 6666/tcp, 6668-6670/tcp, 6680/tcp simulator
+ efaf0ca5e1f0 nexus3.onap.org:10001/mariadb:10.5.8 3306/tcp mariadb
+================ ========================================================= =========================================== =========================
**JMeter Screenshot**
-.. image:: clamp-s3p-results/cl-s3p-performance-result-jmeter.PNG
+.. image:: clamp-s3p-results/cl-s3p-performance-result-jmeter.png
diff --git a/docs/development/devtools/clamp-smoke.rst b/docs/development/devtools/clamp-smoke.rst
new file mode 100644
index 00000000..06ec6db7
--- /dev/null
+++ b/docs/development/devtools/clamp-smoke.rst
@@ -0,0 +1,357 @@
+.. This work is licensed under a
+.. Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _policy-development-tools-label:
+
+CLAMP control loop runtime Smoke Tests
+######################################
+
+.. contents::
+ :depth: 3
+
+
+This article explains how to build the CLAMP control loop runtime for development purposes and how to run smoke tests for the control loop runtime. To start, the developer should consult the latest ONAP Wiki to familiarize themselves with developer best practices and how-tos to set up their environment, see `Developer Best Practices <https://wiki.onap.org/display/DW/Developer+Best+Practices>`_.
+
+
+This article assumes that:
+
+* You are using a *\*nix* operating system such as linux or macOS.
+* You are using a directory called *git* off your home directory *(~/git)* for your git repositories
+* Your local maven repository is in the location *~/.m2/repository*
+* You have copied the settings.xml from oparent to *~/.m2/* directory
+* You have added settings to access the ONAP Nexus to your M2 configuration, see `Maven Settings Example <https://wiki.onap.org/display/DW/Setting+Up+Your+Development+Environment>`_ (bottom of the linked page)
+
+The procedure documented in this article has been verified using an Ubuntu 20.04 LTS VM.
+
+Cloning CLAMP control loop runtime and all dependencies
+*******************************************************
+
+Run a script such as the script below to clone the required modules from the `ONAP git repository <https://gerrit.onap.org/r/#/admin/projects/?filter=policy>`_. This script clones CLAMP control loop runtime and all dependencies.
+
+ONAP Policy Framework has dependencies to the ONAP Parent *oparent* module, the ONAP ECOMP SDK *ecompsdkos* module, and the A&AI Schema module.
+
+
+.. code-block:: bash
+ :caption: Typical ONAP Policy Framework Clone Script
+ :linenos:
+
+ #!/usr/bin/env bash
+
+ ## script name for output
+ MOD_SCRIPT_NAME=$(basename "$0")
+
+ ## the ONAP clone directory, defaults to "onap"
+ clone_dir="onap"
+
+ ## the ONAP repos to clone
+ onap_repos="\
+ policy/parent \
+ policy/common \
+ policy/models \
+ policy/clamp \
+ policy/docker "
+
+ ##
+ ## Help screen and exit condition (i.e. too few arguments)
+ ##
+ Help()
+ {
+ echo ""
+ echo "$MOD_SCRIPT_NAME - clones all required ONAP git repositories"
+ echo ""
+ echo " Usage: $MOD_SCRIPT_NAME [-options]"
+ echo ""
+ echo " Options"
+ echo " -d - the ONAP clone directory, defaults to 'onap'"
+ echo " -h - this help screen"
+ echo ""
+ exit 255;
+ }
+
+ ##
+ ## read command line
+ ##
+ while [ $# -gt 0 ]
+ do
+ case $1 in
+ #-d ONAP clone directory
+ -d)
+ shift
+ if [ -z "$1" ]; then
+ echo "$MOD_SCRIPT_NAME: no clone directory"
+ exit 1
+ fi
+ clone_dir=$1
+ shift
+ ;;
+
+ #-h prints help and exists
+ -h)
+ Help;exit 0;;
+
+ *) echo "$MOD_SCRIPT_NAME: undefined CLI option - $1"; exit 255;;
+ esac
+ done
+
+ if [ -f "$clone_dir" ]; then
+ echo "$MOD_SCRIPT_NAME: requested clone directory '$clone_dir' exists as file"
+ exit 2
+ fi
+ if [ -d "$clone_dir" ]; then
+ echo "$MOD_SCRIPT_NAME: requested clone directory '$clone_dir' exists as directory"
+ exit 2
+ fi
+
+ mkdir $clone_dir
+ if [ $? != 0 ]
+ then
+ echo cannot clone ONAP repositories, could not create directory '"'$clone_dir'"'
+ exit 3
+ fi
+
+ for repo in $onap_repos
+ do
+ repoDir=`dirname "$repo"`
+ repoName=`basename "$repo"`
+
+ if [ ! -z "$repoDir" ]
+ then
+ mkdir -p "$clone_dir/$repoDir"
+ if [ $? != 0 ]
+ then
+ echo cannot clone ONAP repositories, could not create directory '"'$clone_dir/$repoDir'"'
+ exit 4
+ fi
+ fi
+
+ git clone https://gerrit.onap.org/r/${repo} $clone_dir/$repo
+ done
+
+ echo ONAP has been cloned into '"'$clone_dir'"'
+
+
+Execution of the script above results in the following directory hierarchy in your *~/git* directory:
+
+ * ~/git/onap
+ * ~/git/onap/policy
+ * ~/git/onap/policy/parent
+ * ~/git/onap/policy/common
+ * ~/git/onap/policy/models
+ * ~/git/onap/policy/clamp
+ * ~/git/onap/policy/docker
+
+
+Building CLAMP control loop runtime and all dependencies
+********************************************************
+
+**Step 1:** Optionally, for a completely clean build, remove the ONAP built modules from your local repository.
+
+ .. code-block:: bash
+
+ rm -fr ~/.m2/repository/org/onap
+
+
+**Step 2:** A pom such as the one below can be used to build the ONAP Policy Framework modules. Create the *pom.xml* file in the directory *~/git/onap/policy*.
+
+.. code-block:: xml
+ :caption: Typical pom.xml to build the ONAP Policy Framework
+ :linenos:
+
+ <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+ <groupId>org.onap</groupId>
+ <artifactId>onap-policy</artifactId>
+ <version>1.0.0-SNAPSHOT</version>
+ <packaging>pom</packaging>
+ <name>${project.artifactId}</name>
+ <inceptionYear>2017</inceptionYear>
+ <organization>
+ <name>ONAP</name>
+ </organization>
+
+ <modules>
+ <module>parent</module>
+ <module>common</module>
+ <module>models</module>
+ <module>clamp</module>
+ </modules>
+ </project>
+
+
+**Step 3:** You can now build the Policy framework.
+
+Java artifacts only:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy
+ mvn -pl '!org.onap.policy.clamp:policy-clamp-runtime' install
+
+With docker images:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/clamp/packages/
+ mvn clean install -P docker
+
+Running MariaDB and DMaaP Simulator
+***********************************
+
+Running a MariaDB Instance
+++++++++++++++++++++++++++
+
+Assuming you have successfully built the codebase using the instructions above, there are two prerequisites for running the CLAMP control loop runtime component. The first is a
+running MariaDB database instance. The easiest way to provide one is to run the docker image locally.
+
+An SQL script such as the one below can be used to initialize the database. Create the *mariadb.sql* file in the directory *~/git*.
+
+ .. code-block:: SQL
+
+ create database controlloop;
+ CREATE USER 'policy'@'%' IDENTIFIED BY 'P01icY';
+ GRANT ALL PRIVILEGES ON controlloop.* TO 'policy'@'%';
+
+
+Execution of the command below results in the creation and start of the *mariadb-smoke-test* container.
+
+ .. code-block:: bash
+
+ cd ~/git
+ docker run --name mariadb-smoke-test \
+ -p 3306:3306 \
+ -e MYSQL_ROOT_PASSWORD=my-secret-pw \
+ --mount type=bind,source=~/git/mariadb.sql,target=/docker-entrypoint-initdb.d/data.sql \
+ mariadb:10.5.8
+
+
+Running the DMaaP Simulator during Development
+++++++++++++++++++++++++++++++++++++++++++++++
+The second prerequisite for running the CLAMP control loop runtime component is the DMaaP simulator, which you can run from the command line using Maven.
+
+
+Change the local configuration file *src/test/resources/simParameters.json* to contain the following:
+
+.. code-block:: json
+
+ {
+ "dmaapProvider": {
+ "name": "DMaaP simulator",
+ "topicSweepSec": 900
+ },
+ "restServers": [
+ {
+ "name": "DMaaP simulator",
+ "providerClass": "org.onap.policy.models.sim.dmaap.rest.DmaapSimRestControllerV1",
+ "host": "localhost",
+ "port": 3904,
+ "https": false
+ }
+ ]
+ }
+
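+Before launching the simulator, you can confirm that the edited file parses as JSON. The helper below is a sketch; it only assumes *python3* is on your path.
+
+.. code-block:: bash
+
+    validate_json() {
+        # Report whether the given file is well-formed JSON.
+        if python3 -m json.tool "$1" > /dev/null 2>&1; then
+            echo "$1: valid JSON"
+        else
+            echo "$1: INVALID JSON"
+        fi
+    }
+
+    # Usage: validate_json src/test/resources/simParameters.json
+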
+Run the following commands:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/models/models-sim/policy-models-simulators
+ mvn exec:java -Dexec.mainClass=org.onap.policy.models.simulators.Main -Dexec.args="src/test/resources/simParameters.json"
+
+
+Developing and Debugging CLAMP control loop runtime
+***************************************************
+
+Running on the Command Line using Maven
++++++++++++++++++++++++++++++++++++++++
+
+Once MariaDB and the DMaaP simulator are up and running, run the following commands:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/clamp/runtime-controlloop
+ mvn spring-boot:run
+
+
+Running on the Command Line
++++++++++++++++++++++++++++
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/clamp/runtime-controlloop
+ java -jar target/policy-clamp-runtime-controlloop-6.1.3-SNAPSHOT.jar
+
+
+Running in Eclipse
+++++++++++++++++++
+
+1. Check out the policy models repository
+2. Go to the *policy-clamp-runtime-controlloop* module in the clamp repo
+3. Specify a run configuration using the class *org.onap.policy.clamp.controlloop.runtime.Application* as the main class
+4. Run the configuration
+
+The Swagger UI of the control loop runtime is available at *http://localhost:6969/onap/controlloop/swagger-ui/*, and the Swagger JSON at *http://localhost:6969/onap/controlloop/v2/api-docs/*.
+
+
+Running one or more participant simulators
+++++++++++++++++++++++++++++++++++++++++++
+
+In *docker/csit/clamp/tests/data* you can find a test case that uses the policy-participant. In order to run that test you can use the participant simulator.
+Copy the file *src/main/resources/config/application.yaml* into *src/test/resources/*, then change *participantId* and *participantType* as shown below:
+
+ .. code-block:: yaml
+
+ participantId:
+ name: org.onap.policy.controlloop.PolicyControlLoopParticipant
+ version: 2.3.1
+ participantType:
+ name: org.onap.PM_Policy
+ version: 1.0.0
+
+Run the following commands:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/clamp/participant/participant-impl/participant-impl-simulator
+ java -jar target/policy-clamp-participant-impl-simulator-6.1.3-SNAPSHOT.jar --spring.config.location=src/test/resources/application.yaml
+
+
+Creating self-signed certificate
+++++++++++++++++++++++++++++++++
+
+An additional requirement for running the CLAMP control loop runtime docker image is an SSL self-signed certificate.
+
+Run the following commands:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/docker/csit/
+ ./gen_truststore.sh
+ ./gen_keystore.sh
+
+Execution of the commands above results in the following additional files in the directory *~/git/onap/policy/docker/csit/config*:
+
+ * ~/git/onap/policy/docker/csit/config/cakey.pem
+ * ~/git/onap/policy/docker/csit/config/careq.pem
+ * ~/git/onap/policy/docker/csit/config/caroot.cer
+ * ~/git/onap/policy/docker/csit/config/ks.cer
+ * ~/git/onap/policy/docker/csit/config/ks.csr
+ * ~/git/onap/policy/docker/csit/config/ks.jks
+
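+The presence of the generated artifacts can be verified with a short check. This is a sketch; the directory argument is the config path listed above.
+
+ .. code-block:: bash
+
+    check_certs() {
+        # Report any of the expected certificate artifacts missing
+        # from the given config directory.
+        for f in cakey.pem careq.pem caroot.cer ks.cer ks.csr ks.jks
+        do
+            [ -f "$1/$f" ] || echo "missing: $1/$f"
+        done
+    }
+
+    # Usage: check_certs ~/git/onap/policy/docker/csit/config
+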
+
+Running the CLAMP control loop runtime docker image
++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Run the following command:
+
+ .. code-block:: bash
+
+ docker run --name runtime-smoke-test \
+ -p 6969:6969 \
+ -e mariadb.host=host.docker.internal \
+ -e topicServer=host.docker.internal \
+ --mount type=bind,source=~/git/onap/policy/docker/csit/config/ks.jks,target=/opt/app/policy/clamp/etc/ssl/policy-keystore \
+ --mount type=bind,source=~/git/onap/policy/clamp/runtime-controlloop/src/main/resources/application.yaml,target=/opt/app/policy/clamp/etc/ClRuntimeParameters.yaml \
+ onap/policy-clamp-cl-runtime
+
+
+The Swagger UI of the control loop runtime is available at *https://localhost:6969/onap/controlloop/swagger-ui/*, and the Swagger JSON at *https://localhost:6969/onap/controlloop/v2/api-docs/*.
diff --git a/docs/development/devtools/db-migrator-smoke.rst b/docs/development/devtools/db-migrator-smoke.rst
new file mode 100644
index 00000000..4aa41e46
--- /dev/null
+++ b/docs/development/devtools/db-migrator-smoke.rst
@@ -0,0 +1,413 @@
+.. This work is licensed under a Creative Commons Attribution
+.. 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+Policy DB Migrator Smoke Tests
+##############################
+
+Prerequisites
+*************
+
+Check the number of files in each release:
+
+.. code::
+ :number-lines:
+
+   ls 0800/upgrade/*.sql | wc -l     # expect 96
+   ls 0900/upgrade/*.sql | wc -l     # expect 13
+   ls 0800/downgrade/*.sql | wc -l   # expect 96
+   ls 0900/downgrade/*.sql | wc -l   # expect 13
+
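+The four counts above can be gathered in one pass with a short script. This is a sketch; run it from the directory that contains the *0800* and *0900* folders.
+
+.. code::
+
+   count_sql() {
+       # Print the number of .sql files for each release/direction pair.
+       for dir in 0800/upgrade 0900/upgrade 0800/downgrade 0900/downgrade
+       do
+           printf '%s: %s\n' "$dir" "$(ls "$dir"/*.sql 2>/dev/null | wc -l)"
+       done
+   }
+
+   # Usage: count_sql   # expect 96, 13, 96 and 13 respectively
+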
+Upgrade scripts
+===============
+
+.. code::
+ :number-lines:
+
+ /opt/app/policy/bin/prepare_upgrade.sh policyadmin
+ /opt/app/policy/bin/db-migrator -s policyadmin -o upgrade
+
+.. note::
+    You can also run the db-migrator upgrade with the ``-t`` (to version) and ``-f`` (from version) options, e.g. ``db-migrator -s policyadmin -o upgrade -f 0800 -t 0900``.
+
+Downgrade scripts
+=================
+
+.. code::
+ :number-lines:
+
+ /opt/app/policy/bin/prepare_downgrade.sh policyadmin
+ /opt/app/policy/bin/db-migrator -s policyadmin -o downgrade -f 0900 -t 0800
+
+Db migrator initialization script
+=================================
+
+Update /oom/kubernetes/policy/resources/config/db_migrator_policy_init.sh with the appropriate upgrade/downgrade calls.
+
+The policy version you are deploying should either be an upgrade or downgrade from the current db migrator schema version.
+
+Every time you modify db_migrator_policy_init.sh you will have to undeploy, make and redeploy before updates are applied.
+
+1. Fresh Install
+****************
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 109
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 109
+ * - schema_version
+ - 0900
+
+2. Downgrade to Honolulu (0800)
+*******************************
+
+Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
+
+Make/Redeploy to run downgrade.
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 13
+ * - Tables in policyadmin
+ - 73
+ * - Records Added
+ - 13
+ * - schema_version
+ - 0800
+
+3. Upgrade to Istanbul (0900)
+*****************************
+
+Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts".
+
+Make/Redeploy to run upgrade.
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 13
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 13
+ * - schema_version
+ - 0900
+
+4. Upgrade to Istanbul (0900) without any information in the migration schema
+*****************************************************************************
+
+Ensure you are on release 0800. (This may require running a downgrade before starting the test)
+
+Drop db-migrator tables in migration schema:
+
+.. code::
+ :number-lines:
+
+ DROP TABLE schema_versions;
+ DROP TABLE policyadmin_schema_changelog;
+
+Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts".
+
+Make/Redeploy to run upgrade.
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 13
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 13
+ * - schema_version
+ - 0900
+
+5. Upgrade to Istanbul (0900) after failed downgrade
+****************************************************
+
+Ensure you are on release 0900.
+
+Rename pdpstatistics table in policyadmin schema:
+
+.. code::
+
+ RENAME TABLE pdpstatistics TO backup_pdpstatistics;
+
+Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
+
+Make/Redeploy to run downgrade
+
+This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)
+
+Rename the backup_pdpstatistics table in the policyadmin schema:
+
+.. code::
+
+ RENAME TABLE backup_pdpstatistics TO pdpstatistics;
+
+Modify db_migrator_policy_init.sh - Remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts"
+
+Make/Redeploy to run upgrade
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 11
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 11
+ * - schema_version
+ - 0900
+
+6. Downgrade to Honolulu (0800) after failed downgrade
+******************************************************
+
+Ensure you are on release 0900.
+
+Add a timeStamp column to jpapdpstatistics_enginestats:
+
+.. code::
+
+ ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN timeStamp datetime DEFAULT NULL NULL AFTER UPTIME;
+
+Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
+
+Make/Redeploy to run downgrade
+
+This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)
+
+Remove timeStamp column from jpapdpstatistics_enginestats:
+
+.. code::
+
+ ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp;
+
+The config job will retry 5 times. If you make your fix before this limit is reached you won't need to redeploy.
+
+Redeploy to run downgrade
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 14
+ * - Tables in policyadmin
+ - 73
+ * - Records Added
+ - 14
+ * - schema_version
+ - 0800
+
+7. Downgrade to Honolulu (0800) after failed upgrade
+****************************************************
+
+Ensure you are on release 0800.
+
+Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts"
+
+Update pdpstatistics:
+
+.. code::
+
+ ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL NULL AFTER POLICYEXECUTEDSUCCESSCOUNT;
+
+Make/Redeploy to run upgrade
+
+This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)
+
+Once the retry count has been reached, update pdpstatistics:
+
+.. code::
+
+ ALTER TABLE pdpstatistics DROP COLUMN POLICYUNDEPLOYCOUNT;
+
+Modify db_migrator_policy_init.sh - Remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
+
+Make/Redeploy to run downgrade
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 7
+ * - Tables in policyadmin
+ - 73
+ * - Records Added
+ - 7
+ * - schema_version
+ - 0800
+
+8. Upgrade to Istanbul (0900) after failed upgrade
+**************************************************
+
+Ensure you are on release 0800.
+
+Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts"
+
+Update PDP table:
+
+.. code::
+
+ ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY;
+
+Make/Redeploy to run upgrade
+
+This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)
+
+Update PDP table:
+
+.. code::
+
+ ALTER TABLE pdp DROP COLUMN LASTUPDATE;
+
+The config job will retry 5 times. If you make your fix before this limit is reached you won't need to redeploy.
+
+Redeploy to run upgrade
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 14
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 14
+ * - schema_version
+ - 0900
+
+9. Downgrade to Honolulu (0800) with data in pdpstatistics and jpapdpstatistics_enginestats
+*******************************************************************************************
+
+Ensure you are on release 0900.
+
+Check pdpstatistics and jpapdpstatistics_enginestats are populated with data.
+
+.. code::
+ :number-lines:
+
+ SELECT count(*) FROM pdpstatistics;
+ SELECT count(*) FROM jpapdpstatistics_enginestats;
+
+Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
+
+Make/Redeploy to run downgrade
+
+Check the tables to ensure the number of records is the same.
+
+.. code::
+ :number-lines:
+
+ SELECT count(*) FROM pdpstatistics;
+ SELECT count(*) FROM jpapdpstatistics_enginestats;
+
+Check pdpstatistics to ensure the primary key has changed:
+
+.. code::
+
+ SELECT column_name, constraint_name FROM information_schema.key_column_usage WHERE table_name='pdpstatistics';
+
+Check jpapdpstatistics_enginestats to ensure id column has been dropped and timestamp column added.
+
+.. code::
+
+ SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'jpapdpstatistics_enginestats';
+
+Check the pdp table to ensure the LASTUPDATE column has been dropped.
+
+.. code::
+
+ SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'pdp';
+
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 13
+ * - Tables in policyadmin
+ - 73
+ * - Records Added
+ - 13
+ * - schema_version
+ - 0800
+
+10. Upgrade to Istanbul (0900) with data in pdpstatistics and jpapdpstatistics_enginestats
+******************************************************************************************
+
+Ensure you are on release 0800.
+
+Check pdpstatistics and jpapdpstatistics_enginestats are populated with data.
+
+.. code::
+ :number-lines:
+
+ SELECT count(*) FROM pdpstatistics;
+ SELECT count(*) FROM jpapdpstatistics_enginestats;
+
+Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts"
+
+Make/Redeploy to run upgrade
+
+Check the tables to ensure the number of records is the same.
+
+.. code::
+ :number-lines:
+
+ SELECT count(*) FROM pdpstatistics;
+ SELECT count(*) FROM jpapdpstatistics_enginestats;
+
+Check pdpstatistics to ensure the primary key has changed:
+
+.. code::
+
+ SELECT column_name, constraint_name FROM information_schema.key_column_usage WHERE table_name='pdpstatistics';
+
+Check jpapdpstatistics_enginestats to ensure timestamp column has been dropped and id column added.
+
+.. code::
+
+ SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'jpapdpstatistics_enginestats';
+
+Check the pdp table to ensure the LASTUPDATE column has been added and the value has defaulted to the CURRENT_TIMESTAMP.
+
+.. code::
+
+ SELECT table_name, column_name, data_type, column_default FROM information_schema.columns WHERE table_name = 'pdp';
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 13
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 13
+ * - schema_version
+ - 0900
+
+.. note::
+    The number of records added may vary depending on the number of retries.
+
+End of Document
diff --git a/docs/development/devtools/devtools.rst b/docs/development/devtools/devtools.rst
index 0cf11a4c..dff8819d 100644
--- a/docs/development/devtools/devtools.rst
+++ b/docs/development/devtools/devtools.rst
@@ -276,6 +276,12 @@ familiar with the Policy Framework components and test any local changes.
.. toctree::
:maxdepth: 1
+
+   policy-gui-controlloop-smoke.rst
+
+   db-migrator-smoke.rst
+
..
api-smoke.rst
@@ -297,6 +303,9 @@ familiar with the Policy Framework components and test any local changes.
..
clamp-smoke.rst
+..
+ clamp-cl-participant-protocol-smoke.rst
+
Running the Stability/Performance Tests
***************************************
@@ -315,7 +324,7 @@ familiar with the Policy Framework components and test any local changes.
clamp-s3p.rst
Running the Pairwise Tests
-***********************
+**************************
The following links contain instructions on how to run the pairwise tests. These may be helpful to developers check that
the Policy Framework works in a full ONAP deployment.
@@ -323,6 +332,10 @@ the Policy Framework works in a full ONAP deployment.
.. toctree::
:maxdepth: 1
+ clamp-policy.rst
+
+ clamp-dcae.rst
+
..
api-pairwise.rst
@@ -341,9 +354,6 @@ the Policy Framework works in a full ONAP deployment.
..
distribution-pairwise.rst
-..
- clamp-pairwise.rst
-
Generating Swagger Documentation
********************************
diff --git a/docs/development/devtools/images/cl-commission.png b/docs/development/devtools/images/cl-commission.png
new file mode 100644
index 00000000..ee1bab17
--- /dev/null
+++ b/docs/development/devtools/images/cl-commission.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-create.png b/docs/development/devtools/images/cl-create.png
new file mode 100644
index 00000000..df97a170
--- /dev/null
+++ b/docs/development/devtools/images/cl-create.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-instantiation.png b/docs/development/devtools/images/cl-instantiation.png
new file mode 100644
index 00000000..b1101ffb
--- /dev/null
+++ b/docs/development/devtools/images/cl-instantiation.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-passive.png b/docs/development/devtools/images/cl-passive.png
new file mode 100644
index 00000000..def811a5
--- /dev/null
+++ b/docs/development/devtools/images/cl-passive.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-running-state.png b/docs/development/devtools/images/cl-running-state.png
new file mode 100644
index 00000000..ab7b73c5
--- /dev/null
+++ b/docs/development/devtools/images/cl-running-state.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-running.png b/docs/development/devtools/images/cl-running.png
new file mode 100644
index 00000000..e9730e0d
--- /dev/null
+++ b/docs/development/devtools/images/cl-running.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-uninitialise.png b/docs/development/devtools/images/cl-uninitialise.png
new file mode 100644
index 00000000..d10b214c
--- /dev/null
+++ b/docs/development/devtools/images/cl-uninitialise.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-uninitialised-state.png b/docs/development/devtools/images/cl-uninitialised-state.png
new file mode 100644
index 00000000..f8a77da8
--- /dev/null
+++ b/docs/development/devtools/images/cl-uninitialised-state.png
Binary files differ
diff --git a/docs/development/devtools/images/create-instance.png b/docs/development/devtools/images/create-instance.png
new file mode 100644
index 00000000..3b3c0c21
--- /dev/null
+++ b/docs/development/devtools/images/create-instance.png
Binary files differ
diff --git a/docs/development/devtools/images/update-instance.png b/docs/development/devtools/images/update-instance.png
new file mode 100644
index 00000000..fa1ee095
--- /dev/null
+++ b/docs/development/devtools/images/update-instance.png
Binary files differ
diff --git a/docs/development/devtools/tosca/pairwise-testing.yml b/docs/development/devtools/tosca/pairwise-testing.yml
new file mode 100644
index 00000000..e6c25d0d
--- /dev/null
+++ b/docs/development/devtools/tosca/pairwise-testing.yml
@@ -0,0 +1,996 @@
+tosca_definitions_version: tosca_simple_yaml_1_3
+data_types:
+ onap.datatypes.ToscaConceptIdentifier:
+ derived_from: tosca.datatypes.Root
+ properties:
+ name:
+ type: string
+ required: true
+ version:
+ type: string
+ required: true
+ onap.datatype.controlloop.Target:
+ derived_from: tosca.datatypes.Root
+    description: Definition for an entity in A&AI to perform a control loop operation on
+ properties:
+ targetType:
+ type: string
+ description: Category for the target type
+ required: true
+ constraints:
+ - valid_values:
+ - VNF
+ - VM
+ - VFMODULE
+ - PNF
+ entityIds:
+ type: map
+ description: |
+ Map of values that identify the resource. If none are provided, it is assumed that the
+ entity that generated the ONSET event will be the target.
+ required: false
+ metadata:
+ clamp_possible_values: ClampExecution:CSAR_RESOURCES
+ entry_schema:
+ type: string
+ onap.datatype.controlloop.Actor:
+ derived_from: tosca.datatypes.Root
+ description: An actor/operation/target definition
+ properties:
+ actor:
+ type: string
+ description: The actor performing the operation.
+ required: true
+ metadata:
+ clamp_possible_values: Dictionary:DefaultActors,ClampExecution:CDS/actor
+ operation:
+ type: string
+ description: The operation the actor is performing.
+ metadata:
+ clamp_possible_values: Dictionary:DefaultOperations,ClampExecution:CDS/operation
+ required: true
+ target:
+ type: onap.datatype.controlloop.Target
+ description: The resource the operation should be performed on.
+ required: true
+ payload:
+ type: map
+ description: Name/value pairs of payload information passed by Policy to the actor
+ required: false
+ metadata:
+ clamp_possible_values: ClampExecution:CDS/payload
+ entry_schema:
+ type: string
+ onap.datatype.controlloop.Operation:
+ derived_from: tosca.datatypes.Root
+ description: An operation supported by an actor
+ properties:
+ id:
+ type: string
+ description: Unique identifier for the operation
+ required: true
+ description:
+ type: string
+ description: A user-friendly description of the intent for the operation
+ required: false
+ operation:
+ type: onap.datatype.controlloop.Actor
+ description: The definition of the operation to be performed.
+ required: true
+ timeout:
+ type: integer
+ description: The amount of time for the actor to perform the operation.
+ required: true
+ retries:
+ type: integer
+ description: The number of retries the actor should attempt to perform the operation.
+ required: true
+ default: 0
+ success:
+ type: string
+        description: Points to the operation to invoke on success. A value of "final_success" indicates an end to the operation.
+ required: false
+ default: final_success
+ failure:
+ type: string
+ description: Points to the operation to invoke on Actor operation failure.
+ required: false
+ default: final_failure
+ failure_timeout:
+ type: string
+ description: Points to the operation to invoke when the time out for the operation occurs.
+ required: false
+ default: final_failure_timeout
+ failure_retries:
+ type: string
+ description: Points to the operation to invoke when the current operation has exceeded its max retries.
+ required: false
+ default: final_failure_retries
+ failure_exception:
+ type: string
+ description: Points to the operation to invoke when the current operation causes an exception.
+ required: false
+ default: final_failure_exception
+ failure_guard:
+ type: string
+ description: Points to the operation to invoke when the current operation is blocked due to guard policy enforcement.
+ required: false
+ default: final_failure_guard
+ onap.datatypes.monitoring.managedObjectDNsBasic:
+ constraints: []
+ properties:
+ DN:
+ name: DN
+ type: string
+ typeVersion: 0.0.0
+ description: Managed object distinguished name
+ required: true
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.managedObjectDNsBasic
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.managedObjectDNsBasics:
+ constraints: []
+ properties:
+ managedObjectDNsBasic:
+ name: managedObjectDNsBasic
+ type: map
+ typeVersion: 0.0.0
+ description: Managed object distinguished name object
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.managedObjectDNsBasic
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.managedObjectDNsBasics
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.measurementGroup:
+ constraints: []
+ properties:
+ measurementTypes:
+ name: measurementTypes
+ type: list
+ typeVersion: 0.0.0
+ description: List of measurement types
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.measurementTypes
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ managedObjectDNsBasic:
+ name: managedObjectDNsBasic
+ type: list
+ typeVersion: 0.0.0
+ description: List of managed object distinguished names
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.managedObjectDNsBasics
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.measurementGroup
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.measurementGroups:
+ constraints: []
+ properties:
+ measurementGroup:
+ name: measurementGroup
+ type: map
+ typeVersion: 0.0.0
+ description: Measurement Group
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.measurementGroup
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.measurementGroups
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.measurementType:
+ constraints: []
+ properties:
+ measurementType:
+ name: measurementType
+ type: string
+ typeVersion: 0.0.0
+ description: Measurement type
+ required: true
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.measurementType
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.measurementTypes:
+ constraints: []
+ properties:
+ measurementType:
+ name: measurementType
+ type: map
+ typeVersion: 0.0.0
+ description: Measurement type object
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.measurementType
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.measurementTypes
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.nfFilter:
+ constraints: []
+ properties:
+ modelNames:
+ name: modelNames
+ type: list
+ typeVersion: 0.0.0
+ description: List of model names
+ required: true
+ constraints: []
+ entry_schema:
+ type: string
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ modelInvariantIDs:
+ name: modelInvariantIDs
+ type: list
+ typeVersion: 0.0.0
+ description: List of model invariant IDs
+ required: true
+ constraints: []
+ entry_schema:
+ type: string
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ modelVersionIDs:
+ name: modelVersionIDs
+ type: list
+ typeVersion: 0.0.0
+ description: List of model version IDs
+ required: true
+ constraints: []
+ entry_schema:
+ type: string
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ nfNames:
+ name: nfNames
+ type: list
+ typeVersion: 0.0.0
+ description: List of network functions
+ required: true
+ constraints: []
+ entry_schema:
+ type: string
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.nfFilter
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.subscription:
+ constraints: []
+ properties:
+ measurementGroups:
+ name: measurementGroups
+ type: list
+ typeVersion: 0.0.0
+ description: Measurement Groups
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.measurementGroups
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ fileBasedGP:
+ name: fileBasedGP
+ type: integer
+ typeVersion: 0.0.0
+ description: File based granularity period
+ required: true
+ constraints: []
+ metadata: {}
+ fileLocation:
+ name: fileLocation
+ type: string
+ typeVersion: 0.0.0
+ description: ROP file location
+ required: true
+ constraints: []
+ metadata: {}
+ subscriptionName:
+ name: subscriptionName
+ type: string
+ typeVersion: 0.0.0
+ description: Name of the subscription
+ required: true
+ constraints: []
+ metadata: {}
+ administrativeState:
+ name: administrativeState
+ type: string
+ typeVersion: 0.0.0
+ description: State of the subscription
+ required: true
+ constraints:
+ - valid_values:
+ - LOCKED
+ - UNLOCKED
+ metadata: {}
+ nfFilter:
+ name: nfFilter
+ type: map
+ typeVersion: 0.0.0
+ description: Network function filter
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.nfFilter
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.subscription
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.RestRequest:
+ version: 1.0.0
+ derived_from: tosca.datatypes.Root
+ properties:
+ restRequestId:
+ type: onap.datatypes.ToscaConceptIdentifier
+ typeVersion: 1.0.0
+ required: true
+ description: The name and version of a REST request to be sent to a REST endpoint
+ httpMethod:
+ type: string
+ required: true
+ constraints:
+ - valid_values: [POST, PUT, GET, DELETE]
+ description: The REST method to use
+ path:
+ type: string
+ required: true
+ description: The path of the REST request relative to the base URL
+ body:
+ type: string
+ required: false
+ description: The body of the REST request for PUT and POST requests
+ expectedResponse:
+ type: integer
+ required: true
+ constraints:
+ - in_range: [100, 599]
+        description: The expected HTTP status code for the REST request
+ org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.ConfigurationEntity:
+ version: 1.0.0
+ derived_from: tosca.datatypes.Root
+ properties:
+ configurationEntityId:
+ type: onap.datatypes.ToscaConceptIdentifier
+ typeVersion: 1.0.0
+ required: true
+ description: The name and version of a Configuration Entity to be handled by the HTTP Control Loop Element
+ restSequence:
+ type: list
+ entry_schema:
+ type: org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.RestRequest
+ typeVersion: 1.0.0
+ description: A sequence of REST commands to send to the REST endpoint
+policy_types:
+ onap.policies.Monitoring:
+ derived_from: tosca.policies.Root
+ description: a base policy type for all policies that govern monitoring provisioning
+ version: 1.0.0
+ name: onap.policies.Monitoring
+ onap.policies.Sirisha:
+ derived_from: tosca.policies.Root
+ description: a base policy type for all policies that govern monitoring provisioning
+ version: 1.0.0
+ name: onap.policies.Sirisha
+ onap.policies.monitoring.dcae-pm-subscription-handler:
+ properties:
+ pmsh_policy:
+ name: pmsh_policy
+ type: onap.datatypes.monitoring.subscription
+ typeVersion: 0.0.0
+ description: PMSH Policy JSON
+ required: false
+ constraints: []
+ metadata: {}
+ name: onap.policies.monitoring.dcae-pm-subscription-handler
+ version: 1.0.0
+ derived_from: onap.policies.Monitoring
+ metadata: {}
+ onap.policies.controlloop.operational.Common:
+ derived_from: tosca.policies.Root
+ version: 1.0.0
+ name: onap.policies.controlloop.operational.Common
+ description: |
+ Operational Policy for Control Loop execution. Originated in Frankfurt to support TOSCA Compliant
+ Policy Types. This does NOT support the legacy Policy YAML policy type.
+ properties:
+ id:
+ type: string
+ description: The unique control loop id.
+ required: true
+ timeout:
+ type: integer
+ description: |
+ Overall timeout for executing all the operations. This timeout should equal or exceed the total
+ timeout for each operation listed.
+ required: true
+ abatement:
+ type: boolean
+ description: Whether an abatement event message will be expected for the control loop from DCAE.
+ required: true
+ default: false
+ trigger:
+ type: string
+ description: Initial operation to execute upon receiving an Onset event message for the Control Loop.
+ required: true
+ operations:
+ type: list
+ description: List of operations to be performed when Control Loop is triggered.
+ required: true
+ entry_schema:
+ type: onap.datatype.controlloop.Operation
+ onap.policies.controlloop.operational.common.Apex:
+ derived_from: onap.policies.controlloop.operational.Common
+ type_version: 1.0.0
+ version: 1.0.0
+ name: onap.policies.controlloop.operational.common.Apex
+ description: Operational policies for Apex PDP
+ properties:
+ engineServiceParameters:
+ type: string
+ description: The engine parameters like name, instanceCount, policy implementation, parameters etc.
+ required: true
+ eventInputParameters:
+ type: string
+ description: The event input parameters.
+ required: true
+ eventOutputParameters:
+ type: string
+ description: The event output parameters.
+ required: true
+ javaProperties:
+ type: string
+ description: Name/value pairs of properties to be set for APEX if needed.
+ required: false
+node_types:
+ org.onap.policy.clamp.controlloop.Participant:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+        required: false
+ org.onap.policy.clamp.controlloop.ControlLoopElement:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+ required: false
+ metadata:
+ common: true
+ description: Specifies the organization that provides the control loop element
+ participant_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+        required: true
+ metadata:
+ common: true
+ participantType:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ metadata:
+ common: true
+ description: The identity of the participant type that hosts this type of Control Loop Element
+ startPhase:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ metadata:
+ common: true
+ description: A value indicating the start phase in which this control loop element will be started, the
+ first start phase is zero. Control Loop Elements are started in their start_phase order and stopped
+ in reverse start phase order. Control Loop Elements with the same start phase are started and
+ stopped simultaneously
+ uninitializedToPassiveTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+          description: The maximum time in seconds to wait for a state change from uninitialized to passive
+ passiveToRunningTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+          description: The maximum time in seconds to wait for a state change from passive to running
+ runningToPassiveTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+          description: The maximum time in seconds to wait for a state change from running to passive
+ passiveToUninitializedTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+          description: The maximum time in seconds to wait for a state change from passive to uninitialized
+ org.onap.policy.clamp.controlloop.ControlLoop:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+ required: false
+ metadata:
+ common: true
+ description: Specifies the organization that provides the control loop element
+ elements:
+ type: list
+ required: true
+ metadata:
+ common: true
+ entry_schema:
+ type: onap.datatypes.ToscaConceptIdentifier
+ description: Specifies a list of control loop element definitions that make up this control loop definition
+ org.onap.policy.clamp.controlloop.PolicyControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.ControlLoopElement
+ properties:
+ policy_type_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+        required: true
+ policy_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+        required: false
+ org.onap.policy.clamp.controlloop.DerivedPolicyControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.PolicyControlLoopElement
+ properties:
+ policy_type_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+        required: true
+ policy_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+        required: false
+ org.onap.policy.clamp.controlloop.DerivedDerivedPolicyControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.DerivedPolicyControlLoopElement
+ properties:
+ policy_type_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+        required: true
+ policy_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+        required: false
+ org.onap.policy.clamp.controlloop.CDSControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.ControlLoopElement
+ properties:
+ cds_blueprint_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+        required: true
+ org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.ControlLoopElement
+ properties:
+ chart:
+ type: string
+ required: true
+ configs:
+ type: list
+ required: false
+ requirements:
+ type: string
+        requred: false
+
+ templates:
+ type: list
+ required: false
+ entry_schema:
+ values:
+ type: string
+          required: true
+ org.onap.policy.clamp.controlloop.HttpControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.ControlLoopElement
+ properties:
+ baseUrl:
+ type: string
+ required: true
+ description: The base URL to be prepended to each path, identifies the host for the REST endpoints.
+ httpHeaders:
+ type: map
+ required: false
+ entry_schema:
+ type: string
+ description: HTTP headers to send on REST requests
+ configurationEntities:
+ type: map
+ required: true
+ entry_schema:
+ type: org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.ConfigurationEntity
+ typeVersion: 1.0.0
+        description: The configuration entities the Control Loop Element is managing and their associated REST requests
+
+topology_template:
+ inputs:
+ pmsh_monitoring_policy:
+ type: onap.datatypes.ToscaConceptIdentifier
+ description: The ID of the PMSH monitoring policy to use
+ default:
+ name: MICROSERVICE_vLoadBalancerMS_v1_0_dcae-pm-subscription-handler_1_0_0test
+ version: 1.0.0
+ pmsh_operational_policy:
+ type: onap.datatypes.ToscaConceptIdentifier
+ description: The ID of the PMSH operational policy to use
+ default:
+ name: operational.apex.pmcontrol
+ version: 1.0.0
+ node_templates:
+ org.onap.policy.controlloop.PolicyControlLoopParticipant:
+ version: 2.3.1
+ type: org.onap.policy.clamp.controlloop.Participant
+ type_version: 1.0.1
+ description: Participant for DCAE microservices
+ properties:
+ provider: ONAP
+ org.onap.domain.pmsh.PMSH_MonitoringPolicyControlLoopElement:
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.PolicyControlLoopElement
+ type_version: 1.0.0
+ description: Control loop element for the monitoring policy for Performance Management Subscription Handling
+ properties:
+ provider: Ericsson
+ participant_id:
+ name: org.onap.PM_Policy
+ version: 1.0.0
+ participantType:
+ name: org.onap.policy.controlloop.PolicyControlLoopParticipant
+ version: 2.3.1
+ policy_type_id:
+ name: onap.policies.monitoring.pm-subscription-handler
+ version: 1.0.0
+ policy_id:
+ get_input: pmsh_monitoring_policy
+ org.onap.domain.pmsh.PMSH_OperationalPolicyControlLoopElement:
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.PolicyControlLoopElement
+ type_version: 1.0.0
+ description: Control loop element for the operational policy for Performance Management Subscription Handling
+ properties:
+ provider: Ericsson
+ participant_id:
+ name: org.onap.PM_Policy
+ version: 1.0.0
+ participantType:
+ name: org.onap.policy.controlloop.PolicyControlLoopParticipant
+ version: 2.3.1
+ policy_type_id:
+ name: onap.policies.operational.pm-subscription-handler
+ version: 1.0.0
+ policy_id:
+ get_input: pmsh_operational_policy
+ org.onap.k8s.controlloop.K8SControlLoopParticipant:
+ version: 2.3.4
+ type: org.onap.policy.clamp.controlloop.Participant
+ type_version: 1.0.1
+ description: Participant for K8S
+ properties:
+ provider: ONAP
+ org.onap.domain.database.PMSH_K8SMicroserviceControlLoopElement:
+ # Chart from new repository
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement
+ type_version: 1.0.0
+ description: Control loop element for the K8S microservice for PMSH
+ properties:
+ provider: ONAP
+ participant_id:
+ name: K8sParticipant0
+ version: 1.0.0
+ participantType:
+ name: org.onap.k8s.controlloop.K8SControlLoopParticipant
+ version: 2.3.4
+ chart:
+ chartId:
+ name: dcae-pmsh
+ version: 8.0.0
+ namespace: onap
+ releaseName: pmshms
+ repository:
+ repoName: chartmuseum
+ protocol: http
+ address: chart-museum
+ port: 80
+ userName: onapinitializer
+ password: demo123456!
+ overrideParams:
+ global.masterPassword: test
+
+ org.onap.domain.database.Local_K8SMicroserviceControlLoopElement:
+ # Chart installation without passing repository info
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement
+ type_version: 1.0.0
+ description: Control loop element for the K8S microservice for local chart
+ properties:
+ provider: ONAP
+ participant_id:
+ name: K8sParticipant0
+ version: 1.0.0
+ participantType:
+ name: org.onap.k8s.controlloop.K8SControlLoopParticipant
+ version: 2.3.4
+ chart:
+ chartId:
+ name: nginx-ingress
+ version: 0.9.1
+ releaseName: nginxms
+ namespace: test
+ org.onap.controlloop.HttpControlLoopParticipant:
+ version: 2.3.4
+ type: org.onap.policy.clamp.controlloop.Participant
+ type_version: 1.0.1
+ description: Participant for Http requests
+ properties:
+ provider: ONAP
+ org.onap.domain.database.Http_PMSHMicroserviceControlLoopElement:
+ # Consul http config for PMSH.
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.HttpControlLoopElement
+ type_version: 1.0.1
+ description: Control loop element for the http requests of PMSH microservice
+ properties:
+ provider: ONAP
+ participant_id:
+ name: HttpParticipant0
+ version: 1.0.0
+ participantType:
+ name: org.onap.k8s.controlloop.HttpControlLoopParticipant
+ version: 2.3.4
+ uninitializedToPassiveTimeout: 180
+ startPhase: 1
+ baseUrl: http://consul-server-ui:8500
+ httpHeaders:
+ Content-Type: application/json
+ configurationEntities:
+ - configurationEntityId:
+ name: entity1
+ version: 1.0.1
+ restSequence:
+ - restRequestId:
+ name: request1
+ version: 1.0.1
+ httpMethod: PUT
+ path: v1/kv/dcae-pmsh2
+ body: '{
+ "control_loop_name":"pmsh-control-loop",
+ "operational_policy_name":"pmsh-operational-policy",
+ "aaf_password":"demo123456!",
+ "aaf_identity":"dcae@dcae.onap.org",
+ "cert_path":"/opt/app/pmsh/etc/certs/cert.pem",
+ "key_path":"/opt/app/pmsh/etc/certs/key.pem",
+ "ca_cert_path":"/opt/app/pmsh/etc/certs/cacert.pem",
+ "enable_tls":"true",
+ "pmsh_policy":{
+ "subscription":{
+ "subscriptionName":"ExtraPM-All-gNB-R2B",
+ "administrativeState":"UNLOCKED",
+ "fileBasedGP":15,
+ "fileLocation":"\/pm\/pm.xml",
+ "nfFilter":{
+ "nfNames":[
+ "^pnf.*",
+ "^vnf.*"
+ ],
+ "modelInvariantIDs":[
+ ],
+ "modelVersionIDs":[
+ ],
+ "modelNames":[
+ ]
+ },
+ "measurementGroups":[
+ {
+ "measurementGroup":{
+ "measurementTypes":[
+ {
+ "measurementType":"countera"
+ },
+ {
+ "measurementType":"counterb"
+ }
+ ],
+ "managedObjectDNsBasic":[
+ {
+ "DN":"dna"
+ },
+ {
+ "DN":"dnb"
+ }
+ ]
+ }
+ },
+ {
+ "measurementGroup":{
+ "measurementTypes":[
+ {
+ "measurementType":"counterc"
+ },
+ {
+ "measurementType":"counterd"
+ }
+ ],
+ "managedObjectDNsBasic":[
+ {
+ "DN":"dnc"
+ },
+ {
+ "DN":"dnd"
+ }
+ ]
+ }
+ }
+ ]
+ }
+ },
+ "streams_subscribes":{
+ "aai_subscriber":{
+ "type":"message_router",
+ "dmaap_info":{
+ "topic_url":"https://10.152.183.151:3905/events/AAI_EVENT",
+ "client_role":"org.onap.dcae.aaiSub",
+ "location":"san-francisco",
+ "client_id":"1575976809466"
+ }
+ },
+ "policy_pm_subscriber":{
+ "type":"message_router",
+ "dmaap_info":{
+ "topic_url":"https://10.152.183.151:3905/events/org.onap.dmaap.mr.PM_SUBSCRIPTIONS",
+ "client_role":"org.onap.dcae.pmSubscriber",
+ "location":"san-francisco",
+ "client_id":"1575876809456"
+ }
+ }
+ },
+ "streams_publishes":{
+ "policy_pm_publisher":{
+ "type":"message_router",
+ "dmaap_info":{
+ "topic_url":"https://10.152.183.151:3905/events/org.onap.dmaap.mr.PM_SUBSCRIPTIONS",
+ "client_role":"org.onap.dcae.pmPublisher",
+ "location":"san-francisco",
+ "client_id":"1475976809466"
+ }
+ },
+ "other_publisher":{
+ "type":"message_router",
+ "dmaap_info":{
+ "topic_url":"https://10.152.183.151:3905/events/org.onap.dmaap.mr.SOME_OTHER_TOPIC",
+ "client_role":"org.onap.dcae.pmControlPub",
+ "location":"san-francisco",
+ "client_id":"1875976809466"
+ }
+ }
+ }
+ }'
+ expectedResponse: 200
+ org.onap.domain.sample.GenericK8s_ControlLoopDefinition:
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.ControlLoop
+ type_version: 1.0.0
+ description: Control loop for Hello World
+ properties:
+ provider: ONAP
+ elements:
+ - name: org.onap.domain.database.PMSH_K8SMicroserviceControlLoopElement
+ version: 1.2.3
+ - name: org.onap.domain.database.Local_K8SMicroserviceControlLoopElement
+ version: 1.2.3
+ - name: org.onap.domain.database.Http_PMSHMicroserviceControlLoopElement
+ version: 1.2.3
+ - name: org.onap.domain.pmsh.PMSH_MonitoringPolicyControlLoopElement
+ version: 1.2.3
+ - name: org.onap.domain.pmsh.PMSH_OperationalPolicyControlLoopElement
+ version: 1.2.3
+
+ policies:
+ - MICROSERVICE_vLoadBalancerMS_v1_0_dcae-pm-subscription-handler_1_0_0test:
+ type: onap.policies.monitoring.dcae-pm-subscription-handler
+ type_version: 1.0.0
+ name: MICROSERVICE_vLoadBalancerMS_v1_0_dcae-pm-subscription-handler_1_0_0test
+ version: 1.0.0
+ metadata:
+ policy-id: MICROSERVICE_vLoadBalancerMS_v1_0_dcae-pm-subscription-handler_1_0_0test
+ policy-version: 1.0.0
+ properties:
+ pmsh_policy:
+ fileBasedGP: 15
+ fileLocation: /pm/pm.xml
+ subscriptionName: subscriptiona
+ administrativeState: UNLOCKED
+ nfFilter:
+ onap.datatypes.monitoring.nfFilter:
+ modelVersionIDs:
+ - e80a6ae3-cafd-4d24-850d-e14c084a5ca9
+ modelInvariantIDs:
+ - 5845y423-g654-6fju-po78-8n53154532k6
+ - 7129e420-d396-4efb-af02-6b83499b12f8
+ modelNames: []
+ nfNames:
+ - '"^pnf1.*"'
+ measurementGroups:
+ - measurementGroup:
+ onap.datatypes.monitoring.measurementGroup:
+ measurementTypes:
+ - measurementType:
+ onap.datatypes.monitoring.measurementType:
+ measurementType: countera
+ - measurementType:
+ onap.datatypes.monitoring.measurementType:
+ measurementType: counterb
+ managedObjectDNsBasic:
+ - managedObjectDNsBasic:
+ onap.datatypes.monitoring.managedObjectDNsBasic:
+ DN: dna
+ - managedObjectDNsBasic:
+ onap.datatypes.monitoring.managedObjectDNsBasic:
+ DN: dnb
+ - measurementGroup:
+ onap.datatypes.monitoring.measurementGroup:
+ measurementTypes:
+ - measurementType:
+ onap.datatypes.monitoring.measurementType:
+ measurementType: counterc
+ - measurementType:
+ onap.datatypes.monitoring.measurementType:
+ measurementType: counterd
+ managedObjectDNsBasic:
+ - managedObjectDNsBasic:
+ onap.datatypes.monitoring.managedObjectDNsBasic:
+ DN: dnc
+ - managedObjectDNsBasic:
+ onap.datatypes.monitoring.managedObjectDNsBasic:
+                                      DN: dnd
\ No newline at end of file