Diffstat (limited to 'docs/development/devtools')
-rw-r--r--  docs/development/devtools/clamp-dcae.rst                  | 115
-rw-r--r--  docs/development/devtools/clamp-policy.rst                | 124
-rw-r--r--  docs/development/devtools/clamp-smoke.rst                 | 357
-rw-r--r--  docs/development/devtools/db-migrator-smoke.rst           | 413
-rw-r--r--  docs/development/devtools/devtools.rst                    | 13
-rw-r--r--  docs/development/devtools/images/cl-commission.png        | bin 0 -> 161307 bytes
-rw-r--r--  docs/development/devtools/images/cl-create.png            | bin 0 -> 226752 bytes
-rw-r--r--  docs/development/devtools/images/cl-instantiation.png     | bin 0 -> 230788 bytes
-rw-r--r--  docs/development/devtools/images/cl-passive.png           | bin 0 -> 206486 bytes
-rw-r--r--  docs/development/devtools/images/cl-running-state.png     | bin 0 -> 226765 bytes
-rw-r--r--  docs/development/devtools/images/cl-running.png           | bin 0 -> 206577 bytes
-rw-r--r--  docs/development/devtools/images/cl-uninitialise.png      | bin 0 -> 206284 bytes
-rw-r--r--  docs/development/devtools/images/cl-uninitialised-state.png | bin 0 -> 227934 bytes
-rw-r--r--  docs/development/devtools/images/create-instance.png      | bin 0 -> 209643 bytes
-rw-r--r--  docs/development/devtools/images/update-instance.png      | bin 0 -> 129767 bytes
-rw-r--r--  docs/development/devtools/tosca/pairwise-testing.yml      | 996
16 files changed, 2015 insertions, 3 deletions
diff --git a/docs/development/devtools/clamp-dcae.rst b/docs/development/devtools/clamp-dcae.rst
new file mode 100644
index 00000000..c0cd41bf
--- /dev/null
+++ b/docs/development/devtools/clamp-dcae.rst
@@ -0,0 +1,115 @@
+.. This work is licensed under a
+.. Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _clamp-pairwise-testing-label:
+
+.. toctree::
+ :maxdepth: 2
+
+CLAMP <-> DCAE
+~~~~~~~~~~~~~~
+
+Pairwise testing is executed against a default ONAP installation deployed with OOM.
+The CLAMP Control Loop runtime interacts with DCAE to deploy dcaegen2 services such as PMSH.
+This test verifies that the interaction between DCAE and the control loop components works as expected.
+
+General Setup
+*************
+
+The Kubernetes installation allocates the policy components across multiple worker-node VMs.
+The worker VM hosting the policy components has the following specification:
+
+- 16GB RAM
+- 8 VCPU
+- 160GB Ephemeral Disk
+
+
+The ONAP components used during the pairwise tests are:
+
+- CLAMP control loop runtime, policy participant and kubernetes participant.
+- DCAE for running the dcaegen2-service deployed via the kubernetes participant.
+- The ChartMuseum service from the platform component, initialised with the DCAE helm charts.
+- DMaaP for the communication between the control loop runtime and the participants.
+- Policy GUI for commissioning and instantiation of control loops.
+
+
+ChartMuseum Setup
+*****************
+
+The ChartMuseum helm chart from the platform component is deployed in the same cluster. The chart server is then initialised with the dcaegen2-services helm charts by running the script below from the OOM repository.
+The script accepts as an argument the directory path where the helm charts are located.
+
+.. code-block:: bash
+
+ #!/bin/sh
+ ./oom/kubernetes/contrib/tools/registry-initialize.sh -d /oom/kubernetes/dcaegen2-services/charts/
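+
+If the chart server is reachable, the upload can be sanity-checked against ChartMuseum's chart listing API. This is a hedged example; the service name and port below follow the values used in the sample TOSCA template and may differ in your cluster.
+
+.. code-block:: bash
+
+    # list the charts currently stored in ChartMuseum (service name and port are assumptions)
+    curl -s http://chart-museum:80/api/charts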
+
+Testing procedure
+*****************
+
+The test set focuses on the following use cases:
+
+- Deployment and Configuration of DCAE microservice PMSH
+- Undeployment of PMSH
+
+Creation of the Control Loop:
+-----------------------------
+A Control Loop is created by commissioning a TOSCA template containing the control loop definitions and then instantiating the Control Loop in the "UNINITIALISED" state.
+
+- Upload a TOSCA template from the Policy GUI. The definitions include a kubernetes participant and control loop elements that deploy and configure a microservice in the kubernetes cluster.
+ The control loop element for the kubernetes participant includes the helm chart information of the DCAE microservice, and the element for the http participant includes the configuration entity for the microservice.
+ :download:`Sample Tosca template <tosca/pairwise-testing.yml>`
+
+ .. image:: images/cl-commission.png
+
+ Verification: The template is commissioned successfully without errors.
+
+- Instantiate the commissioned control loop definitions from the Policy GUI under 'Instantiation Management'.
+
+ .. image:: images/create-instance.png
+
+ Update instance properties of the Control Loop Elements if required.
+
+ .. image:: images/update-instance.png
+
+ Verification: The control loop is created in the default state "UNINITIALISED" without errors.
+
+ .. image:: images/cl-instantiation.png
+
+
+Deployment and Configuration of DCAE microservice (PMSH):
+---------------------------------------------------------
+The Control Loop state is changed from "UNINITIALISED" to "PASSIVE" in the Policy GUI. The kubernetes participant deploys the PMSH helm chart from the DCAE ChartMuseum server.
+
+.. image:: images/cl-passive.png
+
+Verification:
+
+- The DCAE service PMSH is deployed into the kubernetes cluster and the PMSH pods are in RUNNING state (see the example commands after the screenshot below).
+ `helm ls -n <namespace>` - the helm release of the dcaegen2 service PMSH is listed.
+ `kubectl get pod -n <namespace>` - the PMSH pods are deployed, up and running.
+
+- The subscription configuration for the PMSH microservice from the TOSCA definitions is updated in the Consul server. The configuration can be verified on the Consul server UI `http://<CONSUL-SERVER_IP>/ui/#/dc1/kv/`
+
+- The overall state of the Control Loop is changed to "PASSIVE" in the Policy GUI.
+
+.. image:: images/cl-create.png
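+
+As a hedged summary of the checks above, the following commands can be run against the cluster; the namespace is a placeholder, and the Consul port and key name follow the sample TOSCA template used in this test.
+
+.. code-block:: bash
+
+    # the PMSH helm release is listed
+    helm ls -n <namespace>
+    # the PMSH pods are up and Running
+    kubectl get pod -n <namespace>
+    # the subscription configuration pushed to Consul by the http participant
+    curl -s "http://<CONSUL-SERVER_IP>:8500/v1/kv/dcae-pmsh2?raw"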
+
+
+Undeployment of DCAE microservice (PMSH):
+-----------------------------------------
+The Control Loop state is changed from "PASSIVE" to "UNINITIALISED" in the Policy GUI.
+
+.. image:: images/cl-uninitialise.png
+
+Verification:
+
+- The kubernetes participant uninstalls the DCAE PMSH helm chart from the kubernetes cluster. The pods are removed from the cluster.
+
+- The overall state of the Control Loop is changed to "UNINITIALISED" in the Policy GUI.
+
+.. image:: images/cl-uninitialised-state.png
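+
+The same commands used for the deployment check can be re-run to confirm the removal; this is a sketch with a placeholder namespace.
+
+.. code-block:: bash
+
+    # the pmshms release should no longer be listed
+    helm ls -n <namespace>
+    # no PMSH pods should remain
+    kubectl get pod -n <namespace>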
+
+
+
diff --git a/docs/development/devtools/clamp-policy.rst b/docs/development/devtools/clamp-policy.rst
new file mode 100644
index 00000000..72a9a1b1
--- /dev/null
+++ b/docs/development/devtools/clamp-policy.rst
@@ -0,0 +1,124 @@
+.. This work is licensed under a
+.. Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _clamp-pairwise-testing-label:
+
+.. toctree::
+ :maxdepth: 2
+
+CLAMP <-> Policy Core
+~~~~~~~~~~~~~~~~~~~~~
+
+Pairwise testing is executed against a default ONAP installation deployed with OOM.
+The CLAMP Control Loop runtime interacts with the Policy Framework to create and deploy policies.
+This test verifies that the interaction between the Policy Framework and the control loop components works as expected.
+
+General Setup
+*************
+
+The Kubernetes installation allocates the policy components across multiple worker-node VMs.
+The worker VM hosting the policy components has the following specification:
+
+- 16GB RAM
+- 8 VCPU
+- 160GB Ephemeral Disk
+
+
+The ONAP components used during the pairwise tests are:
+
+- CLAMP control loop runtime, policy participant and kubernetes participant.
+- DMaaP for the communication between the control loop runtime and the participants.
+- Policy API to create (and delete at the end of the tests) policies for each
+ scenario under test.
+- Policy PAP to deploy (and undeploy at the end of the tests) policies for each scenario under test.
+- Policy GUI for commissioning and instantiation of control loops.
+
+
+Testing procedure
+*****************
+
+The test set focuses on the following use cases:
+
+- Creation/deletion of policies
+- Deployment/undeployment of policies
+
+Creation of the Control Loop:
+-----------------------------
+A Control Loop is created by commissioning a TOSCA template containing the control loop definitions and then instantiating the Control Loop in the "UNINITIALISED" state.
+
+- Upload a TOSCA template from the Policy GUI. The definitions include a policy participant and a control loop element that creates and deploys the required policies. :download:`Sample Tosca template <tosca/pairwise-testing.yml>`
+
+ .. image:: images/cl-commission.png
+
+ Verification: The template is commissioned successfully without errors.
+
+- Instantiate the commissioned control loop from the Policy GUI under 'Instantiation Management'.
+
+ .. image:: images/create-instance.png
+
+ Update instance properties of the Control Loop Elements if required.
+
+ .. image:: images/update-instance.png
+
+ Verification: The control loop is created in the default state "UNINITIALISED" without errors.
+
+ .. image:: images/cl-instantiation.png
+
+
+Creation of policies:
+---------------------
+The Control Loop state is changed from "UNINITIALISED" to "PASSIVE" in the Policy GUI. Verify on the Policy API endpoint that the policy types defined in the TOSCA template have been created.
+
+.. image:: images/cl-passive.png
+
+Verification:
+
+- The policy types defined in the TOSCA template are created by the policy participant and listed in the Policy API.
+ Policy API endpoint: `<https://<POLICY-API-IP>/policy/api/v1/policytypes>`
+
+- The overall state of the Control Loop is changed to "PASSIVE" in the Policy GUI.
+
+.. image:: images/cl-create.png
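+
+As a hedged example, the policy types can also be checked from the command line against the endpoint above; the credentials are placeholders for whatever is configured in your deployment.
+
+.. code-block:: bash
+
+    curl -sk -u '<user>:<password>' "https://<POLICY-API-IP>/policy/api/v1/policytypes"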
+
+
+Deployment of policies:
+-----------------------
+The Control Loop state is changed from "PASSIVE" to "RUNNING" in the Policy GUI.
+
+.. image:: images/cl-running.png
+
+Verification:
+
+- The policy participant deploys the policies of the TOSCA control loop elements via Policy PAP to the relevant PDP groups.
+ Policy PAP endpoint: `<https://<POLICY-PAP-IP>/policy/pap/v1/pdps>`
+
+- The overall state of the Control Loop is changed to "RUNNING" in the Policy GUI.
+
+.. image:: images/cl-running-state.png
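+
+Similarly, the deployed policies can be checked per PDP group from the command line against the PAP endpoint above; the credentials are placeholders.
+
+.. code-block:: bash
+
+    curl -sk -u '<user>:<password>' "https://<POLICY-PAP-IP>/policy/pap/v1/pdps"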
+
+Deletion of Policies:
+---------------------
+The Control Loop state is changed from "RUNNING" to "PASSIVE" in the Policy GUI.
+
+Verification:
+
+- The policy participant deletes the created policy types, which can be verified on the Policy API. The policy types created as part of the control loop should no longer be listed in the Policy API.
+ Policy API endpoint: `<https://<POLICY-API-IP>/policy/api/v1/policytypes>`
+
+- The overall state of the Control Loop is changed to "PASSIVE" in the Policy GUI.
+
+.. image:: images/cl-create.png
+
+Undeployment of policies:
+-------------------------
+The Control Loop state is changed from "PASSIVE" to "UNINITIALISED" in the Policy GUI.
+
+Verification:
+
+- The policy participant undeploys the policies of the control loop elements from the PDP groups. The policies deployed as part of the control loop should no longer be listed in Policy PAP.
+ Policy PAP endpoint: `<https://<POLICY-PAP-IP>/policy/pap/v1/pdps>`
+
+- The overall state of the Control Loop is changed to "UNINITIALISED" in the Policy GUI.
+
+.. image:: images/cl-uninitialised-state.png
diff --git a/docs/development/devtools/clamp-smoke.rst b/docs/development/devtools/clamp-smoke.rst
new file mode 100644
index 00000000..06ec6db7
--- /dev/null
+++ b/docs/development/devtools/clamp-smoke.rst
@@ -0,0 +1,357 @@
+.. This work is licensed under a
+.. Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _policy-development-tools-label:
+
+CLAMP control loop runtime Smoke Tests
+######################################
+
+.. contents::
+ :depth: 3
+
+
+This article explains how to build the CLAMP control loop runtime for development purposes and how to run smoke tests for the control loop runtime. To start, the developer should consult the latest ONAP wiki to familiarise themselves with developer best practices and how-tos for setting up their environment, see `https://wiki.onap.org/display/DW/Developer+Best+Practices`.
+
+
+This article assumes that:
+
+* You are using a *\*nix* operating system such as Linux or macOS.
+* You are using a directory called *git* off your home directory *(~/git)* for your git repositories
+* Your local maven repository is in the location *~/.m2/repository*
+* You have copied the settings.xml from oparent to *~/.m2/* directory
+* You have added settings to access the ONAP Nexus to your M2 configuration, see `Maven Settings Example <https://wiki.onap.org/display/DW/Setting+Up+Your+Development+Environment>`_ (bottom of the linked page)
+
+The procedure documented in this article has been verified on an Ubuntu 20.04 LTS VM.
+
+Cloning CLAMP control loop runtime and all dependencies
+********************************************************
+
+Run a script such as the script below to clone the required modules from the `ONAP git repository <https://gerrit.onap.org/r/#/admin/projects/?filter=policy>`_. This script clones the CLAMP control loop runtime and all of its dependencies.
+
+The ONAP Policy Framework has dependencies on the ONAP Parent *oparent* module, the ONAP ECOMP SDK *ecompsdkos* module, and the A&AI Schema module.
+
+
+.. code-block:: bash
+ :caption: Typical ONAP Policy Framework Clone Script
+ :linenos:
+
+ #!/usr/bin/env bash
+
+ ## script name for output
+ MOD_SCRIPT_NAME=`basename $0`
+
+ ## the ONAP clone directory, defaults to "onap"
+ clone_dir="onap"
+
+ ## the ONAP repos to clone
+ onap_repos="\
+ policy/parent \
+ policy/common \
+ policy/models \
+ policy/clamp \
+ policy/docker "
+
+ ##
+ ## Help screen and exit condition (i.e. too few arguments)
+ ##
+ Help()
+ {
+ echo ""
+ echo "$MOD_SCRIPT_NAME - clones all required ONAP git repositories"
+ echo ""
+ echo " Usage: $MOD_SCRIPT_NAME [-options]"
+ echo ""
+ echo " Options"
+ echo "   -d - the ONAP clone directory, defaults to 'onap'"
+ echo " -h - this help screen"
+ echo ""
+ exit 255;
+ }
+
+ ##
+ ## read command line
+ ##
+ while [ $# -gt 0 ]
+ do
+ case $1 in
+ #-d ONAP clone directory
+ -d)
+ shift
+ if [ -z "$1" ]; then
+ echo "$MOD_SCRIPT_NAME: no clone directory"
+ exit 1
+ fi
+ clone_dir=$1
+ shift
+ ;;
+
+ #-h prints help and exits
+ -h)
+ Help;exit 0;;
+
+ *) echo "$MOD_SCRIPT_NAME: undefined CLI option - $1"; exit 255;;
+ esac
+ done
+
+ if [ -f "$clone_dir" ]; then
+ echo "$MOD_SCRIPT_NAME: requested clone directory '$clone_dir' exists as file"
+ exit 2
+ fi
+ if [ -d "$clone_dir" ]; then
+ echo "$MOD_SCRIPT_NAME: requested clone directory '$clone_dir' exists as directory"
+ exit 2
+ fi
+
+ mkdir $clone_dir
+ if [ $? != 0 ]
+ then
+ echo cannot clone ONAP repositories, could not create directory '"'$clone_dir'"'
+ exit 3
+ fi
+
+ for repo in $onap_repos
+ do
+ repoDir=`dirname "$repo"`
+ repoName=`basename "$repo"`
+
+ if [ ! -z "$repoDir" ]
+ then
+ mkdir -p "$clone_dir/$repoDir"
+ if [ $? != 0 ]
+ then
+ echo cannot clone ONAP repositories, could not create directory '"'$clone_dir/$repoDir'"'
+ exit 4
+ fi
+ fi
+
+ git clone https://gerrit.onap.org/r/${repo} $clone_dir/$repo
+ done
+
+ echo ONAP has been cloned into '"'$clone_dir'"'
+
+
+Execution of the script above results in the following directory hierarchy in your *~/git* directory:
+
+ * ~/git/onap
+ * ~/git/onap/policy
+ * ~/git/onap/policy/parent
+ * ~/git/onap/policy/common
+ * ~/git/onap/policy/models
+ * ~/git/onap/policy/clamp
+ * ~/git/onap/policy/docker
+
+
+Building CLAMP control loop runtime and all dependencies
+**********************************************************
+
+**Step 1:** Optionally, for a completely clean build, remove the ONAP built modules from your local repository.
+
+ .. code-block:: bash
+
+ rm -fr ~/.m2/repository/org/onap
+
+
+**Step 2:** A pom such as the one below can be used to build the ONAP Policy Framework modules. Create the *pom.xml* file in the directory *~/git/onap/policy*.
+
+.. code-block:: xml
+ :caption: Typical pom.xml to build the ONAP Policy Framework
+ :linenos:
+
+ <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+ <groupId>org.onap</groupId>
+ <artifactId>onap-policy</artifactId>
+ <version>1.0.0-SNAPSHOT</version>
+ <packaging>pom</packaging>
+ <name>${project.artifactId}</name>
+ <inceptionYear>2017</inceptionYear>
+ <organization>
+ <name>ONAP</name>
+ </organization>
+
+ <modules>
+ <module>parent</module>
+ <module>common</module>
+ <module>models</module>
+ <module>clamp</module>
+ </modules>
+ </project>
+
+
+**Step 3:** You can now build the Policy framework.
+
+Java artifacts only:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy
+ mvn -pl '!org.onap.policy.clamp:policy-clamp-runtime' install
+
+With docker images:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/clamp/packages/
+ mvn clean install -P docker
+
+Running MariaDb and DMaaP Simulator
+***********************************
+
+Running a MariaDb Instance
+++++++++++++++++++++++++++
+
+Assuming you have successfully built the codebase using the instructions above, there are two requirements for the CLAMP control loop runtime component to run. The first is a
+running MariaDB database instance. The easiest way to satisfy this is to run the docker image locally.
+
+An SQL script such as the one below can be used to initialise the database. Create the *mariadb.sql* file in the directory *~/git*.
+
+ .. code-block:: SQL
+
+ create database controlloop;
+ CREATE USER 'policy'@'%' IDENTIFIED BY 'P01icY';
+ GRANT ALL PRIVILEGES ON controlloop.* TO 'policy'@'%';
+
+
+Execution of the command below results in the creation and start of the *mariadb-smoke-test* container.
+
+ .. code-block:: bash
+
+ cd ~/git
+ docker run --name mariadb-smoke-test \
+ -p 3306:3306 \
+ -e MYSQL_ROOT_PASSWORD=my-secret-pw \
+ --mount type=bind,source=$HOME/git/mariadb.sql,target=/docker-entrypoint-initdb.d/data.sql \
+ mariadb:10.5.8
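+
+To verify the database initialisation, the mysql client inside the container can be used; this is a quick sanity check assuming the container name and credentials shown above.
+
+ .. code-block:: bash
+
+    # the "controlloop" database should be listed
+    docker exec -it mariadb-smoke-test mysql -upolicy -pP01icY -e "SHOW DATABASES;"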
+
+
+Running the DMaaP Simulator during Development
+++++++++++++++++++++++++++++++++++++++++++++++
+The second requirement for the CLAMP control loop runtime component to run is a running DMaaP simulator. You can run it from the command line using Maven.
+
+
+Change the local configuration file *src/test/resources/simParameters.json* to the following content:
+
+.. code-block:: json
+
+ {
+ "dmaapProvider": {
+ "name": "DMaaP simulator",
+ "topicSweepSec": 900
+ },
+ "restServers": [
+ {
+ "name": "DMaaP simulator",
+ "providerClass": "org.onap.policy.models.sim.dmaap.rest.DmaapSimRestControllerV1",
+ "host": "localhost",
+ "port": 3904,
+ "https": false
+ }
+ ]
+ }
+
+Run the following commands:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/models/models-sim/policy-models-simulators
+ mvn exec:java -Dexec.mainClass=org.onap.policy.models.simulators.Main -Dexec.args="src/test/resources/simParameters.json"
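+
+To check that the simulator is up, a topic can be written to and read back over the DMaaP Message Router API that the simulator emulates; the topic and consumer names below are arbitrary placeholders.
+
+ .. code-block:: bash
+
+    # publish a test message
+    curl -s -X POST -H "Content-Type: application/json" -d '{"ping":"pong"}' http://localhost:3904/events/SMOKE-TEST-TOPIC
+    # read it back
+    curl -s "http://localhost:3904/events/SMOKE-TEST-TOPIC/cg1/c1?timeout=1000"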
+
+
+Developing and Debugging CLAMP control loop runtime
+***************************************************
+
+Running on the Command Line using Maven
++++++++++++++++++++++++++++++++++++++++
+
+Once MariaDB and the DMaaP simulator are up and running, run the following commands:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/clamp/runtime-controlloop
+ mvn spring-boot:run
+
+
+Running on the Command Line
++++++++++++++++++++++++++++
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/clamp/runtime-controlloop
+ java -jar target/policy-clamp-runtime-controlloop-6.1.3-SNAPSHOT.jar
+
+
+Running in Eclipse
+++++++++++++++++++
+
+1. Check out the policy models repository
+2. Go to the *policy-clamp-runtime-controlloop* module in the clamp repo
+3. Specify a run configuration using the class *org.onap.policy.clamp.controlloop.runtime.Application* as the main class
+4. Run the configuration
+
+Swagger UI of Control loop runtime is available at *http://localhost:6969/onap/controlloop/swagger-ui/*, and swagger JSON at *http://localhost:6969/onap/controlloop/v2/api-docs/*
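+
+A quick way to confirm the runtime is up is to fetch the swagger JSON mentioned above; if basic authentication is enabled in your configuration, add ``-u '<user>:<password>'``.
+
+ .. code-block:: bash
+
+    curl -s http://localhost:6969/onap/controlloop/v2/api-docs/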
+
+
+Running one or more participant simulators
+++++++++++++++++++++++++++++++++++++++++++
+
+In *docker/csit/clamp/tests/data* you can find a test case with the policy participant. In order to use that test you can use the participant simulator.
+Copy the file *src/main/resources/config/application.yaml* into *src/test/resources/*, then change *participantId* and *participantType* as shown below:
+
+ .. code-block:: yaml
+
+ participantId:
+ name: org.onap.policy.controlloop.PolicyControlLoopParticipant
+ version: 2.3.1
+ participantType:
+ name: org.onap.PM_Policy
+ version: 1.0.0
+
+Run the following commands:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/clamp/participant/participant-impl/participant-impl-simulator
+ java -jar target/policy-clamp-participant-impl-simulator-6.1.3-SNAPSHOT.jar --spring.config.location=src/test/resources/application.yaml
+
+
+Creating self-signed certificate
+++++++++++++++++++++++++++++++++
+
+There is an additional requirement for the CLAMP control loop runtime docker image to run: an SSL self-signed certificate must be created.
+
+Run the following commands:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/docker/csit/
+ ./gen_truststore.sh
+ ./gen_keystore.sh
+
+Execution of the commands above creates the following additional files in the directory *~/git/onap/policy/docker/csit/config*:
+
+ * ~/git/onap/policy/docker/csit/config/cakey.pem
+ * ~/git/onap/policy/docker/csit/config/careq.pem
+ * ~/git/onap/policy/docker/csit/config/caroot.cer
+ * ~/git/onap/policy/docker/csit/config/ks.cer
+ * ~/git/onap/policy/docker/csit/config/ks.csr
+ * ~/git/onap/policy/docker/csit/config/ks.jks
+
+
+Running the CLAMP control loop runtime docker image
++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Run the following command:
+
+ .. code-block:: bash
+
+ docker run --name runtime-smoke-test \
+ -p 6969:6969 \
+ -e mariadb.host=host.docker.internal \
+ -e topicServer=host.docker.internal \
+ --mount type=bind,source=$HOME/git/onap/policy/docker/csit/config/ks.jks,target=/opt/app/policy/clamp/etc/ssl/policy-keystore \
+ --mount type=bind,source=$HOME/git/onap/policy/clamp/runtime-controlloop/src/main/resources/application.yaml,target=/opt/app/policy/clamp/etc/ClRuntimeParameters.yaml \
+ onap/policy-clamp-cl-runtime
+
+
+Swagger UI of Control loop runtime is available at *https://localhost:6969/onap/controlloop/swagger-ui/*, and swagger JSON at *https://localhost:6969/onap/controlloop/v2/api-docs/*
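+
+As with the local run, the swagger JSON can be fetched to confirm the container is up; ``-k`` is needed because the certificate generated above is self-signed, and any credentials are placeholders.
+
+ .. code-block:: bash
+
+    curl -sk https://localhost:6969/onap/controlloop/v2/api-docs/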
diff --git a/docs/development/devtools/db-migrator-smoke.rst b/docs/development/devtools/db-migrator-smoke.rst
new file mode 100644
index 00000000..4aa41e46
--- /dev/null
+++ b/docs/development/devtools/db-migrator-smoke.rst
@@ -0,0 +1,413 @@
+.. This work is licensed under a Creative Commons Attribution
+.. 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+Policy DB Migrator Smoke Tests
+##############################
+
+Prerequisites
+*************
+
+Check the number of files in each release:
+
+.. code::
+ :number-lines:
+
+ ls 0800/upgrade/*.sql | wc -l = 96
+ ls 0900/upgrade/*.sql | wc -l = 13
+ ls 0800/downgrade/*.sql | wc -l = 96
+ ls 0900/downgrade/*.sql | wc -l = 13
+
+Upgrade scripts
+===============
+
+.. code::
+ :number-lines:
+
+ /opt/app/policy/bin/prepare_upgrade.sh policyadmin
+ /opt/app/policy/bin/db-migrator -s policyadmin -o upgrade
+
+.. note::
+ You can also run db-migrator upgrade with the -t and -f options
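+
+For example, an upgrade pinned to explicit from/to versions (mirroring the releases used in these tests) would look like the following sketch:
+
+.. code::
+
+   /opt/app/policy/bin/db-migrator -s policyadmin -o upgrade -f 0800 -t 0900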
+
+Downgrade scripts
+=================
+
+.. code::
+ :number-lines:
+
+ /opt/app/policy/bin/prepare_downgrade.sh policyadmin
+ /opt/app/policy/bin/db-migrator -s policyadmin -o downgrade -f 0900 -t 0800
+
+Db migrator initialization script
+=================================
+
+Update /oom/kubernetes/policy/resources/config/db_migrator_policy_init.sh with the appropriate upgrade/downgrade calls.
+
+The policy version you are deploying should either be an upgrade or downgrade from the current db migrator schema version.
+
+Every time you modify db_migrator_policy_init.sh you will have to undeploy, make and redeploy before updates are applied.
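+
+As a sketch, for an upgrade run the relevant part of db_migrator_policy_init.sh simply contains the two lines from the "Upgrade scripts" section above; the rest of the script is left untouched.
+
+.. code::
+
+   /opt/app/policy/bin/prepare_upgrade.sh policyadmin
+   /opt/app/policy/bin/db-migrator -s policyadmin -o upgrade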
+
+1. Fresh Install
+****************
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 109
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 109
+ * - schema_version
+ - 0900
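+
+The figures above can be cross-checked from the database; these queries are a sketch that assumes the migrator's bookkeeping tables live in the *migration* schema, as described in test 4 below.
+
+.. code::
+   :number-lines:
+
+   SELECT count(*) FROM migration.policyadmin_schema_changelog;
+   SELECT * FROM migration.schema_versions;
+   SELECT count(*) FROM information_schema.tables WHERE table_schema = 'policyadmin';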
+
+2. Downgrade to Honolulu (0800)
+*******************************
+
+Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
+
+Make/Redeploy to run downgrade.
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 13
+ * - Tables in policyadmin
+ - 73
+ * - Records Added
+ - 13
+ * - schema_version
+ - 0800
+
+3. Upgrade to Istanbul (0900)
+*****************************
+
+Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts".
+
+Make/Redeploy to run upgrade.
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 13
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 13
+ * - schema_version
+ - 0900
+
+4. Upgrade to Istanbul (0900) without any information in the migration schema
+*****************************************************************************
+
+Ensure you are on release 0800. (This may require running a downgrade before starting the test)
+
+Drop db-migrator tables in migration schema:
+
+.. code::
+ :number-lines:
+
+ DROP TABLE schema_versions;
+ DROP TABLE policyadmin_schema_changelog;
+
+Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts".
+
+Make/Redeploy to run upgrade.
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 13
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 13
+ * - schema_version
+ - 0900
+
+5. Upgrade to Istanbul (0900) after failed downgrade
+****************************************************
+
+Ensure you are on release 0900.
+
+Rename pdpstatistics table in policyadmin schema:
+
+.. code::
+
+ RENAME TABLE pdpstatistics TO backup_pdpstatistics;
+
+Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
+
+Make/Redeploy to run downgrade
+
+This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)
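+
+The failure can be confirmed directly in the changelog table; the most recent row should have a success value of 0 (schema name assumed to be *migration* as above).
+
+.. code::
+
+   SELECT * FROM migration.policyadmin_schema_changelog;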
+
+Rename the backup_pdpstatistics table in the policyadmin schema:
+
+.. code::
+
+ RENAME TABLE backup_pdpstatistics TO pdpstatistics;
+
+Modify db_migrator_policy_init.sh - Remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts"
+
+Make/Redeploy to run upgrade
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 11
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 11
+ * - schema_version
+ - 0900
+
+6. Downgrade to Honolulu (0800) after failed downgrade
+******************************************************
+
+Ensure you are on release 0900.
+
+Add a timeStamp column to jpapdpstatistics_enginestats:
+
+.. code::
+
+ ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN timeStamp datetime DEFAULT NULL NULL AFTER UPTIME;
+
+Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
+
+Make/Redeploy to run downgrade
+
+This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)
+
+Remove timeStamp column from jpapdpstatistics_enginestats:
+
+.. code::
+
+ ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp;
+
+The config job will retry 5 times. If you make your fix before this limit is reached you won't need to redeploy.
+
+Redeploy to run downgrade
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 14
+ * - Tables in policyadmin
+ - 73
+ * - Records Added
+ - 14
+ * - schema_version
+ - 0800
+
+7. Downgrade to Honolulu (0800) after failed upgrade
+****************************************************
+
+Ensure you are on release 0800.
+
+Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts"
+
+Update pdpstatistics:
+
+.. code::
+
+ ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL NULL AFTER POLICYEXECUTEDSUCCESSCOUNT;
+
+Make/Redeploy to run upgrade
+
+This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)
+
+Once the retry count has been reached, update pdpstatistics:
+
+.. code::
+
+ ALTER TABLE pdpstatistics DROP COLUMN POLICYUNDEPLOYCOUNT;
+
+Modify db_migrator_policy_init.sh - Remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
+
+Make/Redeploy to run downgrade
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 7
+ * - Tables in policyadmin
+ - 73
+ * - Records Added
+ - 7
+ * - schema_version
+ - 0800
+
+8. Upgrade to Istanbul (0900) after failed upgrade
+**************************************************
+
+Ensure you are on release 0800.
+
+Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts"
+
+Update PDP table:
+
+.. code::
+
+ ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY;
+
+Make/Redeploy to run upgrade
+
+This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)
+
+Update PDP table:
+
+.. code::
+
+ ALTER TABLE pdp DROP COLUMN LASTUPDATE;
+
+The config job will retry 5 times. If you make your fix before this limit is reached you won't need to redeploy.
+
+Redeploy to run upgrade
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 14
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 14
+ * - schema_version
+ - 0900
+
+9. Downgrade to Honolulu (0800) with data in pdpstatistics and jpapdpstatistics_enginestats
+*******************************************************************************************
+
+Ensure you are on release 0900.
+
+Check pdpstatistics and jpapdpstatistics_enginestats are populated with data.
+
+.. code::
+ :number-lines:
+
+ SELECT count(*) FROM pdpstatistics;
+ SELECT count(*) FROM jpapdpstatistics_enginestats;
+
+Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
+
+Make/Redeploy to run downgrade
+
+Check the tables to ensure the number of records is the same.
+
+.. code::
+ :number-lines:
+
+ SELECT count(*) FROM pdpstatistics;
+ SELECT count(*) FROM jpapdpstatistics_enginestats;
+
+Check pdpstatistics to ensure the primary key has changed:
+
+.. code::
+
+ SELECT column_name, constraint_name FROM information_schema.key_column_usage WHERE table_name='pdpstatistics';
+
+Check jpapdpstatistics_enginestats to ensure id column has been dropped and timestamp column added.
+
+.. code::
+
+ SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'jpapdpstatistics_enginestats';
+
+Check the pdp table to ensure the LASTUPDATE column has been dropped.
+
+.. code::
+
+ SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'pdp';
+
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 13
+ * - Tables in policyadmin
+ - 73
+ * - Records Added
+ - 13
+ * - schema_version
+ - 0800
+
+10. Upgrade to Istanbul (0900) with data in pdpstatistics and jpapdpstatistics_enginestats
+******************************************************************************************
+
+Ensure you are on release 0800.
+
+Check pdpstatistics and jpapdpstatistics_enginestats are populated with data.
+
+.. code::
+ :number-lines:
+
+ SELECT count(*) FROM pdpstatistics;
+ SELECT count(*) FROM jpapdpstatistics_enginestats;
+
+Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts"
+
+Make/Redeploy to run upgrade
+
+Check the tables to ensure the number of records is the same.
+
+.. code::
+ :number-lines:
+
+ SELECT count(*) FROM pdpstatistics;
+ SELECT count(*) FROM jpapdpstatistics_enginestats;
+
+Check pdpstatistics to ensure the primary key has changed:
+
+.. code::
+
+ SELECT column_name, constraint_name FROM information_schema.key_column_usage WHERE table_name='pdpstatistics';
+
+Check jpapdpstatistics_enginestats to ensure timestamp column has been dropped and id column added.
+
+.. code::
+
+ SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'jpapdpstatistics_enginestats';
+
+Check the pdp table to ensure the LASTUPDATE column has been added and the value has defaulted to the CURRENT_TIMESTAMP.
+
+.. code::
+
+ SELECT table_name, column_name, data_type, column_default FROM information_schema.columns WHERE table_name = 'pdp';
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 13
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 13
+ * - schema_version
+ - 0900
+
+.. note::
+ The number of records added may vary depending on the number of retries.
+
+End of Document
diff --git a/docs/development/devtools/devtools.rst b/docs/development/devtools/devtools.rst
index 0654b3a5..dff8819d 100644
--- a/docs/development/devtools/devtools.rst
+++ b/docs/development/devtools/devtools.rst
@@ -276,6 +276,12 @@ familiar with the Policy Framework components and test any local changes.
.. toctree::
:maxdepth: 1
+ policy-gui-controlloop-smoke.rst
+
+ db-migrator-smoke.rst
..
api-smoke.rst
@@ -326,6 +332,10 @@ the Policy Framework works in a full ONAP deployment.
.. toctree::
:maxdepth: 1
+ clamp-policy.rst
+
+ clamp-dcae.rst
+
..
api-pairwise.rst
@@ -344,9 +354,6 @@ the Policy Framework works in a full ONAP deployment.
..
distribution-pairwise.rst
-..
- clamp-pairwise.rst
-
Generating Swagger Documentation
********************************
diff --git a/docs/development/devtools/images/cl-commission.png b/docs/development/devtools/images/cl-commission.png
new file mode 100644
index 00000000..ee1bab17
--- /dev/null
+++ b/docs/development/devtools/images/cl-commission.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-create.png b/docs/development/devtools/images/cl-create.png
new file mode 100644
index 00000000..df97a170
--- /dev/null
+++ b/docs/development/devtools/images/cl-create.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-instantiation.png b/docs/development/devtools/images/cl-instantiation.png
new file mode 100644
index 00000000..b1101ffb
--- /dev/null
+++ b/docs/development/devtools/images/cl-instantiation.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-passive.png b/docs/development/devtools/images/cl-passive.png
new file mode 100644
index 00000000..def811a5
--- /dev/null
+++ b/docs/development/devtools/images/cl-passive.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-running-state.png b/docs/development/devtools/images/cl-running-state.png
new file mode 100644
index 00000000..ab7b73c5
--- /dev/null
+++ b/docs/development/devtools/images/cl-running-state.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-running.png b/docs/development/devtools/images/cl-running.png
new file mode 100644
index 00000000..e9730e0d
--- /dev/null
+++ b/docs/development/devtools/images/cl-running.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-uninitialise.png b/docs/development/devtools/images/cl-uninitialise.png
new file mode 100644
index 00000000..d10b214c
--- /dev/null
+++ b/docs/development/devtools/images/cl-uninitialise.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-uninitialised-state.png b/docs/development/devtools/images/cl-uninitialised-state.png
new file mode 100644
index 00000000..f8a77da8
--- /dev/null
+++ b/docs/development/devtools/images/cl-uninitialised-state.png
Binary files differ
diff --git a/docs/development/devtools/images/create-instance.png b/docs/development/devtools/images/create-instance.png
new file mode 100644
index 00000000..3b3c0c21
--- /dev/null
+++ b/docs/development/devtools/images/create-instance.png
Binary files differ
diff --git a/docs/development/devtools/images/update-instance.png b/docs/development/devtools/images/update-instance.png
new file mode 100644
index 00000000..fa1ee095
--- /dev/null
+++ b/docs/development/devtools/images/update-instance.png
Binary files differ
diff --git a/docs/development/devtools/tosca/pairwise-testing.yml b/docs/development/devtools/tosca/pairwise-testing.yml
new file mode 100644
index 00000000..e6c25d0d
--- /dev/null
+++ b/docs/development/devtools/tosca/pairwise-testing.yml
@@ -0,0 +1,996 @@
+tosca_definitions_version: tosca_simple_yaml_1_3
+data_types:
+ onap.datatypes.ToscaConceptIdentifier:
+ derived_from: tosca.datatypes.Root
+ properties:
+ name:
+ type: string
+ required: true
+ version:
+ type: string
+ required: true
+ onap.datatype.controlloop.Target:
+ derived_from: tosca.datatypes.Root
+ description: Definition for an entity in A&AI to perform a control loop operation on
+ properties:
+ targetType:
+ type: string
+ description: Category for the target type
+ required: true
+ constraints:
+ - valid_values:
+ - VNF
+ - VM
+ - VFMODULE
+ - PNF
+ entityIds:
+ type: map
+ description: |
+ Map of values that identify the resource. If none are provided, it is assumed that the
+ entity that generated the ONSET event will be the target.
+ required: false
+ metadata:
+ clamp_possible_values: ClampExecution:CSAR_RESOURCES
+ entry_schema:
+ type: string
+ onap.datatype.controlloop.Actor:
+ derived_from: tosca.datatypes.Root
+ description: An actor/operation/target definition
+ properties:
+ actor:
+ type: string
+ description: The actor performing the operation.
+ required: true
+ metadata:
+ clamp_possible_values: Dictionary:DefaultActors,ClampExecution:CDS/actor
+ operation:
+ type: string
+ description: The operation the actor is performing.
+ metadata:
+ clamp_possible_values: Dictionary:DefaultOperations,ClampExecution:CDS/operation
+ required: true
+ target:
+ type: onap.datatype.controlloop.Target
+ description: The resource the operation should be performed on.
+ required: true
+ payload:
+ type: map
+ description: Name/value pairs of payload information passed by Policy to the actor
+ required: false
+ metadata:
+ clamp_possible_values: ClampExecution:CDS/payload
+ entry_schema:
+ type: string
+ onap.datatype.controlloop.Operation:
+ derived_from: tosca.datatypes.Root
+ description: An operation supported by an actor
+ properties:
+ id:
+ type: string
+ description: Unique identifier for the operation
+ required: true
+ description:
+ type: string
+ description: A user-friendly description of the intent for the operation
+ required: false
+ operation:
+ type: onap.datatype.controlloop.Actor
+ description: The definition of the operation to be performed.
+ required: true
+ timeout:
+ type: integer
+ description: The amount of time for the actor to perform the operation.
+ required: true
+ retries:
+ type: integer
+ description: The number of retries the actor should attempt to perform the operation.
+ required: true
+ default: 0
+ success:
+ type: string
+ description: Points to the operation to invoke on success. A value of "final_success" indicates an end to the operation.
+ required: false
+ default: final_success
+ failure:
+ type: string
+ description: Points to the operation to invoke on Actor operation failure.
+ required: false
+ default: final_failure
+ failure_timeout:
+ type: string
+ description: Points to the operation to invoke when the time out for the operation occurs.
+ required: false
+ default: final_failure_timeout
+ failure_retries:
+ type: string
+ description: Points to the operation to invoke when the current operation has exceeded its max retries.
+ required: false
+ default: final_failure_retries
+ failure_exception:
+ type: string
+ description: Points to the operation to invoke when the current operation causes an exception.
+ required: false
+ default: final_failure_exception
+ failure_guard:
+ type: string
+ description: Points to the operation to invoke when the current operation is blocked due to guard policy enforcement.
+ required: false
+ default: final_failure_guard
+ onap.datatypes.monitoring.managedObjectDNsBasic:
+ constraints: []
+ properties:
+ DN:
+ name: DN
+ type: string
+ typeVersion: 0.0.0
+ description: Managed object distinguished name
+ required: true
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.managedObjectDNsBasic
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.managedObjectDNsBasics:
+ constraints: []
+ properties:
+ managedObjectDNsBasic:
+ name: managedObjectDNsBasic
+ type: map
+ typeVersion: 0.0.0
+ description: Managed object distinguished name object
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.managedObjectDNsBasic
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.managedObjectDNsBasics
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.measurementGroup:
+ constraints: []
+ properties:
+ measurementTypes:
+ name: measurementTypes
+ type: list
+ typeVersion: 0.0.0
+ description: List of measurement types
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.measurementTypes
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ managedObjectDNsBasic:
+ name: managedObjectDNsBasic
+ type: list
+ typeVersion: 0.0.0
+ description: List of managed object distinguished names
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.managedObjectDNsBasics
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.measurementGroup
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.measurementGroups:
+ constraints: []
+ properties:
+ measurementGroup:
+ name: measurementGroup
+ type: map
+ typeVersion: 0.0.0
+ description: Measurement Group
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.measurementGroup
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.measurementGroups
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.measurementType:
+ constraints: []
+ properties:
+ measurementType:
+ name: measurementType
+ type: string
+ typeVersion: 0.0.0
+ description: Measurement type
+ required: true
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.measurementType
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.measurementTypes:
+ constraints: []
+ properties:
+ measurementType:
+ name: measurementType
+ type: map
+ typeVersion: 0.0.0
+ description: Measurement type object
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.measurementType
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.measurementTypes
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.nfFilter:
+ constraints: []
+ properties:
+ modelNames:
+ name: modelNames
+ type: list
+ typeVersion: 0.0.0
+ description: List of model names
+ required: true
+ constraints: []
+ entry_schema:
+ type: string
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ modelInvariantIDs:
+ name: modelInvariantIDs
+ type: list
+ typeVersion: 0.0.0
+ description: List of model invariant IDs
+ required: true
+ constraints: []
+ entry_schema:
+ type: string
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ modelVersionIDs:
+ name: modelVersionIDs
+ type: list
+ typeVersion: 0.0.0
+ description: List of model version IDs
+ required: true
+ constraints: []
+ entry_schema:
+ type: string
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ nfNames:
+ name: nfNames
+ type: list
+ typeVersion: 0.0.0
+ description: List of network functions
+ required: true
+ constraints: []
+ entry_schema:
+ type: string
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.nfFilter
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.subscription:
+ constraints: []
+ properties:
+ measurementGroups:
+ name: measurementGroups
+ type: list
+ typeVersion: 0.0.0
+ description: Measurement Groups
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.measurementGroups
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ fileBasedGP:
+ name: fileBasedGP
+ type: integer
+ typeVersion: 0.0.0
+ description: File based granularity period
+ required: true
+ constraints: []
+ metadata: {}
+ fileLocation:
+ name: fileLocation
+ type: string
+ typeVersion: 0.0.0
+ description: ROP file location
+ required: true
+ constraints: []
+ metadata: {}
+ subscriptionName:
+ name: subscriptionName
+ type: string
+ typeVersion: 0.0.0
+ description: Name of the subscription
+ required: true
+ constraints: []
+ metadata: {}
+ administrativeState:
+ name: administrativeState
+ type: string
+ typeVersion: 0.0.0
+ description: State of the subscription
+ required: true
+ constraints:
+ - valid_values:
+ - LOCKED
+ - UNLOCKED
+ metadata: {}
+ nfFilter:
+ name: nfFilter
+ type: map
+ typeVersion: 0.0.0
+ description: Network function filter
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.nfFilter
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.subscription
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.RestRequest:
+ version: 1.0.0
+ derived_from: tosca.datatypes.Root
+ properties:
+ restRequestId:
+ type: onap.datatypes.ToscaConceptIdentifier
+ typeVersion: 1.0.0
+ required: true
+ description: The name and version of a REST request to be sent to a REST endpoint
+ httpMethod:
+ type: string
+ required: true
+ constraints:
+ - valid_values: [POST, PUT, GET, DELETE]
+ description: The REST method to use
+ path:
+ type: string
+ required: true
+ description: The path of the REST request relative to the base URL
+ body:
+ type: string
+ required: false
+ description: The body of the REST request for PUT and POST requests
+ expectedResponse:
+ type: integer
+ required: true
+ constraints:
+ - in_range: [100, 599]
+ description: The expected HTTP status code for the REST request
+ org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.ConfigurationEntity:
+ version: 1.0.0
+ derived_from: tosca.datatypes.Root
+ properties:
+ configurationEntityId:
+ type: onap.datatypes.ToscaConceptIdentifier
+ typeVersion: 1.0.0
+ required: true
+ description: The name and version of a Configuration Entity to be handled by the HTTP Control Loop Element
+ restSequence:
+ type: list
+ entry_schema:
+ type: org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.RestRequest
+ typeVersion: 1.0.0
+ description: A sequence of REST commands to send to the REST endpoint
+policy_types:
+ onap.policies.Monitoring:
+ derived_from: tosca.policies.Root
+ description: a base policy type for all policies that govern monitoring provisioning
+ version: 1.0.0
+ name: onap.policies.Monitoring
+ onap.policies.Sirisha:
+ derived_from: tosca.policies.Root
+ description: a base policy type for all policies that govern monitoring provisioning
+ version: 1.0.0
+ name: onap.policies.Sirisha
+ onap.policies.monitoring.dcae-pm-subscription-handler:
+ properties:
+ pmsh_policy:
+ name: pmsh_policy
+ type: onap.datatypes.monitoring.subscription
+ typeVersion: 0.0.0
+ description: PMSH Policy JSON
+ required: false
+ constraints: []
+ metadata: {}
+ name: onap.policies.monitoring.dcae-pm-subscription-handler
+ version: 1.0.0
+ derived_from: onap.policies.Monitoring
+ metadata: {}
+ onap.policies.controlloop.operational.Common:
+ derived_from: tosca.policies.Root
+ version: 1.0.0
+ name: onap.policies.controlloop.operational.Common
+ description: |
+ Operational Policy for Control Loop execution. Originated in Frankfurt to support TOSCA Compliant
+ Policy Types. This does NOT support the legacy Policy YAML policy type.
+ properties:
+ id:
+ type: string
+ description: The unique control loop id.
+ required: true
+ timeout:
+ type: integer
+ description: |
+ Overall timeout for executing all the operations. This timeout should equal or exceed the total
+ timeout for each operation listed.
+ required: true
+ abatement:
+ type: boolean
+ description: Whether an abatement event message will be expected for the control loop from DCAE.
+ required: true
+ default: false
+ trigger:
+ type: string
+ description: Initial operation to execute upon receiving an Onset event message for the Control Loop.
+ required: true
+ operations:
+ type: list
+ description: List of operations to be performed when Control Loop is triggered.
+ required: true
+ entry_schema:
+ type: onap.datatype.controlloop.Operation
+ onap.policies.controlloop.operational.common.Apex:
+ derived_from: onap.policies.controlloop.operational.Common
+ type_version: 1.0.0
+ version: 1.0.0
+ name: onap.policies.controlloop.operational.common.Apex
+ description: Operational policies for Apex PDP
+ properties:
+ engineServiceParameters:
+ type: string
+ description: The engine parameters like name, instanceCount, policy implementation, parameters etc.
+ required: true
+ eventInputParameters:
+ type: string
+ description: The event input parameters.
+ required: true
+ eventOutputParameters:
+ type: string
+ description: The event output parameters.
+ required: true
+ javaProperties:
+ type: string
+ description: Name/value pairs of properties to be set for APEX if needed.
+ required: false
+node_types:
+ org.onap.policy.clamp.controlloop.Participant:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+ required: false
+ org.onap.policy.clamp.controlloop.ControlLoopElement:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+ required: false
+ metadata:
+ common: true
+ description: Specifies the organization that provides the control loop element
+ participant_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ metadata:
+ common: true
+ participantType:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ metadata:
+ common: true
+ description: The identity of the participant type that hosts this type of Control Loop Element
+ startPhase:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ metadata:
+ common: true
+ description: A value indicating the start phase in which this control loop element will be started, the
+ first start phase is zero. Control Loop Elements are started in their start_phase order and stopped
+ in reverse start phase order. Control Loop Elements with the same start phase are started and
+ stopped simultaneously
+ uninitializedToPassiveTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+ description: The maximum time in seconds to wait for a state change from uninitialized to passive
+ passiveToRunningTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+ description: The maximum time in seconds to wait for a state change from passive to running
+ runningToPassiveTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+ description: The maximum time in seconds to wait for a state change from running to passive
+ passiveToUninitializedTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+ description: The maximum time in seconds to wait for a state change from passive to uninitialized
+ org.onap.policy.clamp.controlloop.ControlLoop:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+ required: false
+ metadata:
+ common: true
+ description: Specifies the organization that provides the control loop element
+ elements:
+ type: list
+ required: true
+ metadata:
+ common: true
+ entry_schema:
+ type: onap.datatypes.ToscaConceptIdentifier
+ description: Specifies a list of control loop element definitions that make up this control loop definition
+ org.onap.policy.clamp.controlloop.PolicyControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.ControlLoopElement
+ properties:
+ policy_type_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ policy_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: false
+ org.onap.policy.clamp.controlloop.DerivedPolicyControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.PolicyControlLoopElement
+ properties:
+ policy_type_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ policy_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: false
+ org.onap.policy.clamp.controlloop.DerivedDerivedPolicyControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.DerivedPolicyControlLoopElement
+ properties:
+ policy_type_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ policy_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: false
+ org.onap.policy.clamp.controlloop.CDSControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.ControlLoopElement
+ properties:
+ cds_blueprint_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.ControlLoopElement
+ properties:
+ chart:
+ type: string
+ required: true
+ configs:
+ type: list
+ required: false
+ requirements:
+ type: string
+ required: false
+ templates:
+ type: list
+ required: false
+ entry_schema:
+ values:
+ type: string
+ required: true
+ org.onap.policy.clamp.controlloop.HttpControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.ControlLoopElement
+ properties:
+ baseUrl:
+ type: string
+ required: true
+ description: The base URL to be prepended to each path, identifies the host for the REST endpoints.
+ httpHeaders:
+ type: map
+ required: false
+ entry_schema:
+ type: string
+ description: HTTP headers to send on REST requests
+ configurationEntities:
+ type: map
+ required: true
+ entry_schema:
+ type: org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.ConfigurationEntity
+ typeVersion: 1.0.0
+ description: The configuration entities the Control Loop Element is managing and their associated REST requests
+
+topology_template:
+ inputs:
+ pmsh_monitoring_policy:
+ type: onap.datatypes.ToscaConceptIdentifier
+ description: The ID of the PMSH monitoring policy to use
+ default:
+ name: MICROSERVICE_vLoadBalancerMS_v1_0_dcae-pm-subscription-handler_1_0_0test
+ version: 1.0.0
+ pmsh_operational_policy:
+ type: onap.datatypes.ToscaConceptIdentifier
+ description: The ID of the PMSH operational policy to use
+ default:
+ name: operational.apex.pmcontrol
+ version: 1.0.0
+ node_templates:
+ org.onap.policy.controlloop.PolicyControlLoopParticipant:
+ version: 2.3.1
+ type: org.onap.policy.clamp.controlloop.Participant
+ type_version: 1.0.1
+ description: Participant for DCAE microservices
+ properties:
+ provider: ONAP
+ org.onap.domain.pmsh.PMSH_MonitoringPolicyControlLoopElement:
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.PolicyControlLoopElement
+ type_version: 1.0.0
+ description: Control loop element for the monitoring policy for Performance Management Subscription Handling
+ properties:
+ provider: Ericsson
+ participant_id:
+ name: org.onap.PM_Policy
+ version: 1.0.0
+ participantType:
+ name: org.onap.policy.controlloop.PolicyControlLoopParticipant
+ version: 2.3.1
+ policy_type_id:
+ name: onap.policies.monitoring.pm-subscription-handler
+ version: 1.0.0
+ policy_id:
+ get_input: pmsh_monitoring_policy
+ org.onap.domain.pmsh.PMSH_OperationalPolicyControlLoopElement:
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.PolicyControlLoopElement
+ type_version: 1.0.0
+ description: Control loop element for the operational policy for Performance Management Subscription Handling
+ properties:
+ provider: Ericsson
+ participant_id:
+ name: org.onap.PM_Policy
+ version: 1.0.0
+ participantType:
+ name: org.onap.policy.controlloop.PolicyControlLoopParticipant
+ version: 2.3.1
+ policy_type_id:
+ name: onap.policies.operational.pm-subscription-handler
+ version: 1.0.0
+ policy_id:
+ get_input: pmsh_operational_policy
+ org.onap.k8s.controlloop.K8SControlLoopParticipant:
+ version: 2.3.4
+ type: org.onap.policy.clamp.controlloop.Participant
+ type_version: 1.0.1
+ description: Participant for K8S
+ properties:
+ provider: ONAP
+ org.onap.domain.database.PMSH_K8SMicroserviceControlLoopElement:
+ # Chart from new repository
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement
+ type_version: 1.0.0
+ description: Control loop element for the K8S microservice for PMSH
+ properties:
+ provider: ONAP
+ participant_id:
+ name: K8sParticipant0
+ version: 1.0.0
+ participantType:
+ name: org.onap.k8s.controlloop.K8SControlLoopParticipant
+ version: 2.3.4
+ chart:
+ chartId:
+ name: dcae-pmsh
+ version: 8.0.0
+ namespace: onap
+ releaseName: pmshms
+ repository:
+ repoName: chartmuseum
+ protocol: http
+ address: chart-museum
+ port: 80
+ userName: onapinitializer
+ password: demo123456!
+ overrideParams:
+ global.masterPassword: test
+
+ org.onap.domain.database.Local_K8SMicroserviceControlLoopElement:
+ # Chart installation without passing repository info
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement
+ type_version: 1.0.0
+ description: Control loop element for the K8S microservice for local chart
+ properties:
+ provider: ONAP
+ participant_id:
+ name: K8sParticipant0
+ version: 1.0.0
+ participantType:
+ name: org.onap.k8s.controlloop.K8SControlLoopParticipant
+ version: 2.3.4
+ chart:
+ chartId:
+ name: nginx-ingress
+ version: 0.9.1
+ releaseName: nginxms
+ namespace: test
+ org.onap.controlloop.HttpControlLoopParticipant:
+ version: 2.3.4
+ type: org.onap.policy.clamp.controlloop.Participant
+ type_version: 1.0.1
+ description: Participant for Http requests
+ properties:
+ provider: ONAP
+ org.onap.domain.database.Http_PMSHMicroserviceControlLoopElement:
+ # Consul http config for PMSH.
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.HttpControlLoopElement
+ type_version: 1.0.1
+ description: Control loop element for the http requests of PMSH microservice
+ properties:
+ provider: ONAP
+ participant_id:
+ name: HttpParticipant0
+ version: 1.0.0
+ participantType:
+ name: org.onap.k8s.controlloop.HttpControlLoopParticipant
+ version: 2.3.4
+ uninitializedToPassiveTimeout: 180
+ startPhase: 1
+ baseUrl: http://consul-server-ui:8500
+ httpHeaders:
+ Content-Type: application/json
+ configurationEntities:
+ - configurationEntityId:
+ name: entity1
+ version: 1.0.1
+ restSequence:
+ - restRequestId:
+ name: request1
+ version: 1.0.1
+ httpMethod: PUT
+ path: v1/kv/dcae-pmsh2
+ body: '{
+ "control_loop_name":"pmsh-control-loop",
+ "operational_policy_name":"pmsh-operational-policy",
+ "aaf_password":"demo123456!",
+ "aaf_identity":"dcae@dcae.onap.org",
+ "cert_path":"/opt/app/pmsh/etc/certs/cert.pem",
+ "key_path":"/opt/app/pmsh/etc/certs/key.pem",
+ "ca_cert_path":"/opt/app/pmsh/etc/certs/cacert.pem",
+ "enable_tls":"true",
+ "pmsh_policy":{
+ "subscription":{
+ "subscriptionName":"ExtraPM-All-gNB-R2B",
+ "administrativeState":"UNLOCKED",
+ "fileBasedGP":15,
+ "fileLocation":"\/pm\/pm.xml",
+ "nfFilter":{
+ "nfNames":[
+ "^pnf.*",
+ "^vnf.*"
+ ],
+ "modelInvariantIDs":[
+ ],
+ "modelVersionIDs":[
+ ],
+ "modelNames":[
+ ]
+ },
+ "measurementGroups":[
+ {
+ "measurementGroup":{
+ "measurementTypes":[
+ {
+ "measurementType":"countera"
+ },
+ {
+ "measurementType":"counterb"
+ }
+ ],
+ "managedObjectDNsBasic":[
+ {
+ "DN":"dna"
+ },
+ {
+ "DN":"dnb"
+ }
+ ]
+ }
+ },
+ {
+ "measurementGroup":{
+ "measurementTypes":[
+ {
+ "measurementType":"counterc"
+ },
+ {
+ "measurementType":"counterd"
+ }
+ ],
+ "managedObjectDNsBasic":[
+ {
+ "DN":"dnc"
+ },
+ {
+ "DN":"dnd"
+ }
+ ]
+ }
+ }
+ ]
+ }
+ },
+ "streams_subscribes":{
+ "aai_subscriber":{
+ "type":"message_router",
+ "dmaap_info":{
+ "topic_url":"https://10.152.183.151:3905/events/AAI_EVENT",
+ "client_role":"org.onap.dcae.aaiSub",
+ "location":"san-francisco",
+ "client_id":"1575976809466"
+ }
+ },
+ "policy_pm_subscriber":{
+ "type":"message_router",
+ "dmaap_info":{
+ "topic_url":"https://10.152.183.151:3905/events/org.onap.dmaap.mr.PM_SUBSCRIPTIONS",
+ "client_role":"org.onap.dcae.pmSubscriber",
+ "location":"san-francisco",
+ "client_id":"1575876809456"
+ }
+ }
+ },
+ "streams_publishes":{
+ "policy_pm_publisher":{
+ "type":"message_router",
+ "dmaap_info":{
+ "topic_url":"https://10.152.183.151:3905/events/org.onap.dmaap.mr.PM_SUBSCRIPTIONS",
+ "client_role":"org.onap.dcae.pmPublisher",
+ "location":"san-francisco",
+ "client_id":"1475976809466"
+ }
+ },
+ "other_publisher":{
+ "type":"message_router",
+ "dmaap_info":{
+ "topic_url":"https://10.152.183.151:3905/events/org.onap.dmaap.mr.SOME_OTHER_TOPIC",
+ "client_role":"org.onap.dcae.pmControlPub",
+ "location":"san-francisco",
+ "client_id":"1875976809466"
+ }
+ }
+ }
+ }'
+ expectedResponse: 200
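+ # Control loop definition grouping the five elements above into a single control loop.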
+ org.onap.domain.sample.GenericK8s_ControlLoopDefinition:
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.ControlLoop
+ type_version: 1.0.0
+ description: Control loop for the PMSH pairwise tests
+ properties:
+ provider: ONAP
+ elements:
+ - name: org.onap.domain.database.PMSH_K8SMicroserviceControlLoopElement
+ version: 1.2.3
+ - name: org.onap.domain.database.Local_K8SMicroserviceControlLoopElement
+ version: 1.2.3
+ - name: org.onap.domain.database.Http_PMSHMicroserviceControlLoopElement
+ version: 1.2.3
+ - name: org.onap.domain.pmsh.PMSH_MonitoringPolicyControlLoopElement
+ version: 1.2.3
+ - name: org.onap.domain.pmsh.PMSH_OperationalPolicyControlLoopElement
+ version: 1.2.3
+
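+ # Policies carried with the template: the PMSH monitoring policy referenced by the pmsh_monitoring_policy input and deployed by the policy participant.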
+ policies:
+ - MICROSERVICE_vLoadBalancerMS_v1_0_dcae-pm-subscription-handler_1_0_0test:
+ type: onap.policies.monitoring.dcae-pm-subscription-handler
+ type_version: 1.0.0
+ name: MICROSERVICE_vLoadBalancerMS_v1_0_dcae-pm-subscription-handler_1_0_0test
+ version: 1.0.0
+ metadata:
+ policy-id: MICROSERVICE_vLoadBalancerMS_v1_0_dcae-pm-subscription-handler_1_0_0test
+ policy-version: 1.0.0
+ properties:
+ pmsh_policy:
+ fileBasedGP: 15
+ fileLocation: /pm/pm.xml
+ subscriptionName: subscriptiona
+ administrativeState: UNLOCKED
+ nfFilter:
+ onap.datatypes.monitoring.nfFilter:
+ modelVersionIDs:
+ - e80a6ae3-cafd-4d24-850d-e14c084a5ca9
+ modelInvariantIDs:
+ - 5845y423-g654-6fju-po78-8n53154532k6
+ - 7129e420-d396-4efb-af02-6b83499b12f8
+ modelNames: []
+ nfNames:
+ - '"^pnf1.*"'
+ measurementGroups:
+ - measurementGroup:
+ onap.datatypes.monitoring.measurementGroup:
+ measurementTypes:
+ - measurementType:
+ onap.datatypes.monitoring.measurementType:
+ measurementType: countera
+ - measurementType:
+ onap.datatypes.monitoring.measurementType:
+ measurementType: counterb
+ managedObjectDNsBasic:
+ - managedObjectDNsBasic:
+ onap.datatypes.monitoring.managedObjectDNsBasic:
+ DN: dna
+ - managedObjectDNsBasic:
+ onap.datatypes.monitoring.managedObjectDNsBasic:
+ DN: dnb
+ - measurementGroup:
+ onap.datatypes.monitoring.measurementGroup:
+ measurementTypes:
+ - measurementType:
+ onap.datatypes.monitoring.measurementType:
+ measurementType: counterc
+ - measurementType:
+ onap.datatypes.monitoring.measurementType:
+ measurementType: counterd
+ managedObjectDNsBasic:
+ - managedObjectDNsBasic:
+ onap.datatypes.monitoring.managedObjectDNsBasic:
+ DN: dnc
+ - managedObjectDNsBasic:
+ onap.datatypes.monitoring.managedObjectDNsBasic:
+ DN: dnd
\ No newline at end of file