-rw-r--r--  docs/clamp/controlloop/design-impl/clamp-controlloop-runtime.rst | 248
-rw-r--r--  docs/clamp/controlloop/design-impl/participants/http-participant.rst | 52
-rw-r--r--  docs/clamp/controlloop/design-impl/participants/k8s-participant.rst | 132
-rw-r--r--  docs/clamp/controlloop/design-impl/participants/policy-framework-participant.rst | 67
-rw-r--r--  docs/clamp/controlloop/design-impl/participants/swagger/k8s-participant-swagger.json | 399
-rw-r--r--  docs/clamp/controlloop/design-impl/participants/tosca/tosca-http-participant.yml | 439
-rw-r--r--  docs/clamp/controlloop/design-impl/participants/tosca/tosca-k8s-participant.yml | 304
-rw-r--r--  docs/clamp/controlloop/images/participants/k8s-participant.png | bin 0 -> 63460 bytes
-rw-r--r--  docs/clamp/controlloop/images/participants/k8s-rest.png | bin 0 -> 80224 bytes
-rw-r--r--  docs/development/devtools/clamp-dcae.rst | 115
-rw-r--r--  docs/development/devtools/clamp-policy.rst | 124
-rw-r--r--  docs/development/devtools/clamp-smoke.rst | 357
-rw-r--r--  docs/development/devtools/db-migrator-smoke.rst | 413
-rw-r--r--  docs/development/devtools/devtools.rst | 13
-rw-r--r--  docs/development/devtools/images/cl-commission.png | bin 0 -> 161307 bytes
-rw-r--r--  docs/development/devtools/images/cl-create.png | bin 0 -> 226752 bytes
-rw-r--r--  docs/development/devtools/images/cl-instantiation.png | bin 0 -> 230788 bytes
-rw-r--r--  docs/development/devtools/images/cl-passive.png | bin 0 -> 206486 bytes
-rw-r--r--  docs/development/devtools/images/cl-running-state.png | bin 0 -> 226765 bytes
-rw-r--r--  docs/development/devtools/images/cl-running.png | bin 0 -> 206577 bytes
-rw-r--r--  docs/development/devtools/images/cl-uninitialise.png | bin 0 -> 206284 bytes
-rw-r--r--  docs/development/devtools/images/cl-uninitialised-state.png | bin 0 -> 227934 bytes
-rw-r--r--  docs/development/devtools/images/create-instance.png | bin 0 -> 209643 bytes
-rw-r--r--  docs/development/devtools/images/update-instance.png | bin 0 -> 129767 bytes
-rw-r--r--  docs/development/devtools/tosca/pairwise-testing.yml | 996
25 files changed, 3640 insertions, 19 deletions
diff --git a/docs/clamp/controlloop/design-impl/clamp-controlloop-runtime.rst b/docs/clamp/controlloop/design-impl/clamp-controlloop-runtime.rst
index 5bea627f..0077b3de 100644
--- a/docs/clamp/controlloop/design-impl/clamp-controlloop-runtime.rst
+++ b/docs/clamp/controlloop/design-impl/clamp-controlloop-runtime.rst
@@ -5,4 +5,250 @@
The CLAMP Control Loop Runtime
##############################
-To be completed.
+.. contents::
+ :depth: 3
+
+
+This article explains how the CLAMP Control Loop Runtime is implemented.
+
+Terminology
+***********
+- Broadcast message: a message sent to all participants (participantId=null and participantType=null)
+- Message to a participant: a message sent to a single participant (participantId and participantType properly filled)
+- ThreadPoolExecutor: executes the given tasks; in the SupervisionAspect class it is configured to execute tasks in an ordered manner, one by one
+- Spring Scheduling: in the SupervisionAspect class, the @Scheduled annotation invokes the "schedule()" method every "runtime.participantParameters.heartBeatMs" milliseconds with a fixed delay
+- MessageIntercept: the "@MessageIntercept" annotation is used in the SupervisionHandler class to intercept "handleParticipantMessage" method calls using Spring aspect-oriented programming
+- GUI: graphical user interface, Postman, or a front-end application
+
+Design of the REST API
+***********************
+
+Creation of a Control Loop Type
++++++++++++++++++++++++++++++++++
+- The GUI calls the POST "/commission" endpoint with a Control Loop Type Definition (TOSCA Service Template) as body
+- CL-runtime receives the call through the REST API (CommissioningController)
+- It saves the TOSCA Service Template to the DB using PolicyModelsProvider
+- If there are participants registered, it triggers the sending of a broadcast PARTICIPANT_UPDATE message
+- The message is built by ParticipantUpdatePublisher using TOSCA Service Template data (to fill the list of ParticipantDefinition)
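+
+A minimal, abbreviated sketch of a commissioning request body is shown below; a real TOSCA Service Template also carries the node_types and the Control Loop Element node templates (see the samples linked from the participant pages), so this fragment is illustrative only.
+
+.. code-block:: YAML
+
+   tosca_definitions_version: tosca_simple_yaml_1_3
+   topology_template:
+     node_templates:
+       org.onap.domain.sample.GenericK8s_ControlLoopDefinition:
+         version: 1.2.3
+         type: org.onap.policy.clamp.controlloop.ControlLoop
+         type_version: 1.0.0
+         description: Control loop for Hello World
+         properties:
+           provider: ONAP
+           elements: []    # list of ToscaConceptIdentifier entries for the Control Loop Elements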
+
+Deletion of a Control Loop Type
++++++++++++++++++++++++++++++++++
+- The GUI calls the DELETE "/commission" endpoint
+- CL-runtime receives the call through the REST API (CommissioningController)
+- If there are participants registered, CL-runtime triggers the sending of a broadcast PARTICIPANT_UPDATE message
+- The message is built by ParticipantUpdatePublisher with an empty list of ParticipantDefinition
+- It deletes the Control Loop Type from the DB
+
+Creation of a Control Loop
++++++++++++++++++++++++++++
+- The GUI calls the POST "/instantiation" endpoint with a Control Loop as body
+- CL-runtime receives the call through the REST API (InstantiationController)
+- It validates the Control Loop
+- It saves the Control Loop to the DB
+
+Update of a Control Loop
++++++++++++++++++++++++++
+- The GUI calls the PUT "/instantiation" endpoint with a Control Loop as body
+- CL-runtime receives the call through the REST API (InstantiationController)
+- It validates the Control Loop
+- It saves the Control Loop to the DB
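+
+A minimal sketch of a Control Loop instance body is shown below; the field names reflect the Istanbul data model and are indicative only, and the element map (keyed by UUID) is omitted for brevity.
+
+.. code-block:: YAML
+
+   # illustrative instance referring to the commissioned Control Loop Type above
+   name: PMSHInstance0
+   version: 1.0.1
+   definition:
+     name: org.onap.domain.sample.GenericK8s_ControlLoopDefinition
+     version: 1.2.3
+   description: PMSH control loop instance
+   elements: {}    # Control Loop Element instances, keyed by UUID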
+
+Deletion of a Control Loop
++++++++++++++++++++++++++++
+- The GUI calls the DELETE "/instantiation" endpoint
+- CL-runtime receives the call through the REST API (InstantiationController)
+- It checks that the Control Loop is in UNINITIALISED state
+- It deletes the Control Loop from the DB
+
+"issues control loop commands to control loops"
++++++++++++++++++++++++++++++++++++++++++++++++
+
+case **UNINITIALISED to PASSIVE**
+
+- GUI calls "/instantiation/command" endpoint with PASSIVE as orderedState
+- CL-runtime checks if participants registered are matching with the list of control Loop Element
+- It updates control loop and control loop elements to DB (orderedState = PASSIVE)
+- It validates the status order issued
+- It triggers the execution to send a broadcast CONTROL_LOOP_UPDATE message
+- the message is built by ControlLoopUpdatePublisher using Tosca Service Template data and ControlLoop data. (with startPhase = 0)
+- It updates control loop and control loop elements to DB (state = UNINITIALISED2PASSIVE)
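+
+A sketch of the command body is shown below; the field names (orderedState, controlLoopIdentifierList) are indicative of the Istanbul API and may differ in detail.
+
+.. code-block:: YAML
+
+   orderedState: PASSIVE
+   controlLoopIdentifierList:
+     - name: PMSHInstance0
+       version: 1.0.1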
+
+case **PASSIVE to UNINITIALISED**
+
+- GUI calls "/instantiation/command" endpoint with UNINITIALISED as orderedState
+- CL-runtime checks if participants registered are matching with the list of control Loop Element
+- It updates control loop and control loop elements to DB (orderedState = UNINITIALISED)
+- It validates the status order issued
+- It triggers the execution to send a broadcast CONTROL_LOOP_STATE_CHANGE message
+- the message is built by ControlLoopStateChangePublisher with controlLoopId
+- It updates control loop and control loop elements to DB (state = PASSIVE2UNINITIALISED)
+
+case **PASSIVE to RUNNING**
+
+- GUI calls "/instantiation/command" endpoint with RUNNING as orderedState
+- CL-runtime checks if participants registered are matching with the list of control Loop Element.
+- It updates control loop and control loop elements to DB (orderedState = RUNNING)
+- It validates the status order issued
+- It triggers the execution to send a broadcast CONTROL_LOOP_STATE_CHANGE message
+- the message is built by ControlLoopStateChangePublisher with controlLoopId
+- It updates control loop and control loop elements to DB (state = PASSIVE2RUNNING)
+
+case **RUNNING to PASSIVE**
+
+- GUI calls "/instantiation/command" endpoint with UNINITIALISED as orderedState
+- CL-runtime checks if participants registered are matching with the list of control Loop Element
+- It updates control loop and control loop elements to db (orderedState = RUNNING)
+- It validates the status order issued
+- It triggers the execution to send a broadcast CONTROL_LOOP_STATE_CHANGE message
+- the message is built by ControlLoopStateChangePublisher with controlLoopId
+- It updates control loop and control loop elements to db (state = RUNNING2PASSIVE)
+
+StartPhase
+**********
+The startPhase is particularly important in control loop updates and control loop state changes because sometimes the user wishes to control the order in which the states of the Control Loop Elements in a control loop change.
+
+How to define StartPhase
+++++++++++++++++++++++++
+StartPhase is defined as shown below in the definition of the TOSCA fundamental Control Loop Types YAML file.
+
+.. code-block:: YAML
+
+ startPhase:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ description: A value indicating the start phase in which this control loop element will be started, the
+ first start phase is zero. Control Loop Elements are started in their start_phase order and stopped
+ in reverse start phase order. Control Loop Elements with the same start phase are started and
+ stopped simultaneously
+ metadata:
+ common: true
+
+The "common: true" value in the metadata of the startPhase property identifies that property as being a common property.
+This property will be set in the CLAMP GUI during control loop commissioning.
+An example of where it could be used:
+
+.. code-block:: YAML
+
+ org.onap.domain.database.Http_PMSHMicroserviceControlLoopElement:
+ # Consul http config for PMSH.
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.HttpControlLoopElement
+ type_version: 1.0.1
+ description: Control loop element for the http requests of PMSH microservice
+ properties:
+ provider: ONAP
+ participant_id:
+ name: HttpParticipant0
+ version: 1.0.0
+ participantType:
+ name: org.onap.k8s.controlloop.HttpControlLoopParticipant
+ version: 2.3.4
+ uninitializedToPassiveTimeout: 180
+ startPhase: 1
+
+How StartPhase works
+++++++++++++++++++++
+In state changes from UNINITIALISED → PASSIVE, Control Loop Elements are started in increasing order of their startPhase.
+
+Example with Http_PMSHMicroserviceControlLoopElement with startPhase set to 1 and PMSH_K8SMicroserviceControlLoopElement with startPhase set to 0:
+
+- CL-runtime sends a broadcast CONTROL_LOOP_UPDATE message to all participants with startPhase = 0
+- The participant receives the CONTROL_LOOP_UPDATE message and moves to the PASSIVE state (only the CL elements defined with startPhase = 0)
+- CL-runtime receives CONTROL_LOOP_UPDATE_ACK messages from participants and sets the state (from the CL element of the message) to PASSIVE
+- CL-runtime determines that all CL elements with startPhase = 0 are in the proper state and sends a broadcast CONTROL_LOOP_UPDATE message with startPhase = 1
+- The participant receives the CONTROL_LOOP_UPDATE message and moves to the PASSIVE state (only the CL elements defined with startPhase = 1)
+- CL-runtime determines that all CL elements are in the proper state and sets the CL to PASSIVE
+
+In this scenario the CONTROL_LOOP_UPDATE message is sent twice.
+
+Design of managing messages
+***************************
+
+PARTICIPANT_REGISTER
+++++++++++++++++++++
+- A participant starts and sends a PARTICIPANT_REGISTER message
+- ParticipantRegisterListener collects the message from DMaaP
+- If the participant is not present, it saves the participant reference with status UNKNOWN to the DB
+- If a Control Loop Type is present, it triggers the sending of a PARTICIPANT_UPDATE message to the registered participant (priming message)
+- The message is built by ParticipantUpdatePublisher using TOSCA Service Template data (to fill the list of ParticipantDefinition)
+- It triggers the sending of a PARTICIPANT_REGISTER_ACK message to the registered participant
+- MessageIntercept intercepts that event; if a PARTICIPANT_UPDATE message has been sent, a task to handle PARTICIPANT_REGISTER is added in SupervisionScanner
+- SupervisionScanner starts the monitoring for participantUpdate
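+
+A sketch of the participant identification carried in this dialogue is shown below; the message also carries fields such as a message id and timestamp, the exact field names are indicative only, and the participantId/participantType values are taken from the TOSCA samples.
+
+.. code-block:: YAML
+
+   messageType: PARTICIPANT_REGISTER
+   participantId:
+     name: HttpParticipant0
+     version: 1.0.0
+   participantType:
+     name: org.onap.k8s.controlloop.HttpControlLoopParticipant
+     version: 2.3.4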
+
+PARTICIPANT_UPDATE_ACK
+++++++++++++++++++++++
+- A participant sends a PARTICIPANT_UPDATE_ACK message in response to a PARTICIPANT_UPDATE message
+- ParticipantUpdateAckListener collects the message from DMaaP
+- MessageIntercept intercepts that event and adds a task to handle PARTICIPANT_UPDATE_ACK in SupervisionScanner
+- SupervisionScanner removes the monitoring for participantUpdate
+- It updates the status of the participant in the DB
+
+PARTICIPANT_STATUS
+++++++++++++++++++
+- A participant sends a scheduled PARTICIPANT_STATUS message
+- ParticipantStatusListener collects the message from DMaaP
+- MessageIntercept intercepts that event and adds a task to handle PARTICIPANT_STATUS in SupervisionScanner
+- SupervisionScanner clears and starts the monitoring for participantStatus
+
+CONTROLLOOP_UPDATE_ACK
+++++++++++++++++++++++
+- A participant sends a CONTROLLOOP_UPDATE_ACK message in response to a CONTROLLOOP_UPDATE message. It sends a CONTROLLOOP_UPDATE_ACK for each CL element moved to the ordered state indicated by the CONTROLLOOP_UPDATE
+- ControlLoopUpdateAckListener collects the message from DMaaP
+- It checks the status of all control loop elements and checks whether the control loop is primed
+- It updates the CL in the DB if it has changed
+- MessageIntercept intercepts that event and adds a task to handle a monitoring execution in SupervisionScanner
+
+CONTROLLOOP_STATECHANGE_ACK
++++++++++++++++++++++++++++
+The design of CONTROLLOOP_STATECHANGE_ACK handling is similar to that of CONTROLLOOP_UPDATE_ACK.
+
+Design of monitoring execution in SupervisionScanner
+****************************************************
+Monitoring is designed to perform the following operations:
+
+- to determine the next startPhase in a CONTROL_LOOP_UPDATE message
+- to update the CL state: when "ControlLoop.state" is in a transitional state (for example UNINITIALISED2PASSIVE), if all CL elements have moved properly to the target state, "ControlLoop.state" is updated accordingly and saved to the DB
+- to retry CONTROL_LOOP_UPDATE/CONTROL_LOOP_STATE_CHANGE messages: if there is a CL Element not in the proper state, the broadcast message is retried
+- to retry the PARTICIPANT_UPDATE message to a participant when CL-runtime does not receive a PARTICIPANT_UPDATE_ACK from it
+- to send a PARTICIPANT_STATUS_REQ to a participant when CL-runtime does not receive a PARTICIPANT_STATUS from it
+
+Retry, timeout, and reporting for all participant message dialogues are implemented in the monitoring execution.
+
+- Spring Scheduling inserts the task to monitor retry execution into the ThreadPoolExecutor
+- The ThreadPoolExecutor executes the task
+- A message is retried if CL-runtime does not receive the Ack message before maxWaitMs milliseconds
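+
+A sketch of where these timing parameters could be configured is shown below; apart from runtime.participantParameters.heartBeatMs, which is named earlier in this article, the property names and values are hypothetical and only illustrate where retry/timeout settings would live.
+
+.. code-block:: YAML
+
+   runtime:
+     participantParameters:
+       heartBeatMs: 120000          # fixed-delay interval of the scheduled supervision task
+       updateParameters:            # hypothetical grouping for retry settings
+         maxRetryCount: 3
+         maxWaitMs: 100000          # retry a message if no Ack is received within this time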
+
+Design of Exception handling
+****************************
+GlobalControllerExceptionHandler
+++++++++++++++++++++++++++++++++
+If an error occurs during a REST API call, CL-runtime responds with an appropriate error status code and a JSON error message.
+This class is implemented to intercept and handle ControlLoopException, PfModelException and PfModelRuntimeException if they are thrown during REST API calls.
+All of those exception classes implement ErrorResponseInfo, which contains the error message and the response status code.
+The exception is thus converted into a JSON message.
+
+RuntimeErrorController
+++++++++++++++++++++++
+If a wrong endpoint is called, or an exception is not intercepted by GlobalControllerExceptionHandler, CL-runtime responds with an appropriate error status code and a JSON error message.
+This class is implemented to redirect the standard Web error page to a JSON error message.
+Typically that happens when a wrong endpoint is called, but it can also happen for unauthorized calls or any other exception not intercepted by GlobalControllerExceptionHandler.
+
+Handle version and "X-ONAP-RequestID"
+*************************************
+The RequestResponseLoggingFilter class handles the version and "X-ONAP-RequestID" headers during a REST API call; it works as a filter, intercepting the REST API call and adding that information to the headers.
+
+Media Type Support
+******************
+The CL-runtime REST API supports the **application/json**, **application/yaml** and **text/plain** media types. The configuration is implemented in CoderHttpMesageConverter.
+
+application/json
+++++++++++++++++
+JSON is the standard format for REST APIs. For the conversion from JSON to Object and vice versa, **org.onap.policy.common.utils.coder.StandardCoder** is used.
+
+application/yaml
+++++++++++++++++
+YAML is the standard format for Control Loop Type Definitions. For the conversion from YAML to Object and vice versa, **org.onap.policy.common.utils.coder.StandardYamlCoder** is used.
+
+text/plain
+++++++++++
+Plain text format is used by Prometheus. For the conversion from Object to String, **StringHttpMessageConverter** is used.
diff --git a/docs/clamp/controlloop/design-impl/participants/http-participant.rst b/docs/clamp/controlloop/design-impl/participants/http-participant.rst
index 87f0ec6f..b4b9b858 100644
--- a/docs/clamp/controlloop/design-impl/participants/http-participant.rst
+++ b/docs/clamp/controlloop/design-impl/participants/http-participant.rst
@@ -5,8 +5,6 @@
HTTP Participant
################
-.. warning:: To be completed
-
The CLAMP HTTP participant receives configuration information from the CLAMP runtime,
maps the configuration information to a REST URL, and makes a REST call on the URL.
Typically the HTTP Participant is used with another participant such as the
@@ -16,8 +14,10 @@ participant can be used to configure the microservice over its REST interface.Of
the HTTP participant works towards any REST service, it is not restricted to REST
services started by participants.
+
.. image:: ../../images/participants/http-participant.png
+
The HTTP participant runs a Control Loop Element to handle the REST dialogues for a
particular application domain. The REST dialogues are whatever REST calls that are
required to implement the functionality for the application domain.
@@ -26,12 +26,6 @@ The HTTP participant allows the REST dialogues for a Control Loop to be managed.
particular Control Loop may require many *things* to be configured and managed and this
may require many REST dialogues to achieve.
-A *Configuration Entity* describes a concept that is managed by the HTTP participant. A
-Configuration Entity can be created, Read, Updated, and Deleted (CRUD). The user defines
-the Configuration Entities that it wants its HTTP Control Loop Element to manage and
-provides a sequence of parameterized REST commands to Create, Read, Update, and Delete
-each Configuration Entity.
-
When a control loop is initialized, the HTTP participant starts a HTTP Control Loop
element for the control loop. It reads the configuration information sent from the
Control Loop Runtime runs a HTTP client to talk to the REST endpoint that is receiving
@@ -42,8 +36,15 @@ Control Loop B.
Configuring a Control Loop Element on the HTTP participant for a Control Loop
-----------------------------------------------------------------------------
+A *Configuration Entity* describes a concept that is managed by the HTTP participant. A
+Configuration Entity can be created, Read, Updated, and Deleted (CRUD). The user defines
+the Configuration Entities that it wants its HTTP Control Loop Element to manage and
+provides a sequence of parameterized REST commands to Create, Read, Update, and Delete
+each Configuration Entity.
+
+A sample TOSCA template defining an HTTP participant and a control loop element for a control loop: :download:`click here <tosca/tosca-http-participant.yml>`
-The user configures the following properties in the CLAMP GUI for the HTTP participant:
+The user configures the following properties in the TOSCA for the HTTP participant:
.. list-table::
:widths: 15 10 50
@@ -93,11 +94,40 @@ The *RestRequest* type is described in the following table:
- An enum for the HTTP method {GET, PUT, POST, DELETE}
* - path
- String
- - The path of the REST endopint relative to the baseUrl
+ - The path of the REST endpoint relative to the baseUrl
* - body
- String
- The body of the request for POST and PUT methods
* - expectedResponse
- HttpStatus
- The expected HTTP response code fo the REST request
- \ No newline at end of file
+
+Http participant Interactions:
+------------------------------
+The HTTP participant interacts with the Control Loop Runtime on the northbound via DMaaP. It interacts with any microservice on the southbound over HTTP for configuration.
+
+Control loop update and state change requests are sent from the Control Loop Runtime to the participant via DMaaP.
+The participant invokes the appropriate HTTP endpoint of the microservice based on the messages received from the Control Loop Runtime.
+
+
+startPhase:
+-----------
+The HTTP participant is often used along with the :ref:`Kubernetes Participant <clamp-controlloop-k8s-participant>` to configure a microservice after its deployment.
+This requires the Control Loop Element of the HTTP participant to be started after the deployment of the microservice has completed. This can be achieved by adding the property `startPhase`
+to the Control Loop Element of the HTTP participant. The Control Loop Runtime starts the elements based on the `startPhase` value defined in the TOSCA. The default value of startPhase is '0';
+elements with startPhase '0' are started before elements with startPhase '1', so HTTP Control Loop Elements are defined with the value '1' in order to start them in the second phase, as sketched below.
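+
+A minimal sketch of the relevant properties (the element names are shortened for illustration):
+
+.. code-block:: YAML
+
+   PMSH_K8SMicroserviceControlLoopElement:
+     properties:
+       startPhase: 0    # deployed in the first phase
+   Http_PMSHMicroserviceControlLoopElement:
+     properties:
+       startPhase: 1    # configured after the deployment completes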
+
+Http participant Workflow:
+--------------------------
+Once the participant is started, it sends a "REGISTER" event to the DMaap topic which is then consumed by the Control Loop Runtime to register this participant on the runtime database.
+The user can commission the tosca definitions from the Policy Gui to the Control Loop Runtime that further updates the participant with these definitions via DMaap.
+Once the control loop definitions are available in the runtime database, the Control Loop can be instantiated with the default state "UNINITIALISED" from the Policy Gui.
+
+When the state of the Control Loop is changed from "UNINITIALISED" to "PASSIVE" from the Policy Gui, the http participant receives the control loop state change event from the runtime and
+configures the microservice of the corresponding Control Loop Element over http.
+The configuration entity for a microservice is associated with each Control Loop Element for the http participant.
+The http participant holds the executed http requests information along with the responses received.
+
+The participant is used in a generic way to configure any entity over HTTP; it does not hold the information about the microservice that would be needed to unconfigure/revert the configuration when the
+state of the Control Loop changes from "PASSIVE" to "UNINITIALISED".
+
diff --git a/docs/clamp/controlloop/design-impl/participants/k8s-participant.rst b/docs/clamp/controlloop/design-impl/participants/k8s-participant.rst
index 1e1a05a3..b30dff39 100644
--- a/docs/clamp/controlloop/design-impl/participants/k8s-participant.rst
+++ b/docs/clamp/controlloop/design-impl/participants/k8s-participant.rst
@@ -5,4 +5,134 @@
Kubernetes Participant
######################
-.. warning:: To be completed
+The kubernetes participant receives helm chart information from the CLAMP runtime and installs the helm chart into the
+k8s cluster in the specified namespace. It can fetch the helm chart from remote helm repositories as well as from any of the repositories
+that are configured on the helm client. The participant acts as a wrapper around the helm client and creates the required
+resources in the k8s cluster.
+
+The kubernetes participant also exposes REST endpoints for onboarding, installing and uninstalling helm charts from the
+local chart database, which allows the user to also use this component as a standalone application for helm operations.
+
+In the Istanbul version, the kubernetes participant supports the following methods of installing helm charts:
+
+- Installation of helm charts from configured helm repositories and remote repositories passed via TOSCA in CLAMP.
+- Installation of helm charts from the local chart database via the participant's REST Api.
+
+Prerequisites for using the Kubernetes participant in the Istanbul version:
+------------------------------------------------------------------------------
+
+- A running Kubernetes cluster.
+
+ Note:
+
+ - If the kubernetes participant is deployed outside the cluster, the config file of the k8s cluster needs to be copied to the `./kube` folder of the kubernetes participant's home directory to make the participant work with the external cluster.
+
+ - If the participant needs additional permissions to create resources on the cluster, a cluster-admin role binding can be created for the service account of the participant with the command below.
+
+ Example: `kubectl create clusterrolebinding k8s-participant-admin-binding --clusterrole=cluster-admin --serviceaccount=<k8s participant service account>`
+
+
+.. image:: ../../images/participants/k8s-participant.png
+
+Defining a TOSCA CL definition for the kubernetes participant:
+-----------------------------------------------------------------
+A *chart* parameter map describes the helm chart parameters for a microservice in the TOSCA template; these parameters are used by the kubernetes participant for the deployment.
+A Control Loop Element in TOSCA is mapped to the kubernetes participant and holds the helm chart parameters for a microservice under the properties of the Control Loop Element.
+
+A sample TOSCA template defining a participant and a control loop element for a control loop: :download:`click here <tosca/tosca-k8s-participant.yml>`
+
+
+Configuring a Control Loop Element on the kubernetes participant for a Control Loop
+-----------------------------------------------------------------------------------
+
+The user configures the following properties in the TOSCA template for the kubernetes participant:
+
+.. list-table::
+ :widths: 15 10 50
+ :header-rows: 1
+
+ * - Property
+ - Type
+ - Description
+ * - chartId
+ - ToscaConceptIdentifier
+ - The name and version of the helm chart that needs to be managed by the kubernetes participant
+ * - namespace
+ - String
+ - The namespace in the k8s cluster where the helm chart needs to be installed
+ * - releaseName
+ - String
+ - The helm deployment name that specifies the installed component in the k8s cluster
+ * - repository (optional)
+ - map
+ - A map of *<String, String>* defining the helm repository parameters for the chart
+ * - overrideParams (optional)
+ - map
+ - A map of *<String, String>* defining the helm chart parameters that needs to be overridden
+
+Note: The repository property can be skipped if the helm chart is available in the local chart database or
+in a repository that is already configured on the helm client. The participant does a chart lookup by default.
+
+The *repository* type is described in the following table:
+
+.. list-table::
+ :widths: 15 10 50
+ :header-rows: 1
+
+ * - Field
+ - Type
+ - Description
+ * - repoName
+ - String
+ - The name of the helm repository that needs to be configured on the helm client
+ * - protocol
+ - String
+ - Specifies http/https protocols to connect with repository url
+ * - address
+ - String
+ - Specifies the ip address or the host name
+ * - port (optional)
+ - String
+ - Specifies the port where the repository service is running
+ * - userName (optional)
+ - String
+ - The username to login the helm repository
+ * - password (optional)
+ - String
+ - The password to login the helm repository
+
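+A sketch of how these properties could look under the *chart* parameter map of a Control Loop Element for the kubernetes participant; the chart, namespace and repository values are illustrative only.
+
+.. code-block:: YAML
+
+   properties:
+     chart:
+       chartId:
+         name: dcae-pmsh        # illustrative chart name
+         version: 8.0.0
+       namespace: onap
+       releaseName: pmshms
+       repository:              # optional, see the note above
+         repoName: chartmuseum
+         protocol: http
+         address: chart-museum
+         port: 80
+       overrideParams:          # optional helm value overrides
+         global.masterPassword: test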
+
+Kubernetes participant Interactions:
+------------------------------------
+The kubernetes participant interacts with the Control Loop Runtime on the northbound via DMaaP. It interacts with the helm client on the southbound to perform various helm operations on the k8s cluster.
+
+Control loop update and state change requests are sent from the Control Loop Runtime to the participant via DMaaP.
+The participant performs the appropriate operations on the k8s cluster via the helm client, based on the messages received from the Control Loop Runtime.
+
+
+kubernetes participant Workflow:
+--------------------------------
+Once the participant is started, it sends a "REGISTER" event to the DMaap topic which is then consumed by the Control Loop Runtime to register this participant on the runtime database.
+The user can commission the tosca definitions from the Policy Gui to the Control Loop Runtime that further updates the participant with these definitions via DMaap.
+Once the control loop definitions are available in the runtime database, the Control Loop can be instantiated with the default state "UNINITIALISED" from the Policy Gui.
+
+When the state of the Control Loop is changed from "UNINITIALISED" to "PASSIVE" from the Policy Gui, the kubernetes participant receives the control loop state change event from the runtime and
+deploys the helm charts associated with each Control Loop Elements by creating appropriate namespace on the cluster.
+If the repository of the helm chart is not passed via TOSCA, the participant looks for the helm chart in the configured helm repositories of helm client.
+It also performs a chart look up on the local chart database where the helm charts are onboarded via the participant's REST Api.
+
+The participant also monitors the deployed pods for the next 3 minutes, until the pods reach the RUNNING state.
+It holds the deployment information of the pods, including their current status, after the deployment.
+
+When the state of the Control Loop is changed from "PASSIVE" to "UNINITIALISED" back, the participant also undeploys the helm charts from the cluster that are part of the Control Loop Element.
+
+REST APIs on Kubernetes participant
+-----------------------------------
+
+The kubernetes participant can also be installed as a standalone application, which exposes REST endpoints for onboarding,
+installing and uninstalling helm charts from the local chart database.
+
+
+.. image:: ../../images/participants/k8s-rest.png
+
+:download:`Download Kubernetes participant API Swagger <swagger/k8s-participant-swagger.json>` \ No newline at end of file
diff --git a/docs/clamp/controlloop/design-impl/participants/policy-framework-participant.rst b/docs/clamp/controlloop/design-impl/participants/policy-framework-participant.rst
index 746dd529..99f2981a 100644
--- a/docs/clamp/controlloop/design-impl/participants/policy-framework-participant.rst
+++ b/docs/clamp/controlloop/design-impl/participants/policy-framework-participant.rst
@@ -2,7 +2,68 @@
.. _clamp-controlloop-policy-framework-participant:
-Policy Framework Participant
-############################
+The CLAMP Policy Framework Participant
+######################################
-To be completed.
+.. contents::
+ :depth: 3
+
+Control Loop Elements in the Policy Framework Participant are configured using TOSCA metadata defined for the Policy Control Loop Element type.
+
+The Policy Framework participant receives messages through the participant-intermediary common code and handles them by invoking REST APIs towards the Policy Framework.
+
+For example, when a ControlLoopUpdate message is received by the policy participant, it contains the full ToscaServiceTemplate describing all components participating in a control loop. When the control loop element state changes from UNINITIALIZED to PASSIVE, the policy participant triggers the creation of the policy types and policies in the Policy Framework.
+
+When the state changes from PASSIVE to UNINITIALIZED, the policy participant deletes the policies and policy types by invoking REST APIs towards the Policy Framework.
+
+Run Policy Framework Participant command line using Maven
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+.. code-block:: bash
+
+   mvn spring-boot:run -Dspring-boot.run.arguments="--server.port=8082"
+
+Run Policy Framework Participant command line using Jar
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+.. code-block:: bash
+
+   java -jar -Dserver.port=8082 -DtopicServer=localhost target/policy-clamp-participant-impl-policy-6.1.2-SNAPSHOT.jar
+
+Distributing Policies
++++++++++++++++++++++
+
+The Policy Framework participant uses the Policy PAP API to deploy and undeploy policies.
+
+When a Policy Framework Control Loop Element changes from state PASSIVE to state RUNNING, the policy is deployed. When it changes from state RUNNING to state PASSIVE, the policy is undeployed.
+
+The PDP group to which the policy should be deployed is specified in the Control Loop Element metadata, see the Policy Control Loop Element type definition. If the PDP group specified for policy deployment does not exist, an error is reported.
+
+The PAP Policy Status API and Policy Deployment Status API are used to retrieve data to report on the deployment status of policies in Participant Status messages.
+
+The PDP Statistics API is used to get statistics for the statistics report sent from the Policy Framework Participant back to the CLAMP runtime.
+
+Policy Type and Policy References
++++++++++++++++++++++++++++++++++
+
+The Policy Framework participant uses the policyType and policyId properties defined in the Policy Control Loop Element type to reference the policy type and policy that should be used by a Policy Control Loop Element.
+
+The Policy Type and Policy specified in the policyType and policyId references must of course be available in the Policy Framework in order for them to be used in Control Loop instances. In some cases, the Policy Type and/or the Policy may already be loaded in the Policy Framework. In other cases, the Policy Framework participant must load the Policy Type and/or Policy.
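+
+A sketch of how these references could appear under the properties of a Policy Control Loop Element; the policy type and policy names below are illustrative only.
+
+.. code-block:: YAML
+
+   properties:
+     policyType:
+       name: onap.policies.monitoring.pmsh    # illustrative policy type
+       version: 1.0.0
+     policyId:
+       name: MICROSERVICE_pmsh                # illustrative policy
+       version: 1.0.0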
+
+Policy Type References
+**********************
+
+The Policy Participant uses the following steps for Policy Type References:
+
+ 1. The Policy Participant reads the Policy Type ID from the policyType property specified for the Control Loop Element
+ 2. It checks if a Policy Type with that Policy Type ID has been specified in the ToscaServiceTemplateFragment field in the ControlLoopElement definition in the
+ ControlLoopUpdate message, see The CLAMP Control Loop Participant Protocol#Messages.
+ a. If the Policy Type has been specified, the Participant stores the Policy Type in the Policy Framework. If the Policy Type is successfully stored, execution proceeds, otherwise an error is reported
+ b. If the Policy Type has not been specified, the Participant checks that the Policy Type is already in the Policy Framework. If the Policy Type already exists, execution proceeds, otherwise an error is reported
+
+Policy References
+*****************
+
+The Policy Participant uses the following steps for Policy References:
+
+ 1. The Policy Participant reads the Policy ID from the policyId property specified for the Control Loop Element
+ 2. It checks if a Policy with that Policy ID has been specified in the ToscaServiceTemplateFragment field in the ControlLoopElement definition in the
+ ControlLoopUpdate message, see The CLAMP Control Loop Participant Protocol#Messages.
+ a. If the Policy has been specified, the Participant stores the Policy in the Policy Framework. If the Policy is successfully stored, execution proceeds, otherwise an error is reported
+ b. If the Policy has not been specified, the Participant checks that the Policy is already in the Policy Framework. If the Policy already exists, execution proceeds, otherwise an error is reported \ No newline at end of file
diff --git a/docs/clamp/controlloop/design-impl/participants/swagger/k8s-participant-swagger.json b/docs/clamp/controlloop/design-impl/participants/swagger/k8s-participant-swagger.json
new file mode 100644
index 00000000..b2fca37a
--- /dev/null
+++ b/docs/clamp/controlloop/design-impl/participants/swagger/k8s-participant-swagger.json
@@ -0,0 +1,399 @@
+{
+ "swagger":"2.0",
+ "info":{
+ "description":"Api Documentation",
+ "version":"1.0",
+ "title":"Api Documentation",
+ "termsOfService":"urn:tos",
+ "contact":{},
+ "license":{
+ "name":"Apache 2.0",
+ "url":"http://www.apache.org/licenses/LICENSE-2.0"
+ }
+ },
+ "host":"localhost:8083",
+ "tags":[
+ {
+ "name":"k8s-participant",
+ "description":"Chart Controller"
+ }
+ ],
+ "paths":{
+ "/onap/k8sparticipant/helm/chart/{name}/{version}":{
+ "delete":{
+ "tags":[
+ "k8s-participant"
+ ],
+ "summary":"Delete the chart",
+ "operationId":"deleteChartUsingDELETE",
+ "produces":[
+ "*/*"
+ ],
+ "parameters":[
+ {
+ "name":"name",
+ "in":"path",
+ "description":"name",
+ "required":true,
+ "type":"string"
+ },
+ {
+ "name":"version",
+ "in":"path",
+ "description":"version",
+ "required":true,
+ "type":"string"
+ }
+ ],
+ "responses":{
+ "200":{
+ "description":"OK",
+ "schema":{
+ "type":"object"
+ }
+ },
+ "204":{
+ "description":"Chart Deleted"
+ },
+ "401":{
+ "description":"Unauthorized"
+ },
+ "403":{
+ "description":"Forbidden"
+ }
+ }
+ }
+ },
+ "/onap/k8sparticipant/helm/charts":{
+ "get":{
+ "tags":[
+ "k8s-participant"
+ ],
+ "summary":"Return all Charts",
+ "operationId":"getAllChartsUsingGET",
+ "produces":[
+ "application/json"
+ ],
+ "responses":{
+ "200":{
+ "description":"chart List",
+ "schema":{
+ "$ref":"#/definitions/ChartList",
+ "originalRef":"ChartList"
+ }
+ },
+ "401":{
+ "description":"Unauthorized"
+ },
+ "403":{
+ "description":"Forbidden"
+ },
+ "404":{
+ "description":"Not Found"
+ }
+ }
+ }
+ },
+ "/onap/k8sparticipant/helm/install":{
+ "post":{
+ "tags":[
+ "k8s-participant"
+ ],
+ "summary":"Install the chart",
+ "operationId":"installChartUsingPOST",
+ "consumes":[
+ "application/json"
+ ],
+ "produces":[
+ "application/json"
+ ],
+ "parameters":[
+ {
+ "in":"body",
+ "name":"info",
+ "description":"info",
+ "required":true,
+ "schema":{
+ "$ref":"#/definitions/InstallationInfo",
+ "originalRef":"InstallationInfo"
+ }
+ }
+ ],
+ "responses":{
+ "200":{
+ "description":"OK",
+ "schema":{
+ "type":"object"
+ }
+ },
+ "201":{
+ "description":"chart Installed",
+ "schema":{
+ "type":"object"
+ }
+ },
+ "401":{
+ "description":"Unauthorized"
+ },
+ "403":{
+ "description":"Forbidden"
+ },
+ "404":{
+ "description":"Not Found"
+ }
+ }
+ }
+ },
+ "/onap/k8sparticipant/helm/onboard/chart":{
+ "post":{
+ "tags":[
+ "k8s-participant"
+ ],
+ "summary":"Onboard the Chart",
+ "operationId":"onboardChartUsingPOST",
+ "consumes":[
+ "multipart/form-data"
+ ],
+ "produces":[
+ "application/json"
+ ],
+ "parameters":[
+ {
+ "name":"chart",
+ "in":"formData",
+ "required":false,
+ "type":"file"
+ },
+ {
+ "name":"info",
+ "in":"formData",
+ "required":false,
+ "type":"string"
+ },
+ {
+ "in":"body",
+ "name":"values",
+ "description":"values",
+ "required":false,
+ "schema":{
+ "type":"string",
+ "format":"binary"
+ }
+ }
+ ],
+ "responses":{
+ "200":{
+ "description":"OK",
+ "schema":{
+ "type":"string"
+ }
+ },
+ "201":{
+ "description":"Chart Onboarded",
+ "schema":{
+ "type":"string"
+ }
+ },
+ "401":{
+ "description":"Unauthorized"
+ },
+ "403":{
+ "description":"Forbidden"
+ },
+ "404":{
+ "description":"Not Found"
+ }
+ }
+ }
+ },
+ "/onap/k8sparticipant/helm/repo":{
+ "post":{
+ "tags":[
+ "k8s-participant"
+ ],
+ "summary":"Configure helm repository",
+ "operationId":"configureRepoUsingPOST",
+ "consumes":[
+ "application/json"
+ ],
+ "produces":[
+ "application/json"
+ ],
+ "parameters":[
+ {
+ "in":"body",
+ "name":"repo",
+ "description":"repo",
+ "required":true,
+ "schema":{
+ "type":"string"
+ }
+ }
+ ],
+ "responses":{
+ "200":{
+ "description":"OK",
+ "schema":{
+ "type":"object"
+ }
+ },
+ "201":{
+ "description":"Repository added",
+ "schema":{
+ "type":"object"
+ }
+ },
+ "401":{
+ "description":"Unauthorized"
+ },
+ "403":{
+ "description":"Forbidden"
+ },
+ "404":{
+ "description":"Not Found"
+ }
+ }
+ }
+ },
+ "/onap/k8sparticipant/helm/uninstall/{name}/{version}":{
+ "delete":{
+ "tags":[
+ "k8s-participant"
+ ],
+ "summary":"Uninstall the Chart",
+ "operationId":"uninstallChartUsingDELETE",
+ "produces":[
+ "application/json"
+ ],
+ "parameters":[
+ {
+ "name":"name",
+ "in":"path",
+ "description":"name",
+ "required":true,
+ "type":"string"
+ },
+ {
+ "name":"version",
+ "in":"path",
+ "description":"version",
+ "required":true,
+ "type":"string"
+ }
+ ],
+ "responses":{
+ "200":{
+ "description":"OK",
+ "schema":{
+ "type":"object"
+ }
+ },
+ "201":{
+ "description":"chart Uninstalled",
+ "schema":{
+ "type":"object"
+ }
+ },
+ "204":{
+ "description":"No Content"
+ },
+ "401":{
+ "description":"Unauthorized"
+ },
+ "403":{
+ "description":"Forbidden"
+ }
+ }
+ }
+ }
+ },
+ "definitions":{
+ "ChartInfo":{
+ "type":"object",
+ "properties":{
+ "chartId":{
+ "$ref":"#/definitions/ToscaConceptIdentifier",
+ "originalRef":"ToscaConceptIdentifier"
+ },
+ "namespace":{
+ "type":"string"
+ },
+ "overrideParams":{
+ "type":"object",
+ "additionalProperties":{
+ "type":"string"
+ }
+ },
+ "releaseName":{
+ "type":"string"
+ },
+ "repository":{
+ "$ref":"#/definitions/HelmRepository",
+ "originalRef":"HelmRepository"
+ }
+ },
+ "title":"ChartInfo"
+ },
+ "ChartList":{
+ "type":"object",
+ "properties":{
+ "charts":{
+ "type":"array",
+ "items":{
+ "$ref":"#/definitions/ChartInfo",
+ "originalRef":"ChartInfo"
+ }
+ }
+ },
+ "title":"ChartList"
+ },
+ "HelmRepository":{
+ "type":"object",
+ "properties":{
+ "address":{
+ "type":"string"
+ },
+ "password":{
+ "type":"string"
+ },
+ "port":{
+ "type":"string"
+ },
+ "protocol":{
+ "type":"string"
+ },
+ "repoName":{
+ "type":"string"
+ },
+ "userName":{
+ "type":"string"
+ }
+ },
+ "title":"HelmRepository"
+ },
+ "InstallationInfo":{
+ "type":"object",
+ "properties":{
+ "name":{
+ "type":"string"
+ },
+ "version":{
+ "type":"string"
+ }
+ },
+ "title":"InstallationInfo"
+ },
+ "ToscaConceptIdentifier":{
+ "type":"object",
+ "properties":{
+ "name":{
+ "type":"string"
+ },
+ "version":{
+ "type":"string"
+ }
+ },
+ "title":"ToscaConceptIdentifier"
+ }
+ }
+}
diff --git a/docs/clamp/controlloop/design-impl/participants/tosca/tosca-http-participant.yml b/docs/clamp/controlloop/design-impl/participants/tosca/tosca-http-participant.yml
new file mode 100644
index 00000000..dae4c76a
--- /dev/null
+++ b/docs/clamp/controlloop/design-impl/participants/tosca/tosca-http-participant.yml
@@ -0,0 +1,439 @@
+tosca_definitions_version: tosca_simple_yaml_1_3
+data_types:
+ onap.datatypes.ToscaConceptIdentifier:
+ derived_from: tosca.datatypes.Root
+ properties:
+ name:
+ type: string
+ required: true
+ version:
+ type: string
+ required: true
+ onap.datatype.controlloop.Target:
+ derived_from: tosca.datatypes.Root
+ description: Definition for an entity in A&AI to perform a control loop operation on
+ properties:
+ targetType:
+ type: string
+ description: Category for the target type
+ required: true
+ constraints:
+ - valid_values:
+ - VNF
+ - VM
+ - VFMODULE
+ - PNF
+ entityIds:
+ type: map
+ description: |
+ Map of values that identify the resource. If none are provided, it is assumed that the
+ entity that generated the ONSET event will be the target.
+ required: false
+ metadata:
+ clamp_possible_values: ClampExecution:CSAR_RESOURCES
+ entry_schema:
+ type: string
+ onap.datatype.controlloop.Actor:
+ derived_from: tosca.datatypes.Root
+ description: An actor/operation/target definition
+ properties:
+ actor:
+ type: string
+ description: The actor performing the operation.
+ required: true
+ metadata:
+ clamp_possible_values: Dictionary:DefaultActors,ClampExecution:CDS/actor
+ operation:
+ type: string
+ description: The operation the actor is performing.
+ metadata:
+ clamp_possible_values: Dictionary:DefaultOperations,ClampExecution:CDS/operation
+ required: true
+ target:
+ type: onap.datatype.controlloop.Target
+ description: The resource the operation should be performed on.
+ required: true
+ payload:
+ type: map
+ description: Name/value pairs of payload information passed by Policy to the actor
+ required: false
+ metadata:
+ clamp_possible_values: ClampExecution:CDS/payload
+ entry_schema:
+ type: string
+ onap.datatype.controlloop.Operation:
+ derived_from: tosca.datatypes.Root
+ description: An operation supported by an actor
+ properties:
+ id:
+ type: string
+ description: Unique identifier for the operation
+ required: true
+ description:
+ type: string
+ description: A user-friendly description of the intent for the operation
+ required: false
+ operation:
+ type: onap.datatype.controlloop.Actor
+ description: The definition of the operation to be performed.
+ required: true
+ timeout:
+ type: integer
+ description: The amount of time for the actor to perform the operation.
+ required: true
+ retries:
+ type: integer
+ description: The number of retries the actor should attempt to perform the operation.
+ required: true
+ default: 0
+ success:
+ type: string
+ description: Points to the operation to invoke on success. A value of "final_success" indicates an end to the operation.
+ required: false
+ default: final_success
+ failure:
+ type: string
+ description: Points to the operation to invoke on Actor operation failure.
+ required: false
+ default: final_failure
+ failure_timeout:
+ type: string
+ description: Points to the operation to invoke when the time out for the operation occurs.
+ required: false
+ default: final_failure_timeout
+ failure_retries:
+ type: string
+ description: Points to the operation to invoke when the current operation has exceeded its max retries.
+ required: false
+ default: final_failure_retries
+ failure_exception:
+ type: string
+ description: Points to the operation to invoke when the current operation causes an exception.
+ required: false
+ default: final_failure_exception
+ failure_guard:
+ type: string
+ description: Points to the operation to invoke when the current operation is blocked due to guard policy enforcement.
+ required: false
+ default: final_failure_guard
+ org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.RestRequest:
+ version: 1.0.0
+ derived_from: tosca.datatypes.Root
+ properties:
+ restRequestId:
+ type: onap.datatypes.ToscaConceptIdentifier
+ typeVersion: 1.0.0
+ required: true
+ description: The name and version of a REST request to be sent to a REST endpoint
+ httpMethod:
+ type: string
+ required: true
+ constraints:
+ - valid_values: [POST, PUT, GET, DELETE]
+ description: The REST method to use
+ path:
+ type: string
+ required: true
+ description: The path of the REST request relative to the base URL
+ body:
+ type: string
+ required: false
+ description: The body of the REST request for PUT and POST requests
+ expectedResponse:
+ type: integer
+ required: true
+ constraints:
+ - in_range: [100, 599]
+ description: The expected HTTP status code for the REST request
+ org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.ConfigurationEntity:
+ version: 1.0.0
+ derived_from: tosca.datatypes.Root
+ properties:
+ configurationEntityId:
+ type: onap.datatypes.ToscaConceptIdentifier
+ typeVersion: 1.0.0
+ required: true
+ description: The name and version of a Configuration Entity to be handled by the HTTP Control Loop Element
+ restSequence:
+ type: list
+ entry_schema:
+ type: org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.RestRequest
+ typeVersion: 1.0.0
+ description: A sequence of REST commands to send to the REST endpoint
+node_types:
+ org.onap.policy.clamp.controlloop.Participant:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+ required: false
+ org.onap.policy.clamp.controlloop.ControlLoopElement:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+ required: false
+ metadata:
+ common: true
+ description: Specifies the organization that provides the control loop element
+ participant_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ metadata:
+ common: true
+ participantType:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ metadata:
+ common: true
+ description: The identity of the participant type that hosts this type of Control Loop Element
+ startPhase:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ metadata:
+ common: true
+ description: A value indicating the start phase in which this control loop element will be started, the
+ first start phase is zero. Control Loop Elements are started in their start_phase order and stopped
+ in reverse start phase order. Control Loop Elements with the same start phase are started and
+ stopped simultaneously
+ uninitializedToPassiveTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+ description: The maximum time in seconds to wait for a state change from uninitialized to passive
+ passiveToRunningTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+ description: The maximum time in seconds to wait for a state change from passive to running
+ runningToPassiveTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+ description: The maximum time in seconds to wait for a state change from running to passive
+ passiveToUninitializedTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+ description: The maximum time in seconds to wait for a state change from passive to uninitialized
+ org.onap.policy.clamp.controlloop.ControlLoop:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+ required: false
+ metadata:
+ common: true
+ description: Specifies the organization that provides the control loop element
+ elements:
+ type: list
+ required: true
+ metadata:
+ common: true
+ entry_schema:
+ type: onap.datatypes.ToscaConceptIdentifier
+ description: Specifies a list of control loop element definitions that make up this control loop definition
+ org.onap.policy.clamp.controlloop.HttpControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.ControlLoopElement
+ properties:
+ baseUrl:
+ type: string
+ required: true
+ description: The base URL to be prepended to each path, identifies the host for the REST endpoints.
+ httpHeaders:
+ type: map
+ required: false
+ entry_schema:
+ type: string
+ description: HTTP headers to send on REST requests
+ configurationEntities:
+ type: map
+ required: true
+ entry_schema:
+ type: org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.ConfigurationEntity
+ typeVersion: 1.0.0
+ description: The configuration entities that the Control Loop Element is managing and their associated REST requests
+
+topology_template:
+ node_templates:
+ org.onap.controlloop.HttpControlLoopParticipant:
+ version: 2.3.4
+ type: org.onap.policy.clamp.controlloop.Participant
+ type_version: 1.0.1
+ description: Participant for Http requests
+ properties:
+ provider: ONAP
+ org.onap.domain.database.Http_PMSHMicroserviceControlLoopElement:
+ # Consul http config for PMSH.
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.HttpControlLoopElement
+ type_version: 1.0.1
+ description: Control loop element for the http requests of PMSH microservice
+ properties:
+ provider: ONAP
+ participant_id:
+ name: HttpParticipant0
+ version: 1.0.0
+ participantType:
+ name: org.onap.k8s.controlloop.HttpControlLoopParticipant
+ version: 2.3.4
+ uninitializedToPassiveTimeout: 180
+ baseUrl: http://consul-server-ui:8500
+ httpHeaders:
+ Content-Type: application/json
+ configurationEntities:
+ - configurationEntityId:
+ name: entity1
+ version: 1.0.1
+ restSequence:
+ - restRequestId:
+ name: request1
+ version: 1.0.1
+ httpMethod: PUT
+ path: v1/kv/dcae-pmsh2
+ body: '{
+ "control_loop_name":"pmsh-control-loop",
+ "operational_policy_name":"pmsh-operational-policy",
+ "aaf_password":"demo123456!",
+ "aaf_identity":"dcae@dcae.onap.org",
+ "cert_path":"/opt/app/pmsh/etc/certs/cert.pem",
+ "key_path":"/opt/app/pmsh/etc/certs/key.pem",
+ "ca_cert_path":"/opt/app/pmsh/etc/certs/cacert.pem",
+ "enable_tls":"true",
+ "pmsh_policy":{
+ "subscription":{
+ "subscriptionName":"ExtraPM-All-gNB-R2B",
+ "administrativeState":"UNLOCKED",
+ "fileBasedGP":15,
+ "fileLocation":"\/pm\/pm.xml",
+ "nfFilter":{
+ "nfNames":[
+ "^pnf.*",
+ "^vnf.*"
+ ],
+ "modelInvariantIDs":[
+ ],
+ "modelVersionIDs":[
+ ],
+ "modelNames":[
+ ]
+ },
+ "measurementGroups":[
+ {
+ "measurementGroup":{
+ "measurementTypes":[
+ {
+ "measurementType":"countera"
+ },
+ {
+ "measurementType":"counterb"
+ }
+ ],
+ "managedObjectDNsBasic":[
+ {
+ "DN":"dna"
+ },
+ {
+ "DN":"dnb"
+ }
+ ]
+ }
+ },
+ {
+ "measurementGroup":{
+ "measurementTypes":[
+ {
+ "measurementType":"counterc"
+ },
+ {
+ "measurementType":"counterd"
+ }
+ ],
+ "managedObjectDNsBasic":[
+ {
+ "DN":"dnc"
+ },
+ {
+ "DN":"dnd"
+ }
+ ]
+ }
+ }
+ ]
+ }
+ },
+ "streams_subscribes":{
+ "aai_subscriber":{
+ "type":"message_router",
+ "dmaap_info":{
+ "topic_url":"https://10.152.183.151:3905/events/AAI_EVENT",
+ "client_role":"org.onap.dcae.aaiSub",
+ "location":"san-francisco",
+ "client_id":"1575976809466"
+ }
+ },
+ "policy_pm_subscriber":{
+ "type":"message_router",
+ "dmaap_info":{
+ "topic_url":"https://10.152.183.151:3905/events/org.onap.dmaap.mr.PM_SUBSCRIPTIONS",
+ "client_role":"org.onap.dcae.pmSubscriber",
+ "location":"san-francisco",
+ "client_id":"1575876809456"
+ }
+ }
+ },
+ "streams_publishes":{
+ "policy_pm_publisher":{
+ "type":"message_router",
+ "dmaap_info":{
+ "topic_url":"https://10.152.183.151:3905/events/org.onap.dmaap.mr.PM_SUBSCRIPTIONS",
+ "client_role":"org.onap.dcae.pmPublisher",
+ "location":"san-francisco",
+ "client_id":"1475976809466"
+ }
+ },
+ "other_publisher":{
+ "type":"message_router",
+ "dmaap_info":{
+ "topic_url":"https://10.152.183.151:3905/events/org.onap.dmaap.mr.SOME_OTHER_TOPIC",
+ "client_role":"org.onap.dcae.pmControlPub",
+ "location":"san-francisco",
+ "client_id":"1875976809466"
+ }
+ }
+ }
+ }'
+ expectedResponse: 200
+ org.onap.domain.sample.GenericK8s_ControlLoopDefinition:
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.ControlLoop
+ type_version: 1.0.0
+ description: Control loop for Hello World
+ properties:
+ provider: ONAP
+ elements:
+ - name: org.onap.domain.database.Http_PMSHMicroserviceControlLoopElement
+ version: 1.2.3
+
diff --git a/docs/clamp/controlloop/design-impl/participants/tosca/tosca-k8s-participant.yml b/docs/clamp/controlloop/design-impl/participants/tosca/tosca-k8s-participant.yml
new file mode 100644
index 00000000..70bbe928
--- /dev/null
+++ b/docs/clamp/controlloop/design-impl/participants/tosca/tosca-k8s-participant.yml
@@ -0,0 +1,304 @@
+tosca_definitions_version: tosca_simple_yaml_1_3
+data_types:
+ onap.datatypes.ToscaConceptIdentifier:
+ derived_from: tosca.datatypes.Root
+ properties:
+ name:
+ type: string
+ required: true
+ version:
+ type: string
+ required: true
+ onap.datatype.controlloop.Target:
+ derived_from: tosca.datatypes.Root
+ description: Definition for an entity in A&AI to perform a control loop operation on
+ properties:
+ targetType:
+ type: string
+ description: Category for the target type
+ required: true
+ constraints:
+ - valid_values:
+ - VNF
+ - VM
+ - VFMODULE
+ - PNF
+ entityIds:
+ type: map
+ description: |
+ Map of values that identify the resource. If none are provided, it is assumed that the
+ entity that generated the ONSET event will be the target.
+ required: false
+ metadata:
+ clamp_possible_values: ClampExecution:CSAR_RESOURCES
+ entry_schema:
+ type: string
+ onap.datatype.controlloop.Actor:
+ derived_from: tosca.datatypes.Root
+ description: An actor/operation/target definition
+ properties:
+ actor:
+ type: string
+ description: The actor performing the operation.
+ required: true
+ metadata:
+ clamp_possible_values: Dictionary:DefaultActors,ClampExecution:CDS/actor
+ operation:
+ type: string
+ description: The operation the actor is performing.
+ metadata:
+ clamp_possible_values: Dictionary:DefaultOperations,ClampExecution:CDS/operation
+ required: true
+ target:
+ type: onap.datatype.controlloop.Target
+ description: The resource the operation should be performed on.
+ required: true
+ payload:
+ type: map
+ description: Name/value pairs of payload information passed by Policy to the actor
+ required: false
+ metadata:
+ clamp_possible_values: ClampExecution:CDS/payload
+ entry_schema:
+ type: string
+ onap.datatype.controlloop.Operation:
+ derived_from: tosca.datatypes.Root
+ description: An operation supported by an actor
+ properties:
+ id:
+ type: string
+ description: Unique identifier for the operation
+ required: true
+ description:
+ type: string
+ description: A user-friendly description of the intent for the operation
+ required: false
+ operation:
+ type: onap.datatype.controlloop.Actor
+ description: The definition of the operation to be performed.
+ required: true
+ timeout:
+ type: integer
+ description: The amount of time for the actor to perform the operation.
+ required: true
+ retries:
+ type: integer
+ description: The number of retries the actor should attempt to perform the operation.
+ required: true
+ default: 0
+ success:
+ type: string
+ description: Points to the operation to invoke on success. A value of "final_success" indicates an end to the operation.
+ required: false
+ default: final_success
+ failure:
+ type: string
+ description: Points to the operation to invoke on Actor operation failure.
+ required: false
+ default: final_failure
+ failure_timeout:
+ type: string
+ description: Points to the operation to invoke when the time out for the operation occurs.
+ required: false
+ default: final_failure_timeout
+ failure_retries:
+ type: string
+ description: Points to the operation to invoke when the current operation has exceeded its max retries.
+ required: false
+ default: final_failure_retries
+ failure_exception:
+ type: string
+ description: Points to the operation to invoke when the current operation causes an exception.
+ required: false
+ default: final_failure_exception
+ failure_guard:
+ type: string
+ description: Points to the operation to invoke when the current operation is blocked due to guard policy enforcement.
+ required: false
+ default: final_failure_guard
+node_types:
+ org.onap.policy.clamp.controlloop.Participant:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+ required: false
+ org.onap.policy.clamp.controlloop.ControlLoopElement:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+ required: false
+ metadata:
+ common: true
+ description: Specifies the organization that provides the control loop element
+ participant_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ metadata:
+ common: true
+ participantType:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ metadata:
+ common: true
+ description: The identity of the participant type that hosts this type of Control Loop Element
+ startPhase:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ metadata:
+ common: true
+ description: A value indicating the start phase in which this control loop element will be started, the
+ first start phase is zero. Control Loop Elements are started in their start_phase order and stopped
+ in reverse start phase order. Control Loop Elements with the same start phase are started and
+ stopped simultaneously
+ uninitializedToPassiveTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+ description: The maximum time in seconds to wait for a state change from uninitialized to passive
+ passiveToRunningTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+ description: The maximum time in seconds to wait for a state change from passive to running
+ runningToPassiveTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+ description: The maximum time in seconds to wait for a state change from running to passive
+ passiveToUninitializedTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+ description: The maximum time in seconds to wait for a state change from passive to uninitialized
+ org.onap.policy.clamp.controlloop.ControlLoop:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+ required: false
+ metadata:
+ common: true
+ description: Specifies the organization that provides the control loop element
+ elements:
+ type: list
+ required: true
+ metadata:
+ common: true
+ entry_schema:
+ type: onap.datatypes.ToscaConceptIdentifier
+ description: Specifies a list of control loop element definitions that make up this control loop definition
+ org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.ControlLoopElement
+ properties:
+ chart:
+ type: string
+ required: true
+ configs:
+ type: list
+ required: false
+ requirements:
+ type: string
+ required: false
+ templates:
+ type: list
+ required: false
+ entry_schema:
+ values:
+ type: string
+ required: true
+
+topology_template:
+ node_templates:
+ org.onap.k8s.controlloop.K8SControlLoopParticipant:
+ version: 2.3.4
+ type: org.onap.policy.clamp.controlloop.Participant
+ type_version: 1.0.1
+ description: Participant for K8S
+ properties:
+ provider: ONAP
+ org.onap.domain.database.PMSH_K8SMicroserviceControlLoopElement:
+ # Chart from new repository
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement
+ type_version: 1.0.0
+ description: Control loop element for the K8S microservice for PMSH
+ properties:
+ provider: ONAP
+ participant_id:
+ name: K8sParticipant0
+ version: 1.0.0
+ participantType:
+ name: org.onap.k8s.controlloop.K8SControlLoopParticipant
+ version: 2.3.4
+ chart:
+ chartId:
+ name: dcae-pmsh
+ version: 8.0.0
+ namespace: onap
+ releaseName: pmshms
+ repository:
+ repoName: chartmuseum
+ protocol: http
+ address: chart-museum
+ port: 80
+ userName: onapinitializer
+ password: demo123456!
+ overrideParams:
+ global.masterPassword: test
+
+ org.onap.domain.database.Local_K8SMicroserviceControlLoopElement:
+ # Chart installation without passing repository info
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement
+ type_version: 1.0.0
+ description: Control loop element for the K8S microservice for local chart
+ properties:
+ provider: ONAP
+ participant_id:
+ name: K8sParticipant0
+ version: 1.0.0
+ participantType:
+ name: org.onap.k8s.controlloop.K8SControlLoopParticipant
+ version: 2.3.4
+ chart:
+ chartId:
+ name: nginx-ingress
+ version: 0.9.1
+ releaseName: nginxms
+ namespace: test
+ org.onap.domain.sample.GenericK8s_ControlLoopDefinition:
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.ControlLoop
+ type_version: 1.0.0
+ description: Control loop for Hello World
+ properties:
+ provider: ONAP
+ elements:
+ - name: org.onap.domain.database.PMSH_K8SMicroserviceControlLoopElement
+ version: 1.2.3
+ - name: org.onap.domain.database.Local_K8SMicroserviceControlLoopElement
+ version: 1.2.3
diff --git a/docs/clamp/controlloop/images/participants/k8s-participant.png b/docs/clamp/controlloop/images/participants/k8s-participant.png
new file mode 100644
index 00000000..55945bc3
--- /dev/null
+++ b/docs/clamp/controlloop/images/participants/k8s-participant.png
Binary files differ
diff --git a/docs/clamp/controlloop/images/participants/k8s-rest.png b/docs/clamp/controlloop/images/participants/k8s-rest.png
new file mode 100644
index 00000000..d08982a9
--- /dev/null
+++ b/docs/clamp/controlloop/images/participants/k8s-rest.png
Binary files differ
diff --git a/docs/development/devtools/clamp-dcae.rst b/docs/development/devtools/clamp-dcae.rst
new file mode 100644
index 00000000..c0cd41bf
--- /dev/null
+++ b/docs/development/devtools/clamp-dcae.rst
@@ -0,0 +1,115 @@
+.. This work is licensed under a
+.. Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _clamp-pairwise-testing-label:
+
+.. toctree::
+ :maxdepth: 2
+
+CLAMP <-> DCAE
+~~~~~~~~~~~~~~
+
+The pairwise testing is executed against a default ONAP installation in OOM.
+The CLAMP control loop components interact with DCAE to deploy dcaegen2 services such as PMSH.
+This test verifies that the interaction between DCAE and the control loop components works as expected.
+
+General Setup
+*************
+
+The kubernetes installation allocates the policy components across multiple worker node VMs.
+The worker VM hosting the policy components has the following spec:
+
+- 16GB RAM
+- 8 VCPU
+- 160GB Ephemeral Disk
+
+
+The ONAP components used during the pairwise tests are:
+
+- CLAMP control loop runtime, policy participant, kubernetes participant.
+- DCAE for running dcaegen2-service via kubernetes participant.
+- ChartMuseum service from platform, initialised with DCAE helm charts.
+- DMaaP for the communication between Control loop runtime and participants.
+- Policy Gui for instantiation and commissioning of control loops.
+
+
+ChartMuseum Setup
+*****************
+
+The ChartMuseum helm chart from the platform is deployed in the same cluster. The chart server is then initialized with the dcaegen2-services helm charts by running the script below from the OOM repo.
+The script accepts as an argument the directory path where the helm charts are located.
+
+.. code-block:: bash
+
+ #!/bin/sh
+ ./oom/kubernetes/contrib/tools/registry-initialize.sh -d /oom/kubernetes/dcaegen2-services/charts/
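+
+To confirm that the charts were loaded, the ChartMuseum API can be queried. The command below is a minimal sketch: it assumes the chart service is reachable as *chart-museum* on port 80 from where the command is run (adjust the host and port to your deployment) and that *jq* is installed.
+
+.. code-block:: bash
+
+   # List the charts known to ChartMuseum; the dcae-pmsh chart should appear in the output
+   curl -s http://chart-museum:80/api/charts | jq 'keys'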
+
+Testing procedure
+*****************
+
+The test set focused on the following use cases:
+
+- Deployment and Configuration of DCAE microservice PMSH
+- Undeployment of PMSH
+
+Creation of the Control Loop:
+-----------------------------
+A Control Loop is created by commissioning a Tosca template with Control loop definitions and instantiating the Control Loop with the state "UNINITIALISED".
+
+- Upload a TOSCA template from the POLICY GUI. The definitions include a kubernetes participant and control loop elements that deploy and configure a microservice in the kubernetes cluster.
+  The control loop element for the kubernetes participant includes the helm chart information of the DCAE microservice, and the element for the Http participant includes the configuration entity for the microservice.
+ :download:`Sample Tosca template <tosca/pairwise-testing.yml>`
+
+ .. image:: images/cl-commission.png
+
+ Verification: The template is commissioned successfully without errors.
+
+- Instantiate the commissioned Control loop definitions from the Policy Gui under 'Instantiation Management'.
+
+ .. image:: images/create-instance.png
+
+ Update instance properties of the Control Loop Elements if required.
+
+ .. image:: images/update-instance.png
+
+ Verification: The control loop is created with default state "UNINITIALISED" without errors.
+
+ .. image:: images/cl-instantiation.png
+
+
+Deployment and Configuration of DCAE microservice (PMSH):
+---------------------------------------------------------
+The Control Loop state is changed from "UNINITIALISED" to "PASSIVE" from the Policy Gui. The kubernetes participant deploys the PMSH helm chart from the DCAE chartMuseum server.
+
+.. image:: images/cl-passive.png
+
+Verification:
+
+- The DCAE service PMSH is deployed into the kubernetes cluster and the PMSH pods are in the RUNNING state.
+  `helm ls -n <namespace>` - the helm deployment of the dcaegen2 service PMSH is listed.
+  `kubectl get pod -n <namespace>` - the PMSH pods are deployed, up and running.
+
+- The subscription configuration for the PMSH microservice from the TOSCA definitions is updated in the Consul server. The configuration can be verified on the Consul server UI `http://<CONSUL-SERVER_IP>/ui/#/dc1/kv/`
+
+- The overall state of the Control Loop is changed to "PASSIVE" in the Policy Gui.
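+
+The checks above can be run from the command line. The snippet below is a minimal sketch; it assumes the PMSH chart was installed into the *onap* namespace with the release name *pmshms* (as in the sample template) and that the Consul HTTP API is reachable on its default port 8500.
+
+.. code-block:: bash
+
+   # Helm release and pod status of the PMSH microservice
+   helm ls -n onap | grep pmshms
+   kubectl get pods -n onap | grep pmsh
+
+   # Keys in the Consul KV store; the PMSH subscription configuration should be listed
+   curl -s "http://<CONSUL-SERVER_IP>:8500/v1/kv/?keys" | grep -i pmsh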
+
+.. image:: images/cl-create.png
+
+
+Undeployment of DCAE microservice (PMSH):
+-----------------------------------------
+The Control Loop state is changed from "PASSIVE" to "UNINITIALISED" from the Policy Gui.
+
+.. image:: images/cl-uninitialise.png
+
+Verification:
+
+- The kubernetes participant uninstalls the DCAE PMSH helm chart from the kubernetes cluster. The pods are removed from the cluster.
+
+- The overall state of the Control Loop is changed to "UNINITIALISED" in the Policy Gui.
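+
+A quick command-line check, under the same assumptions as the deployment verification above (release *pmshms* in the *onap* namespace):
+
+.. code-block:: bash
+
+   # Both commands should return no output once the chart has been uninstalled
+   helm ls -n onap | grep pmshms
+   kubectl get pods -n onap | grep pmsh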
+
+.. image:: images/cl-uninitialised-state.png
+
+
+
diff --git a/docs/development/devtools/clamp-policy.rst b/docs/development/devtools/clamp-policy.rst
new file mode 100644
index 00000000..72a9a1b1
--- /dev/null
+++ b/docs/development/devtools/clamp-policy.rst
@@ -0,0 +1,124 @@
+.. This work is licensed under a
+.. Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _clamp-pairwise-testing-label:
+
+.. toctree::
+ :maxdepth: 2
+
+CLAMP <-> Policy Core
+~~~~~~~~~~~~~~~~~~~~~
+
+The pairwise testing is executed against a default ONAP installation in OOM.
+The CLAMP control loop components interact with the Policy Framework to create and deploy policies.
+This test verifies that the interaction between the Policy Framework and the control loop components works as expected.
+
+General Setup
+*************
+
+The kubernetes installation allocates the policy components across multiple worker node VMs.
+The worker VM hosting the policy components has the following spec:
+
+- 16GB RAM
+- 8 VCPU
+- 160GB Ephemeral Disk
+
+
+The ONAP components used during the pairwise tests are:
+
+- CLAMP control loop runtime, policy participant, kubernetes participant.
+- DMaaP for the communication between Control loop runtime and participants.
+- Policy API to create (and delete at the end of the tests) policies for each
+ scenario under test.
+- Policy PAP to deploy (and undeploy at the end of the tests) policies for each scenario under test.
+- Policy Gui for instantiation and commissioning of control loops.
+
+
+Testing procedure
+*****************
+
+The test set focused on the following use cases:
+
+- Creation/Deletion of policies
+- Deployment/Undeployment of policies
+
+Creation of the Control Loop:
+-----------------------------
+A Control Loop is created by commissioning a Tosca template with Control loop definitions and instantiating the Control Loop with the state "UNINITIALISED".
+
+- Upload a TOSCA template from the POLICY GUI. The definitions include a policy participant and a control loop element that creates and deploys the required policies. :download:`Sample Tosca template <tosca/pairwise-testing.yml>`
+
+ .. image:: images/cl-commission.png
+
+ Verification: The template is commissioned successfully without errors.
+
+- Instantiate the commissioned Control loop from the Policy Gui under 'Instantiation Management'.
+
+ .. image:: images/create-instance.png
+
+ Update instance properties of the Control Loop Elements if required.
+
+ .. image:: images/update-instance.png
+
+ Verification: The control loop is created with default state "UNINITIALISED" without errors.
+
+ .. image:: images/cl-instantiation.png
+
+
+Creation of policies:
+---------------------
+The Control Loop state is changed from "UNINITIALISED" to "PASSIVE" from the Policy Gui. Verify via the Policy API endpoint that the policy types defined in the TOSCA template have been created.
+
+.. image:: images/cl-passive.png
+
+Verification:
+
+- The policy types defined in the tosca template are created by the policy participant and listed in the Policy Api.
+ Policy Api endpoint: `<https://<POLICY-API-IP>/policy/api/v1/policytypes>`
+
+- The overall state of the Control Loop is changed to "PASSIVE" in the Policy Gui.
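+
+The same check can be made against the Policy API from the command line. This is a minimal sketch: the endpoint is the one listed above, and the credentials are placeholders for whatever your deployment uses.
+
+.. code-block:: bash
+
+   # Inspect the returned service template for the policy types defined in the commissioned template
+   curl -sk -u '<user>:<password>' "https://<POLICY-API-IP>/policy/api/v1/policytypes" | jq .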
+
+.. image:: images/cl-create.png
+
+
+Deployment of policies:
+-----------------------
+The Control Loop state is changed from "PASSIVE" to "RUNNING" from the Policy Gui.
+
+.. image:: images/cl-running.png
+
+Verification:
+
+- The policy participant deploys the policies of the Tosca control loop elements in Policy PAP for all the PDP groups.
+ Policy PAP endpoint: `<https://<POLICY-PAP-IP>/policy/pap/v1/pdps>`
+
+- The overall state of the Control Loop is changed to "RUNNING" in the Policy Gui.
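+
+The same check can be made against Policy PAP from the command line. This is a minimal sketch: the endpoint is the one listed above, and the credentials are placeholders for whatever your deployment uses.
+
+.. code-block:: bash
+
+   # The deployed policies should be listed under the relevant PDP groups
+   curl -sk -u '<user>:<password>' "https://<POLICY-PAP-IP>/policy/pap/v1/pdps" | jq .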
+
+.. image:: images/cl-running-state.png
+
+Deletion of Policies:
+---------------------
+The Control Loop state is changed from "RUNNING" to "PASSIVE" from the Policy Gui.
+
+Verification:
+
+- The policy participant deletes the created policy types, which can be verified on the Policy Api. The policy types created as part of the control loop should no longer be listed on the Policy Api.
+ Policy Api endpoint: `<https://<POLICY-API-IP>/policy/api/v1/policytypes>`
+
+- The overall state of the Control Loop is changed to "PASSIVE" in the Policy Gui.
+
+.. image:: images/cl-create.png
+
+Undeployment of policies:
+-------------------------
+The Control Loop state is changed from "PASSIVE" to "UNINITIALISED" from the Policy Gui.
+
+Verification:
+
+- The policy participant undeploys the policies of the control loop elements from the PDP groups. The policies deployed as part of the control loop should no longer be listed in Policy PAP.
+ Policy PAP endpoint: `<https://<POLICY-PAP-IP>/policy/pap/v1/pdps>`
+
+- The overall state of the Control Loop is changed to "UNINITIALISED" in the Policy Gui.
+
+.. image:: images/cl-uninitialised-state.png
diff --git a/docs/development/devtools/clamp-smoke.rst b/docs/development/devtools/clamp-smoke.rst
new file mode 100644
index 00000000..06ec6db7
--- /dev/null
+++ b/docs/development/devtools/clamp-smoke.rst
@@ -0,0 +1,357 @@
+.. This work is licensed under a
+.. Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+.. _policy-development-tools-label:
+
+CLAMP control loop runtime Smoke Tests
+######################################
+
+.. contents::
+ :depth: 3
+
+
+This article explains how to build the CLAMP control loop runtime for development purposes and how to run smoke tests for the control loop runtime. To start, the developer should consult the latest ONAP Wiki to familiarize themselves with developer best practices and how-tos to set up their environment, see `https://wiki.onap.org/display/DW/Developer+Best+Practices`.
+
+
+This article assumes that:
+
+* You are using a *\*nix* operating system such as linux or macOS.
+* You are using a directory called *git* off your home directory *(~/git)* for your git repositories
+* Your local maven repository is in the location *~/.m2/repository*
+* You have copied the settings.xml from oparent to *~/.m2/* directory
+* You have added settings to access the ONAP Nexus to your M2 configuration, see `Maven Settings Example <https://wiki.onap.org/display/DW/Setting+Up+Your+Development+Environment>`_ (bottom of the linked page)
+
+The procedure documented in this article has been verified using an Ubuntu 20.04 LTS VM.
+
+Cloning CLAMP control loop runtime and all dependencies
+**********************************************************
+
+Run a script such as the one below to clone the required modules from the `ONAP git repository <https://gerrit.onap.org/r/#/admin/projects/?filter=policy>`_. This script clones the CLAMP control loop runtime and all its dependencies.
+
+The ONAP Policy Framework has dependencies on the ONAP Parent *oparent* module, the ONAP ECOMP SDK *ecompsdkos* module, and the A&AI Schema module.
+
+
+.. code-block:: bash
+ :caption: Typical ONAP Policy Framework Clone Script
+ :linenos:
+
+ #!/usr/bin/env bash
+
+ ## script name for output
+ MOD_SCRIPT_NAME=$(basename "$0")
+
+ ## the ONAP clone directory, defaults to "onap"
+ clone_dir="onap"
+
+ ## the ONAP repos to clone
+ onap_repos="\
+ policy/parent \
+ policy/common \
+ policy/models \
+ policy/clamp \
+ policy/docker "
+
+ ##
+ ## Help screen and exit condition (i.e. too few arguments)
+ ##
+ Help()
+ {
+ echo ""
+ echo "$MOD_SCRIPT_NAME - clones all required ONAP git repositories"
+ echo ""
+ echo " Usage: $MOD_SCRIPT_NAME [-options]"
+ echo ""
+ echo " Options"
+ echo " -d - the ONAP clone directory, defaults to 'onap'"
+ echo " -h - this help screen"
+ echo ""
+ exit 255;
+ }
+
+ ##
+ ## read command line
+ ##
+ while [ $# -gt 0 ]
+ do
+ case $1 in
+ #-d ONAP clone directory
+ -d)
+ shift
+ if [ -z "$1" ]; then
+ echo "$MOD_SCRIPT_NAME: no clone directory"
+ exit 1
+ fi
+ clone_dir=$1
+ shift
+ ;;
+
+ #-h prints help and exits
+ -h)
+ Help;exit 0;;
+
+ *) echo "$MOD_SCRIPT_NAME: undefined CLI option - $1"; exit 255;;
+ esac
+ done
+
+ if [ -f "$clone_dir" ]; then
+ echo "$MOD_SCRIPT_NAME: requested clone directory '$clone_dir' exists as file"
+ exit 2
+ fi
+ if [ -d "$clone_dir" ]; then
+ echo "$MOD_SCRIPT_NAME: requested clone directory '$clone_dir' exists as directory"
+ exit 2
+ fi
+
+ mkdir $clone_dir
+ if [ $? != 0 ]
+ then
+ echo cannot clone ONAP repositories, could not create directory '"'$clone_dir'"'
+ exit 3
+ fi
+
+ for repo in $onap_repos
+ do
+ repoDir=`dirname "$repo"`
+ repoName=`basename "$repo"`
+
+ if [ ! -z "$repoDir" ]
+ then
+ mkdir -p "$clone_dir/$repoDir"
+ if [ $? != 0 ]
+ then
+ echo cannot clone ONAP repositories, could not create directory '"'$clone_dir/$repoDir'"'
+ exit 4
+ fi
+ fi
+
+ git clone https://gerrit.onap.org/r/${repo} $clone_dir/$repo
+ done
+
+ echo ONAP has been cloned into '"'$clone_dir'"'
+
+
+Execution of the script above results in the following directory hierarchy in your *~/git* directory:
+
+ * ~/git/onap
+ * ~/git/onap/policy
+ * ~/git/onap/policy/parent
+ * ~/git/onap/policy/common
+ * ~/git/onap/policy/models
+ * ~/git/onap/policy/clamp
+ * ~/git/onap/policy/docker
+
+
+Building CLAMP control loop runtime and all dependencies
+***********************************************************
+
+**Step 1:** Optionally, for a completely clean build, remove the ONAP built modules from your local repository.
+
+ .. code-block:: bash
+
+ rm -fr ~/.m2/repository/org/onap
+
+
+**Step 2:** A pom such as the one below can be used to build the ONAP Policy Framework modules. Create the *pom.xml* file in the directory *~/git/onap/policy*.
+
+.. code-block:: xml
+ :caption: Typical pom.xml to build the ONAP Policy Framework
+ :linenos:
+
+ <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+ <modelVersion>4.0.0</modelVersion>
+ <groupId>org.onap</groupId>
+ <artifactId>onap-policy</artifactId>
+ <version>1.0.0-SNAPSHOT</version>
+ <packaging>pom</packaging>
+ <name>${project.artifactId}</name>
+ <inceptionYear>2017</inceptionYear>
+ <organization>
+ <name>ONAP</name>
+ </organization>
+
+ <modules>
+ <module>parent</module>
+ <module>common</module>
+ <module>models</module>
+ <module>clamp</module>
+ </modules>
+ </project>
+
+
+**Step 3:** You can now build the Policy framework.
+
+Java artifacts only:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy
+ mvn -pl '!org.onap.policy.clamp:policy-clamp-runtime' install
+
+With docker images:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/clamp/packages/
+ mvn clean install -P docker
+
+Running MariaDB and DMaaP Simulator
+***********************************
+
+Running a MariaDB Instance
+++++++++++++++++++++++++++
+
+Assuming you have successfully built the codebase using the instructions above, there are two requirements for the CLAMP controlloop runtime component to run. One of them is a
+running MariaDB database instance; the easiest way to provide one is to run the docker image locally.
+
+An SQL script such as the one below can be used for the database initialization. Create the *mariadb.sql* file in the directory *~/git*.
+
+ .. code-block:: SQL
+
+ create database controlloop;
+ CREATE USER 'policy'@'%' IDENTIFIED BY 'P01icY';
+ GRANT ALL PRIVILEGES ON controlloop.* TO 'policy'@'%';
+
+
+Execution of the command below results in the creation and start of the *mariadb-smoke-test* container.
+
+ .. code-block:: bash
+
+ cd ~/git
+ docker run --name mariadb-smoke-test \
+ -p 3306:3306 \
+ -e MYSQL_ROOT_PASSWORD=my-secret-pw \
+ --mount type=bind,source=~/git/mariadb.sql,target=/docker-entrypoint-initdb.d/data.sql \
+ mariadb:10.5.8
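+
+Once the container is up, you can optionally confirm that the *controlloop* database and the *policy* user were created. The command below is a minimal sketch that uses the MySQL client shipped inside the container:
+
+ .. code-block:: bash
+
+    # Should list the controlloop database created by mariadb.sql
+    docker exec -it mariadb-smoke-test \
+        mysql -upolicy -pP01icY -e "SHOW DATABASES LIKE 'controlloop';"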
+
+
+Running the DMaaP Simulator during Development
+++++++++++++++++++++++++++++++++++++++++++++++
+The second requirement for the CLAMP controlloop runtime component is a running DMaaP simulator. You can run it from the command line using Maven.
+
+
+Change the local configuration file *src/test/resources/simParameters.json* to match the configuration below:
+
+.. code-block:: json
+
+ {
+ "dmaapProvider": {
+ "name": "DMaaP simulator",
+ "topicSweepSec": 900
+ },
+ "restServers": [
+ {
+ "name": "DMaaP simulator",
+ "providerClass": "org.onap.policy.models.sim.dmaap.rest.DmaapSimRestControllerV1",
+ "host": "localhost",
+ "port": 3904,
+ "https": false
+ }
+ ]
+ }
+
+Run the following commands:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/models/models-sim/policy-models-simulators
+ mvn exec:java -Dexec.mainClass=org.onap.policy.models.simulators.Main -Dexec.args="src/test/resources/simParameters.json"
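+
+To check that the simulator is listening, you can publish a message to a throw-away topic. This is a minimal sketch; the port is the one configured in *simParameters.json* above and the topic name is arbitrary:
+
+ .. code-block:: bash
+
+    # An HTTP 200 response with a message count indicates the simulator is up
+    curl -s -X POST -H 'Content-Type: application/json' \
+         -d '{"test": "message"}' http://localhost:3904/events/SMOKE_TEST_TOPIC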
+
+
+Developing and Debugging CLAMP control loop runtime
+***************************************************
+
+Running on the Command Line using Maven
++++++++++++++++++++++++++++++++++++++++
+
+Once the MariaDB instance and the DMaaP simulator are up and running, run the following commands:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/clamp/runtime-controlloop
+ mvn spring-boot:run
+
+
+Running on the Command Line
++++++++++++++++++++++++++++
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/clamp/runtime-controlloop
+ java -jar target/policy-clamp-runtime-controlloop-6.1.3-SNAPSHOT.jar
+
+
+Running in Eclipse
+++++++++++++++++++
+
+1. Check out the policy models repository
+2. Go to the *policy-clamp-runtime-controlloop* module in the clamp repo
+3. Specify a run configuration using the class *org.onap.policy.clamp.controlloop.runtime.Application* as the main class
+4. Run the configuration
+
+The Swagger UI of the control loop runtime is available at *http://localhost:6969/onap/controlloop/swagger-ui/*, and the swagger JSON at *http://localhost:6969/onap/controlloop/v2/api-docs/*
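+
+A quick way to confirm from the command line that the runtime is up is to fetch the swagger JSON (add ``-u '<user>:<password>'`` if basic authentication is enabled in your configuration); a minimal sketch:
+
+ .. code-block:: bash
+
+    curl -s http://localhost:6969/onap/controlloop/v2/api-docs/ | jq .info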
+
+
+Running one or more participant simulators
+++++++++++++++++++++++++++++++++++++++++++
+
+In *docker/csit/clamp/tests/data* you can find a test case with the policy participant. In order to use that test, you can use the participant simulator.
+Copy the file *src/main/resources/config/application.yaml* into *src/test/resources/*, and then change *participantId* and *participantType* as shown below:
+
+ .. code-block:: yaml
+
+ participantId:
+ name: org.onap.policy.controlloop.PolicyControlLoopParticipant
+ version: 2.3.1
+ participantType:
+ name: org.onap.PM_Policy
+ version: 1.0.0
+
+Run the following commands:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/clamp/participant/participant-impl/participant-impl-simulator
+ java -jar target/policy-clamp-participant-impl-simulator-6.1.3-SNAPSHOT.jar --spring.config.location=src/test/resources/application.yaml
+
+
+Creating self-signed certificate
+++++++++++++++++++++++++++++++++
+
+There is an additional requirement for the CLAMP control loop runtime docker image to run: a self-signed SSL certificate must be created.
+
+Run the following commands:
+
+ .. code-block:: bash
+
+ cd ~/git/onap/policy/docker/csit/
+ ./gen_truststore.sh
+ ./gen_keystore.sh
+
+Execution of the commands above results in additional files being created in the directory *~/git/onap/policy/docker/csit/config*:
+
+ * ~/git/onap/policy/docker/csit/config/cakey.pem
+ * ~/git/onap/policy/docker/csit/config/careq.pem
+ * ~/git/onap/policy/docker/csit/config/caroot.cer
+ * ~/git/onap/policy/docker/csit/config/ks.cer
+ * ~/git/onap/policy/docker/csit/config/ks.csr
+ * ~/git/onap/policy/docker/csit/config/ks.jks
+
+
+Running the CLAMP control loop runtime docker image
++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+Run the following command:
+
+ .. code-block:: bash
+
+ docker run --name runtime-smoke-test \
+ -p 6969:6969 \
+ -e mariadb.host=host.docker.internal \
+ -e topicServer=host.docker.internal \
+ --mount type=bind,source=~/git/onap/policy/docker/csit/config/ks.jks,target=/opt/app/policy/clamp/etc/ssl/policy-keystore \
+ --mount type=bind,source=~/git/onap/policy/clamp/runtime-controlloop/src/main/resources/application.yaml,target=/opt/app/policy/clamp/etc/ClRuntimeParameters.yaml \
+ onap/policy-clamp-cl-runtime
+
+
+The Swagger UI of the control loop runtime is available at *https://localhost:6969/onap/controlloop/swagger-ui/*, and the swagger JSON at *https://localhost:6969/onap/controlloop/v2/api-docs/*
diff --git a/docs/development/devtools/db-migrator-smoke.rst b/docs/development/devtools/db-migrator-smoke.rst
new file mode 100644
index 00000000..4aa41e46
--- /dev/null
+++ b/docs/development/devtools/db-migrator-smoke.rst
@@ -0,0 +1,413 @@
+.. This work is licensed under a Creative Commons Attribution
+.. 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+Policy DB Migrator Smoke Tests
+##############################
+
+Prerequisites
+*************
+
+Check the number of files in each release:
+
+.. code::
+ :number-lines:
+
+ ls 0800/upgrade/*.sql | wc -l      # expected: 96
+ ls 0900/upgrade/*.sql | wc -l      # expected: 13
+ ls 0800/downgrade/*.sql | wc -l    # expected: 96
+ ls 0900/downgrade/*.sql | wc -l    # expected: 13
+
+Upgrade scripts
+===============
+
+.. code::
+ :number-lines:
+
+ /opt/app/policy/bin/prepare_upgrade.sh policyadmin
+ /opt/app/policy/bin/db-migrator -s policyadmin -o upgrade
+
+.. note::
+ You can also run db-migrator upgrade with the -t and -f options
+
+Downgrade scripts
+=================
+
+.. code::
+ :number-lines:
+
+ /opt/app/policy/bin/prepare_downgrade.sh policyadmin
+ /opt/app/policy/bin/db-migrator -s policyadmin -o downgrade -f 0900 -t 0800
+
+Db migrator initialization script
+=================================
+
+Update /oom/kubernetes/policy/resources/config/db_migrator_policy_init.sh with the appropriate upgrade/downgrade calls.
+
+The policy version you are deploying should either be an upgrade or downgrade from the current db migrator schema version.
+
+Every time you modify db_migrator_policy_init.sh you will have to undeploy, make and redeploy before updates are applied.
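+
+The exact redeploy sequence depends on how your OOM deployment was installed. The commands below are only a sketch of one possible cycle; they assume the OOM *deploy*/*undeploy* helm plugins are installed, the release prefix is *dev* and the standard OOM override file is used, so adapt them to your environment.
+
+.. code::
+
+   helm undeploy dev-policy
+   cd oom/kubernetes && make policy
+   helm deploy dev local/onap --namespace onap -f onap/resources/overrides/onap-all.yaml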
+
+1. Fresh Install
+****************
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 109
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 109
+ * - schema_version
+ - 0900
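+
+The figures above can be cross-checked directly in the database. The command below is a minimal sketch with placeholder credentials and host for your MariaDB instance; the table names are the ones used by db-migrator in the migration schema.
+
+.. code::
+
+   mysql -u<user> -p<password> -h <mariadb-host> -e \
+       "SELECT * FROM migration.schema_versions;
+        SELECT count(*) FROM migration.policyadmin_schema_changelog;
+        SELECT count(*) FROM information_schema.tables WHERE table_schema = 'policyadmin';"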
+
+2. Downgrade to Honolulu (0800)
+*******************************
+
+Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
+
+Make/Redeploy to run downgrade.
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 13
+ * - Tables in policyadmin
+ - 73
+ * - Records Added
+ - 13
+ * - schema_version
+ - 0800
+
+3. Upgrade to Istanbul (0900)
+*****************************
+
+Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts".
+
+Make/Redeploy to run upgrade.
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 13
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 13
+ * - schema_version
+ - 0900
+
+4. Upgrade to Istanbul (0900) without any information in the migration schema
+*****************************************************************************
+
+Ensure you are on release 0800. (This may require running a downgrade before starting the test)
+
+Drop db-migrator tables in migration schema:
+
+.. code::
+ :number-lines:
+
+ DROP TABLE schema_versions;
+ DROP TABLE policyadmin_schema_changelog;
+
+Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts".
+
+Make/Redeploy to run upgrade.
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 13
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 13
+ * - schema_version
+ - 0900
+
+5. Upgrade to Istanbul (0900) after failed downgrade
+****************************************************
+
+Ensure you are on release 0900.
+
+Rename pdpstatistics table in policyadmin schema:
+
+.. code::
+
+ RENAME TABLE pdpstatistics TO backup_pdpstatistics;
+
+Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
+
+Make/Redeploy to run downgrade
+
+This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)
+
+Rename the backup_pdpstatistics table in the policyadmin schema:
+
+.. code::
+
+ RENAME TABLE backup_pdpstatistics TO pdpstatistics;
+
+Modify db_migrator_policy_init.sh - Remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts"
+
+Make/Redeploy to run upgrade
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 11
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 11
+ * - schema_version
+ - 0900
+
+6. Downgrade to Honolulu (0800) after failed downgrade
+******************************************************
+
+Ensure you are on release 0900.
+
+Add the timeStamp column to jpapdpstatistics_enginestats:
+
+.. code::
+
+ ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN timeStamp datetime DEFAULT NULL NULL AFTER UPTIME;
+
+Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
+
+Make/Redeploy to run downgrade
+
+This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)
+
+Remove timeStamp column from jpapdpstatistics_enginestats:
+
+.. code::
+
+ ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp;
+
+The config job will retry 5 times. If you make your fix before this limit is reached you won't need to redeploy.
+
+Redeploy to run downgrade
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 14
+ * - Tables in policyadmin
+ - 73
+ * - Records Added
+ - 14
+ * - schema_version
+ - 0800
+
+7. Downgrade to Honolulu (0800) after failed upgrade
+****************************************************
+
+Ensure you are on release 0800.
+
+Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts"
+
+Update pdpstatistics:
+
+.. code::
+
+ ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL NULL AFTER POLICYEXECUTEDSUCCESSCOUNT;
+
+Make/Redeploy to run upgrade
+
+This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)
+
+Once the retry count has been reached, update pdpstatistics:
+
+.. code::
+
+ ALTER TABLE pdpstatistics DROP COLUMN POLICYUNDEPLOYCOUNT;
+
+Modify db_migrator_policy_init.sh - Remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
+
+Make/Redeploy to run downgrade
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 7
+ * - Tables in policyadmin
+ - 73
+ * - Records Added
+ - 7
+ * - schema_version
+ - 0800
+
+8. Upgrade to Istanbul (0900) after failed upgrade
+**************************************************
+
+Ensure you are on release 0800.
+
+Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts"
+
+Update PDP table:
+
+.. code::
+
+ ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY;
+
+Make/Redeploy to run upgrade
+
+This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)
+
+Update PDP table:
+
+.. code::
+
+ ALTER TABLE pdp DROP COLUMN LASTUPDATE;
+
+The config job will retry 5 times. If you make your fix before this limit is reached you won't need to redeploy.
+
+Redeploy to run upgrade
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 14
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 14
+ * - schema_version
+ - 0900
+
+9. Downgrade to Honolulu (0800) with data in pdpstatistics and jpapdpstatistics_enginestats
+*******************************************************************************************
+
+Ensure you are on release 0900.
+
+Check pdpstatistics and jpapdpstatistics_enginestats are populated with data.
+
+.. code::
+ :number-lines:
+
+ SELECT count(*) FROM pdpstatistics;
+ SELECT count(*) FROM jpapdpstatistics_enginestats;
+
+Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
+
+Make/Redeploy to run downgrade
+
+Check the tables to ensure the number of records is the same.
+
+.. code::
+ :number-lines:
+
+ SELECT count(*) FROM pdpstatistics;
+ SELECT count(*) FROM jpapdpstatistics_enginestats;
+
+Check pdpstatistics to ensure the primary key has changed:
+
+.. code::
+
+ SELECT column_name, constraint_name FROM information_schema.key_column_usage WHERE table_name='pdpstatistics';
+
+Check jpapdpstatistics_enginestats to ensure id column has been dropped and timestamp column added.
+
+.. code::
+
+ SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'jpapdpstatistics_enginestats';
+
+Check the pdp table to ensure the LASTUPDATE column has been dropped.
+
+.. code::
+
+ SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'pdp';
+
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 13
+ * - Tables in policyadmin
+ - 73
+ * - Records Added
+ - 13
+ * - schema_version
+ - 0800
+
+10. Upgrade to Istanbul (0900) with data in pdpstatistics and jpapdpstatistics_enginestats
+******************************************************************************************
+
+Ensure you are on release 0800.
+
+Check pdpstatistics and jpapdpstatistics_enginestats are populated with data.
+
+.. code::
+ :number-lines:
+
+ SELECT count(*) FROM pdpstatistics;
+ SELECT count(*) FROM jpapdpstatistics_enginestats;
+
+Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts"
+
+Make/Redeploy to run upgrade
+
+Check the tables to ensure the number of records is the same.
+
+.. code::
+ :number-lines:
+
+ SELECT count(*) FROM pdpstatistics;
+ SELECT count(*) FROM jpapdpstatistics_enginestats;
+
+Check pdpstatistics to ensure the primary key has changed:
+
+.. code::
+
+ SELECT column_name, constraint_name FROM information_schema.key_column_usage WHERE table_name='pdpstatistics';
+
+Check jpapdpstatistics_enginestats to ensure timestamp column has been dropped and id column added.
+
+.. code::
+
+ SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'jpapdpstatistics_enginestats';
+
+Check the pdp table to ensure the LASTUPDATE column has been added and the value has defaulted to the CURRENT_TIMESTAMP.
+
+.. code::
+
+ SELECT table_name, column_name, data_type, column_default FROM information_schema.columns WHERE table_name = 'pdp';
+
+.. list-table::
+ :widths: 60 20
+ :header-rows: 0
+
+ * - Number of files run
+ - 13
+ * - Tables in policyadmin
+ - 75
+ * - Records Added
+ - 13
+ * - schema_version
+ - 0900
+
+.. note::
+ The number of records added may vary depending on the number of retries.
+
+End of Document
diff --git a/docs/development/devtools/devtools.rst b/docs/development/devtools/devtools.rst
index 0654b3a5..dff8819d 100644
--- a/docs/development/devtools/devtools.rst
+++ b/docs/development/devtools/devtools.rst
@@ -276,6 +276,12 @@ familiar with the Policy Framework components and test any local changes.
.. toctree::
:maxdepth: 1
+ policy-gui-controlloop-smoke.rst
+
+ db-migrator-smoke.rst
..
api-smoke.rst
@@ -326,6 +332,10 @@ the Policy Framework works in a full ONAP deployment.
.. toctree::
:maxdepth: 1
+ clamp-policy.rst
+
+ clamp-dcae.rst
+
..
api-pairwise.rst
@@ -344,9 +354,6 @@ the Policy Framework works in a full ONAP deployment.
..
distribution-pairwise.rst
-..
- clamp-pairwise.rst
-
Generating Swagger Documentation
********************************
diff --git a/docs/development/devtools/images/cl-commission.png b/docs/development/devtools/images/cl-commission.png
new file mode 100644
index 00000000..ee1bab17
--- /dev/null
+++ b/docs/development/devtools/images/cl-commission.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-create.png b/docs/development/devtools/images/cl-create.png
new file mode 100644
index 00000000..df97a170
--- /dev/null
+++ b/docs/development/devtools/images/cl-create.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-instantiation.png b/docs/development/devtools/images/cl-instantiation.png
new file mode 100644
index 00000000..b1101ffb
--- /dev/null
+++ b/docs/development/devtools/images/cl-instantiation.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-passive.png b/docs/development/devtools/images/cl-passive.png
new file mode 100644
index 00000000..def811a5
--- /dev/null
+++ b/docs/development/devtools/images/cl-passive.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-running-state.png b/docs/development/devtools/images/cl-running-state.png
new file mode 100644
index 00000000..ab7b73c5
--- /dev/null
+++ b/docs/development/devtools/images/cl-running-state.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-running.png b/docs/development/devtools/images/cl-running.png
new file mode 100644
index 00000000..e9730e0d
--- /dev/null
+++ b/docs/development/devtools/images/cl-running.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-uninitialise.png b/docs/development/devtools/images/cl-uninitialise.png
new file mode 100644
index 00000000..d10b214c
--- /dev/null
+++ b/docs/development/devtools/images/cl-uninitialise.png
Binary files differ
diff --git a/docs/development/devtools/images/cl-uninitialised-state.png b/docs/development/devtools/images/cl-uninitialised-state.png
new file mode 100644
index 00000000..f8a77da8
--- /dev/null
+++ b/docs/development/devtools/images/cl-uninitialised-state.png
Binary files differ
diff --git a/docs/development/devtools/images/create-instance.png b/docs/development/devtools/images/create-instance.png
new file mode 100644
index 00000000..3b3c0c21
--- /dev/null
+++ b/docs/development/devtools/images/create-instance.png
Binary files differ
diff --git a/docs/development/devtools/images/update-instance.png b/docs/development/devtools/images/update-instance.png
new file mode 100644
index 00000000..fa1ee095
--- /dev/null
+++ b/docs/development/devtools/images/update-instance.png
Binary files differ
diff --git a/docs/development/devtools/tosca/pairwise-testing.yml b/docs/development/devtools/tosca/pairwise-testing.yml
new file mode 100644
index 00000000..e6c25d0d
--- /dev/null
+++ b/docs/development/devtools/tosca/pairwise-testing.yml
@@ -0,0 +1,996 @@
+tosca_definitions_version: tosca_simple_yaml_1_3
+data_types:
+ onap.datatypes.ToscaConceptIdentifier:
+ derived_from: tosca.datatypes.Root
+ properties:
+ name:
+ type: string
+ required: true
+ version:
+ type: string
+ required: true
+ onap.datatype.controlloop.Target:
+ derived_from: tosca.datatypes.Root
+ description: Definition for an entity in A&AI to perform a control loop operation on
+ properties:
+ targetType:
+ type: string
+ description: Category for the target type
+ required: true
+ constraints:
+ - valid_values:
+ - VNF
+ - VM
+ - VFMODULE
+ - PNF
+ entityIds:
+ type: map
+ description: |
+ Map of values that identify the resource. If none are provided, it is assumed that the
+ entity that generated the ONSET event will be the target.
+ required: false
+ metadata:
+ clamp_possible_values: ClampExecution:CSAR_RESOURCES
+ entry_schema:
+ type: string
+ onap.datatype.controlloop.Actor:
+ derived_from: tosca.datatypes.Root
+ description: An actor/operation/target definition
+ properties:
+ actor:
+ type: string
+ description: The actor performing the operation.
+ required: true
+ metadata:
+ clamp_possible_values: Dictionary:DefaultActors,ClampExecution:CDS/actor
+ operation:
+ type: string
+ description: The operation the actor is performing.
+ metadata:
+ clamp_possible_values: Dictionary:DefaultOperations,ClampExecution:CDS/operation
+ required: true
+ target:
+ type: onap.datatype.controlloop.Target
+ description: The resource the operation should be performed on.
+ required: true
+ payload:
+ type: map
+ description: Name/value pairs of payload information passed by Policy to the actor
+ required: false
+ metadata:
+ clamp_possible_values: ClampExecution:CDS/payload
+ entry_schema:
+ type: string
+ onap.datatype.controlloop.Operation:
+ derived_from: tosca.datatypes.Root
+ description: An operation supported by an actor
+ properties:
+ id:
+ type: string
+ description: Unique identifier for the operation
+ required: true
+ description:
+ type: string
+ description: A user-friendly description of the intent for the operation
+ required: false
+ operation:
+ type: onap.datatype.controlloop.Actor
+ description: The definition of the operation to be performed.
+ required: true
+ timeout:
+ type: integer
+ description: The amount of time for the actor to perform the operation.
+ required: true
+ retries:
+ type: integer
+ description: The number of retries the actor should attempt to perform the operation.
+ required: true
+ default: 0
+ success:
+ type: string
+ description: Points to the operation to invoke on success. A value of "final_success" indicates an end to the operation.
+ required: false
+ default: final_success
+ failure:
+ type: string
+ description: Points to the operation to invoke on Actor operation failure.
+ required: false
+ default: final_failure
+ failure_timeout:
+ type: string
+ description: Points to the operation to invoke when the time out for the operation occurs.
+ required: false
+ default: final_failure_timeout
+ failure_retries:
+ type: string
+ description: Points to the operation to invoke when the current operation has exceeded its max retries.
+ required: false
+ default: final_failure_retries
+ failure_exception:
+ type: string
+ description: Points to the operation to invoke when the current operation causes an exception.
+ required: false
+ default: final_failure_exception
+ failure_guard:
+ type: string
+ description: Points to the operation to invoke when the current operation is blocked due to guard policy enforcement.
+ required: false
+ default: final_failure_guard
+ onap.datatypes.monitoring.managedObjectDNsBasic:
+ constraints: []
+ properties:
+ DN:
+ name: DN
+ type: string
+ typeVersion: 0.0.0
+ description: Managed object distinguished name
+ required: true
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.managedObjectDNsBasic
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.managedObjectDNsBasics:
+ constraints: []
+ properties:
+ managedObjectDNsBasic:
+ name: managedObjectDNsBasic
+ type: map
+ typeVersion: 0.0.0
+ description: Managed object distinguished name object
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.managedObjectDNsBasic
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.managedObjectDNsBasics
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.measurementGroup:
+ constraints: []
+ properties:
+ measurementTypes:
+ name: measurementTypes
+ type: list
+ typeVersion: 0.0.0
+ description: List of measurement types
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.measurementTypes
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ managedObjectDNsBasic:
+ name: managedObjectDNsBasic
+ type: list
+ typeVersion: 0.0.0
+ description: List of managed object distinguished names
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.managedObjectDNsBasics
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.measurementGroup
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.measurementGroups:
+ constraints: []
+ properties:
+ measurementGroup:
+ name: measurementGroup
+ type: map
+ typeVersion: 0.0.0
+ description: Measurement Group
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.measurementGroup
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.measurementGroups
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.measurementType:
+ constraints: []
+ properties:
+ measurementType:
+ name: measurementType
+ type: string
+ typeVersion: 0.0.0
+ description: Measurement type
+ required: true
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.measurementType
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.measurementTypes:
+ constraints: []
+ properties:
+ measurementType:
+ name: measurementType
+ type: map
+ typeVersion: 0.0.0
+ description: Measurement type object
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.measurementType
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.measurementTypes
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.nfFilter:
+ constraints: []
+ properties:
+ modelNames:
+ name: modelNames
+ type: list
+ typeVersion: 0.0.0
+ description: List of model names
+ required: true
+ constraints: []
+ entry_schema:
+ type: string
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ modelInvariantIDs:
+ name: modelInvariantIDs
+ type: list
+ typeVersion: 0.0.0
+ description: List of model invariant IDs
+ required: true
+ constraints: []
+ entry_schema:
+ type: string
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ modelVersionIDs:
+ name: modelVersionIDs
+ type: list
+ typeVersion: 0.0.0
+ description: List of model version IDs
+ required: true
+ constraints: []
+ entry_schema:
+ type: string
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ nfNames:
+ name: nfNames
+ type: list
+ typeVersion: 0.0.0
+ description: List of network functions
+ required: true
+ constraints: []
+ entry_schema:
+ type: string
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.nfFilter
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ onap.datatypes.monitoring.subscription:
+ constraints: []
+ properties:
+ measurementGroups:
+ name: measurementGroups
+ type: list
+ typeVersion: 0.0.0
+ description: Measurement Groups
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.measurementGroups
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ fileBasedGP:
+ name: fileBasedGP
+ type: integer
+ typeVersion: 0.0.0
+ description: File based granularity period
+ required: true
+ constraints: []
+ metadata: {}
+ fileLocation:
+ name: fileLocation
+ type: string
+ typeVersion: 0.0.0
+ description: ROP file location
+ required: true
+ constraints: []
+ metadata: {}
+ subscriptionName:
+ name: subscriptionName
+ type: string
+ typeVersion: 0.0.0
+ description: Name of the subscription
+ required: true
+ constraints: []
+ metadata: {}
+ administrativeState:
+ name: administrativeState
+ type: string
+ typeVersion: 0.0.0
+ description: State of the subscription
+ required: true
+ constraints:
+ - valid_values:
+ - LOCKED
+ - UNLOCKED
+ metadata: {}
+ nfFilter:
+ name: nfFilter
+ type: map
+ typeVersion: 0.0.0
+ description: Network function filter
+ required: true
+ constraints: []
+ entry_schema:
+ type: onap.datatypes.monitoring.nfFilter
+ typeVersion: 0.0.0
+ constraints: []
+ metadata: {}
+ name: onap.datatypes.monitoring.subscription
+ version: 0.0.0
+ derived_from: tosca.datatypes.Root
+ metadata: {}
+ org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.RestRequest:
+ version: 1.0.0
+ derived_from: tosca.datatypes.Root
+ properties:
+ restRequestId:
+ type: onap.datatypes.ToscaConceptIdentifier
+ typeVersion: 1.0.0
+ required: true
+ description: The name and version of a REST request to be sent to a REST endpoint
+ httpMethod:
+ type: string
+ required: true
+ constraints:
+ - valid_values: [POST, PUT, GET, DELETE]
+ description: The REST method to use
+ path:
+ type: string
+ required: true
+ description: The path of the REST request relative to the base URL
+ body:
+ type: string
+ required: false
+ description: The body of the REST request for PUT and POST requests
+ expectedResponse:
+ type: integer
+ required: true
+ constraints:
+ - in_range: [100, 599]
+ description: The expected HTTP status code for the REST request
+ org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.ConfigurationEntity:
+ version: 1.0.0
+ derived_from: tosca.datatypes.Root
+ properties:
+ configurationEntityId:
+ type: onap.datatypes.ToscaConceptIdentifier
+ typeVersion: 1.0.0
+ required: true
+ description: The name and version of a Configuration Entity to be handled by the HTTP Control Loop Element
+ restSequence:
+ type: list
+ entry_schema:
+ type: org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.RestRequest
+ typeVersion: 1.0.0
+ description: A sequence of REST commands to send to the REST endpoint
+policy_types:
+ onap.policies.Monitoring:
+ derived_from: tosca.policies.Root
+ description: a base policy type for all policies that govern monitoring provisioning
+ version: 1.0.0
+ name: onap.policies.Monitoring
+ onap.policies.Sirisha:
+ derived_from: tosca.policies.Root
+ description: a base policy type for all policies that govern monitoring provisioning
+ version: 1.0.0
+ name: onap.policies.Sirisha
+ onap.policies.monitoring.dcae-pm-subscription-handler:
+ properties:
+ pmsh_policy:
+ name: pmsh_policy
+ type: onap.datatypes.monitoring.subscription
+ typeVersion: 0.0.0
+ description: PMSH Policy JSON
+ required: false
+ constraints: []
+ metadata: {}
+ name: onap.policies.monitoring.dcae-pm-subscription-handler
+ version: 1.0.0
+ derived_from: onap.policies.Monitoring
+ metadata: {}
+ onap.policies.controlloop.operational.Common:
+ derived_from: tosca.policies.Root
+ version: 1.0.0
+ name: onap.policies.controlloop.operational.Common
+ description: |
+ Operational Policy for Control Loop execution. Originated in Frankfurt to support TOSCA Compliant
+ Policy Types. This does NOT support the legacy Policy YAML policy type.
+ properties:
+ id:
+ type: string
+ description: The unique control loop id.
+ required: true
+ timeout:
+ type: integer
+ description: |
+ Overall timeout for executing all the operations. This timeout should equal or exceed the total
+ timeout for each operation listed.
+ required: true
+ abatement:
+ type: boolean
+ description: Whether an abatement event message will be expected for the control loop from DCAE.
+ required: true
+ default: false
+ trigger:
+ type: string
+ description: Initial operation to execute upon receiving an Onset event message for the Control Loop.
+ required: true
+ operations:
+ type: list
+ description: List of operations to be performed when Control Loop is triggered.
+ required: true
+ entry_schema:
+ type: onap.datatype.controlloop.Operation
+ onap.policies.controlloop.operational.common.Apex:
+ derived_from: onap.policies.controlloop.operational.Common
+ type_version: 1.0.0
+ version: 1.0.0
+ name: onap.policies.controlloop.operational.common.Apex
+ description: Operational policies for Apex PDP
+ properties:
+ engineServiceParameters:
+ type: string
+ description: The engine parameters like name, instanceCount, policy implementation, parameters etc.
+ required: true
+ eventInputParameters:
+ type: string
+ description: The event input parameters.
+ required: true
+ eventOutputParameters:
+ type: string
+ description: The event output parameters.
+ required: true
+ javaProperties:
+ type: string
+ description: Name/value pairs of properties to be set for APEX if needed.
+ required: false
+node_types:
+ org.onap.policy.clamp.controlloop.Participant:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+ required: false
+ org.onap.policy.clamp.controlloop.ControlLoopElement:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+ required: false
+ metadata:
+ common: true
+ description: Specifies the organization that provides the control loop element
+ participant_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ metadata:
+ common: true
+ participantType:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ metadata:
+ common: true
+ description: The identity of the participant type that hosts this type of Control Loop Element
+ startPhase:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ metadata:
+ common: true
+ description: A value indicating the start phase in which this control loop element will be started, the
+ first start phase is zero. Control Loop Elements are started in their start_phase order and stopped
+ in reverse start phase order. Control Loop Elements with the same start phase are started and
+ stopped simultaneously
+ uninitializedToPassiveTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+ description: The maximum time in seconds to wait for a state change from uninitialized to passive
+ passiveToRunningTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+ description: The maximum time in seconds to wait for a state change from passive to running
+ runningToPassiveTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+ description: The maximum time in seconds to wait for a state change from running to passive
+ passiveToUninitializedTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+ description: The maximum time in seconds to wait for a state change from passive to uninitialized
+ org.onap.policy.clamp.controlloop.ControlLoop:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+ required: false
+ metadata:
+ common: true
+ description: Specifies the organization that provides the control loop
+ elements:
+ type: list
+ required: true
+ metadata:
+ common: true
+ entry_schema:
+ type: onap.datatypes.ToscaConceptIdentifier
+ description: Specifies a list of control loop element definitions that make up this control loop definition
+ org.onap.policy.clamp.controlloop.PolicyControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.ControlLoopElement
+ properties:
+ policy_type_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ policy_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: false
+ org.onap.policy.clamp.controlloop.DerivedPolicyControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.PolicyControlLoopElement
+ properties:
+ policy_type_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ policy_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: false
+ org.onap.policy.clamp.controlloop.DerivedDerivedPolicyControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.DerivedPolicyControlLoopElement
+ properties:
+ policy_type_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ policy_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: false
+ org.onap.policy.clamp.controlloop.CDSControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.ControlLoopElement
+ properties:
+ cds_blueprint_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.ControlLoopElement
+ properties:
+ chart:
+ type: string
+ required: true
+ configs:
+ type: list
+ required: false
+ requirements:
+ type: string
+ required: false
+ templates:
+ type: list
+ required: false
+ entry_schema:
+ values:
+ type: string
+ required: true
+ org.onap.policy.clamp.controlloop.HttpControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.ControlLoopElement
+ properties:
+ baseUrl:
+ type: string
+ required: true
+ description: The base URL to be prepended to each path, identifies the host for the REST endpoints.
+ httpHeaders:
+ type: map
+ required: false
+ entry_schema:
+ type: string
+ description: HTTP headers to send on REST requests
+ configurationEntities:
+ type: map
+ required: true
+ entry_schema:
+ type: org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.ConfigurationEntity
+ typeVersion: 1.0.0
+ description: The configuration entities the Control Loop Element is managing and their associated REST requests
+
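+# Editorial note: the topology_template below instantiates the node types defined above.
+# The two inputs declared under "inputs" are referenced further down through get_input
+# (see the policy_id properties of the PMSH policy elements), so the policy identifiers
+# can be overridden at instantiation time without editing the element definitions.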
+topology_template:
+ inputs:
+ pmsh_monitoring_policy:
+ type: onap.datatypes.ToscaConceptIdentifier
+ description: The ID of the PMSH monitoring policy to use
+ default:
+ name: MICROSERVICE_vLoadBalancerMS_v1_0_dcae-pm-subscription-handler_1_0_0test
+ version: 1.0.0
+ pmsh_operational_policy:
+ type: onap.datatypes.ToscaConceptIdentifier
+ description: The ID of the PMSH operational policy to use
+ default:
+ name: operational.apex.pmcontrol
+ version: 1.0.0
+ node_templates:
+ org.onap.policy.controlloop.PolicyControlLoopParticipant:
+ version: 2.3.1
+ type: org.onap.policy.clamp.controlloop.Participant
+ type_version: 1.0.1
+ description: Participant for DCAE microservices
+ properties:
+ provider: ONAP
+ org.onap.domain.pmsh.PMSH_MonitoringPolicyControlLoopElement:
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.PolicyControlLoopElement
+ type_version: 1.0.0
+ description: Control loop element for the monitoring policy for Performance Management Subscription Handling
+ properties:
+ provider: Ericsson
+ participant_id:
+ name: org.onap.PM_Policy
+ version: 1.0.0
+ participantType:
+ name: org.onap.policy.controlloop.PolicyControlLoopParticipant
+ version: 2.3.1
+ policy_type_id:
+ name: onap.policies.monitoring.pm-subscription-handler
+ version: 1.0.0
+ policy_id:
+ get_input: pmsh_monitoring_policy
+ org.onap.domain.pmsh.PMSH_OperationalPolicyControlLoopElement:
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.PolicyControlLoopElement
+ type_version: 1.0.0
+ description: Control loop element for the operational policy for Performance Management Subscription Handling
+ properties:
+ provider: Ericsson
+ participant_id:
+ name: org.onap.PM_Policy
+ version: 1.0.0
+ participantType:
+ name: org.onap.policy.controlloop.PolicyControlLoopParticipant
+ version: 2.3.1
+ policy_type_id:
+ name: onap.policies.operational.pm-subscription-handler
+ version: 1.0.0
+ policy_id:
+ get_input: pmsh_operational_policy
+ org.onap.k8s.controlloop.K8SControlLoopParticipant:
+ version: 2.3.4
+ type: org.onap.policy.clamp.controlloop.Participant
+ type_version: 1.0.1
+ description: Participant for K8S
+ properties:
+ provider: ONAP
+ org.onap.domain.database.PMSH_K8SMicroserviceControlLoopElement:
+ # Chart from new repository
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement
+ type_version: 1.0.0
+ description: Control loop element for the K8S microservice for PMSH
+ properties:
+ provider: ONAP
+ participant_id:
+ name: K8sParticipant0
+ version: 1.0.0
+ participantType:
+ name: org.onap.k8s.controlloop.K8SControlLoopParticipant
+ version: 2.3.4
+ chart:
+ chartId:
+ name: dcae-pmsh
+ version: 8.0.0
+ namespace: onap
+ releaseName: pmshms
+ repository:
+ repoName: chartmuseum
+ protocol: http
+ address: chart-museum
+ port: 80
+ userName: onapinitializer
+ password: demo123456!
+ overrideParams:
+ global.masterPassword: test
+
+ org.onap.domain.database.Local_K8SMicroserviceControlLoopElement:
+ # Chart installation without passing repository info
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement
+ type_version: 1.0.0
+ description: Control loop element for the K8S microservice for local chart
+ properties:
+ provider: ONAP
+ participant_id:
+ name: K8sParticipant0
+ version: 1.0.0
+ participantType:
+ name: org.onap.k8s.controlloop.K8SControlLoopParticipant
+ version: 2.3.4
+ chart:
+ chartId:
+ name: nginx-ingress
+ version: 0.9.1
+ releaseName: nginxms
+ namespace: test
+ org.onap.controlloop.HttpControlLoopParticipant:
+ version: 2.3.4
+ type: org.onap.policy.clamp.controlloop.Participant
+ type_version: 1.0.1
+ description: Participant for Http requests
+ properties:
+ provider: ONAP
+ org.onap.domain.database.Http_PMSHMicroserviceControlLoopElement:
+ # Consul http config for PMSH.
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.HttpControlLoopElement
+ type_version: 1.0.1
+ description: Control loop element for the http requests of PMSH microservice
+ properties:
+ provider: ONAP
+ participant_id:
+ name: HttpParticipant0
+ version: 1.0.0
+ participantType:
+ name: org.onap.k8s.controlloop.HttpControlLoopParticipant
+ version: 2.3.4
+ uninitializedToPassiveTimeout: 180
+ startPhase: 1
+ baseUrl: http://consul-server-ui:8500
+ httpHeaders:
+ Content-Type: application/json
+ configurationEntities:
+ - configurationEntityId:
+ name: entity1
+ version: 1.0.1
+ restSequence:
+ - restRequestId:
+ name: request1
+ version: 1.0.1
+ httpMethod: PUT
+ path: v1/kv/dcae-pmsh2
+ body: '{
+ "control_loop_name":"pmsh-control-loop",
+ "operational_policy_name":"pmsh-operational-policy",
+ "aaf_password":"demo123456!",
+ "aaf_identity":"dcae@dcae.onap.org",
+ "cert_path":"/opt/app/pmsh/etc/certs/cert.pem",
+ "key_path":"/opt/app/pmsh/etc/certs/key.pem",
+ "ca_cert_path":"/opt/app/pmsh/etc/certs/cacert.pem",
+ "enable_tls":"true",
+ "pmsh_policy":{
+ "subscription":{
+ "subscriptionName":"ExtraPM-All-gNB-R2B",
+ "administrativeState":"UNLOCKED",
+ "fileBasedGP":15,
+ "fileLocation":"\/pm\/pm.xml",
+ "nfFilter":{
+ "nfNames":[
+ "^pnf.*",
+ "^vnf.*"
+ ],
+ "modelInvariantIDs":[
+ ],
+ "modelVersionIDs":[
+ ],
+ "modelNames":[
+ ]
+ },
+ "measurementGroups":[
+ {
+ "measurementGroup":{
+ "measurementTypes":[
+ {
+ "measurementType":"countera"
+ },
+ {
+ "measurementType":"counterb"
+ }
+ ],
+ "managedObjectDNsBasic":[
+ {
+ "DN":"dna"
+ },
+ {
+ "DN":"dnb"
+ }
+ ]
+ }
+ },
+ {
+ "measurementGroup":{
+ "measurementTypes":[
+ {
+ "measurementType":"counterc"
+ },
+ {
+ "measurementType":"counterd"
+ }
+ ],
+ "managedObjectDNsBasic":[
+ {
+ "DN":"dnc"
+ },
+ {
+ "DN":"dnd"
+ }
+ ]
+ }
+ }
+ ]
+ }
+ },
+ "streams_subscribes":{
+ "aai_subscriber":{
+ "type":"message_router",
+ "dmaap_info":{
+ "topic_url":"https://10.152.183.151:3905/events/AAI_EVENT",
+ "client_role":"org.onap.dcae.aaiSub",
+ "location":"san-francisco",
+ "client_id":"1575976809466"
+ }
+ },
+ "policy_pm_subscriber":{
+ "type":"message_router",
+ "dmaap_info":{
+ "topic_url":"https://10.152.183.151:3905/events/org.onap.dmaap.mr.PM_SUBSCRIPTIONS",
+ "client_role":"org.onap.dcae.pmSubscriber",
+ "location":"san-francisco",
+ "client_id":"1575876809456"
+ }
+ }
+ },
+ "streams_publishes":{
+ "policy_pm_publisher":{
+ "type":"message_router",
+ "dmaap_info":{
+ "topic_url":"https://10.152.183.151:3905/events/org.onap.dmaap.mr.PM_SUBSCRIPTIONS",
+ "client_role":"org.onap.dcae.pmPublisher",
+ "location":"san-francisco",
+ "client_id":"1475976809466"
+ }
+ },
+ "other_publisher":{
+ "type":"message_router",
+ "dmaap_info":{
+ "topic_url":"https://10.152.183.151:3905/events/org.onap.dmaap.mr.SOME_OTHER_TOPIC",
+ "client_role":"org.onap.dcae.pmControlPub",
+ "location":"san-francisco",
+ "client_id":"1875976809466"
+ }
+ }
+ }
+ }'
+ expectedResponse: 200
+ org.onap.domain.sample.GenericK8s_ControlLoopDefinition:
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.ControlLoop
+ type_version: 1.0.0
+ description: Control loop for Hello World
+ properties:
+ provider: ONAP
+ elements:
+ - name: org.onap.domain.database.PMSH_K8SMicroserviceControlLoopElement
+ version: 1.2.3
+ - name: org.onap.domain.database.Local_K8SMicroserviceControlLoopElement
+ version: 1.2.3
+ - name: org.onap.domain.database.Http_PMSHMicroserviceControlLoopElement
+ version: 1.2.3
+ - name: org.onap.domain.pmsh.PMSH_MonitoringPolicyControlLoopElement
+ version: 1.2.3
+ - name: org.onap.domain.pmsh.PMSH_OperationalPolicyControlLoopElement
+ version: 1.2.3
+
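+ # Editorial note: the policy listed below is an instance of the monitoring policy type
+ # referenced by PMSH_MonitoringPolicyControlLoopElement; its name matches the default
+ # of the pmsh_monitoring_policy input, so it supplies the policy body that the policy
+ # participant is expected to create and deploy for that element.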
+ policies:
+ - MICROSERVICE_vLoadBalancerMS_v1_0_dcae-pm-subscription-handler_1_0_0test:
+ type: onap.policies.monitoring.dcae-pm-subscription-handler
+ type_version: 1.0.0
+ name: MICROSERVICE_vLoadBalancerMS_v1_0_dcae-pm-subscription-handler_1_0_0test
+ version: 1.0.0
+ metadata:
+ policy-id: MICROSERVICE_vLoadBalancerMS_v1_0_dcae-pm-subscription-handler_1_0_0test
+ policy-version: 1.0.0
+ properties:
+ pmsh_policy:
+ fileBasedGP: 15
+ fileLocation: /pm/pm.xml
+ subscriptionName: subscriptiona
+ administrativeState: UNLOCKED
+ nfFilter:
+ onap.datatypes.monitoring.nfFilter:
+ modelVersionIDs:
+ - e80a6ae3-cafd-4d24-850d-e14c084a5ca9
+ modelInvariantIDs:
+ - 5845y423-g654-6fju-po78-8n53154532k6
+ - 7129e420-d396-4efb-af02-6b83499b12f8
+ modelNames: []
+ nfNames:
+ - '"^pnf1.*"'
+ measurementGroups:
+ - measurementGroup:
+ onap.datatypes.monitoring.measurementGroup:
+ measurementTypes:
+ - measurementType:
+ onap.datatypes.monitoring.measurementType:
+ measurementType: countera
+ - measurementType:
+ onap.datatypes.monitoring.measurementType:
+ measurementType: counterb
+ managedObjectDNsBasic:
+ - managedObjectDNsBasic:
+ onap.datatypes.monitoring.managedObjectDNsBasic:
+ DN: dna
+ - managedObjectDNsBasic:
+ onap.datatypes.monitoring.managedObjectDNsBasic:
+ DN: dnb
+ - measurementGroup:
+ onap.datatypes.monitoring.measurementGroup:
+ measurementTypes:
+ - measurementType:
+ onap.datatypes.monitoring.measurementType:
+ measurementType: counterc
+ - measurementType:
+ onap.datatypes.monitoring.measurementType:
+ measurementType: counterd
+ managedObjectDNsBasic:
+ - managedObjectDNsBasic:
+ onap.datatypes.monitoring.managedObjectDNsBasic:
+ DN: dnc
+ - managedObjectDNsBasic:
+ onap.datatypes.monitoring.managedObjectDNsBasic:
+ DN: dnd
\ No newline at end of file