Diffstat (limited to 'docs')
-rw-r--r--  docs/apex/APEX-OnapPf-Guide.rst  39
-rw-r--r--  docs/apex/APEX-Policy-Guide.rst  4
-rw-r--r--  docs/architecture/architecture.rst  61
-rw-r--r--  docs/clamp/acm/acm-participant-guide.rst  33
-rw-r--r--  docs/clamp/acm/design-impl/clamp-runtime-acm.rst  32
-rw-r--r--  docs/development/devtools/devtools.rst  12
-rw-r--r--  docs/development/devtools/installation/local-installation.rst  25
-rw-r--r--  docs/development/devtools/smoke/acm-participants-smoke.rst  12
-rw-r--r--  docs/development/devtools/smoke/api-smoke.rst  3
-rw-r--r--  docs/development/devtools/smoke/clamp-ac-participant-protocol-smoke.rst  14
-rw-r--r--  docs/development/devtools/smoke/clamp-smoke.rst  4
-rw-r--r--  docs/development/devtools/smoke/db-migrator-smoke.rst  422
-rw-r--r--  docs/development/devtools/smoke/files/participant-http-application.yaml  16
-rw-r--r--  docs/development/devtools/smoke/files/participant-kubernetes-application.yaml  18
-rw-r--r--  docs/development/devtools/smoke/files/participant-policy-application.yaml  18
-rw-r--r--  docs/development/devtools/smoke/files/participant-sim-application.yaml  16
-rw-r--r--  docs/development/devtools/smoke/files/runtime-application.yaml  17
-rw-r--r--  docs/development/devtools/smoke/json/acm-instantiation.json  2
-rw-r--r--  docs/development/devtools/smoke/pap-smoke.rst  8
-rw-r--r--  docs/development/devtools/smoke/xacml-smoke.rst  17
-rw-r--r--  docs/development/devtools/testing/csit.rst  64
-rw-r--r--  docs/development/devtools/testing/s3p/apex-s3p-results/apex_metrics_after_72h.txt  316
-rw-r--r--  docs/development/devtools/testing/s3p/apex-s3p-results/apex_metrics_before_72h.txt  175
-rw-r--r--  docs/development/devtools/testing/s3p/apex-s3p-results/apex_performance_results.png  bin 83623 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/apex-s3p-results/apex_stability_results.png  bin 106062 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/apex-s3p-results/apex_top_after_72h.png  bin 76131 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/apex-s3p-results/apex_top_before_72h.png  bin 74785 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/apex-s3p.rst  207
-rw-r--r--  docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-distribution_J.png  bin 84927 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-distribution_performance_J.png  bin 113689 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-overtime_J.png  bin 147285 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-overtime_performance_J.png  bin 301598 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/api-s3p-results/api-s3p-jm-1_J.png  bin 207292 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/api-s3p-results/api-s3p-jm-2_J.png  bin 230292 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/api-s3p-results/api_stat_after_72h.png  bin 5343 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/api-s3p-results/api_stat_before_72h.png  bin 5310 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/api-s3p.rst  210
-rw-r--r--  docs/development/devtools/testing/s3p/clamp-s3p-results/Stability_after_stats.png  bin 52736 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/clamp-s3p-results/acm_performance_jmeter.png  bin 77257 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/clamp-s3p-results/acm_stability_jmeter.png  bin 68189 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/clamp-s3p-results/acm_stability_table.png  bin 101776 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/clamp-s3p.rst  224
-rw-r--r--  docs/development/devtools/testing/s3p/distribution-s3p-results/distribution-jmeter-testcases.png  bin 107191 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/distribution-s3p-results/distribution-visualvm-snapshot.png  bin 105888 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/distribution-s3p-results/performance-monitor.png  bin 173091 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/distribution-s3p-results/performance-statistics.png  bin 56736 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/distribution-s3p-results/performance-threads.png  bin 120014 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/distribution-s3p-results/performance-threshold.png  bin 184094 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/distribution-s3p-results/stability-monitor.png  bin 93084 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/distribution-s3p-results/stability-statistics.png  bin 56211 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/distribution-s3p-results/stability-threads.png  bin 124682 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/distribution-s3p-results/stability-threshold.png  bin 202685 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/distribution-s3p.rst  238
-rw-r--r--  docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-1.png  bin 302657 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-2.png  bin 216610 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-3.png  bin 141505 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-4.png  bin 200544 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/drools-s3p.rst  74
-rw-r--r--  docs/development/devtools/testing/s3p/images/workflow-results.png  bin 0 -> 639728 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/images/workflow-test-result.png  bin 0 -> 362596 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/images/workflows.png  bin 0 -> 174516 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/pap-s3p-results/pap_metrics_after_72h.txt  521
-rw-r--r--  docs/development/devtools/testing/s3p/pap-s3p-results/pap_metrics_before_72h.txt  228
-rw-r--r--  docs/development/devtools/testing/s3p/pap-s3p-results/pap_performance_jmeter_results.png  bin 83914 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/pap-s3p-results/pap_stability_jmeter_results.png  bin 272805 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/pap-s3p-results/pap_stats_after_72h.png  bin 14546 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/pap-s3p-results/pap_stats_before_72h.png  bin 5186 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/pap-s3p-results/pap_stats_during_72h.png  bin 5182 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/pap-s3p.rst  198
-rw-r--r--  docs/development/devtools/testing/s3p/run-s3p.rst  45
-rw-r--r--  docs/development/devtools/testing/s3p/s3p-test-overview.rst  118
-rw-r--r--  docs/development/devtools/testing/s3p/xacml-s3p-results/s3p-perf-xacml.png  bin 106786 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/xacml-s3p-results/s3p-stability-xacml.png  bin 105051 -> 0 bytes
-rw-r--r--  docs/development/devtools/testing/s3p/xacml-s3p.rst  198
-rw-r--r--  docs/development/pdp/pdp-pap-interaction.rst  6
-rw-r--r--  docs/development/prometheus-metrics.rst  4
-rwxr-xr-x  docs/drools/ctrlog_config.png  bin 7721 -> 0 bytes
-rwxr-xr-x  docs/drools/ctrlog_enablefeature.png  bin 61614 -> 0 bytes
-rwxr-xr-x  docs/drools/ctrlog_logback.png  bin 11467 -> 0 bytes
-rwxr-xr-x  docs/drools/ctrlog_view.png  bin 12464 -> 0 bytes
-rw-r--r--  docs/drools/drools.rst  24
-rw-r--r--  docs/drools/feature_activestdbymgmt.rst  109
-rw-r--r--  docs/drools/feature_controllerlogging.rst  48
-rw-r--r--  docs/drools/feature_eelf.rst  47
-rw-r--r--  docs/drools/feature_mdcfilters.rst  117
-rw-r--r--  docs/drools/feature_nolocking.rst  11
-rw-r--r--  docs/drools/feature_pooling.rst  44
-rw-r--r--  docs/drools/feature_sesspersist.rst  49
-rw-r--r--  docs/drools/feature_statemgmt.rst  310
-rw-r--r--  docs/drools/feature_testtransaction.rst  26
-rw-r--r--  docs/drools/mdc_enablefeature.png  bin 27429 -> 0 bytes
-rwxr-xr-x  docs/drools/mdc_properties.png  bin 24399 -> 0 bytes
-rw-r--r--  docs/drools/pdpdApps.rst  179
-rw-r--r--  docs/drools/pdpdEngine.rst  766
-rw-r--r--  docs/drools/poolingDesign.png  bin 61828 -> 0 bytes
-rw-r--r--  docs/installation/oom.rst  15
-rw-r--r--  docs/pap/pap.rst  10
97 files changed, 843 insertions, 4511 deletions
diff --git a/docs/apex/APEX-OnapPf-Guide.rst b/docs/apex/APEX-OnapPf-Guide.rst
index f7f1f3a7..289d1ea6 100644
--- a/docs/apex/APEX-OnapPf-Guide.rst
+++ b/docs/apex/APEX-OnapPf-Guide.rst
@@ -154,7 +154,7 @@ Verify Installation - run APEXOnapPf
.. container:: paragraph
- OnapPfConfig.json is the file which contains the initial configuration to startup the ApexStarter service. The dmaap topics to be used for sending or receiving messages is also specified in the this file. Provide this file as argument while running the ApexOnapPf.
+ OnapPfConfig.json is the file that contains the initial configuration to start up the ApexStarter service. The Kafka topics to be used for sending or receiving messages are also specified in this file. Provide this file as an argument when running the ApexOnapPf.
.. container:: listingblock
@@ -204,7 +204,7 @@ Verify Installation - run APEXOnapPf
.. container:: paragraph
- The ApexOnapPf service is now running, sending heartbeat messages to dmaap (which will be received by PAP) and listening for messages from PAP on the dmaap topic specified. Based on instructions from PAP, the ApexOnapPf will deploy or undeploy policies on the ApexEngine.
+ The ApexOnapPf service is now running, sending heartbeat messages to Kafka (which will be received by PAP) and listening for messages from PAP on the Kafka topic specified. Based on instructions from PAP, the ApexOnapPf will deploy or undeploy policies on the ApexEngine.
.. container:: paragraph
@@ -343,16 +343,27 @@ Format of the configuration file (OnapPfConfig.json) explained
"supportedPolicyTypes":[{"name":"onap.policies.controlloop.operational.Apex","version":"1.0.0"}] (6)
},
"topicParameterGroup": {
- "topicSources" : [{ (7)
- "topic" : "POLICY-PDP-PAP", (8)
- "servers" : [ "message-router" ], (9)
- "topicCommInfrastructure" : "dmaap" (10)
- }],
- "topicSinks" : [{ (11)
- "topic" : "POLICY-PDP-PAP", (12)
- "servers" : [ "message-router" ], (13)
- "topicCommInfrastructure" : "dmaap" (14)
- }]
+ "topicSources": [
+ {
+ "topic": "policy-pdp-pap",
+ "servers": [
+ "kafka:9092"
+ ],
+ "useHttps": false,
+ "topicCommInfrastructure": "kafka",
+ "fetchTimeout": 15000
+ }
+ ],
+ "topicSinks": [
+ {
+ "topic": "policy-pdp-pap",
+ "servers": [
+ "kafka:9092"
+ ],
+ "useHttps": false,
+ "topicCommInfrastructure": "kafka"
+ }
+ ]
}
}
@@ -392,7 +403,7 @@ Format of the configuration file (OnapPfConfig.json) explained
| | topic. |
+-----------------------------------+-------------------------------------------------+
| **10** | The source topic infrastructure. |
- | | For e.g. dmaap, noop, ueb |
+ | | For e.g. kafka, noop |
+-----------------------------------+-------------------------------------------------+
| **11** | List of topics' details to which |
| | messages are sent. |
@@ -404,5 +415,5 @@ Format of the configuration file (OnapPfConfig.json) explained
| | topic. |
+-----------------------------------+-------------------------------------------------+
| **14** | The sink topic infrastructure. |
- | | For e.g. dmaap, noop, ueb |
+ | | For e.g. kafka, noop |
+-----------------------------------+-------------------------------------------------+
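The topic configuration above can be sanity-checked programmatically. The following Python sketch is illustrative only: the ``validate_topic_group`` helper is hypothetical (not part of the ApexOnapPf codebase), and the field names are taken from the sample OnapPfConfig.json above.

```python
import json

# Hypothetical helper: checks topicSources/topicSinks entries of an
# OnapPfConfig.json-style "topicParameterGroup" section. Field names follow
# the documented example; the accepted infrastructures are kafka and noop.
VALID_INFRASTRUCTURES = {"kafka", "noop"}

def validate_topic_group(topic_group: dict) -> list:
    """Return a list of problems found in topicSources/topicSinks entries."""
    problems = []
    for key in ("topicSources", "topicSinks"):
        for entry in topic_group.get(key, []):
            infra = entry.get("topicCommInfrastructure")
            if infra not in VALID_INFRASTRUCTURES:
                problems.append("%s: unsupported infrastructure %r" % (key, infra))
            if not entry.get("servers"):
                problems.append("%s: no servers configured for %r" % (key, entry.get("topic")))
    return problems

# The sample topicParameterGroup from the configuration shown above.
config = json.loads("""
{
  "topicSources": [
    {"topic": "policy-pdp-pap", "servers": ["kafka:9092"],
     "useHttps": false, "topicCommInfrastructure": "kafka", "fetchTimeout": 15000}
  ],
  "topicSinks": [
    {"topic": "policy-pdp-pap", "servers": ["kafka:9092"],
     "useHttps": false, "topicCommInfrastructure": "kafka"}
  ]
}
""")

print(validate_topic_group(config))  # prints [] for the valid sample
```

A misconfigured entry (for example, a leftover ``"topicCommInfrastructure": "dmaap"`` or an empty ``servers`` list) would show up in the returned problem list.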
diff --git a/docs/apex/APEX-Policy-Guide.rst b/docs/apex/APEX-Policy-Guide.rst
index 60468917..2c3c684e 100644
--- a/docs/apex/APEX-Policy-Guide.rst
+++ b/docs/apex/APEX-Policy-Guide.rst
@@ -1870,7 +1870,7 @@ Writing Multiple Output Events from a Final State
Consider a simple example where a policy *CDSActionPolicy* has a state *MakeCDSRequestState* which is also a final
state. The state is triggered by an event *AAIEvent*. A task called *HandleCDSActionTask* is associated with
*MakeCDSRequestState*.There are two output events expected from *MakeCDSRequestState* which are *CDSRequestEvent*
- (request event sent to CDS) and *LogEvent* (log event sent to DMaaP).
+ (request event sent to CDS) and *LogEvent* (log event sent to Kafka).
Writing an APEX policy with this example will involve the below changes.
*Command File:*
@@ -1891,7 +1891,7 @@ Writing Multiple Output Events from a Final State
event create name=CDSRequestEvent version=0.0.1 nameSpace=org.onap.policy.apex.test source=APEX target=CDS
event parameter create name=CDSRequestEvent parName=actionIdentifiers schemaName=CDSActionIdentifiersType
..
- event create name=LogEvent version=0.0.1 nameSpace=org.onap.policy.apex.test source=APEX target=DMaaP
+ event create name=LogEvent version=0.0.1 nameSpace=org.onap.policy.apex.test source=APEX target=Kafka
event parameter create name=LogEvent parName=status schemaName=SimpleStringType
..
diff --git a/docs/architecture/architecture.rst b/docs/architecture/architecture.rst
index f582918c..18f2d233 100644
--- a/docs/architecture/architecture.rst
+++ b/docs/architecture/architecture.rst
@@ -187,7 +187,7 @@ such as *CLAMP* can use the *PolicyAPI* API to create, update, delete, and read
- Management of the deployment of policies to PDPs in an ONAP installation. *PolicyAdministration* gives each PDP group
a set of domain policies to execute.
-*PolicyAdministration* handles PDPs and policy allocation to PDPs using asynchronous messaging over DMaaP. It provides
+*PolicyAdministration* handles PDPs and policy allocation to PDPs using asynchronous messaging over Kafka. It provides
three APIs:
- a CRUD API for policy groups and subgroups
@@ -722,46 +722,18 @@ the clusters.
.. image:: images/MCSharedDB.svg
-2.3.8.2 DMaaP Arrangement
-"""""""""""""""""""""""""
+2.3.8.2 Kafka Communication
+"""""""""""""""""""""""""""
-As in prior releases, communication between the PAPs and PDPs still takes place via
-DMaaP. Two arrangements, described below, are supported.
-
-2.3.8.2.1 Local DMaaP
-~~~~~~~~~~~~~~~~~~~~~
-
-In this arrangement, each cluster is associated with its own, local
-DMaaP, and communication only happens between PAPs and PDPs within the same cluster.
-
-.. image:: images/MCLocalDmaap.svg
-
-The one
-limitation with this approach is that, when a PAP in cluster A deploys a policy, PAP
-is only able to inform the PDPs in the local cluster; the PDPs in the other clusters
-are not made aware of the new deployment until they generate a heartbeat, at which
-point, their local PAP will inform them of the new deployment. The same is true of
-changes made to the state of a PDP Group; changes only propagate to PDPs in other
-clusters in response to heartbeats generated by the PDPs.
-
-.. image:: images/MCLocalHB.svg
-
-2.3.8.2.2 Shared DMaaP
-~~~~~~~~~~~~~~~~~~~~~~
-
-In this arrangement, the PAPs and PDPs in all of the clusters are
-pointed to a common DMaaP. Because the PAP and PDPs all communicate via the same
-DMaaP, when a PAP deploys a policy, all PDPs are made aware, rather than having to
-wait for a heartbeat.
-
-.. image:: images/MCSharedDmaap.svg
+In prior releases, communication between the PAPs and PDPs took place via
+DMaaP. The communication now takes place over Kafka topics.
2.3.8.3 Missed Heartbeat
""""""""""""""""""""""""
To manage the removal of terminated PDPs from the DB, a record, containing a
"last-updated" timestamp, is maintained within the DB for each PDP. Whether using a
-local or shared DMaaP, any PAP receiving a message from a PDP will update the timestamp
+local or shared Kafka cluster, any PAP receiving a message from a PDP will update the timestamp
in the associated record, thus keeping the records “current”.
.. image:: images/MCSharedHB.svg
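The "last-updated" bookkeeping described above can be sketched as follows. The ``PdpRecords`` class and the five-minute expiry window are illustrative assumptions, not the actual PAP database schema or timeout.

```python
from datetime import datetime, timedelta, timezone

# Assumed expiry window for illustration only; the real PAP timeout differs.
EXPIRY = timedelta(minutes=5)

class PdpRecords:
    """Sketch of per-PDP "last-updated" records kept in the DB."""

    def __init__(self):
        self._last_updated = {}

    def on_message(self, pdp_name, now):
        # Any PAP receiving a message from a PDP refreshes that PDP's record.
        self._last_updated[pdp_name] = now

    def sweep(self, now):
        # Remove records whose heartbeat has not been seen within EXPIRY.
        stale = [n for n, t in self._last_updated.items() if now - t > EXPIRY]
        for name in stale:
            del self._last_updated[name]
        return stale

records = PdpRecords()
t0 = datetime(2024, 1, 1, tzinfo=timezone.utc)
records.on_message("apex-pdp-0", t0)
records.on_message("xacml-pdp-0", t0 + timedelta(minutes=4))
print(records.sweep(t0 + timedelta(minutes=6)))  # prints ['apex-pdp-0']
```

Because every PAP refreshes the record on any message, not just heartbeats, the records stay current regardless of which cluster's PAP received the message.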
@@ -797,5 +769,26 @@ Policy Set A set of policies that are deployed on a PDP g
deployed on a PDP group
================================= ==================================================================================
+5. Security
+===========
+
+5.1 Threat Modeling
+-------------------
+
+====================== ==================================================== ==========================
+Threat category Attacker’s motive Affected security property
+====================== ==================================================== ==========================
+Spoofing Impersonating another user or system Authenticity
+Tampering Illegal modification of data in transit or at rest Integrity
+Repudiation Disputing an action that has taken place Non-repudiability
+Information Disclosure of confidential information Confidentiality
+Denial of Service Making system temporarily or permanently unavailable Availability
+Elevation of Privilege Gaining higher privileges than entitled to Authority
+====================== ==================================================== ==========================
+
+To ensure that this threat model is mitigated, use only ONAP Operations Manager `OOM <https://github.com/onap/oom>`_
+for production deployment. Policy docker and helm environment available at `policy-docker <https://github.com/onap/policy-docker>`_
+are for testing purposes only.
+
End of Document
diff --git a/docs/clamp/acm/acm-participant-guide.rst b/docs/clamp/acm/acm-participant-guide.rst
index 56710a4c..05532a02 100644
--- a/docs/clamp/acm/acm-participant-guide.rst
+++ b/docs/clamp/acm/acm-participant-guide.rst
@@ -85,6 +85,39 @@ and the same is configured for the 'ParticipantIntermediaryParameters' object in
typeName: org.onap.policy.clamp.acm.PolicyAutomationCompositionElement
typeVersion: 1.0.0
+Kafka Healthcheck
+-----------------
+
+Optionally, a Kafka Healthcheck can be enabled by configuration. This feature is responsible for starting the Kafka message configuration.
+If Kafka is not up and the Kafka Healthcheck is not enabled, the Kafka message configuration will fail.
+The feature checks Kafka through an admin connection: if Kafka is up, it starts the Kafka message configuration;
+if Kafka is not up yet, it retries the check later.
+
+The Kafka Healthcheck also supports topic validation, which can be enabled by configuration (using the topicValidation parameter).
+Usually topics are created when the first message arrives, so in that scenario topicValidation should be set to false.
+In other environments, the two topics are created manually by a script with specific permissions, so Kafka may be up
+while the Kafka message configuration still fails because the two topics have not been created yet.
+In that scenario, topicValidation should be set to true; if the topics have not been created yet, the Healthcheck will retry the check later.
+
+For backward compatibility, if the Kafka Healthcheck is not configured, it is disabled and the Kafka message configuration starts as normal.
+
+The following example shows the Kafka Healthcheck configuration.
+
+.. code-block:: yaml
+
+ intermediaryParameters:
+ topicValidation: true
+ clampAdminTopics:
+ servers:
+ - ${topicServer:kafka:9092}
+ topicCommInfrastructure: kafka
+ fetchTimeout: 15000
+ topics:
+ operationTopic: policy-acruntime-participant
+ syncTopic: acm-ppnt-sync
+ ........
+
+
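The check-and-retry behaviour can be sketched as below. ``wait_for_kafka`` and the stubbed ``list_topics`` callable are hypothetical; the real participant uses a Kafka admin-client connection rather than this stub.

```python
import time

# Illustrative sketch of the Healthcheck loop: probe Kafka, optionally
# validate that the required topics exist, and retry until both hold.
def wait_for_kafka(list_topics, required_topics, topic_validation,
                   retries=5, delay=0.0):
    """Return True once Kafka (and, if validated, its topics) are available."""
    for _ in range(retries):
        try:
            topics = list_topics()     # raises if the broker is unreachable
        except ConnectionError:
            time.sleep(delay)          # Kafka not up yet: retry later
            continue
        if not topic_validation or required_topics <= topics:
            return True                # start the Kafka message configuration
        time.sleep(delay)              # topics not created yet: retry later
    return False

# Stub: the broker is up, but only one of the two topics exists so far.
available = {"policy-acruntime-participant"}
required = {"policy-acruntime-participant", "acm-ppnt-sync"}
ok = wait_for_kafka(lambda: available, required, topic_validation=True, retries=2)
print(ok)  # prints False: acm-ppnt-sync has not been created yet
```

With ``topic_validation`` disabled, the same probe succeeds as soon as the broker answers, matching the scenario where topics are auto-created on the first message.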
Interfaces to Implement
-----------------------
AutomationCompositionElementListener:
diff --git a/docs/clamp/acm/design-impl/clamp-runtime-acm.rst b/docs/clamp/acm/design-impl/clamp-runtime-acm.rst
index 46d4a85f..a3c22e69 100644
--- a/docs/clamp/acm/design-impl/clamp-runtime-acm.rst
+++ b/docs/clamp/acm/design-impl/clamp-runtime-acm.rst
@@ -430,3 +430,35 @@ YAML format is a standard for Automation Composition Type Definition. For the co
text/plain
++++++++++
Text format is used by Prometheus. For the conversion from Object to String will be used **StringHttpMessageConverter**.
+
+JSON log format
+***************
+ACM-runtime supports logging in JSON format. Below is an example of an appender for the logback configuration that enables it.
+
+.. code-block:: xml
+ :caption: Part of logback configuration
+ :linenos:
+
+ <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
+ <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
+ <layout class="org.onap.policy.clamp.acm.runtime.config.LoggingConsoleLayout">
+ <timestampFormat>YYYY-MM-DDThh:mm:ss.sss+/-hh:mm</timestampFormat>
+ <timestampFormatTimezoneId>Etc/UTC</timestampFormatTimezoneId>
+ <staticParameters>service_id=policy-acm|application_id=policy-acm</staticParameters>
+ </layout>
+ </encoder>
+ </appender>
+
+LayoutWrappingEncoder implements the encoder interface and wraps the Java class LoggingConsoleLayout as the layout to which it delegates the work of transforming an event into a JSON string.
+Parameters for LoggingConsoleLayout:
+
+- *timestampFormat*: the timestamp format
+- *timestampFormatTimezoneId*: the time zone used in the timestamp format
+- *staticParameters*: a list of parameters to add into the log, separated by "|"
+
+Below is an example of the resulting output:
+
+.. code-block:: json
+
+ {"severity":"INFO","extra_data":{"logger":"network","thread":"KAFKA-source-policy-acruntime-participant"},"service_id":"policy-acm","message":"[IN|KAFKA|policy-acruntime-participant]\n{\"state\":\"ON_LINE\",\"participantDefinitionUpdates\":[],\"automationCompositionInfoList\":[],\"participantSupportedElementType\":[{\"id\":\"f88c4463-f012-42e1-8927-12b552ecf380\",\"typeName\":\"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement\",\"typeVersion\":\"1.0.0\"}],\"messageType\":\"PARTICIPANT_STATUS\",\"messageId\":\"d3dc2f86-4253-4520-bbac-97c4c04547ad\",\"timestamp\":\"2025-01-21T16:14:27.087474035Z\",\"participantId\":\"101c62b3-8918-41b9-a747-d21eb79c6c93\",\"replicaId\":\"c1ba61d2-1dbd-44e4-80bd-135526c0615f\"}","application_id":"policy-acm","timestamp":"2025-01-21T16:14:27.114851006Z"}
+ {"severity":"INFO","extra_data":{"logger":"network","thread":"KAFKA-source-policy-acruntime-participant"},"service_id":"policy-acm","message":"[IN|KAFKA|policy-acruntime-participant]\n{\"state\":\"ON_LINE\",\"participantDefinitionUpdates\":[],\"automationCompositionInfoList\":[],\"participantSupportedElementType\":[{\"id\":\"4609a119-a8c7-41ee-96d1-6b49c3afaf2c\",\"typeName\":\"org.onap.policy.clamp.acm.HttpAutomationCompositionElement\",\"typeVersion\":\"1.0.0\"}],\"messageType\":\"PARTICIPANT_STATUS\",\"messageId\":\"ea29ab01-665d-4693-ab17-3a72491b5c71\",\"timestamp\":\"2025-01-21T16:14:27.117716317Z\",\"participantId\":\"101c62b3-8918-41b9-a747-d21eb79c6c91\",\"replicaId\":\"5e4f9690-742d-4190-a439-ebb4c820a010\"}","application_id":"policy-acm","timestamp":"2025-01-21T16:14:27.144379028Z"}
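A log consumer can recover the embedded network message from the ``message`` field. This minimal sketch assumes the payload is separated from the ``[IN|KAFKA|...]`` header by a newline, as in the sample lines above; the field names match the example output.

```python
import json

# One log line in the JSON format shown above (payload shortened for brevity).
line = ('{"severity":"INFO","extra_data":{"logger":"network",'
        '"thread":"KAFKA-source-policy-acruntime-participant"},'
        '"service_id":"policy-acm","message":"[IN|KAFKA|policy-acruntime-participant]'
        '\\n{\\"messageType\\":\\"PARTICIPANT_STATUS\\"}",'
        '"application_id":"policy-acm","timestamp":"2025-01-21T16:14:27.114851006Z"}')

entry = json.loads(line)
# The network message header and body are separated by a newline;
# the body is itself embedded JSON.
header, _, body = entry["message"].partition("\n")
payload = json.loads(body)
print(entry["severity"], header, payload["messageType"])
```

This prints ``INFO [IN|KAFKA|policy-acruntime-participant] PARTICIPANT_STATUS``, showing that the structured fields and the embedded message are both machine-readable.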
diff --git a/docs/development/devtools/devtools.rst b/docs/development/devtools/devtools.rst
index 4e63fdbc..b0b243e4 100644
--- a/docs/development/devtools/devtools.rst
+++ b/docs/development/devtools/devtools.rst
@@ -239,7 +239,7 @@ Running the API component standalone
++++++++++++++++++++++++++++++++++++
Assuming you have successfully built the codebase using the instructions above. The only requirement for the API
-component to run is a running MariaDb/Postgres database instance. The easiest way to do this is to run the docker
+component to run is a running Postgres database instance. The easiest way to do this is to run the docker
image, please see the official documentation for the latest information on doing so. Once the database is up and
running, a configuration file must be provided to the api in order for it to know how to connect to the database.
You can locate the default configuration file in the packaging of the api component:
@@ -260,7 +260,7 @@ An example of running the api using a docker compose script is located in the Po
Running the PAP component standalone
++++++++++++++++++++++++++++++++++++
-Once you have successfully built the PAP codebase, a running MariaDb/Postgres database and Kafka instance will also be
+Once you have successfully built the PAP codebase, a running Postgres database and Kafka instance will also be
required to start up the application. To start database and Kafka, check official documentation on how to run an
instance of each. After database and Kafka are up and running, a configuration file must be provided to the PAP
component in order for it to know how to connect to the database and Kafka along with other relevant configuration
@@ -303,14 +303,8 @@ to developers to become familiar with the Policy Framework components and test a
.. toctree::
:maxdepth: 2
+ testing/s3p/s3p-test-overview.rst
testing/s3p/run-s3p.rst
- testing/s3p/api-s3p.rst
- testing/s3p/pap-s3p.rst
- testing/s3p/apex-s3p.rst
- testing/s3p/drools-s3p.rst
- testing/s3p/xacml-s3p.rst
- testing/s3p/distribution-s3p.rst
- testing/s3p/clamp-s3p.rst
Running the Pairwise Tests
diff --git a/docs/development/devtools/installation/local-installation.rst b/docs/development/devtools/installation/local-installation.rst
index 861d4650..7e2e6899 100644
--- a/docs/development/devtools/installation/local-installation.rst
+++ b/docs/development/devtools/installation/local-installation.rst
@@ -54,6 +54,17 @@ Command Line
mvn spring-boot:run -Dspring-boot.run.arguments=”–server.port=8082”
+
+Models Simulators
+*****************
+
+Command Line
+------------
+
+ .. code-block:: bash
+
+ mvn -q -e clean compile exec:java -Dexec.mainClass="org.onap.policy.models.sim.pdp.PdpSimulatorMain" -Dexec.args="-c /PATH/TO/OnapPfConfig.json"
+
Apex-PDP
********
@@ -95,20 +106,6 @@ Command Line
mvn spring-boot:run -Dspring-boot.run.arguments=”–server.port=8082”
-Models Simulators
-*****************
-
-Command Line
-------------
-
- .. code-block:: bash
-
- mvn -q -e clean compile exec:java -Dexec.mainClass="org.onap.policy.models.sim.pdp.PdpSimulatorMain" -Dexec.args="-c /PATH/TO/OnapPfConfig.json"
-
- .. code-block:: bash
-
- mvn -q -e clean compile exec:java -Dexec.mainClass="org.onap.policy.models.sim.dmaap.startstop.Main" -Dexec.args="-c /PATH/TO/DefaultConfig.json"
-
XACML-PDP
*********
diff --git a/docs/development/devtools/smoke/acm-participants-smoke.rst b/docs/development/devtools/smoke/acm-participants-smoke.rst
index ad377768..869205a4 100644
--- a/docs/development/devtools/smoke/acm-participants-smoke.rst
+++ b/docs/development/devtools/smoke/acm-participants-smoke.rst
@@ -21,6 +21,8 @@ This article assumes that:
* Your local maven repository is in the location *~/.m2/repository*
* You have copied the settings.xml from oparent to *~/.m2/* directory
* You have added settings to access the ONAP Nexus to your M2 configuration, see `Maven Settings Example <https://wiki.onap.org/display/DW/Setting+Up+Your+Development+Environment>`_ (bottom of the linked page)
+* Your local helm is in the location /usr/local/bin/helm
+* Your local kubectl is in the location /usr/local/bin/kubectl
The procedure documented in this article has been verified using Ubuntu 20.04 LTS VM.
@@ -92,6 +94,9 @@ And into the file 'participant/participant-impl/participant-impl-kubernetes/src/
.. literalinclude:: files/participant-kubernetes-application.yaml
:language: yaml
+If the helm location is not '/usr/local/bin/helm' or the kubectl location is not '/usr/local/bin/kubectl', you have to update
+the file 'participant/participant-impl/participant-impl-kubernetes/src/main/java/org/onap/policy/clamp/acm/participant/kubernetes/helm/HelmClient.java'.
+
2.3.3 Automation composition Runtime
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To start the automation composition runtime service, we need to execute the following maven command from the "runtime-acm" directory in the clamp repo. Automation composition runtime uses the config file "src/main/resources/application.yaml" by default.
@@ -130,7 +135,7 @@ For building docker images of runtime-acm and participants:
.. code-block:: bash
- cd ~/git/onap/policy/clamp/packages/
+ cd ~/git/onap/policy/clamp/
mvn clean install -P docker
@@ -174,6 +179,8 @@ Request body:
"primeOrder": "PRIME"
}
+A successful prime request returns a 202 response in the Postman client.
+
3.3 Create New Instances of Automation composition
==================================================
Once the AC definition is primed, we can instantiate automation composition instances. This will create the instances with the default state "UNDEPLOYED".
@@ -188,6 +195,8 @@ Request body:
:download:`Instantiation json <json/acm-instantiation.json>`
+A successful creation of a new instance returns a 201 response in the Postman client.
+
3.4 Change the State of the Instance
====================================
When the automation composition is updated with state “DEPLOYED”, the Kubernetes participant fetches the node template for all automation composition elements and deploys the helm chart of each AC element into the cluster. The following sample json input is passed on the request body.
@@ -206,6 +215,7 @@ Automation Composition Update Endpoint:
}
+A successful deploy request returns a 202 response in the Postman client.
After the state changed to "DEPLOYED", nginx-ingress pod is deployed in the kubernetes cluster. And http participant should have posted the dummy data to the configured URL in the tosca template.
The following command can be used to verify the pods deployed successfully by kubernetes participant.
diff --git a/docs/development/devtools/smoke/api-smoke.rst b/docs/development/devtools/smoke/api-smoke.rst
index 8230f33b..b2c81f83 100644
--- a/docs/development/devtools/smoke/api-smoke.rst
+++ b/docs/development/devtools/smoke/api-smoke.rst
@@ -11,7 +11,8 @@ Policy API Smoke Test
~~~~~~~~~~~~~~~~~~~~~
The policy-api smoke testing is executed against a default ONAP installation as per OOM charts.
-This test verifies the execution of all the REST api's exposed by the component to make sure the contract works as expected.
+This test verifies the execution of all the REST api's exposed by the component to make sure the
+contract works as expected.
General Setup
*************
diff --git a/docs/development/devtools/smoke/clamp-ac-participant-protocol-smoke.rst b/docs/development/devtools/smoke/clamp-ac-participant-protocol-smoke.rst
index 95a27ee7..8bcdc3b8 100644
--- a/docs/development/devtools/smoke/clamp-ac-participant-protocol-smoke.rst
+++ b/docs/development/devtools/smoke/clamp-ac-participant-protocol-smoke.rst
@@ -67,7 +67,6 @@ Test result:
- Observe PARTICIPANT_REGISTER going from participant to runtime
- Observe PARTICIPANT_REGISTER_ACK going from runtime to participant
-- Observe PARTICIPANT_PRIME going from runtime to participant
3.2 Participant Deregistration
==============================
@@ -89,7 +88,8 @@ Test result:
- Observe PARTICIPANT_PRIME going from runtime to participant with acm type definitions and common property values for participant types
- Observe that the acm type definitions and common property values for participant types are stored on ParticipantHandler
-- Observe PARTICIPANT_PRIME_ACK going from runtime to participant
+- Observe PARTICIPANT_PRIME_ACK going from participant to runtime
+- Observe PARTICIPANT_SYNC_MSG going from runtime to participant
3.4 Participant DePriming
=========================
@@ -103,7 +103,8 @@ Test result:
- If acm instances exist in runtime database, return a response for the REST API with error response saying "Cannot decommission acm type definition"
- If no acm instances exist in runtime database, Observe PARTICIPANT_PRIME going from runtime to participant with definitions as null
- Observe that the acm type definitions and common property values for participant types are removed on ParticipantHandler
-- Observe PARTICIPANT_PRIME_ACK going from runtime to participant
+- Observe PARTICIPANT_PRIME_ACK going from participant to runtime
+- Observe PARTICIPANT_SYNC_MSG going from runtime to participant
3.5 Automation Composition Instance
===================================
@@ -128,6 +129,7 @@ Test result:
- Observe that the AutomationCompositionElements deploy state is DEPLOYED
- Observe that the acm deploy state is DEPLOYED
- Observe AUTOMATION_COMPOSITION_DEPLOY_ACK going from participant to runtime
+- Observe PARTICIPANT_SYNC_MSG going from runtime to participant
3.7 Automation Composition lock state change to UNLOCK
======================================================
@@ -140,6 +142,7 @@ Test result:
- Observe that the AutomationCompositionElements lock state is UNLOCK
- Observe that the acm state is UNLOCK
- Observe AUTOMATION_COMPOSITION_STATE_CHANGE_ACK going from participant to runtime
+- Observe PARTICIPANT_SYNC_MSG going from runtime to participant
3.8 Automation Composition lock state change to LOCK
====================================================
@@ -152,6 +155,7 @@ Test result:
- Observe that the AutomationCompositionElements lock state is LOCK
- Observe that the acm lock state is LOCK
- Observe AUTOMATION_COMPOSITION_STATE_CHANGE_ACK going from participant to runtime
+- Observe PARTICIPANT_SYNC_MSG going from runtime to participant
3.9 Automation Composition deploy state change to UNDEPLOYED
============================================================
@@ -166,6 +170,7 @@ Test result:
- Observe that the AutomationCompositionElements undeploy the instances from respective frameworks
- Observe that the automation composition instances are removed from participants
- Observe AUTOMATION_COMPOSITION_STATE_CHANGE_ACK going from participant to runtime
+- Observe PARTICIPANT_SYNC_MSG going from runtime to participant
3.10 Automation Composition monitoring and reporting
====================================================
@@ -176,8 +181,7 @@ Action: Bring up participant
Test result:
- Observe that PARTICIPANT_STATUS message is sent from participants to runtime in a regular interval
-- Trigger a PARTICIPANT_STATUS_REQ from runtime and observe a PARTICIPANT_STATUS message with tosca definitions of automation composition type definitions sent
- from all the participants to runtime
+- Trigger a PARTICIPANT_STATUS_REQ from runtime and observe a PARTICIPANT_STATUS message from all the participants to runtime
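The message flows above can be watched directly on the Kafka topics. A minimal sketch, assuming the Kafka CLI tools are available and the broker is reachable at `localhost:29092` (the address and topic names used in the smoke-test configuration files):

```shell
# Watch the operational topic (runtime <-> participant messages,
# e.g. PARTICIPANT_PRIME, AUTOMATION_COMPOSITION_DEPLOY_ACK)
kafka-console-consumer.sh --bootstrap-server localhost:29092 \
    --topic policy-acruntime-participant --from-beginning

# Watch the sync topic (PARTICIPANT_SYNC_MSG from runtime to participants)
kafka-console-consumer.sh --bootstrap-server localhost:29092 \
    --topic acm-ppnt-sync --from-beginning
```

If Kafka runs in a container, the same commands can be run via `docker exec` inside the broker container.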
This concludes the required smoke tests
diff --git a/docs/development/devtools/smoke/clamp-smoke.rst b/docs/development/devtools/smoke/clamp-smoke.rst
index 2f4a7c9f..d1ca6fa8 100644
--- a/docs/development/devtools/smoke/clamp-smoke.rst
+++ b/docs/development/devtools/smoke/clamp-smoke.rst
@@ -121,7 +121,7 @@ Running on the Command Line
.. code-block:: bash
cd ~/git/clamp/runtime-acm
- java -jar target/policy-clamp-runtime-acm-7.1.3-SNAPSHOT.jar
+ java -jar target/policy-clamp-runtime-acm-8.1.0-SNAPSHOT.jar
Running participant simulator
@@ -132,7 +132,7 @@ Run the following commands:
.. code-block:: bash
cd ~/git/clamp/participant/participant-impl/participant-impl-simulator
- java -jar target/policy-clamp-participant-impl-simulator-7.1.3-SNAPSHOT.jar
+ java -jar target/policy-clamp-participant-impl-simulator-8.1.0-SNAPSHOT.jar
Running the CLAMP automation composition docker image
diff --git a/docs/development/devtools/smoke/db-migrator-smoke.rst b/docs/development/devtools/smoke/db-migrator-smoke.rst
index 74b8eddd..c6d8fd0d 100644
--- a/docs/development/devtools/smoke/db-migrator-smoke.rst
+++ b/docs/development/devtools/smoke/db-migrator-smoke.rst
@@ -8,415 +8,51 @@ Policy DB Migrator Smoke Tests
Prerequisites
*************
-Check number of files in each release
+- Have Docker and Docker Compose installed
+- Some bash knowledge
-.. code::
- :number-lines:
+Preparing the test
+==================
- ls 0800/upgrade/*.sql | wc -l = 96
- ls 0900/upgrade/*.sql | wc -l = 13
- ls 1000/upgrade/*.sql | wc -l = 9
- ls 0800/downgrade/*.sql | wc -l = 96
- ls 0900/downgrade/*.sql | wc -l = 13
- ls 1000/downgrade/*.sql | wc -l = 9
+The goal of the smoke test is to confirm that any upgrade or downgrade operation between
+different db-migrator versions completes without issues.
-Upgrade scripts
-===============
+So, before running the test, make sure there are test cases covering upgrade and downgrade
+operations to the latest version. The script with the test cases is under the db-migrator folder
+in the `docker repository <https://github.com/onap/policy-docker/tree/master/policy-db-migrator/smoke-test>`_.
-.. code::
- :number-lines:
+Edit the `*-tests.sh` file to add the tests, and check that the database variables (host,
+admin user, admin password) are set correctly.
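As an illustration, a new test case added to `*-tests.sh` might look like the following sketch. The function name and the `MIGRATOR` variable are hypothetical; the real script has its own helpers, and the default path matches the one used inside the db-migrator container:

```shell
#!/usr/bin/env bash
# Hypothetical test case sketch for the db-migrator smoke-test script.
# MIGRATOR defaults to the container path; override it for local runs.
MIGRATOR="${MIGRATOR:-/opt/app/policy/bin/db-migrator}"

test_upgrade_downgrade_roundtrip() {
    # upgrade to the latest schema, then back down to release 0800
    "$MIGRATOR" -s policyadmin -o upgrade           || { echo "upgrade failed";   return 1; }
    "$MIGRATOR" -s policyadmin -o downgrade -t 0800 || { echo "downgrade failed"; return 1; }
    echo "roundtrip OK"
}
```

Each test case should fail fast on the first migrator error so the compose logs point at the offending step.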
- /opt/app/policy/bin/prepare_upgrade.sh policyadmin
- /opt/app/policy/bin/db-migrator -s policyadmin -o upgrade # upgrade to Jakarta version (latest)
- /opt/app/policy/bin/db-migrator -s policyadmin -o upgrade -t 0900 # upgrade to Istanbul
- /opt/app/policy/bin/db-migrator -s policyadmin -o upgrade -t 0800 # upgrade to Honolulu
+Running the test
+================
-.. note::
- You can also run db-migrator upgrade with the -t and -f options
+The script mentioned in the step above is run against the `Docker compose configuration
+<https://github.com/onap/policy-docker/tree/master/compose>`_.
-Downgrade scripts
-=================
+In the docker compose file, change `db_migrator_policy_init.sh` in the db-migrator service
+descriptor to the `*-tests.sh` file.
-.. code::
- :number-lines:
+Start the service:
- /opt/app/policy/bin/prepare_downgrade.sh policyadmin
- /opt/app/policy/bin/db-migrator -s policyadmin -o downgrade -t 0900 # downgrade to Istanbul
- /opt/app/policy/bin/db-migrator -s policyadmin -o downgrade -t 0800 # downgrade to Honolulu
- /opt/app/policy/bin/db-migrator -s policyadmin -o downgrade -t 0 # delete all tables
+.. code-block:: bash
-Db migrator initialization script
-=================================
+ cd ~/git/docker/compose
+ ./start-compose.sh policy-db-migrator
-Update /oom/kubernetes/policy/resources/config/db_migrator_policy_init.sh with the appropriate upgrade/downgrade calls.
+To collect the logs:
-The policy version you are deploying should either be an upgrade or downgrade from the current db migrator schema version.
+.. code-block:: bash
-Every time you modify db_migrator_policy_init.sh you will have to undeploy, make and redeploy before updates are applied.
+ docker compose logs
+ # or
+ docker logs policy-db-migrator
-1. Fresh Install
-****************
+To finish execution:
-.. list-table::
- :widths: 60 20
- :header-rows: 0
+.. code-block:: bash
- * - Number of files run
- - 118
- * - Tables in policyadmin
- - 70
- * - Records Added
- - 118
- * - schema_version
- - 1000
+ ./stop-compose.sh
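A quick way to judge the outcome is to scan the collected logs for failures. A minimal sketch, assuming the migrator reports failed scripts with lines containing `error` (check the actual log format of the version under test):

```shell
# Fails (non-zero exit) if any error lines are present in the log fed on stdin.
check_migrator_log() {
    ! grep -qi "error" -
}

# Typical usage against the running container:
#   docker logs policy-db-migrator 2>&1 | check_migrator_log && echo "migration OK"
```

A failed script also shows up as a row with a success value of 0 in the `policyadmin_schema_changelog` table, which can be queried directly in the database for confirmation.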
-2. Downgrade to Honolulu (0800)
-*******************************
-
-Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts" tagged as Honolulu
-
-Make/Redeploy to run downgrade.
-
-.. list-table::
- :widths: 60 20
- :header-rows: 0
-
- * - Number of files run
- - 13
- * - Tables in policyadmin
- - 73
- * - Records Added
- - 13
- * - schema_version
- - 0800
-
-3. Upgrade to Istanbul (0900)
-*****************************
-
-Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts".
-
-Make/Redeploy to run upgrade.
-
-.. list-table::
- :widths: 60 20
- :header-rows: 0
-
- * - Number of files run
- - 13
- * - Tables in policyadmin
- - 75
- * - Records Added
- - 13
- * - schema_version
- - 0900
-
-4. Upgrade to Istanbul (0900) without any information in the migration schema
-*****************************************************************************
-
-Ensure you are on release 0800. (This may require running a downgrade before starting the test)
-
-Drop db-migrator tables in migration schema:
-
-.. code::
- :number-lines:
-
- DROP TABLE schema_versions;
- DROP TABLE policyadmin_schema_changelog;
-
-Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts".
-
-Make/Redeploy to run upgrade.
-
-.. list-table::
- :widths: 60 20
- :header-rows: 0
-
- * - Number of files run
- - 13
- * - Tables in policyadmin
- - 75
- * - Records Added
- - 13
- * - schema_version
- - 0900
-
-5. Upgrade to Istanbul (0900) after failed downgrade
-****************************************************
-
-Ensure you are on release 0900.
-
-Rename pdpstatistics table in policyadmin schema:
-
-.. code::
-
- RENAME TABLE pdpstatistics TO backup_pdpstatistics;
-
-Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
-
-Make/Redeploy to run downgrade
-
-This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)
-
-Rename backup_pdpstatistic table in policyadmin schema:
-
-.. code::
-
- RENAME TABLE backup_pdpstatistics TO pdpstatistics;
-
-Modify db_migrator_policy_init.sh - Remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts"
-
-Make/Redeploy to run upgrade
-
-.. list-table::
- :widths: 60 20
- :header-rows: 0
-
- * - Number of files run
- - 11
- * - Tables in policyadmin
- - 75
- * - Records Added
- - 11
- * - schema_version
- - 0900
-
-6. Downgrade to Honolulu (0800) after failed downgrade
-******************************************************
-
-Ensure you are on release 0900.
-
-Add timeStamp column to papdpstatistics_enginestats:
-
-.. code::
-
- ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN timeStamp datetime DEFAULT NULL NULL AFTER UPTIME;
-
-Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
-
-Make/Redeploy to run downgrade
-
-This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)
-
-Remove timeStamp column from jpapdpstatistics_enginestats:
-
-.. code::
-
- ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp;
-
-The config job will retry 5 times. If you make your fix before this limit is reached you won't need to redeploy.
-
-Redeploy to run downgrade
-
-.. list-table::
- :widths: 60 20
- :header-rows: 0
-
- * - Number of files run
- - 14
- * - Tables in policyadmin
- - 73
- * - Records Added
- - 14
- * - schema_version
- - 0800
-
-7. Downgrade to Honolulu (0800) after failed upgrade
-****************************************************
-
-Ensure you are on release 0800.
-
-Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts"
-
-Update pdpstatistics:
-
-.. code::
-
- ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL NULL AFTER POLICYEXECUTEDSUCCESSCOUNT;
-
-Make/Redeploy to run upgrade
-
-This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)
-
-Once the retry count has been reached, update pdpstatistics:
-
-.. code::
-
- ALTER TABLE pdpstatistics DROP COLUMN POLICYUNDEPLOYCOUNT;
-
-Modify db_migrator_policy_init.sh - Remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
-
-Make/Redeploy to run downgrade
-
-.. list-table::
- :widths: 60 20
- :header-rows: 0
-
- * - Number of files run
- - 7
- * - Tables in policyadmin
- - 73
- * - Records Added
- - 7
- * - schema_version
- - 0800
-
-8. Upgrade to Istanbul (0900) after failed upgrade
-**************************************************
-
-Ensure you are on release 0800.
-
-Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts"
-
-Update PDP table:
-
-.. code::
-
- ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY;
-
-Make/Redeploy to run upgrade
-
-This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0)
-
-Update PDP table:
-
-.. code::
-
- ALTER TABLE pdp DROP COLUMN LASTUPDATE;
-
-The config job will retry 5 times. If you make your fix before this limit is reached you won't need to redeploy.
-
-Redeploy to run upgrade
-
-.. list-table::
- :widths: 60 20
- :header-rows: 0
-
- * - Number of files run
- - 14
- * - Tables in policyadmin
- - 75
- * - Records Added
- - 14
- * - schema_version
- - 0900
-
-9. Downgrade to Honolulu (0800) with data in pdpstatistics and jpapdpstatistics_enginestats
-*******************************************************************************************
-
-Ensure you are on release 0900.
-
-Check pdpstatistics and jpapdpstatistics_enginestats are populated with data.
-
-.. code::
- :number-lines:
-
- SELECT count(*) FROM pdpstatistics;
- SELECT count(*) FROM jpapdpstatistics_enginestats;
-
-Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts"
-
-Make/Redeploy to run downgrade
-
-Check the tables to ensure the number of records is the same.
-
-.. code::
- :number-lines:
-
- SELECT count(*) FROM pdpstatistics;
- SELECT count(*) FROM jpapdpstatistics_enginestats;
-
-Check pdpstatistics to ensure the primary key has changed:
-
-.. code::
-
- SELECT column_name, constraint_name FROM information_schema.key_column_usage WHERE table_name='pdpstatistics';
-
-Check jpapdpstatistics_enginestats to ensure id column has been dropped and timestamp column added.
-
-.. code::
-
- SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'jpapdpstatistics_enginestats';
-
-Check the pdp table to ensure the LASTUPDATE column has been dropped.
-
-.. code::
-
- SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'pdp';
-
-
-.. list-table::
- :widths: 60 20
- :header-rows: 0
-
- * - Number of files run
- - 13
- * - Tables in policyadmin
- - 73
- * - Records Added
- - 13
- * - schema_version
- - 0800
-
-10. Upgrade to Istanbul (0900) with data in pdpstatistics and jpapdpstatistics_enginestats
-******************************************************************************************
-
-Ensure you are on release 0800.
-
-Check pdpstatistics and jpapdpstatistics_enginestats are populated with data.
-
-.. code::
- :number-lines:
-
- SELECT count(*) FROM pdpstatistics;
- SELECT count(*) FROM jpapdpstatistics_enginestats;
-
-Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts"
-
-Make/Redeploy to run upgrade
-
-Check the tables to ensure the number of records is the same.
-
-.. code::
- :number-lines:
-
- SELECT count(*) FROM pdpstatistics;
- SELECT count(*) FROM jpapdpstatistics_enginestats;
-
-Check pdpstatistics to ensure the primary key has changed:
-
-.. code::
-
- SELECT column_name, constraint_name FROM information_schema.key_column_usage WHERE table_name='pdpstatistics';
-
-Check jpapdpstatistics_enginestats to ensure timestamp column has been dropped and id column added.
-
-.. code::
-
- SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'jpapdpstatistics_enginestats';
-
-Check the pdp table to ensure the LASTUPDATE column has been added and the value has defaulted to the CURRENT_TIMESTAMP.
-
-.. code::
-
- SELECT table_name, column_name, data_type, column_default FROM information_schema.columns WHERE table_name = 'pdp';
-
-.. list-table::
- :widths: 60 20
- :header-rows: 0
-
- * - Number of files run
- - 13
- * - Tables in policyadmin
- - 75
- * - Records Added
- - 13
- * - schema_version
- - 0900
-
-.. note::
- The number of records added may vary depending on the number of retries.
-
-With addition of Postgres support to db-migrator, these tests can be also performed on a Postgres version of database.
-In addition, scripts running the aforementioned scenarios can be found under `smoke-tests` folder on db-migrator code base.
End of Document
diff --git a/docs/development/devtools/smoke/files/participant-http-application.yaml b/docs/development/devtools/smoke/files/participant-http-application.yaml
index 142c24e5..edf324b4 100644
--- a/docs/development/devtools/smoke/files/participant-http-application.yaml
+++ b/docs/development/devtools/smoke/files/participant-http-application.yaml
@@ -1,20 +1,28 @@
participant:
intermediaryParameters:
+ topics:
+ operationTopic: policy-acruntime-participant
+ syncTopic: acm-ppnt-sync
reportingTimeIntervalMs: 120000
description: Participant Description
participantId: 101c62b3-8918-41b9-a747-d21eb79c6c01
clampAutomationCompositionTopics:
topicSources:
- - topic: policy-acruntime-participant
+ - topic: ${participant.intermediaryParameters.topics.operationTopic}
servers:
- - ${topicServer:localhost:29092}
+ - localhost:29092
+ topicCommInfrastructure: kafka
+ fetchTimeout: 15000
+ - topic: ${participant.intermediaryParameters.topics.syncTopic}
+ servers:
+ - localhost:29092
topicCommInfrastructure: kafka
fetchTimeout: 15000
topicSinks:
- - topic: policy-acruntime-participant
+ - topic: ${participant.intermediaryParameters.topics.operationTopic}
servers:
- - ${topicServer:localhost:29092}
+ - localhost:29092
topicCommInfrastructure: kafka
participantSupportedElementTypes:
-
diff --git a/docs/development/devtools/smoke/files/participant-kubernetes-application.yaml b/docs/development/devtools/smoke/files/participant-kubernetes-application.yaml
index 9b25c615..59732bbf 100644
--- a/docs/development/devtools/smoke/files/participant-kubernetes-application.yaml
+++ b/docs/development/devtools/smoke/files/participant-kubernetes-application.yaml
@@ -3,22 +3,28 @@ participant:
localChartDirectory: /home/policy/local-charts
infoFileName: CHART_INFO.json
intermediaryParameters:
+ topics:
+ operationTopic: policy-acruntime-participant
+ syncTopic: acm-ppnt-sync
reportingTimeIntervalMs: 120000
description: Participant Description
participantId: 101c62b3-8918-41b9-a747-d21eb79c6c02
clampAutomationCompositionTopics:
topicSources:
- -
- topic: policy-acruntime-participant
+ - topic: ${participant.intermediaryParameters.topics.operationTopic}
servers:
- - ${topicServer:localhost:29092}
+ - localhost:29092
+ topicCommInfrastructure: kafka
+ fetchTimeout: 15000
+ - topic: ${participant.intermediaryParameters.topics.syncTopic}
+ servers:
+ - localhost:29092
topicCommInfrastructure: kafka
fetchTimeout: 15000
topicSinks:
- -
- topic: policy-acruntime-participant
+ - topic: ${participant.intermediaryParameters.topics.operationTopic}
servers:
- - ${topicServer:localhost:29092}
+ - localhost:29092
topicCommInfrastructure: kafka
participantSupportedElementTypes:
-
diff --git a/docs/development/devtools/smoke/files/participant-policy-application.yaml b/docs/development/devtools/smoke/files/participant-policy-application.yaml
index 5b87d1b3..c42146a5 100644
--- a/docs/development/devtools/smoke/files/participant-policy-application.yaml
+++ b/docs/development/devtools/smoke/files/participant-policy-application.yaml
@@ -18,22 +18,28 @@ participant:
useHttps: false
allowSelfSignedCerts: false
intermediaryParameters:
+ topics:
+ operationTopic: policy-acruntime-participant
+ syncTopic: acm-ppnt-sync
reportingTimeIntervalMs: 120000
description: Participant Description
participantId: 101c62b3-8918-41b9-a747-d21eb79c6c03
clampAutomationCompositionTopics:
topicSources:
- -
- topic: policy-acruntime-participant
+ - topic: ${participant.intermediaryParameters.topics.operationTopic}
servers:
- - ${topicServer:localhost:29092}
+ - localhost:29092
+ topicCommInfrastructure: kafka
+ fetchTimeout: 15000
+ - topic: ${participant.intermediaryParameters.topics.syncTopic}
+ servers:
+ - localhost:29092
topicCommInfrastructure: kafka
fetchTimeout: 15000
topicSinks:
- -
- topic: policy-acruntime-participant
+ - topic: ${participant.intermediaryParameters.topics.operationTopic}
servers:
- - ${topicServer:localhost:29092}
+ - localhost:29092
topicCommInfrastructure: kafka
participantSupportedElementTypes:
-
diff --git a/docs/development/devtools/smoke/files/participant-sim-application.yaml b/docs/development/devtools/smoke/files/participant-sim-application.yaml
index 2d23c12c..2a7efc3f 100644
--- a/docs/development/devtools/smoke/files/participant-sim-application.yaml
+++ b/docs/development/devtools/smoke/files/participant-sim-application.yaml
@@ -1,20 +1,28 @@
participant:
intermediaryParameters:
+ topics:
+ operationTopic: policy-acruntime-participant
+ syncTopic: acm-ppnt-sync
reportingTimeIntervalMs: 120000
description: Participant Description
participantId: 101c62b3-8918-41b9-a747-d21eb79c6c90
clampAutomationCompositionTopics:
topicSources:
- - topic: policy-acruntime-participant
+ - topic: ${participant.intermediaryParameters.topics.operationTopic}
servers:
- - ${topicServer:localhost:29092}
+ - localhost:29092
+ topicCommInfrastructure: kafka
+ fetchTimeout: 15000
+ - topic: ${participant.intermediaryParameters.topics.syncTopic}
+ servers:
+ - localhost:29092
topicCommInfrastructure: kafka
fetchTimeout: 15000
topicSinks:
- - topic: policy-acruntime-participant
+ - topic: ${participant.intermediaryParameters.topics.operationTopic}
servers:
- - ${topicServer:localhost:29092}
+ - localhost:29092
topicCommInfrastructure: kafka
participantSupportedElementTypes:
-
diff --git a/docs/development/devtools/smoke/files/runtime-application.yaml b/docs/development/devtools/smoke/files/runtime-application.yaml
index f798d5bb..d9639226 100644
--- a/docs/development/devtools/smoke/files/runtime-application.yaml
+++ b/docs/development/devtools/smoke/files/runtime-application.yaml
@@ -1,21 +1,26 @@
runtime:
+ topics:
+ operationTopic: policy-acruntime-participant
+ syncTopic: acm-ppnt-sync
participantParameters:
heartBeatMs: 20000
maxStatusWaitMs: 200000
topicParameterGroup:
topicSources:
- -
- topic: policy-acruntime-participant
+ - topic: ${runtime.topics.operationTopic}
servers:
- - ${topicServer:localhost:29092}
+ - localhost:29092
topicCommInfrastructure: kafka
fetchTimeout: 15000
topicSinks:
- -
- topic: policy-acruntime-participant
+ - topic: ${runtime.topics.operationTopic}
servers:
- - ${topicServer:localhost:29092}
+ - localhost:29092
+ topicCommInfrastructure: kafka
+ - topic: ${runtime.topics.syncTopic}
+ servers:
+ - localhost:29092
topicCommInfrastructure: kafka
acmParameters:
toscaElementName: org.onap.policy.clamp.acm.AutomationCompositionElement
diff --git a/docs/development/devtools/smoke/json/acm-instantiation.json b/docs/development/devtools/smoke/json/acm-instantiation.json
index 2cf009cd..85f22893 100644
--- a/docs/development/devtools/smoke/json/acm-instantiation.json
+++ b/docs/development/devtools/smoke/json/acm-instantiation.json
@@ -15,7 +15,7 @@
"chart": {
"chartId": {
"name": "nginx-ingress",
- "version": "0.11.0"
+ "version": "1.4.1"
},
"releaseName": "nginxapp",
"namespace": "onap"
diff --git a/docs/development/devtools/smoke/pap-smoke.rst b/docs/development/devtools/smoke/pap-smoke.rst
index a5f54c06..a17c8c6c 100644
--- a/docs/development/devtools/smoke/pap-smoke.rst
+++ b/docs/development/devtools/smoke/pap-smoke.rst
@@ -11,7 +11,8 @@ Policy PAP Smoke Test
~~~~~~~~~~~~~~~~~~~~~
The policy-pap smoke testing is executed against a default ONAP installation as per OOM charts.
-This test verifies the execution of all the REST api's exposed by the component to make sure the contract works as expected.
+This test verifies the execution of all the REST APIs exposed by the component to make sure the
+contract works as expected.
General Setup
*************
@@ -28,7 +29,7 @@ The ONAP components used during the smoke tests are:
- Policy API to perform CRUD of policies.
- Policy DB to store the policies.
-- DMaaP for the communication between components.
+- Kafka for the communication between components.
- Policy PAP to perform runtime administration (deploy/undeploy/status/statistics/etc).
- Policy Apex-PDP to deploy and undeploy policies, and to send heartbeats to PAP.
- Policy Drools-PDP to deploy and undeploy policies, and to send heartbeats to PAP.
@@ -66,4 +67,5 @@ Make sure to execute the delete steps in order to clean the setup after testing.
Delete policies using policy-api
--------------------------------
-Use the previously downloaded policy-api postman collection to delete the policies created for testing.
+Use the previously downloaded policy-api postman collection to delete the policies created for
+testing.
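The same cleanup can be scripted with curl. The host, port, credentials, and policy identifiers below are placeholders; take the real values from the postman collection and your deployment:

```shell
# Hypothetical example: delete one version of a test policy via policy-api.
# Endpoint shape follows the policy-api REST convention; adjust names/versions.
curl -sk -u 'policyadmin:zb!XztG34' -X DELETE \
  "https://localhost:6969/policy/api/v1/policytypes/onap.policies.native.Apex/versions/1.0.0/policies/my.test.policy/versions/1.0.0"
```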
diff --git a/docs/development/devtools/smoke/xacml-smoke.rst b/docs/development/devtools/smoke/xacml-smoke.rst
index 61f3551f..b57a3065 100644
--- a/docs/development/devtools/smoke/xacml-smoke.rst
+++ b/docs/development/devtools/smoke/xacml-smoke.rst
@@ -10,8 +10,8 @@
XACML PDP Smoke Test
~~~~~~~~~~~~~~~~~~~~
-The policy-xacml-pdp smoke testing can be executed against a kubernetes based policy framework installation,
-and/or a docker-compose set up similar to the one executed by CSIT tests.
+The policy-xacml-pdp smoke testing can be executed against a kubernetes-based policy framework
+installation, and/or a docker-compose setup similar to the one used by the CSIT tests.
General Setup
*************
@@ -21,16 +21,21 @@ PF kubernetes Install
For installation instructions, please refer to the following documentation:
-`Policy Framework K8S Install <https://docs.onap.org/projects/onap-policy-parent/en/latest/development/devtools/testing/csit.html>`_
+`Policy Framework K8S Install
+<https://docs.onap.org/projects/onap-policy-parent/en/latest/development/devtools/testing/csit.html>`_
-The script referred to in the above link should handle the install of the of microk8s, docker and other required components for the install of the policy framework and clamp components. The scripts are used by policy as a means to run the CSIT tests in Kubernetes.
+The script referred to in the above link should handle the installation of microk8s, docker and
+other components required to install the policy framework and clamp components. The scripts
+are used by policy as a means to run the CSIT tests in Kubernetes.
docker-compose based
--------------------
-A smaller testing environment can be put together by replicating the docker-based CSIT test environment. Details are on the same page as K8s setup:
+A smaller testing environment can be put together by replicating the docker-based CSIT test
+environment. Details are on the same page as K8s setup:
-`Policy CSIT Test Install Docker <https://docs.onap.org/projects/onap-policy-parent/en/latest/development/devtools/testing/csit.html>`_
+`Policy CSIT Test Install Docker
+<https://docs.onap.org/projects/onap-policy-parent/en/latest/development/devtools/testing/csit.html>`_
Testing procedures
******************
diff --git a/docs/development/devtools/testing/csit.rst b/docs/development/devtools/testing/csit.rst
index 4eb1256c..9151e166 100644
--- a/docs/development/devtools/testing/csit.rst
+++ b/docs/development/devtools/testing/csit.rst
@@ -17,6 +17,10 @@ This article provides the steps to run CSIT tests in a local environment, most c
significant code change.
.. note::
+ Both environments described on this page are for test or learning purposes only. For a real
+ deployment, use the `ONAP Operations Manager <https://github.com/onap/oom>`_.
+
+.. note::
If building images locally, follow the instructions :ref:`here <building-pf-docker-images-label>`
@@ -38,51 +42,25 @@ Under the folder `~/git/policy/docker/csit`, there are two main scripts to run t
Running CSIT in Docker environment
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-If not familiar with the PF Docker structure, the detailed information can be found :ref:`here <docker-label>`
+If not familiar with the PF Docker structure, the detailed information can be found
+:ref:`here <docker-label>`
Running tests to validate code changes
--------------------------------------
-After building image(s) locally, the compose file needs to be edited to use the local image when
-bringing up the container. Open file `~/git/policy/docker/compose/docker-compose.yml` and remove the
-tag `${CONTAINER_LOCATION}` from the image variable in the service description.
-If change is GUI related, then `docker-compose.gui.yml` might need to be edited as well, although
-there are no GUI related test suites.
-
-For example, if testing against a PAP change, a new onap/policy-pap image with latest and
-x.y.z-SNAPSHOT versions is available. When editing the docker-compose file, the following change
-would be done:
-
-From:
-
-.. code-block:: yaml
-
- pap:
- image: ${CONTAINER_LOCATION}onap/policy-pap:${POLICY_PAP_VERSION}
- container_name: policy-pap
-
-
-To:
-
-.. code-block:: yaml
-
- pap:
- image: onap/policy-pap:latest
- container_name: policy-pap
-
+For *local* images, run the script with the `--local` flag.
.. note::
Make sure to apply the same changes to any other components that are using locally built images.
-After finished with edits in compose file, then use the `run-project-csit.sh` script to run the
-test suite.
+Then use the `run-project-csit.sh` script to run the test suite.
.. code-block:: bash
cd ~/git/policy/docker
- ./csit/run-project-csit.sh <component>
+ ./csit/run-project-csit.sh <component> --local
The <component> input is any of the policy components available:
@@ -94,7 +72,8 @@ The <component> input is any of the policy components available:
- drools-pdp
- drools-applications
- xacml-pdp
- - policy-acm-runtime
+ - clamp
+ - opa-pdp
Keep in mind that after the Robot executions, logs from docker-compose are printed and
test logs might not be available on the console and the containers are torn down. The test results
@@ -105,12 +84,14 @@ Running tests for learning PF usage
-----------------------------------
In that case, no changes required on docker-compose files, but commenting the tear down of docker
-containers might be required. For that, edit the file `run-project-csit.sh` script and comment the
-following line:
+containers might be required. For that, run the `run-project-csit.sh` script with the `--no-exit` flag:
.. code-block:: bash
- # source_safely ${WORKSPACE}/compose/stop-compose.sh (currently line 36)
+ cd ~/git/policy/docker
+ ./csit/run-project-csit.sh <component> --local --no-exit
+ # or
+ ./csit/run-project-csit.sh <component> --no-exit # will download images from nexus3 server
This way, the docker containers are still up and running for more investigation.
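With the containers left running, typical follow-up inspection commands look like this (`policy-pap` is one of the default container names from the compose file; substitute the component under test):

```shell
docker compose ps                # list the policy containers still running
docker logs -f policy-pap        # follow one component's log
docker exec -it policy-pap sh    # open a shell inside a container
```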
@@ -153,6 +134,7 @@ The <component> input is any of the policy components available:
- drools-pdp
- xacml-pdp
- clamp
+ - opa-pdp
Different from Docker usage, the microk8s installation is not removed when tests finish.
@@ -161,12 +143,12 @@ Different from Docker usage, the microk8s installation is not removed when tests
Installing all available PF components
--------------------------------------
-Use the `run-k8s-csit.sh` script to install PF components with Prometheus server available.
+Use the `cluster_setup.sh` script to install PF components with Prometheus server available.
.. code-block:: bash
- cd ~/git/policy/docker
- ./csit/run-k8s-csit.sh install
+ cd ~/git/policy/docker/csit/resources/scripts
+ ./cluster_setup.sh install
In this case, no tests are executed and the environment can be used for other integration tests
@@ -179,7 +161,7 @@ Uninstall and clean up
If running the CSIT tests in the microk8s environment, docker images for the test suites are created.
To clean them up, use the `docker prune <https://docs.docker.com/config/pruning/>`_ command.
-To uninstall policy helm deployment and/or the microk8s cluster, use `run-k8s-csit.sh`
+To uninstall policy helm deployment and/or the microk8s cluster, use `cluster_setup.sh`
.. code-block:: bash
@@ -187,10 +169,10 @@ To uninstall policy helm deployment and/or the microk8s cluster, use `run-k8s-cs
cd ~/git/policy/docker
# to uninstall deployment
- ./csit/run-k8s-csit.sh uninstall
+ ./csit/resources/scripts/cluster_setup.sh uninstall
# to remove cluster
- ./csit/run-k8s-csit.sh clean
+ ./csit/resources/scripts/cluster_setup.sh clean
End of document \ No newline at end of file
diff --git a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_metrics_after_72h.txt b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_metrics_after_72h.txt
deleted file mode 100644
index 56f13907..00000000
--- a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_metrics_after_72h.txt
+++ /dev/null
@@ -1,316 +0,0 @@
-# HELP jvm_threads_current Current thread count of a JVM
-# TYPE jvm_threads_current gauge
-jvm_threads_current 32.0
-# HELP jvm_threads_daemon Daemon thread count of a JVM
-# TYPE jvm_threads_daemon gauge
-jvm_threads_daemon 17.0
-# HELP jvm_threads_peak Peak thread count of a JVM
-# TYPE jvm_threads_peak gauge
-jvm_threads_peak 81.0
-# HELP jvm_threads_started_total Started thread count of a JVM
-# TYPE jvm_threads_started_total counter
-jvm_threads_started_total 423360.0
-# HELP jvm_threads_deadlocked Cycles of JVM-threads that are in deadlock waiting to acquire object monitors or ownable synchronizers
-# TYPE jvm_threads_deadlocked gauge
-jvm_threads_deadlocked 0.0
-# HELP jvm_threads_deadlocked_monitor Cycles of JVM-threads that are in deadlock waiting to acquire object monitors
-# TYPE jvm_threads_deadlocked_monitor gauge
-jvm_threads_deadlocked_monitor 0.0
-# HELP jvm_threads_state Current count of threads by state
-# TYPE jvm_threads_state gauge
-jvm_threads_state{state="BLOCKED",} 0.0
-jvm_threads_state{state="TIMED_WAITING",} 11.0
-jvm_threads_state{state="NEW",} 0.0
-jvm_threads_state{state="RUNNABLE",} 7.0
-jvm_threads_state{state="TERMINATED",} 0.0
-jvm_threads_state{state="WAITING",} 14.0
-# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
-# TYPE process_cpu_seconds_total counter
-process_cpu_seconds_total 16418.06
-# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
-# TYPE process_start_time_seconds gauge
-process_start_time_seconds 1.651077494162E9
-# HELP process_open_fds Number of open file descriptors.
-# TYPE process_open_fds gauge
-process_open_fds 357.0
-# HELP process_max_fds Maximum number of open file descriptors.
-# TYPE process_max_fds gauge
-process_max_fds 1048576.0
-# HELP process_virtual_memory_bytes Virtual memory size in bytes.
-# TYPE process_virtual_memory_bytes gauge
-process_virtual_memory_bytes 1.0165403648E10
-# HELP process_resident_memory_bytes Resident memory size in bytes.
-# TYPE process_resident_memory_bytes gauge
-process_resident_memory_bytes 5.58034944E8
-# HELP pdpa_engine_event_executions Total number of APEX events processed by the engine.
-# TYPE pdpa_engine_event_executions gauge
-pdpa_engine_event_executions{engine_instance_id="NSOApexEngine-1:0.0.1",} 30743.0
-pdpa_engine_event_executions{engine_instance_id="NSOApexEngine-4:0.0.1",} 30766.0
-pdpa_engine_event_executions{engine_instance_id="NSOApexEngine-3:0.0.1",} 30722.0
-pdpa_engine_event_executions{engine_instance_id="NSOApexEngine-0:0.0.1",} 30727.0
-pdpa_engine_event_executions{engine_instance_id="NSOApexEngine-2:0.0.1",} 30742.0
-# HELP jvm_buffer_pool_used_bytes Used bytes of a given JVM buffer pool.
-# TYPE jvm_buffer_pool_used_bytes gauge
-jvm_buffer_pool_used_bytes{pool="mapped",} 0.0
-jvm_buffer_pool_used_bytes{pool="direct",} 3.3833905E7
-# HELP jvm_buffer_pool_capacity_bytes Bytes capacity of a given JVM buffer pool.
-# TYPE jvm_buffer_pool_capacity_bytes gauge
-jvm_buffer_pool_capacity_bytes{pool="mapped",} 0.0
-jvm_buffer_pool_capacity_bytes{pool="direct",} 3.3833904E7
-# HELP jvm_buffer_pool_used_buffers Used buffers of a given JVM buffer pool.
-# TYPE jvm_buffer_pool_used_buffers gauge
-jvm_buffer_pool_used_buffers{pool="mapped",} 0.0
-jvm_buffer_pool_used_buffers{pool="direct",} 15.0
-# HELP pdpa_policy_executions_total The total number of TOSCA policy executions.
-# TYPE pdpa_policy_executions_total counter
-# HELP pdpa_policy_deployments_total The total number of policy deployments.
-# TYPE pdpa_policy_deployments_total counter
-pdpa_policy_deployments_total{operation="deploy",status="TOTAL",} 5.0
-pdpa_policy_deployments_total{operation="undeploy",status="TOTAL",} 5.0
-pdpa_policy_deployments_total{operation="undeploy",status="SUCCESS",} 5.0
-pdpa_policy_deployments_total{operation="deploy",status="SUCCESS",} 5.0
-# HELP pdpa_engine_average_execution_time_seconds Average time taken to execute an APEX policy in seconds.
-# TYPE pdpa_engine_average_execution_time_seconds gauge
-pdpa_engine_average_execution_time_seconds{engine_instance_id="NSOApexEngine-1:0.0.1",} 0.00515235988680349
-pdpa_engine_average_execution_time_seconds{engine_instance_id="NSOApexEngine-4:0.0.1",} 0.00521845543782099
-pdpa_engine_average_execution_time_seconds{engine_instance_id="NSOApexEngine-3:0.0.1",} 0.005200800729119198
-pdpa_engine_average_execution_time_seconds{engine_instance_id="NSOApexEngine-0:0.0.1",} 0.005191785725908804
-pdpa_engine_average_execution_time_seconds{engine_instance_id="NSOApexEngine-2:0.0.1",} 0.0051784854596317684
-# HELP pdpa_engine_state State of the APEX engine as integers mapped as - 0:UNDEFINED, 1:STOPPED, 2:READY, 3:EXECUTING, 4:STOPPING
-# TYPE pdpa_engine_state gauge
-pdpa_engine_state{engine_instance_id="NSOApexEngine-1:0.0.1",} 1.0
-pdpa_engine_state{engine_instance_id="NSOApexEngine-4:0.0.1",} 1.0
-pdpa_engine_state{engine_instance_id="NSOApexEngine-3:0.0.1",} 1.0
-pdpa_engine_state{engine_instance_id="NSOApexEngine-0:0.0.1",} 1.0
-pdpa_engine_state{engine_instance_id="NSOApexEngine-2:0.0.1",} 1.0
-# HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds.
-# TYPE jvm_gc_collection_seconds summary
-jvm_gc_collection_seconds_count{gc="Copy",} 5883.0
-jvm_gc_collection_seconds_sum{gc="Copy",} 97.808
-jvm_gc_collection_seconds_count{gc="MarkSweepCompact",} 3.0
-jvm_gc_collection_seconds_sum{gc="MarkSweepCompact",} 0.357
-# HELP pdpa_engine_last_start_timestamp_epoch Epoch timestamp of the instance when engine was last started.
-# TYPE pdpa_engine_last_start_timestamp_epoch gauge
-pdpa_engine_last_start_timestamp_epoch{engine_instance_id="NSOApexEngine-1:0.0.1",} 0.0
-pdpa_engine_last_start_timestamp_epoch{engine_instance_id="NSOApexEngine-4:0.0.1",} 0.0
-pdpa_engine_last_start_timestamp_epoch{engine_instance_id="NSOApexEngine-3:0.0.1",} 0.0
-pdpa_engine_last_start_timestamp_epoch{engine_instance_id="NSOApexEngine-0:0.0.1",} 0.0
-pdpa_engine_last_start_timestamp_epoch{engine_instance_id="NSOApexEngine-2:0.0.1",} 0.0
-# HELP jvm_memory_pool_allocated_bytes_total Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously.
-# TYPE jvm_memory_pool_allocated_bytes_total counter
-jvm_memory_pool_allocated_bytes_total{pool="Eden Space",} 8.29800936264E11
-jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'profiled nmethods'",} 4.839232E7
-jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-profiled nmethods'",} 3.5181056E7
-jvm_memory_pool_allocated_bytes_total{pool="Compressed Class Space",} 8194120.0
-jvm_memory_pool_allocated_bytes_total{pool="Metaspace",} 7.7729144E7
-jvm_memory_pool_allocated_bytes_total{pool="Tenured Gen",} 1.41180272E8
-jvm_memory_pool_allocated_bytes_total{pool="Survivor Space",} 4.78761928E8
-jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-nmethods'",} 1392128.0
-# HELP pdpa_engine_uptime Time elapsed since the engine was started.
-# TYPE pdpa_engine_uptime gauge
-pdpa_engine_uptime{engine_instance_id="NSOApexEngine-1:0.0.1",} 259200.522
-pdpa_engine_uptime{engine_instance_id="NSOApexEngine-4:0.0.1",} 259200.751
-pdpa_engine_uptime{engine_instance_id="NSOApexEngine-3:0.0.1",} 259200.678
-pdpa_engine_uptime{engine_instance_id="NSOApexEngine-0:0.0.1",} 259200.439
-pdpa_engine_uptime{engine_instance_id="NSOApexEngine-2:0.0.1",} 259200.601
-# HELP pdpa_engine_last_execution_time Time taken to execute the last APEX policy in seconds.
-# TYPE pdpa_engine_last_execution_time histogram
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.005",} 24726.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.01",} 50195.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.025",} 70836.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.05",} 71947.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.075",} 71996.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.1",} 72001.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.25",} 72002.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.5",} 72002.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.75",} 72002.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="1.0",} 72002.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="2.5",} 72002.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="5.0",} 72002.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="7.5",} 72002.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="10.0",} 72002.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="+Inf",} 72002.0
-pdpa_engine_last_execution_time_count{engine_instance_id="NSOApexEngine-1:0.0.1",} 72002.0
-pdpa_engine_last_execution_time_sum{engine_instance_id="NSOApexEngine-1:0.0.1",} 609.1939999998591
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.005",} 24512.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.01",} 50115.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.025",} 70746.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.05",} 71918.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.075",} 71966.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.1",} 71967.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.25",} 71967.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.5",} 71967.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.75",} 71967.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="1.0",} 71967.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="2.5",} 71967.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="5.0",} 71967.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="7.5",} 71967.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="10.0",} 71967.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="+Inf",} 71967.0
-pdpa_engine_last_execution_time_count{engine_instance_id="NSOApexEngine-4:0.0.1",} 71967.0
-pdpa_engine_last_execution_time_sum{engine_instance_id="NSOApexEngine-4:0.0.1",} 610.3469999998522
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.005",} 24607.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.01",} 50182.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.025",} 70791.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.05",} 71929.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.075",} 71965.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.1",} 71970.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.25",} 71970.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.5",} 71970.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.75",} 71970.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="1.0",} 71970.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="2.5",} 71970.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="5.0",} 71970.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="7.5",} 71970.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="10.0",} 71970.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="+Inf",} 71970.0
-pdpa_engine_last_execution_time_count{engine_instance_id="NSOApexEngine-3:0.0.1",} 71970.0
-pdpa_engine_last_execution_time_sum{engine_instance_id="NSOApexEngine-3:0.0.1",} 608.8539999998619
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.005",} 24623.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.01",} 50207.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.025",} 70783.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.05",} 71934.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.075",} 71981.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.1",} 71986.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.25",} 71988.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.5",} 71988.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.75",} 71988.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="1.0",} 71988.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="2.5",} 71988.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="5.0",} 71988.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="7.5",} 71988.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="10.0",} 71988.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="+Inf",} 71988.0
-pdpa_engine_last_execution_time_count{engine_instance_id="NSOApexEngine-0:0.0.1",} 71988.0
-pdpa_engine_last_execution_time_sum{engine_instance_id="NSOApexEngine-0:0.0.1",} 610.5579999998558
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.005",} 24594.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.01",} 50131.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.025",} 70816.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.05",} 71905.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.075",} 71959.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.1",} 71961.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.25",} 71962.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.5",} 71962.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.75",} 71962.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="1.0",} 71962.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="2.5",} 71962.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="5.0",} 71962.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="7.5",} 71962.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="10.0",} 71962.0
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="+Inf",} 71962.0
-pdpa_engine_last_execution_time_count{engine_instance_id="NSOApexEngine-2:0.0.1",} 71962.0
-pdpa_engine_last_execution_time_sum{engine_instance_id="NSOApexEngine-2:0.0.1",} 608.3549999998555
-# HELP jvm_memory_objects_pending_finalization The number of objects waiting in the finalizer queue.
-# TYPE jvm_memory_objects_pending_finalization gauge
-jvm_memory_objects_pending_finalization 0.0
-# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area.
-# TYPE jvm_memory_bytes_used gauge
-jvm_memory_bytes_used{area="heap",} 1.90274552E8
-jvm_memory_bytes_used{area="nonheap",} 1.16193856E8
-# HELP jvm_memory_bytes_committed Committed (bytes) of a given JVM memory area.
-# TYPE jvm_memory_bytes_committed gauge
-jvm_memory_bytes_committed{area="heap",} 5.10984192E8
-jvm_memory_bytes_committed{area="nonheap",} 1.56127232E8
-# HELP jvm_memory_bytes_max Max (bytes) of a given JVM memory area.
-# TYPE jvm_memory_bytes_max gauge
-jvm_memory_bytes_max{area="heap",} 8.151564288E9
-jvm_memory_bytes_max{area="nonheap",} -1.0
-# HELP jvm_memory_bytes_init Initial bytes of a given JVM memory area.
-# TYPE jvm_memory_bytes_init gauge
-jvm_memory_bytes_init{area="heap",} 5.28482304E8
-jvm_memory_bytes_init{area="nonheap",} 7667712.0
-# HELP jvm_memory_pool_bytes_used Used bytes of a given JVM memory pool.
-# TYPE jvm_memory_pool_bytes_used gauge
-jvm_memory_pool_bytes_used{pool="CodeHeap 'non-nmethods'",} 1353600.0
-jvm_memory_pool_bytes_used{pool="Metaspace",} 7.7729144E7
-jvm_memory_pool_bytes_used{pool="Tenured Gen",} 1.41180272E8
-jvm_memory_pool_bytes_used{pool="CodeHeap 'profiled nmethods'",} 4831104.0
-jvm_memory_pool_bytes_used{pool="Eden Space",} 4.5145032E7
-jvm_memory_pool_bytes_used{pool="Survivor Space",} 3949248.0
-jvm_memory_pool_bytes_used{pool="Compressed Class Space",} 8194120.0
-jvm_memory_pool_bytes_used{pool="CodeHeap 'non-profiled nmethods'",} 2.4085888E7
-# HELP jvm_memory_pool_bytes_committed Committed bytes of a given JVM memory pool.
-# TYPE jvm_memory_pool_bytes_committed gauge
-jvm_memory_pool_bytes_committed{pool="CodeHeap 'non-nmethods'",} 2555904.0
-jvm_memory_pool_bytes_committed{pool="Metaspace",} 8.5348352E7
-jvm_memory_pool_bytes_committed{pool="Tenured Gen",} 3.52321536E8
-jvm_memory_pool_bytes_committed{pool="CodeHeap 'profiled nmethods'",} 3.3030144E7
-jvm_memory_pool_bytes_committed{pool="Eden Space",} 1.41033472E8
-jvm_memory_pool_bytes_committed{pool="Survivor Space",} 1.7629184E7
-jvm_memory_pool_bytes_committed{pool="Compressed Class Space",} 9175040.0
-jvm_memory_pool_bytes_committed{pool="CodeHeap 'non-profiled nmethods'",} 2.6017792E7
-# HELP jvm_memory_pool_bytes_max Max bytes of a given JVM memory pool.
-# TYPE jvm_memory_pool_bytes_max gauge
-jvm_memory_pool_bytes_max{pool="CodeHeap 'non-nmethods'",} 5828608.0
-jvm_memory_pool_bytes_max{pool="Metaspace",} -1.0
-jvm_memory_pool_bytes_max{pool="Tenured Gen",} 5.621809152E9
-jvm_memory_pool_bytes_max{pool="CodeHeap 'profiled nmethods'",} 1.22912768E8
-jvm_memory_pool_bytes_max{pool="Eden Space",} 2.248671232E9
-jvm_memory_pool_bytes_max{pool="Survivor Space",} 2.81083904E8
-jvm_memory_pool_bytes_max{pool="Compressed Class Space",} 1.073741824E9
-jvm_memory_pool_bytes_max{pool="CodeHeap 'non-profiled nmethods'",} 1.22916864E8
-# HELP jvm_memory_pool_bytes_init Initial bytes of a given JVM memory pool.
-# TYPE jvm_memory_pool_bytes_init gauge
-jvm_memory_pool_bytes_init{pool="CodeHeap 'non-nmethods'",} 2555904.0
-jvm_memory_pool_bytes_init{pool="Metaspace",} 0.0
-jvm_memory_pool_bytes_init{pool="Tenured Gen",} 3.52321536E8
-jvm_memory_pool_bytes_init{pool="CodeHeap 'profiled nmethods'",} 2555904.0
-jvm_memory_pool_bytes_init{pool="Eden Space",} 1.41033472E8
-jvm_memory_pool_bytes_init{pool="Survivor Space",} 1.7563648E7
-jvm_memory_pool_bytes_init{pool="Compressed Class Space",} 0.0
-jvm_memory_pool_bytes_init{pool="CodeHeap 'non-profiled nmethods'",} 2555904.0
-# HELP jvm_memory_pool_collection_used_bytes Used bytes after last collection of a given JVM memory pool.
-# TYPE jvm_memory_pool_collection_used_bytes gauge
-jvm_memory_pool_collection_used_bytes{pool="Tenured Gen",} 3.853812E7
-jvm_memory_pool_collection_used_bytes{pool="Eden Space",} 0.0
-jvm_memory_pool_collection_used_bytes{pool="Survivor Space",} 3949248.0
-# HELP jvm_memory_pool_collection_committed_bytes Committed after last collection bytes of a given JVM memory pool.
-# TYPE jvm_memory_pool_collection_committed_bytes gauge
-jvm_memory_pool_collection_committed_bytes{pool="Tenured Gen",} 3.52321536E8
-jvm_memory_pool_collection_committed_bytes{pool="Eden Space",} 1.41033472E8
-jvm_memory_pool_collection_committed_bytes{pool="Survivor Space",} 1.7629184E7
-# HELP jvm_memory_pool_collection_max_bytes Max bytes after last collection of a given JVM memory pool.
-# TYPE jvm_memory_pool_collection_max_bytes gauge
-jvm_memory_pool_collection_max_bytes{pool="Tenured Gen",} 5.621809152E9
-jvm_memory_pool_collection_max_bytes{pool="Eden Space",} 2.248671232E9
-jvm_memory_pool_collection_max_bytes{pool="Survivor Space",} 2.81083904E8
-# HELP jvm_memory_pool_collection_init_bytes Initial after last collection bytes of a given JVM memory pool.
-# TYPE jvm_memory_pool_collection_init_bytes gauge
-jvm_memory_pool_collection_init_bytes{pool="Tenured Gen",} 3.52321536E8
-jvm_memory_pool_collection_init_bytes{pool="Eden Space",} 1.41033472E8
-jvm_memory_pool_collection_init_bytes{pool="Survivor Space",} 1.7563648E7
-# HELP jvm_classes_loaded The number of classes that are currently loaded in the JVM
-# TYPE jvm_classes_loaded gauge
-jvm_classes_loaded 11386.0
-# HELP jvm_classes_loaded_total The total number of classes that have been loaded since the JVM has started execution
-# TYPE jvm_classes_loaded_total counter
-jvm_classes_loaded_total 11448.0
-# HELP jvm_classes_unloaded_total The total number of classes that have been unloaded since the JVM has started execution
-# TYPE jvm_classes_unloaded_total counter
-jvm_classes_unloaded_total 62.0
-# HELP jvm_info VM version info
-# TYPE jvm_info gauge
-jvm_info{runtime="OpenJDK Runtime Environment",vendor="Alpine",version="11.0.9+11-alpine-r1",} 1.0
-# HELP jvm_memory_pool_allocated_bytes_created Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously.
-# TYPE jvm_memory_pool_allocated_bytes_created gauge
-jvm_memory_pool_allocated_bytes_created{pool="Eden Space",} 1.651077501662E9
-jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'profiled nmethods'",} 1.651077501657E9
-jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-profiled nmethods'",} 1.651077501662E9
-jvm_memory_pool_allocated_bytes_created{pool="Compressed Class Space",} 1.651077501662E9
-jvm_memory_pool_allocated_bytes_created{pool="Metaspace",} 1.651077501662E9
-jvm_memory_pool_allocated_bytes_created{pool="Tenured Gen",} 1.651077501662E9
-jvm_memory_pool_allocated_bytes_created{pool="Survivor Space",} 1.651077501662E9
-jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-nmethods'",} 1.651077501662E9
-# HELP pdpa_engine_last_execution_time_created Time taken to execute the last APEX policy in seconds.
-# TYPE pdpa_engine_last_execution_time_created gauge
-pdpa_engine_last_execution_time_created{engine_instance_id="NSOApexEngine-1:0.0.1",} 1.651080501294E9
-pdpa_engine_last_execution_time_created{engine_instance_id="NSOApexEngine-4:0.0.1",} 1.651080501295E9
-pdpa_engine_last_execution_time_created{engine_instance_id="NSOApexEngine-3:0.0.1",} 1.651080501295E9
-pdpa_engine_last_execution_time_created{engine_instance_id="NSOApexEngine-0:0.0.1",} 1.651080501294E9
-pdpa_engine_last_execution_time_created{engine_instance_id="NSOApexEngine-2:0.0.1",} 1.651080501294E9
-# HELP pdpa_policy_deployments_created The total number of policy deployments.
-# TYPE pdpa_policy_deployments_created gauge
-pdpa_policy_deployments_created{operation="deploy",status="TOTAL",} 1.651080501289E9
-pdpa_policy_deployments_created{operation="undeploy",status="TOTAL",} 1.651081148331E9
-pdpa_policy_deployments_created{operation="undeploy",status="SUCCESS",} 1.651081148331E9
-pdpa_policy_deployments_created{operation="deploy",status="SUCCESS",} 1.651080501289E9
diff --git a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_metrics_before_72h.txt b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_metrics_before_72h.txt
deleted file mode 100644
index 4a3d8835..00000000
--- a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_metrics_before_72h.txt
+++ /dev/null
@@ -1,175 +0,0 @@
-# HELP jvm_threads_current Current thread count of a JVM
-# TYPE jvm_threads_current gauge
-jvm_threads_current 31.0
-# HELP jvm_threads_daemon Daemon thread count of a JVM
-# TYPE jvm_threads_daemon gauge
-jvm_threads_daemon 16.0
-# HELP jvm_threads_peak Peak thread count of a JVM
-# TYPE jvm_threads_peak gauge
-jvm_threads_peak 31.0
-# HELP jvm_threads_started_total Started thread count of a JVM
-# TYPE jvm_threads_started_total counter
-jvm_threads_started_total 32.0
-# HELP jvm_threads_deadlocked Cycles of JVM-threads that are in deadlock waiting to acquire object monitors or ownable synchronizers
-# TYPE jvm_threads_deadlocked gauge
-jvm_threads_deadlocked 0.0
-# HELP jvm_threads_deadlocked_monitor Cycles of JVM-threads that are in deadlock waiting to acquire object monitors
-# TYPE jvm_threads_deadlocked_monitor gauge
-jvm_threads_deadlocked_monitor 0.0
-# HELP jvm_threads_state Current count of threads by state
-# TYPE jvm_threads_state gauge
-jvm_threads_state{state="BLOCKED",} 0.0
-jvm_threads_state{state="TIMED_WAITING",} 11.0
-jvm_threads_state{state="NEW",} 0.0
-jvm_threads_state{state="RUNNABLE",} 7.0
-jvm_threads_state{state="TERMINATED",} 0.0
-jvm_threads_state{state="WAITING",} 13.0
-# HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds.
-# TYPE jvm_gc_collection_seconds summary
-jvm_gc_collection_seconds_count{gc="Copy",} 2.0
-jvm_gc_collection_seconds_sum{gc="Copy",} 0.059
-jvm_gc_collection_seconds_count{gc="MarkSweepCompact",} 2.0
-jvm_gc_collection_seconds_sum{gc="MarkSweepCompact",} 0.185
-# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds.
-# TYPE process_cpu_seconds_total counter
-process_cpu_seconds_total 38.14
-# HELP process_start_time_seconds Start time of the process since unix epoch in seconds.
-# TYPE process_start_time_seconds gauge
-process_start_time_seconds 1.651077494162E9
-# HELP process_open_fds Number of open file descriptors.
-# TYPE process_open_fds gauge
-process_open_fds 355.0
-# HELP process_max_fds Maximum number of open file descriptors.
-# TYPE process_max_fds gauge
-process_max_fds 1048576.0
-# HELP process_virtual_memory_bytes Virtual memory size in bytes.
-# TYPE process_virtual_memory_bytes gauge
-process_virtual_memory_bytes 1.0070171648E10
-# HELP process_resident_memory_bytes Resident memory size in bytes.
-# TYPE process_resident_memory_bytes gauge
-process_resident_memory_bytes 2.9052928E8
-# HELP jvm_buffer_pool_used_bytes Used bytes of a given JVM buffer pool.
-# TYPE jvm_buffer_pool_used_bytes gauge
-jvm_buffer_pool_used_bytes{pool="mapped",} 0.0
-jvm_buffer_pool_used_bytes{pool="direct",} 187432.0
-# HELP jvm_buffer_pool_capacity_bytes Bytes capacity of a given JVM buffer pool.
-# TYPE jvm_buffer_pool_capacity_bytes gauge
-jvm_buffer_pool_capacity_bytes{pool="mapped",} 0.0
-jvm_buffer_pool_capacity_bytes{pool="direct",} 187432.0
-# HELP jvm_buffer_pool_used_buffers Used buffers of a given JVM buffer pool.
-# TYPE jvm_buffer_pool_used_buffers gauge
-jvm_buffer_pool_used_buffers{pool="mapped",} 0.0
-jvm_buffer_pool_used_buffers{pool="direct",} 9.0
-# HELP jvm_memory_pool_allocated_bytes_total Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously.
-# TYPE jvm_memory_pool_allocated_bytes_total counter
-jvm_memory_pool_allocated_bytes_total{pool="Eden Space",} 3.035482E8
-jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'profiled nmethods'",} 9772800.0
-jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-profiled nmethods'",} 2152064.0
-jvm_memory_pool_allocated_bytes_total{pool="Compressed Class Space",} 4912232.0
-jvm_memory_pool_allocated_bytes_total{pool="Metaspace",} 4.1337744E7
-jvm_memory_pool_allocated_bytes_total{pool="Tenured Gen",} 2.8136056E7
-jvm_memory_pool_allocated_bytes_total{pool="Survivor Space",} 6813240.0
-jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-nmethods'",} 1272320.0
-# HELP pdpa_policy_deployments_total The total number of policy deployments.
-# TYPE pdpa_policy_deployments_total counter
-# HELP jvm_memory_objects_pending_finalization The number of objects waiting in the finalizer queue.
-# TYPE jvm_memory_objects_pending_finalization gauge
-jvm_memory_objects_pending_finalization 0.0
-# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area.
-# TYPE jvm_memory_bytes_used gauge
-jvm_memory_bytes_used{area="heap",} 9.5900224E7
-jvm_memory_bytes_used{area="nonheap",} 6.0285288E7
-# HELP jvm_memory_bytes_committed Committed (bytes) of a given JVM memory area.
-# TYPE jvm_memory_bytes_committed gauge
-jvm_memory_bytes_committed{area="heap",} 5.10984192E8
-jvm_memory_bytes_committed{area="nonheap",} 6.3922176E7
-# HELP jvm_memory_bytes_max Max (bytes) of a given JVM memory area.
-# TYPE jvm_memory_bytes_max gauge
-jvm_memory_bytes_max{area="heap",} 8.151564288E9
-jvm_memory_bytes_max{area="nonheap",} -1.0
-# HELP jvm_memory_bytes_init Initial bytes of a given JVM memory area.
-# TYPE jvm_memory_bytes_init gauge
-jvm_memory_bytes_init{area="heap",} 5.28482304E8
-jvm_memory_bytes_init{area="nonheap",} 7667712.0
-# HELP jvm_memory_pool_bytes_used Used bytes of a given JVM memory pool.
-# TYPE jvm_memory_pool_bytes_used gauge
-jvm_memory_pool_bytes_used{pool="CodeHeap 'non-nmethods'",} 1272320.0
-jvm_memory_pool_bytes_used{pool="Metaspace",} 4.1681312E7
-jvm_memory_pool_bytes_used{pool="Tenured Gen",} 2.8136056E7
-jvm_memory_pool_bytes_used{pool="CodeHeap 'profiled nmethods'",} 1.0006912E7
-jvm_memory_pool_bytes_used{pool="Eden Space",} 6.5005376E7
-jvm_memory_pool_bytes_used{pool="Survivor Space",} 2758792.0
-jvm_memory_pool_bytes_used{pool="Compressed Class Space",} 4913352.0
-jvm_memory_pool_bytes_used{pool="CodeHeap 'non-profiled nmethods'",} 2411392.0
-# HELP jvm_memory_pool_bytes_committed Committed bytes of a given JVM memory pool.
-# TYPE jvm_memory_pool_bytes_committed gauge
-jvm_memory_pool_bytes_committed{pool="CodeHeap 'non-nmethods'",} 2555904.0
-jvm_memory_pool_bytes_committed{pool="Metaspace",} 4.32128E7
-jvm_memory_pool_bytes_committed{pool="Tenured Gen",} 3.52321536E8
-jvm_memory_pool_bytes_committed{pool="CodeHeap 'profiled nmethods'",} 1.0092544E7
-jvm_memory_pool_bytes_committed{pool="Eden Space",} 1.41033472E8
-jvm_memory_pool_bytes_committed{pool="Survivor Space",} 1.7629184E7
-jvm_memory_pool_bytes_committed{pool="Compressed Class Space",} 5505024.0
-jvm_memory_pool_bytes_committed{pool="CodeHeap 'non-profiled nmethods'",} 2555904.0
-# HELP jvm_memory_pool_bytes_max Max bytes of a given JVM memory pool.
-# TYPE jvm_memory_pool_bytes_max gauge
-jvm_memory_pool_bytes_max{pool="CodeHeap 'non-nmethods'",} 5828608.0
-jvm_memory_pool_bytes_max{pool="Metaspace",} -1.0
-jvm_memory_pool_bytes_max{pool="Tenured Gen",} 5.621809152E9
-jvm_memory_pool_bytes_max{pool="CodeHeap 'profiled nmethods'",} 1.22912768E8
-jvm_memory_pool_bytes_max{pool="Eden Space",} 2.248671232E9
-jvm_memory_pool_bytes_max{pool="Survivor Space",} 2.81083904E8
-jvm_memory_pool_bytes_max{pool="Compressed Class Space",} 1.073741824E9
-jvm_memory_pool_bytes_max{pool="CodeHeap 'non-profiled nmethods'",} 1.22916864E8
-# HELP jvm_memory_pool_bytes_init Initial bytes of a given JVM memory pool.
-# TYPE jvm_memory_pool_bytes_init gauge
-jvm_memory_pool_bytes_init{pool="CodeHeap 'non-nmethods'",} 2555904.0
-jvm_memory_pool_bytes_init{pool="Metaspace",} 0.0
-jvm_memory_pool_bytes_init{pool="Tenured Gen",} 3.52321536E8
-jvm_memory_pool_bytes_init{pool="CodeHeap 'profiled nmethods'",} 2555904.0
-jvm_memory_pool_bytes_init{pool="Eden Space",} 1.41033472E8
-jvm_memory_pool_bytes_init{pool="Survivor Space",} 1.7563648E7
-jvm_memory_pool_bytes_init{pool="Compressed Class Space",} 0.0
-jvm_memory_pool_bytes_init{pool="CodeHeap 'non-profiled nmethods'",} 2555904.0
-# HELP jvm_memory_pool_collection_used_bytes Used bytes after last collection of a given JVM memory pool.
-# TYPE jvm_memory_pool_collection_used_bytes gauge
-jvm_memory_pool_collection_used_bytes{pool="Tenured Gen",} 2.8136056E7
-jvm_memory_pool_collection_used_bytes{pool="Eden Space",} 0.0
-jvm_memory_pool_collection_used_bytes{pool="Survivor Space",} 2758792.0
-# HELP jvm_memory_pool_collection_committed_bytes Committed after last collection bytes of a given JVM memory pool.
-# TYPE jvm_memory_pool_collection_committed_bytes gauge
-jvm_memory_pool_collection_committed_bytes{pool="Tenured Gen",} 3.52321536E8
-jvm_memory_pool_collection_committed_bytes{pool="Eden Space",} 1.41033472E8
-jvm_memory_pool_collection_committed_bytes{pool="Survivor Space",} 1.7629184E7
-# HELP jvm_memory_pool_collection_max_bytes Max bytes after last collection of a given JVM memory pool.
-# TYPE jvm_memory_pool_collection_max_bytes gauge
-jvm_memory_pool_collection_max_bytes{pool="Tenured Gen",} 5.621809152E9
-jvm_memory_pool_collection_max_bytes{pool="Eden Space",} 2.248671232E9
-jvm_memory_pool_collection_max_bytes{pool="Survivor Space",} 2.81083904E8
-# HELP jvm_memory_pool_collection_init_bytes Initial after last collection bytes of a given JVM memory pool.
-# TYPE jvm_memory_pool_collection_init_bytes gauge
-jvm_memory_pool_collection_init_bytes{pool="Tenured Gen",} 3.52321536E8
-jvm_memory_pool_collection_init_bytes{pool="Eden Space",} 1.41033472E8
-jvm_memory_pool_collection_init_bytes{pool="Survivor Space",} 1.7563648E7
-# HELP jvm_classes_loaded The number of classes that are currently loaded in the JVM
-# TYPE jvm_classes_loaded gauge
-jvm_classes_loaded 7378.0
-# HELP jvm_classes_loaded_total The total number of classes that have been loaded since the JVM has started execution
-# TYPE jvm_classes_loaded_total counter
-jvm_classes_loaded_total 7378.0
-# HELP jvm_classes_unloaded_total The total number of classes that have been unloaded since the JVM has started execution
-# TYPE jvm_classes_unloaded_total counter
-jvm_classes_unloaded_total 0.0
-# HELP jvm_info VM version info
-# TYPE jvm_info gauge
-jvm_info{runtime="OpenJDK Runtime Environment",vendor="Alpine",version="11.0.9+11-alpine-r1",} 1.0
-# HELP jvm_memory_pool_allocated_bytes_created Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously.
-# TYPE jvm_memory_pool_allocated_bytes_created gauge
-jvm_memory_pool_allocated_bytes_created{pool="Eden Space",} 1.651077501662E9
-jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'profiled nmethods'",} 1.651077501657E9
-jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-profiled nmethods'",} 1.651077501662E9
-jvm_memory_pool_allocated_bytes_created{pool="Compressed Class Space",} 1.651077501662E9
-jvm_memory_pool_allocated_bytes_created{pool="Metaspace",} 1.651077501662E9
-jvm_memory_pool_allocated_bytes_created{pool="Tenured Gen",} 1.651077501662E9
-jvm_memory_pool_allocated_bytes_created{pool="Survivor Space",} 1.651077501662E9
-jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-nmethods'",} 1.651077501662E9
diff --git a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_performance_results.png b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_performance_results.png
deleted file mode 100644
index 7ddb0a67..00000000
--- a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_performance_results.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_stability_results.png b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_stability_results.png
deleted file mode 100644
index 8dddd470..00000000
--- a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_stability_results.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_top_after_72h.png b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_top_after_72h.png
deleted file mode 100644
index dafc7002..00000000
--- a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_top_after_72h.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_top_before_72h.png b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_top_before_72h.png
deleted file mode 100644
index 2e2e7574..00000000
--- a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_top_before_72h.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/apex-s3p.rst b/docs/development/devtools/testing/s3p/apex-s3p.rst
deleted file mode 100644
index 0bbb0363..00000000
--- a/docs/development/devtools/testing/s3p/apex-s3p.rst
+++ /dev/null
@@ -1,207 +0,0 @@
-.. This work is licensed under a
-.. Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-.. _apex-s3p-label:
-
-.. toctree::
- :maxdepth: 2
-
-Policy APEX PDP component
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Both the Stability and the Performance tests were executed in a full Policy Framework deployment in a VM.
-
-Setup Details
-+++++++++++++
-
-Deploying ONAP using OOM
-------------------------
-
-APEX-PDP along with all policy components are deployed as part of a full Policy Framework deployment.
-At a minimum, the following components are needed: policy, mariadb-galera, prometheus and kafka.
-
-The S3P tests use the ./run-s3p-tests script in the apex component. This script sets up the microk8s environment,
-deploys policy and prometheus, exposes the services so they can be reached by JMeter, installs JMeter, and runs the
-tests based on the arguments provided.
-
-Set up policy-models-simulator
-------------------------------
-
-Kafka is deployed and is used during policy execution.
- The simulator configurations used are available in the apex-pdp repository:
- testsuites/apex-pdp-stability/src/main/resources/simulatorConfig/
-
-The published port 29092 is used in JMeter to reach Kafka.
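For manual verification, the notification topic can be inspected with the standard Kafka console consumer. This is a sketch only; the broker address and the assumption that the Kafka console scripts are on the PATH are placeholders, not taken from the test setup:

```shell
# Sketch: count messages received on the APEX-CL-MGT notification topic.
# The bootstrap address is a placeholder for the published Kafka port.
BOOTSTRAP="localhost:29092"
TOPIC="APEX-CL-MGT"
CMD="kafka-console-consumer.sh --bootstrap-server $BOOTSTRAP --topic $TOPIC --from-beginning --timeout-ms 10000"
# Run against a live broker; here the command is only printed:
echo "$CMD | wc -l"
```

Piping the consumer output through `wc -l` gives a rough message count to compare against the totals expected by the test plan.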
-
-JMeter Tests
-------------
-
-Two APEX policies are executed in the APEX-PDP engine, and are triggered by multiple threads during the tests.
-Both tests were run via JMeter.
-
- Stability test script is available in apex-pdp repository:
- testsuites/apex-pdp-stability/src/main/resources/apexPdpStabilityTestPlan.jmx
-
- Performance test script is available in apex-pdp repository:
- testsuites/performance/performance-benchmark-test/src/main/resources/apexPdpPerformanceTestPlan.jmx
-
-.. Note::
- Policy executions are validated in a stricter fashion during the tests.
- There are test cases where up to 80 events are expected on the Kafka topic.
- Kafka is used to keep it simple and avoid any message pickup timing related issues.
-
-Stability Test of APEX-PDP
-++++++++++++++++++++++++++
-
-Test Plan
----------
-
-The 72-hour stability test ran the following steps.
-
-Setup Phase
-"""""""""""
-
-Policies are created and deployed to APEX-PDP during this phase. Only one thread is in action and this step is done only once.
-
-- **Create Policy onap.policies.apex.Simplecontrolloop** - creates the first APEX policy using policy/api component.
- This is a sample policy used for PNF testing.
-- **Create Policy onap.policies.apex.Example** - creates the second APEX policy using policy/api component.
- This is a sample policy used for VNF testing.
-- **Deploy Policies** - deploy both of the created policies to APEX-PDP using the policy/pap component
-
-Main Phase
-""""""""""
-
-Once the policies are created and deployed to APEX-PDP by the setup thread, five threads execute the below tests for 72 hours.
-
-- **Healthcheck** - checks the health status of APEX-PDP
-- **Prometheus Metrics** - checks that APEX-PDP is exposing prometheus metrics
-- **Test Simplecontrolloop policy success case** - Send a trigger event to *unauthenticated.DCAE_CL_OUTPUT* Kafka topic.
- If the policy execution is successful, 3 different notification events are sent to *APEX-CL-MGT* topic by each one of the 5 threads.
- So, it is checked if 15 notification messages are received in total on *APEX-CL-MGT* topic with the relevant messages.
-- **Test Simplecontrolloop policy failure case** - Send a trigger event with invalid pnfName to *unauthenticated.DCAE_CL_OUTPUT* Kafka topic.
- The policy execution is expected to fail due to AAI failure response. 2 notification events are expected on *APEX-CL-MGT* topic by a thread in this case.
- It is checked if 10 notification messages are received in total on *APEX-CL-MGT* topic with the relevant messages.
-- **Test Example policy success case** - Send a trigger event to *unauthenticated.DCAE_POLICY_EXAMPLE_OUTPUT* Kafka topic.
- If the policy execution is successful, 4 different notification events are sent to *APEX-CL-MGT* topic by each one of the 5 threads.
- So, it is checked if 20 notification messages are received in total on *APEX-CL-MGT* topic with the relevant messages.
-- **Test Example policy failure case** - Send a trigger event with invalid vnfName to *unauthenticated.DCAE_POLICY_EXAMPLE_OUTPUT* Kafka topic.
- The policy execution is expected to fail due to AAI failure response. 2 notification events are expected on *APEX-CL-MGT* topic by a thread in this case.
- So, it is checked if 10 notification messages are received in total on *APEX-CL-MGT* topic with the relevant messages.
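The expected totals in the steps above follow from multiplying the notification events emitted per thread by the thread count. A minimal sketch of that arithmetic; the `expected_total` helper is illustrative, not part of the JMeter plan:

```shell
#!/bin/sh
# Expected APEX-CL-MGT message totals: events per thread x thread count.
THREADS=5

expected_total() {
  # $1 = notification events produced by one thread for the scenario
  echo $(( THREADS * $1 ))
}

expected_total 3   # Simplecontrolloop success: 15 messages
expected_total 2   # Simplecontrolloop failure: 10 messages
expected_total 4   # Example policy success:    20 messages
expected_total 2   # Example policy failure:    10 messages
```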
-
-Teardown Phase
-""""""""""""""
-
-Policies are undeployed from APEX-PDP and deleted during this phase.
-Only one thread is in action and this step is done only once after the Main phase is complete.
-
-- **Undeploy Policies** - Undeploy both the policies from APEX-PDP using policy/pap component
-- **Delete Policy onap.policies.apex.Simplecontrolloop** - delete the first APEX policy using policy/api component.
-- **Delete Policy onap.policies.apex.Example** - delete the second APEX policy also using policy/api component.
-
-Test Configuration
-------------------
-
-The following steps can be used to configure the parameters of the test plan.
-
-- **HTTP Authorization Manager** - used to store user/password authentication details.
-- **HTTP Header Manager** - used to store headers which will be used for making HTTP requests.
-- **User Defined Variables** - used to store the following user-defined parameters.
-
-=================== ===============================================================================
- **Name** **Description**
-=================== ===============================================================================
- HOSTNAME IP Address or host name to access the components
- PAP_PORT Port number of PAP for making REST API calls such as deploy/undeploy of policy
- API_PORT Port number of API for making REST API calls such as create/delete of policy
- APEX_PORT Port number of APEX for making REST API calls such as healthcheck/metrics
- SIM_HOST IP Address or hostname running policy-models-simulator
- KAFKA_PORT Port number of Kafka bootstrap server for sending message events
- wait Wait time if required after a request (in milliseconds)
- threads Number of threads to run test cases in parallel
- threadsTimeOutInMs Synchronization timer for threads running in parallel (in milliseconds)
-=================== ===============================================================================
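If the user-defined variables in the table above are wired to JMeter properties via the `__P()` function, they can be overridden on the command line with `-J` options. A sketch with placeholder values; the host addresses and the property wiring are assumptions, not taken from the JMX file:

```shell
# Sketch: override test-plan parameters via JMeter -J properties.
# All values below are placeholders for a local deployment.
JMETER_PROPS="-JHOSTNAME=10.0.0.1 -JPAP_PORT=30003 -JAPI_PORT=30002"
JMETER_PROPS="$JMETER_PROPS -JAPEX_PORT=30001 -JSIM_HOST=10.0.0.1"
JMETER_PROPS="$JMETER_PROPS -JKAFKA_PORT=29092 -Jthreads=5"
# Print the full command rather than launching JMeter here:
echo "nohup jmeter -n -t apexPdpStabilityTestPlan.jmx $JMETER_PROPS &"
```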
-
-Run Test
---------
-
-The test was run in the background via "nohup" to prevent it from being interrupted.
-
-Test Results
-------------
-
-**Summary**
-
-The stability test plan ran for 72 hours. There were no failures during the 72-hour test.
-
-
-**Test Statistics**
-
-======================= ================= ================== ==================================
-**Total # of requests** **Success %** **Error %** **Average time taken per request**
-======================= ================= ================== ==================================
-312366 100 % 0 % 4148ms
-======================= ================= ================== ==================================
-
-**JMeter Screenshot**
-
-.. image:: apex-s3p-results/apex_stability_results.png
-
-.. Note::
- These results show a huge dip in the number of requests compared to the previous release of APEX-PDP.
- Further investigation and improvement are needed in the coming release.
-
-Performance Test of APEX-PDP
-++++++++++++++++++++++++++++
-
-Introduction
-------------
-
-The performance test of APEX-PDP is similar to the stability test, but more extreme, using a higher thread count.
-
-Setup Details
--------------
-
-The performance test is performed on a setup similar to that of the stability test.
-
-
-Test Plan
----------
-
-The performance test plan is the same as the stability test plan above, except for the differences listed below.
-
-- Increase the number of threads used in the Main Phase from 5 to 20.
-- Reduce the test time to 2 hours.
-
-Run Test
---------
-
-The test was run in the background via "nohup" to prevent it from being interrupted.
-
-Test Results
-------------
-
-Test results are shown below.
-
-**Test Statistics**
-
-======================= ================= ================== ==================================
-**Total # of requests** **Success %** **Error %** **Average time taken per request**
-======================= ================= ================== ==================================
-344624 100 % 0 % 4178 ms
-======================= ================= ================== ==================================
-
-**JMeter Screenshot**
-
-.. image:: apex-s3p-results/apex_performance_results.png
-
-.. Note::
- These results show a huge dip in the number of requests compared to the previous release of APEX-PDP.
- Further investigation and improvement are needed in the coming release.
-
-Summary
-+++++++
-
-Multiple policies were executed in a multi-threaded fashion for both stability and performance tests.
-Both tests showed a dip in performance and stability.
diff --git a/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-distribution_J.png b/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-distribution_J.png
deleted file mode 100644
index 6d6033ae..00000000
--- a/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-distribution_J.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-distribution_performance_J.png b/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-distribution_performance_J.png
deleted file mode 100644
index aa2fd621..00000000
--- a/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-distribution_performance_J.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-overtime_J.png b/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-overtime_J.png
deleted file mode 100644
index aa40dd94..00000000
--- a/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-overtime_J.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-overtime_performance_J.png b/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-overtime_performance_J.png
deleted file mode 100644
index 4ba5dd75..00000000
--- a/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-overtime_performance_J.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/api-s3p-results/api-s3p-jm-1_J.png b/docs/development/devtools/testing/s3p/api-s3p-results/api-s3p-jm-1_J.png
deleted file mode 100644
index 8dfbf55f..00000000
--- a/docs/development/devtools/testing/s3p/api-s3p-results/api-s3p-jm-1_J.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/api-s3p-results/api-s3p-jm-2_J.png b/docs/development/devtools/testing/s3p/api-s3p-results/api-s3p-jm-2_J.png
deleted file mode 100644
index 68b654c2..00000000
--- a/docs/development/devtools/testing/s3p/api-s3p-results/api-s3p-jm-2_J.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/api-s3p-results/api_stat_after_72h.png b/docs/development/devtools/testing/s3p/api-s3p-results/api_stat_after_72h.png
deleted file mode 100644
index 3ecef541..00000000
--- a/docs/development/devtools/testing/s3p/api-s3p-results/api_stat_after_72h.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/api-s3p-results/api_stat_before_72h.png b/docs/development/devtools/testing/s3p/api-s3p-results/api_stat_before_72h.png
deleted file mode 100644
index 927ab6a1..00000000
--- a/docs/development/devtools/testing/s3p/api-s3p-results/api_stat_before_72h.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/api-s3p.rst b/docs/development/devtools/testing/s3p/api-s3p.rst
deleted file mode 100644
index c3bbc9e9..00000000
--- a/docs/development/devtools/testing/s3p/api-s3p.rst
+++ /dev/null
@@ -1,210 +0,0 @@
-.. This work is licensed under a
-.. Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-.. _api-s3p-label:
-
-.. toctree::
- :maxdepth: 2
-
-Policy API S3P Tests
-####################
-
-
-72 Hours Stability Test of Policy API
-+++++++++++++++++++++++++++++++++++++
-
-Introduction
-------------
-
-The 72-hour stability test of policy API verifies the stability of the running policy design API REST
-service by ingesting a steady flow of transactions in a multi-threaded fashion to
-simulate multiple clients' behaviours.
-All the transaction flows are initiated from a test client server running JMeter for the duration of 72 hours.
-
-Setup Details
--------------
-
-The stability test was performed on a default Policy docker installation in the Nordix Lab environment.
-JMeter was installed on a separate VM to inject the traffic defined in the
-`API stability script
-<https://git.onap.org/policy/api/tree/testsuites/stability/src/main/resources/testplans/policy_api_stability.jmx>`_
-with the following command:
-
-.. code-block:: bash
-
- nohup apache-jmeter-5.6.2/bin/jmeter -n -t policy_api_stability.jmx -l stabilityTestResultsPolicyApi.jtl &
-
-The test was run in the background via "nohup" and "&" to prevent it from being interrupted.
-
-Test Plan
----------
-
-The 72+ hour stability test runs the following steps sequentially
-in multi-threaded loops. The thread count is set to 5 to simulate 5 API clients'
-behaviours (they can be calling the same policy CRUD API simultaneously).
-Each thread creates a different version of the policy types and policies to not
-interfere with one another while operating simultaneously. The point version
-of each entity is set to the running thread number.
-
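The per-thread point version described above can be sketched as follows; the variable names are illustrative, not taken from the JMX file:

```shell
# Sketch: each JMeter thread uses its own point version so that
# parallel threads never operate on the same entity version.
THREAD_NUM=3                  # e.g. the running thread number
VERSION="6.0.${THREAD_NUM}"   # point version is the thread number
echo "$VERSION"
```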
-**Setup Thread (will be running only once)**
-
-- Get policy-api Healthcheck
-- Get Preloaded Policy Types
-
-**API Test Flow (5 threads running the same steps in the same loop)**
-
-- Create a new Monitoring Policy Type with Version 6.0.#
-- Create a new Monitoring Policy Type with Version 7.0.#
-- Create a new Optimization Policy Type with Version 6.0.#
-- Create a new Guard Policy Type with Version 6.0.#
-- Create a new Native APEX Policy Type with Version 6.0.#
-- Create a new Native Drools Policy Type with Version 6.0.#
-- Create a new Native XACML Policy Type with Version 6.0.#
-- Get All Policy Types
-- Get All Versions of the new Monitoring Policy Type
-- Get Version 6.0.# of the new Monitoring Policy Type
-- Get Version 6.0.# of the new Optimization Policy Type
-- Get Version 6.0.# of the new Guard Policy Type
-- Get Version 6.0.# of the new Native APEX Policy Type
-- Get Version 6.0.# of the new Native Drools Policy Type
-- Get Version 6.0.# of the new Native XACML Policy Type
-- Get the Latest Version of the New Monitoring Policy Type
-- Create Version 6.0.# of Node Template
-- Create Monitoring Policy Ver 6.0.# w/Monitoring Policy Type Ver 6.0.#
-- Create Monitoring Policy Ver 7.0.# w/Monitoring Policy Type Ver 7.0.#
-- Create Optimization Policy Ver 6.0.# w/Optimization Policy Type Ver 6.0.#
-- Create Guard Policy Ver 6.0.# w/Guard Policy Type Ver 6.0.#
-- Create Native APEX Policy Ver 6.0.# w/Native APEX Policy Type Ver 6.0.#
-- Create Native Drools Policy Ver 6.0.# w/Native Drools Policy Type Ver 6.0.#
-- Create Native XACML Policy Ver 6.0.# w/Native XACML Policy Type Ver 6.0.#
-- Create Version 6.0.# of PNF Example Policy with Metadata
-- Get Node Template
-- Get All TCA Policies
-- Get All Versions of Monitoring Policy Type
-- Get Version 6.0.# of the new Monitoring Policy
-- Get Version 6.0.# of the new Optimization Policy
-- Get Version 6.0.# of the new Guard Policy
-- Get Version 6.0.# of the new Native APEX Policy
-- Get Version 6.0.# of the new Native Drools Policy
-- Get Version 6.0.# of the new Native XACML Policy
-- Get the Latest Version of the new Monitoring Policy
-- Delete Version 6.0.# of the new Monitoring Policy
-- Delete Version 7.0.# of the new Monitoring Policy
-- Delete Version 6.0.# of the new OptimizationPolicy
-- Delete Version 6.0.# of the new Guard Policy
-- Delete Version 6.0.# of the new Native APEX Policy
-- Delete Version 6.0.# of PNF Example Policy having Metadata
-- Delete Version 6.0.# of the new Native Drools Policy
-- Delete Version 6.0.# of the new Native XACML Policy
-- Delete Monitoring Policy Type with Version 6.0.#
-- Delete Monitoring Policy Type with Version 7.0.#
-- Delete Optimization Policy Type with Version 6.0.#
-- Delete Guard Policy Type with Version 6.0.#
-- Delete Native APEX Policy Type with Version 6.0.#
-- Delete Native Drools Policy Type with Version 6.0.#
-- Delete Native XACML Policy Type with Version 6.0.#
-- Delete Node Template
-- Get Policy Metrics
-
-**TearDown Thread (will only be running after API Test Flow is completed)**
-
-- Get policy-api Healthcheck
-- Get Preloaded Policy Types
-
-
-Test Results
-------------
-
-**Summary**
-
-No errors were found during the 72 hours of the Policy API stability run.
-The load was performed against a non-tweaked Policy docker deployment.
-
-**Test Statistics**
-
-======================= ============= =========== =============================== =============================== ===============================
-**Total # of requests** **Success %** **TPS** **Avg. time taken per request** **Min. time taken per request** **Max. time taken per request**
-======================= ============= =========== =============================== =============================== ===============================
- 214617 100% 2.8 6028 ms 206 ms 115153 ms
-======================= ============= =========== =============================== =============================== ===============================
-
-.. image:: api-s3p-results/api-s3p-jm-1_J.png
-
-**JMeter Results**
-
-The following graphs show the response time distributions. The "Get Policy Types" API calls are the most expensive,
-averaging a response time of more than 8.6 seconds.
-
-.. image:: api-s3p-results/api-response-time-distribution_J.png
-.. image:: api-s3p-results/api-response-time-overtime_J.png
-
-**Memory and CPU usage**
-
-The memory and CPU usage can be monitored by running the "docker stats" command against the policy-api container.
-A snapshot is taken before and after test execution to monitor the changes in resource utilization.
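A one-shot, non-streaming snapshot can be taken with `docker stats --no-stream`; the container name `policy-api` is an assumption for a default deployment:

```shell
# Sketch: capture a single CPU/memory snapshot of the policy-api container.
FMT='table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}'
CMD="docker stats --no-stream --format \"$FMT\" policy-api"
# Run this on the host before and after the test; here it is only printed:
echo "$CMD"
```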
-
-Memory and CPU usage before test execution:
-
-.. image:: api-s3p-results/api_stat_before_72h.png
-
-Memory and CPU usage after test execution:
-
-.. image:: api-s3p-results/api_stat_after_72h.png
-
-
-Performance Test of Policy API
-++++++++++++++++++++++++++++++
-
-Introduction
-------------
-
-The performance test of policy-api measures the min/avg/max processing time and REST call throughput when the number of requests is large enough to saturate resources and expose the bottleneck.
-
-Setup Details
--------------
-
-The performance test was performed on a default Policy docker installation in the Nordix Lab environment.
-JMeter was installed on a separate VM to inject the traffic defined in the
-`API performance script
-<https://git.onap.org/policy/api/tree/testsuites/performance/src/main/resources/testplans/policy_api_performance.jmx>`_
-with the following command:
-
-.. code-block:: bash
-
- nohup apache-jmeter-5.6.2/bin/jmeter -n -t policy_api_performance.jmx -l performanceTestResultsPolicyApi.jtl &
-
-The test was run in the background via "nohup" and "&" to prevent it from being interrupted.
-
-Test Plan
----------
-
-The performance test plan is the same as the stability test plan above.
-The only differences are that the number of threads is increased to 20 (simulating 20 users' behaviours at the same time) and the test time is reduced to 2.5 hours.
-
-Run Test
---------
-
-Running the performance test is the same as running the stability test: launch JMeter pointing to the corresponding *.jmx* test plan. The *API_HOST* and *API_PORT* are already set up in the *.jmx* file.
-
-**Test Statistics**
-
-======================= ============= =========== =============================== =============================== ===============================
-**Total # of requests** **Success %** **TPS** **Avg. time taken per request** **Min. time taken per request** **Max. time taken per request**
-======================= ============= =========== =============================== =============================== ===============================
- 1671 99.7% 6.3 108379 ms 223 ms 1921999 ms
-======================= ============= =========== =============================== =============================== ===============================
-
-.. image:: api-s3p-results/api-s3p-jm-2_J.png
-
-Test Results
-------------
-
-The following graphs show the response time distributions.
-
-.. image:: api-s3p-results/api-response-time-distribution_performance_J.png
-.. image:: api-s3p-results/api-response-time-overtime_performance_J.png
-
-
-
-
diff --git a/docs/development/devtools/testing/s3p/clamp-s3p-results/Stability_after_stats.png b/docs/development/devtools/testing/s3p/clamp-s3p-results/Stability_after_stats.png
deleted file mode 100644
index e53641f5..00000000
--- a/docs/development/devtools/testing/s3p/clamp-s3p-results/Stability_after_stats.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_performance_jmeter.png b/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_performance_jmeter.png
deleted file mode 100644
index 38b6c000..00000000
--- a/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_performance_jmeter.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_stability_jmeter.png b/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_stability_jmeter.png
deleted file mode 100644
index bd9d0e84..00000000
--- a/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_stability_jmeter.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_stability_table.png b/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_stability_table.png
deleted file mode 100644
index 94402c8f..00000000
--- a/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_stability_table.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/clamp-s3p.rst b/docs/development/devtools/testing/s3p/clamp-s3p.rst
deleted file mode 100644
index 2cf3e236..00000000
--- a/docs/development/devtools/testing/s3p/clamp-s3p.rst
+++ /dev/null
@@ -1,224 +0,0 @@
-.. This work is licensed under a
-.. Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-.. _acm-s3p-label:
-
-.. toctree::
- :maxdepth: 2
-
-Policy Clamp Automation Composition
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Both the Performance and the Stability tests were executed by performing requests
-against acm components installed as docker images in a local environment. These tests were all
-performed on an Ubuntu VM with 32GB of memory, 16 CPUs and 50GB of disk space.
-
-
-ACM Deployment
-++++++++++++++
-
-To keep execution of the s3p tests as close to automatic as possible,
-a script is executed that performs the following:
-
-- Installs a microk8s Kubernetes environment
-- Brings up the policy components
-- Checks that the components are successfully up and running before proceeding
-- Installs Java 17
-- Installs JMeter locally and configures it
-- Runs the stability or performance tests, as specified
-
-
-The remainder of this document outlines how to run the tests and presents the test results.
-
-Common Setup
-++++++++++++
-The common setup for the performance and stability tests is automated, carried out by the script **testsuites/run-s3p-test.sh**.
-
-Clone the policy-clamp repo to access the test scripts:
-
-.. code-block:: sh
-
- git clone https://gerrit.onap.org/r/policy/clamp
-
-Stability Test of acm components
-++++++++++++++++++++++++++++++++
-
-Test Plan
----------
-The 72-hour stability test ran the following steps sequentially in a single-threaded loop.
-
-- **Commission Automation Composition Definitions** - Commissions the ACM Definitions
-- **Register Participants** - Registers the presence of participants in the acm database
-- **Prime AC definition** - Primes the AC Definition to the participants
-- **Instantiate acm** - Instantiate the acm instance
-- **DEPLOY the ACM instance** - change the state of the acm to DEPLOYED
-- **Check instance state** - check the current state of instance and that it is DEPLOYED
-- **UNDEPLOY the ACM instance** - change the state of the ACM to UNDEPLOYED
-- **Check instance state** - check the current state of instance and that it is UNDEPLOYED
-- **Delete instance** - delete the instance from all participants and ACM db
-- **DEPRIME ACM definitions** - DEPRIME ACM definitions from participants
-- **Delete ACM Definition** - delete the ACM definition on runtime
-
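The "Check instance state" steps above poll the runtime REST API. A hedged sketch of the request; the endpoint path, port and UUIDs are placeholders based on a default local deployment and may differ:

```shell
# Sketch: build the URL used to poll an ACM instance's state.
RUNTIME="localhost:30007"                              # assumed runtime NodePort
COMPOSITION_ID="00000000-0000-0000-0000-000000000001"  # placeholder UUID
INSTANCE_ID="00000000-0000-0000-0000-000000000002"     # placeholder UUID
URL="http://${RUNTIME}/onap/policy/clamp/acm/v2/compositions/${COMPOSITION_ID}/instances/${INSTANCE_ID}"
# Query with basic auth (credentials redacted); here the command is printed only:
echo "curl -s -u 'runtimeUser:********' $URL"
```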
-This runs for 72 hours. Test results are written to the **testsuites/automated-performance/s3pTestResults.jtl**
-file. JMeter logs are present in **testsuites/automated-performance/jmeter.log** and
-**testsuites/automated-performance/nohup.out**.
-
-Run Test
---------
-
-The script from the setup section also runs the tests; a single execution does it all.
-
-.. code-block:: sh
-
- ./run-s3p-test.sh run stability
-
-Once the test execution is completed, the results are present in the **automate-performance/s3pTestResults.jtl** file.
-
-This file can be imported into the Jmeter GUI for visualization. The below results are tabulated from the GUI.
-
-Test Results
-------------
-
-**Summary**
-
-The stability test plan ran for 72 hours.
-
-**Test Statistics**
-
-======================= ================= ================== ==================================
-**Total # of requests** **Success %** **Error %** **Average time taken per request**
-======================= ================= ================== ==================================
-261852 100.00 % 0.00 % 387.126 ms
-======================= ================= ================== ==================================
-
-**ACM component Setup**
-
-============================================== ================================================================== ====================
-**NAME** **IMAGE** **PORT**
-============================================== ================================================================== ====================
- zookeeper-deployment-7ff87c7fcc-ptkwv confluentinc/cp-zookeeper:latest 2181/TCP
- kafka-deployment-5c87d497b-2jv27 confluentinc/cp-kafka:latest 9092/TCP
- policy-models-simulator-6947667bdc-v4czs nexus3.onap.org:10001/onap/policy-models-simulator:latest 3904:30904/TCP
- prometheus-f66f97b6-rknvp nexus3.onap.org:10001/prom/prometheus:latest 9090:30909/TCP
- mariadb-galera-0 nexus3.onap.org:10001/bitnami/mariadb-galera:10.5.8 3306/TCP
- policy-apex-pdp-0 nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.3-SNAPSHOT 6969:30001/TCP
- policy-clamp-ac-http-ppnt-7d747b5d98-4phjf nexus3.onap.org:10001/onap/policy-clamp-ac-http-ppnt:7.1.3-SNAPSHOT 8084/TCP
- policy-clamp-ac-sim-ppnt-97f487577-4p7ks nexus3.onap.org:10001/onap/policy-clamp-ac-sim-ppnt:7.1.3-SNAPSHOT 6969/TCP
- policy-clamp-ac-k8s-ppnt-6bbd86bbc6-csknn nexus3.onap.org:10001/onap/policy-clamp-ac-k8s-ppnt:7.1.3-SNAPSHOT 8083:30443/TCP
- policy-clamp-ac-pf-ppnt-5fcbbcdb6c-twkxw nexus3.onap.org:10001/onap/policy-clamp-ac-pf-ppnt:7.1.3-SNAPSHOT 6969:30008/TCP
- policy-clamp-runtime-acm-66b5d6b64-4gnth nexus3.onap.org:10001/onap/policy-clamp-runtime-acm:7.1.3-SNAPSHOT 6969:30007/TCP
- policy-pap-f7899d4cd-7m898 nexus3.onap.org:10001/onap/policy-pap:3.1.3-SNAPSHOT 6969:30003/TCP
- policy-api-7f7d995b4-ckb84 nexus3.onap.org:10001/onap/policy-api:3.1.3-SNAPSHOT 6969:30002/TCP
-============================================== ================================================================== ====================
-
-
-
-.. Note::
-
-   There were no failures during the 72 hour test.
-
-**JMeter Screenshots**
-
-.. image:: clamp-s3p-results/acm_stability_jmeter.png
-
-.. image:: clamp-s3p-results/acm_stability_table.png
-
-
-Performance Test of acm components
-++++++++++++++++++++++++++++++++++
-
-Introduction
-------------
-
-The performance test of the ACM components measures the min/avg/max processing time and REST call throughput for all requests when multiple requests are issued concurrently.
-
-Setup Details
--------------
-
-From the **clamp/testsuites** directory, the environment can be set up and the tests executed with:
-
-.. code-block:: sh
-
- ./run-s3p-test.sh run performance
-
-Test results are written to the **testsuites/automate-performance/s3pTestResults.jtl**
-file. JMeter logs are written to **testsuites/automate-performance/jmeter.log** and
-**testsuites/automate-performance/nohup.out**
-
-Test Plan
----------
-
-The performance test runs the following steps sequentially with 5 threaded users. Each user creates 100 compositions/instances.
-
-- **SetUp** - SetUp Thread Group
- - **Register Participants** - Registers the presence of participants in the acm database
-- **AutomationComposition Test Flow** - flow by 5 threaded users.
- - **Creation and Deploy** - Creates 100 Compositions and Instances
- - **Commission Automation Composition Definitions** - Commissions the ACM Definitions
- - **Prime AC definition** - Primes the AC Definition to the participants
- - **Instantiate acm** - Instantiate the acm instance
- - **DEPLOY the ACM instance** - change the state of the acm to DEPLOYED
- - **Check instance state** - check the current state of instance and that it is DEPLOYED
- - **Get participants** - fetch all participants
- - **Get compositions** - fetch all compositions
- - **Undeploy and Delete** - Deletes instances and Compositions created before
- - **UNDEPLOY the ACM instance** - change the state of the ACM to UNDEPLOYED
- - **Check instance state** - check the current state of instance and that it is UNDEPLOYED
- - **Delete instance** - delete the instance from all participants and ACM db
- - **DEPRIME ACM definitions** - DEPRIME ACM definitions from participants
- - **Delete ACM Definition** - delete the ACM definition on runtime
-
-Run Test
---------
-
-The script from the setup section also runs the tests; a single invocation covers both setup and execution.
-
-.. code-block:: sh
-
- ./run-s3p-test.sh run performance
-
-Once the test execution completes, the results are available in the **automate-performance/s3pTestResults.jtl** file.
-
-This file can be imported into the JMeter GUI for visualization. The results below are tabulated from the JMeter GUI.
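As an alternative to the GUI, an HTML report dashboard can be generated from the results file on the command line. This is a sketch assuming Apache JMeter 5+ is on the PATH; the output directory name is arbitrary. The `echo` prints the command so the snippet is runnable without JMeter installed; drop it to actually run the command.

```shell
# Print the JMeter command that turns an existing .jtl results file into an
# HTML dashboard report; remove 'echo' to execute it (requires jmeter on PATH).
results=automate-performance/s3pTestResults.jtl
report_dir=jtl-report
echo jmeter -g "$results" -o "$report_dir"
```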
-
-Test Results
-------------
-
-Test results are shown below.
-
-**Test Statistics**
-
-======================= ================= ================== ==================================
-**Total # of requests** **Success %** **Error %** **Average time taken per request**
-======================= ================= ================== ==================================
-8624                    100.00 %          0.00 %             1296.8 ms
-======================= ================= ================== ==================================
-
-**ACM component Setup**
-
-============================================== =================================================================== ====================
-**NAME**                                       **IMAGE**                                                           **PORT**
-============================================== =================================================================== ====================
- zookeeper-deployment-7ff87c7fcc-5svgw         confluentinc/cp-zookeeper:latest                                    2181/TCP
- kafka-deployment-5c87d497b-hmbhc              confluentinc/cp-kafka:latest                                        9092/TCP
- policy-models-simulator-6947667bdc-crcwq      nexus3.onap.org:10001/onap/policy-models-simulator:latest           3904:30904/TCP
- prometheus-f66f97b6-24dvx                     nexus3.onap.org:10001/prom/prometheus:latest                        9090:30909/TCP
- mariadb-galera-0                              nexus3.onap.org:10001/bitnami/mariadb-galera:10.5.8                 3306/TCP
- policy-apex-pdp-0                             nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.3-SNAPSHOT           6969:30001/TCP
- policy-clamp-ac-sim-ppnt-97f487577-pn56t      nexus3.onap.org:10001/onap/policy-clamp-ac-sim-ppnt:7.1.3-SNAPSHOT  6969/TCP
- policy-clamp-ac-http-ppnt-7d747b5d98-qjjlv    nexus3.onap.org:10001/onap/policy-clamp-ac-http-ppnt:7.1.3-SNAPSHOT 8084/TCP
- policy-clamp-ac-k8s-ppnt-6bbd86bbc6-ffbz2     nexus3.onap.org:10001/onap/policy-clamp-ac-k8s-ppnt:7.1.3-SNAPSHOT  8083:30443/TCP
- policy-clamp-ac-pf-ppnt-5fcbbcdb6c-vmsnv      nexus3.onap.org:10001/onap/policy-clamp-ac-pf-ppnt:7.1.3-SNAPSHOT   6969:30008/TCP
- policy-clamp-runtime-acm-66b5d6b64-6vjl5      nexus3.onap.org:10001/onap/policy-clamp-runtime-acm:7.1.3-SNAPSHOT  6969:30007/TCP
- policy-pap-f7899d4cd-8sjk9                    nexus3.onap.org:10001/onap/policy-pap:3.1.3-SNAPSHOT                6969:30003/TCP
- policy-api-7f7d995b4-dktdw                    nexus3.onap.org:10001/onap/policy-api:3.1.3-SNAPSHOT                6969:30002/TCP
-============================================== =================================================================== ====================
-
-**JMeter Screenshot**
-
-.. image:: clamp-s3p-results/acm_performance_jmeter.png
diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/distribution-jmeter-testcases.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/distribution-jmeter-testcases.png
deleted file mode 100644
index 1159bca3..00000000
--- a/docs/development/devtools/testing/s3p/distribution-s3p-results/distribution-jmeter-testcases.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/distribution-visualvm-snapshot.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/distribution-visualvm-snapshot.png
deleted file mode 100644
index b9b175a4..00000000
--- a/docs/development/devtools/testing/s3p/distribution-s3p-results/distribution-visualvm-snapshot.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-monitor.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-monitor.png
deleted file mode 100644
index d535c4aa..00000000
--- a/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-monitor.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-statistics.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-statistics.png
deleted file mode 100644
index a2caab4e..00000000
--- a/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-statistics.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-threads.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-threads.png
deleted file mode 100644
index 9b6c3d23..00000000
--- a/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-threads.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-threshold.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-threshold.png
deleted file mode 100644
index 6a26a09e..00000000
--- a/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-threshold.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-monitor.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-monitor.png
deleted file mode 100644
index cbb675ba..00000000
--- a/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-monitor.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-statistics.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-statistics.png
deleted file mode 100644
index ae1853f9..00000000
--- a/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-statistics.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-threads.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-threads.png
deleted file mode 100644
index 67da4a62..00000000
--- a/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-threads.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-threshold.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-threshold.png
deleted file mode 100644
index 5aa6cc64..00000000
--- a/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-threshold.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/distribution-s3p.rst b/docs/development/devtools/testing/s3p/distribution-s3p.rst
deleted file mode 100644
index 40ade31c..00000000
--- a/docs/development/devtools/testing/s3p/distribution-s3p.rst
+++ /dev/null
@@ -1,238 +0,0 @@
-.. This work is licensed under a
-.. Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-.. _distribution-s3p-label:
-
-Policy Distribution component
-#############################
-
-72h Stability and 4h Performance Tests of Distribution
-++++++++++++++++++++++++++++++++++++++++++++++++++++++
-
-Common Setup
-------------
-
-The common setup for the performance and stability tests is now automated, carried out by the script **testsuites/run-s3p-test.sh**.
-
-Clone the policy-distribution repo to access the test scripts:
-
-.. code-block:: bash
-
- git clone https://gerrit.onap.org/r/policy/distribution
-
-**The following common steps are carried out by the scripts**
-
-* Updates the repo package lists for apt
-* Installs Java 17 (OpenJDK)
-* Installs docker
-* Installs docker-compose
-* Retrieves version information into environment variables from the release info file
-* Builds relevant images including the pdp simulator
-* Triggers docker compose to bring up containers required for the testing
-* Installs jmeter
-* Installs visualvm (and starts it in a GUI environment)
-* Configures permissions for monitoring
-* Starts jstatd
-* Waits for containers to come up
-* Runs either stability or performance tests for a specified duration depending on the arguments specified
-
-For example, the following runs the performance tests for 2 hours. Start from the root directory of policy-distribution:
-
-.. code-block:: bash
-
- cd testsuites
- ./run-s3p-test.sh performance 7200
-
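The second argument to the script is the test duration in seconds. A trivial sketch for deriving it from hours (7200 for the 2 hour example above, 259200 for the 72 hour stability run):

```shell
# run-s3p-test.sh takes the duration in seconds; derive it from hours.
hours=2
duration=$((hours * 3600))
echo "$duration"   # 7200 for the 2 hour example above
```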
-.. note::
-   The containers in this docker-compose setup run with HTTP configuration. For HTTPS, the
-   ports and configurations need to be changed, and certificates and keys must be generated
-   for security.
-
-The script will load up the visualvm GUI on your virtual machine. You will need to manually connect
-it to the distribution JMX port.
-
-To connect to the distribution JMX port:
-
- 1. On the visualvm toolbar, click on "Add JMX Connection"
- 2. Enter localhost as the IP address and Port 9090. This is the JMX port exposed by the
- distribution container
- 3. Double click on the newly added nodes under "Local" to start monitoring CPU, Memory & GC.
-
-Example Screenshot of visualVM
-
-.. image:: distribution-s3p-results/distribution-visualvm-snapshot.png
-
-Teardown Docker
-
-Once the testing is finished, you can tear down the docker setup from **./testsuites** with:
-
-.. code-block:: bash
-
- docker-compose -f stability/src/main/resources/setup/docker-compose.yml down
-
-Stability Test of Policy Distribution
-+++++++++++++++++++++++++++++++++++++
-
-Introduction
-------------
-
-The 72 hour stability test for policy distribution introduces a steady flow of
-transactions initiated from a test client server running JMeter. Policy distribution is
-configured with a special FileSystemReception plugin that monitors a local directory for newly
-added CSAR files for it to process. The input CSAR is added/removed by the test client
-(JMeter) and the result is pulled from the backend (PAP and Policy API) by the test client
-(JMeter).
-
-The test will be performed in an environment where Jmeter will continuously add/remove a test csar
-into the special directory where policy distribution is monitoring and will then get the processed
-results from PAP and PolicyAPI to verify the successful deployment of the policy. The policy will
-then be undeployed and the test will loop continuously until 72 hours have elapsed.
-
-
-Test Plan Sequence
-------------------
-
-The 72h stability test will run the following steps sequentially in a single threaded loop.
-
-- **Delete Old CSAR** - Checks if CSAR already exists in the watched directory, if so it deletes it
-- **Add CSAR** - Adds CSAR to the directory that distribution is watching
-- **Get Healthcheck** - Ensures Healthcheck is returning 200 OK
-- **Get Metrics** - Ensures Metrics is returning 200 OK
-- **Assert PDP Group Query** - Checks that PDPGroupQuery contains the deployed policy
-- **Assert PoliciesDeployed** - Checks that the policy is deployed
-- **Undeploy/Delete Policy** - Undeploys and deletes the Policy for the next loop
-- **Assert PDP Group Query for Deleted Policy** - Ensures the policy has been removed and does not exist
-
-The following steps can be used to configure the parameters of the test plan.
-
-- **HTTP Authorization Manager** - used to store user/password authentication details.
-- **HTTP Header Manager** - used to store headers which will be used for making HTTP requests.
-- **User Defined Variables** - used to store following user defined parameters.
-
-========== ===============================================
- **Name** **Description**
-========== ===============================================
- PAP_HOST IP Address or host name of PAP component
- PAP_PORT Port number of PAP for making REST API calls
- API_HOST IP Address or host name of API component
- API_PORT Port number of API for making REST API calls
- DURATION Duration of Test
-========== ===============================================
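As an illustration only, the sketch below shows how the user-defined variables above combine into base URLs for the test plan's REST calls. The host/port values and the healthcheck paths are assumptions for a local setup, not values taken from the test plan itself.

```shell
# Hypothetical values for the JMeter user-defined variables listed above.
PAP_HOST=localhost
PAP_PORT=6969
API_HOST=localhost
API_PORT=6969

# Healthcheck URLs built from those variables (paths are assumptions).
PAP_HEALTH_URL="http://${PAP_HOST}:${PAP_PORT}/policy/pap/v1/healthcheck"
API_HEALTH_URL="http://${API_HOST}:${API_PORT}/policy/api/v1/healthcheck"
echo "$PAP_HEALTH_URL"
echo "$API_HEALTH_URL"
```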
-
-Screenshot of Distribution stability test plan
-
-.. image:: distribution-s3p-results/distribution-jmeter-testcases.png
-
-
-Running the Test Plan
----------------------
-
-The main script takes care of everything. To run the 72 hour stability tests:
-
-.. code-block:: bash
-
- cd testsuites
- ./run-s3p-test.sh stability 259200
-
-* visualvm produces the monitor and threads - we can screenshot those and add them to the test results
-* A jmeter .jtl file is produced by the run - it is called distribution-stability.jtl
-* The file can be imported into the jmeter GUI to view statistics
-* The application performance index table can be produced with jmeter on the CLI as below:
-
-.. code-block:: bash
-
- jmeter -n -t your_test_plan.jmx -l test_results.jtl -e -o report_directory
-
-Test Results
-------------
-
-**Summary**
-
-- Stability test plan was triggered for 72 hours.
-- No errors were reported
-
-**Test Statistics**
-
-.. image:: distribution-s3p-results/stability-statistics.png
-.. image:: distribution-s3p-results/stability-threshold.png
-
-**VisualVM Screenshots**
-
-.. image:: distribution-s3p-results/stability-monitor.png
-.. image:: distribution-s3p-results/stability-threads.png
-
-
-Performance Test of Policy Distribution
-+++++++++++++++++++++++++++++++++++++++
-
-Introduction
-------------
-
-The 4h Performance Test of Policy Distribution has the goal of testing the min/avg/max processing
-time and rest call throughput for all the requests when the number of requests are large enough to
-saturate the resource and find the bottleneck.
-
-It also tests that distribution can handle multiple policy CSARs and that these are deployed within
-60 seconds consistently.
-
-
-Setup Details
--------------
-
-The performance test is based on the same setup as the distribution stability tests. This setup is done by the main
-**run-s3p-test.sh** script
-
-
-Test Plan Sequence
-------------------
-
-The performance test plan differs from the stability test plan.
-
-- Instead of handling one policy CSAR at a time, multiple CSARs are deployed into the watched
-  folder at the exact same time.
-- We expect all policies from these CSARs to be deployed within 60 seconds.
-- There are also multithreaded tests running against the healthcheck and statistics endpoints of
-  the distribution service.
-
-
-Running the Test Plan
----------------------
-
-The main script takes care of everything. To run the 4 hour performance tests:
-
-.. code-block:: bash
-
- cd testsuites
- ./run-s3p-test.sh performance 14400
-
-* visualvm produces the monitor and threads - we can screenshot those and add them to the test results
-* A jmeter .jtl file is produced by the run - it is called distribution-performance.jtl
-* The file can be imported into the jmeter GUI to view statistics
-* The application performance index table can be produced with jmeter on the cli as below:
-
-.. code-block:: bash
-
- jmeter -n -t your_test_plan.jmx -l test_results.jtl -e -o report_directory
-
-This produces HTML pages with statistics tables that can be added to the results.
-
-Test Results
-------------
-
-**Summary**
-
-- Performance test plan was triggered for 4 hours.
-- No errors were reported
-
-**Test Statistics**
-
-.. image:: distribution-s3p-results/performance-statistics.png
-.. image:: distribution-s3p-results/performance-threshold.png
-
-**VisualVM Screenshots**
-
-.. image:: distribution-s3p-results/performance-monitor.png
-.. image:: distribution-s3p-results/performance-threads.png
-
-End of document
diff --git a/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-1.png b/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-1.png
deleted file mode 100644
index 3c1e06f7..00000000
--- a/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-1.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-2.png b/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-2.png
deleted file mode 100644
index 7e124716..00000000
--- a/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-2.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-3.png b/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-3.png
deleted file mode 100644
index 50f2c148..00000000
--- a/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-3.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-4.png b/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-4.png
deleted file mode 100644
index 369d1f33..00000000
--- a/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-4.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/drools-s3p.rst b/docs/development/devtools/testing/s3p/drools-s3p.rst
deleted file mode 100644
index 88f601bd..00000000
--- a/docs/development/devtools/testing/s3p/drools-s3p.rst
+++ /dev/null
@@ -1,74 +0,0 @@
-.. This work is licensed under a
-.. Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-.. _drools-s3p-label:
-
-.. toctree::
- :maxdepth: 2
-
-Policy Drools PDP component
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Both the Performance and the Stability tests were executed against an ONAP installation in the Policy tenant
-in the UNH lab, from the admin VM running the jmeter tool to inject the load.
-
-General Setup
-*************
-
-Agent VMs in this lab have the following configuration:
-
-- 16GB RAM
-- 8 VCPU
-
-Jmeter is run from the admin VM.
-
-The drools-pdp container uses the JVM memory and CPU settings from the default OOM installation.
-
-Other ONAP components exercised during the stability tests were:
-
-- Policy XACML PDP to process guard queries for each transaction.
-- DMaaP to carry PDP-D and jmeter initiated traffic to complete transactions.
-- Policy API to create (and delete at the end of the tests) policies for each
- scenario under test.
-- Policy PAP to deploy (and undeploy at the end of the tests) policies for each scenario under test.
-- XACML PDP Stability test was running at the same time.
-
-The following components are simulated during the tests.
-
-- SDNR.
-
-Stability Test of Policy PDP-D
-******************************
-
-PDP-D performance
-=================
-
-The tests focused on the following use cases running in parallel:
-
-- vCPE
-- SON O1
-- SON A1
-
-Three threads ran in parallel, one for each scenario. The transactions were initiated
-by each jmeter thread group. Each thread initiated a transaction, monitored it, and
-started the next one 250 ms later.
-
-The results are illustrated on the following graphs:
-
-.. image:: drools-s3p-results/s3p-drools-1.png
-.. image:: drools-s3p-results/s3p-drools-2.png
-.. image:: drools-s3p-results/s3p-drools-3.png
-
-
-Commentary
-==========
-
-There were around 1% unexpected failures during the 72-hour run. This can also be seen in the
-final output of jmeter:
-
-.. code-block:: bash
-
- summary = 4751546 in 72:00:37 = 18.3/s Avg: 150 Min: 0 Max: 15087 Err: 47891 (1.01%)
-
-Sporadic database errors have been observed and seem related to the 1% failure percentage rate.
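The 1.01% figure in the summary line can be recomputed from the reported totals, e.g.:

```shell
# Recompute the error percentage from the jmeter summary line above:
# 47891 errors out of 4751546 transactions.
errors=47891
total=4751546
awk -v e="$errors" -v t="$total" 'BEGIN { printf "%.2f%%\n", 100 * e / t }'
```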
diff --git a/docs/development/devtools/testing/s3p/images/workflow-results.png b/docs/development/devtools/testing/s3p/images/workflow-results.png
new file mode 100644
index 00000000..d287754a
--- /dev/null
+++ b/docs/development/devtools/testing/s3p/images/workflow-results.png
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/images/workflow-test-result.png b/docs/development/devtools/testing/s3p/images/workflow-test-result.png
new file mode 100644
index 00000000..d192205d
--- /dev/null
+++ b/docs/development/devtools/testing/s3p/images/workflow-test-result.png
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/images/workflows.png b/docs/development/devtools/testing/s3p/images/workflows.png
new file mode 100644
index 00000000..7b05e22d
--- /dev/null
+++ b/docs/development/devtools/testing/s3p/images/workflows.png
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_metrics_after_72h.txt b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_metrics_after_72h.txt
deleted file mode 100644
index 1851bf63..00000000
--- a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_metrics_after_72h.txt
+++ /dev/null
@@ -1,521 +0,0 @@
-# HELP hikaricp_connections_min Min connections
-# TYPE hikaricp_connections_min gauge
-hikaricp_connections_min{pool="HikariPool-1",} 10.0
-# HELP tomcat_sessions_created_sessions_total
-# TYPE tomcat_sessions_created_sessions_total counter
-tomcat_sessions_created_sessions_total 3.0
-# HELP disk_total_bytes Total space for path
-# TYPE disk_total_bytes gauge
-disk_total_bytes{path="/opt/app/policy/pap/bin/.",} 1.0386530304E11
-# HELP jvm_classes_loaded_classes The number of classes that are currently loaded in the Java virtual machine
-# TYPE jvm_classes_loaded_classes gauge
-jvm_classes_loaded_classes 20615.0
-# HELP hikaricp_connections_usage_seconds Connection usage time
-# TYPE hikaricp_connections_usage_seconds summary
-hikaricp_connections_usage_seconds_count{pool="HikariPool-1",} 321133.0
-hikaricp_connections_usage_seconds_sum{pool="HikariPool-1",} 45213.218
-# HELP hikaricp_connections_usage_seconds_max Connection usage time
-# TYPE hikaricp_connections_usage_seconds_max gauge
-hikaricp_connections_usage_seconds_max{pool="HikariPool-1",} 0.027
-# HELP hikaricp_connections_active Active connections
-# TYPE hikaricp_connections_active gauge
-hikaricp_connections_active{pool="HikariPool-1",} 0.0
-# HELP process_start_time_seconds Start time of the process since unix epoch.
-# TYPE process_start_time_seconds gauge
-process_start_time_seconds 1.700139959198E9
-# HELP jvm_memory_used_bytes The amount of used memory
-# TYPE jvm_memory_used_bytes gauge
-jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 3.2981376E7
-jvm_memory_used_bytes{area="heap",id="G1 Survivor Space",} 494864.0
-jvm_memory_used_bytes{area="heap",id="G1 Old Gen",} 2.1805824E8
-jvm_memory_used_bytes{area="nonheap",id="Metaspace",} 1.13110752E8
-jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 1472896.0
-jvm_memory_used_bytes{area="heap",id="G1 Eden Space",} 3.7748736E7
-jvm_memory_used_bytes{area="nonheap",id="Compressed Class Space",} 1.4127568E7
-jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 3.4159744E7
-# HELP jvm_gc_memory_promoted_bytes_total Count of positive increases in the size of the old generation memory pool before GC to after GC
-# TYPE jvm_gc_memory_promoted_bytes_total counter
-jvm_gc_memory_promoted_bytes_total 1.78894336E8
-# HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset
-# TYPE jvm_threads_peak_threads gauge
-jvm_threads_peak_threads 43.0
-# HELP hikaricp_connections_creation_seconds_max Connection creation time
-# TYPE hikaricp_connections_creation_seconds_max gauge
-hikaricp_connections_creation_seconds_max{pool="HikariPool-1",} 0.0
-# HELP hikaricp_connections_creation_seconds Connection creation time
-# TYPE hikaricp_connections_creation_seconds summary
-hikaricp_connections_creation_seconds_count{pool="HikariPool-1",} 2131.0
-hikaricp_connections_creation_seconds_sum{pool="HikariPool-1",} 17.144
-# HELP system_cpu_count The number of processors available to the Java virtual machine
-# TYPE system_cpu_count gauge
-system_cpu_count 16.0
-# HELP spring_security_filterchains_session_url_encoding_after_total
-# TYPE spring_security_filterchains_session_url_encoding_after_total counter
-spring_security_filterchains_session_url_encoding_after_total{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168998.0
-# HELP executor_queue_remaining_tasks The number of additional elements that this queue can ideally accept without blocking
-# TYPE executor_queue_remaining_tasks gauge
-executor_queue_remaining_tasks{name="applicationTaskExecutor",} 2.147483647E9
-# HELP hikaricp_connections Total connections
-# TYPE hikaricp_connections gauge
-hikaricp_connections{pool="HikariPool-1",} 10.0
-# HELP tomcat_sessions_expired_sessions_total
-# TYPE tomcat_sessions_expired_sessions_total counter
-tomcat_sessions_expired_sessions_total 2.0
-# HELP tomcat_sessions_active_current_sessions
-# TYPE tomcat_sessions_active_current_sessions gauge
-tomcat_sessions_active_current_sessions 1.0
-# HELP hikaricp_connections_timeout_total Connection timeout total count
-# TYPE hikaricp_connections_timeout_total counter
-hikaricp_connections_timeout_total{pool="HikariPool-1",} 0.0
-# HELP jvm_threads_live_threads The current number of live threads including both daemon and non-daemon threads
-# TYPE jvm_threads_live_threads gauge
-jvm_threads_live_threads 38.0
-# HELP spring_security_filterchains_active_seconds_max
-# TYPE spring_security_filterchains_active_seconds_max gauge
-spring_security_filterchains_active_seconds_max{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 0.0
-spring_security_filterchains_active_seconds_max{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 0.0
-# HELP spring_security_filterchains_active_seconds
-# TYPE spring_security_filterchains_active_seconds summary
-spring_security_filterchains_active_seconds_active_count{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 0.0
-spring_security_filterchains_active_seconds_duration_sum{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 0.0
-spring_security_filterchains_active_seconds_active_count{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 0.0
-spring_security_filterchains_active_seconds_duration_sum{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 0.0
-# HELP spring_security_filterchains_logout_after_total
-# TYPE spring_security_filterchains_logout_after_total counter
-spring_security_filterchains_logout_after_total{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168998.0
-# HELP jvm_info JVM version info
-# TYPE jvm_info gauge
-jvm_info{runtime="OpenJDK Runtime Environment",vendor="Alpine",version="17.0.9+8-alpine-r0",} 1.0
-# HELP disk_free_bytes Usable space for path
-# TYPE disk_free_bytes gauge
-disk_free_bytes{path="/opt/app/policy/pap/bin/.",} 8.5940973568E10
-# HELP spring_security_authentications_active_seconds
-# TYPE spring_security_authentications_active_seconds summary
-spring_security_authentications_active_seconds_active_count{authentication_failure_type="n/a",authentication_method="ProviderManager",authentication_request_type="UsernamePasswordAuthenticationToken",authentication_result_type="n/a",} 0.0
-spring_security_authentications_active_seconds_duration_sum{authentication_failure_type="n/a",authentication_method="ProviderManager",authentication_request_type="UsernamePasswordAuthenticationToken",authentication_result_type="n/a",} 0.0
-# HELP spring_security_authentications_active_seconds_max
-# TYPE spring_security_authentications_active_seconds_max gauge
-spring_security_authentications_active_seconds_max{authentication_failure_type="n/a",authentication_method="ProviderManager",authentication_request_type="UsernamePasswordAuthenticationToken",authentication_result_type="n/a",} 0.0
-# HELP jvm_threads_daemon_threads The current number of live daemon threads
-# TYPE jvm_threads_daemon_threads gauge
-jvm_threads_daemon_threads 28.0
-# HELP spring_security_filterchains_context_holder_before_total
-# TYPE spring_security_filterchains_context_holder_before_total counter
-spring_security_filterchains_context_holder_before_total{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168999.0
-# HELP spring_security_filterchains_context_holder_after_total
-# TYPE spring_security_filterchains_context_holder_after_total counter
-spring_security_filterchains_context_holder_after_total{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168998.0
-# HELP jvm_gc_memory_allocated_bytes_total Incremented for an increase in the size of the (young) heap memory pool after one GC to before the next
-# TYPE jvm_gc_memory_allocated_bytes_total counter
-jvm_gc_memory_allocated_bytes_total 2.70538060492E12
-# HELP executor_pool_core_threads The core number of threads for the pool
-# TYPE executor_pool_core_threads gauge
-executor_pool_core_threads{name="applicationTaskExecutor",} 8.0
-# HELP spring_security_filterchains_authentication_anonymous_before_total
-# TYPE spring_security_filterchains_authentication_anonymous_before_total counter
-spring_security_filterchains_authentication_anonymous_before_total{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168999.0
-# HELP jdbc_connections_active Current number of active connections that have been allocated from the data source.
-# TYPE jdbc_connections_active gauge
-jdbc_connections_active{name="dataSource",} 0.0
-# HELP http_server_requests_seconds
-# TYPE http_server_requests_seconds summary
-http_server_requests_seconds_count{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 16898.0
-http_server_requests_seconds_sum{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 3967.357676154
-http_server_requests_seconds_count{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/policies/status",} 16898.0
-http_server_requests_seconds_sum{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/policies/status",} 3952.559792217
-http_server_requests_seconds_count{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/healthcheck",} 8449.0
-http_server_requests_seconds_sum{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/healthcheck",} 1962.407770331
-http_server_requests_seconds_count{error="none",exception="none",method="DELETE",outcome="SUCCESS",status="200",uri="/pdps/groups/{name}",} 1.0
-http_server_requests_seconds_sum{error="none",exception="none",method="DELETE",outcome="SUCCESS",status="200",uri="/pdps/groups/{name}",} 1.13003001
-http_server_requests_seconds_count{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/pdps",} 33794.0
-http_server_requests_seconds_sum{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/pdps",} 8534.756994317
-http_server_requests_seconds_count{error="none",exception="none",method="POST",outcome="SUCCESS",status="202",uri="/pdps/policies",} 8449.0
-http_server_requests_seconds_sum{error="none",exception="none",method="POST",outcome="SUCCESS",status="202",uri="/pdps/policies",} 9029.386618813
-http_server_requests_seconds_count{error="none",exception="none",method="DELETE",outcome="SUCCESS",status="202",uri="/pdps/policies/{name}",} 8448.0
-http_server_requests_seconds_sum{error="none",exception="none",method="DELETE",outcome="SUCCESS",status="202",uri="/pdps/policies/{name}",} 9292.095374281
-http_server_requests_seconds_count{error="none",exception="none",method="GET",outcome="CLIENT_ERROR",status="401",uri="UNKNOWN",} 3.0
-http_server_requests_seconds_sum{error="none",exception="none",method="GET",outcome="CLIENT_ERROR",status="401",uri="UNKNOWN",} 0.146722928
-http_server_requests_seconds_count{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/policies/deployed",} 8448.0
-http_server_requests_seconds_sum{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/policies/deployed",} 1963.048694006
-http_server_requests_seconds_count{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/policies/audit/{pdpGroupName}",} 8448.0
-http_server_requests_seconds_sum{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/policies/audit/{pdpGroupName}",} 2081.277984093
-http_server_requests_seconds_count{error="none",exception="none",method="POST",outcome="SUCCESS",status="202",uri="/pdps/deployments/batch",} 16896.0
-http_server_requests_seconds_sum{error="none",exception="none",method="POST",outcome="SUCCESS",status="202",uri="/pdps/deployments/batch",} 18067.385431232
-http_server_requests_seconds_count{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 16915.0
-http_server_requests_seconds_sum{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 4012.92045444
-http_server_requests_seconds_count{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/policies/audit/{pdpGroupName}/{policyName}/{policyVersion}",} 16896.0
-http_server_requests_seconds_sum{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/policies/audit/{pdpGroupName}/{policyName}/{policyVersion}",} 4284.22784792
-http_server_requests_seconds_count{error="none",exception="none",method="PUT",outcome="SUCCESS",status="200",uri="/pdps/groups/{name}",} 3.0
-http_server_requests_seconds_sum{error="none",exception="none",method="PUT",outcome="SUCCESS",status="200",uri="/pdps/groups/{name}",} 1.687419501
-http_server_requests_seconds_count{error="none",exception="none",method="POST",outcome="SUCCESS",status="200",uri="/pdps/groups/batch",} 1.0
-http_server_requests_seconds_sum{error="none",exception="none",method="POST",outcome="SUCCESS",status="200",uri="/pdps/groups/batch",} 1.716173275
-http_server_requests_seconds_count{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/components/healthcheck",} 8448.0
-http_server_requests_seconds_sum{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/components/healthcheck",} 4213.059172045
-# HELP http_server_requests_seconds_max
-# TYPE http_server_requests_seconds_max gauge
-http_server_requests_seconds_max{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 0.0
-http_server_requests_seconds_max{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/policies/status",} 0.0
-http_server_requests_seconds_max{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/healthcheck",} 0.0
-http_server_requests_seconds_max{error="none",exception="none",method="DELETE",outcome="SUCCESS",status="200",uri="/pdps/groups/{name}",} 0.0
-http_server_requests_seconds_max{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/pdps",} 0.0
-http_server_requests_seconds_max{error="none",exception="none",method="POST",outcome="SUCCESS",status="202",uri="/pdps/policies",} 0.0
-http_server_requests_seconds_max{error="none",exception="none",method="DELETE",outcome="SUCCESS",status="202",uri="/pdps/policies/{name}",} 0.0
-http_server_requests_seconds_max{error="none",exception="none",method="GET",outcome="CLIENT_ERROR",status="401",uri="UNKNOWN",} 0.051127942
-http_server_requests_seconds_max{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/policies/deployed",} 0.0
-http_server_requests_seconds_max{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/policies/audit/{pdpGroupName}",} 0.0
-http_server_requests_seconds_max{error="none",exception="none",method="POST",outcome="SUCCESS",status="202",uri="/pdps/deployments/batch",} 0.0
-http_server_requests_seconds_max{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 0.0
-http_server_requests_seconds_max{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/policies/audit/{pdpGroupName}/{policyName}/{policyVersion}",} 0.0
-http_server_requests_seconds_max{error="none",exception="none",method="PUT",outcome="SUCCESS",status="200",uri="/pdps/groups/{name}",} 0.0
-http_server_requests_seconds_max{error="none",exception="none",method="POST",outcome="SUCCESS",status="200",uri="/pdps/groups/batch",} 0.0
-http_server_requests_seconds_max{error="none",exception="none",method="GET",outcome="SUCCESS",status="200",uri="/components/healthcheck",} 0.0
-# HELP spring_security_filterchains_authentication_basic_before_total
-# TYPE spring_security_filterchains_authentication_basic_before_total counter
-spring_security_filterchains_authentication_basic_before_total{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168999.0
-# HELP pap_policy_deployments_total
-# TYPE pap_policy_deployments_total counter
-pap_policy_deployments_total{operation="deploy",status="FAILURE",} 0.0
-pap_policy_deployments_total{operation="undeploy",status="SUCCESS",} 16896.0
-pap_policy_deployments_total{operation="deploy",status="SUCCESS",} 16897.0
-pap_policy_deployments_total{operation="undeploy",status="FAILURE",} 0.0
-# HELP jvm_buffer_total_capacity_bytes An estimate of the total capacity of the buffers in this pool
-# TYPE jvm_buffer_total_capacity_bytes gauge
-jvm_buffer_total_capacity_bytes{id="mapped - 'non-volatile memory'",} 0.0
-jvm_buffer_total_capacity_bytes{id="mapped",} 0.0
-jvm_buffer_total_capacity_bytes{id="direct",} 1544596.0
-# HELP jvm_gc_live_data_size_bytes Size of long-lived heap memory pool after reclamation
-# TYPE jvm_gc_live_data_size_bytes gauge
-jvm_gc_live_data_size_bytes 5.4770176E7
-# HELP process_files_max_files The maximum file descriptor count
-# TYPE process_files_max_files gauge
-process_files_max_files 1048576.0
-# HELP jvm_memory_committed_bytes The amount of memory in bytes that is committed for the Java virtual machine to use
-# TYPE jvm_memory_committed_bytes gauge
-jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 4.1222144E7
-jvm_memory_committed_bytes{area="heap",id="G1 Survivor Space",} 4194304.0
-jvm_memory_committed_bytes{area="heap",id="G1 Old Gen",} 2.60046848E8
-jvm_memory_committed_bytes{area="nonheap",id="Metaspace",} 1.13967104E8
-jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 3342336.0
-jvm_memory_committed_bytes{area="heap",id="G1 Eden Space",} 9.6468992E7
-jvm_memory_committed_bytes{area="nonheap",id="Compressed Class Space",} 1.4548992E7
-jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 3.5454976E7
-# HELP spring_data_repository_invocations_seconds_max Duration of repository invocations
-# TYPE spring_data_repository_invocations_seconds_max gauge
-spring_data_repository_invocations_seconds_max{exception="None",method="findByKeyParentKeyNameAndKeyParentKeyVersion",repository="PolicyStatusRepository",state="SUCCESS",} 0.0
-spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpGroupRepository",state="SUCCESS",} 0.0
-spring_data_repository_invocations_seconds_max{exception="None",method="saveAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.0
-spring_data_repository_invocations_seconds_max{exception="None",method="findByPdpGroup",repository="PolicyAuditRepository",state="SUCCESS",} 0.0
-spring_data_repository_invocations_seconds_max{exception="None",method="findById",repository="ToscaServiceTemplateRepository",state="SUCCESS",} 0.0
-spring_data_repository_invocations_seconds_max{exception="None",method="findByPdpGroup",repository="PolicyStatusRepository",state="SUCCESS",} 0.0
-spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 0.010393136
-spring_data_repository_invocations_seconds_max{exception="None",method="deleteById",repository="PdpGroupRepository",state="SUCCESS",} 0.0
-spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.0
-spring_data_repository_invocations_seconds_max{exception="None",method="saveAll",repository="PolicyAuditRepository",state="SUCCESS",} 0.0
-spring_data_repository_invocations_seconds_max{exception="None",method="findByPdpGroupAndNameAndVersion",repository="PolicyAuditRepository",state="SUCCESS",} 0.0
-spring_data_repository_invocations_seconds_max{exception="None",method="findById",repository="ToscaNodeTemplateRepository",state="SUCCESS",} 0.0
-spring_data_repository_invocations_seconds_max{exception="None",method="existsById",repository="PdpGroupRepository",state="SUCCESS",} 0.0
-spring_data_repository_invocations_seconds_max{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 0.0
-spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpSubGroupRepository",state="SUCCESS",} 0.0
-spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 0.0
-spring_data_repository_invocations_seconds_max{exception="None",method="deleteAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.0
-# HELP spring_data_repository_invocations_seconds Duration of repository invocations
-# TYPE spring_data_repository_invocations_seconds summary
-spring_data_repository_invocations_seconds_count{exception="None",method="findByKeyParentKeyNameAndKeyParentKeyVersion",repository="PolicyStatusRepository",state="SUCCESS",} 16915.0
-spring_data_repository_invocations_seconds_sum{exception="None",method="findByKeyParentKeyNameAndKeyParentKeyVersion",repository="PolicyStatusRepository",state="SUCCESS",} 34.05336667
-spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpGroupRepository",state="SUCCESS",} 33797.0
-spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpGroupRepository",state="SUCCESS",} 14.614549552
-spring_data_repository_invocations_seconds_count{exception="None",method="saveAll",repository="PolicyStatusRepository",state="SUCCESS",} 135740.0
-spring_data_repository_invocations_seconds_sum{exception="None",method="saveAll",repository="PolicyStatusRepository",state="SUCCESS",} 60.741361443
-spring_data_repository_invocations_seconds_count{exception="None",method="findByPdpGroup",repository="PolicyAuditRepository",state="SUCCESS",} 8448.0
-spring_data_repository_invocations_seconds_sum{exception="None",method="findByPdpGroup",repository="PolicyAuditRepository",state="SUCCESS",} 140.930950983
-spring_data_repository_invocations_seconds_count{exception="None",method="findById",repository="ToscaServiceTemplateRepository",state="SUCCESS",} 52516.0
-spring_data_repository_invocations_seconds_sum{exception="None",method="findById",repository="ToscaServiceTemplateRepository",state="SUCCESS",} 125.813080008
-spring_data_repository_invocations_seconds_count{exception="None",method="findByPdpGroup",repository="PolicyStatusRepository",state="SUCCESS",} 50972.0
-spring_data_repository_invocations_seconds_sum{exception="None",method="findByPdpGroup",repository="PolicyStatusRepository",state="SUCCESS",} 106.770108329
-spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 102439.0
-spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 1036.574172723
-spring_data_repository_invocations_seconds_count{exception="None",method="deleteById",repository="PdpGroupRepository",state="SUCCESS",} 1.0
-spring_data_repository_invocations_seconds_sum{exception="None",method="deleteById",repository="PdpGroupRepository",state="SUCCESS",} 0.007675311
-spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 33795.0
-spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 53.729542707
-spring_data_repository_invocations_seconds_count{exception="None",method="saveAll",repository="PolicyAuditRepository",state="SUCCESS",} 33793.0
-spring_data_repository_invocations_seconds_sum{exception="None",method="saveAll",repository="PolicyAuditRepository",state="SUCCESS",} 41.990371471
-spring_data_repository_invocations_seconds_count{exception="None",method="findByPdpGroupAndNameAndVersion",repository="PolicyAuditRepository",state="SUCCESS",} 16896.0
-spring_data_repository_invocations_seconds_sum{exception="None",method="findByPdpGroupAndNameAndVersion",repository="PolicyAuditRepository",state="SUCCESS",} 371.016745717
-spring_data_repository_invocations_seconds_count{exception="None",method="findById",repository="ToscaNodeTemplateRepository",state="SUCCESS",} 25663.0
-spring_data_repository_invocations_seconds_sum{exception="None",method="findById",repository="ToscaNodeTemplateRepository",state="SUCCESS",} 2.215574842
-spring_data_repository_invocations_seconds_count{exception="None",method="existsById",repository="PdpGroupRepository",state="SUCCESS",} 1.0
-spring_data_repository_invocations_seconds_sum{exception="None",method="existsById",repository="PdpGroupRepository",state="SUCCESS",} 0.843078054
-spring_data_repository_invocations_seconds_count{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 16902.0
-spring_data_repository_invocations_seconds_sum{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 80.237779619
-spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpSubGroupRepository",state="SUCCESS",} 2.0
-spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpSubGroupRepository",state="SUCCESS",} 0.03577736
-spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 38194.0
-spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 52.220218057
-spring_data_repository_invocations_seconds_count{exception="None",method="deleteAll",repository="PolicyStatusRepository",state="SUCCESS",} 67870.0
-spring_data_repository_invocations_seconds_sum{exception="None",method="deleteAll",repository="PolicyStatusRepository",state="SUCCESS",} 24.905966529
-# HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time
-# TYPE system_load_average_1m gauge
-system_load_average_1m 0.34375
-# HELP spring_security_filterchains_requestcache_before_total
-# TYPE spring_security_filterchains_requestcache_before_total counter
-spring_security_filterchains_requestcache_before_total{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168999.0
-# HELP tomcat_sessions_alive_max_seconds
-# TYPE tomcat_sessions_alive_max_seconds gauge
-tomcat_sessions_alive_max_seconds 1853.0
-# HELP jvm_memory_max_bytes The maximum amount of memory in bytes that can be used for memory management
-# TYPE jvm_memory_max_bytes gauge
-jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 1.22023936E8
-jvm_memory_max_bytes{area="heap",id="G1 Survivor Space",} -1.0
-jvm_memory_max_bytes{area="heap",id="G1 Old Gen",} 8.434745344E9
-jvm_memory_max_bytes{area="nonheap",id="Metaspace",} -1.0
-jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 7606272.0
-jvm_memory_max_bytes{area="heap",id="G1 Eden Space",} -1.0
-jvm_memory_max_bytes{area="nonheap",id="Compressed Class Space",} 1.073741824E9
-jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 1.22028032E8
-# HELP hikaricp_connections_acquire_seconds Connection acquire time
-# TYPE hikaricp_connections_acquire_seconds summary
-hikaricp_connections_acquire_seconds_count{pool="HikariPool-1",} 321133.0
-hikaricp_connections_acquire_seconds_sum{pool="HikariPool-1",} 68.182780148
-# HELP hikaricp_connections_acquire_seconds_max Connection acquire time
-# TYPE hikaricp_connections_acquire_seconds_max gauge
-hikaricp_connections_acquire_seconds_max{pool="HikariPool-1",} 0.001688513
-# HELP hikaricp_connections_idle Idle connections
-# TYPE hikaricp_connections_idle gauge
-hikaricp_connections_idle{pool="HikariPool-1",} 10.0
-# HELP jvm_gc_pause_seconds Time spent in GC pause
-# TYPE jvm_gc_pause_seconds summary
-jvm_gc_pause_seconds_count{action="end of minor GC",cause="Metadata GC Threshold",gc="G1 Young Generation",} 1.0
-jvm_gc_pause_seconds_sum{action="end of minor GC",cause="Metadata GC Threshold",gc="G1 Young Generation",} 0.03
-jvm_gc_pause_seconds_count{action="end of minor GC",cause="GCLocker Initiated GC",gc="G1 Young Generation",} 5.0
-jvm_gc_pause_seconds_sum{action="end of minor GC",cause="GCLocker Initiated GC",gc="G1 Young Generation",} 0.032
-jvm_gc_pause_seconds_count{action="end of minor GC",cause="G1 Evacuation Pause",gc="G1 Young Generation",} 29819.0
-jvm_gc_pause_seconds_sum{action="end of minor GC",cause="G1 Evacuation Pause",gc="G1 Young Generation",} 205.153
-# HELP jvm_gc_pause_seconds_max Time spent in GC pause
-# TYPE jvm_gc_pause_seconds_max gauge
-jvm_gc_pause_seconds_max{action="end of minor GC",cause="Metadata GC Threshold",gc="G1 Young Generation",} 0.0
-jvm_gc_pause_seconds_max{action="end of minor GC",cause="GCLocker Initiated GC",gc="G1 Young Generation",} 0.0
-jvm_gc_pause_seconds_max{action="end of minor GC",cause="G1 Evacuation Pause",gc="G1 Young Generation",} 0.0
-# HELP spring_security_authentications_seconds_max
-# TYPE spring_security_authentications_seconds_max gauge
-spring_security_authentications_seconds_max{authentication_failure_type="n/a",authentication_method="ProviderManager",authentication_request_type="UsernamePasswordAuthenticationToken",authentication_result_type="UsernamePasswordAuthenticationToken",error="none",} 0.269684484
-# HELP spring_security_authentications_seconds
-# TYPE spring_security_authentications_seconds summary
-spring_security_authentications_seconds_count{authentication_failure_type="n/a",authentication_method="ProviderManager",authentication_request_type="UsernamePasswordAuthenticationToken",authentication_result_type="UsernamePasswordAuthenticationToken",error="none",} 168993.0
-spring_security_authentications_seconds_sum{authentication_failure_type="n/a",authentication_method="ProviderManager",authentication_request_type="UsernamePasswordAuthenticationToken",authentication_result_type="UsernamePasswordAuthenticationToken",error="none",} 38517.298249707
-# HELP tomcat_sessions_rejected_sessions_total
-# TYPE tomcat_sessions_rejected_sessions_total counter
-tomcat_sessions_rejected_sessions_total 0.0
-# HELP jvm_classes_unloaded_classes_total The total number of classes unloaded since the Java virtual machine has started execution
-# TYPE jvm_classes_unloaded_classes_total counter
-jvm_classes_unloaded_classes_total 268.0
-# HELP spring_security_http_secured_requests_seconds
-# TYPE spring_security_http_secured_requests_seconds summary
-spring_security_http_secured_requests_seconds_count{error="none",} 168992.0
-spring_security_http_secured_requests_seconds_sum{error="none",} 32721.168866206
-# HELP spring_security_http_secured_requests_seconds_max
-# TYPE spring_security_http_secured_requests_seconds_max gauge
-spring_security_http_secured_requests_seconds_max{error="none",} 0.0
-# HELP spring_security_filterchains_context_async_before_total
-# TYPE spring_security_filterchains_context_async_before_total counter
-spring_security_filterchains_context_async_before_total{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168999.0
-# HELP hikaricp_connections_max Max connections
-# TYPE hikaricp_connections_max gauge
-hikaricp_connections_max{pool="HikariPool-1",} 10.0
-# HELP spring_security_http_secured_requests_active_seconds
-# TYPE spring_security_http_secured_requests_active_seconds summary
-spring_security_http_secured_requests_active_seconds_active_count 1.0
-spring_security_http_secured_requests_active_seconds_duration_sum 0.011941797
-# HELP spring_security_http_secured_requests_active_seconds_max
-# TYPE spring_security_http_secured_requests_active_seconds_max gauge
-spring_security_http_secured_requests_active_seconds_max 0.011942844
-# HELP spring_security_filterchains_authentication_basic_after_total
-# TYPE spring_security_filterchains_authentication_basic_after_total counter
-spring_security_filterchains_authentication_basic_after_total{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168998.0
-# HELP process_cpu_usage The "recent cpu usage" for the Java Virtual Machine process
-# TYPE process_cpu_usage gauge
-process_cpu_usage 3.61387926826283E-4
-# HELP executor_completed_tasks_total The approximate total number of tasks that have completed execution
-# TYPE executor_completed_tasks_total counter
-executor_completed_tasks_total{name="applicationTaskExecutor",} 0.0
-# HELP jvm_threads_started_threads_total The total number of application threads started in the JVM
-# TYPE jvm_threads_started_threads_total counter
-jvm_threads_started_threads_total 4650.0
-# HELP process_uptime_seconds The uptime of the Java virtual machine
-# TYPE process_uptime_seconds gauge
-process_uptime_seconds 380261.777
-# HELP pap_policy_deployments_seconds Timer for HTTP request to deploy/undeploy a policy
-# TYPE pap_policy_deployments_seconds summary
-pap_policy_deployments_seconds_count{operation="deploy",status="FAILURE",} 0.0
-pap_policy_deployments_seconds_sum{operation="deploy",status="FAILURE",} 0.0
-pap_policy_deployments_seconds_count{operation="undeploy",status="SUCCESS",} 8448.0
-pap_policy_deployments_seconds_sum{operation="undeploy",status="SUCCESS",} 7322.301986411
-pap_policy_deployments_seconds_count{operation="deploy",status="SUCCESS",} 25345.0
-pap_policy_deployments_seconds_sum{operation="deploy",status="SUCCESS",} 21200.125523501
-pap_policy_deployments_seconds_count{operation="undeploy",status="FAILURE",} 0.0
-pap_policy_deployments_seconds_sum{operation="undeploy",status="FAILURE",} 0.0
-# HELP pap_policy_deployments_seconds_max Timer for HTTP request to deploy/undeploy a policy
-# TYPE pap_policy_deployments_seconds_max gauge
-pap_policy_deployments_seconds_max{operation="deploy",status="FAILURE",} 0.0
-pap_policy_deployments_seconds_max{operation="undeploy",status="SUCCESS",} 0.0
-pap_policy_deployments_seconds_max{operation="deploy",status="SUCCESS",} 0.0
-pap_policy_deployments_seconds_max{operation="undeploy",status="FAILURE",} 0.0
-# HELP jvm_gc_overhead_percent An approximation of the percent of CPU time used by GC activities over the last lookback period or since monitoring began, whichever is shorter, in the range [0..1]
-# TYPE jvm_gc_overhead_percent gauge
-jvm_gc_overhead_percent 0.0
-# HELP jvm_buffer_memory_used_bytes An estimate of the memory that the Java virtual machine is using for this buffer pool
-# TYPE jvm_buffer_memory_used_bytes gauge
-jvm_buffer_memory_used_bytes{id="mapped - 'non-volatile memory'",} 0.0
-jvm_buffer_memory_used_bytes{id="mapped",} 0.0
-jvm_buffer_memory_used_bytes{id="direct",} 1544596.0
-# HELP spring_security_filterchains_header_after_total
-# TYPE spring_security_filterchains_header_after_total counter
-spring_security_filterchains_header_after_total{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168998.0
-# HELP jvm_gc_max_data_size_bytes Max size of long-lived heap memory pool
-# TYPE jvm_gc_max_data_size_bytes gauge
-jvm_gc_max_data_size_bytes 8.434745344E9
-# HELP spring_security_filterchains_authorization_before_total
-# TYPE spring_security_filterchains_authorization_before_total counter
-spring_security_filterchains_authorization_before_total{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168999.0
-# HELP jvm_compilation_time_ms_total The approximate accumulated elapsed time spent in compilation
-# TYPE jvm_compilation_time_ms_total counter
-jvm_compilation_time_ms_total{compiler="HotSpot 64-Bit Tiered Compilers",} 425964.0
-# HELP application_started_time_seconds Time taken to start the application
-# TYPE application_started_time_seconds gauge
-application_started_time_seconds{main_application_class="org.onap.policy.pap.main.PolicyPapApplication",} 32.135
-# HELP jdbc_connections_min Minimum number of idle connections in the pool.
-# TYPE jdbc_connections_min gauge
-jdbc_connections_min{name="dataSource",} 10.0
-# HELP spring_security_filterchains_context_servlet_before_total
-# TYPE spring_security_filterchains_context_servlet_before_total counter
-spring_security_filterchains_context_servlet_before_total{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168999.0
-# HELP hikaricp_connections_pending Pending threads
-# TYPE hikaricp_connections_pending gauge
-hikaricp_connections_pending{pool="HikariPool-1",} 0.0
-# HELP spring_security_filterchains_logout_before_total
-# TYPE spring_security_filterchains_logout_before_total counter
-spring_security_filterchains_logout_before_total{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168999.0
-# HELP executor_pool_size_threads The current number of threads in the pool
-# TYPE executor_pool_size_threads gauge
-executor_pool_size_threads{name="applicationTaskExecutor",} 0.0
-# HELP spring_security_filterchains_context_async_after_total
-# TYPE spring_security_filterchains_context_async_after_total counter
-spring_security_filterchains_context_async_after_total{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168998.0
-# HELP system_cpu_usage The "recent cpu usage" of the system the application is running in
-# TYPE system_cpu_usage gauge
-system_cpu_usage 0.02665967384267763
-# HELP spring_security_authorizations_active_seconds_max
-# TYPE spring_security_authorizations_active_seconds_max gauge
-spring_security_authorizations_active_seconds_max{spring_security_authentication_type="n/a",spring_security_authorization_decision="unknown",spring_security_object="request",} 0.0
-# HELP spring_security_authorizations_active_seconds
-# TYPE spring_security_authorizations_active_seconds summary
-spring_security_authorizations_active_seconds_active_count{spring_security_authentication_type="n/a",spring_security_authorization_decision="unknown",spring_security_object="request",} 0.0
-spring_security_authorizations_active_seconds_duration_sum{spring_security_authentication_type="n/a",spring_security_authorization_decision="unknown",spring_security_object="request",} 0.0
-# HELP jdbc_connections_idle Number of established but idle connections.
-# TYPE jdbc_connections_idle gauge
-jdbc_connections_idle{name="dataSource",} 10.0
-# HELP jdbc_connections_max Maximum number of active connections that can be allocated at the same time.
-# TYPE jdbc_connections_max gauge
-jdbc_connections_max{name="dataSource",} 10.0
-# HELP tomcat_sessions_active_max_sessions
-# TYPE tomcat_sessions_active_max_sessions gauge
-tomcat_sessions_active_max_sessions 2.0
-# HELP spring_security_filterchains_access_exceptions_after_total
-# TYPE spring_security_filterchains_access_exceptions_after_total counter
-spring_security_filterchains_access_exceptions_after_total{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168998.0
-# HELP process_files_open_files The open file descriptor count
-# TYPE process_files_open_files gauge
-process_files_open_files 30.0
-# HELP spring_security_filterchains_authentication_anonymous_after_total
-# TYPE spring_security_filterchains_authentication_anonymous_after_total counter
-spring_security_filterchains_authentication_anonymous_after_total{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168998.0
-# HELP executor_active_threads The approximate number of threads that are actively executing tasks
-# TYPE executor_active_threads gauge
-executor_active_threads{name="applicationTaskExecutor",} 0.0
-# HELP jvm_threads_states_threads The current number of threads
-# TYPE jvm_threads_states_threads gauge
-jvm_threads_states_threads{state="runnable",} 9.0
-jvm_threads_states_threads{state="blocked",} 0.0
-jvm_threads_states_threads{state="waiting",} 21.0
-jvm_threads_states_threads{state="timed-waiting",} 8.0
-jvm_threads_states_threads{state="new",} 0.0
-jvm_threads_states_threads{state="terminated",} 0.0
-# HELP logback_events_total Number of log events that were enabled by the effective log level
-# TYPE logback_events_total counter
-logback_events_total{level="warn",} 0.0
-logback_events_total{level="debug",} 0.0
-logback_events_total{level="error",} 76.0
-logback_events_total{level="trace",} 0.0
-logback_events_total{level="info",} 1846777.0
-# HELP executor_pool_max_threads The maximum allowed number of threads in the pool
-# TYPE executor_pool_max_threads gauge
-executor_pool_max_threads{name="applicationTaskExecutor",} 2.147483647E9
-# HELP spring_security_filterchains_requestcache_after_total
-# TYPE spring_security_filterchains_requestcache_after_total counter
-spring_security_filterchains_requestcache_after_total{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168998.0
-# HELP spring_security_filterchains_context_servlet_after_total
-# TYPE spring_security_filterchains_context_servlet_after_total counter
-spring_security_filterchains_context_servlet_after_total{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168998.0
-# HELP jvm_buffer_count_buffers An estimate of the number of buffers in the pool
-# TYPE jvm_buffer_count_buffers gauge
-jvm_buffer_count_buffers{id="mapped - 'non-volatile memory'",} 0.0
-jvm_buffer_count_buffers{id="mapped",} 0.0
-jvm_buffer_count_buffers{id="direct",} 16.0
-# HELP jvm_memory_usage_after_gc_percent The percentage of long-lived heap pool used after the last GC event, in the range [0..1]
-# TYPE jvm_memory_usage_after_gc_percent gauge
-jvm_memory_usage_after_gc_percent{area="heap",pool="long-lived",} 0.02585237978229115
-# HELP application_ready_time_seconds Time taken for the application to be ready to service requests
-# TYPE application_ready_time_seconds gauge
-application_ready_time_seconds{main_application_class="org.onap.policy.pap.main.PolicyPapApplication",} 32.272
-# HELP http_server_requests_active_seconds_max
-# TYPE http_server_requests_active_seconds_max gauge
-http_server_requests_active_seconds_max{exception="none",method="GET",outcome="SUCCESS",status="200",uri="UNKNOWN",} 0.293631789
-http_server_requests_active_seconds_max{exception="none",method="PUT",outcome="SUCCESS",status="200",uri="UNKNOWN",} 0.0
-http_server_requests_active_seconds_max{exception="none",method="DELETE",outcome="SUCCESS",status="200",uri="UNKNOWN",} 0.0
-http_server_requests_active_seconds_max{exception="none",method="POST",outcome="SUCCESS",status="200",uri="UNKNOWN",} 0.0
-# HELP http_server_requests_active_seconds
-# TYPE http_server_requests_active_seconds summary
-http_server_requests_active_seconds_active_count{exception="none",method="GET",outcome="SUCCESS",status="200",uri="UNKNOWN",} 1.0
-http_server_requests_active_seconds_duration_sum{exception="none",method="GET",outcome="SUCCESS",status="200",uri="UNKNOWN",} 0.293630483
-http_server_requests_active_seconds_active_count{exception="none",method="PUT",outcome="SUCCESS",status="200",uri="UNKNOWN",} 0.0
-http_server_requests_active_seconds_duration_sum{exception="none",method="PUT",outcome="SUCCESS",status="200",uri="UNKNOWN",} 0.0
-http_server_requests_active_seconds_active_count{exception="none",method="DELETE",outcome="SUCCESS",status="200",uri="UNKNOWN",} 0.0
-http_server_requests_active_seconds_duration_sum{exception="none",method="DELETE",outcome="SUCCESS",status="200",uri="UNKNOWN",} 0.0
-http_server_requests_active_seconds_active_count{exception="none",method="POST",outcome="SUCCESS",status="200",uri="UNKNOWN",} 0.0
-http_server_requests_active_seconds_duration_sum{exception="none",method="POST",outcome="SUCCESS",status="200",uri="UNKNOWN",} 0.0
-# HELP spring_security_filterchains_seconds_max
-# TYPE spring_security_filterchains_seconds_max gauge
-spring_security_filterchains_seconds_max{error="none",security_security_reached_filter_section="before",spring_security_filterchain_position="11",spring_security_filterchain_size="11",spring_security_reached_filter_name="AuthorizationFilter",} 0.272513877
-spring_security_filterchains_seconds_max{error="none",security_security_reached_filter_section="after",spring_security_filterchain_position="11",spring_security_filterchain_size="11",spring_security_reached_filter_name="DisableEncodeUrlFilter",} 0.001009437
-# HELP spring_security_filterchains_seconds
-# TYPE spring_security_filterchains_seconds summary
-spring_security_filterchains_seconds_count{error="none",security_security_reached_filter_section="before",spring_security_filterchain_position="11",spring_security_filterchain_size="11",spring_security_reached_filter_name="AuthorizationFilter",} 168999.0
-spring_security_filterchains_seconds_sum{error="none",security_security_reached_filter_section="before",spring_security_filterchain_position="11",spring_security_filterchain_size="11",spring_security_reached_filter_name="AuthorizationFilter",} 38579.546360899
-spring_security_filterchains_seconds_count{error="none",security_security_reached_filter_section="after",spring_security_filterchain_position="11",spring_security_filterchain_size="11",spring_security_reached_filter_name="DisableEncodeUrlFilter",} 168998.0
-spring_security_filterchains_seconds_sum{error="none",security_security_reached_filter_section="after",spring_security_filterchain_position="11",spring_security_filterchain_size="11",spring_security_reached_filter_name="DisableEncodeUrlFilter",} 17.300671502
-# HELP spring_security_filterchains_authorization_after_total
-# TYPE spring_security_filterchains_authorization_after_total counter
-spring_security_filterchains_authorization_after_total{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168992.0
-# HELP spring_security_filterchains_access_exceptions_before_total
-# TYPE spring_security_filterchains_access_exceptions_before_total counter
-spring_security_filterchains_access_exceptions_before_total{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168999.0
-# HELP spring_security_authorizations_seconds
-# TYPE spring_security_authorizations_seconds summary
-spring_security_authorizations_seconds_count{error="AccessDeniedException",spring_security_authentication_type="AnonymousAuthenticationToken",spring_security_authorization_decision="false",spring_security_object="request",} 6.0
-spring_security_authorizations_seconds_sum{error="AccessDeniedException",spring_security_authentication_type="AnonymousAuthenticationToken",spring_security_authorization_decision="false",spring_security_object="request",} 0.020998153
-spring_security_authorizations_seconds_count{error="none",spring_security_authentication_type="UsernamePasswordAuthenticationToken",spring_security_authorization_decision="true",spring_security_object="request",} 168993.0
-spring_security_authorizations_seconds_sum{error="none",spring_security_authentication_type="UsernamePasswordAuthenticationToken",spring_security_authorization_decision="true",spring_security_object="request",} 4.092135265
-# HELP spring_security_authorizations_seconds_max
-# TYPE spring_security_authorizations_seconds_max gauge
-spring_security_authorizations_seconds_max{error="AccessDeniedException",spring_security_authentication_type="AnonymousAuthenticationToken",spring_security_authorization_decision="false",spring_security_object="request",} 0.012322361
-spring_security_authorizations_seconds_max{error="none",spring_security_authentication_type="UsernamePasswordAuthenticationToken",spring_security_authorization_decision="true",spring_security_object="request",} 2.03312E-4
-# HELP spring_security_filterchains_session_url_encoding_before_total
-# TYPE spring_security_filterchains_session_url_encoding_before_total counter
-spring_security_filterchains_session_url_encoding_before_total{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168999.0
-# HELP spring_security_filterchains_header_before_total
-# TYPE spring_security_filterchains_header_before_total counter
-spring_security_filterchains_header_before_total{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 168999.0
-# HELP executor_queued_tasks The approximate number of tasks that are queued for execution
-# TYPE executor_queued_tasks gauge
-executor_queued_tasks{name="applicationTaskExecutor",} 0.0
-
diff --git a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_metrics_before_72h.txt b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_metrics_before_72h.txt
deleted file mode 100644
index df6df25c..00000000
--- a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_metrics_before_72h.txt
+++ /dev/null
@@ -1,228 +0,0 @@
-# HELP hikaricp_connections_acquire_seconds Connection acquire time
-# TYPE hikaricp_connections_acquire_seconds summary
-hikaricp_connections_acquire_seconds_count{pool="HikariPool-1",} 39.0
-hikaricp_connections_acquire_seconds_sum{pool="HikariPool-1",} 0.033820135
-# HELP hikaricp_connections_acquire_seconds_max Connection acquire time
-# TYPE hikaricp_connections_acquire_seconds_max gauge
-hikaricp_connections_acquire_seconds_max{pool="HikariPool-1",} 0.001545051
-# HELP hikaricp_connections_idle Idle connections
-# TYPE hikaricp_connections_idle gauge
-hikaricp_connections_idle{pool="HikariPool-1",} 10.0
-# HELP hikaricp_connections_min Min connections
-# TYPE hikaricp_connections_min gauge
-hikaricp_connections_min{pool="HikariPool-1",} 10.0
-# HELP jvm_gc_pause_seconds Time spent in GC pause
-# TYPE jvm_gc_pause_seconds summary
-jvm_gc_pause_seconds_count{action="end of minor GC",cause="G1 Evacuation Pause",gc="G1 Young Generation",} 1.0
-jvm_gc_pause_seconds_sum{action="end of minor GC",cause="G1 Evacuation Pause",gc="G1 Young Generation",} 0.037
-# HELP jvm_gc_pause_seconds_max Time spent in GC pause
-# TYPE jvm_gc_pause_seconds_max gauge
-jvm_gc_pause_seconds_max{action="end of minor GC",cause="G1 Evacuation Pause",gc="G1 Young Generation",} 0.037
-# HELP spring_security_authentications_seconds_max
-# TYPE spring_security_authentications_seconds_max gauge
-spring_security_authentications_seconds_max{authentication_failure_type="n/a",authentication_method="ProviderManager",authentication_request_type="UsernamePasswordAuthenticationToken",authentication_result_type="UsernamePasswordAuthenticationToken",error="none",} 0.320533592
-# HELP spring_security_authentications_seconds
-# TYPE spring_security_authentications_seconds summary
-spring_security_authentications_seconds_count{authentication_failure_type="n/a",authentication_method="ProviderManager",authentication_request_type="UsernamePasswordAuthenticationToken",authentication_result_type="UsernamePasswordAuthenticationToken",error="none",} 1.0
-spring_security_authentications_seconds_sum{authentication_failure_type="n/a",authentication_method="ProviderManager",authentication_request_type="UsernamePasswordAuthenticationToken",authentication_result_type="UsernamePasswordAuthenticationToken",error="none",} 0.320533592
-# HELP tomcat_sessions_created_sessions_total
-# TYPE tomcat_sessions_created_sessions_total counter
-tomcat_sessions_created_sessions_total 2.0
-# HELP disk_total_bytes Total space for path
-# TYPE disk_total_bytes gauge
-disk_total_bytes{path="/opt/app/policy/pap/bin/.",} 1.0386530304E11
-# HELP tomcat_sessions_rejected_sessions_total
-# TYPE tomcat_sessions_rejected_sessions_total counter
-tomcat_sessions_rejected_sessions_total 0.0
-# HELP jvm_classes_loaded_classes The number of classes that are currently loaded in the Java virtual machine
-# TYPE jvm_classes_loaded_classes gauge
-jvm_classes_loaded_classes 18927.0
-# HELP hikaricp_connections_usage_seconds Connection usage time
-# TYPE hikaricp_connections_usage_seconds summary
-hikaricp_connections_usage_seconds_count{pool="HikariPool-1",} 39.0
-hikaricp_connections_usage_seconds_sum{pool="HikariPool-1",} 9.34
-# HELP hikaricp_connections_usage_seconds_max Connection usage time
-# TYPE hikaricp_connections_usage_seconds_max gauge
-hikaricp_connections_usage_seconds_max{pool="HikariPool-1",} 0.052
-# HELP jvm_classes_unloaded_classes_total The total number of classes unloaded since the Java virtual machine has started execution
-# TYPE jvm_classes_unloaded_classes_total counter
-jvm_classes_unloaded_classes_total 0.0
-# HELP hikaricp_connections_active Active connections
-# TYPE hikaricp_connections_active gauge
-hikaricp_connections_active{pool="HikariPool-1",} 0.0
-# HELP spring_security_filterchains_context_async_before_total
-# TYPE spring_security_filterchains_context_async_before_total counter
-spring_security_filterchains_context_async_before_total{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 5.0
-# HELP process_start_time_seconds Start time of the process since unix epoch.
-# TYPE process_start_time_seconds gauge
-process_start_time_seconds 1.700139959198E9
-# HELP hikaricp_connections_max Max connections
-# TYPE hikaricp_connections_max gauge
-hikaricp_connections_max{pool="HikariPool-1",} 10.0
-# HELP spring_security_http_secured_requests_active_seconds
-# TYPE spring_security_http_secured_requests_active_seconds summary
-spring_security_http_secured_requests_active_seconds_active_count 1.0
-spring_security_http_secured_requests_active_seconds_duration_sum 0.199193291
-# HELP spring_security_http_secured_requests_active_seconds_max
-# TYPE spring_security_http_secured_requests_active_seconds_max gauge
-spring_security_http_secured_requests_active_seconds_max 0.1992777
-# HELP jvm_memory_used_bytes The amount of used memory
-# TYPE jvm_memory_used_bytes gauge
-jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 2.1837696E7
-jvm_memory_used_bytes{area="heap",id="G1 Survivor Space",} 1.2036896E7
-jvm_memory_used_bytes{area="heap",id="G1 Old Gen",} 4.231168E7
-jvm_memory_used_bytes{area="nonheap",id="Metaspace",} 9.6942648E7
-jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 1444224.0
-jvm_memory_used_bytes{area="heap",id="G1 Eden Space",} 3.7748736E7
-jvm_memory_used_bytes{area="nonheap",id="Compressed Class Space",} 1.2827304E7
-jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 9169024.0
-# HELP spring_security_filterchains_authentication_basic_after_total
-# TYPE spring_security_filterchains_authentication_basic_after_total counter
-spring_security_filterchains_authentication_basic_after_total{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 4.0
-# HELP jvm_gc_memory_promoted_bytes_total Count of positive increases in the size of the old generation memory pool before GC to after GC
-# TYPE jvm_gc_memory_promoted_bytes_total counter
-jvm_gc_memory_promoted_bytes_total 2964480.0
-# HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset
-# TYPE jvm_threads_peak_threads gauge
-jvm_threads_peak_threads 37.0
-# HELP process_cpu_usage The "recent cpu usage" for the Java Virtual Machine process
-# TYPE process_cpu_usage gauge
-process_cpu_usage 0.0
-# HELP executor_completed_tasks_total The approximate total number of tasks that have completed execution
-# TYPE executor_completed_tasks_total counter
-executor_completed_tasks_total{name="applicationTaskExecutor",} 0.0
-# HELP hikaricp_connections_creation_seconds_max Connection creation time
-# TYPE hikaricp_connections_creation_seconds_max gauge
-hikaricp_connections_creation_seconds_max{pool="HikariPool-1",} 0.0
-# HELP hikaricp_connections_creation_seconds Connection creation time
-# TYPE hikaricp_connections_creation_seconds summary
-hikaricp_connections_creation_seconds_count{pool="HikariPool-1",} 0.0
-hikaricp_connections_creation_seconds_sum{pool="HikariPool-1",} 0.0
-# HELP jvm_threads_started_threads_total The total number of application threads started in the JVM
-# TYPE jvm_threads_started_threads_total counter
-jvm_threads_started_threads_total 41.0
-# HELP system_cpu_count The number of processors available to the Java virtual machine
-# TYPE system_cpu_count gauge
-system_cpu_count 16.0
-# HELP spring_security_filterchains_session_url_encoding_after_total
-# TYPE spring_security_filterchains_session_url_encoding_after_total counter
-spring_security_filterchains_session_url_encoding_after_total{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 4.0
-# HELP process_uptime_seconds The uptime of the Java virtual machine
-# TYPE process_uptime_seconds gauge
-process_uptime_seconds 824.109
-# HELP pap_policy_deployments_seconds Timer for HTTP request to deploy/undeploy a policy
-# TYPE pap_policy_deployments_seconds summary
-pap_policy_deployments_seconds_count{operation="deploy",status="FAILURE",} 0.0
-pap_policy_deployments_seconds_sum{operation="deploy",status="FAILURE",} 0.0
-pap_policy_deployments_seconds_count{operation="undeploy",status="SUCCESS",} 0.0
-pap_policy_deployments_seconds_sum{operation="undeploy",status="SUCCESS",} 0.0
-pap_policy_deployments_seconds_count{operation="deploy",status="SUCCESS",} 0.0
-pap_policy_deployments_seconds_sum{operation="deploy",status="SUCCESS",} 0.0
-pap_policy_deployments_seconds_count{operation="undeploy",status="FAILURE",} 0.0
-pap_policy_deployments_seconds_sum{operation="undeploy",status="FAILURE",} 0.0
-# HELP pap_policy_deployments_seconds_max Timer for HTTP request to deploy/undeploy a policy
-# TYPE pap_policy_deployments_seconds_max gauge
-pap_policy_deployments_seconds_max{operation="deploy",status="FAILURE",} 0.0
-pap_policy_deployments_seconds_max{operation="undeploy",status="SUCCESS",} 0.0
-pap_policy_deployments_seconds_max{operation="deploy",status="SUCCESS",} 0.0
-pap_policy_deployments_seconds_max{operation="undeploy",status="FAILURE",} 0.0
-# HELP jvm_gc_overhead_percent An approximation of the percent of CPU time used by GC activities over the last lookback period or since monitoring began, whichever is shorter, in the range [0..1]
-# TYPE jvm_gc_overhead_percent gauge
-jvm_gc_overhead_percent 0.0
-# HELP jvm_buffer_memory_used_bytes An estimate of the memory that the Java virtual machine is using for this buffer pool
-# TYPE jvm_buffer_memory_used_bytes gauge
-jvm_buffer_memory_used_bytes{id="mapped - 'non-volatile memory'",} 0.0
-jvm_buffer_memory_used_bytes{id="mapped",} 0.0
-jvm_buffer_memory_used_bytes{id="direct",} 114688.0
-# HELP executor_queue_remaining_tasks The number of additional elements that this queue can ideally accept without blocking
-# TYPE executor_queue_remaining_tasks gauge
-executor_queue_remaining_tasks{name="applicationTaskExecutor",} 2.147483647E9
-# HELP hikaricp_connections Total connections
-# TYPE hikaricp_connections gauge
-hikaricp_connections{pool="HikariPool-1",} 10.0
-# HELP spring_security_filterchains_header_after_total
-# TYPE spring_security_filterchains_header_after_total counter
-spring_security_filterchains_header_after_total{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 4.0
-# HELP tomcat_sessions_expired_sessions_total
-# TYPE tomcat_sessions_expired_sessions_total counter
-tomcat_sessions_expired_sessions_total 0.0
-# HELP jvm_gc_max_data_size_bytes Max size of long-lived heap memory pool
-# TYPE jvm_gc_max_data_size_bytes gauge
-jvm_gc_max_data_size_bytes 8.434745344E9
-# HELP tomcat_sessions_active_current_sessions
-# TYPE tomcat_sessions_active_current_sessions gauge
-tomcat_sessions_active_current_sessions 2.0
-# HELP spring_security_filterchains_authorization_before_total
-# TYPE spring_security_filterchains_authorization_before_total counter
-spring_security_filterchains_authorization_before_total{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 5.0
-# HELP jvm_compilation_time_ms_total The approximate accumulated elapsed time spent in compilation
-# TYPE jvm_compilation_time_ms_total counter
-jvm_compilation_time_ms_total{compiler="HotSpot 64-Bit Tiered Compilers",} 136782.0
-# HELP hikaricp_connections_timeout_total Connection timeout total count
-# TYPE hikaricp_connections_timeout_total counter
-hikaricp_connections_timeout_total{pool="HikariPool-1",} 0.0
-# HELP application_started_time_seconds Time taken to start the application
-# TYPE application_started_time_seconds gauge
-application_started_time_seconds{main_application_class="org.onap.policy.pap.main.PolicyPapApplication",} 32.135
-# HELP jvm_threads_live_threads The current number of live threads including both daemon and non-daemon threads
-# TYPE jvm_threads_live_threads gauge
-jvm_threads_live_threads 37.0
-# HELP spring_security_filterchains_active_seconds_max
-# TYPE spring_security_filterchains_active_seconds_max gauge
-spring_security_filterchains_active_seconds_max{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 0.0
-spring_security_filterchains_active_seconds_max{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 0.0
-# HELP spring_security_filterchains_active_seconds
-# TYPE spring_security_filterchains_active_seconds summary
-spring_security_filterchains_active_seconds_active_count{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 0.0
-spring_security_filterchains_active_seconds_duration_sum{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 0.0
-spring_security_filterchains_active_seconds_active_count{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 0.0
-spring_security_filterchains_active_seconds_duration_sum{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 0.0
-# HELP jdbc_connections_min Minimum number of idle connections in the pool.
-# TYPE jdbc_connections_min gauge
-jdbc_connections_min{name="dataSource",} 10.0
-# HELP spring_security_filterchains_context_servlet_before_total
-# TYPE spring_security_filterchains_context_servlet_before_total counter
-spring_security_filterchains_context_servlet_before_total{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 5.0
-# HELP hikaricp_connections_pending Pending threads
-# TYPE hikaricp_connections_pending gauge
-hikaricp_connections_pending{pool="HikariPool-1",} 0.0
-# HELP spring_security_filterchains_logout_after_total
-# TYPE spring_security_filterchains_logout_after_total counter
-spring_security_filterchains_logout_after_total{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 4.0
-# HELP spring_security_filterchains_logout_before_total
-# TYPE spring_security_filterchains_logout_before_total counter
-spring_security_filterchains_logout_before_total{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 5.0
-# HELP jvm_info JVM version info
-# TYPE jvm_info gauge
-jvm_info{runtime="OpenJDK Runtime Environment",vendor="Alpine",version="17.0.9+8-alpine-r0",} 1.0
-# HELP disk_free_bytes Usable space for path
-# TYPE disk_free_bytes gauge
-disk_free_bytes{path="/opt/app/policy/pap/bin/.",} 9.1789115392E10
-# HELP spring_security_authentications_active_seconds
-# TYPE spring_security_authentications_active_seconds summary
-spring_security_authentications_active_seconds_active_count{authentication_failure_type="n/a",authentication_method="ProviderManager",authentication_request_type="UsernamePasswordAuthenticationToken",authentication_result_type="n/a",} 0.0
-spring_security_authentications_active_seconds_duration_sum{authentication_failure_type="n/a",authentication_method="ProviderManager",authentication_request_type="UsernamePasswordAuthenticationToken",authentication_result_type="n/a",} 0.0
-# HELP spring_security_authentications_active_seconds_max
-# TYPE spring_security_authentications_active_seconds_max gauge
-spring_security_authentications_active_seconds_max{authentication_failure_type="n/a",authentication_method="ProviderManager",authentication_request_type="UsernamePasswordAuthenticationToken",authentication_result_type="n/a",} 0.0
-# HELP jvm_threads_daemon_threads The current number of live daemon threads
-# TYPE jvm_threads_daemon_threads gauge
-jvm_threads_daemon_threads 28.0
-# HELP executor_pool_size_threads The current number of threads in the pool
-# TYPE executor_pool_size_threads gauge
-executor_pool_size_threads{name="applicationTaskExecutor",} 0.0
-# HELP spring_security_filterchains_context_async_after_total
-# TYPE spring_security_filterchains_context_async_after_total counter
-spring_security_filterchains_context_async_after_total{security_security_reached_filter_section="after",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 4.0
-# HELP system_cpu_usage The "recent cpu usage" of the system the application is running in
-# TYPE system_cpu_usage gauge
-system_cpu_usage 0.1111111111111111
-# HELP spring_security_filterchains_context_holder_before_total
-# TYPE spring_security_filterchains_context_holder_before_total counter
-spring_security_filterchains_context_holder_before_total{security_security_reached_filter_section="before",spring_security_filterchain_position="0",spring_security_filterchain_size="0",spring_security_reached_filter_name="none",} 5.0
-# HELP spring_security_authorizations_active_seconds_max
-# TYPE spring_security_authorizations_active_seconds_max gauge
-spring_security_authorizations_active_seconds_max{spring_security_authentication_type="n/a",spring_security_authorization_decision="unknown",spring_security_object="request",} 0.0
-# HELP spring_security_authorizations_active_seconds
-# TYPE spring_security_authorizations_active_seconds summary
-spring_security_authorizations_active_seconds_active_count{spring_security_authentication_type="n/a",spring_security_authorization_decision="unknown",spring_security_object="request",} 0.0
diff --git a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_performance_jmeter_results.png b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_performance_jmeter_results.png
deleted file mode 100644
index e061ba47..00000000
--- a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_performance_jmeter_results.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_stability_jmeter_results.png b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_stability_jmeter_results.png
deleted file mode 100644
index c1c04f92..00000000
--- a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_stability_jmeter_results.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_stats_after_72h.png b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_stats_after_72h.png
deleted file mode 100644
index 7c56f74a..00000000
--- a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_stats_after_72h.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_stats_before_72h.png b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_stats_before_72h.png
deleted file mode 100644
index 0984521f..00000000
--- a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_stats_before_72h.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_stats_during_72h.png b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_stats_during_72h.png
deleted file mode 100644
index 1d86b175..00000000
--- a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_stats_during_72h.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/pap-s3p.rst b/docs/development/devtools/testing/s3p/pap-s3p.rst
deleted file mode 100644
index c658cbc5..00000000
--- a/docs/development/devtools/testing/s3p/pap-s3p.rst
+++ /dev/null
@@ -1,198 +0,0 @@
-.. This work is licensed under a
-.. Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-.. _pap-s3p-label:
-
-.. toctree::
- :maxdepth: 2
-
-Policy PAP component
-~~~~~~~~~~~~~~~~~~~~
-
-Both the Performance and the Stability tests were executed by performing requests
-against Policy components installed as part of a full ONAP OOM deployment or a docker deployment in Nordix lab.
-
-Setup Details
-+++++++++++++
-
-- Policy-PAP along with all policy components deployed as part of a Policy docker deployment.
-- A second instance of APEX-PDP is spun up in the setup. Update the configuration file (OnapPfConfig.json) such that the PDP can register to the new group created by PAP in the tests.
-- Both tests were run via jMeter.
-
-Stability Test of PAP
-+++++++++++++++++++++
-
-Test Plan
----------
-The 72 hours stability test ran the following steps sequentially in a single threaded loop.
-
-Setup Phase (steps running only once)
-"""""""""""""""""""""""""""""""""""""
-
-- **Create Policy for defaultGroup** - creates an operational policy using policy/api component
-- **Create NodeTemplate metadata for sampleGroup policy** - creates a node template containing metadata using policy/api component
-- **Create Policy for sampleGroup** - creates an operational policy that refers to the metadata created above using policy/api component
-- **Change defaultGroup state to ACTIVE** - changes the state of defaultGroup PdpGroup to ACTIVE
-- **Create/Update PDP Group** - creates a new PDPGroup named sampleGroup.
- A second instance of the PDP that is already spun up gets registered to this new group
-- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that both PdpGroups are in ACTIVE state.
-
-PAP Test Flow (steps running in a loop for 72 hours)
-""""""""""""""""""""""""""""""""""""""""""""""""""""
-
-- **Check Health** - checks the health status of pap
-- **PAP Metrics** - Fetch prometheus metrics before the deployment/undeployment cycle
- Save different counters such as deploy/undeploy-success/failure counters at API and engine level.
-- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that both PdpGroups are in the ACTIVE state.
-- **Deploy Policy for defaultGroup** - deploys the policy defaultDomain to defaultGroup
-- **Check status of defaultGroup policy** - checks the status of defaultGroup PdpGroup with the defaultDomain policy 1.0.0.
-- **Check PdpGroup Audit defaultGroup** - checks the audit information for the defaultGroup PdpGroup.
-- **Check PdpGroup Audit Policy (defaultGroup)** - checks the audit information for the defaultGroup PdpGroup with the defaultDomain policy 1.0.0.
-- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that 2 PdpGroups are in the ACTIVE state and defaultGroup has a policy deployed on it.
-- **Deployment Update for sampleGroup policy** - deploys the policy sampleDomain in sampleGroup PdpGroup using pap api
-- **Check status of sampleGroup** - checks the status of the sampleGroup PdpGroup.
-- **Check status of PdpGroups** - checks the status of both PdpGroups.
-- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that the defaultGroup has a policy defaultDomain deployed on it and sampleGroup has policy sampleDomain deployed on it.
-- **Check Audit** - checks the audit information for all PdpGroups.
-- **Check Consolidated Health** - checks the consolidated health status of all policy components.
-- **Check Deployed Policies** - checks for all the deployed policies using pap api.
-- **Undeploy policy in sampleGroup** - undeploys the policy sampleDomain from sampleGroup PdpGroup using pap api
-- **Undeploy policy in defaultGroup** - undeploys the policy defaultDomain from PdpGroup
-- **Check status of policies** - checks the status of all policies and make sure both the policies are undeployed
-- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that PdpGroup is in the PASSIVE state.
-- **PAP Metrics after deployments** - Fetch prometheus metrics after the deployment/undeployment cycle
- Save the new counter values such as deploy/undeploy-success/failure counters at API and engine level, and check that the deploySuccess and undeploySuccess counters are increased by 2.
-
-.. Note::
- To avoid putting a large Constant Timer value after every deployment/undeployment, the status API is polled until the deployment/undeployment
- is successfully completed, or until a timeout. This is to make sure that the operation is completed successfully and the PDPs gets enough time to respond back.
- Otherwise, before the deployment is marked successful by PAP, an undeployment could be triggered as part of other tests,
- and the operation's corresponding prometheus counter at engine level will not get updated.
-
-Teardown Phase (steps running only once after PAP Test Flow is completed)
-"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
-
-- **Change state to PASSIVE(sampleGroup)** - changes the state of sampleGroup PdpGroup to PASSIVE
-- **Delete PdpGroup sampleGroup** - delete the sampleGroup PdpGroup using pap api
-- **Change State to PASSIVE(defaultGroup)** - changes the state of defaultGroup PdpGroup to PASSIVE
-- **Delete policy created for defaultGroup** - deletes the operational policy defaultDomain using policy/api component
-- **Delete Policy created for sampleGroup** - deletes the operational policy sampleDomain using policy/api component
-- **Delete Nodetemplate metadata for sampleGroup policy** - deleted the nodetemplate containing metadata for sampleGroup policy
-
-The following steps can be used to configure the parameters of test plan.
-
-- **HTTP Authorization Manager** - used to store user/password authentication details.
-- **HTTP Header Manager** - used to store headers which will be used for making HTTP requests.
-- **User Defined Variables** - used to store following user defined parameters.
-
-=========== ===================================================================
- **Name** **Description**
-=========== ===================================================================
- PAP_HOST IP Address or host name of PAP component
- PAP_PORT Port number of PAP for making REST API calls
- API_HOST IP Address or host name of API component
- API_PORT Port number of API for making REST API calls
-=========== ===================================================================
-
-The test was run in the background via "nohup", to prevent it from being interrupted:
-
-.. code-block:: bash
-
- nohup apache-jmeter-5.6.2/bin/jmeter -n -t stability.jmx -l stabilityTestResults.jtl &
-
-Test Results
-------------
-
-**Summary**
-
-Stability test plan was triggered for 72 hours. There were no failures during the 72 hours test.
-
-
-**Test Statistics**
-
-======================= ================= ================== ==================================
-**Total # of requests** **Success %** **Error %** **Average time taken per request**
-======================= ================= ================== ==================================
- 170212 100 % 0.00 % 419 ms
-======================= ================= ================== ==================================
-
-
-**JMeter Screenshot**
-
-.. image:: pap-s3p-results/pap_stability_jmeter_results.png
-
-**Memory and CPU usage**
-
-The memory and CPU usage can be monitored by running "docker stats" command in the PAP container.
-A snapshot is taken before, during and after test execution to monitor the changes in resource utilization.
-Prometheus metrics is also collected before and after the test execution.
-
-Memory and CPU usage before test execution:
-
-.. image:: pap-s3p-results/pap_stats_before_72h.png
-
-:download:`Prometheus metrics before 72h test <pap-s3p-results/pap_metrics_before_72h.txt>`
-
-Memory and CPU usage during test execution:
-
-.. image:: pap-s3p-results/pap_stats_during_72h.png
-
-Memory and CPU usage after test execution:
-
-.. image:: pap-s3p-results/pap_stats_after_72h.png
-
-:download:`Prometheus metrics after 72h test <pap-s3p-results/pap_metrics_after_72h.txt>`
-
-Performance Test of PAP
-++++++++++++++++++++++++
-
-Introduction
-------------
-
-Performance test of PAP has the goal of testing the min/avg/max processing time and rest call throughput for all the requests with multiple requests at the same time.
-
-Setup Details
--------------
-
-The performance test is performed on a similar setup as Stability test. The JMeter VM will be sending a large number of REST requests to the PAP component and collecting the statistics.
-
-
-Test Plan
----------
-
-Performance test plan is the same as the stability test plan above except for the few differences listed below.
-
-- Increase the number of threads up to 10 (simulating 10 users' behaviours at the same time).
-- Reduce the test time to 2 hours.
-- Usage of counters (simulating each user) to create different pdpGroups, update their state and later delete them.
-- Removed the tests to deploy policies to newly created groups as this will need a larger setup with multiple pdps registered to each group, which will also slow down the performance test with the time needed for registration process etc.
-- Usage of counters (simulating each user) to create different drools policies and deploy them to defaultGroup.
- In the test, a thread count of 10 is used resulting in 10 different drools policies getting deployed and undeployed continuously for 2 hours.
- Other standard operations like checking the deployment status of policies, checking the metrics, health etc remains.
-
-Run Test
---------
-
-Running/Triggering the performance test will be the same as the stability test. That is, launch JMeter pointing to corresponding *.jmx* test plan. The *API_HOST* , *API_PORT* , *PAP_HOST* , *PAP_PORT* are already set up in *.jmx*.
-
-.. code-block:: bash
-
- nohup apache-jmeter-5.6.2/bin/jmeter -n -t performance.jmx -l performanceTestResults.jtl &
-
-Test Results
-------------
-
-Test results are shown as below.
-
-**Test Statistics**
-
-======================= ================= ================== ==================================
-**Total # of requests** **Success %** **Error %** **Average time taken per request**
-======================= ================= ================== ==================================
-48093 100 % 0.00 % 1116 ms
-======================= ================= ================== ==================================
-
-**JMeter Screenshot**
-
-.. image:: pap-s3p-results/pap_performance_jmeter_results.png
diff --git a/docs/development/devtools/testing/s3p/run-s3p.rst b/docs/development/devtools/testing/s3p/run-s3p.rst
index 17eba32a..1ac88442 100644
--- a/docs/development/devtools/testing/s3p/run-s3p.rst
+++ b/docs/development/devtools/testing/s3p/run-s3p.rst
@@ -6,11 +6,11 @@ Running the Policy Framework S3P Tests
Per release, the policy framework team perform stability and performance tests per component of the policy framework.
This testing work involves performing a series of test on a full OOM deployment and updating the various test plans to work towards the given deployment.
-This work can take some time to setup before performing any tests to begin with.
+This work can take some time to set up before any tests can begin.
For stability testing, a tool called JMeter is used to trigger a series of tests for a period of 72 hours which has to be manually initiated and monitored by the tester.
-Likewise, with the performance tests, but in this case for ~2 hours.
-As part of the work to make to automate this process a script can be now triggered to bring up a microk8s cluster on a VM, install JMeter, alter the cluster info to match the JMX test plans for JMeter to trigger and gather results at the end.
-These S3P tests will be triggered for a shorter period as part of the CSITs to prove the stability and performance of our components.
+Likewise, the performance tests run in the same manner but for a shorter time of ~2 hours.
+As part of the work to automate this process, a script can now be triggered to bring up a microk8s cluster on a VM, install JMeter, alter the cluster info to match the JMX test plans, run them, and gather the results at the end.
+These S3P tests will be triggered for a shorter period as part of the GHAs to prove the stability and performance of our components.
There has been recent work completed to trigger our CSIT tests in a K8s environment.
As part of this work, a script has been created to bring up a microk8s cluster for testing purposes which includes all necessary components for our policy framework testing.
@@ -19,34 +19,15 @@ Once this cluster is brought up, a script is called to alter the cluster.
The IPS and PORTS of our policy components are set by this script to ensure consistency in the test plans.
JMeter is installed and the S3P test plans are triggered to run by their respective components.
-.. code-block:: bash
- :caption: Start S3P Script
+`run-s3p-tests.sh <https://github.com/onap/policy-docker/blob/master/csit/run-s3p-tests.sh>`_
- #===MAIN===#
- if [ -z "${WORKSPACE}" ]; then
- export WORKSPACE=$(git rev-parse --show-toplevel)
- fi
- export TESTDIR=${WORKSPACE}/testsuites
- export API_PERF_TEST_FILE=$TESTDIR/performance/src/main/resources/testplans/policy_api_performance.jmx
- export API_STAB_TEST_FILE=$TESTDIR/stability/src/main/resources/testplans/policy_api_stability.jmx
- if [ $1 == "run" ]
- then
- mkdir automate-performance;cd automate-performance;
- git clone "https://gerrit.onap.org/r/policy/docker"
- cd docker/csit
- if [ $2 == "performance" ]
- then
- bash start-s3p-tests.sh run $API_PERF_TEST_FILE;
- elif [ $2 == "stability" ]
- then
- bash start-s3p-tests.sh run $API_STAB_TEST_FILE;
- else
- echo "echo Invalid arguments provided. Usage: $0 [option..] {performance | stability}"
- fi
- else
- echo "Invalid arguments provided. Usage: $0 [option..] {run | uninstall}"
- fi
+This script automates the setup, execution, and teardown of S3P tests for policy components.
+It initializes a Kubernetes environment, installs Apache JMeter for running test plans, and executes specified JMX test files.
+The script logs all operations, tracks errors, warnings, and processed files, and provides a summary report upon completion.
+It includes options to either run tests (``test <jmx_file>``) or clean up the environment (``clean``). The ``clean`` option uninstalls the Kubernetes cluster and removes temporary resources.
+The script also ensures proper resource usage tracking and error handling throughout its execution.
-This script is triggered by each component.
-It will export the performance and stability testplans and trigger the start-s3p-test.sh script which will perform the steps to automatically run the s3p tests.
+`run-s3p-test.sh <https://github.com/onap/policy-api/blob/master/testsuites/run-s3p-test.sh>`_
+In summary, this script automates running performance or stability tests for a Policy Framework component by setting up necessary directories, cloning the required docker repository, and executing predefined test plans.
+It also provides a clean-up option to remove resources after testing. \ No newline at end of file
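
The mode-to-testplan mapping that the old inline snippet performed (and that ``run-s3p-test.sh`` now handles) can be sketched as a small helper. This is an illustrative sketch only: the directory layout and usage string are assumptions based on the script shown in earlier revisions; consult the linked scripts for the authoritative behaviour.

```shell
#!/bin/bash
# select_testplan: map a test mode to its JMX plan path, mirroring the
# argument checks the previous inline snippet performed. Paths are
# assumptions; see run-s3p-test.sh in each component repo for the real ones.
select_testplan() {
  local testdir=$1 mode=$2
  case "$mode" in
    performance) echo "$testdir/performance/src/main/resources/testplans/policy_api_performance.jmx" ;;
    stability)   echo "$testdir/stability/src/main/resources/testplans/policy_api_stability.jmx" ;;
    *) echo "Usage: run-s3p-test.sh run {performance|stability}" >&2; return 1 ;;
  esac
}

# Example: hand the chosen plan to the docker-repo runner.
plan=$(select_testplan "${WORKSPACE:-.}/testsuites" stability) \
  && echo "Would run: run-s3p-tests.sh test $plan"
```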
diff --git a/docs/development/devtools/testing/s3p/s3p-test-overview.rst b/docs/development/devtools/testing/s3p/s3p-test-overview.rst
new file mode 100644
index 00000000..f79ba921
--- /dev/null
+++ b/docs/development/devtools/testing/s3p/s3p-test-overview.rst
@@ -0,0 +1,118 @@
+.. This work is licensed under a
+.. Creative Commons Attribution 4.0 International License.
+.. http://creativecommons.org/licenses/by/4.0
+
+Policy Framework S3P Tests Overview
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. contents::
+ :depth: 2
+
+Starting with the Oslo release of the Policy Framework, S3P tests are now triggered differently.
+
+Our S3P tests for each component run automatically every Monday. This includes both performance and stability tests. These tests are triggered in a GitHub Actions environment, and the results for each test can be found under the "Actions" tab in the GitHub repository for each component.
+
+Stability and Performance Test Workflows
+----------------------------------------
+
+Each component of the Policy Framework contains two workflow files in the ``.github/workflows`` directory:
+- ``gerrit-{componentName}-performance.yaml``
+- ``gerrit-{componentName}-stability.yaml``
+
+.. image:: images/workflows.png
+
+An example of the configuration for one of these files is shown below:
+
+.. code-block:: yaml
+
+ name: policy-api-stability-test
+
+ on:
+ workflow_dispatch:
+ inputs:
+ GERRIT_BRANCH:
+ description: 'Branch that the change is against'
+ required: true
+ type: string
+ GERRIT_CHANGE_ID:
+ description: 'The ID for the change'
+ required: true
+ type: string
+ GERRIT_CHANGE_NUMBER:
+ description: 'The Gerrit change number'
+ required: true
+ type: string
+ GERRIT_CHANGE_URL:
+ description: 'URL of the change'
+ required: true
+ type: string
+ GERRIT_EVENT_TYPE:
+ description: 'The type of Gerrit event'
+ required: true
+ type: string
+ GERRIT_PATCHSET_NUMBER:
+ description: 'The patch number for the change'
+ required: true
+ type: string
+ GERRIT_PATCHSET_REVISION:
+ description: 'The SHA of the revision'
+ required: true
+ type: string
+ GERRIT_PROJECT:
+ description: 'The project in Gerrit'
+ required: true
+ type: string
+ GERRIT_REFSPEC:
+ description: 'The Gerrit refspec for the change'
+ required: true
+ type: string
+ branch_protection_rule:
+ # Ensures that the "Maintained" check is occasionally updated.
+ # See https://github.com/ossf/scorecard/blob/main/docs/checks.md#maintained
+
+ # Runs every Monday at 16:30 UTC
+ schedule:
+ - cron: '30 16 * * 1'
+
+ jobs:
+ run-s3p-tests:
+ runs-on: ubuntu-22.04
+
+ steps:
+ - uses: actions/checkout@v4
+
+ - name: Run S3P script
+ working-directory: ${{ github.workspace }}/testsuites
+ run: sudo bash ./run-s3p-test.sh run stability
+
+ - name: Archive result .jtl
+ uses: actions/upload-artifact@v4
+ with:
+ name: policy-api-s3p-results
+ path: ${{ github.workspace }}/testsuites/automate-performance/s3pTestResults.jtl
+
+ - name: Archive JMeter logs
+ uses: actions/upload-artifact@v4
+ with:
+ name: policy-api-s3p-jmeter-log
+ path: ${{ github.workspace }}/testsuites/automate-performance/jmeter.log
+
+Analyzing the Results
+#####################
+
+The results of each workflow run can be found under the "Actions" tab.
+
+.. image:: images/workflow-results.png
+
+To investigate the results further, click on a completed test run. You will see details about:
+- The test that was executed
+- The test's status (indicated by a green checkmark or a red "X")
+- The artifacts produced during the test
+
+The artifacts include:
+- A test result file in ``.jtl`` format
+- JMeter logs, which can assist in debugging test failures
+
+.. image:: images/workflow-test-result.png
+
+Both the stability and performance tests run for two hours each in the GitHub Actions environment. Since these tests are conducted weekly and closely monitored by the Policy Framework team, the previous practice of running stability tests for 72 hours has been deemed unnecessary.
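
The upload steps in the workflow above publish artifacts following a per-component naming pattern. As a sketch, a small helper can derive the names to fetch after a run; the naming pattern is inferred from the workflow snippet and the ``gh`` invocation shown in the comment is illustrative, not part of the documented tooling.

```shell
#!/bin/bash
# s3p_artifact: derive the artifact names the upload-artifact steps publish
# for a given component. The "policy-<name>-s3p-<suffix>" pattern is an
# inference from the example workflow and may differ per repository.
s3p_artifact() { echo "policy-${1}-s3p-${2}"; }   # suffix: results | jmeter-log

# Illustrative: pull a completed run's artifacts locally with the GitHub CLI:
#   gh run download --repo onap/policy-api --name "$(s3p_artifact api results)"
#   gh run download --repo onap/policy-api --name "$(s3p_artifact api jmeter-log)"
echo "$(s3p_artifact api results)"
```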
diff --git a/docs/development/devtools/testing/s3p/xacml-s3p-results/s3p-perf-xacml.png b/docs/development/devtools/testing/s3p/xacml-s3p-results/s3p-perf-xacml.png
deleted file mode 100644
index 6f30f143..00000000
--- a/docs/development/devtools/testing/s3p/xacml-s3p-results/s3p-perf-xacml.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/xacml-s3p-results/s3p-stability-xacml.png b/docs/development/devtools/testing/s3p/xacml-s3p-results/s3p-stability-xacml.png
deleted file mode 100644
index 842ec9dd..00000000
--- a/docs/development/devtools/testing/s3p/xacml-s3p-results/s3p-stability-xacml.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/testing/s3p/xacml-s3p.rst b/docs/development/devtools/testing/s3p/xacml-s3p.rst
deleted file mode 100644
index 3b81406b..00000000
--- a/docs/development/devtools/testing/s3p/xacml-s3p.rst
+++ /dev/null
@@ -1,198 +0,0 @@
-.. This work is licensed under a
-.. Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-.. _xacml-s3p-label:
-
-.. toctree::
- :maxdepth: 2
-
-##########################
-
-Policy XACML PDP S3P Tests
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Both the Performance and the Stability tests were executed by performing requests
-against Policy components installed in Kubernetes environment. These tests were all
-performed on a Ubuntu VM with 32GB of memory, 16 CPU and 100GB of disk space.
-
-Policy XACML PDP Deployment
-+++++++++++++++++++++++++++
-
-In an effort to allow the execution of the s3p tests to be as close to automatic as possible,
-a script will be executed that will perform the following:
-
-- Install of a microk8s kubernetes environment
-- Bring up the policy components
-- Checks that the components are successfully up and running before proceeding
-- Install Java 17
-- Install Jmeter locally and configure it
-- Specify whether you want to run stability or performance tests
-
-
-The remainder of this document outlines how to run the tests and the test results
-
-Common Setup
-++++++++++++
-The common setup for performance and stability tests is now automated - being carried out by a script in- **testsuites/run-s3p-test.sh**.
-
-Clone the policy-xacml-pdp repo to access the test scripts
-
-.. code-block:: bash
-
- git clone https://gerrit.onap.org/r/policy/xacml-pdp xacml-pdp
-
-Stability Test of Policy XACML PDP
-++++++++++++++++++++++++++++++++++
-
-Test Plan
----------
-The 24 hours stability test ran the following steps.
-
-- Healthcheck, 2 simultaneous threads
-- Decisions, 2 simultaneous threads, each running the following tasks in sequence:
- - Monitoring Decision
- - Monitoring Decision, abbreviated
- - Naming Decision
- - Optimization Decision
- - Default Guard Decision (always "Permit")
- - Frequency Limiter Guard Decision
- - Min/Max Guard Decision
-
-This runs for 24 hours. Test results are present in the **testsuites/automated-performance/s3pTestResults.jtl**
-file and in **/tmp/** directory. Logs are present for jmeter in **testsuites/automated-performance/jmeter.log** and
-**testsuites/automated-performance/nohup.out**
-
-Run Test
---------
-
-The code in the setup section also serves to run the tests. Just one execution needed to do it all.
-
-.. code-block:: bash
-
- bash run-s3p-test.sh run stability
-
-Once the test execution is completed, the results are present in the **automate-performance/s3pTestResults.jtl** file.
-
-This file can be imported into the Jmeter GUI for visualization. The below results are tabulated from the GUI.
-
-Test Results
-------------
-
-**Summary**
-
-Stability test plan was triggered for 24 hours.
-
-**Test Statistics**
-
-======================= ================= ======================== =========================
-**Total # of requests** **Error %** **Average Latency (ms)** **Measured requests/sec**
-======================= ================= ======================== =========================
- 54472562 0.00 % 5 ms 630.1 ms
-======================= ================= ======================== =========================
-
-**JMeter Results**
-
-.. image:: xacml-s3p-results/s3p-stability-xacml.png
-
-**Policy component Setup**
-
-============================================== ==================================================================== =============================================
-**NAME** **IMAGE** **PORT**
-============================================== ==================================================================== =============================================
-zookeeper-deployment-7ff87c7fcc-fbsfb confluentinc/cp-zookeeper:latest 2181/TCP
-kafka-deployment-5c87d497b-m8s2g confluentinc/cp-kafka:latest 9092/TCP
-policy-drools-pdp-0 nexus3.onap.org:10001/onap/policy-pdpd-cl:2.1.3-SNAPSHOT 6969/TCP 9696/TCP
-policy-apex-pdp-0 nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.3-SNAPSHOT 6969/TCP
-policy-distribution-f48bff778-48pm2 nexus3.onap.org:10001/onap/policy-distribution:3.1.3-SNAPSHOT 6969/TCP
-policy-models-simulator-6947667bdc-wcd9r nexus3.onap.org:10001/onap/policy-models-simulator:3.1.3-SNAPSHOT 6666/TCP 6680/TCP 6668/TCP 6669/TCP 6670/TCP
-policy-clamp-ac-http-ppnt-7d747b5d98-wmr5n nexus3.onap.org:10001/onap/policy-clamp-ac-http-ppnt:7.1.3-SNAPSHOT 8084/TCP
-policy-clamp-ac-k8s-ppnt-6bbd86bbc6-vnvx6 nexus3.onap.org:10001/onap/policy-clamp-ac-k8s-ppnt:7.1.3-SNAPSHOT 8083/TCP
-policy-clamp-ac-pf-ppnt-5fcbbcdb6c-k2cbk nexus3.onap.org:10001/onap/policy-clamp-ac-pf-ppnt:7.1.3-SNAPSHOT 6969/TCP
-policy-clamp-ac-sim-ppnt-97f487577-m2zjr nexus3.onap.org:10001/onap/policy-clamp-ac-sim-ppnt:7.1.3-SNAPSHOT 6969/TCP
-policy-clamp-runtime-acm-66b5d6b64-l6dpq nexus3.onap.org:10001/onap/policy-clamp-runtime-acm:7.1.3-SNAPSHOT 6969/TCP
-mariadb-galera-0 docker.io/bitnami/mariadb-galera:10.5.8 3306/TCP
-prometheus-f66f97b6-kkmpq nexus3.onap.org:10001/prom/prometheus:latest 9090/TCP
-policy-api-7f7d995b4-2zhnw nexus3.onap.org:10001/onap/policy-api:3.1.3-SNAPSHOT 6969/TCP
-policy-pap-f7899d4cd-mfrtp nexus3.onap.org:10001/onap/policy-pap:3.1.3-SNAPSHOT 6969/TCP
-policy-xacml-pdp-6c86f85ff6-6qzgf nexus3.onap.org:10001/onap/policy-xacml-pdp:3.1.2 6969/TCP
-============================================== ==================================================================== =============================================
-
-.. Note::
-
- .. container:: paragraph
-
- There were no failures during the 24 hours test.
-
-The XACML PDP offered very good performance with JMeter for the traffic mix described above.
-The average transaction time is insignificant.
-
-
-Performance Test of Policy XACML PDP
-++++++++++++++++++++++++++++++++++++
-
-Introduction
-------------
-
-Performance test of acm components has the goal of testing the min/avg/max processing time and rest call throughput for all the requests with multiple requests at the same time.
-
-Setup Details
--------------
-
-We can setup the environment and execute the tests like this from the **xacml-pdp/testsuites** directory
-
-Test Plan
----------
-
-Performance test plan is the same as the stability test plan above except for the few differences listed below.
-
-- Increase the number of threads up to 10 (simulating 10 users' behaviours at the same time).
-- Reduce the test time to 20 minutes.
-
-The performance tests runs the following, all in parallel:
-
-- Healthcheck, 10 simultaneous threads
-- Decisions, 10 simultaneous threads, each running the following in sequence:
-
- - Monitoring Decision
- - Monitoring Decision, abbreviated
- - Naming Decision
- - Optimization Decision
- - Default Guard Decision (always "Permit")
- - Frequency Limiter Guard Decision
- - Min/Max Guard Decision
-
-When the script starts up, it uses policy-api to create, and policy-pap to deploy
-the policies that are needed by the test. It assumes that the "naming" policy has
-already been created and deployed. Once the test completes, it undeploys and deletes
-the policies that it previously created.
-
-Run Test
---------
-
-The code in the setup section also serves to run the tests. Just one execution needed to do it all.
-
-.. code-block:: bash
-
- bash run-s3p-test.sh run performance
-
-Once the test execution is completed, the results are present in the **automate-performance/s3pTestResults.jtl** file and in **/tmp/** directory.
-
-This file can be imported into the Jmeter GUI for visualization. The below results are tabulated from the GUI.
-
-Test Results
-------------
-
-**Summary**
-
-The test was run for 20 minutes with 10 users (i.e., threads), with the following results:
-
-**Test Statistics**
-
-======================= ================= ======================== =========================
-**Total # of requests** **Error %** **Average Latency (ms)** **Measured requests/sec**
-======================= ================= ======================== =========================
- 888047 0.00 % 25 ms 723.2 ms
-======================= ================= ======================== =========================
-
-.. image:: xacml-s3p-results/s3p-perf-xacml.png
diff --git a/docs/development/pdp/pdp-pap-interaction.rst b/docs/development/pdp/pdp-pap-interaction.rst
index 14a92517..eff8a79e 100644
--- a/docs/development/pdp/pdp-pap-interaction.rst
+++ b/docs/development/pdp/pdp-pap-interaction.rst
@@ -13,7 +13,7 @@ Guidelines for PDP-PAP interaction
A PDP (Policy Decision Point) is where the policy execution happens. The administrative actions such as
managing the PDPs, deploying or undeploying policies to these PDPs etc. are handled by PAP
(Policy Administration Point). Any PDP should follow certain behavior to be registered and functional in
-the Policy Framework. All the communications between PAP and PDP happen over DMaaP on topic *POLICY-PDP-PAP*.
+the Policy Framework. All the communications between PAP and PDP happen over Kafka on topic *POLICY-PDP-PAP*.
The below diagram shows how a PDP interacts with PAP.
.. image:: images/PDP_PAP.svg
@@ -23,7 +23,7 @@ The below diagram shows how a PDP interacts with PAP.
A PDP should be configured to start with the below information in its startup configuration file.
- *pdpGroup* to which the PDP should belong to.
-- *DMaaP topic* 'POLICY-PDP-PAP' which should be the source and sink for communicating with PAP.
+- *Kafka topic* 'POLICY-PDP-PAP' which should be the source and sink for communicating with PAP.
**2. PDP sends PDP_STATUS (registration message)**
@@ -81,7 +81,7 @@ PAP also sends the *pdpHeartbeatIntervalMs* which is the time interval in which
**4. PDP sends PDP_STATUS response to PDP_UPDATE**
-PDP on receiving the PDP_UPDATE message from the DMaaP topic, it first checks if the message is intended for the PDP.
+On receiving the PDP_UPDATE message from the Kafka topic, the PDP first checks whether the message is intended for it.
If so, it updates itself with the information in PDP_UPDATE message from PAP such as *pdpSubgroup*,
*pdpHeartbeatIntervalMs* and *policiesToBeDeployed* (if any). After handling the PDP_UPDATE message, the PDP sends
a response message back to PAP with the current status of the PDP along with the result of the PDP_UPDATE operation.
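
The PDP_STATUS/PDP_UPDATE exchange described above can be observed for debugging by tailing the shared topic. This is a sketch under assumptions: the broker address is illustrative, and the ``messageName`` field is the name used by the policy-models PDP messages (verify against your deployment's payloads).

```shell
#!/bin/bash
# pdp_msg_filter: keep only PDP protocol messages from a stream of topic
# records. The "messageName" JSON field is assumed to match the
# policy-models message types.
pdp_msg_filter() {
  grep -E '"messageName"[[:space:]]*:[[:space:]]*"PDP_(STATUS|UPDATE|STATE_CHANGE)"'
}

# Live usage (illustrative; any Kafka console consumer will do):
#   kafka-console-consumer.sh --bootstrap-server localhost:9092 \
#     --topic POLICY-PDP-PAP --from-beginning | pdp_msg_filter

# Self-contained demonstration on a sample registration message:
echo '{"messageName":"PDP_STATUS","pdpGroup":"defaultGroup"}' | pdp_msg_filter
```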
diff --git a/docs/development/prometheus-metrics.rst b/docs/development/prometheus-metrics.rst
index 74532311..e7d4d3a6 100644
--- a/docs/development/prometheus-metrics.rst
+++ b/docs/development/prometheus-metrics.rst
@@ -188,6 +188,6 @@ Key metrics for Policy Distribution
===================================================================
Policy Framework uses ServiceMonitor custom resource definition (CRD) to allow Prometheus to monitor the services it exposes. Label selection is used to determine which services are selected to be monitored.
-For label management and troubleshooting refer to the documentation at: `Prometheus operator <https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/troubleshooting.md>`__.
+For label management and troubleshooting refer to the documentation at: `Prometheus operator <https://github.com/prometheus-operator/prometheus-operator/tree/main/Documentation>`__.
-`OOM charts <https://github.com/onap/oom/tree/master/kubernetes/policy/components>`__ for policy include ServiceMonitor and properties can be overrided based on the deployment specifics.
+The `OOM charts <https://github.com/onap/oom/tree/master/kubernetes/policy/components>`__ for policy include a ServiceMonitor, and its properties can be overridden based on the specifics of the deployment.
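
To spot-check that a component's metrics endpoint is exposing what Prometheus will scrape, the raw output can be fetched and counted. The port and metric name below are assumptions for illustration; see the key-metrics tables above for the real family names exposed by each component.

```shell
#!/bin/bash
# count_metric: count sample lines for a metric family in a scrape.
# HELP/TYPE comment lines start with '#' and are not counted.
count_metric() { grep -c "^${2}" <<< "$1"; }

# Live usage (illustrative endpoint and port):
#   scrape=$(curl -s http://policy-pap:6969/metrics)
#   count_metric "$scrape" pap_policy_deployments

# Self-contained demonstration on a canned scrape:
scrape=$'# HELP pap_policy_deployments help text\npap_policy_deployments{state="SUCCESS"} 4\nprocess_cpu_seconds_total 12'
count_metric "$scrape" pap_policy_deployments   # prints "1"
```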
diff --git a/docs/drools/ctrlog_config.png b/docs/drools/ctrlog_config.png
deleted file mode 100755
index 8d5aeb65..00000000
--- a/docs/drools/ctrlog_config.png
+++ /dev/null
Binary files differ
diff --git a/docs/drools/ctrlog_enablefeature.png b/docs/drools/ctrlog_enablefeature.png
deleted file mode 100755
index dc1abf34..00000000
--- a/docs/drools/ctrlog_enablefeature.png
+++ /dev/null
Binary files differ
diff --git a/docs/drools/ctrlog_logback.png b/docs/drools/ctrlog_logback.png
deleted file mode 100755
index 252f3fe1..00000000
--- a/docs/drools/ctrlog_logback.png
+++ /dev/null
Binary files differ
diff --git a/docs/drools/ctrlog_view.png b/docs/drools/ctrlog_view.png
deleted file mode 100755
index 118bd64d..00000000
--- a/docs/drools/ctrlog_view.png
+++ /dev/null
Binary files differ
diff --git a/docs/drools/drools.rst b/docs/drools/drools.rst
index 1bcbda9a..447102df 100644
--- a/docs/drools/drools.rst
+++ b/docs/drools/drools.rst
@@ -9,7 +9,7 @@ Policy Drools PDP Engine
:depth: 1
The Drools PDP, aka PDP-D, is the PDP in the Policy Framework that uses the
-`Drools BRMS <https://www.drools.org/>`__ to enforce policies.
+`Drools BRMS <https://www.drools.org/>`_ to enforce policies.
The PDP-D functionality has been partitioned into two functional areas:
@@ -18,8 +18,8 @@ The PDP-D functionality has been partitioned into two functional areas:
**PDP-D Engine**
-The PDP-D Engine is the infrastructure that *policy applications* use.
-It provides networking services, resource grouping, and diagnostics.
+The PDP-D Engine is the infrastructure that *policy applications* use. It provides networking
+services, resource grouping, and diagnostics.
The PDP-D Engine supports the following Tosca Native Policy Types:
@@ -28,18 +28,16 @@ The PDP-D Engine supports the following Tosca Native Policy Types:
These types are used to dynamically add and configure new application controllers.
-The PDP-D Engine hosts applications by means of *controllers*.
-*Controllers* may support other Tosca Policy Types. The
-types supported by the *Control Loop* applications are:
+The PDP-D Engine hosts applications by means of *controllers*. *Controllers* may support other
+Tosca Policy Types. The types supported by the *Control Loop* applications are:
- onap.policies.controlloop.operational.common.Drools
**PDP-D Applications**
-A PDP-D application, ie. a *controller*, contains references to the
-resources that the application needs. These include networked endpoint references,
-and maven coordinates.
+A PDP-D application, i.e. a *controller*, contains references to the resources that the application
+needs. These include networked endpoint references and Maven coordinates.
*Control Loop* applications are used in ONAP to enforce operational policies.
@@ -52,3 +50,11 @@ The following guides offer more information in these two functional areas.
pdpdEngine.rst
pdpdApps.rst
+
+Additional information
+======================
+
+For additional information, please see the
+`Drools PDP Development and Testing (In Depth) <https://wiki.onap.org/display/DW/2020-08+Frankfurt+Tutorials>`_ page.
+
+End of Document
diff --git a/docs/drools/feature_activestdbymgmt.rst b/docs/drools/feature_activestdbymgmt.rst
deleted file mode 100644
index 193c331f..00000000
--- a/docs/drools/feature_activestdbymgmt.rst
+++ /dev/null
@@ -1,109 +0,0 @@
-
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-.. _feature-asm-label:
-
-**********************************
-Feature: Active/Standby Management
-**********************************
-
-.. contents::
- :depth: 3
-
-When the Feature Session Persistence is enabled, there can only be one active/providing service Drools PDP due to the behavior of Drools persistence. The Active/Standby Management Feature controls the selection of the Drools PDP that is providing service. It utilizes its own database and the State Management Feature database in the election algorithm. All Drools PDP nodes periodically run the election algorithm and, since they all use the same data, all nodes come to the same conclusion with the "elected" node assuming an active/providingservice state. Thus, the algorithm is distributed and has no single point of failure - assuming the database is configured for high availability.
-
-When the algorithm selects a Drools PDP to be active/providing service the controllers and topic endpoints are unlocked and allowed to process transactions. When a Drools PDP transitions to a hotstandby or coldstandby state, the controllers and topic endpoints are locked, preventing the Drools PDP from handling transactions.
-
-
-Enabling and Disabling Feature State Management
-===============================================
-
-The Active/Standby Management Feature is enabled from the command line when logged in as policy after configuring the feature properties file (see Description Details section). From the command line:
-
-- > features status - Lists the status of features
-- > features enable active-standby-management - Enables the Active-Standby Management Feature
-- > features disable active-standby-management - Disables the Active-Standby Management Feature
-
-The Drools PDP must be stopped prior to enabling/disabling features and then restarted after the features have been enabled/disabled.
-
- .. code-block:: bash
- :caption: Enabling Active/Standby Management Feature
-
- policy@hyperion-4:/opt/app/policy$ policy stop
- [drools-pdp-controllers]
- L []: Stopping Policy Management... Policy Management (pid=354) is stopping... Policy Management has stopped.
- policy@hyperion-4:/opt/app/policy$ features enable active-standby-management
- name version status
- ---- ------- ------
- controlloop-utils 1.1.0-SNAPSHOT disabled
- healthcheck 1.1.0-SNAPSHOT disabled
- test-transaction 1.1.0-SNAPSHOT disabled
- eelf 1.1.0-SNAPSHOT disabled
- state-management 1.1.0-SNAPSHOT disabled
- active-standby-management 1.1.0-SNAPSHOT enabled
- session-persistence 1.1.0-SNAPSHOT disabled
-
-
-Description Details
-~~~~~~~~~~~~~~~~~~~
-
-Election Algorithm
-------------------
-
-The election algorithm selects the active/providingservice Drools PDP. The algorithm on each node reads the *standbystatus* from the *StateManagementEntity* table for all other nodes to determine if they are providingservice or in a hotstandby state and able to assume an active status. It uses the *DroolsPdpEntity* table to verify that other node election algorithms are currently functioning and when the other nodes were last designated as the active Drools PDP.
-
-In general terms, the election algorithm periodically gathers the standbystatus and designation status for all the Drools PDPs. If the node which is currently designated as providingservice is "current" in updating its status, no action is required. If the designated node is either not current or has a standbystatus other than providingservice, it is time to choose another designated *DroolsPDP*. The algorithm will build a list of all DroolsPDPs that are current and have a *standbystatus* of *hotstandby*. It will then give preference to DroolsPDPs within the same site, choosing the DroolsPDP with the lowest lexicographic value to the droolsPdpId (resourceName). If the chosen DroolsPDP is itself, it will promote its standbystatus from hotstandby to providingservice. If the chosen DroolsPDP is other than itself, it will do nothing.
-
-When the DroolsPDP promotes its *standbystatus* from hotstandby to providing service, a state change notification will occur and the Standby State Change Handler will take appropriate action.
-
-
-Standby State Change Handler
-----------------------------
-
-The Standby State Change Handler (*PMStandbyStateChangeHandler* class) extends the IntegrityMonitor StateChangeNotifier class which implements the Observer class. When the DroolsPDP is constructed, an instance of the handler is constructed and registered with StateManagement. Whenever StateManagement implements a state transition, it calls the *handleStateChange()* method of the handler. If the StandbyStatus transitions to hot or cold standby, the handler makes a call into the lower level management layer to lock the application controllers and topic endpoints, preventing it from handling transactions. If the StandbyStatus transitions to providingservice, the handler makes a call into the lower level management layer to unlock the application controllers and topic endpoints, allowing it to handle transactions.
-
-
-Database
---------
-
-The Active/Standby Feature creates a database named activestandbymanagement with a single table, **droolspdpentity**. The election handler uses that table to determine which DroolsPDP was/is designated as the active DroolsPDP and which DroolsPDP election handlers are healthy enough to periodically update their status.
-
-The **droolspdpentity** table has the following columns:
- - **pdpId** - The unique indentifier for the DroolsPDP. It is the same as the resourceName
- - **designated** - Has a value of 1 if the DroolsPDP is designated as active/providingservice. It has a value of 0 otherwise
- - **priority** - Indicates the priority level of the DroolsPDP for the election handler. In general, this is ignore and all have the same priority.
- - **updatedDate** - This is the timestamp for the most recent update of the record.
- - **designatedDate** - This is the timestamp that indicates when the designated column was most recently set to a value of 1
- - **site** - This is the name of the site
-
-Properties
-----------
-
-The properties are found in the feature-active-standby-management.properties file. In general, the properties are adequately described in the properties file. Parameters which must be replaced prior to usage are indicated thus: ${{parameter to be replaced}}
-
- .. code-block:: bash
- :caption: feature-active-standby-mangement.properties
-
- # DB properties
- javax.persistence.jdbc.driver=org.mariadb.jdbc.Driver
- javax.persistence.jdbc.url=jdbc:mariadb://${{SQL_HOST}}:3306/activestandbymanagement
- javax.persistence.jdbc.user=${{SQL_USER}}
- javax.persistence.jdbc.password=${{SQL_PASSWORD}}
-
- # Must be unique across the system
- resource.name=pdp1
- # Name of the site in which this node is hosted
- site_name=site1
-
- # Needed by DroolsPdpsElectionHandler
- pdp.checkInterval=1500 # The interval in ms between updates of the updatedDate
- pdp.updateInterval=1000 # The interval in ms between executions of the election handler
- #pdp.timeout=3000
- # Need long timeout, because testTransaction is only run every 10 seconds.
- pdp.timeout=15000
- #how long do we wait for the pdp table to populate on initial startup
- pdp.initialWait=20000
-
-
-End of Document
diff --git a/docs/drools/feature_controllerlogging.rst b/docs/drools/feature_controllerlogging.rst
deleted file mode 100644
index fc8d6dab..00000000
--- a/docs/drools/feature_controllerlogging.rst
+++ /dev/null
@@ -1,48 +0,0 @@
-
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-.. _feature_controllerlogging-label:
-
-***************************
-Feature: Controller Logging
-***************************
-
-.. contents::
- :depth: 3
-
-The controller logging feature provides a way to log network topic messages to a separate controller log file for each controller. This allows a clear separation of network traffic between all of the controllers.
-
-Type "features enable controller-logging". The feature will now display as "enabled".
-
- .. image:: ctrlog_enablefeature.png
-
-When the feature's enable script is executed, it will search the $POLICY_HOME/config directory for any logback files containing the prefix "logback-include-". These logger configuration files are typically provided with a feature that installs a controlloop (ex: controlloop-amsterdam and controlloop-casablanca features). Once these configuration files are found by the enable script, the logback.xml config file will be updated to include the configurations.
-
- .. image:: ctrlog_logback.png
-
-
-Controller Logger Configuration
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The contents of a logback-include-``*``.xml file follows the same configuration syntax as the logback.xml file. It will contain the configurations for the logger associated with the given controller.
-
- .. note:: A controller logger MUST be configured with the same name as the controller (ex: a controller named "casablanca" will have a logger named "casablanca").
-
- .. image:: ctrlog_config.png
-
-
-Viewing the Controller Logs
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Once a logger for the controller is configured, start the drools-pdp and navigate to the $POLICY_LOGS directory. A new controller specific network log will be added that contains all the network topic traffic of the controller.
-
- .. image:: ctrlog_view.png
-
-The original network log remains and will append traffic information from all topics regardless of which controller it is for. To abbreviate and customize messages for the network log, refer to the
-:ref:`Feature MDC Filters <feature_mdcfilters-label>` documentation.
-
-
-End of Document
-
-
diff --git a/docs/drools/feature_eelf.rst b/docs/drools/feature_eelf.rst
deleted file mode 100644
index a505490c..00000000
--- a/docs/drools/feature_eelf.rst
+++ /dev/null
@@ -1,47 +0,0 @@
-
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-*************************************************
-Feature: EELF (Event and Error Logging Framework)
-*************************************************
-
-.. contents::
- :depth: 3
-
-The EELF feature provides backwards compatibility with R0 logging functionality. It supports the use of EELF/Common Framework style logging at the same time as traditional logging.
-
-.. seealso:: Additional information for EELF logging can be found at `EELF wiki`_.
-
-.. _EELF wiki: https://github.com/att/EELF/wiki
-
-
-To utilize the eelf logging capabilities, first stop policy engine and then enable the feature using the "*features*" command.
-
- .. code-block:: bash
- :caption: Enabling EELF Feature
-
- policy@hyperion-4:/opt/app/policy$ policy stop
- [drools-pdp-controllers]
- L []: Stopping Policy Management... Policy Management (pid=354) is stopping... Policy Management has stopped.
- policy@hyperion-4:/opt/app/policy$ features enable eelf
- name version status
- ---- ------- ------
- controlloop-utils 1.1.0-SNAPSHOT disabled
- healthcheck 1.1.0-SNAPSHOT disabled
- test-transaction 1.1.0-SNAPSHOT disabled
- eelf 1.1.0-SNAPSHOT enabled
- state-management 1.1.0-SNAPSHOT disabled
- active-standby-management 1.1.0-SNAPSHOT disabled
- session-persistence 1.1.0-SNAPSHOT disabled
-
-The output of the enable command will indicate whether or not the feature was enabled successfully.
-
-Policy engine can then be started as usual.
-
-
-
-End of Document
-
-.. SSNote: Wiki page ref. https://wiki.onap.org/display/DW/Feature+EELF
-
diff --git a/docs/drools/feature_mdcfilters.rst b/docs/drools/feature_mdcfilters.rst
deleted file mode 100644
index b7077138..00000000
--- a/docs/drools/feature_mdcfilters.rst
+++ /dev/null
@@ -1,117 +0,0 @@
-
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-.. _feature_mdcfilters-label:
-
-********************
-Feature: MDC Filters
-********************
-
-.. contents::
- :depth: 3
-
-The MDC Filter Feature provides configurable properties for network topics to extract fields from JSON strings and place them in a mapped diagnostic context (MDC).
-
-Before enabling the feature, the network log contains the entire content of each message received on a topic. Below is a sample message from the network log. Note that the topic used for this tutorial is DCAE-CL.
-
- .. code-block:: bash
-
- [2019-03-22T16:36:42.942+00:00|DMAAP-source-DCAE-CL][IN|DMAAP|DCAE-CL]
- {"closedLoopControlName":"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e","closedLoopAlarmStart":1463679805324,"closedLoopEventClient":"DCAE_INSTANCE_ID.dcae-tca","closedLoopEventStatus":"ONSET","requestID":"664be3d2-6c12-4f4b-a3e7-c349acced200","target_type":"VNF","target":"generic-vnf.vnf-id","AAI":{"vserver.is-closed-loop-disabled":"false","vserver.prov-status":"ACTIVE","generic-vnf.vnf-id":"vCPE_Infrastructure_vGMUX_demo_app"},"from":"DCAE","version":"1.0.2"}
-
-The network log can become voluminous if messages received from various topics carry large messages for various controllers. With the MDC Filter Feature, users can define keywords in JSON messages to extract and structure according to a desired format. This is done through configuring the feature's properties.
-
-Configuring the MDC Filter Feature
-==================================
-
-To configure the feature, the feature must be enabled using the following command:
-
- .. code-block:: bash
-
- features enable mdc-filters
-
-
- .. image:: mdc_enablefeature.png
-
-Once the feature is enabled, there will be a new properties file in *$POLICY_HOME/config* called **feature-mdc-filters.properties**.
-
- .. image:: mdc_properties.png
-
-The properties file contains filters to extract key data from messages on the network topics that are saved in an MDC, which can be referenced in logback.xml. The configuration format is as follows:
-
- .. code-block:: bash
-
- <protocol>.<type>.topics.<topic-name>.mdcFilters=<filters>
-
- Where:
- <protocol> = ueb, dmaap, noop
- <type> = source, sink
- <topic-name> = Name of DMaaP or UEB topic
- <filters> = Comma separated list of key/json-path(s)
-
-The filters consist of an MDC key used by **logback.xml** (see below) and the JSON path(s) to the desired data. The path always begins with '$', which signifies the root of the JSON document. The underlying library, JsonPath, uses a query syntax for searching through a JSON file. The query syntax and some examples can be found at https://github.com/json-path/JsonPath. An example filter for the *DCAE-CL* is provided below:
-
- .. code-block:: bash
-
- dmaap.source.topics.DCAE-CL.mdcFilters=requestID=$.requestID
-
-This filter is specifying that the dmaap source topic *DCAE-CL* will search each message received for requestID by following the path starting at the root ($) and searching for the field *requestID*. If the field is found, it is placed in the MDC with the key "requestID" as signified by the left hand side of the filter before the "=".
-
-
-Configuring Multiple Filters and Paths
-======================================
-
-Multiple fields can be found for a given JSON document by a comma separated list of <mdcKey,jsonPath> pairs. For the previous example, another filter is added by adding a comma and specifying the filter as follows:
-
- .. code-block:: bash
-
- dmaap.source.topics.DCAE-CL.mdcFilters=requestID=$.requestID,closedLoopName=$.closedLoopControlName
-
-The feature will now search for both requestID and closedLoopControlName in a JSON message using the specified "$." path notations and put them in the MDC using the keys "requestID" and "closedLoopName" respectively. To further refine the filter, if a topic receives different message structures (ex: a response message structure vs an error message structure) the "|" notation allows multiple paths to a key to be defined. The feature will search through each specified path until a match is found. An example can be found below:
-
- .. code-block:: bash
-
- dmaap.source.topics.DCAE-CL.mdcFilters=requestID=$.requestID,closedLoopName=$.closedLoopControlName|$.AAI.closedLoopControlName
-
-Now when the filter is searching for closedLoopControlName it will check the first path "$.closedLoopControlName", if it is not present then it will try the second path "$.AAI.closedLoopControlName". If the user is unsure of the path to a field, JsonPath supports a deep scan by using the ".." notation. This will search the entire JSON document for the field without specifying the path.
-
-
-Accessing the MDC Values in logback.xml
-=======================================
-
-Once the feature properties have been defined, logback.xml contains a "abstractNetworkPattern" property that will hold the desired message structure defined by the user. The user has the flexibility to define the message structure however they choose but for this tutorial the following pattern is used:
-
- .. code-block:: bash
-
- <property name="abstractNetworkPattern" value="[%d{yyyy-MM-dd'T'HH:mm:ss.SSS+00:00, UTC}] [%X{networkEventType:-NULL}|%X{networkProtocol:-NULL}|%X{networkTopic:-NULL}|%X{requestID:-NULL}|%X{closedLoopName:-NULL}]%n" />
-
-The "value" portion consists of two headers in bracket notation, the first header defines the timestamp while the second header references the keys from the MDC filters defined in the feature properties. The standard logback syntax is used and more information on the syntax can be found here. Note that some of the fields here were not defined in the feature properties file. The feature automatically puts the network infrastructure information in the keys that are prepended with "network". The current supported network infrastructure information is listed below.
-
- +-------------------+-------------------------------------------------+
- | Field | Values |
- +===================+=================================================+
- | networkEventType | IN, OUT |
- +-------------------+-------------------------------------------------+
- | networkProtocol | DMAAP, UEB, NOOP |
- +-------------------+-------------------------------------------------+
- | networkTopic | The name of the topic that received the message |
- +-------------------+-------------------------------------------------+
-
-
-To reference the keys from the feature properties the syntax "%X{KEY_DEFINED_IN_PROPERTIES}" provides access to the value. An optional addition is to append ":-", which specifies a default value to display in the log if the field was not found in the message received. For this tutorial, a default of "NULL" is displayed for any of the fields that were not found while filtering. The "|" has no special meaning and is just used as a field separator for readability; the user can decorate the log format to their desired visual appeal.
-
-Network Log Structure After Feature Enabled
-===========================================
-
-Once the feature and logback.xml is configured to the user's desired settings, start the PDP-D by running "policy start". Based on the configurations from the previous sections of this tutorial, the following log message is written to network log when a message is received on the DCAE-CL topic:
-
- .. code-block:: bash
-
- [2019-03-22T16:38:23.884+00:00] [IN|DMAAP|DCAE-CL|664be3d2-6c12-4f4b-a3e7-c349acced200|ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e]
-
-The message has now been filtered to display the network infrastructure information and the extracted data from the JSON message based on the feature properties. In order to view the entire message received from a topic, a complementary feature was developed to display the entire message on a per controller basis while preserving the compact network log. Refer to the
-:ref:`Feature Controller Logging <feature_controllerlogging-label>` documentation for details.
-
-End of Document
-
diff --git a/docs/drools/feature_nolocking.rst b/docs/drools/feature_nolocking.rst
index e98cc8ee..12c6570a 100644
--- a/docs/drools/feature_nolocking.rst
+++ b/docs/drools/feature_nolocking.rst
@@ -9,11 +9,11 @@ Feature: no locking
.. contents::
:depth: 3
-The no-locking feature allows applications to use a Lock Manager that always succeeds. It does not deny
-acquiring resource locks.
+The no-locking feature allows applications to use a Lock Manager that always succeeds. It does not
+deny acquiring resource locks.
-To utilize the no-locking feature, first stop policy engine, disable other locking features, and then enable it
-using the "*features*" command.
+To utilize the no-locking feature, first stop the policy engine, disable any other locking
+features, and then enable it using the "*features*" command.
In an official OOM installation, place a script with a .pre.sh suffix:
@@ -35,6 +35,7 @@ under the directory:
and rebuild the policy charts.
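+
+A hook of this kind might look like the following sketch (the *features* command is the PDP-D
+feature CLI used elsewhere in this guide; the existence check is only so the sketch degrades
+gracefully outside a PDP-D container):
+
+ .. code-block:: bash
+
+    #!/bin/sh
+    # Hypothetical .pre.sh hook: swap the locking features before the PDP-D starts.
+    if command -v features >/dev/null 2>&1; then
+        features disable distributed-locking
+        features enable no-locking
+    else
+        echo "features CLI not found; nothing to do" >&2
+    fi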
-At container initialization, the distributed-locking will be disabled, and the no-locking feature will be enabled.
+At container initialization, the distributed-locking feature will be disabled and the no-locking
+feature will be enabled.
End of Document
diff --git a/docs/drools/feature_pooling.rst b/docs/drools/feature_pooling.rst
index ba950a3d..705c98e3 100644
--- a/docs/drools/feature_pooling.rst
+++ b/docs/drools/feature_pooling.rst
@@ -8,7 +8,9 @@
Feature: Pooling
****************
-The Pooling feature provides the ability to load-balance work across a “pool” of active-active Drools-PDP hosts. This particular implementation uses a DMaaP topic for communication between the hosts within the pool.
+The Pooling feature provides the ability to load-balance work across a “pool” of active-active
+Drools-PDP hosts. This particular implementation uses a Kafka topic for communication between the
+hosts within the pool.
The pool is adjusted automatically, with no manual intervention when:
* a new host is brought online
@@ -18,35 +20,36 @@ Assumptions and Limitations
===========================
* Session persistence is not required
* Data may be lost when processing is moved from one host to another
- * The entire pool may shut down if the inter-host DMaaP topic becomes inaccessible
-
- .. image:: poolingDesign.png
 + * The entire pool may shut down if the inter-host Kafka topic becomes inaccessible
Key Points
==========
- * Requests are received on a common DMaaP topic
- - DMaaP distributes the requests randomly to the hosts
- - The request topic should have at least as many partitions as there are hosts
- * Uses a single, internal DMaaP topic for all inter-host communication
 + * Requests are received on a common Kafka topic
 + * Uses a single, internal Kafka topic for all inter-host communication
* Allocates buckets to each host
- Requests are assigned to buckets based on their respective “request IDs”
* No session persistence
* No objects copied between hosts
* Requires feature(s): distributed-locking
- * Precludes feature(s): session-persistence, active-standby, state-management
Example Scenario
================
- 1. Incoming DMaaP message is received on a topic — all hosts are listening, but only one random host receives the message
+ 1. Incoming message is received on a topic — all hosts are listening, but only one random host
+ receives the message
2. Decode message to determine “request ID” key (message-specific operation)
3. Hash request ID to determine the bucket number
4. Look up host associated with hash bucket (most likely remote)
- 5. Publish “forward” message to internal DMaaP topic, including remote host, bucket number, DMaaP topic information, and message body
- 6. Remote host verifies ownership of bucket, and routes the DMaaP message to its own rule engine for processing
+ 5. Publish “forward” message to internal topic, including remote host, bucket number, topic
+ information, and message body
+ 6. Remote host verifies ownership of bucket, and routes the message to its own rule engine for
+ processing
- The figure below shows several different hosts in a pool. Each host has a copy of the bucket assignments, which specifies which buckets are assigned to which hosts. Incoming requests are mapped to a bucket, and a bucket is mapped to a host, to which the request is routed. The host table includes an entry for each active host in the pool, to which one or more buckets are mapped.
+The figure below shows several different hosts in a pool. Each host has a copy of the bucket
+assignments, which specifies which buckets are assigned to which hosts. Incoming requests are mapped
+to a bucket, and a bucket is mapped to a host, to which the request is routed. The host table
+includes an entry for each active host in the pool, to which one or more buckets are mapped.
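+
+The request-ID-to-bucket mapping described above can be sketched as follows; this is an
+illustration in shell, not the actual hash used by the PDP-D:
+
+ .. code-block:: bash
+
+    # Map a request ID to one of 32 buckets (cksum stands in for the real hash).
+    requestId="664be3d2-6c12-4f4b-a3e7-c349acced200"
+    numBuckets=32
+    hash=$(printf '%s' "$requestId" | cksum | cut -d' ' -f1)
+    bucket=$((hash % numBuckets))
+    echo "request ${requestId} maps to bucket ${bucket}"
+
+Because the hash is deterministic, every message carrying the same request ID lands in the same
+bucket, and therefore on the same host, for as long as the bucket assignment is stable.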
.. image:: poolingPdps.png
@@ -58,7 +61,12 @@ Bucket Reassignment
* Leaves buckets with their current owner, where possible
* Takes a few buckets from each host to assign to new hosts
- For example, in the diagram below, the left side shows how 32 buckets might be assigned among four different hosts. When the first host fails, the buckets from host 1 would be reassigned among the remaining hosts, similar to what is shown on the right side of the diagram. Any requests that were being processed by host 1 will be lost and must be restarted. However, the buckets that had already been assigned to the remaining hosts are unchanged, thus requests associated with those buckets are not impacted by the loss of host 1.
+For example, in the diagram below, the left side shows how 32 buckets might be assigned among four
+different hosts. When the first host fails, the buckets from host 1 would be reassigned among the
+remaining hosts, similar to what is shown on the right side of the diagram. Any requests that were
+being processed by host 1 will be lost and must be restarted. However, the buckets that had already
+been assigned to the remaining hosts are unchanged, thus requests associated with those buckets are
+not impacted by the loss of host 1.
.. image:: poolingBuckets.png
@@ -73,11 +81,11 @@ For pooling to be enabled, the distributed-locking feature must be also be enabl
policy stop
features enable distributed-locking
- features enable pooling-dmaap
+ features enable pooling-messages
The configuration is located at:
- * $POLICY_HOME/config/feature-pooling-dmaap.properties
+ * $POLICY_HOME/config/feature-pooling-messages.properties
.. code-block:: bash
@@ -90,12 +98,10 @@ For pooling to be enabled, the distributed-locking feature must be also be enabl
:caption: Disable the pooling feature
policy stop
- features disable pooling-dmaap
+ features disable pooling-messages
policy start
End of Document
.. SSNote: Wiki page ref. https://wiki.onap.org/display/DW/Feature+Pooling
-
-
diff --git a/docs/drools/feature_sesspersist.rst b/docs/drools/feature_sesspersist.rst
deleted file mode 100644
index 4bb5ef62..00000000
--- a/docs/drools/feature_sesspersist.rst
+++ /dev/null
@@ -1,49 +0,0 @@
-
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-************************************
-Feature: Session Persistence
-************************************
-
-The session persistence feature allows drools kie sessions to be persisted in a database surviving pdp-d restarts.
-
- .. code-block:: bash
- :caption: Enable session persistence
- :linenos:
-
- policy stop
- features enable session-persistence
-
-The configuration is located at:
-
- - *$POLICY_HOME/config/feature-session-persistence.properties*
-
-Each controller that wants to be started with persistence should contain the following line in its *<controller-name>-controller.properties*
-
- - *persistence.type=auto*
-
- .. code-block:: bash
- :caption: Start the PDP-D using session-persistence
- :linenos:
-
- db-migrator -o upgrade -s ALL
- policy start
-
-Facts will survive PDP-D restart using the native drools capabilities and introduce a performance overhead.
-
- .. code-block:: bash
- :caption: Disable the session-persistence feature
- :linenos:
-
- policy stop
- features disable session-persistence
- sed -i "/persistence.type=auto/d" <controller-name>-controller.properties
- db-migrator -o erase -s sessionpersistence # delete all its database data (optional)
- policy start
-
-End of Document
-
-.. SSNote: Wiki page ref. https://wiki.onap.org/display/DW/Feature+Session+Persistence
-
-
diff --git a/docs/drools/feature_statemgmt.rst b/docs/drools/feature_statemgmt.rst
deleted file mode 100644
index 29497003..00000000
--- a/docs/drools/feature_statemgmt.rst
+++ /dev/null
@@ -1,310 +0,0 @@
-
-.. This work is licensed under a Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-.. _feature-sm-label:
-
-*************************
-Feature: State Management
-*************************
-
-.. contents::
- :depth: 2
-
-The State Management Feature provides:
-
-- Node-level health monitoring
-- Monitoring the health of dependency nodes - nodes on which a particular node is dependent
-- Ability to lock/unlock a node and suspend or resume all application processing
-- Ability to suspend application processing on a node that is disabled or in a standby state
-- Interworking/Coordination of state values
-- Support for ITU X.731 states and state transitions for:
- - Administrative State
- - Operational State
- - Availability Status
- - Standby Status
-
-
-Enabling and Disabling Feature State Management
-===============================================
-
-The State Management Feature is enabled from the command line when logged in as the *policy* user, after configuring the feature properties file (see the Description Details section). From the command line:
-
-- > features status - Lists the status of features
-- > features enable state-management - Enables the State Management Feature
-- > features disable state-management - Disables the State Management Feature
-
-The Drools PDP must be stopped prior to enabling/disabling features and then restarted after the features have been enabled/disabled.
-
- .. code-block:: bash
- :caption: Enabling State Management Feature
-
- policy@hyperion-4:/opt/app/policy$ policy stop
- [drools-pdp-controllers]
- L []: Stopping Policy Management... Policy Management (pid=354) is stopping... Policy Management has stopped.
- policy@hyperion-4:/opt/app/policy$ features enable state-management
- name version status
- ---- ------- ------
- controlloop-utils 1.1.0-SNAPSHOT disabled
- healthcheck 1.1.0-SNAPSHOT disabled
- test-transaction 1.1.0-SNAPSHOT disabled
- eelf 1.1.0-SNAPSHOT disabled
- state-management 1.1.0-SNAPSHOT enabled
- active-standby-management 1.1.0-SNAPSHOT disabled
- session-persistence 1.1.0-SNAPSHOT disabled
-
-Description Details
-~~~~~~~~~~~~~~~~~~~
-
-State Model
-"""""""""""
-
-The state model follows the ITU X.731 standard for state management. The supported state values are:
- **Administrative State:**
- - Locked - All application transaction processing is prohibited
- - Unlocked - Application transaction processing is allowed
-
- **Administrative State Transitions:**
- - The transition from Unlocked to Locked state is triggered with a Lock operation
- - The transition from the Locked to Unlocked state is triggered with an Unlock operation
-
- **Operational State:**
- - Enabled - The node is healthy and able to process application transactions
- - Disabled - The node is not healthy and not able to process application transactions
-
- **Operational State Transitions:**
- - The transition from Enabled to Disabled is triggered with a disableFailed or disableDependency operation
- - The transition from Disabled to Enabled is triggered with an enableNotFailed and enableNoDependency operation
-
- **Availability Status:**
- - Null - The Operational State is Enabled
- - Failed - The Operational State is Disabled because the node is no longer healthy
- - Dependency - The Operational State is Disabled because all members of a dependency group are disabled
- - Dependency.Failed - The Operational State is Disabled because the node is no longer healthy and all members of a dependency group are disabled
-
- **Availability Status Transitions:**
- - The transition from Null to Failed is triggered with a disableFailed operation
- - The transition from Null to Dependency is triggered with a disableDependency operation
- - The transition from Failed to Dependency.Failed is triggered with a disableDependency operation
- - The transition from Dependency to Dependency.Failed is triggered with a disableFailed operation
- - The transition from Dependency.Failed to Failed is triggered with an enableNoDependency operation
- - The transition from Dependency.Failed to Dependency is triggered with an enableNotFailed operation
- - The transition from Failed to Null is triggered with an enableNotFailed operation
- - The transition from Dependency to Null is triggered with an enableNoDependency operation
-
- **Standby Status:**
- - Null - The node does not support active-standby behavior
- - ProvidingService - The node is actively providing application transaction service
- - HotStandby - The node is capable of providing application transaction service, but is currently waiting to be promoted
- - ColdStandby - The node is not capable of providing application service because of a failure
-
- **Standby Status Transitions:**
- - The transition from Null to HotStandby is triggered by a demote operation when the Operational State is Enabled
- - The transition from Null to ColdStandby is triggered by a demote operation when the Operational State is Disabled
- - The transition from ColdStandby to HotStandby is triggered by a transition of the Operational State from Disabled to Enabled
- - The transition from HotStandby to ColdStandby is triggered by a transition of the Operational State from Enabled to Disabled
- - The transition from ProvidingService to ColdStandby is triggered by a transition of the Operational State from Enabled to Disabled
- - The transition from HotStandby to ProvidingService is triggered by a Promote operation
- - The transition from ProvidingService to HotStandby is triggered by a Demote operation
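
The availability-status transitions listed above can be condensed into a small lookup. The following is an illustrative sketch only (a hypothetical helper, not part of the PDP-D code base):

```shell
#!/bin/bash
# Illustrative sketch of the ITU X.731 availability-status transitions
# described above. The function name is hypothetical; the real logic lives
# in the IntegrityMonitor (policy/common).
# Usage: avail_status_transition <current-status> <operation>
avail_status_transition() {
  local status="$1" op="$2"
  case "$status:$op" in
    Null:disableFailed)                   echo "Failed" ;;
    Null:disableDependency)               echo "Dependency" ;;
    Failed:disableDependency)             echo "Dependency.Failed" ;;
    Dependency:disableFailed)             echo "Dependency.Failed" ;;
    Dependency.Failed:enableNoDependency) echo "Failed" ;;
    Dependency.Failed:enableNotFailed)    echo "Dependency" ;;
    Failed:enableNotFailed)               echo "Null" ;;
    Dependency:enableNoDependency)        echo "Null" ;;
    *)                                    echo "$status" ;; # no transition defined
  esac
}

avail_status_transition Null disableFailed         # Failed
avail_status_transition Failed disableDependency   # Dependency.Failed
```

Note that the fallback branch leaves the status unchanged, mirroring the fact that the standard defines no transition for other operation/status pairs.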
-
-Database
-~~~~~~~~
-
-The State Management feature creates a StateManagement database having three tables:
-
- **StateManagementEntity** - This table has the following columns:
- - **id** - Automatically created unique identifier
- - **resourceName** - The unique identifier for a node
- - **adminState** - The Administrative State
- - **opState** - The Operational State
- - **availStatus** - The Availability Status
- - **standbyStatus** - The Standby Status
- - **created_Date** - The timestamp the resource entry was created
- - **modifiedDate** - The timestamp the resource entry was last modified
-
- **ForwardProgressEntity** - This table has the following columns:
- - **forwardProgressId** - Automatically created unique identifier
- - **resourceName** - The unique identifier for a node
- - **fpc_count** - A forward progress counter which is periodically incremented if the node is healthy
- - **created_date** - The timestamp the resource entry was created
- - **last_updated** - The timestamp the resource entry was last updated
-
- **ResourceRegistrationEntity** - This table has the following columns:
- - **ResourceRegistrationId** - Automatically created unique identifier
- - **resourceName** - The unique identifier for a node
- - **resourceUrl** - The JMX URL used to check the health of a node
- - **site** - The name of the site in which the resource resides
- - **nodeType** - The type of the node (i.e., pdp_xacml, pdp_drools, pap, pap_admin, logparser, brms_gateway, astra_gateway, elk_server, pypdp)
- - **created_date** - The timestamp the resource entry was created
- - **last_updated** - The timestamp the resource entry was last updated
-
-Node Health Monitoring
-~~~~~~~~~~~~~~~~~~~~~~
-
-**Application Monitoring**
-
- Application monitoring can be implemented using the *startTransaction()* and *endTransaction()* methods. Whenever a transaction is started, the *startTransaction()* method is called. If the node is locked, disabled or in a hot/cold standby state, the method will throw an exception. Otherwise, it resets the timer which triggers the default *testTransaction()* method.
-
- When a transaction completes, calling *endTransaction()* increments the forward progress counter in the *ForwardProgressEntity* DB table. As long as this counter is updating, the integrity monitor will assume the node is healthy/sane.
-
- If the *startTransaction()* method is not called within a provisioned period of time, a timer will expire which calls the *testTransaction()* method. The default implementation of this method simply increments the forward progress counter. The *testTransaction()* method may be overwritten to perform a more meaningful test of system sanity, if desired.
-
- If the forward progress counter stops incrementing, the integrity monitoring routine will assume the node application has lost sanity and it will trigger a *statechange* (disableFailed) to cause the operational state to become disabled and the availability status attribute to become failed. Once the forward progress counter again begins incrementing, the operational state will return to enabled.
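
The stall-detection behavior described above can be sketched as follows. This is an illustrative model, with a hypothetical function name and parameters; the real implementation is the *IntegrityMonitor* class in policy/common, and *max_fpc_update_interval* is the property of the same name from the feature configuration:

```shell
#!/bin/bash
# Illustrative sketch of the forward-progress check described above.
# A node is considered failed (disableFailed) when its forward-progress
# counter (FPC) has not incremented within max_fpc_update_interval seconds.
check_forward_progress() {
  local last_fpc="$1" current_fpc="$2" seconds_since_update="$3" max_interval="$4"
  if [ "$current_fpc" -gt "$last_fpc" ]; then
    echo "enabled"            # counter is moving: node is healthy
  elif [ "$seconds_since_update" -gt "$max_interval" ]; then
    echo "disabled/failed"    # stalled too long: triggers disableFailed
  else
    echo "enabled"            # stalled, but still within tolerance
  fi
}

check_forward_progress 10 11 30 120    # enabled
check_forward_progress 10 10 300 120   # disabled/failed
```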
-
-**Application Monitoring with AllSeemsWell**
-
- The IntegrityMonitor class provides a facility for applications to directly control updates of the forwardprogressentity table. As previously described, *startTransaction()* and *endTransaction()* are provided to monitor the forward progress of transactions. This, however, does not monitor things such as internal threads that may be blocked or have died. An example is the feature-state-management *DroolsPdpElectionHandler.run()* method.
-
- The *run()* method is monitored by a timer task, *checkWaitTimer()*. If the *run()* method is stalled for an extended period of time, the *checkWaitTimer()* method will call *StateManagementFeature.allSeemsWell(<className>, <AllSeemsWell State>, <String message>)* with the AllSeemsWell state of Boolean.FALSE.
-
- The IntegrityMonitor instance owned by StateManagementFeature will then store an entry in the allSeemsWellMap and block updates of the forwardprogressentity table. This, in turn, will cause the Drools PDP operational state to be set to “disabled” and availability status to be set to “failed”.
-
- Once the blocking condition is cleared, the *checkWaitTimer()* will again call the *allSeemsWell()* method and include an AllSeemsWell state of Boolean.TRUE. This will cause the IntegrityMonitor to remove the entry for that className from the allSeemsWellMap and allow updating of the forwardprogressentity table, so long as there are no other entries in the map.
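
The map-based gating can be sketched like this. It is an illustrative shell model of the *allSeemsWellMap* behavior only, not the actual Java implementation:

```shell
#!/bin/bash
# Illustrative model of the allSeemsWell gating described above: updates of
# the forwardprogressentity table are allowed only while no component has an
# outstanding AllSeemsWell=false report.
declare -A all_seems_well_map            # className -> error message

report_all_seems_well() {                # <className> <true|false> <message>
  if [ "$2" = "false" ]; then
    all_seems_well_map["$1"]="$3"        # store the entry; blocks FPC updates
  else
    unset "all_seems_well_map[$1]"       # clear the entry for this class
  fi
}

fpc_updates_allowed() {
  if [ "${#all_seems_well_map[@]}" -eq 0 ]; then echo "yes"; else echo "no"; fi
}

report_all_seems_well DroolsPdpElectionHandler false "run() stalled"
fpc_updates_allowed                      # no
report_all_seems_well DroolsPdpElectionHandler true "recovered"
fpc_updates_allowed                      # yes
```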
-
-**Dependency Monitoring**
-
- When a Drools PDP (or other node using the *IntegrityMonitor* policy/common module) is dependent upon other nodes to perform its function, those other nodes can be defined as dependencies in the properties file. In order for the dependency algorithm to function, the other nodes must also be running the *IntegrityMonitor*. Periodically, the Drools PDP will check the state of dependencies. If all nodes of a given type have failed, the Drools PDP will declare that it can no longer function and change the operational state to disabled and the availability status to dependency.
-
- In addition to other policy node types, there is a *subsystemTest()* method that is periodically called by the *IntegrityMonitor*. In Drools PDP, *subsystemTest* has been overwritten to execute an audit of the Database and of the Maven Repository. If the audit is unable to verify the function of either the DB or the Maven Repository, the Drools PDP will declare that it can no longer function and change the operational state to disabled and the availability status to dependency.
-
- When a failed dependency returns to normal operation, the *IntegrityMonitor* will change the operational state to enabled and availability status to null.
-
-**External Health Monitoring Interface**
-
- The Drools PDP has an HTTP test interface which, when called, will return 200 if all seems well and 500 otherwise. The test interface URL is defined in the properties file.
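
A minimal probe of this interface might look like the following. The status-code interpretation matches the text above; the *curl* line is commented out because the exact URL path is defined in the properties file and a running PDP-D is required, so it is an assumption:

```shell
#!/bin/bash
# Illustrative health probe for the PDP-D test interface described above.
# Host and port come from feature-state-management.properties
# (http.server.services.TEST.host / .port); 9981 is the documented default.
interpret_health() {          # maps the HTTP status code to a health state
  case "$1" in
    200) echo "healthy" ;;
    500) echo "failed" ;;
    *)   echo "unknown" ;;
  esac
}

# Example probe (requires a running PDP-D; the path is an assumption):
# code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:9981/test)
# interpret_health "$code"
interpret_health 200          # healthy
```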
-
-
-Site Manager
-~~~~~~~~~~~~
-
-The Site Manager is not deployed with the Drools PDP, but it is available in the policy/common repository in the site-manager directory.
-The Site Manager provides a lock/unlock interface for nodes and a way to display node information and status.
-
-The following is from the README file included with the Site Manager.
-
- .. code-block:: bash
- :caption: Site Manager README extract
-
- Before using 'siteManager', the file 'siteManager.properties' needs to be
- edited to configure the parameters used to access the database:
-
- javax.persistence.jdbc.driver - typically 'org.mariadb.jdbc.Driver'
-
- javax.persistence.jdbc.url - URL referring to the database,
- which typically has the form: 'jdbc:mariadb://<host>:<port>/<db>'
- ('<db>' is probably 'xacml' in this case)
-
- javax.persistence.jdbc.user - the user id for accessing the database
-
- javax.persistence.jdbc.password - password for accessing the database
-
- Once the properties file has been updated, the 'siteManager' script can be
- invoked as follows:
-
- siteManager show [ -s <site> | -r <resourceName> ] :
- display node information (Site, NodeType, ResourceName, AdminState,
- OpState, AvailStatus, StandbyStatus)
-
- siteManager setAdminState { -s <site> | -r <resourceName> } <new-state> :
- update admin state on selected nodes
-
- siteManager lock { -s <site> | -r <resourceName> } :
- lock selected nodes
-
- siteManager unlock { -s <site> | -r <resourceName> } :
- unlock selected nodes
-
-Note that the 'siteManager' script assumes that the script,
-'site-manager-${project.version}.jar' file and 'siteManager.properties' file
-are all in the same directory. If the files are separated, the 'siteManager'
-script will need to be modified so it can locate the jar and properties files.
-
-
-Properties
-~~~~~~~~~~
-
-The feature-state-management.properties file controls the function of the State Management Feature. In general, the properties have adequate descriptions in the file. Parameters which must be replaced prior to usage are indicated thus: ${{parameter to be replaced}}.
-
- .. code-block:: bash
- :caption: feature-state-management.properties
-
- # DB properties
- javax.persistence.jdbc.driver=org.mariadb.jdbc.Driver
- javax.persistence.jdbc.url=jdbc:mariadb://${{SQL_HOST}}:3306/statemanagement
- javax.persistence.jdbc.user=${{SQL_USER}}
- javax.persistence.jdbc.password=${{SQL_PASSWORD}}
-
- # DroolsPDPIntegrityMonitor Properties
- # Test interface host and port defaults may be overwritten here
- http.server.services.TEST.host=0.0.0.0
- http.server.services.TEST.port=9981
- #These properties will default to the following if no other values are provided:
- # http.server.services.TEST.restClasses=org.onap.policy.drools.statemanagement.IntegrityMonitorRestManager
- # http.server.services.TEST.managed=false
- # http.server.services.TEST.swagger=true
-
- #IntegrityMonitor Properties
-
- # Must be unique across the system
- resource.name=pdp1
- # Name of the site in which this node is hosted
- site_name=site1
- # Forward Progress Monitor update interval seconds
- fp_monitor_interval=30
- # Failed counter threshold before failover
- failed_counter_threshold=3
- # Interval between test transactions when no traffic seconds
- test_trans_interval=10
- # Interval between writes of the FPC to the DB seconds
- write_fpc_interval=5
- # Node type. Note: make sure you don't leave any trailing spaces, or you'll get an 'invalid node type' error!
- node_type=pdp_drools
- # Dependency groups are groups of resources on which a node's operational state depends.
- # Each group is a comma-separated list of resource names and groups are separated by a semicolon. For example:
- # dependency_groups=site_1.astra_1,site_1.astra_2;site_1.brms_1,site_1.brms_2;site_1.logparser_1;site_1.pypdp_1
- dependency_groups=
- # When set to true, dependent health checks are performed by using JMX to invoke test() on the dependent.
- # The default false is to use state checks for health.
- test_via_jmx=true
- # This is the max number of seconds beyond which a non incrementing FPC is considered a failure
- max_fpc_update_interval=120
- # Run the state audit every 60 seconds (60000 ms). The state audit finds stale DB entries in the
- # forwardprogressentity table and marks the node as disabled/failed in the statemanagemententity
- # table. NOTE! It will only run on nodes that have a standbystatus = providingservice.
- # A value of <= 0 will turn off the state audit.
- state_audit_interval_ms=60000
- # The refresh state audit is run every (default) 10 minutes (600000 ms) to clean up any state corruption in the
- # DB statemanagemententity table. It only refreshes the DB state entry for the local node. That is, it does not
- # refresh the state of any other nodes. A value <= 0 will turn the audit off. Any other value will override
- # the default of 600000 ms.
- refresh_state_audit_interval_ms=600000
-
- # Repository audit properties
- # Assume it's the releaseRepository that needs to be audited,
- # because that's the one BRMGW will publish to.
- repository.audit.id=${{releaseRepositoryID}}
- repository.audit.url=${{releaseRepositoryUrl}}
- repository.audit.username=${{repositoryUsername}}
- repository.audit.password=${{repositoryPassword}}
- repository2.audit.id=${{releaseRepository2ID}}
- repository2.audit.url=${{releaseRepository2Url}}
- repository2.audit.username=${{repositoryUsername2}}
- repository2.audit.password=${{repositoryPassword2}}
-
- # Repository Audit Properties
- # Flag to control the execution of the subsystemTest for the Nexus Maven repository
- repository.audit.is.active=false
- repository.audit.ignore.errors=true
- repository.audit.interval_sec=86400
- repository.audit.failure.threshold=3
-
- # DB Audit Properties
- # Flag to control the execution of the subsystemTest for the Database
- db.audit.is.active=false
-
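
The *dependency_groups* format shown above (groups separated by semicolons, resource names within a group separated by commas) can be parsed with a short helper. This is an illustrative sketch, not a platform tool:

```shell
#!/bin/bash
# Illustrative parser for the dependency_groups property format:
# groups separated by ';', resource names within a group separated by ','.
list_dependency_groups() {
  local value="$1" group
  local IFS=';'                 # split the property value on group separators
  local n=0
  for group in $value; do
    n=$((n + 1))
    echo "group $n: ${group//,/ }"   # commas become spaces within a group
  done
}

list_dependency_groups "site_1.astra_1,site_1.astra_2;site_1.brms_1,site_1.brms_2;site_1.logparser_1"
# group 1: site_1.astra_1 site_1.astra_2
# group 2: site_1.brms_1 site_1.brms_2
# group 3: site_1.logparser_1
```

Per the property description, a node's operational state becomes disabled/dependency only when every resource in some group has failed.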
-
-End of Document
-
-.. SSNote: Wiki page ref. https://wiki.onap.org/display/DW/Feature+State+Management
-
-
diff --git a/docs/drools/feature_testtransaction.rst b/docs/drools/feature_testtransaction.rst
index 8bec1421..8e99f0b6 100644
--- a/docs/drools/feature_testtransaction.rst
+++ b/docs/drools/feature_testtransaction.rst
@@ -11,15 +11,24 @@ Feature: Test Transaction
.. contents::
:depth: 3
-The Test Transaction feature provides a mechanism by which the health of drools policy controllers can be tested.
+The Test Transaction feature provides a mechanism by which the health of drools policy controllers
+can be tested.
-When enabled, the feature functions by injecting an event object (identified by a UUID) into the drools session of each policy controller that is active in the system. Only an object with this UUID can trigger the Test Transaction-specific drools logic to execute.
+When enabled, the feature functions by injecting an event object (identified by a UUID) into the
+drools session of each policy controller that is active in the system. Only an object with this UUID
+can trigger the Test Transaction-specific drools logic to execute.
-The injection of the event triggers the "TT" rule (see *TestTransactionTemplate.drl* below) to fire. The "TT" rule simply increments a ForwardProgress counter object, thereby confirming that the drools session for this particular controller is active and firing its rules accordingly. This cycle repeats at 20 second intervals.
+The injection of the event triggers the "TT" rule (see *TestTransactionTemplate.drl* below) to fire.
+The "TT" rule simply increments a ForwardProgress counter object, thereby confirming that the drools
+session for this particular controller is active and firing its rules accordingly. This cycle
+repeats at 20 second intervals.
-If it is ever the case that a drools controller does not have the "TT" rule present in its *.drl*, or that the forward progress counter is not incremented, the Test Transaction thread for that particular drools session (i.e. controller) is terminated and a message is logged to *error.log*.
+If it is ever the case that a drools controller does not have the "TT" rule present in its *.drl*,
+or that the forward progress counter is not incremented, the Test Transaction thread for that
+particular drools session (i.e. controller) is terminated and a message is logged to *error.log*.
-Prior to being enabled, the following drools rules need to be appended to the rules templates of any use-case that is to be monitored by the feature.
+Prior to being enabled, the following drools rules need to be appended to the rules templates of any
+use-case that is to be monitored by the feature.
.. code-block:: java
:caption: TestTransactionTemplate.drl
@@ -73,7 +82,8 @@ Prior to being enabled, the following drools rules need to be appended to the ru
ForwardProgress(counter >= 0, $ttc : counter)
end
-Once the proper artifacts are built and deployed with the addition of the TestTransactionTemplate rules, the feature can then be enabled by entering the following commands:
+Once the proper artifacts are built and deployed with the addition of the TestTransactionTemplate
+rules, the feature can then be enabled by entering the following commands:
.. code-block:: bash
:caption: PDPD Features Command
@@ -87,10 +97,6 @@ Once the proper artifacts are built and deployed with the addition of the TestTr
controlloop-utils 1.1.0-SNAPSHOT disabled
healthcheck 1.1.0-SNAPSHOT disabled
test-transaction 1.1.0-SNAPSHOT enabled
- eelf 1.1.0-SNAPSHOT disabled
- state-management 1.1.0-SNAPSHOT disabled
- active-standby-management 1.1.0-SNAPSHOT disabled
- session-persistence 1.1.0-SNAPSHOT disabled
The output of the enable command will indicate whether or not the feature was enabled successfully.
diff --git a/docs/drools/mdc_enablefeature.png b/docs/drools/mdc_enablefeature.png
deleted file mode 100644
index 26ae55a4..00000000
--- a/docs/drools/mdc_enablefeature.png
+++ /dev/null
Binary files differ
diff --git a/docs/drools/mdc_properties.png b/docs/drools/mdc_properties.png
deleted file mode 100755
index 63cea92e..00000000
--- a/docs/drools/mdc_properties.png
+++ /dev/null
Binary files differ
diff --git a/docs/drools/pdpdApps.rst b/docs/drools/pdpdApps.rst
index 6dceee5f..abcf2e69 100644
--- a/docs/drools/pdpdApps.rst
+++ b/docs/drools/pdpdApps.rst
@@ -28,47 +28,48 @@ Software
Source Code repositories
~~~~~~~~~~~~~~~~~~~~~~~~
-The PDP-D Applications software resides on the `policy/drools-applications <https://git.onap.org/policy/drools-applications>`__ repository. The actor libraries introduced in the *frankfurt* release reside in
-the `policy/models repository <https://git.onap.org/policy/models>`__.
+The PDP-D Applications software resides on the
+`policy/drools-applications <https://git.onap.org/policy/drools-applications>`_ repository.
+The actor libraries introduced in the *Frankfurt* release reside in the
+`policy/models repository <https://git.onap.org/policy/models>`_.
At this time, the *control loop* application is the only application supported in ONAP.
All the application projects reside under the
-`controlloop directory <https://git.onap.org/policy/drools-applications/tree/controlloop>`__.
+`controlloop directory <https://git.onap.org/policy/drools-applications/tree/controlloop>`_.
Docker Image
~~~~~~~~~~~~
-See the *drools-applications*
-`released versions <https://wiki.onap.org/display/DW/Policy+Framework+Project%3A+Component+Versions>`__
-for the latest images:
+Check the *drools-applications* `released versions <https://github.com/onap/policy-parent/tree/master/integration/src/main/resources/release>`_
+page for the latest versions.
.. code-block:: bash
- docker pull onap/policy-pdpd-cl:3.0.0
+ docker pull nexus3.onap.org:10001/onap/policy-pdpd-cl:3.0.1
-At the time of this writing *3.0.0* is the latest version.
+At the time of this writing *3.0.1* is the latest version.
-The *onap/policy-pdpd-cl* image extends the *onap/policy-drools* image with
-the *usecases* controller that realizes the *control loop* application.
+The *onap/policy-pdpd-cl* image extends the *onap/policy-drools* image with the *usecases*
+controller that realizes the *control loop* application.
Usecases Controller
===================
-The `usecases <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases>`__
+The `usecases <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases>`_
controller is the *control loop* application in ONAP.
There are three parts in this controller:
-* The `drl rules <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/src/main/resources/usecases.drl>`__.
-* The `kmodule.xml <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/src/main/resources/META-INF/kmodule.xml>`__.
-* The `dependencies <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/pom.xml>`__.
+* The `drl rules <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/src/main/resources/usecases.drl>`_.
+* The `kmodule.xml <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/src/main/resources/META-INF/kmodule.xml>`_.
+* The `dependencies <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/pom.xml>`_.
-The `kmodule.xml` specifies only one session, and declares in the *kbase* section the two operational policy types that
-it supports.
+The `kmodule.xml` specifies only one session, and declares in the *kbase* section the two
+operational policy types that it supports.
-The Usecases controller relies on the new Actor framework to interact with remote
-components, part of a control loop transaction. The reader is referred to the
-*Policy Platform Actor Development Guidelines* in the documentation for further information.
+The Usecases controller relies on the new Actor framework to interact with remote components, part
+of a control loop transaction. The reader is referred to the *Policy Platform Actor Development
+Guidelines* in the documentation for further information.
Operational Policy Types
========================
@@ -77,19 +78,20 @@ The *usecases* controller supports the following policy type:
- *onap.policies.controlloop.operational.common.Drools*.
-The *onap.policies.controlloop.operational.common.Drools*
-is the Tosca compliant policy type introduced in *frankfurt*.
+The *onap.policies.controlloop.operational.common.Drools* is the Tosca compliant policy type
+introduced in *Frankfurt*.
The Tosca Compliant Operational Policy Type is defined at the
-`onap.policies.controlloop.operational.common.Drools <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policytypes/onap.policies.controlloop.operational.common.Drools.yaml>`__.
+`onap.policies.controlloop.operational.common.Drools <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policytypes/onap.policies.controlloop.operational.common.Drools.yaml>`_.
-An example of a Tosca Compliant Operational Policy can be found
-`here <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policies/vDNS.policy.operational.input.tosca.json>`__.
+An example of a Tosca Compliant Operational Policy:
+`vDNS <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policies/vDNS.policy.operational.input.tosca.json>`_.
Policy Chaining
===============
-The *usecases* controller supports chaining of multiple operations inside a Tosca Operational Policy. The next operation can be chained based on the result/output from an operation.
+The *usecases* controller supports chaining of multiple operations inside a Tosca Operational
+Policy. The next operation can be chained based on the result/output from an operation.
The possibilities available for chaining are:
- *success: chain after the result of operation is success*
@@ -99,17 +101,17 @@ The possibilities available for chaining are:
- *failure_exception: chain after the result of operation is failure due to exception*
- *failure_guard: chain after the result of operation is failure due to guard not allowing the operation*
-An example of policy chaining for VNF can be found
-`here <https://github.com/onap/policy-models/blob/master/models-examples/src/main/resources/policies/vFirewall.cds.policy.operational.chaining.yaml>`__.
+An example of policy chaining for VNF:
+`vFirewall <https://github.com/onap/policy-models/blob/master/models-examples/src/main/resources/policies/vFirewall.cds.policy.operational.chaining.yaml>`_.
-An example of policy chaining for PNF can be found
-`here <https://github.com/onap/policy-models/blob/master/models-examples/src/main/resources/policies/pnf.cds.policy.operational.chaining.yaml>`__.
+An example of policy chaining for PNF:
+`pnf <https://github.com/onap/policy-models/blob/master/models-examples/src/main/resources/policies/pnf.cds.policy.operational.chaining.yaml>`_.
Features
========
-Since the PDP-D Control Loop Application image was created from the PDP-D Engine one (*onap/policy-drools*),
-it inherits all features and functionality.
+Since the PDP-D Control Loop Application image was created from the PDP-D Engine one
+(*onap/policy-drools*), it inherits all features and functionality.
The enabled features in the *onap/policy-pdpd-cl* image are:
@@ -118,29 +120,27 @@ The enabled features in the *onap/policy-pdpd-cl* image are:
- **lifecycle**: enables the lifecycle APIs.
- **controlloop-trans**: control loop transaction tracking.
- **controlloop-management**: generic controller capabilities.
-- **controlloop-usecases**: new *controller* introduced in the guilin release to realize the ONAP use cases.
+- **controlloop-usecases**: new *controller* introduced in the Guilin release to realize the ONAP
+ use cases.
Control Loops Transaction (controlloop-trans)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-It tracks Control Loop Transactions and Operations. These are recorded in
-the *$POLICY_LOGS/audit.log* and *$POLICY_LOGS/metrics.log*, and accessible
-through the telemetry APIs.
+It tracks Control Loop Transactions and Operations. These are recorded in the
+*$POLICY_LOGS/audit.log* and *$POLICY_LOGS/metrics.log*, and accessible through the telemetry APIs.
Control Loops Management (controlloop-management)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-It installs common control loop application resources, and provides
-telemetry API extensions. *Actor* configurations are packaged in this
-feature.
+It installs common control loop application resources, and provides telemetry API extensions.
+*Actor* configurations are packaged in this feature.
Usecases Controller (controlloop-usecases)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-It is the *guilin* release implementation of the ONAP use cases.
-It relies on the new *Actor* model framework to carry out a policy's
-execution.
+It is the *Guilin* release implementation of the ONAP use cases. It relies on the new *Actor* model
+framework to carry out a policy's execution.
Utilities (controlloop-utils)
@@ -151,16 +151,15 @@ Enables *actor simulators* for testing purposes.
Offline Mode
============
-The default ONAP installation in *onap/policy-pdpd-cl:1.8.2* is *OFFLINE*.
-In this configuration, the *rules* artifact and the *dependencies* are all in the local
-maven repository. This requires that the maven dependencies are preloaded in the local
-repository.
+The default ONAP installation in *onap/policy-pdpd-cl:1.8.2* is *OFFLINE*. In this configuration,
+the *rules* artifact and the *dependencies* are all in the local maven repository. This requires
+that the maven dependencies are preloaded in the local repository.
An offline configuration requires two configuration items:
-- *OFFLINE* environment variable set to true (see `values.yaml <https://git.onap.org/oom/tree/kubernetes/policy/values.yaml>`__.
-- override of the default *settings.xml* (see
- `settings.xml <https://git.onap.org/oom/tree/kubernetes/policy/components/policy-drools-pdp/resources/configmaps/settings.xml>`__) override.
+- *OFFLINE* environment variable set to true (see `values.yaml <https://git.onap.org/oom/tree/kubernetes/policy/values.yaml>`_).
+- override of the default *settings.xml* (see `settings.xml <https://git.onap.org/oom/tree/kubernetes/policy/components/policy-drools-pdp/resources/configmaps/settings.xml>`_).
Running the PDP-D Control Loop Application in a single container
================================================================
@@ -205,13 +204,7 @@ First create an environment file (in this example *env.conf*) to configure the P
SQL_USER=
SQL_PASSWORD=
- # AAF
-
- AAF=false
- AAF_NAMESPACE=org.onap.policy
- AAF_HOST=aaf.api.simpledemo.onap.org
-
- # PDP-D DMaaP configuration channel
+ # PDP-D configuration channel
PDPD_CONFIGURATION_TOPIC=PDPD-CONFIGURATION
PDPD_CONFIGURATION_API_KEY=
@@ -258,7 +251,7 @@ First create an environment file (in this example *env.conf*) to configure the P
PDP_PASSWORD=password
GUARD_DISABLED=true
- # DCAE DMaaP
+ # DCAE Topic
DCAE_TOPIC=unauthenticated.DCAE_CL_OUTPUT
DCAE_SERVERS=localhost
@@ -301,7 +294,8 @@ Configuration
features.pre.sh
"""""""""""""""
-We can enable the *controlloop-utils* and disable the *distributed-locking* feature to avoid using the database.
+We can enable the *controlloop-utils* and disable the *distributed-locking* feature to avoid using
+the database.
.. code-block:: bash
@@ -310,22 +304,11 @@ We can enable the *controlloop-utils* and disable the *distributed-locking* feat
bash -c "/opt/app/policy/bin/features disable distributed-locking"
bash -c "/opt/app/policy/bin/features enable controlloop-utils"
-active.post.sh
-""""""""""""""
-
-The *active.post.sh* script makes the PDP-D active.
-
-.. code-block:: bash
-
- #!/bin/bash -x
-
- bash -c "http --verify=no -a ${TELEMETRY_USER}:${TELEMETRY_PASSWORD} PUT https://localhost:9696/policy/pdp/engine/lifecycle/state/ACTIVE"
-
Actor Properties
""""""""""""""""
-In the *guilin* release, some *actors* configurations need to be overridden to support *http* for compatibility
-with the *controlloop-utils* feature.
+In the *Guilin* release, some *actor* configurations need to be overridden to support *http* for
+compatibility with the *controlloop-utils* feature.
AAI-http-client.properties
""""""""""""""""""""""""""
@@ -420,8 +403,9 @@ Bring up the PDP-D Control Loop Application
To run the container in detached mode, add the *-d* flag.
-Note that we are opening the *9696* telemetry API port to the outside world, mounting the *config* host directory,
-and setting environment variables.
+.. note::
+ The *9696* telemetry API port is open to the outside world, the *config* host directory is mounted
+  as a volume, and environment variables are set with an env-file option.
To open a shell into the PDP-D:
@@ -429,13 +413,12 @@ To open a shell into the PDP-D:
docker exec -it pdp-d bash
-Once in the container, run tools such as *telemetry*, *db-migrator*, *policy* to look at the system state:
+Once in the container, run tools such as *telemetry* and *policy* to look at the system state:
.. code-block:: bash
docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
docker exec -it PDPD bash -c "/opt/app/policy/bin/policy status"
- docker exec -it PDPD bash -c "/opt/app/policy/bin/db-migrator -s ALL -o report"
Controlled instantiation of the PDP-D Control Loop Application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -565,11 +548,12 @@ To initiate a control loop transaction, simulate a DCAE ONSET to Policy:
http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vdns.onset.json Content-Type:'text/plain'
-This will trigger the scale out control loop transaction that will interact with the *SO*
-simulator to complete the transaction.
+This will trigger the scale out control loop transaction that will interact with the *SO* simulator
+to complete the transaction.
-Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the POLICY-CL-MGT channel.
-An entry in the *$POLICY_LOGS/audit.log* should indicate successful completion as well.
+Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the
+POLICY-CL-MGT channel. An entry in the *$POLICY_LOGS/audit.log* should indicate successful
+completion as well.
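A quick way to check both logs from the command line is shown below. This is a sketch: the log location and the exact message layout vary by installation, so a scratch directory with a sample notification line stands in for the real logs.

```shell
# Scratch directory standing in for the real $POLICY_LOGS mount (illustrative)
POLICY_LOGS=${POLICY_LOGS:-/tmp/policy-logs}
mkdir -p "${POLICY_LOGS}"
# Sample notification line like the one emitted over the POLICY-CL-MGT channel
echo '{"notification":"FINAL: SUCCESS","policyName":"..."}' >> "${POLICY_LOGS}/network.log"
# grep prints matching lines; a non-zero exit status means no match was found
grep "FINAL: SUCCESS" "${POLICY_LOGS}/network.log"
```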
vCPE use case testing
=====================
@@ -661,8 +645,8 @@ To initiate a control loop transaction, simulate a DCAE ONSET to Policy:
http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vcpe.onset.json Content-Type:'text/plain'
-This will spawn a vCPE control loop transaction in the PDP-D. Policy will send a *restart* message over the
-*APPC-LCM-READ* channel to APPC and wait for a response.
+This will spawn a vCPE control loop transaction in the PDP-D. Policy will send a *restart* message
+over the *APPC-LCM-READ* channel to APPC and wait for a response.
Verify that you see this message in the network.log by looking for *APPC-LCM-READ* messages.
@@ -705,8 +689,9 @@ Send a simulated APPC response back to the PDP-D over the *APPC-LCM-WRITE* chann
http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/APPC-LCM-WRITE/events @appc.vcpe.success.json Content-Type:'text/plain'
-Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the *POLICY-CL-MGT* channel,
-and an entry is added to the *$POLICY_LOGS/audit.log* indicating successful completion.
+Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the
+*POLICY-CL-MGT* channel, and an entry is added to the *$POLICY_LOGS/audit.log* indicating successful
+completion.
vFirewall use case testing
==========================
@@ -807,13 +792,14 @@ To initiate a control loop transaction, simulate a DCAE ONSET to Policy:
http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vfw.onset.json Content-Type:'text/plain'
-This will spawn a vFW control loop transaction in the PDP-D. Policy will send a *ModifyConfig* message over the
-*APPC-CL* channel to APPC and wait for a response. This can be seen by searching the network.log for *APPC-CL*.
+This will spawn a vFW control loop transaction in the PDP-D. Policy will send a *ModifyConfig*
+message over the *APPC-CL* channel to APPC and wait for a response. This can be seen by searching
+the network.log for *APPC-CL*.
Note the *SubRequestId* field in the *ModifyConfig* message in the *APPC-CL* topic in the network.log
-Send a simulated APPC response back to the PDP-D over the *APPC-CL* channel.
-To do this, change the *REPLACEME* text in the *appc.vcpe.success.json* with this *SubRequestId*.
+Send a simulated APPC response back to the PDP-D over the *APPC-CL* channel. To do this, change the
+*REPLACEME* text in the *appc.vcpe.success.json* with this *SubRequestId*.
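One way to do the substitution from the command line is sketched below; the *SubRequestId* value and the file content are made up for illustration.

```shell
# Hypothetical SubRequestId taken from the ModifyConfig message in network.log
SUB_REQ_ID="664be3d2-6c12-4f4b-a3e7-c349acced200"
# Work on a scratch copy of the response template (content is illustrative)
cat > /tmp/appc.vcpe.success.json <<'EOF'
{ "CommonHeader": { "SubRequestID": "REPLACEME" } }
EOF
# Substitute the placeholder in place (GNU sed syntax)
sed -i "s/REPLACEME/${SUB_REQ_ID}/" /tmp/appc.vcpe.success.json
cat /tmp/appc.vcpe.success.json
```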
appc.vcpe.success.json
~~~~~~~~~~~~~~~~~~~~~~
@@ -842,24 +828,19 @@ appc.vcpe.success.json
http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/APPC-CL/events @appc.vcpe.success.json Content-Type:'text/plain'
-Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the POLICY-CL-MGT channel,
-and an entry is added to the *$POLICY_LOGS/audit.log* indicating successful completion.
+Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the
+POLICY-CL-MGT channel, and an entry is added to the *$POLICY_LOGS/audit.log* indicating successful
+completion.
Running PDP-D Control Loop Application with other components
============================================================
-The reader can also look at the `policy/docker repository <https://github.com/onap/policy-docker/tree/master/csit>`__.
+The reader can also look at the `policy/docker repository <https://github.com/onap/policy-docker/tree/master/csit>`_.
More specifically, these directories have examples of other PDP-D Control Loop configurations:
-* `plans <https://github.com/onap/policy-docker/tree/master/compose>`__: startup & teardown scripts.
-* `scripts <https://github.com/onap/policy-docker/blob/master/compose/compose.yaml>`__: docker-compose file.
-* `tests <https://github.com/onap/policy-docker/blob/master/csit/resources/tests/drools-applications-test.robot>`__: test plan.
-
-Additional information
-======================
-
-For additional information, please see the
-`Drools PDP Development and Testing (In Depth) <https://wiki.onap.org/display/DW/2020-08+Frankfurt+Tutorials>`__ page.
-
+* `plans <https://github.com/onap/policy-docker/tree/master/compose>`_: startup & teardown scripts.
+* `scripts <https://github.com/onap/policy-docker/blob/master/compose/compose.yaml>`_: docker-compose file.
+* `tests <https://github.com/onap/policy-docker/blob/master/csit/resources/tests/drools-applications-test.robot>`_: test plan.
+End of Document
diff --git a/docs/drools/pdpdEngine.rst b/docs/drools/pdpdEngine.rst
index 0ee4fc28..7a699025 100644
--- a/docs/drools/pdpdEngine.rst
+++ b/docs/drools/pdpdEngine.rst
@@ -12,21 +12,21 @@ PDP-D Engine
Overview
========
-The PDP-D Core Engine provides an infrastructure and services for `drools <https://www.drools.org/>`__ based applications
-in the context of Policies and ONAP.
+The PDP-D Core Engine provides an infrastructure and services for `drools <https://www.drools.org/>`_
+based applications in the context of Policies and ONAP.
-A PDP-D supports applications by means of *controllers*. A *controller* is a named
-grouping of resources. These typically include references to communication endpoints,
-maven artifact coordinates, and *coders* for message mapping.
+A PDP-D supports applications by means of *controllers*. A *controller* is a named grouping of
+resources. These typically include references to communication endpoints, maven artifact
+coordinates, and *coders* for message mapping.
-*Controllers* use *communication endpoints* to interact
-with remote networked entities typically using messaging (dmaap or ueb),
-or http.
+*Controllers* use *communication endpoints* to interact with remote networked entities typically
+using kafka messaging or http.
-PDP-D Engine capabilities can be extended via *features*. Integration with other
+PDP-D Engine capabilities can be extended via *features*. Integration with other
Policy Framework components (API, PAP, and PDP-X) is through one of them (*feature-lifecycle*).
-The PDP-D Engine infrastructure provides mechanisms for data migration, diagnostics, and application management.
+The PDP-D Engine infrastructure provides mechanisms for data migration, diagnostics, and application
+management.
Software
========
@@ -34,25 +34,29 @@ Software
Source Code repositories
~~~~~~~~~~~~~~~~~~~~~~~~
-The PDP-D software is mainly located in the `policy/drools repository <https://git.onap.org/policy/drools-pdp>`__ with the *communication endpoints* software residing in the `policy/common repository <https://git.onap.org/policy/common>`__ and Tosca policy models in the `policy/models repository <https://git.onap.org/policy/models>`__.
+The PDP-D software is mainly located in the `policy/drools repository <https://git.onap.org/policy/drools-pdp>`_
+with the *communication endpoints* software residing in the
+`policy/common repository <https://git.onap.org/policy/common>`_ and Tosca policy models in the
+`policy/models repository <https://git.onap.org/policy/models>`_.
Docker Image
~~~~~~~~~~~~
-Check the *drools-pdp* `released versions <https://wiki.onap.org/display/DW/Policy+Framework+Project%3A+Component+Versions>`__ page for the latest versions.
-At the time of this writing *3.0.0* is the latest version.
+Check the *drools-pdp* `released versions <https://github.com/onap/policy-parent/tree/master/integration/src/main/resources/release>`_
+page for the latest versions. At the time of this writing *3.0.1* is the latest version.
.. code-block:: bash
- docker pull onap/policy-drools:3.0.0
+ docker pull nexus3.onap.org:10001/onap/policy-drools:3.0.1
-A container instantiated from this image will run under the non-priviledged *policy* account.
+A container instantiated from this image will run under the non-privileged *policy* account.
The PDP-D root directory is located at the */opt/app/policy* directory (or *$POLICY_HOME*), with the
exception of the *$HOME/.m2* which contains the local maven repository.
The PDP-D configuration resides in the following directories:
-- **/opt/app/policy/config**: (*$POLICY_HOME/config* or *$POLICY_CONFIG*) contains *engine*, *controllers*, and *endpoint* configuration.
+- **/opt/app/policy/config**: (*$POLICY_HOME/config* or *$POLICY_CONFIG*) contains *engine*,
+ *controllers*, and *endpoint* configuration.
- **/home/policy/.m2**: (*$HOME/.m2*) maven repository configuration.
- **/opt/app/policy/etc/**: (*$POLICY_HOME/etc*) miscellaneous configuration such as certificate stores.
@@ -65,114 +69,91 @@ The following command can be used to explore the directory layout.
Communication Endpoints
=======================
-PDP-D supports the following networked infrastructures. This is also referred to as
+PDP-D supports the following networked infrastructures. This is also referred to as
*communication infrastructures* in the source code.
-- DMaaP
-- UEB
+- Kafka
- NOOP
- Http Servers
- Http Clients
The source code is located at
-`the policy-endpoints module <https://git.onap.org/policy/common/tree/policy-endpoints>`__
+`the policy-endpoints module <https://git.onap.org/policy/common/tree/policy-endpoints>`_
in the *policy/commons* repository.
-These network resources are *named* and typically have a *global* scope, therefore typically visible to
-the PDP-D engine (for administration purposes), application *controllers*,
-and *features*.
+These network resources are *named* and typically have a *global* scope, therefore typically visible
+to the PDP-D engine (for administration purposes), application *controllers*, and *features*.
-DMaaP, UEB, and NOOP are message-based communication infrastructures, hence the terminology of
+Kafka and NOOP are message-based communication infrastructures, hence the terminology of
source and sinks, to denote their directionality into or out of the *controller*, respectively.
An endpoint can either be *managed* or *unmanaged*. The default for an endpoint is to be *managed*,
meaning that they are globally accessible by name, and managed by the PDP-D engine.
-*Unmanaged* topics are used when neither global visibility, or centralized PDP-D management is desired.
-The software that uses *unmanaged* topics is responsible for their lifecycle management.
+*Unmanaged* topics are used when neither global visibility, or centralized PDP-D management is
+desired. The software that uses *unmanaged* topics is responsible for their lifecycle management.
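For illustration, assuming a per-topic *managed* boolean analogous to the *http.server.services.<name>.managed* property shown later in this document, an *unmanaged* topic could be declared as follows (the topic name is made up):

```bash
noop.source.topics=my_private_topic
# the owning software, not the PDP-D engine, manages this topic's lifecycle
noop.source.topics.my_private_topic.managed=false
```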
-DMaaP Endpoints
+Kafka Topics
~~~~~~~~~~~~~~~
-These are messaging enpoints that use DMaaP as the communication infrastructure.
-
-Typically, a *managed* endpoint configuration is stored in the *<topic-name>-topic.properties* files.
+Typically, a *managed* topic configuration is stored in the *<topic-name>-topic.properties* files.
For example, the
-`DCAE_TOPIC-topic.properties <https://git.onap.org/policy/drools-applications/tree/controlloop/common/feature-controlloop-management/src/main/feature/config/DCAE_TOPIC-topic.properties>`__ is defined as
+`dcae_topic-topic.properties <https://git.onap.org/policy/drools-applications/tree/controlloop/common/feature-controlloop-management/src/main/feature/config/DCAE_TOPIC-topic.properties>`_ is defined as
.. code-block:: bash
- dmaap.source.topics=DCAE_TOPIC
-
- dmaap.source.topics.DCAE_TOPIC.effectiveTopic=${env:DCAE_TOPIC}
- dmaap.source.topics.DCAE_TOPIC.servers=${env:DMAAP_SERVERS}
- dmaap.source.topics.DCAE_TOPIC.consumerGroup=${env:DCAE_CONSUMER_GROUP}
- dmaap.source.topics.DCAE_TOPIC.https=true
+ kafka.source.topics=dcae_topic
+ kafka.source.topics.dcae_topic.effectiveTopic=${env:dcae_topic}
+ kafka.source.topics.dcae_topic.servers=${env:KAFKA_SERVERS}
+ kafka.source.topics.dcae_topic.consumerGroup=${env:DCAE_CONSUMER_GROUP}
+ kafka.source.topics.dcae_topic.https=false
-In this example, the generic name of the *source* endpoint
-is *DCAE_TOPIC*. This is known as the *canonical* name.
-The actual *topic* used in communication exchanges in a physical lab is contained
-in the *$DCAE_TOPIC* environment variable. This environment variable is usually
-set up by *devops* on a per installation basis to meet the needs of each
-lab spec.
+In this example, the generic name of the *source* topic is *dcae_topic*. This is known as the
+*canonical* name. The actual *topic* used in communication exchanges in a physical lab is contained
+in the *$dcae_topic* environment variable. This environment variable is usually set up by *devops*
+on a per installation basis to meet the needs of each lab spec.
-In the previous example, *DCAE_TOPIC* is a source-only topic.
+In the previous example, *dcae_topic* is a source-only topic.
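For example, the canonical name can be mapped to a lab-specific topic through the environment; the topic value below is illustrative.

```shell
# devops exports the lab-specific topic; the endpoint layer resolves ${env:dcae_topic}
export dcae_topic="unauthenticated.DCAE_CL_OUTPUT"
echo "effectiveTopic=${dcae_topic}"
```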
-Sink topics are similarly specified but indicating that are sink endpoints
-from the perspective of the *controller*. For example, the *APPC-CL* topic
-is configured as
+Sink topics are specified similarly, but indicate that they are sink endpoints from the perspective
+of the *controller*. For example, the *appc-cl* topic is configured as:
.. code-block:: bash
- dmaap.source.topics=APPC-CL
- dmaap.sink.topics=APPC-CL
+ kafka.source.topics=appc-cl
+ kafka.sink.topics=appc-cl
- dmaap.source.topics.APPC-CL.servers=${env:DMAAP_SERVERS}
- dmaap.source.topics.APPC-CL.https=true
+ kafka.source.topics.appc-cl.servers=${env:KAFKA_SERVERS}
+ kafka.source.topics.appc-cl.https=false
- dmaap.sink.topics.APPC-CL.servers=${env:DMAAP_SERVERS}
- dmaap.sink.topics.APPC-CL.https=true
+ kafka.sink.topics.appc-cl.servers=${env:KAFKA_SERVERS}
+ kafka.sink.topics.appc-cl.https=false
-Although not shown in these examples, additional configuration options are available such as *user name*,
-*password*, *security keys*, *consumer group* and *consumer instance*.
-
-UEB Endpoints
-~~~~~~~~~~~~~
+Although not shown in these examples, additional configuration options are available such as
+*user name*, *password*, *security keys*, *consumer group* and *consumer instance*.
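As an illustration of those options, they follow the same per-topic suffix pattern; the property names and values below are assumptions based on the endpoint property conventions, not verified against this release.

```bash
kafka.source.topics.appc-cl.consumerGroup=policy-pdpd
kafka.source.topics.appc-cl.consumerInstance=pdpd-0
kafka.source.topics.appc-cl.userName=client
kafka.source.topics.appc-cl.password=changeit
```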
-Similary, UEB endpoints are messaging endpoints, similar to the DMaaP ones.
-
-For example, the
-`DCAE_TOPIC-topic.properties <https://git.onap.org/policy/drools-applications/tree/controlloop/common/feature-controlloop-management/src/main/feature/config/DCAE_TOPIC-topic.properties>`__ can be converted to an *UEB* one, by replacing the
-*dmaap* prefix with *ueb*. For example:
-
-.. code-block:: bash
-
- ueb.source.topics=DCAE_TOPIC
-
- ueb.source.topics.DCAE_TOPIC.effectiveTopic=${env:DCAE_TOPIC}
- ueb.source.topics.DCAE_TOPIC.servers=${env:DMAAP_SERVERS}
- ueb.source.topics.DCAE_TOPIC.consumerGroup=${env:DCAE_CONSUMER_GROUP}
- ueb.source.topics.DCAE_TOPIC.https=true
NOOP Endpoints
~~~~~~~~~~~~~~
NOOP (no-operation) endpoints are messaging endpoints that don't have any network attachments.
They are used for testing convenience.
-To convert the
-`DCAE_TOPIC-topic.properties <https://git.onap.org/policy/drools-applications/tree/controlloop/common/feature-controlloop-management/src/main/feature/config/DCAE_TOPIC-topic.properties>`__ to a *NOOP* endpoint, simply replace the *dmaap* prefix with *noop*:
+To convert the *dcae_topic-topic.properties* to a *NOOP* endpoint, simply replace the *kafka*
+prefix with *noop*:
.. code-block:: bash
- noop.source.topics=DCAE_TOPIC
- noop.source.topics.DCAE_TOPIC.effectiveTopic=${env:DCAE_TOPIC}
+ noop.source.topics=dcae_topic
+ noop.source.topics.dcae_topic.effectiveTopic=${env:dcae_topic}
HTTP Clients
~~~~~~~~~~~~
-HTTP Clients are typically stored in files following the naming convention: *<name>-http-client.properties* convention.
-One such example is
-the `AAI HTTP Client <https://git.onap.org/policy/drools-applications/tree/controlloop/common/feature-controlloop-management/src/main/feature/config/AAI-http-client.properties>`__:
+HTTP Clients are typically stored in files following the *<name>-http-client.properties* naming
+convention.
+
+One such example is the
+`AAI HTTP Client <https://git.onap.org/policy/drools-applications/tree/controlloop/common/feature-controlloop-management/src/main/feature/config/AAI-http-client.properties>`_:
.. code-block:: bash
@@ -189,9 +170,9 @@ the `AAI HTTP Client <https://git.onap.org/policy/drools-applications/tree/contr
HTTP Servers
~~~~~~~~~~~~
-HTTP Servers are stored in files that follow a similar naming convention *<name>-http-server.properties*.
-The following is an example of a server named *CONFIG*, getting most of its configuration from
-environment variables.
+HTTP Servers are stored in files that follow a similar naming convention
+*<name>-http-server.properties*. The following is an example of a server named *CONFIG*, getting
+most of its configuration from environment variables.
.. code-block:: bash
@@ -204,21 +185,22 @@ environment variables.
http.server.services.CONFIG.restPackages=org.onap.policy.drools.server.restful
http.server.services.CONFIG.managed=false
http.server.services.CONFIG.swagger=true
- http.server.services.CONFIG.https=true
- http.server.services.CONFIG.aaf=${envd:AAF:false}
+ http.server.services.CONFIG.https=false
-*Endpoints* configuration resides in the *$POLICY_HOME/config* (or *$POLICY_CONFIG*) directory in a container.
+*Endpoints* configuration resides in the *$POLICY_HOME/config* (or *$POLICY_CONFIG*) directory in a
+container.
Controllers
===========
-*Controllers* are the means for the PDP-D to run *applications*. Controllers are
-defined in *<name>-controller.properties* files.
+*Controllers* are the means for the PDP-D to run *applications*. Controllers are defined in
+*<name>-controller.properties* files.
For example, see the
-`usecases controller configuration <https://git.onap.org/policy/drools-applications/tree/controlloop/common/feature-controlloop-usecases/src/main/feature/config/usecases-controller.properties>`__.
+`usecases controller configuration <https://git.onap.org/policy/drools-applications/tree/controlloop/common/feature-controlloop-usecases/src/main/feature/config/usecases-controller.properties>`_.
-This configuration file has two sections: *a)* application maven coordinates, and *b)* endpoint references and coders.
+This configuration file has two sections: *a)* application maven coordinates, and *b)* endpoint
+references and coders.
Maven Coordinates
~~~~~~~~~~~~~~~~~
@@ -236,7 +218,8 @@ It is the *brain* of the control loop application.
.....
This *kjar* contains the
-`usecases DRL <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/src/main/resources/usecases.drl>`__ file (there may be more than one DRL file included).
+`usecases DRL <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/src/main/resources/usecases.drl>`_
+file (there may be more than one DRL file included).
.. code-block:: bash
@@ -255,10 +238,10 @@ This *kjar* contains the
end
...
-The DRL in conjuction with the dependent java libraries in the kjar
-`pom <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/pom.xml>`__
-realizes the application's function. For intance, it realizes the
-vFirewall, vCPE, and vDNS use cases in ONAP.
+The DRL in conjunction with the dependent java libraries in the kjar
+`pom <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/pom.xml>`_
+realizes the application's function. For instance, it realizes the vFirewall, vCPE, and vDNS use
+cases in ONAP.
.. code-block:: bash
@@ -274,66 +257,64 @@ vFirewall, vCPE, and vDNS use cases in ONAP.
Endpoints References and Coders
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The *usecases-controller.properties* configuration also contains a mix of
-source (of incoming controller traffic) and sink (of outgoing controller traffic)
-configuration. This configuration also contains specific
-filtering and mapping rules for incoming and outgoing dmaap messages
-known as *coders*.
+The *usecases-controller.properties* configuration also contains a mix of source (of incoming
+controller traffic) and sink (of outgoing controller traffic) configuration. This configuration also
+contains specific filtering and mapping rules for incoming and outgoing messages known as *coders*.
.. code-block:: bash
...
- dmaap.source.topics=DCAE_TOPIC,APPC-CL,APPC-LCM-WRITE,SDNR-CL-RSP
- dmaap.sink.topics=APPC-CL,APPC-LCM-READ,POLICY-CL-MGT,SDNR-CL,DCAE_CL_RSP
-
+ kafka.source.topics=dcae_topic,appc-cl,appc-lcm-write,sdnr-cl-rsp
+ kafka.sink.topics=appc-cl,appc-lcm-read,policy-cl-mgt,sdnr-cl,dcae_cl_rsp
- dmaap.source.topics.APPC-LCM-WRITE.events=org.onap.policy.appclcm.AppcLcmDmaapWrapper
- dmaap.source.topics.APPC-LCM-WRITE.events.org.onap.policy.appclcm.AppcLcmDmaapWrapper.filter=[?($.type == 'response')]
- dmaap.source.topics.APPC-LCM-WRITE.events.custom.gson=org.onap.policy.appclcm.util.Serialization,gson
+ kafka.source.topics.appc-lcm-write.events=org.onap.policy.appclcm.AppcLcmMessageWrapper
+ kafka.source.topics.appc-lcm-write.events.org.onap.policy.appclcm.AppcLcmMessageWrapper.filter=[?($.type == 'response')]
+ kafka.source.topics.appc-lcm-write.events.custom.gson=org.onap.policy.appclcm.util.Serialization,gson
- dmaap.sink.topics.APPC-CL.events=org.onap.policy.appc.Request
- dmaap.sink.topics.APPC-CL.events.custom.gson=org.onap.policy.appc.util.Serialization,gsonPretty
+ kafka.sink.topics.appc-cl.events=org.onap.policy.appc.Request
+ kafka.sink.topics.appc-cl.events.custom.gson=org.onap.policy.appc.util.Serialization,gsonPretty
...
-In this example, the *coders* specify that incoming messages over the DMaaP endpoint
-reference *APPC-LCM-WRITE*, that have a field called *type* under the root JSON object with
-value *response* are allowed into the *controller* application. In this case, the incoming
-message is converted into an object (fact) of type *org.onap.policy.appclcm.AppcLcmDmaapWrapper*.
-The *coder* has attached a custom implementation provided by the *application* with class
-*org.onap.policy.appclcm.util.Serialization*. Note that the *coder* filter is expressed in JSONPath notation.
+In this example, the *coders* specify that incoming messages over the *appc-lcm-write* topic that
+have a field called *type* under the root JSON object with value *response* are allowed into the
+*controller* application. In this case, the incoming message is converted into an object (fact) of
+type *org.onap.policy.appclcm.AppcLcmMessageWrapper*. The *coder* has a custom implementation
+attached, provided by the *application* class *org.onap.policy.appclcm.util.Serialization*. Note
+that the *coder* filter is expressed in JSONPath notation.
Note that not all the communication endpoint references need to be explicitly referenced within the
-*controller* configuration file. For example, *Http clients* do not.
-The reasons are historical, as the PDP-D was initially intended to only communicate
-through messaging-based protocols such as UEB or DMaaP in asynchronous unidirectional mode.
-The introduction of *Http* with synchronous bi-directional communication with remote endpoints made
-it more convenient for the application to manage each network exchange.
+*controller* configuration file. For example, *Http clients* do not. The reasons are historical, as
+the PDP-D was initially intended to only communicate through messaging-based protocols such as UEB
+or DMaaP in asynchronous unidirectional mode. The introduction of *Http* with synchronous
+bi-directional communication with remote endpoints made it more convenient for the application to
+manage each network exchange. UEB and DMaaP have been replaced by Kafka messaging since the Kohn
+release.
-*Controllers* configuration resides in the *$POLICY_HOME/config* (or *$POLICY_CONFIG*) directory in a container.
+*Controllers* configuration resides in the *$POLICY_HOME/config* (or *$POLICY_CONFIG*) directory in
+a container.
Other Configuration Files
~~~~~~~~~~~~~~~~~~~~~~~~~
-There are other types of configuration files that *controllers* can use, for example *.environment* files
-that provides a means to share data across applications. The
-`controlloop.properties.environment <https://git.onap.org/policy/drools-applications/tree/controlloop/common/feature-controlloop-management/src/main/feature/config/controlloop.properties.environment>`__ is one such example.
+There are other types of configuration files that *controllers* can use, for example *.environment*
+files that provide a means to share data across applications. The
+`controlloop.properties.environment <https://git.onap.org/policy/drools-applications/tree/controlloop/common/feature-controlloop-management/src/main/feature/config/controlloop.properties.environment>`_
+is one such example.
Tosca Policies
==============
-PDP-D supports Tosca Policies through the *feature-lifecycle*. The *PDP-D* receives its policy set
-from the *PAP*. A policy conforms to its Policy Type specification.
-Policy Types and policy creation is done by the *API* component.
-Policy deployments are orchestrated by the *PAP*.
+PDP-D supports Tosca Policies through the *feature-lifecycle*. The *PDP-D* receives its policy set
+from the *PAP*. A policy conforms to its Policy Type specification. Policy Types and policies are
+created via the *API* component. Policy deployments are orchestrated by the *PAP*.
-All communication between *PAP* and PDP-D is over the DMaaP *POLICY-PDP-PAP* topic.
+All communication between *PAP* and PDP-D is over the Kafka *policy-pdp-pap* topic.
Native Policy Types
~~~~~~~~~~~~~~~~~~~
-The PDP-D Engine supports two (native) Tosca policy types by means of the *lifecycle*
-feature:
+The PDP-D Engine supports two (native) Tosca policy types by means of the *lifecycle* feature:
- *onap.policies.native.drools.Controller*
- *onap.policies.native.drools.Artifact*
@@ -342,7 +323,8 @@ These types can be used to dynamically deploy or undeploy application *controlle
assign policy types, and upgrade or downgrade their attached maven artifact versions.
For instance, an
-`example native controller <https://git.onap.org/policy/drools-pdp/tree/feature-lifecycle/src/test/resources/tosca-policy-native-controller-example.json>`__ policy is shown below.
+`example native controller <https://git.onap.org/policy/drools-pdp/tree/feature-lifecycle/src/test/resources/tosca-policy-native-controller-example.json>`_
+policy is shown below.
.. code-block:: bash
@@ -363,7 +345,7 @@ For instance, an
"controllerName": "lifecycle",
"sourceTopics": [
{
- "topicName": "DCAE_TOPIC",
+ "topicName": "dcae_topic",
"events": [
{
"eventClass": "java.util.HashMap",
@@ -378,7 +360,7 @@ For instance, an
],
"sinkTopics": [
{
- "topicName": "APPC-CL",
+ "topicName": "appc-cl",
"events": [
{
"eventClass": "java.util.HashMap",
@@ -397,8 +379,9 @@ For instance, an
}
}
-The actual application coordinates are provided with a policy of type onap.policies.native.drools.Artifact,
-see the `example native artifact <https://git.onap.org/policy/drools-pdp/tree/feature-lifecycle/src/test/resources/tosca-policy-native-artifact-example.json>`__
+The actual application coordinates are provided with a policy of type
+onap.policies.native.drools.Artifact; see the
+`example native artifact <https://git.onap.org/policy/drools-pdp/tree/feature-lifecycle/src/test/resources/tosca-policy-native-artifact-example.json>`_.
.. code-block:: bash
@@ -434,16 +417,15 @@ see the `example native artifact <https://git.onap.org/policy/drools-pdp/tree/fe
Operational Policy Types
~~~~~~~~~~~~~~~~~~~~~~~~
-The PDP-D also recognizes Tosca Operational Policies, although it needs an
-application *controller* that understands them to execute them. These are:
+The PDP-D also recognizes Tosca Operational Policies, although it needs an application *controller*
+that understands them to execute them. These are:
- *onap.policies.controlloop.operational.common.Drools*
-A minimum of one application *controller* that supports these capabilities
-must be installed in order to honor the *operational policy types*.
-One such controller is the *usecases* controller residing in the
-`policy/drools-applications <https://git.onap.org/policy/drools-applications>`__
-repository.
+A minimum of one application *controller* that supports these capabilities must be installed in
+order to honor the *operational policy types*. One such controller is the *usecases* controller
+residing in the
+`policy/drools-applications <https://git.onap.org/policy/drools-applications>`_ repository.
Controller Policy Type Support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -459,15 +441,14 @@ explicitly in a native *onap.policies.native.drools.Controller* policy.
The *controller* application could declare its supported policy types in the *kjar*.
For example, the *usecases controller* packages this information in the
-`kmodule.xml <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/src/main/resources/META-INF/kmodule.xml>`__. One advantage of this approach is that the PDP-D would only
+`kmodule.xml <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/src/main/resources/META-INF/kmodule.xml>`_. One advantage of this approach is that the PDP-D would only
commit to execute policies against these policy types if a supporting controller is up and running.
.. code-block:: bash
- <kmodule xmlns="http://jboss.org/kie/6.0.0/kmodule">
- <kbase name="onap.policies.controlloop.operational.common.Drools" default="false" equalsBehavior="equality"/>
- <kbase name="onap.policies.controlloop.Operational" equalsBehavior="equality"
- packages="org.onap.policy.controlloop" includes="onap.policies.controlloop.operational.common.Drools">
+ <kmodule xmlns="http://www.drools.org/xsd/kmodule">
+ <kbase name="onap.policies.controlloop.operational.common.Drools" equalsBehavior="equality"
+ packages="org.onap.policy.controlloop">
<ksession name="usecases"/>
</kbase>
</kmodule>
@@ -477,23 +458,23 @@ Software Architecture
PDP-D is divided into 2 layers:
-- core (`policy-core <https://git.onap.org/policy/drools-pdp/tree/policy-core>`__)
-- management (`policy-management <https://git.onap.org/policy/drools-pdp/tree/policy-management>`__)
+- core (`policy-core <https://git.onap.org/policy/drools-pdp/tree/policy-core>`_)
+- management (`policy-management <https://git.onap.org/policy/drools-pdp/tree/policy-management>`_)
Core Layer
~~~~~~~~~~
The core layer directly interfaces with the *drools* libraries with 2 main abstractions:
-* `PolicyContainer <https://git.onap.org/policy/drools-pdp/tree/policy-core/src/main/java/org/onap/policy/drools/core/PolicyContainer.java>`__, and
-* `PolicySession <https://git.onap.org/policy/drools-pdp/tree/policy-core/src/main/java/org/onap/policy/drools/core/PolicySession.java>`__.
+* `PolicyContainer <https://git.onap.org/policy/drools-pdp/tree/policy-core/src/main/java/org/onap/policy/drools/core/PolicyContainer.java>`_, and
+* `PolicySession <https://git.onap.org/policy/drools-pdp/tree/policy-core/src/main/java/org/onap/policy/drools/core/PolicySession.java>`_.
Policy Container and Sessions
"""""""""""""""""""""""""""""
-The *PolicyContainer* abstracts the drools *KieContainer*, while a *PolicySession* abstracts a drools *KieSession*.
-PDP-D uses stateful sessions in active mode (*fireUntilHalt*) (please visit the `drools <https://www.drools.org/>`__
-website for additional documentation).
+The *PolicyContainer* abstracts the drools *KieContainer*, while a *PolicySession* abstracts a
+drools *KieSession*. PDP-D uses stateful sessions in active mode (*fireUntilHalt*) (please visit the
+`drools <https://www.drools.org/>`_ website for additional documentation).
Management Layer
~~~~~~~~~~~~~~~~
@@ -503,53 +484,59 @@ The management layer manages the PDP-D and builds on top of the *core* capabilit
PolicyEngine
""""""""""""
-The PDP-D `PolicyEngine <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/java/org/onap/policy/drools/system/PolicyEngine.java>`__ is the top abstraction and abstracts away the PDP-D and all the
-resources it holds. The reader looking at the source code can start looking at this component
-in a top-down fashion. Note that the *PolicyEngine* abstraction should not be confused with the
-sofware in the *policy/engine* repository, there is no relationship whatsoever other than in the naming.
+The PDP-D `PolicyEngine <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/java/org/onap/policy/drools/system/PolicyEngine.java>`_ is the top abstraction and abstracts away the PDP-D and all the
+resources it holds. The reader looking at the source code can start looking at this component in a
+top-down fashion. Note that the *PolicyEngine* abstraction should not be confused with the software
+in the *policy/engine* repository; there is no relationship whatsoever other than in the naming.
-The *PolicyEngine* represents the PDP-D, holds all PDP-D resources, and orchestrates activities among those.
+The *PolicyEngine* represents the PDP-D, holds all PDP-D resources, and orchestrates activities
+among those.
-The *PolicyEngine* manages applications via the `PolicyController <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/java/org/onap/policy/drools/system/PolicyController.java>`__ abstractions in the base code. The
+The *PolicyEngine* manages applications via the `PolicyController <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/java/org/onap/policy/drools/system/PolicyController.java>`_ abstractions in the base code. The
relationship between the *PolicyEngine* and *PolicyController* is one to many.
-The *PolicyEngine* holds other global resources such as a *thread pool*, *policies validator*, *telemetry* server,
-and *unmanaged* topics for administration purposes.
+The *PolicyEngine* holds other global resources such as a *thread pool*, *policies validator*,
+*telemetry* server, and *unmanaged* topics for administration purposes.
The *PolicyEngine* has interception points that allow
-`*features* <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/java/org/onap/policy/drools/features/PolicyEngineFeatureApi.java>`__
+`*features* <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/java/org/onap/policy/drools/features/PolicyEngineFeatureApi.java>`_
to observe and alter the default *PolicyEngine* behavior.
-The *PolicyEngine* implements the `*Startable* <https://git.onap.org/policy/common/tree/capabilities/src/main/java/org/onap/policy/common/capabilities/Startable.java>`__ and `*Lockable* <https://git.onap.org/policy/common/tree/capabilities/src/main/java/org/onap/policy/common/capabilities/Lockable.java>`__ interfaces. These operations
-have a cascading effect on the resources the *PolicyEngine* holds, as it is the top level entity, thus
-affecting *controllers* and *endpoints*. These capabilities are intended to be used for extensions,
-for example active/standby multi-node capabilities. This programmability is
-exposed via the *telemetry* API, and *feature* hooks.
+The *PolicyEngine* implements the `*Startable* <https://git.onap.org/policy/common/tree/capabilities/src/main/java/org/onap/policy/common/capabilities/Startable.java>`_ and `*Lockable* <https://git.onap.org/policy/common/tree/capabilities/src/main/java/org/onap/policy/common/capabilities/Lockable.java>`_ interfaces. These operations
+have a cascading effect on the resources the *PolicyEngine* holds, as it is the top level entity,
+thus affecting *controllers* and *endpoints*. These capabilities are intended to be used for
+extensions, for example active/standby multi-node capabilities. This programmability is exposed via
+the *telemetry* API, and *feature* hooks.
Configuration
^^^^^^^^^^^^^
*PolicyEngine* related configuration is located in the
-`engine.properties <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/server/config/engine.properties>`__,
-and `engine-system.properties <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/server/config/engine.properties>`__.
+`engine.properties <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/server/config/engine.properties>`_,
+and `engine-system.properties <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/server/config/engine-system.properties>`_.
The *engine* configuration files reside in the *$POLICY_CONFIG* directory.
PolicyController
""""""""""""""""
-A *PolicyController* represents an application. Each *PolicyController* has an instance of a
-`DroolsController <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/java/org/onap/policy/drools/system/PolicyController.java>`__. The *PolicyController* provides the means to group application specific resources
-into a single unit. Such resources include the application's *maven coordinates*, *endpoint references*, and *coders*.
+A *PolicyController* represents an application. Each
+`PolicyController <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/java/org/onap/policy/drools/system/PolicyController.java>`_
+has an instance of a
+`DroolsController <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/java/org/onap/policy/drools/controller/DroolsController.java>`_.
+The *PolicyController* provides the means to group application specific resources into a single
+unit. Such resources include the application's *maven coordinates*, *endpoint references*, and
+*coders*.
-A *PolicyController* uses a
-`DroolsController <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/java/org/onap/policy/drools/controller/DroolsController.java>`__ to interface with the *core* layer (*PolicyContainer* and *PolicySession*).
+A *PolicyController* uses a *DroolsController* to interface with the *core* layer (*PolicyContainer*
+and *PolicySession*).
The relationship between the *PolicyController* and the *DroolsController* is one-to-one.
The *DroolsController* currently supports 2 implementations, the
-`MavenDroolsController <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/java/org/onap/policy/drools/controller/internal/MavenDroolsController.java>`__, and the
-`NullDroolsController <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/java/org/onap/policy/drools/controller/internal/NullDroolsController.java>`__.
-The *DroolsController*'s polymorphic behavior depends on whether a maven artifact is attached to the controller or not.
+`MavenDroolsController <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/java/org/onap/policy/drools/controller/internal/MavenDroolsController.java>`_, and the
+`NullDroolsController <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/java/org/onap/policy/drools/controller/internal/NullDroolsController.java>`_.
+The *DroolsController*'s polymorphic behavior depends on whether a maven artifact is attached to the
+controller or not.
Configuration
^^^^^^^^^^^^^
@@ -569,35 +556,35 @@ Using Features and Listeners
Features hook into the interception points provided by the *PDP-D* main entities.
-*Endpoint Listeners*, see `here <https://git.onap.org/policy/common/tree/message-bus/src/main/java/org/onap/policy/common/message/bus/event/TopicListener.java>`__
-and `here <https://git.onap.org/policy/common/tree/policy-endpoints/src/main/java/org/onap/policy/common/endpoints/listeners>`__, can be used in conjunction with features for additional capabilities.
+`TopicListener <https://git.onap.org/policy/common/tree/message-bus/src/main/java/org/onap/policy/common/message/bus/event/TopicListener.java>`_
+and `other listeners <https://git.onap.org/policy/common/tree/policy-endpoints/src/main/java/org/onap/policy/common/endpoints/listeners>`_
+can be used in conjunction with features for additional capabilities.
Using Maven-Drools applications
"""""""""""""""""""""""""""""""
-Maven-based drools applications can run any arbitrary functionality structured with rules and java logic.
+Maven-based drools applications can run any arbitrary functionality structured with rules and java
+logic.
Recommended Flow
""""""""""""""""
-Whenever possible it is suggested that PDP-D related operations flow through the
-*PolicyEngine* downwards in a top-down manner. This imposed order implies that
-all the feature hooks are always invoked in a deterministic fashion. It is also
-a good mechanism to safeguard against deadlocks.
+Whenever possible it is suggested that PDP-D related operations flow through the *PolicyEngine*
+downwards in a top-down manner. This imposed order implies that all the feature hooks are always
+invoked in a deterministic fashion. It is also a good mechanism to safeguard against deadlocks.
Telemetry Extensions
""""""""""""""""""""
-It is recommended to *features* (extensions) to offer a diagnostics REST API
-to integrate with the telemetry API. This is done by placing JAX-RS files under
-the package *org.onap.policy.drools.server.restful*. The root context path
-for all the telemetry services is */policy/pdp/engine*.
+It is recommended that *features* (extensions) offer a diagnostics REST API to integrate with the
+telemetry API. This is done by placing JAX-RS files under the package
+*org.onap.policy.drools.server.restful*. The root context path for all the telemetry services is
+*/policy/pdp/engine*.
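As a sketch, endpoints contributed by extensions hang off that root context. The helper below just composes such a path; the host and port follow the docker examples later in this guide, and the *lifecycle/state* sub-path is borrowed from the lifecycle feature usage.

```shell
# Compose telemetry URLs under the documented root context /policy/pdp/engine.
# localhost:9696 matches the docker examples in this guide.
telemetry_url() {
  echo "https://localhost:9696/policy/pdp/engine/$1"
}

# e.g. the lifecycle state endpoint used elsewhere in this guide:
telemetry_url "lifecycle/state"
```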
Features
========
-*Features* is an extension mechanism for the PDP-D functionality.
-Features can be toggled on and off.
+*Features* are an extension mechanism for PDP-D functionality. Features can be toggled on and off.
A feature is composed of:
- Java libraries.
@@ -607,13 +594,13 @@ Java Extensions
~~~~~~~~~~~~~~~
Additional functionality can be provided in the form of java libraries that hook into the
-*PolicyEngine*, *PolicyController*, *DroolsController*, and *PolicySession* interception
-points to observe or alter the PDP-D logic.
+*PolicyEngine*, *PolicyController*, *DroolsController*, and *PolicySession* interception points to
+observe or alter the PDP-D logic.
See the Feature APIs available in the
-`management <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/java/org/onap/policy/drools/features>`__
+`management <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/java/org/onap/policy/drools/features>`_
and
-`core <https://git.onap.org/policy/drools-pdp/tree/policy-core/src/main/java/org/onap/policy/drools/core/PolicySessionFeatureApi.java>`__ layers.
+`core <https://git.onap.org/policy/drools-pdp/tree/policy-core/src/main/java/org/onap/policy/drools/core/PolicySessionFeatureApi.java>`_ layers.
The convention used for naming these extension modules is *api-<name>* for interfaces,
and *feature-<name>* for the actual java extensions.
@@ -623,9 +610,9 @@ Configuration Items
Installation items such as scripts, SQL, maven artifacts, and configuration files.
-The reader can refer to the `policy/drools-pdp repository <https://git.onap.org/policy/drools-pdp>`__
-and the <https://git.onap.org/policy/drools-applications>`__ repository for miscellaneous feature
-implementations.
+The reader can refer to the `policy/drools-pdp repository <https://git.onap.org/policy/drools-pdp>`_
+and the `policy/drools-applications repository <https://git.onap.org/policy/drools-applications>`_
+for miscellaneous feature implementations.
Layout
""""""
@@ -649,10 +636,6 @@ A feature is packaged in a *feature-<name>.zip* and has this internal layout:
#     |  | L─ <dependent-jar>+
#     │  L─ feature/
#     │  L─ <feature-jar>
- #     L─ [db]/
- #     │   L─ <db-name>/+
- #     │  L─ sql/
- #     │ L─ <sql-scripts>*
#     L─ [artifacts]/
#      L─ <artifact>+
#     L─ [install]
@@ -689,7 +672,7 @@ A feature is packaged in a *feature-<name>.zip* and has this internal layout:
# by the feature designer.
# ########################################################################################
-The `features <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/server-gen/bin/features>`__
+The `features <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/server-gen/bin/features>`_
tool is used for administration purposes:
.. code-block:: bash
@@ -720,18 +703,22 @@ The following features are included in the image but disabled.
Healthcheck
"""""""""""
-The Healthcheck feature provides reports used to verify the health of *PolicyEngine.manager* in addition to the construction, operation, and deconstruction of HTTP server/client objects.
+The Healthcheck feature provides reports used to verify the health of *PolicyEngine.manager* in
+addition to the construction, operation, and deconstruction of HTTP server/client objects.
When enabled, the feature takes as input a properties file named *feature-healthcheck.properties*.
-This file should contain configuration properties necessary for the construction of HTTP client and server objects.
+This file should contain configuration properties necessary for the construction of HTTP client and
+server objects.
-Upon initialization, the feature first constructs HTTP server and client objects using the properties
-from its properties file. A healthCheck operation is then triggered. The logic of the healthCheck verifies
-that *PolicyEngine.manager* is alive, and iteratively tests each HTTP server object by sending HTTP GET
-requests using its respective client object. If a server returns a "200 OK" message, it is marked as "healthy"
-in its individual report. Any other return code results in an "unhealthy" report.
+Upon initialization, the feature first constructs HTTP server and client objects using the
+properties from its properties file. A healthCheck operation is then triggered. The logic of the
+healthCheck verifies that *PolicyEngine.manager* is alive, and iteratively tests each HTTP server
+object by sending HTTP GET requests using its respective client object. If a server returns a "200
+OK" message, it is marked as "healthy" in its individual report. Any other return code results in an
+"unhealthy" report.
-After the testing of the server objects has completed, the feature returns a single consolidated report.
+After the testing of the server objects has completed, the feature returns a single consolidated
+report.
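As an illustrative sketch of consuming such a consolidated report, the fragment below classifies a report by a top-level *healthy* flag; the JSON shape shown is an assumption for illustration, not the feature's exact schema.

```shell
# Classify a healthcheck report by a top-level "healthy" boolean.
# The sample JSON is illustrative; consult the feature for the exact schema.
summarize_health() {
  if printf '%s' "$1" | grep -q '"healthy"[[:space:]]*:[[:space:]]*true'; then
    echo "healthy"
  else
    echo "unhealthy"
  fi
}

summarize_health '{"healthy": true, "details": []}'
```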
Lifecycle
"""""""""
@@ -744,41 +731,38 @@ The PAP interacts with the lifecycle feature to put a PDP-D in PASSIVE or ACTIVE
The PASSIVE state allows for Tosca Operational policies to be deployed.
Policy execution is enabled when the PDP-D transitions to the ACTIVE state.
-This feature can coexist side by side with the legacy mode of operation that pre-dates the Dublin release.
+This feature can coexist side by side with the legacy mode of operation that pre-dates the Dublin
+release.
Distributed Locking
"""""""""""""""""""
-The Distributed Locking Feature provides locking of resources across a pool of PDP-D hosts.
-The list of locks is maintained in a database, where each record includes a resource identifier,
-an owner identifier, and an expiration time. Typically, a drools application will unlock the resource
-when it's operation completes. However, if it fails to do so, then the resource will be automatically
+The Distributed Locking Feature provides locking of resources across a pool of PDP-D hosts. The list
+of locks is maintained in a database, where each record includes a resource identifier, an owner
+identifier, and an expiration time. Typically, a drools application will unlock the resource when
+its operation completes. However, if it fails to do so, then the resource will be automatically
released when the lock expires, thus preventing a resource from becoming permanently locked.
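The expiration rule above can be sketched as a simple timestamp comparison (illustrative only; the feature itself keeps lock records in a database table):

```shell
# Minimal sketch of the expiration rule: a lock whose expiration time has
# passed is treated as released even if the owner never unlocked it.
lock_released() {
  local expiration_epoch="$1" now_epoch="$2"
  [ "$now_epoch" -ge "$expiration_epoch" ]
}

# lock expired at t=1000, current time t=1500: the resource is free again
if lock_released 1000 1500; then echo "released"; else echo "held"; fi
```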
Other features
~~~~~~~~~~~~~~
-The following features have been contributed to the *policy/drools-pdp* but are either
-unnecessary or have not been thoroughly tested:
+The following features have been contributed to the *policy/drools-pdp* but are either unnecessary
+or have not been thoroughly tested:
.. toctree::
:maxdepth: 1
- feature_activestdbymgmt.rst
- feature_controllerlogging.rst
- feature_eelf.rst
- feature_mdcfilters.rst
feature_pooling.rst
- feature_sesspersist.rst
- feature_statemgmt.rst
feature_testtransaction.rst
feature_nolocking.rst
Data Migration
==============
-PDP-D data is migrated across releases with the
-`db-migrator <https://git.onap.org/policy/docker/tree/policy-db-migrator/src/main/docker/db-migrator>`__.
+PDP-D data was migrated across releases with its own db-migrator until the Kohn release. Since the
+Oslo release, the main policy database manager,
+`db-migrator <https://git.onap.org/policy/docker/tree/policy-db-migrator/>`_,
+has been used instead.
The migration occurs when different release data is detected. *db-migrator* will look under the
*$POLICY_HOME/etc/db/migration* for databases and SQL scripts to migrate.
@@ -793,37 +777,7 @@ where *<sql-file>* is of the form:
<VERSION>-<pdp|feature-name>[-description](.upgrade|.downgrade).sql
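As a quick illustration of this naming convention, the sketch below checks candidate filenames against the documented pattern; the sample names are hypothetical, not real migration scripts.

```shell
# Validate a migration script name against the documented pattern:
#   <VERSION>-<pdp|feature-name>[-description](.upgrade|.downgrade).sql
# Sample names are hypothetical.
is_migration_script() {
  [[ "$1" =~ ^[0-9]+-[A-Za-z0-9_-]+\.(upgrade|downgrade)\.sql$ ]]
}

is_migration_script "0800-pooling.upgrade.sql" && echo "matches"
```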
-The db-migrator tool syntax is
-
-.. code-block:: bash
-
- syntax: db-migrator
- -s <schema-name>
- [-b <migration-dir>]
- [-f <from-version>]
- [-t <target-version>]
- -o <operations>
-
- where <operations>=upgrade|downgrade|auto|version|erase|report
-
- Configuration Options:
- -s|--schema|--database: schema to operate on ('ALL' to apply on all)
- -b|--basedir: overrides base DB migration directory
- -f|--from: overrides current release version for operations
- -t|--target: overrides target release to upgrade/downgrade
-
- Operations:
- upgrade: upgrade operation
- downgrade: performs a downgrade operation
- auto: autonomous operation, determines upgrade or downgrade
- version: returns current version, and in conjunction if '-f' sets the current version
- erase: erase all data related <schema> (use with care)
- report: migration detailed report on an schema
- ok: is the migration status valid
-
-See the
-`feature-distributed-locking sql directory <https://git.onap.org/policy/docker/tree/policy-db-migrator/src/main/docker/config/pooling/sql>`__
-for an example of upgrade/downgrade scripts.
+For more information on DB Migrator, see the :ref:`Policy DB Migrator <policy-db-migrator-label>` page.
Maven Repositories
@@ -831,15 +785,15 @@ Maven Repositories
The drools libraries in the PDP-D use maven to fetch rules artifacts and software dependencies.
-The default *settings.xml* file specifies the repositories to search. This configuration
-can be overriden with a custom copy that would sit in a mounted configuration
-directory. See an example of the OOM override
+The default *settings.xml* file specifies the repositories to search. This configuration can be
+overridden with a custom copy that would sit in a mounted configuration directory. See an example
+of the OOM override
`settings.xml <https://github.com/onap/oom/blob/master/kubernetes/policy/components/policy-drools-pdp/resources/configmaps/settings.xml>`_.
-The default ONAP installation of the *control loop* child image *onap/policy-pdpd-cl:1.6.4* is *OFFLINE*.
-In this configuration, the *rules* artifact and the *dependencies* retrieves all the artifacts from the local
-maven repository. Of course, this requires that the maven dependencies are preloaded in the local
-repository for it to work.
+The default ONAP installation of the *control loop* child image *onap/policy-pdpd-cl:3.0.1* is
+*OFFLINE*. In this configuration, the *rules* artifact and the *dependencies* are retrieved from
+the local maven repository. This requires that the maven dependencies are preloaded in the local
+repository for it to work.
An offline configuration requires two items:
@@ -847,13 +801,13 @@ An offline configuration requires two items:
- override *settings.xml* customization, see
`settings.xml <https://github.com/onap/oom/blob/master/kubernetes/policy/components/policy-drools-pdp/resources/configmaps/settings.xml>`_.
-The default mode in the *onap/policy-drools:1.6.3* is ONLINE instead.
+The default mode in the *onap/policy-drools:3.0.1* image is ONLINE instead.
In *ONLINE* mode, the *controller* initialization can take a significant amount of time.
-The Policy ONAP installation includes a *nexus* repository component that can be used to host any arbitrary
-artifacts that an PDP-D application may require.
-The following environment variables configure its location:
+The Policy ONAP installation includes a *nexus* repository component that can be used to host any
+arbitrary artifacts that a PDP-D application may require. The following environment variables
+configure its location:
.. code-block:: bash
@@ -863,10 +817,10 @@ The following environment variables configure its location:
RELEASE_REPOSITORY_URL=http://nexus:8080/nexus/content/repositories/releases/
REPOSITORY_OFFLINE=false
-The *deploy-artifact* tool is used to deploy artifacts to the local or remote maven repositories.
-It also allows for dependencies to be installed locally. The *features* tool invokes it when artifacts are
-to be deployed as part of a feature. The tool can be useful for developers to test a new application
-in a container.
+The *deploy-artifact* tool is used to deploy artifacts to the local or remote maven repositories. It
+also allows for dependencies to be installed locally. The *features* tool invokes it when artifacts
+are to be deployed as part of a feature. The tool can be useful for developers to test a new
+application in a container.
.. code-block:: bash
@@ -903,30 +857,29 @@ The *status* option provides generic status of the system.
[features]
name version status
---- ------- ------
- healthcheck 1.6.3 enabled
- distributed-locking 1.6.3 enabled
- lifecycle 1.6.3 enabled
- controlloop-management 1.6.4 enabled
- controlloop-utils 1.6.4 enabled
- controlloop-trans 1.6.4 enabled
- controlloop-usecases 1.6.4 enabled
+ healthcheck 3.0.1 enabled
+ distributed-locking 3.0.1 enabled
+ lifecycle 3.0.1 enabled
+ controlloop-management 3.0.1 enabled
+ controlloop-utils 3.0.1 enabled
+ controlloop-trans 3.0.1 enabled
+ controlloop-usecases 3.0.1 enabled
- [migration]
- pooling: OK @ 1811
It contains 2 sections:
- *PDP-D* running status
- *features* applied
-The *start* and *stop* commands are useful for developers testing functionality on a docker container instance.
+The *start* and *stop* commands are useful for developers testing functionality on a docker
+container instance.
Telemetry Shell
===============
-*PDP-D* offers an ample set of REST APIs to debug, introspect, and change state on a running PDP-D. This is known as the
-*telemetry* API. The *telemetry* shell wraps these APIs for shell-like access using
-`http-prompt <http://http-prompt.com/>`__.
+*PDP-D* offers an ample set of REST APIs to debug, introspect, and change state on a running PDP-D.
+This is known as the *telemetry* API. The *telemetry* shell wraps these APIs for shell-like access
+using `http-prompt <http://http-prompt.com/>`_.
.. code-block:: bash
@@ -956,7 +909,8 @@ Refer to the *$POLICY_HOME/bin/* directory for additional tooling.
PDP-D Docker Container Configuration
====================================
-Both the PDP-D *onap/policy-drools* and *onap/policy-pdpd-cl* images can be used without other components.
+Both the PDP-D *onap/policy-drools* and *onap/policy-pdpd-cl* images can be used without other
+components.
There are 2 types of configuration data provided to the container:
@@ -966,9 +920,9 @@ There are 2 types of configuration data provided to the container:
Environment variables
~~~~~~~~~~~~~~~~~~~~~
-As it was shown in the *controller* and *endpoint* sections, PDP-D configuration can rely
-on environment variables. In a container environment, these variables are set up by the user
-in the host environment.
+As shown in the *controller* and *endpoint* sections, PDP-D configuration can rely on environment
+variables. In a container environment, these variables are set up by the user in the host
+environment.
Configuration Files and Shell Scripts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -981,15 +935,15 @@ These are the configuration items that can reside externally and override the de
- **settings.xml** if working with external nexus repositories.
- **standalone-settings.xml** if an external *policy* nexus repository is not available.
-- ***.conf** files containing environment variables. This is an alternative to use environment variables,
- as these files will be sourced in before the PDP-D starts.
+- ***.conf** files containing environment variables. This is an alternative to using environment
+  variables, as these files will be sourced before the PDP-D starts.
- **features*.zip** to load any arbitrary feature not present in the image.
- ***.pre.sh** scripts that will be executed before the PDP-D starts.
- ***.post.sh** scripts that will be executed after the PDP-D starts.
- **policy-keystore** to override the default PDP-D java keystore.
- **policy-truststore** to override the default PDP-D java truststore.
-- ***.properties** to override or add any properties file for the PDP-D, this includes *controller*, *endpoint*,
- *engine* or *system* configurations.
+- ***.properties** to override or add any properties file for the PDP-D, this includes *controller*,
+ *endpoint*, *engine* or *system* configurations.
- **logback*.xml** to override the default logging configuration.
- ***.xml** to override other .xml configuration that may be used for example by an *application*.
- ***.json** *json* configuration that may be used by an *application*.
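As a hypothetical example of a ***.pre.sh** script, the sketch below rewrites a properties file before startup. The property name is illustrative, and a temp directory keeps the sketch self-contained; a real script would edit files under the mounted config directory.

```shell
# Hypothetical *.pre.sh customization: tweak endpoint properties before the
# PDP-D starts. Property names here are illustrative only.
workdir=$(mktemp -d)
printf 'kafka.source.topics=POLICY-PDP-PAP\n' > "$workdir/example.properties"

# switch the source endpoint to in-memory "noop" topics for standalone testing
sed -i 's/^kafka/noop/' "$workdir"/*.properties

cat "$workdir/example.properties"
```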
@@ -1035,13 +989,7 @@ First create an environment file (in this example *env.conf*) to configure the P
SQL_USER=
SQL_PASSWORD=
- # AAF
-
- AAF=false
- AAF_NAMESPACE=org.onap.policy
- AAF_HOST=aaf.api.simpledemo.onap.org
-
- # PDP-D DMaaP configuration channel
+ # PDP-D configuration channel
PDPD_CONFIGURATION_TOPIC=PDPD-CONFIGURATION
PDPD_CONFIGURATION_API_KEY=
@@ -1052,42 +1000,30 @@ First create an environment file (in this example *env.conf*) to configure the P
# PAP-PDP configuration channel
- POLICY_PDP_PAP_TOPIC=POLICY-PDP-PAP
+ POLICY_PDP_PAP_TOPIC=policy-pdp-pap
POLICY_PDP_PAP_API_KEY=
POLICY_PDP_PAP_API_SECRET=
-Note that *SQL_HOST*, and *REPOSITORY* are empty, so the PDP-D does not attempt
-to integrate with those components.
+Note that *SQL_HOST* and *REPOSITORY* are empty, so the PDP-D does not attempt to integrate with
+those components.
Configuration
~~~~~~~~~~~~~
-
-active.post.sh
-""""""""""""""
-
-To put the controller directly in active mode at initialization, place an *active.post.sh* script under the
-mounted host directory:
-
-.. code-block:: bash
-
- #!/bin/bash -x
-
- bash -c "http --verify=no -a ${TELEMETRY_USER}:${TELEMETRY_PASSWORD} PUT https://localhost:9696/policy/pdp/engine/lifecycle/state/ACTIVE"
-
Bring up the PDP-D
~~~~~~~~~~~~~~~~~~
.. code-block:: bash
- docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-drools:1.6.3
+ docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-drools:3.0.1
To run the container in detached mode, add the *-d* flag.
-Note that in this command, we are opening the *9696* telemetry API port to the outside world, the config directory
-(where the *noop.pre.sh* customization script resides) is mounted as /tmp/policy-install/config,
-and the customization environment variables (*env/env.conf*) are passed into the container.
+Note that in this command, we are opening the *9696* telemetry API port to the outside world, the
+config directory (where the *noop.pre.sh* customization script resides) is mounted as
+/tmp/policy-install/config, and the customization environment variables (*env/env.conf*) are passed
+into the container.
To open a shell into the PDP-D:
@@ -1095,7 +1031,7 @@ To open a shell into the PDP-D:
docker exec -it pdp-d bash
-Once in the container, run tools such as *telemetry*, *db-migrator*, *policy* to look at the system state:
+Once in the container, run tools such as *telemetry* and *policy* to look at the system state:
To run the *telemetry shell* and other tools from the host:
@@ -1113,7 +1049,7 @@ Sometimes a developer may want to start and stop the PDP-D manually:
# start a bash
- docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-drools:1.6.3 bash
+ docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-drools:3.0.1 bash
# use this command to start policy applying host customizations from /tmp/policy-install/config
@@ -1131,188 +1067,34 @@ Sometimes a developer may want to start and stop the PDP-D manually:
policy start
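The manual lifecycle above can be sketched as a sequence of *policy* tool invocations. The commands are printed here rather than executed, since the tool is only available inside the container; *status* is an assumption alongside the *start* and *stop* actions shown above:

```shell
#!/bin/bash
# Sketch: manual PDP-D lifecycle using the policy tool from inside the container.
# Commands are printed for review; run them in a shell inside the PDPD container.
ACTIONS=(start status stop)
for action in "${ACTIONS[@]}"; do
    echo "policy ${action}"
done
```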
-Running PDP-D with nexus and mariadb
-====================================
-
-*docker-compose* can be used to test the PDP-D with other components. This is an example configuration
-that brings up *nexus*, *mariadb* and the PDP-D (*docker-compose-pdp.yml*)
-
-docker-compose-pdp.yml
-~~~~~~~~~~~~~~~~~~~~~~
-
-.. code-block:: bash
-
- version: '3'
- services:
- mariadb:
- image: mariadb:10.2.25
- container_name: mariadb
- hostname: mariadb
- command: ['--lower-case-table-names=1', '--wait_timeout=28800']
- env_file:
- - ${PWD}/db/db.conf
- volumes:
- - ${PWD}/db:/docker-entrypoint-initdb.d
- ports:
- - "3306:3306"
- nexus:
- image: sonatype/nexus:2.14.8-01
- container_name: nexus
- hostname: nexus
- ports:
- - "8081:8081"
- drools:
- image: nexus3.onap.org:10001/onap/policy-drools:1.6.3
- container_name: drools
- depends_on:
- - mariadb
- - nexus
- hostname: drools
- ports:
- - "9696:9696"
- volumes:
- - ${PWD}/config:/tmp/policy-install/config
- env_file:
- - ${PWD}/env/env.conf
-
-with *${PWD}/db/db.conf*:
-
-db.conf
-~~~~~~~
-
-.. code-block:: bash
-
- MYSQL_ROOT_PASSWORD=secret
- MYSQL_USER=policy_user
- MYSQL_PASSWORD=policy_user
-
-and *${PWD}/db/db.sh*:
-
-db.sh
-~~~~~
-
-.. code-block:: bash
-
- for db in support onap_sdk log migration operationshistory10 pooling policyadmin operationshistory
- do
- mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
- mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
- done
-
- mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
-
-env.conf
-~~~~~~~~
-
-The environment file *env/env.conf* for *PDP-D* can be set up with appropriate variables to point to the *nexus* instance
-and the *mariadb* database:
-
-.. code-block:: bash
-
- # SYSTEM software configuration
-
- POLICY_HOME=/opt/app/policy
- POLICY_LOGS=/var/log/onap/policy/pdpd
- KEYSTORE_PASSWD=Pol1cy_0nap
- TRUSTSTORE_PASSWD=Pol1cy_0nap
-
- # Telemetry credentials
-
- TELEMETRY_PORT=9696
- TELEMETRY_HOST=0.0.0.0
- TELEMETRY_USER=demo@people.osaaf.org
- TELEMETRY_PASSWORD=demo123456!
-
- # nexus repository
-
- SNAPSHOT_REPOSITORY_ID=policy-nexus-snapshots
- SNAPSHOT_REPOSITORY_URL=http://nexus:8081/nexus/content/repositories/snapshots/
- RELEASE_REPOSITORY_ID=policy-nexus-releases
- RELEASE_REPOSITORY_URL=http://nexus:8081/nexus/content/repositories/releases/
- REPOSITORY_USERNAME=admin
- REPOSITORY_PASSWORD=admin123
- REPOSITORY_OFFLINE=false
-
- MVN_SNAPSHOT_REPO_URL=https://nexus.onap.org/content/repositories/snapshots/
- MVN_RELEASE_REPO_URL=https://nexus.onap.org/content/repositories/releases/
-
- # Relational (SQL) DB access
-
- SQL_HOST=mariadb
- SQL_USER=policy_user
- SQL_PASSWORD=policy_user
+Running PDP-D with docker compose
+=================================
- # AAF
-
- AAF=false
- AAF_NAMESPACE=org.onap.policy
- AAF_HOST=aaf.api.simpledemo.onap.org
-
- # PDP-D DMaaP configuration channel
-
- PDPD_CONFIGURATION_TOPIC=PDPD-CONFIGURATION
- PDPD_CONFIGURATION_API_KEY=
- PDPD_CONFIGURATION_API_SECRET=
- PDPD_CONFIGURATION_CONSUMER_GROUP=
- PDPD_CONFIGURATION_CONSUMER_INSTANCE=
- PDPD_CONFIGURATION_PARTITION_KEY=
-
- # PAP-PDP configuration channel
-
- POLICY_PDP_PAP_TOPIC=POLICY-PDP-PAP
- POLICY_PDP_PAP_API_KEY=
- POLICY_PDP_PAP_API_SECRET=
-
-
-active.post.sh
-~~~~~~~~~~~~~~
-
-A post-start script *config/active.post.sh* can place PDP-D in *active* mode at initialization:
-
- .. code-block:: bash
-
- bash -c "http --verify=no -a ${TELEMETRY_USER}:${TELEMETRY_PASSWORD} PUT <http|https>://localhost:9696/policy/pdp/engine/lifecycle/state/ACTIVE"
-
-Bring up the PDP-D, nexus, and mariadb
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-To bring up the containers:
-
-.. code-block:: bash
-
- docker-compose -f docker-compose-pdpd.yaml up -d
-
-To take it down:
-
-.. code-block:: bash
-
- docker-compose -f docker-compose-pdpd.yaml down -v
+*docker-compose* can be used to test the PDP-D together with other components.
+Refer to the docker usage documentation in the
+`policy-docker <https://github.com/onap/policy-docker/tree/master/compose>`_ repository.
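As a sketch, bringing the PDP-D up and down with a compose file from that repository could look as follows. The compose file name and the service name are assumptions; check the repository's *compose* directory for the actual names. The commands are printed rather than executed:

```shell
#!/bin/bash
# Sketch only: the compose file and service name below are assumptions taken
# from the policy-docker repository layout; verify them before use.
COMPOSE_FILE="compose.yaml"
UP="docker compose -f ${COMPOSE_FILE} up -d drools-pdp"
DOWN="docker compose -f ${COMPOSE_FILE} down -v"
echo "${UP}"
echo "${DOWN}"
```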
Other examples
~~~~~~~~~~~~~~
-The reader can also look at the `policy/docker repository <https://github.com/onap/policy-docker/tree/master/csit>`__.
+The reader can also look at the `policy/docker repository <https://github.com/onap/policy-docker/tree/master/csit>`_.
More specifically, these directories have examples of other PDP-D configurations:
-* `plans <https://github.com/onap/policy-docker/tree/master/compose>`__: startup & teardown scripts.
-* `scripts <https://github.com/onap/policy-docker/blob/master/compose/compose.yaml>`__: docker-compose file.
-* `tests <https://github.com/onap/policy-docker/blob/master/csit/resources/tests/drools-pdp-test.robot>`__: test plan.
+* `plans <https://github.com/onap/policy-docker/tree/master/compose>`_: startup & teardown scripts.
+* `scripts <https://github.com/onap/policy-docker/blob/master/compose/compose.yaml>`_: docker-compose file.
+* `tests <https://github.com/onap/policy-docker/blob/master/csit/resources/tests/drools-pdp-test.robot>`_: test plan.
Configuring the PDP-D in an OOM Kubernetes installation
=======================================================
-The `PDP-D OOM chart <https://github.com/onap/oom/tree/master/kubernetes/policy/components/policy-drools-pdp>`__ can be
-customized at the following locations:
-
-* `values.yaml <https://github.com/onap/oom/blob/master/kubernetes/policy/components/policy-drools-pdp/values.yaml>`__: custom values for your installation.
-* `configmaps <https://github.com/onap/oom/tree/master/kubernetes/policy/components/policy-drools-pdp/resources/configmaps>`__: place in this directory any configuration extensions or overrides to customize the PDP-D that does not contain sensitive information.
-* `secrets <https://github.com/onap/oom/tree/master/kubernetes/policy/components/policy-drools-pdp/resources/secrets>`__: place in this directory any configuration extensions or overrides to customize the PDP-D that does contain sensitive information.
+The `PDP-D OOM chart <https://github.com/onap/oom/tree/master/kubernetes/policy/components/policy-drools-pdp>`_
+can be customized at the following locations:
-The same customization techniques described in the docker sections for PDP-D, fully apply here, by placing the corresponding
-files or scripts in these two directories.
+* `values.yaml <https://github.com/onap/oom/blob/master/kubernetes/policy/components/policy-drools-pdp/values.yaml>`_: custom values for your installation.
+* `configmaps <https://github.com/onap/oom/tree/master/kubernetes/policy/components/policy-drools-pdp/resources/configmaps>`_: place in this directory any configuration extensions or overrides to customize the PDP-D that does not contain sensitive information.
+* `secrets <https://github.com/onap/oom/tree/master/kubernetes/policy/components/policy-drools-pdp/resources/secrets>`_: place in this directory any configuration extensions or overrides to customize the PDP-D that does contain sensitive information.
-Additional information
-======================
+The same customization techniques described in the docker sections for the PDP-D fully apply here;
+place the corresponding files or scripts in these two directories.
-For additional information, please see the
-`Drools PDP Development and Testing (In Depth) <https://wiki.onap.org/display/DW/2020-08+Frankfurt+Tutorials>`__ page.
+End of Document
diff --git a/docs/drools/poolingDesign.png b/docs/drools/poolingDesign.png
deleted file mode 100644
index 8040e809..00000000
--- a/docs/drools/poolingDesign.png
+++ /dev/null
Binary files differ
diff --git a/docs/installation/oom.rst b/docs/installation/oom.rst
index d975e752..263e5a2f 100644
--- a/docs/installation/oom.rst
+++ b/docs/installation/oom.rst
@@ -437,18 +437,3 @@ To *override the PDP-D keystore or trustore*, add a suitable replacement(s) unde
"drools/resources/secrets". Modify the drools chart values.yaml with
new credentials, and follow the procedures described at
:ref:`install-upgrade-policy-label` to redeploy the chart.
-
-To *disable https* for the DMaaP configuration topic, add a copy of
-`engine.properties <https://git.onap.org/policy/drools-pdp/tree/policy-management/src/main/server/config/engine.properties>`_
-with "dmaap.source.topics.PDPD-CONFIGURATION.https" set to "false", or alternatively
-create a ".pre.sh" script (see above) that edits this file before the PDP-D is
-started.
-
-To use *noop topics* for standalone testing, add a "noop.pre.sh" script under
-oom/kubernetes/policy/charts/drools/resources/configmaps/:
-
-.. code-block:: bash
-
- #!/bin/bash
- sed -i "s/^dmaap/noop/g" $POLICY_HOME/config/*.properties
-
diff --git a/docs/pap/pap.rst b/docs/pap/pap.rst
index 1515af53..ac21c786 100644
--- a/docs/pap/pap.rst
+++ b/docs/pap/pap.rst
@@ -97,7 +97,7 @@ The purpose of this API is to support CRUD of PDP groups and subgroups and to su
policies on PDP sub groups and PDPs. This API is provided by the *PolicyAdministration* component (PAP) of the Policy
Framework, see the :ref:`ONAP Policy Framework Architecture <architecture-label>` page.
-PDP groups and subgroups may be prefedined in the system. Predefined groups and subgroups may be modified or deleted
+PDP groups and subgroups may be predefined in the system. Predefined groups and subgroups may be modified or deleted
over this API. The policies running on predefined groups or subgroups as well as the instance counts and properties may
also be modified.
@@ -150,8 +150,10 @@ Here is a sample notification:
2 PAP REST API Swagger
======================
-It is worth noting that we use basic authorization for access with user name and password set to *policyadmin* and
-*zb!XztG34*, respectively.
+.. note::
+   The Policy Framework uses basic authorization for access, with the user name and password set in
+   the application.yaml properties file. An example can be seen in
+   `the docker configuration papParameters.yaml <https://github.com/onap/policy-docker/blob/master/compose/config/pap/papParameters.yaml>`_.
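A minimal sketch of calling PAP with basic authorization follows. The endpoint path, port, and request-ID header name are illustrative assumptions, and the credentials should come from your application.yaml; the command is printed rather than executed:

```shell
#!/bin/bash
# Sketch: PAP call with basic auth and a per-request UUID.
# The endpoint, port, and X-ONAP-RequestID header name are assumptions;
# substitute the credentials configured in application.yaml.
REQUEST_ID=$(cat /proc/sys/kernel/random/uuid 2>/dev/null || uuidgen)
echo "curl -sk -u 'policyadmin:<password>' -H 'X-ONAP-RequestID: ${REQUEST_ID}' https://localhost:6969/policy/pap/v1/healthcheck"
```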
For every call, the client is encouraged to insert a uuid-type *requestID* as parameter. It is helpful for tracking each
http transaction and facilitates debugging. More importantly, it complies with Logging requirements v1.2. If the client
@@ -390,7 +392,7 @@ Here is a sample response:
The *PolicyAdministration* component (PAP) is initialized using a configuration file: `papParameters.yaml
<https://github.com/onap/policy-pap/blob/master/packages/policy-pap-tarball/src/main/resources/etc/papParameters.yaml>`_
-The configuration file is a YAML file containing the relevant fields for configuring the REST server, Database and DMaaP connectivity and so on.
+The configuration file is a YAML file containing the relevant fields for configuring the REST server, the database, Kafka connectivity, and so on.
End of Document