Diffstat (limited to 'docs/development/devtools')
-rw-r--r--  docs/development/devtools/api-s3p.rst                                                          307
-rw-r--r--  docs/development/devtools/distribution-s3p.rst                                                  60
-rw-r--r--  docs/development/devtools/drools-s3p.rst                                                       299
-rw-r--r--  docs/development/devtools/images/ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e.png     bin 0 -> 24840 bytes
-rw-r--r--  docs/development/devtools/images/ControlLoop-vCPE-Fail.png                                     bin 0 -> 21432 bytes
-rw-r--r--  docs/development/devtools/images/ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3.png     bin 0 -> 26996 bytes
-rw-r--r--  docs/development/devtools/images/ControlLoop-vDNS-Fail.png                                     bin 0 -> 21654 bytes
-rw-r--r--  docs/development/devtools/images/ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a.png  bin 0 -> 27189 bytes
-rw-r--r--  docs/development/devtools/images/api-s3p-jm-1_F.png                                            bin 0 -> 290616 bytes
-rw-r--r--  docs/development/devtools/images/distribution-performance-api-report.png                       bin 0 -> 76255 bytes
-rw-r--r--  docs/development/devtools/images/distribution-performance-summary-report.png                   bin 0 -> 98261 bytes
11 files changed, 512 insertions, 154 deletions
diff --git a/docs/development/devtools/api-s3p.rst b/docs/development/devtools/api-s3p.rst
index 77205008..982571ba 100644
--- a/docs/development/devtools/api-s3p.rst
+++ b/docs/development/devtools/api-s3p.rst
@@ -17,8 +17,8 @@ Policy API S3P Tests
Introduction
------------
-The 72 hour stability test of policy API has the goal of verifying the stability of running policy design API REST service by
-ingesting a steady flow of transactions of policy design API calls in a multi-thread fashion to simulate multiple clients' behaviors.
+The 72-hour stability test of the policy API verifies the stability of the running policy design API REST service by
+ingesting a steady flow of policy design API calls from multiple threads, simulating multiple clients' behaviors.
All the transaction flows are initiated from a test client server running JMeter for the duration of 72+ hours.
Setup Details
@@ -33,7 +33,7 @@ VM2 will be running API REST service and visualVM.
**Lab Environment**
-Intel ONAP Integration and Deployment Labs
+Intel ONAP Integration and Deployment Labs
`Physical Labs <https://wiki.onap.org/display/DW/Physical+Labs>`_,
`Wind River <https://www.windriver.com/>`_
@@ -76,49 +76,55 @@ JMeter: 5.1.1
Make the /etc/hosts entries
.. code-block:: bash
-
+
$ echo $(hostname -I | cut -d\ -f1) $(hostname) | sudo tee -a /etc/hosts
-
+
Update the Ubuntu software installer
.. code-block:: bash
-
+
$ sudo apt-get update
-
+
Check and install Java
.. code-block:: bash
-
+
$ sudo apt-get install -y openjdk-8-jdk
$ java -version
-
+
Ensure that the Java version executing is OpenJDK version 8
-
+
Check and install docker
.. code-block:: bash
-
+
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get update
$ sudo apt-cache policy docker-ce
- $ sudo apt-get install -y docker-ce
+ $ sudo apt-get install -y unzip docker-ce
$ systemctl status docker
$ docker ps
Change the permissions of the Docker socket file
.. code-block:: bash
-
+
$ sudo chmod 777 /var/run/docker.sock
+Alternatively, add the current user to the docker group
+
+.. code-block:: bash
+
+ $ sudo usermod -aG docker $USER
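+ $ newgrp docker   # start a shell with the docker group active, avoiding a re-login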
+
Check the status of the Docker service and ensure it is running correctly
.. code-block:: bash
-
+
$ service docker status
$ docker ps
-
+
**VM1 in lab**
**Install JMeter**
@@ -126,27 +132,27 @@ Check the status of the Docker service and ensure it is running correctly
Download & install JMeter
.. code-block:: bash
-
+
$ mkdir jMeter
$ cd jMeter
- $ wget http://mirrors.whoishostingthis.com/apache//jmeter/binaries/apache-jmeter-5.1.1.zip
- $ unzip apache-jmeter-5.1.1.zip
-
+ $ wget http://mirrors.whoishostingthis.com/apache//jmeter/binaries/apache-jmeter-5.2.1.zip
+ $ unzip apache-jmeter-5.2.1.zip
+
**Install other necessary components**
Pull api code & run setup components script
.. code-block:: bash
-
+
$ cd ~
$ git clone https://git.onap.org/policy/api
$ cd api/testsuites/stability/src/main/resources/simulatorsetup
- $ ./setup_components.sh
-
+ $ . ./setup_components.sh
+
After installation, make sure the following mariadb container is up and running
.. code-block:: bash
-
+
ubuntu@test:~/api/testsuites/stability/src/main/resources/simulatorsetup$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3849ce44b86d mariadb:10.2.14 "docker-entrypoint.s…" 11 days ago Up 11 days 0.0.0.0:3306->3306/tcp mariadb
@@ -158,16 +164,16 @@ After installation, make sure the following mariadb container is up and running
Pull api code & run setup api script
.. code-block:: bash
-
+
$ cd ~
$ git clone https://git.onap.org/policy/api
$ cd api/testsuites/stability/src/main/resources/apisetup
- $ ./setup_api.sh <host ip running api> <host ip running mariadb>
+ $ . ./setup_api.sh <host ip running api> <host ip running mariadb>
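+ # example invocation (illustrative addresses): . ./setup_api.sh 10.12.5.50 10.12.5.10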
After installation, make sure the following api container is up and running
.. code-block:: bash
-
+
ubuntu@tools-2:~/api/testsuites/stability/src/main/resources/apisetup$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4f08f9972e55 nexus3.onap.org:10001/onap/policy-api:2.1.1-SNAPSHOT "bash ./policy-api.sh" 11 days ago Up 11 days 0.0.0.0:6969->6969/tcp, 0.0.0.0:9090->9090/tcp policy-api
@@ -179,22 +185,22 @@ VisualVM needs to be installed in the virtual machine having API up and running.
Install visualVM
.. code-block:: bash
-
+
$ sudo apt-get install visualvm
-
+
Run a few commands to configure permissions
.. code-block:: bash
-
+
$ cd /usr/lib/jvm/java-8-openjdk-amd64/bin/
$ sudo touch visualvm.policy
$ sudo chmod 777 visualvm.policy
-
+
$ vi visualvm.policy
-
+
Add the following in visualvm.policy
-
-
+
+
grant codebase "file:/usr/lib/jvm/java-8-openjdk-amd64/lib/tools.jar" {
permission java.security.AllPermission;
};
@@ -202,10 +208,10 @@ Run few commands to configure permissions
Run the following commands to start jstatd on port 1111
.. code-block:: bash
-
+
$ cd /usr/lib/jvm/java-8-openjdk-amd64/bin/
$ ./jstatd -p 1111 -J-Djava.security.policy=visualvm.policy &
-
+
**Local Machine**
**Run & configure visualVM**
@@ -213,9 +219,9 @@ Run following commands to start jstatd using port 1111
Run visualVM by typing
.. code-block:: bash
-
+
$ jvisualvm
-
+
Connect to jstatd & remote policy-api JVM
1. Right click on "Remote" in the left panel of the screen and select "Add Remote Host..."
@@ -228,115 +234,6 @@ Sample Screenshot of visualVM
.. image:: images/results-5.png
-Test Plan
----------
-
-The 72+ hours stability test will be running the following steps sequentially in multi-threaded loops.
-Thread number is set to 5 to simulate 5 API clients' behaviors (they can be calling the same policy CRUD API simultaneously).
-
-**Setup Thread (will be running only once)**
-
-- Get policy-api Healthcheck
-- Get API Counter Statistics
-- Get Preloaded Policy Types
-
-**API Test Flow (5 threads running the same steps in the same loop)**
-
-- Create a new TCA Policy Type with Version 1.0.0
-- Create a new TCA Policy Type with Version 2.0.0
-- Create a new TCA Policy Type with Version 3.0.0
-- Create a new TCA Policy Type with Version 4.0.0
-- Create a new TCA Policy Type with Version 5.0.0
-- Create a new TCA Policy Type with Version 6.0.0
-- Create a new TCA Policy Type with Version 7.0.0
-- Create a new TCA Policy Type with Version 8.0.0
-- Create a new TCA Policy Type with Version 9.0.0
-- Create a new TCA Policy Type with Version 10.0.0
-- Create a new TCA Policy Type with Version 11.0.0
-- A 10 sec timer
-- Get All Existing Policy Types
-- Get All Existing Versions of the New TCA Policy Type
-- Get Version 1.0.0 of the New TCA Policy Type
-- Get Version 2.0.0 of the New TCA Policy Type
-- Get Version 3.0.0 of the New TCA Policy Type
-- Get Version 4.0.0 of the New TCA Policy Type
-- Get Version 5.0.0 of the New TCA Policy Type
-- Get Version 6.0.0 of the New TCA Policy Type
-- Get Version 7.0.0 of the New TCA Policy Type
-- Get Version 8.0.0 of the New TCA Policy Type
-- Get Version 9.0.0 of the New TCA Policy Type
-- Get Version 10.0.0 of the New TCA Policy Type
-- Get Version 11.0.0 of the New TCA Policy Type
-- Get the Latest Version of the New TCA Policy Type
-- A 10 sec timer
-- Create a New TCA Policy with Version 1.0.0 over the New TCA Policy Type Version 2.0.0
-- Create a New TCA Policy with Version 2.0.0 over the New TCA Policy Type Version 2.0.0
-- Create a New TCA Policy with Version 3.0.0 over the New TCA Policy Type Version 2.0.0
-- Create a New TCA Policy with Version 4.0.0 over the New TCA Policy Type Version 2.0.0
-- Create a New TCA Policy with Version 5.0.0 over the New TCA Policy Type Version 2.0.0
-- Create a New TCA Policy with Version 6.0.0 over the New TCA Policy Type Version 2.0.0
-- Create a New TCA Policy with Version 7.0.0 over the New TCA Policy Type Version 2.0.0
-- Create a New TCA Policy with Version 8.0.0 over the New TCA Policy Type Version 2.0.0
-- Create a New TCA Policy with Version 9.0.0 over the New TCA Policy Type Version 2.0.0
-- Create a New TCA Policy with Version 10.0.0 over the New TCA Policy Type Version 2.0.0
-- Create a New TCA Policy with Version 11.0.0 over the New TCA Policy Type Version 2.0.0
-- A 10 sec Timer
-- Get All Existing TCA Policies
-- Get All Existing Versions of TCA Policies
-- Get Version 1.0.0 of the New TCA Policy
-- Get Version 2.0.0 of the New TCA Policy
-- Get Version 3.0.0 of the New TCA Policy
-- Get Version 4.0.0 of the New TCA Policy
-- Get Version 5.0.0 of the New TCA Policy
-- Get Version 6.0.0 of the New TCA Policy
-- Get Version 7.0.0 of the New TCA Policy
-- Get Version 8.0.0 of the New TCA Policy
-- Get Version 9.0.0 of the New TCA Policy
-- Get Version 10.0.0 of the New TCA Policy
-- Get Version 11.0.0 of the New TCA Policy
-- Get the Latest Version of the New TCA Policy
-- A 10 sec Timer
-- Create a New Guard Policy with Version 1
-- Create a New Guard Policy with Version 5
-- Create a New Guard Policy with Version 9
-- Create a New Guard Policy with Version 12
-- A 10 sec Timer
-- Get Version 1 of the New Guard Policy
-- Get Version 5 of the New Guard Policy
-- Get Version 9 of the New Guard Policy
-- Get Version 12 of the New Guard Policy
-- Get the Latest Version of the New Guard Policy
-- A 10 sec Timer
-
-**TearDown Thread (will only be running after API Test Flow is completed)**
-
-- Delete Version 2.0.0 of the New TCA Policy Type (suppose to return 409-Conflict)
-- Delete Version 3.0.0 of the New TCA Policy Type
-- Delete Version 4.0.0 of the New TCA Policy Type
-- Delete Version 5.0.0 of the New TCA Policy Type
-- Delete Version 6.0.0 of the New TCA Policy Type
-- Delete Version 7.0.0 of the New TCA Policy Type
-- Delete Version 8.0.0 of the New TCA Policy Type
-- Delete Version 9.0.0 of the New TCA Policy Type
-- Delete Version 10.0.0 of the New TCA Policy Type
-- Delete Version 11.0.0 of the New TCA Policy Type
-- Delete Version 1.0.0 of the New TCA Policy
-- Delete Version 2.0.0 of the New TCA Policy
-- Delete Version 3.0.0 of the New TCA Policy
-- Delete Version 4.0.0 of the New TCA Policy
-- Delete Version 5.0.0 of the New TCA Policy
-- Delete Version 6.0.0 of the New TCA Policy
-- Delete Version 7.0.0 of the New TCA Policy
-- Delete Version 8.0.0 of the New TCA Policy
-- Delete Version 9.0.0 of the New TCA Policy
-- Delete Version 10.0.0 of the New TCA Policy
-- Delete Version 11.0.0 of the New TCA Policy
-- Re-Delete Version 2.0.0 of the New TCA Policy Type (will return 200 now since all TCA policies created over have been deleted)
-- Delete Version 1 of the new Guard Policy
-- Delete Version 5 of the new Guard Policy
-- Delete Version 9 of the new Guard Policy
-- Delete Version 12 of the new Guard Policy
-
Run Test
--------
@@ -345,9 +242,9 @@ Run Test
Connect to lab VPN
.. code-block:: bash
-
+
$ sudo openvpn --config <path to lab ovpn key file>
-
+
SSH into JMeter VM (VM1)
.. code-block:: bash
@@ -357,9 +254,9 @@ SSH into JMeter VM (VM1)
Run JMeter test in background for 72+ hours
.. code-block:: bash
-
+
$ mkdir s3p
- $ nohup ./jMeter/apache-jmeter-5.1.1/bin/jmeter.sh -n -t ~/api/testsuites/stability/src/main/resources/testplans/policy_api_stability.jmx &
+ $ nohup ./jMeter/apache-jmeter-5.2.1/bin/jmeter.sh -n -t ~/api/testsuites/stability/src/main/resources/testplans/policy_api_stability.jmx &
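+ # -n runs JMeter in non-GUI mode; -t selects the testplan (log output accumulates in nohup.out)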
(Optional) Monitor JMeter test that is running in background (anytime after re-logging into JMeter VM - VM1)
@@ -367,9 +264,77 @@ Run JMeter test in background for 72+ hours
$ tail -f s3p/stability.log nohup.out
+Test Plan
+---------
-Test Results
-------------
+The 72+ hour stability test runs the following steps sequentially
+in multi-threaded loops. The thread count is set to 5 to simulate 5 API
+clients' behaviors (they may call the same policy CRUD APIs simultaneously).
+Each thread creates a different version of the policy types and policies so
+that the threads do not interfere with one another while operating
+simultaneously. The point version of each entity is set to the running
+thread's number.
+
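+A hypothetical sketch of one create call (assumptions: the .jmx testplan
+derives the point version from JMeter's built-in ``${__threadNum}`` function,
+and the credentials, host variable, and payload file below are placeholders):
+
+.. code-block:: bash
+
+    # thread 3 creates version 6.0.3, thread 4 creates 6.0.4, and so on
+    $ curl -sk -u "${API_USER}:${API_PASS}" -H 'Content-Type: application/json' \
+        -X POST "https://${API_HOST}:6969/policy/api/v1/policytypes" \
+        -d @monitoring-policy-type-6.0.3.json
+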
+**Setup Thread (will be running only once)**
+
+- Get policy-api Healthcheck
+- Get API Counter Statistics
+- Get Preloaded Policy Types
+
+**API Test Flow (5 threads running the same steps in the same loop)**
+
+- Create a new Monitoring Policy Type with Version 6.0.#
+- Create a new Monitoring Policy Type with Version 7.0.#
+- Create a new Optimization Policy Type with Version 6.0.#
+- Create a new Guard Policy Type with Version 6.0.#
+- Create a new Native APEX Policy Type with Version 6.0.#
+- Create a new Native Drools Policy Type with Version 6.0.#
+- Create a new Native XACML Policy Type with Version 6.0.#
+- Get All Policy Types
+- Get All Versions of the new Monitoring Policy Type
+- Get Version 6.0.# of the new Monitoring Policy Type
+- Get Version 6.0.# of the new Optimization Policy Type
+- Get Version 6.0.# of the new Guard Policy Type
+- Get Version 6.0.# of the new Native APEX Policy Type
+- Get Version 6.0.# of the new Native Drools Policy Type
+- Get Version 6.0.# of the new Native XACML Policy Type
+- Get the Latest Version of the New Monitoring Policy Type
+- Create Monitoring Policy Ver 6.0.# w/Monitoring Policy Type Ver 6.0.#
+- Create Monitoring Policy Ver 7.0.# w/Monitoring Policy Type Ver 7.0.#
+- Create Optimization Policy Ver 6.0.# w/Optimization Policy Type Ver 6.0.#
+- Create Guard Policy Ver 6.0.# w/Guard Policy Type Ver 6.0.#
+- Create Native APEX Policy Ver 6.0.# w/Native APEX Policy Type Ver 6.0.#
+- Create Native Drools Policy Ver 6.0.# w/Native Drools Policy Type Ver 6.0.#
+- Create Native XACML Policy Ver 6.0.# w/Native XACML Policy Type Ver 6.0.#
+- Get Version 6.0.# of the new Monitoring Policy
+- Get Version 6.0.# of the new Optimization Policy
+- Get Version 6.0.# of the new Guard Policy
+- Get Version 6.0.# of the new Native APEX Policy
+- Get Version 6.0.# of the new Native Drools Policy
+- Get Version 6.0.# of the new Native XACML Policy
+- Get the Latest Version of the new Monitoring Policy
+- Delete Version 6.0.# of the new Monitoring Policy
+- Delete Version 7.0.# of the new Monitoring Policy
+- Delete Version 6.0.# of the new Optimization Policy
+- Delete Version 6.0.# of the new Guard Policy
+- Delete Version 6.0.# of the new Native APEX Policy
+- Delete Version 6.0.# of the new Native Drools Policy
+- Delete Version 6.0.# of the new Native XACML Policy
+- Delete Monitoring Policy Type with Version 6.0.#
+- Delete Monitoring Policy Type with Version 7.0.#
+- Delete Optimization Policy Type with Version 6.0.#
+- Delete Guard Policy Type with Version 6.0.#
+- Delete Native APEX Policy Type with Version 6.0.#
+- Delete Native Drools Policy Type with Version 6.0.#
+- Delete Native XACML Policy Type with Version 6.0.#
+
+**TearDown Thread (will only be running after API Test Flow is completed)**
+
+- Get policy-api Healthcheck
+- Get Preloaded Policy Types
+
+
+Test Results El-Alto
+--------------------
**Summary**
@@ -396,6 +361,40 @@ Policy API stability test plan was triggered and running for 72+ hours without a
.. image:: images/results-4.png
+Test Results Frankfurt
+----------------------
+
+PFPP ONAP Windriver lab
+
+**Summary**
+
+The policy API stability test plan was triggered and ran for 72+ hours without
+any real errors. The single failure occurred during teardown and was caused by
+other test plans running concurrently on the lab system.
+
+Compared to El-Alto, 10x the number of API calls were made in the 72 hour run.
+However, the latency increased, most likely due to the synchronization added
+by `POLICY-2533 <https://jira.onap.org/browse/POLICY-2533>`_.
+This will be addressed in the next release.
+
+**Test Statistics**
+
+======================= ============= =========== =============================== =============================== ===============================
+**Total # of requests** **Success %** **Error %** **Avg. time taken per request** **Min. time taken per request** **Max. time taken per request**
+======================= ============= =========== =============================== =============================== ===============================
+ 514953 100% 0% 2510 ms 336 ms 15034 ms
+======================= ============= =========== =============================== =============================== ===============================
+
+**VisualVM Results**
+
+VisualVM results were not captured as this was run in the PFPP ONAP Windriver
+lab.
+
+**JMeter Results**
+
+.. image:: images/api-s3p-jm-1_F.png
+
Performance Test of Policy API
++++++++++++++++++++++++++++++
@@ -403,7 +402,7 @@ Performance Test of Policy API
Introduction
------------
-Performance test of policy-api has the goal of testing the min/avg/max processing time and rest call throughput for all the requests when the number of requests are large enough to saturate the resource and find the bottleneck.
+The performance test of policy-api measures the min/avg/max processing time and REST call throughput when the number of requests is large enough to saturate resources, in order to identify the bottleneck.
Setup Details
-------------
@@ -417,7 +416,7 @@ Test Plan
---------
Performance test plan is the same as stability test plan above.
-Only differences are, in performance test, we increase the number of threads up to 20 (simulating 20 users' behaviors at the same time) whereas reducing the test time down to 1 hour.
+The only differences are that the performance test increases the thread count to 20 (simulating 20 concurrent users) and reduces the test duration to 1 hour.
Run Test
--------
diff --git a/docs/development/devtools/distribution-s3p.rst b/docs/development/devtools/distribution-s3p.rst
index f448690b..093e28c0 100644
--- a/docs/development/devtools/distribution-s3p.rst
+++ b/docs/development/devtools/distribution-s3p.rst
@@ -270,3 +270,63 @@ Stability test plan was triggered for 72 hours.
.. image:: images/distribution-summary-report.png
.. image:: images/distribution-results-tree.png
+
+Performance Test of Policy Distribution
++++++++++++++++++++++++++++++++++++++++
+
+Introduction
+------------
+
+The performance test of distribution measures the min/avg/max processing time and
+REST call throughput when the number of requests is large enough to saturate
+resources, in order to identify the bottleneck.
+It also verifies that distribution can handle multiple policy CSARs and that these are consistently deployed within 30 seconds.
+
+Setup Details
+-------------
+
+The performance test is based on the same setup as the distribution stability tests.
+
+Test Plan
+---------
+
+The performance test plan differs from the stability test plan.
+Instead of handling one policy CSAR at a time, multiple CSARs are placed in the watched folder at the exact same time.
+All policies from these CSARs are then expected to be deployed within 30 seconds.
+Alongside this, multi-threaded tests run against the healthcheck and statistics endpoints of the distribution service.
+
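+For illustration, the simultaneous drop can be reproduced by copying several
+CSARs into the watched folder in one command (file names are placeholders;
+the folder path is the one used below):
+
+.. code-block:: bash
+
+    # all CSARs appear in the watched folder at effectively the same time
+    $ cp csars/*.csar /tmp/policydistribution/distributionmount/
+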
+Run Test
+--------
+
+Copy the performance test plans folder onto VM2.
+Change the /tmp/ folder permissions to allow the test plan to place CSARs into the /tmp/policydistribution/distributionmount/ folder.
+
+.. code-block:: bash
+
+ $ sudo chmod a+trwx /tmp
+
+From the Apache JMeter folder, run the test, pointing it at the performance.jmx file inside the testplans folder
+
+.. code-block:: bash
+
+ $ ./bin/jmeter -n -t /home/rossc/testplans/performance.jmx -Jduration=259200 -l testresults.jtl
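+ # -Jduration passes the run time (in seconds) to the testplan; -l writes the raw results to testresults.jtl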
+
+Test Results
+------------
+
+**Summary**
+
+Performance test plan was triggered for 4 hours.
+
+**Test Statistics**
+
+======================= ================= ================== ==================================
+**Total # of requests** **Success %** **Error %** **Average time taken per request**
+======================= ================= ================== ==================================
+239819 100 % 0 % 100 ms
+======================= ================= ================== ==================================
+
+**JMeter Screenshot**
+
+.. image:: images/distribution-performance-summary-report.png
+.. image:: images/distribution-performance-api-report.png
diff --git a/docs/development/devtools/drools-s3p.rst b/docs/development/devtools/drools-s3p.rst
index 3082732f..429186b6 100644
--- a/docs/development/devtools/drools-s3p.rst
+++ b/docs/development/devtools/drools-s3p.rst
@@ -10,3 +10,302 @@
Policy Drools PDP component
~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Both the Performance and the Stability tests were executed against a default ONAP installation in the PFPP tenant, from an independent VM running the jmeter tool to inject the load.
+
+General Setup
+*************
+
+The kubernetes installation allocated all policy components, together with some
+additional ones, to the same worker node VM. The worker VM hosting the policy components has the
+
+- 16GB RAM
+- 8 VCPU
+- 160GB Ephemeral Disk
+
+The standalone VM designated to run jmeter has the same configuration and was
+used exclusively to run the tool, with 12G of heap memory allocated to jmeter.
+
+Other ONAP components used during the stability tests are:
+
+- Policy XACML PDP to process guard queries for each transaction.
+- DMaaP to carry PDP-D and jmeter initiated traffic to complete transactions.
+- Policy API to create (and delete at the end of the tests) policies for each
+ scenario under test.
+- Policy PAP to deploy (and undeploy at the end of the tests) policies for each scenario under test.
+
+The following components were simulated during the tests:
+
+- SO actor for the vDNS use case.
+- APPC responses for the vCPE and vFW use cases.
+- AAI to answer queries for the use cases under test.
+
+In order to restrict APPC responses to just the jmeter tool driving all transactions,
+the real APPC component was disabled.
+
+The SO and AAI actors were simulated internally within the PDP-D by enabling
+the feature-controlloop-utils prior to running the tests.
+
+PDP-D Setup
+***********
+
+The kubernetes charts were modified prior to installation with
+the changes below.
+
+The oom/kubernetes/policy/charts/drools/resources/configmaps/base.conf was
+modified:
+
+.. code-block:: bash
+
+ --- a/kubernetes/policy/charts/drools/resources/configmaps/base.conf
+ +++ b/kubernetes/policy/charts/drools/resources/configmaps/base.conf
+ @@ -85,27 +85,27 @@ DMAAP_SERVERS=message-router
+
+ # AAI
+
+ -AAI_HOST=aai.{{.Release.Namespace}}
+ -AAI_PORT=8443
+ +AAI_HOST=localhost
+ +AAI_PORT=6666
+ AAI_CONTEXT_URI=
+
+ # MSO
+
+ -SO_HOST=so.{{.Release.Namespace}}
+ -SO_PORT=8080
+ -SO_CONTEXT_URI=onap/so/infra/
+ -SO_URL=https://so.{{.Release.Namespace}}:8080/onap/so/infra
+ +SO_HOST=localhost
+ +SO_PORT=6667
+ +SO_CONTEXT_URI=
+ +SO_URL=https://localhost:6667/
+
+ # VFC
+
+ -VFC_HOST=
+ -VFC_PORT=
+ +VFC_HOST=localhost
+ +VFC_PORT=6668
+ VFC_CONTEXT_URI=api/nslcm/v1/
+
+ # SDNC
+
+ -SDNC_HOST=sdnc.{{.Release.Namespace}}
+ -SDNC_PORT=8282
+ +SDNC_HOST=localhost
+ +SDNC_PORT=6670
+ SDNC_CONTEXT_URI=restconf/operations/
+
+The AAI actor had to be modified to disable https to talk to the AAI simulator.
+
+.. code-block:: bash
+
+ ~/oom/kubernetes/policy/charts/drools/resources/configmaps/AAI-http-client.properties
+
+ http.client.services=AAI
+
+ http.client.services.AAI.managed=true
+ http.client.services.AAI.https=false
+ http.client.services.AAI.host=${envd:AAI_HOST}
+ http.client.services.AAI.port=${envd:AAI_PORT}
+ http.client.services.AAI.userName=${envd:AAI_USERNAME}
+ http.client.services.AAI.password=${envd:AAI_PASSWORD}
+ http.client.services.AAI.contextUriPath=${envd:AAI_CONTEXT_URI}
+
+The SO actor had to be modified similarly.
+
+.. code-block:: bash
+
+ oom/kubernetes/policy/charts/drools/resources/configmaps/SO-http-client.properties:
+
+ http.client.services=SO
+
+ http.client.services.SO.managed=true
+ http.client.services.SO.https=false
+ http.client.services.SO.host=${envd:SO_HOST}
+ http.client.services.SO.port=${envd:SO_PORT}
+ http.client.services.SO.userName=${envd:SO_USERNAME}
+ http.client.services.SO.password=${envd:SO_PASSWORD}
+ http.client.services.SO.contextUriPath=${envd:SO_CONTEXT_URI}
+
+The feature-controlloop-utils was started by adding the following script:
+
+.. code-block:: bash
+
+ oom/kubernetes/policy/charts/drools/resources/configmaps/features.pre.sh:
+
+ #!/bin/bash
+ bash -c "features enable controlloop-utils"
+
+The PDP-D uses the "small" resource configuration:
+
+.. code-block:: bash
+
+    small:
+      limits:
+        cpu: 1
+        memory: 4Gi
+      requests:
+        cpu: 100m
+        memory: 1Gi
+
+
+Stability Test of Policy PDP-D
+******************************
+
+The 72 hour stability test ran in parallel with the stability run of the API component.
+
+Approximately 3.75G of heap was allocated to the PDP-D JVM at initialization.
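+
+A sketch of how the granted heap can be confirmed (assumptions: JDK and procps
+tools are present in the image, one JVM per pod, and the namespace is illustrative):
+
+.. code-block:: bash
+
+    # print the JVM's maximum heap size in bytes
+    $ kubectl -n onap exec dev-drools-0 -- sh -c 'jcmd $(pgrep -o java) VM.flags | tr " " "\n" | grep MaxHeapSize'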
+
+Worker Node performance
+=======================
+
+The VM named onap-k8s-07 was monitored for the duration of the two parallel
+stability runs. The table below shows the usage ranges:
+
+.. code-block:: bash
+
+ NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
+ onap-k8s-07 <=1374m <=20% <=10643Mi <=66%
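+
+These figures match the format of ``kubectl top`` (a sketch; assumes the
+kubernetes metrics server is available):
+
+.. code-block:: bash
+
+    $ kubectl top node onap-k8s-07          # the worker-node view above
+    $ kubectl -n onap top pod dev-drools-0  # the per-pod view in the next section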
+
+PDP-D performance
+=================
+
+The PDP-D was monitored during the run and stayed within the following ranges:
+
+.. code-block:: bash
+
+ NAME CPU(cores) MEMORY(bytes)
+ dev-drools-0 <=142m 684Mi
+
+Garbage collection was monitored and no major spikes were detected.
+
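+One way to watch GC activity (a sketch, not necessarily how it was captured;
+assumes JDK and procps tools inside the container and a single JVM per pod):
+
+.. code-block:: bash
+
+    # sample GC utilization of the PDP-D JVM every 5 seconds
+    $ kubectl -n onap exec -it dev-drools-0 -- sh -c 'jstat -gcutil $(pgrep -o java) 5000'
+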
+The following use cases were tested:
+
+- vCPE
+- vDNS
+- vFirewall
+
+For 72 hours the following 5 scenarios were run in parallel:
+
+- vCPE success scenario
+- vCPE failure scenario (failure returned by simulated APPC recipient through DMaaP).
+- vDNS success scenario.
+- vDNS failure scenario.
+- vFirewall success scenario.
+
+Five threads, one for each scenario described above, pushed the traffic
+back-to-back with no pauses.
+
+All transactions completed successfully as expected in each scenario.
+
+The command executed was
+
+.. code-block:: bash
+
+ jmeter -n -t /home/ubuntu/jhh/s3p.jmx > /dev/null 2>&1
+
+The results were computed by taking the elapsed time from the audit.log
+(this log reports all end-to-end transactions, marking the start, end, and
+elapsed times).
+
+The count reflects the number of successful transactions as expected in the
+use case, as well as the average, standard deviation, and max/min. A histogram
+of the response times has been added as a visual indication of the most common transaction times.
+
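+A sketch of the kind of post-processing involved (assumptions: the elapsed
+times are extracted from audit.log with an illustrative pattern, and the
+summaries below resemble pandas' ``describe()`` output):
+
+.. code-block:: bash
+
+    # pull the per-transaction elapsed times for one control loop and summarize
+    $ grep 'ControlLoop-vCPE-48f0c2c3' audit.log | grep -oE '[0-9]+ms' | tr -d 'ms' > elapsed.txt
+    $ python3 -c "import pandas; print(pandas.read_csv('elapsed.txt', header=None)[0].describe())"
+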
+vCPE Success scenario
+=====================
+
+ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e:
+
+.. code-block:: bash
+
+ count 155246.000000
+ mean 269.894226
+ std 64.556282
+ min 133.000000
+ 50% 276.000000
+ max 1125.000000
+
+
+Transaction Times histogram:
+
+.. image:: images/ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e.png
+
+
+vCPE Failure scenario
+=====================
+
+ControlLoop-vCPE-Fail:
+
+.. code-block:: bash
+
+ ControlLoop-vCPE-Fail :
+ count 149621.000000
+ mean 280.483522
+ std 67.226550
+ min 134.000000
+ 50% 279.000000
+ max 5394.000000
+
+
+Transaction Times histogram:
+
+.. image:: images/ControlLoop-vCPE-Fail.png
+
+vDNS Success scenario
+=====================
+
+ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3:
+
+.. code-block:: bash
+
+ count 293000.000000
+ mean 21.961792
+ std 7.921396
+ min 15.000000
+ 50% 20.000000
+ max 672.000000
+
+Transaction Times histogram:
+
+.. image:: images/ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3.png
+
+vDNS Failure scenario
+=====================
+
+ControlLoop-vDNS-Fail:
+
+.. code-block:: bash
+
+ count 59357.000000
+ mean 3010.261267
+ std 76.599948
+ min 0.000000
+ 50% 3010.000000
+ max 3602.000000
+
+Transaction Times histogram:
+
+.. image:: images/ControlLoop-vDNS-Fail.png
+
+vFirewall Success scenario
+==========================
+
+ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a:
+
+.. code-block:: bash
+
+ count 175401.000000
+ mean 184.581251
+ std 35.619075
+ min 136.000000
+ 50% 181.000000
+ max 3972.000000
+
+Transaction Times histogram:
+
+.. image:: images/ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a.png
+
+
diff --git a/docs/development/devtools/images/ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e.png b/docs/development/devtools/images/ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e.png
new file mode 100644
index 00000000..788e2313
--- /dev/null
+++ b/docs/development/devtools/images/ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e.png
Binary files differ
diff --git a/docs/development/devtools/images/ControlLoop-vCPE-Fail.png b/docs/development/devtools/images/ControlLoop-vCPE-Fail.png
new file mode 100644
index 00000000..16fc9836
--- /dev/null
+++ b/docs/development/devtools/images/ControlLoop-vCPE-Fail.png
Binary files differ
diff --git a/docs/development/devtools/images/ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3.png b/docs/development/devtools/images/ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3.png
new file mode 100644
index 00000000..92f82eb6
--- /dev/null
+++ b/docs/development/devtools/images/ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3.png
Binary files differ
diff --git a/docs/development/devtools/images/ControlLoop-vDNS-Fail.png b/docs/development/devtools/images/ControlLoop-vDNS-Fail.png
new file mode 100644
index 00000000..e5f4ce3b
--- /dev/null
+++ b/docs/development/devtools/images/ControlLoop-vDNS-Fail.png
Binary files differ
diff --git a/docs/development/devtools/images/ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a.png b/docs/development/devtools/images/ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a.png
new file mode 100644
index 00000000..345ea7d0
--- /dev/null
+++ b/docs/development/devtools/images/ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a.png
Binary files differ
diff --git a/docs/development/devtools/images/api-s3p-jm-1_F.png b/docs/development/devtools/images/api-s3p-jm-1_F.png
new file mode 100644
index 00000000..48190165
--- /dev/null
+++ b/docs/development/devtools/images/api-s3p-jm-1_F.png
Binary files differ
diff --git a/docs/development/devtools/images/distribution-performance-api-report.png b/docs/development/devtools/images/distribution-performance-api-report.png
new file mode 100644
index 00000000..12102718
--- /dev/null
+++ b/docs/development/devtools/images/distribution-performance-api-report.png
Binary files differ
diff --git a/docs/development/devtools/images/distribution-performance-summary-report.png b/docs/development/devtools/images/distribution-performance-summary-report.png
new file mode 100644
index 00000000..3cea8e99
--- /dev/null
+++ b/docs/development/devtools/images/distribution-performance-summary-report.png
Binary files differ