36 files changed, 100 insertions, 637 deletions
diff --git a/docs/development/devtools/api-s3p.rst b/docs/development/devtools/api-s3p.rst index 982571ba..3e68f5b0 100644 --- a/docs/development/devtools/api-s3p.rst +++ b/docs/development/devtools/api-s3p.rst @@ -17,252 +17,24 @@ Policy API S3P Tests Introduction ------------ -The 72 hour stability test of policy API has the goal of verifying the stability of running policy design API REST service by -ingesting a steady flow of transactions of policy design API calls in a multi-thread fashion to simulate multiple clients' behaviors. -All the transaction flows are initiated from a test client server running JMeter for the duration of 72+ hours. +The 72 hour stability test of policy API has the goal of verifying the stability of running policy design API REST +service by ingesting a steady flow of transactions in a multi-threaded fashion to +simulate multiple clients' behaviors. +All the transaction flows are initiated from a test client server running JMeter for the duration of 72 hours. Setup Details ------------- -The stability test is performed on VMs running in Intel Wind River Lab environment. -There are 2 seperate VMs. One for running API while the other running JMeter & other necessary components, e.g. MariaDB, to simulate steady flow of transactions. -For simplicity, let's assume: - -VM1 will be running JMeter, MariaDB. -VM2 will be running API REST service and visualVM. - -**Lab Environment** - -Intel ONAP Integration and Deployment Labs -`Physical Labs <https://wiki.onap.org/display/DW/Physical+Labs>`_, -`Wind River <https://www.windriver.com/>`_ - -**API VM Details (VM2)** - -OS: Ubuntu 18.04 LTS - -CPU: 4 core - -RAM: 8 GB - -HardDisk: 91 GB - -Docker Version: 18.09.8 - -Java: OpenJDK 1.8.0_212 - -**JMeter VM Details (VM1)** - -OS: Ubuntu 18.04 LTS - -CPU: 4 core - -RAM: 8GB - -HardDisk: 91GB - -Docker Version: 18.09.8 - -Java: OpenJDK 1.8.0_212 - -JMeter: 5.1.1 - -**Software Installation & Configuration** - -**VM1 & VM2 in lab** - -**Install Java & Docker** - -Make the etc/hosts entries - -.. code-block:: bash - - $ echo $(hostname -I | cut -d\ -f1) $(hostname) | sudo tee -a /etc/hosts - -Update the Ubuntu software installer - -.. code-block:: bash - - $ sudo apt-get update - -Check and install Java - -.. code-block:: bash - - $ sudo apt-get install -y openjdk-8-jdk - $ java -version - -Ensure that the Java version executing is OpenJDK version 8 - -Check and install docker - -.. code-block:: bash - - $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - - $ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" - $ sudo apt-get update - $ sudo apt-cache policy docker-ce - $ sudo apt-get install -y unzip docker-ce - $ systemctl status docker - $ docker ps - -Change the permissions of the Docker socket file - -.. code-block:: bash - - $ sudo chmod 777 /var/run/docker.sock - -Or add the current user to the docker group - -.. code-block:: bash - - $ sudo usermod -aG docker $USER - -Check the status of the Docker service and ensure it is running correctly - -.. code-block:: bash - - $ service docker status - $ docker ps - -**VM1 in lab** - -**Install JMeter** - -Download & install JMeter - -.. code-block:: bash - - $ mkdir jMeter - $ cd jMeter - $ wget http://mirrors.whoishostingthis.com/apache//jmeter/binaries/apache-jmeter-5.2.1.zip - $ unzip apache-jmeter-5.2.1.zip - -**Install other necessary components** - -Pull api code & run setup components script - -.. 
code-block:: bash - - $ cd ~ - $ git clone https://git.onap.org/policy/api - $ cd api/testsuites/stability/src/main/resources/simulatorsetup - $ . ./setup_components.sh - -After installation, make sure the following mariadb container is up and running - -.. code-block:: bash - - ubuntu@test:~/api/testsuites/stability/src/main/resources/simulatorsetup$ docker ps - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - 3849ce44b86d mariadb:10.2.14 "docker-entrypoint.s…" 11 days ago Up 11 days 0.0.0.0:3306->3306/tcp mariadb - -**VM2 in lab** - -**Install policy-api** - -Pull api code & run setup api script - -.. code-block:: bash - - $ cd ~ - $ git clone https://git.onap.org/policy/api - $ cd api/testsuites/stability/src/main/resources/apisetup - $ . ./setup_api.sh <host ip running api> <host ip running mariadb> - -After installation, make sure the following api container is up and running - -.. code-block:: bash - - ubuntu@tools-2:~/api/testsuites/stability/src/main/resources/apisetup$ docker ps - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - 4f08f9972e55 nexus3.onap.org:10001/onap/policy-api:2.1.1-SNAPSHOT "bash ./policy-api.sh" 11 days ago Up 11 days 0.0.0.0:6969->6969/tcp, 0.0.0.0:9090->9090/tcp policy-api - -**Install & configure visualVM** - -VisualVM needs to be installed in the virtual machine having API up and running. It will be used to monitor CPU, Memory, GC for API while stability test is running. - -Install visualVM - -.. code-block:: bash - - $ sudo apt-get install visualvm - -Run few commands to configure permissions - -.. code-block:: bash - - $ cd /usr/lib/jvm/java-8-openjdk-amd64/bin/ - $ sudo touch visualvm.policy - $ sudo chmod 777 visualvm.policy - - $ vi visualvm.policy - - Add the following in visualvm.policy - - - grant codebase "file:/usr/lib/jvm/java-8-openjdk-amd64/lib/tools.jar" { - permission java.security.AllPermission; - }; - -Run following commands to start jstatd using port 1111 - -.. code-block:: bash - - $ cd /usr/lib/jvm/java-8-openjdk-amd64/bin/ - $ ./jstatd -p 1111 -J-Djava.security.policy=visualvm.policy & - -**Local Machine** - -**Run & configure visualVM** - -Run visualVM by typing - -.. code-block:: bash - - $ jvisualvm - -Connect to jstatd & remote policy-api JVM - - 1. Right click on "Remote" in the left panel of the screen and select "Add Remote Host..." - 2. Enter the IP address of VM2 (running policy-api) - 3. Right click on IP address, select "Add JMX Connection..." - 4. Enter the VM2 IP Address (from step 2) <IP address>:9090 ( for example, 10.12.6.151:9090) and click OK. - 5. Double click on the newly added nodes under "Remote" to start monitoring CPU, Memory & GC. - -Sample Screenshot of visualVM - -.. image:: images/results-5.png - -Run Test --------- - -**Local Machine** - -Connect to lab VPN - -.. code-block:: bash - - $ sudo openvpn --config <path to lab ovpn key file> - -SSH into JMeter VM (VM1) - -.. code-block:: bash - - $ ssh -i <path to lab ssh key file> ubuntu@<host ip of JMeter VM> - -Run JMeter test in background for 72+ hours +The stability test was performed on a default ONAP OOM installation in the Intel Wind River Lab environment. +JMeter was installed on a separate VM to inject the traffic defined in the +`API stability script +<https://git.onap.org/policy/api/tree/testsuites/stability/src/main/resources/testplans/policy_api_stability.jmx>`_ +with the following command: .. 
code-block:: bash - $ mkdir s3p - $ nohup ./jMeter/apache-jmeter-5.2.1/bin/jmeter.sh -n -t ~/api/testsuites/stability/src/main/resources/testplans/policy_api_stability.jmx & + jmeter.sh --nongui --testfile policy_api_stability.jmx --logfile result.jtl -(Optional) Monitor JMeter test that is running in background (anytime after re-logging into JMeter VM - VM1) - -.. code-block:: bash - - $ tail -f s3p/stability.log nohup.out Test Plan --------- @@ -333,108 +105,43 @@ of each entity is set to the running thread number. - Get Preloaded Policy Types -Test Results El-Alto --------------------- +Test Results +------------ **Summary** -Policy API stability test plan was triggered and running for 72+ hours without any error occurred. +No errors were found during the 72 hours of the Policy API stability run. +The load was performed against a non-tweaked ONAP OOM installation. **Test Statistics** ======================= ============= =========== =============================== =============================== =============================== -**Total # of requests** **Success %** **Error %** **Avg. time taken per request** **Min. time taken per request** **Max. time taken per request** +**Total # of requests** **Success %** **TPS** **Avg. time taken per request** **Min. time taken per request** **Max. time taken per request** ======================= ============= =========== =============================== =============================== =============================== - 49723 100% 0% 86 ms 4 ms 795 ms + 176407 100% 0.68 7340 ms 34 ms 49298 ms ======================= ============= =========== =============================== =============================== =============================== -**VisualVM Results** - -.. image:: images/results-5.png -.. image:: images/results-6.png **JMeter Results** -.. image:: images/results-1.png -.. image:: images/results-2.png -.. image:: images/results-3.png -.. image:: images/results-4.png - - -Test Results Frankfurt ----------------------- - -PFPP ONAP Windriver lab - -**Summary** - -Policy API stability test plan was triggered and running for 72+ hours without -any real errors occurring. The single failure was on teardown and was due to -simultaneous test plans running concurrently on the lab system. - -Compared to El-Alto, 10x the number of API calls were made in the 72 hour run. -However, the latency increased (most likely due to the synchronization added -from -`POLICY-2533 <https://jira.onap.org/browse/POLICY-2533>`_. -This will be addressed in the next release. - -**Test Statistics** - -======================= ============= =========== =============================== =============================== =============================== -**Total # of requests** **Success %** **Error %** **Avg. time taken per request** **Min. time taken per request** **Max. time taken per request** -======================= ============= =========== =============================== =============================== =============================== - 514953 100% 0% 2510 ms 336 ms 15034 ms -======================= ============= =========== =============================== =============================== =============================== - -**VisualVM Results** - -VisualVM results were not captured as this was run in the PFPP ONAP Windriver -lab. +The following graphs show the response time distributions. The "Get Policy Types" API calls are the most expensive calls that +average a 10 seconds plus response time. -**JMeter Results** - -.. image:: images/api-s3p-jm-1_F.png +.. 
image:: images/api-response-time-distribution.png +.. image:: images/api-response-time-overtime.png Performance Test of Policy API ++++++++++++++++++++++++++++++ -Introduction ------------ - -Performance test of policy-api has the goal of testing the min/avg/max processing time and rest call throughput for all the requests when the number of requests are large enough to saturate the resource and find the bottleneck. - -Setup Details ------------- +A specific performance test was omitted in Guilin. The JMeter script used in the stability run injected +back-to-back traffic with 5 parallel threads with no pauses between requests. Since the JMeter threads operate +in synchronous mode (waiting for a request's response before sending the next request), JMeter injection rates autoregulate +because of the backpressure imposed by the response times. Even though the response times are high, the +"Response over Time" graph above indicates that they remain largely constant throughout the duration of the test. +This, together with the absence of noticeable spikes in the kubernetes node CPU utilization, suggests that the API +component is not strained. A more enlightening set of tests would plot JMeter threads (increasing load) +against response times. These tests have not been performed in this release. -The performance test is performed on OOM-based deployment of ONAP Policy framework components in Intel Wind River Lab environment. -In addition, we use another VM with JMeter installed to generate the transactions. -The JMeter VM will be sending large number of REST requests to the policy-api component and collecting the statistics. -Policy-api component already knows how to communicate with MariaDB component if OOM-based deployment is working correctly. - -Test Plan --------- - -Performance test plan is the same as stability test plan above. -Only differences are, in performance test, we increase the number of threads up to 20 (simulating 20 users' behaviors at the same time) whereas reducing the test time down to 1 hour. - -Run Test -------- - -Running/Triggering performance test will be the same as stability test. That is, launch JMeter pointing to corresponding *.jmx* test plan. The *API_HOST* and *API_PORT* are already set up in *.jmx*. - -Test Results ------------ -Test results are shown as below. Overall, the test was running smoothly and successfully. We do see some minor failed transactions, especially in POST calls which intend to write into DB simultaneously in a multi-threaded fashion . All GET calls (reading from DB) were succeeded. - -.. image:: images/summary-1.png -.. image:: images/summary-2.png -.. image:: images/summary-3.png -.. image:: images/result-1.png -.. image:: images/result-2.png -.. image:: images/result-3.png -.. image:: images/result-4.png -.. image:: images/result-5.png -.. 
image:: images/result-6.png diff --git a/docs/development/devtools/images/api-response-time-distribution.png b/docs/development/devtools/images/api-response-time-distribution.png Binary files differnew file mode 100644 index 00000000..e57ff627 --- /dev/null +++ b/docs/development/devtools/images/api-response-time-distribution.png diff --git a/docs/development/devtools/images/api-response-time-overtime.png b/docs/development/devtools/images/api-response-time-overtime.png Binary files differnew file mode 100644 index 00000000..c80a6a64 --- /dev/null +++ b/docs/development/devtools/images/api-response-time-overtime.png diff --git a/docs/development/devtools/images/pap-perf-jm-2_F.png b/docs/development/devtools/images/pap-perf-jm-2_F.png Binary files differdeleted file mode 100644 index e631f992..00000000 --- a/docs/development/devtools/images/pap-perf-jm-2_F.png +++ /dev/null diff --git a/docs/development/devtools/images/pap-s3p-jm-1.png b/docs/development/devtools/images/pap-s3p-jm-1.png Binary files differdeleted file mode 100644 index c292089d..00000000 --- a/docs/development/devtools/images/pap-s3p-jm-1.png +++ /dev/null diff --git a/docs/development/devtools/images/pap-s3p-jm-1_F.png b/docs/development/devtools/images/pap-s3p-jm-1_F.png Binary files differdeleted file mode 100644 index 2b6b656a..00000000 --- a/docs/development/devtools/images/pap-s3p-jm-1_F.png +++ /dev/null diff --git a/docs/development/devtools/images/pap-s3p-jm-performance.JPG b/docs/development/devtools/images/pap-s3p-jm-performance.JPG Binary files differnew file mode 100644 index 00000000..60bf6210 --- /dev/null +++ b/docs/development/devtools/images/pap-s3p-jm-performance.JPG diff --git a/docs/development/devtools/images/pap-s3p-jm-stability.JPG b/docs/development/devtools/images/pap-s3p-jm-stability.JPG Binary files differnew file mode 100644 index 00000000..58e26b5f --- /dev/null +++ b/docs/development/devtools/images/pap-s3p-jm-stability.JPG diff --git a/docs/development/devtools/images/pap-s3p-top-after.JPG b/docs/development/devtools/images/pap-s3p-top-after.JPG Binary files differnew file mode 100644 index 00000000..967cdcc0 --- /dev/null +++ b/docs/development/devtools/images/pap-s3p-top-after.JPG diff --git a/docs/development/devtools/images/pap-s3p-top-before.JPG b/docs/development/devtools/images/pap-s3p-top-before.JPG Binary files differnew file mode 100644 index 00000000..a922617f --- /dev/null +++ b/docs/development/devtools/images/pap-s3p-top-before.JPG diff --git a/docs/development/devtools/images/pap-s3p-vvm-1.png b/docs/development/devtools/images/pap-s3p-vvm-1.png Binary files differdeleted file mode 100644 index 8c72d1fb..00000000 --- a/docs/development/devtools/images/pap-s3p-vvm-1.png +++ /dev/null diff --git a/docs/development/devtools/images/pap-s3p-vvm-1_F.png b/docs/development/devtools/images/pap-s3p-vvm-1_F.png Binary files differdeleted file mode 100644 index e05402be..00000000 --- a/docs/development/devtools/images/pap-s3p-vvm-1_F.png +++ /dev/null diff --git a/docs/development/devtools/images/pap-s3p-vvm-2.png b/docs/development/devtools/images/pap-s3p-vvm-2.png Binary files differdeleted file mode 100644 index b1d7e346..00000000 --- a/docs/development/devtools/images/pap-s3p-vvm-2.png +++ /dev/null diff --git a/docs/development/devtools/images/pap-s3p-vvm-2_F.png b/docs/development/devtools/images/pap-s3p-vvm-2_F.png Binary files differdeleted file mode 100644 index ee20423a..00000000 --- a/docs/development/devtools/images/pap-s3p-vvm-2_F.png +++ /dev/null diff --git 
a/docs/development/devtools/images/result-1.png b/docs/development/devtools/images/result-1.png Binary files differdeleted file mode 100644 index 4715cd7a..00000000 --- a/docs/development/devtools/images/result-1.png +++ /dev/null diff --git a/docs/development/devtools/images/result-2.png b/docs/development/devtools/images/result-2.png Binary files differdeleted file mode 100644 index cd01147d..00000000 --- a/docs/development/devtools/images/result-2.png +++ /dev/null diff --git a/docs/development/devtools/images/result-3.png b/docs/development/devtools/images/result-3.png Binary files differdeleted file mode 100644 index 01e27a30..00000000 --- a/docs/development/devtools/images/result-3.png +++ /dev/null diff --git a/docs/development/devtools/images/result-4.png b/docs/development/devtools/images/result-4.png Binary files differdeleted file mode 100644 index 3fc2f36b..00000000 --- a/docs/development/devtools/images/result-4.png +++ /dev/null diff --git a/docs/development/devtools/images/result-5.png b/docs/development/devtools/images/result-5.png Binary files differdeleted file mode 100644 index 9b7140c6..00000000 --- a/docs/development/devtools/images/result-5.png +++ /dev/null diff --git a/docs/development/devtools/images/result-6.png b/docs/development/devtools/images/result-6.png Binary files differdeleted file mode 100644 index f07ea59e..00000000 --- a/docs/development/devtools/images/result-6.png +++ /dev/null diff --git a/docs/development/devtools/images/results-1.png b/docs/development/devtools/images/results-1.png Binary files differdeleted file mode 100644 index 35e1a965..00000000 --- a/docs/development/devtools/images/results-1.png +++ /dev/null diff --git a/docs/development/devtools/images/results-2.png b/docs/development/devtools/images/results-2.png Binary files differdeleted file mode 100644 index 82092025..00000000 --- a/docs/development/devtools/images/results-2.png +++ /dev/null diff --git a/docs/development/devtools/images/results-3.png b/docs/development/devtools/images/results-3.png Binary files differdeleted file mode 100644 index 69d430a2..00000000 --- a/docs/development/devtools/images/results-3.png +++ /dev/null diff --git a/docs/development/devtools/images/results-4.png b/docs/development/devtools/images/results-4.png Binary files differdeleted file mode 100644 index 47c0f5fa..00000000 --- a/docs/development/devtools/images/results-4.png +++ /dev/null diff --git a/docs/development/devtools/images/results-5.png b/docs/development/devtools/images/results-5.png Binary files differdeleted file mode 100644 index effd062b..00000000 --- a/docs/development/devtools/images/results-5.png +++ /dev/null diff --git a/docs/development/devtools/images/results-6.png b/docs/development/devtools/images/results-6.png Binary files differdeleted file mode 100644 index 1da1e366..00000000 --- a/docs/development/devtools/images/results-6.png +++ /dev/null diff --git a/docs/development/devtools/images/summary-1.png b/docs/development/devtools/images/summary-1.png Binary files differdeleted file mode 100644 index a9d3b61e..00000000 --- a/docs/development/devtools/images/summary-1.png +++ /dev/null diff --git a/docs/development/devtools/images/summary-2.png b/docs/development/devtools/images/summary-2.png Binary files differdeleted file mode 100644 index 2ca0c969..00000000 --- a/docs/development/devtools/images/summary-2.png +++ /dev/null diff --git a/docs/development/devtools/images/summary-3.png b/docs/development/devtools/images/summary-3.png Binary files differdeleted file mode 100644 
index cd288d2b..00000000 --- a/docs/development/devtools/images/summary-3.png +++ /dev/null diff --git a/docs/development/devtools/images/xacml-s3p-jmeter.png b/docs/development/devtools/images/xacml-s3p-jmeter.png Binary files differdeleted file mode 100644 index 80777570..00000000 --- a/docs/development/devtools/images/xacml-s3p-jmeter.png +++ /dev/null diff --git a/docs/development/devtools/images/xacml-s3p-top.png b/docs/development/devtools/images/xacml-s3p-top.png Binary files differdeleted file mode 100644 index 36dc403e..00000000 --- a/docs/development/devtools/images/xacml-s3p-top.png +++ /dev/null diff --git a/docs/development/devtools/pap-s3p.rst b/docs/development/devtools/pap-s3p.rst index 5ae58ff5..ba9a74f6 100644 --- a/docs/development/devtools/pap-s3p.rst +++ b/docs/development/devtools/pap-s3p.rst @@ -10,268 +10,36 @@ Policy PAP component ~~~~~~~~~~~~~~~~~~~~ -72 Hours Stability Test of PAP -++++++++++++++++++++++++++++++ +Both the Performance and the Stability tests were executed by performing requests +against Policy components installed as part of a full ONAP OOM deployment in Nordix lab. -Introduction ------------- - -The 72 hour Stability Test for PAP has the goal of introducing a steady flow of transactions initiated from a test client server running JMeter for the duration of 72 hours. - -Setup details -------------- - -The stability test is performed on VM's running in OpenStack cloud environment. - -There are 2 seperate VM's, one for running PAP & other one for running JMeter to simulate steady flow of transactions. - -All the dependencies like mariadb, dmaap simulator, pdp simulator & policy/api component are installed in the VM having JMeter. - -For simplicity lets assume - -VM1 will be running JMeter, MariaDB, DMaaP simulator, PDP simulator & API component. - -VM2 will be running only PAP component. - -**OpenStack environment details** - -Version: Mitaka - -**PAP VM details (VM2)** - -OS:Ubuntu 16.04 LTS - -CPU: 4 core - -RAM: 4 GB - -HardDisk: 40 GB - -Docker version 19.03.8 - -Java: openjdk version "11.0.7" 2020-04-14 - -**JMeter VM details (VM1)** - -OS: Ubuntu 16.04 LTS - -CPU: 4 core - -RAM: 4 GB - -HardDisk: 40 GB - -Docker Version: 18.09.6 - -Java: openjdk version "11.0.7" 2020-04-14 - -JMeter: 5.2.1 - -Install Docker in VM1 & VM2 ---------------------------- - -Make sure to execute below commands in VM1 & VM2 both. - -Make the etc/hosts entries - -.. code-block:: bash - - $ echo $(hostname -I | cut -d\ -f1) $(hostname) | sudo tee -a /etc/hosts - -Make the DNS entries - -.. code-block:: bash - - $ echo "nameserver <PrimaryDNSIPIP>" >> /etc/resolvconf/resolv.conf.d/head - $ echo "nameserver <SecondaryDNSIP>" >> /etc/resolvconf/resolv.conf.d/head - $ resolvconf -u - -Update the ubuntu software installer - -.. code-block:: bash - - $ apt-get update - -Check and Install Java - -.. code-block:: bash - - $ apt-get install -y openjdk-11-jdk - $ java -version - -Ensure that the Java version that is executing is OpenJDK version 8 - - -Check and install docker - -.. code-block:: bash - - $ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - - $ add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" - $ apt-get update - $ apt-cache policy docker-ce - $ apt-get install -y docker-ce - $ systemctl status docker - $ docker ps - -Change the permissions of the Docker socket file - -.. 
code-block:: bash - - $ chmod 777 /var/run/docker.sock - -Check the status of the Docker service and ensure it is running correctly - -.. code-block:: bash - - $ service docker status - $ docker ps - -Install JMeter in VM1 ---------------------- - -Download & install JMeter - -.. code-block:: bash - - $ mkdir jMeter - $ cd jMeter - $ wget http://mirrors.whoishostingthis.com/apache//jmeter/binaries/apache-jmeter-5.2.1.zip - $ unzip apache-jmeter-5.2.1.zip - -Run JMeter - -.. code-block:: bash - - $ /home/ubuntu/jMeter/apache-jmeter-5.2.1/bin/jmeter - -The above command will load the JMeter UI. Then navigate to File → Open → Browse and select the test plan jmx file to open. -The jmx file is present in the policy/pap git repository. - -Install simulators in VM1 -------------------------- - -Clone PAP to VM1 using the following command : - -.. code-block:: bash - - root@policytest-policytest-3-p5djn6as2477:~$ git clone http://gerrit.onap.org/r/policy/pap - -For installing simulator, execute the script `setup_components.sh` as shown below: - -.. code-block:: bash - - root@policytest-policytest-3-p5djn6as2477:~$ ./pap/testsuites/stability/src/main/resources/simulatorsetup/setup_components.sh - -After installation make sure that following 4 docker containers are up and running. - -.. code-block:: bash - - root@policytest-policytest-3-p5djn6as2477:~$ docker ps - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - 887efa8dac12 nexus3.onap.org:10001/onap/policy-api "bash ./policy-api.sh" 6 days ago Up 6 days 0.0.0.0:6969->6969/tcp policy-api - 0a931c0a63ac pdp/simulator:latest "bash pdp-sim.sh" 6 days ago Up 6 days pdp-simulator - a41adcb32afb dmaap/simulator:latest "bash dmaap-sim.sh" 6 days ago Up 6 days 0.0.0.0:3904->3904/tcp dmaap-simulator - d52d6b750ba0 mariadb:10.2.14 "docker-entrypoint.s…" 6 days ago Up 6 days 0.0.0.0:3306->3306/tcp mariadb - -Install PAP in VM2 ------------------- - -Clone PAP to VM2 using the following command : - -.. code-block:: bash - - root@policytest-policytest-3-p5djn6as2477:~$ git clone http://gerrit.onap.org/r/policy/pap - -For installing PAP, execute the script `setup_pap.sh` as shown below: - -.. code-block:: bash - - root@policytest-policytest-3-p5djn6as2477:~$ cd pap/testsuites/stability/src/main/resources/papsetup/ - root@policytest-policytest-3-p5djn6as2477:~$ ./setup_pap.sh <VM2_IP> <VM1_IP> - -After installation make sure that following docker container is up and running. - -.. code-block:: bash - - root@policytest-policytest-0-uc3y2h5x6p4j:~$ docker ps - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - 42ac0ed4b713 nexus3.onap.org:10001/onap/policy-pap:2.2.3-SNAPSHOT "bash ./policy-pap.sh" 3 days ago Up 3 days 0.0.0.0:6969->6969/tcp, 0.0.0.0:9090->9090/tcp policy-pap - -Install & configure visualVM in VM2 ------------------------------------ - -visualVM needs to be installed in the virtual machine having PAP. It will be used to monitor CPU, Memory, GC for PAP while stability test is running. - -Install visualVM - -.. code-block:: bash - - $ sudo apt-get install visualvm - -Run few commands to configure permissions - -.. code-block:: bash - - $ cd /usr/lib/jvm/java-11-openjdk-amd64/bin/ - $ sudo touch visualvm.policy - $ sudo chmod 777 visualvm.policy - - $ vi visualvm.policy - - Add the following in visualvm.policy - - - grant codebase "file:/usr/lib/jvm/java-11-openjdk-amd64/lib/tools.jar" { - permission java.security.AllPermission; - }; - -Run following commands to start jstatd using port 1111 - -.. 
code-block:: bash - - $ cd /usr/lib/jvm/java-11-openjdk-amd64/bin/ - $ ./jstatd -p 1111 -J-Djava.security.policy=visualvm.policy & - -Run visualVM locally to connect to remote VM2 - -.. code-block:: bash - - # On your windows machine or your linux box locally, launch visualVM - $ nohup visualvm - -Connect to jstatd & remote apex-pdp JVM - - 1. Right click on "Remote" in the left panel of the screen and select "Add Remote Host..." - 2. Enter the IP address of VM2. - 3. Right click on IP address, select "Add JMX Connection..." - 4. Enter the VM2 IP Address (from step 2) <IP address>:9090 ( for example -10.12.6.201:9090) and click OK. - 5. Double click on the newly added nodes under "Remote" to start monitoring CPU, Memory & GC. +Setup Details ++++++++++++++ -Sample Screenshot of visualVM +- Policy-PAP along with all policy components deployed as part of a full ONAP OOM deployment. +- A second instance of APEX-PDP is spun up in the setup. Update the configuration file(OnapPfConfig.json) such that the PDP can register to the new group created by PAP in the tests. +- Both tests were run via jMeter, which was installed on a separate VM. -.. image:: images/pap-s3p-vvm-sample.png +Stability Test of PAP ++++++++++++++++++++++ Test Plan --------- +The 72 hours stability test ran the following steps sequentially in a single threaded loop. -The 72 hours stability test will run the following steps sequentially in a single threaded loop. - -- **Create Policy Type** - creates an operational policy type using policy/api component -- **Create Policy defaultDomain** - creates an operational policy using the policy type created in the above step using policy/api component -- **Create Policy sampleDomain** - creates an operational policy using the policy type created in the above step using policy/api component +- **Create Policy defaultDomain** - creates an operational policy using policy/api component +- **Create Policy sampleDomain** - creates an operational policy using policy/api component - **Check Health** - checks the health status of pap - **Check Statistics** - checks the statistics of pap - **Change state to ACTIVE** - changes the state of defaultGroup PdpGroup to ACTIVE - **Check PdpGroup Query** - makes a PdpGroup query request and verifies that PdpGroup is in the ACTIVE state. - **Deploy defaultDomain Policy** - deploys the policy defaultDomain in the existing PdpGroup - **Create/Update PDP Group** - creates a new PDPGroup named sampleGroup. -- **OS Process Sampler** - OS Process Sampler to start a new Pdp Instance - **Check PdpGroup Query** - makes a PdpGroup query request and verifies that 2 PdpGroups are in the ACTIVE state and defaultGroup has a policy deployed on it. - **Deployment Update sampleDomain** - deploys the policy sampleDomain in sampleGroup PdpGroup using pap api - **Check PdpGroup Query** - makes a PdpGroup query request and verifies that the defaultGroup has a policy defaultDomain deployed on it and sampleGroup has policy sampleDomain deployed on it. +- **Check Consolidated Health** - checks the consolidated health status of all policy components. - **Check Deployed Policies** - checks for all the deployed policies using pap api. 
-- **OS Process Sampler** - OS Process Sampler to stop the newly created Pdp Instance - **Undeploy Policy sampleDomain** - undeploys the policy sampleDomain from sampleGroup PdpGroup using pap api - **Undeploy Default Policy** - undeploys the policy defaultDomain from PdpGroup - **Change state to PASSIVE(sampleGroup)** - changes the state of sampleGroup PdpGroup to PASSIVE @@ -280,7 +48,6 @@ The 72 hours stability test will run the following steps sequentially in a singl - **Check PdpGroup Query** - makes a PdpGroup query request and verifies that PdpGroup is in the PASSIVE state. - **Delete Policy defaultDomain** - deletes the operational policy defaultDomain using policy/api component - **Delete Policy sampleDomain** - deletes the operational policy sampleDomain using policy/api component -- **Delete Policy Type** - deletes the operational policy type using policy/api component The following steps can be used to configure the parameters of test plan. @@ -295,13 +62,13 @@ The following steps can be used to configure the parameters of test plan. PAP_PORT Port number of PAP for making REST API calls API_HOST IP Address or host name of API component API_PORT Port number of API for making REST API calls - DIR Path where the pdp instance startup and stop script is placed - CONFIG_DIR Path where the pdp default Config file is placed =========== =================================================================== -Screenshot of PAP stability test plan +The test was run in the background via "nohup", to prevent it from being interrupted: + +.. code-block:: bash -.. image:: images/pap-s3p-testplan.png + nohup ./jMeter/apache-jmeter-5.3/bin/jmeter.sh -n -t stability.jmx -l testresults.jtl Test Results ------------ @@ -310,55 +77,49 @@ Test Results Stability test plan was triggered for 72 hours. +.. Note:: + + .. container:: paragraph + + As part of the OOM deployment, another APEX-PDP pod is spun up with the pdpGroup name specified as 'sampleGroup'. + After creating the new group called 'sampleGroup' as part of the test, a time delay of 2 minutes is added, + so that the pdp is registered to the newly created group. + This has resulted in a spike in the Average time taken per request. But, this is required to make proper assertions, + and also for the consolidated health check. + **Test Statistics** ======================= ================= ================== ================================== **Total # of requests** **Success %** **Error %** **Average time taken per request** ======================= ================= ================== ================================== -178208 100 % 0 % 76 ms +35059 99.99 % 0.01 % 354 ms ======================= ================= ================== ================================== -**VisualVM Screenshot** - -.. image:: images/pap-s3p-vvm-1.png -.. image:: images/pap-s3p-vvm-2.png - -**JMeter Screenshot** - -.. image:: images/pap-s3p-jm-1.png -.. image:: images/pap-s3p-jm-1.png - -Test Results Frankfurt release -------------------------------- +.. Note:: -**Summary** + .. container:: paragraph -Stability test plan was triggered for 72 hours. + There were only 3 failures during the 72 hours test, and all these 3 happened because the 2nd PDP instance didn't + get registered in time to the new group created, and as a result, the PdpGroup Query failed. This can be ignored, + as it was only a matter of one missing heartbeat over a period of 24 hours. -.. Note:: +**JMeter Screenshot** - .. container:: paragraph +.. 
image:: images/pap-s3p-jm-stability.JPG - Test cases for starting and stopping the PDP Instance has been included in the - test plan. These test cases have resulted in a spike in the Average time taken per request. +**Memory and CPU usage** -**Test Statistics** +The memory and CPU usage can be monitored by running "top" command on the PAP pod. A snapshot is taken before and after test execution to monitor the changes in resource utilization. -======================= ================= ================== ================================== -**Total # of requests** **Success %** **Error %** **Average time taken per request** -======================= ================= ================== ================================== - 29423 100 % 0 % 948 ms -======================= ================= ================== ================================== +Memory and CPU usage before test execution: -**VisualVM Screenshot** +.. image:: images/pap-s3p-top-before.JPG -.. image:: images/pap-s3p-vvm-1_F.png -.. image:: images/pap-s3p-vvm-2_F.png +Memory and CPU usage after test execution: -**JMeter Screenshot** +.. image:: images/pap-s3p-top-after.JPG -.. image:: images/pap-s3p-jm-1_F.png -.. image:: images/pap-s3p-jm-1_F.png +The CPU and memory usage by the PAP pod is consistent over the period of 72 hours test execution. Performance Test of PAP ++++++++++++++++++++++++ @@ -373,6 +134,7 @@ Setup Details The performance test is performed on a similar setup as Stability test. The JMeter VM will be sending a large number of REST requests to the PAP component and collecting the statistics. + Test Plan --------- @@ -381,14 +143,17 @@ Performance test plan is the same as the stability test plan above except for th - Increase the number of threads up to 5 (simulating 5 users' behaviours at the same time). - Reduce the test time to 2 hours. - Usage of counters to create different groups by the 'Create/Update PDP Group' test case. -- Usage of If-Controller for 'Deploy defaultDomain Policy' and 'Undeploy defaultDomain Policy' test cases to install and uninstall the Default policy only in one thread. -- OS Process Sampler for starting and stopping the PDP Instance has been disabled in the performance test plan for a better performance check. +- Removed the delay to wait for the new PDP to be registered. Also removed the corresponding assertions where the Pdp instance registration to the newly created group is validated. Run Test -------- Running/Triggering the performance test will be the same as the stability test. That is, launch JMeter pointing to corresponding *.jmx* test plan. The *API_HOST* , *API_PORT* , *PAP_HOST* , *PAP_PORT* are already set up in *.jmx*. +.. code-block:: bash + + nohup ./jMeter/apache-jmeter-5.3/bin/jmeter.sh -n -t perf.jmx -l perftestresults.jtl + Once the test execution is completed, execute the below script to get the statistics: .. code-block:: bash @@ -399,17 +164,16 @@ Once the test execution is completed, execute the below script to get the statis Test Results ------------ -Test results are shown as below. Overall, the test was running smoothly and successfully. We do see some minor failed transactions, especially in the 'Deploy' and 'Undeploy' Pap API in a multi-threaded fashion . +Test results are shown as below. 
**Test Statistics** ======================= ================= ================== ================================== ======================= **Total # of requests** **Success %** **Error %** **Average time taken per request** **Requests/sec** ======================= ================= ================== ================================== ======================= - 25743 99.5 % 0.50 % 397 ms 5148 +44293 100 % 0.00 % 943 ms 8858 ======================= ================= ================== ================================== ======================= **JMeter Screenshot** -.. image:: images/pap-perf-jm-1_F.png -.. image:: images/pap-perf-jm-2_F.png
\ No newline at end of file +.. image:: images/pap-s3p-jm-performance.JPG
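For quick offline inspection, the JTL files written by these runs can also be summarised without the JMeter GUI. The snippet below is only a sketch: it assumes JMeter's default CSV result layout (header row, elapsed time in column 2, success flag in column 8) and the perftestresults.jtl file name used in the performance command above.

.. code-block:: bash

    # Rough summary of a JMeter CSV results file: request count, error count
    # and average elapsed time. Column positions assume the default JMeter
    # CSV layout (timeStamp,elapsed,label,responseCode,...,success,...).
    awk -F',' 'NR > 1 {
        total++
        elapsed += $2
        if ($8 != "true") errors++
    } END {
        printf "requests=%d errors=%d avg_ms=%.1f\n", total, errors, elapsed / total
    }' perftestresults.jtl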
\ No newline at end of file diff --git a/docs/development/devtools/xacml-s3p.rst b/docs/development/devtools/xacml-s3p.rst index 7c29a454..74369fc2 100644 --- a/docs/development/devtools/xacml-s3p.rst +++ b/docs/development/devtools/xacml-s3p.rst @@ -80,40 +80,32 @@ Stability Test of Policy XACML PDP Summary ======= -The Stability test was run with the same pods/VMs and uses the same jmeter script as the -performance test, except that it was run for 72 hours instead of 20 minutes. In -addition, it was run in the background via "nohup", to prevent it from being interrupted: +The stability test was performed on a default ONAP OOM installation in the Intel Wind River Lab environment. +JMeter was installed on a separate VM to inject the traffic defined in the +`XACML PDP stability script +<https://git.onap.org/policy/xacml-pdp/tree/testsuites/stability/src/main/resources/testplans/stability.jmx>`_ +with the following command: .. code-block:: bash - nohup jmeter -Jduration=259200 \ - -Jxacml_ip=$ip -Jpap_ip=$ip -Japi_ip=$ip \ - -Jxacml_port=31104 -Jpap_port=32425 -Japi_port=30709 \ - -n -t perf.jmx & + jmeter.sh -Jduration=259200 -Jusers=2 -Jxacml_ip=$ip -Jpap_ip=$ip -Japi_ip=$ip \ + -Jxacml_port=31104 -Jpap_port=32425 -Japi_port=30709 --nongui --testfile stability.jmx -The memory and CPU usage can be monitored by running "top" on the xacml pod. By taking -a snapshot before the test is started, and again when it completes, the total CPU used -by all of the requests can be computed. +The default log level of the root and org.eclipse.jetty.server.RequestLog loggers in the logback.xml +of the XACML PDP +(oom/kubernetes/policy/components/policy-xacml-pdp/resources/config/logback.xml) +was set to ERROR since the OOM installation did not have log rotation enabled for the +container logs in the kubernetes worker nodes. Results ======= -The final output of the jmeter script is found in the nohup.out file: - -.. image:: images/xacml-s3p-jmeter.png - -The final memory and CPU from "top": +The stability summary results were reported by JMeter with the following line: -.. image:: images/xacml-s3p-top.png - -The through-put reported by jmeter was 4849 requests/second, with 0 errors. In addition, -the memory usage observed via "top" indicated that the virtual memory and resident set -sizes remained virtually unchanged through-out the test. +.. code-block:: bash -Unfortunately, the initial CPU usage was not recorded, so the CPU time reported in -the "top" screen-shot includes XACML-PDP start-up time as well as requests that were -executed before the stability test was started. Nevertheless, even including that, we find: + 2020-10-23 19:44:31,515 INFO o.a.j.r.Summariser: summary = 1061746369 in 72:00:16 = 4096.0/s Avg: 0 Min: 0 Max: 2584 Err: 0 (0.00%) -.. code-block:: bash +The XACML PDP offered good performance under the traffic mix described above, with JMeter sustaining roughly 4096 requests per second +of injected load. No errors were encountered, and no significant CPU spikes were noted. 
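As a sanity check, the rate in the Summariser line above can be re-derived from the sample count and the 72:00:16 elapsed time quoted in that same line; the one-liner below is purely illustrative.

.. code-block:: bash

    # Cross-check of the Summariser output: 1061746369 samples over
    # 72:00:16 (72*3600 + 16 = 259216 seconds) reproduces the ~4096 requests/second rate.
    awk -v samples=1061746369 -v seconds=259216 \
        'BEGIN { printf "%.1f requests/second\n", samples / seconds }'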
- 13,166 CPU minutes * 60sec/min * 1000ms/sec / 1,256,834,239 requests = 0.63ms/request diff --git a/integration/pom.xml b/integration/pom.xml index 42909692..64012b58 100644 --- a/integration/pom.xml +++ b/integration/pom.xml @@ -27,7 +27,7 @@ <parent> <groupId>org.onap.policy.parent</groupId> <artifactId>policy-parent</artifactId> - <version>3.2.1-SNAPSHOT</version> + <version>3.3.0-SNAPSHOT</version> </parent> <artifactId>integration</artifactId> <packaging>pom</packaging> @@ -28,7 +28,7 @@ </parent> <groupId>org.onap.policy.parent</groupId> <artifactId>policy-parent</artifactId> - <version>3.2.1-SNAPSHOT</version> + <version>3.3.0-SNAPSHOT</version> <packaging>pom</packaging> <properties> diff --git a/version.properties b/version.properties index 962a34da..2640898e 100644 --- a/version.properties +++ b/version.properties @@ -3,8 +3,8 @@ # because they are used in Jenkins, whose plug-in doesn't support major=3 -minor=2 -patch=1 +minor=3 +patch=0 base_version=${major}.${minor}.${patch}
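As a side note on the version bump, version.properties remains shell-readable, so the resulting base_version can be checked locally; the sketch below assumes the file is sourced from the repository root.

.. code-block:: bash

    # Illustrative only: version.properties is shell-readable, so sourcing it
    # composes base_version from the bumped fields (major=3, minor=3, patch=0).
    . ./version.properties
    echo "${base_version}"    # expected output: 3.3.0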