Diffstat (limited to 'docs/development')
-rw-r--r--  docs/development/devtools/cl-participants-smoke.rst           | 251
-rw-r--r--  docs/development/devtools/devtools.rst                        |   2
-rw-r--r--  docs/development/devtools/json/cl-instantiation.json          |  53
-rw-r--r--  docs/development/devtools/tosca/smoke-test-participants.yaml  | 260
-rw-r--r--  docs/development/devtools/xacml-s3p.rst                       |  45
5 files changed, 590 insertions, 21 deletions
diff --git a/docs/development/devtools/cl-participants-smoke.rst b/docs/development/devtools/cl-participants-smoke.rst
new file mode 100644
index 00000000..202f5d75
--- /dev/null
+++ b/docs/development/devtools/cl-participants-smoke.rst
@@ -0,0 +1,251 @@
+.. This work is licensed under a Creative Commons Attribution 4.0 International License.
+
+.. _clamp-controlloop-participants-smoke-tests:
+
+CLAMP participants (kubernetes, http) Smoke Tests
+-------------------------------------------------
+1. Introduction
+***************
+The CLAMP participants (kubernetes and http) are used to interact with the helm client in a kubernetes environment for the
+deployment of microservices via helm charts, as well as to configure those microservices over REST endpoints. The two participants are
+often used together in the Control Loop workflow.
+
+This document serves as a guide for smoke testing the components involved when working with the participants and outlines how they operate. It also shows a developer how to set up their environment for carrying out smoke tests on these participants.
+
+2. Setup Guide
+**************
+This article assumes that:
+
+* You are using a Linux, macOS, or Windows operating system
+* You are using a directory called *git* off your home directory *(~/git)* for your git repositories
+* Your local maven repository is in the location *~/.m2/repository*
+* You have copied the settings.xml from oparent to *~/.m2/* directory
+* You have added settings to access the ONAP Nexus to your M2 configuration, see `Maven Settings Example <https://wiki.onap.org/display/DW/Setting+Up+Your+Development+Environment>`_ (bottom of the linked page)
+
+The procedure documented in this article has been verified on an Ubuntu 20.04 LTS VM.
+
+2.1 Prerequisites
+=================
+- Java 11
+- Docker
+- Maven 3
+- Git
+- helm3
+- k8s cluster
+- Refer to this guide for basic environment setup `Setting up dev environment <https://wiki.onap.org/display/DW/Setting+Up+Your+Development+Environment>`_
+
+2.2 Assumptions
+===============
+- You are accessing the policy repositories through gerrit.
+
+The following repositories are required for development in this project. These repositories should be present on your machine, and you should run "mvn clean install" on each of them so that the packages are present in your .m2 repository; a build loop sketch follows the list below.
+
+- policy/parent
+- policy/common
+- policy/models
+- policy/clamp
+- policy/docker
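+
+As a convenience, the following sketch builds all five repositories in the order listed. It assumes the repositories are cloned under *~/git/policy* (adjust the path to your own layout) and skips the tests to save time.
+
+.. code-block:: bash
+
+   # Assumed layout: ~/git/policy/<repo> for each cloned repository
+   cd ~/git/policy
+   for repo in parent common models clamp docker; do
+       (cd "$repo" && mvn clean install -DskipTests)
+   done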
+
+In this setup guide, we will be setting up all the components technically required for a working dev environment.
+
+2.3 Setting up the components
+=============================
+
+2.3.1 MariaDB Setup
+^^^^^^^^^^^^^^^^^^^
+We will be using Docker to run our mariadb instance. It will have the runtime-controlloop database running in it.
+
+- controlloop: the runtime-controlloop db
+
+The easiest way to do this is to perform a small alteration on an SQL script provided by the clamp backend in the file "runtime/extra/sql/bulkload/create-db.sql"
+
+.. code-block:: mysql
+
+ CREATE DATABASE `controlloop`;
+ USE `controlloop`;
+ DROP USER 'policy';
+ CREATE USER 'policy';
+ GRANT ALL on controlloop.* to 'policy' identified by 'P01icY' with GRANT OPTION;
+
+Once this has been done, we can run the bash script provided here: "runtime/extra/bin-for-dev/start-db.sh"
+
+.. code-block:: bash
+
+ ./start-db.sh
+
+This will set up the Control Loop runtime database. The database will be exposed locally on port 3306 and will be backed by an anonymous docker volume.
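+
+To check that the database came up correctly, you can connect with a mysql client. This is a sketch; the container name filter and the *policy/P01icY* credentials are taken from the SQL script above and may differ in your setup.
+
+.. code-block:: bash
+
+   # Confirm the mariadb container is running
+   docker ps | grep mariadb
+
+   # Connect with the credentials created above and list the tables
+   mysql -h 127.0.0.1 -P 3306 -upolicy -pP01icY controlloop -e "SHOW TABLES;"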
+
+2.3.2 DMAAP Simulator
+^^^^^^^^^^^^^^^^^^^^^
+For convenience, a DMaaP simulator has been provided in the policy/models repository. To start the simulator, you can do the following:
+
+1. Navigate to /models-sim/policy-models-simulators in the policy/models repository.
+2. Add a configuration file to src/test/resources with the following contents:
+
+.. code-block:: json
+
+ {
+ "dmaapProvider":{
+ "name":"DMaaP simulator",
+ "topicSweepSec":900
+ },
+ "restServers":[
+ {
+ "name":"DMaaP simulator",
+ "providerClass":"org.onap.policy.models.sim.dmaap.rest.DmaapSimRestControllerV1",
+ "host":"localhost",
+ "port":3904,
+ "https":false
+ }
+ ]
+ }
+
+3. You can then start the DMaaP simulator with:
+
+.. code-block:: bash
+
+ mvn exec:java -Dexec.mainClass=org.onap.policy.models.simulators.Main -Dexec.args="src/test/resources/YOUR_CONF_FILE.json"
+
+At this stage, the DMaaP simulator should be running on your local machine on port 3904.
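+
+A quick way to confirm the simulator is up is to poll a topic over the standard DMaaP REST API; the simulator creates topics on first use, so the topic name below is only an example. An empty JSON array ([]) in the response indicates the simulator is reachable.
+
+.. code-block:: bash
+
+   curl "http://localhost:3904/events/POLICY-CLRUNTIME-PARTICIPANT/group1/id1?timeout=1000"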
+
+
+2.3.3 Controlloop Runtime
+^^^^^^^^^^^^^^^^^^^^^^^^^
+To start the controlloop runtime service, execute the following maven command from the "runtime-controlloop" directory in the clamp repo. The Control Loop runtime uses the config file "src/main/resources/application.yaml" by default.
+
+.. code-block:: bash
+
+ mvn spring-boot:run
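+
+Once started, you can confirm the runtime is listening by querying the commissioning endpoint. The user, password, and port below are placeholders; use the values configured in application.yaml.
+
+.. code-block:: bash
+
+   # Substitute the user, password and port from application.yaml
+   curl -u '<user>:<password>' "http://localhost:<port>/onap/controlloop/v2/commission"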
+
+2.3.4 Helm chart repository
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The kubernetes participant consumes helm charts from the local chart database as well as from a helm repository. For smoke testing, we are going to add the `nginx-stable` helm repository to the helm client.
+The following command adds the nginx repository to the helm client.
+
+.. code-block:: bash
+
+ helm repo add nginx-stable https://helm.nginx.com/stable
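+
+To confirm the repository was added and the chart is visible to the helm client:
+
+.. code-block:: bash
+
+   helm repo update
+   helm search repo nginx-stable/nginx-ingress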
+
+2.3.5 Kubernetes and http participants
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The participants can be started from the clamp repository by executing the following maven command from the appropriate directories.
+The participants will start up and register with the Control Loop runtime.
+
+Navigate to the directory "participant/participant-impl/participant-impl-kubernetes/" and start the kubernetes participant.
+
+.. code-block:: bash
+
+ mvn spring-boot:run
+
+Navigate to the directory "participant/participant-impl/participant-impl-http/" and start the http participant.
+
+.. code-block:: bash
+
+ mvn spring-boot:run
+
+
+3. Running Tests
+****************
+In this section, we will run through the sequence of steps in the Control Loop workflow. The workflow can be triggered via the Postman client.
+
+3.1 Commissioning
+=================
+Commission the Control Loop TOSCA definitions to the runtime.
+
+The Control Loop definitions are commissioned to the CL runtime, which populates the CL runtime database.
+The following sample TOSCA template is commissioned to the runtime endpoint. It contains definitions for the kubernetes participant, which deploys the nginx ingress microservice
+helm chart, and an HTTP POST request for the http participant.
+
+:download:`Tosca Service Template <tosca/smoke-test-participants.yaml>`
+
+Commissioning Endpoint:
+
+.. code-block:: bash
+
+   POST: https://<CL Runtime IP>:<Port>/onap/controlloop/v2/commission
+
+A successful commissioning returns a 200 response in the Postman client.
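+
+If you prefer the command line to Postman, the same request can be sent with curl. This is a sketch; the credentials are placeholders, and it assumes the endpoint accepts a YAML body with the Content-Type header shown.
+
+.. code-block:: bash
+
+   # -k allows a self-signed certificate if the runtime is served over https
+   curl -k -u '<user>:<password>' -X POST \
+        -H "Content-Type: application/yaml" \
+        --data-binary @smoke-test-participants.yaml \
+        "https://<CL Runtime IP>:<Port>/onap/controlloop/v2/commission"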
+
+
+3.2 Create New Instances of Control Loops
+=========================================
+Once the template is commissioned, we can instantiate Control Loop instances. This will create the instances with the default state "UNINITIALISED".
+
+Instantiation Endpoint:
+
+.. code-block:: bash
+
+   POST: https://<CL Runtime IP>:<Port>/onap/controlloop/v2/instantiation
+
+Request body:
+
+:download:`Instantiation json <json/cl-instantiation.json>`
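+
+The equivalent curl sketch, again with placeholder credentials:
+
+.. code-block:: bash
+
+   curl -k -u '<user>:<password>' -X POST \
+        -H "Content-Type: application/json" \
+        -d @cl-instantiation.json \
+        "https://<CL Runtime IP>:<Port>/onap/controlloop/v2/instantiation"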
+
+3.3 Change the State of the Instance
+====================================
+When the Control Loop is updated with the state "PASSIVE", the kubernetes participant fetches the node template for all control loop elements and deploys the helm chart of each CL element into the cluster. The following sample json input is passed in the request body.
+
+Control Loop Update Endpoint:
+
+.. code-block:: bash
+
+   PUT: https://<CL Runtime IP>:<Port>/onap/controlloop/v2/instantiation/command
+
+Request body:
+
+.. code-block:: json
+
+ {
+ "orderedState": "PASSIVE",
+ "controlLoopIdentifierList": [
+ {
+ "name": "K8SInstance0",
+ "version": "1.0.1"
+ }
+ ]
+ }
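+
+The same state change can be issued with curl; credentials are placeholders as before.
+
+.. code-block:: bash
+
+   curl -k -u '<user>:<password>' -X PUT \
+        -H "Content-Type: application/json" \
+        -d '{"orderedState": "PASSIVE", "controlLoopIdentifierList": [{"name": "K8SInstance0", "version": "1.0.1"}]}' \
+        "https://<CL Runtime IP>:<Port>/onap/controlloop/v2/instantiation/command"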
+
+
+After the state changes to "PASSIVE", the nginx-ingress pod is deployed in the kubernetes cluster, and the http participant should have posted the dummy data to the URL configured in the tosca template.
+
+The following commands can be used to verify that the pods were deployed successfully by the kubernetes participant.
+
+.. code-block:: bash
+
+ helm ls -n onap | grep nginx
+ kubectl get po -n onap | grep nginx
+
+The overall state of the control loop should be "PASSIVE", indicating that both participants have successfully completed their operations. This can be verified with the following REST endpoint.
+
+Verify control loop state:
+
+.. code-block:: bash
+
+   GET: https://<CL Runtime IP>:<Port>/onap/controlloop/v2/instantiation
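+
+If jq is available, the name and state of each instance can be pulled straight out of the response; the field names below match the instantiation JSON used earlier.
+
+.. code-block:: bash
+
+   curl -k -s -u '<user>:<password>' "https://<CL Runtime IP>:<Port>/onap/controlloop/v2/instantiation" \
+        | jq '.controlLoopList[] | {name, state}'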
+
+
+3.4 Control Loop can be "UNINITIALISED" after deployment
+========================================================
+
+By changing the state to "UNINITIALISED", all the helm deployments under the corresponding control loop will be uninstalled from the cluster.
+
+Control Loop Update Endpoint:
+
+.. code-block:: bash
+
+   PUT: https://<CL Runtime IP>:<Port>/onap/controlloop/v2/instantiation/command
+
+Request body:
+
+.. code-block:: json
+
+ {
+ "orderedState": "UNINITIALISED",
+ "controlLoopIdentifierList": [
+ {
+ "name": "K8SInstance0",
+ "version": "1.0.1"
+ }
+ ]
+ }
+
+The nginx pod should be deleted from the k8s cluster.
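+
+The same checks used after deployment can confirm the uninstall; both commands should now return no output.
+
+.. code-block:: bash
+
+   helm ls -n onap | grep nginx
+   kubectl get po -n onap | grep nginx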
+
+This concludes the required smoke tests for http and kubernetes participants.
diff --git a/docs/development/devtools/devtools.rst b/docs/development/devtools/devtools.rst
index e9a9a00c..cc973ed6 100644
--- a/docs/development/devtools/devtools.rst
+++ b/docs/development/devtools/devtools.rst
@@ -279,6 +279,8 @@ familiar with the Policy Framework components and test any local changes.
policy-gui-controlloop-smoke.rst
db-migrator-smoke.rst
+
+ cl-participants-smoke.rst
..
api-smoke.rst
diff --git a/docs/development/devtools/json/cl-instantiation.json b/docs/development/devtools/json/cl-instantiation.json
new file mode 100644
index 00000000..66197860
--- /dev/null
+++ b/docs/development/devtools/json/cl-instantiation.json
@@ -0,0 +1,53 @@
+{
+ "controlLoopList": [
+ {
+ "name": "K8SInstance0",
+ "version": "1.0.1",
+ "definition": {
+ "name": "org.onap.domain.sample.GenericK8s_ControlLoopDefinition",
+ "version": "1.2.3"
+ },
+ "state": "UNINITIALISED",
+ "orderedState": "UNINITIALISED",
+ "description": "K8s control loop instance 0",
+ "elements": {
+ "709c62b3-8918-41b9-a747-d21eb79c6c21": {
+ "id": "709c62b3-8918-41b9-a747-d21eb79c6c21",
+ "definition": {
+ "name": "org.onap.domain.database.Local_K8SMicroserviceControlLoopElement",
+ "version": "1.2.3"
+ },
+ "participantId": {
+ "name": "K8sParticipant0",
+ "version": "1.0.0"
+ },
+ "participantType": {
+ "name": "org.onap.k8s.controlloop.K8SControlLoopParticipant",
+ "version": "2.3.4"
+ },
+ "state": "UNINITIALISED",
+ "orderedState": "UNINITIALISED",
+ "description": "K8s Control Loop Element for the nginx-ingress microservice"
+ },
+ "709c62b3-8918-41b9-a747-d21eb79c6c22": {
+ "id": "709c62b3-8918-41b9-a747-d21eb79c6c22",
+ "definition": {
+ "name": "org.onap.domain.database.Http_MicroserviceControlLoopElement",
+ "version": "1.2.3"
+ },
+ "participantId": {
+ "name": "HttpParticipant0",
+ "version": "1.0.0"
+ },
+ "participantType": {
+ "name": "org.onap.k8s.controlloop.HttpControlLoopParticipant",
+ "version": "2.3.4"
+ },
+ "state": "UNINITIALISED",
+ "orderedState": "UNINITIALISED",
+ "description": "Http Control Loop Element"
+ }
+ }
+ }
+ ]
+} \ No newline at end of file
diff --git a/docs/development/devtools/tosca/smoke-test-participants.yaml b/docs/development/devtools/tosca/smoke-test-participants.yaml
new file mode 100644
index 00000000..a10e05ec
--- /dev/null
+++ b/docs/development/devtools/tosca/smoke-test-participants.yaml
@@ -0,0 +1,260 @@
+tosca_definitions_version: tosca_simple_yaml_1_3
+data_types:
+ onap.datatypes.ToscaConceptIdentifier:
+ derived_from: tosca.datatypes.Root
+ properties:
+ name:
+ type: string
+ required: true
+ version:
+ type: string
+ required: true
+ org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.RestRequest:
+ version: 1.0.0
+ derived_from: tosca.datatypes.Root
+ properties:
+ restRequestId:
+ type: onap.datatypes.ToscaConceptIdentifier
+ typeVersion: 1.0.0
+ required: true
+ description: The name and version of a REST request to be sent to a REST endpoint
+ httpMethod:
+ type: string
+ required: true
+ constraints:
+ - valid_values: [POST, PUT, GET, DELETE]
+ description: The REST method to use
+ path:
+ type: string
+ required: true
+ description: The path of the REST request relative to the base URL
+ body:
+ type: string
+ required: false
+ description: The body of the REST request for PUT and POST requests
+ expectedResponse:
+ type: integer
+ required: true
+ constraints:
+ - in_range: [100, 599]
+        description: The expected HTTP status code for the REST request
+ org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.ConfigurationEntity:
+ version: 1.0.0
+ derived_from: tosca.datatypes.Root
+ properties:
+ configurationEntityId:
+ type: onap.datatypes.ToscaConceptIdentifier
+ typeVersion: 1.0.0
+ required: true
+ description: The name and version of a Configuration Entity to be handled by the HTTP Control Loop Element
+ restSequence:
+ type: list
+ entry_schema:
+ type: org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.RestRequest
+ typeVersion: 1.0.0
+ description: A sequence of REST commands to send to the REST endpoint
+node_types:
+ org.onap.policy.clamp.controlloop.Participant:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+ required: false
+ org.onap.policy.clamp.controlloop.ControlLoopElement:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+        required: false
+ participant_id:
+ type: onap.datatypes.ToscaConceptIdentifier
+        required: true
+ participantType:
+ type: onap.datatypes.ToscaConceptIdentifier
+ required: true
+ metadata:
+ common: true
+ description: The identity of the participant type that hosts this type of Control Loop Element
+ startPhase:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ metadata:
+ common: true
+        description: A value indicating the start phase in which this control loop element will be started; the
+ first start phase is zero. Control Loop Elements are started in their start_phase order and stopped
+ in reverse start phase order. Control Loop Elements with the same start phase are started and
+ stopped simultaneously
+ uninitializedToPassiveTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+        description: The maximum time in seconds to wait for a state change from uninitialized to passive
+ passiveToRunningTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+        description: The maximum time in seconds to wait for a state change from passive to running
+ runningToPassiveTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+        description: The maximum time in seconds to wait for a state change from running to passive
+ passiveToUninitializedTimeout:
+ type: integer
+ required: false
+ constraints:
+ - greater_or_equal: 0
+ default: 60
+ metadata:
+ common: true
+        description: The maximum time in seconds to wait for a state change from passive to uninitialized
+ org.onap.policy.clamp.controlloop.ControlLoop:
+ version: 1.0.1
+ derived_from: tosca.nodetypes.Root
+ properties:
+ provider:
+ type: string
+        required: false
+ elements:
+ type: list
+ required: true
+ entry_schema:
+ type: onap.datatypes.ToscaConceptIdentifier
+ org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.ControlLoopElement
+ properties:
+ chart:
+ type: string
+ required: true
+ configs:
+ type: list
+ required: false
+ requirements:
+ type: string
+        required: false
+ templates:
+ type: list
+ required: false
+ entry_schema:
+ values:
+ type: string
+ required: true
+ org.onap.policy.clamp.controlloop.HttpControlLoopElement:
+ version: 1.0.1
+ derived_from: org.onap.policy.clamp.controlloop.ControlLoopElement
+ properties:
+ baseUrl:
+ type: string
+ required: true
+        description: The base URL to be prepended to each path; it identifies the host for the REST endpoints.
+ httpHeaders:
+ type: map
+ required: false
+ entry_schema:
+ type: string
+ description: HTTP headers to send on REST requests
+ configurationEntities:
+ type: map
+ required: true
+ entry_schema:
+ type: org.onap.datatypes.policy.clamp.controlloop.httpControlLoopElement.ConfigurationEntity
+ typeVersion: 1.0.0
+        description: The configuration entities the Control Loop Element is managing and their associated REST requests
+topology_template:
+ node_templates:
+ org.onap.k8s.controlloop.K8SControlLoopParticipant:
+ version: 2.3.4
+ type: org.onap.policy.clamp.controlloop.Participant
+ type_version: 1.0.1
+ description: Participant for K8S
+ properties:
+ provider: ONAP
+ org.onap.domain.database.Local_K8SMicroserviceControlLoopElement:
+ # Chart installation without passing repository info
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.K8SMicroserviceControlLoopElement
+      type_version: 1.0.1
+ description: Control loop element for the K8S microservice for local chart
+ properties:
+ provider: ONAP
+ participant_id:
+ name: K8sParticipant0
+ version: 1.0.0
+ participantType:
+ name: org.onap.k8s.controlloop.K8SControlLoopParticipant
+ version: 2.3.4
+ chart:
+ chartId:
+ name: nginx-ingress
+ version: 0.11.0
+ releaseName: nginxapp
+ namespace: onap
+ org.onap.controlloop.HttpControlLoopParticipant:
+ version: 2.3.4
+ type: org.onap.policy.clamp.controlloop.Participant
+ type_version: 1.0.1
+ description: Participant for Http requests
+ properties:
+ provider: ONAP
+
+ org.onap.domain.database.Http_MicroserviceControlLoopElement:
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.HttpControlLoopElement
+ type_version: 1.0.1
+ description: Control loop element for the http requests of PMSH microservice
+ properties:
+ provider: ONAP
+ participant_id:
+ name: HttpParticipant0
+ version: 1.0.0
+ participantType:
+ name: org.onap.k8s.controlloop.HttpControlLoopParticipant
+ version: 2.3.4
+ uninitializedToPassiveTimeout: 180
+ startPhase: 1
+ baseUrl: http://httpbin.org
+ httpHeaders:
+ Content-Type: application/json
+ configurationEntities:
+ - configurationEntityId:
+ name: entity1
+ version: 1.0.1
+ restSequence:
+ - restRequestId:
+ name: request1
+ version: 1.0.1
+ httpMethod: POST
+ path: post
+ body: 'Dummy data for smoke testing'
+ expectedResponse: 200
+
+
+ org.onap.domain.sample.GenericK8s_ControlLoopDefinition:
+ version: 1.2.3
+ type: org.onap.policy.clamp.controlloop.ControlLoop
+      type_version: 1.0.1
+ description: Control loop for Hello World
+ properties:
+ provider: ONAP
+ elements:
+ - name: org.onap.domain.database.Local_K8SMicroserviceControlLoopElement
+ version: 1.2.3
+ - name: org.onap.domain.database.Http_MicroserviceControlLoopElement
+ version: 1.2.3
diff --git a/docs/development/devtools/xacml-s3p.rst b/docs/development/devtools/xacml-s3p.rst
index 1411f90b..7ef035d5 100644
--- a/docs/development/devtools/xacml-s3p.rst
+++ b/docs/development/devtools/xacml-s3p.rst
@@ -17,10 +17,11 @@ against the Policy RESTful APIs residing on the XACML PDP installed on a Cloud b
VM Configuration:
- 16GB RAM
-- 8 VCPU
-- 1TB Disk
+- 4 VCPU
+- 40GB Disk
-ONAP was deployed using a K8s Configuration on a separate VM.
+ONAP was deployed using a K8s Configuration on the same VM.
+Running jmeter and ONAP OOM on the same VM may adversely impact the performance of the XACML-PDP being tested.
Summary
=======
@@ -31,7 +32,7 @@ The Performance test was executed, and the result analyzed, via:
jmeter -Jduration=1200 -Jusers=10 \
-Jxacml_ip=$ip -Jpap_ip=$ip -Japi_ip=$ip \
- -Jxacml_port=31104 -Jpap_port=32425 -Japi_port=30709 \
+ -Jxacml_port=30111 -Jpap_port=30197 -Japi_port=30664 \
-n -t perf.jmx -l testresults.jtl
Note: the ports listed above correspond to port 6969 of the respective components.
@@ -64,31 +65,33 @@ threads), with the following results:
.. csv-table::
:header: "Number of Users", "Throughput (requests/second)", "Average Latency (ms)"
- 10, 8929, 3.10
- 20, 10827, 5.05
- 40, 11800, 9.35
- 80, 11750, 18.62
+ 10, 309.919, 5.83457
+ 20, 2527.73, 22.2634
+ 40, 3184.78, 35.1173
+ 80, 3677.35, 60.2893
Stability Test of Policy XACML PDP
************************************
The stability test was executed by performing requests
-against the Policy RESTful APIs residing on the XACML PDP installed in the windriver
+against the Policy RESTful APIs residing on the XACML PDP installed in the City Cloud
lab. This was running on a kubernetes pod having the following configuration:
- 16GB RAM
-- 8 VCPU
-- 160GB Disk
+- 4 VCPU
+- 40GB Disk
-The test was run via jmeter, which was installed on a separate VM so-as not
-to impact the performance of the XACML-PDP being tested.
+The test was run via jmeter, which was installed on the same VM.
+Running jmeter and ONAP OOM on the same VM may adversely impact the performance of the XACML-PDP being tested.
+Due to the minimal nature of this setup, the K8S cluster became overloaded on a couple of occasions during the test.
+This resulted in a small number of errors and a greater maximum transaction time than normal.
Summary
=======
-The stability test was performed on a default ONAP OOM installation in the Intel Wind River Lab environment.
-JMeter was installed on a separate VM to inject the traffic defined in the
+The stability test was performed on a default ONAP OOM installation in the City Cloud lab environment.
+JMeter was installed on the same VM to inject the traffic defined in the
`XACML PDP stability script
<https://git.onap.org/policy/xacml-pdp/tree/testsuites/stability/src/main/resources/testplans/stability.jmx>`_
with the following command:
@@ -96,14 +99,14 @@ with the following command:
.. code-block:: bash
jmeter.sh -Jduration=259200 -Jusers=2 -Jxacml_ip=$ip -Jpap_ip=$ip -Japi_ip=$ip \
- -Jxacml_port=31104 -Jpap_port=32425 -Japi_port=30709 --nongui --testfile stability.jmx
+ -Jxacml_port=30111 -Jpap_port=30197 -Japi_port=30664 --nongui --testfile stability.jmx
Note: the ports listed above correspond to port 6969 of the respective components.
The default log level of the root and org.eclipse.jetty.server.RequestLog loggers in the logback.xml
of the XACML PDP
(om/kubernetes/policy/components/policy-xacml-pdp/resources/config/logback.xml)
-was set to ERROR since the OOM installation did not have log rotation enabled of the
+was set to WARN since the OOM installation did have log rotation enabled for the
container logs in the kubernetes worker nodes.
The stability test, stability.jmx, runs the following, all in parallel:
@@ -131,9 +134,9 @@ The stability summary results were reported by JMeter with the following summary
.. code-block:: bash
- summary = 207771010 in 72:00:01 = 801.6/s Avg: 6 Min: 0 Max: 411 Err: 0 (0.00%)
+ summary = 222450112 in 72:00:39 = 858.1/s Avg: 5 Min: 1 Max: 946942 Err: 17 (0.00%)
-The XACML PDP offered good performance with JMeter for the traffic mix described above, using 801 threads per second
-to inject the traffic load. No errors were encountered, and no significant CPU spikes were noted.
-The average transaction time was 6ms. with a maximum of 411ms.
+The XACML PDP offered good performance with JMeter for the traffic mix described above, sustaining 858 transactions
+per second of injected traffic. A small number of errors were encountered, and no significant CPU spikes were noted.
+The average transaction time was 5ms, with a maximum of 946942ms.