Diffstat (limited to 'docs/development')
-rw-r--r--  docs/development/devtools/images/xacml-s3p-jmeter.png  bin 0 -> 158435 bytes
-rw-r--r--  docs/development/devtools/images/xacml-s3p-top.png     bin 0 -> 63500 bytes
-rw-r--r--  docs/development/devtools/images/xacml-s3p.PNG         bin 114177 -> 0 bytes
-rw-r--r--  docs/development/devtools/xacml-s3p.rst                129
4 files changed, 91 insertions, 38 deletions
diff --git a/docs/development/devtools/images/xacml-s3p-jmeter.png b/docs/development/devtools/images/xacml-s3p-jmeter.png
new file mode 100644
index 00000000..80777570
--- /dev/null
+++ b/docs/development/devtools/images/xacml-s3p-jmeter.png
Binary files differ
diff --git a/docs/development/devtools/images/xacml-s3p-top.png b/docs/development/devtools/images/xacml-s3p-top.png
new file mode 100644
index 00000000..36dc403e
--- /dev/null
+++ b/docs/development/devtools/images/xacml-s3p-top.png
Binary files differ
diff --git a/docs/development/devtools/images/xacml-s3p.PNG b/docs/development/devtools/images/xacml-s3p.PNG
deleted file mode 100644
index 9a1407c6..00000000
--- a/docs/development/devtools/images/xacml-s3p.PNG
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/xacml-s3p.rst b/docs/development/devtools/xacml-s3p.rst
index 5cca4afd..7c29a454 100644
--- a/docs/development/devtools/xacml-s3p.rst
+++ b/docs/development/devtools/xacml-s3p.rst
@@ -8,59 +8,112 @@
:maxdepth: 2
Policy XACML PDP component
-~~~~~~~~~~~~~~~~~~~~~~~~~~
+##########################
+
+Both the Performance and the Stability tests were executed by performing requests
+against the Policy RESTful APIs residing on the XACML PDP installed in the Wind River
+lab. The PDP was running in a Kubernetes pod with the following configuration:
+
+- 16GB RAM
+- 8 VCPU
+- 160GB Disk
+
+Both tests were run via JMeter, which was installed on a separate VM so as not
+to impact the performance of the XACML-PDP being tested.
Performance Test of Policy XACML PDP
-++++++++++++++++++++++++++++++++++++
+************************************
Summary
--------
+=======
-The Performance test was executed by performing requests against the Policy RESTful APIs residing on the XACML PDP installed in the windriver lab to get policy decisions for monitoring and guard policy types. This was running on a kubernetes host having the following configuration:
+The Performance test was executed, and the results analyzed, via:
-- 16GB RAM
-- 8 VCPU
-- 160GB Disk
+.. code-block:: bash
+
+    jmeter -Jduration=1200 -Jusers=10 \
+        -Jxacml_ip=$ip -Jpap_ip=$ip -Japi_ip=$ip \
+        -Jxacml_port=31104 -Jpap_port=32425 -Japi_port=30709 \
+        -n -t perf.jmx
+
+    ./result.sh
+
+Note: the ports listed above are the Kubernetes NodePort values that map to port
+6969 on the respective components.
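+
+As a sketch (assuming an OOM-style Kubernetes deployment in the "onap" namespace;
+the service names are illustrative), the NodePort mappings can be confirmed via:
+
+.. code-block:: bash
+
+    # Hypothetical lookup of the NodePorts that front port 6969 on each component
+    kubectl -n onap get svc policy-xacml-pdp policy-pap policy-api \
+        -o custom-columns=NAME:.metadata.name,NODEPORT:.spec.ports[*].nodePort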
+
+The performance test, perf.jmx, runs the following, all in parallel (a sample
+decision request is sketched after the list):
+
+- Healthcheck, 10 simultaneous threads
+- Statistics, 10 simultaneous threads
+- Decisions, 10 simultaneous threads, each running the following in sequence:
-The performance test runs 10 simultaneous threads calling XACML PDP RESTful APIs to get decisions for Monitoring, Guard Min Max, and Guard Frequency Limiter policy types, with at duration of 6000 seconds. The test execution lasted approximately 50 minutes resulting in the following summary:
+
+  - Monitoring Decision
+  - Monitoring Decision, abbreviated
+  - Naming Decision
+  - Optimization Decision
+  - Default Guard Decision (always "Permit")
+  - Frequency Limiter Guard Decision
+  - Min/Max Guard Decision
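+
+For illustration, a single Monitoring decision call can be reproduced outside of
+JMeter with curl; this is only a sketch, and the credentials and policy-id are
+placeholders, not values taken from the test:
+
+.. code-block:: bash
+
+    # Hypothetical standalone decision request against the XACML PDP decision API
+    curl -k -u "$user:$pass" -X POST \
+        "https://$ip:31104/policy/pdpx/v1/decision" \
+        -H 'Content-Type: application/json' \
+        -d '{"ONAPName": "DCAE", "ONAPComponent": "PolicyHandler",
+             "action": "configure", "resource": {"policy-id": "onap.scaleout.tca"}}'
+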
-- 37,305 Healthcheck requests
-- 33,716 Statistics requests
-- 25,294 Monitoring decision requests
-- 25,288 Guard Min Max decisions
-- 25,286 Guard Frequency Limiter requests
+When the script starts up, it uses policy-api to create, and policy-pap to deploy,
+the policies that are needed by the test. It assumes that the "naming" policy has
+already been created and deployed. Once the test completes, it undeploys and deletes
+the policies that it previously created.
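+
+As a rough sketch of that setup phase (the endpoints follow the policy-api and
+policy-pap conventions; the policy type, file name, and credentials here are
+illustrative):
+
+.. code-block:: bash
+
+    # Hypothetical create via policy-api, then deploy via policy-pap
+    curl -k -u "$user:$pass" -X POST \
+        "https://$ip:30709/policy/api/v1/policytypes/onap.policies.monitoring.tcagen2/versions/1.0.0/policies" \
+        -H 'Content-Type: application/json' -d @monitoring-policy.json
+
+    curl -k -u "$user:$pass" -X POST \
+        "https://$ip:32425/policy/pap/v1/pdps/policies" \
+        -H 'Content-Type: application/json' \
+        -d '{"policies": [{"policy-id": "onap.scaleout.tca", "policy-version": "1.0.0"}]}'
+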
-The average throughput was about 9.8 transactions per second. CPU and memory usage along with a screenshot of the JMeter Summary Report are provided in this document.
+Results
+=======
+
+The test was run for 20 minutes at a time, for different numbers of users (i.e.,
+threads), with the following results:
+
+.. csv-table::
+   :header: "Number of Users", "Throughput (requests/second)", "Average Latency (ms)"
+
+   10, 6064, 4.1
+   20, 6495, 7.2
+   40, 6457, 12.2
+   80, 5803, 21.3
+
+
+Stability Test of Policy XACML PDP
+**********************************
+
+Summary
+=======
+
+The Stability test used the same pods/VMs and the same JMeter script as the
+performance test, except that it ran for 72 hours instead of 20 minutes. In
+addition, it was run in the background via "nohup", to prevent it from being interrupted:
+
+.. code-block:: bash
+
+    nohup jmeter -Jduration=259200 \
+        -Jxacml_ip=$ip -Jpap_ip=$ip -Japi_ip=$ip \
+        -Jxacml_port=31104 -Jpap_port=32425 -Japi_port=30709 \
+        -n -t perf.jmx &
+
+The memory and CPU usage can be monitored by running "top" on the XACML-PDP pod.
+By taking a snapshot before the test is started, and again when it completes, the
+total CPU used by all of the requests can be computed.
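+
+A sketch of that procedure, assuming kubectl access to the pod (the pod name is
+illustrative):
+
+.. code-block:: bash
+
+    # Snapshot "top" inside the XACML-PDP pod before the test starts ...
+    kubectl -n onap exec policy-xacml-pdp-0 -- top -b -n 1 > top-before.txt
+    # ... and again after it completes; the difference in the TIME+ column is
+    # the CPU consumed during the test.
+    kubectl -n onap exec policy-xacml-pdp-0 -- top -b -n 1 > top-after.txt
+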
Results
--------
+=======
+
+The final output of the JMeter script is found in the nohup.out file:
+
+.. image:: images/xacml-s3p-jmeter.png
-**CPU Utilization**
+The final memory and CPU usage from "top":
-Total CPU used by the PDP was measured before and after the test, using "ps -l".
+.. image:: images/xacml-s3p-top.png
-=================== ================== ================ =================== =============== ==================
-**Intial CPU time** **Final CPU time** **Intial CPU %** **Intial Memory %** **Final CPU %** **Final Memory %**
-=================== ================== ================ =================== =============== ==================
-00:60:27 00:73:45 3.5% 4.0% 94.12.3% 4.0%
-=================== ================== ================ =================== =============== ==================
+The throughput reported by JMeter was 4849 requests/second, with 0 errors; over the
+72-hour run, that amounts to roughly 1.26 billion requests. In addition, the memory
+usage observed via "top" indicated that the virtual memory and resident set
+sizes remained virtually unchanged throughout the test.
-**Memory Utilization**
+Unfortunately, the initial CPU usage was not recorded, so the CPU time reported in
+the "top" screenshot includes XACML-PDP startup time as well as requests that were
+executed before the stability test was started. Nevertheless, even including that, we find:
.. code-block:: bash
- Number of young garbage collections used during the test: 518
- Avg. Young garbage collection time: ~11.56ms per collection
- Total number of Full garbage collection: 32
- Avg. Full garbage collection time: ~315.06ms per collection
-
-
- S0C S1C S0U S1U EC EU OC OU MC MU CCSC CCSU YGC YGCT FGC FGCT GCT
-
- 16768.0 16768.0 0.0 5461.0 134144.0 71223.6 334692.0 138734.5 50008.0 48955.8 5760.0 5434.3 4043 45.793 32 10.082 55.875
-
- 16768.0 16768.0 0.0 4993.4 134144.0 66115.7 334692.0 252887.4 50264.0 49036.5 5760.0 5439.7 4561 53.686 32 10.082 63.768
-
-**Jmeter Results Summary**
-
-.. image:: images/xacml-s3p.PNG
+    13,166 CPU minutes * 60 sec/min * 1000 ms/sec / 1,256,834,239 requests = 0.63 ms/request