From cf2686107161c40c5e39ad6f3b3f488b3ae7be4e Mon Sep 17 00:00:00 2001
From: "adheli.tavares"
Date: Tue, 4 Apr 2023 16:29:21 +0100
Subject: Restructure devtools folder - s3p tests documentation

Issue-ID: POLICY-4583
Change-Id: I81fe30f4c083579263db0b9e663953bdc3ecb643
Signed-off-by: adheli.tavares
---
 .../apex-s3p-results/apex_metrics_after_72h.txt | 316 -----------------
 .../apex-s3p-results/apex_metrics_before_72h.txt | 175 ---------
 .../apex-s3p-results/apex_perf_jmeter_results.png | Bin 110730 -> 0 bytes
 .../apex_stability_jmeter_results.png | Bin 109911 -> 0 bytes
 .../apex-s3p-results/apex_top_after_72h.png | Bin 76131 -> 0 bytes
 .../apex-s3p-results/apex_top_before_72h.png | Bin 74785 -> 0 bytes
 docs/development/devtools/apex-s3p.rst | 258 --------------
 .../api-response-time-distribution_J.png | Bin 189340 -> 0 bytes
 ...pi-response-time-distribution_performance_J.png | Bin 217155 -> 0 bytes
 .../api-response-time-overtime_J.png | Bin 417371 -> 0 bytes
 .../api-response-time-overtime_performance_J.png | Bin 434459 -> 0 bytes
 .../devtools/api-s3p-results/api-s3p-jm-1_J.png | Bin 267889 -> 0 bytes
 .../devtools/api-s3p-results/api-s3p-jm-2_J.png | Bin 256815 -> 0 bytes
 .../devtools/api-s3p-results/api_top_after_72h.png | Bin 43519 -> 0 bytes
 .../api-s3p-results/api_top_before_72h.png | Bin 41751 -> 0 bytes
 docs/development/devtools/api-s3p.rst | 211 -----------
 .../clamp-s3p-results/Stability_after_stats.png | Bin 123032 -> 0 bytes
 .../clamp-s3p-results/acm_performance_jmeter.png | Bin 229066 -> 0 bytes
 .../clamp-s3p-results/acm_stability_jmeter.png | Bin 218877 -> 0 bytes
 .../clamp-s3p-results/acm_stability_table.png | Bin 454197 -> 0 bytes
 docs/development/devtools/clamp-s3p.rst | 257 --------------
 docs/development/devtools/devtools.rst | 74 ++--
 .../distribution-jmeter-testcases.png | Bin 57822 -> 0 bytes
 .../distribution-visualvm-snapshot.png | Bin 28049 -> 0 bytes
 .../performance-monitor.png | Bin 136960 -> 0 bytes
 .../performance-statistics.png | Bin 238616 -> 0 bytes
 .../performance-threads.png | Bin 197890 -> 0 bytes
 .../performance-threshold.png | Bin 77349 -> 0 bytes
 .../distribution-s3p-results/stability-monitor.png | Bin 101015 -> 0 bytes
 .../stability-statistics.png | Bin 247554 -> 0 bytes
 .../distribution-s3p-results/stability-threads.png | Bin 202963 -> 0 bytes
 .../stability-threshold.png | Bin 71809 -> 0 bytes
 docs/development/devtools/distribution-s3p.rst | 389 ---------------------
 docs/development/devtools/drools-s3p.rst | 74 ----
 docs/development/devtools/images/s3p-drools-1.png | Bin 302657 -> 0 bytes
 docs/development/devtools/images/s3p-drools-2.png | Bin 216610 -> 0 bytes
 docs/development/devtools/images/s3p-drools-3.png | Bin 141505 -> 0 bytes
 docs/development/devtools/images/s3p-drools-4.png | Bin 200544 -> 0 bytes
 .../development/devtools/images/s3p-perf-xacml.png | Bin 71291 -> 0 bytes
 .../pap-s3p-results/pap_metrics_after_72h.txt | 306 ----------------
 .../pap-s3p-results/pap_metrics_before_72h.txt | 225 ------------
 .../pap_performance_jmeter_results.png | Bin 169700 -> 0 bytes
 .../pap_stability_jmeter_results.png | Bin 207280 -> 0 bytes
 .../devtools/pap-s3p-results/pap_top_after_72h.png | Bin 43095 -> 0 bytes
 .../pap-s3p-results/pap_top_before_72h.png | Bin 42430 -> 0 bytes
 docs/development/devtools/pap-s3p.rst | 198 -----------
 docs/development/devtools/run-s3p.rst | 52 ---
 .../apex-s3p-results/apex_metrics_after_72h.txt | 316 +++++++++++++++++
 .../apex-s3p-results/apex_metrics_before_72h.txt | 175 +++++++++
 .../apex-s3p-results/apex_perf_jmeter_results.png
| Bin 0 -> 110730 bytes .../apex_stability_jmeter_results.png | Bin 0 -> 109911 bytes .../s3p/apex-s3p-results/apex_top_after_72h.png | Bin 0 -> 76131 bytes .../s3p/apex-s3p-results/apex_top_before_72h.png | Bin 0 -> 74785 bytes docs/development/devtools/testing/s3p/apex-s3p.rst | 258 ++++++++++++++ .../api-response-time-distribution_J.png | Bin 0 -> 189340 bytes ...pi-response-time-distribution_performance_J.png | Bin 0 -> 217155 bytes .../api-response-time-overtime_J.png | Bin 0 -> 417371 bytes .../api-response-time-overtime_performance_J.png | Bin 0 -> 434459 bytes .../testing/s3p/api-s3p-results/api-s3p-jm-1_J.png | Bin 0 -> 267889 bytes .../testing/s3p/api-s3p-results/api-s3p-jm-2_J.png | Bin 0 -> 256815 bytes .../s3p/api-s3p-results/api_top_after_72h.png | Bin 0 -> 43519 bytes .../s3p/api-s3p-results/api_top_before_72h.png | Bin 0 -> 41751 bytes docs/development/devtools/testing/s3p/api-s3p.rst | 211 +++++++++++ .../clamp-s3p-results/Stability_after_stats.png | Bin 0 -> 123032 bytes .../clamp-s3p-results/acm_performance_jmeter.png | Bin 0 -> 229066 bytes .../s3p/clamp-s3p-results/acm_stability_jmeter.png | Bin 0 -> 218877 bytes .../s3p/clamp-s3p-results/acm_stability_table.png | Bin 0 -> 454197 bytes .../development/devtools/testing/s3p/clamp-s3p.rst | 257 ++++++++++++++ .../distribution-jmeter-testcases.png | Bin 0 -> 57822 bytes .../distribution-visualvm-snapshot.png | Bin 0 -> 28049 bytes .../performance-monitor.png | Bin 0 -> 136960 bytes .../performance-statistics.png | Bin 0 -> 238616 bytes .../performance-threads.png | Bin 0 -> 197890 bytes .../performance-threshold.png | Bin 0 -> 77349 bytes .../distribution-s3p-results/stability-monitor.png | Bin 0 -> 101015 bytes .../stability-statistics.png | Bin 0 -> 247554 bytes .../distribution-s3p-results/stability-threads.png | Bin 0 -> 202963 bytes .../stability-threshold.png | Bin 0 -> 71809 bytes .../devtools/testing/s3p/distribution-s3p.rst | 389 +++++++++++++++++++++ .../s3p/drools-s3p-results/s3p-drools-1.png | Bin 0 -> 302657 bytes .../s3p/drools-s3p-results/s3p-drools-2.png | Bin 0 -> 216610 bytes .../s3p/drools-s3p-results/s3p-drools-3.png | Bin 0 -> 141505 bytes .../s3p/drools-s3p-results/s3p-drools-4.png | Bin 0 -> 200544 bytes .../devtools/testing/s3p/drools-s3p.rst | 74 ++++ .../s3p/pap-s3p-results/pap_metrics_after_72h.txt | 306 ++++++++++++++++ .../s3p/pap-s3p-results/pap_metrics_before_72h.txt | 225 ++++++++++++ .../pap_performance_jmeter_results.png | Bin 0 -> 169700 bytes .../pap_stability_jmeter_results.png | Bin 0 -> 207280 bytes .../s3p/pap-s3p-results/pap_top_after_72h.png | Bin 0 -> 43095 bytes .../s3p/pap-s3p-results/pap_top_before_72h.png | Bin 0 -> 42430 bytes docs/development/devtools/testing/s3p/pap-s3p.rst | 198 +++++++++++ docs/development/devtools/testing/s3p/run-s3p.rst | 52 +++ .../s3p/xacml-s3p-results/s3p-perf-xacml.png | Bin 0 -> 71291 bytes .../development/devtools/testing/s3p/xacml-s3p.rst | 134 +++++++ docs/development/devtools/xacml-s3p.rst | 134 ------- 95 files changed, 2640 insertions(+), 2624 deletions(-) delete mode 100644 docs/development/devtools/apex-s3p-results/apex_metrics_after_72h.txt delete mode 100644 docs/development/devtools/apex-s3p-results/apex_metrics_before_72h.txt delete mode 100644 docs/development/devtools/apex-s3p-results/apex_perf_jmeter_results.png delete mode 100644 docs/development/devtools/apex-s3p-results/apex_stability_jmeter_results.png delete mode 100644 docs/development/devtools/apex-s3p-results/apex_top_after_72h.png delete mode 100644 
docs/development/devtools/apex-s3p-results/apex_top_before_72h.png delete mode 100644 docs/development/devtools/apex-s3p.rst delete mode 100644 docs/development/devtools/api-s3p-results/api-response-time-distribution_J.png delete mode 100644 docs/development/devtools/api-s3p-results/api-response-time-distribution_performance_J.png delete mode 100644 docs/development/devtools/api-s3p-results/api-response-time-overtime_J.png delete mode 100644 docs/development/devtools/api-s3p-results/api-response-time-overtime_performance_J.png delete mode 100644 docs/development/devtools/api-s3p-results/api-s3p-jm-1_J.png delete mode 100644 docs/development/devtools/api-s3p-results/api-s3p-jm-2_J.png delete mode 100644 docs/development/devtools/api-s3p-results/api_top_after_72h.png delete mode 100644 docs/development/devtools/api-s3p-results/api_top_before_72h.png delete mode 100644 docs/development/devtools/api-s3p.rst delete mode 100644 docs/development/devtools/clamp-s3p-results/Stability_after_stats.png delete mode 100644 docs/development/devtools/clamp-s3p-results/acm_performance_jmeter.png delete mode 100644 docs/development/devtools/clamp-s3p-results/acm_stability_jmeter.png delete mode 100644 docs/development/devtools/clamp-s3p-results/acm_stability_table.png delete mode 100644 docs/development/devtools/clamp-s3p.rst delete mode 100644 docs/development/devtools/distribution-s3p-results/distribution-jmeter-testcases.png delete mode 100644 docs/development/devtools/distribution-s3p-results/distribution-visualvm-snapshot.png delete mode 100644 docs/development/devtools/distribution-s3p-results/performance-monitor.png delete mode 100644 docs/development/devtools/distribution-s3p-results/performance-statistics.png delete mode 100644 docs/development/devtools/distribution-s3p-results/performance-threads.png delete mode 100644 docs/development/devtools/distribution-s3p-results/performance-threshold.png delete mode 100644 docs/development/devtools/distribution-s3p-results/stability-monitor.png delete mode 100644 docs/development/devtools/distribution-s3p-results/stability-statistics.png delete mode 100644 docs/development/devtools/distribution-s3p-results/stability-threads.png delete mode 100644 docs/development/devtools/distribution-s3p-results/stability-threshold.png delete mode 100644 docs/development/devtools/distribution-s3p.rst delete mode 100644 docs/development/devtools/drools-s3p.rst delete mode 100644 docs/development/devtools/images/s3p-drools-1.png delete mode 100644 docs/development/devtools/images/s3p-drools-2.png delete mode 100644 docs/development/devtools/images/s3p-drools-3.png delete mode 100644 docs/development/devtools/images/s3p-drools-4.png delete mode 100644 docs/development/devtools/images/s3p-perf-xacml.png delete mode 100644 docs/development/devtools/pap-s3p-results/pap_metrics_after_72h.txt delete mode 100644 docs/development/devtools/pap-s3p-results/pap_metrics_before_72h.txt delete mode 100644 docs/development/devtools/pap-s3p-results/pap_performance_jmeter_results.png delete mode 100644 docs/development/devtools/pap-s3p-results/pap_stability_jmeter_results.png delete mode 100644 docs/development/devtools/pap-s3p-results/pap_top_after_72h.png delete mode 100644 docs/development/devtools/pap-s3p-results/pap_top_before_72h.png delete mode 100644 docs/development/devtools/pap-s3p.rst delete mode 100644 docs/development/devtools/run-s3p.rst create mode 100644 docs/development/devtools/testing/s3p/apex-s3p-results/apex_metrics_after_72h.txt create mode 100644 
docs/development/devtools/testing/s3p/apex-s3p-results/apex_metrics_before_72h.txt create mode 100644 docs/development/devtools/testing/s3p/apex-s3p-results/apex_perf_jmeter_results.png create mode 100644 docs/development/devtools/testing/s3p/apex-s3p-results/apex_stability_jmeter_results.png create mode 100644 docs/development/devtools/testing/s3p/apex-s3p-results/apex_top_after_72h.png create mode 100644 docs/development/devtools/testing/s3p/apex-s3p-results/apex_top_before_72h.png create mode 100644 docs/development/devtools/testing/s3p/apex-s3p.rst create mode 100644 docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-distribution_J.png create mode 100644 docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-distribution_performance_J.png create mode 100644 docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-overtime_J.png create mode 100644 docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-overtime_performance_J.png create mode 100644 docs/development/devtools/testing/s3p/api-s3p-results/api-s3p-jm-1_J.png create mode 100644 docs/development/devtools/testing/s3p/api-s3p-results/api-s3p-jm-2_J.png create mode 100644 docs/development/devtools/testing/s3p/api-s3p-results/api_top_after_72h.png create mode 100644 docs/development/devtools/testing/s3p/api-s3p-results/api_top_before_72h.png create mode 100644 docs/development/devtools/testing/s3p/api-s3p.rst create mode 100644 docs/development/devtools/testing/s3p/clamp-s3p-results/Stability_after_stats.png create mode 100644 docs/development/devtools/testing/s3p/clamp-s3p-results/acm_performance_jmeter.png create mode 100644 docs/development/devtools/testing/s3p/clamp-s3p-results/acm_stability_jmeter.png create mode 100644 docs/development/devtools/testing/s3p/clamp-s3p-results/acm_stability_table.png create mode 100644 docs/development/devtools/testing/s3p/clamp-s3p.rst create mode 100644 docs/development/devtools/testing/s3p/distribution-s3p-results/distribution-jmeter-testcases.png create mode 100644 docs/development/devtools/testing/s3p/distribution-s3p-results/distribution-visualvm-snapshot.png create mode 100644 docs/development/devtools/testing/s3p/distribution-s3p-results/performance-monitor.png create mode 100644 docs/development/devtools/testing/s3p/distribution-s3p-results/performance-statistics.png create mode 100644 docs/development/devtools/testing/s3p/distribution-s3p-results/performance-threads.png create mode 100644 docs/development/devtools/testing/s3p/distribution-s3p-results/performance-threshold.png create mode 100644 docs/development/devtools/testing/s3p/distribution-s3p-results/stability-monitor.png create mode 100644 docs/development/devtools/testing/s3p/distribution-s3p-results/stability-statistics.png create mode 100644 docs/development/devtools/testing/s3p/distribution-s3p-results/stability-threads.png create mode 100644 docs/development/devtools/testing/s3p/distribution-s3p-results/stability-threshold.png create mode 100644 docs/development/devtools/testing/s3p/distribution-s3p.rst create mode 100644 docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-1.png create mode 100644 docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-2.png create mode 100644 docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-3.png create mode 100644 docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-4.png create mode 100644 docs/development/devtools/testing/s3p/drools-s3p.rst create mode 
100644 docs/development/devtools/testing/s3p/pap-s3p-results/pap_metrics_after_72h.txt create mode 100644 docs/development/devtools/testing/s3p/pap-s3p-results/pap_metrics_before_72h.txt create mode 100644 docs/development/devtools/testing/s3p/pap-s3p-results/pap_performance_jmeter_results.png create mode 100644 docs/development/devtools/testing/s3p/pap-s3p-results/pap_stability_jmeter_results.png create mode 100644 docs/development/devtools/testing/s3p/pap-s3p-results/pap_top_after_72h.png create mode 100644 docs/development/devtools/testing/s3p/pap-s3p-results/pap_top_before_72h.png create mode 100644 docs/development/devtools/testing/s3p/pap-s3p.rst create mode 100644 docs/development/devtools/testing/s3p/run-s3p.rst create mode 100644 docs/development/devtools/testing/s3p/xacml-s3p-results/s3p-perf-xacml.png create mode 100644 docs/development/devtools/testing/s3p/xacml-s3p.rst delete mode 100644 docs/development/devtools/xacml-s3p.rst (limited to 'docs/development') diff --git a/docs/development/devtools/apex-s3p-results/apex_metrics_after_72h.txt b/docs/development/devtools/apex-s3p-results/apex_metrics_after_72h.txt deleted file mode 100644 index 56f13907..00000000 --- a/docs/development/devtools/apex-s3p-results/apex_metrics_after_72h.txt +++ /dev/null @@ -1,316 +0,0 @@ -# HELP jvm_threads_current Current thread count of a JVM -# TYPE jvm_threads_current gauge -jvm_threads_current 32.0 -# HELP jvm_threads_daemon Daemon thread count of a JVM -# TYPE jvm_threads_daemon gauge -jvm_threads_daemon 17.0 -# HELP jvm_threads_peak Peak thread count of a JVM -# TYPE jvm_threads_peak gauge -jvm_threads_peak 81.0 -# HELP jvm_threads_started_total Started thread count of a JVM -# TYPE jvm_threads_started_total counter -jvm_threads_started_total 423360.0 -# HELP jvm_threads_deadlocked Cycles of JVM-threads that are in deadlock waiting to acquire object monitors or ownable synchronizers -# TYPE jvm_threads_deadlocked gauge -jvm_threads_deadlocked 0.0 -# HELP jvm_threads_deadlocked_monitor Cycles of JVM-threads that are in deadlock waiting to acquire object monitors -# TYPE jvm_threads_deadlocked_monitor gauge -jvm_threads_deadlocked_monitor 0.0 -# HELP jvm_threads_state Current count of threads by state -# TYPE jvm_threads_state gauge -jvm_threads_state{state="BLOCKED",} 0.0 -jvm_threads_state{state="TIMED_WAITING",} 11.0 -jvm_threads_state{state="NEW",} 0.0 -jvm_threads_state{state="RUNNABLE",} 7.0 -jvm_threads_state{state="TERMINATED",} 0.0 -jvm_threads_state{state="WAITING",} 14.0 -# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds. -# TYPE process_cpu_seconds_total counter -process_cpu_seconds_total 16418.06 -# HELP process_start_time_seconds Start time of the process since unix epoch in seconds. -# TYPE process_start_time_seconds gauge -process_start_time_seconds 1.651077494162E9 -# HELP process_open_fds Number of open file descriptors. -# TYPE process_open_fds gauge -process_open_fds 357.0 -# HELP process_max_fds Maximum number of open file descriptors. -# TYPE process_max_fds gauge -process_max_fds 1048576.0 -# HELP process_virtual_memory_bytes Virtual memory size in bytes. -# TYPE process_virtual_memory_bytes gauge -process_virtual_memory_bytes 1.0165403648E10 -# HELP process_resident_memory_bytes Resident memory size in bytes. -# TYPE process_resident_memory_bytes gauge -process_resident_memory_bytes 5.58034944E8 -# HELP pdpa_engine_event_executions Total number of APEX events processed by the engine. 
-# TYPE pdpa_engine_event_executions gauge -pdpa_engine_event_executions{engine_instance_id="NSOApexEngine-1:0.0.1",} 30743.0 -pdpa_engine_event_executions{engine_instance_id="NSOApexEngine-4:0.0.1",} 30766.0 -pdpa_engine_event_executions{engine_instance_id="NSOApexEngine-3:0.0.1",} 30722.0 -pdpa_engine_event_executions{engine_instance_id="NSOApexEngine-0:0.0.1",} 30727.0 -pdpa_engine_event_executions{engine_instance_id="NSOApexEngine-2:0.0.1",} 30742.0 -# HELP jvm_buffer_pool_used_bytes Used bytes of a given JVM buffer pool. -# TYPE jvm_buffer_pool_used_bytes gauge -jvm_buffer_pool_used_bytes{pool="mapped",} 0.0 -jvm_buffer_pool_used_bytes{pool="direct",} 3.3833905E7 -# HELP jvm_buffer_pool_capacity_bytes Bytes capacity of a given JVM buffer pool. -# TYPE jvm_buffer_pool_capacity_bytes gauge -jvm_buffer_pool_capacity_bytes{pool="mapped",} 0.0 -jvm_buffer_pool_capacity_bytes{pool="direct",} 3.3833904E7 -# HELP jvm_buffer_pool_used_buffers Used buffers of a given JVM buffer pool. -# TYPE jvm_buffer_pool_used_buffers gauge -jvm_buffer_pool_used_buffers{pool="mapped",} 0.0 -jvm_buffer_pool_used_buffers{pool="direct",} 15.0 -# HELP pdpa_policy_executions_total The total number of TOSCA policy executions. -# TYPE pdpa_policy_executions_total counter -# HELP pdpa_policy_deployments_total The total number of policy deployments. -# TYPE pdpa_policy_deployments_total counter -pdpa_policy_deployments_total{operation="deploy",status="TOTAL",} 5.0 -pdpa_policy_deployments_total{operation="undeploy",status="TOTAL",} 5.0 -pdpa_policy_deployments_total{operation="undeploy",status="SUCCESS",} 5.0 -pdpa_policy_deployments_total{operation="deploy",status="SUCCESS",} 5.0 -# HELP pdpa_engine_average_execution_time_seconds Average time taken to execute an APEX policy in seconds. -# TYPE pdpa_engine_average_execution_time_seconds gauge -pdpa_engine_average_execution_time_seconds{engine_instance_id="NSOApexEngine-1:0.0.1",} 0.00515235988680349 -pdpa_engine_average_execution_time_seconds{engine_instance_id="NSOApexEngine-4:0.0.1",} 0.00521845543782099 -pdpa_engine_average_execution_time_seconds{engine_instance_id="NSOApexEngine-3:0.0.1",} 0.005200800729119198 -pdpa_engine_average_execution_time_seconds{engine_instance_id="NSOApexEngine-0:0.0.1",} 0.005191785725908804 -pdpa_engine_average_execution_time_seconds{engine_instance_id="NSOApexEngine-2:0.0.1",} 0.0051784854596317684 -# HELP pdpa_engine_state State of the APEX engine as integers mapped as - 0:UNDEFINED, 1:STOPPED, 2:READY, 3:EXECUTING, 4:STOPPING -# TYPE pdpa_engine_state gauge -pdpa_engine_state{engine_instance_id="NSOApexEngine-1:0.0.1",} 1.0 -pdpa_engine_state{engine_instance_id="NSOApexEngine-4:0.0.1",} 1.0 -pdpa_engine_state{engine_instance_id="NSOApexEngine-3:0.0.1",} 1.0 -pdpa_engine_state{engine_instance_id="NSOApexEngine-0:0.0.1",} 1.0 -pdpa_engine_state{engine_instance_id="NSOApexEngine-2:0.0.1",} 1.0 -# HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds. -# TYPE jvm_gc_collection_seconds summary -jvm_gc_collection_seconds_count{gc="Copy",} 5883.0 -jvm_gc_collection_seconds_sum{gc="Copy",} 97.808 -jvm_gc_collection_seconds_count{gc="MarkSweepCompact",} 3.0 -jvm_gc_collection_seconds_sum{gc="MarkSweepCompact",} 0.357 -# HELP pdpa_engine_last_start_timestamp_epoch Epoch timestamp of the instance when engine was last started. 
-# TYPE pdpa_engine_last_start_timestamp_epoch gauge -pdpa_engine_last_start_timestamp_epoch{engine_instance_id="NSOApexEngine-1:0.0.1",} 0.0 -pdpa_engine_last_start_timestamp_epoch{engine_instance_id="NSOApexEngine-4:0.0.1",} 0.0 -pdpa_engine_last_start_timestamp_epoch{engine_instance_id="NSOApexEngine-3:0.0.1",} 0.0 -pdpa_engine_last_start_timestamp_epoch{engine_instance_id="NSOApexEngine-0:0.0.1",} 0.0 -pdpa_engine_last_start_timestamp_epoch{engine_instance_id="NSOApexEngine-2:0.0.1",} 0.0 -# HELP jvm_memory_pool_allocated_bytes_total Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously. -# TYPE jvm_memory_pool_allocated_bytes_total counter -jvm_memory_pool_allocated_bytes_total{pool="Eden Space",} 8.29800936264E11 -jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'profiled nmethods'",} 4.839232E7 -jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-profiled nmethods'",} 3.5181056E7 -jvm_memory_pool_allocated_bytes_total{pool="Compressed Class Space",} 8194120.0 -jvm_memory_pool_allocated_bytes_total{pool="Metaspace",} 7.7729144E7 -jvm_memory_pool_allocated_bytes_total{pool="Tenured Gen",} 1.41180272E8 -jvm_memory_pool_allocated_bytes_total{pool="Survivor Space",} 4.78761928E8 -jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-nmethods'",} 1392128.0 -# HELP pdpa_engine_uptime Time elapsed since the engine was started. -# TYPE pdpa_engine_uptime gauge -pdpa_engine_uptime{engine_instance_id="NSOApexEngine-1:0.0.1",} 259200.522 -pdpa_engine_uptime{engine_instance_id="NSOApexEngine-4:0.0.1",} 259200.751 -pdpa_engine_uptime{engine_instance_id="NSOApexEngine-3:0.0.1",} 259200.678 -pdpa_engine_uptime{engine_instance_id="NSOApexEngine-0:0.0.1",} 259200.439 -pdpa_engine_uptime{engine_instance_id="NSOApexEngine-2:0.0.1",} 259200.601 -# HELP pdpa_engine_last_execution_time Time taken to execute the last APEX policy in seconds. 
-# TYPE pdpa_engine_last_execution_time histogram -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.005",} 24726.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.01",} 50195.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.025",} 70836.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.05",} 71947.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.075",} 71996.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.1",} 72001.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.25",} 72002.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.5",} 72002.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.75",} 72002.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="1.0",} 72002.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="2.5",} 72002.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="5.0",} 72002.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="7.5",} 72002.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="10.0",} 72002.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="+Inf",} 72002.0 -pdpa_engine_last_execution_time_count{engine_instance_id="NSOApexEngine-1:0.0.1",} 72002.0 -pdpa_engine_last_execution_time_sum{engine_instance_id="NSOApexEngine-1:0.0.1",} 609.1939999998591 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.005",} 24512.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.01",} 50115.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.025",} 70746.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.05",} 71918.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.075",} 71966.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.1",} 71967.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.25",} 71967.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.5",} 71967.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.75",} 71967.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="1.0",} 71967.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="2.5",} 71967.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="5.0",} 71967.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="7.5",} 71967.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="10.0",} 71967.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="+Inf",} 71967.0 -pdpa_engine_last_execution_time_count{engine_instance_id="NSOApexEngine-4:0.0.1",} 71967.0 -pdpa_engine_last_execution_time_sum{engine_instance_id="NSOApexEngine-4:0.0.1",} 610.3469999998522 
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.005",} 24607.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.01",} 50182.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.025",} 70791.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.05",} 71929.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.075",} 71965.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.1",} 71970.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.25",} 71970.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.5",} 71970.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.75",} 71970.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="1.0",} 71970.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="2.5",} 71970.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="5.0",} 71970.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="7.5",} 71970.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="10.0",} 71970.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="+Inf",} 71970.0 -pdpa_engine_last_execution_time_count{engine_instance_id="NSOApexEngine-3:0.0.1",} 71970.0 -pdpa_engine_last_execution_time_sum{engine_instance_id="NSOApexEngine-3:0.0.1",} 608.8539999998619 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.005",} 24623.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.01",} 50207.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.025",} 70783.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.05",} 71934.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.075",} 71981.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.1",} 71986.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.25",} 71988.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.5",} 71988.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.75",} 71988.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="1.0",} 71988.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="2.5",} 71988.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="5.0",} 71988.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="7.5",} 71988.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="10.0",} 71988.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="+Inf",} 71988.0 -pdpa_engine_last_execution_time_count{engine_instance_id="NSOApexEngine-0:0.0.1",} 71988.0 -pdpa_engine_last_execution_time_sum{engine_instance_id="NSOApexEngine-0:0.0.1",} 610.5579999998558 
-pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.005",} 24594.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.01",} 50131.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.025",} 70816.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.05",} 71905.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.075",} 71959.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.1",} 71961.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.25",} 71962.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.5",} 71962.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.75",} 71962.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="1.0",} 71962.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="2.5",} 71962.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="5.0",} 71962.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="7.5",} 71962.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="10.0",} 71962.0 -pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="+Inf",} 71962.0 -pdpa_engine_last_execution_time_count{engine_instance_id="NSOApexEngine-2:0.0.1",} 71962.0 -pdpa_engine_last_execution_time_sum{engine_instance_id="NSOApexEngine-2:0.0.1",} 608.3549999998555 -# HELP jvm_memory_objects_pending_finalization The number of objects waiting in the finalizer queue. -# TYPE jvm_memory_objects_pending_finalization gauge -jvm_memory_objects_pending_finalization 0.0 -# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area. -# TYPE jvm_memory_bytes_used gauge -jvm_memory_bytes_used{area="heap",} 1.90274552E8 -jvm_memory_bytes_used{area="nonheap",} 1.16193856E8 -# HELP jvm_memory_bytes_committed Committed (bytes) of a given JVM memory area. -# TYPE jvm_memory_bytes_committed gauge -jvm_memory_bytes_committed{area="heap",} 5.10984192E8 -jvm_memory_bytes_committed{area="nonheap",} 1.56127232E8 -# HELP jvm_memory_bytes_max Max (bytes) of a given JVM memory area. -# TYPE jvm_memory_bytes_max gauge -jvm_memory_bytes_max{area="heap",} 8.151564288E9 -jvm_memory_bytes_max{area="nonheap",} -1.0 -# HELP jvm_memory_bytes_init Initial bytes of a given JVM memory area. -# TYPE jvm_memory_bytes_init gauge -jvm_memory_bytes_init{area="heap",} 5.28482304E8 -jvm_memory_bytes_init{area="nonheap",} 7667712.0 -# HELP jvm_memory_pool_bytes_used Used bytes of a given JVM memory pool. -# TYPE jvm_memory_pool_bytes_used gauge -jvm_memory_pool_bytes_used{pool="CodeHeap 'non-nmethods'",} 1353600.0 -jvm_memory_pool_bytes_used{pool="Metaspace",} 7.7729144E7 -jvm_memory_pool_bytes_used{pool="Tenured Gen",} 1.41180272E8 -jvm_memory_pool_bytes_used{pool="CodeHeap 'profiled nmethods'",} 4831104.0 -jvm_memory_pool_bytes_used{pool="Eden Space",} 4.5145032E7 -jvm_memory_pool_bytes_used{pool="Survivor Space",} 3949248.0 -jvm_memory_pool_bytes_used{pool="Compressed Class Space",} 8194120.0 -jvm_memory_pool_bytes_used{pool="CodeHeap 'non-profiled nmethods'",} 2.4085888E7 -# HELP jvm_memory_pool_bytes_committed Committed bytes of a given JVM memory pool. 
-# TYPE jvm_memory_pool_bytes_committed gauge -jvm_memory_pool_bytes_committed{pool="CodeHeap 'non-nmethods'",} 2555904.0 -jvm_memory_pool_bytes_committed{pool="Metaspace",} 8.5348352E7 -jvm_memory_pool_bytes_committed{pool="Tenured Gen",} 3.52321536E8 -jvm_memory_pool_bytes_committed{pool="CodeHeap 'profiled nmethods'",} 3.3030144E7 -jvm_memory_pool_bytes_committed{pool="Eden Space",} 1.41033472E8 -jvm_memory_pool_bytes_committed{pool="Survivor Space",} 1.7629184E7 -jvm_memory_pool_bytes_committed{pool="Compressed Class Space",} 9175040.0 -jvm_memory_pool_bytes_committed{pool="CodeHeap 'non-profiled nmethods'",} 2.6017792E7 -# HELP jvm_memory_pool_bytes_max Max bytes of a given JVM memory pool. -# TYPE jvm_memory_pool_bytes_max gauge -jvm_memory_pool_bytes_max{pool="CodeHeap 'non-nmethods'",} 5828608.0 -jvm_memory_pool_bytes_max{pool="Metaspace",} -1.0 -jvm_memory_pool_bytes_max{pool="Tenured Gen",} 5.621809152E9 -jvm_memory_pool_bytes_max{pool="CodeHeap 'profiled nmethods'",} 1.22912768E8 -jvm_memory_pool_bytes_max{pool="Eden Space",} 2.248671232E9 -jvm_memory_pool_bytes_max{pool="Survivor Space",} 2.81083904E8 -jvm_memory_pool_bytes_max{pool="Compressed Class Space",} 1.073741824E9 -jvm_memory_pool_bytes_max{pool="CodeHeap 'non-profiled nmethods'",} 1.22916864E8 -# HELP jvm_memory_pool_bytes_init Initial bytes of a given JVM memory pool. -# TYPE jvm_memory_pool_bytes_init gauge -jvm_memory_pool_bytes_init{pool="CodeHeap 'non-nmethods'",} 2555904.0 -jvm_memory_pool_bytes_init{pool="Metaspace",} 0.0 -jvm_memory_pool_bytes_init{pool="Tenured Gen",} 3.52321536E8 -jvm_memory_pool_bytes_init{pool="CodeHeap 'profiled nmethods'",} 2555904.0 -jvm_memory_pool_bytes_init{pool="Eden Space",} 1.41033472E8 -jvm_memory_pool_bytes_init{pool="Survivor Space",} 1.7563648E7 -jvm_memory_pool_bytes_init{pool="Compressed Class Space",} 0.0 -jvm_memory_pool_bytes_init{pool="CodeHeap 'non-profiled nmethods'",} 2555904.0 -# HELP jvm_memory_pool_collection_used_bytes Used bytes after last collection of a given JVM memory pool. -# TYPE jvm_memory_pool_collection_used_bytes gauge -jvm_memory_pool_collection_used_bytes{pool="Tenured Gen",} 3.853812E7 -jvm_memory_pool_collection_used_bytes{pool="Eden Space",} 0.0 -jvm_memory_pool_collection_used_bytes{pool="Survivor Space",} 3949248.0 -# HELP jvm_memory_pool_collection_committed_bytes Committed after last collection bytes of a given JVM memory pool. -# TYPE jvm_memory_pool_collection_committed_bytes gauge -jvm_memory_pool_collection_committed_bytes{pool="Tenured Gen",} 3.52321536E8 -jvm_memory_pool_collection_committed_bytes{pool="Eden Space",} 1.41033472E8 -jvm_memory_pool_collection_committed_bytes{pool="Survivor Space",} 1.7629184E7 -# HELP jvm_memory_pool_collection_max_bytes Max bytes after last collection of a given JVM memory pool. -# TYPE jvm_memory_pool_collection_max_bytes gauge -jvm_memory_pool_collection_max_bytes{pool="Tenured Gen",} 5.621809152E9 -jvm_memory_pool_collection_max_bytes{pool="Eden Space",} 2.248671232E9 -jvm_memory_pool_collection_max_bytes{pool="Survivor Space",} 2.81083904E8 -# HELP jvm_memory_pool_collection_init_bytes Initial after last collection bytes of a given JVM memory pool. 
-# TYPE jvm_memory_pool_collection_init_bytes gauge -jvm_memory_pool_collection_init_bytes{pool="Tenured Gen",} 3.52321536E8 -jvm_memory_pool_collection_init_bytes{pool="Eden Space",} 1.41033472E8 -jvm_memory_pool_collection_init_bytes{pool="Survivor Space",} 1.7563648E7 -# HELP jvm_classes_loaded The number of classes that are currently loaded in the JVM -# TYPE jvm_classes_loaded gauge -jvm_classes_loaded 11386.0 -# HELP jvm_classes_loaded_total The total number of classes that have been loaded since the JVM has started execution -# TYPE jvm_classes_loaded_total counter -jvm_classes_loaded_total 11448.0 -# HELP jvm_classes_unloaded_total The total number of classes that have been unloaded since the JVM has started execution -# TYPE jvm_classes_unloaded_total counter -jvm_classes_unloaded_total 62.0 -# HELP jvm_info VM version info -# TYPE jvm_info gauge -jvm_info{runtime="OpenJDK Runtime Environment",vendor="Alpine",version="11.0.9+11-alpine-r1",} 1.0 -# HELP jvm_memory_pool_allocated_bytes_created Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously. -# TYPE jvm_memory_pool_allocated_bytes_created gauge -jvm_memory_pool_allocated_bytes_created{pool="Eden Space",} 1.651077501662E9 -jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'profiled nmethods'",} 1.651077501657E9 -jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-profiled nmethods'",} 1.651077501662E9 -jvm_memory_pool_allocated_bytes_created{pool="Compressed Class Space",} 1.651077501662E9 -jvm_memory_pool_allocated_bytes_created{pool="Metaspace",} 1.651077501662E9 -jvm_memory_pool_allocated_bytes_created{pool="Tenured Gen",} 1.651077501662E9 -jvm_memory_pool_allocated_bytes_created{pool="Survivor Space",} 1.651077501662E9 -jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-nmethods'",} 1.651077501662E9 -# HELP pdpa_engine_last_execution_time_created Time taken to execute the last APEX policy in seconds. -# TYPE pdpa_engine_last_execution_time_created gauge -pdpa_engine_last_execution_time_created{engine_instance_id="NSOApexEngine-1:0.0.1",} 1.651080501294E9 -pdpa_engine_last_execution_time_created{engine_instance_id="NSOApexEngine-4:0.0.1",} 1.651080501295E9 -pdpa_engine_last_execution_time_created{engine_instance_id="NSOApexEngine-3:0.0.1",} 1.651080501295E9 -pdpa_engine_last_execution_time_created{engine_instance_id="NSOApexEngine-0:0.0.1",} 1.651080501294E9 -pdpa_engine_last_execution_time_created{engine_instance_id="NSOApexEngine-2:0.0.1",} 1.651080501294E9 -# HELP pdpa_policy_deployments_created The total number of policy deployments. 
-# TYPE pdpa_policy_deployments_created gauge -pdpa_policy_deployments_created{operation="deploy",status="TOTAL",} 1.651080501289E9 -pdpa_policy_deployments_created{operation="undeploy",status="TOTAL",} 1.651081148331E9 -pdpa_policy_deployments_created{operation="undeploy",status="SUCCESS",} 1.651081148331E9 -pdpa_policy_deployments_created{operation="deploy",status="SUCCESS",} 1.651080501289E9 diff --git a/docs/development/devtools/apex-s3p-results/apex_metrics_before_72h.txt b/docs/development/devtools/apex-s3p-results/apex_metrics_before_72h.txt deleted file mode 100644 index 4a3d8835..00000000 --- a/docs/development/devtools/apex-s3p-results/apex_metrics_before_72h.txt +++ /dev/null @@ -1,175 +0,0 @@ -# HELP jvm_threads_current Current thread count of a JVM -# TYPE jvm_threads_current gauge -jvm_threads_current 31.0 -# HELP jvm_threads_daemon Daemon thread count of a JVM -# TYPE jvm_threads_daemon gauge -jvm_threads_daemon 16.0 -# HELP jvm_threads_peak Peak thread count of a JVM -# TYPE jvm_threads_peak gauge -jvm_threads_peak 31.0 -# HELP jvm_threads_started_total Started thread count of a JVM -# TYPE jvm_threads_started_total counter -jvm_threads_started_total 32.0 -# HELP jvm_threads_deadlocked Cycles of JVM-threads that are in deadlock waiting to acquire object monitors or ownable synchronizers -# TYPE jvm_threads_deadlocked gauge -jvm_threads_deadlocked 0.0 -# HELP jvm_threads_deadlocked_monitor Cycles of JVM-threads that are in deadlock waiting to acquire object monitors -# TYPE jvm_threads_deadlocked_monitor gauge -jvm_threads_deadlocked_monitor 0.0 -# HELP jvm_threads_state Current count of threads by state -# TYPE jvm_threads_state gauge -jvm_threads_state{state="BLOCKED",} 0.0 -jvm_threads_state{state="TIMED_WAITING",} 11.0 -jvm_threads_state{state="NEW",} 0.0 -jvm_threads_state{state="RUNNABLE",} 7.0 -jvm_threads_state{state="TERMINATED",} 0.0 -jvm_threads_state{state="WAITING",} 13.0 -# HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds. -# TYPE jvm_gc_collection_seconds summary -jvm_gc_collection_seconds_count{gc="Copy",} 2.0 -jvm_gc_collection_seconds_sum{gc="Copy",} 0.059 -jvm_gc_collection_seconds_count{gc="MarkSweepCompact",} 2.0 -jvm_gc_collection_seconds_sum{gc="MarkSweepCompact",} 0.185 -# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds. -# TYPE process_cpu_seconds_total counter -process_cpu_seconds_total 38.14 -# HELP process_start_time_seconds Start time of the process since unix epoch in seconds. -# TYPE process_start_time_seconds gauge -process_start_time_seconds 1.651077494162E9 -# HELP process_open_fds Number of open file descriptors. -# TYPE process_open_fds gauge -process_open_fds 355.0 -# HELP process_max_fds Maximum number of open file descriptors. -# TYPE process_max_fds gauge -process_max_fds 1048576.0 -# HELP process_virtual_memory_bytes Virtual memory size in bytes. -# TYPE process_virtual_memory_bytes gauge -process_virtual_memory_bytes 1.0070171648E10 -# HELP process_resident_memory_bytes Resident memory size in bytes. -# TYPE process_resident_memory_bytes gauge -process_resident_memory_bytes 2.9052928E8 -# HELP jvm_buffer_pool_used_bytes Used bytes of a given JVM buffer pool. -# TYPE jvm_buffer_pool_used_bytes gauge -jvm_buffer_pool_used_bytes{pool="mapped",} 0.0 -jvm_buffer_pool_used_bytes{pool="direct",} 187432.0 -# HELP jvm_buffer_pool_capacity_bytes Bytes capacity of a given JVM buffer pool. 
-# TYPE jvm_buffer_pool_capacity_bytes gauge -jvm_buffer_pool_capacity_bytes{pool="mapped",} 0.0 -jvm_buffer_pool_capacity_bytes{pool="direct",} 187432.0 -# HELP jvm_buffer_pool_used_buffers Used buffers of a given JVM buffer pool. -# TYPE jvm_buffer_pool_used_buffers gauge -jvm_buffer_pool_used_buffers{pool="mapped",} 0.0 -jvm_buffer_pool_used_buffers{pool="direct",} 9.0 -# HELP jvm_memory_pool_allocated_bytes_total Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously. -# TYPE jvm_memory_pool_allocated_bytes_total counter -jvm_memory_pool_allocated_bytes_total{pool="Eden Space",} 3.035482E8 -jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'profiled nmethods'",} 9772800.0 -jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-profiled nmethods'",} 2152064.0 -jvm_memory_pool_allocated_bytes_total{pool="Compressed Class Space",} 4912232.0 -jvm_memory_pool_allocated_bytes_total{pool="Metaspace",} 4.1337744E7 -jvm_memory_pool_allocated_bytes_total{pool="Tenured Gen",} 2.8136056E7 -jvm_memory_pool_allocated_bytes_total{pool="Survivor Space",} 6813240.0 -jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-nmethods'",} 1272320.0 -# HELP pdpa_policy_deployments_total The total number of policy deployments. -# TYPE pdpa_policy_deployments_total counter -# HELP jvm_memory_objects_pending_finalization The number of objects waiting in the finalizer queue. -# TYPE jvm_memory_objects_pending_finalization gauge -jvm_memory_objects_pending_finalization 0.0 -# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area. -# TYPE jvm_memory_bytes_used gauge -jvm_memory_bytes_used{area="heap",} 9.5900224E7 -jvm_memory_bytes_used{area="nonheap",} 6.0285288E7 -# HELP jvm_memory_bytes_committed Committed (bytes) of a given JVM memory area. -# TYPE jvm_memory_bytes_committed gauge -jvm_memory_bytes_committed{area="heap",} 5.10984192E8 -jvm_memory_bytes_committed{area="nonheap",} 6.3922176E7 -# HELP jvm_memory_bytes_max Max (bytes) of a given JVM memory area. -# TYPE jvm_memory_bytes_max gauge -jvm_memory_bytes_max{area="heap",} 8.151564288E9 -jvm_memory_bytes_max{area="nonheap",} -1.0 -# HELP jvm_memory_bytes_init Initial bytes of a given JVM memory area. -# TYPE jvm_memory_bytes_init gauge -jvm_memory_bytes_init{area="heap",} 5.28482304E8 -jvm_memory_bytes_init{area="nonheap",} 7667712.0 -# HELP jvm_memory_pool_bytes_used Used bytes of a given JVM memory pool. -# TYPE jvm_memory_pool_bytes_used gauge -jvm_memory_pool_bytes_used{pool="CodeHeap 'non-nmethods'",} 1272320.0 -jvm_memory_pool_bytes_used{pool="Metaspace",} 4.1681312E7 -jvm_memory_pool_bytes_used{pool="Tenured Gen",} 2.8136056E7 -jvm_memory_pool_bytes_used{pool="CodeHeap 'profiled nmethods'",} 1.0006912E7 -jvm_memory_pool_bytes_used{pool="Eden Space",} 6.5005376E7 -jvm_memory_pool_bytes_used{pool="Survivor Space",} 2758792.0 -jvm_memory_pool_bytes_used{pool="Compressed Class Space",} 4913352.0 -jvm_memory_pool_bytes_used{pool="CodeHeap 'non-profiled nmethods'",} 2411392.0 -# HELP jvm_memory_pool_bytes_committed Committed bytes of a given JVM memory pool. 
-# TYPE jvm_memory_pool_bytes_committed gauge -jvm_memory_pool_bytes_committed{pool="CodeHeap 'non-nmethods'",} 2555904.0 -jvm_memory_pool_bytes_committed{pool="Metaspace",} 4.32128E7 -jvm_memory_pool_bytes_committed{pool="Tenured Gen",} 3.52321536E8 -jvm_memory_pool_bytes_committed{pool="CodeHeap 'profiled nmethods'",} 1.0092544E7 -jvm_memory_pool_bytes_committed{pool="Eden Space",} 1.41033472E8 -jvm_memory_pool_bytes_committed{pool="Survivor Space",} 1.7629184E7 -jvm_memory_pool_bytes_committed{pool="Compressed Class Space",} 5505024.0 -jvm_memory_pool_bytes_committed{pool="CodeHeap 'non-profiled nmethods'",} 2555904.0 -# HELP jvm_memory_pool_bytes_max Max bytes of a given JVM memory pool. -# TYPE jvm_memory_pool_bytes_max gauge -jvm_memory_pool_bytes_max{pool="CodeHeap 'non-nmethods'",} 5828608.0 -jvm_memory_pool_bytes_max{pool="Metaspace",} -1.0 -jvm_memory_pool_bytes_max{pool="Tenured Gen",} 5.621809152E9 -jvm_memory_pool_bytes_max{pool="CodeHeap 'profiled nmethods'",} 1.22912768E8 -jvm_memory_pool_bytes_max{pool="Eden Space",} 2.248671232E9 -jvm_memory_pool_bytes_max{pool="Survivor Space",} 2.81083904E8 -jvm_memory_pool_bytes_max{pool="Compressed Class Space",} 1.073741824E9 -jvm_memory_pool_bytes_max{pool="CodeHeap 'non-profiled nmethods'",} 1.22916864E8 -# HELP jvm_memory_pool_bytes_init Initial bytes of a given JVM memory pool. -# TYPE jvm_memory_pool_bytes_init gauge -jvm_memory_pool_bytes_init{pool="CodeHeap 'non-nmethods'",} 2555904.0 -jvm_memory_pool_bytes_init{pool="Metaspace",} 0.0 -jvm_memory_pool_bytes_init{pool="Tenured Gen",} 3.52321536E8 -jvm_memory_pool_bytes_init{pool="CodeHeap 'profiled nmethods'",} 2555904.0 -jvm_memory_pool_bytes_init{pool="Eden Space",} 1.41033472E8 -jvm_memory_pool_bytes_init{pool="Survivor Space",} 1.7563648E7 -jvm_memory_pool_bytes_init{pool="Compressed Class Space",} 0.0 -jvm_memory_pool_bytes_init{pool="CodeHeap 'non-profiled nmethods'",} 2555904.0 -# HELP jvm_memory_pool_collection_used_bytes Used bytes after last collection of a given JVM memory pool. -# TYPE jvm_memory_pool_collection_used_bytes gauge -jvm_memory_pool_collection_used_bytes{pool="Tenured Gen",} 2.8136056E7 -jvm_memory_pool_collection_used_bytes{pool="Eden Space",} 0.0 -jvm_memory_pool_collection_used_bytes{pool="Survivor Space",} 2758792.0 -# HELP jvm_memory_pool_collection_committed_bytes Committed after last collection bytes of a given JVM memory pool. -# TYPE jvm_memory_pool_collection_committed_bytes gauge -jvm_memory_pool_collection_committed_bytes{pool="Tenured Gen",} 3.52321536E8 -jvm_memory_pool_collection_committed_bytes{pool="Eden Space",} 1.41033472E8 -jvm_memory_pool_collection_committed_bytes{pool="Survivor Space",} 1.7629184E7 -# HELP jvm_memory_pool_collection_max_bytes Max bytes after last collection of a given JVM memory pool. -# TYPE jvm_memory_pool_collection_max_bytes gauge -jvm_memory_pool_collection_max_bytes{pool="Tenured Gen",} 5.621809152E9 -jvm_memory_pool_collection_max_bytes{pool="Eden Space",} 2.248671232E9 -jvm_memory_pool_collection_max_bytes{pool="Survivor Space",} 2.81083904E8 -# HELP jvm_memory_pool_collection_init_bytes Initial after last collection bytes of a given JVM memory pool. 
-# TYPE jvm_memory_pool_collection_init_bytes gauge -jvm_memory_pool_collection_init_bytes{pool="Tenured Gen",} 3.52321536E8 -jvm_memory_pool_collection_init_bytes{pool="Eden Space",} 1.41033472E8 -jvm_memory_pool_collection_init_bytes{pool="Survivor Space",} 1.7563648E7 -# HELP jvm_classes_loaded The number of classes that are currently loaded in the JVM -# TYPE jvm_classes_loaded gauge -jvm_classes_loaded 7378.0 -# HELP jvm_classes_loaded_total The total number of classes that have been loaded since the JVM has started execution -# TYPE jvm_classes_loaded_total counter -jvm_classes_loaded_total 7378.0 -# HELP jvm_classes_unloaded_total The total number of classes that have been unloaded since the JVM has started execution -# TYPE jvm_classes_unloaded_total counter -jvm_classes_unloaded_total 0.0 -# HELP jvm_info VM version info -# TYPE jvm_info gauge -jvm_info{runtime="OpenJDK Runtime Environment",vendor="Alpine",version="11.0.9+11-alpine-r1",} 1.0 -# HELP jvm_memory_pool_allocated_bytes_created Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously. -# TYPE jvm_memory_pool_allocated_bytes_created gauge -jvm_memory_pool_allocated_bytes_created{pool="Eden Space",} 1.651077501662E9 -jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'profiled nmethods'",} 1.651077501657E9 -jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-profiled nmethods'",} 1.651077501662E9 -jvm_memory_pool_allocated_bytes_created{pool="Compressed Class Space",} 1.651077501662E9 -jvm_memory_pool_allocated_bytes_created{pool="Metaspace",} 1.651077501662E9 -jvm_memory_pool_allocated_bytes_created{pool="Tenured Gen",} 1.651077501662E9 -jvm_memory_pool_allocated_bytes_created{pool="Survivor Space",} 1.651077501662E9 -jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-nmethods'",} 1.651077501662E9 diff --git a/docs/development/devtools/apex-s3p-results/apex_perf_jmeter_results.png b/docs/development/devtools/apex-s3p-results/apex_perf_jmeter_results.png deleted file mode 100644 index 0fa35c0b..00000000 Binary files a/docs/development/devtools/apex-s3p-results/apex_perf_jmeter_results.png and /dev/null differ diff --git a/docs/development/devtools/apex-s3p-results/apex_stability_jmeter_results.png b/docs/development/devtools/apex-s3p-results/apex_stability_jmeter_results.png deleted file mode 100644 index 585f99c5..00000000 Binary files a/docs/development/devtools/apex-s3p-results/apex_stability_jmeter_results.png and /dev/null differ diff --git a/docs/development/devtools/apex-s3p-results/apex_top_after_72h.png b/docs/development/devtools/apex-s3p-results/apex_top_after_72h.png deleted file mode 100644 index dafc7002..00000000 Binary files a/docs/development/devtools/apex-s3p-results/apex_top_after_72h.png and /dev/null differ diff --git a/docs/development/devtools/apex-s3p-results/apex_top_before_72h.png b/docs/development/devtools/apex-s3p-results/apex_top_before_72h.png deleted file mode 100644 index 2e2e7574..00000000 Binary files a/docs/development/devtools/apex-s3p-results/apex_top_before_72h.png and /dev/null differ diff --git a/docs/development/devtools/apex-s3p.rst b/docs/development/devtools/apex-s3p.rst deleted file mode 100644 index 6a3be847..00000000 --- a/docs/development/devtools/apex-s3p.rst +++ /dev/null @@ -1,258 +0,0 @@ -.. This work is licensed under a -.. Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 - -.. _apex-s3p-label: - -.. 
toctree::
-   :maxdepth: 2
-
-Policy APEX PDP component
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Both the Stability and the Performance tests were executed in a full ONAP OOM deployment in the Nordix lab.
-
-Setup Details
-+++++++++++++
-
-Deploying ONAP using OOM
-------------------------
-
-APEX-PDP, along with all the other policy components, is deployed as part of a full ONAP OOM deployment.
-At a minimum, the following ONAP components are needed: policy, mariadb-galera, aai, cassandra, aaf, and dmaap.
-
-Before deploying, the values.yaml files are changed to use NodePort instead of ClusterIP for policy-api,
-policy-pap, and policy-apex-pdp, so that they are accessible from JMeter::
-
-   policy-apex-pdp   NodePort   10.43.131.43   6969:31739/TCP
-   policy-api        NodePort   10.43.67.153   6969:30430/TCP
-   policy-pap        NodePort   10.43.200.57   6969:30585/TCP
-
-The node ports (31739, 30430 and 30585 above) are used in JMeter.
-The HOSTNAME values used by JMeter are set to the node IPs returned by running "kubectl get node -o wide";
-the applications running on each node can be found by running "kubectl describe node".
-
-Set up policy-models-simulator
-------------------------------
-
-Policy-models-simulator is deployed so that the CDS and DMaaP simulators can be used during policy execution.
-The simulator configurations used are available in the apex-pdp repository:
-testsuites/apex-pdp-stability/src/main/resources/simulatorConfig/
-
-It is run as a Docker image from a node accessible to the Kubernetes cluster::
-
-   docker run -d --rm --publish 6680:6680 --publish 31054:3905 \
-     --volume "apex-pdp/testsuites/apex-pdp-stability/src/main/resources/simulatorConfig:/opt/app/policy/simulators/etc/mounted" \
-     nexus3.onap.org:10001/onap/policy-models-simulator:2.7-SNAPSHOT-latest
-
-The published ports 6680 and 31054 are used in JMeter for the CDS and DMaaP simulators.
-
-Creation of VNF & PNF in AAI
-----------------------------
-
-In order for the APEX-PDP engine to fetch resource details from AAI during runtime execution, we need to create dummy
-VNF & PNF entities in AAI. In a real control loop flow, the entities in AAI are either created during the orchestration
-phase or provisioned in AAI separately.
-
-Download and execute the steps in the Postman collection for creating the entities along with their dependencies.
-The steps need to be performed sequentially, one after another, and no input is required from the user.
-
-:download:`Create VNF & PNF in AAI for Apex S3P `
-
-Make sure to skip the delete VNF & PNF steps.
-
-JMeter Tests
-------------
-
-Two APEX policies are executed in the APEX-PDP engine, and are triggered by multiple threads during the tests.
-Both tests were run via JMeter.
-
-   The stability test script is available in the apex-pdp repository:
-   testsuites/apex-pdp-stability/src/main/resources/apexPdpStabilityTestPlan.jmx
-
-   The performance test script is available in the apex-pdp repository:
-   testsuites/performance/performance-benchmark-test/src/main/resources/apexPdpPerformanceTestPlan.jmx
-
-.. Note::
-   Policy executions are validated in a stricter fashion during the tests.
-   There are test cases where up to 80 events are expected on the DMaaP topic.
-   The DMaaP simulator is used to keep the setup simple and avoid any message pickup timing related issues.
-
-Stability Test of APEX-PDP
-++++++++++++++++++++++++++
-
-Test Plan
----------
-
-The 72 hour stability test ran the following steps.
-
-Setup Phase
-"""""""""""
-
-Policies are created and deployed to APEX-PDP during this phase. Only one thread is in action and this step is done only once.
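Each of the setup steps listed below is a plain REST call from JMeter against the API and PAP NodePorts noted earlier. As a rough sketch only (the endpoint path, payload fields and credentials shown here are illustrative assumptions about the Policy PAP deployment API, not values taken from the JMeter test plan), the **Deploy Policies** step amounts to a request of the form:

.. code-block:: bash

   # Illustrative sketch of the "Deploy Policies" step (assumed PAP deployment endpoint).
   # HOSTNAME and PAP_PORT are the JMeter variables described under Test Configuration;
   # replace <user>:<password> with the credentials configured for the deployment.
   curl -k -u "<user>:<password>" \
        -X POST "https://${HOSTNAME}:${PAP_PORT}/policy/pap/v1/pdps/policies" \
        -H "Content-Type: application/json" \
        -d '{"policies": [{"policy-id": "onap.policies.apex.Simplecontrolloop"},
                          {"policy-id": "onap.policies.apex.Example"}]}'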
-
-- **Create Policy onap.policies.apex.Simplecontrolloop** - creates the first APEX policy using the policy/api component.
-  This is a sample policy used for PNF testing.
-- **Create Policy onap.policies.apex.Example** - creates the second APEX policy using the policy/api component.
-  This is a sample policy used for VNF testing.
-- **Deploy Policies** - deploys both of the created policies to APEX-PDP using the policy/pap component.
-
-Main Phase
-""""""""""
-
-Once the policies are created and deployed to APEX-PDP by the setup thread, five threads execute the tests below for 72 hours.
-
-- **Healthcheck** - checks the health status of APEX-PDP.
-- **Prometheus Metrics** - checks that APEX-PDP is exposing Prometheus metrics.
-- **Test Simplecontrolloop policy success case** - sends a trigger event to the *unauthenticated.DCAE_CL_OUTPUT* DMaaP topic.
-  If the policy execution is successful, 3 different notification events are sent to the *APEX-CL-MGT* topic by each one of the 5 threads.
-  So, it is checked that 15 notification messages with the relevant content are received in total on the *APEX-CL-MGT* topic.
-- **Test Simplecontrolloop policy failure case** - sends a trigger event with an invalid pnfName to the *unauthenticated.DCAE_CL_OUTPUT* DMaaP topic.
-  The policy execution is expected to fail due to the AAI failure response. 2 notification events are expected on the *APEX-CL-MGT* topic per thread in this case.
-  It is checked that 10 notification messages with the relevant content are received in total on the *APEX-CL-MGT* topic.
-- **Test Example policy success case** - sends a trigger event to the *unauthenticated.DCAE_POLICY_EXAMPLE_OUTPUT* DMaaP topic.
-  If the policy execution is successful, 4 different notification events are sent to the *APEX-CL-MGT* topic by each one of the 5 threads.
-  So, it is checked that 20 notification messages with the relevant content are received in total on the *APEX-CL-MGT* topic.
-- **Test Example policy failure case** - sends a trigger event with an invalid vnfName to the *unauthenticated.DCAE_POLICY_EXAMPLE_OUTPUT* DMaaP topic.
-  The policy execution is expected to fail due to the AAI failure response. 2 notification events are expected on the *APEX-CL-MGT* topic per thread in this case.
-  So, it is checked that 10 notification messages with the relevant content are received in total on the *APEX-CL-MGT* topic.
-- **Clean up DMaaP notification topic** - the DMaaP notification topic, *APEX-CL-MGT*, is cleaned up after each test to make sure that one failure doesn't lead to cascading errors.
-
-
-Teardown Phase
-""""""""""""""
-
-Policies are undeployed from APEX-PDP and deleted during this phase.
-Only one thread is in action and this step is done only once, after the Main phase is complete.
-
-- **Undeploy Policies** - undeploys both policies from APEX-PDP using the policy/pap component.
-- **Delete Policy onap.policies.apex.Simplecontrolloop** - deletes the first APEX policy using the policy/api component.
-- **Delete Policy onap.policies.apex.Example** - deletes the second APEX policy, also using the policy/api component.
-
-Test Configuration
-------------------
-
-The following elements are used to configure the parameters of the test plan.
-
-- **HTTP Authorization Manager** - used to store user/password authentication details.
-- **HTTP Header Manager** - used to store headers which will be used for making HTTP requests.
-- **User Defined Variables** - used to store the following user-defined parameters.
-
- -=================== =============================================================================== - **Name** **Description** -=================== =============================================================================== - HOSTNAME IP Address or host name to access the components - PAP_PORT Port number of PAP for making REST API calls such as deploy/undeploy of policy - API_PORT Port number of API for making REST API calls such as create/delete of policy - APEX_PORT Port number of APEX for making REST API calls such as healthcheck/metrics - SIM_HOST IP Address or hostname running policy-models-simulator - DMAAP_PORT Port number of DMaaP simulator for making REST API calls such as reading notification events - CDS_PORT Port number of CDS simulator - wait Wait time if required after a request (in milliseconds) - threads Number of threads to run test cases in parallel - threadsTimeOutInMs Synchronization timer for threads running in parallel (in milliseconds) -=================== =============================================================================== - -Run Test --------- - -The test was run in the background via "nohup", to prevent it from being interrupted: - -.. code-block:: bash - - nohup ./apache-jmeter-5.4.3/bin/jmeter.sh -n -t apexPdpStabilityTestPlan.jmx -l stabilityTestResults.jtl - -Test Results ------------- - -**Summary** - -Stability test plan was triggered for 72 hours. There were no failures during the 72 hours test. - - -**Test Statistics** - -======================= ================= ================== ================================== -**Total # of requests** **Success %** **Error %** **Average time taken per request** -======================= ================= ================== ================================== -430397 100 % 0.00 % 151.694 ms -======================= ================= ================== ================================== - -.. Note:: - - There were no failures during the 72 hours test. - -**JMeter Screenshot** - -.. image:: apex-s3p-results/apex_stability_jmeter_results.png - -**Memory and CPU usage** - -The memory and CPU usage can be monitored by running "top" command in the APEX-PDP pod. -A snapshot is taken before and after test execution to monitor the changes in resource utilization. -Prometheus metrics is also collected before and after the test execution. - -Memory and CPU usage before test execution: - -.. image:: apex-s3p-results/apex_top_before_72h.png - -:download:`Prometheus metrics before 72h test ` - -Memory and CPU usage after test execution: - -.. image:: apex-s3p-results/apex_top_after_72h.png - -:download:`Prometheus metrics after 72h test ` - -Performance Test of APEX-PDP -++++++++++++++++++++++++++++ - -Introduction ------------- - -Performance test of APEX-PDP is done similar to the stability test, but in a more extreme manner using higher thread count. - -Setup Details -------------- - -The performance test is performed on a similar setup as Stability test. - - -Test Plan ---------- - -Performance test plan is the same as the stability test plan above except for the few differences listed below. - -- Increase the number of threads used in the Main Phase from 5 to 20. -- Reduce the test time to 2 hours. - -Run Test --------- - -.. code-block:: bash - - nohup ./apache-jmeter-5.4.3/bin/jmeter.sh -n -t apexPdpPerformanceTestPlan.jmx -l perftestresults.jtl - - -Test Results ------------- - -Test results are shown as below. 
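
In addition to the aggregate figures below, an HTML dashboard can be generated from the collected JTL file after the run. This is a hedged convenience step (standard JMeter report generation, not part of the documented procedure); the input file name matches the ``-l`` argument used above and the output directory name is a placeholder:

.. code-block:: bash

    # Generate a JMeter HTML dashboard report from the existing results file.
    # The output directory must not already exist.
    ./apache-jmeter-5.4.3/bin/jmeter.sh -g perftestresults.jtl -o perf-report
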
- -**Test Statistics** - -======================= ================= ================== ================================== -**Total # of requests** **Success %** **Error %** **Average time taken per request** -======================= ================= ================== ================================== -47567 100 % 0.00 % 163.841 ms -======================= ================= ================== ================================== - -**JMeter Screenshot** - -.. image:: apex-s3p-results/apex_perf_jmeter_results.png - -Summary -+++++++ - -Multiple policies were executed in a multi-threaded fashion for both stability and performance tests. -Both tests ran smoothly without any issues. diff --git a/docs/development/devtools/api-s3p-results/api-response-time-distribution_J.png b/docs/development/devtools/api-s3p-results/api-response-time-distribution_J.png deleted file mode 100644 index 6b62b2b2..00000000 Binary files a/docs/development/devtools/api-s3p-results/api-response-time-distribution_J.png and /dev/null differ diff --git a/docs/development/devtools/api-s3p-results/api-response-time-distribution_performance_J.png b/docs/development/devtools/api-s3p-results/api-response-time-distribution_performance_J.png deleted file mode 100644 index 60476027..00000000 Binary files a/docs/development/devtools/api-s3p-results/api-response-time-distribution_performance_J.png and /dev/null differ diff --git a/docs/development/devtools/api-s3p-results/api-response-time-overtime_J.png b/docs/development/devtools/api-s3p-results/api-response-time-overtime_J.png deleted file mode 100644 index b32ff6ae..00000000 Binary files a/docs/development/devtools/api-s3p-results/api-response-time-overtime_J.png and /dev/null differ diff --git a/docs/development/devtools/api-s3p-results/api-response-time-overtime_performance_J.png b/docs/development/devtools/api-s3p-results/api-response-time-overtime_performance_J.png deleted file mode 100644 index 82a0b8ae..00000000 Binary files a/docs/development/devtools/api-s3p-results/api-response-time-overtime_performance_J.png and /dev/null differ diff --git a/docs/development/devtools/api-s3p-results/api-s3p-jm-1_J.png b/docs/development/devtools/api-s3p-results/api-s3p-jm-1_J.png deleted file mode 100644 index c219a63c..00000000 Binary files a/docs/development/devtools/api-s3p-results/api-s3p-jm-1_J.png and /dev/null differ diff --git a/docs/development/devtools/api-s3p-results/api-s3p-jm-2_J.png b/docs/development/devtools/api-s3p-results/api-s3p-jm-2_J.png deleted file mode 100644 index 0083f3ca..00000000 Binary files a/docs/development/devtools/api-s3p-results/api-s3p-jm-2_J.png and /dev/null differ diff --git a/docs/development/devtools/api-s3p-results/api_top_after_72h.png b/docs/development/devtools/api-s3p-results/api_top_after_72h.png deleted file mode 100644 index de4c4553..00000000 Binary files a/docs/development/devtools/api-s3p-results/api_top_after_72h.png and /dev/null differ diff --git a/docs/development/devtools/api-s3p-results/api_top_before_72h.png b/docs/development/devtools/api-s3p-results/api_top_before_72h.png deleted file mode 100644 index 2b334377..00000000 Binary files a/docs/development/devtools/api-s3p-results/api_top_before_72h.png and /dev/null differ diff --git a/docs/development/devtools/api-s3p.rst b/docs/development/devtools/api-s3p.rst deleted file mode 100644 index 12c3a516..00000000 --- a/docs/development/devtools/api-s3p.rst +++ /dev/null @@ -1,211 +0,0 @@ -.. This work is licensed under a -.. 
Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 - -.. _api-s3p-label: - -.. toctree:: - :maxdepth: 2 - -Policy API S3P Tests -#################### - - -72 Hours Stability Test of Policy API -+++++++++++++++++++++++++++++++++++++ - -Introduction ------------- - -The 72 hour stability test of policy API has the goal of verifying the stability of running policy design API REST -service by ingesting a steady flow of transactions in a multi-threaded fashion to -simulate multiple clients' behaviours. -All the transaction flows are initiated from a test client server running JMeter for the duration of 72 hours. - -Setup Details -------------- - -The stability test was performed on a default ONAP OOM installation in the Nordix Lab environment. -JMeter was installed on a separate VM to inject the traffic defined in the -`API stability script -`_ -with the following command: - -.. code-block:: bash - - nohup apache-jmeter-5.5/bin/jmeter -n -t policy_api_stability.jmx -l stabilityTestResultsPolicyApi.jtl & - -The test was run in the background via “nohup” and “&”, to prevent it from being interrupted. - -Test Plan ---------- - -The 72+ hours stability test will be running the following steps sequentially -in multi-threaded loops. Thread number is set to 5 to simulate 5 API clients' -behaviours (they can be calling the same policy CRUD API simultaneously). -Each thread creates a different version of the policy types and policies to not -interfere with one another while operating simultaneously. The point version -of each entity is set to the running thread number. - -**Setup Thread (will be running only once)** - -- Get policy-api Healthcheck -- Get API Counter Statistics -- Get Preloaded Policy Types - -**API Test Flow (5 threads running the same steps in the same loop)** - -- Create a new Monitoring Policy Type with Version 6.0.# -- Create a new Monitoring Policy Type with Version 7.0.# -- Create a new Optimization Policy Type with Version 6.0.# -- Create a new Guard Policy Type with Version 6.0.# -- Create a new Native APEX Policy Type with Version 6.0.# -- Create a new Native Drools Policy Type with Version 6.0.# -- Create a new Native XACML Policy Type with Version 6.0.# -- Get All Policy Types -- Get All Versions of the new Monitoring Policy Type -- Get Version 6.0.# of the new Monitoring Policy Type -- Get Version 6.0.# of the new Optimization Policy Type -- Get Version 6.0.# of the new Guard Policy Type -- Get Version 6.0.# of the new Native APEX Policy Type -- Get Version 6.0.# of the new Native Drools Policy Type -- Get Version 6.0.# of the new Native XACML Policy Type -- Get the Latest Version of the New Monitoring Policy Type -- Create Version 6.0.# of Node Template -- Create Monitoring Policy Ver 6.0.# w/Monitoring Policy Type Ver 6.0.# -- Create Monitoring Policy Ver 7.0.# w/Monitoring Policy Type Ver 7.0.# -- Create Optimization Policy Ver 6.0.# w/Optimization Policy Type Ver 6.0.# -- Create Guard Policy Ver 6.0.# w/Guard Policy Type Ver 6.0.# -- Create Native APEX Policy Ver 6.0.# w/Native APEX Policy Type Ver 6.0.# -- Create Native Drools Policy Ver 6.0.# w/Native Drools Policy Type Ver 6.0.# -- Create Native XACML Policy Ver 6.0.# w/Native XACML Policy Type Ver 6.0.# -- Create Version 6.0.# of PNF Example Policy with Metadata -- Get Node Template -- Get All TCA Policies -- Get All Versions of Monitoring Policy Type -- Get Version 6.0.# of the new Monitoring Policy -- Get Version 6.0.# of the new Optimization Policy -- Get Version 
6.0.# of the new Guard Policy -- Get Version 6.0.# of the new Native APEX Policy -- Get Version 6.0.# of the new Native Drools Policy -- Get Version 6.0.# of the new Native XACML Policy -- Get the Latest Version of the new Monitoring Policy -- Delete Version 6.0.# of the new Monitoring Policy -- Delete Version 7.0.# of the new Monitoring Policy -- Delete Version 6.0.# of the new OptimizationPolicy -- Delete Version 6.0.# of the new Guard Policy -- Delete Version 6.0.# of the new Native APEX Policy -- Delete Version 6.0.# of PNF Example Policy having Metadata -- Delete Version 6.0.# of the new Native Drools Policy -- Delete Version 6.0.# of the new Native XACML Policy -- Delete Monitoring Policy Type with Version 6.0.# -- Delete Monitoring Policy Type with Version 7.0.# -- Delete Optimization Policy Type with Version 6.0.# -- Delete Guard Policy Type with Version 6.0.# -- Delete Native APEX Policy Type with Version 6.0.# -- Delete Native Drools Policy Type with Version 6.0.# -- Delete Native XACML Policy Type with Version 6.0.# -- Delete Node Template -- Get Policy Metrics - -**TearDown Thread (will only be running after API Test Flow is completed)** - -- Get policy-api Healthcheck -- Get Preloaded Policy Types - - -Test Results ------------- - -**Summary** - -No errors were found during the 72 hours of the Policy API stability run. -The load was performed against a non-tweaked ONAP OOM installation. - -**Test Statistics** - -======================= ============= =========== =============================== =============================== =============================== -**Total # of requests** **Success %** **TPS** **Avg. time taken per request** **Min. time taken per request** **Max. time taken per request** -======================= ============= =========== =============================== =============================== =============================== - 950839 100% 3.67 1351 ms 126 ms 16324 ms -======================= ============= =========== =============================== =============================== =============================== - -.. image:: api-s3p-results/api-s3p-jm-1_J.png - -**JMeter Results** - -The following graphs show the response time distributions. The "Get Policy Types" API calls are the most expensive calls that -average a 13 seconds plus response time. - -.. image:: api-s3p-results/api-response-time-distribution_J.png -.. image:: api-s3p-results/api-response-time-overtime_J.png - -**Memory and CPU usage** - -The memory and CPU usage can be monitored by running "top" command in the policy-api pod. -A snapshot is taken before and after test execution to monitor the changes in resource utilization. - -Memory and CPU usage before test execution: - -.. image:: api-s3p-results/api_top_before_72h.png - -Memory and CPU usage after test execution: - -.. image:: api-s3p-results/api_top_after_72h.png - - -Performance Test of Policy API -++++++++++++++++++++++++++++++ - -Introduction ------------- - -Performance test of policy-api has the goal of testing the min/avg/max processing time and rest call throughput for all the requests when the number of requests are large enough to saturate the resource and find the bottleneck. - -Setup Details -------------- - -The performance test was performed on a default ONAP OOM installation in the Nordix Lab environment. -JMeter was installed on a separate VM to inject the traffic defined in the -`API performance script -`_ -with the following command: - -.. 
code-block:: bash - - nohup apache-jmeter-5.5/bin/jmeter -n -t policy_api_performance.jmx -l performanceTestResultsPolicyApi.jtl & - -The test was run in the background via “nohup” and “&”, to prevent it from being interrupted. - -Test Plan ---------- - -Performance test plan is the same as stability test plan above. -Only differences are, in performance test, we increase the number of threads up to 20 (simulating 20 users' behaviours at the same time) whereas reducing the test time down to 2.5 hours. - -Run Test --------- - -Running/Triggering performance test will be the same as stability test. That is, launch JMeter pointing to corresponding *.jmx* test plan. The *API_HOST* and *API_PORT* are already set up in *.jmx*. - -**Test Statistics** - -======================= ============= =========== =============================== =============================== =============================== -**Total # of requests** **Success %** **TPS** **Avg. time taken per request** **Min. time taken per request** **Max. time taken per request** -======================= ============= =========== =============================== =============================== =============================== - 16212 100% 1.8 11109 ms 162 ms 237265 ms -======================= ============= =========== =============================== =============================== =============================== - -.. image:: api-s3p-results/api-s3p-jm-2_J.png - -Test Results ------------- - -The following graphs show the response time distributions. - -.. image:: api-s3p-results/api-response-time-distribution_performance_J.png -.. image:: api-s3p-results/api-response-time-overtime_performance_J.png - - - - diff --git a/docs/development/devtools/clamp-s3p-results/Stability_after_stats.png b/docs/development/devtools/clamp-s3p-results/Stability_after_stats.png deleted file mode 100644 index 38242866..00000000 Binary files a/docs/development/devtools/clamp-s3p-results/Stability_after_stats.png and /dev/null differ diff --git a/docs/development/devtools/clamp-s3p-results/acm_performance_jmeter.png b/docs/development/devtools/clamp-s3p-results/acm_performance_jmeter.png deleted file mode 100644 index bad1cf71..00000000 Binary files a/docs/development/devtools/clamp-s3p-results/acm_performance_jmeter.png and /dev/null differ diff --git a/docs/development/devtools/clamp-s3p-results/acm_stability_jmeter.png b/docs/development/devtools/clamp-s3p-results/acm_stability_jmeter.png deleted file mode 100644 index 2f576505..00000000 Binary files a/docs/development/devtools/clamp-s3p-results/acm_stability_jmeter.png and /dev/null differ diff --git a/docs/development/devtools/clamp-s3p-results/acm_stability_table.png b/docs/development/devtools/clamp-s3p-results/acm_stability_table.png deleted file mode 100644 index 28942eff..00000000 Binary files a/docs/development/devtools/clamp-s3p-results/acm_stability_table.png and /dev/null differ diff --git a/docs/development/devtools/clamp-s3p.rst b/docs/development/devtools/clamp-s3p.rst deleted file mode 100644 index eb17d894..00000000 --- a/docs/development/devtools/clamp-s3p.rst +++ /dev/null @@ -1,257 +0,0 @@ -.. This work is licensed under a -.. Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 - -.. _acm-s3p-label: - -.. 
toctree:: - :maxdepth: 2 - -Policy Clamp Automation Composition -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Both the Performance and the Stability tests were executed by performing requests -against acm components installed as docker images in local environment. - - -ACM Deployment -++++++++++++++ - -The docker containers can be deployed via Policy CSIT script. -Clone the Policy/docker repo to the local vm - -.. code-block:: bash - - git clone "https://gerrit.onap.org/r/policy/docker" - -Set the following environment variables on the system before deploying the containers. - -.. code-block:: bash - - export CONTAINER_LOCATION=nexus3.onap.org:10001/ - export PROJECT=clamp - -Invoke the following script from the ~/docker/csit folder. - -.. code-block:: bash - - ./start-all.sh - -This script installs the docker containers of ACM and Policy components required for running the tests. - - -Jmeter setup -++++++++++++ - -Apache jmeter tool is installed either on the same virtual machine or on a different virtual machine. - -.. code-block:: bash - - # Install required packages - sudo apt install -y wget unzip - - # Install JMeter - mkdir -p jmeter - cd jmeter - wget https://dlcdn.apache.org//jmeter/binaries/apache-jmeter-5.5.zip # check if valid version - unzip -q apache-jmeter-5.5.zip - rm apache-jmeter-5.5.zip - - -Setup Verification -++++++++++++++++++ -Ensure the following components are up and running before executing the test. - -- acm runtime component docker image is started and running. -- Participant docker images policy-clamp-cl-pf-ppnt, policy-clamp-cl-http-ppnt, policy-clamp-cl-k8s-ppnt are started and running. -- Dmaap simulator for communication between components. -- mariadb docker container for policy and clampacm database. -- policy-api for communication between policy participant and policy-framework -- Both tests were run via jMeter, which was installed on a separate VM. - -Stability Test of acm components -++++++++++++++++++++++++++++++++ - -Test Plan ---------- -The 72 hours stability test ran the following steps sequentially in a single threaded loop. - -- **Create Policy defaultDomain** - creates an operational policy using policy/api component -- **Delete Policy sampleDomain** - deletes the operational policy sampleDomain using policy/api component -- **Commission AC definition** - commissions the acm definition in runtime -- **Instantiate acm** - Instantiate the acm towards participants -- **Check acm state** - check the current state of acm -- **Change State to PASSIVE** - change the state of the acm to PASSIVE -- **Check acm state** - check the current state of acm -- **Change State to UNINITIALISED** - change the state of the ACM to UNINITIALISED -- **Check acm state** - check the current state of acm -- **Delete instantiated acm** - delete the instantiated acm from all participants -- **Delete ACM Definition** - delete the acm definition on runtime - -The following parameters can be configured on the JMX file for the test. - -- **HTTP Authorization Manager** - used to store user/password authentication details. -- **HTTP Header Manager** - used to store headers which will be used for making HTTP requests. -- **User Defined Variables** - used to store following user defined parameters. 
- -============================= ======================================================================== - **Name** **Description** -============================= ======================================================================== - RUNTIME_HOST IP Address or host name of acm runtime component - RUNTIME_PORT Port number of acm runtime components for making REST API calls - POLICY_PARTICIPANT_HOST IP Address or host name of policy participant - POLICY_PARTICIPANT_HOST_PORT Port number of policy participant -============================= ======================================================================== - -Download the ACM stability.jmx and performance.jmx files from the Policy-Clamp repo. - -Stability jmx file - -.. code-block:: bash - - ~/clamp/testsuites/stability/src/main/resources/testplans/stability.jmx - -The test was run in the background via "nohup", to prevent it from being interrupted: - -.. code-block:: bash - - nohup ./jmeter/apache-jmeter-5.5/bin/jmeter -n -t stability.jmx -l testresults.jtl - -Test Results ------------- - -**Summary** - -Stability test plan was triggered for 72 hours. - -.. Note:: - - .. container:: paragraph - - The assertions of state changes are not completely taken care of, as the stability is ran with acm components - alone, and not including complete policy framework deployment, which makes it difficult for actual state changes from - PASSIVE to RUNNING etc to happen. - -**Test Statistics** - -======================= ================= ================== ================================== -**Total # of requests** **Success %** **Error %** **Average time taken per request** -======================= ================= ================== ================================== -97916 100.00 % 0.00 % 246 ms -======================= ================= ================== ================================== - -**ACM component Setup** - -================ ============================================================ =========================================== ========================= -**CONTAINER ID** **IMAGE** **PORT** **NAME** -================ ============================================================ =========================================== ========================= - a9cb0cd103cf nexus3.onap.org:10001/onap/policy-clamp-runtime-acm:latest 6969/tcp policy-clamp-runtime-acm - 886e572b8438 nexus3.onap.org:10001/onap/policy-clamp-ac-pf-ppnt:latest 6969/tcp policy-clamp-ac-pf-ppnt - 035707b1b95f nexus3.onap.org:10001/onap/policy-api:latest 6969/tcp policy-api - d34204f95ff3 nexus3.onap.org:10001/onap/policy-clamp-ac-http-ppnt:latest 6969/tcp policy-clamp-ac-http-ppnt - 4470e608c9a8 nexus3.onap.org:10001/onap/policy-clamp-ac-k8s-ppnt:latest 6969/tcp policy-clamp-ac-k8s-ppnt - 62229d46b79c nexus3.onap.org:10001/onap/policy-models-simulator:latest 3905/tcp, 6666/tcp, 6668-6670/tcp, 6680/tcp simulator - efaf0ca5e1f0 nexus3.onap.org:10001/mariadb:10.5.8 3306/tcp mariadb - e84cf17db2a4 nexus3.onap.org:10001/onap/policy-pap:latest 6969/tcp policy-pap - 0a16eecd13c9 nexus3.onap.org:10001/onap/policy-apex-pdp:latest 6969/tcp policy-apex-pdp -================ ============================================================ =========================================== ========================= - -.. Note:: - - .. container:: paragraph - - There were no failures during the 72 hours test. - -**JMeter Screenshot** - -.. image:: clamp-s3p-results/acm_stability_jmeter.png - -**JMeter Screenshot** - -.. 
image:: clamp-s3p-results/acm_stability_table.png - -**Memory and CPU usage** - -The memory and CPU usage can be monitored by running "docker stats" command. - -Memory and CPU usage after test execution: - -.. image:: clamp-s3p-results/Stability_after_stats.png - - -Performance Test of acm components -++++++++++++++++++++++++++++++++++ - -Introduction ------------- - -Performance test of acm components has the goal of testing the min/avg/max processing time and rest call throughput for all the requests with multiple requests at the same time. - -Setup Details -------------- - -The performance test is performed on a similar setup as Stability test. The JMeter VM will be sending a large number of REST requests to the runtime component and collecting the statistics. - - -Test Plan ---------- - -Performance test plan is the same as the stability test plan above except for the few differences listed below. - -- Increase the number of threads up to 5 (simulating 5 users' behaviours at the same time). -- Reduce the test time to 2 hours. - -Run Test --------- - -Performance jmx file - -.. code-block:: bash - - ~/clamp/testsuites/performance/src/main/resources/testplans/performance.jmx - -Running/Triggering the performance test will be the same as the stability test. That is, launch JMeter pointing to corresponding *.jmx* test plan. The *RUNTIME_HOST*, *RUNTIME_PORT*, *POLICY_PARTICIPANT_HOST*, *POLICY_PARTICIPANT_HOST_PORT* are already set up in *.jmx* - -.. code-block:: bash - - nohup ./jmeter/apache-jmeter-5.5/bin/jmeter -n -t performance.jmx -l testresults.jtl - -Once the test execution is completed, execute the below script to get the statistics: - -.. code-block:: bash - - $ cd ./clamp/testsuites/performance/src/main/resources/testplans - $ ./results.sh resultTree.log - -Test Results ------------- - -Test results are shown as below. 
- -**Test Statistics** - -======================= ================= ================== ================================== -**Total # of requests** **Success %** **Error %** **Average time taken per request** -======================= ================= ================== ================================== -13591 100 % 0.00 % 249 ms -======================= ================= ================== ================================== - -**ACM component Setup** - -================ ============================================================ =========================================== ========================= -**CONTAINER ID** **IMAGE** **PORT** **NAME** -================ ============================================================ =========================================== ========================= - a9cb0cd103cf nexus3.onap.org:10001/onap/policy-clamp-runtime-acm:latest 6969/tcp policy-clamp-runtime-acm - 886e572b8438 nexus3.onap.org:10001/onap/policy-clamp-ac-pf-ppnt:latest 6969/tcp policy-clamp-ac-pf-ppnt - 035707b1b95f nexus3.onap.org:10001/onap/policy-api:latest 6969/tcp policy-api - d34204f95ff3 nexus3.onap.org:10001/onap/policy-clamp-ac-http-ppnt:latest 6969/tcp policy-clamp-ac-http-ppnt - 4470e608c9a8 nexus3.onap.org:10001/onap/policy-clamp-ac-k8s-ppnt:latest 6969/tcp policy-clamp-ac-k8s-ppnt - 62229d46b79c nexus3.onap.org:10001/onap/policy-models-simulator:latest 3905/tcp, 6666/tcp, 6668-6670/tcp, 6680/tcp simulator - efaf0ca5e1f0 nexus3.onap.org:10001/mariadb:10.5.8 3306/tcp mariadb - e84cf17db2a4 nexus3.onap.org:10001/onap/policy-pap:latest 6969/tcp policy-pap - 0a16eecd13c9 nexus3.onap.org:10001/onap/policy-apex-pdp:latest 6969/tcp policy-apex-pdp -================ ============================================================ =========================================== ========================= - -**JMeter Screenshot** - -.. image:: clamp-s3p-results/acm_performance_jmeter.png diff --git a/docs/development/devtools/devtools.rst b/docs/development/devtools/devtools.rst index c84fb746..c626966f 100644 --- a/docs/development/devtools/devtools.rst +++ b/docs/development/devtools/devtools.rst @@ -11,7 +11,10 @@ Policy Platform Development Tools :depth: 3 -This article explains how to build the ONAP Policy Framework for development purposes and how to run stability/performance tests for a variety of components. To start, the developer should consult the latest ONAP Wiki to familiarize themselves with developer best practices and how-tos to setup their environment, see `https://wiki.onap.org/display/DW/Developer+Best+Practices`. +This article explains how to build the ONAP Policy Framework for development purposes and how to run stability/ +performance tests for a variety of components. To start, the developer should consult the latest ONAP Wiki to +familiarize themselves with developer best practices and how-tos to setup their environment, +see `https://wiki.onap.org/display/DW/Developer+Best+Practices`. 
This article assumes that: @@ -19,16 +22,22 @@ This article assumes that: * You are using a directory called *git* off your home directory *(~/git)* for your git repositories * Your local maven repository is in the location *~/.m2/repository* * You have copied the settings.xml from oparent to *~/.m2/* directory -* You have added settings to access the ONAP Nexus to your M2 configuration, see `Maven Settings Example `_ (bottom of the linked page) +* You have added settings to access the ONAP Nexus to your M2 configuration, + see `Maven Settings Example `_ + (bottom of the linked page) -The procedure documented in this article has been verified to work on a MacBook laptop running macOS Mojave Version 10.14.6 and an Ubuntu 18.06 VM. +The procedure documented in this article has been verified to work on a MacBook laptop running macOS Mojave Version +10.14.6 and an Ubuntu 18.06 VM. Cloning All The Policy Repositories *********************************** -Run a script such as the script below to clone the required modules from the `ONAP git repository `_. This script clones all the ONAP Policy Framework repositories. +Run a script such as the script below to clone the required modules from the +`ONAP git repository `_. +This script clones all the ONAP Policy Framework repositories. -ONAP Policy Framework has dependencies to the ONAP Parent *oparent* module, the ONAP ECOMP SDK *ecompsdkos* module, and the A&AI Schema module. +ONAP Policy Framework has dependencies to the ONAP Parent *oparent* module, the ONAP ECOMP SDK *ecompsdkos* module, +and the A&AI Schema module. .. code-block:: bash @@ -167,7 +176,8 @@ Building ONAP Policy Framework Components rm -fr ~/.m2/repository/org/onap -**Step 2:** A pom such as the one below can be used to build the ONAP Policy Framework modules. Create the *pom.xml* file in the directory *~/git/onap/policy*. +**Step 2:** A pom such as the one below can be used to build the ONAP Policy Framework modules. Create the *pom.xml* +file in the directory *~/git/onap/policy*. .. code-block:: xml :caption: Typical pom.xml to build the ONAP Policy Framework @@ -203,8 +213,9 @@ Building ONAP Policy Framework Components **Policy Architecture/API Transition** -In Dublin, a new Policy Architecture was introduced. The legacy architecture runs in parallel with the new architecture. It will be deprecated after Frankfurt release. -If the developer is only interested in working with the new architecture components, the engine sub-module can be ommitted. +In Dublin, a new Policy Architecture was introduced. The legacy architecture runs in parallel with the new +architecture. It will be deprecated after Frankfurt release. If the developer is only interested in working with the +new architecture components, the engine sub-module can be ommitted. **Step 3:** You can now build the Policy framework. @@ -247,15 +258,17 @@ Another example on how to run the MariaDb is using the docker compose file used Running the API component standalone ++++++++++++++++++++++++++++++++++++ -Assuming you have successfully built the codebase using the instructions above. The only requirement for the API component to run is a -running MariaDb database instance. The easiest way to do this is to run the docker image, please see the mariadb documentation for the latest -information on doing so. Once the mariadb is up and running, a configuration file must be provided to the api in order for it to know how to -connect to the mariadb. 
You can locate the default configuration file in the packaging of the api component: +Assuming you have successfully built the codebase using the instructions above. The only requirement for the API +component to run is a running MariaDb database instance. The easiest way to do this is to run the docker image, please +see the mariadb documentation for the latest information on doing so. Once the mariadb is up and running, a +configuration file must be provided to the api in order for it to know how to connect to the mariadb. You can locate +the default configuration file in the packaging of the api component: `Default Policy API Configuration `_ -You will want to change the fields pertaining to "host", "port" and "databaseUrl" to your local environment settings and start the -policy-api springboot application either using your IDE of choice or using the run goal from Spring Boot Maven plugin: *mvn spring-boot:run*. +You will want to change the fields pertaining to "host", "port" and "databaseUrl" to your local environment settings +and start the policy-api springboot application either using your IDE of choice or using the run goal from Spring Boot +Maven plugin: *mvn spring-boot:run*. Running the API component using Docker Compose ++++++++++++++++++++++++++++++++++++++++++++++ @@ -267,17 +280,19 @@ An example of running the api using a docker compose script is located in the Po Running the PAP component standalone ++++++++++++++++++++++++++++++++++++ -Once you have successfully built the PAP codebase, a running MariaDb database and DMaaP instance will also be required to start up the application. -For MariaDb instance, the easiest way is to run the docker image, please see the mariadb documentation for the latest -information on doing so. For DMaaP, the easiest way during development is to run the DMaaP simulator which is explained in the below sections. -Once the mariadb and DMaaP are running, a configuration file must be provided to the PAP component in order for it to know how to -connect to the mariadb and DMaaP along with other relevant configuration details. You can locate the default configuration file in the packaging of the PAP component: +Once you have successfully built the PAP codebase, a running MariaDb database and DMaaP instance will also be required +to start up the application. For MariaDb instance, the easiest way is to run the docker image, please see the mariadb +documentation for the latest information on doing so. For DMaaP, the easiest way during development is to run the DMaaP +simulator which is explained in the below sections. Once the mariadb and DMaaP are running, a configuration file must +be provided to the PAP component in order for it to know how to connect to the mariadb and DMaaP along with other +relevant configuration details. You can locate the default configuration file in the packaging of the PAP component: `Default PAP Configuration `_ Update the fields related to MariaDB, DMaaP and the RestServer for the application as per your local environment settings. Then to start the application, just run the Spring Boot application using IDE or command line. + Running the Smoke Tests *********************** @@ -308,16 +323,17 @@ The following links contain instructions on how to run the S3P Stability and Per familiar with the Policy Framework components and test any local changes. .. 
toctree:: - :maxdepth: 1 + :maxdepth: 2 + + testing/s3p/run-s3p.rst + testing/s3p/api-s3p.rst + testing/s3p/pap-s3p.rst + testing/s3p/apex-s3p.rst + testing/s3p/drools-s3p.rst + testing/s3p/xacml-s3p.rst + testing/s3p/distribution-s3p.rst + testing/s3p/clamp-s3p.rst - run-s3p.rst - api-s3p.rst - pap-s3p.rst - apex-s3p.rst - drools-s3p.rst - xacml-s3p.rst - distribution-s3p.rst - clamp-s3p.rst Running the Pairwise Tests ************************** @@ -380,7 +396,7 @@ To test these images, CSITs will be run. 3. Clone policy/docker repo. -4. Modify docker/csit/docker-compose-all.yml to use the tagged OpenSuse image. +4. Modify docker/csit/docker-compose.yml to use the tagged OpenSuse image. Replace: diff --git a/docs/development/devtools/distribution-s3p-results/distribution-jmeter-testcases.png b/docs/development/devtools/distribution-s3p-results/distribution-jmeter-testcases.png deleted file mode 100644 index 86a437a7..00000000 Binary files a/docs/development/devtools/distribution-s3p-results/distribution-jmeter-testcases.png and /dev/null differ diff --git a/docs/development/devtools/distribution-s3p-results/distribution-visualvm-snapshot.png b/docs/development/devtools/distribution-s3p-results/distribution-visualvm-snapshot.png deleted file mode 100644 index 03b73d36..00000000 Binary files a/docs/development/devtools/distribution-s3p-results/distribution-visualvm-snapshot.png and /dev/null differ diff --git a/docs/development/devtools/distribution-s3p-results/performance-monitor.png b/docs/development/devtools/distribution-s3p-results/performance-monitor.png deleted file mode 100644 index 71fd7fca..00000000 Binary files a/docs/development/devtools/distribution-s3p-results/performance-monitor.png and /dev/null differ diff --git a/docs/development/devtools/distribution-s3p-results/performance-statistics.png b/docs/development/devtools/distribution-s3p-results/performance-statistics.png deleted file mode 100644 index fecd6c03..00000000 Binary files a/docs/development/devtools/distribution-s3p-results/performance-statistics.png and /dev/null differ diff --git a/docs/development/devtools/distribution-s3p-results/performance-threads.png b/docs/development/devtools/distribution-s3p-results/performance-threads.png deleted file mode 100644 index 2488abd9..00000000 Binary files a/docs/development/devtools/distribution-s3p-results/performance-threads.png and /dev/null differ diff --git a/docs/development/devtools/distribution-s3p-results/performance-threshold.png b/docs/development/devtools/distribution-s3p-results/performance-threshold.png deleted file mode 100644 index 73b20ff2..00000000 Binary files a/docs/development/devtools/distribution-s3p-results/performance-threshold.png and /dev/null differ diff --git a/docs/development/devtools/distribution-s3p-results/stability-monitor.png b/docs/development/devtools/distribution-s3p-results/stability-monitor.png deleted file mode 100644 index bebaaeb0..00000000 Binary files a/docs/development/devtools/distribution-s3p-results/stability-monitor.png and /dev/null differ diff --git a/docs/development/devtools/distribution-s3p-results/stability-statistics.png b/docs/development/devtools/distribution-s3p-results/stability-statistics.png deleted file mode 100644 index 12ee2b5b..00000000 Binary files a/docs/development/devtools/distribution-s3p-results/stability-statistics.png and /dev/null differ diff --git a/docs/development/devtools/distribution-s3p-results/stability-threads.png b/docs/development/devtools/distribution-s3p-results/stability-threads.png 
deleted file mode 100644 index 4cfd7a78..00000000 Binary files a/docs/development/devtools/distribution-s3p-results/stability-threads.png and /dev/null differ diff --git a/docs/development/devtools/distribution-s3p-results/stability-threshold.png b/docs/development/devtools/distribution-s3p-results/stability-threshold.png deleted file mode 100644 index f348761b..00000000 Binary files a/docs/development/devtools/distribution-s3p-results/stability-threshold.png and /dev/null differ diff --git a/docs/development/devtools/distribution-s3p.rst b/docs/development/devtools/distribution-s3p.rst deleted file mode 100644 index 55966738..00000000 --- a/docs/development/devtools/distribution-s3p.rst +++ /dev/null @@ -1,389 +0,0 @@ -.. This work is licensed under a -.. Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 - -.. _distribution-s3p-label: - -Policy Distribution component -############################# - -72h Stability and 4h Performance Tests of Distribution -++++++++++++++++++++++++++++++++++++++++++++++++++++++ - -Common Setup ------------- - -Update the ubuntu software installer - -.. code-block:: bash - - sudo apt update - -Install Java - -.. code-block:: bash - - sudo apt install -y openjdk-11-jdk - -Ensure that the Java version that is executing is OpenJDK version 11 - -.. code-block:: bash - - $ java --version - openjdk 11.0.11 2021-04-20 - OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2.18.04) - OpenJDK 64-Bit Server VM (build 11.0.11+9-Ubuntu-0ubuntu2.18.04, mixed mode) - -Install Docker and Docker Compose - -.. code-block:: bash - - # Add docker repository - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg - - echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \ - $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null - - sudo apt update - - # Install docker - sudo apt-get install docker-ce docker-ce-cli containerd.io - -Change the permissions of the Docker socket file - -.. code-block:: bash - - sudo chmod 666 /var/run/docker.sock - -Check the status of the Docker service and ensure it is running correctly - -.. code-block:: bash - - systemctl status --no-pager docker - docker.service - Docker Application Container Engine - Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled) - Active: active (running) since Wed 2020-10-14 13:59:40 UTC; 1 weeks 0 days ago - # ... (truncated for brevity) - - docker ps - CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES - -Install and verify docker-compose - -.. code-block:: bash - - # Install compose (check if version is still available or update as necessary) - sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose - sudo chmod +x /usr/local/bin/docker-compose - - # Check if install was successful - docker-compose --version - -Clone the policy-distribution repo to access the test scripts - -.. code-block:: bash - - git clone https://gerrit.onap.org/r/policy/distribution - -.. _setup-distribution-s3p-components: - -Start services for MariaDB, Policy API, PAP and Distribution ------------------------------------------------------------- - -Navigate to the main folder for scripts to setup services: - -.. 
code-block:: bash - - cd ~/distribution/testsuites/stability/src/main/resources/setup - -Modify the versions.sh script to match all the versions being tested. - -.. code-block:: bash - - vi ~/distribution/testsuites/stability/src/main/resources/setup/versions.sh - -Ensure the correct docker image versions are specified - e.g. for Kohn-M4 - -- export POLICY_DIST_VERSION=2.8-SNAPSHOT - -Run the start.sh script to start the components. After installation, script will execute -``docker ps`` and show the running containers. - -.. code-block:: bash - - ./start.sh - - Creating network "setup_default" with the default driver - Creating policy-distribution ... done - Creating mariadb ... done - Creating simulator ... done - Creating policy-db-migrator ... done - Creating policy-api ... done - Creating policy-pap ... done - - fa4e9bd26e60 nexus3.onap.org:10001/onap/policy-pap:2.7-SNAPSHOT-latest "/opt/app/policy/pap…" 1 second ago Up Less than a second 6969/tcp policy-pap - efb65dd95020 nexus3.onap.org:10001/onap/policy-api:2.7-SNAPSHOT-latest "/opt/app/policy/api…" 1 second ago Up Less than a second 6969/tcp policy-api - cf602c2770ba nexus3.onap.org:10001/onap/policy-db-migrator:2.5-SNAPSHOT-latest "/opt/app/policy/bin…" 2 seconds ago Up 1 second 6824/tcp policy-db-migrator - 99383d2fecf4 pdp/simulator "sh /opt/app/policy/…" 2 seconds ago Up 1 second pdp-simulator - 3c0e205c5f47 nexus3.onap.org:10001/onap/policy-models-simulator:2.7-SNAPSHOT-latest "simulators.sh" 3 seconds ago Up 2 seconds 3904/tcp simulator - 3ad00d90d6a3 nexus3.onap.org:10001/onap/policy-distribution:2.8-SNAPSHOT-latest "/opt/app/policy/bin…" 3 seconds ago Up 2 seconds 6969/tcp, 9090/tcp policy-distribution - bb0b915cdecc nexus3.onap.org:10001/mariadb:10.5.8 "docker-entrypoint.s…" 3 seconds ago Up 2 seconds 3306/tcp mariadb - -.. note:: - The containers on this docker-compose are running with HTTP configuration. For HTTPS, ports - and configurations will need to be changed, as well certificates and keys must be generated - for security. - - -Install JMeter --------------- - -Download and install JMeter - -.. code-block:: bash - - # Install required packages - sudo apt install -y wget unzip - - # Install JMeter - mkdir -p jmeter - cd jmeter - wget https://dlcdn.apache.org//jmeter/binaries/apache-jmeter-5.5.zip # check if valid version - unzip -q apache-jmeter-5.5.zip - rm apache-jmeter-5.5.zip - - -Install & configure visualVM --------------------------------------- - -VisualVM needs to be installed in the virtual machine running Distribution. It will be used to -monitor CPU, Memory and GC for Distribution while the stability tests are running. - -.. code-block:: bash - - sudo apt install -y visualvm - -Run these commands to configure permissions (if permission errors happens, use ``sudo su``) - -.. code-block:: bash - - # Set globally accessable permissions on policy file - sudo chmod 777 /usr/lib/jvm/java-11-openjdk-amd64/bin/visualvm.policy - - # Create Java security policy file for VisualVM - sudo cat > /usr/lib/jvm/java-11-openjdk-amd64/bin/visualvm.policy << EOF - grant codebase "jrt:/jdk.jstatd" { - permission java.security.AllPermission; - }; - grant codebase "jrt:/jdk.internal.jvmstat" { - permission java.security.AllPermission; - }; - EOF - -Run the following command to start jstatd using port 1111 - -.. 
code-block:: bash - - /usr/lib/jvm/java-11-openjdk-amd64/bin/jstatd -p 1111 -J-Djava.security.policy=/usr/lib/jvm/java-11-openjdk-amd64/bin/visualvm.policy & - -Run visualVM to connect to POLICY_DISTRIBUTION_IP:9090 - -.. code-block:: bash - - # Get the Policy Distribution container IP - echo $(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' policy-distribution) - - # Start visual vm - visualvm & - -This will load up the visualVM GUI - -Connect to Distribution JMX Port. - - 1. On the visualvm toolbar, click on "Add JMX Connection" - 2. Enter the Distribution container IP and Port 9090. This is the JMX port exposed by the - distribution container - 3. Double click on the newly added nodes under "Remotes" to start monitoring CPU, Memory & GC. - -Example Screenshot of visualVM - -.. image:: distribution-s3p-results/distribution-visualvm-snapshot.png - - -Stability Test of Policy Distribution -+++++++++++++++++++++++++++++++++++++ - -Introduction ------------- - -The 72 hour Stability Test for policy distribution has the goal of introducing a steady flow of -transactions initiated from a test client server running JMeter. The policy distribution is -configured with a special FileSystemReception plugin to monitor a local directory for newly added -csar files to be processed by itself. The input CSAR will be added/removed by the test client -(JMeter) and the result will be pulled from the backend (PAP and PolicyAPI) by the test client -(JMeter). - -The test will be performed in an environment where Jmeter will continuously add/remove a test csar -into the special directory where policy distribution is monitoring and will then get the processed -results from PAP and PolicyAPI to verify the successful deployment of the policy. The policy will -then be undeployed and the test will loop continuously until 72 hours have elapsed. - - -Test Plan Sequence ------------------- - -The 72h stability test will run the following steps sequentially in a single threaded loop. - -- **Delete Old CSAR** - Checks if CSAR already exists in the watched directory, if so it deletes it -- **Add CSAR** - Adds CSAR to the directory that distribution is watching -- **Get Healthcheck** - Ensures Healthcheck is returning 200 OK -- **Get Statistics** - Ensures Statistics is returning 200 OK -- **Get Metrics** - Ensures Metrics is returning 200 OK -- **Assert PDP Group Query** - Checks that PDPGroupQuery contains the deployed policy -- **Assert PoliciesDeployed** - Checks that the policy is deployed -- **Undeploy/Delete Policy** - Undeploys and deletes the Policy for the next loop -- **Assert PDP Group Query for Deleted Policy** - Ensures the policy has been removed and does not exist - -The following steps can be used to configure the parameters of the test plan. - -- **HTTP Authorization Manager** - used to store user/password authentication details. -- **HTTP Header Manager** - used to store headers which will be used for making HTTP requests. -- **User Defined Variables** - used to store following user defined parameters. 
- -========== =============================================== - **Name** **Description** -========== =============================================== - PAP_HOST IP Address or host name of PAP component - PAP_PORT Port number of PAP for making REST API calls - API_HOST IP Address or host name of API component - API_PORT Port number of API for making REST API calls - DURATION Duration of Test -========== =============================================== - -Screenshot of Distribution stability test plan - -.. image:: distribution-s3p-results/distribution-jmeter-testcases.png - - -Running the Test Plan ---------------------- - -Check if the /tmp/policydistribution/distributionmount exists as it was created during the start.sh -script execution. If not, run the following commands to create folder and change folder permissions -to allow the testplan to insert the CSAR into the /tmp/policydistribution/distributionmount folder. - -.. note:: - Make sure that only csar file is being loaded in the watched folder and log generation is in a - logs folder, as any sort of zip file can be understood by distribution as a policy file. A - logback.xml configuration file is available under setup/distribution folder. - -.. code-block:: bash - - sudo mkdir -p /tmp/policydistribution/distributionmount - sudo chmod -R a+trwx /tmp - - -Navigate to the stability test folder. - -.. code-block:: bash - - cd ~/distribution/testsuites/stability/src/main/resources/testplans/ - -Execute the run_test.sh - -.. code-block:: bash - - ./run_test.sh - - -Test Results ------------- - -**Summary** - -- Stability test plan was triggered for 72 hours. -- No errors were reported - -**Test Statistics** - -.. image:: distribution-s3p-results/stability-statistics.png -.. image:: distribution-s3p-results/stability-threshold.png - -**VisualVM Screenshots** - -.. image:: distribution-s3p-results/stability-monitor.png -.. image:: distribution-s3p-results/stability-threads.png - - -Performance Test of Policy Distribution -+++++++++++++++++++++++++++++++++++++++ - -Introduction ------------- - -The 4h Performance Test of Policy Distribution has the goal of testing the min/avg/max processing -time and rest call throughput for all the requests when the number of requests are large enough to -saturate the resource and find the bottleneck. - -It also tests that distribution can handle multiple policy CSARs and that these are deployed within -60 seconds consistently. - - -Setup Details -------------- - -The performance test is based on the same setup as the distribution stability tests. - - -Test Plan Sequence ------------------- - -Performance test plan is different from the stability test plan. - -- Instead of handling one policy csar at a time, multiple csar's are deployed within the watched - folder at the exact same time. -- We expect all policies from these csar's to be deployed within 60 seconds. -- There are also multithreaded tests running towards the healthcheck and statistics endpoints of - the distribution service. - - -Running the Test Plan ---------------------- - -Check if /tmp folder permissions to allow the Testplan to insert the CSAR into the -/tmp/policydistribution/distributionmount folder. -Clean up from previous run. If necessary, put containers down with script ``down.sh`` from setup -folder mentioned on :ref:`Setup components ` - -.. code-block:: bash - - sudo mkdir -p /tmp/policydistribution/distributionmount - sudo chmod -R a+trwx /tmp - -Navigate to the testplan folder and execute the test script: - -.. 
code-block:: bash - - cd ~/distribution/testsuites/performance/src/main/resources/testplans/ - ./run_test.sh - - -Test Results ------------ - -**Summary** - -- Performance test plan was triggered for 4 hours. -- No errors were reported. - -**Test Statistics** - -.. image:: distribution-s3p-results/performance-statistics.png -.. image:: distribution-s3p-results/performance-threshold.png - -**VisualVM Screenshots** - -.. image:: distribution-s3p-results/performance-monitor.png -.. image:: distribution-s3p-results/performance-threads.png - -End of document diff --git a/docs/development/devtools/drools-s3p.rst b/docs/development/devtools/drools-s3p.rst deleted file mode 100644 index bc8b79b3..00000000 --- a/docs/development/devtools/drools-s3p.rst +++ /dev/null @@ -1,74 +0,0 @@ -.. This work is licensed under a -.. Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 - -.. _drools-s3p-label: - -.. toctree:: - :maxdepth: 2 - -Policy Drools PDP component -~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Both the Performance and the Stability tests were executed against an ONAP installation in the Policy tenant -in the UNH lab, from the admin VM running the JMeter tool to inject the load. - -General Setup -************* - -Agent VMs in this lab have the following configuration: - -- 16GB RAM -- 8 VCPU - -JMeter is run from the admin VM. - -The drools-pdp container uses the JVM memory and CPU settings from the default OOM installation. - -Other ONAP components exercised during the stability tests were: - -- Policy XACML PDP to process guard queries for each transaction. -- DMaaP to carry PDP-D and JMeter-initiated traffic to complete transactions. -- Policy API to create (and delete at the end of the tests) policies for each - scenario under test. -- Policy PAP to deploy (and undeploy at the end of the tests) policies for each scenario under test. -- The XACML PDP stability test was running at the same time. -The following components are simulated during the tests: - -- SDNR. - Stability Test of Policy PDP-D ****************************** - -PDP-D performance ================= - -The tests focused on the following use cases running in parallel: - -- vCPE -- SON O1 -- SON A1 - -Three threads ran in parallel, one for each scenario. The transactions were initiated -by each JMeter thread group. Each thread initiated a transaction, monitored the transaction, and -started the next one 250 ms later. - -The results are illustrated in the following graphs: -.. image:: images/s3p-drools-1.png -.. image:: images/s3p-drools-2.png -.. image:: images/s3p-drools-3.png - - -Commentary ========== - -There was an unexpected failure rate of around 1% during the 72-hour run. This can also be seen in the -final output of JMeter: - -.. code-block:: bash - - summary = 4751546 in 72:00:37 = 18.3/s Avg: 150 Min: 0 Max: 15087 Err: 47891 (1.01%) - -Sporadic database errors have been observed and appear to be related to the 1% failure rate.
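
As a hedged aside, the error percentage quoted above can be cross-checked directly from the results file, assuming the default CSV JTL layout in which the eighth column is the ``success`` flag (adjust the column index if the sampler configuration differs; the file name below is a placeholder):

.. code-block:: bash

    # Count failed samples in a CSV-format JMeter results file and print the error rate.
    awk -F',' 'NR > 1 { total++; if ($8 == "false") err++ }
        END { printf "errors: %d/%d (%.2f%%)\n", err, total, 100 * err / total }' results.jtl
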
diff --git a/docs/development/devtools/images/s3p-drools-1.png b/docs/development/devtools/images/s3p-drools-1.png deleted file mode 100644 index 3c1e06f7..00000000 Binary files a/docs/development/devtools/images/s3p-drools-1.png and /dev/null differ diff --git a/docs/development/devtools/images/s3p-drools-2.png b/docs/development/devtools/images/s3p-drools-2.png deleted file mode 100644 index 7e124716..00000000 Binary files a/docs/development/devtools/images/s3p-drools-2.png and /dev/null differ diff --git a/docs/development/devtools/images/s3p-drools-3.png b/docs/development/devtools/images/s3p-drools-3.png deleted file mode 100644 index 50f2c148..00000000 Binary files a/docs/development/devtools/images/s3p-drools-3.png and /dev/null differ diff --git a/docs/development/devtools/images/s3p-drools-4.png b/docs/development/devtools/images/s3p-drools-4.png deleted file mode 100644 index 369d1f33..00000000 Binary files a/docs/development/devtools/images/s3p-drools-4.png and /dev/null differ diff --git a/docs/development/devtools/images/s3p-perf-xacml.png b/docs/development/devtools/images/s3p-perf-xacml.png deleted file mode 100644 index 2c27967f..00000000 Binary files a/docs/development/devtools/images/s3p-perf-xacml.png and /dev/null differ diff --git a/docs/development/devtools/pap-s3p-results/pap_metrics_after_72h.txt b/docs/development/devtools/pap-s3p-results/pap_metrics_after_72h.txt deleted file mode 100644 index 8864726e..00000000 --- a/docs/development/devtools/pap-s3p-results/pap_metrics_after_72h.txt +++ /dev/null @@ -1,306 +0,0 @@ -# HELP logback_events_total Number of error level events that made it to the logs -# TYPE logback_events_total counter -logback_events_total{level="warn",} 23.0 -logback_events_total{level="debug",} 0.0 -logback_events_total{level="error",} 1.0 -logback_events_total{level="trace",} 0.0 -logback_events_total{level="info",} 1709270.0 -# HELP system_cpu_usage The "recent cpu usage" for the whole system -# TYPE system_cpu_usage gauge -system_cpu_usage 0.1270718232044199 -# HELP hikaricp_connections_acquire_seconds Connection acquire time -# TYPE hikaricp_connections_acquire_seconds summary -hikaricp_connections_acquire_seconds_count{pool="HikariPool-1",} 298222.0 -hikaricp_connections_acquire_seconds_sum{pool="HikariPool-1",} 321.533641537 -# HELP hikaricp_connections_acquire_seconds_max Connection acquire time -# TYPE hikaricp_connections_acquire_seconds_max gauge -hikaricp_connections_acquire_seconds_max{pool="HikariPool-1",} 0.006766789 -# HELP tomcat_sessions_created_sessions_total -# TYPE tomcat_sessions_created_sessions_total counter -tomcat_sessions_created_sessions_total 158246.0 -# HELP jvm_classes_unloaded_classes_total The total number of classes unloaded since the Java virtual machine has started execution -# TYPE jvm_classes_unloaded_classes_total counter -jvm_classes_unloaded_classes_total 799.0 -# HELP jvm_gc_memory_allocated_bytes_total Incremented for an increase in the size of the (young) heap memory pool after one GC to before the next -# TYPE jvm_gc_memory_allocated_bytes_total counter -jvm_gc_memory_allocated_bytes_total 3.956513686328E12 -# HELP tomcat_sessions_alive_max_seconds -# TYPE tomcat_sessions_alive_max_seconds gauge -tomcat_sessions_alive_max_seconds 2488.0 -# HELP spring_data_repository_invocations_seconds_max -# TYPE spring_data_repository_invocations_seconds_max gauge 
-spring_data_repository_invocations_seconds_max{exception="None",method="findByKeyParentKeyNameAndKeyParentKeyVersion",repository="PolicyStatusRepository",state="SUCCESS",} 0.0 -spring_data_repository_invocations_seconds_max{exception="None",method="saveAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.0 -spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpGroupRepository",state="SUCCESS",} 0.0 -spring_data_repository_invocations_seconds_max{exception="None",method="findByPdpGroup",repository="PolicyAuditRepository",state="SUCCESS",} 0.0 -spring_data_repository_invocations_seconds_max{exception="None",method="findById",repository="ToscaServiceTemplateRepository",state="SUCCESS",} 0.0 -spring_data_repository_invocations_seconds_max{exception="None",method="findByPdpGroup",repository="PolicyStatusRepository",state="SUCCESS",} 0.0 -spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 0.863253324 -spring_data_repository_invocations_seconds_max{exception="None",method="deleteById",repository="PdpGroupRepository",state="SUCCESS",} 0.0 -spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.144251855 -spring_data_repository_invocations_seconds_max{exception="None",method="saveAll",repository="PolicyAuditRepository",state="SUCCESS",} 0.0 -spring_data_repository_invocations_seconds_max{exception="None",method="findByPdpGroupAndNameAndVersion",repository="PolicyAuditRepository",state="SUCCESS",} 0.0 -spring_data_repository_invocations_seconds_max{exception="None",method="findById",repository="ToscaNodeTemplateRepository",state="SUCCESS",} 0.0 -spring_data_repository_invocations_seconds_max{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 0.0 -spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpSubGroupRepository",state="SUCCESS",} 0.0 -spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 0.0 -spring_data_repository_invocations_seconds_max{exception="None",method="deleteAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.0 -# HELP spring_data_repository_invocations_seconds -# TYPE spring_data_repository_invocations_seconds summary -spring_data_repository_invocations_seconds_count{exception="None",method="findByKeyParentKeyNameAndKeyParentKeyVersion",repository="PolicyStatusRepository",state="SUCCESS",} 15740.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="findByKeyParentKeyNameAndKeyParentKeyVersion",repository="PolicyStatusRepository",state="SUCCESS",} 3116.970495755 -spring_data_repository_invocations_seconds_count{exception="None",method="saveAll",repository="PolicyStatusRepository",state="SUCCESS",} 113798.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="saveAll",repository="PolicyStatusRepository",state="SUCCESS",} 480.71823635 -spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpGroupRepository",state="SUCCESS",} 28085.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpGroupRepository",state="SUCCESS",} 9.645079055 -spring_data_repository_invocations_seconds_count{exception="None",method="findByPdpGroup",repository="PolicyAuditRepository",state="SUCCESS",} 6981.0 
-spring_data_repository_invocations_seconds_sum{exception="None",method="findByPdpGroup",repository="PolicyAuditRepository",state="SUCCESS",} 616.931466813 -spring_data_repository_invocations_seconds_count{exception="None",method="findById",repository="ToscaServiceTemplateRepository",state="SUCCESS",} 46250.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="findById",repository="ToscaServiceTemplateRepository",state="SUCCESS",} 8406.051483096 -spring_data_repository_invocations_seconds_count{exception="None",method="findByPdpGroup",repository="PolicyStatusRepository",state="SUCCESS",} 42765.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="findByPdpGroup",repository="PolicyStatusRepository",state="SUCCESS",} 10979.997264985 -spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 101780.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 20530.858991818 -spring_data_repository_invocations_seconds_count{exception="None",method="deleteById",repository="PdpGroupRepository",state="SUCCESS",} 1.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="deleteById",repository="PdpGroupRepository",state="SUCCESS",} 0.004567796 -spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 32620.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 11459.109680167 -spring_data_repository_invocations_seconds_count{exception="None",method="saveAll",repository="PolicyAuditRepository",state="SUCCESS",} 28080.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="saveAll",repository="PolicyAuditRepository",state="SUCCESS",} 45.836464781 -spring_data_repository_invocations_seconds_count{exception="None",method="findByPdpGroupAndNameAndVersion",repository="PolicyAuditRepository",state="SUCCESS",} 13960.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="findByPdpGroupAndNameAndVersion",repository="PolicyAuditRepository",state="SUCCESS",} 1765.653676534 -spring_data_repository_invocations_seconds_count{exception="None",method="findById",repository="ToscaNodeTemplateRepository",state="SUCCESS",} 21331.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="findById",repository="ToscaNodeTemplateRepository",state="SUCCESS",} 1.286926983 -spring_data_repository_invocations_seconds_count{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 13970.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 4175.556697162 -spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpSubGroupRepository",state="SUCCESS",} 2.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpSubGroupRepository",state="SUCCESS",} 0.864602048 -spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 36866.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 7686.38602325 
-spring_data_repository_invocations_seconds_count{exception="None",method="deleteAll",repository="PolicyStatusRepository",state="SUCCESS",} 56899.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="deleteAll",repository="PolicyStatusRepository",state="SUCCESS",} 882.098525295 -# HELP jvm_threads_states_threads The current number of threads having NEW state -# TYPE jvm_threads_states_threads gauge -jvm_threads_states_threads{state="runnable",} 9.0 -jvm_threads_states_threads{state="blocked",} 0.0 -jvm_threads_states_threads{state="waiting",} 29.0 -jvm_threads_states_threads{state="timed-waiting",} 8.0 -jvm_threads_states_threads{state="new",} 0.0 -jvm_threads_states_threads{state="terminated",} 0.0 -# HELP process_cpu_usage The "recent cpu usage" for the Java Virtual Machine process -# TYPE process_cpu_usage gauge -process_cpu_usage 0.006697923643670462 -# HELP tomcat_sessions_expired_sessions_total -# TYPE tomcat_sessions_expired_sessions_total counter -tomcat_sessions_expired_sessions_total 158186.0 -# HELP jvm_buffer_total_capacity_bytes An estimate of the total capacity of the buffers in this pool -# TYPE jvm_buffer_total_capacity_bytes gauge -jvm_buffer_total_capacity_bytes{id="mapped",} 0.0 -jvm_buffer_total_capacity_bytes{id="direct",} 169210.0 -# HELP process_start_time_seconds Start time of the process since unix epoch. -# TYPE process_start_time_seconds gauge -process_start_time_seconds 1.649849957815E9 -# HELP hikaricp_connections_creation_seconds_max Connection creation time -# TYPE hikaricp_connections_creation_seconds_max gauge -hikaricp_connections_creation_seconds_max{pool="HikariPool-1",} 0.51 -# HELP hikaricp_connections_creation_seconds Connection creation time -# TYPE hikaricp_connections_creation_seconds summary -hikaricp_connections_creation_seconds_count{pool="HikariPool-1",} 3936.0 -hikaricp_connections_creation_seconds_sum{pool="HikariPool-1",} 942.369 -# HELP hikaricp_connections_max Max connections -# TYPE hikaricp_connections_max gauge -hikaricp_connections_max{pool="HikariPool-1",} 10.0 -# HELP jdbc_connections_min Minimum number of idle connections in the pool. 
-# TYPE jdbc_connections_min gauge -jdbc_connections_min{name="dataSource",} 10.0 -# HELP jvm_memory_committed_bytes The amount of memory in bytes that is committed for the Java virtual machine to use -# TYPE jvm_memory_committed_bytes gauge -jvm_memory_committed_bytes{area="heap",id="Tenured Gen",} 1.76160768E8 -jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 4.9020928E7 -jvm_memory_committed_bytes{area="heap",id="Eden Space",} 7.0582272E7 -jvm_memory_committed_bytes{area="nonheap",id="Metaspace",} 1.1890688E8 -jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 2555904.0 -jvm_memory_committed_bytes{area="heap",id="Survivor Space",} 8781824.0 -jvm_memory_committed_bytes{area="nonheap",id="Compressed Class Space",} 1.5450112E7 -jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 3.1850496E7 -# HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset -# TYPE jvm_threads_peak_threads gauge -jvm_threads_peak_threads 51.0 -# HELP hikaricp_connections_idle Idle connections -# TYPE hikaricp_connections_idle gauge -hikaricp_connections_idle{pool="HikariPool-1",} 10.0 -# HELP hikaricp_connections Total connections -# TYPE hikaricp_connections gauge -hikaricp_connections{pool="HikariPool-1",} 10.0 -# HELP http_server_requests_seconds -# TYPE http_server_requests_seconds summary -http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}/{policyName}/{policyVersion}",} 13960.0 -http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}/{policyName}/{policyVersion}",} 4066.52698026 -http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 22470.0 -http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 3622.506076129 -http_server_requests_seconds_count{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/deployments/batch",} 13961.0 -http_server_requests_seconds_sum{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/deployments/batch",} 27890.47103474 -http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status",} 14404.0 -http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status",} 7821.856496806 -http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 15738.0 -http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 5848.655389921 -http_server_requests_seconds_count{exception="None",method="DELETE",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies/{name}",} 7059.0 -http_server_requests_seconds_sum{exception="None",method="DELETE",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies/{name}",} 15554.208182423 -http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}",} 6981.0 
-http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}",} 1756.291465092 -http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/deployed",} 6979.0 -http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/deployed",} 1934.785157616 -http_server_requests_seconds_count{exception="None",method="PUT",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 4.0 -http_server_requests_seconds_sum{exception="None",method="PUT",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 7.281567744 -http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 31395.0 -http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 13046.055299896 -http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 11237.0 -http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 6979.030310367 -http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/components/healthcheck",} 6979.0 -http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/components/healthcheck",} 3741.773622509 -http_server_requests_seconds_count{exception="None",method="GET",outcome="CLIENT_ERROR",status="404",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 2.0 -http_server_requests_seconds_sum{exception="None",method="GET",outcome="CLIENT_ERROR",status="404",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 1.318371311 -http_server_requests_seconds_count{exception="None",method="DELETE",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 1.0 -http_server_requests_seconds_sum{exception="None",method="DELETE",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 1.026191347 -http_server_requests_seconds_count{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies",} 7077.0 -http_server_requests_seconds_sum{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies",} 14603.589203056 -http_server_requests_seconds_count{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/batch",} 2.0 -http_server_requests_seconds_sum{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/batch",} 1.877099877 -# HELP http_server_requests_seconds_max -# TYPE http_server_requests_seconds_max gauge -http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}/{policyName}/{policyVersion}",} 0.0 -http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 0.147881793 -http_server_requests_seconds_max{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/deployments/batch",} 0.0 -http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status",} 
0.0 -http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 0.0 -http_server_requests_seconds_max{exception="None",method="DELETE",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies/{name}",} 0.0 -http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}",} 0.0 -http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/deployed",} 0.0 -http_server_requests_seconds_max{exception="None",method="PUT",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 0.0 -http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 0.227488581 -http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 0.272733892 -http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/components/healthcheck",} 0.0 -http_server_requests_seconds_max{exception="None",method="GET",outcome="CLIENT_ERROR",status="404",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 0.0 -http_server_requests_seconds_max{exception="None",method="DELETE",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 0.0 -http_server_requests_seconds_max{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies",} 0.0 -http_server_requests_seconds_max{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/batch",} 0.0 -# HELP jvm_buffer_count_buffers An estimate of the number of buffers in the pool -# TYPE jvm_buffer_count_buffers gauge -jvm_buffer_count_buffers{id="mapped",} 0.0 -jvm_buffer_count_buffers{id="direct",} 10.0 -# HELP hikaricp_connections_pending Pending threads -# TYPE hikaricp_connections_pending gauge -hikaricp_connections_pending{pool="HikariPool-1",} 0.0 -# HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time -# TYPE system_load_average_1m gauge -system_load_average_1m 0.6 -# HELP jvm_memory_used_bytes The amount of used memory -# TYPE jvm_memory_used_bytes gauge -jvm_memory_used_bytes{area="heap",id="Tenured Gen",} 6.7084064E7 -jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 4.110464E7 -jvm_memory_used_bytes{area="heap",id="Eden Space",} 3.329572E7 -jvm_memory_used_bytes{area="nonheap",id="Metaspace",} 1.12499384E8 -jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 1394432.0 -jvm_memory_used_bytes{area="heap",id="Survivor Space",} 463856.0 -jvm_memory_used_bytes{area="nonheap",id="Compressed Class Space",} 1.3096368E7 -jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 3.1773568E7 -# HELP tomcat_sessions_rejected_sessions_total -# TYPE tomcat_sessions_rejected_sessions_total counter -tomcat_sessions_rejected_sessions_total 0.0 -# HELP jvm_gc_live_data_size_bytes Size of long-lived heap memory pool after reclamation -# TYPE jvm_gc_live_data_size_bytes gauge -jvm_gc_live_data_size_bytes 5.0955016E7 -# HELP jvm_gc_memory_promoted_bytes_total Count of positive increases in the size of the old generation memory 
pool before GC to after GC -# TYPE jvm_gc_memory_promoted_bytes_total counter -jvm_gc_memory_promoted_bytes_total 1.692072808E9 -# HELP tomcat_sessions_active_max_sessions -# TYPE tomcat_sessions_active_max_sessions gauge -tomcat_sessions_active_max_sessions 1101.0 -# HELP jdbc_connections_active Current number of active connections that have been allocated from the data source. -# TYPE jdbc_connections_active gauge -jdbc_connections_active{name="dataSource",} 0.0 -# HELP jdbc_connections_max Maximum number of active connections that can be allocated at the same time. -# TYPE jdbc_connections_max gauge -jdbc_connections_max{name="dataSource",} 10.0 -# HELP jvm_memory_max_bytes The maximum amount of memory in bytes that can be used for memory management -# TYPE jvm_memory_max_bytes gauge -jvm_memory_max_bytes{area="heap",id="Tenured Gen",} 2.803236864E9 -jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 1.22912768E8 -jvm_memory_max_bytes{area="heap",id="Eden Space",} 1.12132096E9 -jvm_memory_max_bytes{area="nonheap",id="Metaspace",} -1.0 -jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 5828608.0 -jvm_memory_max_bytes{area="heap",id="Survivor Space",} 1.40115968E8 -jvm_memory_max_bytes{area="nonheap",id="Compressed Class Space",} 1.073741824E9 -jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 1.22916864E8 -# HELP jvm_threads_daemon_threads The current number of live daemon threads -# TYPE jvm_threads_daemon_threads gauge -jvm_threads_daemon_threads 34.0 -# HELP process_files_open_files The open file descriptor count -# TYPE process_files_open_files gauge -process_files_open_files 36.0 -# HELP system_cpu_count The number of processors available to the Java virtual machine -# TYPE system_cpu_count gauge -system_cpu_count 1.0 -# HELP jvm_gc_pause_seconds Time spent in GC pause -# TYPE jvm_gc_pause_seconds summary -jvm_gc_pause_seconds_count{action="end of major GC",cause="Metadata GC Threshold",} 2.0 -jvm_gc_pause_seconds_sum{action="end of major GC",cause="Metadata GC Threshold",} 0.391 -jvm_gc_pause_seconds_count{action="end of major GC",cause="Allocation Failure",} 13.0 -jvm_gc_pause_seconds_sum{action="end of major GC",cause="Allocation Failure",} 5.98 -jvm_gc_pause_seconds_count{action="end of minor GC",cause="Allocation Failure",} 56047.0 -jvm_gc_pause_seconds_sum{action="end of minor GC",cause="Allocation Failure",} 549.532 -jvm_gc_pause_seconds_count{action="end of minor GC",cause="GCLocker Initiated GC",} 9.0 -jvm_gc_pause_seconds_sum{action="end of minor GC",cause="GCLocker Initiated GC",} 0.081 -# HELP jvm_gc_pause_seconds_max Time spent in GC pause -# TYPE jvm_gc_pause_seconds_max gauge -jvm_gc_pause_seconds_max{action="end of major GC",cause="Metadata GC Threshold",} 0.0 -jvm_gc_pause_seconds_max{action="end of major GC",cause="Allocation Failure",} 0.0 -jvm_gc_pause_seconds_max{action="end of minor GC",cause="Allocation Failure",} 0.0 -jvm_gc_pause_seconds_max{action="end of minor GC",cause="GCLocker Initiated GC",} 0.0 -# HELP hikaricp_connections_min Min connections -# TYPE hikaricp_connections_min gauge -hikaricp_connections_min{pool="HikariPool-1",} 10.0 -# HELP process_files_max_files The maximum file descriptor count -# TYPE process_files_max_files gauge -process_files_max_files 1048576.0 -# HELP hikaricp_connections_active Active connections -# TYPE hikaricp_connections_active gauge -hikaricp_connections_active{pool="HikariPool-1",} 0.0 -# HELP jvm_threads_live_threads The current number of live threads 
including both daemon and non-daemon threads -# TYPE jvm_threads_live_threads gauge -jvm_threads_live_threads 46.0 -# HELP process_uptime_seconds The uptime of the Java virtual machine -# TYPE process_uptime_seconds gauge -process_uptime_seconds 510671.853 -# HELP hikaricp_connections_usage_seconds Connection usage time -# TYPE hikaricp_connections_usage_seconds summary -hikaricp_connections_usage_seconds_count{pool="HikariPool-1",} 298222.0 -hikaricp_connections_usage_seconds_sum{pool="HikariPool-1",} 125489.766 -# HELP hikaricp_connections_usage_seconds_max Connection usage time -# TYPE hikaricp_connections_usage_seconds_max gauge -hikaricp_connections_usage_seconds_max{pool="HikariPool-1",} 0.878 -# HELP pap_policy_deployments_total -# TYPE pap_policy_deployments_total counter -pap_policy_deployments_total{operation="deploy",status="FAILURE",} 0.0 -pap_policy_deployments_total{operation="undeploy",status="SUCCESS",} 13971.0 -pap_policy_deployments_total{operation="deploy",status="SUCCESS",} 14028.0 -pap_policy_deployments_total{operation="undeploy",status="FAILURE",} 0.0 -# HELP jvm_buffer_memory_used_bytes An estimate of the memory that the Java virtual machine is using for this buffer pool -# TYPE jvm_buffer_memory_used_bytes gauge -jvm_buffer_memory_used_bytes{id="mapped",} 0.0 -jvm_buffer_memory_used_bytes{id="direct",} 169210.0 -# HELP hikaricp_connections_timeout_total Connection timeout total count -# TYPE hikaricp_connections_timeout_total counter -hikaricp_connections_timeout_total{pool="HikariPool-1",} 0.0 -# HELP jvm_classes_loaded_classes The number of classes that are currently loaded in the Java virtual machine -# TYPE jvm_classes_loaded_classes gauge -jvm_classes_loaded_classes 18727.0 -# HELP jdbc_connections_idle Number of established but idle connections. 
-# TYPE jdbc_connections_idle gauge -jdbc_connections_idle{name="dataSource",} 10.0 -# HELP tomcat_sessions_active_current_sessions -# TYPE tomcat_sessions_active_current_sessions gauge -tomcat_sessions_active_current_sessions 60.0 -# HELP jvm_gc_max_data_size_bytes Max size of long-lived heap memory pool -# TYPE jvm_gc_max_data_size_bytes gauge -jvm_gc_max_data_size_bytes 2.803236864E9 diff --git a/docs/development/devtools/pap-s3p-results/pap_metrics_before_72h.txt b/docs/development/devtools/pap-s3p-results/pap_metrics_before_72h.txt deleted file mode 100644 index 047ccf99..00000000 --- a/docs/development/devtools/pap-s3p-results/pap_metrics_before_72h.txt +++ /dev/null @@ -1,225 +0,0 @@ -# HELP spring_data_repository_invocations_seconds_max -# TYPE spring_data_repository_invocations_seconds_max gauge -spring_data_repository_invocations_seconds_max{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 0.0 -spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 0.008146982 -spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 0.777049798 -spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.569583402 -# HELP spring_data_repository_invocations_seconds -# TYPE spring_data_repository_invocations_seconds summary -spring_data_repository_invocations_seconds_count{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 1.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 1.257790017 -spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 23.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 0.671469491 -spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 30.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 8.481980058 -spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 4.0 -spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 1.939575991 -# HELP hikaricp_connections_max Max connections -# TYPE hikaricp_connections_max gauge -hikaricp_connections_max{pool="HikariPool-1",} 10.0 -# HELP tomcat_sessions_created_sessions_total -# TYPE tomcat_sessions_created_sessions_total counter -tomcat_sessions_created_sessions_total 16.0 -# HELP process_files_open_files The open file descriptor count -# TYPE process_files_open_files gauge -process_files_open_files 34.0 -# HELP hikaricp_connections_active Active connections -# TYPE hikaricp_connections_active gauge -hikaricp_connections_active{pool="HikariPool-1",} 0.0 -# HELP jvm_classes_unloaded_classes_total The total number of classes unloaded since the Java virtual machine has started execution -# TYPE jvm_classes_unloaded_classes_total counter -jvm_classes_unloaded_classes_total 2.0 -# HELP system_cpu_usage The "recent cpu usage" for the whole system -# TYPE system_cpu_usage gauge -system_cpu_usage 
0.03765922097101717 -# HELP jvm_classes_loaded_classes The number of classes that are currently loaded in the Java virtual machine -# TYPE jvm_classes_loaded_classes gauge -jvm_classes_loaded_classes 18022.0 -# HELP process_uptime_seconds The uptime of the Java virtual machine -# TYPE process_uptime_seconds gauge -process_uptime_seconds 570.627 -# HELP jvm_memory_committed_bytes The amount of memory in bytes that is committed for the Java virtual machine to use -# TYPE jvm_memory_committed_bytes gauge -jvm_memory_committed_bytes{area="heap",id="Tenured Gen",} 1.76160768E8 -jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 2.6017792E7 -jvm_memory_committed_bytes{area="heap",id="Eden Space",} 7.0582272E7 -jvm_memory_committed_bytes{area="nonheap",id="Metaspace",} 1.04054784E8 -jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 2555904.0 -jvm_memory_committed_bytes{area="heap",id="Survivor Space",} 8781824.0 -jvm_memory_committed_bytes{area="nonheap",id="Compressed Class Space",} 1.4286848E7 -jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 6881280.0 -# HELP jvm_gc_live_data_size_bytes Size of long-lived heap memory pool after reclamation -# TYPE jvm_gc_live_data_size_bytes gauge -jvm_gc_live_data_size_bytes 4.13206E7 -# HELP jdbc_connections_min Minimum number of idle connections in the pool. -# TYPE jdbc_connections_min gauge -jdbc_connections_min{name="dataSource",} 10.0 -# HELP process_start_time_seconds Start time of the process since unix epoch. -# TYPE process_start_time_seconds gauge -process_start_time_seconds 1.649787267607E9 -# HELP jdbc_connections_idle Number of established but idle connections. -# TYPE jdbc_connections_idle gauge -jdbc_connections_idle{name="dataSource",} 10.0 -# HELP jvm_gc_memory_promoted_bytes_total Count of positive increases in the size of the old generation memory pool before GC to after GC -# TYPE jvm_gc_memory_promoted_bytes_total counter -jvm_gc_memory_promoted_bytes_total 2.7154576E7 -# HELP hikaricp_connections_creation_seconds_max Connection creation time -# TYPE hikaricp_connections_creation_seconds_max gauge -hikaricp_connections_creation_seconds_max{pool="HikariPool-1",} 0.0 -# HELP hikaricp_connections_creation_seconds Connection creation time -# TYPE hikaricp_connections_creation_seconds summary -hikaricp_connections_creation_seconds_count{pool="HikariPool-1",} 0.0 -hikaricp_connections_creation_seconds_sum{pool="HikariPool-1",} 0.0 -# HELP tomcat_sessions_active_current_sessions -# TYPE tomcat_sessions_active_current_sessions gauge -tomcat_sessions_active_current_sessions 16.0 -# HELP jvm_threads_daemon_threads The current number of live daemon threads -# TYPE jvm_threads_daemon_threads gauge -jvm_threads_daemon_threads 34.0 -# HELP jvm_memory_used_bytes The amount of used memory -# TYPE jvm_memory_used_bytes gauge -jvm_memory_used_bytes{area="heap",id="Tenured Gen",} 4.13206E7 -jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 2.6013824E7 -jvm_memory_used_bytes{area="heap",id="Eden Space",} 2853928.0 -jvm_memory_used_bytes{area="nonheap",id="Metaspace",} 9.9649768E7 -jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 1364736.0 -jvm_memory_used_bytes{area="heap",id="Survivor Space",} 1036120.0 -jvm_memory_used_bytes{area="nonheap",id="Compressed Class Space",} 1.2613992E7 -jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 6865408.0 -# HELP hikaricp_connections_timeout_total Connection timeout 
total count -# TYPE hikaricp_connections_timeout_total counter -hikaricp_connections_timeout_total{pool="HikariPool-1",} 0.0 -# HELP jvm_memory_max_bytes The maximum amount of memory in bytes that can be used for memory management -# TYPE jvm_memory_max_bytes gauge -jvm_memory_max_bytes{area="heap",id="Tenured Gen",} 2.803236864E9 -jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 1.22912768E8 -jvm_memory_max_bytes{area="heap",id="Eden Space",} 1.12132096E9 -jvm_memory_max_bytes{area="nonheap",id="Metaspace",} -1.0 -jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 5828608.0 -jvm_memory_max_bytes{area="heap",id="Survivor Space",} 1.40115968E8 -jvm_memory_max_bytes{area="nonheap",id="Compressed Class Space",} 1.073741824E9 -jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 1.22916864E8 -# HELP tomcat_sessions_active_max_sessions -# TYPE tomcat_sessions_active_max_sessions gauge -tomcat_sessions_active_max_sessions 16.0 -# HELP tomcat_sessions_alive_max_seconds -# TYPE tomcat_sessions_alive_max_seconds gauge -tomcat_sessions_alive_max_seconds 0.0 -# HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset -# TYPE jvm_threads_peak_threads gauge -jvm_threads_peak_threads 43.0 -# HELP hikaricp_connections_acquire_seconds Connection acquire time -# TYPE hikaricp_connections_acquire_seconds summary -hikaricp_connections_acquire_seconds_count{pool="HikariPool-1",} 57.0 -hikaricp_connections_acquire_seconds_sum{pool="HikariPool-1",} 0.103535665 -# HELP hikaricp_connections_acquire_seconds_max Connection acquire time -# TYPE hikaricp_connections_acquire_seconds_max gauge -hikaricp_connections_acquire_seconds_max{pool="HikariPool-1",} 0.004207252 -# HELP hikaricp_connections_usage_seconds Connection usage time -# TYPE hikaricp_connections_usage_seconds summary -hikaricp_connections_usage_seconds_count{pool="HikariPool-1",} 57.0 -hikaricp_connections_usage_seconds_sum{pool="HikariPool-1",} 13.297 -# HELP hikaricp_connections_usage_seconds_max Connection usage time -# TYPE hikaricp_connections_usage_seconds_max gauge -hikaricp_connections_usage_seconds_max{pool="HikariPool-1",} 0.836 -# HELP http_server_requests_seconds -# TYPE http_server_requests_seconds summary -http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 9.0 -http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 1.93944618 -http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 3.0 -http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 1.365007581 -http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 4.0 -http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 2.636914428 -# HELP http_server_requests_seconds_max -# TYPE http_server_requests_seconds_max gauge -http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 0.213989915 -http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 0.0 
-http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 0.714076223 -# HELP process_cpu_usage The "recent cpu usage" for the Java Virtual Machine process -# TYPE process_cpu_usage gauge -process_cpu_usage 0.002436413304293255 -# HELP hikaricp_connections_idle Idle connections -# TYPE hikaricp_connections_idle gauge -hikaricp_connections_idle{pool="HikariPool-1",} 10.0 -# HELP tomcat_sessions_rejected_sessions_total -# TYPE tomcat_sessions_rejected_sessions_total counter -tomcat_sessions_rejected_sessions_total 0.0 -# HELP jvm_gc_memory_allocated_bytes_total Incremented for an increase in the size of the (young) heap memory pool after one GC to before the next -# TYPE jvm_gc_memory_allocated_bytes_total counter -jvm_gc_memory_allocated_bytes_total 1.401269088E9 -# HELP tomcat_sessions_expired_sessions_total -# TYPE tomcat_sessions_expired_sessions_total counter -tomcat_sessions_expired_sessions_total 0.0 -# HELP pap_policy_deployments_total -# TYPE pap_policy_deployments_total counter -pap_policy_deployments_total{operation="deploy",status="FAILURE",} 0.0 -pap_policy_deployments_total{operation="undeploy",status="SUCCESS",} 0.0 -pap_policy_deployments_total{operation="deploy",status="SUCCESS",} 0.0 -pap_policy_deployments_total{operation="undeploy",status="FAILURE",} 0.0 -# HELP hikaricp_connections_pending Pending threads -# TYPE hikaricp_connections_pending gauge -hikaricp_connections_pending{pool="HikariPool-1",} 0.0 -# HELP process_files_max_files The maximum file descriptor count -# TYPE process_files_max_files gauge -process_files_max_files 1048576.0 -# HELP jvm_buffer_memory_used_bytes An estimate of the memory that the Java virtual machine is using for this buffer pool -# TYPE jvm_buffer_memory_used_bytes gauge -jvm_buffer_memory_used_bytes{id="mapped",} 0.0 -jvm_buffer_memory_used_bytes{id="direct",} 169210.0 -# HELP jvm_gc_pause_seconds Time spent in GC pause -# TYPE jvm_gc_pause_seconds summary -jvm_gc_pause_seconds_count{action="end of major GC",cause="Metadata GC Threshold",} 2.0 -jvm_gc_pause_seconds_sum{action="end of major GC",cause="Metadata GC Threshold",} 0.472 -jvm_gc_pause_seconds_count{action="end of minor GC",cause="Allocation Failure",} 19.0 -jvm_gc_pause_seconds_sum{action="end of minor GC",cause="Allocation Failure",} 0.507 -# HELP jvm_gc_pause_seconds_max Time spent in GC pause -# TYPE jvm_gc_pause_seconds_max gauge -jvm_gc_pause_seconds_max{action="end of major GC",cause="Metadata GC Threshold",} 0.0 -jvm_gc_pause_seconds_max{action="end of minor GC",cause="Allocation Failure",} 0.029 -# HELP jvm_threads_live_threads The current number of live threads including both daemon and non-daemon threads -# TYPE jvm_threads_live_threads gauge -jvm_threads_live_threads 43.0 -# HELP hikaricp_connections_min Min connections -# TYPE hikaricp_connections_min gauge -hikaricp_connections_min{pool="HikariPool-1",} 10.0 -# HELP jdbc_connections_max Maximum number of active connections that can be allocated at the same time. 
-# TYPE jdbc_connections_max gauge -jdbc_connections_max{name="dataSource",} 10.0 -# HELP jvm_buffer_total_capacity_bytes An estimate of the total capacity of the buffers in this pool -# TYPE jvm_buffer_total_capacity_bytes gauge -jvm_buffer_total_capacity_bytes{id="mapped",} 0.0 -jvm_buffer_total_capacity_bytes{id="direct",} 169210.0 -# HELP system_cpu_count The number of processors available to the Java virtual machine -# TYPE system_cpu_count gauge -system_cpu_count 1.0 -# HELP hikaricp_connections Total connections -# TYPE hikaricp_connections gauge -hikaricp_connections{pool="HikariPool-1",} 10.0 -# HELP jdbc_connections_active Current number of active connections that have been allocated from the data source. -# TYPE jdbc_connections_active gauge -jdbc_connections_active{name="dataSource",} 0.0 -# HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time -# TYPE system_load_average_1m gauge -system_load_average_1m 0.36 -# HELP jvm_gc_max_data_size_bytes Max size of long-lived heap memory pool -# TYPE jvm_gc_max_data_size_bytes gauge -jvm_gc_max_data_size_bytes 2.803236864E9 -# HELP jvm_threads_states_threads The current number of threads having NEW state -# TYPE jvm_threads_states_threads gauge -jvm_threads_states_threads{state="runnable",} 9.0 -jvm_threads_states_threads{state="blocked",} 0.0 -jvm_threads_states_threads{state="waiting",} 26.0 -jvm_threads_states_threads{state="timed-waiting",} 8.0 -jvm_threads_states_threads{state="new",} 0.0 -jvm_threads_states_threads{state="terminated",} 0.0 -# HELP jvm_buffer_count_buffers An estimate of the number of buffers in the pool -# TYPE jvm_buffer_count_buffers gauge -jvm_buffer_count_buffers{id="mapped",} 0.0 -jvm_buffer_count_buffers{id="direct",} 10.0 -# HELP logback_events_total Number of error level events that made it to the logs -# TYPE logback_events_total counter -logback_events_total{level="warn",} 22.0 -logback_events_total{level="debug",} 0.0 -logback_events_total{level="error",} 0.0 -logback_events_total{level="trace",} 0.0 -logback_events_total{level="info",} 385.0 diff --git a/docs/development/devtools/pap-s3p-results/pap_performance_jmeter_results.png b/docs/development/devtools/pap-s3p-results/pap_performance_jmeter_results.png deleted file mode 100644 index a6504789..00000000 Binary files a/docs/development/devtools/pap-s3p-results/pap_performance_jmeter_results.png and /dev/null differ diff --git a/docs/development/devtools/pap-s3p-results/pap_stability_jmeter_results.png b/docs/development/devtools/pap-s3p-results/pap_stability_jmeter_results.png deleted file mode 100644 index 5f54c02e..00000000 Binary files a/docs/development/devtools/pap-s3p-results/pap_stability_jmeter_results.png and /dev/null differ diff --git a/docs/development/devtools/pap-s3p-results/pap_top_after_72h.png b/docs/development/devtools/pap-s3p-results/pap_top_after_72h.png deleted file mode 100644 index 576b1c25..00000000 Binary files a/docs/development/devtools/pap-s3p-results/pap_top_after_72h.png and /dev/null differ diff --git a/docs/development/devtools/pap-s3p-results/pap_top_before_72h.png b/docs/development/devtools/pap-s3p-results/pap_top_before_72h.png deleted file mode 100644 index b59b2c95..00000000 Binary files a/docs/development/devtools/pap-s3p-results/pap_top_before_72h.png and /dev/null differ diff --git a/docs/development/devtools/pap-s3p.rst b/docs/development/devtools/pap-s3p.rst 
deleted file mode 100644
index b42d7eb0..00000000
--- a/docs/development/devtools/pap-s3p.rst
+++ /dev/null
@@ -1,198 +0,0 @@
-.. This work is licensed under a
-.. Creative Commons Attribution 4.0 International License.
-.. http://creativecommons.org/licenses/by/4.0
-
-.. _pap-s3p-label:
-
-.. toctree::
-   :maxdepth: 2
-
-Policy PAP component
-~~~~~~~~~~~~~~~~~~~~
-
-Both the Performance and the Stability tests were executed by performing requests
-against Policy components installed as part of a full ONAP OOM deployment in the Nordix lab.
-
-Setup Details
-+++++++++++++
-
-- Policy-PAP along with all policy components deployed as part of a full ONAP OOM deployment.
-- A second instance of APEX-PDP is spun up in the setup. Update the configuration file (OnapPfConfig.json) so that the PDP can register to the new group created by PAP in the tests.
-- Both tests were run via JMeter.
-
-Stability Test of PAP
-+++++++++++++++++++++
-
-Test Plan
----------
-The 72-hour stability test ran the following steps sequentially in a single-threaded loop.
-
-Setup Phase (steps running only once)
-"""""""""""""""""""""""""""""""""""""
-
-- **Create Policy for defaultGroup** - creates an operational policy using the policy/api component
-- **Create NodeTemplate metadata for sampleGroup policy** - creates a node template containing metadata using the policy/api component
-- **Create Policy for sampleGroup** - creates an operational policy that refers to the metadata created above using the policy/api component
-- **Change defaultGroup state to ACTIVE** - changes the state of the defaultGroup PdpGroup to ACTIVE
-- **Create/Update PDP Group** - creates a new PdpGroup named sampleGroup.
-  A second instance of the PDP that is already spun up gets registered to this new group
-- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that both PdpGroups are in the ACTIVE state.
-
-PAP Test Flow (steps running in a loop for 72 hours)
-""""""""""""""""""""""""""""""""""""""""""""""""""""
-
-- **Check Health** - checks the health status of PAP
-- **PAP Metrics** - fetches Prometheus metrics before the deployment/undeployment cycle and
-  saves counters such as the deploy/undeploy success/failure counters at API and engine level.
-- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that both PdpGroups are in the ACTIVE state.
-- **Deploy Policy for defaultGroup** - deploys the policy defaultDomain to defaultGroup
-- **Check status of defaultGroup policy** - checks the status of the defaultGroup PdpGroup with the defaultDomain policy 1.0.0.
-- **Check PdpGroup Audit defaultGroup** - checks the audit information for the defaultGroup PdpGroup.
-- **Check PdpGroup Audit Policy (defaultGroup)** - checks the audit information for the defaultGroup PdpGroup with the defaultDomain policy 1.0.0.
-- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that 2 PdpGroups are in the ACTIVE state and defaultGroup has a policy deployed on it.
-- **Deployment Update for sampleGroup policy** - deploys the policy sampleDomain in the sampleGroup PdpGroup using the PAP API
-- **Check status of sampleGroup** - checks the status of the sampleGroup PdpGroup.
-- **Check status of PdpGroups** - checks the status of both PdpGroups.
-- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that defaultGroup has the policy defaultDomain deployed on it and sampleGroup has the policy sampleDomain deployed on it.
-- **Check Audit** - checks the audit information for all PdpGroups.
-- **Check Consolidated Health** - checks the consolidated health status of all policy components.
-- **Check Deployed Policies** - checks for all the deployed policies using the PAP API.
-- **Undeploy policy in sampleGroup** - undeploys the policy sampleDomain from the sampleGroup PdpGroup using the PAP API
-- **Undeploy policy in defaultGroup** - undeploys the policy defaultDomain from the defaultGroup PdpGroup
-- **Check status of policies** - checks the status of all policies and makes sure both policies are undeployed
-- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that the PdpGroup is in the PASSIVE state.
-- **PAP Metrics after deployments** - fetches Prometheus metrics after the deployment/undeployment cycle,
-  saves the new counter values such as the deploy/undeploy success/failure counters at API and engine level,
-  and checks that the deploySuccess and undeploySuccess counters have increased by 2
-  (an illustrative curl sketch of this counter check is shown after the test results summary below).
-
-.. Note::
-  To avoid putting a large Constant Timer value after every deployment/undeployment, the status API is polled until the deployment/undeployment
-  is successfully completed, or until a timeout. This makes sure that the operation completes successfully and the PDPs get enough time to respond.
-  Otherwise, before the deployment is marked successful by PAP, an undeployment could be triggered as part of other tests,
-  and the operation's corresponding Prometheus counter at engine level would not get updated.
-
-Teardown Phase (steps running only once after PAP Test Flow is completed)
-"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
-
-- **Change state to PASSIVE(sampleGroup)** - changes the state of the sampleGroup PdpGroup to PASSIVE
-- **Delete PdpGroup sampleGroup** - deletes the sampleGroup PdpGroup using the PAP API
-- **Change State to PASSIVE(defaultGroup)** - changes the state of the defaultGroup PdpGroup to PASSIVE
-- **Delete policy created for defaultGroup** - deletes the operational policy defaultDomain using the policy/api component
-- **Delete Policy created for sampleGroup** - deletes the operational policy sampleDomain using the policy/api component
-- **Delete Nodetemplate metadata for sampleGroup policy** - deletes the node template containing metadata for the sampleGroup policy
-
-The following elements can be used to configure the parameters of the test plan.
-
-- **HTTP Authorization Manager** - used to store user/password authentication details.
-- **HTTP Header Manager** - used to store headers which will be used for making HTTP requests.
-- **User Defined Variables** - used to store the following user-defined parameters.
-
-=========== ===================================================================
- **Name**   **Description**
-=========== ===================================================================
- PAP_HOST   IP address or host name of the PAP component
- PAP_PORT   Port number of PAP for making REST API calls
- API_HOST   IP address or host name of the API component
- API_PORT   Port number of API for making REST API calls
-=========== ===================================================================
-
-The test was run in the background via "nohup", to prevent it from being interrupted:
-
-.. code-block:: bash
-
-   nohup apache-jmeter-5.5/bin/jmeter -n -t stability.jmx -l stabilityTestResults.jtl &
-
-Test Results
-------------
-
-**Summary**
-
-The stability test plan was triggered for 72 hours. Apart from the 0.15% error rate explained in the note below, no failures were observed during the 72-hour test.
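As an illustration of the **PAP Metrics** / **PAP Metrics after deployments** steps above, the counter comparison can also be reproduced by hand against the PAP ``/metrics`` endpoint. The sketch below is illustrative only and not part of the JMeter test plan: ``PAP_HOST`` and ``PAP_PORT`` correspond to the user-defined variables above, while ``PAP_USER``/``PAP_PASS`` are placeholder names for the credentials held in the HTTP Authorization Manager, and the https scheme may need adjusting to match the deployment.

.. code-block:: bash

   # Sketch only: read the PAP deployment counters before and after one
   # deploy/undeploy cycle and report by how much each success counter grew
   # (the stability test expects both deltas to be 2).
   metric() {
       curl -sk -u "${PAP_USER}:${PAP_PASS}" "https://${PAP_HOST}:${PAP_PORT}/metrics" \
           | grep -F "pap_policy_deployments_total{operation=\"$1\",status=\"SUCCESS\",}" \
           | awk '{print $2}'
   }

   deploy_before=$(metric deploy);   undeploy_before=$(metric undeploy)
   # ... one deployment/undeployment cycle of the test plan runs here ...
   deploy_after=$(metric deploy);    undeploy_after=$(metric undeploy)

   echo "deploySuccess delta:   $(echo "${deploy_after} - ${deploy_before}" | bc)"
   echo "undeploySuccess delta: $(echo "${undeploy_after} - ${undeploy_before}" | bc)"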
-
-
-**Test Statistics**
-
-======================= ================= ================== ==================================
-**Total # of requests** **Success %**     **Error %**        **Average time taken per request**
-======================= ================= ================== ==================================
- 102290                  100 %             0.15 %             782 ms
-======================= ================= ================== ==================================
-
-.. Note::
-
-  There was a 0.15% failure rate during the 72-hour test, due to the timing between the update of the "undeploySuccessCount" metric and the undeploy operation itself.
-  For the next test we suggest increasing the timeout between "Undeploy policy in defaultGroup" and "PAP Metrics after deployments" to up to 130s.
-
-**JMeter Screenshot**
-
-.. image:: pap-s3p-results/pap_stability_jmeter_results.png
-
-**Memory and CPU usage**
-
-The memory and CPU usage can be monitored by running the "top" command in the PAP pod.
-A snapshot is taken before and after test execution to monitor the changes in resource utilization.
-Prometheus metrics are also collected before and after the test execution.
-
-Memory and CPU usage before test execution:
-
-.. image:: pap-s3p-results/pap_top_before_72h.png
-
-:download:`Prometheus metrics before 72h test `
-
-Memory and CPU usage after test execution:
-
-.. image:: pap-s3p-results/pap_top_after_72h.png
-
-:download:`Prometheus metrics after 72h test `
-
-Performance Test of PAP
-++++++++++++++++++++++++
-
-Introduction
-------------
-
-The performance test of PAP measures the min/avg/max processing time and REST call throughput when multiple requests are sent at the same time.
-
-Setup Details
--------------
-
-The performance test is performed on a setup similar to the stability test. The JMeter VM sends a large number of REST requests to the PAP component and collects the statistics.
-
-
-Test Plan
----------
-
-The performance test plan is the same as the stability test plan above, except for the differences listed below.
-
-- Increase the number of threads up to 10 (simulating 10 users' behaviours at the same time).
-- Reduce the test time to 2 hours.
-- Usage of counters (simulating each user) to create different PdpGroups, update their state and later delete them.
-- Removal of the tests that deploy policies to newly created groups, as this would need a larger setup with multiple PDPs registered to each group, which would also slow down the performance test with the time needed for the registration process, etc.
-- Usage of counters (simulating each user) to create different drools policies and deploy them to defaultGroup.
-  In the test, a thread count of 10 is used, resulting in 10 different drools policies getting deployed and undeployed continuously for 2 hours.
-  Other standard operations, like checking the deployment status of policies, checking the metrics, health etc., remain.
-
-Run Test
---------
-
-Running/triggering the performance test is the same as for the stability test, that is, launch JMeter pointing to the corresponding *.jmx* test plan. The *API_HOST*, *API_PORT*, *PAP_HOST* and *PAP_PORT* are already set up in the *.jmx*.
-
-.. code-block:: bash
-
-   nohup apache-jmeter-5.5/bin/jmeter -n -t performance.jmx -l performanceTestResults.jtl &
-
-Test Results
-------------
-
-Test results are shown below.
-
-**Test Statistics**
-
-======================= ================= ================== ==================================
-**Total # of requests** **Success %**     **Error %**        **Average time taken per request**
-======================= ================= ================== ==================================
- 19886                   100 %             0.00 %             3107 ms
-======================= ================= ================== ==================================
-
-**JMeter Screenshot**
-
-.. image:: pap-s3p-results/pap_performance_jmeter_results.png
diff --git a/docs/development/devtools/run-s3p.rst b/docs/development/devtools/run-s3p.rst
deleted file mode 100644
index 17eba32a..00000000
--- a/docs/development/devtools/run-s3p.rst
+++ /dev/null
@@ -1,52 +0,0 @@
-Running the Policy Framework S3P Tests
-######################################
-
-.. contents::
-    :depth: 3
-
-Per release, the policy framework team performs stability and performance tests for each component of the policy framework.
-This testing work involves performing a series of tests on a full OOM deployment and updating the various test plans to work against the given deployment.
-This work can take some time to set up before any tests can be performed.
-For stability testing, a tool called JMeter is used to trigger a series of tests for a period of 72 hours, which has to be manually initiated and monitored by the tester.
-Likewise for the performance tests, but in this case for ~2 hours.
-As part of the work to automate this process, a script can now be triggered to bring up a microk8s cluster on a VM, install JMeter, alter the cluster info to match the JMX test plans, trigger JMeter, and gather results at the end.
-These S3P tests will be triggered for a shorter period as part of the CSITs to prove the stability and performance of our components.
-
-There has been recent work completed to trigger our CSIT tests in a K8s environment.
-As part of this work, a script has been created to bring up a microk8s cluster for testing purposes which includes all necessary components for our policy framework testing.
-For automating the S3Ps, we will use this script to bring up a K8s environment to perform the S3P tests against.
-Once this cluster is brought up, a script is called to alter the cluster.
-The IPs and ports of our policy components are set by this script to ensure consistency in the test plans.
-JMeter is installed and the S3P test plans are triggered to run by their respective components.
-
-.. code-block:: bash
-    :caption: Start S3P Script
-
-    #===MAIN===#
-    if [ -z "${WORKSPACE}" ]; then
-        export WORKSPACE=$(git rev-parse --show-toplevel)
-    fi
-    export TESTDIR=${WORKSPACE}/testsuites
-    export API_PERF_TEST_FILE=$TESTDIR/performance/src/main/resources/testplans/policy_api_performance.jmx
-    export API_STAB_TEST_FILE=$TESTDIR/stability/src/main/resources/testplans/policy_api_stability.jmx
-    if [ $1 == "run" ]
-    then
-        mkdir automate-performance;cd automate-performance;
-        git clone "https://gerrit.onap.org/r/policy/docker"
-        cd docker/csit
-        if [ $2 == "performance" ]
-        then
-            bash start-s3p-tests.sh run $API_PERF_TEST_FILE;
-        elif [ $2 == "stability" ]
-        then
-            bash start-s3p-tests.sh run $API_STAB_TEST_FILE;
-        else
-            echo "Invalid arguments provided. Usage: $0 [option..] {performance | stability}"
-        fi
-    else
-        echo "Invalid arguments provided. Usage: $0 [option..] {run | uninstall}"
-    fi
-
-This script is triggered by each component.
-It will export the performance and stability testplans and trigger the start-s3p-test.sh script which will perform the steps to automatically run the s3p tests. - diff --git a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_metrics_after_72h.txt b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_metrics_after_72h.txt new file mode 100644 index 00000000..56f13907 --- /dev/null +++ b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_metrics_after_72h.txt @@ -0,0 +1,316 @@ +# HELP jvm_threads_current Current thread count of a JVM +# TYPE jvm_threads_current gauge +jvm_threads_current 32.0 +# HELP jvm_threads_daemon Daemon thread count of a JVM +# TYPE jvm_threads_daemon gauge +jvm_threads_daemon 17.0 +# HELP jvm_threads_peak Peak thread count of a JVM +# TYPE jvm_threads_peak gauge +jvm_threads_peak 81.0 +# HELP jvm_threads_started_total Started thread count of a JVM +# TYPE jvm_threads_started_total counter +jvm_threads_started_total 423360.0 +# HELP jvm_threads_deadlocked Cycles of JVM-threads that are in deadlock waiting to acquire object monitors or ownable synchronizers +# TYPE jvm_threads_deadlocked gauge +jvm_threads_deadlocked 0.0 +# HELP jvm_threads_deadlocked_monitor Cycles of JVM-threads that are in deadlock waiting to acquire object monitors +# TYPE jvm_threads_deadlocked_monitor gauge +jvm_threads_deadlocked_monitor 0.0 +# HELP jvm_threads_state Current count of threads by state +# TYPE jvm_threads_state gauge +jvm_threads_state{state="BLOCKED",} 0.0 +jvm_threads_state{state="TIMED_WAITING",} 11.0 +jvm_threads_state{state="NEW",} 0.0 +jvm_threads_state{state="RUNNABLE",} 7.0 +jvm_threads_state{state="TERMINATED",} 0.0 +jvm_threads_state{state="WAITING",} 14.0 +# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds. +# TYPE process_cpu_seconds_total counter +process_cpu_seconds_total 16418.06 +# HELP process_start_time_seconds Start time of the process since unix epoch in seconds. +# TYPE process_start_time_seconds gauge +process_start_time_seconds 1.651077494162E9 +# HELP process_open_fds Number of open file descriptors. +# TYPE process_open_fds gauge +process_open_fds 357.0 +# HELP process_max_fds Maximum number of open file descriptors. +# TYPE process_max_fds gauge +process_max_fds 1048576.0 +# HELP process_virtual_memory_bytes Virtual memory size in bytes. +# TYPE process_virtual_memory_bytes gauge +process_virtual_memory_bytes 1.0165403648E10 +# HELP process_resident_memory_bytes Resident memory size in bytes. +# TYPE process_resident_memory_bytes gauge +process_resident_memory_bytes 5.58034944E8 +# HELP pdpa_engine_event_executions Total number of APEX events processed by the engine. +# TYPE pdpa_engine_event_executions gauge +pdpa_engine_event_executions{engine_instance_id="NSOApexEngine-1:0.0.1",} 30743.0 +pdpa_engine_event_executions{engine_instance_id="NSOApexEngine-4:0.0.1",} 30766.0 +pdpa_engine_event_executions{engine_instance_id="NSOApexEngine-3:0.0.1",} 30722.0 +pdpa_engine_event_executions{engine_instance_id="NSOApexEngine-0:0.0.1",} 30727.0 +pdpa_engine_event_executions{engine_instance_id="NSOApexEngine-2:0.0.1",} 30742.0 +# HELP jvm_buffer_pool_used_bytes Used bytes of a given JVM buffer pool. +# TYPE jvm_buffer_pool_used_bytes gauge +jvm_buffer_pool_used_bytes{pool="mapped",} 0.0 +jvm_buffer_pool_used_bytes{pool="direct",} 3.3833905E7 +# HELP jvm_buffer_pool_capacity_bytes Bytes capacity of a given JVM buffer pool. 
+# TYPE jvm_buffer_pool_capacity_bytes gauge +jvm_buffer_pool_capacity_bytes{pool="mapped",} 0.0 +jvm_buffer_pool_capacity_bytes{pool="direct",} 3.3833904E7 +# HELP jvm_buffer_pool_used_buffers Used buffers of a given JVM buffer pool. +# TYPE jvm_buffer_pool_used_buffers gauge +jvm_buffer_pool_used_buffers{pool="mapped",} 0.0 +jvm_buffer_pool_used_buffers{pool="direct",} 15.0 +# HELP pdpa_policy_executions_total The total number of TOSCA policy executions. +# TYPE pdpa_policy_executions_total counter +# HELP pdpa_policy_deployments_total The total number of policy deployments. +# TYPE pdpa_policy_deployments_total counter +pdpa_policy_deployments_total{operation="deploy",status="TOTAL",} 5.0 +pdpa_policy_deployments_total{operation="undeploy",status="TOTAL",} 5.0 +pdpa_policy_deployments_total{operation="undeploy",status="SUCCESS",} 5.0 +pdpa_policy_deployments_total{operation="deploy",status="SUCCESS",} 5.0 +# HELP pdpa_engine_average_execution_time_seconds Average time taken to execute an APEX policy in seconds. +# TYPE pdpa_engine_average_execution_time_seconds gauge +pdpa_engine_average_execution_time_seconds{engine_instance_id="NSOApexEngine-1:0.0.1",} 0.00515235988680349 +pdpa_engine_average_execution_time_seconds{engine_instance_id="NSOApexEngine-4:0.0.1",} 0.00521845543782099 +pdpa_engine_average_execution_time_seconds{engine_instance_id="NSOApexEngine-3:0.0.1",} 0.005200800729119198 +pdpa_engine_average_execution_time_seconds{engine_instance_id="NSOApexEngine-0:0.0.1",} 0.005191785725908804 +pdpa_engine_average_execution_time_seconds{engine_instance_id="NSOApexEngine-2:0.0.1",} 0.0051784854596317684 +# HELP pdpa_engine_state State of the APEX engine as integers mapped as - 0:UNDEFINED, 1:STOPPED, 2:READY, 3:EXECUTING, 4:STOPPING +# TYPE pdpa_engine_state gauge +pdpa_engine_state{engine_instance_id="NSOApexEngine-1:0.0.1",} 1.0 +pdpa_engine_state{engine_instance_id="NSOApexEngine-4:0.0.1",} 1.0 +pdpa_engine_state{engine_instance_id="NSOApexEngine-3:0.0.1",} 1.0 +pdpa_engine_state{engine_instance_id="NSOApexEngine-0:0.0.1",} 1.0 +pdpa_engine_state{engine_instance_id="NSOApexEngine-2:0.0.1",} 1.0 +# HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds. +# TYPE jvm_gc_collection_seconds summary +jvm_gc_collection_seconds_count{gc="Copy",} 5883.0 +jvm_gc_collection_seconds_sum{gc="Copy",} 97.808 +jvm_gc_collection_seconds_count{gc="MarkSweepCompact",} 3.0 +jvm_gc_collection_seconds_sum{gc="MarkSweepCompact",} 0.357 +# HELP pdpa_engine_last_start_timestamp_epoch Epoch timestamp of the instance when engine was last started. +# TYPE pdpa_engine_last_start_timestamp_epoch gauge +pdpa_engine_last_start_timestamp_epoch{engine_instance_id="NSOApexEngine-1:0.0.1",} 0.0 +pdpa_engine_last_start_timestamp_epoch{engine_instance_id="NSOApexEngine-4:0.0.1",} 0.0 +pdpa_engine_last_start_timestamp_epoch{engine_instance_id="NSOApexEngine-3:0.0.1",} 0.0 +pdpa_engine_last_start_timestamp_epoch{engine_instance_id="NSOApexEngine-0:0.0.1",} 0.0 +pdpa_engine_last_start_timestamp_epoch{engine_instance_id="NSOApexEngine-2:0.0.1",} 0.0 +# HELP jvm_memory_pool_allocated_bytes_total Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously. 
+# TYPE jvm_memory_pool_allocated_bytes_total counter +jvm_memory_pool_allocated_bytes_total{pool="Eden Space",} 8.29800936264E11 +jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'profiled nmethods'",} 4.839232E7 +jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-profiled nmethods'",} 3.5181056E7 +jvm_memory_pool_allocated_bytes_total{pool="Compressed Class Space",} 8194120.0 +jvm_memory_pool_allocated_bytes_total{pool="Metaspace",} 7.7729144E7 +jvm_memory_pool_allocated_bytes_total{pool="Tenured Gen",} 1.41180272E8 +jvm_memory_pool_allocated_bytes_total{pool="Survivor Space",} 4.78761928E8 +jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-nmethods'",} 1392128.0 +# HELP pdpa_engine_uptime Time elapsed since the engine was started. +# TYPE pdpa_engine_uptime gauge +pdpa_engine_uptime{engine_instance_id="NSOApexEngine-1:0.0.1",} 259200.522 +pdpa_engine_uptime{engine_instance_id="NSOApexEngine-4:0.0.1",} 259200.751 +pdpa_engine_uptime{engine_instance_id="NSOApexEngine-3:0.0.1",} 259200.678 +pdpa_engine_uptime{engine_instance_id="NSOApexEngine-0:0.0.1",} 259200.439 +pdpa_engine_uptime{engine_instance_id="NSOApexEngine-2:0.0.1",} 259200.601 +# HELP pdpa_engine_last_execution_time Time taken to execute the last APEX policy in seconds. +# TYPE pdpa_engine_last_execution_time histogram +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.005",} 24726.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.01",} 50195.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.025",} 70836.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.05",} 71947.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.075",} 71996.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.1",} 72001.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.25",} 72002.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.5",} 72002.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="0.75",} 72002.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="1.0",} 72002.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="2.5",} 72002.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="5.0",} 72002.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="7.5",} 72002.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="10.0",} 72002.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-1:0.0.1",le="+Inf",} 72002.0 +pdpa_engine_last_execution_time_count{engine_instance_id="NSOApexEngine-1:0.0.1",} 72002.0 +pdpa_engine_last_execution_time_sum{engine_instance_id="NSOApexEngine-1:0.0.1",} 609.1939999998591 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.005",} 24512.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.01",} 50115.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.025",} 70746.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.05",} 71918.0 
+pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.075",} 71966.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.1",} 71967.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.25",} 71967.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.5",} 71967.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="0.75",} 71967.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="1.0",} 71967.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="2.5",} 71967.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="5.0",} 71967.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="7.5",} 71967.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="10.0",} 71967.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-4:0.0.1",le="+Inf",} 71967.0 +pdpa_engine_last_execution_time_count{engine_instance_id="NSOApexEngine-4:0.0.1",} 71967.0 +pdpa_engine_last_execution_time_sum{engine_instance_id="NSOApexEngine-4:0.0.1",} 610.3469999998522 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.005",} 24607.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.01",} 50182.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.025",} 70791.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.05",} 71929.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.075",} 71965.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.1",} 71970.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.25",} 71970.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.5",} 71970.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="0.75",} 71970.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="1.0",} 71970.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="2.5",} 71970.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="5.0",} 71970.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="7.5",} 71970.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="10.0",} 71970.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-3:0.0.1",le="+Inf",} 71970.0 +pdpa_engine_last_execution_time_count{engine_instance_id="NSOApexEngine-3:0.0.1",} 71970.0 +pdpa_engine_last_execution_time_sum{engine_instance_id="NSOApexEngine-3:0.0.1",} 608.8539999998619 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.005",} 24623.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.01",} 50207.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.025",} 70783.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.05",} 71934.0 
+pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.075",} 71981.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.1",} 71986.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.25",} 71988.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.5",} 71988.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="0.75",} 71988.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="1.0",} 71988.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="2.5",} 71988.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="5.0",} 71988.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="7.5",} 71988.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="10.0",} 71988.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-0:0.0.1",le="+Inf",} 71988.0 +pdpa_engine_last_execution_time_count{engine_instance_id="NSOApexEngine-0:0.0.1",} 71988.0 +pdpa_engine_last_execution_time_sum{engine_instance_id="NSOApexEngine-0:0.0.1",} 610.5579999998558 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.005",} 24594.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.01",} 50131.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.025",} 70816.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.05",} 71905.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.075",} 71959.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.1",} 71961.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.25",} 71962.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.5",} 71962.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="0.75",} 71962.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="1.0",} 71962.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="2.5",} 71962.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="5.0",} 71962.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="7.5",} 71962.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="10.0",} 71962.0 +pdpa_engine_last_execution_time_bucket{engine_instance_id="NSOApexEngine-2:0.0.1",le="+Inf",} 71962.0 +pdpa_engine_last_execution_time_count{engine_instance_id="NSOApexEngine-2:0.0.1",} 71962.0 +pdpa_engine_last_execution_time_sum{engine_instance_id="NSOApexEngine-2:0.0.1",} 608.3549999998555 +# HELP jvm_memory_objects_pending_finalization The number of objects waiting in the finalizer queue. +# TYPE jvm_memory_objects_pending_finalization gauge +jvm_memory_objects_pending_finalization 0.0 +# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area. +# TYPE jvm_memory_bytes_used gauge +jvm_memory_bytes_used{area="heap",} 1.90274552E8 +jvm_memory_bytes_used{area="nonheap",} 1.16193856E8 +# HELP jvm_memory_bytes_committed Committed (bytes) of a given JVM memory area. 
+# TYPE jvm_memory_bytes_committed gauge +jvm_memory_bytes_committed{area="heap",} 5.10984192E8 +jvm_memory_bytes_committed{area="nonheap",} 1.56127232E8 +# HELP jvm_memory_bytes_max Max (bytes) of a given JVM memory area. +# TYPE jvm_memory_bytes_max gauge +jvm_memory_bytes_max{area="heap",} 8.151564288E9 +jvm_memory_bytes_max{area="nonheap",} -1.0 +# HELP jvm_memory_bytes_init Initial bytes of a given JVM memory area. +# TYPE jvm_memory_bytes_init gauge +jvm_memory_bytes_init{area="heap",} 5.28482304E8 +jvm_memory_bytes_init{area="nonheap",} 7667712.0 +# HELP jvm_memory_pool_bytes_used Used bytes of a given JVM memory pool. +# TYPE jvm_memory_pool_bytes_used gauge +jvm_memory_pool_bytes_used{pool="CodeHeap 'non-nmethods'",} 1353600.0 +jvm_memory_pool_bytes_used{pool="Metaspace",} 7.7729144E7 +jvm_memory_pool_bytes_used{pool="Tenured Gen",} 1.41180272E8 +jvm_memory_pool_bytes_used{pool="CodeHeap 'profiled nmethods'",} 4831104.0 +jvm_memory_pool_bytes_used{pool="Eden Space",} 4.5145032E7 +jvm_memory_pool_bytes_used{pool="Survivor Space",} 3949248.0 +jvm_memory_pool_bytes_used{pool="Compressed Class Space",} 8194120.0 +jvm_memory_pool_bytes_used{pool="CodeHeap 'non-profiled nmethods'",} 2.4085888E7 +# HELP jvm_memory_pool_bytes_committed Committed bytes of a given JVM memory pool. +# TYPE jvm_memory_pool_bytes_committed gauge +jvm_memory_pool_bytes_committed{pool="CodeHeap 'non-nmethods'",} 2555904.0 +jvm_memory_pool_bytes_committed{pool="Metaspace",} 8.5348352E7 +jvm_memory_pool_bytes_committed{pool="Tenured Gen",} 3.52321536E8 +jvm_memory_pool_bytes_committed{pool="CodeHeap 'profiled nmethods'",} 3.3030144E7 +jvm_memory_pool_bytes_committed{pool="Eden Space",} 1.41033472E8 +jvm_memory_pool_bytes_committed{pool="Survivor Space",} 1.7629184E7 +jvm_memory_pool_bytes_committed{pool="Compressed Class Space",} 9175040.0 +jvm_memory_pool_bytes_committed{pool="CodeHeap 'non-profiled nmethods'",} 2.6017792E7 +# HELP jvm_memory_pool_bytes_max Max bytes of a given JVM memory pool. +# TYPE jvm_memory_pool_bytes_max gauge +jvm_memory_pool_bytes_max{pool="CodeHeap 'non-nmethods'",} 5828608.0 +jvm_memory_pool_bytes_max{pool="Metaspace",} -1.0 +jvm_memory_pool_bytes_max{pool="Tenured Gen",} 5.621809152E9 +jvm_memory_pool_bytes_max{pool="CodeHeap 'profiled nmethods'",} 1.22912768E8 +jvm_memory_pool_bytes_max{pool="Eden Space",} 2.248671232E9 +jvm_memory_pool_bytes_max{pool="Survivor Space",} 2.81083904E8 +jvm_memory_pool_bytes_max{pool="Compressed Class Space",} 1.073741824E9 +jvm_memory_pool_bytes_max{pool="CodeHeap 'non-profiled nmethods'",} 1.22916864E8 +# HELP jvm_memory_pool_bytes_init Initial bytes of a given JVM memory pool. +# TYPE jvm_memory_pool_bytes_init gauge +jvm_memory_pool_bytes_init{pool="CodeHeap 'non-nmethods'",} 2555904.0 +jvm_memory_pool_bytes_init{pool="Metaspace",} 0.0 +jvm_memory_pool_bytes_init{pool="Tenured Gen",} 3.52321536E8 +jvm_memory_pool_bytes_init{pool="CodeHeap 'profiled nmethods'",} 2555904.0 +jvm_memory_pool_bytes_init{pool="Eden Space",} 1.41033472E8 +jvm_memory_pool_bytes_init{pool="Survivor Space",} 1.7563648E7 +jvm_memory_pool_bytes_init{pool="Compressed Class Space",} 0.0 +jvm_memory_pool_bytes_init{pool="CodeHeap 'non-profiled nmethods'",} 2555904.0 +# HELP jvm_memory_pool_collection_used_bytes Used bytes after last collection of a given JVM memory pool. 
+# TYPE jvm_memory_pool_collection_used_bytes gauge +jvm_memory_pool_collection_used_bytes{pool="Tenured Gen",} 3.853812E7 +jvm_memory_pool_collection_used_bytes{pool="Eden Space",} 0.0 +jvm_memory_pool_collection_used_bytes{pool="Survivor Space",} 3949248.0 +# HELP jvm_memory_pool_collection_committed_bytes Committed after last collection bytes of a given JVM memory pool. +# TYPE jvm_memory_pool_collection_committed_bytes gauge +jvm_memory_pool_collection_committed_bytes{pool="Tenured Gen",} 3.52321536E8 +jvm_memory_pool_collection_committed_bytes{pool="Eden Space",} 1.41033472E8 +jvm_memory_pool_collection_committed_bytes{pool="Survivor Space",} 1.7629184E7 +# HELP jvm_memory_pool_collection_max_bytes Max bytes after last collection of a given JVM memory pool. +# TYPE jvm_memory_pool_collection_max_bytes gauge +jvm_memory_pool_collection_max_bytes{pool="Tenured Gen",} 5.621809152E9 +jvm_memory_pool_collection_max_bytes{pool="Eden Space",} 2.248671232E9 +jvm_memory_pool_collection_max_bytes{pool="Survivor Space",} 2.81083904E8 +# HELP jvm_memory_pool_collection_init_bytes Initial after last collection bytes of a given JVM memory pool. +# TYPE jvm_memory_pool_collection_init_bytes gauge +jvm_memory_pool_collection_init_bytes{pool="Tenured Gen",} 3.52321536E8 +jvm_memory_pool_collection_init_bytes{pool="Eden Space",} 1.41033472E8 +jvm_memory_pool_collection_init_bytes{pool="Survivor Space",} 1.7563648E7 +# HELP jvm_classes_loaded The number of classes that are currently loaded in the JVM +# TYPE jvm_classes_loaded gauge +jvm_classes_loaded 11386.0 +# HELP jvm_classes_loaded_total The total number of classes that have been loaded since the JVM has started execution +# TYPE jvm_classes_loaded_total counter +jvm_classes_loaded_total 11448.0 +# HELP jvm_classes_unloaded_total The total number of classes that have been unloaded since the JVM has started execution +# TYPE jvm_classes_unloaded_total counter +jvm_classes_unloaded_total 62.0 +# HELP jvm_info VM version info +# TYPE jvm_info gauge +jvm_info{runtime="OpenJDK Runtime Environment",vendor="Alpine",version="11.0.9+11-alpine-r1",} 1.0 +# HELP jvm_memory_pool_allocated_bytes_created Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously. +# TYPE jvm_memory_pool_allocated_bytes_created gauge +jvm_memory_pool_allocated_bytes_created{pool="Eden Space",} 1.651077501662E9 +jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'profiled nmethods'",} 1.651077501657E9 +jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-profiled nmethods'",} 1.651077501662E9 +jvm_memory_pool_allocated_bytes_created{pool="Compressed Class Space",} 1.651077501662E9 +jvm_memory_pool_allocated_bytes_created{pool="Metaspace",} 1.651077501662E9 +jvm_memory_pool_allocated_bytes_created{pool="Tenured Gen",} 1.651077501662E9 +jvm_memory_pool_allocated_bytes_created{pool="Survivor Space",} 1.651077501662E9 +jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-nmethods'",} 1.651077501662E9 +# HELP pdpa_engine_last_execution_time_created Time taken to execute the last APEX policy in seconds. 
+# TYPE pdpa_engine_last_execution_time_created gauge +pdpa_engine_last_execution_time_created{engine_instance_id="NSOApexEngine-1:0.0.1",} 1.651080501294E9 +pdpa_engine_last_execution_time_created{engine_instance_id="NSOApexEngine-4:0.0.1",} 1.651080501295E9 +pdpa_engine_last_execution_time_created{engine_instance_id="NSOApexEngine-3:0.0.1",} 1.651080501295E9 +pdpa_engine_last_execution_time_created{engine_instance_id="NSOApexEngine-0:0.0.1",} 1.651080501294E9 +pdpa_engine_last_execution_time_created{engine_instance_id="NSOApexEngine-2:0.0.1",} 1.651080501294E9 +# HELP pdpa_policy_deployments_created The total number of policy deployments. +# TYPE pdpa_policy_deployments_created gauge +pdpa_policy_deployments_created{operation="deploy",status="TOTAL",} 1.651080501289E9 +pdpa_policy_deployments_created{operation="undeploy",status="TOTAL",} 1.651081148331E9 +pdpa_policy_deployments_created{operation="undeploy",status="SUCCESS",} 1.651081148331E9 +pdpa_policy_deployments_created{operation="deploy",status="SUCCESS",} 1.651080501289E9 diff --git a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_metrics_before_72h.txt b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_metrics_before_72h.txt new file mode 100644 index 00000000..4a3d8835 --- /dev/null +++ b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_metrics_before_72h.txt @@ -0,0 +1,175 @@ +# HELP jvm_threads_current Current thread count of a JVM +# TYPE jvm_threads_current gauge +jvm_threads_current 31.0 +# HELP jvm_threads_daemon Daemon thread count of a JVM +# TYPE jvm_threads_daemon gauge +jvm_threads_daemon 16.0 +# HELP jvm_threads_peak Peak thread count of a JVM +# TYPE jvm_threads_peak gauge +jvm_threads_peak 31.0 +# HELP jvm_threads_started_total Started thread count of a JVM +# TYPE jvm_threads_started_total counter +jvm_threads_started_total 32.0 +# HELP jvm_threads_deadlocked Cycles of JVM-threads that are in deadlock waiting to acquire object monitors or ownable synchronizers +# TYPE jvm_threads_deadlocked gauge +jvm_threads_deadlocked 0.0 +# HELP jvm_threads_deadlocked_monitor Cycles of JVM-threads that are in deadlock waiting to acquire object monitors +# TYPE jvm_threads_deadlocked_monitor gauge +jvm_threads_deadlocked_monitor 0.0 +# HELP jvm_threads_state Current count of threads by state +# TYPE jvm_threads_state gauge +jvm_threads_state{state="BLOCKED",} 0.0 +jvm_threads_state{state="TIMED_WAITING",} 11.0 +jvm_threads_state{state="NEW",} 0.0 +jvm_threads_state{state="RUNNABLE",} 7.0 +jvm_threads_state{state="TERMINATED",} 0.0 +jvm_threads_state{state="WAITING",} 13.0 +# HELP jvm_gc_collection_seconds Time spent in a given JVM garbage collector in seconds. +# TYPE jvm_gc_collection_seconds summary +jvm_gc_collection_seconds_count{gc="Copy",} 2.0 +jvm_gc_collection_seconds_sum{gc="Copy",} 0.059 +jvm_gc_collection_seconds_count{gc="MarkSweepCompact",} 2.0 +jvm_gc_collection_seconds_sum{gc="MarkSweepCompact",} 0.185 +# HELP process_cpu_seconds_total Total user and system CPU time spent in seconds. +# TYPE process_cpu_seconds_total counter +process_cpu_seconds_total 38.14 +# HELP process_start_time_seconds Start time of the process since unix epoch in seconds. +# TYPE process_start_time_seconds gauge +process_start_time_seconds 1.651077494162E9 +# HELP process_open_fds Number of open file descriptors. +# TYPE process_open_fds gauge +process_open_fds 355.0 +# HELP process_max_fds Maximum number of open file descriptors. 
+# TYPE process_max_fds gauge +process_max_fds 1048576.0 +# HELP process_virtual_memory_bytes Virtual memory size in bytes. +# TYPE process_virtual_memory_bytes gauge +process_virtual_memory_bytes 1.0070171648E10 +# HELP process_resident_memory_bytes Resident memory size in bytes. +# TYPE process_resident_memory_bytes gauge +process_resident_memory_bytes 2.9052928E8 +# HELP jvm_buffer_pool_used_bytes Used bytes of a given JVM buffer pool. +# TYPE jvm_buffer_pool_used_bytes gauge +jvm_buffer_pool_used_bytes{pool="mapped",} 0.0 +jvm_buffer_pool_used_bytes{pool="direct",} 187432.0 +# HELP jvm_buffer_pool_capacity_bytes Bytes capacity of a given JVM buffer pool. +# TYPE jvm_buffer_pool_capacity_bytes gauge +jvm_buffer_pool_capacity_bytes{pool="mapped",} 0.0 +jvm_buffer_pool_capacity_bytes{pool="direct",} 187432.0 +# HELP jvm_buffer_pool_used_buffers Used buffers of a given JVM buffer pool. +# TYPE jvm_buffer_pool_used_buffers gauge +jvm_buffer_pool_used_buffers{pool="mapped",} 0.0 +jvm_buffer_pool_used_buffers{pool="direct",} 9.0 +# HELP jvm_memory_pool_allocated_bytes_total Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously. +# TYPE jvm_memory_pool_allocated_bytes_total counter +jvm_memory_pool_allocated_bytes_total{pool="Eden Space",} 3.035482E8 +jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'profiled nmethods'",} 9772800.0 +jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-profiled nmethods'",} 2152064.0 +jvm_memory_pool_allocated_bytes_total{pool="Compressed Class Space",} 4912232.0 +jvm_memory_pool_allocated_bytes_total{pool="Metaspace",} 4.1337744E7 +jvm_memory_pool_allocated_bytes_total{pool="Tenured Gen",} 2.8136056E7 +jvm_memory_pool_allocated_bytes_total{pool="Survivor Space",} 6813240.0 +jvm_memory_pool_allocated_bytes_total{pool="CodeHeap 'non-nmethods'",} 1272320.0 +# HELP pdpa_policy_deployments_total The total number of policy deployments. +# TYPE pdpa_policy_deployments_total counter +# HELP jvm_memory_objects_pending_finalization The number of objects waiting in the finalizer queue. +# TYPE jvm_memory_objects_pending_finalization gauge +jvm_memory_objects_pending_finalization 0.0 +# HELP jvm_memory_bytes_used Used bytes of a given JVM memory area. +# TYPE jvm_memory_bytes_used gauge +jvm_memory_bytes_used{area="heap",} 9.5900224E7 +jvm_memory_bytes_used{area="nonheap",} 6.0285288E7 +# HELP jvm_memory_bytes_committed Committed (bytes) of a given JVM memory area. +# TYPE jvm_memory_bytes_committed gauge +jvm_memory_bytes_committed{area="heap",} 5.10984192E8 +jvm_memory_bytes_committed{area="nonheap",} 6.3922176E7 +# HELP jvm_memory_bytes_max Max (bytes) of a given JVM memory area. +# TYPE jvm_memory_bytes_max gauge +jvm_memory_bytes_max{area="heap",} 8.151564288E9 +jvm_memory_bytes_max{area="nonheap",} -1.0 +# HELP jvm_memory_bytes_init Initial bytes of a given JVM memory area. +# TYPE jvm_memory_bytes_init gauge +jvm_memory_bytes_init{area="heap",} 5.28482304E8 +jvm_memory_bytes_init{area="nonheap",} 7667712.0 +# HELP jvm_memory_pool_bytes_used Used bytes of a given JVM memory pool. 
+# TYPE jvm_memory_pool_bytes_used gauge +jvm_memory_pool_bytes_used{pool="CodeHeap 'non-nmethods'",} 1272320.0 +jvm_memory_pool_bytes_used{pool="Metaspace",} 4.1681312E7 +jvm_memory_pool_bytes_used{pool="Tenured Gen",} 2.8136056E7 +jvm_memory_pool_bytes_used{pool="CodeHeap 'profiled nmethods'",} 1.0006912E7 +jvm_memory_pool_bytes_used{pool="Eden Space",} 6.5005376E7 +jvm_memory_pool_bytes_used{pool="Survivor Space",} 2758792.0 +jvm_memory_pool_bytes_used{pool="Compressed Class Space",} 4913352.0 +jvm_memory_pool_bytes_used{pool="CodeHeap 'non-profiled nmethods'",} 2411392.0 +# HELP jvm_memory_pool_bytes_committed Committed bytes of a given JVM memory pool. +# TYPE jvm_memory_pool_bytes_committed gauge +jvm_memory_pool_bytes_committed{pool="CodeHeap 'non-nmethods'",} 2555904.0 +jvm_memory_pool_bytes_committed{pool="Metaspace",} 4.32128E7 +jvm_memory_pool_bytes_committed{pool="Tenured Gen",} 3.52321536E8 +jvm_memory_pool_bytes_committed{pool="CodeHeap 'profiled nmethods'",} 1.0092544E7 +jvm_memory_pool_bytes_committed{pool="Eden Space",} 1.41033472E8 +jvm_memory_pool_bytes_committed{pool="Survivor Space",} 1.7629184E7 +jvm_memory_pool_bytes_committed{pool="Compressed Class Space",} 5505024.0 +jvm_memory_pool_bytes_committed{pool="CodeHeap 'non-profiled nmethods'",} 2555904.0 +# HELP jvm_memory_pool_bytes_max Max bytes of a given JVM memory pool. +# TYPE jvm_memory_pool_bytes_max gauge +jvm_memory_pool_bytes_max{pool="CodeHeap 'non-nmethods'",} 5828608.0 +jvm_memory_pool_bytes_max{pool="Metaspace",} -1.0 +jvm_memory_pool_bytes_max{pool="Tenured Gen",} 5.621809152E9 +jvm_memory_pool_bytes_max{pool="CodeHeap 'profiled nmethods'",} 1.22912768E8 +jvm_memory_pool_bytes_max{pool="Eden Space",} 2.248671232E9 +jvm_memory_pool_bytes_max{pool="Survivor Space",} 2.81083904E8 +jvm_memory_pool_bytes_max{pool="Compressed Class Space",} 1.073741824E9 +jvm_memory_pool_bytes_max{pool="CodeHeap 'non-profiled nmethods'",} 1.22916864E8 +# HELP jvm_memory_pool_bytes_init Initial bytes of a given JVM memory pool. +# TYPE jvm_memory_pool_bytes_init gauge +jvm_memory_pool_bytes_init{pool="CodeHeap 'non-nmethods'",} 2555904.0 +jvm_memory_pool_bytes_init{pool="Metaspace",} 0.0 +jvm_memory_pool_bytes_init{pool="Tenured Gen",} 3.52321536E8 +jvm_memory_pool_bytes_init{pool="CodeHeap 'profiled nmethods'",} 2555904.0 +jvm_memory_pool_bytes_init{pool="Eden Space",} 1.41033472E8 +jvm_memory_pool_bytes_init{pool="Survivor Space",} 1.7563648E7 +jvm_memory_pool_bytes_init{pool="Compressed Class Space",} 0.0 +jvm_memory_pool_bytes_init{pool="CodeHeap 'non-profiled nmethods'",} 2555904.0 +# HELP jvm_memory_pool_collection_used_bytes Used bytes after last collection of a given JVM memory pool. +# TYPE jvm_memory_pool_collection_used_bytes gauge +jvm_memory_pool_collection_used_bytes{pool="Tenured Gen",} 2.8136056E7 +jvm_memory_pool_collection_used_bytes{pool="Eden Space",} 0.0 +jvm_memory_pool_collection_used_bytes{pool="Survivor Space",} 2758792.0 +# HELP jvm_memory_pool_collection_committed_bytes Committed after last collection bytes of a given JVM memory pool. +# TYPE jvm_memory_pool_collection_committed_bytes gauge +jvm_memory_pool_collection_committed_bytes{pool="Tenured Gen",} 3.52321536E8 +jvm_memory_pool_collection_committed_bytes{pool="Eden Space",} 1.41033472E8 +jvm_memory_pool_collection_committed_bytes{pool="Survivor Space",} 1.7629184E7 +# HELP jvm_memory_pool_collection_max_bytes Max bytes after last collection of a given JVM memory pool. 
+# TYPE jvm_memory_pool_collection_max_bytes gauge +jvm_memory_pool_collection_max_bytes{pool="Tenured Gen",} 5.621809152E9 +jvm_memory_pool_collection_max_bytes{pool="Eden Space",} 2.248671232E9 +jvm_memory_pool_collection_max_bytes{pool="Survivor Space",} 2.81083904E8 +# HELP jvm_memory_pool_collection_init_bytes Initial after last collection bytes of a given JVM memory pool. +# TYPE jvm_memory_pool_collection_init_bytes gauge +jvm_memory_pool_collection_init_bytes{pool="Tenured Gen",} 3.52321536E8 +jvm_memory_pool_collection_init_bytes{pool="Eden Space",} 1.41033472E8 +jvm_memory_pool_collection_init_bytes{pool="Survivor Space",} 1.7563648E7 +# HELP jvm_classes_loaded The number of classes that are currently loaded in the JVM +# TYPE jvm_classes_loaded gauge +jvm_classes_loaded 7378.0 +# HELP jvm_classes_loaded_total The total number of classes that have been loaded since the JVM has started execution +# TYPE jvm_classes_loaded_total counter +jvm_classes_loaded_total 7378.0 +# HELP jvm_classes_unloaded_total The total number of classes that have been unloaded since the JVM has started execution +# TYPE jvm_classes_unloaded_total counter +jvm_classes_unloaded_total 0.0 +# HELP jvm_info VM version info +# TYPE jvm_info gauge +jvm_info{runtime="OpenJDK Runtime Environment",vendor="Alpine",version="11.0.9+11-alpine-r1",} 1.0 +# HELP jvm_memory_pool_allocated_bytes_created Total bytes allocated in a given JVM memory pool. Only updated after GC, not continuously. +# TYPE jvm_memory_pool_allocated_bytes_created gauge +jvm_memory_pool_allocated_bytes_created{pool="Eden Space",} 1.651077501662E9 +jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'profiled nmethods'",} 1.651077501657E9 +jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-profiled nmethods'",} 1.651077501662E9 +jvm_memory_pool_allocated_bytes_created{pool="Compressed Class Space",} 1.651077501662E9 +jvm_memory_pool_allocated_bytes_created{pool="Metaspace",} 1.651077501662E9 +jvm_memory_pool_allocated_bytes_created{pool="Tenured Gen",} 1.651077501662E9 +jvm_memory_pool_allocated_bytes_created{pool="Survivor Space",} 1.651077501662E9 +jvm_memory_pool_allocated_bytes_created{pool="CodeHeap 'non-nmethods'",} 1.651077501662E9 diff --git a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_perf_jmeter_results.png b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_perf_jmeter_results.png new file mode 100644 index 00000000..0fa35c0b Binary files /dev/null and b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_perf_jmeter_results.png differ diff --git a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_stability_jmeter_results.png b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_stability_jmeter_results.png new file mode 100644 index 00000000..585f99c5 Binary files /dev/null and b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_stability_jmeter_results.png differ diff --git a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_top_after_72h.png b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_top_after_72h.png new file mode 100644 index 00000000..dafc7002 Binary files /dev/null and b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_top_after_72h.png differ diff --git a/docs/development/devtools/testing/s3p/apex-s3p-results/apex_top_before_72h.png b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_top_before_72h.png new file mode 100644 index 00000000..2e2e7574 Binary files /dev/null and 
b/docs/development/devtools/testing/s3p/apex-s3p-results/apex_top_before_72h.png differ diff --git a/docs/development/devtools/testing/s3p/apex-s3p.rst b/docs/development/devtools/testing/s3p/apex-s3p.rst new file mode 100644 index 00000000..4fca626c --- /dev/null +++ b/docs/development/devtools/testing/s3p/apex-s3p.rst @@ -0,0 +1,258 @@ +.. This work is licensed under a +.. Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 + +.. _apex-s3p-label: + +.. toctree:: + :maxdepth: 2 + +Policy APEX PDP component +~~~~~~~~~~~~~~~~~~~~~~~~~ + +Both the Stability and the Performance tests were executed in a full ONAP OOM deployment in Nordix lab. + +Setup Details ++++++++++++++ + +Deploying ONAP using OOM +------------------------ + +APEX-PDP along with all policy components are deployed as part of a full ONAP OOM deployment. +At a minimum, the following ONAP components are needed: policy, mariadb-galera, aai, cassandra, aaf, and dmaap. + +Before deploying, the values.yaml files are changed to use NodePort instead of ClusterIP for policy-api, +policy-pap, and policy-apex-pdp, so that they are accessible from jmeter:: + + policy-apex-pdp NodePort 10.43.131.43 6969:31739/TCP + policy-api NodePort 10.43.67.153 6969:30430/TCP + policy-pap NodePort 10.43.200.57 6969:30585/TCP + +The node ports (31739, 30430 and 30585 above) are used in JMeter. +The HOSTNAMEs for JMeter are set to the IPs returned by running "kubectl get node -o wide" +and to find the applications for each node by running "kubectl describe node ". + +Set up policy-models-simulator +------------------------------ + +Policy-models-simulator is deployed to use CDS and DMaaP simulators during policy execution. + Simulator configurations used are available in apex-pdp repository: + testsuites/apex-pdp-stability/src/main/resources/simulatorConfig/ + +It is run as a docker image from a node accessible to the kubernetes cluster:: + + docker run -d --rm --publish 6680:6680 --publish 31054:3905 \ + --volume "apex-pdp/testsuites/apex-pdp-stability/src/main/resources/simulatorConfig:/opt/app/policy/simulators/etc/mounted" \ + nexus3.onap.org:10001/onap/policy-models-simulator:2.7-SNAPSHOT-latest + +The published ports 6680 and 31054 are used in JMeter for CDS and DMaaP simulators. + +Creation of VNF & PNF in AAI +---------------------------- + +In order for APEX-PDP engine to fetch the resource details from AAI during runtime execution, we need to create dummy +VNF & PNF entities in AAI. In a real control loop flow, the entities in AAI will be either created during orchestration +phase or provisioned in AAI separately. + +Download & execute the steps in postman collection for creating the entities along with it’s dependencies. +The steps needs to be performed sequentially one after another. And no input is required from user. + +:download:`Create VNF & PNF in AAI for Apex S3P ` + +Make sure to skip the delete VNF & PNF steps. + +JMeter Tests +------------ + +Two APEX policies are executed in the APEX-PDP engine, and are triggered by multiple threads during the tests. +Both tests were run via jMeter. + + Stability test script is available in apex-pdp repository: + testsuites/apex-pdp-stability/src/main/resources/apexPdpStabilityTestPlan.jmx + + Performance test script is available in apex-pdp repository: + testsuites/performance/performance-benchmark-test/src/main/resources/apexPdpPerformanceTestPlan.jmx + +.. Note:: + Policy executions are validated in a stricter fashion during the tests. 
+ There are test cases where up to 80 events are expected on the DMaaP topic. + DMaaP simulator is used to keep it simple and avoid any message pickup timing related issues. + +Stability Test of APEX-PDP +++++++++++++++++++++++++++ + +Test Plan +--------- + +The 72 hours stability test ran the following steps. + +Setup Phase +""""""""""" + +Policies are created and deployed to APEX-PDP during this phase. Only one thread is in action and this step is done only once. + +- **Create Policy onap.policies.apex.Simplecontrolloop** - creates the first APEX policy using policy/api component. + This is a sample policy used for PNF testing. +- **Create Policy onap.policies.apex.Example** - creates the second APEX policy using policy/api component. + This is a sample policy used for VNF testing. +- **Deploy Policies** - Deploy both the policies created to APEX-PDP using policy/pap component + +Main Phase +"""""""""" + +Once the policies are created and deployed to APEX-PDP by the setup thread, five threads execute the below tests for 72 hours. + +- **Healthcheck** - checks the health status of APEX-PDP +- **Prometheus Metrics** - checks that APEX-PDP is exposing prometheus metrics +- **Test Simplecontrolloop policy success case** - Send a trigger event to *unauthenticated.DCAE_CL_OUTPUT* DMaaP topic. + If the policy execution is successful, 3 different notification events are sent to *APEX-CL-MGT* topic by each one of the 5 threads. + So, it is checked if 15 notification messages are received in total on *APEX-CL-MGT* topic with the relevant messages. +- **Test Simplecontrolloop policy failure case** - Send a trigger event with invalid pnfName to *unauthenticated.DCAE_CL_OUTPUT* DMaaP topic. + The policy execution is expected to fail due to AAI failure response. 2 notification events are expected on *APEX-CL-MGT* topic by a thread in this case. + It is checked if 10 notification messages are received in total on *APEX-CL-MGT* topic with the relevant messages. +- **Test Example policy success case** - Send a trigger event to *unauthenticated.DCAE_POLICY_EXAMPLE_OUTPUT* DMaaP topic. + If the policy execution is successful, 4 different notification events are sent to *APEX-CL-MGT* topic by each one of the 5 threads. + So, it is checked if 20 notification messages are received in total on *APEX-CL-MGT* topic with the relevant messages. +- **Test Example policy failure case** - Send a trigger event with invalid vnfName to *unauthenticated.DCAE_POLICY_EXAMPLE_OUTPUT* DMaaP topic. + The policy execution is expected to fail due to AAI failure response. 2 notification events are expected on *APEX-CL-MGT* topic by a thread in this case. + So, it is checked if 10 notification messages are received in total on *APEX-CL-MGT* topic with the relevant messages. +- **Clean up DMaaP notification topic** - DMaaP notification topic which is *APEX-CL-MGT* is cleaned up after each test to make sure that one failure doesn't lead to cascading errors. + + +Teardown Phase +"""""""""""""" + +Policies are undeployed from APEX-PDP and deleted during this phase. +Only one thread is in action and this step is done only once after the Main phase is complete. + +- **Undeploy Policies** - Undeploy both the policies from APEX-PDP using policy/pap component +- **Delete Policy onap.policies.apex.Simplecontrolloop** - delete the first APEX policy using policy/api component. +- **Delete Policy onap.policies.apex.Example** - delete the second APEX policy also using policy/api component. 
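+
+As an illustration, a rough curl sketch of the REST calls the teardown thread issues is shown
+below. The pap and api endpoint paths follow the standard policy framework interfaces, while the
+credentials, policy type name and version values are placeholders and should be taken from the
+deployed test plan rather than from this sketch.
+
+.. code-block:: bash
+
+    # undeploy a policy through policy-pap (simple undeploy by policy name)
+    curl -sk -u "${POLICY_USER}:${POLICY_PASSWORD}" -X DELETE \
+        "https://${HOSTNAME}:${PAP_PORT}/policy/pap/v1/pdps/policies/onap.policies.apex.Simplecontrolloop"
+
+    # delete the policy definition through policy-api (type name and versions are placeholders)
+    curl -sk -u "${POLICY_USER}:${POLICY_PASSWORD}" -X DELETE \
+        "https://${HOSTNAME}:${API_PORT}/policy/api/v1/policytypes/<policy-type>/versions/<type-version>/policies/onap.policies.apex.Simplecontrolloop/versions/<policy-version>"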
+ +Test Configuration +------------------ + +The following steps can be used to configure the parameters of test plan. + +- **HTTP Authorization Manager** - used to store user/password authentication details. +- **HTTP Header Manager** - used to store headers which will be used for making HTTP requests. +- **User Defined Variables** - used to store following user defined parameters. + +=================== =============================================================================== + **Name** **Description** +=================== =============================================================================== + HOSTNAME IP Address or host name to access the components + PAP_PORT Port number of PAP for making REST API calls such as deploy/undeploy of policy + API_PORT Port number of API for making REST API calls such as create/delete of policy + APEX_PORT Port number of APEX for making REST API calls such as healthcheck/metrics + SIM_HOST IP Address or hostname running policy-models-simulator + DMAAP_PORT Port number of DMaaP simulator for making REST API calls such as reading notification events + CDS_PORT Port number of CDS simulator + wait Wait time if required after a request (in milliseconds) + threads Number of threads to run test cases in parallel + threadsTimeOutInMs Synchronization timer for threads running in parallel (in milliseconds) +=================== =============================================================================== + +Run Test +-------- + +The test was run in the background via "nohup", to prevent it from being interrupted: + +.. code-block:: bash + + nohup ./apache-jmeter-5.4.3/bin/jmeter.sh -n -t apexPdpStabilityTestPlan.jmx -l stabilityTestResults.jtl + +Test Results +------------ + +**Summary** + +Stability test plan was triggered for 72 hours. There were no failures during the 72 hours test. + + +**Test Statistics** + +======================= ================= ================== ================================== +**Total # of requests** **Success %** **Error %** **Average time taken per request** +======================= ================= ================== ================================== +430397 100 % 0.00 % 151.694 ms +======================= ================= ================== ================================== + +.. Note:: + + There were no failures during the 72 hours test. + +**JMeter Screenshot** + +.. image:: apex-s3p-results/apex_stability_jmeter_results.png + +**Memory and CPU usage** + +The memory and CPU usage can be monitored by running "top" command in the APEX-PDP pod. +A snapshot is taken before and after test execution to monitor the changes in resource utilization. +Prometheus metrics is also collected before and after the test execution. + +Memory and CPU usage before test execution: + +.. image:: apex-s3p-results/apex_top_before_72h.png + +:download:`Prometheus metrics before 72h test ` + +Memory and CPU usage after test execution: + +.. image:: apex-s3p-results/apex_top_after_72h.png + +:download:`Prometheus metrics after 72h test ` + +Performance Test of APEX-PDP +++++++++++++++++++++++++++++ + +Introduction +------------ + +Performance test of APEX-PDP is done similar to the stability test, but in a more extreme manner using higher thread count. + +Setup Details +------------- + +The performance test is performed on a similar setup as Stability test. + + +Test Plan +--------- + +Performance test plan is the same as the stability test plan above except for the few differences listed below. 
+ +- Increase the number of threads used in the Main Phase from 5 to 20. +- Reduce the test time to 2 hours. + +Run Test +-------- + +.. code-block:: bash + + nohup ./apache-jmeter-5.4.3/bin/jmeter.sh -n -t apexPdpPerformanceTestPlan.jmx -l perftestresults.jtl + + +Test Results +------------ + +Test results are shown as below. + +**Test Statistics** + +======================= ================= ================== ================================== +**Total # of requests** **Success %** **Error %** **Average time taken per request** +======================= ================= ================== ================================== +47567 100 % 0.00 % 163.841 ms +======================= ================= ================== ================================== + +**JMeter Screenshot** + +.. image:: apex-s3p-results/apex_perf_jmeter_results.png + +Summary ++++++++ + +Multiple policies were executed in a multi-threaded fashion for both stability and performance tests. +Both tests ran smoothly without any issues. diff --git a/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-distribution_J.png b/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-distribution_J.png new file mode 100644 index 00000000..6b62b2b2 Binary files /dev/null and b/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-distribution_J.png differ diff --git a/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-distribution_performance_J.png b/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-distribution_performance_J.png new file mode 100644 index 00000000..60476027 Binary files /dev/null and b/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-distribution_performance_J.png differ diff --git a/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-overtime_J.png b/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-overtime_J.png new file mode 100644 index 00000000..b32ff6ae Binary files /dev/null and b/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-overtime_J.png differ diff --git a/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-overtime_performance_J.png b/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-overtime_performance_J.png new file mode 100644 index 00000000..82a0b8ae Binary files /dev/null and b/docs/development/devtools/testing/s3p/api-s3p-results/api-response-time-overtime_performance_J.png differ diff --git a/docs/development/devtools/testing/s3p/api-s3p-results/api-s3p-jm-1_J.png b/docs/development/devtools/testing/s3p/api-s3p-results/api-s3p-jm-1_J.png new file mode 100644 index 00000000..c219a63c Binary files /dev/null and b/docs/development/devtools/testing/s3p/api-s3p-results/api-s3p-jm-1_J.png differ diff --git a/docs/development/devtools/testing/s3p/api-s3p-results/api-s3p-jm-2_J.png b/docs/development/devtools/testing/s3p/api-s3p-results/api-s3p-jm-2_J.png new file mode 100644 index 00000000..0083f3ca Binary files /dev/null and b/docs/development/devtools/testing/s3p/api-s3p-results/api-s3p-jm-2_J.png differ diff --git a/docs/development/devtools/testing/s3p/api-s3p-results/api_top_after_72h.png b/docs/development/devtools/testing/s3p/api-s3p-results/api_top_after_72h.png new file mode 100644 index 00000000..de4c4553 Binary files /dev/null and b/docs/development/devtools/testing/s3p/api-s3p-results/api_top_after_72h.png differ diff --git 
a/docs/development/devtools/testing/s3p/api-s3p-results/api_top_before_72h.png b/docs/development/devtools/testing/s3p/api-s3p-results/api_top_before_72h.png new file mode 100644 index 00000000..2b334377 Binary files /dev/null and b/docs/development/devtools/testing/s3p/api-s3p-results/api_top_before_72h.png differ diff --git a/docs/development/devtools/testing/s3p/api-s3p.rst b/docs/development/devtools/testing/s3p/api-s3p.rst new file mode 100644 index 00000000..12c3a516 --- /dev/null +++ b/docs/development/devtools/testing/s3p/api-s3p.rst @@ -0,0 +1,211 @@ +.. This work is licensed under a +.. Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 + +.. _api-s3p-label: + +.. toctree:: + :maxdepth: 2 + +Policy API S3P Tests +#################### + + +72 Hours Stability Test of Policy API ++++++++++++++++++++++++++++++++++++++ + +Introduction +------------ + +The 72 hour stability test of policy API has the goal of verifying the stability of running policy design API REST +service by ingesting a steady flow of transactions in a multi-threaded fashion to +simulate multiple clients' behaviours. +All the transaction flows are initiated from a test client server running JMeter for the duration of 72 hours. + +Setup Details +------------- + +The stability test was performed on a default ONAP OOM installation in the Nordix Lab environment. +JMeter was installed on a separate VM to inject the traffic defined in the +`API stability script +`_ +with the following command: + +.. code-block:: bash + + nohup apache-jmeter-5.5/bin/jmeter -n -t policy_api_stability.jmx -l stabilityTestResultsPolicyApi.jtl & + +The test was run in the background via “nohup” and “&”, to prevent it from being interrupted. + +Test Plan +--------- + +The 72+ hours stability test will be running the following steps sequentially +in multi-threaded loops. Thread number is set to 5 to simulate 5 API clients' +behaviours (they can be calling the same policy CRUD API simultaneously). +Each thread creates a different version of the policy types and policies to not +interfere with one another while operating simultaneously. The point version +of each entity is set to the running thread number. 
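+
+As an illustration of a single iteration, a minimal curl sketch of one create/read/delete cycle
+against policy-api is given below. The policy type identifier, payload file, credentials and the
+``6.0.#`` point version are placeholders (``#`` standing for the thread number); they are not the
+exact names used in the JMeter script.
+
+.. code-block:: bash
+
+    THREAD=1   # point version suffix used by this thread
+    BASE="https://${API_HOST}:${API_PORT}/policy/api/v1"
+
+    # create a policy type whose version ends in the thread number
+    curl -sk -u "${POLICY_USER}:${POLICY_PASSWORD}" -X POST -H "Content-Type: application/json" \
+        -d @monitoring_policy_type.json "${BASE}/policytypes"
+
+    # read back the version created by this thread
+    curl -sk -u "${POLICY_USER}:${POLICY_PASSWORD}" \
+        "${BASE}/policytypes/onap.policies.monitoring.example/versions/6.0.${THREAD}"
+
+    # delete that version again
+    curl -sk -u "${POLICY_USER}:${POLICY_PASSWORD}" -X DELETE \
+        "${BASE}/policytypes/onap.policies.monitoring.example/versions/6.0.${THREAD}"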
+ +**Setup Thread (will be running only once)** + +- Get policy-api Healthcheck +- Get API Counter Statistics +- Get Preloaded Policy Types + +**API Test Flow (5 threads running the same steps in the same loop)** + +- Create a new Monitoring Policy Type with Version 6.0.# +- Create a new Monitoring Policy Type with Version 7.0.# +- Create a new Optimization Policy Type with Version 6.0.# +- Create a new Guard Policy Type with Version 6.0.# +- Create a new Native APEX Policy Type with Version 6.0.# +- Create a new Native Drools Policy Type with Version 6.0.# +- Create a new Native XACML Policy Type with Version 6.0.# +- Get All Policy Types +- Get All Versions of the new Monitoring Policy Type +- Get Version 6.0.# of the new Monitoring Policy Type +- Get Version 6.0.# of the new Optimization Policy Type +- Get Version 6.0.# of the new Guard Policy Type +- Get Version 6.0.# of the new Native APEX Policy Type +- Get Version 6.0.# of the new Native Drools Policy Type +- Get Version 6.0.# of the new Native XACML Policy Type +- Get the Latest Version of the New Monitoring Policy Type +- Create Version 6.0.# of Node Template +- Create Monitoring Policy Ver 6.0.# w/Monitoring Policy Type Ver 6.0.# +- Create Monitoring Policy Ver 7.0.# w/Monitoring Policy Type Ver 7.0.# +- Create Optimization Policy Ver 6.0.# w/Optimization Policy Type Ver 6.0.# +- Create Guard Policy Ver 6.0.# w/Guard Policy Type Ver 6.0.# +- Create Native APEX Policy Ver 6.0.# w/Native APEX Policy Type Ver 6.0.# +- Create Native Drools Policy Ver 6.0.# w/Native Drools Policy Type Ver 6.0.# +- Create Native XACML Policy Ver 6.0.# w/Native XACML Policy Type Ver 6.0.# +- Create Version 6.0.# of PNF Example Policy with Metadata +- Get Node Template +- Get All TCA Policies +- Get All Versions of Monitoring Policy Type +- Get Version 6.0.# of the new Monitoring Policy +- Get Version 6.0.# of the new Optimization Policy +- Get Version 6.0.# of the new Guard Policy +- Get Version 6.0.# of the new Native APEX Policy +- Get Version 6.0.# of the new Native Drools Policy +- Get Version 6.0.# of the new Native XACML Policy +- Get the Latest Version of the new Monitoring Policy +- Delete Version 6.0.# of the new Monitoring Policy +- Delete Version 7.0.# of the new Monitoring Policy +- Delete Version 6.0.# of the new OptimizationPolicy +- Delete Version 6.0.# of the new Guard Policy +- Delete Version 6.0.# of the new Native APEX Policy +- Delete Version 6.0.# of PNF Example Policy having Metadata +- Delete Version 6.0.# of the new Native Drools Policy +- Delete Version 6.0.# of the new Native XACML Policy +- Delete Monitoring Policy Type with Version 6.0.# +- Delete Monitoring Policy Type with Version 7.0.# +- Delete Optimization Policy Type with Version 6.0.# +- Delete Guard Policy Type with Version 6.0.# +- Delete Native APEX Policy Type with Version 6.0.# +- Delete Native Drools Policy Type with Version 6.0.# +- Delete Native XACML Policy Type with Version 6.0.# +- Delete Node Template +- Get Policy Metrics + +**TearDown Thread (will only be running after API Test Flow is completed)** + +- Get policy-api Healthcheck +- Get Preloaded Policy Types + + +Test Results +------------ + +**Summary** + +No errors were found during the 72 hours of the Policy API stability run. +The load was performed against a non-tweaked ONAP OOM installation. 
+ +**Test Statistics** + +======================= ============= =========== =============================== =============================== =============================== +**Total # of requests** **Success %** **TPS** **Avg. time taken per request** **Min. time taken per request** **Max. time taken per request** +======================= ============= =========== =============================== =============================== =============================== + 950839 100% 3.67 1351 ms 126 ms 16324 ms +======================= ============= =========== =============================== =============================== =============================== + +.. image:: api-s3p-results/api-s3p-jm-1_J.png + +**JMeter Results** + +The following graphs show the response time distributions. The "Get Policy Types" API calls are the most expensive calls that +average a 13 seconds plus response time. + +.. image:: api-s3p-results/api-response-time-distribution_J.png +.. image:: api-s3p-results/api-response-time-overtime_J.png + +**Memory and CPU usage** + +The memory and CPU usage can be monitored by running "top" command in the policy-api pod. +A snapshot is taken before and after test execution to monitor the changes in resource utilization. + +Memory and CPU usage before test execution: + +.. image:: api-s3p-results/api_top_before_72h.png + +Memory and CPU usage after test execution: + +.. image:: api-s3p-results/api_top_after_72h.png + + +Performance Test of Policy API +++++++++++++++++++++++++++++++ + +Introduction +------------ + +Performance test of policy-api has the goal of testing the min/avg/max processing time and rest call throughput for all the requests when the number of requests are large enough to saturate the resource and find the bottleneck. + +Setup Details +------------- + +The performance test was performed on a default ONAP OOM installation in the Nordix Lab environment. +JMeter was installed on a separate VM to inject the traffic defined in the +`API performance script +`_ +with the following command: + +.. code-block:: bash + + nohup apache-jmeter-5.5/bin/jmeter -n -t policy_api_performance.jmx -l performanceTestResultsPolicyApi.jtl & + +The test was run in the background via “nohup” and “&”, to prevent it from being interrupted. + +Test Plan +--------- + +Performance test plan is the same as stability test plan above. +Only differences are, in performance test, we increase the number of threads up to 20 (simulating 20 users' behaviours at the same time) whereas reducing the test time down to 2.5 hours. + +Run Test +-------- + +Running/Triggering performance test will be the same as stability test. That is, launch JMeter pointing to corresponding *.jmx* test plan. The *API_HOST* and *API_PORT* are already set up in *.jmx*. + +**Test Statistics** + +======================= ============= =========== =============================== =============================== =============================== +**Total # of requests** **Success %** **TPS** **Avg. time taken per request** **Min. time taken per request** **Max. time taken per request** +======================= ============= =========== =============================== =============================== =============================== + 16212 100% 1.8 11109 ms 162 ms 237265 ms +======================= ============= =========== =============================== =============================== =============================== + +.. 
image:: api-s3p-results/api-s3p-jm-2_J.png + +Test Results +------------ + +The following graphs show the response time distributions. + +.. image:: api-s3p-results/api-response-time-distribution_performance_J.png +.. image:: api-s3p-results/api-response-time-overtime_performance_J.png + + + + diff --git a/docs/development/devtools/testing/s3p/clamp-s3p-results/Stability_after_stats.png b/docs/development/devtools/testing/s3p/clamp-s3p-results/Stability_after_stats.png new file mode 100644 index 00000000..38242866 Binary files /dev/null and b/docs/development/devtools/testing/s3p/clamp-s3p-results/Stability_after_stats.png differ diff --git a/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_performance_jmeter.png b/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_performance_jmeter.png new file mode 100644 index 00000000..bad1cf71 Binary files /dev/null and b/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_performance_jmeter.png differ diff --git a/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_stability_jmeter.png b/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_stability_jmeter.png new file mode 100644 index 00000000..2f576505 Binary files /dev/null and b/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_stability_jmeter.png differ diff --git a/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_stability_table.png b/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_stability_table.png new file mode 100644 index 00000000..28942eff Binary files /dev/null and b/docs/development/devtools/testing/s3p/clamp-s3p-results/acm_stability_table.png differ diff --git a/docs/development/devtools/testing/s3p/clamp-s3p.rst b/docs/development/devtools/testing/s3p/clamp-s3p.rst new file mode 100644 index 00000000..eb17d894 --- /dev/null +++ b/docs/development/devtools/testing/s3p/clamp-s3p.rst @@ -0,0 +1,257 @@ +.. This work is licensed under a +.. Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 + +.. _acm-s3p-label: + +.. toctree:: + :maxdepth: 2 + +Policy Clamp Automation Composition +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Both the Performance and the Stability tests were executed by performing requests +against acm components installed as docker images in local environment. + + +ACM Deployment +++++++++++++++ + +The docker containers can be deployed via Policy CSIT script. +Clone the Policy/docker repo to the local vm + +.. code-block:: bash + + git clone "https://gerrit.onap.org/r/policy/docker" + +Set the following environment variables on the system before deploying the containers. + +.. code-block:: bash + + export CONTAINER_LOCATION=nexus3.onap.org:10001/ + export PROJECT=clamp + +Invoke the following script from the ~/docker/csit folder. + +.. code-block:: bash + + ./start-all.sh + +This script installs the docker containers of ACM and Policy components required for running the tests. + + +Jmeter setup +++++++++++++ + +Apache jmeter tool is installed either on the same virtual machine or on a different virtual machine. + +.. code-block:: bash + + # Install required packages + sudo apt install -y wget unzip + + # Install JMeter + mkdir -p jmeter + cd jmeter + wget https://dlcdn.apache.org//jmeter/binaries/apache-jmeter-5.5.zip # check if valid version + unzip -q apache-jmeter-5.5.zip + rm apache-jmeter-5.5.zip + + +Setup Verification +++++++++++++++++++ +Ensure the following components are up and running before executing the test. 
+ +- acm runtime component docker image is started and running. +- Participant docker images policy-clamp-cl-pf-ppnt, policy-clamp-cl-http-ppnt, policy-clamp-cl-k8s-ppnt are started and running. +- Dmaap simulator for communication between components. +- mariadb docker container for policy and clampacm database. +- policy-api for communication between policy participant and policy-framework +- Both tests were run via jMeter, which was installed on a separate VM. + +Stability Test of acm components +++++++++++++++++++++++++++++++++ + +Test Plan +--------- +The 72 hours stability test ran the following steps sequentially in a single threaded loop. + +- **Create Policy defaultDomain** - creates an operational policy using policy/api component +- **Delete Policy sampleDomain** - deletes the operational policy sampleDomain using policy/api component +- **Commission AC definition** - commissions the acm definition in runtime +- **Instantiate acm** - Instantiate the acm towards participants +- **Check acm state** - check the current state of acm +- **Change State to PASSIVE** - change the state of the acm to PASSIVE +- **Check acm state** - check the current state of acm +- **Change State to UNINITIALISED** - change the state of the ACM to UNINITIALISED +- **Check acm state** - check the current state of acm +- **Delete instantiated acm** - delete the instantiated acm from all participants +- **Delete ACM Definition** - delete the acm definition on runtime + +The following parameters can be configured on the JMX file for the test. + +- **HTTP Authorization Manager** - used to store user/password authentication details. +- **HTTP Header Manager** - used to store headers which will be used for making HTTP requests. +- **User Defined Variables** - used to store following user defined parameters. + +============================= ======================================================================== + **Name** **Description** +============================= ======================================================================== + RUNTIME_HOST IP Address or host name of acm runtime component + RUNTIME_PORT Port number of acm runtime components for making REST API calls + POLICY_PARTICIPANT_HOST IP Address or host name of policy participant + POLICY_PARTICIPANT_HOST_PORT Port number of policy participant +============================= ======================================================================== + +Download the ACM stability.jmx and performance.jmx files from the Policy-Clamp repo. + +Stability jmx file + +.. code-block:: bash + + ~/clamp/testsuites/stability/src/main/resources/testplans/stability.jmx + +The test was run in the background via "nohup", to prevent it from being interrupted: + +.. code-block:: bash + + nohup ./jmeter/apache-jmeter-5.5/bin/jmeter -n -t stability.jmx -l testresults.jtl + +Test Results +------------ + +**Summary** + +Stability test plan was triggered for 72 hours. + +.. Note:: + + .. container:: paragraph + + The assertions of state changes are not completely taken care of, as the stability is ran with acm components + alone, and not including complete policy framework deployment, which makes it difficult for actual state changes from + PASSIVE to RUNNING etc to happen. 
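+The summary figures below can be derived from the ``testresults.jtl`` file produced by the JMeter run.
+The following is a minimal sketch, assuming JMeter's default CSV ``.jtl`` output (header row present,
+``elapsed`` in column 2 and ``success`` in column 8); adjust the column indices if a custom sample
+configuration is used:
+
+.. code-block:: bash
+
+    # Count samples, success rate and average elapsed time from the JMeter CSV results
+    awk -F',' 'NR > 1 { total++; sum += $2; if ($8 == "true") ok++ }
+        END { printf "requests=%d success=%.2f%% avg=%d ms\n", total, 100*ok/total, sum/total }' testresults.jtl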
+ +**Test Statistics** + +======================= ================= ================== ================================== +**Total # of requests** **Success %** **Error %** **Average time taken per request** +======================= ================= ================== ================================== +97916 100.00 % 0.00 % 246 ms +======================= ================= ================== ================================== + +**ACM component Setup** + +================ ============================================================ =========================================== ========================= +**CONTAINER ID** **IMAGE** **PORT** **NAME** +================ ============================================================ =========================================== ========================= + a9cb0cd103cf nexus3.onap.org:10001/onap/policy-clamp-runtime-acm:latest 6969/tcp policy-clamp-runtime-acm + 886e572b8438 nexus3.onap.org:10001/onap/policy-clamp-ac-pf-ppnt:latest 6969/tcp policy-clamp-ac-pf-ppnt + 035707b1b95f nexus3.onap.org:10001/onap/policy-api:latest 6969/tcp policy-api + d34204f95ff3 nexus3.onap.org:10001/onap/policy-clamp-ac-http-ppnt:latest 6969/tcp policy-clamp-ac-http-ppnt + 4470e608c9a8 nexus3.onap.org:10001/onap/policy-clamp-ac-k8s-ppnt:latest 6969/tcp policy-clamp-ac-k8s-ppnt + 62229d46b79c nexus3.onap.org:10001/onap/policy-models-simulator:latest 3905/tcp, 6666/tcp, 6668-6670/tcp, 6680/tcp simulator + efaf0ca5e1f0 nexus3.onap.org:10001/mariadb:10.5.8 3306/tcp mariadb + e84cf17db2a4 nexus3.onap.org:10001/onap/policy-pap:latest 6969/tcp policy-pap + 0a16eecd13c9 nexus3.onap.org:10001/onap/policy-apex-pdp:latest 6969/tcp policy-apex-pdp +================ ============================================================ =========================================== ========================= + +.. Note:: + + .. container:: paragraph + + There were no failures during the 72 hours test. + +**JMeter Screenshot** + +.. image:: clamp-s3p-results/acm_stability_jmeter.png + +**JMeter Screenshot** + +.. image:: clamp-s3p-results/acm_stability_table.png + +**Memory and CPU usage** + +The memory and CPU usage can be monitored by running "docker stats" command. + +Memory and CPU usage after test execution: + +.. image:: clamp-s3p-results/Stability_after_stats.png + + +Performance Test of acm components +++++++++++++++++++++++++++++++++++ + +Introduction +------------ + +Performance test of acm components has the goal of testing the min/avg/max processing time and rest call throughput for all the requests with multiple requests at the same time. + +Setup Details +------------- + +The performance test is performed on a similar setup as Stability test. The JMeter VM will be sending a large number of REST requests to the runtime component and collecting the statistics. + + +Test Plan +--------- + +Performance test plan is the same as the stability test plan above except for the few differences listed below. + +- Increase the number of threads up to 5 (simulating 5 users' behaviours at the same time). +- Reduce the test time to 2 hours. + +Run Test +-------- + +Performance jmx file + +.. code-block:: bash + + ~/clamp/testsuites/performance/src/main/resources/testplans/performance.jmx + +Running/Triggering the performance test will be the same as the stability test. That is, launch JMeter pointing to corresponding *.jmx* test plan. The *RUNTIME_HOST*, *RUNTIME_PORT*, *POLICY_PARTICIPANT_HOST*, *POLICY_PARTICIPANT_HOST_PORT* are already set up in *.jmx* + +.. 
code-block:: bash + + nohup ./jmeter/apache-jmeter-5.5/bin/jmeter -n -t performance.jmx -l testresults.jtl + +Once the test execution is completed, execute the below script to get the statistics: + +.. code-block:: bash + + $ cd ./clamp/testsuites/performance/src/main/resources/testplans + $ ./results.sh resultTree.log + +Test Results +------------ + +Test results are shown as below. + +**Test Statistics** + +======================= ================= ================== ================================== +**Total # of requests** **Success %** **Error %** **Average time taken per request** +======================= ================= ================== ================================== +13591 100 % 0.00 % 249 ms +======================= ================= ================== ================================== + +**ACM component Setup** + +================ ============================================================ =========================================== ========================= +**CONTAINER ID** **IMAGE** **PORT** **NAME** +================ ============================================================ =========================================== ========================= + a9cb0cd103cf nexus3.onap.org:10001/onap/policy-clamp-runtime-acm:latest 6969/tcp policy-clamp-runtime-acm + 886e572b8438 nexus3.onap.org:10001/onap/policy-clamp-ac-pf-ppnt:latest 6969/tcp policy-clamp-ac-pf-ppnt + 035707b1b95f nexus3.onap.org:10001/onap/policy-api:latest 6969/tcp policy-api + d34204f95ff3 nexus3.onap.org:10001/onap/policy-clamp-ac-http-ppnt:latest 6969/tcp policy-clamp-ac-http-ppnt + 4470e608c9a8 nexus3.onap.org:10001/onap/policy-clamp-ac-k8s-ppnt:latest 6969/tcp policy-clamp-ac-k8s-ppnt + 62229d46b79c nexus3.onap.org:10001/onap/policy-models-simulator:latest 3905/tcp, 6666/tcp, 6668-6670/tcp, 6680/tcp simulator + efaf0ca5e1f0 nexus3.onap.org:10001/mariadb:10.5.8 3306/tcp mariadb + e84cf17db2a4 nexus3.onap.org:10001/onap/policy-pap:latest 6969/tcp policy-pap + 0a16eecd13c9 nexus3.onap.org:10001/onap/policy-apex-pdp:latest 6969/tcp policy-apex-pdp +================ ============================================================ =========================================== ========================= + +**JMeter Screenshot** + +.. 
image:: clamp-s3p-results/acm_performance_jmeter.png diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/distribution-jmeter-testcases.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/distribution-jmeter-testcases.png new file mode 100644 index 00000000..86a437a7 Binary files /dev/null and b/docs/development/devtools/testing/s3p/distribution-s3p-results/distribution-jmeter-testcases.png differ diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/distribution-visualvm-snapshot.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/distribution-visualvm-snapshot.png new file mode 100644 index 00000000..03b73d36 Binary files /dev/null and b/docs/development/devtools/testing/s3p/distribution-s3p-results/distribution-visualvm-snapshot.png differ diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-monitor.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-monitor.png new file mode 100644 index 00000000..71fd7fca Binary files /dev/null and b/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-monitor.png differ diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-statistics.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-statistics.png new file mode 100644 index 00000000..fecd6c03 Binary files /dev/null and b/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-statistics.png differ diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-threads.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-threads.png new file mode 100644 index 00000000..2488abd9 Binary files /dev/null and b/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-threads.png differ diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-threshold.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-threshold.png new file mode 100644 index 00000000..73b20ff2 Binary files /dev/null and b/docs/development/devtools/testing/s3p/distribution-s3p-results/performance-threshold.png differ diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-monitor.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-monitor.png new file mode 100644 index 00000000..bebaaeb0 Binary files /dev/null and b/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-monitor.png differ diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-statistics.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-statistics.png new file mode 100644 index 00000000..12ee2b5b Binary files /dev/null and b/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-statistics.png differ diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-threads.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-threads.png new file mode 100644 index 00000000..4cfd7a78 Binary files /dev/null and b/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-threads.png differ diff --git a/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-threshold.png b/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-threshold.png new file mode 100644 
index 00000000..f348761b Binary files /dev/null and b/docs/development/devtools/testing/s3p/distribution-s3p-results/stability-threshold.png differ diff --git a/docs/development/devtools/testing/s3p/distribution-s3p.rst b/docs/development/devtools/testing/s3p/distribution-s3p.rst new file mode 100644 index 00000000..55966738 --- /dev/null +++ b/docs/development/devtools/testing/s3p/distribution-s3p.rst @@ -0,0 +1,389 @@ +.. This work is licensed under a +.. Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 + +.. _distribution-s3p-label: + +Policy Distribution component +############################# + +72h Stability and 4h Performance Tests of Distribution +++++++++++++++++++++++++++++++++++++++++++++++++++++++ + +Common Setup +------------ + +Update the ubuntu software installer + +.. code-block:: bash + + sudo apt update + +Install Java + +.. code-block:: bash + + sudo apt install -y openjdk-11-jdk + +Ensure that the Java version that is executing is OpenJDK version 11 + +.. code-block:: bash + + $ java --version + openjdk 11.0.11 2021-04-20 + OpenJDK Runtime Environment (build 11.0.11+9-Ubuntu-0ubuntu2.18.04) + OpenJDK 64-Bit Server VM (build 11.0.11+9-Ubuntu-0ubuntu2.18.04, mixed mode) + +Install Docker and Docker Compose + +.. code-block:: bash + + # Add docker repository + curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg + + echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \ + $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null + + sudo apt update + + # Install docker + sudo apt-get install docker-ce docker-ce-cli containerd.io + +Change the permissions of the Docker socket file + +.. code-block:: bash + + sudo chmod 666 /var/run/docker.sock + +Check the status of the Docker service and ensure it is running correctly + +.. code-block:: bash + + systemctl status --no-pager docker + docker.service - Docker Application Container Engine + Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled) + Active: active (running) since Wed 2020-10-14 13:59:40 UTC; 1 weeks 0 days ago + # ... (truncated for brevity) + + docker ps + CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES + +Install and verify docker-compose + +.. code-block:: bash + + # Install compose (check if version is still available or update as necessary) + sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose + sudo chmod +x /usr/local/bin/docker-compose + + # Check if install was successful + docker-compose --version + +Clone the policy-distribution repo to access the test scripts + +.. code-block:: bash + + git clone https://gerrit.onap.org/r/policy/distribution + +.. _setup-distribution-s3p-components: + +Start services for MariaDB, Policy API, PAP and Distribution +------------------------------------------------------------ + +Navigate to the main folder for scripts to setup services: + +.. code-block:: bash + + cd ~/distribution/testsuites/stability/src/main/resources/setup + +Modify the versions.sh script to match all the versions being tested. + +.. code-block:: bash + + vi ~/distribution/testsuites/stability/src/main/resources/setup/versions.sh + +Ensure the correct docker image versions are specified - e.g. 
for Kohn-M4 + +- export POLICY_DIST_VERSION=2.8-SNAPSHOT + +Run the start.sh script to start the components. After installation, script will execute +``docker ps`` and show the running containers. + +.. code-block:: bash + + ./start.sh + + Creating network "setup_default" with the default driver + Creating policy-distribution ... done + Creating mariadb ... done + Creating simulator ... done + Creating policy-db-migrator ... done + Creating policy-api ... done + Creating policy-pap ... done + + fa4e9bd26e60 nexus3.onap.org:10001/onap/policy-pap:2.7-SNAPSHOT-latest "/opt/app/policy/pap…" 1 second ago Up Less than a second 6969/tcp policy-pap + efb65dd95020 nexus3.onap.org:10001/onap/policy-api:2.7-SNAPSHOT-latest "/opt/app/policy/api…" 1 second ago Up Less than a second 6969/tcp policy-api + cf602c2770ba nexus3.onap.org:10001/onap/policy-db-migrator:2.5-SNAPSHOT-latest "/opt/app/policy/bin…" 2 seconds ago Up 1 second 6824/tcp policy-db-migrator + 99383d2fecf4 pdp/simulator "sh /opt/app/policy/…" 2 seconds ago Up 1 second pdp-simulator + 3c0e205c5f47 nexus3.onap.org:10001/onap/policy-models-simulator:2.7-SNAPSHOT-latest "simulators.sh" 3 seconds ago Up 2 seconds 3904/tcp simulator + 3ad00d90d6a3 nexus3.onap.org:10001/onap/policy-distribution:2.8-SNAPSHOT-latest "/opt/app/policy/bin…" 3 seconds ago Up 2 seconds 6969/tcp, 9090/tcp policy-distribution + bb0b915cdecc nexus3.onap.org:10001/mariadb:10.5.8 "docker-entrypoint.s…" 3 seconds ago Up 2 seconds 3306/tcp mariadb + +.. note:: + The containers on this docker-compose are running with HTTP configuration. For HTTPS, ports + and configurations will need to be changed, as well certificates and keys must be generated + for security. + + +Install JMeter +-------------- + +Download and install JMeter + +.. code-block:: bash + + # Install required packages + sudo apt install -y wget unzip + + # Install JMeter + mkdir -p jmeter + cd jmeter + wget https://dlcdn.apache.org//jmeter/binaries/apache-jmeter-5.5.zip # check if valid version + unzip -q apache-jmeter-5.5.zip + rm apache-jmeter-5.5.zip + + +Install & configure visualVM +-------------------------------------- + +VisualVM needs to be installed in the virtual machine running Distribution. It will be used to +monitor CPU, Memory and GC for Distribution while the stability tests are running. + +.. code-block:: bash + + sudo apt install -y visualvm + +Run these commands to configure permissions (if permission errors happens, use ``sudo su``) + +.. code-block:: bash + + # Set globally accessable permissions on policy file + sudo chmod 777 /usr/lib/jvm/java-11-openjdk-amd64/bin/visualvm.policy + + # Create Java security policy file for VisualVM + sudo cat > /usr/lib/jvm/java-11-openjdk-amd64/bin/visualvm.policy << EOF + grant codebase "jrt:/jdk.jstatd" { + permission java.security.AllPermission; + }; + grant codebase "jrt:/jdk.internal.jvmstat" { + permission java.security.AllPermission; + }; + EOF + +Run the following command to start jstatd using port 1111 + +.. code-block:: bash + + /usr/lib/jvm/java-11-openjdk-amd64/bin/jstatd -p 1111 -J-Djava.security.policy=/usr/lib/jvm/java-11-openjdk-amd64/bin/visualvm.policy & + +Run visualVM to connect to POLICY_DISTRIBUTION_IP:9090 + +.. code-block:: bash + + # Get the Policy Distribution container IP + echo $(docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' policy-distribution) + + # Start visual vm + visualvm & + +This will load up the visualVM GUI + +Connect to Distribution JMX Port. + + 1. 
On the visualvm toolbar, click on "Add JMX Connection" + 2. Enter the Distribution container IP and Port 9090. This is the JMX port exposed by the + distribution container + 3. Double click on the newly added nodes under "Remotes" to start monitoring CPU, Memory & GC. + +Example Screenshot of visualVM + +.. image:: distribution-s3p-results/distribution-visualvm-snapshot.png + + +Stability Test of Policy Distribution ++++++++++++++++++++++++++++++++++++++ + +Introduction +------------ + +The 72 hour Stability Test for policy distribution has the goal of introducing a steady flow of +transactions initiated from a test client server running JMeter. The policy distribution is +configured with a special FileSystemReception plugin to monitor a local directory for newly added +csar files to be processed by itself. The input CSAR will be added/removed by the test client +(JMeter) and the result will be pulled from the backend (PAP and PolicyAPI) by the test client +(JMeter). + +The test will be performed in an environment where Jmeter will continuously add/remove a test csar +into the special directory where policy distribution is monitoring and will then get the processed +results from PAP and PolicyAPI to verify the successful deployment of the policy. The policy will +then be undeployed and the test will loop continuously until 72 hours have elapsed. + + +Test Plan Sequence +------------------ + +The 72h stability test will run the following steps sequentially in a single threaded loop. + +- **Delete Old CSAR** - Checks if CSAR already exists in the watched directory, if so it deletes it +- **Add CSAR** - Adds CSAR to the directory that distribution is watching +- **Get Healthcheck** - Ensures Healthcheck is returning 200 OK +- **Get Statistics** - Ensures Statistics is returning 200 OK +- **Get Metrics** - Ensures Metrics is returning 200 OK +- **Assert PDP Group Query** - Checks that PDPGroupQuery contains the deployed policy +- **Assert PoliciesDeployed** - Checks that the policy is deployed +- **Undeploy/Delete Policy** - Undeploys and deletes the Policy for the next loop +- **Assert PDP Group Query for Deleted Policy** - Ensures the policy has been removed and does not exist + +The following steps can be used to configure the parameters of the test plan. + +- **HTTP Authorization Manager** - used to store user/password authentication details. +- **HTTP Header Manager** - used to store headers which will be used for making HTTP requests. +- **User Defined Variables** - used to store following user defined parameters. + +========== =============================================== + **Name** **Description** +========== =============================================== + PAP_HOST IP Address or host name of PAP component + PAP_PORT Port number of PAP for making REST API calls + API_HOST IP Address or host name of API component + API_PORT Port number of API for making REST API calls + DURATION Duration of Test +========== =============================================== + +Screenshot of Distribution stability test plan + +.. image:: distribution-s3p-results/distribution-jmeter-testcases.png + + +Running the Test Plan +--------------------- + +Check if the /tmp/policydistribution/distributionmount exists as it was created during the start.sh +script execution. If not, run the following commands to create folder and change folder permissions +to allow the testplan to insert the CSAR into the /tmp/policydistribution/distributionmount folder. + +.. 
note:: + Make sure that only csar file is being loaded in the watched folder and log generation is in a + logs folder, as any sort of zip file can be understood by distribution as a policy file. A + logback.xml configuration file is available under setup/distribution folder. + +.. code-block:: bash + + sudo mkdir -p /tmp/policydistribution/distributionmount + sudo chmod -R a+trwx /tmp + + +Navigate to the stability test folder. + +.. code-block:: bash + + cd ~/distribution/testsuites/stability/src/main/resources/testplans/ + +Execute the run_test.sh + +.. code-block:: bash + + ./run_test.sh + + +Test Results +------------ + +**Summary** + +- Stability test plan was triggered for 72 hours. +- No errors were reported + +**Test Statistics** + +.. image:: distribution-s3p-results/stability-statistics.png +.. image:: distribution-s3p-results/stability-threshold.png + +**VisualVM Screenshots** + +.. image:: distribution-s3p-results/stability-monitor.png +.. image:: distribution-s3p-results/stability-threads.png + + +Performance Test of Policy Distribution ++++++++++++++++++++++++++++++++++++++++ + +Introduction +------------ + +The 4h Performance Test of Policy Distribution has the goal of testing the min/avg/max processing +time and rest call throughput for all the requests when the number of requests are large enough to +saturate the resource and find the bottleneck. + +It also tests that distribution can handle multiple policy CSARs and that these are deployed within +60 seconds consistently. + + +Setup Details +------------- + +The performance test is based on the same setup as the distribution stability tests. + + +Test Plan Sequence +------------------ + +Performance test plan is different from the stability test plan. + +- Instead of handling one policy csar at a time, multiple csar's are deployed within the watched + folder at the exact same time. +- We expect all policies from these csar's to be deployed within 60 seconds. +- There are also multithreaded tests running towards the healthcheck and statistics endpoints of + the distribution service. + + +Running the Test Plan +--------------------- + +Check if /tmp folder permissions to allow the Testplan to insert the CSAR into the +/tmp/policydistribution/distributionmount folder. +Clean up from previous run. If necessary, put containers down with script ``down.sh`` from setup +folder mentioned on :ref:`Setup components ` + +.. code-block:: bash + + sudo mkdir -p /tmp/policydistribution/distributionmount + sudo chmod -R a+trwx /tmp + +Navigate to the testplan folder and execute the test script: + +.. code-block:: bash + + cd ~/distribution/testsuites/performance/src/main/resources/testplans/ + ./run_test.sh + + +Test Results +------------ + +**Summary** + +- Performance test plan was triggered for 4 hours. +- No errors were reported + +**Test Statistics** + +.. image:: distribution-s3p-results/performance-statistics.png +.. image:: distribution-s3p-results/performance-threshold.png + +**VisualVM Screenshots** + +.. image:: distribution-s3p-results/performance-monitor.png +.. 
image:: distribution-s3p-results/performance-threads.png + +End of document diff --git a/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-1.png b/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-1.png new file mode 100644 index 00000000..3c1e06f7 Binary files /dev/null and b/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-1.png differ diff --git a/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-2.png b/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-2.png new file mode 100644 index 00000000..7e124716 Binary files /dev/null and b/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-2.png differ diff --git a/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-3.png b/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-3.png new file mode 100644 index 00000000..50f2c148 Binary files /dev/null and b/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-3.png differ diff --git a/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-4.png b/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-4.png new file mode 100644 index 00000000..369d1f33 Binary files /dev/null and b/docs/development/devtools/testing/s3p/drools-s3p-results/s3p-drools-4.png differ diff --git a/docs/development/devtools/testing/s3p/drools-s3p.rst b/docs/development/devtools/testing/s3p/drools-s3p.rst new file mode 100644 index 00000000..88f601bd --- /dev/null +++ b/docs/development/devtools/testing/s3p/drools-s3p.rst @@ -0,0 +1,74 @@ +.. This work is licensed under a +.. Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 + +.. _drools-s3p-label: + +.. toctree:: + :maxdepth: 2 + +Policy Drools PDP component +~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Both the Performance and the Stability tests were executed against an ONAP installation in the Policy tenant +in the UNH lab, from the admin VM running the jmeter tool to inject the load. + +General Setup +************* + +Agent VMs in this lab have the following configuration: + +- 16GB RAM +- 8 VCPU + +Jmeter is run from the admin VM. + +The drools-pdp container uses the JVM memory and CPU settings from the default OOM installation. + +Other ONAP components exercised during the stability tests were: + +- Policy XACML PDP to process guard queries for each transaction. +- DMaaP to carry PDP-D and jmeter initiated traffic to complete transactions. +- Policy API to create (and delete at the end of the tests) policies for each + scenario under test. +- Policy PAP to deploy (and undeploy at the end of the tests) policies for each scenario under test. +- XACML PDP Stability test was running at the same time. + +The following components are simulated during the tests. + +- SDNR. + +Stability Test of Policy PDP-D +****************************** + +PDP-D performance +================= + +The tests focused on the following use cases running in parallel: + +- vCPE +- SON O1 +- SON A1 + +Three threads ran in parallel, one for each scenario. The transactions were initiated +by each jmeter thread group. Each thread initiated a transaction, monitored the transaction, and +started the next one 250 ms. later. + +The results are illustrated on the following graphs: + +.. image:: drools-s3p-results/s3p-drools-1.png +.. image:: drools-s3p-results/s3p-drools-2.png +.. 
image:: drools-s3p-results/s3p-drools-3.png + + +Commentary +========== + +There is around 1% unexpected failures during the 72-hour run. This can also be seen in the +final output of jmeter: + +.. code-block:: bash + + summary = 4751546 in 72:00:37 = 18.3/s Avg: 150 Min: 0 Max: 15087 Err: 47891 (1.01%) + +Sporadic database errors have been observed and seem related to the 1% failure percentage rate. diff --git a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_metrics_after_72h.txt b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_metrics_after_72h.txt new file mode 100644 index 00000000..8864726e --- /dev/null +++ b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_metrics_after_72h.txt @@ -0,0 +1,306 @@ +# HELP logback_events_total Number of error level events that made it to the logs +# TYPE logback_events_total counter +logback_events_total{level="warn",} 23.0 +logback_events_total{level="debug",} 0.0 +logback_events_total{level="error",} 1.0 +logback_events_total{level="trace",} 0.0 +logback_events_total{level="info",} 1709270.0 +# HELP system_cpu_usage The "recent cpu usage" for the whole system +# TYPE system_cpu_usage gauge +system_cpu_usage 0.1270718232044199 +# HELP hikaricp_connections_acquire_seconds Connection acquire time +# TYPE hikaricp_connections_acquire_seconds summary +hikaricp_connections_acquire_seconds_count{pool="HikariPool-1",} 298222.0 +hikaricp_connections_acquire_seconds_sum{pool="HikariPool-1",} 321.533641537 +# HELP hikaricp_connections_acquire_seconds_max Connection acquire time +# TYPE hikaricp_connections_acquire_seconds_max gauge +hikaricp_connections_acquire_seconds_max{pool="HikariPool-1",} 0.006766789 +# HELP tomcat_sessions_created_sessions_total +# TYPE tomcat_sessions_created_sessions_total counter +tomcat_sessions_created_sessions_total 158246.0 +# HELP jvm_classes_unloaded_classes_total The total number of classes unloaded since the Java virtual machine has started execution +# TYPE jvm_classes_unloaded_classes_total counter +jvm_classes_unloaded_classes_total 799.0 +# HELP jvm_gc_memory_allocated_bytes_total Incremented for an increase in the size of the (young) heap memory pool after one GC to before the next +# TYPE jvm_gc_memory_allocated_bytes_total counter +jvm_gc_memory_allocated_bytes_total 3.956513686328E12 +# HELP tomcat_sessions_alive_max_seconds +# TYPE tomcat_sessions_alive_max_seconds gauge +tomcat_sessions_alive_max_seconds 2488.0 +# HELP spring_data_repository_invocations_seconds_max +# TYPE spring_data_repository_invocations_seconds_max gauge +spring_data_repository_invocations_seconds_max{exception="None",method="findByKeyParentKeyNameAndKeyParentKeyVersion",repository="PolicyStatusRepository",state="SUCCESS",} 0.0 +spring_data_repository_invocations_seconds_max{exception="None",method="saveAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.0 +spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpGroupRepository",state="SUCCESS",} 0.0 +spring_data_repository_invocations_seconds_max{exception="None",method="findByPdpGroup",repository="PolicyAuditRepository",state="SUCCESS",} 0.0 +spring_data_repository_invocations_seconds_max{exception="None",method="findById",repository="ToscaServiceTemplateRepository",state="SUCCESS",} 0.0 +spring_data_repository_invocations_seconds_max{exception="None",method="findByPdpGroup",repository="PolicyStatusRepository",state="SUCCESS",} 0.0 
+spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 0.863253324 +spring_data_repository_invocations_seconds_max{exception="None",method="deleteById",repository="PdpGroupRepository",state="SUCCESS",} 0.0 +spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.144251855 +spring_data_repository_invocations_seconds_max{exception="None",method="saveAll",repository="PolicyAuditRepository",state="SUCCESS",} 0.0 +spring_data_repository_invocations_seconds_max{exception="None",method="findByPdpGroupAndNameAndVersion",repository="PolicyAuditRepository",state="SUCCESS",} 0.0 +spring_data_repository_invocations_seconds_max{exception="None",method="findById",repository="ToscaNodeTemplateRepository",state="SUCCESS",} 0.0 +spring_data_repository_invocations_seconds_max{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 0.0 +spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpSubGroupRepository",state="SUCCESS",} 0.0 +spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 0.0 +spring_data_repository_invocations_seconds_max{exception="None",method="deleteAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.0 +# HELP spring_data_repository_invocations_seconds +# TYPE spring_data_repository_invocations_seconds summary +spring_data_repository_invocations_seconds_count{exception="None",method="findByKeyParentKeyNameAndKeyParentKeyVersion",repository="PolicyStatusRepository",state="SUCCESS",} 15740.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="findByKeyParentKeyNameAndKeyParentKeyVersion",repository="PolicyStatusRepository",state="SUCCESS",} 3116.970495755 +spring_data_repository_invocations_seconds_count{exception="None",method="saveAll",repository="PolicyStatusRepository",state="SUCCESS",} 113798.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="saveAll",repository="PolicyStatusRepository",state="SUCCESS",} 480.71823635 +spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpGroupRepository",state="SUCCESS",} 28085.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpGroupRepository",state="SUCCESS",} 9.645079055 +spring_data_repository_invocations_seconds_count{exception="None",method="findByPdpGroup",repository="PolicyAuditRepository",state="SUCCESS",} 6981.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="findByPdpGroup",repository="PolicyAuditRepository",state="SUCCESS",} 616.931466813 +spring_data_repository_invocations_seconds_count{exception="None",method="findById",repository="ToscaServiceTemplateRepository",state="SUCCESS",} 46250.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="findById",repository="ToscaServiceTemplateRepository",state="SUCCESS",} 8406.051483096 +spring_data_repository_invocations_seconds_count{exception="None",method="findByPdpGroup",repository="PolicyStatusRepository",state="SUCCESS",} 42765.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="findByPdpGroup",repository="PolicyStatusRepository",state="SUCCESS",} 10979.997264985 +spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 
101780.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 20530.858991818 +spring_data_repository_invocations_seconds_count{exception="None",method="deleteById",repository="PdpGroupRepository",state="SUCCESS",} 1.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="deleteById",repository="PdpGroupRepository",state="SUCCESS",} 0.004567796 +spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 32620.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 11459.109680167 +spring_data_repository_invocations_seconds_count{exception="None",method="saveAll",repository="PolicyAuditRepository",state="SUCCESS",} 28080.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="saveAll",repository="PolicyAuditRepository",state="SUCCESS",} 45.836464781 +spring_data_repository_invocations_seconds_count{exception="None",method="findByPdpGroupAndNameAndVersion",repository="PolicyAuditRepository",state="SUCCESS",} 13960.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="findByPdpGroupAndNameAndVersion",repository="PolicyAuditRepository",state="SUCCESS",} 1765.653676534 +spring_data_repository_invocations_seconds_count{exception="None",method="findById",repository="ToscaNodeTemplateRepository",state="SUCCESS",} 21331.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="findById",repository="ToscaNodeTemplateRepository",state="SUCCESS",} 1.286926983 +spring_data_repository_invocations_seconds_count{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 13970.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 4175.556697162 +spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpSubGroupRepository",state="SUCCESS",} 2.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpSubGroupRepository",state="SUCCESS",} 0.864602048 +spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 36866.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 7686.38602325 +spring_data_repository_invocations_seconds_count{exception="None",method="deleteAll",repository="PolicyStatusRepository",state="SUCCESS",} 56899.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="deleteAll",repository="PolicyStatusRepository",state="SUCCESS",} 882.098525295 +# HELP jvm_threads_states_threads The current number of threads having NEW state +# TYPE jvm_threads_states_threads gauge +jvm_threads_states_threads{state="runnable",} 9.0 +jvm_threads_states_threads{state="blocked",} 0.0 +jvm_threads_states_threads{state="waiting",} 29.0 +jvm_threads_states_threads{state="timed-waiting",} 8.0 +jvm_threads_states_threads{state="new",} 0.0 +jvm_threads_states_threads{state="terminated",} 0.0 +# HELP process_cpu_usage The "recent cpu usage" for the Java Virtual Machine process +# TYPE process_cpu_usage gauge +process_cpu_usage 0.006697923643670462 +# HELP tomcat_sessions_expired_sessions_total +# TYPE tomcat_sessions_expired_sessions_total counter 
+tomcat_sessions_expired_sessions_total 158186.0 +# HELP jvm_buffer_total_capacity_bytes An estimate of the total capacity of the buffers in this pool +# TYPE jvm_buffer_total_capacity_bytes gauge +jvm_buffer_total_capacity_bytes{id="mapped",} 0.0 +jvm_buffer_total_capacity_bytes{id="direct",} 169210.0 +# HELP process_start_time_seconds Start time of the process since unix epoch. +# TYPE process_start_time_seconds gauge +process_start_time_seconds 1.649849957815E9 +# HELP hikaricp_connections_creation_seconds_max Connection creation time +# TYPE hikaricp_connections_creation_seconds_max gauge +hikaricp_connections_creation_seconds_max{pool="HikariPool-1",} 0.51 +# HELP hikaricp_connections_creation_seconds Connection creation time +# TYPE hikaricp_connections_creation_seconds summary +hikaricp_connections_creation_seconds_count{pool="HikariPool-1",} 3936.0 +hikaricp_connections_creation_seconds_sum{pool="HikariPool-1",} 942.369 +# HELP hikaricp_connections_max Max connections +# TYPE hikaricp_connections_max gauge +hikaricp_connections_max{pool="HikariPool-1",} 10.0 +# HELP jdbc_connections_min Minimum number of idle connections in the pool. +# TYPE jdbc_connections_min gauge +jdbc_connections_min{name="dataSource",} 10.0 +# HELP jvm_memory_committed_bytes The amount of memory in bytes that is committed for the Java virtual machine to use +# TYPE jvm_memory_committed_bytes gauge +jvm_memory_committed_bytes{area="heap",id="Tenured Gen",} 1.76160768E8 +jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 4.9020928E7 +jvm_memory_committed_bytes{area="heap",id="Eden Space",} 7.0582272E7 +jvm_memory_committed_bytes{area="nonheap",id="Metaspace",} 1.1890688E8 +jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 2555904.0 +jvm_memory_committed_bytes{area="heap",id="Survivor Space",} 8781824.0 +jvm_memory_committed_bytes{area="nonheap",id="Compressed Class Space",} 1.5450112E7 +jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 3.1850496E7 +# HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset +# TYPE jvm_threads_peak_threads gauge +jvm_threads_peak_threads 51.0 +# HELP hikaricp_connections_idle Idle connections +# TYPE hikaricp_connections_idle gauge +hikaricp_connections_idle{pool="HikariPool-1",} 10.0 +# HELP hikaricp_connections Total connections +# TYPE hikaricp_connections gauge +hikaricp_connections{pool="HikariPool-1",} 10.0 +# HELP http_server_requests_seconds +# TYPE http_server_requests_seconds summary +http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}/{policyName}/{policyVersion}",} 13960.0 +http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}/{policyName}/{policyVersion}",} 4066.52698026 +http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 22470.0 +http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 3622.506076129 +http_server_requests_seconds_count{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/deployments/batch",} 13961.0 +http_server_requests_seconds_sum{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/deployments/batch",} 27890.47103474 
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status",} 14404.0 +http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status",} 7821.856496806 +http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 15738.0 +http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 5848.655389921 +http_server_requests_seconds_count{exception="None",method="DELETE",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies/{name}",} 7059.0 +http_server_requests_seconds_sum{exception="None",method="DELETE",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies/{name}",} 15554.208182423 +http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}",} 6981.0 +http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}",} 1756.291465092 +http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/deployed",} 6979.0 +http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/deployed",} 1934.785157616 +http_server_requests_seconds_count{exception="None",method="PUT",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 4.0 +http_server_requests_seconds_sum{exception="None",method="PUT",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 7.281567744 +http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 31395.0 +http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 13046.055299896 +http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 11237.0 +http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 6979.030310367 +http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/components/healthcheck",} 6979.0 +http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/components/healthcheck",} 3741.773622509 +http_server_requests_seconds_count{exception="None",method="GET",outcome="CLIENT_ERROR",status="404",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 2.0 +http_server_requests_seconds_sum{exception="None",method="GET",outcome="CLIENT_ERROR",status="404",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 1.318371311 +http_server_requests_seconds_count{exception="None",method="DELETE",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 1.0 +http_server_requests_seconds_sum{exception="None",method="DELETE",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 1.026191347 
+http_server_requests_seconds_count{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies",} 7077.0 +http_server_requests_seconds_sum{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies",} 14603.589203056 +http_server_requests_seconds_count{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/batch",} 2.0 +http_server_requests_seconds_sum{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/batch",} 1.877099877 +# HELP http_server_requests_seconds_max +# TYPE http_server_requests_seconds_max gauge +http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}/{policyName}/{policyVersion}",} 0.0 +http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 0.147881793 +http_server_requests_seconds_max{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/deployments/batch",} 0.0 +http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status",} 0.0 +http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 0.0 +http_server_requests_seconds_max{exception="None",method="DELETE",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies/{name}",} 0.0 +http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}",} 0.0 +http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/deployed",} 0.0 +http_server_requests_seconds_max{exception="None",method="PUT",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 0.0 +http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 0.227488581 +http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 0.272733892 +http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/components/healthcheck",} 0.0 +http_server_requests_seconds_max{exception="None",method="GET",outcome="CLIENT_ERROR",status="404",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 0.0 +http_server_requests_seconds_max{exception="None",method="DELETE",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 0.0 +http_server_requests_seconds_max{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies",} 0.0 +http_server_requests_seconds_max{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/batch",} 0.0 +# HELP jvm_buffer_count_buffers An estimate of the number of buffers in the pool +# TYPE jvm_buffer_count_buffers gauge +jvm_buffer_count_buffers{id="mapped",} 0.0 +jvm_buffer_count_buffers{id="direct",} 10.0 +# HELP hikaricp_connections_pending Pending threads +# TYPE hikaricp_connections_pending gauge +hikaricp_connections_pending{pool="HikariPool-1",} 0.0 +# HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities 
running on the available processors averaged over a period of time +# TYPE system_load_average_1m gauge +system_load_average_1m 0.6 +# HELP jvm_memory_used_bytes The amount of used memory +# TYPE jvm_memory_used_bytes gauge +jvm_memory_used_bytes{area="heap",id="Tenured Gen",} 6.7084064E7 +jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 4.110464E7 +jvm_memory_used_bytes{area="heap",id="Eden Space",} 3.329572E7 +jvm_memory_used_bytes{area="nonheap",id="Metaspace",} 1.12499384E8 +jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 1394432.0 +jvm_memory_used_bytes{area="heap",id="Survivor Space",} 463856.0 +jvm_memory_used_bytes{area="nonheap",id="Compressed Class Space",} 1.3096368E7 +jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 3.1773568E7 +# HELP tomcat_sessions_rejected_sessions_total +# TYPE tomcat_sessions_rejected_sessions_total counter +tomcat_sessions_rejected_sessions_total 0.0 +# HELP jvm_gc_live_data_size_bytes Size of long-lived heap memory pool after reclamation +# TYPE jvm_gc_live_data_size_bytes gauge +jvm_gc_live_data_size_bytes 5.0955016E7 +# HELP jvm_gc_memory_promoted_bytes_total Count of positive increases in the size of the old generation memory pool before GC to after GC +# TYPE jvm_gc_memory_promoted_bytes_total counter +jvm_gc_memory_promoted_bytes_total 1.692072808E9 +# HELP tomcat_sessions_active_max_sessions +# TYPE tomcat_sessions_active_max_sessions gauge +tomcat_sessions_active_max_sessions 1101.0 +# HELP jdbc_connections_active Current number of active connections that have been allocated from the data source. +# TYPE jdbc_connections_active gauge +jdbc_connections_active{name="dataSource",} 0.0 +# HELP jdbc_connections_max Maximum number of active connections that can be allocated at the same time. 
+# TYPE jdbc_connections_max gauge +jdbc_connections_max{name="dataSource",} 10.0 +# HELP jvm_memory_max_bytes The maximum amount of memory in bytes that can be used for memory management +# TYPE jvm_memory_max_bytes gauge +jvm_memory_max_bytes{area="heap",id="Tenured Gen",} 2.803236864E9 +jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 1.22912768E8 +jvm_memory_max_bytes{area="heap",id="Eden Space",} 1.12132096E9 +jvm_memory_max_bytes{area="nonheap",id="Metaspace",} -1.0 +jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 5828608.0 +jvm_memory_max_bytes{area="heap",id="Survivor Space",} 1.40115968E8 +jvm_memory_max_bytes{area="nonheap",id="Compressed Class Space",} 1.073741824E9 +jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 1.22916864E8 +# HELP jvm_threads_daemon_threads The current number of live daemon threads +# TYPE jvm_threads_daemon_threads gauge +jvm_threads_daemon_threads 34.0 +# HELP process_files_open_files The open file descriptor count +# TYPE process_files_open_files gauge +process_files_open_files 36.0 +# HELP system_cpu_count The number of processors available to the Java virtual machine +# TYPE system_cpu_count gauge +system_cpu_count 1.0 +# HELP jvm_gc_pause_seconds Time spent in GC pause +# TYPE jvm_gc_pause_seconds summary +jvm_gc_pause_seconds_count{action="end of major GC",cause="Metadata GC Threshold",} 2.0 +jvm_gc_pause_seconds_sum{action="end of major GC",cause="Metadata GC Threshold",} 0.391 +jvm_gc_pause_seconds_count{action="end of major GC",cause="Allocation Failure",} 13.0 +jvm_gc_pause_seconds_sum{action="end of major GC",cause="Allocation Failure",} 5.98 +jvm_gc_pause_seconds_count{action="end of minor GC",cause="Allocation Failure",} 56047.0 +jvm_gc_pause_seconds_sum{action="end of minor GC",cause="Allocation Failure",} 549.532 +jvm_gc_pause_seconds_count{action="end of minor GC",cause="GCLocker Initiated GC",} 9.0 +jvm_gc_pause_seconds_sum{action="end of minor GC",cause="GCLocker Initiated GC",} 0.081 +# HELP jvm_gc_pause_seconds_max Time spent in GC pause +# TYPE jvm_gc_pause_seconds_max gauge +jvm_gc_pause_seconds_max{action="end of major GC",cause="Metadata GC Threshold",} 0.0 +jvm_gc_pause_seconds_max{action="end of major GC",cause="Allocation Failure",} 0.0 +jvm_gc_pause_seconds_max{action="end of minor GC",cause="Allocation Failure",} 0.0 +jvm_gc_pause_seconds_max{action="end of minor GC",cause="GCLocker Initiated GC",} 0.0 +# HELP hikaricp_connections_min Min connections +# TYPE hikaricp_connections_min gauge +hikaricp_connections_min{pool="HikariPool-1",} 10.0 +# HELP process_files_max_files The maximum file descriptor count +# TYPE process_files_max_files gauge +process_files_max_files 1048576.0 +# HELP hikaricp_connections_active Active connections +# TYPE hikaricp_connections_active gauge +hikaricp_connections_active{pool="HikariPool-1",} 0.0 +# HELP jvm_threads_live_threads The current number of live threads including both daemon and non-daemon threads +# TYPE jvm_threads_live_threads gauge +jvm_threads_live_threads 46.0 +# HELP process_uptime_seconds The uptime of the Java virtual machine +# TYPE process_uptime_seconds gauge +process_uptime_seconds 510671.853 +# HELP hikaricp_connections_usage_seconds Connection usage time +# TYPE hikaricp_connections_usage_seconds summary +hikaricp_connections_usage_seconds_count{pool="HikariPool-1",} 298222.0 +hikaricp_connections_usage_seconds_sum{pool="HikariPool-1",} 125489.766 +# HELP hikaricp_connections_usage_seconds_max Connection 
usage time +# TYPE hikaricp_connections_usage_seconds_max gauge +hikaricp_connections_usage_seconds_max{pool="HikariPool-1",} 0.878 +# HELP pap_policy_deployments_total +# TYPE pap_policy_deployments_total counter +pap_policy_deployments_total{operation="deploy",status="FAILURE",} 0.0 +pap_policy_deployments_total{operation="undeploy",status="SUCCESS",} 13971.0 +pap_policy_deployments_total{operation="deploy",status="SUCCESS",} 14028.0 +pap_policy_deployments_total{operation="undeploy",status="FAILURE",} 0.0 +# HELP jvm_buffer_memory_used_bytes An estimate of the memory that the Java virtual machine is using for this buffer pool +# TYPE jvm_buffer_memory_used_bytes gauge +jvm_buffer_memory_used_bytes{id="mapped",} 0.0 +jvm_buffer_memory_used_bytes{id="direct",} 169210.0 +# HELP hikaricp_connections_timeout_total Connection timeout total count +# TYPE hikaricp_connections_timeout_total counter +hikaricp_connections_timeout_total{pool="HikariPool-1",} 0.0 +# HELP jvm_classes_loaded_classes The number of classes that are currently loaded in the Java virtual machine +# TYPE jvm_classes_loaded_classes gauge +jvm_classes_loaded_classes 18727.0 +# HELP jdbc_connections_idle Number of established but idle connections. +# TYPE jdbc_connections_idle gauge +jdbc_connections_idle{name="dataSource",} 10.0 +# HELP tomcat_sessions_active_current_sessions +# TYPE tomcat_sessions_active_current_sessions gauge +tomcat_sessions_active_current_sessions 60.0 +# HELP jvm_gc_max_data_size_bytes Max size of long-lived heap memory pool +# TYPE jvm_gc_max_data_size_bytes gauge +jvm_gc_max_data_size_bytes 2.803236864E9 diff --git a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_metrics_before_72h.txt b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_metrics_before_72h.txt new file mode 100644 index 00000000..047ccf99 --- /dev/null +++ b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_metrics_before_72h.txt @@ -0,0 +1,225 @@ +# HELP spring_data_repository_invocations_seconds_max +# TYPE spring_data_repository_invocations_seconds_max gauge +spring_data_repository_invocations_seconds_max{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 0.0 +spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 0.008146982 +spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 0.777049798 +spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.569583402 +# HELP spring_data_repository_invocations_seconds +# TYPE spring_data_repository_invocations_seconds summary +spring_data_repository_invocations_seconds_count{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 1.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 1.257790017 +spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 23.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 0.671469491 +spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 30.0 
+spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 8.481980058 +spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 4.0 +spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 1.939575991 +# HELP hikaricp_connections_max Max connections +# TYPE hikaricp_connections_max gauge +hikaricp_connections_max{pool="HikariPool-1",} 10.0 +# HELP tomcat_sessions_created_sessions_total +# TYPE tomcat_sessions_created_sessions_total counter +tomcat_sessions_created_sessions_total 16.0 +# HELP process_files_open_files The open file descriptor count +# TYPE process_files_open_files gauge +process_files_open_files 34.0 +# HELP hikaricp_connections_active Active connections +# TYPE hikaricp_connections_active gauge +hikaricp_connections_active{pool="HikariPool-1",} 0.0 +# HELP jvm_classes_unloaded_classes_total The total number of classes unloaded since the Java virtual machine has started execution +# TYPE jvm_classes_unloaded_classes_total counter +jvm_classes_unloaded_classes_total 2.0 +# HELP system_cpu_usage The "recent cpu usage" for the whole system +# TYPE system_cpu_usage gauge +system_cpu_usage 0.03765922097101717 +# HELP jvm_classes_loaded_classes The number of classes that are currently loaded in the Java virtual machine +# TYPE jvm_classes_loaded_classes gauge +jvm_classes_loaded_classes 18022.0 +# HELP process_uptime_seconds The uptime of the Java virtual machine +# TYPE process_uptime_seconds gauge +process_uptime_seconds 570.627 +# HELP jvm_memory_committed_bytes The amount of memory in bytes that is committed for the Java virtual machine to use +# TYPE jvm_memory_committed_bytes gauge +jvm_memory_committed_bytes{area="heap",id="Tenured Gen",} 1.76160768E8 +jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 2.6017792E7 +jvm_memory_committed_bytes{area="heap",id="Eden Space",} 7.0582272E7 +jvm_memory_committed_bytes{area="nonheap",id="Metaspace",} 1.04054784E8 +jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 2555904.0 +jvm_memory_committed_bytes{area="heap",id="Survivor Space",} 8781824.0 +jvm_memory_committed_bytes{area="nonheap",id="Compressed Class Space",} 1.4286848E7 +jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 6881280.0 +# HELP jvm_gc_live_data_size_bytes Size of long-lived heap memory pool after reclamation +# TYPE jvm_gc_live_data_size_bytes gauge +jvm_gc_live_data_size_bytes 4.13206E7 +# HELP jdbc_connections_min Minimum number of idle connections in the pool. +# TYPE jdbc_connections_min gauge +jdbc_connections_min{name="dataSource",} 10.0 +# HELP process_start_time_seconds Start time of the process since unix epoch. +# TYPE process_start_time_seconds gauge +process_start_time_seconds 1.649787267607E9 +# HELP jdbc_connections_idle Number of established but idle connections. 
+# TYPE jdbc_connections_idle gauge +jdbc_connections_idle{name="dataSource",} 10.0 +# HELP jvm_gc_memory_promoted_bytes_total Count of positive increases in the size of the old generation memory pool before GC to after GC +# TYPE jvm_gc_memory_promoted_bytes_total counter +jvm_gc_memory_promoted_bytes_total 2.7154576E7 +# HELP hikaricp_connections_creation_seconds_max Connection creation time +# TYPE hikaricp_connections_creation_seconds_max gauge +hikaricp_connections_creation_seconds_max{pool="HikariPool-1",} 0.0 +# HELP hikaricp_connections_creation_seconds Connection creation time +# TYPE hikaricp_connections_creation_seconds summary +hikaricp_connections_creation_seconds_count{pool="HikariPool-1",} 0.0 +hikaricp_connections_creation_seconds_sum{pool="HikariPool-1",} 0.0 +# HELP tomcat_sessions_active_current_sessions +# TYPE tomcat_sessions_active_current_sessions gauge +tomcat_sessions_active_current_sessions 16.0 +# HELP jvm_threads_daemon_threads The current number of live daemon threads +# TYPE jvm_threads_daemon_threads gauge +jvm_threads_daemon_threads 34.0 +# HELP jvm_memory_used_bytes The amount of used memory +# TYPE jvm_memory_used_bytes gauge +jvm_memory_used_bytes{area="heap",id="Tenured Gen",} 4.13206E7 +jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 2.6013824E7 +jvm_memory_used_bytes{area="heap",id="Eden Space",} 2853928.0 +jvm_memory_used_bytes{area="nonheap",id="Metaspace",} 9.9649768E7 +jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 1364736.0 +jvm_memory_used_bytes{area="heap",id="Survivor Space",} 1036120.0 +jvm_memory_used_bytes{area="nonheap",id="Compressed Class Space",} 1.2613992E7 +jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 6865408.0 +# HELP hikaricp_connections_timeout_total Connection timeout total count +# TYPE hikaricp_connections_timeout_total counter +hikaricp_connections_timeout_total{pool="HikariPool-1",} 0.0 +# HELP jvm_memory_max_bytes The maximum amount of memory in bytes that can be used for memory management +# TYPE jvm_memory_max_bytes gauge +jvm_memory_max_bytes{area="heap",id="Tenured Gen",} 2.803236864E9 +jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 1.22912768E8 +jvm_memory_max_bytes{area="heap",id="Eden Space",} 1.12132096E9 +jvm_memory_max_bytes{area="nonheap",id="Metaspace",} -1.0 +jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 5828608.0 +jvm_memory_max_bytes{area="heap",id="Survivor Space",} 1.40115968E8 +jvm_memory_max_bytes{area="nonheap",id="Compressed Class Space",} 1.073741824E9 +jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 1.22916864E8 +# HELP tomcat_sessions_active_max_sessions +# TYPE tomcat_sessions_active_max_sessions gauge +tomcat_sessions_active_max_sessions 16.0 +# HELP tomcat_sessions_alive_max_seconds +# TYPE tomcat_sessions_alive_max_seconds gauge +tomcat_sessions_alive_max_seconds 0.0 +# HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset +# TYPE jvm_threads_peak_threads gauge +jvm_threads_peak_threads 43.0 +# HELP hikaricp_connections_acquire_seconds Connection acquire time +# TYPE hikaricp_connections_acquire_seconds summary +hikaricp_connections_acquire_seconds_count{pool="HikariPool-1",} 57.0 +hikaricp_connections_acquire_seconds_sum{pool="HikariPool-1",} 0.103535665 +# HELP hikaricp_connections_acquire_seconds_max Connection acquire time +# TYPE hikaricp_connections_acquire_seconds_max gauge 
+hikaricp_connections_acquire_seconds_max{pool="HikariPool-1",} 0.004207252 +# HELP hikaricp_connections_usage_seconds Connection usage time +# TYPE hikaricp_connections_usage_seconds summary +hikaricp_connections_usage_seconds_count{pool="HikariPool-1",} 57.0 +hikaricp_connections_usage_seconds_sum{pool="HikariPool-1",} 13.297 +# HELP hikaricp_connections_usage_seconds_max Connection usage time +# TYPE hikaricp_connections_usage_seconds_max gauge +hikaricp_connections_usage_seconds_max{pool="HikariPool-1",} 0.836 +# HELP http_server_requests_seconds +# TYPE http_server_requests_seconds summary +http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 9.0 +http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 1.93944618 +http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 3.0 +http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 1.365007581 +http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 4.0 +http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 2.636914428 +# HELP http_server_requests_seconds_max +# TYPE http_server_requests_seconds_max gauge +http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 0.213989915 +http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 0.0 +http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 0.714076223 +# HELP process_cpu_usage The "recent cpu usage" for the Java Virtual Machine process +# TYPE process_cpu_usage gauge +process_cpu_usage 0.002436413304293255 +# HELP hikaricp_connections_idle Idle connections +# TYPE hikaricp_connections_idle gauge +hikaricp_connections_idle{pool="HikariPool-1",} 10.0 +# HELP tomcat_sessions_rejected_sessions_total +# TYPE tomcat_sessions_rejected_sessions_total counter +tomcat_sessions_rejected_sessions_total 0.0 +# HELP jvm_gc_memory_allocated_bytes_total Incremented for an increase in the size of the (young) heap memory pool after one GC to before the next +# TYPE jvm_gc_memory_allocated_bytes_total counter +jvm_gc_memory_allocated_bytes_total 1.401269088E9 +# HELP tomcat_sessions_expired_sessions_total +# TYPE tomcat_sessions_expired_sessions_total counter +tomcat_sessions_expired_sessions_total 0.0 +# HELP pap_policy_deployments_total +# TYPE pap_policy_deployments_total counter +pap_policy_deployments_total{operation="deploy",status="FAILURE",} 0.0 +pap_policy_deployments_total{operation="undeploy",status="SUCCESS",} 0.0 +pap_policy_deployments_total{operation="deploy",status="SUCCESS",} 0.0 +pap_policy_deployments_total{operation="undeploy",status="FAILURE",} 0.0 +# HELP hikaricp_connections_pending Pending threads +# TYPE hikaricp_connections_pending gauge +hikaricp_connections_pending{pool="HikariPool-1",} 0.0 +# HELP process_files_max_files The maximum file descriptor count +# TYPE process_files_max_files gauge +process_files_max_files 1048576.0 +# HELP jvm_buffer_memory_used_bytes An estimate of the memory that the Java virtual machine is using for this buffer pool +# TYPE jvm_buffer_memory_used_bytes gauge 
+jvm_buffer_memory_used_bytes{id="mapped",} 0.0 +jvm_buffer_memory_used_bytes{id="direct",} 169210.0 +# HELP jvm_gc_pause_seconds Time spent in GC pause +# TYPE jvm_gc_pause_seconds summary +jvm_gc_pause_seconds_count{action="end of major GC",cause="Metadata GC Threshold",} 2.0 +jvm_gc_pause_seconds_sum{action="end of major GC",cause="Metadata GC Threshold",} 0.472 +jvm_gc_pause_seconds_count{action="end of minor GC",cause="Allocation Failure",} 19.0 +jvm_gc_pause_seconds_sum{action="end of minor GC",cause="Allocation Failure",} 0.507 +# HELP jvm_gc_pause_seconds_max Time spent in GC pause +# TYPE jvm_gc_pause_seconds_max gauge +jvm_gc_pause_seconds_max{action="end of major GC",cause="Metadata GC Threshold",} 0.0 +jvm_gc_pause_seconds_max{action="end of minor GC",cause="Allocation Failure",} 0.029 +# HELP jvm_threads_live_threads The current number of live threads including both daemon and non-daemon threads +# TYPE jvm_threads_live_threads gauge +jvm_threads_live_threads 43.0 +# HELP hikaricp_connections_min Min connections +# TYPE hikaricp_connections_min gauge +hikaricp_connections_min{pool="HikariPool-1",} 10.0 +# HELP jdbc_connections_max Maximum number of active connections that can be allocated at the same time. +# TYPE jdbc_connections_max gauge +jdbc_connections_max{name="dataSource",} 10.0 +# HELP jvm_buffer_total_capacity_bytes An estimate of the total capacity of the buffers in this pool +# TYPE jvm_buffer_total_capacity_bytes gauge +jvm_buffer_total_capacity_bytes{id="mapped",} 0.0 +jvm_buffer_total_capacity_bytes{id="direct",} 169210.0 +# HELP system_cpu_count The number of processors available to the Java virtual machine +# TYPE system_cpu_count gauge +system_cpu_count 1.0 +# HELP hikaricp_connections Total connections +# TYPE hikaricp_connections gauge +hikaricp_connections{pool="HikariPool-1",} 10.0 +# HELP jdbc_connections_active Current number of active connections that have been allocated from the data source. 
+# TYPE jdbc_connections_active gauge +jdbc_connections_active{name="dataSource",} 0.0 +# HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time +# TYPE system_load_average_1m gauge +system_load_average_1m 0.36 +# HELP jvm_gc_max_data_size_bytes Max size of long-lived heap memory pool +# TYPE jvm_gc_max_data_size_bytes gauge +jvm_gc_max_data_size_bytes 2.803236864E9 +# HELP jvm_threads_states_threads The current number of threads having NEW state +# TYPE jvm_threads_states_threads gauge +jvm_threads_states_threads{state="runnable",} 9.0 +jvm_threads_states_threads{state="blocked",} 0.0 +jvm_threads_states_threads{state="waiting",} 26.0 +jvm_threads_states_threads{state="timed-waiting",} 8.0 +jvm_threads_states_threads{state="new",} 0.0 +jvm_threads_states_threads{state="terminated",} 0.0 +# HELP jvm_buffer_count_buffers An estimate of the number of buffers in the pool +# TYPE jvm_buffer_count_buffers gauge +jvm_buffer_count_buffers{id="mapped",} 0.0 +jvm_buffer_count_buffers{id="direct",} 10.0 +# HELP logback_events_total Number of error level events that made it to the logs +# TYPE logback_events_total counter +logback_events_total{level="warn",} 22.0 +logback_events_total{level="debug",} 0.0 +logback_events_total{level="error",} 0.0 +logback_events_total{level="trace",} 0.0 +logback_events_total{level="info",} 385.0 diff --git a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_performance_jmeter_results.png b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_performance_jmeter_results.png new file mode 100644 index 00000000..a6504789 Binary files /dev/null and b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_performance_jmeter_results.png differ diff --git a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_stability_jmeter_results.png b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_stability_jmeter_results.png new file mode 100644 index 00000000..5f54c02e Binary files /dev/null and b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_stability_jmeter_results.png differ diff --git a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_top_after_72h.png b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_top_after_72h.png new file mode 100644 index 00000000..576b1c25 Binary files /dev/null and b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_top_after_72h.png differ diff --git a/docs/development/devtools/testing/s3p/pap-s3p-results/pap_top_before_72h.png b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_top_before_72h.png new file mode 100644 index 00000000..b59b2c95 Binary files /dev/null and b/docs/development/devtools/testing/s3p/pap-s3p-results/pap_top_before_72h.png differ diff --git a/docs/development/devtools/testing/s3p/pap-s3p.rst b/docs/development/devtools/testing/s3p/pap-s3p.rst new file mode 100644 index 00000000..b42d7eb0 --- /dev/null +++ b/docs/development/devtools/testing/s3p/pap-s3p.rst @@ -0,0 +1,198 @@ +.. This work is licensed under a +.. Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 + +.. _pap-s3p-label: + +.. toctree:: + :maxdepth: 2 + +Policy PAP component +~~~~~~~~~~~~~~~~~~~~ + +Both the Performance and the Stability tests were executed by performing requests +against Policy components installed as part of a full ONAP OOM deployment in Nordix lab. 
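+
+Before starting either test plan, the reachability of the PAP REST API in that deployment can be verified with a couple of plain REST calls (a minimal sketch; the URL scheme, host, port and credentials are placeholders for the values configured in the test plans):
+
+.. code-block:: bash
+
+   # Health check and PdpGroup query - the same PAP endpoints exercised by the test plans
+   curl -sk -u "${PAP_USER}:${PAP_PASSWORD}" "https://${PAP_HOST}:${PAP_PORT}/policy/pap/v1/healthcheck"
+   curl -sk -u "${PAP_USER}:${PAP_PASSWORD}" "https://${PAP_HOST}:${PAP_PORT}/policy/pap/v1/pdps"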
+ +Setup Details ++++++++++++++ + +- Policy-PAP along with all other policy components deployed as part of a full ONAP OOM deployment. +- A second instance of APEX-PDP is spun up in the setup. Update the configuration file (OnapPfConfig.json) so that this PDP can register to the new group created by PAP during the tests. +- Both tests were run via JMeter. + +Stability Test of PAP ++++++++++++++++++++++ + +Test Plan +--------- +The 72-hour stability test ran the following steps sequentially in a single-threaded loop. + +Setup Phase (steps running only once) +""""""""""""""""""""""""""""""""""""" + +- **Create Policy for defaultGroup** - creates an operational policy using the policy/api component +- **Create NodeTemplate metadata for sampleGroup policy** - creates a node template containing metadata using the policy/api component +- **Create Policy for sampleGroup** - creates an operational policy that refers to the metadata created above, using the policy/api component +- **Change defaultGroup state to ACTIVE** - changes the state of the defaultGroup PdpGroup to ACTIVE +- **Create/Update PDP Group** - creates a new PdpGroup named sampleGroup. + The second APEX-PDP instance that is already spun up gets registered to this new group +- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that both PdpGroups are in the ACTIVE state. + +PAP Test Flow (steps running in a loop for 72 hours) +"""""""""""""""""""""""""""""""""""""""""""""""""""" + +- **Check Health** - checks the health status of PAP +- **PAP Metrics** - fetches Prometheus metrics before the deployment/undeployment cycle and + saves counters such as the deploy/undeploy success/failure counters at API and engine level. +- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that both PdpGroups are in the ACTIVE state. +- **Deploy Policy for defaultGroup** - deploys the policy defaultDomain to defaultGroup +- **Check status of defaultGroup policy** - checks the status of the defaultGroup PdpGroup with the defaultDomain policy 1.0.0. +- **Check PdpGroup Audit defaultGroup** - checks the audit information for the defaultGroup PdpGroup. +- **Check PdpGroup Audit Policy (defaultGroup)** - checks the audit information for the defaultGroup PdpGroup with the defaultDomain policy 1.0.0. +- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that both PdpGroups are in the ACTIVE state and that defaultGroup has a policy deployed on it. +- **Deployment Update for sampleGroup policy** - deploys the policy sampleDomain to the sampleGroup PdpGroup using the PAP API +- **Check status of sampleGroup** - checks the status of the sampleGroup PdpGroup. +- **Check status of PdpGroups** - checks the status of both PdpGroups. +- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that defaultGroup has the policy defaultDomain deployed on it and sampleGroup has the policy sampleDomain deployed on it. +- **Check Audit** - checks the audit information for all PdpGroups. +- **Check Consolidated Health** - checks the consolidated health status of all policy components. +- **Check Deployed Policies** - checks all the deployed policies using the PAP API.
+- **Undeploy policy in sampleGroup** - undeploys the policy sampleDomain from the sampleGroup PdpGroup using the PAP API +- **Undeploy policy in defaultGroup** - undeploys the policy defaultDomain from the defaultGroup PdpGroup +- **Check status of policies** - checks the status of all policies and makes sure both policies are undeployed +- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that the PdpGroup is in the PASSIVE state. +- **PAP Metrics after deployments** - fetches Prometheus metrics after the deployment/undeployment cycle and + saves the new counter values such as the deploy/undeploy success/failure counters at API and engine level, checking that the deploySuccess and undeploySuccess counters have increased by 2. + +.. Note:: + To avoid putting a large Constant Timer value after every deployment/undeployment, the status API is polled until the deployment/undeployment + has completed successfully, or until a timeout. This makes sure that the operation completes successfully and that the PDPs get enough time to respond. + Otherwise, before the deployment is marked successful by PAP, an undeployment could be triggered as part of other tests, + and the operation's corresponding Prometheus counter at engine level would not get updated. + +Teardown Phase (steps running only once after PAP Test Flow is completed) +""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""" + +- **Change state to PASSIVE(sampleGroup)** - changes the state of the sampleGroup PdpGroup to PASSIVE +- **Delete PdpGroup sampleGroup** - deletes the sampleGroup PdpGroup using the PAP API +- **Change State to PASSIVE(defaultGroup)** - changes the state of the defaultGroup PdpGroup to PASSIVE +- **Delete policy created for defaultGroup** - deletes the operational policy defaultDomain using the policy/api component +- **Delete Policy created for sampleGroup** - deletes the operational policy sampleDomain using the policy/api component +- **Delete Nodetemplate metadata for sampleGroup policy** - deletes the node template containing metadata for the sampleGroup policy + +The following elements can be used to configure the parameters of the test plan. + +- **HTTP Authorization Manager** - used to store user/password authentication details. +- **HTTP Header Manager** - used to store headers which will be used for making HTTP requests. +- **User Defined Variables** - used to store the following user-defined parameters. + +=========== =================================================================== + **Name** **Description** +=========== =================================================================== + PAP_HOST IP address or host name of the PAP component + PAP_PORT Port number of PAP for making REST API calls + API_HOST IP address or host name of the API component + API_PORT Port number of API for making REST API calls +=========== =================================================================== + +The test was run in the background via "nohup", to prevent it from being interrupted: + +.. code-block:: bash + + nohup apache-jmeter-5.5/bin/jmeter -n -t stability.jmx -l stabilityTestResults.jtl & + +Test Results +------------ + +**Summary** + +The stability test plan ran for 72 hours. A small percentage of requests (0.15 %) failed; the cause is explained in the note below the statistics.
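+
+The aggregate figures below are taken from the JMeter report. As a rough cross-check, the raw JTL output can also be summarised directly (a sketch, assuming the default CSV JTL format in which the eighth column is the per-sample success flag):
+
+.. code-block:: bash
+
+   # Count total and failed samples in the stability run output (assumes default JTL columns)
+   awk -F',' 'NR > 1 { total++; if ($8 == "false") errors++ }
+              END { printf "requests=%d errors=%d error_rate=%.2f%%\n", total, errors, 100 * errors / total }' stabilityTestResults.jtl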
+ + +**Test Statistics** + +======================= ================= ================== ================================== +**Total # of requests** **Success %** **Error %** **Average time taken per request** +======================= ================= ================== ================================== + 102290 100 % 0.15 % 782 ms +======================= ================= ================== ================================== + +.. Note:: + + There was a 0.15 % failure rate during the 72-hour test, caused by the timing between the update of the "undeploySuccessCount" metric and the undeploy operation itself. + For the next test we suggest increasing the timeout to up to 130 s between "Undeploy policy in defaultGroup" and "PAP Metrics after deployments". + +**JMeter Screenshot** + +.. image:: pap-s3p-results/pap_stability_jmeter_results.png + +**Memory and CPU usage** + +Memory and CPU usage can be monitored by running the "top" command in the PAP pod. +A snapshot is taken before and after test execution to monitor the changes in resource utilization. +Prometheus metrics are also collected before and after the test execution.
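+
+Both snapshots can be captured with standard tooling; the sketch below is illustrative only, and the pod name, namespace (assumed here to be *onap*) and credentials are placeholders for the actual deployment values:
+
+.. code-block:: bash
+
+   # CPU and memory snapshot from inside the PAP pod (pod name is a placeholder)
+   kubectl -n onap exec <pap-pod-name> -- top -b -n 1 | head -n 20
+
+   # Prometheus metrics snapshot from the PAP metrics endpoint
+   curl -sk -u "${PAP_USER}:${PAP_PASSWORD}" "https://${PAP_HOST}:${PAP_PORT}/metrics" > pap_metrics_before_72h.txt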
+ +Memory and CPU usage before test execution: + +.. image:: pap-s3p-results/pap_top_before_72h.png + +:download:`Prometheus metrics before 72h test ` + +Memory and CPU usage after test execution: + +.. image:: pap-s3p-results/pap_top_after_72h.png + +:download:`Prometheus metrics after 72h test ` + +Performance Test of PAP +++++++++++++++++++++++++ + +Introduction +------------ + +The performance test of PAP measures the min/avg/max processing time and REST call throughput when multiple requests are issued at the same time. + +Setup Details +------------- + +The performance test is performed on a setup similar to the stability test. The JMeter VM sends a large number of REST requests to the PAP component and collects the statistics. + + +Test Plan +--------- + +The performance test plan is the same as the stability test plan above, except for the differences listed below. + +- The number of threads is increased to 10 (simulating 10 users acting at the same time). +- The test time is reduced to 2 hours. +- Counters (one per simulated user) are used to create different PdpGroups, update their state and later delete them. +- The tests that deploy policies to the newly created groups are removed, as this would need a larger setup with multiple PDPs registered to each group and would also slow down the performance test because of the time needed for the registration process. +- Counters (one per simulated user) are used to create different Drools policies and deploy them to defaultGroup. + With a thread count of 10, this results in 10 different Drools policies being deployed and undeployed continuously for 2 hours. + Other standard operations, such as checking the deployment status of policies, the metrics and the health, remain unchanged. + +Run Test +-------- + +Running the performance test is the same as for the stability test: launch JMeter pointing to the corresponding *.jmx* test plan. The *API_HOST*, *API_PORT*, *PAP_HOST* and *PAP_PORT* are already set up in the *.jmx* file. + +.. code-block:: bash + + nohup apache-jmeter-5.5/bin/jmeter -n -t performance.jmx -l performanceTestResults.jtl & + +Test Results +------------ + +Test results are shown below. + +**Test Statistics** + +======================= ================= ================== ================================== +**Total # of requests** **Success %** **Error %** **Average time taken per request** +======================= ================= ================== ================================== +19886 100 % 0.00 % 3107 ms +======================= ================= ================== ================================== + +**JMeter Screenshot** + +.. image:: pap-s3p-results/pap_performance_jmeter_results.png diff --git a/docs/development/devtools/testing/s3p/run-s3p.rst b/docs/development/devtools/testing/s3p/run-s3p.rst new file mode 100644 index 00000000..17eba32a --- /dev/null +++ b/docs/development/devtools/testing/s3p/run-s3p.rst @@ -0,0 +1,52 @@ +Running the Policy Framework S3P Tests +###################################### + +.. contents:: + :depth: 3 + +Per release, the Policy Framework team performs stability and performance tests for each component of the framework. +This work involves running a series of tests against a full OOM deployment and updating the various test plans to work against that deployment. +This setup can take a considerable amount of time before any tests can be run. +For stability testing, JMeter is used to trigger a series of tests over a period of 72 hours, which has to be manually initiated and monitored by the tester. +The same applies to the performance tests, which run for about 2 hours. +As part of the work to automate this process, a script can now be triggered to bring up a microk8s cluster on a VM, install JMeter, alter the cluster information to match the JMX test plans, trigger JMeter, and gather the results at the end. +These S3P tests will be triggered for a shorter period as part of the CSITs to prove the stability and performance of our components. + +Recent work was completed to trigger our CSIT tests in a K8s environment. +As part of this work, a script was created to bring up a microk8s cluster for testing purposes, which includes all the components needed for Policy Framework testing. +For automating the S3Ps, this script is used to bring up a K8s environment against which the S3P tests are run. +Once this cluster is brought up, a second script is called to alter the cluster: the IPs and ports of the policy components are set by this script to ensure consistency with the test plans. +JMeter is installed, and the S3P test plans are then triggered for the respective components. + +.. code-block:: bash + :caption: Start S3P Script + + #===MAIN===# + if [ -z "${WORKSPACE}" ]; then + export WORKSPACE=$(git rev-parse --show-toplevel) + fi + export TESTDIR=${WORKSPACE}/testsuites + export API_PERF_TEST_FILE=$TESTDIR/performance/src/main/resources/testplans/policy_api_performance.jmx + export API_STAB_TEST_FILE=$TESTDIR/stability/src/main/resources/testplans/policy_api_stability.jmx + if [ "$1" == "run" ] + then + mkdir automate-performance;cd automate-performance; + git clone "https://gerrit.onap.org/r/policy/docker" + cd docker/csit + if [ "$2" == "performance" ] + then + bash start-s3p-tests.sh run $API_PERF_TEST_FILE; + elif [ "$2" == "stability" ] + then + bash start-s3p-tests.sh run $API_STAB_TEST_FILE; + else + echo "Invalid arguments provided. Usage: $0 [option..] {performance | stability}" + fi + else + echo "Invalid arguments provided. Usage: $0 [option..] {run | uninstall}" + fi + +This script is triggered by each component. It will export the performance and stability test plans and trigger the start-s3p-tests.sh script, which performs the steps to run the S3P tests automatically.
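+
+For example, assuming the wrapper above is saved as *run-s3p-test.sh* in a component's *testsuites* directory (the file name here is only illustrative), a stability run would be started with:
+
+.. code-block:: bash
+
+   # Brings up the microk8s cluster, installs JMeter and runs the component's stability test plan
+   bash run-s3p-test.sh run stability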
+ diff --git a/docs/development/devtools/testing/s3p/xacml-s3p-results/s3p-perf-xacml.png b/docs/development/devtools/testing/s3p/xacml-s3p-results/s3p-perf-xacml.png new file mode 100644 index 00000000..2c27967f Binary files /dev/null and b/docs/development/devtools/testing/s3p/xacml-s3p-results/s3p-perf-xacml.png differ diff --git a/docs/development/devtools/testing/s3p/xacml-s3p.rst b/docs/development/devtools/testing/s3p/xacml-s3p.rst new file mode 100644 index 00000000..5ea2e287 --- /dev/null +++ b/docs/development/devtools/testing/s3p/xacml-s3p.rst @@ -0,0 +1,134 @@ +.. This work is licensed under a +.. Creative Commons Attribution 4.0 International License. +.. http://creativecommons.org/licenses/by/4.0 + +.. _xacml-s3p-label: + +.. toctree:: + :maxdepth: 2 + +########################## + +Performance Test of Policy XACML PDP (Jakarta) +********************************************** + +The Performance test was executed by performing requests +against the Policy RESTful APIs. + +A default ONAP installation in the Policy tenant in UNH was used to run the tests. + +The Agent VMs in this lab have the following configuration: + +- 16GB RAM +- 8 VCPU + +Summary +======= + +The Performance test was executed, and the result analysed, via: + +.. code-block:: bash + + jmeter -Jduration=1200 -Jusers=10 \ + -Jxacml_ip=$ip -Jpap_ip=$ip -Japi_ip=$ip \ + -Jxacml_port=30111 -Jpap_port=30197 -Japi_port=30664 \ + -n -t perf.jmx -l testresults.jtl + +Note: the ports listed above correspond to port 6969 of the respective components. + +The performance test runs the following, all in parallel: + +- Healthcheck, 10 simultaneous threads +- Statistics, 10 simultaneous threads +- Decisions, 10 simultaneous threads, each running the following in sequence: + + - Monitoring Decision + - Monitoring Decision, abbreviated + - Naming Decision + - Optimization Decision + - Default Guard Decision (always "Permit") + - Frequency Limiter Guard Decision + - Min/Max Guard Decision + +When the script starts up, it uses policy-api to create, and policy-pap to deploy, +the policies that are needed by the test. It assumes that the "naming" policy has +already been created and deployed. Once the test completes, it undeploys and deletes +the policies that it previously created. + +Results +======= + +The test was run for 20 minutes with 10 users (i.e., threads), with the following results: + +.. csv-table:: + :header: "Number of Users", "Throughput (requests/second)", "Average Latency (ms)" + + 10, 4603, 2 + +.. image:: xacml-s3p-results/s3p-perf-xacml.png + + +Stability Test of Policy XACML PDP +********************************** + +This test was run using JMeter on a default +ONAP installation in the Policy tenant in UNH. + +The Agent VMs in this lab have the following configuration: + +- 16GB RAM +- 8 VCPU + +Summary +======= + +The stability test was performed on a default ONAP OOM installation in the Policy tenant of the UNH lab. +JMeter injected the traffic defined in the +`XACML PDP stability script +`_ +with the following command: + +.. code-block:: bash + + jmeter.sh -Jduration=259200 -Jusers=2 -Jxacml_ip=$ip -Jpap_ip=$ip -Japi_ip=$ip \ + -Jxacml_port=30111 -Jpap_port=30197 -Japi_port=30664 --nongui --testfile stability.jmx + +Note: the ports listed above correspond to port 6969 of the respective components.
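+
+While the stability test is running, the PdpGroups (and therefore the policies deployed by the script) can be spot-checked through the PAP API, using the same PAP node port as in the command above (a sketch; the URL scheme and credentials depend on the deployment and are placeholders here):
+
+.. code-block:: bash
+
+   # Query the PdpGroups and the policies currently deployed to them
+   curl -sk -u "${PAP_USER}:${PAP_PASSWORD}" "https://${ip}:30197/policy/pap/v1/pdps"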
+ +The default log level of the root and org.eclipse.jetty.server.RequestLog loggers in the logback.xml +of the XACML PDP +(om/kubernetes/policy/components/policy-xacml-pdp/resources/config/logback.xml) +was set to WARN, since the OOM installation had log rotation enabled for the +container logs on the Kubernetes worker nodes. + +The stability test, stability.jmx, runs the following, all in parallel: + +- Healthcheck, 2 simultaneous threads +- Statistics, 2 simultaneous threads +- Decisions, 2 simultaneous threads, each running the following tasks in sequence: + + - Monitoring Decision + - Monitoring Decision, abbreviated + - Naming Decision + - Optimization Decision + - Default Guard Decision (always "Permit") + - Frequency Limiter Guard Decision + - Min/Max Guard Decision + +When the script starts up, it uses policy-api to create, and policy-pap to deploy, +the policies that are needed by the test. It assumes that the "naming" policy has +already been created and deployed. Once the test completes, it undeploys and deletes +the policies that it previously created. + +Results +======= + +The stability summary results were reported by JMeter with the following summary line: + +.. code-block:: bash + + summary = 941639699 in 71:59:36 = 3633.2/s Avg: 1 Min: 0 Max: 842 Err: 0 (0.00%) + +The XACML PDP offered very good performance with JMeter for the traffic mix described above. +The average transaction time is insignificant, and the maximum transaction time was 842 ms. +There was a Drools stability test running in parallel, hence the actual load was higher. + diff --git a/docs/development/devtools/xacml-s3p.rst b/docs/development/devtools/xacml-s3p.rst deleted file mode 100644 index c52a21ab..00000000 --- a/docs/development/devtools/xacml-s3p.rst +++ /dev/null @@ -1,134 +0,0 @@ -.. This work is licensed under a -.. Creative Commons Attribution 4.0 International License. -.. http://creativecommons.org/licenses/by/4.0 - -.. _xacml-s3p-label: - -.. toctree:: - :maxdepth: 2 - -########################## - -Performance Test of Policy XACML PDP (Jakarta) -********************************************** - -The Performance test was executed by performing requests -against the Policy RESTful APIs. - -A default ONAP installation in the Policy tenant in UNH was used to run the tests. - -The Agent VMs in this lab have the following configuration: - -- 16GB RAM -- 8 VCPU - -Summary -======= - -The Performance test was executed, and the result analysed, via: - -.. code-block:: bash - - jmeter -Jduration=1200 -Jusers=10 \ - -Jxacml_ip=$ip -Jpap_ip=$ip -Japi_ip=$ip \ - -Jxacml_port=30111 -Jpap_port=30197 -Japi_port=30664 \ - -n -t perf.jmx -l testresults.jtl - -Note: the ports listed above correspond to port 6969 of the respective components. - -The performance tests runs the following, all in parallel: - -- Healthcheck, 10 simultaneous threads -- Statistics, 10 simultaneous threads -- Decisions, 10 simultaneous threads, each running the following in sequence: - - - Monitoring Decision - - Monitoring Decision, abbreviated - - Naming Decision - - Optimization Decision - - Default Guard Decision (always "Permit") - - Frequency Limiter Guard Decision - - Min/Max Guard Decision - -When the script starts up, it uses policy-api to create, and policy-pap to deploy, -the policies that are needed by the test. It assumes that the "naming" policy has -already been created and deployed. Once the test completes, it undeploys and deletes -the policies that it previously created.
- -Results -======= - -The test was run for 20 minutes with 10 users (i.e., threads), with the following results: - -.. csv-table:: - :header: "Number of Users", "Throughput (requests/second)", "Average Latency (ms)" - - 10, 4603, 2 - -.. image:: images/s3p-perf-xacml.png - - -Stability Test of Policy XACML PDP -********************************** - -This test was run using jmeter on a default -ONAP installation in the Policy tenant in UNH. - -The Agent VMs in this lab have the following configuration: - -- 16GB RAM -- 8 VCPU - -Summary -======= - -The stability test was performed on a default ONAP OOM installation in the Policy tenant of the UNH lab. -JMeter injected the traffic defined in the -`XACML PDP stability script -`_ -with the following command: - -.. code-block:: bash - - jmeter.sh -Jduration=259200 -Jusers=2 -Jxacml_ip=$ip -Jpap_ip=$ip -Japi_ip=$ip \ - -Jxacml_port=30111 -Jpap_port=30197 -Japi_port=30664 --nongui --testfile stability.jmx - -Note: the ports listed above correspond to port 6969 of the respective components. - -The default log level of the root and org.eclipse.jetty.server.RequestLog loggers in the logback.xml -of the XACML PDP -(om/kubernetes/policy/components/policy-xacml-pdp/resources/config/logback.xml) -was set to WARN since the OOM installation did have log rotation enabled of the -container logs in the kubernetes worker nodes. - -The stability test, stability.jmx, runs the following, all in parallel: - -- Healthcheck, 2 simultaneous threads -- Statistics, 2 simultaneous threads -- Decisions, 2 simultaneous threads, each running the following tasks in sequence: - - Monitoring Decision - - Monitoring Decision, abbreviated - - Naming Decision - - Optimization Decision - - Default Guard Decision (always "Permit") - - Frequency Limiter Guard Decision - - Min/Max Guard Decision - -When the script starts up, it uses policy-api to create, and policy-pap to deploy -the policies that are needed by the test. It assumes that the "naming" policy has -already been created and deployed. Once the test completes, it undeploys and deletes -the policies that it previously created. - -Results -======= - -The stability summary results were reported by JMeter with the following summary line: - -.. code-block:: bash - - summary = 941639699 in 71:59:36 = 3633.2/s Avg: 1 Min: 0 Max: 842 Err: 0 (0.00%) - -The XACML PDP offered very good performance with JMeter for the traffic mix described above. -The average transaction time is insignificant. The maximum transaction time of 842 ms. -There was a Drools stability test running in parallel, hence the actual load was higher. - -- cgit 1.2.3-korg