author    a.sreekumar <ajith.sreekumar@bell.ca>    2022-04-20 16:13:38 +0100
committer a.sreekumar <ajith.sreekumar@bell.ca>    2022-04-20 16:15:11 +0100
commit    b01b8c2f5c41fb8c2a7c18096f10d4274dab353f (patch)
tree      7f4afd69defcf7003e4ff9ac56c6acb90219994a /docs/development
parent    41fc72569715ef1a86688325904d7833491c2891 (diff)
Policy-PAP S3P doc updates
Change-Id: I2e9ea26abf36ee7d6915432a42df77bd8929a67b
Issue-ID: POLICY-4008
Signed-off-by: a.sreekumar <ajith.sreekumar@bell.ca>
Diffstat (limited to 'docs/development')
11 files changed, 594 insertions, 54 deletions
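The two ``pap_metrics_*_72h.txt`` files added below are raw snapshots of the PAP Prometheus endpoint (the scrapes themselves appear in the data as ``uri="/metrics"``). A minimal capture sketch, assuming a Kubernetes OOM deployment; the namespace, service name, port, and credential variables are assumptions, not values taken from this change:

.. code-block:: bash

   # Port-forward the PAP service and save one snapshot per capture point.
   # Namespace/service/port and the credential variables are assumptions.
   kubectl -n onap port-forward svc/policy-pap 6969:6969 &
   curl -sk -u "${PAP_USER}:${PAP_PASS}" \
        https://localhost:6969/metrics > pap_metrics_before_72h.txt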
diff --git a/docs/development/devtools/pap-s3p-results/pap-s3p-mem-at.png b/docs/development/devtools/pap-s3p-results/pap-s3p-mem-at.png
deleted file mode 100644
index dd880227..00000000
--- a/docs/development/devtools/pap-s3p-results/pap-s3p-mem-at.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/pap-s3p-results/pap-s3p-mem-bt.png b/docs/development/devtools/pap-s3p-results/pap-s3p-mem-bt.png
deleted file mode 100644
index 7c909831..00000000
--- a/docs/development/devtools/pap-s3p-results/pap-s3p-mem-bt.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/pap-s3p-results/pap-s3p-performance-result-jmeter.png b/docs/development/devtools/pap-s3p-results/pap-s3p-performance-result-jmeter.png
deleted file mode 100644
index be8bd99e..00000000
--- a/docs/development/devtools/pap-s3p-results/pap-s3p-performance-result-jmeter.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/pap-s3p-results/pap-s3p-stability-result-jmeter.png b/docs/development/devtools/pap-s3p-results/pap-s3p-stability-result-jmeter.png
deleted file mode 100644
index 5ebc769f..00000000
--- a/docs/development/devtools/pap-s3p-results/pap-s3p-stability-result-jmeter.png
+++ /dev/null
Binary files differ
diff --git a/docs/development/devtools/pap-s3p-results/pap_metrics_after_72h.txt b/docs/development/devtools/pap-s3p-results/pap_metrics_after_72h.txt
new file mode 100644
index 00000000..8864726e
--- /dev/null
+++ b/docs/development/devtools/pap-s3p-results/pap_metrics_after_72h.txt
@@ -0,0 +1,306 @@
+# HELP logback_events_total Number of error level events that made it to the logs
+# TYPE logback_events_total counter
+logback_events_total{level="warn",} 23.0
+logback_events_total{level="debug",} 0.0
+logback_events_total{level="error",} 1.0
+logback_events_total{level="trace",} 0.0
+logback_events_total{level="info",} 1709270.0
+# HELP system_cpu_usage The "recent cpu usage" for the whole system
+# TYPE system_cpu_usage gauge
+system_cpu_usage 0.1270718232044199
+# HELP hikaricp_connections_acquire_seconds Connection acquire time
+# TYPE hikaricp_connections_acquire_seconds summary
+hikaricp_connections_acquire_seconds_count{pool="HikariPool-1",} 298222.0
+hikaricp_connections_acquire_seconds_sum{pool="HikariPool-1",} 321.533641537
+# HELP hikaricp_connections_acquire_seconds_max Connection acquire time
+# TYPE hikaricp_connections_acquire_seconds_max gauge
+hikaricp_connections_acquire_seconds_max{pool="HikariPool-1",} 0.006766789
+# HELP tomcat_sessions_created_sessions_total
+# TYPE tomcat_sessions_created_sessions_total counter
+tomcat_sessions_created_sessions_total 158246.0
+# HELP jvm_classes_unloaded_classes_total The total number of classes unloaded since the Java virtual machine has started execution
+# TYPE jvm_classes_unloaded_classes_total counter
+jvm_classes_unloaded_classes_total 799.0
+# HELP jvm_gc_memory_allocated_bytes_total Incremented for an increase in the size of the (young) heap memory pool after one GC to before the next
+# TYPE jvm_gc_memory_allocated_bytes_total counter
+jvm_gc_memory_allocated_bytes_total 3.956513686328E12
+# HELP tomcat_sessions_alive_max_seconds
+# TYPE tomcat_sessions_alive_max_seconds gauge
+tomcat_sessions_alive_max_seconds 2488.0
+# HELP spring_data_repository_invocations_seconds_max
+# TYPE spring_data_repository_invocations_seconds_max gauge
+spring_data_repository_invocations_seconds_max{exception="None",method="findByKeyParentKeyNameAndKeyParentKeyVersion",repository="PolicyStatusRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="saveAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpGroupRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="findByPdpGroup",repository="PolicyAuditRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="findById",repository="ToscaServiceTemplateRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="findByPdpGroup",repository="PolicyStatusRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 0.863253324
+spring_data_repository_invocations_seconds_max{exception="None",method="deleteById",repository="PdpGroupRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.144251855
+spring_data_repository_invocations_seconds_max{exception="None",method="saveAll",repository="PolicyAuditRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="findByPdpGroupAndNameAndVersion",repository="PolicyAuditRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="findById",repository="ToscaNodeTemplateRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpSubGroupRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="deleteAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.0
+# HELP spring_data_repository_invocations_seconds
+# TYPE spring_data_repository_invocations_seconds summary
+spring_data_repository_invocations_seconds_count{exception="None",method="findByKeyParentKeyNameAndKeyParentKeyVersion",repository="PolicyStatusRepository",state="SUCCESS",} 15740.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findByKeyParentKeyNameAndKeyParentKeyVersion",repository="PolicyStatusRepository",state="SUCCESS",} 3116.970495755
+spring_data_repository_invocations_seconds_count{exception="None",method="saveAll",repository="PolicyStatusRepository",state="SUCCESS",} 113798.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="saveAll",repository="PolicyStatusRepository",state="SUCCESS",} 480.71823635
+spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpGroupRepository",state="SUCCESS",} 28085.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpGroupRepository",state="SUCCESS",} 9.645079055
+spring_data_repository_invocations_seconds_count{exception="None",method="findByPdpGroup",repository="PolicyAuditRepository",state="SUCCESS",} 6981.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findByPdpGroup",repository="PolicyAuditRepository",state="SUCCESS",} 616.931466813
+spring_data_repository_invocations_seconds_count{exception="None",method="findById",repository="ToscaServiceTemplateRepository",state="SUCCESS",} 46250.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findById",repository="ToscaServiceTemplateRepository",state="SUCCESS",} 8406.051483096
+spring_data_repository_invocations_seconds_count{exception="None",method="findByPdpGroup",repository="PolicyStatusRepository",state="SUCCESS",} 42765.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findByPdpGroup",repository="PolicyStatusRepository",state="SUCCESS",} 10979.997264985
+spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 101780.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 20530.858991818
+spring_data_repository_invocations_seconds_count{exception="None",method="deleteById",repository="PdpGroupRepository",state="SUCCESS",} 1.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="deleteById",repository="PdpGroupRepository",state="SUCCESS",} 0.004567796
+spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 32620.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 11459.109680167
+spring_data_repository_invocations_seconds_count{exception="None",method="saveAll",repository="PolicyAuditRepository",state="SUCCESS",} 28080.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="saveAll",repository="PolicyAuditRepository",state="SUCCESS",} 45.836464781
+spring_data_repository_invocations_seconds_count{exception="None",method="findByPdpGroupAndNameAndVersion",repository="PolicyAuditRepository",state="SUCCESS",} 13960.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findByPdpGroupAndNameAndVersion",repository="PolicyAuditRepository",state="SUCCESS",} 1765.653676534
+spring_data_repository_invocations_seconds_count{exception="None",method="findById",repository="ToscaNodeTemplateRepository",state="SUCCESS",} 21331.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findById",repository="ToscaNodeTemplateRepository",state="SUCCESS",} 1.286926983
+spring_data_repository_invocations_seconds_count{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 13970.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 4175.556697162
+spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpSubGroupRepository",state="SUCCESS",} 2.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpSubGroupRepository",state="SUCCESS",} 0.864602048
+spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 36866.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 7686.38602325
+spring_data_repository_invocations_seconds_count{exception="None",method="deleteAll",repository="PolicyStatusRepository",state="SUCCESS",} 56899.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="deleteAll",repository="PolicyStatusRepository",state="SUCCESS",} 882.098525295
+# HELP jvm_threads_states_threads The current number of threads having NEW state
+# TYPE jvm_threads_states_threads gauge
+jvm_threads_states_threads{state="runnable",} 9.0
+jvm_threads_states_threads{state="blocked",} 0.0
+jvm_threads_states_threads{state="waiting",} 29.0
+jvm_threads_states_threads{state="timed-waiting",} 8.0
+jvm_threads_states_threads{state="new",} 0.0
+jvm_threads_states_threads{state="terminated",} 0.0
+# HELP process_cpu_usage The "recent cpu usage" for the Java Virtual Machine process
+# TYPE process_cpu_usage gauge
+process_cpu_usage 0.006697923643670462
+# HELP tomcat_sessions_expired_sessions_total
+# TYPE tomcat_sessions_expired_sessions_total counter
+tomcat_sessions_expired_sessions_total 158186.0
+# HELP jvm_buffer_total_capacity_bytes An estimate of the total capacity of the buffers in this pool
+# TYPE jvm_buffer_total_capacity_bytes gauge
+jvm_buffer_total_capacity_bytes{id="mapped",} 0.0
+jvm_buffer_total_capacity_bytes{id="direct",} 169210.0
+# HELP process_start_time_seconds Start time of the process since unix epoch.
+# TYPE process_start_time_seconds gauge
+process_start_time_seconds 1.649849957815E9
+# HELP hikaricp_connections_creation_seconds_max Connection creation time
+# TYPE hikaricp_connections_creation_seconds_max gauge
+hikaricp_connections_creation_seconds_max{pool="HikariPool-1",} 0.51
+# HELP hikaricp_connections_creation_seconds Connection creation time
+# TYPE hikaricp_connections_creation_seconds summary
+hikaricp_connections_creation_seconds_count{pool="HikariPool-1",} 3936.0
+hikaricp_connections_creation_seconds_sum{pool="HikariPool-1",} 942.369
+# HELP hikaricp_connections_max Max connections
+# TYPE hikaricp_connections_max gauge
+hikaricp_connections_max{pool="HikariPool-1",} 10.0
+# HELP jdbc_connections_min Minimum number of idle connections in the pool.
+# TYPE jdbc_connections_min gauge
+jdbc_connections_min{name="dataSource",} 10.0
+# HELP jvm_memory_committed_bytes The amount of memory in bytes that is committed for the Java virtual machine to use
+# TYPE jvm_memory_committed_bytes gauge
+jvm_memory_committed_bytes{area="heap",id="Tenured Gen",} 1.76160768E8
+jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 4.9020928E7
+jvm_memory_committed_bytes{area="heap",id="Eden Space",} 7.0582272E7
+jvm_memory_committed_bytes{area="nonheap",id="Metaspace",} 1.1890688E8
+jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 2555904.0
+jvm_memory_committed_bytes{area="heap",id="Survivor Space",} 8781824.0
+jvm_memory_committed_bytes{area="nonheap",id="Compressed Class Space",} 1.5450112E7
+jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 3.1850496E7
+# HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset
+# TYPE jvm_threads_peak_threads gauge
+jvm_threads_peak_threads 51.0
+# HELP hikaricp_connections_idle Idle connections
+# TYPE hikaricp_connections_idle gauge
+hikaricp_connections_idle{pool="HikariPool-1",} 10.0
+# HELP hikaricp_connections Total connections
+# TYPE hikaricp_connections gauge
+hikaricp_connections{pool="HikariPool-1",} 10.0
+# HELP http_server_requests_seconds
+# TYPE http_server_requests_seconds summary
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}/{policyName}/{policyVersion}",} 13960.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}/{policyName}/{policyVersion}",} 4066.52698026
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 22470.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 3622.506076129
+http_server_requests_seconds_count{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/deployments/batch",} 13961.0
+http_server_requests_seconds_sum{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/deployments/batch",} 27890.47103474
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status",} 14404.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status",} 7821.856496806
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 15738.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 5848.655389921
+http_server_requests_seconds_count{exception="None",method="DELETE",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies/{name}",} 7059.0
+http_server_requests_seconds_sum{exception="None",method="DELETE",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies/{name}",} 15554.208182423
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}",} 6981.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}",} 1756.291465092
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/deployed",} 6979.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/deployed",} 1934.785157616
+http_server_requests_seconds_count{exception="None",method="PUT",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 4.0
+http_server_requests_seconds_sum{exception="None",method="PUT",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 7.281567744
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 31395.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 13046.055299896
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 11237.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 6979.030310367
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/components/healthcheck",} 6979.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/components/healthcheck",} 3741.773622509
+http_server_requests_seconds_count{exception="None",method="GET",outcome="CLIENT_ERROR",status="404",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 2.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="CLIENT_ERROR",status="404",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 1.318371311
+http_server_requests_seconds_count{exception="None",method="DELETE",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 1.0
+http_server_requests_seconds_sum{exception="None",method="DELETE",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 1.026191347
+http_server_requests_seconds_count{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies",} 7077.0
+http_server_requests_seconds_sum{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies",} 14603.589203056
+http_server_requests_seconds_count{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/batch",} 2.0
+http_server_requests_seconds_sum{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/batch",} 1.877099877
+# HELP http_server_requests_seconds_max
+# TYPE http_server_requests_seconds_max gauge
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}/{policyName}/{policyVersion}",} 0.0
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 0.147881793
+http_server_requests_seconds_max{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/deployments/batch",} 0.0
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status",} 0.0
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 0.0
+http_server_requests_seconds_max{exception="None",method="DELETE",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies/{name}",} 0.0
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}",} 0.0
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/deployed",} 0.0
+http_server_requests_seconds_max{exception="None",method="PUT",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 0.0
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 0.227488581
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 0.272733892
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/components/healthcheck",} 0.0
+http_server_requests_seconds_max{exception="None",method="GET",outcome="CLIENT_ERROR",status="404",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 0.0
+http_server_requests_seconds_max{exception="None",method="DELETE",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 0.0
+http_server_requests_seconds_max{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies",} 0.0
+http_server_requests_seconds_max{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/batch",} 0.0
+# HELP jvm_buffer_count_buffers An estimate of the number of buffers in the pool
+# TYPE jvm_buffer_count_buffers gauge
+jvm_buffer_count_buffers{id="mapped",} 0.0
+jvm_buffer_count_buffers{id="direct",} 10.0
+# HELP hikaricp_connections_pending Pending threads
+# TYPE hikaricp_connections_pending gauge
+hikaricp_connections_pending{pool="HikariPool-1",} 0.0
+# HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time
+# TYPE system_load_average_1m gauge
+system_load_average_1m 0.6
+# HELP jvm_memory_used_bytes The amount of used memory
+# TYPE jvm_memory_used_bytes gauge
+jvm_memory_used_bytes{area="heap",id="Tenured Gen",} 6.7084064E7
+jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 4.110464E7
+jvm_memory_used_bytes{area="heap",id="Eden Space",} 3.329572E7
+jvm_memory_used_bytes{area="nonheap",id="Metaspace",} 1.12499384E8
+jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 1394432.0
+jvm_memory_used_bytes{area="heap",id="Survivor Space",} 463856.0
+jvm_memory_used_bytes{area="nonheap",id="Compressed Class Space",} 1.3096368E7
+jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 3.1773568E7
+# HELP tomcat_sessions_rejected_sessions_total
+# TYPE tomcat_sessions_rejected_sessions_total counter
+tomcat_sessions_rejected_sessions_total 0.0
+# HELP jvm_gc_live_data_size_bytes Size of long-lived heap memory pool after reclamation
+# TYPE jvm_gc_live_data_size_bytes gauge
+jvm_gc_live_data_size_bytes 5.0955016E7
+# HELP jvm_gc_memory_promoted_bytes_total Count of positive increases in the size of the old generation memory pool before GC to after GC
+# TYPE jvm_gc_memory_promoted_bytes_total counter
+jvm_gc_memory_promoted_bytes_total 1.692072808E9
+# HELP tomcat_sessions_active_max_sessions
+# TYPE tomcat_sessions_active_max_sessions gauge
+tomcat_sessions_active_max_sessions 1101.0
+# HELP jdbc_connections_active Current number of active connections that have been allocated from the data source.
+# TYPE jdbc_connections_active gauge
+jdbc_connections_active{name="dataSource",} 0.0
+# HELP jdbc_connections_max Maximum number of active connections that can be allocated at the same time.
+# TYPE jdbc_connections_max gauge
+jdbc_connections_max{name="dataSource",} 10.0
+# HELP jvm_memory_max_bytes The maximum amount of memory in bytes that can be used for memory management
+# TYPE jvm_memory_max_bytes gauge
+jvm_memory_max_bytes{area="heap",id="Tenured Gen",} 2.803236864E9
+jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 1.22912768E8
+jvm_memory_max_bytes{area="heap",id="Eden Space",} 1.12132096E9
+jvm_memory_max_bytes{area="nonheap",id="Metaspace",} -1.0
+jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 5828608.0
+jvm_memory_max_bytes{area="heap",id="Survivor Space",} 1.40115968E8
+jvm_memory_max_bytes{area="nonheap",id="Compressed Class Space",} 1.073741824E9
+jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 1.22916864E8
+# HELP jvm_threads_daemon_threads The current number of live daemon threads
+# TYPE jvm_threads_daemon_threads gauge
+jvm_threads_daemon_threads 34.0
+# HELP process_files_open_files The open file descriptor count
+# TYPE process_files_open_files gauge
+process_files_open_files 36.0
+# HELP system_cpu_count The number of processors available to the Java virtual machine
+# TYPE system_cpu_count gauge
+system_cpu_count 1.0
+# HELP jvm_gc_pause_seconds Time spent in GC pause
+# TYPE jvm_gc_pause_seconds summary
+jvm_gc_pause_seconds_count{action="end of major GC",cause="Metadata GC Threshold",} 2.0
+jvm_gc_pause_seconds_sum{action="end of major GC",cause="Metadata GC Threshold",} 0.391
+jvm_gc_pause_seconds_count{action="end of major GC",cause="Allocation Failure",} 13.0
+jvm_gc_pause_seconds_sum{action="end of major GC",cause="Allocation Failure",} 5.98
+jvm_gc_pause_seconds_count{action="end of minor GC",cause="Allocation Failure",} 56047.0
+jvm_gc_pause_seconds_sum{action="end of minor GC",cause="Allocation Failure",} 549.532
+jvm_gc_pause_seconds_count{action="end of minor GC",cause="GCLocker Initiated GC",} 9.0
+jvm_gc_pause_seconds_sum{action="end of minor GC",cause="GCLocker Initiated GC",} 0.081
+# HELP jvm_gc_pause_seconds_max Time spent in GC pause
+# TYPE jvm_gc_pause_seconds_max gauge
+jvm_gc_pause_seconds_max{action="end of major GC",cause="Metadata GC Threshold",} 0.0
+jvm_gc_pause_seconds_max{action="end of major GC",cause="Allocation Failure",} 0.0
+jvm_gc_pause_seconds_max{action="end of minor GC",cause="Allocation Failure",} 0.0
+jvm_gc_pause_seconds_max{action="end of minor GC",cause="GCLocker Initiated GC",} 0.0
+# HELP hikaricp_connections_min Min connections
+# TYPE hikaricp_connections_min gauge
+hikaricp_connections_min{pool="HikariPool-1",} 10.0
+# HELP process_files_max_files The maximum file descriptor count
+# TYPE process_files_max_files gauge
+process_files_max_files 1048576.0
+# HELP hikaricp_connections_active Active connections
+# TYPE hikaricp_connections_active gauge
+hikaricp_connections_active{pool="HikariPool-1",} 0.0
+# HELP jvm_threads_live_threads The current number of live threads including both daemon and non-daemon threads
+# TYPE jvm_threads_live_threads gauge
+jvm_threads_live_threads 46.0
+# HELP process_uptime_seconds The uptime of the Java virtual machine
+# TYPE process_uptime_seconds gauge
+process_uptime_seconds 510671.853
+# HELP hikaricp_connections_usage_seconds Connection usage time
+# TYPE hikaricp_connections_usage_seconds summary
+hikaricp_connections_usage_seconds_count{pool="HikariPool-1",} 298222.0
+hikaricp_connections_usage_seconds_sum{pool="HikariPool-1",} 125489.766
+# HELP hikaricp_connections_usage_seconds_max Connection usage time
+# TYPE hikaricp_connections_usage_seconds_max gauge
+hikaricp_connections_usage_seconds_max{pool="HikariPool-1",} 0.878
+# HELP pap_policy_deployments_total
+# TYPE pap_policy_deployments_total counter
+pap_policy_deployments_total{operation="deploy",status="FAILURE",} 0.0
+pap_policy_deployments_total{operation="undeploy",status="SUCCESS",} 13971.0
+pap_policy_deployments_total{operation="deploy",status="SUCCESS",} 14028.0
+pap_policy_deployments_total{operation="undeploy",status="FAILURE",} 0.0
+# HELP jvm_buffer_memory_used_bytes An estimate of the memory that the Java virtual machine is using for this buffer pool
+# TYPE jvm_buffer_memory_used_bytes gauge
+jvm_buffer_memory_used_bytes{id="mapped",} 0.0
+jvm_buffer_memory_used_bytes{id="direct",} 169210.0
+# HELP hikaricp_connections_timeout_total Connection timeout total count
+# TYPE hikaricp_connections_timeout_total counter
+hikaricp_connections_timeout_total{pool="HikariPool-1",} 0.0
+# HELP jvm_classes_loaded_classes The number of classes that are currently loaded in the Java virtual machine
+# TYPE jvm_classes_loaded_classes gauge
+jvm_classes_loaded_classes 18727.0
+# HELP jdbc_connections_idle Number of established but idle connections.
+# TYPE jdbc_connections_idle gauge
+jdbc_connections_idle{name="dataSource",} 10.0
+# HELP tomcat_sessions_active_current_sessions
+# TYPE tomcat_sessions_active_current_sessions gauge
+tomcat_sessions_active_current_sessions 60.0
+# HELP jvm_gc_max_data_size_bytes Max size of long-lived heap memory pool
+# TYPE jvm_gc_max_data_size_bytes gauge
+jvm_gc_max_data_size_bytes 2.803236864E9
diff --git a/docs/development/devtools/pap-s3p-results/pap_metrics_before_72h.txt b/docs/development/devtools/pap-s3p-results/pap_metrics_before_72h.txt
new file mode 100644
index 00000000..047ccf99
--- /dev/null
+++ b/docs/development/devtools/pap-s3p-results/pap_metrics_before_72h.txt
@@ -0,0 +1,225 @@
+# HELP spring_data_repository_invocations_seconds_max
+# TYPE spring_data_repository_invocations_seconds_max gauge
+spring_data_repository_invocations_seconds_max{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 0.008146982
+spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 0.777049798
+spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.569583402
+# HELP spring_data_repository_invocations_seconds
+# TYPE spring_data_repository_invocations_seconds summary
+spring_data_repository_invocations_seconds_count{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 1.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 1.257790017
+spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 23.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 0.671469491
+spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 30.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 8.481980058
+spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 4.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 1.939575991
+# HELP hikaricp_connections_max Max connections
+# TYPE hikaricp_connections_max gauge
+hikaricp_connections_max{pool="HikariPool-1",} 10.0
+# HELP tomcat_sessions_created_sessions_total
+# TYPE tomcat_sessions_created_sessions_total counter
+tomcat_sessions_created_sessions_total 16.0
+# HELP process_files_open_files The open file descriptor count
+# TYPE process_files_open_files gauge
+process_files_open_files 34.0
+# HELP hikaricp_connections_active Active connections
+# TYPE hikaricp_connections_active gauge
+hikaricp_connections_active{pool="HikariPool-1",} 0.0
+# HELP jvm_classes_unloaded_classes_total The total number of classes unloaded since the Java virtual machine has started execution
+# TYPE jvm_classes_unloaded_classes_total counter
+jvm_classes_unloaded_classes_total 2.0
+# HELP system_cpu_usage The "recent cpu usage" for the whole system
+# TYPE system_cpu_usage gauge
+system_cpu_usage 0.03765922097101717
+# HELP jvm_classes_loaded_classes The number of classes that are currently loaded in the Java virtual machine
+# TYPE jvm_classes_loaded_classes gauge
+jvm_classes_loaded_classes 18022.0
+# HELP process_uptime_seconds The uptime of the Java virtual machine
+# TYPE process_uptime_seconds gauge
+process_uptime_seconds 570.627
+# HELP jvm_memory_committed_bytes The amount of memory in bytes that is committed for the Java virtual machine to use
+# TYPE jvm_memory_committed_bytes gauge
+jvm_memory_committed_bytes{area="heap",id="Tenured Gen",} 1.76160768E8
+jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 2.6017792E7
+jvm_memory_committed_bytes{area="heap",id="Eden Space",} 7.0582272E7
+jvm_memory_committed_bytes{area="nonheap",id="Metaspace",} 1.04054784E8
+jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 2555904.0
+jvm_memory_committed_bytes{area="heap",id="Survivor Space",} 8781824.0
+jvm_memory_committed_bytes{area="nonheap",id="Compressed Class Space",} 1.4286848E7
+jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 6881280.0
+# HELP jvm_gc_live_data_size_bytes Size of long-lived heap memory pool after reclamation
+# TYPE jvm_gc_live_data_size_bytes gauge
+jvm_gc_live_data_size_bytes 4.13206E7
+# HELP jdbc_connections_min Minimum number of idle connections in the pool.
+# TYPE jdbc_connections_min gauge
+jdbc_connections_min{name="dataSource",} 10.0
+# HELP process_start_time_seconds Start time of the process since unix epoch.
+# TYPE process_start_time_seconds gauge
+process_start_time_seconds 1.649787267607E9
+# HELP jdbc_connections_idle Number of established but idle connections.
+# TYPE jdbc_connections_idle gauge
+jdbc_connections_idle{name="dataSource",} 10.0
+# HELP jvm_gc_memory_promoted_bytes_total Count of positive increases in the size of the old generation memory pool before GC to after GC
+# TYPE jvm_gc_memory_promoted_bytes_total counter
+jvm_gc_memory_promoted_bytes_total 2.7154576E7
+# HELP hikaricp_connections_creation_seconds_max Connection creation time
+# TYPE hikaricp_connections_creation_seconds_max gauge
+hikaricp_connections_creation_seconds_max{pool="HikariPool-1",} 0.0
+# HELP hikaricp_connections_creation_seconds Connection creation time
+# TYPE hikaricp_connections_creation_seconds summary
+hikaricp_connections_creation_seconds_count{pool="HikariPool-1",} 0.0
+hikaricp_connections_creation_seconds_sum{pool="HikariPool-1",} 0.0
+# HELP tomcat_sessions_active_current_sessions
+# TYPE tomcat_sessions_active_current_sessions gauge
+tomcat_sessions_active_current_sessions 16.0
+# HELP jvm_threads_daemon_threads The current number of live daemon threads
+# TYPE jvm_threads_daemon_threads gauge
+jvm_threads_daemon_threads 34.0
+# HELP jvm_memory_used_bytes The amount of used memory
+# TYPE jvm_memory_used_bytes gauge
+jvm_memory_used_bytes{area="heap",id="Tenured Gen",} 4.13206E7
+jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 2.6013824E7
+jvm_memory_used_bytes{area="heap",id="Eden Space",} 2853928.0
+jvm_memory_used_bytes{area="nonheap",id="Metaspace",} 9.9649768E7
+jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 1364736.0
+jvm_memory_used_bytes{area="heap",id="Survivor Space",} 1036120.0
+jvm_memory_used_bytes{area="nonheap",id="Compressed Class Space",} 1.2613992E7
+jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 6865408.0
+# HELP hikaricp_connections_timeout_total Connection timeout total count
+# TYPE hikaricp_connections_timeout_total counter
+hikaricp_connections_timeout_total{pool="HikariPool-1",} 0.0
+# HELP jvm_memory_max_bytes The maximum amount of memory in bytes that can be used for memory management
+# TYPE jvm_memory_max_bytes gauge
+jvm_memory_max_bytes{area="heap",id="Tenured Gen",} 2.803236864E9
+jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 1.22912768E8
+jvm_memory_max_bytes{area="heap",id="Eden Space",} 1.12132096E9
+jvm_memory_max_bytes{area="nonheap",id="Metaspace",} -1.0
+jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 5828608.0
+jvm_memory_max_bytes{area="heap",id="Survivor Space",} 1.40115968E8
+jvm_memory_max_bytes{area="nonheap",id="Compressed Class Space",} 1.073741824E9
+jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 1.22916864E8
+# HELP tomcat_sessions_active_max_sessions
+# TYPE tomcat_sessions_active_max_sessions gauge
+tomcat_sessions_active_max_sessions 16.0
+# HELP tomcat_sessions_alive_max_seconds
+# TYPE tomcat_sessions_alive_max_seconds gauge
+tomcat_sessions_alive_max_seconds 0.0
+# HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset
+# TYPE jvm_threads_peak_threads gauge
+jvm_threads_peak_threads 43.0
+# HELP hikaricp_connections_acquire_seconds Connection acquire time
+# TYPE hikaricp_connections_acquire_seconds summary
+hikaricp_connections_acquire_seconds_count{pool="HikariPool-1",} 57.0
+hikaricp_connections_acquire_seconds_sum{pool="HikariPool-1",} 0.103535665
+# HELP hikaricp_connections_acquire_seconds_max Connection acquire time
+# TYPE hikaricp_connections_acquire_seconds_max gauge
+hikaricp_connections_acquire_seconds_max{pool="HikariPool-1",} 0.004207252
+# HELP hikaricp_connections_usage_seconds Connection usage time
+# TYPE hikaricp_connections_usage_seconds summary
+hikaricp_connections_usage_seconds_count{pool="HikariPool-1",} 57.0
+hikaricp_connections_usage_seconds_sum{pool="HikariPool-1",} 13.297
+# HELP hikaricp_connections_usage_seconds_max Connection usage time
+# TYPE hikaricp_connections_usage_seconds_max gauge
+hikaricp_connections_usage_seconds_max{pool="HikariPool-1",} 0.836
+# HELP http_server_requests_seconds
+# TYPE http_server_requests_seconds summary
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 9.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 1.93944618
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 3.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 1.365007581
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 4.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 2.636914428
+# HELP http_server_requests_seconds_max
+# TYPE http_server_requests_seconds_max gauge
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 0.213989915
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 0.0
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 0.714076223
+# HELP process_cpu_usage The "recent cpu usage" for the Java Virtual Machine process
+# TYPE process_cpu_usage gauge
+process_cpu_usage 0.002436413304293255
+# HELP hikaricp_connections_idle Idle connections
+# TYPE hikaricp_connections_idle gauge
+hikaricp_connections_idle{pool="HikariPool-1",} 10.0
+# HELP tomcat_sessions_rejected_sessions_total
+# TYPE tomcat_sessions_rejected_sessions_total counter
+tomcat_sessions_rejected_sessions_total 0.0
+# HELP jvm_gc_memory_allocated_bytes_total Incremented for an increase in the size of the (young) heap memory pool after one GC to before the next
+# TYPE jvm_gc_memory_allocated_bytes_total counter
+jvm_gc_memory_allocated_bytes_total 1.401269088E9
+# HELP tomcat_sessions_expired_sessions_total
+# TYPE tomcat_sessions_expired_sessions_total counter
+tomcat_sessions_expired_sessions_total 0.0
+# HELP pap_policy_deployments_total
+# TYPE pap_policy_deployments_total counter
+pap_policy_deployments_total{operation="deploy",status="FAILURE",} 0.0
+pap_policy_deployments_total{operation="undeploy",status="SUCCESS",} 0.0
+pap_policy_deployments_total{operation="deploy",status="SUCCESS",} 0.0
+pap_policy_deployments_total{operation="undeploy",status="FAILURE",} 0.0
+# HELP hikaricp_connections_pending Pending threads
+# TYPE hikaricp_connections_pending gauge
+hikaricp_connections_pending{pool="HikariPool-1",} 0.0
+# HELP process_files_max_files The maximum file descriptor count
+# TYPE process_files_max_files gauge
+process_files_max_files 1048576.0
+# HELP jvm_buffer_memory_used_bytes An estimate of the memory that the Java virtual machine is using for this buffer pool
+# TYPE jvm_buffer_memory_used_bytes gauge
+jvm_buffer_memory_used_bytes{id="mapped",} 0.0
+jvm_buffer_memory_used_bytes{id="direct",} 169210.0
+# HELP jvm_gc_pause_seconds Time spent in GC pause
+# TYPE jvm_gc_pause_seconds summary
+jvm_gc_pause_seconds_count{action="end of major GC",cause="Metadata GC Threshold",} 2.0
+jvm_gc_pause_seconds_sum{action="end of major GC",cause="Metadata GC Threshold",} 0.472
+jvm_gc_pause_seconds_count{action="end of minor GC",cause="Allocation Failure",} 19.0
+jvm_gc_pause_seconds_sum{action="end of minor GC",cause="Allocation Failure",} 0.507
+# HELP jvm_gc_pause_seconds_max Time spent in GC pause
+# TYPE jvm_gc_pause_seconds_max gauge
+jvm_gc_pause_seconds_max{action="end of major GC",cause="Metadata GC Threshold",} 0.0
+jvm_gc_pause_seconds_max{action="end of minor GC",cause="Allocation Failure",} 0.029
+# HELP jvm_threads_live_threads The current number of live threads including both daemon and non-daemon threads
+# TYPE jvm_threads_live_threads gauge
+jvm_threads_live_threads 43.0
+# HELP hikaricp_connections_min Min connections
+# TYPE hikaricp_connections_min gauge
+hikaricp_connections_min{pool="HikariPool-1",} 10.0
+# HELP jdbc_connections_max Maximum number of active connections that can be allocated at the same time.
+# TYPE jdbc_connections_max gauge
+jdbc_connections_max{name="dataSource",} 10.0
+# HELP jvm_buffer_total_capacity_bytes An estimate of the total capacity of the buffers in this pool
+# TYPE jvm_buffer_total_capacity_bytes gauge
+jvm_buffer_total_capacity_bytes{id="mapped",} 0.0
+jvm_buffer_total_capacity_bytes{id="direct",} 169210.0
+# HELP system_cpu_count The number of processors available to the Java virtual machine
+# TYPE system_cpu_count gauge
+system_cpu_count 1.0
+# HELP hikaricp_connections Total connections
+# TYPE hikaricp_connections gauge
+hikaricp_connections{pool="HikariPool-1",} 10.0
+# HELP jdbc_connections_active Current number of active connections that have been allocated from the data source.
+# TYPE jdbc_connections_active gauge
+jdbc_connections_active{name="dataSource",} 0.0
+# HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time
+# TYPE system_load_average_1m gauge
+system_load_average_1m 0.36
+# HELP jvm_gc_max_data_size_bytes Max size of long-lived heap memory pool
+# TYPE jvm_gc_max_data_size_bytes gauge
+jvm_gc_max_data_size_bytes 2.803236864E9
+# HELP jvm_threads_states_threads The current number of threads having NEW state
+# TYPE jvm_threads_states_threads gauge
+jvm_threads_states_threads{state="runnable",} 9.0
+jvm_threads_states_threads{state="blocked",} 0.0
+jvm_threads_states_threads{state="waiting",} 26.0
+jvm_threads_states_threads{state="timed-waiting",} 8.0
+jvm_threads_states_threads{state="new",} 0.0
+jvm_threads_states_threads{state="terminated",} 0.0
+# HELP jvm_buffer_count_buffers An estimate of the number of buffers in the pool
+# TYPE jvm_buffer_count_buffers gauge
+jvm_buffer_count_buffers{id="mapped",} 0.0
+jvm_buffer_count_buffers{id="direct",} 10.0
+# HELP logback_events_total Number of error level events that made it to the logs
+# TYPE logback_events_total counter
+logback_events_total{level="warn",} 22.0
+logback_events_total{level="debug",} 0.0
+logback_events_total{level="error",} 0.0
+logback_events_total{level="trace",} 0.0
+logback_events_total{level="info",} 385.0
diff --git a/docs/development/devtools/pap-s3p-results/pap_performance_jmeter_results.jpg b/docs/development/devtools/pap-s3p-results/pap_performance_jmeter_results.jpg
new file mode 100644
index 00000000..eae3ac0a
--- /dev/null
+++ b/docs/development/devtools/pap-s3p-results/pap_performance_jmeter_results.jpg
Binary files differ
diff --git a/docs/development/devtools/pap-s3p-results/pap_stability_jmeter_results.jpg b/docs/development/devtools/pap-s3p-results/pap_stability_jmeter_results.jpg
new file mode 100644
index 00000000..46401519
--- /dev/null
+++ b/docs/development/devtools/pap-s3p-results/pap_stability_jmeter_results.jpg
Binary files differ
diff --git a/docs/development/devtools/pap-s3p-results/pap_top_after_72h.jpg b/docs/development/devtools/pap-s3p-results/pap_top_after_72h.jpg
new file mode 100644
index 00000000..ecab404c
--- /dev/null
+++ b/docs/development/devtools/pap-s3p-results/pap_top_after_72h.jpg
Binary files differ
diff --git a/docs/development/devtools/pap-s3p-results/pap_top_before_72h.jpg b/docs/development/devtools/pap-s3p-results/pap_top_before_72h.jpg
new file mode 100644
index 00000000..ce2208f9
--- /dev/null
+++ b/docs/development/devtools/pap-s3p-results/pap_top_before_72h.jpg
Binary files differ
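Because the two snapshots above use identical metric names, before/after deltas (for example the deployment counters the stability test asserts on) can be pulled out with plain text tools. A small sketch using only the file names added in this change:

.. code-block:: bash

   # Print the deploy/undeploy counters from both snapshots side by side.
   for f in pap_metrics_before_72h.txt pap_metrics_after_72h.txt; do
       echo "== ${f} =="
       grep '^pap_policy_deployments_total' "${f}"
   done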
diff --git a/docs/development/devtools/pap-s3p.rst b/docs/development/devtools/pap-s3p.rst
index df9d5c7c..23a2d353 100644
--- a/docs/development/devtools/pap-s3p.rst
+++ b/docs/development/devtools/pap-s3p.rst
@@ -18,7 +18,7 @@ Setup Details
 - Policy-PAP along with all policy components deployed as part of a full ONAP OOM deployment.
 - A second instance of APEX-PDP is spun up in the setup. Update the configuration file (OnapPfConfig.json) such that the PDP can register to the new group created by PAP in the tests.
-- Both tests were run via jMeter, which was installed on a separate VM.
+- Both tests were run via jMeter.
 
 Stability Test of PAP
 +++++++++++++++++++++
@@ -27,33 +27,58 @@ Test Plan
 ---------
 The 72 hours stability test ran the following steps sequentially in a single threaded loop.
 
-- **Create Policy defaultDomain** - creates an operational policy using policy/api component
-- **Create Policy sampleDomain** - creates an operational policy using policy/api component
+Setup Phase (steps running only once)
+"""""""""""""""""""""""""""""""""""""
+
+- **Create Policy for defaultGroup** - creates an operational policy using policy/api component
+- **Create NodeTemplate metadata for sampleGroup policy** - creates a node template containing metadata using policy/api component
+- **Create Policy for sampleGroup** - creates an operational policy that refers to the metadata created above using policy/api component
+- **Change defaultGroup state to ACTIVE** - changes the state of defaultGroup PdpGroup to ACTIVE
+- **Create/Update PDP Group** - creates a new PDPGroup named sampleGroup (see the request sketch after this list).
+  A second instance of the PDP that is already spun up gets registered to this new group
+- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that both PdpGroups are in ACTIVE state.
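For the "Create/Update PDP Group" step, the metrics in this change show the batch endpoint being exercised (``POST /policy/pap/v1/pdps/groups/batch``). A hedged sketch of the request shape; the host, credentials, and payload fields are assumptions based on the usual PdpGroup schema, not the exact body used by the JMeter plan:

.. code-block:: bash

   # Create/update the sampleGroup PdpGroup; host/credentials are assumptions.
   curl -sk -u "${PAP_USER}:${PAP_PASS}" -X POST \
        -H 'Content-Type: application/json' \
        -d '{"groups":[{"name":"sampleGroup","pdpGroupState":"ACTIVE",
             "pdpSubgroups":[{"pdpType":"apex","desiredInstanceCount":1,
             "policies":[],"supportedPolicyTypes":[{"name":"*","version":"1.0.0"}]}]}]}' \
        "https://${PAP_HOST}:6969/policy/pap/v1/pdps/groups/batch"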
+
+PAP Test Flow (steps running in a loop for 72 hours)
+""""""""""""""""""""""""""""""""""""""""""""""""""""
+
 - **Check Health** - checks the health status of pap
-- **Check Statistics** - checks the statistics of pap
-- **Change state to ACTIVE** - changes the state of defaultGroup PdpGroup to ACTIVE
-- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that PdpGroup is in the ACTIVE state.
-- **Deploy defaultDomain Policy** - deploys the policy defaultDomain in the existing PdpGroup
-- **Check status of defaultGroup** - checks the status of defaultGroup PdpGroup with the defaultDomain policy 1.0.0.
+- **PAP Metrics** - fetches Prometheus metrics before the deployment/undeployment cycle and
+  saves different counters, such as the deploy/undeploy success/failure counters at API and engine level.
+- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that both PdpGroups are in the ACTIVE state.
+- **Deploy Policy for defaultGroup** - deploys the policy defaultDomain to defaultGroup
+- **Check status of defaultGroup policy** - checks the status of defaultGroup PdpGroup with the defaultDomain policy 1.0.0.
 - **Check PdpGroup Audit defaultGroup** - checks the audit information for the defaultGroup PdpGroup.
 - **Check PdpGroup Audit Policy (defaultGroup)** - checks the audit information for the defaultGroup PdpGroup with the defaultDomain policy 1.0.0.
-- **Create/Update PDP Group** - creates a new PDPGroup named sampleGroup.
 - **Check PdpGroup Query** - makes a PdpGroup query request and verifies that 2 PdpGroups are in the ACTIVE state and defaultGroup has a policy deployed on it.
-- **Deployment Update sampleDomain** - deploys the policy sampleDomain in sampleGroup PdpGroup using pap api
+- **Deployment Update for sampleGroup policy** - deploys the policy sampleDomain in sampleGroup PdpGroup using pap api
 - **Check status of sampleGroup** - checks the status of the sampleGroup PdpGroup.
 - **Check status of PdpGroups** - checks the status of both PdpGroups.
 - **Check PdpGroup Query** - makes a PdpGroup query request and verifies that the defaultGroup has a policy defaultDomain deployed on it and sampleGroup has policy sampleDomain deployed on it.
 - **Check Audit** - checks the audit information for all PdpGroups.
 - **Check Consolidated Health** - checks the consolidated health status of all policy components.
 - **Check Deployed Policies** - checks for all the deployed policies using pap api.
-- **Undeploy Policy sampleDomain** - undeploys the policy sampleDomain from sampleGroup PdpGroup using pap api
-- **Undeploy Default Policy** - undeploys the policy defaultDomain from PdpGroup
+- **Undeploy policy in sampleGroup** - undeploys the policy sampleDomain from sampleGroup PdpGroup using pap api
+- **Undeploy policy in defaultGroup** - undeploys the policy defaultDomain from PdpGroup
+- **Check status of policies** - checks the status of all policies and makes sure both policies are undeployed
+- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that the PdpGroup is in the PASSIVE state.
+- **PAP Metrics after deployments** - fetches Prometheus metrics after the deployment/undeployment cycle and
+  saves the new counter values, such as the deploy/undeploy success/failure counters at API and engine level,
+  and checks that the deploySuccess and undeploySuccess counters are increased by 2.
+
+.. Note::
+  To avoid putting a large Constant Timer value after every deployment/undeployment, the status API is polled until the
+  deployment/undeployment completes successfully, or until a timeout. This makes sure that the operation completed
+  successfully and that the PDPs get enough time to respond. Otherwise, before the deployment is marked successful by PAP,
+  an undeployment could be triggered as part of other tests, and the operation's corresponding Prometheus counter at
+  engine level would not get updated.
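A minimal sketch of the polling idea described in the note; the endpoint is the per-policy status API visible in the metrics above, while the host, credentials, placeholder variables, and the ``"deploy":true`` response field are assumptions:

.. code-block:: bash

   # Poll the status API until the policy reports as deployed,
   # or give up after ~5 minutes (60 attempts x 5 s).
   for attempt in $(seq 1 60); do
       curl -sk -u "${PAP_USER}:${PAP_PASS}" \
            "https://${PAP_HOST}:6969/policy/pap/v1/policies/status/${GROUP}/${POLICY}/${VERSION}" \
           | grep -q '"deploy":true' && break   # response field name is assumed
       sleep 5
   done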
+
+Teardown Phase (steps running only once after PAP Test Flow is completed)
+"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
+
 - **Change state to PASSIVE(sampleGroup)** - changes the state of sampleGroup PdpGroup to PASSIVE
-- **Delete PdpGroup SampleGroup** - delete the sampleGroup PdpGroup using pap api
+- **Delete PdpGroup sampleGroup** - deletes the sampleGroup PdpGroup using pap api
 - **Change State to PASSIVE(defaultGroup)** - changes the state of defaultGroup PdpGroup to PASSIVE
-- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that PdpGroup is in the PASSIVE state.
-- **Delete Policy defaultDomain** - deletes the operational policy defaultDomain using policy/api component
-- **Delete Policy sampleDomain** - deletes the operational policy sampleDomain using policy/api component
+- **Delete policy created for defaultGroup** - deletes the operational policy defaultDomain using policy/api component
+- **Delete Policy created for sampleGroup** - deletes the operational policy sampleDomain using policy/api component
+- **Delete NodeTemplate metadata for sampleGroup policy** - deletes the node template containing metadata for the sampleGroup policy
 
 The following steps can be used to configure the parameters of test plan.
 
@@ -74,61 +99,49 @@ The test was run in the background via "nohup", to prevent it from being interru
 
 .. code-block:: bash
 
-    nohup ./jMeter/apache-jmeter-5.3/bin/jmeter.sh -n -t stability.jmx -l testresults.jtl
+    nohup ./apache-jmeter-5.4.1/bin/jmeter.sh -n -t stability.jmx -l stabilityTestResults.jtl
 
 Test Results
 ------------
 
 **Summary**
 
-Stability test plan was triggered for 72 hours.
-
-.. Note::
+Stability test plan was triggered for 72 hours. There were no failures during the 72-hour test.
 
-  .. container:: paragraph
-
-    As part of the OOM deployment, another APEX-PDP pod is spun up with the pdpGroup name specified as 'sampleGroup'.
-    After creating the new group called 'sampleGroup' as part of the test, a time delay of 2 minutes is added,
-    so that the pdp is registered to the newly created group.
-    This has resulted in a spike in the Average time taken per request. But this is required to make proper assertions,
-    and also for the consolidated health check.
 
 **Test Statistics**
 
 ======================= ================= ================== ==================================
 **Total # of requests** **Success %**     **Error %**        **Average time taken per request**
 ======================= ================= ================== ==================================
-34053                   99.14 %           0.86 %             1051 ms
+140980                  100 %             0.00 %             717 ms
 ======================= ================= ================== ==================================
 
 .. Note::
 
-  .. container:: paragraph
-
-    There were some failures during the 72 hour stability tests. These tests were caused by the apex-pdp pods restarting
-    intermitently due to limited resources in our testing environment. The second apex instance was configured as a
-    replica of the apex-pdp pod and therefore, when it restarted, registered to the "defaultGroup" as the configuration
-    was taken from the original apex-pdp pod. This meant a manual change whenever the pods restarted to make apex-pdp-"2"
-    register with the "sampleGroup".
-    When both pods were running as expected, no errors relating to the pap functionality were observed. These errors are
-    strictly caused by the environment setup and not by pap.
+  There were no failures during the 72-hour test.
 
 **JMeter Screenshot**
 
-.. image:: pap-s3p-results/pap-s3p-stability-result-jmeter.png
+.. image:: pap-s3p-results/pap_stability_jmeter_results.jpg
 
 **Memory and CPU usage**
 
-The memory and CPU usage can be monitored by running "top" command on the PAP pod. A snapshot is taken before and after test execution to monitor the changes in resource utilization.
+The memory and CPU usage can be monitored by running the "top" command in the PAP pod.
+A snapshot is taken before and after test execution to monitor the changes in resource utilization.
+Prometheus metrics are also collected before and after the test execution.
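The "top" snapshots referenced below can be captured non-interactively; a sketch assuming a Kubernetes OOM deployment, where the namespace and the ``deploy/policy-pap`` selector are assumptions:

.. code-block:: bash

   # One-shot, batch-mode top snapshots taken around the 72h run.
   kubectl -n onap exec deploy/policy-pap -- top -b -n 1 > pap_top_before_72h.txt
   # ... the 72 hour stability run happens here ...
   kubectl -n onap exec deploy/policy-pap -- top -b -n 1 > pap_top_after_72h.txt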
 Memory and CPU usage before test execution:
 
-.. image:: pap-s3p-results/pap-s3p-mem-bt.png
+.. image:: pap-s3p-results/pap_top_before_72h.jpg
+
+:download:`Prometheus metrics before 72h test <pap-s3p-results/pap_metrics_before_72h.txt>`
 
 Memory and CPU usage after test execution:
 
-.. image:: pap-s3p-results/pap-s3p-mem-at.png
+.. image:: pap-s3p-results/pap_top_after_72h.jpg
+
+:download:`Prometheus metrics after 72h test <pap-s3p-results/pap_metrics_after_72h.txt>`
 
 Performance Test of PAP
 ++++++++++++++++++++++++
@@ -149,10 +162,13 @@ Test Plan
 
 Performance test plan is the same as the stability test plan above except for the few differences listed below.
 
-- Increase the number of threads up to 5 (simulating 5 users' behaviours at the same time).
+- Increase the number of threads up to 10 (simulating 10 users' behaviours at the same time).
 - Reduce the test time to 2 hours.
-- Usage of counters to create different groups by the 'Create/Update PDP Group' test case.
-- Removed the delay to wait for the new PDP to be registered. Also removed the corresponding assertions where the Pdp instance registration to the newly created group is validated.
+- Usage of counters (simulating each user) to create different pdpGroups, update their state and later delete them.
+- Removed the tests that deploy policies to newly created groups, as this would need a larger setup with multiple PDPs
+  registered to each group, and the time needed for the registration process would also slow down the performance test.
+- Usage of counters (simulating each user) to create different drools policies and deploy them to defaultGroup.
+  In the test, a thread count of 10 is used, resulting in 10 different drools policies getting deployed and undeployed
+  continuously for 2 hours. Other standard operations, like checking the deployment status of policies, checking the
+  metrics, health etc., remain.
 
 Run Test
 --------
 
@@ -161,14 +177,7 @@ Running/Triggering the performance test will be the same as the stability test.
 
 .. code-block:: bash
 
-    nohup ./jMeter/apache-jmeter-5.3/bin/jmeter.sh -n -t performance.jmx -l perftestresults.jtl
-
-Once the test execution is completed, execute the below script to get the statistics:
-
-.. code-block:: bash
-
-    $ cd /home/ubuntu/pap/testsuites/performance/src/main/resources/testplans
-    $ ./results.sh /home/ubuntu/pap_perf/resultTree.log
+    nohup ./apache-jmeter-5.4.1/bin/jmeter.sh -n -t performance.jmx -l performanceTestResults.jtl
 
 Test Results
 ------------
 
 Test results are shown as below.
 
 ======================= ================= ================== ==================================
 **Total # of requests** **Success %**     **Error %**        **Average time taken per request**
 ======================= ================= ================== ==================================
-24092                   100 %             0.00 %             2467 ms
+24276                   100 %             0.00 %             2556 ms
 ======================= ================= ================== ==================================
 
 **JMeter Screenshot**
 
-.. image:: pap-s3p-results/pap-s3p-performance-result-jmeter.png
\ No newline at end of file
+.. image:: pap-s3p-results/pap_performance_jmeter_results.jpg
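The ``.jtl`` files recorded by the stability and performance runs above can also be post-processed into JMeter's standard HTML report dashboard. This uses stock JMeter CLI options (``-g``/``-o``), not anything added by this change:

.. code-block:: bash

   # Generate an HTML report dashboard from a recorded results file.
   ./apache-jmeter-5.4.1/bin/jmeter.sh -g stabilityTestResults.jtl -o stability-report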