Diffstat (limited to 'docs')
-rw-r--r-- | docs/clamp/acm/design-impl/clamp-runtime-acm.rst | 32 |
-rw-r--r-- | docs/development/devtools/devtools.rst | 4 |
-rw-r--r-- | docs/development/devtools/smoke/api-smoke.rst | 3 |
-rw-r--r-- | docs/development/devtools/smoke/db-migrator-smoke.rst | 422 |
-rw-r--r-- | docs/development/devtools/smoke/pap-smoke.rst | 8 |
-rw-r--r-- | docs/development/devtools/smoke/xacml-smoke.rst | 17 |
-rw-r--r-- | docs/development/devtools/testing/csit.rst | 29 |
-rw-r--r-- | docs/development/devtools/testing/s3p/run-s3p.rst | 45 |
-rw-r--r-- | docs/drools/pdpdEngine.rst | 2 |
9 files changed, 112 insertions, 450 deletions
diff --git a/docs/clamp/acm/design-impl/clamp-runtime-acm.rst b/docs/clamp/acm/design-impl/clamp-runtime-acm.rst index 46d4a85f..a3c22e69 100644 --- a/docs/clamp/acm/design-impl/clamp-runtime-acm.rst +++ b/docs/clamp/acm/design-impl/clamp-runtime-acm.rst @@ -430,3 +430,35 @@ YAML format is a standard for Automation Composition Type Definition. For the co text/plain ++++++++++ Text format is used by Prometheus. For the conversion from Object to String will be used **StringHttpMessageConverter**. + +JSON log format +*************** +ACM-runtime supports logging in JSON format. Below is an example of a logback appender configuration that enables it. + +.. code-block:: xml + :caption: Part of logback configuration + :linenos: + + <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"> + <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder"> + <layout class="org.onap.policy.clamp.acm.runtime.config.LoggingConsoleLayout"> + <timestampFormat>YYYY-MM-DDThh:mm:ss.sss+/-hh:mm</timestampFormat> + <timestampFormatTimezoneId>Etc/UTC</timestampFormatTimezoneId> + <staticParameters>service_id=policy-acm|application_id=policy-acm</staticParameters> + </layout> + </encoder> + </appender> + +LayoutWrappingEncoder implements the encoder interface and wraps the Java class LoggingConsoleLayout as the layout to which it delegates the work of transforming an event into a JSON string. +Parameters for LoggingConsoleLayout: + +- *timestampFormat*: Timestamp Format +- *timestampFormatTimezoneId*: Time Zone used in the Timestamp Format +- *staticParameters*: List of parameters to add into the log, separated by a "|" + +Below is an example of the result: + +.. code-block:: json + + {"severity":"INFO","extra_data":{"logger":"network","thread":"KAFKA-source-policy-acruntime-participant"},"service_id":"policy-acm","message":"[IN|KAFKA|policy-acruntime-participant]\n{\"state\":\"ON_LINE\",\"participantDefinitionUpdates\":[],\"automationCompositionInfoList\":[],\"participantSupportedElementType\":[{\"id\":\"f88c4463-f012-42e1-8927-12b552ecf380\",\"typeName\":\"org.onap.policy.clamp.acm.K8SMicroserviceAutomationCompositionElement\",\"typeVersion\":\"1.0.0\"}],\"messageType\":\"PARTICIPANT_STATUS\",\"messageId\":\"d3dc2f86-4253-4520-bbac-97c4c04547ad\",\"timestamp\":\"2025-01-21T16:14:27.087474035Z\",\"participantId\":\"101c62b3-8918-41b9-a747-d21eb79c6c93\",\"replicaId\":\"c1ba61d2-1dbd-44e4-80bd-135526c0615f\"}","application_id":"policy-acm","timestamp":"2025-01-21T16:14:27.114851006Z"} + {"severity":"INFO","extra_data":{"logger":"network","thread":"KAFKA-source-policy-acruntime-participant"},"service_id":"policy-acm","message":"[IN|KAFKA|policy-acruntime-participant]\n{\"state\":\"ON_LINE\",\"participantDefinitionUpdates\":[],\"automationCompositionInfoList\":[],\"participantSupportedElementType\":[{\"id\":\"4609a119-a8c7-41ee-96d1-6b49c3afaf2c\",\"typeName\":\"org.onap.policy.clamp.acm.HttpAutomationCompositionElement\",\"typeVersion\":\"1.0.0\"}],\"messageType\":\"PARTICIPANT_STATUS\",\"messageId\":\"ea29ab01-665d-4693-ab17-3a72491b5c71\",\"timestamp\":\"2025-01-21T16:14:27.117716317Z\",\"participantId\":\"101c62b3-8918-41b9-a747-d21eb79c6c91\",\"replicaId\":\"5e4f9690-742d-4190-a439-ebb4c820a010\"}","application_id":"policy-acm","timestamp":"2025-01-21T16:14:27.144379028Z"} diff --git a/docs/development/devtools/devtools.rst b/docs/development/devtools/devtools.rst index de0a6259..b0b243e4 100644 --- a/docs/development/devtools/devtools.rst +++ b/docs/development/devtools/devtools.rst @@ -239,7 +239,7 @@ Running
the API component standalone ++++++++++++++++++++++++++++++++++++ Assuming you have successfully built the codebase using the instructions above. The only requirement for the API -component to run is a running MariaDb/Postgres database instance. The easiest way to do this is to run the docker +component to run is a running Postgres database instance. The easiest way to do this is to run the docker image, please see the official documentation for the latest information on doing so. Once the database is up and running, a configuration file must be provided to the api in order for it to know how to connect to the database. You can locate the default configuration file in the packaging of the api component: @@ -260,7 +260,7 @@ An example of running the api using a docker compose script is located in the Po Running the PAP component standalone ++++++++++++++++++++++++++++++++++++ -Once you have successfully built the PAP codebase, a running MariaDb/Postgres database and Kafka instance will also be +Once you have successfully built the PAP codebase, a running Postgres database and Kafka instance will also be required to start up the application. To start database and Kafka, check official documentation on how to run an instance of each. After database and Kafka are up and running, a configuration file must be provided to the PAP component in order for it to know how to connect to the database and Kafka along with other relevant configuration diff --git a/docs/development/devtools/smoke/api-smoke.rst b/docs/development/devtools/smoke/api-smoke.rst index 8230f33b..b2c81f83 100644 --- a/docs/development/devtools/smoke/api-smoke.rst +++ b/docs/development/devtools/smoke/api-smoke.rst @@ -11,7 +11,8 @@ Policy API Smoke Test ~~~~~~~~~~~~~~~~~~~~~ The policy-api smoke testing is executed against a default ONAP installation as per OOM charts. -This test verifies the execution of all the REST api's exposed by the component to make sure the contract works as expected. +This test verifies the execution of all the REST api's exposed by the component to make sure the +contract works as expected. General Setup ************* diff --git a/docs/development/devtools/smoke/db-migrator-smoke.rst b/docs/development/devtools/smoke/db-migrator-smoke.rst index 74b8eddd..c6d8fd0d 100644 --- a/docs/development/devtools/smoke/db-migrator-smoke.rst +++ b/docs/development/devtools/smoke/db-migrator-smoke.rst @@ -8,415 +8,51 @@ Policy DB Migrator Smoke Tests Prerequisites ************* -Check number of files in each release +- Have Docker and Docker compose installed +- Some bash knowledge -.. code:: - :number-lines: +Preparing the test +================== - ls 0800/upgrade/*.sql | wc -l = 96 - ls 0900/upgrade/*.sql | wc -l = 13 - ls 1000/upgrade/*.sql | wc -l = 9 - ls 0800/downgrade/*.sql | wc -l = 96 - ls 0900/downgrade/*.sql | wc -l = 13 - ls 1000/downgrade/*.sql | wc -l = 9 +The goal for the smoke test is to confirm that any upgrade or downgrade operation between different +db-migrator versions completes without issues. -Upgrade scripts -=============== +So, before running the test, make sure that there are different tests doing upgrade and downgrade +operations to the latest version. The script with the test cases is under the db-migrator folder in the `docker +repository <https://github.com/onap/policy-docker/tree/master/policy-db-migrator/smoke-test>`_ -.. code:: - :number-lines: +Edit the `*-tests.sh` file to add the tests and also to check if the database variables (host, +admin user, admin password) are set correctly.
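A minimal sketch of what one such test case might look like, reusing the db-migrator invocation documented in the earlier version of this page; the function name, the database variables (SQL_HOST, SQL_ADMIN_USER, SQL_ADMIN_PASSWORD), the schema_versions column name and the target schema version are illustrative assumptions, not names taken from the actual `*-tests.sh` script:

.. code-block:: bash

    # Illustrative sketch only: variable names and the expected schema version are assumptions.
    test_upgrade_to_latest() {
        /opt/app/policy/bin/prepare_upgrade.sh policyadmin
        /opt/app/policy/bin/db-migrator -s policyadmin -o upgrade

        # Verify the schema version recorded by db-migrator after the upgrade.
        local version
        version=$(PGPASSWORD="${SQL_ADMIN_PASSWORD}" psql -h "${SQL_HOST}" -U "${SQL_ADMIN_USER}" \
            -d policyadmin -t -A -c "SELECT version FROM schema_versions;")
        if [ "${version}" != "${TARGET_SCHEMA_VERSION}" ]; then
            echo "unexpected schema version: ${version}"
            return 1
        fi
    }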
- /opt/app/policy/bin/prepare_upgrade.sh policyadmin - /opt/app/policy/bin/db-migrator -s policyadmin -o upgrade # upgrade to Jakarta version (latest) - /opt/app/policy/bin/db-migrator -s policyadmin -o upgrade -t 0900 # upgrade to Istanbul - /opt/app/policy/bin/db-migrator -s policyadmin -o upgrade -t 0800 # upgrade to Honolulu +Running the test +================ -.. note:: - You can also run db-migrator upgrade with the -t and -f options +The script mentioned in the step above is run against the `Docker compose configuration +<https://github.com/onap/policy-docker/tree/master/compose>`_. -Downgrade scripts -================= +Change the `db_migrator_policy_init.sh` entry in the db-migrator service descriptor in the docker compose file +to the `*-tests.sh` file. -.. code:: - :number-lines: +Start the service - /opt/app/policy/bin/prepare_downgrade.sh policyadmin - /opt/app/policy/bin/db-migrator -s policyadmin -o downgrade -t 0900 # downgrade to Istanbul - /opt/app/policy/bin/db-migrator -s policyadmin -o downgrade -t 0800 # downgrade to Honolulu - /opt/app/policy/bin/db-migrator -s policyadmin -o downgrade -t 0 # delete all tables +.. code-block:: bash -Db migrator initialization script -================================= + cd ~/git/docker/compose + ./start-compose.sh policy-db-migrator -Update /oom/kubernetes/policy/resources/config/db_migrator_policy_init.sh with the appropriate upgrade/downgrade calls. +To collect the logs -The policy version you are deploying should either be an upgrade or downgrade from the current db migrator schema version. +.. code-block:: bash -Every time you modify db_migrator_policy_init.sh you will have to undeploy, make and redeploy before updates are applied. + docker compose logs + # or + docker logs policy-db-migrator -1. Fresh Install -**************** +To finish execution -.. list-table:: - :widths: 60 20 - :header-rows: 0 +.. code-block:: bash - * - Number of files run - - 118 - * - Tables in policyadmin - - 70 - * - Records Added - - 118 - * - schema_version - - 1000 + ./stop-compose.sh -2. Downgrade to Honolulu (0800) -******************************* - -Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts" tagged as Honolulu - -Make/Redeploy to run downgrade. - -.. list-table:: - :widths: 60 20 - :header-rows: 0 - - * - Number of files run - - 13 - * - Tables in policyadmin - - 73 - * - Records Added - - 13 - * - schema_version - - 0800 - -3. Upgrade to Istanbul (0900) -***************************** - -Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts". - -Make/Redeploy to run upgrade. - -.. list-table:: - :widths: 60 20 - :header-rows: 0 - - * - Number of files run - - 13 - * - Tables in policyadmin - - 75 - * - Records Added - - 13 - * - schema_version - - 0900 - -4. Upgrade to Istanbul (0900) without any information in the migration schema -***************************************************************************** - -Ensure you are on release 0800. (This may require running a downgrade before starting the test) - -Drop db-migrator tables in migration schema: - -.. code:: - :number-lines: - - DROP TABLE schema_versions; - DROP TABLE policyadmin_schema_changelog; - -Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts". - -Make/Redeploy to run upgrade. - -..
list-table:: - :widths: 60 20 - :header-rows: 0 - - * - Number of files run - - 13 - * - Tables in policyadmin - - 75 - * - Records Added - - 13 - * - schema_version - - 0900 - -5. Upgrade to Istanbul (0900) after failed downgrade -**************************************************** - -Ensure you are on release 0900. - -Rename pdpstatistics table in policyadmin schema: - -.. code:: - - RENAME TABLE pdpstatistics TO backup_pdpstatistics; - -Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts" - -Make/Redeploy to run downgrade - -This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0) - -Rename backup_pdpstatistic table in policyadmin schema: - -.. code:: - - RENAME TABLE backup_pdpstatistics TO pdpstatistics; - -Modify db_migrator_policy_init.sh - Remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts" - -Make/Redeploy to run upgrade - -.. list-table:: - :widths: 60 20 - :header-rows: 0 - - * - Number of files run - - 11 - * - Tables in policyadmin - - 75 - * - Records Added - - 11 - * - schema_version - - 0900 - -6. Downgrade to Honolulu (0800) after failed downgrade -****************************************************** - -Ensure you are on release 0900. - -Add timeStamp column to papdpstatistics_enginestats: - -.. code:: - - ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN timeStamp datetime DEFAULT NULL NULL AFTER UPTIME; - -Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts" - -Make/Redeploy to run downgrade - -This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0) - -Remove timeStamp column from jpapdpstatistics_enginestats: - -.. code:: - - ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp; - -The config job will retry 5 times. If you make your fix before this limit is reached you won't need to redeploy. - -Redeploy to run downgrade - -.. list-table:: - :widths: 60 20 - :header-rows: 0 - - * - Number of files run - - 14 - * - Tables in policyadmin - - 73 - * - Records Added - - 14 - * - schema_version - - 0800 - -7. Downgrade to Honolulu (0800) after failed upgrade -**************************************************** - -Ensure you are on release 0800. - -Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts" - -Update pdpstatistics: - -.. code:: - - ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL NULL AFTER POLICYEXECUTEDSUCCESSCOUNT; - -Make/Redeploy to run upgrade - -This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0) - -Once the retry count has been reached, update pdpstatistics: - -.. code:: - - ALTER TABLE pdpstatistics DROP COLUMN POLICYUNDEPLOYCOUNT; - -Modify db_migrator_policy_init.sh - Remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts" - -Make/Redeploy to run downgrade - -.. list-table:: - :widths: 60 20 - :header-rows: 0 - - * - Number of files run - - 7 - * - Tables in policyadmin - - 73 - * - Records Added - - 7 - * - schema_version - - 0800 - -8. Upgrade to Istanbul (0900) after failed upgrade -************************************************** - -Ensure you are on release 0800. 
- -Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts" - -Update PDP table: - -.. code:: - - ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY; - -Make/Redeploy to run upgrade - -This should result in an error (last row in policyadmin_schema_changelog will have a success value of 0) - -Update PDP table: - -.. code:: - - ALTER TABLE pdp DROP COLUMN LASTUPDATE; - -The config job will retry 5 times. If you make your fix before this limit is reached you won't need to redeploy. - -Redeploy to run upgrade - -.. list-table:: - :widths: 60 20 - :header-rows: 0 - - * - Number of files run - - 14 - * - Tables in policyadmin - - 75 - * - Records Added - - 14 - * - schema_version - - 0900 - -9. Downgrade to Honolulu (0800) with data in pdpstatistics and jpapdpstatistics_enginestats -******************************************************************************************* - -Ensure you are on release 0900. - -Check pdpstatistics and jpapdpstatistics_enginestats are populated with data. - -.. code:: - :number-lines: - - SELECT count(*) FROM pdpstatistics; - SELECT count(*) FROM jpapdpstatistics_enginestats; - -Modify db_migrator_policy_init.sh - remove any lines referencing upgrade and add the 2 lines under "Downgrade scripts" - -Make/Redeploy to run downgrade - -Check the tables to ensure the number of records is the same. - -.. code:: - :number-lines: - - SELECT count(*) FROM pdpstatistics; - SELECT count(*) FROM jpapdpstatistics_enginestats; - -Check pdpstatistics to ensure the primary key has changed: - -.. code:: - - SELECT column_name, constraint_name FROM information_schema.key_column_usage WHERE table_name='pdpstatistics'; - -Check jpapdpstatistics_enginestats to ensure id column has been dropped and timestamp column added. - -.. code:: - - SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'jpapdpstatistics_enginestats'; - -Check the pdp table to ensure the LASTUPDATE column has been dropped. - -.. code:: - - SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'pdp'; - - -.. list-table:: - :widths: 60 20 - :header-rows: 0 - - * - Number of files run - - 13 - * - Tables in policyadmin - - 73 - * - Records Added - - 13 - * - schema_version - - 0800 - -10. Upgrade to Istanbul (0900) with data in pdpstatistics and jpapdpstatistics_enginestats -****************************************************************************************** - -Ensure you are on release 0800. - -Check pdpstatistics and jpapdpstatistics_enginestats are populated with data. - -.. code:: - :number-lines: - - SELECT count(*) FROM pdpstatistics; - SELECT count(*) FROM jpapdpstatistics_enginestats; - -Modify db_migrator_policy_init.sh - remove any lines referencing downgrade and add the 2 lines under "Upgrade scripts" - -Make/Redeploy to run upgrade - -Check the tables to ensure the number of records is the same. - -.. code:: - :number-lines: - - SELECT count(*) FROM pdpstatistics; - SELECT count(*) FROM jpapdpstatistics_enginestats; - -Check pdpstatistics to ensure the primary key has changed: - -.. code:: - - SELECT column_name, constraint_name FROM information_schema.key_column_usage WHERE table_name='pdpstatistics'; - -Check jpapdpstatistics_enginestats to ensure timestamp column has been dropped and id column added. - -.. 
code:: - - SELECT table_name, column_name, data_type FROM information_schema.columns WHERE table_name = 'jpapdpstatistics_enginestats'; - -Check the pdp table to ensure the LASTUPDATE column has been added and the value has defaulted to the CURRENT_TIMESTAMP. - -.. code:: - - SELECT table_name, column_name, data_type, column_default FROM information_schema.columns WHERE table_name = 'pdp'; - -.. list-table:: - :widths: 60 20 - :header-rows: 0 - - * - Number of files run - - 13 - * - Tables in policyadmin - - 75 - * - Records Added - - 13 - * - schema_version - - 0900 - -.. note:: - The number of records added may vary depending on the number of retries. - -With addition of Postgres support to db-migrator, these tests can be also performed on a Postgres version of database. -In addition, scripts running the aforementioned scenarios can be found under `smoke-tests` folder on db-migrator code base. End of Document diff --git a/docs/development/devtools/smoke/pap-smoke.rst b/docs/development/devtools/smoke/pap-smoke.rst index a5f54c06..a17c8c6c 100644 --- a/docs/development/devtools/smoke/pap-smoke.rst +++ b/docs/development/devtools/smoke/pap-smoke.rst @@ -11,7 +11,8 @@ Policy PAP Smoke Test ~~~~~~~~~~~~~~~~~~~~~ The policy-pap smoke testing is executed against a default ONAP installation as per OOM charts. -This test verifies the execution of all the REST api's exposed by the component to make sure the contract works as expected. +This test verifies the execution of all the REST api's exposed by the component to make sure the +contract works as expected. General Setup ************* @@ -28,7 +29,7 @@ The ONAP components used during the smoke tests are: - Policy API to perform CRUD of policies. - Policy DB to store the policies. -- DMaaP for the communication between components. +- Kafka for the communication between components. - Policy PAP to perform runtime administration (deploy/undeploy/status/statistics/etc). - Policy Apex-PDP to deploy & undeploy policies. And send heartbeats to PAP. - Policy Drools-PDP to deploy & undeploy policies. And send heartbeats to PAP. @@ -66,4 +67,5 @@ Make sure to execute the delete steps in order to clean the setup after testing. Delete policies using policy-api -------------------------------- -Use the previously downloaded policy-api postman collection to delete the policies created for testing. +Use the previously downloaded policy-api postman collection to delete the policies created for +testing. diff --git a/docs/development/devtools/smoke/xacml-smoke.rst b/docs/development/devtools/smoke/xacml-smoke.rst index 61f3551f..b57a3065 100644 --- a/docs/development/devtools/smoke/xacml-smoke.rst +++ b/docs/development/devtools/smoke/xacml-smoke.rst @@ -10,8 +10,8 @@ XACML PDP Smoke Test ~~~~~~~~~~~~~~~~~~~~ -The policy-xacml-pdp smoke testing can be executed against a kubernetes based policy framework installation, -and/or a docker-compose set up similar to the one executed by CSIT tests. +The policy-xacml-pdp smoke testing can be executed against a kubernetes based policy framework +installation, and/or a docker-compose set up similar to the one executed by CSIT tests. 
General Setup ************* @@ -21,16 +21,21 @@ PF kubernetes Install For installation instructions, please refer to the following documentation: -`Policy Framework K8S Install <https://docs.onap.org/projects/onap-policy-parent/en/latest/development/devtools/testing/csit.html>`_ +`Policy Framework K8S Install +<https://docs.onap.org/projects/onap-policy-parent/en/latest/development/devtools/testing/csit.html>`_ -The script referred to in the above link should handle the install of the of microk8s, docker and other required components for the install of the policy framework and clamp components. The scripts are used by policy as a means to run the CSIT tests in Kubernetes. +The script referred to in the above link should handle the install of the of microk8s, docker and +other required components for the install of the policy framework and clamp components. The scripts +are used by policy as a means to run the CSIT tests in Kubernetes. docker-compose based -------------------- -A smaller testing environment can be put together by replicating the docker-based CSIT test environment. Details are on the same page as K8s setup: +A smaller testing environment can be put together by replicating the docker-based CSIT test +environment. Details are on the same page as K8s setup: -`Policy CSIT Test Install Docker <https://docs.onap.org/projects/onap-policy-parent/en/latest/development/devtools/testing/csit.html>`_ +`Policy CSIT Test Install Docker +<https://docs.onap.org/projects/onap-policy-parent/en/latest/development/devtools/testing/csit.html>`_ Testing procedures ****************** diff --git a/docs/development/devtools/testing/csit.rst b/docs/development/devtools/testing/csit.rst index ede88af1..9151e166 100644 --- a/docs/development/devtools/testing/csit.rst +++ b/docs/development/devtools/testing/csit.rst @@ -42,12 +42,13 @@ Under the folder `~/git/policy/docker/csit`, there are two main scripts to run t Running CSIT in Docker environment ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -If not familiar with the PF Docker structure, the detailed information can be found :ref:`here <docker-label>` +If not familiar with the PF Docker structure, the detailed information can be found +:ref:`here <docker-label>` Running tests to validate code changes -------------------------------------- -For *local* images, set `LOCAL_IMAGES=true`, located at the `get-versions.sh` script +For *local* images, run the script with the `--local` flag. .. note:: Make sure to do the same changes to any other components that are using locally built images. @@ -59,7 +60,7 @@ Then use the `run-project-csit.sh` script to run the test suite. .. code-block:: bash cd ~/git/policy/docker - ./csit/run-project-csit.sh <component> + ./csit/run-project-csit.sh <component> --local The <component> input is any of the policy components available: @@ -72,6 +73,7 @@ The <component> input is any of the policy components available: - drools-applications - xacml-pdp - clamp + - opa-pdp Keep in mind that after the Robot executions, logs from docker-compose are printed and test logs might not be available on console and the containers are teared down. The tests results @@ -82,12 +84,14 @@ Running tests for learning PF usage ----------------------------------- In that case, no changes required on docker-compose files, but commenting the tear down of docker -containers might be required. For that, edit the file `run-project-csit.sh` script and comment the -following line: +containers might be required. 
For that, run the `run-project-csit.sh` script with `--no-exit` flag: .. code-block:: bash - # source_safely ${WORKSPACE}/compose/stop-compose.sh (currently line 36) + cd ~/git/policy/docker + ./csit/run-project-csit.sh <component> --local --no-exit + # or + ./csit/run-project-csit.sh <component> --no-exit # will download images from nexus3 server This way, the docker containers are still up and running for more investigation. @@ -130,6 +134,7 @@ The <component> input is any of the policy components available: - drools-pdp - xacml-pdp - clamp + - opa-pdp Different from Docker usage, the microk8s installation is not removed when tests finish. @@ -138,12 +143,12 @@ Different from Docker usage, the microk8s installation is not removed when tests Installing all available PF components -------------------------------------- -Use the `run-k8s-csit.sh` script to install PF components with Prometheus server available. +Use the `cluster_setup.sh` script to install PF components with Prometheus server available. .. code-block:: bash - cd ~/git/policy/docker - ./csit/run-k8s-csit.sh install + cd ~/git/policy/docker/csit/resources/scripts + ./cluster_setup.sh install In this case, no tests are executed and the environment can be used for other integration tests @@ -156,7 +161,7 @@ Uninstall and clean up If running the CSIT tests with microk8s environment, docker images for the tests suites are created. To clean them up, user `docker prune <https://docs.docker.com/config/pruning/>`_ command. -To uninstall policy helm deployment and/or the microk8s cluster, use `run-k8s-csit.sh` +To uninstall policy helm deployment and/or the microk8s cluster, use `cluster_setup.sh` .. code-block:: bash @@ -164,10 +169,10 @@ To uninstall policy helm deployment and/or the microk8s cluster, use `run-k8s-cs cd ~/git/policy/docker # to uninstall deployment - ./csit/run-k8s-csit.sh uninstall + ./csit/resources/scripts/cluster_setup.sh uninstall # to remove cluster - ./csit/run-k8s-csit.sh clean + ./csit/resources/scripts/cluster_setup.sh clean End of document
\ No newline at end of file diff --git a/docs/development/devtools/testing/s3p/run-s3p.rst b/docs/development/devtools/testing/s3p/run-s3p.rst index 17eba32a..1ac88442 100644 --- a/docs/development/devtools/testing/s3p/run-s3p.rst +++ b/docs/development/devtools/testing/s3p/run-s3p.rst @@ -6,11 +6,11 @@ Running the Policy Framework S3P Tests Per release, the policy framework team perform stability and performance tests per component of the policy framework. This testing work involves performing a series of test on a full OOM deployment and updating the various test plans to work towards the given deployment. -This work can take some time to setup before performing any tests to begin with. +This work can take some time to set up before performing any tests. For stability testing, a tool called JMeter is used to trigger a series of tests for a period of 72 hours which has to be manually initiated and monitored by the tester. -Likewise, with the performance tests, but in this case for ~2 hours. -As part of the work to make to automate this process a script can be now triggered to bring up a microk8s cluster on a VM, install JMeter, alter the cluster info to match the JMX test plans for JMeter to trigger and gather results at the end. -These S3P tests will be triggered for a shorter period as part of the CSITs to prove the stability and performance of our components. +Likewise, the performance tests run in the same manner but for a shorter time of ~2 hours. +As part of the work to automate this process, a script can now be triggered to bring up a microk8s cluster on a VM, install JMeter, alter the cluster info to match the JMX test plans for JMeter to trigger and gather results at the end. +These S3P tests will be triggered for a shorter period as part of the GHAs to prove the stability and performance of our components. There has been recent work completed to trigger our CSIT tests in a K8s environment. As part of this work, a script has been created to bring up a microk8s cluster for testing purposes which includes all necessary components for our policy framework testing. @@ -19,34 +19,15 @@ Once this cluster is brought up, a script is called to alter the cluster. The IPS and PORTS of our policy components are set by this script to ensure consistency in the test plans. JMeter is installed and the S3P test plans are triggered to run by their respective components. -.. code-block:: bash - :caption: Start S3P Script +`run-s3p-tests.sh <https://github.com/onap/policy-docker/blob/master/csit/run-s3p-tests.sh>`_ - #===MAIN===# - if [ -z "${WORKSPACE}" ]; then - export WORKSPACE=$(git rev-parse --show-toplevel) - fi - export TESTDIR=${WORKSPACE}/testsuites - export API_PERF_TEST_FILE=$TESTDIR/performance/src/main/resources/testplans/policy_api_performance.jmx - export API_STAB_TEST_FILE=$TESTDIR/stability/src/main/resources/testplans/policy_api_stability.jmx - if [ $1 == "run" ] - then - mkdir automate-performance;cd automate-performance; - git clone "https://gerrit.onap.org/r/policy/docker" - cd docker/csit - if [ $2 == "performance" ] - then - bash start-s3p-tests.sh run $API_PERF_TEST_FILE; - elif [ $2 == "stability" ] - then - bash start-s3p-tests.sh run $API_STAB_TEST_FILE; - else - echo "echo Invalid arguments provided. Usage: $0 [option..] {performance | stability}" - fi - else - echo "Invalid arguments provided. Usage: $0 [option..] {run | uninstall}" - fi +This script automates the setup, execution, and teardown of S3P tests for policy components.
+It initializes a Kubernetes environment, installs Apache JMeter for running test plans, and executes specified JMX test files. +The script logs all operations, tracks errors, warnings, and processed files, and provides a summary report upon completion. +It includes options to either run tests (test <jmx_file>) or clean up the environment (clean). The clean option uninstalls the Kubernetes cluster and removes temporary resources. +The script also ensures proper resource usage tracking and error handling throughout its execution. -This script is triggered by each component. -It will export the performance and stability testplans and trigger the start-s3p-test.sh script which will perform the steps to automatically run the s3p tests. +`run-s3p-test.sh <https://github.com/onap/policy-api/blob/master/testsuites/run-s3p-test.sh>`_ +In summary, this script automates running performance or stability tests for a Policy Framework component by setting up necessary directories, cloning the required docker repository, and executing predefined test plans. +It also provides a clean-up option to remove resources after testing.
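Based on the descriptions above, a typical invocation might look as follows; the working directory and the JMX path are illustrative (the path reuses the test plan location from the script removed above), and the exact arguments should be confirmed against the scripts themselves:

.. code-block:: bash

    cd ~/git/policy/docker/csit

    # Run a component's stability test plan through JMeter on the microk8s cluster.
    ./run-s3p-tests.sh test ${WORKSPACE}/testsuites/stability/src/main/resources/testplans/policy_api_stability.jmx

    # Tear down the cluster and remove temporary resources when finished.
    ./run-s3p-tests.sh clean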
\ No newline at end of file diff --git a/docs/drools/pdpdEngine.rst b/docs/drools/pdpdEngine.rst index 6397dd86..7a699025 100644 --- a/docs/drools/pdpdEngine.rst +++ b/docs/drools/pdpdEngine.rst @@ -761,7 +761,7 @@ Data Migration PDP-D data used to be migrated across releases with its own db-migrator until Kohn release. Since Oslo, the main policy database manager, -`db-migrator <https://git.onap.org/policy/docker/tree/policy-db-migrator/src/main/docker/db-migrator>`_ +`db-migrator <https://git.onap.org/policy/docker/tree/policy-db-migrator/>`_ has been in use. The migration occurs when different release data is detected. *db-migrator* will look under the |