author    mrichomme <morgan.richomme@orange.com>    2020-04-10 18:22:59 +0200
committer Morgan Richomme <morgan.richomme@orange.com>    2020-04-14 17:25:12 +0000
commit    28babfa0a7ab8a6c1df9d16c7235d968c1337df3 (patch)
tree      2a3bf4bd9aeb83ac2ee44a2e41ee665b3b394931
parent    c06ea62716be250a59bd1d3857baa1a3a927a317 (diff)
Fix integration markdown errors for linter
Issue-ID: INT-1523
Change-Id: I2be0865395b12e1f277834b0c096f5d183cb5056
Signed-off-by: mrichomme <morgan.richomme@orange.com>
-rw-r--r--  README.md                                                           |   5
-rw-r--r--  bootstrap/README.md                                                 |   9
-rw-r--r--  bootstrap/jenkins/README.md                                         |   2
-rw-r--r--  bootstrap/vagrant-onap/README.md                                    |   1
-rw-r--r--  deployment/README.md                                                |   6
-rw-r--r--  deployment/aks/README.md                                            |  74
-rw-r--r--  deployment/onap-lab-ci/README.md                                    |   1
-rw-r--r--  documentation/api-dependencies/README.md                            |   4
-rw-r--r--  test/README.md                                                      |   8
-rw-r--r--  test/hpa_automation/heat/README.md                                  |  42
-rw-r--r--  test/mocks/datafilecollector-testharness/auto-test/README.md       |  65
-rw-r--r--  test/mocks/datafilecollector-testharness/common/README.md          |  61
-rw-r--r--  test/mocks/datafilecollector-testharness/dr-sim/README.md          | 118
-rw-r--r--  test/mocks/datafilecollector-testharness/ftps-sftp-server/README.md |  22
-rw-r--r--  test/mocks/datafilecollector-testharness/mr-sim/README.md          | 102
-rw-r--r--  test/mocks/datafilecollector-testharness/simulator-group/README.md | 104
-rw-r--r--  test/mocks/mass-pnf-sim/README.md                                   |  60
-rw-r--r--  test/mocks/mass-pnf-sim/pnf-sim-lightweight/README.md               |  29
-rw-r--r--  test/mocks/pnf-onboarding/README.md                                 |  22
19 files changed, 371 insertions(+), 364 deletions(-)
diff --git a/README.md b/README.md
index b6c71420e..28a24f4f3 100644
--- a/README.md
+++ b/README.md
@@ -1,14 +1,11 @@
-
# ONAP Integration
## Description
Responsible for the integration framework / automated tools, code and scripts, best practice guidance related to cross-project Continuous System Integration Testing (CSIT), and delivery of the ONAP project.
-See https://wiki.onap.org/display/DW/Integration+Project for additional details.
-
+See <https://wiki.onap.org/display/DW/Integration+Project> for additional details.
## Sub-projects
See respective directories for additional details about each sub-project.
-
diff --git a/bootstrap/README.md b/bootstrap/README.md
index 7809abf02..6c6553de8 100644
--- a/bootstrap/README.md
+++ b/bootstrap/README.md
@@ -1,12 +1,11 @@
-
# ONAP Integration - Bootstrap
## Description
-* A framework to automatically install and test a set of base infrastructure components and environments for new developers.
+A framework to automatically install and test a set of base infrastructure components and environments for new developers.
## Sub-components
-* codesearch - A set of vagrant scripts that will set up Hound daemon configured to search through all ONAP repositories.
-* jenkins - A set of vagrant scripts that will set up a simple Jenkins environment with jobs configured to build all ONAP java code and docker images.
-* vagrant-minimal-onap - A set of vagrant scripts that will set up minimal ONAP environment for research purposes.
+- codesearch: A set of vagrant scripts that will set up Hound daemon configured to search through all ONAP repositories.
+- jenkins: A set of vagrant scripts that will set up a simple Jenkins environment with jobs configured to build all ONAP java code and docker images.
+- vagrant-minimal-onap: A set of vagrant scripts that will set up a minimal ONAP environment for research purposes.
diff --git a/bootstrap/jenkins/README.md b/bootstrap/jenkins/README.md
index b9cb8645f..170134d4f 100644
--- a/bootstrap/jenkins/README.md
+++ b/bootstrap/jenkins/README.md
@@ -1,4 +1,3 @@
-
# ONAP Integration > Bootstrap > Jenkins
This directory contains a set of vagrant scripts that will automatically set up a Jenkins instance
@@ -9,4 +8,3 @@ can successfully build ONAP code from scratch. It is not intended to be used as
Jenkins CI/CD environment.
NOTE: the Jenkins instance is by default NOT SECURED, with the default admin user and password as "jenkins".
-
diff --git a/bootstrap/vagrant-onap/README.md b/bootstrap/vagrant-onap/README.md
index 8a8d1bd9b..edfbe85dc 100644
--- a/bootstrap/vagrant-onap/README.md
+++ b/bootstrap/vagrant-onap/README.md
@@ -1,4 +1,3 @@
# Deprecated
The ONAP on Vagrant tool's code has been migrated to the [Devtool repo](https://git.onap.org/integration/devtool/)
-
diff --git a/deployment/README.md b/deployment/README.md
index e1b8b8748..ab5a911f1 100644
--- a/deployment/README.md
+++ b/deployment/README.md
@@ -1,8 +1,6 @@
-
# ONAP Integration - Deployment
## Description
-* Heat templates and scripts for automatic deployments for system testing and continuous integration test flows
-* Sample OPENRC and heat environment settings files for ONAP deployment in ONAP External Labs
-
+- Heat templates and scripts for automatic deployments for system testing and continuous integration test flows
+- Sample OPENRC and heat environment settings files for ONAP deployment in ONAP External Labs
diff --git a/deployment/aks/README.md b/deployment/aks/README.md
index 1b46e5139..383fb9982 100644
--- a/deployment/aks/README.md
+++ b/deployment/aks/README.md
@@ -6,14 +6,12 @@ Copyright 2019 AT&T Intellectual Property. All rights reserved.
This file is licensed under the CREATIVE COMMONS ATTRIBUTION 4.0 INTERNATIONAL LICENSE
-Full license text at https://creativecommons.org/licenses/by/4.0/legalcode
-
+Full license text at <https://creativecommons.org/licenses/by/4.0/legalcode>
## About
ONAP on AKS will orchestrate an Azure Kubernetes Service (AKS) deployment, a DevStack deployment, an ONAP + NFS deployment, as well as configuration to link the Azure resources together. After ONAP is installed, a cloud region will also be added to ONAP with the new DevStack details that can be used to instantiate a VNF.
-
### Pre-Reqs
The following software is required to be installed:
@@ -22,12 +20,11 @@ The following software is required to be installed:
- [helm](https://helm.sh/docs/using_helm/)
- [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
- [azure command line](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-apt?view=azure-cli-latest)
-- make, openjdk-8-jdk, openjdk-8-jre (``apt-get update && apt-get install make openjdk-8-jre openjdk-8-jdk``)
+- make, openjdk-8-jdk, openjdk-8-jre (`apt-get update && apt-get install make openjdk-8-jre openjdk-8-jdk`)
Check the [OOM Cloud Setup Guide](https://docs.onap.org/en/latest/submodules/oom.git/docs/oom_cloud_setup_guide.html#cloud-setup-guide-label) for the versions of kubectl and helm to use.
-After installing the above software, run ``az login`` and follow the instructions to finalize the azure command line installation. **You'll need to be either an owner or co-owner of the azure subscription, or some of the deployment steps may not complete successfully**. If you have multiple azure subscriptions, use ``az account set --subscription <subscription name>`` prior to running ``az login`` so that resources are deployed to the correct subscription. See [the azure docs](https://docs.microsoft.com/en-us/cli/azure/get-started-with-azure-cli?view=azure-cli-latest) for more details on using the azure command line.
-
+After installing the above software, run `az login` and follow the instructions to finalize the azure command line installation. **You'll need to be either an owner or co-owner of the azure subscription, or some of the deployment steps may not complete successfully**. If you have multiple azure subscriptions, use `az account set --subscription <subscription name>` prior to running `az login` so that resources are deployed to the correct subscription. See [the azure docs](https://docs.microsoft.com/en-us/cli/azure/get-started-with-azure-cli?view=azure-cli-latest) for more details on using the azure command line.
### The following resources will be created in Azure
@@ -35,14 +32,11 @@ After installing the above software, run ``az login`` and follow the instruction
- VM running NFS server application
- VM running latest DevStack version
-
## Usage
-
### cloud.sh
-
-``cloud.sh`` is the main driver script, and deploys a Kubernetes Cluster (AKS), DevStack, NFS, and bootstraps ONAP with configuration needed to instantiate a VNF. The script creates ONAP in "about" an hour.
+`cloud.sh` is the main driver script: it deploys a Kubernetes Cluster (AKS), DevStack, and NFS, and bootstraps ONAP with the configuration needed to instantiate a VNF. The script creates ONAP in about an hour.
```
@@ -76,17 +70,14 @@ It:
$ ./cloud.sh --override
```
-
### cloud.conf
+This file contains the parameters that will be used when executing `cloud.sh`. The parameter `BUILD` will be generated at runtime.
-This file contains the parameters that will be used when executing ``cloud.sh``. The parameter ``BUILD`` will be generated at runtime.
-
-For an example with all of the parameters filled out, check [here](./cloud.conf.example). You can copy this and modify to suit your deployment. The parameters that MUST be modified from ``cloud.conf.example`` are ``USER_PUBLIC_IP_PREFIX`` and ``BUILD_DIR``.
+For an example with all of the parameters filled out, check [here](./cloud.conf.example). You can copy this and modify to suit your deployment. The parameters that MUST be modified from `cloud.conf.example` are `USER_PUBLIC_IP_PREFIX` and `BUILD_DIR`.
All other parameters will work out of the box; however, you can also customize them to suit your own deployment. See below for a description of the available parameters and how they're used.
-
```
# The variable $BUILD will be generated dynamically when this file is sourced
@@ -166,36 +157,33 @@ DOCKER_REPOSITORY= Image repository url to pull ONAP images to use for i
### Integration Override
-When you execute ``cloud.sh``, you have the option to create an ``integration-override.yaml`` file that will be used during ``helm deploy ...`` to install ONAP. This is done by passing the ``--override`` flag to cloud.sh.
-
-The template used to create the override file is ``./util/integration-override.template``, and is invoked by ``./util/create_robot_config.sh``. It's very possible this isn't complete or sufficient for how you'd like to customize your deployment. You can update the template file and/or the script to provide additional customization for your ONAP install.
+When you execute `cloud.sh`, you have the option to create an `integration-override.yaml` file that will be used during `helm deploy ...` to install ONAP. This is done by passing the `--override` flag to cloud.sh.
+The template used to create the override file is `./util/integration-override.template`, and is invoked by `./util/create_robot_config.sh`. It's very possible this isn't complete or sufficient for how you'd like to customize your deployment. You can update the template file and/or the script to provide additional customization for your ONAP install.
### OOM Overrides
-In ``cloud.conf``, there's a parameter ``OOM_OVERRIDES`` available that's used to provide command line overrides to ``helm deploy``. This uses the standard helm syntax, so if you're using it the value should look like ``OOM_OVERRIDES="--set vid.enabled=false,so.image=abc"``. If you don't want to override anything, just set this value to an empty string.
-
+In `cloud.conf`, there's a parameter `OOM_OVERRIDES` available that's used to provide command line overrides to `helm deploy`. This uses the standard helm syntax, so if you're using it the value should look like `OOM_OVERRIDES="--set vid.enabled=false,so.image=abc"`. If you don't want to override anything, just set this value to an empty string.
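For instance, a cloud.conf entry might look like either of the following (values taken from the example above):

```
# Pass-through helm overrides (illustrative values)
OOM_OVERRIDES="--set vid.enabled=false,so.image=abc"
# ...or, to override nothing:
OOM_OVERRIDES=""
```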
### Pre Install
-When you run ``cloud.sh`` it will execute ``pre_install.sh`` first, which checks a few things:
+When you run `cloud.sh` it will execute `pre_install.sh` first, which checks a few things:
- It checks you have the correct pre-reqs installed. So, it'll make sure you have kubectl, azure cli, helm, etc...
-- It checks that the version of kubernetes in ``cloud.conf`` is available in Azure.
-- It checks the version of azure cli is >= to the baselined version (you can check this version by looking at the top of ``pre_install.sh``). The Azure cli is introduced changes in minor versions that aren't backwards compatible.
-- It checks that the version of kubectl installed is at **MOST** 1 minor version different than the version of kubernetes in ``cloud.conf``.
+- It checks that the version of kubernetes in `cloud.conf` is available in Azure.
+- It checks that the version of azure cli is >= the baselined version (you can check this version by looking at the top of `pre_install.sh`). The Azure cli introduces changes in minor versions that aren't backwards compatible.
+- It checks that the version of kubectl installed is at **MOST** 1 minor version different than the version of kubernetes in `cloud.conf`.
-If you would like to skip ``pre_install.sh`` and run the deployment anyways, pass the flag ``--no-validate`` to ``cloud.sh``, like this:
+If you would like to skip `pre_install.sh` and run the deployment anyway, pass the flag `--no-validate` to `cloud.sh`, like this:
```
$ ./cloud.sh --no-validate
```
-
### Post Install
-After ONAP is deployed, you have the option of executing an arbitrary set of post-install scripts. This is enabled by passing the ``--post-install`` flag to ``cloud.sh``, like this:
+After ONAP is deployed, you have the option of executing an arbitrary set of post-install scripts. This is enabled by passing the `--post-install` flag to `cloud.sh`, like this:
```
$ ./cloud.sh --post-install
@@ -205,30 +193,28 @@ $ ./cloud.sh --post-install
These post-install scripts need to be executable from the command line, and will be provided two parameters that they can use to perform their function:
- /path/to/onap.conf : This is created during the deployment, and has various ONAP and OpenStack parameters.
-- /path/to/cloud.conf : this is the same ``cloud.conf`` that's used during the original deployment.
-
+- /path/to/cloud.conf : This is the same `cloud.conf` that's used during the original deployment.
Your post-install scripts can disregard these parameters, or source them and use the parameters as-needed.
-Included with this repo is one post-install script (``000_bootstrap_onap.sh``)that bootstraps AAI, VID, and SO with cloud and customer details so that ONAP is ready to model and instantiate a VNF.
-
-In order to include other custom post-install scripts, simply put them in the ``post-install`` directory, and make sure to set its mode to executable. They are executed in alphabetical order.
+Included with this repo is one post-install script (`000_bootstrap_onap.sh`) that bootstraps AAI, VID, and SO with cloud and customer details so that ONAP is ready to model and instantiate a VNF.
+In order to include other custom post-install scripts, simply put them in the `post-install` directory, and make sure to set their mode to executable. They are executed in alphabetical order.
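A minimal sketch of such a script (the file name and echo step are hypothetical; the two arguments are the ones described above):

```
#!/bin/bash
# post-install/100_example.sh - hypothetical custom post-install script
ONAP_CONF="$1"    # /path/to/onap.conf, created during the deployment
CLOUD_CONF="$2"   # the same cloud.conf used for the original deployment
source "$ONAP_CONF"
source "$CLOUD_CONF"
echo "running post-install step for build $BUILD"
```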
## Post Deployment
-After ONAP and DevStack are deployed, there will be a ``deployment.notes`` file with instructions on how to access the various components. The ``BUILD_DIR`` specified in ``cloud.conf`` will contain a new ssh key, kubeconfig, and other deployment artifacts as well.
-
-All of the access information below will be in ``deployment.notes``.
+After ONAP and DevStack are deployed, there will be a `deployment.notes` file with instructions on how to access the various components. The `BUILD_DIR` specified in `cloud.conf` will contain a new ssh key, kubeconfig, and other deployment artifacts as well.
+All of the access information below will be in `deployment.notes`.
### Kubernetes Access
To access the Kubernetes dashboard:
-``az aks browse --resource-group $AKS_RESOURCE_GROUP_NAME --name $AKS_NAME``
+`az aks browse --resource-group $AKS_RESOURCE_GROUP_NAME --name $AKS_NAME`
To use kubectl:
+
```
export KUBECONFIG=$BUILD_DIR/kubeconfig
@@ -241,28 +227,25 @@ kubectl ...
To access Horizon:
Find the public IP address via the Azure portal, and go to
-``http://$DEVSTACK_PUBLIC_IP``
+`http://$DEVSTACK_PUBLIC_IP`
SSH access to DevStack node:
-``ssh -i $BUILD_DIR/id_rsa ${DEVSTACK_ADMIN_USER}@${DEVSTACK_PUBLIC_IP}``
+`ssh -i $BUILD_DIR/id_rsa ${DEVSTACK_ADMIN_USER}@${DEVSTACK_PUBLIC_IP}`
OpenStack cli access:
There's an openstack cli pod that's created in the default kubernetes namespace. To use it, run:
-``kubectl exec $OPENSTACK_CLI_POD -- sh -lc "<openstack command>"``
-
+`kubectl exec $OPENSTACK_CLI_POD -- sh -lc "<openstack command>"`
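For example, to list the DevStack servers (the pod name is deployment-specific, and `openstack server list` is just one possible command):

```
# $OPENSTACK_CLI_POD holds the name of the cli pod created during deployment
kubectl exec $OPENSTACK_CLI_POD -- sh -lc "openstack server list"
```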
### NFS Access
-``ssh -i $BUILD_DIR/id_rsa ${NFS_ADMIN_USER}@${NFS_PUBLIC_IP}``
-
+`ssh -i $BUILD_DIR/id_rsa ${NFS_ADMIN_USER}@${NFS_PUBLIC_IP}`
## Deleting the deployment
-After deployment, there will be a script named ``$BUILD_DIR/clean.sh`` that can be used to delete the resource groups that were created during deployment. This script is not required; you can always just navigate to the Azure portal to delete the resource groups manually.
-
+After deployment, there will be a script named `$BUILD_DIR/clean.sh` that can be used to delete the resource groups that were created during deployment. This script is not required; you can always just navigate to the Azure portal to delete the resource groups manually.
## Running the scripts separately
@@ -270,7 +253,6 @@ Below are instructions for how to create DevStack, NFS, or AKS cluster separatel
**NOTE: The configuration to link components together (network peering, route table modification, NFS setup, etc...) and the onap-bootstrap will not occur if you run the scripts separately**
-
### DevStack creation
```
@@ -304,7 +286,6 @@ additional options:
```
-
### NFS Creation
```
@@ -334,7 +315,6 @@ additional options:
```
-
### AKS Creation
```
diff --git a/deployment/onap-lab-ci/README.md b/deployment/onap-lab-ci/README.md
deleted file mode 100644
index d7efc7b6a..000000000
--- a/deployment/onap-lab-ci/README.md
+++ /dev/null
@@ -1 +0,0 @@
-# onap-lab-ci \ No newline at end of file
diff --git a/documentation/api-dependencies/README.md b/documentation/api-dependencies/README.md
index f82ddcf41..342539823 100644
--- a/documentation/api-dependencies/README.md
+++ b/documentation/api-dependencies/README.md
@@ -1,3 +1 @@
-
-This directory contains the documentation for API dependencies between
-ONAP projects.
+This directory contains the documentation for API dependencies between ONAP projects.
diff --git a/test/README.md b/test/README.md
index 3fee9a34f..b05f768fe 100644
--- a/test/README.md
+++ b/test/README.md
@@ -1,9 +1,7 @@
-
# ONAP Integration - Test
## Description
-* Code and tools for automatic system testing and continuous integration test flows across ONAP projects
-* Common guidelines, templates, and best practices to help project developers to write unit and system test code
-* Framework and tools for security testing
-
+- Code and tools for automatic system testing and continuous integration test flows across ONAP projects
+- Common guidelines, templates, and best practices to help project developers to write unit and system test code
+- Framework and tools for security testing
diff --git a/test/hpa_automation/heat/README.md b/test/hpa_automation/heat/README.md
index 404ddbea1..75a96e121 100644
--- a/test/hpa_automation/heat/README.md
+++ b/test/hpa_automation/heat/README.md
@@ -4,26 +4,26 @@ This guide describes how to run the hpa_automation.py script. It can be used to
## Prerequisites
- - Install ONAP CLI. See [link](https://onap.readthedocs.io/en/dublin/submodules/cli.git/docs/installation_guide.html)
- - Install python mysql.connector (pip install mysql-connector-python)
- - Must have connectivity to the ONAP, a k8s vm already running is recommended as connectivity to the ONAP k8s network is required for the SDC onboarding section.
- - Create policies for homing using the temp_resource_module_name specified in hpa_automation_config.json. Sample policies can be seen in the sample_vfw_policies directory. Be sure to specify the right path to the directory in hpa_automation_config.json, only policies should exist in the directory
- - Create Nodeport for Policy pdp using the pdp_service_expose.yaml file (copy pdp_service_expose.yaml in hpa_automation/heat to rancher and run kubectl apply -f pdp_expose.yaml)
- - Put in the CSAR file to be used to create service models and specify its path in hpa_automation_config.json
- - Modify the SO bpmn configmap to change the SO vnf adapter endpoint to v2. See step 4 [here](https://onap.readthedocs.io/en/casablanca/submodules/integration.git/docs/docs_vfwHPA.html#docs-vfw-hpa)
- - Prepare sdnc_preload file and put in the right path to its location in hpa_automation_config.json
- - Put in the right parameters for automation in hpa_automation_config.json
- - Ensure the insert_policy_models_heat.py script is in the same location as the hpa_automation.py script as the automation scripts calls the insert_policy_models_heat.py script.
+- Install ONAP CLI. See [link](https://onap.readthedocs.io/en/dublin/submodules/cli.git/docs/installation_guide.html)
+- Install python mysql.connector (`pip install mysql-connector-python`)
+- You must have connectivity to ONAP; a k8s vm already running is recommended, as connectivity to the ONAP k8s network is required for the SDC onboarding section.
+- Create policies for homing using the temp_resource_module_name specified in hpa_automation_config.json. Sample policies can be seen in the sample_vfw_policies directory. Be sure to specify the right path to the directory in hpa_automation_config.json; only policies should exist in the directory.
+- Create a Nodeport for Policy pdp using the pdp_service_expose.yaml file (copy pdp_service_expose.yaml in hpa_automation/heat to rancher and run `kubectl apply -f pdp_expose.yaml`)
+- Put in the CSAR file to be used to create service models and specify its path in hpa_automation_config.json
+- Modify the SO bpmn configmap to change the SO vnf adapter endpoint to v2. See step 4 [here](https://onap.readthedocs.io/en/casablanca/submodules/integration.git/docs/docs_vfwHPA.html#docs-vfw-hpa)
+- Prepare the sdnc_preload file and put the right path to its location in hpa_automation_config.json
+- Put in the right parameters for automation in hpa_automation_config.json
+- Ensure the insert_policy_models_heat.py script is in the same location as the hpa_automation.py script, as the automation script calls the insert_policy_models_heat.py script.
**Points to Note:**
- - The hpa_automation.py runs end to end. It does the following;
- - Create cloud complex
- - Register cloud regions
- - Create service type
- - Create customer and adds customer subscription
- - SDC Onboarding (Create VLM, VSP, VF Model, and service model)
- - Upload policy models and adds policies
- - Create Service Instance and VNF Instance
- - SDNC Preload and Creates VF module
-
- - There are well named functions that do the above items every time the script is run. If you do not wish to run any part of that, you can go into the script and comment out the section at the bottom that handles that portion.
+
+1. The hpa_automation.py runs end to end. It does the following:
+   - Create cloud complex
+   - Register cloud regions
+   - Create service type
+   - Create customer and add customer subscription
+   - SDC Onboarding (Create VLM, VSP, VF Model, and service model)
+   - Upload policy models and add policies
+   - Create Service Instance and VNF Instance
+   - SDNC Preload and Create VF module
+2. There are well-named functions that do the above items every time the script is run. If you do not wish to run any part of that, you can go into the script and comment out the section at the bottom that handles that portion.
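A hypothetical end-to-end run, assuming the script takes the config file shown above as its argument:

```
# Hypothetical invocation; argument handling may differ in your checkout
python hpa_automation.py hpa_automation_config.json
```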
diff --git a/test/mocks/datafilecollector-testharness/auto-test/README.md b/test/mocks/datafilecollector-testharness/auto-test/README.md
index b73067dee..f6ccd52cb 100644
--- a/test/mocks/datafilecollector-testharness/auto-test/README.md
+++ b/test/mocks/datafilecollector-testharness/auto-test/README.md
@@ -1,54 +1,61 @@
-## Running automated test case and test suites
+# Running automated test case and test suites
+
Test cases run a single test case and test suites run one or more test cases in a sequence.
The test cases and test suites can be run on both Ubuntu and Mac-OS.
-##Overall structure and setup
+## Overall structure and setup
+
Test cases and test suites are written as bash scripts which call predefined functions in two other bash scripts
located in ../common dir.
The functions are described further below.
The integration repo is needed as well as docker.
-If needed setup the ``DFC_LOCAL_IMAGE`` and ``DFC_REMOTE_IMAGE`` env var in test_env.sh to point to the dfc images (local registry image or next registry image) without the image tag.
+If needed, set up the `DFC_LOCAL_IMAGE` and `DFC_REMOTE_IMAGE` env vars in test_env.sh to point to the dfc images (local registry image or nexus registry image) without the image tag.
The predefined images should be ok for current usage:
-``DFC_REMOTE_IMAGE=nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server``
+`DFC_REMOTE_IMAGE=nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server`
-``DFC_LOCAL_IMAGE=onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server``
+`DFC_LOCAL_IMAGE=onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server`
-If the test cases/suites in this dir are not executed in the auto-test dir in the integration repo, then the ``SIM_GROUP`` env var need to point to the ``simulator-group`` dir.
+If the test cases/suites in this dir are not executed in the auto-test dir in the integration repo, then the `SIM_GROUP` env var needs to point to the `simulator-group` dir.
See instructions in the test_env.sh. The ../common dir is needed as well in that case. That is, it is possible to have the auto-test dir (and the common dir) somewhere else
than in the integration repo, but the simulator-group and common dir need to be available.
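For example, when running from outside the integration repo, `test_env.sh` might set (the path is hypothetical):

```
# Point SIM_GROUP at the simulator-group dir of your integration checkout
SIM_GROUP=/path/to/integration/test/mocks/datafilecollector-testharness/simulator-group
```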
-##Test cases and test suites naming.
-Each file filename should have the format ``<tc-id>.sh`` for test cases and ``<ts-id>.sh`` for test suite. The tc-id and ts-id are the
+## Test cases and test suites naming
+
+Each filename should have the format `<tc-id>.sh` for test cases and `<ts-id>.sh` for test suites. The tc-id and ts-id are the
identities of the test case or test suite. Example: FTC2.sh, where FTC2 is the id of the test case. Only the contents of the files determine if
it is a test case or test suite, so it is good to name the file so it is easy to see whether it is a test case or a test suite.
-A simple way to list all test cases/suite along with the description is to do ``grep ONELINE_DESCR *.sh`` in the shell.
+A simple way to list all test cases/suite along with the description is to do `grep ONELINE_DESCR *.sh` in the shell.
-##Logs from containers and test cases
-All logs from each test cases are stored under ``logs/<tc-id>/``.
+## Logs from containers and test cases
+
+All logs from each test case are stored under `logs/<tc-id>/`.
The logs include the application.log and the container log from dfc, the container logs from each simulator and the test case log (same as the screen output).
In the test cases the logs are stored with a prefix so the logs can be stored at different steps during the test. All test cases contain an entry to save all logs with prefix 'END' at the end of each test case.
-##Execution##
-Test cases and test suites are executed by: `` [sudo] ./<tc-id or ts-id>.sh local | remote | remote-remove | manual-container | manual-app``</br>
-**local** - uses the dfc image pointed out by ``DFC_LOCAL_IMAGE`` in the test_env, should be the dfc image built locally in your docker registry.</br>
-**remote** - uses the dfc image pointed out by ``DFC_REMOTE_IMAGE`` in the test_env, should be the dfc nexus image in your docker registry.</br>
-**remote-remove** - uses the dfc image pointed out by ``DFC_REMOTE_IMAGE`` in the test_env, should be the dfc nexus image in your docker registry. Removes the nexus image and pull from remote registry.</br>
-**manual-container** - uses dfc in a manually started container. The script will prompt you for manual starting and stopping of the container.</br>
-**manual-app** - uses dfc app started as an external process (from eclipse etc). The script will prompt you for manual start and stop of the process.</br>
+
+## Execution
+
+Test cases and test suites are executed by: `[sudo] ./<tc-id or ts-id>.sh local | remote | remote-remove | manual-container | manual-app`</br>
+
+- **local** - uses the dfc image pointed out by `DFC_LOCAL_IMAGE` in the test_env, should be the dfc image built locally in your docker registry.</br>
+- **remote** - uses the dfc image pointed out by `DFC_REMOTE_IMAGE` in the test_env, should be the dfc nexus image in your docker registry.</br>
+- **remote-remove** - uses the dfc image pointed out by `DFC_REMOTE_IMAGE` in the test_env, should be the dfc nexus image in your docker registry. Removes the nexus image and pulls from the remote registry.</br>
+- **manual-container** - uses dfc in a manually started container. The script will prompt you for manual starting and stopping of the container.</br>
+- **manual-app** - uses dfc app started as an external process (from eclipse etc). The script will prompt you for manual start and stop of the process.</br>
When running dfc manually, either as a container or an app, the ports need to be set to map the instance id of the dfc. Most test cases start dfc with index 0, and the test case then expects the ports of dfc to be mapped to the standard port numbers.
However, if a higher instance id than 0 is used then the mapped ports need to add that index to the port number (eg, if index 2 is used the dfc needs to map port 8102 and 8435 instead of the standard 8100 and 8433).
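As an illustration, a manual start of dfc with instance id 2 would shift the host ports by the index (the exact container options are setup-specific):

```
# Hypothetical manual start for dfc index 2: host ports 8102/8435 map to the standard 8100/8433
docker run -p 8102:8100 -p 8435:8433 $DFC_LOCAL_IMAGE
```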
-##Test case file##
+## Test case file
+
A test case file contains a number of steps to verify a certain functionality.
-A description of the test case should be given to the ``TC_ONELINE_DESCR`` var. The description will be printed in the test result.
+A description of the test case should be given to the `TC_ONELINE_DESCR` var. The description will be printed in the test result.
The empty template for a test case files looks like this:
-(Only the parts noted with < and > shall be changed.)
+(Only the parts noted with `<` and `>` shall be changed.)
------------------------------------------------------------
```
#!/bin/bash
@@ -69,20 +76,18 @@ store_logs END
print_result
```
------------------------------------------------------------
The ../common/testcase_common.sh contains all functions needed for the test case file. See the README.md file in the ../common dir for a description of all available functions.
+## Test suite files
-##Test suite files##
A test suite file contains one or more test cases to run in sequence.
-A description of the test case should be given to the ``TS_ONELINE_DESCR`` var. The description will be printed in the test result.
+A description of the test suite should be given to the `TS_ONELINE_DESCR` var. The description will be printed in the test result.
The empty template for a test suite files looks like this:
-(Only the parts noted with ``<`` and ``>`` shall be changed.)
+(Only the parts noted with `<` and `>` shall be changed.)
------------------------------------------------------------
```
#!/bin/bash
@@ -104,11 +109,11 @@ suite_complete
```
------------------------------------------------------------
The ../common/testsuite_common.sh contains all functions needed for a test suite file. See the README.md file in the ../common dir for a description of all available functions.
-##Known limitations##
+## Known limitations
+
When DFC has polled a new event from the MR simulator, DFC starts to check each file whether it has been already published or not. This check is done per file towards the DR simulator.
If the event contains a large amount of files, there is a risk that DFC will flood the DR simulator with requests for these checks. The timeout in DFC for the response is currently 4 sec and the DR simulator may not be able to answer all requests within the timeout.
The DR simulator is single threaded. This seems to be a problem only for the first polled event. For subsequent events these requests seem to be spread out in time by DFC so the DR simulator can respond in time.
@@ -117,4 +122,4 @@ A number of the test script will report failure due to this limitation in the DR
The FTP servers may deny connection when too many file download requests are made in a short time from DFC.
This is visible in the DFC application log as WARNINGs for failed downloads. However, DFC always retries the failed download a number of times to
-minimize the risk of giving up download completely for these files. \ No newline at end of file
+minimize the risk of giving up download completely for these files.
diff --git a/test/mocks/datafilecollector-testharness/common/README.md b/test/mocks/datafilecollector-testharness/common/README.md
index bcd345739..31f40ef10 100644
--- a/test/mocks/datafilecollector-testharness/common/README.md
+++ b/test/mocks/datafilecollector-testharness/common/README.md
@@ -1,4 +1,4 @@
-##Common test scripts and env file for test
+## Common test scripts and env file for test
**test_env.sh**</br>
Common env variables for test in the auto-test dir. Used by the auto test cases/suites but could be used for other test scripts as well.
@@ -9,7 +9,7 @@ Common functions for auto test cases in the auto-test dir. A subset of the funct
**testsuite_common.sh**</br>
Common functions for auto test suites in the auto-test dir.
-##Descriptions of functions in testcase_common.sh
+## Descriptions of functions in testcase_common.sh
The following is a list of the available functions in a test case file. Please see some of the defined test cases for examples.
@@ -90,97 +90,97 @@ Sleep for a number of seconds and prints dfc heartbeat output every 30 sec
**mr_equal <variable-name> <target-value> [<timeout-in-sec>]**</br>
Tests if a variable value in the MR simulator is equal to a target value and an optional timeout.
-</br>Arg: ``<variable-name> <target-value>`` - This test set pass or fail depending on if the variable is
+</br>Arg: `<variable-name> <target-value>` - This test sets pass or fail depending on if the variable is
equal to the target or not.
-</br>Arg: ``<variable-name> <target-value> <timeout-in-sec>`` - This test waits up to the timeout seconds
+</br>Arg: `<variable-name> <target-value> <timeout-in-sec>` - This test waits up to the timeout seconds
before setting pass or fail depending on if the variable value becomes equal to the target
value or not.
**mr_greater <variable-name> <target-value> [<timeout-in-sec>]**</br>
Tests if a variable value in the MR simulator is greater than a target value and an optional timeout.
-</br>Arg: ``<variable-name> <target-value>`` - This test set pass or fail depending on if the variable is
+</br>Arg: `<variable-name> <target-value>` - This test sets pass or fail depending on if the variable is
greater than the target or not.
-</br>Arg: ``<variable-name> <target-value> <timeout-in-sec>`` - This test waits up to the timeout seconds
+</br>Arg: `<variable-name> <target-value> <timeout-in-sec>` - This test waits up to the timeout seconds
before setting pass or fail depending on if the variable value is greater than the target
value or not.
**mr_less <variable-name> <target-value> [<timeout-in-sec>]**</br>
Tests if a variable value in the MR simulator is less than a target value and an optional timeout.
-</br>Arg: ``<variable-name> <target-value>`` - This test set pass or fail depending on if the variable is
+</br>Arg: `<variable-name> <target-value>` - This test sets pass or fail depending on if the variable is
less than the target or not.
-</br>Arg: ``<variable-name> <target-value> <timeout-in-sec>`` - This test waits up to the timeout seconds
+</br>Arg: `<variable-name> <target-value> <timeout-in-sec>` - This test waits up to the timeout seconds
before setting pass or fail depending on if the variable value is less than the target
value or not.
**mr_contain_str <variable-name> <target-value> [<timeout-in-sec>]**</br>
Tests if a variable value in the MR simulator contains a substring target and an optional timeout.
-</br>Arg: ``<variable-name> <target-value>`` - This test set pass or fail depending on if the variable contains
+</br>Arg: `<variable-name> <target-value>` - This test sets pass or fail depending on if the variable contains
the target substring or not.
-</br>Arg: ``<variable-name> <target-value> <timeout-in-sec>`` - This test waits up to the timeout seconds
+</br>Arg: `<variable-name> <target-value> <timeout-in-sec>` - This test waits up to the timeout seconds
before setting pass or fail depending on if the variable value contains the target
substring or not.
**dr_equal <variable-name> <target-value> [<timeout-in-sec>]**</br>
Tests if a variable value in the DR simulator is equal to a target value and an optional timeout.
-</br>Arg: ``<variable-name> <target-value>`` - This test set pass or fail depending on if the variable is
+</br>Arg: `<variable-name> <target-value>` - This test sets pass or fail depending on if the variable is
equal to the target or not.
-</br>Arg: ``<variable-name> <target-value> <timeout-in-sec>`` - This test waits up to the timeout seconds
+</br>Arg: `<variable-name> <target-value> <timeout-in-sec>` - This test waits up to the timeout seconds
before setting pass or fail depending on if the variable value becomes equal to the target
value or not.
**dr_greater <variable-name> <target-value> [<timeout-in-sec>]**</br>
Tests if a variable value in the DR simulator is greater than a target value and an optional timeout.
-</br>Arg: ``<variable-name> <target-value>`` - This test set pass or fail depending on if the variable is
+</br>Arg: `<variable-name> <target-value>` - This test sets pass or fail depending on if the variable is
greater than the target or not.
-</br>Arg: ``<variable-name> <target-value> <timeout-in-sec>`` - This test waits up to the timeout seconds
+</br>Arg: `<variable-name> <target-value> <timeout-in-sec>` - This test waits up to the timeout seconds
before setting pass or fail depending on if the variable value is greater than the target
value or not.
**dr_less <variable-name> <target-value> [<timeout-in-sec>]**</br>
Tests if a variable value in the DR simulator is less than a target value and an optional timeout.
-</br>Arg: ``<variable-name> <target-value>`` - This test set pass or fail depending on if the variable is
+</br>Arg: `<variable-name> <target-value>` - This test sets pass or fail depending on if the variable is
less than the target or not.
-</br>Arg: ``<variable-name> <target-value> <timeout-in-sec>`` - This test waits up to the timeout seconds
+</br>Arg: `<variable-name> <target-value> <timeout-in-sec>` - This test waits up to the timeout seconds
before setting pass or fail depending on if the variable value is less than the target
value or not.
**dr_contain_str <variable-name> <target-value> [<timeout-in-sec>]**</br>
Tests if a variable value in the DR simulator contains a substring target and an optional timeout.
-</br>Arg: ``<variable-name> <target-value>`` - This test set pass or fail depending on if the variable contains
+</br>Arg: `<variable-name> <target-value>` - This test sets pass or fail depending on if the variable contains
the target substring or not.
-</br>Arg: ``<variable-name> <target-value> <timeout-in-sec>`` - This test waits up to the timeout seconds
+</br>Arg: `<variable-name> <target-value> <timeout-in-sec>` - This test waits up to the timeout seconds
before setting pass or fail depending on if the variable value contains the target
substring or not.
**drr_equal <variable-name> <target-value> [<timeout-in-sec>]**</br>
Tests if a variable value in the DR Redir simulator is equal to a target value and an optional timeout.
-</br>Arg: ``<variable-name> <target-value>`` - This test set pass or fail depending on if the variable is
+</br>Arg: `<variable-name> <target-value>` - This test sets pass or fail depending on if the variable is
equal to the target or not.
-</br>Arg: ``<variable-name> <target-value> <timeout-in-sec>`` - This test waits up to the timeout seconds
+</br>Arg: `<variable-name> <target-value> <timeout-in-sec>` - This test waits up to the timeout seconds
before setting pass or fail depending on if the variable value becomes equal to the target
value or not.
**drr_greater <variable-name> <target-value> [<timeout-in-sec>]**</br>
Tests if a variable value in the DR Redir simulator is greater than a target value and an optional timeout.
-</br>Arg: ``<variable-name> <target-value>`` - This test set pass or fail depending on if the variable is
+</br>Arg: `<variable-name> <target-value>` - This test sets pass or fail depending on if the variable is
greater than the target or not.
-</br>Arg: ``<variable-name> <target-value> <timeout-in-sec>`` - This test waits up to the timeout seconds
+</br>Arg: `<variable-name> <target-value> <timeout-in-sec>` - This test waits up to the timeout seconds
before setting pass or fail depending on if the variable value is greater than the target
value or not.
**drr_less <variable-name> <target-value> [<timeout-in-sec>]**</br>
Tests if a variable value in the DR Redir simulator is less than a target value and an optional timeout.
-</br>Arg: ``<variable-name> <target-value>`` - This test set pass or fail depending on if the variable is
+</br>Arg: `<variable-name> <target-value>` - This test sets pass or fail depending on if the variable is
less than the target or not.
-</br>Arg: ``<variable-name> <target-value> <timeout-in-sec>`` - This test waits up to the timeout seconds
+</br>Arg: `<variable-name> <target-value> <timeout-in-sec>` - This test waits up to the timeout seconds
before setting pass or fail depending on if the variable value is less than the target
value or not.
**drr_contain_str <variable-name> <target-value> [<timeout-in-sec>]**</br>
Tests if a variable value in the DR Redir simulator contains a substring target and an optional timeout.
-</br>Arg: ``<variable-name> <target-value>`` - This test set pass or fail depending on if the variable contains
+</br>Arg: `<variable-name> <target-value>` - This test sets pass or fail depending on if the variable contains
the target substring or not.
-</br>Arg: ``<variable-name> <target-value> <timeout-in-sec>`` - This test waits up to the timeout seconds
+</br>Arg: `<variable-name> <target-value> <timeout-in-sec>` - This test waits up to the timeout seconds
before setting pass or fail depending on if the variable value contains the target
substring or not.
@@ -203,18 +203,17 @@ Print the test result. Only once at the very end of the script.
Print all variables from the simulators and the dfc heartbeat.
In addition, comments in the file can be added using the normal comment sign in bash '#'.
-Comments that shall be visible on the screen as well as in the test case log, use ``echo "<msg>"``.
+For comments that shall be visible on the screen as well as in the test case log, use `echo "<msg>"`.
-
-##Descriptions of functions in testsuite_common.sh
+## Descriptions of functions in testsuite_common.sh
The following is a list of the available functions in a test suite file. Please see an existing test suite for examples.
**suite_setup**</br>
Sets up the test suite and prints out a heading.
-**run_tc <tc-script> <$1 from test suite script> <$2 from test suite script>**</br>
+**run_tc <tc-script> &lt;$1 from test suite script> &lt;$2 from test suite script>**</br>
Executes a test case with args from the test suite script
**suite_complete**</br>
-Print out the overall result of the executed test cases. \ No newline at end of file
+Print out the overall result of the executed test cases.
diff --git a/test/mocks/datafilecollector-testharness/dr-sim/README.md b/test/mocks/datafilecollector-testharness/dr-sim/README.md
index a258ed46d..4e7273a11 100644
--- a/test/mocks/datafilecollector-testharness/dr-sim/README.md
+++ b/test/mocks/datafilecollector-testharness/dr-sim/README.md
@@ -1,77 +1,106 @@
-###Run DR simulators as docker container
-1. Build docker container with ```docker build -t drsim_common:latest .```
-2. Run the container ```docker-compose up```
+# Run DR simulators as docker container
+
+1. Build docker container with `docker build -t drsim_common:latest .`
+2. Run the container `docker-compose up`
3. For specific behavior of the simulators, add arguments to the `command` entries in the `docker-compose.yml`.
+
For example `command: node dmaapDR.js --tc no_publish`. (No argument will assume '--tc normal'.) Run `node dmaapDR.js --printtc`
and `node dmaapDR-redir.js --printtc` for details, or see further below for the list of possible args to the simulators
-###Run DR simulators and all other simulators as one group
+# Run DR simulators and all other simulators as one group
+
See the README in the 'simulator-group' dir.
-###Run DR simulators from cmd line
+# Run DR simulators from cmd line
+
1. install nodejs
2. install npm
+
Make sure that you run these commands in the application directory "dr-sim"
+
3. `npm install express`
4. `npm install argparse`
5. `node dmaapDR.js` #keep it in the foreground, see below for a list of args to the simulator
6. `node dmaapDR_redir.js` #keep it in the foreground, see below for a list of args to the simulator
-###Arg to control the behavior of the simulators
+# Arg to control the behavior of the simulators
+
+## DR
+
+\--tc tc_normal Normal case, query response based on published files. Publish respond with ok/redirect depending on if file is published or not.</br>
+
+\--tc tc_none_published Query respond 'ok'. Publish respond with redirect.</br>
+
+\--tc tc_all_published Query respond with filename. Publish respond with 'ok'.</br>
+
+\--tc tc_10p_no_response 10% no response for query and publish. Otherwise normal case.</br>
+
+\--tc tc_10first_no_response 10 first queries and requests give no response for query and publish. Otherwise normal case.</br>
+
+\--tc tc_100first_no_response 100 first queries and requests give no response for query and publish. Otherwise normal case.</br>
+
+\--tc tc_all_delay_1s All responses delayed 1s (both query and publish).</br>
+
+\--tc tc_all_delay_10s All responses delayed 10s (both query and publish).</br>
+
+\--tc tc_10p_delay_10s 10% of responses delayed 10s, (both query and publish).</br>
-**DR**
+\--tc tc_10p_error_response 10% error response for query and publish. Otherwise normal case.</br>
- --tc tc_normal Normal case, query response based on published files. Publish respond with ok/redirect depending on if file is published or not.</br>
- --tc tc_none_published Query respond 'ok'. Publish respond with redirect.</br>
- --tc tc_all_published Query respond with filename. Publish respond with 'ok'.</br>
- --tc tc_10p_no_response 10% % no response for query and publish. Otherwise normal case.</br>
- --tc tc_10first_no_response 10 first queries and requests gives no response for query and publish. Otherwise normal case.</br>
- --tc tc_100first_no_response 100 first queries and requests gives no response for query and publish. Otherwise normal case.</br>
- --tc tc_all_delay_1s All responses delayed 1s (both query and publish).</br>
- --tc tc_all_delay_10s All responses delayed 10s (both query and publish).</br>
- --tc tc_10p_delay_10s 10% of responses delayed 10s, (both query and publish).</br>
- --tc tc_10p_error_response 10% error response for query and publish. Otherwise normal case.</br>
- --tc tc_10first_error_response 10 first queries and requests gives no response for query and publish. Otherwise normal case.</br>
- --tc tc_100first_error_response 100 first queries and requests gives no response for query and publish. Otherwise normal case.</br>
+\--tc tc_10first_error_response 10 first queries and requests give an error response for query and publish. Otherwise normal case.</br>
+\--tc tc_100first_error_response 100 first queries and requests give an error response for query and publish. Otherwise normal case.</br>
-**DR Redirect**
+## DR Redirect
- --tc_normal Normal case, all files publish and DR updated.</br>
- --tc_no_publish Ok response but no files published.</br>
- --tc_10p_no_response 10% % no response (file not published).</br>
- --tc_10first_no_response 10 first requests give no response (files not published).</br>
- --tc_100first_no_response 100 first requests give no response (files not published).</br>
- --tc_all_delay_1s All responses delayed 1s, normal publish.</br>
- --tc_all_delay_10s All responses delayed 10s, normal publish.</br>
- --tc_10p_delay_10s 10% of responses delayed 10s, normal publish.</br>
- --tc_10p_error_response 10% error response (file not published).</br>
- --tc_10first_error_response 10 first requests give error response (file not published).</br>
- --tc_100first_error_response 100 first requests give error responses (file not published).</br>
+\--tc_normal Normal case, all files publish and DR updated.</br>
+\--tc_no_publish Ok response but no files published.</br>
-###Needed environment
+\--tc_10p_no_response 10% no response (file not published).</br>
-DR
+\--tc_10first_no_response 10 first requests give no response (files not published).</br>
- DRR_SIM_IP Set to host name of the DR Redirect simulator "drsim_redir" if running the simulators in a docker private network. Otherwise to "localhost"
- DR_FEEDS A comma separated list of configured feednames and filetypes. Example "1:A,2:B:C" - Feed 1 for filenames beginning with A and feed2 for filenames beginning with B or C.
+\--tc_100first_no_response 100 first requests give no response (files not published).</br>
+
+\--tc_all_delay_1s All responses delayed 1s, normal publish.</br>
+
+\--tc_all_delay_10s All responses delayed 10s, normal publish.</br>
+
+\--tc_10p_delay_10s 10% of responses delayed 10s, normal publish.</br>
+
+\--tc_10p_error_response 10% error response (file not published).</br>
+
+\--tc_10first_error_response 10 first requests give error response (file not published).</br>
+
+\--tc_100first_error_response 100 first requests give error responses (file not published).</br>
+
+# Needed environment
+
+## DR
+
+```
+DRR_SIM_IP Set to host name of the DR Redirect simulator "drsim_redir" if running the simulators in a docker private network. Otherwise to "localhost"
+DR_FEEDS A comma separated list of configured feednames and filetypes. Example "1:A,2:B:C" - Feed 1 for filenames beginning with A and feed2 for filenames beginning with B or C.
+```
`DRR_SIM_IP` is needed for the redirected publish request to be redirected to the DR redirect server.
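For example, when running DR from the command line (feed value taken from the example above):

```
export DRR_SIM_IP=localhost
export DR_FEEDS="1:A,2:B:C"
node dmaapDR.js
```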
-DR Redirect (DRR for short)
+## DR Redirect (DRR for short)
- DR_SIM_IP Set to host name of the DR simulator "drsim" if running the simulators in a docker private network. Otherwise to "localhost"
- DR_REDIR_FEEDS Same contentd as DR_FEEDS for DR.
+```
+DR_SIM_IP Set to host name of the DR simulator "drsim" if running the simulators in a docker private network. Otherwise to "localhost"
DR_REDIR_FEEDS Same content as DR_FEEDS for DR.
+```
The DR Redirect server sends a callback to the DR server to update the list of successfully published files.
When running as a container (using an ip address from the `dfc_net` docker network) the env shall be set to 'drsim'. When running the servers from the command line, set the env variable `DR_SIM_IP=localhost`
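Correspondingly, for DR Redirect from the command line:

```
export DR_SIM_IP=localhost
export DR_REDIR_FEEDS="1:A,2:B:C"
node dmaapDR_redir.js
```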
-###APIs for statistic readout
-The simulator can be queried for statistics (use curl from cmd line or open in browser, curl used below):
+# APIs for statistic readout
-DR
+The simulator can be queried for statistics (use curl from cmd line or open in browser, curl used below):
+## DR
`curl localhost:3906/` - returns 'ok'
@@ -135,9 +164,7 @@ DR
`curl localhost:3906/ctr_publish_query_bad_file_prefix/<feed>` - returns a list of the number of publish queries with bad file prefix for a feed
-
-DR Redirect
-
+## DR Redirect
`curl localhost:3908/` - returns 'ok'
@@ -178,6 +205,3 @@ DR Redirect
`curl localhost:3908/feeds/dwl_volume` - returns a list of the number of bytes of the published files for each feed
`curl localhost:3908/dwl_volume/<feed>` - returns the number of bytes of the published files for a feed
-
-
-
diff --git a/test/mocks/datafilecollector-testharness/ftps-sftp-server/README.md b/test/mocks/datafilecollector-testharness/ftps-sftp-server/README.md
index 3bd67404a..f20b29698 100644
--- a/test/mocks/datafilecollector-testharness/ftps-sftp-server/README.md
+++ b/test/mocks/datafilecollector-testharness/ftps-sftp-server/README.md
@@ -1,27 +1,29 @@
-###Deployment of certificates: (in case of update)
+# Deployment of certificates (in case of update)
This folder is prepared with a set of keys matching DfC for test purposes.
Copy the following files from datafile-app-server/config/keys to ./tls/:
-* dfc.crt
-* ftp.crt
-* ftp.key
+- dfc.crt
+- ftp.crt
+- ftp.key
-###Docker preparations
-Source: https://docs.docker.com/install/linux/linux-postinstall/
+# Docker preparations
+
+Source: <https://docs.docker.com/install/linux/linux-postinstall/>
`sudo usermod -aG docker $USER`
then log out and log back in to activate it.
-###Prepare files for the simulator
+# Prepare files for the simulator
+
Run `prepare.sh` with an argument found in `test_cases.yml` (or add a new tc in that file) to create files (1MB, 5MB and 50MB files) and a large number of
symbolic links to these files to simulate PM files. The file names match the files in
the events produced by the MR simulator. The dirs with the files will be mounted
by the ftp containers, defined in the docker-compose file, when started
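A hypothetical invocation (TC1 is a placeholder; use a test case name actually defined in `test_cases.yml`):

```
./prepare.sh TC1
```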
-###Starting/stopping the FTPS/SFTP server(s)
+# Starting/stopping the FTPS/SFTP server(s)
Start: `docker-compose up`
@@ -30,6 +32,6 @@ Stop: Ctrl +C, then `docker-compose down` or `docker-compose down --remove-orph
If you experience issues (or port collisions), check the other currently running containers
by using `docker ps` and stop them if necessary.
+# Cleaning docker structure
-###Cleaning docker structure
-Deep cleaning: `docker system prune` \ No newline at end of file
+Deep cleaning: `docker system prune`
diff --git a/test/mocks/datafilecollector-testharness/mr-sim/README.md b/test/mocks/datafilecollector-testharness/mr-sim/README.md
index d3ca91c87..653b47e8f 100644
--- a/test/mocks/datafilecollector-testharness/mr-sim/README.md
+++ b/test/mocks/datafilecollector-testharness/mr-sim/README.md
@@ -1,45 +1,43 @@
-#MR-simulator
-This readme contains:
+# MR-simulator
-**Introduction**
+This readme contains:
-**Building and running**
+- Introduction
+- Building and running
+- Configuration
-**Configuration**
+## Introduction
-###Introduction###
The MR-sim is a python script delivering batches of events including one or more fileReady for one or more PNFs.
It is possible to configure the number of events, PNFs, consumer groups, existing or missing files, file prefixes and change identifier.
In addition, MR sim can be configured to deliver file url for up to 5 FTP servers (simulating the PNFs).
-###Building and running###
+## Building and running
+
It is possible to build and run MR-sim manually as a container if needed. In addition MR-sim can be executed as a python script, see instructions further down.
Otherwise it is recommended to use the test scripts in the auto-test dir or run all simulators in one go using scripts in the simulator-group dir.
To build and run manually as a docker container:
-1. Build docker container with ```docker build -t mrsim:latest .```
-2. Run the container ```docker-compose up```
-###Configuration###
+1. Build docker container with `docker build -t mrsim:latest .`
+2. Run the container `docker-compose up`
+
+## Configuration
+
The event pattern, called TC, of the MR-sim is controlled with an arg to the python script. See section TC info for available patterns.
All other configuration is done via environment variables.
The simulator listens to port 2222.
The following environment variables are used:
-**FTPS_SIMS** - A comma-separated list of hostname:port for the FTP servers to generate ftps file urls for. If not set MR sim will assume 'localhost:21'. Minimum 1 and maximum 5 host-port pairs can be given.
-
-**SFTP_SIMS** - A comma-separated list of hostname:port for the FTP servers to generate sftp file urls for. If not set MR sim will assume 'localhost:1022'. Minimum 1 and maximum 5 host-port pairs can be given.
-
-**NUM_FTP_SERVERS** - Number of FTP servers to use out of those specified in the envrioment variables above. The number shall be in the range 1-5.
-
-**MR_GROUPS** - A comma-separated list of consummer-group:changeId[:changeId]*. Defines which change identifier that should be used for each consumer gropu. If not set the MR-sim will assume 'OpenDcae-c12:PM_MEAS_FILES'.
+- **FTPS_SIMS** - A comma-separated list of hostname:port for the FTP servers to generate ftps file urls for. If not set MR sim will assume 'localhost:21'. Minimum 1 and maximum 5 host-port pairs can be given.
+- **SFTP_SIMS** - A comma-separated list of hostname:port for the FTP servers to generate sftp file urls for. If not set MR sim will assume 'localhost:1022'. Minimum 1 and maximum 5 host-port pairs can be given.
+- **NUM_FTP_SERVERS** - Number of FTP servers to use out of those specified in the environment variables above. The number shall be in the range 1-5.
+- **MR_GROUPS** - A comma-separated list of consumer-group:changeId[:changeId]\*. Defines which change identifier should be used for each consumer group. If not set the MR-sim will assume 'OpenDcae-c12:PM_MEAS_FILES'.
+- **MR_FILE_PREFIX_MAPPING** - A comma-separated list of changeId:filePrefix. Defines which file prefix to use for each change identifier, needed to distinguish files for each change identifier. If not set the MR-sim will assume 'PM_MEAS_FILES:A'.
-**MR_FILE_PREFIX_MAPPING** - A comma-separated list of changeId:filePrefix. Defines which file prefix to use for each change identifier, needed to distinguish files for each change identifiers. If not set the MR-sim will assume 'PM_MEAS_FILES:A
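+
+As a minimal sketch, the variables can be exported in the shell before starting the container manually (values are the defaults listed above; assumes the docker-compose file forwards them to the container):
+
+```
+export FTPS_SIMS=localhost:21
+export SFTP_SIMS=localhost:1022
+export NUM_FTP_SERVERS=1
+export MR_GROUPS=OpenDcae-c12:PM_MEAS_FILES
+docker-compose up
+```
+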
+## Statistics read-out and commands
-
-
-###Statistics read-out and commands###
The simulator can be queried for statistics and started/stopped (use curl from cmd line or open in browser, curl used below):
`curl localhost:2222` - Just returns 'Hello World'.
@@ -60,70 +58,63 @@ The simulator can be queried for statistics and started/stopped (use curl from
`curl localhost:2222/fileprefixes` - returns the setting of env var MR_FILE_PREFIX_MAPPING.
-
`curl localhost:2222/ctr_requests` - returns an integer of the number of get requests, for all groups, to the event poll path
`curl localhost:2222/groups/ctr_requests` - returns a list of integers of the number of get requests, for each consumer group, to the event poll path
`curl localhost:2222/ctr_requests/<consumer-group>` - returns an integer of the number of get requests, for the specified consumer group, to the event poll path
-
`curl localhost:2222/ctr_responses` - returns an integer of the number of get responses, for all groups, to the event poll path
`curl localhost:2222/groups/ctr_responses` - returns a list of integers of the number of get responses, for each consumer group, to the event poll path
`curl localhost:2222/ctr_responses/<consumer-group>` - returns an integer of the number of get responses, for the specified consumer group, to the event poll path
-
`curl localhost:2222/ctr_files` - returns an integer of the number of generated files for all groups
`curl localhost:2222/groups/ctr_files` - returns a list of integers of the number of generated files for each group
`curl localhost:2222/ctr_files/<consumer-group>` - returns an integer of the number of generated files for the specified group
-
`curl localhost:2222/ctr_unique_files` - returns an integer of the number of generated unique files for all groups
`curl localhost:2222/groups/ctr_unique_files` - returns a list of integers of the number of generated unique files for each group
`curl localhost:2222/ctr_unique_files/<consumer-group>` - returns an integer of the number of generated unique files for the specified group
-
-
`curl localhost:2222/ctr_events` - returns the total number of events for all groups
`curl localhost:2222/groups/ctr_events` - returns a list of the total number of events for each group
`curl localhost:2222/ctr_events/<consumer-group>` - returns the total number of events for a specified group
-
`curl localhost:2222/exe_time_first_poll` - returns the execution time in mm:ss from the first poll
`curl localhost:2222/groups/exe_time_first_poll` - returns a list of the execution time in mm:ss from the first poll for each group
`curl localhost:2222/exe_time_first_poll/<consumer-group>` - returns the execution time in mm:ss from the first poll for the specified group
-
`curl localhost:2222/ctr_unique_PNFs` - returns the number of unique PNFs in all events.
`curl localhost:2222/groups/ctr_unique_PNFs` - returns a list of the number of unique PNFs in all events for each group.
`curl localhost:2222/ctr_unique_PNFs/<consumer-group>` - returns the number of unique PNFs in all events for the specified group.
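+
+As an illustration, a counter can be polled repeatedly from the shell while a test is running (sketch only):
+
+```
+while true; do curl -s localhost:2222/ctr_events; sleep 5; done
+```
+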
+## Alternative to running python (as described below) on your machine, use the docker files
-#Alternative to running python (as described below) on your machine, use the docker files.
-1. Build docker container with ```docker build -t mrsim:latest .```
-2. Run the container ```docker-compose up```
-The behavior can be changed by argument to the python script in the docker-compose.yml
+1. Build docker container with `docker build -t mrsim:latest .`
+2. Run the container `docker-compose up`
+ The behavior can be changed by an argument to the python script in the docker-compose.yml
+## Common TC info
-##Common TC info
File names for 1MB, 5MB and 50MB files
-Files in the format: <size-in-mb>MB_<sequence-number>.tar.gz Ex. for 5MB file with sequence number 12: 5MB_12.tar.gz
+Files in the format: <size-in-mb>MB\_<sequence-number>.tar.gz. Ex. for a 5MB file with sequence number 12: 5MB_12.tar.gz
The sequence numbers are stepped so that all files have unique names
-Missing files (files that are not expected to be found in the ftp server. Format: MissingFile_<sequence-number>.tar.gz
+Missing files (files that are not expected to be found in the ftp server). Format: MissingFile\_<sequence-number>.tar.gz
+
+When the number of events is exhausted, empty replies ('\[]') are returned for the limited test cases. For endless TCs no empty replies will be given.
-When the number of events are exhausted, empty replies are returned '[]', for the limited test cases. For endless tc no empty replies will be given.
Test cases are limited unless noted as 'endless'.
TC100 - One ME, SFTP, 1 1MB file, 1 event
@@ -140,7 +131,6 @@ TC112 - One ME, SFTP, 5MB files, 100 files per event, 100 events, 1 event per po
TC113 - One ME, SFTP, 1MB files, 100 files per event, 100 events. All events in one poll.
-
TC120 - One ME, SFTP, 1MB files, 100 files per event, 100 events, 1 event per poll. 10% of replies each: no response, empty message, slow response, 404-error, malformed json
TC121 - One ME, SFTP, 1MB files, 100 files per event, 100 events, 1 event per poll. 10% missing files
@@ -195,36 +185,28 @@ TC8XX is same as TC7XX but with FTPS
TC2XXX is same as TC1XXX but with FTPS
-
## Developer workflow
-1. ```sudo apt install python3-venv```
-2. ```source .env/bin/activate/```
-3. ```pip3 install "anypackage"``` #also include in source code
-4. ```pip3 freeze | grep -v "pkg-resources" > requirements.txt``` #to create a req file
-5. ```FLASK_APP=mr-sim.py flask run```
-
- or
-
- ```python3 mr-sim.py ```
-
-6. Check/lint/format the code before commit/amed by ```autopep8 --in-place --aggressive --aggressive mr-sim.py```
-
-
-## User workflow on *NIX
+1. `sudo apt install python3-venv`
+2. `source .env/bin/activate`
+3. `pip3 install "anypackage"` #also include in source code
+4. `pip3 freeze | grep -v "pkg-resources" > requirements.txt` #to create a req file
+5. `FLASK_APP=mr-sim.py flask run`
+ or
+ `python3 mr-sim.py`
+6. Check/lint/format the code before commit/amend by `autopep8 --in-place --aggressive --aggressive mr-sim.py`
+## User workflow on \*NIX
When cloning/fetching from the repository for the first time:
+
1. `git clone`
2. `cd "..." ` #navigate to this folder
3. `source setup.sh ` #setting up virtualenv and install requirements
-
- you'll get a sourced virtualenv shell here, check prompt
+ you'll get a sourced virtualenv shell here, check prompt
4. `(env) $ python3 mr-sim.py --help`
-
- alternatively
-
- `(env) $ python3 mr-sim.py --tc1`
+ alternatively
+ `(env) $ python3 mr-sim.py --tc1`
Every time you run the script, you'll need to step into the virtualenv by following step 3 first.
@@ -241,4 +223,4 @@ When cloning/fetching from the repository first time:
7. 'pip3 install -r requirements.txt' #this will install in the local environment
8. 'python3 dfc-sim.py'
-Every time you run the script, you'll need to step into the virtualenv by step 2+6. \ No newline at end of file
+Every time you run the script, you'll need to step into the virtualenv by step 2+6.
diff --git a/test/mocks/datafilecollector-testharness/simulator-group/README.md b/test/mocks/datafilecollector-testharness/simulator-group/README.md
index 55a2467ae..1af9e3e80 100644
--- a/test/mocks/datafilecollector-testharness/simulator-group/README.md
+++ b/test/mocks/datafilecollector-testharness/simulator-group/README.md
@@ -1,4 +1,5 @@
-###Introduction
+# Introduction
+
The purpose of the "simulator-group" is to run all containers in one go with specified behavior.
Mainly this is needed for CSIT tests and for auto test but can also be used for manual testing of dfc both as a java-app
or as a manually started container. Instead of running the simulators manually as described below the auto-test cases
@@ -13,83 +14,66 @@ In general these steps are needed to run the simulator group and dfc
5. Start the simulators
6. Start dfc
-###Overview of the simulators.
+# Overview of the simulators
+
There are 5 different types of simulators. For further details, see the README.md in each simulator dir.
-1. The MR simulator emits fileready events, upon poll requests, with new and historice file references.
-It is possible to configire the change identifier and file prefixes for these identifiers and for which consumer groups
-these change identifier shall be generated. It is also possible to configure the number of events and files to generate and
-from which ftp servers the files shall be fetched from.
+1. The MR simulator emits fileready events, upon poll requests, with new and historic file references.
+ It is possible to configure the change identifier and file prefixes for these identifiers and for which consumer groups
+ these change identifiers shall be generated. It is also possible to configure the number of events and files to generate and
+ from which ftp servers the files shall be fetched.
2. The DR simulator handles the publish queries (to check if a file has previously been published) and the
-actual publish request (which results in a redirect to the DR REDIR simulator. It keeps a 'db' of published files updated by the DR REDIR simulator.
-It is possible to configure 1 or more feeds along with the accepted filename prefixes for each feed. It is also possible
-to configure the responses for the publish queries and publish requests.
+ actual publish request (which results in a redirect to the DR REDIR simulator). It keeps a 'db' of published files updated by the DR REDIR simulator.
+ It is possible to configure 1 or more feeds along with the accepted filename prefixes for each feed. It is also possible
+ to configure the responses for the publish queries and publish requests.
3. The DR REDIR simulator handles the redirect request for publish from the DR simulator. All accepted files will be stored as an empty
-file with a file name concatenated from the published file name + file size + feed id.
-It is possible to configure 1 or more feeds along with the accepted filename prefixes for each feed. It is also possible
-to configure the responses for the publish requests.
+ file with a file name concatenated from the published file name + file size + feed id.
+ It is possible to configure 1 or more feeds along with the accepted filename prefixes for each feed. It is also possible
+ to configure the responses for the publish requests.
4. The SFTP simulator(s) handles the ftp download requests. 5 of these simulators are always started and in the MR sim it is
-possible to configure the distrubution of files over these 5 servers (from 1 up to 5 severs). At start of the server, the server is
-populated with files to download.
+ possible to configure the distribution of files over these 5 servers (from 1 up to 5 servers). At start of the server, the server is
+ populated with files to download.
5. The FTPS simulator(s) is the same as the SFTP except that it uses the FTPS protocol.
+# Build the simulator images
-### Build the simulator images
Run the script `prepare-images.sh` to build the docker images for MR, DR and FTPS servers.
-###Edit simulator env variables
-
-
-
-
-###Summary of scripts and files
-`consul_config.sh` - Convert a json config file to work with dfc when manually started as java-app or container and then add that json to Consul.
-
-`dfc-internal-stats.sh` - Periodically extract jvm data and dfc internal data and print to console/file.
-
-`docker-compose-setup.sh` - Sets environment variables for the simulators and start the simulators with that settings.
-
-`docker-compose-template.yml` - A docker compose template with environment variables setting. Used for producing a docker-compose file to defined the simulator containers.
-
-`prepare-images.sh` - Script to build all needed simulator images.
-
-`setup-ftp-files-for-image.sh` - Script executed in the ftp server to create files for download.
+# Edit simulator env variables
-`sim-monitor-start.sh` - Script to install needed packages and start the simulator monitor.
+## Summary of scripts and files
-`sim-monitor.js` - The source file the simulator monitor.
+- `consul_config.sh` - Convert a json config file to work with dfc when manually started as java-app or container and then add that json to Consul.
+- `dfc-internal-stats.sh` - Periodically extract jvm data and dfc internal data and print to console/file.
+- `docker-compose-setup.sh` - Sets environment variables for the simulators and starts the simulators with those settings.
+- `docker-compose-template.yml` - A docker compose template with environment variable settings. Used for producing a docker-compose file to define the simulator containers.
+- `prepare-images.sh` - Script to build all needed simulator images.
+- `setup-ftp-files-for-image.sh` - Script executed in the ftp server to create files for download.
+- `sim-monitor-start.sh` - Script to install needed packages and start the simulator monitor.
+- `sim-monitor.js` - The source file for the simulator monitor.
+- `simulators-kill.sh` - Script to kill all the simulators
+- `simulators-start.sh` - Script to start all the simulators. All env variables need to be set prior to executing the script.
-`simulators-kill.sh` - Script to kill all the simulators
+## Preparation
-`simulators-start.sh` - Script to start all the simulators. All env variables need to be set prior to executing the script.
+Do the manual steps to prepare the simulator images:
+- Build the mr-sim image.
+- cd ../mr-sim
+- Run the docker build command to build the image for the MR simulator: `docker build -t mrsim:latest .`
+- cd ../dr-sim
+- Run the docker build command to build the image for the DR simulators: `docker build -t drsim_common:latest .`
+- cd ../ftps-sftp-server
+- Check the README.md in ftps-sftp-server dir in case the certs need to be updated.
+- Run the docker build command to build the image for the FTPS server: `docker build -t ftps_vsftpd:latest -f Dockerfile-ftps .`
-
-###Preparation
-Do the manual steps to prepare the simulator images
-
-Build the mr-sim image.
-
-cd ../mr-sim
-
-Run the docker build command to build the image for the MR simulator: 'docker build -t mrsim:latest .'
-
-cd ../dr-sim
-
-Run the docker build command to build the image for the DR simulators: `docker build -t drsim_common:latest .'
-
-cd ../ftps-sftp-server
-Check the README.md in ftps-sftp-server dir in case the cert need to be updated.
-Run the docker build command to build the image for the DR simulators: `docker build -t ftps_vsftpd:latest -f Dockerfile-ftps .'
-
-
-###Execution
+## Execution
Edit the `docker-compose-setup.sh` (or create a copy) to set up the env variables to the desired test behavior for each simulator.
See each simulator to find a description of the available settings (DR_TC, DR_REDIR_TC and MR_TC).
The following env variables shall be set (example values).
Note that NUM_FTPFILES and NUM_PNFS controls the number of ftp files created in the ftp servers.
-A total of NUM_FTPFILES * NUM_PNFS ftp files will be created in each ftp server (4 files in the below example).
+A total of NUM_FTPFILES \* NUM_PNFS ftp files will be created in each ftp server (4 files in the below example).
Large settings will be time consuming at start of the servers.
Note that the number of files must match the number of file references emitted from the MR sim.
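+
+A hedged sketch of such settings (values are hypothetical examples; see each simulator README for valid TC values):
+
+```
+export MR_TC="--tc100"   # event pattern for the MR sim (hypothetical value)
+export NUM_FTPFILES="2"  # ftp files created per PNF
+export NUM_PNFS="2"      # 2*2 = 4 files per ftp server
+```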
@@ -122,7 +106,8 @@ So farm, this only works when the simulator python script is started from the co
Kill all the containers with `simulators-kill.sh`
`simulators-start.sh` is for CSIT test and requires the env variables for the test settings to be present in the shell.
-`setup-ftp-files.for-image.sh` is for CSIT and executed when the ftp servers are started from the docker-compose-setup.sh`.
+
+`setup-ftp-files-for-image.sh` is for CSIT and executed when the ftp servers are started from `docker-compose-setup.sh`.
To make DFC able to connect to the simulator containers, DFC needs to run in host mode.
Start DFC by the following cmd: `docker run -d --network="host" --name dfc_app <dfc-image>`
@@ -130,9 +115,8 @@ Start DFC by the following cmd: `docker run -d --network="host" --name dfc_app <
`<dfc-image>` could be either the locally built image `onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server`
or the one in nexus `nexus3.onap.org:10001/onap/org.onap.dcaegen2.collectors.datafile.datafile-app-server`.
+# Start the simulator monitor
-
-###Start the simulator monitor
Start the simulator monitor server with `node sim-monitor.js` on the cmd line and then open a browser with the url `localhost:9999/mon`
to see the statistics page with data from DFC(s), MR sim, DR sim and DR redir sim.
If needed, run `npm install express` first.
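+
+For example (assuming node and npm are available on the host):
+
+```
+npm install express
+node sim-monitor.js
+```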
diff --git a/test/mocks/mass-pnf-sim/README.md b/test/mocks/mass-pnf-sim/README.md
index 07f74e2b7..f997f56ad 100644
--- a/test/mocks/mass-pnf-sim/README.md
+++ b/test/mocks/mass-pnf-sim/README.md
@@ -1,48 +1,78 @@
### Mass PNF simulator
+
The purpose of this simulator is to mimic the PNF for benchmark purposes.
This variant is based on the PNF simulator and uses several components.
The modifications focus on the following areas:
- -add a script configuring and governing multiple instances of PNF simualtor
- -removing parts which are not required for benchmark purposes.
- -add functionality which creates and maintains the ROP files
- -add functionality to query the actual ROP files and construct VES events based on them
+- add a script configuring and governing multiple instances of PNF simulator
+- removing parts which are not required for benchmark purposes.
+- add functionality which creates and maintains the ROP files
+- add functionality to query the actual ROP files and construct VES events based on them
+### Pre-configuration
-###Pre-configuration
The ipstart should align to a /28 IP address range start (e.g. 10.11.0.16, 10.11.0.32)
For debug purposes, you can use your own IP address as the VES collector; use the `ip` command to determine it.
Example:
+
+```
./mass-pnf-sim.py --bootstrap 2 --urlves http://10.148.95.??:10000/eventListener/v7 --ipfileserver 10.148.95.??? --typefileserver sftp --ipstart 10.11.0.16
+```
Note that the file creator is started at the time of bootstrapping.
Stop/start will not re-launch it.
-###Replacing VES for test purposes
-`sudo nc -vv -l -k -p 10000`
+### Replacing VES for test purposes
+
+```
+sudo nc -vv -l -k -p 10000
+```
+
+### Start
-###Start
Define the number of simulators to be launched
+
+```
./mass-pnf-sim.py --start 2
+```
+
+### Trigger
-###Trigger
+```
./mass-pnf-sim.py --trigger 2
+```
-###Trigger only a subset of the simulators
+### Trigger only a subset of the simulators
+
+The following command will trigger 0,1,2,3:
+
+```
./mass-pnf-sim.py --triggerstart 0 --triggerend 3
-#this will trigger 0,1,2,3
+```
+
+The following command will trigger 4 and 5:
+
+```
./mass-pnf-sim.py --triggerstart 4 --triggerend 5
-#this will trigger 4,5
+```
-###Stop and clean
+### Stop and clean
+
+```
./mass-pnf-sim.py --stop 2
./mass-pnf-sim.py --clean
+```
+
+### Verbose printout from Python
-###Verbose printout from Python
+```
python3 -m trace --trace --count -C . ./mass-pnf-sim.py .....
+```
+
+### Cleaning and recovery after incorrect configuration
-###Cleaning and recovery after incorrect configuration
+```
docker stop $(docker ps -aq); docker rm $(docker ps -aq)
+```
diff --git a/test/mocks/mass-pnf-sim/pnf-sim-lightweight/README.md b/test/mocks/mass-pnf-sim/pnf-sim-lightweight/README.md
index 0e2b668a4..927140571 100644
--- a/test/mocks/mass-pnf-sim/pnf-sim-lightweight/README.md
+++ b/test/mocks/mass-pnf-sim/pnf-sim-lightweight/README.md
@@ -1,16 +1,26 @@
-##Local development shortcuts:
-####To start listening on port 10000 for test purposes
-`nc -l -k -p 10000`
-####Test the command above:
-`echo "Hello World" | nc localhost 10000`
+## Local development shortcuts
-####Trigger the pnf simulator locally:
+To start listening on port 10000 for test purposes:
+
+```
+nc -l -k -p 10000
+```
+
+Test the command above:
+
+```
+echo "Hello World" | nc localhost 10000
+```
+
+Trigger the pnf simulator locally:
```
~/dev/git/integration/test/mocks/mass-pnf-sim/pnf-sim-lightweight$ curl -s -X POST -H "Content-Type: application/json" -H "X-ONAP-RequestID: 123" -H "X-InvocationID: 456" -d @config/config.json
http://localhost:5000/simulator/start
```
-#### VES event sending
+
+## VES event sending
+
The default action is to send a VES Message every 15 minutes and the total duration of the VES FileReady Message sending is 1 day (these values can be changed in config/config.json)
Message from the stdout of nc:
@@ -27,7 +37,8 @@ User-Agent: Apache-HttpClient/4.5.5 (Java/1.8.0_162)
Accept-Encoding: gzip,deflate
```
-```javascript
+```javascript
{"event":{"commonEventHeader":{"startEpochMicrosec":"1551865758690","sourceId":"val13","eventId":"registration_51865758",
"nfcNamingCode":"oam","internalHeaderFields":{},"priority":"Normal","version":"4.0.1","reportingEntityName":"NOK6061ZW3",
"sequence":"0","domain":"notification","lastEpochMicrosec":"1551865758690","eventName":"pnfRegistration_Nokia_5gDu",
@@ -36,4 +47,4 @@ Accept-Encoding: gzip,deflate
"arrayOfNamedHashMap":[{"name":"10MB.tar.gz","hashMap":{
"location":"ftpes://10.11.0.68/10MB.tar.gz","fileFormatType":"org.3GPP.32.435#measCollec",
"fileFormatVersion":"V10","compression":"gzip"}}]}}}
-``` \ No newline at end of file
+```
diff --git a/test/mocks/pnf-onboarding/README.md b/test/mocks/pnf-onboarding/README.md
index b14b34d95..b75a77631 100644
--- a/test/mocks/pnf-onboarding/README.md
+++ b/test/mocks/pnf-onboarding/README.md
@@ -1,20 +1,22 @@
-PNF Package for Integration Test
-================================
+# PNF Package for Integration Test
**NOTE: Requires openssl to be preinstalled.**
This module builds 3 PNF packages based on the files in `/src/main/resources/csarContent/`
1. unsigned package:
- `sample-pnf-1.0.1-SNAPSHOT.csar`
+ `sample-pnf-1.0.1-SNAPSHOT.csar`
2. signed packages:
- A) `sample-signed-pnf-1.0.1-SNAPSHOT.zip`
- B) `sample-signed-pnf-cms-includes-cert-1.0.1-SNAPSHOT.zip`
- The signed packages are based on ETSI SOL004 Security Option 2. They contain csar, cert and cms files. In package B cms includes cert.
+ A) `sample-signed-pnf-1.0.1-SNAPSHOT.zip`
+ B) `sample-signed-pnf-cms-includes-cert-1.0.1-SNAPSHOT.zip`
+   The signed packages are based on ETSI SOL004 Security Option 2. They contain csar, cert and cms files. In package B the cms file includes the cert.
The packages are generated by running the following command in the same directory as this readme file, i.e. the pnf-onboarding directory:
-> `$ mvn clean install`
+
+> ```
+> $ mvn clean install
+> ```
The packages will be stored in the maven generated `target` directory.
@@ -22,5 +24,7 @@ To be able to use the signed packages in SDC the `src/main/resources/securityCon
If SDC is running in containers locally then the following commands could be used to copy the root.cert to the default location in the SDC Onboarding Container. It is assumed that the commands are executed from inside the pnf-onboarding directory.
-> `$ docker exec -it <sdc-onboard-backend-container-id> mkdir -p /var/lib/jetty/cert`
-> `$ docker cp src/main/resources/securityContent/root.cert <sdc-onboard-backend-container-id>:/var/lib/jetty/cert` \ No newline at end of file
+> ```
+> $ docker exec -it <sdc-onboard-backend-container-id> mkdir -p /var/lib/jetty/cert
+> $ docker cp src/main/resources/securityContent/root.cert <sdc-onboard-backend-container-id>:/var/lib/jetty/cert
+> ```