-rw-r--r--  docs/docs_5G_oof_pci.rst                                                              4
-rw-r--r--  docs/docs_5g_rtpm.rst                                                                 4
-rw-r--r--  docs/docs_CM_flexible_designer_orchestrator.rst                                       2
-rw-r--r--  docs/docs_CM_schedule_optimizer.rst                                                  14
-rw-r--r--  docs/docs_vFW_CNF_CDS.rst                                                            28
-rw-r--r--  docs/docs_vfwHPA.rst                                                                  2
-rw-r--r--  docs/docs_vfw_edgex_k8s.rst                                                           2
-rw-r--r--  docs/docs_vipsec.rst                                                                 58
-rw-r--r--  docs/functional-requirements.csv                                                      2
-rw-r--r--  docs/release-notes.rst                                                               12
-rw-r--r--  test/security/check_certificates/check_certificates/check_certificates_validity.py  315
11 files changed, 380 insertions, 63 deletions
diff --git a/docs/docs_5G_oof_pci.rst b/docs/docs_5G_oof_pci.rst
index 6c0a2608f..8edabf40c 100644
--- a/docs/docs_5G_oof_pci.rst
+++ b/docs/docs_5G_oof_pci.rst
@@ -41,7 +41,7 @@ In Frankfurt release, the following are the main enhancements:
- In addition, the first step towards O-RAN alignment is being taken with SDN-C (R) being able to receive a DMaaP
message containing configuration updates (which would be triggered when a neighbor-list-change occurs in the RAN
  and is communicated to ONAP over VES). Details of this implementation are available at:
- https://wiki.onap.org/display/DW/CM+Notification+Support+in+ONAP
+ https://wiki.onap.org/display/DW/CM+Notification+Support+in+ONAP
The end-to-end setup for the use case requires a Config DB which stores the cell related details of the RAN.
@@ -95,7 +95,7 @@ Installation: https://wiki.onap.org/display/DW/Demo+setup+steps+for+Frankfurt
Son-Handler installation:
-https://onap.readthedocs.io/en/latest/submodules/dcaegen2.git/docs/sections/services/son-handler/installation.html
+https://docs.onap.org/projects/onap-dcaegen2/en/frankfurt/sections/services/son-handler/installation.html?highlight=dcaegen2
Test Status and Plans
diff --git a/docs/docs_5g_rtpm.rst b/docs/docs_5g_rtpm.rst
index eaed6786d..5ecab4b19 100644
--- a/docs/docs_5g_rtpm.rst
+++ b/docs/docs_5g_rtpm.rst
@@ -18,8 +18,8 @@ The Real-Time Performance Measurements support allows for a PNF to send streamin
Component and API descriptions can be found under:
-- `High Volume VNF Event Streaming (HV-VES) Collector <https://onap.readthedocs.io/en/latest/submodules/dcaegen2.git/docs/sections/services/ves-hv/index.html>`_
-- `HV-VES (High Volume VES) <https://onap.readthedocs.io/en/latest/submodules/dcaegen2.git/docs/sections/apis/ves-hv/index.html#hv-ves-high-volume-ves>`_
+- `High Volume VNF Event Streaming (HV-VES) Collector <https://docs.onap.org/projects/onap-dcaegen2/en/frankfurt/sections/services/ves-hv/index.html>`_
+- `HV-VES (High Volume VES) <https://docs.onap.org/projects/onap-dcaegen2/en/frankfurt/sections/apis/ves-hv/index.html#hv-ves-high-volume-ves>`_
How to verify
~~~~~~~~~~~~~
diff --git a/docs/docs_CM_flexible_designer_orchestrator.rst b/docs/docs_CM_flexible_designer_orchestrator.rst
index 3a9dd7bfe..0cfd703b7 100644
--- a/docs/docs_CM_flexible_designer_orchestrator.rst
+++ b/docs/docs_CM_flexible_designer_orchestrator.rst
@@ -287,4 +287,4 @@ part of the Dublin release. The others were not part of the release but
are available to test with your vNF. Please refer to the Scale out
release notes for further information.
-https://onap.readthedocs.io/en/latest/submodules/integration.git/docs/docs_scaleout.html#docs-scaleout
+https://docs.onap.org/projects/onap-integration/en/frankfurt/docs_scaleout.html
diff --git a/docs/docs_CM_schedule_optimizer.rst b/docs/docs_CM_schedule_optimizer.rst
index 9da2e5337..28946b54d 100644
--- a/docs/docs_CM_schedule_optimizer.rst
+++ b/docs/docs_CM_schedule_optimizer.rst
@@ -1,15 +1,15 @@
.. This work is licensed under a Creative Commons Attribution 4.0
International License. http://creativecommons.org/licenses/by/4.0
-
-.. _docs_CM_schedule_optimizer:
-Change Management Schedule Optimization
+.. _docs_CM_schedule_optimizer:
+
+Change Management Schedule Optimization
-------------------------------------------------------------
-Description
+Description
~~~~~~~~~~~~~~
-The change management schedule optimizer automatically identifies a conflict-free schedule for executing changes across multiple network function instances. It takes into account constraints such as concurrency limits (how many instances can be executed simultaneously), time preferences (e.g., night time maintenance windows with low traffic volumes) and applies optimization techniques to generate schedules.
+The change management schedule optimizer automatically identifies a conflict-free schedule for executing changes across multiple network function instances. It takes into account constraints such as concurrency limits (how many instances can be executed simultaneously), time preferences (e.g., night time maintenance windows with low traffic volumes) and applies optimization techniques to generate schedules.
-More details can be found here:
-https://onap.readthedocs.io/en/latest/submodules/optf/cmso.git/docs/index.html \ No newline at end of file
+More details can be found here:
+https://docs.onap.org/projects/onap-optf-cmso/en/latest/index.html#master-index
diff --git a/docs/docs_vFW_CNF_CDS.rst b/docs/docs_vFW_CNF_CDS.rst
index 77b618e5b..26bfe083b 100644
--- a/docs/docs_vFW_CNF_CDS.rst
+++ b/docs/docs_vFW_CNF_CDS.rst
@@ -190,21 +190,21 @@ Changes done:
- Instantiation broker
  The broker implements `infra_workload`_ API used to handle the vf-module instantiation request coming from the SO. User directives were replaced by SDNC directives, which also changes how the a'la carte instantiation method works from the VID. There is no need to specify user directives delivered from a separate file. Instead, SDNC directives are delivered through SDNC preloading (a'la carte instantiation) or through the resource assignment performed by the CDS (Macro flow instantiation).
-
-
+
+
For helm package instantiation following parameters have to be delivered in the SDNC directives:
-
-
+
+
======================== ==============================================
-
+
Variable Description
-
+
------------------------ ----------------------------------------------
-
- k8s-rb-profile-name Name of the override profile
-
+
+ k8s-rb-profile-name Name of the override profile
+
k8s-rb-profile-namespace Name of the namespace for created helm package
-
+
======================== ==============================================
- Default profile support was added to the plugin
@@ -293,7 +293,7 @@ modify existing k8s helm templates for each create CNF instance. It opens anothe
chartpath: templates/deployment.yaml
-Above we have exemplary manifest file of the RB profile. Since Frankfurt *override_values.yaml* file does not need to be used as instantiation values are passed to the plugin over Instance API of k8s plugin. In the example profile contains additional k8s helm template which will be added on demand
+Above we have exemplary manifest file of the RB profile. Since Frankfurt *override_values.yaml* file does not need to be used as instantiation values are passed to the plugin over Instance API of k8s plugin. In the example profile contains additional k8s helm template which will be added on demand
to the helm package during its installation. In our case, depending on the SO instantiation request input parameters, vPGN helm package can be enriched with additional ssh service. Such service will be dynamically added to the profile by CDS and later on CDS will upload whole custom RB profile to multicloud/k8s plugin.
In order to support generation and upload of profile, our vFW CBA model has enhanced **resource-assignment** workflow which contains additional steps, **profile-modification** and **profile-upload**. For the last step custom Kotlin script included in the CBA is used to upload K8S profile into multicloud/k8s plugin.
@@ -337,7 +337,7 @@ In order to support generation and upload of profile, our vFW CBA model has enha
}
},
-Profile generation step uses embedded into CDS functionality of templates processing and on its basis ssh port number (specified in the SO request as vpg-management-port) is included in the ssh service helm template.
+Profile generation step uses embedded into CDS functionality of templates processing and on its basis ssh port number (specified in the SO request as vpg-management-port) is included in the ssh service helm template.
::
@@ -361,7 +361,7 @@ Profile generation step uses embedded into CDS functionality of templates proces
chart: {{ .Chart.Name }}
The upload of the profile is conducted with the CDS capability to execute Kotlin scripts, which allows defining any required controller logic. In our case we use it to implement the decision point and the mechanisms of profile generation and upload.
-During the generation CDS extracts the RB profile template included in the CBA, includes there generated ssh service helm template, modifies the manifest of RB template by adding there ssh service and after its archivisation sends the profile to
+During the generation CDS extracts the RB profile template included in the CBA, includes there generated ssh service helm template, modifies the manifest of RB template by adding there ssh service and after its archivisation sends the profile to
k8s plugin.
::
@@ -2489,7 +2489,7 @@ Multiple lower level bugs/issues were also found during use case development
.. _SDC-2776: https://jira.onap.org/browse/SDC-2776
.. _MULTICLOUD-941: https://jira.onap.org/browse/MULTICLOUD-941
.. _CCSDK-2155: https://jira.onap.org/browse/CCSDK-2155
-.. _infra_workload: https://docs.onap.org/en/latest/submodules/multicloud/framework.git/docs/specs/multicloud_infra_workload.html
+.. _infra_workload: https://docs.onap.org/projects/onap-multicloud-framework/en/latest/specs/multicloud_infra_workload.html?highlight=multicloud
.. _SDNC-1116: https://jira.onap.org/browse/SDNC-1116
.. _SO-2727: https://jira.onap.org/browse/SO-2727
.. _SDNC-1109: https://jira.onap.org/browse/SDNC-1109
diff --git a/docs/docs_vfwHPA.rst b/docs/docs_vfwHPA.rst
index 015b725e6..ed64e5e2a 100644
--- a/docs/docs_vfwHPA.rst
+++ b/docs/docs_vfwHPA.rst
@@ -219,7 +219,7 @@ If an update is needed, the update can be done via rest using curl or postman
}'
-9. Register new cloud regions. This can be done using instructions (Step 1 to Step 3) on this `page <https://onap.readthedocs.io/en/latest/submodules/multicloud/framework.git/docs/multicloud-plugin-windriver/UserGuide-MultiCloud-WindRiver-TitaniumCloud.html#tutorial-onboard-instance-of-wind-river-titanium-cloud>`_. The already existing CloudOwner and cloud complex can be used. If step 3 does not work using the k8s ip and external port. It can be done using the internal ip address and port. Exec into any pod and run the command from the pod.
+9. Register new cloud regions. This can be done using instructions (Step 1 to Step 3) on this `page <https://docs.onap.org/projects/onap-multicloud-framework/en/latest/multicloud-plugin-windriver/UserGuide-MultiCloud-WindRiver-TitaniumCloud.html?highlight=multicloud>`_. The already existing CloudOwner and cloud complex can be used. If step 3 does not work using the k8s IP and external port, it can be done using the internal IP address and port. Exec into any pod and run the command from the pod.
- Get msb-iag internal ip address and port
diff --git a/docs/docs_vfw_edgex_k8s.rst b/docs/docs_vfw_edgex_k8s.rst
index a25b349a2..e860feede 100644
--- a/docs/docs_vfw_edgex_k8s.rst
+++ b/docs/docs_vfw_edgex_k8s.rst
@@ -280,7 +280,7 @@ the service-subscription can be added to that object.
An example is shown below for K8s cloud but following the steps 1,2,3
from
-`here <https://onap.readthedocs.io/en/latest/submodules/multicloud/framework.git/docs/multicloud-plugin-windriver/UserGuide-MultiCloud-WindRiver-TitaniumCloud.html#tutorial-onboard-instance-of-wind-river-titanium-cloud>`__.
+`here <https://docs.onap.org/projects/onap-multicloud-framework/en/latest/multicloud-plugin-windriver/UserGuide-MultiCloud-WindRiver-TitaniumCloud.html?highlight=multicloud>`__.
The sample input below is for k8s cloud type.
**Step 1**: Cloud Registration/ Create a cloud region to represent the instance
diff --git a/docs/docs_vipsec.rst b/docs/docs_vipsec.rst
index 755d4c085..4ec8c6f7f 100644
--- a/docs/docs_vipsec.rst
+++ b/docs/docs_vipsec.rst
@@ -28,7 +28,7 @@ Install OOM ONAP using the deploy script in the integration repo. Instructions f
1. Check that all the required components were deployed;
-
+
``oom-rancher# helm list``
2. Check the state of the pods;
@@ -37,14 +37,14 @@ Install OOM ONAP using the deploy script in the integration repo. Instructions f
3. Run robot health check
- ``oom-rancher# cd oom/kubernetes/robot``
+ ``oom-rancher# cd oom/kubernetes/robot``
``oom-rancher# ./ete-k8s.sh onap health``
Ensure all the required components pass the health tests
4. Modify the SO bpmn configmap to change the SO vnf adapter endpoint to v2
-
- ``oom-rancher# kubectl -n onap edit configmap dev-so-so-bpmn-infra-app-configmap``
+
+ ``oom-rancher# kubectl -n onap edit configmap dev-so-so-bpmn-infra-app-configmap``
``- vnf:``
@@ -73,7 +73,7 @@ Install OOM ONAP using the deploy script in the integration repo. Instructions f
``oom-rancher# ./demo-k8s.sh onap init``
-7. Create HPA flavors in cloud regions to be registered with ONAP. All HPA flavor names must start with onap. During our tests, 3 cloud regions were registered and we created flavors in each cloud. The flavors match the flavors described in the test plan `here <https://wiki.onap.org/pages/viewpage.action?pageId=41421112>`_.
+7. Create HPA flavors in cloud regions to be registered with ONAP. All HPA flavor names must start with onap. During our tests, 3 cloud regions were registered and we created flavors in each cloud. The flavors match the flavors described in the test plan `here <https://wiki.onap.org/pages/viewpage.action?pageId=41421112>`_.
- **Cloud Region One**
@@ -81,7 +81,7 @@ Install OOM ONAP using the deploy script in the integration repo. Instructions f
``#nova flavor-create onap.hpa.flavor11 111 8 20 2``
``#nova flavor-key onap.hpa.flavor11 set hw:mem_page_size=2048``
-
+
**Flavor12**
``#nova flavor-create onap.hpa.flavor12 112 12 20 2``
@@ -90,9 +90,9 @@ Install OOM ONAP using the deploy script in the integration repo. Instructions f
``#openstack aggregate create --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:3 aggr121``
``#openstack flavor set onap.hpa.flavor12 --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:3``
-
+
**Flavor13**
- ``#nova flavor-create onap.hpa.flavor13 113 12 20 2``
+ ``#nova flavor-create onap.hpa.flavor13 113 12 20 2``
``#nova flavor-key onap.hpa.flavor13 set hw:mem_page_size=2048``
@@ -110,7 +110,7 @@ Install OOM ONAP using the deploy script in the integration repo. Instructions f
``#nova flavor-key onap.hpa.flavor21 set hw:cpu_policy=dedicated``
``#nova flavor-key onap.hpa.flavor21 set hw:cpu_thread_policy=isolate``
-
+
**Flavor22**
``#nova flavor-create onap.hpa.flavor22 222 12 20 2``
@@ -119,9 +119,9 @@ Install OOM ONAP using the deploy script in the integration repo. Instructions f
``#openstack aggregate create --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:2 aggr221``
``#openstack flavor set onap.hpa.flavor22 --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:2``
-
+
**Flavor23**
- ``#nova flavor-create onap.hpa.flavor23 223 12 20 2``
+ ``#nova flavor-create onap.hpa.flavor23 223 12 20 2``
``#nova flavor-key onap.hpa.flavor23 set hw:mem_page_size=2048``
@@ -139,20 +139,20 @@ Install OOM ONAP using the deploy script in the integration repo. Instructions f
``#nova flavor-key onap.hpa.flavor31 set hw:cpu_policy=dedicated``
``#nova flavor-key onap.hpa.flavor31 set hw:cpu_thread_policy=isolate``
-
+
**Flavor32**
``#nova flavor-create onap.hpa.flavor32 332 8192 20 2``
``#nova flavor-key onap.hpa.flavor32 set hw:mem_page_size=1048576``
-
+
**Flavor33**
- ``#nova flavor-create onap.hpa.flavor33 333 12 20 2``
+ ``#nova flavor-create onap.hpa.flavor33 333 12 20 2``
``#nova flavor-key onap.hpa.flavor33 set hw:mem_page_size=2048``
``#openstack aggregate create --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:1 aggr331``
- ``#openstack flavor set onap.hpa.flavor33 --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:1``
+ ``#openstack flavor set onap.hpa.flavor33 --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:1``
8. Check that the cloud complex has the right values and update if it does not. Required values are;
@@ -205,7 +205,7 @@ If an update is needed, the update can be done via rest using curl or postman
}'
-9. Register new cloud regions. This can be done using instructions (Step 1 to Step 3) on this `page <https://onap.readthedocs.io/en/latest/submodules/multicloud/framework.git/docs/multicloud-plugin-windriver/UserGuide-MultiCloud-WindRiver-TitaniumCloud.html#tutorial-onboard-instance-of-wind-river-titanium-cloud>`_. The already existing CloudOwner and cloud complex can be used. If step 3 does not work using the k8s ip and external port. It can be done using the internal ip address and port. Exec into any pod and run the command from the pod.
+9. Register new cloud regions. This can be done using instructions (Step 1 to Step 3) on this `page <https://docs.onap.org/projects/onap-multicloud-framework/en/latest/multicloud-plugin-windriver/UserGuide-MultiCloud-WindRiver-TitaniumCloud.html?highlight=multicloud>`_. The already existing CloudOwner and cloud complex can be used. If step 3 does not work using the k8s IP and external port, it can be done using the internal IP address and port. Exec into any pod and run the command from the pod.
- Get msb-iag internal ip address and port
@@ -215,7 +215,7 @@ If an update is needed, the update can be done via rest using curl or postman
``oom-rancher# kubectl exec dev-oof-oof-6c848594c5-5khps -it -- bash``
-10. Put required subscription list into tenant for all the newly added cloud regions. An easy way to do this is to do a get on the default cloud region, copy the tenant information with the subscription. Then paste it in your put command and modify the region id, tenant-id, tenant-name and resource-version.
+10. Put required subscription list into tenant for all the newly added cloud regions. An easy way to do this is to do a get on the default cloud region, copy the tenant information with the subscription. Then paste it in your put command and modify the region id, tenant-id, tenant-name and resource-version.
**GET COMMAND**
@@ -360,14 +360,14 @@ If an update is needed, the update can be done via rest using curl or postman
}
}'
-
+
11. Onboard the vFW HPA template. The templates can be gotten from the `demo <https://github.com/onap/demo>`_ repo. The heat and env files used are located in demo/heat/vFW_HPA/vFW/. Create a zip file using the files. For onboarding instructions see steps 4 to 9 of `vFWCL instantiation, testing and debugging <https://wiki.onap.org/display/DW/vFWCL+instantiation%2C+testing%2C+and+debuging>`_. Note that in step 5, only one VSP is created. For the VSP the option to submit for testing in step 5cii was not shown. So you can check in and certify the VSP and proceed to step 6.
12. Get the parameters (model info, model invariant id...etc) required to create a service instance via rest. This can be done by creating a service instance via VID as in step 10 of `vFWCL instantiation, testing and debugging <https://wiki.onap.org/display/DW/vFWCL+instantiation%2C+testing%2C+and+debuging>`_. After creating the service instance, exec into the SO bpmn pod and look into the /app/logs/bpmn/debug.log file. Search for the service instance and look for its request details. Then populate the parameters required to create a service instance via rest in step 13 below.
13. Create a service instance rest request but do not create service instance yet. Specify OOF as the homing solution and multicloud as the orchestrator. Be sure to use a service instance name that does not exist and populate the parameters with values gotten from step 12.
-::
+::
curl -k -X POST \
http://{{k8s}}:30277/onap/so/infra/serviceInstances/v6 \
@@ -448,14 +448,14 @@ To Update a policy, use the following curl command. Modify the policy as require
"onapName": "SampleDemo",
"policyScope": "OSDF_DUBLIN"
}' 'https://pdp:8081/pdp/api/updatePolicy'
-
+
To delete a policy, use two commands below to delete from PDP and PAP
**DELETE POLICY INSIDE PDP**
::
-
+
curl -k -v -H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'ClientAuth: cHl0aG9uOnRlc3Q=' \
@@ -468,7 +468,7 @@ To delete a policy, use two commands below to delete from PDP and PAP
**DELETE POLICY INSIDE PAP**
::
-
+
curl -k -v -H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'ClientAuth: cHl0aG9uOnRlc3Q=' \
@@ -495,7 +495,7 @@ Create Policy
-Push Policy
+Push Policy
::
@@ -506,7 +506,7 @@ Push Policy
}' 'https://pdp:8081/pdp/api/pushPolicy'
-
+
17. Create Service Instance using step 13 above
18. Check bpmn logs to ensure that OOF sent homing response and flavor directives.
@@ -538,7 +538,7 @@ Push Policy
"vnf-vms": []
},
-
+
"vnf-parameters": [
{
"vnf-parameter-name":"vf_module_id",
@@ -787,13 +787,13 @@ Push Policy
"service-type": "8c071bd1-c361-4157-8282-3fef7689d32e",
"vnf-name": "ipsec-test",
"vnf-type": "Ipsec..base_vipsec..module-0"
-
+
}
}
}}
-
-Change parameters based on your environment.
+
+Change parameters based on your environment.
**Note**
@@ -804,5 +804,5 @@ Change parameters based on your environment.
"service-type": "8c071bd1-c361-4157-8282-3fef7689d32e", <-- same as Service Instance ID
"vnf-name": "ipsec-test", <-- name to be given to the vf module
"vnf-type": "Ipsec..base_vipsec..module-0" <-- can be found on the VID - VF Module dialog screen - Model Name
-
+
21. Create vf module (11g of `vFWCL instantiation, testing and debugging <https://wiki.onap.org/display/DW/vFWCL+instantiation%2C+testing%2C+and+debuging>`_). If everything worked properly, you should see the stack created in your VIM(WR titanium cloud openstack in this case).
diff --git a/docs/functional-requirements.csv b/docs/functional-requirements.csv
index 5e75fb510..ad90917aa 100644
--- a/docs/functional-requirements.csv
+++ b/docs/functional-requirements.csv
@@ -6,6 +6,6 @@ VSP Compliance and Validation Check within SDC,`wiki page <https://wiki.onap.org
Enable PNF software version at onboarding,`wiki page <https://jira.onap.org/browse/REQ-88?src=confmacro>`__,A.Schmid
xNF communication security enhancements, `wiki page <https://wiki.onap.org/display/DW/xNF+communication+security+enhancements+-+Tests+Description+and+Status>`__,M.Przybysz
ETSI Alignment SO plugin to support SOL003 to connect to an external VNFM,`wiki page <https://wiki.onap.org/display/DW/ETSI+Alignment+Support>`__,F.Oliveira Byung-Woo Jun
-Integration of CDS as an Actor, `wiki page <https://docs.onap.org/en/latest/submodules/policy/parent.git/docs/development/actors/cds/cds.html>`__, B.Sakoto R.K.Verma Y.Malakov
+Integration of CDS as an Actor, `wiki page <https://docs.onap.org/projects/onap-ccsdk-cds/en/latest/CDS_Designer_Guide.html?highlight=actors%2Fcds>`__, B.Sakoto R.K.Verma Y.Malakov
3rd Party Operational Domain Manager, `wiki page <https://wiki.onap.org/display/DW/Third-party+Operational+Domain+Manager>`__, D.Patel
Configuration & persistency, `wiki page <https://wiki.onap.org/pages/viewpage.action?pageId=64003184>`__,Reshmasree c Swaminathan S
diff --git a/docs/release-notes.rst b/docs/release-notes.rst
index 4f38d5892..80170dd4c 100644
--- a/docs/release-notes.rst
+++ b/docs/release-notes.rst
@@ -97,12 +97,14 @@ https://nexus.onap.org/content/repositories/releases/org/onap/demo/vnf/
Robot Test Suites
-----------------
-Version: 1.6.3
+Version: 1.6.4
+..............
+
+:Release Date: 2020-07-07
+:sha1: f863e0060b9e0b13822074d0180cab11aed87ad5
-:Release Date: 2020-06-03
-:sha1: 8f4f6f64eb4626433e6f32eeb146a71d3c840935
**New Features**
-- bug Fixes(Teardown, control loop, alotteed properties)
-- CI support for hvves, 5GBulkPm and pnf-registrate
+- Some corrections for vLB CDS
+- Change owning-entity-id from hard coded to variable
diff --git a/test/security/check_certificates/check_certificates/check_certificates_validity.py b/test/security/check_certificates/check_certificates/check_certificates_validity.py
new file mode 100644
index 000000000..a1eed1e37
--- /dev/null
+++ b/test/security/check_certificates/check_certificates/check_certificates_validity.py
@@ -0,0 +1,315 @@
+#!/usr/bin/env python3
+# COPYRIGHT NOTICE STARTS HERE
+#
+# Copyright 2020 Orange, Ltd.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+# COPYRIGHT NOTICE ENDS HERE
+
+# Check all the kubernetes pods, evaluate the certificate and build a
+# certificate dashboard.
+#
+# Dependencies:
+# See requirements.txt
+# The dashboard is based on bulma framework
+#
+# Environment:
+# This script should be run on the local machine which has network access to
+# the onap K8S cluster.
+# It requires k8s cluster config file on local machine
+# It requires also the ONAP IP provided through an env variable ONAP_IP
+# ONAP_NAMESPACE env variable is also considered
+# if not set we set it to onap
+# Example usage:
+# python check_certificates_validity.py
+# the summary html page will be generated where the script is launched
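The environment handling described in the header comments above can be sketched in isolation (an illustrative aside, not part of the committed script; `resolve_env` is a hypothetical helper name):

```python
# Sketch of the env-var behaviour described above: the namespace falls back
# to 'onap' when ONAP_NAMESPACE is unset, and ONAP_IP may be absent -- it is
# only required when the script runs in nodeport mode.
def resolve_env(env):
    namespace = env.get("ONAP_NAMESPACE", "onap")
    onap_ip = env.get("ONAP_IP")  # may be None outside nodeport mode
    return namespace, onap_ip
```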
+"""
+Check ONAP certificates
+"""
+import argparse
+import logging
+import os
+import ssl
+import sys
+import OpenSSL
+from datetime import datetime
+from kubernetes import client, config
+from jinja2 import Environment, FileSystemLoader, select_autoescape
+
+# Logger
+LOG_LEVEL = 'INFO'
+logging.basicConfig()
+LOGGER = logging.getLogger("Gating-Index")
+LOGGER.setLevel(LOG_LEVEL)
+CERT_MODES = ['nodeport', 'ingress', 'internal']
+EXP_CRITERIA_MIN = 30
+EXP_CRITERIA_MAX = 389
+EXPECTED_CERT_STRING = "C=US;O=ONAP;OU=OSAAF;CN=intermediateCA_9"
+RESULT_PATH = "."
+
+
+# Get arguments
+parser = argparse.ArgumentParser()
+parser.add_argument(
+ '-m',
+ '--mode',
+ choices=CERT_MODES,
+ help='Mode (nodeport, ingress, internal). If not set all modes are tried',
+ default='nodeport')
+parser.add_argument(
+ '-i',
+ '--ip',
+ help='ONAP IP needed (for nodeport mode)',
+ default=os.environ.get('ONAP_IP'))
+parser.add_argument(
+ '-n',
+ '--namespace',
+ help='ONAP namespace',
+ default='onap')
+parser.add_argument(
+ '-d',
+ '--dir',
+ help='Result directory',
+ default=RESULT_PATH)
+
+args = parser.parse_args()
+
+# Get the ONAP namespace
+onap_namespace = args.namespace
+LOGGER.info("Verification of the %s certificates started", onap_namespace)
+
+# Nodeport specific section
+# Retrieve the kubernetes IP for mode nodeport
+if args.mode == "nodeport":
+ if args.ip is None:
+ LOGGER.error(
+ "In nodeport mode, the IP of the ONAP cluster is required." +
+ "The value can be set using -i option " +
+ "or retrieved from the ONAP_IP env variable")
+ exit(parser.print_usage())
+ try:
+ nodeports_xfail_list = []
+ with open('nodeports_xfail.txt', 'r') as f:
+ first_line = f.readline()
+ for line in f:
+ nodeports_xfail_list.append(
+ line.split(" ", 1)[0].strip('\n'))
+ LOGGER.info("nodeports xfail list: %s",
+ nodeports_xfail_list)
+ except KeyError:
+ LOGGER.error("Please set the environment variable ONAP_IP")
+ sys.exit(1)
+ except FileNotFoundError:
+ LOGGER.warning("Nodeport xfail list not found")
+
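As an aside (not part of the committed script), the xfail-file parsing above — read and discard the header line, then keep the first whitespace-separated token of each remaining line — can be sketched on its own; `parse_xfail` and the sample lines are hypothetical:

```python
def parse_xfail(lines):
    """Skip the header line, then keep the first token of each line,
    mirroring the split/strip performed in the try block above."""
    return [line.split(" ", 1)[0].strip("\n") for line in lines[1:]]

# Assumed file shape: "<nodeport> <comment>" per line, first line a header.
sample = [
    "# nodeports waived by SECCOM\n",
    "30226 aaf-service\n",
    "30497 dcae-ves\n",
]
```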
+# Kubernetes section
+# retrieve the candidate ports first
+k8s_config = config.load_kube_config()
+
+core = client.CoreV1Api()
+api_instance = client.ExtensionsV1beta1Api(
+ client.ApiClient(k8s_config))
+k8s_services = core.list_namespaced_service(onap_namespace).items
+k8s_ingress = api_instance.list_namespaced_ingress(onap_namespace).items
+
+
+def get_certifificate_info(host, port):
+ LOGGER.debug("Host: %s", host)
+ LOGGER.debug("Port: %s", port)
+ cert = ssl.get_server_certificate(
+ (host, port))
+ LOGGER.debug("get certificate")
+ x509 = OpenSSL.crypto.load_certificate(
+ OpenSSL.crypto.FILETYPE_PEM, cert)
+
+ LOGGER.debug("get certificate")
+ exp_date = datetime.strptime(
+ x509.get_notAfter().decode('ascii'), '%Y%m%d%H%M%SZ')
+ LOGGER.debug("Expiration date retrieved %s", exp_date)
+ issuer = x509.get_issuer().get_components()
+
+ issuer_info = ''
+ # format issuer nicely
+ for issuer_info_key, issuer_info_val in issuer:
+ issuer_info += (issuer_info_key.decode('utf-8') + "=" +
+ issuer_info_val.decode('utf-8') + ";")
+ cert_validity = False
+ if issuer_info[:-1] == EXPECTED_CERT_STRING:
+ cert_validity = True
+
+ return {'expiration_date': exp_date,
+ 'issuer': issuer_info[:-1],
+ 'validity': cert_validity}
+
+
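The issuer formatting and comparison done at the end of the function above can be checked in isolation (a sketch; `format_issuer` is a hypothetical name for the inline loop, operating on the (key, value) byte tuples that pyOpenSSL's `get_components()` returns):

```python
# The OSAAF intermediate CA string the script treats as valid.
EXPECTED_CERT_STRING = "C=US;O=ONAP;OU=OSAAF;CN=intermediateCA_9"

def format_issuer(components):
    """Join X.509 issuer components as 'K=V;K=V' with no trailing ';',
    matching the loop and the [:-1] slice in get_certifificate_info above."""
    joined = "".join(k.decode("utf-8") + "=" + v.decode("utf-8") + ";"
                     for k, v in components)
    return joined[:-1]

issuer = [(b"C", b"US"), (b"O", b"ONAP"), (b"OU", b"OSAAF"), (b"CN", b"intermediateCA_9")]
```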
+def test_services(k8s_services, mode):
+ success_criteria = True # success criteria per scan
+ # looks for the certificates
+ node_ports_list = []
+ node_ports_ssl_error_list = []
+ node_ports_connection_error_list = []
+ node_ports_type_error_list = []
+ node_ports_reset_error_list = []
+
+ # for node ports and internal we consider the services
+ # for the ingress we consider the ingress
+ for service in k8s_services:
+ try:
+ for port in service.spec.ports:
+ # For nodeport mode, we consider
+ # - the IP of the cluster
+ # - spec.port.node_port
+ #
+ # For internal mode, we consider
+ # - spec.selector.app
+ # - spec.port.port
+ test_name = service.metadata.name
+ test_port = None
+ error_waiver = False # waiver per port
+ if mode == 'nodeport':
+ test_url = args.ip
+ test_port = port.node_port
+
+ # Retrieve the nodeport xfail list
+ # to consider SECCOM waiver if needed
+ if test_port in nodeports_xfail_list:
+ error_waiver = True
+ else: # internal mode
+ test_port = port.port
+ test_url = ''
+ # in Internal mode there are 2 types
+ # app
+ # app.kubernetes.io/name
+ try:
+ test_url = service.spec.selector['app']
+ except KeyError:
+ test_url = service.spec.selector['app.kubernetes.io/name']
+
+ if test_port is not None:
+ LOGGER.info(
+ "Look for certificate %s (%s:%s)",
+ test_name,
+ test_url,
+ test_port)
+ cert_info = get_certifificate_info(test_url, test_port)
+ exp_date = cert_info['expiration_date']
+ LOGGER.info("Expiration date retrieved %s", exp_date)
+ # calculate the remaining time
+ delta_time = (exp_date - datetime.now()).days
+
+ # Test criteria
+ if error_waiver:
+ LOGGER.info("Port found in the xfail list," +
+ "do not consider it for success criteria")
+ else:
+ if (delta_time < EXP_CRITERIA_MIN or
+ delta_time > EXP_CRITERIA_MAX):
+ success_criteria = False
+ if cert_info['validity'] is False:
+ success_criteria = False
+ # add certificate to the list
+ node_ports_list.append(
+ {'pod_name': test_name,
+ 'pod_port': test_port,
+ 'expiration_date': str(exp_date),
+ 'remaining_days': delta_time,
+ 'cluster_ip': service.spec.cluster_ip,
+ 'issuer': cert_info['issuer'],
+ 'validity': cert_info['validity']})
+ else:
+ LOGGER.debug("Port value retrieved as None")
+ except ssl.SSLError as e:
+ LOGGER.exception("Bad certificate for port %s" % port)
+ node_ports_ssl_error_list.append(
+ {'pod_name': test_name,
+ 'pod_port': test_port,
+ 'error_details': str(e)})
+ except ConnectionRefusedError as e:
+ LOGGER.exception("ConnectionrefusedError for port %s" % port)
+ node_ports_connection_error_list.append(
+ {'pod_name': test_name,
+ 'pod_port': test_port,
+ 'error_details': str(e)})
+ except TypeError as e:
+ LOGGER.exception("Type Error for port %s" % port)
+ node_ports_type_error_list.append(
+ {'pod_name': test_name,
+ 'pod_port': test_port,
+ 'error_details': str(e)})
+ except ConnectionResetError as e:
+ LOGGER.exception("ConnectionResetError for port %s" % port)
+ node_ports_reset_error_list.append(
+ {'pod_name': test_name,
+ 'pod_port': test_port,
+ 'error_details': str(e)})
+
+ # Create html summary
+ jinja_env = Environment(
+ autoescape=select_autoescape(['html']),
+ loader=FileSystemLoader('./templates'))
+ if args.mode == 'nodeport':
+ jinja_env.get_template('cert-nodeports.html.j2').stream(
+ node_ports_list=node_ports_list,
+ node_ports_ssl_error_list=node_ports_ssl_error_list,
+ node_ports_connection_error_list=node_ports_connection_error_list,
+ node_ports_type_error_list=node_ports_type_error_list,
+ node_ports_reset_error_list=node_ports_reset_error_list).dump(
+ '{}/certificates.html'.format(args.dir))
+ else:
+ jinja_env.get_template('cert-internal.html.j2').stream(
+ node_ports_list=node_ports_list,
+ node_ports_ssl_error_list=node_ports_ssl_error_list,
+ node_ports_connection_error_list=node_ports_connection_error_list,
+ node_ports_type_error_list=node_ports_type_error_list,
+ node_ports_reset_error_list=node_ports_reset_error_list).dump(
+ '{}/certificates.html'.format(args.dir))
+
+ return success_criteria
+
+
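The pass/fail window applied inside test_services above (a certificate fails when it expires in fewer than EXP_CRITERIA_MIN days or more than EXP_CRITERIA_MAX days away) can be sketched as a standalone predicate; `within_window` is a hypothetical helper:

```python
from datetime import datetime, timedelta

EXP_CRITERIA_MIN = 30   # same thresholds as the script above
EXP_CRITERIA_MAX = 389

def within_window(exp_date, now):
    """True when the remaining lifetime in days is inside [MIN, MAX]."""
    delta = (exp_date - now).days
    return EXP_CRITERIA_MIN <= delta <= EXP_CRITERIA_MAX

now = datetime(2020, 7, 1)
```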
+def test_ingress(k8s_ingress, mode):
+ LOGGER.debug('Test %s mode', mode)
+ for ingress in k8s_ingress:
+ LOGGER.debug(ingress)
+ return True
+
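The two-key selector lookup used in internal mode above (services may label pods with either `app` or `app.kubernetes.io/name`) can be isolated as a small helper; `selector_host` is a hypothetical name:

```python
def selector_host(selector):
    """Return the pod label used as the connection host, trying 'app'
    first and falling back to 'app.kubernetes.io/name', as in
    test_services above."""
    try:
        return selector["app"]
    except KeyError:
        return selector["app.kubernetes.io/name"]
```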
+
+# ***************************************************************************
+# ***************************************************************************
+# start of the test
+# ***************************************************************************
+# ***************************************************************************
+test_status = True
+if args.mode == "ingress":
+ test_routine = test_ingress
+ test_param = k8s_ingress
+else:
+ test_routine = test_services
+ test_param = k8s_services
+
+LOGGER.info(">>>> Test certificates: mode = %s", args.mode)
+if test_routine(test_param, args.mode):
+ LOGGER.warning(">>>> Test PASS")
+else:
+ LOGGER.warning(">>>> Test FAIL")
+ test_status = False
+
+if test_status:
+ LOGGER.info(">>>> Test Check certificates PASS")
+else:
+ LOGGER.error(">>>> Test Check certificates FAIL")
+ sys.exit(1)