author    mrichomme <morgan.richomme@orange.com>  2020-03-19 19:02:41 +0100
committer mrichomme <morgan.richomme@orange.com>  2020-03-31 14:20:36 +0200
commit    efb859d2581a42ea0de4d56646e89848e722c59e (patch)
tree      e4bb88f7bc778d1e640637ac79d74ca9f66eb05d /docs
parent    22872ddddd3c4e3646a2d01d42b534fdea469b8d (diff)
fix integration doc warning
Issue-ID: INT-1490
Change-Id: I9153da660ae469c0bd3ed51cfebd912b6e4b9bf2
Signed-off-by: mrichomme <morgan.richomme@orange.com>
Diffstat (limited to 'docs')
-rw-r--r--  docs/docs_5G_Bulk_PM.rst                     |   9
-rw-r--r--  docs/docs_5G_Configuration_over_NETCONF.rst  |   8
-rw-r--r--  docs/docs_5G_PNF_Software_Upgrade.rst        |  22
-rw-r--r--  docs/docs_CCVPN.rst                          |  73
-rw-r--r--  docs/docs_scaleout.rst                       |  27
-rw-r--r--  docs/docs_vCPE with Tosca VNF.rst            |  22
-rw-r--r--  docs/docs_vfwHPA.rst                         | 220
-rw-r--r--  docs/onap-integration-ci.rst                 |   2
-rw-r--r--  docs/onap-oom-heat.rst                       |   5
-rw-r--r--  docs/release-notes.rst                       |   2
10 files changed, 192 insertions, 198 deletions
diff --git a/docs/docs_5G_Bulk_PM.rst b/docs/docs_5G_Bulk_PM.rst
index 71d8778cd..da21b701c 100644
--- a/docs/docs_5G_Bulk_PM.rst
+++ b/docs/docs_5G_Bulk_PM.rst
@@ -1,19 +1,19 @@
.. This work is licensed under a Creative Commons Attribution 4.0
International License. http://creativecommons.org/licenses/by/4.0
-
+
.. _docs_5g_bulk_pm:
5G Bulk PM
----------
5G Bulk PM Package
-~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~
- 5G Bulk PM Package: https://wiki.onap.org/display/DW/5G+-+Bulk+PM+-+Integration+Test+Case
Description
~~~~~~~~~~~
-The Bulk PM feature consists of an event-driven bulk transfer of monitoring data from an xNF to ONAP/DCAE. A micro-service will listen for 'FileReady' VES events sent from an xNF via the VES collector. Once files become available the collector micro-service will fetch them using protocol such as FTPES (committed) or SFTP. The collected data files are published internally on a DMaaP Data Router (DR) feed.
-The ONAP 5G Bulk PM Use Case Wiki Page can be found here:
+The Bulk PM feature consists of an event-driven bulk transfer of monitoring data from an xNF to ONAP/DCAE. A micro-service will listen for 'FileReady' VES events sent from an xNF via the VES collector. Once files become available, the collector micro-service will fetch them using a protocol such as FTPES (committed) or SFTP. The collected data files are published internally on a DMaaP Data Router (DR) feed.
+The ONAP 5G Bulk PM Use Case Wiki Page can be found here:
https://wiki.onap.org/display/DW/5G+-+Bulk+PM
How to Use
@@ -28,4 +28,3 @@ To see information on the status of the test see https://wiki.onap.org/display/D
Known Issues and Resolutions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
none.
-
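For readers of this page: the trigger for the Bulk PM flow described above is a VES 'fileReady' notification. Below is a hedged sketch of posting such an event to the VES collector; the collector address, credentials and all field values are illustrative assumptions, not taken from this patch.

.. code-block:: bash

   # Hedged sketch: post a 'fileReady' VES notification to the collector.
   # Collector address, credentials and every field value are assumptions.
   curl -k -u sample1:sample1 -X POST \
     "https://<ves-collector-ip>:8443/eventListener/v7" \
     -H 'Content-Type: application/json' \
     -d '{
     "event": {
       "commonEventHeader": {
         "domain": "notification",
         "eventName": "Notification_xNF_FileReady",
         "eventId": "FileReady_0001",
         "priority": "Normal",
         "reportingEntityName": "myXnf",
         "sourceName": "myXnf",
         "sequence": 0,
         "startEpochMicrosec": 1537898810053,
         "lastEpochMicrosec": 1537898810053,
         "version": "4.0.1",
         "vesEventListenerVersion": "7.0.1"
       },
       "notificationFields": {
         "changeIdentifier": "PM_MEAS_FILES",
         "changeType": "FileReady",
         "notificationFieldsVersion": "2.0",
         "arrayOfNamedHashMap": [{
           "name": "A20200319.0000-0015.xml.gz",
           "hashMap": {
             "location": "sftp://user:password@<xnf-ip>:22/A20200319.0000-0015.xml.gz",
             "compression": "gzip",
             "fileFormatType": "org.3GPP.32.435#measCollec",
             "fileFormatVersion": "V10"
           }
         }]
       }
     }
   }'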
diff --git a/docs/docs_5G_Configuration_over_NETCONF.rst b/docs/docs_5G_Configuration_over_NETCONF.rst
index 9cf8643c5..10bf740e4 100644
--- a/docs/docs_5G_Configuration_over_NETCONF.rst
+++ b/docs/docs_5G_Configuration_over_NETCONF.rst
@@ -1,10 +1,10 @@
.. This work is licensed under a Creative Commons Attribution 4.0
International License. http://creativecommons.org/licenses/by/4.0
-
+
.. _docs_5G_Configuration_over_NETCONF:
5G - Configuration over NETCONF
-----------------------
+-------------------------------
Description
@@ -16,8 +16,8 @@ This use case is intended to be applicable for 5G base stations and other nodes
**Useful Links**
-- `5G - Configuration with NETCONF documentation <https://wiki.onap.org/display/DW/5G+-+Configuration+with+NETCONF>
-- `5G - Configuration with NETCONF - Integtion Test Cases <https://wiki.onap.org/pages/viewpage.action?pageId=58229781&src=contextnavipagetreemode>
+- `5G - Configuration with NETCONF documentation <https://wiki.onap.org/display/DW/5G+-+Configuration+with+NETCONF>`_
+- `5G - Configuration with NETCONF - Integration Test Cases <https://wiki.onap.org/pages/viewpage.action?pageId=58229781&src=contextnavipagetreemode>`_
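To illustrate what "configuration over NETCONF" means in practice, here is a hedged sketch of pushing an <edit-config> to a NETCONF server on a PNF using the generic netconf-console client. The host, credentials, client tool invocation and the YANG module in the payload are all assumptions for illustration, not part of this use case documentation.

.. code-block:: bash

   # Hedged sketch -- host, credentials and the payload module are assumptions.
   cat > config.xml <<'EOF'
   <config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
     <!-- hypothetical YANG module, for illustration only -->
     <pnf-parameters xmlns="urn:example:pnf-config">
       <tx-power>23</tx-power>
     </pnf-parameters>
   </config>
   EOF
   netconf-console --host <pnf-ip> --port 830 \
     --user netconf --password netconf \
     --edit-config config.xml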
How to Use
~~~~~~~~~~
diff --git a/docs/docs_5G_PNF_Software_Upgrade.rst b/docs/docs_5G_PNF_Software_Upgrade.rst
index f25066baa..6b8e5d2d6 100644
--- a/docs/docs_5G_PNF_Software_Upgrade.rst
+++ b/docs/docs_5G_PNF_Software_Upgrade.rst
@@ -3,26 +3,26 @@
.. _docs_5g_pnf_software_upgrade:
-============================================================
+
5G PNF Software Upgrade
-============================================================
+-----------------------
Description
-------------
+~~~~~~~~~~~
The 5G PNF Software upgrade use case shows how users/network operators can modify the software of a PNF instance during installation or regular maintenance. This use case is one aspect of Software Management and can be used to update the PNF software to a different version.
Useful Link
-------------
+~~~~~~~~~~~
`PNF Software Upgrade Wiki Page <https://wiki.onap.org/display/DW/PNF+software+upgrade+in+R6+Frankfurt>`_
Current Status in Frankfurt
----------------------------
-============================================================
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
PNF Software Upgrade Scenarios
-============================================================
+------------------------------
There are 3 PNF software upgrade scenarios supported in Frankfurt release:
@@ -39,20 +39,18 @@ There are 3 PNF software upgrade scenarios supported in Frankfurt release:
- (https://wiki.onap.org/pages/viewpage.action?pageId=64008675)
Common tasks for all scenarios
-------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SO Workflows
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~
Common SO workflows are used with generic SO building blocks which can be used for any PNF software upgrade scenario. In the Frankfurt release, a PNF software upgrade workflow and a PNF preparation workflow have been created.
.. image:: files/softwareUpgrade/SWUPWorkflow.png
LCM evolution with API Decision Tree
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+====================================
A decision point has been introduced in the Frankfurt release. The service designer needs to indicate which LCM API they would like to use for the LCM operations on the selected PNF source at design time (via SDC). The possible LCM APIs are: SO-REF-DATA (default), CDS, SDNC, or APPC.
.. image:: files/softwareUpgrade/APIDecisionTree.png
-
-
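As a concrete illustration of triggering such a workflow, the hedged sketch below posts a PNF software-upgrade request to SO's instance-management API. The endpoint path, IDs and payload shape are assumptions based on the generic building-block design described above; check the SO API documentation for your release.

.. code-block:: bash

   # Hedged sketch -- endpoint path, IDs and payload shape are assumptions.
   curl -k -X POST \
     "http://<so-ip>:30277/onap/so/infra/instanceManagement/v1/serviceInstances/<service-instance-id>/pnfs/<pnf-name>/workflows/<workflow-uuid>" \
     -H 'Content-Type: application/json' \
     -u 'InfraPortalClient:password1$' \
     -d '{
       "requestDetails": {
         "requestParameters": {
           "userParams": [
             { "name": "targetSoftwareVersion", "value": "pnf-sw-v2" }
           ]
         }
       }
     }'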
diff --git a/docs/docs_CCVPN.rst b/docs/docs_CCVPN.rst
index 9eb8830d5..e6fde72b1 100644
--- a/docs/docs_CCVPN.rst
+++ b/docs/docs_CCVPN.rst
@@ -82,7 +82,7 @@ The integration test environment is established to have ONAP instance with Frank
Testing Procedure
~~~~~~~~~~~~~~~~~
-Test environment is described in Installation Procedure section and test procedure is described in https://wiki.onap.org/display/DW/MDONS+Integration+Test+Case.
+Test environment is described in Installation Procedure section and test procedure is described in https://wiki.onap.org/display/DW/MDONS+Integration+Test+Case.
Update for Dublin release
@@ -114,7 +114,7 @@ During the integration testing, SDC, SO, SDC master branch are used which includ
Service used for CCVPN
-~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~
- SOTNVPNInfraService, SDWANVPNInfraService and SIteService: https://wiki.onap.org/display/DW/CCVPN+Service+Design
- WanConnectionService ( Another way to describe CCVPN in a single service form which based on ONF CIM ): https://wiki.onap.org/display/DW/CCVPN+Wan+Connection+Service+Design
@@ -149,7 +149,7 @@ And the test status can be found: https://wiki.onap.org/display/DW/CCVPN++-Test+
Known Issues and Resolutions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-1) AAI-1923. Link Management, UUI can't delete the link to external onap otn domain.
+1) AAI-1923. Link Management, UUI can't delete the link to external onap otn domain.
For the manual steps provided by the A&AI team, follow the steps below;
the only way to delete is using the forceDeleteTool shell script in the graphadmin container.
@@ -157,19 +157,19 @@ First we will need to find the vertex id, you should be able to get the id by ma
GET /aai/v14/network/ext-aai-networks/ext-aai-network/createAndDelete/esr-system-info/test-esr-system-info-id-val-0?format=raw
-::
+.. code-block:: JSON
+
+ {
-{
-"results": [
-{
-"id": "20624",
-"node-type": "pserver",
-"url": "/aai/v13/cloud-infrastructure/pservers/pserver/pserverid14503-as988q",
-"properties": {
-}
-}
-]
-}
+ "results": [
+ {
+ "id": "20624",
+ "node-type": "pserver",
+ "url": "/aai/v13/cloud-infrastructure/pservers/pserver/pserverid14503-as988q",
+ "properties": {}
+ }
+ ]
+ }
Same goes for the ext-aai-network:
@@ -182,7 +182,7 @@ Run the following command multiple times for both the esr-system-info and ext-aa
::
-kubectl exec -it $(kubectl get pods -lapp=aai-graphadmin -n onap --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | head -1) -n onap gosu aaiadmin /opt/app/aai-graphadmin/scripts/forceDeleteTool.sh -action DELETE_NODE -userId YOUR_ID_ANY_VALUE -vertexId VERTEX_ID
+ kubectl exec -it $(kubectl get pods -lapp=aai-graphadmin -n onap --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | head -1) -n onap gosu aaiadmin /opt/app/aai-graphadmin/scripts/forceDeleteTool.sh -action DELETE_NODE -userId YOUR_ID_ANY_VALUE -vertexId VERTEX_ID
In the command above, replace YOUR_ID_ANY_VALUE and VERTEX_ID with your own values.
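For example, with the vertex id 20624 returned by the GET above and an arbitrary user id for auditing, the call becomes:

.. code-block:: bash

   # Example with the vertex id from the GET response above; the userId is
   # an arbitrary identifier used for audit purposes.
   kubectl exec -it $(kubectl get pods -lapp=aai-graphadmin -n onap \
     --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | head -1) \
     -n onap gosu aaiadmin /opt/app/aai-graphadmin/scripts/forceDeleteTool.sh \
     -action DELETE_NODE -userId demo-user -vertexId 20624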
@@ -192,16 +192,18 @@ To overcome the Service distribution, the SO catalog has to be populated with th
a) Referring to the csar that is generated in SDC as per the details mentioned in the following link: https://wiki.onap.org/display/DW/CCVPN+Service+Design
b) Download the Csar from SDC thus generated.
c) copy the csar to SO sdc controller pod and bpmn pod
+
+.. code-block:: bash
+
kubectl -n onap get pod|grep so
kubectl -n onap exec -it dev-so-so-sdc-controller-c949f5fbd-qhfbl /bin/sh
-
mkdir null/ASDC
mkdir null/ASDC/1
kubectl -n onap cp service-Sdwanvpninfraservice-csar.csar dev-so-so-bpmn-infra-58796498cf-6pzmz:null/ASDC/1/service-Sdwanvpninfraservice-csar.csar
kubectl -n onap cp service-Sdwanvpninfraservice-csar.csar dev-so-so-bpmn-infra-58796498cf-6pzmz:ASDC/1/service-Sdwanvpninfraservice-csar.csar
-d) populate model information to SO db
- the db script example can be seen in https://wiki.onap.org/display/DW/Manual+steps+for+CCVPN+Integration+Testing
+d) populate model information to SO db: the db script example can be seen in
+   https://wiki.onap.org/display/DW/Manual+steps+for+CCVPN+Integration+Testing (a sketch of applying such a script is shown below)
The same would also be applicable for the integration of the client to create the service and get the details.
Currently the testing has been performed using the postman calls to the corresponding APIs.
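A minimal sketch of applying such a db script follows; the mariadb pod name, database name and credential handling are assumptions, and the script itself must be taken from the wiki page referenced in step d).

.. code-block:: bash

   # Hedged sketch -- pod name and credential handling are assumptions; the
   # actual script comes from the wiki page referenced in step d).
   kubectl -n onap cp ccvpn-catalog.sql dev-so-so-mariadb-0:/tmp/ccvpn-catalog.sql
   kubectl -n onap exec -it dev-so-so-mariadb-0 -- \
     sh -c 'mysql -uroot -p"$MYSQL_ROOT_PASSWORD" catalogdb < /tmp/ccvpn-catalog.sql'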
@@ -213,27 +215,32 @@ a) Make an available csar file for CCVPN use case.
b) Replace uuid of available files with what existing in SDC.
c) Put available csar files in UUI local path (/home/uui).
-4) SO docker branch 1.3.5 has fixes for the issues 1SO-1248.
+4) SO docker branch 1.3.5 has fixes for the issue SO-1248
After SDC distribution success, copy all csar files from so-sdc-controller:
- connect to so-sdc-controller( eg: kubectl.exe exec -it -n onap dev-so-so-sdc-controller-77df99bbc9-stqdz /bin/sh )
- find out all csar files ( eg: find / -name '*.csar' )
- the csar files should be in this path: /app/null/ASDC/ ( eg: /app/null/ASDC/1/service-Sotnvpninfraservice-csar.csar )
- exit from the so-sdc-controller ( eg: exit )
- copy all csar files to local derectory ( eg: kubectl.exe cp onap/dev-so-so-sdc-controller-6dfdbff76c-64nf9:/app/null/ASDC/tmp/service-DemoService-csar.csar service-DemoService-csar.csar -c so-sdc-controller )
-
-Copy csar files, which got from so-sdc-controller, to so-bpmn-infra
- connect to so-bpmn-infra ( eg: kubectl.exe -n onap exec -it dev-so-so-bpmn-infra-54db5cd955-h7f5s -c so-bpmn-infra /bin/sh )
- check the /app/ASDC deretory, if doesn't exist, create it ( eg: mkdir /app/ASDC -p )
- exit from the so-bpmn-infra ( eg: exit )
- copy all csar files to so-bpmn-infra ( eg: kubectl.exe cp service-Siteservice-csar.csar onap/dev-so-so-bpmn-infra-54db5cd955-h7f5s:/app/ASDC/1/service-Siteservice-csar.csar )
-
-5) Manual steps in closed loop Scenario:
+
+- connect to so-sdc-controller ( eg: kubectl.exe exec -it -n onap dev-so-so-sdc-controller-77df99bbc9-stqdz /bin/sh )
+- find out all csar files ( eg: find / -name "\*.csar" ), the csar files should
+ be in this path: /app/null/ASDC/ ( eg: /app/null/ASDC/1/service-Sotnvpninfraservice-csar.csar )
+- exit from the so-sdc-controller ( eg: exit )
+- copy all csar files to a local directory ( eg: kubectl.exe cp onap/dev-so-so-sdc-controller-6dfdbff76c-64nf9:/app/null/ASDC/tmp/service-DemoService-csar.csar service-DemoService-csar.csar -c so-sdc-controller )
+
+Copy csar files, which got from so-sdc-controller, to so-bpmn-infra:
+
+- connect to so-bpmn-infra ( eg: kubectl.exe -n onap exec -it dev-so-so-bpmn-infra-54db5cd955-h7f5s -c so-bpmn-infra /bin/sh )
+- check the /app/ASDC directory; if it doesn't exist, create it ( eg: mkdir /app/ASDC -p )
+- exit from the so-bpmn-infra ( eg: exit )
+- copy all csar files to so-bpmn-infra ( eg: kubectl.exe cp service-Siteservice-csar.csar onap/dev-so-so-bpmn-infra-54db5cd955-h7f5s:/app/ASDC/1/service-Siteservice-csar.csar )
+
+5) Manual steps in closed loop Scenario
+
The following steps were undertaken for the closed loop testing.
+
a. Gave the controller ip, username and password, trust store and key store file in the restconf collector's collector.properties
b. Updated the DMaaP ip in cambria.hosts in DmaapConfig.json in the restconf collector and ran the restconf collector
c. Followed the steps provided in this link (https://wiki.onap.org/display/DW/Holmes+User+Guide+-+Casablanca#HolmesUserGuide-Casablanca-Configurations) to push CCVPN rules to Holmes
d. Followed the steps provided in this link (https://wiki.onap.org/display/DW/ONAP+Policy+Framework%3A+Installation+of+Amsterdam+Controller+and+vCPE+Policy) as reference to push CCVPN policies to the policy module and updated sdnc.url, username and password in the environment (/opt/app/policy/config/controlloop.properties.environment)
+
As per the wiki (Policy on OOM), the push-policied.sh script is used to install policies, but the CCVPN policy is not included in this script, so the CCVPN policy was merged via JIRA ticket POLICY-1356; during the integration test, however, the policy was pushed using the push-policy_casablanca.sh script.
It was found that the changes made were overwritten, hence the DG had to be patched manually. This is tracked by JIRA SDNC-540.
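For step b above, a hedged sketch of updating cambria.hosts inside the restconf collector container is shown below; the pod label and the file path inside the container are assumptions to be verified against your deployment.

.. code-block:: bash

   # Hedged sketch -- pod label and container file path are assumptions;
   # "cambria.hosts" is the key named in step b above.
   RC_POD=$(kubectl -n onap get pods -l app=dcae-restconf-collector -o name | head -1)
   kubectl -n onap exec -it "$RC_POD" -- \
     sed -i 's/"cambria.hosts": *"[^"]*"/"cambria.hosts": "message-router.onap"/' \
     /opt/app/restconfcollector/etc/DmaapConfig.json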
diff --git a/docs/docs_scaleout.rst b/docs/docs_scaleout.rst
index b47c0693c..d3fe9fb41 100644
--- a/docs/docs_scaleout.rst
+++ b/docs/docs_scaleout.rst
@@ -77,7 +77,7 @@ There are four different message flows:
The numbers in the figure represent the sequence of steps within a given flow. Note that interactions between the components in the picture and AAI, SDNC, and DMaaP are not shown for clarity's sake.
-Scale out with manual trigger (green flow) and closed-loop-enabled scale out (red flow) are mutually exclusive. When the manual trigger is used, VID directly triggers the appropriate workflow in SO (step 1 of the green flow in the figure above). See Section 4 for more details.
+Scale out with manual trigger (green flow) and closed-loop-enabled scale out (red flow) are mutually exclusive. When the manual trigger is used, VID directly triggers the appropriate workflow in SO (step 1 of the green flow in the figure above). See Section 4 for more details.
When closed-loop enabled scale out is used, Policy triggers the SO workflow. The closed loop starts with the vLB periodically reporting telemetry about traffic patterns to the VES collector in DCAE (step 1 of the red flow). When the amount of traffic exceeds a given threshold (which the user defines during closed loop creation in CLAMP - see Section 1-4), DCAE notifies Policy (step 2), which in turn triggers the appropriate action. For this use case, the action is contacting SO to augment resource capacity in the network (step 3).
@@ -97,7 +97,7 @@ This use-case requires operations on several ONAP components to perform service
1-1 VNF Configuration Modeling and Upload with CDS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Since Dublin, the scale out use case integrates with the Controller Design Studio (CDS) ONAP component to automate the generation of cloud configuration at VNF instantiation time. The user interested in running the use case only with manual preload can skip this section and start from Section 1-2. The description of the use case with manual preload is provided in Section5.
+Since Dublin, the scale out use case integrates with the Controller Design Studio (CDS) ONAP component to automate the generation of cloud configuration at VNF instantiation time. The user interested in running the use case only with manual preload can skip this section and start from Section 1-2. The description of the use case with manual preload is provided in Section 5.
Users can model this configuration at VNF design time and onboard the blueprint to CDS via the CDS GUI. The blueprint includes naming policies and network configuration details (e.g. IP address families, network names, etc.) that CDS will use during VNF instantiation to generate resource names and assign network configuration to VMs through the cloud orchestrator.
@@ -1113,18 +1113,15 @@ that will instantiate Service, VNF, VF modules and Heat stacks:
"projectName":"Project-Demonstration"
},
"owningEntity":{
- "owningEntityId":"6f6c49d0-8a8c-4704-9174-321bcc526cc0",
- "owningEntityName":"OE-Demonstration"
+ "owningEntityId":"6f6c49d0-8a8c-4704-9174-321bcc526cc0",
+ "owningEntityName":"OE-Demonstration"
},
"modelInfo":{
- "modelVersion":"1.0",
- "modelVersionId":"{{service-uuid}}",
- "modelInvariantId":"{{service-invariantUUID}}",
- "modelName":"{{service-name}}",
- "modelType":"service"
- }
- }
-}'
+ "modelVersion":"1.0",
+ "modelVersionId":"{{service-uuid}}",
+ "modelInvariantId":"{{service-invariantUUID}}",
+ "modelName":"{{service-name}}",
+ "modelType":"service"}}}'
Note that the "dcae_collector_ip" parameter has to contain the IP address of one of the Kubernetes cluster nodes, 10.12.5.214 in the example above. In the response to the Macro request, the user will obtain a requestId that will be useful to follow the instantiation request status in the ONAP SO:
@@ -1143,7 +1140,7 @@ PART 3 - Post Instantiation Operations
3-1 Post Instantiation VNF configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-CDS executes post-instantiation VNF configuration if the "skip-post-instantiation" flag in the SDC service model is set to false, which is the default behavior. Manual post-instantiation configuration is necessary if the "skip-post-instantiation" flag in the service model is set to true or if the VNF is instantiated using the preload approach, which doesn't include CDS. Regardless, this step is NOT required during scale out operations, as VNF reconfiguration will be triggered by SO and executed by APPC.
+CDS executes post-instantiation VNF configuration if the "skip-post-instantiation" flag in the SDC service model is set to false, which is the default behavior. Manual post-instantiation configuration is necessary if the "skip-post-instantiation" flag in the service model is set to true or if the VNF is instantiated using the preload approach, which doesn't include CDS. Regardless, this step is NOT required during scale out operations, as VNF reconfiguration will be triggered by SO and executed by APPC.
If VNF post instantiation is executed manually, in order to change the state of the vLB the users should run the following REST call, replacing the IP addresses in the VNF endpoint and JSON object to match the private IP addresses of their vDNS instance:
@@ -1398,7 +1395,7 @@ These IDs are also used in the URL request to SO:
::
- http://<Any_K8S_Node_IP_Address>:30277/onap/so/infra/serviceInstantiation/v7/serviceInstances/7d3ca782-c486-44b3-9fe5-39f322d8ee80/vnfs/9d33cf2d-d6aa-4b9e-a311-460a6be5a7de/vfModules/scaleOut
+ http://<Any_K8S_Node_IP_Address>:30277/onap/so/infra/serviceInstantiation/v7/serviceInstances/7d3ca782-c486-44b3-9fe5-39f322d8ee80/vnfs/9d33cf2d-d6aa-4b9e-a311-460a6be5a7de/vfModules/scaleOut
Finally, the "configurationParameters" section in the JSON request to SO contains the parameters that will be used to reconfigure the VNF after scaling. Please see Section 1-7 for an in-depth description of how to set the parameters correctly.
@@ -1428,7 +1425,7 @@ The procedure is similar to one described above, with some minor changes:
4) **Controller type selection** in SO works as described in Section 1-6.
-5) **VNF instantiation from VID**: users can use VID to create the service, the VNF, and instantiate the VF modules. In the VID main page, users should select GR API (this should be the default option).
+5) **VNF instantiation from VID**: users can use VID to create the service, the VNF, and instantiate the VF modules. In the VID main page, users should select GR API (this should be the default option).
.. figure:: files/scaleout/vid.png
:align: center
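A general note for this use case: the requestId returned by any of the SO requests above can be polled for progress via SO's orchestrationRequests API. A hedged sketch follows (port and credentials as used elsewhere in this guide; the requestId placeholder is yours to fill):

.. code-block:: bash

   # Hedged sketch: poll the status of an SO request by its requestId.
   curl -k -X GET \
     "http://<Any_K8S_Node_IP_Address>:30277/onap/so/infra/orchestrationRequests/v7/<request-id>" \
     -H 'Accept: application/json' \
     -u 'InfraPortalClient:password1$'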
diff --git a/docs/docs_vCPE with Tosca VNF.rst b/docs/docs_vCPE with Tosca VNF.rst
index 4a5b6fc69..85b2cbe3b 100644
--- a/docs/docs_vCPE with Tosca VNF.rst
+++ b/docs/docs_vCPE with Tosca VNF.rst
@@ -3,7 +3,7 @@
vCPE with Tosca VNF
----------------------------
-VNF Packages and NS Packages
+VNF Packages and NS Packages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
vCPE tosca file url: https://git.onap.org/demo/tree/tosca/vCPE
@@ -65,12 +65,12 @@ After the patch https://gerrit.onap.org/r/#/c/73502/ is merged. With the generat
- The policy scope has to add a value “us” into it which is a configuration issue in OOF side. Policy side also need do improvement to deal with policy scope automatically append instead of replacement so such policy could be used by several services at the same time.
Design Time:
-~~~~~~~~~~~
+~~~~~~~~~~~~
1) Because SDC doesn't export ETSI-aligned VNF and NS packages, in this release we put the real ETSI-aligned package in as a package artifact.
2) When designing a Network Service in SDC, assign "gvnfmdriver" as the value of nf_type in Properties Assignment, so that VF-C knows to use gvnfm to manage the VNF life cycle.
Run Time:
-~~~~~~~~
+~~~~~~~~~
1) First onboard VNF/NS package from SDC to VF-C catalog in sequence.
2) Trigger the NS operation via UUI
@@ -143,17 +143,17 @@ Known Issues and Resolutions
- vnflcm notification error patch https://gerrit.onap.org/r/#/c/73852/
- grant error patch not merged into VF-C 1.2.2 image: https://gerrit.onap.org/r/#/c/73833/ and https://gerrit.onap.org/r/#/c/73770/
- VF-C catalog config should be updated with the right SDC URL and user/pwd
-Resolution: Disable VFC catalog livenessprobe and update configuration
+ Resolution: Disable VFC catalog livenessprobe and update configuration
a) edit dev-vfc-catalog deployment
b) remove livenessprobe section
c) enter into catalog pod and update configuration
-::
-kubectl -n onap exec -it dev-vfc-catalog-6978b76c86-87722 /bin/bash
-config file location: service/vfc/nfvo/catalog/catalog/pub/config/config.py
-Update the SDC configuration as follows:
-SDC_BASE_URL = "http://msb-iag:80/api"
-SDC_USER = "aai"
-SDC_PASSWD = "Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U"
+::
+ kubectl -n onap exec -it dev-vfc-catalog-6978b76c86-87722 /bin/bash
+ config file location: service/vfc/nfvo/catalog/catalog/pub/config/config.py
+ Update the SDC configuration as follows:
+ SDC_BASE_URL = "http://msb-iag:80/api"
+ SDC_USER = "aai"
+ SDC_PASSWD = "Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U"
diff --git a/docs/docs_vfwHPA.rst b/docs/docs_vfwHPA.rst
index 2dd229b31..35cea9d6a 100644
--- a/docs/docs_vfwHPA.rst
+++ b/docs/docs_vfwHPA.rst
@@ -8,7 +8,7 @@ vFW/vDNS with HPA Tutorial: Setting Up and Configuration
--------------------------------------------------------
Description
-~~~~~~~~~~
+~~~~~~~~~~~
This use case makes modifications to the regular vFW use case in ONAP by giving the VMs certain hardware features (such as SR-IOV NIC, CPU pinning, PCI passthrough, etc.) in order to enhance their performance. Multiple cloud regions with flavors that have HPA features are registered with ONAP. We then create policies that specify the HPA requirements of each VM in the use case. When a service instance is created with OOF specified as the homing solution, OOF responds with the homing solution (cloud region) and flavor directives that meet the requirements specified in the policy.
This tutorial covers enhancements 1 to 5 in Background of https://wiki.onap.org/pages/viewpage.action?pageId=41421112. It focuses on Test Plan 1.
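To preview the policy step before the setup details: below is a schematic, hedged sketch of an HPA policy creation call, modeled on the createPolicy/pushPolicy calls used later in this guide. The configBody content is illustrative only; take the real policies from the test plan wiki.

.. code-block:: bash

   # Schematic sketch only -- the real HPA policies are in the test plan wiki;
   # headers and endpoint mirror the policy calls later in this guide.
   curl -k -v -H 'Content-Type: application/json' \
     -H 'Accept: application/json' \
     -H 'ClientAuth: cHl0aG9uOnRlc3Q=' \
     -H 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' \
     -H 'Environment: TEST' \
     -X PUT \
     -d '{
       "policyConfigType": "MicroService",
       "policyName": "OSDF_CASABLANCA.hpa_policy_vFW_1",
       "onapName": "SampleDemo",
       "policyScope": "OSDF_CASABLANCA",
       "configBody": "{\"service\": \"hpaPolicy\", \"policyType\": \"hpa\", \"resources\": [\"vfw\"], \"flavorFeatures\": []}"
     }' 'https://pdp:8081/pdp/api/createPolicy'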
@@ -26,7 +26,7 @@ This tutorial covers enhancements 1 to 5 in Background of https://wiki.onap.org/
Setting Up and Installation
-~~~~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some fixes for HPA support were made subsequent to the release of the Casablanca images. Several updated docker images need to be used to utilize the fixes. The details of the docker images that need to be used and the issues that are fixed are described at this link https://wiki.onap.org/display/DW/Docker+image+updates+for+HPA+vFW+testing
Instructions for updating the manifest of ONAP docker images can be found here: https://onap.readthedocs.io/en/casablanca/submodules/integration.git/docs/#deploying-an-updated-docker-manifest
@@ -35,7 +35,7 @@ Install OOM ONAP using the deploy script in the integration repo. Instructions f
1. Check that all the required components were deployed;
-
+
``oom-rancher# helm list``
2. Check the state of the pods;
@@ -44,14 +44,14 @@ Install OOM ONAP using the deploy script in the integration repo. Instructions f
3. Run robot health check
- ``oom-rancher# cd oom/kubernetes/robot``
+ ``oom-rancher# cd oom/kubernetes/robot``
``oom-rancher# ./ete-k8s.sh onap health``
Ensure all the required components pass the health tests
4. Modify the SO bpmn configmap to change the SO vnf adapter endpoint to v2
-
- ``oom-rancher# kubectl -n onap edit configmap dev-so-so-bpmn-infra-app-configmap``
+
+ ``oom-rancher# kubectl -n onap edit configmap dev-so-so-bpmn-infra-app-configmap``
``- vnf:``
@@ -74,7 +74,7 @@ Install OOM ONAP using the deploy script in the integration repo. Instructions f
``oom-rancher# kubectl delete <pod-name> -n onap``
-5. Create HPA flavors in cloud regions to be registered with ONAP. All HPA flavor names must start with onap. During our tests, 3 cloud regions were registered and we created flavors in each cloud. The flavors match the flavors described in the test plan `here <https://wiki.onap.org/pages/viewpage.action?pageId=41421112>`_.
+5. Create HPA flavors in cloud regions to be registered with ONAP. All HPA flavor names must start with onap. During our tests, 3 cloud regions were registered and we created flavors in each cloud. The flavors match the flavors described in the test plan `here <https://wiki.onap.org/pages/viewpage.action?pageId=41421112>`_.
- **Cloud Region One**
@@ -82,7 +82,7 @@ Install OOM ONAP using the deploy script in the integration repo. Instructions f
``#nova flavor-create onap.hpa.flavor11 111 8 20 2``
``#nova flavor-key onap.hpa.flavor11 set hw:mem_page_size=2048``
-
+
**Flavor12**
``#nova flavor-create onap.hpa.flavor12 112 12 20 2``
@@ -91,9 +91,9 @@ Install OOM ONAP using the deploy script in the integration repo. Instructions f
``#openstack aggregate create --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:3 aggr121``
``#openstack flavor set onap.hpa.flavor12 --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:3``
-
+
**Flavor13**
- ``#nova flavor-create onap.hpa.flavor13 113 12 20 2``
+ ``#nova flavor-create onap.hpa.flavor13 113 12 20 2``
``#nova flavor-key onap.hpa.flavor13 set hw:mem_page_size=2048``
@@ -111,7 +111,7 @@ Install OOM ONAP using the deploy script in the integration repo. Instructions f
``#nova flavor-key onap.hpa.flavor21 set hw:cpu_policy=dedicated``
``#nova flavor-key onap.hpa.flavor21 set hw:cpu_thread_policy=isolate``
-
+
**Flavor22**
``#nova flavor-create onap.hpa.flavor22 222 12 20 2``
@@ -120,9 +120,9 @@ Install OOM ONAP using the deploy script in the integration repo. Instructions f
``#openstack aggregate create --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:2 aggr221``
``#openstack flavor set onap.hpa.flavor22 --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:2``
-
+
**Flavor23**
- ``#nova flavor-create onap.hpa.flavor23 223 12 20 2``
+ ``#nova flavor-create onap.hpa.flavor23 223 12 20 2``
``#nova flavor-key onap.hpa.flavor23 set hw:mem_page_size=2048``
@@ -140,20 +140,20 @@ Install OOM ONAP using the deploy script in the integration repo. Instructions f
``#nova flavor-key onap.hpa.flavor31 set hw:cpu_policy=dedicated``
``#nova flavor-key onap.hpa.flavor31 set hw:cpu_thread_policy=isolate``
-
+
**Flavor32**
``#nova flavor-create onap.hpa.flavor32 332 8192 20 2``
``#nova flavor-key onap.hpa.flavor32 set hw:mem_page_size=1048576``
-
+
**Flavor33**
- ``#nova flavor-create onap.hpa.flavor33 333 12 20 2``
+ ``#nova flavor-create onap.hpa.flavor33 333 12 20 2``
``#nova flavor-key onap.hpa.flavor33 set hw:mem_page_size=2048``
``#openstack aggregate create --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:1 aggr331``
- ``#openstack flavor set onap.hpa.flavor33 --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:1``
+ ``#openstack flavor set onap.hpa.flavor33 --property aggregate_instance_extra_specs:sriov_nic=sriov-nic-intel-8086-154C-shared-1:1``
**Note: Use case can be run manually or using automation script (recommended)**
@@ -229,7 +229,7 @@ If an update is needed, the update can be done via rest using curl or postman
``oom-rancher# kubectl exec dev-oof-oof-6c848594c5-5khps -it -- bash``
-10. Put required subscription list into tenant for all the newly added cloud regions. An easy way to do this is to do a get on the default cloud region, copy the tenant information with the subscription. Then paste it in your put command and modify the region id, tenant-id, tenant-name and resource-version.
+10. Put the required subscription list into the tenant for all the newly added cloud regions. An easy way to do this is to do a GET on the default cloud region and copy the tenant information with the subscription; then paste it into your PUT command and modify the region id, tenant-id, tenant-name and resource-version.
**GET COMMAND**
@@ -374,122 +374,122 @@ If an update is needed, the update can be done via rest using curl or postman
}
}'
-
+
11. Onboard the vFW HPA template. The templates can be obtained from the `demo <https://github.com/onap/demo>`_ repo. The heat and env files used are located in demo/heat/vFW_HPA/vFW/. Create a zip file using the files. For onboarding instructions see steps 4 to 9 of `vFWCL instantiation, testing and debugging <https://wiki.onap.org/display/DW/vFWCL+instantiation%2C+testing%2C+and+debuging>`_. Note that in step 5, only one VSP is created. For the VSP the option to submit for testing in step 5cii was not shown, so you can check in and certify the VSP and proceed to step 6.
12. Get the parameters (model info, model invariant id, etc.) required to create a service instance via rest. This can be done by creating a service instance via VID as in step 10 of `vFWCL instantiation, testing and debugging <https://wiki.onap.org/display/DW/vFWCL+instantiation%2C+testing%2C+and+debuging>`_. After creating the service instance, exec into the SO bpmn pod and look into the /app/logs/bpmn/debug.log file. Search for the service instance and look for its request details. Then populate the parameters required to create a service instance via rest in step 13 below.
13. Create a service instance rest request but do not create service instance yet. Specify OOF as the homing solution and multicloud as the orchestrator. Be sure to use a service instance name that does not exist and populate the parameters with values gotten from step 12.
-::
+::
curl -k -X POST \
http://{{k8s}}:30277/onap/so/infra/serviceInstances/v6 \
   -H 'authorization: Basic SW5mcmFQb3J0YWxDbGllbnQ6cGFzc3dvcmQxJA==' \
-H 'content-type: application/json' \
-
- -d '{
-
- "requestDetails":{
-
- "modelInfo":{
-
+
+ -d '{
+
+ "requestDetails":{
+
+ "modelInfo":{
+
"modelInvariantId":"b7564cb9-4074-4c9b-95d6-39d4191e80d9",
-
+
"modelType":"service",
-
+
"modelName":"vfw_HPA",
-
+
"modelVersion":"1.0",
-
+
"modelVersionId":"35d184e8-1cba-46e3-9311-a17ace766eb0",
-
+
"modelUuid":"35d184e8-1cba-46e3-9311-a17ace766eb0",
-
+
"modelInvariantUuid":"b7564cb9-4074-4c9b-95d6-39d4191e80d9"
-
+
},
-
- "requestInfo":{
-
+
+ "requestInfo":{
+
"source":"VID",
-
+
"instanceName":"oof-12-homing",
-
+
"suppressRollback":false,
-
+
"requestorId":"demo"
-
+
},
-
- "subscriberInfo":{
-
+
+ "subscriberInfo":{
+
"globalSubscriberId":"Demonstration"
-
+
},
-
- "requestParameters":{
-
+
+ "requestParameters":{
+
"subscriptionServiceType":"vFW",
-
+
"aLaCarte":true,
-
+
"testApi":"VNF_API",
-
- "userParams":[
-
- {
-
+
+ "userParams":[
+
+ {
+
"name":"Customer_Location",
-
- "value":{
-
+
+ "value":{
+
"customerLatitude":"32.897480",
-
+
"customerLongitude":"97.040443",
-
+
"customerName":"some_company"
-
+
}
-
+
},
-
- {
-
+
+ {
+
"name":"Homing_Solution",
-
+
"value":"oof"
-
+
},
-
- {
-
+
+ {
+
"name":"orchestrator",
-
+
"value":"multicloud"
-
+
}
-
+
]
-
+
},
-
- "project":{
-
+
+ "project":{
+
"projectName":"Project-Demonstration"
-
+
},
-
- "owningEntity":{
-
+
+ "owningEntity":{
+
"owningEntityId":"e1564fc9-b9d0-44f9-b5af-953b4aad2f40",
-
+
"owningEntityName":"OE-Demonstration"
-
+
}
-
+
}
-
+
}'
14. Get the resourceModuleName to be used for creating policies. This can be obtained from the CSAR file of the service model created. However, an easy way to get the resourceModuleName is to send the service instance create request in step 13 above. This will fail as there are no policies, but you can then go into the bpmn debug.log file and get its value by searching for resourcemodulename.
@@ -513,14 +513,14 @@ To Update a policy, use the following curl command. Modify the policy as require
"onapName": "SampleDemo",
"policyScope": "OSDF_CASABLANCA"
}' 'https://pdp:8081/pdp/api/updatePolicy'
-
+
To delete a policy, use two commands below to delete from PDP and PAP
**DELETE POLICY INSIDE PDP**
::
-
+
curl -k -v -H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'ClientAuth: cHl0aG9uOnRlc3Q=' \
@@ -533,14 +533,14 @@ To delete a policy, use two commands below to delete from PDP and PAP
**DELETE POLICY INSIDE PAP**
::
-
+
curl -k -v -H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'ClientAuth: cHl0aG9uOnRlc3Q=' \
-H 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' \
-H 'Environment: TEST' \
-X DELETE \
- -d '{"policyName": "OSDF_CASABLANCA.Config_MS_vnfPolicy_vFWHPA.1.xml","policyComponent":"PAP","policyType":"Optimization","deleteCondition":"ALL"}' https://pdp:8081/pdp/api/deletePolicy
+ -d '{"policyName": "OSDF_CASABLANCA.Config_MS_vnfPolicy_vFWHPA.1.xml","policyComponent":"PAP","policyType":"Optimization","deleteCondition":"ALL"}' https://pdp:8081/pdp/api/deletePolicy
Below are the 3 HPA policies for test cases in the `test plan <https://wiki.onap.org/pages/viewpage.action?pageId=41421112>`_
@@ -559,7 +559,7 @@ Create Policy
}' 'https://pdp:8081/pdp/api/createPolicy'
-Push Policy
+Push Policy
::
@@ -587,7 +587,7 @@ Create Policy
}' 'https://pdp:8081/pdp/api/createPolicy'
-Push Policy
+Push Policy
::
@@ -611,8 +611,8 @@ Create Policy
"onapName": "SampleDemo",
"policyScope": "OSDF_CASABLANCA"
}' 'https://pdp:8081/pdp/api/createPolicy'
-
-Push Policy
+
+Push Policy
::
@@ -621,7 +621,7 @@ Push Policy
"policyName": "OSDF_CASABLANCA.hpa_policy_vFW_3",
"policyType": "MicroService"
}' 'https://pdp:8081/pdp/api/pushPolicy'
-
+
17. Create Service Instance using step 13 above
18. Check bpmn logs to ensure that OOF sent homing response and flavor directives.
@@ -652,9 +652,9 @@ Push Policy
"vnf-networks": [],
"vnf-vms": []
},
-
-
- "vnf-parameters": [
+
+
+ "vnf-parameters": [
{
"vnf-parameter-name": "vfw_image_name",
"vnf-parameter-value": "ubuntu-16.04"
@@ -731,7 +731,7 @@ Push Policy
"vnf-parameter-name": "vsn_private_ip_1",
"vnf-parameter-value": "10.0.100.3"
},
-
+
{
"vnf-parameter-name": "vfw_name_0",
"vnf-parameter-value": "vfw"
@@ -774,7 +774,7 @@ Push Policy
},
{
"vnf-parameter-name": "vf_module_id",
- "vnf-parameter-value": "VfwHpa..base_vfw..module-0"
+ "vnf-parameter-value": "VfwHpa..base_vfw..module-0"
},
{
"vnf-parameter-name": "sec_group",
@@ -797,32 +797,32 @@ Push Policy
"vnf-parameter-name": "oof_directives",
"vnf-parameter-value": "{\"directives\": [{\"id\": \"vfw\", \"type\": \"vnfc\", \"directives\": [{\"attributes\": [{\"attribute_name\": \"firewall_flavor_name\", \"attribute_value\": \"onap.hpa.flavor31\"}, {\"attribute_name\": \"flavorId\", \"attribute_value\": \"2297339f-6a89-4808-a78f-68216091f904\"}, {\"attribute_name\": \"flavorId\", \"attribute_value\": \"2297339f-6a89-4808-a78f-68216091f904\"}, {\"attribute_name\": \"flavorId\", \"attribute_value\": \"2297339f-6a89-4808-a78f-68216091f904\"}], \"type\": \"flavor_directives\"}]}, {\"id\": \"vgenerator\", \"type\": \"vnfc\", \"directives\": [{\"attributes\": [{\"attribute_name\": \"packetgen_flavor_name\", \"attribute_value\": \"onap.hpa.flavor32\"}, {\"attribute_name\": \"flavorId\", \"attribute_value\": \"2297339f-6a89-4808-a78f-68216091f904\"}], \"type\": \"flavor_directives\"}]}, {\"id\": \"vsink\", \"type\": \"vnfc\", \"directives\": [{\"attributes\": [{\"attribute_name\": \"sink_flavor_name\", \"attribute_value\": \"onap.large\"}, {\"attribute_name\": \"flavorId\", \"attribute_value\": \"2297339f-6a89-4808-a78f-68216091f904\"}], \"type\": \"flavor_directives\"}]}]}"
},
-
+
{
"vnf-parameter-name": "sdnc_directives",
"vnf-parameter-value": "{}"
- },
-
+ },
+
{
"vnf-parameter-name": "template_type",
"vnf-parameter-value": "heat"
}
-
-
+
+
],
"vnf-topology-identifier": {
"generic-vnf-name": "oof-12-vnf-3",
- "generic-vnf-type": "vfw_hpa 0",
+ "generic-vnf-type": "vfw_hpa 0",
"service-type": "6b17354c-0fae-4491-b62e-b41619929c54",
- "vnf-name": "vfwhpa_stack",
+ "vnf-name": "vfwhpa_stack",
"vnf-type": "VfwHpa..base_vfw..module-0"
-
+
}
}
}}
-
-Change parameters based on your environment.
+
+Change parameters based on your environment.
**Note**
@@ -833,5 +833,5 @@ Change parameters based on your environment.
"service-type": "6b17354c-0fae-4491-b62e-b41619929c54", <-- same as Service Instance ID
"vnf-name": "vfwhpa_stack", <-- name to be given to the vf module
"vnf-type": "VfwHpa..base_vfw..module-0" <-- can be found on the VID - VF Module dialog screen - Model Name
-
+
21. Create vf module (11g of `vFWCL instantiation, testing and debugging <https://wiki.onap.org/display/DW/vFWCL+instantiation%2C+testing%2C+and+debuging>`_). If everything worked properly, you should see the stack created in your VIM (WR Titanium Cloud OpenStack in this case).
diff --git a/docs/onap-integration-ci.rst b/docs/onap-integration-ci.rst
index 99e72313e..cbbac6686 100644
--- a/docs/onap-integration-ci.rst
+++ b/docs/onap-integration-ci.rst
@@ -1,5 +1,3 @@
-.. _onap-integration-ci:
-
Integration Continuous Integration Guide
----------------------------------------
diff --git a/docs/onap-oom-heat.rst b/docs/onap-oom-heat.rst
index bb9c1abff..8bccec796 100644
--- a/docs/onap-oom-heat.rst
+++ b/docs/onap-oom-heat.rst
@@ -1,5 +1,3 @@
-.. _onap-oom-heat:
-
Integration Environment Installation
-------------------------------------
@@ -126,8 +124,7 @@ Exploring the Rancher VM
The Rancher VM that is spun up by this HEAT template serves the
following key roles:
-- Hosts the /dockerdata-nfs/ NFS export shared by all the k8s VMs for persistent
- volumes
+- Hosts the /dockerdata-nfs/ NFS export shared by all the k8s VMs for persistent volumes
- git clones the oom repo into /root/oom
- git clones the integration repo into /root/integration
- Creates the helm override file at /root/integration-override.yaml
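As a sketch of the NFS role described in the first bullet above (the exact export options generated by the HEAT template may differ):

.. code-block:: bash

   # Hedged sketch of the NFS export on the Rancher VM; exact options may
   # differ from what the HEAT template generates.
   echo '/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)' >> /etc/exports
   exportfs -ra
   # Each k8s VM then mounts the share, e.g.:
   #   mount -t nfs <rancher-ip>:/dockerdata-nfs /dockerdata-nfs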
diff --git a/docs/release-notes.rst b/docs/release-notes.rst
index 884998fa1..d0dce2537 100644
--- a/docs/release-notes.rst
+++ b/docs/release-notes.rst
@@ -2,8 +2,6 @@
.. This work is licensed under a Creative Commons Attribution 4.0
International License. http://creativecommons.org/licenses/by/4.0
-.. _doc-release-notes:
-
Integration Release Notes
=========================