diff --git a/docs/docs_scaleout.rst b/docs/docs_scaleout.rst
index 6b88168da..80ee6bf95 100644
--- a/docs/docs_scaleout.rst
+++ b/docs/docs_scaleout.rst
@@ -1,27 +1,191 @@
.. _docs_scaleout:
+:orphan:
+
VF Module Scale Out Use Case
----------------------------
Source files
~~~~~~~~~~~~
-- Heat templates directory: https://git.onap.org/demo/tree/heat/vLB_CDS?h=elalto
+- Heat templates directory: https://git.onap.org/demo/tree/heat?h=guilin
+- Heat templates directory (vLB_CDS use case): https://git.onap.org/demo/tree/heat/vLB_CDS?h=guilin
Additional files
~~~~~~~~~~~~~~~~
- TOSCA model template: https://git.onap.org/integration/tree/docs/files/scaleout/service-Vloadbalancercds-template.yml
-- Naming policy script: https://git.onap.org/integration/tree/docs/files/scaleout/push_naming_policy.sh
+- Naming policy script: :download:`push_naming_policy.sh <files/scaleout/push_naming_policy.sh>`
+- Controller Blueprint Archive (to use with CDS): https://git.onap.org/ccsdk/cds/tree/components/model-catalog/blueprint-model/service-blueprint/vLB_CDS_Kotlin?h=guilin
+- TCA blueprint: :download:`guilin-tca.yaml <files/scaleout/latest-tca-guilin.yaml>`
+
+Useful tool
+~~~~~~~~~~~
+POSTMAN collection that can be used to simulate all inter-process queries: https://www.getpostman.com/collections/878061d291f9efe55463
+To use this POSTMAN collection, you may need to expose some ports that are not exposed in OOM by default.
+The following commands can help expose the required ports:
+
+::
+
+ kubectl port-forward service/cds-blueprints-processor-http --address 0.0.0.0 32749:8080 -n onap &
+ kubectl port-forward service/so-catalog-db-adapter --address 0.0.0.0 30845:8082 -n onap &
+ kubectl port-forward service/so-request-db-adapter --address 0.0.0.0 32223:8083 -n onap &
+
+OOM Installation
+~~~~~~~~~~~~~~~~
+Before doing the OOM installation, take care of the following steps:
+
+Set the right Openstack values for Robot and SO
+===============================================
+
+The Robot configuration must be set in an OOM override file before the OOM installation; this initializes the Robot framework and SO with all the required OpenStack info.
+A section like the following is required in that override file:
+
+::
+
+ robot:
+ enabled: true
+ flavor: small
+ appcUsername: "appc@appc.onap.org"
+ appcPassword: "demo123456!"
+ openStackKeyStoneUrl: "http://10.12.25.2:5000"
+ openStackKeystoneAPIVersion: "v3"
+ openStackPublicNetId: "5771462c-9582-421c-b2dc-ee6a04ec9bde"
+ openStackTenantId: "c9ef9a6345b440b7a96d906a0f48c6b1"
+ openStackUserName: "openstack_user"
+ openStackUserDomain: "default"
+ openStackProjectName: "CLAMP"
+ ubuntu14Image: "trusty-server-cloudimg-amd64-disk1"
+ ubuntu16Image: "xenial-server-cloudimg-amd64-disk1"
+ openStackPrivateNetCidr: "10.0.0.0/16"
+ openStackPrivateNetId: "fd05c1ab-3f43-4f6f-8a8c-76aee04ef293"
+ openStackPrivateSubnetId: "fd05c1ab-3f43-4f6f-8a8c-76aee04ef293"
+ openStackSecurityGroup: "f05e9cbf-d40f-4d1f-9f91-d673ba591a3a"
+ openStackOamNetworkCidrPrefix: "10.0"
+ dcaeCollectorIp: "10.12.6.10"
+ vnfPubKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDKXDgoo3+WOqcUG8/5uUbk81+yczgwC4Y8ywTmuQqbNxlY1oQ0YxdMUqUnhitSXs5S/yRuAVOYHwGg2mCs20oAINrP+mxBI544AMIb9itPjCtgqtE2EWo6MmnFGbHB4Sx3XioE7F4VPsh7japsIwzOjbrQe+Mua1TGQ5d4nfEOQaaglXLLPFfuc7WbhbJbK6Q7rHqZfRcOwAMXgDoBqlyqKeiKwnumddo2RyNT8ljYmvB6buz7KnMinzo7qB0uktVT05FH9Rg0CTWH5norlG5qXgP2aukL0gk1ph8iAt7uYLf1ktp+LJI2gaF6L0/qli9EmVCSLr1uJ38Q8CBflhkh"
+ demoArtifactsVersion: "1.6.0"
+ demoArtifactsRepoUrl: "https://nexus.onap.org/content/repositories/releases"
+ scriptVersion: "1.6.0"
+ nfsIpAddress: "10.12.6.10"
+ config:
+ openStackEncryptedPasswordHere: "e10c86aa13e692020233d18f0ef6d527"
+ openStackSoEncryptedPassword: "1DD1B3B4477FBAFAFEA617C575639C6F09E95446B5AE1F46C72B8FD960219ABB0DBA997790FCBB12"
+ so:
+ enabled: true
+ so-catalog-db-adapter:
+ config:
+      openStackUserName: "openstack_user"
+ openStackKeyStoneUrl: "http://10.12.25.2:5000/v3"
+ openStackEncryptedPasswordHere: "1DD1B3B4477FBAFAFEA617C575639C6F09E95446B5AE1F46C72B8FD960219ABB0DBA997790FCBB12"
+ openStackKeystoneVersion: "KEYSTONE_V3"
+
+The values that must be changed according to your lab are all the "openStack******" parameters, plus dcaeCollectorIp and nfsIpAddress.
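As a quick sanity check before installing, the lab-specific keys can be listed straight from the override file. A minimal sketch, assuming the override section above is saved as ``onap-overrides.yaml`` (the file name and the abbreviated contents here are illustrative):

```shell
# Illustrative file name and abbreviated contents; use your full override file.
cat > onap-overrides.yaml <<'EOF'
robot:
  openStackKeyStoneUrl: "http://10.12.25.2:5000"
  openStackUserName: "openstack_user"
  dcaeCollectorIp: "10.12.6.10"
  nfsIpAddress: "10.12.6.10"
EOF
# List every lab-specific parameter that still needs your values.
grep -nE 'openStack|dcaeCollectorIp|nfsIpAddress' onap-overrides.yaml
```

Reviewing this list against your OpenStack tenant before running the install avoids a re-deploy to fix a single wrong ID.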
+
+**Generating SO Encrypted Password:**
+
+The SO encrypted password uses a Java-based encryption utility, since the
+Java encryption library is not easy to integrate with the openssl/python
+tooling that Robot uses in Dublin and later versions.
+
+.. note::
+ To generate SO ``openStackEncryptedPasswordHere`` and ``openStackSoEncryptedPassword``
+ ensure `default-jdk` is installed::
+
+ apt-get update; apt-get install default-jdk
+
+ Then execute (on oom repository)::
+
+ SO_ENCRYPTION_KEY=`cat ~/oom/kubernetes/so/resources/config/mso/encryption.key`
+    OS_PASSWORD=XXXX_OS_CLEARTEXTPASSWORD_XXXX
+
+ git clone http://gerrit.onap.org/r/integration
+ cd integration/deployment/heat/onap-rke/scripts
+
+ javac Crypto.java
+ java Crypto "$OS_PASSWORD" "$SO_ENCRYPTION_KEY"
+
+**Update the OpenStack parameters:**
+
+There are assumptions in the demonstration VNF Heat templates about the
+networking available in the environment. To get the most value out of these
+templates and the automation that can help confirm the setup is correct, please
+observe the following constraints.
+
+
+``openStackPublicNetId:``
+  This network should allow Heat templates to add interfaces.
+  This need not be an external network; floating IPs can be assigned to the
+  ports on the VMs that are created by the Heat template, but it is important
+  that Neutron allows ports to be created on this network.
+
+``openStackPrivateNetCidr: "10.0.0.0/16"``
+  This IP address block is used to assign OA&M addresses to VNFs to allow ONAP
+  connectivity. The demonstration Heat templates assume that the 10.0 prefix
+  can be used by the VNFs, and the demonstration IP addressing plan embodied
+  in the preload template prevents conflicts when instantiating the various
+  VNFs. If you need to change this, you will need to modify the preload data
+  in the Robot Helm chart (e.g. integration_preload_parameters.py) and the
+  demo/heat/preload_data in the Robot container. The size of the CIDR should
+  be sufficient for ONAP and the VMs you expect to create.
+
+``openStackOamNetworkCidrPrefix: "10.0"``
+  This IP prefix must match the openStackPrivateNetCidr and is a helper
+  variable for some of the Robot scripts used for demonstration. A production
+  deployment need not worry about this setting, but for the demonstration VNFs
+  the IP assignment strategy assumes the 10.0 prefix.
+
+**Generating ROBOT Encrypted Password:**
+
+The Robot encrypted password uses the same encryption.key as SO, but with an
+openssl algorithm that works with the Python-based Robot Framework.
+
+.. note::
+  To generate the Robot ``openStackEncryptedPasswordHere``, run the following in the oom repository::
+
+    cd so/resources/config/mso/
+    echo -n "<openstack tenant password>" | openssl aes-128-ecb -e -K `cat encryption.key` -nosalt | xxd -c 256 -p
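To confirm an encrypted value before putting it in the override file, the same openssl command can be reversed with ``-d``. A round-trip sketch, using a random sample key rather than a real encryption.key:

```shell
# Sample 128-bit key in hex; in a real run use: KEY=$(cat encryption.key)
KEY=aa3871669d893c7fb8abbcda31b88b4f
# Encrypt exactly as the note above does.
ENC=$(echo -n "my-tenant-password" | openssl aes-128-ecb -e -K "$KEY" -nosalt | xxd -c 256 -p)
# Decrypt to verify the value survives the round trip.
DEC=$(echo -n "$ENC" | xxd -r -p | openssl aes-128-ecb -d -K "$KEY" -nosalt)
echo "$DEC"
```

If the decrypted output does not match the original password, the key file or the hex handling (xxd) is the usual culprit.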
+
+Initialize the Customer and Owning entities
+===========================================
+
+The Robot script can be helpful to initialize the customer and owning entity that
+will be used later to instantiate the VNF (PART 2 - Scale Out Use Case Instantiation).
+
+In the oom_folder/kubernetes/robot/ folder, execute the following command:
+
+::
+
+    ./demo-k8s.sh onap init_customer
+
+If this command is unsuccessful it means that the parameters provided to the OOM installation were not correct.
+
+- Verify and get the tenant/owning entity/cloud-regions defined in AAI by the Robot script.
+  These values will be required by the POSTMAN collection when instantiating the Service/VNF.
+
+To get them, the following POSTMAN collection queries are useful:
+
+- GET "AAI Owning Entities"
+- GET "AAI Cloud-regions"
+- GET "AAI Cloud-regions/tenant"
Description
~~~~~~~~~~~
-The scale out use case uses a VNF composed of three virtual functions. A traffic generator (vPacketGen), a load balancer (vLB), and a DNS (vDNS). Communication between the vPacketGen and the vLB, and the vLB and the vDNS occurs via two separate private networks. In addition, all virtual functions have an interface to the ONAP OAM private network, as shown in the topology below.
+
+The scale out use case uses a VNF composed of three virtual functions: a traffic
+generator (vPacketGen), a load balancer (vLB), and a DNS (vDNS). Communication
+between the vPacketGen and the vLB, and between the vLB and the vDNS, occurs via
+two separate private networks. In addition, all virtual functions have an
+interface to the ONAP OAM private network, as shown in the topology below.
.. figure:: files/scaleout/topology.png
:align: center
-The vPacketGen issues DNS lookup queries that reach the DNS server via the vLB. vDNS replies reach the packet generator via the vLB as well. The vLB reports the average amount of traffic per vDNS instances over a given time interval (e.g. 10 seconds) to the DCAE collector via the ONAP OAM private network.
+The vPacketGen issues DNS lookup queries that reach the DNS server via the vLB.
+vDNS replies reach the packet generator via the vLB as well. The vLB reports the
+average amount of traffic per vDNS instance over a given time interval (e.g. 10
+seconds) to the DCAE collector via the ONAP OAM private network.
-To run the use case, make sure that the security group in OpenStack has ingress/egress entries for protocol 47 (GRE). Users can test the VNF by running DNS queries from the vPakcketGen:
+To run the use case, make sure that the security group in OpenStack has
+ingress/egress entries for protocol 47 (GRE). Users can test the VNF by running
+DNS queries from the vPacketGen:
::
@@ -61,7 +225,14 @@ The output below means that the vLB has been set up correctly, has forwarded the
The Scale Out Use Case
~~~~~~~~~~~~~~~~~~~~~~
-The Scale Out use case shows how users/network operators can add Virtual Network Function Components (VNFCs) as part of a VF Module that has been instantiated in the Service model, in order to increase capacity of the network. ONAP Frankfurt release supports scale out with manual trigger by directly calling SO APIs and closed-loop-enabled automation from Policy. For Frankfurt, the APPC controller is used to demonstrate post-scaling VNF reconfiguration operations. APPC can handle different VNF types, not only the VNF described in this document.
+
+The Scale Out use case shows how users/network operators can add Virtual Network
+Function Components (VNFCs) as part of a VF Module that has been instantiated in
+the Service model, in order to increase the capacity of the network. The ONAP
+Frankfurt release supports scale out with a manual trigger, by directly calling
+SO APIs, and with closed-loop-enabled automation from Policy. For Frankfurt, the
+APPC controller is used to demonstrate post-scaling VNF reconfiguration
+operations. APPC can handle different VNF types, not only the VNF described in
+this document.
The figure below shows all the interactions that take place during scale out operations.
@@ -74,37 +245,87 @@ There are four different message flows:
- Red: Closed-loop enabled scale out.
- Black: Orchestration and VNF lifecycle management (LCM) operations.
-The numbers in the figure represent the sequence of steps within a given flow. Note that interactions between the components in the picture and AAI, SDNC, and DMaaP are not shown for clarity's sake.
-
-Scale out with manual trigger (green flow) and closed-loop-enabled scale out (red flow) are mutually exclusive. When the manual trigger is used, VID directly triggers the appropriate workflow in SO (step 1 of the green flow in the figure above). See Section 4 for more details.
-
-When closed-loop enabled scale out is used, Policy triggers the SO workflow. The closed loop starts with the vLB periodically reporting telemetry about traffic patterns to the VES collector in DCAE (step 1 of the red flow). When the amount of traffic exceeds a given threshold (which the user defines during closed loop creation in CLAMP - see Section 1-4), DCAE notifies Policy (step 2), which in turn triggers the appropriate action. For this use case, the action is contacting SO to augment resource capacity in the network (step 3).
-
-At high level, once SO receives a call for scale out actions, it first creates a new VF module (step 1 of the black flow), then calls APPC to trigger some LCM actions (step 2). APPC runs VNF health check and configuration scale out as part of LCM actions (step 3). At this time, the VNF health check only reports the health status of the vLB, while the configuration scale out operation adds a new vDNS instance to the vLB internal state. As a result of configuration scale out, the vLB opens a connection towards the new vDNS instance.
+The numbers in the figure represent the sequence of steps within a given flow.
+Note that interactions between the components in the picture and AAI, SDNC, and
+DMaaP are not shown for clarity's sake.
+
+Scale out with manual trigger (green flow) and closed-loop-enabled scale out
+(red flow) are mutually exclusive. When the manual trigger is used, VID directly
+triggers the appropriate workflow in SO (step 1 of the green flow in the figure
+above). See Section 4 for more details.
+
+When closed-loop enabled scale out is used, Policy triggers the SO workflow.
+The closed loop starts with the vLB periodically reporting telemetry about traffic
+patterns to the VES collector in DCAE (step 1 of the red flow). When the amount
+of traffic exceeds a given threshold (which the user defines during closed loop
+creation in CLAMP - see Section 1-4), DCAE notifies Policy (step 2), which in turn
+triggers the appropriate action. For this use case, the action is contacting SO to
+augment resource capacity in the network (step 3).
+
+At a high level, once SO receives a call for scale out actions, it first creates a
+new VF module (step 1 of the black flow), then calls APPC to trigger some LCM
+actions (step 2). APPC runs VNF health check and configuration scale out as part
+of LCM actions (step 3). At this time, the VNF health check only reports the
+health status of the vLB, while the configuration scale out operation adds a new
+vDNS instance to the vLB internal state. As a result of configuration scale out,
+the vLB opens a connection towards the new vDNS instance.
At a deeper level, the SO workflow works as depicted below:
.. figure:: files/scaleout/so-blocks.png
:align: center
-SO first contacts APPC to run VNF health check and proceeds on to the next block of the workflow only if the vLB is healthy (not shown in the previous figure for simplicity's sake). Then, SO assigns resources, instantiates, and activates the new VF module. Finally, SO calls APPC again for configuration scale out and VNF health check. The VNF health check at the end of the workflow validates that the vLB health status hasn't been negatively affected by the scale out operation.
+SO first contacts APPC to run VNF health check and proceeds on to the next block
+of the workflow only if the vLB is healthy (not shown in the previous figure for
+simplicity's sake). Then, SO assigns resources, instantiates, and activates the
+new VF module. Finally, SO calls APPC again for configuration scale out and VNF
+health check. The VNF health check at the end of the workflow validates that the
+vLB health status hasn't been negatively affected by the scale out operation.
PART 1 - Service Definition and Onboarding
------------------------------------------
+
This use-case requires operations on several ONAP components to perform service definition and onboarding.
+1-1 VNF Configuration Modeling and Upload with CDS (Recommended way)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-1-1 VNF Configuration Modeling and Upload with CDS
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Since Dublin, the scale out use case integrates with the Controller Design Studio (CDS) ONAP component to automate the generation of cloud configuration at VNF instantiation time. The user interested in running the use case only with manual preload can skip this section and start from Section 1-2. The description of the use case with manual preload is provided in Section 5.
Users can model this configuration at VNF design time and onboard the blueprint to CDS via the CDS GUI. The blueprint includes naming policies and network configuration details (e.g. IP address families, network names, etc.) that CDS will use during VNF instantiation to generate resource names and assign network configuration to VMs through the cloud orchestrator.
Please look at the CDS documentation for details about how to create configuration models, blueprints, and use the CDS tool: https://wiki.onap.org/display/DW/Modeling+Concepts. For running the use case, users can use the standard model package that CDS provides out of the box, which can be found here: https://wiki.onap.org/pages/viewpage.action?pageId=64007442
+::
+
+    For the current use case you can also follow these steps (do not use the SDC flow to deploy the CBA when importing a VSP; this no longer works since Guilin):
+ 1. You must first bootstrap CDS by using the query in the POSTMAN collection query named POST "CDS Bootstrap"
+ 2. You must upload the attached CBA by using the POSTMAN collection named POST "CDS Save without Validation", the CBA zip file can be attached in the POSTMAN query
+ Controller Blueprint Archive (to use with CDS) : https://git.onap.org/ccsdk/cds/tree/components/model-catalog/blueprint-model/service-blueprint/vLB_CDS_Kotlin?h=guilin
+ 3. Create a zip file with the HEAT files located here: https://git.onap.org/demo/tree/heat/vLB_CDS?h=guilin
+ 4. Create the VSP & Service in the SDC onboarding and SDC Catalog + Distribute the service
+ To know the right values that must be set in the SDC Service properties assignment you must open the CBA zip and look at the TOSCA-Metadata/TOSCA.meta file
+       This file looks like this:
+ TOSCA-Meta-File-Version: 1.0.0
+ CSAR-Version: 1.0
+ Created-By: Seaudi, Abdelmuhaimen <abdelmuhaimen.seaudi@orange.com>
+ Entry-Definitions: Definitions/vLB_CDS.json
+ Template-Tags: vLB_CDS
+ Template-Name: vLB_CDS
+ Template-Version: 1.0.0
+ Template-Type: DEFAULT
+
+ - The sdnc_model_version is the Template-Version
+ - The sdnc_model_name is the Template-Name
+ - The sdnc_artifact_name is the prefix of the file you want to use in the Templates folder, in our CBA example it's vnf (that is supposed to reference the /Templates/vnf-mapping.json file)
+
+ Follow this guide for the VSP onboarding + service creation + properties assignment + distribution part (just skip the CBA attachment part as the CBA should have been pushed manually with the REST command): https://wiki.onap.org/pages/viewpage.action?pageId=64007442
+
+ Note that in case of issues with the AAI distribution, this may help : https://jira.onap.org/browse/AAI-1759
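The mapping from TOSCA.meta to the SDC property values described above can be scripted. A small sketch, using the example file contents from the text (only the variable names sdnc_model_name/sdnc_model_version are taken from the doc; the extraction commands are an illustration):

```shell
# Recreate the example TOSCA-Metadata/TOSCA.meta shown above.
cat > TOSCA.meta <<'EOF'
TOSCA-Meta-File-Version: 1.0.0
CSAR-Version: 1.0
Entry-Definitions: Definitions/vLB_CDS.json
Template-Tags: vLB_CDS
Template-Name: vLB_CDS
Template-Version: 1.0.0
Template-Type: DEFAULT
EOF
# Extract the values to use in the SDC service properties assignment.
sdnc_model_name=$(awk -F': ' '/^Template-Name/{print $2}' TOSCA.meta)
sdnc_model_version=$(awk -F': ' '/^Template-Version/{print $2}' TOSCA.meta)
echo "sdnc_model_name=$sdnc_model_name sdnc_model_version=$sdnc_model_version"
```

Reading the values out of the CBA zip itself avoids copy/paste mismatches between the blueprint and the SDC properties assignment.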
1-2 VNF Onboarding and Service Creation with SDC
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
Once the configuration blueprint is uploaded to CDS, users can define and onboard a service using SDC. SDC requires users to onboard a VNF descriptor that contains the definition of all the resources (private networks, compute nodes, keys, etc.) with their parameters that compose a VNF. The VNF used to demonstrate the scale out use case supports Heat templates as VNF descriptor, and hence requires OpenStack as cloud layer. Users can use the Heat templates linked at the top of the page to create a zip file that can be uploaded to SDC during service creation. To create a zip file, the user must be in the same folder that contains the Heat templates and the Manifest file that describes the content of the package. To create a zip file from command line, type:
::
@@ -139,7 +360,7 @@ To upload a DCAE blueprint, from the "Composition" tab in the service menu, sele
.. figure:: files/scaleout/1.png
:align: center
-Upload the DCAE blueprint linked at the top of the page using the pop-up window.
+Upload the DCAE blueprint (choose the one matching your ONAP release; the original TCA was deprecated in Guilin and a new one is available) linked at the top of the page using the pop-up window.
.. figure:: files/scaleout/2.png
:align: center
@@ -162,10 +383,11 @@ This VNF only supports scaling the vDNS, so users should select the vDNS module
At this point, users can complete the service creation in SDC by testing, accepting, and distributing the Service Models as described in the SDC user manual.
-
1-3 Deploy Naming Policy
~~~~~~~~~~~~~~~~~~~~~~~~
+
This step is only required if CDS is used.
+Note that in Guilin, the default naming policy is already deployed in Policy, so this step is optional.
In order to instantiate the VNF using CDS features, users need to deploy the naming policy that CDS uses for resource name generation to the Policy Engine. User can copy and run the script at the top of the page from any ONAP pod, for example Robot or Drools. The script uses the Policy endpoint defined in the Kubernetes domain, so the execution has to be triggered from some pod in the Kubernetes space.
@@ -175,9 +397,98 @@ In order to instantiate the VNF using CDS features, users need to deploy the nam
./push_naming_policy.sh
+
1-4 Closed Loop Design with CLAMP
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-This step is only required if closed loop is used.
+
+This step is only required if closed loop is used; for manual scale out this section can be skipped.
+
+Here are JSON examples that can be copy-pasted into each policy configuration by clicking the EDIT JSON button; just replace the value "LOOP_test_vLB_CDS" with your loop ID.
+
+For TCA config:
+::
+
+ {
+ "tca.policy": {
+ "domain": "measurementsForVfScaling",
+ "metricsPerEventName": [
+ {
+ "policyScope": "DCAE",
+ "thresholds": [
+ {
+ "version": "1.0.2",
+ "severity": "MAJOR",
+ "thresholdValue": 200,
+ "closedLoopEventStatus": "ONSET",
+ "closedLoopControlName": "LOOP_test_vLB_CDS",
+ "direction": "LESS_OR_EQUAL",
+ "fieldPath": "$.event.measurementsForVfScalingFields.vNicPerformanceArray[*].receivedTotalPacketsDelta"
+ }
+ ],
+ "eventName": "vLoadBalancer",
+ "policyVersion": "v0.0.1",
+ "controlLoopSchemaType": "VM",
+ "policyName": "DCAE.Config_tca-hi-lo"
+ }
+ ]
+ }
+ }
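Before pasting the TCA config into CLAMP, it can be syntax-checked and the loop ID verified with jq. A sketch, assuming jq is installed and using an abbreviated copy of the JSON above:

```shell
# Abbreviated copy of the TCA policy JSON from the text.
cat > tca.json <<'EOF'
{
  "tca.policy": {
    "domain": "measurementsForVfScaling",
    "metricsPerEventName": [
      {
        "policyScope": "DCAE",
        "thresholds": [
          {
            "severity": "MAJOR",
            "thresholdValue": 200,
            "closedLoopControlName": "LOOP_test_vLB_CDS",
            "direction": "LESS_OR_EQUAL"
          }
        ],
        "eventName": "vLoadBalancer"
      }
    ]
  }
}
EOF
# jq fails with a parse error if the JSON is malformed; otherwise it prints
# the loop ID, which must match your CLAMP loop ID.
jq -r '."tca.policy".metricsPerEventName[0].thresholds[0].closedLoopControlName' tca.json
```

The quoted key syntax (``."tca.policy"``) is needed because the key itself contains a dot.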
+
+For Drools config:
+
+::
+
+ {
+ "abatement": false,
+ "operations": [
+ {
+ "failure_retries": "final_failure_retries",
+ "id": "policy-1-vfmodule-create",
+ "failure_timeout": "final_failure_timeout",
+ "failure": "final_failure",
+ "operation": {
+ "payload": {
+ "requestParameters": "{\"usePreload\":false,\"userParams\":[]}",
+ "configurationParameters": "[{\"ip-addr\":\"$.vf-module-topology.vf-module-parameters.param[16].value\",\"oam-ip-addr\":\"$.vf-module-topology.vf-module-parameters.param[30].value\"}]"
+ },
+ "target": {
+ "entityIds": {
+ "resourceID": "Vlbcds..vdns..module-3",
+ "modelInvariantId": "e95a2949-8ba5-433d-a88f-587a6244b4ea",
+ "modelVersionId": "4a6ceddc-147e-471c-ae6f-907a0df76040",
+ "modelName": "Vlbcds..vdns..module-3",
+ "modelVersion": "1",
+ "modelCustomizationId": "7806ed67-a826-4b0e-b474-9ca4fa052a10"
+ },
+ "targetType": "VFMODULE"
+ },
+ "actor": "SO",
+ "operation": "VF Module Create"
+ },
+ "failure_guard": "final_failure_guard",
+ "retries": 1,
+ "timeout": 300,
+ "failure_exception": "final_failure_exception",
+ "description": "test",
+ "success": "final_success"
+ }
+ ],
+ "trigger": "policy-1-vfmodule-create",
+ "timeout": 650,
+ "id": "LOOP_test_vLB_CDS"
+ }
+
+For Frequency Limiter config:
+
+::
+
+ {
+ "id": "LOOP_test_vLB_CDS",
+ "actor": "SO",
+ "operation": "VF Module Create",
+ "limit": 1,
+ "timeWindow": 10,
+ "timeUnits": "minute"
+ }
Once the service model is distributed, users can design the closed loop from CLAMP, using the GUI at https://clamp.api.simpledemo.onap.org:30258
@@ -283,6 +594,7 @@ At this point, the closed loop is deployed to Policy Engine and DCAE, and a new
1-5 Creating a VNF Template with CDT
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
Before running scale out use case, the users need to create a VNF template using the Controller Design Tool (CDT), a design-time tool that allows users to create and on-board VNF templates into APPC. The template describes which control operation can be executed against the VNF (e.g. scale out, health check, modify configuration, etc.), the protocols that the VNF supports, port numbers, VNF APIs, and credentials for authentication. Being VNF agnostic, APPC uses these templates to "learn" about specific VNFs and the supported operations.
CDT requires two input:
@@ -349,6 +661,8 @@ To create the VNF template in CDT, the following steps are required:
- Click "Reference Data" Tab
- Click "Save All to APPC"
+Note: if a user gets an error when saving to APPC (cannot connect to APPC network), they should open a browser to http://ANY_K8S_IP:30211 to accept the APPC proxy certificate.
+
For health check operation, we just need to specify the protocol, the port number and username of the VNF (REST, 8183, and "admin" respectively, in the case of vLB/vDNS) and the API. For the vLB/vDNS, the API is:
::
@@ -366,6 +680,7 @@ At this time, CDT doesn't allow users to provide VNF password from the GUI. To u
1-6 Setting the Controller Type in SO Database
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
Users need to specify which controller to use for the scale out use case. For Dublin, the supported controller is APPC. Users need to create an association between the controller and the VNF type in the SO database.
To do so:
@@ -377,7 +692,7 @@ To do so:
mysql -ucataloguser -pcatalog123
-- Use catalogdb databalse
+- Use catalogdb database
::
@@ -395,6 +710,7 @@ SO has a default entry for VNF type "vLoadBalancerMS/vLoadBalancerMS 0"
1-7 Determining VNF reconfiguration parameters
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
The post scale out VNF reconfiguration is VNF-independent but the parameters used for VNF reconfiguration depend on the specific use case. For example, the vLB-vDNS-vPacketGenerator VNF described in this documentation use the vLB as "anchor" point. The vLB maintains the state of the VNF, which, for this use case is the list of active vDNS instances. After creating a new vDNS instance, the vLB needs to know the IP addresses (of the internal private network and management network) of the new vDNS. The reconfiguration action is executed by APPC, which receives those IP addresses from SO during the scale out workflow execution. Note that different VNFs may have different reconfiguration actions. A parameter resolution is expressed as JSON path to the SDNC VF module topology parameter array. For each reconfiguration parameter, the user has to specify the array location that contains the corresponding value (IP address in the specific case). For example, the "configurationParameters" section of the input request to SO during scale out with manual trigger (see Section 4) contains the resolution path to "ip-addr" and "oam-ip-addr" parameters used by the VNF.
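The parameter resolution described above can be illustrated offline with jq and sample topology data. The parameter names, values, and index 0 below are illustrative; real indices (such as 16 and 30 in the SO request) depend on your specific model:

```shell
# Minimal, made-up sample of an SDNC VF module topology parameter array.
cat > topology.json <<'EOF'
{
  "vf-module-topology": {
    "vf-module-parameters": {
      "param": [
        { "name": "vdns_int_private_ip_0",  "value": "192.168.10.212" },
        { "name": "vdns_onap_private_ip_0", "value": "10.0.101.30" }
      ]
    }
  }
}
EOF
# A path like "$.vf-module-topology.vf-module-parameters.param[0].value"
# resolves to the first parameter's value:
jq -r '."vf-module-topology"."vf-module-parameters".param[0].value' topology.json
```

Dumping the real topology from SDNC and probing it this way is a quick method to find the right array indices for "ip-addr" and "oam-ip-addr" before building the SO request.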
::
@@ -896,7 +1212,30 @@ In future releases, we plan to leverage CDS to model post scaling VNF reconfigur
PART 2 - Scale Out Use Case Instantiation
-----------------------------------------
-This step is only required if CDS is used.
+
+Manual queries with POSTMAN
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This step is only required if CDS is used; otherwise you can use VID to instantiate the service and the VNF.
+Note that the POSTMAN collection linked at the top of this page provides some level of automatic scripting that automatically passes values between requests and provisions the variables used by the following queries.
+
+You must set different variables in the POSTMAN config:
+
+- "k8s" -> The k8s loadBalancer cluster node
+- "cds-service-model" -> The SDC service name distributed
+- "cds-instance-name" -> A name of your choice for the VNF instance (this must be changed each time you launch the instantiation)
+
+The useful requests are:
+
+- CDS#1 - SDC Catalog Service -> Gets the SDC service and provisions some variables
+- CDS#2 - SO Catalog DB Service VNFs - CDS -> Gets info from SO and provisions some variables for the instantiation
+- CDS#3 - SO Self-Serve Service Assign & Activate -> Starts the Service/VNF instantiation
+
+Open the body and replace the values such as tenantId, owning entity, region, and all the OpenStack values everywhere in the payload.
+
+Note that you may have to add "onap_private_net_cidr":"10.0.0.0/16" in the "instanceParams" array depending on your OpenStack network configuration.
+
+CDS#4 - SO infra Active Request -> Used to get the status of the previous query
+
+Manual queries without POSTMAN
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
GET information from SDC catalogdb
@@ -1159,6 +1498,7 @@ PART 3 - Post Instantiation Operations
3-1 Post Instantiation VNF configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
CDS executes post-instantiation VNF configuration if the "skip-post-instantiation" flag in the SDC service model is set to false, which is the default behavior. Manual post-instantiation configuration is necessary if the "skip-post-instantiation" flag in the service model is set to true or if the VNF is instantiated using the preload approach, which doesn't include CDS. Regardless, this step is NOT required during scale out operations, as VNF reconfiguration will be triggered by SO and executed by APPC.
If VNF post instantiation is executed manually, in order to change the state of the vLB the users should run the following REST call, replacing the IP addresses in the VNF endpoint and JSON object to match the private IP addresses of their vDNS instance:
@@ -1186,6 +1526,7 @@ At this point, the VNF is fully set up.
3-2 Updating AAI with VNF resources
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
To allow automated scale out via closed loop, the users need to inventory the VNF resources in AAI. This is done by running the heatbridge python script in /root/oom/kubernetes/robot in the Rancher VM in the Kubernetes cluster:
::
@@ -1198,7 +1539,25 @@ Note that "vlb_onap_private_ip_0" used in the heatbridge call is the actual para
PART 4 - Triggering Scale Out Manually
--------------------------------------
-For scale out with manual trigger, VID is not supported at this time. Users can run the use case by directly calling SO APIs:
+For scale out with manual trigger, VID is not supported at this time.
+
+Manual queries with POSTMAN
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Note that the POSTMAN collection linked at the top of this page provides some level of automatic scripting that automatically passes values between requests and provisions the variables used by the following queries.
+
+You must set different variables in the POSTMAN config:
+
+- "k8s" -> The k8s loadBalancer cluster node
+- "cds-service-model" -> The SDC service name distributed
+- "cds-instance-name" -> A name of your choice for the VNF instance (this must be changed each time you launch the instantiation)
+
+- CDS#5 - SO ScaleOut -> Initiates a scale out manually
+- CDS#7 - SO ScaleIn -> Initiates a scale in manually
+
+Manual queries without POSTMAN
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Users can run the use case by directly calling SO APIs:
::
@@ -1752,6 +2111,7 @@ Module-1 Preload
Module-2 Preload
~~~~~~~~~~~~~~~~
+
::
@@ -2068,7 +2428,8 @@ To instantiate VF modules, please refer to this wiki page: https://wiki.onap.org
PART 6 - Known Issues and Resolutions
-------------------------------------
-1) When running closed loop-enabled scale out, the closed loop designed in CLAMP conflicts with the default closed loop defined for the old vLB/vDNS use case
+
+1) When running closed loop-enabled scale out, the closed loop designed in CLAMP conflicts with the default closed loop defined for the old vLB/vDNS use case.
Resolution: Change TCA configuration for the old vLB/vDNS use case
@@ -2076,3 +2437,5 @@ Resolution: Change TCA configuration for the old vLB/vDNS use case
- Change "eventName" in the vLB default policy to something different, for example "vLB" instead of the default value "vLoadBalancer"
- Change "subscriberConsumerGroup" in the TCA configuration to something different, for example "OpenDCAE-c13" instead of the default value "OpenDCAE-c12"
- Click "UPDATE" to upload the new TCA configuration
+
+2) During Guilin testing, it was noticed that there is an issue between SO and APPC for health check queries; this does not prevent the use case from proceeding but limits APPC capabilities.