path: root/docs
authorLukasz Rajewski <lukasz.rajewski@orange.com>2020-04-27 21:30:24 +0200
committerLukasz Rajewski <lukasz.rajewski@orange.com>2020-04-28 11:55:25 +0200
commitb98b27daa949aa82207973e67f5d7dd0c8a361f5 (patch)
treede793b1c737930bae4d7c2eb1e9da8dff2bb002c /docs
parent151f6bd9846696668cdf395706dfc567442823e4 (diff)
Documentation for vFW In-Place Upgrade with TD
The documentation for the vFW Traffic Distribution use case evolves into the vFW In-Place Upgrade with TD one. The description of the use case purpose, the workflow, the configuration procedure and the information about the way of testing has been changed. Change-Id: I1e68d46871864b4e65df553355b3a11d86b4c9cb Issue-ID: INT-1277 Signed-off-by: Lukasz Rajewski <lukasz.rajewski@orange.com>
Diffstat (limited to 'docs')
-rw-r--r--   docs/conf.py                                     |    2
-rw-r--r--   docs/docs_vFWDT.rst                              |  575
-rwxr-xr-x   docs/files/dt-use-case.png                       |  bin 240228 -> 154683 bytes
-rw-r--r--   docs/files/vfwdt-general-workflow-sd.png         |  bin 0 -> 158564 bytes
-rw-r--r--   docs/files/vfwdt-identification-workflow-sd.png  |  bin 0 -> 75840 bytes
-rw-r--r--   docs/files/vfwdt-td-workflow-sd.png              |  bin 0 -> 200932 bytes
-rw-r--r--   docs/files/vfwdt-upgrade-workflow-sd.png         |  bin 0 -> 143490 bytes
-rw-r--r--   docs/files/vfwdt-workflow-general.png            |  bin 0 -> 14271 bytes
-rw-r--r--   docs/files/vfwdt-workflow-traffic.png            |  bin 0 -> 16021 bytes
-rw-r--r--   docs/files/vfwdt-workflow-upgrade.png            |  bin 0 -> 16124 bytes
10 files changed, 383 insertions, 194 deletions
diff --git a/docs/conf.py b/docs/conf.py
index 99a44bc85..cd82442a5 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -13,7 +13,7 @@ linkcheck_ignore = [
'http://so-monitoring:30224',
r'http://SINK_IP_ADDRESS:667.*',
r'http.*K8S_HOST:30227.*',
- r'http.*K8S_NODE_IP:30209.*'
+ r'http.*K8S_NODE_IP.*'
]
intersphinx_mapping = {}
diff --git a/docs/docs_vFWDT.rst b/docs/docs_vFWDT.rst
index 35fae7715..5554cd4db 100644
--- a/docs/docs_vFWDT.rst
+++ b/docs/docs_vFWDT.rst
@@ -7,88 +7,174 @@
:depth: 3
..
-vFW Traffic Distribution Use Case
----------------------------------
+vFW In-Place Software Upgrade with Traffic Distribution Use Case
+----------------------------------------------------------------
Description
~~~~~~~~~~~
-The purpose of this work is to show Traffic Distribiution functionality implemented in Casablanca and Dublin releases for vFW Use Case.
-The orchstration workflow triggers a change to traffic distribution (redistribution) done by a traffic balancing/distribution entity (aka anchor point).
-The DistributeTraffic action targets the traffic balancing/distribution entity, in some cases DNS, other cases a load balancer external to the VNF instance, as examples.
+The purpose of this work is to show the In-Place Software Upgrade with Traffic Distribution functionality implemented in the Frankfurt release for the vFW Use Case.
+The use case is an evolution of the vFW Traffic Distribution Use Case which was developed for the Casablanca and Dublin releases.
+The orchestration workflow triggers a change of the software on a selected instance of the firewall. The change is performed with minimal disruption of the
+service, since the firewall being upgraded must have all the traffic migrated out before the upgrade can be started. The traffic migration (redistribution) is done by
+a traffic balancing/distribution entity (aka anchor point). The DistributeTraffic action targets the traffic balancing/distribution entity, in some cases DNS, in other cases a load balancer external to the VNF instance, as examples.
Traffic distribution (weight) changes intended to take a VNF instance out of service are completed only when all in-flight traffic/transactions have been completed.
DistributeTrafficCheck command may be used to verify initial conditions of redistribution or can be used to verify the state of VNFs and redistribution itself.
To complete the traffic redistribution process, gracefully taking a VNF instance out-of-service/into-service, without dropping in-flight calls or sessions,
-QuiesceTraffic/ResumeTraffic command may need to follow traffic distribution changes. The VNF application remains in an active state.
+QuiesceTraffic/ResumeTraffic command may need to follow traffic distribution changes. The upgrade operation consists of the UpgradePreCheck operation, which can be used to verify
+the initial conditions for the operation, like the difference between the current software version and the requested one, the SoftwareUpgrade operation, which is responsible for modification of the software on the
+selected vFW instance, and the UpgradePostCheck LCM action, which is used to verify that the software was properly installed on the vFW. After the completion of the software upgrade the traffic is migrated back to the
+vFW instance which was being upgraded. The workflow can also be configured in such a way that only a single migration of the traffic is performed, without the upgrade of the software,
+which allows experimenting with the version of the workflow implemented in the previous releases. All the LCM operations are executed by the APPC controller and they are implemented with the Ansible protocol. In order to avoid inconsistency in the VNFs' state, the Lock/Unlock
+mechanism is used to prevent parallel execution of LCM actions on VNFs that are under maintenance because of a workflow currently executed on them.
+The VNF application remains in an active state.
+Traffic Distribution and In-Place Software Upgrade functionality is an outcome of the Change Management project. Further details can be found on the following pages:
-Traffic Distribution functionality is an outcome of Change Management project. Further details can be found on following pages
+- Frankfurt: https://wiki.onap.org/display/DW/Change+Management+Frankfurt+Extensions (Traffic Distribution workflow enhancements)
-https://wiki.onap.org/display/DW/Change+Management+Extensions (DistributeTraffic LCM and Use Case)
+- Casablanca: https://wiki.onap.org/display/DW/Change+Management+Extensions (DistributeTraffic LCM and Use Case)
-https://wiki.onap.org/display/DW/Change+Management+Dublin+Extensions (Distribute Traffic Workflow with Optimization Framework)
+- Dublin: https://wiki.onap.org/display/DW/Change+Management+Dublin+Extensions (Distribute Traffic Workflow with Optimization Framework)
-Test Scenario
-~~~~~~~~~~~~~
+Test Scenarios
+~~~~~~~~~~~~~~
.. figure:: files/dt-use-case.png
:scale: 40 %
:align: center
- Figure 1 The idea of Traffic Distribution Use Case
+ Figure 1 The overview of interaction of components in vFW In-Place Software Upgrade with Traffic Distribution Use Case
-The idea of the simplified scenario presented in the Casablanca release is shown on Figure 1. In a result of the DistributeTraffic LCM action traffic flow originated from vPKG to vFW 1 and vSINK 1 is redirected to vFW 2 and vSINK 2 (as it is seen on Figure 2).
-Result of the change can be observed also on the vSINKs' dashboards which show a current incoming traffic. Observation of the dashboard from vSINK 1 and vSINK 2 proves workflow works properly.
+The main idea of the use case and the prepared workflow is to show the interaction of different components of ONAP, including AAI, Policy, OOF and APPC, for the realization of the scenario of software upgrade
+of a vFW instance with migration of the traffic during its upgrade. The vFW service was modified to have two instances of vFW with dedicated vSINKs. The general idea of the interaction of ONAP components
+is shown on Figure 1. The software upgrade is performed on the selected vFW instance. vPKG and the other vFW instance take part in the migration of the traffic out of the vFW being upgraded. As a result of the DistributeTraffic
+LCM action, the traffic flow originating from vPKG to vFW 1 and vSINK 1 is redirected to vFW 2 and vSINK 2 (as it is seen on Figure 2). The result of the change can also be observed on the vSINKs' dashboards, which show
+the current incoming traffic. After the migration the software is upgraded on the vFW and afterwards the traffic can be migrated back to this vFW instance. Observation of the dashboards of vSINK 1 and vSINK 2 proves that the workflow works properly.
.. figure:: files/dt-result.png
:scale: 60 %
:align: center
- Figure 2 The result of traffic distribution
+ Figure 2 The result of traffic distribution during the upgrade
-The purpose of the work in the Dublin release was to built a Traffic Distribution Workflow that takes as an input configuration parameters delivered by Optimization Framework and on their basis several traffic distribution LCM actions are executed by APPC in the specific workflow.
+The traffic distribution sub-workflow takes as input the configuration parameters delivered by the Optimization Framework and, based on them, several traffic distribution LCM actions are executed by APPC in a specific order.
+Further LCM actions are executed in order to present the idea of vFW In-Place Software Upgrade with Traffic Distribution. This use case also demonstrates the APPC locking mechanism, the APPC changes for VNFC-level Ansible
+actions support and the changes for APPC Ansible automation. The APPC Ansible automation scripts allow to configure LCM actions without the need to enter the CDT portal; however, there is
+a possibility to do it manually and the documentation also describes how to do it. In the same sense, the upload of policy types and policy instances is automated, but the documentation describes how to do it manually as well.
-.. figure:: files/dt-workflow.png
- :scale: 60 %
+The demonstration scripts can be used to execute two different scenarios:
+
+1. Simple distribution of traffic from the selected vFW instance to the other one
+
+2. Upgrade of the software on the selected vFW instance
+
+Both scenarios are preceded by a shared phase of identification of VF-modules for reconfiguration, which is done with the help of the Optimization Framework.
+
+Workflows
+~~~~~~~~~
+
+The whole vFW In-Place Software Upgrade with Traffic Distribution use case can be decomposed into the following workflows:
+
+1. High level workflow (simplified workflow on Figure 3 and more detailed on Figure 4)
+
+.. figure:: files/vfwdt-workflow-general.png
+ :scale: 100 %
:align: center
- Figure 3 The Traffic Distribution Workflow
+ Figure 3 The In-Place Software Upgrade with Traffic Distribution general workflow
+
+* Identification of vFW instances (**I**) for migration of the traffic (source and destination) and identification of the vPKG instance (anchor point) which will be responsible for reconfiguration of the traffic distribution. This operation is performed by the Optimization Framework, the HAS algorithm in particular
+
+* Before any operation is started, the workflow Locks (**II-IV**) with APPC all the VNFs involved in the procedure: vFW 1, vFW 2 and vPKG; that is the vFW being upgraded, the vFW which the traffic will be migrated to and the vPKG which performs the traffic distribution procedure. The VNFs need to be locked in order to prevent the execution of other LCM actions during the whole workflow execution. The workflow checks the state of the Lock on each VNF (**II**)(**1-6**); if the Locks are free (**III**)(**7**), the Locks are acquired (**IV**)(**8-14**). If any Lock Check or Lock fails (**7, 14**), the workflow is stopped. A sketch of such a Lock-related request is shown after this list.
-The prepared Traffic Distribution Workflow has following steps:
+* Depending on the workflow type (Traffic Distribution or In-Place Software Upgrade with Traffic Distribution), different LCM actions are executed by APPC (**V**), all with the Ansible protocol and with the VNF and VF-modules identified before by the Optimization Framework or by input parameters like the selected vFW VNF instance. The workflows are conditional and will not be performed if the preconditions are not satisfied. In case of failure of an LCM operation, all further actions are canceled.
-- Workflow sends placement request to Optimization Framework (**1**) specific information about the vPKG and vFW-SINK models and VNF-ID of vFW that we want to migrate traffic out from.
- Optimization Framework role is to find the vFW-SINK VNF/VF-module instance where traffic should be migrated to and vPKG which will be associated with this vFW.
+* At the end, the workflow Unlocks with APPC the previously Locked VNFs (**VI**)(**15-21**). This operation is always performed, even when some of the previous steps were not completed. The purpose is to not leave VNFs in the locked state (in maintenance status), as this would prevent future execution of LCM actions or workflows on them. The locks are in any case released automatically after a longer time.
+
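+The Lock, CheckLock and Unlock steps are plain APPC LCM API calls issued by the workflow script. A minimal sketch of such a call is shown below; it assumes the standard APPC LCM request structure, while the endpoint address, port and credentials are placeholders that depend on the given ONAP deployment (the tutorial's workflow.py resolves all of this for you).
+
+::
+
+  import uuid
+  from datetime import datetime
+
+  import requests
+
+  # Placeholders - adjust to your deployment; workflow.py determines these values itself
+  APPC_LCM_URL = "https://K8S_NODE_IP:30230/restconf/operations/appc-provider-lcm:check-lock"
+  APPC_AUTH = ("admin", "<appc-password>")
+
+  def check_lock(vnf_id):
+      """Builds and sends a CheckLock LCM request for a single VNF."""
+      body = {
+          "input": {
+              "common-header": {
+                  # APPC expects a millisecond precision UTC timestamp
+                  "timestamp": datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S.%f")[:-3] + "Z",
+                  "api-ver": "2.00",
+                  "originator-id": "vfwdt-tutorial",
+                  "request-id": str(uuid.uuid4()),
+                  "sub-request-id": "1",
+                  "flags": {}
+              },
+              "action": "CheckLock",
+              "action-identifiers": {"vnf-id": vnf_id}
+          }
+      }
+      response = requests.post(APPC_LCM_URL, json=body, auth=APPC_AUTH, verify=False)
+      return response.json()
+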
+.. figure:: files/vfwdt-general-workflow-sd.png
+ :scale: 80 %
+ :align: center
+
+ Figure 4 The In-Place Software Upgrade with Traffic Distribution detailed workflow
+
+2. Identification of VF-module candidates for migration of traffic (the detailed workflow is shown on Figure 5)
+
+.. figure:: files/vfwdt-identification-workflow-sd.png
+ :scale: 80 %
+ :align: center
+
+ Figure 5 Identification of VF-Module candidates for migration of traffic
+
+- The workflow sends a placement request to the Optimization Framework (**1**) with specific information about the vPKG and vFW-SINK models and the VNF-ID of the vFW that we want to upgrade.
+  The role of the Optimization Framework is to find the vFW-SINK VNF/VF-module instance where traffic should be migrated to during the upgrade and the vPKG which will be associated with this vFW.
Although in our case the calculation is very simple, the mechanism is ready to work for instances of services with VNFs having hundreds of VF-modules spread across different cloud regions.
- Optimization Framework takes from the Policy Framework the policies (**2-3**) for the VNFs and for the relations between them (in our case the ACTIVE status of the vFW-SINK and vPKG VF-modules and the Region to which they belong are checked)
-- Optimization Framework, base on the information from the polcies and service topology information taken from A&AI (**4-11**), offers traffic distribution anchor and destination canidates' pairs (**12-13**) (pairs of VF-modules data with information about their V-Servers and their network interfaces). This information is returned to the workflow script (**14**).
+- Optimization Framework, based on the information from the policies and the service topology information taken from A&AI (**4-11**), offers pairs of traffic distribution anchor and destination candidates (**12-13**) (pairs of VF-module data with information about their V-Servers and their network interfaces). This information is returned to the workflow script (**14**).
-- Information from Optimization Framework can be used to construct APPC LCM requests for DistributeTrafficCheck and DistributeTraffic commands (**15, 24, 33, 42**). This information is used to fill CDT templates with proper data for further Ansible playbooks execution (**17, 26, 35, 44**)
+- The information from the Optimization Framework can be used to construct APPC LCM requests for the DistributeTrafficCheck, DistributeTraffic, UpgradePreCheck, SoftwareUpgrade and UpgradePostCheck commands. This information is used to fill the CDT templates with proper data for further execution of Ansible playbooks. At this stage the script also generates the CDT templates for the LCM actions, which can be uploaded automatically to the APPC DB.
-- In the first DistributeTrafficCheck LCM request on vPGN VNF/VF-Module APPC, over Ansible, checks if already configured destinatrion of vPKG packages is different than already configured. If not workflow is stopped (**23**).
+3. The Traffic Distribution sub-workflow (simplified workflow on Figure 6 and more detailed on Figure 7)
-- Next, APPC performs the DistributeTraffic action like it is shown on Figure 1 and Figure 2 (**25-31**). If operation is completed properly traffic should be redirected to vFW 2 and vSINK 2 instance. If not, workflow is stopped (**32**).
+.. figure:: files/vfwdt-workflow-traffic.png
+ :scale: 100 %
+ :align: center
+
+ Figure 6 The Traffic Distribution general workflow
+
+- In the first DistributeTrafficCheck LCM request on the vPGN VNF/VF-module, APPC, over Ansible, checks if the destination of the vPKG packets selected by OOF is different from the currently configured one (**I-III**)(**1-8**). If not, the workflow is stopped (**9**).
+
+- Next, APPC performs the DistributeTraffic action (**IV**)(**10-17**). If the operation is completed properly, traffic should be redirected to the vFW 2 and vSINK 2 instances. If not, the workflow is stopped (**18**).
+
+- Finally, APPC executes the DistributeTrafficCheck action (**V**) on vFW 1 in order to verify that it does not receive any traffic anymore (**19-26**) and on vFW 2 in order to verify that it receives the traffic previously handled by vFW 1 (**28-35**). The workflow is stopped with a failed state (**37**) if one of those conditions is not satisfied (**27, 36**).
+
+.. figure:: files/vfwdt-td-workflow-sd.png
+ :scale: 80 %
+ :align: center
+
+ Figure 7 The Traffic Distribution detailed workflow
+
+4. The In-Place Software Upgrade with Traffic Distribution sub-workflow (simplified workflow on Figure 8 and more detailed on Figure 9)
+
+.. figure:: files/vfwdt-workflow-upgrade.png
+ :scale: 100 %
+ :align: center
+
+ Figure 8 The In-Place Software Upgrade general workflow
+
+- Firstly, the UpgradePreCheck LCM operation is performed on the selected vFW instance (**I**)(**1-8**). The Ansible script executed by APPC checks if the software version is different from the one indicated in the workflow's input. If it is the same, the workflow is stopped (**9**). A simplified sketch of this check is shown after this list.
+
+- When the software of the selected vFW instance needs to be upgraded (**II**), the traffic migration procedure needs to be performed (**III** - see sub-workflow 3). If the migration of traffic fails, the workflow is stopped.
+
+- Next, APPC performs over Ansible the procedure of the in-place software upgrade. In our case this is a simple refresh of the software packages on the VM in order to simulate an upgrade process. Successful completion of the script should set the version of the software to the one from the upgrade request. If the action fails, the workflow is stopped without further rollback (**18**).
+
+- Afterwards, APPC performs the UpgradePostCheck LCM action (**IV**)(**19-26**). The script verifies that the version of the software is the same as the one requested in the upgrade. If not, the workflow is stopped without further rollback (**27**).
+
+- Finally, when the software upgrade is completed, the traffic migration procedure is performed again (**VI**) to migrate the traffic back to the previously upgraded vFW instance (see sub-workflow 3). If the migration of traffic fails, the workflow is stopped and no rollback is performed.
+
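+The checks done by UpgradePreCheck and UpgradePostCheck essentially boil down to a comparison of the installed and the requested software version. A simplified sketch of that logic is shown below; it is not the actual Ansible playbook shipped with the tutorial, just an illustration of the conditions described above.
+
+::
+
+  def upgrade_precheck(installed_version: str, requested_version: str) -> None:
+      # UpgradePreCheck: the upgrade makes sense only when the versions differ,
+      # otherwise the workflow is stopped
+      if installed_version == requested_version:
+          raise RuntimeError("vFW already runs version %s - workflow stopped" % requested_version)
+
+  def upgrade_postcheck(installed_version: str, requested_version: str) -> None:
+      # UpgradePostCheck: after UpgradeSoftware the installed version must match the request,
+      # otherwise the workflow is stopped without rollback
+      if installed_version != requested_version:
+          raise RuntimeError("Upgrade verification failed - expected %s, found %s"
+                             % (requested_version, installed_version))
+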
+.. figure:: files/vfwdt-upgrade-workflow-sd.png
+ :scale: 80 %
+ :align: center
-- Finally, APPC executes the DistributeTrafficCheck action on vFW 1 in order to verify that it does not receives any traffic anymore (**34-40**) and on vFW 2 in order to verify that it receives traffic forwarded from vFW 2 (**43-49**)
+ Figure 9 The In-Place Software Upgrade detailed workflow
Scenario Setup
--------------
-In order to setup the scenario and to test the DistributeTraffic LCM API in action you need to perform the following steps:
+In order to set up the scenario and to test the workflows with APPC LCM APIs in action you need to perform the following steps:
-1. Create an instance of vFWDT (vPKG , 2 x vFW, 2 x vSINK) – dedicated for the DistributeTraffic LCM API tests
+1. Create an instance of vFWDT (vPKG, 2 x vFW, 2 x vSINK) – dedicated for the traffic migration tests
-#. Gather A&AI facts for Traffic Distribution use case configuration
+#. Gather A&AI facts for use case configuration
-#. Install Traffic Distribution workflow packages
+#. Install Software Upgrade and Traffic Distribution workflow packages
-#. Configure Optimization Framework for Traffic Distribution workflow
+#. Configure Optimization Framework for Traffic Distribution candidates gathering
#. Configure vPKG and vFW VNFs in APPC CDT tool
#. Configure Ansible Server to work with vPKG and vFW VMs
-#. Execute Traffic Distribution Workflow
+#. Execute Traffic Distribution or In-Place Upgrade Workflows
You will use the following ONAP K8s VMs or containers:
@@ -98,12 +184,12 @@ You will use the following ONAP K8s VMs or containers:
- APPC Ansible Server container – setup of Ansible Server, configuration of playbook and input parameters for LCM actions
-.. note:: In all occurences <K8S-NODE-IP> constant is the IP address of any K8s Node of ONAP OOM installation which hosts ONAP pods i.e. k8s-node-1 and <K8S-RANCHER-IP> constant is the IP address of K8S Rancher Server
+.. note:: In all occurrences the *K8S_NODE_IP* constant is the IP address of any K8s node of the ONAP OOM installation which hosts ONAP pods, e.g. k8s-node-1, and the *K8S-RANCHER-IP* constant is the IP address of the K8s Rancher server
vFWDT Service Instantiation
~~~~~~~~~~~~~~~~~~~~~~~~~~~
-In order to test a DistributeTraffic LCM API functionality a dedicated vFW instance must be prepared. It differs from a standard vFW instance by having an additional VF-module with a second instance of vFW and a second instance of vSINK. Thanks to that when a service instance is deployed there are already available two instances of vFW and vSINK that can be used for verification of DistributeTraffic LCM API – there is no need to use the ScaleOut function to test DistributeTraffic functionality what simplifies preparations for tests.
+In order to test the workflows a dedicated vFW instance must be prepared. It differs from a standard vFW instance by having an additional VF-module with a second instance of vFW and a second instance of vSINK. Thanks to that, when a service instance is deployed, there are already two instances of vFW and vSINK available that can be used for migration of traffic from one vFW instance to the other one – there is no need to use the ScaleOut function to test the workflows, which simplifies the preparation for tests.
In order to instantiate the vFWDT service please follow the procedure for the standard vFW with the following changes. You can create such a service manually or you can use the robot framework. For manual instantiation:
@@ -111,13 +197,13 @@ In order to instantiate vFWDT service please follow the procedure for standard v
https://github.com/onap/demo/tree/master/heat/vFWDT
-2. Create Virtual Service in SDC with composition like it is shown on Figure 3
+2. Create Virtual Service in SDC with composition like it is shown on Figure 10
.. figure:: files/vfwdt-service.png
:scale: 60 %
:align: center
- Figure 3 Composition of vFWDT Service
+ Figure 10 Composition of vFWDT Service
3. Use the following payload files in the SDNC-Preload phase during the VF-Module instantiation
@@ -127,15 +213,15 @@ https://github.com/onap/demo/tree/master/heat/vFWDT
- :download:`vFW/SNK 2 preload example <files/vfw-2-preload.json>`
-.. note:: Use publikc-key that is a pair for private key files used to log into ONAP OOM Rancher server. It will simplify further configuration
+.. note:: Use the public key that is a pair for the private key files used to log into the ONAP OOM Rancher server. It will simplify further configuration
-.. note:: vFWDT has a specific configuration of the networks – different than the one in original vFW use case (see Figure 4). Two networks must be created before the heat stack creation: *onap-private* network (10.0.0.0/16 typically) and *onap-external-private* (e.g. "10.100.0.0/16"). The latter one should be connected over a router to the external network that gives an access to VMs. Thanks to that VMs can have a floating IP from the external network assigned automatically in a time of stacks' creation. Moreover, the vPKG heat stack must be created before the vFW/vSINK stacks (it means that the VF-module for vPKG must be created as a first one). The vPKG stack creates two networks for the vFWDT use case: *protected* and *unprotected*; so these networks must be present before the stacks for vFW/vSINK are created.
+.. note:: vFWDT has a specific configuration of the networks – different than the one in the original vFW use case (see Figure 11). Two networks must be created before the heat stack creation: the *onap-private* network (10.0.0.0/16 typically) and *onap-external-private* (e.g. "10.100.0.0/16"). The latter one should be connected over a router to the external network that gives access to VMs. Thanks to that, VMs can have a floating IP from the external network assigned automatically at the time of stack creation. Moreover, the vPKG heat stack must be created before the vFW/vSINK stacks (it means that the VF-module for vPKG must be created as the first one). The vPKG stack creates two networks for the vFWDT use case: *protected* and *unprotected*; so these networks must be present before the stacks for vFW/vSINK are created.
.. figure:: files/vfwdt-networks.png
:scale: 15 %
:align: center
- Figure 4 Configuration of networks for vFWDT service
+ Figure 11 Configuration of networks for vFWDT service
4. Go to *robot* folder in Rancher server (being *root* user)
@@ -165,12 +251,12 @@ Go to the Rancher node and locate *demo-k8s.sh* script in *oom/kubernetes/robot*
::
./demo-k8s.sh onap init
- ./ete-k8s.sh onap instantiateVFWDT
+ ./ete-k8s.sh onap instantiateVFWDTGRA
-.. note:: You can verify the status of robot's service instantiation process by going to http://K8S_NODE_IP:30209/logs/ (login/password: test/test)
+.. note:: You can verify the status of robot's service instantiation process by going to https://K8S_NODE_IP:30209/logs/ (login/password: test/test)
-After successful instantiation of vFWDT service go to the OpenStack dashboard and project which is configured for VNFs deployment and locate vFWDT VMs. Choose one and try to ssh into one them to proove that further ansible configuration action will be possible
+After successful instantiation of the vFWDT service go to the OpenStack dashboard and the project which is configured for VNFs deployment and locate the vFWDT VMs. Choose one and try to ssh into one of them to prove that further ansible configuration actions will be possible
::
@@ -192,7 +278,7 @@ Preparation of Workflow Script Environment
::
- git clone --single-branch --branch dublin "https://gerrit.onap.org/r/demo"
+ git clone --single-branch --branch frankfurt "https://gerrit.onap.org/r/demo"
3. Enter vFWDT tutorial directory
@@ -220,11 +306,11 @@ what should show following folders
Gathering Scenario Facts
------------------------
-In order to configure CDT tool for execution of Ansible playbooks and for execution of Traffic distribution workflow we need following A&AI facts for vFWDT service
+In order to configure the CDT tool for execution of Ansible playbooks and for execution of workflows we need the following A&AI facts for the vFWDT service:
- **vnf-id** of generic-vnf vFW instance that we want to migrate traffic out from
- **vnf-type** of vPKG VNF - required to configure CDT for Distribute Traffic LCMs
-- **vnf-type** of vFW-SINK VNFs - required to configure CDT for Distribute Traffic LCMs
+- **vnf-type** of vFW-SINK VNFs - required to configure CDT for Distribute Traffic and Software Upgrade LCMs
Gathering facts from VID Portal
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -233,7 +319,7 @@ Gathering facts from VID Portal
::
- https://<K8S-NODE-IP>:30200/vid/welcome.htm
+ https://K8S_NODE_IP:30200/vid/welcome.htm
2. In the left hand menu enter **Search for Existing Service Instances**
@@ -247,24 +333,24 @@ Gathering facts from VID Portal
:scale: 60 %
:align: center
- Figure 5 vnf-type and vnf-id for vPKG VNF
+ Figure 12 vnf-type and vnf-id for vPKG VNF
.. figure:: files/vfwdt-vid-vnf-1.png
:scale: 60 %
:align: center
- Figure 6 vnf-type and vnf-id for vFW-SINK 1 VNF
+ Figure 13 vnf-type and vnf-id for vFW-SINK 1 VNF
.. figure:: files/vfwdt-vid-vnf-2.png
:scale: 60 %
:align: center
- Figure 7 vnf-type and vnf-id for vFW-SINK 2 VNF
+ Figure 14 vnf-type and vnf-id for vFW-SINK 2 VNF
Gathering facts directly from A&AI
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-1. Enter OpenStack dashboard on whicvh vFWDT instance was created and got to **Project->Compute->Instances** and read VM names of vPKG VM and 2 vFW VMs created in vFWDT service instance
+1. Enter the OpenStack dashboard on which the vFWDT instance was created, go to **Project->Compute->Instances** and read the VM names of the vPKG VM and the 2 vFW VMs created in the vFWDT service instance
2. Open Postman or any other REST client
@@ -278,7 +364,7 @@ Gathering facts directly from A&AI
::
- https://<K8S-NODE-IP>:30233/aai/v14/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne/tenants/
+ https://K8S_NODE_IP:30233/aai/v14/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne/tenants/
.. note:: *CloudOwner* and *Region* names are fixed for default setup of ONAP
@@ -286,17 +372,17 @@ Gathering facts directly from A&AI
::
- https://<K8S-NODE-IP>:30233/aai/v14/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne/tenants/tenant/<tenant-id>/vservers/?vserver-name=<vm-name>
+ https://K8S_NODE_IP:30233/aai/v14/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne/tenants/tenant/<tenant-id>/vservers/?vserver-name=<vm-name>
-Read from the response (realtionship with *generic-vnf* type) vnf-id of vPKG VNF
+Read from the response (relationship with *generic-vnf* type) vnf-id of vPKG VNF
-.. note:: If you do not receive any vserver candidate it means that heatbridge procedure was not performed or was not completed successfuly. It is mandatory to continue this tutorial
+.. note:: If you do not receive any vserver candidate it means that the heatbridge procedure was not performed or was not completed successfully. Completing it is mandatory in order to continue this tutorial
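+If you prefer to do this query from a script rather than from Postman, the same GET can be issued with a few lines of Python. The *AAI*/*AAI* credentials and the headers below are the usual defaults of an OOM installation and may differ in your deployment:
+
+::
+
+  import requests
+  requests.packages.urllib3.disable_warnings()   # A&AI uses a self-signed certificate
+
+  tenant_id = "<tenant-id>"   # value collected in the previous steps
+  vm_name = "<vm-name>"       # vPKG or vFW VM name read from OpenStack
+
+  url = ("https://K8S_NODE_IP:30233/aai/v14/cloud-infrastructure/cloud-regions/"
+         "cloud-region/CloudOwner/RegionOne/tenants/tenant/{}/vservers/"
+         "?vserver-name={}".format(tenant_id, vm_name))
+
+  response = requests.get(url,
+                          auth=("AAI", "AAI"),
+                          headers={"X-FromAppId": "vfwdt-tutorial",
+                                   "X-TransactionId": "1",
+                                   "Accept": "application/json"},
+                          verify=False)
+  # Look for the relationship with related-to == "generic-vnf" to find the vnf-id
+  print(response.json())
+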
8. Create a new GET query for the *generic-vnf* type with the following link, replacing <vnf-id> with the value read from the previous GET response
::
- https://<K8S-NODE-IP>:30233/aai/v14/network/generic-vnfs/generic-vnf/<vnf-id>
+ https://K8S_NODE_IP:30233/aai/v14/network/generic-vnfs/generic-vnf/<vnf-id>
9. Repeat this procedure also for 2 vFW VMs and note their *vnf-type* and *vnf-id*
@@ -306,7 +392,7 @@ This sections show the steps necessary to configure Policies, CDT and Ansible se
Configuration of Policies for Optimization Framework
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-We need to enter the Policy editor in order to upload policy types and then the policy rules for the demo. The polcies are required for the Optimization Framework and they guide OOF how to determine
+We need to enter the Policy editor in order to upload policy types and then the policy rules for the demo. The policies are required for the Optimization Framework and they guide OOF how to determine
vFW and vPGN instances used in the Traffic Distribution workflow.
1. Enter the Policy portal
@@ -315,7 +401,7 @@ Specify *demo*:*demo* as a login and password
::
- https://<K8S-NODE-IP>:30219/onap/login.htm
+ https://K8S_NODE_IP:30219/onap/login.htm
From the left side menu enter *Dictionary* section and from the combo boxes select *MicroService Policy* and *MicroService Models* respectively. Below you can see the result.
@@ -323,15 +409,15 @@ From the left side menu enter *Dictionary* section and from the combo boxes sele
:scale: 70 %
:align: center
- Figure 8 List of MicroService policy types in the Policy portal
+ Figure 15 List of MicroService policy types in the Policy portal
2. Upload the policy types
Before policy rules for Traffic Distribution can be uploaded we need to create policy types to store these rules. For that we need to create following three types:
- VNF Policy - it is used to filter vf-module instances, i.e. based on their attributes from the A&AI like *provStatus*, *cloudRegionId* etc.
-- Query Policy - it is used to declare extra inpt parameters for OOF placement request - in our case we need to specify cloud region name
-- Affinity Policy - it is used to specify the placement rule used for selection vf-module candiate pairs of vFW vf-module instance (traffic destination) and vPGN vf-module instance (anchor point). In this case the match is done by belonging to the same cloud region
+- Query Policy - it is used to declare extra input parameters for OOF placement request - in our case we need to specify cloud region name
+- Affinity Policy - it is used to specify the placement rule used for selection vf-module candidate pairs of vFW vf-module instance (traffic destination) and vPGN vf-module instance (anchor point). In this case the match is done by belonging to the same cloud region
Enter vFWDT tutorial directory on Rancher server (already created in `Preparation of Workflow Script Environment`_) and create policy types from the following files
@@ -346,7 +432,7 @@ For each file press *Create* button, choose the policy type file, select the *Mi
:scale: 70 %
:align: center
- Figure 9 Creation of new MicroService policy type for OOF
+ Figure 16 Creation of new MicroService policy type for OOF
As a result you should see in the dictionary all three new policy types declared
@@ -354,7 +440,7 @@ In a result you should see in the dictionary all three new types of policies dec
:scale: 70 %
:align: center
- Figure 10 Completed list of MicroService policy types in the Policy portal
+ Figure 17 Completed list of MicroService policy types in the Policy portal
3. Push the policies into the PDP
@@ -383,7 +469,7 @@ The result can be verified in the Policy portal, in the *Editor* section, after
:scale: 70 %
:align: center
- Figure 11 List of policies for OOF and vFW traffic distribution
+ Figure 18 List of policies for OOF and vFW traffic distribution
Testing Gathered Facts on Workflow Script
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -393,23 +479,28 @@ is used to collect neccessary information for configuration of APPC and for furt
At this stage we will execute the script in the initial mode to generate some configuration helpful for the CDT and Ansible configuration.
-1. Enter vFWDT tutorial directory on Rancher server (already created in `Preparation of Workflow Script Environment`_) and execute there workflow script with follwoing parameters
+1. Enter the vFWDT tutorial directory on the Rancher server (already created in `Preparation of Workflow Script Environment`_). In the *workflow* folder you can find the workflow script used to gather the necessary configuration and responsible for execution of the LCM actions. It has the following syntax (a short sketch of how these arguments are read follows the parameter list below)
::
- python3 workflow.py <VNF-ID> <K8S-NODE-IP> True False True True
+ python3 workflow.py <VNF-ID> <RANCHER_NODE_IP> <K8S_NODE_IP> <IF-CACHE> <IF-VFWCL> <INITIAL-ONLY> <CHECK-STATUS> <VERSION>
-For now and for further use workflow script has following input parameters:
+- <VNF-ID> - the vnf-id of the vFW VNF instance that traffic should be migrated out from
+- <RANCHER_NODE_IP> - the external IP of the ONAP Rancher node, e.g. 10.12.5.160 (if the Rancher node is missing, this is the NFS node)
+- <K8S_NODE_IP> - the external IP of an ONAP K8s worker node, e.g. 10.12.5.212
+- <IF-CACHE> - if the script should use and build the OOF response cache (the cache speeds up further executions of the script)
+- <IF-VFWCL> - if instead of a vFWDT service instance a vFW or vFWCL one is used (should always be False)
+- <INITIAL-ONLY> - if only configuration information should be collected (True for the initial phase and False for full execution of the workflow)
+- <CHECK-STATUS> - if the APPC LCM action status should be verified and FAILURE should stop the workflow (when False, a FAILED status of an LCM action does not stop execution of further LCM actions)
+- <VERSION> - the new version of vFW - for tests '1.0' or '2.0'. Omit it when you want to test the Traffic Distribution workflow
-- vnf-id of vFW VNF instance that traffic should be migrated out from
-- External IP of ONAP Rancher Node i.e. 10.12.5.160 (If Rancher Node is missing this is NFS node)
-- External IP of ONAP K8s Worker Node i.e. 10.12.5.212
-- if script should use and build OOF response cache (cache it speed-ups further executions of script)
-- if instead of vFWDT service instance vFW or vFWCL one is used (should be False always)
-- if only configuration information will be collected (True for initial phase and False for full execution of workflow)
-- if APPC LCM action status should be verified and FAILURE should stop workflow (when False FAILED status of LCM action does not stop execution of further LCM actions)
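+The script reads these values as positional command-line arguments, roughly as in the sketch below (a simplification for illustration; workflow.py itself is the authoritative parser):
+
+::
+
+  import sys
+
+  vnf_id          = sys.argv[1]                     # <VNF-ID>
+  rancher_node_ip = sys.argv[2]                     # <RANCHER_NODE_IP>
+  k8s_node_ip     = sys.argv[3]                     # <K8S_NODE_IP>
+  use_oof_cache   = sys.argv[4].lower() == 'true'   # <IF-CACHE>
+  is_vfw_cl       = sys.argv[5].lower() == 'true'   # <IF-VFWCL>
+  info_only       = sys.argv[6].lower() == 'true'   # <INITIAL-ONLY>
+  check_status    = sys.argv[7].lower() == 'true'   # <CHECK-STATUS>
+  new_version     = sys.argv[8] if len(sys.argv) > 8 else None   # <VERSION>, optional
+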
+2. Execute the workflow script with the following parameters
-2. The script at this stage should give simmilar output
+::
+
+ python3 workflow.py <VNF-ID> <RANCHER_NODE_IP> <K8S_NODE_IP> True False True True 2.0
+
+3. At this stage the script should give output similar to the following
::
@@ -417,6 +508,10 @@ For now and for further use workflow script has following input parameters:
OOF Cache True, is CL vFW False, only info False, check LCM result True
+ New vFW software version 2.0
+
+ Starting OSDF Response Server...
+
vFWDT Service Information:
{
"vf-module-id": "0dce0e61-9309-449a-8e3e-f001635aaab1",
@@ -447,18 +542,20 @@ For now and for further use workflow script has following input parameters:
vofwl02vfw4407 ansible_ssh_host=10.0.110.4 ansible_ssh_user=ubuntu
The result should have almost the same information for the *vnf-id's* of both vFW VNFs. The *vnf-type* for the vPKG and vFW VNFs should be the same as those collected in the previous steps.
-Ansible Inventory section contains information about the content Ansible Inventor file that will be configured later on `Configuration of Ansible Server`_
+The Ansible Inventory section contains the content of the Ansible inventory file that will be configured later in `Configuration of Ansible Server`_. The first phase of the workflow script also generates the CDT artifacts which can be used for automatic configuration of the CDT tool - they can be ignored for manual CDT configuration.
Configuration of VNF in the APPC CDT tool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Following steps aim to configure DistributeTraffic LCM action for our vPKG and vFW-SINK VNFs in APPC CDT tool.
+.. note:: Automated procedure can be found at the end of the section
+
+The following steps aim to configure the DistributeTraffic LCM action for our vPKG and vFW-SINK VNFs in the APPC CDT tool.
1. Enter the Controller Design Tool portal
::
- https://<K8S-NODE-IP>:30289/index.html
+ https://K8S_NODE_IP:30289/index.html
2. Click on the *MY VNFS* button and log in to the CDT portal giving e.g. the *demo* user name
@@ -468,7 +565,7 @@ Following steps aim to configure DistributeTraffic LCM action for our vPKG and v
:scale: 70 %
:align: center
- Figure 12 Creation of new VNF type in CDT
+ Figure 19 Creation of new VNF type in CDT
4. Enter previously retrieved VNF Type for vPKG VNF and press the *NEXT* button
@@ -476,7 +573,7 @@ Following steps aim to configure DistributeTraffic LCM action for our vPKG and v
:scale: 70 %
:align: center
- Figure 13 Creation of new VNF type in CDT
+ Figure 20 Creation of new VNF type in CDT
5. For already created VNF Type (if the view does not open itself) click the *View/Edit* button. In the LCM action edit view in the first tab please choose:
@@ -495,48 +592,64 @@ Following steps aim to configure DistributeTraffic LCM action for our vPKG and v
:scale: 70 %
:align: center
- Figure 14 DistributeTraffic LCM action editing
+ Figure 21 DistributeTraffic LCM action editing
+
+6. Go to the *Template* tab and in the editor paste the request template of the LCM action being configured
-6. Go to the *Template* tab and in the editor paste the request template of the DistributeTraffic LCM action for vPKG VNF type
+For DistributeTraffic and DistributeTrafficCheck LCMs
::
{
"InventoryNames": "VM",
- "PlaybookName": "${()=(book_name)}",
- "NodeList": [{
- "vm-info": [{
- "ne_id": "${()=(ne_id)}",
- "fixed_ip_address": "${()=(fixed_ip_address)}"
- }],
- "site": "site",
- "vnfc-type": "vpgn"
- }],
+ "PlaybookName": "${book_name}",
+ "AutoNodeList": true,
"EnvParameters": {
"ConfigFileName": "../traffic_distribution_config.json",
+ "vnf_instance": "vfwdt"
+ },
+ "FileParameters": {
+ "traffic_distribution_config.json": "${file_parameter_content}"
+ },
+ "Timeout": 3600
+ }
+
+
+For UpgradeSoftware, UpgradePreCheck and UpgradePostCheck LCMs
+
+::
+
+ {
+ "InventoryNames": "VM",
+ "PlaybookName": "${book_name}",
+ "AutoNodeList": true,
+ "EnvParameters": {
+ "ConfigFileName": "../config.json",
"vnf_instance": "vfwdt",
+ "new_software_version": "${new-software-version}",
+ "existing_software_version": "${existing-software-version}"
},
"FileParameters": {
- "traffic_distribution_config.json": "${()=(file_parameter_content)}"
+ "config.json": "${file_parameter_content}"
},
"Timeout": 3600
}
-.. note:: For all this VNF types and for all actions CDT template is the same except **vnfc-type** parameter that for vPKG VNF type should have value *vpgn* and for vFW-SINK VNF type should have value *vfw-sink*
The meaning of the selected template parameters is the following:
- **EnvParameters** group contains all the parameters that will be passed directly to the Ansible playbook during the request's execution. *vnf_instance* is an obligatory parameter for VNF Ansible LCMs. In our case, for simplification, it has a predefined value
-- **InventoryNames** parameter is obligatory if you want to have NodeList with limited VMs or VNFCs that playbook should be executed on. It can have value *VM* or *VNFC*. In our case *VM* valuye means that NodeList will have information about VMs on which playbook should be executed. In this use case this is always only one VM
-- **NodeList** parameter value must match the group of VMs like it was specified in the Ansible inventory file. *PlaybookName* must be the same as the name of playbook that was uploaded before to the Ansible server.
-- **FileParameters**
+- **InventoryNames** parameter is obligatory if you want to have a NodeList with limited VMs or VNFCs that the playbook should be executed on. It can have the value *VM* or *VNFC*. In our case the *VM* value means that the NodeList will have information about the VMs on which the playbook should be executed. In this use case this is always only one VM
+- **AutoNodeList** parameter set to true indicates that the template does not need a specific NodeList section and that it will be generated automatically based on information from A&AI - this requires proper data in the vserver and vnfc objects associated with the VNFs
+- **PlaybookName** must be the same as the name of the playbook that was uploaded before to the Ansible server.
+- **FileParameters** section contains the configuration files and their content necessary to execute the playbook (an illustrative resolved template is shown after Figure 22)
.. figure:: files/vfwdt-create-template.png
:scale: 70 %
:align: center
- Figure 15 LCM DistributeTraffic request template
+ Figure 22 LCM DistributeTraffic request template
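+
+For illustration, after APPC resolves the template parameters, the body of the DistributeTraffic request may look roughly like the Python dictionary below. The playbook path follows the directory layout configured later in `Configuration of Ansible Server`_, and the content of the configuration file (produced by the workflow script from the OOF answer) is abbreviated; all the values are examples only.
+
+::
+
+  resolved_template = {
+      "InventoryNames": "VM",
+      # path of the playbook relative to /opt/ansible-server/Playbooks on the Ansible server
+      "PlaybookName": "vpgn/latest/ansible/distributetraffic/site.yml",
+      "AutoNodeList": True,
+      "EnvParameters": {
+          "ConfigFileName": "../traffic_distribution_config.json",
+          "vnf_instance": "vfwdt",
+      },
+      "FileParameters": {
+          # JSON content generated by the workflow script from the OOF candidates (abbreviated)
+          "traffic_distribution_config.json": "{ ... }",
+      },
+      "Timeout": 3600,
+  }
+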
7. Afterwards press the *SYNCHRONIZE WITH TEMPLATE PARAMETERS* button. You will be moved to the *Parameter Definition* tab. The new parameters will be listed there.
@@ -544,17 +657,27 @@ The meaning of selected template parameters is following:
:scale: 70 %
:align: center
- Figure 16 Summary of parameters specified for DistributeTraffic LCM action.
+ Figure 23 Summary of parameters specified for DistributeTraffic LCM action.
.. note:: For each parameter you can define its: mandatory presence; default value; source (Manual/A&AI). For our case modification of these settings is not necessary
8. Finally, go back to the *Reference Data* tab and click *SAVE ALL TO APPC*.
-.. note:: Remember to configure DistributeTraffic and DistributeTrafficCheck actions for vPKG VNF type and DistributeTrafficCheck action for vFW-SINK
+.. note:: Remember to configure the DistributeTraffic and DistributeTrafficCheck actions for the vPKG VNF type and the UpgradeSoftware, UpgradePreCheck, UpgradePostCheck and DistributeTrafficCheck actions for the vFW-SINK VNF type
+
+9. Configuration of the CDT tool is also automated and all the steps above can be performed with the *configure_ansible.sh* script
+
+Enter the vFWDT tutorial directory (`Preparation of Workflow Script Environment`_) on the Rancher server, make sure that the *onap.pem* file is in the *playbooks* directory and run
+
+::
+
+ ./playbooks/configure_ansible.sh
Configuration of Ansible Server
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.. note:: Automated procedure can be found at the end of the section
+
After the instantiation of the vFWDT service the Ansible server must be configured in order to allow it to reconfigure the vPKG VM.
1. Copy from Rancher server private key file used for vFWDT VMs' creation and used for access to Rancher server into the :file:`/opt/ansible-server/Playbooks/onap.pem` file
@@ -605,7 +728,7 @@ After an instantiation of the vFWDT service the Ansible server must be configure
private_key_file = /opt/ansible-server/Playbooks/onap.pem
-.. note:: This is the default privaye key file. In the `/opt/ansible-server/Playbooks/Ansible\ \_\ inventory` different key could be configured but APPC in time of execution of playbbok on Ansible server creates its own dedicated inventory file which does not have private key file specified. In consequence, this key file configured is mandatory for proper execution of playbooks by APPC
+.. note:: This is the default private key file. In the `/opt/ansible-server/Playbooks/Ansible\ \_\ inventory` a different key could be configured, but APPC, at the time of playbook execution on the Ansible server, creates its own dedicated inventory file which does not have a private key file specified. In consequence, this configured key file is mandatory for proper execution of playbooks by APPC
6. Test that the Ansible server can access over ssh vFWDT hosts configured in the ansible inventory
@@ -615,7 +738,7 @@ After an instantiation of the vFWDT service the Ansible server must be configure
ansible -i Ansible_inventory vpgn,vfw-sink -m ping
-7. Download the distribute traffic playbook into the :file:`/opt/ansible-server/Playbooks` directory
+7. Download the LCM playbooks into the :file:`/opt/ansible-server/Playbooks` directory
Exit Ansible server pod and enter vFWDT tutorial directory `Preparation of Workflow Script Environment`_ on Rancher server. Afterwards, copy playbooks into Ansible server pod
@@ -624,13 +747,15 @@ Exit Ansible server pod and enter vFWDT tutorial directory `Preparation of Workf
sudo kubectl cp playbooks/vfw-sink onap/`kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep appc-ansible`:/opt/ansible-server/Playbooks/
sudo kubectl cp playbooks/vpgn onap/`kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep appc-ansible`:/opt/ansible-server/Playbooks/
-8. After the configuration of Ansible serverthe structure of `/opt/ansible-server/Playbooks` directory should be following
+8. Configuration of the Ansible server is also automated and all the steps above can be performed with the *configure_ansible.sh* script introduced in the previous section
+
+9. After the configuration of the Ansible server with the script, the structure of the `/opt/ansible-server/Playbooks` directory should be the following
::
/opt/ansible-server/Playbooks $ ls -R
.:
- Ansible_inventory onap.pem vfw-sink vpgn
+ ansible.cfg Ansible_inventory configure_ansible.sh onap.pem server.py upgrade.sh vfw-sink vpgn
./vfw-sink:
latest
@@ -639,11 +764,20 @@ Exit Ansible server pod and enter vFWDT tutorial directory `Preparation of Workf
ansible
./vfw-sink/latest/ansible:
- distributetrafficcheck
+ distributetrafficcheck upgradepostcheck upgradeprecheck upgradesoftware
./vfw-sink/latest/ansible/distributetrafficcheck:
site.yml
+ ./vfw-sink/latest/ansible/upgradepostcheck:
+ site.yml
+
+ ./vfw-sink/latest/ansible/upgradeprecheck:
+ site.yml
+
+ ./vfw-sink/latest/ansible/upgradesoftware:
+ site.yml
+
./vpgn:
latest
@@ -651,7 +785,7 @@ Exit Ansible server pod and enter vFWDT tutorial directory `Preparation of Workf
ansible
./vpgn/latest/ansible:
- distributetraffic distributetrafficcheck
+ distributetraffic distributetrafficcheck
./vpgn/latest/ansible/distributetraffic:
site.yml
@@ -663,6 +797,8 @@ Exit Ansible server pod and enter vFWDT tutorial directory `Preparation of Workf
Configuration of APPC DB for Ansible
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.. note:: Automated procedure can be found at the end of the section
+
For each VNF that uses the Ansible protocol you need to configure the *PASSWORD* and *URL* fields in the *DEVICE_AUTHENTICATION* table. This step must be performed after the configuration in CDT, which populates the data in the *DEVICE_AUTHENTICATION* table.
1. Enter the APPC DB container
@@ -682,36 +818,43 @@ For each VNF that uses the Ansible protocol you need to configure *PASSWORD* and
::
MariaDB [(none)]> use sdnctl;
- MariaDB [sdnctl]> UPDATE DEVICE_AUTHENTICATION SET URL = 'http://appc-ansible-server:8000/Dispatch' WHERE ACTION LIKE 'DistributeTraffic%';
- MariaDB [sdnctl]> UPDATE DEVICE_AUTHENTICATION SET PASSWORD = 'admin' WHERE ACTION LIKE 'DistributeTraffic%';
- MariaDB [sdnctl]> select * from DEVICE_AUTHENTICATION;
+ MariaDB [sdnctl]> UPDATE DEVICE_AUTHENTICATION SET URL = 'http://appc-ansible-server:8000/Dispatch' WHERE PROTOCOL LIKE 'ANSIBLE' AND URL IS NULL;
+ MariaDB [sdnctl]> UPDATE DEVICE_AUTHENTICATION SET PASSWORD = 'admin' WHERE PROTOCOL LIKE 'ANSIBLE' AND PASSWORD IS NULL;
+ MariaDB [sdnctl]> select * from DEVICE_AUTHENTICATION WHERE PROTOCOL LIKE 'ANSIBLE';
-Result should be simmilar to the following one:
+Result should be similar to the following one:
::
+--------------------------+------------------------------------------------------+----------+------------------------+-----------+----------+-------------+------------------------------------------+
| DEVICE_AUTHENTICATION_ID | VNF_TYPE | PROTOCOL | ACTION | USER_NAME | PASSWORD | PORT_NUMBER | URL |
+--------------------------+------------------------------------------------------+----------+------------------------+-----------+----------+-------------+------------------------------------------+
- | 137 | vFWDT 2019-05-20 21:10:/vFWDT_vPKG a646a255-9bee 0 | ANSIBLE | DistributeTraffic | admin | admin | 8000 | http://appc-ansible-server:8000/Dispatch |
- | 143 | vFWDT 2019-05-20 21:10:/vFWDT_vFWSNK b463aa83-b1fc 0 | ANSIBLE | DistributeTraffic | admin | admin | 8000 | http://appc-ansible-server:8000/Dispatch |
- | 149 | vFWDT 2019-05-20 21:10:/vFWDT_vFWSNK b463aa83-b1fc 0 | ANSIBLE | DistributeTrafficCheck | admin | admin | 8000 | http://appc-ansible-server:8000/Dispatch |
- | 152 | vFWDT 2019-05-20 21:10:/vFWDT_vPKG a646a255-9bee 0 | ANSIBLE | DistributeTrafficCheck | admin | admin | 8000 | http://appc-ansible-server:8000/Dispatch |
+ | 118 | vFWDT 2020-04-21 17-26-/vFWDT_vFWSNK 1faca5b5-4c29 1 | ANSIBLE | DistributeTrafficCheck | admin | admin | 8000 | http://appc-ansible-server:8000/Dispatch |
+ | 121 | vFWDT 2020-04-21 17-26-/vFWDT_vFWSNK 1faca5b5-4c29 1 | ANSIBLE | UpgradeSoftware | admin | admin | 8000 | http://appc-ansible-server:8000/Dispatch |
+ | 124 | vFWDT 2020-04-21 17-26-/vFWDT_vFWSNK 1faca5b5-4c29 1 | ANSIBLE | UpgradePreCheck | admin | admin | 8000 | http://appc-ansible-server:8000/Dispatch |
+ | 127 | vFWDT 2020-04-21 17-26-/vFWDT_vFWSNK 1faca5b5-4c29 1 | ANSIBLE | UpgradePostCheck | admin | admin | 8000 | http://appc-ansible-server:8000/Dispatch |
+ | 133 | vFWDT 2020-04-21 17-26-/vFWDT_vPKG 8021eee9-3a8f 0 | ANSIBLE | DistributeTraffic | admin | admin | 8000 | http://appc-ansible-server:8000/Dispatch |
+ | 136 | vFWDT 2020-04-21 17-26-/vFWDT_vPKG 8021eee9-3a8f 0 | ANSIBLE | DistributeTrafficCheck | admin | admin | 8000 | http://appc-ansible-server:8000/Dispatch |
+--------------------------+------------------------------------------------------+----------+------------------------+-----------+----------+-------------+------------------------------------------+
- 4 rows in set (0.00 sec)
+
+ 6 rows in set (0.00 sec)
+
+4. Configuration of the APPC DB is also automated and all the steps above can be performed with the *configure_ansible.sh* script introduced in the previous sections
-Testing Traffic Distribution Workflow
--------------------------------------
+Testing Workflows
+-----------------
-Since all the configuration of components of ONAP is already prepared it is possible to enter second phase of Traffic Distribution Workflow execution -
-the execution of DistributeTraffic and DistributeTrafficCheck LCM actions with configuration resolved before by OptimizationFramework.
+Since all the configuration of the ONAP components is already prepared, it is possible to enter the second phase of the workflows execution -
+the execution of the APPC LCM actions with the configuration resolved before by the Optimization Framework.
Workflow Execution
~~~~~~~~~~~~~~~~~~
-In order to run Traffic Distribution Workflow execute following commands from the vFWDT tutorial directory `Preparation of Workflow Script Environment`_ on Rancher server.
+In order to run the workflows, execute the following commands from the vFWDT tutorial directory `Preparation of Workflow Script Environment`_ on the Rancher server.
+
+For Traffic Distribution workflow run
::
@@ -719,65 +862,83 @@ In order to run Traffic Distribution Workflow execute following commands from th
python3 workflow.py 909d396b-4d99-4c6a-a59b-abe948873303 10.12.5.217 10.12.5.63 True False False True
-The order of executed LCM actions is following:
+The order of the executed LCM actions for the Traffic Distribution workflow is the following:
-1. DistributeTrafficCheck on vPKG VM - ansible playbook checks if traffic destinations specified by OOF is not configued in the vPKG and traffic does not go from vPKG already.
- If vPKG send alreadyt traffic to destination the playbook will fail and workflow will break.
-2. DistributeTraffic on vPKG VM - ansible playbook reconfigures vPKG in order to send traffic to destination specified before by OOF. When everything is fine at this stage
- change of the traffic should be observed on following dashboards (please turn on automatic reload of graphs)
+1. CheckLock on vPKG, vFW-1 and vFW-2 VMs
+2. Lock on vPKG, vFW-1 and vFW-2 VMs
+3. DistributeTrafficCheck on the vPKG VM - the ansible playbook checks if the traffic destination specified by OOF is not already configured in the vPKG and traffic does not go from vPKG to it already.
+   If vPKG already sends traffic to the destination, the playbook will fail and the workflow will break.
+4. DistributeTraffic on the vPKG VM - the ansible playbook reconfigures vPKG in order to send traffic to the destination specified before by OOF.
+5. DistributeTrafficCheck on the vFW-1 VM - the ansible playbook checks that traffic is no longer present on the vFW from which traffic should be migrated out. If traffic is still present after 30 seconds, the playbook fails
+6. DistributeTrafficCheck on the vFW-2 VM - the ansible playbook checks that traffic is present on the vFW to which traffic should be migrated. If traffic is still not present after 30 seconds, the playbook fails
+7. Unlock on vPKG, vFW-1 and vFW-2 VMs
- ::
- http://<vSINK-1-IP>:667/
- http://<vSINK-2-IP>:667/
+For the In-Place Software Upgrade with Traffic Distribution workflow run
+
+::
+
+ cd workflow
+ python3 workflow.py 909d396b-4d99-4c6a-a59b-abe948873303 10.12.5.217 10.12.5.63 True False False True 2.0
+
+
+The order of the executed LCM actions for the In-Place Software Upgrade with Traffic Distribution workflow is the following (a sketch of the status polling performed for every action is shown after the list):
-3. DistributeTrafficCheck on vFW-1 VM - ansible playbook checks if traffic is not present on vFW from which traffic should be migrated out. If traffic is still present after 30 seconds playbook fails
-4. DistributeTrafficCheck on vFW-2 VM - ansible playbook checks if traffic is present on vFW from which traffic should be migrated out. If traffic is still not present after 30 seconds playbook fails
+1. CheckLock on vPKG, vFW-1 and vFW-2 VMs
+2. Lock on vPKG, vFW-1 and vFW-2 VMs
+3. UpgradePreCheck on vFW-1 VM - checks if the software version on vFW is different than the one requested in the workflow input
+4. DistributeTrafficCheck on vPKG VM - ansible playbook checks if traffic destinations specified by OOF is not configured in the vPKG and traffic does not go from vPKG already.
+ If vPKG send already traffic to destination the playbook will fail and workflow will break.
+5. DistributeTraffic on vPKG VM - ansible playbook reconfigures vPKG in order to send traffic to destination specified before by OOF.
+6. DistributeTrafficCheck on vFW-1 VM - ansible playbook checks if traffic is not present on vFW from which traffic should be migrated out. If traffic is still present after 30 seconds playbook fails
+7. DistributeTrafficCheck on vFW-2 VM - ansible playbook checks if traffic is present on vFW from which traffic should be migrated out. If traffic is still not present after 30 seconds playbook fails
+8. UpgradeSoftware on vFW-1 VM - ansible playbook modifies the software on the vFW instance and sets the version of the software to the specified one in the request
+9. UpgradePostCheck on vFW-1 VM - ansible playbook checks if the software of vFW is the same like the one specified in the workflows input.
+10. DistributeTraffic on vPKG VM - ansible playbook reconfigures vPKG in order to send traffic to destination specified before by OOF (reverse configuration).
+11. DistributeTrafficCheck on vFW-2 VM - ansible playbook checks if traffic is not present on vFW from which traffic should be migrated out. If traffic is still present after 30 seconds playbook fails
+12. DistributeTrafficCheck on vFW-1 VM - ansible playbook checks if traffic is present on vFW from which traffic should be migrated out. If traffic is still not present after 30 seconds playbook fails
+13. Unlock on vPKG, vFW-1 and vFW-2 VMs
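+
+The same sequence can be summarised as a simplified control-flow sketch. This is not the real workflow.py code: ``run_lcm`` merely stands in for building an APPC LCM request and polling its status until completion, and the parameter names are assumptions made for illustration.
+
+::
+
+    def run_lcm(action, target, **params):
+        """Placeholder for one APPC LCM request plus status polling."""
+        print("APPC LCM << {} >> on {} {}".format(action, target, params))
+        return True  # the sketch assumes every action reports SUCCESSFUL
+
+    def upgrade_with_traffic_distribution(vpkg, vfw_1, vfw_2, new_version):
+        vms = (vpkg, vfw_1, vfw_2)
+        if not all(run_lcm("CheckLock", vm) for vm in vms):
+            return False                       # another operation holds a lock
+        for vm in vms:
+            run_lcm("Lock", vm)
+        try:
+            if not run_lcm("UpgradePreCheck", vfw_1, new_software_version=new_version):
+                return False                   # vFW-1 already runs the requested version
+            # migrate traffic away from vFW-1, upgrade it, then bring traffic back
+            run_lcm("DistributeTrafficCheck", vpkg)
+            run_lcm("DistributeTraffic", vpkg)
+            run_lcm("DistributeTrafficCheck", vfw_1)   # traffic gone from the source
+            run_lcm("DistributeTrafficCheck", vfw_2)   # traffic present on the destination
+            run_lcm("UpgradeSoftware", vfw_1, new_software_version=new_version)
+            run_lcm("UpgradePostCheck", vfw_1, new_software_version=new_version)
+            run_lcm("DistributeTraffic", vpkg)         # reverse configuration
+            run_lcm("DistributeTrafficCheck", vfw_2)
+            run_lcm("DistributeTrafficCheck", vfw_1)
+            return True
+        finally:
+            for vm in vms:
+                run_lcm("Unlock", vm)
+
+    upgrade_with_traffic_distribution("vPKG", "vFW-1", "vFW-2", "2.0")
+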
+For both workflows, when everything goes fine, the change of the traffic should be observed on the following dashboards (please turn on automatic reload of the graphs). The observed traffic pattern for the upgrade scenario should be similar to the one presented in Figure 2.
+
+ ::
+
+ http://vSINK-1-IP:667/
+ http://vSINK-2-IP:667/
+
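+If you only want a quick check from a terminal that both dashboards respond while the workflow runs (the graphs themselves are best watched in a browser), a small polling sketch like the one below can be used; the vSINK addresses are placeholders that must be replaced with the ones from your deployment.
+
+::
+
+    import time
+    import urllib.request
+
+    SINKS = ["http://vSINK-1-IP:667/", "http://vSINK-2-IP:667/"]  # replace the IPs
+
+    for _ in range(10):                      # poll for roughly 50 seconds
+        for url in SINKS:
+            try:
+                with urllib.request.urlopen(url, timeout=5) as resp:
+                    print(url, resp.status)  # 200 means the dashboard is reachable
+            except OSError as err:
+                print(url, "unreachable:", err)
+        time.sleep(5)
+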
Workflow Results
~~~~~~~~~~~~~~~~
-Expected result of workflow execution, when everythin is fine, is following:
+Expected result of the Traffic Distribution workflow execution, when everything goes fine, is the following:
::
Distribute Traffic Workflow Execution:
- APPC REQ 0 - DistributeTrafficCheck
- Request Accepted. Receiving result status...
- Checking LCM DistributeTrafficCheck Status
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
+ WORKFLOW << Migrate vFW Traffic Conditionally >>
+ APPC LCM << CheckLock >> [Check vPGN Lock Status]
+ UNLOCKED
+ APPC LCM << CheckLock >> [Check vFW-1 Lock Status]
+ UNLOCKED
+ APPC LCM << CheckLock >> [Check vFW-2 Lock ]
+ UNLOCKED
+ APPC LCM << Lock >> [Lock vPGN]
SUCCESSFUL
- APPC REQ 1 - DistributeTraffic
- Request Accepted. Receiving result status...
- Checking LCM DistributeTraffic Status
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
+ APPC LCM << Lock >> [Lock vFW-1]
+ SUCCESSFUL
+ APPC LCM << Lock >> [Lock vFW-2]
+ SUCCESSFUL
+ APPC LCM << DistributeTrafficCheck >> [Check current traffic destination on vPGN]
+ ACCEPTED
+ APPC LCM << DistributeTrafficCheck >> [Status]
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
SUCCESSFUL
- APPC REQ 2 - DistributeTrafficCheck
- Request Accepted. Receiving result status...
- Checking LCM DistributeTrafficCheck Status
- IN_PROGRESS
+ WORKFLOW << Migrate Traffic and Verify >>
+ APPC LCM << DistributeTraffic >> [Migrating source vFW traffic to destination vFW]
+ ACCEPTED
+ APPC LCM << DistributeTraffic >> [Status]
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
@@ -787,49 +948,77 @@ Expected result of workflow execution, when everythin is fine, is following:
IN_PROGRESS
IN_PROGRESS
SUCCESSFUL
- APPC REQ 3 - DistributeTrafficCheck
- Request Accepted. Receiving result status...
- Checking LCM DistributeTrafficCheck Status
- IN_PROGRESS
- IN_PROGRESS
+ APPC LCM << DistributeTrafficCheck >> [Checking traffic has been stopped on the source vFW]
+ ACCEPTED
+ APPC LCM << DistributeTrafficCheck >> [Status]
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
+ SUCCESSFUL
+ APPC LCM << DistributeTrafficCheck >> [Checking traffic has appeared on the destination vFW]
+ ACCEPTED
+ APPC LCM << DistributeTrafficCheck >> [Status]
IN_PROGRESS
IN_PROGRESS
SUCCESSFUL
+ APPC LCM << Unlock >> [Unlock vPGN]
+ SUCCESSFUL
+ APPC LCM << Unlock >> [Unlock vFW-1]
+ SUCCESSFUL
+ APPC LCM << Unlock >> [Unlock vFW-2]
+ SUCCESSFUL
+
+
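+The repeated IN_PROGRESS lines in the log above come from the script polling the status of each accepted LCM action until a terminal state is reached. A minimal, illustrative polling loop could look as follows; ``check_status`` is a placeholder for the actual APPC LCM status query performed by workflow.py.
+
+::
+
+    import time
+
+    def wait_for_lcm_result(check_status, timeout_s=300, interval_s=10):
+        """Poll an LCM action until it leaves ACCEPTED/IN_PROGRESS or times out."""
+        deadline = time.time() + timeout_s
+        while time.time() < deadline:
+            status = check_status()          # e.g. ACCEPTED, IN_PROGRESS, SUCCESSFUL, FAILED
+            print(status)
+            if status not in ("ACCEPTED", "IN_PROGRESS"):
+                return status                # terminal state
+            time.sleep(interval_s)
+        return "TIMEOUT"
+
+    # usage sketch with a stubbed status query
+    print(wait_for_lcm_result(lambda: "SUCCESSFUL", interval_s=1))
+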
+In case we want to execute an operation while one of the VNFs is locked because another operation is being executed on it:
+
+::
+
+ Distribute Traffic Workflow Execution:
+ WORKFLOW << Migrate vFW Traffic Conditionally >>
+ APPC LCM << CheckLock >> [Check vPGN Lock Status]
+ LOCKED
+ Traceback (most recent call last):
+ File "workflow.py", line 1235, in <module>
+ sys.argv[6].lower() == 'true', sys.argv[7].lower() == 'true', new_version)
+ File "workflow.py", line 1209, in execute_workflow
+ _execute_lcm_requests({"requests": lcm_requests, "description": "Migrate vFW Traffic Conditionally"}, onap_ip, check_result)
+ File "workflow.py", line 101, in wrap
+ ret = f(*args, **kwargs)
+ File "workflow.py", line 1007, in _execute_lcm_requests
+ raise Exception("APPC LCM << {} >> FAILED".format(req['input']['action']))
+ Exception: APPC LCM << CheckLock >> FAILED
+
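+The traceback above simply means that another operation still holds a lock on one of the target VNFs, so workflow.py aborts before doing anything. If you prefer to wait for the lock to clear instead of failing immediately, a sketch like the one below can be run first; ``check_lock`` is an assumed helper wrapping the APPC LCM CheckLock call, not part of the provided scripts.
+
+::
+
+    import time
+
+    def wait_until_unlocked(check_lock, vnf_ids, timeout_s=600, interval_s=30):
+        """Wait until every VNF reports UNLOCKED, or give up after timeout_s."""
+        deadline = time.time() + timeout_s
+        while time.time() < deadline:
+            states = {vnf: check_lock(vnf) for vnf in vnf_ids}   # "LOCKED" / "UNLOCKED"
+            if all(state == "UNLOCKED" for state in states.values()):
+                return True
+            print("still locked:", [v for v, s in states.items() if s == "LOCKED"])
+            time.sleep(interval_s)
+        return False
+
+    # usage sketch with a stubbed CheckLock call
+    print(wait_until_unlocked(lambda vnf: "UNLOCKED", ["vPGN", "vFW-1", "vFW-2"], interval_s=1))
+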
In case of failure the result can be the following:
::
Distribute Traffic Workflow Execution:
- APPC REQ 0 - DistributeTrafficCheck
- Request Accepted. Receiving result status...
- Checking LCM DistributeTrafficCheck Status
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
- IN_PROGRESS
+ WORKFLOW << Migrate vFW Traffic Conditionally >>
+ APPC LCM << CheckLock >> [Check vPGN Lock Status]
+ UNLOCKED
+ APPC LCM << CheckLock >> [Check vFW-1 Lock Status]
+ UNLOCKED
+ APPC LCM << CheckLock >> [Check vFW-2 Lock ]
+ UNLOCKED
+ APPC LCM << Lock >> [Lock vPGN]
+ SUCCESSFUL
+ APPC LCM << Lock >> [Lock vFW-1]
+ SUCCESSFUL
+ APPC LCM << Lock >> [Lock vFW-2]
+ SUCCESSFUL
+ APPC LCM << DistributeTrafficCheck >> [Check current traffic destination on vPGN]
+ ACCEPTED
+ APPC LCM << DistributeTrafficCheck >> [Status]
FAILED
- Traceback (most recent call last):
- File "workflow.py", line 563, in <module>
- sys.argv[5].lower() == 'true', sys.argv[6].lower() == 'true')
- File "workflow.py", line 557, in execute_workflow
- confirm_appc_lcm_action(onap_ip, req, check_result)
- File "workflow.py", line 529, in confirm_appc_lcm_action
- raise Exception("LCM {} {} - {}".format(req['input']['action'], status['status'], status['status-reason']))
- Exception: LCM DistributeTrafficCheck FAILED - FAILED
-
-.. note:: When CDT and Ansible is configured properly Traffic Distribution Workflow can fail when you pass as a vnf-id argument the ID of vFW VNF which does not handle traffic at the moment. To solve that pass the VNF ID of the other vFW VNF instance. Because of the same reason you cannot execute twice in a row workflow for the same VNF ID if first execution succedds.
+ APPC LCM <<DistributeTrafficCheck>> [FAILED - FAILED]
+ WORKFLOW << Migrate Traffic and Verify >> SKIP
+ APPC LCM << Unlock >> [Unlock vPGN]
+ SUCCESSFUL
+ APPC LCM << Unlock >> [Unlock vFW-1]
+ SUCCESSFUL
+ APPC LCM << Unlock >> [Unlock vFW-2]
+ SUCCESSFUL
+
+
+.. note:: Even when CDT and Ansible are configured properly, the Traffic Distribution Workflow can fail when you pass as the vnf-id argument the ID of the vFW VNF which does not handle traffic at the moment. To solve that, pass the VNF ID of the other vFW VNF instance. For the same reason, you cannot execute the workflow twice in a row for the same VNF ID if the first execution succeeds.
diff --git a/docs/files/dt-use-case.png b/docs/files/dt-use-case.png
index 068e9e587..62b67d078 100755
--- a/docs/files/dt-use-case.png
+++ b/docs/files/dt-use-case.png
Binary files differ
diff --git a/docs/files/vfwdt-general-workflow-sd.png b/docs/files/vfwdt-general-workflow-sd.png
new file mode 100644
index 000000000..89fa1f4ab
--- /dev/null
+++ b/docs/files/vfwdt-general-workflow-sd.png
Binary files differ
diff --git a/docs/files/vfwdt-identification-workflow-sd.png b/docs/files/vfwdt-identification-workflow-sd.png
new file mode 100644
index 000000000..83310f731
--- /dev/null
+++ b/docs/files/vfwdt-identification-workflow-sd.png
Binary files differ
diff --git a/docs/files/vfwdt-td-workflow-sd.png b/docs/files/vfwdt-td-workflow-sd.png
new file mode 100644
index 000000000..73c6305a0
--- /dev/null
+++ b/docs/files/vfwdt-td-workflow-sd.png
Binary files differ
diff --git a/docs/files/vfwdt-upgrade-workflow-sd.png b/docs/files/vfwdt-upgrade-workflow-sd.png
new file mode 100644
index 000000000..6b2ee5dfa
--- /dev/null
+++ b/docs/files/vfwdt-upgrade-workflow-sd.png
Binary files differ
diff --git a/docs/files/vfwdt-workflow-general.png b/docs/files/vfwdt-workflow-general.png
new file mode 100644
index 000000000..3ffe35db6
--- /dev/null
+++ b/docs/files/vfwdt-workflow-general.png
Binary files differ
diff --git a/docs/files/vfwdt-workflow-traffic.png b/docs/files/vfwdt-workflow-traffic.png
new file mode 100644
index 000000000..8bc6073dd
--- /dev/null
+++ b/docs/files/vfwdt-workflow-traffic.png
Binary files differ
diff --git a/docs/files/vfwdt-workflow-upgrade.png b/docs/files/vfwdt-workflow-upgrade.png
new file mode 100644
index 000000000..6e24c706d
--- /dev/null
+++ b/docs/files/vfwdt-workflow-upgrade.png
Binary files differ