author    Michal Ptacek <m.ptacek@partner.samsung.com>  2019-11-04 06:45:08 +0000
committer Michal Ptacek <m.ptacek@partner.samsung.com>  2019-11-04 06:45:08 +0000
commit    7168a9a9e41d9aa1c2b5a69e0886893038b8bc68 (patch)
tree      797cfc617594a82e3ee2a2fc88494c988548352f /docs/vFWCL-notes.rst
parent    f2ae9b14e2b49e537cf87a4409912377f00bc471 (diff)
Updating instructions for vFWCL on El Alto

This commit also contains some patch files due to POLICY-2191; these are
expected to be removed shortly after a new drools image is created.

Issue-ID: OOM-1996
Change-Id: Ia2db50fc6dc66ea0c7598d6859eb08007b59a0b9
Signed-off-by: Michal Ptacek <m.ptacek@partner.samsung.com>
Diffstat (limited to 'docs/vFWCL-notes.rst')
-rw-r--r-- docs/vFWCL-notes.rst | 386
1 file changed, 144 insertions(+), 242 deletions(-)
diff --git a/docs/vFWCL-notes.rst b/docs/vFWCL-notes.rst
index 17a49399..2d6fd6fb 100644
--- a/docs/vFWCL-notes.rst
+++ b/docs/vFWCL-notes.rst
@@ -1,18 +1,18 @@
-*************************************
-vFWCL on Dublin ONAP offline platform
-*************************************
+***************************************
+vFWCL on El Alto ONAP offline platform
+***************************************
|image0|
-This document is collecting notes we have from running vFirewall demo on offline Dublin platform
+This document collects the notes we have from running the vFirewall demo on the offline El Alto platform
installed by ONAP offline installer tool.
-Overall it was much easier in compare with earlier version, however following steps are still needed.
+Overall it is slightly more complicated than in Dublin, mainly due to the POLICY-2191 issue.
Some of the most relevant materials are available at the following links:
-* `oom_quickstart_guide.html <https://docs.onap.org/en/dublin/submodules/oom.git/docs/oom_quickstart_guide.html>`_
-* `docs_vfw.html <https://docs.onap.org/en/dublin/submodules/integration.git/docs/docs_vfw.html>`_
+* `oom_quickstart_guide.html <https://docs.onap.org/en/elalto/submodules/oom.git/docs/oom_quickstart_guide.html>`_
+* `docs_vfw.html <https://docs.onap.org/en/elalto/submodules/integration.git/docs/docs_vfw.html>`_
.. contents:: Table of Contents
@@ -32,190 +32,59 @@ Snippets below are describing areas we need to configure for successfull vFWCL d
Pay attention to them and configure them (ideally before deployment) accordingly.
-**1) <helm_charts_dir>/onap/values.yaml**::
-
-
- #################################################################
- # Global configuration overrides.
- # !!! VIM specific entries are in APPC / Robot & SO parts !!!
- #################################################################
- global:
- # Change to an unused port prefix range to prevent port conflicts
- # with other instances running within the same k8s cluster
- nodePortPrefix: 302
- nodePortPrefixExt: 304
-
- # ONAP Repository
- # Uncomment the following to enable the use of a single docker
- # repository but ONLY if your repository mirrors all ONAP
- # docker images. This includes all images from dockerhub and
- # any other repository that hosts images for ONAP components.
- #repository: nexus3.onap.org:10001
- repositoryCred:
- user: docker
- password: docker
-
- # readiness check - temporary repo until images migrated to nexus3
- readinessRepository: oomk8s
- # logging agent - temporary repo until images migrated to nexus3
- loggingRepository: docker.elastic.co
-
- # image pull policy
- pullPolicy: Always
-
- # default mount path root directory referenced
- # by persistent volumes and log files
- persistence:
- mountPath: /dockerdata-nfs
- enableDefaultStorageclass: false
- parameters: {}
- storageclassProvisioner: kubernetes.io/no-provisioner
- volumeReclaimPolicy: Retain
-
- # override default resource limit flavor for all charts
- flavor: unlimited
-
- # flag to enable debugging - application support required
- debugEnabled: false
-
- #################################################################
- # Enable/disable and configure helm charts (ie. applications)
- # to customize the ONAP deployment.
- #################################################################
- aaf:
- enabled: true
- aai:
- enabled: true
- appc:
- enabled: true
- config:
- openStackType: "OpenStackProvider"
- openStackName: "OpenStack"
- openStackKeyStoneUrl: "http://10.20.30.40:5000/v2.0"
- openStackServiceTenantName: "service"
- openStackDomain: "default"
- openStackUserName: "onap-tieto"
- openStackEncryptedPassword: "31ECA9F2BA98EF34C9EC3412D071E31185F6D9522808867894FF566E6118983AD5E6F794B8034558"
- cassandra:
- enabled: true
- clamp:
- enabled: true
- cli:
- enabled: true
- consul:
- enabled: true
- contrib:
- enabled: true
- dcaegen2:
- enabled: true
- pnda:
- enabled: true
- dmaap:
- enabled: true
- esr:
- enabled: true
- log:
- enabled: true
- sniro-emulator:
- enabled: true
- oof:
- enabled: true
- mariadb-galera:
- enabled: true
- msb:
- enabled: true
- multicloud:
- enabled: true
- nbi:
- enabled: true
- config:
- # openstack configuration
- openStackRegion: "Yolo"
- openStackVNFTenantId: "1234"
- nfs-provisioner:
- enabled: true
- policy:
- enabled: true
- pomba:
- enabled: true
- portal:
- enabled: true
- robot:
- enabled: true
- appcUsername: "appc@appc.onap.org"
- appcPassword: "demo123456!"
- openStackKeyStoneUrl: "http://10.20.30.40:5000"
- openStackPublicNetId: "9403ceea-0738-4908-a826-316c8541e4bb"
- openStackPublicNetworkName: "rc3-offline-network"
- openStackTenantId: "b1ce7742d956463999923ceaed71786e"
- openStackUserName: "onap-tieto"
- ubuntu14Image: "trusty"
- openStackPrivateNetId: "3c7aa2bd-ba14-40ce-8070-6a0d6a617175"
- openStackPrivateSubnetId: "2bcb9938-9c94-4049-b580-550a44dc63b3"
- openStackPrivateNetCidr: "10.0.0.0/16"
- openStackSecurityGroup: "onap_sg"
- openStackOamNetworkCidrPrefix: "10.0"
- dcaeCollectorIp: "10.8.8.22" # this IP is taken from k8s host
- vnfPubKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPwF2bYm2QuqZpjuAcZDJTcFdUkKv4Hbd/3qqbxf6g5ZgfQarCi+mYnKe9G9Px3CgFLPdgkBBnMSYaAzMjdIYOEdPKFTMQ9lIF0+i5KsrXvszWraGKwHjAflECfpTAWkPq2UJUvwkV/g7NS5lJN3fKa9LaqlXdtdQyeSBZAUJ6QeCE5vFUplk3X6QFbMXOHbZh2ziqu8mMtP+cWjHNBB47zHQ3RmNl81Rjv+QemD5zpdbK/h6AahDncOY3cfN88/HPWrENiSSxLC020sgZNYgERqfw+1YhHrclhf3jrSwCpZikjl7rqKroua2LBI/yeWEta3amTVvUnR2Y7gM8kHyh Generated-by-Nova"
- demoArtifactsVersion: "1.4.0" # Dublin prefered is 1.4.0
- demoArtifactsRepoUrl: "https://nexus.onap.org/content/repositories/releases"
- scriptVersion: "1.4.0" # Dublin prefered is 1.4.0
- rancherIpAddress: "10.8.8.8" # this IP is taken from infra node
- config:
- # instructions how to generate this value properly are in OOM quick quide mentioned above
- openStackEncryptedPasswordHere: "f7920677e15e2678b0f33736189e8965"
-
- sdc:
- enabled: true
- sdnc:
- enabled: true
-
- replicaCount: 1
-
- mysql:
- replicaCount: 1
- so:
- enabled: true
- config:
- openStackUserName: "onap-tieto"
- openStackRegion: "RegionOne"
- openStackKeyStoneUrl: "http://10.20.30.40:5000"
- openStackServiceTenantName: "services"
- # instructions how to generate this value properly are in OOM quick quide mentioned above
- openStackEncryptedPasswordHere: "31ECA9F2BA98EF34C9EC3412D071E31185F6D9522808867894FF566E6118983AD5E6F794B8034558"
-
- replicaCount: 1
-
- liveness:
- # necessary to disable liveness probe when setting breakpoints
- # in debugger so K8s doesn't restart unresponsive container
- enabled: true
-
- so-catalog-db-adapter:
- config:
- openStackUserName: "onap-tieto"
- openStackKeyStoneUrl: "http://10.20.30.40:5000/v2.0"
- # instructions how to generate this value properly are in OOM quick quide mentioned above
- openStackEncryptedPasswordHere: "31ECA9F2BA98EF34C9EC3412D071E31185F6D9522808867894FF566E6118983AD5E6F794B8034558"
-
- uui:
- enabled: true
- vfc:
- enabled: true
- vid:
- enabled: true
- vnfsdk:
- enabled: true
- modeling:
- enabled: true
-
-
-**2) <helm_charts_dir>/robot/resources/config/eteshare/config/vm_properties.py**::
-
- # following patch is required because in Dublin public network is hardcoded
- # reported in TEST-166 and is implemented in El-Alto
- # just add following row into file
- GLOBAL_INJECTED_OPENSTACK_PUBLIC_NETWORK = '{{ .Values.openStackPublicNetworkName }}'
+.. note:: We are using the standard OOM kubernetes/onap/resources/overrides/onap-all.yaml override to enable all components; however, it looks like a better tailored onap-vfw.yaml exists in the same folder. The following description focuses only on the other override values specific to our lab environment.
+
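+A minimal sketch of how such overrides end up on the helm command line, assuming the
+OOM *helm deploy* plugin is installed and using a hypothetical *lab-overrides.yaml*
+file holding the values from the snippet below::
+
+    # run from <helm_charts_dir>; override files are merged, later ones win
+    helm deploy dev local/onap --namespace onap \
+      -f ./onap/resources/overrides/onap-all.yaml \
+      -f ./lab-overrides.yaml
+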
+**1) Override values for the APPC / Robot and SO parts**::
+
+
+ appc:
+ enabled: true
+ config:
+ openStackType: "OpenStackProvider"
+ openStackName: "OpenStack"
+ openStackKeyStoneUrl: "http://10.20.30.40:5000/v2.0"
+ openStackServiceTenantName: "service"
+ openStackDomain: "default"
+ openStackUserName: "onap-tieto"
+ openStackEncryptedPassword: "31ECA9F2BA98EF34C9EC3412D071E31185F6D9522808867894FF566E6118983AD5E6F794B8034558"
+ robot:
+ enabled: true
+ appcUsername: "appc@appc.onap.org"
+ appcPassword: "demo123456!"
+ openStackKeyStoneUrl: "http://10.20.30.40:5000"
+ openStackPublicNetId: "9403ceea-0738-4908-a826-316c8541e4bb"
+ openStackTenantId: "b1ce7742d956463999923ceaed71786e"
+ openStackUserName: "onap-tieto"
+ ubuntu14Image: "trusty"
+ openStackPrivateNetId: "3c7aa2bd-ba14-40ce-8070-6a0d6a617175"
+ openStackPrivateSubnetId: "2bcb9938-9c94-4049-b580-550a44dc63b3"
+ openStackPrivateNetCidr: "10.0.0.0/16"
+ openStackSecurityGroup: "onap_sg"
+ openStackOamNetworkCidrPrefix: "10.0"
+ openStackPublicNetworkName: "rc3-offline-network"
+ vnfPrivateKey: '/var/opt/ONAP/onap-dev.pem'
+ vnfPubKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPwF2bYm2QuqZpjuAcZDJTcFdUkKv4Hbd/3qqbxf6g5ZgfQarCi+mYnKe9G9Px3CgFLPdgkBBnMSYaAzMjdIYOEdPKFTMQ9lIF0+i5KsrXvszWraGKwHjAflECfpTAWkPq2UJUvwkV/g7NS5lJN3fKa9LaqlXdtdQyeSBZAUJ6QeCE5vFUplk3X6QFbMXOHbZh2ziqu8mMtP+cWjHNBB47zHQ3RmNl81Rjv+QemD5zpdbK/h6AahDncOY3cfN88/HPWrENiSSxLC020sgZNYgERqfw+1YhHrclhf3jrSwCpZikjl7rqKroua2LBI/yeWEta3amTVvUnR2Y7gM8kHyh Generated-by-Nova"
+ demoArtifactsVersion: "1.4.0"
+ demoArtifactsRepoUrl: "https://nexus.onap.org/content/repositories/releases"
+ scriptVersion: "1.4.0"
+ config:
+      # openStackEncryptedPasswordHere should match the encrypted string used in SO and APPC, and is overridden per environment
+ openStackEncryptedPasswordHere: "f7920677e15e2678b0f33736189e8965"
+ so:
+ enabled: true
+ config:
+ openStackUserName: "onap-tieto"
+ openStackRegion: "RegionOne"
+ openStackKeyStoneUrl: "http://10.20.30.40:5000"
+ openStackServiceTenantName: "services"
+ openStackEncryptedPasswordHere: "31ECA9F2BA98EF34C9EC3412D071E31185F6D9522808867894FF566E6118983AD5E6F794B8034558"
+ so-catalog-db-adapter:
+ config:
+ openStackUserName: "onap-tieto"
+ openStackKeyStoneUrl: "http://10.20.30.40:5000/v2.0"
+ openStackEncryptedPasswordHere: "31ECA9F2BA98EF34C9EC3412D071E31185F6D9522808867894FF566E6118983AD5E6F794B8034558"
+
+
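+The encrypted password values above are environment specific. As a sketch, the robot
+value can be generated with the approach from the OOM quickstart guide referenced
+earlier (the plaintext password below is a placeholder)::
+
+    # run from <helm_charts_dir>/so/resources/config/mso
+    echo -n "<openstack tenant password>" | \
+      openssl aes-128-ecb -e -K `cat encryption.key` -nosalt | xxd -c 256 -p
+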
@@ -236,92 +105,125 @@ Relevant robot scripts are under <helm_charts_dir>/oom/kubernetes/robot
very useful page describing commands for `manual checking of HC’s <https://wiki.onap.org/display/DW/Robot+Healthcheck+Tests+on+ONAP+Components#RobotHealthcheckTestsonONAPComponents-ApplicationController(APPC)Healthcheck>`_
-Step 3. Patch public network
+Unfortunately some patching is still required to get vFWCL working on the ONAP platform.
+We have therefore provided a set of patch files in the ./patches folder within this repo.
+
+After the installation is finished and all healthchecks are green, a few things still need to be patched.
+These are described in the following part.
+
+
+Step 3. Patching
============================
-This is the last part of correction for `TEST-166 <https://jira.onap.org/browse/TEST-166>`_ needed for Dublin branch.
+In order to get vFWCL working in our lab on the offline platform, we need to ensure three things (besides green healthchecks) before proceeding
+with the official instructions.
+
+**robot**
+a) the private key for robot has to be configured properly and point to a key file present on the robot pod
::
- [root@tomas-infra helm_charts]# kubectl get pods -n onap | grep robot
- onap-robot-robot-5c7c46bbf4-4zgkn 1/1 Running 0 3h15m
- [root@tomas-infra helm_charts]# kubectl exec -it onap-robot-robot-5c7c46bbf4-4zgkn bash
- root@onap-robot-robot-5c7c46bbf4-4zgkn:/# cd /var/opt/ONAP/
- root@onap-robot-robot-5c7c46bbf4-4zgkn:/var/opt/ONAP# sed -i 's/network_name=public/network_name=${GLOBAL_INJECTED_OPENSTACK_PUBLIC_NETWORK}/g' robot/resources/demo_preload.robot
- root@onap-robot-robot-5c7c46bbf4-4zgkn:/var/opt/ONAP# sed -i 's/network_name=public/network_name=${GLOBAL_INJECTED_OPENSTACK_PUBLIC_NETWORK}/g' robot/resources/stack_validation/policy_check_vfw.robot
- root@onap-robot-robot-5c7c46bbf4-4zgkn:/var/opt/ONAP# sed -i 's/network_name=public/network_name=${GLOBAL_INJECTED_OPENSTACK_PUBLIC_NETWORK}/g' robot/resources/stack_validation/validate_vfw.robot
+   # open the configmap for robot and check the GLOBAL_INJECTED_PRIVATE_KEY param
+   kubectl edit configmap onap-robot-robot-eteshare-configmap
+   # it should contain a line like
+   # GLOBAL_INJECTED_PRIVATE_KEY = '/var/opt/ONAP/onap-dev.pem'
+We need to supply a private key there, and that key must match the public key distributed to the vFWCL VMs, which
+comes from the *vnfPubKey* parameter in the robot override.
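+A quick way to verify that the pair matches, assuming the key file path used in this lab::
+
+    # print the public key derived from the private key and compare it with vnfPubKey
+    ssh-keygen -y -f /var/opt/ONAP/onap-dev.pem
+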
-Step 4. Set private key for robot when accessing VNFs
-=====================================================
+b) in our lab there is an issue with cloud-init and the vFW VMs get their default route set quite randomly,
+which is a problem because we dedicated the following network for the vFW VMs' public connectivity.
-This is workaround for ticket `TEST-167 <https://jira.onap.org/browse/TEST-167>`_, as of now robot is using following file as private key
-*/var/opt/ONAP/robot/assets/keys/onap_dev.pvt*
+.. note:: the same network has to be reachable from the k8s host where the robot container runs
-One can either set it to own private key, corresponding with public key inserted into VMs from *vnfPubKey* param
-OR
-set mount own private key into robot container and change GLOBAL_VM_PRIVATE_KEY in */var/opt/ONAP/robot/resources/global_properties.robot*
++--------------------------------------+----------------------------------------------+----------------------------------+-------------------------------------------------------+
+| id | name | tenant_id | subnets |
++--------------------------------------+----------------------------------------------+----------------------------------+-------------------------------------------------------+
+| 9403ceea-0738-4908-a826-316c8541e4bb | rc3-offline-network | b1ce7742d956463999923ceaed71786e | 1782c82c-cd92-4fb6-a292-5e396afe63ec 10.8.8.0/24 |
++--------------------------------------+----------------------------------------------+----------------------------------+-------------------------------------------------------+
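+Reachability from the k8s host can be sanity checked first; the gateway address below
+is an assumption based on the 10.8.8.0/24 subnet above::
+
+    # on the k8s host running the robot container
+    ping -c 3 10.8.8.1
+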
+Because of this cloud-init issue we are patching *base_vfw.yaml* for all vFW VMs with the following code
-Step 5. robot init - demo services distribution
-================================================
+::
-Run following robot script to execute both init_customer + distribute
+    # nasty hack to bypass cloud-init issues
+    sed -i '1i nameserver 8.8.8.8' /etc/resolv.conf
+    # pick the interface that carries the 10.8.8.x address
+    iface_correct=`ip a | grep 10.8.8 | awk '{print $7}'`
+    route add default gw 10.8.8.1 ${iface_correct}
+
+
+Let's treat this as an example of how these two problems can be fixed; feel free to adjust the private/public key and to skip the cloud-init workaround if you don't need it.
+Our helper script with the above settings fixes both issues (a) and (b) for us.
::
- #  demo-k8s.sh <namespace> init
+   # copy the offline-installer repo onto the infra node and run the following script from the patches folder
+   ./update_robot.sh
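+The result can be verified afterwards (the pod name is an example; check yours with
+kubectl get pods)::
+
+    # confirm that the configured key file now exists on the robot pod
+    kubectl exec -it onap-robot-robot-5c7c46bbf4-4zgkn -- ls -l /var/opt/ONAP/onap-dev.pem
+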
- [root@tomas-infra robot]# ./demo-k8s.sh onap init
+**drools**
+c) the usecases controller is not working - POLICY-2191
+A couple of pom files are required in order to get the usecases controller in the drools pod instantiated properly.
+This can be fixed by running the following script.
-Step 6. robot instantiateVFW
-============================
+::
-Following tag is used for whole vFWCL testcase. It will deploy single heat stack with 3 VMs and set policies and APPC mount point for vFWCL to happen.
+   # copy the offline-installer repo onto the infra node and run the following script from the patches folder
+ ./update_policy.sh
+
+.. note:: This script also restarts policy; there is a small chance that drools will be marked as sick during the interval while it is being restarted and redeployed. If that happens, just try again.
+
+At this moment one can check that the usecases controller is built properly via:
::
- # demo-k8s.sh <namespace> instantiateVFW
+ # on infra node
+ kubectl exec -it onap-policy-drools-0 bash
+ bash-4.4$ telemetry
+ Version: 1.0.0
+ https://localhost:9696/policy/pdp/engine> get controllers
+ HTTP/1.1 200 OK
+ Content-Length: 24
+ Content-Type: application/json
+ Date: Mon, 04 Nov 2019 06:31:09 GMT
+ Server: Jetty(9.4.20.v20190813)
+
+ [
+ "amsterdam",
+ "usecases"
+ ]
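+
+The same check can also be scripted with curl from inside the drools container, using
+the same credentials as elsewhere in this guide::
+
+    curl -k --silent --user 'demo@people.osaaf.org:demo123456!' \
+      -X GET https://localhost:9696/policy/pdp/engine/controllers
+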
- root@tomas-infra robot]# ./demo-k8s.sh onap instantiateVFW
-Step 7. fix CloseLoopName in tca microservice
-=============================================
+Now we can proceed with the same steps as on the online platform.
-In Dublin scope, tca microservice is configured with hardcoded entries from `tcaSpec.json <https://gerrit.onap.org/r/gitweb?p=dcaegen2/analytics/tca.git;a=blob;f=dpo/tcaSpec.json;h=8e69c068ea47300707b8131fbc8d71e9a47af8a2;hb=HEAD#l278>`_
-After updating operational policy within instantiateVFW robot tag execution, one must change CloseLoopName in tca to match with generated
-value in policy. This is done in two parts:
+Step 4. robot init - demo services distribution
+================================================
-a) get correct value
+Run the following robot script to execute both init_customer and distribute
::
- # from drools container, i.e. drools in Dublin is not mapped to k8s host
- curl -k --silent --user 'demo@people.osaaf.org:demo123456!' -X GET https://localhost:9696/policy/pdp/engine/controllers/usecases/drools/facts/usecases/controlloops --insecure
+   # demo-k8s.sh <namespace> init
+
+ [root@tomas-infra robot]# ./demo-k8s.sh onap init
- # alternatively same value can be obtained from telemetry console in drools container
- telemetry
- https://localhost:9696/policy/pdp/engine> cd controllers/usecases/drools/facts/usecases/controlloops
- https://localhost:9696/policy/pdp/engine/controllers/usecases/drools/facts/usecases/controlloops> get
- HTTP/1.1 200 OK
- Content-Length: 62
- Content-Type: application/json
- Date: Tue, 25 Jun 2019 07:18:56 GMT
- Server: Jetty(9.4.14.v20181114)
- [
- "ControlLoop-vFirewall-da1fd2be-2a26-4704-ab99-cd80fe1cf89c"
- ]
-b) update the tca microservice
+Step 5. robot instantiateVFW
+============================
+
+The following tag is used for the whole vFWCL testcase. It will deploy a single heat stack with 3 VMs and set the policies and the APPC mount point needed for vFWCL to happen.
+
+::
+
+ # demo-k8s.sh <namespace> instantiateVFW
+
+   [root@tomas-infra robot]# ./demo-k8s.sh onap instantiateVFW
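+
+Progress can be followed from the OpenStack side while the stack comes up; a minimal
+sketch, assuming the standard OpenStack CLI is available in the lab::
+
+    # watch the heat stack and its 3 vFW VMs appear
+    openstack stack list
+    openstack server list
+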
-see Preconditions part in `docs_vfw.html <https://docs.onap.org/en/dublin/submodules/integration.git/docs/docs_vfw.html>`_
-This step will be automated in El-Alto, it's tracked in `TEST-168 <https://jira.onap.org/browse/TEST-168>`_
-Step 8. verify vFW
+Step 6. verify vFW
==================
Verify vFWCL. This step verifies the closed-loop functionality, which can also be checked via the DarkStat GUI on the vSINK VM <sink_ip:667>
@@ -332,6 +234,6 @@ Verify VFWCL. This step is just to verify CL functionality, which can be also ve
   # e.g. where 10.8.8.5 is the IP from the public network dedicated to the vPKG VM
   [root@tomas-infra robot]# ./demo-k8s.sh onap vfwclosedloop 10.8.8.5
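+
+A rough non-GUI check of the vSINK counters is also possible; a sketch, reusing the
+<sink_ip> placeholder from above::
+
+    # DarkStat serves its GUI over plain HTTP on port 667 of the vSINK VM
+    curl -s http://<sink_ip>:667/
+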
-.. |image0| image:: images/vFWCL-dublin.jpg
+.. |image0| image:: images/vFWCL.jpg
:width: 387px
:height: 393px