author     Bartek Grzybowski <b.grzybowski@partner.samsung.com>  2019-11-06 15:26:37 +0100
committer  Brian Freeman <bf1936@att.com>  2019-11-19 13:35:54 +0000
commit     f816c306b95ab83cc3b5da5991526ce43c7808a4 (patch)
tree       f24ad22bf2da96fa499c700d8da1e14b6ad318e3 /docs
parent     6eea34bc9c4134b723ff32d8635eb28dd733c743 (diff)
Update to vCPE doc regarding service csars download
Service csars no longer need to be manually transferred from the robot container, as they are automatically downloaded by the vcpe scripts and by ete-k8s.sh during distributevCPEResCust distribution.

Change-Id: I9163972df974828083e3204b5b8786d4bcce2848
Signed-off-by: Bartek Grzybowski <b.grzybowski@partner.samsung.com>
Issue-ID: TEST-228
Diffstat (limited to 'docs')
-rw-r--r--  docs/docs_vCPE.rst  25
1 file changed, 6 insertions, 19 deletions
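
To illustrate the simplified flow this commit documents: the service csars now appear under the vcpe csar directory right after distribution, with no manual copy from the robot container. A minimal sketch follows; the directory path is taken from the listing removed below and may differ per deployment:

::

    # Distribute the vCPE customer service; the service csars are now
    # downloaded automatically by the vcpe scripts / ete-k8s.sh
    ete-k8s.sh onap distributevCPEResCust

    # Check the downloaded csars (path as shown in the old listing below)
    ls -l ~/integration/test/vcpe/csar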
diff --git a/docs/docs_vCPE.rst b/docs/docs_vCPE.rst
index cff5f3f27..ee830b587 100644
--- a/docs/docs_vCPE.rst
+++ b/docs/docs_vCPE.rst
@@ -112,46 +112,33 @@ Here are the main steps to run the use case in Integration lab environment, wher
ete-k8s.sh onap distributevCPEResCust
-10. Manually copy vCPE customer service csar (starting with service-Vcperescust) under Robot container /tmp/csar directory to Rancher vcpe/csar directory, now you should have these files:
-
-::
-
- root@sb00-nfs:~/integration/test/vcpe/csar# ls -l
- total 528
- -rw-r--r-- 1 root root 126545 Jun 26 11:28 service-Demovcpeinfra-csar.csar
- -rw-r--r-- 1 root root 82053 Jun 26 11:28 service-Demovcpevbng-csar.csar
- -rw-r--r-- 1 root root 74179 Jun 26 11:28 service-Demovcpevbrgemu-csar.csar
- -rw-r--r-- 1 root root 79626 Jun 26 11:28 service-Demovcpevgmux-csar.csar
- -rw-r--r-- 1 root root 78156 Jun 26 11:28 service-Demovcpevgw-csar.csar
- -rw-r--r-- 1 root root 83892 Jun 26 11:28 service-Vcperescust20190625D996-csar.csar
-
-11. Instantiate vCPE infra services
+10. Instantiate vCPE infra services
::
vcpe.py infra
-12. From Rancher node run vcpe healthcheck command to check connectivity from sdnc to brg and gmux, and vpp configuration of brg and gmux. Write down BRG MAC address printed out at the last line
+11. From the Rancher node, run the vcpe healthcheck command to check connectivity from sdnc to brg and gmux, and the vpp configuration of brg and gmux. Write down the BRG MAC address printed on the last line
::
healthcheck-k8s.py --namespace <namespace name> --environment <env name>
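
For instance, a hypothetical invocation; both values below are placeholders that depend on your deployment and are not taken from the original text:

::

    # 'onap' is used for both arguments purely as an illustration
    healthcheck-k8s.py --namespace onap --environment onap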
-13. Instantiate vCPE customer service. Input the BRG MAC when prompt
+12. Instantiate vCPE customer service. Input the BRG MAC when prompted
::
vcpe.py customer
-14. Update libevel.so in vGMUX VM and restart the VM. This allows vGMUX to send events to VES collector in close loop test. See tutorial wiki for details
+13. Update libevel.so in the vGMUX VM and restart the VM. This allows vGMUX to send events to the VES collector in the closed loop test. See the tutorial wiki for details
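
A minimal sketch of what this step could look like, assuming the rebuilt libevel.so is available locally and the vGMUX VM is reachable over SSH; the address placeholder, target path, and login are illustrative, not from the tutorial:

::

    # Copy the updated library into the vGMUX VM (illustrative path and address)
    scp libevel.so root@<vgmux-oam-ip>:/usr/lib/

    # Restart the VM so vGMUX loads the new library
    ssh root@<vgmux-oam-ip> reboot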
-15. Run heatbridge. The heatbridge command usage: demo-k8s.sh <namespace> heatbridge <stack_name> <service_instance_id> <service> <oam-ip-address>, please refer to vCPE tutorial page on how to fill in those paraemters. See an example as following:
+14. Run heatbridge. The heatbridge command usage: demo-k8s.sh <namespace> heatbridge <stack_name> <service_instance_id> <service> <oam-ip-address>; please refer to the vCPE tutorial page for how to fill in those parameters. An example follows:
::
~/integration/test/vcpe# ~/oom/kubernetes/robot/demo-k8s.sh onap heatbridge vcpe_vfmodule_e2744f48729e4072b20b_201811262136 d8914ef3-3fdb-4401-adfe-823ee75dc604 vCPEvGMUX 10.0.101.21
-16. Start closed loop test by triggering packet drop VES event, and monitor if vGMUX is restarting. You may need to run the command twice if the first run fails
+15. Start the closed loop test by triggering a packet drop VES event, and monitor whether vGMUX restarts. You may need to run the command twice if the first run fails
::