...shown in the following diagram. Note that key/value pairs of a parent will
always take precedence over those of a child. Also note that values set on the
command line have the highest precedence of all.

.. graphviz::

   digraph config {
      {
         node     [shape=folder]
         oValues  [label="values.yaml"]
         demo     [label="onap-demo.yaml"]
         prod     [label="onap-production.yaml"]
         oReq     [label="requirements.yaml"]
         soValues [label="values.yaml"]
         soReq    [label="requirements.yaml"]
         mdValues [label="values.yaml"]
      }
      {
         oResources [label="resources"]
      }
      onap -> oResources
      onap -> oValues
      oResources -> environments
      oResources -> oReq
      oReq -> so
      environments -> demo
      environments -> prod
      so -> soValues
      so -> soReq
      so -> charts
      charts -> mariadb
      mariadb -> mdValues
   }

The top level onap/values.yaml file contains the values required to be set
before deploying ONAP. Here are the contents of this file:

.. include:: ../kubernetes/onap/values.yaml
   :code: yaml

One may wish to create a value file that is specific to a given deployment
such that it can be differentiated from other deployments. For example, an
onap-development.yaml file may create a minimal environment for development
while onap-production.yaml might describe a production deployment that
operates independently of the developer version.

For example, if the production OpenStack instance was different from a
developer's instance, the onap-production.yaml file may contain a different
value for the vnfDeployment/openstack/oam_network_cidr key as shown below.

.. code-block:: yaml

  nsPrefix: onap
  nodePortPrefix: 302
  apps: consul msb mso message-router sdnc vid robot portal policy appc aai sdc dcaegen2 log cli multicloud clamp vnfsdk aaf kube2msb
  dataRootDir: /dockerdata-nfs

  # docker repositories
  repository:
    onap: nexus3.onap.org:10001
    oom: oomk8s
    aai: aaionap
    filebeat: docker.elastic.co

  image:
    pullPolicy: Never

  # vnf deployment environment
  vnfDeployment:
    openstack:
      ubuntu_14_image: "Ubuntu_14.04.5_LTS"
      public_net_id: "e8f51956-00dd-4425-af36-045716781ffc"
      oam_network_id: "d4769dfb-c9e4-4f72-b3d6-1d18f4ac4ee6"
      oam_subnet_id: "191f7580-acf6-4c2b-8ec0-ba7d99b3bc4e"
      oam_network_cidr: "192.168.30.0/24"
  <...>

To deploy ONAP with this environment file, enter::

  > helm deploy local/onap -n onap -f onap/resources/environments/onap-production.yaml --set global.masterPassword=password

.. include:: environments_onap_demo.yaml
   :code: yaml

When deploying all of ONAP, a requirements.yaml file controls which ONAP
components are included and at which version. Here is an excerpt of this
file:

.. code-block:: yaml

  # Referencing a named repo called 'local'.
  # Can add this repo by running commands like:
  # > helm serve
  # > helm repo add local http://127.0.0.1:8879
  dependencies:
  <...>
    - name: so
      version: ~8.0.0
      repository: '@local'
      condition: so.enabled
  <...>

The ~ operator in the `so` version value indicates that the latest "8.0.x"
version of `so` shall be used, thus allowing the chart to pick up minor
upgrades that don't impact the so API; hence, version 8.0.1 will be installed
in this case.
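If a deployment must not silently pick up those minor upgrades, the dependency
can instead be pinned to a single chart version. A minimal sketch of such a
pin (the exact version number shown is illustrative):

.. code-block:: yaml

  dependencies:
    - name: so
      version: 8.0.0        # exact pin: no automatic 8.0.x upgrades are pulled in
      repository: '@local'
      condition: so.enabled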
The onap/resources/environment/dev.yaml file (see the excerpt below) enables
fine-grained control over which components are included as part of this
deployment. By changing the `so` line of this file to `enabled: false` the
`so` component will not be deployed. If this change is part of an upgrade the
existing `so` component will be shut down. Other `so` parameters and even `so`
child values can be modified; for example the `so` `liveness` probe could be
disabled (which is not recommended as this change would disable auto-healing
of `so`).

.. code-block:: yaml

  #################################################################
  # Global configuration overrides.
  #
  # These overrides will affect all helm charts (ie. applications)
  # that are listed below and are 'enabled'.
  #################################################################
  global:
  <...>

  #################################################################
  # Enable/disable and configure helm charts (ie. applications)
  # to customize the ONAP deployment.
  #################################################################
  aaf:
    enabled: false
  <...>
  so: # Service Orchestrator
    enabled: true

    replicaCount: 1

    liveness:
      # necessary to disable liveness probe when setting breakpoints
      # in debugger so K8s doesn't restart unresponsive container
      enabled: true
  <...>

Accessing the ONAP Portal using OOM and a Kubernetes Cluster
------------------------------------------------------------

The ONAP deployment created by OOM operates in a private IP network that isn't
publicly accessible (i.e. OpenStack VMs with private internal network) which
blocks access to the ONAP Portal. To enable direct access to this Portal from
a user's own environment (a laptop etc.) the portal application's port 8989 is
exposed through a `Kubernetes LoadBalancer`_ object.

Typically, to be able to access the Kubernetes nodes publicly a public address
is assigned. In OpenStack this is a floating IP address.

When the `portal-app` chart is deployed a Kubernetes service is created that
instantiates a load balancer. The LB chooses the private interface of one of
the nodes as in the example below (10.0.0.4 is private to the K8s cluster
only). Then to be able to access the portal on port 8989 from outside the K8s
& OpenStack environment, the user needs to assign/get the floating IP address
that corresponds to the private IP as follows::

  > kubectl -n onap get services | grep "portal-app"
  portal-app  LoadBalancer  10.43.142.201  10.0.0.4  8989:30215/TCP,8006:30213/TCP,8010:30214/TCP  1d  app=portal-app,release=dev

In this example, use the 10.0.0.4 private address as a key to find the
corresponding public address, which in this example is 10.12.6.155. If you're
using OpenStack you'll do the lookup with the horizon GUI or the OpenStack CLI
for your tenant (openstack server list). That IP is then used in your
`/etc/hosts` to map the fixed DNS aliases required by the ONAP Portal as shown
below::

  10.12.6.155 portal.api.simpledemo.onap.org
  10.12.6.155 vid.api.simpledemo.onap.org
  10.12.6.155 sdc.api.fe.simpledemo.onap.org
  10.12.6.155 sdc.workflow.plugin.simpledemo.onap.org
  10.12.6.155 sdc.dcae.plugin.simpledemo.onap.org
  10.12.6.155 portal-sdk.simpledemo.onap.org
  10.12.6.155 policy.api.simpledemo.onap.org
  10.12.6.155 aai.api.sparky.simpledemo.onap.org
  10.12.6.155 cli.api.simpledemo.onap.org
  10.12.6.155 msb.api.discovery.simpledemo.onap.org
  10.12.6.155 msb.api.simpledemo.onap.org
  10.12.6.155 clamp.api.simpledemo.onap.org
  10.12.6.155 so.api.simpledemo.onap.org

Ensure you've disabled any proxy settings in the browser you are using to
access the portal, and then simply access the SSL-encrypted URL:
``https://portal.api.simpledemo.onap.org:30225/ONAPPORTAL/login.htm``

.. note::
   Using the HTTPS based Portal URL, the browser needs to be configured to
   accept insecure credentials. Additionally, when opening an application
   inside the Portal, the browser might block the content, which requires
   disabling the blocking and reloading the page.
.. note::
   Besides the ONAP Portal, the components can deliver additional user
   interfaces; please check the component-specific documentation.

.. note::
   | Alternatives Considered:

   - Kubernetes port forwarding was considered but discarded as it would
     require the end user to run a script that opens up port forwarding
     tunnels to each of the pods that provides a portal application widget.
   - Reverting to a VNC server similar to what was deployed in the Amsterdam
     release was also considered, but there were many issues with resolution,
     lack of volume mount, /etc/hosts dynamic update, and file upload that
     were a tall order to solve in time for the Beijing release.

   Observations:

   - If you are not using floating IPs in your Kubernetes deployment and are
     directly attaching a public IP address (i.e. by using your public
     provider network) to your K8S Node VMs' network interface, then the
     output of 'kubectl -n onap get services | grep "portal-app"' will show
     your public IP instead of the private network's IP. Therefore, you can
     grab this public IP directly (as compared to trying to find the floating
     IP first) and map this IP in /etc/hosts.

.. figure:: oomLogoV2-Monitor.png
   :align: right

Monitor
=======

All highly available systems include at least one facility to monitor the
health of components within the system. Such health monitors are often used as
inputs to distributed coordination systems (such as etcd, Zookeeper, or
Consul) and monitoring systems (such as Nagios or Zabbix). OOM provides two
mechanisms to monitor the real-time health of an ONAP deployment:

- a Consul GUI for a human operator or downstream monitoring systems and
  Kubernetes liveness probes that enable automatic healing of failed
  containers, and
- a set of liveness probes which feed into the Kubernetes manager which are
  described in the Heal section.

Within ONAP, Consul is the monitoring system of choice and is deployed by OOM
in two parts:

- a three-way, centralized Consul server cluster is deployed as a highly
  available monitor of all of the ONAP components, and
- a number of Consul agents.

The Consul server provides a user interface that allows a user to graphically
view the current health status of all of the ONAP components for which agents
have been created - a sample from the ONAP Integration labs follows:

.. figure:: consulHealth.png
   :align: center

To see the real-time health of a deployment go to:
``http://<kubernetes IP>:30270/ui/`` where a GUI much like the following will
be found:

.. note::
   If the Consul GUI is not accessible, you can refer to this `kubectl
   port-forward <https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/>`_
   method to access the application.

.. figure:: oomLogoV2-Heal.png
   :align: right

Heal
====

The ONAP deployment is defined by Helm charts as mentioned earlier. These Helm
charts are also used to implement automatic recoverability of ONAP components
when individual components fail. Once ONAP is deployed, a "liveness" probe
starts checking the health of the components after a specified startup time.
Should a liveness probe indicate a failed container it will be terminated and
a replacement will be started in its place - containers are ephemeral.
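As an illustration of the mechanism, a minimal Kubernetes liveness probe
definition might look like the following (the endpoint, port and timing values
are illustrative only; the OOM charts typically expose such settings through
each component's values.yaml, e.g. the `liveness` block shown earlier):

.. code-block:: yaml

  livenessProbe:
    httpGet:
      path: /manage/health      # hypothetical health endpoint of the container
      port: 8080
    initialDelaySeconds: 120    # the "specified startup time" before checks begin
    periodSeconds: 10           # check every 10 seconds thereafter
    failureThreshold: 3         # restart the container after 3 consecutive failures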
Should the deployment specification indicate that there are one or more
dependencies to this container or component (for example a dependency on a
database) the dependency will be satisfied before the replacement
container/component is started. This mechanism ensures that, after a failure,
all of the ONAP components restart successfully.

To test healing, the following command can be used to delete a pod::

  > kubectl delete pod [pod name] -n [pod namespace]

One could then use the following command to monitor the pods and observe the
pod being terminated and the service being automatically healed with the
creation of a replacement pod::

  > kubectl get pods --all-namespaces -o=wide

.. figure:: oomLogoV2-Scale.png
   :align: right

Scale
=====

Many of the ONAP components are horizontally scalable which allows them to
adapt to expected offered load. During the Beijing release scaling is static,
that is during deployment or upgrade a cluster size is defined and this
cluster will be maintained even in the presence of faults. The parameter that
controls the cluster size of a given component is found in the values.yaml
file for that component. Here is an excerpt that shows this parameter:

.. code-block:: yaml

  # default number of instances
  replicaCount: 1

In order to change the size of a cluster, an operator could use a helm upgrade
(described in detail in the next section) as follows::

  > helm upgrade [RELEASE] [CHART] [flags]

The RELEASE argument can be obtained from the following command::

  > helm list

Below is an example of the output::

  > helm list
  NAME                  REVISION  UPDATED                   STATUS    CHART                 APP VERSION  NAMESPACE
  dev                   1         Wed Oct 14 13:49:52 2020  DEPLOYED  onap-8.0.0            Honolulu     onap
  dev-cassandra         5         Thu Oct 15 14:45:34 2020  DEPLOYED  cassandra-8.0.0                    onap
  dev-contrib           1         Wed Oct 14 13:52:53 2020  DEPLOYED  contrib-8.0.0                      onap
  dev-mariadb-galera    1         Wed Oct 14 13:55:56 2020  DEPLOYED  mariadb-galera-8.0.0               onap

Here the NAME column shows the release name. In this case we want to try the
scale operation on cassandra, so the release name is dev-cassandra.

Now we need to obtain the chart name for cassandra. Use the command below to
get the chart name::

  > helm search cassandra

Below is an example of the output::

  > helm search cassandra
  NAME                    CHART VERSION  APP VERSION  DESCRIPTION
  local/cassandra         8.0.0                       ONAP cassandra
  local/portal-cassandra  8.0.0                       Portal cassandra
  local/aaf-cass          8.0.0                       ONAP AAF cassandra
  local/sdc-cs            8.0.0                       ONAP Service Design and Creation Cassandra

Here the NAME column shows the chart name. As we want to try the scale
operation for cassandra, the corresponding chart name is local/cassandra.

Now that we have both of the command's arguments, we can perform the scale
operation for cassandra as follows::

  > helm upgrade dev-cassandra local/cassandra --set replicaCount=3

Using this command we can scale the cassandra db instances up or down.

The ONAP components use Kubernetes provided facilities to build clustered,
highly available systems including: Services_ with load-balancers,
ReplicaSet_, and StatefulSet_. Some of the open-source projects used by the
ONAP components directly support clustered configurations, for example ODL and
MariaDB Galera.

The Kubernetes Services_ abstraction is used to provide a consistent access
point for each of the ONAP components, independent of the pod or container
architecture of that component.
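A minimal sketch of such a Service definition follows (the name, label and
port numbers are illustrative, not taken from the actual charts); any pod
carrying the selector label backs the service, so pods can come and go without
the access point changing:

.. code-block:: yaml

  apiVersion: v1
  kind: Service
  metadata:
    name: sdnc              # stable, well-known name other components address
  spec:
    selector:
      app: sdnc             # any pod with this label is a backend of the service
    ports:
      - name: http
        port: 8282          # port exposed by the service
        targetPort: 8181    # port on the backing pods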
For example, SDN-C uses OpenDaylight clustering with a default cluster size of
three, but uses a Kubernetes service to abstract this cluster from the other
ONAP components, such that the cluster can change size (i.e. change the number
of pods) and this change is isolated from the other ONAP components by the
load-balancer implemented in the ODL service abstraction.

A ReplicaSet_ is a construct that is used to describe the desired state of the
cluster. For example 'replicas: 3' indicates to Kubernetes that a cluster of 3
instances is the desired state. Should one of the members of the cluster fail,
a new member will be automatically started to replace it.

Some of the ONAP components may need a more deterministic deployment; for
example to enable intra-cluster communication. For these applications the
component can be deployed as a Kubernetes StatefulSet_ which will maintain a
persistent identifier for the pods and thus a stable network id for the pods.
For example: the pod names might be web-0, web-1, web-{N-1} for N 'web' pods
with corresponding DNS entries such that intra service communication is simple
even if the pods are physically distributed across multiple nodes. An example
of how these capabilities can be used is described in the Running Consul on
Kubernetes tutorial.

.. figure:: oomLogoV2-Upgrade.png
   :align: right

Upgrade
=======

Helm has built-in capabilities to enable the upgrade of pods without causing a
loss of the service being provided by that pod or pods (if configured as a
cluster). As described in the OOM Developer's Guide, ONAP components provide
an abstracted 'service' end point with the pods or containers providing this
service hidden from other ONAP components by a load balancer. This capability
is used during upgrades to allow a pod with a new image to be added to the
service before removing the pod with the old image. This 'make before break'
capability ensures minimal downtime.

Prior to doing an upgrade, determine the status of the deployed charts::

  > helm list
  NAME  REVISION  UPDATED                  STATUS    CHART     NAMESPACE
  so    1         Mon Feb 5 10:05:22 2020  DEPLOYED  so-8.0.0  onap

When upgrading a cluster, a parameter controls the minimum size of the cluster
during the upgrade while another parameter controls the maximum number of
nodes in the cluster. For example, SDNC configured as a 3-way ODL cluster
might require that during the upgrade no fewer than 2 pods are available at
all times to provide service, while no more than 5 pods are ever deployed
across the two versions at any one time to avoid depleting the cluster of
resources. In this scenario, the SDNC cluster would start with 3 old pods,
then Kubernetes may add a new pod (3 old, 1 new), delete one old (2 old, 1
new), add two new pods (2 old, 3 new) and finally delete the 2 old pods (3
new). During this sequence the constraints of the minimum of two pods and
maximum of five would be maintained while providing service the whole time.
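These minimum/maximum constraints map onto the rolling-update parameters of
the underlying Kubernetes workload. A minimal sketch, assuming a
Deployment-style rollout (the actual SDNC chart may use a different controller
and different key names):

.. code-block:: yaml

  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # never fewer than 2 of the 3 pods serving traffic
      maxSurge: 2         # never more than 5 pods (3 + 2) across the two versions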
Initiation of an upgrade is triggered by changes in the Helm charts. For
example, if the image specified for one of the pods in the SDNC deployment
specification were to change (i.e. point to a new Docker image in the nexus3
repository - commonly through the change of a deployment variable), the
sequence of events described in the previous paragraph would be initiated.

For example, to upgrade a container by changing configuration, specifically an
environment value::

  > helm upgrade so onap/so --version 8.0.1 --set enableDebug=true

Issuing this command will result in the appropriate container being stopped by
Kubernetes and replaced with a new container with the new environment value.

To upgrade a component to a new version with a new configuration file enter::

  > helm upgrade so onap/so --version 8.0.1 -f environments/demo.yaml

To fetch release history enter::

  > helm history so
  REVISION  UPDATED                  STATUS      CHART     DESCRIPTION
  1         Mon Feb 5 10:05:22 2020  SUPERSEDED  so-8.0.0  Install complete
  2         Mon Feb 5 10:10:55 2020  DEPLOYED    so-8.0.1  Upgrade complete

Unfortunately, not all upgrades are successful. In recognition of this the
lineup of pods within an ONAP deployment is tagged such that an administrator
may force the ONAP deployment back to the previously tagged configuration or
to a specific configuration, say to jump back two steps if an incompatibility
between two ONAP components is discovered after the two individual upgrades
succeeded.

This rollback functionality gives the administrator confidence that in the
unfortunate circumstance of a failed upgrade the system can be rapidly brought
back to a known good state. This process of rolling upgrades while under
service is illustrated in this short YouTube video showing a Zero Downtime
Upgrade of a web application while under a 10 million transaction per second
load.

For example, to roll back to the previous system revision enter::

  > helm rollback so 1

  > helm history so
  REVISION  UPDATED                  STATUS      CHART     DESCRIPTION
  1         Mon Feb 5 10:05:22 2020  SUPERSEDED  so-8.0.0  Install complete
  2         Mon Feb 5 10:10:55 2020  SUPERSEDED  so-8.0.1  Upgrade complete
  3         Mon Feb 5 10:14:32 2020  DEPLOYED    so-8.0.0  Rollback to 1

.. note::
   The description field can be overridden to document actions taken or
   include tracking numbers.

Many of the ONAP components contain their own databases which are used to
record configuration or state information. The schemas of these databases may
change from version to version in such a way that data stored within the
database needs to be migrated between versions. If such a migration script is
available it can be invoked during the upgrade (or rollback) by Container
Lifecycle Hooks. Two such hooks are available, PostStart and PreStop, which
containers can access by registering a handler against one or both. Note that
it is the responsibility of the ONAP component owners to implement the hook
handlers - which could be a shell script or a call to a specific container
HTTP endpoint - following the guidelines listed on the Kubernetes site.
Lifecycle hooks are not restricted to database migration or even upgrades but
can be used anywhere specific operations need to be taken during lifecycle
operations.
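A minimal sketch of how such handlers are registered on a container follows
(the migration script path and shutdown endpoint are hypothetical; each
component owner decides what the handlers actually do):

.. code-block:: yaml

  lifecycle:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "/opt/app/bin/db-migrate.sh"]  # hypothetical migration script
    preStop:
      httpGet:
        path: /prepare-shutdown    # hypothetical endpoint exposed by the container
        port: 8080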
OOM uses the Helm K8S package manager to deploy ONAP components. Each
component is arranged in a packaging format called a chart - a collection of
files that describe a set of k8s resources. Helm allows for rolling upgrades
of the deployed ONAP components. To upgrade a component's Helm release you
will need an updated Helm chart. The chart might have modified, deleted or
added values, deployment yamls, and more.

To get the release name use::

  > helm ls

To easily upgrade the release use::

  > helm upgrade [RELEASE] [CHART]

To roll back to a previous release version use::

  > helm rollback [flags] [RELEASE] [REVISION]

For example, to upgrade the onap-so helm release to the latest SO container
release v1.1.2:

- Edit the so values.yaml which is part of the chart
- Change "so: nexus3.onap.org:10001/openecomp/so:v1.1.1" to
  "so: nexus3.onap.org:10001/openecomp/so:v1.1.2"
- From the chart location run::

    > helm upgrade onap-so

The previous so pod will be terminated and a new so pod with an updated so
container will be created.

.. figure:: oomLogoV2-Delete.png
   :align: right

Delete
======

Existing deployments can be partially or fully removed once they are no longer
needed. To minimize errors it is recommended that before deleting components
from a running deployment the operator perform a 'dry-run' to display exactly
what will happen with a given command prior to actually deleting anything.
For example::

  > helm undeploy onap --dry-run

will display the outcome of deleting the 'onap' release from the deployment.
To completely delete a release and remove it from the internal store enter::

  > helm undeploy onap

Once the undeploy is complete, delete the namespace as well using the
following command::

  > kubectl delete namespace <name of namespace>

.. note::
   You need to provide the namespace name which you used during deployment.
   For example::

     > kubectl delete namespace onap

One can also remove individual components from a deployment by changing the
ONAP configuration values. For example, to remove `so` from a running
deployment enter::

  > helm undeploy onap-so

will remove `so` as the configuration indicates it's no longer part of the
deployment. This might be useful if one wanted to replace just `so` by
installing a custom version.
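As a sketch of the configuration-driven alternative mentioned above (assuming
the override is applied with the same ``helm deploy ... -f <overrides file>``
mechanism shown earlier), the component can simply be disabled in an overrides
file:

.. code-block:: yaml

  # illustrative overrides file: drop a single component from the deployment
  so:
    enabled: false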