Diffstat (limited to 'docs/Chapter8')
-rw-r--r--  docs/Chapter8/OPNFV-Verified-Badging.rst                                              812
-rw-r--r--  docs/Chapter8/VES_Registration_3_2.rst (renamed from docs/Chapter8/VES_Registraion_3_2.rst)  984
-rw-r--r--  docs/Chapter8/index.rst                                                                 3
-rw-r--r--  docs/Chapter8/input-VNF-API-fail-single_module.zip                                    bin 0 -> 2944 bytes
-rw-r--r--  docs/Chapter8/input-VNF-API-pass-single_module.zip                                    bin 0 -> 3527 bytes
-rw-r--r--  docs/Chapter8/tosca_vnf_test_environment.png                                          bin 0 -> 101795 bytes
-rw-r--r--  docs/Chapter8/tosca_vnf_test_flow.png                                                 bin 0 -> 40614 bytes
7 files changed, 1294 insertions, 505 deletions
diff --git a/docs/Chapter8/OPNFV-Verified-Badging.rst b/docs/Chapter8/OPNFV-Verified-Badging.rst
new file mode 100644
index 0000000..946bc6b
--- /dev/null
+++ b/docs/Chapter8/OPNFV-Verified-Badging.rst
@@ -0,0 +1,812 @@
+.. Modifications Copyright © 2017-2018 AT&T Intellectual Property.
+
+.. Licensed under the Creative Commons License, Attribution 4.0 Intl.
+ (the "License"); you may not use this documentation except in compliance
+ with the License. You may obtain a copy of the License at
+
+.. https://creativecommons.org/licenses/by/4.0/
+
+.. Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+OPNFV Verified Program Badging for VNFs
+----------------------------------------
+
+OPNFV Verified Program Overview
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The `OPNFV Verified Program (OVP) <https://www.lfnetworking.org/OVP/>`__ is
+
+ an open source, community-led compliance and verification program to
+ demonstrate the readiness and availability of commercial NFV products and
+ services, including NFVI and VNFs, using OPNFV and ONAP components.
+
+ -- Source: OVP
+
+The program currently offers verification badges for NFVI, VNFs, and Labs. The
+VNF badge aims to verify that a given VNF is compatible and interoperable with
+a given release of ONAP and an ONAP-compatible NFVI.
+
+Relationship to ONAP
+^^^^^^^^^^^^^^^^^^^^
+
+The ONAP VNF Requirements project defines the mandatory and recommended
+requirements for a VNF to be successfully orchestrated by ONAP. At this time,
+the OPNFV VNF badge automates the verification of a subset of these
+requirements, with plans to expand the scope of verified requirements over
+time.
+
+Currently, the `OPNFV VNF badge <https://vnf-verified.lfnetworking.org/#/>`__
+covers the following:
+
+* Compliance checks of the contents of a VNF onboarding package for :ref:`Heat-based <heat_requirements>`
+ or :ref:`TOSCA-based <tosca_requirements>` VNFs.
+
+  * Validation of the packages is performed by the ONAP VVP and
+    ONAP VNFSDK projects, respectively.
+
+* Validation that the package can be onboarded, modeled, configured, deployed,
+  and instantiated on an ONAP-compatible NFVI (currently OpenStack).
+
+
+How to Receive an OPNFV VNF Badge
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The ONAP platform includes a set of automated tests that can be set up and
+executed for a given VNF to verify its compliance with the in-scope VNF
+Requirements. This test suite produces a result file that is suitable
+for submission to the OPNFV Verified Program. Please refer to the
+`OPNFV VNF Portal <https://vnf-verified.lfnetworking.org/#/>`__ for more details
+on registering for the program and submitting your results.
+
+The following sections describe how to set up and execute the tests.
+
+Executing the OPNFV Verified Compliance and Validation Tests
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The instructions related to setting up and executing the tests vary based on
+whether the VNF is modeled in OpenStack Heat or in TOSCA. Please refer
+to the appropriate section based on your VNF.
+
+* :ref:`heat_vnf_validation`
+* :ref:`tosca_vnf_validation`
+
+
+.. _heat_vnf_validation:
+
+Heat-based VNF Validation
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This section describes how to set up and execute the validation tests against
+a VNF that is described using OpenStack Heat.
+
+Prerequisites
++++++++++++++
+
+- ONAP El Alto Release deployed via :doc:`OOM <../../../../oom.git/docs/oom_quickstart_guide>`
+- An OpenStack deployment is available and provisioned as ONAP's Cloud Site
+- `kubectl <https://kubernetes.io/docs/tasks/tools/install-kubectl/>`__ is
+ installed on the system used to start the testing
+- bash
+- VNF Heat Templates
+- Preload JSON files
+
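+As a quick sanity check, you can confirm that ``kubectl`` can reach the ONAP
+deployment and that the Robot pod is running (a minimal sketch, assuming the
+default ``onap`` namespace):
+
+.. code-block:: bash
+
+    kubectl get pods -n onap | grep robot
+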
+After deploying ONAP, you need to configure ONAP with:
+
+- A cloud owner
+- A cloud region
+- A subscriber
+- A service type
+- A project name
+- An owning entity
+- A platform
+- A line of business
+- A cloud site
+
+If you're not familiar with how to configure ONAP, there are guides that use
+:doc:`Robot <../../../../integration.git/docs/docs_robot>` or
+`REST API calls <https://wiki.onap.org/pages/viewpage.action?pageId=25431491>`__
+to handle the setup (including adding a new OpenStack site to ONAP).
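+
+As an illustration only, adding a cloud region through the REST API boils down
+to a call of the following shape against AAI. The node port, credentials, API
+version, header values, and payload file shown here are assumptions; take the
+exact values and payload fields from the guide linked above.
+
+.. code-block:: bash
+
+    curl -k -u AAI:AAI -X PUT \
+         -H "Content-Type: application/json" \
+         -H "X-FromAppId: ovp-setup" \
+         -H "X-TransactionId: ovp-setup-1" \
+         -d @cloud-region.json \
+         "https://<onap-k8s-ip>:30233/aai/v16/cloud-infrastructure/cloud-regions/cloud-region/<cloud-owner>/<cloud-region-id>"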
+
+Validation Setup
+++++++++++++++++
+
+On your local machine, or the system from which you will run the tests, you will need to clone the
+ONAP OOM project repo:
+
+.. code-block:: bash
+
+ git clone --branch 5.0.1-ONAP ssh://<username>@gerrit.onap.org:29418/oom --recurse-submodules
+
+VNF Preparation
++++++++++++++++
+
+The VNF lifecycle validation test suite requires the VNF to be packaged into a
+specific directory hierarchy, shown below.
+
+.. code-block:: text
+
+ vnf_folder
+ ├── /templates
+ | └── base.yaml
+ | └── base.env
+ | └── incremental_0.yaml
+ | └── incremental_0.env
+ | └── ...
+ ├── /preloads
+ | └── base_preload.json
+ | └── incremental_0_preload.json
+ | └── ...
+ └── vnf-details.json
+
+- The name for ``vnf_folder`` is free-form, and can be located anywhere on your
+ computer. The path to this folder will be passed to the test suite as an
+ argument.
+- ``/templates`` should contain your VVP-compliant VNF heat templates.
+- ``/preloads`` should contain a preload file for each VNF module.
+
+ - For a VNF-API preload: ``vnf-name``, ``vnf-type``, ``generic-vnf-type``,
+ and ``generic-vnf-name`` should be empty strings.
+ - For a GR-API preload: ``vnf-name``, ``vnf-type``, ``vf-module-type``,
+ and ``vf-module-name`` should be empty strings.
+ - This information will be populated at runtime by the test suite.
+
+- ``vnf-details`` should be a JSON file with the information that will be used
+ by ONAP to instantiate the VNF. The structure of ``vnf-details`` is shown below.
+- The VNF disk image must be uploaded and available in the OpenStack project
+  being managed by ONAP.
+- ``modules`` must contain an entry for each module of the VNF. Only one module
+ can be a base module.
+- ``api_type`` should match the format of the preloads (``vnf_api``
+ or ``gr_api``) that are provided in the package.
+- The other information should match what was used to configure ONAP during the
+  prerequisites section of this guide.
+
+.. code-block:: json
+
+ {
+ "vnf_name": "The Vnf Name",
+ "description": "Description of the VNF",
+ "modules": [
+ {
+ "filename": "base.yaml",
+ "isBase": "true",
+ "preload": "base_preload.json"
+ },
+ {
+ "filename": "incremental_0.yaml",
+ "isBase": "false",
+ "preload": "incremental_0.json"
+        }
+ ],
+ "api_type": "[gr_api] or [vnf_api]",
+ "subscriber": "<subscriber name>",
+ "service_type": "<service type>",
+ "tenant_name": "<name of tenant>",
+ "region_id": "<name of region>",
+ "cloud_owner": "<name of cloud owner>",
+ "project_name": "<name of project>",
+ "owning_entity": "<name of owning entity>",
+ "platform": "<name of platform>",
+ "line_of_business": "<name of line of business>",
+ "os_password": "<openstack password>"
+ }
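+
+Before starting a run, a quick sanity check (a sketch, assuming ``jq`` is
+installed and ``vnf_folder`` is laid out as shown above) confirms that
+``vnf-details.json`` and the preload files are well-formed JSON:
+
+.. code-block:: bash
+
+    jq empty vnf_folder/vnf-details.json
+    for f in vnf_folder/preloads/*.json; do jq empty "$f"; done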
+
+Running the Heat VNF Test
++++++++++++++++++++++++++
+
+The ONAP OOM Robot framework runs the test, using ``kubectl`` to manage the
+execution. The framework copies your VNF template files into the Robot
+container that executes the test.
+
+.. code-block:: bash
+
+    $ cd oom/kubernetes/robot
+    $ ./instantiate-k8s.sh --help
+ ./instantiate-k8s.sh [options]
+
+ required:
+ -n, --namespace <namespace> namespace that robot pod is running under.
+ -f, --folder <folder> path to folder containing heat templates, preloads, and vnf-details.json.
+
+ additional options:
+ -p, --poll some cloud environments (like azure) have a short time out value when executing
+ kubectl. If your shell exits before the test suite finishes, using this option
+ will poll the test suite logs every 30 seconds until the test finishes.
+ -t, --tag <tag> robot testcase tag to execute (default is instantiate_vnf).
+
+ This script executes the VNF instantiation robot test suite.
+ - It copies the VNF folder to the robot container that is part of the ONAP deployment.
+ - It models, distributes, and instantiates a heat-based VNF.
+ - It copies the logs to an output directory, and creates a tarball for upload to the OVP portal.
+
+
+**Sample execution:**
+
+.. code-block:: bash
+
+ $ ./instantiate-k8s.sh --namespace onap --folder /tmp/vnf-instantiation/examples/VNF_API/pass/multi_module/ --poll
+ ...
+ ...
+ ...
+ ...
+ ------------------------------------------------------------------------------
+ test suites.Vnf Instantiation :: The main driver for instantiating ... | PASS |
+ 1 critical test, 1 passed, 0 failed
+ 1 test total, 1 passed, 0 failed
+ ==============================================================================
+ test suites | PASS |
+ 1 critical test, 1 passed, 0 failed
+ 1 test total, 1 passed, 0 failed
+ ==============================================================================
+ Output: /share/logs/0003_ete_instantiate_vnf/output.xml
+ + set +x
+ test suite has finished
+ Copying Results from pod...
+ /tmp/vnf-instantiation /tmp/vnf-instantiation
+ a log.html
+ a results.json
+ a stack_report.json
+ a validation-scripts.json
+ /tmp/vnf-instantiation
+ VNF test results: /tmp/vnfdata.46749/vnf_heat_results.tar.gz
+
+The test suite takes about 10-15 minutes for a simple VNF, and will take longer
+for a more complicated VNF.
+
+Reporting Results
++++++++++++++++++
+
+Once the test suite is finished, it will create a directory and tarball in
+``/tmp`` (the names of the directory and file are shown at the end of the stdout
+of the script). There will be a ``results.json`` file in that directory
+that has the ultimate outcome of the test, in the structure shown below.
+
+**Log Files**
+
+The output tar file contains four log files:
+
+- ``results.json``: This is the high-level results file covering all of the
+  test steps, and is consumed by the OVP portal.
+- ``report.json``: This is the output of the VVP validation scripts.
+- ``stack_report.json``: This is the output from querying OpenStack to validate
+ the Heat modules.
+- ``log.html``: This is the Robot test log, and contains each execution step of
+ the test case.
+
+If the result is ``"PASS"``, that means the test suite was successful and the
+tarball is ready for submission to the OVP portal.
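+
+Before uploading, you can list the archive contents as a final check (the path
+below comes from the sample run above; yours will differ):
+
+.. code-block:: bash
+
+    tar -tzf /tmp/vnfdata.46749/vnf_heat_results.tar.gz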
+
+**results.json**
+
+.. code-block:: json
+
+ {
+ "vnf_checksum": "afc57604a3b3b7401d5b8648328807b594d7711355a2315095ac57db4c334a50",
+ "build_tag": "vnf-validation-7055d30b-9a2e-4ca2-9409-499131cc86db",
+ "version": "2019.12",
+ "test_date": "2019-09-04 17:50:10.575",
+ "duration": 437.002,
+ "vnf_type": "heat",
+ "testcases_list": [
+ {
+ "mandatory": "true",
+ "name": "onap-vvp.validate.heat",
+ "result": "PASS",
+ "objective": "onap heat template validation",
+ "sub_testcase": [],
+ "portal_key_file": "report.json"
+ },
+ {
+ "mandatory": "true",
+ "name": "onap-vvp.lifecycle_validate.heat",
+ "result": "PASS",
+ "objective": "onap vnf lifecycle validation",
+ "sub_testcase": [
+ {
+ "name": "model-and-distribute",
+ "result": "PASS"
+ },
+ {
+ "name": "instantiation",
+ "result": "PASS"
+ }
+ ],
+ "portal_key_file": "log.html"
+ },
+ {
+ "mandatory": "true",
+ "name": "stack_validation",
+ "result": "PASS",
+ "objective": "onap vnf openstack validation",
+ "sub_testcase": [],
+ "portal_key_file": "stack_report.json"
+ }
+ ]
+ }
+
+Examples
+++++++++
+
+Example VNFs and setup files have been created as a starting point for your
+validation.
+
+* :download:`Passing Single Volume VNF using VNF API <input-VNF-API-pass-single_module.zip>`
+* :download:`Failing Single Volume VNF using VNF API <input-VNF-API-fail-single_module.zip>`
+
+Additional Resources
+++++++++++++++++++++
+
+- `ONAP VVP Project <https://wiki.onap.org/display/DW/VNF+Validation+Program+Project>`_
+
+
+.. _tosca_vnf_validation:
+
+TOSCA-based VNF Testing
+~~~~~~~~~~~~~~~~~~~~~~~
+
+VNF Test Platform (VTP) provides a platform to onboard the different test cases
+required by OVP for TOSCA-based VNF testing, provided by the ONAP VNFSDK
+project. It generates the test case outputs that are uploaded to the OVP portal
+for VNF badging.
+
+TOSCA VNF Test Environment
+++++++++++++++++++++++++++
+
+As a prerequisite, it is assumed that working ONAP, vendor VNFM, and OpenStack
+cloud deployments are already available. The installation steps below set up
+the VTP components and CLI.
+
+.. image:: tosca_vnf_test_environment.png
+ :align: center
+
+Installation
+++++++++++++
+
+Clone the VNFSDK repo.
+
+.. code-block:: bash
+
+ git clone --branch elalto https://git.onap.org/vnfsdk/refrepo
+
+Install VTP using the script
+``refrepo/vnfmarket-be/deployment/install/vtp_install.sh``.
+
+Run the following steps, in sequence:
+
+- ``vtp_install.sh --download``: It will download all required artifacts into
+ ``/opt/vtp_stage``
+- ``vtp_install.sh --install``: It will install VTP (``/opt/controller``) and
+ CLI (``/opt/oclip``)
+- ``vtp_install.sh --start``: It will start VTP controller as Tomcat service
+ and CLI as ``oclip`` service
+- ``vtp_install.sh --verify``: It will verify the setup is done properly by
+ running some test cases.
+
+The last step (``--verify``) checks the health of the VTP components and of the
+TOSCA VNF compliance and validation test cases.
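+
+A combined run, assuming the repository was cloned to the current directory as
+shown above, looks like this (a sketch; adjust paths to your environment):
+
+.. code-block:: bash
+
+    cd refrepo/vnfmarket-be/deployment/install
+    ./vtp_install.sh --download   # download artifacts into /opt/vtp_stage
+    ./vtp_install.sh --install    # install VTP (/opt/controller) and CLI (/opt/oclip)
+    ./vtp_install.sh --start      # start the VTP controller and the oclip service
+    ./vtp_install.sh --verify     # run sample test cases to verify the setup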
+
+Check Available Test Cases
+++++++++++++++++++++++++++
+
+VTP supports checking the compliance of VNFs and PNFs against the ONAP VNF
+Requirements (VNFRQTS).
+
+To check:
+
+- Open a command console
+- Run the ``oclip`` command
+- It will provide a command prompt:
+
+``oclip:open-cli>``
+
+Now run the command below to check the supported compliance test cases for
+VNFRQTS.
+
+- ``csar-validate`` - Validates a given VNF CSAR against all configured
+  VNFRQTS requirements.
+- ``csar-validate-rxxx`` - Validates a given VNF CSAR against a single
+  VNFRQTS requirement number.
+
+.. code-block:: bash
+
+ oclip:open-cli>schema-list --product onap-dublin --service vnf-compliance
+ +--------------+----------------+------------------------+--------------+----------+------+
+ |product |service |command |ocs-version |enabled |rpc |
+ +--------------+----------------+------------------------+--------------+----------+------+
+ |onap-dublin |vnf-compliance |csar-validate-r10087 |1.0 |true | |
+ +--------------+----------------+------------------------+--------------+----------+------+
+ |onap-dublin |vnf-compliance |csar-validate |1.0 |true | |
+ +--------------+----------------+------------------------+--------------+----------+------+
+ |onap-dublin |vnf-compliance |csar-validate-r26885 |1.0 |true | |
+ +--------------+----------------+------------------------+--------------+----------+------+
+ |onap-dublin |vnf-compliance |csar-validate-r54356 |1.0 |true | |
+ ...
+
+To see the details of each VNFRQTS requirement, run the following:
+
+.. code-block:: bash
+
+ oclip:open-cli>use onap-dublin
+ oclip:onap-dublin>csar-validate-r54356 --help
+ usage: oclip csar-validate-r54356
+
+ Data types used by NFV node and is based on TOSCA/YAML constructs specified in draft GS NFV-SOL 001.
+ The node data definitions/attributes used in VNFD MUST comply.
+
+Now run the command below to check the supported validation test cases:
+
+.. code-block:: bash
+
+ oclip:onap-dublin>use open-cli
+ oclip:open-cli>schema-list --product onap-dublin --service vnf-validation
+ +--------------+----------------+----------------------+--------------+----------+------+
+ |product |service |command |ocs-version |enabled |rpc |
+ +--------------+----------------+----------------------+--------------+----------+------+
+ |onap-dublin |vnf-validation |vnf-tosca-provision |1.0 |true | |
+ +--------------+----------------+----------------------+--------------+----------+------+
+
+Configure ONAP with required VNFM and cloud details
++++++++++++++++++++++++++++++++++++++++++++++++++++
+
+**1. Set up the OCOMP profile onap-dublin**
+
+Run the following commands to configure the ONAP service URLs and credentials,
+which will be used by VTP while executing the test cases:
+
+.. code-block:: bash
+
+ oclip:open-cli>use onap-dublin
+ oclip:onap-dublin>profile onap-dublin
+ oclip:onap-dublin>set sdc.onboarding:host-url=http://159.138.8.8:30280
+ oclip:onap-dublin>set sdc.onboarding:host-username=cs0008
+ oclip:onap-dublin>set sdc.onboarding:host-password=demo123456!
+ oclip:onap-dublin>set sdc.catalog:host-url=http://159.138.8.8:30205
+ oclip:onap-dublin>set sdc.catalog:host-password=demo123456\!
+ oclip:onap-dublin>set sdc.catalog:host-username=cs0008
+ oclip:onap-dublin>set sdc.catalog:service-model-approve:host-username=gv0001
+ oclip:onap-dublin>set sdc.catalog:service-model-distribute:host-username=op0001
+ oclip:onap-dublin>set sdc.catalog:service-model-test-start:host-username=jm0007
+ oclip:onap-dublin>set sdc.catalog:service-model-test-accept:host-username=jm0007
+ oclip:onap-dublin>set sdc.catalog:service-model-add-artifact:host-username=ocomp
+ oclip:onap-dublin>set sdc.catalog:vf-model-add-artifact:host-username=ocomp
+ oclip:onap-dublin>set aai:host-url=https://159.138.8.8:30233
+ oclip:onap-dublin>set aai:host-username=AAI
+ oclip:onap-dublin>set aai:host-password=AAI
+ oclip:onap-dublin>set vfc:host-url=http://159.138.8.8:30280
+ oclip:onap-dublin>set multicloud:host-url=http://159.138.8.8:30280
+
+NOTE: Most of the above values will be the same for your deployment, except the
+IP address used in each URL, which should be your ONAP Kubernetes cluster IP.
+
+By default, the SDC onboarding service does not expose a node port that is
+reachable from outside the ONAP network. To enable external access, register
+the SDC onboarding service in MSB and use the MSB URL for
+``sdc.onboarding:host-url``.
+
+.. code-block:: bash
+
+ oclip:onap-dublin> microservice-create --service-name sdcob --service-version v1.0 --service-url /onboarding-api/v1.0 --path /onboarding-api/v1.0 --node-ip 172.16.1.0 --node-port 8081
+
+NOTE: To find the node-ip and node-port, use the following steps.
+
+Find the SDC onboarding service IP and port details as shown here:
+
+.. code-block:: bash
+
+ [root@onap-dublin-vfw-93996-50c1z ~]# kubectl get pods -n onap -o wide | grep sdc-onboarding-be
+ dev-sdc-sdc-onboarding-be-5564b877c8-vpwr5 2/2 Running 0 29d 172.16.1.0 192.168.2.163 <none> <none>
+ dev-sdc-sdc-onboarding-be-cassandra-init-mtvz6 0/1 Completed 0 29d 172.16.0.220 192.168.2.163 <none> <none>
+ [root@onap-dublin-vfw-93996-50c1z ~]#
+
+Note down the IP address for ``sdc-onboarding-be`` (here, 172.16.1.0).
+
+.. code-block:: bash
+
+ [root@onap-dublin-vfw-93996-50c1z ~]# kubectl get services -n onap -o wide | grep sdc-onboarding-be
+ sdc-onboarding-be ClusterIP 10.247.198.92 <none> 8445/TCP,8081/TCP 29d app=sdc-onboarding-be,release=dev-sdc
+ [root@onap-dublin-vfw-93996-50c1z ~]#
+
+Note down the ports for ``sdc-onboarding-be`` (here, 8445 and 8081).
+
+The IP addresses and ports of other services can be discovered in the same way,
+if they are not already known.
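+
+For example, to look up another service (a sketch; substitute the service name
+you need):
+
+.. code-block:: bash
+
+    kubectl get services -n onap -o wide | grep <service-name>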
+
+Verify these settings by typing ``set``:
+
+.. code-block:: bash
+
+ oclip:onap-dublin> set
+
+This profile is then used when running the test cases against the ONAP setup
+configured in it, for example: ``oclip --profile onap-dublin vnf-tosca-provision ...``
+
+**2. Set up the SDC consumer**
+
+SDC uses the consumer concept to configure the required VF model and service
+model artifacts. Run the following command to create a consumer named
+``ocomp``, which is already referenced in the onap-dublin profile created in
+the steps above.
+
+.. code-block:: bash
+
+ oclip --product onap-dublin --profile onap-dublin sdc-consumer-create --consumer-name ocomp
+
+NOTE: the ``oclip`` command can be used in scripting mode, as above, or in
+interactive mode, as in the earlier steps.
+
+**3. Update the cloud and VNFM driver details**
+
+In the configuration file ``/opt/oclip/conf/vnf-tosca-provision.json``, update
+the cloud and VNFM details:
+
+.. code-block:: json
+
+    {
+      "cloud": {
+ "identity-url": "http://10.12.11.1:5000/v3",
+ "username": "admin",
+ "password": "password",
+ "region": "RegionOVP",
+ "version": "ocata",
+ "tenant": "ocomp"
+ },
+ "vnfm":{
+ "hwvnfmdriver":{
+ "version": "v1.0",
+ "url": "http://159.138.8.8:38088",
+ "username": "admin",
+ "password": "xxxx"
+ },
+ "gvnfmdriver":{
+ "version": "v1.0",
+ "url": "http://159.138.8.8:30280"
+ }
+ }
+ }
+
+**4. Configure the desired VNFRQTS requirements (optional)**
+
+VTP allows you to configure the set of VNFRQTS requirements to be considered
+while running the VNF compliance test cases in the configuration file
+``/opt/oclip/conf/VNFRQTS.properties``.
+
+If the file does not exist, create it with the following entries:
+
+.. code-block:: bash
+
+ VNFRQTS.enabled=r02454,r04298,r07879,r09467,r13390,r23823,r26881,r27310,r35851,r40293,r43958,r66070,r77707,r77786,r87234,r10087,r21322,r26885,r40820,r35854,r65486,r17852,r46527,r15837,r54356,r67895,r95321,r32155,r01123,r51347,r787965,r130206
+ pnfreqs.enabled=r10087,r87234,r35854,r15837,r17852,r293901,r146092,r57019,r787965,r130206
+ # ignored all chef and ansible related tests
+ vnferrors.ignored=
+ pnferrors.ignored=
+
+Running the TOSCA VNF Test
+++++++++++++++++++++++++++
+
+Every test provided in VTP comes with guidelines on how to use it. On every
+test case execution, use the following additional arguments as required:
+
+- ``--product onap-dublin`` - Tells VTP to use the test cases written for the
+  onap-dublin version.
+- ``--profile onap-dublin`` - Tells VTP to use the profile settings provided by
+  the admin (optional).
+- ``--request-id`` - Lets VTP track the progress of the test case execution;
+  the user can query progress with this ID (optional).
+
+The final test case execution then looks as below. To find the details of the
+test case arguments, run the second command below.
+
+.. code-block:: bash
+
+ oclip --product onap-dublin --profile onap-dublin --request-id req-1 <test case name> <test case arguments>
+ oclip --product onap-dublin <test case name> --help
+
+Running TOSCA VNF Compliance Testing
+++++++++++++++++++++++++++++++++++++
+
+Run the compliance test as below with the given CSAR file:
+
+.. code-block:: bash
+
+ oclip --product onap-dublin csar-validate --csar <csar file complete path>
+
+It produces results in the format below:
+
+.. code-block:: json
+
+ {
+ "date": "Fri Sep 20 17:34:24 CST 2019",
+ "criteria": "PASS",
+ "contact": "ONAP VTP Team onap-discuss@lists.onap.org",
+ "results": [
+ {
+ "description": "V2.4.1 (2018-02)",
+ "passed": true,
+ "vnfreqName": "SOL004",
+ "errors": []
+ },
+ {
+ "description": "If the VNF or PNF CSAR Package utilizes Option 2 for package security, then the complete CSAR file MUST be digitally signed with the VNF or PNF provider private key. The VNF or PNF provider delivers one zip file consisting of the CSAR file, a signature file and a certificate file that includes the VNF or PNF provider public key. The certificate may also be included in the signature container, if the signature format allows that. The VNF or PNF provider creates a zip file consisting of the CSAR file with .csar extension, signature and certificate files. The signature and certificate files must be siblings of the CSAR file with extensions .cms and .cert respectively.\n",
+ "passed": true,
+ "vnfreqName": "r787965",
+ "errors": []
+ }
+ ],
+ "platform": "VNFSDK - VNF Test Platform (VTP) 1.0",
+ "vnf": {
+ "mode": "WITH_TOSCA_META_DIR",
+ "vendor": "ONAP",
+ "name": null,
+ "type": "TOSCA",
+ "version": null
+ }
+ }
+
+In case of errors, the errors section will contain a list of details, as shown
+in the sample further below. Each error block includes an error code and error
+details. The error code will be useful for providing a troubleshooting guide in
+the future. Note: to generate the test result in OVP archive format, it is
+recommended to run this compliance test with a ``--request-id``, similar to
+running the validation test.
+
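+A minimal sketch of such a run, combining the compliance command above with the
+``--request-id`` option (the request ID value is only an example):
+
+.. code-block:: bash
+
+    oclip --product onap-dublin --profile onap-dublin --request-id req-1 csar-validate --csar <csar file complete path>
+
+A sample errors section looks like this:
+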
+.. code-block:: json
+
+ [
+ {
+ "vnfreqNo": "R66070",
+ "code": "0x1000",
+ "message": "MissinEntry-Definitions file",
+ "lineNumber": -1
+ }
+ ]
+
+Running TOSCA VNF Validation Testing
+++++++++++++++++++++++++++++++++++++
+
+VTP provides the validation test case with the following modes:
+
+.. image:: tosca_vnf_test_flow.png
+ :align: center
+
+
+* **setup**: Creates the required Vendor, Service Subscription and VNF cloud
+  in ONAP
+* **standup**: From the given VSP CSAR, VNF CSAR and NS CSAR, creates the VF
+  Model, NS Model and NS service
+* **cleanup**: Removes the entries created during provisioning
+* **provision**: Runs setup -> standup
+* **validate**: Runs setup -> standup -> cleanup
+* **checkup**: Verifies that the automation is deployed properly
+
+For OVP badging, the validate mode is used as below:
+
+.. code-block:: bash
+
+    oclip --request-id WkVVu9fD --product onap-dublin --profile onap-dublin vnf-tosca-provision --vsp <vsp csar> --vnf-csar <vnf csar> ...
+
+Validation testing takes a while to complete, so the user can use the
+``request-id`` given above to track the progress, as below:
+
+.. code-block:: bash
+
+ oclip execution-list --request-id WkVVu9fD
+ +------------+------------------------+--------------+------------------+------------------------------+--------------+------------+--------------------------+--------------------------+
+ |request-id |execution-id |product |service |command |profile |status |start-time |end-time |
+ +------------+------------------------+--------------+------------------+------------------------------+--------------+------------+--------------------------+--------------------------+
+ |WkVVu9fD |WkVVu9fD-1568731678753 |onap-dublin |vnf-validation |vnf-tosca-provision | |in-progress |2019-09-17T14:47:58.000 | |
+ +------------+------------------------+--------------+------------------+------------------------------+--------------+------------+--------------------------+--------------------------+
+ |WkVVu9fD |WkVVu9fD-1568731876397 |onap-dublin |sdc.catalog |service-model-test-request |onap-dublin |in-progress |2019-09-17T14:51:16.000 | |
+ +------------+------------------------+--------------+------------------+------------------------------+--------------+------------+--------------------------+--------------------------+
+ |WkVVu9fD |WkVVu9fD-1568731966966 |onap-dublin |sdc.onboarding |vsp-archive |onap-dublin |completed |2019-09-17T14:52:46.000 |2019-09-17T14:52:47.000 |
+ +------------+------------------------+--------------+------------------+------------------------------+--------------+------------+--------------------------+--------------------------+
+ |WkVVu9fD |WkVVu9fD-1568731976982 |onap-dublin |aai |subscription-delete |onap-dublin |completed |2019-09-17T14:52:56.000 |2019-09-17T14:52:57.000 |
+ +------------+------------------------+--------------+------------------+------------------------------+--------------+------------+--------------------------+--------------------------+
+ |WkVVu9fD |WkVVu9fD-1568731785780 |onap-dublin |aai |vnfm-create |onap-dublin |completed |2019-09-17T14:49:45.000 |2019-09-17T14:49:46.000 |
+ ......
+
+While executing the test cases, VTP provides a unique execution-id (second
+column) for each step. As the example above shows, some steps are in progress
+while others have already completed. If there is an error, the status will be
+set to failed.
+
+To inspect the footprint of each step, the following commands are available:
+
+.. code-block:: bash
+
+    oclip execution-show-out --execution-id WkVVu9fD-1568731785780    # Reports the standard output logs
+    oclip execution-show-err --execution-id WkVVu9fD-1568731785780    # Reports the standard error logs
+    oclip execution-show-debug --execution-id WkVVu9fD-1568731785780  # Reports the debug details like the HTTP request and response
+    oclip execution-show --execution-id WkVVu9fD-1568731785780        # Reports the complete footprint of the inputs and outputs of each step
+
+Track the progress of the vnf-tosca-provision test case until it is completed.
+Then the output of the validation test case can be retrieved as below:
+
+.. code-block:: bash
+
+    oclip execution-show --execution-id WkVVu9fD-1568731678753  # use the vnf-tosca-provision test case execution-id here
+
+It provides the output in the format below:
+
+.. code-block:: json
+
+ {
+ "output": {
+ "ns-id": null,
+ "vnf-id": "",
+ "vnfm-driver": "hwvnfmdriver",
+ "vnf-vendor-name": "huawei",
+ "onap-objects": {
+ "ns_instance_id": null,
+ "tenant_version": null,
+ "service_type_id": null,
+ "tenant_id": null,
+ "subscription_version": null,
+ "esr_vnfm_id": null,
+ "location_id": null,
+ "ns_version": null,
+ "vnf_status": "active",
+ "entitlement_id": null,
+ "ns_id": null,
+ "cloud_version": null,
+ "cloud_id": null,
+ "vlm_version": null,
+ "esr_vnfm_version": null,
+ "vlm_id": null,
+ "vsp_id": null,
+ "vf_id": null,
+ "ns_instance_status": "active",
+ "service_type_version": null,
+ "ns_uuid": null,
+ "location_version": null,
+ "feature_group_id": null,
+ "vf_version": null,
+ "vsp_version": null,
+ "agreement_id": null,
+ "vf_uuid": null,
+ "ns_vf_resource_id": null,
+ "vsp_version_id": null,
+ "customer_version": null,
+ "vf_inputs": null,
+ "customer_id": null,
+            "key_group_id": null
+ },
+ "vnf-status": "active",
+ "vnf-name": "vgw",
+ "ns-status": "active"
+ },
+ "input": {
+ "mode": "validate",
+ "vsp": "/tmp/data/vtp-tmp-files/1568731645518.csar",
+ "vnfm-driver": "hwvnfmdriver",
+ "config-json": "/opt/oclip/conf/vnf-tosca-provision.json",
+ "vnf-vendor-name": "huawei",
+ "ns-csar": "/tmp/data/vtp-tmp-files/1568731660745.csar",
+ "onap-objects": "{}",
+ "timeout": "600000",
+ "vnf-name": "vgw",
+ "vnf-csar": "/tmp/data/vtp-tmp-files/1568731655310.csar"
+ },
+ "product": "onap-dublin",
+ "start-time": "2019-09-17T14:47:58.000",
+ "service": "vnf-validation",
+ "end-time": "2019-09-17T14:53:46.000",
+ "request-id": "WkVVu9fD-1568731678753",
+ "command": "vnf-tosca-provision",
+ "status": "completed"
+ }
+
+Reporting Results
++++++++++++++++++
+
+VTP provides a translation tool to convert the VTP result into the OVP portal
+format and generate the tar file for the given test case execution. Please
+refer to
+`<https://github.com/onap/vnfsdk-refrepo/tree/master/vnfmarket-be/deployment/vtp2ovp>`_
+for more details.
+
+Once the tar file is generated, it can be submitted to the OVP portal at
+`<https://vnf-verified.lfnetworking.org/>`_.
+
+.. References
+.. _`OVP VNF portal`: https://vnf-verified.lfnetworking.org
diff --git a/docs/Chapter8/VES_Registraion_3_2.rst b/docs/Chapter8/VES_Registration_3_2.rst
index 8013157..09ad322 100644
--- a/docs/Chapter8/VES_Registraion_3_2.rst
+++ b/docs/Chapter8/VES_Registration_3_2.rst
@@ -1,12 +1,12 @@
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
-.. Copyright 2017 AT&T Intellectual Property, All rights reserved
+.. Copyright 2017-2020 AT&T Intellectual Property, All rights reserved
.. Copyright 2017-2018 Huawei Technologies Co., Ltd.
.. _ves_event_registration_3_2:
-Service: VES Event Registration 3.2
-------------------------------------
+Service: VES Event Registration 3.2.1
+-------------------------------------
+-----------------------------------------------------------------------------+
| **Legal Disclaimer** |
@@ -25,8 +25,8 @@ Service: VES Event Registration 3.2
+-----------------------------------------------------------------------------+
:Document: VES Event Registration
-:Revision: 3.2
-:Revision Date: December 10th, 2018
+:Revision: 3.2.1
+:Revision Date: January 28th, 2020
:Author: Rich Erickson
+-----------------+------------------------------+
@@ -81,9 +81,9 @@ may arise, and recommend actions that should be taken at specific
thresholds, or if specific conditions repeat within a specified time
interval.
-Based on the vendor’s recommendations, the Service Provider may create
+Based on the vendor's recommendations, the Service Provider may create
another YAML, which finalizes their engineering rules for the processing
-of the vendor’s events. The Service Provider may alter the threshold
+of the vendor's events. The Service Provider may alter the threshold
levels recommended by the vendor, and may modify and more clearly
specify actions that should be taken when specified conditions arise.
The Service Provided-created version of the YAML will be distributed to
@@ -94,7 +94,7 @@ Goal
The goal of the YAML is to completely describe the processing of VNF
events in a way that can be compiled or interpreted by applications
-across a Service Provider’s infrastructure.
+across a Service Provider's infrastructure.
Relation to the Common Event Format
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -103,44 +103,44 @@ The Common Event Format described in the VES Event Listener service
specification defines the structure of VES events including optional
fields that may be provided.
-Specific eventNames registered by the YAML (e.g., an InvalidLicense
+Specific ``eventNames`` registered by the YAML (e.g., an ``InvalidLicense``
fault), may require that certain fields, which are optional in the
Common Event Format, be present when events with that eventName are
published. For example, a fault eventName which communicates an
-‘InvalidLicense’ condition, may be registered to require that the
-configured ‘licenseKey’ be provided as a name-value pair in the Common
-Event Format’s ‘additionalFields’ structure, within the ‘faultFields’
-block. Anytime an ‘InvalidLicense’ fault event is detected, designers,
-applications and microservices across the Service Provider’s
+``InvalidLicense`` condition, may be registered to require that the
+configured ``licenseKey`` be provided as a name-value pair in the Common
+Event Format's ``additionalFields`` structure, within the ``faultFields``
+block. Anytime an ``InvalidLicense`` fault event is detected, designers,
+applications and microservices across the Service Provider's
infrastructure can count on that name-value pair being present.
The YAML registration may also restrict ranges or enumerations defined
in the Common Event Format. For example, eventSeverity is an enumerated
string within the Common Event Format with several values ranging from
-‘NORMAL’ to ‘CRITICAL’. The YAML registration for a particular eventName
-may require that it always be sent with eventSeverity set to a single
-value (e.g., ‘MINOR’), or to a subset of the possible enumerated values
-allowed by the Common Event Format (e.g., ‘MINOR’ or ‘NORMAL’).
+``NORMAL`` to ``CRITICAL``. The YAML registration for a particular eventName
+may require that it always be sent with ``eventSeverity`` set to a single
+value (e.g., ``MINOR``), or to a subset of the possible enumerated values
+allowed by the Common Event Format (e.g., ``MINOR`` or ``NORMAL``).
Relation to Service Design and Creation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Event registration for a VNF (or other event source) is provided to the
-Service Provider’s Service Creation and Design Environment (e.g., ASDC)
+Service Provider's Service Creation and Design Environment (e.g., SDC)
as a set of two YAML files consisting of the vendor recommendation YAML
and (optionally) the final Service Provider YAML. These YAML files
-describe all the eventNames that that VNF (or other event source)
+describe all the ``eventNames`` that that VNF (or other event source)
generates.
Once their events are registered, the Service Creation and Design
-Environment can then list the registered eventNames (e.g., as a drop
+Environment can then list the registered ``eventNames`` (e.g., as a drop
down list), for each VNF or other event source (e.g., a service), and
enable designers to study the YAML registrations for specific
eventNames. YAML registrations are both human readable and machine
readable.
The final Service Provider YAML is a type of Service Design and Creation
-‘artifact’, which can be distributed to Service Provider applications at
+artifact, which can be distributed to Service Provider applications at
design time: notably, to applications involved in the collection and
processing of VNF events. It can be parsed by those applications so they
can support the receipt and processing of VNF events, without the need
@@ -160,67 +160,53 @@ Filename
YAML file names should conform to the following naming convention:
- ``{AsdcModel}_{AsdcModelType}_{v#}.yml``
+ ``{SDCModel}_{SDCModelType}_{v#}.yml``
-The ‘#’ should be replaced with the current numbered version of the
+The '#' should be replaced with the current numbered version of the
file.
-‘ASDC’ is a reference to the Service Provider’s Service Design and
-Creation environment. The AsdcModelType is an enumeration with several
+'SDC' is a reference to the Service Provider's Service Design and
+Creation environment. The SDCModelType is an enumeration with several
values of which the following three are potentially relevant:
- Service
-
- Vnf
-
- VfModule
-The AsdcModel is the modelName of the specific modelType whose events
+The SDCModel is the modelName of the specific modelType whose events
are being registered (e.g., the name of the specific VNF or service as
it appears in the the Service Design and Creation Environment).
For example:
- ``vMRF_Vnf_v1.yml``
-
- ``vMRF_Service_v1.yml``
-
- ``vIsbcSsc_VfModule_v1.yml``
File Structure
~~~~~~~~~~~~~~
-Each eventType is registered as a distinct YAML ‘document’.
-
-YAML files consist of a series of YAML documents delimited by ‘- - -‘ and
-‘…’ for example:
-
-.. code-block:: ruby
-
- Some Ruby code.
- ---
-
- # Event Registration for eventName ‘name1’
-
- # details omitted
-
- ...
-
- ---
+Each eventType is registered as a distinct YAML document.
- # Event Registration for eventName ‘name2’
+YAML files consist of a series of YAML documents delimited by ``---``
+denoting the start of a document, and -- optionally -- ``...`` denoting
+the end of a document. For example:
- # details omitted
-
- ...
-
- ---
-
- # Event Registration for eventName ‘name3’
+.. code-block:: yaml
- # details omitted
+ ---
+ # Event Registration for eventName 'name1'
+ # details omitted
+ ...
+ ---
+ # Event Registration for eventName 'name2'
+ # details omitted
+ ...
+ ---
+ # Event Registration for eventName 'name3'
+ # details omitted
+ ...
- ...
YAML Syntax and Semantics
^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -234,13 +220,13 @@ Qualifiers
~~~~~~~~~~
Each object or field name in the eventName being registered is followed
-by a ‘qualifier’, which consists of a colon and two curly braces, for
+by a 'qualifier', which consists of a colon and two curly braces, for
example:
- ``"objectOrFieldName: { }"``
+ ``objectOrFieldName: { }``
The curly braces contain meta-information about that object or field
-name (also known as the ‘element’), such as whether it is required to be
+name (also known as the 'element'), such as whether it is required to be
present, what values it may have, what handling it should trigger, etc…
Semantics have been defined for the following types of meta-information
@@ -255,49 +241,49 @@ taken if a specified trigger occurs. For example, the ``action`` keyword
may specify that a threshold crossing alert (i.e., tca) be generated,
and/or that a specific microservice handler be invoked, and/or that a
specific named-condition be asserted. In the Rules section of the YAML
-file, tca’s and microservices may be defined on individual
+file, tca's and microservices may be defined on individual
named-conditions or on logical combinations of named-conditions.
-The ``action:`` keyword is followed by five values in square brackets. The
+The ``action`` keyword is followed by five values in square brackets. The
first two values communicate the trigger, and the last three values
communicate the actions to be taken if that trigger occurs:
1. The first value conveys the trigger level. If the field on which the
action is defined reaches or passes through that level, then the
trigger fires. If a specific level is not important to the
- recommended action, the ‘any’ keyword may be used as the first value.
- (Note: ‘any’ is often used when an action is defined on the ‘event’
+ recommended action, the 'any' keyword may be used as the first value.
+ (Note: 'any' is often used when an action is defined on the 'event'
structure as a whole).
2. The second value indicates the direction of traversal of the level
- specified in the first value. The second value may be ‘up’, ‘down’,
- ‘at’ or ‘any’. ‘any’ is used if the direction of traversal is not
- important. ‘at’ implies that it traversed (or exactly attained) the
- trigger level but it doesn’t matter if the traversal was in the up
- direction or down direction. Note: If ‘up’, ‘down’ or ‘at’ are used,
+ specified in the first value. The second value may be ``up``, ``down``,
+ ``at`` or 'any'. 'any' is used if the direction of traversal is not
+ important. ``at`` implies that it traversed (or exactly attained) the
+ trigger level but it doesn't matter if the traversal was in the up
+ direction or down direction. Note: If ``up``, ``down`` or ``at`` are used,
the implication is that the microservices processing the events
within the service provider are maintaining state (e.g., to know that
- a measurement field traversed a trigger level in an ‘up’ direction,
+ a measurement field traversed a trigger level in an ``up`` direction,
the microservice would have to know that the field was previously
below the trigger level). When initially implementing support for
YAML actions, a service provider may choose to use and interpret
these keywords in a simpler way to eliminate the need to handle
- state. Specifically, they may choose to define and interpret all ‘up’
- guidance to mean ‘at the indicated trigger level or greater’, and
- they may choose to define and interpret all ‘down’ guidance to mean
- ‘at the indicated trigger level or lower’.
+ state. Specifically, they may choose to define and interpret all ``up``
+ guidance to mean 'at the indicated trigger level or greater', and
+ they may choose to define and interpret all ``down`` guidance to mean
+ 'at the indicated trigger level or lower'.
3. The third value optionally names the condition that has been attained
- when the triggers fires (e.g., ‘invalidLicence’ or
- ‘capacityExhaustion’). Named-conditions should be expressed in upper
+ when the triggers fires (e.g., ``invalidLicence`` or
+ ``capacityExhaustion``). Named-conditions should be expressed in upper
camel case with no underscores, hyphens or spaces. In the Rules
section of the YAML file, named-conditions may be used to specify
- tca’s that should be generated and/or microservices that should be
+ tca's that should be generated and/or microservices that should be
invoked. If it is not important to name a condition, then the keyword
- ‘null’ may be used as the third value.
+ ``null`` may be used as the third value.
-4. The fourth value recommends a specific microservice (e.g., ‘rebootVm’
- or ‘rebuildVnf’) supported by the Service Provider, be invoked if the
+4. The fourth value recommends a specific microservice (e.g., ``rebootVm``
+ or ``rebuildVnf``) supported by the Service Provider, be invoked if the
trigger is attained. Design time processing of the YAML by the
service provider can use these directives to automatically establish
policies and configure flows that need to be in place to support the
@@ -306,18 +292,16 @@ communicate the actions to be taken if that trigger occurs:
- If a vendor wants to recommend an action, it can either work with
the service provider to identify and specify microservices that the
service provider support, or, the vendor may simply indicate and
- recommend a generic microservice function by prefixing ‘RECO-’ in
+ recommend a generic microservice function by prefixing ``RECO-`` in
front of the microservice name, which should be expressed in upper
camel case with no underscores, hyphens or spaces.
-
- - The fourth value may also be set to ‘null’.
+ - The fourth value may also be set to ``null``.
5. The fifth value indicates a specific threshold crossing
alert (i.e., tca) that should be generated if the trigger occurs.
- This field may be omitted or provided as ‘null’.
-
- - Tca’s should be indicated by their eventNames.
+ This field may be omitted or provided as ``null``.
+ - Tca's should be indicated by their eventNames.
- When a tca is specified, a YAML registration for that tca eventName
should be added to the event registrations within the YAML file.
@@ -325,22 +309,22 @@ Examples:
.. code-block:: yaml
- event: {
+ event: {
action: [
any, any, null, rebootVm
]
- }
+ }
# whenever the above event occurs, the VM should be rebooted
- fieldname: {
+ fieldname: {
action: [ 80, up, null, null, tcaUpEventName ],
action: [ 60, down, overcapacity, null ]
- }
+ }
# when the value of fieldname crosses 80 in an up direction,
# tcaUpEventName should be published; if the fieldname crosses 60
- # in a down direction an ‘overCapacity’ named-condition is asserted.
+ # in a down direction an 'overCapacity' named-condition is asserted.
AggregationRole
+++++++++++++++
@@ -348,41 +332,38 @@ AggregationRole
The ``aggregationRole`` keyword is applied to the value keyword in a field
of a name-value pair.
-AggregationRole may be set to one of the following:
-
-- cumulativeCounter
+The field ``aggregationRole`` may be set to one of the following:
-- gauge
+- ``cumulativeCounter``
+- ``gauge``
+- ``index``
+- ``reference``
-- index
+``index`` identifies a field as an index or a key for aggregation.
-- reference
-
-“index” identifies a field as an index or a key for aggregation.
-
-“reference” fields have values that typically do not change over
+``reference`` fields have values that typically do not change over
consecutive collection intervals.
-“gauge” values may fluctuate from one collection interval to the next,
+``gauge`` values may fluctuate from one collection interval to the next,
i.e., increase or decrease.
-“cumulativeCounter” values keep incrementing regardless of collection
+``cumulativeCounter`` values keep incrementing regardless of collection
interval boundaries until they overflow, i.e., until they exceed a
maximum value specified by design. Typically, delta calculation is
-needed based on two cumulativeCounter values over two consecutive
+needed based on two ``cumulativeCounter`` values over two consecutive
collection intervals.
If needed, the ``aggregationRole`` setting tells the receiving event
processor how to aggregate the extensible keyValuePair data. Data
-aggregation may use a combination of ‘index’ and ‘reference’ data fields
+aggregation may use a combination of ``index`` and ``reference`` data fields
as aggregation keys while applying aggregation formulas, such as
-summation or average on the ‘gauge’ fields.
+summation or average on the ``gauge`` fields.
-Example 1:
+**Example 1**:
- - Interpretation of the below: If additionalMeasurements is supplied,
- it must have key name1 and name1’s value should be interpreted as an
- index:
+- Interpretation of the below: If additionalMeasurements is supplied,
+ it must have key name1 and name1's value should be interpreted as an
+ index:
.. code-block:: yaml
@@ -402,11 +383,11 @@ Example 1:
]
}
-Example 2:
+**Example 2**:
-- Let’s say a vnf wants to send the following ``TunnelTraffic`` fields
- through a VES arrayOfFields structure (specifically through
- additionalMeasurements in the VES measurementField block):
+- Let's say a VNF wants to send the following ``TunnelTraffic`` fields
+ through a VES ``arrayOfFields`` structure (specifically through
+ ``additionalMeasurements`` in the VES ``measurementField`` block):
+--------------------------+--------+-------------+-------------+-------------+
| Tunnel Name | Tunnel | Total | Total Output| Total Output|
@@ -479,26 +460,21 @@ If not supplied the implication is the standard VES datatype applies.
A value may be castTo one and only one of the following data types:
-- boolean
-
-- integer
-
-- number (note: this supports decimal values as well as integral
+- ``boolean``
+- ``integer``
+- ``number`` (**note**: this supports decimal values as well as integral
values)
+- ``string``
-- string
-
-Example:
+**Example**:
.. code-block:: yaml
fieldname: { value: [ x, y, z ], castTo: number }
- # only values ‘x’,‘y’, or ‘z’ allowed
+ # only values 'x','y', or 'z' allowed
# each must be cast to a number
-.. code-block:: yaml
-
additionalMeasurements: {
presence: optional, array: [
{
@@ -534,7 +510,7 @@ Examples:
fieldname: {
range: [ 1, unbounded ],
default: 5,
- comment: “needs further diagnosis; call the TAC”
+ comment: "needs further diagnosis; call the TAC"
}
.. code-block:: yaml
@@ -542,7 +518,7 @@ Examples:
fieldname: {
value: [ red, white, blue ],
default: blue,
- comment: “red indicates degraded quality of service”
+ comment: "red indicates degraded quality of service"
}
.. code-block:: yaml
@@ -576,7 +552,7 @@ HeartbeatAction
The ``heartbeatAction`` keyword is provided on the ``event`` objectName for
heartbeat events only. It provides design time guidance to the service
-provider’s heartbeat processing applications (i.e., their watchdog
+provider's heartbeat processing applications (i.e., their watchdog
timers). The syntax and semantics of the ``heartbeatAction`` keyword are
similar to the ``action`` keyword except the trigger is specified by the
first field only instead of the first two fields. When the
@@ -605,30 +581,30 @@ parameters:
- The first parameter specifies the character used to delimit (i.e., to
separate) the key-value pairs. If a space is used as a delimiter,
- it should be communicated within single quotes as ‘ ‘; otherwise,
+ it should be communicated within single quotes as ' '; otherwise,
the delimiter character should be provided without any quotes.
- The second parameter specifies the characters used to separate the
keys and values. If a space is used as a separator, it should be
- communicated within single quotes as ‘ ‘; otherwise, the
+ communicated within single quotes as ' '; otherwise, the
separator character should be provided without any quotes.
-- The third parameter is a “sub-keyword” (i.e., it is used only within
- ‘keyValuePairString’) called ‘keyValuePairs: [ ]’. Within the
- square brackets, a list of ‘keyValuePair’ keywords can be
+- The third parameter is a "sub-keyword" (i.e., it is used only within
+ ``keyValuePairString``) called ``keyValuePairs: [ ]``. Within the
+ square brackets, a list of ``keyValuePair`` keywords can be
provided as follows:
- - Each ‘keyValuePair’ is a structure (used only within
- ‘keyValuePairs’) which has a ‘key’ and a ‘value’. Each
- ‘keyValuePair’, ‘key’ and ‘value’ may be decorated with any of
+ - Each ``keyValuePair`` is a structure (used only within
+ ``keyValuePairs``) which has a ``key`` and a ``value``. Each
+ ``keyValuePair``, ``key`` and ``value`` may be decorated with any of
the other keywords specified in this specification (e.g., with
- ‘presence’, ‘value’, ‘range’ and other keywords).
+ ``presence``, ``value``, ``range`` and other keywords).
Examples:
- The following specifies an additionalFields string which is stuffed
- with ‘key=value’ pairs delimited by the pipe (‘\|’) symbol as in
- (“key1=value1\|key2=value2\|key3=value3…”).
+ with 'key=value' pairs delimited by the pipe ('\|') symbol as in
+ ("key1=value1\|key2=value2\|key3=value3…").
.. code-block:: yaml
@@ -654,8 +630,8 @@ Examples:
Presence
+++++++++
-The ``presence`` keyword may be defined as ‘required’ or ‘optional’. If
-not provided, the element is assumed to be ‘optional’.
+The ``presence`` keyword may be defined as 'required' or 'optional'. If
+not provided, the element is assumed to be 'optional'.
Examples:
@@ -682,9 +658,9 @@ by two parameters in square brackets:
- the first parameter conveys the minimum value
-- the second parameter conveys the maximum value or ‘unbounded’
+- the second parameter conveys the maximum value or 'unbounded'
-The keyword ‘unbounded’ is supported to convey an unbounded upper limit.
+The keyword 'unbounded' is supported to convey an unbounded upper limit.
Note that the range cannot override any restrictions defined in the VES
Common Event Format.
@@ -726,11 +702,11 @@ Units
+++++++
The ``units`` qualifier may be applied to values provided in VES Common
-Event Format extensible field structures. The ‘units’ qualifier
+Event Format extensible field structures. The 'units' qualifier
communicates the units (e.g., megabytes, seconds, Hz) that the value is
-expressed in. Note: the ‘units’ should not contain any space characters
-(e.g., use ‘numberOfPorts’ or ‘number\_of\_ports’ but not ‘number of
-ports’).
+expressed in. Note: the 'units' should not contain any space characters
+(e.g., use 'numberOfPorts' or 'number\_of\_ports' but not 'number of
+ports').
Example:
@@ -761,12 +737,12 @@ Examples:
.. code-block:: yaml
- fieldname: { value: x } # the value is ‘x’
+ fieldname: { value: x } # the value is 'x'
.. code-block:: yaml
fieldname: { value: [ x, y, z ] }
- # the value is either ‘x’, ‘y’, or ‘z’
+ # the value is either 'x', 'y', or 'z'
.. code-block:: yaml
@@ -775,7 +751,7 @@ Examples:
.. code-block:: yaml
- fieldname: { value: ‘error state’ }
+ fieldname: { value: 'error state' }
# the value is the string within the single quotes
Rules
@@ -785,15 +761,15 @@ Rules Document
++++++++++++++
After all events have been defined, the YAML file may conclude with a
-final YAML document delimited by ‘- - -‘ and ‘…’, which defines rules
-based on the named ‘conditions’ asserted in action qualifiers in the
+final YAML document delimited by '- - -' and '…', which defines rules
+based on the named 'conditions' asserted in action qualifiers in the
preceding event definitions. For example:
.. code-block:: yaml
---
- # Event Registration for eventName ‘name1’
+ # Event Registration for eventName 'name1'
event: {
presence: required,
@@ -803,14 +779,14 @@ preceding event definitions. For example:
...
---
- # Event Registration for eventName ‘name2’
+ # Event Registration for eventName 'name2'
event: {
presence: required,
structure: {
commonEventHeader: {
presence: required,
structure: {# details omitted}
- }
+ },
measurements: {
presence: required,
structure: {
@@ -841,7 +817,7 @@ preceding event definitions. For example:
# Rules
rules: [
- # defined based on conditions ‘A’ and ‘B’ - details omitted
+ # defined based on conditions 'A' and 'B' - details omitted
]
...
@@ -944,54 +920,62 @@ PM Dictionary
The Performance Management (PM) Dictionary is used by analytics
applications to interpret and process perf3gpp measurement information
-from vendors, including measurement name, measurement family, measured
+from network functions, including measurement name, measurement family, measured
object class, description, collection method, value ranges, unit of
measure, triggering conditions and other information. The ultimate goal
is for analytics applications to dynamically process new and updated
measurements based on information in the PM Dictionary.
-The PM dictionary is supplied by NF vendors in two parts:
-
-- *PM Dictionary Schema*: specifies meta-information about perf3gpp
- measurement events from that vendor. The meta-information is conveyed
- using standard meta-information keywords, and may be extended to
- include vendor-specific meta-information keywords. The PM Dictionary
- Schema may also convey a range of vendor-specific values for some of
- the keywords. Note: a vendor may provide multiple versions of the PM
- Dictionary Schema and refer to those versions from the PM Dictionary.
-
-- *PM Dictionary*: defines specific perf3gpp measurements sent by
- vendor NFs (each of which is compliant with a referenced PM
- Dictionary Schema).
+The PM dictionary is supplied by NF vendors in a single YAML file composed of
+two parts:
+
+- *PM Dictionary Schema*: specifies meta-information about performance
+ measurements from that NF. The meta-information is conveyed using
+ standard meta-information keywords and may be extended to include
+ vendor-specific meta-information keywords. The PM Dictionary Schema may also
+ convey a range of vendor-specific values for some of the keywords. There is
+ one PM Dictionary Schema provided per YAML file. It must be the first
+ YAML document in the PM Dictionary YAML file, if the file contains multiple
+ documents.
+
+- *PM Dictionary Measurements*: defines specific measurements sent by vendor
+ NFs (each of which is compliant with the PM Dictionary Schema provided in the
+ same YAML file). Each PM Dictionary Measurement is specified in a separate
+  YAML document and is composed of two parts: ``pmHeader`` and ``pmFields``.
+  The ``pmHeader`` values MUST be the same for all PM Dictionary Measurements
+  in a single PM Dictionary YAML file (see the layout sketch below).
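+
+The sketch below gives a minimal, hypothetical illustration of this file
+layout: the vendor string, version and measurement name are placeholders
+only, and the full keyword definitions appear in the schema and measurement
+examples later in this section.
+
+.. code-block:: yaml
+
+   ---
+   # document 1: PM Dictionary Schema (always the first document in the file)
+   pmMetaData: { presence: required, structure: {
+     pmHeader: {# keyword definitions omitted},
+     pmFields: {# keyword definitions omitted}
+   }}
+   ...
+   ---
+   # documents 2..N: one PM Dictionary Measurement per YAML document
+   pmMetaData:
+     pmHeader:
+       nfType: gnb-ExampleVendor       # hypothetical nfName-vendor string
+       pmDefSchemaVsn: 2.0
+       pmDefVsn: example_0001          # hypothetical vendor-defined version
+     pmFields:
+       measType: VS.EXAMPLE.Counter1   # hypothetical measurement name
+   ...
+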
PM Dictionary Schema Keywords
-+++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++++++
The following is a list of standard PM Dictionary Schema Keywords:
-pmDictionaryHeader Keywords:
+``pmHeader`` Keywords:
+---------------+------------------------------------+-------+---------------+
| **Keyword** | **Description** |**M/O**|**Example** |
+===============+====================================+=======+===============+
-| nfType | NF type to whom this PM Dictionary |M |gnb |
+| nfType | NF type to whom this PM Dictionary |M |gnb-Nokia |
| | applies. nfType is vendor | | |
-| | defined and should match the string| | |
-| | used in eventName. | | |
+| | defined and should match the | | |
+| | nfName-vendor string used in | | |
+| | the fileReady or perf3gpp | | |
+| | eventName | | |
+---------------+------------------------------------+-------+---------------+
-| pmDefSchemaVsn| Version of the PM Dictionary Schema|M |1.0 |
+| pmDefSchemaVsn| Version of the PM Dictionary Schema|M |2.0 |
| | used for this PM Dictionary. | | |
| | Schema versions are specified in | | |
-| | the VES Specifications. | | |
+| | the VES Event Registration | | |
+| | Specifications. The latest PM | | |
+| | Dictionary Schema Version is 2.0 | | |
+| | (described in this document). | | |
+---------------+------------------------------------+-------+---------------+
| pmDefVsn | Version of the PM Dictionary. |M |5G19\_1906\_002|
| | Version is vendor defined. | | |
+---------------+------------------------------------+-------+---------------+
-| vendor | Vendor of the NF type to whom this |M |Nokia |
-| | PM Dictionary applies. | | |
-+---------------+------------------------------------+-------+---------------+
-pmDictionaryMeasurements Keywords:
+
+``pmFields`` Keywords:
+--------------------+----------------------+--------+-----------------------+
| **Keyword** | **Description** | **M/O**| **Example** |
@@ -1006,10 +990,9 @@ pmDictionaryMeasurements Keywords:
| |measType for | | |
| |efficiency in GPB. | | |
+--------------------+----------------------+--------+-----------------------+
-|measAdditionalFields|Hashmap of vendor | 0 | measAggregationLevels |
+|measAdditionalFields|Hashmap of vendor | O | vendorField1 |
| |specific PM Dictionary| | |
-| |fields in key value | | |
-| |pair format | | |
+| |fields | | |
+--------------------+----------------------+--------+-----------------------+
|measChangeType |For the measLastChange| M | added |
| |,indicates the type of| | |
@@ -1044,7 +1027,7 @@ pmDictionaryMeasurements Keywords:
| |values supported by a | | |
| |vendor are specified | | |
| |in the PM Dictionary | | |
-| |YAML using the “value”| | |
+| |YAML using the "value"| | |
| |attribute and may | | |
| |include vendor-defined| | |
| |collection methods not| | |
@@ -1149,7 +1132,7 @@ pmDictionaryMeasurements Keywords:
| |supported by a vendor | | |
| |are specified in the | | |
| |PM Dictionary YAML | | |
-| |using the “value” | | |
+| |using the "value" | | |
| |attribute and may | | |
| |include vendor-defined| | |
| |objects not specified | | |
@@ -1194,7 +1177,7 @@ pmDictionaryMeasurements Keywords:
| |supported by a vendor | | |
| |are specified in the | | |
| |PM Dictionary YAML | | |
-| |using the “value” | | |
+| |using the "value" | | |
| |attribute and may | | |
| |include vendor-defined| | |
| |data types not | | |
@@ -1222,7 +1205,7 @@ pmDictionaryMeasurements Keywords:
| |values supported by a | | |
| |vendor are specified | | |
| |in the PM Dictionary | | |
-| |YAML using the “value”| | |
+| |YAML using the "value"| | |
| |attribute and may | | |
| |include vendor-defined| | |
| |units of measure not | | |
@@ -1268,315 +1251,305 @@ PM Dictionary Schema Example
The following is a sample PM Dictionary Schema:
-# PM Dictionary schema specifying and describing the meta information
-used to define perf3gpp measurements in the PM Dictionary
-
.. code-block:: yaml
- pmDictionary: {
- presence: required,
- structure: {
- pmDictionaryHeader: {
- presence: required,
- structure: {
- nfType: {
- presence: required,
- comment: "NF type; should match the string used in the perf3gpp eventName"
- },
- pmDefSchemaVsn: {
- presence: required,
- value: 1.0,
- comment: "PM Dictionary Schema Version"
- },
- pmDefVsn: {
- presence: required,
- comment: "vendor-defined PM Dictionary version"
+ ---
+ # PM Dictionary schema specifying and describing the meta information
+ # used to define perf3gpp measurements in the PM Dictionary
+
+ pmMetaData: { presence: required, structure: {
+ pmHeader: {
+ presence: required,
+ structure: {
+ nfType: {
+ presence: required,
+ comment: "NF type; should match the nfName-vendor string used in
+ the fileReady or perf3gpp eventName"
+ },
+ pmDefSchemaVsn: {
+ presence: required,
+ value: 2.0,
+ comment: "PM Dictionary Schema Version from the VES Event
+ Registration specification"
},
- vendor: {
- presence: required,
- comment: "vendor of the NF type"
+ pmDefVsn: {
+ presence: required,
+ comment: "vendor-defined PM Dictionary version"
}
- }
- },
- pmDictionaryMeasurements: {
- presence: required,
- array: [
- iMeasInfoId: {
- presence: required,
- comment: "vendor-defined integer measurement group identifier"
- },
- iMeasType: {
- presence: required,
- comment: "vendor-defined integer identifier for the measType; must be combined with measInfoId to identify a specific measurement."
- },
- measAdditionalFields: {
- presence: required,
- comment: "vendor-specific PM Dictionary fields",
- array: [
- keyValuePair: {
- presence: required,
- structure: {
- key: {
- presence: required,
- value: measAggregationLevels,
- comment:"Nokia-specific field"
- },
- value: {
- presence: required,
- value: [NGBTS, NGCELL, IPNO, IPSEC, ETHIF],
- comment: "list of one or more aggregation levels that Nokia recommends for this measurement; for example, if the value is NGBTS NGCELL, then Nokia recommends this measurement be aggregated on the 5G BTS level and the 5G Cell level"
- }
- }
- }
- ]
+ }
+ },
+ pmFields: {
+ presence: required,
+ structure: {
+ iMeasInfoId: {
+ presence: required,
+ comment: "vendor-defined integer measurement group identifier"
},
- measChangeType: {
- presence: required,
- value: [added, modified, deleted],
- comment: "indicates the type of change that occurred during measLastChange"
+ iMeasType: {
+ presence: required,
+ comment: "vendor-defined integer identifier for the measType;
+ must be combined with measInfoId to identify a
+ specific measurement."
+ },
+ measChangeType: {
+ presence: required,
+ value: [added, modified, deleted],
+ comment: "indicates the type of change that occurred during
+ measLastChange"
},
- measCollectionMethod: {
- presence: required,
- value: [CC, SI, DER, Gauge, Average],
- comment: "the measurement collection method; CC, SI, DER and Gauge are as defined in 3GPP; average contains the average value of the measurement during the granularity period"
+ measCollectionMethod: {
+ presence: required,
+ value: [CC, SI, DER, Gauge, Average],
+ comment: "the measurement collection method; CC, SI, DER and
+ Gauge are as defined in 3GPP; average contains the
+ average value of the measurement during the
+ granularity period"
},
- measCondition: {
- presence: required,
- comment: "description of the condition causing the measurement"
+ measCondition: {
+ presence: required,
+ comment: "description of the condition causing the measurement"
},
- measDescription: {
- presence: required,
- comment: "description of the measurement information and purpose"
+ measDescription: {
+ presence: required,
+ comment: "description of the measurement information
+ and purpose"
+ },
+ measFamily: {
+ presence: required,
+ comment: "abbreviation for a family of measurements, in
+ 3GPP format, or vendor defined"
},
- measFamily: {
- presence: required,
- comment: "abbreviation for a family of measurements, in 3GPP format,or vendor defined"
+ measInfoId: {
+ presence: required,
+ comment: "name for a group of related measurements in
+ 3GPP format or vendor defined"
},
- measInfoId: {
- presence: required,
- comment: "name for a group of related measurements in 3GPP format or vendor defined"
+ measLastChange: {
+ presence: required,
+ comment: "version of the PM Dictionary the last time this
+ measurement was added, modified or deleted"
},
- measLastChange: {
- presence: required,
- comment: "version of the PM Dictionary the last time this measurement was added, modified or deleted"
+ measObjClass: {
+ presence: required,
+ value: [NGBTS, NGCELL, IPNO, IPSEC, ETHIF],
+ comment: "measurement object class"
},
- measObjClass: {
- presence: required,
- value: [NGBTS, NGCELL, IPNO, IPSEC, ETHIF],
- comment: "measurement object class"
+ measResultRange: {
+ presence: optional,
+ comment: "range of the measurement result; only necessary when
+ the range is smaller than the full range of the
+ data type"
},
- measResultRange: {
- presence: optional,
- comment: "range of the measurement result; only necessary when the range is smaller than the full range of the data type"
+ measResultType: {
+ presence: required,
+ value: [float, uint32, uint64],
+ comment: "data type of the measurement result"
},
- measResultType: {
- presence: required,
- value: [float, unit32, uint64],
- comment: "data type of the measurement result"
+ measResultUnits: {
+ presence: required,
+ value: [seconds, minutes, nanoseconds, microseconds, dB,
+ number, kilobytes, bytes, ethernetFrames,
+ packets, users],
+ comment: "units of measure for the measurement result"
},
- measResultUnits: {
- presence: required,
- value: [ seconds, minutes, nanoseconds, microseconds, dB, number, kilobytes, bytes, ethernetFrames, packets, users],
- comment: "units of measure for the measurement result"
+ measType: {
+ presence: required,
+ comment: "measurement name in 3GPP or vendor-specific format;
+ vendor specific names are preceded with VS"
},
- measType: {
- presence: required,
- comment: "measurement name in 3GPP or vendor-specific format; vendor specific names are preceded with VS"
+ measAdditionalFields: {
+ presence: required,
+ comment: "vendor-specific PM Dictionary fields",
+ structure: {
+ vendorField1: {
+ presence: required,
+ value: [X, Y, Z],
+ comment: "vendor field 1 description"
+ },
+ vendorField2: {
+ presence: optional,
+ value: [A, B],
+ comment: "vendor field 2 description."
+ }
}
- ]
}
}
}
+ }}
+ ...
+**Note**: The ``measAdditionalFields`` can be different for different vendors
+and NF Types. The PM Dictionary Schema specifies what ``measAdditionalFields``
+are provided for this particular NF type.
-PM Dictionary Example
-+++++++++++++++++++++
+PM Dictionary Measurement Example
++++++++++++++++++++++++++++++++++
-The following is a sample PM Dictionary in both bracketed and
+The following are PM Dictionary measurement examples in both bracketed and
indent-style YAML formats
-
-# PM Dictionary perf3gpp measurements for the Nokia gnb NF (bracket
-style yaml)
-
.. code-block:: yaml
-
- pmDictionary: {
-
- pmDictionaryHeader: {
- nfType: gnb,
- pmDefSchemaVsn: 1.0,
- pmDefVsn: 5G19_1906_002,
- vendor: Nokia
+ # PM Dictionary perf3gpp measurements for the gnb-Nokia NF (bracket style yaml)
+ ---
+ pmMetaData: {
+ pmHeader: {
+ nfType: gnb-Nokia,
+ pmDefSchemaVsn: 2.0,
+ pmDefVsn: 5G19_1906_002
},
- pmDictionaryMeasurements: [
- {
- iMeasInfoId: 2204,
- iMeasType: 1,
- measAdditionalFields: { measAggregationLevels: "NGBTS NGCELL"},
- measCollectionMethod: CC,
- measCondition: "This measurement is updated when X2AP: SgNB Modification Required message is sent to MeNB with the SCG Change Indication set as PSCellChange.",
- measDescription: "This counter indicates the number of intra gNB intra frequency PSCell change attempts.",
- measFamily: NINFC,
- measInfoId: "NR Intra Frequency PSCell Change",
- measLastChange: 5G18A_1807_003,
- measObjClass: NGCELL,
- measResultRange: 0..4096,
- measResultType: integer,
- measResultUnits: number,
- measType: VS.NINFC.IntraFrPscelChAttempt
+ pmFields: {
+ iMeasInfoId: 2204,
+ iMeasType: 1,
+
+ measCollectionMethod: CC,
+ measCondition: "This measurement is updated when X2AP: SgNB Modification Required message is sent to MeNB
+ with the SCG Change Indication set as PSCellChange.",
+ measDescription: "This counter indicates the number of intra gNB intra frequency PSCell change attempts.",
+ measFamily: NINFC,
+ measInfoId: "NR Intra Frequency PSCell Change",
+ measLastChange: 5G18A_1807_003,
+ measObjClass: NGCELL,
+ measResultRange: 0-4096,
+ measResultType: integer,
+ measResultUnits: number,
+ measType: VS.NINFC.IntraFrPscelChAttempt,
+ measAdditionalFields: {
+ vendorField1: X,
+ vendorField2: B
+ }
+ }
+ }
+ ...
+ ---
+ pmMetaData: {
+ pmHeader: {
+ nfType: gnb-Nokia,
+ pmDefSchemaVsn: 2.0,
+ pmDefVsn: 5G19_1906_002
},
- {
- iMeasInfoId: 2204,
- iMeasType: 2,
- measAdditionalFields: {measAggregationLevels: "NGBTS NGCELL"},
- measCollectionMethod: CC,
- measCondition: "This measurement is updated when the TDCoverall timer has elapsed before gNB receives the X2AP: SgNB Modification Confirm message.",
- measDescription: "This measurement the number of intra gNB intra frequency PSCell change failures due to TDCoverall timer expiry.",
- measFamily: NINFC,
- measInfoId: "NR Intra Frequency PSCell Change",
- measLastChange: 5G18A_1807_003,
- measObjClass: NGCELL,
- measResultRange: 0..4096,
- measResultType: integer,
- measResultUnits: number,
- measType: VS.NINFC.IntraFrPscelChFailTdcExp
+ pmFields: {
+ iMeasInfoId: 2204,
+ iMeasType: 2,
+ measCollectionMethod: CC,
+ measCondition: "This measurement is updated when the TDCoverall timer has elapsed before gNB receives the X2AP: SgNB Modification Confirm message.",
+ measDescription: "This measurement indicates the number of intra gNB intra frequency PSCell change failures due to TDCoverall timer expiry.",
+ measFamily: NINFC,
+ measInfoId: "NR Intra Frequency PSCell Change",
+ measLastChange: 5G18A_1807_003,
+ measObjClass: NGCELL,
+ measResultRange: 0-4096,
+ measResultType: integer,
+ measResultUnits: number,
+ measType: VS.NINFC.IntraFrPscelChFailTdcExp,
+ measAdditionalFields: {
+ vendorField1: Y
+ }
+ }
+ }
+ ...
+ ---
+ pmMetaData: {
+ pmHeader: {
+ nfType: gnb-Nokia,
+ pmDefSchemaVsn: 2.0,
+ pmDefVsn: 5G19_1906_002
},
- {
- iMeasInfoId: 2204,
- iMeasType: 3,
- measAdditionalFields: { measAggregationLevels: "NGBTS NGCELL"},
- measCondition: "This measurement is updated when MeNB replies to X2AP: SgNB Modification Required message with the X2AP: SgNB Modification Refuse message.",
- measCollectionMethod: CC,
- measDescription: "This counter indicates the number of intra gNB intra frequency PSCell change failures due to MeNB refusal.",
- measFamily: NINFC,
- measInfoId: "NR Intra Frequency PSCell Change",
- measLastChange: 5G19_1906_002,
- measObjClass: NGCELL,
- measResultRange: 0..4096,
- measResultType: integer,
- measResultUnits: number,
- measType: VS.NINFC.IntraFrPscelChFailMenbRef
+ pmFields: {
+ iMeasInfoId: 2206,
+ iMeasType: 1,
+ measCondition: "This measurement is updated when MeNB replies to X2AP: SgNB Modification Required message with the X2AP: SgNB Modification Refuse message.",
+ measCollectionMethod: CC,
+ measDescription: "This counter indicates the number of intra gNB intra frequency PSCell change failures due to MeNB refusal.",
+ measFamily: NINFC,
+ measInfoId: "NR Intra Frequency PSCell Change",
+ measLastChange: 5G19_1906_002,
+ measObjClass: NGCELL,
+ measResultRange: 0-4096,
+ measResultType: integer,
+ measResultUnits: number,
+ measType: VS.NINFC.IntraFrPscelChFailMenbRef,
+ measAdditionalFields: {
+ vendorField1: Z,
+ vendorField2: A
+ }
}
- ]
}
-
+ ...
.. code-block:: yaml
- # PM Dictionary perf3gpp measurements for the Nokia gnb NF (indented style yaml)
-
+ # PM Dictionary perf3gpp measurements for the gnb-Nokia NF (indented style yaml)
+ ---
- pmDictionary:
+ pmMetaData:
-
- pmDictionaryHeader:
-
- nfType: gnb
-
- pmDefSchemaVsn: 1.0
-
- pmDefVsn: 5G19_1906_002
-
- vendor: Nokia
-
- pmDictionaryMeasurements:
-
- -
-
- iMeasInfoId: 2204
-
- iMeasType: 1
-
- measAdditionalFields:
-
- measAggregationLevels: "NGBTS NGCELL"
-
- measCollectionMethod: CC
-
- measCondition: "This measurement is updated when X2AP: SgNB Modification Required message is sent to MeNB with the SCG Change Indication set as PSCellChange."
-
- measDescription: "This counter indicates the number of intra gNB intra frequency PSCell change attempts."
-
- measFamily: NINFC
-
- measInfoId: "NR Intra Frequency PSCell Change"
-
- measLastChange: 5G18A_1807_003
-
- measObjClass: NGCELL
-
- measResultRange: "0..4096"
-
- measResultType: integer
-
- measResultUnits: number
-
- measType: VS.NINFC.IntraFrPscelChAttempt
-
- -
-
- iMeasInfoId: 2204
-
- iMeasType: 2
-
- measAdditionalFields:
-
- measAggregationLevels: "NGBTS NGCELL"
-
- measCollectionMethod: CC
-
- measCondition: "This measurement is updated when the TDCoverall timer has elapsed before gNB receives the X2AP: SgNB Modification Confirm message."
-
- measDescription: "This measurement the number of intra gNB intra frequency PSCell change failures due to TDCoverall timer expiry."
-
- measFamily: NINFC
-
- measInfoId: "NR Intra Frequency PSCell Change"
-
- measLastChange: 5G18A_1807_003
-
- measObjClass: NGCELL
-
- measResultRange: "0..4096"
-
- measResultType: integer
-
- measResultUnits: number
-
- measType: VS.NINFC.IntraFrPscelChFailTdcExp
-
- -
-
- iMeasInfoId: 2204
-
- iMeasType: 3
-
- measAdditionalFields:
-
- measAggregationLevels: "NGBTS NGCELL"
-
- measCollectionMethod: CC
-
- measCondition: "This measurement is updated when MeNB replies to X2AP: SgNB Modification Required message with the X2AP: SgNB Modification Refuse message."
-
- measDescription: "This counter indicates the number of intra gNB intra frequency PSCell change failures due to MeNB refusal."
-
- measFamily: NINFC
-
- measInfoId: "NR Intra Frequency PSCell Change"
-
- measLastChange: 5G19_1906_002
-
- measObjClass: NGCELL
-
- measResultRange: "0..4096"
-
- measResultType: integer
-
- measResultUnits: number
-
- measType: VS.NINFC.IntraFrPscelChFailMenbRef
-
+ pmHeader:
+ nfType: gnb-Nokia
+ pmDefSchemaVsn: 2.0
+ pmDefVsn: 5G19_1906_002
+ pmFields:
+ iMeasInfoId: 2204
+ iMeasType: 1
+ measCollectionMethod: CC
+ measCondition: "This measurement is updated when X2AP: SgNB Modification Required message is sent to MeNB with the SCG Change Indication set as PSCellChange."
+ measDescription: "This counter indicates the number of intra gNB intra frequency PSCell change attempts."
+ measFamily: NINFC
+ measInfoId: "NR Intra Frequency PSCell Change"
+ measLastChange: 5G18A_1807_003
+ measObjClass: NGCELL
+ measResultRange: 0-4096
+ measResultType: integer
+ measResultUnits: number
+ measType: VS.NINFC.IntraFrPscelChAttempt
+ measAdditionalFields:
+ vendorField1: X
+ vendorField2: B
+ ...
+ ---
+ pmMetaData:
+ pmHeader:
+ nfType: gnb-Nokia
+ pmDefSchemaVsn: 2.0
+ pmDefVsn: 5G19_1906_002
+ pmFields:
+ iMeasInfoId: 2204
+ iMeasType: 2
+ measCollectionMethod: CC
+ measCondition: "This measurement is updated when the TDCoverall timer has elapsed before gNB receives the X2AP: SgNB Modification Confirm message."
+ measDescription: "This measurement indicates the number of intra gNB intra frequency PSCell change failures due to TDCoverall timer expiry."
+ measFamily: NINFC
+ measInfoId: "NR Intra Frequency PSCell Change"
+ measLastChange: 5G18A_1807_003
+ measObjClass: NGCELL
+ measResultRange: 0-4096
+ measResultType: integer
+ measResultUnits: number
+ measType: VS.NINFC.IntraFrPscelChFailTdcExp
+ measAdditionalFields:
+ vendorField1: Y
+ ...
+ ---
+ pmMetaData:
+ pmHeader:
+ nfType: gnb-Nokia
+ pmDefSchemaVsn: 2.0
+ pmDefVsn: 5G19_1906_002
+ pmFields:
+ iMeasInfoId: 2206
+ iMeasType: 1
+ measCollectionMethod: CC
+ measCondition: "This measurement is updated when MeNB replies to X2AP: SgNB Modification Required message with the X2AP: SgNB Modification Refuse message."
+ measDescription: "This counter indicates the number of intra gNB intra frequency PSCell change failures due to MeNB refusal."
+ measFamily: NINFC
+ measInfoId: "NR Intra Frequency PSCell Change"
+ measLastChange: 5G19_1906_002
+ measObjClass: NGCELL
+ measResultRange: 0-4096
+ measResultType: integer
+ measResultUnits: number
+ measType: VS.NINFC.IntraFrPscelChFailMenbRef
+ measAdditionalFields:
+ vendorField1: Z
+ vendorField2: A
+ ...
FM Meta Data
~~~~~~~~~~~~~
@@ -1595,7 +1568,7 @@ Data and Fault Meta Data:
qualifier of faultFields.alarmAdditionalInformation within each
alarm.
-FM Meta Data keywords must be provided in ‘hash format’ as Keyword:
+FM Meta Data keywords must be provided in 'hash format' as Keyword:
Value. Values containing whitespace must be enclosed in single quotes.
Successive keywords must be separated by commas. These conventions will
make machine processing of FM Meta Data Keywords easier to perform.
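+
+For example, a hypothetical set of FM Meta Data keywords (the keyword names
+and values below are illustrative only, not the normative keyword list) could
+be supplied as:
+
+.. code-block:: yaml
+
+   # hypothetical FM Meta Data keywords in hash format
+   alarmId: 1001, alarmText: 'example link down alarm', eventSeverity: MAJOR
+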
@@ -1961,7 +1934,7 @@ Measurements
.. code-block:: yaml
- # registration for Mfvs_vMRF
+ # registration for Measurement_vMrf
# Constants: the values of domain, eventName, priority, version,
# measurementFieldsVersion, additionalMeasurements.namedArrayOfFields.name,
# Variables (to be supplied at runtime) include: eventId, reportingEntityName, sequence,
@@ -1985,8 +1958,8 @@ Measurements
presence: required, structure: {
commonEventHeader: {
presence: required, structure: {
- domain: {presence: required, value: measurementsForVfScaling},
- eventName: {presence: required, value: Mfvs_vMrf},
+ domain: {presence: required, value: measurement},
+ eventName: {presence: required, value: Measurement_vMrf},
eventId: {presence: required},
nfNamingCode: {value: mrfx},
priority: {presence: required, value: Normal},
@@ -1998,11 +1971,12 @@ Measurements
startEpochMicrosec: {presence: required},
lastEpochMicrosec: {presence: required},
- version: {presence: required, value: 3.0}
+ version: {presence: required, value: 3.0},
+ vesEventListenerVersion: {presence: required, value: 7.1.1}
}
},
- measurementsForVfScalingFields: {
+ measurement: {
presence: required, structure: {
- measurementFieldsVersion: {presence: required, value: 2.0},
+ measurementFieldsVersion: {presence: required, value: 4.0},
measurementInterval: {presence: required, range: [ 60, 3600 ], default: 300},
concurrentSessions: {presence: required, range: [ 0, 100000 ]},
requestRate: {presence: required, range: [ 0, 100000 ]},
@@ -2440,7 +2414,7 @@ Syslog
syslogFieldsVersion: {presence: required, value: 3.0},
syslogMsg: {presence: required},
syslogSData: {
- presence: required, keyValuePairString: {‘ ‘, =, keyValuePairs: [
+ presence: required, keyValuePairString: {' ', =, keyValuePairs: [
keyValuePair: {
presence: required, structure: {
key: {presence: required, value: ATTEST},
@@ -2873,7 +2847,7 @@ Contents.
| 3/15/2017 | 1.0 | This is the initial release of the VES Event |
| | | Registration document. |
+------------+----------+-----------------------------------------------------+
-| 3/22/2017 | 1.1 | - Changed the ‘alert’ qualifier to ‘action’ and |
+| 3/22/2017 | 1.1 | - Changed the 'alert' qualifier to 'action' and |
| | | added support for conditions that will trigger |
| | | rules. |
| | | |
@@ -2894,14 +2868,14 @@ Contents.
| | | |
| | | - Wordsmithed throughout. |
+------------+----------+-----------------------------------------------------+
-| 3/31/2017 | 1.3 | - Generalized the descriptions from an ASDC, ECOMP |
+| 3/31/2017 | 1.3 | - Generalized the descriptions from an SDC, ECOMP |
| | | and AT&T-specific interaction with a VNF vendor, |
| | | to a generic Service Provider interaction with a |
| | | VNF vendor. |
| | | |
| | | - Wordsmithed throughout. |
| | | |
-| | | - Added a ‘default’ qualifier |
+| | | - Added a 'default' qualifier |
| | | |
| | | - Fixed syntax and semantic inconsistencies in the |
| | | Rules section |
@@ -2918,10 +2892,10 @@ Contents.
+------------+----------+-----------------------------------------------------+
| 4/14/2017 | 1.4 | - Wordsmithed throughout |
| | | |
-| | | - Action keyword: clarified use of ‘up’, ‘down’ and |
-| | | ‘at’ triggers; clarified the specification and use|
-| | | of microservices directives at design time and |
-| | | runtime, clarified the use of tca’s |
+| | | - Action keyword: clarified use of ``up``, ``down``,|
+| | | ``at`` triggers; clarified the specification and |
+| | | use of microservices directives at design time and|
+| | | runtime, clarified the use of tca's |
| | | |
| | | - HeartbeatAction keyword: Added the heartbeatAction|
| | | keyword |
@@ -2938,7 +2912,7 @@ Contents.
| 10/3/2017 | 1.5 | - Back of Cover Page: updated the license and |
| | | copyright notice to comply with ONAP guidelines |
| | | |
-| | | - Section 3.1: Added a ‘Units’ qualifier |
+| | | - Section 3.1: Added a 'Units' qualifier |
| | | |
| | | - Examples: updated the examples to align with VES |
| | | 5.4.1 |
@@ -2961,7 +2935,7 @@ Contents.
| | | - Wordsmithed the Introduction |
+------------+----------+-----------------------------------------------------+
| 6/28/2018 | 2.0 | - Updated to align with the change of the |
-| | | ‘measurementsForVfScaling’ domain to ‘measurement’|
+| | | 'measurementsForVfScaling' domain to 'measurement'|
| | | |
| | | - measurementsForVfScaling measurement |
| | | |
@@ -2970,7 +2944,7 @@ Contents.
| | | - measurementsForVfScalingVersion |
| | | measurementFieldsVersion |
| | | |
-| | | - the ‘mfvs’ abbreviation measurement |
+| | | - the 'mfvs' abbreviation measurement |
| | | |
| | | 1. Clarified YAML file naming. |
| | | |
@@ -2995,9 +2969,9 @@ Contents.
| | | |
| | | 10. Modified the Examples as follows: |
| | | |
-| | | - changed ‘faultFieldsVersion’ to 3.0 |
+| | | - changed 'faultFieldsVersion' to 3.0 |
| | | |
-| | | - changed ‘heartbeatFieldsVersion’ to 2.0 |
+| | | - changed 'heartbeatFieldsVersion' to 2.0 |
| | | |
| | | - provided guidance at the top of the Measurements |
| | | examples as to how to send extensible fields |
@@ -3005,42 +2979,42 @@ Contents.
| | | eliminate the need for custom development at the |
| | | service provider. |
| | | |
-| | | - changed ‘measurementFieldsVersion’ to 3.0 |
+| | | - changed 'measurementFieldsVersion' to 3.0 |
| | | |
| | | - changed measurementFields.additionalMeasurements |
-| | | to reference a ‘namedHashMap’ |
+| | | to reference a 'namedHashMap' |
| | | |
-| | | - ‘field’ is replaced by ‘keyValuePair’ |
+| | | - 'field' is replaced by 'keyValuePair' |
| | | |
-| | | - ‘name’ is replaced by ‘key’ |
+| | | - 'name' is replaced by 'key' |
| | | |
-| | | - changed ‘namedArrayOfFields’ to ‘namedHashMap’ |
+| | | - changed 'namedArrayOfFields' to 'namedHashMap' |
| | | |
| | | - fixed the mobile Flow example to show the |
-| | | ‘mobileFlowFields’, show the |
-| | | ‘mobileFlowFieldsVersion’ at 3.0, modify |
-| | | ‘additionalInformation’ to use a hashMap |
+| | | 'mobileFlowFields', show the |
+| | | 'mobileFlowFieldsVersion' at 3.0, modify |
+| | | 'additionalInformation' to use a hashMap |
| | | |
-| | | - ‘field’ is replaced by ‘keyValuePair’ |
+| | | - 'field' is replaced by 'keyValuePair' |
| | | |
-| | | - ‘name’ is replaced by ‘key’ |
+| | | - 'name' is replaced by 'key' |
| | | |
-| | | - changed ‘sipSignalingFieldsVersion’ to 2.0 |
+| | | - changed 'sipSignalingFieldsVersion' to 2.0 |
| | | |
-| | | - changed ‘additionalInformation’ to use a hashmap |
+| | | - changed 'additionalInformation' to use a hashmap |
| | | |
-| | | - ‘field’ is replaced by ‘keyValuePair’ |
+| | | - 'field' is replaced by 'keyValuePair' |
| | | |
-| | | - ‘name’ is replaced by ‘key’ |
+| | | - 'name' is replaced by 'key' |
| | | |
| | | - fixed the voiceQuality example to show the |
-| | | ‘voiceQualityFields’, show the |
-| | | ‘voiceQualityFieldsVersion’ at 2.0 and modify |
-| | | ‘additionalInformation’ to use a hashMap |
+| | | 'voiceQualityFields', show the |
+| | | 'voiceQualityFieldsVersion' at 2.0 and modify |
+| | | 'additionalInformation' to use a hashMap |
| | | |
-| | | - ‘field’ is replaced by ‘keyValuePair’ |
+| | | - 'field' is replaced by 'keyValuePair' |
| | | |
-| | | - ‘name’ is replaced by ‘key’ |
+| | | - 'name' is replaced by 'key' |
| | | |
| | | - Modified the rules example to conform to the |
| | | Complex Conditions and Rules sections. |
@@ -3058,14 +3032,14 @@ Contents.
| | | - Section 3.2.1: corrected and clarified |
| | | |
| | | - Section 3.2.3 Clarified number of conditions |
-| | | that may be and’d or or’d |
+| | | that may be and'd or or'd |
| | | |
| | | - Section 3.2.4: fixed reference to PersistentB1 |
| | | |
| | | - Section 3.2.6: fixed math in example |
| | | |
-| | | - Section 3.3.2: changed reference from ‘alerts’ to |
-| | | ‘events’ |
+| | | - Section 3.3.2: changed reference from 'alerts' to |
+| | | 'events' |
+------------+----------+-----------------------------------------------------+
| 7/30/2018 | 3.0 | - Removed the isHomogeneous keyword. |
| | | |
@@ -3085,4 +3059,6 @@ Contents.
| | | - Changed the location of the doc to VNF |
| | | Requirements and changed the formatting |
+------------+----------+-----------------------------------------------------+
-
+| 01/28/2020 | 3.2.1 | - Minor formatting changes |
+| | | - Updated performance metric schema and examples |
++------------+----------+-----------------------------------------------------+
diff --git a/docs/Chapter8/index.rst b/docs/Chapter8/index.rst
index 9f0390d..b49d191 100644
--- a/docs/Chapter8/index.rst
+++ b/docs/Chapter8/index.rst
@@ -24,5 +24,6 @@ Appendix
VNF-License-Information-Guidelines
TOSCA-model
Ansible-Playbook-Examples
- VES_Registraion_3_2.rst
+ VES_Registration_3_2.rst
ves7_1spec.rst
+ OPNFV-Verified-Badging.rst
diff --git a/docs/Chapter8/input-VNF-API-fail-single_module.zip b/docs/Chapter8/input-VNF-API-fail-single_module.zip
new file mode 100644
index 0000000..69319eb
--- /dev/null
+++ b/docs/Chapter8/input-VNF-API-fail-single_module.zip
Binary files differ
diff --git a/docs/Chapter8/input-VNF-API-pass-single_module.zip b/docs/Chapter8/input-VNF-API-pass-single_module.zip
new file mode 100644
index 0000000..7474183
--- /dev/null
+++ b/docs/Chapter8/input-VNF-API-pass-single_module.zip
Binary files differ
diff --git a/docs/Chapter8/tosca_vnf_test_environment.png b/docs/Chapter8/tosca_vnf_test_environment.png
new file mode 100644
index 0000000..78b3f74
--- /dev/null
+++ b/docs/Chapter8/tosca_vnf_test_environment.png
Binary files differ
diff --git a/docs/Chapter8/tosca_vnf_test_flow.png b/docs/Chapter8/tosca_vnf_test_flow.png
new file mode 100644
index 0000000..87dc8ec
--- /dev/null
+++ b/docs/Chapter8/tosca_vnf_test_flow.png
Binary files differ