59 files changed, 2008 insertions, 667 deletions
@@ -1,57 +1,94 @@ Content --- -This repository contains all the HEAT templates for the instantiation of the ONAP platform, and the vFirewall and vLoadBalancer/vDNS demo applications. +The Demo repository contains the HEAT templates and scripts for the instantiation of the ONAP platform and use cases. The repository includes: -The repository includes: - - README.md: this file + - README.md: this file. - - LICENSE.TXT: the license text + - LICENSE.TXT: the license text. - - The "boot" directory: the scripts that instantiate ONAP. + - pom.xml: POM file used to build the software hosted in this repository. - - The "heat" directory: contains the following three directories: + - version.properties: current version number of the Demo repository. Format: MAJOR.MINOR.PATCH (e.g. 1.1.0) - - ONAP: contains the HEAT files for the installation of the ONAP platform. It includes the template onap_openstack.yaml and the environment file onap_openstack.env for vanilla OpenStack. + - The "boot" directory contains the scripts that install and configure ONAP: + - install.sh: sets up the host VM for specific components. This script runs only once, soon after the VM is created. + - vm\_init.sh: contains component-specific configuration, downloads and runs docker containers. For some components, this script may either call a component-specific script (cloned from Gerrit repository) or call docker-compose. + - serv.sh: it is installed in /etc/init.d, calls vm\_init.sh at each VM (re)boot. + - configuration files for the Bind DNS Server installed with ONAP. Currently, both simpledemo.openecomp.org and simpledemo.onap.org domains are supported. + - sdc\_ext\_volume_partitions.txt: file that contains external volume partitions for SDC. + + - The "docker\_update\_scripts" directory contains scripts that update all the docker containers of an ONAP instance. + + - The "heat" directory contains the following sub-directories: + + - ONAP: contains the HEAT files for the installation of the ONAP platform. NOTE: onap\_openstack.yaml AND onap\_openstack.env ARE THE HEAT TEMPLATE AND ENVIRONMENT FILE CURRENTLY SUPPORTED. onap\_openstack\_float.yaml, onap\_openstack\_float.env, onap\_openstack\_nofloat.yaml, onap\_openstack\_nofloat.env AND onap\_rackspace.yaml, onap\_rackspace.env AREN'T UPDATED AND THEIR USAGE IS DEPRECATED. + + - vCPE: contains sub-directories with HEAT templates for the installation of vCPE Infrastructure (Radius Server, DHCP, DNS, Web Server), vBNG, vBRG Emulator, vGMUX, and vGW. + + - vFW: contains the HEAT template for the instantiation of the vFirewall VNF (base\_vfw.yaml) and the environment file (base\_vfw.env) For Amsterdam release, this template is used for testing and demonstrating VNF instantiation only (no closed-loop). + + - vFWCL: contains two sub-directories, one that hosts the HEAT template for the vFirewall and vSink (vFWSNK/base\_vfw.yaml), and one that hosts the HEAT template for the vPacketGenerator (vPKG/base\_vpkg.yaml). For Amsterdam release, these templates are used for testing and demonstrating VNF instantiation and closed-loop. - - vFW: contains the HEAT template for the instantiation of the vFirewall VNF (base_vfw.yaml) and the environment file (base_vfw.env) + - vLB: contains the HEAT template for the instantiation of the vPacketGenerator/vLoadBalancer/vDNS VNF (base\_vlb.yaml) and the environment file (base\_vlb.env). The directory also contains the HEAT template for the DNS scaling-up scenario (dnsscaling.yaml) with its environment file (dnsscaling.env). 
- - vLB: contains the HEAT template for the instantiation of the vLoadBalancer/vDNS VNF (base_vlb.yaml) and the environment file (base_vlb.env). The folder also contains the HEAT template for the DNS scaling-up scenario (dnsscaling.yaml) with its environment file (dnsscaling.env), and the HEAT template for the vLB packet generator (packet_gen_vlb.yaml) and its environment file (packet_gen_vlb.env). + - vVG: contains the HEAT template for the instantiation of a volume group (base\_vvg.yaml and base\_vvg.env). + + - The "scripts" directory contains the deploy.sh script that uploads software artifacts to the Nexus repository during the build process. + + - The "tosca" directory contains an example of the TOSCA model of the vCPE infrastructure. + + - The "tutorials" directory contains tutorials for Clearwater\_IMS and for creating a Netconf mount point in APPC. The "VoLTE" sub-directory is currently not used. + + - The "vagrant" directory contains the scripts that install ONAP using Vagrant. - The "vnfs" directory: contains the following directories: - honeycomb_plugin: Honeycomb plugin that allows ONAP to change VNF configuration via RESTCONF or NETCONF protocols. - - VES: source code of the ONAP Vendor Event Listener (VES) Library. The VES library used here has been cloned from the GitHub repository at https://github.com/att/evel-library on February 1, 2017. + - vCPE: contains sub-directories with the scripts that install all the components of the vCPE use case. + + - VES: source code of the ONAP Vendor Event Listener (VES) Library. The VES library used here has been cloned from the GitHub repository at https://github.com/att/evel-library on February 1, 2017. (DEPRECATED FOR AMSTERDAM RELEASE) + + - VESreporting_vFW: VES client for vFirewall demo application. (DEPRECATED FOR AMSTERDAM RELEASE) - - VESreporting_vFW: VES client for vFirewall demo application. + - VESreporting_vLB: VES client for vLoadBalancer/vDNS demo application. (DEPRECATED FOR AMSTERDAM RELEASE) - - VESreporting_vLB: VES client for vLoadBalancer/vDNS demo application. + - VES5.0: source code of the ONAP Vendor Event Listener (VES) Library, version 5.0. (SUPPORTED FOR AMSTERDAM RELEASE) - - vFW: scripts that download, install and run packages for the vFirewall demo application. + - VESreporting_vFW5.0: VES v5.0 client for vFirewall demo application. (SUPPORTED FOR AMSTERDAM RELEASE) - - vLB: scripts that download, install and run packages for the vLoadBalancer/vDNS demo application. + - VESreporting_vLB5.0: VES v5.0 client for vLoadBalancer/vDNS demo application. (SUPPORTED FOR AMSTERDAM RELEASE) + + - vFW: scripts that download, install and run packages for the vFirewall use case. + + - vLB: scripts that download, install and run packages for the vLoadBalancer/vDNS use case. -ONAP HEAT Template +ONAP Installation in OpenStack Clouds via HEAT Template --- -The ONAP HEAT template spins up the entire ONAP platform. The template, onap_openstack.yaml, comes with an environment file, onap_openstack.env, in which all the default values are defined. +The ONAP HEAT template spins up the entire ONAP platform in OpenStack-based clouds. The template, onap\_openstack.yaml, comes with an environment file, onap\_openstack.env, in which all the default values are defined. + +NOTE: onap\_openstack.yaml AND onap\_openstack.env ARE THE HEAT TEMPLATE AND ENVIRONMENT FILE CURRENTLY SUPPORTED. 
onap\_openstack\_float.yaml, onap\_openstack\_float.env, onap\_openstack\_nofloat.yaml, onap\_openstack\_nofloat.env AND onap\_rackspace.yaml, onap\_rackspace.env AREN'T UPDATED AND THEIR USAGE IS DEPRECATED. As such, the following description refers to onap\_openstack.yaml and onap\_openstack.env. -The HEAT template is composed of two sections: (i) parameters, and (ii) resources. The parameter section contains the declaration and description of the parameters that will be used to spin up ONAP, such as public network identifier, URLs of code and artifacts repositories, etc. -The default values of these parameters can be found in the environment file. The resource section contains the definition of: - - ONAP Private Management Network, which ONAP components use to communicate with each other and with VNFs - - ONAP Virtual Machines (VMs) - - Public/private key pair used to access ONAP VMs - - Virtual interfaces towards the ONAP Private Management Network - - Disk volumes. +The HEAT template is composed of two sections: (i) parameters, and (ii) resources. -Each VM specification includes Operating System image name, VM size (i.e. flavor), VM name, etc. Each VM has two virtual network interfaces: one towards the public network and one towards the ONAP Private Management network, as described above. -Furthermore, each VM runs a post-instantiation script that downloads and installs software dependencies (e.g. Java JDK, gcc, make, Python, ...) and ONAP software packages and docker containers from remote repositories. + - The "parameters" section contains the declarations and descriptions of the parameters that will be used to spin up ONAP, such as public network identifier, URLs of code and artifacts repositories, etc. The default values of these parameters can be found in the environment file. -When the HEAT template is executed, the Openstack HEAT engine creates the resources defined in the HEAT template, based on the parameters values defined in the environment file. + - The "resources" section contains the definitions of: + - ONAP Private Management Network, which is used by ONAP components to communicate with each other and with VNFs + - ONAP Virtual Machines (VMs) + - Public/private key pair used to access ONAP VMs + - Virtual interfaces towards the ONAP Private Management Network + - Disk volumes. -Before running HEAT, it is necessary to customize the environment file. Indeed, some parameters, namely public_net_id, pub_key, openstack_tenant_id, openstack_username, and openstack_api_key, need to be set depending on the user's environment: +Each VM specification includes Operating System image name, VM size (i.e. flavor), VM name, etc. Each VM has a virtual network interface with a private IP address in the ONAP Private Management network and a floating IP that OpenStack assigns based on availability. +Furthermore, each VM runs an install.sh script that downloads and installs software dependencies (e.g. Java JDK, gcc, make, Python, ...). install.sh finally calls vm_init.sh that downloads docker containers from remote repositories and runs them. + +When the HEAT template is executed, the OpenStack HEAT engine creates the resources defined in the HEAT template, based on the parameter values defined in the environment file. + +Before running HEAT, it is necessary to customize the environment file. 
Indeed, some parameters, namely public\_net\_id, pub\_key, openstack\_tenant\_id, openstack\_username, and openstack\_api\_key, need to be set depending on the user's environment: public_net_id: PUT YOUR NETWORK ID/NAME HERE pub_key: PUT YOUR PUBLIC KEY HERE @@ -62,14 +99,13 @@ Before running HEAT, it is necessary to customize the environment file. Indeed, keystone_url: PUT THE KEYSTONE URL HERE (do not include version number) -openstack_region parameter is set to RegionOne (OpenStack default). If your OpenStack is using another Region, please modify this parameter. - -public_net_id is the unique identifier (UUID) or name of the public network of the cloud provider. To get the public_net_id, use the following OpenStack CLI command (ext is the name of the external network, change it with the name of the external network of your installation) +openstack\_region parameter is set to RegionOne (OpenStack default). If your OpenStack is using another Region, please modify this parameter. - openstack network list | grep ext | awk '{print $2}' +public\_net\_id is the unique identifier (UUID) or name of the public network of the cloud provider. To get the public\_net\_id, use the following OpenStack CLI command (ext is the name of the external network, change it with the name of the external network of your installation) + openstack network list | grep ext | awk '{print $2}' -pub_key is string value of the public key that will be installed in each ONAP VM. To create a public/private key pair in Linux, please execute the following instruction: +pub\_key is the string value of the public key that will be installed in each ONAP VM. To create a public/private key pair in Linux, please execute the following instruction: user@ubuntu:~$ ssh-keygen -t rsa @@ -83,9 +119,10 @@ The following operations to create the public/private key pair occur: Your identification has been saved in /home/user/.ssh/id_rsa. Your public key has been saved in /home/user/.ssh/id_rsa.pub. -openstack_username, openstack_tenant_id (password), and openstack_api_key are user's credentials to access the OpenrStack-based cloud. Note that in the Rackspace web interface, openstack_api_key can be found by clicking on the username on the top-right corner of the GUI and then "Account Settings". +openstack\_username, openstack\_tenant\_id (password), and openstack\_api\_key are the user's credentials to access the OpenStack-based cloud. Some global parameters used for all components are also required: + ubuntu_1404_image: PUT THE UBUNTU 14.04 IMAGE NAME HERE ubuntu_1604_image: PUT THE UBUNTU 16.04 IMAGE NAME HERE flavor_small: PUT THE SMALL FLAVOR NAME HERE @@ -102,182 +139,206 @@ To get the flavor names used in your OpenStack environment, use the following Op openstack flavor list -Some network parameters must be configured - dns_list: PUT THE ADDRESS OF THE EXTERNAL DNS HERE (e.g. a comma-separated list of IP addresses in your /etc/resolv.conf in UNIX-based Operating Systems). THIS LIST MUST INCLUDE THE DNS SERVER THAT OFFERS DNS AS AS SERVICE (see DCAE section below for more details) - external_dns: PUT THE FIRST ADDRESS OF THE EXTERNAL DNS LIST HERE +Some network parameters must be configured: + + dns_list: PUT THE ADDRESS OF THE EXTERNAL DNS HERE (e.g. a comma-separated list of IP addresses in your /etc/resolv.conf in UNIX-based Operating Systems). 
+ external_dns: PUT THE FIRST ADDRESS OF THE EXTERNAL DNS LIST HERE (THIS WILL BE DEPRECATED SOON) + dns_forwarder: PUT THE IP OF DNS FORWARDER FOR ONAP DEPLOYMENT'S OWN DNS SERVER oam_network_cidr: 10.0.0.0/16 -You can use the Google Public DNS 8.8.8.8 and 4.4.4.4 address or your internal DNS servers +ONAP installs a DNS server used to resolve IP addresses in the ONAP OAM private network. ONAP Amsterdam Release also requires OpenStack Designate DNS support for the DCAE platform, so as to allow IP address discovery and communication among DCAE elements. This is required because the ONAP HEAT template only installs the DCAE bootstrap container, which will in turn install the entire DCAE platform. As such, at installation time, the IP addresses of the DCAE components are unknown. The DNS server that ONAP installs needs to be connected to the Designate DNS to allow communication between the DCAE elements and the other ONAP components. To this end, dns\_list, external\_dns, and dns\_forwarder should all have the IP address of the Designate DNS. These three parameters are redundant, but still required for Amsterdam Release. Originally, dns\_list and external\_dns were both used to circumvent some limitations of older OpenStack versions. In future releases, the DNS settings and parameters in HEAT will be consolidated. The Designate DNS is configured to access the external DNS. As such, the ONAP DNS will forward to the Designate DNS the queries from ONAP components to the external world. The Designate DNS will then forward those queries to the external DNS. -DCAE spins up ONAP's data collection and analytics system in two phases. The first is the launching of a bootstrap VM that is specified in the ONAP Heat template. This VM requires a number of deployment specific conifiguration parameters being provided so that it can subsequently bring up the DCAE system. There are two groups of parameters. The first group relates to the launching of DCAE VMs, including parameters such as the keystone URL and additional VM image IDs/names. DCAE VMs are connected to the same internal network as the rest of ONAP VMs, but dynamically spun up by the DCAE core platform. Hence these parameters need to be provided to DCAE. Note that although DCAE VMs will be launched in the same tenant as the rest of ONAP, because DCAE may use MultiCloud node as the agent for interfacing with the underying cloud, it needs a separate keystone URL (which points to MultiCloud node instead of the underlying cloud). The second group of configuration parameters relate to DNS As A Service support (DNSaaS). DCAE requires DNSaaS for registering its VMs into organization-wide DNS service. For OpenStack, DNSaaS is provided by Designate. Designate support can be provided via an integrated service endpoint listed under the service catalog of the OpenStack installation; or proxyed by the ONAP MultiCloud service. For the latter case, a number of parameters are needed to configure MultiCloud to use the correct Designate service. These parameters are described below: +DCAE spins up ONAP's data collection and analytics system in two phases. The first is the launching of a bootstrap VM that is specified in the ONAP Heat template, as described above. This VM requires a number of deployment-specific configuration parameters being provided so that it can subsequently bring up the DCAE system. There are two groups of parameters. 
The first group relates to the launching of DCAE VMs, including parameters such as the keystone URL and additional VM image IDs/names. Hence these parameters need to be provided to DCAE. Note that although DCAE VMs will be launched in the same tenant as the rest of ONAP, because DCAE may use MultiCloud node as the agent for interfacing with the underlying cloud, it needs a separate keystone URL (which points to MultiCloud node instead of the underlying cloud). The second group of configuration parameters relates to DNS As A Service support (DNSaaS). DCAE requires DNSaaS for registering its VMs into an organization-wide DNS service. For OpenStack, DNSaaS is provided by Designate, as mentioned above. Designate support can be provided via an integrated service endpoint listed under the service catalog of the OpenStack installation; or proxied by the ONAP MultiCloud service. For the latter case, a number of parameters are needed to configure MultiCloud to use the correct Designate service. These parameters are described below:

 - dcae_keystone_url: PUT THE KEYSTONE URL OF THE OPENSTACK INSTANCE WHERE DCAE IS DEPLOYED (Note: put the MultiCloud proxy URL if the DNSaaS is proxyed by MultiCloud)
 - dcae_centos_7_image: PUT THE CENTOS7 IMAGE ID/NAME AVAILABLE AT THE OPENSTACK INSTANCE WHERE DCAE IS DEPLOYED
 - dcae_security_group: PUT THE SECURITY GROUP ID/NAME TO BE USED AT THE OPENSTACK INSTANCE WHERE DCAE IS DEPLOYED
 - dcae_key_name: PUT THE ACCESS KEY-PAIR NAME REGISTER AT THE OPENSTACK INSTANCE WHERE DCAE IS DEPLOYED
 - dcae_public_key: PUT THE PUBLIC KEY OF A KEY-PAIR USED FOR DCAE BOOTSTRAP NODE TO COMMUNICATE WITH DCAE VMS
 - dcae_private_key: PUT THE PRIVATE KEY OF A KEY-PAIR USED FOR DCAE BOOTSTRAP NODE TO COMMUNICATE WITH DCAE VMS
 + dcae_keystone_url: PUT THE MULTIVIM PROVIDED KEYSTONE API URL HERE
 + dcae_centos_7_image: PUT THE CENTOS7 VM IMAGE NAME HERE FOR DCAE LAUNCHED CENTOS7 VM
 + dcae_domain: PUT THE NAME OF DOMAIN THAT DCAE VMS REGISTER UNDER
 + dcae_public_key: PUT THE PUBLIC KEY OF A KEYPAIR HERE TO BE USED BETWEEN DCAE LAUNCHED VMS
 + dcae_private_key: PUT THE SECRET KEY OF A KEYPAIR HERE TO BE USED BETWEEN DCAE LAUNCHED VMS
 - dnsaas_config_enabled: true for false FOR WHETHER DNSAAS IS PROXYED
 - dnsaas_region: PUT THE REGION OF THE OPENSTACK INSTANCE WHERE DNSAAS IS PROVIDED
 - dnsaas_tenant_id: PUT THE TENANT ID/NAME OF THE OPENSTACK INSTANCE WHERE DNSAAS IS PROVIDED
 - dnsaas_keystone_url: PUT THE KEYSTONE URL OF THE OPENSTACK INSTANCE WHERE DNSAAS IS PROVIDED
 - dnsaas_username: PUT THE USERNAME OF THE OPENSTACK INSTANCE WHERE DNSAAS IS PROVIDED
 - dnsaas_password: PUT THE PASSWORD OF THE OPENSTACK INSTANCE WHERE DNSAAS IS PROVIDED
 + dnsaas_config_enabled: PUT WHETHER TO USE PROXIED DESIGNATE
 + dnsaas_region: PUT THE DESIGNATE PROVIDING OPENSTACK'S REGION HERE
 + dnsaas_keystone_url: PUT THE DESIGNATE PROVIDING OPENSTACK'S KEYSTONE URL HERE
 + dnsaas_tenant_name: PUT THE TENANT NAME IN THE DESIGNATE PROVIDING OPENSTACK HERE (FOR R1 USE THE SAME AS openstack_tenant_name)
 + dnsaas_username: PUT THE DESIGNATE PROVIDING OPENSTACK'S USERNAME HERE
 + dnsaas_password: PUT THE DESIGNATE PROVIDING OPENSTACK'S PASSWORD HERE

The ONAP platform can be instantiated via Horizon (OpenStack dashboard) or Command Line.
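A quick way to confirm that the environment file has been fully customized is to search for leftover placeholders before launching the stack. This is only a convenience sketch: it assumes the environment file is named onap_openstack.env and that, as in the examples above, every default placeholder contains the word "PUT".

    # list any parameter still carrying a default placeholder (expect no output once the file is customized)
    grep -n "PUT " onap_openstack.env

Any line printed by the command still needs a real value.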
Instantiation via Horizon:
 + - Login to Horizon URL with your personal credentials
 - Click "Stacks" from the "Orchestration" menu
 - Click "Launch Stack"
 - - Paste or manually upload the HEAT template file (onap_openstack.yaml) in the "Template Source" form
 - - Paste or manually upload the HEAT environment file (onap_openstack.env) in the "Environment Source" form
 + - Paste or manually upload the HEAT template file (onap\_openstack.yaml) in the "Template Source" form
 + - Paste or manually upload the HEAT environment file (onap\_openstack.env) in the "Environment Source" form
 - Click "Next"
 - Specify a name in the "Stack Name" form
 - Provide the password in the "Password" form
 - Click "Launch"

Instantiation via Command Line:
 + - Install the HEAT client on your machine, e.g. in Ubuntu (ref. http://docs.openstack.org/user-guide/common/cli-install-openstack-command-line-clients.html):
 apt-get install python-dev python-pip
 pip install python-heatclient # Install heat client
 pip install python-openstackclient # Install the Openstack client to support multiple services
 - - Create a file (named i.e. ~/openstack/openrc) that sets all the environmental variables required to access Rackspace:
 + - Create a file (named e.g. ~/openstack/openrc) that sets all the environment variables required to access the OpenStack platform:
 export OS_AUTH_URL=INSERT THE AUTH URL HERE
 export OS_USERNAME=INSERT YOUR USERNAME HERE
 export OS_TENANT_ID=INSERT YOUR TENANT ID HERE
 export OS_REGION_NAME=INSERT THE REGION HERE
 export OS_PASSWORD=INSERT YOUR PASSWORD HERE
 - - - Run the script from command line:
 + + Alternatively, you can download the OpenStack RC file from the dashboard: Compute -> Access & Security -> API Access -> Download RC File
 + + - Source the script or RC file from command line:
 source ~/openstack/openrc
 - In order to install the ONAP platform, type:
 - heat stack-create STACK_NAME -f PATH_TO_HEAT_TEMPLATE(YAML FILE) -e PATH_TO_ENV_FILE # Old HEAT client, OR
 - openstack stack create -t PATH_TO_HEAT_TEMPLATE(YAML FILE) -e PATH_TO_ENV_FILE STACK_NAME # New Openstack client
 + openstack stack create -t PATH_TO_HEAT_TEMPLATE(YAML FILE) -e PATH_TO_ENV_FILE STACK_NAME # New Openstack client, OR
 + heat stack-create STACK_NAME -f PATH_TO_HEAT_TEMPLATE(YAML FILE) -e PATH_TO_ENV_FILE # Old HEAT client

-VNFs HEAT Templates
+vFirewall Use Case
---

-The HEAT templates for the demo applications are stored in heat/vFW and heat/vLB directories.
-
-vFW contains the HEAT template, base_vfw.yaml, and the environment file, base_vfw.env, that are used to instantiate a virtual firewall. The VNF is composed of three VMs:
 - - Packet generator
 - - Firewall
 - - Sink
+The use case is composed of three virtual functions (VFs): packet generator, firewall, and traffic sink. These VFs run in three separate VMs. The packet generator sends packets to the packet sink through the firewall. The firewall reports the volume of traffic passing through to the ONAP DCAE collector. To check the traffic volume that lands at the sink VM, you can access the link http://sink\_ip\_address:667 through your browser and enable automatic page refresh by clicking the "Off" button. You can see the traffic volume in the charts.

-The packet generator generates traffic that passes through the firewall and reaches the sink. The firewall periodically reports the number of packets received in a unit of time to the DCAE collector.
If the reported number of packets received on the firewall is above a high-water mark or below a low-water mark, ONAP will enforce a configuration change on the packet generator, reducing or augmenting the quantity of traffic generated, respectively. +The packet generator includes a script that periodically generates different volumes of traffic. The closed-loop policy has been configured to re-adjust the traffic volume when high-water or low-water marks are crossed. -vLB contains the HEAT template, base_vlb.yaml, and the environment file, base_vlb.env, that are used to spin up a virtual load balancer and a virtual DNS. vLB also contains the HEAT template, packet_gen_vlb.yaml, and the environment file packet_gen_vlb.env, of a packet generator that generates DNS queries. -The load balancer periodically reports the number of DNS query packets received in a time unit to the DCAE collector. If the reported number of received packets crosses a threshold, then ONAP will spin up a new DNS based on the dnsscaling.yaml HEAT template and dnsscaling.env to better balance the load of incoming DNS queries. +__Closed-Loop for vFirewall demo:__ -The vFW and vLB HEAT templates and environment files are onboarded into ONAP SDC and run automatically. The user is not required to run these templates manually. -However, before onboarding the templates following the instructions in the ONAP documentation, the user should set the following values in the environment files: +Through the ONAP Portal's Policy Portal, we can find the configuration and operation policies that are currently enabled for the vFirewall use case. - public_net_id: INSERT YOUR NETWORK ID/NAME HERE - pub_key: INSERT YOUR PUBLIC KEY HERE - +- The configuration policy sets the thresholds for generating an onset event from DCAE to the Policy engine. Currently, the high-water mark is set to 700 packets while the low-water mark is set to 300 packets. The measurement interval is set to 10 seconds. +- When a threshold is crossed (i.e. the number of received packets is below 300 packets or above 700 packets per 10 seconds), the Policy engine executes the operational policy to request APPC to adjust the traffic volume to 500 packets per 10 seconds. +- APPC sends a request to the packet generator to adjust the traffic volume. +- Changes to the traffic volume can be observed through the link http://sink\_ip\_address:667. -ONAP Demo applications ---- +__Adjust packet generator:__ -Demo applications are installed and run automatically when the VNFs are instantiated. The user is not supposed to download and install the demo applications manually. +The packet generator contains 10 streams: fw\_udp1, fw\_udp2, fw\_udp3, . . . , fw\_udp10. Each stream generates 100 packets per 10 seconds. A script in /opt/run\_traffic\_fw\_demo.sh on the packet generator VM starts automatically and alternates high traffic (i.e. 10 active streams at the same time) and low traffic (1 active stream) every 5 minutes. -Two demo applications, vFirewall and vLoadBalancer/vDNS are included. +To enable a stream, include *{"id":"fw_udp1", "is-enabled":"true"}* in the *pg-stream* bracket. -vFirewall ---- +To adjust the traffic volume produced by the packet generator, run the following command in a shell, replacing PacketGen_IP in the HTTP argument with localhost (if you run it in the packet generator VM) or the packet generator IP address: -The vFirewall application contains 3 VMs: a firewall, a packet generator, and a packet sink. 
-The packet generator sends packets to the packet sink through the firewall. The firewall reports the volume of traffic from the packet generator to the sink to ONAP DCAE’s collector. To check the traffic volume to the sink, you can access the link http://sink_ip_address:667 through your browser. You can see the traffic volume in the charts. + curl -X PUT -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{"pg-streams":{"pg-stream": [{"id":"fw_udp1", "is-enabled":"true"},{"id":"fw_udp2", "is-enabled":"true"},{"id":"fw_udp3", "is-enabled":"true"},{"id":"fw_udp4", "is-enabled":"true"},{"id":"fw_udp5", "is-enabled":"true"}]}}' "http://PacketGen_IP:8183/restconf/config/sample-plugin:sample-plugin/pg-streams" -The packet generator includes a script that periodically generates different volumes of traffic. +The command above enables 5 streams. -The closed-loop policy has been configured to re-adjust the traffic volume when it is needed. -__Closedloop for vFirewall demo:__ +vLoadBalancer/vDNS Use Case +--- -Through the ONAP Portal’s Policy Portal, we can find the configuration and operation policies that is currently enabled for the vFirewall application. -+ The configuration policy sets the thresholds for generating an onset event from DCAE to the Policy engine. Currently the thresholds are set to 300 packets and 700 packets, while the measurement interval is set to 10 seconds. -+ Once one of the thresholds is crossed (e.g. the number of received packets is below 300 packets or above 700 per 10 seconds), the Policy engine executes the operational policy to request APP-C to change the configuration of the packet generator. -+ APP-C sends a request to the packet generator to adjust the traffic volume to 500 packets per 10 seconds. -+ The traffic volume can be observed through the link http://sink_ip_address:667. +The use case is composed of three VFs: packet generator, load balancer, and DNS server. These VFs run in three separate VMs. The packet generator issues DNS lookup queries that reach the DNS server via the load balancer. DNS replies reach the packet generator via the load balancer as well. The load balancer reports the average amount of traffic per DNS over a time interval to the DCAE collector. When the average amount of traffic per DNS server crosses a predefined threshold, the closed-loop is triggered and a new DNS server is instantiated. -__Adjust packet generator:__ +To test the application, you can run a DNS query from the packet generator VM: -The packet generator contains 10 streams: fw_udp1, fw_udp2, fw_udp3,...fw_udp10. Each stream generates 100 packets per 10 seconds. + dig @vLoadBalancer_IP host1.dnsdemo.onap.org -To enable a stream, include *{"id":"fw_udp1", "is-enabled":"true"}* in the *pg-stream* bracket. +The output below means that the load balancer has been set up correctly, has forwarded the DNS queries to one DNS instance, and the packet generator has received the DNS reply message. + + ; <<>> DiG 9.10.3-P4-Ubuntu <<>> @192.168.9.111 host1.dnsdemo.onap.org + ; (1 server found) + ;; global options: +cmd + ;; Got answer: + ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31892 + ;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2 + ;; WARNING: recursion requested but not available + + ;; OPT PSEUDOSECTION: + ; EDNS: version: 0, flags:; udp: 4096 + ;; QUESTION SECTION: + ;host1.dnsdemo.onap.org. IN A + + ;; ANSWER SECTION: + host1.dnsdemo.onap.org. 
604800 IN A 10.0.100.101 + + ;; AUTHORITY SECTION: + dnsdemo.onap.org. 604800 IN NS dnsdemo.onap.org. + + ;; ADDITIONAL SECTION: + dnsdemo.onap.org. 604800 IN A 10.0.100.100 + + ;; Query time: 0 msec + ;; SERVER: 192.168.9.111#53(192.168.9.111) + ;; WHEN: Fri Nov 10 17:39:12 UTC 2017 + ;; MSG SIZE rcvd: 97 + -To adjust the traffic volume sending from the packet generator, run the following command that enable 5 streams in a shell with localhost or the correct packet generator IP address in the http argument: +__Closedloop for vLoadBalancer/vDNS:__ -``` -curl -X PUT -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{"pg-streams":{"pg-stream": [{"id":"fw_udp1", "is-enabled":"true"},{"id":"fw_udp2", "is-enabled":"true"},{"id":"fw_udp3", "is-enabled":"true"},{"id":"fw_udp4", "is-enabled":"true"},{"id":"fw_udp5", "is-enabled":"true"}]}}' "http://PacketGen_IP:8183/restconf/config/sample-plugin:sample-plugin/pg-streams" -``` +Through the Policy Portal (accessible via the ONAP Portal), we can find the configuration and operation policies that are currently enabled for the vLoadBalancer/vDNS application. ++ The configuration policy sets the thresholds for generating an onset event from DCAE to the Policy engine. Currently, the threshold is set to 200 packets, while the measurement interval is set to 10 seconds. ++ Once the threshold is crossed (e.g. the number of received packets is above 200 packets per 10 seconds), the Policy engine executes the operational policy. The Policy engine queries A&AI to fetch the VNF UUID and sends a request to SO to spin up a new DNS instance for the VNF identified by that UUID. ++ SO spins up a new DNS instance. -A script in /opt/run_traffic_fw_demo.sh on the packet generator VM starts automatically and alternate the volume of traffic every 5 minutes. -vLoadBalancer/vDNS ---- +To change the volume of queries generated by the packet generator, run the following command in a shell, replacing PacketGen_IP in the HTTP argument with localhost (if you run it in the packet generator VM) or the packet generator IP address: -The vLoadBalancer/vDNS app contains 2 VMs in the base model: a load balancer and a DNS instance. When there are too many DNS queries, the closed-loop is triggered and a new DNS instance will be spun up. + curl -X PUT -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{"pg-streams":{"pg-stream": [{"id":"dns1", "is-enabled":"true"}]}}' "http://PacketGen_IP:8183/restconf/config/sample-plugin:sample-plugin/pg-streams" + ++ *{"id":"dns1", "is-enabled":"true"}* shows the stream *dns1* is enabled. The packet generator sends requests in the rate of 100 packets per 10 seconds. -To test the application, in the command prompt: ++ To increase the amount of traffic, you can enable more streams. The packet generator has 10 streams, *dns1*, *dns2*, *dns3* to *dns10*. Each of them generates 100 packets per 10 seconds. To enable the streams, please add *{"id":"dnsX", "is-enabled":"true"}* to the pg-stream bracket of the curl command, where *X* is the stream ID. 
-``` -# nslookup host1.dnsdemo.onap.org *vLoadBalancer_IP* +For example, if you want to enable 3 streams, the curl command will be: -Server: *vLoadBalancer_IP* -Address: *vLoadBalancer_IP* + curl -X PUT -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{"pg-streams":{"pg-stream": [{"id":"dns1", "is-enabled":"true"}, {"id":"dns2", "is-enabled":"true"},{"id":"dns3", "is-enabled":"true"}]}}' "http://PacketGen_IP:8183/restconf/config/sample-plugin:sample-plugin/pg-streams" -Name: host1.dnsdemo.onap.org -Address: 10.0.100.101 +When the VNF starts, the packet generator is automatically configured to run 5 streams. -``` +vVolumeGroup Use Case +--- -That means the load balancer has been set up correctly and has forwarded the DNS queries to the DNS instance. +The vVG directory contains the HEAT template (base\_vvg.yaml) and environment file (base\_vvg.env) used to spin up a volume group in OpenStack and attach it to an existing ONAP instance. -__Closedloop for vLoadBalancer/vDNS:__ +The HEAT environment file contains two parameters: -Through the Policy Portal (accessible via the ONAP Portal), we can find the configuration and operation policies that are currently enabled for the vLoadBalancer/vDNS application. -+ The configuration policy sets the thresholds for generating an onset event from DCAE to the Policy engine. Currently the threshold is set to 200 packets, while the measurement interval is set to 10 seconds. -+ Once the threshold is crossed (e.g. the number of received packets is above 200 packets per 10 seconds), the Policy engine executes the operational policy to query A&AI and send a request to MSO for spinning up a new DNS instance. -+ A new DNS instance will be then spun up. + volume_size: 100 + nova_instance: 1234456 +volume\_size is the size (in gigabytes) of the volume group. nova\_instance is the name or UUID of the VM to which the volume group will be attached. This parameter should be changed appropriately. -__Generate DNS queries:__ -To generate DNS queries to the vLoadBalancer/vDNS instance, a separate packet generator is prepared for this purpose. +ONAP Use Cases HEAT Templates +--- -1. Spin up the heat template in the repository: https://link_to_repo/demo/heat/vLB/packet_gen_vlb.yaml. +USE CASE VNFs SHOULD BE INSTANTIATED VIA ONAP. THE USER IS NOT SUPPOSED TO DOWNLOAD THE HEAT TEMPLATES AND RUN THEM MANUALLY. -2. Log in to the packet generator instance through ssh. +The vFWCL directory contains two HEAT templates, one for creating a packet generator (vPKG/base\_vpkg.yaml) and one for creating a firewall and a packet sink (vFWSNK/base\_vfw.yaml). This use case supports VNF onboarding, instantiation, and closed-loop. The vFW directory, instead, contains a single HEAT template (base\_vfw) that spins up the three VFs. This use case supports VNF onboarding and instantiation only (no support for closed-loop). For Amsterdam Release, the HEAT templates in vFWCL are recommended, so that users can test and demonstrate the entire ONAP end-to-end flow. -3. Change the IP address in the config file /opt/config/vlb_ipaddr.txt to the public IP address of the LoadBalancer instance. +The vLB directory contains a base HEAT template (base\_vlb.yaml) that install a packet generator, a load balancer, and a DNS instance, plus another HEAT template (dnsscaling.yaml) for the DNS scaling scenario, in which another DNS server is instantiated. -4. 
Execute the script /opt/vdnspacketgen_change_streams_ports.sh to restart sending the DNS queries to the new LoadBalancer address. +Before onboarding the VNFs in SDC, the user should set the following values in the HEAT environment files: -5. To change the volume of queries, execute the following command in a command prompt with the updated vLoadBalancer_IP address or localhost in the http argument: - -``` -curl -X PUT -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{"pg-streams":{"pg-stream": [{"id":"dns1", "is-enabled":"true"}]}}' "http://vLoadBalancer_IP:8183/restconf/config/sample-plugin:sample-plugin/pg-streams" -``` -+ *{"id":"dns1", "is-enabled":"true"}* shows the stream *dns1* is enabled. The packet generator sends requests in the rate of 100 packets per 10 seconds. + image_name: PUT THE VM IMAGE NAME HERE + flavor_name: PUT THE VM FLAVOR NAME HERE + public_net_id: PUT THE PUBLIC NETWORK ID HERE + dcae_collector_ip: PUT THE ADDRESS OF THE DCAE COLLECTOR HERE (NOTE: this is not required for vFWCL/vPKG/base\_vpkg.env) + pub_key: PUT YOUR KEY HERE + cloud_env: PUT openstack OR rackspace HERE + +image\_name, flavor\_name, \public\_net\_id, and pub\_key can be obtained as described in the ONAP Section. For deployment in OpenStack, cloud\_env must be openstack. -+ To increase the amount of traffic, we can enable more streams. The packet generator has 10 streams, *dns1*, *dns2*, *dns3* to *dns10*. Each of them generates 100 packets per 10 seconds. To enable the streams, please insert *{"id":"dnsX", "is-enabled":"true"}* where *X* is the stream ID in the pg-stream bracket of the curl command. -For example, if we want to enable 3 streams, the curl command should be: +The DNS scaling HEAT environment file for the vLoadBalancer use case also requires you to specify the private IP of the load balancer, so that the DNS can connect to the vLB: + + vlb_private_ip_1: PUT THE PRIVATE ADDRESS OF THE VLB IN THE ONAP NETWORK SPACE HERE -``` -curl -X PUT -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{"pg-streams":{"pg-stream": [{"id":"dns1", "is-enabled":"true"}, {"id":"dns2", "is-enabled":"true"},{"id":"dns3", "is-enabled":"true"}]}}' "http://vLoadBalancer_IP:8183/restconf/config/sample-plugin:sample-plugin/pg-streams" -``` +As an alternative, it is possible to set the HEAT environment variables after the VNF is onboarded via SDC by appropriately preloading data into SDNC. That data will be fetched and used by SO to overwrite the default parameters in the HEAT environment file before the VNF is instantiated. For further information about SDNC data preload, please visit the wiki page: https://wiki.onap.org/display/DW/Tutorial_vIMS+%3A+SDNC+Updates + +Each VNF has a MANIFEST.json file associated with the HEAT templates. During VNF onboarding, SDC reads the MANIFEST.json file to understand the role of each HEAT template that is part of the VNF (e.g. base template vs. non-base template). VNF onboarding requires users to create a zip file that contains all the HEAT templates and the MANIFEST file. 
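For reference, the manifest lists each template and its role. The snippet below is only an illustrative sketch (the VNF name, description, and file names are placeholders; check the MANIFEST.json files already present in the heat sub-directories for the exact fields), showing a VNF with one base HEAT template and its environment file:

    {
      "name": "vFWSNK",
      "description": "vFirewall and vSink HEAT package",
      "data": [
        {
          "file": "base_vfw.yaml",
          "type": "HEAT",
          "isBase": "true",
          "data": [
            { "file": "base_vfw.env", "type": "HEAT_ENV" }
          ]
        }
      ]
    }

Once the manifest and the HEAT files are in place, they are packaged together as described next.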
To create the zip file, you can run the following command from shell: - + cd VNF_FOLDER (this is the folder that contains the HEAT templates and the MANIFEST file) + zip ZIP_FILE_NAME.zip * + +For information about VNF onboarding via the SDC portal, please refer to the wiki page: https://wiki.onap.org/display/DW/Design diff --git a/boot/bind_options b/boot/bind_options index 8ef7cc08..857e2d2a 100644 --- a/boot/bind_options +++ b/boot/bind_options @@ -10,9 +10,7 @@ options { allow-transfer { none; }; # disable zone transfers by default forwarders { - external_dns; - 8.8.8.8; - 8.8.4.4; + dns_forwarder; }; // If there is a firewall between you and nameservers you want diff --git a/boot/bind_zones b/boot/bind_zones index 3823aa66..870def1d 100644 --- a/boot/bind_zones +++ b/boot/bind_zones @@ -120,8 +120,6 @@ portal.api.simpledemo.openecomp.org. IN CNAME vm1.portal.simpledemo.openecomp.or ;Message Router ;mr.api.simpledemo.openecomp.org. IN CNAME vm1.mr.simpledemo.openecomp.org. ueb.api.simpledemo.openecomp.org. IN CNAME vm1.mr.simpledemo.openecomp.org. -mr.api.simpledemo.openecomp.org. IN A dcae_coll_ip_addr -collector.api.simpledemo.openecomp.org. IN A dcae_coll_ip_addr ;dbc.api.simpledemo.openecomp.org. IN CNAME vm1.mr.simpledemo.openecomp.org. ;drprov.api.simpledemo.openecomp.org. IN CNAME vm1.mr.simpledemo.openecomp.org. diff --git a/boot/bind_zones_onap b/boot/bind_zones_onap index 29891646..365e3a3a 100644 --- a/boot/bind_zones_onap +++ b/boot/bind_zones_onap @@ -120,8 +120,6 @@ portal.api.simpledemo.onap.org. IN CNAME vm1.portal.simpledemo.onap.org. ;Message Router ;mr.api.simpledemo.onap.org. IN CNAME vm1.mr.simpledemo.onap.org. ueb.api.simpledemo.onap.org. IN CNAME vm1.mr.simpledemo.onap.org. -mr.api.simpledemo.onap.org. IN A dcae_coll_ip_addr -collector.api.simpledemo.onap.org. IN A dcae_coll_ip_addr ;dbc.api.simpledemo.onap.org. IN CNAME vm1.mr.simpledemo.onap.org. ;drprov.api.simpledemo.onap.org. IN CNAME vm1.mr.simpledemo.onap.org. diff --git a/boot/cli_install.sh b/boot/cli_install.sh index c5ec4216..22fec53e 100644 --- a/boot/cli_install.sh +++ b/boot/cli_install.sh @@ -16,7 +16,7 @@ # limitations under the License. 
#******************************************************************************* -CLI_LATEST_BINARY="https://nexus.onap.org/service/local/artifact/maven/redirect?r=snapshots&g=org.onap.cli&a=cli-zip&e=zip&v=LATEST" +CLI_LATEST_BINARY="https://nexus.onap.org/content/repositories/releases/org/onap/cli/cli-zip/1.1.0/cli-zip-1.1.0.zip" CLI_INSTALL_DIR=/opt/onap/cli CLI_ZIP=cli.zip CLI_BIN=/usr/bin/onap diff --git a/boot/dcae2_install.sh b/boot/dcae2_install.sh index c1dbaa7a..99cfd34d 100644 --- a/boot/dcae2_install.sh +++ b/boot/dcae2_install.sh @@ -103,6 +103,7 @@ chmod 777 /opt/app/config/key rm -rf /opt/app/inputs-templates mkdir -p /opt/app/inputs-templates wget -P /opt/app/inputs-templates https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/input-templates/inputs.yaml +wget -P /opt/app/inputs-templates https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/input-templates/cdapinputs.yaml wget -P /opt/app/inputs-templates https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/input-templates/phinputs.yaml wget -P /opt/app/inputs-templates https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/input-templates/dhinputs.yaml wget -P /opt/app/inputs-templates https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/input-templates/invinputs.yaml @@ -111,6 +112,7 @@ wget -P /opt/app/inputs-templates https://nexus.onap.org/service/local/repositor wget -P /opt/app/inputs-templates https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/input-templates/he-ip.yaml wget -P /opt/app/inputs-templates https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.platform.blueprints/releases/input-templates/hr-ip.yaml + # generate blueprint input files pip install jinja2 wget https://nexus.onap.org/service/local/repositories/raw/content/org.onap.dcaegen2.deployments/releases/scripts/detemplate-bpinputs.py && (python detemplate-bpinputs.py /opt/config /opt/app/inputs-templates /opt/app/config; rm detemplate-bpinputs.py) diff --git a/boot/dcae2_vm_init.sh b/boot/dcae2_vm_init.sh index d5df44e4..0d9bebce 100644..100755 --- a/boot/dcae2_vm_init.sh +++ b/boot/dcae2_vm_init.sh @@ -160,20 +160,18 @@ register_multicloud_pod25dns_with_aai() local CLOUD_ENV local CLOUD_IDENTITY_URL local DNSAAS_SERVICE_URL - local DNSAAS_USERNAME - local DNSAAS_PASSWORD - local DNSAAS_TENANT_ID + local DNSAAS_USERNAME='demo' + local DNSAAS_PASSWORD='onapdemo' - CLOUD_REGION="$(cat /opt/config/openstack_region.txt)" + CLOUD_REGION="$(cat /opt/config/dnsaas_region.txt)" CLOUD_ENV="$(cat /opt/config/cloud_env.txt)" MCIP="$(cat /opt/config/openo_ip_addr.txt)" CLOUD_IDENTITY_URL="http://${MCIP}/api/multicloud-titanium_cloud/v0/${CLOUD_OWNER}_${CLOUD_REGION}/identity/v2.0" local RESPCODE DNSAAS_SERVICE_URL="$(cat /opt/config/dnsaas_keystone_url.txt)" - DNSAAS_USERNAME="$(cat /opt/config/dnsaas_username.txt)" - DNSAAS_PASSWORD="$(cat /opt/config/dnsaas_password.txt)" - DNSAAS_TENANT_ID="$(cat /opt/config/dnsaas_tenant_id.txt)" + # a tenant of the same name must be set up on the Deisgnate providing OpenStack + DNSAAS_TENANT_NAME="$(cat /opt/config/dnsaas_tenant_name.txt)" cat >"/tmp/${CLOUD_OWNER}_${CLOUD_REGION}.json" <<EOL { "cloud-owner" : "$CLOUD_OWNER", @@ -190,7 +188,7 @@ 
register_multicloud_pod25dns_with_aai() { "esr-system-info-id": "532ac032-e996-41f2-84ed-9c7a1766eb30", "cloud-domain": "Default", - "default-tenant" : "$DNSAAS_TENANT_ID", + "default-tenant" : "$DNSAAS_TENANT_NAME", "user-name" : "$DNSAAS_USERNAME", "password" : "$DNSAAS_PASSWORD", "service-url" : "$DNSAAS_SERVICE_URL", @@ -234,24 +232,34 @@ register_multicloud_pod25_with_aai() local CLOUD_OWNER='pod25' local CLOUD_VERSION='titanium_cloud' local CLOUD_REGION + local DNSAAS_CLOUD_REGION local CLOUD_ENV local MCIP local CLOUD_IDENTITY_URL local KEYSTONE_URL local USERNAME local PASSWORD - local TENANT_ID + local TENANT_NAME CLOUD_REGION="$(cat /opt/config/openstack_region.txt)" + DNSAAS_CLOUD_REGION="$(cat /opt/config/dnsaas_region.txt)" CLOUD_ENV="$(cat /opt/config/cloud_env.txt)" MCIP="$(cat /opt/config/openo_ip_addr.txt)" CLOUD_IDENTITY_URL="http://${MCIP}/api/multicloud-titanium_cloud/v0/${CLOUD_OWNER}_${CLOUD_REGION}/identity/v2.0" KEYSTONE_URL="$(cat /opt/config/openstack_keystone_url.txt)" + if [[ "$KEYSTONE_URL" == */v3 ]]; then + echo "$KEYSTONE_URL" + elif [[ "$KEYSTONE_URL" == */v2.0 ]]; then + echo "$KEYSTONE_URL" + else + KEYSTONE_URL="${KEYSTONE_URL}/v3" + echo "$KEYSTONE_URL" + fi USERNAME="$(cat /opt/config/openstack_user.txt)" PASSWORD="$(cat /opt/config/openstack_password.txt)" - TENANT_ID="$(cat /opt/config/tenant_id.txt)" + TENANT_NAME="$(cat /opt/config/tenant_name.txt)" cat >"/tmp/${CLOUD_OWNER}_${CLOUD_REGION}.json" <<EOL -{ +{ "cloud-owner" : "$CLOUD_OWNER", "cloud-region-id" : "$CLOUD_REGION", "cloud-region-version" : "$CLOUD_VERSION", @@ -261,13 +269,13 @@ register_multicloud_pod25_with_aai() "identity-url": "$CLOUD_IDENTITY_URL", "owner-defined-type" : "owner-defined-type", "sriov-automation" : false, - "cloud-extra-info" : "{\"epa-caps\":{\"huge_page\":\"true\",\"cpu_pinning\":\"true\",\"cpu_thread_policy\":\"true\",\"numa_aware\":\"true\",\"sriov\":\"true\",\"dpdk_vswitch\":\"true\",\"rdt\":\"false\",\"numa_locality_pci\":\"true\"},\"dns-delegate\":{\"cloud-owner\":\"pod25dns\",\"cloud-region-id\":\"RegionOne\"}}", + "cloud-extra-info" : "{\"epa-caps\":{\"huge_page\":\"true\",\"cpu_pinning\":\"true\",\"cpu_thread_policy\":\"true\",\"numa_aware\":\"true\",\"sriov\":\"true\",\"dpdk_vswitch\":\"true\",\"rdt\":\"false\",\"numa_locality_pci\":\"true\"},\"dns-delegate\":{\"cloud-owner\":\"pod25dns\",\"cloud-region-id\":\"${DNSAAS_CLOUD_REGION}\"}}", "esr-system-info-list" : { "esr-system-info" : [ - { + { "esr-system-info-id": "432ac032-e996-41f2-84ed-9c7a1766eb29", "cloud-domain": "Default", - "default-tenant" : "$TENANT_ID", + "default-tenant" : "$TENANT_NAME", "user-name" : "$USERNAME", "password" : "$PASSWORD", "service-url" : "$KEYSTONE_URL", @@ -342,8 +350,7 @@ register_dns_zone() local CLOUD_REGION local CLOUD_VERSION='titanium_cloud' local CLOUD_ENV - local DCAE_ZONE - local DNSAAS_TENANT_ID + local DNSAAS_TENANT_NAME local MCHOST local MCURL local MCMETHOD='-X POST' @@ -358,44 +365,64 @@ register_dns_zone() CLOUD_REGION="$(cat /opt/config/openstack_region.txt)" CLOUD_ENV="$(cat /opt/config/cloud_env.txt)" if [ -z "$1" ]; then DCAE_ZONE="$(cat /opt/config/dcae_zone.txt)"; else DCAE_ZONE="$1"; fi - DNSAAS_TENANT_ID="$(cat /opt/config/dnsaas_tenant_id.txt)" + DNSAAS_TENANT_NAME="$(cat /opt/config/dnsaas_tenant_name.txt)" MCHOST=$(cat /opt/config/openo_ip_addr.txt) MCURL="http://$MCHOST:9005/api/multicloud-titanium_cloud/v0/swagger.json" + MCDATA='-d "{\"auth\":{\"tenantName\": \"'${DNSAAS_TENANT_NAME}'\"}}"' 
MULTICLOUD_PLUGIN_ENDPOINT=http://${MCHOST}/api/multicloud-titanium_cloud/v0/${CLOUD_OWNER}_${CLOUD_REGION}
 - MULTICLOUD_PLUGIN_ENDPOINT=http://${MCHOST}:9005/api/multicloud-titanium_cloud/v0/${CLOUD_OWNER}_${CLOUD_REGION}
 +
 + ### zone operations
 + # because all VMs use 10.0.100.1 as their first DNS server and the Designate DNS server as second, we need to use a
 + domain outside of the first DNS server's domain
 + local DCAE_DOMAIN
 + local ZONENAME
 + DCAE_DOMAIN="$(cat /opt/config/dcae_domain.txt)"
 + ZONENAME="${DCAE_ZONE}.${DCAE_DOMAIN}."
 +
 + echo "===> Register DNS zone $ZONENAME under $DNSAAS_TENANT_NAME"
 ### Get Token
 local TOKEN
 MCURL="${MULTICLOUD_PLUGIN_ENDPOINT}/identity/v3/auth/tokens"
 - TOKEN=$(call_api_for_response_header "$MCURL" "$MCMETHOD" "$MCRESP" "$MCHEADERS" "$MCAUTH" "$MCDATA" | grep 'X-Subject-Token' | sed "s/^.*: //")
 - #TOKEN=$(curl -v -s -H "Content-Type: application/json" -X POST -d "{\"tenantName\": \"${DNSAAS_TENANT_ID}\"}" "${MULTICLOUD_PLUGIN_ENDPOINT}/identity/v3/auth/tokens" 2>&1 | grep X-Subject-Token | sed "s/^.*: //")
 + echo "=====> Getting token from $MCURL"
 + #TOKEN=$(call_api_for_response_header "$MCURL" "$MCMETHOD" "$MCRESP" "$MCHEADERS" "$MCAUTH" "$MCDATA" | grep 'X-Subject-Token' | sed "s/^.*: //")
 + TOKEN=$(curl -v -s -H "Content-Type: application/json" -X POST -d "{\"auth\":{\"tenantName\": \"${DNSAAS_TENANT_NAME}\"}}" "${MCURL}" 2>&1 | grep X-Subject-Token | sed "s/^.*: //")
 echo "Received Keystone token $TOKEN from $MCURL"
 + if [ -z "$TOKEN" ]; then
 + echo "Failed to acquire token for creating DNS zone. Exit"
 + exit 1
 + fi
 - ### zone operations
 - local ZONENAME
 - ZONENAME="${DCAE_ZONE}.dcaeg2.simpledemo.onap.org."
 + local PROJECTID
 + PROJECTID=$(curl -v -s -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X GET "${MULTICLOUD_PLUGIN_ENDPOINT}/dns-delegate/v2/zones?name=${ZONENAME}" |grep 'project_id' |sed 's/^.*"project_id":"\([a-zA-Z0-9-]*\)",.*$/\1/')
 + if [ !
-z "$PROJECTID" ]; then + ### query the zone with zone id + echo "!!!!!!> zone $ZONENAME already registered by project $PROJECTID" + else + ### create a zone + echo "=====> No zone of same name $ZONENAME found, creating new zone " + curl -sv -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X POST -d "{ \"name\": \"$ZONENAME\", \"email\": \"lji@research.att.com\"}" "${MULTICLOUD_PLUGIN_ENDPOINT}/dns-delegate/v2/zones" + fi ### list zones - curl -sv -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X GET "${MULTICLOUD_PLUGIN_ENDPOINT}/dns-delegate/v2/zones" - - ### create a zone - echo "Creating zone $ZONENAME" - curl -sv -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X POST -d "{ \"name\": \"$ZONENAME\", \"email\": \"lji@research.att.com\"}" "${MULTICLOUD_PLUGIN_ENDPOINT}/dns-delegate/v2/zones" + echo "=====> Zone listing" + curl -sv -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X GET "${MULTICLOUD_PLUGIN_ENDPOINT}/dns-delegate/v2/zones" | python -m json.tool ### query the zone with zone name - curl -s -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X GET "${MULTICLOUD_PLUGIN_ENDPOINT}/dns-delegate/v2/zones?name=${ZONENAME}" + #echo "=====> Querying zone $ZONENAME" + #curl -s -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X GET "${MULTICLOUD_PLUGIN_ENDPOINT}/dns-delegate/v2/zones?name=${ZONENAME}" ### export ZONE id local ZONEID - ZONEID=$(curl -v -s -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X GET "${MULTICLOUD_PLUGIN_ENDPOINT}/dns-delegate/v2/zones?name=${ZONENAME}" |sed 's/^.*"id":"\([a-zA-Z0-9-]*\)",.*$/\1/') - echo "After creation, zone $ZONENAME ID is $ZONEID" + ZONEID=$(curl -v -sb -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X GET "${MULTICLOUD_PLUGIN_ENDPOINT}/dns-delegate/v2/zones?name=${ZONENAME}" |grep 'id' |sed 's/^.*"id":"\([a-zA-Z0-9-]*\)",.*$/\1/') + echo "=====> After creation, zone $ZONENAME ID is $ZONEID" ### query the zone with zone id - echo "Test listing zone info for zone $ZONENAME" - curl -sv -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X GET "${MULTICLOUD_PLUGIN_ENDPOINT}/dns-delegate/v2/zones/${ZONEID}" + #echo "=====> Querying zone $ZONENAME by ID $ZONEID" + #curl -sv -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X GET "${MULTICLOUD_PLUGIN_ENDPOINT}/dns-delegate/v2/zones/${ZONEID}" } @@ -406,7 +433,7 @@ delete_dns_zone() local CLOUD_VERSION='titanium_cloud' local CLOUD_ENV local DCAE_ZONE - local DNSAAS_TENANT_ID + local DNSAAS_TENANT_NAME local MCHOST local MCURL local MCMETHOD='-X GET' @@ -419,19 +446,22 @@ delete_dns_zone() CLOUD_REGION="$(cat /opt/config/openstack_region.txt)" CLOUD_ENV="$(cat /opt/config/cloud_env.txt)" DCAE_ZONE="$(cat /opt/config/dcae_zone.txt)" - DNSAAS_TENANT_ID="$(cat /opt/config/dnsaas_tenant_id.txt)" + DNSAAS_TENANT_NAME="$(cat /opt/config/dnsaas_tenant_name.txt)" MCHOST=$(cat /opt/config/openo_ip_addr.txt) MCURL="http://$MCHOST:9005/api/multicloud-titanium_cloud/v0/swagger.json" + local DCAE_DOMAIN + local ZONENAME + DCAE_DOMAIN="$(cat /opt/config/dcae_domain.txt)" + ZONENAME="${DCAE_ZONE}.${DCAE_DOMAIN}." 
+ + MCDATA='"{\"auth\":{\"tenantName\": \"'${DNSAAS_TENANT_NAME}'\"}}"' MULTICLOUD_PLUGIN_ENDPOINT=http://${MCHOST}/api/multicloud-titanium_cloud/v0/${CLOUD_OWNER}_${CLOUD_REGION} - MULTICLOUD_PLUGIN_ENDPOINT=http://${MCHOST}:9005/api/multicloud-titanium_cloud/v0/${CLOUD_OWNER}_${CLOUD_REGION} ### Get Token local TOKEN - TOKEN=$(curl -v -s -H "Content-Type: application/json" -X POST -d "{\"tenantName\": \"${DNSAAS_TENANT_ID}\"}" "${MULTICLOUD_PLUGIN_ENDPOINT}/identity/v3/auth/tokens" 2>&1 | grep X-Subject-Token | sed "s/^.*: //") + TOKEN=$(curl -v -s -H "Content-Type: application/json" -X POST -d "{\"auth\":{\"tenantName\": \"${DNSAAS_TENANT_NAME}\"}}" "${MULTICLOUD_PLUGIN_ENDPOINT}/identity/v3/auth/tokens" 2>&1 | grep X-Subject-Token | sed "s/^.*: //") - local ZONENAME - ZONENAME="$1.dcae.simpledemo.onap.org." local ZONEID ZONEID=$(curl -v -s -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X GET "${MULTICLOUD_PLUGIN_ENDPOINT}/dns-delegate/v2/zones?name=${ZONENAME}" |sed 's/^.*"id":"\([a-zA-Z0-9-]*\)",.*$/\1/') @@ -445,7 +475,7 @@ list_dns_zone() local CLOUD_VERSION='titanium_cloud' local CLOUD_ENV local DCAE_ZONE - local DNSAAS_TENANT_ID + local DNSAAS_TENANT_NAME local MCHOST local MCURL local MCMETHOD='-X GET' @@ -458,19 +488,21 @@ list_dns_zone() CLOUD_REGION="$(cat /opt/config/openstack_region.txt)" CLOUD_ENV="$(cat /opt/config/cloud_env.txt)" DCAE_ZONE="$(cat /opt/config/dcae_zone.txt)" - DNSAAS_TENANT_ID="$(cat /opt/config/dnsaas_tenant_id.txt)" + DNSAAS_TENANT_NAME="$(cat /opt/config/dnsaas_tenant_name.txt)" MCHOST=$(cat /opt/config/openo_ip_addr.txt) MCURL="http://$MCHOST:9005/api/multicloud-titanium_cloud/v0/swagger.json" + MCDATA='"{\"auth\":{\"tenantName\": \"'${DNSAAS_TENANT_NAME}'\"}}"' MULTICLOUD_PLUGIN_ENDPOINT=http://${MCHOST}/api/multicloud-titanium_cloud/v0/${CLOUD_OWNER}_${CLOUD_REGION} - MULTICLOUD_PLUGIN_ENDPOINT=http://${MCHOST}:9005/api/multicloud-titanium_cloud/v0/${CLOUD_OWNER}_${CLOUD_REGION} ### Get Token local TOKEN - TOKEN=$(curl -v -s -H "Content-Type: application/json" -X POST -d "{\"tenantName\": \"${DNSAAS_TENANT_ID}\"}" "${MULTICLOUD_PLUGIN_ENDPOINT}/identity/v3/auth/tokens" 2>&1 | grep X-Subject-Token | sed "s/^.*: //") + TOKEN=$(curl -v -s -H "Content-Type: application/json" -X POST -d "{\"auth\":{\"tenantName\": \"${DNSAAS_TENANT_NAME}\"}}" "${MULTICLOUD_PLUGIN_ENDPOINT}/identity/v3/auth/tokens" 2>&1 | grep X-Subject-Token | sed "s/^.*: //") + local DCAE_DOMAIN local ZONENAME - ZONENAME="$1.dcae.simpledemo.onap.org." + DCAE_DOMAIN="$(cat /opt/config/dcae_domain.txt)" + ZONENAME="${DCAE_ZONE}.${DCAE_DOMAIN}." 
local ZONEID ZONEID=$(curl -v -s -H "Content-Type: application/json" -H "X-Auth-Token: $TOKEN" -X GET "${MULTICLOUD_PLUGIN_ENDPOINT}/dns-delegate/v2/zones?name=${ZONENAME}" |sed 's/^.*"id":"\([a-zA-Z0-9-]*\)",.*$/\1/') @@ -485,14 +517,14 @@ NEXUS_USER=$(cat /opt/config/nexus_username.txt) NEXUS_PASSWORD=$(cat /opt/config/nexus_password.txt) NEXUS_DOCKER_REPO=$(cat /opt/config/nexus_docker_repo.txt) DOCKER_VERSION=$(cat /opt/config/docker_version.txt) -ZONE=$(cat /opt/config/dcae_zone.txt) -RANDSTR=$(cat /opt/config/rand_str.txt) +# use rand_str as zone +ZONE=$(cat /opt/config/rand_str.txt) MYFLOATIP=$(cat /opt/config/dcae_float_ip.txt) MYLOCALIP=$(cat /opt/config/dcae_ip_addr.txt) -TENANTNAME=$(cat /opt/config/tenant_name.txt) -OSUSERNAME=$(cat /opt/config/openstack_user.txt) -OSPASSWORD=$(cat /opt/config/openstack_password.txt) +# start docker image pulling while we are waiting for A&AI to come online +docker login -u "$NEXUS_USER" -p "$NEXUS_PASSWORD" "$NEXUS_DOCKER_REPO" +docker pull "$NEXUS_DOCKER_REPO/onap/org.onap.dcaegen2.deployments.bootstrap:$DOCKER_VERSION" && docker pull nginx & ######################################### # Wait for then register with A&AI @@ -528,8 +560,8 @@ rm -f /opt/config/runtime.ip.consul rm -f /opt/config/runtime.ip.cm -docker login -u "$NEXUS_USER" -p "$NEXUS_PASSWORD" "$NEXUS_DOCKER_REPO" -docker pull "$NEXUS_DOCKER_REPO/onap/org.onap.dcaegen2.deployments.bootstrap:$DOCKER_VERSION" +#docker login -u "$NEXUS_USER" -p "$NEXUS_PASSWORD" "$NEXUS_DOCKER_REPO" +#docker pull "$NEXUS_DOCKER_REPO/onap/org.onap.dcaegen2.deployments.bootstrap:$DOCKER_VERSION" docker run -d --name boot -v /opt/app/config:/opt/app/installer/config -e "LOCATION=$ZONE" "$NEXUS_DOCKER_REPO/onap/org.onap.dcaegen2.deployments.bootstrap:$DOCKER_VERSION" @@ -540,7 +572,7 @@ while [ ! 
-f /opt/app/config/runtime.ip.consul ]; do echo "."; sleep 30; done # start proxy for consul's health check -CONSULIP=$(head -1 /opt/config/runtime.ip.consul | sed 's/[[:space:]]//g') +CONSULIP=$(head -1 /opt/app/config/runtime.ip.consul | sed 's/[[:space:]]//g') echo "Consul is available at $CONSULIP" cat >./nginx.conf <<EOL diff --git a/boot/dns_install.sh b/boot/dns_install.sh index 2985bb7d..79272cbd 100644 --- a/boot/dns_install.sh +++ b/boot/dns_install.sh @@ -5,6 +5,7 @@ NEXUS_REPO=$(cat /opt/config/nexus_repo.txt) ARTIFACTS_VERSION=$(cat /opt/config/artifacts_version.txt) CLOUD_ENV=$(cat /opt/config/cloud_env.txt) + if [[ $CLOUD_ENV != "rackspace" ]] then # Add host name to /etc/host to avoid warnings in openstack images @@ -64,9 +65,12 @@ curl -k $NEXUS_REPO/org.onap.demo/boot/$ARTIFACTS_VERSION/$ZONE_ONAP -o /etc/bin curl -k $NEXUS_REPO/org.onap.demo/boot/$ARTIFACTS_VERSION/$OPTIONS_FILE -o /etc/bind/named.conf.options curl -k $NEXUS_REPO/org.onap.demo/boot/$ARTIFACTS_VERSION/named.conf.local -o /etc/bind/named.conf.local + + # Set the private IP address of each ONAP VM in the Bind configuration in OpenStack deployments if [[ $CLOUD_ENV != "rackspace" ]] then + sed -i "s/dns_forwarder/"$(cat /opt/config/dns_forwarder.txt)"/g" /etc/bind/named.conf.options sed -i "s/dns_ip_addr/"$(cat /opt/config/dns_ip_addr.txt)"/g" /etc/bind/named.conf.options sed -i "s/external_dns/"$(cat /opt/config/external_dns.txt)"/g" /etc/bind/named.conf.options sed -i "s/aai1_ip_addr/"$(cat /opt/config/aai1_ip_addr.txt)"/g" /etc/bind/zones/db.simpledemo.openecomp.org @@ -82,7 +86,6 @@ then sed -i "s/sdc_ip_addr/"$(cat /opt/config/sdc_ip_addr.txt)"/g" /etc/bind/zones/db.simpledemo.openecomp.org sed -i "s/sdnc_ip_addr/"$(cat /opt/config/sdnc_ip_addr.txt)"/g" /etc/bind/zones/db.simpledemo.openecomp.org sed -i "s/vid_ip_addr/"$(cat /opt/config/vid_ip_addr.txt)"/g" /etc/bind/zones/db.simpledemo.openecomp.org - sed -i "s/dcae_coll_ip_addr/"$(cat /opt/config/dcae_coll_ip_addr.txt)"/g" /etc/bind/zones/db.simpledemo.openecomp.org sed -i "s/clamp_ip_addr/"$(cat /opt/config/clamp_ip_addr.txt)"/g" /etc/bind/zones/db.simpledemo.openecomp.org sed -i "s/openo_ip_addr/"$(cat /opt/config/openo_ip_addr.txt)"/g" /etc/bind/zones/db.simpledemo.openecomp.org @@ -99,7 +102,6 @@ then sed -i "s/sdc_ip_addr/"$(cat /opt/config/sdc_ip_addr.txt)"/g" /etc/bind/zones/db.simpledemo.onap.org sed -i "s/sdnc_ip_addr/"$(cat /opt/config/sdnc_ip_addr.txt)"/g" /etc/bind/zones/db.simpledemo.onap.org sed -i "s/vid_ip_addr/"$(cat /opt/config/vid_ip_addr.txt)"/g" /etc/bind/zones/db.simpledemo.onap.org - sed -i "s/dcae_coll_ip_addr/"$(cat /opt/config/dcae_coll_ip_addr.txt)"/g" /etc/bind/zones/db.simpledemo.onap.org sed -i "s/clamp_ip_addr/"$(cat /opt/config/clamp_ip_addr.txt)"/g" /etc/bind/zones/db.simpledemo.onap.org sed -i "s/openo_ip_addr/"$(cat /opt/config/openo_ip_addr.txt)"/g" /etc/bind/zones/db.simpledemo.onap.org fi @@ -107,4 +109,5 @@ fi # Configure Bind modprobe ip_gre sed -i "s/OPTIONS=.*/OPTIONS=\"-4 -u bind\"/g" /etc/default/bind9 -service bind9 restart
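The dns_forwarder substitution added to dns_install.sh above pairs with the named.conf.options change later in this patch, where the hard-coded 8.8.8.8/8.8.4.4 forwarders are replaced by a dns_forwarder token. As a hypothetical illustration of the result (the address below is made up; the real value is supplied via the new dns_forwarder HEAT parameter):

    echo 10.12.25.5 > /opt/config/dns_forwarder.txt    # hypothetical forwarder IP
    sed -i "s/dns_forwarder/$(cat /opt/config/dns_forwarder.txt)/g" /etc/bind/named.conf.options
    # the forwarders block in named.conf.options then reads:  forwarders { 10.12.25.5; };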
\ No newline at end of file +service bind9 restart + diff --git a/boot/msb_vm_init.sh b/boot/msb_vm_init.sh index fbbb7c5f..41cfb433 100644 --- a/boot/msb_vm_init.sh +++ b/boot/msb_vm_init.sh @@ -128,7 +128,7 @@ curl -X POST -H "Content-Type: application/json" -d '{"serviceName": "catalog", curl -X POST -H "Content-Type: application/json" -d '{"serviceName": "emsdriver", "version": "v1", "url": "/api/emsdriver/v1","protocol": "REST", "nodes": [ {"ip": "'$OPENO_IP'","port": "8206"}]}' "http://$OPENO_IP:10081/api/microservices/v1/services" #UUI -curl -X POST -H "Content-Type: application/json" -d '{"serviceName": "usecaseui", "version": "v1", "url": "/api/usecaseui/server/v1","protocol": "REST", "nodes": [ {"ip": "'$OPENO_IP'","port": "8901"}]}' "http://$OPENO_IP:10081/api/microservices/v1/services" +curl -X POST -H "Content-Type: application/json" -d '{"serviceName": "usecaseui-server", "version": "v1", "url": "/api/usecaseui/server/v1","protocol": "REST", "nodes": [ {"ip": "'$OPENO_IP'","port": "8082"}]}' "http://$OPENO_IP:10081/api/microservices/v1/services" -curl -X POST -H "Content-Type: application/json" -d '{"serviceName": "usecaseui-gui", "version": "v1", "url": "/iui/usecaseui","path": "/iui/usecaseui","protocol": "UI", "nodes": [ {"ip": "'$OPENO_IP'","port": "8900"}]}' "http://$OPENO_IP:10081/api/microservices/v1/services" +curl -X POST -H "Content-Type: application/json" -d '{"serviceName": "usecaseui-ui", "version": "v1", "url": "/usecase-ui","path": "/iui/usecaseui","protocol": "UI", "nodes": [ {"ip": "'$OPENO_IP'","port": "8080"}]}' "http://$OPENO_IP:10081/api/microservices/v1/services" diff --git a/boot/named.conf.options b/boot/named.conf.options index a09931cb..23feebc5 100644 --- a/boot/named.conf.options +++ b/boot/named.conf.options @@ -10,8 +10,7 @@ options { allow-transfer { none; }; # disable zone transfers by default forwarders { - 8.8.8.8; - 8.8.4.4; + dns_forwarder; }; // If there is a firewall between you and nameservers you want diff --git a/boot/portal_install.sh b/boot/portal_install.sh index c1b816e0..67512e5c 100644 --- a/boot/portal_install.sh +++ b/boot/portal_install.sh @@ -68,7 +68,7 @@ apt-get install -y linux-image-extra-$(uname -r) linux-image-extra-virtual apt-get install -y --allow-unauthenticated docker-engine mkdir /opt/docker -curl -L https://github.com/docker/compose/releases/download/1.9.0/docker-compose-`uname -s`-`uname -m` > /opt/docker/docker-compose +curl -L https://github.com/docker/compose/releases/download/1.16.1/docker-compose-`uname -s`-`uname -m` > /opt/docker/docker-compose chmod +x /opt/docker/docker-compose # Set the MTU size of docker containers to the minimum MTU size supported by vNICs. OpenStack deployments may need to know the external DNS IP @@ -92,8 +92,7 @@ echo "nameserver "$DNS_IP_ADDR >> /etc/resolvconf/resolv.conf.d/head resolvconf -u # Clone Gerrit repository and run docker containers -mkdir -p /PROJECT/OpenSource/UbuntuEP/logs cd /opt git clone -b $GERRIT_BRANCH --single-branch $CODE_REPO -./portal_vm_init.sh
\ No newline at end of file +./portal_vm_init.sh diff --git a/boot/sdc_vm_init.sh b/boot/sdc_vm_init.sh index dd15c0a1..9626a2ee 100644 --- a/boot/sdc_vm_init.sh +++ b/boot/sdc_vm_init.sh @@ -3,7 +3,6 @@ NEXUS_USERNAME=$(cat /opt/config/nexus_username.txt) NEXUS_PASSWD=$(cat /opt/config/nexus_password.txt) NEXUS_DOCKER_REPO=$(cat /opt/config/nexus_docker_repo.txt) -NEXUS_DOCKER_PORT=$(echo $NEXUS_DOCKER_REPO | cut -d':' -f2) ENV_NAME=$(cat /opt/config/env_name.txt) MR_IP_ADDR=$(cat /opt/config/mr_ip_addr.txt) RELEASE=$(cat /opt/config/docker_version.txt) @@ -18,12 +17,7 @@ cp sdc/sdc-os-chef/scripts/docker_health.sh /data/scripts chmod +x /data/scripts/docker_run.sh chmod +x /data/scripts/docker_health.sh -if [ -e /opt/config/public_ip.txt ] -then - IP_ADDRESS=$(cat /opt/config/public_ip.txt) -else - IP_ADDRESS=$(ifconfig eth0 | grep "inet addr" | tr -s ' ' | cut -d' ' -f3 | cut -d':' -f2) -fi +IP_ADDRESS=$(cat /opt/config/private_ip.txt) cat /data/environments/Template.json | sed "s/yyy/"$IP_ADDRESS"/g" > /data/environments/$ENV_NAME.json sed -i "s/xxx/"$ENV_NAME"/g" /data/environments/$ENV_NAME.json @@ -31,4 +25,4 @@ sed -i "s/\"ueb_url_list\":.*/\"ueb_url_list\": \""$MR_IP_ADDR","$MR_IP_ADDR"\", sed -i "s/\"fqdn\":.*/\"fqdn\": [\""$MR_IP_ADDR"\", \""$MR_IP_ADDR"\"]/g" /data/environments/$ENV_NAME.json docker login -u $NEXUS_USERNAME -p $NEXUS_PASSWD $NEXUS_DOCKER_REPO -bash /data/scripts/docker_run.sh -e $ENV_NAME -r $RELEASE -p $NEXUS_DOCKER_PORT +bash /data/scripts/docker_run.sh -r $RELEASE diff --git a/boot/sdnc_vm_init.sh b/boot/sdnc_vm_init.sh index 5e48a96f..e5b907f7 100644 --- a/boot/sdnc_vm_init.sh +++ b/boot/sdnc_vm_init.sh @@ -23,4 +23,10 @@ docker tag $NEXUS_DOCKER_REPO/onap/admportal-sdnc-image:$DOCKER_IMAGE_VERSION on docker pull $NEXUS_DOCKER_REPO/onap/ccsdk-dgbuilder-image:$DGBUILDER_IMAGE_VERSION docker tag $NEXUS_DOCKER_REPO/onap/ccsdk-dgbuilder-image:$DGBUILDER_IMAGE_VERSION onap/ccsdk-dgbuilder-image:latest +docker pull $NEXUS_DOCKER_REPO/onap/sdnc-ueb-listener-image:$DOCKER_IMAGE_VERSION +docker tag $NEXUS_DOCKER_REPO/onap/sdnc-ueb-listener-image:$DOCKER_IMAGE_VERSION onap/sdnc-ueb-listener-image:latest + +docker pull $NEXUS_DOCKER_REPO/onap/sdnc-dmaap-listener-image:$DOCKER_IMAGE_VERSION +docker tag $NEXUS_DOCKER_REPO/onap/sdnc-dmaap-listener-image:$DOCKER_IMAGE_VERSION onap/sdnc-dmaap-listener-image:latest + /opt/docker/docker-compose up -d diff --git a/boot/uui_vm_init.sh b/boot/uui_vm_init.sh index d02cf067..01cee5f5 100755 --- a/boot/uui_vm_init.sh +++ b/boot/uui_vm_init.sh @@ -17,5 +17,5 @@ docker rm -f uui_ui docker rm -f uui_server # Insert docker run instructions here -docker run -i -t -d --name uui_ui -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/usecase-ui:$DOCKER_IMAGE_VERSION -docker run -i -t -d --name uui_server -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/usecase-ui/usecase-ui-server:$DOCKER_IMAGE_VERSION
\ No newline at end of file +docker run -i -t -d --name uui_ui -p 8080:8080 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/usecase-ui:$DOCKER_IMAGE_VERSION +docker run -i -t -d --name uui_server -p 8082:8082 -e MSB_ADDR=$OPENO_IP:80 -e MR_ADDR=$MR_IP:3904 $NEXUS_DOCKER_REPO/onap/usecase-ui/usecase-ui-server:$DOCKER_IMAGE_VERSION
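With the usecase-ui containers now publishing ports 8080 (UI) and 8082 (server) and the MSB registrations above pointing at those ports, a rough smoke test could look like the sketch below. The base paths mirror the url fields in the registration calls; whether MSB exposes the UI route exactly at /usecase-ui and which HTTP codes come back are assumptions, so treat this as a probe rather than a pass/fail check:

    source /opt/config/onap_ips.txt   # provides OPENO_IP (and MR_IP) as used by these scripts
    # through the MSB API gateway on port 80
    curl -s -o /dev/null -w "usecaseui-server via MSB: %{http_code}\n" "http://$OPENO_IP:80/api/usecaseui/server/v1/"
    curl -s -o /dev/null -w "usecaseui UI via MSB: %{http_code}\n" "http://$OPENO_IP:80/usecase-ui/"
    # directly against the published container ports
    curl -s -o /dev/null -w "uui_server on 8082: %{http_code}\n" "http://$OPENO_IP:8082/api/usecaseui/server/v1/"
    curl -s -o /dev/null -w "uui_ui on 8080: %{http_code}\n" "http://$OPENO_IP:8080/"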
\ No newline at end of file diff --git a/boot/vfc_vm_init.sh b/boot/vfc_vm_init.sh index 28ef67e0..f2ec40cc 100755 --- a/boot/vfc_vm_init.sh +++ b/boot/vfc_vm_init.sh @@ -4,27 +4,27 @@ NEXUS_USERNAME=$(cat /opt/config/nexus_username.txt) NEXUS_PASSWD=$(cat /opt/config/nexus_password.txt) NEXUS_DOCKER_REPO=$(cat /opt/config/nexus_docker_repo.txt) -DOCKER_IMAGE_VERSION=$(cat /opt/config/vfc_docker.txt) source /opt/config/onap_ips.txt +source /opt/config/vfc_docker.txt # Refresh images docker login -u $NEXUS_USERNAME -p $NEXUS_PASSWD $NEXUS_DOCKER_REPO -docker pull $NEXUS_DOCKER_REPO/onap/vfc/wfengine-activiti:$DOCKER_IMAGE_VERSION -docker pull $NEXUS_DOCKER_REPO/onap/vfc/wfengine-mgrservice:$DOCKER_IMAGE_VERSION -docker pull $NEXUS_DOCKER_REPO/onap/vfc/catalog:$DOCKER_IMAGE_VERSION -docker pull $NEXUS_DOCKER_REPO/onap/vfc/emsdriver:$DOCKER_IMAGE_VERSION -docker pull $NEXUS_DOCKER_REPO/onap/vfc/gvnfmdriver:$DOCKER_IMAGE_VERSION -docker pull $NEXUS_DOCKER_REPO/onap/vfc/jujudriver:$DOCKER_IMAGE_VERSION -docker pull $NEXUS_DOCKER_REPO/onap/vfc/nfvo/svnfm/huawei:$DOCKER_IMAGE_VERSION -docker pull $NEXUS_DOCKER_REPO/onap/vfc/nslcm:$DOCKER_IMAGE_VERSION -docker pull $NEXUS_DOCKER_REPO/onap/vfc/resmanagement:$DOCKER_IMAGE_VERSION -docker pull $NEXUS_DOCKER_REPO/onap/vfc/vnflcm:$DOCKER_IMAGE_VERSION -docker pull $NEXUS_DOCKER_REPO/onap/vfc/vnfmgr:$DOCKER_IMAGE_VERSION -docker pull $NEXUS_DOCKER_REPO/onap/vfc/vnfres:$DOCKER_IMAGE_VERSION -docker pull $NEXUS_DOCKER_REPO/onap/vfc/ztesdncdriver:$DOCKER_IMAGE_VERSION -docker pull $NEXUS_DOCKER_REPO/onap/vfc/ztevmanagerdriver:$DOCKER_IMAGE_VERSION -docker pull $NEXUS_DOCKER_REPO/onap/vfc/nfvo/svnfm/nokia:$DOCKER_IMAGE_VERSION +docker pull $NEXUS_DOCKER_REPO/onap/vfc/wfengine-activiti:$ACTIVITI_DOCKER_VER +docker pull $NEXUS_DOCKER_REPO/onap/vfc/wfengine-mgrservice:$MGRSERVICE_DOCKER_VER +docker pull $NEXUS_DOCKER_REPO/onap/vfc/catalog:$CATALOG_DOCKER_VER +docker pull $NEXUS_DOCKER_REPO/onap/vfc/emsdriver:$EMSDRIVER_DOCKER_VER +docker pull $NEXUS_DOCKER_REPO/onap/vfc/gvnfmdriver:$GVNFMDRIVER_DOCKER_VER +docker pull $NEXUS_DOCKER_REPO/onap/vfc/jujudriver:$JUJUDRIVER_DOCKER_VER +docker pull $NEXUS_DOCKER_REPO/onap/vfc/nfvo/svnfm/huawei:$HUAWEI_DOCKER_VER +docker pull $NEXUS_DOCKER_REPO/onap/vfc/nslcm:$NSLCM_DOCKER_VER +docker pull $NEXUS_DOCKER_REPO/onap/vfc/resmanagement:$RESMANAGEMENT_DOCKER_VER +docker pull $NEXUS_DOCKER_REPO/onap/vfc/vnflcm:$VNFLCM_DOCKER_VER +docker pull $NEXUS_DOCKER_REPO/onap/vfc/vnfmgr:$VNFMGR_DOCKER_VER +docker pull $NEXUS_DOCKER_REPO/onap/vfc/vnfres:$VNFRES_DOCKER_VER +docker pull $NEXUS_DOCKER_REPO/onap/vfc/ztesdncdriver:$ZTESDNCDRIVER_DOCKER_VER +docker pull $NEXUS_DOCKER_REPO/onap/vfc/ztevmanagerdriver:$ZTEVMANAGERDRIVER_DOCKER_VER +docker pull $NEXUS_DOCKER_REPO/onap/vfc/nfvo/svnfm/nokia:$NOKIA_DOCKER_VER docker rm -f vfc_wfengine_mgrservice docker rm -f vfc_wfengine_activiti @@ -43,18 +43,18 @@ docker rm -f vfc_ztevmanagerdriver docker rm -f vfc_svnfm_nokia # Insert docker run instructions here -docker run -i -t -d --name vfc_wfengine_activiti -p 8804:8080 -e SERVICE_IP=$OPENO_IP -e SERVICE_PORT=8804 -e OPENPALETTE_MSB_IP=$OPENO_IP -e OPENPALETTE_MSB_PORT=80 $NEXUS_DOCKER_REPO/onap/vfc/wfengine-activiti:$DOCKER_IMAGE_VERSION -docker run -i -t -d --name vfc_wfengine_mgrservice -p 8805:10550 -e SERVICE_IP=$OPENO_IP -e SERVICE_PORT=8805 -e OPENPALETTE_MSB_IP=$OPENO_IP -e OPENPALETTE_MSB_PORT=80 $NEXUS_DOCKER_REPO/onap/vfc/wfengine-mgrservice:$DOCKER_IMAGE_VERSION -docker run -i -t -d --name vfc_catalog -p 8806:8806 -e 
MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/catalog:$DOCKER_IMAGE_VERSION -docker run -i -t -d --name vfc_emsdriver -p 8206:8206 -e MSB_ADDR=$OPENO_IP:80 -e VES_ADDR=$DCAE_COLL_IP:8080 -e VES_AUTHINFO="":"" $NEXUS_DOCKER_REPO/onap/vfc/emsdriver:$DOCKER_IMAGE_VERSION -docker run -i -t -d --name vfc_gvnfmdriver -p 8484:8484 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/gvnfmdriver:$DOCKER_IMAGE_VERSION -docker run -i -t -d --name vfc_jujudriver -p 8483:8483 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/jujudriver:$DOCKER_IMAGE_VERSION -docker run -i -t -d --name vfc_svnfm_huawei -p 8482:8482 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/nfvo/svnfm/huawei:$DOCKER_IMAGE_VERSION -docker run -i -t -d --name vfc_nslcm -p 8403:8403 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/nslcm:$DOCKER_IMAGE_VERSION -docker run -i -t -d --name vfc_resmanagement -p 8480:8480 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/resmanagement:$DOCKER_IMAGE_VERSION -docker run -i -t -d --name vfc_vnflcm -p 8801:8801 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/vnflcm:$DOCKER_IMAGE_VERSION -docker run -i -t -d --name vfc_vnfmgr -p 8803:8803 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/vnfmgr:$DOCKER_IMAGE_VERSION -docker run -i -t -d --name vfc_vnfres -p 8802:8802 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/vnfres:$DOCKER_IMAGE_VERSION -docker run -i -t -d --name vfc_ztesdncdriver -p 8411:8411 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/ztesdncdriver:$DOCKER_IMAGE_VERSION -docker run -i -t -d --name vfc_ztevmanagerdriver -p 8410:8410 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/ztevmanagerdriver:$DOCKER_IMAGE_VERSION -docker run -i -t -d --name vfc_svnfm_nokia -p 8486:8486 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/nfvo/svnfm/nokia:$DOCKER_IMAGE_VERSION
\ No newline at end of file +docker run -i -t -d --name vfc_wfengine_activiti -p 8804:8080 -e SERVICE_IP=$OPENO_IP -e SERVICE_PORT=8804 -e OPENPALETTE_MSB_IP=$OPENO_IP -e OPENPALETTE_MSB_PORT=80 $NEXUS_DOCKER_REPO/onap/vfc/wfengine-activiti:$ACTIVITI_DOCKER_VER +docker run -i -t -d --name vfc_wfengine_mgrservice -p 8805:10550 -e SERVICE_IP=$OPENO_IP -e SERVICE_PORT=8805 -e OPENPALETTE_MSB_IP=$OPENO_IP -e OPENPALETTE_MSB_PORT=80 $NEXUS_DOCKER_REPO/onap/vfc/wfengine-mgrservice:$MGRSERVICE_DOCKER_VER +docker run -i -t -d --name vfc_catalog -p 8806:8806 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/catalog:$CATALOG_DOCKER_VER +docker run -i -t -d --name vfc_emsdriver -p 8206:8206 -e MSB_ADDR=$OPENO_IP:80 -e VES_ADDR=$DCAE_COLL_IP:8080 -e VES_AUTHINFO="":"" $NEXUS_DOCKER_REPO/onap/vfc/emsdriver:$EMSDRIVER_DOCKER_VER +docker run -i -t -d --name vfc_gvnfmdriver -p 8484:8484 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/gvnfmdriver:$GVNFMDRIVER_DOCKER_VER +docker run -i -t -d --name vfc_jujudriver -p 8483:8483 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/jujudriver:$JUJUDRIVER_DOCKER_VER +docker run -i -t -d --name vfc_svnfm_huawei -p 8482:8482 -p 8443:8443 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/nfvo/svnfm/huawei:$HUAWEI_DOCKER_VER +docker run -i -t -d --name vfc_nslcm -p 8403:8403 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/nslcm:$NSLCM_DOCKER_VER +docker run -i -t -d --name vfc_resmanagement -p 8480:8480 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/resmanagement:$RESMANAGEMENT_DOCKER_VER +docker run -i -t -d --name vfc_vnflcm -p 8801:8801 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/vnflcm:$VNFLCM_DOCKER_VER +docker run -i -t -d --name vfc_vnfmgr -p 8803:8803 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/vnfmgr:$VNFMGR_DOCKER_VER +docker run -i -t -d --name vfc_vnfres -p 8802:8802 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/vnfres:$VNFRES_DOCKER_VER +docker run -i -t -d --name vfc_ztesdncdriver -p 8411:8411 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/ztesdncdriver:$ZTESDNCDRIVER_DOCKER_VER +docker run -i -t -d --name vfc_ztevmanagerdriver -p 8410:8410 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/ztevmanagerdriver:$ZTEVMANAGERDRIVER_DOCKER_VER +docker run -i -t -d --name vfc_svnfm_nokia -p 8486:8486 -e MSB_ADDR=$OPENO_IP:80 $NEXUS_DOCKER_REPO/onap/vfc/nfvo/svnfm/nokia:$NOKIA_DOCKER_VER diff --git a/heat/ONAP/onap_openstack_float.env b/heat/ONAP/deprecated/onap_openstack_float.env index e970b7f4..25517f8d 100644 --- a/heat/ONAP/onap_openstack_float.env +++ b/heat/ONAP/deprecated/onap_openstack_float.env @@ -158,7 +158,7 @@ parameters: mr_branch: master dcae_branch: master policy_branch: master - portal_branch: master + portal_branch: release-1.3.0 robot_branch: master sdc_branch: master sdnc_branch: master @@ -172,7 +172,7 @@ parameters: mr_docker: 1.1-STAGING-latest dcae_docker: 1.1-latest policy_docker: 1.1-STAGING-latest - portal_docker: 1.3-STAGING-latest + portal_docker: v1.3.0 robot_docker: 1.1-STAGING-latest sdc_docker: 1.1-STAGING-latest sdnc_docker: 1.2-STAGING-latest @@ -184,7 +184,7 @@ parameters: uui_docker: latest esr_docker: latest dgbuilder_docker: 0.1-STAGING-latest - cli_docker: 1.1-STAGING-latest + cli_docker: v1.1.0 ##################### # # diff --git a/heat/ONAP/onap_openstack_float.yaml b/heat/ONAP/deprecated/onap_openstack_float.yaml index 01f160ab..01f160ab 100644 --- a/heat/ONAP/onap_openstack_float.yaml +++ b/heat/ONAP/deprecated/onap_openstack_float.yaml diff --git 
a/heat/ONAP/onap_openstack_nofloat.env b/heat/ONAP/deprecated/onap_openstack_nofloat.env index 1b9cbd91..49abd664 100644 --- a/heat/ONAP/onap_openstack_nofloat.env +++ b/heat/ONAP/deprecated/onap_openstack_nofloat.env @@ -130,7 +130,7 @@ parameters: mr_branch: master dcae_branch: master policy_branch: master - portal_branch: master + portal_branch: release-1.3.0 robot_branch: master sdc_branch: master sdnc_branch: master @@ -144,7 +144,7 @@ parameters: mr_docker: 1.1-STAGING-latest dcae_docker: 1.1-latest policy_docker: 1.1-STAGING-latest - portal_docker: 1.3-STAGING-latest + portal_docker: v1.3.0 robot_docker: 1.1-STAGING-latest sdc_docker: 1.1-STAGING-latest sdnc_docker: 1.2-STAGING-latest @@ -156,7 +156,7 @@ parameters: uui_docker: latest esr_docker: latest dgbuilder_docker: 0.1-STAGING-latest - cli_docker: 1.1-STAGING-latest + cli_docker: v1.1.0 ##################### # # diff --git a/heat/ONAP/onap_openstack_nofloat.yaml b/heat/ONAP/deprecated/onap_openstack_nofloat.yaml index 136b1606..136b1606 100644 --- a/heat/ONAP/onap_openstack_nofloat.yaml +++ b/heat/ONAP/deprecated/onap_openstack_nofloat.yaml diff --git a/heat/ONAP/onap_rackspace.env b/heat/ONAP/deprecated/onap_rackspace.env index 82e31eff..d08c24e8 100644 --- a/heat/ONAP/onap_rackspace.env +++ b/heat/ONAP/deprecated/onap_rackspace.env @@ -83,7 +83,7 @@ parameters: mr_branch: master dcae_branch: master policy_branch: master - portal_branch: master + portal_branch: release-1.3.0 robot_branch: master sdc_branch: master sdnc_branch: master @@ -96,11 +96,11 @@ parameters: mr_docker: 1.1-STAGING-latest dcae_docker: 1.1-STAGING-latest policy_docker: 1.1-STAGING-latest - portal_docker: 1.3-STAGING-latest + portal_docker: v1.3.0 robot_docker: 1.1-STAGING-latest sdc_docker: 1.1-STAGING-latest sdnc_docker: 1.2-STAGING-latest vid_docker: 1.1-STAGING-latest clamp_docker: 1.1-STAGING-latest dgbuilder_docker: 0.1-STAGING-latest - cli_docker: 1.1-STAGING-latest + cli_docker: v1.1.0 diff --git a/heat/ONAP/onap_rackspace.yaml b/heat/ONAP/deprecated/onap_rackspace.yaml index a73053f2..a73053f2 100644 --- a/heat/ONAP/onap_rackspace.yaml +++ b/heat/ONAP/deprecated/onap_rackspace.yaml diff --git a/heat/ONAP/manifest-to-env.sh b/heat/ONAP/manifest-to-env.sh new file mode 100755 index 00000000..017b091a --- /dev/null +++ b/heat/ONAP/manifest-to-env.sh @@ -0,0 +1,28 @@ +#!/bin/bash +#==================LICENSE_START========================================== +# +# Copyright (c) 2017 Huawei Technologies Co., Ltd. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. +# +#==================LICENSE_END============================================ + +# USAGE: Pipe in docker-manifest.csv from the integration repo. This +# script converts it into a series of environment variable settings +# that can then be used with envsubst to set the docker versions in +# onap_openstack_template.env. 
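To make the awk transformation in this new helper concrete, here is a worked example; the header line and the manifest row are illustrative stand-ins, not values taken from the integration repo:

    printf 'image,tag\nonap/aai-resources,v1.1.0\n' | sed '1d' \
      | awk -F , '{ v=$1; gsub(".*[./]","",$1); gsub("-","_",$1); print "export " toupper($1) "_DOCKER=" $2 " # " v }'
    # prints: export AAI_RESOURCES_DOCKER=v1.1.0 # onap/aai-resources
    # which is the ${AAI_RESOURCES_DOCKER} placeholder consumed by onap_openstack_template.env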
+# +# EXAMPLE: +# source <(./manifest-to-env.sh < ~/Projects/onap/integration/version-manifest/src/main/resources/docker-manifest.csv) +# envsubst < onap_openstack_template.env > onap_openstack.env + +sed '1d' | awk -F , '{ v=$1; gsub(".*[./]","",$1); gsub("-","_",$1); print "export " toupper($1) "_DOCKER=" $2 " # " v }' diff --git a/heat/ONAP/onap_openstack.env b/heat/ONAP/onap_openstack.env index aad2d1e1..ac088bc3 100644 --- a/heat/ONAP/onap_openstack.env +++ b/heat/ONAP/onap_openstack.env @@ -6,7 +6,9 @@ parameters: # # ############################################## - public_net_id: PUT YOUR NETWORK ID/NAME HERE + public_net_id: PUT YOUR NETWORK ID HERE + + public_net_name: PUT YOUR NETWORK NAME HERE ubuntu_1404_image: PUT THE UBUNTU 14.04 IMAGE NAME HERE @@ -22,7 +24,7 @@ parameters: flavor_xxlarge: PUT THE XXLARGE FLAVOR NAME HERE - vm_base_name: vm1 + vm_base_name: onap key_name: onap_key @@ -42,6 +44,8 @@ parameters: openstack_tenant_id: PUT YOUR OPENSTACK PROJECT ID HERE + openstack_tenant_name: PUT YOUR OPENSTACK PROJECT NAME HERE + openstack_username: PUT YOUR OPENSTACK USERNAME HERE openstack_api_key: PUT YOUR OPENSTACK PASSWORD HERE @@ -65,6 +69,7 @@ parameters: dns_list: PUT THE ADDRESS OF THE EXTERNAL DNS HERE (e.g. a comma-separated list of IP addresses in your /etc/resolv.conf in UNIX-based Operating Systems) external_dns: PUT THE FIRST ADDRESS OF THE EXTERNAL DNS LIST HERE + dns_forwarder: PUT THE IP OF DNS FORWARDER FOR ONAP DEPLOYMENT'S OWN DNS SERVER oam_network_cidr: 10.0.0.0/16 ### Private IP addresses ### @@ -73,11 +78,6 @@ parameters: aai2_ip_addr: 10.0.1.2 appc_ip_addr: 10.0.2.1 dcae_ip_addr: 10.0.4.1 - dcae_coll_ip_addr: 10.0.4.102 - dcae_db_ip_addr: 10.0.4.101 - dcae_hdp1_ip_addr: 10.0.4.103 - dcae_hdp2_ip_addr: 10.0.4.104 - dcae_hdp3_ip_addr: 10.0.4.105 dns_ip_addr: 10.0.100.1 so_ip_addr: 10.0.5.1 mr_ip_addr: 10.0.11.1 @@ -90,48 +90,28 @@ parameters: clamp_ip_addr: 10.0.12.1 openo_ip_addr: 10.0.14.1 -# dcae_coll_float_ip: PUT DCAE COLLECTOR FLOATING IP HERE -# dcae_db_float_ip: PUT DCAE DATABASE FLOATING IP HERE -# dcae_hdp1_float_ip: PUT DCAE HADOOP VM1 FLOATING IP HERE -# dcae_hdp2_float_ip: PUT DCAE HADOOP VM2 FLOATING IP HERE -# dcae_hdp3_float_ip: PUT DCAE HADOOP VM3 FLOATING IP HERE - ########################### # # # Parameters used by DCAE # # # ########################### -# dcae_base_environment: 1-NIC-FLOATING-IPS - -# dcae_zone: ZONE - -# dcae_state: STATE - -# nexus_repo_root: https://nexus.onap.org - -# nexus_url_snapshot: https://nexus.onap.org/content/repositories/snapshots - -# gitlab_branch: master - -# dcae_code_version: 1.1.0 - dnsaas_config_enabled: PUT WHETHER TO USE PROXYED DESIGNATE dnsaas_region: PUT THE DESIGNATE PROVIDING OPENSTACK'S REGION HERE - dnsaas_tenant_id: PUT THE DESIGNATE PROVIDING OPENSTACK'S DEFAULT TENANT HERE dnsaas_keystone_url: PUT THE DESIGNATE PROVIDING OPENSTACK'S KEYSTONE URL HERE + dnsaas_tenant_name: PUT THE TENANT NAME IN THE DESIGNATE PROVIDING OPENSTACK HERE (FOR R1 USE THE SAME AS openstack_tenant_name) dnsaas_username: PUT THE DESIGNATE PROVIDING OPENSTACK'S USERNAME HERE dnsaas_password: PUT THE DESIGNATE PROVIDING OPENSTACK'S PASSWORD HERE dcae_keystone_url: PUT THE MULTIVIM PROVIDED KEYSTONE API URL HERE dcae_centos_7_image: PUT THE CENTOS7 VM IMAGE NAME HERE FOR DCAE LAUNCHED CENTOS7 VM - dcae_security_group: PUT THE SECURITY GROUP NAME HERE FOR DCAE LAUNCHED VMS - dcae_key_name: PUT THE ON BOARDED KEY-PAIR NAME HERE FOR DCAE LAUNCHED VMS + dcae_domain: PUT THE NAME OF DOMAIN THAT DCAE VMS REGISTER 
UNDER dcae_public_key: PUT THE PUBLIC KEY OF A KEYPAIR HERE TO BE USED BETWEEN DCAE LAUNCHED VMS dcae_private_key: PUT THE SECRET KEY OF A KEYPAIR HERE TO BE USED BETWEEN DCAE LAUNCHED VMS ################################ # # # Docker versions and branches # + # Generated using onap_openstack_template.env and manifest-to-env.sh # # ################################ @@ -139,7 +119,6 @@ parameters: appc_branch: master so_branch: master mr_branch: master - dcae_branch: master policy_branch: master portal_branch: release-1.3.0 robot_branch: master @@ -149,25 +128,40 @@ parameters: clamp_branch: master vnfsdk_branch: master - aai_docker: 1.1-STAGING-latest - appc_docker: 1.1-STAGING-latest - so_docker: 1.1-STAGING-latest - mr_docker: 1.1-STAGING-latest - dcae_docker: 1.1-latest - policy_docker: 1.1-STAGING-latest - portal_docker: 1.3-STAGING-latest + aai_docker: v1.1.0 + aai_sparky_docker: v1.1.0 + appc_docker: v1.2.0 + so_docker: v1.1.1 + dcae_docker: v1.1.0 + policy_docker: v1.1.1 + portal_docker: v1.3.0 robot_docker: 1.1-STAGING-latest - sdc_docker: 1.1-STAGING-latest - sdnc_docker: 1.2-STAGING-latest - vid_docker: 1.1-STAGING-latest - clamp_docker: 1.1-STAGING-latest - msb_docker: latest - mvim_docker: latest - vfc_docker: latest - uui_docker: latest - esr_docker: latest - dgbuilder_docker: 0.1-STAGING-latest - cli_docker: 1.1-STAGING-latest + sdc_docker: v1.1.0 + sdnc_docker: v1.2.1 + vid_docker: v1.1.1 + clamp_docker: v1.1.0 + msb_docker: 1.0.0 + mvim_docker: v1.0.0 + uui_docker: v1.0.1 + esr_docker: v1.0.0 + dgbuilder_docker: v0.1.0 + cli_docker: v1.1.0 + + vfc_nokia_docker: v1.0.2 + vfc_ztevmanagerdriver_docker: v1.0.2 + vfc_ztesdncdriver_docker: v1.0.0 + vfc_vnfres_docker: v1.0.1 + vfc_vnfmgr_docker: v1.0.1 + vfc_vnflcm_docker: v1.0.1 + vfc_resmanagement_docker: v1.0.0 + vfc_nslcm_docker: v1.0.2 + vfc_huawei_docker: v1.0.2 + vfc_jujudriver_docker: v1.0.0 + vfc_gvnfmdriver_docker: v1.0.1 + vfc_emsdriver_docker: v1.0.1 + vfc_catalog_docker: v1.0.2 + vfc_wfengine_mgrservice_docker: v1.0.0 + vfc_wfengine_activiti_docker: v1.0.0 ##################### # # @@ -176,7 +170,6 @@ parameters: ##################### aai_repo: http://gerrit.onap.org/r/aai/test-config appc_repo: http://gerrit.onap.org/r/appc/deployment.git - dcae_repo: http://gerrit.onap.org/r/dcae/demo/startup/controller.git mr_repo: http://gerrit.onap.org/r/dcae/demo/startup/message-router.git so_repo: http://gerrit.onap.org/r/so/docker-config.git policy_repo: http://gerrit.onap.org/r/policy/docker.git diff --git a/heat/ONAP/onap_openstack.yaml b/heat/ONAP/onap_openstack.yaml index 97f80581..27f53270 100644 --- a/heat/ONAP/onap_openstack.yaml +++ b/heat/ONAP/onap_openstack.yaml @@ -42,7 +42,11 @@ parameters: public_net_id: type: string - description: Public network for floating IP address allocation + description: The ID of the Public network for floating IP address allocation + + public_net_name: + type: string + description: The name of the Public network referred by public_net_id ubuntu_1404_image: type: string @@ -112,6 +116,10 @@ parameters: type: string description: OpenStack tenant ID + openstack_tenant_name: + type: string + description: OpenStack tenant name (matching with the openstack_tenant_id) + openstack_username: type: string description: OpenStack username @@ -152,7 +160,11 @@ parameters: external_dns: type: string - description: First element of the dns_list for ONAP network + description: Public IP of the external DNS for ONAP network + + dns_forwarder: + type: string + description: the forwarder address for setting 
up ONAP's private DNS server oam_network_cidr: type: string @@ -167,16 +179,6 @@ parameters: type: string dcae_ip_addr: type: string - dcae_coll_ip_addr: - type: string - dcae_db_ip_addr: - type: string - dcae_hdp1_ip_addr: - type: string - dcae_hdp2_ip_addr: - type: string - dcae_hdp3_ip_addr: - type: string dns_ip_addr: type: string so_ip_addr: @@ -199,16 +201,6 @@ parameters: type: string openo_ip_addr: type: string -# dcae_coll_float_ip: -# type: string -# dcae_db_float_ip: -# type: string -# dcae_hdp1_float_ip: -# type: string -# dcae_hdp2_float_ip: -# type: string -# dcae_hdp3_float_ip: -# type: string ########################### # # @@ -224,10 +216,6 @@ parameters: type: string description: the region of the cloud instance providing the Designate DNS as a Service - dnsaas_tenant_id: - type: string - description: the (default) tenant id of the cloud instance providing the Designate DNS as a Service - dnsaas_keystone_url: type: string description: the keystone URL of the cloud instance providing the Designate DNS as a Service @@ -240,13 +228,13 @@ parameters: type: string description: the password of the cloud instance providing the Designate DNS as a Service - dcae_keystone_url: + dnsaas_tenant_name: type: string - description: the keystone URL for DCAE to use (via MultiCloud) + description: the name of the tenant in the cloud instance providing the Designate DNS as a Service - dcae_key_name: + dcae_keystone_url: type: string - description: the name of the keypair on-boarded with Cloud + description: the keystone URL for DCAE to use (via MultiCloud) dcae_private_key: type: string @@ -260,38 +248,9 @@ parameters: type: string description: the id/name of the CentOS 7 VM imange - dcae_security_group: + dcae_domain: type: string - description: the security group to be used by DCAE VMs - - -# dcae_base_environment: -# type: string -# description: DCAE Base Environment configuration (RACKSPACE/2-NIC/1-NIC-FLOATING-IPS) - -# dcae_zone: -# type: string -# description: DCAE Zone to use in VM names created by DCAE controller - -# dcae_state: -# type: string -# description: DCAE State to use in VM names created by DCAE controller - -# nexus_repo_root: -# type: string -# description: Root URL of Nexus repository - -# nexus_url_snapshot: -# type: string -# description: Snapshot of Maven repository for DCAE deployment - -# gitlab_branch: -# type: string -# description: Branch of the Gitlab repository - -# dcae_code_version: -# type: string -# description: DCAE Code Version Number + description: the top level domain to register DCAE VMs (the zone will be random-str.dcae_domain) ##################### # # @@ -303,8 +262,6 @@ parameters: type: string appc_repo: type: string - dcae_repo: - type: string mr_repo: type: string so_repo: @@ -334,12 +291,12 @@ parameters: aai_docker: type: string + aai_sparky_docker: + type: string appc_docker: type: string so_docker: type: string - mr_docker: - type: string dcae_docker: type: string policy_docker: @@ -360,8 +317,6 @@ parameters: type: string mvim_docker: type: string - vfc_docker: - type: string uui_docker: type: string esr_docker: @@ -370,6 +325,36 @@ parameters: type: string cli_docker: type: string + vfc_nokia_docker: + type: string + vfc_ztevmanagerdriver_docker: + type: string + vfc_ztesdncdriver_docker: + type: string + vfc_vnfres_docker: + type: string + vfc_vnfmgr_docker: + type: string + vfc_vnflcm_docker: + type: string + vfc_resmanagement_docker: + type: string + vfc_nslcm_docker: + type: string + vfc_huawei_docker: + type: string + 
vfc_jujudriver_docker: + type: string + vfc_gvnfmdriver_docker: + type: string + vfc_emsdriver_docker: + type: string + vfc_catalog_docker: + type: string + vfc_wfengine_mgrservice_docker: + type: string + vfc_wfengine_activiti_docker: + type: string aai_branch: type: string @@ -379,8 +364,6 @@ parameters: type: string mr_branch: type: string - dcae_branch: - type: string policy_branch: type: string portal_branch: @@ -411,7 +394,6 @@ resources: properties: length: 4 - # Public key used to access ONAP components vm_key: type: OS::Nova::KeyPair @@ -425,6 +407,36 @@ resources: public_key: { get_param: pub_key } save_private_key: false + + # ONAP security group + onap_sg: + type: OS::Neutron::SecurityGroup + properties: + name: + str_replace: + template: base_rand + params: + base: onap_sg + rand: { get_resource: random-str } + description: security group used by ONAP + rules: + # All egress traffic + - direction: egress + ethertype: IPv4 + - direction: egress + ethertype: IPv6 + # ingress traffic + # ICMP + - protocol: icmp + - protocol: udp + port_range_min: 1 + port_range_max: 65535 + - protocol: tcp + port_range_min: 1 + port_range_max: 65535 + + + # ONAP management private network oam_onap: type: OS::Neutron::Net @@ -498,7 +510,6 @@ resources: __aai2_ip_addr__: { get_param: aai2_ip_addr } __appc_ip_addr__: { get_param: appc_ip_addr } __dcae_ip_addr__: { get_param: dcae_ip_addr } - __dcae_coll_ip_addr__: { get_param: dcae_coll_ip_addr } __so_ip_addr__: { get_param: so_ip_addr } __mr_ip_addr__: { get_param: mr_ip_addr } __policy_ip_addr__: { get_param: policy_ip_addr } @@ -511,6 +522,7 @@ resources: __openo_ip_addr__: { get_param: openo_ip_addr } __cloud_env__: { get_param: cloud_env } __external_dns__: { get_param: external_dns } + __dns_forwarder__: { get_param: dns_forwarder } template: | #!/bin/bash @@ -525,7 +537,6 @@ resources: echo "__aai2_ip_addr__" > /opt/config/aai2_ip_addr.txt echo "__appc_ip_addr__" > /opt/config/appc_ip_addr.txt echo "__dcae_ip_addr__" > /opt/config/dcae_ip_addr.txt - echo "__dcae_coll_ip_addr__" > /opt/config/dcae_coll_ip_addr.txt echo "__so_ip_addr__" > /opt/config/so_ip_addr.txt echo "__mr_ip_addr__" > /opt/config/mr_ip_addr.txt echo "__policy_ip_addr__" > /opt/config/policy_ip_addr.txt @@ -537,6 +548,7 @@ resources: echo "__clamp_ip_addr__" > /opt/config/clamp_ip_addr.txt echo "__openo_ip_addr__" > /opt/config/openo_ip_addr.txt echo "__external_dns__" > /opt/config/external_dns.txt + echo "__dns_forwarder__" > /opt/config/dns_forwarder.txt # Download and run install script curl -k __nexus_repo__/org.onap.demo/boot/__artifacts_version__/dns_install.sh -o /opt/dns_install.sh @@ -584,6 +596,7 @@ resources: __artifacts_version__: { get_param: artifacts_version } __dns_ip_addr__: { get_param: dns_ip_addr } __docker_version__: { get_param: aai_docker } + __aai_sparky_docker__ : { get_param: aai_sparky_docker } __gerrit_branch__: { get_param: aai_branch } __cloud_env__: { get_param: cloud_env } __external_dns__: { get_param: external_dns } @@ -601,6 +614,7 @@ resources: echo "__dns_ip_addr__" > /opt/config/dns_ip_addr.txt echo "__dmaap_topic__" > /opt/config/dmaap_topic.txt echo "__docker_version__" > /opt/config/docker_version.txt + echo "__aai_sparky_docker__" > /opt/config/sparky_version.txt echo "__gerrit_branch__" > /opt/config/gerrit_branch.txt echo "aai_instance_1" > /opt/config/aai_instance.txt echo "__cloud_env__" > /opt/config/cloud_env.txt @@ -864,7 +878,6 @@ resources: __artifacts_version__: { get_param: artifacts_version } __openstack_region__: { 
get_param: openstack_region } __dns_ip_addr__: { get_param: dns_ip_addr } - __docker_version__: { get_param: mr_docker } __gerrit_branch__: { get_param: mr_branch } __cloud_env__: { get_param: cloud_env } __keystone_url__: { get_param: keystone_url } @@ -887,6 +900,7 @@ resources: __public_net_id__: { get_param: public_net_id } __script_version__: { get_param: artifacts_version } __robot_repo__: { get_param: robot_repo } + __docker_version__: { get_param: robot_docker } template: | #!/bin/bash @@ -1113,7 +1127,7 @@ resources: __artifacts_version__: { get_param: artifacts_version } __dns_ip_addr__: { get_param: dns_ip_addr } __mr_ip_addr__: { get_param: mr_ip_addr } - __public_ip__: { get_attr: [sdc_floating_ip, floating_ip_address] } + __private_ip__: { get_param: sdc_ip_addr } __docker_version__: { get_param: sdc_docker } __gerrit_branch__: { get_param: sdc_branch } __cloud_env__: { get_param: cloud_env } @@ -1130,7 +1144,7 @@ resources: echo "__nexus_password__" > /opt/config/nexus_password.txt echo "__env_name__" > /opt/config/env_name.txt echo "__mr_ip_addr__" > /opt/config/mr_ip_addr.txt - echo "__public_ip__" > /opt/config/public_ip.txt + echo "__private_ip__" > /opt/config/private_ip.txt echo "__artifacts_version__" > /opt/config/artifacts_version.txt echo "__dns_ip_addr__" > /opt/config/dns_ip_addr.txt echo "__docker_version__" > /opt/config/docker_version.txt @@ -1215,147 +1229,6 @@ resources: ./portal_install.sh - # DCAE Controller instantiation -# dcae_c_private_port: -# type: OS::Neutron::Port -# properties: -# network: { get_resource: oam_onap } -# fixed_ips: [{"subnet": { get_resource: oam_onap_subnet }, "ip_address": { get_param: dcae_ip_addr }}] - -# dcae_c_floating_ip: -# type: OS::Neutron::FloatingIP -# properties: -# floating_network_id: { get_param: public_net_id } -# port_id: { get_resource: dcae_c_private_port } - -# dcae_c_vm: -# type: OS::Nova::Server -# properties: -# image: { get_param: ubuntu_1404_image } -# flavor: { get_param: flavor_medium } -# name: -# str_replace: -# template: base-dcae-controller -# params: -# base: { get_param: vm_base_name } -# key_name: { get_resource: vm_key } -# networks: -# - port: { get_resource: dcae_c_private_port } -# user_data_format: RAW -# user_data: -# str_replace: -# params: -# __nexus_repo__: { get_param: nexus_repo } -# __nexus_docker_repo__: { get_param: nexus_docker_repo } -# __nexus_username__: { get_param: nexus_username } -# __nexus_password__: { get_param: nexus_password } -# __nexus_url_snapshots__: { get_param: nexus_url_snapshot } -# __gitlab_branch__: { get_param: gitlab_branch } -# __dns_ip_addr__: { get_param: dns_ip_addr } -# __dcae_zone__: { get_param: dcae_zone } -# __dcae_state__: { get_param: dcae_state } -# __artifacts_version__: { get_param: artifacts_version } -# __tenant_id__: { get_param: openstack_tenant_id } -# __openstack_private_network_name__: { get_attr: [oam_onap, name] } -# __openstack_user__: { get_param: openstack_username } -# __openstack_password__: { get_param: openstack_api_key } -# __openstack_auth_method__: { get_param: openstack_auth_method } -# __key_name__: { get_param: key_name } -# __rand_str__: { get_resource: random-str } -# __pub_key__: { get_param: pub_key } -# __nexus_repo_root__: { get_param: nexus_repo_root } -# __openstack_region__: { get_param: openstack_region } -# __horizon_url__: { get_param: horizon_url } -# __keystone_url__: { get_param: keystone_url } -# __docker_version__: { get_param: dcae_docker } -# __gerrit_branch__: { get_param: dcae_branch } -# 
__dcae_code_version__: { get_param: dcae_code_version } -# __cloud_env__: { get_param: cloud_env } -# __public_net_id__: { get_param: public_net_id } -# __dcae_base_environment__: { get_param: dcae_base_environment } -# __dcae_ip_addr__: { get_param: dcae_ip_addr } -# __dcae_coll_ip_addr__: { get_param: dcae_coll_ip_addr } -# __dcae_db_ip_addr__: { get_param: dcae_db_ip_addr } -# __dcae_hdp1_ip_addr__: { get_param: dcae_hdp1_ip_addr } -# __dcae_hdp2_ip_addr__: { get_param: dcae_hdp2_ip_addr } -# __dcae_hdp3_ip_addr__: { get_param: dcae_hdp3_ip_addr } -# __dcae_float_ip__: { get_attr: [dcae_c_floating_ip, floating_ip_address] } -# __dcae_coll_float_ip__: { get_param: dcae_coll_float_ip } -# __dcae_db_float_ip__: { get_param: dcae_db_float_ip } -# __dcae_hdp1_float_ip__: { get_param: dcae_hdp1_float_ip } -# __dcae_hdp2_float_ip__: { get_param: dcae_hdp2_float_ip } -# __dcae_hdp3_float_ip__: { get_param: dcae_hdp3_float_ip } -# __external_dns__: { get_param: external_dns } -# __ubuntu_1404_image__: { get_param: ubuntu_1404_image } -# __ubuntu_1604_image__: { get_param: ubuntu_1604_image } -# __flavor_small__: { get_param: flavor_small } -# __flavor_medium__: { get_param: flavor_medium } -# __flavor_large__: { get_param: flavor_large } -# __flavor_xlarge__: { get_param: flavor_xlarge } -# __dcae_repo__: { get_param: dcae_repo } -# __mr_repo__: { get_param: mr_repo } -# template: | - #!/bin/bash - - # Create configuration files -# mkdir -p /opt/config -# echo "__nexus_repo__" > /opt/config/nexus_repo.txt -# echo "__nexus_docker_repo__" > /opt/config/nexus_docker_repo.txt -# echo "__nexus_username__" > /opt/config/nexus_username.txt -# echo "__nexus_password__" > /opt/config/nexus_password.txt -# echo "__nexus_url_snapshots__" > /opt/config/nexus_url_snapshots.txt -# echo "__gitlab_branch__" > /opt/config/gitlab_branch.txt -# echo "__docker_version__" > /opt/config/docker_version.txt -# echo "__artifacts_version__" > /opt/config/artifacts_version.txt -# echo "__dns_ip_addr__" > /opt/config/dns_ip_addr.txt -# echo "__gerrit_branch__" > /opt/config/gerrit_branch.txt -# echo "__dcae_zone__" > /opt/config/dcae_zone.txt -# echo "__dcae_state__" > /opt/config/dcae_state.txt -# echo "__tenant_id__" > /opt/config/tenant_id.txt -# echo "__openstack_private_network_name__" > /opt/config/openstack_private_network_name.txt -# echo "__openstack_user__" > /opt/config/openstack_user.txt -# echo "__openstack_password__" > /opt/config/openstack_password.txt -# echo "__openstack_auth_method__" > /opt/config/openstack_auth_method.txt -# echo "__key_name__" > /opt/config/key_name.txt -# echo "__rand_str__" > /opt/config/rand_str.txt -# echo "__pub_key__" > /opt/config/pub_key.txt -# echo "__nexus_repo_root__" > /opt/config/nexus_repo_root.txt -# echo "__openstack_region__" > /opt/config/openstack_region.txt -# echo "__horizon_url__" > /opt/config/horizon_url.txt -# echo "__keystone_url__" > /opt/config/keystone_url.txt -# echo "__cloud_env__" > /opt/config/cloud_env.txt -# echo "__public_net_id__" > /opt/config/public_net_id.txt -# echo "__dcae_base_environment__" > /opt/config/dcae_base_environment.txt -# echo "__dcae_code_version__" > /opt/config/dcae_code_version.txt -# echo "__dcae_ip_addr__" > /opt/config/dcae_ip_addr.txt -# echo "__dcae_coll_ip_addr__" > /opt/config/dcae_coll_ip_addr.txt -# echo "__dcae_db_ip_addr__" > /opt/config/dcae_db_ip_addr.txt -# echo "__dcae_hdp1_ip_addr__" > /opt/config/dcae_hdp1_ip_addr.txt -# echo "__dcae_hdp2_ip_addr__" > /opt/config/dcae_hdp2_ip_addr.txt -# echo 
"__dcae_hdp3_ip_addr__" > /opt/config/dcae_hdp3_ip_addr.txt -# echo "__dcae_float_ip__" > /opt/config/dcae_float_ip.txt -# echo "__dcae_coll_float_ip__" > /opt/config/dcae_coll_float_ip.txt -# echo "__dcae_db_float_ip__" > /opt/config/dcae_db_float_ip.txt -# echo "__dcae_hdp1_float_ip__" > /opt/config/dcae_hdp1_float_ip.txt -# echo "__dcae_hdp2_float_ip__" > /opt/config/dcae_hdp2_float_ip.txt -# echo "__dcae_hdp3_float_ip__" > /opt/config/dcae_hdp3_float_ip.txt -# echo "__external_dns__" > /opt/config/external_dns.txt -# echo "__ubuntu_1404_image__" > /opt/config/ubuntu_1404_image.txt -# echo "__ubuntu_1604_image__" > /opt/config/ubuntu_1604_image.txt -# echo "__flavor_small__" > /opt/config/flavor_small.txt -# echo "__flavor_medium__" > /opt/config/flavor_medium.txt -# echo "__flavor_large__" > /opt/config/flavor_large.txt -# echo "__flavor_xlarge__" > /opt/config/flavor_xlarge.txt -# echo "__dcae_repo__" > /opt/config/remote_repo.txt -# echo "__mr_repo__" > /opt/config/mr_repo.txt - - # Download and run install script -# curl -k __nexus_repo__/org.onap.demo/boot/__artifacts_version__/dcae_install.sh -o /opt/dcae_install.sh -# cd /opt -# chmod +x dcae_install.sh -# ./dcae_install.sh - - # Policy Engine instantiation policy_private_port: type: OS::Neutron::Port @@ -1610,7 +1483,6 @@ resources: __aai2_ip_addr__: { get_param: aai2_ip_addr } __appc_ip_addr__: { get_param: appc_ip_addr } __dcae_ip_addr__: { get_param: dcae_ip_addr } - __dcae_coll_ip_addr__: { get_param: dcae_coll_ip_addr } __so_ip_addr__: { get_param: so_ip_addr } __mr_ip_addr__: { get_param: mr_ip_addr } __policy_ip_addr__: { get_param: policy_ip_addr } @@ -1626,10 +1498,24 @@ resources: __vnfsdk_branch__: { get_param: vnfsdk_branch } __msb_docker__: { get_param: msb_docker } __mvim_docker__: { get_param: mvim_docker } - __vfc_docker__: { get_param: vfc_docker } __uui_docker__: { get_param: uui_docker } __esr_docker__: { get_param: esr_docker } __vnfsdk_repo__: { get_param: vnfsdk_repo } + __vfc_nokia_docker__: { get_param: vfc_nokia_docker } + __vfc_ztevmanagerdriver_docker__: { get_param: vfc_ztevmanagerdriver_docker } + __vfc_ztesdncdriver_docker__: { get_param: vfc_ztesdncdriver_docker } + __vfc_vnfres_docker__: { get_param: vfc_vnfres_docker } + __vfc_vnfmgr_docker__: { get_param: vfc_vnfmgr_docker } + __vfc_vnflcm_docker__: { get_param: vfc_vnflcm_docker } + __vfc_resmanagement_docker__: { get_param: vfc_resmanagement_docker } + __vfc_nslcm_docker__: { get_param: vfc_nslcm_docker } + __vfc_huawei_docker__: { get_param: vfc_huawei_docker } + __vfc_jujudriver_docker__: { get_param: vfc_jujudriver_docker } + __vfc_gvnfmdriver_docker__: { get_param: vfc_gvnfmdriver_docker } + __vfc_emsdriver_docker__: { get_param: vfc_emsdriver_docker } + __vfc_catalog_docker__: { get_param: vfc_catalog_docker } + __vfc_wfengine_mgrservice_docker__: { get_param: vfc_wfengine_mgrservice_docker } + __vfc_wfengine_activiti_docker__: { get_param: vfc_wfengine_activiti_docker } template: | #!/bin/bash @@ -1647,17 +1533,31 @@ resources: echo "__vnfsdk_branch__" > /opt/config/vnfsdk_branch.txt echo "__msb_docker__" > /opt/config/msb_docker.txt echo "__mvim_docker__" > /opt/config/mvim_docker.txt - echo "__vfc_docker__" > /opt/config/vfc_docker.txt echo "__uui_docker__" > /opt/config/uui_docker.txt echo "__esr_docker__" > /opt/config/esr_docker.txt echo "__vnfsdk_repo__" > /opt/config/vnfsdk_repo.txt + echo "export NOKIA_DOCKER_VER=__vfc_nokia_docker__" >> /opt/config/vfc_docker.txt + echo "export 
ZTEVMANAGERDRIVER_DOCKER_VER=__vfc_ztevmanagerdriver_docker__" >> /opt/config/vfc_docker.txt + echo "export ZTESDNCDRIVER_DOCKER_VER=__vfc_ztesdncdriver_docker__" >> /opt/config/vfc_docker.txt + echo "export VNFRES_DOCKER_VER=__vfc_vnfres_docker__" >> /opt/config/vfc_docker.txt + echo "export VNFMGR_DOCKER_VER=__vfc_vnfmgr_docker__" >> /opt/config/vfc_docker.txt + echo "export VNFLCM_DOCKER_VER=__vfc_vnflcm_docker__" >> /opt/config/vfc_docker.txt + echo "export RESMANAGEMENT_DOCKER_VER=__vfc_resmanagement_docker__" >> /opt/config/vfc_docker.txt + echo "export NSLCM_DOCKER_VER=__vfc_nslcm_docker__" >> /opt/config/vfc_docker.txt + echo "export HUAWEI_DOCKER_VER=__vfc_huawei_docker__" >> /opt/config/vfc_docker.txt + echo "export JUJUDRIVER_DOCKER_VER=__vfc_jujudriver_docker__" >> /opt/config/vfc_docker.txt + echo "export GVNFMDRIVER_DOCKER_VER=__vfc_gvnfmdriver_docker__" >> /opt/config/vfc_docker.txt + echo "export EMSDRIVER_DOCKER_VER=__vfc_emsdriver_docker__" >> /opt/config/vfc_docker.txt + echo "export CATALOG_DOCKER_VER=__vfc_catalog_docker__" >> /opt/config/vfc_docker.txt + echo "export MGRSERVICE_DOCKER_VER=__vfc_wfengine_mgrservice_docker__" >> /opt/config/vfc_docker.txt + echo "export ACTIVITI_DOCKER_VER=__vfc_wfengine_activiti_docker__" >> /opt/config/vfc_docker.txt + # Create env file with the IP address of all ONAP components echo "export AAI_IP1=__aai1_ip_addr__" >> /opt/config/onap_ips.txt echo "export AAI_IP2=__aai2_ip_addr__" >> /opt/config/onap_ips.txt echo "export APPC_IP=__appc_ip_addr__" >> /opt/config/onap_ips.txt echo "export DCAE_IP=__dcae_ip_addr__" >> /opt/config/onap_ips.txt - echo "export DCAE_COLL_IP=__dcae_coll_ip_addr__" >> /opt/config/onap_ips.txt echo "export SO_IP=__so_ip_addr__" >> /opt/config/onap_ips.txt echo "export MR_IP=__mr_ip_addr__" >> /opt/config/onap_ips.txt echo "export POLICY_IP=__policy_ip_addr__" >> /opt/config/onap_ips.txt @@ -1693,7 +1593,7 @@ resources: type: OS::Nova::Server properties: image: { get_param: ubuntu_1604_image } - flavor: { get_param: flavor_medium } + flavor: { get_param: flavor_small } name: str_replace: template: base-dcae-bootstrap @@ -1702,6 +1602,8 @@ resources: key_name: { get_resource: vm_key } networks: - port: { get_resource: dcae_c_private_port } + #security_groups: + # - { get_resource: onap_sg } user_data_format: RAW user_data: str_replace: @@ -1714,14 +1616,14 @@ resources: __nexus_docker_repo__: { get_param: nexus_docker_repo } __nexus_username__: { get_param: nexus_username } __nexus_password__: { get_param: nexus_password } - __dcae_repo__: { get_param: dcae_repo } - __gerrit_branch__: { get_param: dcae_branch } # conf for the ONAP environment where the DCAE bootstrap vm/conatiner runs __mac_addr__: { get_attr: [dcae_c_private_port, mac_address] } __dcae_ip_addr__: { get_param: dcae_ip_addr } __dcae_float_ip__: { get_attr: [dcae_c_floating_ip, floating_ip_address] } __dns_ip_addr__: { get_param: dns_ip_addr } __external_dns__: { get_param: external_dns } + __dns_forwarder__: { get_param: dns_forwarder } + __dcae_domain__: { get_param: dcae_domain } # conf for VMs DCAE is to bringup __openstack_keystone_url__: { get_param: keystone_url } __dcae_keystone_url__: { get_param: dcae_keystone_url } @@ -1729,22 +1631,28 @@ resources: __dcaeos_keystone_url__: { get_param: dcae_keystone_url } __dcaeos_region__: { get_param: openstack_region } __dcaeos_tenant_id__: { get_param: openstack_tenant_id } + __dcaeos_tenant_name__: { get_param: openstack_tenant_name } + __dcaeos_security_group__: + str_replace: + template: 
'onap_sg_rand' + params: + rand: { get_resource: random-str } + #__dcaeos_security_group__: { get_attr: [onap_sg, name] } __dcaeos_username__: { get_param: openstack_username } __dcaeos_password__: { get_param: openstack_api_key } - __dcaeos_key_name__: { get_attr: [vm_key, name] } - __dcaeos_key_name__: { get_param: dcae_key_name } + __dcaeos_key_name__: { get_resource: vm_key } __dcaeos_public_key__: { get_param: dcae_public_key } __dcaeos_private_key__: { get_param: dcae_private_key } __dcaeos_private_network_name__: { get_attr: [oam_onap, name] } - __dcaeos_public_network_name__: { get_param: public_net_id } + __dcaeos_public_network_name__: { get_param: public_net_name } __dcaeos_ubuntu_1604_image__: { get_param: ubuntu_1604_image } __dcaeos_centos_7_image__: { get_param: dcae_centos_7_image } - __dcaeos_security_group__ : { get_param: dcae_security_group } - __dcaeos_flavor_id__: { get_param: flavor_medium } + __dcaeos_flavor_id__: { get_param: flavor_xlarge } + __dcaeos_flavor_id_cdap__: { get_param: flavor_xlarge } __dcaeos_dnsaas_config_enabled__: { get_param: dnsaas_config_enabled } __dcaeos_dnsaas_region__: { get_param: dnsaas_region } - __dcaeos_dnsaas_tenant_id__: { get_param: dnsaas_tenant_id} __dcaeos_dnsaas_keystone_url__: { get_param: dnsaas_keystone_url } + __dnsaas_tenant_name__: { get_param: dnsaas_tenant_name } __dcaeos_dnsaas_username__: { get_param: dnsaas_username } __dcaeos_dnsaas_password__: { get_param: dnsaas_password } # fixed private IPs @@ -1778,7 +1686,6 @@ resources: echo "__nexus_docker_repo__" > /opt/config/nexus_docker_repo.txt echo "__nexus_username__" > /opt/config/nexus_username.txt echo "__nexus_password__" > /opt/config/nexus_password.txt - echo "__dcae_repo__" > /opt/config/remote_repo.txt echo "__gerrit_branch__" > /opt/config/gerrit_branch.txt # conf for the ONAP environment where the DCAE bootstrap vm/conatiner runs echo "__mac_addr__" > /opt/config/mac_addr.txt @@ -1786,28 +1693,32 @@ resources: echo "__dcae_float_ip__" > /opt/config/dcae_float_ip.txt echo "__dns_ip_addr__" > /opt/config/dns_ip_addr.txt echo "__external_dns__" > /opt/config/external_dns.txt + echo "__dns_forwarder__" > /opt/config/dns_forwarder.txt + echo "__dcae_domain__" > /opt/config/dcae_domain.txt # conf for the OpenStack env where DCAE is deployed echo "__openstack_keystone_url__" > /opt/config/openstack_keystone_url.txt echo "__dcaeos_cloud_env__" > /opt/config/cloud_env.txt echo "__dcaeos_keystone_url__" > /opt/config/keystone_url.txt echo "__dcaeos_region__" > /opt/config/openstack_region.txt echo "__dcaeos_tenant_id__" > /opt/config/tenant_id.txt - echo "__dcaeos_tenant_id__" > /opt/config/tenant_name.txt + echo "__dcaeos_tenant_name__" > /opt/config/tenant_name.txt echo "__dcaeos_username__" > /opt/config/openstack_user.txt echo "__dcaeos_password__" > /opt/config/openstack_password.txt echo "__dcaeos_key_name__" > /opt/config/key_name.txt echo "__dcaeos_public_key__" > /opt/config/pub_key.txt echo "__dcaeos_private_key__" > /opt/config/priv_key echo "__dcaeos_private_network_name__" > /opt/config/openstack_private_network_name.txt + echo "__dcaeos_public_network_name__" > /opt/config/public_net_name.txt echo "__dcaeos_public_network_name__" > /opt/config/public_net_id.txt echo "__dcaeos_ubuntu_1604_image__" > /opt/config/ubuntu_1604_image.txt echo "__dcaeos_centos_7_image__" > /opt/config/centos_7_image.txt echo "__dcaeos_security_group__" > /opt/config/security_group.txt echo "__dcaeos_flavor_id__" > /opt/config/flavor_id.txt + echo "__dcaeos_flavor_id_cdap__" > 
/opt/config/flavor_id_cdap.txt echo "__dcaeos_dnsaas_config_enabled__" > /opt/config/dnsaas_config_enabled.txt echo "__dcaeos_dnsaas_region__" > /opt/config/dnsaas_region.txt - echo "__dcaeos_dnsaas_tenant_id__" > /opt/config/dnsaas_tenant_id.txt echo "__dcaeos_dnsaas_keystone_url__" > /opt/config/dnsaas_keystone_url.txt + echo "__dnsaas_tenant_name__" > /opt/config/dnsaas_tenant_name.txt echo "__dcaeos_dnsaas_username__" > /opt/config/dnsaas_username.txt echo "__dcaeos_dnsaas_password__" > /opt/config/dnsaas_password.txt # fixed private IP addresses of other ONAP components @@ -1831,4 +1742,4 @@ resources: curl -k __nexus_repo__/org.onap.demo/boot/__artifacts_version__/dcae2_install.sh -o /opt/dcae2_install.sh cd /opt chmod +x dcae2_install.sh - ./dcae2_install.sh + ./dcae2_install.sh > /tmp/dcae2_install.log 2>&1 diff --git a/heat/ONAP/onap_openstack_template.env b/heat/ONAP/onap_openstack_template.env new file mode 100644 index 00000000..99622120 --- /dev/null +++ b/heat/ONAP/onap_openstack_template.env @@ -0,0 +1,182 @@ +parameters: + + ############################################## + # # + # Parameters used across all ONAP components # + # # + ############################################## + + public_net_id: PUT YOUR NETWORK ID HERE + + public_net_name: PUT YOUR NETWORK NAME HERE + + ubuntu_1404_image: PUT THE UBUNTU 14.04 IMAGE NAME HERE + + ubuntu_1604_image: PUT THE UBUNTU 16.04 IMAGE NAME HERE + + flavor_small: PUT THE SMALL FLAVOR NAME HERE + + flavor_medium: PUT THE MEDIUM FLAVOR NAME HERE + + flavor_large: PUT THE LARGE FLAVOR NAME HERE + + flavor_xlarge: PUT THE XLARGE FLAVOR NAME HERE + + flavor_xxlarge: PUT THE XXLARGE FLAVOR NAME HERE + + vm_base_name: onap + + key_name: onap_key + + pub_key: PUT YOUR PUBLIC KEY HERE + + nexus_repo: https://nexus.onap.org/content/sites/raw + + nexus_docker_repo: nexus3.onap.org:10001 + + nexus_username: docker + + nexus_password: docker + + dmaap_topic: AUTO + + artifacts_version: 1.1.0-SNAPSHOT + + openstack_tenant_id: PUT YOUR OPENSTACK PROJECT ID HERE + + openstack_tenant_name: PUT YOUR OPENSTACK PROJECT NAME HERE + + openstack_username: PUT YOUR OPENSTACK USERNAME HERE + + openstack_api_key: PUT YOUR OPENSTACK PASSWORD HERE + + openstack_auth_method: password + + openstack_region: RegionOne + + horizon_url: PUT THE HORIZON URL HERE + + keystone_url: PUT THE KEYSTONE URL HERE (do not include version number) + + cloud_env: openstack + + + ###################### + # # + # Network parameters # + # # + ###################### + + dns_list: PUT THE ADDRESS OF THE EXTERNAL DNS HERE (e.g. 
a comma-separated list of IP addresses in your /etc/resolv.conf in UNIX-based Operating Systems) + external_dns: PUT THE FIRST ADDRESS OF THE EXTERNAL DNS LIST HERE + dns_forwarder: PUT THE IP OF DNS FORWARDER FOR ONAP DEPLOYMENT'S OWN DNS SERVER + oam_network_cidr: 10.0.0.0/16 + + ### Private IP addresses ### + + aai1_ip_addr: 10.0.1.1 + aai2_ip_addr: 10.0.1.2 + appc_ip_addr: 10.0.2.1 + dcae_ip_addr: 10.0.4.1 + dns_ip_addr: 10.0.100.1 + so_ip_addr: 10.0.5.1 + mr_ip_addr: 10.0.11.1 + policy_ip_addr: 10.0.6.1 + portal_ip_addr: 10.0.9.1 + robot_ip_addr: 10.0.10.1 + sdc_ip_addr: 10.0.3.1 + sdnc_ip_addr: 10.0.7.1 + vid_ip_addr: 10.0.8.1 + clamp_ip_addr: 10.0.12.1 + openo_ip_addr: 10.0.14.1 + + ########################### + # # + # Parameters used by DCAE # + # # + ########################### + + dnsaas_config_enabled: PUT WHETHER TO USE PROXYED DESIGNATE + dnsaas_region: PUT THE DESIGNATE PROVIDING OPENSTACK'S REGION HERE + dnsaas_keystone_url: PUT THE DESIGNATE PROVIDING OPENSTACK'S KEYSTONE URL HERE + dnsaas_tenant_name: PUT THE TENANT NAME IN THE DESIGNATE PROVIDING OPENSTACK HERE (FOR R1 USE THE SAME AS openstack_tenant_name) + dnsaas_username: PUT THE DESIGNATE PROVIDING OPENSTACK'S USERNAME HERE + dnsaas_password: PUT THE DESIGNATE PROVIDING OPENSTACK'S PASSWORD HERE + dcae_keystone_url: PUT THE MULTIVIM PROVIDED KEYSTONE API URL HERE + dcae_centos_7_image: PUT THE CENTOS7 VM IMAGE NAME HERE FOR DCAE LAUNCHED CENTOS7 VM + dcae_domain: PUT THE NAME OF DOMAIN THAT DCAE VMS REGISTER UNDER + dcae_public_key: PUT THE PUBLIC KEY OF A KEYPAIR HERE TO BE USED BETWEEN DCAE LAUNCHED VMS + dcae_private_key: PUT THE SECRET KEY OF A KEYPAIR HERE TO BE USED BETWEEN DCAE LAUNCHED VMS + + ################################ + # # + # Docker versions and branches # + # Generated using onap_openstack_template.env and manifest-to-env.sh + # # + ################################ + + aai_branch: master + appc_branch: master + so_branch: master + mr_branch: master + policy_branch: master + portal_branch: release-1.3.0 + robot_branch: master + sdc_branch: master + sdnc_branch: master + vid_branch: master + clamp_branch: master + vnfsdk_branch: master + + aai_docker: ${AAI_RESOURCES_DOCKER} + aai_sparky_docker: ${AAI_RESOURCES_DOCKER} + appc_docker: ${APPC_IMAGE_DOCKER} + so_docker: ${MSO_DOCKER} + dcae_docker: ${BOOTSTRAP_DOCKER} + policy_docker: ${POLICY_DB_DOCKER} + portal_docker: ${PORTAL_APPS_DOCKER} + robot_docker: 1.1-STAGING-latest + sdc_docker: ${SDC_BACKEND_DOCKER} + sdnc_docker: ${SDNC_IMAGE_DOCKER} + vid_docker: ${VID_DOCKER} + clamp_docker: ${CLAMP_DOCKER} + msb_docker: ${MSB_APIGATEWAY_DOCKER} + mvim_docker: ${FRAMEWORK_DOCKER} + uui_docker: ${USECASE_UI_SERVER_DOCKER} + esr_docker: ${ESR_SERVER_DOCKER} + dgbuilder_docker: ${CCSDK_DGBUILDER_IMAGE_DOCKER} + cli_docker: ${CLI_DOCKER} + + vfc_nokia_docker: ${NOKIA_DOCKER} + vfc_ztevmanagerdriver_docker: ${ZTEVMANAGERDRIVER_DOCKER} + vfc_ztesdncdriver_docker: ${ZTESDNCDRIVER_DOCKER} + vfc_vnfres_docker: ${VNFRES_DOCKER} + vfc_vnfmgr_docker: ${VNFMGR_DOCKER} + vfc_vnflcm_docker: ${VNFLCM_DOCKER} + vfc_resmanagement_docker: ${RESMANAGEMENT_DOCKER} + vfc_nslcm_docker: ${NSLCM_DOCKER} + vfc_huawei_docker: ${HUAWEI_DOCKER} + vfc_jujudriver_docker: ${JUJUDRIVER_DOCKER} + vfc_gvnfmdriver_docker: ${GVNFMDRIVER_DOCKER} + vfc_emsdriver_docker: ${EMSDRIVER_DOCKER} + vfc_catalog_docker: ${CATALOG_DOCKER} + vfc_wfengine_mgrservice_docker: ${WFENGINE_MGRSERVICE_DOCKER} + vfc_wfengine_activiti_docker: ${WFENGINE_ACTIVITI_DOCKER} + + ##################### + # # + # ONAP 
repositories # + # # + ##################### + aai_repo: http://gerrit.onap.org/r/aai/test-config + appc_repo: http://gerrit.onap.org/r/appc/deployment.git + mr_repo: http://gerrit.onap.org/r/dcae/demo/startup/message-router.git + so_repo: http://gerrit.onap.org/r/so/docker-config.git + policy_repo: http://gerrit.onap.org/r/policy/docker.git + portal_repo: http://gerrit.onap.org/r/portal.git + robot_repo: http://gerrit.onap.org/r/testsuite/properties.git + sdc_repo: http://gerrit.onap.org/r/sdc.git + sdnc_repo: http://gerrit.onap.org/r/sdnc/oam.git + vid_repo: http://gerrit.onap.org/r/vid.git + clamp_repo: http://gerrit.onap.org/r/clamp.git + vnfsdk_repo: http://gerrit.onap.org/r/vnfsdk/refrepo.git diff --git a/heat/vCPE/vbng/base_vcpe_vbng.env b/heat/vCPE/vbng/base_vcpe_vbng.env index be4f9728..43ccc514 100644 --- a/heat/vCPE/vbng/base_vcpe_vbng.env +++ b/heat/vCPE/vbng/base_vcpe_vbng.env @@ -21,6 +21,7 @@ vbng_name_0: zdcpe1cpe01bng01 vnf_id: vCPE_Infrastructure_Metro_vBNG_demo_app vf_module_id: vCPE_Intrastructure_Metro_vBNG + sdnc_ip_addr: 10.0.7.1 dcae_collector_ip: 10.0.4.102 dcae_collector_port: 8080 repo_url_blob: https://nexus.onap.org/content/sites/raw diff --git a/heat/vCPE/vbng/base_vcpe_vbng.yaml b/heat/vCPE/vbng/base_vcpe_vbng.yaml index 3dd7ca09..b2ae4e6f 100644 --- a/heat/vCPE/vbng/base_vcpe_vbng.yaml +++ b/heat/vCPE/vbng/base_vcpe_vbng.yaml @@ -169,6 +169,10 @@ parameters: type: string label: VPP Patch URL description: URL for VPP patch for vBNG + sdnc_ip_addr: + type: string + label: SDNC IP address + description: IP address of the SDNC ############# # # @@ -257,6 +261,7 @@ resources: __vpp_source_repo_url__ : { get_param: vpp_source_repo_url } __vpp_source_repo_branch__ : { get_param: vpp_source_repo_branch } __vpp_patch_url__ : { get_param: vpp_patch_url } + __sdnc_ip_addr__: { get_param: sdnc_ip_addr } template: | #!/bin/bash @@ -280,6 +285,7 @@ resources: echo "__vpp_source_repo_url__" > /opt/config/vpp_source_repo_url.txt echo "__vpp_source_repo_branch__" > /opt/config/vpp_source_repo_branch.txt echo "__vpp_patch_url__" > /opt/config/vpp_patch_url.txt + echo "__sdnc_ip_addr__" > /opt/config/sdnc_ip_addr.txt # Download and run install script curl -k __repo_url_blob__/org.onap.demo/vnfs/vcpe/__install_script_version__/v_bng_install.sh -o /opt/v_bng_install.sh diff --git a/heat/vCPE/vgmux/base_vcpe_vgmux.env b/heat/vCPE/vgmux/base_vcpe_vgmux.env index e81afa70..4b486a8d 100644 --- a/heat/vCPE/vgmux/base_vcpe_vgmux.env +++ b/heat/vCPE/vgmux/base_vcpe_vgmux.env @@ -11,12 +11,14 @@ onap_private_net_cidr: 10.0.0.0/16 bng_gmux_private_net_cidr: 10.1.0.0/24 mux_gw_private_net_cidr: 10.5.0.0/24 + brgemu_bng_private_net_cidr: 10.3.0.0/24 vgmux_private_ip_0: 10.1.0.20 vgmux_private_ip_1: 10.0.101.20 vgmux_private_ip_2: 10.5.0.20 vgmux_name_0: zdcpe1cpe01mux01 vnf_id: vCPE_Infrastructure_vGMUX_demo_app vf_module_id: vCPE_Intrastructure_Metro_vGMUX + bng_gmux_private_ip: 10.1.0.10 dcae_collector_ip: 10.0.4.102 dcae_collector_port: 8080 repo_url_blob: https://nexus.onap.org/content/sites/raw diff --git a/heat/vCPE/vgmux/base_vcpe_vgmux.yaml b/heat/vCPE/vgmux/base_vcpe_vgmux.yaml index ecdb1b1b..43bbb986 100644 --- a/heat/vCPE/vgmux/base_vcpe_vgmux.yaml +++ b/heat/vCPE/vgmux/base_vcpe_vgmux.yaml @@ -69,6 +69,10 @@ parameters: type: string label: vGMUX private network CIDR description: The CIDR of the vGMUX private network + brgemu_bng_private_net_cidr: + type: string + label: vBRG vBNG private network CIDR + description: The CIDR of the vBRG-vBNG private network onap_private_net_id: 
type: string label: ONAP management network name or ID @@ -105,6 +109,10 @@ parameters: type: string label: vCPE module ID description: The vCPE Module ID is provided by ONAP + bng_gmux_private_ip: + type: string + label: vBNG private IP address towards the vBNG-vGMUX private network + description: Private IP address that is assigned to the vBNG to communicate with the vGMUX dcae_collector_ip: type: string label: DCAE collector IP address @@ -232,12 +240,14 @@ resources: user_data: str_replace: params: - __bng_mux_net_ipaddr__ : { get_param: vgmux_private_ip_0 } + __mux_to_bng_net_ipaddr__ : { get_param: vgmux_private_ip_0 } __oam_ipaddr__ : { get_param: vgmux_private_ip_1 } __mux_gw_net_ipaddr__ : { get_param: vgmux_private_ip_2 } + __bng_to_mux_ipaddr__ : { get_param: bng_gmux_private_ip } __bng_mux_net_cidr__ : { get_param: bng_gmux_private_net_cidr } __oam_cidr__ : { get_param: onap_private_net_cidr } __mux_gw_net_cidr__ : { get_param: mux_gw_private_net_cidr } + __brg_bng_net_cidr__ : { get_param: brgemu_bng_private_net_cidr } __repo_url_blob__ : { get_param: repo_url_blob } __repo_url_artifacts__ : { get_param: repo_url_artifacts } __demo_artifacts_version__ : { get_param: demo_artifacts_version } @@ -255,12 +265,14 @@ resources: # Create configuration files mkdir /opt/config - echo "__bng_mux_net_ipaddr__" > /opt/config/bng_mux_net_ipaddr.txt + echo "__mux_to_bng_net_ipaddr__" > /opt/config/mux_to_bng_net_ipaddr.txt echo "__oam_ipaddr__" > /opt/config/oam_ipaddr.txt echo "__mux_gw_net_ipaddr__" > /opt/config/mux_gw_net_ipaddr.txt + echo "__bng_to_mux_ipaddr__ " > /opt/config/bng_to_mux_net_ipaddr.txt echo "__bng_mux_net_cidr__" > /opt/config/bng_mux_net_cidr.txt echo "__oam_cidr__" > /opt/config/oam_cidr.txt echo "__mux_gw_net_cidr__" > /opt/config/mux_gw_net_cidr.txt + echo "__brg_bng_net_cidr__" > /opt/config/brg_bng_net_cidr.txt echo "__repo_url_blob__" > /opt/config/repo_url_blob.txt echo "__repo_url_artifacts__" > /opt/config/repo_url_artifacts.txt echo "__demo_artifacts_version__" > /opt/config/demo_artifacts_version.txt diff --git a/heat/vCPE/vgw/base_vcpe_vgw.env b/heat/vCPE/vgw/base_vcpe_vgw.env index f1cadb83..6f33138e 100644 --- a/heat/vCPE/vgw/base_vcpe_vgw.env +++ b/heat/vCPE/vgw/base_vcpe_vgw.env @@ -17,6 +17,8 @@ vgw_name_0: zdcpe1cpe01gw01 vnf_id: vCPE_Infrastructure_GW_demo_app vf_module_id: vCPE_Customer_GW + mux_ip_addr: 10.5.0.20 + vg_vgmux_tunnel_vni: 100 dcae_collector_ip: 10.0.4.102 dcae_collector_port: 8080 repo_url_blob: https://nexus.onap.org/content/sites/raw diff --git a/heat/vCPE/vgw/base_vcpe_vgw.yaml b/heat/vCPE/vgw/base_vcpe_vgw.yaml index 173ba6dd..c4b98760 100644 --- a/heat/vCPE/vgw/base_vcpe_vgw.yaml +++ b/heat/vCPE/vgw/base_vcpe_vgw.yaml @@ -157,6 +157,14 @@ parameters: type: string label: Honeycomb Source Git Branch description: Git Branch for the Honeycomb source codes + mux_ip_addr: + type: string + label: vGMUX IP address + description: IP address of vGMUX + vg_vgmux_tunnel_vni: + type: number + label: vG-vGMUX tunnel vni + description: vni value of vG-vGMUX vxlan tunnel ############# # # @@ -233,6 +241,8 @@ resources: __vpp_source_repo_branch__ : { get_param: vpp_source_repo_branch } __hc2vpp_source_repo_url__ : { get_param: hc2vpp_source_repo_url } __hc2vpp_source_repo_branch__ : { get_param: hc2vpp_source_repo_branch } + __mux_ip_addr__: { get_param: mux_ip_addr } + __vg_vgmux_tunnel_vni__: { get_param: vg_vgmux_tunnel_vni } template: | #!/bin/bash @@ -252,6 +262,8 @@ resources: echo "__vpp_source_repo_branch__" > 
/opt/config/vpp_source_repo_branch.txt echo "__hc2vpp_source_repo_url__" > /opt/config/hc2vpp_source_repo_url.txt echo "__hc2vpp_source_repo_branch__" > /opt/config/hc2vpp_source_repo_branch.txt + echo "__mux_ip_addr__" > /opt/config/mux_ip_addr.txt + echo "__vg_vgmux_tunnel_vni__" > /opt/config/vg_vgmux_tunnel_vni.txt # Download and run install script curl -k __repo_url_blob__/org.onap.demo/vnfs/vcpe/__install_script_version__/v_gw_install.sh -o /opt/v_gw_install.sh diff --git a/heat/vFW/MANIFEST.json b/heat/vFW/MANIFEST.json new file mode 100644 index 00000000..af79f75b --- /dev/null +++ b/heat/vFW/MANIFEST.json @@ -0,0 +1,17 @@ +{ + "name": "virtualFireWall", + "description": "", + "data": [ + { + "file": "base_vfw.yaml", + "type": "HEAT", + "isBase": "true", + "data": [ + { + "file": "base_vfw.env", + "type": "HEAT_ENV" + } + ] + } + ] +}
\ No newline at end of file
diff --git a/heat/vFW/base_vfw.yaml b/heat/vFW/base_vfw.yaml
index 4fb19c00..3d5a22d1 100644
--- a/heat/vFW/base_vfw.yaml
+++ b/heat/vFW/base_vfw.yaml
@@ -1,7 +1,7 @@
##########################################################################
#
#==================LICENSE_START==========================================
-#
+#
#
# Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
#
@@ -256,7 +256,7 @@ resources:
__cloud_env__ : { get_param: cloud_env }
template: |
#!/bin/bash
-
+
# Create configuration files
mkdir /opt/config
echo "__dcae_collector_ip__" > /opt/config/dcae_collector_ip.txt
@@ -272,7 +272,7 @@ resources:
echo "__protected_private_net_cidr__" > /opt/config/protected_private_net_cidr.txt
echo "__onap_private_net_cidr__" > /opt/config/onap_private_net_cidr.txt
echo "__cloud_env__" > /opt/config/cloud_env.txt
-
+
# Download and run install script
curl -k __repo_url_blob__/org.onap.demo/vnfs/vfw/__install_script_version__/v_firewall_install.sh -o /opt/v_firewall_install.sh
cd /opt
@@ -323,7 +323,7 @@ resources:
__cloud_env__ : { get_param: cloud_env }
template: |
#!/bin/bash
-
+
# Create configuration files
mkdir /opt/config
echo "__fw_ipaddr__" > /opt/config/fw_ipaddr.txt
@@ -338,7 +338,7 @@ resources:
echo "__unprotected_private_net_cidr__" > /opt/config/unprotected_private_net_cidr.txt
echo "__onap_private_net_cidr__" > /opt/config/onap_private_net_cidr.txt
echo "__cloud_env__" > /opt/config/cloud_env.txt
-
+
# Download and run install script
curl -k __repo_url_blob__/org.onap.demo/vnfs/vfw/__install_script_version__/v_packetgen_install.sh -o /opt/v_packetgen_install.sh
cd /opt
@@ -387,7 +387,7 @@ resources:
__cloud_env__ : { get_param: cloud_env }
template: |
#!/bin/bash
-
+
# Create configuration files
mkdir /opt/config
echo "__protected_net_gw__" > /opt/config/protected_net_gw.txt
@@ -399,7 +399,7 @@ resources:
echo "__protected_private_net_cidr__" > /opt/config/protected_private_net_cidr.txt
echo "__onap_private_net_cidr__" > /opt/config/onap_private_net_cidr.txt
echo "__cloud_env__" > /opt/config/cloud_env.txt
-
+
# Download and run install script
curl -k __repo_url_blob__/org.onap.demo/vnfs/vfw/__install_script_version__/v_sink_install.sh -o /opt/v_sink_install.sh
cd /opt
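
The user_data scripts above follow the demo repository's convention of writing one parameter per file under /opt/config before handing control to the install script downloaded from Nexus. As a rough illustration of how those values are consumed on the VM (the same cat-per-file pattern used by the vCPE install scripts later in this change; the variable names here are only examples), an install script reads them back like this:

    # Illustrative sketch: reading the per-parameter files written by user_data
    DCAE_COLLECTOR_IP=$(cat /opt/config/dcae_collector_ip.txt)
    DCAE_COLLECTOR_PORT=$(cat /opt/config/dcae_collector_port.txt)
    CLOUD_ENV=$(cat /opt/config/cloud_env.txt)
    echo "vFW will report events to ${DCAE_COLLECTOR_IP}:${DCAE_COLLECTOR_PORT} (cloud_env=${CLOUD_ENV})"
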
diff --git a/heat/vFWCL/vFWSNK/MANIFEST.json b/heat/vFWCL/vFWSNK/MANIFEST.json new file mode 100644 index 00000000..49383787 --- /dev/null +++ b/heat/vFWCL/vFWSNK/MANIFEST.json @@ -0,0 +1,17 @@ +{ + "name": "", + "description": "", + "data": [ + { + "file": "base_vfw.yaml", + "type": "HEAT", + "isBase": "true", + "data": [ + { + "file": "base_vfw.env", + "type": "HEAT_ENV" + } + ] + } + ] +} diff --git a/heat/vFWCL/vFWSNK/base_vfw.env b/heat/vFWCL/vFWSNK/base_vfw.env new file mode 100644 index 00000000..84ed850f --- /dev/null +++ b/heat/vFWCL/vFWSNK/base_vfw.env @@ -0,0 +1,32 @@ +parameters: + image_name: PUT THE VM IMAGE NAME HERE + flavor_name: PUT THE VM FLAVOR NAME HERE + public_net_id: PUT THE PUBLIC NETWORK ID HERE + unprotected_private_net_id: zdfw1fwl01_unprotected + unprotected_private_subnet_id: zdfw1fwl01_unprotected_sub + unprotected_private_net_cidr: 192.168.10.0/24 + protected_private_net_id: zdfw1fwl01_protected + protected_private_subnet_id: zdfw1fwl01_protected_sub + protected_private_net_cidr: 192.168.20.0/24 + onap_private_net_id: PUT THE ONAP PRIVATE NETWORK NAME HERE + onap_private_subnet_id: PUT THE ONAP PRIVATE NETWORK NAME HERE + onap_private_net_cidr: 10.0.0.0/16 + vfw_private_ip_0: 192.168.10.100 + vfw_private_ip_1: 192.168.20.100 + vfw_private_ip_2: 10.0.100.1 + vpg_private_ip_0: 192.168.10.200 + vsn_private_ip_0: 192.168.20.250 + vsn_private_ip_1: 10.0.100.3 + vfw_name_0: zdfw1fwl01fwl01 + vsn_name_0: zdfw1fwl01snk01 + vnf_id: vFirewall_demo_app + vf_module_id: vFirewallCL + dcae_collector_ip: PUT THE ADDRESS OF THE DCAE COLLECTOR HERE + dcae_collector_port: 8080 + repo_url_blob: https://nexus.onap.org/content/sites/raw + repo_url_artifacts: https://nexus.onap.org/content/groups/staging + demo_artifacts_version: 1.1.0 + install_script_version: 1.1.0-SNAPSHOT + key_name: vfw_key + pub_key: PUT YOUR KEY HERE + cloud_env: PUT openstack OR rackspace HERE diff --git a/heat/vFWCL/vFWSNK/base_vfw.yaml b/heat/vFWCL/vFWSNK/base_vfw.yaml new file mode 100644 index 00000000..c82e2e56 --- /dev/null +++ b/heat/vFWCL/vFWSNK/base_vfw.yaml @@ -0,0 +1,343 @@ +##########################################################################
+#
+#==================LICENSE_START==========================================
+#
+#
+# Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+#==================LICENSE_END============================================
+#
+# ECOMP is a trademark and service mark of AT&T Intellectual Property.
+#
+##########################################################################
+
+heat_template_version: 2013-05-23
+
+description: Heat template that deploys the vFirewall Closed Loop demo app (vFW and vSink) for ONAP
+
+##############
+# #
+# PARAMETERS #
+# #
+##############
+
+parameters:
+ image_name:
+ type: string
+ label: Image name or ID
+ description: Image to be used for compute instance
+ flavor_name:
+ type: string
+ label: Flavor
+ description: Type of instance (flavor) to be used
+ public_net_id:
+ type: string
+ label: Public network name or ID
+ description: Public network that enables remote connection to VNF
+ unprotected_private_net_id:
+ type: string
+ label: Unprotected private network name or ID
+ description: Private network that connects vPacketGenerator with vFirewall
+ unprotected_private_subnet_id:
+ type: string
+ label: Unprotected private subnetwork name or ID
+ description: Private subnetwork of the unprotected network
+ unprotected_private_net_cidr:
+ type: string
+ label: Unprotected private network CIDR
+ description: The CIDR of the unprotected private network
+ protected_private_net_id:
+ type: string
+ label: Protected private network name or ID
+ description: Private network that connects vFirewall with vSink
+ protected_private_subnet_id:
+ type: string
+ label: Protected private subnetwork name or ID
+ description: Private subnetwork of the protected network
+ protected_private_net_cidr:
+ type: string
+ label: Protected private network CIDR
+ description: The CIDR of the protected private network
+ onap_private_net_id:
+ type: string
+ label: ONAP management network name or ID
+ description: Private network that connects ONAP components and the VNF
+ onap_private_subnet_id:
+ type: string
+ label: ONAP management sub-network name or ID
+ description: Private sub-network that connects ONAP components and the VNF
+ onap_private_net_cidr:
+ type: string
+ label: ONAP private network CIDR
+ description: The CIDR of the ONAP private network
+ vfw_private_ip_0:
+ type: string
+ label: vFirewall private IP address towards the unprotected network
+ description: Private IP address that is assigned to the vFirewall to communicate with the vPacketGenerator
+ vfw_private_ip_1:
+ type: string
+ label: vFirewall private IP address towards the protected network
+ description: Private IP address that is assigned to the vFirewall to communicate with the vSink
+ vfw_private_ip_2:
+ type: string
+ label: vFirewall private IP address towards the ONAP management network
+ description: Private IP address that is assigned to the vFirewall to communicate with ONAP components
+ vpg_private_ip_0:
+ type: string
+ label: vPacketGenerator private IP address towards the unprotected network
+ description: Private IP address that is assigned to the vPacketGenerator to communicate with the vFirewall
+ vsn_private_ip_0:
+ type: string
+ label: vSink private IP address towards the protected network
+ description: Private IP address that is assigned to the vSink to communicate with the vFirewall
+ vsn_private_ip_1:
+ type: string
+ label: vSink private IP address towards the ONAP management network
+ description: Private IP address that is assigned to the vSink to communicate with ONAP components
+ vfw_name_0:
+ type: string
+ label: vFirewall name
+ description: Name of the vFirewall
+ vsn_name_0:
+ type: string
+ label: vSink name
+ description: Name of the vSink
+ vnf_id:
+ type: string
+ label: VNF ID
+ description: The VNF ID is provided by ONAP
+ vf_module_id:
+ type: string
+ label: vFirewall module ID
+ description: The vFirewall Module ID is provided by ONAP
+ dcae_collector_ip:
+ type: string
+ label: DCAE collector IP address
+ description: IP address of the DCAE collector
+ dcae_collector_port:
+ type: string
+ label: DCAE collector port
+ description: Port of the DCAE collector
+ key_name:
+ type: string
+ label: Key pair name
+ description: Public/Private key pair name
+ pub_key:
+ type: string
+ label: Public key
+ description: Public key to be installed on the compute instance
+ repo_url_blob:
+ type: string
+ label: Repository URL
+ description: URL of the repository that hosts the demo packages
+ repo_url_artifacts:
+ type: string
+ label: Repository URL
+ description: URL of the repository that hosts the demo packages
+ install_script_version:
+ type: string
+ label: Installation script version number
+ description: Version number of the scripts that install the vFW demo app
+ demo_artifacts_version:
+ type: string
+ label: Artifacts version used in demo vnfs
+ description: Artifacts (jar, tar.gz) version used in demo vnfs
+ cloud_env:
+ type: string
+ label: Cloud environment
+ description: Cloud environment (e.g., openstack, rackspace)
+
+#############
+# #
+# RESOURCES #
+# #
+#############
+
+resources:
+ random-str:
+ type: OS::Heat::RandomString
+ properties:
+ length: 4
+
+ my_keypair:
+ type: OS::Nova::KeyPair
+ properties:
+ name:
+ str_replace:
+ template: base_rand
+ params:
+ base: { get_param: key_name }
+ rand: { get_resource: random-str }
+ public_key: { get_param: pub_key }
+ save_private_key: false
+
+ unprotected_private_network:
+ type: OS::Neutron::Net
+ properties:
+ name: { get_param: unprotected_private_net_id }
+
+ unprotected_private_subnet:
+ type: OS::Neutron::Subnet
+ properties:
+ name: { get_param: unprotected_private_subnet_id }
+ network_id: { get_resource: unprotected_private_network }
+ cidr: { get_param: unprotected_private_net_cidr }
+
+ protected_private_network:
+ type: OS::Neutron::Net
+ properties:
+ name: { get_param: protected_private_net_id }
+
+ protected_private_subnet:
+ type: OS::Neutron::Subnet
+ properties:
+ name: { get_param: protected_private_subnet_id }
+ network_id: { get_resource: protected_private_network }
+ cidr: { get_param: protected_private_net_cidr }
+
+ # Virtual Firewall instantiation
+ vfw_private_0_port:
+ type: OS::Neutron::Port
+ properties:
+ network: { get_resource: unprotected_private_network }
+ fixed_ips: [{"subnet": { get_resource: unprotected_private_subnet }, "ip_address": { get_param: vfw_private_ip_0 }}]
+
+ vfw_private_1_port:
+ type: OS::Neutron::Port
+ properties:
+ allowed_address_pairs: [{ "ip_address": { get_param: vpg_private_ip_0 }}]
+ network: { get_resource: protected_private_network }
+ fixed_ips: [{"subnet": { get_resource: protected_private_subnet }, "ip_address": { get_param: vfw_private_ip_1 }}]
+
+ vfw_private_2_port:
+ type: OS::Neutron::Port
+ properties:
+ network: { get_param: onap_private_net_id }
+ fixed_ips: [{"subnet": { get_param: onap_private_subnet_id }, "ip_address": { get_param: vfw_private_ip_2 }}]
+
+ vfw_0:
+ type: OS::Nova::Server
+ properties:
+ image: { get_param: image_name }
+ flavor: { get_param: flavor_name }
+ name: { get_param: vfw_name_0 }
+ key_name: { get_resource: my_keypair }
+ networks:
+ - network: { get_param: public_net_id }
+ - port: { get_resource: vfw_private_0_port }
+ - port: { get_resource: vfw_private_1_port }
+ - port: { get_resource: vfw_private_2_port }
+ metadata: {vnf_id: { get_param: vnf_id }, vf_module_id: { get_param: vf_module_id }}
+ user_data_format: RAW
+ user_data:
+ str_replace:
+ params:
+ __dcae_collector_ip__ : { get_param: dcae_collector_ip }
+ __dcae_collector_port__ : { get_param: dcae_collector_port }
+ __repo_url_blob__ : { get_param: repo_url_blob }
+ __repo_url_artifacts__ : { get_param: repo_url_artifacts }
+ __demo_artifacts_version__ : { get_param: demo_artifacts_version }
+ __install_script_version__ : { get_param: install_script_version }
+ __vfw_private_ip_0__ : { get_param: vfw_private_ip_0 }
+ __vfw_private_ip_1__ : { get_param: vfw_private_ip_1 }
+ __vfw_private_ip_2__ : { get_param: vfw_private_ip_2 }
+ __unprotected_private_net_cidr__ : { get_param: unprotected_private_net_cidr }
+ __protected_private_net_cidr__ : { get_param: protected_private_net_cidr }
+ __onap_private_net_cidr__ : { get_param: onap_private_net_cidr }
+ __cloud_env__ : { get_param: cloud_env }
+ template: |
+ #!/bin/bash
+
+ # Create configuration files
+ mkdir /opt/config
+ echo "__dcae_collector_ip__" > /opt/config/dcae_collector_ip.txt
+ echo "__dcae_collector_port__" > /opt/config/dcae_collector_port.txt
+ echo "__repo_url_blob__" > /opt/config/repo_url_blob.txt
+ echo "__repo_url_artifacts__" > /opt/config/repo_url_artifacts.txt
+ echo "__demo_artifacts_version__" > /opt/config/demo_artifacts_version.txt
+ echo "__install_script_version__" > /opt/config/install_script_version.txt
+ echo "__vfw_private_ip_0__" > /opt/config/vfw_private_ip_0.txt
+ echo "__vfw_private_ip_1__" > /opt/config/vfw_private_ip_1.txt
+ echo "__vfw_private_ip_2__" > /opt/config/vfw_private_ip_2.txt
+ echo "__unprotected_private_net_cidr__" > /opt/config/unprotected_private_net_cidr.txt
+ echo "__protected_private_net_cidr__" > /opt/config/protected_private_net_cidr.txt
+ echo "__onap_private_net_cidr__" > /opt/config/onap_private_net_cidr.txt
+ echo "__cloud_env__" > /opt/config/cloud_env.txt
+
+ # Download and run install script
+ curl -k __repo_url_blob__/org.onap.demo/vnfs/vfw/__install_script_version__/v_firewall_install.sh -o /opt/v_firewall_install.sh
+ cd /opt
+ chmod +x v_firewall_install.sh
+ ./v_firewall_install.sh
+
+
+ # Virtual Sink instantiation
+ vsn_private_0_port:
+ type: OS::Neutron::Port
+ properties:
+ network: { get_resource: protected_private_network }
+ fixed_ips: [{"subnet": { get_resource: protected_private_subnet }, "ip_address": { get_param: vsn_private_ip_0 }}]
+
+ vsn_private_1_port:
+ type: OS::Neutron::Port
+ properties:
+ network: { get_param: onap_private_net_id }
+ fixed_ips: [{"subnet": { get_param: onap_private_subnet_id }, "ip_address": { get_param: vsn_private_ip_1 }}]
+
+ vsn_0:
+ type: OS::Nova::Server
+ properties:
+ image: { get_param: image_name }
+ flavor: { get_param: flavor_name }
+ name: { get_param: vsn_name_0 }
+ key_name: { get_resource: my_keypair }
+ networks:
+ - network: { get_param: public_net_id }
+ - port: { get_resource: vsn_private_0_port }
+ - port: { get_resource: vsn_private_1_port }
+ metadata: {vnf_id: { get_param: vnf_id }, vf_module_id: { get_param: vf_module_id }}
+ user_data_format: RAW
+ user_data:
+ str_replace:
+ params:
+ __protected_net_gw__: { get_param: vfw_private_ip_1 }
+ __unprotected_net__: { get_param: unprotected_private_net_cidr }
+ __repo_url_blob__ : { get_param: repo_url_blob }
+ __repo_url_artifacts__ : { get_param: repo_url_artifacts }
+ __install_script_version__ : { get_param: install_script_version }
+ __vsn_private_ip_0__ : { get_param: vsn_private_ip_0 }
+ __vsn_private_ip_1__ : { get_param: vsn_private_ip_1 }
+ __protected_private_net_cidr__ : { get_param: protected_private_net_cidr }
+ __onap_private_net_cidr__ : { get_param: onap_private_net_cidr }
+ __cloud_env__ : { get_param: cloud_env }
+ template: |
+ #!/bin/bash
+
+ # Create configuration files
+ mkdir /opt/config
+ echo "__protected_net_gw__" > /opt/config/protected_net_gw.txt
+ echo "__unprotected_net__" > /opt/config/unprotected_net.txt
+ echo "__repo_url_blob__" > /opt/config/repo_url_blob.txt
+ echo "__install_script_version__" > /opt/config/install_script_version.txt
+ echo "__vsn_private_ip_0__" > /opt/config/vsn_private_ip_0.txt
+ echo "__vsn_private_ip_1__" > /opt/config/vsn_private_ip_1.txt
+ echo "__protected_private_net_cidr__" > /opt/config/protected_private_net_cidr.txt
+ echo "__onap_private_net_cidr__" > /opt/config/onap_private_net_cidr.txt
+ echo "__cloud_env__" > /opt/config/cloud_env.txt
+
+ # Download and run install script
+ curl -k __repo_url_blob__/org.onap.demo/vnfs/vfw/__install_script_version__/v_sink_install.sh -o /opt/v_sink_install.sh
+ cd /opt
+ chmod +x v_sink_install.sh
+ ./v_sink_install.sh
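
Because the vFWSNK template creates its own keypair and both private networks, it can also be brought up directly against Heat for testing outside of ONAP orchestration. A minimal sketch, assuming the OpenStack CLI with the Heat plugin is available and base_vfw.env has been filled in with site-specific values (the stack name "vFWSNK" is arbitrary):

    # Hypothetical manual instantiation for testing; SO normally drives this through ONAP
    cd heat/vFWCL/vFWSNK
    openstack stack create -t base_vfw.yaml -e base_vfw.env vFWSNK
    openstack stack show vFWSNK                 # watch for CREATE_COMPLETE
    openstack server list | grep zdfw1fwl01     # vfw_name_0 / vsn_name_0 defaults from base_vfw.env
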
diff --git a/heat/vFWCL/vPKG/MANIFEST.json b/heat/vFWCL/vPKG/MANIFEST.json new file mode 100644 index 00000000..482b4294 --- /dev/null +++ b/heat/vFWCL/vPKG/MANIFEST.json @@ -0,0 +1,17 @@ +{ + "name": "", + "description": "", + "data": [ + { + "file": "base_vpkg.yaml", + "type": "HEAT", + "isBase": "true", + "data": [ + { + "file": "base_vpkg.env", + "type": "HEAT_ENV" + } + ] + } + ] +} diff --git a/heat/vFWCL/vPKG/base_vpkg.env b/heat/vFWCL/vPKG/base_vpkg.env new file mode 100644 index 00000000..a7a30e32 --- /dev/null +++ b/heat/vFWCL/vPKG/base_vpkg.env @@ -0,0 +1,25 @@ +parameters: + image_name: PUT THE VM IMAGE NAME HERE + flavor_name: PUT THE VM FLAVOR NAME HERE + public_net_id: PUT THE PUBLIC NETWORK ID HERE + unprotected_private_net_id: zdfw1fwl01_unprotected + unprotected_private_subnet_id: zdfw1fwl01_unprotected_sub + unprotected_private_net_cidr: 192.168.10.0/24 + onap_private_net_id: PUT THE ONAP PRIVATE NETWORK NAME HERE + onap_private_subnet_id: PUT THE ONAP PRIVATE NETWORK NAME HERE + onap_private_net_cidr: 10.0.0.0/16 + protected_private_net_cidr: 192.168.20.0/24 + vfw_private_ip_0: 192.168.10.100 + vpg_private_ip_0: 192.168.10.200 + vpg_private_ip_1: 10.0.100.2 + vsn_private_ip_0: 192.168.20.250 + vpg_name_0: zdfw1fwl01pgn01 + vnf_id: vPNG_Firewall_demo_app + vf_module_id: vTrafficPNG + repo_url_blob: https://nexus.onap.org/content/sites/raw + repo_url_artifacts: https://nexus.onap.org/content/groups/staging + demo_artifacts_version: 1.1.0 + install_script_version: 1.1.0-SNAPSHOT + key_name: vfw_key + pub_key: PUT YOUR PUBLIC KEY HERE + cloud_env: PUT openstack OR rackspace HERE diff --git a/heat/vFWCL/vPKG/base_vpkg.yaml b/heat/vFWCL/vPKG/base_vpkg.yaml new file mode 100644 index 00000000..79d35bd3 --- /dev/null +++ b/heat/vFWCL/vPKG/base_vpkg.yaml @@ -0,0 +1,221 @@ +##########################################################################
+#
+#==================LICENSE_START==========================================
+#
+#
+# Copyright (c) 2017 AT&T Intellectual Property. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+#==================LICENSE_END============================================
+#
+# ECOMP is a trademark and service mark of AT&T Intellectual Property.
+#
+##########################################################################
+
+heat_template_version: 2013-05-23
+
+description: Heat template that deploys the vFirewall Traffic Generator demo app for ONAP
+
+##############
+# #
+# PARAMETERS #
+# #
+##############
+
+parameters:
+ image_name:
+ type: string
+ label: Image name or ID
+ description: Image to be used for compute instance
+ flavor_name:
+ type: string
+ label: Flavor
+ description: Type of instance (flavor) to be used
+ public_net_id:
+ type: string
+ label: Public network name or ID
+ description: Public network that enables remote connection to VNF
+ unprotected_private_net_id:
+ type: string
+ label: Unprotected private network name or ID
+ description: Private network that connects vPacketGenerator with vFirewall
+ unprotected_private_subnet_id:
+ type: string
+ label: Unprotected private sub-network name or ID
+ description: Private subnetwork for the unprotected network
+ unprotected_private_net_cidr:
+ type: string
+ label: Unprotected private network CIDR
+ description: The CIDR of the unprotected private network
+ protected_private_net_cidr:
+ type: string
+ label: Protected private network CIDR
+ description: The CIDR of the protected private network
+ onap_private_net_id:
+ type: string
+ label: ONAP management network name or ID
+ description: Private network that connects ONAP components and the VNF
+ onap_private_subnet_id:
+ type: string
+ label: ONAP management sub-network name or ID
+ description: Private sub-network that connects ONAP components and the VNF
+ onap_private_net_cidr:
+ type: string
+ label: ONAP private network CIDR
+ description: The CIDR of the ONAP private network
+ vfw_private_ip_0:
+ type: string
+ label: vFirewall private IP address towards the unprotected network
+ description: Private IP address that is assigned to the vFirewall to communicate with the vPacketGenerator
+ vsn_private_ip_0:
+ type: string
+ label: vSink private IP address towards the protected network
+ description: Private IP address that is assigned to the vSink to communicate with the vFirewall
+ vpg_private_ip_0:
+ type: string
+ label: vPacketGenerator private IP address towards the unprotected network
+ description: Private IP address that is assigned to the vPacketGenerator to communicate with the vFirewall
+ vpg_private_ip_1:
+ type: string
+ label: vPacketGenerator private IP address towards the ONAP management network
+ description: Private IP address that is assigned to the vPacketGenerator to communicate with ONAP components
+ vpg_name_0:
+ type: string
+ label: vPacketGenerator name
+ description: Name of the vPacketGenerator
+ vnf_id:
+ type: string
+ label: VNF ID
+ description: The VNF ID is provided by ONAP
+ vf_module_id:
+ type: string
+ label: vPNG Traffic Generator module ID
+ description: The vPNG Module ID is provided by ONAP
+ key_name:
+ type: string
+ label: Key pair name
+ description: Public/Private key pair name
+ pub_key:
+ type: string
+ label: Public key
+ description: Public key to be installed on the compute instance
+ repo_url_blob:
+ type: string
+ label: Repository URL
+ description: URL of the repository that hosts the demo packages
+ repo_url_artifacts:
+ type: string
+ label: Repository URL
+ description: URL of the repository that hosts the demo packages
+ install_script_version:
+ type: string
+ label: Installation script version number
+ description: Version number of the scripts that install the vFW demo app
+ demo_artifacts_version:
+ type: string
+ label: Artifacts version used in demo vnfs
+ description: Artifacts (jar, tar.gz) version used in demo vnfs
+ cloud_env:
+ type: string
+ label: Cloud environment
+ description: Cloud environment (e.g., openstack, rackspace)
+
+#############
+# #
+# RESOURCES #
+# #
+#############
+
+resources:
+ random-str:
+ type: OS::Heat::RandomString
+ properties:
+ length: 4
+
+ my_keypair:
+ type: OS::Nova::KeyPair
+ properties:
+ name:
+ str_replace:
+ template: base_rand
+ params:
+ base: { get_param: key_name }
+ rand: { get_resource: random-str }
+ public_key: { get_param: pub_key }
+ save_private_key: false
+
+
+ # Virtual Packet Generator instantiation
+ vpg_private_0_port:
+ type: OS::Neutron::Port
+ properties:
+ network: { get_param: unprotected_private_net_id }
+ fixed_ips: [{"subnet": { get_param: unprotected_private_subnet_id }, "ip_address": { get_param: vpg_private_ip_0 }}]
+
+ vpg_private_1_port:
+ type: OS::Neutron::Port
+ properties:
+ network: { get_param: onap_private_net_id }
+ fixed_ips: [{"subnet": { get_param: onap_private_subnet_id }, "ip_address": { get_param: vpg_private_ip_1 }}]
+
+ vpg_0:
+ type: OS::Nova::Server
+ properties:
+ image: { get_param: image_name }
+ flavor: { get_param: flavor_name }
+ name: { get_param: vpg_name_0 }
+ key_name: { get_resource: my_keypair }
+ networks:
+ - network: { get_param: public_net_id }
+ - port: { get_resource: vpg_private_0_port }
+ - port: { get_resource: vpg_private_1_port }
+ metadata: {vnf_id: { get_param: vnf_id }, vf_module_id: { get_param: vf_module_id }}
+ user_data_format: RAW
+ user_data:
+ str_replace:
+ params:
+ __fw_ipaddr__: { get_param: vfw_private_ip_0 }
+ __protected_net_cidr__: { get_param: protected_private_net_cidr }
+ __sink_ipaddr__: { get_param: vsn_private_ip_0 }
+ __repo_url_blob__ : { get_param: repo_url_blob }
+ __repo_url_artifacts__ : { get_param: repo_url_artifacts }
+ __demo_artifacts_version__ : { get_param: demo_artifacts_version }
+ __install_script_version__ : { get_param: install_script_version }
+ __vpg_private_ip_0__ : { get_param: vpg_private_ip_0 }
+ __vpg_private_ip_1__ : { get_param: vpg_private_ip_1 }
+ __unprotected_private_net_cidr__ : { get_param: unprotected_private_net_cidr }
+ __onap_private_net_cidr__ : { get_param: onap_private_net_cidr }
+ __cloud_env__ : { get_param: cloud_env }
+ template: |
+ #!/bin/bash
+
+ # Create configuration files
+ mkdir /opt/config
+ echo "__fw_ipaddr__" > /opt/config/fw_ipaddr.txt
+ echo "__protected_net_cidr__" > /opt/config/protected_net_cidr.txt
+ echo "__sink_ipaddr__" > /opt/config/sink_ipaddr.txt
+ echo "__repo_url_blob__" > /opt/config/repo_url_blob.txt
+ echo "__repo_url_artifacts__" > /opt/config/repo_url_artifacts.txt
+ echo "__demo_artifacts_version__" > /opt/config/demo_artifacts_version.txt
+ echo "__install_script_version__" > /opt/config/install_script_version.txt
+ echo "__vpg_private_ip_0__" > /opt/config/vpg_private_ip_0.txt
+ echo "__vpg_private_ip_1__" > /opt/config/vpg_private_ip_1.txt
+ echo "__unprotected_private_net_cidr__" > /opt/config/unprotected_private_net_cidr.txt
+ echo "__onap_private_net_cidr__" > /opt/config/onap_private_net_cidr.txt
+ echo "__cloud_env__" > /opt/config/cloud_env.txt
+
+ # Download and run install script
+ curl -k __repo_url_blob__/org.onap.demo/vnfs/vfw/__install_script_version__/v_packetgen_install.sh -o /opt/v_packetgen_install.sh
+ cd /opt
+ chmod +x v_packetgen_install.sh
+ ./v_packetgen_install.sh
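
Once the vPKG stack is up, each str_replace parameter above ends up as a single file under /opt/config on the vPacketGenerator VM, which makes it straightforward to confirm that the traffic generator was pointed at the right firewall and sink before starting the closed-loop demo. A quick sanity check might look like the following sketch (file names come from the user_data section above; the expected values are the defaults in base_vpkg.env):

    # Sketch: verify the configuration written by user_data on the vPacketGenerator VM
    cat /opt/config/fw_ipaddr.txt          # expected 192.168.10.100 (vfw_private_ip_0)
    cat /opt/config/sink_ipaddr.txt        # expected 192.168.20.250 (vsn_private_ip_0)
    cat /opt/config/protected_net_cidr.txt # expected 192.168.20.0/24
    ls -l /opt/v_packetgen_install.sh      # install script downloaded by user_data
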
diff --git a/heat/vLB/MANIFEST.json b/heat/vLB/MANIFEST.json new file mode 100644 index 00000000..b22a67f3 --- /dev/null +++ b/heat/vLB/MANIFEST.json @@ -0,0 +1,28 @@ +{ + "name": "virtualLoadBalancer", + "description": "", + "data": [ + { + "file": "base_vlb.yaml", + "type": "HEAT", + "isBase": "true", + "data": [ + { + "file": "base_vlb.env", + "type": "HEAT_ENV" + } + ] + }, + { + "file": "dnsscaling.yaml", + "type": "HEAT", + "isBase": "false", + "data": [ + { + "file": "dnsscaling.env", + "type": "HEAT_ENV" + } + ] + } + ] +}
\ No newline at end of file diff --git a/heat/vVG/MANIFEST.json b/heat/vVG/MANIFEST.json new file mode 100644 index 00000000..3f9348b0 --- /dev/null +++ b/heat/vVG/MANIFEST.json @@ -0,0 +1,17 @@ +{ + "name": "", + "description": "", + "data": [ + { + "file": "base_vvg.yaml", + "type": "HEAT", + "isBase": "true", + "data": [ + { + "file": "base_vvg.env", + "type": "HEAT_ENV" + } + ] + } + ] +}
\ No newline at end of file diff --git a/heat/vVG/base_vvg.env b/heat/vVG/base_vvg.env new file mode 100644 index 00000000..2b4e72b8 --- /dev/null +++ b/heat/vVG/base_vvg.env @@ -0,0 +1,3 @@ +parameters: + volume_size: 100 + nova_instance: 1234456
\ No newline at end of file diff --git a/heat/vVG/base_vvg.yaml b/heat/vVG/base_vvg.yaml new file mode 100644 index 00000000..c20d4e48 --- /dev/null +++ b/heat/vVG/base_vvg.yaml @@ -0,0 +1,22 @@ +heat_template_version: 2013-05-23 +description: create a Nova instance, a Cinder volume and attach the volume to the instance. + +parameters: + nova_instance: + type: string + label: Instance name or ID + description: ID of the vm to use for the disk to be attached too + volume_size: + type: number + label: GB + description: Size of the volume to be created. +resources: + cinder_volume: + type: OS::Cinder::Volume + properties: + size: { get_param: volume_size } + volume_attachment: + type: OS::Cinder::VolumeAttachment + properties: + volume_id: { get_resource: cinder_volume } + instance_uuid: { get_param: nova_instance }
\ No newline at end of file diff --git a/vnfs/vCPE/kea-sdnc-notify-mod/etc/kea-dhcp4.conf.example b/vnfs/vCPE/kea-sdnc-notify-mod/etc/kea-dhcp4.conf.example index 9faaf870..b5f1a697 100644 --- a/vnfs/vCPE/kea-sdnc-notify-mod/etc/kea-dhcp4.conf.example +++ b/vnfs/vCPE/kea-sdnc-notify-mod/etc/kea-dhcp4.conf.example @@ -36,10 +36,10 @@ "pools" : [ { "pool": "10.3.0.2 - 10.3.0.255"} ], "next-server": "10.3.0.1", "option-data": [ - {"name": "tftp-server-name", - "data": "10.4.0.1"}, - {"name": "boot-file-name", - "data": "/dev/null"} + { + "name": "routers", + "data": "10.3.0.1" + } ] } ] diff --git a/vnfs/vCPE/scripts/kea-dhcp4-web.conf b/vnfs/vCPE/scripts/kea-dhcp4-web.conf new file mode 100644 index 00000000..4bf07044 --- /dev/null +++ b/vnfs/vCPE/scripts/kea-dhcp4-web.conf @@ -0,0 +1,63 @@ +{ +"Dhcp4": + { +# For testing, you can use veth pair as described in README.md +# vDHCP needs to lisetn on eth1 + "interfaces-config": { + "interfaces": ["eth1" ] + }, + +# How to load the hook library. + + "lease-database": { + "type": "memfile" + }, + + "expired-leases-processing": { + "reclaim-timer-wait-time": 10, + "flush-reclaimed-timer-wait-time": 25, + "hold-reclaimed-time": 3600, + "max-reclaim-leases": 100, + "max-reclaim-time": 250, + "unwarned-reclaim-cycles": 5 + }, + + "valid-lifetime": 3600, + +# Ensure you set some sensible defaults for the siaddr and option-data, +# otherwise the options won't be added at all. +# Also keep in mind that if kea doesn't receive the desired values for some +# reason, these values will be sent to the client. + "subnet4": [ + { "subnet": "10.2.0.0/24", + "pools" : [ { "pool": "10.2.0.2 - 10.2.0.255"} ], + "next-server": "10.2.0.1", + "option-data": [ + { + "name": "routers", + "data": "10.2.0.1" + } + ] + + } + ] + +}, + +"Logging": +{ + "loggers": [ + { + "name": "kea-dhcp4", + "output_options": [ + { + "output": "/var/log/kea-dhcp4.log" + } + ], + "severity": "DEBUG", + "debuglevel": 0 + }, + ] +} + +} diff --git a/vnfs/vCPE/scripts/kea-dhcp4.conf b/vnfs/vCPE/scripts/kea-dhcp4.conf index 508c0e62..d965072b 100644 --- a/vnfs/vCPE/scripts/kea-dhcp4.conf +++ b/vnfs/vCPE/scripts/kea-dhcp4.conf @@ -36,11 +36,12 @@ "pools" : [ { "pool": "10.3.0.2 - 10.3.0.255"} ], "next-server": "10.3.0.1", "option-data": [ - {"name": "tftp-server-name", - "data": "10.4.0.1"}, - {"name": "boot-file-name", - "data": "/dev/null"} + { + "name": "routers", + "data": "10.3.0.1" + } ] + } ] diff --git a/vnfs/vCPE/scripts/kea-dhcp4_no_hook.conf b/vnfs/vCPE/scripts/kea-dhcp4_no_hook.conf index 3e2287d1..170b8f3c 100644 --- a/vnfs/vCPE/scripts/kea-dhcp4_no_hook.conf +++ b/vnfs/vCPE/scripts/kea-dhcp4_no_hook.conf @@ -26,14 +26,14 @@ # Also keep in mind that if kea doesn't receive the desired values for some # reason, these values will be sent to the client. 
"subnet4": [ - { "subnet": "10.2.0.0/24", - "pools" : [ { "pool": "10.2.0.2 - 10.2.0.255"} ], - "next-server": "10.2.0.1", + { "subnet": "10.3.0.0/24", + "pools" : [ { "pool": "10.3.0.2 - 10.3.0.255"} ], + "next-server": "10.3.0.1", "option-data": [ - {"name": "tftp-server-name", - "data": "10.2.0.1"}, - {"name": "boot-file-name", - "data": "/dev/null"} + { + "name": "routers", + "data": "10.3.0.1" + } ] } ] diff --git a/vnfs/vCPE/scripts/v_bng_init.sh b/vnfs/vCPE/scripts/v_bng_init.sh index 6fb2eadc..ce20dc57 100644 --- a/vnfs/vCPE/scripts/v_bng_init.sh +++ b/vnfs/vCPE/scripts/v_bng_init.sh @@ -2,3 +2,13 @@ systemctl start vpp +# wait for TAP_DEV to become active before setting a route +TAP_DEV=tap0 +STATUS=$(ip link show $TAP_DEV 2> /dev/null) +while [ -z "$STATUS" ]; do + echo "$(date) v_bng_init.sh: $TAP_DEV is not yet ready..." + sleep 1 + STATUS=$(ip link show $TAP_DEV 2> /dev/null) +done +ip route add 10.3.0.0/24 via 192.168.40.41 dev $TAP_DEV + diff --git a/vnfs/vCPE/scripts/v_bng_install.sh b/vnfs/vCPE/scripts/v_bng_install.sh index e20128c5..49bca161 100644 --- a/vnfs/vCPE/scripts/v_bng_install.sh +++ b/vnfs/vCPE/scripts/v_bng_install.sh @@ -16,6 +16,7 @@ BRGEMU_BNG_NET_CIDR=$(cat /opt/config/brgemu_bng_net_cidr.txt) BRGEMU_BNG_NET_IPADDR=$(cat /opt/config/brgemu_bng_net_ipaddr.txt) CPE_SIGNAL_NET_CIDR=$(cat /opt/config/cpe_signal_net_cidr.txt) CPE_SIGNAL_NET_IPADDR=$(cat /opt/config/cpe_signal_net_ipaddr.txt) +SDNC_IP_ADDR=$(cat /opt/config/sdnc_ip_addr.txt) # Build states are: # 'build' - just build the code @@ -64,6 +65,10 @@ fi # endif BUILD_STATE != "build" if [[ $BUILD_STATE != "done" ]] then + # Enable IPV4 forwarding through kernel + sed -i 's/^.*\(net.ipv4.ip_forward\).*/\1=1/g' /etc/sysctl.conf + sysctl -p /etc/sysctl.conf + # Download required dependencies echo "deb http://ppa.launchpad.net/openjdk-r/ppa/ubuntu $(lsb_release -c -s) main" >> /etc/apt/sources.list.d/java.list echo "deb-src http://ppa.launchpad.net/openjdk-r/ppa/ubuntu $(lsb_release -c -s) main" >> /etc/apt/sources.list.d/java.list @@ -250,6 +255,11 @@ set interface ip address ${BNG_GMUX_NIC} ${BNG_GMUX_NET_IPADDR}/${BNG_GMUX_NET_C set vbng dhcp4 remote 10.4.0.1 local ${CPE_SIGNAL_NET_IPADDR} set vbng aaa config /etc/vpp/vbng-aaa.cfg nas-port 5060 +tap connect tap0 address 192.168.40.40/24 +set int state tap-0 up +set int ip address tap-0 192.168.40.41/24 +ip route add ${SDNC_IP_ADDR}/32 via 192.168.40.40 tap-0 + EOF cat > /etc/vpp/vbng-aaa.cfg << EOF diff --git a/vnfs/vCPE/scripts/v_gmux_install.sh b/vnfs/vCPE/scripts/v_gmux_install.sh index 50f754da..5e98fe1b 100644 --- a/vnfs/vCPE/scripts/v_gmux_install.sh +++ b/vnfs/vCPE/scripts/v_gmux_install.sh @@ -14,8 +14,10 @@ LIBEVEL_PATCH_URL=$(cat /opt/config/libevel_patch_url.txt) CLOUD_ENV=$(cat /opt/config/cloud_env.txt) MUX_GW_IP=$(cat /opt/config/mux_gw_net_ipaddr.txt) MUX_GW_CIDR=$(cat /opt/config/mux_gw_net_cidr.txt) -BNG_MUX_IP=$(cat /opt/config/bng_mux_net_ipaddr.txt) +MUX_TO_BNG_IP=$(cat /opt/config/mux_to_bng_net_ipaddr.txt) BNG_MUX_CIDR=$(cat /opt/config/bng_mux_net_cidr.txt) +BRG_BNG_CIDR=$(cat /opt/config/brg_bng_net_cidr.txt) +BNG_TO_MUX_IP=$(cat /opt/config/bng_to_mux_net_ipaddr.txt) # Build states are: # 'build' - just build the code @@ -234,10 +236,11 @@ EOF cat > /etc/vpp/setup.gate << EOF set int state ${BNG_MUX_NIC} up -set int ip address ${BNG_MUX_NIC} ${BNG_MUX_IP}/${BNG_MUX_CIDR#*/} +set int ip address ${BNG_MUX_NIC} ${MUX_TO_BNG_IP}/${BNG_MUX_CIDR#*/} set int state ${MUX_GW_NIC} up set int ip address ${MUX_GW_NIC} 
${MUX_GW_IP}/${MUX_GW_CIDR#*/} +ip route add ${BRG_BNG_CIDR} via ${BNG_TO_MUX_IP} ${BNG_MUX_NIC} EOF fi # endif BUILD_STATE != "build" diff --git a/vnfs/vCPE/scripts/v_gw_install.sh b/vnfs/vCPE/scripts/v_gw_install.sh index 6074cdfa..53ac6903 100644 --- a/vnfs/vCPE/scripts/v_gw_install.sh +++ b/vnfs/vCPE/scripts/v_gw_install.sh @@ -11,6 +11,8 @@ HC2VPP_SOURCE_REPO_BRANCH=$(cat /opt/config/hc2vpp_source_repo_branch.txt) CLOUD_ENV=$(cat /opt/config/cloud_env.txt) MUX_GW_IP=$(cat /opt/config/mux_gw_private_net_ipaddr.txt) MUX_GW_CIDR=$(cat /opt/config/mux_gw_private_net_cidr.txt) +MUX_IP_ADDR=$(cat /opt/config/mux_ip_addr.txt) +VG_VGMUX_TUNNEL_VNI=$(cat /opt/config/vg_vgmux_tunnel_vni.txt) # Build states are: # 'build' - just build the code @@ -207,20 +209,14 @@ EOF MUX_GW_NIC=GigabitEthernet`echo ${NICS} | cut -d " " -f 2` # second interface in list GW_PUB_NIC=GigabitEthernet`echo ${NICS} | cut -d " " -f 4` # fourth interface in list -touch /etc/vpp/setup.gate cat > /etc/vpp/setup.gate << EOF set int state ${MUX_GW_NIC} up -set int ip address ${MUX_GW_NIC} 10.5.0.21/24 +set int ip address ${MUX_GW_NIC} ${MUX_GW_IP}/${MUX_GW_CIDR#*/} set int state ${GW_PUB_NIC} up set dhcp client intfc ${GW_PUB_NIC} hostname vg-1 -tap connect lstack address 192.168.1.1/24 -set int state tap-0 up - -create vxlan tunnel src 10.5.0.21 dst 10.5.0.20 vni 100 - -set interface l2 bridge tap-0 10 0 +create vxlan tunnel src ${MUX_GW_IP} dst ${MUX_IP_ADDR} vni ${VG_VGMUX_TUNNEL_VNI} set interface l2 bridge vxlan_tunnel0 10 1 set bridge-domain arp term 10 @@ -238,7 +234,7 @@ fi # endif BUILD_STATE != "build" if [[ $BUILD_STATE != "done" ]] then - # Download and install HC2VPP from source + # Download and install HC2VPP from source cd /opt git clone ${HC2VPP_SOURCE_REPO_URL} -b ${HC2VPP_SOURCE_REPO_BRANCH} hc2vpp @@ -361,6 +357,12 @@ EOF mv vpp-integration/minimal-distribution/target/vpp-integration-distribution-${l_version}-hc/vpp-integration-distribution-${l_version} /opt/honeycomb sed -i 's/127.0.0.1/0.0.0.0/g' /opt/honeycomb/config/honeycomb.json + # Disable automatic upgrades + if [[ $CLOUD_ENV != "rackspace" ]] + then + echo "APT::Periodic::Unattended-Upgrade \"0\";" >> /etc/apt/apt.conf.d/10periodic + sed -i 's/\(APT::Periodic::Unattended-Upgrade\) "1"/\1 "0"/' /etc/apt/apt.conf.d/20auto-upgrades + fi fi # endif BUILD_STATE != "done if [[ $BUILD_STATE != "build" ]] @@ -396,15 +398,46 @@ subnet 192.168.1.0 netmask 255.255.255.0 { } EOF - # Download DHCP config files - cd /opt - wget $REPO_URL_BLOB/org.onap.demo/vnfs/vcpe/$INSTALL_SCRIPT_VERSION/v_gw_init.sh - wget $REPO_URL_BLOB/org.onap.demo/vnfs/vcpe/$INSTALL_SCRIPT_VERSION/v_gw.sh +echo '#!/bin/bash +STATUS=$(ip link show lstack) +if [ -z "$STATUS" ] +then + vppctl tap connect lstack address 192.168.1.1/24 + vppctl set int state tap-0 up + vppctl set interface l2 bridge tap-0 10 0 +fi +IP=$(/sbin/ifconfig lstack | grep "inet addr:" | cut -d: -f2 | awk "{ print $1 }") +if [ ! 
-z "$STATUS" ] && [ -z "$IP" ] +then + ip link delete lstack + vppctl tap delete tap-0 + vppctl tap connect lstack address 192.168.1.1/24 + vppctl set int state tap-0 up + vppctl set interface l2 bridge tap-0 10 0 +fi' > /opt/v_gw_init.sh + chmod +x v_gw_init.sh - chmod +x v_gw.sh - mv v_gw.sh /etc/init.d - sed "s/Provides:/$/ v_gw" /etc/init.d/v_gw.sh - update-rc.d v_gw.sh defaults + + cat > /etc/systemd/system/vgw.service << EOF +[Unit] +Description=vGW service to run after honeycomb service +Requires=honeycomb.service +After=honeycomb.service + +[Service] +ExecStart=/opt/v_gw_init.sh +Restart=always +RestartSec=10 + +[Install] +WantedBy=multi-user.target +EOF + + systemctl enable /etc/systemd/system/vgw.service + + cp /etc/systemd/system/multi-user.target.wants/isc-dhcp-server.service /etc/systemd/system/ + sed -i '/Documentation/a Wants=vgw.service\nAfter=vgw.service' /etc/systemd/system/isc-dhcp-server.service + sed -i '/exec dhcpd/a Restart=always\nRestartSec=10' /etc/systemd/system/isc-dhcp-server.service # Rename network interface in openstack Ubuntu 16.04 images. Then, reboot the VM to pick up changes if [[ $CLOUD_ENV != "rackspace" ]] @@ -414,9 +447,7 @@ EOF sed -i "s/ens[0-9]*/eth0/g" /etc/network/interfaces.d/*.cfg sed -i "s/ens[0-9]*/eth0/g" /etc/udev/rules.d/70-persistent-net.rules echo 'network: {config: disabled}' >> /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg - echo "APT::Periodic::Unattended-Upgrade \"0\";" >> /etc/apt/apt.conf.d/10periodic reboot fi - ./v_gw_init.sh fi # endif BUILD_STATE != "build" diff --git a/vnfs/vCPE/scripts/v_web_init.sh b/vnfs/vCPE/scripts/v_web_init.sh index a9bf588e..fce6aaf6 100644 --- a/vnfs/vCPE/scripts/v_web_init.sh +++ b/vnfs/vCPE/scripts/v_web_init.sh @@ -1 +1,2 @@ #!/bin/bash +service kea-dhcp4-server start diff --git a/vnfs/vCPE/scripts/v_web_install.sh b/vnfs/vCPE/scripts/v_web_install.sh index e207dd09..685d675a 100644 --- a/vnfs/vCPE/scripts/v_web_install.sh +++ b/vnfs/vCPE/scripts/v_web_install.sh @@ -51,13 +51,22 @@ fi echo "deb http://ppa.launchpad.net/openjdk-r/ppa/ubuntu $(lsb_release -c -s) main" >> /etc/apt/sources.list.d/java.list echo "deb-src http://ppa.launchpad.net/openjdk-r/ppa/ubuntu $(lsb_release -c -s) main" >> /etc/apt/sources.list.d/java.list apt-get update -apt-get install --allow-unauthenticated -y wget openjdk-8-jdk apt-transport-https ca-certificates g++ libcurl4-gnutls-dev +apt-get install --allow-unauthenticated -y wget openjdk-8-jdk apt-transport-https ca-certificates kea-dhcp4-server g++ libcurl4-gnutls-dev sleep 1 -# Download DHCP config files +# Download DHCP config and init files cd /opt +wget $REPO_URL_BLOB/org.onap.demo/vnfs/vcpe/$INSTALL_SCRIPT_VERSION/kea-dhcp4-web.conf wget $REPO_URL_BLOB/org.onap.demo/vnfs/vcpe/$INSTALL_SCRIPT_VERSION/v_web_init.sh wget $REPO_URL_BLOB/org.onap.demo/vnfs/vcpe/$INSTALL_SCRIPT_VERSION/v_web.sh + + + +# Configure DHCP +cp kea-dhcp4-web.conf /etc/kea-dhcp4-server.conf +mv kea-dhcp4-web.conf /etc/kea/kea-dhcp4.conf + + chmod +x v_web_init.sh chmod +x v_web.sh mv v_web.sh /etc/init.d @@ -88,4 +97,4 @@ then reboot fi -./v_web_init.sh
\ No newline at end of file +./v_web_init.sh diff --git a/vnfs/vCPE/vpp-ves-agent-for-vgmux/src/patches/Hc2vpp-Add-VES-agent-for-vG-MUX.patch b/vnfs/vCPE/vpp-ves-agent-for-vgmux/src/patches/Hc2vpp-Add-VES-agent-for-vG-MUX.patch index 8c2e31b7..7899ed9a 100644 --- a/vnfs/vCPE/vpp-ves-agent-for-vgmux/src/patches/Hc2vpp-Add-VES-agent-for-vG-MUX.patch +++ b/vnfs/vCPE/vpp-ves-agent-for-vgmux/src/patches/Hc2vpp-Add-VES-agent-for-vG-MUX.patch @@ -6,7 +6,7 @@ Subject: [PATCH] Add VES agent configuration for vG-MUX Signed-off-by: Johnson Li <johnson.li@intel.com> diff --git a/pom.xml b/pom.xml -index 538a5d98..581bedfc 100644 +index 538a5d9..581bedf 100644 --- a/pom.xml +++ b/pom.xml @@ -44,13 +44,14 @@ @@ -28,7 +28,7 @@ index 538a5d98..581bedfc 100644 \ No newline at end of file diff --git a/ves/asciidoc/Readme.adoc b/ves/asciidoc/Readme.adoc new file mode 100644 -index 00000000..682e7555 +index 0000000..682e755 --- /dev/null +++ b/ves/asciidoc/Readme.adoc @@ -0,0 +1,3 @@ @@ -37,7 +37,7 @@ index 00000000..682e7555 +Overview of ves-agent diff --git a/ves/pom.xml b/ves/pom.xml new file mode 100644 -index 00000000..1ded0109 +index 0000000..1ded010 --- /dev/null +++ b/ves/pom.xml @@ -0,0 +1,56 @@ @@ -99,7 +99,7 @@ index 00000000..1ded0109 +</project> diff --git a/ves/ves-api/asciidoc/Readme.adoc b/ves/ves-api/asciidoc/Readme.adoc new file mode 100644 -index 00000000..b561268c +index 0000000..b561268 --- /dev/null +++ b/ves/ves-api/asciidoc/Readme.adoc @@ -0,0 +1,3 @@ @@ -109,7 +109,7 @@ index 00000000..b561268c \ No newline at end of file diff --git a/ves/ves-api/pom.xml b/ves/ves-api/pom.xml new file mode 100644 -index 00000000..78bf47b9 +index 0000000..78bf47b --- /dev/null +++ b/ves/ves-api/pom.xml @@ -0,0 +1,52 @@ @@ -167,10 +167,10 @@ index 00000000..78bf47b9 +</project> diff --git a/ves/ves-api/src/main/yang/vesagent.yang b/ves/ves-api/src/main/yang/vesagent.yang new file mode 100644 -index 00000000..a3c79797 +index 0000000..dde09c2 --- /dev/null +++ b/ves/ves-api/src/main/yang/vesagent.yang -@@ -0,0 +1,71 @@ +@@ -0,0 +1,77 @@ +module vesagent { + + yang-version 1; @@ -235,6 +235,12 @@ index 00000000..a3c79797 + description + "VES Working Mode, Demo Or Real Only."; + } ++ ++ leaf source-name { ++ type string; ++ description ++ "Override for the sourceName field in the VES event"; ++ } + } + } + @@ -244,7 +250,7 @@ index 00000000..a3c79797 +} diff --git a/ves/ves-impl/asciidoc/Readme.adoc b/ves/ves-impl/asciidoc/Readme.adoc new file mode 100644 -index 00000000..e07fb05f +index 0000000..e07fb05 --- /dev/null +++ b/ves/ves-impl/asciidoc/Readme.adoc @@ -0,0 +1,3 @@ @@ -254,7 +260,7 @@ index 00000000..e07fb05f \ No newline at end of file diff --git a/ves/ves-impl/pom.xml b/ves/ves-impl/pom.xml new file mode 100644 -index 00000000..5ed2c1b4 +index 0000000..5ed2c1b --- /dev/null +++ b/ves/ves-impl/pom.xml @@ -0,0 +1,157 @@ @@ -417,7 +423,7 @@ index 00000000..5ed2c1b4 +</project> diff --git a/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/VesModule.java b/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/VesModule.java new file mode 100644 -index 00000000..0cd60068 +index 0000000..0cd6006 --- /dev/null +++ b/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/VesModule.java @@ -0,0 +1,67 @@ @@ -490,7 +496,7 @@ index 00000000..0cd60068 +} diff --git a/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/jvpp/JVppVesProvider.java b/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/jvpp/JVppVesProvider.java new file mode 100644 -index 00000000..8afed84e +index 0000000..8afed84 --- /dev/null +++ 
b/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/jvpp/JVppVesProvider.java @@ -0,0 +1,59 @@ @@ -555,7 +561,7 @@ index 00000000..8afed84e + diff --git a/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/read/VesReaderFactory.java b/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/read/VesReaderFactory.java new file mode 100644 -index 00000000..bef652fd +index 0000000..bef652f --- /dev/null +++ b/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/read/VesReaderFactory.java @@ -0,0 +1,50 @@ @@ -611,7 +617,7 @@ index 00000000..bef652fd +} diff --git a/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/write/VesConfigCustomizer.java b/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/write/VesConfigCustomizer.java new file mode 100644 -index 00000000..e06afa73 +index 0000000..e06afa7 --- /dev/null +++ b/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/write/VesConfigCustomizer.java @@ -0,0 +1,127 @@ @@ -733,7 +739,7 @@ index 00000000..e06afa73 + throws WriteFailedException { + final VesAgentConfig request = new VesAgentConfig(); + -+ request.serverPort = config.getServerPort().byteValue(); ++ request.serverPort = config.getServerPort().intValue(); + request.readInterval = config.getReadInterval().byteValue(); + request.isAdd = config.getIsAdd().byteValue(); + request.serverAddr = ipv4AddressNoZoneToArray(config.getServerAddr().getValue()); @@ -744,10 +750,10 @@ index 00000000..e06afa73 +} diff --git a/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/write/VesModeCustomizer.java b/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/write/VesModeCustomizer.java new file mode 100644 -index 00000000..8b6d5a9a +index 0000000..248d819 --- /dev/null +++ b/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/write/VesModeCustomizer.java -@@ -0,0 +1,97 @@ +@@ -0,0 +1,99 @@ +/* + * Copyright (c) 2017 Intel Corp and/or its affiliates. 
+ * @@ -804,7 +810,7 @@ index 00000000..8b6d5a9a + @Nonnull final WriteContext writeContext) throws WriteFailedException { + LOG.debug("Writing VES Agent Working Mode {} dataAfter={}", iid, dataAfter); + -+ checkArgument(dataAfter.getWorkingMode() != null && dataAfter.getBasePacketLoss() <= 100, ++ checkArgument(dataAfter.getSourceName() != null && dataAfter.getWorkingMode() != null && dataAfter.getBasePacketLoss() <= 100, + "VES Agent Working Mode need to be correctly configured."); + + setVesAgentMode(iid, dataAfter); @@ -816,7 +822,7 @@ index 00000000..8b6d5a9a + throws WriteFailedException { + LOG.debug("Writing VES Agent Working Mode {} {}-->{}", iid, dataBefore, dataAfter); + -+ checkArgument(dataAfter.getWorkingMode() != null && dataAfter.getBasePacketLoss() <= 100, ++ checkArgument(dataAfter.getSourceName() != null && dataAfter.getWorkingMode() != null && dataAfter.getBasePacketLoss() <= 100, + "VES Agent Working Mode need to be correctly configured."); + + setVesAgentMode(iid, dataAfter); @@ -829,7 +835,8 @@ index 00000000..8b6d5a9a + LOG.debug("Restoring VES Mode {} dataBefore={} to default.", iid, dataBefore); + + modeBuilder.setWorkingMode("Real") -+ .setBasePacketLoss(0L); ++ .setBasePacketLoss(0L) ++ .setSourceName(""); + + setVesAgentMode(iid, modeBuilder.build()); + } @@ -840,6 +847,7 @@ index 00000000..8b6d5a9a + + request.pktLossRate = mode.getBasePacketLoss().byteValue(); + request.workMode = mode.getWorkingMode().getBytes(); ++ request.sourceName = mode.getSourceName().getBytes(); + + LOG.debug("VES agent working mode change id={} request={}", id, request); + getReplyForWrite(jvppVes.vesAgentMode(request).toCompletableFuture(), id); @@ -847,7 +855,7 @@ index 00000000..8b6d5a9a +} diff --git a/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/write/VesWriterFactory.java b/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/write/VesWriterFactory.java new file mode 100644 -index 00000000..581f0460 +index 0000000..581f046 --- /dev/null +++ b/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/write/VesWriterFactory.java @@ -0,0 +1,54 @@ @@ -907,7 +915,7 @@ index 00000000..581f0460 +} diff --git a/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/write/VesagentCustomizer.java b/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/write/VesagentCustomizer.java new file mode 100644 -index 00000000..62e46cdb +index 0000000..62e46cd --- /dev/null +++ b/ves/ves-impl/src/main/java/io/fd/hc2vpp/ves/write/VesagentCustomizer.java @@ -0,0 +1,131 @@ @@ -1033,7 +1041,7 @@ index 00000000..62e46cdb + throws WriteFailedException { + final VesAgentConfig request = new VesAgentConfig(); + -+ request.serverPort = config.getServerPort().byteValue(); ++ request.serverPort = config.getServerPort().intValue(); + request.readInterval = config.getReadInterval().byteValue(); + request.isAdd = config.getIsAdd().byteValue(); + request.serverAddr = ipv4AddressNoZoneToArray(config.getServerAddr().getValue()); @@ -1043,7 +1051,7 @@ index 00000000..62e46cdb + } +} diff --git a/vpp-integration/minimal-distribution/pom.xml b/vpp-integration/minimal-distribution/pom.xml -index e126114a..ca0e5b24 100644 +index e126114..ca0e5b2 100644 --- a/vpp-integration/minimal-distribution/pom.xml +++ b/vpp-integration/minimal-distribution/pom.xml @@ -40,6 +40,7 @@ @@ -1074,6 +1082,3 @@ index e126114a..ca0e5b24 100644 <groupId>io.fd.hc2vpp.management</groupId> <artifactId>vpp-management-impl</artifactId> <version>${vpp-management-impl.version}</version> --- -2.12.2.windows.2 - diff --git 
a/vnfs/vCPE/vpp-ves-agent-for-vgmux/src/patches/Vpp-Add-VES-agent-for-vG-MUX.patch b/vnfs/vCPE/vpp-ves-agent-for-vgmux/src/patches/Vpp-Add-VES-agent-for-vG-MUX.patch index 7aed63f1..9d6233c4 100644 --- a/vnfs/vCPE/vpp-ves-agent-for-vgmux/src/patches/Vpp-Add-VES-agent-for-vG-MUX.patch +++ b/vnfs/vCPE/vpp-ves-agent-for-vgmux/src/patches/Vpp-Add-VES-agent-for-vG-MUX.patch @@ -4,13 +4,14 @@ Date: Fri, 22 Sep 2017 08:58:40 +0800 Subject: [PATCH] Add VES Agent to report statistics Change Log: +v3: Add option to configure source name for VES event v2: Use VES 5.x as agent library v1: Add VES agent to report statistics Signed-off-by: Johnson Li <johnson.li@intel.com> diff --git a/src/configure.ac b/src/configure.ac -index fb2ead6d..ea641525 100644 +index fb2ead6..ea64152 100644 --- a/src/configure.ac +++ b/src/configure.ac @@ -154,6 +154,7 @@ PLUGIN_ENABLED(lb) @@ -22,7 +23,7 @@ index fb2ead6d..ea641525 100644 ############################################################################### # Dependency checks diff --git a/src/plugins/Makefile.am b/src/plugins/Makefile.am -index 623892e7..84513755 100644 +index 623892e..8451375 100644 --- a/src/plugins/Makefile.am +++ b/src/plugins/Makefile.am @@ -69,6 +69,10 @@ if ENABLE_SNAT_PLUGIN @@ -38,7 +39,7 @@ index 623892e7..84513755 100644 # Remove *.la files diff --git a/src/plugins/ves.am b/src/plugins/ves.am new file mode 100644 -index 00000000..10f2194b +index 0000000..10f2194 --- /dev/null +++ b/src/plugins/ves.am @@ -0,0 +1,35 @@ @@ -79,7 +80,7 @@ index 00000000..10f2194b +# vi:syntax=automake diff --git a/src/plugins/ves/include/double_list.h b/src/plugins/ves/include/double_list.h new file mode 100644 -index 00000000..5cf7e1af +index 0000000..5cf7e1a --- /dev/null +++ b/src/plugins/ves/include/double_list.h @@ -0,0 +1,57 @@ @@ -142,7 +143,7 @@ index 00000000..5cf7e1af +#endif diff --git a/src/plugins/ves/include/evel.h b/src/plugins/ves/include/evel.h new file mode 100644 -index 00000000..6aceec30 +index 0000000..d696085 --- /dev/null +++ b/src/plugins/ves/include/evel.h @@ -0,0 +1,4494 @@ @@ -1869,7 +1870,7 @@ index 00000000..6aceec30 + * ::evel_free_event. + * @retval NULL Failed to create the event. + *****************************************************************************/ -+EVENT_MEASUREMENT * evel_new_measurement(double measurement_interval,const char* ev_name, const char *ev_id); ++EVENT_MEASUREMENT * evel_new_measurement(double measurement_interval,const char* ev_name, const char *ev_id, const char *ev_source_name); + +/**************************************************************************//** + * Free a Measurement. 
@@ -4642,7 +4643,7 @@ index 00000000..6aceec30 + diff --git a/src/plugins/ves/include/evel_internal.h b/src/plugins/ves/include/evel_internal.h new file mode 100644 -index 00000000..46f71af1 +index 0000000..46f71af --- /dev/null +++ b/src/plugins/ves/include/evel_internal.h @@ -0,0 +1,858 @@ @@ -5506,7 +5507,7 @@ index 00000000..46f71af1 +#endif diff --git a/src/plugins/ves/include/evel_throttle.h b/src/plugins/ves/include/evel_throttle.h new file mode 100644 -index 00000000..c97b3c37 +index 0000000..c97b3c3 --- /dev/null +++ b/src/plugins/ves/include/evel_throttle.h @@ -0,0 +1,214 @@ @@ -5726,7 +5727,7 @@ index 00000000..c97b3c37 +#endif diff --git a/src/plugins/ves/include/hashtable.h b/src/plugins/ves/include/hashtable.h new file mode 100644 -index 00000000..8be17dc1 +index 0000000..8be17dc --- /dev/null +++ b/src/plugins/ves/include/hashtable.h @@ -0,0 +1,97 @@ @@ -5829,7 +5830,7 @@ index 00000000..8be17dc1 +#endif diff --git a/src/plugins/ves/include/jsmn.h b/src/plugins/ves/include/jsmn.h new file mode 100644 -index 00000000..4ae6d9b4 +index 0000000..4ae6d9b --- /dev/null +++ b/src/plugins/ves/include/jsmn.h @@ -0,0 +1,93 @@ @@ -5928,7 +5929,7 @@ index 00000000..4ae6d9b4 +#endif /* __JSMN_H_ */ diff --git a/src/plugins/ves/include/metadata.h b/src/plugins/ves/include/metadata.h new file mode 100644 -index 00000000..1ee44092 +index 0000000..1ee4409 --- /dev/null +++ b/src/plugins/ves/include/metadata.h @@ -0,0 +1,58 @@ @@ -5992,7 +5993,7 @@ index 00000000..1ee44092 +#endif diff --git a/src/plugins/ves/include/ring_buffer.h b/src/plugins/ves/include/ring_buffer.h new file mode 100644 -index 00000000..1236b78b +index 0000000..1236b78 --- /dev/null +++ b/src/plugins/ves/include/ring_buffer.h @@ -0,0 +1,96 @@ @@ -6094,10 +6095,10 @@ index 00000000..1236b78b +#endif diff --git a/src/plugins/ves/ves.api b/src/plugins/ves/ves.api new file mode 100644 -index 00000000..a7106f8d +index 0000000..bae2620 --- /dev/null +++ b/src/plugins/ves/ves.api -@@ -0,0 +1,72 @@ +@@ -0,0 +1,74 @@ +/* + * Copyright (c) 2017 Intel and/or its affiliates. 
+ * Licensed under the Apache License, Version 2.0 (the "License"); @@ -6146,6 +6147,7 @@ index 00000000..a7106f8d + @param context - sender context, to match reply w/ request + @param pkt_loss_rate - Base packet loss rate if Demo Mode + @param work_mode[] - Agent's work mode, real or demo ++ @param source_name[] - Agent's source name +*/ +define ves_agent_mode +{ @@ -6153,6 +6155,7 @@ index 00000000..a7106f8d + u32 context; + u32 pkt_loss_rate; + u8 work_mode[8]; ++ u8 source_name[129]; +}; + +/** \brief VES Agent Mode response @@ -6172,7 +6175,7 @@ index 00000000..a7106f8d + */ diff --git a/src/plugins/ves/ves_all_api_h.h b/src/plugins/ves/ves_all_api_h.h new file mode 100644 -index 00000000..72b15697 +index 0000000..72b1569 --- /dev/null +++ b/src/plugins/ves/ves_all_api_h.h @@ -0,0 +1,18 @@ @@ -6196,7 +6199,7 @@ index 00000000..72b15697 +#include <ves/ves.api.h> diff --git a/src/plugins/ves/ves_api.c b/src/plugins/ves/ves_api.c new file mode 100644 -index 00000000..7a9b8004 +index 0000000..06f0a96 --- /dev/null +++ b/src/plugins/ves/ves_api.c @@ -0,0 +1,139 @@ @@ -6283,7 +6286,7 @@ index 00000000..7a9b8004 + || !strcmp((char *)mp->work_mode, "DEMO")) + mode = VES_AGENT_MODE_DEMO; + -+ rv = ves_agent_set_mode(mode, (u32) ntohl(mp->pkt_loss_rate)); ++ rv = ves_agent_set_mode(mode, (u32) ntohl(mp->pkt_loss_rate), (char *) mp->source_name); + + REPLY_MACRO (VL_API_VES_AGENT_MODE_REPLY); +} @@ -6341,7 +6344,7 @@ index 00000000..7a9b8004 + */ diff --git a/src/plugins/ves/ves_msg_enum.h b/src/plugins/ves/ves_msg_enum.h new file mode 100644 -index 00000000..6e8a5dfa +index 0000000..6e8a5df --- /dev/null +++ b/src/plugins/ves/ves_msg_enum.h @@ -0,0 +1,31 @@ @@ -6378,10 +6381,10 @@ index 00000000..6e8a5dfa +#endif /* _VES_MSG_ENUM_H_ */ diff --git a/src/plugins/ves/ves_node.c b/src/plugins/ves/ves_node.c new file mode 100644 -index 00000000..7540dd16 +index 0000000..49d7e87 --- /dev/null +++ b/src/plugins/ves/ves_node.c -@@ -0,0 +1,646 @@ +@@ -0,0 +1,656 @@ +/* + * Copyright (c) 2017 Intel and/or its affiliates. 
+ * Licensed under the Apache License, Version 2.0 (the "License"); @@ -6565,7 +6568,7 @@ index 00000000..7540dd16 + packets_out_this_round = 0; + } + -+ vpp_m = evel_new_measurement(vam->config.read_interval, "Measurement_vGMUX", "Generic_traffic"); ++ vpp_m = evel_new_measurement(vam->config.read_interval, "Measurement_vGMUX", "Generic_traffic", (char *) vam->config.source_name); + if(vpp_m != NULL) { + char str_pkt_loss[12]; + MEASUREMENT_VNIC_PERFORMANCE * vnic_performance = NULL; @@ -6912,11 +6915,17 @@ index 00000000..7540dd16 + +int +ves_agent_set_mode(ves_agent_mode_t mode, -+ u32 pkt_loss_rate) ++ u32 pkt_loss_rate, char *source_name) +{ + ves_agent_main_t *vam = &ves_agent_main; + int retval = 0; + ++ if (source_name != NULL) { ++ strncpy((char *) vam->config.source_name, source_name, MAX_SRC_NAME_LEN); ++ vam->config.source_name[MAX_SRC_NAME_LEN] = '\0'; ++ } else { ++ vam->config.source_name[0] = '\0'; ++ } + if (VES_AGENT_MODE_DEMO == mode) { + if (pkt_loss_rate > 100) { + vam->config.mode = VES_AGENT_MODE_REAL; @@ -6941,6 +6950,7 @@ index 00000000..7540dd16 + u32 pkt_loss_rate = 0; + ves_agent_mode_t mode = VES_AGENT_MODE_REAL; + int set_mode = 0; ++ u8 *source_name = NULL; + + while (unformat_check_input(input) != UNFORMAT_END_OF_INPUT) + { @@ -6955,13 +6965,15 @@ index 00000000..7540dd16 + set_mode = 1; + else if (unformat (input, "base %u", &pkt_loss_rate)) + ; ++ else if (unformat (input, "source %s", &source_name)) ++ ; + else + break; + } + + if (set_mode) + { -+ int retval = ves_agent_set_mode(mode, pkt_loss_rate); ++ int retval = ves_agent_set_mode(mode, pkt_loss_rate, (char *)source_name); + if (retval == 0) + return 0; + else @@ -6974,7 +6986,7 @@ index 00000000..7540dd16 + +VLIB_CLI_COMMAND (ves_mode_set_command, static) = { + .path = "set ves mode", -+ .short_help = "set ves mode <demo|real> [base <pkt-loss-rate>]", ++ .short_help = "set ves mode <demo|real> [base <pkt-loss-rate>] [source <name>]", + .function = ves_mode_set_command_fn, +}; + @@ -6983,11 +6995,12 @@ index 00000000..7540dd16 +{ + ves_agent_main_t *vam = &ves_agent_main; + -+ s = format(s, "%=8s %s\n", "Mode", "Base Packet Loss Rate"); ++ s = format(s, "%=8s %s %s\n", "Mode", "Base Packet Loss Rate", "Source Name"); + -+ s = format(s, "%=8s %.1f %%\n", ++ s = format(s, "%=8s %20.1f%% %s\n", + vam->config.mode == VES_AGENT_MODE_DEMO ? "Demo" : "Real", -+ (double) vam->config.base_pkt_loss); ++ (double) vam->config.base_pkt_loss, ++ (strlen((char *)vam->config.source_name) > 0) ? (char *)vam->config.source_name : "[default]"); + + return s; +} @@ -7030,10 +7043,10 @@ index 00000000..7540dd16 + */ diff --git a/src/plugins/ves/ves_node.h b/src/plugins/ves/ves_node.h new file mode 100644 -index 00000000..7b773843 +index 0000000..9a57f34 --- /dev/null +++ b/src/plugins/ves/ves_node.h -@@ -0,0 +1,66 @@ +@@ -0,0 +1,68 @@ +/* + * Copyright (c) 2017 Intel and/or its affiliates. 
+ * Licensed under the Apache License, Version 2.0 (the "License"); @@ -7060,6 +7073,7 @@ index 00000000..7b773843 +#define DEFAULT_MEASURE_ETH "eth0" +#define DEFAULT_SERVER_PORT 8080 +#define DEFAULT_READ_INTERVAL 100 ++#define MAX_SRC_NAME_LEN 128 + +typedef enum { + VES_AGENT_MODE_REAL = 0, @@ -7075,6 +7089,7 @@ index 00000000..7b773843 + int is_enabled; + u32 base_pkt_loss; /* For demo only */ + ves_agent_mode_t mode; /* Demo or Real */ ++ u8 source_name[MAX_SRC_NAME_LEN+1]; +} ves_agent_config_t; + +typedef struct { @@ -7097,18 +7112,17 @@ index 00000000..7b773843 + u32 read_interval, int is_del); + +int ves_agent_set_mode(ves_agent_mode_t mode, -+ u32 pkt_loss_rate); ++ u32 pkt_loss_rate, char *source_name); + +#endif /* _VES_NODE_H_ */ diff --git a/src/vpp-api/java/Makefile.am b/src/vpp-api/java/Makefile.am -index f18e0c24..7f4738d8 100644 +index f18e0c2..7f4738d 100644 --- a/src/vpp-api/java/Makefile.am +++ b/src/vpp-api/java/Makefile.am -@@ -148,6 +148,26 @@ jvpp-snat/io_fd_vpp_jvpp_snat_JVppSnatImpl.h: $(jvpp_registry_ok) $(jvpp_snat_js - $(call japigen,snat,JVppSnatImpl) +@@ -149,6 +149,26 @@ jvpp-snat/io_fd_vpp_jvpp_snat_JVppSnatImpl.h: $(jvpp_registry_ok) $(jvpp_snat_js endif -+# + # +# VES Plugin +# +if ENABLE_VES_PLUGIN @@ -7128,12 +7142,13 @@ index f18e0c24..7f4738d8 100644 + $(call japigen,ves,JVppVesImpl) +endif + - # ++# # iOAM Trace Plugin # + if ENABLE_IOAM_PLUGIN diff --git a/src/vpp-api/java/jvpp-ves/jvpp_ves.c b/src/vpp-api/java/jvpp-ves/jvpp_ves.c new file mode 100644 -index 00000000..60e325b5 +index 0000000..60e325b --- /dev/null +++ b/src/vpp-api/java/jvpp-ves/jvpp_ves.c @@ -0,0 +1,108 @@ @@ -7247,7 +7262,7 @@ index 00000000..60e325b5 +} diff --git a/src/vpp-api/java/jvpp-ves/jvpp_ves.h b/src/vpp-api/java/jvpp-ves/jvpp_ves.h new file mode 100644 -index 00000000..642101ca +index 0000000..642101c --- /dev/null +++ b/src/vpp-api/java/jvpp-ves/jvpp_ves.h @@ -0,0 +1,43 @@ @@ -7294,6 +7309,3 @@ index 00000000..642101ca + + +#endif /* __included_jvpp_ves_h__ */ --- -2.14.1.windows.1 - diff --git a/vnfs/vCPE/vpp-ves-agent-for-vgmux/src/patches/vCPE-vG-MUX-libevel-fixup.patch b/vnfs/vCPE/vpp-ves-agent-for-vgmux/src/patches/vCPE-vG-MUX-libevel-fixup.patch index 639a7c6e..00b1b446 100644 --- a/vnfs/vCPE/vpp-ves-agent-for-vgmux/src/patches/vCPE-vG-MUX-libevel-fixup.patch +++ b/vnfs/vCPE/vpp-ves-agent-for-vgmux/src/patches/vCPE-vG-MUX-libevel-fixup.patch @@ -1,8 +1,28 @@ +diff --git a/vnfs/VES5.0/evel/evel-library/code/evel_library/evel.h b/vnfs/VES5.0/evel/evel-library/code/evel_library/evel.h +index 0ae1713..be3ae6c 100644 +--- a/vnfs/VES5.0/evel/evel-library/code/evel_library/evel.h ++++ b/vnfs/VES5.0/evel/evel-library/code/evel_library/evel.h +@@ -1715,13 +1715,14 @@ void evel_fault_type_set(EVENT_FAULT * fault, const char * const type); + * @param measurement_interval + * @param event_name Unique Event Name + * @param event_id A universal identifier of the event for analysis etc ++ * @param event_source_name Optional override of the source name + * + * @returns pointer to the newly manufactured ::EVENT_MEASUREMENT. If the + * event is not used (i.e. posted) it must be released using + * ::evel_free_event. + * @retval NULL Failed to create the event. 
+ *****************************************************************************/ +-EVENT_MEASUREMENT * evel_new_measurement(double measurement_interval,const char* ev_name, const char *ev_id); ++EVENT_MEASUREMENT * evel_new_measurement(double measurement_interval,const char* ev_name, const char *ev_id, const char *ev_source_name); + + /**************************************************************************//** + * Free a Measurement. diff --git a/vnfs/VES5.0/evel/evel-library/code/evel_library/evel_event.c b/vnfs/VES5.0/evel/evel-library/code/evel_library/evel_event.c -index ced29b2..892e4b6 100644 +index 4de49bc..de6b362 100644 --- a/vnfs/VES5.0/evel/evel-library/code/evel_library/evel_event.c +++ b/vnfs/VES5.0/evel/evel-library/code/evel_library/evel_event.c -@@ -166,7 +166,8 @@ void evel_init_header(EVENT_HEADER * const header,const char *const eventname) +@@ -167,7 +167,8 @@ void evel_init_header(EVENT_HEADER * const header,const char *const eventname) header->last_epoch_microsec = tv.tv_usec + 1000000 * tv.tv_sec; header->priority = EVEL_PRIORITY_NORMAL; header->reporting_entity_name = strdup(openstack_vm_name()); @@ -12,7 +32,7 @@ index ced29b2..892e4b6 100644 header->sequence = event_sequence; header->start_epoch_microsec = header->last_epoch_microsec; header->major_version = EVEL_HEADER_MAJOR_VERSION; -@@ -180,7 +181,8 @@ void evel_init_header(EVENT_HEADER * const header,const char *const eventname) +@@ -181,7 +182,8 @@ void evel_init_header(EVENT_HEADER * const header,const char *const eventname) evel_init_option_string(&header->nfcnaming_code); evel_init_option_string(&header->nfnaming_code); evel_force_option_string(&header->reporting_entity_id, openstack_vm_uuid()); @@ -22,7 +42,7 @@ index ced29b2..892e4b6 100644 evel_init_option_intheader(&header->internal_field); EVEL_EXIT(); -@@ -215,7 +217,8 @@ void evel_init_header_nameid(EVENT_HEADER * const header,const char *const event +@@ -216,7 +218,8 @@ void evel_init_header_nameid(EVENT_HEADER * const header,const char *const event header->last_epoch_microsec = tv.tv_usec + 1000000 * tv.tv_sec; header->priority = EVEL_PRIORITY_NORMAL; header->reporting_entity_name = strdup(openstack_vm_name()); @@ -32,21 +52,105 @@ index ced29b2..892e4b6 100644 header->sequence = event_sequence; header->start_epoch_microsec = header->last_epoch_microsec; header->major_version = EVEL_HEADER_MAJOR_VERSION; -@@ -229,7 +232,8 @@ void evel_init_header_nameid(EVENT_HEADER * const header,const char *const event +@@ -230,7 +233,63 @@ void evel_init_header_nameid(EVENT_HEADER * const header,const char *const event evel_init_option_string(&header->nfcnaming_code); evel_init_option_string(&header->nfnaming_code); evel_force_option_string(&header->reporting_entity_id, openstack_vm_uuid()); - evel_force_option_string(&header->source_id, openstack_vm_uuid()); + /* evel_force_option_string(&header->source_id, openstack_vm_uuid()); */ + evel_force_option_string(&header->source_id, openstack_vnf_id()); /* vCPE quick hack */ ++ evel_init_option_intheader(&header->internal_field); ++ ++ EVEL_EXIT(); ++} ++ ++/**************************************************************************//** ++ * Initialize a newly created event header. ++ * ++ * @param header Pointer to the header being initialized. 
++ *****************************************************************************/ ++void evel_init_header_source_name(EVENT_HEADER * const header,const char *const eventname, const char *eventid, const char *eventsrcname) ++{ ++ struct timeval tv; ++ ++ EVEL_ENTER(); ++ ++ assert(header != NULL); ++ assert(eventname != NULL); ++ assert(eventid != NULL); ++ ++ gettimeofday(&tv, NULL); ++ ++ /***************************************************************************/ ++ /* Initialize the header. Get a new event sequence number. Note that if */ ++ /* any memory allocation fails in here we will fail gracefully because */ ++ /* everything downstream can cope with NULLs. */ ++ /***************************************************************************/ ++ header->event_domain = EVEL_DOMAIN_HEARTBEAT; ++ header->event_id = strdup(eventid); ++ header->event_name = strdup(eventname); ++ header->last_epoch_microsec = tv.tv_usec + 1000000 * tv.tv_sec; ++ header->priority = EVEL_PRIORITY_NORMAL; ++ header->reporting_entity_name = strdup(openstack_vm_name()); ++ /* header->source_name = strdup(openstack_vm_name()); */ ++ /* vCPE quck hack */ ++ if (strlen(eventsrcname)) { ++ header->source_name = strdup(eventsrcname); ++ } else { ++ header->source_name = strdup(openstack_vnf_id()); ++ } ++ header->sequence = event_sequence; ++ header->start_epoch_microsec = header->last_epoch_microsec; ++ header->major_version = EVEL_HEADER_MAJOR_VERSION; ++ header->minor_version = EVEL_HEADER_MINOR_VERSION; ++ event_sequence++; ++ ++ /***************************************************************************/ ++ /* Optional parameters. */ ++ /***************************************************************************/ ++ evel_init_option_string(&header->event_type); ++ evel_init_option_string(&header->nfcnaming_code); ++ evel_init_option_string(&header->nfnaming_code); ++ evel_force_option_string(&header->reporting_entity_id, openstack_vm_uuid()); ++ /* evel_force_option_string(&header->source_id, openstack_vm_uuid()); */ ++ evel_force_option_string(&header->source_id, openstack_vnf_id()); /* vCPE quick hack */ evel_init_option_intheader(&header->internal_field); EVEL_EXIT(); +diff --git a/vnfs/VES5.0/evel/evel-library/code/evel_library/evel_scaling_measurement.c b/vnfs/VES5.0/evel/evel-library/code/evel_library/evel_scaling_measurement.c +index b73eb97..2446e02 100644 +--- a/vnfs/VES5.0/evel/evel-library/code/evel_library/evel_scaling_measurement.c ++++ b/vnfs/VES5.0/evel/evel-library/code/evel_library/evel_scaling_measurement.c +@@ -40,13 +40,14 @@ + * @param measurement_interval + * @param event_name Unique Event Name confirming Domain AsdcModel Description + * @param event_id A universal identifier of the event for: troubleshooting correlation, analysis, etc ++ * @param event_source_name Optional override of the source name + * + * @returns pointer to the newly manufactured ::EVENT_MEASUREMENT. If the + * event is not used (i.e. posted) it must be released using + * ::evel_free_event. + * @retval NULL Failed to create the event. 
+ *****************************************************************************/ +-EVENT_MEASUREMENT * evel_new_measurement(double measurement_interval, const char* ev_name, const char *ev_id) ++EVENT_MEASUREMENT * evel_new_measurement(double measurement_interval, const char* ev_name, const char *ev_id, const char *ev_source_name) + { + EVENT_MEASUREMENT * measurement = NULL; + +@@ -72,7 +73,7 @@ EVENT_MEASUREMENT * evel_new_measurement(double measurement_interval, const char + /***************************************************************************/ + /* Initialize the header & the measurement fields. */ + /***************************************************************************/ +- evel_init_header_nameid(&measurement->header,ev_name,ev_id); ++ evel_init_header_source_name(&measurement->header,ev_name,ev_id,ev_source_name); + measurement->header.event_domain = EVEL_DOMAIN_MEASUREMENT; + measurement->measurement_interval = measurement_interval; + dlist_initialize(&measurement->additional_info); diff --git a/vnfs/VES5.0/evel/evel-library/code/evel_library/metadata.c b/vnfs/VES5.0/evel/evel-library/code/evel_library/metadata.c -index 11fef1b..d82f282 100644 +index 62ea6b5..6c322db 100644 --- a/vnfs/VES5.0/evel/evel-library/code/evel_library/metadata.c +++ b/vnfs/VES5.0/evel/evel-library/code/evel_library/metadata.c -@@ -59,6 +59,11 @@ static char vm_uuid[MAX_METADATA_STRING+1] = {0}; +@@ -60,6 +60,11 @@ static char vm_uuid[MAX_METADATA_STRING+1] = {0}; static char vm_name[MAX_METADATA_STRING+1] = {0}; /**************************************************************************//** @@ -58,7 +162,7 @@ index 11fef1b..d82f282 100644 * How many metadata elements we allow for in the retrieved JSON. *****************************************************************************/ static const int MAX_METADATA_TOKENS = 128; -@@ -289,6 +294,19 @@ EVEL_ERR_CODES openstack_metadata(int verbosity) +@@ -290,6 +295,19 @@ EVEL_ERR_CODES openstack_metadata(int verbosity) { EVEL_DEBUG("VM Name: %s", vm_name); } @@ -78,7 +182,7 @@ index 11fef1b..d82f282 100644 } exit_label: -@@ -318,6 +336,9 @@ void openstack_metadata_initialize() +@@ -319,6 +337,9 @@ void openstack_metadata_initialize() strncpy(vm_name, "Dummy VM name - No Metadata available", MAX_METADATA_STRING); @@ -88,7 +192,7 @@ index 11fef1b..d82f282 100644 } /**************************************************************************//** -@@ -590,3 +611,13 @@ const char *openstack_vm_uuid() +@@ -591,3 +612,13 @@ const char *openstack_vm_uuid() { return vm_uuid; } diff --git a/vnfs/vFW/scripts/update-vfw-op-policy.sh b/vnfs/vFW/scripts/update-vfw-op-policy.sh new file mode 100755 index 00000000..839250dc --- /dev/null +++ b/vnfs/vFW/scripts/update-vfw-op-policy.sh @@ -0,0 +1,74 @@ +#!/bin/bash + +if [ "$#" -ne 3 ]; then + echo "Usage: $(basename $0) <policy-vm-host> <resource-id> <path-to-Policy-VM-private-key>" + exit 1 +fi + +POLICY_HOST=$1 +RESOURCE_ID=$2 +PATH_TO_PRIVATE_KEY=$3 + +echo +echo "Updating vFW Operational Policy .." 
+echo + +curl -v -X PUT --header 'Content-Type: application/json' --header 'Accept: text/plain' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{ + "policyConfigType": "BRMS_PARAM", + "policyName": "com.BRMSParamvFirewall", + "policyDescription": "BRMS Param vFirewall policy", + "policyScope": "com", + "attributes": { + "MATCHING": { + "controller" : "amsterdam" + }, + "RULE": { + "templateName": "ClosedLoopControlName", + "closedLoopControlName": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a", + "controlLoopYaml": "controlLoop%3A%0D%0A++version%3A+2.0.0%0D%0A++controlLoopName%3A+ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a%0D%0A++trigger_policy%3A+unique-policy-id-1-modifyConfig%0D%0A++timeout%3A+1200%0D%0A++abatement%3A+false%0D%0A+%0D%0Apolicies%3A%0D%0A++-+id%3A+unique-policy-id-1-modifyConfig%0D%0A++++name%3A+modify+packet+gen+config%0D%0A++++description%3A%0D%0A++++actor%3A+APPC%0D%0A++++recipe%3A+ModifyConfig%0D%0A++++target%3A%0D%0A++++++%23+TBD+-+Cannot+be+known+until+instantiation+is+done%0D%0A++++++resourceID%3A+'${RESOURCE_ID}'%0D%0A++++++type%3A+VNF%0D%0A++++retry%3A+0%0D%0A++++timeout%3A+300%0D%0A++++success%3A+final_success%0D%0A++++failure%3A+final_failure%0D%0A++++failure_timeout%3A+final_failure_timeout%0D%0A++++failure_retries%3A+final_failure_retries%0D%0A++++failure_exception%3A+final_failure_exception%0D%0A++++failure_guard%3A+final_failure_guard" + } + } +}' http://${POLICY_HOST}:8081/pdp/api/updatePolicy + +sleep 5 + +echo +echo +echo "Pushing the vFW Policy .." +echo +echo + +curl -v --silent -X PUT --header 'Content-Type: application/json' --header 'Accept: text/plain' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{ + "pdpGroup": "default", + "policyName": "com.BRMSParamvFirewall", + "policyType": "BRMS_Param" +}' http://${POLICY_HOST}:8081/pdp/api/pushPolicy + +sleep 20 + +echo +echo +echo "Restarting PDP-D .." +echo +echo + +ssh -i $PATH_TO_PRIVATE_KEY root@${POLICY_HOST} "docker exec -t -u policy drools bash -c \"source /opt/app/policy/etc/profile.d/env.sh; policy stop; sleep 5; policy start\"" + +sleep 20 + +echo +echo +echo "PDP-D amsterdam maven coordinates .." +echo +echo + +curl -vvv --silent --user @1b3rt:31nst31n -X GET http://${POLICY_HOST}:9696/policy/pdp/engine/controllers/amsterdam/drools | python -m json.tool + + +echo +echo +echo "PDP-D control loop updated .." 
+echo +echo + +curl -v --silent --user @1b3rt:31nst31n -X GET http://${POLICY_HOST}:9696/policy/pdp/engine/controllers/amsterdam/drools/facts/closedloop-amsterdam/org.onap.policy.controlloop.Params | python -m json.tool diff --git a/vnfs/vLB/scripts/run_streams_dns.sh b/vnfs/vLB/scripts/run_streams_dns.sh index cf95fa53..b73c9ee7 100755 --- a/vnfs/vLB/scripts/run_streams_dns.sh +++ b/vnfs/vLB/scripts/run_streams_dns.sh @@ -1,9 +1,51 @@ #!/bin/bash +#Disable all the running streams vppctl packet-gen disable + +#Initial configuration: run only two streams vppctl packet-gen enable-stream dns1 vppctl packet-gen enable-stream dns2 -sleep 100 -vppctl packet-gen enable-stream dns3 -vppctl packet-gen enable-stream dns4 -vppctl packet-gen enable-stream dns5 + +sleep 180 + +#Rehash port numbers and re-run five streams every minute +while true; do + vppctl packet-gen disable + vppctl pac del dns1 + vppctl pac del dns2 + vppctl pac del dns3 + vppctl pac del dns4 + vppctl pac del dns5 + + #Update destination (vLB) IP + VLB_IPADDR=$(cat /opt/config/vlb_ipaddr.txt) + IPADDR1=$(cat /opt/config/local_private_ipaddr.txt) + sed -i -e "0,/UDP/ s/UDP:.*/UDP: "$IPADDR1" -> "$VLB_IPADDR"/" /opt/dns_streams/stream_dns1 + sed -i -e "0,/UDP/ s/UDP:.*/UDP: "$IPADDR1" -> "$VLB_IPADDR"/" /opt/dns_streams/stream_dns2 + sed -i -e "0,/UDP/ s/UDP:.*/UDP: "$IPADDR1" -> "$VLB_IPADDR"/" /opt/dns_streams/stream_dns3 + sed -i -e "0,/UDP/ s/UDP:.*/UDP: "$IPADDR1" -> "$VLB_IPADDR"/" /opt/dns_streams/stream_dns4 + sed -i -e "0,/UDP/ s/UDP:.*/UDP: "$IPADDR1" -> "$VLB_IPADDR"/" /opt/dns_streams/stream_dns5 + + #Update source ports (make them random) + sed -i -e "s/.*-> 53.*/ UDP: $RANDOM -> 53/" /opt/dns_streams/stream_dns1 + sed -i -e "s/.*-> 53.*/ UDP: $RANDOM -> 53/" /opt/dns_streams/stream_dns2 + sed -i -e "s/.*-> 53.*/ UDP: $RANDOM -> 53/" /opt/dns_streams/stream_dns3 + sed -i -e "s/.*-> 53.*/ UDP: $RANDOM -> 53/" /opt/dns_streams/stream_dns4 + sed -i -e "s/.*-> 53.*/ UDP: $RANDOM -> 53/" /opt/dns_streams/stream_dns5 + + vppctl exec /opt/dns_streams/stream_dns1 + vppctl exec /opt/dns_streams/stream_dns2 + vppctl exec /opt/dns_streams/stream_dns3 + vppctl exec /opt/dns_streams/stream_dns4 + vppctl exec /opt/dns_streams/stream_dns5 + + #Resume stream execution + vppctl packet-gen enable-stream dns1 + vppctl packet-gen enable-stream dns2 + vppctl packet-gen enable-stream dns3 + vppctl packet-gen enable-stream dns4 + vppctl packet-gen enable-stream dns5 + + sleep 60 +done
\ No newline at end of file diff --git a/vnfs/vLB/scripts/v_packetgen_install.sh b/vnfs/vLB/scripts/v_packetgen_install.sh index ca2957a7..6dce05ec 100644 --- a/vnfs/vLB/scripts/v_packetgen_install.sh +++ b/vnfs/vLB/scripts/v_packetgen_install.sh @@ -56,7 +56,6 @@ cd /opt wget $REPO_URL_BLOB/org.onap.demo/vnfs/vlb/$INSTALL_SCRIPT_VERSION/v_packetgen_init.sh wget $REPO_URL_BLOB/org.onap.demo/vnfs/vlb/$INSTALL_SCRIPT_VERSION/vpacketgen.sh wget $REPO_URL_BLOB/org.onap.demo/vnfs/vlb/$INSTALL_SCRIPT_VERSION/run_streams_dns.sh -wget $REPO_URL_BLOB/org.onap.demo/vnfs/vlb/$INSTALL_SCRIPT_VERSION/vdnspacketgen_change_streams_ports.sh wget $REPO_URL_ARTIFACTS/org/onap/demo/vnf/vlb/vlb_dns_streams/$DEMO_ARTIFACTS_VERSION/vlb_dns_streams-$DEMO_ARTIFACTS_VERSION-demo.tar.gz tar -zmxvf vlb_dns_streams-$DEMO_ARTIFACTS_VERSION-demo.tar.gz @@ -65,7 +64,6 @@ rm *.tar.gz chmod +x v_packetgen_init.sh chmod +x vpacketgen.sh chmod +x run_streams_dns.sh -chmod +x vdnspacketgen_change_streams_ports.sh # Install VPP export UBUNTU="xenial" @@ -93,7 +91,4 @@ then reboot fi -# Install a cron job that restart streams every minute. This allows to map streams to different vDNSs when we scale out the VNF -echo "* * * * * /opt/vdnspacketgen_change_streams_ports.sh" | crontab - ./v_packetgen_init.sh |
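The patches above add a configurable source name to the vG-MUX VES agent, exposed through the Honeycomb YANG model (leaf "source-name") and through the VPP CLI ("set ves mode <demo|real> [base <pkt-loss-rate>] [source <name>]"), and they add a helper script for updating the vFW operational policy. The lines below are a minimal usage sketch only, not part of the patch set; the source name "vGMUX-node-1", the 20% base loss rate, and the placeholder arguments are illustrative assumptions.

# Sketch: put the VES agent into demo mode with a 20% base packet loss
# and an explicit VES event sourceName (illustrative values only).
vppctl set ves mode demo base 20 source vGMUX-node-1

# Sketch: return to real measurements; with no "source" argument the agent
# falls back to the default sourceName taken from the VNF metadata.
vppctl set ves mode real

# Sketch: typical invocation of the vFW operational-policy helper added
# above (all three arguments are placeholders, as printed by its usage text).
./update-vfw-op-policy.sh <policy-vm-host> <resource-id> <path-to-Policy-VM-private-key>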