ONAP OOM HEAT Template
----------------------

Source files
~~~~~~~~~~~~

- HEAT template files: https://git.onap.org/integration/tree/deployment/heat/onap-oom?h=casablanca
- Sample OpenStack RC file: https://git.onap.org/integration/tree/deployment/heat/onap-oom/env/windriver/Integration-SB-00-openrc?h=casablanca
- Sample environment file: https://git.onap.org/integration/tree/deployment/heat/onap-oom/env/windriver/onap-oom.env?h=casablanca
- Deployment script: https://git.onap.org/integration/tree/deployment/heat/onap-oom/scripts/deploy.sh?h=casablanca

Description
~~~~~~~~~~~

The ONAP Integration Project provides a sample HEAT template that
fully automates the deployment of ONAP using OOM as described in
:ref:`ONAP Operations Manager (OOM) over Kubernetes`.

The ONAP OOM HEAT template deploys the entire ONAP platform.  It
spins up an HA-enabled Kubernetes cluster and deploys ONAP onto this
cluster using OOM.  The template creates the following VMs:

- 1 Rancher VM that also serves as a shared NFS server
- 3 etcd VMs for the Kubernetes HA etcd plane
- 2 orch VMs for the Kubernetes HA orchestration plane
- 12 k8s VMs for the Kubernetes HA compute hosts


Quick Start
~~~~~~~~~~~

Using the Wind River lab configuration as an example, here is what
you need to do to deploy ONAP:

::

 git clone https://git.onap.org/integration
 cd integration/deployment/heat/onap-oom/
 source ./env/windriver/Integration-SB-00-openrc
 ./scripts/deploy.sh ./env/windriver/onap-oom.env


Environment and RC files
~~~~~~~~~~~~~~~~~~~~~~~~

Before deploying ONAP to your own environment, you must customize the
environment and RC files.  Make a copy of the sample RC and
environment files shown above and adjust the values to match your
specific OpenStack environment.
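
For example, you could copy the samples into a new directory and edit
the copies before deploying (the "mylab" directory and file names
below are only placeholders for your own naming):

::

 cd integration/deployment/heat/onap-oom/
 mkdir -p env/mylab
 cp env/windriver/Integration-SB-00-openrc env/mylab/mylab-openrc
 cp env/windriver/onap-oom.env env/mylab/onap-oom.env
 # Edit env/mylab/mylab-openrc and env/mylab/onap-oom.env, then:
 source ./env/mylab/mylab-openrc
 ./scripts/deploy.sh ./env/mylab/onap-oom.env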

The environment file contains a block called
integration_override_yaml.  The content of this block is written to
the file integration_override.yaml on the deployed Rancher VM and used
as the Helm override file during the OOM deployment.  Be sure to
customize the necessary values within this block to match your
OpenStack environment as well.
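
As an illustration, the block inside the environment file has roughly
the following shape.  The keys and values shown here are placeholders,
not working settings; the exact keys depend on your OpenStack
environment and the ONAP charts you deploy:

::

 parameters:
   # ... other HEAT parameters ...
   integration_override_yaml: >
     global:
       repository: <your-docker-proxy>/onap
     robot:
       openStackKeyStoneUrl: "<your keystone endpoint URL>"
       openStackPublicNetId: "<your public network UUID>"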

**Notes on select parameters**

::

 apt_proxy: 10.12.5.2:8000
 docker_proxy: 10.12.5.2:5000

 rancher_vm_flavor: m1.large
 k8s_vm_flavor: m1.xlarge
 etcd_vm_flavor: m1.medium
 orch_vm_flavor: m1.medium

 key_name: onap_key

 helm_deploy_delay: 2.5m

It is recommended that you set up an apt proxy and a docker proxy
local to your lab.  If you do not wish to use such proxies, you can
set the apt_proxy and docker_proxy parameters to the empty string "".
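
For example, to disable both proxies, set in your environment file:

::

 apt_proxy: ""
 docker_proxy: ""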

- rancher_vm_flavor needs to have 8 GB of RAM.
- k8s_vm_flavor needs to have 16 GB of RAM.
- etcd_vm_flavor needs to have 4 GB of RAM.
- orch_vm_flavor needs to have 4 GB of RAM.

By default the template assumes that you have already imported a
keypair named "onap_key" into your OpenStack environment.  If the
desired keypair has a different name, change the key_name parameter.
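
If you have not yet imported a keypair, you can create one from an
existing public key with the OpenStack CLI (the public key path below
is just an example):

::

 openstack keypair create --public-key ~/.ssh/id_rsa.pub onap_key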

The helm_deploy_delay parameter introduces a delay between the
deployment of each ONAP Helm subchart to help alleviate system load or
contention issues caused by spinning up too many pods simultaneously.
The value of this parameter is passed to the Linux "sleep" command.
Adjust this parameter based on the performance and load
characteristics of your OpenStack environment.
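
The effect is roughly the following loop.  This is a simplified sketch
of the idea only, not the actual deployment script; the subchart list
variable and the deploy step are illustrative:

::

 for subchart in $ONAP_SUBCHARTS; do  # list of ONAP subcharts (illustrative)
   deploy_subchart "$subchart"        # hypothetical helper standing in for the Helm deploy step
   sleep 2.5m                         # helm_deploy_delay; GNU sleep accepts fractional values and s/m/h suffixes
 done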