author      Tomáš Levora <t.levora@partner.samsung.com>    2019-10-10 14:04:08 +0200
committer   Tomáš Levora <t.levora@partner.samsung.com>    2019-10-24 10:27:38 +0200
commit      2a355bb76368fd6bc727e8736cb07f6eabb7d038
tree        ff16c70420da5e8dc05fc18dc00c82234f98e266
parent      19033d018dcc521008b15132a1666b95f292c6ac
Updating documentation for El Alto
Removing the necessity of merging docker data lists manually, as this
is already handled by the build_nexus_blob.sh script.

Updating all links and references to El Alto.
Issue-ID: OOM-2016
Change-Id: I1e343a8af1d26f7f6f80a8d76fa7997883b678e4
Signed-off-by: Tomáš Levora <t.levora@partner.samsung.com>
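
For context, the manual step this commit retires looked as follows in the Dublin-era Build Guide (the exact lines appear as removals in the diff below)::

    # previously required before building the nexus blob; now handled
    # internally by build_nexus_blob.sh itself
    cat ./build/data_lists/rke_docker_images.list >> ./build/data_lists/onap_docker_images.list
    ./build/build_nexus_blob.sh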
-rw-r--r--    docs/BuildGuide.rst   |  64
-rw-r--r--    docs/InstallGuide.rst |  34
2 files changed, 26 insertions(+), 72 deletions(-)
diff --git a/docs/BuildGuide.rst b/docs/BuildGuide.rst
index 8b0c9b28..a654a4ce 100644
--- a/docs/BuildGuide.rst
+++ b/docs/BuildGuide.rst
@@ -42,8 +42,6 @@ Alternatively

 ::

-    ToDo: newer download scripts needs to be verified on Centos with ONAP Dublin
-
     ##############
     # Centos 7.6 #
     ##############
@@ -64,9 +62,6 @@ Subsequent steps are the same on both platforms:
     # install Python 3 (download scripts don't support Python 2 anymore)
     yum install -y python36 python36-pip

-    # twine package is needed by nexus blob build script
-    pip install twine
-
     # docker daemon must be running on host
     service docker start
@@ -136,10 +131,8 @@ so one might try following command to download most of the required artifacts in

     ./build/download/download.py --docker ./build/data_lists/infra_docker_images.list ../resources/offline_data/docker_images_infra \
     --docker ./build/data_lists/rke_docker_images.list ../resources/offline_data/docker_images_for_nexus \
+    --docker ./build/data_lists/k8s_docker_images.list ../resources/offline_data/docker_images_for_nexus \
     --docker ./build/data_lists/onap_docker_images.list ../resources/offline_data/docker_images_for_nexus \
-    --git ./build/data_lists/onap_git_repos.list ../resources/git-repo \
-    --npm ./build/data_lists/onap_npm.list ../resources/offline_data/npm_tar \
-    --pypi ./build/data_lists/onap_pip_packages.list ../resources/offline_data/pypi \
     --http ./build/data_lists/infra_bin_utils.list ../resources/downloads
@@ -160,34 +153,17 @@ Prerequisites:

 Whole nexus blob data will be created by running script build_nexus_blob.sh.
 It will load the listed docker images, run the Nexus, configure it as npm, pypi
-and docker repositories. Then it will push all listed npm and pypi packages and
-docker images to the repositories. After all is done the repository container
-is stopped.
-
-.. note:: build_nexus_blob.sh script is using docker, npm and pip data lists for building nexus blob. Unfortunatelly we now have 2 different docker data lists (RKE & ONAP). So we need to merge them as visible from following snippet. This problem will be fixed in OOM-1890
-
-You can run the script as following example:
-
-::
+and docker repositories. Then it will push all listed docker images to the repositories. After all is done the repository container is stopped.

-    # merge RKE and ONAP app data lists
-    cat ./build/data_lists/rke_docker_images.list >> ./build/data_lists/onap_docker_images.list
+.. note:: In the current release scope we aim to maintain just single example data lists set, tags used in previous releases are not needed. Datalists are also covering latest versions verified by us despite user is allowed to build data lists on his own.

-    ./build/build_nexus_blob.sh
-
-.. note:: in current release scope we aim to maintain just single example data lists set, tags used in previous releases are not needed. Datalists are also covering latest versions verified by us despite user is allowed to build data lists on his own.
-
-Once the Nexus data blob is created, the docker images and npm and pypi
-packages can be deleted to reduce the package size as they won't be needed in
-the installation time:
+Once the Nexus data blob is created, the docker images can be deleted to reduce the package size as they won't be needed in the installation time:

 E.g.

 ::

     rm -f /tmp/resources/offline_data/docker_images_for_nexus/*
-    rm -rf /tmp/resources/offline_data/npm_tar
-    rm -rf /tmp/resources/offline_data/pypi

 Part 4. Packages preparation
 --------------------------------------------------------
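
The note in the hunk above leaves users free to author their own data lists. As a minimal sketch (the image reference is purely illustrative), a data list is a plain text file with one artifact reference per line, so extending one is a simple append::

    # hypothetical image name; data lists hold one reference per line
    echo "nexus3.onap.org:10001/onap/my-component:1.0.0" \
      >> ./build/data_lists/onap_docker_images.list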
@@ -204,19 +180,19 @@ ONAP offline deliverable consist of 3 packages:
 | aux_package.tar                       | Contains auxiliary input files that can be added to ONAP                      |
 +---------------------------------------+------------------------------------------------------------------------------+

-All packages can be created using script build/package.py. Beside of archiving files gathered in the previous steps, script also builds installer software and apply patch over application repository to make it usable without internet access.
+All packages can be created using script build/package.py. Beside of archiving files gathered in the previous steps, script also builds docker images used in on infra server.

 From onap-offline directory run:

 ::

-    ./build/package.py <helm charts repo> --application-repository_reference <commit/tag/branch> --application-patch_file <patchfile> --output-dir <target\_dir> --resources-directory <target\_dir>
+    ./build/package.py <helm charts repo> --application-repository_reference <commit/tag/branch> --output-dir <target\_dir> --resources-directory <target\_dir>

 For example:

 ::

-    ./build/package.py https://gerrit.onap.org/r/oom --application-repository_reference master --application-patch_file ./patches/onap.patch --output-dir ../packages --resources-directory ../resources
+    ./build/package.py https://gerrit.onap.org/r/oom --application-repository_reference master --output-dir ../packages --resources-directory ../resources

 In the target directory you should find tar files:
@@ -240,35 +216,13 @@ Appendix 1. Step-by-step download procedure

     ./build/download/download.py --docker ./build/data_lists/infra_docker_images.list ../resources/offline_data/docker_images_infra \
     --docker ./build/data_lists/rke_docker_images.list ../resources/offline_data/docker_images_for_nexus \
+    --docker ./build/data_lists/k8s_docker_images.list ../resources/offline_data/docker_images_for_nexus \
     --docker ./build/data_lists/onap_docker_images.list ../resources/offline_data/docker_images_for_nexus

-**Step 2 - git repos**
-
-::
-
-    # Following step will download all git repos
-    ./build/download/download.py --git ./build/data_lists/onap_git_repos.list ../resources/git-repo
-
-
-**Step 3 - npm packages**
-
-::
-
-    # Following step will download all npm packages
-    ./build/download/download.py --npm ./build/data_lists/onap_npm.list ../resources/offline_data/npm_tar
-
-**Step 4 - binaries**
+**Step 2 - binaries**

 ::

     # Following step will download rke, kubectl and helm binaries
     ./build/download/download.py --http ./build/data_lists/infra_bin_utils.sh ../resources/downloads
-
-**Step 5 - pip packages**
-
-::
-
-    # Following step will download all pip packages
-    ./build/download/download.py --pypi ./build/data_lists/onap_pip_packages.list ../resources/offline_data/pypi
-
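
With the git, npm and pip steps removed, Appendix 1 collapses into the two invocations above. A combined sketch, assuming the documented onap-offline layout (note this hunk spells the binaries list infra_bin_utils.sh while Part 2 uses infra_bin_utils.list)::

    #!/bin/sh
    # one-shot variant of Appendix 1, steps 1 and 2
    set -e
    ./build/download/download.py \
        --docker ./build/data_lists/infra_docker_images.list ../resources/offline_data/docker_images_infra \
        --docker ./build/data_lists/rke_docker_images.list ../resources/offline_data/docker_images_for_nexus \
        --docker ./build/data_lists/k8s_docker_images.list ../resources/offline_data/docker_images_for_nexus \
        --docker ./build/data_lists/onap_docker_images.list ../resources/offline_data/docker_images_for_nexus \
        --http ./build/data_lists/infra_bin_utils.list ../resources/downloads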
diff --git a/docs/InstallGuide.rst b/docs/InstallGuide.rst
index 762fb52d..9239cad9 100644
--- a/docs/InstallGuide.rst
+++ b/docs/InstallGuide.rst
@@ -11,7 +11,7 @@ This document describes the correct offline installation procedure for `OOM ONAP

 Before you dive into the installation you should prepare the offline installer itself - the installer consists of at least two packages/resources. You can read about it in the `Build Guide`_, which provides the instructions for creating them.

-This current version of the *Installation Guide* supports `Dublin release`_.
+This current version of the *Installation Guide* supports `El Alto release`_.

 -----

@@ -20,9 +20,9 @@ This current version of the *Installation Guide* supports `Dublin release`_.
 Part 1. Prerequisites
 ---------------------

-OOM ONAP deployment has certain hardware resource requirements - `Dublin requirements`_:
+OOM ONAP deployment has certain hardware resource requirements - `El Alto requirements`_:

-Community recommended footprint from `Dublin requirements`_ page is 16 VMs ``224 GB RAM`` and ``112 vCPUs``. We will not follow strictly this setup due to such demanding resource consumption and so we will deploy our installation across four nodes (VMs) instead of sixteen. Our simplified setup is definitively not supported or recommended - you are free to diverge - you can follow the official guidelines or make completely different layout, but the minimal count of nodes should not drop below three - otherwise you may have to do some tweaking to make it work, which is not covered here (there is a pod count limit for a single kubernetes node - you can read more about it in this `discussion <https://lists.onap.org/g/onap-discuss/topic/oom_110_kubernetes_pod/25213556>`_).
+Community recommended footprint from `El Alto requirements`_ page is 16 VMs ``224 GB RAM`` and ``112 vCPUs``. We will not follow strictly this setup due to such demanding resource consumption and so we will deploy our installation across four nodes (VMs) instead of sixteen. Our simplified setup is definitively not supported or recommended - you are free to diverge - you can follow the official guidelines or make completely different layout, but the minimal count of nodes should not drop below three - otherwise you may have to do some tweaking to make it work, which is not covered here (there is a pod count limit for a single kubernetes node - you can read more about it in this `discussion <https://lists.onap.org/g/onap-discuss/topic/oom_110_kubernetes_pod/25213556>`_).

 .. _oooi_installguide_preparations_k8s_cluster:
@@ -52,19 +52,19 @@ You don't need to care about these services now - that is the responsibility of

 Kubernetes cluster overview
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^

-In Dublin we are using RKE as k8s orchestrator method, however everyone is free to diverge from this example and can set it up in own way omitting our rke playbook execution.
+In El Alto we are using RKE as k8s orchestrator method, however everyone is free to diverge from this example and can set it up in own way omitting our rke playbook execution.
-=================== ========= ==================== ============== ============ ===============
-KUBERNETES NODE     OS        NETWORK              CPU            RAM          STORAGE
-=================== ========= ==================== ============== ============ ===============
-**infra-node**      RHEL 7    ``10.8.8.100/24``    ``8 vCPUs``    ``8 GB``     ``100 GB``
-**kube-node1**      RHEL 7    ``10.8.8.101/24``    ``16 vCPUs``   ``56+ GB``   ``100 GB``
-**kube-node2**      RHEL 7    ``10.8.8.102/24``    ``16 vCPUs``   ``56+ GB``   ``100 GB``
-**kube-node3**      RHEL 7    ``10.8.8.103/24``    ``16 vCPUs``   ``56+ GB``   ``100 GB``
-SUM                                                ``56 vCPUs``   ``176+ GB``  ``400 GB``
-================================================== ============== ============ ===============
+=================== ================== ==================== ============== ============ ===============
+KUBERNETES NODE     OS                 NETWORK              CPU            RAM          STORAGE
+=================== ================== ==================== ============== ============ ===============
+**infra-node**      RHEL/CentOS 7.6    ``10.8.8.100/24``    ``8 vCPUs``    ``8 GB``     ``100 GB``
+**kube-node1**      RHEL/CentOS 7.6    ``10.8.8.101/24``    ``16 vCPUs``   ``56+ GB``   ``100 GB``
+**kube-node2**      RHEL/CentOS 7.6    ``10.8.8.102/24``    ``16 vCPUs``   ``56+ GB``   ``100 GB``
+**kube-node3**      RHEL/CentOS 7.6    ``10.8.8.103/24``    ``16 vCPUs``   ``56+ GB``   ``100 GB``
+SUM                                                         ``56 vCPUs``   ``176+ GB``  ``400 GB``
+=========================================================== ============== ============ ===============

-Unfortunately, the offline installer supports only **RHEL 7.x** distribution as of now. So, your VMs should be preinstalled with this operating system - the hypervisor and platform can be of your choosing. It is also worth knowing that the exact RHEL version (major and minor number - 7.6 for example) should match for the package build procedure and the target installation. That means: if you are building packages on RHEL 7.6 release your VMs should be RHEL 7.6 too.
+Unfortunately, the offline installer supports only **RHEL 7.x** or **CentOS 7.6** distribution as of now. So, your VMs should be preinstalled with this operating system - the hypervisor and platform can be of your choosing.

 We will expect from now on that you installed four VMs and they are connected to the shared network. All VMs must be reachable from our *install-server* (below), which can be the hypervisor, *infra-node* or completely different machine. But in either of these cases the *install-server* must be able to connect over ssh to all of these nodes.
@@ -341,7 +341,7 @@ Final configuration can resemble the following::

 Helm chart value overrides
 ^^^^^^^^^^^^^^^^^^^^^^^^^^

-In Dublin OOM charts are coming with all ONAP components disabled, this setting is also prepackaged within our sw_package.tar. Luckily there are multiple ways supported how to override this setting. It's also necessary for setting-up VIM specific entries and basically to configure any stuff with non default values.
+In El Alto OOM charts are coming with all ONAP components disabled, this setting is also prepackaged within our sw_package.tar. Luckily there are multiple ways supported how to override this setting. It's also necessary for setting-up VIM specific entries and basically to configure any stuff with non default values.

 First option is to use ``overrides`` key in ``application_configuration.yml``. These settings will override helm values originally stored in ``values.yaml`` files in helm chart directories.
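
A minimal sketch of that first option (the component name is illustrative, and if an ``overrides`` key already exists in application_configuration.yml the fragment must be merged by hand rather than appended)::

    # enable a single ONAP component via helm value overrides
    printf '%s\n' \
        'overrides:' \
        '  nbi:' \
        '    enabled: true' >> application_configuration.yml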
@@ -545,6 +545,6 @@ Usage is basically the same as with the default chroot way - the only difference

 -----

 .. _Build Guide: ./BuildGuide.rst
-.. _Dublin requirements: https://onap.readthedocs.io/en/dublin/guides/onap-developer/settingup/index.html#installing-onap
-.. _Dublin release: https://docs.onap.org/en/dublin/release/
+.. _El Alto requirements: https://onap.readthedocs.io/en/elalto/guides/onap-developer/settingup/index.html#installing-onap
+.. _El Alto release: https://docs.onap.org/en/elalto/release/
 .. _OOM ONAP: https://wiki.onap.org/display/DW/ONAP+Operations+Manager+Project
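
To inspect this change against a full checkout (assuming the offline installer lives at its usual Gerrit location)::

    # clone the repository and show this commit in full
    git clone https://gerrit.onap.org/r/oom/offline-installer
    cd offline-installer
    git show 2a355bb76368fd6bc727e8736cb07f6eabb7d038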