From 1ed2b6fce2c08867c55786fc4aeebe983f312b4c Mon Sep 17 00:00:00 2001
From: Tomáš Levora
Date: Tue, 12 Nov 2019 14:28:27 +0000
Subject: Revert "Fix packaging offline-installer"

This reverts commit 92477974b68c7638a43ffc869e3ea9fb854b3534.

Reason for revert: Not solved handling of application_configuration.yml in package.py

Change-Id: I814c01dc1f7334a961e991c42fd485d9af4675a1
Signed-off-by: Tomas Levora
Issue-ID: OOM-2201
---
 docs/BuildGuide.rst   | 16 ++++++----------
 docs/InstallGuide.rst |  6 +++---
 2 files changed, 9 insertions(+), 13 deletions(-)

(limited to 'docs')

diff --git a/docs/BuildGuide.rst b/docs/BuildGuide.rst
index d0a558ba..27c0835e 100644
--- a/docs/BuildGuide.rst
+++ b/docs/BuildGuide.rst
@@ -128,18 +128,14 @@ so one might try following command to download most of the required artifacts in
 ::

     # following arguments are provided
-    # all data lists are taken from ./build/data_lists/ folder
+    # all data lists are taken in ./build/data_lists/ folder
     # all resources will be stored in expected folder structure within ../resources folder

     ./build/download/download.py --docker ./build/data_lists/infra_docker_images.list ../resources/offline_data/docker_images_infra \
-    --http ./build/data_lists/infra_bin_utils.list ../resources/downloads
-
-    # following docker images does not neccessary need to be stored under resources as they load into repository in next part
-    # if second argument for --docker is not present, images are just pulled and cached.
-    # Warning: script must be run twice separately, for more details run download.py --help
-    ./build/download/download.py --docker ./build/data_lists/rke_docker_images.list \
+    --docker ./build/data_lists/rke_docker_images.list \
     --docker ./build/data_lists/k8s_docker_images.list \
     --docker ./build/data_lists/onap_docker_images.list \
+    --http ./build/data_lists/infra_bin_utils.list ../resources/downloads

 Alternatively, step-by-step procedure is described in Appendix 1.

@@ -152,7 +148,7 @@ Part 3. Populate local nexus
 Prerequisites:

 - All data lists and resources which are pushed to local nexus repository are available
-- Following ports are not occupied by another service: 80, 8081, 8082, 10001
+- Following ports are not occupied buy another service: 80, 8081, 8082, 10001
 - There's no docker container called "nexus"

 .. note:: In case you skipped the Part 2 for the artifacts download, please ensure that the onap docker images are cached and copy of resources data are untarred in *./onap-offline/../resources/*

@@ -189,13 +185,13 @@ From onap-offline directory run:

 ::

-  ./build/package.py <helm charts repo> --build-version <version> --application-repository_reference <commit/tag/branch> --output-dir <target_dir> --resources-directory <target_dir>
+  ./build/package.py <helm charts repo> --build_version "" --application-repository_reference <commit/tag/branch> --output-dir <target_dir> --resources-directory <target_dir>

 For example:

 ::

-  ./build/package.py https://gerrit.onap.org/r/oom --application-repository_reference master --output-dir /tmp/packages --resources-directory /tmp/resources
+  ./build/package.py https://gerrit.onap.org/r/oom --build_version "" --application-repository_reference master --output-dir /tmp/packages --resources-directory /tmp/resources

 In the target directory you should find tar files:

diff --git a/docs/InstallGuide.rst b/docs/InstallGuide.rst
index 947cd727..9239cad9 100644
--- a/docs/InstallGuide.rst
+++ b/docs/InstallGuide.rst
@@ -233,7 +233,7 @@ After all the changes, the ``'hosts.yml'`` should look similar to this::
     infrastructure:
       hosts:
         infrastructure-server:
-          ansible_host: 10.8.8.100
+          ansible_host: 10.8.8.13
           #IP used for communication between infra and kubernetes nodes, must be specified.
           cluster_ip: 10.8.8.100

@@ -326,7 +326,7 @@ Second one controls time zone setting on host. It's value should be time zone name
 Final configuration can resemble the following::

     resources_dir: /data
-    resources_filename: resources_package.tar
+    resources_filename: resources-package.tar
     app_data_path: /opt/onap
     app_name: onap
     timesync:

@@ -432,7 +432,7 @@ Part 4. Post-installation and troubleshooting
 After all of the playbooks are run successfully, it will still take a lot of time until all pods are up and running. You can monitor your newly created kubernetes cluster for example like this::

-    $ ssh -i ~/.ssh/offline_ssh_key root@10.8.8.100 # tailor this command to connect to your infra-node
+    $ ssh -i ~/.ssh/offline_ssh_key root@10.8.8.4 # tailor this command to connect to your infra-node
     $ watch -d -n 5 'kubectl get pods --all-namespaces'

 Alternatively you can monitor progress with ``helm_deployment_status.py`` script located in offline-installer directory. Transfer it to infra-node and run::
--
cgit 1.2.3-korg
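
Background for the BuildGuide hunk above: the restored text has ``download.py`` called once, with ``--docker`` repeated and only the first data list given an explicit target directory; per the removed comments, when the directory argument is omitted the images are only pulled and cached locally. Below is a minimal, hypothetical sketch of how such repeated, variable-arity options can be parsed with Python's standard ``argparse`` — an illustration of the option pattern only, not the actual code of ``build/download/download.py``::

    # sketch.py - hypothetical illustration, not the real build/download/download.py
    import argparse

    def parse_args(argv=None):
        parser = argparse.ArgumentParser(description='offline downloader option sketch')
        # action='append' lets --docker repeat; nargs='+' accepts "LIST [DIR]"
        parser.add_argument('--docker', action='append', nargs='+',
                            metavar=('LIST', 'DIR'), default=[],
                            help='image data list and optional target directory')
        # --http always takes a data list and a download directory
        parser.add_argument('--http', action='append', nargs=2,
                            metavar=('LIST', 'DIR'), default=[],
                            help='binary data list and target directory')
        return parser.parse_args(argv)

    if __name__ == '__main__':
        args = parse_args()
        for entry in args.docker:
            data_list = entry[0]
            target = entry[1] if len(entry) > 1 else None
            if target is None:
                # no target directory: images would only be pulled and cached
                print('pull and cache images from', data_list)
            else:
                print('save images from', data_list, 'to', target)

With this pattern a single run can mix cached-only lists (e.g. ``--docker rke_docker_images.list``) with stored ones (e.g. ``--docker infra_docker_images.list ../resources/...``), which is the shape of the command the revert restores.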