author    Denis Kasanic <d.kasanic@partner.samsung.com>  2019-11-14 12:35:46 +0100
committer Denis Kasanic <d.kasanic@partner.samsung.com>  2019-11-15 11:35:00 +0000
commit    a7702f2e1721b90a4314af906ca3a8807d199f14 (patch)
tree      0977a9f644de704c79862de44f8b80a0b1813669 /docs
parent    1ed2b6fce2c08867c55786fc4aeebe983f312b4c (diff)
Update documentation
Fix several typos in Build and Install parts of docs.
Fix paths of configuration files in Install part.
Remove note discouraging making changes in config files.

Issue-ID: OOM-2158
Issue-ID: OOM-2197
Signed-off-by: Denis Kasanic <d.kasanic@partner.samsung.com>
Change-Id: I28d9b43a56791bc3c1c53c12f7c852f5a1a885c6
Diffstat (limited to 'docs')
-rw-r--r--  docs/BuildGuide.rst    | 16
-rw-r--r--  docs/InstallGuide.rst  | 25
2 files changed, 20 insertions(+), 21 deletions(-)
diff --git a/docs/BuildGuide.rst b/docs/BuildGuide.rst
index 27c0835e..01f248ff 100644
--- a/docs/BuildGuide.rst
+++ b/docs/BuildGuide.rst
@@ -128,14 +128,18 @@ so one might try following command to download most of the required artifacts in
::
# following arguments are provided
- # all data lists are taken in ./build/data_lists/ folder
+ # all data lists are taken from ./build/data_lists/ folder
# all resources will be stored in expected folder structure within ../resources folder
./build/download/download.py --docker ./build/data_lists/infra_docker_images.list ../resources/offline_data/docker_images_infra \
- --docker ./build/data_lists/rke_docker_images.list \
+ --http ./build/data_lists/infra_bin_utils.list ../resources/downloads
+
+ # the following docker images do not necessarily need to be stored under resources, as they are loaded into the repository in the next part
+ # if the second argument for --docker is not present, images are just pulled and cached.
+ # Warning: the script must be run twice separately; for more details run download.py --help
+ ./build/download/download.py --docker ./build/data_lists/rke_docker_images.list \
--docker ./build/data_lists/k8s_docker_images.list \
--docker ./build/data_lists/onap_docker_images.list \
- --http ./build/data_lists/infra_bin_utils.list ../resources/downloads
Alternatively, a step-by-step procedure is described in Appendix 1.
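To verify that the second pass really pulled the images into the local docker cache (an illustrative sanity check, not part of the official procedure; the grep patterns assume the default rke/k8s/onap data lists)::

    # count the cached images matching entries from the data lists
    $ docker images | grep -E 'rancher|onap' | wc -l
    # spot-check a few of them
    $ docker images | grep -E 'rancher|onap' | head -5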
@@ -148,7 +152,7 @@ Part 3. Populate local nexus
Prerequisites:
- All data lists and resources which are pushed to the local nexus repository are available
-- Following ports are not occupied buy another service: 80, 8081, 8082, 10001
+- Following ports are not occupied by another service: 80, 8081, 8082, 10001
- There's no docker container called "nexus"
.. note:: In case you skipped Part 2 (the artifacts download), please ensure that the onap docker images are cached and a copy of the resources data is untarred in *./onap-offline/../resources/*
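Both the port and the container prerequisites above can be verified up front (an illustrative check; ``ss`` may be replaced by ``netstat -tlpn`` on older systems)::

    # should print nothing if the ports are free
    $ ss -tlpn | grep -E ':(80|8081|8082|10001)\s'
    # should list no container named "nexus"
    $ docker ps -a --filter name=nexus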
@@ -185,13 +189,13 @@ From onap-offline directory run:
::
- ./build/package.py <helm charts repo> --build_version "" --application-repository_reference <commit/tag/branch> --output-dir <target\_dir> --resources-directory <target\_dir>
+ ./build/package.py <helm charts repo> --build-version <version> --application-repository_reference <commit/tag/branch> --output-dir <target\_dir> --resources-directory <target\_dir>
For example:
::
- ./build/package.py https://gerrit.onap.org/r/oom --build_version "" --application-repository_reference master --output-dir /tmp/packages --resources-directory /tmp/resources
+ ./build/package.py https://gerrit.onap.org/r/oom --application-repository_reference master --output-dir /tmp/packages --resources-directory /tmp/resources
In the target directory you should find tar files:
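For example (an illustrative check; the exact file names are assumptions here and depend on the ``--build-version`` value, but the resources tar must match the ``resources_filename`` configured later in the Install Guide, e.g. ``resources_package.tar``)::

    $ ls /tmp/packages /tmp/resources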
diff --git a/docs/InstallGuide.rst b/docs/InstallGuide.rst
index 9239cad9..1f4514fa 100644
--- a/docs/InstallGuide.rst
+++ b/docs/InstallGuide.rst
@@ -124,17 +124,12 @@ Change the current directory to the ``'ansible'``::
You can see multiple files and directories inside - this is the *offline-installer*. It is implemented as a set of ansible playbooks.
-If you created the ``'sw'`` package according to the *Build Guide* then you should have had the ``'application'`` directory populated with at least the following files:
+If you created the ``'sw'`` package according to the *Build Guide* then you should have the *offline-installer* populated with at least the following files:
-- ``application_configuration.yml``
-- ``hosts.yml``
+- ``application/application_configuration.yml``
+- ``inventory/hosts.yml``
-**NOTE:** The following paragraph describes a way how to create or fine-tune your own ``'application_configuration.yml'`` - we are discouraging you from executing this step. The recommended way is to use the packaged files inside the ``'application'`` directory.
-
-**NOT RECOMMENDED:** If for some reason you don't have these files inside the ``'application'`` directory or you simply want to do things the hard way then you can recreate them from their templates. It is better to keep the originals (templates) intact - so we will copy them to the ``'application'`` directory::
-
- $ cp ../config/application_configuration.yml application/
- $ cp inventory/hosts.yml application/
+The following paragraphs describe the fine-tuning of ``'inventory/hosts.yml'`` and ``'application_configuration.yml'`` to reflect your VM setup.
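A quick way to confirm that the package really contains both files (run from the ``'ansible'`` directory you just entered)::

    $ ls application/application_configuration.yml inventory/hosts.yml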
.. _oooi_installguide_config_hosts:
@@ -233,7 +228,7 @@ After all the changes, the ``'hosts.yml'`` should look similar to this::
infrastructure:
hosts:
infrastructure-server:
- ansible_host: 10.8.8.13
+ ansible_host: 10.8.8.100
#IP used for communication between infra and kubernetes nodes, must be specified.
cluster_ip: 10.8.8.100
@@ -326,7 +321,7 @@ Second one controls time zone setting on host. It's value should be time zone na
Final configuration can resemble the following::
resources_dir: /data
- resources_filename: resources-package.tar
+ resources_filename: resources_package.tar
app_data_path: /opt/onap
app_name: onap
timesync:
@@ -367,7 +362,7 @@ We are almost finished with the configuration and we are close to start the inst
You can use the ansible playbook ``'setup.yml'`` like this::
- $ ./run_playbook.sh -i application/hosts.yml setup.yml -u root --ask-pass
+ $ ./run_playbook.sh -i inventory/hosts.yml setup.yml -u root --ask-pass
You will be asked for a password for each node and the playbook will generate an unprotected ssh key-pair ``'~/.ssh/offline_ssh_key'``, which will be distributed to the nodes.
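If you ever need to prepare the key-pair by hand instead (a manual equivalent of what the playbook does; the address below is the infrastructure-server from the example inventory)::

    # create an unprotected key-pair under the name the installer expects
    $ ssh-keygen -t rsa -N '' -f ~/.ssh/offline_ssh_key
    # distribute the public key; repeat for every node in hosts.yml
    $ ssh-copy-id -i ~/.ssh/offline_ssh_key root@10.8.8.100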
@@ -383,7 +378,7 @@ This command behaves almost identically to the ``'setup.yml'`` playbook.
If you generated the ssh key manually then you can now run the ``'setup.yml'`` playbook like this and achieve the same result as in the first execution::
- $ ./run_playbook.sh -i application/hosts.yml setup.yml
+ $ ./run_playbook.sh -i inventory/hosts.yml setup.yml
This time it should not ask you for any password - of course this is very redundant, because you just distributed two ssh keys for no good reason.
@@ -412,7 +407,7 @@ We will use the default chroot option so we don't need any docker service to be
Installation is actually very straightforward now::
- $ ./run_playbook.sh -i application/hosts.yml -e @application/application_configuration.yml site.yml
+ $ ./run_playbook.sh -i inventory/hosts.yml -e @application/application_configuration.yml site.yml
This will take a while so be patient.
@@ -432,7 +427,7 @@ Part 4. Post-installation and troubleshooting
After all of the playbooks are run successfully, it will still take a lot of time until all pods are up and running. You can monitor your newly created kubernetes cluster for example like this::
- $ ssh -i ~/.ssh/offline_ssh_key root@10.8.8.4 # tailor this command to connect to your infra-node
+ $ ssh -i ~/.ssh/offline_ssh_key root@10.8.8.100 # tailor this command to connect to your infra-node
$ watch -d -n 5 'kubectl get pods --all-namespaces'
Alternatively you can monitor progress with the ``helm_deployment_status.py`` script located in the offline-installer directory. Transfer it to the infra-node and run::
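For example (a sketch only; the script's options may differ between releases, so consult ``--help`` first)::

    # copy the script to the infra-node and inspect its options
    $ scp -i ~/.ssh/offline_ssh_key helm_deployment_status.py root@10.8.8.100:/tmp/
    $ ssh -i ~/.ssh/offline_ssh_key root@10.8.8.100 'python3 /tmp/helm_deployment_status.py --help'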