author     Michal Ptacek <m.ptacek@partner.samsung.com>  2019-03-06 16:25:43 +0000
committer  Michal Ptacek <m.ptacek@partner.samsung.com>  2019-03-07 10:14:05 +0000
commit     c424cff7e699f3c03431197a3c4ce5bd1c881693 (patch)
tree       d1e3324b169dab060788404adbfebe1651136b91 /docs/InstallGuide.rst
parent     22bb23fec60f2cb553e4ca2b02d67c0a3cbdfa3c (diff)
Adding postinstall section in InstallGuide
Describing common issues with onap deployments into InstallGuide.

Change-Id: I7b039fbc357901c8bfa1d57db69f9344eea07077
Issue-ID: OOM-1701
Signed-off-by: Michal Ptacek <m.ptacek@partner.samsung.com>
Diffstat (limited to 'docs/InstallGuide.rst')
-rw-r--r--  docs/InstallGuide.rst  30
1 file changed, 30 insertions(+), 0 deletions(-)
diff --git a/docs/InstallGuide.rst b/docs/InstallGuide.rst
index f34ee03e..e91c7bd7 100644
--- a/docs/InstallGuide.rst
+++ b/docs/InstallGuide.rst
@@ -333,11 +333,41 @@ This will take a while so be patient.
- ``rancher_kubernetes.yml``
- ``application.yml``
+----
+
+.. _oooi_installguide_postinstall:
+
+Part 4. Postinstallation and troubleshooting
+--------------------------------------------
+
After all the playbooks have finished, it will still take a while until all pods are up and running. You can monitor your newly created Kubernetes cluster, for example, like this::
$ ssh -i ~/.ssh/offline_ssh_key root@10.8.8.4 # tailor this command to connect to your infra-node
$ watch -d -n 5 'kubectl get pods --all-namespaces'
+
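+If the full listing is too noisy, one can for example watch only the pods which are not yet fully up. A minimal sketch, assuming the default ``onap`` namespace::
+
+ $ watch -d -n 5 "kubectl get pods -n onap --no-headers | grep -vE 'Running|Completed'"   # only pods that are still starting or failing
+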
+The final result of the installation varies based on the number of k8s nodes used and on the distribution of pods across them. In some development environments we quite frequently hit problems with not all pods being properly deployed. In a successful deployment, all jobs should be in a successful state.
+This can be verified using ::
+
+ $ kubectl get jobs -n <namespace>
+
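+The exact output format depends on the kubectl version, but jobs which have not (yet) completed can typically be filtered out with something like this (illustration only, assuming the ``COMPLETIONS`` column reads ``1/1`` for finished jobs)::
+
+ $ kubectl get jobs -n <namespace> --no-headers | grep -v '1/1'   # list jobs that have not completed
+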
+If a job is stuck in a failed end-state such as ``'BackoffLimitExceeded'``, manual intervention is required to heal it and to let the dependent jobs pass as well. More details about a particular job's state can be obtained using ::
+
+ $ kubectl describe job -n <namespace> <job_name>
+
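+The logs of the pod(s) spawned by the failing job are usually more informative than the job description itself. A minimal sketch of obtaining them (pod and job names are placeholders)::
+
+ $ kubectl get pods -n <namespace> | grep <job_name>   # find the pod(s) created by the failing job
+ $ kubectl logs -n <namespace> <pod_name>              # inspect why the pod failed
+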
+If manual intervention is required, one can delete the failing job and retry the helm install command directly. This will not launch a full deployment, but rather check the current state of the system and rebuild the parts which are not up and running. The exact commands are as follows ::
+
+ $ kubectl delete job -n <namespace> <job_name>
+ $ helm deploy <env_name> <helm_chart_name> --namespace <namespace_name>
+
+ E.g. helm deploy dev local/onap --namespace onap
+
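+Afterwards one can verify that the deleted job was recreated and that the related pods are coming up again, for example (the component name is purely illustrative)::
+
+ $ kubectl get jobs -n <namespace> | grep <job_name>          # job should be recreated by helm
+ $ kubectl get pods -n <namespace> | grep <component_name>    # related pods should reach Running/Completed state
+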
+Once all pods are properly deployed and in the Running state, one can verify functionality, e.g. by running the ONAP healthchecks ::
+
+ $ cd <app_data_path>/<app_name>/helm_charts/robot
+ $ ./ete-k8s.sh onap health
+
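+If some of the healthchecks fail, a reasonable first step is to check the pods of the affected component and re-run the suite once they have recovered. A sketch, with the component name as a placeholder::
+
+ $ kubectl get pods -n onap | grep <failing_component>   # verify the component's pods are Running
+ $ ./ete-k8s.sh onap health                              # re-run the healthcheck suite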
+
-----
.. _oooi_installguide_appendix1: