Diffstat (limited to 'docs/sections/guides/infra_guides')
4 files changed, 476 insertions, 0 deletions
diff --git a/docs/sections/guides/infra_guides/oom_base_config_setup.rst b/docs/sections/guides/infra_guides/oom_base_config_setup.rst
new file mode 100644
index 0000000000..d228f5df56
--- /dev/null
+++ b/docs/sections/guides/infra_guides/oom_base_config_setup.rst
@@ -0,0 +1,187 @@

.. This work is licensed under a Creative Commons Attribution 4.0
.. International License.
.. http://creativecommons.org/licenses/by/4.0
.. Copyright (C) 2022 Nordix Foundation

.. Links
.. _HELM Best Practices Guide: https://docs.helm.sh/chart_best_practices/#requirements
.. _helm installation guide: https://helm.sh/docs/intro/install/
.. _kubectl installation guide: https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/
.. _Curated applications for Kubernetes: https://github.com/kubernetes/charts
.. _Cert-Manager Installation documentation: https://cert-manager.io/docs/installation/kubernetes/
.. _Cert-Manager kubectl plugin documentation: https://cert-manager.io/docs/usage/kubectl-plugin/
.. _Strimzi Apache Kafka Operator helm Installation documentation: https://strimzi.io/docs/operators/in-development/deploying.html#deploying-cluster-operator-helm-chart-str

.. _oom_base_setup_guide:

OOM Base Platform
#################

As part of the initial base setup of the host Kubernetes cluster,
the following mandatory installation and configuration steps must be completed.

.. contents::
   :backlinks: top
   :depth: 1
   :local:
..

For additional platform add-ons, see the :ref:`oom_base_optional_addons` section.

Install & configure kubectl
***************************
kubectl, the Kubernetes command line interface used to manage the Kubernetes
cluster, needs to be installed and configured to run as a non-root user.

For additional information regarding kubectl installation and configuration,
see the `kubectl installation guide`_.

To install kubectl, execute the following, replacing <recommended-kubectl-version>
with the version defined in the :ref:`versions_table` table::

    > curl -LO https://dl.k8s.io/release/v<recommended-kubectl-version>/bin/linux/amd64/kubectl

    > chmod +x ./kubectl

    > sudo mv ./kubectl /usr/local/bin/kubectl

    > mkdir -p ~/.kube

    > cp kube_config_cluster.yml ~/.kube/config.onap

    > export KUBECONFIG=~/.kube/config.onap

    > kubectl config use-context onap

Validate the installation::

    > kubectl get nodes

::

    NAME             STATUS   ROLES               AGE     VERSION
    onap-control-1   Ready    controlplane,etcd   3h53m   v1.23.8
    onap-control-2   Ready    controlplane,etcd   3h53m   v1.23.8
    onap-k8s-1       Ready    worker              3h53m   v1.23.8
    onap-k8s-2       Ready    worker              3h53m   v1.23.8
    onap-k8s-3       Ready    worker              3h53m   v1.23.8
    onap-k8s-4       Ready    worker              3h53m   v1.23.8
    onap-k8s-5       Ready    worker              3h53m   v1.23.8
    onap-k8s-6       Ready    worker              3h53m   v1.23.8
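
The kube_config_cluster.yml file copied above is a standard kubeconfig file
produced by your cluster installer (for example RKE). Purely as a point of
reference, a trimmed kubeconfig for the *onap* context looks roughly like the
sketch below; every value is a placeholder, and the real file should always
come from the installer:

.. code-block:: yaml

  # Hypothetical, trimmed kubeconfig for illustration only -
  # use the file generated by your cluster installer.
  apiVersion: v1
  kind: Config
  clusters:
  - name: onap
    cluster:
      server: https://<control-plane-ip>:6443
      certificate-authority-data: <base64-encoded-ca-cert>
  contexts:
  - name: onap
    context:
      cluster: onap
      user: kube-admin-onap
  users:
  - name: kube-admin-onap
    user:
      client-certificate-data: <base64-encoded-cert>
      client-key-data: <base64-encoded-key>
  current-context: onap
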
Install & configure helm
************************
Helm is used to package, configure and deploy the ONAP helm charts.
For additional information, see the `helm installation guide`_.

To install helm, execute the following, replacing <recommended-helm-version>
with the version defined in the :ref:`versions_table` table::

    > wget https://get.helm.sh/helm-v<recommended-helm-version>-linux-amd64.tar.gz

    > tar -zxvf helm-v<recommended-helm-version>-linux-amd64.tar.gz

    > sudo mv linux-amd64/helm /usr/local/bin/helm

Verify the helm version with::

    > helm version

Helm's default, CNCF-provided `Curated applications for Kubernetes`_ repository
called *stable* can be removed to avoid confusion::

    > helm repo remove stable

Install the additional OOM plugins required to deploy/undeploy the OOM helm
charts into the Helm 3 plugin directory (~/.local/share/helm/plugins is the
default on Linux; it can be confirmed with ``helm env HELM_PLUGINS``)::

    > git clone http://gerrit.onap.org/r/oom ~/oom

    > mkdir -p ~/.local/share/helm

    > cp -R ~/oom/kubernetes/helm/plugins/ ~/.local/share/helm/plugins

Verify the plugins are installed::

    > helm plugin ls

::

    NAME      VERSION   DESCRIPTION
    deploy    1.0.0     install (upgrade if release exists) parent chart and all subcharts as separate but related releases
    undeploy  1.0.0     delete parent chart and subcharts that were deployed as separate releases


Install the Strimzi Kafka operator
**********************************
Strimzi provides a way to run an Apache Kafka cluster on Kubernetes
in various deployment configurations by using Kubernetes operators.
Operators are a method of packaging, deploying, and managing Kubernetes
applications.

Strimzi operators extend the Kubernetes functionality, automating common
and complex tasks related to a Kafka deployment. By implementing
knowledge of Kafka operations in code, the Kafka administration
tasks are simplified and require less manual intervention.

The Strimzi cluster operator is deployed using helm to install the parent chart
containing all of the required custom resource definitions. This should be done
by a Kubernetes administrator to allow for deployment of custom resources into
any Kubernetes namespace within the cluster.

Full installation instructions can be found in the
`Strimzi Apache Kafka Operator helm Installation documentation`_.

To add the required helm repository, execute the following::

    > helm repo add strimzi https://strimzi.io/charts/

To install the Strimzi Kafka operator, execute the following, replacing
<recommended-strimzi-version> with the version defined in the
:ref:`versions_table` table::

    > helm install strimzi-kafka-operator strimzi/strimzi-kafka-operator --namespace strimzi-system --version <recommended-strimzi-version> --set watchAnyNamespace=true --create-namespace

Verify the installation::

    > kubectl get po -n strimzi-system

::

    NAME                                        READY   STATUS    RESTARTS   AGE
    strimzi-cluster-operator-7f7d6b46cf-mnpjr   1/1     Running   0          2m
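
With ``watchAnyNamespace=true`` the operator reconciles ``Kafka`` custom
resources in any namespace. The OOM charts define the actual ONAP Kafka
cluster themselves; purely to illustrate the kind of resource the operator
manages, a minimal ``Kafka`` custom resource looks like the sketch below
(the name, namespace and sizing are hypothetical):

.. code-block:: yaml

  # Illustrative only - OOM's own charts define the real ONAP Kafka cluster.
  apiVersion: kafka.strimzi.io/v1beta2
  kind: Kafka
  metadata:
    name: example-cluster
    namespace: onap
  spec:
    kafka:
      replicas: 3
      listeners:
        - name: plain
          port: 9092
          type: internal
          tls: false
      storage:
        type: ephemeral
    zookeeper:
      replicas: 3
      storage:
        type: ephemeral
    entityOperator:
      topicOperator: {}
      userOperator: {}
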
Install Cert-Manager
********************

Cert-Manager is a native Kubernetes certificate management controller.
It can help with issuing certificates from a variety of sources, such as
Let’s Encrypt, HashiCorp Vault, Venafi, a simple signing key pair,
self-signed certificates, or external issuers. It ensures certificates are
valid and up to date, and attempts to renew certificates at a configured time
before expiry.

Cert-Manager is deployed using regular YAML manifests which include all
the needed resources (the namespace, the CustomResourceDefinitions, and the
webhook component).

Full installation instructions, including details on how to configure extra
functionality in Cert-Manager, can be found in the
`Cert-Manager Installation documentation`_.

There is also a kubectl plugin (kubectl cert-manager) that can help you
manage cert-manager resources inside your cluster. For installation
steps, please refer to the `Cert-Manager kubectl plugin documentation`_.

To install cert-manager, execute the following, replacing <recommended-cm-version>
with the version defined in the :ref:`versions_table` table::

    > kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v<recommended-cm-version>/cert-manager.yaml

Verify the installation::

    > kubectl get po -n cert-manager

::

    NAME                                       READY   STATUS    RESTARTS   AGE
    cert-manager-776c4cfcb6-vgnpw              1/1     Running   0          2m
    cert-manager-cainjector-7d9668978d-hdxf7   1/1     Running   0          2m
    cert-manager-webhook-66c8f6c75-dxmtz       1/1     Running   0          2m

diff --git a/docs/sections/guides/infra_guides/oom_base_optional_addons.rst b/docs/sections/guides/infra_guides/oom_base_optional_addons.rst
new file mode 100644
index 0000000000..4b4fbf7883
--- /dev/null
+++ b/docs/sections/guides/infra_guides/oom_base_optional_addons.rst
@@ -0,0 +1,41 @@

.. This work is licensed under a Creative Commons Attribution 4.0
.. International License.
.. http://creativecommons.org/licenses/by/4.0
.. Copyright (C) 2022 Nordix Foundation

.. Links
.. _Prometheus stack README: https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#readme

.. _oom_base_optional_addons:

OOM Optional Addons
###################

The following optional applications can be added to your Kubernetes environment.

Install Prometheus Stack
************************

Prometheus is an open-source systems monitoring and alerting toolkit with
an active ecosystem.

Kube Prometheus Stack is a collection of Kubernetes manifests, Grafana
dashboards, and Prometheus rules combined with documentation and scripts to
provide easy-to-operate, end-to-end Kubernetes cluster monitoring with
Prometheus using the Prometheus Operator. As it includes both the Prometheus
Operator and Grafana dashboards, there is no need to set them up separately.
See the `Prometheus stack README`_ for more information.

To install the prometheus stack, execute the following:

- Add the prometheus-community Helm repository::

    > helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

- Update your local Helm chart repository cache::

    > helm repo update

- Install prometheus, replacing <recommended-pm-version> with the version
  defined in the :ref:`versions_table` table::

    > helm install prometheus prometheus-community/kube-prometheus-stack --namespace=prometheus --create-namespace --version=<recommended-pm-version>
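
To verify the installation and reach the bundled Grafana UI, something like
the following can be used. The service name below assumes the chart's default
``<release>-grafana`` naming for a release called *prometheus*, and
``admin``/``prom-operator`` are the chart's default Grafana credentials;
adjust these if your release name or values differ::

    > kubectl get po -n prometheus

    > kubectl port-forward -n prometheus svc/prometheus-grafana 3000:80

Grafana should then be reachable on http://localhost:3000.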
diff --git a/docs/sections/guides/infra_guides/oom_infra_setup.rst b/docs/sections/guides/infra_guides/oom_infra_setup.rst
new file mode 100644
index 0000000000..d8fb743f42
--- /dev/null
+++ b/docs/sections/guides/infra_guides/oom_infra_setup.rst
@@ -0,0 +1,72 @@

.. This work is licensed under a Creative Commons Attribution 4.0
.. International License.
.. http://creativecommons.org/licenses/by/4.0
.. Copyright (C) 2022 Nordix Foundation

.. Links
.. _Kubernetes: https://kubernetes.io/
.. _Kubernetes best practices: https://kubernetes.io/docs/setup/best-practices/cluster-large/
.. _kubelet config guide: https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/

.. _oom_infra_setup_guide:

OOM Infrastructure Guide
########################

.. figure:: ../../resources/images/oom_logo/oomLogoV2-medium.png
   :align: right

OOM deploys and manages ONAP on a pre-established Kubernetes_ cluster - the
creation of this cluster is outside the scope of the OOM project, as there
are many options, including public clouds with pre-established environments.
If creation of a Kubernetes cluster is required, note that the life-cycle of
this cluster is independent of the life-cycle of the ONAP components
themselves.

.. rubric:: Minimum Hardware Configuration

Some recommended hardware requirements are provided below. Note that this is
for a full ONAP deployment (all components).

.. table:: OOM Hardware Requirements

  ===== ===== ====== ====================
  RAM   HD    vCores Ports
  ===== ===== ====== ====================
  224GB 160GB 112    0.0.0.0/0 (all open)
  ===== ===== ====== ====================

Customizing ONAP to deploy only the components that are needed will
drastically reduce these requirements.
See the :ref:`OOM customized deployment<oom_customize_overrides>` section for
more details.

.. note::
  | Kubernetes supports a maximum of 110 pods per node - this can be overcome by modifying your kubelet config, as sketched below.
  | See the `kubelet config guide`_ for more information.

  | The use of many small nodes is preferred over a few larger nodes (for example 14 x 16GB - 8 vCores each).

  | OOM can be deployed on a private set of physical hosts or VMs (or even a combination of the two).
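
As an illustration of raising the per-node pod ceiling, a kubelet started
with ``--config=/etc/kubernetes/kubelet-config.yaml`` could be given a
configuration file like the following minimal sketch; the file path and the
value 200 are examples only, not OOM requirements:

.. code-block:: yaml

  # Hypothetical kubelet configuration file raising the per-node pod limit.
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  maxPods: 200
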
.. rubric:: Software Requirements

The versions of software that are supported by OOM are as follows:

.. _versions_table:

.. table:: OOM Software Requirements

  ============== =========== ======= ======== ======== ============ ================= =======
  Release        Kubernetes  Helm    kubectl  Docker   Cert-Manager Prometheus Stack  Strimzi
  ============== =========== ======= ======== ======== ============ ================= =======
  Jakarta        1.22.4      3.6.3   1.22.4   20.10.x  1.8.0        35.x              0.28.0
  Kohn           1.23.8      3.8.2   1.23.8   20.10.x  1.8.0        35.x              0.31.1
  ============== =========== ======= ======== ======== ============ ================= =======


.. toctree::
   :hidden:

   oom_base_config_setup.rst
   oom_base_optional_addons.rst
   oom_setup_ingress_controller.rst

diff --git a/docs/sections/guides/infra_guides/oom_setup_ingress_controller.rst b/docs/sections/guides/infra_guides/oom_setup_ingress_controller.rst
new file mode 100644
index 0000000000..8c261fdfd7
--- /dev/null
+++ b/docs/sections/guides/infra_guides/oom_setup_ingress_controller.rst
@@ -0,0 +1,176 @@

.. This work is licensed under a Creative Commons Attribution 4.0
.. International License.
.. http://creativecommons.org/licenses/by/4.0
.. Copyright 2020, Samsung Electronics
.. Modification copyright (C) 2022 Nordix Foundation

.. Links
.. _metallb Metal Load Balancer installation: https://metallb.universe.tf/installation/

.. _oom_setup_ingress_controller:

OOM Ingress controller setup
############################

.. warning::
  This guide should probably go in the Optional addons section.

This optional guide provides instructions on how to set up the experimental
ingress controller feature. For this guide, the cluster is hosted on OpenStack
VMs, and the Rancher Kubernetes Engine (RKE) is used to deploy and manage the
Kubernetes cluster and the ingress controller.

.. contents::
   :backlinks: top
   :depth: 1
   :local:
..

The result at the end of this tutorial will be:

#. Customization of the cluster.yaml file for ingress controller support

#. Installation and configuration of a test DNS server for ingress host
   resolution on testing machines

#. Installation and configuration of MetalLB (Metal Load Balancer), required
   for exposing the ingress service

#. Installation and configuration of the NGINX ingress controller

#. Additional information on how to deploy ONAP with services exposed via the
   ingress controller

Customize the cluster.yml file
******************************
Before setting up the cluster for ingress purposes, the DNS cluster IP and the
ingress provider should be configured as follows:

.. code-block:: yaml

  ---
  <...>
  restore:
    restore: false
    snapshot_name: ""
  ingress:
    provider: none
  dns:
    provider: coredns
    upstreamnameservers:
      - <cluster_dns_ip>:31555

Where <cluster_dns_ip> should be set to the same IP as the control plane
node.

For external load balancer purposes, at least one of the worker nodes should
be configured with an external IP address accessible from outside the cluster.
This can be done using the following example node configuration:

.. code-block:: yaml

  ---
  <...>
  - address: <external_ip>
    internal_address: <internal_ip>
    port: "22"
    role:
      - worker
    hostname_override: "onap-worker-0"
    user: ubuntu
    ssh_key_path: "~/.ssh/id_rsa"
  <...>

Where <external_ip> is the external worker node IP address, and <internal_ip>
is the internal node IP address, if required.


DNS server configuration and installation
*****************************************
A DNS server deployed on the Kubernetes cluster makes it easy to use the
services exposed through the ingress controller, because it resolves all
subdomains related to the ONAP cluster to the load balancer IP. Testing an
ONAP cluster otherwise requires a lot of entries in /etc/hosts on the target
machines, and adding many entries to the configuration files of testing
machines is problematic and error prone. The better way is to create a
central DNS server with entries for all virtual hosts pointing to
simpledemo.onap.org, and to add this custom DNS server as a target DNS server
for the testing machines and/or as an external DNS for the Kubernetes
cluster.

The DNS server has an automatic installation and configuration script, so
installation is quite easy::

    > cd kubernetes/contrib/dns-server-for-vhost-ingress-testing

    > ./deploy_dns.sh

After the DNS deployment, you need to set up the DNS entry on the target
testing machine. Because the DNS server listens on a non-standard port, the
configuration requires iptables rules on the target machine. Please follow
the configuration proposed by the deploy scripts.
The exact output depends on the IP addresses in use; example output looks
like below::

    DNS server already deployed:
    1. You can add the DNS server to the target machine using following commands:
        sudo iptables -t nat -A OUTPUT -p tcp -d 192.168.211.211 --dport 53 -j DNAT --to-destination 10.10.13.14:31555
        sudo iptables -t nat -A OUTPUT -p udp -d 192.168.211.211 --dport 53 -j DNAT --to-destination 10.10.13.14:31555
        sudo sysctl -w net.ipv4.conf.all.route_localnet=1
        sudo sysctl -w net.ipv4.ip_forward=1
    2. Update /etc/resolv.conf file with nameserver 192.168.211.211 entry on your target machine


MetalLB Load Balancer installation and configuration
****************************************************

By default, a pure Kubernetes cluster requires an external load balancer if
we want to expose an external port using the LoadBalancer service type. For
this purpose, MetalLB can be used. Before installing MetalLB, you need to
ensure that at least one worker node has an assigned IP address that is
accessible from outside the cluster.

The MetalLB load balancer can be easily installed using the automatic install
script::

    > cd kubernetes/contrib/metallb-loadbalancer-inst

    > ./install-metallb-on-cluster.sh
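
After installation, MetalLB needs to know which IP addresses it may hand out
to LoadBalancer services. The install script may already take care of this;
should manual configuration be needed, a sketch for a pre-0.13 MetalLB
release (which is configured through a ConfigMap - newer releases use
``IPAddressPool`` custom resources instead) looks like the following, with a
placeholder address range:

.. code-block:: yaml

  # Hypothetical layer-2 address pool for MetalLB < 0.13 - adjust the
  # namespace, pool name and address range to your environment.
  apiVersion: v1
  kind: ConfigMap
  metadata:
    namespace: metallb-system
    name: config
  data:
    config: |
      address-pools:
      - name: default
        protocol: layer2
        addresses:
        - <external_ip>-<external_ip>
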
Configuration of the NGINX ingress controller
*********************************************

After installation of the DNS server and the MetalLB load balancer, we can
install and configure the NGINX ingress controller.
It can be done using the following commands::

    > cd kubernetes/contrib/ingress-nginx-post-inst

    > kubectl apply -f nginx_ingress_cluster_config.yaml

    > kubectl apply -f nginx_ingress_enable_optional_load_balacer_service.yaml

After deploying the NGINX ingress controller, you can ensure that the ingress
port is exposed as a load balancer service with an external IP address::

    > kubectl get svc -n ingress-nginx
    NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
    default-http-backend   ClusterIP      10.10.10.10   <none>        80/TCP                       25h
    ingress-nginx          LoadBalancer   10.10.10.11   10.12.13.14   80:31308/TCP,443:30314/TCP   24h


ONAP with ingress exposed services
**********************************
If you want to deploy ONAP with its services exposed through the ingress
controller, you can use the full ONAP override file::

    onap/resources/overrides/onap-all-ingress-nginx-vhost.yaml

Ingress can also be enabled on any ONAP setup by adding the following to your
override file:

.. code-block:: yaml

  ---
  <...>
  global:
    <...>
    ingress:
      enabled: true
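
As an illustrative example, assuming the OOM charts have been built and
pushed to a local chart repository named *local* (as described in the OOM
deployment guide), the override could be applied with the OOM deploy plugin
along these lines; the release name *dev* is only a placeholder::

    > helm deploy dev local/onap --namespace onap -f onap/resources/overrides/onap-all-ingress-nginx-vhost.yaml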