Diffstat (limited to 'kud/hosting_providers')
-rw-r--r-- | kud/hosting_providers/containerized/README.md | 23
-rwxr-xr-x | kud/hosting_providers/containerized/installer.sh | 88
-rw-r--r-- | kud/hosting_providers/vagrant/README.md | 14
-rwxr-xr-x | kud/hosting_providers/vagrant/installer.sh | 87
4 files changed, 184 insertions, 28 deletions
diff --git a/kud/hosting_providers/containerized/README.md b/kud/hosting_providers/containerized/README.md index 2f9a9e52..bd5b08a8 100644 --- a/kud/hosting_providers/containerized/README.md +++ b/kud/hosting_providers/containerized/README.md @@ -21,9 +21,9 @@ KUD installation installer is divided into two regions with args - `--install-pk * Container image is build using the `installer --install-pkg` arg and Kubernetes job is used to install the cluster using `installer --cluster <cluster-name>`. Installer will invoke the kubespray cluster.yml, kud-addsons and plugins ansible cluster. -Installer script finds the `hosts.init` for each cluster in `/opt/multi-cluster/<cluster-name>` +Installer script finds the `hosts.ini` for each cluster in `/opt/multi-cluster/<cluster-name>` -Kubernetes jobs(a cluster per job) are used to install multiple clusters and logs of each cluster deployments are stored in the `/opt/kud/multi-cluster/<cluster-name>/logs` and artifacts are stored as follows `/opt/kud/multi-cluster/<cluster-name>/artifacts` +Kubernetes jobs (a cluster per job) are used to install multiple clusters and logs of each cluster deployments are stored in the `/opt/kud/multi-cluster/<cluster-name>/logs` and artifacts are stored as follows `/opt/kud/multi-cluster/<cluster-name>/artifacts` ## Creating TestBed for Testing and Development @@ -38,26 +38,31 @@ $ pushd multicloud-k8s/kud/hosting_providers/containerized/testing $ vagrant up $ popd ``` -Do following steps to keep note of +Do the following steps to keep note of 1. Get the IP address for the Vagrant machine - <VAGRANT_IP_ADDRESS> 2. Copy the host /root/.ssh/id_rsa.pub into the vagrant /root/.ssh/authorized_keys 3. From host make sure to ssh into vagrant without password ssh root@<VAGRANT_IP_ADDRESS> ## Quickstart Installation Guide -Build the kud docker images as follows, add KUD_ENABLE_TESTS & KUD_PLUGIN_ENABLED for the testing only: +Build the kud docker images as follows.
Add `KUD_ENABLE_TESTS` & `KUD_PLUGIN_ENABLED` +for the testing only. Currently only docker and containerd are supported CRI +runtimes and can be configured using the `CONTAINER_RUNTIME` environment variable. +To be able to run secure containers using Kata Containers, it is required to +change the CRI runtime to containerd. ``` $ git clone https://github.com/onap/multicloud-k8s.git && cd multicloud-k8s -$ docker build --rm \ +$ docker build --rm \ --build-arg http_proxy=${http_proxy} \ --build-arg HTTP_PROXY=${HTTP_PROXY} \ --build-arg https_proxy=${https_proxy} \ --build-arg HTTPS_PROXY=${HTTPS_PROXY} \ --build-arg no_proxy=${no_proxy} \ --build-arg NO_PROXY=${NO_PROXY} \ - --build-arg KUD_ENABLE_TESTS=true \ - --build-arg KUD_PLUGIN_ENABLED=true \ + --build-arg KUD_ENABLE_TESTS=true \ + --build-arg KUD_PLUGIN_ENABLED=true \ + --build-arg CONTAINER_RUNTIME=docker \ -t github.com/onap/multicloud-k8s:latest . -f kud/build/Dockerfile ``` Let's create a cluster-101 and cluster-102 hosts.ini as follows @@ -66,7 +71,7 @@ Let's create a cluster-101 and cluster-102 hosts.ini as follows $ mkdir -p /opt/kud/multi-cluster/{cluster-101,cluster-102} ``` -Create hosts.ini as follows in the direcotry cluster-101(c01 IP address 10.10.10.3) and cluster-102(c02 IP address 10.10.10.5). If user used Vagrant setup as mentioned in the above steps, replace the IP address with vagrant IP address +Create the hosts.ini as follows in the directory cluster-101(c01 IP address 10.10.10.3) and cluster-102(c02 IP address 10.10.10.5). If the user used a Vagrant setup as mentioned in the above steps, replace the IP address with the vagrant IP address. ``` $ cat /opt/kud/multi-cluster/cluster-101/hosts.ini @@ -97,7 +102,7 @@ kube-master ``` Do the same for the cluster-102 with c01 and IP address 10.10.10.5. -Create the ssh secret for Baremetal or VM based on your deployment. and Launch the kubernetes job as follows +Create the ssh secret for Baremetal or VM based on your deployment.
Launch the kubernetes job as follows. ``` $ kubectl create secret generic ssh-key-secret --from-file=id_rsa=/root/.ssh/id_rsa --from-file=id_rsa.pub=/root/.ssh/id_rsa.pub $ CLUSTER_NAME=cluster-101 diff --git a/kud/hosting_providers/containerized/installer.sh b/kud/hosting_providers/containerized/installer.sh index 74c031dc..427850ab 100755 --- a/kud/hosting_providers/containerized/installer.sh +++ b/kud/hosting_providers/containerized/installer.sh @@ -14,7 +14,6 @@ set -o pipefail set -ex INSTALLER_DIR="$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")" -KUD_ADDONS="" function install_prerequisites { #install package for docker images @@ -78,17 +77,35 @@ function install_kubespray { fi } -# install_k8s() - Install Kubernetes using kubespray tool +# install_k8s() - Install Kubernetes using kubespray tool including Kata function install_k8s { local cluster_name=$1 ansible-playbook $verbose -i \ $kud_inventory $kud_playbooks/preconfigure-kubespray.yml \ --become --become-user=root | \ tee $cluster_log/setup-kubernetes.log - ansible-playbook $verbose -i \ - $kud_inventory $dest_folder/kubespray-$version/cluster.yml \ - -e cluster_name=$cluster_name --become --become-user=root | \ - tee $cluster_log/setup-kubernetes.log + if [ "$container_runtime" == "docker" ]; then + echo "Docker will be used as the container runtime interface" + ansible-playbook $verbose -i \ + $kud_inventory $dest_folder/kubespray-$version/cluster.yml \ + -e cluster_name=$cluster_name --become --become-user=root | \ + tee $cluster_log/setup-kubernetes.log + elif [ "$container_runtime" == "containerd" ]; then + echo "Containerd will be used as the container runtime interface" + ansible-playbook $verbose -i \ + $kud_inventory $dest_folder/kubespray-$version/cluster.yml \ + -e $kud_kata_override_variables -e cluster_name=$cluster_name \ + --become --become-user=root | \ + tee $cluster_log/setup-kubernetes.log + #Install Kata Containers in containerd scenario + ansible-playbook $verbose -i \ +
$kud_inventory -e "base_dest=$HOME" \ + $kud_playbooks/configure-kata.yml | \ + tee $cluster_log/setup-kata.log + else + echo "Only Docker or Containerd are supported container runtimes" + exit 1 + fi # Configure environment # Requires kubeconfig_localhost and kubectl_localhost to be true @@ -116,21 +133,37 @@ function install_addons { $kud_infra_folder/galaxy-requirements.yml --ignore-errors ansible-playbook $verbose -i \ - $kud_inventory -e "base_dest=$HOME" $kud_playbooks/configure-kud.yml | \ - tee $cluster_log/setup-kud.log - # The order of KUD_ADDONS is important: some plugins (sriov, qat) - # require nfd to be enabled. - for addon in $KUD_ADDONS $plugins_name; do + $kud_inventory -e "base_dest=$HOME" $kud_playbooks/configure-kud.yml \ + | tee $cluster_log/setup-kud.log + + kud_addons="${KUD_ADDONS:-} ${plugins_name}" + + for addon in ${kud_addons}; do echo "Deploying $addon using configure-$addon.yml playbook.." ansible-playbook $verbose -i \ - $kud_inventory -e "base_dest=$HOME" $kud_playbooks/configure-${addon}.yml | \ + $kud_inventory -e "base_dest=$HOME" \ + $kud_playbooks/configure-${addon}.yml | \ tee $cluster_log/setup-${addon}.log done echo "Run the test cases if testing_enabled is set to true."
if [[ "${testing_enabled}" == "true" ]]; then failed_kud_tests="" - for addon in $KUD_ADDONS $plugins_name; do + # Run Kata test first if Kata was installed + if [ "$container_runtime" == "containerd" ]; then + #Install Kata webhook for test pods + ansible-playbook $verbose -i $kud_inventory -e "base_dest=$HOME" \ + -e "kata_webhook_runtimeclass=$kata_webhook_runtimeclass" \ + $kud_playbooks/configure-kata-webhook.yml \ + --become --become-user=root | \ + sudo tee $cluster_log/setup-kata-webhook.log + kata_webhook_deployed=true + pushd $kud_tests + bash kata.sh || failed_kud_tests="${failed_kud_tests} kata" + popd + fi + #Run other plugin tests + for addon in ${kud_addons}; do pushd $kud_tests bash ${addon}.sh || failed_kud_tests="${failed_kud_tests} ${addon}" case $addon in @@ -150,11 +183,30 @@ function install_addons { esac popd done + # Remove Kata webhook if user didn't want it permanently installed + if ! [ "$enable_kata_webhook" == "true" ] && [ "$kata_webhook_deployed" == "true" ]; then + ansible-playbook $verbose -i $kud_inventory -e "base_dest=$HOME" \ + -e "kata_webhook_runtimeclass=$kata_webhook_runtimeclass" \ + $kud_playbooks/configure-kata-webhook-reset.yml \ + --become --become-user=root | \ + sudo tee $cluster_log/kata-webhook-reset.log + kata_webhook_deployed=false + fi if [[ ! -z "$failed_kud_tests" ]]; then echo "Test cases failed:${failed_kud_tests}" return 1 fi fi + + # Check if Kata webhook should be installed and isn't already installed + if [ "$enable_kata_webhook" == "true" ] && ! [ "$kata_webhook_deployed" == "true" ]; then + ansible-playbook $verbose -i $kud_inventory -e "base_dest=$HOME" \ + -e "kata_webhook_runtimeclass=$kata_webhook_runtimeclass" \ + $kud_playbooks/configure-kata-webhook.yml \ + --become --become-user=root | \ + sudo tee $cluster_log/setup-kata-webhook.log + fi + echo "Add-ons deployment complete..."
} @@ -230,6 +282,15 @@ kud_playbooks=$kud_infra_folder/playbooks kud_tests=$kud_folder/../../tests k8s_info_file=$kud_folder/k8s_info.log testing_enabled=${KUD_ENABLE_TESTS:-false} +container_runtime=${CONTAINER_RUNTIME:-docker} +enable_kata_webhook=${ENABLE_KATA_WEBHOOK:-false} +kata_webhook_runtimeclass=${KATA_WEBHOOK_RUNTIMECLASS:-kata-qemu} +kata_webhook_deployed=false +# For containerd the etcd_deployment_type: docker is the default and doesn't work. +# You have to use either etcd_kubeadm_enabled: true or etcd_deployment_type: host +# See https://github.com/kubernetes-sigs/kubespray/issues/5713 +kud_kata_override_variables="container_manager=containerd \ + -e etcd_deployment_type=host -e kubelet_cgroup_driver=cgroupfs" mkdir -p /opt/csar export CSAR_DIR=/opt/csar @@ -336,6 +397,7 @@ if [ "$1" == "--cluster" ]; then exit 0 fi + echo "Error: Refer the installer usage" usage exit 1 diff --git a/kud/hosting_providers/vagrant/README.md b/kud/hosting_providers/vagrant/README.md index 3d0766b3..3a93a73e 100644 --- a/kud/hosting_providers/vagrant/README.md +++ b/kud/hosting_providers/vagrant/README.md @@ -39,6 +39,20 @@ the following instructions: In-depth documentation and use cases of various Vagrant commands [Vagrant commands][3] is available on the Vagrant site. +### CRI Runtimes + +Currently both docker and containerd are supported CRI runtimes. If nothing is +specified then docker will be used by default. This can be changed by setting +the `CONTAINER_RUNTIME` environment variable. To be able to run secure +containers using Kata Containers it is required to change the CRI runtime to +containerd.
+ +``` +$ export CONTAINER_RUNTIME=containerd +``` + + + ## License Apache-2.0 diff --git a/kud/hosting_providers/vagrant/installer.sh b/kud/hosting_providers/vagrant/installer.sh index bc2e91ae..c88dc9e6 100755 --- a/kud/hosting_providers/vagrant/installer.sh +++ b/kud/hosting_providers/vagrant/installer.sh @@ -142,8 +142,31 @@ function install_k8s { echo "https_proxy: \"$https_proxy\"" | tee --append $kud_inventory_folder/group_vars/all.yml fi export ANSIBLE_CONFIG=$dest_folder/kubespray-$version/ansible.cfg - ansible-playbook $verbose -i $kud_inventory $kud_playbooks/preconfigure-kubespray.yml --become --become-user=root | sudo tee $log_folder/setup-kubernetes.log - ansible-playbook $verbose -i $kud_inventory $dest_folder/kubespray-$version/cluster.yml --become --become-user=root | sudo tee $log_folder/setup-kubernetes.log + + ansible-playbook $verbose -i $kud_inventory \ + $kud_playbooks/preconfigure-kubespray.yml --become --become-user=root \ + | sudo tee $log_folder/setup-kubernetes.log + if [ "$container_runtime" == "docker" ]; then + /bin/echo -e "\n\e[1;42mDocker will be used as the container runtime interface\e[0m" + ansible-playbook $verbose -i $kud_inventory \ + $dest_folder/kubespray-$version/cluster.yml --become \ + --become-user=root | sudo tee $log_folder/setup-kubernetes.log + elif [ "$container_runtime" == "containerd" ]; then + /bin/echo -e "\n\e[1;42mContainerd will be used as the container runtime interface\e[0m" + # Because the kud_kata_override_variable has its own quotations in it + # a eval command is needed to properly execute the ansible script + ansible_kubespray_cmd="ansible-playbook $verbose -i $kud_inventory \ + $dest_folder/kubespray-$version/cluster.yml \ + -e ${kud_kata_override_variables} --become --become-user=root | \ + sudo tee $log_folder/setup-kubernetes.log" + eval $ansible_kubespray_cmd + ansible-playbook $verbose -i $kud_inventory -e "base_dest=$HOME" \ + $kud_playbooks/configure-kata.yml --become --become-user=root | \ +
sudo tee $log_folder/setup-kata.log + else + echo "Only Docker or Containerd are supported container runtimes" + exit 1 + fi # Configure environment mkdir -p $HOME/.kube @@ -159,25 +182,66 @@ function install_addons { _install_ansible sudo ansible-galaxy install $verbose -r $kud_infra_folder/galaxy-requirements.yml --ignore-errors ansible-playbook $verbose -i $kud_inventory -e "base_dest=$HOME" $kud_playbooks/configure-kud.yml | sudo tee $log_folder/setup-kud.log + # The order of KUD_ADDONS is important: some plugins (sriov, qat) - # require nfd to be enabled. - for addon in ${KUD_ADDONS:-topology-manager virtlet ovn4nfv nfd sriov qat optane cmk}; do + # require nfd to be enabled. Some addons are not currently supported with containerd + if [ "${container_runtime}" == "docker" ]; then + kud_addons=${KUD_ADDONS:-topology-manager virtlet ovn4nfv nfd sriov \ + qat optane cmk} + elif [ "${container_runtime}" == "containerd" ]; then + kud_addons=${KUD_ADDONS:-ovn4nfv nfd} + fi + + for addon in ${kud_addons}; do echo "Deploying $addon using configure-$addon.yml playbook.." - ansible-playbook $verbose -i $kud_inventory -e "base_dest=$HOME" $kud_playbooks/configure-${addon}.yml | sudo tee $log_folder/setup-${addon}.log + ansible-playbook $verbose -i $kud_inventory -e "base_dest=$HOME" \ + $kud_playbooks/configure-${addon}.yml | \ + sudo tee $log_folder/setup-${addon}.log done + echo "Run the test cases if testing_enabled is set to true."
if [[ "${testing_enabled}" == "true" ]]; then failed_kud_tests="" - for addon in ${KUD_ADDONS:-multus topology-manager virtlet ovn4nfv nfd sriov qat optane cmk}; do + # Run Kata test first if Kata was installed + if [ "${container_runtime}" == "containerd" ]; then + #Install Kata webhook for test pods + ansible-playbook $verbose -i $kud_inventory -e "base_dest=$HOME" \ + -e "kata_webhook_runtimeclass=$kata_webhook_runtimeclass" \ + $kud_playbooks/configure-kata-webhook.yml \ + --become --become-user=root | \ + sudo tee $log_folder/setup-kata-webhook.log + kata_webhook_deployed=true + pushd $kud_tests + bash kata.sh || failed_kud_tests="${failed_kud_tests} kata" + popd + fi + # Run other plugin tests + for addon in ${kud_addons}; do pushd $kud_tests bash ${addon}.sh || failed_kud_tests="${failed_kud_tests} ${addon}" popd done + # Remove Kata webhook if user didn't want it permanently installed + if ! [ "${enable_kata_webhook}" == "true" ]; then + ansible-playbook $verbose -i $kud_inventory -e "base_dest=$HOME" \ + -e "kata_webhook_runtimeclass=$kata_webhook_runtimeclass" \ + $kud_playbooks/configure-kata-webhook-reset.yml \ + --become --become-user=root | \ + sudo tee $log_folder/kata-webhook-reset.log + fi if [[ ! -z "$failed_kud_tests" ]]; then echo "Test cases failed:${failed_kud_tests}" return 1 fi fi + # Check if Kata webhook should be installed and isn't already installed + if [ "$enable_kata_webhook" == "true" ] && ! [ "$kata_webhook_deployed" == "true" ]; then + ansible-playbook $verbose -i $kud_inventory -e "base_dest=$HOME" \ + -e "kata_webhook_runtimeclass=$kata_webhook_runtimeclass" \ + $kud_playbooks/configure-kata-webhook.yml \ + --become --become-user=root | \ + sudo tee $log_folder/setup-kata-webhook.log + fi echo "Add-ons deployment complete..."
} @@ -248,6 +312,17 @@ kud_playbooks=$kud_infra_folder/playbooks kud_tests=$kud_folder/../../tests k8s_info_file=$kud_folder/k8s_info.log testing_enabled=${KUD_ENABLE_TESTS:-false} +container_runtime=${CONTAINER_RUNTIME:-docker} +enable_kata_webhook=${ENABLE_KATA_WEBHOOK:-false} +kata_webhook_runtimeclass=${KATA_WEBHOOK_RUNTIMECLASS:-kata-clh} +kata_webhook_deployed=false +# For containerd the etcd_deployment_type: docker is the default and doesn't work. +# You have to use either etcd_kubeadm_enabled: true or etcd_deployment_type: host +# See https://github.com/kubernetes-sigs/kubespray/issues/5713 +kud_kata_override_variables="container_manager=containerd \ + -e etcd_deployment_type=host -e kubelet_cgroup_driver=cgroupfs \ + -e \"{'download_localhost': false}\" -e \"{'download_run_once': false}\"" + sudo mkdir -p $log_folder sudo mkdir -p /opt/csar sudo chown -R $USER /opt/csar
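The runtime-selection logic this patch adds to both installer variants can be sketched in isolation. This is a minimal standalone sketch, not the shipped script: the `ansible-playbook` invocations are elided and only the `CONTAINER_RUNTIME` branching that both `install_k8s` implementations share is shown.

```shell
#!/bin/sh
# Sketch of the CONTAINER_RUNTIME handling added to install_k8s in both
# installers. Only docker and containerd are accepted; the kubespray and
# configure-kata.yml playbook runs are elided.
container_runtime=${CONTAINER_RUNTIME:-docker}

select_runtime() {
    case "$container_runtime" in
        docker)
            echo "Docker will be used as the container runtime interface"
            ;;
        containerd)
            # The containerd path also passes kud_kata_override_variables to
            # kubespray and then runs configure-kata.yml to enable Kata.
            echo "Containerd will be used as the container runtime interface"
            ;;
        *)
            echo "Only Docker or Containerd are supported container runtimes"
            return 1
            ;;
    esac
}

select_runtime
```

Exporting `CONTAINER_RUNTIME=containerd` before invoking the installer (or passing it as a `--build-arg` for the containerized flow) takes the second branch.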
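The Kata-webhook lifecycle in `install_addons` follows a deploy/test/reset ordering that is easy to misread in the diff. The sketch below, under the assumption that `deploy_webhook` and `reset_webhook` are hypothetical stand-ins for the `configure-kata-webhook.yml` and `configure-kata-webhook-reset.yml` playbook runs, shows just that control flow.

```shell
#!/bin/sh
# Sketch of the Kata webhook ordering in install_addons: deploy for tests,
# remove unless requested permanently, install permanently if requested.
container_runtime=${CONTAINER_RUNTIME:-docker}
enable_kata_webhook=${ENABLE_KATA_WEBHOOK:-false}
testing_enabled=${KUD_ENABLE_TESTS:-false}
kata_webhook_deployed=false

deploy_webhook() { kata_webhook_deployed=true;  echo "webhook deployed"; }
reset_webhook()  { kata_webhook_deployed=false; echo "webhook removed"; }

run_addons() {
    if [ "$testing_enabled" = "true" ]; then
        # Test pods need the webhook so they get the Kata runtime class
        if [ "$container_runtime" = "containerd" ]; then
            deploy_webhook
        fi
        # ... kata.sh and the other addon tests run here ...
        # Remove the webhook again unless the user wants it kept
        if [ "$enable_kata_webhook" != "true" ] && \
           [ "$kata_webhook_deployed" = "true" ]; then
            reset_webhook
        fi
    fi
    # Install permanently if requested and not already deployed
    if [ "$enable_kata_webhook" = "true" ] && \
       [ "$kata_webhook_deployed" != "true" ]; then
        deploy_webhook
    fi
}

run_addons
```

Note the containerized installer guards the reset on `kata_webhook_deployed`, while the vagrant variant resets whenever `enable_kata_webhook` is false; the sketch follows the containerized behavior.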