Diffstat (limited to 'kud/hosting_providers')
-rw-r--r--  kud/hosting_providers/baremetal/README.md                                | 121
-rwxr-xr-x  kud/hosting_providers/baremetal/aio.sh                                    |   2
-rwxr-xr-x  kud/hosting_providers/containerized/installer.sh                          |  60
-rw-r--r--  kud/hosting_providers/containerized/inventory/group_vars/k8s-cluster.yml  |  39
-rw-r--r--  kud/hosting_providers/vagrant/README.md                                   |   8
-rwxr-xr-x  kud/hosting_providers/vagrant/installer.sh                                |  17
-rw-r--r--  kud/hosting_providers/vagrant/inventory/group_vars/k8s-cluster.yml        |  37
-rwxr-xr-x  kud/hosting_providers/vagrant/setup.sh                                    |   2
8 files changed, 237 insertions, 49 deletions
diff --git a/kud/hosting_providers/baremetal/README.md b/kud/hosting_providers/baremetal/README.md
index 5e1edf79..aabdf2b8 100644
--- a/kud/hosting_providers/baremetal/README.md
+++ b/kud/hosting_providers/baremetal/README.md
@@ -4,23 +4,128 @@
This project offers a means for deploying a Kubernetes cluster
that satisfies the requirements of [ONAP multicloud/k8s plugin][1]. Its
-ansible playbooks allow to provision a deployment on Baremetal.
-
+ansible playbooks allow provisioning a deployment on Baremetal.
![Diagram](../../../docs/img/installer_workflow.png)
+## Kubernetes Baremetal Deployment Setup Instructions
+
+1. Hardware Requirements
+1. Software Requirements
+1. Instructions to run KUD on Baremetal environment
+1. aio.sh Explained
+1. Enabling Nested-Virtualization
+1. Deploying KUD Services
+1. Running test cases
+
+## Bare-Metal Provisioning
+
+The Kubernetes Deployment, a.k.a. KUD, is designed to run on Virtual Machines as well as Bare-Metal servers. The `aio.sh` script contains the bash instructions for provisioning an All-in-One Kubernetes deployment on a Bare-Metal server.
+
+This document lists the Hardware & Software requirements and provides a walkthrough for setting up an All-in-One (AIO) deployment using `aio.sh`.
+
+## Hardware Requirements
+
+* CPUs: 8
+* Memory: 32 GB
+* Hard Disk: 150 GB
+
+## Software Requirements
+Ubuntu Server 18.04 LTS
+
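+The installed release can be confirmed before starting (a quick sanity check; the walkthrough assumes this exact release):
+
+`$ lsb_release -ds`
+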
+## Instructions to run KUD on Baremetal environment
+Prepare the environment and clone the repo:
+
+`$ sudo apt-get update -y`
+
+`$ sudo apt-get upgrade -y`
+
+`$ sudo apt-get install -y python-pip`
+
+`$ git clone https://git.onap.org/multicloud/k8s/`
+
+## Run the script to set up KUD
+
+`$ k8s/kud/hosting_providers/baremetal/aio.sh`
+
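+The script must run with root privileges (see the next section). A minimal sketch, assuming the repository was cloned under the invoking user's home directory (adjust the path to match):
+
+```
+$ sudo -i
+# cd /home/<user>/k8s/kud/hosting_providers/baremetal
+# ./aio.sh
+```
+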
+## [aio.sh](aio.sh) Explained
+This bash script provides an automated process for deploying an All-in-One Kubernetes cluster. Because the ansible inventory file created by this script doesn't specify a user or password, the script must be executed as the root user.
+
+Overall, this script can be summarized in three general phases:
+
+1. Cloning and configuring the KUD project.
+1. Enabling Nested-Virtualization.
+1. Deploying KUD services.
+
+KUD requires multiple files (bash scripts and ansible playbooks) to operate. Therefore, it's necessary to clone the *ONAP multicloud/k8s* project to get access to the *vagrant* folder.
-## Deployment
+Ansible works with multiple systems; they are selected through the inventory. The inventory file is a static source for determining the target servers used for the execution of ansible tasks.
-The [installer](installer.sh) bash script contains the minimal
-Ubuntu instructions required for running this project.
+The *aio.sh* script creates an inventory file that addresses those tasks to localhost. The inventory file needs to be explicitly updated, setting *ansible_ssh_host* to the IP address of the machine (the host IP) along with *ansible_ssh_port*; some of the test cases need this in order to run.
-NOTE: for cmk bare metal deployment, preset 1/2 CPUs for
- shared/exlusive pools respectively to fit CI server machines
- users can adjust the parameters to meet their own requirements.
+### Create the hosts.ini file for Kubespray and Ansible
+```
+
+cat <<EOL > ../vagrant/inventory/hosts.ini
+[all]
+localhost ansible_ssh_host=10.10.110.21 ansible_ssh_port=22
+# The ansible_ssh_host IP is an example here. Please update the ansible_ssh_host IP accordingly
+
+[kube-master]
+localhost
+
+[kube-node]
+localhost
+
+[etcd]
+localhost
+
+[ovn-central]
+localhost
+
+[ovn-controller]
+localhost
+
+[virtlet]
+localhost
+
+[k8s-cluster:children]
+kube-node
+kube-master
+EOL
+
+```
+
+KUD consumes [kubespray](https://github.com/kubernetes-sigs/kubespray) for provisioning a Kubernetes base deployment. As part of the deployment process, this tool downloads and configures the *kubectl* binary.
+
+Ansible uses the SSH protocol for executing remote instructions. The following instructions create and register ssh keys, which avoids the need for passwords.
+
+### Generate ssh-keys
+`$ echo -e "\n\n\n" | ssh-keygen -t rsa -N ""`
+
+`$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys`
+
+`$ chmod og-wx ~/.ssh/authorized_keys`
+
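+Passwordless login can then be verified with a quick round trip:
+
+`$ ssh -o StrictHostKeyChecking=no localhost hostname`
+
+Once ansible is available (the installer below installs it), the inventory can be exercised end to end with an ad-hoc ping:
+
+`$ ansible -i ../vagrant/inventory/hosts.ini all -m ping`
+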
+### Enabling Nested-Virtualization
+KUD installs [Virtlet](https://github.com/Mirantis/virtlet) Kubernetes CRI for running Virtual Machine workloads. Nested-virtualization gives the ability to run a Virtual Machine within another. The [node.sh](../vagrant/node.sh) bash script contains the instructions for enabling Nested-Virtualization.
+
+#### Enable nested virtualization
+`$ sudo ../vagrant/node.sh`
+
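+On Intel hosts the change can be confirmed by reading the kvm_intel module parameter (AMD hosts expose the equivalent flag through kvm_amd); a value of `Y`, or `1` on newer kernels, means nested virtualization is enabled:
+
+`$ cat /sys/module/kvm_intel/parameters/nested`
+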
+### Deploying KUD Services
+Finally, the KUD provisioning process can be started through the use of the [installer](../vagrant/installer.sh) bash script. The output of this script is collected in the *kud_installer.log* file for future reference.
+
+#### Bring the cluster up by running the following
+`$ ../vagrant/installer.sh | tee kud_installer.log`
+
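+Once the script finishes, the cluster can be inspected with the *kubectl* binary that kubespray configured earlier:
+
+`$ kubectl get nodes -o wide`
+
+`$ kubectl get pods --all-namespaces`
+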
+## Running test cases
+The *kud/tests* folder contains the health check scripts that verify the proper installation and configuration of the Kubernetes add-ons; examples include *virtlet.sh*, *multus.sh*, and *ovn4nfv.sh*.
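+
+An individual check can be run directly from the tests folder, for example (assuming the repository layout described above):
+
+`$ cd k8s/kud/tests`
+
+`$ bash multus.sh`
+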
## License
Apache-2.0
[1]: https://git.onap.org/multicloud/k8s
+
diff --git a/kud/hosting_providers/baremetal/aio.sh b/kud/hosting_providers/baremetal/aio.sh
index 6a304141..e16a082b 100755
--- a/kud/hosting_providers/baremetal/aio.sh
+++ b/kud/hosting_providers/baremetal/aio.sh
@@ -21,7 +21,7 @@ OVN_CENTRAL_IP_ADDRESS=${OVN_CENTRAL_IP_ADDRESS:-$(hostname -I | cut -d ' ' -f 1
echo "Preparing inventory for ansible"
cat <<EOL > inventory/hosts.ini
[all]
-localhost ansible_ssh_host=${OVN_CENTRAL_IP_ADDRESS} ansible_ssh_port=22
+localhost ansible_ssh_host=${OVN_CENTRAL_IP_ADDRESS} ansible_ssh_port=22 download_run_once=False download_localhost=False download_cache_dir=/tmp/kubespray_cache retry_stagger=10
[kube-master]
localhost
diff --git a/kud/hosting_providers/containerized/installer.sh b/kud/hosting_providers/containerized/installer.sh
index c443eaf1..b2ec52af 100755
--- a/kud/hosting_providers/containerized/installer.sh
+++ b/kud/hosting_providers/containerized/installer.sh
@@ -36,7 +36,6 @@ function _install_ansible {
pip install --no-cache-dir ansible==$version
}
-# install_k8s() - Install Kubernetes using kubespray tool
function install_kubespray {
echo "Deploying kubernetes"
version=$(grep "kubespray_version" ${kud_playbooks}/kud-vars.yml | \
@@ -50,7 +49,6 @@ function install_kubespray {
_install_ansible
wget https://github.com/kubernetes-incubator/kubespray/archive/$tarball
tar -C $dest_folder -xzf $tarball
- mv $dest_folder/kubespray-$version/ansible.cfg /etc/ansible/ansible.cfg
chown -R root:root $dest_folder/kubespray-$version
mkdir -p ${local_release_dir}/containers
rm $tarball
@@ -79,11 +77,14 @@ function install_kubespray {
fi
}
+# install_k8s() - Install Kubernetes using kubespray tool
function install_k8s {
- version=$(grep "kubespray_version" ${kud_playbooks}/kud-vars.yml | \
- awk -F ': ' '{print $2}')
local cluster_name=$1
ansible-playbook $verbose -i \
+ $kud_inventory $kud_playbooks/preconfigure-kubespray.yml \
+ --become --become-user=root | \
+        tee $cluster_log/preconfigure-kubespray.log
+ ansible-playbook $verbose -i \
$kud_inventory $dest_folder/kubespray-$version/cluster.yml \
-e cluster_name=$cluster_name --become --become-user=root | \
tee $cluster_log/setup-kubernetes.log
@@ -119,7 +120,9 @@ function install_addons {
ansible-playbook $verbose -i \
$kud_inventory -e "base_dest=$HOME" $kud_playbooks/configure-kud.yml | \
tee $cluster_log/setup-kud.log
- for addon in ${KUD_ADDONS:-virtlet ovn4nfv nfd sriov cmk $plugins_name}; do
+ # The order of KUD_ADDONS is important: some plugins (sriov, qat)
+ # require nfd to be enabled.
+ for addon in ${KUD_ADDONS:-virtlet ovn4nfv nfd sriov qat cmk $plugins_name}; do
echo "Deploying $addon using configure-$addon.yml playbook.."
ansible-playbook $verbose -i \
$kud_inventory -e "base_dest=$HOME" $kud_playbooks/configure-${addon}.yml | \
@@ -128,30 +131,34 @@ function install_addons {
echo "Run the test cases if testing_enabled is set to true."
if [[ "${testing_enabled}" == "true" ]]; then
- for addon in ${KUD_ADDONS:-virtlet ovn4nfv nfd sriov cmk $plugins_name}; do
+ failed_kud_tests=""
+ for addon in ${KUD_ADDONS:-virtlet ovn4nfv nfd sriov qat cmk $plugins_name}; do
pushd $kud_tests
- bash ${addon}.sh
+ bash ${addon}.sh || failed_kud_tests="${failed_kud_tests} ${addon}"
+ case $addon in
+ "onap4k8s" )
+ echo "Test the onap4k8s plugin installation"
+ for functional_test in plugin_edgex plugin_fw plugin_eaa; do
+ bash ${functional_test}.sh --external || failed_kud_tests="${failed_kud_tests} ${functional_test}"
+ done
+ ;;
+ "emco" )
+ echo "Test the emco plugin installation"
+ for functional_test in plugin_fw_v2; do
+ bash ${functional_test}.sh --external || failed_kud_tests="${failed_kud_tests} ${functional_test}"
+ done
+ ;;
+ esac
popd
done
+ if [[ ! -z "$failed_kud_tests" ]]; then
+ echo "Test cases failed:${failed_kud_tests}"
+ return 1
+ fi
fi
echo "Add-ons deployment complete..."
}
-# install_plugin() - Install ONAP Multicloud Kubernetes plugin
-function install_plugin {
- echo "Installing multicloud/k8s onap4k8s plugin"
- if [[ "${testing_enabled}" == "true" ]]; then
- pushd $kud_tests
- echo "Test the onap4k8s installation"
- bash onap4k8s.sh
- echo "Test the onap4k8s plugin installation"
- for functional_test in plugin_edgex plugin_fw plugin_eaa; do
- bash ${functional_test}.sh --external
- done
- popd
- fi
-}
-
# _print_kubernetes_info() - Prints the login Kubernetes information
function _print_kubernetes_info {
if ! $(kubectl version &>/dev/null); then
@@ -200,6 +207,9 @@ function install_pkg {
}
function install_cluster {
+ version=$(grep "kubespray_version" ${kud_playbooks}/kud-vars.yml | \
+ awk -F ': ' '{print $2}')
+ export ANSIBLE_CONFIG=$dest_folder/kubespray-$version/ansible.cfg
install_k8s $1
if [ ${2:+1} ]; then
echo "install default addons and $2"
@@ -207,12 +217,8 @@ function install_cluster {
else
install_addons
fi
-
echo "installed the addons"
- if ${KUD_PLUGIN_ENABLED:-false}; then
- install_plugin
- echo "installed the install_plugin"
- fi
+
_print_kubernetes_info
}
diff --git a/kud/hosting_providers/containerized/inventory/group_vars/k8s-cluster.yml b/kud/hosting_providers/containerized/inventory/group_vars/k8s-cluster.yml
index 5560dd97..18a55035 100644
--- a/kud/hosting_providers/containerized/inventory/group_vars/k8s-cluster.yml
+++ b/kud/hosting_providers/containerized/inventory/group_vars/k8s-cluster.yml
@@ -49,14 +49,9 @@ kubectl_localhost: true
local_volumes_enabled: true
local_volume_provisioner_enabled: true
-## Change this to use another Kubernetes version, e.g. a current beta release
-kube_version: v1.16.9
-
# Helm deployment
helm_enabled: true
-docker_version: 'latest'
-
# Kube-proxy proxyMode configuration.
# NOTE: Ipvs is based on netfilter hook function, but uses hash table as the underlying data structure and
# works in the kernel space
@@ -84,3 +79,37 @@ kube_pods_subnet: 10.244.64.0/18
# disable localdns cache
enable_nodelocaldns: false
+
+# pod security policy (RBAC must be enabled either by having 'RBAC' in authorization_modes or kubeadm enabled)
+podsecuritypolicy_enabled: true
+# The restricted spec is identical to the kubespray podsecuritypolicy_privileged_spec, with the replacement of
+# allowedCapabilities:
+# - '*'
+# by
+# requiredDropCapabilities:
+# - NET_RAW
+podsecuritypolicy_restricted_spec:
+ privileged: true
+ allowPrivilegeEscalation: true
+ volumes:
+ - '*'
+ hostNetwork: true
+ hostPorts:
+ - min: 0
+ max: 65535
+ hostIPC: true
+ hostPID: true
+ requiredDropCapabilities:
+ - NET_RAW
+ runAsUser:
+ rule: 'RunAsAny'
+ seLinux:
+ rule: 'RunAsAny'
+ supplementalGroups:
+ rule: 'RunAsAny'
+ fsGroup:
+ rule: 'RunAsAny'
+ readOnlyRootFilesystem: false
+ # This will fail if allowed-unsafe-sysctls is not set accordingly in kubelet flags
+ allowedUnsafeSysctls:
+ - '*'
diff --git a/kud/hosting_providers/vagrant/README.md b/kud/hosting_providers/vagrant/README.md
index f0210149..3d0766b3 100644
--- a/kud/hosting_providers/vagrant/README.md
+++ b/kud/hosting_providers/vagrant/README.md
@@ -23,6 +23,14 @@ its usage. This script supports two Virtualization technologies
$ sudo ./setup.sh -p libvirt
+There is a `default.yml` in the `./config` directory which defines multiple controllers and nodes.
+There are also sample configurations in the `./config/samples` directory. To use one of the samples,
+copy it into the `./config` directory as `pdf.yml`. If a `pdf.yml` exists in the `./config`
+directory, it overrides the `default.yml` when the `vagrant up` command (in the next step) is run.
+For example:
+
+ $ cp ./config/samples/pdf.yml.aio ./config/pdf.yml
+
Once Vagrant is installed, it's possible to provision a cluster using
the following instructions:
diff --git a/kud/hosting_providers/vagrant/installer.sh b/kud/hosting_providers/vagrant/installer.sh
index 27ab7fc1..43638b4f 100755
--- a/kud/hosting_providers/vagrant/installer.sh
+++ b/kud/hosting_providers/vagrant/installer.sh
@@ -102,6 +102,7 @@ function _set_environment_file {
echo "export OVN_CENTRAL_ADDRESS=$(get_ovn_central_address)" | sudo tee --append /etc/environment
echo "export KUBE_CONFIG_DIR=/opt/kubeconfig" | sudo tee --append /etc/environment
echo "export CSAR_DIR=/opt/csar" | sudo tee --append /etc/environment
+ echo "export ANSIBLE_CONFIG=${ANSIBLE_CONFIG}" | sudo tee --append /etc/environment
}
# install_k8s() - Install Kubernetes using kubespray tool
@@ -117,7 +118,6 @@ function install_k8s {
_install_ansible
wget https://github.com/kubernetes-incubator/kubespray/archive/$tarball
sudo tar -C $dest_folder -xzf $tarball
- sudo mv $dest_folder/kubespray-$version/ansible.cfg /etc/ansible/ansible.cfg
sudo chown -R $USER $dest_folder/kubespray-$version
sudo mkdir -p ${local_release_dir}/containers
rm $tarball
@@ -139,6 +139,8 @@ function install_k8s {
if [[ -n "${https_proxy:-}" ]]; then
echo "https_proxy: \"$https_proxy\"" | tee --append $kud_inventory_folder/group_vars/all.yml
fi
+ export ANSIBLE_CONFIG=$dest_folder/kubespray-$version/ansible.cfg
+    ansible-playbook $verbose -i $kud_inventory $kud_playbooks/preconfigure-kubespray.yml --become --become-user=root | sudo tee $log_folder/preconfigure-kubespray.log
ansible-playbook $verbose -i $kud_inventory $dest_folder/kubespray-$version/cluster.yml --become --become-user=root | sudo tee $log_folder/setup-kubernetes.log
# Configure environment
@@ -155,17 +157,24 @@ function install_addons {
_install_ansible
sudo ansible-galaxy install $verbose -r $kud_infra_folder/galaxy-requirements.yml --ignore-errors
ansible-playbook $verbose -i $kud_inventory -e "base_dest=$HOME" $kud_playbooks/configure-kud.yml | sudo tee $log_folder/setup-kud.log
- for addon in ${KUD_ADDONS:-virtlet ovn4nfv nfd sriov qat optane cmk}; do
+ # The order of KUD_ADDONS is important: some plugins (sriov, qat)
+ # require nfd to be enabled.
+ for addon in ${KUD_ADDONS:-topology-manager virtlet ovn4nfv nfd sriov qat optane cmk}; do
echo "Deploying $addon using configure-$addon.yml playbook.."
ansible-playbook $verbose -i $kud_inventory -e "base_dest=$HOME" $kud_playbooks/configure-${addon}.yml | sudo tee $log_folder/setup-${addon}.log
done
echo "Run the test cases if testing_enabled is set to true."
if [[ "${testing_enabled}" == "true" ]]; then
- for addon in ${KUD_ADDONS:-multus virtlet ovn4nfv nfd sriov qat optane cmk}; do
+ failed_kud_tests=""
+ for addon in ${KUD_ADDONS:-multus topology-manager virtlet ovn4nfv nfd sriov qat optane cmk}; do
pushd $kud_tests
- bash ${addon}.sh
+ bash ${addon}.sh || failed_kud_tests="${failed_kud_tests} ${addon}"
popd
done
+ if [[ ! -z "$failed_kud_tests" ]]; then
+ echo "Test cases failed:${failed_kud_tests}"
+ return 1
+ fi
fi
echo "Add-ons deployment complete..."
}
diff --git a/kud/hosting_providers/vagrant/inventory/group_vars/k8s-cluster.yml b/kud/hosting_providers/vagrant/inventory/group_vars/k8s-cluster.yml
index 30fd5c0b..5b06b788 100644
--- a/kud/hosting_providers/vagrant/inventory/group_vars/k8s-cluster.yml
+++ b/kud/hosting_providers/vagrant/inventory/group_vars/k8s-cluster.yml
@@ -50,9 +50,6 @@ enable_nodelocaldns: false
local_volumes_enabled: true
local_volume_provisioner_enabled: true
-## Change this to use another Kubernetes version, e.g. a current beta release
-kube_version: v1.16.9
-
# Helm deployment
helm_enabled: true
@@ -79,3 +76,37 @@ download_localhost: True
kube_service_addresses: 10.244.0.0/18
# Subnet for Pod IPs
kube_pods_subnet: 10.244.64.0/18
+
+# pod security policy (RBAC must be enabled either by having 'RBAC' in authorization_modes or kubeadm enabled)
+podsecuritypolicy_enabled: true
+# The restricted spec is identical to the kubespray podsecuritypolicy_privileged_spec, with the replacement of
+# allowedCapabilities:
+# - '*'
+# by
+# requiredDropCapabilities:
+# - NET_RAW
+podsecuritypolicy_restricted_spec:
+ privileged: true
+ allowPrivilegeEscalation: true
+ volumes:
+ - '*'
+ hostNetwork: true
+ hostPorts:
+ - min: 0
+ max: 65535
+ hostIPC: true
+ hostPID: true
+ requiredDropCapabilities:
+ - NET_RAW
+ runAsUser:
+ rule: 'RunAsAny'
+ seLinux:
+ rule: 'RunAsAny'
+ supplementalGroups:
+ rule: 'RunAsAny'
+ fsGroup:
+ rule: 'RunAsAny'
+ readOnlyRootFilesystem: false
+ # This will fail if allowed-unsafe-sysctls is not set accordingly in kubelet flags
+ allowedUnsafeSysctls:
+ - '*'
diff --git a/kud/hosting_providers/vagrant/setup.sh b/kud/hosting_providers/vagrant/setup.sh
index 00b6e86f..79bf60c4 100755
--- a/kud/hosting_providers/vagrant/setup.sh
+++ b/kud/hosting_providers/vagrant/setup.sh
@@ -107,7 +107,7 @@ case ${ID,,} in
case $VAGRANT_DEFAULT_PROVIDER in
virtualbox)
- echo "deb http://download.virtualbox.org/virtualbox/debian trusty contrib" >> /etc/apt/sources.list
+ echo "deb http://download.virtualbox.org/virtualbox/debian bionic contrib" >> /etc/apt/sources.list
wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add -
wget -q https://www.virtualbox.org/download/oracle_vbox.asc -O- | sudo apt-key add -
packages+=(virtualbox-5.1 dkms)