[Kubernetes (K8S) Kubespray] Use Kubespray to deploy a Production Ready Kubernetes Cluster

Kubespray

Kubespray allows you to deploy a production-ready Kubernetes cluster (using Ansible or Vagrant) and since v2.3 can work together with Kubernetes kubeadm.

Features

  • Can be deployed on AWS, GCE, Azure, OpenStack, vSphere, Equinix Metal (bare metal), Oracle Cloud Infrastructure (Experimental), or bare metal

  • Highly available cluster

  • Composable (Choice of the network plugin for instance)

  • Supports most popular Linux distributions

  • Continuous integration tests

Usage

There are two ways to run Kubespray: directly from a shell, or inside the official Docker container.

Shell Mode

# Download kubespray
$ git clone git@github.com:kubernetes-sigs/kubespray.git

$ cd kubespray

# Install dependencies from ``requirements.txt``
$ sudo pip3 install -r requirements.txt

# Copy ``inventory/sample`` as ``inventory/mycluster``
$ cp -rfp inventory/sample inventory/mycluster

# Update Ansible inventory file with inventory builder
$ declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
$ CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Review and change parameters under ``inventory/mycluster/group_vars``
$ cat inventory/mycluster/group_vars/all/all.yml
$ cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, as for example writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
$ ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
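For reference, the inventory builder writes hosts.yaml in roughly the following shape. This hand-written sketch uses the sample IPs above with placeholder node names; it is a minimal layout for illustration, not the full generated file:

```shell
# Sketch: a minimal hosts.yaml in the layout the inventory builder produces.
# Node names and the /tmp path are placeholders.
mkdir -p /tmp/mycluster
cat > /tmp/mycluster/hosts.yaml <<'EOF'
all:
  hosts:
    node1:
      ansible_host: 10.10.1.3
      ip: 10.10.1.3
    node2:
      ansible_host: 10.10.1.4
      ip: 10.10.1.4
    node3:
      ansible_host: 10.10.1.5
      ip: 10.10.1.5
  children:
    kube_control_plane:
      hosts:
        node1:
    kube_node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
EOF
echo "hosts: $(grep -c ansible_host /tmp/mycluster/hosts.yaml)"
```

Editing this file by hand is handy when you want a node in kube_node but not in etcd, which the builder cannot express from IPs alone.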

Docker Container Mode

$ docker pull quay.io/kubespray/kubespray:v2.16.0

# Enter into Docker container
$ docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
--mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
quay.io/kubespray/kubespray:v2.16.0 bash

# Update Ansible inventory file with inventory builder
root@<container-id>:/kubespray# declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
root@<container-id>:/kubespray# CONFIG_FILE=/inventory/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Review and change parameters under ``/inventory/group_vars``
root@<container-id>:/kubespray# cat /inventory/group_vars/all/all.yml
root@<container-id>:/kubespray# cat /inventory/group_vars/k8s_cluster/k8s-cluster.yml

# Inside the container you may now run the Kubespray playbooks:
root@<container-id>:/kubespray# ansible-playbook -i /inventory/hosts.yaml --private-key /root/.ssh/id_rsa cluster.yml
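The docker invocation above is long enough to be worth wrapping in a small script. A hedged sketch (the script name and path are hypothetical; it assumes your inventory directory is passed as the first argument):

```shell
# Hypothetical wrapper around the docker invocation above; only syntax-checked here.
cat > /tmp/run-kubespray.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
INVENTORY="${1:?usage: run-kubespray.sh <inventory-dir>}"
docker run --rm -it \
  --mount type=bind,source="${INVENTORY}",dst=/inventory \
  --mount type=bind,source="${HOME}/.ssh/id_rsa",dst=/root/.ssh/id_rsa \
  quay.io/kubespray/kubespray:v2.16.0 \
  ansible-playbook -i /inventory/hosts.yaml --private-key /root/.ssh/id_rsa cluster.yml
EOF
bash -n /tmp/run-kubespray.sh && echo "syntax OK"
```

Running the playbook directly as the container command (instead of dropping into bash first) keeps each deploy a single, repeatable invocation.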

Advanced

Proxy Configuration

# inventory/mycluster/group_vars/all/all.yml

## Set these proxy values in order to update package manager and docker daemon to use proxies
http_proxy: "http://192.168.88.130:38001"
https_proxy: "http://192.168.88.130:38001"

## Refer to roles/kubespray-defaults/defaults/main.yml before modifying no_proxy
no_proxy: "localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.svc.cluster.local"

See Proxy - https://kubespray.io/#/docs/proxy?id=setting-up-environment-proxy to learn more.
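A misconfigured no_proxy is a common source of proxy trouble, since in-cluster traffic that gets routed through the proxy will fail. A small sketch that checks the no_proxy value above contains the entries the cluster needs (the required list here is illustrative, not exhaustive):

```shell
# Sketch: sanity-check that required hosts/CIDRs appear in no_proxy
# before deploying. The "required" entries below are examples.
no_proxy="localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.svc.cluster.local"
for required in 10.0.0.0/8 .svc.cluster.local; do
  case ",${no_proxy}," in
    *",${required},"*) echo "ok: ${required}" ;;
    *)                 echo "missing: ${required}" ;;
  esac
done
```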

FAQs

The conditional check ‘not kubeadm_version == downloads.kubeadm.version’ failed with Docker container

This may be a bug in the Docker container mode. Retry using shell mode:

$ ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml

Calico fails on CentOS 8 / Oracle Linux 8 / AlmaLinux 8, which ship only with iptables-nft

# kubectl describe pod calico-node-gnjc9 -n kube-system
Warning Unhealthy 33m kubelet Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/bird/bird.ctl: connect: no such file or directory
Warning Unhealthy 33m kubelet Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused
Warning Unhealthy 23m (x53 over 32m) kubelet Readiness probe failed:
Warning Unhealthy 8m32s (x60 over 32m) kubelet Liveness probe failed:
Warning BackOff 3m27s (x63 over 27m) kubelet Back-off restarting failed container

CentOS 8 / Oracle Linux 8 / AlmaLinux 8 ship only with iptables-nft (i.e. without iptables-legacy, similar to RHEL 8). The only tested configuration for now is using the Calico CNI. You need to add calico_iptables_backend: "NFT" or calico_iptables_backend: "Auto" to your configuration.

# inventory/mycluster/group_vars/k8s_cluster/k8s-net-calico.yml

# CentOS/OracleLinux/AlmaLinux
# https://kubespray.io/#/docs/centos8
# calico_iptables_backend: "NFT" or calico_iptables_backend: "Auto"
calico_iptables_backend: "NFT"
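When in doubt, you can inspect which iptables flavor the host is running and pick the backend from that; a sketch, assuming `iptables --version` reports "(nf_tables)" on nft-only distributions:

```shell
# Sketch: choose a Calico iptables backend from the host's iptables flavor.
# Falls back to "Auto" when iptables is absent or reports legacy mode.
if iptables --version 2>/dev/null | grep -q nf_tables; then
  backend="NFT"
else
  backend="Auto"
fi
echo "calico_iptables_backend: \"${backend}\""
```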

Then, run cluster.yml again to update the Kubernetes (K8S) Calico pods.

$ ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml

Ansible must be between 2.9.0 and 2.11.0

TASK [Check 2.9.0 <= Ansible version < 2.11.0] *************************************************************************************
fatal: [localhost]: FAILED! => {
"assertion": "ansible_version.string is version(maximal_ansible_version, \"<\")",
"changed": false,
"evaluated_to": false,
"msg": "Ansible must be between 2.9.0 and 2.11.0"
}

Pass the maximal_ansible_version variable to raise the upper bound to 2.12.0:

$ ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml --extra-vars "maximal_ansible_version=2.12.0"

Specify private key

Use --private-key=<Your Private Key>.

$ ansible-playbook -i inventory/mycluster/hosts.yaml --private-key ~/.ssh/id_rsa.mycluster cluster.yml

modprobe: FATAL: Module nf_conntrack_ipv4 not found in directory /lib/modules/4.18.0-305.17.1.el8_4.x86_64 since Kubernetes (K8S) v1.21+

# modprobe nf_conntrack_ipv4
modprobe: FATAL: Module nf_conntrack_ipv4 not found in directory /lib/modules/4.18.0-305.10.2.el8_4.x86_64

# lsmod | grep nf_conntrack_ipv4

Ensure the kernel modules required by IPVS are loaded. Note: use nf_conntrack instead of nf_conntrack_ipv4 on Linux kernel 4.19 and later, since nf_conntrack_ipv4 was merged into nf_conntrack in kernel 4.19.

# modprobe nf_conntrack

# lsmod | grep nf_conntrack
nf_conntrack 172032 0
nf_defrag_ipv6 20480 1 nf_conntrack
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 2 nf_conntrack,xfs
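The kernel-version cutoff can be checked with a short sketch (assumes a `major.minor` prefix in `uname -r`):

```shell
# Sketch: pick the conntrack module name by kernel version.
# nf_conntrack_ipv4 was merged into nf_conntrack in kernel 4.19.
kver=$(uname -r | cut -d. -f1-2)   # e.g. "5.15"
major=${kver%%.*}
minor=${kver##*.}
if [ "$major" -gt 4 ] || { [ "$major" -eq 4 ] && [ "$minor" -ge 19 ]; }; then
  module="nf_conntrack"
else
  module="nf_conntrack_ipv4"
fi
echo "modprobe ${module}"
```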

Upgrade Kubernetes (K8S) to v1.22+.

# inventory/cloudolife-example/group_vars/k8s_cluster/k8s-cluster.yml

## Change this to use another Kubernetes version, e.g. a current beta release
- kube_version: v1.21.4
+ # kube_version: v1.21.4
+ kube_version: v1.22.2
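The diff above can also be applied non-interactively with sed; a sketch using a throwaway copy of the file (the /tmp path is a placeholder for your real group_vars path):

```shell
# Sketch: bump kube_version in a copy of the group_vars file.
mkdir -p /tmp/k8s_cluster
echo 'kube_version: v1.21.4' > /tmp/k8s_cluster/k8s-cluster.yml
sed -i 's/^kube_version: v1\.21\.4/kube_version: v1.22.2/' /tmp/k8s_cluster/k8s-cluster.yml
cat /tmp/k8s_cluster/k8s-cluster.yml
```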

References

[1] kubernetes-sigs/kubespray: Deploy a Production Ready Kubernetes Cluster - https://github.com/kubernetes-sigs/kubespray

[2] Deploy a Production Ready Kubernetes Cluster | Readme - https://kubespray.io/

[3] Getting started - https://kubespray.io/#/docs/getting-started

[4] Installing Kubernetes with Kubespray | Kubernetes - https://kubernetes.io/docs/setup/production-environment/tools/kubespray/

[5] Vagrant by HashiCorp - https://www.vagrantup.com/

[6] Ansible is Simple IT Automation - https://www.ansible.com/

[7] Kubernetes - https://kubernetes.io/

[8] Configuring calico/node - https://docs.projectcalico.org/reference/node/configuration

[9] k8sli/kubeplay: Deploy kubernetes by kubespray in offline - https://github.com/k8sli/kubeplay