Kubespray lets you deploy a production-ready Kubernetes cluster (using Ansible or Vagrant) and, since v2.3, works together with kubeadm.
Features
Can be deployed on AWS, GCE, Azure, OpenStack, vSphere, Equinix Metal (bare metal), Oracle Cloud Infrastructure (Experimental), or Baremetal
Highly available cluster
Composable (choice of network plugin, for instance)
```shell
# Download kubespray
$ git clone git@github.com:kubernetes-sigs/kubespray.git
$ cd kubespray

# Install dependencies from ``requirements.txt``
$ sudo pip3 install -r requirements.txt

# Copy ``inventory/sample`` as ``inventory/mycluster``
$ cp -rfp inventory/sample inventory/mycluster

# Update Ansible inventory file with inventory builder
$ declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
$ CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Review and change parameters under ``inventory/mycluster/group_vars``
$ cat inventory/mycluster/group_vars/all/all.yml
$ cat inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml

# Deploy Kubespray with Ansible Playbook - run the playbook as root
# The option `--become` is required, e.g. for writing SSL keys in /etc/,
# installing packages and interacting with various systemd daemons.
# Without --become the playbook will fail to run!
$ ansible-playbook -i inventory/mycluster/hosts.yaml --become --become-user=root cluster.yml
```
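To get a feel for what the inventory builder produces, here is a hypothetical sketch (not the real `contrib/inventory_builder/inventory.py`, which additionally builds the `kube_control_plane`, `kube_node` and `etcd` groups in `hosts.yaml`) of the node-naming convention it applies to the IPs in `${IPS[@]}`:

```shell
# Toy illustration of the inventory builder's naming convention:
# each IP becomes nodeN with ansible_host and ip set to that address.
print_hosts() {
  local i=1 ip
  for ip in "$@"; do
    echo "node${i} ansible_host=${ip} ip=${ip}"
    i=$((i + 1))
  done
}

IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
print_hosts "${IPS[@]}"
# node1 ansible_host=10.10.1.3 ip=10.10.1.3
# node2 ansible_host=10.10.1.4 ip=10.10.1.4
# node3 ansible_host=10.10.1.5 ip=10.10.1.5
```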
Or Docker Container Mode
```shell
# Pull the Kubespray image
$ docker pull quay.io/kubespray/kubespray:v2.16.0

# Enter into Docker container
$ docker run --rm -it --mount type=bind,source="$(pwd)"/inventory/sample,dst=/inventory \
  --mount type=bind,source="${HOME}"/.ssh/id_rsa,dst=/root/.ssh/id_rsa \
  quay.io/kubespray/kubespray:v2.16.0 bash

# Update Ansible inventory file with inventory builder
root@1f615c0327ff:/kubespray# declare -a IPS=(10.10.1.3 10.10.1.4 10.10.1.5)
root@1f615c0327ff:/kubespray# CONFIG_FILE=/inventory/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Review and change parameters under ``/inventory/group_vars``
root@1f615c0327ff:/kubespray# cat /inventory/group_vars/all/all.yml
root@1f615c0327ff:/kubespray# cat /inventory/group_vars/k8s_cluster/k8s-cluster.yml

# Inside the container you may now run the kubespray playbooks:
root@1f615c0327ff:/kubespray# ansible-playbook -i /inventory/hosts.yaml --private-key /root/.ssh/id_rsa cluster.yml
```yaml
## Set these proxy values in order to update package manager and docker daemon to use proxies
http_proxy: "http://192.168.88.130:38001"
https_proxy: "http://192.168.88.130:38001"

## Refer to roles/kubespray-defaults/defaults/main.yml before modifying no_proxy
no_proxy: "localhost,127.0.0.0/8,10.0.0.0/8,172.16.0.0/12,192.168.0.0/16,.svc,.svc.cluster.local"
```
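The leading-dot entries in `no_proxy` (`.svc`, `.svc.cluster.local`) are domain suffixes: any host ending in them bypasses the proxy. A toy sketch of that matching logic (not Kubespray code; real clients also honor the CIDR entries such as `10.0.0.0/8`, which this version ignores):

```shell
# Toy illustration of no_proxy matching: exact host match, or suffix match
# for leading-dot entries. CIDR entries are NOT handled here.
no_proxy="localhost,.svc,.svc.cluster.local"

bypasses_proxy() {
  local host=$1 entry
  local -a entries
  IFS=',' read -ra entries <<< "$no_proxy"
  for entry in "${entries[@]}"; do
    if [[ "$host" == "$entry" || "$host" == *"$entry" ]]; then
      return 0
    fi
  done
  return 1
}

bypasses_proxy "kubernetes.default.svc" && echo "bypass" || echo "use proxy"  # bypass
bypasses_proxy "example.com"            && echo "bypass" || echo "use proxy"  # use proxy
```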
Calico fails on CentOS 8 / Oracle Linux 8 / AlmaLinux 8, which ship only with iptables-nft
```shell
# kubectl describe pod calico-node-gnjc9 -n kube-system
Warning  Unhealthy  33m                   kubelet  Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/bird/bird.ctl: connect: no such file or directory
Warning  Unhealthy  33m                   kubelet  Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused
Warning  Unhealthy  23m (x53 over 32m)    kubelet  Readiness probe failed:
Warning  Unhealthy  8m32s (x60 over 32m)  kubelet  Liveness probe failed:
Warning  BackOff    3m27s (x63 over 27m)  kubelet  Back-off restarting failed container
```
CentOS 8 / Oracle Linux 8 / AlmaLinux 8 ship only with iptables-nft (i.e. without iptables-legacy, similar to RHEL 8). The only tested configuration for now uses the Calico CNI. You need to add `calico_iptables_backend: "NFT"` or `calico_iptables_backend: "Auto"` to your configuration.
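For example, in a default Kubespray inventory layout the Calico variables live alongside the other `k8s_cluster` group vars (the exact filename below is an assumption; adjust to wherever your Calico vars are defined):

```yaml
# inventory/mycluster/group_vars/k8s_cluster/k8s-net-calico.yml (assumed path)
# "Auto" lets Calico detect the host's iptables flavor; "NFT" forces nft mode.
calico_iptables_backend: "Auto"
```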
```diff
 ## Change this to use another Kubernetes version, e.g. a current beta release
-kube_version: v1.21.4
+# kube_version: v1.21.4
+kube_version: v1.22.2
```