[Kubernetes (K8S) Kubespray] Use Kubespray to add or remove worker nodes in an existing Kubernetes (K8S) cluster

Add or remove worker nodes in an existing Kubernetes (K8S) cluster

This is easiest to do with Kubespray.

Usage

First, check the status of all current nodes (2 nodes, both control-plane/master, no worker node):

# kubectl get nodes
NAME    STATUS   ROLES                  AGE     VERSION
node1   Ready    control-plane,master   3d16h   v1.21.4
node2   Ready    control-plane,master   3d16h   v1.21.4

Add a new worker node

Add a new worker node to the Kubernetes (K8S) cluster.

1) Add new node to the inventory

Edit the inventory/mycluster/hosts.yaml file to add the node3 host.

all:
  hosts:
    node1:
      ansible_host: 192.168.8.90
      ip: 192.168.8.90
      access_ip: 192.168.8.90
      ansible_user: root
    node2:
      ansible_host: 192.168.8.91
      ip: 192.168.8.91
      access_ip: 192.168.8.91
      ansible_user: root
    # add new node3
    node3:
      ansible_host: 192.168.8.92
      ip: 192.168.8.92
      access_ip: 192.168.8.92
      ansible_user: root
  children:
    kube_control_plane:
      hosts:
        node1:
        node2:
    kube_node:
      hosts:
        node1:
        node2:
        # add new node3
        node3:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}

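Optionally, confirm that Ansible can actually reach the new host before scaling. This is a plain Ansible ad-hoc ping (it assumes SSH access for root is already set up on node3, just as for the existing nodes):

# Sanity check: Ansible should report "pong" from node3
$ ansible -i inventory/mycluster/hosts.yaml node3 -m ping
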
2) Run scale.yml

Run the ansible-playbook command with scale.yml and --limit=node3:

$ ansible-playbook -i inventory/mycluster/hosts.yaml scale.yml --limit=node3

# Same as above, but without `--limit` the playbook may affect all nodes in the cluster
# $ ansible-playbook -i inventory/mycluster/hosts.yaml scale.yml

You can use --limit=NODE_NAME to restrict Kubespray to a single node and avoid disturbing the other nodes in the cluster.

Before using --limit, run the facts.yml playbook once without the limit to refresh the facts cache for all nodes.

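For example (a sketch; facts.yml ships with Kubespray, but depending on your release it sits at the repository root or under playbooks/):

# Refresh the facts cache for every node before using --limit
$ ansible-playbook -i inventory/mycluster/hosts.yaml facts.yml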

3) Check current node status

Check the status of all nodes again (3 nodes: 2 control-plane/master nodes and 1 worker node):

# kubectl get nodes
NAME    STATUS   ROLES                  AGE     VERSION
node1   Ready    control-plane,master   3d16h   v1.21.4
node2   Ready    control-plane,master   3d16h   v1.21.4
node3   Ready    <none>                 5m16s   v1.21.4

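The ROLES column reads <none> because kubeadm only labels control-plane nodes; worker nodes carry no role label by default. If you would like node3 to show a role, you can set the conventional label yourself. This is purely cosmetic and entirely optional:

# Optional: make `kubectl get nodes` show "worker" in the ROLES column
$ kubectl label node node3 node-role.kubernetes.io/worker=worker
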
Remove a worker node

Remove a worker node from the Kubernetes (K8S) cluster.

1) Run remove-node.yml

Run the ansible-playbook command with remove-node.yml and -e node=node3:

$ ansible-playbook -i inventory/mycluster/hosts.yaml remove-node.yml -e node=node3

With the old node still in the inventory, run remove-node.yml. You need to pass -e node=NODE_NAME to the playbook to limit the execution to the node being removed.
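
For context, remove-node.yml first drains the node so its workloads are rescheduled onto the remaining nodes. The drain step is roughly equivalent to running the following by hand (an approximation, not the playbook's exact invocation; --delete-emptydir-data was named --delete-local-data before kubectl v1.20):

# Approximate manual equivalent of the playbook's drain step
$ kubectl drain node3 --ignore-daemonsets --delete-emptydir-data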

If the node you want to remove is not online, you should add reset_nodes=false and allow_ungraceful_removal=true to your extra vars: -e node=NODE_NAME -e reset_nodes=false -e allow_ungraceful_removal=true. Use these flags even when removing other types of nodes, such as control-plane or etcd nodes.
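
Put together, an ungraceful removal of an offline node3 would look like this (same flags as above, shown as one full command):

$ ansible-playbook -i inventory/mycluster/hosts.yaml remove-node.yml \
    -e node=node3 -e reset_nodes=false -e allow_ungraceful_removal=true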


2) Check current node status

Check the status of all nodes again (2 nodes, both control-plane/master, no worker node):

# kubectl get nodes
NAME    STATUS   ROLES                  AGE     VERSION
node1   Ready    control-plane,master   3d16h   v1.21.4
node2   Ready    control-plane,master   3d16h   v1.21.4

3) Remove the node from the inventory

Edit the inventory/mycluster/hosts.yaml file to remove the node3 host.

all:
  hosts:
    node1:
      ansible_host: 192.168.8.90
      ip: 192.168.8.90
      access_ip: 192.168.8.90
      ansible_user: root
    node2:
      ansible_host: 192.168.8.91
      ip: 192.168.8.91
      access_ip: 192.168.8.91
      ansible_user: root
  children:
    kube_control_plane:
      hosts:
        node1:
        node2:
    kube_node:
      hosts:
        node1:
        node2:
    etcd:
      hosts:
        node1:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}

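As a final check, you can ask Ansible to print the inventory it now parses and confirm node3 is gone. ansible-inventory is a standard Ansible command, nothing Kubespray-specific:

# node3 should no longer appear in any group
$ ansible-inventory -i inventory/mycluster/hosts.yaml --graph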