Kubernetes version 1.13.2
Today I spent some time investigating how to remove nodes from a Kubernetes cluster built by kubeadm.
For example, I have a 3-node cluster called `k8stest`. I deploy the application in the namespace `test-1`, and each worker node (`k8stest2` and `k8stest3`) holds some pods:

```bash
kubectl get pods -n test-1 -o wide
```
Drain and delete worker nodes
You can use `kubectl drain` to safely evict all of your pods from a node before you perform maintenance on the node (e.g. kernel upgrade, hardware maintenance, etc.). Safe evictions allow the pod’s containers to gracefully terminate and will respect the `PodDisruptionBudgets` you have specified.
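Before draining, it can be useful to check whether any `PodDisruptionBudgets` exist that could slow down or block the evictions:

```bash
# List PodDisruptionBudgets in all namespaces
kubectl get pdb --all-namespaces
```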
`kubectl drain` evicts or deletes all pods except mirror pods (which cannot be deleted through the API server). If there are DaemonSet-managed pods, drain will not proceed without `--ignore-daemonsets`, and regardless it will not delete any DaemonSet-managed pods, because those pods would be immediately replaced by the DaemonSet controller, which ignores unschedulable markings. If there are any pods that are neither mirror pods nor managed by a `ReplicationController`, `ReplicaSet`, `DaemonSet`, `StatefulSet` or `Job`, then drain will not delete any pods unless you use `--force`. `--force` will also allow deletion to proceed if the managing resource of one or more pods is missing.
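It is also worth checking up front which pods are running on the node you are about to drain, and which of them are DaemonSet-managed. One way to do this:

```bash
# Show every pod scheduled on k8stest2, across all namespaces
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=k8stest2.fyre.ibm.com
```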
Let’s first drain `k8stest2`:

```bash
kubectl drain k8stest2.fyre.ibm.com --delete-local-data --force --ignore-daemonsets
```
When `kubectl drain` returns successfully, that indicates that all of the pods (except the ones excluded as described in the previous paragraph) have been safely evicted (respecting the desired graceful termination period, and without violating any application-level disruption SLOs). It is then safe to bring down the node by powering down its physical machine or, if running on a cloud platform, deleting its virtual machine.
Let’s SSH to the `k8stest2` node and see what happened there; the workload containers are gone:

```bash
ssh k8stest2.fyre.ibm.com
```
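On the node itself, only DaemonSet pods (such as `kube-proxy` and the CNI pod) and their pause containers should still be around. A quick way to check, given that this cluster uses Docker as the container runtime:

```bash
# List the containers that are still running on the drained node
docker ps --format '{{.Names}}\t{{.Image}}'
```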
The given node will be marked unschedulable to prevent new pods from arriving:

```bash
kubectl get nodes
```
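If the node was only drained for maintenance and you want it back instead of deleting it, `kubectl uncordon` makes it schedulable again:

```bash
kubectl uncordon k8stest2.fyre.ibm.com
```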
Because the dedicated node `k8stest2` was drained, the `is-servicesdocker` and `is-xmetadocker` pods stay in `Pending`:

```
NAME   READY   STATUS   RESTARTS   AGE   IP   NODE   NOMINATED NODE   READINESS GATES
```
Now it’s safe to delete the node:

```bash
kubectl delete node k8stest2.fyre.ibm.com
```

Check the remaining nodes:

```bash
kubectl get nodes
```
Repeat the steps above for worker node `k8stest3`; then only the master node survives:

```bash
kubectl get nodes
```
Drain master node
It’s time to deal with the master node:

```bash
kubectl drain k8stest1.fyre.ibm.com --delete-local-data --force --ignore-daemonsets
```
Let’s see what happened to the infrastructure pods; some of them are gone:

```bash
kubectl get pods -n kube-system
```
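The control-plane components (`kube-apiserver`, `etcd`, `kube-controller-manager`, `kube-scheduler`) survive the drain because they are static pods run directly by the kubelet; their API objects are only mirror pods, which cannot be evicted. On a kubeadm master their manifests live under the default static pod path:

```bash
# Static pod manifests managed directly by the kubelet on the master
ls /etc/kubernetes/manifests
```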
Note: don’t delete the master node.
Reset cluster
Run this on every node to revert any changes made by `kubeadm init` or `kubeadm join`:

```bash
kubeadm reset -f
```
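`kubeadm reset` does not reset or clean up iptables rules or IPVS tables, so flush those manually if you want a fully clean node. A minimal sketch, along the lines of what kubeadm’s own output suggests (`ipvsadm --clear` only applies if kube-proxy ran in IPVS mode):

```bash
# Flush iptables rules left behind by kube-proxy and the CNI plugin
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# Clear IPVS tables (only needed if kube-proxy used IPVS mode)
ipvsadm --clear
```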
All containers are gone. Also check whether `kubectl` still works (it should no longer reach the API server, since `kubeadm reset` removed the control-plane static pod manifests):

```bash
docker ps
```

```bash
kubectl get nodes
```
Delete RPMs and files
Finally, we need to delete the RPMs and remove residue on every node:

```bash
yum erase -y kubeadm.x86_64 kubectl.x86_64 kubelet.x86_64 kubernetes-cni.x86_64 cri-tools socat
```
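Removing the packages does not touch state written at runtime. A minimal sketch of the extra cleanup, assuming the default kubeadm paths (most of `/etc/kubernetes`, `/var/lib/kubelet` and `/var/lib/etcd` should already have been wiped by `kubeadm reset`):

```bash
# Remove the kubectl config that was copied from admin.conf during setup
rm -rf $HOME/.kube
# Remove any leftover kubeadm/kubelet/etcd state not cleaned by kubeadm reset
rm -rf /etc/kubernetes /var/lib/kubelet /var/lib/etcd
```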
```bash
## calico
```
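The Calico CNI also leaves configuration and network state behind. A minimal cleanup sketch, assuming the default CNI paths and that Calico ran in IPIP mode (which creates the `tunl0` tunnel interface):

```bash
# Remove CNI configuration and Calico state (default paths)
rm -rf /etc/cni/net.d /var/lib/cni /var/lib/calico
# Unload the ipip module to get rid of Calico's tunl0 tunnel interface
modprobe -r ipip
```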