K8s with Containerd

Today I noticed that containerd is used as the container runtime in our K8s prod environment; I'm noting down the investigation and initial learnings here.

Some good articles about Docker vs. containerd and containerd adoption in K8s:

The investigation was triggered by uneven load in the prod K8s cluster. To examine node load, the following commands are useful:

# check the runnable (r) and blocked (b) queue sizes
# -t: append a timestamp to each line of output
vmstat -t 1

# -b: batch mode
# -n: number of iterations
# -i: hide idle processes
# -c: show the full command line
# -H: show individual threads
# -w: wide output
top -b -n 1 -i -H -c -w | grep -E '^[0-9]+' | awk '{ if ($8 == "R" || $8 == "D") print $0 }'

The grep -E '^[0-9]+' keeps only the process rows (lines that start with a PID) and drops top's summary header; the awk filter then keeps only threads in the D and R states, as they are the key contributors to load average (LA).
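As a quick cross-check (a minimal sketch, assuming procps ps and /proc/loadavg are available), the count of R/D threads can be compared against the load averages directly:

# count all threads currently in R or D state
ps -eLo state= | grep -c '^[RD]'
# compare with the 1/5/15-minute load averages
cat /proc/loadavg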

Then I found that the output of docker images / docker ps -a was empty, which made me realize that a different container runtime was probably in use.
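Since docker cannot see containerd-managed containers, crictl (speaking CRI to the containerd socket) or containerd's own ctr client can list them instead; a sketch, assuming the default socket path /run/containerd/containerd.sock:

# list containers through the CRI interface
crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps
# or use containerd's own client; K8s containers live in the k8s.io namespace
ctr --namespace k8s.io containers list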

Then I picked a container thread from top and checked its parent process (PPID):

# BSD option syntax
ps axo pid,ppid,comm | grep <pid>
# standard (UNIX) option syntax
ps -eo pid,ppid,comm | grep <pid>
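Alternatively, assuming the psmisc pstree is available, the entire ancestor chain can be printed in one shot:

# -s: show parent processes of the given PID
# -p: show PIDs
pstree -sp <pid>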

The complete process entry shows that the parent is a containerd shim:

root     3515874  0.0  0.0 111720  3712 ?        Sl   Apr03  28:56 /usr/bin/containerd-shim-runc-v1 -namespace k8s.io -id 93a341648e8833e0212a257f1fd6aa2cded3f825f2475c16c15dc576b8c949a2 -address /run/containerd/containerd.sock
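The -id in the shim's command line is the container (or pod sandbox) ID in the k8s.io namespace. As a sketch, assuming crictl is configured against the containerd socket, it can be mapped back to its pod via the Kubernetes labels in the CRI metadata:

# the io.kubernetes.* labels identify the pod and container
# (use crictl inspectp instead if the ID is a pod sandbox)
crictl inspect 93a341648e8833e0212a257f1fd6aa2cded3f825f2475c16c15dc576b8c949a2 | grep io.kubernetes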

P.S.: A better way to check the container runtime in K8s:

kubectl get nodes -o wide
kubectl describe node <node_name>
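The wide output includes a CONTAINER-RUNTIME column (e.g. containerd://<version>); the same field can also be pulled straight from each node's status:

# containerRuntimeVersion is part of .status.nodeInfo
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'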