Kubernetes version 1.13.2
Server Login Failed
This morning I found I had lost the connection to my icp4d Kubernetes server (it was fine last night). If I run:
# kubectl get pods
then:
# kubectl version
But the kubectl config seems fine, and the token is there:
# kubectl config view
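To double-check, a couple of hedged commands (assuming a single, token-based user entry in the kubeconfig) that show the active context and the raw, unredacted user credentials:

# kubectl config current-context
# kubectl config view --raw -o jsonpath='{.users[0].user}'

kubectl config view on its own redacts the token, so --raw is what actually confirms it is present.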
I check the docker and kubelet status; both are active:
systemctl status docker
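The kubelet can be checked the same way; if the problem comes back, its journal is probably the first place to look (a hedged example, adjust the time window as needed):

# systemctl status kubelet
# journalctl -u kubelet --since "1 hour ago" | tail -n 50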
Then I reboot all the nodes, and the cluster comes up correctly. I don't know how to reproduce this issue, and I have no idea what happened or how to fix it (without rebooting), sadly.
Server Connection Refused 6443
A similar issue happened again in my dstest cluster:
# kubectl get pods -n test-1
Check the list of required ports; 6443 is the default secure port of the Kubernetes API server.
# netstat -tunlp | grep 6443
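If nothing is listening, or to tell a closed port apart from an auth problem, a hedged probe against the API server health endpoint (run on the master, or replace localhost with the master IP):

# curl -k https://localhost:6443/healthz

A connection refused here means the apiserver itself is down; even an Unauthorized response would at least prove the port is open.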
Note that there is no kube-apiserver service in systemctl, so how do we restart it? The kube-apiserver runs as a static pod, so I think I can restart the container directly with docker restart <container ID>:
# docker ps -a | grep apiserver
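If it happens again, a one-liner sketch for the restart, assuming the kubelet's Docker runtime names the container with its usual k8s_kube-apiserver prefix:

# docker restart $(docker ps -q --filter name=k8s_kube-apiserver)

Another common trick is to move the kube-apiserver manifest out of /etc/kubernetes/manifests/ and back, which makes the kubelet recreate the static pod.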
I haven't had a chance to reproduce this issue, so this solution may not work…
In a healthy cluster:
# kbc cluster-info
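Another quick sanity check worth keeping next to cluster-info (componentstatuses still works on 1.13):

# kubectl get componentstatuses

It reports the health of the scheduler, the controller-manager, and etcd.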
Server Connection Refused 8080
This issue is similar to the 6443 one, but it shows:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Recall that when we set up the K8s cluster with kubeadm, we run:
...
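For reference, the block elided above is presumably the standard snippet that kubeadm init prints at the end of a successful run (quoted from the usual kubeadm output, so treat it as an assumption about what was actually run):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config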
I can reproduce this issue if the environment variable KUBECONFIG is missing (with no kubeconfig available, kubectl falls back to the insecure default localhost:8080), so try exporting it; both ways are fine:
export KUBECONFIG=$HOME/.kube/config
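The other way I would expect here (assuming you are on the master node and kubeadm put the admin kubeconfig in its default location) is to point KUBECONFIG at admin.conf directly:

export KUBECONFIG=/etc/kubernetes/admin.conf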
A good /etc/kubernetes folder has these items:
# ls -ltr /etc/kubernetes/
The manifests folder contains the YAML files for creating etcd, kube-apiserver, kube-controller-manager, and kube-scheduler.
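To look at the static pod manifests themselves (on a kubeadm control-plane node you would typically find etcd.yaml, kube-apiserver.yaml, kube-controller-manager.yaml, and kube-scheduler.yaml there):

# ls -l /etc/kubernetes/manifests/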