Kubernetes version 1.13.2
This article walks through setting up a Kubernetes cluster manually with kubeadm.
Cluster Info
I have a 3-node bare-metal cluster named myk8s running CentOS 7.5. The /etc/hosts file on each node:

```
172.16.158.44 myk8s1.fyre.ibm.com myk8s1
```
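Only the first entry survived formatting above; with three nodes the file contains one line per node, along these lines (the myk8s2/myk8s3 addresses below are hypothetical placeholders, not the real ones):

```
172.16.158.44 myk8s1.fyre.ibm.com myk8s1
172.16.158.45 myk8s2.fyre.ibm.com myk8s2   # hypothetical address
172.16.158.46 myk8s3.fyre.ibm.com myk8s3   # hypothetical address
```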
Let's look at the network interfaces on the master node:

```
# ifconfig -a
```
Configure
For every node in the cluster, follow the instructions below.
Install utilities
```
yum update -y
```
Disable firewall
Check the firewall status, and disable it if it is active:

```
systemctl status firewalld
systemctl disable firewalld
```
Install kubeadm, kubectl and kubelet
```
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
```
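The heredoc body was lost in formatting. At the time of Kubernetes 1.13, the repo file and install steps in the upstream docs looked like the following (the baseurl and gpgkey URLs are quoted from memory of that guide — verify against the current install documentation before use):

```
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet kubeadm kubectl
systemctl enable --now kubelet
```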
Set SELinux to permissive mode by running setenforce 0 and sed ...; this effectively disables it. This is required to allow containers to access the host filesystem, which is needed by pod networks, for example. You have to do this until SELinux support is improved in the kubelet.
Currently installed:
```
Installed:
```
Check the /etc/sysctl.conf file, for example:

```
net.ipv6.conf.all.disable_ipv6 = 1
```
Ensure that the following 3 options exist and are set to 1, because some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed:

```
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
```

If any of them is missing, append it to the /etc/sysctl.conf file.
Then, to make sure the net.bridge.bridge-nf-call settings can take effect, check whether the br_netfilter module is loaded:

```
lsmod | grep br_netfilter
```

If it is not, load it explicitly:

```
modprobe br_netfilter
```

Next, run this command to reload the settings:

```
sysctl --system
```

Then you can check the final settings:

```
sysctl -a | grep -E "net.bridge|net.ipv4.ip_forward"
```
Install Docker
Uninstall old versions
Older versions of Docker were called docker or docker-engine. If these are installed, uninstall them, along with associated dependencies.
```
yum remove docker \
```
Install Docker CE
Currently Docker version 18.06.2 is recommended, but versions 1.11, 1.12, 1.13 and 17.03 are known to work as well. Keep track of the latest validated Docker version in the Kubernetes release notes.

```
# Set up the repository
```
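The repository-setup block above is truncated. For reference, installing the pinned Docker CE version on CentOS 7 went roughly like this (commands and the repo URL follow Docker's CentOS install guide as I recall it — treat this as a sketch and verify):

```
# Set up the repository (URL per Docker's CentOS docs)
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install the Kubernetes-validated version
yum install -y docker-ce-18.06.2.ce

# Start Docker and enable it at boot
systemctl enable --now docker
```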
Check the result:

```
[root@myk8s1 ~] docker version
```
Disable swap
Why do we need to disable swap? Swap brings disk I/O overhead, and it breaks the cgroup accounting that Kubernetes relies on for pod memory control.
In the /etc/fstab file, comment out the swap entry:

```
# /dev/mapper/centos-swap swap swap defaults 0 0
```
Activate the new configuration and check:

```
swapoff -a
```

```
[root@myk8s3 ~] free -h
```
For worker nodes in the cluster, stop here. Continue with the following steps on the master node only:
Initialize the Kubernetes cluster
I will use Calico as the container network solution. On the master node, run:

```
kubeadm init --pod-network-cidr=192.168.0.0/16
```

You can specify the version with --kubernetes-version v1.13.3; otherwise kubeadm will pull the latest version from the Internet.
You will see output like this:

```
...
```
Keep the last command for joining worker nodes later:

```
kubeadm join 9.30.97.218:6443 --token jjkiw2.n478eree0wrr3bmc --discovery-token-ca-cert-hash sha256:79659fb0b3fb0044f382ab5a5e317d4f775e821a61d0df4a401a4cbd8d8c5a7f
```
Then run the following command on the master node:

```
mkdir -p $HOME/.kube
```
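Only the first line survived formatting. The full sequence, as printed at the end of kubeadm init and shown in the upstream docs, copies the admin kubeconfig into place:

```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```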
Now if you run kubectl version, you will get something like this:

```
[root@myk8s1 ~] kubectl version
```
Let's check which Docker images were pulled from the network to create the cluster on the master:

```
[root@myk8s1 ~] docker images
```
Launch cluster network
```
# kubectl get pods --all-namespaces
```
You can find that some pods are not ready, for example coredns-86c58d9df4-5dfh9 and coredns-86c58d9df4-d9bfm, and so is the master node:

```
[root@myk8s1 ~] kubectl get nodes
```
It's time to set up the network. You should first figure out which Calico version you need; checking the Kubernetes release notes, we see it currently supports Calico version 3.3.1.

You can also refer to the Calico installation guide; it uses the same settings as below:

```
kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
```
After applying rbac-kdd.yaml and calico.yaml, you can now see:

```
# kubectl get pods --all-namespaces
```

```
# kubectl get nodes
```
Note that I encountered a problem: when joining the worker nodes, the calico-node pods became not ready:

```
# kubectl get pods --all-namespaces
```
The reason is that my master node has multiple network interfaces; I need to specify which one Calico should use in order to be consistent among all nodes.

```
wget https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
```
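The edit in question pins Calico's IP autodetection to one interface. In the downloaded calico.yaml, under the calico-node container's env list, you would add something like this (IP_AUTODETECTION_METHOD is Calico's documented setting for this; the interface name eth0 below is a placeholder for your actual NIC):

```yaml
# In calico.yaml, env section of the calico-node container:
- name: IP_AUTODETECTION_METHOD
  # Placeholder: replace eth0 with the interface every node should use
  value: "interface=eth0"
```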
Delete the previous Calico deployment, then edit the yaml file and apply it again:

```
[root@myk8s1 ~] kubectl get pods --all-namespaces
```
Join worker nodes
Joining worker nodes is pretty easy; run this command on all worker nodes:

```
kubeadm join 9.30.97.218:6443 --token jjkiw2.n478eree0wrr3bmc --discovery-token-ca-cert-hash sha256:79659fb0b3fb0044f382ab5a5e317d4f775e821a61d0df4a401a4cbd8d8c5a7f
```
Check node status
```
# kubectl get nodes
```
By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired, you can create a new token by running the following command on the master node:

```
kubeadm token create
```
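A handy variant prints a complete join command (fresh token plus the CA cert hash) in one go; to my knowledge --print-join-command has been available since kubeadm 1.9, but check kubeadm token create --help on your version:

```
kubeadm token create --print-join-command
```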
If you don't have the value of --discovery-token-ca-cert-hash, you can get it by running the following command chain on the master node:

```
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
```
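The continuation line is cut off above. The pipeline hashes the DER-encoded CA public key with SHA-256, which is exactly what --discovery-token-ca-cert-hash verifies. A self-contained sketch using a throwaway certificate in /tmp (so it can be tried anywhere; on the master you would point it at /etc/kubernetes/pki/ca.crt instead):

```shell
# Generate a throwaway CA certificate standing in for
# /etc/kubernetes/pki/ca.crt (demo only).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt -days 1 2>/dev/null

# Extract the public key, convert it to DER, and take its SHA-256.
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')

echo "sha256:${hash}"
```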
Now a fresh Kubernetes cluster with 1 master and 2 worker nodes is created.