How to use Kubeadm to set up a highly available Kubernetes cluster

When we set up a Kubernetes (k8s) cluster on-premises for a production environment, it is recommended to deploy it in high availability. Here, high availability means running multiple master (control-plane) nodes so that the cluster keeps working even if one of them fails. In this article, I will demonstrate how to set up a highly available Kubernetes cluster using the kubeadm utility.

To demonstrate, I used five CentOS 7 systems with the following details:

  • k8s-master-1 – minimal CentOS 7 – 192.168.1.40 – 2GB RAM, 2vCPU, 40 GB disk
  • k8s-master-2 – minimal CentOS 7 – 192.168.1.41 – 2GB RAM, 2vCPU, 40 GB disk
  • k8s-master-3 – minimal CentOS 7 – 192.168.1.42 – 2GB RAM, 2vCPU, 40 GB disk
  • k8s-worker-1 – minimal CentOS 7 – 192.168.1.43 – 2GB RAM, 2vCPU, 40 GB disk
  • k8s-worker-2 – minimal CentOS 7 – 192.168.1.44 – 2GB RAM, 2vCPU, 40 GB disk


Note: The etcd cluster can also be set up outside the master nodes (an external etcd topology), but that requires additional hardware, so I installed etcd on the master nodes (a stacked etcd topology).

Minimum requirements for setting up a highly available K8s cluster:

  • Kubeadm, kubelet and kubectl on all master and worker nodes
  • Network connectivity between the master and worker nodes
  • Internet access on all nodes
  • Root credentials or a sudo-privileged user on all nodes

Let's jump into the installation and configuration steps.

Step 1. Set the host name and add entries in the /etc/hosts file

Run the hostnamectl command to set the hostname on each node. For example, on the first master node:

$ hostnamectl set-hostname "k8s-master-1"
$ exec bash

Similarly, run the above commands on the remaining nodes and set their respective hostnames. After setting the hostnames on all the master and worker nodes, add the following entries to the /etc/hosts file on all nodes.

192.168.1.40   k8s-master-1
192.168.1.41   k8s-master-2
192.168.1.42   k8s-master-3
192.168.1.43   k8s-worker-1
192.168.1.44   k8s-worker-2
192.168.1.45   vip-k8s-master

I added an extra entry, "192.168.1.45 vip-k8s-master", to the hosts file because I will use this IP and hostname while configuring haproxy and keepalived on all the master nodes. This IP will serve as the kube-apiserver load balancer IP: all kube-apiserver requests will reach it and will then be distributed to the actual kube-apiserver instances on the backend.
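Before moving on, you can optionally confirm that these names resolve correctly on each node, for example:

$ getent hosts vip-k8s-master
$ ping -c 1 k8s-master-2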

Step 2. Install and configure Keepalived and HAProxy on all master/control-plane nodes

Use the following yum command to install keepalived and haproxy on each master node:

$ sudo yum install haproxy keepalived -y

First, configure keepalived on k8s-master-1. Create a check_apiserver.sh health-check script with the following content:

[kadmin@k8s-master-1 ~]$ sudo vi /etc/keepalived/check_apiserver.sh
#!/bin/sh
APISERVER_VIP=192.168.1.45
APISERVER_DEST_PORT=6443

errorExit() {
    echo "*** $*" 1>&2
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
if ip addr | grep -q ${APISERVER_VIP}; then
    curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi

Save and exit the file, set executable permissions:

$ sudo chmod +x /etc/keepalived/check_apiserver.sh
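Optionally, you can run the script by hand to make sure it works; note that it will exit non-zero until kube-apiserver is actually listening on port 6443, which is expected at this stage:

$ sudo sh /etc/keepalived/check_apiserver.sh; echo $?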

Back up the keepalived.conf file, then empty it:

[kadmin@k8s-master-1 ~]$ sudo cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf-org
[kadmin@k8s-master-1 ~]$ sudo sh -c '> /etc/keepalived/keepalived.conf'

Now paste the following into the /etc/keepalived/keepalived.conf file:

[kadmin@k8s-master-1 ~]$ sudo vi /etc/keepalived/keepalived.conf
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"
  interval 3
  weight -2
  fall 10
  rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface enp0s3
    virtual_router_id 151
    priority 255
    authentication {
        auth_type PASS
        auth_pass P@##D321!
    }
    virtual_ipaddress {
        192.168.1.45/24
    }
    track_script {
        check_apiserver
    }
}

Save and close the file.

Note: For the master-2 and master-3 nodes, only two parameters in this file need to change: state becomes BACKUP, and priority becomes 254 and 253 respectively.
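keepalived only accepts MASTER and BACKUP as values for state. As a sketch, on k8s-master-2 the vrrp_instance block would differ from k8s-master-1 only in these two lines (and on k8s-master-3 the priority would be 253):

    state BACKUP
    priority 254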

Next, configure HAProxy on the k8s-master-1 node. First back up its configuration file:

[kadmin@k8s-master-1 ~]$ sudo cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg-org

Then edit the file, delete all lines after the defaults section, and add the following lines:

[kadmin@k8s-master-1 ~]$ sudo vi /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# apiserver frontend which proxys to the masters
#---------------------------------------------------------------------
frontend apiserver
    bind *:8443
    mode tcp
    option tcplog
    default_backend apiserver
#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
        server k8s-master-1 192.168.1.40:6443 check
        server k8s-master-2 192.168.1.41:6443 check
        server k8s-master-3 192.168.1.42:6443 check

Save and exit the file.
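Optionally, you can ask HAProxy to validate the configuration file before going any further:

$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg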


Now copy these three files (check_apiserver.sh, keepalived.conf and haproxy.cfg) from k8s-master-1 to k8s-master-2 and 3

Run the following for loop to scp these files to master 2 and 3:

[kadmin@k8s-master-1 ~]$ for f in k8s-master-2 k8s-master-3; do scp /etc/keepalived/check_apiserver.sh /etc/keepalived/keepalived.conf root@$f:/etc/keepalived; scp /etc/haproxy/haproxy.cfg root@$f:/etc/haproxy; done

Note: Don't forget to change the two keepalived.conf parameters discussed above (state and priority) for k8s-master-2 and k8s-master-3.

If the firewall is running on the master nodes, add the following firewall rules on all three of them:

$ sudo firewall-cmd --add-rich-rule='rule protocol value="vrrp" accept' --permanent
$ sudo firewall-cmd --permanent --add-port=8443/tcp
$ sudo firewall-cmd --reload

Now, start and enable keepalived and haproxy services on all three master nodes using the following commands:

$ sudo systemctl enable keepalived --now
$ sudo systemctl enable haproxy --now

After these services start successfully, verify that the VIP (virtual IP) is active on the k8s-master-1 node, since we marked k8s-master-1 as the MASTER node in the keepalived configuration file.

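A quick way to check is to look at the service status and at the addresses on the interface used in the keepalived configuration (enp0s3 in this example):

$ sudo systemctl status keepalived haproxy
$ ip addr show enp0s3 | grep 192.168.1.45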

If 192.168.1.45 shows up in the ip addr output on k8s-master-1, the VIP has been enabled and keepalived is working as expected.

Step 3. Disable swap, set SELinux to permissive, and configure firewall rules on the master and worker nodes

To disable swap space on all nodes (including the worker nodes), run the following commands:

$ sudo swapoff -a 
$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Set SELinux to permissive mode on all master and worker nodes by running the following commands:

$ sudo setenforce 0
$ sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Firewall rules for the master nodes:

If the firewall is running on the master nodes, allow the following ports: 6443/tcp (kube-apiserver), 2379-2380/tcp (etcd), 10250/tcp (kubelet API), 10251/tcp (kube-scheduler), 10252/tcp (kube-controller-manager), 179/tcp (Calico BGP) and 4789/udp (Calico VXLAN).

Run the following firewall-cmd command on all master nodes:

$ sudo firewall-cmd --permanent --add-port=6443/tcp
$ sudo firewall-cmd --permanent --add-port=2379-2380/tcp
$ sudo firewall-cmd --permanent --add-port=10250/tcp
$ sudo firewall-cmd --permanent --add-port=10251/tcp
$ sudo firewall-cmd --permanent --add-port=10252/tcp
$ sudo firewall-cmd --permanent --add-port=179/tcp
$ sudo firewall-cmd --permanent --add-port=4789/udp
$ sudo firewall-cmd --reload
$ sudo modprobe br_netfilter
$ sudo sh -c "echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables"
$ sudo sh -c "echo '1' > /proc/sys/net/ipv4/ip_forward"
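Note that the two echo commands above only last until the next reboot. One common way to make these kernel settings persistent (not covered in the original steps) is a sysctl drop-in file:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
$ sudo sysctl --system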

Firewall rules for the worker nodes:

If the firewall is running on the worker nodes, allow the following ports on all worker nodes: 10250/tcp (kubelet API), 30000-32767/tcp (NodePort services), 179/tcp (Calico BGP) and 4789/udp (Calico VXLAN).

Run the following command on all worker nodes:

$ sudo firewall-cmd --permanent --add-port=10250/tcp
$ sudo firewall-cmd --permanent --add-port=30000-32767/tcp                                                   
$ sudo firewall-cmd --permanent --add-port=179/tcp
$ sudo firewall-cmd --permanent --add-port=4789/udp
$ sudo firewall-cmd --reload
$ sudo modprobe br_netfilter
$ sudo sh -c "echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables"
$ sudo sh -c "echo '1' > /proc/sys/net/ipv4/ip_forward"

Step 4. Install the container runtime (CRI) Docker on the master and worker nodes

To install Docker on all master and worker nodes, run the following commands:

$ sudo yum install -y yum-utils
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum install docker-ce -y

Run the following systemctl command to start and enable the docker service (also run this command on all master and worker nodes)

$ sudo systemctl enable docker --now
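Optionally, the upstream kubeadm documentation recommends running Docker with the systemd cgroup driver so that kubelet and Docker use the same cgroup manager. A minimal /etc/docker/daemon.json sketch for that (followed by a Docker restart) would be:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

$ sudo systemctl restart docker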

Now, let's install kubeadm, kubelet and kubectl in the next step

Step 5. Install Kubeadm, kubelet and kubectl

Install kubeadm, kubelet and kubectl on all master and worker nodes. Before installing these packages, we must configure the Kubernetes yum repository; run the following commands on each master and worker node:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

Now run the following yum command to install these packages:

$ sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

Run the following systemctl command to enable the kubelet service on all nodes (master and worker nodes)

$ sudo systemctl enable kubelet --now
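To confirm the tooling is in place, you can optionally check the installed versions on each node:

$ kubeadm version -o short
$ kubectl version --client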

Step 6. Initialize the Kubernetes cluster from the first master node

Now go to the first master node terminal and execute the following command

[kadmin@k8s-master-1 ~]$ sudo kubeadm init --control-plane-endpoint "vip-k8s-master:8443" --upload-certs

In the above command, --control-plane-endpoint sets the DNS name and port of the load balancer (the kube-apiserver front end); in my case the DNS name is "vip-k8s-master" and the port is "8443". The --upload-certs option automatically shares the control-plane certificates among the master nodes.
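Equivalently (a sketch, not the command used in this walkthrough), the same endpoint can be declared in a kubeadm configuration file and passed via --config:

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: "vip-k8s-master:8443"
EOF
$ sudo kubeadm init --config kubeadm-config.yaml --upload-certs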

If initialization succeeds, the output of the kubeadm command confirms that the Kubernetes cluster has been set up, and it also contains the kubeadm join commands that the other master nodes and the worker nodes must run to join the cluster.

Note: It is recommended to copy this output to a text file for future reference.

Run the following commands to allow the local user to interact with the cluster using kubectl:

[kadmin@k8s-master-1 ~]$ mkdir -p $HOME/.kube
[kadmin@k8s-master-1 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[kadmin@k8s-master-1 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[kadmin@k8s-master-1 ~]$

Now, let's deploy the Pod network (CNI, Container Network Interface). In my case, I deployed the Calico plugin as the Pod network; run the following kubectl command:

[kadmin@k8s-master-1 ~]$ kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml
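It can take a couple of minutes for the Calico pods to become ready; optionally, you can watch them with:

[kadmin@k8s-master-1 ~]$ kubectl get pods -n kube-system -l k8s-app=calico-node -w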

After the Pod network is successfully deployed, add the remaining two master nodes to the cluster. Copy the join command for master nodes from the kubeadm init output and paste it on k8s-master-2 and k8s-master-3; an example is shown below:

[kadmin@k8s-master-2 ~]$ sudo kubeadm join vip-k8s-master:8443 --token tun848.2hlz8uo37jgy5zqt  --discovery-token-ca-cert-hash sha256:d035f143d4bea38d54a3d827729954ab4b1d9620631ee330b8f3fbc70324abc5 --control-plane --certificate-key a0b31bb346e8d819558f8204d940782e497892ec9d3d74f08d1c0376dc3d3ef4
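If the bootstrap token or the certificate key has expired by the time you run this (the token is valid for 24 hours and the uploaded certificates for 2 hours), fresh values can be generated on k8s-master-1, for example:

[kadmin@k8s-master-1 ~]$ sudo kubeadm token create --print-join-command
[kadmin@k8s-master-1 ~]$ sudo kubeadm init phase upload-certs --upload-certs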

Once this command completes on both nodes, k8s-master-2 and k8s-master-3 have joined the cluster. Let's verify the node status from the k8s-master-1 node by executing the following command:

[kadmin@k8s-master-1 ~]$ kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
k8s-master-1   Ready    master   31m     v1.18.6
k8s-master-2   Ready    master   10m     v1.18.6
k8s-master-3   Ready    master   3m47s   v1.18.6
[kadmin@k8s-master-1 ~]$

Perfect, all three master nodes are ready and have joined the cluster.

Step 7. Join the Worker node to the Kubernetes cluster

To join a worker node to the cluster, copy the worker node's command from the output and paste it on both worker nodes. An example is shown below:

[kadmin@k8s-worker-1 ~]$ sudo kubeadm join vip-k8s-master:8443 --token tun848.2hlz8uo37jgy5zqt --discovery-token-ca-cert-hash sha256:d035f143d4bea38d54a3d827729954ab4b1d9620631ee330b8f3fbc70324abc5

[kadmin@k8s-worker-2 ~]$ sudo kubeadm join vip-k8s-master:8443 --token tun848.2hlz8uo37jgy5zqt --discovery-token-ca-cert-hash sha256:d035f143d4bea38d54a3d827729954ab4b1d9620631ee330b8f3fbc70324abc5


Now go to the k8s-master-1 node and run the following kubectl command to get the status of the worker nodes:

[kadmin@k8s-master-1 ~]$ kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
k8s-master-1   Ready    master   43m     v1.18.6
k8s-master-2   Ready    master   21m     v1.18.6
k8s-master-3   Ready    master   15m     v1.18.6
k8s-worker-1   Ready    <none>   6m11s   v1.18.6
k8s-worker-2   Ready    <none>   5m22s   v1.18.6
[kadmin@k8s-master-1 ~]$

The above output confirms that both worker nodes have joined the cluster and are in the Ready state.

Run the following command to verify the status of the pods in the kube-system namespace:

[kadmin@k8s-master-1 ~]$ kubectl get pods -n kube-system


Step 8. Test the high availability of the Kubernetes cluster

Let's try to connect to the cluster from a remote machine (a CentOS system) using the load balancer's DNS name and port. First, we must install the kubectl package on the remote machine. Run the following command to set up the Kubernetes yum repository:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

$ sudo yum install -y  kubectl --disableexcludes=kubernetes

Now add the following entry to the /etc/hosts file:

192.168.1.45   vip-k8s-master

Create the kube directory and copy the /etc/kubernetes/admin.conf file from the k8s-master-1 node to $HOME/.kube/config:

$ mkdir -p $HOME/.kube
$ scp root@192.168.1.40:/etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
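At this point kubectl on the remote machine talks to the apiserver through the load balancer; an optional quick check:

$ kubectl cluster-info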

Now run the "kubectl get nodes" command,

[kadmin@localhost ~]$ kubectl get nodes
NAME           STATUS   ROLES    AGE    VERSION
k8s-master-1   Ready    master   3h5m   v1.18.6
k8s-master-2   Ready    master   163m   v1.18.6
k8s-master-3   Ready    master   157m   v1.18.6
k8s-worker-1   Ready    <none>   148m   v1.18.6
k8s-worker-2   Ready    <none>   147m   v1.18.6
[kadmin@localhost ~]$

Let's create a deployment called nginx-lab with the image "nginx", and then expose the deployment as a service of type "NodePort"

[kadmin@localhost ~]$ kubectl create deployment nginx-lab --image=nginx
deployment.apps/nginx-lab created
[kadmin@localhost ~]$
[kadmin@localhost ~]$ kubectl get deployments.apps nginx-lab
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-lab   1/1     1            1           59s
[kadmin@localhost ~]$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
nginx-lab-5df4577d49-rzv9q   1/1     Running   0          68s
test-844b65666c-pxpkh        1/1     Running   3          154m
[kadmin@localhost ~]$

Let's try to scale the replicas from 1 to 4 by running the following command:

[kadmin@localhost ~]$ kubectl scale deployment nginx-lab --replicas=4
deployment.apps/nginx-lab scaled
[kadmin@localhost ~]$
[kadmin@localhost ~]$ kubectl get deployments.apps nginx-lab
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
nginx-lab   4/4     4            4           3m10s
[kadmin@localhost ~]$

Now to expose the deployment as a service, run the following command:

[kadmin@localhost ~]$ kubectl expose deployment nginx-lab --name=nginx-lab --type=NodePort --port=80 --target-port=80
service/nginx-lab exposed
[kadmin@localhost ~]$

Get the port details and try to use curl to access the nginx web server

[kadmin@localhost ~]$ kubectl get svc nginx-lab
NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-lab   NodePort   10.102.32.29   <none>        80:31766/TCP   60s
[kadmin@localhost ~]$

To access the nginx web server, we can use the IP of any master or worker node with port 31766:

[kadmin@localhost ~]$ curl http://192.168.1.44:31766

If everything is working, curl returns the default nginx welcome page, confirming that the service is reachable through the NodePort.
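As an extra check not shown above, you can exercise the failover itself: stop keepalived and haproxy on k8s-master-1, confirm that the VIP moves to another master, and verify that kubectl from the remote machine keeps working:

[kadmin@k8s-master-1 ~]$ sudo systemctl stop keepalived haproxy
[kadmin@localhost ~]$ kubectl get nodes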

Perfect, that's it. We have successfully deployed a highly available Kubernetes cluster with kubeadm on CentOS 7 servers.

Author: Pradeep Kumar Translator: Yue Yong
Original from: https://www.linuxtechi.com/setup-highly-available-kubernetes-cluster-kubeadm
