How to Install a Highly Available K3s Cluster

About the author

Janakiram MSV is the principal analyst at Janakiram & Associates and an adjunct faculty member at the International Institute of Information Technology. He is also a Google Qualified Developer, an Amazon Certified Solutions Architect, an Amazon Certified Developer, an Amazon Certified SysOps Administrator, and a Microsoft Certified Azure Professional.

Janakiram is an ambassador for the Cloud Native Computing Foundation and was one of the first Certified Kubernetes Administrators and Certified Kubernetes Application Developers. He has worked at well-known companies such as Microsoft, AWS, and Gigaom Research.

In the previous article, we learned how to set up a multi-node etcd cluster. In this article, we will use the same infrastructure to set up and configure a highly available Kubernetes cluster based on K3s.

Highly available Kubernetes cluster

The control plane of a Kubernetes cluster is mostly stateless. The only stateful control plane component is the etcd database, which serves as the single source of truth for the entire cluster. The API server acts as the gateway to the etcd database, through which internal and external clients read and modify cluster state.

The etcd database must be configured in HA mode to ensure that there is no single point of failure. There are two topology options for a highly available (HA) Kubernetes cluster, depending on how etcd is set up.

The first topology is based on a stacked cluster design, in which each node runs an etcd instance alongside the control plane components. Each control plane node runs an instance of kube-apiserver, kube-scheduler, and kube-controller-manager, and kube-apiserver is exposed to the worker nodes through a load balancer.

Each control plane node creates a local etcd member, and that etcd member communicates only with the kube-apiserver on the same node. The same applies to the local kube-controller-manager and kube-scheduler instances.

This topology requires at least three stacked control plane nodes for an HA Kubernetes cluster. kubeadm, the popular cluster installation tool, uses this topology to provision Kubernetes clusters.
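As a rough sketch, a stacked HA control plane is typically bootstrapped with kubeadm along the following lines. The load-balancer address is an assumption for illustration, and the token, discovery hash, and certificate key come from the output of the first node's init command; the commands are collected into a helper script here rather than executed, since they require real hosts:

```shell
# Sketch of a stacked-HA bootstrap with kubeadm; written to a script
# because the commands need live control plane hosts to run.
cat > bootstrap-stacked-ha.sh <<'EOF'
#!/bin/sh
# On the first control plane node: init behind a shared load balancer.
sudo kubeadm init \
  --control-plane-endpoint "lb.example.internal:6443" \
  --upload-certs

# On each additional control plane node: join with the values printed
# by the init command above.
sudo kubeadm join "lb.example.internal:6443" \
  --token "$TOKEN" \
  --discovery-token-ca-cert-hash "sha256:$HASH" \
  --control-plane \
  --certificate-key "$CERT_KEY"
EOF
```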


The second topology uses an external etcd cluster installed and managed on a completely different set of hosts.

In this topology, each control plane node runs instances of kube-apiserver, kube-scheduler, and kube-controller-manager, and every etcd host communicates with the kube-apiserver of every control plane node.


The number of hosts required for this topology is twice that of the stacked HA topology: an HA cluster using it needs at least three hosts for control plane nodes and three hosts for etcd nodes.
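With kubeadm, the external-etcd topology is selected through a ClusterConfiguration file. A minimal sketch, reusing the endpoints and certificate paths from this series (the file name is an assumption for illustration):

```shell
# Generate a kubeadm config that points the control plane at the
# external etcd cluster from the previous tutorial.
cat > kubeadm-external-etcd.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
      - https://10.0.0.60:2379
      - https://10.0.0.61:2379
      - https://10.0.0.62:2379
    caFile: /etc/etcd/etcd-ca.crt
    certFile: /etc/etcd/server.crt
    keyFile: /etc/etcd/server.key
EOF

# The cluster would then be initialized with:
#   sudo kubeadm init --config kubeadm-external-etcd.yaml
```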

For more information about starting a cluster, please refer to the official Kubernetes documentation:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/

K3s in high availability mode

Since K3s is mostly deployed at the edge, where hardware resources are limited, it may not be possible to run the etcd database on dedicated hosts. The deployment architecture is very similar to the stacked topology, except that the etcd database is configured in advance, before K3s is installed.


In this tutorial, I am using a bare-metal infrastructure running on Intel NUC hardware, with the first three nodes (10.0.0.60, 10.0.0.61, and 10.0.0.62) serving as the etcd and K3s server nodes.

Refer to the previous part of this tutorial series to install and configure etcd on the first three nodes, with IP addresses 10.0.0.60, 10.0.0.61, and 10.0.0.62.
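Before moving on, it is worth confirming that the etcd cluster is healthy. A sketch using etcdctl with the endpoints and certificate paths from the previous tutorial, collected into a script here since it needs the live etcd hosts:

```shell
# Health-check every etcd member over TLS before layering K3s on top.
cat > check-etcd.sh <<'EOF'
#!/bin/sh
etcdctl \
  --endpoints https://10.0.0.60:2379,https://10.0.0.61:2379,https://10.0.0.62:2379 \
  --cacert /etc/etcd/etcd-ca.crt \
  --cert /etc/etcd/server.crt \
  --key /etc/etcd/server.key \
  endpoint health
EOF
```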

Install K3s server

Let us first install the server on all the nodes where etcd is installed. SSH into the first node and set the following environment variables. This assumes you have configured the etcd cluster by following the steps in the previous tutorial.

export K3S_DATASTORE_ENDPOINT='https://10.0.0.60:2379,https://10.0.0.61:2379,https://10.0.0.62:2379'
export K3S_DATASTORE_CAFILE='/etc/etcd/etcd-ca.crt'
export K3S_DATASTORE_CERTFILE='/etc/etcd/server.crt'
export K3S_DATASTORE_KEYFILE='/etc/etcd/server.key'

These environment variables instruct the K3s installer to use the existing etcd database for state management.
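For reference, the same settings map one-to-one onto k3s server's --datastore-* command-line flags; a sketch of the equivalent invocation, composed as a string here because actually running it requires the live etcd cluster:

```shell
# CLI-flag equivalent of the K3S_DATASTORE_* environment variables.
k3s_cmd="k3s server \
  --datastore-endpoint=https://10.0.0.60:2379,https://10.0.0.61:2379,https://10.0.0.62:2379 \
  --datastore-cafile=/etc/etcd/etcd-ca.crt \
  --datastore-certfile=/etc/etcd/server.crt \
  --datastore-keyfile=/etc/etcd/server.key"
echo "$k3s_cmd"
```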

Next, we will populate K3S_TOKEN with a token that will be used later when agents join the cluster.

export K3S_TOKEN="secret_edgecluster_token"

We are ready to install the server on the first node. Run the following command to start the process:

curl -sfL https://get.k3s.io | sh -

Repeat these steps on node 2 and node 3 to launch additional servers.

At this point, you have a 3-node K3s cluster that runs the control plane and etcd components in a highly available mode.

sudo kubectl get nodes


You can check the service status with the following command:

sudo systemctl status k3s.service


Install K3s agent

With the control plane up and running, we can easily add worker nodes (agents) to the cluster. We just need to make sure we use the same token that was given to the servers.

SSH into one of the worker nodes and run the following commands:

export K3S_TOKEN="secret_edgecluster_token"
export K3S_URL=https://10.0.0.60:6443

The K3S_URL environment variable prompts the installer to configure the node as an agent that connects to an existing server.

Finally, run the same installation script as in the previous step:

curl -sfL https://get.k3s.io | sh -


Check whether the new node has been added to the cluster.

sudo kubectl get nodes

Congratulations! You have successfully installed a highly available K3s cluster backed by an external etcd database.

Verify etcd database

Let us make sure that the K3s cluster is using the etcd database for state management.

We will launch a simple NGINX Pod in the K3s cluster.

sudo kubectl run nginx --image nginx --port 80
sudo kubectl get pods


The Pod specification and status should be stored in the etcd database. Let's try to retrieve them through the etcdctl CLI. Install the jq tool to parse the JSON output.

Since the value is base64-encoded, we will decode it with the base64 tool.
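To see what that final decoding step does in isolation: etcdctl's JSON output carries each value base64-encoded, and base64 -d restores the raw bytes. A self-contained round trip with a sample string:

```shell
# Simulate the encode/decode round trip implied by etcdctl -w json:
# the stored value comes back base64-encoded, and base64 -d reverses it.
encoded=$(printf '{"kind":"Pod","apiVersion":"v1"}' | base64 | tr -d '\n')
printf '%s\n' "$encoded" | base64 -d
# prints: {"kind":"Pod","apiVersion":"v1"}
```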

etcdctl --endpoints https://10.0.0.61:2379 \
--cert /etc/etcd/server.crt \
--cacert /etc/etcd/etcd-ca.crt \
--key /etc/etcd/server.key get /registry/pods/default/nginx \
--prefix=true -w json | jq -r .kvs[].value | base64 -d


The output shows that the pod has an associated key and value in the etcd database. Some special characters are not displayed correctly, but it shows us enough data about the pod.

In this article, we learned how to set up and configure a K3s cluster in high-availability mode. I hope it helps you practice more smoothly at the edge.

Origin blog.csdn.net/qq_42206813/article/details/109391034