Kubernetes Advanced Self-study Series | Deploy Distributed Kubernetes Cluster

Book source: "Kubernetes Advanced Combat (Second Edition)"

These are reading notes organized while studying the book, shared here for everyone's benefit. If there is any copyright infringement, the post will be deleted. Thank you for your support!

See also the summary post: Kubernetes Advanced Self-study Series | Summary — COCOgsta's Blog, CSDN.


2.2.1 Prepare the basic environment

kubeadm supports building Kubernetes clusters on Ubuntu 16.04+, Debian 9+, CentOS 7 and RHEL 7, Fedora 25+, HypriotOS v1.0.1+, and Container Linux. The basic environment requirements for deployment also include: each host has at least 2GB of memory and 2 CPU cores, the Swap device is disabled, and all hosts have full network connectivity with one another.

The basic premise of deploying a Kubernetes cluster is to prepare the required machines. In the example in this section, 4 hosts will be used, as shown in Table 2-2. Among them, k8s-master01 is the control plane node, and the other 3 are working nodes.

The following explains the preparation of the specific basic environment.

  1. Basic system environment settings

(1) Host time synchronization

If each host can directly access the Internet, just start the chronyd service on each host directly. Otherwise, you need to use the time server in the local network. For example, you can configure the Master as a chrony server, and then other nodes will synchronize the time from the Master.

~$ sudo systemctl start chronyd.service
~$ sudo systemctl enable chronyd.service
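For the no-Internet case described above, a minimal sketch of the chrony configuration follows, assuming the Master at 172.29.9.1 serves time to the 172.29.9.0/24 subnet (the file path varies by distribution — /etc/chrony/chrony.conf on Ubuntu, /etc/chrony.conf on CentOS):

```
# On the Master (acting as chrony server): allow clients on the local
# subnet, and keep serving time even if upstream sources are unreachable.
allow 172.29.9.0/24
local stratum 10

# On each Node: replace the default pool/server lines with the Master.
server k8s-master01.ilinux.io iburst
```

After editing, restart chronyd on each host and verify with `chronyc sources`.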

(2) Firewall settings of each node

The kube-proxy component running on each Node uses iptables or ipvs to implement the Service resource object, one of the core resource types of Kubernetes. To keep things simple, the firewall services on all hosts are turned off in advance.

~$ sudo ufw disable && sudo ufw status

(3) Disable the Swap device

When system memory is tight, Swap can relieve the pressure to some extent, but Swap lives on disk and performs far worse than RAM, which interferes with how Kubernetes schedules and orchestrates applications, so it must be disabled.

~$ sudo swapoff -a
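Note that swapoff -a disables Swap only until the next reboot; for a persistent change, the swap entry in /etc/fstab must be commented out as well. The sed pattern below demonstrates this on a sample file — on a real host the same expression would be run with sudo directly against /etc/fstab:

```shell
# Build a sample fstab, then comment out its swap entry; on a real host:
#   sudo sed -i '/\sswap\s/s/^/#/' /etc/fstab
printf '%s\n' 'UUID=1234 /    ext4 defaults 0 1' \
              '/swapfile  none swap sw       0 0' > /tmp/fstab.demo
sed -i '/\sswap\s/s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
```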

(4) Ensure the uniqueness of MAC address and product_uuid

Generally speaking, each network interface card has a unique MAC address, but cloned virtual machines may duplicate it; a non-unique MAC address or product_uuid can cause the installation to fail. It is recommended to avoid these problems when provisioning hosts and operating systems from templates, or to use orchestration tools such as Ansible to collect and compare these values across hosts.
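A quick way to inspect these values on each host, per the standard kubeadm prerequisites (compare the output across all machines):

```shell
# MAC addresses of all interfaces on this host
cat /sys/class/net/*/address
# DMI product_uuid (readable only by root on most systems)
sudo cat /sys/class/dmi/id/product_uuid
```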

  2. Configure the container runtime engine

If both Docker and containerd are detected on a host, kubeadm prefers Docker; since Docker has shipped with containerd from version 18.09 onward, detecting both is expected. This example uses the currently most widely used Docker-CE engine.

1) Install the necessary packages after updating the apt index information.

~$ sudo apt update
~$ sudo apt install apt-transport-https ca-certificates \
      curl gnupg-agent software-properties-common

2) Add Docker's official GPG key so that apt can verify package signatures:

~$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

3) Add a stable version of the Docker-CE repository for apt.

~$ sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) stable"

4) Install docker-ce after updating the apt index.

~$ sudo apt update
~$ sudo apt install docker-ce docker-ce-cli containerd.io

5) Configure Docker. Edit the configuration file /etc/docker/daemon.json, and make sure the following configuration exists.

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}

6) Start the Docker service, and set the service to start with the system boot.

~$ sudo systemctl daemon-reload
~$ sudo systemctl start docker.service
~$ sudo systemctl enable docker.service
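As a quick check that the daemon picked up daemon.json — in particular the systemd cgroup driver, which must match the kubelet's — docker info can be inspected:

```shell
# Should report "Cgroup Driver: systemd" if daemon.json took effect
sudo docker info | grep -i 'cgroup driver'
```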

  3. Install kubeadm, kubelet, and kubectl

Users need to ensure that the kubelet and kubectl versions match the control plane version installed through kubeadm; otherwise version skew may cause unexpected errors and problems.

1) Install the necessary packages after updating the apt index information:

~$ sudo apt update && sudo apt install -y apt-transport-https

2) Add the official Kubernetes program key:

~$ curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

3) Add the following line to the file /etc/apt/sources.list.d/kubernetes.list to add the Kubernetes package repository for apt:

deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
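One non-interactive way to create that file, using exactly the repository line above:

```shell
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | \
    sudo tee /etc/apt/sources.list.d/kubernetes.list
```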

4) Update the package index and install the package:

~$ sudo apt update
~$ sudo apt install -y kubelet kubeadm kubectl
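To guard against the version-skew problem mentioned above, the three packages can optionally be pinned so that a routine apt upgrade does not move them independently of the control plane:

```shell
# Pin the versions; release the pin later with: sudo apt-mark unhold ...
sudo apt-mark hold kubelet kubeadm kubectl
```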

2.2.2 Single control plane cluster

This section will use kubeadm to deploy a Kubernetes cluster consisting of 1 Master host and 3 Node hosts: k8s-master01, k8s-node01, k8s-node02, and k8s-node03. The hosts and network planning are shown in Figure 2-8.

Multi-host communication in a distributed system environment is usually based on the host name. Here, the hosts file will be used for host name resolution. Therefore, you need to edit the /etc/hosts file on the Master and each Node to ensure that its content is similar to the following.

172.29.9.1      k8s-master01.ilinux.io k8s-master01 k8s-api.ilinux.io
172.29.9.11     k8s-node01.ilinux.io k8s-node01 
172.29.9.12     k8s-node02.ilinux.io k8s-node02
172.29.9.13     k8s-node03.ilinux.io k8s-node03

  1. Initialize the control plane

The control plane components kube-apiserver, kube-controller-manager, and kube-scheduler initialized by kubeadm init, as well as the cluster state storage system etcd, all run as static Pods.

The kubeadm init command can read simple configuration parameters from command line options, and it also supports fine-grained configuration settings using configuration files. The common command format for initialization with command line options is given below, and it is run on the k8s-master01 host.

~$ sudo kubeadm init \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.19.0 \
    --control-plane-endpoint k8s-api.ilinux.io \
    --apiserver-advertise-address 172.29.9.1 \
    --pod-network-cidr 10.244.0.0/16 \
    --token-ttl 0
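The same initialization can also be expressed as a configuration file. The sketch below uses the kubeadm.k8s.io/v1beta2 API that ships with v1.19; verify the field names against `kubeadm config print init-defaults` before use:

```yaml
# kubeadm-init.yaml -- rough file equivalent of the command-line options above
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.29.9.1
bootstrapTokens:
- ttl: "0s"                  # counterpart of --token-ttl 0
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.19.0
controlPlaneEndpoint: k8s-api.ilinux.io
networking:
  podSubnet: 10.244.0.0/16
```

It would then be applied with `sudo kubeadm init --config kubeadm-init.yaml`.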

  2. Configure the command line tool kubectl

When kubeadm init initializes the control plane, it automatically generates /etc/kubernetes/admin.conf, a kubeconfig file with administrator privileges. Copying it to $HOME/.kube/config lets an ordinary user act as cluster administrator when accessing the cluster. Run the following commands as a normal user on the k8s-master01 host.

~$ mkdir -p $HOME/.kube
~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
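A quick sanity check that the copied kubeconfig works and kubectl can reach the API Server:

```shell
# Prints the control plane endpoint and cluster DNS addresses on success
kubectl cluster-info
```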

Next, the status information of the cluster nodes can be obtained through the kubectl get nodes command. The NotReady status in the output is because no network plugin has been deployed in the cluster yet.

~$ kubectl get nodes
NAME                     STATUS     ROLES    AGE     VERSION
k8s-master01.ilinux.io   NotReady   master   3m29s   v1.19.0

Users can install kubectl on any host that can reach the API Server via k8s-api.ilinux.io and copy or generate a kubeconfig file for it to access the control plane, including each worker node deployed in the following sections.

  3. Deploy the Flannel network plugin

Popular plugins that provide a Pod network for Kubernetes include Flannel, Calico, and Weave Net. Flannel is favored by users for its simplicity, rich backend modes, and ease of deployment and use.

The creation of Kubernetes resource objects is generally based on a configuration list in JSON or YAML format.

~$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

The command is actually complete only after the current node pulls the Flannel Docker image and starts the Pod resource object, at which point the node transitions to the Ready state.
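Progress can be followed by watching the Flannel DaemonSet Pods in the kube-system namespace (the manifest above labels them app=flannel):

```shell
# One flannel Pod should appear per node and reach the Running state
kubectl get pods -n kube-system -l app=flannel -o wide
```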

~$ kubectl get nodes
NAME                     STATUS   ROLES    AGE   VERSION
k8s-master01.ilinux.io   Ready    master   12m   v1.19.0

  4. Add worker nodes

Run the kubeadm join command on the host that has prepared the basic environment to add it to the cluster. This command requires the use of a shared token for authentication when communicating with the control plane for the first time. For example, run the following command on k8s-node01 to add it to the cluster.

~$ sudo kubeadm join k8s-api.ilinux.io:6443 --token dnacv7.b15203rny85vendw \
>     --discovery-token-ca-cert-hash sha256:61ea08553de1cbe76a3f8b14322cd276c57cbebd5369bc362700426e21d70fb8
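If the join command printed by kubeadm init was not saved, it can be regenerated on the Master at any time:

```shell
# Prints a complete "kubeadm join ..." command with a fresh token
sudo kubeadm token create --print-join-command
```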

To meet the requirement of subsequent mutual TLS authentication between the Master and Node components, the TLS bootstrapping initiated by the kubeadm join command generates a private key and a certificate signing request on the node and submits them to the control plane CA for automatic signing.

Then, repeat the above steps on k8s-node02 and k8s-node03 to add them to the cluster; when the cluster needs further expansion later, the process is the same. After all three worker nodes in this deployment example have joined the cluster and started, retrieving the node information again should produce output similar to the following.

~$ kubectl get nodes
NAME                     STATUS   ROLES    AGE     VERSION
k8s-master01.ilinux.io   Ready    master   16m     v1.19.0
k8s-node01.ilinux.io     Ready    <none>   2m49s   v1.19.0
k8s-node02.ilinux.io     Ready    <none>   110s    v1.19.0
k8s-node03.ilinux.io     Ready    <none>   70s     v1.19.0

Origin blog.csdn.net/guolianggsta/article/details/130668067