Building a Kubernetes v1.25.0 cluster in practice (using the Docker container runtime on newer versions)

Built-in support for the Docker container runtime was removed after k8s 1.24, and the installation method changed. Most guides found online cover versions before 1.24, so I have recorded the complete build process here for everyone's reference.

1. Introduction

There are many ways to deploy k8s, including kubeadm, kind, minikube, Kubespray, kops, etc. This article introduces the officially recommended kubeadm method to build a cluster.

2. Installation steps

1. Two virtual machines (set the IPs according to your own network environment) (master/node)

IP               hostname
192.168.1.100    master
192.168.1.101    node1

2. Turn off the firewall (master/node)

systemctl stop firewalld 
systemctl disable firewalld

3. Disable SELinux (master/node)

setenforce 0  # temporarily disable
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config # permanently disable

4. Disable swap (master/node)

swapoff -a    # temporarily disable; swap is turned off mainly for performance reasons
free          # check with this command whether swap is off
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanently disable

In /etc/fstab, this is the line that ends up commented out:

UUID=c83b0fb3-eb59-4b1e-bca0-a1731159c553 swap swap defaults 0 0

The fstab change only takes effect after a reboot, so swapoff -a is used first to disable swap immediately without rebooting. After the next reboot the change becomes permanent; `free -m` should then show 0 for swap.
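The effect of that sed pattern can be previewed on a copy before touching the real file; a minimal sketch (the sample fstab content below is made up for illustration):

```shell
# Work on a temporary copy so the real /etc/fstab is untouched.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
UUID=c83b0fb3-eb59-4b1e-bca0-a1731159c553 swap swap defaults 0 0
/dev/sda1 / ext4 defaults 1 1
EOF

# Same pattern as above: comment out every line that mentions swap.
sed -ri 's/.*swap.*/#&/' "$tmp"

# The swap line is now prefixed with '#'; the root filesystem line is unchanged.
cat "$tmp"
```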

5. Map host names to IP addresses (master/node)

$ vim /etc/hosts
# Add the following:
192.168.1.100	master
192.168.1.101	node1
# Save and exit
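The same edit can be scripted idempotently, so re-running it never duplicates entries. A small sketch (it writes to a temporary file by default for safety; set HOSTS_FILE=/etc/hosts on a real node):

```shell
# HOSTS_FILE defaults to a temp file for this demonstration;
# export HOSTS_FILE=/etc/hosts to apply it for real.
HOSTS_FILE=${HOSTS_FILE:-$(mktemp)}
for entry in "192.168.1.100 master" "192.168.1.101 node1"; do
    # Append only if the exact line is not already present.
    grep -qxF "$entry" "$HOSTS_FILE" || echo "$entry" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```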

6. Modify the host name (master/node)

# k8s-master
[root@localhost ~] hostname
localhost.localdomain
[root@localhost ~] hostname master ## takes effect immediately, lost after reboot
[root@localhost ~] hostnamectl set-hostname master ## permanent, survives reboot
# k8s-node1
[root@localhost ~] hostname
localhost.localdomain
[root@localhost ~] hostname node1  ## takes effect immediately, lost after reboot
[root@localhost ~] hostnamectl set-hostname node1  ## permanent, survives reboot

7. Bridge settings (master/node)

$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system
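One pitfall worth noting (an addition from practice, not part of the original steps, so verify it for your distribution): the two bridge sysctls above only exist once the br_netfilter kernel module is loaded, and kubeadm's preflight checks also expect net.ipv4.ip_forward to be enabled. A commonly used companion configuration looks like:

```
# /etc/modules-load.d/k8s.conf — ensures the module is loaded at boot
br_netfilter

# line often added alongside the bridge settings in /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
```

Run `modprobe br_netfilter` once to load the module immediately, then re-run `sysctl --system`.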

p.s.

  • It is best to follow the steps above in order, to avoid a lot of errors later.



8. Install docker (master/node).
If docker is already installed, skip this step. (This assumes the docker-ce yum repository is already configured; add it first if not.)

$ yum -y install docker-ce
# Enable docker to start on boot
$ systemctl enable docker
# Start docker
$ systemctl start docker

9. Add the domestic Alibaba Cloud YUM repository for kubernetes (master/node)

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0 
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF

10. Install kubeadm, kubelet and kubectl (master/node)

# Pick the version number you want to install
$ yum install -y kubelet-1.25.0 kubectl-1.25.0 kubeadm-1.25.0
# kubelet cannot be started yet, because its configuration does not exist at this point; for now, only enable it to start on boot
$ systemctl enable kubelet

11. Install container runtime (master/node)

If your Kubernetes version is lower than 1.24, you can skip this step.

Since version 1.24, Kubernetes is no longer directly compatible with Docker Engine: Docker Engine does not implement the CRI, which Kubernetes requires of its container runtimes. To bridge this gap, an additional service, cri-dockerd, must be installed. cri-dockerd carries on the traditional built-in Docker Engine support that was removed from the kubelet in version 1.24.

(At the time of writing, the latest Kubernetes release is 1.28.x.)

A container runtime must be installed on each node in the cluster so that Pods can run there, and newer Kubernetes versions require a runtime that conforms to the Container Runtime Interface (CRI). The steps below integrate Docker Engine with Kubernetes through the cri-dockerd adapter.

(1).Install cri-dockerd

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd-0.2.6.amd64.tgz
tar -xf cri-dockerd-0.2.6.amd64.tgz
cp cri-dockerd/cri-dockerd /usr/bin/
chmod +x /usr/bin/cri-dockerd

(2).Configure startup service

cat <<"EOF" > /usr/lib/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.8
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
EOF

The key part is the ExecStart line: ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.8

The required version of the pause image can be obtained via kubeadm config images list

(3).Generate socket file

cat <<"EOF" > /usr/lib/systemd/system/cri-docker.socket
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service
[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker
[Install]
WantedBy=sockets.target
EOF

(4). Start the cri-docker service and configure startup

systemctl daemon-reload
systemctl enable cri-docker
systemctl start cri-docker
systemctl is-active cri-docker

12. Deploy Kubernetes (master only; the node does not execute kubeadm init)

Run kubeadm init with the following options:

kubeadm init \
--apiserver-advertise-address=192.168.1.100 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.25.0 \
--service-cidr=10.10.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=all \
--cri-socket unix:///var/run/cri-dockerd.sock
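The same settings can also be expressed as a kubeadm configuration file and applied with `kubeadm init --config kubeadm.yaml`. A sketch using the v1beta3 config API (field names are from kubeadm's configuration API; the values are copied from the flags above, so verify them against your kubeadm version):

```yaml
# kubeadm.yaml — equivalent of the command-line flags above
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.100
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.25.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  serviceSubnet: 10.10.0.0/12
  podSubnet: 10.244.0.0/16
```

A config file is easier to keep in version control and re-use than a long flag list.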

p.s.

--apiserver-advertise-address is the master node's IP.

--pod-network-cidr=10.244.0.0/16 must stay consistent with the network in the kube-flannel.yml used later, i.e. use 10.244.0.0/16 and do not change it.

On success, the output ends with the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.9.2.94:6443 --token xhurmz.i2tnhhuw7c0ecuw6 \
	--discovery-token-ca-cert-hash sha256:b3683deac5daa34a5778ede0ac0210bfbefce78a380c738aac7c2304c1cb1e4f

p.s. kubeadm init pulls the required docker images during execution, so the console often appears stuck for a long time; the images are being downloaded at that point. You can run docker images in another terminal to see whether new images are appearing. (The IP and token in the join command above come from the author's environment; use the values printed by your own kubeadm init.)

13. Configure the kubectl tool. After kubeadm init finishes, the console prompts you to run the following commands (i.e. the last console output in step 12); follow them (master/node)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Then

vim /etc/profile
# Add the following variable:
export KUBECONFIG=/etc/kubernetes/admin.conf
source /etc/profile

Test the kubectl command

[root@k8s-master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE   VERSION
master   NotReady   control-plane   23m   v1.25.0

Generally speaking, the status will be NotReady at first; the node will not become Ready until the components finish starting and the Pod network plug-in (next step) is installed.

14. Install the Pod network plug-in flannel (master/node)

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Error: The connection to the server http://raw.githubusercontent.com was refused - did you specify the right host or port?
Reason: the resource is hosted abroad and may be unreachable from some networks.
Solution: map the host to a reachable IP.

vim /etc/hosts
# Add the following line to /etc/hosts (this IP can change over time; verify it is currently reachable):
199.232.28.133 raw.githubusercontent.com

Re-execute the above command and the installation will be successful!

15. Join the node to the master (node), using the join command from the step 12 console output

kubeadm join 10.9.2.94:6443 --token ebe5w8.hfd3b59u9ww1r966 \
    --discovery-token-ca-cert-hash sha256:b3683deac5daa34a5778ede0ac0210bfbefce78a380c738aac7c2304c1cb1e4f \
    --ignore-preflight-errors=all \
    --cri-socket unix:///var/run/cri-dockerd.sock

p.s.

--ignore-preflight-errors=all \

--cri-socket unix:///var/run/cri-dockerd.sock

These two lines must be added, otherwise various errors will be reported:

[preflight] Running pre-flight checks

error execution phase preflight: [preflight] Some fatal errors occurred:

[ERROR CRI]: container runtime is not running: output: time="2023-08-31T16:42:23+08:00" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"

, error: exit status 1

[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

To see the stack trace of this error execute with --v=5 or higher

Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock

To see the stack trace of this error execute with --v=5 or higher

16. On the master, verify that the node has joined

kubectl get nodes

At this point, the entire k8s cluster environment is basically complete!

Notice

  • When installing, pay attention to the versions of the programs you install.
  • The k8s components themselves run as docker containers, so many docker images will be downloaded.
  • The installation rarely succeeds on the first try and many problems can come up; use tail -f /var/log/messages to track the logs.
  • It is best to synchronize the system time across the machines; token validation between nodes is time-sensitive.

3. Relevant notes

  • After kubeadm init, the kubeadm join command was not saved. How can it be retrieved?
# Just generate a new token (this prints a complete join command)
kubeadm token create --print-join-command
# The following command lists the existing tokens
kubeadm token list
  • A node's kubeadm join failed. How to join again?
# First run
kubeadm reset -f
# Then run
kubeadm join  xx.....
  • Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
  • Queries
# List nodes
kubectl get nodes
# List pods; usually add "-n" to specify the namespace. Omitting it is the same as -n default
kubectl get pods -n kube-system


 

4. Related issues

1. K8s "deprecates" docker?

I remember that at the time, interpretations of "k8s abandons docker" were everywhere, and many articles claimed docker was dead. Later another wave of articles clarified that docker was not abandoned entirely; only its support as a container runtime was removed.

  • What k8s removed is actually dockershim, an adapter between the kubelet and docker that translated the docker interface into the CRI (Container Runtime Interface) required by k8s. This was done to simplify the k8s architecture, improve performance and security, and support more container runtimes.
  • k8s did not deprecate docker entirely; it dropped support for docker as a container runtime. This means k8s no longer uses docker to create and run containers, and instead uses other CRI-compliant runtimes such as containerd or CRI-O. The reason is that docker does not implement the CRI standard and needed the dockershim intermediate layer to adapt to the k8s API.
  • Removing docker from k8s does not mean docker is useless, or that you cannot or should not use docker as a development tool. Docker remains a very useful tool for building container images, and the images it produces conform to the OCI (Open Container Initiative) standard. Any image built with docker will work fine with the other container runtimes in k8s, so there is no need to worry about docker images becoming invalid or incompatible.


If the article is helpful to you, please follow + like it!


Origin blog.csdn.net/citywu123/article/details/132684882