Kubernetes - kubeadm deployment


Kubeadm is the officially recommended deployment solution and is still under active development. It has a number of rough edges: extending the cluster still requires combining it with other tools, and high availability takes extra effort to set up. If you just want to experiment, it is highly recommended; for a production environment, think twice.

Since kubeadm is essentially a packaged set of bootstrap steps, extending it still means combining it with other tools. For upgrades and the like, you can follow the official upgrade guide, which is fairly straightforward.

1 Environment preparation

  • Preparation:
Host name   Node type   Operating system   IP address        Required components
node-128    master1     CentOS 7.9         192.168.17.128    docker, kubectl, kubeadm, containerd, keepalived, haproxy
node-129    master2     CentOS 7.9         192.168.17.129    docker, kubectl, kubeadm, containerd, keepalived, haproxy
node-130    node        CentOS 7.9         192.168.17.130    docker, kubelet, kube-proxy, containerd
node-131    node        CentOS 7.9         192.168.17.131    docker, kubelet, kube-proxy, containerd
[root@node-251 opt]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

1.1 Configure the hostname on each node and configure the Hosts file

[root@node-251 opt]# cat << eof >> /etc/hosts
> 192.168.71.253 node-253
> eof
[root@node-251 opt]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.71.251 node-251
192.168.71.252 node-252
192.168.71.253 node-253
[root@node-251 opt]# scp /etc/hosts node-252:/etc/hosts
...
hosts                                                                                                 100%  230   213.2KB/s   00:00
[root@node-251 opt]# scp /etc/hosts node-253:/etc/hosts
...
hosts                                                                                                 100%  230   228.2KB/s   00:00
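
The section title also calls for setting the hostname on every node, which is not shown above; a minimal sketch using hostnamectl, with the hostnames from the table in section 1:

# Run on each node with its own name, e.g. on the first master:
hostnamectl set-hostname node-128
# and node-129 / node-130 / node-131 on the remaining hosts respectively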

1.2 Turn off the firewall, disable selinux, and turn off swap

# Disable the system firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a   # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent
# Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF 
net.bridge.bridge-nf-call-ip6tables = 1 
net.bridge.bridge-nf-call-iptables = 1 
EOF
sysctl --system  # apply

# Time synchronization
# Use the Aliyun NTP server for a one-off sync
[root@k8s-node1 ~]# ntpdate ntp.aliyun.com
 4 Sep 21:27:49 ntpdate[22399]: adjust time server 203.107.6.88 offset 0.001010 sec

1.3 Configure password-free login

Generate an SSH key pair on the first node and push the public key to the other hosts:

[root@node-251 opt]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:ztGq/7P5XHswguUSHqM0AjeO4La7jVx1z4FVFQ0bC14 root@node-251
The key's randomart image is:
+---[RSA 2048]----+
|            o.E+ |
|  . . o    o o +.|
| . . = .  . . o  |
|  o . o o++ .    |
| . .  .oS+oB     |
|  .  . +.=+.o o  |
|   ..   + o. ..o |
| ..+   .  .o . ..|
|  +.. ....++o .. |
+----[SHA256]-----+
[root@node-251 opt]# ssh-copy-id root@node-252
...
[root@node-251 opt]# ssh-copy-id root@node-253
...
[root@node-251 opt]# ssh node-252
Last login: Wed Apr 19 16:23:06 2023 from 192.168.20.252
[root@node-252 ~]# exit
logout
Connection to node-252 closed.

Each node should be able to SSH to every other node without a password, which makes the later file copying easier.
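
A one-pass sketch of doing this (assuming the node names from the /etc/hosts file above and that ssh-keygen has already been run on the node you are working on):

# Push this node's public key to all nodes, including itself;
# ssh-copy-id prompts once for each remote root password.
for host in node-251 node-252 node-253; do
  ssh-copy-id root@${host}
done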

1.4 Configure kernel parameters

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

1.5 Configure br_netfilter

If this is not configured, cluster initialization will fail with an error.

modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/ipv4/ip_forward
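
These commands only affect the running system. A small sketch to make the settings survive a reboot (assuming the standard systemd modules-load.d mechanism):

# Load br_netfilter automatically at boot
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF

# Persist IP forwarding alongside the bridge sysctls configured earlier
cat >> /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
EOF
sysctl --system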

Summary

systemctl stop firewalld
systemctl disable firewalld
getenforce
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
ntpdate ntp.aliyun.com

2. Install K8s

2.1 Install docker (each node)

Add docker installation source

sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

install docker

yum install -y docker-ce
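
This installs the latest docker-ce from the repo. If you need to pin a specific version instead, a hedged sketch (the exact version string depends on what the mirror currently carries):

# List the versions available in the configured repo
yum list docker-ce --showduplicates | sort -r
# Install a pinned version (the version string below is only an example)
yum install -y docker-ce-20.10.24-3.el7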

2.2 Install K8s components (each node)

Add k8s installation source

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install k8s components

yum install -y kubelet kubeadm kubectl
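
This pulls the newest packages in the repo. If you want the tool versions to match the kubernetesVersion used later in this article (1.27.x), they can be pinned; a sketch (the exact release numbers on the mirror may differ):

yum install -y kubelet-1.27.0 kubeadm-1.27.0 kubectl-1.27.0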

Enable and start the services

systemctl enable kubelet
systemctl start kubelet
  
systemctl enable docker
systemctl start docker

kubelet startup error

systemctl status docker
journalctl -xe
kubelet[8783]: E0714 04:42:02.417578    8783 run.go:74] "command failed" err="failed to load kubelet config file, error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"

The reason is simply that the following has not been executed yet:

kubeadm init

Don't worry; it will start normally once the master has been initialized.

2.3 Modify docker configuration

Kubernetes officially recommends that docker (and other runtimes) use systemd as the cgroup driver; otherwise kubelet cannot start.

cat <<EOF > /etc/docker/daemon.json
{
    "registry-mirrors": ["https://registry.docker-cn.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

In the author's installation the cgroup driver was already systemd, so no change was needed. Also note that if the configuration file is modified, docker may fail to restart after daemon-reload, probably because the setting conflicts with an option in the unit file.

/lib/systemd/system/docker.service
# Find and delete the line below, then save and exit to resolve the conflict
# --exec-opt native.cgroupdriver=cgroupfs \
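
After editing, reload systemd and restart docker so the change takes effect:

systemctl daemon-reload
systemctl restart docker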

Verify whether docker's Cgroup is systemd

[root@node-128 ~]# docker info |grep Cgroup
  WARNING: You're not using the default seccomp profile
Cgroup Driver: systemd

2.4 Install and configure haproxy, keepalived (only master node)

In this setup, Keepalived presents the cluster to users as a single entity by providing a VIP and handles failover between the scheduler hosts so the scheduling service stays highly available. HAProxy is responsible for load balancing and connects to the server cluster behind it.

Generally speaking, two servers run Keepalived, one as master and one as backup, and together they present a virtual IP to the outside world.
Once Keepalived detects that the HAProxy process has stopped, it tries to restart HAProxy. If the restart fails, Keepalived terminates itself; the other Keepalived node notices that its peer has disappeared and automatically takes over, the virtual IP drifts to that node, and the HAProxy there takes over distributing requests.

A scene from everyday life illustrates the process.
Once upon a time, Little C was heading to Stanford to enroll. He did not know where the campus was, but the school had arranged a shuttle service, so he could take the shuttle bus there. Full of joy, Little C got off the high-speed train and looked for the shuttle-stop signs. He followed one sign after another, each saying a teacher was waiting at the pick-up point, and finally found the teacher. The teacher saw that Little C was a student waiting for the shuttle and assigned him a seat in the sixth row of the first car. With that, Little C was on the shuttle.
In this story, the signs correspond to the VIP provided by Keepalived, and the teacher corresponds to HAProxy doing load balancing. Little C followed the signs (Keepalived's VIP) to find the teacher who arranged the boarding (HAProxy providing load-balancing scheduling), and under the teacher's arrangement got on the bus (the client's request reached a back-end server that provides the service).


yum install keepalived haproxy -y

Run on all master nodes; be sure to replace the network interface name and the master node IP addresses with your own.

[root@node-128 kubernetes]# cat /etc/haproxy/haproxy.cfg
global
  maxconn  2000
  ulimit-n  16384
  log  127.0.0.1 local0 err
  stats timeout 30s
defaults
  log global
  mode  http
  option  httplog
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  timeout http-request 15s
  timeout http-keep-alive 15s
frontend monitor-in
  bind *:33305
  mode http
  option httplog
  monitor-uri /monitor
frontend k8s-master
  bind 0.0.0.0:7443
  bind 127.0.0.1:7443
  mode tcp
  option tcplog
  tcp-request inspect-delay 5s
  default_backend k8s-master
backend k8s-master
  mode tcp
  option tcplog
  option tcp-check
  balance roundrobin
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server node-128    192.168.17.128:6443   check
  server node-129    192.168.17.129:6443   check
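
Before starting the service, haproxy can validate the configuration file itself (a quick sanity check):

haproxy -c -f /etc/haproxy/haproxy.cfg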

On the k8s-master1 node; note that mcast_src_ip must be replaced with the actual master1 IP address and virtual_ipaddress with the LB (VIP) address:

[root@node-128 kubernetes]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens37
    mcast_src_ip 192.168.17.128
    virtual_router_id 60
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.17.120
    }
    track_script {
       chk_apiserver
    }
}

On k8s-master2, modify /etc/keepalived/keepalived.conf in the same way; change mcast_src_ip to master2's address and set state to BACKUP (virtual_ipaddress stays the same VIP):

[root@node-129 kubernetes]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    script_user root
    enable_script_security
}
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens37
    mcast_src_ip 192.168.17.129
    virtual_router_id 60
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    virtual_ipaddress {
        192.168.17.120
    }
    track_script {
       chk_apiserver
    }
}

All master nodes configure the KeepAlived health check file:

$ cat > /etc/keepalived/check_apiserver.sh <<'EOF'
#!/bin/bash
err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep haproxy)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done
if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

Start haproxy and keepalived (all master nodes)

$ chmod +x /etc/keepalived/check_apiserver.sh
$ systemctl daemon-reload
$ systemctl enable --now haproxy
$ systemctl enable --now keepalived 
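
To confirm that the MASTER node actually holds the VIP, a quick check (interface name and VIP as configured above):

$ ip addr show ens37 | grep 192.168.17.120
$ systemctl status haproxy keepalived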

Test whether the LB VIP is reachable

$ telnet 192.168.17.120 7443
# Output like the following indicates success
Trying 192.168.17.120...
Connected to 192.168.17.120.
Escape character is '^]'.
Connection closed by foreign host.

2.5 Install Containerd (each node)

  • The Origin and Development of Containerd

With the strong rise of Docker, cloud computing entered the container era. Docker grew rapidly on the strength of its container architecture and container images, dealing a crushing blow to other container technologies; many companies, including Google, could not match it. To avoid being dominated by Docker, Google and other Internet companies jointly promoted an open-source container runtime that serves as Docker's core dependency: Containerd, an industry-standard container runtime that emphasizes simplicity, robustness, and portability. It was born inside Docker and provides the following functions:

1. Manage the life cycle of containers (from creating a container to destroying it)
2. Pull/push container images
3. Storage management (manage the storage of image and container data)
4. Call runc to run containers (interact with runc and other container runtimes)
5. Manage container network interfaces and networking

Later, Google and Red Hat persuaded Docker to donate libcontainer to a neutral community (OCI, Open Container Initiative), where it was renamed runc. Google and others then jointly founded the CNCF (Cloud Native Computing Foundation) around large-scale container orchestration to compete with Docker. Docker responded by launching Swarm to compete with Kubernetes, but the outcome is clear at a glance.

Kubernetes then designed a set of interface rules, CRI (Container Runtime Interface), and the first runtime to support it was Containerd. To keep supporting Docker, a dedicated shim component was integrated that translates CRI calls into Docker API calls. This article deploys the specified version of Kubernetes through kubeadm and installs Containerd together with Docker, supporting both container runtimes.

  • Install Containerd environment dependencies and set yum source and view Containerd version
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum list | grep containerd
  • Install Containerd
yum -y install containerd
  • Create a Containerd configuration file and modify the corresponding configuration

The default configuration file of Containerd is /etc/containerd/config.toml. We can generate a default configuration through commands. We need to put all the related files of Containerd into the /etc/containerd folder and create the /etc/containerd folder And generate the configuration file of Containerd.

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
  • Pulling from public image registries inside China is relatively slow, so to save time configure registry mirrors for Containerd. Differences between Containerd and Docker (from the article "Containerd tutorial"):

      - Containerd only honors a registry mirror for images pulled through CRI, i.e. when the pull is triggered by crictl or by Kubernetes; pulls made with ctr do not use the mirror.
      - Docker can only configure a mirror for Docker Hub, whereas Containerd can configure a mirror for any registry.
    

    Modify the registry block in the configuration file /etc/containerd/config.toml, adding the mirror entries under the plugins."io.containerd.grpc.v1.cri".registry.mirrors section:

    ...
    [plugins."io.containerd.grpc.v1.cri"]
      device_ownership_from_security_context = false
    ...
      restrict_oom_score_adj = false
      sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
    ...
      [plugins."io.containerd.grpc.v1.cri".registry]
        config_path = ""

        [plugins."io.containerd.grpc.v1.cri".registry.auths]

        [plugins."io.containerd.grpc.v1.cri".registry.configs]

        [plugins."io.containerd.grpc.v1.cri".registry.headers]

        [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
          [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
            endpoint = ["https://dockerhub.mirrors.nwafu.edu.cn"]
          [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
            endpoint = ["https://registry.aliyuncs.com/k8sxio"]


    registry.mirrors."xxx": Indicates the mirror warehouse that needs to be configured with mirror. For example, registry.mirrors."docker.io" means configure the mirror of docker.io.
    endpoint : Indicates the mirror acceleration service that provides mirror. For example, it is recommended to use the mirror acceleration service provided by Northwest Agriculture and Forestry University as the mirror of docker.io.

  • Start Containerd

    systemctl daemon-reload
    systemctl enable containerd
    systemctl restart containerd
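
  • Optionally verify the runtime through CRI so the mirror configuration above is actually exercised; a minimal sketch, assuming crictl from the cri-tools package (installed alongside kubeadm) is available:

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
EOF
# Show runtime status, then test a pull that goes through the docker.io mirror
crictl info
crictl pull docker.io/library/busybox:latest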

2.6 Initialize the cluster with kubeadm (master node only)

What is kubeadm?

Let's look at the introduction from the official website:

Kubeadm is a tool that provides kubeadm init and kubeadm join as best-practice "fast paths" for creating a Kubernetes cluster.
kubeadm performs the actions necessary to get a minimum viable cluster up and running. By design, it cares only about bootstrapping, not about provisioning machines. Likewise, installing various nice-to-have add-ons, such as the Kubernetes Dashboard, monitoring solutions, and cloud-specific plugins, is out of scope.
Instead, we expect higher-level and more tailored tooling to be built on top of kubeadm; ideally, using kubeadm as the basis of all deployments makes it easier to create conformant clusters.

In other words, kubeadm runs Kubernetes itself in a containerized way: the control-plane components are started as containers (static Pods).

Initialize the configuration file

$ kubeadm config print init-defaults > kubeadm.yaml
[root@node-129 kubernetes]# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.17.128
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node-128
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.17.120
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.17.120:7443
controllerManager: {}
dns: {}
#  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.27.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

For details on the configuration above, refer to https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta3

2.7 Download mirror (only master node)

  # List the images that will be used; if everything is fine you will get the following list
[root@node-128 kubernetes]# kubeadm config images list --config kubeadm.yaml
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.27.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.27.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.27.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.27.0
registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.7-0
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1

  # Pull the images to the local machine in advance
[root@node-128 kubernetes]# kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.27.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.27.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.27.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.27.0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.7-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.10.1

If this step fails, containerd is probably not installed or not configured correctly. After looking into it: dockershim has been removed since Kubernetes 1.24, so a CRI runtime such as containerd is required. For details,
refer to Discussion on container runtime – starting from the official removal of dockershim from K8s

2.8 Initialize the master node (first master node only)

[root@node-128 kubernetes]# kubeadm init --config kubeadm.yaml --upload-certs
[init] Using Kubernetes version: v1.27.0
...

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.17.120:7443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:fe4890019230110147607a228c79af30ebdb4b3d7cecf7a0e9f36a2b3e74784f \
        --control-plane --certificate-key ec9e266ded35047f431b9efe82ce903672c589199973d88f6df210e80e5d3b75

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.17.120:7443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:fe4890019230110147607a228c79af30ebdb4b3d7cecf7a0e9f36a2b3e74784f

Next, following the prompts above, configure kubectl client authentication (run on the master):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

If an error occurs during initialization, fix the reported issue and then run kubeadm reset -f ; ipvsadm --clear ; rm -rf ~/.kube before retrying.

2.9 Add other master nodes to the cluster

kubeadm join 192.168.17.120:7443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:fe4890019230110147607a228c79af30ebdb4b3d7cecf7a0e9f36a2b3e74784f \
        --control-plane --certificate-key ec9e266ded35047f431b9efe82ce903672c589199973d88f6df210e80e5d3b75
[root@node-128 kubernetes]# kubectl get nodes
NAME       STATUS     ROLES           AGE   VERSION
node-128   NotReady   control-plane   17m   v1.27.3
node-129   NotReady   control-plane   58s   v1.27.3

2.10 Add node nodes to the cluster

kubeadm join 192.168.17.120:7443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:fe4890019230110147607a228c79af30ebdb4b3d7cecf7a0e9f36a2b3e74784f
#If an error occurs, rerun with --v=5 for details. If you see the following error:
[preflight] Some fatal errors occurred:
    [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
error execution phase preflight

#Cause: this error occurs in the preflight checks that run before kubeadm init or kubeadm join, because /proc/sys/net/bridge/bridge-nf-call-iptables is not set to 1. It can be fixed as follows:

#Fix:
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/ipv4/ip_forward
[root@node-129 kubernetes]# kubectl get nodes
NAME       STATUS     ROLES           AGE     VERSION
node-128   NotReady   control-plane   5m24s   v1.27.3
node-129   NotReady   control-plane   4m46s   v1.27.3
node-130   NotReady   <none>          3m36s   v1.27.3
node-131   NotReady   <none>          4s      v1.27.3

2.11 If you lose the join command...

It can be regenerated with the following command

kubeadm token create --print-join-command
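
That prints a join command for worker nodes. To join another control-plane node you additionally need a fresh certificate key, since (as noted in the init output above) the uploaded certificates are deleted after two hours; a sketch:

# Re-upload the control-plane certificates and print a new certificate key
kubeadm init phase upload-certs --upload-certs
# Append to the printed join command:
#   --control-plane --certificate-key <key printed above>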

2.12 Install the network plug-in (installed on the master)

Notice that the nodes are all NotReady? That is because a network plugin still needs to be installed before they can become Ready.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
namespace/kube-flannel unchanged
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.apps/kube-flannel-ds unchanged

If the manifest cannot be downloaded directly, fetch it some other way (for example through a proxy) and apply the local copy.
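
Once applied, you can watch the flannel DaemonSet pods start before checking node status (namespace taken from the apply output above):

kubectl get pods -n kube-flannel -o wide
kubectl get pods -n kube-system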

Node status

[root@node-128 kubernetes]# kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
node-128   Ready    control-plane   14m     v1.27.3
node-129   Ready    control-plane   13m     v1.27.3
node-130   Ready    <none>          12m     v1.27.3
node-131   Ready    <none>          9m15s   v1.27.3

Origin blog.csdn.net/u010230019/article/details/131721311