K8S binary deployment and installation

Table of contents

Environmental preparation

1. Common k8s deployment methods

Minikube

Kubeadm

Binary installation and deployment

turn off firewall

close selinux

close swap

Set the hostname according to the plan

Add hosts to the master

Pass bridged IPv4 traffic to iptables chains

time synchronization

2. Deploy etcd cluster

1. Master node deployment

Upload etcd-cert.sh and etcd.sh to the /opt/k8s/ directory

//Create a directory for generating CA certificates, etcd server certificates and private keys

2. Modify node1 and node2

3. Start on the master1 node

Deploy the docker engine

3. Flannel network configuration

flannel network configuration

How Flannel works:

Add flannel network configuration information on the master1 node

Operate on all master nodes

4. Deploy the master component

Operate on the master1 node

Generate CA certificates, certificates and private keys for relevant components

Check if the process started successfully

Generate a certificate for kubectl to connect to the cluster

5. Deploy node components

Deploy node components


Environmental preparation

k8s cluster master1: 192.168.2.66 kube-apiserver kube-controller-manager kube-scheduler etcd

k8s cluster node1: 192.168.2.200 kubelet kube-proxy docker flannel

k8s cluster node2: 192.168.2.77 kubelet kube-proxy docker flannel

Each machine needs at least 2 CPU cores and 4 GB of memory (2C4G)

1. Common k8s deployment methods

Minikube

Minikube is a tool that can quickly run a single-node micro-K8s locally. It is only used to learn and preview some features of K8s.
Deployment address: https://kubernetes.io/docs/setup/minikube

Kubeadm

Kubeadm is also a tool; it provides kubeadm init and kubeadm join for rapid deployment of K8s clusters and is relatively simple.
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

Binary installation and deployment

The first choice for production: download the release binary packages from the official site, manually deploy each component, and self-sign the TLS certificates to form a K8s cluster. This approach is recommended even for beginners.
https://github.com/kubernetes/kubernetes/releases

Summary: kubeadm lowers the deployment threshold but hides many details, which makes problems hard to troubleshoot. If you want more control, it is recommended to deploy the Kubernetes cluster from binary packages. Although manual deployment is more troublesome, you learn a lot about how the components work along the way, which also helps with later maintenance.

turn off firewall

systemctl stop firewalld
systemctl disable firewalld

close selinux

setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config

close swap

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

Set the hostname according to the plan

hostnamectl  set-hostname master01
hostnamectl  set-hostname node01
hostnamectl  set-hostname node02

Add hosts to the master

cat >>  /etc/hosts <<EOF
192.168.2.66 master01
192.168.2.200 node01
192.168.2.77 node02
EOF

Pass bridged IPv4 traffic to iptables chains

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
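If sysctl --system reports that these keys do not exist, the bridge netfilter module is probably not loaded yet; loading it first is a common fix (this step is an addition, not part of the original walkthrough):

modprobe br_netfilter
sysctl --system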

time synchronization

yum -y install ntpdate
ntpdate time.windows.com
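To keep the clock from drifting after this one-off sync, the job can also be made periodic; a minimal sketch, assuming root's crontab and the same time server as above:

# Re-sync every 30 minutes (illustrative schedule)
(crontab -l 2>/dev/null; echo '*/30 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1') | crontab -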


  


2. Deploy etcd cluster

As a service discovery system, etcd has the following characteristics:

• Simple, easy to install and configure, and provides HTTP API for interaction, easy to use

• Security: Supports SSL certificate verification

• Fast: A single instance supports 2k+ read operations per second

• Reliable: use the raft algorithm to achieve the availability and consistency of distributed system data

Prepare to issue a certificate environment:

CFSSL is a PKI/TLS tool open sourced by CloudFlare. CFSSL includes a command-line tool and an HTTP API service for signing, verifying and bundling TLS certificates. It is written in Go.

CFSSL uses a configuration file to generate a certificate, so before self-signing, it needs to generate a configuration file in JSON format that it recognizes. CFSSL provides a convenient command line to generate a configuration file.

CFSSL is used to provide TLS certificates for etcd, and it supports signing three types of certificates:

1. Client certificate: the certificate carried by the client when connecting to a server, used by the server to verify the client's identity, for example kube-apiserver accessing etcd;

2. Server certificate: the certificate carried by the server, used by the client to verify the server's identity, for example etcd providing services externally;

3. Peer certificate, the certificate used when connecting to each other, such as verification and communication between etcd nodes.

Here all use the same set of certificate authentication.

Note: etcd is not given dedicated machines here; it is deployed directly on the master and node hosts

1. Master node deployment

Download Certificate Maker

curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
# or
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo 
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo

=================================

cfssl: tool command for certificate signing

cfssljson: converts the JSON output generated by cfssl into certificate files

cfssl-certinfo: Verify certificate information

cfssl-certinfo -cert <certificate name>

# View certificate information

//Create k8s working directory

mkdir /opt/k8s
cd /opt/k8s/


 

Upload etcd-cert.sh and etcd.sh to the /opt/k8s/ directory

chmod +x etcd-cert.sh etcd.sh


 

//Create a directory for generating CA certificates, etcd server certificates and private keys

mkdir /opt/k8s/etcd-cert
 
mv etcd-cert.sh etcd-cert/
cd /opt/k8s/etcd-cert/
./etcd-cert.sh

Generate CA certificate, etcd server certificate and private key
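The contents of etcd-cert.sh are not reproduced here, but its core is normally just two cfssl calls; a minimal sketch, assuming the usual JSON inputs (ca-config.json, ca-csr.json, and a server-csr.json whose hosts list contains 192.168.2.66, 192.168.2.200 and 192.168.2.77) and a signing profile named www defined in ca-config.json; the real script may differ:

# Generate the CA certificate and private key (produces ca.pem and ca-key.pem)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
 
# Issue the etcd server certificate and private key (produces server.pem and server-key.pem), signed by the CA above
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server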

Upload etcd-v3.3.10-linux-amd64.tar.gz to the /opt/k8s/ directory and unzip the etcd package

cd /opt/k8s/
tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
ls etcd-v3.3.10-linux-amd64
Documentation etcd etcdctl README-etcdctl.md README.md
READMEv2-etcdctl.md

============================
etcd is the startup command of etcd service, followed by various startup parameters

etcdctl mainly provides command line operations for etcd services

//Create a directory for storing etcd configuration files, command files, and certificates

mkdir -p /opt/etcd/{cfg,bin,ssl}
mv /opt/k8s/etcd-v3.3.10-linux-amd64/etcd /opt/k8s/etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
cp /opt/k8s/etcd-cert/*.pem /opt/etcd/ssl/
./etcd.sh etcd01 192.168.2.66 etcd02=https://192.168.2.200:2380,etcd03=https://192.168.2.77:2380

// The script enters a waiting state here because all three etcd services need to be started; if only this one is started, it will hang until the other etcd nodes in the cluster come up. This can be ignored for now

// Open another window to check whether the etcd process is running normally

ps -ef | grep etcd  
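For reference, the configuration that etcd.sh writes on the master (/opt/etcd/cfg/etcd) should mirror the node configurations shown in the next step, only with etcd01's name and IP; a sketch under that assumption:

#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.66:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.66:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.66:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.66:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.2.66:2380,etcd02=https://192.168.2.200:2380,etcd03=https://192.168.2.77:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"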

//Copy all etcd related certificate files and command files to the other two etcd cluster nodes

scp -r /opt/etcd/ [email protected]:/opt/
scp -r /opt/etcd/ [email protected]:/opt/


  

//Copy the etcd service management file to the other two etcd cluster nodes

scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/
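The etcd.service unit being copied here was generated by etcd.sh; it typically looks something like the sketch below, although the exact flags and paths depend on the script, so treat it as illustrative only:

[Unit]
Description=Etcd Server
After=network.target
 
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
# etcd reads the ETCD_* variables from the environment file; the TLS material is passed as flags
ExecStart=/opt/etcd/bin/etcd \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-cert-file=/opt/etcd/ssl/server.pem \
  --peer-key-file=/opt/etcd/ssl/server-key.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target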

2. Modify node1 and node2

Modify at node1 node

cd /opt/etcd/cfg/
vim etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.200:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.200:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.200:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.200:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.2.66:2380,etcd02=https://192.168.2.200:2380,etcd03=https://192.168.2.77:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
systemctl daemon-reload
systemctl enable --now etcd.service

Modify at node2 node

cd /opt/etcd/cfg/
vim etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.77:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.77:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.77:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.77:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.2.66:2380,etcd02=https://192.168.2.200:2380,etcd03=https://192.168.2.77:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
systemctl daemon-reload
systemctl enable --now etcd.service

3. Start on the master1 node

First start on the master1 node

cd /opt/k8s/
./etcd.sh etcd01 192.168.2.66 etcd02=https://192.168.2.200:2380,etcd03=https://192.168.2.77:2380


  

Then start on node1 and node2 respectively

systemctl start etcd.service

Operate on the master1 node

ln -s /opt/etcd/bin/etcd* /usr/local/bin

//Check etcd cluster status

cd /opt/etcd/ssl
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.2.66:2379,https://192.168.2.200:2379,https://192.168.2.77:2379" endpoint health --write-out=table

-----------------------------------------------
--cacert: use this CA certificate to verify the certificates of the etcd servers
--cert: identify the HTTPS client using this SSL certificate file
--key: identify the HTTPS client using this SSL key file
--endpoints: comma-separated list of etcd cluster member addresses
endpoint health: check the health of the etcd cluster
-----------------------------------------------

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.2.66:2379,https://192.168.2.200:2379,https://192.168.2.77:2379" --write-out=table member list

Deploy the docker engine

All node nodes deploy the docker engine

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
 
systemctl start docker.service
systemctl enable docker.service


3. Flannel network configuration

flannel network configuration

Pod network communication in K8S:

●Communication between container and container in Pod

Containers in the same Pod (the containers of a Pod never span hosts) share the same network namespace, so on the network they behave as if they were on the same machine and can reach each other's ports through the localhost address

● Communication between Pods in the same Node

Each Pod has a real global IP address. Different Pods on the same Node can communicate directly using each other's Pod IP. Pod1 and Pod2 are both attached to the same docker0 bridge through veth pairs and are in the same network segment, so they can communicate directly

● Communication between Pods on different Nodes

The Pod address is in the same network segment as docker0, but the docker0 segment and the host's network card are two different network segments, so communication between different Nodes has to go through the host's physical network card

To achieve communication between Pods on different Nodes, it is necessary to find a way to address and communicate through the IP address of the physical network card of the host.

Therefore two conditions must be met:

Pod IPs cannot conflict:

Associate the IP of the Pod with the IP of the Node where it is located, and through this association, the Pods on different Nodes can communicate directly through the intranet IP address.

Overlay Network:

An overlay network is a virtual network technology mode superimposed on a Layer 2 or Layer 3 underlying network; the hosts in the network are connected through virtual link tunnels (similar to a VPN)

VXLAN:

Encapsulate the source data packet into UDP, and use the IP/MAC of the basic network as the outer packet header for encapsulation, and then transmit it on the Ethernet. After reaching the destination, the tunnel endpoint decapsulates it and sends the data to the target address

Flannel:

The function of Flannel is to allow Docker containers created by different node hosts in the cluster to have unique virtual IP addresses for the entire cluster

Flannel is a type of Overlay network. It also encapsulates TCP source data packets in another network packet for routing, forwarding and communication. Currently, it supports data forwarding methods such as UDP, VXLAN, and AWS VPC.

What etcd provides for Flannel:

Store and manage the IP address segment resources that Flannel can allocate
Monitor the actual address of each Pod in etcd, and build and maintain the Pod-to-node routing table in memory

How Flannel works:

Pod1 on node1 needs to communicate with pod1 on node2

1. The data is sent from the Pod1 source container on node1, and forwarded to the flannel0 virtual network card through the docker0 virtual network card of the host;

2. Then flanneld encapsulates the pod ip into udp (the source pod IP and destination pod IP are encapsulated inside);

3. According to the routing table information saved in etcd, send it to the flanneld of the destination node2 through the physical network card to decapsulate and expose the pod IP in udp;

4. Finally, according to the destination pod IP, forward it to the destination pod through flannel0 virtual network card and docker0 virtual network card, and finally complete the communication

Add flannel network configuration information on the master1 node

Operate on the node01 node

# Upload cni-plugins-linux-amd64-v0.8.6.tgz and flannel.tar to the /opt directory
cd /opt/

mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

docker load -i flannel.tar
docker images
scp -r cni/ flannel.tar 192.168.2.77:/opt    # copy the CNI plugins and flannel image to node02 as well


  

Operate on all master nodes

// Operate on the master01 node
# Upload the kube-flannel.yml file to the /opt/k8s directory and deploy the CNI network
cd /opt/k8s
kubectl apply -f kube-flannel.yml 

kubectl get pods -n kube-system

kubectl get nodes

//Modify the docker service management file, configure docker connection flannel

vim /lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exist and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env
# add this line
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
# modify this line
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always


  

//Restart the docker service

systemctl daemon-reload
systemctl restart docker


  

ifconfig #View flannel network

Ping the other node's docker0 address to verify that flannel is routing traffic between the nodes

ping 172.17.21.1
 
docker run -it centos:7 /bin/bash    # run this command on both node1 and node2
 
yum install net-tools -y             # run this command on both node1 and node2
 
ifconfig    # then test again that the centos:7 containers on the two nodes can ping each other


4. Deploy the master component

Operate on the master1 node

Upload master.zip and k8s-cert.sh to the /opt/k8s directory, unzip the master.zip compressed package

cd /opt/k8s/
unzip master.zip
apiserver.sh
scheduler.sh
controller-manager.sh
 
chmod +x *.sh


  
Create a kubernetes working directory

mkdir -p /opt/kubernetes/{cfg,bin,ssl}

Create directories for generating CA certificates, certificates and private keys for relevant components

mkdir /opt/k8s/k8s-cert
mv /opt/k8s/k8s-cert.sh /opt/k8s/k8s-cert
cd /opt/k8s/k8s-cert/
./k8s-cert.sh

Generate CA certificates, certificates and private keys for relevant components

//controller-manager and kube-scheduler are set to only call the apiserver of the current machine, using 127.0.0.1:8080 communication, so there is no need to issue a certificate

Copy the CA certificate, the apiserver-related certificates and the private keys to the ssl subdirectory of the kubernetes working directory

cp ca*pem apiserver*pem /opt/kubernetes/ssl/

Upload kubernetes-server-linux-amd64.tar.gz to the /opt/k8s/ directory, unzip the kubernetes compressed package

cd /opt/k8s/
tar zxvf kubernetes-server-linux-amd64.tar.gz

Copy the key command files of the master components to the bin subdirectory of the kubernetes working directory

cd /opt/k8s/kubernetes/server/bin
cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
ln -s /opt/kubernetes/bin/* /usr/local/bin/

// Create the bootstrap token authentication file. It is read when the apiserver starts; this is effectively creating a user in the cluster, which can then be authorized with RBAC

cd /opt/k8s/
vim token.sh
#!/bin/bash
# Take the first 16 bytes of random data, output them in hexadecimal, and strip the spaces
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# Generate the token.csv file in the format: token serial number, user name, UID, user group
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
chmod +x token.sh
./token.sh
 
./apiserver.sh 192.168.2.66 https://192.168.2.66:2379,https://192.168.2.200:2379,https://192.168.2.77:2379

Use head -c 16 /dev/urandom | od -An -tx | tr -d ' ' to randomly generate a serial number and create a token.csv file, or use a script to create

Binary files, tokens, and certificates are all ready, start the apiserver

Check if the process started successfully

ps aux | grep kube-apiserver

//k8s provides services through the kube-apiserver process, which runs on a single master node. By default, there are two ports 6443 and 8080
//The secure port 6443 is used to receive HTTPS requests for authentication based on Token files or client certificates

//Local port 8080 is used to receive HTTP requests, and non-authenticated or authorized HTTP requests access APIServer through this port

netstat -natp| grep 8080
netstat -natp | grep 6443

//Check the version information (you must ensure that the apiserver starts normally, otherwise you cannot query the version information of the server)

kubectl version

//Start the scheduler service

cd /opt/k8s/
./scheduler.sh 127.0.0.1
 
ps aux | grep kube-scheduler

//Start the controller-manager service

cd /opt/k8s/
./controller-manager.sh 127.0.0.1
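A matching check for the controller-manager, in the same pattern as the scheduler check above:

ps aux | grep kube-controller-manager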


Generate a certificate for kubectl to connect to the cluster

./admin.sh

kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous

//Check node status

kubectl get cs


5. Deploy node components

Deploy node components

Operate on the master1 node
//Copy kubelet and kube-proxy to the node node

cd /opt/k8s/kubernetes/server/bin
scp kubelet kube-proxy [email protected]:/opt/kubernetes/bin/
scp kubelet kube-proxy [email protected]:/opt/kubernetes/bin/

Operate on the node1 node

//Upload node.zip  to the /opt directory, unzip the node.zip compressed package, and get kubelet.sh, proxy.sh

cd /opt/
unzip node.zip

Operate on the master1 node
// Create a directory for generating kubelet configuration files

mkdir /opt/k8s/kubeconfig

//Upload the kubeconfig.sh  file to the /opt/k8s/kubeconfig directory
# The kubeconfig.sh file contains cluster parameters (the CA certificate and the API Server address), client parameters (the certificate and private key generated above), and cluster context parameters (cluster name, user name). Kubernetes components (such as kubelet and kube-proxy) can switch between different clusters and connect to the apiserver by specifying a different kubeconfig file at startup
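The kubeconfig.sh script itself is not reproduced here, but such kubeconfig files are normally assembled with the standard kubectl config subcommands; a minimal sketch for bootstrap.kubeconfig, where APISERVER, SSL_DIR and BOOTSTRAP_TOKEN are illustrative variables (BOOTSTRAP_TOKEN must hold the token from token.csv) and the real script may differ:

APISERVER=192.168.2.66
SSL_DIR=/opt/k8s/k8s-cert
 
# Cluster parameters: the API server address and the CA certificate
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=https://$APISERVER:6443 \
  --kubeconfig=bootstrap.kubeconfig
 
# Client parameters: the bootstrap token generated earlier
kubectl config set-credentials kubelet-bootstrap \
  --token=$BOOTSTRAP_TOKEN \
  --kubeconfig=bootstrap.kubeconfig
 
# Context parameters: tie the cluster and the user together, then make it the default context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig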

cd /opt/k8s/kubeconfig
chmod +x kubeconfig.sh

//Generate the configuration file of kubelet

cd /opt/k8s/kubeconfig
./kubeconfig.sh 192.168.2.66 /opt/k8s/k8s-cert/
 
ls
bootstrap.kubeconfig kubeconfig.sh kube-proxy.kubeconfig

//Copy the configuration files bootstrap.kubeconfig and kube-proxy.kubeconfig to the node node

cd /opt/k8s/kubeconfig
scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/opt/kubernetes/cfg/

//RBAC authorization: bind the preset user kubelet-bootstrap to the built-in ClusterRole system:node-bootstrapper so that it can initiate CSR requests

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

Kubelet uses the TLS Bootstrapping mechanism to automatically complete the registration to the kube-apiserver, which is very useful when the number of node nodes is large or the capacity is automatically expanded later.
After the Master apiserver enables TLS authentication, if the kubelet component of the node node wants to join the cluster, it must use a valid certificate issued by the CA to communicate with the apiserver. When there are many node nodes, signing the certificate is a very cumbersome task. Therefore, Kubernetes introduces the TLS bootstraping mechanism to automatically issue client certificates. The kubelet will automatically apply for a certificate from the apiserver as a low-privileged user, and the kubelet certificate is dynamically signed by the apiserver.

The first startup of kubelet initiates the first CSR request by loading the user Token and the apiserver CA certificate from bootstrap.kubeconfig. This Token is pre-built into token.csv on the apiserver node, and its identity is the kubelet-bootstrap user and the system:kubelet-bootstrap user group. For the first CSR request to succeed (that is, not be rejected by the apiserver with a 401), a ClusterRoleBinding must be created first that binds the kubelet-bootstrap user to the built-in ClusterRole system:node-bootstrapper (query it with kubectl get clusterroles), so that it is able to initiate CSR authentication requests.

The certificates issued during TLS bootstrapping are actually signed by the kube-controller-manager component, which means their validity period is controlled by kube-controller-manager; it provides an --experimental-cluster-signing-duration parameter to set the validity of the signed certificates. The default is 8760h0m0s; change it to 87600h0m0s so that certificates signed through TLS bootstrapping are valid for 10 years.

That is to say, when the kubelet accesses the API Server for the first time, it uses a token for authentication. After passing the pass, the Controller Manager will generate a certificate for the kubelet, and subsequent accesses will use the certificate for authentication.
------------------------------------------
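Once the kubelet's client certificate has been issued (see the approval steps below), its validity window can be checked directly with openssl; a sketch, assuming the certificate ends up under /opt/kubernetes/ssl/ on the node with the usual kubelet-client-current.pem name (the exact file name may differ):

# Print the Not Before / Not After dates of the issued kubelet client certificate (file name is an assumption)
openssl x509 -in /opt/kubernetes/ssl/kubelet-client-current.pem -noout -dates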

//View roles:

kubectl get clusterroles | grep system:node-bootstrapper

//View authorized roles:

kubectl get clusterrolebinding

Operate on the node1 node
//Use the kubelet.sh script to start the kubelet service

cd /opt/
chmod +x kubelet.sh
./kubelet.sh 192.168.2.200

//Check that the kubelet service is started

ps aux | grep kubelet

//Certificate not yet generated

ls /opt/kubernetes/ssl/

Operate on the master1 node
//Check the CSR request initiated by the kubelet of the node1 node, and Pending means waiting for the cluster to issue a certificate to the node.

kubectl get csr

//Request via CSR

kubectl certificate approve node-csr-12DGPu__kpLSBsGUHpvGs6Q89B9aYysw9C61pAagDEA 

// Check the CSR request status again, Approved, Issued means that the CSR request has been authorized and the certificate issued

kubectl get csr

//Check the status of the cluster nodes and successfully join the node1 node

kubectl get nodes

Operate on the node1 node
// The certificate and the kubelet.kubeconfig file are generated automatically

ls /opt/kubernetes/cfg/kubelet.kubeconfig
ls /opt/kubernetes/ssl/

//Load the ip_vs module

for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
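To confirm that the modules were actually loaded, a quick check:

lsmod | grep ip_vs    # ip_vs plus the scheduler modules (ip_vs_rr, ip_vs_wrr, ip_vs_sh, ...) should be listed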

//Use the proxy.sh script to start the proxy service

cd /opt/
chmod +x proxy.sh
./proxy.sh 192.168.2.200
 
systemctl status kube-proxy.service

node2 node deployment
##Method 1:
//Copy the kubelet.sh and proxy.sh files on the node1 node to the node2 node

cd /opt/
scp kubelet.sh proxy.sh [email protected]:/opt/

//Use the kubelet.sh script to start the kubelet service

cd /opt/
chmod +x kubelet.sh
./kubelet.sh 192.168.2.77

//Operate on the master1 node, check the CSR request initiated by the kubelet of the node2 node, and Pending means waiting for the cluster to issue a certificate to the node.

kubectl get csr

//Request via CSR

kubectl certificate approve node-csr-NOI-9vufTLIqJgMWq4fHPNPHKbjCX1DGHptj7FqTa8A

// Check the CSR request status again, Approved, Issued means that the CSR request has been authorized and the certificate issued

kubectl get csr


//Check the status of the cluster nodes; the node2 node has successfully joined

kubectl get nodes

//Load the ip_vs module on the node2 node

for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done

//Use the proxy.sh script to start the proxy service

cd /opt/
chmod +x proxy.sh
./proxy.sh 192.168.2.77
 
systemctl status kube-proxy.service

Test connectivity:

kubectl create deployment nginx-test --image=nginx:1.14
kubectl get pod
kubectl get pod    # check again after a moment until the pod is Running
kubectl describe pod nginx-test-7dc4f9dcc9-vlzmk
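To verify connectivity beyond kubectl describe, the deployment can also be exposed and reached from outside the cluster; a minimal sketch, assuming a NodePort Service (replace <NodePort> with the port shown by kubectl get svc):

# Expose the test deployment through a NodePort Service
kubectl expose deployment nginx-test --port=80 --type=NodePort
kubectl get svc nginx-test
 
# curl the nginx welcome page through any node IP and the allocated NodePort
curl http://192.168.2.200:<NodePort>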

Origin blog.csdn.net/Liqi23/article/details/129135382