Kubernetes binary single-node cluster deployment

Common K8S deployment methods
●Minikube
Minikube is a tool that can quickly run a single-node miniature K8S cluster locally. It is only used for learning and previewing some K8S features. Deployment address: https://kubernetes.io/docs/setup/minikube

●Kubeadm
Kubeadm is also a tool. It provides kubeadm init and kubeadm join for rapid deployment of K8S clusters and is relatively simple to use.
 https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

●Binary installation and deployment
The first choice for production: download the release binary packages from the official site, manually deploy each component and self-sign the TLS certificates, and assemble them into a K8S cluster. Although more cumbersome, it is also recommended for beginners because it shows how the components fit together. https://github.com/kubernetes/kubernetes/releases

Kubernetes binary deployment

Operating system initial configuration

#Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

#Disable SELinux
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config 

#Disable swap
swapoff -a                                   #temporarily disable
sed -ri 's/.*swap.*/#&/' /etc/fstab          #permanently disable; & stands for the whole matched line

#Set the hostnames according to the plan
hostnamectl set-hostname master01 
hostnamectl set-hostname node01 
hostnamectl set-hostname node02

#Add hosts entries on the master
cat >> /etc/hosts << EOF 
192.168.80.21 master01 
192.168.80.7 node01 
192.168.80.8 node02 
EOF

#Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf <<EOF 
net.bridge.bridge-nf-call-ip6tables = 1 
net.bridge.bridge-nf-call-iptables = 1 
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF
sysctl --system

#Time synchronization; this can be added as a scheduled cron job to reduce clock drift
yum install ntpdate -y
ntpdate time.windows.com
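As the comment above notes, the sync can also be scheduled. A minimal sketch, assuming ntpdate is installed at /usr/sbin/ntpdate and time.windows.com remains the chosen server (any reachable NTP server works, and the interval is illustrative):

#Append a cron entry that re-syncs the clock every 30 minutes
(crontab -l 2>/dev/null; echo '*/30 * * * * /usr/sbin/ntpdate time.windows.com >/dev/null 2>&1') | crontab -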

 

Deploying etcd cluster
etcd is an open source project initiated by the CoreOS team in June 2013. Its goal is to build a highly available distributed key-value database. Internally, etcd uses the Raft protocol as its consensus algorithm, and etcd is written in the Go language.

As a service discovery system, Etcd has the following characteristics:

Simple: easy to install and configure, and provides HTTP API for interaction, easy to use
Secure: supports SSL certificate verification
Fast: a single instance supports 2k+ read operations per second
Reliable: uses the raft algorithm to achieve data availability and consistency in distributed systems
etcd currently uses port 2379 by default to provide HTTP API services and port 2380 for peer communication (both ports have been officially reserved for etcd by IANA, the Internet Assigned Numbers Authority). In other words, etcd uses port 2379 for external client communication and port 2380 for internal communication between servers. In production, etcd is generally deployed as a cluster; because of etcd's leader election mechanism, the cluster needs an odd number of nodes, at least 3.

Prepare the certificate-issuing environment
CFSSL is a PKI/TLS tool open-sourced by CloudFlare. It includes a command-line tool and an HTTP API service for signing, verifying and bundling TLS certificates, and it is written in Go.

CFSSL uses configuration files to generate certificates, so before self-signing you need to generate JSON-format configuration files that it recognizes; CFSSL provides convenient command lines for generating these configuration files.

CFSSL is used to provide TLS certificates for etcd, and it supports signing three types of certificates:

client certificate: carried by the client when connecting to a server, used by the server to verify the client's identity, e.g. kube-apiserver accessing etcd
server certificate: carried by the server when serving clients, used by the client to verify the server's identity, e.g. etcd providing service to the outside
peer certificate: used when peers connect to each other, e.g. verification and communication between etcd nodes. Here the same set of certificates is used for all three roles.
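These certificate types map to the usages field of CFSSL signing profiles. A minimal, hedged sketch of a ca-config.json, assuming a single profile (here named www, with a 10-year expiry; both values are illustrative) that covers server and client authentication at once:

cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [ "signing", "key encipherment", "server auth", "client auth" ]
      }
    }
  }
}
EOF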

Operate on the master01 node: download the certificate-creation tools

#Use either the wget or the curl commands below to download the cfssl tools
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo 
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo 

chmod +x /usr/local/bin/cfssl*                       #or download the files in advance, put them into /usr/local/bin/ and grant execute permission
Field descriptions
cfssl: the command-line tool for issuing certificates
cfssljson: converts the certificates generated by cfssl (JSON format) into file-based certificates
cfssl-certinfo: verifies certificate information

cfssl-certinfo -cert <certificate name>              #view certificate information
#Create the k8s working directory
mkdir /opt/k8s 
cd /opt/k8s/

#Upload etcd-cert.sh and etcd.sh to the /opt/k8s/ directory
chmod +x etcd-cert.sh etcd.sh

#Create a directory for generating the CA certificate, the etcd server certificate and the private keys
mkdir /opt/k8s/etcd-cert 
mv etcd-cert.sh etcd-cert/
cd /opt/k8s/etcd-cert/
./etcd-cert.sh                                       #generate the CA certificate, the etcd server certificate and the private keys
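The contents of etcd-cert.sh are not reproduced in this article; as a rough sketch, a script like it typically drives cfssl along these lines (the JSON file names ca-csr.json and server-csr.json are assumptions based on common practice, and server-csr.json must list all etcd node IPs in its hosts field):

#Generate the self-signed CA certificate and key from the CSR request file
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
#Issue the etcd server certificate and key signed by that CA, using the profile defined in ca-config.json
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server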

Start etcd service

etcd binary package address: https://github.com/etcd-io/etcd/releases

#Upload etcd-v3.4.9-linux-amd64.tar.gz to the /opt/k8s/ directory and unpack the etcd archive
cd /opt/k8s/
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
ls etcd-v3.4.9-linux-amd64
#etcd is the startup command of the etcd service and can be followed by various startup parameters
#etcdctl mainly provides command-line operations for the etcd service
#Create directories for the etcd configuration file, command files and certificates
mkdir -p /opt/etcd/{cfg,bin,ssl}

#Move the etcd and etcdctl binaries into the custom command directory
mv /opt/k8s/etcd-v3.4.9-linux-amd64/etcd /opt/k8s/etcd-v3.4.9-linux-amd64/etcdctl /opt/etcd/bin/
cp /opt/k8s/etcd-cert/*.pem /opt/etcd/ssl/

#Start the etcd service
/opt/k8s/etcd.sh etcd01 192.168.80.21 etcd02=https://192.168.80.7:2380,etcd03=https://192.168.80.8:2380
#The command blocks while waiting for the other nodes to join; all three etcd instances must be started. If only this one is started, the service hangs until every etcd node in the cluster is up, which can be ignored

#Open another window to check whether the etcd process is running normally
ps -ef | grep etcd
#Copy all etcd certificate files and command files to the other two etcd cluster nodes
scp -r /opt/etcd/ root@192.168.80.7:/opt/
scp -r /opt/etcd/ root@192.168.80.8:/opt/

#Copy the etcd service management file to the other two cluster nodes
scp /usr/lib/systemd/system/etcd.service root@192.168.80.7:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.80.8:/usr/lib/systemd/system/

#On the other etcd cluster nodes, configure the corresponding server name and IP addresses
vim /opt/etcd/cfg/etcd
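The exact contents of /opt/etcd/cfg/etcd depend on what etcd.sh generated; as a hedged sketch, it is usually an environment-variable file along the following lines, where the name and the listen/advertise URLs are the fields that must be changed on each node (the values shown would be for etcd02 on node01, 192.168.80.7; the data directory is an assumption):

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.80.7:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.80.7:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.80.7:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.80.7:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.80.21:2380,etcd02=https://192.168.80.7:2380,etcd03=https://192.168.80.8:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"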
#Start the etcd service
systemctl start etcd
systemctl enable etcd
systemctl status etcd

#Operate on the master01 node (etcd01) to check the etcd cluster status
ln -s /opt/etcd/bin/etcd* /usr/local/bin 
#Check the etcd cluster status
cd /opt/etcd/ssl 
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem \
--cert-file=server.pem \
--key-file=server-key.pem \
--endpoints="https://192.168.80.21:2379,https://192.168.80.7:2379,https://192.168.80.8:2379" \
cluster-health
Field descriptions
--ca-file: verify certificates of HTTPS-enabled servers using this CA certificate
--cert-file: identify the SSL certificate file used by the HTTPS client
--key-file: identify the SSL key file used by the HTTPS client
--endpoints: comma-separated list of machine addresses in the cluster
cluster-health: check the health of the etcd cluster
Note: these are etcdctl v2 flags; with etcd v3.4.x the v3 API is the default, so use the ETCDCTL_API=3 commands below instead.

#Check the etcd cluster status
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.80.21:2379,https://192.168.80.7:2379,https://192.168.80.8:2379" endpoint health --write-out=table
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.80.21:2379,https://192.168.80.7:2379,https://192.168.80.8:2379" --write-out=table member list

Deploy the docker engine
Deploy the Docker engine on all worker (node) nodes

yum install -y yum-utils device-mapper-persistent-data lvm2 
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
yum install -y docker-ce docker-ce-cli containerd.io

systemctl start docker.service
systemctl enable docker.service 

Deploy the master node component
Operate on the master01 node
Start the apiserver component

#Upload master.zip and k8s-cert.sh to the /opt/k8s directory and unpack master.zip
cd /opt/k8s/
unzip master.zip	#unpacking yields the startup scripts of the three components
apiserver.sh
scheduler.sh
controller-manager.sh
chmod +x *.sh		#grant execute permission to the scripts
#Create the kubernetes working directory
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
#Create a directory for generating the CA certificate and the certificates and private keys of the related components
mkdir /opt/k8s/k8s-cert
mv /opt/k8s/k8s-cert.sh /opt/k8s/k8s-cert
cd /opt/k8s/k8s-cert/
./k8s-cert.sh		#generate the CA certificate and the certificates and private keys of the related components; all IP addresses the apiserver may use must be added in the file
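Exactly which addresses k8s-cert.sh expects depends on the script, but as a hedged sketch, the hosts field of the apiserver CSR file (commonly named something like apiserver-csr.json; the name is an assumption) usually lists the loopback address, the first IP of the service CIDR, every master/VIP address, and the in-cluster DNS names of the kubernetes service:

"hosts": [
  "127.0.0.1",
  "10.0.0.1",
  "192.168.80.21",
  "kubernetes",
  "kubernetes.default",
  "kubernetes.default.svc",
  "kubernetes.default.svc.cluster",
  "kubernetes.default.svc.cluster.local"
]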

ls *pem
admin-key.pem  apiserver-key.pem  ca-key.pem  kube-proxy-key.pem
admin.pem      apiserver.pem      ca.pem      kube-proxy.pem
#controller-manager and kube-scheduler are configured to call only the local apiserver over 127.0.0.1:8080; since they run on the same machine as the apiserver, no certificates need to be issued for them
#Copy the CA certificate and the apiserver certificates and private keys to the ssl subdirectory of the kubernetes working directory
cp ca*pem apiserver*pem /opt/kubernetes/ssl/

#Upload kubernetes-server-linux-amd64.tar.gz to the /opt/k8s/ directory and unpack the kubernetes archive
cd /opt/k8s/
tar zxvf kubernetes-server-linux-amd64.tar.gz

#Copy the key command files of the master components to the bin subdirectory of the kubernetes working directory
cd /opt/k8s/kubernetes/server/bin
cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
ln -s /opt/kubernetes/bin/* /usr/local/bin/

#Create the bootstrap token authentication file. The apiserver reads it at startup, which effectively creates this user in the cluster; it can then be granted permissions with RBAC (pay attention to the format of token.csv; a format problem may prevent certificates from being issued correctly later)
cd /opt/k8s/
vim token.sh 
#!/bin/bash
#Take the first 16 bytes from the random source, output them in hexadecimal and strip the spaces; /dev/urandom is the random-number device, and head -c specifies the byte count
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ' )
#Generate token.csv in the format: token serial number, user name, UID, user group
cat > /opt/kubernetes/cfg/token.csv <<EOF
$BOOTSTRAP_TOKEN,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
chmod +x token.sh 
./token.sh
cat /opt/kubernetes/cfg/token.csv

#With the binaries and the token file ready, start the apiserver service
cd /opt/k8s/
./apiserver.sh 192.168.80.21 https://192.168.80.21:2379,https://192.168.80.7:2379,https://192.168.80.8:2379 
#Check whether the process started successfully
ps aux | grep kube-apiserver
#k8s provides service through the kube-apiserver process, which runs on the single master node. By default it listens on two ports, 6443 and 8080
#The secure port 6443 receives HTTPS requests and performs authentication based on the token file or client certificates
netstat -natp | grep 6443
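apiserver.sh writes the startup options into a file under /opt/kubernetes/cfg/ (the exact file name and contents depend on the script); a hedged, abbreviated sketch of the kind of flags it sets, with paths and the service CIDR assumed from this article's layout:

KUBE_APISERVER_OPTS="--etcd-servers=https://192.168.80.21:2379,https://192.168.80.7:2379,https://192.168.80.8:2379 \
--bind-address=192.168.80.21 \
--secure-port=6443 \
--advertise-address=192.168.80.21 \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-bootstrap-token-auth=true \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--tls-cert-file=/opt/kubernetes/ssl/apiserver.pem \
--tls-private-key-file=/opt/kubernetes/ssl/apiserver-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"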

#View the version information (the apiserver must be running normally, otherwise the server version cannot be queried)
kubectl version

#Start the scheduler service
./scheduler.sh
ps aux | grep kube-scheduler

#Start the controller-manager service
./controller-manager.sh
ps aux | grep kube-controller-manager

#Generate the certificate for kubectl to connect to the cluster
./admin.sh
#Bind the default cluster-admin cluster role to authorize kubectl to access the cluster
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous

#View the master node status
kubectl get cs

Deploy the worker node components
Operate on all node (worker) nodes

#Create the kubernetes working directory
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

#Upload node.zip to the /opt directory and unpack it to obtain kubelet.sh and proxy.sh
cd /opt/
unzip node.zip
chmod +x kubelet.sh proxy.sh
#Copy kubelet and kube-proxy to the node nodes (run on master01)
cd /opt/k8s/kubernetes/server/bin
scp kubelet kube-proxy root@192.168.80.7:/opt/kubernetes/bin/
scp kubelet kube-proxy root@192.168.80.8:/opt/kubernetes/bin/

#Upload kubeconfig.sh to the /opt/k8s/kubeconfig directory and generate the kubeconfig configuration files (on master01)
mkdir /opt/k8s/kubeconfig

cd /opt/k8s/kubeconfig
chmod +x kubeconfig.sh
./kubeconfig.sh 192.168.80.21 /opt/k8s/k8s-cert/
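kubeconfig.sh itself is not shown in this article; as a hedged sketch, such a script usually builds bootstrap.kubeconfig with the standard kubectl config subcommands (the variable names are illustrative, and BOOTSTRAP_TOKEN must match the value written to token.csv earlier):

APISERVER=192.168.80.21
SSL_DIR=/opt/k8s/k8s-cert
#Read the bootstrap token generated earlier (first field of token.csv)
BOOTSTRAP_TOKEN=$(awk -F ',' '{print $1}' /opt/kubernetes/cfg/token.csv)

#Point the cluster entry at the secure apiserver port and embed the CA certificate
kubectl config set-cluster kubernetes --certificate-authority=$SSL_DIR/ca.pem --embed-certs=true --server=https://$APISERVER:6443 --kubeconfig=bootstrap.kubeconfig
#Authenticate as kubelet-bootstrap with the token, then create and select the context
kubectl config set-credentials kubelet-bootstrap --token=$BOOTSTRAP_TOKEN --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig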

scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.80.7:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.80.8:/opt/kubernetes/cfg/

#RBAC authorization, so that the kubelet-bootstrap user has permission to initiate CSR requests
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

The kubelet uses the TLS Bootstrapping mechanism to automatically complete its registration with the kube-apiserver, which is very useful when the number of nodes is large or when the cluster is scaled out automatically later.

After the master apiserver enables TLS authentication, the kubelet component of a node must use a valid certificate issued by the CA to communicate with the apiserver if it wants to join the cluster. When there are many nodes, signing these certificates is a very cumbersome task. Kubernetes therefore introduces the TLS bootstrapping mechanism to issue client certificates automatically: the kubelet applies for a certificate from the apiserver as a low-privileged user, and the kubelet certificate is signed dynamically by the apiserver.

On its first startup, the kubelet initiates the first CSR request by loading the Token and the apiserver CA certificate from bootstrap.kubeconfig. This Token is pre-built in token.csv on the apiserver node, and its identity is the kubelet-bootstrap user in the system:kubelet-bootstrap user group. For the first CSR request to succeed (i.e. not be rejected by the apiserver with 401), a ClusterRoleBinding must be created first, binding the kubelet-bootstrap user to the built-in ClusterRole system:node-bootstrapper (query it with kubectl get clusterroles), so that it is able to initiate a CSR authentication request.

The certificates issued during TLS bootstrapping are actually signed by the kube-controller-manager component, which means the validity period of the certificates is controlled by kube-controller-manager: it provides the --experimental-cluster-signing-duration parameter to set the validity period of signed certificates. The default is 8760h0m0s (1 year); change it to 87600h0m0s so that certificates signed through TLS bootstrapping are valid for 10 years.

In other words, when the kubelet accesses the API Server for the first time it authenticates with the token; once that succeeds, the Controller Manager generates a certificate for the kubelet, and subsequent accesses use that certificate for authentication.
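For reference, a hedged example of where that parameter typically appears among the controller-manager options generated by controller-manager.sh (the file path and the omitted flags, shown as ..., are assumptions):

#In /opt/kubernetes/cfg/kube-controller-manager.conf (path assumed), among the other flags:
KUBE_CONTROLLER_MANAGER_OPTS="... \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"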

#Start the kubelet service
cd /opt/
./kubelet.sh 192.168.80.7
ps aux | grep kubelet
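kubelet.sh generates the kubelet options file and a systemd unit (the details depend on the script); a hedged, abbreviated sketch of the typical flags on node01, with paths assumed from this article's directory layout:

KUBELET_OPTS="--hostname-override=192.168.80.7 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl"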

Operate on the master01 node to approve the CSR request
Note: when using virtual machines, make sure the IP addresses in every configuration file and script match your environment before executing them; otherwise errors caused by the virtual machine environment can waste all the previous work. Do not shut down or restart during the configuration process to avoid accidents; the author learned this the hard way and went through two IP addresses.

#A CSR request initiated by the kubelet on node01 is detected; Pending means the cluster is waiting to issue a certificate to this node
kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-DkU_wDv2Kpdivv9GV06iI2a81QJpqjG8-_HCMDVJewo   17s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

#Approve the CSR request
kubectl certificate approve node-csr-DkU_wDv2Kpdivv9GV06iI2a81QJpqjG8-_HCMDVJewo

#Approved,Issued means the CSR request has been authorized and the certificate issued
kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-DkU_wDv2Kpdivv9GV06iI2a81QJpqjG8-_HCMDVJewo   90s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued

#View the nodes; since the network plugin has not been deployed yet, the node will be NotReady
kubectl get node
NAME           STATUS     ROLES    AGE     VERSION
192.168.80.7   NotReady   <none>   2m57s   v1.20.11
#Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done

#Start the kube-proxy service
cd /opt/
./proxy.sh 192.168.80.7
ps aux | grep kube-proxy
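Similarly, proxy.sh writes a kube-proxy options file (contents depend on the script); a hedged sketch, assuming IPVS mode and the common flannel pod CIDR 10.244.0.0/16:

KUBE_PROXY_OPTS="--hostname-override=192.168.80.7 \
--cluster-cidr=10.244.0.0/16 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"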

Pod network communication in K8S
Communication between containers within a Pod
Containers in the same Pod (the containers of a Pod never span host machines) share the same network namespace, which on the network is equivalent to being on the same machine, so they can access each other's ports via the localhost address.

Communication between Pods on the same Node
Each Pod has a real global IP address. Different Pods on the same Node can communicate directly using the other Pod's IP address. Pod1 and Pod2 are attached to the same docker0 bridge through veth pairs and are in the same network segment, so they can communicate with each other directly.

Communication between Pods on different Nodes
The host network cards are in two different network segments, so communication between different Nodes must be addressed and carried out through the physical network card of the host machine. Two conditions must therefore be met: the IPs of the Pods must not conflict, and the IP of a Pod must be associated with the IP of the Node it runs on; through this association, Pods on different Nodes can communicate directly via intranet IP addresses.
Overlay Network

An overlay network is a virtual network technology superimposed on a Layer 2 or Layer 3 underlying network; the hosts in the network are connected through virtual link tunnels (similar to a VPN).

VXLAN

The source data packet is encapsulated in UDP, with the IP/MAC of the underlying network used as the outer header, and is then transmitted over the Ethernet. After it reaches the destination, the tunnel endpoint decapsulates it and delivers the data to the target address.

vxlan mode:

vxlan is an overlay (virtual tunnel communication) technology that builds a virtual Layer 2 network on top of a Layer 3 network. Its implementation differs from udp mode:
1. udp mode is implemented in user space: data first passes through the tun network card to the application, which performs the tunnel encapsulation and then re-enters the kernel protocol stack. vxlan is implemented in the kernel, so the packet goes through the protocol stack only once and the vxlan packet is assembled inside the stack
2. the tun network card in udp mode does Layer 3 forwarding; tun builds a Layer 3 network on top of the physical network, i.e. IP in UDP. vxlan mode is a Layer 2 implementation: the overlay carries Layer 2 frames, i.e. MAC in UDP
3. because vxlan uses the MAC-in-UDP approach, its implementation involves Layer 2 concepts such as MAC address learning and ARP broadcast, whereas udp mode is mainly concerned with routing

Flannel

The function of Flannel is to give the Docker containers created by different node hosts in the cluster virtual IP addresses that are unique across the entire cluster.

Flannel is a type of Overlay network. It also encapsulates TCP source data packets in another network packet for routing, forwarding and communication. Currently, it supports data forwarding methods such as UDP, VXLAN, and AWS VPC.

How Flannel works (UDP mode)

Pod1 on node1 wants to communicate with Pod1 on node2:
1. Data is sent from the source container Pod1 on node1 and forwarded by the host's docker0 virtual network card to the flannel0 virtual network card;
2. The flanneld service on the flannel0 network card encapsulates the pod IPs into UDP (the inner packet carries the source pod IP and the destination pod IP);
3. Based on the routing table information saved in etcd, the packet is sent through the physical network card to the destination node node2; when it arrives there it is decapsulated by the flanneld service on node2, exposing the pod IP inside the UDP packet;
4. Finally, according to the destination pod IP, the packet is forwarded through the flannel0 and docker0 virtual network cards to the destination pod, completing the communication

 

What etcd provides for Flannel

Stores and manages the IP address segment resources that Flannel can allocate
Monitors the actual address of each Pod in etcd, and builds and maintains the Pod-to-node routing table in memory

Since udp mode forwards in user space and involves one more layer of tunnel encapsulation, its performance is worse than that of vxlan mode, which forwards in kernel space.

The working principle of Flannel vxlan mode:
vxlan is implemented in the kernel. When a data packet is sent out by the vxlan device, it is tagged with the vxlan header information. After it is sent out, the peer unpacks it, and the flannel.1 network card forwards the original message to the destination server.


Comparison of k8s networking solutions: Flannel vs. Calico

Flannel scheme

Each node must encapsulate the data packets sent to its containers and then use a tunnel to deliver the encapsulated packets to the node running the target Pod. The target node is responsible for removing the encapsulation and delivering the de-encapsulated packets to the target Pod. Data communication performance is therefore noticeably affected.

Calico scheme

Calico does not use tunnels or NAT to achieve forwarding, but treats Host as a router in the Internet, uses BGP to synchronize routes, and uses iptables to implement security access policies to complete cross-Host forwarding.

Calico is mainly composed of the following parts:
Calico CNI plug-in: mainly responsible for interfacing with kubernetes, called by the kubelet.

Felix: Responsible for maintaining the routing rules on the host machine, FIB forwarding information base, etc.

BIRD: Responsible for distributing routing rules, similar to routers.

Confd: Configuration management component.

How Calico works: Calico maintains communication between Pods through routing tables. Calico's CNI plug-in sets up a veth pair device for each container and connects one end of it to the host network namespace. Since there is no bridge, the CNI plug-in also has to configure a routing rule on the host for each container's veth pair device, to receive incoming IP packets.
With such a veth pair device, an IP packet sent by the container reaches the host through the veth pair; the host then sends it to the correct gateway according to the next-hop address in the routing rules, from where it reaches the target host and then the target container. These routing rules are maintained and configured by Felix, and the routing information is distributed by the Calico BIRD component based on BGP. Calico actually treats all nodes in the cluster as border routers; they form a fully interconnected network and exchange routes with each other through BGP. These nodes are called BGP Peers.

At present, flannel and calico are the most commonly used. Flannel's functionality is relatively simple and it cannot configure complex network policies. Calico is an excellent network management plug-in with the ability to configure complex network policies, but that usually means its own configuration is also more complicated. Relatively speaking, small and simple clusters use flannel; if future expansion is expected, with more devices and more network policies to configure, calico is the better choice.

Flannel network configuration
Operate on the node01 node

#Upload cni-plugins-linux-amd64-v0.8.6.tgz and flannel.tar to the /opt directory
cd /opt/
docker load -i flannel.tar

mkdir /opt/cni/bin -p
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

Operate on the master01 node

#Upload the kube-flannel.yml file to the /opt/k8s directory and deploy the CNI network
cd /opt/k8s
kubectl apply -f kube-flannel.yml 

kubectl get pods -n kube-system
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-cjmkf   1/1     Running   0          57s

kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
192.168.80.7   Ready    <none>   40m   v1.20.11

Operate on the node02 node

#Start the kubelet
cd /opt/
./kubelet.sh 192.168.80.8
ps aux | grep kubelet

Operate on the master01 node

#A CSR request initiated by the kubelet on node02 is detected; Pending means the cluster is waiting to issue a certificate to this node
kubectl get csr

#Approve the CSR request
kubectl certificate approve node-csr-QDzk2iM4frxRDTavZgCZfMx5x9wIrAgsum2K25BkQ_Q

#Approved,Issued means the CSR request has been authorized and the certificate issued
kubectl get csr

Operate on the node02 node

#Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done

#Start the proxy service with the proxy.sh script
cd /opt/
./proxy.sh 192.168.80.8

#Transfer the /opt/cni/ directory from node01 to the /opt directory of node02 (run on node01)
scp -r cni root@192.168.80.8:/opt

#View the status of the cluster nodes on the master01 node
kubectl get nodes

Deploy CoreDNS
Operate on all node (worker) nodes

#Upload coredns.tar to the /opt directory
cd /opt
docker load -i coredns.tar

Operate on the master01 node

#Upload the coredns.yaml file to the /opt/k8s directory and deploy CoreDNS
cd /opt/k8s
kubectl apply -f coredns.yaml

kubectl get pods -n kube-system 
NAME                       READY   STATUS    RESTARTS   AGE
coredns-6954c77b9b-d9np6   1/1     Running   0          43s
kube-flannel-ds-cjmkf      1/1     Running   0          153m
kube-flannel-ds-sqck2      1/1     Running   5          138m

#DNS resolution test
kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:    10.0.0.2
Address 1: 10.0.0.2 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
