Binary installation of K8S (single-master cluster architecture)


1. Install K8S

1. Single Master cluster architecture


k8s cluster master01: 192.168.154.10    kube-apiserver kube-controller-manager kube-scheduler etcd

k8s cluster node01: 192.168.154.11      kubelet kube-proxy docker
k8s cluster node02: 192.168.154.12      kubelet kube-proxy docker

etcd cluster node 1: 192.168.154.10     etcd
etcd cluster node 2: 192.168.154.11     etcd
etcd cluster node 3: 192.168.154.12     etcd

2. Operating system initialization configuration

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

# Disable SELinux
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab 

# Set the hostnames according to the plan
hostnamectl set-hostname master01
hostnamectl set-hostname node01
hostnamectl set-hostname node02

# Add hosts entries on master
cat >> /etc/hosts << EOF
192.168.154.10 master01
192.168.154.11 node01
192.168.154.12 node02
EOF

# Adjust kernel parameters
cat > /etc/sysctl.d/k8s.conf << EOF
# Enable bridge mode so that bridge traffic is passed to the iptables chains
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# Disable the IPv6 protocol
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF

sysctl --system
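If sysctl --system reports the net.bridge.* keys as unknown, the br_netfilter kernel module is probably not loaded yet. Loading it first is a common extra step (not part of the original procedure; the modules-load.d path is the usual convention for persisting it):

# Load the bridge netfilter module so that the net.bridge.* sysctls exist
modprobe br_netfilter
# Make the module load on every boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf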

# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com

3. Deploy the Docker engine

# Install dependency packages
yum install -y yum-utils device-mapper-persistent-data lvm2 
--------------------------------------------------------------------------------------------
yum-utils: provides the yum-config-manager tool.
device mapper: a generic device-mapping mechanism in the Linux kernel that supports logical volume management; it provides a highly modular kernel architecture for block device drivers used in storage resource management.
The device mapper storage driver requires device-mapper-persistent-data and lvm2.
--------------------------------------------------------------------------------------------

# Configure the Alibaba Cloud mirror repository
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 

# Install Docker CE and set it to start automatically at boot
yum install -y docker-ce docker-ce-cli containerd.io

cd /etc/docker/
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m",
    "max-file": "3"
  }
}
EOF

systemctl start docker.service
systemctl enable docker.service 


4. Deploy etcd cluster

etcd is an open-source project started by the CoreOS team in June 2013. Its goal is to build a highly available distributed key-value database. Internally, etcd uses the Raft protocol as its consensus algorithm, and it is written in Go.

As a service discovery system, etcd has the following features:
Simple: easy to install and configure, and provides an HTTP API for interaction
Secure: supports SSL certificate verification
Fast: a single instance supports 2k+ read operations per second
Reliable: uses the Raft algorithm to achieve availability and consistency of distributed system data

etcd currently uses port 2379 by default to provide HTTP API services, and port 2380 for peer communication (both ports have been officially reserved for etcd by IANA, the Internet Assigned Numbers Authority). That is, etcd uses port 2379 by default for client communication and port 2380 for server-to-server communication within the cluster.
In a production environment, etcd is generally recommended to be deployed as a cluster. Because of etcd's leader election mechanism, an odd number of members, at least 3, is required.

// Operate on the master01 node

# Prepare the cfssl certificate generation tools
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo

chmod +x /usr/local/bin/cfssl*
------------------------------------------------------------------------------------------
cfssl: the command-line tool for issuing certificates
cfssljson: turns the certificates generated by cfssl (JSON format) into file-based certificates
cfssl-certinfo: verifies certificate information
cfssl-certinfo -cert <certificate name>			# view certificate information
------------------------------------------------------------------------------------------

### Generate the etcd certificates ###
mkdir /opt/k8s
cd /opt/k8s/

# Upload etcd-cert.sh and etcd.sh to the /opt/k8s/ directory
chmod +x etcd-cert.sh etcd.sh

# Create a directory for generating the CA certificate, the etcd server certificate and the private keys
mkdir /opt/k8s/etcd-cert
mv etcd-cert.sh etcd-cert/
cd /opt/k8s/etcd-cert/
./etcd-cert.sh			# generate the CA certificate, the etcd server certificate and the private keys

ls
ca-config.json  ca-csr.json  ca.pem        server.csr       server-key.pem
ca.csr          ca-key.pem   etcd-cert.sh  server-csr.json  server.pem
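etcd-cert.sh itself is not reproduced in this post; judging from the file names above, it drives cfssl roughly as follows (a minimal sketch, assuming ca-csr.json, ca-config.json and server-csr.json are already written with the three etcd node IPs in the hosts field, and that ca-config.json defines a profile named www):

# Generate the self-signed CA (produces ca.pem, ca-key.pem, ca.csr)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# Issue the etcd server certificate signed by that CA
# (produces server.pem, server-key.pem, server.csr)
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server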

# Upload etcd-v3.4.9-linux-amd64.tar.gz to the /opt/k8s directory and start the etcd service
cd /opt/k8s/
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
ls etcd-v3.4.9-linux-amd64
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md
------------------------------------------------------------------------------------------
etcd is the startup command of the etcd service and can be followed by various startup parameters
etcdctl mainly provides command-line operations for the etcd service
------------------------------------------------------------------------------------------

# Create directories for the etcd configuration file, command files and certificates
mkdir -p /opt/etcd/{cfg,bin,ssl}

cd /opt/k8s/etcd-v3.4.9-linux-amd64/
mv etcd etcdctl /opt/etcd/bin/
cp /opt/k8s/etcd-cert/*.pem /opt/etcd/ssl/

cd /opt/k8s/
./etcd.sh etcd01 192.168.154.10 etcd02=https://192.168.154.11:2380,etcd03=https://192.168.154.12:2380
# The command will appear to hang while waiting for the other nodes to join. All three etcd services need to be started; if only one of them is running, it will block there until every etcd node in the cluster is up. This can be ignored.
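etcd.sh generates both the /opt/etcd/cfg/etcd configuration file (its contents are shown for node01 and node02 further below) and the systemd unit that is copied to the other nodes in the next step. A minimal sketch of such a unit, using the paths from this guide (the real script's options may differ slightly):

[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-cert-file=/opt/etcd/ssl/server.pem \
  --peer-key-file=/opt/etcd/ssl/server-key.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target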

# You can open another terminal to check whether the etcd process is running
ps -ef | grep etcd

# Copy the etcd certificate files, command files and the service unit file to the other two etcd cluster nodes
scp -r /opt/etcd/ root@192.168.154.11:/opt/
scp -r /opt/etcd/ root@192.168.154.12:/opt/
scp /usr/lib/systemd/system/etcd.service root@192.168.154.11:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.154.12:/usr/lib/systemd/system/


// Operate on the master01 node
cd /opt
ls -R etcd/
scp -r etcd/ node01:/opt
scp -r etcd/ node02:/opt
cd /usr/lib/systemd/system
ls etcd.service
scp etcd.service node01:`pwd`
scp etcd.service node02:`pwd`


// Operate on the node01 node
cd /opt/etcd/cfg
vim etcd

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.154.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.154.11:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.154.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.154.11:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.154.10:2380,etcd02=https://192.168.154.11:2380,etcd03=https://192.168.154.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Start the etcd service
systemctl daemon-reload
systemctl enable --now etcd.service
systemctl status etcd.service


// Operate on the node02 node

cd /opt/etcd/cfg
vim etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.154.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.154.12:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.154.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.154.12:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.154.10:2380,etcd02=https://192.168.154.11:2380,etcd03=https://192.168.154.12:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

# Start the etcd service
systemctl daemon-reload
systemctl enable --now etcd.service
systemctl status etcd.service


# Check the etcd cluster status
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.154.10:2379,https://192.168.154.11:2379,https://192.168.154.12:2379" endpoint health --write-out=table


ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://192.168.154.10:2379,https://192.168.154.11:2379,https://192.168.154.12:2379" --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem  --write-out=table endpoint status


ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://192.168.154.10:2379,https://192.168.154.11:2379,https://192.168.154.12:2379" --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem  --write-out=table member list


Back up etcd

cd 
mkdir etcd/backup -p
cd etcd/

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://192.168.154.10:2379" --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem  snapshot save /root/etcd-snapshot.db


Restore etcd

// Inspect the etcd-snapshot.db file
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://192.168.154.10:2379" --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem  snapshot status ./etcd-snapshot.db --write-out=table


// Restore etcd
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --endpoints="https://192.168.154.10:2379" --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem  snapshot restore ./etcd-snapshot.db 
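Note that snapshot restore does not modify the running cluster by itself: it only unpacks the snapshot into a new data directory (./default.etcd in the current directory unless --data-dir is given). To actually bring a member back up on the restored data, the usual sequence looks roughly like this (a sketch using the paths and member names of this guide for master01; repeat on each member with its own --name and peer URL):

systemctl stop etcd
mv /var/lib/etcd/default.etcd /var/lib/etcd/default.etcd.bak
ETCDCTL_API=3 /opt/etcd/bin/etcdctl snapshot restore ./etcd-snapshot.db \
  --name etcd01 \
  --initial-cluster "etcd01=https://192.168.154.10:2380,etcd02=https://192.168.154.11:2380,etcd03=https://192.168.154.12:2380" \
  --initial-cluster-token etcd-cluster \
  --initial-advertise-peer-urls https://192.168.154.10:2380 \
  --data-dir /var/lib/etcd/default.etcd
systemctl start etcd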


5. Deploy the Master component

// Operate on the master01 node
# Upload master.zip and kubernetes-server-linux-amd64.tar.gz to the /opt/k8s directory, then unpack master.zip
cd /opt/k8s/
unzip master.zip
chmod +x *.sh

# Create the kubernetes working directory
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

# Create a directory for generating the CA certificate and the component certificates and private keys
mkdir /opt/k8s/k8s-cert
mv /opt/k8s/k8s-cert.sh /opt/k8s/k8s-cert
cd /opt/k8s/k8s-cert/
vim k8s-cert.sh			# change the addresses
./k8s-cert.sh			# generate the CA certificate and the component certificates and private keys

ls *pem
admin-key.pem  apiserver-key.pem  ca-key.pem  kube-proxy-key.pem  
admin.pem      apiserver.pem      ca.pem      kube-proxy.pem


# Copy the CA certificate and the apiserver certificates and private keys into the ssl subdirectory of the kubernetes working directory
cp ca*pem apiserver*pem /opt/kubernetes/ssl/

# Upload kubernetes-server-linux-amd64.tar.gz to the /opt/k8s/ directory and unpack the kubernetes archive
cd /opt/k8s/
tar zxvf kubernetes-server-linux-amd64.tar.gz

# Copy the key master component binaries into the bin subdirectory of the kubernetes working directory
cd /opt/k8s/kubernetes/server/bin
cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
ln -s /opt/kubernetes/bin/* /usr/local/bin/


# Create the bootstrap token authentication file. The apiserver reads it at startup, which effectively creates this user inside the cluster; the user can then be authorized with RBAC
cd /opt/k8s/
vim token.csv
cfe2bd4ece1251173600ff7fd6b02410,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

mv token.csv /opt/kubernetes/cfg/
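The token above is just the example value used throughout this guide (format: token,user,uid,"group"). A fresh random 16-byte token is commonly generated like this:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '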
chmod +x *.sh
vim apiserver.sh
./apiserver.sh 192.168.154.10 https://192.168.154.10:2379,https://192.168.154.11:2379,https://192.168.154.12:2379

# Check whether the process started successfully
ps aux | grep kube-apiserver

netstat -natp | grep 6443   # the secure port 6443 receives HTTPS requests and performs authentication based on the token file, client certificates, etc.


# Start the controller-manager service
./controller-manager.sh
ps aux | grep kube-controller-manager

# Start the scheduler service
cd /opt/k8s/
./scheduler.sh
ps aux | grep kube-scheduler


# Generate the kubeconfig file kubectl uses to connect to the cluster
./admin.sh

# Bind the default cluster-admin cluster role to authorize kubectl access to the cluster
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous

# Use kubectl to check the current status of the cluster components
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}  

# Check the version information
kubectl version


6. Deploy Worker Node components

// Operate on all node nodes
# Create the kubernetes working directory
mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}

# Upload node.zip to the /opt directory and unpack it to obtain kubelet.sh and proxy.sh
cd /opt/
unzip node.zip
chmod +x kubelet.sh proxy.sh

// Operate on the master01 node
# Copy kubelet and kube-proxy to the node nodes
cd /opt/k8s/kubernetes/server/bin
scp kubelet kube-proxy root@192.168.154.11:/opt/kubernetes/bin/
scp kubelet kube-proxy root@192.168.154.12:/opt/kubernetes/bin/

# Upload kubeconfig.sh to the /opt/k8s/kubeconfig directory and generate the bootstrap kubeconfig file used by kubelet when it first joins the cluster, plus the kube-proxy.kubeconfig file
# A kubeconfig file contains cluster parameters (the CA certificate and the API Server address), client parameters (the certificates and private keys generated above) and cluster context parameters (cluster name, user name). Kubernetes components (such as kubelet and kube-proxy) can switch between clusters and connect to the apiserver by specifying different kubeconfig files at startup.
mkdir /opt/k8s/kubeconfig

cd /opt/k8s/kubeconfig
chmod +x kubeconfig.sh
./kubeconfig.sh 192.168.154.10 /opt/k8s/k8s-cert/
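kubeconfig.sh itself is not reproduced in this post; scripts of this kind typically build bootstrap.kubeconfig with kubectl config subcommands along these lines (the token must match token.csv, the variable names are illustrative, and kube-proxy.kubeconfig is built the same way but with the kube-proxy client certificate and key instead of the token):

APISERVER=192.168.154.10
SSL_DIR=/opt/k8s/k8s-cert
BOOTSTRAP_TOKEN=$(awk -F ',' '{print $1}' /opt/kubernetes/cfg/token.csv)

kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=https://$APISERVER:6443 \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap \
  --token=$BOOTSTRAP_TOKEN \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig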

# Copy the bootstrap.kubeconfig and kube-proxy.kubeconfig configuration files to the node nodes
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.154.11:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.154.12:/opt/kubernetes/cfg/

# RBAC authorization so that the kubelet-bootstrap user has permission to submit CSR certificate requests
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

------------------------------------------------------------------------------------------
kubelet uses the TLS bootstrapping mechanism to register itself with kube-apiserver automatically, which is very useful when there are many nodes or when nodes are added later.
Once TLS authentication is enabled on the master apiserver, the kubelet on a node must use a valid certificate signed by the CA to communicate with the apiserver before it can join the cluster. When there are many nodes, signing certificates by hand becomes tedious, so Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet requests a certificate from the apiserver as a low-privilege user, and the kubelet's certificate is signed dynamically on the apiserver side.

On its first start, the kubelet loads the user token and the apiserver CA certificate from bootstrap.kubeconfig and submits its first CSR request. The token is pre-populated in token.csv on the apiserver node and maps to the kubelet-bootstrap user and the system:kubelet-bootstrap group. For that first CSR request to succeed (i.e. not be rejected by the apiserver with a 401), a ClusterRoleBinding must be created in advance that binds the kubelet-bootstrap user to the built-in system:node-bootstrapper ClusterRole (listed by kubectl get clusterroles), giving it permission to submit CSR requests.

The certificates issued during TLS bootstrapping are actually signed by the kube-controller-manager component, which means their validity period is controlled by kube-controller-manager. It provides an --experimental-cluster-signing-duration parameter to set the validity of the signed certificates; the default is 8760h0m0s, and changing it to 87600h0m0s means the certificates only need to be re-signed via TLS bootstrapping after 10 years.

In other words, when the kubelet accesses the API Server for the first time it authenticates with the token; once that is approved, the Controller Manager generates a certificate for the kubelet, and all later access is authenticated with that certificate.

TLS bootstrapping mechanism
The master components automatically issue certificates to the kubelet:
1) When the kubelet accesses the apiserver for the first time, it authenticates with the token from bootstrap.kubeconfig
2) The kubelet, as a low-privilege user (kubelet-bootstrap from token.csv), sends a CSR request to the apiserver to apply for a certificate
3) Once the apiserver approves the CSR request, controller-manager generates the certificate according to its configuration and delivers it to the kubelet through the apiserver
4) From then on, the kubelet uses the issued certificate to authenticate when accessing the apiserver

------------------------------------------------------------------------------------------
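To verify the binding and see what the built-in role actually allows (essentially just creating CSRs), both objects can be inspected from master01:

kubectl describe clusterrolebinding kubelet-bootstrap
kubectl get clusterrole system:node-bootstrapper -o yaml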


// Operate on the node01 node
// Start the kubelet service
cd /opt/
./kubelet.sh 192.168.154.11
ps aux | grep kubelet

// Operate on the master01 node to approve the CSR request
// A CSR request from the kubelet on node01 is detected; Pending means it is waiting for the cluster to issue a certificate for this node
kubectl get csr
NAME                                                   AGE  SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-x-o7xwAYtP0-Cxemis_3WpGD5DFOadHldFQTK4UegTw   50s  kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

// Approve the CSR request
kubectl certificate approve node-csr-x-o7xwAYtP0-Cxemis_3WpGD5DFOadHldFQTK4UegTw

// Approved,Issued means the CSR request has been authorized and the certificate has been issued
kubectl get csr

// Check the nodes; since the network plugin has not been deployed yet, the node will show NotReady
kubectl get nodes


// Operate on the node01 node
// Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
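To confirm the IPVS modules were actually loaded:

lsmod | grep ip_vs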

// Start the kube-proxy service
cd /opt/
./proxy.sh 192.168.154.11
ps aux | grep kube-proxy

ls kubernetes/ssl/




// Automatically approve CSR requests
// On the master01 node
kubectl create clusterrolebinding node-autoapprove-bootstrap --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --user=kubelet-bootstrap

kubectl create clusterrolebinding node-autoapprove-certificate-rotation --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --user=kubelet-bootstrap


// On the node01 node
cd /opt
scp kubelet.sh proxy.sh 192.168.154.12:/opt

// On the node02 node
cd /opt
./kubelet.sh 192.168.154.12
ps aux | grep kubelet
ls kubernetes/ssl/

Go back to the master01 node to check

kubectl get nodes


7. Deploy CNI network components

7.1 Deploy flannel

Pod network communication in K8S:
● Communication between containers within a Pod
Containers in the same Pod (the containers of a Pod never span hosts) share the same network namespace, which is equivalent to being on the same machine, so they can access each other's ports via the localhost address.

● Communication between Pods on the same Node
Each Pod has a real global IP address. Different Pods on the same Node can communicate directly using each other's Pod IP addresses: Pod1 and Pod2 are connected through veth pairs to the same docker0 bridge and share the same network segment, so they can communicate with each other directly.

● Communication between Pods on different Nodes
The Pod address is in the same network segment as docker0, while the docker0 segment and the host's network card are two different network segments, so communication between different Nodes can only go through the host's physical network card.
To let Pods on different Nodes communicate, the traffic must be addressed and carried via the IP addresses of the hosts' physical network cards. Two conditions must therefore be met: Pod IPs must not conflict, and each Pod IP must be associated with the IP of the Node it runs on, so that Pods on different Nodes can communicate directly over the internal network through this association.

Overlay Network:
An overlay network is a virtual network technology layered on top of a layer-2 or layer-3 underlay network; the hosts in the network are connected through virtual link tunnels (similar to a VPN).

VXLAN:
The source packet is encapsulated in UDP, with the IP/MAC of the underlay network used as the outer header, and then transmitted over Ethernet. When it reaches the destination, the tunnel endpoint decapsulates it and delivers the data to the target address.

Flannel:
Flannel's job is to give Docker containers created by different node hosts in the cluster virtual IP addresses that are unique across the whole cluster.
Flannel is a kind of overlay network: it, too, encapsulates the source packets inside another network packet for routing, forwarding and communication. It currently supports three data forwarding modes: udp, vxlan and host-gw.
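The forwarding mode is selected by the Backend type in the net-conf.json section of the kube-flannel.yml manifest applied later in this guide. A typical fragment with the common default values (confirm against the actual file you deploy):

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }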

7.2 How Flannel udp mode works


1. After the data is sent from the source container of a Pod on host A, it is forwarded through the host's cni0/docker0 bridge to the flannel0 interface (once the network plugin is installed it is the cni0 bridge rather than docker0), and the flanneld service listens on the other end of the flannel0 interface.
2. The IP packets sent to the flannel0 interface are received by the flanneld process, which wraps them in a UDP packet (the UDP packet contains the original packet of the source Pod).
3. Flannel maintains a routing table between nodes through the etcd service (which holds the IPs of all Pods and the addresses of the corresponding node hosts); by querying etcd, flanneld can easily obtain the IP of the host (node) where the target container is located.
4. flanneld sends the encapsulated UDP message out through the physical network card to port 8285 on the target host, which delivers the packet to the flanneld process listening there.
5. The flanneld running on host B unpacks the UDP packet to recover the original IP packet, and the kernel forwards the IP packet to the cni0 bridge by looking up the local routing table.
6. The cni0 bridge forwards the IP packet to the target Pod attached to the bridge. At this point the whole process is complete; the reply travels back along the same path in reverse.

What etcd provides for Flannel:
storage and management of the IP address ranges that Flannel can allocate
monitoring the actual address of each Pod in etcd, and building and maintaining the Pod-to-node routing table in memory

Because udp mode forwards in user space and adds an extra layer of tunnel encapsulation, its performance is worse than that of vxlan mode, which forwards in kernel space.

vxlan mode:
vxlan is an overlay (virtual tunnel communication) technology that builds a virtual layer-2 network on top of a layer-3 network. Its implementation differs from udp mode:
(1) udp mode is implemented in user space: the data first passes through the tun device to the user-space application, which performs the tunnel encapsulation and then re-enters the kernel protocol stack, whereas vxlan is implemented in the kernel, passes through the protocol stack only once, and the vxlan packet is assembled inside the protocol stack.
(2) The tun device in udp mode does layer-3 forwarding; tun is used to build a layer-3 network on top of the physical network, i.e. IP in UDP, while vxlan mode is a layer-2 implementation whose overlay carries layer-2 frames, i.e. MAC in UDP.
(3) Because vxlan uses the MAC-in-UDP approach, its implementation involves layer-2 concepts such as MAC address learning and ARP broadcasts, whereas udp mode mainly deals with routing.

7.3 How Flannel vxlan mode works


1. After the data frame is sent from the source container of a Pod on host A, it is forwarded through the host's cni0 network interface to the flannel.1 interface.
2. After receiving the data frame, flannel.1 adds a VXLAN header and encapsulates it into a VXLAN UDP packet.
3. Host A sends the packet to host B's physical network card through its own physical network card.
4. Via VXLAN UDP port 8472, the packet is forwarded to the flannel.1 interface on host B and decapsulated there.
5. Based on the destination IP in the recovered original message, the kernel sends the original message to cni0, and finally cni0 delivers it to Pod B, which is attached to that interface.
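Once flannel is running in vxlan mode (deployed in the following steps), the flannel.1 device and its learned forwarding entries can be inspected on any node:

ip -d link show flannel.1        # shows the vxlan id, the local VTEP IP and the UDP port (8472)
bridge fdb show dev flannel.1    # MAC-to-VTEP forwarding entries learned for the other nodes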

// Operate on the node01 node
// Upload cni-plugins-linux-amd64-v0.8.6.tgz and flannel.tar to the /opt directory (here they come packaged in flannel-v0.21.5.zip)
cd /opt/
mkdir flannel
mv flannel-v0.21.5.zip flannel/
cd flannel/
unzip flannel-v0.21.5.zip

docker load -i flannel.tar
docker load -i flannel-cni-plugin.tar


cd /opt
mkdir -p /opt/cni/bin
cd flannel/
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin
cd /opt
scp -r cni/ flannel/ 192.168.154.12:/opt


// Operate on the node02 node
cd /opt
cd flannel/
docker load -i flannel.tar
docker load -i flannel-cni-plugin.tar


// Operate on the node01 node
cd /opt/flannel/
scp kube-flannel.yml 192.168.154.10:/opt/k8s

// Operate on the master01 node
// With the kube-flannel.yml file now in the /opt/k8s directory, deploy the CNI network
cd /opt/k8s
kubectl apply -f kube-flannel.yml 

kubectl get pods -A
NAMESPACE      NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-6md78   1/1     Running   0          3s
kube-flannel   kube-flannel-ds-mbsvh   1/1     Running   0          3s


kubectl get nodes
NAME             STATUS   ROLES    AGE   VERSION
192.168.154.11   Ready    <none>   46h   v1.20.15
192.168.154.12   Ready    <none>   46h   v1.20.15


8. Deploy CoreDNS

CoreDNS: CoreDNS is the default DNS implementation for Kubernetes. It creates a mapping from resource names to ClusterIPs for the Service resources in the cluster.
With DNS resolution, applications running on Kubernetes can refer to a Service by name instead of hardcoding the Service's cluster IP address.

// Operate on all node nodes
// Upload coredns.tar to the /opt directory
cd /opt
docker load -i coredns.tar

// Operate on the master01 node
// Upload the coredns.yaml file to the /opt/k8s directory and deploy CoreDNS
cd /opt/k8s
kubectl apply -f coredns.yaml

kubectl get pods -n kube-system 
NAME                          READY   STATUS    RESTARTS   AGE
coredns-5ffbfd976d-j6shb      1/1     Running   0          32s

// Operate on the master01 node
cd /opt/k8s
vim test.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
      
kubectl apply -f test.yaml


// DNS resolution test
kubectl run -it --rm dns-test --image=busybox:1.28.4 sh
If you don't see a command prompt, try pressing enter.
/ # vi /etc/resolv.conf          # copy the search domain default.svc.cluster.local from here
/ # nslookup my-service.default.svc.cluster.local

Name:      kubernetes
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local





Origin blog.csdn.net/ll945608651/article/details/131293470