K8s CNI + Flannel (vs. CNM)


Both CNI and CNM are solutions for container networking.

  • CNM: proposed by Docker (the company behind the Docker container runtime)
  • CNI: proposed by CoreOS; the project was accepted by and is hosted under the CNCF

CNM

Docker's default network modes:

  • host: shares the host's network namespace
  • bridge: bridges containers to the host network card through a virtual bridge
  • none: the container gets an independent network namespace but no network configuration (commonly used for testing)

These networking modes alone cannot meet the need for container interconnection across hosts.

CNM definition

Libnetwork is the native implementation of CNM. It provides the interface between the Docker daemon and network drivers. The network controller is responsible for attaching a driver to a network; each driver manages the networks it owns and the services provided on those networks.
Several important terms:

  • Sandbox: a sandbox contains a container's network stack, including management of the container's network card, routing table, and DNS settings. A sandbox can be implemented as a Linux network namespace, a FreeBSD jail, or a similar construct. A sandbox can contain multiple endpoints.
  • Endpoint: an endpoint connects a sandbox to a network. It can be implemented as a veth pair, an Open vSwitch internal port, or by other means. An endpoint belongs to exactly one network and exactly one sandbox.
  • Network: a network is a group of endpoints that can communicate with each other. It can be implemented as a Linux bridge, a VLAN, or by other means. A network can contain many endpoints.
CNI

CNI is a network model that defines a standard for container networking; the implementation is left to network plug-ins. Two components are therefore involved: the container management system and the network plug-in, which communicate through JSON-format configuration files to realize the network functions.
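As a concrete sketch of that contract: the runtime passes parameters through `CNI_*` environment variables and feeds the configuration JSON to the plug-in on stdin. The subnet, container ID, and netns path below are made-up values for illustration:

```shell
# Write an example CNI network configuration (all values are illustrative).
cat > /tmp/mynet.conf <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24"
  }
}
EOF

# Invoke the plug-in the way a runtime would: CNI_* variables plus the
# config on stdin. Guarded so the sketch is a no-op on machines where
# the plug-in binaries are not installed.
if [ -x /opt/cni/bin/bridge ]; then
  CNI_COMMAND=ADD \
  CNI_CONTAINERID=example123 \
  CNI_NETNS=/var/run/netns/example \
  CNI_IFNAME=eth0 \
  CNI_PATH=/opt/cni/bin \
  /opt/cni/bin/bridge < /tmp/mynet.conf
fi
```

On success the plug-in replies with a result JSON describing the interfaces and IPs it configured.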

Network plug-ins fall into three classes:

  • main: binary files that create interfaces. For example, bridge creates a Linux bridge, ptp creates a veth pair device, loopback creates the lo device, and so on.
  • ipam: programs that manage and assign IP addresses. For example, dhcp requests an address from a DHCP server, while host-local assigns addresses from a pre-configured IP range.
  • meta: plug-ins maintained by the CNI community that build on the others. For example, flannel is the CNI plug-in provided for the Flannel project; it cannot be used independently and must delegate to a main plug-in to realize network communication.
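To see how the classes combine, compare two minimal configurations (file names and subnets are illustrative): a main plug-in embeds an ipam section directly, while a meta plug-in like flannel only carries a delegate section and calls a main plug-in at runtime.

```shell
# "main" + "ipam": bridge does the wiring, host-local hands out
# addresses from a static range.
cat > /tmp/10-bridge.conf <<'EOF'
{
  "name": "mybridge",
  "type": "bridge",
  "ipam": { "type": "host-local", "subnet": "172.18.0.0/24" }
}
EOF

# "meta": flannel cannot stand alone; at runtime it generates a full
# bridge configuration from its delegate section and invokes the
# bridge plug-in with it.
cat > /tmp/10-flannel-demo.conf <<'EOF'
{
  "name": "cni0",
  "type": "flannel",
  "delegate": { "isDefaultGateway": true }
}
EOF
```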

The kubelet startup command takes the following parameters:

  • --cni-bin-dir: directory to search for CNI plug-in binaries; default /opt/cni/bin
  • --cni-conf-dir: directory to search for CNI plug-in configuration files; default /etc/cni/net.d
  • --network-plugin: name of the network plug-in to use; if unset, kubelet falls back to Docker's built-in (CNM) networking

Configure CNI + Flannel

This configuration builds on the Flannel deployment written up previously; the Flannel deployment steps are omitted here (the difference is that CNM is switched to CNI).


  • The CNM plug-in is based on the docker0 virtual network card, while CNI creates a separate bridge whose name is customized in the configuration file
  • Flannel supports two modes: vxlan and host-gw
    • vxlan: a layer-2 overlay; tunnels are established over IP
    • host-gw: direct routing, with the peer hosts acting as gateways
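The backend mode is selected in Flannel's net-conf.json (typically stored in etcd or in the kube-flannel ConfigMap). A sketch of the two variants, assuming a 10.244.0.0/16 pod network:

```shell
# vxlan backend: layer-2 overlay that tunnels pod traffic over the
# hosts' IP network; works across routed segments.
cat > /tmp/net-conf-vxlan.json <<'EOF'
{
  "Network": "10.244.0.0/16",
  "Backend": { "Type": "vxlan" }
}
EOF

# host-gw backend: no tunnel; each host installs routes to the other
# hosts' pod subnets directly, so it needs layer-2 adjacency between
# hosts but avoids encapsulation overhead.
cat > /tmp/net-conf-hostgw.json <<'EOF'
{
  "Network": "10.244.0.0/16",
  "Backend": { "Type": "host-gw" }
}
EOF
```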

kubelet configuration

--cni-bin-dir=/opt/cni/bin \
--cni-conf-dir=/etc/cni/net.d \
--network-plugin=cni \

Download the CNI plug-in archive and extract it to /opt/cni/bin

wget https://github.com/containernetworking/plugins/releases/download/v0.9.0/cni-plugins-linux-amd64-v0.9.0.tgz
mkdir -p /opt/cni/bin
tar -zxf cni-plugins-linux-amd64-v0.9.0.tgz  -C /opt/cni/bin

Create a CNI configuration file

mkdir -p /etc/cni/net.d
cat /etc/cni/net.d/10-flannel.conf 
{
  "name": "cni0",
  "type": "flannel",
  "subnetFile": "/run/flannel/subnet.env",
  "delegate": {
    "hairpinMode": true,
    "isDefaultGateway": true
  }
}
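The subnetFile referenced above is written by the flanneld daemon; the flannel CNI plug-in reads it to build the delegated bridge configuration. Its contents look roughly like the following (addresses are example values, and the file is written to /tmp here rather than /run/flannel for illustration):

```shell
# Example of what flanneld writes to /run/flannel/subnet.env. The
# flannel CNI plug-in turns FLANNEL_SUBNET into the host-local ipam
# range for the delegated bridge plug-in.
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF

# The file is a plain shell environment file and can be sourced:
. /tmp/subnet.env
echo "this node's pod subnet: $FLANNEL_SUBNET"
```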

CNM legacy issues
Since the CNM plug-in was used previously, the Docker bridge IP address needs to be changed, otherwise there will be routing conflicts; alternatively, regenerate a subnet with Flannel.

Any address will do, as long as it does not conflict with the Flannel subnets.
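One way to move docker0 onto such an address is the `bip` option in Docker's daemon configuration. The range below is an arbitrary non-conflicting example, and the file is written to /tmp for illustration; on a real node it lives at /etc/docker/daemon.json:

```shell
# Give docker0 an address range that does not overlap the Flannel
# network (10.244.0.0/16 in this write-up); any otherwise-unused
# private range works.
cat > /tmp/daemon.json <<'EOF'
{
  "bip": "172.17.100.1/24"
}
EOF
```

After changing the file, Docker must be restarted for docker0 to pick up the new address, which is what the restart step below does.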

Restart the service and check whether the service is normal

systemctl restart docker.service
systemctl restart kubelet.service
kubectl get nodes -o wide
# check that the node status is Ready

Reference: https://www.cni.dev/plugins/
Reference: https://www.cnblogs.com/rexcheny/p/10960233.html

Origin: blog.csdn.net/yangshihuz/article/details/113585122