[Introduction to Flannel for Docker]

Tools for cross-host Docker container network communication include Pipework, Flannel, Weave, Open vSwitch (a virtual switch), and Calico. The differences between Pipework, Weave, and Flannel are:

 

Weave's idea

A special router container runs on each host, and the router containers on different hosts connect to one another. The router intercepts all IP packets from ordinary containers and delivers them to ordinary containers on other hosts via UDP packets.

In this way, containers spread across machines all see the same flat network. Weave solves the networking problem, but deployment is still per-host.

 

Flannel's idea

Flannel is a network planning service designed by the CoreOS team for Kubernetes. In simple terms, it gives Docker containers created on different node hosts virtual IP addresses that are unique across the entire cluster. In the default Docker configuration, the Docker daemon on each node allocates IPs for the containers on that node independently, so containers on different nodes may end up with the same IP address. Flannel also enables these containers to find each other through their IP addresses, that is, to ping each other.

 

Flannel's design goal is to re-plan IP address allocation across all nodes in the cluster, so that containers on different nodes obtain non-conflicting IP addresses that belong to the same internal network, and so that containers on different nodes can communicate directly through those internal IPs.

 

Flannel is essentially an "overlay network": an application-layer network running on top of another network. Rather than relying on the underlying IP addresses alone to deliver messages, it uses a mapping mechanism that maps IP addresses to resource locations. That is, it wraps TCP data inside another network packet for routing, forwarding, and communication. Currently supported data forwarding backends include UDP, VXLAN, AWS VPC, and GCE routing.

 

The principle is to configure an IP segment and a per-host subnet size. For example, an overlay network can be configured to use the 10.100.0.0/16 segment with a /24 subnet for each host. Host A would then accept packets for 10.100.5.0/24, and host B packets for 10.100.18.0/24. Flannel uses etcd to maintain the mapping between assigned subnets and the hosts' actual IP addresses. For the data path, flannel encapsulates IP datagrams in UDP and forwards them to the remote host. UDP was chosen as the forwarding protocol because it can traverse firewalls: AWS Classic, for example, cannot forward IPoIP or GRE packets because its security groups only support TCP/UDP/ICMP.
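As a rough illustration (not flannel's actual code), carving per-host /24 subnets out of the 10.100.0.0/16 segment can be sketched with Python's `ipaddress` module; the host assignments below are hypothetical:

```python
import ipaddress

# The cluster-wide overlay segment and per-host subnet size from the
# example above: 10.100.0.0/16 split into /24 leases.
network = ipaddress.ip_network("10.100.0.0/16")
subnets = list(network.subnets(new_prefix=24))  # 256 possible host subnets

# Hypothetical assignment: each host leases one /24 out of the pool.
host_a = subnets[5]    # 10.100.5.0/24
host_b = subnets[18]   # 10.100.18.0/24

print(host_a, host_b)

# A container IP allocated on host A falls inside host A's lease:
print(ipaddress.ip_address("10.100.5.7") in host_a)  # True
```

Every container IP then maps unambiguously to exactly one host's subnet, which is what makes the routing in the following sections possible.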

 

Flannel uses etcd to store configuration data and subnet assignment information. After flannel starts, the flanneld daemon first retrieves the configuration and the list of subnets already in use, selects an available subnet, and attempts to register it.

etcd also stores the public IP of the host holding each subnet lease. Flannel uses etcd's watch mechanism to monitor changes to all entries under /coreos.com/network/subnets and maintains a routing table based on them.
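The watch-and-sync idea can be sketched in a few lines of Python. This is a hypothetical in-memory stand-in for etcd, not flannel's implementation: keys mimic the /coreos.com/network/subnets layout, and each watch event updates a local routing table.

```python
# Hypothetical in-memory stand-in for etcd's subnet keys:
# subnet lease -> public IP of the host that holds it.
subnet_store = {
    "/coreos.com/network/subnets/10.100.5.0-24": "192.168.1.10",
    "/coreos.com/network/subnets/10.100.18.0-24": "192.168.1.11",
}

routing_table = {}

def apply_event(action, key, value=None):
    """Mimic a watch event: keep the local routing table in sync."""
    subnet = key.rsplit("/", 1)[-1].replace("-", "/")
    if action == "set":
        routing_table[subnet] = value      # route this subnet via that host
    elif action == "delete":
        routing_table.pop(subnet, None)    # lease expired or host left

# Initial sync: replay the current store contents as "set" events.
for key, host_ip in subnet_store.items():
    apply_event("set", key, host_ip)

print(routing_table)
# {'10.100.5.0/24': '192.168.1.10', '10.100.18.0/24': '192.168.1.11'}
```

In the real system the events come from etcd's watch stream rather than a replayed dict, but the resulting table serves the same purpose: given a destination container IP, find which host's public IP to forward to.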

 

 

Pipework's idea

Pipework is a stand-alone tool that wraps utilities such as brctl. It can be thought of as handling the container's virtual NIC, bridge, IP, and so on, on the host machine, and it can be used together with other networking tools.

 

 

Flannel is a simple and easy way to configure a layer 3 network fabric designed for Kubernetes.

 

How it works

Flannel runs a small, single binary agent called flanneld on each host, which is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space. Flannel uses either the Kubernetes API or etcd directly to store the network configuration, the allocated subnets, and any auxiliary data (such as the host's public IP). Packets are forwarded using one of several backend mechanisms, including VXLAN and various cloud integrations.
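The network configuration that flanneld reads from its store is a small JSON document. The sketch below shows its typical shape (the `Network`, `SubnetLen`, and `Backend` keys are flannel's documented configuration keys; the concrete values are examples, not defaults):

```python
import json

# Example of the network configuration flanneld reads from etcd or the
# Kubernetes API. Values here match the /16-with-/24-leases example
# used earlier in this article.
config = {
    "Network": "10.100.0.0/16",    # cluster-wide address space
    "SubnetLen": 24,               # size of each host's subnet lease
    "Backend": {"Type": "vxlan"},  # packet forwarding mechanism
}

print(json.dumps(config, indent=2))
```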

 

Networking details

Platforms like Kubernetes assume that each container (pod) has a unique, routable IP inside the cluster. The advantage of this model is that it removes the port mapping complexities that come from sharing a single host IP.

 

Flannel is responsible for providing a layer 3 IPv4 network between multiple nodes in a cluster. Flannel does not control how containers are networked to the host, only how the traffic is transported between hosts. However, flannel does provide a CNI plugin for Kubernetes and guidance on integrating with Docker.

 

Flannel is focused on networking. For network policy, other projects such as Calico can be used.

 

Getting started on Kubernetes

The easiest way to deploy flannel with Kubernetes is to use one of several deployment tools and distributions that network clusters with flannel by default. For example, CoreOS's Tectonic sets up flannel in the Kubernetes clusters it creates using the open source Tectonic Installer to drive the setup process.

 

Though not required, it's recommended that flannel uses the Kubernetes API as its backing store which avoids the need to deploy a discrete etcd cluster for flannel. This flannel mode is known as the kube subnet manager.

 

 

Flannel is an overlay network tool provided by CoreOS to solve cross-host communication in Docker clusters. Its main idea: set aside a network segment in advance, give each host a part of it, and assign each container a different IP, so that all containers appear to be on the same directly connected network, while the underlying layer encapsulates and forwards packets via UDP, VXLAN, etc.

 

 



 

The process of sending a network packet from one container to another:

1. The container addresses the target container directly by its IP; by default, the packet leaves through eth0 inside the container.

2. The packet travels over the veth pair to vethXXX on the host.

3. vethXXX is attached to the virtual bridge docker0, and the packet is sent out through docker0.

4. A routing-table lookup forwards packets destined for container IPs on other hosts to the flannel0 virtual NIC. flannel0 is a point-to-point (P2P) virtual device, and the packets are handed to the flanneld process listening on its other end.

5. flanneld maintains the routing table between nodes via etcd, encapsulates the original packet in UDP, and sends it out through the configured iface.

6. The packet crosses the network between the hosts and reaches the target host.

7. The packet travels up to the transport layer and is handed to the flanneld process listening on port 8285.

8. The data is decapsulated and sent to the flannel0 virtual NIC.

9. A routing-table lookup shows that packets for the target container should be handed to docker0.

10. docker0 finds the container attached to it and delivers the packet.
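Steps 5 and 8 above (encapsulate, forward over UDP, decapsulate) can be demonstrated with a toy Python sketch. Real flanneld encapsulates full IP datagrams and listens on UDP port 8285; this hypothetical version wraps just a destination container IP plus a payload, and uses a loopback socket on an ephemeral port so it can run anywhere:

```python
import socket

def encapsulate(container_ip: str, payload: bytes) -> bytes:
    # 4-byte inner destination address followed by the payload.
    return socket.inet_aton(container_ip) + payload

def decapsulate(datagram: bytes) -> tuple:
    return socket.inet_ntoa(datagram[:4]), datagram[4:]

# "flanneld" on the receiving host (real flanneld would bind port 8285).
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))

# "flanneld" on the sending host wraps the inner packet in a UDP datagram
# and forwards it to the remote host's address.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(encapsulate("10.100.18.3", b"ping"), receiver.getsockname())

# On arrival, the datagram is unpacked and the inner destination recovered.
inner_ip, payload = decapsulate(receiver.recvfrom(1024)[0])
print(inner_ip, payload)  # 10.100.18.3 b'ping'

sender.close()
receiver.close()
```

The UDP header and the outer host addresses are the "outer packet" of the overlay; the bytes inside are the container-to-container traffic, untouched in transit.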
