Understand Docker's network modes and cross-host communication in one minute

Docker's four network modes

 

Bridge mode

 

When the Docker daemon starts, it creates a virtual bridge named docker0 on the host, and Docker containers started on this host are connected to this virtual bridge. The virtual bridge works like a physical switch, so all containers on the host are connected to a layer-2 network through it.

Docker allocates an IP address from the docker0 subnet for the container and sets docker0's IP address as the container's default gateway. It also creates a pair of virtual network interfaces (a veth pair) on the host: one end is placed inside the newly created container and named eth0 (the container's network interface), while the other end stays on the host with a name like vethxxx and is attached to the docker0 bridge. The attachment can be viewed with the brctl show command.

Bridge mode is Docker's default network mode: if the --net parameter is omitted, bridge mode is used. When you use docker run -p, Docker actually creates DNAT rules in iptables to implement port forwarding. You can view them with iptables -t nat -vnL.
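For example, here is a minimal sketch of publishing a port and inspecting the resulting DNAT rule (the container name, image, and port numbers are assumptions for illustration):

# Map host port 8080 to port 80 inside the container.
docker run -tid -p 8080:80 --name docker_web ubuntu-base:v3

# Docker adds a DNAT rule for this mapping to the DOCKER chain in the nat table.
iptables -t nat -vnL DOCKER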

The bridge mode is shown in the following figure:

 

Demo:

docker run -tid --net=bridge --name docker_bri1 \
             ubuntu-base:v3
docker run -tid --net=bridge --name docker_bri2 \
             ubuntu-base:v3

brctl show
docker exec -ti docker_bri1 /bin/bash
docker exec -ti docker_bri2 /bin/bash

ifconfig -a
route -n

 

Host mode

 

 

If host mode is used when starting a container, the container does not get an independent Network Namespace but shares one with the host. The container does not virtualize its own network interface or configure its own IP; it uses the host's IP and ports directly. However, other aspects of the container, such as the file system and process list, are still isolated from the host.

The Host mode is shown in the following figure:

 

Demo:

docker run -tid --net=host --name docker_host1 ubuntu-base:v3
docker run -tid --net=host --name docker_host2 ubuntu-base:v3

docker exec -ti docker_host1 /bin/bash
docker exec -ti docker_host2 /bin/bash

ifconfig -a
route -n

 

 

Container mode

 

 

This mode makes a newly created container share a Network Namespace with an existing container, rather than with the host. The new container does not create its own network interface or configure its own IP; it shares the IP, port range, and so on with the specified container. Apart from the network, the two containers remain isolated in other respects, such as the file system and process list. Processes in the two containers can communicate through the lo (loopback) device.

Schematic diagram of Container mode:

 

Demo:

docker run -tid --net=container:docker_bri1 \
              --name docker_con1 ubuntu-base:v3

docker exec -ti docker_con1 /bin/bash
docker exec -ti docker_bri1 /bin/bash

ifconfig -a
route -n

 

None mode

 

In none mode, the Docker container has its own Network Namespace, but no network configuration is performed for it. In other words, the container has no network interface, IP address, routes, or other network information. We need to add network interfaces and configure IPs for the container ourselves.

Schematic diagram of None mode:

Demo:

docker run -tid --net=none --name docker_non1 \
                ubuntu-base:v3

docker exec -ti docker_non1 /bin/bash

ifconfig -a
route -n
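
As a hedged illustration of what "configuring the network ourselves" means, the following sketch attaches the none-mode container above to docker0 by hand using a veth pair (the interface names and addresses are assumptions; tools such as pipework automate exactly these steps):

# Expose the container's network namespace to the ip netns tool.
pid=$(docker inspect -f '{{.State.Pid}}' docker_non1)
mkdir -p /var/run/netns
ln -s /proc/$pid/ns/net /var/run/netns/$pid

# Create a veth pair; attach one end to docker0, move the other into the container.
ip link add veth_non1 type veth peer name ceth_non1
brctl addif docker0 veth_non1
ip link set veth_non1 up
ip link set ceth_non1 netns $pid

# Inside the container's namespace: rename, assign an IP, bring up, add a default route.
ip netns exec $pid ip link set ceth_non1 name eth0
ip netns exec $pid ip addr add 172.17.0.100/16 dev eth0
ip netns exec $pid ip link set eth0 up
ip netns exec $pid ip route add default via 172.17.0.1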

 

 

Cross-host communication

In Docker's default network environment, containers on a single host can communicate directly through the docker0 bridge, while containers on different hosts can only communicate through port mapping on the hosts. This port-mapping approach is extremely inconvenient for many cluster applications. If containers could communicate directly using their own IP addresses, many problems would be solved. Based on the implementation principle, cross-host solutions can be divided into direct routing, bridging (such as pipework), and overlay tunneling (such as Flannel or OVS+GRE).

 

Direct routing

 

Cross-host communication is achieved by adding static routes on the Docker hosts:
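
A minimal sketch, assuming host A (192.168.1.10) uses 172.17.1.0/24 as its docker0 subnet and host B (192.168.1.20) uses 172.17.2.0/24 (all addresses here are assumptions):

# On host A: route traffic for host B's container subnet via host B.
ip route add 172.17.2.0/24 via 192.168.1.20

# On host B: route traffic for host A's container subnet via host A.
ip route add 172.17.1.0/24 via 192.168.1.10

For this to work, the docker0 subnets on the two hosts must not overlap, and IP forwarding must be enabled on both hosts.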

 

 

Pipework

 

 

Pipework is an easy-to-use Docker container network configuration tool, implemented in a little over 200 lines of shell script. It configures custom bridges, network interfaces, routes, and so on for Docker containers by calling commands such as ip, brctl, and ovs-vsctl. For example, you can:

  • Use a newly created bri0 bridge instead of the default docker0 bridge (see the sketch after this list)

  • The difference between the bri0 bridge and the default docker0 bridge: bri0 is connected to the host's eth0 through a veth pair
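
A minimal sketch of the first point, assuming pipework is installed and that the bridge name, container name, and addresses below are placeholders:

# Create a custom bridge and start a container with no network of its own.
brctl addbr bri0
ip link set bri0 up
docker run -tid --net=none --name docker_pipe1 ubuntu-base:v3

# Let pipework attach the container to bri0 with a static IP and default gateway.
pipework bri0 docker_pipe1 192.168.1.10/24@192.168.1.1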

 

Flannel (Flannel + UDP or Flannel + VXLAN)

 

Flannel implements cross-host container communication through the following process:

  • Install and run etcd and flannel on each host;

  • Plan and configure the docker0 subnet range for all hosts in etcd (an example configuration is sketched after the demo below);

  • According to the configuration in etcd, flanneld on each host assigns a subnet to that host's docker0, ensuring that the docker0 network segments on all hosts do not overlap, and stores the result (that is, the mapping between the host's docker0 subnet and the host's IP) in etcd, so that etcd holds the docker0 subnet-to-host-IP mapping for all hosts;

  • When a container needs to communicate with a container on another host, flanneld looks up the etcd database to find the outip (the IP of the destination host) corresponding to the destination container's subnet;

  • The original packet is encapsulated in a VXLAN or UDP packet, with outip used as the destination IP at the IP layer;

  • Since the destination IP is the host IP, the route is reachable;

  • The VXLAN or UDP packet arrives at the destination host and is decapsulated to recover the original packet, which finally reaches the destination container.

The Flannel mode is shown in the following figure:

 

Demo:

/opt/bin/etcdctl get /coreos.com/network/config
/opt/bin/etcdctl ls /coreos.com/network/subnets
/opt/bin/etcdctl get \
    /coreos.com/network/subnets/172.16.49.0-24
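
The network configuration read by the first command above is a JSON document stored in etcd. A minimal sketch of writing it (the subnet range and backend type are assumptions and must match your own planning):

# Store the cluster-wide container subnet range and the VXLAN backend in etcd.
/opt/bin/etcdctl set /coreos.com/network/config \
    '{ "Network": "172.16.0.0/16", "Backend": { "Type": "vxlan" } }'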


http://www.wtoutiao.com/p/199PeVu.html
