Docker cross-host container communication

In Docker's default network environment, containers on a single host can communicate directly through the docker0 bridge, while containers on different hosts can only communicate through port mapping on the hosts. This port-mapping approach is very inconvenient for many cluster applications. If containers could use their own IP addresses to communicate directly, many problems would be solved. By underlying principle, the solutions fall into direct routing, bridging (e.g., pipework), overlay tunneling (e.g., flannel, OVS + GRE), and the like.
 
Changing the docker0 gateway:
First, remove the old bridge:
$ sudo ip link set dev docker0 down
$ sudo brctl delbr docker0
Then modify /etc/docker/daemon.json to change the default docker0 gateway:
{
  "bip": "192.188.0.1/16"
}
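Restart the Docker daemon for the new bip to take effect (a minimal sketch, assuming a systemd-managed host):
$ sudo systemctl restart docker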
Verify:
$ ifconfig docker0
docker0   Link encap:Ethernet  HWaddr 02:42:38:60:08:25
          inet addr:192.188.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
Alternatively, create a new bridge bridge0 manually:
$ sudo brctl addbr bridge0
$ sudo ip addr add 192.188.0.1/16 dev bridge0
$ sudo ip link set dev bridge0 up
$ ip addr show bridge0
4: bridge0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state UP group default
    link/ether 66:38:d0:0d:76:18 brd ff:ff:ff:ff:ff:ff
    inet 192.188.0.1/16 scope global bridge0
       valid_lft forever preferred_lft forever
Then add the following to /etc/docker/daemon.json so that Docker creates containers on this bridge by default:
{
  "bridge": "bridge0"
}
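Restart Docker and start a throwaway container to confirm that addresses now come from the new bridge (a sketch; the alpine image is an assumption):
$ sudo systemctl restart docker
$ docker run --rm alpine ip addr show eth0
The eth0 address printed inside the container should fall within 192.188.0.0/16.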
 
Direct Routing
Cross-host communication is achieved by adding static routes between the Docker hosts, as in the sketch below:
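A minimal sketch, assuming host1 (IP 192.168.1.101, docker0 subnet 172.17.1.0/24) and host2 (IP 192.168.1.102, docker0 subnet 172.17.2.0/24); all addresses are assumptions for illustration, and each host must first be given a non-overlapping docker0 subnet via bip:
# On host1: reach host2's containers through host2's host IP
$ sudo ip route add 172.17.2.0/24 via 192.168.1.102
# On host2: reach host1's containers through host1's host IP
$ sudo ip route add 172.17.1.0/24 via 192.168.1.101
With these routes in place, containers on the two hosts can address each other by container IP directly.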
Pipework
Pipework is an easy-to-use Docker container network configuration tool, implemented as a shell script of a bit more than 200 lines. It uses the ip, brctl, and ovs-vsctl commands to custom-configure bridges, network interfaces, and routes for Docker containers.
  • Use a new bridge br0 instead of the default docker0.
  • The difference between br0 and the default docker0 bridge: the host's eth0 is attached to br0, so containers bridged onto br0 sit on the same layer-2 network as the host.
As shown, for containers on different hosts to communicate, pipework creates a new virtual NIC for the container, attaches it to the br0 bridge, and binds an IP to it, as sketched below.
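A minimal usage sketch, assuming a running container named c1 and a host network 192.168.1.0/24 with gateway 192.168.1.1 (both assumptions for illustration):
# Create a new NIC inside c1, attach it to br0, and assign a static IP
$ sudo pipework br0 c1 192.168.1.10/24@192.168.1.1
Container c1 is then reachable at 192.168.1.10 from any machine on the same layer-2 network.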
 
Flannel (Flannel + UDP or Flannel + VXLAN)
The process by which Flannel implements cross-host container communication:
  • Install and run flannel and etcd (or another distributed key-value store) on every host;
  • Plan the docker0 subnet range of all hosts in etcd: the user only needs to specify one large IP pool, and Flannel automatically allocates a separate subnet to each host (see the configuration sketch after this list); routing information between the different subnets is generated automatically by Flannel from this configuration;
  • flanneld on each host allocates a docker0 subnet to its own host according to the etcd configuration, ensuring that the docker0 segments of all hosts do not overlap, and writes the result (i.e., the mapping between this host's IP and its docker0 subnet) into etcd; the etcd store thus holds the host-IP-to-docker0-subnet mapping for all hosts, effectively maintaining a cluster-wide routing table in etcd;
  • When a container needs to communicate with a container on another host, flanneld looks up the etcd store to find the outer IP (the destination host's IP) corresponding to the destination container's subnet;
  • The original packet is encapsulated in a VXLAN or UDP packet, with the outer IP used as the destination IP at the IP layer;
  • Since the destination IP is a host IP, the route is reachable: data sent from the source container is forwarded through the docker0 virtual NIC to the flannel0 virtual NIC (a P2P virtual NIC); the flanneld service listening on the other end of flannel0 encapsulates the original data and delivers it to the destination node according to its own routing table;
  • The destination host decapsulates the VXLAN or UDP packet to recover the original packet, which finally arrives at the destination container.
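A configuration sketch: before flanneld starts, the large IP pool is written into etcd under flannel's default key prefix (the pool 10.1.0.0/16 and the subnet length are assumptions for illustration):
$ etcdctl set /coreos.com/network/config \
    '{ "Network": "10.1.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }'
Each host's flanneld then carves a /24 for itself out of 10.1.0.0/16 and records it in etcd; changing "Type" to "udp" or "host-gw" selects the other modes mentioned in the PS below.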
PS: the Flannel network has two kinds of modes: the overlay modes described above, and a host-gw mode in which the host acts as a gateway and which relies on pure layer-3 IP forwarding;
Disadvantages: different containers on a Flannel network can communicate directly, and Flannel provides no network isolation; communication with external networks needs to go through the bridge network.
 
Weave
Weave is also an overlay network;
Weave hosts exchange network configuration information among themselves; no etcd or consul database is needed between the hosts;
By default, Weave places all containers in the subnet 10.32.0.0/12; if this subnet conflicts with the existing IP address space, a different one can be allocated with --ipalloc-range.
In the default configuration, all containers are in one large subnet and can communicate freely. To achieve network isolation, different subnet IPs must be specified for the containers; to communicate with external networks, the host must be added to the weave network and used as a gateway. A launch sketch follows.
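A launch sketch on two hosts, host1 and host2 (the host names and the custom range are assumptions for illustration):
host1$ weave launch --ipalloc-range 10.2.0.0/16
host2$ weave launch --ipalloc-range 10.2.0.0/16 host1   # peer with host1
host2$ eval $(weave env)                                # route docker commands through weave
host2$ docker run -ti alpine sh                         # gets an IP from 10.2.0.0/16
Containers started this way on host1 and host2 can then reach each other directly by their weave-assigned IPs.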
 
Calico
Calico is a pure layer-3 solution that provides multi-host communication between virtual machines and containers. It does not use an overlay network driver (e.g., flannel); instead of virtual switching it uses virtual routing, and each virtual router propagates reachability information (routes) via BGP to the other virtual or physical routers.
Calico can define a custom subnet for each host through IP Pools (see the sketch below).
Calico's default configuration allows communication between containers on the same network, but its powerful Policy mechanism makes access control possible for virtually any scenario.
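A minimal sketch of defining a custom IP pool with calicoctl (the pool name and CIDR are assumptions for illustration):
$ cat <<EOF | calicoctl create -f -
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: custom-pool
spec:
  cidr: 10.10.0.0/16
  ipipMode: Never       # pure layer-3, no encapsulation
  natOutgoing: true     # NAT traffic leaving the pool toward external networks
EOF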
Calico's important components include:
  • Felix: the Calico agent, running on every node that hosts workloads. It is responsible for configuring and issuing routing rules and ACLs, ensuring the connectivity state of the endpoints.
  • etcd: a distributed key-value store, mainly responsible for the consistency of network metadata, ensuring the accuracy of Calico's network state; it can be shared with Kubernetes;
  • BGP Client: responsible for distributing the routing information that Felix writes into the kernel across the Calico network, ensuring effective communication between workloads;
  • BGP Route Reflector: used in large-scale deployments to get rid of the full mesh of interconnected nodes; one or more BGP Route Reflectors perform centralized route distribution.
The figure below describes the process by which data travels from a source container, through the source host and the routing center, to the destination host and finally to the destination container.
The whole process forwards packets purely according to iptables rules and routing, with no encapsulation or decapsulation; flannel, by contrast, must encapsulate and decapsulate packets on top of routing and forwarding, which wastes CPU resources.
The next figure is a performance comparison of various open-source network components found on the Internet. As can be seen, in both bandwidth and network latency, Calico's performance is about the same as the host network's.

Source: www.cnblogs.com/yangyuliufeng/p/11494354.html