Docker network study notes

In the earlier Docker basics we saw that the system automatically assigns an IP address to a container as soon as we start it. Where there is an IP there is a network, so today we study Docker networking.

Docker's four network modes (bridge, host, container, none)

When Docker starts, it creates a virtual bridge called docker0; every container started afterwards is attached to the docker0 bridge and automatically assigned an IP address.
Every container has its own PID.
Under /proc there is a directory named after each PID, and its ns/ subdirectory contains files for that PID's namespaces, including the container's network namespace.
We normally start containers attached to the network through the bridge; the IPs handed out on the bridge count upward. As long as the containers sit on the docker0 bridge, running ip a on the host shows one corresponding virtual NIC per container.
veth devices come in pairs: the end inside the container is named eth0, and the end added to the bridge is named veth*. Together they form a data channel: data that enters one end exits at the other, and that is how traffic flows.
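You can see this pairing on a real host. A quick sketch, where vm1 is a stand-in container name and the exact interface numbers will differ:

    [root@server1 ~]# docker run -it --name vm1 ubuntu
    root@...:/# ip a                            # inside: lo plus eth0@ifNN, one end of the pair

    # in another terminal, on the host:
    [root@server1 ~]# brctl show docker0        # one vethXXXX port per running container
    [root@server1 ~]# ip link show master docker0   # same list, via iproute2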

Docker supports four network modes when creating a container. bridge is the default and needs no flag; the other three must be selected with --net when the container is created.

bridge mode: selected with --net=bridge; the default.
none mode: selected with --net=none.
host mode: selected with --net=host.
container mode: selected with --net=container:<container name or ID>.
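For reference, the four modes map onto docker run like this (ubuntu is just a stand-in image, vm1 a stand-in container name):

    [root@server1 ~]# docker run -it ubuntu                       # bridge, the default
    [root@server1 ~]# docker run -it --net=none ubuntu
    [root@server1 ~]# docker run -it --net=host ubuntu
    [root@server1 ~]# docker run -it --net=container:vm1 ubuntu   # share vm1's network stack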

1. Bridge Mode

In bridge mode the container has no public IP: only the host can reach it directly, and it is invisible to external hosts, but the container can reach the outside world through the host's NAT rules.

The main implementation steps of bridge mode are as follows (a hand-run sketch of these steps follows the list):

  • The Docker daemon uses the veth pair technique to create two virtual network interfaces on the host, call them veth0 and veth1. The veth pair guarantees that whichever end receives a packet, the packet is handed to the other end.
  • The Docker daemon attaches veth0 to the docker0 bridge it created, which guarantees that network packets from the host can be sent to veth0.
  • The Docker daemon moves veth1 into the namespace the container belongs to and renames it eth0. This guarantees that a packet the host sends to veth0 is immediately received by eth0, connecting the container's network to the host, while eth0 remains exclusive to this container, keeping the container's network environment isolated.

Drawbacks of bridge mode:

  • Most obviously, the container has no public IP: its eth0 is not on the same network segment as the host's interface. As a result, the container and the world outside the host cannot talk to each other directly.
  • NAT bridges that gap, but NAT brings its own inconveniences: containers must compete for host ports, and clients of a service inside a container must be told, e.g. via service discovery, which host port the service is exposed on.
  • Moreover, because NAT is implemented at layer 3, it inevitably costs some network transmission efficiency.
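The three steps above can be replayed by hand to see the mechanics. This is only an illustration, using a named namespace demo as a stand-in for the container's namespace; Docker does all of this for you:

    [root@server1 ~]# ip netns add demo                       # stand-in container namespace
    [root@server1 ~]# ip link add veth0 type veth peer name veth1   # step 1: create the pair
    [root@server1 ~]# brctl addif docker0 veth0               # step 2: attach veth0 to docker0
    [root@server1 ~]# ip link set veth0 up
    [root@server1 ~]# ip link set veth1 netns demo            # step 3: move veth1 into the ns...
    [root@server1 ~]# ip netns exec demo ip link set veth1 name eth0   # ...and rename it eth0
    [root@server1 ~]# ip netns exec demo ip link set eth0 up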

2. Host network mode

Host mode is a good complement to bridge mode. A container in host mode can talk to the outside world directly using the host's IP address; if the host's eth0 carries a public IP, the container has that public IP too. Services in the container can use the host's ports directly, with no extra NAT translation.

In host mode the container shares the host's network stack. The benefit is that external hosts can communicate with the container directly; the cost is that the container loses its network isolation (a short sketch follows the notes below).

Drawbacks of host network mode:
this way of networking is convenient, but it inevitably gives up some other features:

  • Most obviously, it weakens the isolation of the container's network environment: the container no longer owns an isolated, independent network stack.
  • Although host mode lets traditional services run inside containers unmodified, the weakened isolation means containers share, and compete for, the host's network stack.
  • The container no longer owns the full range of port resources: some ports are already taken by the host's own services, and others by the port mappings of bridge-mode containers.
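A quick sketch of host mode in action (nginx is just an example service). Note the prompt inside the container shows the host's hostname, since even that is shared:

    [root@server1 ~]# docker run -it --net=host --name vm1 ubuntu
    root@server1:/# ip a          # exactly the host's interfaces: eth0, docker0, ...

    [root@server1 ~]# docker run -d --net=host nginx
    [root@server1 ~]# curl -s localhost:80 | head -1   # served on the host's port 80, no -p needed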

3. Container network mode

Container mode is one of Docker's more special network modes.
It is selected when the container is created with --net=container:<name>, where <name> is the name of an already running container.

In this mode two Docker containers share one network stack, so they can
communicate with each other over localhost quickly and efficiently.

Container network mode is implemented in two main steps:

  • 1. find the network namespace of the other container (the container whose network environment is to be shared);
  • 2. have the newly created container (the one that needs to share the network) use that other container's namespace instead of its own.

Container network mode can serve communication between containers well.
In this mode a container can reach the other containers in the namespace via localhost, with high transmission efficiency. Although several containers share one network environment, together they still form a unit that is isolated from the host and from other containers. The mode also saves a certain amount of network resources.

Drawback of container network mode:
it does nothing to improve communication between the container and the world outside the host (like bridge mode, it cannot reach devices beyond the host).
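A minimal sketch of container mode; vm1, vm2 and the nginx image are placeholders:

    [root@server1 ~]# docker run -d --name vm1 nginx
    [root@server1 ~]# docker run -it --name vm2 --net=container:vm1 ubuntu
    root@...:/# ip a     # shows vm1's eth0 and IP; vm1's nginx answers on localhost:80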

4. None network mode

None means the container gets no network environment at all: once a container uses none mode, only the loopback device exists inside it, and there are no other network resources. It may seem that none mode configures almost nothing, but as the saying goes, "less is more": with no network configuration at hand, developers gain an unlimited basis for all kinds of custom network development. This happens to reflect Docker's open design philosophy nicely.

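What none mode looks like from inside (a sketch; vm1 is a stand-in name):

    [root@server1 ~]# docker run -it --net=none --name vm1 ubuntu
    root@...:/# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 ...    # loopback is the only device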

Advanced container network settings

For custom network modes, docker provides three custom network drivers:

  • bridge
  • overlay
  • macvlan

The bridge driver creates networks similar to the default bridge mode but with some new features added; overlay and macvlan are used to create cross-host networks.

Custom networks are recommended: they let you control which containers may communicate with each other, and they can automatically resolve container DNS names to IP addresses.

1. Adding a docker custom network
a. IP address and gateway address assigned automatically

  1. Before creating anything, view the existing networks: bridge, host, none
  2. Create the custom network
  3. View the gateway address that was automatically assigned to the custom bridge network
  4. Run docker network inspect my_net1 to view the details of the custom bridge network
  5. Create containers using the custom network

We find that containers on the same custom bridge can communicate with each other (the commands below reconstruct these steps).
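Put together as commands, steps 1-5 look roughly like this; vm1 and vm2 are stand-in container names, and the subnet docker picks automatically will vary:

    [root@server1 ~]# docker network ls                  # bridge, host, none
    [root@server1 ~]# docker network create my_net1      # bridge driver by default
    [root@server1 ~]# docker network inspect my_net1     # shows the auto-assigned subnet/gateway
    [root@server1 ~]# docker run -it --name vm1 --network=my_net1 ubuntu
    [root@server1 ~]# docker run -it --name vm2 --network=my_net1 ubuntu
    root@...:/# ping vm1          # same custom bridge: reachable, and by name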

2. The --ip parameter assigns a fixed IP address to a container, but only on a custom bridge whose subnet and gateway were specified by hand; the default bridge mode does not support it. Containers on the same custom bridge can still communicate with each other, as the sketch below shows.
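A sketch of this: create a network with an explicit subnet and gateway, then pin a container's address. The 172.20.0.0/24 values here are made-up examples:

    [root@server1 ~]# docker network create --subnet 172.20.0.0/24 --gateway 172.20.0.1 my_net2
    [root@server1 ~]# docker run -it --name vm3 --network=my_net2 --ip=172.20.0.10 ubuntu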

  • Between docker's custom bridge networks, container domain names are resolved;
  • between a docker custom bridge and the system's built-in default bridge, names are by default also resolved;
  • but between the system's built-in bridges there is no name resolution.

We find that two containers on different networks cannot communicate directly.
3. Making containers on two different networks communicate

Here vm4 uses my_net1 and vm5 uses my_net2.
Using the docker network connect command we add a my_net2 NIC to vm4,
and the two containers communicate successfully (reconstructed as commands below).

Worth noting:

  • between docker's custom bridge networks, either side can freely add a NIC on the other network;
  • between a custom bridge network and the system's built-in bridge, only a container on the built-in bridge can add a NIC on the custom network; the other direction reports an error;
  • containers on the system's built-in bridge can communicate anyway, because they are on the same bridge;
  • since docker 1.10 there has been a built-in DNS server, but its name resolution only works on custom networks.
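The vm4/vm5 experiment, reconstructed as commands (interface numbers will differ):

    [root@server1 ~]# docker network connect my_net2 vm4   # add a my_net2 NIC to vm4
    [root@server1 ~]# docker attach vm4
    root@...:/# ip a          # now eth0 (my_net1) plus eth1 (my_net2)
    root@...:/# ping vm5      # resolves and answers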
4. Container access to the outside world, and access to the container from outside

Containers reach the outside world through SNAT rules in the host's iptables.
How the outside world reaches a container:

  • port mapping, with -p specifying the port

Access to containers from outside uses iptables DNAT together with docker-proxy:
when the host accesses its own local containers, iptables DNAT is used;
access from external hosts, or between containers, is implemented by docker-proxy.

Example:

  • View the current nat table of the firewall with iptables
  • Create an nginx container with a port mapping configured

In the last row of the nat table we can see the port-forwarding (DNAT) rule (reconstructed below).
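Reconstructed as commands; the container IP in the DNAT rule will differ on your host, and the rule output is abbreviated:

    [root@server1 ~]# iptables -t nat -nL DOCKER         # before: no DNAT rule yet
    [root@server1 ~]# docker run -d --name nginx -p 80:80 nginx
    [root@server1 ~]# iptables -t nat -nL DOCKER         # now shows the port-forwarding rule:
    DNAT       tcp  --  0.0.0.0/0   0.0.0.0/0   tcp dpt:80 to:172.17.0.2:80
    [root@server1 ~]# ps ax | grep docker-proxy          # the userspace proxy for the same port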

Docker cross-host network access

Cross-host networking solutions:
1. Docker-native: macvlan and overlay;
2. third-party: flannel, weave and calico.

How are all these network solutions integrated with docker?

Through libnetwork, docker's container network library, whose CNM (Container Network Model) abstracts the container network.

CNM has three classes of components:

Sandbox: the container's network stack, including its interfaces, DNS and routing table (a namespace).
Endpoint: connects a sandbox to a network (a veth pair).
Network: contains a group of endpoints; endpoints on the same network can communicate.

1. Implementing a cross-host network with macvlan
macvlan is a NIC virtualization technology provided by the Linux kernel. It needs no Linux bridge and uses the physical interface directly, so its performance is excellent.

  • For the cross-host setup each of the two virtual machines needs two NICs, one of which will be put into promiscuous mode.
    server1:
    add the configuration for the new NIC:

    [root@server1 ~]# cd /etc/sysconfig/network-scripts/
    [root@server1 network-scripts]# vim ifcfg-eth1
    [root@server1 network-scripts]# cat ifcfg-eth1
    BOOTPROTO=none
    DEVICE=eth1
    ONBOOT=yes
    [root@server1 network-scripts]# ifup eth1   # bring the NIC up
    server2:
    configure the eth1 NIC:

    [root@server2 ~]# cd /etc/sysconfig/network-scripts/
    [root@server2 network-scripts]# vim ifcfg-eth1
    [root@server2 network-scripts]# cat ifcfg-eth1
    BOOTPROTO=none
    DEVICE=eth1
    ONBOOT=yes
    [root@server2 network-scripts]# ifup eth1

macvlan itself is a Linux kernel module, essentially a NIC virtualization technology. Its function is to allow multiple virtual NICs on the same physical NIC, forwarding data at the data link layer according to the different MAC addresses: one card carries multiple MAC addresses (i.e. multiple interfaces), and each interface can be configured with its own IP. Docker's macvlan network actually uses the Linux macvlan driver.

Because packets from multiple MAC addresses are transmitted over the same physical card, the card has to be put into promiscuous mode: ip link set eth0 promisc on.

  • Enable promiscuous mode on eth1 of both server1 and server2

      [root@server1 network-scripts]# ip link set eth1 promisc on
      [root@server1 network-scripts]# ip addr show | grep eth1
      20: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
      [root@server2 network-scripts]# ip link set eth1 promisc on
      [root@server2 network-scripts]# ip addr show | grep eth1
      3: eth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
      As long as we can see PROMISC in the flags, it succeeded.
    

Note: if promiscuous mode is not enabled, the macvlan network cannot reach the outside world; in particular, when no vlan is used, you cannot ping out of the host or reach other hosts on the same network.

2. Create the macvlan network on both hosts

Unlike bridged mode, creating a macvlan network requires specifying the subnet and gateway (they must be identical on both hosts to ensure the cross-host network works), and both must actually exist.

server1 and server2 (the same command on both):
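The screenshots boil down to one command. The subnet matches the container addresses used below; the gateway 172.22.0.1 is an assumption, and parent=eth1 is the NIC we just put into promiscuous mode:

    [root@server1 ~]# docker network create -d macvlan \
        --subnet 172.22.0.0/24 --gateway 172.22.0.1 \
        -o parent=eth1 macvlan1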
3. Run a container on each of the two hosts using the macvlan1 network just created
server1

[root@server1 ~]# docker run -it --name vm1 --network=macvlan1 --ip=172.22.0.10 ubuntu
root@a1c8bdb2a1e7:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
21: eth0@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether 02:42:ac:16:00:0a brd ff:ff:ff:ff:ff:ff
    inet 172.22.0.10/24 brd 172.22.0.255 scope global eth0
       valid_lft forever preferred_lft forever

server2:

[root@server2 ~]# docker run -it --name vm2 --network=macvlan1 --ip=172.22.0.20 ubuntu
root@816cfcc1bb2a:/# 
root@816cfcc1bb2a:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5: eth0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether 02:42:ac:16:00:14 brd ff:ff:ff:ff:ff:ff
    inet 172.22.0.20/24 brd 172.22.0.255 scope global eth0
       valid_lft forever preferred_lft forever
root@816cfcc1bb2a:/# 

Access test: vm1 and vm2 can ping each other across the two hosts.

  • macvlan mode does not rely on a bridge, so brctl show reveals no newly created bridge; but looking at the container's network, the container NIC is attached to interface 20 of the host (eth0@if20).
  • Looking at the host network, interface 20 is the host's eth1.

So the container's eth0 is a virtual interface carved out of the host's eth1 by macvlan, and the container interface connects directly to the host NIC. With this solution a container can communicate with the external network directly, without NAT or port mapping (as long as the gateway is reachable), and it looks no different from any other independent host on the network.

4. Solving macvlan's monopolization of the host NIC
A macvlan network monopolizes the host NIC, but with vlan sub-interfaces one NIC can carry multiple macvlan networks.
vlan can divide a physical layer-2 network into up to 4094 logical networks, isolated from each other, with vlan IDs ranging from 1 to 4094.
We only need to use a vlan sub-interface when creating the macvlan network for the container:
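For example, a second macvlan network can ride a vlan sub-interface of eth1. The network name macvlan2, the sub-interface eth1.1 and the gateway are assumptions consistent with the output below; docker creates the sub-interface itself if it does not exist:

    [root@server1 ~]# docker network create -d macvlan \
        --subnet 172.23.0.0/24 --gateway 172.23.0.1 \
        -o parent=eth1.1 macvlan2
    [root@server1 ~]# docker run -it --name vm3 --network=macvlan2 --ip=172.23.0.30 ubuntu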
To let containers on the different vlan sub-interfaces communicate with each other, connect the container to the other sub-interface's network:

[root@server1 ~]# docker network connect macvlan1 vm3
[root@server1 ~]# docker attach vm3
root@067c3733cfea:/# 
root@067c3733cfea:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
23: eth0@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether 02:42:ac:17:00:1e brd ff:ff:ff:ff:ff:ff
    inet 172.23.0.30/24 brd 172.23.0.255 scope global eth0
       valid_lft forever preferred_lft forever
24: eth1@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether 02:42:ac:16:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.22.0.2/24 brd 172.22.0.255 scope global eth1
       valid_lft forever preferred_lft forever
root@067c3733cfea:/# ping 172.22.0.20
PING 172.22.0.20 (172.22.0.20) 56(84) bytes of data.
64 bytes from 172.22.0.20: icmp_seq=1 ttl=64 time=0.376 ms
64 bytes from 172.22.0.20: icmp_seq=2 ttl=64 time=0.395 ms
^C
--- 172.22.0.20 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.376/0.385/0.395/0.021 ms

In this way containers on different hosts can communicate with each other.


Origin: blog.csdn.net/weixin_42446031/article/details/91452126