Docker cross-host network solution: overlay

 

1. Overview of cross-host network

We have already covered several Docker network solutions: none, host, bridge and joined containers, which solve the problem of container communication within a single Docker host. This chapter focuses on solutions for inter-container communication across hosts.


Docker-native solutions: the overlay and macvlan drivers.
Third-party solutions: commonly used ones include flannel, weave and calico.

2. Prepare the overlay environment

To support cross-host communication between containers, Docker provides an overlay driver. A Docker overlay network requires a key-value store to save network state information, including Networks, Endpoints, IPs, etc. Consul, etcd and ZooKeeper are all key-value stores supported by Docker. We use Consul here.

1. Environment description

We will directly reuse the experimental environment created with docker-machine in the previous section: the cross-host network solutions are practiced on the docker hosts host1 (192.168.1.201) and host2 (192.168.1.203), while supporting components such as Consul are deployed on 192.168.1.200.
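
If you followed the previous section, you can list the machines and point the docker client at either host (a sketch, assuming the machine names host1 and host2 from that section):

docker-machine ls
eval $(docker-machine env host1)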

2. Create consul

Execute the following command on the host 192.168.1.200.

docker run -d -p 8500:8500 -h consul --name consul progrium/consul -server -bootstrap

After the container starts, Consul can be accessed at http://192.168.1.200:8500 .
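
From the command line, a quick way to confirm Consul is up is to query its status endpoint (a sketch; assumes curl is installed):

curl http://192.168.1.200:8500/v1/status/leader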


3. Modify docker configuration file

Next, modify the docker daemon configuration file /etc/systemd/system/docker.service.d/10-machine.conf on both host1 and host2, adding two options to the dockerd command line:


--cluster-store specifies the address of consul.
--cluster-advertise specifies the address (interface:port) this host advertises to the cluster.
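
A minimal sketch of the amended ExecStart line, assuming the Consul host is 192.168.1.200 and the host's network interface is named ens3 (the interface name and the other dockerd flags will differ in your environment):

ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
    --cluster-store=consul://192.168.1.200:8500 \
    --cluster-advertise=ens3:2376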

Restart the docker daemon.

systemctl daemon-reload  
systemctl restart docker.service

host1 and host2 will be automatically registered in the Consul database.
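
The registration can be verified through Consul's KV HTTP API; docker keeps its discovery data under the docker/nodes prefix (a sketch; the keys returned should correspond to the two daemon addresses):

curl 'http://192.168.1.200:8500/v1/kv/docker/nodes?keys'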


4. Environment ready

When the above is done, the experimental environment is complete: host1 and host2 each run a docker daemon that registers with, and shares network state through, the Consul instance on 192.168.1.200.


3. Create an overlay network

1. Create a network in host1

Create the overlay network ov_net1 in host1.

[root@ubuntu ~ [host1]]# docker network create -d overlay ov_net1
49a8ea9add6a80d44cbd6becf22d66af40072cf9c3a346d66f94a6e72d8042e5

-d overlay specifies the driver as overlay.

Use docker network ls to view the current networks:

[root@ubuntu ~ [host1]]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
d0829fccb85c        bridge              bridge              local
f59af6b0b523        host                host                local
2613e0c2029e        none                null                local
49a8ea9add6a        ov_net1             overlay             global

2. View the created network on host2

Notice that the SCOPE of ov_net1 is global, while the other networks are local. Check the existing network on host2:

[root@ubuntu ~ [host1]]# eval $(docker-machine env host2)
[root@ubuntu ~ [host2]]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
25f2e4a0bc16        bridge              bridge              local
bdab05c9c6b9        host                host                local
240f2dc93d43        none                null                local
49a8ea9add6a        ov_net1             overlay             global

ov_net1 can also be seen on host2. This is because when ov_net1 was created, host1 stored the overlay network information in consul, and host2 read the new network data from consul. Any subsequent changes to ov_net1 will likewise be synchronized to both host1 and host2.

3. View the detailed information of ov_net1

[root@ubuntu ~ [host2]]# docker network inspect ov_net1
[
    {
        "Name": "ov_net1",
        "Id": "49a8ea9add6a80d44cbd6becf22d66af40072cf9c3a346d66f94a6e72d8042e5",
        "Created": "2018-05-03T11:29:11.802222991+08:00",
        "Scope": "global",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]

IPAM stands for IP Address Management. The IP space docker automatically allocated to ov_net1 is 10.0.0.0/24.
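
As an aside, the subnet can be extracted directly with docker's built-in Go-template formatting (a sketch):

docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}' ov_net1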

4. Run the container in the overlay

1. Create container bbox1

Run a busybox container on host1 and connect to ov_net1.

[root@ubuntu ~ [host1]]# docker run -itd --name bbox1 --network ov_net1 busybox 
5246d782fc8fd30890bcf2bb34374c54db3ee277cae585572f4b20129b68e3fe

2. Check bbox1 network configuration

[root@ubuntu ~ [host1]]# docker exec bbox1 ip r
default via 172.18.0.1 dev eth1 
10.0.0.0/24 dev eth0 scope link  src 10.0.0.2 
172.18.0.0/16 dev eth1 scope link  src 172.18.0.2 

bbox1 has two network interfaces, eth0 and eth1.
eth0 (10.0.0.2) is connected to the overlay network ov_net1. eth1 (172.18.0.2) carries the container's default route. Behind the scenes, docker creates a bridge network "docker_gwbridge" to provide all containers connected to an overlay network with access to the external network.

[root@ubuntu ~ [host1]]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
74bdaebd8ae5        bridge              bridge              local
604a0b967e13        docker_gwbridge     bridge              local
f59af6b0b523        host                host                local
2613e0c2029e        none                null                local
49a8ea9add6a        ov_net1             overlay             global

From the output of docker network inspect docker_gwbridge, it can be confirmed that the address range of docker_gwbridge is 172.18.0.0/16, and the currently connected container is bbox1 (172.18.0.2).

[root@ubuntu ~ [host1]]# docker network inspect docker_gwbridge
[
    {
        "Name": "docker_gwbridge",
        "Id": "604a0b967e135d41c605547b65853a7a1315fceefe99c051d7a001e4e1207c1c",
        "Created": "2018-05-03T11:40:44.426460442+08:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "eba09cf91fa34e3a7eb1ab20047ce08ed8fb0ac4d8f2dc9ddf9a08116575eba2": {
                "Name": "gateway_eba09cf91fa3",
                "EndpointID": "3dd836e9db9ec3250cfa3bc1662ab8822918a7ea3a30b9e24e2ee7151f5f75b0",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.enable_icc": "false",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.name": "docker_gwbridge"
        },
        "Labels": {}
    }
]

The gateway of this network is 172.18.0.1, the IP of the bridge docker_gwbridge itself, which can be viewed on host1:

root@host1:~# ifconfig docker_gwbridge
docker_gwbridge Link encap:Ethernet  HWaddr 02:42:95:80:cf:70  
          inet addr:172.18.0.1  Bcast:172.18.255.255  Mask:255.255.0.0
          inet6 addr: fe80::42:95ff:fe80:cf70/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:648 (648.0 B)

In this way, container bbox1 can access the external network through docker_gwbridge.

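A simple way to confirm the outbound path is to ping an external address from the container (a sketch; any reachable address works):

docker exec bbox1 ping -c 2 8.8.8.8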

5. Overlay network connectivity

1. Run bbox2 in host2

[root@ubuntu ~ [host2]]# docker run -itd --name bbox2 --network ov_net1 busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
f70adabe43c0: Pull complete 
Digest: sha256:58ac43b2cc92c687a32c8be6278e50a063579655fe3090125dcb2af0ff9e1a64
Status: Downloaded newer image for busybox:latest
457e0e4363216d3fe70ea471f3c671863f166bf6d64edf1a92232d0cb585bd85

2. Check the routing situation of bbox2

[root@ubuntu ~ [host2]]# docker exec bbox2 ip r 
default via 172.18.0.1 dev eth1 
10.0.0.0/24 dev eth0 scope link  src 10.0.0.3 
172.18.0.0/16 dev eth1 scope link  src 172.18.0.2 

3. Interoperability test

The IP of bbox2 is 10.0.0.3, and it can ping bbox1 directly by name:

[root@ubuntu ~ [host2]]# docker exec bbox2 ping -c 2 bbox1
PING bbox1 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=24.710 ms
64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.524 ms

--- bbox1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.524/12.617/24.710 ms

It can be seen that containers in the same overlay network can communicate directly; moreover, docker implements a DNS service, which is why bbox1 could be reached by name.
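
To see the embedded DNS at work explicitly, you can resolve the peer's name from inside a container (a sketch; busybox ships an nslookup applet):

docker exec bbox2 nslookup bbox1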

4. Implementation principle

Docker creates an independent network namespace for each overlay network, containing a Linux bridge br0. One end of each veth pair is connected to the container (as its eth0), and the other end is attached to br0 in that namespace.

In addition to connecting all the veth pairs, br0 is also connected to a vxlan device that establishes VXLAN tunnels to the other hosts; data between containers travels through these tunnels. Note that br0 does not show up in brctl show on the hosts, because it lives inside the overlay's own namespace:

root@host1:~# brctl show              
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242cf3d50df       no
docker_gwbridge         8000.02429580cf70       no              vethb08d6be
root@host2:~# brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242303d222f       no
docker_gwbridge         8000.02423eb55e58       no              vethdb98439


To view the namespace of the overlay network, execute ip netns on host1 and host2 (make sure you have run ln -s /var/run/docker/netns /var/run/netns first, since docker keeps its namespaces outside the default search path). You can see that an identical namespace, "1-49a8ea9add", exists on both hosts:

root@host1:~# ip netns
c0052da621d7 (id: 1)
1-49a8ea9add (id: 0)
root@host2:~# ip netns
e486246b39d9 (id: 1)
1-49a8ea9add (id: 0)

"1-49a8ea9add" This is the namespace of ov_net1. Check the device on br0 in the namespace.

root@host1:~# ip netns exec 1-49a8ea9add brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.767c8828d7b0       no              veth0
                                                        vxlan0
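
To dig one level deeper, ip -d link shows the details of the vxlan device, including the VXLAN Network Identifier (VNI) of the tunnel (a sketch; exact output varies by kernel version):

ip netns exec 1-49a8ea9add ip -d link show vxlan0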

6. Overlay network isolation

1. Create network ov_net2

Different overlay networks are isolated from each other. We create a second overlay network ov_net2 and run container bbox3.

root@host1:~# docker network create -d overlay ov_net2
7c2ac9ec1a0ec477aa9230bcfffd91b1b79bee6ea7007a7a5de20ccae0b3d91c

2. Start container bbox3

root@host1:~# docker run -itd --name bbox3 --network ov_net2 busybox
1a3d81915a57b195a8b0baa2d1444bbeb5c947e7e203752ed975d6a900dbb141

3. View bbox3 network

The IP assigned to bbox3 is 10.0.1.2. Then try to ping bbox1 (10.0.0.2) from it.

root@host1:~# docker exec -it bbox3 ip r
default via 172.18.0.1 dev eth1 
10.0.1.0/24 dev eth0 scope link  src 10.0.1.2 
172.18.0.0/16 dev eth1 scope link  src 172.18.0.3 

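The attempt looks like this (a sketch; expect 100% packet loss):

docker exec bbox3 ping -c 2 10.0.0.2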

The ping fails, which shows that different overlay networks are isolated from each other. They cannot communicate even through docker_gwbridge, since inter-container communication is disabled on that bridge (note the "com.docker.network.bridge.enable_icc": "false" option in the inspect output above).


If you want bbox3 and bbox1 to communicate, you can simply connect bbox3 to ov_net1 as well.

docker network connect ov_net1 bbox3

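After the connect, bbox3 receives an additional interface with an address in 10.0.0.0/24, and a ping to bbox1 should now succeed (a sketch):

docker exec bbox3 ping -c 2 bbox1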

By default, docker allocates each overlay network a subnet with a 24-bit mask (10.0.0.0/24, 10.0.1.0/24, and so on). Of course, we can also specify the IP space through --subnet.

docker network create -d overlay --subnet 10.22.1.0/24 ov_net3
