1. Cross-host Docker container communication solutions
Native Docker network drivers
- Overlay: Docker's native cross-host network, implemented with VXLAN encapsulation
- Macvlan: logically divides a host NIC into multiple sub-interfaces, each of which can carry a VLAN tag; containers attach directly to these host sub-interfaces
- Host routing: forwards container traffic to other Docker hosts through routing policies on the host
Third-party network projects
Tunnel (overlay) schemes:
- Flannel: supports UDP encapsulation and VXLAN
- Weave: supports UDP (sleeve mode) and VXLAN (preferred fastdp mode)
- Open vSwitch: supports the GRE and VXLAN protocols
Routing schemes:
- Calico: uses the BGP protocol and supports IPIP tunneling. Each host acts as a virtual router, and containers on different hosts communicate via BGP routes.
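As an illustration of the macvlan driver listed above, such a network might be created like this (a sketch: the uplink name eth0, the 192.168.1.0/24 subnet, and the network name macnet are assumptions to adapt to your LAN):

```shell
# Create a macvlan network bound to the host uplink eth0 (assumed name);
# containers attached to it get addresses directly on the physical LAN.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 macnet

# Attach a container and inspect its LAN-facing interface:
docker run -it --rm --net=macnet busybox ifconfig
```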
2. Docker Overlay Network
An overlay network encapsulates Layer 2 frames inside IP packets using an agreed-on encapsulation protocol, without changing the existing network infrastructure. This not only takes full advantage of mature IP routing protocols to distribute data; because the overlay's isolation identifier is wider than a VLAN ID, it also breaks the roughly 4000-VLAN limit and supports up to 16M tenants, and, when necessary, broadcast traffic can be converted to multicast traffic to avoid broadcast flooding.
Overlay networks are therefore the most mainstream scheme for cross-node container data transfer and routing.
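The 4000-vs-16M figure follows directly from header widths: a VLAN ID is 12 bits wide while a VXLAN network identifier (VNI) is 24 bits, which shell arithmetic confirms:

```shell
# 12-bit VLAN ID space (IDs 0 and 4095 are reserved in practice):
echo $((2**12))    # 4096
# 24-bit VXLAN VNI space:
echo $((2**24))    # 16777216, i.e. ~16M isolated segments
```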
To use Docker's native overlay network, one of the following conditions must be met:
- Docker is running in Swarm mode
- The Docker hosts form a cluster through a key-value store
3. Building a Docker host cluster with a key-value store
The following conditions must be met:
- Every host in the cluster can connect to the key-value store; Docker supports Consul, Etcd, and ZooKeeper
- A Docker daemon is running on every host in the cluster
- Hosts in the cluster must have unique hostnames, because the key-value store uses the hostname to identify cluster members
- The cluster hosts run Linux kernel 3.12+ with VXLAN packet-processing support; otherwise containers cannot communicate
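A quick way to check the kernel prerequisite on each host (a sketch: module tooling may need root, and VXLAN may be built into the kernel rather than shipped as a loadable module):

```shell
# Print the running kernel release; it must be 3.12 or newer:
uname -r
# Try to load the vxlan module; quietly continue if it is built in,
# already loaded, or modprobe is unavailable:
modprobe vxlan 2>/dev/null || true
```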
4. Deployment
4.1 System Environment
# docker -v
Docker version 17.12.0-ce, build c97c6d6
4.2 Installing Consul
# wget https://releases.hashicorp.com/consul/1.0.6/consul_1.0.6_linux_amd64.zip
# unzip consul_1.0.6_linux_amd64.zip
# mv consul /usr/bin/ && chmod a+x /usr/bin/consul
# start the agent
nohup consul agent -server -bootstrap -ui -data-dir /data/docker/consul \
> -client=172.16.200.208 -bind=172.16.200.208 &> /var/log/consul.log &
#-ui : Consul web management UI
#-data-dir : data storage directory
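Once the agent is up, it can be verified through standard Consul interfaces (the address matches the -client/-bind values above):

```shell
# HTTP API: returns the address of the current cluster leader
curl http://172.16.200.208:8500/v1/status/leader
# CLI: list the cluster members known to this agent
consul members
```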
4.3 Configuring the Docker daemon on each node to connect to Consul
The following modification is made on both machines.
docker2
# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store consul://172.16.200.208:8500 --cluster-advertise 172.16.200.208:2375
# systemctl daemon-reload
# systemctl restart docker
docker3
# vim /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store consul://172.16.200.208:8500 --cluster-advertise 172.16.200.223:2375
# systemctl daemon-reload
# systemctl restart docker
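Whether each daemon picked up the cluster options can be confirmed with docker info (docker2 shown; docker3 should advertise 172.16.200.223:2375):

```shell
# Confirm the daemon registered the cluster options (run on each node):
docker info | grep -i cluster
# The output should include lines like:
#   Cluster Store: consul://172.16.200.208:8500
#   Cluster Advertise: 172.16.200.208:2375
```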
4.4 Viewing node information in Consul
http://172.16.200.208:8500
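Besides the web UI, the registration can be checked through Consul's KV HTTP API; with this legacy cluster-store setup the daemons register themselves under the docker/nodes/ prefix (the exact key layout is a libnetwork implementation detail):

```shell
# List the keys the Docker daemons wrote into Consul:
curl -s 'http://172.16.200.208:8500/v1/kv/docker/nodes/?keys'
# Both nodes should appear, e.g.:
# ["docker/nodes/172.16.200.208:2375","docker/nodes/172.16.200.223:2375"]
```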
4.5 Creating the overlay network
# docker network create -d overlay multi_host
53b042104f366cde2cc887e7cc27cde52222a846c1141690c93e1e17d96120c5
# docker network ls
NETWORK ID          NAME                  DRIVER    SCOPE
3f5ff55c93e6        bridge                bridge    local
1e3aff32ba48        composelnmp_default   bridge    local
0d60b988fe59        composetest_default   bridge    local
b4cf6d623265        host                  host      local
53b042104f36        multi_host            overlay   global
-d: specifies the network driver to use
The other machine automatically synchronizes the new network.
Details:
# docker network inspect multi_host
[
    {
        "Name": "multi_host",
        "Id": "53b042104f366cde2cc887e7cc27cde52222a846c1141690c93e1e17d96120c5",
        "Created": "2018-03-07T16:23:38.682906025+08:00",
        "Scope": "global",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {},
        "Labels": {}
    }
]
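The 10.0.0.0/24 subnet in the inspect output was auto-assigned by IPAM; it can also be chosen explicitly at creation time (the subnet and network name below are illustrative):

```shell
# Create an overlay network with an explicit subnet and gateway:
docker network create -d overlay \
  --subnet 10.10.0.0/24 --gateway 10.10.0.1 multi_host2
```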
4.6 Starting containers on the overlay network
Start one container on each of the two machines using the overlay network:
# docker run -it --net=multi_host busybox
The container IPs on the two nodes are:
[root@docker2 ~]# docker run -it --net=multi_host busybox
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:00:02
          inet addr:10.0.0.2  Bcast:10.0.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:0A:00:00:03
          inet addr:10.0.0.3  Bcast:10.0.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1450  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
The two containers can ping each other:
/ # ping 10.0.0.2
PING 10.0.0.2 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=11.137 ms
64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.251 ms
64 bytes from 10.0.0.2: seq=2 ttl=64 time=0.280 ms
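Beyond raw IPs, containers on a user-defined overlay network can also resolve each other by container name through Docker's embedded DNS server (the name web below is illustrative):

```shell
# On docker2: start a named container on the overlay network
docker run -d --net=multi_host --name web busybox sleep 3600
# On docker3: reach it by name instead of IP
docker run -it --rm --net=multi_host busybox ping -c 2 web
```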