Related article: Docker Swarm, one of the Docker "Three Musketeers"
This article extends the Docker Swarm setup with overlay-based networking, so that Docker containers can reach each other across hosts.
There are roughly three ways for containers on different hosts to communicate:
- Port mapping: map the container's service port onto the host, and hosts communicate directly through the mapped ports.
- Put containers on the host's network segment: change Docker's IP allocation range to match the host network, which also requires modifying the host's network configuration.
- Third-party projects: flannel, weave, pipework, etc. These solutions generally use SDN to build an overlay network for container communication.
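As a minimal sketch of the first approach (port mapping), with hypothetical hosts and ports: a service published on host A is reached from host B via host A's IP and the mapped port, never via the container's own IP.

```shell
# Approach 1 sketch (hypothetical addresses, for illustration only).
# On host A (e.g. 192.168.1.10): publish container port 80 on host port 8080.
#   docker run -d -p 8080:80 --name web nginx
# On host B: talk to host A's mapped port, not to the container directly.
#   curl http://192.168.1.10:8080

# Tiny helper (our own, not a Docker feature) that builds the publish flag:
publish_flag() {
  echo "-p ${1}:${2}"
}

publish_flag 8080 80   # prints: -p 8080:80
```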
Before setting up overlay networking, first install Docker and Docker Machine (on Linux):
$ sudo curl -L https://github.com/docker/machine/releases/download/v0.13.0/docker-machine-`uname -s`-`uname -m` -o /usr/local/bin/docker-machine
$ sudo chmod +x /usr/local/bin/docker-machine
Install the whole cluster with a one-click script (the registry-mirror address can be changed to your own mirror):
#!/bin/bash
set -e

# Create the key-value store machine and run Consul on it.
create_kv() {
    echo "Creating kvstore machine."
    docker-machine create -d virtualbox \
        --engine-opt="registry-mirror=https://kvo9moak.mirror.aliyuncs.com" \
        kvstore
    docker $(docker-machine config kvstore) run -d \
        -p "8500:8500" \
        progrium/consul --server -bootstrap-expect 1
}

# Create the Swarm manager, pointing discovery and cluster-store at Consul.
create_master() {
    echo "Creating cluster master."
    kvip=$(docker-machine ip kvstore)
    docker-machine create -d virtualbox \
        --swarm --swarm-master \
        --swarm-discovery="consul://${kvip}:8500" \
        --engine-opt="cluster-store=consul://${kvip}:8500" \
        --engine-opt="cluster-advertise=eth1:2376" \
        --engine-opt="registry-mirror=https://kvo9moak.mirror.aliyuncs.com" \
        swarm-manager
}

# Create two Swarm worker nodes with the same discovery settings.
create_nodes() {
    kvip=$(docker-machine ip kvstore)
    echo "Creating cluster nodes."
    for i in 1 2; do
        docker-machine create -d virtualbox \
            --swarm \
            --swarm-discovery="consul://${kvip}:8500" \
            --engine-opt="cluster-store=consul://${kvip}:8500" \
            --engine-opt="cluster-advertise=eth1:2376" \
            --engine-opt="registry-mirror=https://kvo9moak.mirror.aliyuncs.com" \
            swarm-node${i}
    done
}

# Remove all machines created above.
teardown() {
    docker-machine rm -y kvstore
    docker-machine rm -y swarm-manager
    for i in 1 2; do
        docker-machine rm -y swarm-node${i}
    done
}

case $1 in
    up)
        create_kv
        create_master
        create_nodes
        ;;
    down)
        teardown
        ;;
    *)
        echo "Unknown command..."
        exit 1
        ;;
esac
Run ./cluster.sh up to automatically create four machines:
- A kvstore machine running the Consul service
- A swarm-manager machine running the Swarm manager service
- Two swarm-node machines, each running the Swarm agent and the Docker daemon
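The glue between these machines is the Consul address: both `--swarm-discovery` and `cluster-store` in the script point at the kvstore machine. A sketch of how that URL is derived (the helper name is ours, not part of the script):

```shell
# Mirrors what cluster.sh does: derive the Consul URL from the kvstore IP.
# In the script the IP comes from: kvip=$(docker-machine ip kvstore)
consul_url() {
  echo "consul://${1}:8500"
}

consul_url 192.168.99.100   # prints: consul://192.168.99.100:8500
```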
View the details of the four machines:
$ docker-machine ls
NAME            ACTIVE      DRIVER       STATE     URL                         SWARM                    DOCKER        ERRORS
kvstore         -           virtualbox   Running   tcp://192.168.99.100:2376                            v18.03.1-ce
swarm-manager   * (swarm)   virtualbox   Running   tcp://192.168.99.101:2376   swarm-manager (master)   v18.03.1-ce
swarm-node1     -           virtualbox   Running   tcp://192.168.99.102:2376   swarm-manager            v18.03.1-ce
swarm-node2     -           virtualbox   Running   tcp://192.168.99.103:2376   swarm-manager            v18.03.1-ce
Next, verify that the cluster is set up correctly. Run the following commands on the host (the physical host, not one of the Docker Machine VMs):
$ eval $(docker-machine env --swarm swarm-manager)
$ docker info
Containers: 6
Running: 6
Paused: 0
Stopped: 0
Images: 5
Server Version: swarm/1.2.8
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint, whitelist
Nodes: 3
swarm-manager: 192.168.99.101:2376
└ ID: K6WX:ZYFT:UEHA:KM66:BYHD:ROBF:Z5KG:UHNE:U37V:4KX2:S5SV:YSCA|192.168.99.101:2376
└ Status: Healthy
└ Containers: 2 (2 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: kernelversion=4.9.93-boot2docker, operatingsystem=Boot2Docker 18.03.1-ce (TCL 8.2.1); HEAD : cb77972 - Thu Apr 26 16:40:36 UTC 2018, ostype=linux, provider=virtualbox, storagedriver=aufs
└ UpdatedAt: 2018-05-08T10:20:39Z
└ ServerVersion: 18.03.1-ce
swarm-node1: 192.168.99.102:2376
└ ID: RPRC:AVBX:7CBJ:HUTI:HI3B:RI6B:QI6O:M2UQ:ZT2I:HZ6J:33BL:HY76|192.168.99.102:2376
└ Status: Healthy
└ Containers: 2 (2 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: kernelversion=4.9.93-boot2docker, operatingsystem=Boot2Docker 18.03.1-ce (TCL 8.2.1); HEAD : cb77972 - Thu Apr 26 16:40:36 UTC 2018, ostype=linux, provider=virtualbox, storagedriver=aufs
└ UpdatedAt: 2018-05-08T10:21:09Z
└ ServerVersion: 18.03.1-ce
swarm-node2: 192.168.99.103:2376
└ ID: MKQ2:Y7EO:CKOJ:MGFH:B77S:3EWX:7YJT:2MBQ:CJSN:XENJ:BSKO:RAZP|192.168.99.103:2376
└ Status: Healthy
└ Containers: 2 (2 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 1
└ Reserved Memory: 0 B / 1.021 GiB
└ Labels: kernelversion=4.9.93-boot2docker, operatingsystem=Boot2Docker 18.03.1-ce (TCL 8.2.1); HEAD : cb77972 - Thu Apr 26 16:40:36 UTC 2018, ostype=linux, provider=virtualbox, storagedriver=aufs
└ UpdatedAt: 2018-05-08T10:21:06Z
└ ServerVersion: 18.03.1-ce
Plugins:
Volume:
Network:
Log:
Swarm:
NodeID:
Is Manager: false
Node Address:
Kernel Version: 4.9.93-boot2docker
Operating System: linux
Architecture: amd64
CPUs: 3
Total Memory: 3.063GiB
Name: 85be09a37044
Docker Root Dir:
Debug Mode (client): false
Debug Mode (server): false
Experimental: false
Live Restore Enabled: false
WARNING: No kernel memory limit support
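When scripting health checks, the `docker info` output above can be parsed directly. A sketch that counts healthy nodes, run here against a captured fragment of that output; with a live cluster you would pipe `docker info` instead:

```shell
# Count healthy Swarm nodes by parsing `docker info`-style output.
# Captured fragment of the output above; live equivalent:
#   docker info | grep -c 'Status: Healthy'
info_fragment='swarm-manager: 192.168.99.101:2376
 └ Status: Healthy
swarm-node1: 192.168.99.102:2376
 └ Status: Healthy
swarm-node2: 192.168.99.103:2376
 └ Status: Healthy'

healthy=$(printf '%s\n' "$info_fragment" | grep -c 'Status: Healthy')
echo "$healthy"   # prints: 3
```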
You can see the details of the whole cluster. Next, create the overlay network from the host:
$ docker network create -d overlay net1
d6a8a2229848d40ce446f8f850a0e713a6c88a9b43583cc463f437f306724f28
Then list the networks to see the overlay network we just created:
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
d6a8a2229848 net1 overlay global
9c7f0e793838 swarm-manager/bridge bridge local
93787d9ba7ed swarm-manager/host host local
72fd1e63889e swarm-manager/none null local
c73e00c4c76c swarm-node1/bridge bridge local
95983d8f1ef1 swarm-node1/docker_gwbridge bridge local
a8a569d55cc9 swarm-node1/host host local
e7fa8403b226 swarm-node1/none null local
7f1d219b5c08 swarm-node2/bridge bridge local
e7463ae8c579 swarm-node2/docker_gwbridge bridge local
9a1f0d2bbdf5 swarm-node2/host host local
bea626348d6d swarm-node2/none null local
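Note the SCOPE column: the overlay network is `global` (visible from every node), while each machine's bridge/host/none networks are `local`. A sketch that filters the listing for global networks, run against a captured fragment (with live Docker, `docker network ls --filter driver=overlay` does the same):

```shell
# Pick out global-scope networks from a captured `docker network ls` listing.
# Columns: ID NAME DRIVER SCOPE
listing='d6a8a2229848 net1 overlay global
9c7f0e793838 swarm-manager/bridge bridge local
c73e00c4c76c swarm-node1/bridge bridge local'

globals=$(printf '%s\n' "$listing" | awk '$4 == "global" { print $2 }')
echo "$globals"   # prints: net1
```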
Next, create two containers to test whether they can reach each other over the net1 network (run from the host, where Docker Swarm conveniently schedules them onto the nodes):
$ docker run -d --net=net1 --name=c1 -e constraint:node==swarm-node1 busybox top
dab080b33e76af0e4c71c9365a6e57b2191b7aacd4f715ca11481403eccce12a
$ docker run -d --net=net1 --name=c2 -e constraint:node==swarm-node2 busybox top
583fde42182a7e8f27527d5c55163a32102dba566ebe1f13d1951ac214849c8d
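The `-e constraint:node==…` settings are Swarm (classic) scheduling filters passed as environment variables; they pin each container to a named node and are consumed by the scheduler, not by the container. A tiny helper that builds such a flag (the helper name is ours, for illustration):

```shell
# Build a Swarm classic node-constraint flag for `docker run`.
constraint_flag() {
  echo "-e constraint:node==${1}"
}

constraint_flag swarm-node1   # prints: -e constraint:node==swarm-node1
```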
Check that the newly created containers are running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
583fde42182a busybox "top" 3 seconds ago Up 2 seconds swarm-node2/c2
dab080b33e76 busybox "top" 18 seconds ago Up 18 seconds swarm-node1/c1
Next, inspect the details of the net1 network:
$ docker network inspect net1
[
{
"Name": "net1",
"Id": "d6a8a2229848d40ce446f8f850a0e713a6c88a9b43583cc463f437f306724f28",
"Created": "2018-05-08T09:21:42.408840755Z",
"Scope": "global",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"583fde42182a7e8f27527d5c55163a32102dba566ebe1f13d1951ac214849c8d": {
"Name": "c2",
"EndpointID": "b7fcb0039ab21ff06b36ef9ba008c324fabf24badfe45dfa6a30db6763716962",
"MacAddress": "",
"IPv4Address": "10.0.0.3/24",
"IPv6Address": ""
},
"dab080b33e76af0e4c71c9365a6e57b2191b7aacd4f715ca11481403eccce12a": {
"Name": "c1",
"EndpointID": "8a80a83230edfdd9921357f08130fa19ef0b917bc4426aa49cb8083af9edb7f6",
"MacAddress": "",
"IPv4Address": "10.0.0.2/24",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
As you can see, the two containers we just created, including their IP addresses, appear in the output.
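To pull those addresses out programmatically, `docker network inspect` accepts a Go template via `--format`; as a dependency-free sketch, here we extract them with sed from a captured fragment of the JSON above:

```shell
# Extract container IPv4 addresses from a captured fragment of the
# `docker network inspect net1` JSON shown above. Live equivalent (template):
#   docker network inspect --format \
#     '{{range .Containers}}{{.Name}} {{.IPv4Address}} {{end}}' net1
fragment='"IPv4Address": "10.0.0.3/24",
"IPv4Address": "10.0.0.2/24",'

ips=$(printf '%s\n' "$fragment" | sed -n 's/.*"IPv4Address": "\([0-9.]*\)\/24".*/\1/p')
echo "$ips"   # prints 10.0.0.3 then 10.0.0.2, one per line
```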
Then test whether the two containers can reach each other (pinging by container name works as well as by IP):
$ docker exec c1 ping -c 3 10.0.0.3
PING 10.0.0.3 (10.0.0.3): 56 data bytes
64 bytes from 10.0.0.3: seq=0 ttl=64 time=0.903 ms
64 bytes from 10.0.0.3: seq=1 ttl=64 time=0.668 ms
64 bytes from 10.0.0.3: seq=2 ttl=64 time=0.754 ms
--- 10.0.0.3 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.668/0.775/0.903 ms
$ docker exec c2 ping -c 3 10.0.0.2
PING 10.0.0.2 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.827 ms
64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.702 ms
64 bytes from 10.0.0.2: seq=2 ttl=64 time=0.676 ms
--- 10.0.0.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.676/0.735/0.827 ms
$ docker exec c2 ping -c 3 c1
PING c1 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=1.358 ms
64 bytes from 10.0.0.2: seq=1 ttl=64 time=0.663 ms
64 bytes from 10.0.0.2: seq=2 ttl=64 time=0.761 ms
--- c1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.663/0.927/1.358 ms
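With more containers, the pairwise pings above are worth looping. A sketch that enumerates every ordered pair (the container names are the ones created above; the actual ping line needs the running cluster, so it is left commented):

```shell
# Enumerate ordered container pairs for an overlay connectivity check.
containers="c1 c2"
pairs=""
for src in $containers; do
  for dst in $containers; do
    [ "$src" = "$dst" ] && continue
    pairs="$pairs $src->$dst"
    # Real check (needs the Swarm cluster set up above):
    # docker exec "$src" ping -c 1 "$dst" >/dev/null && echo "ok: $src -> $dst"
  done
done
echo "$pairs"   # prints: c1->c2 c2->c1
```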
Attached is the Docker Swarm architecture diagram.