The content shared today applies not only to blockchain but to many scenarios that require distributed networking (we simply happened to encounter it in a blockchain context), so I hesitated over whether to file it under the Docker series or the blockchain series... As for why some of my posts contain no blockchain-related content: it is not that I do not want to write it, but that such posts fail review and cannot be published.
Back on topic: since our entire HyperLedger Besu blockchain cluster is built on Docker, remote networking between Docker hosts is a problem that must be solved. My research turned up three solutions:
- Use a Consul cluster. Consul communicates over its own protocol; it is enabled by setting the `cluster-store` parameter in Docker's `daemon.json` (binding the local network interface in the configuration). This approach requires deploying an additional highly available Consul cluster and couples it tightly to Docker;
- Use ETCD to form the network. The principle is the same as with Consul, and ETCD (which, like Consul, is written in Go) is generally reported to perform well. However, with my limited experience I could not get it working in a reasonable time, so I eventually gave up on the ETCD implementation;
- Use Docker Swarm to build the network. Swarm mode has been built into Docker since version 1.12.0, and only minor adjustments are needed to get it working;
In summary, I chose Swarm mainly because it is simple, easy to use, and seamlessly integrated with Docker. Considering that more nodes will be connected later and that how they connect is not under our control (they may be cloud servers or machines in a local data center), networking must be easy to set up and reliable. Swarm fully meets these current needs.
1. Principle of Docker Swarm
As the figure above shows, Docker Swarm provides cluster management, node discovery (service discovery), a scheduler (including rolling updates), and other services. Service discovery records the access address of each node, and containers can then reach one another through automatic addressing and load balancing.
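As a quick illustration of the scheduler and the built-in load balancing (this service is a made-up example, not part of the cluster described below), a replicated service can be deployed from any manager node:

```shell
# Hypothetical example: deploy three nginx replicas; the Swarm
# scheduler spreads the tasks across the nodes in the cluster.
docker service create --name web --replicas 3 -p 8080:80 nginx:alpine

# List which node each replica was scheduled on.
docker service ps web

# Port 8080 on ANY node now reaches some replica through the
# ingress routing mesh, i.e. Swarm's built-in load balancing.
curl http://localhost:8080
```

These commands require a running Swarm, so treat them as a sketch of the workflow rather than something to paste verbatim.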
2. Docker Swarm Remote Networking
For remote networking to count as successful, the remote servers must be able to "communicate" with each other through the IPs allocated inside Docker containers (in this example, a successful ping is the goal). I will not spend space here on LB forwarding and addressing inside the Swarm network, nor dig into the details of overlay networks; plenty of material on both exists online. I will only share the steps that produce immediate results...
2.1 Server information
To verify the feasibility of Docker Swarm, this example uses four VMs for simulation:
Machine name | Machine IP | OS | Docker version |
---|---|---|---|
node202 | 192.168.200.202 | CentOS | 18.09.3 |
node203 | 192.168.200.203 | CentOS | 18.06.3-ce |
node204 | 192.168.200.204 | CentOS | 18.06.3-ce |
node205 | 192.168.200.205 | CentOS | 18.06.3-ce |
All four VMs communicate over the LAN via bridged networking. For this verification we do not consider whether public-network IPs can reach each other: if the servers cannot achieve the most basic connectivity, there is no point in considering networking on top of it.
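Before initializing Swarm it is also worth confirming that the ports Swarm depends on are open between the hosts: TCP 2377 for cluster management, TCP and UDP 7946 for node-to-node communication, and UDP 4789 for overlay (VXLAN) traffic. On CentOS with firewalld, the check might look as follows (adapt to whatever firewall you actually run):

```shell
# Confirm basic host-to-host connectivity first.
ping -c 3 192.168.200.203

# Open the ports required by Docker Swarm (needs root).
firewall-cmd --permanent --add-port=2377/tcp   # cluster management
firewall-cmd --permanent --add-port=7946/tcp   # node communication
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp   # overlay (VXLAN) traffic
firewall-cmd --reload
```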
2.2 Create a Swarm network
Here node203 serves as the manager node. After logging in to the server over SSH, execute:
```
docker swarm init --advertise-addr 192.168.200.203
```
Executing the statement above prints the command that the other worker nodes need to run in order to join the cluster:
```
docker swarm join --token SWMTKN-1-5trsmvw8zfd84gxg9a71akao9hi0ro3tus6v2t8c2i0c9qtoi0-268axfn4r939lvj699whwiw7r 192.168.200.203:2377
```
If this command is lost, it can be regenerated on the 203 machine. Note that `join-token worker` prints the join command for worker nodes, while `join-token manager` prints the one for additional managers:

```
docker swarm join-token worker
```
Next, run the printed `docker swarm join --token ...` command on each server that should become a node. Once all have joined, the membership can be checked with `docker node ls` on node203:
```
root@node203:/home/paohe/Documents/blockchain/vault/config# docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
swnvjvsxfeirdlaujbaznxhts     node202    Ready    Active                          18.09.3
zb3nif68me9mvm0go3l2agkk7 *   node203    Ready    Active         Leader           18.06.3-ce
uk65wbkadgg6mwzk9h1yspdw4     node204    Ready    Active                          18.06.3-ce
zx1elhm4oihcqm0bgggt2y283     node205    Ready    Active                          18.06.3-ce
```
After all the servers to be networked have joined, `docker network ls` on node203 shows an overlay network named ingress:
```
root@node203:~# docker network ls
NETWORK ID     NAME                 DRIVER    SCOPE
f5a6b27119ef   bridge               bridge    local
e1902557eda0   docker_gwbridge      bridge    local
c5ad4fc3d434   host                 host      local
u5vjecempood   ingress              overlay   swarm
f96d226b8f69   none                 null      local
75f6834bc0cf   prometheus_default   bridge    local
173d62c10b82   vault_default        bridge    local
```
After this, a custom Swarm network can be created.
2.3 Verify Overlay network availability
To further verify the availability of the Swarm network, another overlay network is created for the test:
```
docker network create \
  -d overlay \
  --subnet=192.18.0.0/24 \
  --gateway=192.18.0.254 \
  --attachable besu_swarm
```
The statement above creates an overlay network named besu_swarm, with the subnet 192.18.0.0/24 and the gateway 192.18.0.254. Running docker network ls on node203 now shows the custom network besu_swarm, and because the overlay network is created on top of Swarm, once node203 has created besu_swarm the network information is synchronized to the other servers as well:
```
root@node203:~# docker network ls
NETWORK ID     NAME                 DRIVER    SCOPE
z2w4r5fo1588   besu_swarm           overlay   swarm
f5a6b27119ef   bridge               bridge    local
e1902557eda0   docker_gwbridge      bridge    local
c5ad4fc3d434   host                 host      local
u5vjecempood   ingress              overlay   swarm
f96d226b8f69   none                 null      local
75f6834bc0cf   prometheus_default   bridge    local
```
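One caveat worth knowing: on worker nodes, an attachable overlay network usually appears in docker network ls only after a container on that node has actually attached to it. The full network definition can always be checked from a manager:

```shell
# On a manager node: show the subnet, gateway and any
# containers currently attached to the overlay network.
docker network inspect besu_swarm
```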
After the network is created, we need to verify that the subnet IPs can reach each other. For this, pull a busybox image and start a container attached to the new network (note the network name must be besu_swarm, matching the one created above):

```
docker pull busybox:latest
docker run -itd --name=busybox1 --network=besu_swarm busybox:latest /bin/sh
```
The IP address of busybox1 can be found with docker network inspect besu_swarm; here it is 192.18.0.1. Following the same method, start busybox2, busybox3 and busybox4 on the other servers and check their IP addresses; the figure below takes busybox2 as an example:
It shows that busybox2's current IP address is 192.18.0.3. Finally, enter each busybox container with `docker exec -it $containerid sh` and run ping to check connectivity. busybox1 and busybox2 are used as examples below:
Example 1: busybox1 pings busybox2
Example 2: busybox2 pings busybox1
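Since the original screenshots are not reproduced here, the following transcript illustrates what a successful check from busybox1 looks like (the timings are invented for illustration):

```shell
# Inside busybox1 (192.18.0.1), ping busybox2 by its overlay IP.
/ # ping -c 2 192.18.0.3
PING 192.18.0.3 (192.18.0.3): 56 data bytes
64 bytes from 192.18.0.3: seq=0 ttl=64 time=0.512 ms
64 bytes from 192.18.0.3: seq=1 ttl=64 time=0.431 ms
```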
As the figures above show, the two containers can ping each other through their subnet IPs; at this point remote networking is complete.