1. Introduction to Swarm
Docker Swarm is an orchestration tool for managing containers across multiple nodes. Docker Compose, by contrast, can only orchestrate containers on a single node. Swarm virtualizes a group of Docker nodes into a single host, so users can manage the entire container cluster as if they were operating on one machine. If you have installed a recent version of Docker, Swarm is already included and needs no separate installation.
The Docker Swarm architecture has two roles: manager and worker node. The manager is the node where the Swarm daemon runs; it provides scheduling, routing, and service discovery, receives cluster-management requests from clients, and then schedules worker nodes to perform the actual container work, such as creating, scaling, and destroying containers. The manager is itself also a node. For cluster high availability, the number of managers is usually an odd number >= 3; the number of worker nodes is unlimited.
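For example, once worker nodes have joined a swarm, extra managers can be created by promoting them from an existing manager. A minimal sketch, assuming two already-joined worker nodes named node1 and node2:
[root@localhost ~]# docker node promote node1 node2
Node node1 promoted to a manager in the swarm.
Node node2 promoted to a manager in the swarm.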
2. Swarm example
2.1 Preparations
We need to prepare three nodes and install Docker Engine on each of them before building the example. Here we use local virtual machines: three CentOS VMs, each with Docker installed (see the earlier article, Docker first experience).
192.168.160.100 (manager)
192.168.160.101 (node1)
192.168.160.102 (node2)
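Before joining anything, make sure the nodes can reach each other on the ports Swarm uses: TCP 2377 for cluster management, TCP and UDP 7946 for node-to-node communication, and UDP 4789 for overlay network traffic. A minimal sketch for CentOS, assuming firewalld is running on each node:
[root@localhost ~]# firewall-cmd --permanent --add-port=2377/tcp                        # cluster management
[root@localhost ~]# firewall-cmd --permanent --add-port=7946/tcp --add-port=7946/udp   # node communication
[root@localhost ~]# firewall-cmd --permanent --add-port=4789/udp                        # overlay (VXLAN) traffic
[root@localhost ~]# firewall-cmd --reload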
2.2 Create a cluster
Before creating a cluster, we run docker node ls to check the node information in the cluster; the feedback is that there is currently no node information and that the current node is not a manager.
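On a host that is not yet part of a swarm, the command fails with output similar to this:
[root@localhost ~]# docker node ls
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.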
I had already created a swarm cluster earlier, so I delete it first.
[root@localhost ~]# docker swarm leave -f    # delete the old swarm
Then, on the manager node, we run the following command to make it a manager and advertise 192.168.160.100 as its communication address:
[root@localhost ~]# docker swarm init --advertise-addr 192.168.160.100
Swarm initialized: current node (upral6mrlz1928ssrqrehjnr8) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-4jg8as3v4whcy9gtabon078ooyiu1fuytuelyb2ipqbcuxa7jk-55dc9derg80k6ovt0jx2kju28 192.168.160.100:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
As shown above, the manager node is now set up, and the output tells us two things:
- Executing docker swarm join --token ... on another node joins it to the swarm cluster as a worker node;
- Executing docker swarm join-token manager prints a further join command; running that command on another node joins it to the swarm cluster as an additional manager.
We are demonstrating a one-manager, two-worker setup here, so we run the first command on node1 and node2:
docker swarm join --token SWMTKN-1-4jg8as3v4whcy9gtabon078ooyiu1fuytuelyb2ipqbcuxa7jk-55dc9derg80k6ovt0jx2kju28 192.168.160.100:2377
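On each worker, a successful join is confirmed with a line like:
This node joined a swarm as a worker.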
In this way, a swarm cluster is built.
2.3 Using the cluster
The manager is the entry point for cluster management. All of the docker commands below are executed on the manager; swarm management commands such as docker node and docker service cannot be executed on worker nodes. Keep this in mind.
View current node information:
[root@localhost ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
i91jji6pecz0pic225aogilh7 localhost.localdomain Ready Active 20.10.21
so1xdgl2iudz0gobsnh60cgca localhost.localdomain Ready Active 20.10.21
upral6mrlz1928ssrqrehjnr8 * localhost.localdomain Ready Active Leader 20.10.21
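For details on any single node (role, availability, resources, engine version), docker node inspect gives a human-readable summary with --pretty; a sketch using the manager's ID from the listing above:
[root@localhost ~]# docker node inspect --pretty upral6mrlz1928ssrqrehjnr8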
Create a private network so that containers on different nodes can communicate with each other:
[root@localhost ~]# docker network create -d overlay niginx_network
0mbficvfzsly02hs5v34iq0p1
[root@localhost ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
df559dd730a7 bridge bridge local
a0e7ced640ef docker_gwbridge bridge local
9df73a6f4d7a host host local
9i5uwy0rfkek ingress overlay swarm
2187dde6db43 mynet bridge local
0mbficvfzsly niginx_network overlay swarm
fa0cdda1cb13 none null local
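The new network can be checked with docker network inspect. Note that an overlay network only appears on a worker node once a task attached to it is actually scheduled there:
[root@localhost ~]# docker network inspect niginx_network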
Deploy a service using the network just created (pull the image in advance with docker pull nginx):
[root@localhost ~]# docker service create --replicas 1 --network niginx_network --name my_nginx -p 80:80 nginx:latest
goaillb240g83fi1ygzjj39h4
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2193c8281810 nginx:latest "/docker-entrypoint.…" 3 minutes ago Up 3 minutes 80/tcp my_nginx.1.s558g27cnz6s4h9492eii03bp
As shown above, this creates a service named my_nginx from the nginx:latest image, with one replica and port 80 published to the outside world.
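Thanks to Swarm's ingress routing mesh, the published port is reachable on every node in the cluster, not only on the node that actually runs the container. A quick check against a node that is not running the container (illustrative output):
[root@localhost ~]# curl -I http://192.168.160.101
HTTP/1.1 200 OK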
View the list of running services:
[root@localhost ~]# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
goaillb240g8 my_nginx replicated 1/1 nginx:latest *:80->80/tcp
Check which node a service is running on:
[root@localhost ~]# docker service ps my_nginx
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
s558g27cnz6s my_nginx.1 nginx:latest hostname-100 Running Running 5 minutes ago
Dynamically scale the number of containers in a service up or down:
[root@localhost ~]# docker service scale my_nginx=4
my_nginx scaled to 4
overall progress: 4 out of 4 tasks
1/4: running [==================================================>]
2/4: running [==================================================>]
3/4: running [==================================================>]
4/4: running [==================================================>]
verify: Service converged
Using the update command is equivalent: docker service update --replicas 3 my_nginx
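After scaling, docker service ls should report the new replica count (illustrative output, reusing the service ID from above):
[root@localhost ~]# docker service ls
ID             NAME       MODE         REPLICAS   IMAGE          PORTS
goaillb240g8   my_nginx   replicated   4/4        nginx:latest   *:80->80/tcp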
Take a node offline so that it does not participate in task assignment:
[root@localhost ~]# docker node update --availability drain hostname-102
hostname-102
[root@localhost ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
upral6mrlz1928ssrqrehjnr8 * hostname-100 Ready Active Leader 20.10.21
so1xdgl2iudz0gobsnh60cgca hostname-101 Ready Active 20.10.21
i91jji6pecz0pic225aogilh7 hostname-102 Ready Drain 20.10.21
It is worth noting that if a node is drained or goes down because of a failure, the containers on it are rescheduled onto other available nodes, so the specified number of replicas keeps running at all times.
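This can be observed with docker service ps: tasks that ran on the drained node are marked Shutdown, and replacement tasks appear on the remaining nodes. An illustrative excerpt (ID column omitted):
[root@localhost ~]# docker service ps my_nginx
NAME            IMAGE          NODE           DESIRED STATE   CURRENT STATE
my_nginx.2      nginx:latest   hostname-101   Running         Running 10 seconds ago
\_ my_nginx.2   nginx:latest   hostname-102   Shutdown        Shutdown 11 seconds ago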
Bring an offline node back online so that it participates in task assignment again:
[root@localhost ~]# docker node update --availability active hostname-102
hostname-102
[root@localhost ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
upral6mrlz1928ssrqrehjnr8 * hostname-100 Ready Active Leader 20.10.21
so1xdgl2iudz0gobsnh60cgca hostname-101 Ready Active 20.10.21
i91jji6pecz0pic225aogilh7 hostname-102 Ready Active 20.10.21
Removing a service causes all of that service's containers in the cluster to be deleted:
[root@localhost ~]# docker service rm my_nginx
my_nginx
[root@localhost ~]# docker service ps my_nginx
no such service: my_nginx
Obtain the join commands for an existing swarm cluster:
[root@localhost ~]# docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join --token SWMTKN-1-4jg8as3v4whcy9gtabon078ooyiu1fuytuelyb2ipqbcuxa7jk-d6c6v0fonjuh07qg88h0dzn33 192.168.160.100:2377
[root@localhost ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-4jg8as3v4whcy9gtabon078ooyiu1fuytuelyb2ipqbcuxa7jk-55dc9derg80k6ovt0jx2kju28 192.168.160.100:2377
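If a join token is ever leaked, it can be invalidated and regenerated; nodes already in the swarm are unaffected, only future joins need the new token:
[root@localhost ~]# docker swarm join-token --rotate worker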
Have a node leave the cluster (run on the node that is leaving):
[root@localhost ~]# docker swarm leave
Node left the swarm.
[root@localhost ~]# docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
upral6mrlz1928ssrqrehjnr8 * hostname-100 Ready Active Leader 20.10.21
so1xdgl2iudz0gobsnh60cgca hostname-101 Down Active 20.10.21
i91jji6pecz0pic225aogilh7 hostname-102 Ready Active 20.10.21
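The departed node remains in the list with STATUS Down; to remove the stale entry, run docker node rm on the manager:
[root@localhost ~]# docker node rm hostname-101
hostname-101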
Delete the swarm cluster:
[root@localhost ~]# docker swarm leave -f
Node left the swarm.
When the last manager node leaves, the swarm cluster is automatically deleted.