Docker (6): Docker Swarm, One of the Docker Three Musketeers

In practice, a single Docker node is rarely enough for a production environment, so building a Docker cluster becomes necessary. But faced with container cluster systems such as Kubernetes, Mesos, and Swarm, how should we choose? Among them, Swarm is native to Docker; it is also the simplest to set up, the easiest to learn, and the most resource-efficient, making it a good fit for small and medium-sized companies.

Introduction to Docker Swarm

Before Docker 1.12, Swarm was a separate project; with the release of Docker 1.12 it was merged into Docker as a subcommand. Currently, Swarm is the only tool provided by the Docker community that natively supports Docker cluster management. It can turn a system composed of multiple Docker hosts into a single virtual Docker host and lets containers form cross-host subnetworks.

Docker Swarm is an orchestration tool that provides clustering and scheduling capabilities for IT operations teams. Users can pool all the Docker Engines in a cluster into a single "virtual engine" and issue commands to one Swarm manager instead of talking to each Docker Engine separately. With flexible scheduling policies, IT teams can make better use of available host resources and keep application containers running efficiently.

Docker Swarm Advantages

High performance at any scale

Scalability is key for enterprise-grade Docker Engine clusters and container scheduling. Companies of any size—whether they have five or thousands of servers—can effectively use Swarm in their environment.
In testing, Swarm has been scaled to 50,000 deployed containers across 1,000 nodes, with sub-second container startup times and no measurable performance penalty.

Flexible container scheduling

Swarm helps IT operations teams optimize performance and resource utilization under limited conditions. Swarm's built-in scheduler supports a variety of filters, including: node labels, affinity, and various container policies such as binpack, spread, random, and more.
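
As a minimal sketch of these filters (the service name web, the node name worker1, and the label disk=ssd are made up for illustration), a node can be labeled and the label then used as a placement constraint:

```shell
# Label a node so the scheduler can filter on it
# (node name "worker1" and label "disk=ssd" are hypothetical).
docker node update --label-add disk=ssd worker1

# Constrain the service's tasks to nodes carrying that label.
docker service create \
  --name web \
  --constraint 'node.labels.disk == ssd' \
  --replicas 2 \
  nginx:alpine
```

These commands require a running Swarm manager, so they are shown here only as a sketch.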

Continuous availability of services

Docker Swarm provides high availability through the Swarm manager: multiple manager nodes can be created, with a failover strategy for when the leader goes down. If the leader fails, another manager node is elected leader until the original returns to normal.
Additionally, if a node fails to join the cluster, Swarm keeps retrying while providing error alerts and logs. When a node fails, Swarm can reschedule its containers onto healthy nodes.
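
A sketch of how additional managers are typically set up for high availability (assuming an already-initialized swarm; the worker1 node name follows this article):

```shell
# Print the join command for adding new managers (run on a manager).
docker swarm join-token manager

# Alternatively, promote an existing worker to a manager:
docker node promote worker1

# An odd number of managers (3 or 5) is recommended so that the
# Raft quorum survives the loss of a single manager.
```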

Compatibility with Docker API and Integration Support
Swarm fully supports the Docker API, which means it provides a seamless experience for users of Docker tools such as the Docker CLI, Compose, Trusted Registry, Hub, and UCP.

Docker Swarm natively supports the core functionality of Dockerized applications, such as multi-host networking and storage volume management. Existing Compose files can easily be deployed to a test server (via docker-compose up) or to a Swarm cluster (via docker stack deploy). Docker Swarm can also pull and run images from the Docker Trusted Registry or Docker Hub.
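
As an illustration, a minimal version-3 Compose file like the following (the service name web and the port mapping are made up) could be deployed to a Swarm cluster with docker stack deploy -c docker-compose.yml demo:

```yaml
version: "3"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 2      # the deploy section is honored by `docker stack deploy`
    ports:
      - "8080:80"
```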

To sum up, Docker Swarm provides a highly available Docker cluster management solution, fully supports the standard Docker API, makes it easy to manage and schedule containers across the cluster, and makes full use of cluster host resources.

**Not all services should be deployed within a Swarm cluster. Databases and other stateful services are not suitable for deployment in Swarm clusters.**

Related concepts

Node

A host running Docker can either initialize a new Swarm cluster or join an existing one, at which point it becomes a node of that Swarm cluster. Nodes are divided into manager nodes and worker nodes.

Manager nodes handle Swarm cluster management; docker swarm commands can only be executed on a manager node (the exception is docker swarm leave, which a worker node runs to exit the cluster). A Swarm cluster can have multiple manager nodes, but only one of them becomes the leader, elected via the Raft consensus protocol.

Worker nodes are the task execution nodes: manager nodes dispatch services to worker nodes for execution. By default, manager nodes also act as worker nodes, though you can configure a service to run only on manager nodes. The following figure shows the relationship between manager nodes and worker nodes in a cluster.
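
For example, to keep a manager from acting as a worker, its availability can be drained (a sketch using the manager1 node name from this article):

```shell
# Stop scheduling service tasks on the manager; existing tasks move away.
docker node update --availability drain manager1

# Restore it as a task-running node later:
docker node update --availability active manager1
```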

Services and Tasks

A task is the smallest scheduling unit in Swarm; currently a task corresponds to a single container.
A service is a collection of tasks, and the service defines the properties of those tasks. There are two service modes:

  • replicated services run a specified number of tasks, distributed across the nodes according to the scheduling policy.
  • global services run exactly one task on every available node.

The two modes are specified by the --mode parameter of docker service create. The following diagram shows the relationship between containers, tasks, and services.
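
A sketch of the two modes (the service names web and agent are hypothetical):

```shell
# Replicated (the default): a fixed number of tasks spread across nodes.
docker service create --name web --mode replicated --replicas 3 nginx:alpine

# Global: exactly one task on every node, useful for monitoring agents.
docker service create --name agent --mode global alpine ping docker.com
```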

Create a Swarm cluster

We know that a Swarm cluster consists of manager nodes and worker nodes. Let's create a minimal Swarm cluster with one manager node and two worker nodes.

Initialize the cluster

List the virtual hosts; there are none yet:

docker-machine ls
NAME   ACTIVE   DRIVER   STATE   URL   SWARM   DOCKER   ERRORS

Create the manager node using the VirtualBox driver:

docker-machine create --driver virtualbox manager1
# enter the manager node
docker-machine ssh manager1

Execute sudo -i to gain root privileges.

We initialize a Swarm cluster on manager1 using docker swarm init.

docker@manager1:~$ docker swarm init --advertise-addr 192.168.99.100
Swarm initialized: current node (j0o7sykkvi86xpc00w71ew5b6) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-47z6jld2o465z30dl7pie2kqe4oyug4fxdtbgkfjqgybsy4esl-8r55lxhxs7ozfil45gedd5b8a 192.168.99.100:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

If your Docker host has multiple network cards with multiple IPs, you must use --advertise-addr to specify the IP.
The node that executes the docker swarm init command automatically becomes the management node.

The command docker info shows the Swarm cluster status:

Containers: 0
Running: 0
Paused: 0
Stopped: 0
  ...snip...
Swarm: active
  NodeID: dxn1zf6l61qsb1josjja83ngz
  Is Manager: true
  Managers: 1
  Nodes: 1
  ...snip...

The command docker node ls shows cluster node information:

docker@manager1:~$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
1ipck4z2uuwf11f4b9mnon2ul *   manager1            Ready               Active              Leader

Exit the virtual host

docker@manager1:~$ exit 

Add worker nodes

The previous step initialized a Swarm cluster with one manager node. In the Docker Machine section, we learned that Docker Machine can create a virtual Docker host in seconds. Let's use it to create two more Docker hosts and join them to the cluster.

Create virtual host worker1

Create host

$ docker-machine create -d virtualbox worker1

Enter the virtual host worker1

$ docker-machine ssh worker1

Join the swarm cluster

docker@worker1:~$ docker swarm join \
    --token SWMTKN-1-47z6jld2o465z30dl7pie2kqe4oyug4fxdtbgkfjqgybsy4esl-8r55lxhxs7ozfil45gedd5b8a \
    192.168.99.100:2377

This node joined a swarm as a worker.  

Exit the virtual host

docker@worker1:~$ exit 

Create virtual host worker2

Create host

$ docker-machine create -d virtualbox worker2

Enter the virtual host worker2

$ docker-machine ssh worker2

Join the swarm cluster

docker@worker2:~$ docker swarm join \
    --token SWMTKN-1-47z6jld2o465z30dl7pie2kqe4oyug4fxdtbgkfjqgybsy4esl-8r55lxhxs7ozfil45gedd5b8a \
    192.168.99.100:2377

This node joined a swarm as a worker. 

Exit the virtual host

docker@worker2:~$ exit 

Both worker nodes have now been added.

View clusters

Enter the management node:

docker-machine ssh manager1

On the host machine, view the virtual hosts:

$ docker-machine ls
NAME       ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
manager1   *        virtualbox   Running   tcp://192.168.99.100:2376           v17.12.1-ce
worker1    -        virtualbox   Running   tcp://192.168.99.101:2376           v17.12.1-ce
worker2    -        virtualbox   Running   tcp://192.168.99.102:2376           v17.12.1-ce

Execute docker node ls on the manager node to query cluster node information:

docker@manager1:~$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
1ipck4z2uuwf11f4b9mnon2ul *   manager1            Ready               Active              Leader
rtcpqgcn2gytnvufwfveukgrv     worker1             Ready               Active
te2e9tr0qzbetjju5gyahg6f7     worker2             Ready               Active

This has created a minimal Swarm cluster with one manager node and two worker nodes.

Deployment service

We use the docker service command to manage services in a Swarm cluster; it can only be run on manager nodes.

New service

Enter the cluster management node:

docker-machine ssh manager1

Optionally pull the image through the Docker China mirror:

docker search alpine 
docker pull registry.docker-cn.com/library/alpine

Now we are running a service called helloworld in the Swarm cluster created in the previous section.

docker@manager1:~$ docker service create --replicas 1 --name helloworld alpine ping ityouknow.com
rwpw7eij4v6h6716jvqvpxbyv
overall progress: 1 out of 1 tasks
1/1: running   [==================================================>]
verify: Service converged

Command Explanation:

  • docker service create creates a service
  • --name sets the service name, here helloworld
  • --replicas sets the number of task instances to start
  • alpine is the image to use; ping ityouknow.com is the command the container runs

Use the command docker service ps rwpw7eij4v6h6716jvqvpxbyv to view the service's progress:

docker@manager1:~$ docker service ps rwpw7eij4v6h6716jvqvpxbyv
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
rgroe3s9qa53        helloworld.1        alpine:latest       worker1            Running             Running about a minute ago

Use docker service ls to see what services are running in the current Swarm cluster.

docker@manager1:~$ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
yzfmyggfky8c        helloworld          replicated          0/1                 alpine:latest

Monitor cluster status

Log in to the management node manager1

docker-machine ssh manager1

Run docker service inspect --pretty <SERVICE-ID> to query the general status of a service, taking the helloworld service as an example:

docker@manager1:~$  docker service inspect --pretty helloworld

ID:             rwpw7eij4v6h6716jvqvpxbyv
Name:           helloworld
Service Mode:   Replicated
 Replicas:      1
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 ...
 Rollback order:    stop-first
ContainerSpec:
 Image:         alpine:latest@sha256:7b848083f93822dd21b0a2f14a110bd99f6efb4b838d499df6d04a49d0debf8b
 Args:          ping ityouknow.com
Resources:
Endpoint Mode:  vip

Run docker service inspect helloworld to query the full service details.

Run docker service ps <SERVICE-ID> to see which node is running the service:

docker@manager1:~$  docker service ps helloworld
NAME                                    IMAGE   NODE     DESIRED STATE  LAST STATE
helloworld.1.8p1vev3fq5zm0mi8g0as41w35  alpine  worker1  Running        Running 3 minutes

View the execution of tasks on worker nodes

docker-machine ssh  worker1

On the node, execute docker ps to view the running containers:

docker@worker1:~$   docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
96bf5b1d8010        alpine:latest       "ping ityouknow.com"   4 minutes ago       Up 4 minutes                            helloworld.1.rgroe3s9qa53lf4u4ky0tzcb8

At this point, we have successfully run a helloworld service in the Swarm cluster; from the command output, we can see that it is running on the worker1 node.

Elastic scaling experiment

Let's run a set of experiments to get a feel for Swarm's powerful dynamic horizontal scaling. First, dynamically adjust the number of service instances.

Adjust the number of instances

Increase or decrease the number of service instances.

Adjust the number of helloworld service instances to 2:

docker service update --replicas 2 helloworld

Check which node is running the service:

docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                 ERROR                              PORTS
rgroe3s9qa53        helloworld.1        alpine:latest       manager1            Running             Running 8 minutes ago
a61nqrmfhyrl        helloworld.2        alpine:latest       worker2             Running             Running 9 seconds ago

Adjust the number of service instances of helloworld to 1

docker service update --replicas 1 helloworld

Check the node operation again:

docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                 ERROR                              PORTS
a61nqrmfhyrl        helloworld.2        alpine:latest       worker2             Running             Running about a minute ago

Adjust the number of service instances of helloworld to 3 again

docker service update --replicas 3 helloworld
helloworld
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged

Check node operation:

docker@manager1:~$ docker service ps helloworld
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE             ERROR                              PORTS
mh7ipjn74o0d        helloworld.1        alpine:latest       worker2             Running             Running 40 seconds ago
1w4p9okvz0xw        helloworld.2        alpine:latest       manager1            Running             Running 2 minutes ago
snqrbnh4k94y        helloworld.3        alpine:latest       worker1             Running             Running 32 seconds ago
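
As an aside, docker service scale is shorthand for docker service update --replicas and can resize several services in one command (the second service name below is hypothetical):

```shell
docker service scale helloworld=3
# Multiple services at once:
# docker service scale helloworld=3 web=5
```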

Delete the cluster service

docker service rm helloworld

Resize the cluster

Dynamically adjust the worker nodes of a Swarm cluster.

Add a node

Create virtual host worker3

$ docker-machine create -d virtualbox worker3

Enter the virtual host worker3

$ docker-machine ssh worker3

Join the swarm cluster

docker@worker3:~$ docker swarm join \
    --token SWMTKN-1-47z6jld2o465z30dl7pie2kqe4oyug4fxdtbgkfjqgybsy4esl-8r55lxhxs7ozfil45gedd5b8a \
    192.168.99.100:2377

This node joined a swarm as a worker. 

Exit the virtual host

docker@worker3:~$ exit 

Execute docker node ls on the manager node to query cluster host information.

Log in to the manager node:

docker-machine ssh  manager1

View cluster nodes

docker@manager1:~$  docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
j0o7sykkvi86xpc00w71ew5b6 *   manager1            Ready               Active              Leader
xwv8aixasqraxwwpox0d0bp2i     worker1             Ready               Active
ij3z1edgj7nsqvl8jgqelrfvy     worker2             Ready               Active
i31yuluyqdboyl6aq8h9nk2t5     worker3             Ready               Active

You can see that worker3 has been added to the cluster.

Exit the Swarm cluster

To remove a manager node from the Swarm cluster, execute the following command on that manager node:

docker swarm leave  

This exits the cluster. If there are other worker nodes in the cluster and you still want the manager to leave, add the force option, as follows:

docker swarm leave --force

Let's test leaving the cluster on worker2. Log in to the worker2 node:

docker-machine ssh  worker2

Execute the leave command:

docker swarm leave 

Check the cluster node status:

docker@manager1:~$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
j0o7sykkvi86xpc00w71ew5b6 *   manager1            Ready               Active              Leader
xwv8aixasqraxwwpox0d0bp2i     worker1             Ready               Active
ij3z1edgj7nsqvl8jgqelrfvy     worker2             Down                Active
i31yuluyqdboyl6aq8h9nk2t5     worker3             Ready               Active

You can see that cluster node worker2 is now marked Down.

The node can also rejoin the cluster:

docker@worker2:~$ docker swarm join \
>     --token SWMTKN-1-47z6jld2o465z30dl7pie2kqe4oyug4fxdtbgkfjqgybsy4esl-8r55lxhxs7ozfil45gedd5b8a \
>     192.168.99.100:2377
This node joined a swarm as a worker.

Check again:

docker@manager1:~$ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
j0o7sykkvi86xpc00w71ew5b6 *   manager1            Ready               Active              Leader
xwv8aixasqraxwwpox0d0bp2i     worker1             Ready               Active
0agpph1vtylm421rhnx555kkc     worker2             Ready               Active
ij3z1edgj7nsqvl8jgqelrfvy     worker2             Down                Active
i31yuluyqdboyl6aq8h9nk2t5     worker3             Ready               Active

You can see that worker2 has rejoined the cluster under a new node ID, while its old entry remains in the Down state.
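
The stale Down entry can be cleaned up on a manager with docker node rm, using the old node ID shown above:

```shell
# Remove the stale entry left by the old worker2 node ID.
docker node rm ij3z1edgj7nsqvl8jgqelrfvy
```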

Rebuild commands

When using VirtualBox for testing, if you want to repeat the experiment, you can delete the experimental node and start again.

# stop the virtual machines
docker-machine stop [arg...]   # one or more virtual machine names

docker-machine stop manager1 worker1 worker2
# remove the virtual machines
docker-machine rm [OPTIONS] [arg...]

docker-machine rm manager1 worker1 worker2

After stopping and deleting the virtual host, you can create it again.

Summary

Studying Swarm gives a strong sense of the appeal of automatic horizontal scaling: when the company's traffic spikes, bringing new instances online takes just a single command. With automatic control driven by business traffic, truly fully automatic dynamic scaling becomes possible.

For example, we can use scripts to monitor business traffic: when traffic reaches a certain level, start a corresponding number of nodes, and when traffic drops, dynamically reduce the number of service instances. This saves the company resources while removing the worry of being overwhelmed by a traffic burst. There is a reason Docker has developed so well: containerization is a key part of DevOps. Containerization technology will keep growing richer and more mature, and intelligent operations can be expected.

References

Getting started with swarm mode
Docker — A detailed introduction to the use of Docker Swarm
Docker — from entry to practice
