Deploying a Docker Swarm cluster

Swarm introduction


Swarm is a relatively simple tool, released by Docker in early December 2014, for managing Docker clusters. It turns a group of Docker hosts into a single, virtual host. Swarm exposes the standard Docker API as its front-end entry point, which means the various Docker clients (the Go docker client, docker_py, docker, etc.) can talk to Swarm directly. Swarm is written almost entirely in Go. Swarm 0.2 was released on April 17; compared with version 0.1, it adds a new scheduling strategy that spreads containers across the available nodes in the cluster, and it supports more Docker commands and cluster drivers.

The Swarm daemon is only a scheduler and router; Swarm does not run containers itself. It simply accepts requests from the Docker client and schedules suitable nodes to run the containers. This means that even if Swarm goes down for some reason, the nodes in the cluster keep running as usual, and when Swarm comes back up it collects the cluster information and rebuilds its state. The following is the Swarm architecture diagram:
(Swarm architecture diagram)

Set up swarm cluster

Lab environment

IP            Service             Remarks
192.168.1.10  Docker (installed)  swarm-manage
192.168.1.20  Docker (installed)  swarm node1
192.168.1.30  Docker (installed)  swarm node2

Experimental steps

Hostname change


To make the experiment easier to follow, change the hostname on each host and write the host entries into the hosts file.

192.168.1.10

[root@localhost ~]# vim /etc/hosts
# Add the following entries
192.168.1.10 swarm-manage
192.168.1.20 node1
192.168.1.30 node2
[root@localhost ~]# hostname swarm-manage
[root@localhost ~]# bash
[root@swarm-manage ~]# 

192.168.1.20

[root@localhost ~]# hostname node1
[root@localhost ~]# bash
[root@node1 ~]# 

192.168.1.30

[root@localhost ~]# hostname node2
[root@localhost ~]# bash
[root@node2 ~]# 
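Note that the hostname command only renames the host for the current session. If the machines might be rebooted during the experiment, the change can be made persistent instead; a minimal sketch, assuming a systemd-based system such as CentOS 7:

[root@node2 ~]# hostnamectl set-hostname node2   # persists across reboots, unlike plain hostname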

Password-free login


192.168.1.10

[root@swarm-manage ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:dVRC5qbjHlvEC/2YwOkMZddtz/tKvi5DHU9YYG9k7Tc root@swarm-manage
The key's randomart image is:
+---[RSA 2048]----+
|           .=.+.+|
|           + + =o|
|          + = .+=|
|         = O  oE=|
|        S B +. +=|
|         = =.=. o|
|          =.= o. |
|         . +oo  .|
|          o  +=o.|
+----[SHA256]-----+
[root@swarm-manage ~]# ssh-copy-id -i root@node1
[root@swarm-manage ~]# ssh-copy-id -i root@node2
[root@swarm-manage ~]# scp /etc/hosts root@node1:/etc/
[root@swarm-manage ~]# scp /etc/hosts root@node2:/etc/
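To confirm that key-based login and the copied hosts file work, a quick check can be run from the manager (an optional verification step, not part of the original procedure):

[root@swarm-manage ~]# ssh root@node1 hostname    # should print node1 without prompting for a password
[root@swarm-manage ~]# ssh root@node2 cat /etc/hosts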

Initialize swarm cluster


By default, when a swarm cluster is initialized, the address pool assigned to the swarm is 10.0.0.0/8. If you want a different range, it can be specified during initialization, as follows:

docker swarm init --default-addr-pool 10.20.0.0/16 --advertise-addr 192.168.1.10 specifies the address pool used by the swarm cluster.

--default-addr-pool-mask-length 26: lengthens the subnet mask of each network carved out of the pool, here to 26 bits on top of the /16 pool.

The option --default-addr-pool 10.10.0.0/16 means that Docker allocates subnets out of that /16 range. If --default-addr-pool-mask-length is not specified, or is explicitly set to 24, this yields 256 /24 networks of the form 10.10.X.0/24.
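Putting the two options together, an initialization with a custom address pool might look like the following; this is only a sketch, and the rest of this walkthrough uses the default pool:

docker swarm init --advertise-addr 192.168.1.10 \
    --default-addr-pool 10.20.0.0/16 \
    --default-addr-pool-mask-length 26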

swarm-manage(192.168.1.10)

Specify the cluster management node as 192.168.1.10

[root@swarm-manage ~]# docker swarm init --advertise-addr 192.168.1.10
Swarm initialized: current node (ocu03ojg1j4nawgkr2mvo7eij) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5xgjcrmf5c6i0fjyc73gz5t9rtg5r4g2zd28ajh81ts36lvdde-bsglhwea30jdkmddqi6ky0s23 192.168.1.10:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
[root@swarm-manage ~]# docker info | grep Swarm
 Swarm: active # indicates that the Swarm cluster is running

After initialization there are two important pieces of output. The first: to add a worker node to the swarm, run the docker swarm join --token ... 192.168.1.10:2377 command shown above on that node.

The second: to add another manager to the swarm, run docker swarm join-token manager on this machine; it prints another token that lets a node join the cluster as a manager.
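If the worker token is misplaced later, it can be printed again at any time on a manager node; both are standard swarm commands, shown here for reference:

[root@swarm-manage ~]# docker swarm join-token worker    # reprints the full join command for workers
[root@swarm-manage ~]# docker swarm join-token manager   # reprints the full join command for managers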

Open the swarm cluster port

[root@swarm-manage ~]# firewall-cmd --add-port=2377/tcp
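The command above only opens the port in the running firewalld configuration. Swarm mode also uses 7946/tcp and 7946/udp for node-to-node communication and 4789/udp for overlay network traffic; if your environment needs them opened permanently, a sketch would be:

[root@swarm-manage ~]# firewall-cmd --permanent --add-port=2377/tcp --add-port=7946/tcp --add-port=7946/udp --add-port=4789/udp
[root@swarm-manage ~]# firewall-cmd --reload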

Join the nodes to the cluster


node1(192.168.1.20)

[root@node1 ~]# docker swarm join --token SWMTKN-1-5xgjcrmf5c6i0fjyc73gz5t9rtg5r4g2zd28ajh81ts36lvdde-bsglhwea30jdkmddqi6ky0s23 192.168.1.10:2377
This node joined a swarm as a worker.

node2(192.168.1.30)

[root@node2 ~]# docker swarm join --token SWMTKN-1-5xgjcrmf5c6i0fjyc73gz5t9rtg5r4g2zd28ajh81ts36lvdde-bsglhwea30jdkmddqi6ky0s23 192.168.1.10:2377
This node joined a swarm as a worker.

swarm-manage(192.168.1.10)

List the cluster nodes; the node that shows Leader under MANAGER STATUS is the cluster's management node.

[root@swarm-manage ~]# docker node ls
ID                            HOSTNAME       STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
c7kao8526ogi7v9vatzwprxr2     node1          Ready     Active                          20.10.2
mz35p4jafi44afxuytt1ab1m0     node2          Ready     Active                          20.10.2
ocu03ojg1j4nawgkr2mvo7eij *   swarm-manage   Ready     Active         Leader           20.10.2

Delete cluster node


docker node rm <node-ID>
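In practice a node is usually drained and made to leave the swarm before it is removed; a sketch of the typical sequence, with node1 used here purely as an example:

[root@swarm-manage ~]# docker node update --availability drain node1   # stop scheduling new tasks on it
[root@node1 ~]# docker swarm leave                                     # run on the node itself
[root@swarm-manage ~]# docker node rm node1                            # now the node can be removed on the manager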

Node promotion


Promote the node2 host to a manager. At first I kept getting an error after promotion: as soon as node2 was promoted, its status went DOWN. Checking the logs showed that port 2377 had not been opened on that host.
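If you hit the same problem, opening port 2377 on node2 (and on any other host that is going to become a manager) resolves it:

[root@node2 ~]# firewall-cmd --add-port=2377/tcp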

[root@swarm-manage ~]# docker node promote node2
Node node2 promoted to a manager in the swarm.

List the cluster nodes again; Reachable under MANAGER STATUS also indicates a manager node.

[root@swarm-manage ~]# docker node ls
ID                            HOSTNAME       STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
c7kao8526ogi7v9vatzwprxr2     node1          Ready     Active                          20.10.2
mz35p4jafi44afxuytt1ab1m0     node2          Ready     Active         Reachable        20.10.2
ocu03ojg1j4nawgkr2mvo7eij *   swarm-manage   Ready     Active         Leader           20.10.2

Node demotion


Demote the node2 host from manager back to worker.

[root@swarm-manage ~]# docker node demote node2
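To confirm the demotion took effect, the node's role can be inspected; it should now report worker:

[root@swarm-manage ~]# docker node inspect node2 --format '{{ .Spec.Role }}'   # should print: worker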

View management node


From any host in the swarm cluster, you can see the IP of the swarm management node:

[root@swarm-manage ~]# docker info
...
  Node Address: 192.168.1.10
  Manager Addresses:
   192.168.1.10:2377
...
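The same information can be pulled out with a format string instead of grepping through docker info; a small convenience, assuming a swarm-mode engine:

[root@swarm-manage ~]# docker info --format '{{ .Swarm.NodeAddr }}'
[root@node1 ~]# docker info --format '{{ json .Swarm.RemoteManagers }}'   # manager list as seen from a worker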

Download the graphical visualization tool image


swarm-manage(192.168.1.10)

After the download is complete, run

[root@swarm-manage ~]# docker pull dockersamples/visualizer
[root@swarm-manage ~]# docker run -itd -p 8888:8080 -e HOST=192.168.1.10 \
-e PORT=8080 --volume /var/run/docker.sock:/var/run/docker.sock \
--name visualizer --restart always dockersamples/visualizer
[root@swarm-manage ~]# firewall-cmd --add-port=8888/tcp
success

Access verification: http://192.168.1.10:8888

In the visualizer, all three nodes in the cluster show as green.

Once the swarm cluster is running, this makes it easy to check which host node a container replica is running on.
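To actually watch replicas spread across the hosts in the visualizer, a test service can be deployed; nginx is used here only as an example image:

[root@swarm-manage ~]# docker service create --name web --replicas 3 -p 80:80 nginx
[root@swarm-manage ~]# docker service ps web   # shows which node each replica was scheduled on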

Origin: blog.csdn.net/weixin_46152207/article/details/113172922