Docker Swarm Cluster Deployment in Practice

Basic concepts:

Swarm introduction:
Swarm is a relatively simple tool that Docker released in early December 2014 for managing Docker clusters; it turns a group of Docker hosts into a single virtual host. Swarm uses the standard Docker API as its front-end access point, which means that every kind of Docker client (dockerclient in Go, docker_py, docker, etc.) can communicate with Swarm directly. Swarm is written almost entirely in Go; version 0.2 added a new scheduling strategy that spreads containers across the available nodes, along with support for more Docker commands and cluster drivers. The Swarm daemon is just a scheduler plus a router: Swarm does not run containers itself, it only accepts requests sent by Docker clients and schedules suitable nodes to run the containers. This means that even if Swarm goes down for some reason, the nodes in the cluster keep running as usual; once Swarm is running again, it re-collects the cluster information.

Characteristics of a Swarm cluster:
The nodes in the cluster can all be managers, but they cannot all be workers.

  • Node: a node (host) in the cluster.
  • manager: the management/supervisor role.
  • worker: the worker role.
  • service: a task; the manager defines it and the workers carry out its commands.

Prepare the environment:

Three hosts (CentOS 7):
Docker version: 1.12 or later (swarm mode is built in from 1.12).

node01:172.16.1.30
node02:172.16.1.31
node03:172.16.1.32

(1) First, set the hostnames (run each command on its own host):

[root@sqm-docker01 ~]# hostnamectl set-hostname node01
[root@sqm-docker02 ~]# hostnamectl set-hostname node02
[root@sqm-docker03 ~]# hostnamectl set-hostname node03

(2) Configure name resolution in /etc/hosts on all three hosts:
[root@node01 ~]# vim /etc/hosts

(3) Set up passwordless SSH login:
// Generate the key pair, pressing Enter at every prompt to accept the defaults:
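The screenshots for this step did not survive; a minimal sketch of the hosts entries and key generation, matching the environment above (the non-interactive ssh-keygen flags are my substitution for pressing Enter at the prompts):

```shell
# Name resolution entries for /etc/hosts (same on all three hosts):
cat >> /etc/hosts <<'EOF'
172.16.1.30 node01
172.16.1.31 node02
172.16.1.32 node03
EOF

# Generate an RSA key pair without prompting (-N '' sets an empty passphrase):
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
```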

// Copy the key to node02 and node03:
[root@node01 ~]# ssh-copy-id  node02
[root@node01 ~]# ssh-copy-id  node03
// With passwordless login in place, copy the hosts name-resolution file to the other two nodes:
[root@node01 ~]# scp /etc/hosts  root@node02:/etc/hosts
[root@node01 ~]# scp /etc/hosts  root@node03:/etc/hosts

Project operations:

1) Initialize the cluster:

Designate the current host as the cluster creator (Leader):
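The screenshot of the command is missing here; based on the command summary at the end of this post, initialization on node01 looks like:

```shell
# Initialize swarm mode on node01, advertising its address to the other
# nodes; the output prints a ready-made `docker swarm join` command:
docker swarm init --advertise-addr 172.16.1.30
```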

2) When the cluster was initialized, the output already suggested the join command (copy it), so next add the node02 and node03 hosts to the cluster:

// Check whether the nodes have joined the cluster:
Note: only a manager has permission to view this.
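The join screenshots are missing; the commands follow the pattern printed by `docker swarm init` (the token below is a placeholder for the one your init output printed):

```shell
# On node02 and node03, paste the join command from the init output:
docker swarm join --token <WORKER-TOKEN> 172.16.1.30:2377

# Back on the manager (node01), list the cluster nodes:
docker node ls
```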

#### If you need to add other nodes to the cluster with a specific role, but you forgot the join command that was generated when the cluster was initialized, you can run the following commands to print it again:
Note: only a manager has permission to view it.

# Print the command for joining the cluster as a worker:
[root@node01 ~]# docker swarm join-token worker

# Print the command for joining the cluster as a manager:
[root@node01 ~]# docker swarm join-token manager

(3) Configure the web UI:

# Pull the image:
Here a local image archive is used, so it is loaded directly:
[root@node01 ~]# docker load --input myvisualizer.tar

# Run the service:
[root@node01 ~]# docker run -d -p 8000:8080 -e HOST=172.16.1.30 -e PORT=8080 -v /var/run/docker.sock:/var/run/docker.sock  --name visualizer  dockersamples/visualizer
  • HOST specifies the address of the Docker host.

## Access the web interface:
URL: http://172.16.1.30:8000/

We can see the three nodes of the cluster.

(4) Create the swarm cluster network (overlay network)

If you remember the earlier blog post on building overlay networks, you had to deploy consul (a data store) before you could create one; in a swarm cluster environment, however, the equivalent service is built in by default, so you can create the overlay network directly.

## Create the overlay network:
[root@node01 ~]# docker network  create -d overlay --attachable  docker

Note: when creating the overlay network, if you do not add --attachable, standalone containers cannot be attached to the network.

## Run one container on node01 and one on node02, and test that they can communicate:
[root@node01 ~]# docker run -itd --name test1 --network docker busybox
[root@node02 ~]# docker run -itd --name test2 --network docker busybox
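The screenshot of the connectivity test is missing; assuming the two containers above are running, a quick check could be:

```shell
# From test1 on node01, ping test2 by container name across the overlay network:
docker exec test1 ping -c 3 test2
```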


(5) Set up a private registry (shared images)

The purpose of a private registry is to let everyone in the cluster share its images, which makes deploying services very convenient; and for security reasons, most companies deploy their own private registry.

// Deploy with the official registry image:
[root@node01 ~]# docker run -d --name registry --restart=always -p 5000:5000 registry:latest
// Modify the docker configuration file:
[root@node01 ~]# vim /usr/lib/systemd/system/docker.service 

The modification is as follows:
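The screenshot of the edited unit file is missing; on Docker releases of that era the usual change is to add an --insecure-registry flag to the ExecStart line, so that plain-HTTP access to the registry is allowed (a sketch, the rest of the original ExecStart line may differ):

```ini
# /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --insecure-registry 172.16.1.30:5000
```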

// Reload systemd and restart the docker service:
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl restart docker.service
// Copy the configuration file directly to node02 and node03:
[root@node01 ~]# scp /usr/lib/systemd/system/docker.service  node02:/usr/lib/systemd/system/docker.service

[root@node01 ~]# scp /usr/lib/systemd/system/docker.service  node03:/usr/lib/systemd/system/docker.service

After copying, reload the systemd daemon and restart the docker service on node02 and node03 as well.

#### After deploying the private registry, it is best to test it:

On node01, push the apache (httpd) image to the private registry:
[root@node01 ~]# docker tag httpd:latest  172.16.1.30:5000/myhttpd
[root@node01 ~]# docker push 172.16.1.30:5000/myhttpd
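You can also verify the push through the registry's HTTP API (a quick check, assuming the registry container above is running):

```shell
# List the repositories the registry knows about; after the push above,
# the response should include "myhttpd":
curl http://172.16.1.30:5000/v2/_catalog
```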

Pull it on the other nodes:
[root@node02 ~]# docker pull 172.16.1.30:5000/myhttpd
[root@node03 ~]# docker pull 172.16.1.30:5000/myhttpd

(6) Configure services on the docker swarm cluster

## Publish a service with a replica count of 2:
[root@node01 ~]# docker service create  --replicas 2 --name web01 -p 80:80 172.16.1.30:5000/myhttpd:latest 

--replicas: the number of replicas of the service; e.g. --replicas 1 means only one container is needed.

// View the services:
[root@node01 ~]# docker service  ls

// Check which cluster nodes the service is running on:
The scheduler considers each node's performance, configuration, and current workload when deciding which node gets a task, in order to balance the load.
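The screenshot is missing; per the command summary at the end of this post, the placement can be checked with:

```shell
# Show on which nodes the tasks of service web01 are running:
docker service ps web01
```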

Besides viewing service information on the command line, you can also check it on the web page.

Publish a second service:
[root@node01 ~]# docker service create --replicas 4 --name web02 -p 80 172.16.1.30:5000/myhttpd #the host port is generated randomly
You can see that the replicas are still distributed evenly across the nodes.

(7) Scaling a service up and down:

The reasons for scaling are clear. You scale down when a node is under too much pressure, or its hardware cannot support the services it is running, and you need fewer containers to keep things running stably. And scaling up? When a node's server is sitting idle, giving it a few more services to run does no harm.

1) Scale up:
[root@node01 ~]# docker service  scale web01=6

Check on the web page.

2) Scale down:
[root@node01 ~]# docker service  scale web02=1

Check on the web page.

(8) Configure the manager not to take on work:

In a cluster, the best arrangement is for the manager node not to run workloads and to let the worker nodes do the work, much as in a company: the boss generally does not do the day-to-day work himself, the employees do.

## Configure the manager node not to take on work:
[root@node01 ~]# docker node update  --availability drain  node01 

After draining, the manager no longer takes on work, so the running containers have been rescheduled onto node02 and node03.

(9) Pinning service replicas to a specific node:

What if you need all replicas of a published service to run on the same server? How can that be done?
Method 1:

1) Define a label:
[root@node01 ~]# docker node update  --label-add disk=max node03  ## attach the label to node03
2) Publish the service with a placement constraint:
[root@node01 ~]# docker service  create  --name test --replicas 5 -p 80 --constraint 'node.labels.disk==max'  172.16.1.30:5000/myhttpd

Check whether the placement took effect.

Method 2:
Specify the node's hostname directly.
[root@node01 ~]# docker service create --replicas 5 --name testname --constraint 'node.hostname==node02' -p 80 172.16.1.30:5000/myhttpd

Updating & rolling back a service:

1. Updating a service:

Update the service above (test) to version 2.0.

[root@node01 ~]# docker tag 172.16.1.30:5000/myhttpd:latest 172.16.1.30:5000/myhttpd:v2.0
[root@node01 ~]# docker push 172.16.1.30:5000/myhttpd:v2.0 

[root@node01 ~]# docker service update --image 172.16.1.30:5000/myhttpd:v2.0 test
Note: when a service is upgraded, its previous version is still kept.
During the update, the replicas are by default updated one at a time: only after one replica has finished updating does the next one start.

2. Customizing the update:

Update the service above to version 3.0.

[root@node01 ~]# docker tag 172.16.1.30:5000/myhttpd:v2.0 172.16.1.30:5000/myhttpd:v3.0
[root@node01 ~]# docker push 172.16.1.30:5000/myhttpd:v3.0 

[root@node01 ~]# docker service update --image 172.16.1.30:5000/myhttpd:v3.0 --update-parallelism 2 --update-delay 1m test

Parameter explanation:
--update-parallelism 2: the number of replicas to update in parallel (at the same time).
--update-delay 1m: the interval between rounds of the rolling update (units: s seconds, m minutes, h hours, d days, w weeks).

3. Rolling back a service:

When we perform a rollback, by default we return to the version from the previous operation; rollback only switches between the two most recent versions and cannot be applied repeatedly to go further back.

[root@node01 ~]# docker service update --rollback test
Log into the web page for a more intuitive view.

Rollback succeeded.

## Test rolling back again: which version will it return to?

[root@node01 ~]# docker service  update --rollback test

You can see that it returns to the version that was running before the first rollback, which confirms that consecutive rollbacks further into history are not possible.

Summary of docker swarm cluster commands:

// Initialize a cluster: docker swarm init --advertise-addr <local IP>
// View node information: docker node ls
// Print the command for joining the cluster as a worker: docker swarm join-token worker
// Print the command for joining the cluster as a manager: docker swarm join-token manager
// Promote a node to manager: docker node promote node2
// Demote a node to worker: docker node demote node2
// Leave the cluster: docker swarm leave
// Delete a node (it can only be deleted after it has left the cluster): docker node rm node2

// Force-leave the cluster (a manager must leave with force): docker swarm leave -f

// View services: docker service ls
// Check which nodes a service runs on (placement is decided by the scheduler): docker service ps <service name>

// Publish a service:
docker service create --replicas 2 --name test -p 80 httpd:latest
// Delete a service (and its containers): docker service rm <service name>
or delete all services: docker service ls -q | xargs docker service rm

// Scale a service up: docker service scale <service name>=2
// Scale it down: docker service scale <service name>=1

// Set a node to take on work: docker node update --availability active <node name>
// Suspend a node temporarily (pause): docker node update --availability pause <node name>
// Set a node not to take on work (drain): docker node update --availability drain <node name>
// Update a service: docker service update --image 172.16.1.30:5000/my_nginx:3.0 <service name>
// Custom update:
docker service update --image 172.16.1.30:5000/my_nginx:4.0 --update-parallelism 2 --update-delay 1m test2

// Roll back a service: docker service update --rollback test2


-------- End of this article. Thanks for reading! --------

Origin blog.51cto.com/13972012/2448332