Consul + registrator: real-time service discovery for Docker

Consul is a tool for service discovery and configuration. Consul is distributed, highly available, and highly scalable.

Consul provides the following key features:

  • Service Discovery: clients of Consul can register a service, such as api or mysql, and other clients can use Consul to discover the providers of a given service via DNS or HTTP. Applications can easily find the services they depend on.
  • Health Checking: Consul clients can provide any number of health checks, associated either with a given service (example: the webserver returns a 200 OK status code) or with the local node (example: memory utilization is above 90%). Operators can use this information to monitor cluster health, and the service discovery components use it to route traffic away from unhealthy hosts.
  • Key/Value Store: applications can use Consul's hierarchical Key/Value store for whatever they need: dynamic configuration, feature flagging, coordination, leader election, and so on. Its simple HTTP API makes it easy to use (see the curl sketch after this list).
  • Multi-Datacenter: Consul supports multiple data centers out of the box, which means users do not need to worry about building an additional layer of abstraction to expand to multiple regions.
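As a quick taste of the Key/Value store just mentioned, its HTTP API can be exercised with curl once an agent is running (as deployed in section 2 below); a minimal sketch, with a made-up key name:

[root@docker01 ~]# curl -X PUT -d 'blue' http://127.0.0.1:8500/v1/kv/app/theme    #write a key (hypothetical key name, for illustration only)
true
[root@docker01 ~]# curl http://127.0.0.1:8500/v1/kv/app/theme?raw    #read the raw value back
blue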

This post will not go into more detail about Consul itself; to learn more about its features, visit Consul's official website.

Outline of this post:
1. Prepare the environment
2. Deploy the consul service on docker01 from the binary package
3. Run the consul service in containers on docker02 and docker03
4. Run the registrator service in containers on docker02 and docker03
5. Deploy the Nginx service on docker01 to provide reverse proxying
6. Install the consul-template tool on docker01 and write the template
7. Verify the real-time service discovery feature

1. Prepare the environment

Three Docker hosts are used (addresses taken from the commands below):

  • docker01: 192.168.20.6 — consul server (binary deployment), Nginx reverse proxy, consul-template
  • docker02: 192.168.20.7 — consul client (container), registrator, web containers
  • docker03: 192.168.20.8 — consul client (container), registrator, web containers

Its working diagram is as follows:
[Figure: working diagram of the Consul + registrator + consul-template deployment]

The rough flow of the diagram above is as follows:
1. Deploy the consul service on host docker01 from the binary package and run it in the background; it acts as the leader.
2. Run the consul service in containers on docker02 and docker03 and join them to docker01's consul cluster.
3. Run the registrator container in the background on docker02 and docker03 so that it automatically discovers the services provided by docker containers.
4. Deploy Nginx on docker01 to provide reverse proxying; on docker02 and docker03, run two web containers each based on the Nginx image, serving different page files, to make the result easy to test.
5. Install the consul-template command on docker01 to write the collected information (the container information gathered by registrator) into the template, and ultimately into Nginx's configuration file.
6. At this point, a client accessing the Nginx reverse proxy server (docker01) receives the page files served by the Nginx containers running on docker02 and docker03.
Note: registrator automatically discovers the services provided by docker containers and registers them with a backend service registry (data center). It is mainly used to collect information about the services running in containers and send it to consul. Besides consul, such registries include etcd, zookeeper, and others.

Before starting, download the source packages required by the configuration in this post.

2. Deploy the consul service on docker01 from the binary package

[root@docker01 ~]# rz          #upload the provided archive
[root@docker01 ~]# unzip consul_1.5.1_linux_amd64.zip    #unpack; extraction yields a single command binary
[root@docker01 ~]# mv consul /usr/local/bin/    #move it into the command search path
[root@docker01 ~]# chmod +x /usr/local/bin/consul    #grant it execute permission
[root@docker01 ~]# nohup consul agent -server -bootstrap -ui -data-dir=/var/lib/consul-data -bind=192.168.20.6 -client=0.0.0.0 -node=master &
[1] 8330
[root@docker01 ~]# nohup: ignoring input and appending output to 'nohup.out'
#after the command runs, this message appears and occupies the terminal; just press Enter
#the command also creates a file named "nohup.out" in the current directory, which stores the consul service's run log
#consul is now running in the background and its PID was returned; it can be checked with "jobs -l"
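Incidentally, nohup is used above for simplicity. On a long-lived host you would more likely run the agent under systemd; a minimal unit sketch (the unit file path and name are my own choice, not part of the original deployment):

[root@docker01 ~]# cat /etc/systemd/system/consul.service
[Unit]
Description=Consul agent
After=network-online.target

[Service]
ExecStart=/usr/local/bin/consul agent -server -bootstrap -ui -data-dir=/var/lib/consul-data -bind=192.168.20.6 -client=0.0.0.0 -node=master
Restart=on-failure

[Install]
WantedBy=multi-user.target
[root@docker01 ~]# systemctl daemon-reload && systemctl enable --now consul    #register and start the service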

The parameters used in the consul agent command above are explained as follows:

  • -server: run the agent in server mode;
  • -bootstrap: generally used when there is a single server node, so that it elects itself leader;
  • -ui: enable the built-in web UI;
  • -bind: the IP address the service binds to (this host's own IP);
  • -client: the address clients may connect from (usually any address, as here);
  • -node: the name used for communication inside the cluster; defaults to the hostname.
    The ports that are opened serve the following purposes:
  • 8300: server RPC between cluster nodes;
  • 8301: Serf LAN, communication inside the cluster;
  • 8302: Serf WAN, communication across data centers;
  • 8500: HTTP API and web UI;
  • 8600: DNS.
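To confirm these ports are actually open after the agent starts, a quick check with ss (part of iproute2 on most distributions) might look like this:

[root@docker01 ~]# ss -lnut | grep -E '8300|8301|8302|8500|8600'
#expect tcp/udp sockets on 8300/8301/8302 bound to 192.168.20.6, and on 8500/8600 bound to 0.0.0.0 (because of -client=0.0.0.0)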

Two additional query commands:

[root@docker01 ~]# consul info    #shows the cluster's leader and version information
#e.g.: leader_addr = 192.168.20.6:8300
[root@docker01 ~]# consul members          #view cluster membership

At this point, a client can access port 8500 on docker01 to verify; the following page appears:

[Screenshot: the Consul web UI on port 8500]

3. Run the consul service in containers on docker02 and docker03

###################  docker02 server configuration  #####################
[root@docker02 ~]# docker run -d --name consul -p 8301:8301 -p 8301:8301/udp -p 8500:8500 -p 8600:8600 -p 8600:8600/udp --restart=always progrium/consul -join 192.168.20.6 -advertise 192.168.20.7 -client 0.0.0.0 -node=node01
#in the command above, "-join" specifies the leader's IP address (that is, docker01); "-advertise" specifies this host's own IP address
###################  docker03 server configuration  #####################
[root@docker03 ~]# docker run -d --name consul -p 8301:8301 -p 8301:8301/udp -p 8500:8500 -p 8600:8600 -p 8600:8600/udp --restart=always progrium/consul -join 192.168.20.6 -advertise 192.168.20.8 -client 0.0.0.0 -node=node02
#similar to the command run on docker02, except with this host's own IP and a different node name
#note: node names must be unique within a consul cluster.

Note: the consul service on host docker01 could also have been deployed in a container; the binary was used there simply to demonstrate the different deployment methods.

At this point, running the "consul members" command on docker01 shows the information for docker02 and docker03, as follows:

[root@docker01 ~]# consul members        #run this command
Node    Address            Status  Type    Build  Protocol  DC   Segment
master  192.168.20.6:8301  alive   server  1.5.1  2         dc1  <all>
node01  192.168.20.7:8301  alive   client  0.5.2  2         dc1  <default>
node02  192.168.20.8:8301  alive   client  0.5.2  2         dc1  <default>
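The same membership information is also available over the HTTP API from any node; a sketch (the JSON output is abridged here):

[root@docker02 ~]# curl -s 192.168.20.6:8500/v1/catalog/nodes
#returns a JSON array with entries for master, node01 and node02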

A client accessing port 8500 on 192.168.20.6 can also, through the following operations, see all the docker-container-related ports on the docker02 and docker03 hosts:

[Screenshots: the Consul web UI showing the ports registered for the docker02/docker03 containers]

4. Run the registrator service in containers on docker02 and docker03

#############  host docker02 configuration  #############
[root@docker02 ~]# docker run -d --name registrator -v /var/run/docker.sock:/tmp/docker.sock --restart=always gliderlabs/registrator consul://192.168.20.7:8500
#the command above sends the collected container information to this host's port 8500 for display
#############  host docker03 configuration  #############
[root@docker03 ~]# docker run -d --name registrator -v /var/run/docker.sock:/tmp/docker.sock --restart=always gliderlabs/registrator consul://192.168.20.8:8500
#same as on docker02: send the collected container information to this host's port 8500 for display
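To verify that registrator is really registering what it finds, the service catalog can be queried over consul's HTTP API; a sketch (the exact output depends on which containers are running):

[root@docker02 ~]# curl -s 192.168.20.7:8500/v1/catalog/services
#returns a JSON object of registered services; once the Nginx containers from section 7 are started, an "nginx" entry appears here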

5. Deploy the Nginx service on host docker01 to provide reverse proxying

Deploy the Nginx service (no comments here; to optimize the Nginx service, see the post: Nginx installation and deep optimization):

[root@docker01 ~]# tar zxf nginx-1.14.0.tar.gz -C /usr/src
[root@docker01 ~]# useradd -M -s /sbin/nologin www
[root@docker01 ~]# cd /usr/src/nginx-1.14.0/
[root@docker01 nginx-1.14.0]# ./configure --prefix=/usr/local/nginx --user=www --group=www --with-http_stub_status_module --with-http_realip_module --with-pcre --with-http_ssl_module && make && make install
[root@docker01 nginx-1.14.0]# ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/
[root@docker01 nginx-1.14.0]# nginx
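A quick sanity check that the freshly built Nginx is answering (headers only):

[root@docker01 nginx-1.14.0]# curl -I 127.0.0.1
#an "HTTP/1.1 200 OK" response with a "Server: nginx/1.14.0" header means the service is up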

6. Install the consul-template tool on docker01 and write the template

The role of consul-template: write the collected information (the container information gathered by registrator) into the template, and ultimately into Nginx's configuration file.

1. Obtain the consul-template command tool (to install a newer version, go to the consul-template releases page and download the latest one):

[root@docker01 ~]# rz           #upload the provided package
[root@docker01 ~]# unzip consul-template_0.19.5_linux_amd64.zip     #unpack
[root@docker01 ~]# mv consul-template /usr/local/bin/   #move it into the command search path
[root@docker01 ~]# chmod +x /usr/local/bin/consul-template    #grant execute permission
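Verify that the tool is on the command search path:

[root@docker01 ~]# consul-template -v    #prints the version; v0.19.5 for the package used here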

2. In the Nginx installation directory, write the template used by the consul-template tool, and configure the Nginx reverse proxy:

[root@docker01 ~]# cd /usr/local/nginx/
[root@docker01 nginx]# mkdir consul
[root@docker01 nginx]# cd consul/
[root@docker01 consul]# vim nginx.ctmpl    #create a new template file
upstream http_backend {
  {{range service "nginx"}}            #the service name "nginx" here comes from the docker image name, not the container name
  server {{.Address}}:{{.Port}};
  {{ end }}
}
#the block above uses Go template syntax; its purpose is to collect the IP addresses and ports of the Nginx-related containers
#below, the reverse proxy is defined
server {
  listen 8000;               #the listen port can be anything, as long as it does not conflict
  server_name localhost;
  location / {
  proxy_pass http://http_backend;
  }
}
#save and exit when finished editing
[root@docker01 consul]# nohup consul-template -consul-addr 192.168.20.6:8500 -template "/usr/local/nginx/consul/nginx.ctmpl:/usr/local/nginx/consul/vhost.conf:/usr/local/sbin/nginx -s reload" &
#this generates a vhost.conf file from the collected information
#the service must run in the background for real-time discovery and updates to work
[root@docker01 consul]# vim ../conf/nginx.conf        #include the generated vhost.conf file from the main configuration file
include /usr/local/nginx/consul/*.conf;
}             #write the "include" directive just above the closing brace at the end of the configuration file to pull in vhost.conf
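If you want to inspect the rendered result before letting it touch Nginx, consul-template (in the 0.19.x series used here) supports a dry run: with -dry it prints the rendered template to stdout instead of writing the destination file, and -once makes it exit after a single pass. A sketch:

[root@docker01 consul]# consul-template -consul-addr 192.168.20.6:8500 -template "/usr/local/nginx/consul/nginx.ctmpl:/usr/local/nginx/consul/vhost.conf" -dry -once
#prints the upstream/server blocks that would be written to vhost.conf, then exits; no file is written and Nginx is not reloaded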

7. Verify the real-time service discovery feature

With this configuration in place, any Nginx-related container started on docker02 or docker03 in background mode (with the "-d" option) will be added to the reverse proxy for scheduling, and once a container goes down unexpectedly it is automatically removed from the reverse proxy configuration file.

Now run two Nginx containers each on docker02 and docker03, naming the containers web01, web02, and so on, with page files reading "this is web01 test", "this is web02 test", and so on.

The page files are made different so that, when a client accesses the proxy, it is easy to tell which container answered.

Since the configuration process is much the same for each, I will write out the procedure for running one Nginx container; the others follow the same pattern (see the sketch after the example).

A configuration example follows (run web01 and modify its home page file):

[root@docker02 ~]# docker run -d -P --name web01 nginx
[root@docker02 ~]# docker exec -it web01 /bin/bash
root@ff910228a2b2:/# echo "this is a web01 test." > /usr/share/nginx/html/index.html 
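The remaining three containers follow the same pattern; a sketch, assuming the naming used in this post (web02 on docker02, web03 and web04 on docker03):

[root@docker02 ~]# docker run -d -P --name web02 nginx
[root@docker02 ~]# docker exec web02 /bin/bash -c 'echo "this is a web02 test." > /usr/share/nginx/html/index.html'
[root@docker03 ~]# docker run -d -P --name web03 nginx
[root@docker03 ~]# docker exec web03 /bin/bash -c 'echo "this is a web03 test." > /usr/share/nginx/html/index.html'
[root@docker03 ~]# docker run -d -P --name web04 nginx
[root@docker03 ~]# docker exec web04 /bin/bash -c 'echo "this is a web04 test." > /usr/share/nginx/html/index.html'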

Once docker02 and docker03 are running the four Nginx containers (they must run in the background, that is, with the "-d" option), accessing port 8000 on docker01 cycles through the page files served by the four containers, as follows:

[root@docker01 consul]# curl 192.168.20.6:8000
this is a web01 test.
[root@docker01 consul]# curl 192.168.20.6:8000
this is a web02 test.
[root@docker01 consul]# curl 192.168.20.6:8000
this is a web03 test.
[root@docker01 consul]# curl 192.168.20.6:8000
this is a web04 test.
[root@docker01 consul]# curl 192.168.20.6:8000
this is a web01 test.
[root@docker01 consul]# curl 192.168.20.6:8000
this is a web02 test.
#also, viewing the following file shows the generated configuration
[root@docker01 consul]# pwd
/usr/local/nginx/consul
[root@docker01 consul]# cat vhost.conf     #the servers in the web pool below were generated automatically from the template
upstream http_backend {

  server 192.168.20.7:32768;

  server 192.168.20.7:32769;

  server 192.168.20.8:32768;

  server 192.168.20.8:32769;

}

server {
  listen 8000;
  server_name localhost;
  location / {
  proxy_pass http://http_backend;
  }
}
#since consul-template runs in the background, it dynamically rewrites the file above whenever it detects a container change
#and reloads the Nginx service so the change takes effect

If you now delete all the Nginx containers on docker02 and docker03 except web01 and then access port 8000 on the Nginx proxy server again, only web01's page is ever returned, and viewing the vhost.conf file shows that the server addresses and ports added earlier are gone (the Nginx containers can either be removed or simply stopped):
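For example, a sketch of taking the extra containers down (stopping is enough; removal works just as well):

[root@docker02 ~]# docker stop web02
[root@docker03 ~]# docker stop web03 web04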

[root@docker01 consul]# cat vhost.conf         #the file now contains only the web01 container's IP and port information
upstream http_backend {

  server 192.168.20.7:32768;

}

server {
  listen 8000;
  server_name localhost;
  location / {
  proxy_pass http://http_backend;
  }
}
#repeated accesses only ever reach web01's page:
[root@docker01 consul]# curl 192.168.20.6:8000
this is a web01 test.
[root@docker01 consul]# curl 192.168.20.6:8000
this is a web01 test.
[root@docker01 consul]# curl 192.168.20.6:8000
this is a web01 test.
[root@docker01 consul]# curl 192.168.20.6:8000
this is a web01 test.

At this point, real-time service discovery with consul + registrator + docker is done!

-------- End of this article; thanks for reading --------

Origin: blog.51cto.com/14154700/2446102