Docker consul container service update and discovery

Table of contents

1. Introduction to Consul

  1. What is service registration and discovery?

  2. What is Consul?

  3. Consul architecture

2. Deploy the Consul server (192.168.88.10)

  1. Create the Consul service

  2. View cluster information

  3. Obtain cluster information through the HTTP API

3. Registrator server (192.168.88.60)

  1. Install Gliderlabs/Registrator

  2. Test whether the service discovery function works

  3. Verify that the httpd and nginx services are registered in Consul

4. consul-template

  1. Prepare the nginx template file

  2. Install nginx with yum

  3. Configure and start consul-template

  4. Access template-nginx

  5. Add an nginx container node

    5.1 Add an nginx container node to test the service discovery and configuration update functions

    5.2 View the contents of the /etc/nginx/conf.d/my.conf file

    5.3 Check the logs of the three nginx containers and confirm that requests are polled to each container node

5. Consul multi-node


1. Introduction to Consul

1. What is service registration and discovery?

Service registration and discovery is an indispensable component of a microservice architecture. In the beginning, every service ran on a single node, which offered no high availability and took no account of the load a service had to bear; calls between services were simple interface accesses. When distributed, multi-node architectures appeared, the first solution was to put a load balancer in front of the services. With this approach the front end has to know the network locations of all back-end services and keep them in its configuration file, which raises several problems:

  • To call back-end services A1…An, you have to configure the network locations of all N services, which is tedious.
  • Whenever the network location of a back-end service changes, the configuration of every caller has to change as well.

Service registration and discovery solves both problems. Each back-end service A1…An registers its current network location with the service discovery module, which records it in K/V form: K is usually the service name and V is IP:PORT. The discovery module performs periodic health checks, polling the back-end services to see whether they are reachable. When the front end wants to call a back-end service, it first asks the discovery module for that service's network location and then makes the call. The front end no longer needs to record the back-end network locations at all; the front end and back end are completely decoupled!
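
For illustration only, a record in this K/V form, and the kind of lookup a front end performs, might look like the following sketch (the discovery endpoint and addresses are made-up; the concrete Consul commands appear in the sections below):

# Hypothetical K/V record kept by the discovery module:
#   K = "nginx"  ->  V = 192.168.88.60:83, 192.168.88.60:84
# The front end resolves the service by name instead of hardcoding addresses:
curl -s http://discovery.example.local/v1/catalog/service/nginx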

2. What is Consul?

Consul is open-source service management software from HashiCorp, written in Go. It supports multiple data centers, distributed high availability, service discovery, and configuration sharing, and uses the Raft algorithm to guarantee high availability of the service. It has a built-in service registration and discovery framework, a distributed consistency protocol implementation, health checks, Key/Value storage, and a multi-data-center solution, so it no longer needs to depend on other tools (such as ZooKeeper). Deployment is simple: there is only a single runnable binary. Every node runs an agent, which has two operating modes, server and client. The official recommendation is 3 or 5 server nodes per data center, to keep data safe and to ensure that the server-leader election works correctly.

In client mode, all services registered on the current node are forwarded to a server node; this information is not persisted.
Server mode works like client mode, except that all information is persisted locally, so the information survives a failure.
The server-leader is the chief of all server nodes. Unlike the other servers, it is responsible for synchronizing registration information to the other server nodes and for monitoring the health of each node.

Some key features provided by Consul:

  • Service registration and discovery: Consul makes service registration and discovery easy through its DNS or HTTP interfaces. External services, such as those provided by a SaaS platform, can be registered the same way.
  • Health checks: health checking lets Consul alert operators quickly to problems in the cluster, and its integration with service discovery keeps traffic from being forwarded to failed services.
  • Key/Value storage: a system for storing dynamic configuration. It provides a simple HTTP interface that can be used from anywhere.
  • Multi-datacenter: any number of regions is supported without complex configuration.

Installing Consul provides service registration: information about the containers themselves is registered in Consul, and other programs obtain the registered service information through Consul. This is service registration and discovery.

3. Consul architecture

  • Registrator component: discovers the network locations of applications and sends them to the auto-discovery module of the Consul server/client for registration.
  • Consul server component: collects the automatically discovered information and persists locally everything that needs to be registered. Through the server-leader it synchronizes registration information to the other server nodes and health-checks each node.
  • consul-template component: based on the registration information in Consul, automatically generates configuration files from a configuration template and swaps them in.
  • Proxy server: nginx acts as the load balancer and proxies requests according to the configuration generated by consul-template.

2. Deploy the Consul server (192.168.88.10)

1. Create the Consul service

mkdir /opt/consul
cp consul_0.9.2_linux_amd64.zip /opt/consul
cd /opt/consul
unzip consul_0.9.2_linux_amd64.zip
mv consul /usr/local/bin/

# Set up the agent and start the consul server in the background
consul agent \
-server \
-bootstrap \
-ui \
-data-dir=/var/lib/consul-data \
-bind=192.168.88.10 \
-client=0.0.0.0 \
-node=consul-server01 &> /var/log/consul.log &

-server: start as a server. The default is client.
-bootstrap: controls whether a server runs in bootstrap mode. Only one server per data center may be in bootstrap mode; a server in bootstrap mode can elect itself server-leader.
-bootstrap-expect=N: the number of server nodes the data center expects; the cluster will not bootstrap (elect a leader) until that many servers have joined.
-ui: enable the built-in web UI, reachable at an address such as http://localhost:8500/ui.
-data-dir: the data storage directory.
-bind: the address used for communication inside the cluster; every node in the cluster must be able to reach it. Default 0.0.0.0.
-client: the client address Consul binds to, on which the HTTP, DNS, RPC, and other services are provided. Default 127.0.0.1.
-node: the node's name inside the cluster, which must be unique within a cluster. Default is the node's hostname.
-datacenter: the data center name. Default dc1.
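
Backgrounding the agent with & does not survive a reboot. As an optional sketch (the unit file name and layout are assumptions; it reuses the flags above), the same agent could be managed by systemd:

# Optional sketch: manage the consul agent with systemd instead of backgrounding it
cat > /etc/systemd/system/consul.service <<'EOF'
[Unit]
Description=Consul server agent
After=network-online.target

[Service]
ExecStart=/usr/local/bin/consul agent -server -bootstrap -ui -data-dir=/var/lib/consul-data -bind=192.168.88.10 -client=0.0.0.0 -node=consul-server01
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start consul
systemctl enable consul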

netstat -natp | grep consul

After consul starts, it listens on 5 ports by default:
8300: replication and leader forwarding; reads, writes, and replication of data inside the cluster
8301: LAN gossip; gossip-protocol communication within a single data center
8302: WAN gossip; gossip-protocol communication across data centers
8500: web UI; HTTP interfaces for listing, registering, and deregistering services, plus the UI itself
8600: DNS; node and service lookups over the DNS protocol, providing service discovery
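
Because port 8600 speaks the DNS protocol, registered services can be looked up with any DNS client. A sketch (assumes dig is installed, e.g. from bind-utils, and that a service such as nginx has already been registered as in the later steps):

# A and SRV lookups against Consul's DNS interface; SRV records also carry the ports
dig @192.168.88.10 -p 8600 nginx.service.consul
dig @192.168.88.10 -p 8600 nginx.service.consul SRV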

2. View cluster information

# Check member status
consul members

# Check cluster status
consul operator raft list-peers

# Check leader status
consul info | grep leader

3. Obtain cluster information through the HTTP API

curl 192.168.88.10:8500/v1/status/peers 			# cluster server members
curl 192.168.88.10:8500/v1/status/leader			# cluster server-leader
curl 192.168.88.10:8500/v1/catalog/services			# all registered services
curl 192.168.88.10:8500/v1/catalog/service/nginx		# details of the nginx service
curl 192.168.88.10:8500/v1/catalog/nodes			# detailed information about cluster nodes

# Access the web UI
http://192.168.88.10:8500/ui
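
The HTTP API can also register and deregister services by hand, using the standard agent endpoints; a sketch (the name, ID, address, and port below are made-up examples):

# Register a service (PUT /v1/agent/service/register)
curl -X PUT -d '{"ID":"demo-01","Name":"demo","Address":"192.168.88.60","Port":85}' \
http://192.168.88.10:8500/v1/agent/service/register

# Deregister it again (PUT /v1/agent/service/deregister/<ID>)
curl -X PUT http://192.168.88.10:8500/v1/agent/service/deregister/demo-01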

3. Registrator server (192.168.88.60)

1. Install Gliderlabs/Registrator

Gliderlabs/Registrator watches container state, automatically registering docker containers' services in a service configuration center and deregistering them again. It currently supports Consul, Etcd, and SkyDNS2.

docker pull gliderlabs/registrator:latest 

docker run -d \
--name=registrator \
--net=host \
-v /var/run/docker.sock:/tmp/docker.sock \
--restart=always \
gliderlabs/registrator:latest \
--ip=192.168.88.60 \
consul://192.168.88.10:8500

--net=host: run the container in host network mode.
-v /var/run/docker.sock:/tmp/docker.sock: mount the Unix domain socket on which the host's Docker daemon listens by default into the container.
--restart=always: always restart the container when it exits.
--ip: because the network was set to host mode, we specify the host's IP here.
consul://: the IP and port of the consul server.
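
To confirm that Registrator started and connected to the consul server, check its container logs (what exactly the log lines say varies by version):

docker logs registrator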

2. Test whether the service discovery function works

docker run -itd -p 83:80 --name test-01 -h test01 nginx
docker run -itd -p 84:80 --name test-02 -h test02 nginx   	# -h: set the container hostname
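
Registrator also recognizes SERVICE_* environment variables on the containers it registers, which override the service name and tags stored in Consul; a sketch (the container name, port 86, and the name/tag values are made-up examples):

docker run -itd -p 86:80 -e SERVICE_NAME=web -e SERVICE_TAGS=v1 --name test-03 -h test03 nginx
# This container is registered in Consul under the service name "web" with tag "v1" instead of "nginx"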

3. Verify that the httpd and nginx services are registered in Consul

In a browser, open http://192.168.88.10:8500. On the web page, click NODES, then click consul-server01; three services appear.

// Test the registered services from the consul server with curl
curl 192.168.88.10:8500/v1/catalog/services 
{"consul":[],"httpd":[],"nginx":[]}

4. consul-template

consul-template is an application that automatically replaces configuration files based on Consul. It runs as a daemon that queries the Consul cluster in real time, updates any number of templates on the file system, and generates configuration files from them. After an update completes, it can optionally run a shell command, for example to reload Nginx.

consul-template can query the service catalog, keys, and key-value data in Consul. This powerful abstraction and query-language templating makes consul-template especially suitable for creating configuration files dynamically, for example Apache/Nginx proxy balancers, HAProxy backends, and so on.
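
Beyond the service catalog, consul-template can also render values from Consul's K/V store via the standard {{key}} function; a minimal sketch (the key path and value are made-up examples):

# Put a value into Consul's K/V store (the consul kv subcommand is available in this version)
consul kv put service/nginx/listen_port 8000
# A template line like the following would then render as "listen 8000;"
#   listen {{key "service/nginx/listen_port"}};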

1. Prepare the nginx template file

// Operate on the consul server
vim /opt/consul/nginx.ctmpl
# Define a simple template for an nginx upstream
upstream http_backend {
  {{range service "nginx"}}
  server {{.Address}}:{{.Port}};
  {{end}}
}

# Define a server that listens on port 8000 and reverse-proxies to the upstream
server {
    listen 8000;
    server_name www.my.com;
    access_log /var/log/nginx/my.com-access.log;					# adjust the log path as needed
 
    location / {
        proxy_set_header HOST $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Client-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://http_backend;
    }
}

2. Install nginx with yum

vim /etc/yum.repos.d/nginx.repo
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1

yum -y install nginx

systemctl start nginx

3. Configure and start consul-template

unzip consul-template_0.19.3_linux_amd64.zip -d /opt/
cd /opt/
mv consul-template /usr/local/bin/

// Start consul-template in the foreground; after it starts, do not press Ctrl+C, which would terminate the consul-template process.
consul-template --consul-addr 192.168.88.10:8500 \
--template "/opt/consul/nginx.ctmpl:/etc/nginx/conf.d/my.conf:/usr/sbin/nginx -s reload" \
--log-level=info
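
To preview what the template renders without overwriting the file or reloading nginx, consul-template's --dry and --once flags can be used (a sketch):

# Render the template once to stdout, without writing the file or reloading nginx
consul-template --consul-addr 192.168.88.10:8500 \
--template "/opt/consul/nginx.ctmpl:/etc/nginx/conf.d/my.conf" \
--dry --once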

// In another terminal, view the generated configuration file
cat /etc/nginx/conf.d/my.conf
upstream http_backend {
  
   server 192.168.88.60:83;
   
   server 192.168.88.60:84;
   
}

server {
    listen 8000;
    server_name www.my.com;
    access_log /var/log/nginx/my.com-access.log;

    location / {
        proxy_set_header HOST $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Client-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://http_backend;
    }
}

4. Access template-nginx

docker ps -a

docker exec -it test-01 bash
echo "this is test1 web" > /usr/share/nginx/html/index.html

docker exec -it test-02 bash
echo "this is test2 web" > /usr/share/nginx/html/index.html

In a browser, open http://192.168.88.10:8000/ (the server running consul-template and nginx) and refresh repeatedly; requests are polled between the two container pages.
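
The polling can also be observed from a shell; a sketch, assuming the proxy runs on the consul server as set up above:

# Each request should alternate between "this is test1 web" and "this is test2 web"
for i in 1 2 3 4; do curl -s http://192.168.88.10:8000/; done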

5. Add an nginx container node

5.1 Add an nginx container node to test the service discovery and configuration update functions.

docker run -itd -p 85:80 --name test-05 -h test05 nginx

// Watch the consul-template service: it regenerates /etc/nginx/conf.d/my.conf from the template and reloads the nginx service.

5.2 View the contents of the /etc/nginx/conf.d/my.conf file

cat /etc/nginx/conf.d/my.conf

5.3 Check the logs of the three nginx containers and confirm that requests are polled to each container node

docker logs -f test-01
docker logs -f test-02
docker logs -f test-05

5. Consul multi-node

// Add a server that already has a docker environment (192.168.88.20/24) to the existing cluster
consul agent \
-server \
-ui \
-data-dir=/var/lib/consul-data \
-bind=192.168.88.20 \
-client=0.0.0.0 \
-node=consul-server02 \
-enable-script-checks=true  \
-datacenter=dc1  \
-join 192.168.88.10 &> /var/log/consul.log &

-enable-script-checks=true: enable script health checks
-datacenter: the data center name
-join: join an existing cluster
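
Back on the first server, the new node should now appear in the member list and in the Raft peer set:

# Run on 192.168.88.10 to confirm consul-server02 has joined
consul members
consul operator raft list-peers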
