[Docker] Container service update and discovery with Consul

1. Consul


1.1 What is service registration and discovery

  • Service registration and discovery are indispensable components of a microservice architecture. Early services were single-node, which neither guaranteed high availability nor accounted for the load a service could bear; calls between services were simple direct accesses to an interface. When multi-node distributed architectures appeared, the first solution was to put a load balancer in front of the services, which means the front end must know the network locations of all backend services and keep them in a configuration file. This creates several problems:
    ● To call backend services A through N, the caller must configure the network locations of all N services, which is tedious
    ● Whenever a backend service's network location changes, every caller's configuration must be updated
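The idea that solves both problems can be sketched in a few lines. This is a minimal, hypothetical illustration in plain Python, not Consul's API (`REGISTRY`, `register`, and `discover` are invented names): backends announce themselves to a registry, and callers look up live backends at request time instead of reading a static config file.

```python
# Hypothetical in-memory registry standing in for Consul.
REGISTRY = {}

def register(service, address, port):
    """A backend announces itself; callers never edit config files."""
    REGISTRY.setdefault(service, []).append((address, port))

def discover(service):
    """A caller looks up the current backends at request time."""
    return REGISTRY.get(service, [])

register("nginx", "192.168.80.10", 83)
register("nginx", "192.168.80.10", 84)
print(discover("nginx"))  # both backends, no static config needed
```

When a backend moves, it simply re-registers; no caller configuration changes.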

1.2 What is Consul

  • Consul is open-source service management software developed by HashiCorp in the Go language. It supports multiple data centers, distributed high availability, service discovery, and configuration sharing, and uses the Raft algorithm to guarantee high availability of services. It has a built-in service registration and discovery framework, a distributed consistency protocol implementation, health checks, Key/Value storage, and a multi-datacenter solution, so it no longer needs to rely on other tools (such as ZooKeeper). Deployment is simple: a single executable binary. Every node runs an agent, which has two modes of operation, server and client. The official recommendation is 3 or 5 server nodes per data center, to keep data safe and to ensure the server-leader election works correctly.

  • In client mode, all services registered to the current node are forwarded to a server node; the client itself does not persist the information.

  • Server mode works like client mode, except that it persists all information locally, so the data survives a failure.

  • The server-leader is the head of all server nodes. Unlike the other servers, it is responsible for synchronizing registered information to the other server nodes and for health monitoring of each node.

1.3 Some key features provided by consul

  • Service registration and discovery: Consul makes service registration and discovery easy through DNS or HTTP interfaces, and external services, such as those provided by a SaaS, can be registered the same way.

  • Health check: health checks let Consul quickly alert operators to problems in the cluster. Integrated with service discovery, they prevent traffic from being forwarded to failed services.

  • Key/Value storage: a system for storing dynamic configuration, with a simple HTTP interface that can be used from anywhere.

  • Multi-datacenter: any number of regions are supported without complicated configuration.

  • Installing Consul gives us service registration: information about a container is registered in Consul, and other programs can then obtain the registered service information from Consul. Together this is service registration and discovery.
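To make "registering some information of the container in Consul" concrete, here is a sketch of the JSON body a registration tool sends to Consul's agent endpoint `PUT /v1/agent/service/register`. The field names (`ID`, `Name`, `Address`, `Port`, `Tags`) follow Consul's agent HTTP API; actually sending the request requires a running agent, so this only builds the payload.

```python
import json

def registration_payload(name, address, port, tags=None):
    # Body for PUT /v1/agent/service/register (Consul agent HTTP API).
    # The ID must be unique per agent; name-address-port is a common scheme.
    return json.dumps({
        "ID": f"{name}-{address}-{port}",
        "Name": name,
        "Address": address,
        "Port": port,
        "Tags": tags or [],
    })

print(registration_payload("nginx", "192.168.80.10", 83))
```

A tool like registrator (used later in this article) builds essentially this payload for every container it sees start.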

2. Consul deployment

2.1 Environment configuration

Node                IP               Runs
consul-server       192.168.243.102  consul service, nginx service, consul-template daemon
registrator server  192.168.243.100  registrator container, nginx containers
systemctl stop firewalld.service
setenforce 0

2.2 Consul server configuration

1. Create a Consul service

mkdir /opt/consul
cp consul_0.9.2_linux_amd64.zip /opt/consul
cd /opt/consul
unzip consul_0.9.2_linux_amd64.zip
mv consul /usr/local/bin/

// Configure the agent and start the consul server in the background
consul agent \
-server \
-bootstrap \
-ui \
-data-dir=/var/lib/consul-data \
-bind=192.168.80.15 \
-client=0.0.0.0 \
-node=consul-server01 &> /var/log/consul.log &

  • -server: Start as server. The default is client.
  • -bootstrap: Used to control whether a server is in bootstrap mode. In a data center, only one server can be in bootstrap mode. When a server is in
    bootstrap mode, it can be elected as server-leader by itself.
  • -bootstrap-expect=2 : The minimum number of servers required by the cluster, when it is lower than this number, the cluster will fail.
  • -ui : Specifies to enable the UI interface, so that the web UI interface that comes with consul can be accessed through an address such as http://localhost:8500/ui.
  • -data-dir : Specifies the data storage directory.
  • -bind : Specifies the communication address used within the cluster. All nodes in the cluster must be reachable to this address, and the default is 0.0.0.0.
  • -client : Specifies which client address consul is bound to. This address provides services such as HTTP, DNS, and RPC. The default is 127.0.0.1.
  • -node : The name of the node in the cluster. It must be unique in a cluster. The default is the host name of the node.
  • -datacenter : Specify the data center name, the default is dc1.
netstat -natp | grep consul

After starting, consul listens on 5 ports by default:
8300: replication and leader forwarding
8301: LAN gossip
8302: WAN gossip
8500: web UI interface / HTTP API
8600: DNS protocol port for querying node information


2. View cluster information

View members status

consul members
Node             Address             Status  Type    Build  Protocol  DC
consul-server01  192.168.80.15:8301  alive   server  0.9.2  2         dc1

View cluster status

consul operator raft list-peers

consul info | grep leader
	leader = true
	leader_addr = 192.168.80.15:8300


3. Obtain cluster information through http api

curl 127.0.0.1:8500/v1/status/peers 			# view cluster server members
curl 127.0.0.1:8500/v1/status/leader			# cluster server-leader
curl 127.0.0.1:8500/v1/catalog/services			# all registered services
curl 127.0.0.1:8500/v1/catalog/service/nginx		# nginx service details
curl 127.0.0.1:8500/v1/catalog/nodes			# cluster node details
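These catalog endpoints return plain JSON, so any program can consume them. A small sketch parsing a `/v1/catalog/services` body of the shape shown later in this article (service names are the keys, values are tag lists):

```python
import json

# Example response body from /v1/catalog/services
# (the same shape as the output shown in this article).
body = '{"consul":[],"httpd":[],"nginx":[]}'

services = json.loads(body)
registered = sorted(services)
print(registered)  # ['consul', 'httpd', 'nginx']
```

In a live cluster you would fetch `body` with an HTTP GET against `127.0.0.1:8500` instead of hardcoding it.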


2.3 registrator server configuration

  • Container Service automatically joins the Nginx cluster

1. Install Gliderlabs/Registrator

  • Gliderlabs/Registrator watches the running state of containers so it can register them automatically, and it can also deregister a docker container's service from the service configuration center. Consul, Etcd and SkyDNS2 are currently supported.
docker run -d \
--name=registrator \
--net=host \
-v /var/run/docker.sock:/tmp/docker.sock \
--restart=always \
gliderlabs/registrator:latest \
--ip=192.168.80.14 \
consul://192.168.80.15:8500
  • --net=host : run the container in host network mode.
  • -v /var/run/docker.sock:/tmp/docker.sock : mount the Unix domain socket the host's Docker daemon listens on by default into the container.
  • --restart=always : always restart the container when it exits.
  • --ip : since the network was just set to host mode, we pass the host machine's IP here.
  • consul:// : the IP and port of the consul server.


2. Test whether the service discovery function is normal

docker run -itd -p 83:80 --name test-01 -h test01 nginx
docker run -itd -p 84:80 --name test-02 -h test02 nginx
docker run -itd -p 88:80 --name test-03 -h test03 httpd
docker run -itd -p 89:80 --name test-04 -h test04 httpd    # -h: set the container hostname

3. Verify whether the http and nginx services are registered with consul

  • In a browser, open http://192.168.80.15:8500, click "NODES" on the web page, then click "consul-server01"; 5 services will appear.
// Test the connection from the consul server with curl
curl 127.0.0.1:8500/v1/catalog/services 
{"consul":[],"httpd":[],"nginx":[]}


2.4 consul-template

Consul-Template is an application that automatically rewrites configuration files based on Consul. It runs as a daemon that queries the Consul cluster in real time, updates any number of templates on the file system, and generates configuration files from them. After an update it can optionally run a shell command, for example to reload Nginx.

Consul-Template can query Consul's service catalog, keys, key-value data, and so on. This powerful abstraction and query-language templating makes Consul-Template particularly suitable for creating configuration files dynamically, for example Apache/Nginx proxy balancers, HAProxy backends, and the like.

1. Prepare the template nginx template file

// On the consul server
vim /opt/consul/nginx.ctmpl
# Define a simple nginx upstream template
upstream http_backend {
  {{range service "nginx"}}
   server {{.Address}}:{{.Port}};
  {{end}}
}

# Define a server listening on port 8000 that reverse-proxies to the upstream
server {
    listen 8000;
    server_name localhost 192.168.80.15;
    access_log /var/log/nginx/kgc.com-access.log;		# changed log path
    index index.html index.php;
    location / {
        proxy_set_header HOST $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Client-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://http_backend;
    }
}
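For intuition, what consul-template does with the `{{range service "nginx"}}` block can be mimicked in a few lines: iterate the instances of the "nginx" service and emit one `server` line per instance. This is a hedged stand-in in Python, not the real tool (consul-template uses Go templates fed by live Consul data):

```python
def render_upstream(instances):
    # instances: list of (address, port) tuples, one per registered
    # "nginx" service instance, as {{range service "nginx"}} would see.
    lines = ["upstream http_backend {"]
    for address, port in instances:
        lines.append(f"   server {address}:{port};")
    lines.append("}")
    return "\n".join(lines)

print(render_upstream([("192.168.80.10", 83), ("192.168.80.10", 84)]))
```

The output matches the shape of the generated `kgc.conf` shown below: one `server` line appears or disappears as containers register or deregister.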


2. Compile and install nginx

yum -y install pcre-devel zlib-devel gcc gcc-c++ make
useradd -M -s /sbin/nologin nginx
tar zxvf nginx-1.12.0.tar.gz -C /opt/
cd /opt/nginx-1.12.0/
./configure --prefix=/usr/local/nginx --user=nginx --group=nginx && make && make install

ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/

insert image description here

3. Configure nginx

vim /usr/local/nginx/conf/nginx.conf
......
http {
     include       mime.types;
     include  vhost/*.conf;       				# add the virtual host directory
     default_type  application/octet-stream;
......

// Create the virtual host directory
mkdir /usr/local/nginx/conf/vhost

// Create the log directory
mkdir /var/log/nginx

// Start nginx
nginx

4. Configure and start the template

unzip consul-template_0.19.3_linux_amd64.zip -d /opt/
cd /opt/
mv consul-template /usr/local/bin/

// Start the template service in the foreground; after it starts, do not press Ctrl+C to stop the consul-template process.
consul-template --consul-addr 192.168.80.15:8500 \
--template "/opt/consul/nginx.ctmpl:/usr/local/nginx/conf/vhost/kgc.conf:/usr/local/nginx/sbin/nginx -s reload" \
--log-level=info

// Open another terminal to view the generated configuration file
upstream http_backend {
   server 192.168.80.10:83;
   server 192.168.80.10:84;
}

server {
  listen 8000;
  server_name localhost 192.168.80.15;
  access_log /var/log/nginx/kgc.cn-access.log;
  index index.html index.php;
  location / {
    proxy_set_header HOST $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Client-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://http_backend;
  }
}


5. Visit template-nginx

docker ps -a
CONTAINER ID   IMAGE                           COMMAND                  CREATED       STATUS          PORTS                NAMES
9f0dc08956f4   httpd                           "httpd-foreground"       1 hours ago   Up 1 hours      0.0.0.0:89->80/tcp   test-04
a0bde07299da   httpd                           "httpd-foreground"       1 hours ago   Up 1 hours      0.0.0.0:88->80/tcp   test-03
4f74d2c38844   nginx                           "/docker-entrypoint.…"   1 hours ago   Up 1 hours      0.0.0.0:84->80/tcp   test-02
b73106db285b   nginx                           "/docker-entrypoint.…"   1 hours ago   Up 1 hours      0.0.0.0:83->80/tcp   test-01
409331c16824   gliderlabs/registrator:latest   "/bin/registrator -i…"   1 hours ago   Up 1 hours                        registrator

docker exec -it 4f74d2c38844 bash
echo "this is test1 web" > /usr/share/nginx/html/index.html

docker exec -it b73106db285b bash
echo "this is test2 web" > /usr/share/nginx/html/index.html

# Visit http://192.168.80.15:8000/ in a browser and keep refreshing.


6. Add an nginx container node

(1) Add an nginx container node to test service discovery and configuration update.
docker run -itd -p 85:80 --name test-05 -h test05 nginx

// Watch the template service: it regenerates /usr/local/nginx/conf/vhost/kgc.conf from the template and reloads the nginx service.

(2) Check the contents of /usr/local/nginx/conf/vhost/kgc.conf
cat /usr/local/nginx/conf/vhost/kgc.conf
upstream http_backend {
   server 192.168.80.10:83;
   server 192.168.80.10:84;
   server 192.168.80.10:85;
}

(3) Check the logs of the three nginx containers; requests are round-robined normally across the container nodes.
docker logs -f test-01
docker logs -f test-02
docker logs -f test-05

2.5 consul multi-node

// Add a server that already has a Docker environment (192.168.80.12/24) to the existing cluster
consul agent \
-server \
-ui \
-data-dir=/var/lib/consul-data \
-bind=192.168.80.12 \
-client=0.0.0.0 \
-node=consul-server02 \
-enable-script-checks=true  \
-datacenter=dc1  \
-join 192.168.80.15 &> /var/log/consul.log &

------------------------------------------------------------------------
-enable-script-checks=true : enable script health checks
-datacenter : data center name
-join : join an existing cluster
------------------------------------------------------------------------

consul members
Node             Address             Status  Type    Build  Protocol  DC
consul-server01  192.168.80.15:8301  alive   server  0.9.2  2         dc1
consul-server02  192.168.80.12:8301  alive   server  0.9.2  2         dc1

consul operator raft list-peers
Node             ID                  Address             State     Voter  RaftProtocol
consul-server01  192.168.80.15:8300  192.168.80.15:8300  leader    true   2
consul-server02  192.168.80.12:8300  192.168.80.12:8300  follower  true   2
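The two-server cluster above works, but note the quorum arithmetic behind the earlier "3 or 5 servers" recommendation: Raft needs a strict majority to elect a leader and commit writes, so a 2-node cluster tolerates zero failures. A quick sketch:

```python
def quorum(servers: int) -> int:
    # Raft requires a strict majority of servers to elect a leader
    # and to commit log entries.
    return servers // 2 + 1

def tolerated_failures(servers: int) -> int:
    # Servers that can fail while a majority remains.
    return servers - quorum(servers)

for n in (2, 3, 5):
    print(f"{n} servers: quorum {quorum(n)}, tolerates {tolerated_failures(n)} failure(s)")
```

With 2 servers the quorum is 2, so losing either node halts leader election; 3 servers tolerate 1 failure and 5 tolerate 2, which is why production clusters run 3 or 5.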


Origin: blog.csdn.net/wang_dian1/article/details/131936508