Consul with Docker (service registration and discovery)

Table of contents

 1. What is service registration and discovery?

 2. What is Consul?

 3. Consul deployment

3.1 Establish the Consul service

3.1.1 Check the cluster status

3.1.2 Get cluster information through the HTTP API

3.2 Registrator server

3.2.1 Install Gliderlabs/Registrator

3.2.2 Test whether the service discovery function is normal

3.2.3 Verify whether the httpd and nginx services are registered to Consul

3.3 Consul-template

3.3.1 Prepare the nginx template file

3.3.2 Compile and install nginx

3.3.3 Configure nginx

3.3.4 Configure and start consul-template

3.3.5 Access template-nginx

3.3.6 Add an nginx container node

3.4 Consul multi-node



 1. What is service registration and discovery?

Service registration and discovery is an indispensable component of a microservice architecture. Initially, services ran as single nodes: there was no high availability, no thought was given to how much load a service could bear, and calls between services were simple direct interface accesses. When multi-node distributed architectures later appeared, the first solution was to put a load balancer in front of the services, which meant the front end had to know the network locations of all back-end services and keep them in its configuration file. This raises a few problems:

  • To call back-end services A through N, you must configure the network locations of all N services, which is tedious
  • Whenever a back-end service's network location changes, every caller's configuration must be updated

Service registration and discovery exists to solve these problems. Each back-end service (A through N) registers its current network location with the service discovery module, which records it as a key/value pair: the key is usually the service name and the value is IP:PORT. The service discovery module performs regular health checks, polling the back-end services to verify they are reachable. When the front end wants to call a back-end service, it first asks the service discovery module for that service's network location and then makes the call. This solves the problems above: the front end no longer needs to record any back-end network locations, and the front end and back end are completely decoupled.
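As an illustration only (not Consul's actual implementation), the K/V idea above can be sketched in a few lines of bash, with made-up service names and addresses:

```shell
#!/usr/bin/env bash
# Toy sketch of a service registry: K is the service name, V is IP:PORT.
# All names and addresses here are made up for illustration.
declare -A registry

# "Registration": a backend announces its network location.
register() { registry["$1"]="$2"; }

# "Discovery": a caller asks for a service's location by name.
discover() { echo "${registry[$1]}"; }

register nginx "192.168.10.13:83"
register httpd "192.168.10.13:88"

discover nginx   # prints 192.168.10.13:83
```

A real registry additionally health-checks each entry and drops unreachable backends, which is exactly what Consul adds on top of this basic mapping.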

 2. What is Consul?

Consul is an open-source service management tool developed by HashiCorp in the Go language. It supports multiple data centers, distributed high availability, service discovery, and configuration sharing, and it uses the Raft algorithm to guarantee high availability of the service. It has a built-in service registration and discovery framework, a distributed consistency protocol implementation, health checks, Key/Value storage, and a multi-data-center solution, so it no longer depends on other tools (such as ZooKeeper). Deployment is simple: there is only a single executable binary. Each node runs an agent, which operates in one of two modes: server or client. The official recommendation is 3 or 5 server nodes per data center, to keep data safe and to ensure that the server-leader election can proceed correctly.

In client mode, all services registered at the node are forwarded to a server node; the client does not persist this information itself. Server mode works like client mode, with one difference: a server persists all the information locally, so the information is retained after a failure.

The server-leader is the head of all the server nodes. It differs from the other server nodes in that it is responsible for synchronizing registered information to them and for monitoring the health of each node.

Some key features provided by consul:

  • Service registration and discovery: Consul makes registration and discovery easy through its DNS or HTTP interfaces; external services, such as those provided by a SaaS, can be registered the same way.
  • Health checks: health checking lets Consul quickly alert operators to problems in the cluster, and its integration with service discovery prevents traffic from being forwarded to failed services.
  • Key/Value storage: a system for storing dynamic configuration, with a simple HTTP interface that can be used from anywhere.
  • Multi-datacenter: any number of regions are supported without complicated configuration.

Consul is installed here for service registration: information about each container is registered in Consul, and other programs can then obtain the registered service information from Consul. That is service registration and discovery.

3. Consul deployment

3.1 Establish the Consul service

mkdir /opt/consul
cp consul_0.9.2_linux_amd64.zip /opt/consul
cd /opt/consul
unzip consul_0.9.2_linux_amd64.zip
mv consul /usr/local/bin/

//Set up the agent and start the consul server in the background
 consul agent \
-server \
-bootstrap \
-ui \
-data-dir=/var/lib/consul-data \
-bind=192.168.10.23 \
-client=0.0.0.0 \
-node=consul-server01 &> /var/log/consul.log &
  • -server : start as a server; the default is client.
  • -bootstrap : controls whether a server runs in bootstrap mode. Only one server per data center may be in bootstrap mode; a server in bootstrap mode can elect itself as server-leader.
  • -bootstrap-expect=2 : the minimum number of servers the cluster expects; the cluster will not bootstrap until that many servers have joined.
  • -ui : enables the web UI that ships with Consul, reachable at an address such as http://localhost:8500/ui .
  • -data-dir : specifies the data storage directory.
  • -bind : specifies the address used for communication inside the cluster. All cluster nodes must be able to reach this address; the default is 0.0.0.0.
  • -client : specifies the client address Consul binds to, which serves HTTP, DNS, and RPC. The default is 127.0.0.1.
  • -node : the node's name in the cluster. It must be unique within a cluster; the default is the host name.
  • -datacenter : specifies the data center name; the default is dc1.
netstat -natp | grep consul

After starting, consul listens on 5 ports by default:

8300: port for replication and leader forwarding

8301: port for LAN gossip

8302: port for WAN gossip

8500: port for the web UI / HTTP API

8600: port for querying node information over the DNS protocol
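As a memory aid only (this is not a consul command; it simply restates the list above in runnable form), a tiny shell helper that maps each default port to its purpose:

```shell
#!/usr/bin/env bash
# Map a default Consul port to its purpose (restates the list above).
consul_port_purpose() {
  case "$1" in
    8300) echo "server RPC: replication and leader forwarding" ;;
    8301) echo "LAN gossip" ;;
    8302) echo "WAN gossip" ;;
    8500) echo "web UI / HTTP API" ;;
    8600) echo "DNS interface" ;;
    *)    echo "not a default Consul port" ;;
  esac
}

consul_port_purpose 8500   # prints: web UI / HTTP API
```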

3.1.1 Check the cluster status

#Check member status
consul members
Node             Address             Status  Type    Build  Protocol  DC
consul-server01  192.168.10.23:8301  alive   server  0.9.2  2         dc1

#Check cluster status
consul operator raft list-peers

consul info | grep leader
	leader = true
	leader_addr = 192.168.10.23:8300

3.1.2 Get cluster information through the HTTP API

curl 127.0.0.1:8500/v1/status/peers 			#view cluster server members
curl 127.0.0.1:8500/v1/status/leader			#view the cluster server-leader
curl 127.0.0.1:8500/v1/catalog/services			#view all registered services
curl 127.0.0.1:8500/v1/catalog/service/nginx	#view nginx service information
curl 127.0.0.1:8500/v1/catalog/nodes			#view detailed cluster node information

3.2 Registrator server

3.2.1 Install Gliderlabs/Registrator

Gliderlabs/Registrator watches the running state of containers and registers them automatically; it can also deregister a docker container's services from the service configuration center. Consul, Etcd, and SkyDNS2 are currently supported.

docker run -d \
--name=registrator \
--net=host \
-v /var/run/docker.sock:/tmp/docker.sock \
--restart=always \
gliderlabs/registrator:latest \
--ip=192.168.10.13 \
consul://192.168.10.23:8500

--net=host : run the container in host network mode.

-v /var/run/docker.sock:/tmp/docker.sock : mount the Unix domain socket that the host's Docker daemon listens on by default into the container.

--restart=always : always restart the container when it exits.

--ip : since host network mode was specified, set this to the host's IP.

consul:// : specify the IP and port of the consul server.

3.2.2 Test whether the service discovery function is normal

docker run -itd -p 83:80 --name test-01 -h test01 nginx
docker run -itd -p 84:80 --name test-02 -h test02 nginx
docker run -itd -p 88:80 --name test-03 -h test03 httpd
docker run -itd -p 89:80 --name test-04 -h test04 httpd
	#-h: set the container hostname

3.2.3 Verify whether the httpd and nginx services are registered to Consul
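On the consul server the registered services can be listed with `curl 127.0.0.1:8500/v1/catalog/services` (or viewed in the web UI on port 8500). As a minimal sketch of the check itself, the snippet below greps a hard-coded sample of that output rather than querying a live cluster:

```shell
#!/usr/bin/env bash
# Hard-coded sample of /v1/catalog/services output; on a live cluster,
# fetch it with: curl -s 127.0.0.1:8500/v1/catalog/services
services='{"consul":[],"httpd":[],"nginx":[]}'

for svc in nginx httpd; do
  if echo "$services" | grep -q "\"$svc\""; then
    echo "$svc: registered"
  else
    echo "$svc: NOT registered"
  fi
done
```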

3.3 Consul-template

Consul-Template is an application that automatically rewrites configuration files based on Consul. It runs as a daemon that queries the Consul cluster in real time, updates any number of specified templates on the file system, and generates configuration files from them. After an update completes, it can optionally run a shell command, for example to reload Nginx.

Consul-Template can query Consul's service catalog, keys, key/values, and more. This powerful abstraction and query/template language makes Consul-Template particularly suitable for creating configuration files dynamically, for example Apache/Nginx proxy balancers, HAProxy backends, and so on.
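Besides `service` blocks, a template can also read data from Consul's Key/Value store with the `key` function. A minimal hypothetical fragment (the key name here is made up for illustration):

```
# Renders the value stored under the K/V key service/nginx/worker_processes
worker_processes {{key "service/nginx/worker_processes"}};
```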

3.3.1 Prepare the nginx template file

//Operate on the consul server

vim /opt/consul/nginx.ctmpl
#Define a simple template for the nginx upstream
upstream http_backend {
   {{range service "nginx"}}
   server {{.Address}}:{{.Port}};
   {{end}}
}

#Define a server listening on port 8000 that reverse-proxies to the upstream
server {
    listen 8000;
    server_name localhost 192.168.10.23;
    access_log /var/log/nginx/kgc.com-access.log;							#modify the log path
    index index.html index.php;
    location / {
        proxy_set_header HOST $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Client-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://http_backend;
    }
}
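To preview what the `{{range}}` loop above expands to, the following bash sketch (illustration only, with hard-coded addresses standing in for the Address:Port pairs consul-template would fetch for the "nginx" service) generates the same upstream block:

```shell
#!/usr/bin/env bash
# Hard-coded stand-ins for the backends consul-template would discover.
backends="192.168.10.13:83 192.168.10.13:84"

# Emit one "server" line per backend, mirroring the template's range loop.
{
  echo "upstream http_backend {"
  for b in $backends; do
    echo "   server $b;"
  done
  echo "}"
} > /tmp/kgc.conf.preview

cat /tmp/kgc.conf.preview
```

Each nginx container that registers or deregisters in Consul adds or removes one such `server` line in the rendered file.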

3.3.2 Compile and install nginx

yum -y install pcre-devel zlib-devel gcc gcc-c++ make
useradd -M -s /sbin/nologin nginx
tar zxvf nginx-1.12.0.tar.gz -C /opt/
cd /opt/nginx-1.12.0/
./configure --prefix=/usr/local/nginx --user=nginx --group=nginx && make -j && make install

ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/

3.3.3 Configure nginx

......
http {
    include       mime.types;
    include  vhost/*.conf;       				#add the virtual host directory
    default_type  application/octet-stream;
......

//Create the virtual host directory
mkdir /usr/local/nginx/conf/vhost

//Create the log file directory
mkdir /var/log/nginx

//Start nginx
nginx

3.3.4 Configure and start consul-template

unzip consul-template_0.19.3_linux_amd64.zip -d /opt/
cd /opt/
mv consul-template /usr/local/bin/

//Start the template service in the foreground; after it starts, do not press Ctrl+C to terminate the consul-template process.
consul-template --consul-addr 192.168.10.23:8500 \
--template "/opt/consul/nginx.ctmpl:/usr/local/nginx/conf/vhost/kgc.conf:/usr/local/nginx/sbin/nginx -s reload" \
--log-level=info

//Open another terminal to view the generated configuration file
upstream http_backend {
   server 192.168.10.13:83;

   server 192.168.10.13:84;

}

server {
  listen 8000;
  server_name 192.168.10.23;
  access_log /var/log/nginx/kgc.cn-access.log;
  index index.html index.php;
  location / {
    proxy_set_header HOST $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Client-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://http_backend;
  }
}

3.3.5 Access template-nginx

docker ps -a
CONTAINER ID   IMAGE                           COMMAND                  CREATED       STATUS         PORTS                NAMES
9f0dc08956f4   httpd                           "httpd-foreground"       1 hour ago    Up 1 hour      0.0.0.0:89->80/tcp   test-04
a0bde07299da   httpd                           "httpd-foreground"       1 hour ago    Up 1 hour      0.0.0.0:88->80/tcp   test-03
4f74d2c38844   nginx                           "/docker-entrypoint.…"   1 hour ago    Up 1 hour      0.0.0.0:84->80/tcp   test-02
b73106db285b   nginx                           "/docker-entrypoint.…"   1 hour ago    Up 1 hour      0.0.0.0:83->80/tcp   test-01
409331c16824   gliderlabs/registrator:latest   "/bin/registrator -i…"   1 hour ago    Up 1 hour                           registrator

docker exec -it 4f74d2c38844 bash
echo "this is test1 web" > /usr/share/nginx/html/index.html

docker exec -it b73106db285b bash
echo "this is test2 web" > /usr/share/nginx/html/index.html

Browser access: http://192.168.10.23:8000/ , and keep refreshing.

3.3.6 Add an nginx container node

(1) Add an nginx container node to test service discovery and configuration update functions.

docker run -itd -p 85:80 --name test-05 -h test05 nginx

//Watch the template service: the content of /usr/local/nginx/conf/vhost/kgc.conf will be regenerated from the template, and the nginx service will be reloaded

(2) View the contents of the /usr/local/nginx/conf/vhost/kgc.conf file.

cat /usr/local/nginx/conf/vhost/kgc.conf
upstream http_backend {

server 192.168.10.13:83;

server 192.168.10.13:84;

server 192.168.10.13:85;

}

(3) Check the logs of the three nginx containers; requests are polled round-robin across the container nodes.

docker logs -f test-01
docker logs -f test-02
docker logs -f test-05

3.4 consul multi-node

Add a server, 192.168.10.14/24, that already has a docker environment, and join it to the existing cluster.

consul agent \
-server \
-ui \
-data-dir=/var/lib/consul-data \
-bind=192.168.10.14 \
-client=0.0.0.0 \
-node=consul-server02 \
-enable-script-checks=true  \
-datacenter=dc1  \
-join 192.168.10.23 &> /var/log/consul.log &

-enable-script-checks=true : enable script-based health checks

-datacenter : data center name

-join : join an existing cluster

consul members
Node             Address             Status  Type    Build  Protocol  DC
consul-server01  192.168.10.23:8301  alive   server  0.9.2  2         dc1
consul-server02  192.168.10.14:8301  alive   server  0.9.2  2         dc1


consul operator raft list-peers
Node             ID                  Address             State     Voter  RaftProtocol
consul-server01  192.168.10.23:8300  192.168.10.23:8300  leader    true   2
consul-server02  192.168.10.14:8300  192.168.10.14:8300  follower  true   2

Origin blog.csdn.net/m0_71888825/article/details/132427270