Implementing cluster service discovery with Consul on Docker

Service Discovery

Put simply, service discovery is about decoupling a service from its hard binding to specific IP addresses.
Take a typical cluster as an example: it consists of multiple nodes, which correspond to multiple IP addresses (or the same IP with different port numbers), and different nodes in the cluster have different responsibilities.
For example, the nodes of a data cluster can be divided into read nodes and write nodes. Read and write roles are relative rather than hard-bound: through failover and recovery a logical node can change its identity (a write node becomes a read node and vice versa; a master is demoted to a slave, a slave is promoted to master, and so on).
When the cluster provides a service to the outside world, such identity changes should be transparent to callers: external systems should not have to change their configuration just because a cluster node changed roles. This is what decoupling the service requires.

Middleware such as Consul and ZooKeeper performs exactly this transparent translation, which is service discovery. Here a simple test is done with Consul to implement service discovery.
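To make the decoupling concrete, here is a minimal sketch (assuming a running Consul agent with its default DNS port 8600 and HTTP port 8500, and using the Redis service name that is registered later in this article): clients ask for a stable service name, and Consul answers with whichever healthy node IPs currently back it.

# Resolve a service name to the healthy node IPs behind it via Consul's DNS interface
dig @127.0.0.1 -p 8600 w-master-redis-8888.service.consul +short
# The same lookup through the HTTP API
curl http://127.0.0.1:8500/v1/catalog/service/w-master-redis-8888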

Consul is a service mesh solution that provides service discovery, configuration, and segmentation as sub-functions of a full-featured control plane.
Each of these functions can be used individually as needed, or they can be used together to build a complete service mesh. (Even with machine translation, I struggled for a long time over how to translate "service mesh" properly; the official English description is quoted below.)

The following is a simple logical structure of Consul service discovery, put together according to my own understanding; the principle is quite easy to follow.

Consul is a service mesh solution providing a full featured control plane with service discovery, configuration, and segmentation functionality. Each of these features can be used individually as needed, or they can be used together to build a full service mesh. Consul requires a data plane and supports both a proxy and native integration model. Consul ships with a simple built-in proxy so that everything works out of the box, but also supports 3rd party proxy integrations such as Envoy. https://www.consul.io/intro/index.html

It provides the following key features:

Service Discovery:

  Some Consul clients can provide a service, such as an api or mysql service, and other Consul clients can use Consul to find the providers of that service.

  Using DNS or HTTP, applications can easily find the services they depend on.

Health Checking:
  Consul clients can provide health checks, which can be associated with a given service (does the service return 200 OK?) or with the local node (is memory usage below 90%?).

  An operator can use this information to monitor the health of the cluster, and the service discovery component uses it to route traffic away from unhealthy hosts.

KV Store:
  Applications can use Consul's hierarchical key/value store for many purposes, including dynamic configuration, coordination, leader election, and more. It is exposed through a simple HTTP API that is very easy to use; see the short example after this list.
Multi Datacenter:
  Consul supports multiple datacenters out of the box, which means users do not have to worry about building extra layers of abstraction in order to grow Consul to multiple regions.

Consul is designed to be friendly to both application developers and the DevOps community, which makes it well suited for modern, elastic infrastructure.
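As a minimal sketch of the KV store mentioned in the list above (assuming an agent reachable at 127.0.0.1:8500; the key name is made up for illustration):

# Write a key (returns true on success)
curl -X PUT -d 'maxconn=100' http://127.0.0.1:8500/v1/kv/app/redis/config
# Read it back as JSON (the value comes back base64-encoded in the response)
curl http://127.0.0.1:8500/v1/kv/app/redis/config
# Read only the raw value
curl 'http://127.0.0.1:8500/v1/kv/app/redis/config?raw'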

The following implements Consul service discovery on Docker, registering services through the Consul agent's JSON registration mode.
The Consul server side is a 3-node cluster; the client side is a 6-node, 3-master / 3-slave Redis cluster. Consul registers and discovers the read and write services of the Redis cluster.
Although the Redis cluster driver supports connecting with multiple IPs, the point here is only to test "service discovery".
Originally I wanted to test service discovery for read/write splitting with MySQL MGR in single-primary mode, but MySQL is a bit too heavy and the machine is not powerful enough, so a Redis cluster is used to test service discovery instead.

consul server cluster installation and configuration
As the carrier of service discovery, Consul can run as a single node. Since it is the component that resolves services, it plays a very important role, and a cluster is much more resilient, so in most cases this carrier is run as a multi-node cluster.
Here a three-node cluster is used to run the Consul servers. The IPs of the three Consul server nodes are 172.18.0.11, 172.18.0.12 and 172.18.0.13; fixed IPs are required.

docker network create --subnet=172.18.0.0/16 mynetwork

docker run -itd --name consul01 --net mynetwork --ip 172.18.0.11 -v /usr/local/docker_file/consul01/:/usr/local/ centos

docker run -itd --name consul02 --net mynetwork --ip 172.18.0.12 -v /usr/local/docker_file/consul02/:/usr/local/ centos 

docker run -itd --name consul03 --net mynetwork --ip 172.18.0.13 -v /usr/local/docker_file/consul03/:/usr/local/ centos 

Install Consul in each of the containers (this is very simple: just unzip consul_1.6.2_linux_amd64.zip).

The server.json configuration file for the three container nodes is shown below; the only difference between nodes is that bind_addr is set to the current node's IP.
The following is the configuration of node 172.18.0.11; on the other nodes only bind_addr needs to be changed to that machine's (container's) IP.

/usr/local/server.json
{
    "datacenter": "dc1",
    "data_dir": "/usr/local/",
    "log_level": "INFO",
    "server": true,
    "bootstrap_expect": 3,
    "bind_addr": "172.18.0.11",
    "client_addr": "0.0.0.0",
    "start_join": ["172.18.0.11","172.18.0.12","172.18.0.13"],
    "ui":true
}
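Optionally, the configuration can be sanity-checked before starting the agents; consul validate loads the .json files in a directory the same way -config-dir does.

# Validate the configuration files under /usr/local/ before starting the agent
./consul validate /usr/local/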

Log in to each of the three containers in turn and start the Consul service in server mode:
./consul agent -server -config-dir=/usr/local > /usr/local/consul.log &
Because the configuration file already specifies the cluster IP list, there is no need to explicitly join the cluster afterwards (consul join). Under normal circumstances the three nodes automatically form a cluster and automatically elect a leader.
Check the state of the Consul cluster:
./consul members --http-addr 172.18.0.11:8500
./consul operator raft list-peers -http-addr=172.18.0.12:8500
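The same information is also exposed over the HTTP API, which is handy from any host that can reach port 8500 (a sketch, assuming default ports):

# Show the current raft leader and peer set
curl http://172.18.0.11:8500/v1/status/leader
curl http://172.18.0.11:8500/v1/status/peers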


consul client installation and configuration
Install the client on six container nodes; the six node IPs are 172.18.0.21, 172.18.0.22, 172.18.0.23, 172.18.0.24, 172.18.0.25 and 172.18.0.26.
docker run -itd --name redis01 --net mynetwork --ip 172.18.0.21 -v /usr/local/docker_file/redis01/:/usr/local/ centos 
docker run -itd --name redis02 --net mynetwork --ip 172.18.0.22 -v /usr/local/docker_file/redis02/:/usr/local/ centos 
docker run -itd --name redis03 --net mynetwork --ip 172.18.0.23 -v /usr/local/docker_file/redis03/:/usr/local/ centos 
docker run -itd --name redis04 --net mynetwork --ip 172.18.0.24 -v /usr/local/docker_file/redis04/:/usr/local/ centos 
docker run -itd --name redis05 --net mynetwork --ip 172.18.0.25 -v /usr/local/docker_file/redis05/:/usr/local/ centos 
docker run -itd --name redis06 --net mynetwork --ip 172.18.0.26 -v /usr/local/docker_file/redis06/:/usr/local/ centos
 
The configuration, service definitions and service probe scripts for the six client nodes are as follows.
This is the configuration of node 172.18.0.21; on the other nodes only bind_addr needs to be changed to that machine's (container's) IP.

client.json
{
  "data_dir": "usr/local/consuldata",
  "enable_script_checks": true,
  "bind_addr": "172.18.0.21",
  "retry_join": ["172.18.0.11","172.18.0.12","172.18.0.13"],
  "retry_interval": "30s",
  "rejoin_after_leave": true,
  "start_join": ["172.18.0.11","172.18.0.12","172.18.0.13"]
}
Start the Consul service on each client node in client mode. After startup, under normal circumstances, the node automatically joins the Consul server cluster.
./consul agent -config-dir=/usr/local/consuldata > /usr/local/consuldata/consul.log &
./consul members --http-addr 172.18.0.11:8500
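To confirm that the clients have joined, the catalog can also be queried over the HTTP API from any node that can reach a server (a sketch, assuming default ports; 3 servers plus 6 clients are expected here):

# List every node known to the cluster
curl http://172.18.0.11:8500/v1/catalog/nodes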
 
 
consul client agent service registration
Install Redis on the six container nodes in turn and build them into a cluster (steps omitted). The Consul client agents here sit in front of a 3-master / 3-slave Redis cluster; the Redis cluster installation itself is not listed here.
For Redis cluster installation see https://www.cnblogs.com/wy123/p/12012848.html; it is very convenient, creating a local (container node) 3-master / 3-slave, 6-node cluster with a single command.
The master nodes are 172.18.0.21, 172.18.0.22 and 172.18.0.23; the slave nodes are 172.18.0.24, 172.18.0.25 and 172.18.0.26.

Here the name w-master-redis-8888.service.consul is used as the service for the three master nodes of the Redis cluster.
redis-master-8888.json on node 172.18.0.21 (172.18.0.22, 172.18.0.23, 172.18.0.24, 172.18.0.25 and 172.18.0.26 are similar; only the address is changed):
{
  "services": 
  [
    {
      "name": "w-master-redis-8888",
      "tags": [
        "master"
      ],
      "address": "172.18.0.21",
      "port": 8888,
      "checks": [
        {
         "args":["sh","-c","/usr/local/consuldata/check_redis_master.sh 172.18.0.21 8888 ******"],
         "Shell":"/bin/bash",
         "interval": "15s"
        }
      ]
    }
  ]
}

redis-slave-8888.json 

{
  "services": 
  [
    {
      "name": "r-slave-redis-8888",
      "tags": [
        "master"
      ],
      "address": "172.18.0.21",
      "port": 8888,
      "checks": [
        {
         "args":["sh","-c","/usr/local/consuldata/check_redis_slave.sh 172.18.0.21 8888 ******"],
         "Shell":"/bin/bash",
         "interval": "15s"
        }
      ]
    }
  ]
}
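Once both JSON files are placed in the client's configuration directory (/usr/local/consuldata as configured above), the local agent has to pick them up. A minimal sketch:

# Re-read the service definitions on the local client agent
./consul reload
# List the services now registered on this agent
curl http://127.0.0.1:8500/v1/agent/services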

Health check script check_redis_master.sh for the Redis master (write) nodes, deployed on the Consul client nodes.
The script below is adapted from https://www.cnblogs.com/gomysql/p/8010552.html with a few simple changes; the role-detection logic still needs to be strengthened.

#!/bin/bash
# Health check for the Redis write (master) service:
# exit 0 if this node currently reports role:master, exit 2 otherwise.
host=$1
myport=$2
auth=$3
# Fall back to a dummy empty password if none was passed
if [ ! -n "$auth" ]
then
    auth='\"\"'
fi
comm="/usr/local/redis_instance/redis8888/bin/redis-cli -h $host -p $myport -a $auth "
# A count of 1 means INFO Replication reports role:master on this node
role=`echo 'INFO Replication'|$comm |grep -Ec 'role:master'`
echo 'INFO Replication'|$comm
if [ $role -ne 1 ]
then
    exit 2
fi
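Consul maps script exit codes to check states: 0 is passing, 1 is warning, and any other code (such as the 2 above) is critical, so a node that is not currently a master drops out of the w-master-redis-8888 service. The script can be run by hand to verify this (the password below is a placeholder):

# Run the check manually; on a master it prints the INFO Replication output and exits 0
./check_redis_master.sh 172.18.0.21 8888 your_redis_password
echo $?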

Health check script check_redis_slave.sh for the Redis slave (read) nodes, deployed on the Consul client nodes.

#!/bin/bash
# Health check for the Redis read (slave) service:
# exit 0 if this node currently reports role:slave, exit 2 otherwise.
host=$1
myport=$2
auth=$3
# Fall back to a dummy empty password if none was passed
if [ ! -n "$auth" ]
then
    auth='\"\"'
fi
comm="/usr/local/redis_instance/redis8888/bin/redis-cli -h $host -p $myport -a $auth "
# A count of 1 means INFO Replication reports role:slave on this node
role=`echo 'INFO Replication'|$comm |grep -Ec 'role:slave'`
echo $role
echo 'INFO Replication'|$comm

if [ $role -ne 1 ]
then
    exit 2
fi

 

Consul Service Discovery

After the Redis cluster is configured successfully, reload the agent services with consul reload. If everything goes well, the registered services can be resolved through the Consul servers.
The two registered services are r-slave-redis-8888 and w-master-redis-8888, representing the read nodes and the write nodes of the Redis cluster respectively.
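The resolution results below were obtained through Consul's DNS interface; a sketch of the queries used (assuming the default DNS port 8600 on one of the server nodes):

# Resolve the write (master) service to its current healthy node IPs
dig @172.18.0.11 -p 8600 w-master-redis-8888.service.consul +short
# Resolve the read (slave) service
dig @172.18.0.11 -p 8600 r-slave-redis-8888.service.consul +short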

It can be seen that the service w-master-redis-8888.service.consul resolves successfully and maps to the three nodes 172.18.0.21, 172.18.0.22 and 172.18.0.23.
Note that these three nodes are the write nodes; the goal here is only to demonstrate service discovery (even though the Redis driver itself supports multiple IPs).

Resolving the service r-slave-redis-8888.service.consul points to the three slave nodes, 172.18.0.24, 172.18.0.25 and 172.18.0.26.

Service discovery after failover: simulate a master node failure by manually failing over node 172.18.0.21, so that 172.18.0.21 and 172.18.0.24 exchange roles.
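A manual failover of this kind is triggered on the replica that should take over; a sketch using the redis-cli path from the check scripts above (the password is a placeholder):

# Run CLUSTER FAILOVER on replica 172.18.0.24 so it swaps roles with its master 172.18.0.21
/usr/local/redis_instance/redis8888/bin/redis-cli -h 172.18.0.24 -p 8888 -a your_redis_password cluster failover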

After the Redis cluster failover, resolving the service w-master-redis-8888.service.consul again successfully returns the three master nodes 172.18.0.24, 172.18.0.22 and 172.18.0.23.


Problems encountered:
1. When setting up the Consul server cluster, the datacenter was initially given a custom name, myconsule_datacenter, and the client nodes simply could not join; with the default dc1 there was no problem.
   I still do not understand what the naming rules for this datacenter are.
2. The shell scripts on the container nodes must be made executable: chmod +x check_XXX.sh.
3. For other unexpected problems, always read the logs; a quick search will usually turn up an answer.
The following are purely Redis cluster issues with no direct relation to Consul; they are listed only as problems encountered during this test.
4. When building the Redis cluster on the container nodes, 127.0.0.1 must be removed from the bind addresses and the IP assigned when the Docker container was created must be configured directly (see the sketch after this list); otherwise cluster creation hangs forever at "Waiting for the cluster to join". redis-cli --cluster handles this rather badly: it clearly cannot find the nodes, yet it keeps waiting indefinitely unless terminated manually.
5. Because master and slave roles in a Redis cluster are relative and the nodes must authenticate to each other, both "masterauth" and "requirepass" must be set on every node, with the same password (see the sketch after this list); otherwise cluster failover reports success but the failover does not actually happen.
6. One mysterious problem (also seen before with multiple instances on a single machine): if the Redis service in a container is started with an absolute path, slave nodes cannot be added to the cluster during cluster creation; after stopping the service and restarting it with a relative path, the problem disappears.
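For items 4 and 5, the relevant redis.conf lines on each node look roughly like this (a sketch; the IP and the password are per-node placeholders):

# Bind only the container IP assigned by Docker; do not keep 127.0.0.1 here (item 4)
bind 172.18.0.21
# Password required from clients connecting to this node
requirepass your_redis_password
# Password this node uses when authenticating to its master; must match requirepass on every node (item 5)
masterauth your_redis_password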

Overall, Consul is fairly simple to use as middleware, and its configuration is clean, unlike the nauseating configuration structure of certain other middleware (mycat???).
Multi-datacenter mode is not configured here, only single-datacenter mode. As service discovery middleware it works perfectly well, especially for databases such as MySQL clusters whose connection drivers do not support multiple IPs.
