Using Consul in microservices

 

Foreword

Common registration centers include ZooKeeper, Eureka, Consul, and etcd.
Considering ecosystem maturity, convenience, and language independence, we choose Consul: it supports multiple data centers and key/value storage, and can be extended into a configuration center.
GitHub: https://github.com/hashicorp/consul
Official site: https://learn.hashicorp.com/consul

Consul features

Consul is distributed, highly available, and horizontally scalable. Some of the key features Consul offers:

  • Service discovery: through its DNS or HTTP interface, Consul makes service registration and service discovery easy; external services, such as SaaS offerings, can also be registered.
  • Health checking: health checks let Consul quickly alert operators to problems in the cluster. Integrated with service discovery, they prevent traffic from being forwarded to failed services.
  • Key/value storage: a storage system for dynamic configuration. It exposes a simple HTTP interface that can be used from anywhere.
  • Multi-datacenter: any number of regions can be supported without complex configuration.
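The HTTP interface mentioned above can be exercised with nothing but curl. A minimal sketch, assuming an agent at the default address localhost:8500 (an assumption; the script is guarded so it degrades to a message when no agent is running):

```shell
# A minimal sketch of querying service discovery over HTTP.
# Assumes an agent at localhost:8500 (the default port); guarded so it
# prints a message instead of failing when no agent is reachable.
CONSUL_ADDR="http://localhost:8500"
if curl -sf "$CONSUL_ADDR/v1/status/leader" >/dev/null 2>&1; then
  # Every registered service name known to the catalog, as JSON.
  SERVICES=$(curl -s "$CONSUL_ADDR/v1/catalog/services")
else
  SERVICES="no agent reachable at $CONSUL_ADDR"
fi
echo "$SERVICES"
```

On a fresh agent this returns at least the built-in `consul` service itself.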

 

Consul architecture

Consul is usually deployed as a cluster composed of multiple Consul nodes, and each node takes one of two roles: Server or Client.
A node declares its role, client or server, when it starts.
A client node is only responsible for forwarding external requests: every service registered against it is forwarded to a server node, as are key/value reads and writes. It persists no information itself and is stateless.
A server node is responsible for data consistency, using the Raft protocol; it responds to client requests, maintains cluster state, and communicates with other data centers. It also persists all information locally, so data survives failures.
Nodes exchange data with each other via the gossip protocol and eventually reach consensus.

Consul concepts

  • Agent: the long-running daemon of a Consul cluster, started with the consul agent command. It can run in client or server mode, can expose DNS or HTTP interfaces, and its main job is to run checks and keep services in sync.
  • Client: stateless; its only job is to forward requests to the server cluster within the LAN, at minimal cost.
  • Server: saves configuration information and forms a highly available cluster. It communicates with local clients over the LAN and with other data centers over the WAN; 3 or 5 servers per data center are recommended.
  • Datacenter: a data center; multiple data centers work together to keep data storage safe and efficient.
  • Consensus: the consistency protocol, implemented with Raft.
  • RPC: remote procedure calls between nodes.
  • Gossip: membership management, failure detection, and event broadcasting, implemented on top of Serf. Messages between nodes travel over UDP, with separate pools for the LAN and the WAN.

 

Consul installation and testing

Installation on Linux

(1) Go to the official download page and pick the package for your platform: https://www.consul.io/downloads.html
or: wget https://releases.hashicorp.com/consul/1.5.1/consul_1.5.1_linux_amd64.zip
(2) After downloading, unpack it to get a single executable file, consul
(3) Move the file onto the global PATH:
  $ sudo mv consul /usr/local/bin/
(4) Verify the installation: $ consul

Installation on Windows

(1) Go to the official download page and pick the package for your platform: https://www.consul.io/downloads.html
(2) After downloading, unpack it to get an executable file, consul.exe
(3) Add the file's directory to the PATH environment variable

(4) Start Consul:

consul agent -dev -ui -node=cy-dev: this mode cannot be used in production, because it persists no state; it exists only to start a single Consul node quickly and easily. -node names the node cy-dev, and -ui enables the web interface at the default address, http://localhost:8500

  • Node name: the unique name of the agent. By default this is the machine's hostname, but it can be customized with the -node flag. 
  • Datacenter: the data center the agent is configured to run in. It must be set on each node, so the node can report to the other members of its data center. The -datacenter flag sets it; in a single-DC setup the agent defaults to "dc1". 
  • Server: whether the agent runs in server or client mode. Server: false (bootstrap: false) means it is not running in server mode; -dev actually starts a development-mode server. 
  • Client address: the address the agent binds its client interfaces to, including the HTTP and DNS ports. By default it binds only to localhost. 
  • Cluster address: the address and port used for communication between Consul agents in the cluster. Not every agent in the cluster has to use the same port, but the address must be reachable by all other nodes.
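Once a dev agent is up, the settings above can be sanity-checked from the command line. A sketch, assuming the default HTTP port 8500 (guarded so it only prints a hint when no agent is running):

```shell
# Quick sanity checks for a dev agent started with: consul agent -dev -ui -node=cy-dev
# Assumes the default HTTP port 8500; guarded for the no-agent case.
if curl -sf http://localhost:8500/v1/status/leader >/dev/null 2>&1; then
  LEADER=$(curl -s http://localhost:8500/v1/status/leader)  # current Raft leader, e.g. "127.0.0.1:8300"
  consul members                                            # membership as seen by this agent
else
  LEADER="no agent on :8500; start one with: consul agent -dev -ui -node=cy-dev"
fi
echo "$LEADER"
```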
Common consul agent flags 

1. Change the default port. 
Use the -http-port command-line flag, e.g. to change it to 8080: 
consul agent -dev -http-port=8080 

2. Allow public access. 
Use -client 0.0.0.0: 
consul agent -dev -http-port=8080 -client 0.0.0.0 

3. View cluster node information 
consul members 
Node  Address          Status  Type    Build  Protocol  DC   Segment 
n3    127.0.0.1:8301   alive   server  1.1.0  2         dc1  <all> 

Node: node name 
Address: node address 
Status: alive means the node is healthy 
Type: server means the node is running in server mode 
DC: dc1 means the node belongs to data center dc1 

4. -data-dir 
Purpose: specifies the directory where the agent stores its state. Required for all agents, and especially important for servers, because they persist the cluster state there. 

5. -config-dir 
Purpose: specifies the directory holding configuration files and service/check definitions, conventionally consul.d; the files in it must be in JSON format. See the official docs for the detailed configuration. 

6. -config-file 
Purpose: specifies a single configuration file to load. 

7. -dev 
Purpose: development server mode. Although it is server mode, it is not for production, because nothing is persisted: no data is written to disk. 

8. -bootstrap-expect 
Purpose: the minimum number of server nodes required before a leader election runs. Set to 1, a single node may hold an election; set to 3, the cluster waits until three consul server nodes have joined before the election runs and the cluster can start working normally. 3 to 5 server nodes are usually recommended. 

9. -node 
Purpose: the node's name in the cluster, which must be unique (it defaults to the machine's hostname); the machine's IP can be used directly. 

10. -bind 
Purpose: the IP address the node listens on, usually 0.0.0.0 or an internal network address; on cloud servers such as Alibaba Cloud, the public address cannot be used. This is the address Consul listens on, and it must be reachable by all other nodes in the cluster. A bind address is not absolutely required, but providing one is preferable. 

11. -server 
Purpose: marks the node as a server; 3 to 5 servers per data center (DC) are recommended. 

12. -client 
Purpose: marks the node as a client and sets the bind address for the client interfaces: HTTP, DNS, and RPC. 
The default is 127.0.0.1, which only allows access via the loopback interface. 

13. -datacenter 
Purpose: specifies which data center the machine joins. Older versions called this -dc; -dc is now deprecated.

 

Consul cluster

A cluster generally has three or five server nodes. Why an odd number? Because the cluster must elect a leader to guarantee data consistency, and a leader needs more than half of the votes to be elected.

The basic command:
consul agent -server -bind=10.0.xx.55 -client=0.0.0.0 -bootstrap-expect=3 -data-dir=/data/application/consul_data/ -config-dir /etc/consul.d -node=server1

-server starts the node with the server identity
-bind selects which IP to bind (some servers have multiple network cards; bind forces a specific IP)
-client specifies the IP clients may access from (Consul has a rich set of APIs; "client" here means the browser or API caller), and 0.0.0.0 means any client IP
-bootstrap-expect=3 means the cluster needs at least 3 server nodes; below that it will not work (note: like ZooKeeper, clusters usually have an odd size to simplify elections; Consul uses the Raft algorithm)
-data-dir specifies the data storage directory (the directory must exist)
-node is the node name shown in the web UI
-config-dir is the configuration directory; every file in it ending with .json is loaded

After a successful start, do not close the terminal window. In a browser, visit something like http://10.0.xx.55:8500/. If everything is normal you should see a line of text: Consul Agent.

To keep Consul running after the terminal closes, add a little to the command to push it into the background, like:
nohup XXX > /dev/null 2>&1 &

 

Now we use four machines to create the cluster:

10.0.xx.55, 10.0.xx.203, 10.0.xx.204, 10.0.xx.205

1. Start the server end

nohup consul agent -server -bind=10.0.xx.55 -client=0.0.0.0 -bootstrap-expect=3 -data-dir=/data/application/consul_data/ -config-dir /etc/consul.d -node=server1 > /dev/null 2>&1 &
nohup consul agent -server -bind=10.0.xx.203 -client=0.0.0.0 -bootstrap-expect=3 -data-dir=/data/application/consul_data/ -config-dir /etc/consul.d -node=server2 > /dev/null 2>&1 &
nohup consul agent -server -bind=10.0.xx.204 -client=0.0.0.0 -bootstrap-expect=3 -data-dir=/data/application/consul_data/ -config-dir /etc/consul.d -node=server3 > /dev/null 2>&1 &

Note: change the IP in the bind parameter and the node name in the node parameter on each machine.

2. Start the client end
Almost exactly the same; just remove -server. Running on 10.0.xx.205:

nohup consul agent -client=0.0.0.0 -data-dir=/data/application/consul_data/ -node=client1 -config-dir /etc/consul.d -ui > /dev/null 2>&1 &

3. Form the cluster

We now have three server nodes + one client node, but the four nodes are independent of each other. On any node, run:

consul members

 

We can see that each node only knows about itself.

To make a node join the cluster, run the following command (here, the other three nodes all join 10.0.xx.205):

consul join 10.0.xx.205

On success, it outputs:

Successfully joined cluster by contacting 1 nodes.

On the other two nodes (that is, the remaining nodes other than 10.0.xx.205), run the same command to join them to the cluster; when done, verify again:

 

Now we can see the information of all four nodes.

Tip: conversely, to remove a node from the cluster, run consul leave on that node.
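The join step above can be sketched as a small script run on each of the remaining machines (IP from this article's example; guarded so that without a running local agent it only prints what it would do):

```shell
# Sketch: join the remaining nodes to the cluster via 10.0.xx.205
# (the example IP from this article). Guarded: without a running
# local agent it only reports the command it would have run.
for peer in 10.0.xx.205; do
  if consul members >/dev/null 2>&1; then
    JOINED=$(consul join "$peer")
  else
    JOINED="no local agent; would run: consul join $peer"
  fi
done
echo "$JOINED"
```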

 

 

Service registration / deregistration

Once the Consul cluster is built, users or programs can query Consul or register services with it.
A service can be registered either through a service definition file or by invoking the HTTP API.
1. Service definition file
Create a web.json file in /etc/consul.d/ on 10.0.xx.205, with the following contents:

{
    "ID": "nginx1",
    "Name": "nginx",
    "Tags": [
        "primary",
        "v1"
    ],
    "Address": "127.0.0.1",
    "Port": 80,
    "EnableTagOverride": false,
    "Check": {
        "DeregisterCriticalServiceAfter": "12h",
        "HTTP": "http://localhost:5000/health",
        "Interval": "1s"
    }
}
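Before dropping a definition like this into the config directory, it is worth validating the JSON, since a malformed file keeps the agent from loading it. A sketch (using /tmp/web.json instead of /etc/consul.d/web.json so it needs no root; the path and the use of python3 as a validator are assumptions):

```shell
# Sketch: write the service definition and validate it locally before
# deploying. /tmp/web.json stands in for /etc/consul.d/web.json here.
CONF=/tmp/web.json
cat > "$CONF" <<'EOF'
{
    "ID": "nginx1",
    "Name": "nginx",
    "Tags": ["primary", "v1"],
    "Address": "127.0.0.1",
    "Port": 80,
    "EnableTagOverride": false,
    "Check": {
        "DeregisterCriticalServiceAfter": "12h",
        "HTTP": "http://localhost:5000/health",
        "Interval": "1s"
    }
}
EOF
# Validate the JSON if python3 is available.
if command -v python3 >/dev/null 2>&1; then
  python3 -m json.tool "$CONF" >/dev/null && VALID=yes || VALID=no
else
  VALID=unchecked
fi
echo "$VALID"
# On the agent host, `consul reload` then picks up the new file.
```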

2. Call the HTTP API
Use Postman (or any other REST tool; curl will do) to send the following JSON to http://10.0.xx.205:8500/v1/agent/service/register, with the HTTP method set to PUT and Content-Type set to application/json:

{
    "ID": "nginx1",
    "Name": "nginx",
    "Tags": [
        "primary",
        "v1"
    ],
    "Address": "127.0.0.1",
    "Port": 80,
    "EnableTagOverride": false,
    "Check": {
        "DeregisterCriticalServiceAfter": "12h",
        "HTTP": "http://localhost:5000/health",
        "Interval": "1s"
    }
}
Below is a short list of the most commonly used commands and API endpoints. 

1. Start the Consul server and client 

server: 

    nohup /consul/consul agent -ui -config-dir=/consul/config 1> /consul/consul.log 2>&1 & 

In the files under /consul/config you can set bind_addr to 192.168.0.100 and the server attribute to true. 

client: 

    nohup /consul/consul agent -ui -config-dir=/consul/config -join=192.168.0.100 > /consul/consul.log 2>&1 & 

2. List cluster members: 

    curl http://localhost:8500/v1/agent/members




3. Add a service

    curl  --request PUT  --data @test.json http://localhost:8500/v1/agent/service/register

Contents of test.json:

    {
        "ID": "nginx1",
        "Name": "nginx",
        "Tags": [
            "primary",
            "v1"
        ],
        "Address": "127.0.0.1",
        "Port": 80,
        "EnableTagOverride": false,
        "Check": {
            "DeregisterCriticalServiceAfter": "12h",
            "HTTP": "http://localhost:5000/health",
            "Interval": "1s"
        }
    }
HTTP + Interval is one type of health check: the check periodically (every 30 seconds by default) sends an HTTP GET request to the given URL. A 2xx response code counts as passing, 429 (too many requests) counts as warning, and anything else counts as failure. 
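The status mapping just described can be sketched as a tiny shell function (an illustration of the rule, not Consul's actual implementation):

```shell
# Sketch of how an HTTP check result maps to a check status:
# 2xx -> passing, 429 -> warning, anything else -> critical.
classify() {
  case "$1" in
    2[0-9][0-9]) echo passing ;;
    429)         echo warning ;;
    *)           echo critical ;;
  esac
}
classify 200   # passing
classify 429   # warning
classify 503   # critical
```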

4. Deregister a service: 

    curl -X PUT http://localhost:8500/v1/agent/service/deregister/nginx1 

5. Service discovery via DNS queries 

   Install dig: 

    yum install bind-utils 

   DNS service discovery: 

    dig @127.0.0.1 -p 8600 servicename.service.consul 

   DNS queries are integrated with the health-check system to keep routing away from unhealthy nodes: when a query runs, any node failing its checks is omitted from the results, and the returned node set is randomized, which provides simple load balancing. 
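An SRV query additionally returns the service port, which a plain A-record lookup omits. A sketch, using the nginx service registered earlier (guarded, since it needs dig and a reachable local agent):

```shell
# Sketch: SRV lookup for the nginx service registered earlier.
# Needs dig and a local agent on DNS port 8600; guarded for both.
if command -v dig >/dev/null 2>&1 && curl -sf http://localhost:8500/v1/status/leader >/dev/null 2>&1; then
  ANSWER=$(dig @127.0.0.1 -p 8600 nginx.service.consul SRV +short)
  ANSWER=${ANSWER:-"no SRV records for nginx.service.consul"}
else
  ANSWER="skipped: need dig and a local Consul agent"
fi
echo "$ANSWER"
```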

6. Health checks 

    curl http://localhost:8500/v1/agent/checks 

   Set the state of a check to passing: 

    curl http://localhost:8500/v1/agent/check/pass/nginx1 

7. View configuration 

    curl http://localhost:8500/v1/kv/commons/test/config?raw
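The KV read above only returns something once a value has been written. A round-trip sketch using the same key path, first writing with the consul CLI and then reading raw over HTTP (the value 'hello' is an arbitrary example; guarded for when no agent is running):

```shell
# Sketch: write a value into the KV store, then read it back raw.
# Key path follows the article's example; guarded for the no-agent case.
if consul kv put commons/test/config 'hello' >/dev/null 2>&1; then
  KV=$(curl -s 'http://localhost:8500/v1/kv/commons/test/config?raw')
else
  KV="no local agent; 'consul kv put' and the curl above need a running cluster"
fi
echo "$KV"
```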

 

References:

https://cloud.tencent.com/developer/article/1033169

https://www.cnblogs.com/yjmyzz/p/replace-eureka-with-consul.html

Origin www.cnblogs.com/-wenli/p/11966787.html