Consul service registration and discovery for microservices

Consul is an open-source distributed service discovery and configuration management system developed by HashiCorp and written in Go.

Service Discovery and Configuration Made Easy

 

Consul is a distributed, highly available system. This section will cover the basics, purposely omitting some unnecessary detail, so you can get a quick understanding of how Consul works. For more detail, please refer to the in-depth architecture overview.

 

Every node that provides services to Consul runs a Consul agent. Running an agent is not required for discovering other services or getting/setting key/value data. The agent is responsible for health checking the services on the node as well as the node itself.
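
As a concrete illustration, the sketch below uses the official Go client (github.com/hashicorp/consul/api) to register a service with its local agent together with an HTTP health check that the agent will run. The service name "web", its address, and the /health endpoint are made-up examples, not values from this article.

package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local Consul agent (defaults to 127.0.0.1:8500).
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Register an example service along with an HTTP health check
	// that the local agent runs every 10 seconds.
	reg := &api.AgentServiceRegistration{
		ID:      "web-1",       // hypothetical instance ID
		Name:    "web",         // hypothetical service name
		Address: "10.0.0.10",   // hypothetical address
		Port:    8080,
		Check: &api.AgentServiceCheck{
			HTTP:     "http://10.0.0.10:8080/health", // hypothetical endpoint
			Interval: "10s",
			Timeout:  "1s",
		},
	}
	if err := client.Agent().ServiceRegister(reg); err != nil {
		log.Fatal(err)
	}
	log.Println("service registered with local agent")
}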

 

The agents talk to one or more Consul servers. The Consul servers are where data is stored and replicated. The servers themselves elect a leader. While Consul can function with one server, 3 to 5 is recommended to avoid failure scenarios leading to data loss. A cluster of Consul servers is recommended for each datacenter.
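
To make the leader/server relationship concrete, here is a minimal Go sketch (same github.com/hashicorp/consul/api client as above) that asks the local agent which server is the current Raft leader and which servers are peers.

package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	// The Status endpoints report Raft state: the elected leader and
	// the set of server peers taking part in replication.
	leader, err := client.Status().Leader()
	if err != nil {
		log.Fatal(err)
	}
	peers, err := client.Status().Peers()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("leader: %s, server peers: %v", leader, peers)
}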

 

Components of your infrastructure that need to discover other services or nodes can query any of the Consul servers or any of the Consul agents. The agents forward queries to the servers automatically.

 

Each datacenter runs a cluster of Consul servers. When a cross-datacenter service discovery or configuration request is made, the local Consul servers forward the request to the remote datacenter and return the result.
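
In practice, a cross-datacenter lookup only requires setting the Datacenter field on the query options; the local servers forward the request to the remote datacenter. In this Go sketch the service name "web" and the datacenter name "dc2" are hypothetical.

package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	// Query the catalog of a remote datacenter; the local servers
	// forward the request to "dc2" and return the result.
	services, _, err := client.Catalog().Service("web", "", &api.QueryOptions{
		Datacenter: "dc2",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range services {
		log.Printf("%s at %s:%d", s.ServiceName, s.ServiceAddress, s.ServicePort)
	}
}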

 

It has many advantages: it is based on the Raft protocol, which is relatively simple; it supports health checks and exposes both HTTP and DNS interfaces; it supports WAN clusters spanning datacenters; it provides a graphical interface; and it is cross-platform, running on Linux, macOS, and Windows.

 

Service Discovery

HashiCorp Consul makes it simple for services to register themselves and to discover other services via a DNS or HTTP interface. Register external services such as SaaS providers as well.
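
A discovery query over the HTTP interface might look like the following Go sketch, which lists only instances whose health checks are passing (the same data is also available over DNS, e.g. as web.service.consul); the "web" service name is again just an example.

package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	// Look up all instances of the "web" service whose health checks pass.
	entries, _, err := client.Health().Service("web", "", true, nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		addr := e.Service.Address
		if addr == "" {
			addr = e.Node.Address // fall back to the node address
		}
		log.Printf("healthy instance: %s:%d", addr, e.Service.Port)
	}
}

Because the query above asks for passing instances only, it is also the building block for the failure-detection behaviour described next.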

 

Failure Detection

Pairing service discovery with health checking prevents routing requests to unhealthy hosts and enables services to easily provide circuit breakers.

 

Multi Datacenter

Consul scales to multiple datacenters out of the box with no complicated configuration. Look up services in other datacenters, or keep the request local.

 

KV Storage

Flexible key/value store for dynamic configuration, feature flagging, coordination, leader election and more. Long poll for near-instant notification of configuration changes.
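
The long poll is a blocking query: the client passes the last index it has seen, and the request returns only when the value changes or the wait time elapses. A minimal Go sketch, with a made-up key myapp/config/feature-x:

package main

import (
	"log"
	"time"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	kv := client.KV()

	// Write a configuration value (the key is a hypothetical example).
	if _, err := kv.Put(&api.KVPair{Key: "myapp/config/feature-x", Value: []byte("on")}, nil); err != nil {
		log.Fatal(err)
	}

	// Long poll: a blocking query waits until the key changes (or the
	// wait time elapses), giving near-instant change notification.
	var index uint64
	for {
		pair, meta, err := kv.Get("myapp/config/feature-x", &api.QueryOptions{
			WaitIndex: index,
			WaitTime:  5 * time.Minute,
		})
		if err != nil {
			log.Fatal(err)
		}
		if pair != nil && meta.LastIndex != index {
			log.Printf("config changed: %s", pair.Value)
		}
		index = meta.LastIndex
	}
}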

 

Commonly used frameworks for service discovery are:

zookeeper

eureka

etcd

consul

 

 

Basic concepts

agent

Each member of the Consul cluster must run an agent, which is started with the consul agent command. The agent can run in either server mode or client mode; nodes running in server mode are called server nodes, and nodes running in client mode are called client nodes.
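
A quick way to see which members are running in server mode versus client mode is to list the agent's LAN gossip members. In the Go sketch below, the interpretation of the role tag ("consul" for servers, "node" for clients) is how Consul commonly labels members, but treat it as an assumption rather than a guarantee.

package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	// List the members the local agent knows about via the LAN gossip pool.
	// Servers typically advertise role=consul, clients role=node
	// (this detail may vary by version).
	members, err := client.Agent().Members(false)
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range members {
		log.Printf("%s (%s) role=%s", m.Name, m.Addr, m.Tags["role"])
	}
}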

 

client node

Responsible for forwarding all RPCs to the server node. It is stateless and lightweight, so a large number of client nodes can be deployed.

 

server node

Responsible for the more complex work of forming a cluster (leader election, state maintenance, forwarding requests to the leader) and for the services Consul provides (responding to RPC requests). Considering fault tolerance and convergence, deploying 3 to 5 server nodes is generally appropriate.

 

Gossip

Consul uses a Serf-based gossip protocol for membership, failure detection, event broadcasting, and so on. Messages between nodes are exchanged over UDP. There are two gossip pools: one for the LAN and one for the WAN.

 

Consul use scenarios

1) Registration and configuration sharing for Docker instances

2) Registration and configuration sharing for CoreOS instances

3) Vitess clusters

4) Configuration sharing for SaaS applications

5) Integration with confd to dynamically generate nginx and haproxy configuration files

 

Advantages of Consul

Uses the Raft algorithm to ensure consistency, which is more straightforward than the complex Paxos algorithm. By comparison, ZooKeeper uses ZAB (a Paxos-like protocol), while etcd also uses Raft.

Supports multiple datacenters, with internal- and external-network services listening on different ports. A multi-datacenter cluster avoids a single point of failure in any one datacenter, though its deployment must take network latency, partitioning, and similar factors into account. Neither ZooKeeper nor etcd has built-in multi-datacenter support.

Supports health checks; etcd does not provide this feature.

Supports both HTTP and DNS interfaces. Integrating ZooKeeper is more complicated, and etcd supports only HTTP.

Provides an official web management interface; etcd does not.
