Consul basic concepts, internals, and a comparison with Eureka

In this article we look at Consul's basic concepts and terminology, the principles of its internal implementation, and a comparison with Eureka.

# 1. What is Consul?
Consul is a service mesh solution providing a full-featured control plane with service discovery, configuration, and segmentation functionality. Each of these features can be used individually as needed, or they can be used together to build a full service mesh. Consul requires a data plane and supports both a proxy and a native integration model. It ships with a simple built-in proxy so that everything works out of the box, but it also supports third-party proxy integrations such as Envoy.
The key features of Consul are:

- Service Discovery: Clients of Consul can register a service, such as `api` or `mysql`, and other clients can use Consul to discover providers of a given service. Using either DNS or HTTP, applications can easily find the services they depend upon (see the sketch after this list).
- Health Checking: Consul clients can provide any number of health checks, associated either with a given service ("is the web server returning 200 OK?") or with the local node ("is memory utilization below 90%?"). Operators can use this information to monitor cluster health, and the service discovery components use it to route traffic away from unhealthy hosts.
- KV Store: Applications can use Consul's hierarchical key/value store for any purpose, including dynamic configuration, feature flagging, coordination, and leader election. The simple HTTP API makes it easy to use.
- Secure Service Communication: Consul can generate and distribute TLS certificates for services so that they can establish mutual TLS connections. [Intentions](https://www.consul.io/docs/connect/intentions.html) can be used to define which services are allowed to communicate, so service segmentation can be managed easily and changed in real time, instead of relying on complex network topologies and static firewall rules.
- Multiple Datacenters: Consul supports multiple datacenters out of the box. This means users do not have to worry about building additional layers of abstraction on top of Consul to grow to multiple regions.
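
To make the service discovery and KV features concrete, here is a minimal sketch using Consul's official Go client (`github.com/hashicorp/consul/api`). The service name `api`, the port, and the `/health` endpoint are made-up examples, and a local agent is assumed to be listening on the default `127.0.0.1:8500`:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local Consul agent (default address 127.0.0.1:8500).
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Register a service named "api" with an HTTP health check.
	// Name, port, and health endpoint are hypothetical.
	err = client.Agent().ServiceRegister(&api.AgentServiceRegistration{
		Name: "api",
		Port: 8080,
		Check: &api.AgentServiceCheck{
			HTTP:     "http://localhost:8080/health",
			Interval: "10s",
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// Discover healthy providers of the "api" service.
	entries, _, err := client.Health().Service("api", "", true, nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		fmt.Printf("api instance: %s:%d\n", e.Node.Address, e.Service.Port)
	}

	// Use the KV store for dynamic configuration.
	if _, err := client.KV().Put(&api.KVPair{
		Key:   "config/feature-x",
		Value: []byte("on"),
	}, nil); err != nil {
		log.Fatal(err)
	}
	pair, _, err := client.KV().Get("config/feature-x", nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s = %s\n", pair.Key, pair.Value)
}
```

The same registration and lookup can also be done with plain HTTP calls or DNS queries against the agent; the Go client is just a thin wrapper over that HTTP API.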

# 2. Consul architecture
## 2.1 Consul terminology
Before describing the architecture, we provide a glossary of terms to help clarify what is being discussed:
- Agent - An agent is the long-running daemon on every member of the Consul cluster. It is started by running `consul agent`. The agent can run in either client or server mode. Since all nodes must run an agent, it is simpler to refer to a node as being either a client or a server, but there are other instances of agents. All agents can run the DNS or HTTP interfaces, and are responsible for running checks and keeping services in sync (see the sketch after this glossary).

- Client - A client is an agent that forwards all RPCs to a server. The client is relatively stateless. The only background activity a client performs is taking part in the LAN gossip pool. This has a minimal resource overhead and consumes only a small amount of network bandwidth.
- Server - A server is an agent with an expanded set of responsibilities, including participating in the Raft quorum, maintaining cluster state, responding to RPC queries, exchanging WAN gossip with other datacenters, and forwarding queries to leaders or remote datacenters.

- Datacenter - While the definition of a datacenter may seem obvious, there are subtle details that must be considered. For example, in EC2, are multiple availability zones considered to comprise a single datacenter? We define a datacenter to be a networking environment that is private, low latency, and high bandwidth. This excludes communication that would traverse the public internet, but for our purposes, multiple availability zones within a single EC2 region would be considered part of a single datacenter.

- Consensus - When used in our documentation, we use consensus to mean agreement upon the elected leader as well as agreement on the ordering of transactions. Since these transactions are applied to a [finite-state machine](https://en.wikipedia.org/wiki/Finite-state_machine), our definition of consensus implies the consistency of a replicated state machine. Consensus is described in more detail on [Wikipedia](https://en.wikipedia.org/wiki/Consensus_%28computer_science%29), and our implementation is described [here](https://www.consul.io/docs/internals/consensus.html).

- Gossip - Consul is built on top of [Serf](https://www.serf.io/), which provides a full [gossip protocol](https://en.wikipedia.org/wiki/Gossip_protocol) that is used for multiple purposes. Serf provides membership maintenance, failure detection, and event broadcast. Our use of these is described more in these documents. Gossip involves random node-to-node communication, primarily over UDP.

- LAN Gossip - Refers to the LAN gossip pool, which contains nodes that are all located on the same local area network or datacenter.

- WAN Gossip - Refers to the WAN gossip pool, which contains only servers. These servers are primarily located in different datacenters and typically communicate over the internet or a wide area network.

- RPC - Remote Procedure Call. This is a request/response mechanism allowing a client to make a request of a server.
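
As a small illustration of the agent's HTTP interface and the two gossip pools just defined, the Go client can list the members of each pool. This is a sketch assuming a local agent on the default address; note that the WAN pool is only meaningful when the local agent is a server:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// LAN gossip pool: every agent (client and server) in this datacenter.
	lan, err := client.Agent().Members(false)
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range lan {
		fmt.Printf("LAN member: %s (%s)\n", m.Name, m.Addr)
	}

	// WAN gossip pool: server agents only, spanning all datacenters.
	// Querying this on a client-mode agent may return nothing useful.
	wan, err := client.Agent().Members(true)
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range wan {
		fmt.Printf("WAN member: %s (%s)\n", m.Name, m.Addr)
	}
}
```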

## 2.2 A 10,000-foot view of Consul
![Consul Chart](https://img-blog.csdnimg.cn/20190607103459828.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L1llbGxvd1N0YXI1,size_16,color_FFFFFF,t_70)
Let's break this image down and describe each piece. First of all, we can see that there are two datacenters, labeled "one" and "two". Consul has first-class support for multiple datacenters and expects this to be the common case.

Within each datacenter, we have a mixture of clients and servers. **It is expected that there be three to five servers. This strikes a balance between availability in the case of failure and performance, as consensus gets progressively slower as more machines are added.** However, there is no limit to the number of clients, and they can easily scale into the thousands or tens of thousands.

**All the nodes in a datacenter participate in a gossip protocol.** This means there is a gossip pool that contains all the nodes for a given datacenter. This serves a few purposes: first, there is no need to configure clients with the addresses of servers; discovery is done automatically. Second, the work of detecting node failures is not placed on the servers but is distributed. This makes failure detection much more scalable than naive heartbeating schemes. Third, the pool is used as a messaging layer to notify when important events such as leader election take place.


The servers in each datacenter are all part of a single Raft peer set. This means that they work together to elect a single leader, a selected server which has extra duties. **The leader is responsible for processing all queries and transactions. Transactions must also be replicated to all peers as part of the consensus protocol. Because of this requirement, when a non-leader server receives an RPC request, it forwards it to the cluster leader.**
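
The leader and the Raft peer set are directly observable through the status endpoints. A minimal sketch with the Go client, assuming a local agent:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// The current Raft leader of the local datacenter, as "host:port".
	leader, err := client.Status().Leader()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("raft leader:", leader)

	// The full Raft peer set: the servers participating in consensus.
	peers, err := client.Status().Peers()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("raft peers:", peers)
}
```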

**The server nodes also operate as part of a WAN gossip pool.** This pool is different from the LAN pool in that it is optimized for the higher latency of the internet and is expected to contain only other Consul server nodes. The purpose of this pool is to allow datacenters to discover each other in a low-touch manner. Bringing a new datacenter online is as easy as joining the existing WAN gossip pool. Because the servers are all operating in this pool, it also enables cross-datacenter requests. **When a server receives a request for a different datacenter, it forwards it to a random server in the correct datacenter. That server may then forward to the local leader.**

This results in very low coupling between datacenters, but because of failure detection, connection caching, and multiplexing, cross-datacenter requests are relatively fast and reliable.

In general, data is not replicated between different Consul datacenters. When a request is made for a resource in another datacenter, the local Consul servers forward an RPC request to the remote Consul servers for that resource and return the results. If the remote datacenter is not available, then those resources will also not be available, but that won't otherwise affect the local datacenter. There are some special situations where a limited subset of data can be replicated, such as with Consul's built-in [ACL replication](https://learn.hashicorp.com/consul/day-2-operations/acl-replication), or external tools like [consul-replicate](https://github.com/hashicorp/consul-replicate).
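
From a client's point of view, a cross-datacenter request is just an ordinary query with the target datacenter set; the local servers do the WAN forwarding described above. A sketch with the Go client, where the datacenter name `two` (matching the diagram) and the service name `api` are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// All datacenters known via the WAN gossip pool.
	dcs, err := client.Catalog().Datacenters()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("datacenters:", dcs)

	// Query a service in another datacenter. The local servers forward
	// the RPC over the WAN for us; nothing is replicated locally.
	entries, _, err := client.Health().Service("api", "", true,
		&api.QueryOptions{Datacenter: "two"})
	if err != nil {
		log.Fatal(err)
	}
	for _, e := range entries {
		fmt.Printf("remote instance: %s:%d\n", e.Node.Address, e.Service.Port)
	}
}
```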

In some places, client agents may cache data from the servers to make it available locally for performance and reliability. Examples include Connect certificates and intentions, which allow the client agent to make local decisions about inbound connection requests without a round trip to the servers. Some API endpoints also support optional result caching. This helps reliability, because the local agent can continue to respond to some queries, such as service discovery or Connect authorization, from cache even if the connection to the servers is disrupted or the servers are temporarily unavailable.
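
Recent Consul versions expose this optional result caching to clients as a per-query flag. A hedged sketch with the Go client; the `UseCache` option and the `CacheHit` metadata field assume a Consul version with agent caching support, and the service name `api` is a placeholder:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Ask the local agent to answer from its cache when possible,
	// instead of always making a round trip to the servers.
	entries, meta, err := client.Health().Service("api", "", true,
		&api.QueryOptions{UseCache: true})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%d instances, served from cache: %v\n",
		len(entries), meta.CacheHit)
}
```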

# 3. The Gossip protocol and the Raft protocol
To understand how Consul works internally, there is no way around the Gossip protocol and the Raft protocol.

## 3.1 The Gossip protocol
### 3.1.1 What is the Gossip protocol?
The Gossip algorithm lives up to its name: it is inspired by office gossip. As long as one person starts gossiping, everyone will know the piece of gossip within a bounded amount of time. The way the information spreads is also very similar to how a virus spreads, so the gossip algorithm goes by many aliases: "gossip algorithm", "epidemic propagation algorithm", "viral infection algorithm", "rumor propagation algorithm". For more details, see [this blog post](https://www.cnblogs.com/xingzc/p/6165084.html).
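To see why gossip converges so quickly, here is a toy push-style simulation in Go. It is an illustration of the idea only, not Consul's implementation (Consul gossips via Serf, which is based on the SWIM protocol); the node count and fanout are arbitrary:

```go
package main

import (
	"fmt"
	"math/rand"
)

// A toy push-style gossip round: each node that already knows the rumor
// forwards it to a few random peers. The number of informed nodes grows
// roughly exponentially, so the whole cluster learns it in O(log N) rounds.
func main() {
	const nodes = 1000
	const fanout = 3 // peers contacted per round

	informed := make([]bool, nodes)
	informed[0] = true // node 0 starts the rumor

	for round := 1; ; round++ {
		next := make([]bool, nodes)
		copy(next, informed)
		for i := 0; i < nodes; i++ {
			if !informed[i] {
				continue
			}
			// Push the rumor to `fanout` randomly chosen peers.
			for j := 0; j < fanout; j++ {
				next[rand.Intn(nodes)] = true
			}
		}
		informed = next

		count := 0
		for _, known := range informed {
			if known {
				count++
			}
		}
		fmt.Printf("round %d: %d/%d nodes informed\n", round, count, nodes)
		if count == nodes {
			break
		}
	}
}
```
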
### 3.1.2 The gossip protocol in Consul
Consul makes use of two different gossip pools. We refer to them as the LAN pool and the WAN pool respectively.

Each datacenter Consul operates in has a LAN gossip pool containing all members of the datacenter, both clients and servers. The LAN pool is used for a few purposes. Membership information allows clients to automatically discover servers, reducing the amount of configuration needed. The distributed failure detection allows the work of failure detection to be shared by the entire cluster instead of being concentrated on a few servers. Lastly, the gossip pool allows for reliable and fast event broadcasts for events such as leader election.

The WAN pool is globally unique: all servers should participate in the WAN pool, regardless of datacenter. Membership information provided by the WAN pool allows servers to perform cross-datacenter requests. The integrated failure detection allows Consul to gracefully handle an entire datacenter losing connectivity, or just a single server in a remote datacenter.

All of these features are provided by leveraging [Serf](https://www.serf.io/), which is used as an embedded library. From a user's perspective this should not matter, since the abstraction should be masked by Consul; as a developer, however, it is useful to know how this library is leveraged.

## 3.2 The Raft protocol
The Raft protocol is responsible for leader election and log replication.
I recommend reading [this blog post](https://www.cnblogs.com/xybaby/p/10124083.html).
You can also visit [this site](http://thesecretlivesofdata.com/raft/), which lets you experience how the Raft protocol works through animations, almost like playing an online game.


# 4. Comparison with Eureka
Eureka is part of the Netflix OSS suite; maintenance of the 2.x line has stopped, but 1.x is still active.
![Insert picture description here](https://img-blog.csdnimg.cn/20190607112436458.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L1llbGxvd1N0YXI1,size_16,color_FFFFFF,t_70)
Here we focus on the differences between Consul and Eureka. For comparisons with other systems, you can [check them out yourself at this URL](https://www.consul.io/intro/vs/index.html).

Eureka is a service discovery tool. The architecture is primarily client/server, with a set of Eureka servers per datacenter, usually one per availability zone. Typically, clients of Eureka use an embedded SDK to register and discover services. For clients that are not natively integrated, a sidecar such as Ribbon is used to transparently discover services via Eureka.

**Eureka provides a weakly consistent view of services, using best-effort replication. When a client registers with a server, that server will make an attempt to replicate to the other servers, but provides no guarantee.** Service registrations have a short Time-To-Live (TTL), requiring clients to heartbeat with the servers. Unhealthy services or nodes stop heartbeating, causing them to time out and be removed from the registry. Discovery requests can route to any server, which can serve stale or missing data due to the best-effort replication. This simplified model allows for easy cluster administration and high scalability.

**Consul provides a superset of features, including richer health checking, key/value storage, and multi-datacenter awareness.** Consul requires a set of servers in each datacenter, along with an agent on each client node, similar to using a sidecar such as Ribbon. The Consul agent allows most applications to be unaware of Consul, performing service registration via configuration files and discovery via DNS or load-balancer sidecars.

**Consul provides strong consistency guarantees, since servers replicate state using the Raft protocol.** Consul supports a rich set of health checks, including TCP, HTTP, Nagios/Sensu-compatible scripts, and the TTL-based checks that Eureka uses. Client nodes participate in a gossip-based health check that distributes the work of health checking, unlike centralized heartbeating, which becomes a scalability challenge. Discovery requests are routed to the elected Consul leader, which allows them to be strongly consistent by default. Clients that are willing to accept stale reads allow any server to process their request, enabling linear scalability like Eureka's.
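
This consistency trade-off is visible directly in the client API: the default read goes through the leader, while a stale read may be answered by any server. A sketch with the Go client, where the service name `api` is a placeholder:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Default: the read is answered via the Raft leader.
	entries, _, err := client.Health().Service("api", "", true, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("consistent read:", len(entries), "instances")

	// Stale read: any server may answer, trading freshness for
	// Eureka-like horizontal read scalability.
	entries, meta, err := client.Health().Service("api", "", true,
		&api.QueryOptions{AllowStale: true})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("stale read: %d instances, last leader contact %v\n",
		len(entries), meta.LastContact)
}
```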

**Consul's strong consistency means it can be used as a locking service for coordination and cluster leader election. Eureka does not provide similar guarantees, and services that need to perform coordination or have stronger consistency requirements typically have to run ZooKeeper as well.**
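
A minimal leader-election sketch using the lock support in the Go client; the KV key name is arbitrary, and a local agent is assumed:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// A distributed lock backed by a KV key; only one holder at a time.
	lock, err := client.LockKey("service/my-app/leader")
	if err != nil {
		log.Fatal(err)
	}

	// Blocks until the lock is acquired. The returned channel is closed
	// if leadership is lost (e.g. the underlying session is invalidated).
	lostCh, err := lock.Lock(nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("acquired leadership")

	// ... perform leader-only work here, watching lostCh ...
	_ = lostCh

	// Step down cleanly so another node can take over.
	if err := lock.Unlock(); err != nil {
		log.Fatal(err)
	}
}
```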

Consul provides the features needed to support a service-oriented architecture in a single toolkit. This includes service discovery, but also rich health checking, locking, key/value storage, multi-datacenter federation, an event system, and ACLs. Both Consul and ecosystem tools such as consul-template and envconsul try to minimize the application changes required for integration, avoiding the need for native integration via an SDK. Eureka is part of the larger Netflix OSS suite, which expects applications to be relatively homogeneous and tightly integrated. As a result, Eureka solves only a limited subset of the problem, expecting other tools such as ZooKeeper to be used alongside it.

To summarize the paragraphs above:
- Consul provides strong consistency via the Raft protocol, while Eureka provides weak consistency.
- Consul distributes the work of health checking via the Gossip protocol, rather than relying on Eureka-style centralized heartbeating (clients constantly heartbeating to the servers).
- Thanks to the strong consistency provided by Raft, Consul can be used for leader election and as a cluster lock service, which Eureka can only achieve with the help of ZooKeeper.

In addition, Spring Cloud officially supports Consul; see https://spring.io/projects/spring-cloud-consul. I will write a separate article later on how to use it.

# 5. References
https://www.consul.io/intro/index.html
https://www.consul.io/intro/vs/eureka.html
https://www.consul.io/docs/internals/architecture.html
https://www.consul.io/docs/internals/consensus.html
https://www.consul.io/docs/internals/gossip.html
https://www.cnblogs.com/xingzc/p/6165084.html
https://www.cnblogs.com/xybaby/p/10124083.html

New articles are published first on my WeChat official account. You are welcome to follow it, and we can discuss there:
![Insert picture description here](https://img-blog.csdnimg.cn/20190601153218300.png?x-oss-process=image/watermark,type_ZmFuZ3poZW5naGVpdGk,shadow_10,text_aHR0cHM6Ly9ibG9nLmNzZG4ubmV0L1llbGxvd1N0YXI1,size_16,color_FFFFFF,t_70)

 
