Comparison table
| | Zookeeper | Etcd | Consul | Eureka |
| --- | --- | --- | --- | --- |
| CAP model | CP | CP | CP | AP |
| Data consistency algorithm | ZAB | Raft | Raft | ❌ |
| Multi-data center | ❌ | ❌ | ✅ | ❌ |
| Multi-language support | Client bindings | HTTP/gRPC | HTTP/DNS | HTTP |
| Watch | TCP | Long polling | Long polling | Long polling |
| KV storage | ✅ | ✅ | ✅ | ❌ |
| Health check | Heartbeat | Heartbeat | Service status, memory, disk, etc. | Customizable |
| Self-monitoring | ❌ | Metrics | Metrics | Metrics |
| Spring Cloud support | ✅ | ✅ | ✅ | ✅ |
| Implementation language | Java | Go | Go | Java |
CAP model
The three letters in CAP stand for:
- Consistency
- Availability
- Partition tolerance
In a distributed system, you cannot have all three at once.
Because network failures are unavoidable, partition tolerance (P) is mandatory in a distributed system, which means you can only choose between AP and CP.
CP means consistency comes first; AP means availability comes first.
Of the four systems, only Eureka is AP: Eureka remains usable even when its data is temporarily inconsistent, as long as the data eventually converges.
If you want to learn more about CAP, see:
Architecture design theory: CAP and BASE
Data consistency
ZAB is ZooKeeper's atomic broadcast protocol, an adaptation of the Paxos algorithm.
Raft is a strongly consistent, decentralized, highly available distributed consensus protocol that is widely used in practice.
Neither algorithm is right or wrong; both achieve distributed consensus, only the implementations differ.
Eureka chooses AP and does not require strong consistency, so it uses no consensus algorithm at all.
For more on Paxos and Raft, see:
The distributed consensus algorithm Paxos
The distributed consensus algorithm Raft
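Both protocols rest on the same majority (quorum) rule: a value is committed only once more than half of the nodes have acknowledged it. A toy sketch in Go, not the algorithms themselves, of why a 5-node ensemble tolerates 2 failures:

```go
package main

import "fmt"

// hasQuorum reports whether votes form a majority of clusterSize.
// Both ZAB and Raft commit a value only after a majority acknowledges it,
// so a 5-node cluster keeps working with up to 2 nodes down.
func hasQuorum(votes, clusterSize int) bool {
	return votes > clusterSize/2
}

func main() {
	fmt.Println(hasQuorum(3, 5)) // 3 of 5 acknowledged: committed
	fmt.Println(hasQuorum(2, 5)) // only 2 of 5: not committed
}
```

This is also why ensembles usually have an odd number of nodes: a 6-node cluster tolerates no more failures than a 5-node one.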
Multi-data center
Of the four, only Consul supports multiple data centers.
Because ZooKeeper does not support multiple data centers, if you deploy one ZooKeeper ensemble across several data centers, the ensemble becomes unavailable as soon as a network partition occurs.
Consul implements multi-data-center support through the Gossip protocol.
The Gossip protocol spreads messages through the network at an exponential rate, so a message quickly propagates from one node to the whole cluster.
A failed node does not block gossip propagation, which gives the system high fault tolerance.
Gossip is decentralized: all nodes are peers, no node knows the global state, and as long as the network is connected, any node can spread a message to the entire network.
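The exponential spread can be illustrated with a toy calculation in Go (the fanout and the best-case growth model are illustrative assumptions, not Consul's actual parameters):

```go
package main

import "fmt"

// gossipRounds returns a best-case bound on the number of gossip rounds
// needed for a message starting at one node to reach all n nodes,
// assuming each informed node forwards to `fanout` distinct peers per round.
// The informed set grows by a factor of (1 + fanout) each round,
// which is where the logarithmic spreading time comes from.
func gossipRounds(n, fanout int) int {
	informed, rounds := 1, 0
	for informed < n {
		informed *= 1 + fanout
		rounds++
	}
	return rounds
}

func main() {
	// With fanout 3, a message covers 10,000 nodes in just 7 rounds.
	fmt.Println(gossipRounds(10000, 3)) // prints 7
}
```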
Watch
ZooKeeper's watch is easy to implement: notifications are pushed over the client's persistent TCP connection, which is kept alive with pings.
Long polling is a pull model: the client initiates each request to fetch data.
With long polling, if the server has no new data it holds the request open until data arrives or a timeout expires; after the response returns, the client immediately issues the next long-poll request.
Multi-language support
ZooKeeper's multi-language clients are comparatively mature.
Consul's DNS support is the most interesting; it can be confusing the first time you see it.
The DNS approach lets applications use service discovery without any deep integration with Consul.
For example, instead of sending HTTP requests to Consul, an application can look up a service directly through DNS by name, such as redis.service.us-east-1.consul, which resolves to the nodes of the redis service in the us-east-1 data center.
To use DNS this way, you can integrate a DNS resolver library into your program, or you can set up a custom local DNS server.
A custom local DNS server means forwarding all requests for the .consul domain to the Consul agent.
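As a sketch of that setup: with dnsmasq, a single line forwards the .consul domain to the local Consul agent, whose DNS interface listens on port 8600 by default (the file path here is just a conventional location):

```
# /etc/dnsmasq.d/10-consul
# Forward all *.consul queries to the local Consul agent's DNS port.
server=/consul/127.0.0.1#8600
```

Services can then be resolved with ordinary DNS tools, e.g. `dig redis.service.consul`.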
Health Check Service
Heartbeats are a relatively simple mechanism: a client only reports that it is alive.
But being alive does not mean being healthy. For example, a service's application layer may be fine while its database connection has failed, so it cannot serve requests normally: it is alive but not healthy.
Eureka supports custom health-check logic.
Consul's support is the most comprehensive: you can configure a custom health-check endpoint for each service, and its management UI lets you view the health status of all services and nodes.
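For example, a Consul service definition can declare an HTTP health check that the agent calls on an interval (the service name, port, and /health path below are illustrative):

```json
{
  "service": {
    "name": "web",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/health",
      "interval": "10s",
      "timeout": "1s"
    }
  }
}
```

The agent marks the service unhealthy when the endpoint stops responding successfully, and that status is visible in Consul's UI.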
Recommended reading: