Eureka vs. ZooKeeper

CAP stands for Consistency, Availability, and Partition tolerance. In general, a distributed system can satisfy at most two of the three. Partition tolerance is essential to any distributed system, so it is the property that is always retained.

Eureka is built on the AP principle, while ZooKeeper is built on the CP principle. This is reflected in how each of them behaves.

ZooKeeper has a Leader, and when the Leader becomes unavailable the cluster cannot serve writes until a new Leader is elected through the ZAB algorithm (a Paxos variant). The purpose of the Leader is to ensure that writes go only to the Leader, which then synchronizes the data to the Followers. This process guarantees data consistency.
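As a rough illustration (my own sketch, not from the original article), here is a minimal write using the standard ZooKeeper Java client; the znode path and address are made up for the example. Whichever server the client connects to, the write is routed to the Leader, which commits it to a quorum of Followers before acknowledging it.

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZkWriteSketch {
    public static void main(String[] args) throws Exception {
        // Connect to any server in the ensemble; writes are internally forwarded to the Leader.
        ZooKeeper zk = new ZooKeeper("zk1:2181,zk2:2181,zk3:2181", 15000, event -> {});

        // Hypothetical service-registration znode; assumes the /services parent already exists.
        // The create (a write) only succeeds after the Leader has replicated it to a quorum
        // of Followers, which is what gives ZooKeeper its strong consistency.
        zk.create("/services/order-service",
                  "10.0.0.12:8080".getBytes(),
                  ZooDefs.Ids.OPEN_ACL_UNSAFE,
                  CreateMode.EPHEMERAL);

        zk.close();
    }
}
```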

In contrast to ZooKeeper, Eureka does not elect a Leader. Each Eureka server keeps its own copy of the registered service addresses; there is no Leader concept at all. As a result, there is no guarantee that every Eureka server holds exactly the same data. When a registered instance stops sending heartbeats, Eureka may still keep its entry in the registry for a long time. The client therefore needs an effective mechanism to shield itself from dead instances; in Spring Cloud that role is played by Ribbon.
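For context (a minimal sketch under my own assumptions, not taken from the original), registering a service with Eureka in Spring Cloud is typically just an annotation plus configuration; the application name below is hypothetical. Once registered, the client keeps the entry alive with periodic heartbeats, and if heartbeats stop, the Eureka server only evicts the entry after a delay.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

// Registers this application with the Eureka server configured in
// application.yml (eureka.client.serviceUrl.defaultZone) and keeps the
// registration alive with periodic heartbeats.
@SpringBootApplication
@EnableDiscoveryClient
public class OrderServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}
```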

Because Eureka has no Leader election, each server can accept writes independently, which means the data may become inconsistent. But when network problems occur, each server can still serve requests on its own. Of course, client-side load balancing and failover mechanisms are needed to complete the picture, as sketched below. ZooKeeper, by contrast, follows the CP principle: it guarantees data consistency well but does not support high availability. For an internal system that is mainly about service registration and discovery, rather than sharing configuration (files), Eureka is therefore the better fit.
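A rough sketch of that client-side assistance (assumed setup, not from the original): a @LoadBalanced RestTemplate lets Ribbon resolve a logical service name against Eureka's registry, balance across the instances it believes are up, and skip ones it considers down. The service name order-service is hypothetical.

```java
import org.springframework.cloud.client.loadbalancer.LoadBalanced;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class RibbonClientConfig {

    // Marking the RestTemplate as @LoadBalanced makes Ribbon resolve the
    // logical service name against the local copy of Eureka's registry and
    // spread calls across the instances it currently believes are healthy.
    @Bean
    @LoadBalanced
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }
}

// Usage elsewhere:
// restTemplate.getForObject("http://order-service/orders/1", String.class);
```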

ZooKeeper is not designed for high availability

For disaster recovery, many systems actually need to be deployed across multiple data centers. For cost reasons, traffic is usually served by all data centers at once rather than by building N-fold redundancy, which means a single data center can barely handle the full load on its own (can you imagine Google serving the whole world from one data center?). Since a ZooKeeper cluster can have only one master, once the link between data centers fails, the ZooKeeper master can serve only one of them; the business modules in the other data centers have no master and can only stop. All traffic then piles onto the data center that still has the master, and the system collapses.

Even within a single data center, the machines sit on different network segments, and adjusting switches occasionally causes segment isolation. In practice, short-lived network isolation from subnet adjustments and the like happens roughly every month. At that moment ZooKeeper is in an unusable state. If the entire business system is built on ZooKeeper (for example, if every service request has to go to ZooKeeper to obtain the address of the business system's master), the availability of the system becomes very fragile.

Because ZooKeeper is extremely sensitive to network isolation, it reacts aggressively to the slightest sign of trouble. This makes ZooKeeper 'unavailable' even more of the time, and we cannot afford to let ZooKeeper's unavailability make the whole system unavailable.

ZooKeeper's election process is slow

This weakness is hard to see in theory, but once you run into it you will wish you hadn't. As noted above, the network frequently falls into partial, incomplete isolation, and ZooKeeper is very sensitive to that situation. Once network isolation occurs, ZooKeeper starts an election. A ZooKeeper election typically takes 30 to 120 seconds, during which there is no master and ZooKeeper is unavailable. So for an occasional network blip, say half a second of isolation, the election process amplifies the unavailability many times over.

ZooKeeper's performance is limited

A typical ZooKeeper deployment handles a TPS of perhaps ten thousand or so, which cannot cover an internal system that routinely makes billions of calls a day. Having the business system ask ZooKeeper for the master's information on every request is therefore impossible; the business client has to cache the master address obtained from ZooKeeper itself. At that point the 'strong consistency' ZooKeeper provides is effectively unusable. If strong consistency is truly required, additional mechanisms are needed to guarantee it, for example an automated script that kills off the old master of the business system, but that approach has many pitfalls (not expanded on here; readers can ask themselves what the traps are).
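One common way to do that caching (a sketch under my own assumptions; the /master path is hypothetical) is to read the znode once, keep the value in memory, and set a watch so it is re-read only when the node changes. Business traffic then reads the in-memory copy and never hits ZooKeeper directly, which is exactly why the value can be momentarily stale.

```java
import java.util.concurrent.atomic.AtomicReference;
import org.apache.zookeeper.ZooKeeper;

public class MasterAddressCache {
    private final ZooKeeper zk;
    private final AtomicReference<String> cached = new AtomicReference<>();

    public MasterAddressCache(ZooKeeper zk) throws Exception {
        this.zk = zk;
        refresh();
    }

    // Read the master address once and set a watch; ZooKeeper notifies us
    // when the node changes, and only then do we read it again.
    private void refresh() throws Exception {
        byte[] data = zk.getData("/master", event -> {
            try {
                refresh();
            } catch (Exception e) {
                // In a real system: back off and retry; meanwhile the stale cached value keeps serving.
            }
        }, null);
        cached.set(new String(data));
    }

    // Business calls read from the in-memory cache, never from ZooKeeper directly.
    public String getMasterAddress() {
        return cached.get();
    }
}
```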

ZooKeeper cannot enforce permissions effectively

ZooKeeper's access control is very weak.

In a large, complex system, using ZooKeeper means building an additional access-control layer of your own and routing access to ZooKeeper through it. That extra layer not only increases system complexity and maintenance cost but also reduces overall performance. And even with ZooKeeper, it is hard to avoid data inconsistency at the business level. As discussed above, ZooKeeper's performance limits mean we cannot have every internal call go through it, so the business system's master information is cached on the client side and refreshed from ZooKeeper only periodically. There will therefore always be moments when the business system effectively has two masters, because the cached copy and ZooKeeper cannot be updated synchronously.

Original: https://blog.csdn.net/coorz/article/details/70921252

Original: https://blog.csdn.net/ab123456bcde/article/details/77930236