Typical distributed application scenarios of a ZooKeeper cluster

 

1. Configuration management (the requirement is data consistency)

ZooKeeper uses a combination of push and pull: the client registers with the server the nodes it needs to watch. Once the data of such a node changes, the server sends a Watcher event notification to the corresponding client. After receiving the notification, the client must actively fetch the latest data from the server.
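Below is a minimal sketch of this push-pull pattern using the Java ZooKeeper client. The connection string, the 15-second session timeout, and the /config/app node path are illustrative assumptions, not part of the original scenario.

```java
import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;

// Sketch of push-pull: the client watches a config node, and when the server
// pushes a Watcher notification, the client pulls the new data itself.
public class ConfigWatcher implements Watcher {
    private static final String CONFIG_PATH = "/config/app";   // hypothetical node
    private final ZooKeeper zk;

    public ConfigWatcher(String connectString) throws Exception {
        this.zk = new ZooKeeper(connectString, 15000, this);
    }

    // Read the current configuration and re-register the watch (watches are one-shot).
    public String readConfig() throws Exception {
        byte[] data = zk.getData(CONFIG_PATH, this, new Stat());
        return new String(data);
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeDataChanged) {
            try {
                // The "push" arrived as a notification; now "pull" the latest value.
                System.out.println("Config changed, new value: " + readConfig());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        ConfigWatcher watcher = new ConfigWatcher("localhost:2181");
        System.out.println("Initial config: " + watcher.readConfig());
        Thread.sleep(Long.MAX_VALUE);   // keep the session alive to receive notifications
    }
}
```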

 

2. Naming Service

A service needs to be reachable through, for example, the URI it exposes, but that URI can change, so the changing URI is given a fixed name and the service URI is looked up by that name each time.

ZooKeeper lets the service create a node whose name is the service name and whose data is the URI. This completes the naming.
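A rough sketch of this naming pattern with the Java client follows. The /services parent node (assumed to already exist), the session timeout, and the register/lookup method names are assumptions made for illustration.

```java
import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;

// Sketch of the naming pattern: the provider publishes its current uri under a node
// named after the service, and consumers resolve the fixed name to that uri.
public class NamingService {
    private final ZooKeeper zk;

    public NamingService(String connectString) throws Exception {
        zk = new ZooKeeper(connectString, 15000, event -> { });
    }

    // Provider side: publish (or refresh) the uri under /services/<serviceName>.
    public void register(String serviceName, String uri) throws Exception {
        String path = "/services/" + serviceName;
        if (zk.exists(path, false) == null) {
            zk.create(path, uri.getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        } else {
            zk.setData(path, uri.getBytes(), -1);   // -1 = ignore the version check
        }
    }

    // Consumer side: resolve the fixed service name to whatever uri is current.
    public String lookup(String serviceName) throws Exception {
        byte[] data = zk.getData("/services/" + serviceName, false, new Stat());
        return new String(data);
    }
}
```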

 

3. Load balancing

1) Load registration / update: a worker's ip:port can be stored as the data of an ephemeral node in ZooKeeper, with consistency guaranteed.

2) Load health check: when a worker goes offline, the ephemeral node it created is deleted automatically, and its ip:port data disappears with it.

3) Load dispatching: fetch all the ephemeral nodes under the parent node and use a load-balancing algorithm to select one worker, as the sketch after this list shows.
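The sketch below walks through the three steps with the Java client, assuming a pre-existing /loads parent node; the worker- child prefix and the random selection standing in for "a load balancing algorithm" are illustrative choices.

```java
import org.apache.zookeeper.*;
import java.util.List;
import java.util.Random;

// Sketch of the three steps: registration creates an ephemeral child holding ip:port,
// health checking is implicit (the ephemeral node disappears with the session),
// and dispatching lists the children and picks one.
public class LoadBalance {
    private final ZooKeeper zk;

    public LoadBalance(String connectString) throws Exception {
        zk = new ZooKeeper(connectString, 15000, event -> { });
    }

    // 1) Registration: an ephemeral sequential child whose data is this worker's ip:port.
    //    If the worker goes offline, ZooKeeper deletes the node automatically (step 2).
    public String register(String ipPort) throws Exception {
        return zk.create("/loads/worker-", ipPort.getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
    }

    // 3) Dispatching: read all live workers and select one; a random choice stands in
    //    for whatever load-balancing algorithm is actually used.
    public String pick() throws Exception {
        List<String> workers = zk.getChildren("/loads", false);
        String chosen = workers.get(new Random().nextInt(workers.size()));
        byte[] data = zk.getData("/loads/" + chosen, false, null);
        return new String(data);   // the ip:port to dispatch to
    }
}
```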

 

4. Master election

At a regular time, the clients in the cluster each try to create an ephemeral node on ZooKeeper, for example /master_election/2013-09-20/binding. Only one client can succeed in creating this node, and the machine that client runs on becomes the Master. At the same time, all the other clients that failed to create the node register a child-node-change Watcher on the node /master_election/2013-09-20 to monitor whether the current Master machine is still alive. Once the current Master is found to be down, the remaining clients re-run the Master election.
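A minimal sketch of this election with the Java client, reusing the /master_election/2013-09-20/binding path from the text; it assumes the parent path already exists, and the myId payload is an illustrative addition.

```java
import org.apache.zookeeper.*;

// Sketch of the election: every client races to create the same ephemeral node;
// the single winner becomes Master, the others watch the parent for child changes
// and re-run the election when the Master's node disappears.
public class MasterElection implements Watcher {
    private static final String ELECTION_PATH = "/master_election/2013-09-20";
    private final ZooKeeper zk;
    private final String myId;

    public MasterElection(String connectString, String myId) throws Exception {
        this.zk = new ZooKeeper(connectString, 15000, event -> { });
        this.myId = myId;
    }

    public void elect() throws Exception {
        try {
            // Only one client can create this ephemeral node; its machine becomes Master.
            zk.create(ELECTION_PATH + "/binding", myId.getBytes(),
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            System.out.println(myId + " is now the Master");
        } catch (KeeperException.NodeExistsException e) {
            // Lost the race: watch the parent so we notice when the Master's node is gone.
            if (zk.getChildren(ELECTION_PATH, this).isEmpty()) {
                elect();   // the Master died before we set the watch; try again right away
            } else {
                System.out.println(myId + " is standby, watching for Master failure");
            }
        }
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeChildrenChanged) {
            try {
                elect();   // the Master is gone; re-run the election
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
```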

 

5. Distributed lock

When an exclusive lock needs to be acquired, all clients attempt to call the create() interface to create an ephemeral child node /exclusive_lock/lock under the /exclusive_lock node. As introduced in the previous sections, ZooKeeper guarantees that among all the clients, only one can create the node successfully, and that client is considered to have acquired the lock. At the same time, all clients that did not acquire the lock register a child-node-change Watcher on the /exclusive_lock node so they can monitor changes to the lock node in real time.
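A sketch of this exclusive-lock recipe with the Java client; the /exclusive_lock parent is assumed to already exist, and the CountDownLatch-based waiting is one possible way to block until the Watcher notification arrives.

```java
import org.apache.zookeeper.*;
import java.util.List;
import java.util.concurrent.CountDownLatch;

// Sketch of the exclusive lock: each client tries to create the same ephemeral node
// /exclusive_lock/lock; the one that succeeds holds the lock, the others watch
// /exclusive_lock for child changes and retry when the lock node goes away.
public class ExclusiveLock implements Watcher {
    private static final String LOCK_ROOT = "/exclusive_lock";
    private static final String LOCK_NODE = LOCK_ROOT + "/lock";
    private final ZooKeeper zk;
    private CountDownLatch released = new CountDownLatch(1);

    public ExclusiveLock(String connectString) throws Exception {
        zk = new ZooKeeper(connectString, 15000, event -> { });
    }

    // Block until this client owns the lock.
    public void lock() throws Exception {
        while (true) {
            try {
                // Ephemeral, so the lock is released automatically if we crash.
                zk.create(LOCK_NODE, new byte[0],
                        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                return;   // we own the lock
            } catch (KeeperException.NodeExistsException e) {
                // Someone else holds it: watch the parent for child changes and wait.
                released = new CountDownLatch(1);
                List<String> children = zk.getChildren(LOCK_ROOT, this);
                if (!children.isEmpty()) {
                    released.await();   // woken when the lock node is deleted
                }
            }
        }
    }

    public void unlock() throws Exception {
        zk.delete(LOCK_NODE, -1);   // -1 = ignore the version check
    }

    @Override
    public void process(WatchedEvent event) {
        if (event.getType() == Event.EventType.NodeChildrenChanged) {
            released.countDown();   // the lock node may have been deleted; retry create
        }
    }
}
```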
