How does ZooKeeper implement distributed locks? The ZooKeeper distributed lock mechanism

ZooKeeper is a CP-style cluster that uses the ZAB protocol to guarantee strong data consistency. When data in ZooKeeper is modified, all nodes synchronize the change internally before queries are served again, so there is no data loss of the kind that Redis's asynchronous replication can cause.

ZooKeeper stores data as a directory tree. Each node in the tree is called a znode; a znode can hold data (generally no more than 1 MB) and can also have child nodes added under it.

There are two types of nodes in ZooKeeper: temporary (ephemeral) nodes and permanent (persistent) nodes.

(1) Permanent node: the node still exists after the client disconnects from ZooKeeper.

(2) Temporary node: the node is deleted once the client's session with ZooKeeper ends. Child nodes cannot be created under a temporary node.

SEQUENTIAL attribute: ZooKeeper allows users to mark a node with the special SEQUENTIAL attribute. When such a node is created, ZooKeeper automatically appends an auto-incrementing integer, maintained by the parent node, to the end of the node name.
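To illustrate, here is a minimal sketch in Java using the raw ZooKeeper client; the connect string and the /locks parent path are assumptions for this example:

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class SequentialNodeDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder connect string; the /locks parent node must already exist.
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> {});

        // Two creates with the same name prefix yield two distinct znodes:
        // the parent appends a 10-digit auto-incrementing counter to each name.
        String first = zk.create("/locks/seq-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        String second = zk.create("/locks/seq-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        System.out.println(first);  // e.g. /locks/seq-0000000001
        System.out.println(second); // e.g. /locks/seq-0000000002
        zk.close();
    }
}
```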

ZooKeeper distributed lock mechanism

The lock relies on the uniqueness of zk node paths: multiple nodes with the same name cannot be created in the same directory, so for a given path only one client's create call succeeds and all others fail.


Simple version of ZooKeeper lock

Node uniqueness: for the same path, only one client can create the node successfully; all others fail.

Create a temporary node: the node is deleted automatically when the client's session ends, so there is no need to set a lock timeout. A minimal sketch of this scheme follows.
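Here is a minimal sketch, assuming an already-connected ZooKeeper handle; the class name and the /simple-lock path are hypothetical:

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class SimpleZkLock {
    private static final String LOCK_PATH = "/simple-lock"; // hypothetical lock path
    private final ZooKeeper zk;

    public SimpleZkLock(ZooKeeper zk) { this.zk = zk; }

    // Returns true if we created the node and therefore hold the lock.
    public boolean tryLock() throws Exception {
        try {
            zk.create(LOCK_PATH, new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            return true;
        } catch (KeeperException.NodeExistsException e) {
            return false; // someone else holds the lock
        }
    }

    public void unlock() throws Exception {
        zk.delete(LOCK_PATH, -1); // -1 matches any node version
    }
}
```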

Temporary node-based zk lock (request queuing)

An implementation based on a single temporary node produces a thundering herd effect and therefore somewhat poor performance: every waiting client watches the same lock node, so each release wakes all waiters at once. The sketch below illustrates the problem.
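A blocking lock() method that could be added to the SimpleZkLock sketch above (java.util.concurrent.CountDownLatch and org.apache.zookeeper.data.Stat would also need importing):

```java
// Blocking lock on a single temporary node. Every waiter watches the SAME path,
// so each release wakes all waiters at once: the thundering herd.
public void lock() throws Exception {
    while (true) {
        try {
            zk.create(LOCK_PATH, new byte[0],
                    ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
            return; // acquired
        } catch (KeeperException.NodeExistsException e) {
            CountDownLatch latch = new CountDownLatch(1);
            // exists() registers the watch; null means the node vanished before the watch was set.
            Stat stat = zk.exists(LOCK_PATH, event -> latch.countDown());
            if (stat != null) {
                latch.await(); // all waiters block here and are all woken together
            }
        }
    }
}
```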


zk lock based on temporary sequential nodes (request queuing)

Each client creates a temporary sequential node under the lock's parent node. The client whose node carries the smallest sequence number holds the lock; every other client watches only the node immediately ahead of its own in the sequence, so a release notifies exactly one waiter and the thundering herd is avoided.
Release lock: the holder releases the lock by deleting its own node. If the client crashes instead, the temporary node is removed automatically when its session expires, so the lock cannot leak. A sketch of the whole scheme, acquire and release, follows.
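A minimal sketch, assuming the parent node /seq-lock already exists and a connected ZooKeeper handle; the class name and paths are hypothetical:

```java
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class SequentialZkLock {
    private static final String LOCK_ROOT = "/seq-lock"; // hypothetical parent node, must exist
    private final ZooKeeper zk;
    private String myNode; // full path, e.g. /seq-lock/node-0000000007

    public SequentialZkLock(ZooKeeper zk) { this.zk = zk; }

    public void lock() throws Exception {
        // Create our place in the queue: a temporary sequential node.
        myNode = zk.create(LOCK_ROOT + "/node-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        while (true) {
            List<String> children = zk.getChildren(LOCK_ROOT, false);
            Collections.sort(children); // fixed-width counters sort correctly as strings
            String myName = myNode.substring(LOCK_ROOT.length() + 1);
            int myIndex = children.indexOf(myName);
            if (myIndex == 0) {
                return; // smallest sequence number: lock acquired
            }
            // Watch only the node immediately ahead of ours: one release wakes one waiter.
            String prev = LOCK_ROOT + "/" + children.get(myIndex - 1);
            CountDownLatch latch = new CountDownLatch(1);
            Stat stat = zk.exists(prev, event -> latch.countDown());
            if (stat != null) {
                latch.await();
            }
        }
    }

    public void unlock() throws Exception {
        zk.delete(myNode, -1); // releasing the lock wakes only our successor
    }
}
```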

Use Curator to implement a zk lock

Curator is a ZooKeeper client framework open-sourced by Netflix (now an Apache project). It encapsulates most of ZooKeeper's functionality, such as leader election and distributed locks, which spares developers from dealing with ZooKeeper's low-level details.

The following types of locks are encapsulated in Curator:

InterProcessMutex: distributed re-entrant exclusive lock
InterProcessSemaphoreMutex: distributed non-re-entrant exclusive lock
InterProcessReadWriteLock: distributed read-write lock
InterProcessMultiLock: multi-lock, a container that manages several locks as a single entity
InterProcessSemaphoreV2: shared semaphore

In actual development you can directly use the distributed locks that Curator already implements; there is no need to reinvent the wheel. A usage sketch follows.
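A minimal sketch using InterProcessMutex; the connect string and lock path are placeholders:

```java
import java.util.concurrent.TimeUnit;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CuratorLockDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder connect string; retry up to 3 times with exponential backoff.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        InterProcessMutex lock = new InterProcessMutex(client, "/locks/order");
        if (lock.acquire(3, TimeUnit.SECONDS)) { // wait up to 3 seconds for the lock
            try {
                // critical section: do the work that must be mutually exclusive
            } finally {
                lock.release();
            }
        }
        client.close();
    }
}
```

Internally, InterProcessMutex uses the temporary-sequential-node queuing scheme described above.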


Origin: blog.csdn.net/Blue92120/article/details/133276077