How to implement a distributed lock with Zookeeper?

1. Background

I have been studying Zookeeper recently. When I first came into contact with it, I had no idea what Zookeeper was actually for. A lot of material describes Zookeeper as a piece of middleware that looks like a "Unix/Linux-style file system", which made it hard for me to connect Zookeeper with real distributed applications.

Later I skimmed through two books, "ZooKeeper: Distributed Process Coordination" and "From Paxos to ZooKeeper: Principles and Practice of Distributed Consistency", and wrote a few hands-on CRUD demos, which gave me an initial understanding of Zookeeper.

That understanding was still fairly superficial, though. To deepen it, I spent some free time writing the demo this article is about: a distributed lock implemented on top of Zookeeper. Writing the distributed-lock demo gave me a much better grasp of Zookeeper's watcher mechanism and of how Zookeeper is used in general.

The distributed lock I wrote is quite simple and by no means elegant; it is just an exercise, for reference only. Enough digression, let's talk about how to implement a distributed lock on top of Zookeeper.

2. Exclusive lock and read-write lock implementation

I will walk through the detailed implementation of the exclusive lock and the read-write lock, together with flow charts to help you understand the process. Let's start with the exclusive lock.

2.1 Exclusive lock implementation

An exclusive lock is also known as a mutex lock, and its purpose is easy to understand from the name: if operation O1 acquires the lock on resource R1 before accessing it, then until O1 finishes accessing R1, no other operation is allowed to access R1. That is a simple definition of an exclusive lock; how do we implement it on top of Zookeeper's "Unix/Linux-style file system" structure? Before answering, let's look at a picture:

Figure 1: the exclusive lock node structure in Zookeeper

As shown above, for an exclusive lock we can regard the lock node as the resource R1, regard creating the lock node as operation O1 accessing R1, and regard deleting the lock node as releasing R1. In this way the exclusive lock is mapped onto a concrete Zookeeper node structure: the lock is acquired by creating the lock node and released by deleting it. The detailed process is as follows:

  • Multiple clients compete to create the ephemeral lock node

  • One client succeeds in creating the lock node; the other clients set a watcher on the lock node

  • The client holding the lock deletes the lock node, or the client crashes and Zookeeper deletes the lock node for it

  • The other clients are notified that the lock node has been deleted

  • The above four steps repeat until no client is left waiting to acquire the lock

Those are the concrete steps for the exclusive lock; they are not complicated, so I will not elaborate on them further here.
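To make those steps concrete, below is a minimal Java sketch using the native ZooKeeper client. It assumes an already-connected ZooKeeper handle and an existing /exclusive_lock parent node; the class and path names are only for illustration, and this is not the demo code mentioned at the end of the article.

```java
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Watcher.Event.EventType;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ExclusiveLock {

    // Illustrative lock path; the /exclusive_lock parent is assumed to exist already.
    private static final String LOCK_PATH = "/exclusive_lock/lock";

    private final ZooKeeper zk;

    public ExclusiveLock(ZooKeeper zk) {
        this.zk = zk;
    }

    /** Blocks until this client succeeds in creating the ephemeral lock node. */
    public void lock() throws KeeperException, InterruptedException {
        while (true) {
            try {
                // Ephemeral node: if this client crashes, ZooKeeper deletes the
                // node for it, which is exactly the crash case in step 3 above.
                zk.create(LOCK_PATH, new byte[0],
                        ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
                return; // lock acquired
            } catch (KeeperException.NodeExistsException e) {
                // Another client holds the lock: set a watcher on the lock node
                // and wait for the NodeDeleted notification, then compete again.
                CountDownLatch deleted = new CountDownLatch(1);
                if (zk.exists(LOCK_PATH, event -> {
                    if (event.getType() == EventType.NodeDeleted) {
                        deleted.countDown();
                    }
                }) == null) {
                    continue; // the node vanished in the meantime, retry immediately
                }
                deleted.await();
            }
        }
    }

    /** Releases the lock by deleting the lock node (-1 means any version). */
    public void unlock() throws KeeperException, InterruptedException {
        zk.delete(LOCK_PATH, -1);
    }
}
```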

Figure 2: flow chart for acquiring an exclusive lock

2.2 Read-write lock implementation

Having implemented the exclusive lock, this section turns to the read-write lock. A read-write lock consists of a read lock and a write lock. If operation O1 places a read lock on resource R1 and acquires it, other operations can place read locks on R1 at the same time; reads are shared.

If operation O1 places a write lock on resource R1 and acquires it, then any other operation that tries to place a lock of any type on R1 is blocked. In short, read locks are shared while write locks are exclusive. So what kind of node structure do we need in Zookeeper to implement this behavior?

Figure 3: the read-write lock node structure in Zookeeper

In Zookeeper, because the read-write lock uses a node structure different from the exclusive lock's, clients of the read-write lock do not have to compete to create a single lock node. Instead, every client starts by creating its own lock node. Barring accidents, all of the lock nodes are created successfully, producing the node structure shown in Figure 3. Each client then fetches all the child nodes of /share_lock from the Zookeeper server and checks whether the lock node it created can acquire the lock. For a client that created a read lock node, the conditions for acquiring the lock (meeting either one is enough) are as follows:

  • The sequence number of the node you created comes before all other child nodes

  • There is no write lock node in front of the node you created

If the client created a write lock node, then because write locks are exclusive, the condition for acquiring the lock is simpler: the client only needs to check whether the lock node it created comes before all other child nodes.
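The acquisition check just described fits in a few lines. The sketch below assumes the lock node names follow the hostN-R-sequence / hostN-W-sequence pattern shown in Figure 3, where the trailing digits are the sequence number appended by Zookeeper; the class and method names are my own and only illustrative.

```java
import java.util.List;

public class SharedLockCheck {

    // Assumes node names like host1-W-0000000001 (see Figure 3): the part after
    // the last '-' is the sequence number assigned by ZooKeeper.
    static long sequenceOf(String node) {
        return Long.parseLong(node.substring(node.lastIndexOf('-') + 1));
    }

    static boolean isWriteNode(String node) {
        return node.contains("-W-");
    }

    /** Returns true if ownNode may hold the lock, given all children of /share_lock. */
    public static boolean canAcquire(String ownNode, List<String> children) {
        long ownSeq = sequenceOf(ownNode);
        boolean ownIsWrite = isWriteNode(ownNode);
        for (String child : children) {
            if (sequenceOf(child) >= ownSeq) {
                continue; // only nodes in front of ours matter
            }
            if (ownIsWrite) {
                return false; // write lock: any node in front of ours blocks us
            }
            if (isWriteNode(child)) {
                return false; // read lock: only a write node in front of ours blocks us
            }
        }
        return true;
    }
}
```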

Unlike the exclusive lock, the read-write lock is slightly more complicated to implement. There are two ways to implement it, each with its own trade-offs; the following sections describe both.

The first read-write lock implementation

The first implementation sets a watcher on the /share_lock node itself. When a child node under /share_lock is deleted, every client that has not yet acquired the lock receives a notification that the children of /share_lock have changed. On receiving the notification, a client re-checks whether the child node it created can acquire the lock; if it still cannot, it waits for the next notification. The detailed process is as follows:

  • All clients create their own lock nodes

  • Fetch all the child nodes of /share_lock from the Zookeeper server and set a watcher on the /share_lock node

  • Check whether the lock node you created can acquire the lock; if it can, hold the lock, otherwise keep waiting

  • The client holding the lock deletes its own lock node, and the other clients receive a notification that the children of /share_lock have changed

  • Repeat steps 2, 3, and 4 until no client is left waiting to acquire the lock

The flow chart for the above steps is as follows:

Figure 4: flow chart for the first read-write lock implementation
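A rough Java sketch of this first implementation follows, under the same assumptions as before (a connected ZooKeeper handle, an existing /share_lock parent node) and reusing the canAcquire() helper from the earlier sketch; error handling and reconnection are omitted.

```java
import java.util.List;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Watcher.Event.EventType;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class SharedLockV1 {

    // The /share_lock parent node is assumed to exist already.
    private static final String ROOT = "/share_lock";

    private final ZooKeeper zk;
    private String ownNode; // node name only, e.g. host1-R-0000000002

    public SharedLockV1(ZooKeeper zk) {
        this.zk = zk;
    }

    public void lock(String host, boolean write) throws KeeperException, InterruptedException {
        // Step 1: every client creates its own ephemeral sequential lock node.
        String prefix = ROOT + "/" + host + (write ? "-W-" : "-R-");
        ownNode = zk.create(prefix, new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL)
                .substring(ROOT.length() + 1);

        while (true) {
            CountDownLatch changed = new CountDownLatch(1);
            // Step 2: fetch all children and leave a watcher on /share_lock itself.
            List<String> children = zk.getChildren(ROOT, event -> {
                if (event.getType() == EventType.NodeChildrenChanged) {
                    changed.countDown();
                }
            });
            // Step 3: can our node hold the lock?
            if (SharedLockCheck.canAcquire(ownNode, children)) {
                return;
            }
            // Step 4: wait until the children of /share_lock change, then re-check.
            changed.await();
        }
    }

    public void unlock() throws KeeperException, InterruptedException {
        zk.delete(ROOT + "/" + ownNode, -1);
    }
}
```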

The flow for acquiring the read-write lock above is not complicated, but it has a performance problem. Take the lock node structure shown in Figure 3 as an example: after the first lock node host1-W-0000000001 is removed, Zookeeper delivers the notification that the children of /share_lock have changed to every client.

In reality, however, apart from the client corresponding to the host2-R-0000000002 node, delivering this child-change notification to the other clients is wasted work, because even if they receive the notification they still cannot acquire the lock. So some optimization is needed here: each client should only try to acquire the lock after the specific node it cares about has been deleted.

The second read-write lock implementation

Knowing the drawback of the first read-write lock implementation, we can optimize it. Now a client no longer watches the /share_lock node; it watches only the specific node it cares about. Again taking the lock node structure in Figure 3 as an example, the client C2 corresponding to host2-R-0000000002 only needs to watch whether the host1-W-0000000001 node has been deleted.

Likewise, the client C3 corresponding to host3-W-0000000003 only watches whether the host2-R-0000000002 node has been deleted; only after host2-R-0000000002 is deleted can C3 acquire the lock. The notification generated when node host1-W-0000000001 is deleted is useless to C3: even if C3 responded to it, C3 still could not acquire the lock.

To summarize, different clients care about different lock nodes. If a client created a read lock node, it only needs to find the last write lock node whose sequence number is smaller than its own and set a watcher on that node. If it created a write lock node, it is even simpler: the client just sets a watcher on the node immediately in front of its own. The detailed process is as follows:

  • All clients create their own lock nodes

  • Fetch all the child nodes of /share_lock from the Zookeeper server

  • Check whether the lock node you created can acquire the lock; if it can, hold the lock. Otherwise, set a watcher on the node you care about

  • The client holding the lock deletes its own lock node; the client watching that node receives the deletion notification and acquires the lock

  • Repeat step 4 until no client is left waiting to acquire the lock

The flow chart for the above steps is as follows:

Figure 5: flow chart for the second read-write lock implementation
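And a corresponding sketch of the second implementation, under the same assumptions. The only change from the first version is that a waiting client works out the single node it depends on and sets its watcher there, instead of on /share_lock.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.concurrent.CountDownLatch;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.Watcher.Event.EventType;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class SharedLockV2 {

    // The /share_lock parent node is assumed to exist already.
    private static final String ROOT = "/share_lock";

    private final ZooKeeper zk;
    private String ownNode;

    public SharedLockV2(ZooKeeper zk) {
        this.zk = zk;
    }

    public void lock(String host, boolean write) throws KeeperException, InterruptedException {
        String prefix = ROOT + "/" + host + (write ? "-W-" : "-R-");
        ownNode = zk.create(prefix, new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL)
                .substring(ROOT.length() + 1);

        while (true) {
            // Fetch the children without a watcher; sort them by the zero-padded
            // sequence suffix so earlier nodes come first.
            List<String> children = new ArrayList<>(zk.getChildren(ROOT, false));
            children.sort(Comparator.comparing((String c) -> c.substring(c.lastIndexOf('-') + 1)));
            if (SharedLockCheck.canAcquire(ownNode, children)) {
                return;
            }
            // Find the one node we care about: for a write node, the node right in
            // front of ours; for a read node, the nearest write node in front of ours.
            String target = null;
            for (String child : children) {
                if (child.equals(ownNode)) {
                    break;
                }
                if (write || child.contains("-W-")) {
                    target = child;
                }
            }
            if (target == null) {
                continue; // nothing blocks us any more, re-check right away
            }
            CountDownLatch deleted = new CountDownLatch(1);
            if (zk.exists(ROOT + "/" + target, event -> {
                if (event.getType() == EventType.NodeDeleted) {
                    deleted.countDown();
                }
            }) != null) {
                deleted.await(); // woken only when the node we depend on is removed
            }
        }
    }

    public void unlock() throws KeeperException, InterruptedException {
        zk.delete(ROOT + "/" + ownNode, -1);
    }
}
```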

3. In closing

This article has described in detail how to implement a distributed lock based on Zookeeper. I also implemented a fairly simple distributed lock demo based on the two locking schemes described above and put the code on GitHub; friends who need it are welcome to grab it there.

Since it is only a simple demo, the code is not particularly polished and is for reference only. Finally, if you think the article is decent, feel free to give it a thumbs-up. If anything in it is wrong, please point it out and I will humbly correct it.

Source: https://segmentfault.com/a/1190000010895869
