Distributed Locks with ZooKeeper

Implementing a distributed lock with ZooKeeper

Example implementation

  1. Add the jar dependencies (the example uses the Curator framework)
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-framework</artifactId>
    <version>2.12.0</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>2.12.0</version>
</dependency>
  2. Configure Curator
import org.apache.curator.RetryPolicy;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ZKCuratorManager {
    private static InterProcessMutex lock;
    private static CuratorFramework cf;
    private static String zkAddr = "127.0.0.1:2181";
    private static String lockPath = "/distribute-lock";

    static {
        // Retry the connection up to 3 times, with an exponential backoff
        // starting from a 1-second base sleep.
        RetryPolicy retryPolicy = new ExponentialBackoffRetry(1000, 3);
        cf = CuratorFrameworkFactory.builder()
                .connectString(zkAddr)
                .sessionTimeoutMs(2000)
                .retryPolicy(retryPolicy)
                .build();
        cf.start();
    }

    // Returns a mutex backed by the lockPath znode.
    public static InterProcessMutex getLock() {
        lock = new InterProcessMutex(cf, lockPath);
        return lock;
    }
}
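Besides the blocking acquire() used in the next step, Curator's InterProcessMutex also provides a timed acquire that returns a boolean instead of waiting indefinitely. A minimal sketch follows; the class name TimedLockSketch and the 3-second timeout are arbitrary choices for illustration:

import java.util.concurrent.TimeUnit;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;

public class TimedLockSketch {
    public static void doWork() throws Exception {
        InterProcessMutex mutex = ZKCuratorManager.getLock();
        // acquire(time, unit) returns false if the lock could not be
        // obtained within the timeout, instead of blocking forever.
        if (mutex.acquire(3, TimeUnit.SECONDS)) {
            try {
                // critical section: guarded across all JVMs sharing lockPath
            } finally {
                mutex.release();
            }
        }
    }
}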
  3. Lock acquisition and release
import org.apache.curator.framework.recipes.locks.InterProcessMutex;

public class ZKCuratorLock {
    // Obtain the distributed lock object from the configuration class.
    private static InterProcessMutex lock = ZKCuratorManager.getLock();

    // Acquire the lock; returns false if acquisition fails.
    public static boolean acquire() {
        try {
            lock.acquire();
            System.out.println(Thread.currentThread().getName() + " acquire success");
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    // Release the lock.
    public static void release() {
        try {
            lock.release();
            System.out.println(Thread.currentThread().getName() + " release success");
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
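In business code, pairing acquire() with release() in a finally block prevents leaking the lock when the critical section throws. A minimal usage sketch, assuming the helper class above:

if (ZKCuratorLock.acquire()) {
    try {
        // critical section: only one client in the cluster executes this at a time
    } finally {
        // Always release, even if the critical section throws.
        ZKCuratorLock.release();
    }
}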
  4. Use CyclicBarrier to simulate concurrent acquisition of the distributed lock
import java.util.concurrent.CyclicBarrier;

/**
 * Use CyclicBarrier to simulate concurrent acquisition of the distributed lock.
 */
public class CucTest {
    public static void main(String[] args) {
        int N = 4;
        CyclicBarrier barrier = new CyclicBarrier(N);
        for (int i = 0; i < N; i++) {
            new Writer(barrier).start();
        }
        System.out.println("END");
    }

    static class Writer extends Thread {
        private CyclicBarrier cyclicBarrier;

        public Writer(CyclicBarrier cyclicBarrier) {
            this.cyclicBarrier = cyclicBarrier;
        }

        @Override
        public void run() {
            System.out.println("Thread " + Thread.currentThread().getName() + " is writing data...");
            try {
                Thread.sleep(5000);      // Sleep to simulate the write operation.
                System.out.println("Thread " + Thread.currentThread().getName()
                        + " finished writing, waiting for the other threads to finish");
                cyclicBarrier.await();
            } catch (Exception e) {
                e.printStackTrace();
            }
            System.out.println("All threads finished writing; continuing with other tasks...");
            // Acquire the distributed lock.
            ZKCuratorLock.acquire();
            System.out.println("Thread " + Thread.currentThread().getName() + " acquired the distributed lock");
            try {
                Thread.sleep(2000);
                ZKCuratorLock.release();
                System.out.println("Thread " + Thread.currentThread().getName() + " released the distributed lock");
            } catch (Exception e) {
                e.printStackTrace();
            }
            System.out.println("END");
        }
    }
}
  5. Test results


    [Figure: ZooKeeper distributed lock test results]

How the ZooKeeper lock works

  • First, in ZooKeeper the lock is represented by a znode, say "my_lock". Clients compete to acquire it.
  • Suppose client A requests the lock first. It does so using a special ZooKeeper concept, the "ephemeral sequential node": directly under the "my_lock" node it creates a sequential child node, whose sequence number ZooKeeper assigns and maintains internally.
  • Client A then queries all children of the "my_lock" node and sorts them by sequence number, obtaining an ordered set.
  • If its own node comes first in that order, client A holds the lock.
  • Now client B wants the lock and does exactly the same thing: it starts by creating an ephemeral sequential node under the "my_lock" node.
  • Because client B is the second to create a sequential node, ZooKeeper internally assigns it sequence number 2.
  • Next, client B runs the same locking logic: it queries all children of the "my_lock" node and checks whether the node it created is first in the sorted set. It is not, so the lock attempt fails.
  • After the lock attempt fails, client B uses the ZooKeeper API to register a listener (watcher) on the node immediately ahead of its own, so it is notified when that node is deleted or otherwise changes.
  • Later, client A finishes its critical-section logic and releases the lock, which deletes its node.
  • Once the node is deleted, ZooKeeper notifies the listeners on that node, including the one client B registered earlier: the node you are watching has been deleted, i.e. the lock has been released.
  • Client B's listener thus perceives that the node ahead of it is gone, meaning the client ahead of it has released the lock.
  • ZooKeeper then notifies client B to retry acquiring the lock: it re-fetches the set of child nodes under "my_lock" and repeats the check above (a code sketch of this algorithm follows the list).
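Curator's InterProcessMutex recipe encapsulates exactly this logic. For illustration, here is a minimal sketch of the algorithm against the raw ZooKeeper API; the class and node names (RawZkLockSketch, /my_lock, seq-) are assumptions, and production code would also need to handle session expiry and interrupted waits:

import org.apache.zookeeper.*;
import org.apache.zookeeper.data.Stat;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;

public class RawZkLockSketch {
    private static final String LOCK_ROOT = "/my_lock";
    private final ZooKeeper zk;
    private String myNode; // full path of our ephemeral sequential node

    public RawZkLockSketch(ZooKeeper zk) {
        this.zk = zk;
    }

    public void lock() throws Exception {
        // 1. Create an ephemeral sequential node under the lock node;
        //    ZooKeeper appends a monotonically increasing sequence number.
        myNode = zk.create(LOCK_ROOT + "/seq-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
        while (true) {
            // 2. Fetch all children of the lock node and sort them by number.
            List<String> children = zk.getChildren(LOCK_ROOT, false);
            Collections.sort(children);
            String myName = myNode.substring(LOCK_ROOT.length() + 1);
            int index = children.indexOf(myName);
            if (index == 0) {
                return; // 3. Smallest sequence number: we hold the lock.
            }
            // 4. Lock attempt failed: watch the node immediately ahead of us.
            String prev = LOCK_ROOT + "/" + children.get(index - 1);
            CountDownLatch latch = new CountDownLatch(1);
            Stat stat = zk.exists(prev, event -> {
                if (event.getType() == Watcher.Event.EventType.NodeDeleted) {
                    latch.countDown();
                }
            });
            if (stat != null) {
                // 5. Block until the predecessor is deleted, then re-check.
                latch.await();
            }
        }
    }

    public void unlock() throws Exception {
        // Deleting our node triggers the next waiter's watcher.
        zk.delete(myNode, -1);
    }
}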


    [Figure: lock principle flow chart]

Reproduced from: https://www.jianshu.com/p/bee78409f122

Origin: blog.csdn.net/weixin_33733810/article/details/91154944