Hand-Writing a Zookeeper Distributed Lock

Before implementing a distributed lock with Zookeeper, note that Redis can also serve as a distributed lock. So why use Zookeeper instead?

The picture above compares database, Redis, and Zookeeper as distributed-lock implementations. A database-based lock clearly has the worst performance. Redis performs well, but it loses to Zookeeper on consistency, and Zookeeper has a natural advantage in distributed clusters. In production, middleware is usually deployed as a cluster, which raises the problem of master-slave synchronization. In a Redis master-slave cluster, if the master node dies before it finishes replicating data (for example, an RDB file) to a slave, the most recent writes can be lost. Redis does have synchronization-confirmation strategies: locking with Redis is essentially a SETNX on a key, and strictly the master should replicate the key to the slaves before the lock is considered acquired. To improve efficiency, the RedLock algorithm considers the lock acquired once more than half of the nodes confirm it; however, RedLock is quite troublesome to maintain. Zookeeper, by contrast, supports this majority (quorum) model natively.

Another reason to use Zookeeper is its watcher mechanism. A thread that fails to grab the lock blocks; when the thread holding the lock releases it, the watcher mechanism actively notifies the waiting threads to wake up and compete for the lock again. With a Redis lock, a thread competes by spinning on SETNX, which is less elegant than Zookeeper's approach. That is why I prefer Zookeeper for distributed locks.
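For contrast, the Redis-style spin can be sketched without Redis at all: below, `ConcurrentHashMap.putIfAbsent` stands in for SETNX (this is a simulation for illustration, not Jedis code, and the key/owner names are invented).

```java
import java.util.concurrent.ConcurrentHashMap;

public class SpinLockDemo {

    // stands in for the Redis store: key -> owner
    static final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

    // SETNX-style acquire: succeeds only if the key was absent
    static boolean setnx(String key, String owner) {
        return store.putIfAbsent(key, owner) == null;
    }

    public static void main(String[] args) {
        // the thread spins until setnx succeeds -- nobody notifies it on release
        while (!setnx("order_lock", "thread-1")) {
            Thread.onSpinWait();
        }
        System.out.println(store.get("order_lock")); // thread-1
        store.remove("order_lock"); // "DEL" releases the lock
    }
}
```

The point of the sketch: a waiter has to keep polling. Zookeeper's watcher turns this polling into a push notification.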

 

Zookeeper features

 

 

1. A tree of nodes: a znode is addressed like a Unix file-system path, and you can store data in or read data from a znode;
2. Through the client you can create, delete, update, and query znodes, and you can also register a watcher to be notified of znode changes.

A distributed lock can be implemented with these two features.
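To make the ordering idea concrete before the real code below, here is a small plain-Java sketch (no Zookeeper connection; the child names are invented for illustration). It simulates how sorting the sequential children of the lock node decides who holds the lock, and which node each waiter watches.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LockOrderingDemo {

    // Among the children of the lock node, the smallest sequence number owns the lock.
    static String lockOwner(List<String> children) {
        List<String> sorted = new ArrayList<>(children);
        Collections.sort(sorted);
        return sorted.get(0);
    }

    // Every other node waits on (watches) the node immediately before it.
    static String predecessorOf(String node, List<String> children) {
        List<String> sorted = new ArrayList<>(children);
        Collections.sort(sorted);
        int idx = sorted.indexOf(node);
        return idx <= 0 ? null : sorted.get(idx - 1);
    }

    public static void main(String[] args) {
        // Hypothetical sequential znode names under the lock node
        List<String> children = List.of("0000000003", "0000000001", "0000000002");
        System.out.println(lockOwner(children));                   // 0000000001
        System.out.println(predecessorOf("0000000003", children)); // 0000000002
    }
}
```

Because each waiter watches only its immediate predecessor, releasing the lock wakes exactly one thread instead of stampeding all of them.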

 

The figure above is the flowchart of the Zookeeper distributed-lock implementation. The code follows below.

Code

 

Create a ZkDistributeLock class that implements the Lock interface.

@Slf4j
public class ZkDistributeLock implements Lock {

    @Autowired
    private ZookeeperConfig config;

    private String lockPath;

    private ZkClient client;

    private String currentPath;

    private String beforePath;
    ...
}

 

The root node of the lock is created in the constructor.

public ZkDistributeLock(String lockPath) {
    super();
    this.lockPath = lockPath;
    ZookeeperConfig zookeeperConfig = new ZookeeperConfig();
    this.client = zookeeperConfig.getConnectionWithoutSpring();
    this.client.setZkSerializer(new MyZkSerializer());

    if (!this.client.exists(lockPath)) {
        try {
            this.client.createPersistent(lockPath);
        } catch (ZkNodeExistsException e) {
            // another client created the root node concurrently; safe to ignore
        }
    }
}
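MyZkSerializer is referenced but not shown in the post. A ZkClient serializer for this use case typically just encodes strings as UTF-8 bytes; the helpers below sketch that logic as plain static methods (the real class would implement ZkClient's ZkSerializer interface — the class and method names here are my assumptions, not the author's code).

```java
import java.nio.charset.StandardCharsets;

public class Utf8Codec {

    // what a serialize method would likely do: String -> UTF-8 bytes
    static byte[] serialize(Object data) {
        return String.valueOf(data).getBytes(StandardCharsets.UTF_8);
    }

    // what a deserialize method would likely do: UTF-8 bytes -> String
    static String deserialize(byte[] bytes) {
        return bytes == null ? null : new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] raw = serialize("lock");
        System.out.println(deserialize(raw)); // lock
    }
}
```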

 

Writing the lock() method

@Override
public void lock() {

    if (!tryLock()) {
        // failed to acquire the lock: block until our predecessor node is deleted
        waitForLock();
        // then try again
        lock();
    }

}

Writing the tryLock() method

public boolean tryLock() {
    // non-blocking: create our ephemeral sequential node if we haven't yet
    if (this.currentPath == null) {
        currentPath = this.client.createEphemeralSequential(lockPath + "/", "lock");
    }
    // fetch all children and sort them; the smallest sequence number holds the lock
    List<String> children = this.client.getChildren(lockPath);
    Collections.sort(children);
    if (currentPath.equals(lockPath + "/" + children.get(0))) {
        return true;
    } else {
        // otherwise remember the node just before ours, so we can wait on it
        int currentIndex = children.indexOf(currentPath.substring(lockPath.length() + 1));
        beforePath = lockPath + "/" + children.get(currentIndex - 1);
    }
    log.info("Lock node created successfully: {}", lockPath);
    return false;
}
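The substring arithmetic in tryLock is easy to get off by one. This standalone snippet walks through it with a made-up path (no Zookeeper required): lockPath.length() + 1 skips both the parent path and the '/' separator, leaving just the child node name.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class PathArithmeticDemo {
    public static void main(String[] args) {
        String lockPath = "/distribute_lock";
        // hypothetical full path returned by createEphemeralSequential
        String currentPath = "/distribute_lock/0000000002";
        List<String> children = Arrays.asList("0000000001", "0000000002", "0000000003");
        Collections.sort(children);

        // strip "<lockPath>/" to get the bare child name
        String name = currentPath.substring(lockPath.length() + 1);
        int currentIndex = children.indexOf(name);
        String beforePath = lockPath + "/" + children.get(currentIndex - 1);

        System.out.println(name);       // 0000000002
        System.out.println(beforePath); // /distribute_lock/0000000001
    }
}
```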

Writing the unlock() method

@Override
public void unlock() {
    client.delete(this.currentPath);
    // clear the cached path so this instance can acquire the lock again later
    this.currentPath = null;
}

Writing the waitForLock() method

private void waitForLock() {

    CountDownLatch count = new CountDownLatch(1);
    IZkDataListener listener = new IZkDataListener() {
        @Override
        public void handleDataChange(String s, Object o) throws Exception {
            // we only care about deletion, not data changes
        }

        @Override
        public void handleDataDeleted(String s) throws Exception {
            System.out.println(String.format("Node [%s] has been deleted", s));
            count.countDown();
        }
    };

    // watch the predecessor node
    client.subscribeDataChanges(this.beforePath, listener);

    // block ourselves until the predecessor is gone
    if (this.client.exists(this.beforePath)) {
        try {
            count.await();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    // unsubscribe the listener
    client.unsubscribeDataChanges(this.beforePath, listener);
}
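The Lock interface also declares lockInterruptibly, tryLock(long, TimeUnit), and newCondition, which the post does not show. A common minimal choice, sketched here with a trivial in-memory stand-in (my assumption, not the original author's code), is to build the extra variants on top of tryLock and reject conditions outright:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;

// Minimal stand-in showing one way to fill in the Lock methods the post omits.
public class StubZkLock implements Lock {

    private final AtomicBoolean held = new AtomicBoolean(false);

    @Override
    public boolean tryLock() {
        // stand-in for the znode-based tryLock
        return held.compareAndSet(false, true);
    }

    @Override
    public void lock() {
        while (!tryLock()) {
            Thread.onSpinWait(); // the real lock blocks on the predecessor watcher instead
        }
    }

    @Override
    public void unlock() {
        held.set(false);
    }

    @Override
    public void lockInterruptibly() throws InterruptedException {
        // same as lock(), but honour interruption while waiting
        while (!tryLock()) {
            if (Thread.interrupted()) {
                throw new InterruptedException();
            }
        }
    }

    @Override
    public boolean tryLock(long time, TimeUnit unit) {
        // poll tryLock until the deadline passes
        long deadline = System.nanoTime() + unit.toNanos(time);
        while (System.nanoTime() < deadline) {
            if (tryLock()) {
                return true;
            }
        }
        return false;
    }

    @Override
    public Condition newCondition() {
        // conditions are not supported by this lock
        throw new UnsupportedOperationException();
    }
}
```

In the znode-based version, tryLock(time, unit) could likewise reuse waitForLock with count.await(time, unit) instead of the untimed await.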

 

Test

// acquire the lock when creating an order
@Override
public void createOrder() {

    String lockPath = "/distribute_lock";
    String orderCode = null;
    // ZK distributed lock
    Lock lock = new ZkDistributeLock(lockPath);
    // zkDistributeLock.setLockPath(lockPath);
    lock.lock();
    try {
        orderCode = ocg.getOrderCode();
    } finally {
        lock.unlock();
    }

    log.info("Current thread: {}, generated order code: {}", Thread.currentThread().getName(), orderCode);
    // other logic

}

@Test
public void testDisLock() {
    // number of concurrent threads
    int currency = 20;

    // cyclic barrier
    CyclicBarrier cyclicBarrier = new CyclicBarrier(currency);

    for (int i = 0; i < currency; i++) {
        new Thread(() -> {
            // OrderServiceImplWithDisLock orderService = new OrderServiceImplWithDisLock();
            System.out.println(Thread.currentThread().getName() + "====start====");
            // wait so all threads start together
            try {
                // use CyclicBarrier to simulate concurrency
                cyclicBarrier.await();
            } catch (InterruptedException | BrokenBarrierException e) {
                e.printStackTrace();
            }
            orderService.createOrder();
        }).start();

    }
}

 

Run the test method, and the lock nodes are generated on Zookeeper.

 

The thread that grabs the lock logs that its lock node was created successfully,

 

and executes the code inside the lock.

 

Threads that cannot grab the lock block, waiting for the lock to be released.

 

When the lock is released, the node is deleted and the watcher mechanism is notified.

The thread that grabs the lock next then executes its own task.

 

GitHub code: available via the [read original] link.


Origin blog.csdn.net/wujialv/article/details/108437963