Curator distributed locks (reentrant lock, non-reentrant lock, read-write lock, multi-lock, semaphore, barrier, shared counter)

Foreword

Curator is a ZooKeeper client library open-sourced by Netflix, and it is now a top-level Apache project. Compared with the native client provided by ZooKeeper, Curator offers a higher level of abstraction that simplifies ZooKeeper client development. Curator handles many low-level details of ZooKeeper client programming, including connection re-establishment, re-registration of watchers, handling of NodeExistsException, and so on.

Curator mainly solves three types of problems:

  • Encapsulates connection handling between the ZooKeeper client and the ZooKeeper server
  • Provides a fluent-style operation API
  • Provides abstractions for common ZooKeeper application scenarios (recipes, such as distributed lock services, leader election, shared counters, caching, distributed queues, etc.); these implementations follow ZooKeeper best practices and take various edge cases into account

Curator consists of a series of modules. For most developers, the commonly used ones are curator-framework and curator-recipes:

  • curator-framework: provides common low-level ZooKeeper operations
  • curator-recipes: provides implementations of typical ZooKeeper usage scenarios. The distributed locks this section focuses on come from this package

Code practice

Curator 4.3.0 supports ZooKeeper 3.4.x and 3.5.x, but you need to pay attention to the transitive zookeeper dependency that Curator pulls in, which must match the version actually running on the server side. The following takes zookeeper 3.4.14 as an example.

<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-framework</artifactId>
    <version>4.3.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>4.3.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.apache.zookeeper</groupId>
    <artifactId>zookeeper</artifactId>
    <version>3.4.14</version>
</dependency>

1. Configuration

Add the Curator client configuration:

@Configuration
public class CuratorConfig {

    @Bean
    public CuratorFramework curatorFramework() {
        // Retry policy: exponential backoff, up to 3 retries,
        // initial retry interval 1000 ms, increasing after each retry.
        RetryPolicy retry = new ExponentialBackoffRetry(1000, 3);
        // Initialize the Curator client with the connection string and retry policy.
        CuratorFramework client = CuratorFrameworkFactory.newClient("192.168.1.111:2181", retry);
        client.start(); // Start the connection; many methods will not work until this is called.
        return client;
    }
}

2. Reentrant lock InterProcessMutex

Similar to the JDK's ReentrantLock: the same client can acquire the lock multiple times while owning it without being blocked. It is implemented by the InterProcessMutex class.

// Common constructor
public InterProcessMutex(CuratorFramework client, String path)
// Acquire the lock
public void acquire();
// Acquire the lock, with a timeout
public boolean acquire(long time, TimeUnit unit);
// Release the lock
public void release();

Test method:

@Autowired
private CuratorFramework curatorFramework;

public void checkAndLock() {
    InterProcessMutex mutex = new InterProcessMutex(curatorFramework, "/curator/lock");
    try {
        // Acquire the lock
        mutex.acquire();

        // Business logic,
        // e.g. query stock and deduct stock

        // this.testSub(mutex); // To re-enter, you must use the same InterProcessMutex object

        // Release the lock
        mutex.release();
    } catch (Exception e) {
        e.printStackTrace();
    }
}

public void testSub(InterProcessMutex mutex) {
    try {
        mutex.acquire();
        System.out.println("Testing the reentrant lock....");
        mutex.release();
    } catch (Exception e) {
        e.printStackTrace();
    }
}

Note: to re-enter the lock, you must use the same InterProcessMutex object.
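Since the contract mirrors the JDK's ReentrantLock, the reentrancy behavior can be sketched with JDK classes alone, without any ZooKeeper server (a local analogy, not Curator's implementation):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantDemo {
    // Same idea as InterProcessMutex: the thread already holding the lock
    // may acquire it again on the SAME lock object without blocking.
    public static int reenter() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                      // first acquisition
        lock.lock();                      // reentrant acquisition, does not block
        int holds = lock.getHoldCount();  // hold count is now 2
        lock.unlock();
        lock.unlock();                    // must release once per acquisition
        return holds;
    }
}
```

As with InterProcessMutex, each acquisition must be matched by a release before the lock is fully freed.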

3. Non-reentrant lock InterProcessSemaphoreMutex

InterProcessSemaphoreMutex is used much like InterProcessMutex; the difference is that this lock is non-reentrant: the same thread cannot acquire it a second time while already holding it.

public InterProcessSemaphoreMutex(CuratorFramework client, String path);
public void acquire();
public boolean acquire(long time, TimeUnit unit);
public void release();

Example:

@Autowired
private CuratorFramework curatorFramework;

public void deduct() {
    InterProcessSemaphoreMutex mutex = new InterProcessSemaphoreMutex(curatorFramework, "/curator/lock");
    try {
        mutex.acquire();

        // Business logic,
        // e.g. query stock and deduct stock

    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            mutex.release();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
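The contrast with the reentrant case can be shown with a JDK analogue: a binary java.util.concurrent.Semaphore is likewise non-reentrant, so a second acquisition attempt by the same thread does not succeed (a local sketch, not Curator's implementation):

```java
import java.util.concurrent.Semaphore;

public class NonReentrantDemo {
    // Like InterProcessSemaphoreMutex, a binary semaphore does not track
    // ownership, so the holding thread cannot "re-enter" it.
    public static boolean secondAcquireSucceeds() {
        Semaphore mutex = new Semaphore(1);
        mutex.acquireUninterruptibly();      // first acquisition succeeds
        boolean second = mutex.tryAcquire(); // second attempt by the SAME thread fails
        mutex.release();
        return second;
    }
}
```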

4. Reentrant read-write lock InterProcessReadWriteLock

Similar to the JDK's ReentrantReadWriteLock. A thread holding the write lock can re-enter the read lock, but a thread holding a read lock cannot acquire the write lock. This also means a write lock can be downgraded to a read lock, while upgrading from a read lock to a write lock is not possible. The main implementation class is InterProcessReadWriteLock:

// Constructor
public InterProcessReadWriteLock(CuratorFramework client, String basePath);
// Get the read lock object
InterProcessMutex readLock();
// Get the write lock object
InterProcessMutex writeLock();

Note: a write lock blocks other requesting threads until it is released; a read lock does not.

public void testZkReadLock() {
    try {
        InterProcessReadWriteLock rwlock = new InterProcessReadWriteLock(curatorFramework, "/curator/rwlock");
        rwlock.readLock().acquire(10, TimeUnit.SECONDS);
        // TODO: read operations....
        // rwlock.readLock().release();
    } catch (Exception e) {
        e.printStackTrace();
    }
}

public void testZkWriteLock() {
    try {
        InterProcessReadWriteLock rwlock = new InterProcessReadWriteLock(curatorFramework, "/curator/rwlock");
        rwlock.writeLock().acquire(10, TimeUnit.SECONDS);
        // TODO: write operations....
        // rwlock.writeLock().release();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
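The downgrade rule (write to read allowed, read to write not) matches the JDK's ReentrantReadWriteLock and can be demonstrated locally (a JDK analogy, not Curator code):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DowngradeDemo {
    // Downgrade a write lock to a read lock: acquire write, acquire read,
    // then release write. The thread ends up holding only the read lock.
    public static boolean downgrade() {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        rw.writeLock().lock();     // exclusive write lock
        rw.readLock().lock();      // re-enter as a read lock while writing
        rw.writeLock().unlock();   // drop the write lock: downgraded
        boolean downgraded = rw.getReadLockCount() == 1 && !rw.isWriteLocked();
        rw.readLock().unlock();
        return downgraded;
    }
}
```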

5. Multi-lock InterProcessMultiLock

Multi Shared Lock is a container of locks. When acquire is called, all contained locks are acquired; if any acquisition fails, every lock acquired so far is released. Likewise, when release is called, all locks are released (release failures are ignored). Essentially it is a group lock: acquire and release operations on it are forwarded to every lock it contains. The implementation class is InterProcessMultiLock:

// The constructor takes a collection of locks, or a list of ZooKeeper paths
public InterProcessMultiLock(List<InterProcessLock> locks);
public InterProcessMultiLock(CuratorFramework client, List<String> paths);

// Acquire the locks
public void acquire();
public boolean acquire(long time, TimeUnit unit);

// Release the locks
public synchronized void release();
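The all-or-nothing acquire contract can be sketched with plain JDK locks (an illustrative analogue; InterProcessMultiLock applies the same logic over ZooKeeper-backed locks):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.Lock;

public class GroupLockDemo {
    // Group-lock semantics: either every lock is acquired, or every
    // partial acquisition is rolled back and the call reports failure.
    public static boolean acquireAll(List<? extends Lock> locks) {
        List<Lock> held = new ArrayList<>();
        for (Lock lock : locks) {
            if (lock.tryLock()) {
                held.add(lock);
            } else {
                for (Lock h : held) {
                    h.unlock();   // release everything acquired so far
                }
                return false;
            }
        }
        return true;              // all locks acquired
    }
}
```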

6. Semaphore InterProcessSemaphoreV2

A counting semaphore similar to the JDK's Semaphore. What the JDK's Semaphore calls permits are called leases in Curator. Note that all instances must use the same numberOfLeases value. acquire returns a Lease object, which the client must close in a finally block, otherwise the lease is leaked. However, if the client session is lost for some reason (e.g. a crash), the leases held by that client are automatically released, so that other clients can continue to use them. The main implementation class is InterProcessSemaphoreV2:

// Constructor
public InterProcessSemaphoreV2(CuratorFramework client, String path, int maxLeases);

// Note that you can request several leases at once; if the semaphore does not
// currently have enough leases available, the requesting thread is blocked.
// Overloads with a timeout are also provided.
public Lease acquire();
public Collection<Lease> acquire(int qty);
public Lease acquire(long time, TimeUnit unit);
public Collection<Lease> acquire(int qty, long time, TimeUnit unit)

// Leases can be returned in the following ways
public void returnAll(Collection<Lease> leases);
public void returnLease(Lease lease);
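The lease semantics parallel the JDK's counting Semaphore; the limit on concurrent holders can be shown locally (a JDK sketch, not the ZooKeeper-backed implementation):

```java
import java.util.concurrent.Semaphore;

public class LeaseDemo {
    // With maxLeases permits, at most maxLeases "clients" hold a lease
    // at once; further non-blocking requests are refused.
    public static int granted(int maxLeases, int requests) {
        Semaphore semaphore = new Semaphore(maxLeases);
        int granted = 0;
        for (int i = 0; i < requests; i++) {
            if (semaphore.tryAcquire()) { // non-blocking lease request
                granted++;
            }
        }
        return granted; // leases are deliberately not returned in this sketch
    }
}
```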

Example code:

Add a method in StockController:

@GetMapping("test/semaphore")
public String testSemaphore(){
    this.stockService.testSemaphore();
    return "hello Semaphore";
}

Add a method in StockService:

public void testSemaphore() {
    // Set the number of resources, i.e. how many threads are let through
    InterProcessSemaphoreV2 semaphoreV2 = new InterProcessSemaphoreV2(curatorFramework, "/locks/semaphore", 5);
    try {
        Lease acquire = semaphoreV2.acquire(); // Acquire a resource; a thread that succeeds continues with its business logic, otherwise it blocks
        this.redisTemplate.opsForList().rightPush("log", "10010 acquired the resource, starting business logic. " + Thread.currentThread().getName());
        TimeUnit.SECONDS.sleep(10 + new Random().nextInt(10));
        this.redisTemplate.opsForList().rightPush("log", "10010 finished business logic, releasing the resource ===================== " + Thread.currentThread().getName());
        semaphoreV2.returnLease(acquire); // Return the lease manually so that later request threads can acquire it
    } catch (Exception e) {
        e.printStackTrace();
    }
}

7. Barrier

  1. The barrierPath parameter in the DistributedBarrier constructor identifies a barrier: two instances with the same barrierPath (the same path) refer to the same barrier. A barrier is typically used as follows:

    1. The master client sets the barrier
    2. Other clients call waitOnBarrier() and their processing threads block, waiting for the barrier to be removed
    3. Once the master client removes the barrier, the other clients' handler threads continue running at roughly the same time

The main methods of the DistributedBarrier class are as follows:

setBarrier() - set the barrier
waitOnBarrier() - wait for the barrier to be removed
removeBarrier() - remove the barrier
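The three methods above map naturally onto a single-JVM analogue built from a CountDownLatch (an analogy only; DistributedBarrier coordinates across processes via ZooKeeper):

```java
import java.util.concurrent.CountDownLatch;

public class BarrierDemo {
    // The latch plays the barrier: await() ~ waitOnBarrier(),
    // countDown() ~ removeBarrier().
    public static boolean run() {
        CountDownLatch barrier = new CountDownLatch(1);  // the barrier is "set"
        Thread worker = new Thread(() -> {
            try {
                barrier.await();                         // blocked on the barrier
            } catch (InterruptedException ignored) { }
        });
        worker.start();
        barrier.countDown();                             // "remove" the barrier
        try {
            worker.join(5000);
        } catch (InterruptedException ignored) { }
        return !worker.isAlive();                        // the worker was released
    }
}
```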
  2. DistributedDoubleBarrier is a double barrier that lets clients synchronize at the start and end of a computation. Processes start computing once enough of them have joined the double barrier, and leave the barrier when the computation is complete. DistributedDoubleBarrier implements this behavior. The constructor is as follows:

    // client - the client
    // barrierPath - path to use
    // memberQty - the number of members in the barrier
    public DistributedDoubleBarrier(CuratorFramework client, String barrierPath, int memberQty);

    enter() / enter(long maxWait, TimeUnit unit) - wait for all members to enter the barrier
    leave() / leave(long maxWait, TimeUnit unit) - wait for all members to leave the barrier
    

memberQty is the number of members. When enter() is called, the caller blocks until all members have called enter(); likewise, leave() blocks the calling thread until all members have called leave().

Note: the memberQty value is only a threshold, not a limit. The barrier opens as soon as the number of waiters is greater than or equal to this value!

Like DistributedBarrier, the double barrier's barrierPath parameter determines whether two instances are the same barrier. The double barrier is used as follows:

  1. Multiple clients create a double barrier (DistributedDoubleBarrier) on the same path and call enter(); they block until memberQty clients have entered the barrier.
  2. Once memberQty clients have entered, they all unblock and continue running, until each one reaches leave() and blocks there, waiting for memberQty clients to be blocked in leave() at the same time.
  3. When memberQty clients are blocked in leave() at the same time, all of their leave() calls unblock and the clients continue running.
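A single-JVM analogue of this enter/leave cycle is a pair of CyclicBarriers, one for entering and one for leaving (illustrative only; the real synchronization happens across processes via ZooKeeper):

```java
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

public class DoubleBarrierDemo {
    // Each worker "enters", computes, then "leaves"; no worker gets past
    // leave() until all memberQty workers have reached it.
    public static int run(int memberQty) {
        CyclicBarrier enter = new CyclicBarrier(memberQty); // ~ enter()
        CyclicBarrier leave = new CyclicBarrier(memberQty); // ~ leave()
        AtomicInteger computed = new AtomicInteger();
        Thread[] workers = new Thread[memberQty];
        for (int i = 0; i < memberQty; i++) {
            workers[i] = new Thread(() -> {
                try {
                    enter.await();              // wait until everyone has entered
                    computed.incrementAndGet(); // the "computation"
                    leave.await();              // wait until everyone is done
                } catch (Exception ignored) { }
            });
            workers[i].start();
        }
        for (Thread w : workers) {
            try {
                w.join(5000);
            } catch (InterruptedException ignored) { }
        }
        return computed.get();
    }
}
```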

8. Shared counters

This takes advantage of ZooKeeper to provide a counter that can be shared across a cluster: all clients simply use the same path to read the latest counter value, and consistency is guaranteed by ZooKeeper. Curator provides two counters, one for int values and one for long values.

8.1. SharedCount

The methods of the shared counter SharedCount are as follows:

// Constructor
public SharedCount(CuratorFramework client, String path, int seedValue);
// Get the shared count value
public int getCount();
// Set the shared count value
public void setCount(int newCount) throws Exception;
// Update the shared value only if the version number has not changed
public boolean trySetCount(VersionedValue<Integer> previous, int newCount);
// Watch for changes to the shared count via listeners
public void addListener(SharedCountListener listener);
public void addListener(final SharedCountListener listener, Executor executor);
// A shared count must be started before use
public void start() throws Exception;
// Close the shared count
public void close() throws IOException;
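The trySetCount contract is a compare-and-set against a versioned value; the same idea can be sketched with an AtomicInteger (a local analogy for the version check, not the SharedCount implementation):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class TrySetDemo {
    // trySetCount only succeeds if the value is still the one we last read;
    // a concurrent update from another client makes our snapshot stale.
    public static boolean staleUpdateSucceeds() {
        AtomicInteger count = new AtomicInteger(0);
        int snapshot = count.get();               // our last-seen value
        count.set(42);                            // "another client" updates the count
        return count.compareAndSet(snapshot, 7);  // stale snapshot: update is rejected
    }
}
```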

Use Cases:

StockController:

@GetMapping("test/zk/share/count")
public String testZkShareCount(){
    this.stockService.testZkShareCount();
    return "hello shareData";
}

StockService:

public void testZkShareCount() {
    try {
        // The third parameter is the initial value of the shared count
        SharedCount sharedCount = new SharedCount(curatorFramework, "/curator/count", 0);
        // Start the shared counter
        sharedCount.start();
        // Get the shared count value
        int count = sharedCount.getCount();
        // Change the shared count value
        int random = new Random().nextInt(1000);
        sharedCount.setCount(random);
        System.out.println("Read the initial shared count: " + count + ", and set the counter to: " + random);
        sharedCount.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}

8.2. DistributedAtomicNumber

The DistributedAtomicNumber interface is an abstraction over distributed atomic numeric types; it defines the methods a distributed atomic number must provide.

The DistributedAtomicNumber interface has two implementations: DistributedAtomicLong and DistributedAtomicInteger.


Both implementations delegate the atomic operations to DistributedAtomicValue, so they behave the same way and differ only in the numeric type they represent. The following uses DistributedAtomicLong as an example.

Besides having a larger counting range than SharedCount, DistributedAtomicLong is also simpler and easier to use. It first tries to set the counter using optimistic locking; if that fails (for example, the counter was updated by another client in the meantime), it falls back to an InterProcessMutex to update the value. This counter offers a set of operations:

  • get(): get the current value
  • increment(): add one
  • decrement(): subtract one
  • add(): add a specific value
  • subtract(): subtract a specific value
  • trySet(): try to set the value
  • forceSet(): forcibly set the value

Finally, you must always check succeeded() on the returned result, which indicates whether the operation succeeded. If it did, preValue() is the value before the operation and postValue() is the value after it.
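The optimistic path described above (read, compute, compare-and-set, retry) can be sketched locally with an AtomicLong (a sketch of the pattern only; DistributedAtomicLong falls back to an InterProcessMutex instead of throwing):

```java
import java.util.concurrent.atomic.AtomicLong;

public class OptimisticAddDemo {
    // Optimistic add: retry compareAndSet a bounded number of times.
    public static long addOptimistically(AtomicLong value, long delta, int maxTries) {
        for (int i = 0; i < maxTries; i++) {
            long preValue = value.get();        // value before the operation
            long postValue = preValue + delta;  // value after the operation
            if (value.compareAndSet(preValue, postValue)) {
                return postValue;               // "succeeded() == true"
            }
            // Someone else updated the value in between; retry.
        }
        throw new IllegalStateException("optimistic update failed after " + maxTries + " tries");
    }
}
```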

Origin blog.csdn.net/weixin_43847283/article/details/128603255