How Redisson implements distributed locks on Redis: a detailed look at the underlying principles

I. Introduction


In interviews these days, distributed systems come up almost every time. Interviewers tend to chat their way from service frameworks (Spring Cloud, Dubbo) all the way through distributed transactions, distributed locks, ZooKeeper, and related topics.

So in this article we will talk about distributed locks and look at the concrete implementation principles of a Redis distributed lock.

To be honest, if you need a distributed lock in a production environment, you will almost certainly use an open-source library. For a Redis distributed lock, the usual choice is a framework like Redisson, which is very easy to use.

If you are interested, you can visit the Redisson website to see how to add the Redisson dependency to your project and then use it to acquire and release Redis-based distributed locks.

Here is a simple code snippet to give you an intuitive feel for it:
[image: code snippet acquiring and releasing a Redisson lock]
See that? With code like that, acquiring a lock could hardly be simpler!
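In case the image does not render for you, typical Redisson usage looks roughly like this (a sketch, not the article's exact snippet; it assumes the Redisson dependency is on the classpath and a Redis server is reachable at redis://127.0.0.1:6379):

```java
import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonLockDemo {
    public static void main(String[] args) {
        // Point Redisson at a single Redis instance
        // (sentinel, cluster, and master-slave configs also exist).
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient redisson = Redisson.create(config);

        RLock lock = redisson.getLock("myLock");
        lock.lock();           // blocks until the lock is acquired
        try {
            // ... critical section ...
        } finally {
            lock.unlock();     // always release in a finally block
        }
        redisson.shutdown();
    }
}
```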

In addition, Redisson supports single-instance Redis, Redis Sentinel, Redis Cluster, Redis master-slave, and other deployment architectures, so it covers you perfectly.


II. How Redisson implements distributed locks on Redis

Next, with the help of a hand-drawn diagram, let's walk through how the open-source Redisson framework implements distributed locks on Redis.
[image: hand-drawn diagram of the Redisson locking flow]

(1) Locking mechanism

Look at the diagram above. Suppose a client wants to acquire a lock. If that client is talking to a Redis Cluster, it will first hash the lock key to select one node.

Note: it selects just one machine! This is the key point!

It then sends a Lua script to that Redis instance. The script looks like this:
[image: the Lua script Redisson uses to acquire the lock]
Why use a Lua script?

Because a large chunk of complex logic can be wrapped in a Lua script and sent to Redis, which guarantees that the whole sequence executes atomically.

So what does this Lua script actually do?

KEYS[1] is the key you are locking on. For example:

RLock lock = redisson.getLock("myLock");

Here the lock key you chose is "myLock".

ARGV[1] is the default time-to-live of the lock key: 30 seconds by default.

ARGV[2] is the ID of the client acquiring the lock, which looks something like this:

8743c9c0-0795-4907-87fd-6c719a6b4586:1

Now the explanation. The first if statement runs "exists myLock" to check whether the lock key already exists. If it does not, the client acquires the lock.

How? Very simply, with the following command:

hset myLock 8743c9c0-0795-4907-87fd-6c719a6b4586:1 1

This command creates a hash data structure. After it runs, the data looks roughly like this:
[image: myLock → { "8743c9c0-0795-4907-87fd-6c719a6b4586:1": 1 }]
This means the client 8743c9c0-0795-4907-87fd-6c719a6b4586:1 now holds the lock on the key "myLock".

Next the script runs pexpire myLock 30000, setting the time-to-live of the myLock key to 30 seconds.

And that's it: the lock has been acquired.

(2) Lock mutual exclusion

At this point, what happens if client 2 tries to acquire the lock and runs the same Lua script?

Simple: the first if runs exists myLock and finds that the myLock key already exists.

Then the second if checks whether the hash structure under the myLock key contains client 2's ID. It clearly does not: it contains client 1's ID.

So client 2 receives the number returned by pttl myLock, which is the remaining time-to-live of the myLock key, say 15000 milliseconds.

Client 2 then enters a while loop and keeps retrying the lock.
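The branches of that script can be sketched with a tiny in-memory Java model (a hypothetical simulation for illustration, not Redisson's code): the lock key maps to a hash of {clientId → lockCount} plus an expiry timestamp; a successful acquisition returns null (the script's nil), a failed one returns the remaining TTL (the pttl result).

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory model of the locking script (illustration only):
// the lock key maps to a hash of {clientId -> lockCount} plus an expiry time.
public class LockScriptModel {
    private final Map<String, Map<String, Integer>> store = new HashMap<>();
    private final Map<String, Long> expiresAt = new HashMap<>();

    /** Returns null on success (the script's nil), or the remaining TTL in ms (pttl) on failure. */
    public Long tryAcquire(String lockKey, String clientId, long ttlMillis) {
        long now = System.currentTimeMillis();
        Map<String, Integer> hash = store.get(lockKey);
        if (hash == null) {                          // if "exists myLock" == 0 ...
            hash = new HashMap<>();
            hash.put(clientId, 1);                   // hset myLock <clientId> 1
            store.put(lockKey, hash);
            expiresAt.put(lockKey, now + ttlMillis); // pexpire myLock 30000
            return null;
        }
        if (hash.containsKey(clientId)) {            // if the hash holds this client's ID ...
            hash.merge(clientId, 1, Integer::sum);   // reentrant acquire: count + 1
            expiresAt.put(lockKey, now + ttlMillis); // refresh the TTL
            return null;
        }
        return expiresAt.get(lockKey) - now;         // another client holds it: pttl myLock
    }
}
```

Client 2's while loop is then just: keep calling tryAcquire until it returns null, sleeping (or waiting on a notification) between attempts.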

(3) Watchdog auto-renewal mechanism

The lock key client 1 acquired has a default time-to-live of only 30 seconds. What if client 1 wants to keep holding the lock for longer than that?

Simple! As soon as client 1 acquires the lock successfully, Redisson starts a watchdog: a background thread that checks every 10 seconds whether client 1 still holds the lock key and, if so, keeps extending the key's time-to-live.
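The watchdog idea can be sketched like this (my own illustration, not Redisson's internals, with the 30 s TTL and 10 s renewal interval scaled down to milliseconds so it runs quickly): a scheduled background task keeps pushing the lock's expiry forward for as long as the client holds it.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class WatchdogSketch {
    public static final AtomicLong renewals = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        renewals.set(0);
        // The lock's TTL: 30 ms here stands in for Redisson's 30 s default.
        final AtomicLong expiresAt = new AtomicLong(System.currentTimeMillis() + 30);

        // Renew every 10 ms (standing in for the watchdog's 10 s interval)
        // while the client still holds the lock.
        ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();
        watchdog.scheduleAtFixedRate(() -> {
            expiresAt.set(System.currentTimeMillis() + 30); // like "pexpire myLock 30000"
            renewals.incrementAndGet();
        }, 10, 10, TimeUnit.MILLISECONDS);

        Thread.sleep(100);      // the client holds the lock well past the original TTL
        watchdog.shutdownNow(); // once the lock is released, renewal stops
        System.out.println("renewed " + renewals.get() + " times; the key never expired while held");
    }
}
```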

(4) Reentrant locking

What happens if client 1, which already holds the lock, acquires it again reentrantly?

For example, with code like this:
[image: code in which client 1 acquires the myLock lock a second time]
Let's walk through the Lua script again for this case.

The first if clearly fails: exists myLock reports that the lock key already exists.

The second if succeeds, because the ID stored in myLock's hash structure is client 1's own ID: 8743c9c0-0795-4907-87fd-6c719a6b4586:1.

At that point the reentrant-locking logic runs. It uses:

hincrby myLock 8743c9c0-0795-4907-87fd-6c719a6b4586:1 1

This command increments client 1's lock count by 1.

The myLock structure then becomes:
[image: myLock → { "8743c9c0-0795-4907-87fd-6c719a6b4586:1": 2 }]

As you can see, the value stored against the client ID in myLock's hash structure is that client's lock count.
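This count in Redis behaves exactly like the hold count of a JDK ReentrantLock: each reentrant lock() bumps it, each unlock() drops it, and the lock is only truly free at zero. A small JDK analogy (not Redisson code):

```java
import java.util.concurrent.locks.ReentrantLock;

public class HoldCountDemo {
    public static int holdCountAfterDoubleLock() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                  // first acquire: hold count = 1
        lock.lock();                  // reentrant acquire: hold count = 2
        int count = lock.getHoldCount();
        lock.unlock();                // back down to 1
        lock.unlock();                // 0: fully released
        return count;
    }

    public static void main(String[] args) {
        System.out.println(holdCountAfterDoubleLock()); // prints 2
    }
}
```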

(5) Lock release mechanism

Calling lock.unlock() releases the distributed lock, and the logic behind it is very simple.

Put plainly, each unlock decrements the lock count in the myLock structure by 1.

Once the count reaches 0, the client no longer holds the lock, so Redisson runs:

del myLock, deleting the key from Redis.

After that, client 2 can attempt to acquire the lock.
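The release logic can be sketched the same way as the acquire logic (again a hypothetical in-memory model, not Redisson's actual code): decrement the caller's count in the lock hash, and delete the key entirely once the count reaches zero.

```java
import java.util.HashMap;
import java.util.Map;

public class UnlockModel {
    private final Map<String, Map<String, Integer>> store = new HashMap<>();

    /** Test helper: pretend clientId already holds lockKey `count` times. */
    public void seed(String lockKey, String clientId, int count) {
        Map<String, Integer> hash = new HashMap<>();
        hash.put(clientId, count);
        store.put(lockKey, hash);
    }

    public boolean isHeld(String lockKey) {
        return store.containsKey(lockKey);
    }

    /** Returns true once the key is fully released (the "del myLock" case). */
    public boolean unlock(String lockKey, String clientId) {
        Map<String, Integer> hash = store.get(lockKey);
        if (hash == null || !hash.containsKey(clientId)) {
            throw new IllegalMonitorStateException("caller does not hold the lock");
        }
        int remaining = hash.merge(clientId, -1, Integer::sum); // hincrby myLock <id> -1
        if (remaining > 0) {
            return false;       // still held reentrantly
        }
        store.remove(lockKey);  // del myLock
        return true;
    }
}
```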

That is how the open-source Redisson framework implements distributed locks.

In general, in production systems we can use the Redisson framework's Redis-based distributed locks to acquire and release locks.

(6) Drawbacks of the Redis distributed lock described above

In fact, the biggest problem with the scheme above is this: when you write the myLock key to a Redis master instance, it is replicated to the corresponding slave instance asynchronously.

If the Redis master goes down during that window, a failover occurs and a Redis slave is promoted to master.

Client 2 can then complete the lock acquisition on the new master, and it too believes it has successfully acquired the lock.

At that point, multiple clients hold the same distributed lock at the same time.

The system then violates its business semantics and produces all kinds of dirty data.

So this is the biggest flaw of Redis distributed locks under Redis Cluster or master-slave architectures with asynchronous replication: when a Redis master instance goes down, multiple clients can end up holding the lock simultaneously.



Origin blog.csdn.net/belongtocode/article/details/103395771