The underlying implementation of Lock

       In the last blog post, I briefly explained the difference between Lock and synchronized and the strengths and weaknesses of each. Those differences ultimately come down to their underlying implementations, so today let's first take a look at the underlying principles of Lock; since there is a lot of material on the underlying implementation, I will share the rest in another blog post~

       Lock is implemented in pure Java and does not rely on the JVM's built-in locking implementation.

       There are many lock implementation classes in the java.util.concurrent.locks (JUC locks) package. The most commonly used are the reentrant lock ReentrantLock and the read-write lock ReadWriteLock, and their implementations all depend on the java.util.concurrent.locks.AbstractQueuedSynchronizer (AQS) class. Since they follow a similar pattern, let's first take a look at how ReentrantLock is implemented.
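       Before diving into the internals, here is a minimal usage sketch of ReentrantLock (my own illustrative example, not taken from the JDK) showing the lock()/unlock() pairing that the rest of this post takes apart:

import java.util.concurrent.locks.ReentrantLock;

public class CounterDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private int count;

    public void increment() {
        lock.lock();           // delegates to Sync.lock(), discussed below
        try {
            count++;           // critical section
        } finally {
            lock.unlock();     // always release in a finally block
        }
    }

    public int get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}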

       Looking at the source, we can see that ReentrantLock delegates all operations of the Lock interface to a Sync class, which extends AbstractQueuedSynchronizer:

static abstract class Sync extends AbstractQueuedSynchronizer  

       Sync has two subclasses:

// fair lock and unfair lock
static final class NonfairSync extends Sync  
static final class FairSync extends Sync  

       Let's walk through the calling process of ReentrantLock via the source code, starting with its constructor:

public ReentrantLock() {
    sync = new NonfairSync();
}

       Very simple: the no-argument constructor just creates a NonfairSync, which also shows that ReentrantLock is an unfair lock by default.
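       For comparison, ReentrantLock also offers a constructor that takes a boolean and creates a FairSync instead; here is a small sketch of choosing between the two (my own example, using only the public ReentrantLock API):

import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();    // no-arg constructor -> NonfairSync
        ReentrantLock fair = new ReentrantLock(true);  // boolean constructor -> FairSync

        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}

       With that in mind, let's take a look at the unfair lock: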

static final class NonfairSync extends Sync {
        private static final long serialVersionUID = 7316153563782823691L;
        final void lock() {
            if (compareAndSetState(0, 1))
                setExclusiveOwnerThread(Thread.currentThread());
            else
                acquire(1);
        }
        protected final boolean tryAcquire(int acquires) {
            return nonfairTryAcquire(acquires);
        }
    }
       NonfairSync is a static inner class. ReentrantLock.lock() calls NonfairSync.lock(), which first tries to change the state field in AQS from 0 to 1 with a CAS. If the CAS succeeds, the state was 0, meaning no thread held the lock; the lock has been acquired, and setExclusiveOwnerThread is called to record the current thread as the owner. If the CAS fails, acquire(1) is called to try to acquire the lock the normal way.

       acquire(1) in turn calls tryAcquire, which for the unfair lock delegates to nonfairTryAcquire. That method first reads the current state c. If c == 0, no thread is holding the lock, and the method tries to set state to acquires with a CAS; acquires is 1 on the initial call, every reentrant lock() adds 1, every unlock() subtracts 1, and the lock is only fully released when the count drops back to 0. If the CAS succeeds, no other thread's CAS can succeed at the same time, so the current thread has obtained the lock and becomes the Running thread; note that this Running thread never enters the wait queue. If c != 0 but the current thread already owns the lock, the method simply adds acquires to the count and updates the state. Because only the owning thread can reach this branch, there is no competition, so the update uses setState rather than CAS; in this sense the code also provides a biased-lock-like fast path for the owner. The source of the method follows:
final boolean nonfairTryAcquire(int acquires) {
            final Thread current = Thread.currentThread();
            int c = getState();
            if (c == 0) {
                if (compareAndSetState(0, acquires)) {
                    setExclusiveOwnerThread(current);
                    return true;
                }
            }
            else if (current == getExclusiveOwnerThread()) {
                int nextc = c + acquires;
                if (nextc < 0) // overflow
                    throw new Error("Maximum lock count exceeded");
                setState(nextc);
                return true;
            }
            return false;
        }
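       To make the state counting concrete, here is a small sketch (my own example, not JDK code) showing how reentrant acquisitions raise and lower the hold count:

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();

        lock.lock();                             // state 0 -> 1 via CAS
        lock.lock();                             // same thread re-enters: state 1 -> 2 via setState
        System.out.println(lock.getHoldCount()); // prints 2

        lock.unlock();                           // state 2 -> 1, lock still held
        lock.unlock();                           // state 1 -> 0, lock released
        System.out.println(lock.isLocked());     // prints false
    }
}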
       Knowing this, let's take a look at the calling process of the ReentrantLock.lock() method:
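       After the fast CAS in NonfairSync.lock() fails, everything goes through AQS's acquire method. In JDK 8 it looks roughly like this (quoted as a sketch; check your JDK version for the exact source):

public final void acquire(int arg) {
    // 1. tryAcquire: the subclass's attempt (nonfairTryAcquire above)
    // 2. addWaiter: on failure, wrap the thread as an exclusive Node and enqueue it
    // 3. acquireQueued: park in the queue until the lock is finally acquired
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}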


       We talked about the tryAcquire() method above. Next, let's take a look at the implementation of addWaiter:

private Node addWaiter(Node mode) {
        Node node = new Node(Thread.currentThread(), mode);
        // Try the fast path of enq; backup to full enq on failure
        Node pred = tail;
        if (pred != null) {
            node.prev = pred;
            if (compareAndSetTail(pred, node)) {
                pred.next = node;
                return node;
            }
        }
        enq(node);
        return node;
    }

       The addWaiter method wraps the thread that failed to obtain the lock as a Node and appends it to the tail of the queue. The mode parameter indicates whether the node waits in exclusive or shared mode; for ReentrantLock it is Node.EXCLUSIVE, which is null. Appending to the tail is done in two steps:
1) If the tail node already exists (tail != null), use CAS to set the new node as the tail.
2) If the tail is null, or the CAS to set the tail fails, fall back to the enq method to enqueue the node.
       The following is the enq method:

private Node enq(final Node node) {
        for (;;) {
            Node t = tail;
            if (t == null) { // Must initialize
                if (compareAndSetHead(new Node()))
                    tail = head;
            } else {
                node.prev = t;
                if (compareAndSetTail(t, node)) {
                    t.next = node;
                    return t;
                }
            }
        }
    }

        This method simply loops on CAS: even under high concurrency, the infinite loop will eventually succeed in appending the current thread to the tail of the queue (or initializing the head). In short, the purpose of addWaiter is to append the current thread to the tail via CAS and return the wrapped Node instance. The main reason for wrapping the thread in a Node, besides building the virtual queue, is that Node also carries the various thread wait states, which are carefully encoded as a handful of numeric values (a trimmed sketch of the Node class follows the list):

  • SIGNAL (-1): the successor of this node is (or will soon be) blocked, so when this node releases or cancels it must unpark its successor.
  • CANCELLED (1): this node has been cancelled because of a timeout or an interrupt.
  • CONDITION (-2): this node is on a condition queue, blocked by a call to Condition.await.
  • PROPAGATE (-3): a shared-lock release should be propagated to other nodes.
  • 0: none of the above (the initial state).
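       For reference, these values live on the AbstractQueuedSynchronizer.Node inner class; here is a trimmed sketch of the relevant fields (based on the JDK 8 source, not the complete class):

static final class Node {
    static final Node SHARED = new Node(); // marker: waiting in shared mode
    static final Node EXCLUSIVE = null;    // marker: waiting in exclusive mode

    static final int CANCELLED =  1;       // node was cancelled (timeout or interrupt)
    static final int SIGNAL    = -1;       // successor needs unparking on release
    static final int CONDITION = -2;       // node is waiting on a condition queue
    static final int PROPAGATE = -3;       // shared release should propagate

    volatile int waitStatus;               // one of the values above, or 0
    volatile Node prev;
    volatile Node next;
    volatile Thread thread;
}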

       Simply put, AbstractQueuedSynchronizer organizes all requesting threads into a CLH queue. When a thread finishes its work (lock.unlock()), it wakes up its successor node; the currently running thread is not in the queue, while all waiting threads are blocked. Digging further, the explicit blocking of a thread is done by calling LockSupport.park(), which in turn calls the native method sun.misc.Unsafe.park(); going one level deeper, on Linux HotSpot hands the thread over to the kernel for blocking by calling pthread_mutex_lock.
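       As a quick illustration of the park/unpark primitive that AQS builds on (my own minimal example, not AQS code):

import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            System.out.println("parking...");
            LockSupport.park();                  // blocks the current thread
            System.out.println("unparked, continuing");
        });
        waiter.start();

        Thread.sleep(500);                       // give the waiter time to park
        LockSupport.unpark(waiter);              // wake the parked thread
        waiter.join();
    }
}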

       The locking process built on the CLH queue works as follows:

       As with synchronized, this is a virtual queue: no queue instance exists, only the prev/next links between nodes. One might wonder why a CLH queue is used at all. The original CLH queue was designed for spin locks, but Doug Lea adapted it into a blocking lock. When a thread competes for the lock, it first tries to grab the lock directly, which is unfair to the threads already waiting in the queue; this is where the unfair lock comes from, and, much like the synchronized implementation, it greatly improves throughput. If there is already a Running thread, the new competing thread is appended to the tail of the queue using a CAS-based lock-free algorithm; because concurrent CAS operations on the tail may cause some threads' CAS to fail, the solution is to retry the CAS in a loop until it succeeds.

    At this point we have analyzed the implementation of Lock step by step from the perspective of the source code. I hope everyone now has a deeper understanding of Lock; not only will this help us handle interviews with ease, it will also be of great help when we tackle high-concurrency problems in our day-to-day work~
