Talk about several JVM-level locks in Java

According to Moore's Law, the number of transistors that fit on a chip roughly doubles every two years, so computing power keeps growing while its cost keeps falling. The CPU in particular has evolved from simple single-core chips to multi-core systems, and cache performance has also improved greatly. With multi-core CPUs, a computer can truly run multiple tasks at the same time. Riding on these repeated hardware improvements, multi-threaded programming at the software level has become an inevitable trend. However, multi-threaded programming also brings data safety issues.

With all of these trends, the industry has recognized that wherever there is a thread safety problem, there must be a protection mechanism. Following this need, virtual "locks" were invented to solve thread safety issues. In this article, we'll examine several typical JVM-level locks in Java that have emerged over the years.

1、synchronized

The synchronized keyword is the classic, most typical lock in Java, and in fact the most commonly used one. Prior to JDK 1.6, synchronized was a rather "heavyweight" lock, but successive JDK releases have kept optimizing it, so today it is no longer so "heavy"; in some scenarios it even outperforms typical lightweight locks. A method or code block guarded by synchronized allows only one thread at a time to enter the protected code segment, preventing multiple threads from concurrently modifying the same data.
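As a minimal illustration of the two common forms, here is a sketch of a counter protected once by a synchronized method and once by a synchronized block (the class and field names are made up for the example):

import java.util.Objects;

public class SyncCounter {
    private int count = 0;
    private final Object lock = new Object();

    // Form 1: synchronized instance method -- the monitor is "this"
    public synchronized void incrementMethod() {
        count++;
    }

    // Form 2: synchronized block -- the monitor is an explicit lock object
    public void incrementBlock() {
        synchronized (Objects.requireNonNull(lock)) {
            count++;
        }
    }

    public synchronized int get() {
        return count;
    }
}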

1.1) Lock upgrade

Up to and including JDK 1.5, the underlying implementation of the synchronized keyword was relatively heavy, which is why it was called a "heavyweight lock". In JDK versions after 1.5, however, various improvements have made synchronized much less heavyweight; the key to these improvements is the lock-upgrade (lock escalation) process. Let's look at how synchronized locks are implemented in these later JDKs. To understand the principle of synchronized locks, we must first understand the layout of Java objects in memory.

After an object is created, its storage layout in the heap of the HotSpot JVM can be divided into three parts.

1) Object header

The information stored in this area is divided into two parts.

  • Runtime data of the object itself (Mark Word): this field stores the object's hashCode, garbage collection (GC) generational age, lock flag bits, the ID of the thread the lock is biased toward, and a pointer to the Lock Record used for CAS (Compare-And-Swap) based lightweight locking, among other information. The synchronized lock mechanism is closely tied to this Mark Word. The lowest three bits of the Mark Word indicate the lock state: one bit is the biased-lock bit and the other two are the regular lock flag bits.
  • Object's class pointer (Class Pointer): a pointer to the object's class metadata. The JVM uses it to determine which class the instance belongs to.

 2) Instance data area

This area stores valid information of the object, such as the contents of all fields in the object.

3) Alignment padding

The HotSpot implementation stipulates that the starting address of an object must be an integer multiple of 8 bytes. In other words, since a 64-bit machine reads 64 bits, that is 8 bytes, at a time, HotSpot relies on this alignment to read objects efficiently. If the actual memory size of an object is not an integer multiple of 8 bytes, HotSpot pads it up to the next multiple of 8 bytes, so the size of the alignment padding region is dynamic.
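One way to see this layout in practice is the OpenJDK JOL (Java Object Layout) tool. A minimal sketch, assuming the org.openjdk.jol:jol-core dependency is on the classpath (the Point class is made up for the example):

import org.openjdk.jol.info.ClassLayout;

public class LayoutDemo {
    static class Point { int x; int y; }   // a made-up example class

    public static void main(String[] args) {
        Point p = new Point();
        // Prints the object header (mark word + class pointer),
        // the instance fields, and any alignment padding.
        System.out.println(ClassLayout.parseInstance(p).toPrintable());
    }
}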

1.2) Synchronized lock upgrade

When a thread enters a synchronized method or block and tries to acquire the lock, the synchronized lock goes through the following upgrade process.

To sum up, the upgrade order of synchronized locks is: biased lock → lightweight lock → heavyweight lock. The conditions that trigger each upgrade step are as follows.

1) Biased Lock:

In JDK 1.8, biased locking is enabled by default but only after a startup delay, so objects synchronized before the delay expires go straight to lightweight locks. By setting -XX:BiasedLockingStartupDelay=0, an object becomes biased to the first thread that synchronizes on it immediately. While the lock is in the biased state, the Mark Word records the ID of that thread.

2) Upgrading to the Lightweight Lock:

When another thread competes for the biased lock, the JVM first checks whether the thread ID stored in the Mark Word matches that thread's ID. If not, it immediately revokes the bias and upgrades the lock to a lightweight lock. Each competing thread creates a LockRecord (LR) in its own thread stack and then tries, via a CAS operation (spinning), to set the Mark Word in the lock object's header to a pointer to its own LockRecord. The thread whose CAS succeeds acquires the lock. These CAS operations for synchronized are performed by C++ code in HotSpot's bytecodeInterpreter.cpp.

3) Upgrading to the Heavyweight Lock:

If lock contention intensifies further, for example a thread's spin count or the number of spinning threads exceeds a certain threshold (since JDK 1.6 this threshold is controlled adaptively by the JVM itself), the lock is upgraded to a heavyweight lock. The heavyweight lock then requests resources from the operating system.

At the same time, the contending thread is suspended and placed into a waiting queue in the operating system kernel, where it waits to be scheduled by the OS before being mapped back to user space. With a heavyweight lock, every such block and wakeup requires switching between user mode and kernel mode, which takes a long time; this is one of the reasons it is characterized as "heavyweight".

1.3) Others:

1) Reentrancy:

The synchronized lock is reentrant. When a thread that is executing one synchronized method of an object calls another synchronized method of the same object, it does not need to compete for the lock again: once a thread holds an object's lock, any further requests for that same lock by the same thread always succeed. In Java, an object lock is granted per thread, not per call.

The Mark Word in the header of the object guarding the synchronized method or block records the lock's holder thread and a counter. When a thread's request succeeds, the JVM records that thread as the holder and sets the count to 1. If another thread then requests the lock, it must wait.

When the thread holding the lock requests the lock again, the lock can be acquired again, and the count is incremented accordingly. The count is decremented when the thread exits the synchronized method or block. Finally, if the count is 0, the lock is released.
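A small sketch of this behavior (the class and method names are made up): the same thread can enter two synchronized methods of the same object without blocking itself.

public class ReentrantSyncDemo {
    public synchronized void outer() {
        System.out.println("in outer, lock already held");
        inner();   // re-enters the same monitor; the count goes from 1 to 2
    }

    public synchronized void inner() {
        System.out.println("in inner, reentered without blocking");
    }

    public static void main(String[] args) {
        new ReentrantSyncDemo().outer();
    }
}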

2) Pessimistic Lock (Mutex and Exclusive Lock):

A synchronized lock is a pessimistic lock, more precisely an exclusive lock. In other words, if the current thread acquires the lock, any other thread that needs the lock must wait. Lock contention continues until the thread holding the lock releases the lock.

2、ReentrantLock

ReentrantLock behaves much like synchronized, but its implementation is quite different: it is built on the classic AbstractQueuedSynchronizer (AQS), which in turn is implemented with volatile and CAS. AQS maintains a volatile state variable that counts the number of reentrant acquisitions of the lock; locking and unlocking both operate on this variable. ReentrantLock also provides some features that synchronized does not, which makes it more flexible in certain scenarios.

2.1) ReentrantLock properties:

1) Reentrant:

ReentrantLock, like the synchronized keyword, is reentrant, but the implementation differs slightly. ReentrantLock uses the AQS state to judge whether the resource is already locked: for the same thread, each lock increases the state by 1 and each unlock decreases it by 1. Note that unlocking is only valid for the thread that currently owns the lock; otherwise an exception is thrown. When the state drops to 0, the lock is fully released.

2) Manually lock and unlock:

The synchronized keyword locks and unlocks automatically. In contrast, ReentrantLock must be locked and unlocked manually with its lock() and unlock() methods, typically inside a try/finally block, as sketched below.
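A minimal sketch of the standard pattern (the class and field names are made up); it also shows the reentrancy counter via getHoldCount():

import java.util.concurrent.locks.ReentrantLock;

public class ManualLockDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();               // must be released manually
        try {
            count++;
            System.out.println("hold count: " + lock.getHoldCount()); // 1 here
        } finally {
            lock.unlock();         // always unlock in finally
        }
    }
}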

3) Lock timeout:

The synchronized keyword cannot set a lock timeout, so if the thread holding the lock deadlocks, the other threads stay blocked forever. ReentrantLock provides the tryLock method, which lets an acquiring thread set a timeout; if the lock is not obtained within that time, the thread simply skips the work instead of blocking, which helps prevent deadlock.
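A sketch of the timed form, tryLock(long, TimeUnit) (the task code and timeout value are made up):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void doWork() throws InterruptedException {
        // Wait at most 500 ms for the lock instead of blocking indefinitely.
        if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                System.out.println("got the lock, doing work");
            } finally {
                lock.unlock();
            }
        } else {
            System.out.println("could not get the lock in time, skipping");
        }
    }
}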

4) Fair and Unfair Locks:

The synchronized keyword is an unfair lock: whichever thread grabs the lock first runs first, regardless of how long others have waited. ReentrantLock can be made fair or unfair by passing true or false to its constructor, as sketched below. If set to true, threads follow a "first come, first served" rule: every thread that wants the lock builds a node, appends it to the tail of the doubly linked wait queue, and waits for the node ahead of it to release the lock.
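A small sketch of the constructor choice (the field names are made up):

import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // Fair: waiting threads acquire the lock roughly in FIFO order.
    private final ReentrantLock fairLock = new ReentrantLock(true);

    // Unfair (the default): a newly arriving thread may "barge" ahead of waiters.
    private final ReentrantLock unfairLock = new ReentrantLock();
}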

5) Interruptible:

The lockInterruptibly() method in ReentrantLock allows a thread that is blocked waiting for the lock to respond to interruption. For example, if thread t1 holds a ReentrantLock while running a long task, a thread t2 waiting for the same lock via lockInterruptibly() can be woken up by another thread calling t2.interrupt(); t2 then abandons the wait instead of blocking forever. By contrast, a thread blocked in ReentrantLock's plain lock() method, or waiting on a synchronized monitor, does not respond to interrupt() until the lock holder actively releases the lock.
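A sketch of the interruptible wait (the thread roles and sleep durations are made up for illustration):

import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            lock.lock();
            try {
                Thread.sleep(10_000);      // t1 holds the lock for a long time
            } catch (InterruptedException ignored) {
            } finally {
                lock.unlock();
            }
        });

        Thread t2 = new Thread(() -> {
            try {
                lock.lockInterruptibly();  // waits, but can be interrupted
                try {
                    System.out.println("t2 got the lock");
                } finally {
                    lock.unlock();
                }
            } catch (InterruptedException e) {
                System.out.println("t2 gave up waiting after interrupt");
            }
        });

        t1.start();
        Thread.sleep(100);                 // let t1 grab the lock first
        t2.start();
        Thread.sleep(100);
        t2.interrupt();                    // t2 stops waiting immediately
    }
}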

2.2) ReentrantReadWriteLock

ReentrantReadWriteLock (read-write lock) actually consists of two locks: a WriteLock (write lock) and a ReadLock (read lock). The rules are: read-read is not exclusive, read-write is exclusive, and write-write is exclusive. In many real scenarios reads are far more frequent than writes; if an ordinary exclusive lock were used for concurrency control, reads and writes would exclude each other, which is inefficient.

Read-write locks were created to optimize exactly this scenario, as in the cache sketch below. In general, the inefficiency of an exclusive lock comes from fierce contention on the critical section under high concurrency, which causes thread context switches. When concurrency is low, a read-write lock may actually be slower than an exclusive lock because of the extra bookkeeping it requires, so choose the appropriate lock according to the actual situation.
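A common usage pattern is a read-mostly cache; here is a minimal sketch (the class, field, and type parameter names are made up):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    public V get(K key) {
        rw.readLock().lock();          // many readers may hold this at once
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    public void put(K key, V value) {
        rw.writeLock().lock();         // exclusive: blocks readers and writers
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}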

ReentrantReadWriteLock is also implemented on top of AQS. The difference from ReentrantLock is that it combines the properties of a shared lock and an exclusive lock. Locking and unlocking in the read-write lock are implemented in a Sync class that inherits from AQS, mainly through the AQS state field and the waitStatus variable of the queue nodes.

The main difference between a read-write lock and an ordinary mutual-exclusion lock is that the read-lock state and the write-lock state must be recorded separately, and the wait queue must treat the two kinds of acquisition differently. In ReentrantReadWriteLock, the int state of AQS is split into a high 16 bits and a low 16 bits, which record the read-lock and write-lock states respectively.

1) The WriteLock (Write Lock) Is a Pessimistic Lock (Exclusive Lock or Mutex)

Computing state & ((1 << 16) - 1) clears the upper 16 bits of state, so the low 16 bits of state record the write-lock reentry count.

The following is the source code for obtaining a write lock.

/**
         * Acquires the write lock.
         *
         * <p>Acquires the write lock if neither the read nor write lock
         * are held by another thread
         * and returns immediately, setting the write lock hold count to
         * one.
         *
         * <p>If the current thread already holds the write lock then the
         * hold count is incremented by one and the method returns
         * immediately.
         *
         * <p>If the lock is held by another thread then the current
         * thread becomes disabled for thread scheduling purposes and
         * lies dormant until the write lock has been acquired, at which
         * time the write lock hold count is set to one.
         */
        public void lock() {
            sync.acquire(1);
        }
    /**
     * Acquires in exclusive mode, ignoring interrupts.  Implemented
     * by invoking at least once {@link #tryAcquire},
     * returning on success.  Otherwise the thread is queued, possibly
     * repeatedly blocking and unblocking, invoking {@link
     * #tryAcquire} until success.  This method can be used
     * to implement method {@link Lock#lock}.
     *
     * Note: tryAcquire has separate implementations in NonfairSync
     * (unfair lock) and FairSync (fair lock).
     *
     * @param arg the acquire argument.  This value is conveyed to
     *        {@link #tryAcquire} but is otherwise uninterpreted and
     *        can represent anything you like.
     */
    public final void acquire(int arg) {
        if (!tryAcquire(arg) &&
            acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
            selfInterrupt();
    }
    protected final boolean tryAcquire(int acquires) {
            /*
             * Walkthrough:
             * 1. If read count nonzero or write count nonzero
             *    and owner is a different thread, fail.
             * 2. If count would saturate, fail. (This can only
             *    happen if count is already nonzero.)
             * 3. Otherwise, this thread is eligible for lock if
             *    it is either a reentrant acquire or
             *    queue policy allows it. If so, update state
             *    and set owner.
             */
            Thread current = Thread.currentThread();
            // Read the combined read/write lock state.
            int c = getState();
            // Extract the write-lock reentry count (low 16 bits).
            int w = exclusiveCount(c);
            // If the state is nonzero, some thread already holds a read or write lock.
            if (c != 0) {
                // If the write count is 0, some thread holds the read lock, so fail
                // ("read and write are mutually exclusive"); if the write count is
                // nonzero but the owner is not the current thread, fail
                // ("the write lock is exclusive").
                // (Note: if c != 0 and w == 0 then shared count != 0)
                if (w == 0 || current != getExclusiveOwnerThread())
                    return false;
                // If the write-lock reentry count would exceed the maximum (65535), throw.
                if (w + exclusiveCount(acquires) > MAX_COUNT)
                    throw new Error("Maximum lock count exceeded");
                // Reaching here means this is a reentrant write-lock acquisition:
                // bump the reentry count and return true.
                // Reentrant acquire
                setState(c + acquires);
                return true;
            }
            // State is 0, so neither the read lock nor the write lock is held.
            // Fail if the queue policy says the writer should block, or if the CAS
            // on the state fails; otherwise record the current thread as the owner.
            if (writerShouldBlock() ||
                !compareAndSetState(c, c + acquires))
                return false;
            setExclusiveOwnerThread(current);
            return true;
        }

Source code for releasing the write lock:

/*
  * Note that tryRelease and tryAcquire can be called by
  * Conditions. So it is possible that their arguments contain
  * both read and write holds that are all released during a
  * condition wait and re-established in tryAcquire.
  */
 protected final boolean tryRelease(int releases) {
     // If the lock holder is not the current thread, throw.
     if (!isHeldExclusively())
         throw new IllegalMonitorStateException();
     // Subtract "releases" from the write-lock reentry count.
     int nextc = getState() - releases;
     // If the write-lock reentry count drops to 0, the write lock is released.
     boolean free = exclusiveCount(nextc) == 0;
     if (free)
        // The write lock is fully released: clear the owner so it can be GC'd.
        setExclusiveOwnerThread(null);
     // Update the write-lock reentry count.
     setState(nextc);
     return free;
 }

2) The ReadLock (Read Lock) Is a Shared Lock (Optimistic Lock)

The read count is obtained with the unsigned right shift state >>> 16, which drops the low 16 bits, so the high 16 bits of state record the number of read-lock holds.
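As a rough sketch of how the two halves are carved out of the single int state (mirroring the constants used inside ReentrantReadWriteLock.Sync, reproduced here only for illustration):

public class RwStateDemo {
    static final int SHARED_SHIFT   = 16;
    static final int SHARED_UNIT    = 1 << SHARED_SHIFT;          // +1 read hold
    static final int MAX_COUNT      = (1 << SHARED_SHIFT) - 1;    // 65535
    static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;

    /** Number of read-lock holds: the high 16 bits of state. */
    static int sharedCount(int c)    { return c >>> SHARED_SHIFT; }

    /** Number of write-lock reentries: the low 16 bits of state. */
    static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; }

    public static void main(String[] args) {
        int state = (3 << SHARED_SHIFT) | 2;       // 3 read holds, 2 write reentries
        System.out.println(sharedCount(state));    // 3
        System.out.println(exclusiveCount(state)); // 2
    }
}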

The process of acquiring a read lock is slightly more involved than acquiring a write lock. First, the code checks whether the write lock is held and, if so, whether the holder is the current thread; if another thread holds the write lock, it fails immediately. Otherwise, it checks whether the reading thread should block according to the queue policy, whether the read-lock count is below the maximum, and whether the CAS that updates the state succeeds.

If the CAS succeeds and no read lock was held before, firstReader and firstReaderHoldCount are set for this first reading thread. If the current thread is already the first reader, firstReaderHoldCount is simply incremented. Otherwise, the HoldCounter object for the current thread is updated: after a successful update, the current thread's reentry count is recorded in readHolds (a ThreadLocal), that is, in the thread's own copy of the counter. This bookkeeping exists to support getReadHoldCount(), added in JDK 1.6, which returns how many times the current thread has re-entered the shared lock, whereas the state only records the total re-entries of all threads combined.

The introduction of this method makes the code noticeably more complicated, but the principle is simple: if there is only one reading thread, ThreadLocal is unnecessary and the reentry count can be stored directly in the firstReaderHoldCount field. The ThreadLocal variable readHolds is only needed once another reading thread appears; each thread then keeps its own copy of its reentry count.

The following is the source code for obtaining a read lock:

/**
         * Acquires the read lock.
         *
         * <p>Acquires the read lock if the write lock is not held by
         * another thread and returns immediately.
         *
         * <p>If the write lock is held by another thread then
         * the current thread becomes disabled for thread scheduling
         * purposes and lies dormant until the read lock has been acquired.
         */
        public void lock() {
            sync.acquireShared(1);
        }
   /**
     * Acquires in shared mode, ignoring interrupts.  Implemented by
     * first invoking at least once {@link #tryAcquireShared},
     * returning on success.  Otherwise the thread is queued, possibly
     * repeatedly blocking and unblocking, invoking {@link
     * #tryAcquireShared} until success.
     *
     * Note: like tryAcquire, tryAcquireShared has separate implementations
     * in NonfairSync (unfair lock) and FairSync (fair lock); the structure
     * is symmetric with the write-lock path.
     *
     * @param arg the acquire argument.  This value is conveyed to
     *        {@link #tryAcquireShared} but is otherwise uninterpreted
     *        and can represent anything you like.
     */
    public final void acquireShared(int arg) {
        if (tryAcquireShared(arg) < 0)
            doAcquireShared(arg);
    }
    protected final int tryAcquireShared(int unused) {
            /*
             * Walkthrough:
             * 1. If write lock held by another thread, fail
             *    ("read and write are mutually exclusive").
             * 2. Otherwise, this thread is eligible for
             *    lock wrt state, so ask if it should block
             *    because of queue policy. If not, try
             *    to grant by CASing state and updating count.
             *    Note that step does not check for reentrant
             *    acquires, which is postponed to full version
             *    to avoid having to check hold count in
             *    the more typical non-reentrant case.
             * 3. If step 2 fails either because thread
             *    apparently not eligible or CAS fails or count
             *    saturated, chain to version with full retry loop.
             */

            // The thread currently trying to acquire the read lock.
            Thread current = Thread.currentThread();
            // Read the combined read/write lock state.
            int c = getState();
            // If some other thread holds the write lock, fail.
            if (exclusiveCount(c) != 0 &&
                getExclusiveOwnerThread() != current)
                return -1;
            // Current read-lock hold count (high 16 bits).
            int r = sharedCount(c);
            // If the reader does not need to block, the count is below the maximum,
            // and the CAS that adds one read hold succeeds, record the per-thread
            // reentry count and return success.
            if (!readerShouldBlock() &&
                r < MAX_COUNT &&
                compareAndSetState(c, c + SHARED_UNIT)) {
                // No thread held the read lock before: record the current thread
                // as firstReader with a hold count of 1.
                if (r == 0) {
                    firstReader = current;
                    firstReaderHoldCount = 1;
                } else if (firstReader == current) {
                    // The current thread is the first reader: bump its hold count.
                    firstReaderHoldCount++;
                } else {
                    // At least two threads share the read lock: fetch the cached
                    // HoldCounter (or this thread's own counter) and bump its count.
                    HoldCounter rh = cachedHoldCounter;
                    if (rh == null || rh.tid != getThreadId(current))
                        cachedHoldCounter = rh = readHolds.get();
                    else if (rh.count == 0)
                        readHolds.set(rh);
                    rh.count++;
                }
                return 1;
            }
            // If any of the conditions above fails, fall back to the full retry loop.
            return fullTryAcquireShared(current);
        }
        /**
         * Full version of acquire for reads, that handles CAS misses
         * and reentrant reads not dealt with in tryAcquireShared.
         */
        final int fullTryAcquireShared(Thread current) {
            /*
             * This code is in part redundant with that in
             * tryAcquireShared but is simpler overall by not
             * complicating tryAcquireShared with interactions between
             * retries and lazily reading hold counts.
             */
            HoldCounter rh = null;
            // Retry loop.
            for (;;) {
                // Read the combined read/write lock state.
                int c = getState();
                // Some thread holds the write lock.
                if (exclusiveCount(c) != 0) {
                    // If the write-lock owner is not the current thread, fail.
                    if (getExclusiveOwnerThread() != current)
                        return -1;
                    // else we hold the exclusive lock; blocking here
                    // would cause deadlock.
                } else if (readerShouldBlock()) {
                    // No thread holds the write lock, but the queue policy says
                    // this reader should block.
                    // Make sure we're not acquiring read lock reentrantly
                    if (firstReader == current) {
                        // assert firstReaderHoldCount > 0;
                    } else {
                        // The current thread is not the first reader (so at least
                        // one other thread already holds the read lock).
                        if (rh == null) {
                            rh = cachedHoldCounter;
                            if (rh == null || rh.tid != getThreadId(current)) {
                                rh = readHolds.get();
                                if (rh.count == 0)
                                    readHolds.remove();
                            }
                        }
                        if (rh.count == 0)
                            return -1;
                    }
                }
                // From here on, no thread holds the write lock and the current
                // thread does not need to block.
                // The read count is already at the maximum: throw.
                if (sharedCount(c) == MAX_COUNT)
                    throw new Error("Maximum lock count exceeded");
                // If the CAS that adds one read hold succeeds, update the
                // per-thread reentry bookkeeping and return success.
                if (compareAndSetState(c, c + SHARED_UNIT)) {
                    if (sharedCount(c) == 0) {
                        firstReader = current;
                        firstReaderHoldCount = 1;
                    } else if (firstReader == current) {
                        firstReaderHoldCount++;
                    } else {
                        if (rh == null)
                            rh = cachedHoldCounter;
                        if (rh == null || rh.tid != getThreadId(current))
                            rh = readHolds.get();
                        else if (rh.count == 0)
                            readHolds.set(rh);
                        rh.count++;
                        cachedHoldCounter = rh; // cache for release
                    }
                    return 1;
                }
            }
        }

Next is the source code for releasing the read lock:

/**
  * Releases in shared mode.  Implemented by unblocking one or more
  * threads if {@link #tryReleaseShared} returns true.
  *
  * @param arg the release argument.  This value is conveyed to
  *        {@link #tryReleaseShared} but is otherwise uninterpreted
  *        and can represent anything you like.
  * @return the value returned from {@link #tryReleaseShared}
  */
public final boolean releaseShared(int arg) {
    if (tryReleaseShared(arg)) { // try to decrement the shared hold count once
        doReleaseShared();       // actually release the lock (wake up successors)
        return true;
    }
    return false;
}
/**
 * Releases one hold of the read lock for the calling thread.
 * If the current thread is the first reader (firstReader): when its hold
 * count firstReaderHoldCount is 1, firstReader is cleared; otherwise
 * firstReaderHoldCount is decremented.
 * If the current thread is not the first reader, the cached counter
 * (the counter of the last reading thread) is checked; if it is null or
 * its tid is not the current thread's, the current thread's own counter
 * is fetched. If that counter's count is <= 1 it is removed, and if the
 * count is <= 0 an exception is thrown; otherwise the count is decremented.
 * In all cases the method then enters a retry loop that keeps CASing
 * until the shared state has been successfully decremented.
 */
protected final boolean tryReleaseShared(int unused) {
      // The thread releasing a read hold.
      Thread current = Thread.currentThread();
      if (firstReader == current) { // the current thread is the first reader
          // assert firstReaderHoldCount > 0;
         if (firstReaderHoldCount == 1) // it held exactly one read hold
              firstReader = null;
          else // otherwise just decrement its hold count
              firstReaderHoldCount--;
     } else { // the current thread is not the first reader
         // Fetch the cached counter.
         HoldCounter rh = cachedHoldCounter;
         if (rh == null || rh.tid != getThreadId(current)) // counter missing or owned by another thread
             // Fetch the counter belonging to the current thread.
             rh = readHolds.get();
         // The current thread's hold count.
         int count = rh.count;
         if (count <= 1) { // last hold of this thread
             // Remove the thread-local counter.
             readHolds.remove();
             if (count <= 0) // releasing a lock that is not held: throw
                 throw unmatchedUnlockException();
         }
         // Decrement the hold count.
         --rh.count;
     }
     for (;;) { // retry loop
         // Read the combined read/write lock state.
         int c = getState();
         // Remove one read hold from the high 16 bits.
         int nextc = c - SHARED_UNIT;
         if (compareAndSetState(c, nextc)) // CAS the new state
             // Releasing the read lock has no effect on readers,
             // but it may allow waiting writers to proceed if
             // both read and write locks are now free.
             return nextc == 0;
     }
 }
 /** The actual release: wakes up the successor node if needed.
  * Release action for shared mode -- signals successor and ensures
  * propagation. (Note: For exclusive mode, release just amounts
  * to calling unparkSuccessor of head if it needs signal.)
  */
private void doReleaseShared() {
        /*
         * Ensure that a release propagates, even if there are other
         * in-progress acquires/releases.  This proceeds in the usual
         * way of trying to unparkSuccessor of head if it needs
         * signal. But if it does not, status is set to PROPAGATE to
         * ensure that upon release, propagation continues.
         * Additionally, we must loop in case a new node is added
         * while we are doing this. Also, unlike other uses of
         * unparkSuccessor, we need to know if CAS to reset status
         * fails, if so rechecking.
         */
        for (;;) {
            Node h = head;
            if (h != null && h != tail) {
                int ws = h.waitStatus;
                if (ws == Node.SIGNAL) {
                    if (!compareAndSetWaitStatus(h, Node.SIGNAL, 0))
                        continue;            // loop to recheck cases
                    unparkSuccessor(h);
                }
                else if (ws == 0 &&
                         !compareAndSetWaitStatus(h, 0, Node.PROPAGATE))
                    continue;                // loop on failed CAS
            }
            if (h == head)                   // loop if head changed
                break;
        }
    }

From this analysis we can see that while a thread holds the read lock, it cannot then acquire the write lock: as long as the read lock is held by anyone, including the current thread itself, the attempt to acquire the write lock fails.

On the other hand, a thread that holds the write lock can go on to acquire the read lock: when the write lock is occupied, the read lock can only be acquired if the current thread is the one occupying it. This is what makes lock downgrading possible, as sketched below.
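A sketch of the classic lock-downgrading pattern (the class, field, and method names are made up): acquire the read lock while still holding the write lock, then release the write lock.

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DowngradeDemo {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int data;

    public int updateAndRead(int newValue) {
        rw.writeLock().lock();          // exclusive: update the data
        try {
            data = newValue;
            rw.readLock().lock();       // allowed: the write-lock holder may take the read lock
        } finally {
            rw.writeLock().unlock();    // downgrade: keep only the read lock
        }
        try {
            return data;                // read under the read lock; writers stay blocked
        } finally {
            rw.readLock().unlock();
        }
    }
}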

3、LongAdder

In high-concurrency scenarios, performing i++ directly on an int or Integer is not atomic and therefore not thread safe. For this, java.util.concurrent (JUC) provides AtomicInteger, an Integer-like class with atomic operations implemented internally with CAS. However, when a large number of threads hit the same variable at once, many CAS operations fail and the threads keep spinning.

That spinning consumes excessive CPU and lowers throughput. Doug Lea was not satisfied with this, so in JDK 1.8 he provided LongAdder, which applies the idea of CAS striping, a form of segmented locking, to this problem.

LongAdder is implemented with CAS and volatile, provided through Unsafe. Its parent class Striped64 maintains a base variable and a Cell array. When threads update the value, they first try a CAS on base; the Cell array comes into play once contention is detected, that is, when casBase fails to update base. From then on each contending thread maps to a cell and performs its CAS operation on that cell.

In this way the update pressure on a single value is spread across multiple cells, reducing the "hotness" of any one of them, cutting down the spinning of large numbers of threads, and improving concurrency. This striping requires some extra memory, but in high-concurrency scenarios the cost is almost negligible; it is an excellent optimization. ConcurrentHashMap in JUC likewise relied on segment locking (in versions before JDK 1.8) to ensure thread safety for reading and writing.
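A minimal usage sketch comparing the two counters under several threads (the thread count, loop bounds, and names are made up; both counters end with the same value, only their contention behavior differs):

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

public class AdderDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicLong atomic = new AtomicLong();
        LongAdder adder = new LongAdder();

        Runnable task = () -> IntStream.range(0, 100_000).forEach(i -> {
            atomic.incrementAndGet(); // single hot variable, CAS retries under contention
            adder.increment();        // striped cells absorb the contention
        });

        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) (threads[i] = new Thread(task)).start();
        for (Thread t : threads) t.join();

        System.out.println(atomic.get());  // 800000
        System.out.println(adder.sum());   // 800000 (sum() folds base + cells)
    }
}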
