Understanding AQS-Related Synchronization Components in Java Concurrency: A Source Code Analysis

Foreword

I have written about AQS itself in earlier posts, so this article covers the synchronization components built on top of AQS.
AQS is the basic framework Java provides for building synchronization components. It supplies the fundamental mechanisms for synchronization state management, thread blocking, queuing, and wake-up, which makes it easy to build custom synchronizers.
Many components in the java.util.concurrent package are built on AQS.

Let's briefly introduce the commonly used synchronization components built on AQS, classified by how they share the AQS resource:

  • Exclusive: ReentrantLock
  • Shared: Semaphore, CountDownLatch, CyclicBarrier
  • Exclusive + shared: read-write lock (ReentrantReadWriteLock)

ReentrantLock

Reentrant lock. As the name implies, this component is a lock (it implements the Lock interface) that supports acquiring the lock again while it is already held.
(synchronized is implicitly reentrant as well.)
So a thread that already holds a ReentrantLock can call lock() again without being blocked.

As for how it is implemented, let's look at the source code. We start with the non-fair lock; the difference between fair and non-fair comes later.
The following is the non-fair acquisition logic from ReentrantLock's AQS-based Sync class:

    final boolean nonfairTryAcquire(int acquires) {
        final Thread current = Thread.currentThread();
        int c = getState();
        if (c == 0) {
            // State 0 means the lock is free, so try to grab it with CAS
            if (compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        }
        else if (current == getExclusiveOwnerThread()) {
            // The lock is held; check whether the holder is the current thread
            int nextc = c + acquires;
            if (nextc < 0) // overflow
                throw new Error("Maximum lock count exceeded");
            setState(nextc);
            return true;
        }
        return false;
    }

Here ReentrantLock overrides AQS's tryAcquire method, and the override delegates to nonfairTryAcquire by default.

  • If the synchronization state is 0, the lock is free, and the thread may acquire it via CAS.
  • If the lock is already held, check whether the holding thread is the current thread; if so, increment the synchronization state.

Laid out this way, the logic is very clear.
Let's take a look at the tryRelease method:

    protected final boolean tryRelease(int releases) {
        int c = getState() - releases;
        if (Thread.currentThread() != getExclusiveOwnerThread())
            throw new IllegalMonitorStateException();
        boolean free = false;
        if (c == 0) {
            // State reaches 0: the lock is fully released, so clear the owner
            free = true;
            setExclusiveOwnerThread(null);
        }
        setState(c);
        return free;
    }

Emmmm, this mirrors acquisition: first check that the lock is held by the current thread, then decrement the synchronization state; only when the state reaches 0 is the lock actually freed.

In conclusion:

  • For a reentrant lock, the synchronization state ranges from 0 upward: 0 means unlocked, and a non-zero value is the number of times the lock has been acquired.
  • The lock must be released as many times as it was acquired before it is fully released. (A minimal sketch of this follows the list.)
  • Only the thread currently holding the lock can acquire it again.
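
To make the hold count concrete, here is a minimal runnable sketch (the demo class is mine, not from the original post) showing that each lock() must be matched by an unlock():

    import java.util.concurrent.locks.ReentrantLock;

    public class ReentrantDemo {
        private static final ReentrantLock LOCK = new ReentrantLock();

        public static void main(String[] args) {
            LOCK.lock();                                  // state: 0 -> 1
            LOCK.lock();                                  // re-entry, state: 1 -> 2
            System.out.println(LOCK.getHoldCount());      // 2
            LOCK.unlock();                                // state: 2 -> 1, still held
            LOCK.unlock();                                // state: 1 -> 0, fully released
            System.out.println(LOCK.isLocked());          // false
        }
    }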

Now let's look at the concepts of fair and non-fair locks and how they show up in ReentrantLock.
(Class structure diagram of ReentrantLock; image source: https://www.cnblogs.com/a1439775520/p/12947010.html)
We only need to compare the tryAcquire methods of NonfairSync and FairSync to understand the difference between fair and non-fair.

The following is the fair acquisition source code:

    protected final boolean tryAcquire(int acquires) {
        final Thread current = Thread.currentThread();
        int c = getState();
        if (c == 0) {
            if (!hasQueuedPredecessors() && // <-- the only difference: check the queue first
                compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        }
        else if (current == getExclusiveOwnerThread()) {
            int nextc = c + acquires;
            if (nextc < 0)
                throw new Error("Maximum lock count exceeded");
            setState(nextc);
            return true;
        }
        return false;
    }

Compared with the nonfairTryAcquire method above, there is only one difference: when c == 0, it additionally checks hasQueuedPredecessors.

    public final boolean hasQueuedPredecessors() {
        // The correctness of this depends on head being initialized
        // before tail and on head.next being accurate if the current
        // thread is first in queue.
        Node t = tail; // Read fields in reverse initialization order
        Node h = head;
        Node s;
        return h != t &&
            ((s = h.next) == null || s.thread != Thread.currentThread());
    }

Look at the return statement in the last line:

  • h != t: if head and tail are equal, the queue is empty and there is no predecessor; if they differ, there are at least two distinct nodes.
  • (s = h.next) == null: head's successor has not been linked in yet (another thread is midway through enqueuing), so conservatively report that a predecessor exists.
  • s.thread != Thread.currentThread(): if the first waiting thread is the current thread, its predecessor is the head node and it is its turn to compete; otherwise some other thread is ahead of it.

In short, the method returns false (no queued predecessor, so the thread may compete) when the queue is empty or when the current thread is first in line; in every other case it returns true, and the thread must not compete.

To sum up: check whether other threads are already waiting in the queue; if so, line up obediently, otherwise compete for the lock directly.
Here is an analogy to illustrate fair and non-fair locks:

  • Fair lock: the canteen starts serving, and at first everyone who arrives walks straight up to be served. Once it gets crowded, a queue forms. A newcomer sees the queue and honestly walks to the end of it.
  • Non-fair lock: the canteen starts serving, nobody is there at first, and everyone who arrives walks straight up. Once it gets crowded, a queue forms. Now a newcomer sees the queue, decides to chance it, and jumps straight to the window to grab food. If the grab succeeds, they get served; if it fails (and they get glared at by everyone, fictionally), they go to the end of the line obediently.

Of course, both have their pros and cons. A fair lock guarantees fairness, i.e. FIFO ordering.
A non-fair lock has higher throughput, because the lock can often be handed to an already-running thread immediately instead of paying the overhead of waking up the queued successor.
However, non-fair locks can cause starvation. Imagine you are an ordinary customer queuing at the bank, and VIPs keep being served ahead of the queue; aren't you starving? I have experienced it, and it is genuinely infuriating.
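
By the way, choosing between the two is just a constructor flag. A minimal sketch (the demo class name is mine):

    import java.util.concurrent.locks.ReentrantLock;

    public class FairnessDemo {
        public static void main(String[] args) {
            ReentrantLock nonFair = new ReentrantLock();    // default constructor: non-fair
            ReentrantLock fair = new ReentrantLock(true);   // true selects the fair policy
            System.out.println(nonFair.isFair() + " " + fair.isFair()); // false true
        }
    }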

Semaphore

Generally called a semaphore, its main purpose is to control access to a resource by allowing at most N threads to access it at the same time.

Let's look directly at the source code; Semaphore's source is fairly compact.

Semaphore also has the notions of fair and non-fair acquisition, even though, as a shared synchronizer, it is not exactly a lock.
Look at its fair acquisition:

    protected int tryAcquireShared(int acquires) {
        for (;;) {
            if (hasQueuedPredecessors())
                return -1; // fair mode: give way if anyone is already queued
            int available = getState();
            int remaining = available - acquires;
            if (remaining < 0 ||
                compareAndSetState(available, remaining))
                return remaining;
        }
    }

As with ReentrantLock, fair and non-fair differ very little.
The logic here is also simple: if hasQueuedPredecessors() reports waiting threads, the acquisition fails.
Otherwise, subtract the requested permits (usually 1) from the current synchronization state; if the remainder is negative, the acquisition fails, and the negative remainder is returned. (A state greater than 0 means permits are still available for other threads.)

Semaphore relies on the AQS template method: when tryAcquireShared fails (returns a negative value), the thread is added to the synchronization queue and blocked.

As for release, the source code is:

    public void release(int permits) {
        if (permits < 0) throw new IllegalArgumentException();
        sync.releaseShared(permits);
    }

release delegates to AQS's releaseShared template method, which adds the permits back to the synchronization state and wakes up waiting threads.

In general, Semaphore limits the number of threads that can access a shared resource concurrently. (A usage sketch follows.)
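
To make that concrete, here is a minimal sketch (class and variable names are mine) in which at most 3 of 10 tasks hold a permit at any moment:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Semaphore;

    public class SemaphoreDemo {
        public static void main(String[] args) {
            Semaphore permits = new Semaphore(3); // state starts at 3: at most 3 concurrent holders
            ExecutorService pool = Executors.newCachedThreadPool();
            for (int i = 0; i < 10; i++) {
                pool.execute(() -> {
                    try {
                        permits.acquire();                // blocks while all 3 permits are taken
                        try {
                            System.out.println(Thread.currentThread().getName() + " got a permit");
                            Thread.sleep(100);            // simulate using the limited resource
                        } finally {
                            permits.release();            // always hand the permit back
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            }
            pool.shutdown();
        }
    }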

CountDownLatch

Its function is to let one or more threads wait for a group of other threads to finish.

Usually, to make one thread wait for another, we use wait() and notify() on a shared lock object.
But that only gets us so far: a wait()-ing thread is woken as soon as it is notified, and the mechanism is entangled with releasing and re-acquiring the lock, so it cannot achieve the effect we want here.

What we want is for a thread (or group of threads) to wait until an entire group of other threads has finished before continuing.
CountDownLatch implements exactly this idea.

CountDownLatch countDownLatch = new CountDownLatch(3);
We set the CountDownLatch counter to 3 in the main thread, start the other threads, and then call await() to block.
Each time one of the other threads executes countDown(), the counter is decremented by 1; when it reaches 0, the main thread is woken up.

Here is an example to demonstrate:

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class CountdownLatchExample {

        public static void main(String[] args) throws InterruptedException {
            final int totalThread = 10;
            CountDownLatch countDownLatch = new CountDownLatch(totalThread);
            ExecutorService executorService = Executors.newCachedThreadPool();
            for (int i = 0; i < totalThread; i++) {
                executorService.execute(() -> {
                    System.out.print("run..");
                    countDownLatch.countDown(); // each task decrements the counter
                });
            }
            countDownLatch.await(); // main thread blocks until the counter reaches 0
            System.out.println("end");
            executorService.shutdown();
        }
    }
run..run..run..run..run..run..run..run..run..run..end

Note that a CountDownLatch is one-shot: once the counter reaches 0 it stays at 0 and cannot be reset.
Also, the waiting threads simply continue once the count reaches 0; whether the threads that called countDown() keep running or wait afterwards makes no difference to the latch.

Two classic uses of CountDownLatch:

  • One thread waits for n threads to finish before it starts running. Initialize the counter to n with new CountDownLatch(n); each time a task thread finishes, it calls countDownLatch.countDown(); when the counter reaches 0, the thread blocked in await() is woken. A typical scenario: when starting a service, the main thread waits for multiple components to finish loading before continuing.
  • Achieve maximum parallelism when multiple threads start executing tasks. Note: parallelism, not concurrency; the point is that multiple threads start at the same moment. It is like a race: the runners line up at the start and take off together when the gun fires. The trick is to share one CountDownLatch initialized to 1 with new CountDownLatch(1); every worker thread first calls countDownLatch.await(), and when the main thread calls countDown(), the counter drops to 0 and all workers are woken at the same time. ( Start at the same time; a sketch of this pattern follows the list. )
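
Here is a minimal sketch of the second pattern, the "starting gun" (class and variable names are mine):

    import java.util.concurrent.CountDownLatch;

    public class StartingGunDemo {
        public static void main(String[] args) throws InterruptedException {
            CountDownLatch startSignal = new CountDownLatch(1);
            for (int i = 0; i < 5; i++) {
                new Thread(() -> {
                    try {
                        startSignal.await();  // every runner blocks at the starting line
                        System.out.println(Thread.currentThread().getName() + " running");
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }
            Thread.sleep(100);        // give all runners time to reach await()
            startSignal.countDown();  // fire the gun: the counter hits 0, all wake together
        }
    }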

(I may add a walkthrough of CountDownLatch's source code later if needed.)

CyclicBarrier

The cyclic (reusable) barrier.

CountDownLatch is implemented directly on AQS, while CyclicBarrier is built on ReentrantLock (itself an AQS-based synchronizer) and Condition.

    /** The lock for guarding barrier entry */
    private final ReentrantLock lock = new ReentrantLock();
    /** Condition to wait on until tripped */
    private final Condition trip = lock.newCondition();

It is a barrier: it guarantees that threads wait at a certain point until enough of them have arrived, and then everyone sets off together (similar to the simultaneous start above). Those who arrive early wait; those who arrive late take their time.

Its constructor specifies the number of threads the barrier intercepts (and, optionally, a barrier action):

    public CyclicBarrier(int parties, Runnable barrierAction) {
        if (parties <= 0) throw new IllegalArgumentException();
        this.parties = parties;
        this.count = parties;
        this.barrierCommand = barrierAction;
    }

Here, parties means that when the number of threads blocked at the barrier reaches this value, the barrier opens and lets all of them pass.
The Runnable barrierAction is executed automatically once that count is reached.

Application scenario:
CyclicBarrier can be used for multi-threaded computation where the partial results are finally combined. For example, suppose an Excel file holds all of a user's bank records, with each sheet holding one account's records for the past year, and we need the user's daily average bank turnover. First, use one thread per sheet to process that sheet's records; after all of them finish, each sheet's daily average is available, and finally the barrierAction uses these threads' results to compute the daily average for the whole Excel file. (A simplified sketch follows.)
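
Here is a simplified, hypothetical sketch of that scenario (all names and numbers are mine): each thread "sums" one sheet, and the barrierAction merges the partial results once all of them have arrived:

    import java.util.concurrent.BrokenBarrierException;
    import java.util.concurrent.CyclicBarrier;

    public class SheetSumDemo {
        private static final int SHEETS = 4;
        private static final int[] sheetTotals = new int[SHEETS];

        public static void main(String[] args) {
            CyclicBarrier barrier = new CyclicBarrier(SHEETS, () -> {
                int total = 0;
                for (int t : sheetTotals) total += t;      // barrierAction: merge partial results
                System.out.println("grand total: " + total);
            });
            for (int i = 0; i < SHEETS; i++) {
                final int sheet = i;
                new Thread(() -> {
                    try {
                        sheetTotals[sheet] = (sheet + 1) * 10; // pretend to sum one sheet
                        barrier.await();                       // wait until every sheet is done
                    } catch (InterruptedException | BrokenBarrierException e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }
        }
    }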

When a thread reaches the barrier it calls the await() method, goes to sleep, and waits for the other threads.
await() delegates to the dowait method, whose source is as follows:

    private int dowait(boolean timed, long nanos)
        throws InterruptedException, BrokenBarrierException,
               TimeoutException {
        final ReentrantLock lock = this.lock;
        lock.lock();
        try {
            final Generation g = generation;

            if (g.broken)
                throw new BrokenBarrierException();

            if (Thread.interrupted()) {
                breakBarrier();
                throw new InterruptedException();
            }

            // Decrement the counter; count is a field of CyclicBarrier
            int index = --count;
            // When the counter reaches 0, everyone may pass
            if (index == 0) {  // tripped
                boolean ranAction = false;
                try {
                    final Runnable command = barrierCommand;
                    if (command != null)
                        command.run();
                    ranAction = true;
                    // This is the key part: reset count, wake up the
                    // previously waiting threads, and start the next round
                    nextGeneration();
                    return 0;
                } finally {
                    if (!ranAction)
                        breakBarrier();
                }
            }

            // loop until tripped, broken, interrupted, or timed out
            for (;;) {
                try {
                    if (!timed)
                        trip.await();
                    else if (nanos > 0L)
                        nanos = trip.awaitNanos(nanos);
                } catch (InterruptedException ie) {
                    if (g == generation && ! g.broken) {
                        breakBarrier();
                        throw ie;
                    } else {
                        // We're about to finish waiting even if we had not
                        // been interrupted, so this interrupt is deemed to
                        // "belong" to subsequent execution.
                        Thread.currentThread().interrupt();
                    }
                }

                if (g.broken)
                    throw new BrokenBarrierException();

                if (g != generation)
                    return index;

                if (timed && nanos <= 0L) {
                    breakBarrier();
                    throw new TimeoutException();
                }
            }
        } finally {
            lock.unlock();
        }
    }

As my comments indicate, the most important piece here is the nextGeneration method:

    private void nextGeneration() {
        // signal completion of last generation
        trip.signalAll();
        // set up next generation
        count = parties;
        generation = new Generation();
    }

As you can see, this method resets the counter and wakes up all the other waiting threads.
So the waiting mechanism here differs from blocking on AQS's CLH queue directly:
the earlier-arriving threads block on the Condition,
and each round it is the last thread to arrive that wakes all the others.

This is made possible by the use of ReentrantLock and Condition.

Compared with CountDownLatch, CyclicBarrier is more like a valve: it ensures that all threads reach a specific point before any of them proceed.
CountDownLatch, by contrast, is focused on a condition being met (the count reaching zero).
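
The "cyclic" part deserves a quick sketch too: unlike the one-shot CountDownLatch, the barrier resets itself each time it trips, so it can gate multiple rounds (class and variable names are mine):

    import java.util.concurrent.BrokenBarrierException;
    import java.util.concurrent.CyclicBarrier;

    public class BarrierReuseDemo {
        public static void main(String[] args) {
            CyclicBarrier barrier = new CyclicBarrier(2);
            Runnable worker = () -> {
                try {
                    for (int round = 0; round < 3; round++) {
                        System.out.println(Thread.currentThread().getName() + " finished round " + round);
                        barrier.await();  // a new generation starts each time the barrier trips
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            };
            new Thread(worker).start();
            new Thread(worker).start();
        }
    }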

Read-write lock

In practice we often quote the 80/20 rule: roughly 80% reads and 20% writes. Concurrent reads are safe, while concurrent writes are not.
So reads are shared and writes are exclusive.
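
A minimal sketch of the idea with ReentrantReadWriteLock (the demo class is mine): the read lock can be held by many threads at once, the write lock by only one:

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class ReadWriteDemo {
        private static final ReentrantReadWriteLock RW = new ReentrantReadWriteLock();
        private static int value;

        static int read() {
            RW.readLock().lock();       // shared: many readers may hold this at once
            try {
                return value;
            } finally {
                RW.readLock().unlock();
            }
        }

        static void write(int v) {
            RW.writeLock().lock();      // exclusive: blocks all readers and other writers
            try {
                value = v;
            } finally {
                RW.writeLock().unlock();
            }
        }

        public static void main(String[] args) {
            write(42);
            System.out.println(read()); // 42
        }
    }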

I'm a little tired at this point; I'll flesh out read-write locks in a later post.

Origin: blog.csdn.net/qq_34687559/article/details/114276489