Explain the Java Concurrency Package in a Simple Way: Analysis of the CountDownLatch Principle


CountDownLatch is a synchronization aid in the Java concurrency package (java.util.concurrent). In concurrent programming it is usually described as a counter, and sometimes as a latch.
CountDownLatch is mainly used in two scenarios. The first is as a switch, or gate: one or more threads wait until some task completes. This is the "latch" usage. Informally, it behaves like a gate: all waiting threads block while the gate is closed, and once the gate opens they all pass through. Note that the gate is one-shot; once opened it cannot be closed again, so from that point on the latch stays in the open state. The second scenario is as a counter: a job is split into N subtasks, and the main thread waits until all of them finish. Each subtask decrements the counter when it completes, and when the counter reaches zero the main thread is unblocked.
Let's take a look at the API of CountDownLatch: the constructor CountDownLatch(int count), plus countDown(), await(), await(long timeout, TimeUnit unit), and getCount().

CountDownLatch maintains a counter initialized to a non-negative value. The countDown() method decrements the counter, and await() waits for the counter to reach 0: all awaiting threads block until the counter reaches 0, the waiting thread is interrupted, or the wait times out.
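As a quick warm-up, here is a minimal hedged sketch (not from the original article; the class name LatchTimeoutSketch is illustrative) that exercises countDown() and await(), including the timed variant that reports whether the count reached zero in time:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Illustrative sketch, not from the article: one worker thread,
// the main thread waits at most 1 second for it to finish.
public class LatchTimeoutSketch {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);

        new Thread(() -> {
            try {
                Thread.sleep(200);            // simulate some work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                done.countDown();             // decrement the counter
            }
        }).start();

        // await(timeout, unit) returns true if the counter reached 0 in time
        boolean finished = done.await(1, TimeUnit.SECONDS);
        System.out.println("finished in time: " + finished
                + ", remaining count: " + done.getCount());
    }
}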
Let's take a look at a corresponding application example:


package com.yhj.lauth;

import java.util.Date;
import java.util.concurrent.CountDownLatch;

// Worker
class Worker extends Thread {
    private int workNo;                 // worker number
    private CountDownLatch startLauch;  // start signal - latch
    private CountDownLatch workLauch;   // work progress - counter

    public Worker(int workNo, CountDownLatch startLauch, CountDownLatch workLauch) {
        this.workNo = workNo;
        this.startLauch = startLauch;
        this.workLauch = workLauch;
    }

    @Override
    public void run() {
        try {
            System.out.println(new Date() + " - YHJ" + workNo + " ready to start work!");
            startLauch.await();         // wait for the boss's command
            System.out.println(new Date() + " - YHJ" + workNo + " working...");
            Thread.sleep(100);          // each worker spends 100 ms working
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            System.out.println(new Date() + " - YHJ" + workNo + " work finished!");
            workLauch.countDown();
        }
    }
}

// Test case
public class CountDownLauthTestCase {

    public static void main(String[] args) throws InterruptedException {
        int workerCount = 10;           // number of workers
        CountDownLatch startLauch = new CountDownLatch(1);          // latch, acts as a switch
        CountDownLatch workLauch = new CountDownLatch(workerCount); // counter
        System.out.println(new Date() + " - Boss: assemble, time to start work!");
        for (int i = 0; i < workerCount; ++i) {
            new Worker(i, startLauch, workLauch).start();
        }
        System.out.println(new Date() + " - Boss: work starts after a 2s break!");
        Thread.sleep(2000);
        System.out.println(new Date() + " - Boss: start work!");
        startLauch.countDown();         // open the switch
        workLauch.await();              // block until all tasks are done
        System.out.println(new Date() + " - Boss: well done! All tasks finished! Time to go home!");
    }
}
Execution result:
Sat Jun 08 18:59:33 CST 2013 - Boss: assemble, time to start work!
Sat Jun 08 18:59:33 CST 2013 - YHJ0 ready to start work!
Sat Jun 08 18:59:33 CST 2013 - YHJ2 ready to start work!
Sat Jun 08 18:59:33 CST 2013 - YHJ1 ready to start work!
Sat Jun 08 18:59:33 CST 2013 - YHJ4 ready to start work!
Sat Jun 08 18:59:33 CST 2013 - Boss: work starts after a 2s break!
Sat Jun 08 18:59:33 CST 2013 - YHJ8 ready to start work!
Sat Jun 08 18:59:33 CST 2013 - YHJ6 ready to start work!
Sat Jun 08 18:59:33 CST 2013 - YHJ3 ready to start work!
Sat Jun 08 18:59:33 CST 2013 - YHJ7 ready to start work!
Sat Jun 08 18:59:33 CST 2013 - YHJ5 ready to start work!
Sat Jun 08 18:59:33 CST 2013 - YHJ9 ready to start work!
Sat Jun 08 18:59:35 CST 2013 - Boss: start work!
Sat Jun 08 18:59:35 CST 2013 - YHJ0 working...
Sat Jun 08 18:59:35 CST 2013 - YHJ2 working...
Sat Jun 08 18:59:35 CST 2013 - YHJ1 working...
Sat Jun 08 18:59:35 CST 2013 - YHJ4 working...
Sat Jun 08 18:59:35 CST 2013 - YHJ8 working...
Sat Jun 08 18:59:35 CST 2013 - YHJ6 working...
Sat Jun 08 18:59:35 CST 2013 - YHJ3 working...
Sat Jun 08 18:59:35 CST 2013 - YHJ7 working...
Sat Jun 08 18:59:35 CST 2013 - YHJ5 working...
Sat Jun 08 18:59:35 CST 2013 - YHJ9 working...
Sat Jun 08 18:59:35 CST 2013 - YHJ5 work finished!
Sat Jun 08 18:59:35 CST 2013 - YHJ1 work finished!
Sat Jun 08 18:59:35 CST 2013 - YHJ3 work finished!
Sat Jun 08 18:59:35 CST 2013 - YHJ6 work finished!
Sat Jun 08 18:59:35 CST 2013 - YHJ7 work finished!
Sat Jun 08 18:59:35 CST 2013 - YHJ9 work finished!
Sat Jun 08 18:59:35 CST 2013 - YHJ4 work finished!
Sat Jun 08 18:59:35 CST 2013 - YHJ0 work finished!
Sat Jun 08 18:59:35 CST 2013 - YHJ2 work finished!
Sat Jun 08 18:59:35 CST 2013 - YHJ8 work finished!
Sat Jun 08 18:59:35 CST 2013 - Boss: well done! All tasks finished! Time to go home!

In this example, two CountDownLatch instances are used to build the two scenarios. The first, startLauch, acts as a switch: before it is released no worker thread runs, and once it is released all worker threads proceed at the same time. The second, workLauch, is a counter: as long as it has not reached zero the main thread keeps waiting, and once every worker has finished the main thread unblocks and continues.
The second scenario is often combined with thread pools, which we will cover later.
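As a preview, here is a hedged sketch (not from the original article; names such as taskCount and pool are illustrative) of the counter scenario combined with a thread pool:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch: N tasks run on a small pool, the main thread waits for all of them.
public class PoolLatchSketch {
    public static void main(String[] args) throws InterruptedException {
        int taskCount = 5;
        CountDownLatch done = new CountDownLatch(taskCount);
        ExecutorService pool = Executors.newFixedThreadPool(3);

        for (int i = 0; i < taskCount; i++) {
            final int no = i;
            pool.submit(() -> {
                try {
                    System.out.println("task " + no + " running");
                } finally {
                    done.countDown();   // always decrement, even if the task fails
                }
            });
        }

        done.await();                   // block until every task has counted down
        pool.shutdown();
        System.out.println("all tasks done");
    }
}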
There is also an important property here, the memory consistency effect: actions in a thread prior to calling countDown() happen-before actions following a successful return from a corresponding await() in another thread.
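A hedged sketch of what this guarantee means in practice (class and field names are illustrative): the plain write to result below is visible after await() returns, with no extra synchronization:

import java.util.concurrent.CountDownLatch;

// Illustrative sketch: the write to `result` happens-before the read after await() returns.
public class HappensBeforeSketch {
    static int result = 0;   // deliberately not volatile

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);

        new Thread(() -> {
            result = 42;          // action before countDown()
            latch.countDown();
        }).start();

        latch.await();            // pairs with the countDown() above
        System.out.println(result);   // guaranteed to print 42
    }
}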
We have seen how it is used; so what principle is it built on, and how is it implemented?
Let's look at the corresponding source code:


private static final class Sync extends AbstractQueuedSynchronizer

Near the top of the class we see an inner Sync synchronizer that extends AQS (AbstractQueuedSynchronizer). Let's focus on the two methods we use, await and countDown, starting with await:


public void await() throws InterruptedException {
        sync.acquireSharedInterruptibly(1);
    }

Clearly, await() simply delegates to the internal synchronizer's shared acquire method (so far we have only discussed exclusive locks; this is a good opportunity to look at the shared-lock mechanism as well).


public final void acquireSharedInterruptibly(int arg) throws InterruptedException {
        if (Thread.interrupted())
            throw new InterruptedException();
        if (tryAcquireShared(arg) < 0)
            doAcquireSharedInterruptibly(arg);
    }

Here, if the thread has been interrupted, an InterruptedException is thrown immediately; otherwise it tries to acquire the shared lock. Let's look at the implementation of tryAcquireShared(arg) (this method is overridden by the inner Sync class):


public int tryAcquireShared(int acquires) {
            return (getState() == 0) ? 1 : -1;
        }

A shared lock means that all threads sharing the lock share the same resource: once the condition is satisfied for one thread, it is satisfied for all of them. In other words, a shared lock is essentially a flag that every waiting thread checks; as soon as the flag is satisfied, all waiting threads are released (as if each of them had acquired the lock). CountDownLatch is built on this shared-lock mechanism, and the flag here is whether state equals zero. The state field holds the remaining count, the positive value passed in through the constructor, so until countDown() has been called enough times, tryAcquireShared always returns -1.


Sync(int count) {
            setState(count);
        }

When tryAcquireShared returns a value less than zero, the shared lock has not been acquired and the thread must block. At this point doAcquireSharedInterruptibly() is executed:


private void doAcquireSharedInterruptibly(int arg)
        throws InterruptedException {
        final Node node = addWaiter(Node.SHARED);
        try {
            for (;;) {
                final Node p = node.predecessor();
                if (p == head) {
                    int r = tryAcquireShared(arg);
                    if (r >= 0) {
                        setHeadAndPropagate(node, r);
                        p.next = null; // help GC
                        return;
                    }
                }
                if (shouldParkAfterFailedAcquire(p, node) &&
                    parkAndCheckInterrupt())
                    break;
            }
        } catch (RuntimeException ex) {
            cancelAcquire(node);
            throw ex;
        }
        // Arrive here only if interrupted
        cancelAcquire(node);
        throw new InterruptedException();
    }

Here a node is first added to the CLH queue in shared mode (new nodes are appended at the tail). Then, if the node's predecessor is the head node, it retries tryAcquireShared; if the counter has reached 0 it sets itself as head and propagates the wake-up to its successor (discussed below). Otherwise it decides whether the thread should park and, if so, blocks the current thread until it is woken up or interrupted.


private final boolean parkAndCheckInterrupt() {
        LockSupport.park(this);
        return Thread.interrupted();
    }

Note that the parameter in LockSupport.park(Object blocker) is a monitoring object recorded for diagnostics, not the thing being blocked. What actually blocks is the current thread, so when unparking you must pass the corresponding thread, not the blocker. Don't confuse the two!


public static void park(Object blocker) {
        Thread t = Thread.currentThread();
        setBlocker(t, blocker);
        unsafe.park(false, 0L);
        setBlocker(t, null);
}
public static void unpark(Thread thread) {
        if (thread != null)
            unsafe.unpark(thread);
    }
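To make the distinction concrete, here is a small hedged demo (the class name ParkSketch and the gate object are illustrative): park() records the blocker only for monitoring tools, while unpark() takes the thread itself:

import java.util.concurrent.locks.LockSupport;

// Illustrative demo: park(blocker) blocks the current thread; unpark() is given the thread.
public class ParkSketch {
    public static void main(String[] args) throws InterruptedException {
        final Object gate = new Object();   // blocker, used only for diagnostics

        Thread waiter = new Thread(() -> {
            System.out.println("parking...");
            LockSupport.park(gate);         // blocks the current thread
            System.out.println("unparked");
        });
        waiter.start();

        Thread.sleep(500);
        LockSupport.unpark(waiter);         // pass the thread, not the blocker
    }
}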

Let's take a look at the implementation of the corresponding countDown method


public void countDown() {
        sync.releaseShared(1);
    }

Each call to countDown() delegates to the internal synchronizer's shared release operation:


public final boolean releaseShared(int arg) {
        if (tryReleaseShared(arg)) {
            Node h = head;
            if (h != null && h.waitStatus != 0)
                unparkSuccessor(h);
            return true;
        }
        return false;
    }

If tryReleaseShared succeeds, the thread waiting behind the head node is woken up. Let's look at tryReleaseShared, which is again overridden by CountDownLatch's inner Sync class:


public boolean tryReleaseShared(int releases) {
            // Decrement count; signal when transition to zero
            for (;;) {
                int c = getState();
                if (c == 0)
                    return false;
                int nextc = c - 1;
                if (compareAndSetState(c, nextc))
                    return nextc == 0;
            }
        }

As with acquisition, releasing the lock is implemented by the synchronizer inside CountDownLatch. The method spins, reading the current counter value: if it is already zero, all previously blocked threads have been released and it simply returns false; otherwise it uses CAS to set the counter to the decremented value. If the CAS succeeds and the new value is zero, everything is done and the blocked threads need to be released, so it returns true (note the subtle return nextc == 0); otherwise it returns false.
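The same spin-and-CAS pattern can be illustrated on its own with AtomicInteger (a hedged sketch, not JDK code; the class and method names are made up):

import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch of the retry loop in tryReleaseShared: atomically decrement
// and report whether this call was the one that took the counter to zero.
public class CasDecrementSketch {
    private final AtomicInteger count;

    public CasDecrementSketch(int initial) {
        this.count = new AtomicInteger(initial);
    }

    public boolean decrementAndSignalIfZero() {
        for (;;) {
            int c = count.get();
            if (c == 0)
                return false;              // already at zero, nothing to release
            int next = c - 1;
            if (count.compareAndSet(c, next))
                return next == 0;          // CAS won; signal only on the 1 -> 0 step
        }
    }
}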
Looking back at releaseShared: when tryReleaseShared returns true, the counter has reached zero and the blocked threads need to be released, so unparkSuccessor(h) is executed to wake up the thread waiting behind the head node of the queue.
A neat queue-based design releases the blocked threads one after another, instead of waking them all at once in the style of signalAll. So how does it work? In the code above only the head node's successor is woken up (the head node itself is just an empty placeholder). Let's first look at the implementation of unparkSuccessor:


privatevoid unparkSuccessor(Node node) {
        /*
         * Try to clear status in anticipation of signalling.  It is
         * OK if this fails or if status is changed by waiting thread.
         */
        compareAndSetWaitStatus(node, Node.SIGNAL, 0);
        /*
         * Thread to unpark is held in successor, which is normally
         * just the next node.  But if cancelled or apparently null,
         * traverse backwards from tail to find the actual
         * non-cancelled successor.
         */
        Node s = node.next;
        if (s == null || s.waitStatus > 0) {
            s = null;
            for (Node t = tail; t != null && t != node; t = t.prev)
                if (t.waitStatus <= 0)
                    s = t;
        }
        if (s != null)
            LockSupport.unpark(s.thread);
    }

Clearly, the argument passed in is the head node. After updating its wait status via CAS, the head node's successor is woken up (note again that what gets unparked is the thread, not the blocker object), and the method returns.
So how are the remaining blocked threads woken up? Let's look again at doAcquireSharedInterruptibly in the await path, this time with a few tags marked:


private void doAcquireSharedInterruptibly(int arg)
        throws InterruptedException {
        final Node node = addWaiter(Node.SHARED);
        try {
            for (;;) {
                final Node p = node.predecessor();
                if (p == head) {
                    int r = tryAcquireShared(arg); // tag 2
                    if (r >= 0) {
                        setHeadAndPropagate(node, r); // tag 3
                        p.next = null; // help GC
                        return;
                    }
                }
                if (shouldParkAfterFailedAcquire(p, node) &&
                    parkAndCheckInterrupt())// tag 1
                    break;
            }
        } catch (RuntimeException ex) {
            cancelAcquire(node);
            throw ex;
        }
        // Arrive here only if interrupted
        cancelAcquire(node);
        throw new InterruptedException();
    }

Earlier we saw that a thread blocks at parkAndCheckInterrupt() (tag 1). When the successor of the head node (the first thread that entered the queue) is unparked, it returns from tag 1 and, since it was not interrupted, goes around the loop again. This time the check at tag 2 finds that the counter is already 0, so tryAcquireShared(arg) returns 1 instead of -1; r >= 0, so the code at tag 3 runs: setHeadAndPropagate sets the current node as the new head and then wakes up the next successor in turn.


private void setHeadAndPropagate(Node node, int propagate) {
        setHead(node); // tag 4
        if (propagate > 0 && node.waitStatus != 0) {
            /*
             * Don't bother fully figuring out successor.  If it
             * looks null, call unparkSuccessor anyway to be safe.
             */
            Node s = node.next;
            if (s == null || s.isShared())
                unparkSuccessor(node); // tag 5
        }
    }

Each awakened node thus wakes its own successor, so the threads in the queue are released one after another.
That is the whole of CountDownLatch. Once you understand atomic operations and AQS, analyzing CountDownLatch is actually fairly straightforward.
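As a closing exercise, here is a toy one-shot latch built directly on AQS in the same shared-mode style analyzed above (a hedged sketch modeled on the well-known AQS documentation example, not the JDK's CountDownLatch itself):

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Illustrative toy one-shot "gate" latch: state 0 = closed, 1 = open.
public class BooleanLatchSketch {
    private static final class Sync extends AbstractQueuedSynchronizer {
        boolean isSignalled() { return getState() != 0; }

        @Override
        protected int tryAcquireShared(int ignore) {
            return isSignalled() ? 1 : -1;   // pass only once the gate is open
        }

        @Override
        protected boolean tryReleaseShared(int ignore) {
            setState(1);                     // open the gate
            return true;                     // always propagate the wake-up
        }
    }

    private final Sync sync = new Sync();

    public void await() throws InterruptedException {
        sync.acquireSharedInterruptibly(1);
    }

    public void signal() {
        sync.releaseShared(1);
    }
}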


Origin blog.51cto.com/15061944/2593716