Commonly used locks and AQS for concurrent programming

Table of contents

1. Commonly used locks (except Synchronized)

LongAdder

ReentrantLock

CountDownLatch

CyclicBarrier

Phaser

ReadWriteLock

Semaphore

Exchanger

LockSupport

2. AQS

3. ThreadLocal


1. Commonly used locks (except Synchronized)

LongAdder

        First of all, LongAdder is not a lock; it is an atomic operation class similar to AtomicLong. As discussed earlier, the count++ operation is not atomic. To make it atomic we can use a lock, or we can use an atomic class instead, such as AtomicLong's incrementAndGet() method or LongAdder's increment() method.

        So what is the difference between these two atomic classes? Both AtomicLong and LongAdder are based on CAS spinning at the bottom, but with AtomicLong all threads compete on a single value, while LongAdder uses a striped (segmented) design: its internal data is split into an array of cells, each cell is updated independently, and the final result is obtained by summing the whole array.

        Use the following simple program to make a comparison:

static long count = 0L;
    static AtomicLong countAtomic = new AtomicLong(0L);
    static LongAdder countLongAdder = new LongAdder();

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[2000];

        Object lock = new Object();
        for (int i = 0; i < threads.length; i++){
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 50000; j++){
                    synchronized (lock){
                        count++;
                    }
                }
            });
        }
        long start = System.currentTimeMillis();
        for (Thread thread : threads) thread.start();
        for (Thread thread : threads) thread.join();
        System.out.println("synchronized耗时:"+(System.currentTimeMillis()-start));

        for (int i = 0; i < threads.length; i++){
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 50000; j++){
                    countAtomic.incrementAndGet();
                }
            });
        }
        start = System.currentTimeMillis();
        for (Thread thread : threads) thread.start();
        for (Thread thread : threads) thread.join();
        System.out.println("AtomicLong耗时:"+(System.currentTimeMillis()-start));

        for (int i = 0; i < threads.length; i++){
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 50000; j++){
                    countLongAdder.increment();
                }
            });
        }
        start = System.currentTimeMillis();
        for (Thread thread : threads) thread.start();
        for (Thread thread : threads) thread.join();
        System.out.println("LongAddr耗时:"+(System.currentTimeMillis()-start));
    }

Their timings are shown below:

synchronized took: 4550
AtomicLong took: 1636
LongAdder took: 244

The comparison shows that the elapsed time is synchronized > AtomicLong > LongAdder. However, when the number of threads or the number of iterations per thread is small, AtomicLong and LongAdder are not necessarily faster, and further comparison is needed in those cases.
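If the final values need to be checked after the three runs, a small addition like the one below (a sketch appended at the end of main, not part of the original benchmark) reads each counter back; note that LongAdder exposes its total through sum():

        System.out.println("count = " + count);
        System.out.println("countAtomic = " + countAtomic.get());
        System.out.println("countLongAdder = " + countLongAdder.sum());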

ReentrantLock

        First look at the following piece of code

ReentrantLock lock = new ReentrantLock();
    void first(){
        try {
            lock.lock();
            for (int i = 0; i < 5; i++){
                System.out.println(i);
                if (i == 3){
                    second();
                }
            }
        }catch (Exception e){
            e.printStackTrace();
        }finally {
           lock.unlock();
        }
    }
    void second(){
        try {
            lock.lock();
            System.out.println("second");
        }catch (Exception e){
            e.printStackTrace();
        }finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        D_Reentrant reentrant = new D_Reentrant();
        reentrant.first();
    }

Take a look at the execution output:

0
1
2
3
second
4

        From the execution results above you can see that ReentrantLock is a reentrant lock. synchronized is also reentrant, so what is the advantage of ReentrantLock? It is more flexible and offers richer features; see the following code snippet:

// Create a fair lock
ReentrantLock reentrantLock = new ReentrantLock(true);
// Try to acquire the lock without blocking
boolean lockFlag = reentrantLock.tryLock();
// Try to acquire the lock, waiting at most 5 seconds
lockFlag = reentrantLock.tryLock(5, TimeUnit.SECONDS);
// Interruptible acquisition: if another thread interrupts this one while it is waiting,
// an InterruptedException is thrown and the acquisition attempt is abandoned
reentrantLock.lockInterruptibly();
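As a sketch of how tryLock() is usually combined with unlock(), the snippet below (a minimal example, not from the original article) only releases the lock when it was actually acquired:

ReentrantLock lock = new ReentrantLock();
if (lock.tryLock()) {
    try {
        // Critical section: the lock is held here
        System.out.println("got the lock");
    } finally {
        lock.unlock();
    }
} else {
    // Do something else instead of blocking
    System.out.println("lock is busy, skipping");
}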

CountDownLatch

        CountDownLatch can block threads. An initial count is passed in when it is created; await() blocks while the count is not 0 and releases the waiting threads once the count reaches zero. Internally, thread safety is guaranteed through CAS. See the following code for details:

// Create a CountDownLatch with an initial count of 10
        CountDownLatch countDownLatch = new CountDownLatch(10);
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++){
            threads[i] = new Thread(() -> {
                System.out.println("thread start");
                // Decrement the latch count by 1; thread safety is guaranteed via CAS
                countDownLatch.countDown();
            });
        }
        for (Thread thread : threads){
            thread.start();
        }
        // Block until the count reaches 0
        countDownLatch.await();
        System.out.println("end");

Output result:

thread start
thread start
thread start
thread start
thread start
thread start
thread start
thread start
thread start
thread start
end
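await() also has an overload with a timeout, which is useful when the waiting thread should not block forever. A minimal sketch, assuming the same countDownLatch as above:

        // Wait at most 3 seconds; returns true if the count reached 0 in time
        boolean finished = countDownLatch.await(3, TimeUnit.SECONDS);
        if (!finished) {
            System.out.println("timed out while waiting for the worker threads");
        }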

CyclicBarrier

        It works in the opposite direction to CountDownLatch. CountDownLatch counts down and releases the waiting thread when the count reaches 0, while CyclicBarrier gathers threads and releases them once the specified number has arrived at the barrier. A CyclicBarrier can also be reused without being reset. See the following code for specific usage:

// Create a CyclicBarrier: every time 25 threads are waiting at the barrier, run the given Runnable
CyclicBarrier cyclicBarrier = new CyclicBarrier(25, new Runnable() {
    @Override
    public void run() {
        System.out.println("桌满,开席");
    }
});
for (int i = 0; i < 100; i++){
    new Thread(()->{
        try {
            // The thread blocks at the barrier
            cyclicBarrier.await();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (BrokenBarrierException e) {
            e.printStackTrace();
        }
    }).start();
}

As explained above, every time 25 threads are blocked at the barrier one line is printed, and the barrier can be reused, so the code above prints 4 times. The output is as follows:

Table is full, the feast begins
Table is full, the feast begins
Table is full, the feast begins
Table is full, the feast begins

Phaser

        A Phaser controls thread execution according to different phases. It can be understood as a CyclicBarrier with stages, where something different can be done when each stage is reached. It is not used much in real projects and was only introduced in JDK 1.7. The following code is enough to understand it:

    /**
     * Custom Phaser that defines the individual phases.
     * Note that phase numbering starts from 0.
     */
    static class MarryPhaser extends Phaser{
        @Override
        protected boolean onAdvance(int phase, int registeredParties) {
            switch (phase) {
                case 0:
                    System.out.println("所有人到齐了!"+registeredParties+"个人");
                    return false;
                case 1:
                    System.out.println("所有人吃完了!"+registeredParties+"个人");
                    return false;
                case 2:
                    System.out.println("所有人离开了!"+registeredParties+"个人");
                    return false;
                case 3:
                    System.out.println("入洞房!"+registeredParties+"个人");
                    return true;
                default:
                    return true;
            }

        }
    }

    static MarryPhaser marryPhaser = new MarryPhaser();

    public static void waitSleep(int number) {
        try {
            Thread.sleep(number);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    static Random random = new Random();
    static class Person extends Thread {
        private String name;
        public Person(String name){
            this.name = name;
        }
        public void arrive(){
            waitSleep(random.nextInt(1000));
            // Phase 0: everyone must arrive before moving on to the next step
            marryPhaser.arriveAndAwaitAdvance();
        }
        public void eat(){
            waitSleep(random.nextInt(1000));
            // Phase 1: everyone must finish eating before moving on to the next step
            marryPhaser.arriveAndAwaitAdvance();
        }
        public void leave(){
            waitSleep(random.nextInt(1000));
            // Phase 2: everyone must leave before moving on to the next step
            marryPhaser.arriveAndAwaitAdvance();
        }
        public void sleep(){
            if ("新郎".equals(name) || "新娘".equals(name)){
                waitSleep(random.nextInt(1000));
                // 第四阶段,新郎和新娘睡觉就是入洞房,结束
                marryPhaser.arriveAndAwaitAdvance();
            }else {
                // 第四阶段不需要除新郎和新娘以外的人参与
                marryPhaser.arriveAndDeregister();
            }
        }
        @Override
        public void run(){
            arrive();
            eat();
            leave();
            sleep();
        }
    }

    public static void main(String[] args) {
        marryPhaser.bulkRegister(7);
        for (int i = 0; i < 5; i++){
            // Person already extends Thread, so it can be started directly
            new Person("guest" + i).start();
        }
        new Person("groom").start();
        new Person("bride").start();
    }

Take a look at the output:

Everyone has arrived! 7 people
Everyone has finished eating! 7 people
Everyone has left! 7 people
Into the bridal chamber! 2 people

ReadWriteLock

        A read-write lock applies different locking strategies to reads and writes: the read lock is a shared lock, while the write lock is an exclusive lock. Look at the following piece of code:

    public static void deal(Lock lock, String deal){
        try {
            lock.lock();
            System.out.println(deal);
            Thread.sleep(1000);
        }catch (Exception e){
            e.printStackTrace();
        }finally {
            lock.unlock();
        }
    }


    public static void main(String[] args) {
        // Create the ReadWriteLock
        ReadWriteLock lock = new ReentrantReadWriteLock();
        // Obtain the read lock
        Lock readLock = lock.readLock();
        // Obtain the write lock
        Lock writeLock = lock.writeLock();

        Runnable readR = () -> deal(readLock, "read");
        Runnable writeR = () -> deal(writeLock, "write");
        for (int i = 0; i < 10; i++) new Thread(readR).start();
        for (int i = 0; i < 2; i++) new Thread(writeR).start();
    }

If the read lock and the write lock were both exclusive, the program would take about 12 s (10 reads plus 2 writes, each holding the lock for 1 s). After running the code above, however, you will find that all the reads finish at almost the same time, while the writes finish one by one. This demonstrates that the read lock is a shared lock and the write lock is an exclusive lock.
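A typical use of a read-write lock is a simple in-memory cache with many readers and few writers. Below is a minimal sketch (the cache map and method names are made up for illustration, assuming java.util.HashMap):

    static final Map<String, Object> cache = new HashMap<>();
    static final ReadWriteLock rwLock = new ReentrantReadWriteLock();

    static Object readCache(String key) {
        rwLock.readLock().lock();
        try {
            // Many reader threads may be in here at the same time
            return cache.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    static void writeCache(String key, Object value) {
        rwLock.writeLock().lock();
        try {
            // Only one writer at a time, and no readers while writing
            cache.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }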

Semaphore

        When a Semaphore is created, a number of permits is specified. This number determines how many threads are allowed to execute at the same time, so it can be used for limiting concurrency. Take a look at the following code:

        // Create a Semaphore with 2 permits
        Semaphore semaphore = new Semaphore(2);
        new Thread(() -> {
            try {
                // Acquire a permit
                semaphore.acquire();
                System.out.println("first start");
                Thread.sleep(1000);
                System.out.println("first end");
                // Release the permit
                semaphore.release();
            }catch (Exception e){
                e.printStackTrace();
            }

        }).start();
        new Thread(() -> {
            try {
                // Acquire a permit
                semaphore.acquire();
                System.out.println("second start");
                Thread.sleep(1000);
                System.out.println("second end");
                // Release the permit
                semaphore.release();
            }catch (Exception e){
                e.printStackTrace();
            }

        }).start();
        new Thread(() -> {
            try {
                // Acquire a permit
                semaphore.acquire();
                System.out.println("third start");
                Thread.sleep(1000);
                System.out.println("third end");
                // Release the permit
                semaphore.release();
            }catch (Exception e){
                e.printStackTrace();
            }

        }).start();

The following is the output:

first start
second start
first end
second end
third start
third end

It can be seen from the output that, since there are 2 permits, two threads run at the same time. When one thread releases its permit, the third thread acquires it and continues.
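Semaphore also supports non-blocking and fair acquisition. The following is a small sketch of a hypothetical request handler, assuming the same 2-permit semaphore as above (new Semaphore(2, true) would create a fair version that hands out permits in FIFO order):

        // Acquire a permit only if one is free right now, without blocking
        if (semaphore.tryAcquire()) {
            try {
                System.out.println("handling request");
            } finally {
                semaphore.release();
            }
        } else {
            System.out.println("too many concurrent requests, rejected");
        }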

Exchanger

        Exchanger enables two threads to communicate with each other and exchange data. Note that the exchange always involves exactly two threads: exchange() blocks until a partner thread also calls it. Take a look at the following code:

        // Create the Exchanger
        Exchanger<String> exchanger = new Exchanger<>();
        new Thread(() -> {
            try {
                // Communicate with the other thread: hand over "T1" and receive the partner's value
                String str = exchanger.exchange("T1");
                System.out.println("T1线程"+str);
            }catch (Exception e){
                e.printStackTrace();
            }

        }).start();
        new Thread(() -> {
            try {
                // Communicate with the other thread: hand over "T2" and receive the partner's value
                String str = exchanger.exchange("T2");
                System.out.println("T2线程"+str);
            }catch (Exception e){
                e.printStackTrace();
            }

        }).start();

Take a look at the output below:

T2 thread received T1
T1 thread received T2

From the output above we can see that the two threads exchanged their values, and each printed the value it received from the other.
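A classic use of Exchanger is swapping buffers between a producer thread and a consumer thread. Below is a minimal sketch (the buffer contents and variable names are made up for illustration, assuming java.util.ArrayList and java.util.Arrays):

        Exchanger<List<String>> bufferExchanger = new Exchanger<>();
        // Producer: fills a buffer, then hands it over in exchange for an empty one
        new Thread(() -> {
            try {
                List<String> full = new ArrayList<>(Arrays.asList("item-0", "item-1", "item-2"));
                List<String> empty = bufferExchanger.exchange(full);
                System.out.println("producer continues with an empty buffer: " + empty);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
        // Consumer: offers an empty buffer and receives the full one
        new Thread(() -> {
            try {
                List<String> received = bufferExchanger.exchange(new ArrayList<>());
                System.out.println("consumer got: " + received);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();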

LockSupport

        LockSupport can suspend a thread without using a lock, and it can park a thread and wake it up again at any time. The following code uses it to print 1A2B3C4D... alternately:


    static Thread t1 = null;
    static Thread t2 = null;
    public static void main(String[] args) {
        char[] letterArray = "ABCDEEF".toCharArray();
        char[] numberArray = "1234567".toCharArray();

        t1 = new Thread(() -> {
          for (char number : numberArray){
              System.out.println(number);
              // Wake up (unpark) thread t2
              LockSupport.unpark(t2);
              // Park (pause) the current thread t1
              LockSupport.park();
          }
        });
        t2 = new Thread(() -> {
            for (char letter : letterArray){
                // Park thread t2 until t1 unparks it
                LockSupport.park();
                System.out.println(letter);
                // Wake up (unpark) thread t1
                LockSupport.unpark(t1);
            }
        });
        t1.start();
        t2.start();

    }

This kind of alternate printing can also be implemented with wait and notify. Note that wait releases the lock, while notify does not release it immediately (the lock is only released when the synchronized block exits). See the following code:

        char[] letterArray = "ABCDEF".toCharArray();
        char[] numberArray = "123456".toCharArray();
        Object object = new Object();
        new Thread(() -> {
            synchronized (object){
                for (char number : numberArray){
                    try {
                        System.out.println(number);
                        // Notify the other thread so it can continue
                        object.notify();
                        // Block the current thread and release the lock
                        object.wait();
                    }catch (Exception e){
                        e.printStackTrace();
                    }

                }
                // Final notify so the other thread does not stay blocked
                object.notify();
            }

        }).start();
        new Thread(() -> {
            synchronized (object){
                for (char letter : letterArray){
                    try {
                        System.out.println(letter);
                        // Notify the other thread so it can continue
                        object.notify();
                        // Block the current thread and release the lock
                        object.wait();
                    }catch (Exception e){
                        e.printStackTrace();
                    }

                }
                // Final notify so the other thread does not stay blocked
                object.notify();
            }

        }).start();

While we are on this alternate-printing example, let's add another ReentrantLock feature: Condition. See the following code:

        // Create the ReentrantLock
        ReentrantLock reentrantLock = new ReentrantLock();
        // Condition that thread 1 (the number printer) waits on
        Condition first = reentrantLock.newCondition();
        // Condition that thread 2 (the letter printer) waits on
        Condition second = reentrantLock.newCondition();
        char[] letterArray = "ABCDEF".toCharArray();
        char[] numberArray = "123456".toCharArray();

        new Thread(() -> {
            try {
                reentrantLock.lock();
                for(char number : numberArray){
                    System.out.println(number);
                    // Wake up thread 2
                    second.signal();
                    // Thread 1 waits and releases the lock
                    first.await();
                }
                // Final signal so thread 2 can finish
                second.signal();
            }catch (Exception e){
                e.printStackTrace();
            }finally {
                reentrantLock.unlock();
            }

        }).start();
        new Thread(() -> {
            try {
                reentrantLock.lock();
                for(char letter : letterArray){
                    System.out.println(letter);
                    // Wake up thread 1
                    first.signal();
                    // Thread 2 waits and releases the lock
                    second.await();
                }
                // Final signal so thread 1 can finish
                first.signal();
            }catch (Exception e){
                e.printStackTrace();
            }finally {
                reentrantLock.unlock();
            }
        }).start();

2. AQS

        AQS (AbstractQueuedSynchronizer) is what many locks are built on internally, for example CountDownLatch and ReentrantLock above. AQS mainly consists of two parts: a state field that represents the lock state, and a doubly linked list (a FIFO queue) that stores waiting threads. Each element of this queue is a Node that contains the thread's information.

        Before introducing AQS, let's take a look at the source code of ReentrantLock's lock() method:

    // ReentrantLock's lock() method delegates to sync.lock()
    public void lock() {
        sync.lock();
    }

    // ReentrantLock's static nested class Sync extends AbstractQueuedSynchronizer, i.e. AQS
    abstract static class Sync extends AbstractQueuedSynchronizer{...}

    // The static nested class NonfairSync extends Sync and therefore also inherits from AQS
    static final class NonfairSync extends Sync {
        private static final long serialVersionUID = 7316153563782823691L;
        
        final void lock() {
            // Try to grab the lock with AQS's CAS on state; if it succeeds, record the current thread as the owner
            if (compareAndSetState(0, 1))
                setExclusiveOwnerThread(Thread.currentThread());
            else
                // If the CAS fails, fall back to AQS's acquire() method
                acquire(1);
        }

        protected final boolean tryAcquire(int acquires) {
            return nonfairTryAcquire(acquires);
        }
    }
    // AQS's method that CASes the state field, i.e. the actual lock acquisition
    protected final boolean compareAndSetState(int expect, int update) {
        // Delegates to Unsafe's CAS
        return unsafe.compareAndSwapInt(this, stateOffset, expect, update);
    }
    
    
    // AQS's acquire() method
    public final void acquire(int arg) {
        // tryAcquire() is evaluated first; thanks to short-circuit evaluation, if the lock is acquired the check ends here
        // Otherwise the current thread is wrapped in a Node and appended to the wait queue
        if (!tryAcquire(arg) &&
            acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
            selfInterrupt();
    }
    // AQS's tryAcquire() uses the template method pattern: the concrete implementation is provided by subclasses
    protected boolean tryAcquire(int arg) {
        throw new UnsupportedOperationException();
    }
    // AQS's addWaiter() method: appends a new node for the current thread to the tail of the queue via CAS
    private Node addWaiter(Node mode) {
        Node node = new Node(Thread.currentThread(), mode);
        // Try the fast path of enq; backup to full enq on failure
        Node pred = tail;
        if (pred != null) {
            node.prev = pred;
            if (compareAndSetTail(pred, node)) {
                pred.next = node;
                return node;
            }
        }
        enq(node);
        return node;
    }
    // AQS's acquireQueued() method
    final boolean acquireQueued(final Node node, int arg) {
        boolean failed = true;
        try {
            boolean interrupted = false;
            // Spin in an endless loop, letting the enqueued thread keep trying to acquire the lock
            for (;;) {
                final Node p = node.predecessor();
                // When this node's predecessor is the head node, try to acquire the lock
                if (p == head && tryAcquire(arg)) {
                    setHead(node);
                    p.next = null; // help GC
                    failed = false;
                    return interrupted;
                }
                if (shouldParkAfterFailedAcquire(p, node) &&
                    parkAndCheckInterrupt())
                    interrupted = true;
            }
        } finally {
            if (failed)
                cancelAcquire(node);
        }
    }
    // The tryAcquire() method of ReentrantLock's static nested class NonfairSync
    protected final boolean tryAcquire(int acquires) {
        // Delegates to Sync's nonfairTryAcquire() method
        return nonfairTryAcquire(acquires);
    }
    
    // The nonfairTryAcquire() method of ReentrantLock's static nested class Sync
    final boolean nonfairTryAcquire(int acquires) {
            final Thread current = Thread.currentThread();
            // Read the current lock state from AQS
            int c = getState();
            // If the lock is currently free
            if (c == 0) {
                // CAS the state to acquire the lock; on success record the owner thread and return true
                if (compareAndSetState(0, acquires)) {
                    setExclusiveOwnerThread(current);
                    return true;
                }
            }
            // If the lock is already held and the holder is the current thread, add acquires to state
            // This is how reentrancy is implemented
            else if (current == getExclusiveOwnerThread()) {
                int nextc = c + acquires;
                if (nextc < 0) // overflow
                    throw new Error("Maximum lock count exceeded");
                setState(nextc);
                return true;
            }
            return false;
        }

In fact, part of the AQS source code has already been walked through above. Now let's look at its state field and the doubly linked list:

// state identifies the lock state; 0 means unlocked
private volatile int state;
// Tail node of the doubly linked queue
private transient volatile Node tail;
// Head node of the doubly linked queue
private transient volatile Node head;
// AQS's inner class Node
static final class Node {
      
        static final Node SHARED = new Node();
        
        static final Node EXCLUSIVE = null;

        /** waitStatus value to indicate thread has cancelled */
        static final int CANCELLED =  1;
        /** waitStatus value to indicate successor's thread needs unparking */
        static final int SIGNAL    = -1;
        /** waitStatus value to indicate thread is waiting on condition */
        static final int CONDITION = -2;
        /**
         * waitStatus value to indicate the next acquireShared should
         * unconditionally propagate
         */
        static final int PROPAGATE = -3;

        // Wait status of the thread in this node
        volatile int waitStatus;

        // The previous node
        volatile Node prev;

        // The next node
        volatile Node next;

        // The thread held by this node
        volatile Thread thread;

        /**
         * Link to next node waiting on condition, or the special
         * value SHARED.  Because condition queues are accessed only
         * when holding in exclusive mode, we just need a simple
         * linked queue to hold nodes while they are waiting on
         * conditions. They are then transferred to the queue to
         * re-acquire. And because conditions can only be exclusive,
         * we save a field by using special value to indicate shared
         * mode.
         */
        Node nextWaiter;

        /**
         * Returns true if node is waiting in shared mode.
         */
        final boolean isShared() {
            return nextWaiter == SHARED;
        }

        // Returns the predecessor of this node
        final Node predecessor() throws NullPointerException {
            Node p = prev;
            if (p == null)
                throw new NullPointerException();
            else
                return p;
        }

        Node() {    // Used to establish initial head or SHARED marker
        }

        Node(Thread thread, Node mode) {     // Used by addWaiter
            this.nextWaiter = mode;
            this.thread = thread;
        }

        Node(Thread thread, int waitStatus) { // Used by Condition
            this.waitStatus = waitStatus;
            this.thread = thread;
        }
    }

Since JDK 9, the VarHandle type has been added. A VarHandle represents a reference to a variable and can perform atomic operations on ordinary fields, such as compareAndSet() and getAndAdd(). These operations are implemented natively inside the JVM, which can be understood as operating directly on the underlying memory.
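A minimal sketch of using a VarHandle on an ordinary field (the Counter class and its field are made up for illustration; requires JDK 9 or later):

import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

class Counter {
    private volatile int value;

    // VarHandle pointing at the "value" field
    private static final VarHandle VALUE;
    static {
        try {
            VALUE = MethodHandles.lookup().findVarHandle(Counter.class, "value", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    boolean casValue(int expect, int update) {
        // Atomic compare-and-set on the plain int field
        return VALUE.compareAndSet(this, expect, update);
    }

    int getAndAddValue(int delta) {
        // Atomic fetch-and-add on the plain int field
        return (int) VALUE.getAndAdd(this, delta);
    }
}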

3. ThreadLocal

        ThreadLocal is thread-private: an object put into a ThreadLocal is only visible to the thread that stored it. Learn more through the following code:

// Create the ThreadLocal object
    static ThreadLocal<Product> threadLocal = new ThreadLocal<>();

    public static void main(String[] args) {
        new Thread(() -> {
            try {
                // Thread 1 sleeps for two seconds
                Thread.sleep(2000);
            }catch (Exception e){
                e.printStackTrace();
            }
            // Then print the object stored in threadLocal
            System.out.println(threadLocal.get());
        }).start();
        new Thread(() -> {
            try {
                Thread.sleep(1000);
            }catch (Exception e){
                e.printStackTrace();
            }
            // Thread 2 sleeps for one second and then puts an object into threadLocal
            threadLocal.set(new Product());
        }).start();
    }
    static class Product{
        private String name = "product";
    }

If the ThreadLocal object were not thread-private, thread 1 should print the Product object that thread 2 stored. Let's look at the output:

null

The output is null, which proves that ThreadLocal is thread-private. Looking at the source code gives a better understanding of it:

    // ThreadLocal's set() method
    public void set(T value) {
        Thread t = Thread.currentThread();
        ThreadLocalMap map = getMap(t);
        if (map != null)
            // If the map already exists, store the value in it, keyed by this ThreadLocal
            map.set(this, value);
        else
            // If the map is null, create it
            createMap(t, value);
    }

    // ThreadLocal's getMap() method: it returns the current thread's ThreadLocalMap (held in a field of Thread)
    ThreadLocalMap getMap(Thread t) {
        return t.threadLocals;
    }
    // ThreadLocal's createMap() method
    void createMap(Thread t, T firstValue) {
        t.threadLocals = new ThreadLocalMap(this, firstValue);
    }
    

        From the earlier discussion of JVM garbage collection we know that ThreadLocalMap inside ThreadLocal is implemented with weak references, but the weak reference only applies to the key of each entry; the value is still strongly referenced. Therefore, after use, call the remove() method in a finally block to prevent memory leaks (OOM).
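A minimal sketch of the recommended usage pattern, assuming the same threadLocal field as in the example above:

        try {
            threadLocal.set(new Product());
            // ... use threadLocal.get() anywhere on this thread ...
        } finally {
            // Always clean up, especially in thread pools where threads are reused
            threadLocal.remove();
        }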

        It plays a very important role in development, for example in Spring's transactions. If several database operations belong to the same transaction but each used a different database connection, the transaction could not be managed. ThreadLocal solves this: the first connection obtained is put into the current thread's ThreadLocal, and every subsequent database operation of that thread takes the connection from the ThreadLocal, which guarantees that the transaction works correctly.
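A greatly simplified sketch of this idea (this is not Spring's actual implementation; the class and method names are made up for illustration):

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

class ConnectionHolder {
    private static final ThreadLocal<Connection> CURRENT = new ThreadLocal<>();

    // All database operations of one thread (and therefore one transaction) share the same connection
    static Connection getConnection(DataSource dataSource) throws SQLException {
        Connection conn = CURRENT.get();
        if (conn == null) {
            conn = dataSource.getConnection();
            conn.setAutoCommit(false);   // the transaction is committed or rolled back explicitly
            CURRENT.set(conn);
        }
        return conn;
    }

    static void release() throws SQLException {
        Connection conn = CURRENT.get();
        if (conn != null) {
            conn.close();
            CURRENT.remove();            // avoid leaking the value in pooled threads
        }
    }
}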
