Java LinkedBlockingQueue detailed analysis

Background story

Recently in an interview I was asked how to implement a LinkedBlockingQueue, and I suddenly blanked.
The dialogue went roughly like this:

Interviewer: Please introduce a project you have worked on recently.
Me: Blah blah blah.
Interviewer: Okay. If you were asked to design a LinkedBlockingQueue, how would you implement it?
Me: Well, I would... emmm, I forget (very embarrassing).

I knew that this class is a blocking queue, but I really did not know how it is implemented internally, because I usually build similar functionality as an asynchronous producer-consumer setup (at least I do).
A blocking queue controls the production and consumption rates around the queue itself, whereas an asynchronous queue needs additional mechanisms for that control.
I hope this article helps with learning and sharing.

structural analysis

(figure: class hierarchy of LinkedBlockingQueue)
As shown in the figure above, LinkedBlockingQueue layers queue operations on top of the collection framework. Let's analyze each layer one by one.

java.util.Collection

The root interface of the Java collection framework. It mainly specifies the basic operations of a collection: adding and removing single or multiple elements, querying the number of elements, and checking whether elements are present. As the root interface, its constraints are deliberately loose in order to stay general, and many of them are stated only as documentation requirements for implementations to follow.

java.util.AbstractCollection

public abstract class AbstractCollection<E> implements Collection<E>

This is the basic skeleton implementation of the java.util.Collection interface. By default it behaves as an unmodifiable collection.
Its methods can be overridden whenever a concrete implementation class has a more efficient way to handle them.
The implementation relies on java.util.Iterator. If you want a modifiable collection, subclasses must override add and provide an iterator that supports removal.
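
As a minimal sketch (hypothetical class, not from the JDK) of how little a read-only subclass has to provide: only iterator() and size(); everything else comes from the skeleton.

import java.util.AbstractCollection;
import java.util.Arrays;
import java.util.Iterator;

// A minimal read-only collection: only iterator() and size() are implemented;
// contains, toArray, toString, ... all come from AbstractCollection.
class FixedPair<E> extends AbstractCollection<E> {
    private final E first, second;

    FixedPair(E first, E second) {
        this.first = first;
        this.second = second;
    }

    @Override public Iterator<E> iterator() { return Arrays.asList(first, second).iterator(); }
    @Override public int size() { return 2; }

    // To make the collection modifiable, add(E) would also have to be overridden,
    // and the iterator would have to support remove().
}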

java.util.Queue

public interface Queue<E> extends Collection<E>

This is the root interface of queues. On top of java.util.Collection, it adds insertion, removal, and inspection operations that do not throw exceptions.
In addition, the storage of elements is ordered, but the specific ordering is left to the implementations.
Blocking behaviour is defined separately by the java.util.concurrent.BlockingQueue interface.
Since null is used as a special value meaning the queue currently holds no elements, implementations should generally not allow null to be inserted.
Because element ordering differs between implementations, you also need to think about whether to override equals and hashCode (two queues can hold equal elements yet differ in order, so element-based equality is not always well defined).
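
A quick illustration of the two flavours of queue methods (ArrayDeque here is just a convenient example implementation; it also forbids null):

import java.util.ArrayDeque;
import java.util.Queue;

public class QueueFlavours {
    public static void main(String[] args) {
        Queue<String> q = new ArrayDeque<>();

        // "special value" flavour: no exception on an empty queue
        q.offer("a");                 // true
        System.out.println(q.peek()); // "a"
        System.out.println(q.poll()); // "a"
        System.out.println(q.poll()); // null, empty queue, nothing thrown

        // "exception" flavour inherited from Collection
        try {
            q.element();              // throws NoSuchElementException when empty
        } catch (java.util.NoSuchElementException expected) {
            System.out.println("element() threw, as documented");
        }
    }
}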

java.util.concurrent.BlockingQueue

public interface BlockingQueue<E> extends Queue<E>
This interface adds, on top of Queue, operations that block or wait for a limited time.
BlockingQueue therefore offers each operation in four forms:

            Throws exception    Special value    Blocks     Times out
Insert      add(e)              offer(e)         put(e)     offer(e, time, unit)
Remove      remove()            poll()           take()     poll(time, unit)
Examine     element()           peek()           none       none
The methods inherited from Collection should be used with care on a queue, because they are typically iterator-based and are not guaranteed to be atomic or thread-safe.
The documentation explicitly states that the queue operation methods (except bulk operations such as addAll) must be thread-safe, but it does not constrain how this is achieved.
Operations issued by multiple threads against the same queue are therefore executed one after another.

Note: you cannot add null values to this queue, because null is used as the return value of a failed poll (the queue has limited capacity and no element arrived within the specified time), meaning the queue currently has no elements.
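
A rough single-threaded sketch of the four forms on a bounded LinkedBlockingQueue (the capacity and timeouts here are arbitrary choices for the example):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BlockingQueueForms {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<Integer> q = new LinkedBlockingQueue<>(1); // capacity 1

        q.add(1);                                  // would throw IllegalStateException if full
        System.out.println(q.offer(2));            // false: full, special value, no blocking
        System.out.println(q.offer(2, 100, TimeUnit.MILLISECONDS)); // false after waiting 100 ms

        System.out.println(q.take());              // 1: would block if the queue were empty
        q.put(3);                                  // would block if the queue were still full
        System.out.println(q.poll(100, TimeUnit.MILLISECONDS));     // 3
        System.out.println(q.poll());              // null: empty, special value
    }
}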

java.util.AbstractQueue

public abstract class AbstractQueue<E> extends AbstractCollection<E> implements Queue<E>
This abstract class provides the basic skeleton of a Queue implementation.
Its main job is to implement the Collection-style methods (add, remove, element), which throw exceptions, in terms of the Queue methods (offer, poll, peek), which return special values (null or false). It also does not allow null to be inserted.
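
For example, AbstractQueue.add in the JDK is essentially a wrapper over offer that turns the special value into an exception; roughly:

public boolean add(E e) {
    if (offer(e))
        return true;
    else
        throw new IllegalStateException("Queue full");
}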

java.util.concurrent.LinkedBlockingQueue

public class LinkedBlockingQueue<E> extends AbstractQueue<E> implements BlockingQueue<E>, java.io.Serializable
LinkedBlockingQueue is one implementation of a blocking queue; it keeps its elements in a singly linked list.
Compared with an array-based queue, the linked implementation typically has higher throughput, but less predictable performance in most concurrent applications.
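
Before digging into the source, here is a minimal producer/consumer sketch of how the queue is typically used (the names, capacity and end-of-stream marker are arbitrary choices for the example):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(10); // bounded, applies back-pressure

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    queue.put("task-" + i); // blocks when the queue is full
                }
                queue.put("DONE");          // simple end-of-stream marker
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String task = queue.take(); // blocks when the queue is empty
                    if ("DONE".equals(task)) break;
                    System.out.println("consumed " + task);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}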

Detailed analysis of LinkedBlockingQueue

Attributes

// Maximum capacity; defaults to Integer.MAX_VALUE
private final int capacity;

// Current number of elements
private final AtomicInteger count = new AtomicInteger();

// Head of the queue. Invariant: head.item == null
transient Node<E> head;

// Tail of the queue. Invariant: last.next == null
private transient Node<E> last;

// Lock held by take, poll, etc.; effectively the lock guarding reads from the queue
private final ReentrantLock takeLock = new ReentrantLock();

// Wait queue for threads waiting to take
private final Condition notEmpty = takeLock.newCondition();

// Lock held by put, offer, etc.; effectively the lock guarding writes to the queue
private final ReentrantLock putLock = new ReentrantLock();

// Wait queue for threads waiting to put
private final Condition notFull = putLock.newCondition();

static class Node<E> {

    E item;

    // One of:
    //  - the real successor node
    //  - this node itself, meaning the successor is head.next (set during dequeue)
    //  - null, meaning there is no successor (this is the last node)
    Node<E> next;

    Node(E x) { item = x; }
}

From the fields above we can see that LinkedBlockingQueue is built on a singly linked list, and uses two ReentrantLocks whose newCondition() wait queues guarantee thread safety and signal whether data or free space is available.

Constructor

public LinkedBlockingQueue() {
    this(Integer.MAX_VALUE);
}

// Fixes the queue capacity and initializes the head and last nodes.
public LinkedBlockingQueue(int capacity) {
    if (capacity <= 0) throw new IllegalArgumentException();
    this.capacity = capacity;
    last = head = new Node<E>(null);
}

// Initializes with capacity Integer.MAX_VALUE, then takes the put lock and enqueues the elements of c one by one.
public LinkedBlockingQueue(Collection<? extends E> c) {
    this(Integer.MAX_VALUE);
    final ReentrantLock putLock = this.putLock;
    putLock.lock(); // the put ("write") lock, a ReentrantLock controlling insertion at the tail
    try {
        int n = 0;
        for (E e : c) {
            // null elements throw an exception
            if (e == null)
                throw new NullPointerException();
            if (n == capacity)
                throw new IllegalStateException("Queue full");
            enqueue(new Node<E>(e)); // wrap the element in a Node and enqueue it
            ++n;
        }
        count.set(n);
    } finally {
        putLock.unlock(); // release
    }
}

Enqueue and dequeue

// Enqueue: link node at the end of the queue; it becomes the new last
private void enqueue(Node<E> node) {
    // assert putLock.isHeldByCurrentThread();
    // assert last.next == null;
    last = last.next = node;
}

// Dequeue: unlink the successor of head and return its item
private E dequeue() {
    // assert takeLock.isHeldByCurrentThread();
    // assert head.item == null;
    Node<E> h = head;
    Node<E> first = h.next;
    h.next = h; // help GC
    head = first;
    E x = first.item;
    first.item = null;
    return x;
}

Locking and unlocking

// Take both locks, preventing other threads from both putting and taking
// (in effect, acquiring both the "write" and "read" locks)
void fullyLock() {
    putLock.lock();
    takeLock.lock();
}

// Fully unlock
// (in effect, releasing both the "write" and "read" locks)
void fullyUnlock() {
    takeLock.unlock();
    putLock.unlock();
}

Add elements

// Blocks until the element is inserted, unless the thread is interrupted
public void put(E e) throws InterruptedException {
    if (e == null) throw new NullPointerException();
    int c = -1;
    Node<E> node = new Node<E>(e);
    final ReentrantLock putLock = this.putLock;
    final AtomicInteger count = this.count;
    // interruptible lock acquisition
    putLock.lockInterruptibly();
    try {
        // the queue is full: wait until space becomes available
        while (count.get() == capacity) {
            // wait for a not-full signal
            notFull.await();
        }
        // the queue has space: enqueue
        enqueue(node);
        c = count.getAndIncrement();
        // if there is still space left, wake another waiting producer
        if (c + 1 < capacity)
            // propagate the not-full signal
            notFull.signal();
    } finally {
        // release the lock
        putLock.unlock();
    }
    if (c == 0)
        // the queue was empty before this insert: signal that it is now not empty
        signalNotEmpty();
}
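
signalNotEmpty is called above but not shown elsewhere in this article. In the JDK it looks roughly like the following; it must acquire takeLock before signalling, because put only holds putLock and a Condition may only be signalled while its lock is held. signalNotFull, used on the take side, is the mirror image.

private void signalNotEmpty() {
    final ReentrantLock takeLock = this.takeLock;
    takeLock.lock();
    try {
        notEmpty.signal();
    } finally {
        takeLock.unlock();
    }
}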

// Inserts the element, waiting up to the timeout; returns false on timeout, throws if interrupted
public boolean offer(E e, long timeout, TimeUnit unit)
    throws InterruptedException {

    if (e == null) throw new NullPointerException();
    long nanos = unit.toNanos(timeout);
    int c = -1;
    final ReentrantLock putLock = this.putLock;
    final AtomicInteger count = this.count;
    putLock.lockInterruptibly();
    try {
        while (count.get() == capacity) {
            if (nanos <= 0)
                return false;
            // wait up to nanos for a not-full signal;
            // the return value is the time remaining after waking up.
            // Other producers may have refilled the space, so re-check in a loop.
            nanos = notFull.awaitNanos(nanos);
        }
        enqueue(new Node<E>(e));
        c = count.getAndIncrement();
        if (c + 1 < capacity)
            notFull.signal();
    } finally {
        putLock.unlock();
    }
    if (c == 0)
        signalNotEmpty();
    return true;
}

// Inserts the element if there is space; returns true on success, otherwise false
public boolean offer(E e) {
    if (e == null) throw new NullPointerException();
    final AtomicInteger count = this.count;
    if (count.get() == capacity)
        // the queue is full: return immediately
        return false;
    int c = -1;
    Node<E> node = new Node<E>(e);
    final ReentrantLock putLock = this.putLock;
    // block until the lock is acquired
    putLock.lock();
    try {
        if (count.get() < capacity) {
            enqueue(node);
            c = count.getAndIncrement();
            if (c + 1 < capacity)
                notFull.signal();
        }
    } finally {
        putLock.unlock();
    }
    if (c == 0)
        signalNotEmpty();
    return c >= 0;
}

take elements

Taking elements mirrors adding elements, except that the put (write) lock becomes the take (read) lock and the condition notFull becomes notEmpty.

// Blocks until an element is available, unless the thread is interrupted
public E take() throws InterruptedException {
    E x;
    int c = -1;
    final AtomicInteger count = this.count;
    final ReentrantLock takeLock = this.takeLock;
    takeLock.lockInterruptibly();
    try {
        while (count.get() == 0) {
            notEmpty.await();
        }
        x = dequeue();
        c = count.getAndDecrement();
        // elements remain: wake another waiting consumer
        if (c > 1)
            notEmpty.signal();
    } finally {
        takeLock.unlock();
    }
    // the queue was full before this take: tell producers there is space now
    if (c == capacity)
        signalNotFull();
    return x;
}

// Takes an element, waiting up to the timeout; returns null on timeout
public E poll(long timeout, TimeUnit unit) throws InterruptedException {
    E x = null;
    int c = -1;
    long nanos = unit.toNanos(timeout);
    final AtomicInteger count = this.count;
    final ReentrantLock takeLock = this.takeLock;
    takeLock.lockInterruptibly();
    try {
        while (count.get() == 0) {
            if (nanos <= 0)
                return null;
            nanos = notEmpty.awaitNanos(nanos);
        }
        x = dequeue();
        c = count.getAndDecrement();
        if (c > 1)
            notEmpty.signal();
    } finally {
        takeLock.unlock();
    }
    if (c == capacity)
        signalNotFull();
    return x;
}

// Takes an element if one is available; otherwise returns null immediately
public E poll() {
    final AtomicInteger count = this.count;
    if (count.get() == 0)
        return null;
    E x = null;
    int c = -1;
    final ReentrantLock takeLock = this.takeLock;
    takeLock.lock();
    try {
        if (count.get() > 0) {
            x = dequeue();
            c = count.getAndDecrement();
            if (c > 1)
                notEmpty.signal();
        }
    } finally {
        takeLock.unlock();
    }
    if (c == capacity)
        signalNotFull();
    return x;
}

Remove elements

// Unlinks node p; trail is p's predecessor
void unlink(Node<E> p, Node<E> trail) {
    p.item = null;
    trail.next = p.next;
    if (last == p)
        last = trail;
    // the queue was full and now has space: send a not-full signal
    if (count.getAndDecrement() == capacity)
        notFull.signal();
}

// Removes the first element equal to o, if present
public boolean remove(Object o) {
    if (o == null) return false;
    fullyLock();
    try {
        // p is the node currently being compared
        // trail is p's predecessor
        // head is a dummy node whose item is always null
        for (Node<E> trail = head, p = trail.next;
             p != null;
             trail = p, p = p.next) {
             // found the element we are looking for
            if (o.equals(p.item)) {
                // remove it
                unlink(p, trail);
                return true;
            }
        }
        return false;
    } finally {
        fullyUnlock();
    }
}

Methods inherited from Collection

The implementations of these methods are all similar, so I will not go through them one by one.
The key point is that they first take both locks with fullyLock(), then traverse the linked list to find the elements they need.
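
contains is a representative example; in the JDK it looks roughly like this:

public boolean contains(Object o) {
    if (o == null) return false;
    fullyLock(); // block both producers and consumers while traversing
    try {
        for (Node<E> p = head.next; p != null; p = p.next)
            if (o.equals(p.item))
                return true;
        return false;
    } finally {
        fullyUnlock();
    }
}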

ReentrantLock

A reentrant lock.
Internally it records in its state field how many times the current thread has acquired the lock, which is what makes it reentrant; the lock must then be released the same number of times to be fully released.
With a ReentrantLock, once thread A has acquired the lock, thread B must wait for A to release it before it can acquire it.
Internally it is based on the CLH queue implemented by AQS, which maintains two queues: the queue of threads waiting for the lock, and the queue of threads waiting on a condition (nodes in the condition queue are moved to the lock wait queue once they are signalled).
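
A tiny sketch of what reentrancy means in practice: the same thread can acquire the lock again without blocking, and the lock is fully released only after the matching number of unlocks.

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    static final ReentrantLock lock = new ReentrantLock();

    static void outer() {
        lock.lock();                 // hold count becomes 1
        try {
            inner();                 // same thread re-enters without blocking
        } finally {
            lock.unlock();           // hold count back to 0: lock is released
        }
    }

    static void inner() {
        lock.lock();                 // hold count becomes 2
        try {
            System.out.println("hold count = " + lock.getHoldCount()); // prints 2
        } finally {
            lock.unlock();           // hold count back to 1
        }
    }

    public static void main(String[] args) {
        outer();
    }
}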

java.util.concurrent.locks.AbstractQueuedSynchronizer.ConditionObject

The condition object.
In LinkedBlockingQueue, private final Condition notFull = putLock.newCondition(); actually creates a ConditionObject via newCondition(). Internally, the nextWaiter field of java.util.concurrent.locks.AbstractQueuedSynchronizer.Node records whether a node waits in exclusive or shared mode. Whenever LinkedBlockingQueue takes an element, it checks whether elements remain and, if so, calls notEmpty.signal() to notify one thread waiting on that condition. A thread dequeued from the condition wait queue is transferred to the doubly linked sync queue of threads waiting to run, and acquires the lock only after its predecessors have finished.
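
As a minimal sketch (hypothetical class) of the await/signal pattern that LinkedBlockingQueue relies on: one lock, one condition, and a loop that re-checks the predicate after every wake-up.

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// A deliberately simple one-slot buffer: put overwrites any existing value.
class OneSlot<E> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private E value;

    void put(E e) {
        lock.lock();
        try {
            value = e;
            notEmpty.signal(); // wake one thread waiting on the condition
        } finally {
            lock.unlock();
        }
    }

    E take() throws InterruptedException {
        lock.lock();
        try {
            while (value == null)   // always re-check: wake-ups can be spurious or "stolen"
                notEmpty.await();   // releases the lock while waiting
            E e = value;
            value = null;
            return e;
        } finally {
            lock.unlock();
        }
    }
}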

lock logic

// Non-fair lock
static final class NonfairSync extends Sync {

    final void lock() {
        if (compareAndSetState(0, 1))
            // lock acquired on the fast path; an optimization for uncontended locks
            setExclusiveOwnerThread(Thread.currentThread());
        else
            // fast path failed: fall through to the normal acquire path
            acquire(1);
    }

    protected final boolean tryAcquire(int acquires) {
        return nonfairTryAcquire(acquires);
    }

    // Checks and sets the current owner thread; true means the lock is now held
    final boolean nonfairTryAcquire(int acquires) {
        final Thread current = Thread.currentThread();
        int c = getState();
        if (c == 0) {
            // reduces CAS traffic, similar in spirit to double-checked locking
            if (compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        }
        else if (current == getExclusiveOwnerThread()) {
            // the current thread already holds the lock: bump the reentrancy count
            int nextc = c + acquires;
            if (nextc < 0) // overflow
                throw new Error("Maximum lock count exceeded");
            setState(nextc);
            return true;
        }
        return false;
    }

    // From AQS
    public final void acquire(int arg) {
        // call the tryAcquire logic above
        if (!tryAcquire(arg) &&
            // addWaiter creates an exclusive CLH node;
            // acquireQueued links the node into the CLH queue and, if the node has no
            // predecessor still waiting, retries the acquisition itself
            acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
            selfInterrupt();
    }
}

The logic of the fair lock is basically the same as that of the unfair lock; the only difference is that at the start of tryAcquire it additionally checks whether the wait queue already contains a predecessor node, and if so it immediately reports failure to acquire.
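
For comparison, the fair version's tryAcquire in the JDK looks roughly like this; the only difference from nonfairTryAcquire above is the hasQueuedPredecessors() check before the CAS.

// FairSync.tryAcquire (JDK 8, roughly)
protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (!hasQueuedPredecessors() &&        // give way to threads already queued
            compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}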

The queue of threads waiting to acquire the lock is, in effect, the queue of threads waiting to execute.

CLH

JUC (java.util.concurrent) is Java's concurrency toolkit, written largely by Doug Lea; code written by someone of his calibre has considerable learning value.

It implements the basic framework for multi-thread synchronization, and internally it manages how multiple threads acquire a contended resource through a CLH queue.

CLH (Craig, Landin, and Hagersten) lock: a spin lock that guarantees freedom from starvation and provides first-come, first-served fairness.
The CLH lock is a scalable, high-performance, fair spin lock based on a linked list. Each thread spins only on a local variable: it continually polls its predecessor's status, and stops spinning once it sees that the predecessor has released the lock.

The CLH queue used by AQS in JDK 1.8 is a variant of this. The main change is turning the original singly linked list into a doubly linked list and adding a reference to the predecessor node, which is used to detect and skip cancelled predecessors and to wake a node's successor directly.
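
For reference, a textbook-style CLH spin lock (a simplified sketch, not the AQS code itself) can be written like this: each thread appends its own node to the tail and spins on its predecessor's flag.

import java.util.concurrent.atomic.AtomicReference;

// Textbook CLH spin lock (Herlihy/Shavit style), not the AQS implementation.
class CLHLock {
    static final class QNode {
        volatile boolean locked;
    }

    private final AtomicReference<QNode> tail = new AtomicReference<>(new QNode());
    private final ThreadLocal<QNode> myNode = ThreadLocal.withInitial(QNode::new);
    private final ThreadLocal<QNode> myPred = new ThreadLocal<>();

    public void lock() {
        QNode node = myNode.get();
        node.locked = true;                  // announce that this thread wants the lock
        QNode pred = tail.getAndSet(node);   // enqueue behind the previous tail
        myPred.set(pred);
        while (pred.locked) {
            // spin only on the predecessor's flag (a thread-local variable)
        }
    }

    public void unlock() {
        QNode node = myNode.get();
        node.locked = false;                 // the successor sees this and stops spinning
        myNode.set(myPred.get());            // recycle the predecessor's node for the next acquisition
    }
}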

some tricks

Copy class member variables to method local variables

In many places in the JUC code, Doug Lea copies class member variables into method-local variables. Why bother?
A mailing-list thread, "Performance of locally copied members?", gives the following answer:

It’s a coding style made popular by Doug Lea.
It’s an extreme optimization that probably isn’t necessary;
you can expect the JIT to make the same optimizations.
(you can try to check the machine code yourself!)
Nevertheless, copying to locals produces the smallest bytecode, and for low-level code it’s nice to write code that’s a little closer to the machine.
Also, optimizations of finals (can cache even across volatile reads) could be better. John Rose is working on that.
For some algorithms in j.u.c, copying to a local is necessary for correctness.

The general meaning:

This is Doug Lea's coding style.
It is an extreme optimization that probably isn't necessary; you can expect the JIT to make the same optimizations.
Copying class member variables into method-local variables produces the smallest bytecode, and for low-level code it is nice to write code that is a little closer to the machine.
In some algorithms (especially those in the JDK concurrency package), copying to a local variable, or using final variables, is necessary to guarantee concurrency correctness.

So we don't need to worry too much about this code pattern. As long as you write your concurrency/synchronization code correctly and the algorithm is sound, there is no need to deliberately imitate this style.


Origin blog.csdn.net/weixin_46080554/article/details/108820344