A Summary of Java's Concurrent BlockingQueue Implementations, with Source-Code Notes

Original address: http://jachindo.top:8090/archives/java%E5%B9%B6%E5%8F%91%E9%98%9F%E5%88%97blockingqueue%E6%80%BB%E7%BB%93%E4%B8%8E%E6%BA%90%E7%A0%81%E6%B5%85%E6%9E%90

§ Classification and Description


§ LinkedBlockingQueue

§ structure

  • The underlying structure is a linked list

  • Two locks: takeLock and putLock

  • Two conditions, notEmpty and notFull, on which the corresponding threads wait

    private final ReentrantLock takeLock = new ReentrantLock();
    private final Condition notEmpty = takeLock.newCondition();
    private final ReentrantLock putLock = new ReentrantLock();
    private final Condition notFull = putLock.newCondition();
    

    The takeLock and putLock guarantee that queue operations are thread-safe; using two separate locks lets take and put run at the same time, independently of each other.


§ Blocking insertion

There are several insertion methods, such as add, put, and offer; the differences between the three were covered above. Taking put as an example: when the queue is full, put blocks until the queue is no longer full, and once woken it continues to run. The flow in the source code is as follows:

  1. To add data to the queue, the first step is to acquire the put lock, so the insertion is thread-safe;
  2. The new element is simply **appended to the tail of the linked list**;
  3. If the **queue is full, the current thread blocks**; the blocking ability is implemented with the lock at the bottom layer, and the other queues rely on it too;
  4. After the element is added successfully, at the appropriate moment the queue **wakes a put thread waiting for "queue not full", or a take thread waiting for "queue not empty"**. This ensures that as soon as the put or take condition is satisfied, a blocked thread is woken immediately and the wake-up opportunity is not wasted.
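The blocking behaviour described in the steps above can be exercised with a small sketch (the class and method names here are invented for illustration): a capacity-1 queue forces a second put to block until a take frees a slot and signals notFull.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;

public class PutBlockSketch {
    // Returns the order in which elements come out after the blocked put completes.
    static List<String> demo() throws InterruptedException {
        LinkedBlockingQueue<String> q = new LinkedBlockingQueue<>(1); // capacity 1
        q.put("first");                        // fills the queue
        Thread producer = new Thread(() -> {
            try {
                q.put("second");               // queue full: blocks on notFull
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        Thread.sleep(50);                      // let the producer reach the blocked put
        List<String> out = new ArrayList<>();
        out.add(q.take());                     // frees a slot; signals notFull
        producer.join();                       // the blocked put can now finish
        out.add(q.take());
        return out;
    }
}
```

Regardless of exact timing, the result is deterministic: the first take returns "first", the producer's put then succeeds, and the second take returns "second".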

§ Blocking removal

Taking take as an example, the flow is very similar to put: it first acquires the lock, then removes data from the head of the queue; if the queue is empty, it blocks until the queue has a value.


§ SynchronousQueue

Reference: https://www.imooc.com/read/47/article/862


§ Features

  1. The queue **stores no data**, so it has no size and cannot be iterated;
  2. An insert operation does not return until another thread has completed the corresponding removal, and vice versa;
  3. The queue has two underlying data structures: a last-in-first-out **stack** and a first-in-first-out **queue**; the stack is unfair, the queue is fair.

§ structure

    // Common interface shared by the stack and the queue,
    // responsible for carrying out put or take
    abstract static class Transferer<E> {
        // If e is null, a special value is returned directly; if non-null,
        // e is handed over to a consumer
        // If timed is true, the operation has a timeout
        abstract E transfer(E e, boolean timed, long nanos);
    }

    // Stack: last in, first out; unfair
    // Scherer-Scott algorithm
    static final class TransferStack<E> extends Transferer<E> {
    }

    // Queue: first in, first out; fair
    static final class TransferQueue<E> extends Transferer<E> {
    }

    private transient volatile Transferer<E> transferer;

    // The no-arg constructor defaults to unfair
    public SynchronousQueue(boolean fair) {
        transferer = fair ? new TransferQueue<E>() : new TransferStack<E>();
    }
  1. The stack and the queue share a common interface called Transferer, which has a single method: transfer. Remarkably, this one method takes on the dual roles of both take and put;
  2. At initialization we can choose between the stack and the queue; if we do not choose, the default is the stack. The class comment also points this out: the stack is more efficient than the queue.

§ stack structure

Stack elements:

static final class SNode {
    // The next node down, i.e. the element this node is stacked on top of
    volatile SNode next;
    // Match for this node, used to decide when a blocked stack element can be woken
    // For example, suppose take runs first while the queue has no data: the take
    // blocks, and its stack element is SNode1
    // When a put arrives, the put's stack element is assigned to SNode1's match
    // field and the take operation is woken
    // When the take wakes and finds SNode1's match field set, it can grab the data
    // the put delivered, and return
    volatile SNode match;
    // Blocking a stack element is implemented by blocking its thread; waiter is that thread
    volatile Thread waiter;
    // The message not yet delivered, or not yet consumed
    Object item;
}

§ pushing and popping process

Both operations come down to one method: transfer().

  1. The method first determines whether the call is a put or a take;
  2. It checks the head of the stack: if the stack is empty, or the head's operation type is the same as the current operation, go to 3; otherwise go to 5;
  3. Check whether a timeout is set; if one is set and it has expired, return null; otherwise go to 4;
  4. If the head is empty, make the current operation the new head; if the head is not empty but has the same operation type as the current one, also push the current operation as the new head. It then waits for another thread to satisfy it, and since it cannot be satisfied yet, it blocks itself. For example, if the current operation is a take and the queue has no data, it blocks;
  5. If the head is already blocked and needs someone to wake it, check whether the current operation can wake it; if yes, go to 6, otherwise go to 4;
  6. Wrap itself as a node, assign it to the head node's match field, and wake the head node;
  7. After the head node wakes, its match field holds the node that woke it, and it returns that information.

§ queue structure

/** Head of the queue */
transient volatile QNode head;
/** Tail of the queue */
transient volatile QNode tail;

// Element of the queue
static final class QNode {
    // The next element after this one
    volatile QNode next;
    // The value of this element; if this element is blocked, the thread that
    // comes to wake it will set itself into item
    volatile Object item;         // CAS'ed to or from null
    // The current thread, which can be blocked (parked)
    volatile Thread waiter;       // to control park/unpark
    // true means put, false means take
    final boolean isData;
}

§ Use cases

Messaging middleware uses this pattern. To ensure messages can be pushed to consumers quickly, middleware generally works in one of two modes, push and pull: in push mode the server pushes messages to the client; in pull mode the client actively pulls data from the server. During a pull, if the server has no data, the pull request waits, and returns immediately once the server does have data. This pull principle is very similar to SynchronousQueue.
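A minimal hand-off sketch (the class and method names are invented for illustration): put parks the producer until a consumer's take completes the rendezvous, much like a pull request waiting for data to arrive.

```java
import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    // Each put blocks until a taker arrives: the queue itself holds no elements.
    static String handoff() throws InterruptedException {
        SynchronousQueue<String> q = new SynchronousQueue<>();
        Thread producer = new Thread(() -> {
            try {
                q.put("message");     // blocks until a consumer takes it
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        String msg = q.take();        // rendezvous: unblocks the producer
        producer.join();
        return msg;
    }
}
```

Note that a plain offer with no waiting consumer simply returns false, because there is nowhere to store the element.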


§ DelayQueue

§ Features

  1. Elements in the queue carry an expiry time at which they should be acted on; the closer to the head, the earlier the expiry;
  2. An element cannot be taken before it expires;
  3. Null elements are not allowed.

§ structure

The class declaration of DelayQueue is as follows:

public class DelayQueue<E extends Delayed> extends AbstractQueue<E>
    implements BlockingQueue<E> {

As the generics show, every element of a DelayQueue must be a subtype of the Delayed interface. Delayed is the key expression of the delay capability: it extends the Comparable interface and defines a method that reports how long remains until expiry, as follows:

public interface Delayed extends Comparable<Delayed> {
    long getDelay(TimeUnit unit);
}

That is, every element of a DelayQueue must implement the Delayed interface (and, through it, the Comparable interface), overriding both the getDelay and compareTo methods, for things to work; otherwise the compiler will remind us at compile time that elements must implement the Delayed interface.


DelayQueue also reuses a great deal of PriorityQueue's functionality, whose role here is to order elements by expiry time, so that the element that expires first is executed first.

This reuse idea is important, and we meet it often in the JDK source: for example, LinkedHashMap reuses the capabilities of HashMap, Set reuses Map, and DelayQueue here reuses PriorityQueue.


§ data into and out

  • Putting data requires taking the lock

  • A put uses the underlying PriorityQueue's insertion, which orders elements by their compareTo method. We want the final result sorted from small to large, because the head of the queue should hold the data that expires first, so the compareTo method must sort each element by its expiry time, as follows:

    (int) (this.getDelay(TimeUnit.MILLISECONDS) - o.getDelay(TimeUnit.MILLISECONDS));
    
  • Taking data (also under the lock): if there is an element whose expiry time has arrived, the data can be taken out; if no element has expired, the thread blocks.
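To make the contract concrete, here is a small, hypothetical Delayed element (the class name DelayedTask and its fields are invented for illustration) together with a drain that shows take() honouring expiry order rather than insertion order:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// A named task that expires after a given delay in milliseconds.
public class DelayedTask implements Delayed {
    final String name;
    final long expireAtMillis;

    DelayedTask(String name, long delayMillis) {
        this.name = name;
        this.expireAtMillis = System.currentTimeMillis() + delayMillis;
    }

    // How long until this element expires; <= 0 means it can be taken.
    @Override
    public long getDelay(TimeUnit unit) {
        return unit.convert(expireAtMillis - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    }

    // Sort by remaining delay, smallest first, so the head expires earliest.
    @Override
    public int compareTo(Delayed o) {
        return Long.compare(getDelay(TimeUnit.MILLISECONDS), o.getDelay(TimeUnit.MILLISECONDS));
    }

    // Insert out of order, then drain: take() blocks until the head expires.
    static List<String> drainNames() throws InterruptedException {
        DelayQueue<DelayedTask> q = new DelayQueue<>();
        q.put(new DelayedTask("later", 200));
        q.put(new DelayedTask("sooner", 50));
        List<String> names = new ArrayList<>();
        names.add(q.take().name);
        names.add(q.take().name);
        return names;
    }
}
```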


§ ArrayBlockingQueue

§ Features

  1. A bounded blocking array: once the capacity is set at creation, it cannot be changed later;
  2. Elements are ordered FIFO: data is inserted at the tail and taken from the head;
  3. When the queue is full, putting data blocks; when the queue is empty, taking data blocks.

As the class comment notes, ArrayBlockingQueue differs from an ordinary array in that it cannot grow dynamically; when the queue is full or empty, put and take respectively will block.


§ structure

// The queue's contents live in an Object array
// The array size must be set explicitly at construction; there is no default
final Object[] items;

// Index of the next position to take data from
int takeIndex;

// Index of the next position to put data at
int putIndex;

// Number of elements currently in the queue
int count;

// Reentrant lock
final ReentrantLock lock;

// Condition for take waiters
private final Condition notEmpty;

// Condition for put waiters
private final Condition notFull;

Two fields here are key: ***takeIndex and putIndex, the index positions of the next take and the next put. When adding or fetching data there is no calculation to do on the spot; the queue always knows exactly where the next element goes in and where the next element comes out.***


At initialization there are two important parameters: the size of the array, and whether the lock is fair. The source code is as follows:

public ArrayBlockingQueue(int capacity, boolean fair) {
    if (capacity <= 0)
        throw new IllegalArgumentException();
    this.items = new Object[capacity];
    lock = new ReentrantLock(fair);
    // Condition "queue not empty", signalled on a successful put
    notEmpty = lock.newCondition();
    // Condition "queue not full", signalled on a successful take
    notFull =  lock.newCondition();
}

As the source shows, the second parameter, fair, mainly controls whether the lock is fair: with a fair lock, contending threads acquire it in first-come, first-served order; with an unfair lock, the order under contention is random.

To illustrate fair versus unfair locks: suppose the queue is full and many threads are performing put, so many threads are blocked waiting. When another thread executes a take, a waiting thread is woken. With a fair lock, the blocked threads are woken in the order they started waiting; with an unfair lock, a waiting thread is woken at random.

*** So when many threads put while the queue is full: with a fair lock, elements land in the array in the order the blocked threads were released, i.e. in order; with an unfair lock, the blocked threads are released in random order, so elements do not enter the array in the order of insertion requests. ***


§ New data

From the source code we can see that an add is really one of two situations:

  1. The new position is in the middle of the array: the element is added directly at putIndex (say position 5, short of the tail), and the next put position is simply computed as putIndex + 1 (6);

  2. The new position is at the tail of the array, in which case the next add must start over from the beginning:

    Note that after adding at the tail, the next addition starts again from the head of the array.


§ take data

Each take reads the element at takeIndex; after taking the data, takeIndex is incremented by 1 to give the index of the next take. There is one special case: if the position just taken was the tail of the array, the next take must start over from the beginning, that is, from index 0.
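The index bookkeeping above can be sketched as a single-threaded ring buffer. This is an illustration, not the JDK implementation: the lock and the notEmpty/notFull conditions are deliberately omitted, so offer/poll return false/null where the real queue would block.

```java
// Minimal, single-threaded sketch of ArrayBlockingQueue's index bookkeeping.
// Field names mirror the JDK's: items, takeIndex, putIndex, count.
class RingBufferSketch {
    final Object[] items;
    int takeIndex, putIndex, count;

    RingBufferSketch(int capacity) {
        items = new Object[capacity];
    }

    boolean offer(Object e) {
        if (count == items.length) return false;        // full: real queue awaits notFull
        items[putIndex] = e;
        if (++putIndex == items.length) putIndex = 0;   // reached the tail: wrap to 0
        count++;
        return true;
    }

    Object poll() {
        if (count == 0) return null;                    // empty: real queue awaits notEmpty
        Object e = items[takeIndex];
        items[takeIndex] = null;
        if (++takeIndex == items.length) takeIndex = 0; // reached the tail: wrap to 0
        count--;
        return e;
    }
}
```

With capacity 2, offering "a" and "b" fills the array; after polling "a", the next offer wraps putIndex back to slot 0 while FIFO order is preserved.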


§ delete data

There are two cases. In the first, takeIndex == removeIndex: the element to delete is exactly the head of the queue, so it is removed just like a take, with takeIndex advancing past it:


The second case splits into two:

  1. If removeIndex + 1 != putIndex, each element after the removed one is moved forward one position;


  2. If removeIndex + 1 == putIndex, the deleted element sits just before putIndex, so putIndex is simply moved back to the deleted position.


Origin blog.csdn.net/Newbie_J/article/details/104440212