A simple look at blocking and non-blocking queues

What is a queue

A queue is a very commonly used data structure: elements enter at one end and leave at the other, first in, first out.


Blocking queue and non-blocking queue

Both non-blocking queues and blocking queues are, of course, thread-safe, so there is no need to worry about unpredictable behavior in a multi-threaded concurrent environment. Why, then, distinguish between non-blocking and blocking at all? Suppose we have a bounded queue.

Bounded non-blocking queue

If the queue is full, attempting to insert at the tail immediately reports failure, and the insertion is simply ignored.

If the queue is empty, attempting to remove or read the head element immediately reports failure, and the removal is simply ignored.

Bounded blocking queue

If the queue is full, a thread inserting at the tail blocks until an element in the queue is dequeued, after which the thread continues to execute. Alternatively, if the queue keeps having no free space, we can choose a timed-wait mechanism and fall back to behavior like the bounded non-blocking queue: if the insertion still has not succeeded after the fixed time we set, the insertion fails and the thread continues to execute.

If the queue is empty, a thread removing or reading the head element blocks until a new element is inserted into the queue. Likewise, if no new elements keep arriving, we can choose the timed-wait mechanism: if nothing has been removed successfully by the time the fixed deadline passes, null is returned; otherwise the retrieved element is returned, and the thread continues to execute.

Comparison summary

We find that the actual difference between blocking and non-blocking queues is the rejection strategy they choose when an operation exceeds the queue's capacity. A non-blocking queue rejects the operation outright and ignores it, while a blocking queue offers two schemes: waiting indefinitely (guaranteeing the operation eventually completes) and waiting with a timeout (guaranteeing the thread is not blocked for too long).

Common methods of queues

Several main methods in non-blocking queues

  1. add(E e): inserts element e at the tail of the queue; returns true on success, and throws an exception (IllegalStateException) if the queue is full;
  2. remove(): removes and returns the element at the head of the queue; throws an exception (NoSuchElementException) if the queue is empty;
  3. offer(E e): inserts element e at the tail of the queue; returns true on success, and false if the queue is full;
  4. poll(): removes and returns the head of the queue, or null if the queue is empty;
  5. peek(): returns, without removing, the head of the queue, or null if the queue is empty

Note: for non-blocking queues, the three methods offer, poll, and peek are generally recommended, and add and remove are not. The reason is that offer, poll, and peek report success or failure through their return values, while add and remove force you to handle exceptions to get the same information.
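
A minimal sketch of the contrast; ArrayBlockingQueue is used here only because it gives us a capacity-capped Queue, and the values are illustrative:

    import java.util.Queue;
    import java.util.concurrent.ArrayBlockingQueue;

    public class NonBlockingMethodsDemo {
        public static void main(String[] args) {
            // A bounded queue with capacity 1, accessed only through the
            // non-blocking Queue methods.
            Queue<String> queue = new ArrayBlockingQueue<>(1);

            System.out.println(queue.offer("a")); // true: inserted
            System.out.println(queue.offer("b")); // false: full, no exception

            try {
                queue.add("c"); // full: throws IllegalStateException
            } catch (IllegalStateException e) {
                System.out.println("add failed: " + e);
            }

            System.out.println(queue.peek()); // "a": reads the head, keeps it
            System.out.println(queue.poll()); // "a": removes and returns the head
            System.out.println(queue.poll()); // null: empty, no exception
        }
    }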

Several main methods in the blocking queue

  1. put(E e): stores element e at the tail of the queue, waiting if the queue is full;
  2. take(): takes the element at the head of the queue, waiting if the queue is empty;
  3. offer(E e, long timeout, TimeUnit unit): stores element e at the tail of the queue; if the queue is full, it waits up to the given time, returning false if the insertion has still not succeeded by then, and true otherwise;
  4. poll(long timeout, TimeUnit unit): takes the element at the head of the queue; if the queue is empty, it waits up to the given time, returning null if nothing could be taken by then, and the taken element otherwise;
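
A minimal sketch of these four methods on a capacity-1 queue (the timeout values are illustrative):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class BlockingMethodsDemo {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);

            queue.put("a"); // succeeds immediately: there is room

            // Timed insert: the queue is full, so this waits up to 500 ms,
            // then gives up and returns false instead of blocking forever.
            boolean inserted = queue.offer("b", 500, TimeUnit.MILLISECONDS);
            System.out.println("inserted: " + inserted); // false

            System.out.println(queue.take()); // "a": would block if empty

            // Timed removal: the queue is now empty, so this waits up to
            // 500 ms and then returns null.
            String head = queue.poll(500, TimeUnit.MILLISECONDS);
            System.out.println("head: " + head); // null
        }
    }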

Important subclasses of queues

Inheritance diagram

[Inheritance diagram of the Java queue classes]

Introduction to blocking queue subclasses

ArrayBlockingQueue

The bounded blocking queue backed by an array. By default it does not guarantee that threads access the queue fairly (that is, in the order in which they blocked): when the queue becomes available, all blocked threads compete for access. You can, however, create a fair blocking queue with the constructor ArrayBlockingQueue<String> blockingQueue = new ArrayBlockingQueue<>(10, true). (This fairness is achieved by constructing the internal ReentrantLock as fair, so the thread that has waited longest operates first.) Blocking itself is implemented with ReentrantLock Conditions.
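
To make the ReentrantLock-plus-Condition mechanism concrete, here is a heavily simplified sketch of the idea; it is illustrative only, not the JDK source (the real ArrayBlockingQueue also handles interruption, timed waits, and so on):

    import java.util.concurrent.locks.Condition;
    import java.util.concurrent.locks.ReentrantLock;

    // Sketch of the ArrayBlockingQueue idea: one lock plus two conditions.
    public class TinyArrayQueue<E> {
        private final Object[] items;
        private int putIndex, takeIndex, count;
        private final ReentrantLock lock;
        private final Condition notFull;
        private final Condition notEmpty;

        public TinyArrayQueue(int capacity, boolean fair) {
            items = new Object[capacity];
            lock = new ReentrantLock(fair); // fair = true: FIFO thread access
            notFull = lock.newCondition();
            notEmpty = lock.newCondition();
        }

        public void put(E e) throws InterruptedException {
            lock.lock();
            try {
                while (count == items.length)
                    notFull.await();               // block while full
                items[putIndex] = e;
                putIndex = (putIndex + 1) % items.length;
                count++;
                notEmpty.signal();                 // wake one waiting consumer
            } finally {
                lock.unlock();
            }
        }

        @SuppressWarnings("unchecked")
        public E take() throws InterruptedException {
            lock.lock();
            try {
                while (count == 0)
                    notEmpty.await();              // block while empty
                E e = (E) items[takeIndex];
                items[takeIndex] = null;
                takeIndex = (takeIndex + 1) % items.length;
                count--;
                notFull.signal();                  // wake one waiting producer
                return e;
            } finally {
                lock.unlock();
            }
        }
    }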

LinkedBlockingQueue

An optionally bounded blocking queue based on a linked list. Its default and maximum capacity is Integer.MAX_VALUE, so by default it is effectively unbounded. The queue orders elements first-in first-out. Its implementation principle is basically the same as ArrayBlockingQueue's, and it likewise uses ReentrantLock to control concurrency, so what exactly is the difference between ArrayBlockingQueue and LinkedBlockingQueue?

The difference between ArrayBlockingQueue and LinkedBlockingQueue

The lock implementations differ (which noticeably improves efficiency in a high-concurrency environment)

ArrayBlockingQueue does not separate its locks: production and consumption share one and the same lock;

LinkedBlockingQueue separates its locks: production uses putLock and consumption uses takeLock.

The work done per production or consumption differs (each insertion and removal carries a small extra cost)

ArrayBlockingQueue inserts or removes the element itself directly in its array during production and consumption;

LinkedBlockingQueue must wrap each element in a Node before inserting or removing it, which affects performance

The queue size is initialized differently

ArrayBlockingQueue requires the queue capacity to be specified;

LinkedBlockingQueue lets the capacity be omitted, in which case it defaults to Integer.MAX_VALUE
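
The third difference shows up directly in the constructors; a minimal sketch (the variable names are illustrative):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class ConstructionDemo {
        public static void main(String[] args) {
            // ArrayBlockingQueue: the capacity is mandatory.
            BlockingQueue<Integer> array = new ArrayBlockingQueue<>(100);

            // LinkedBlockingQueue: the capacity is optional; omitting it
            // means Integer.MAX_VALUE, i.e. effectively unbounded.
            BlockingQueue<Integer> bounded = new LinkedBlockingQueue<>(100);
            BlockingQueue<Integer> unbounded = new LinkedBlockingQueue<>();
        }
    }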

Summary

As you can see, because a blocking queue by its nature always takes the head element, the array's excellent traversal and random-access characteristics never come into play. In a low-concurrency environment, ArrayBlockingQueue is more efficient than LinkedBlockingQueue, since it inserts and removes elements directly while LinkedBlockingQueue must first wrap each element in a Node. As the amount of concurrency grows, however, LinkedBlockingQueue is bound to pull far ahead of ArrayBlockingQueue thanks to its lock-separation mechanism.

PriorityBlockingQueue

PriorityBlockingQueue is an unbounded queue: it has no capacity restriction, and you can keep adding elements as long as memory allows. It is also a priority queue: ordering is determined either by a Comparator passed to the constructor or, failing that, by the elements themselves, which must then implement the Comparable interface. Note that the order of elements with equal priority is not guaranteed.
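
A minimal sketch using natural ordering (the values are illustrative):

    import java.util.concurrent.PriorityBlockingQueue;

    public class PriorityDemo {
        public static void main(String[] args) throws InterruptedException {
            // Integers are Comparable, so natural ordering applies:
            // the smallest element is always at the head.
            PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<>();
            queue.put(3); // put never blocks: the queue is unbounded
            queue.put(1);
            queue.put(2);

            System.out.println(queue.take()); // 1
            System.out.println(queue.take()); // 2
            System.out.println(queue.take()); // 3
        }
    }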

DelayQueue

DelayQueue is implemented on top of PriorityQueue, so its underlying storage is likewise an array-based heap. It is an unbounded blocking queue of Delayed elements: an element can be taken only once its delay has expired. The head of the queue is the Delayed element whose delay expired longest ago. If no delay has expired, the queue has no head and poll returns null. Expiration occurs when an element's getDelay(TimeUnit.NANOSECONDS) method returns a value less than or equal to zero, at which point poll can remove it. The queue does not allow null elements.

You can think of DelayQueue as a prison: every element that enters must serve out its sentence before it can be released.

DelayQueue is very useful. It fits application scenarios such as the following.

Cache system design: use a DelayQueue to hold cached elements together with their time-to-live, and let one thread poll the DelayQueue in a loop; whenever an element can be taken from the DelayQueue, its cache entry has expired. Other examples include order expiry, time-limited payment, and so on.
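
A minimal sketch of such a cache-expiry element. The ExpiringKey class and its field names are illustrative, but getDelay and compareTo are the two methods the Delayed interface actually requires:

    import java.util.concurrent.DelayQueue;
    import java.util.concurrent.Delayed;
    import java.util.concurrent.TimeUnit;

    public class DelayDemo {
        // Illustrative cache-entry element with a time-to-live.
        static class ExpiringKey implements Delayed {
            final String key;
            final long expiresAtNanos;

            ExpiringKey(String key, long ttlMillis) {
                this.key = key;
                this.expiresAtNanos = System.nanoTime()
                        + TimeUnit.MILLISECONDS.toNanos(ttlMillis);
            }

            @Override
            public long getDelay(TimeUnit unit) {
                // <= 0 means the delay has expired: the element can be taken.
                return unit.convert(expiresAtNanos - System.nanoTime(),
                        TimeUnit.NANOSECONDS);
            }

            @Override
            public int compareTo(Delayed other) {
                return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                        other.getDelay(TimeUnit.NANOSECONDS));
            }
        }

        public static void main(String[] args) throws InterruptedException {
            DelayQueue<ExpiringKey> queue = new DelayQueue<>();
            queue.put(new ExpiringKey("session-42", 200)); // expires in 200 ms

            System.out.println(queue.poll());   // null: delay not yet expired
            ExpiringKey expired = queue.take(); // blocks ~200 ms, then returns
            System.out.println(expired.key + " expired");
        }
    }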

SynchronousQueue

A queue with no capacity: it stores no data at all. Every put must be matched by a take, or the caller blocks. It uses no locks and is implemented with CAS, so its throughput is extremely high. Functionally it can be replaced by an ArrayBlockingQueue of capacity 1, but SynchronousQueue is more lightweight: it has no internal capacity whatsoever, and we can use it to safely hand single elements between threads. Its functionality is therefore narrow, and its advantage is exactly that light weight.

The length of a SynchronousQueue is always 0. At first I thought it was not very useful, but I later found it is one of the most useful queue implementations in the entire Java Collections Framework, especially for the use case of passing an element between two threads.
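
A minimal sketch of that hand-off (the thread structure and values are illustrative):

    import java.util.concurrent.SynchronousQueue;

    public class HandoffDemo {
        public static void main(String[] args) throws InterruptedException {
            SynchronousQueue<String> handoff = new SynchronousQueue<>();

            Thread producer = new Thread(() -> {
                try {
                    // put blocks until a consumer is ready to take.
                    handoff.put("hello");
                    System.out.println("handed off");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            producer.start();

            Thread.sleep(100);                  // producer is now blocked in put
            System.out.println(handoff.take()); // unblocks the producer: "hello"
        }
    }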

LinkedBlockingDeque

We find that this blocking queue is special: all the other blocking queues end in Queue, while this one ends in Deque.

LinkedBlockingDeque is a double-ended concurrent blocking queue implemented with a doubly linked list (it adds methods such as addFirst, addLast, offerFirst, offerLast, peekFirst, and peekLast).

This blocking deque supports both FIFO and FILO operation; that is, elements can be inserted and removed at both the head and the tail of the queue. It is also thread-safe: when multiple threads compete for the same resource, once one thread obtains it, the others must block and wait. In addition, LinkedBlockingDeque has an optional capacity (to prevent excessive expansion): you can specify the capacity when creating it, and if you do not, the default capacity is Integer.MAX_VALUE. Finally, the double-ended blocking queue lends itself to the "work stealing" pattern.
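
A minimal sketch of operating on both ends (the values are illustrative):

    import java.util.concurrent.LinkedBlockingDeque;

    public class DequeDemo {
        public static void main(String[] args) throws InterruptedException {
            LinkedBlockingDeque<String> deque = new LinkedBlockingDeque<>(10);

            deque.putFirst("b");
            deque.putFirst("a"); // deque is now: a, b
            deque.putLast("c");  // deque is now: a, b, c

            System.out.println(deque.takeFirst()); // "a" (FIFO from the head)
            System.out.println(deque.takeLast());  // "c" (FILO from the tail)
            System.out.println(deque.peekFirst()); // "b"
        }
    }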

Introduction to non-blocking queue subclasses

PriorityQueue

Back to the PriorityBlockingQueue we discussed earlier. PriorityBlockingQueue is a priority queue, not a first-in first-out queue: elements are removed in priority order, and the queue has no upper bound. (PriorityBlockingQueue is a repackaging of PriorityQueue, which is built on the heap data structure. Like ArrayList, PriorityQueue has no capacity limit, so a put on the priority blocking queue never blocks. Although the queue is logically unbounded, an add may still fail with an OutOfMemoryError once resources are exhausted.)

The non-blocking PriorityQueue was therefore strengthened into PriorityBlockingQueue: its retrieval operation take blocks when the queue is empty, and ReentrantLock is again used to control concurrency.
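
A minimal sketch of the plain PriorityQueue with a Comparator (the ordering rule is illustrative):

    import java.util.Comparator;
    import java.util.PriorityQueue;

    public class PriorityQueueDemo {
        public static void main(String[] args) {
            // Order by string length, longest first.
            PriorityQueue<String> queue = new PriorityQueue<>(
                    Comparator.comparingInt(String::length).reversed());
            queue.offer("bb");
            queue.offer("a");
            queue.offer("ccc");

            System.out.println(queue.poll()); // "ccc": highest priority first
            System.out.println(queue.poll()); // "bb"
            System.out.println(queue.poll()); // "a"
        }
    }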

ConcurrentLinkedQueue

ConcurrentLinkedQueue is an unbounded thread-safe queue based on a linked list. It orders nodes first-in first-out: when we add an element it goes to the tail of the queue, and when we fetch an element we get the head of the queue. Thread safety is guaranteed by CAS operations during insertion and deletion. Because CAS involves no locking, an offer, poll, or remove may be in progress while the size is being computed, so the element count can be inaccurate (the same weak consistency as ConcurrentHashMap); the size method is therefore of little use under concurrency.

From a usage perspective, then, we can treat it as an efficient, thread-safe LinkedList.
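
A minimal sketch (the values are illustrative):

    import java.util.concurrent.ConcurrentLinkedQueue;

    public class CLQDemo {
        public static void main(String[] args) {
            ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();
            queue.offer(1);
            queue.offer(2);

            System.out.println(queue.poll()); // 1 (FIFO), or null when empty
            // size() walks the list and is only weakly consistent under
            // concurrent updates; prefer isEmpty() where possible.
            System.out.println(queue.size()); // 1 here (single-threaded)
        }
    }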

LinkedTransferQueue (a powerful queue that is easily overlooked)

LinkedTransferQueue is an unbounded blocking TransferQueue built from a linked-list structure. Compared with other blocking queues, LinkedTransferQueue additionally provides the tryTransfer and transfer methods.

In terms of execution efficiency, LinkedTransferQueue uses a reservation-based mode. When a consumer thread fetches an element and the queue is not empty, it takes the data directly. If the queue is empty, the consumer enqueues a node whose element is null and then waits on that node. When a producer thread arrives later and finds a node with a null element, it does not enqueue its own node; instead it fills the waiting node with its element and wakes the thread waiting on that node, and the awakened consumer takes the element and returns from the method it called. We call this style of node operation "matching".

A producer calling LinkedTransferQueue's transfer method blocks until the element it adds is consumed by some consumer (not merely added to the queue); the newly added transfer method exists precisely to implement this constraint. As the name suggests, the call blocks while the element is transferred from one thread to another. It effectively realizes the hand-off of elements between threads (establishing a happens-before relationship in the sense of the Java memory model).

TransferQueue is more versatile and easier to use than SynchronousQueue, because you can decide whether to use the ordinary BlockingQueue methods (such as put) or to guarantee that a hand-off completes (the transfer method). When the queue already contains elements, calling transfer ensures that all elements ahead of the transferred element are processed first. It not only combines the capabilities of the other queue classes but also provides a more efficient implementation (the matching mode).
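
A minimal sketch contrasting transfer with a timed tryTransfer (the thread structure and values are illustrative):

    import java.util.concurrent.LinkedTransferQueue;
    import java.util.concurrent.TimeUnit;

    public class TransferDemo {
        public static void main(String[] args) throws InterruptedException {
            LinkedTransferQueue<String> queue = new LinkedTransferQueue<>();

            Thread consumer = new Thread(() -> {
                try {
                    System.out.println("got: " + queue.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            consumer.start();

            // transfer blocks until the consumer has actually taken the
            // element; put would return as soon as it was enqueued.
            queue.transfer("task");
            System.out.println("element was consumed");

            // tryTransfer with a timeout: false if no consumer shows up.
            boolean handed = queue.tryTransfer("again", 100, TimeUnit.MILLISECONDS);
            System.out.println("handed off: " + handed); // false
        }
    }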

 

Origin: blog.csdn.net/weixin_47184173/article/details/115267958