Queues in Java

Non-blocking queue: ConcurrentLinkedQueue

ConcurrentLinkedQueue is an unbounded thread-safe queue based on linked nodes. It orders elements first-in, first-out: when we add an element, it is appended at the tail of the queue, and when we retrieve an element, we get the element at the head of the queue.
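A minimal sketch of this FIFO behavior (the class name `ClqDemo` is just for illustration); `offer()` appends at the tail without ever blocking, and `poll()` removes from the head, returning `null` when the queue is empty:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class ClqDemo {
    public static void main(String[] args) {
        Queue<String> queue = new ConcurrentLinkedQueue<>();
        // offer() adds to the tail; the queue is unbounded, so it never blocks
        queue.offer("first");
        queue.offer("second");
        // poll() removes and returns the head, or null if the queue is empty
        System.out.println(queue.poll()); // first
        System.out.println(queue.poll()); // second
        System.out.println(queue.poll()); // null
    }
}
```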

Blocking queue: BlockingQueue

1. ArrayBlockingQueue
      is a blocking queue implemented on top of an array. Internally, ArrayBlockingQueue maintains a fixed-length array to buffer the data objects in the queue, plus two integer variables that mark the positions of the head and the tail of the queue within that array. It is one of the most commonly used blocking queues.
  Producers putting data and consumers taking data share the same lock object in ArrayBlockingQueue, which means the two sides can never truly run in parallel; this is a notable difference from LinkedBlockingQueue. In principle, ArrayBlockingQueue could use separate locks and allow producer and consumer operations to run fully in parallel. The likely reason Doug Lea did not do this is that the insert and remove operations of ArrayBlockingQueue are already light enough that an independent locking mechanism would add complexity to the code without bringing any performance benefit. Another notable difference from LinkedBlockingQueue is that ArrayBlockingQueue does not create or destroy any extra object instances when inserting or removing elements, whereas LinkedBlockingQueue allocates an extra Node object per element; in a system that must process large volumes of data concurrently over a long period, this difference has a measurable impact on GC. Finally, when creating an ArrayBlockingQueue we can choose whether its internal lock is fair; an unfair lock is used by default.
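A short sketch of the fixed capacity and the fairness flag mentioned above (`AbqDemo` is an illustrative name); the two-argument constructor takes the capacity and, optionally, `true` to request a fair lock:

```java
import java.util.concurrent.ArrayBlockingQueue;

public class AbqDemo {
    public static void main(String[] args) throws InterruptedException {
        // Capacity 2; the second argument requests a fair lock (default is unfair)
        ArrayBlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2, true);
        queue.put(1); // put() would block if the queue were full
        queue.put(2);
        // offer() returns false instead of blocking when the queue is full
        System.out.println(queue.offer(3)); // false
        System.out.println(queue.take());   // 1 (take() blocks when empty)
    }
}
```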

2. LinkedBlockingQueue
      A blocking queue based on a linked list. Like ArrayBlockingQueue, it maintains an internal data buffer (here, a linked list). When a producer puts an element into the queue, the queue caches it internally and the producer returns immediately; only when the buffer reaches its maximum capacity (which LinkedBlockingQueue lets you specify via the constructor) does the producer block, until a consumer takes an element from the queue and the producer thread is woken up. The consumer side works on the same principle in reverse. LinkedBlockingQueue handles concurrent traffic efficiently because it uses separate locks for the producer and consumer sides, which means that under high concurrency producers and consumers can operate on the queue in parallel, improving the overall concurrent throughput of the queue.
As developers, we need to pay attention: if a LinkedBlockingQueue is constructed without specifying a capacity, it defaults to an effectively unbounded size (Integer.MAX_VALUE). If producers are faster than consumers, the system may exhaust its memory long before the queue ever fills up and blocks.
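The producer/consumer hand-off described above can be sketched as follows (class name `LbqDemo` is illustrative); note the explicit capacity in the constructor, per the warning about the unbounded default:

```java
import java.util.concurrent.LinkedBlockingQueue;

public class LbqDemo {
    public static void main(String[] args) throws InterruptedException {
        // Always give LinkedBlockingQueue an explicit capacity; the no-arg
        // constructor defaults to Integer.MAX_VALUE, which can exhaust memory.
        LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>(100);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    queue.put(i); // blocks only when the buffer is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        int sum = 0;
        for (int i = 0; i < 10; i++) {
            sum += queue.take(); // blocks only when the buffer is empty
        }
        producer.join();
        System.out.println(sum); // 45 (0 + 1 + ... + 9)
    }
}
```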

3. DelayQueue
      An element can only be taken from a DelayQueue once its specified delay has expired. DelayQueue has no size limit, so inserting data (the producer side) never blocks; only taking data (the consumer side) blocks.
  Usage scenarios: DelayQueue is used rarely, but quite ingeniously. A common example is using a DelayQueue to manage a queue of connections that have timed out without responding.
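Elements of a DelayQueue must implement the `Delayed` interface. A minimal sketch, where `DelayedTask` and its fields are hypothetical names for illustration; `poll()` returns null while every delay is still pending, and `take()` blocks until the shortest delay expires:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayQueueDemo {
    // A hypothetical element that becomes available after a fixed delay
    static class DelayedTask implements Delayed {
        final String name;
        final long readyAtNanos;

        DelayedTask(String name, long delayMillis) {
            this.name = name;
            this.readyAtNanos = System.nanoTime()
                    + TimeUnit.MILLISECONDS.toNanos(delayMillis);
        }

        @Override
        public long getDelay(TimeUnit unit) {
            // Remaining time until this element may be taken
            return unit.convert(readyAtNanos - System.nanoTime(),
                                TimeUnit.NANOSECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.NANOSECONDS),
                                other.getDelay(TimeUnit.NANOSECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DelayedTask> queue = new DelayQueue<>();
        queue.put(new DelayedTask("expires-later", 200)); // put() never blocks
        queue.put(new DelayedTask("expires-first", 50));
        // poll() returns null: no element's delay has expired yet
        System.out.println(queue.poll());
        // take() blocks until the shortest delay (50 ms) expires
        System.out.println(queue.take().name); // expires-first
    }
}
```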

4. PriorityBlockingQueue
      is a priority-based blocking queue (the priority ordering is determined by the Comparator passed to the constructor). Note, however, that PriorityBlockingQueue never blocks data producers; it only blocks consumers when there is no data to consume. Special attention is therefore needed when using it: producers must not generate data faster than consumers can process it, or over time all available heap memory will eventually be exhausted. Internally, PriorityBlockingQueue uses a ReentrantLock to synchronize its threads.
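A brief sketch of the comparator-driven ordering (class name `PbqDemo` and the length-based comparator are illustrative choices); `put()` never blocks because the queue grows without bound, and `take()` always returns the highest-priority element:

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public class PbqDemo {
    public static void main(String[] args) throws InterruptedException {
        // Ordering comes from the Comparator passed to the constructor
        // (here: longer strings first); without one, natural ordering is used
        PriorityBlockingQueue<String> queue = new PriorityBlockingQueue<>(
                11, Comparator.comparingInt(String::length).reversed());

        queue.put("bb");   // put() never blocks: the queue is unbounded
        queue.put("a");
        queue.put("cccc");

        // take() returns elements in priority order, not insertion order
        System.out.println(queue.take()); // cccc
        System.out.println(queue.take()); // bb
        System.out.println(queue.take()); // a
    }
}
```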

5. SynchronousQueue
      is an unbuffered waiting queue, akin to a direct transaction with no middleman, a bit like producers and consumers in a primitive society: the producer takes the product to market and sells it directly to the final consumer, and the consumer must go to market in person to find the producer of the goods they want; if either party fails to find a suitable counterpart, both wait at the market. Compared with a buffered BlockingQueue, there is one less middleman link (the buffer). With a middleman, the producer wholesales products to the dealer without caring which consumers the dealer eventually sells them to, and since the dealer can keep stock, the middleman model yields higher overall throughput than direct trading (goods can be bought and sold in batches); on the other hand, introducing a dealer adds an extra link between producer and consumer, which may reduce the response latency for any single item.
  A SynchronousQueue can be declared in two different ways, with different behavior: fair mode versus unfair mode.
  In fair mode, SynchronousQueue uses a fair lock together with a FIFO queue to hold waiting producers and consumers, achieving an overall fairness policy for the system.
  In unfair mode (the SynchronousQueue default), it uses an unfair lock together with a LIFO stack to manage waiting producers and consumers. In this mode, if producer and consumer processing speeds differ, starvation can easily occur: some producers' or consumers' data may never be processed.
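The direct hand-off can be sketched as follows (`SyncQueueDemo` is an illustrative name); the constructor's boolean selects fair mode, `put()` blocks until a consumer arrives, and the queue itself never holds an element:

```java
import java.util.concurrent.SynchronousQueue;

public class SyncQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // true selects fair mode (FIFO ordering of waiters); default is unfair
        SynchronousQueue<String> queue = new SynchronousQueue<>(true);

        Thread producer = new Thread(() -> {
            try {
                // put() blocks until a consumer is ready to take the element:
                // the queue has no buffer of its own
                queue.put("handoff");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        System.out.println(queue.take()); // handoff
        producer.join();
        // A SynchronousQueue never stores elements, so its size is always 0
        System.out.println(queue.size()); // 0
    }
}
```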

References:

http://www.cnblogs.com/jackyuj/archive/2010/11/24/1886553.html

http://ifeve.com/concurrentlinkedqueue/
