What are the queues in Java, and how do they differ?

In practice, "what are the queues in Java" is really a question about the queue implementations, such as ConcurrentLinkedQueue, LinkedBlockingQueue, ArrayBlockingQueue, and LinkedList.

This topic originated from a production incident I experienced. There is a service that collects the logs of business systems. The developers of this service used a LinkedList in the SDK embedded in the business systems and did no concurrency control, which caused the service to frequently fail to collect logs normally (logs were lost, and the log-reporting thread stopped running).

Looking at the source code of LinkedList's add() method shows why:


public boolean add(E e) {
    linkLast(e); // calls linkLast to append the element at the tail
    return true;
}

void linkLast(E e) {
    final Node<E> l = last;
    final Node<E> newNode = new Node<>(l, e, null);
    last = newNode;
    if (l == null)
        first = newNode;
    else
        l.next = newNode;
    size++; // under multi-threaded access without concurrency control,
            // size can end up far larger than the actual number of elements
    modCount++;
}

This shows what happens when a LinkedList is shared across threads without concurrency control: the value of size ends up much larger than the number of elements actually in the list. In one run with 100 threads each adding 1,000 elements, only 2,030 elements were actually linked in:

The size field of the List is: 88371
Fetching the 2031st element returns null

The solution is to either add your own locking, or use a queue whose insert operations are atomic, such as ConcurrentLinkedQueue or LinkedBlockingQueue.
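Before switching queue types, the simplest fix is locking around the existing LinkedList. Here is a minimal sketch of that approach (the class and method names are mine for illustration, not from the SDK in the incident):

```java
import java.util.LinkedList;
import java.util.List;

// Sketch: guard every LinkedList mutation with one shared lock,
// so linkLast() and size++ execute atomically per add.
public class LockedAddDemo {
    static int addConcurrently() throws InterruptedException {
        List<Integer> list = new LinkedList<>();
        Object lock = new Object();
        Thread[] threads = new Thread[100];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    synchronized (lock) { // serialize all adds
                        list.add(j);
                    }
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        return list.size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("size = " + addConcurrently()); // size = 100000
    }
}
```

With the lock in place, 100 threads adding 1,000 elements each always yields exactly 100,000 elements, at the cost of contention on a single monitor.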

We have already analyzed the source code of LinkedBlockingQueue's put() and related methods, which use a ReentrantLock to make element insertion atomic. Now take a brief look at ConcurrentLinkedQueue's add() and offer() methods, which use CAS to implement lock-free atomic insertion:

public boolean add(E e) {
    return offer(e);
}

public boolean offer(E e) {
    checkNotNull(e);
    final Node<E> newNode = new Node<E>(e);

    for (Node<E> t = tail, p = t;;) {
        Node<E> q = p.next;
        if (q == null) {
            // p is last node
            if (p.casNext(null, newNode)) {
                // Successful CAS is the linearization point
                // for e to become an element of this queue,
                // and for newNode to become "live".
                if (p != t) // hop two nodes at a time
                    casTail(t, newNode);  // Failure is OK.
                return true;
            }
            // Lost CAS race to another thread; re-read next
        }
        else if (p == q)
            // We have fallen off list.  If tail is unchanged, it
            // will also be off-list, in which case we need to
            // jump to head, from which all live nodes are always
            // reachable.  Else the new tail is a better bet.
            p = (t != (t = tail)) ? t : head;
        else
            // Check for tail updates after two hops.
            p = (p != t && t != (t = tail)) ? t : q;
    }
}
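To see the CAS retry pattern in isolation, here is a sketch using AtomicReference on a much simpler structure, a lock-free stack push. This is my illustration of the pattern only, not ConcurrentLinkedQueue's actual two-hop queue algorithm:

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrates the CAS retry loop that offer() relies on, using a
// Treiber-style lock-free stack push instead of a queue for brevity.
public class CasRetryDemo {
    static final class Node {
        final int item;
        final Node next;
        Node(int item, Node next) { this.item = item; this.next = next; }
    }

    static final AtomicReference<Node> top = new AtomicReference<>();

    static void push(int value) {
        for (;;) {
            Node current = top.get();
            Node newNode = new Node(value, current);
            // Publish only if no other thread changed top in the meantime;
            // on failure, loop and retry against the fresh top (lock-free).
            if (top.compareAndSet(current, newNode)) {
                return;
            }
        }
    }

    static int count() {
        int n = 0;
        for (Node p = top.get(); p != null; p = p.next) n++;
        return n;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[100];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) push(j);
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println("pushed = " + count()); // no lost updates: 100000
    }
}
```

Each successful compareAndSet links exactly one node, so unlike the bare LinkedList, no element can be lost to a race; a losing thread simply retries.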

Next, let's transform the demo above to use ConcurrentLinkedQueue. Just swap which of the following two lines is commented out, and you will find that no elements are lost:

public static LinkedList list = new LinkedList();
//public static ConcurrentLinkedQueue list = new ConcurrentLinkedQueue();
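For reference, the transformed demo looks roughly like this (a reconstruction, since the original demo code is not shown in full):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Reconstruction of the demo: 100 threads each add 1000 elements.
// With ConcurrentLinkedQueue, every add is atomic and none are lost.
public class ConcurrentAddDemo {
    static int addConcurrently() throws InterruptedException {
        Queue<Integer> list = new ConcurrentLinkedQueue<>();
        Thread[] threads = new Thread[100];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    list.add(j); // delegates to offer(), a CAS-based atomic insert
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // Note: ConcurrentLinkedQueue.size() traverses the whole list,
        // so this counts the nodes actually linked in.
        return list.size();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("size = " + addConcurrently()); // size = 100000
    }
}
```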

Looking at the poll() method of this high-performance queue, it is impressive: removing elements also uses CAS to achieve atomicity. So in practice, when we do not care much about the processing order of elements, the queue can have multiple consumers and no data will be lost:

public E poll() {
    restartFromHead:
    for (;;) {
        for (Node<E> h = head, p = h, q;;) {
            E item = p.item;

            if (item != null && p.casItem(item, null)) {
                // Successful CAS is the linearization point
                // for item to be removed from this queue.
                if (p != h) // hop two nodes at a time
                    updateHead(h, ((q = p.next) != null) ? q : p);
                return item;
            }
            else if ((q = p.next) == null) {
                updateHead(h, p);
                return null;
            }
            else if (p == q)
                continue restartFromHead;
            else
                p = q;
        }
    }
}
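A small sketch of that multi-consumer scenario (the names are mine): several threads poll() the same queue, and because each removal is a single CAS on the node's item, every element is delivered to exactly one consumer:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class MultiConsumerDemo {
    static int drain(int total, int consumers) throws InterruptedException {
        ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < total; i++) queue.offer(i);

        AtomicInteger consumed = new AtomicInteger();
        Thread[] workers = new Thread[consumers];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                // poll() removes via CAS, so no element is handed out twice
                while (queue.poll() != null) {
                    consumed.incrementAndGet();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        return consumed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("consumed = " + drain(10_000, 4)); // consumed = 10000
    }
}
```

Since the queue is pre-filled and no producers run concurrently, every consumer drains until it sees null, and the combined count always equals the number of elements offered.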

Comparing ConcurrentLinkedQueue and LinkedBlockingQueue:

  • LinkedBlockingQueue uses locking, while ConcurrentLinkedQueue uses a CAS algorithm (although acquiring LinkedBlockingQueue's underlying lock also involves CAS).
  • For removal, ConcurrentLinkedQueue does not support blocking retrieval, while LinkedBlockingQueue offers the blocking take() method. If you need ConcurrentLinkedQueue consumers to block, you have to implement that yourself.
  • For insertion performance, a literal reading of the code suggests ConcurrentLinkedQueue should be fastest, but it depends on the test scenario. In two simple demos I ran, the two performed similarly; in real use, though, especially on multi-CPU servers, the gap between locking and lock-free shows, and ConcurrentLinkedQueue can be much faster than LinkedBlockingQueue. My results:
    ConcurrentLinkedQueuePerform: with ConcurrentLinkedQueue, 100 threads looping added 33828193 elements
    LinkedBlockingQueuePerform: with LinkedBlockingQueue, 100 threads looping added 33827382 elements
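The blocking difference in the second point can be seen in a few lines (a sketch; real consumer code would typically use a timeout or back-off rather than an unbounded wait):

```java
import java.util.Queue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BlockingDifferenceDemo {
    // LinkedBlockingQueue: take() parks the consumer until data arrives.
    static int blockingTake() throws InterruptedException {
        BlockingQueue<Integer> bq = new LinkedBlockingQueue<>();
        Thread producer = new Thread(() -> {
            try {
                Thread.sleep(100);
                bq.put(42);
            } catch (InterruptedException ignored) { }
        });
        producer.start();
        int value = bq.take(); // blocks here until the producer calls put()
        producer.join();
        return value;
    }

    // ConcurrentLinkedQueue: no blocking API; poll() on an empty queue
    // returns null immediately, so any waiting must be implemented by hand.
    static Integer immediatePoll() {
        Queue<Integer> clq = new ConcurrentLinkedQueue<>();
        return clq.poll(); // null right away: nothing to consume
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("take -> " + blockingTake());  // take -> 42
        System.out.println("poll -> " + immediatePoll()); // poll -> null
    }
}
```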

 
