Concurrent containers

1 A summary of concurrent containers provided by the JDK

Most of the concurrent containers provided by the JDK live in the java.util.concurrent package.

  • ConcurrentHashMap: a thread-safe HashMap.

  • CopyOnWriteArrayList: a thread-safe List that performs very well in read-heavy, write-light scenarios, far better than Vector.

  • ConcurrentLinkedQueue: an efficient concurrent queue implemented with a linked list. It can be seen as a thread-safe LinkedList and is a non-blocking queue.

  • BlockingQueue: an interface whose JDK implementations are based internally on linked lists, arrays, and so on. It represents a blocking queue and is well suited as a channel for sharing data between threads.

  • ConcurrentSkipListMap: a skip-list implementation. This is a Map that uses a skip list as its data structure for fast lookup.

2 ConcurrentHashMap

I won’t go into details here because it is relatively common.
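
Still, a minimal usage sketch may help show why it is convenient: ConcurrentHashMap offers atomic compound operations such as merge() and putIfAbsent(), so simple counters need no external locking. The class name and values below are made up for illustration.

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentMap;

    public class ConcurrentHashMapDemo {
        public static void main(String[] args) {
            ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();
            counts.merge("requests", 1, Integer::sum); // 1 if the key is absent, otherwise old value + 1
            counts.merge("requests", 1, Integer::sum);
            counts.putIfAbsent("errors", 0);           // only written if the key is missing
            System.out.println(counts);                // e.g. {errors=0, requests=2} (iteration order not guaranteed)
        }
    }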

3 CopyOnWriteArrayList

3.1 Introduction to CopyOnWriteArrayList

public class CopyOnWriteArrayList<E>
extends Object
implements List<E>, RandomAccess, Cloneable, Serializable

In many application scenarios, read operations may greatly outnumber write operations. Since a read does not modify the original data at all, locking every read is a waste of resources. We should allow multiple threads to access the internal data of the List at the same time; after all, reading is safe.

This is very similar to the idea behind the ReentrantReadWriteLock read-write lock: read-read sharing, write-write mutual exclusion, read-write mutual exclusion, and write-read mutual exclusion. The CopyOnWriteArrayList provided by the JDK goes a step further than the read-write-lock idea. To maximize read performance, reads on a CopyOnWriteArrayList are completely lock-free, and even better: writes do not block reads. Only writes need to wait for each other. In this way, the performance of read operations is greatly improved. So how is this done?

3.2 How CopyOnWriteArrayList does it

All mutative operations on a CopyOnWriteArrayList (add, set, and so on) are implemented by making a fresh copy of the underlying array. When the List needs to be modified, the original contents are not touched; instead the original data is copied, the modification is written into the copy, and once the write completes the copy replaces the original array. This guarantees that write operations never affect read operations.

As the name suggests, CopyOnWriteArrayList is an ArrayList with CopyOnWrite semantics: the write is performed on newly allocated memory, and after the write completes, the reference that pointed to the original memory is switched to the new memory, so the original memory can be reclaimed.

3.3 Simple analysis of CopyOnWriteArrayList reading and writing source code

3.3.1 Implementation of CopyOnWriteArrayList read operation

The read operation has no synchronization control and no locking at all. The reason is that the internal array is never modified in place; it is only ever replaced by another array, so data safety is guaranteed.

    /** The array, accessed only via getArray/setArray. */
    private transient volatile Object[] array;
    public E get(int index) {
        return get(getArray(), index);
    }
    @SuppressWarnings("unchecked")
    private E get(Object[] a, int index) {
        return (E) a[index];
    }
    final Object[] getArray() {
        return array;
    }
3.3.2 Implementation of CopyOnWriteArrayList write operation

The write operation, the add() method, acquires a lock while adding to the collection, ensuring synchronization and avoiding the creation of multiple copies when several threads write at the same time.

    /**
     * Appends the specified element to the end of this list.
     *
     * @param e element to be appended to this list
     * @return {@code true} (as specified by {@link Collection#add})
     */
    public boolean add(E e) {
        final ReentrantLock lock = this.lock;
        lock.lock(); // acquire the lock
        try {
            Object[] elements = getArray();
            int len = elements.length;
            Object[] newElements = Arrays.copyOf(elements, len + 1); // copy the elements into a new, larger array
            newElements[len] = e;
            setArray(newElements);
            return true;
        } finally {
            lock.unlock(); // release the lock
        }
    }
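
One practical consequence of the copy-on-write design is that an iterator works on the array snapshot that existed when it was created; later writes are not seen and never cause a ConcurrentModificationException. A minimal sketch (class name and values are made up for illustration):

    import java.util.Iterator;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class CopyOnWriteSnapshotDemo {
        public static void main(String[] args) {
            CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
            list.add("a");
            list.add("b");

            Iterator<String> it = list.iterator(); // the iterator keeps the array that was current here
            list.add("c");                         // add() copies the array; the iterator is unaffected

            while (it.hasNext()) {
                System.out.print(it.next() + " "); // prints "a b", no "c", no ConcurrentModificationException
            }
            System.out.println();
            System.out.println(list);              // [a, b, c]
        }
    }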

4 ConcurrentLinkedQueue

The thread-safe Queues provided by Java can be divided into blocking queues and non-blocking queues. A typical example of a blocking queue is BlockingQueue, and a typical example of a non-blocking queue is ConcurrentLinkedQueue. In practice, choose a blocking or non-blocking queue according to your actual needs. Blocking queues can be implemented with locks, and non-blocking queues can be implemented with CAS operations.

As the name suggests, ConcurrentLinkedQueue uses a linked list as its data structure. ConcurrentLinkedQueue is probably the best-performing queue in a highly concurrent environment; its good performance comes from its sophisticated internal implementation.

The internal code of ConcurrentLinkedQueue will not be analyzed here; the key point is that ConcurrentLinkedQueue relies mainly on a CAS-based non-blocking algorithm to achieve thread safety.

ConcurrentLinkedQueue is suitable for scenarios where performance requirements are relatively high and multiple threads are reading and writing to the queue at the same time. That is, if the cost of locking the queue is high, it is suitable to use the lock-free ConcurrentLinkedQueue instead.
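
As a rough illustration, here is a minimal sketch (class name and values made up) of several threads using the lock-free offer()/poll() methods; note that poll() returns null rather than blocking when the queue is empty:

    import java.util.concurrent.ConcurrentLinkedQueue;

    public class ConcurrentLinkedQueueDemo {
        public static void main(String[] args) throws InterruptedException {
            ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();

            // Two producers enqueue concurrently; offer() is lock-free and never blocks.
            Thread p1 = new Thread(() -> { for (int i = 0; i < 1000; i++) queue.offer(i); });
            Thread p2 = new Thread(() -> { for (int i = 1000; i < 2000; i++) queue.offer(i); });
            p1.start(); p2.start();
            p1.join(); p2.join();

            // poll() returns null instead of blocking when the queue is empty.
            int count = 0;
            while (queue.poll() != null) count++;
            System.out.println("drained " + count + " elements"); // drained 2000 elements
        }
    }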

5 BlockingQueue

5.1 Brief introduction to BlockingQueue

Above we mentioned ConcurrentLinkedQueue as a high-performance non-blocking queue. What we discuss next is the blocking queue, BlockingQueue. BlockingQueue is widely used in producer-consumer problems because it provides blocking insert and remove methods: when the queue is full, the producer thread blocks until the queue is no longer full; when the queue is empty, the consumer thread blocks until the queue is no longer empty.

BlockingQueue is an interface that inherits from Queue, so its implementation classes can also be used as Queue implementations, and Queue in turn inherits from the Collection interface. BlockingQueue has a number of implementation classes.

The following mainly introduces three BlockingQueue implementation classes: ArrayBlockingQueue, LinkedBlockingQueue, and PriorityBlockingQueue.

5.2 ArrayBlockingQueue

ArrayBlockingQueue is a bounded queue implementation of the BlockingQueue interface, backed by an array. Once an ArrayBlockingQueue is created, its capacity cannot be changed. Its concurrency control relies on a reentrant lock: both insert and read operations must acquire the lock. When the queue is full, trying to put an element into it blocks; trying to take an element from an empty queue also blocks.

By default, ArrayBlockingQueue does not guarantee fairness of thread access to the queue. Fairness means strictly following the absolute order in which threads started waiting, that is, the thread that waited first gets to access the ArrayBlockingQueue first. Unfairness means access does not follow strict time order; it is possible that, when the ArrayBlockingQueue becomes available, a thread that has been blocked for a long time still cannot access it. Guaranteeing fairness usually reduces throughput. If you need a fair ArrayBlockingQueue, you can use the following code:

private static ArrayBlockingQueue<Integer> blockingQueue = new ArrayBlockingQueue<Integer>(10,true);
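
Below is a minimal producer-consumer sketch (class name and values made up for illustration) showing the blocking put()/take() behavior on a small, bounded ArrayBlockingQueue:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class ArrayBlockingQueueDemo {
        public static void main(String[] args) {
            BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // bounded, capacity 2

            Thread producer = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) {
                        queue.put(i);               // blocks while the queue is full
                        System.out.println("produced " + i);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) {
                        System.out.println("consumed " + queue.take()); // blocks while the queue is empty
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
        }
    }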

5.3 LinkedBlockingQueue

The blocking queue LinkedBlockingQueue is based on a singly linked list. It can be used as either an unbounded or a bounded queue, also satisfies FIFO ordering, and has higher throughput than ArrayBlockingQueue. To prevent a LinkedBlockingQueue's capacity from growing rapidly and consuming large amounts of memory, its size is usually specified when the object is created; if not specified, the capacity defaults to Integer.MAX_VALUE.

    /**
     * An unbounded queue, in a sense.
     * Creates a {@code LinkedBlockingQueue} with a capacity of
     * {@link Integer#MAX_VALUE}.
     */
    public LinkedBlockingQueue() {
        this(Integer.MAX_VALUE);
    }

    /**
     * A bounded queue.
     * Creates a {@code LinkedBlockingQueue} with the given (fixed) capacity.
     *
     * @param capacity the capacity of this queue
     * @throws IllegalArgumentException if {@code capacity} is not greater
     *         than zero
     */
    public LinkedBlockingQueue(int capacity) {
        if (capacity <= 0) throw new IllegalArgumentException();
        this.capacity = capacity;
        last = head = new Node<E>(null);
    }
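
A minimal sketch (class name and values made up for illustration) of the difference between the bounded and the "unbounded" constructor; with a bounded queue, a timed offer() gives up once the capacity is reached:

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class LinkedBlockingQueueDemo {
        public static void main(String[] args) throws InterruptedException {
            // Bounded: once full, put() blocks and a timed offer() eventually times out.
            LinkedBlockingQueue<String> bounded = new LinkedBlockingQueue<>(1);
            bounded.put("first");
            boolean accepted = bounded.offer("second", 100, TimeUnit.MILLISECONDS);
            System.out.println("second accepted: " + accepted); // false, the queue is full

            // "Unbounded" (capacity Integer.MAX_VALUE): offer() practically always succeeds.
            LinkedBlockingQueue<String> unbounded = new LinkedBlockingQueue<>();
            System.out.println(unbounded.offer("anything")); // true
        }
    }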

5.4 PriorityBlockingQueue

PriorityBlockingQueue is an unbounded blocking queue that supports priorities. By default, elements are sorted in their natural ordering; you can also customize the ordering by having the element class implement the compareTo() method, or by passing a Comparator to the constructor when the queue is created.

PriorityBlockingQueue uses ReentrantLock for concurrency control, and the queue is unbounded (ArrayBlockingQueue is a bounded queue, and LinkedBlockingQueue can also be given a maximum capacity by passing capacity to its constructor, but PriorityBlockingQueue only takes an initial queue size; when elements are inserted later, it automatically grows if there is not enough space).

Simply put, it is a thread-safe version of PriorityQueue. Null values cannot be inserted, and the objects inserted into the queue must be comparable (implement Comparable), otherwise a ClassCastException is thrown. Its insert operation, the put method, never blocks because the queue is unbounded (the take method does block when the queue is empty).
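
A minimal sketch (class name, comparator, and values made up for illustration) showing a custom Comparator and the non-blocking put():

    import java.util.Comparator;
    import java.util.concurrent.PriorityBlockingQueue;

    public class PriorityBlockingQueueDemo {
        public static void main(String[] args) throws InterruptedException {
            // Initial capacity 11, ordering by a Comparator: longest string first.
            Comparator<String> longestFirst = Comparator.comparingInt(String::length).reversed();
            PriorityBlockingQueue<String> queue = new PriorityBlockingQueue<>(11, longestFirst);

            queue.put("bb");   // put() never blocks: the queue is unbounded and grows as needed
            queue.put("a");
            queue.put("ccc");

            System.out.println(queue.take()); // ccc
            System.out.println(queue.take()); // bb
            System.out.println(queue.take()); // a
        }
    }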

6 ConcurrentSkipListMap

Another difference between a Map implemented with a skip list and a Map implemented with hashing is that hashing does not preserve element order, while all elements in a skip list are kept sorted. So traversing a skip list yields an ordered result. Therefore, if your application needs ordering, a skip list is your best choice. The JDK class that implements this data structure is ConcurrentSkipListMap.
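
A minimal sketch (class name and values made up for illustration) showing that traversal follows key order and that the sorted structure also supports navigation methods:

    import java.util.Map;
    import java.util.concurrent.ConcurrentSkipListMap;

    public class ConcurrentSkipListMapDemo {
        public static void main(String[] args) {
            ConcurrentSkipListMap<Integer, String> map = new ConcurrentSkipListMap<>();
            map.put(3, "three");
            map.put(1, "one");
            map.put(2, "two");

            // Traversal always follows key order, regardless of insertion order.
            for (Map.Entry<Integer, String> e : map.entrySet()) {
                System.out.println(e.getKey() + " -> " + e.getValue()); // keys 1, 2, 3 in order
            }

            // The sorted structure also gives navigation/range queries:
            System.out.println(map.firstKey());    // 1
            System.out.println(map.ceilingKey(2)); // 2
            System.out.println(map.headMap(3));    // {1=one, 2=two}
        }
    }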

Skip lists were already covered in the Redis chapter, so I won't go into detail here.

Origin blog.csdn.net/qq_52988841/article/details/132587210