[Logic of Java Programming] Concurrent Containers

Copy-on-write List and Set

CopyOnWriteArrayList and CopyOnWriteArraySet both use the Copy-On-Write (copy-on-write) strategy, which gives them their names.

CopyOnWriteArrayList

CopyOnWriteArrayList implements the List interface, and its usage is basically the same as other Lists.
CopyOnWriteArrayList features:
1. Thread-safe, can be accessed concurrently by multiple threads
2. The iterator does not support modification operations, but will not throw ConcurrentModificationException
3. Supports some compound operations in an atomic manner

There are several problems with synchronized-based containers: when iterating, the entire list object needs to be locked; compound operations are not safe; and there is the risk of pseudo-synchronization (synchronizing on the wrong object).
CopyOnWriteArrayList directly supports two atomic compound methods:

// Adds the element only if it is absent; returns true if it was added, false otherwise
public boolean addIfAbsent(E e)
// Adds, in bulk, the non-duplicate elements of c that are not already present; returns the number actually added
public int addAllAbsent(Collection<? extends E> c)
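A minimal sketch of how these two atomic methods behave (the class name is illustrative):

```java
import java.util.Arrays;
import java.util.concurrent.CopyOnWriteArrayList;

public class AddIfAbsentDemo {
    public static void main(String[] args) {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
        // addIfAbsent is an atomic "check then add": no separate contains()+add() race
        System.out.println(list.addIfAbsent("a")); // true: "a" was not present
        System.out.println(list.addIfAbsent("a")); // false: "a" is already present
        // addAllAbsent adds only the elements not already in the list
        int added = list.addAllAbsent(Arrays.asList("a", "b", "c"));
        System.out.println(added); // 2: only "b" and "c" were added
        System.out.println(list);  // [a, b, c]
    }
}
```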

CopyOnWriteArrayList also uses an array internally, but this array is updated atomically as a whole. Each modification operation creates a new array, copies the contents of the original array into it, makes the required changes on the new array, and then atomically sets the internal array reference.

All read operations first get the currently referenced array, and then directly access the array. In the process of reading, the internal array reference may have been modified, but it will not affect the reading operation, and it still accesses the original array content.

That is to say: the contents of the array are read-only, and writes are performed by creating a new array and then atomically modifying the array reference.


In CopyOnWriteArrayList, reads require no locks and can proceed in parallel; reads and writes can also proceed in parallel. However, multiple threads cannot write at the same time: each write operation must first acquire a lock. CopyOnWriteArrayList uses a ReentrantLock internally:

// Declared volatile to guarantee memory visibility
private transient volatile Object[] array;
final Object[] getArray() {
    return array;
}
final void setArray(Object[] a) {
    array = a;
}
final transient ReentrantLock lock = new ReentrantLock();
// Constructor
public CopyOnWriteArrayList() {
    setArray(new Object[0]);
}

add method

public boolean add(E e) {
    final ReentrantLock lock = this.lock;
    // acquire the lock
    lock.lock();
    try {
        // get the current array
        Object[] elements = getArray();
        int len = elements.length;
        // copy into a new array of length len + 1
        Object[] newElements = Arrays.copyOf(elements, len + 1);
        // add the element to the new array
        newElements[len] = e;
        // atomically switch the internal array reference
        setArray(newElements);
        return true;
    } finally {
        lock.unlock();
    }
}

indexOf method

public int indexOf(E e, int index) {
    // get the current array
    Object[] elements = getArray();
    return indexOf(e, elements, index, elements.length);
}
// All data is passed in as parameters and the array contents are never modified,
// so there is no concurrency problem
private static int indexOf(Object o, Object[] elements,
                       int index, int fence) {
    if (o == null) {
        for (int i = index; i < fence; i++)
            if (elements[i] == null)
                return i;
    } else {
        for (int i = index; i < fence; i++)
            if (o.equals(elements[i]))
                return i;
    }
    return -1;
}

The copy overhead makes modifications to CopyOnWriteArrayList expensive, so it is not suitable for large arrays that are modified frequently. It is designed to optimize reads: reads require no synchronization and are fast, but this comes at the cost of write performance.

Two ideas for ensuring thread safety have been introduced before: one is locking, using synchronized or ReentrantLock; the other is CAS in a loop. Copy-on-write embodies a third way of ensuring thread safety.
Both locks and looped CAS control access conflicts to the same resource, while copy-on-write avoids conflicts by duplicating the resource.

CopyOnWriteArraySet

CopyOnWriteArraySet implements the Set interface and does not contain duplicate elements. It is implemented internally via CopyOnWriteArrayList.

The add method is to call the addIfAbsent method of CopyOnWriteArrayList.
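A small sketch showing the deduplication behavior that addIfAbsent provides (the class name is illustrative):

```java
import java.util.concurrent.CopyOnWriteArraySet;

public class CowSetDemo {
    public static void main(String[] args) {
        CopyOnWriteArraySet<String> set = new CopyOnWriteArraySet<>();
        set.add("x");
        // the duplicate is rejected: add delegates to
        // CopyOnWriteArrayList.addIfAbsent internally
        set.add("x");
        System.out.println(set.size()); // 1
    }
}
```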

CopyOnWriteArrayList and CopyOnWriteArraySet are suitable for scenarios where there are far more reads than writes and the collection is not too large.

ConcurrentHashMap

ConcurrentHashMap is a concurrent version of HashMap with the following features:

  • Concurrency safety
  • Direct support for some atomic compound operations
  • Support high concurrency, read operations are completely parallel, and write operations support a certain degree of parallelism
  • Compared with the synchronized container Collections.synchronizedMap, iteration does not need to be locked and will not throw ConcurrentModificationException
  • weak consistency

The synchronization container uses synchronized, and all methods compete for the same lock;
ConcurrentHashMap uses segment lock technology to divide data into multiple segments, and each segment has an independent lock.
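The atomic compound operations mentioned above can be sketched with a thread-safe counter; methods such as putIfAbsent and merge perform the check-then-act step atomically, which a plain HashMap cannot do safely under concurrency:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ChmDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        // putIfAbsent: atomic "check then insert"
        counts.putIfAbsent("a", 0);
        // merge: atomic read-modify-write; safe even if many threads call it at once
        counts.merge("a", 1, Integer::sum);
        counts.merge("b", 1, Integer::sum);
        System.out.println(counts.get("a")); // 1
        System.out.println(counts.get("b")); // 1
    }
}
```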

Weak consistency: after an iterator of ConcurrentHashMap is created, it traverses the elements according to the hash table structure, but during the traversal the internal elements may change. If a change occurs in the part already traversed, the iterator will not reflect it; if it occurs in a part not yet traversed, the iterator will detect and reflect it.

Map and Set based on skip lists

The concurrent versions corresponding to TreeMap/TreeSet in Java concurrent packages are ConcurrentSkipListMap and ConcurrentSkipListSet.

TreeSet is implemented on top of TreeMap; similarly, ConcurrentSkipListSet is implemented on top of ConcurrentSkipListMap.

ConcurrentSkipListMap is implemented based on SkipList, a data structure called a skip list.

ConcurrentSkipListMap has the following characteristics:

  • No locks are used, all operations are non-blocking
  • Similar to ConcurrentHashMap, iterators do not throw exceptions and are weakly consistent
  • Similar to ConcurrentHashMap, it implements the ConcurrentMap interface and supports some atomic composite operations
  • Similar to TreeMap, sortable, defaults to the natural order of keys

The size method of ConcurrentSkipListMap is different from most container implementations: it is not a constant-time operation. It needs to traverse all elements, so its time complexity is O(N), and by the time the traversal completes the count may already have changed.
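A brief sketch of the TreeMap-like sorted behavior (the class name is illustrative):

```java
import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListDemo {
    public static void main(String[] args) {
        ConcurrentSkipListMap<Integer, String> map = new ConcurrentSkipListMap<>();
        map.put(3, "c");
        map.put(1, "a");
        map.put(2, "b");
        // keys come back in sorted order, like TreeMap
        System.out.println(map.keySet());   // [1, 2, 3]
        System.out.println(map.firstKey()); // 1
        // size() traverses the list: O(N), and only an estimate under concurrent updates
        System.out.println(map.size());     // 3
    }
}
```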

concurrent queue

  • Lock-free and non-blocking concurrent queues: ConcurrentLinkedQueue and ConcurrentLinkedDeque
  • Ordinary blocking queue: ArrayBlockingQueue based on array, LinkedBlockingQueue and LinkedBlockingDeque based on linked list
  • Priority blocking queue: PriorityBlockingQueue
  • Delay blocking queue: DelayQueue
  • Other blocking queues: SynchronousQueue and LinkedTransferQueue

Lock-free and non-blocking means that these queues do not use locks and all operations always execute immediately; they achieve concurrency safety mainly through CAS in a loop.
Blocking queues use locks and conditions: many operations need to acquire a lock or wait for a condition, and will wait (block) until the lock is acquired or the condition is satisfied.

lock-free non-blocking concurrent queue

ConcurrentLinkedQueue and ConcurrentLinkedDeque are suitable for multiple threads sharing a queue concurrently. They are implemented based on linked lists and have no size limit. Similar to ConcurrentSkipListMap, their size method is not a constant-time operation.

ConcurrentLinkedQueue implements the Queue interface, representing a FIFO queue: elements are enqueued at the tail and dequeued from the head, and the internal structure is a singly linked list.
ConcurrentLinkedDeque implements the Deque interface, representing a double-ended queue that can enqueue and dequeue at both ends; the internal structure is a doubly linked list.

The most basic principle of both classes is cyclic CAS
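A minimal sketch of the non-blocking behavior (the class name is illustrative):

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class ClqDemo {
    public static void main(String[] args) {
        ConcurrentLinkedQueue<Integer> q = new ConcurrentLinkedQueue<>();
        q.offer(1); // enqueue at the tail; never blocks
        q.offer(2);
        System.out.println(q.poll()); // 1: dequeue from the head (FIFO)
        System.out.println(q.poll()); // 2
        // an empty queue returns null immediately instead of blocking
        System.out.println(q.poll()); // null
    }
}
```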

normal blocking queue

Blocking queues all implement the interface BlockingQueue and may wait when enqueuing or dequeuing. The main methods are:

// Enqueue; if the queue is full, wait until space becomes available
void put(E e) throws InterruptedException; 
// Dequeue; if the queue is empty, wait until it is non-empty, then return the head element
E take() throws InterruptedException; 
// Enqueue; if the queue is full, wait at most the given time; return false if still full after the timeout
boolean offer(E e, long timeout, TimeUnit unit) throws InterruptedException; 
// Dequeue; if the queue is empty, wait at most the given time; return null if still empty after the timeout
E poll(long timeout, TimeUnit unit) throws InterruptedException;  

Both ArrayBlockingQueue and LinkedBlockingQueue implement the Queue interface; LinkedBlockingDeque implements the Deque interface.
ArrayBlockingQueue is implemented based on a circular array and is bounded: its size must be specified at creation and never changes afterward. (ArrayDeque is also based on a circular array, but it is unbounded and expands automatically.)
LinkedBlockingQueue is implemented based on a singly linked list. A maximum length may optionally be specified at creation; by default it is unbounded. LinkedBlockingDeque is the same as LinkedBlockingQueue in that the maximum length is optional at creation and unbounded by default, but it is implemented based on a doubly linked list.

Internally they are implemented using explicit lock ReentrantLock and explicit condition Condition
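The put/take protocol can be sketched with a small producer/consumer example (the class name is illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) throws InterruptedException {
        // bounded queue with capacity 2: put() blocks when full, take() blocks when empty
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put(i); // blocks while the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        for (int i = 0; i < 5; i++) {
            System.out.println(queue.take()); // blocks while the queue is empty
        }
        producer.join(); // prints 0..4 in FIFO order
    }
}
```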

priority blocking queue

Ordinary blocking queues are first-in, first-out, while priority queues dequeue by priority: higher-priority elements exit first.

PriorityBlockingQueue is a concurrent version of PriorityQueue
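A brief sketch of priority-ordered dequeuing (the class name is illustrative):

```java
import java.util.concurrent.PriorityBlockingQueue;

public class PbqDemo {
    public static void main(String[] args) throws InterruptedException {
        PriorityBlockingQueue<Integer> pq = new PriorityBlockingQueue<>();
        pq.put(3);
        pq.put(1);
        pq.put(2);
        // elements dequeue by priority (natural order here), not insertion order
        System.out.println(pq.take()); // 1
        System.out.println(pq.take()); // 2
        System.out.println(pq.take()); // 3
    }
}
```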

Delayed blocking queue

DelayQueue is a special kind of priority queue; it is unbounded and requires each element to implement the Delayed interface:

public interface Delayed extends Comparable<Delayed> {

    /**
     * Returns the remaining delay associated with this object, in the
     * given time unit.
     *
     * @param unit the time unit
     * @return the remaining delay; zero or negative values indicate
     * that the delay has already elapsed
     */
    long getDelay(TimeUnit unit);
}

getDelay returns the remaining delay in the given time unit; a value less than or equal to 0 means the delay has already expired.

DelayQueue can be used to implement timed tasks, which dequeue elements according to the delay time of elements. Elements can only be removed from the queue after their delay has expired.
DelayQueue is implemented based on PriorityQueue and uses a ReentrantLock to protect all access.
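A sketch of a simple timed-task element; the Task class and its delay bookkeeping are illustrative, the only requirement from the API is implementing Delayed:

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayQueueDemo {
    // hypothetical task type: the delay is measured against an absolute trigger time
    static class Task implements Delayed {
        final String name;
        final long triggerTime; // absolute time in milliseconds

        Task(String name, long delayMillis) {
            this.name = name;
            this.triggerTime = System.currentTimeMillis() + delayMillis;
        }

        @Override
        public long getDelay(TimeUnit unit) {
            // remaining delay; <= 0 means the task is ready
            return unit.convert(triggerTime - System.currentTimeMillis(),
                                TimeUnit.MILLISECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                                other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<Task> queue = new DelayQueue<>();
        queue.put(new Task("later", 100));
        queue.put(new Task("sooner", 20));
        // take() blocks until the head element's delay has expired
        System.out.println(queue.take().name); // "sooner"
        System.out.println(queue.take().name); // "later"
    }
}
```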

other blocking queues

SynchronousQueue differs from ordinary queues in that it has no space to store elements.
Its enqueue operation must wait for another thread's dequeue operation: if no other thread is waiting to receive an element, put blocks. The same is true in reverse for take.
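A minimal sketch of the hand-off behavior (the class name is illustrative):

```java
import java.util.concurrent.SynchronousQueue;

public class SyncQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> queue = new SynchronousQueue<>();
        Thread consumer = new Thread(() -> {
            try {
                // take() waits until some thread hands an element over
                System.out.println(queue.take()); // prints "hello"
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        // put() blocks until the consumer's take() receives the element
        queue.put("hello");
        consumer.join();
    }
}
```

Note that the non-blocking poll() returns null immediately when no producer is waiting, since the queue itself never stores anything.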

LinkedTransferQueue implements the TransferQueue interface, a sub-interface of BlockingQueue that adds extra functionality: a producer putting an element into the queue can wait until a consumer has received it before returning, which suits message-passing applications.
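A minimal sketch of the transfer semantics (the class name is illustrative):

```java
import java.util.concurrent.LinkedTransferQueue;

public class TransferDemo {
    public static void main(String[] args) throws InterruptedException {
        LinkedTransferQueue<String> queue = new LinkedTransferQueue<>();
        Thread consumer = new Thread(() -> {
            try {
                System.out.println(queue.take()); // prints "message"
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        // transfer() blocks until a consumer has received the element,
        // unlike put(), which returns immediately
        queue.transfer("message");
        consumer.join();
    }
}
```

tryTransfer offers the non-blocking variant: it hands the element to a waiting consumer if one exists, and otherwise returns false without enqueuing.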
