[Java concurrency tools: collaboration] Concurrent containers


Containers in Java fall into four main categories: List, Map, Set, and Queue, but not all containers are thread-safe; ArrayList, for example, is not.

1. How do you turn ArrayList into a thread-safe container?

The idea is simple: encapsulate the non-thread-safe container inside an object and control every path that accesses it.

Let's take ArrayList as an example to see how to make it thread-safe.

import java.util.ArrayList;
import java.util.List;

class SafeArrayList<T> {
  // Encapsulate the ArrayList
  private final List<T> c = new ArrayList<>();

  // Control the access paths
  synchronized T get(int idx) {
    return c.get(idx);
  }

  synchronized void add(int idx, T t) {
    c.add(idx, t);
  }

  synchronized boolean addIfNotExist(T t) {
    if (!c.contains(t)) {
      c.add(t);
      return true;
    }
    return false;
  }
}

The ArrayList above is encapsulated, and each access method is guarded with synchronized, so only one thread at a time can touch the ArrayList, which makes it thread-safe. Couldn't every non-thread-safe container be made thread-safe this way?

2. Synchronized containers

The Java SDK anticipates exactly this need and provides wrapper methods in the Collections class:

List list = Collections.synchronizedList(new ArrayList());
Set set = Collections.synchronizedSet(new HashSet());
Map map = Collections.synchronizedMap(new HashMap());

Note: calling a single method on a thread-safe container is fine, but compound operations that combine several calls still need attention to race conditions,
for example when traversing the container with an iterator.

List list = Collections.synchronizedList(new ArrayList());
Iterator i = list.iterator(); 
while (i.hasNext())
  foo(i.next());

Although the container is thread-safe and each individual method call is safe, the combination is not: if thread T1 calls i.hasNext() and thread T2 then calls i.next() (or modifies the list) before T1 gets to its own i.next(), T1's call can fail, for example with a ConcurrentModificationException.
The correct approach:

List list = Collections.synchronizedList(new ArrayList());
synchronized (list) {  // With the lock held, only one thread at a time can run this compound operation
  Iterator i = list.iterator(); 
  while (i.hasNext())
    foo(i.next());
}    

Java's synchronized containers also include Vector, Stack, and Hashtable, which are likewise implemented with synchronized. Traversing these three containers must also be done under a lock to guarantee mutual exclusion.
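
For example, a minimal sketch of safely traversing a Hashtable (the printAll method and variable names are just illustrative):

import java.util.Hashtable;
import java.util.Map;

class HashtableTraversal {
  // Hashtable's own methods are synchronized, but traversal is a compound
  // operation, so the table's monitor must be held for the whole loop.
  static void printAll(Hashtable<String, Integer> table) {
    synchronized (table) {
      for (Map.Entry<String, Integer> e : table.entrySet()) {
        System.out.println(e.getKey() + " = " + e.getValue());
      }
    }
  }
}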

3. Concurrent containers

Before Java 1.5, "thread-safe container" meant these synchronized containers. Because they rely on synchronized for mutual exclusion, they serialize almost every operation and performance suffers.
Java 1.5 introduced higher-performance alternatives, generally called concurrent containers.

Concurrent containers still fall into the same four categories: List, Map, Set, and Queue.

3.1 List

The only List implementation is CopyOnWriteArrayList. Copy-on-write, as the name implies, makes a new copy of the shared array whenever it is written; the benefit is that read operations are completely lock-free.

  • The principle of CopyOnWriteArrayList: it maintains an array internally, the member variable array points to that array, and all read operations run against it. If a write happens while readers are traversing, say adding element 9, the original array is copied, the write is applied to the copy, and when the write finishes, array is repointed to the new copy.

  • Application scenario: CopyOnWriteArrayList only suits workloads with very few writes that can tolerate brief inconsistency between reads and writes. In the operation above, for example, a newly written element is not visible to a traversal that is already in progress.

  • Note: the CopyOnWriteArrayList iterator is read-only; it does not support add, remove, or set. The iterator traverses only a snapshot, and modifying a snapshot would be meaningless. A short sketch of this behavior follows.
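
Here is a minimal sketch of that snapshot behavior (the element values are arbitrary):

import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

class CopyOnWriteDemo {
  public static void main(String[] args) {
    CopyOnWriteArrayList<Integer> list = new CopyOnWriteArrayList<>();
    list.add(1);
    list.add(2);

    // The iterator captures a snapshot of the internal array.
    Iterator<Integer> it = list.iterator();

    // A write after the snapshot copies the array; the iterator cannot see it.
    list.add(9);

    while (it.hasNext()) {
      System.out.println(it.next()); // prints 1 and 2, not 9
    }

    // it.remove() would throw UnsupportedOperationException: the snapshot is read-only.
  }
}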

3.2 Map

The two implementations of the Map interface are ConcurrentHashMap and ConcurrentSkipListMap.

  • Difference: The keys of ConcurrentHashMap are unordered. The keys of ConcurrentSkipListMap are ordered.
  • Key and value requirements: unlike HashMap, neither ConcurrentHashMap nor ConcurrentSkipListMap accepts null keys or null values.
  • Performance: the SkipList inside ConcurrentSkipListMap is a data structure in its own right, the skip list. Insertion, deletion, and lookup on a skip list all take O(log n) time on average, which in theory is independent of the number of concurrent threads, so under very high concurrency, if ConcurrentHashMap's performance does not satisfy you, try ConcurrentSkipListMap. A short sketch contrasting the two maps follows this list.
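
A minimal sketch contrasting ordering and null handling in the two maps (the sample keys are arbitrary):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;

class ConcurrentMapDemo {
  public static void main(String[] args) {
    Map<String, Integer> hashMap = new ConcurrentHashMap<>();
    Map<String, Integer> skipListMap = new ConcurrentSkipListMap<>();

    for (String k : new String[]{"banana", "apple", "cherry"}) {
      hashMap.put(k, k.length());
      skipListMap.put(k, k.length());
    }

    System.out.println(hashMap.keySet());     // iteration order is unspecified
    System.out.println(skipListMap.keySet()); // sorted: [apple, banana, cherry]

    // Neither map accepts null keys or null values;
    // hashMap.put(null, 1) would throw NullPointerException.
  }
}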

3.3 Set

The two implementations of the Set interface are CopyOnWriteArraySet and ConcurrentSkipListSet.

  • For usage scenarios, refer to CopyOnWriteArrayList (many reads, few writes) and ConcurrentSkipListMap (ordered keys) described above; the principles are the same, so they are not repeated here. A short sketch follows.
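
A minimal sketch mirroring the List and Map examples above (the element values are arbitrary):

import java.util.concurrent.ConcurrentSkipListSet;
import java.util.concurrent.CopyOnWriteArraySet;

class ConcurrentSetDemo {
  public static void main(String[] args) {
    // CopyOnWriteArraySet: read-mostly, copies on every write, no duplicates.
    CopyOnWriteArraySet<String> cowSet = new CopyOnWriteArraySet<>();
    cowSet.add("a");
    cowSet.add("a");            // ignored, the element is already present
    System.out.println(cowSet); // [a]

    // ConcurrentSkipListSet: keeps elements sorted.
    ConcurrentSkipListSet<Integer> skipSet = new ConcurrentSkipListSet<>();
    skipSet.add(3);
    skipSet.add(1);
    skipSet.add(2);
    System.out.println(skipSet); // [1, 2, 3]
  }
}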

3.4 Queue

Queues in the Java concurrency package (java.util.concurrent) can be classified along two dimensions:

  • Blocking vs. non-blocking: blocking means that when the queue is full, enqueue operations block, and when the queue is empty, dequeue operations block. In the Java concurrency package, the word Blocking in a class name marks a blocking queue.
  • Single-ended vs. double-ended: a single-ended queue can only enqueue at the tail and dequeue at the head, while a double-ended queue can enqueue and dequeue at both ends. Single-ended queues are marked with Queue in the name, double-ended queues with Deque.

Combining the two dimensions gives four categories; a short producer-consumer sketch follows the list:

  • Single-ended blocking queues: the implementations are ArrayBlockingQueue, LinkedBlockingQueue, SynchronousQueue, LinkedTransferQueue, PriorityBlockingQueue, and DelayQueue.
  1. ArrayBlockingQueue: holds an array internally as the queue.
  2. LinkedBlockingQueue: holds a linked list internally as the queue.
  3. SynchronousQueue: holds no queue at all; in this producer-consumer mode, a producer thread's enqueue operation must wait for a consumer thread's dequeue operation.
  4. LinkedTransferQueue: combines the functionality of LinkedBlockingQueue and SynchronousQueue, with better performance than LinkedBlockingQueue.
  5. PriorityBlockingQueue: supports dequeuing by priority.
  6. DelayQueue: supports delayed dequeuing.
  • Double-ended blocking queue: its implementation is LinkedBlockingDeque.
  • Single-ended non-blocking queue: its implementation is ConcurrentLinkedQueue.
  • Double-ended non-blocking queue: its implementation is ConcurrentLinkedDeque.
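
A minimal producer-consumer sketch using a single-ended blocking queue (the capacity of 2 and the element count are arbitrary):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class ProducerConsumerDemo {
  public static void main(String[] args) {
    BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);

    Thread producer = new Thread(() -> {
      try {
        for (int i = 1; i <= 5; i++) {
          queue.put(i); // blocks while the queue is full
          System.out.println("produced " + i);
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });

    Thread consumer = new Thread(() -> {
      try {
        for (int i = 1; i <= 5; i++) {
          System.out.println("consumed " + queue.take()); // blocks while the queue is empty
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });

    producer.start();
    consumer.start();
  }
}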

Note: of all these queues, only ArrayBlockingQueue and LinkedBlockingQueue support bounding their capacity. When using any of the other, unbounded queues, you must carefully weigh the hidden risk of an OOM.
Unbounded queues are generally not recommended in everyday work.
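
A minimal sketch of bounding a queue so that a slow consumer cannot exhaust memory (the capacity of 1024 is an arbitrary example):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class BoundedQueueDemo {
  public static void main(String[] args) {
    // Both constructors below create bounded queues.
    BlockingQueue<String> arrayQueue = new ArrayBlockingQueue<>(1024);
    BlockingQueue<String> linkedQueue = new LinkedBlockingQueue<>(1024);
    // new LinkedBlockingQueue<>() without a capacity is effectively unbounded
    // (Integer.MAX_VALUE) and can lead to OOM when producers outpace consumers.

    // offer() returns false instead of blocking when the queue is full,
    // letting the producer apply its own backpressure policy.
    if (!arrayQueue.offer("task-1")) {
      // drop, log, or retry later: the queue is at capacity
    }
  }
}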

Reference: Geek Time
More: Deng Xin
