20. Concurrent containers: which pits do we need to fill? - Concurrency tools

1. Synchronized containers and their pitfalls

In Lecture 12, "How to write concurrent programs with object-oriented thinking?", we introduced the idea of writing concurrent programs with object-oriented thinking: encapsulate the shared variable and control its access paths.


class SafeArrayList<T> {
	// Encapsulate the ArrayList
	private final List<T> c = new ArrayList<>();

	// Control the access paths
	synchronized T get(int idx) {
		return c.get(idx);
	}

	synchronized void add(int idx, T t) {
		c.add(idx, t);
	}

	synchronized boolean addIfNotExist(T t) {
		if (!c.contains(t)) {
			c.add(t);
			return true;
		}
		return false;
	}
}

Achieving thread safety this way is very simple. The Java SDK already provides ready-made thread-safe classes built on the same idea: they wrap ArrayList, HashSet, and HashMap into a thread-safe List, Set, and Map respectively.

List list = Collections.synchronizedList(new ArrayList());
Set set = Collections.synchronizedSet(new HashSet());
Map map = Collections.synchronizedMap(new HashMap());

Composite operations require attention to race conditions. Even if each individual operation is guaranteed to be atomic, the composition of several operations is not automatically atomic.
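A minimal sketch of this pit and its fix (the CompositeOps class name is illustrative, not from the SDK): on a synchronizedList, contains() and add() are each atomic, but the check-then-act composition can still let two threads insert duplicates unless the whole sequence holds the list's lock.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CompositeOps {
    private static final List<String> list =
            Collections.synchronizedList(new ArrayList<>());

    // contains() and add() are each atomic, but without the outer
    // synchronized block two threads could both pass the contains()
    // check and both add the element. Holding the list's own lock
    // makes the composite operation atomic.
    static boolean addIfNotExist(String t) {
        synchronized (list) { // same monitor the wrapper's methods use
            if (!list.contains(t)) {
                list.add(t);
                return true;
            }
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(addIfNotExist("a")); // first insert succeeds
        System.out.println(addIfNotExist("a")); // duplicate is rejected
    }
}
```

Note that the synchronized block locks the list object itself, which is the same monitor the wrapper's own methods lock, so individual operations and the composite operation are mutually exclusive.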

An easily overlooked "pit" with these containers is traversing them with an iterator. The following code does not make the traversal atomic as a whole:

List list = Collections.synchronizedList(new ArrayList());
Iterator i = list.iterator();
while (i.hasNext())
	foo(i.next());

The correct approach is as follows:

List list = Collections.synchronizedList(new ArrayList());
synchronized (list) {
	Iterator i = list.iterator();
	while (i.hasNext())
		foo(i.next());
}

This works because every public method of the wrapper class synchronizes on the wrapper object itself (this), which here is list; locking list therefore excludes all other access for the duration of the traversal.

2. Concurrent containers and their pitfalls

All methods of a synchronized container are guarded by synchronized, so their performance is poor. Starting with Java 1.5, the SDK provides containers with much better performance, which we generally call concurrent containers.

The commonly used concurrent containers fall into four categories, List, Map, Set, and Queue, covered in turn below.

2.1 List

The List implementation is CopyOnWriteArrayList. Copy-on-write means that every write makes a fresh copy of the shared array; the benefit is that read operations are completely lock-free.

CopyOnWriteArrayList maintains an array internally, and the member variable array points to it. All read operations go through array; an Iterator, for example, traverses array directly.

What happens if a write operation, such as adding an element, occurs while the array is being traversed? CopyOnWriteArrayList copies the array, performs the add on the new copy, and then points array at the new copy once the write completes. Reads and writes can therefore proceed in parallel: the traversal keeps working on the original array, while the write operates on the new one.
Watch out for two pits:

  • Applicable scenarios: CopyOnWriteArrayList is only suitable for workloads where writes are very rare and short-lived read/write inconsistency can be tolerated.
  • The CopyOnWriteArrayList iterator is read-only: it does not support add, remove, or set.
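Both pits can be demonstrated directly (the CowDemo class is an illustrative sketch, not part of the SDK): an iterator created before a write only sees the snapshot it was created on, and calling remove() on it fails.

```java
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    // An iterator is a snapshot of the array at creation time;
    // elements added afterwards go into a new copy and are not seen.
    static int snapshotCount() {
        CopyOnWriteArrayList<Integer> list = new CopyOnWriteArrayList<>();
        list.add(1);
        list.add(2);
        Iterator<Integer> it = list.iterator(); // snapshot of [1, 2]
        list.add(3);                            // write goes to a new copy
        int count = 0;
        while (it.hasNext()) {
            it.next();
            count++;
        }
        return count; // 2, not 3: the snapshot predates the add
    }

    // The snapshot iterator is read-only; remove() is unsupported.
    static boolean iteratorIsReadOnly() {
        CopyOnWriteArrayList<Integer> list = new CopyOnWriteArrayList<>();
        list.add(1);
        Iterator<Integer> it = list.iterator();
        it.next();
        try {
            it.remove();
            return false;
        } catch (UnsupportedOperationException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(snapshotCount());      // prints 2
        System.out.println(iteratorIsReadOnly()); // prints true
    }
}
```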

2.2 Map

Map has two implementations, ConcurrentHashMap and ConcurrentSkipListMap. The main difference is that the keys of ConcurrentHashMap are unordered, while the keys of ConcurrentSkipListMap are ordered. Neither of them allows a null key or value; violating this throws a NullPointerException.

ConcurrentSkipListMap is implemented as a skip list, whose insert, delete, and lookup operations are O(log n) and whose performance stays stable even under heavy contention.
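A small sketch (the MapDemo class name and the sample keys are assumptions for illustration) showing both properties: the sorted key order of ConcurrentSkipListMap, and the null rejection shared by both maps.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class MapDemo {
    // ConcurrentSkipListMap iterates its keys in sorted order,
    // regardless of insertion order; ConcurrentHashMap makes no
    // ordering guarantee at all.
    static List<String> sortedKeys() {
        ConcurrentSkipListMap<String, Integer> m = new ConcurrentSkipListMap<>();
        m.put("banana", 2);
        m.put("apple", 1);
        m.put("cherry", 3);
        return new ArrayList<>(m.keySet()); // [apple, banana, cherry]
    }

    // Both concurrent maps reject null keys and values with a
    // NullPointerException, unlike a plain HashMap.
    static boolean rejectsNull() {
        ConcurrentHashMap<String, String> m = new ConcurrentHashMap<>();
        try {
            m.put("k", null);
            return false;
        } catch (NullPointerException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(sortedKeys());  // prints [apple, banana, cherry]
        System.out.println(rejectsNull()); // prints true
    }
}
```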

2.3 Set

The two implementations of the Set interface are CopyOnWriteArraySet and ConcurrentSkipListSet. For usage scenarios, refer to CopyOnWriteArrayList and ConcurrentSkipListMap described above; the principles are the same.

2.4 Queue

Queues can be classified along two dimensions.

  • Blocking vs. non-blocking: in a blocking queue, the enqueue operation blocks when the queue is full, and the dequeue operation blocks when the queue is empty.
  • Single-ended vs. double-ended: a single-ended queue enqueues only at the tail and dequeues only at the head, while a double-ended queue can enqueue and dequeue at both ends.

In the SDK, blocking queues are marked by the keyword Blocking in their names, single-ended queues by Queue, and double-ended queues by Deque.

  1. Single-ended blocking queues: the implementations are ArrayBlockingQueue, LinkedBlockingQueue, SynchronousQueue, LinkedTransferQueue, PriorityBlockingQueue, and DelayQueue. Each holds an internal queue, which may be an array (ArrayBlockingQueue) or a linked list (LinkedBlockingQueue); SynchronousQueue holds no queue at all, so every enqueue operation must wait for a consumer thread's dequeue. LinkedTransferQueue combines the features of LinkedBlockingQueue and SynchronousQueue and performs better than LinkedBlockingQueue; PriorityBlockingQueue supports dequeuing by priority; DelayQueue supports delayed dequeuing.
  2. Double-ended blocking queue: its implementation is LinkedBlockingDeque.

Only ArrayBlockingQueue and LinkedBlockingQueue support a bounded capacity. When using any of the other, unbounded queues, always consider whether they may cause an OOM.
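As a sketch of the bounded case (the BoundedQueueDemo class and its values are illustrative assumptions), an ArrayBlockingQueue with capacity 2 lets a producer and a consumer cooperate without ever holding more than two elements in memory, which is exactly how a bounded queue avoids the OOM risk of an unbounded one.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedQueueDemo {
    // put() blocks whenever the 2-slot buffer is full and take()
    // blocks whenever it is empty, so the producer can never run
    // ahead of the consumer by more than the queue's capacity.
    static int drainSum() throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put(i); // blocks when the buffer is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < 5; i++) {
            sum += queue.take(); // blocks when the buffer is empty
        }
        producer.join();
        return sum; // 1 + 2 + 3 + 4 + 5 = 15
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(drainSum()); // prints 15
    }
}
```

With an unbounded queue such as a default LinkedBlockingQueue, the same producer would never block, and a producer faster than its consumer could grow the queue until memory is exhausted.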

Origin blog.csdn.net/qq_39530821/article/details/102655814