Java Concurrency Principles, Part 3 (CountDownLatch, Semaphore, CopyOnWriteArrayList, ConcurrentHashMap)

1. CountDownLatch and Semaphore

1.1 What is CountDownLatch? What is it for? How is it implemented?

At its core, CountDownLatch is a counter.

When multiple threads process a task concurrently, the main thread often needs to wait for the worker threads to finish before doing follow-up work such as merging results and responding to the user. CountDownLatch counts the workers: once the other threads are done, the main thread is woken up.

CountDownLatch itself is implemented based on AQS.

When you construct a CountDownLatch, you pass in the count directly; this value is stored in the AQS state attribute.

When a child thread finishes its task, it calls the countDown method, which internally decrements state by 1.

When state drops to 0, the threads parked by await are woken up.

CountDownLatch cannot be reused; once the count reaches 0, it is spent.
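A minimal sketch of the wait-for-workers pattern described above (class and thread names are illustrative):

```java
import java.util.concurrent.CountDownLatch;

public class CountDownLatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch latch = new CountDownLatch(workers); // state = 3
        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("worker " + id + " done");
                latch.countDown();   // decrements state by 1
            }).start();
        }
        latch.await();               // main thread parks until state == 0
        System.out.println("all workers finished, merging results");
    }
}
```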

1.2 What is Semaphore? What is it for? How is it implemented?

Semaphore is a utility class that can be used to limit concurrency (e.g., for rate limiting).

For example, Hystrix's semaphore isolation needs to cap the number of concurrently working threads, which can be implemented with a semaphore.

For example, if the current service allows at most 10 threads to work at the same time, set the semaphore to 10. Every submitted task must acquire a permit before it can start working, and return the permit when it finishes.

Semaphore is also implemented on top of AQS.

When constructing a semaphore, you specify the number of permits (stored in state). When acquiring, you specify how many permits to take; CAS guarantees the decrement is atomic, and releasing works the same way in reverse.
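A small sketch of semaphore-based concurrency limiting as described above (the `handleRequest` helper is illustrative, not from any real framework):

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    // At most 10 tasks may run at the same time
    static final Semaphore PERMITS = new Semaphore(10);

    static void handleRequest(Runnable task) throws InterruptedException {
        PERMITS.acquire();       // state - 1 via CAS; parks if no permit is left
        try {
            task.run();
        } finally {
            PERMITS.release();   // state + 1 via CAS; wakes a waiting thread
        }
    }

    public static void main(String[] args) throws InterruptedException {
        handleRequest(() -> System.out.println("doing work with a permit"));
        System.out.println("available permits: " + PERMITS.availablePermits());
    }
}
```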

1.3 When the main thread ends, does the program stop?

If the main thread ends but user (non-daemon) threads are still running, the JVM does not exit.

If the main thread ends and only daemon threads remain, the JVM exits.
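A small illustration of the rule above: the JVM waits for the non-daemon thread but not for the daemon thread (thread names and sleep durations are arbitrary):

```java
public class DaemonDemo {
    public static void main(String[] args) {
        Thread user = new Thread(() -> sleep(200));
        user.start();            // JVM waits for this non-daemon (user) thread

        Thread daemon = new Thread(() -> sleep(10_000));
        daemon.setDaemon(true);  // must be set before start()
        daemon.start();          // JVM will NOT wait for this one
        // main returns here; the process exits after ~200 ms, not 10 s
    }

    static void sleep(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```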

2. CopyOnWriteArrayList

2.1 How does CopyOnWriteArrayList ensure thread safety? What are its downsides?

When CopyOnWriteArrayList writes data, it relies on a ReentrantLock to make the write atomic.

Second, every write works on a copy: the underlying array is copied, the change is applied to the copy, and only after the write succeeds is the copy swapped in as the list's array.

This guarantees that readers never observe inconsistent data.

If the data set is large, every write has to copy the whole array, which costs too much space. For large data sets, CopyOnWriteArrayList is not recommended.

Use it when writes must be atomic, reads must run concurrently, and the amount of data is small.
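A short example of the copy-on-write behavior: an iterator keeps reading the snapshot array it started with, while fresh reads see the swapped-in copy.

```java
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    public static void main(String[] args) {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        list.add("b");

        Iterator<String> it = list.iterator(); // snapshot of the current array
        list.add("c");                         // copies the array under ReentrantLock

        int seen = 0;
        while (it.hasNext()) { it.next(); seen++; }
        System.out.println(seen);        // 2: the iterator still reads the old array
        System.out.println(list.size()); // 3: new reads see the new array
    }
}
```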

3. ConcurrentHashMap (JDK 1.8)

3.1 Why is HashMap not thread-safe?

Problem 1: In JDK 1.7, concurrent resizing could create an infinite loop in a bucket's linked list.

Problem 2: Concurrent writes can overwrite each other, so data may be lost.

Problem 3: The counters use a plain ++, so the element count and the modification count of the HashMap are recorded inaccurately under concurrency.

Problem 4: Data migration during resizing can also lose data.
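A quick way to observe problem 2 (lost updates) is to hammer a plain HashMap from several threads and compare with ConcurrentHashMap; the `fill` helper below is illustrative. On most runs the HashMap ends up smaller than 40,000 (and in JDK 1.7 the resize loop could even hang), while ConcurrentHashMap always ends with all entries:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LostUpdateDemo {
    // Four threads each put 10_000 distinct keys into the same map
    static Map<Integer, Integer> fill(Map<Integer, Integer> map) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int t = 0; t < 4; t++) {
            final int base = t * 10_000;
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) map.put(base + i, i);
            });
            threads[t].start();
        }
        for (Thread t : threads) t.join();
        return map;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("HashMap size: " + fill(new HashMap<>()).size());
        System.out.println("ConcurrentHashMap size: " + fill(new ConcurrentHashMap<>()).size());
    }
}
```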

3.2 How does ConcurrentHashMap ensure thread safety?

1: Tail insertion (instead of JDK 1.7's head insertion), plus CAS protection during resizing, ensures thread safety and avoids the linked-list loop.

2: Writing an element into an empty array slot is protected by CAS; inserting into a linked list or a red-black tree is protected by synchronized.

3: The counter in ConcurrentHashMap is implemented with the LongAdder technique, whose bottom layer is still CAS (compare AtomicLong).

4: When ConcurrentHashMap resizes, CAS ensures there are no concurrency problems in the data migration. ConcurrentHashMap also supports concurrent resizing: for example, when the array grows from 64 to 128 and two threads resize at the same time, thread A claims the migration task for indexes 63 down to 48, and thread B claims indexes 47 down to 32. The key point is that claiming a task is done with CAS, which keeps it thread safe.
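The task-claiming idea can be sketched with an AtomicInteger standing in for ConcurrentHashMap's transferIndex field; this is a simplified illustration, not the JDK source:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified sketch: threads claim migration ranges [start, end) from the
// high end of the old table with a single CAS, the same idea as the
// transferIndex field in ConcurrentHashMap.
public class TransferSketch {
    static final int STRIDE = 16;          // buckets per claimed task
    final AtomicInteger transferIndex;     // next unclaimed index (exclusive)

    TransferSketch(int oldLength) {
        transferIndex = new AtomicInteger(oldLength);
    }

    /** Returns the claimed range {start, end}, or null when no task is left. */
    int[] claimTask() {
        while (true) {
            int end = transferIndex.get();
            if (end <= 0) return null;                    // nothing left to migrate
            int start = Math.max(end - STRIDE, 0);
            if (transferIndex.compareAndSet(end, start))  // CAS makes the claim atomic
                return new int[]{start, end};             // migrate buckets [start, end)
        }
    }
}
```

With an old length of 64, the first claim covers buckets 48 to 63 and the second covers 32 to 47, matching the two-thread example above.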

3.3 Is the array created when a ConcurrentHashMap is constructed? If not, how is initializing the array made thread safe?

ConcurrentHashMap initializes lazily, as most framework components do.

Thread safety of initialization is guaranteed with CAS. This involves not only CAS on the sizeCtl variable to make claiming the initialization atomic, but also DCL (double-checked locking): the outer check tests that the array is uninitialized, sizeCtl is then modified with CAS, and an inner check re-tests that the array is still uninitialized before creating it.

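A simplified sketch of the DCL-plus-CAS initialization just described (modeled loosely on ConcurrentHashMap's initTable; the field types and names here are simplified, not the JDK source):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of lazy table init: a double-checked "table == null" plus a CAS
// on sizeCtl so that exactly one thread allocates the array.
public class InitSketch {
    volatile Object[] table;
    final AtomicInteger sizeCtl = new AtomicInteger(0); // < 0 means "initializing"

    Object[] initTable(int capacity) {
        while (table == null) {                          // outer DCL check
            int sc = sizeCtl.get();
            if (sc < 0) {
                Thread.yield();                          // another thread is initializing
            } else if (sizeCtl.compareAndSet(sc, -1)) {  // CAS: claim the init right
                if (table == null) {                     // inner DCL check
                    table = new Object[capacity];
                }
                // threshold = 0.75 * capacity, computed as n - n/4
                sizeCtl.set(capacity - (capacity >>> 2));
            }
        }
        return table;
    }
}
```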

3.4 Why is the load factor 0.75, and why does a linked list turn into a red-black tree when its length reaches 8?

Note that ConcurrentHashMap does not allow the load factor to be modified.

The load factor of 0.75 can be explained from two angles.

Why not 0.5, and why not 1?

0.5: with a load factor of 0.5, resizing starts once the array is half full.

  • Advantage: fewer hash collisions, so queries are efficient.
  • Disadvantage: resizing is too frequent, and space utilization is low.

1: with a load factor of 1, resizing starts only when the element count reaches the array length.

  • Advantage: resizing is infrequent, and space utilization is good.
  • Disadvantage: hash collisions become very frequent and data piles up on the linked lists, hurting query efficiency; lists may even grow long enough to become red-black trees, hurting write efficiency.

0.75 is a middle-ground choice that balances both concerns.

As for the Poisson distribution: with a load factor of 0.75, the probability of a linked list reaching length 8 is extremely low; the figure quoted in the source-code comment is 0.00000006, so red-black trees are generated very rarely.

Although ConcurrentHashMap introduces red-black trees, they cost more to maintain on writes, so they are avoided when possible. The comments in the HashMap source also say that red-black trees should be avoided as much as possible.

As for why a tree degenerates back to a linked list at 6 rather than 7: keeping a gap of one value prevents frequent back-and-forth conversion between linked list and red-black tree when the length hovers around the threshold.
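The thresholds above, as they relate in code (the constants mirror those in the JDK source; the threshold arithmetic is the shift trick the JDK uses for 0.75 * n):

```java
public class ThresholdDemo {
    // Constants as they appear in the JDK source:
    static final float LOAD_FACTOR = 0.75f;
    static final int TREEIFY_THRESHOLD = 8;     // list -> tree at length 8
    static final int UNTREEIFY_THRESHOLD = 6;   // tree -> list again at length 6
    static final int MIN_TREEIFY_CAPACITY = 64; // below this, resize instead of treeify

    public static void main(String[] args) {
        int capacity = 16;
        // 0.75 * n computed without floating point: n - n/4
        int threshold = capacity - (capacity >>> 2);
        System.out.println(threshold); // 12: resize when the 13th element would go in
    }
}
```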

Will very frequent put operations be blocked during a resize?

Under normal circumstances, no.

If a put finds that the data at its index position has not been migrated yet, it writes to the old array normally.

If a put finds that the data at its position has already been migrated to the new array, it cannot insert there; instead it helps with the resize so that the resize finishes quickly, then recomputes the index and retries against the new array.

3.5 When will ConcurrentHashMap be expanded, and what is the expansion process?

  • When the number of elements in ConcurrentHashMap reaches the threshold computed from the load factor, it resizes directly.
  • Calling putAll with a large batch of data may also trigger an immediate resize: if the inserted batch is larger than the threshold of the next resize, the map resizes for the batch first and then inserts.
  • When the array length is less than 64 and a linked list's length reaches 8, a resize is triggered instead of treeification.

The resize process (sizeCtl is an int variable used to control initialization and resizing):

  • Each resizing thread computes a resize stamp from the length of oldTable (this prevents two resizing threads working from different array lengths from mixing; also, the 16th bit of the stamp is always 1, so shifting it left by 16 bits yields a negative number).
  • The first resizing thread sets sizeCtl to (stamp << 16) + 2, meaning 1 thread is currently resizing.
  • Every thread after the first adds 1 to sizeCtl, meaning one more thread has come to help.
  • The first thread creates the new array.
  • Each thread claims a data-migration task and moves data from oldTable to newTable; by default each claimed task covers 16 buckets.
  • When a thread goes back for another task and finds none left, it exits the resize and subtracts 1 from sizeCtl.
  • The last thread to exit (the one whose subtraction leaves only the first thread's +2) re-checks the table from end to start for any data left unmigrated (this basically never happens); after the check it subtracts the remaining count, sizeCtl is settled, and the resize is complete.
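The stamp arithmetic in the first two bullets can be shown concretely. The `resizeStamp` formula below matches the JDK's; the sizeCtl bookkeeping around it is only an illustration of the +2 / +1 / -1 protocol:

```java
public class ResizeStampDemo {
    // JDK formula: a per-length stamp whose 16th bit (bit index 15) is
    // always 1, so (stamp << 16) is guaranteed to be negative.
    static int resizeStamp(int n) {
        return Integer.numberOfLeadingZeros(n) | (1 << 15);
    }

    public static void main(String[] args) {
        int rs = resizeStamp(64);
        int sizeCtl = (rs << 16) + 2;     // first resizing thread: stamp + 2
        System.out.println(sizeCtl < 0);  // true: negative sizeCtl marks "resizing"
        sizeCtl += 1;                     // a helper thread joins
        sizeCtl -= 1;                     // ...and subtracts 1 when it exits
        // The thread whose exit brings sizeCtl back to (rs << 16) + 2 is the
        // last one; it re-checks the table and then finishes the resize.
        System.out.println(sizeCtl == (rs << 16) + 2);
    }
}
```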

3.6 How is the counter of ConcurrentHashMap implemented?

The counter is based on the LongAdder mechanism, but ConcurrentHashMap does not reference LongAdder directly; instead, following LongAdder's design, it carries its own implementation that is more than 80% identical.

LongAdder uses CAS for the additions to guarantee atomicity, and striped cells (a form of segmented locking) to guarantee concurrency.
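A short LongAdder usage example showing that concurrent increments are not lost:

```java
import java.util.concurrent.atomic.LongAdder;

public class AdderDemo {
    public static void main(String[] args) throws InterruptedException {
        LongAdder counter = new LongAdder();
        Thread[] threads = new Thread[4];
        for (int t = 0; t < 4; t++) {
            threads[t] = new Thread(() -> {
                // each increment is a CAS on the base or on a per-thread cell
                for (int i = 0; i < 100_000; i++) counter.increment();
            });
            threads[t].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(counter.sum()); // 400000: sums base + all cells
    }
}
```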

3.7 Will the read operation of ConcurrentHashMap be blocked?

No, a read is never blocked, wherever the lookup lands.

Lookup in the array? Case 1: if the element sits directly in the array, it is returned immediately.

Lookup in a linked list? Case 2: barring special situations, the lookup just walks the list node by node via next.

During a resize? Case 3: if the node hash at the current index position is -1, all the data at that position has already been migrated to the new array, so the read goes straight to the new array, regardless of whether the resize has finished.

Lookup in a red-black tree? If a thread is writing to the red-black tree, can a reader still query it? No: rebalancing may rotate the tree, and rotations change pointers, which could break a concurrent reader. For this reason, when a bucket is converted to a red-black tree, a doubly linked list is kept alongside the tree, and a reader that would otherwise block walks the doubly linked list instead. Whether a thread is writing, waiting to write, or reading the red-black tree is judged from the TreeBin's lockState: 1 means a thread is writing, 2 means a writing thread is waiting, and a multiple of 4 (4n) means n threads are reading.


Origin: blog.csdn.net/lx9876lx/article/details/129116483