ConcurrentHashMap interview questions: just read this one!

 

This article summarizes the most frequently asked ConcurrentHashMap interview questions, so just read this one! To help everyone review efficiently, "★" marks how often a question comes up in interviews: the more stars, the higher the frequency!

 

Implementation principle

What is the implementation principle of ConcurrentHashMap? ★★★★★

The implementation of ConcurrentHashMap in JDK1.7 and JDK1.8 is different.

First look at JDK1.7

In JDK1.7, ConcurrentHashMap is composed of an array of Segments, each of which in turn holds an array of HashEntry nodes. In other words, ConcurrentHashMap splits the overall hash bucket array into smaller arrays (Segments), and each Segment consists of n HashEntry nodes.

The data is first divided into segments, and each segment is given its own lock. When one thread holds the lock to access data in one segment, the data in the other segments can still be accessed by other threads, which achieves true concurrent access.

Segment is a static inner class of ConcurrentHashMap. Its main fields, in simplified form (a sketch based on the JDK1.7 source, methods omitted), look like this:
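```java
// A simplified sketch of the JDK 1.7 Segment inner class (fields only,
// put/get/rehash methods omitted).
static final class Segment<K,V> extends ReentrantLock implements Serializable {

    // Each Segment owns its own small hash table of HashEntry buckets.
    transient volatile HashEntry<K,V>[] table;

    // Number of elements stored in this segment.
    transient int count;

    // Structural modification count for this segment.
    transient int modCount;

    // When count exceeds this threshold, the segment's table is rehashed.
    transient int threshold;

    // Load factor for the segment's table.
    final float loadFactor;
}
```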

Segment extends ReentrantLock, so each Segment is a reentrant lock and plays the role of the lock. The number of Segments defaults to 16, which means the default concurrency level is 16.

HashEntry, which stores the actual key-value elements, is also a static inner class. Its main fields, in simplified form (again a sketch of the JDK1.7 source), look like this:
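```java
// A simplified sketch of the JDK 1.7 HashEntry node (fields only).
static final class HashEntry<K,V> {
    final int hash;
    final K key;
    volatile V value;             // volatile: value writes are visible to concurrent readers
    volatile HashEntry<K,V> next; // volatile: list links are visible to concurrent readers

    HashEntry(int hash, K key, V value, HashEntry<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }
}
```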

The value field of HashEntry and the next pointer are both declared volatile, which guarantees the visibility of reads in a multi-threaded environment!

Let's look at JDK1.8 again

In terms of data structure, ConcurrentHashMap in JDK1.8 uses the same Node array + linked list + red-black tree structure as HashMap. In terms of locking, the original segment locks are abandoned in favor of the finer-grained CAS + synchronized.

Locking is controlled at the finer granularity of individual hash bucket elements: only the head node of a linked list (or the root node of a red-black tree) needs to be locked, so reads and writes to the other buckets are unaffected, which greatly improves concurrency.

Why did JDK1.8 replace the reentrant lock ReentrantLock with the built-in lock synchronized? ★★★★★

  • Since JDK1.6, a large number of optimizations have been applied to synchronized, which now has multiple lock states that escalate step by step: no lock -> biased lock -> lightweight lock -> heavyweight lock.
  • Reduced memory overhead. If reentrant locks were used for synchronization, every node would need to inherit from AQS, but not every node actually needs synchronization support: only the head node of a linked list (or the root node of a red-black tree) ever needs to be locked, so per-node locks would be a huge waste of memory.

Put and get

What is the execution logic of the put method of ConcurrentHashMap? ★★★★

First look at JDK1.7

First locate the corresponding segment, and then perform the put operation.

A simplified sketch of Segment.put() (not the verbatim JDK source) looks like this:
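```java
// Simplified sketch of JDK 1.7 Segment.put(); rehashing, modCount bookkeeping
// and the Unsafe-based volatile array access in the real source are omitted.
final V put(K key, int hash, V value, boolean onlyIfAbsent) {
    // Try the lock once; if that fails, scanAndLockForPut() spins for the lock
    // and falls back to blocking after MAX_SCAN_RETRIES (see the steps below).
    if (!tryLock())
        scanAndLockForPut(key, hash, value);
    V oldValue = null;
    try {
        HashEntry<K,V>[] tab = table;
        int index = (tab.length - 1) & hash;          // bucket index inside this segment
        HashEntry<K,V> first = tab[index];
        for (HashEntry<K,V> e = first; ; e = e.next) {
            if (e != null) {
                if (e.hash == hash && key.equals(e.key)) {
                    oldValue = e.value;               // key already present: replace the value
                    if (!onlyIfAbsent)
                        e.value = value;
                    break;
                }
            } else {
                // Key not found: link a new node at the head of the bucket.
                tab[index] = new HashEntry<K,V>(hash, key, value, first);
                count++;                              // "rehash if over threshold" check omitted
                break;
            }
        }
    } finally {
        unlock();                                     // Segment extends ReentrantLock
    }
    return oldValue;
}
```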

It first tries to acquire the lock. If that fails, another thread must be competing for it, so it falls back to scanAndLockForPut() to spin for the lock. That method:

  1. Attempts to acquire the lock by spinning.
  2. If the number of retries reaches MAX_SCAN_RETRIES, switches to a blocking lock acquisition to guarantee it eventually succeeds.

Look at JDK1.8 again

It can be roughly divided into the following steps:

  1. Calculate the hash value according to the key;

  2. Determine whether to initialize;

  3. Locate the bucket, read its first node f, and check f:

    • If it is null, try to add it through CAS;
    • If f.hash == MOVED (i.e. -1), other threads are expanding the table, so this thread joins in and helps with the expansion;
    • Otherwise, lock the head node f with synchronized, determine whether the bucket is a linked list or a red-black tree, and traverse it to insert or update;
  4. When the length of the linked list reaches 8, either the table is expanded or the linked list is converted into a red-black tree (the tree conversion only happens once the table length has reached 64).

A simplified sketch of putVal() (not the verbatim JDK source) looks like this:
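```java
// Simplified sketch of JDK 1.8 putVal(); tree handling, size counting and
// resize details are elided, and tabAt()/casTabAt() stand for the
// Unsafe-based volatile bucket reads and CAS writes in the real source.
final V putVal(K key, V value, boolean onlyIfAbsent) {
    if (key == null || value == null) throw new NullPointerException();
    int hash = spread(key.hashCode());              // 1. compute the hash from the key
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i;
        if (tab == null || (n = tab.length) == 0)
            tab = initTable();                      // 2. lazily initialize the table
        else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
            // 3a. empty bucket: install the new node with CAS, no lock needed
            if (casTabAt(tab, i, null, new Node<K,V>(hash, key, value, null)))
                break;
        }
        else if (f.hash == MOVED)
            tab = helpTransfer(tab, f);             // 3b. the table is resizing: help move bins
        else {
            synchronized (f) {                      // 3c. lock only this bucket's head node
                // ... walk the linked list (or red-black tree) under the lock,
                //     replacing the value if the key exists or appending a new node
            }
            // 4. if the bin now holds 8 or more nodes, treeify it (or resize the
            //    table if its length is still below 64), then update the size count
            break;
        }
    }
    return null;  // the real method returns the previous value when the key existed
}
```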

What is the execution logic of the get method of ConcurrentHashMap? ★★★★

Similarly, first look at JDK1.7

First, compute the hash from the key to locate the specific Segment, then use the hash to locate the HashEntry bucket, and traverse that bucket's linked list to find the element.

Since the shared fields involved in HashEntry are all declared volatile, memory visibility is guaranteed, so every read sees the latest value.

A simplified sketch of the get flow (not the verbatim JDK source) looks like this:
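```java
// Simplified sketch of the JDK 1.7 get() flow; segmentFor() stands in for the
// Unsafe-based volatile segment lookup in the real source. No lock is taken.
public V get(Object key) {
    int h = hash(key);
    Segment<K,V> s = segmentFor(h);                  // 1. the hash locates the Segment
    HashEntry<K,V>[] tab;
    if (s != null && (tab = s.table) != null) {
        int index = (tab.length - 1) & h;            // 2. the hash locates the HashEntry bucket
        for (HashEntry<K,V> e = tab[index]; e != null; e = e.next) {
            // 3. walk the bucket's linked list; value and next are volatile,
            //    so the latest committed write is visible without locking
            if (e.hash == h && key.equals(e.key))
                return e.value;
        }
    }
    return null;
}
```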

Look at JDK1.8 again

It can be roughly divided into the following steps:

  1. Calculate the hash value according to the key, and judge whether the array is empty;

  2. If the first node matches, return it directly;

  3. If it is a red-black tree structure, query from the red-black tree;

  4. If it is a linked list, loop through it and compare keys.

A sketch of get(), close to (but not identical to) the JDK source, looks like this:
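```java
// Sketch of JDK 1.8 get(); tabAt() stands for the volatile bucket read
// done through Unsafe in the real source.
public V get(Object key) {
    Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
    int h = spread(key.hashCode());                  // 1. compute the hash from the key
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (e = tabAt(tab, (n - 1) & h)) != null) {     // 1. the table and the bucket are non-empty
        if ((eh = e.hash) == h) {
            if ((ek = e.key) == key || (ek != null && key.equals(ek)))
                return e.val;                        // 2. the first node matches: return it
        }
        else if (eh < 0)                             // 3. negative hash: tree bin (or forwarding node)
            return (p = e.find(h, key)) != null ? p.val : null;
        while ((e = e.next) != null) {               // 4. otherwise walk the linked list
            if (e.hash == h &&
                ((ek = e.key) == key || (ek != null && key.equals(ek))))
                return e.val;
        }
    }
    return null;
}
```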

Does the get method of ConcurrentHashMap need to be locked? Why? ★★★

The get method does not need a lock. Because the value field and the next pointer of Node are declared volatile, a value modification or a newly added node made by thread A is visible to thread B in a multi-threaded environment.

This is one of the reasons it is more efficient than other concurrent collections such as Hashtable or a HashMap wrapped with Collections.synchronizedMap().

Is the fact that get needs no lock related to the hash bucket array being declared volatile? ★★★

No, it isn't. The hash bucket array table is declared volatile mainly to guarantee visibility when the array reference is replaced during expansion.

Other

Why does ConcurrentHashMap not support the key or value being null? ★★★

Let's first talk about why the value cannot be null. ConcurrentHashMap is used by multiple threads, so if ConcurrentHashMap.get(key) returns null, you cannot tell whether the mapped value is null or the key is simply absent; the result is ambiguous.

A HashMap, on the other hand, is used by a single thread, so you can call containsKey(key) afterwards to determine which of the two cases the null actually represents.

Let's reason by contradiction:

Suppose ConcurrentHashMap allowed null values, and there are two threads, A and B. Thread A calls ConcurrentHashMap.get(key) and gets null; we don't know whether this null means the key is absent or the stored value is null.

Suppose the true situation is that the key is absent. We could then call ConcurrentHashMap.containsKey(key) to verify our assumption, expecting it to return false.

But between our call to get(key) and our call to containsKey(key), thread B performs ConcurrentHashMap.put(key, null). Now containsKey returns true, which contradicts the real situation in our assumption; the result remains ambiguous.
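As a concrete illustration (a plain HashMap is used for the single-threaded case; the ConcurrentHashMap.put(key, null) call mentioned in the comment is hypothetical, since the real class rejects it with a NullPointerException):

```java
import java.util.HashMap;

public class NullValueAmbiguity {
    public static void main(String[] args) {
        // Single-threaded HashMap: containsKey() can disambiguate a stored null.
        HashMap<String, String> hm = new HashMap<>();
        hm.put("k", null);
        System.out.println(hm.get("k"));          // null
        System.out.println(hm.containsKey("k"));  // true -> the key exists, its value is null

        // In a ConcurrentHashMap this check-then-act would not be atomic: between
        // get("k") and containsKey("k") another thread could put or remove "k",
        // so the two results could disagree. That is why ConcurrentHashMap simply
        // rejects null values: put(key, null) throws NullPointerException.
    }
}
```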

As for why the key cannot be null: that's just how the source code is written, haha. If the interviewer isn't satisfied with that, answer that the author, Doug Lea, doesn't like null, so null keys were disallowed from the start of the design. If you want to dig deeper, you can read this article. Honestly, for this interview question I don't know what the interviewer is really after.

What is the concurrency of ConcurrentHashMap? ★★

Concurrency can be understood as the maximum number of threads that can update a ConcurrentHashMap at the same time without lock contention while the program is running. In JDK1.7, it is simply the number of segment locks in the ConcurrentHashMap, that is, the length of the Segment[] array. The default is 16, and this value can be set through the constructor.

If you set the concurrency level yourself, ConcurrentHashMap uses the smallest power of 2 that is greater than or equal to that value as the actual concurrency level; for example, if you set it to 17, the actual concurrency level is 32.
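For example, using the three-argument constructor (the rounding to a power of two describes JDK1.7 behavior, where concurrencyLevel determines the number of Segments):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrencyLevelExample {
    public static void main(String[] args) {
        // concurrencyLevel = 17 is rounded up to the next power of two, so a
        // JDK 1.7 ConcurrentHashMap is built with 32 Segments. In JDK 1.8 the
        // argument only influences the initial table sizing.
        ConcurrentHashMap<String, String> map =
                new ConcurrentHashMap<>(64, 0.75f, 17);
        map.put("k", "v");
        System.out.println(map.get("k"));
    }
}
```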

If the concurrency level is set too low, there will be serious lock contention; if it is set too high, accesses that would originally fall into the same segment spread across different segments, the CPU cache hit rate drops, and program performance degrades.

In JDK1.8, the Segment concept has been abandoned in favor of the Node array + linked list + red-black tree structure, and the concurrency effectively depends on the size of the array.

Is the ConcurrentHashMap iterator strong consistency or weak consistency? ★★

Unlike the strongly consistent (fail-fast) iterator of HashMap, the ConcurrentHashMap iterator is weakly consistent.

After an iterator of ConcurrentHashMap has been created, it traverses the elements following the hash table structure, but the contents may change during traversal. If a change happens in the part that has already been traversed, the iterator will not reflect it; if it happens in the part not yet traversed, the iterator may see and reflect it. This is weak consistency.

This way, the iterating thread can keep using the old data it sees, while writer threads complete their changes concurrently. More importantly, it guarantees that multiple threads can keep executing concurrently and scale, which is key to the performance improvement. If you want to know more, you can read this article: http://ifeve.com/ConcurrentHashMap-weakly-consistent/
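A small demonstration of this behavior (the exact output can vary from run to run, which is exactly the point):

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WeaklyConsistentIteration {
    public static void main(String[] args) {
        ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>();
        map.put(1, "a");
        map.put(2, "b");

        Iterator<Map.Entry<Integer, String>> it = map.entrySet().iterator();
        map.put(3, "c");   // modify while an iterator is open: no ConcurrentModificationException

        while (it.hasNext()) {
            System.out.println(it.next());  // 3=c may or may not appear in the output
        }
    }
}
```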

What is the difference between ConcurrentHashMap in JDK1.7 and JDK1.8? ★★★★★

  • Data structure: the Segment-based segmented structure is removed and replaced by the Node array + linked list + red-black tree structure.
  • Thread-safety mechanism: JDK1.7 uses Segment-based segmented locking, where Segment inherits from ReentrantLock; JDK1.8 uses CAS + synchronized to guarantee thread safety.
  • Lock granularity: JDK1.7 locks the Segment that holds the data being operated on; JDK1.8 locks each individual bucket head node (Node).
  • Conversion of linked lists to red-black trees: simplifying the hash algorithm used to locate buckets makes hash collisions worse, so when a bucket's linked list length reaches 8 (and the table length is at least 64), the list is converted into a red-black tree.
  • Query time complexity: from traversing a linked list, O(n), in JDK1.7 to traversing a red-black tree, O(logN), in JDK1.8 for treeified buckets.

Which is more efficient, ConcurrentHashMap or Hashtable? Why? ★★★★★

ConcurrentHashMap is more efficient than Hashtable, because Hashtable puts one big lock on the entire hash table to achieve thread safety, while ConcurrentHashMap uses finer-grained locking: segmented locks in JDK1.7 and CAS + synchronized in JDK1.8.

Talk specifically about Hashtable's locking mechanism ★★★★★

Hashtable uses synchronized to achieve thread safety, with one big lock over the entire hash table. Under multi-threaded access, as soon as one thread is accessing or modifying the object, all other threads can only block and wait for it to release the lock. In a highly contended multi-threaded scenario the performance is very poor!
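A minimal sketch of this style of locking (not the actual Hashtable source, which is similar in spirit: its public methods such as get and put are declared synchronized):

```java
// A minimal sketch of Hashtable-style locking: every public method is
// synchronized on the same object, so all reads and writes serialize.
class CoarseLockedTable<K, V> {
    private final java.util.HashMap<K, V> table = new java.util.HashMap<>();

    public synchronized V get(K key) {          // readers block writers and other readers
        return table.get(key);
    }

    public synchronized V put(K key, V value) { // one big lock over the whole table
        return table.put(key, value);
    }
}
```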

Is there any other way to safely manipulate map under multithreading? ★★★

You can also use the Collections.synchronizedMap method to get a synchronized map.

The HashMap passed in is simply wrapped: each method call synchronizes on an object lock to ensure thread safety in multi-threaded scenarios, which essentially locks the whole HashMap table. Under heavy contention the performance is still very poor, so this is not recommended!
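A short usage example; note that iteration still has to be synchronized manually on the returned wrapper, as its Javadoc requires:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SynchronizedMapExample {
    public static void main(String[] args) {
        // Every call on the wrapper synchronizes on a single internal mutex.
        Map<String, String> syncMap = Collections.synchronizedMap(new HashMap<>());
        syncMap.put("k", "v");

        // Iteration is not atomic, so it must be locked manually on the wrapper.
        synchronized (syncMap) {
            for (Map.Entry<String, String> e : syncMap.entrySet()) {
                System.out.println(e.getKey() + "=" + e.getValue());
            }
        }
    }
}
```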

Finally

That's it for ConcurrentHashMap. If you found this helpful, don't forget to like it~

If there are other topics you'd like to see covered, leave a comment or send me a private message~

Shoulders of giants

https://www.cnblogs.com/keeya/p/9632958.html

http://www.justdojava.com/2019/12/18/java-collection-15.1
