HashMap expansion mechanism

HashMap changed significantly between JDK 1.7 and JDK 1.8.

In JDK 1.7 and earlier, HashMap used an array + linked list. Each data node was an Entry, an inner class, and insertion into a bucket used head insertion. Head insertion is cheap (no need to walk to the tail), but during resize the transfer method rehashes entries bucket by bucket and reverses their order; when two threads resize concurrently, this can stitch a bucket's linked list into a cycle, and the next get on that bucket spins in an infinite loop. HashMap takes no locks, so under concurrent access it also gives no consistency guarantee: the value you put in may not be the value you later read out.

JDK 1.8 upgraded the structure to array + linked list / red-black tree and renamed the Entry node to Node. The whole put process was also reworked: 1.8 uses tail insertion, which preserves node order during resize and eliminates the cycle-forming hazard.
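The difference can be illustrated with a toy singly linked list (a hypothetical Node class, not the JDK's internals): copying a bucket with head insertion reverses its node order, while tail insertion preserves it. This order reversal is what allowed two JDK 7 threads, each part-way through transfer, to link the same nodes into a cycle.

```java
// Toy sketch, not JDK source: head insertion reverses a bucket's order
// during a rehash-style copy; tail insertion preserves it.
import java.util.ArrayList;
import java.util.List;

public class InsertOrderDemo {
    static class Node {
        final int key;
        Node next;
        Node(int key) { this.key = key; }
    }

    // Build a bucket list from the given keys, in order.
    static Node build(int... keys) {
        Node head = null, tail = null;
        for (int k : keys) {
            Node n = new Node(k);
            if (head == null) head = n; else tail.next = n;
            tail = n;
        }
        return head;
    }

    // JDK 7 style: each node is pushed onto the head of the new bucket.
    static Node copyHeadInsert(Node src) {
        Node newHead = null;
        for (Node e = src; e != null; e = e.next) {
            Node n = new Node(e.key);
            n.next = newHead;   // new node points at the old head...
            newHead = n;        // ...and becomes the head: order reverses
        }
        return newHead;
    }

    // JDK 8 style: each node is appended at the tail, order preserved.
    static Node copyTailInsert(Node src) {
        Node newHead = null, newTail = null;
        for (Node e = src; e != null; e = e.next) {
            Node n = new Node(e.key);
            if (newHead == null) newHead = n; else newTail.next = n;
            newTail = n;
        }
        return newHead;
    }

    static List<Integer> keys(Node head) {
        List<Integer> out = new ArrayList<>();
        for (Node e = head; e != null; e = e.next) out.add(e.key);
        return out;
    }

    public static void main(String[] args) {
        Node bucket = build(1, 2, 3);
        System.out.println(keys(copyHeadInsert(bucket))); // [3, 2, 1]
        System.out.println(keys(copyTailInsert(bucket))); // [1, 2, 3]
    }
}
```

With order preserved, a half-finished concurrent resize can no longer produce a node whose next pointer loops back on itself, which is why the classic JDK 7 infinite-loop bug disappears in 1.8 (HashMap is still not thread-safe, though).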

When is the linked list in HashMap transformed into a red-black tree?

1. The length of the bucket's linked list is greater than 8 (the TREEIFY_THRESHOLD constant in the JDK 8 source).

2. When condition 1 is met, the treeifyBin method is called. Inside that method, if the array length is less than MIN_TREEIFY_CAPACITY (64), HashMap chooses to resize instead of converting the bucket to a red-black tree.
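The two conditions can be sketched as one small standalone method. The constant values below are the real ones from the JDK 8 HashMap source; the method itself is an illustrative simplification, not HashMap's actual API:

```java
// Sketch of the treeify decision in JDK 8's HashMap (simplified, not the real API).
public class TreeifyDecision {
    // Actual constant values from the JDK 8 HashMap source:
    static final int TREEIFY_THRESHOLD = 8;      // list longer than this may become a tree
    static final int UNTREEIFY_THRESHOLD = 6;    // a tree shrinks back to a list at this size
    static final int MIN_TREEIFY_CAPACITY = 64;  // below this table size, resize instead

    // What happens when a bucket's linked list has grown?
    static String onBucketGrow(int listLength, int tableLength) {
        if (listLength <= TREEIFY_THRESHOLD) return "stay linked list";
        if (tableLength < MIN_TREEIFY_CAPACITY) return "resize";   // treeifyBin resizes the table
        return "treeify";                                          // convert bucket to red-black tree
    }

    public static void main(String[] args) {
        System.out.println(onBucketGrow(9, 16)); // resize: table too small to treeify
        System.out.println(onBucketGrow(9, 64)); // treeify
        System.out.println(onBucketGrow(5, 16)); // stay linked list
    }
}
```

Resizing first makes sense: a long bucket in a small table usually means too many keys per slot overall, and doubling the table spreads them out more cheaply than building a tree.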


A TreeNode (red-black tree node) occupies roughly twice the space of an ordinary Node (linked list node), so treeification is a deliberate time-for-space trade-off that only pays off when a bucket is unusually long.

With a well-distributed hash function, the number of nodes per bucket follows a Poisson distribution, and the probability of a bucket reaching 8 nodes is about 0.00000006, which is almost impossible in practice.
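That figure can be reproduced from the Poisson formula P(k) = e^(-λ) · λ^k / k! with λ = 0.5, the per-bucket load the JDK 8 HashMap source comment assumes:

```java
// Reproduces the 0.00000006 figure cited in the JDK 8 HashMap source comment:
// P(k) = e^(-lambda) * lambda^k / k!  with lambda = 0.5.
public class BucketPoisson {
    static double poisson(double lambda, int k) {
        double factorial = 1.0;
        for (int i = 2; i <= k; i++) factorial *= i;
        return Math.exp(-lambda) * Math.pow(lambda, k) / factorial;
    }

    public static void main(String[] args) {
        // Probability that a bucket holds exactly 8 entries.
        System.out.printf("%.8f%n", poisson(0.5, 8)); // prints 0.00000006
    }
}
```

So the red-black tree path is a defensive measure against degenerate (or adversarial) hash distributions, not something a healthy HashMap hits in normal use.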

Why is the treeify threshold (8) different from the untreeify threshold (6)? The gap prevents a bucket hovering around the boundary from flipping back and forth between linked list and red-black tree on every insert and delete.

 


Origin blog.csdn.net/weixin_32265569/article/details/108444796