HashMap source code analysis


HashMap is a hash table implementation of the Map interface. It provides all of the map operations and permits null values and a null key. (HashMap is roughly equivalent to Hashtable, except that HashMap is unsynchronized and permits nulls.) In addition, HashMap makes no guarantees about the order of its elements.

Assuming the hash function disperses the elements properly among the buckets, basic operations such as put and get run in constant time (O(1)). Iterating over all elements of a HashMap instance takes time proportional to the sum of its capacity (the number of buckets) and its size (the number of key-value pairs). So if you care about a HashMap's iteration performance, don't set the initial capacity too high or the load factor too low.
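A quick illustration of the null-friendliness mentioned above (a minimal standalone sketch):

import java.util.HashMap;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put(null, "value for null key"); // one null key is allowed
        map.put("k", null);                  // null values are allowed too
        System.out.println(map.get(null));   // prints: value for null key
        System.out.println(map.get("k"));    // prints: null
    }
}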


A HashMap instance has two parameters that affect its performance: the initial capacity and the load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created. The load factor is a measure of how full the hash table is allowed to get before it is automatically resized. When the number of entries exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, its internal data structures are rebuilt) so that it has approximately twice the original number of buckets.

As a general rule, the default load factor of 0.75 offers a good trade-off between time and space costs. Higher values decrease the space overhead but increase the lookup cost (reflected in most operations of the HashMap class, including get and put). The expected number of entries and the load factor should both be taken into account when setting the initial capacity, so as to minimize the number of rehash operations. If the initial capacity is greater than the maximum number of entries divided by the load factor (initial capacity > max entries / load factor), no rehash operation will ever occur.

If a HashMap instance needs to store many entries (key-value pairs), creating it with a sufficiently large capacity lets the entries be stored far more efficiently than relying on automatic resizing.
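A sketch of that sizing rule in practice (the entry count here is made up for illustration): to hold 1000 entries with the default load factor of 0.75, an initial capacity of at least 1000 / 0.75 ≈ 1334 means no rehash ever happens.

import java.util.HashMap;
import java.util.Map;

public class PresizeDemo {
    public static void main(String[] args) {
        int expectedEntries = 1000; // illustrative figure
        float loadFactor = 0.75f;
        // initialCapacity > maxEntries / loadFactor  =>  no rehash ever occurs
        int initialCapacity = (int) (expectedEntries / loadFactor) + 1;
        Map<Integer, String> map = new HashMap<>(initialCapacity, loadFactor);
        for (int i = 0; i < expectedEntries; i++) {
            map.put(i, "v" + i); // no resize along the way
        }
        System.out.println(map.size()); // 1000
    }
}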

Please note that if many keys return the same hashCode() result, the performance of the hash table degrades severely. To ameliorate the impact, when keys are Comparable, HashMap may use the comparison order among those keys to break ties and improve efficiency.

Please note that HashMap is not synchronized. If multiple threads access a HashMap concurrently, and at least one of the threads modifies it structurally, it must be synchronized externally. (A structural modification is any operation that adds or deletes key-value pairs; in the source it shows up as a change to the modCount field. Merely changing the value associated with a key the map already contains is not a structural modification.) External synchronization is typically accomplished by synchronizing on some object that naturally encapsulates the map.

If no such object exists, the map should be wrapped using Collections.synchronizedMap. This is best done at creation time, to prevent accidental unsynchronized access to the map:

Map m = Collections.synchronizedMap(new HashMap(...));
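Note that even with the synchronized wrapper, iterating over any of the map's collection views must still be manually synchronized on the wrapper itself; this requirement comes from the Collections.synchronizedMap documentation. A minimal sketch:

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SyncMapDemo {
    public static void main(String[] args) {
        Map<String, Integer> m = Collections.synchronizedMap(new HashMap<String, Integer>());
        m.put("a", 1);
        m.put("b", 2);
        // Iteration over a view must be synchronized on the wrapper map itself
        synchronized (m) {
            for (Map.Entry<String, Integer> e : m.entrySet()) {
                System.out.println(e.getKey() + " = " + e.getValue());
            }
        }
    }
}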

The iterators returned by all of HashMap's collection view methods are fail-fast: if the map is structurally modified at any time after the iterator is created, in any way except through the iterator's own remove method, the iterator throws a ConcurrentModificationException. Thus, in the face of concurrent modification, the iterator fails quickly and cleanly rather than risking arbitrary, non-deterministic behavior.

Please note that the fail-fast behavior of an iterator cannot be guaranteed in the presence of unsynchronized concurrent modification. Fail-fast iterators throw ConcurrentModificationException on a best-effort basis. Therefore, it would be wrong to write a program that depends on this exception for its correctness: the fail-fast behavior of iterators should be used only to detect bugs.
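A minimal sketch that reliably triggers the fail-fast behavior in a single thread (structural modification during iteration):

import java.util.HashMap;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        for (String key : map.keySet()) {
            // Structural modification while iterating: the iterator's next
            // call detects the modCount change and throws
            // ConcurrentModificationException
            map.remove("b");
        }
    }
}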

Constructors

HashMap has four constructors in total:

  • No-argument constructor: default initial capacity of 16 and default load factor of 0.75

  • Specify the initial capacity; the load factor defaults to 0.75

  • Specify both the initial capacity and the load factor

  • Construct from an existing Map

Constructors 1, 2 and 4 all delegate to constructor 3; constructor 4 is just a convenient way to build a HashMap from an existing Map. So the focus here is on the implementations of constructors 3 and 4 (a quick usage sketch follows this list).
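Before reading the source, here is how each of the four constructors is invoked (a minimal sketch; the capacities shown are made up for illustration):

Map<String, Integer> m1 = new HashMap<>();         // constructor 1: capacity 16, load factor 0.75
Map<String, Integer> m2 = new HashMap<>(32);       // constructor 2: capacity 32, load factor 0.75
Map<String, Integer> m3 = new HashMap<>(32, 0.5f); // constructor 3: capacity 32, load factor 0.5
Map<String, Integer> m4 = new HashMap<>(m3);       // constructor 4: copies the entries of m3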

public class HashMap<K,V>
        extends AbstractMap<K,V>
        implements Map<K,V>, Cloneable, Serializable {

    //......

    // Empty table
    static final Entry<?,?>[] EMPTY_TABLE = {};
    // The hash table
    transient Entry<K,V>[] table = (Entry<K,V>[]) EMPTY_TABLE;

    // Resize threshold: when the container's size reaches this value,
    // the container is resized. threshold = capacity * load factor.
    // If table == EMPTY_TABLE, this value is used as the initial capacity
    // when the new hash table is created.
    int threshold;

    // Load factor
    final float loadFactor;

    // Constructor 3: specify the initial capacity and load factor
    public HashMap(int initialCapacity, float loadFactor) {
        // Validate the arguments
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " + initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " + loadFactor);
        // Set the load factor
        this.loadFactor = loadFactor;
        // By default the threshold equals the initial capacity
        threshold = initialCapacity;
        init();
    }

    // Constructor 4: build a new HashMap from the given map
    public HashMap(Map<? extends K, ? extends V> m) {
        this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1,
                      DEFAULT_INITIAL_CAPACITY), DEFAULT_LOAD_FACTOR);
        // Allocate the hash table
        inflateTable(threshold);
        putAllForCreate(m);
    }

    //......

}

Points to note about the source above:

  1. By default, the resize threshold is initially set equal to the initial capacity (16 by default). While the hash table is still empty, HashMap uses threshold as the initial capacity when it builds the internal hash table, which is in essence an array.
  2. The inflateTable method is what creates the hash table, i.e. allocates the memory for the table array (inflate means "to expand" or "to blow up"; the method is described in detail later). Note, however, that the constructor taking an initial capacity and a load factor does not call inflateTable right away. Searching the source for all callers of inflateTable turns up:
graph LR
HashMap(Map) constructor --> inflateTable
put --> inflateTable
putAll --> inflateTable
clone --> inflateTable
readObject --> inflateTable

At first glance, only the constructor taking a Map calls inflateTable. Internally, however, the HashMap(Map) constructor first calls the HashMap(int initialCapacity, float loadFactor) constructor to set up the capacity and load factor, and only then calls inflateTable. In short: apart from the Map-argument constructor, a HashMap does not create its hash table during initialization; the table is created lazily (on the first put, for example).

Call logic

To better understand the code, the figure below shows the call relationships between the methods:
[Figure: call relationships between the HashMap methods]
Internal data structure

The internal data structure HashMap maintains is an array plus linked lists. Each key-value pair stored in the HashMap is held in an Entry, a static inner class, structured as shown in the figure: [Figure: the table array with a chain of Entry objects per bucket]
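For reference, a sketch of the JDK 7 Entry inner class (abridged, methods omitted):

static class Entry<K,V> implements Map.Entry<K,V> {
    final K key;
    V value;
    Entry<K,V> next; // next entry in the same bucket's chain
    int hash;        // cached hash code of the key

    Entry(int h, K k, V v, Entry<K,V> n) {
        value = v;
        next = n;
        key = k;
        hash = h;
    }
    // getKey(), getValue(), setValue(), equals(), hashCode(), toString() omitted
}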
Implementation of put

public V put(K key, V value) {
    if (table == EMPTY_TABLE) {
        inflateTable(threshold);
    }
    if (key == null)
        return putForNullKey(value);
    int hash = hash(key);
    int i = indexFor(hash, table.length);
    for (Entry<K,V> e = table[i]; e != null; e = e.next) {
        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
            V oldValue = e.value;
            e.value = value;
            e.recordAccess(this);
            return oldValue;
        }
    }
    modCount++;
    addEntry(hash, key, value, i);
    return null;
}

void addEntry(int hash, K key, V value, int bucketIndex) {
    // If size has reached the threshold and the target bucket is not empty
    if ((size >= threshold) && (null != table[bucketIndex])) {
        // Resize: double the current table length
        resize(2 * table.length);
        // Recompute the key's hash
        hash = (null != key) ? hash(key) : 0;
        // Recompute the bucket index for the new table length
        bucketIndex = indexFor(hash, table.length);
    }
    createEntry(hash, key, value, bucketIndex);
}
  1. put first checks whether the hash table is empty; if it is, the hash table (the array mentioned in the internal data structure above) is built, so there is somewhere to store the entry.
  2. To store a key-value pair, a hash code is needed: hash(key) computes an int hash code for the key, and the storage index into the fixed-length array is then computed from that hash code.
  3. With the index in hand, put looks at the Entry stored at that slot of the hash table. If the Entry is not null, put compares its hash and key with the incoming ones (keys are compared with == first, then equals). If the incoming hash and key match, the value on that Entry is overwritten and the old value is returned directly; otherwise put follows the Entry's next pointer to the next Entry, and so on until the end of the chain.
  4. If the Entry at the computed index is null, or the whole chain was traversed without a hash/key match, addEntry is called to add a new Entry.
  5. addEntry does some pre-processing: it checks whether the number of entries stored in the container has reached the resize threshold, and if so it doubles the capacity. After resizing, the hash code is recomputed, and the storage index is recomputed against the new array length. With that potential work done, it calls createEntry to create the new Entry.
  6. Because the pre-processing has already been done, createEntry does not need to worry about resizing and can simply store the Entry: it wraps the given key and value in a new Entry at the given index, and the new Entry points to the old one (for reference, a sketch of createEntry follows this list).
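createEntry itself is tiny in the JDK 7 sources (shown here as a sketch); it simply inserts the new Entry at the head of the bucket's chain:

void createEntry(int hash, K key, V value, int bucketIndex) {
    Entry<K,V> e = table[bucketIndex];                     // old head of the chain (may be null)
    table[bucketIndex] = new Entry<>(hash, key, value, e); // new entry becomes the new head
    size++;
}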

Building the hash table (inflateTable)

The hash table is built in the inflateTable method:

/**
 * Round a number up to a power of two.
 */
private static int roundUpToPowerOf2(int number) {
    // assert number >= 0 : "number must be non-negative";
    return number >= MAXIMUM_CAPACITY
            ? MAXIMUM_CAPACITY
            : (number > 1) ? Integer.highestOneBit((number - 1) << 1) : 1;
    // Understanding Integer.highestOneBit((number - 1) << 1):
    // e.g. number = 23: 23 - 1 = 22, which is 10110 in binary.
    // Shifting 22 left by one bit (appending a 0 on the right) gives 101100.
    // Integer.highestOneBit() keeps only the highest one bit and zeroes the rest,
    // i.e. 101100 -> 100000, which is 32 in decimal.
}

/**
 * "Inflate" as in to expand or blow up:
 * initialize the hash table, allocating its backing array.
 */
private void inflateTable(int toSize) {
    // Find a power of 2 >= toSize to use as the table capacity
    int capacity = roundUpToPowerOf2(toSize);
    // Compute the new resize threshold: capacity * load factor
    threshold = (int) Math.min(capacity * loadFactor, MAXIMUM_CAPACITY + 1);
    // Create the hash table with the computed capacity
    table = new Entry[capacity];
    // Decide, based on the capacity, whether hashSeed needs initializing
    initHashSeedAsNeeded(capacity);
}

Let's understand the roundUpToPowerOf2 method:

Some computed results of roundUpToPowerOf2:
roundUpToPowerOf2(0) = 1
roundUpToPowerOf2(1) = 1
roundUpToPowerOf2(2) = 2
roundUpToPowerOf2(3) = 4
roundUpToPowerOf2(4) = 4
roundUpToPowerOf2(5) = 8
roundUpToPowerOf2(6) = 8
roundUpToPowerOf2(7) = 8
roundUpToPowerOf2(8) = 8
roundUpToPowerOf2(9) = 16
roundUpToPowerOf2(10) = 16
roundUpToPowerOf2(11) = 16
roundUpToPowerOf2(12) = 16
roundUpToPowerOf2(13) = 16
roundUpToPowerOf2(14) = 16
roundUpToPowerOf2(15) = 16
roundUpToPowerOf2(16) = 16
roundUpToPowerOf2(17) = 32
Worked example for roundUpToPowerOf2(6):
Formula: Integer.highestOneBit((6 - 1) << 1)
Compute 5 << 1:
 00000101
<<1
-------------
 00001010

Binary 1010 is decimal 10; next compute Integer.highestOneBit(10).
That function keeps the highest one bit of its argument and zeroes all lower bits,
so Integer.highestOneBit(10) is binary 1000, i.e. 8.
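The results table above can be reproduced with a standalone snippet (a minimal sketch; MAXIMUM_CAPACITY is inlined as 1 << 30, the value of the constant in the JDK source):

public class RoundUpDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30; // same value as the JDK constant

    static int roundUpToPowerOf2(int number) {
        return number >= MAXIMUM_CAPACITY
                ? MAXIMUM_CAPACITY
                : (number > 1) ? Integer.highestOneBit((number - 1) << 1) : 1;
    }

    public static void main(String[] args) {
        for (int i = 0; i <= 17; i++) {
            System.out.println("roundUpToPowerOf2(" + i + ") = " + roundUpToPowerOf2(i));
        }
    }
}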

It is worth noting that inflateTable ends with a call to initHashSeedAsNeeded(capacity), which decides, based on the capacity, whether to initialize hashSeed. hashSeed defaults to 0; if it is initialized, it is set to a random value.

Alternative hashing and hashSeed

There is a constant ALTERNATIVE_HASHING_THRESHOLD_DEFAULT in the source code, and its javadoc provides some noteworthy information:

/**
 * The default threshold of map capacity above which alternative hashing is
 * used for String keys. Alternative hashing reduces the incidence of
 * collisions due to weak hash code calculation for String keys.
 * <p/>
 * This value may be overridden by defining the system property
 * {@code jdk.map.althashing.threshold}. A property value of {@code 1}
 * forces alternative hashing to be used at all times whereas
 * {@code -1} value ensures that alternative hashing is never used.
 */
 static final int ALTERNATIVE_HASHING_THRESHOLD_DEFAULT = Integer.MAX_VALUE;

Roughly: ALTERNATIVE_HASHING_THRESHOLD_DEFAULT is the default capacity threshold above which alternative hashing is used for String keys. Alternative hashing reduces the incidence of collisions caused by the weak hash code computation for String keys. The value can be overridden with the system property jdk.map.althashing.threshold: a value of 1 forces alternative hashing to be used at all times, while -1 ensures it is never used.

HashMap has a static inner class, Holder, whose role is to initialize ALTERNATIVE_HASHING_THRESHOLD from jdk.map.althashing.threshold and ALTERNATIVE_HASHING_THRESHOLD_DEFAULT once the virtual machine has started. The relevant code:

/**
 * Holder maintains values that can only be initialized
 * after the virtual machine has booted.
 */
private static class Holder {
    /**
     * Table capacity threshold at which alternative hashing kicks in.
     */
    static final int ALTERNATIVE_HASHING_THRESHOLD;

    static {
        // Read the JVM property -Djdk.map.althashing.threshold
        String altThreshold = java.security.AccessController.doPrivileged(
            new sun.security.action.GetPropertyAction(
                "jdk.map.althashing.threshold"));
        int threshold;
        try {
            // If the property is unset, use the default
            threshold = (null != altThreshold)
                    ? Integer.parseInt(altThreshold)
                    : ALTERNATIVE_HASHING_THRESHOLD_DEFAULT;
            // A value of -1 disables alternative hashing.
            // ALTERNATIVE_HASHING_THRESHOLD_DEFAULT is also Integer.MAX_VALUE,
            // so the JDK disables alternative hashing by default.
            if (threshold == -1) {
                threshold = Integer.MAX_VALUE;
            }
            // Any other negative value is an illegal argument
            if (threshold < 0) {
                throw new IllegalArgumentException("value must be positive integer.");
            }
        } catch (IllegalArgumentException failed) {
            throw new Error("Illegal value for 'jdk.map.althashing.threshold'", failed);
        }
        ALTERNATIVE_HASHING_THRESHOLD = threshold;
    }
}

As mentioned earlier, inflateTable ends with a call to initHashSeedAsNeeded(capacity), which decides based on the capacity whether to initialize hashSeed; hashSeed defaults to 0, and if initialized it becomes a random value. Let's look at this method:

/**
 * A randomizing value associated with this instance that is applied to
 * hash code of keys to make hash collisions harder to find. If 0 then
 * alternative hashing is disabled.
 */
transient int hashSeed = 0;

/**
 * Initialize the hash seed if needed.
 */
final boolean initHashSeedAsNeeded(int capacity) {
    // hashSeed != 0 means alternative hashing is currently in use
    boolean currentAltHashing = hashSeed != 0;
    // Use alternative hashing if the VM has booted and
    // the capacity has reached the threshold
    boolean useAltHashing = sun.misc.VM.isBooted() &&
            (capacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD);
    // XOR: false when both flags are false, or both are true
    boolean switching = currentAltHashing ^ useAltHashing;
    if (switching) {
        // Switch the seed to a random value (or back to 0)
        hashSeed = useAltHashing
                ? sun.misc.Hashing.randomHashSeed(this)
                : 0;
    }
    return switching;
}

As the comment on the hashSeed field says, it is a random hash seed that is applied when computing the hash code of keys, with the aim of making hash collisions harder to find. If hashSeed == 0, alternative hashing is disabled.

The ALTERNATIVE_HASHING_THRESHOLD maintained by Holder is the threshold that triggers alternative hashing: once the capacity of the container (note: the capacity, not the actual size) reaches this value, the container enables alternative hashing.

Holder tries to read the -Djdk.map.althashing.threshold JVM startup property and assign it to ALTERNATIVE_HASHING_THRESHOLD. Its value has the following meanings:

  • jdk.map.althashing.threshold = 1: always use alternative hashing
  • jdk.map.althashing.threshold = -1: alternative hashing is disabled (the default behavior)

In the initHashSeedAsNeeded(int capacity) method, if the container's capacity >= ALTERNATIVE_HASHING_THRESHOLD, a random hash seed hashSeed is generated; that seed is then used when put invokes the hash method:

/**
 * Retrieve the key's hash code and apply a supplemental hash function
 * to form the final hash. This is critical because HashMap uses
 * power-of-two length hash tables, which otherwise encounter collisions
 * for hashCodes that do not differ in lower bits.
 * Note: null keys always map to hash 0, thus index 0.
 */
final int hash(Object k) {
    // If the hash seed is a random value, use alternative hashing
    // (call chain: inflateTable() --> initHashSeedAsNeeded() --> hash();
    // initHashSeedAsNeeded() has already decided whether to initialize the seed)
    int h = hashSeed;
    if (0 != h && k instanceof String) {
        return sun.misc.Hashing.stringHash32((String) k);
    }
    h ^= k.hashCode();
    // This function ensures that hashCodes that differ only by
    // constant multiples at each bit position have a bounded
    // number of collisions (approximately 8 at default load factor).
    h ^= (h >>> 20) ^ (h >>> 12);
    return h ^ (h >>> 7) ^ (h >>> 4);
}

Computing the storage index (indexFor)

/**
 * Compute the hash table index from the hash code.
 */
static int indexFor(int h, int length) {
    // assert Integer.bitCount(length) == 1 : "length must be a non-zero power of 2";
    return h & (length - 1);
}

The code is simple, but there are a few interesting points here.

Why the capacity is designed to be a power of 2

Note that length is essentially the capacity of the internal array; also note that it is a power of 2 (2^n), not just any multiple of 2. Look at the following test code:

public class Main {
    static final int hash(Object k) {
        int hashSeed = 0;
        int h = hashSeed;
        if (0 != h && k instanceof String) {
            return sun.misc.Hashing.stringHash32((String) k);
        }
        h ^= k.hashCode();
        h ^= (h >>> 20) ^ (h >>> 12);
        return h ^ (h >>> 7) ^ (h >>> 4);
    }

    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        String key = "14587";
        int h = hash(key);
        int capacity = 16;
        for (int i = 0; i < 10; i++) {
            System.out.println(String.format("hash: %d, capacity: %d, index: %d",
                    h,                            // the same hash code
                    (capacity << i),              // different capacities
                    indexFor(h, capacity << i))); // the computed index
        }
    }
    // key: hello
    // hash: 96207088, capacity: 16, index: 0
    // hash: 96207088, capacity: 32, index: 16
    // hash: 96207088, capacity: 64, index: 48
    // hash: 96207088, capacity: 128, index: 112
    // hash: 96207088, capacity: 256, index: 240
    // hash: 96207088, capacity: 512, index: 240
    // hash: 96207088, capacity: 1024, index: 240
    // hash: 96207088, capacity: 2048, index: 240
    // hash: 96207088, capacity: 4096, index: 240
    // hash: 96207088, capacity: 8192, index: 240
    // key: 4
    // hash: 55, capacity: 16, index: 7
    // hash: 55, capacity: 32, index: 23
    // hash: 55, capacity: 64, index: 55
    // hash: 55, capacity: 128, index: 55
    // hash: 55, capacity: 256, index: 55
    // hash: 55, capacity: 512, index: 55
    // hash: 55, capacity: 1024, index: 55
    // hash: 55, capacity: 2048, index: 55
    // hash: 55, capacity: 4096, index: 55
    // hash: 55, capacity: 8192, index: 55
    // key: 14587
    // hash: 48489485, capacity: 16, index: 13
    // hash: 48489485, capacity: 32, index: 13
    // hash: 48489485, capacity: 64, index: 13
    // hash: 48489485, capacity: 128, index: 13
    // hash: 48489485, capacity: 256, index: 13
    // hash: 48489485, capacity: 512, index: 13
    // hash: 48489485, capacity: 1024, index: 13
    // hash: 48489485, capacity: 2048, index: 1037
    // hash: 48489485, capacity: 4096, index: 1037
    // hash: 48489485, capacity: 8192, index: 1037
}

The hash and indexFor methods above are copied straight from the HashMap source (hashSeed = 0 is HashMap's default value). main computes the key's hash code, then computes the array index from the hash code and the array length, exactly as the put logic does. As the test results show, for the same hash code, across multiple capacity doublings, the index computed by indexFor fluctuates very little; this reduces how many Entry objects have to be moved when the table is resized.

Take the key "4" and trace what indexFor computes as the capacity goes through 16, 32, 64, ...

The hash code of the string "4" is 55 (binary 110111)
When length = 16:
 h & (length-1)
= 55 & (16-1)
= 110111 & 1111
When length = 32:
 h & (length-1)
= 55 & (32-1)
= 110111 & 11111
When length = 64:
 h & (length-1)
= 55 & (64-1)
= 110111 & 111111

[Figure: bitwise AND of the hash code with length-1 as the capacity doubles]
Since each resize doubles the capacity (capacity × 2), the mask length-1 gains one high bit at a time. Once the mask covers all significant bits of the hash code (to the left of the red dotted line in the figure), every further AND with h produces the same result, so the computed index stops changing. Resizing can therefore change the index, but the index stays relatively stable.

Just think: if the capacity were 17, 33, 65, ..., then in length-1 only the highest (leftmost) bit would be 1 and all the remaining bits 0, so different hash codes ANDed with length-1 would easily produce duplicate indices. When every bit of length-1 is 1, the computed indices are distributed more uniformly, which reduces hash collisions.

To sum up, the capacity is designed to be a power of 2 so that:

  • In put, indexFor is called to compute the index; a power-of-two capacity makes the indices relatively uniform, reducing hash collisions
  • In transfer, the resize-related method, indexFor is called to recompute indices; a power-of-two capacity keeps the recomputed indices relatively stable after a resize, reducing element movement

Resizing and thread-safety issues

/**
 * Rehashes the contents of this map into a new array with a
 * larger capacity. This method is called automatically when the
 * number of keys in this map reaches its threshold.
 *
 * If current capacity is MAXIMUM_CAPACITY, this method does not
 * resize the map, but sets threshold to Integer.MAX_VALUE.
 * This has the effect of preventing future calls.
 *
 * @param newCapacity the new capacity, MUST be a power of two;
 *        must be greater than current capacity unless current
 *        capacity is MAXIMUM_CAPACITY (in which case value
 *        is irrelevant).
 */
void resize(int newCapacity) {
    // Cache the old hash table
    Entry[] oldTable = table;
    int oldCapacity = oldTable.length;
    if (oldCapacity == MAXIMUM_CAPACITY) {
        threshold = Integer.MAX_VALUE;
        return;
    }
    // Create a new hash table with the new capacity
    Entry[] newTable = new Entry[newCapacity];
    transfer(newTable, initHashSeedAsNeeded(newCapacity));
    table = newTable;
    threshold = (int) Math.min(newCapacity * loadFactor, MAXIMUM_CAPACITY + 1);
}

/**
 * Transfer all entries from the current hash table to newTable.
 */
void transfer(Entry[] newTable, boolean rehash) {
    int newCapacity = newTable.length;
    for (Entry<K,V> e : table) {
        while (null != e) {
            Entry<K,V> next = e.next;
            if (rehash) {
                e.hash = null == e.key ? 0 : hash(e.key);
            }
            int i = indexFor(e.hash, newCapacity);
            e.next = newTable[i]; // head-insert into the new bucket
            newTable[i] = e;
            e = next;
        }
    }
}

[Figure: the transfer process reversing a bucket's chain]
As the figure of the transfer process shows, each linked list is reversed after the transfer: 3 -> 7 -> 9 becomes 9 -> 7 -> 3. In a single-threaded environment, no closed loop can form.

But in a multithreaded environment, several threads may call transfer concurrently; transfer reads the shared table field and rewires the next pointers of the Entry objects it moves. Because the transfer reverses each chain, a closed-loop reference such as 3 -> 7 -> 9 -> 3 can appear. After that, a get that walks this bucket spins in an infinite loop. A sketch of the hazard follows.
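This is exactly why a plain HashMap must not be shared between writer threads on JDK 7. A hedged sketch of the hazard and the standard fix (the loop is timing-dependent and not guaranteed to reproduce; ConcurrentHashMap is the usual remedy):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ResizeRaceSketch {
    public static void main(String[] args) {
        // Hazardous: two threads putting into one unsynchronized HashMap.
        // On JDK 7, concurrent resizes can corrupt a bucket chain into a cycle,
        // after which lookups that walk that chain never terminate.
        final Map<Integer, Integer> unsafe = new HashMap<>();
        Runnable writer = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++) {
                    unsafe.put(i, i); // races with the other thread's resize
                }
            }
        };
        new Thread(writer).start();
        new Thread(writer).start();

        // The fix: use a map designed for concurrent access instead.
        Map<Integer, Integer> safe = new ConcurrentHashMap<>();
        safe.put(1, 1);
    }
}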



Origin: blog.csdn.net/XingXing_Java/article/details/90669081