Java collections: HashMap source code analysis (interview summary)

To organize the knowledge around HashMap, the article is structured as follows:

Table of Contents

1. Main features

2. Inheritance

3. Data structure

Array + linked list + red-black tree (JDK 1.8 added the red-black tree)

Main elements

4. Core method analysis

hash() 

comparableClassFor()

tableSizeFor()

HashMap() 

get()

put()

resize()

treeifyBin()

compute() 

5. Interview questions

The difference between HashMap and Hashtable

Talk about the optimizations to HashMap between JDK 1.7 and 1.8

Is HashMap thread-safe? What are the ways to make it thread-safe?

How is the hash function designed?

Can you tell me in detail, what are the benefits?

Why does this design improve the hash distribution?

How does LinkedHashMap maintain ordering?

Talk through the logic of the main functions: get(), put(), resize(), replace(), remove()

6. Reference materials


1. Main features

  • The underlying implementation is an array + linked list + red-black tree (separate chaining, the "zipper" method)
  • Keys are unique; duplicates are not allowed. If a custom object is used as a key, its hashCode and equals methods must be overridden
  • Null keys and null values are allowed, but at most one null key (illustrated in the sketch below)
  • Elements are unordered, and the iteration order can change over time (for example, after a resize)
  • Insertion and lookup are essentially O(1) in time (provided the hash function spreads the elements evenly across the slots)
  • Two key tuning factors: initial capacity and load factor
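
A quick, hedged sketch of several of these points (the class name FeaturesDemo is ours, not from the JDK source):

import java.util.HashMap;
import java.util.Map;

public class FeaturesDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put(null, 0);        // one null key is allowed
        map.put("a", null);      // null values are allowed
        map.put("a", 1);         // duplicate key: the old value is overwritten
        map.put("b", 2);
        // iteration order is unspecified and may change after a resize
        System.out.println(map); // e.g. {null=0, a=1, b=2}
    }
}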

2. Inheritance

public class HashMap<K,V> extends AbstractMap<K,V>
        implements Map<K,V>, Cloneable, Serializable {
    // ...
}

3. Data structure

Array + linked list + red-black tree (JDK 1.8 added the red-black tree)


Main elements

    /**
     * Default initial capacity 16 -- MUST be a power of two.
     * 01 shifted left by four bits, i.e. 2^4.
     * The index is computed as hashCode & (length - 1); 15 (all 1-bits) beats 14 here,
     * because ANDing the hash code with all 1-bits yields more distinct results and wastes no slots.
     * With length a power of two, different keys are less likely to compute the same index,
     * so the data spreads more evenly across the array.
     */
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

    /**
     * Maximum capacity, must be a power of two: 2^30.
     */
    static final int MAXIMUM_CAPACITY = 1 << 30;

    /**
     * Default load factor.
     */
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    /**
     * TREEIFY_THRESHOLD: bin count at which a linked list is converted to a red-black tree.
     */
    static final int TREEIFY_THRESHOLD = 8;

    /**
     * Bin count at which a red-black tree is converted back to a linked list.
     */
    static final int UNTREEIFY_THRESHOLD = 6;

    /**
     * Minimum table capacity before bins may be treeified.
     * At least 4 * TREEIFY_THRESHOLD, so that small tables are resized rather than treeified.
     */
    static final int MIN_TREEIFY_CAPACITY = 64;

    /**
     * Basic hash bin node (linked-list node), implementing Map.Entry.
     * K and V are the key and value types of the Map<K,V>.
     */
    static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;
        final K key;
        V value;
        Node<K,V> next;

        Node(int hash, K key, V value, Node<K,V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }
        @Override
        public final K getKey()        { return key; }
        @Override
        public final V getValue()      { return value; }
        @Override
        public final String toString() { return key + "=" + value; }
        @Override
        public final int hashCode() {
            return Objects.hashCode(key) ^ Objects.hashCode(value);
        }
        @Override
        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }
        @Override
        public final boolean equals(Object o) {
            // the same object (same storage location)
            if (o == this) {
                return true;
            }
            // instanceof is a binary operator in Java that tests whether an object is an instance of a class
            if (o instanceof Map.Entry) {
                Map.Entry<?,?> e = (Map.Entry<?,?>)o;
                return Objects.equals(key, e.getKey()) &&
                       Objects.equals(value, e.getValue());
            }
            return false;
        }
    }
    // Fields marked transient are skipped when the object is serialized.
    // The bucket array:
    transient Node<K,V>[] table;

    /**
     * Holds cached entrySet(). Note that AbstractMap fields are used
     * for keySet() and values().
     */
    transient Set<Map.Entry<K,V>> entrySet;

    // number of key-value mappings
    transient int size;

    // count of structural modifications, used by fail-fast iterators
    transient int modCount;

    /**
     * The next size value at which to resize (capacity * load factor).
     */
    int threshold;
    /**
     * The load factor for the hash table.
     */
    final float loadFactor;

A good hash algorithm and resizing policy keep the probability of hash collisions low while letting the bucket array (Node<K,V>[] table) take up as little space as possible.

4. Core method analysis

hash() 

Step 1: take key.hashCode().

Step 2: XOR the high 16 bits into the low 16 bits.
(>>> is the unsigned right shift, also called the logical right shift: the vacated high bits are always filled with 0, whether the number is positive or negative.)

The hash algorithm is essentially three steps: take the key's hashCode, mix in the high bits, and reduce the result to an index (the "modulo" step, implemented as (n - 1) & hash).

static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

(The original article shows a diagram of the full process that turns key.hashCode() into an array index.)

 

Through this calculation it can be seen that the extra "perturbation" spreads the high bits into the low bits, so the resulting array index collides less often.
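
As a hedged sketch (the class name HashDemo is ours), the following reproduces the perturbation and the index computation for a sample key, assuming a table capacity of 16:

public class HashDemo {
    // same perturbation as HashMap.hash(): XOR the high 16 bits into the low 16 bits
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        String key = "hello";
        int h = key.hashCode();       // raw hash code
        int mixed = hash(key);        // high bits mixed into the low bits
        int index = (16 - 1) & mixed; // index into a table of capacity 16
        System.out.println(Integer.toBinaryString(h));     // raw bits
        System.out.println(Integer.toBinaryString(mixed)); // perturbed bits
        System.out.println(index);                         // bucket index, 0..15
    }
}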

comparableClassFor()

 /**
     * Returns x's Class if it is of the form "class C implements
     * Comparable<C>", else null.
     */
    static Class<?> comparableClassFor(Object x) {
        if (x instanceof Comparable) {
            Class<?> c; Type[] ts, as; Type t; ParameterizedType p;
            if ((c = x.getClass()) == String.class) // bypass checks
                return c;
            if ((ts = c.getGenericInterfaces()) != null) {
                for (int i = 0; i < ts.length; ++i) {
                    if (((t = ts[i]) instanceof ParameterizedType) &&
                            ((p = (ParameterizedType)t).getRawType() ==
                                    Comparable.class) &&
                            (as = p.getActualTypeArguments()) != null &&
                            as.length == 1 && as[0] == c) // type arg is c
                        return c;
                }
            }
        }
        return null;
    }

    /**
     * Returns k.compareTo(x) if x matches kc (k's screened comparable
     * class), else 0.
     */
    @SuppressWarnings({"rawtypes","unchecked"}) // for cast to Comparable
    static int compareComparables(Class<?> kc, Object k, Object x) {
        return (x == null || x.getClass() != kc ? 0 :
                ((Comparable)k).compareTo(x));
    }

   

tableSizeFor()

[Function] Returns the smallest power of two greater than or equal to the given target capacity, i.e. rounds the requested capacity up to the nearest 2^N.

[Interpretation]

For details, see: finding the smallest power of two not less than a given number.

    // Bit smearing: turn every bit below the highest set bit into 1;
    // adding 1 at the end then carries into the next bit, producing the nearest power of two.
    static final int tableSizeFor(int cap) {
        // Subtract 1 first so that if cap is already a power of two, the result is cap itself, not cap * 2
        int n = cap - 1;
        // Each step ORs n with an unsigned right shift (>>>) of itself,
        // doubling the run of 1-bits below the highest set bit:
        n |= n >>> 1;   // top 2 bits set
        n |= n >>> 2;   // top 4 bits set
        n |= n >>> 4;   // top 8 bits set
        n |= n >>> 8;   // top 16 bits set
        n |= n >>> 16;  // all lower bits set; shifts of 1+2+4+8+16 = 31 cover an int's 31 value bits, so stopping at 16 suffices
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }
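
A quick sanity check (a sketch, not JDK code) of what tableSizeFor produces for a few inputs:

public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // copy of the bit-smearing logic above, so the demo is self-contained
    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1; n |= n >>> 2; n |= n >>> 4; n |= n >>> 8; n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(10)); // 16
        System.out.println(tableSizeFor(16)); // 16 (already a power of two)
        System.out.println(tableSizeFor(17)); // 32
    }
}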

HashMap() 

  The constructor accepts an initial capacity and a load factor and validates both:
  the initial capacity may not be negative, and anything above the maximum capacity 1 << 30 (2^30) is clamped to it.


    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0) {
            throw new IllegalArgumentException("Illegal initial capacity: " +
                    initialCapacity);
        }
        if (initialCapacity > MAXIMUM_CAPACITY) {
            initialCapacity = MAXIMUM_CAPACITY;
        }
        if (loadFactor <= 0 || Float.isNaN(loadFactor)) {
            throw new IllegalArgumentException("Illegal load factor: " + loadFactor);
        }
        this.loadFactor = loadFactor;
        // threshold temporarily stores the rounded-up capacity until the
        // table is allocated lazily by resize()
        this.threshold = tableSizeFor(initialCapacity);
    }

    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    public HashMap() {
        this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
    }

    /**
     * @param   m the map whose mappings are to be placed in this map
     * @throws  NullPointerException if the specified map is null
     */
    public HashMap(Map<? extends K, ? extends V> m) {
        this.loadFactor = DEFAULT_LOAD_FACTOR;
        putMapEntries(m, false);
    }
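
A small usage sketch (class name ours): the requested capacity is rounded up to a power of two and stashed in threshold, and the table itself is only allocated on the first insertion:

import java.util.HashMap;
import java.util.Map;

public class ConstructorDemo {
    public static void main(String[] args) {
        // 13 is rounded up to 16 internally; the array is created lazily on first put
        Map<String, Integer> byCapacity = new HashMap<>(13);
        Map<String, Integer> byBoth = new HashMap<>(32, 0.5f);
        Map<String, Integer> defaults = new HashMap<>(); // capacity 16, load factor 0.75

        byCapacity.put("a", 1);
        byBoth.putAll(byCapacity);
        defaults.putAll(byBoth);
        System.out.println(defaults); // {a=1}
    }
}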

get()

public V get(Object key) {
    Node<K,V> e;
    // compute the hash first, just as put does
    return (e = getNode(hash(key), key)) == null ? null : e.value;
}

final Node<K,V> getNode(int hash, Object key) {
    Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
    // tab points at the table, n is the table length,
    // first is the head node of the bucket at index (n - 1) & hash
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & hash]) != null) {
        // if the first element in the bucket already matches, return it directly
        if (first.hash == hash &&
            ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        // otherwise traverse the rest of the bucket
        if ((e = first.next) != null) {
            if (first instanceof TreeNode)
                // tree bin: delegate to the tree node's get method
                return ((TreeNode<K,V>)first).getTreeNode(hash, key);
            do {
                // do-while over every remaining node in the linked list
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}
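
One practical consequence of the code above: get() returns null both when the key is absent and when the key maps to a null value, so containsKey() is needed to tell the two apart. A small sketch:

import java.util.HashMap;
import java.util.Map;

public class GetDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("present", null);

        System.out.println(map.get("present"));         // null (value is null)
        System.out.println(map.get("absent"));          // null (key is missing)
        System.out.println(map.containsKey("present")); // true
        System.out.println(map.containsKey("absent"));  // false
    }
}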

put()

The flow of put() is summarized below, following a well-known Zhihu answer whose diagram describes the process very vividly (link: https://zhuanlan.zhihu.com/p/21673805):


     If the computed array slot is empty, insert the element there directly;
     if the slot is occupied, compare its head node's key with the key being inserted:
     if the keys are the same, overwrite the value directly;
     if the keys differ, check whether p is a tree node;
     if it is, call e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value) to add the element;
     if it is not, traverse to the end of the linked list and insert there.

public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}
 
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    // 1. If the table is null or its length is 0, initialize it via resize()
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    // 2. Compute the index from the hash; if the head node at that index is null,
    //    simply create a new node there
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    else {
        // the slot is occupied, so search within the bucket
        Node<K,V> e; K k;
        // 3. If p's hash and key match the arguments, p is the target node; assign it to e
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            e = p;
        // 4. If p is a TreeNode, delegate to the red-black tree's putTreeVal
        else if (p instanceof TreeNode)
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        else {
            // 5. p is an ordinary linked-list node; walk the list, counting nodes in binCount
            for (int binCount = 0; ; ++binCount) {
                // 6. If p.next is null, the key was not found: append a new node at the tail
                if ((e = p.next) == null) {
                    p.next = newNode(hash, key, value, null);
                    // 7. If the bin now holds 8 nodes, convert it to a red-black tree;
                    //    the -1 is because the loop starts at p's successor
                    if (binCount >= TREEIFY_THRESHOLD - 1)
                        treeifyBin(tab, hash);
                    break;
                }
                // 8. If e's hash and key both match the arguments, e is the target node; stop
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;  // advance p to the next node
            }
        }
        // 9. If e is non-null the key already exists: overwrite its value
        //    (unless onlyIfAbsent forbids it) and return the old value
        if (e != null) {
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            afterNodeAccess(e); // hook used by LinkedHashMap
            return oldValue;
        }
    }
    ++modCount;
    // 10. If the new size exceeds the threshold, grow the table via resize()
    if (++size > threshold)
        resize();
    afterNodeInsertion(evict);  // hook used by LinkedHashMap
    return null;
}
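
A usage sketch of the behavior in step 9: put() returns the previous value when it overwrites, null when the key is new, and putIfAbsent() exercises the onlyIfAbsent branch:

import java.util.HashMap;
import java.util.Map;

public class PutDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        System.out.println(map.put("k", 1));         // null: the key was new
        System.out.println(map.put("k", 2));         // 1: the old value is returned
        System.out.println(map.putIfAbsent("k", 3)); // 2: onlyIfAbsent, no overwrite
        System.out.println(map.get("k"));            // 2
    }
}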

resize()

Resizing: in JDK 1.7, every entry's index had to be recomputed from its hash after the table length changed, which is expensive. JDK 1.8 is cleverer: because the capacity doubles, each entry either stays at its old index or moves to old index + oldCap, decided by a single bit test (e.hash & oldCap), so no rehashing is needed.

final Node<K,V>[] resize() {
    Node<K,V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    // 1. The old table has a non-zero capacity, i.e. it is not empty
    if (oldCap > 0) {
        // 1.1 If the old capacity already reached the maximum, set the threshold to
        // Integer.MAX_VALUE and return the old table unchanged: oldCap * 2 would exceed
        // Integer.MAX_VALUE, so no redistribution is possible; only the threshold grows
        if (oldCap >= MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        // 1.2 Set newCap to twice oldCap; if newCap is below the maximum capacity
        // and oldCap >= 16, double the threshold as well
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr << 1; // double threshold
    }
    // 2. Old capacity is 0 but the old threshold is positive: the constructor stashed
    // the initial capacity in threshold, so use it as the new capacity
    else if (oldThr > 0)
        newCap = oldThr;
    else {
        // 3. Both old capacity and old threshold are 0: the map was created by the
        // no-argument constructor, so use the default capacity and threshold
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    // 4. If the new threshold is still 0, derive it as new capacity * load factor
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    // 5. Install the newly computed threshold, allocate the new table with the
    // new capacity, and make it the current table
    threshold = newThr;
    @SuppressWarnings({"rawtypes","unchecked"})
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    table = newTab;
    // 6. If the old table is non-empty, walk every bucket and move its nodes over
    if (oldTab != null) {
        for (int j = 0; j < oldCap; ++j) {
            Node<K,V> e;
            if ((e = oldTab[j]) != null) {  // e is the head node of old bucket j
                oldTab[j] = null; // clear the old slot so the GC can reclaim it
                // 7. If e.next is null the bucket holds a single node: compute its
                // index in the new table and place it there directly
                if (e.next == null)
                    newTab[e.hash & (newCap - 1)] = e;
                // 8. Tree bin: redistribute via the red-black tree's split
                // (conceptually the same lo/hi split as the linked-list case)
                else if (e instanceof TreeNode)
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                else { // preserve order
                    // 9. Ordinary linked list: split it into two lists
                    Node<K,V> loHead = null, loTail = null; // nodes staying at the original index
                    Node<K,V> hiHead = null, hiTail = null; // nodes moving to original index + oldCap
                    Node<K,V> next;
                    do {
                        next = e.next;
                        // 9.1 If (e.hash & oldCap) == 0, the node's index in the new
                        // table is the same as in the old one
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null) // first node of the lo list
                                loHead = e;     // record it as the head
                            else
                                loTail.next = e;    // otherwise append after loTail
                            loTail = e; // advance the tail pointer
                        }
                        // 9.2 Otherwise the node's new index is the old index + oldCap
                        else {
                            if (hiTail == null) // first node of the hi list
                                hiHead = e;     // record it as the head
                            else
                                hiTail.next = e;    // otherwise append after hiTail
                            hiTail = e; // advance the tail pointer
                        }
                    } while ((e = next) != null);
                    // 10. If the lo list is non-empty (some nodes keep their original
                    // index), terminate it and install its head at index j
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    // 11. If the hi list is non-empty (some nodes move to index
                    // j + oldCap), terminate it and install its head there
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    // 12. Return the new table
    return newTab;
}
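
A hedged worked example of the split in steps 9.1/9.2 (class name ours): two hashes that share a bucket at capacity 16 are separated by the single bit that oldCap tests:

public class ResizeSplitDemo {
    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;
        // same low 4 bits (bucket 5 at capacity 16), but bit 4 differs --
        // and bit 4 is exactly the bit that (e.hash & oldCap) tests
        int h1 = 0b0_0101;  //  5: bit 4 is 0
        int h2 = 0b1_0101;  // 21: bit 4 is 1

        System.out.println(h1 & (oldCap - 1)); // 5  -> old index 5
        System.out.println(h2 & (oldCap - 1)); // 5  -> old index 5
        System.out.println(h1 & (newCap - 1)); // 5  -> stays at 5       (h1 & oldCap == 0)
        System.out.println(h2 & (newCap - 1)); // 21 -> moves to 5 + 16  (h2 & oldCap != 0)
    }
}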

treeifyBin()

/**
 * Converts the linked-list nodes of a bin into red-black tree nodes
 */
final void treeifyBin(Node<K,V>[] tab, int hash) {
    int n, index; Node<K,V> e;
    // 1. If the table is null or shorter than 64 (MIN_TREEIFY_CAPACITY),
    //    resize instead of treeifying
    if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
        resize();
    // 2. Compute the index from the hash, take the head node at that index as e,
    //    and traverse the bin's linked list starting from e
    else if ((e = tab[index = (n - 1) & hash]) != null) {
        TreeNode<K,V> hd = null, tl = null;
        do {
            // 3. Convert the list node into a tree node
            TreeNode<K,V> p = replacementTreeNode(e, null);
            // 4. On the first iteration (tl == null), record the head node in hd
            if (tl == null)
                hd = p;
            else {
                // 5. On later iterations, wire up the doubly linked list:
                p.prev = tl;    // current node's prev points at the previous node
                tl.next = p;    // previous node's next points at the current node
            }
            // 6. Remember p in tl so the next iteration can link against it
            //    (p.prev = tl and tl.next = p)
            tl = p;
        } while ((e = e.next) != null);
        // 7. Install the new TreeNode head at the table index; if it is non-null,
        //    build the red-black tree rooted at the head node hd
        if ((tab[index] = hd) != null)
            hd.treeify(tab);
    }
}

compute() 

    @Override
    public V compute(K key,
                     BiFunction<? super K, ? super V, ? extends V> remappingFunction) {
        if (remappingFunction == null)
            throw new NullPointerException();
        int hash = hash(key);
        Node<K,V>[] tab; Node<K,V> first; int n, i;
        int binCount = 0;
        TreeNode<K,V> t = null;
        Node<K,V> old = null;
        // resize if the table is missing, empty, or already over the threshold
        if (size > threshold || (tab = table) == null ||
            (n = tab.length) == 0)
            n = (tab = resize()).length;
        // locate the existing node for the key, in a tree bin or a list bin
        if ((first = tab[i = (n - 1) & hash]) != null) {
            if (first instanceof TreeNode)
                old = (t = (TreeNode<K,V>)first).getTreeNode(hash, key);
            else {
                Node<K,V> e = first; K k;
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key || (key != null && key.equals(k)))) {
                        old = e;
                        break;
                    }
                    ++binCount;
                } while ((e = e.next) != null);
            }
        }
        // apply the remapping function to the current value (null if absent)
        V oldValue = (old == null) ? null : old.value;
        V v = remappingFunction.apply(key, oldValue);
        if (old != null) {
            if (v != null) {
                // non-null result for an existing key: overwrite in place
                old.value = v;
                afterNodeAccess(old);
            }
            else
                // null result for an existing key: remove the mapping
                removeNode(hash, key, null, false, true);
        }
        else if (v != null) {
            // non-null result for an absent key: insert a new node
            if (t != null)
                t.putTreeVal(this, tab, hash, key, v);
            else {
                tab[i] = newNode(hash, key, v, first);
                if (binCount >= TREEIFY_THRESHOLD - 1)
                    treeifyBin(tab, hash);
            }
            ++modCount;
            ++size;
            afterNodeInsertion(true);
        }
        return v;
    }
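
A typical use of compute(): maintaining a counter in one call. The remapping function receives the old value (null if the key is absent), its return value becomes the new mapping, and returning null removes the entry:

import java.util.HashMap;
import java.util.Map;

public class ComputeDemo {
    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        for (String word : new String[]{"a", "b", "a"}) {
            // old value is null on the first occurrence, so start at 1
            counts.compute(word, (k, v) -> (v == null) ? 1 : v + 1);
        }
        System.out.println(counts); // {a=2, b=1}

        counts.compute("b", (k, v) -> null); // null result removes the mapping
        System.out.println(counts); // {a=2}
    }
}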

5. Interview questions

The difference between HashMap and Hashtable

  1. HashMap allows null keys and null values; Hashtable does not (see the sketch after this list).
  2. The default initial capacity of HashMap is 16; Hashtable's is 11.
  3. HashMap doubles its capacity when resizing; Hashtable grows to 2 * oldCapacity + 1.
  4. HashMap is not thread-safe; Hashtable is thread-safe.
  5. HashMap perturbs the hash value (high-bit XOR) before use; Hashtable uses hashCode directly.
  6. HashMap drops Hashtable's contains method (keeping containsKey and containsValue).
  7. HashMap extends the AbstractMap class; Hashtable extends the Dictionary class.
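
A quick illustration of difference 1 (a sketch, class name ours): the null key that HashMap accepts makes Hashtable throw:

import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "ok");                // allowed: at most one null key
        System.out.println(hashMap.get(null));  // ok

        Map<String, String> hashtable = new Hashtable<>();
        try {
            hashtable.put(null, "boom");
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null keys"); // always reached
        }
    }
}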

Talk about the optimizations to HashMap between JDK 1.7 and 1.8

  1. The underlying data structure gains a red-black tree. A bin is converted when the array length is >= 64 and the linked list length is > 8. A lookup must traverse the list hanging off an array slot, so an overly long bucket degrades query efficiency; 1.8 therefore adds the red-black tree, which handles insert, delete, update, and lookup faster when a bin holds many entries. When the list shrinks back to <= 6 entries, the tree is converted to a linked list again: linked list lookup is O(n), red-black tree lookup is O(log n).
  2. The hash algorithm is simplified to a single high-bit operation, h ^ (h >>> 16): the four perturbation rounds of 1.7 become one, improving efficiency.
  3. Insertion changes from head insertion to tail insertion. Head insertion reverses the list, and under concurrent resizing it can produce a cycle; tail insertion keeps the order and avoids that failure mode (HashMap still is not thread-safe, though).
  4. The resize() mechanism is optimized. The resize threshold = capacity * load factor (16 * 0.75 by default), and reaching it doubles the capacity. The most expensive part used to be recomputing every index, index = h & (length - 1); in 1.8, after doubling, each element either stays at its original index or moves to original index + oldCap (see the ResizeSplitDemo sketch after the resize() code above), and the relative order within each list is preserved.

Is HashMap thread-safe? What are the ways to make it thread-safe?

 

  1. It is not thread-safe. In 1.7, concurrent use can produce linked-list cycles and duplicate insertions; there are no lock operations anywhere in the source code.
  2. There are three thread-safe alternatives: Hashtable, ConcurrentHashMap, and Collections.synchronizedMap (the latter two are sketched below). Hashtable locks every operation method, so the granularity is too coarse and it has essentially no suitable use case today. Collections.synchronizedMap is an inner class of Collections: the map passed in is wrapped in an internally defined SynchronizedMap object whose methods take a lock, which achieves thread safety. ConcurrentHashMap uses fine-grained locking (segment locks in 1.7; in 1.8, CAS plus synchronized on the current bin's head node), which shrinks the lock granularity and increases concurrency.
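
A minimal sketch of the two commonly recommended options from point 2 (class name ours):

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ThreadSafeMapsDemo {
    public static void main(String[] args) {
        // Option 1: wrap a HashMap; every method synchronizes on a single mutex
        Map<String, Integer> syncMap = Collections.synchronizedMap(new HashMap<>());
        syncMap.put("a", 1);

        // Option 2: ConcurrentHashMap -- fine-grained locking, higher concurrency.
        // Note: like Hashtable, it rejects null keys and null values.
        Map<String, Integer> concurrentMap = new ConcurrentHashMap<>();
        concurrentMap.merge("a", 1, Integer::sum); // atomic read-modify-write
        System.out.println(concurrentMap); // {a=1}
    }
}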

How is the hash function designed?

First take the key's hashCode (32 bits), then XOR the high 16 bits of the hashCode with its low 16 bits.

static final int hash(Object key) {
    int h;
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

Can you tell me in detail, what are the benefits?

This is called the perturbation function; it XORs the high sixteen bits of the hashCode with the low sixteen bits.
There are two benefits:

  1. Bit operations are cheap, so the mixing is efficient.
  2. The high bits no longer sit out of the index computation, so collisions do not increase just because only the low bits are used; the randomness of the hashCode is carried down into the bits that matter.

Why does this design improve the hash distribution?

key.hashCode() yields the key's own 32-bit hash value, which is far too large to index the array directly, so it is ANDed with (array length - 1); any bit ANDed with 0 gives 0.
This shrinks the number, but it also creates a problem: only the lowest few bits survive, so the chance of collision rises.
A way is therefore needed to mix more variability into the bits that are kept; the high-bit XOR in hash() above is designed to solve exactly this.
It XORs the high and low halves of the original hash code together, increasing the randomness of the low bits.

How does LinkedHashMap maintain ordering?

LinkedHashMap internally maintains a doubly linked list with head and tail pointers. Its node type Entry not only inherits the fields of HashMap's Node,
but also adds before and after references identifying the previous and next nodes in that list. Iteration can follow either insertion order or access order.
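
A common application of access order is a small LRU cache; a hedged sketch (class name ours):

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCacheDemo {
    public static void main(String[] args) {
        final int capacity = 2;
        // accessOrder = true: get()/put() move the touched entry to the tail
        Map<String, Integer> lru = new LinkedHashMap<String, Integer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
                return size() > capacity; // evict the least recently used entry
            }
        };
        lru.put("a", 1);
        lru.put("b", 2);
        lru.get("a");    // "a" becomes most recently used
        lru.put("c", 3); // evicts "b", the eldest entry
        System.out.println(lru.keySet()); // [a, c]
    }
}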

Talk through the logic of the main functions: get(), put(), resize(), replace(), remove()

Operations that modify the map (put/replace, remove, etc.) need the following checks and handling:

Is the array empty? A put/replace creates it (via resize); a remove simply returns.

Does the target bucket exist, and does the key exist in it? If the bucket exists, the node is looked up: put/replace overwrite the node if it exists and create one if it does not; remove deletes the node if it exists and returns if it does not.

Is the bucket stored as a linked list or as a red-black tree? The matching list or tree routine is used.

After the operation, has a resize or treeify/untreeify threshold been reached? If so, resize, or convert between red-black tree and linked list.

For other detailed analysis, see the main method analysis above.

6. Reference materials

https://blog.csdn.net/java_wxid/article/details/106896221?utm_source=app

https://zhuanlan.zhihu.com/p/21673805

https://blog.csdn.net/v123411739/article/details/78996181

https://blog.csdn.net/u012211603/article/details/79879944

https://blog.csdn.net/qq_41345773/article/details/92066554

https://blog.csdn.net/jdjdndhj/article/details/54407252

https://blog.csdn.net/u011240877/article/details/53358305
