Looking at HashMap from the perspective of source code

Let's first look at how the class-level Javadoc comment introduces HashMap.

/**
 * Hash table based implementation of the <tt>Map</tt> interface.  This
 * implementation provides all of the optional map operations, and permits
 * <tt>null</tt> values and the <tt>null</tt> key.  (The <tt>HashMap</tt>
 * class is roughly equivalent to <tt>Hashtable</tt>, except that it is
 * unsynchronized and permits nulls.)  This class makes no guarantees as to
 * the order of the map; in particular, it does not guarantee that the order
 * will remain constant over time.
 *
 * <p>This implementation provides constant-time performance for the basic
 * operations (<tt>get</tt> and <tt>put</tt>), assuming the hash function
 * disperses the elements properly among the buckets.  Iteration over
 * collection views requires time proportional to the "capacity" of the
 * <tt>HashMap</tt> instance (the number of buckets) plus its size (the number
 * of key-value mappings).  Thus, it's very important not to set the initial
 * capacity too high (or the load factor too low) if iteration performance is
 * important.
 *
 * <p>An instance of <tt>HashMap</tt> has two parameters that affect its
 * performance: <i>initial capacity</i> and <i>load factor</i>.  The
 * <i>capacity</i> is the number of buckets in the hash table, and the initial
 * capacity is simply the capacity at the time the hash table is created.  The
 * <i>load factor</i> is a measure of how full the hash table is allowed to
 * get before its capacity is automatically increased.  When the number of
 * entries in the hash table exceeds the product of the load factor and the
 * current capacity, the hash table is <i>rehashed</i> (that is, internal data
 * structures are rebuilt) so that the hash table has approximately twice the
 * number of buckets.
 *
 * <p>As a general rule, the default load factor (.75) offers a good
 * tradeoff between time and space costs.  Higher values decrease the
 * space overhead but increase the lookup cost (reflected in most of
 * the operations of the <tt>HashMap</tt> class, including
 * <tt>get</tt> and <tt>put</tt>).  The expected number of entries in
 * the map and its load factor should be taken into account when
 * setting its initial capacity, so as to minimize the number of
 * rehash operations.  If the initial capacity is greater than the
 * maximum number of entries divided by the load factor, no rehash
 * operations will ever occur.
 *
 * <p>If many mappings are to be stored in a <tt>HashMap</tt>
 * instance, creating it with a sufficiently large capacity will allow
 * the mappings to be stored more efficiently than letting it perform
 * automatic rehashing as needed to grow the table.  Note that using
 * many keys with the same {@code hashCode()} is a sure way to slow
 * down performance of any hash table. To ameliorate impact, when keys
 * are {@link Comparable}, this class may use comparison order among
 * keys to help break ties.
 *
 * <p><strong>Note that this implementation is not synchronized.</strong>
 * If multiple threads access a hash map concurrently, and at least one of
 * the threads modifies the map structurally, it <i>must</i> be
 * synchronized externally.  (A structural modification is any operation
 * that adds or deletes one or more mappings; merely changing the value
 * associated with a key that an instance already contains is not a
 * structural modification.)  This is typically accomplished by
 * synchronizing on some object that naturally encapsulates the map.
 *
 * If no such object exists, the map should be "wrapped" using the
 * {@link Collections#synchronizedMap Collections.synchronizedMap}
 * method.  This is best done at creation time, to prevent accidental
 * unsynchronized access to the map:<pre>
 *   Map m = Collections.synchronizedMap(new HashMap(...));</pre>
 *
 * <p>The iterators returned by all of this class's "collection view methods"
 * are <i>fail-fast</i>: if the map is structurally modified at any time after
 * the iterator is created, in any way except through the iterator's own
 * <tt>remove</tt> method, the iterator will throw a
 * {@link ConcurrentModificationException}.  Thus, in the face of concurrent
 * modification, the iterator fails quickly and cleanly, rather than risking
 * arbitrary, non-deterministic behavior at an undetermined time in the
 * future.
 *
 * <p>Note that the fail-fast behavior of an iterator cannot be guaranteed
 * as it is, generally speaking, impossible to make any hard guarantees in the
 * presence of unsynchronized concurrent modification.  Fail-fast iterators
 * throw <tt>ConcurrentModificationException</tt> on a best-effort basis.
 * Therefore, it would be wrong to write a program that depended on this
 * exception for its correctness: <i>the fail-fast behavior of iterators
 * should be used only to detect bugs.</i>
 *
 * <p>This class is a member of the
 * <a href="{@docRoot}/../technotes/guides/collections/index.html">
 * Java Collections Framework</a>.
 *
 * @param <K> the type of keys maintained by this map
 * @param <V> the type of mapped values
 *
 * @author  Doug Lea
 * @author  Josh Bloch
 * @author  Arthur van Hoff
 * @author  Neal Gafter
 * @see     Object#hashCode()
 * @see     Collection
 * @see     Map
 * @see     TreeMap
 * @see     Hashtable
 * @since   1.2
 */

Translation:

A hash table based implementation of the Map interface. It provides all of the optional map operations and permits null keys and null values. HashMap is roughly equivalent to Hashtable, except that it is not thread-safe and permits nulls. HashMap makes no guarantees about the order of its elements; since the underlying structure is a hash table that gets rehashed as it grows, it does not even guarantee that the order will remain constant over time.

HashMap provides constant-time performance for the basic operations (get and put), assuming the hash function distributes elements evenly among the buckets. Iterating over a collection view takes time proportional to the number of buckets plus the number of entries in the HashMap, so if iteration performance is important, do not set the initial capacity too high (or the load factor too low).

HashMap has two parameters that affect its performance: initial capacity and load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity when the table is created. The load factor measures how full the hash table is allowed to get before it is automatically expanded. When the number of entries exceeds the product of the load factor and the current capacity, the table is expanded to twice its previous capacity: a new table is allocated with the new capacity and the old entries are rehashed into it. This process is called rehashing (rebuilding the hash table). For example, with the default capacity of 16 and load factor of 0.75, the table grows to 32 buckets when the 13th entry is inserted (16 × 0.75 = 12).

As a general rule, the default load factor (0.75) provides a good balance between time and space costs; higher load factors reduce the space overhead but increase lookup costs. When setting the initial capacity, you should consider the expected number of entries and the load factor so as to minimize the number of hash table rebuilds. If the initial capacity is greater than the maximum number of entries divided by the load factor, no rebuild will ever occur.
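For example, to store a known number of entries without ever triggering a rehash, the initial capacity can be derived from the expected size and the load factor. Below is a minimal sketch in the style of the article's test methods; the method name test09 and the entry count are illustrative, not from the original:

@Test
public void test09(){
    // assumption: we expect about 1000 entries
    int expectedEntries = 1000;
    // choose an initial capacity greater than expectedEntries / loadFactor;
    // HashMap rounds it up to a power of two (here 2048), giving a threshold of
    // 2048 * 0.75 = 1536 >= 1000, so no rehash occurs while inserting the entries
    Map<String, Integer> map = new HashMap<>((int) (expectedEntries / 0.75f) + 1);
    for (int i = 0; i < expectedEntries; i++) {
        map.put("key" + i, i);
    }
    System.out.println(map.size()); // 1000
}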

Note that if the hashCode() method of many keys returns the same value, hash table performance degrades, because those keys all hash into the same bucket and form a long linked list, or a red-black tree (since JDK 8, a linked list whose length exceeds a threshold is automatically converted into a red-black tree). To mitigate this impact, keys can implement the Comparable interface; HashMap will then use the comparison order among the keys to break ties within a bucket.
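As an illustration, here is a contrived key class (invented for this sketch, not part of the JDK or the original article) whose hashCode() always collides; because it implements Comparable, JDK 8's tree bins can fall back on compareTo ordering and keep lookups at O(log n) instead of degrading to a linear scan:

// Every instance hashes to the same bucket; Comparable lets tree bins order the keys.
final class CollidingKey implements Comparable<CollidingKey> {
    final int id;
    CollidingKey(int id) { this.id = id; }
    @Override public int hashCode() { return 42; } // deliberately terrible hash
    @Override public boolean equals(Object o) {
        return o instanceof CollidingKey && ((CollidingKey) o).id == id;
    }
    @Override public int compareTo(CollidingKey other) {
        return Integer.compare(id, other.id);
    }
}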

Note that HashMap is not thread-safe. If multiple threads access a HashMap instance concurrently and at least one of them modifies its structure (a structural modification means adding or deleting one or more entries; changing the value of an existing key does not count), it must be synchronized externally. This is usually achieved by synchronizing on some object that encapsulates the HashMap. If no such object exists, wrap the HashMap with the Collections#synchronizedMap method, preferably at creation time, to prevent accidental unsynchronized access: Map m = Collections.synchronizedMap(new HashMap(...)).

The iterators returned by all of HashMap's "collection view methods" are fail-fast: if the HashMap is structurally modified at any time after the iterator is created, in any way other than through the iterator's own remove method, the iterator throws ConcurrentModificationException. Thus, in the face of concurrent modification, the iterator fails quickly and cleanly rather than risking arbitrary, non-deterministic behavior at an undetermined time in the future.

It is worth noting that the fail-fast behavior cannot be guaranteed: during iteration the iterator merely checks whether the value of modCount has changed, and throws ConcurrentModificationException if it has. So when modCount does change (a concurrent modification occurs), there is no guarantee the change is detected immediately and the exception thrown; it is strictly best-effort.
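A minimal sketch of this fail-fast behavior (whether the exception actually fires depends on iteration order, which is exactly the best-effort caveat above):

Map<String, String> map = new HashMap<>();
map.put("001", "bob");
map.put("002", "john");
map.put("003", "slice");
for (String key : map.keySet()) {
    // structural modification outside the iterator: modCount changes, so the
    // iterator's next call to next() typically throws ConcurrentModificationException
    map.remove("002");
}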

The Javadoc covers all the key features of HashMap. With these features in mind, let's look at how HashMap's source code implements them.

Source code analysis

Member variables

First, take a look at the member variables defined in HashMap:

/**
 * The default initial capacity - MUST be a power of two
 */
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
/**
 * The default load factor
 */
static final float DEFAULT_LOAD_FACTOR = 0.75f;
/**
 * The maximum capacity
 */
static final int MAXIMUM_CAPACITY = 1 << 30;
/**
 * The smallest table capacity at which a bucket may be treeified (linked list converted to red-black tree)
 */
static final int MIN_TREEIFY_CAPACITY = 64;
/**
 * When the number of elements in a bucket's linked list exceeds this value, the list is converted into a red-black tree
 */
static final int TREEIFY_THRESHOLD = 8;
/**
 * When the number of elements in a red-black tree falls below this value, the tree is converted back into a linked list
 */
static final int UNTREEIFY_THRESHOLD = 6;

The fields above are static members, usually used as the default values of the non-static members. Now let's look at the non-static member variables.

/**
 * The number of times this HashMap's structure has been modified (the iterator's own
 * remove does not count); it backs the iterator's fail-fast behavior
 */
transient int modCount;
/**
 * The number of entries
 */
transient int size;
/**
 * The hash table, which stores the entries
 */
transient Node<K,V>[] table;
/**
 * The size at which the next resize happens; its value is capacity * load factor
 */
int threshold;
/**
 * The load factor
 */
final float loadFactor;
/**
 * Collection view of the entries
 */
transient Set<Map.Entry<K,V>> entrySet;
/**
 * Collection view of the values; this field is defined in AbstractMap
 */
transient Collection<V> values;
/**
 * Collection view of the keys; this field is defined in AbstractMap
 */
transient Set<K>        keySet;
Methods

When we use a HashMap, we usually instantiate it first, call the put method to store data, then call the get method to retrieve it, or use an iterator to traverse it, as follows:

@Test
public void test07(){
    Map<String, String> map = new HashMap<>();
    map.put("001", "bob");
    map.put("002", "john");
    map.put("003", "slice");

    String value = map.get("001");
    System.out.println(value);
}
put

There is not much to analyze in HashMap's no-argument constructor: it contains a single line that sets the load factor. The most commonly used method is put. Both the key and the value passed to HashMap#put may be null. Let's see what the put method does.

public V put(K key, V value) {
    // delegates the work to the putVal method
    return putVal(hash(key), key, value, false, true);
}

// a final method, so subclasses cannot override it
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    if ((tab = table) == null || (n = tab.length) == 0)
        /*
         * If table is null or its length is 0, call resize() to (re)build the hash table.
         * The first call to put enters this branch.
         */
        n = (tab = resize()).length;
    if ((p = tab[i = (n - 1) & hash]) == null)
        /*
         * Compute the bucket index i from the key's hash and read the Node p at tab[i].
         * If p is null there is no hash collision: build a Node from key and value and
         * store it at tab[i].
         */
        tab[i] = newNode(hash, key, value, null);
    else {
        // p is not null, so a hash collision occurred; linked lists and red-black
        // trees resolve collisions differently and are handled separately
        Node<K,V> e; K k;
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            // same key
            e = p;
        else if (p instanceof TreeNode)
            // resolve the collision in the red-black tree
            e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
        else {
            // resolve the collision in the linked list: walk from the head node until a
            // node with the same key is found or the tail is reached
            for (int binCount = 0; ; ++binCount) {
                if ((e = p.next) == null) {
                    // p is the tail node (e == null): build a new Node and link it after p
                    p.next = newNode(hash, key, value, null);
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        // check whether the list should be converted into a red-black tree
                        treeifyBin(tab, hash);
                    break;
                }
                // e.key equals key: found the matching node
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    break;
                p = e;
            }
        }
        // e != null means a node with the same key already exists: overwrite e.value
        // with the new value and return the old value
        if (e != null) { // existing mapping for key
            V oldValue = e.value;
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            // callback hook provided for subclasses
            afterNodeAccess(e);
            return oldValue;
        }
    }
    // put modifies the HashMap's structure, so modCount is incremented
    ++modCount;
    /*
     * Check whether size exceeds the threshold, and resize if it does.
     * Note that every call to put performs this check, and the resize happens after the
     * new entry has been inserted. Would it be better to check and resize before
     * inserting? That would avoid inserting into the old table only to move the entry
     * during the resize, but the overall logic would be more complex; the JDK
     * implementation favors readability.
     */
    if (++size > threshold)
        resize();
    // callback hook provided for subclasses
    afterNodeInsertion(evict);
    return null;
}
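The put method above first calls hash(key). For completeness, here is that method from the same JDK 8 source: it XORs the high 16 bits of hashCode() into the low bits so they participate in the bucket index computation (n - 1) & hash, and it maps a null key to hash 0 (bucket 0), which is why HashMap can permit a null key:

static final int hash(Object key) {
    int h;
    // a null key hashes to 0; otherwise spread the high bits downward, because
    // (n - 1) & hash only uses the low bits when the table is small
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}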
get

Next, let's look at the get method. It returns the value mapped to the given key, or null if the key does not exist. Note that a null return does not necessarily mean the key is absent; the key may be mapped to a null value.

// delegates to the getNode method
public V get(Object key) {
    Node<K,V> e;
    return (e = getNode(hash(key), key)) == null ? null : e.value;
}

// getNode is also a final method, so subclasses cannot override it
final Node<K,V> getNode(int hash, Object key) {
    Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & hash]) != null) {
        // because of hash collisions several nodes may need to be compared; start with
        // the first node
        if (first.hash == hash && // always check first node
            ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        if ((e = first.next) != null) {
            // if the first node's key does not match, fetch the next node from either the
            // red-black tree or the linked list and keep comparing until the last node
            if (first instanceof TreeNode)
                return ((TreeNode<K,V>)first).getTreeNode(hash, key);
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}
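As mentioned above, a null return from get is ambiguous. A small sketch showing how containsKey distinguishes an absent key from a key mapped to null:

Map<String, String> map = new HashMap<>();
map.put("001", null);
System.out.println(map.get("001"));         // null: key present, value is null
System.out.println(map.get("002"));         // null: key absent
System.out.println(map.containsKey("001")); // true
System.out.println(map.containsKey("002")); // false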
remove

Next, let's look at the remove method, which deletes an entry that exists in the HashMap.

// similarly, delegates to the removeNode method
public V remove(Object key) {
    Node<K,V> e;
    return (e = removeNode(hash(key), key, null, false, true)) == null ?
        null : e.value;
}

// at first glance removeNode looks a lot like getNode, because to remove a node you
// must first find it
final Node<K,V> removeNode(int hash, Object key, Object value,
                           boolean matchValue, boolean movable) {
    Node<K,V>[] tab; Node<K,V> p; int n, index;
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (p = tab[index = (n - 1) & hash]) != null) {
        Node<K,V> node = null, e; K k; V v;
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            // check whether the first node matches
            node = p;
        else if ((e = p.next) != null) {
            // search the red-black tree or the linked list for the node to delete;
            // if found, assign it to the local variable node
            if (p instanceof TreeNode)
                node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
            else {
                do {
                    if (e.hash == hash &&
                        ((k = e.key) == key ||
                         (key != null && key.equals(k)))) {
                        node = e;
                        break;
                    }
                    p = e;
                } while ((e = e.next) != null);
            }
        }
        // node != null means the node to delete was found
        if (node != null && (!matchValue || (v = node.value) == value ||
                             (value != null && value.equals(v)))) {
            if (node instanceof TreeNode)
                // red-black tree removal; this may also convert the tree back into a
                // linked list if it has become too small
                ((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
            else if (node == p)
                // the first node matched: point the bucket in table at node's successor
                tab[index] = node.next;
            else
                // linked-list removal
                p.next = node.next;
            // the HashMap's structure changed, so modCount is incremented
            ++modCount;
            --size;
            afterNodeRemoval(node);
            return node;
        }
    }
    return null;
}
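Note the matchValue parameter: when it is true, the node is removed only if its current value also matches. This is what backs the two-argument remove(key, value) overload that Map defines since JDK 8. A short sketch:

Map<String, String> map = new HashMap<>();
map.put("001", "bob");
System.out.println(map.remove("001", "john")); // false: value does not match, entry kept
System.out.println(map.remove("001", "bob"));  // true: entry removed
System.out.println(map.containsKey("001"));    // false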
resize

Besides the put, get and remove methods that users commonly call, there is another very important method, resize, which is responsible for rebuilding the hash table.

// also a final method
final Node<K,V>[] resize() {
    Node<K,V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    // oldCap is the old capacity; it is 0 only before initialization, otherwise it is
    // greater than 0
    if (oldCap > 0) {
        if (oldCap >= MAXIMUM_CAPACITY) {
            // if the old capacity already reached MAXIMUM_CAPACITY, do not expand
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            // expand to twice the previous size
            newThr = oldThr << 1; // double threshold
    }
    else if (oldThr > 0) // initial capacity was placed in threshold
        newCap = oldThr;
    else {               // zero initial threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    if (newThr == 0) {
        float ft = (float)newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                  (int)ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    @SuppressWarnings({"rawtypes","unchecked"})
    // the new hash table
    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
    table = newTab;
    if (oldTab != null) {
        for (int j = 0; j < oldCap; ++j) {
            Node<K,V> e;
            if ((e = oldTab[j]) != null) {
                oldTab[j] = null;
                if (e.next == null)
                    // the bucket holds a single element
                    newTab[e.hash & (newCap - 1)] = e;
                else if (e instanceof TreeNode)
                    /*
                     * the bucket resolves collisions with a red-black tree;
                     * split() checks whether the number of tree elements has fallen below
                     * UNTREEIFY_THRESHOLD and, if so, converts the tree back into a
                     * linked list
                     */
                    ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                else { // preserve order
                    // the bucket resolves collisions with a linked list
                    Node<K,V> loHead = null, loTail = null;
                    Node<K,V> hiHead = null, hiTail = null;
                    Node<K,V> next;
                    do {
                        next = e.next;
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        }
                        else {
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}
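The condition (e.hash & oldCap) == 0 in the loop above decides whether a node keeps its index or moves. Because newCap = oldCap * 2, the new index mask (newCap - 1) exposes exactly one extra hash bit, which is precisely the bit that (hash & oldCap) tests: nodes with that bit 0 stay at index j (the "lo" list), and nodes with that bit 1 move to j + oldCap (the "hi" list). A worked example:

int oldCap = 16;                               // binary 10000
int hashA = 0b0_0101;                          // bit 4 is 0
int hashB = 0b1_0101;                          // bit 4 is 1
System.out.println(hashA & (oldCap - 1));      // 5: old index
System.out.println(hashA & (2 * oldCap - 1));  // 5: stays at the same index ("lo")
System.out.println(hashB & (oldCap - 1));      // 5: old index
System.out.println(hashB & (2 * oldCap - 1));  // 21: moves to 5 + 16 ("hi")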
keySet(), values(), entrySet()

These three methods are also commonly used. They return, respectively, the collection of keys, the collection of values and the collection of key-value pairs (entries) of the HashMap. The returned collections are views backed by the HashMap's data, not snapshots: modifying a view affects the original HashMap, and vice versa. The three methods are related to the three member variables entrySet, values and keySet.
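A quick sketch demonstrating that the views are live rather than snapshots:

Map<String, String> map = new HashMap<>();
map.put("001", "bob");
Set<String> keys = map.keySet();
map.put("002", "john");
System.out.println(keys.size());  // 2: the view reflects the later insertion
keys.remove("001");
System.out.println(map.size());   // 1: removing through the view removes from the map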

The implementation logic of the three methods is similar, so here we only analyze the entrySet() method, which returns a collection view of the key-value pairs (entries) in the HashMap. We generally use it to traverse the map's entries:

@Test
public void test08(){
    Map<String, String> map = new HashMap<>();
    map.put("001", "bob");
    map.put("002", "john");
    map.put("003", "slice");

    Set<Map.Entry<String, String>> entries = map.entrySet();
    Iterator<Map.Entry<String, String>> iterator = entries.iterator();
    while(iterator.hasNext()){
        Map.Entry<String, String> entry = iterator.next();
        String key = entry.getKey();
        String value = entry.getValue();
        System.out.println("key: " + key + ",value: " + value);
    }
}

Let's look at the source code of the entrySet() method in HashMap:

// returns an EntrySet instance
public Set<Map.Entry<K,V>> entrySet() {
    Set<Map.Entry<K,V>> es;
    return (es = entrySet) == null ? (entrySet = new EntrySet()) : es;
}

// EntrySet extends AbstractSet; it supports the iterator, contains, remove and clear
// methods, but not add and addAll, because it does not override those two methods
final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
    public final int size()                 { return size; }
    public final void clear()               { HashMap.this.clear(); }
    public final Iterator<Map.Entry<K,V>> iterator() {
        return new EntryIterator();
    }
    public final boolean contains(Object o) {
        if (!(o instanceof Map.Entry))
            return false;
        Map.Entry<?,?> e = (Map.Entry<?,?>) o;
        Object key = e.getKey();
        Node<K,V> candidate = getNode(hash(key), key);
        return candidate != null && candidate.equals(e);
    }
    public final boolean remove(Object o) {
        if (o instanceof Map.Entry) {
            Map.Entry<?,?> e = (Map.Entry<?,?>) o;
            Object key = e.getKey();
            Object value = e.getValue();
            return removeNode(hash(key), key, value, true, true) != null;
        }
        return false;
    }
    public final Spliterator<Map.Entry<K,V>> spliterator() {
        return new EntrySpliterator<>(HashMap.this, 0, -1, 0, 0);
    }
    public final void forEach(Consumer<? super Map.Entry<K,V>> action) {
        Node<K,V>[] tab;
        if (action == null)
            throw new NullPointerException();
        if (size > 0 && (tab = table) != null) {
            int mc = modCount;
            for (int i = 0; i < tab.length; ++i) {
                for (Node<K,V> e = tab[i]; e != null; e = e.next)
                    action.accept(e);
            }
            if (modCount != mc)
                throw new ConcurrentModificationException();
        }
    }
}

// EntrySet's iterator method returns an EntryIterator instance, which extends the
// HashIterator class
final class EntryIterator extends HashIterator
    implements Iterator<Map.Entry<K,V>> {
    public final Map.Entry<K,V> next() { return nextNode(); }
}

/**
 * Only here do we see the data structure that backs EntrySet: it operates directly on
 * the HashMap's hash table (the table field) to provide a Set view of the entries.
 * This is why EntrySet is called a view: no separate Set is ever built; every operation
 * works on table.
 */
abstract class HashIterator {
    Node<K,V> next;        // next entry to return
    Node<K,V> current;     // current entry
    int expectedModCount;  // for fast-fail
    int index;             // current slot

    HashIterator() {
        expectedModCount = modCount;
        Node<K,V>[] t = table;
        current = next = null;
        index = 0;
        if (t != null && size > 0) { // advance to first entry
            do {} while (index < t.length && (next = t[index++]) == null);
        }
    }

    public final boolean hasNext() {
        return next != null;
    }

    final Node<K,V> nextNode() {
        Node<K,V>[] t;
        Node<K,V> e = next;
        if (modCount != expectedModCount)
            throw new ConcurrentModificationException();
        if (e == null)
            throw new NoSuchElementException();
        if ((next = (current = e).next) == null && (t = table) != null) {
            do {} while (index < t.length && (next = t[index++]) == null);
        }
        return e;
    }

    public final void remove() {
        Node<K,V> p = current;
        if (p == null)
            throw new IllegalStateException();
        if (modCount != expectedModCount)
            throw new ConcurrentModificationException();
        current = null;
        K key = p.key;
        removeNode(hash(key), key, null, false, false);
        expectedModCount = modCount;
    }
}
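Note that HashIterator.remove() calls removeNode(...) and then resets expectedModCount = modCount, which is why removing entries through the iterator is the one structural modification that does not trip the fail-fast check:

Map<String, String> map = new HashMap<>();
map.put("001", "bob");
map.put("002", "john");
map.put("003", "slice");
Iterator<Map.Entry<String, String>> it = map.entrySet().iterator();
while (it.hasNext()) {
    if ("002".equals(it.next().getKey())) {
        it.remove(); // safe: expectedModCount is re-synchronized with modCount
    }
}
System.out.println(map); // {001=bob, 003=slice} (iteration order may vary)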

Origin: blog.csdn.net/imonkeyi/article/details/133700670