HashMap: an analysis of the JDK source code



HashMap analysis

HashMap inheritance system

(Figure: HashMap inheritance hierarchy)

// the inheritance hierarchy, as declared in the source
        public class HashMap<K,V> extends AbstractMap<K,V>
            implements Map<K,V>, Cloneable, Serializable {
            }
  • As the figure shows, HashMap implements the Serializable, Cloneable, and Map interfaces and extends the AbstractMap abstract class.
  • Implementing Serializable means instances can be serialized and deserialized.
  • Implementing Cloneable means the clone operation is supported.
  • Implementing Map means elements are stored as key-value mappings.

HashMap overview

  • HashMap is an implementation of the Map interface that stores data as <K, V> key-value pairs, where K is the key and V is the value. Each key is unique (uniqueness is judged by content, not by reference), while a value may appear multiple times, because different keys can map to the same value (much like a function in school mathematics). Both keys and values may be null (with at most one null key). HashMap is not synchronized, which means it is not thread safe. HashMap is also unordered: the keys are unordered, so the iteration order does not match the insertion order.

The figure below illustrates HashMap's storage structure.

(Figure: HashMap storage structure)

The next figure shows that HashMap is unordered: the iteration order does not match the insertion order.

(Figure: HashMap output order differs from insertion order)
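A minimal runnable sketch of the points above (the class name and sample data are mine, not from the original article): a null key is accepted, re-inserting an existing key overwrites its value, different keys may share a value, and the iteration order generally does not follow the insertion order.

import java.util.HashMap;
import java.util.Map;

public class HashMapOverviewDemo {
    public static void main(String[] args) {
        HashMap<String, String> map = new HashMap<>();
        map.put("banana", "yellow");
        map.put("apple", "red");
        map.put("cherry", "red");    // different keys may map to the same value
        map.put("apple", "green");   // same key: the old value is overwritten
        map.put(null, "a single null key is allowed");

        // The iteration order depends on each key's hash, so it usually
        // differs from the insertion order above.
        for (Map.Entry<String, String> e : map.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}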

  • Before JDK 1.8, HashMap was implemented as an array plus linked lists; from JDK 1.8 onward it is implemented as an array plus linked lists plus red-black trees. The array is the main body, and the linked lists resolve the collisions that occur after hashCode() is passed through the hash function: at each array slot a chain stores all elements whose computed index is the same. This approach is called separate chaining. When the array capacity has reached 64 and a chain grows longer than 8, that chain is converted ("treeified") into a red-black tree; when the number of nodes in a tree drops below 6, the tree is converted back into a linked list.

Why use a red-black tree?

When the chain at one index grows too long, a lookup in that bucket degrades to O(n), which defeats the purpose of a HashMap. Converting the chain into a red-black tree restores good lookup performance: a red-black tree is searched in O(log k), where k is the number of nodes in the tree.

Why does a data structure like HashMap exist in the first place?

Arrays are fast to look up but slow to insert into and delete from, while linked lists are the opposite: slow to look up, fast to insert and delete. Could there be a structure that combines the strengths of both? That is where the hash table comes in: it offers the fast lookups of an array together with the fast insertion and deletion of a linked list, which is why it is so widely used.

HashMap resizing mechanism

  • When storing elements, once the number of stored key-value pairs exceeds the threshold, the table is resized to twice its previous capacity; when the treeify conditions are met, a chain is converted into a red-black tree. (The point of resizing is to shorten the chains and improve query efficiency.)
  • Why resize first instead of converting to a red-black tree? This is a query-efficiency trade-off: while the array length is still below 64, converting a chain to a red-black tree would actually hurt lookups, so the table is resized instead; once the array length reaches 64, treeifying a long chain does improve query efficiency.

Conditions for converting a linked list into a red-black tree

The chain length exceeds 8 and the array length is at least 64. If, after the conversion, the conditions are no longer met (the node count drops below 6), the red-black tree is converted back into a linked list.

Conditions for converting a red-black tree back into a linked list

When the number of nodes in the tree drops below 6, the red-black tree is converted back into a linked list.

Hash collisions

When the keys of two entries map to the same index after hash & (length - 1), i.e. to the same storage position, a hash collision occurs. Common ways to resolve collisions are separate chaining and open addressing; HashMap uses separate chaining.
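A small illustration of such a collision (the class name is mine; the fact used is that "Aa" and "BB" happen to share the same hashCode, 2112): both keys land in the same bucket, yet both stay retrievable because the bucket holds a chain of entries.

import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    public static void main(String[] args) {
        // Identical hash codes, so hash & (length - 1) gives the same index.
        System.out.println("Aa".hashCode());   // 2112
        System.out.println("BB".hashCode());   // 2112

        Map<String, Integer> map = new HashMap<>();
        map.put("Aa", 1);
        map.put("BB", 2);

        // Separate chaining keeps both entries in the one bucket.
        System.out.println(map.get("Aa"));     // 1
        System.out.println(map.get("BB"));     // 2
    }
}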

Load factor

If the load factor is too large, chains grow long and queries slow down; if it is too small, much of the array goes unused. Choosing a suitable load factor therefore matters.

To keep both array utilization and query efficiency acceptable, extensive testing settled on 0.75 as a good compromise.

The tableSizeFor(int cap) method

This method is only called from the constructors; it returns the smallest power of two that is greater than or equal to cap.

// Find the smallest power of two >= cap. Used only when constructing a HashMap, not during resizing.
static final int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }
  • Why subtract 1 first?

    If cap is already a power of two and we did not subtract 1, the result would be doubled to the next power of two, which is not what we want.

  • Why add 1 at the end?

    If cap == 1, then n = 0 after the subtraction and remains 0 after all the shifts; adding 1 yields the correct result 1.

  • Note the final check: if n is at least the maximum array capacity, MAXIMUM_CAPACITY is returned; otherwise n + 1 is returned. (A runnable check of a few inputs follows below.)
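To make these notes easy to verify, here is a standalone sketch that copies the routine above (with MAXIMUM_CAPACITY inlined; the class and main method are mine, for illustration only) and prints the result for a few inputs.

public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Same algorithm as the JDK 8 tableSizeFor shown above: smear the highest
    // set bit of (cap - 1) into all lower bits, then add 1.
    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        int[] caps = {1, 7, 8, 9, 17, 1000};
        for (int cap : caps) {
            // 1 -> 1, 7 -> 8, 8 -> 8 (no needless doubling, thanks to cap - 1),
            // 9 -> 16, 17 -> 32, 1000 -> 1024
            System.out.println(cap + " -> " + tableSizeFor(cap));
        }
    }
}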

The hash function

The high 16 bits are left unchanged and the low 16 bits are XORed with the high 16 bits: viewing the hashCode as 32 bits, the upper 16 bits are XORed into the lower 16 bits, while the upper 16 bits themselves stay the same.

static final int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

Why not use the hashCode directly as the hash value? Why XOR it with its own unsigned right shift by 16 bits?

When hashCodes differ mainly in their high 16 bits and the table is small, indexing directly with hashCode & (length - 1) ignores those high bits entirely, so collisions become likely. To reduce collisions we want the high bits to participate as well, which is exactly what this mixing step achieves.
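A minimal sketch of that effect (the hash codes and class name are invented for illustration): for hash codes that differ only in their high 16 bits, the raw value always picks bucket 0 in a length-16 table, while the spread value h ^ (h >>> 16) used above yields different buckets.

public class HashSpreadDemo {
    // The same spreading step HashMap.hash applies to a non-null key's hashCode.
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int length = 16;  // small table: (length - 1) keeps only the low 4 bits

        // Three hash codes that differ only in their high 16 bits.
        int[] hashCodes = {0x10000, 0x20000, 0x30000};

        for (int h : hashCodes) {
            int naiveIndex  = h & (length - 1);          // high bits ignored -> always 0
            int spreadIndex = spread(h) & (length - 1);  // high bits now influence the index
            System.out.printf("h=%#x  naive=%d  spread=%d%n", h, naiveIndex, spreadIndex);
        }
    }
}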

DEFAULT_INITIAL_CAPACITY

The initial capacity of the table, which must be a power of two (even if the requested initial capacity is not a power of two, it is rounded up to the smallest power of two that is at least the requested value). Why?

static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

For example, 2 & (9 - 1) = 0 and 3 & (9 - 1) = 0 are equal, which causes a hash collision, whereas 2 & (8 - 1) = 2 and 3 & (8 - 1) = 3 differ, so no collision occurs.

  1. It spreads the entries more evenly across the table, which improves query efficiency.
  2. The index could also be computed with the remainder operator (hash % length), but % performs worse than &, and hash % length == hash & (length - 1) only holds when length is a power of two (see the sketch below).
  3. It reduces hash collisions. The more collisions there are, the longer the chains and the lower the query efficiency, so this choice helps keep lookups fast.
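A quick check of points 2 and 3 (a sketch of my own, not JDK code): the 9-vs-8 example above, plus a loop verifying that % and & agree whenever the length is a power of two.

public class PowerOfTwoIndexDemo {
    public static void main(String[] args) {
        // Non-power-of-two length 9: (9 - 1) = 0b1000 wipes out the low bits,
        // so different hashes collapse onto the same index.
        System.out.println(2 & (9 - 1));   // 0
        System.out.println(3 & (9 - 1));   // 0  -> collision

        // Power-of-two length 8: (8 - 1) = 0b0111 keeps the low bits intact.
        System.out.println(2 & (8 - 1));   // 2
        System.out.println(3 & (8 - 1));   // 3

        // hash % length == hash & (length - 1) for non-negative hashes
        // whenever length is a power of two.
        int length = 16;
        for (int hash = 0; hash < 1000; hash++) {
            if (hash % length != (hash & (length - 1))) {
                throw new AssertionError("mismatch at hash " + hash);
            }
        }
        System.out.println("% and & agree for length " + length);
    }
}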

The threshold field

When the number of key-value pairs stored in the map exceeds this value, the table is resized: threshold = capacity * load factor, that is, threshold = array length * load factor.
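A small arithmetic sketch of the default case (the class name is mine): with the default capacity 16 and load factor 0.75 the threshold is 12, so the table grows when the 13th mapping is added, and the threshold doubles along with the capacity.

public class ThresholdDemo {
    public static void main(String[] args) {
        float loadFactor = 0.75f;   // DEFAULT_LOAD_FACTOR
        int capacity = 16;          // DEFAULT_INITIAL_CAPACITY

        // Print capacity -> threshold for a few doublings.
        for (int i = 0; i < 4; i++) {
            int threshold = (int) (capacity * loadFactor);
            System.out.println("capacity=" + capacity + "  threshold=" + threshold);
            capacity <<= 1;         // each resize doubles the capacity
        }
        // capacity=16 threshold=12, capacity=32 threshold=24, and so on
    }
}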

HashMap resizing: the resize() method

final Node<K,V>[] resize() {
        Node<K,V>[] oldTab = table;
        int oldCap = (oldTab == null) ? 0 : oldTab.length;      // old capacity: 0 if the table is null, otherwise its length
        int oldThr = threshold;                                 // old threshold
        int newCap, newThr = 0;             // new capacity and new threshold
        if (oldCap > 0) {
        // the old table already has a capacity
        // if the old capacity has reached the maximum allowed, only raise the
        // threshold and return; the table itself is not enlarged
            if (oldCap >= MAXIMUM_CAPACITY) {
                threshold = Integer.MAX_VALUE;
                return oldTab;
            }
            /*
            If doubling the old capacity stays below the maximum and the old capacity is at
            least DEFAULT_INITIAL_CAPACITY (16), the new table is twice as large and the
            threshold is doubled as well.
            */
            else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                     oldCap >= DEFAULT_INITIAL_CAPACITY)
                newThr = oldThr << 1; // double threshold
        }
        // old capacity is 0 but the old threshold is positive: the initial capacity was
        // stored in threshold, so use it as the new capacity
        else if (oldThr > 0) // initial capacity was placed in threshold
            newCap = oldThr;
        // both the old capacity and the old threshold are 0: use the default capacity and threshold
        else {               // zero initial threshold signifies using defaults
            newCap = DEFAULT_INITIAL_CAPACITY;
            newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
        }
        // if the new threshold is still 0, compute it as newCap * loadFactor
        if (newThr == 0) {
            float ft = (float)newCap * loadFactor;
            newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                      (int)ft : Integer.MAX_VALUE);
        }
        // store the computed threshold
        threshold = newThr;
        // allocate the new table
        @SuppressWarnings({"rawtypes","unchecked"})
            Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
        table = newTab;
        // move entries from the old table to the new one, clearing old slots to help GC
        if (oldTab != null) {
            for (int j = 0; j < oldCap; ++j) {
                Node<K,V> e;
                if ((e = oldTab[j]) != null) {
                    oldTab[j] = null;
                    if (e.next == null)
                        newTab[e.hash & (newCap - 1)] = e;
                    else if (e instanceof TreeNode)
                        ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                    else { // preserve order
                        Node<K,V> loHead = null, loTail = null;
                        Node<K,V> hiHead = null, hiTail = null;
                        Node<K,V> next;
                        do {
                            next = e.next;
                            if ((e.hash & oldCap) == 0) {
                                if (loTail == null)
                                    loHead = e;
                                else
                                    loTail.next = e;
                                loTail = e;
                            }
                            else {
                                if (hiTail == null)
                                    hiHead = e;
                                else
                                    hiTail.next = e;
                                hiTail = e;
                            }
                        } while ((e = next) != null);
                        if (loTail != null) {
                            loTail.next = null;
                            newTab[j] = loHead;
                        }
                        if (hiTail != null) {
                            hiTail.next = null;
                            newTab[j + oldCap] = hiHead;
                        }
                    }
                }
            }
        }
        // return the new table
        return newTab;
    }

The resizing here is quite clever: because the capacity is always doubled, after recomputing each entry's bucket with hash & (newLength - 1) (equivalent to hash % newLength), every entry either stays at its original index or moves to the original index plus the old capacity.
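A sketch verifying that observation (my own check, using the same (hash & oldCap) test the resize() source above uses to split a chain into its "lo" and "hi" lists): for every hash, the new index is either the old index or the old index plus oldCap.

public class ResizeSplitDemo {
    public static void main(String[] args) {
        int oldCap = 16;
        int newCap = oldCap << 1;   // doubled capacity, 32

        for (int hash = 0; hash < 1000; hash++) {
            int oldIndex = hash & (oldCap - 1);
            int newIndex = hash & (newCap - 1);

            // Same rule resize() relies on: (hash & oldCap) == 0 means the entry
            // stays put, otherwise it moves forward by exactly oldCap.
            int expected = ((hash & oldCap) == 0) ? oldIndex : oldIndex + oldCap;
            if (newIndex != expected) {
                throw new AssertionError("unexpected index for hash " + hash);
            }
        }
        System.out.println("every entry stays at j or moves to j + oldCap");
    }
}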

HashMap storage data process

(Figure: flow chart of the HashMap put process)

// insert a key-value pair into the HashMap
        public V put(K key, V value) {
                // pass the hash, key, and value; onlyIfAbsent = false means an existing key's value is overwritten
                return putVal(hash(key), key, value, false, true);  // delegate to putVal
            }
final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
                       boolean evict) {
            Node<K,V>[] tab; Node<K,V> p; int n, i;
            // the table is null or empty: create/resize it first
            if ((tab = table) == null || (n = tab.length) == 0)
                n = (tab = resize()).length;
            // the target bucket is empty: store the new node directly
            if ((p = tab[i = (n - 1) & hash]) == null)
                tab[i] = newNode(hash, key, value, null);
            // otherwise walk the chain (or tree) to decide between overwriting and adding a new node
            else {
                Node<K,V> e; K k;
                // same hash and equal key: remember the node so its value can be overwritten
                if (p.hash == hash &&
                    ((k = p.key) == key || (key != null && key.equals(k))))
                    e = p;
                 // the bucket is a red-black tree: insert/search there
                else if (p instanceof TreeNode)
                    e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
                // otherwise search the singly linked list
                else {
                    for (int binCount = 0; ; ++binCount) {
                        if ((e = p.next) == null) {
                            p.next = newNode(hash, key, value, null);
                            if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                                treeifyBin(tab, hash);
                            break;
                        }
                        // a node with the same hash and key already exists: stop searching
                        if (e.hash == hash &&
                            ((k = e.key) == key || (key != null && key.equals(k))))
                            break;
                        p = e;
                    }
                }
                // an existing mapping for the key was found: overwrite the value (unless onlyIfAbsent) and return the old value
                if (e != null) { // existing mapping for key
                    V oldValue = e.value;
                    if (!onlyIfAbsent || oldValue == null)
                        e.value = value;
                    afterNodeAccess(e);
                    return oldValue;
                }
            }
            // structural modification count++
            ++modCount;

            // resize if the new size exceeds the threshold
            if (++size > threshold)
                resize();
            afterNodeInsertion(evict);
            return null;
        }
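A short usage sketch of the behaviour putVal implements (the sample keys and values are mine): inserting a new key returns null, re-inserting an existing key overwrites the value and returns the old one (put passes onlyIfAbsent = false), while putIfAbsent takes the onlyIfAbsent = true path and keeps the existing value.

import java.util.HashMap;

public class PutOverwriteDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();

        System.out.println(map.put("a", 1));           // null: "a" was not present
        System.out.println(map.put("a", 2));           // 1: old value returned, new value stored
        System.out.println(map.get("a"));              // 2

        // onlyIfAbsent = true: the existing value is kept.
        System.out.println(map.putIfAbsent("a", 3));   // 2
        System.out.println(map.get("a"));              // 2
    }
}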

HashMap source code



        HashMap is implemented as an array plus linked lists (plus red-black trees since JDK 1.8). Each node stored in the array can be seen as the head of a chain, and the chain hanging off each head resolves hash collisions.

        Converting a linked list into a red-black tree requires two conditions:
        1. the array length (capacity) is at least 64;
        2. the chain length exceeds 8.
Basic fields:

        // default initial table capacity
        static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16

        // maximum table capacity
        static final int MAXIMUM_CAPACITY = 1 << 30;

        // default load factor
        static final float DEFAULT_LOAD_FACTOR = 0.75f;

        // treeify threshold: a chain longer than 8 nodes is converted into a red-black tree
        static final int TREEIFY_THRESHOLD = 8;

        // untreeify threshold: a tree shrinking below 6 nodes is converted back into a linked list
        static final int UNTREEIFY_THRESHOLD = 6;

        // minimum table capacity for treeification: a chain is only treeified when the table
        // length is at least this value (otherwise the table is resized instead)
        static final int MIN_TREEIFY_CAPACITY = 64;

        // the underlying table; its length is always a power of two
        transient Node<K,V>[] table;

        // number of key-value mappings stored in the map
        transient int size;

        // number of structural modifications to the table
        transient int modCount;

        // resize threshold (capacity * load factor)
        int threshold;

        // the load factor
        final float loadFactor;

Constructors:
        // specify both the initial capacity and the load factor
        public HashMap(int initialCapacity, float loadFactor) {
                if (initialCapacity < 0)            // negative capacity: throw
                    throw new IllegalArgumentException("Illegal initial capacity: " +
                                                       initialCapacity);
                if (initialCapacity > MAXIMUM_CAPACITY)     // cap the requested capacity at the maximum
                    initialCapacity = MAXIMUM_CAPACITY;
                if (loadFactor <= 0 || Float.isNaN(loadFactor))     // non-positive or NaN load factor: throw
                    throw new IllegalArgumentException("Illegal load factor: " +
                                                       loadFactor);
                this.loadFactor = loadFactor;               // store the load factor

                // tableSizeFor returns the smallest power of two >= the requested capacity;
                // it is stored in threshold for now and becomes the table capacity on the first resize
                this.threshold = tableSizeFor(initialCapacity);
            }

        public HashMap(int initialCapacity) {               // delegate to the two-argument constructor with the default load factor
                this(initialCapacity, DEFAULT_LOAD_FACTOR);
            }

        public HashMap() {                                  // no table is allocated yet; only the default load factor is set
            this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
        }


Commonly used methods:
        // the hash function
        // take the key's hashCode and XOR it with its own unsigned right shift by 16 bits
        /*
        Why compute the hash this way instead of using the hashCode directly?
        Answer: to reduce hash collisions, spread the stored entries more evenly, and improve lookup efficiency.
        */
        static final int hash(Object key) {
                int h;
                return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
            }

        // find the smallest power of two >= cap; used only by the constructors, not during resizing
        static final int tableSizeFor(int cap) {
                int n = cap - 1;
                n |= n >>> 1;
                n |= n >>> 2;
                n |= n >>> 4;
                n |= n >>> 8;
                n |= n >>> 16;
                return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
            }

        // The storage unit of HashMap. Each Node is a node of a linked list; a node stored
        // directly in the array can be seen as the head node of that bucket's chain.
        static class Node<K,V> implements Map.Entry<K,V> {
                final int hash;             // cached hash value of the key
                final K key;
                V value;
                Node<K,V> next;

                Node(int hash, K key, V value, Node<K,V> next) {
                    this.hash = hash;
                    this.key = key;
                    this.value = value;
                    this.next = next;
                }

                public final K getKey()        { return key; }
                public final V getValue()      { return value; }
                public final String toString() { return key + "=" + value; }

                public final int hashCode() {
                    return Objects.hashCode(key) ^ Objects.hashCode(value);
                }

                public final V setValue(V newValue) {
                    V oldValue = value;
                    value = newValue;
                    return oldValue;
                }

                public final boolean equals(Object o) {             // content-based equality of key and value
                    if (o == this)
                        return true;
                    if (o instanceof Map.Entry) {
                        Map.Entry<?,?> e = (Map.Entry<?,?>)o;
                        if (Objects.equals(key, e.getKey()) &&
                            Objects.equals(value, e.getValue()))
                            return true;
                    }
                    return false;
                }
            }

        // look up the value associated with a key
        public V get(Object key) {
                Node<K,V> e;
                return (e = getNode(hash(key), key)) == null ? null : e.value;      // delegate to getNode
            }

        // find the node for a given hash value and key
        final Node<K,V> getNode(int hash, Object key) {
                Node<K,V>[] tab; Node<K,V> first, e; int n; K k;

                // search only if the table is non-null, has capacity, and the target bucket is non-empty
                if ((tab = table) != null && (n = tab.length) > 0 &&
                    (first = tab[(n - 1) & hash]) != null) {             // (n - 1) & hash computes the bucket index from the hash

                    // the first node of the bucket matches both the hash and the key: return it
                    if (first.hash == hash && // always check first node
                        ((k = first.key) == key || (key != null && key.equals(k))))
                        return first;
                    if ((e = first.next) != null) {
                        // if the bucket is a red-black tree, search the tree
                        if (first instanceof TreeNode)
                            return ((TreeNode<K,V>)first).getTreeNode(hash, key);
                        do {
                        // otherwise walk the linked list
                            if (e.hash == hash &&
                                ((k = e.key) == key || (key != null && key.equals(k))))
                                return e;
                        } while ((e = e.next) != null);
                    }
                }
                // not found: return null
                return null;
            }

        // insert a key-value pair into the HashMap
        public V put(K key, V value) {
                // pass the hash, key, and value; onlyIfAbsent = false means an existing key's value is overwritten
                return putVal(hash(key), key, value, false, true);  // delegate to putVal
            }

        final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
                           boolean evict) {
                Node<K,V>[] tab; Node<K,V> p; int n, i;
                // the table is null or empty: create/resize it first
                if ((tab = table) == null || (n = tab.length) == 0)
                    n = (tab = resize()).length;
                // the target bucket is empty: store the new node directly
                if ((p = tab[i = (n - 1) & hash]) == null)
                    tab[i] = newNode(hash, key, value, null);
                // otherwise walk the chain (or tree) to decide between overwriting and adding a new node
                else {
                    Node<K,V> e; K k;
                    // same hash and equal key: remember the node so its value can be overwritten
                    if (p.hash == hash &&
                        ((k = p.key) == key || (key != null && key.equals(k))))
                        e = p;
                     // the bucket is a red-black tree: insert/search there
                    else if (p instanceof TreeNode)
                        e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
                    // otherwise search the singly linked list
                    else {
                        for (int binCount = 0; ; ++binCount) {
                            if ((e = p.next) == null) {
                                p.next = newNode(hash, key, value, null);
                                if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                                    treeifyBin(tab, hash);
                                break;
                            }
                            // a node with the same hash and key already exists: stop searching
                            if (e.hash == hash &&
                                ((k = e.key) == key || (key != null && key.equals(k))))
                                break;
                            p = e;
                        }
                    }
                    // an existing mapping for the key was found: overwrite the value (unless onlyIfAbsent) and return the old value
                    if (e != null) { // existing mapping for key
                        V oldValue = e.value;
                        if (!onlyIfAbsent || oldValue == null)
                            e.value = value;
                        afterNodeAccess(e);
                        return oldValue;
                    }
                }
                // structural modification count++
                ++modCount;

                // resize if the new size exceeds the threshold
                if (++size > threshold)
                    resize();
                afterNodeInsertion(evict);
                return null;
            }

        // the resize (expansion) function
        final Node<K,V>[] resize() {
                Node<K,V>[] oldTab = table;
                int oldCap = (oldTab == null) ? 0 : oldTab.length;      // old capacity: 0 if the table is null, otherwise its length
                int oldThr = threshold;                                 // old threshold
                int newCap, newThr = 0;             // new capacity and new threshold
                if (oldCap > 0) {
                // the old table already has a capacity
                // if the old capacity has reached the maximum allowed, only raise the
                // threshold and return; the table itself is not enlarged
                    if (oldCap >= MAXIMUM_CAPACITY) {
                        threshold = Integer.MAX_VALUE;
                        return oldTab;
                    }
                    /*
                    If doubling the old capacity stays below the maximum and the old capacity is at
                    least DEFAULT_INITIAL_CAPACITY (16), the new table is twice as large and the
                    threshold is doubled as well.
                    */
                    else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                             oldCap >= DEFAULT_INITIAL_CAPACITY)
                        newThr = oldThr << 1; // double threshold
                }
                // old capacity is 0 but the old threshold is positive: the initial capacity was
                // stored in threshold, so use it as the new capacity
                else if (oldThr > 0) // initial capacity was placed in threshold
                    newCap = oldThr;
                // both the old capacity and the old threshold are 0: use the default capacity and threshold
                else {               // zero initial threshold signifies using defaults
                    newCap = DEFAULT_INITIAL_CAPACITY;
                    newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
                }
                // if the new threshold is still 0, compute it as newCap * loadFactor
                if (newThr == 0) {
                    float ft = (float)newCap * loadFactor;
                    newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                              (int)ft : Integer.MAX_VALUE);
                }
                // store the computed threshold
                threshold = newThr;
                // allocate the new table
                @SuppressWarnings({"rawtypes","unchecked"})
                    Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
                table = newTab;
                // move entries from the old table to the new one, clearing old slots to help GC
                if (oldTab != null) {
                    for (int j = 0; j < oldCap; ++j) {
                        Node<K,V> e;
                        if ((e = oldTab[j]) != null) {
                            oldTab[j] = null;
                            if (e.next == null)
                                newTab[e.hash & (newCap - 1)] = e;
                            else if (e instanceof TreeNode)
                                ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                            else { // preserve order
                                Node<K,V> loHead = null, loTail = null;
                                Node<K,V> hiHead = null, hiTail = null;
                                Node<K,V> next;
                                do {
                                    next = e.next;
                                    if ((e.hash & oldCap) == 0) {
                                        if (loTail == null)
                                            loHead = e;
                                        else
                                            loTail.next = e;
                                        loTail = e;
                                    }
                                    else {
                                        if (hiTail == null)
                                            hiHead = e;
                                        else
                                            hiTail.next = e;
                                        hiTail = e;
                                    }
                                } while ((e = next) != null);
                                if (loTail != null) {
                                    loTail.next = null;
                                    newTab[j] = loHead;
                                }
                                if (hiTail != null) {
                                    hiTail.next = null;
                                    newTab[j + oldCap] = hiHead;
                                }
                            }
                        }
                    }
                }
                // return the new table
                return newTab;
            }

Some common methods

put(key, value)

Insert a key-value pair into the HashMap

get(key)

Get the value corresponding to a key

remove(key)

Delete the key-value pair for the given key

size()

Return the number of key-value pairs in the HashMap
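A brief sketch exercising these four methods together (the sample data reuses the article's own strings):

import java.util.HashMap;

public class CommonMethodsDemo {
    public static void main(String[] args) {
        HashMap<String, String> h = new HashMap<>();
        h.put("1", "唐僧");
        h.put("2", "孙悟空");

        System.out.println(h.get("1"));      // 唐僧
        System.out.println(h.size());        // 2

        System.out.println(h.remove("2"));   // 孙悟空 (remove returns the removed value)
        System.out.println(h.size());        // 1
        System.out.println(h.get("2"));      // null: the mapping is gone
    }
}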

Iterate HashMap

Get a collection of keys: keySet()

Get a collection of values: values()

HashMap<String, String> h = new HashMap<>();
h.put(new String("1"), new String("唐僧"));
h.put(new String("2"), new String("孙悟空"));
h.put(new String("3"), new String("猪八戒"));
h.put(new String("4"), new String("沙和尚"));

// 1. First iteration style: iterate over the key set
// keys
for(String s1 : h.keySet()){            // the set of keys
    System.out.println(s1);
    System.out.println(h.get(s1));      // the value can also be looked up by key
}
// values
for(String s2 : h.values()){            // the collection of values
    System.out.println(s2);
}

// 2. Second iteration style: iterate over the entry set
for(Map.Entry<String, String> s : h.entrySet()){
    System.out.println(s.getKey());
    System.out.println(s.getValue());
}
// 3. Third iteration style: use an explicit Iterator over the entry set
Iterator<Map.Entry<String, String>> iterator = h.entrySet().iterator();
while (iterator.hasNext()){
    Map.Entry<String, String> x = iterator.next();
    System.out.println(x.getKey());
    System.out.println(x.getValue());
}
clear()

Delete all key-value pairs in HashMap

entrySet()

Return all key-value mappings in the HashMap

HashMap<String, String> h = new HashMap<>();
        h.put(new String("1"), new String("唐僧"));
        h.put(new String("2"), new String("孙悟空"));
        h.put(new String("3"), new String("猪八戒"));
        h.put(new String("4"), new String("沙和尚"));

        for(Map.Entry<String, String> s : h.entrySet()){
            System.out.println(s.getKey());
            System.out.println(s.getValue());
        }


Origin juejin.im/post/7083617876300529694