Source code: an interpretation of the underlying JDK 8 source of the concurrency utility ConcurrentHashMap

This article walks through the methods of the java.util.concurrent.ConcurrentHashMap class most commonly used in production development. If any of the explanations are wrong, corrections in the comment section are welcome.


1. Introduction

ConcurrentHashMap is a thread-safe class under the JUC (java.util.concurrent) package. Rather than locking entire methods, ConcurrentHashMap guarantees safe access by multiple threads through atomic operations and fine-grained partial locking, reducing the performance cost as much as possible.
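As a quick, hypothetical illustration of that guarantee (the class name, key, and thread counts below are invented for the demo), several threads can update the same map concurrently without any external synchronization:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;

public class ChmDemo {
    // runs `threads` writer threads, each merging `perThread` increments
    // into the same key, and returns the final count
    static int concurrentCount(int threads, int perThread) {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        CountDownLatch done = new CountDownLatch(threads);
        for (int t = 0; t < threads; t++) {
            new Thread(() -> {
                for (int i = 0; i < perThread; i++)
                    // merge() is atomic per key, so no increments are lost
                    counts.merge("hits", 1, Integer::sum);
                done.countDown();
            }).start();
        }
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counts.get("hits");
    }

    public static void main(String[] args) {
        System.out.println(concurrentCount(8, 1000)); // prints 8000
    }
}
```

With a plain HashMap the same run would typically lose updates (or corrupt the table); here, per-key atomicity makes the result deterministic.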

public class ConcurrentHashMap<K,V>
        extends AbstractMap<K,V>
        implements ConcurrentMap<K,V>, Serializable {
    // ...
}

2. Source code interpretation

1. Constructors

public ConcurrentHashMap() {
    // Creates a new, empty map with the default initial table size (16);
    // the table itself is allocated lazily, on the first insertion
}

public ConcurrentHashMap(int initialCapacity) {
    // Rejects negative capacities, then sizes the table to a power of two
    // of at least 1.5 * initialCapacity + 1, storing the result in sizeCtl
    // until the table is lazily initialized
    if (initialCapacity < 0)
        throw new IllegalArgumentException();
    int cap = ((initialCapacity >= (MAXIMUM_CAPACITY >>> 1)) ?
            MAXIMUM_CAPACITY :
            tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1));
    this.sizeCtl = cap;
}
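A minimal sketch of how that sizing works (tableSizeFor is reproduced from the JDK 8 source; the sample capacities are arbitrary): the requested capacity is inflated to 1.5x + 1 and then rounded up to the next power of two.

```java
public class SizeCtlDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // rounds c up to the next power of two (copied from the JDK 8 source)
    static int tableSizeFor(int c) {
        int n = c - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    // mirrors the sizing expression in ConcurrentHashMap(int initialCapacity)
    static int initialSizeCtl(int initialCapacity) {
        return (initialCapacity >= (MAXIMUM_CAPACITY >>> 1)) ?
                MAXIMUM_CAPACITY :
                tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1);
    }

    public static void main(String[] args) {
        System.out.println(initialSizeCtl(16)); // tableSizeFor(25) -> 32
        System.out.println(initialSizeCtl(32)); // tableSizeFor(49) -> 64
    }
}
```

Note that, unlike HashMap, requesting a capacity of 16 here yields a table of 32 slots, because the 1.5x inflation bakes the default 0.75 load factor into the table size up front.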

public ConcurrentHashMap(Map<? extends K, ? extends V> m) {
    // Starts from the default capacity (16), then copies every mapping from m
    this.sizeCtl = DEFAULT_CAPACITY;
    putAll(m);
}

public ConcurrentHashMap(int initialCapacity, float loadFactor) {
    // Delegates with a concurrencyLevel of 1 (the estimated number of
    // concurrently updating threads)
    this(initialCapacity, loadFactor, 1);
}

public ConcurrentHashMap(int initialCapacity,
                         float loadFactor, int concurrencyLevel) {
    // Validates the arguments, then sizes the table so that initialCapacity
    // elements fit at the given load factor, rounded up to a power of two
    if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
        throw new IllegalArgumentException();
    if (initialCapacity < concurrencyLevel)   // Use at least as many bins
        initialCapacity = concurrencyLevel;   // as estimated threads
    long size = (long)(1.0 + (long)initialCapacity / loadFactor);
    int cap = (size >= (long)MAXIMUM_CAPACITY) ?
            MAXIMUM_CAPACITY : tableSizeFor((int)size);
    this.sizeCtl = cap;
}
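The three-argument constructor sizes the table differently from the single-argument one: it computes how many slots are needed to hold initialCapacity entries at the given loadFactor. A sketch with arbitrary sample arguments (tableSizeFor copied from the JDK 8 source):

```java
public class LoadFactorSizingDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // rounds c up to the next power of two (copied from the JDK 8 source)
    static int tableSizeFor(int c) {
        int n = c - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    // mirrors the sizing logic of ConcurrentHashMap(int, float, int)
    static int sizeCtlFor(int initialCapacity, float loadFactor, int concurrencyLevel) {
        if (initialCapacity < concurrencyLevel)
            initialCapacity = concurrencyLevel; // at least one bin per estimated thread
        long size = (long)(1.0 + (long)initialCapacity / loadFactor);
        return (size >= (long)MAXIMUM_CAPACITY) ?
                MAXIMUM_CAPACITY : tableSizeFor((int)size);
    }

    public static void main(String[] args) {
        // 16 entries at load factor 0.75 need at least 22 slots -> 32
        System.out.println(sizeCtlFor(16, 0.75f, 1)); // 32
        // a concurrencyLevel of 16 bumps a small initialCapacity up first
        System.out.println(sizeCtlFor(4, 0.75f, 16)); // 32
    }
}
```

Note that loadFactor and concurrencyLevel only influence this initial sizing; at runtime, JDK 8 always resizes at a fixed 0.75 threshold regardless of what was passed here.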

2. put related methods

put method flow:

  1. Compute the hash of the key.
  2. If the table has not been initialized, initialize it via CAS.
  3. If the bucket at the hashed index of the table is empty, create the first node via CAS.
  4. If the bucket is currently being resized, help with the resize.
  5. Otherwise, synchronized-lock the first node of the bucket and perform the put.
  6. If the bucket now holds 8 or more nodes, either resize the table or convert the bucket's linked list into a red-black tree.
  7. Call addCount() to update the element count and decide whether a resize is needed.

public V put(K key, V value) {
    return putVal(key, value, false);
}
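putVal() (below) starts by computing spread(key.hashCode()). A standalone sketch of spread(), copied from the JDK 8 source: it XORs the high 16 bits into the low bits (so short masks still see the high bits) and clears the sign bit, keeping negative hashes reserved for special nodes such as MOVED and TreeBin.

```java
public class SpreadDemo {
    static final int HASH_BITS = 0x7fffffff; // usable bits of a normal node hash

    // copied from the JDK 8 source: fold the high half into the low half,
    // then mask off the sign bit
    static int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }

    public static void main(String[] args) {
        // the result is never negative, so hash < 0 can mark special nodes
        System.out.println(spread(-1));    // 2147418112 (0x7fff0000)
        System.out.println(spread(12345)); // 12345 (high bits were zero)
    }
}
```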

/**
 * If the table or bucket is uninitialized, no lock is taken and CAS
 * guarantees concurrency safety; in all other cases a synchronized lock
 * is taken on the first node of the bucket.
 *
 * @param key the key
 * @param value the new value
 * @param onlyIfAbsent true: only put when the key is absent
 * @return the old value, or null if there was none
 */
final V putVal(K key, V value, boolean onlyIfAbsent) {
    if (key == null || value == null) throw new NullPointerException();
    // 1. Compute the hash of the key
    int hash = spread(key.hashCode());
    int binCount = 0;
    for (ConcurrentHashMap.Node<K,V>[] tab = table;;) {
        ConcurrentHashMap.Node<K,V> f; int n, i, fh;
        // 2. If the table has not been initialized yet
        if (tab == null || (n = tab.length) == 0)
            // initialize the table (lock-free, coordinated via CAS on sizeCtl)
            tab = initTable();
        // 3. If the bucket is not in use yet
        else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
            // if CAS succeeds in installing the bucket's first node
            if (casTabAt(tab, i, null,
                    new ConcurrentHashMap.Node<K,V>(hash, key, value, null)))
                // exit the loop; the put is complete
                break;
        }
        // 4. If table[i] is in the middle of a resize (rehash)
        else if ((fh = f.hash) == MOVED)
            // help with the resize, then point tab at the new table
            tab = helpTransfer(tab, f);
        else {
            V oldVal = null;
            // 5. Lock the first node of the bucket and perform the put
            synchronized (f) {
                // double check
                if (tabAt(tab, i) == f) {
                    // if the bucket is a linked list
                    if (fh >= 0) {
                        binCount = 1;
                        for (ConcurrentHashMap.Node<K,V> e = f;; ++binCount) {
                            K ek;
                            // if the key already exists
                            if (e.hash == hash &&
                                    ((ek = e.key) == key || (ek != null && key.equals(ek)))) {
                                oldVal = e.val;
                                // only overwrite the value when onlyIfAbsent is false
                                if (!onlyIfAbsent)
                                    e.val = value;
                                break;
                            }
                            ConcurrentHashMap.Node<K,V> pred = e;
                            // if we have reached the last node of the bucket
                            if ((e = e.next) == null) {
                                // create a new node and link it at the tail
                                pred.next = new ConcurrentHashMap.Node<K,V>(hash, key, value, null);
                                break;
                            }
                        }
                    }
                    // if the bucket is a tree
                    else if (f instanceof ConcurrentHashMap.TreeBin) {
                        ConcurrentHashMap.Node<K,V> p;
                        binCount = 2;
                        if ((p = ((ConcurrentHashMap.TreeBin<K,V>)f).putTreeVal(hash, key, value)) != null) {
                            oldVal = p.val;
                            if (!onlyIfAbsent)
                                p.val = value;
                        }
                    }
                }
            }
            if (binCount != 0) {
                // 6. If the bucket holds 8 or more nodes
                if (binCount >= TREEIFY_THRESHOLD)
                    // resize the table or convert the list to a red-black tree
                    treeifyBin(tab, i);
                if (oldVal != null)
                    return oldVal;
                break;
            }
        }
    }
    // 7. Update the element count and resize if necessary
    addCount(1L, binCount);
    return null;
}
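The onlyIfAbsent parameter is what distinguishes put() from putIfAbsent(); both return the previous value (or null). A small demo with invented keys and values:

```java
import java.util.concurrent.ConcurrentHashMap;

public class PutSemanticsDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();

        System.out.println(map.put("k", "v1"));         // null (no previous value)
        System.out.println(map.put("k", "v2"));         // v1 (value overwritten)
        System.out.println(map.putIfAbsent("k", "v3")); // v2 (NOT overwritten)
        System.out.println(map.get("k"));               // v2
    }
}
```

putIfAbsent() simply calls putVal(key, value, true), so the "check then insert" happens atomically under the bucket lock, which is why it is safe to use as a concurrent get-or-create primitive.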

treeifyBin(): converts a bucket's linked-list structure into a red-black tree

/**
 * Replaces all linked nodes in the bin at the given index,
 * unless the table is too small, in which case it resizes instead.
 */
private final void treeifyBin(ConcurrentHashMap.Node<K,V>[] tab, int index) {
    ConcurrentHashMap.Node<K,V> b; int n, sc;
    if (tab != null) {
        // 1. If the table length is below 64 (MIN_TREEIFY_CAPACITY)
        if ((n = tab.length) < MIN_TREEIFY_CAPACITY)
            // double the table length instead of treeifying
            tryPresize(n << 1);
        // 2. If the bucket is non-empty and holds a linked list
        else if ((b = tabAt(tab, index)) != null && b.hash >= 0) {
            // lock the first node of the bucket
            synchronized (b) {
                if (tabAt(tab, index) == b) {
                    ConcurrentHashMap.TreeNode<K,V> hd = null, tl = null;
                    // 3. Convert every list node of this bucket into a tree node,
                    // chaining the tree nodes into a doubly linked list in the
                    // original order
                    for (ConcurrentHashMap.Node<K,V> e = b; e != null; e = e.next) {
                        ConcurrentHashMap.TreeNode<K,V> p =
                                new ConcurrentHashMap.TreeNode<K,V>(e.hash, e.key, e.val, null, null);
                        if ((p.prev = tl) == null)
                            hd = p;
                        else
                            tl.next = p;
                        tl = p;
                    }
                    // 4. Turn the TreeNode chain into a red-black tree (TreeBin)
                    setTabAt(tab, index, new ConcurrentHashMap.TreeBin<K,V>(hd));
                }
            }
        }
    }
}
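Step 3 above builds a doubly linked chain before handing it to TreeBin. The same loop shape in isolation, using simplified stand-in node classes invented for this sketch (not the JDK's Node/TreeNode):

```java
public class ChainDemo {
    // simplified stand-ins for Node and TreeNode
    static class Node { int val; Node next; Node(int v, Node n) { val = v; next = n; } }
    static class TreeNode { int val; TreeNode prev, next; TreeNode(int v) { val = v; } }

    // converts a singly linked list into a doubly linked TreeNode chain,
    // preserving order -- the same pattern as step 3 of treeifyBin
    static TreeNode toDoublyLinked(Node b) {
        TreeNode hd = null, tl = null;
        for (Node e = b; e != null; e = e.next) {
            TreeNode p = new TreeNode(e.val);
            if ((p.prev = tl) == null)
                hd = p;   // first node becomes the head
            else
                tl.next = p;
            tl = p;
        }
        return hd;
    }

    public static void main(String[] args) {
        Node list = new Node(1, new Node(2, new Node(3, null)));
        StringBuilder sb = new StringBuilder();
        for (TreeNode p = toDoublyLinked(list); p != null; p = p.next)
            sb.append(p.val).append(' ');
        System.out.println(sb.toString().trim()); // 1 2 3
    }
}
```

Keeping this extra list alongside the tree is deliberate: during a later resize, the nodes can be split by iterating the chain without retraversing the tree.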

3. get related methods

In JDK 8, get() is completely lock-free: table, Node.val, and Node.next are volatile, and tabAt() performs a volatile array read, so readers always see a consistent view without blocking writers.

public V get(Object key) {
    Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
    // compute the spread hash of the key
    int h = spread(key.hashCode());
    // if the table is initialized and the bucket for this hash is non-empty
    if ((tab = table) != null && (n = tab.length) > 0 &&
            (e = tabAt(tab, (n - 1) & h)) != null) {
        // if the first node of the bucket matches the key
        if ((eh = e.hash) == h) {
            if ((ek = e.key) == key || (ek != null && key.equals(ek)))
                // return its value directly
                return e.val;
        }
        // a negative hash marks a special node (TreeBin, ForwardingNode, ...);
        // delegate the lookup to that node's find() implementation
        else if (eh < 0)
            return (p = e.find(h, key)) != null ? p.val : null;
        // otherwise traverse the linked list
        while ((e = e.next) != null) {
            if (e.hash == h &&
                    ((ek = e.key) == key || (ek != null && key.equals(ek))))
                return e.val;
        }
    }
    // the key is not present
    return null;
}
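A quick demonstration of get()'s contract: a null result always means "absent", because ConcurrentHashMap rejects null keys and values outright (invented key names, for illustration):

```java
import java.util.concurrent.ConcurrentHashMap;

public class GetDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("a", 1);

        System.out.println(map.get("a"));       // 1
        System.out.println(map.get("missing")); // null -- unlike HashMap, this
                                                // can never mean "mapped to null"
        try {
            map.put("b", null); // null values are forbidden
        } catch (NullPointerException expected) {
            System.out.println("null value rejected");
        }
    }
}
```

This is exactly why the null prohibition exists: in a concurrent map there is no safe way to distinguish "key absent" from "key mapped to null" with a separate containsKey() check, since the map can change between the two calls.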

3. Others

For further ConcurrentHashMap source-code analysis, see https://cloud.tencent.com/developer/article/2209609


Origin blog.csdn.net/qq_45867699/article/details/130794525