Concurrent Java Programming --- The Evolution of ConcurrentHashMap (Part Two)

Foreword

In the previous article we studied ConcurrentHashMap, the most commonly used concurrent container, and looked at its JDK 1.7 implementation (details: https://blog.csdn.net/TheWindOfSon/article/details/103979122). The JDK 1.8 implementation of the container, however, is very different.

1. Structure of the container in JDK 1.8
[Figure: structure of the JDK 1.8 ConcurrentHashMap -- a table array of Node bins]
Differences from the JDK 1.7 implementation:
(1) As the figure shows, the Segment array has been eliminated; the data is stored directly in the table array. The lock granularity is smaller, which reduces the probability of conflicts under concurrency.
(2) JDK 1.7 stores colliding entries only in a linked list, while 1.8 uses a linked list plus a red-black tree. Traversing a linked list has time complexity O(n), whereas searching a red-black tree (a self-balancing binary search tree) is O(log n), so lookups in long bins are greatly improved.
When is a linked list converted into a red-black tree, and vice versa?
When a bin's linked list reaches 8 or more elements, it is converted into a red-black tree (provided the table already has at least 64 bins; otherwise the table is resized instead).
When a red-black tree shrinks to 6 or fewer elements, it is converted back into a linked list.
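For reference, the thresholds above correspond to constants in the JDK source. The snippet below is only a standalone summary of those values (the names mirror the JDK's internal constants; it is not the original code):

    // Thresholds governing the list <-> red-black tree conversion inside a bin.
    // Standalone summary only; the names mirror the JDK's internal constants.
    final class TreeBinThresholds {
        static final int TREEIFY_THRESHOLD    = 8;  // a bin with >= 8 nodes may be turned into a tree
        static final int UNTREEIFY_THRESHOLD  = 6;  // a tree bin with <= 6 nodes shrinks back to a list
        static final int MIN_TREEIFY_CAPACITY = 64; // below this table size, resize instead of treeifying

        private TreeBinThresholds() {}
    }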

2. The constructor

    public ConcurrentHashMap(int initialCapacity,
                             float loadFactor, int concurrencyLevel) {
        if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
            throw new IllegalArgumentException();
        if (initialCapacity < concurrencyLevel)   // Use at least as many bins
            initialCapacity = concurrencyLevel;   // as estimated threads
        long size = (long)(1.0 + (long)initialCapacity / loadFactor);
        int cap = (size >= (long)MAXIMUM_CAPACITY) ?
            MAXIMUM_CAPACITY : tableSizeFor((int)size);
        this.sizeCtl = cap;
    }

From the code above we can see that the constructor only assigns member variables; the table array itself is allocated lazily, on the first put() operation.
The three parameters:
1) initialCapacity: the initial capacity; the default is 16.
2) loadFactor: the load factor used for resizing; the default is 0.75. When the number of stored nodes exceeds initialCapacity * loadFactor, the table is resized.
3) concurrencyLevel: the concurrency level; the default is 16. It can be understood as the maximum number of threads that can update the ConcurrentHashMap at the same time without contending for a lock. In JDK 1.7 it was effectively the number of lock segments, i.e. the length of the Segment array; in 1.8 it only serves as a sizing hint (the constructor raises initialCapacity to at least concurrencyLevel). If the concurrency level is set too low, lock contention becomes severe; if it is set too high, entries that would otherwise share a segment are spread across many of them, the CPU cache hit rate drops, and performance degrades. A short usage example follows this list.
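A minimal usage example of the three-argument constructor (the values shown simply spell out the defaults):

    import java.util.concurrent.ConcurrentHashMap;

    public class ConstructorDemo {
        public static void main(String[] args) {
            // initialCapacity = 16, loadFactor = 0.75, concurrencyLevel = 16
            // (these match the defaults; in JDK 1.8 concurrencyLevel is only a sizing hint)
            ConcurrentHashMap<String, Integer> map =
                    new ConcurrentHashMap<>(16, 0.75f, 16);
            map.put("answer", 42);
            System.out.println(map.get("answer")); // prints 42
        }
    }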

The "table" array that appears throughout the JDK source is the Node array that actually stores the key/value entries.
[Figure: the table (Node array) as it appears in the JDK source]
Another difference:
In the last line of the constructor a new field, sizeCtl, is assigned. It is used to control initialization and resizing of the table array.
Its value has different meanings:
(1) Negative: initialization or resizing is in progress (-1 means the table is being initialized; -N means N-1 threads are performing a resize).
(2) 0: the table has not been initialized yet.
(3) Positive: the initial table size to use, or, once the table exists, the threshold that will trigger the next resize.
A simplified sketch of how sizeCtl coordinates table initialization is given after this list.
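The sketch below is a minimal, self-contained illustration of that coordination, assuming an AtomicInteger in place of the real field (which the JDK updates through Unsafe CAS); it only mirrors the general pattern of initTable() and is not the JDK code:

    import java.util.concurrent.atomic.AtomicInteger;

    class SizeCtlSketch {
        private static final int DEFAULT_CAPACITY = 16;
        private final AtomicInteger sizeCtl = new AtomicInteger(0); // 0 = table not yet initialized
        private volatile Object[] table;

        Object[] initTable() {
            while (table == null) {
                int sc = sizeCtl.get();
                if (sc < 0) {
                    Thread.yield();                          // someone else is initializing/resizing
                } else if (sizeCtl.compareAndSet(sc, -1)) {  // win the race: -1 marks "initializing"
                    try {
                        if (table == null) {
                            int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
                            table = new Object[n];           // the real map allocates a Node[] here
                            sc = n - (n >>> 2);              // next resize threshold, i.e. 0.75 * n
                        }
                    } finally {
                        sizeCtl.set(sc);                     // publish a positive threshold value
                    }
                }
            }
            return table;
        }
    }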

3. The get() and put() methods
get() operation:

[Figure: the get() source from the JDK]
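Since the figure above is only a screenshot, here is a simplified, self-contained sketch of the lookup flow it shows (an illustration written for this article, not the JDK code): spread the key's hash, index into the table, check the head node of the bin, then either walk the linked list or delegate to the bin's own find() when the head has a negative hash (a tree bin, or a forwarding node during a resize).

    class GetSketch<K, V> {
        static class Node<K, V> {
            final int hash; final K key; volatile V val; volatile Node<K, V> next;
            Node(int hash, K key, V val, Node<K, V> next) {
                this.hash = hash; this.key = key; this.val = val; this.next = next;
            }
            // In the real map, tree bins and forwarding nodes override find().
            Node<K, V> find(int h, Object k) {
                for (Node<K, V> e = this; e != null; e = e.next)
                    if (e.hash == h && k.equals(e.key)) return e;
                return null;
            }
        }

        private final Node<K, V>[] table;

        @SuppressWarnings("unchecked")
        GetSketch(int capacity) {           // capacity must be a power of two
            table = (Node<K, V>[]) new Node[capacity];
        }

        // Spread the high bits of the hash and force the result to be non-negative.
        static int spread(int h) { return (h ^ (h >>> 16)) & 0x7fffffff; }

        // Test-only insertion that simply prepends to the bin's list (no locking).
        void putForTest(K key, V val) {
            int h = spread(key.hashCode());
            int i = (table.length - 1) & h;
            table[i] = new Node<>(h, key, val, table[i]);
        }

        V get(Object key) {
            int h = spread(key.hashCode());
            Node<K, V> e = table[(table.length - 1) & h]; // the real code uses a volatile read (tabAt)
            if (e == null) return null;
            if (e.hash == h && key.equals(e.key)) return e.val;   // check the head node first
            if (e.hash < 0)                                        // tree bin / forwarding node
                return (e = e.find(h, key)) != null ? e.val : null;
            while ((e = e.next) != null)                           // otherwise walk the list
                if (e.hash == h && key.equals(e.key)) return e.val;
            return null;
        }
    }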
put() operation:
(1) The key's hashCode() is computed first, then spread (re-hashed) to locate the target bin.
(2) If necessary the array is initialized with initTable() (the key field here is sizeCtl, which is set in a CAS loop until the table has been initialized successfully).
(3) After initialization the value is placed into the array. If the bin at that index is null, the new node is placed there directly; if the bin is not empty, helpTransfer() is used to check whether a resize is in progress (and to help with it), and the value is then inserted or the existing value replaced. (If the number of nodes in a bin exceeds 8, its linked list is converted into a red-black tree.)

Partial source:
[Figure: screenshot of the relevant put source from the JDK]
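As the figure is only a screenshot, the sketch below illustrates the core of that flow in a self-contained form, assuming an AtomicReferenceArray in place of the real table; tree bins, helpTransfer()/resizing, initTable() and the size counters are all omitted, so this shows the locking scheme only and is not the JDK code:

    import java.util.concurrent.atomic.AtomicReferenceArray;

    class PutSketch<K, V> {
        static class Node<K, V> {
            final int hash; final K key; volatile V val; volatile Node<K, V> next;
            Node(int hash, K key, V val) { this.hash = hash; this.key = key; this.val = val; }
        }

        // Bin array; the length must be a power of two for the index mask to work.
        private final AtomicReferenceArray<Node<K, V>> table = new AtomicReferenceArray<>(16);

        static int spread(int h) { return (h ^ (h >>> 16)) & 0x7fffffff; }

        V put(K key, V value) {
            int h = spread(key.hashCode());
            int i = (table.length() - 1) & h;
            for (;;) {
                Node<K, V> head = table.get(i);
                if (head == null) {
                    // Empty bin: try to install the new node without taking any lock.
                    if (table.compareAndSet(i, null, new Node<>(h, key, value)))
                        return null;
                    // CAS lost: another thread filled the bin first, so retry.
                } else {
                    // Non-empty bin: lock only this bin's head node.
                    synchronized (head) {
                        if (table.get(i) == head) {            // head unchanged, lock still valid
                            for (Node<K, V> e = head; ; e = e.next) {
                                if (e.hash == h && key.equals(e.key)) {
                                    V old = e.val;
                                    e.val = value;             // key already present: replace value
                                    return old;
                                }
                                if (e.next == null) {
                                    e.next = new Node<>(h, key, value); // append to the list
                                    return null;
                                }
                            }
                        }
                    }
                    // The head changed under us (e.g. the bin was moved); retry the loop.
                }
            }
        }
    }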
4. Other related methods and operations:
(1) Resizing: the transfer() method performs the actual resize, and the table size is doubled each time.
(2) size(): returns an approximate count, not an exact one. (In the implementation a loop is used to sum up the count, but other threads may change the size of the container while the counting is going on.)
(3) Consistency: the container only provides weak consistency.
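A small usage example of the counting methods (the JDK 8 Javadoc suggests mappingCount() when the count may exceed an int; under concurrent updates both return estimates):

    import java.util.concurrent.ConcurrentHashMap;

    public class SizeDemo {
        public static void main(String[] args) {
            ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
            map.put("a", 1);
            map.put("b", 2);
            // Both values are estimates if other threads are updating the map concurrently.
            System.out.println(map.size());          // 2
            System.out.println(map.mappingCount());  // 2 (returns a long)
        }
    }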

5. Common interview questions:
How is ConcurrentHashMap implemented? Or: how does ConcurrentHashMap improve performance while staying thread-safe under high concurrency?
ConcurrentHashMap allows multiple modification operations to proceed concurrently. The key is lock striping: several locks are used to guard different parts of the hash table. Internally those parts were represented by segments (Segment), each of which is essentially a small hash table; as long as concurrent modifications land on different segments, they can run in parallel. (That description matches the JDK 1.7 design; as noted above, JDK 1.8 drops the Segment array and achieves the same goal at finer granularity, using CAS for empty bins plus a synchronized lock on the head node of each non-empty bin.)
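As a small usage illustration of such concurrent modifications (an example added for this write-up, not from the original post), merge() lets many threads update the map at once while each per-key update remains atomic:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.stream.IntStream;

    public class ConcurrentUpdateDemo {
        public static void main(String[] args) {
            ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
            // Many worker threads update the map concurrently; merge() makes each
            // per-key read-modify-write atomic, so no updates are lost.
            IntStream.range(0, 10_000).parallel()
                     .forEach(i -> counts.merge("key-" + (i % 8), 1, Integer::sum));
            System.out.println(counts); // each of the 8 keys ends up with the value 1250
        }
    }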

Summary

Having studied the JDK 1.8 implementation of ConcurrentHashMap, we find that although its structure has been simplified, the complexity of its operations and the number of methods have grown considerably. What we really need to grasp is its basic structure, the implementations of put() and get(), and the related fundamentals. After all, the 1.8 implementation runs to more than 6,000 lines of code, and hardly anyone has the time and energy to understand all of it.

