Java concurrent containers: BoundedConcurrentHashMap (based on JDK 1.8)

Recently I started studying Java concurrent containers from the source code, in order to round out my knowledge of concurrency. If anything here is incorrect, criticism and corrections are welcome.

Preface: the way I read a class's source is to start from the constructors and then move on to the methods. Before that, let's look at the class comments.

1. First, the Hashtable class comment, which reads roughly as follows:

This class implements a hash table, which maps keys to values. Any non-null object can be used as a key or as a value. To successfully store and retrieve objects from a Hashtable, the objects used as keys must implement the hashCode method and the equals method. An instance of Hashtable has two parameters that affect its performance: initial capacity and load factor. The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created. Note that the hash table is open: in the case of a "hash collision", a single bucket stores multiple entries, which must be searched sequentially. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased.
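As a quick, self-contained illustration of the hashCode/equals requirement mentioned above, here is a minimal sketch; the Point key type is made up purely for this example:

import java.util.Hashtable;

public class HashtableKeyDemo {

    // illustrative key type: equal keys must also produce equal hash codes
    static final class Point {
        final int x, y;
        Point( int x, int y ) { this.x = x; this.y = y; }
        @Override public boolean equals( Object o ) {
            if ( !( o instanceof Point ) ) return false;
            Point p = (Point) o;
            return p.x == x && p.y == y;
        }
        @Override public int hashCode() { return 31 * x + y; }
    }

    public static void main( String[] args ) {
        Hashtable<Point, String> table = new Hashtable<Point, String>();
        table.put( new Point( 1, 2 ), "a" );
        // retrieval with an equal (but not identical) key works only because Point overrides both hashCode and equals
        System.out.println( table.get( new Point( 1, 2 ) ) ); // prints "a"
    }
}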


2. The BoundedConcurrentHashMap class comment:

A hash table supporting full concurrency of retrievals and adjustable expected concurrency for updates. This class obeys the same functional specification as {@link java.util.Hashtable} and includes versions of methods corresponding to each method of Hashtable. However, even though all operations are thread-safe, retrieval operations do not entail locking, and there is no support for locking the entire table in a way that prevents all access. The class is fully interoperable with Hashtable in programs that rely on its thread safety but not on its synchronization details.

Retrieval operations (including get) generally do not block, so they may overlap with update operations (including put and remove). Retrievals reflect the results of the most recently completed update operations. For aggregate operations such as putAll and clear, concurrent retrievals may reflect insertion or removal of only some entries. They do not throw {@link java.util.ConcurrentModificationException}; however, iterators are designed to be used by only one thread at a time.

The allowed concurrency among update operations is guided by the optional concurrencyLevel constructor argument (default 16), which is used as a hint for internal sizing. The table is internally partitioned to try to permit the indicated number of concurrent updates without contention. Because placement in hash tables is essentially random, the actual concurrency will vary. Ideally, you should choose a value that accommodates as many threads as will ever concurrently modify the table. Using a significantly higher value than you need wastes space and time, and a significantly lower value can lead to thread contention, but overestimates and underestimates within an order of magnitude do not usually have much noticeable impact. A value of 1 is appropriate when it is known that only one thread will modify the table and all others will only read. Also, resizing this or any other kind of hash table is a relatively slow operation, so, when possible, it is a good idea to provide estimates of the expected table size in the constructor.

This class and its views and iterators implement all of the optional methods of the {@link Map} and {@link Iterator} interfaces. The class is copied from Infinispan and was originally written by Doug Lea with assistance from members of JCP JSR-166 Expert Group, then released to the public domain (see http://creativecommons.org/licenses/publicdomain). Like {@link java.util.Hashtable} but unlike {@link HashMap}, it does not allow null keys or values.
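Since the comment above is essentially the contract of the JDK's ConcurrentHashMap (which BoundedConcurrentHashMap builds on by adding a capacity bound and eviction), here is a small runnable sketch using java.util.concurrent.ConcurrentHashMap rather than BoundedConcurrentHashMap itself, to illustrate the concurrencyLevel hint and the fact that reads may overlap with writes:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrencyLevelDemo {
    public static void main( String[] args ) throws InterruptedException {
        // 512 = initial capacity, 0.75f = load factor, 16 = concurrencyLevel:
        // the hint for how many threads are expected to update the map concurrently
        final ConcurrentMap<String, Integer> map = new ConcurrentHashMap<String, Integer>( 512, 0.75f, 16 );

        Runnable writer = new Runnable() {
            @Override public void run() {
                for ( int i = 0; i < 1000; i++ ) {
                    map.put( Thread.currentThread().getName() + "-" + i, i ); // updates contend only on a small part of the table
                }
            }
        };
        Thread t1 = new Thread( writer, "w1" );
        Thread t2 = new Thread( writer, "w2" );
        t1.start();
        t2.start();

        map.get( "w1-0" ); // retrievals do not block and may overlap with the writes above

        t1.join();
        t2.join();
        System.out.println( "size = " + map.size() ); // 2000
    }
}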

Default maximum capacity: DEFAULT_MAXIMUM_CAPACITY = 512; used when the constructor does not specify a capacity.
Default load factor: DEFAULT_LOAD_FACTOR = 0.75f; used when the constructor does not specify a load factor.
Default concurrency level: DEFAULT_CONCURRENCY_LEVEL = 16; used when the constructor does not specify a concurrency level.
Maximum capacity: MAXIMUM_CAPACITY = 1 << 30; bounds the capacity specified in the constructor. The value must be a power of two <= 1 << 30 so that entries can be indexed using ints.
Maximum allowed number of segments: MAX_SEGMENTS = 1 << 16; used to bound the concurrencyLevel constructor argument.
RETRIES_BEFORE_LOCK = 2: I did not understand this one (in the JDK's segmented ConcurrentHashMap it is the number of unsynchronized retries that size and containsValue perform before falling back to locking every segment).
segmentMask: mask value used for indexing into the segments; the upper bits of a key's hash code are used to choose the segment.
segmentShift: shift value used for indexing into the segments (not fully understood; see the sketch after this list).
Segment<K, V>[] segments: the segments. Each segment is a dedicated hash table and plays the role of a partial ("segmented") lock. Segment is an inner class that extends ReentrantLock only to simplify some locking and to avoid a separate construction.
There are also the keySet, entrySet and values views.
segmentFor(int hash): returns the segment that the passed-in hash value maps to.
HashEntry: an inner class that wraps the hash map's key-value pairs. Collisions are handled by chaining: colliding HashEntry objects are linked into a list.
Because the next field of HashEntry is final, new nodes can only be inserted at the head of the list.
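To make segmentShift and segmentMask concrete, here is a small self-contained sketch of how a segment is selected, following the same expressions as the segmented ConcurrentHashMap design this class is based on (the constants and the sample hash are arbitrary example values):

public class SegmentIndexDemo {
    public static void main( String[] args ) {
        int concurrencyLevel = 16;

        // replicate the constructor's power-of-two search (analyzed below)
        int sshift = 0;
        int ssize = 1;
        while ( ssize < concurrencyLevel ) {
            ++sshift;
            ssize <<= 1;
        }
        int segmentShift = 32 - sshift; // 28 when there are 16 segments
        int segmentMask = ssize - 1;    // 0b1111

        int hash = 0xA1B2C3D4; // an arbitrary (already re-hashed) key hash

        // the top sshift bits of the hash choose the segment, which is what segmentFor computes
        int segmentIndex = ( hash >>> segmentShift ) & segmentMask;
        System.out.println( "segment index = " + segmentIndex ); // prints 10 (0xA)
    }
}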

Constructor: the other constructors also call this one, directly or indirectly.

Parameter Description

  capacity: the upper limit on the number of elements the map may hold.

  concurrencyLevel: the estimated number of concurrently updating threads; the implementation performs internal sizing to try to accommodate this many threads. When the other constructors delegate here, the default DEFAULT_CONCURRENCY_LEVEL = 16 is passed in.

  evictionStrategy: the algorithm used to evict elements from the map (I did not fully understand this at first).

  evictionListener: the eviction listener, notified about evicted elements (I did not fully understand this either).

public BoundedConcurrentHashMap( int capacity, int concurrencyLevel, Eviction evictionStrategy, EvictionListener<K, V> evictionListener ) {
  if ( capacity < 0 || concurrencyLevel <= 0 ) { // a negative capacity or a non-positive number of concurrent update threads makes no sense
    throw new IllegalArgumentException();
  }

  // bound the concurrency level: it may not exceed half the element capacity and may not be less than 1

  concurrencyLevel = Math.min( capacity / 2, concurrencyLevel ); // concurrencyLevel cannot be greater than capacity / 2
  concurrencyLevel = Math.max( concurrencyLevel, 1 ); // concurrencyLevel cannot be less than 1

  // the maximum capacity must not be less than twice the concurrency level
  if ( capacity < concurrencyLevel * 2 && capacity != 1 ) {
  throw new IllegalArgumentException( "Maximum capacity has to be at least twice the concurrencyLevel" );
  }

  // the eviction strategy and the eviction listener must not be null

  if ( evictionStrategy == null || evictionListener == null ) {
    throw new IllegalArgumentException();
  }

  // cap the maximum number of concurrent update threads at MAX_SEGMENTS (1 << 16 = 65536, i.e. 2 to the 16th power), which guarantees that the maximum concurrency is itself a power of two

  if ( concurrencyLevel > MAX_SEGMENTS ) {
    concurrencyLevel = MAX_SEGMENTS;
  }

  // sshift and ssize record the best power-of-two fit for the concurrency level
  int sshift = 0;
  int ssize = 1;

  // as seen further down, ssize becomes the length of the "segmented lock" array, so it must not be smaller than the number of concurrent threads; otherwise thread contention would increase and efficiency would drop.
  while ( ssize < concurrencyLevel ) {
    ++sshift;
    ssize <<= 1; // ssize <<= 1 is the same as ssize = ssize * 2, so ssize always stays a power of two
  }
  segmentShift = 32 - sshift; // the shift amount applied to a hash (hash >>> segmentShift) to locate the segment that hash belongs to
  segmentMask = ssize - 1;
  this.segments = Segment.newArray( ssize ); // allocate the "segmented lock" array with length ssize

  // if capacity (the upper limit on the number of elements) is larger than the defined maximum capacity MAXIMUM_CAPACITY (1 << 30), clamp it to that maximum

  if ( capacity > MAXIMUM_CAPACITY ) {
    capacity = MAXIMUM_CAPACITY;
  }
  int c = capacity / ssize; // per-segment capacity: the element capacity limit divided by the length of the "segmented lock" array
  int cap = 1;
  while ( cap < c ) { // round cap up to the smallest power of two that is >= c; if c <= 1, each segment's internal table starts with capacity 1
    cap <<= 1;
  }

  for ( int i = 0; i < this.segments.length; ++i ) {

    // as the Segment constructor below shows, cap determines the length of the internal table of each segment, while c is passed as the per-segment eviction capacity
    this.segments[i] = new Segment<K, V>( cap, c, DEFAULT_LOAD_FACTOR, evictionStrategy, evictionListener );
  }

  // the Segment constructor is reproduced here only so that the new Segment call in the loop above is easier to follow.

  Segment(int cap, int evictCap, float lf, Eviction es, EvictionListener<K, V> listener) {
    loadFactor = lf;
    this.evictCap = evictCap;
    eviction = es.make( this, evictCap, lf );
    evictionListener = listener;
    setTable( HashEntry.<K, V>newArray( cap ) ); // here you can see that cap determines the length of the HashEntry array inside each segment
  }

  public V put(K key, V value) {
    if ( value == null ) { // null values are not allowed; there is no explicit check on key, but that does not mean null keys are allowed, because key.hashCode() below would throw a NullPointerException anyway
      throw new NullPointerException();
    }
    int hash = hash( key.hashCode() ); // re-hash the key's hash code; the result is used to pick the slot in the "segmented lock" array

    // segmentFor(hash) locates the segment in the "segmented lock" array; Segment is an inner class.
    return segmentFor( hash ).put( key, hash, value, false );
  }

  V put(K key, int hash, V value, boolean onlyIfAbsent) {
    lock(); // acquire this segment's lock
    Set<HashEntry<K, V>> evicted = null;
    try {

      // count is the number of elements in this segment
      int c = count;

      // threshold is the element-count threshold of this segment; Eviction.NONE means no eviction strategy is configured

      // in other words: when the element count exceeds the threshold and no eviction is in use, the table is rehashed
      if ( c++ > threshold && eviction.strategy() == Eviction.NONE ) {
        rehash();
      }
      HashEntry<K, V>[] tab = table;
      int index = hash & ( tab.length - 1 );
      HashEntry<K, V> first = tab[index]; // head of the bucket; this index calculation is simply hash % tab.length written as a bit mask, which works because tab.length is a power of two
      HashEntry<K, V> e = first;

      // this while loop checks whether the key being put already exists in the table; if it does, the value is replaced rather than a new entry being added
      while ( e != null && ( e.hash != hash || !key.equals( e.key ) ) ) {
        e = e.next;
      }

      V oldValue;
      if ( e != null ) { // e being non-null means the loop exited because e.hash == hash && key.equals(e.key), i.e. the key already exists, so only the value needs to be replaced
        oldValue = e.value; // remember the old value so it can be returned
        if ( !onlyIfAbsent ) {
          e.value = value;
          eviction.onEntryHit( e );
        }
      }
      else { // otherwise the key does not exist in the table yet and a new entry must be added

        oldValue = null;

        // modCount counts updates to this segment's table. It can be thought of as an "optimistic lock": size(), for example, records each segment's modCount into an array of the same length as the segment array,

        // computes the total, then reads the modCounts again and compares; if the two readings differ, another thread updated the table while counting, so size() locks the segments and counts again. (A sketch of this size() pattern appears after the code.)

        ++modCount;

        // write the incremented element count back

        count = c; // write-volatile

        if ( eviction.strategy() != Eviction.NONE ) {

          // an eviction strategy is configured; if the element count has also reached the eviction cap, evict entries using that strategy to make room for the new element

          if ( c > evictCap ) {
            // remove entries; lower count
            evicted = eviction.execute(); // evict entries
            // re-read the head of the bucket after eviction
            first = tab[index];
          }
          // create a new entry at the head of the bucket, with its next pointing to the previous head, i.e. a new node is prepended to the linked list.
          tab[index] = eviction.createNewEntry( key, hash, first, value );
          // (I did not fully understand this step; onEntryMiss appears to notify the eviction policy about the newly added entry and may itself return additional entries to evict)
          Set<HashEntry<K, V>> newlyEvicted = eviction.onEntryMiss( tab[index] );
          if ( !newlyEvicted.isEmpty() ) {
            if ( evicted != null ) {
              evicted.addAll( newlyEvicted );
            }
            else {
              evicted = newlyEvicted;
            }
          }
        }
        else { // if the eviction strategy is NONE, just link the new entry in and return.
          tab[index] = eviction.createNewEntry( key, hash, first, value );
        }
      }

      return oldValue;
    }
    finally {
      unlock(); // release the lock so other threads can proceed
      notifyEvictionListener( evicted ); // notify the listener of any evicted entries
    }
  }

}
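The modCount comment inside put() above mentions how size() uses it. Here is a simplified sketch of that optimistic pattern; the structure follows the JDK's segmented ConcurrentHashMap, so the exact field accesses are assumptions rather than code copied from BoundedConcurrentHashMap:

  // simplified sketch of the optimistic size() described in the modCount comment
  public int size() {
    final Segment<K, V>[] segments = this.segments;
    int[] mc = new int[segments.length];
    for ( int retry = 0; retry < RETRIES_BEFORE_LOCK; retry++ ) {
      long sum = 0;
      int mcsum = 0;
      for ( int i = 0; i < segments.length; i++ ) {
        sum += segments[i].count;
        mcsum += mc[i] = segments[i].modCount; // remember each segment's modCount
      }
      // re-read the modCounts; if none changed, no update happened while we were counting
      boolean clean = true;
      if ( mcsum != 0 ) {
        for ( int i = 0; i < segments.length; i++ ) {
          if ( mc[i] != segments[i].modCount ) {
            clean = false;
            break;
          }
        }
      }
      if ( clean ) {
        return (int) Math.min( sum, Integer.MAX_VALUE );
      }
    }
    // the optimistic attempts failed: lock every segment and count under the locks
    long sum = 0;
    for ( Segment<K, V> segment : segments ) {
      segment.lock();
    }
    try {
      for ( Segment<K, V> segment : segments ) {
        sum += segment.count;
      }
    }
    finally {
      for ( Segment<K, V> segment : segments ) {
        segment.unlock();
      }
    }
    return (int) Math.min( sum, Integer.MAX_VALUE );
  }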

 



 


Origin www.cnblogs.com/qiaoyutao/p/10903813.html