Why a HashMap's capacity is a power of 2

Ideally, HashMap access is O(1): the hashcode leads directly to the right bucket, each bucket stores only one node, and every node's next pointer is null, so lookups never have to walk a long linked list.

So we want the entries to be distributed uniformly across the buckets. If we were designing this ourselves, we would probably just take the hashcode modulo the table length: hashCode % length. But the HashMap designers did not write it that way; they used bitwise operations instead, as follows:

 static final int hash(Object key) {
     int h;
     return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
 }

The actual array access is tab[(n - 1) & hash], where tab is the node array, n is the array length, and hash is the hashed key value.

Why is it designed as (n - 1) & hash? When n is a power of 2, (n - 1) & hash is equivalent to hash % n (for non-negative hash values): n - 1 is then a mask of all 1-bits in the low positions (for example, 16 - 1 = 0b1111), so the AND keeps exactly the low-order bits of the hash. Because a bitwise AND is much faster than a modulo operation, HashMap uses this form, and that is precisely why it requires the capacity to be a power of 2.
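As a quick illustration (this is not HashMap's own code, just a sketch), the following compares the bitwise index with a true mathematical modulus for a power-of-two table size. Note that Java's % operator returns negative results for negative hashes, so Math.floorMod is used for the comparison:

```java
public class BucketIndexDemo {
    public static void main(String[] args) {
        int n = 16; // a power of two, like HashMap's default capacity
        int[] hashes = { 0, 1, 15, 16, 17, 12345, -7, Integer.MIN_VALUE };
        for (int hash : hashes) {
            int byMask = (n - 1) & hash;          // HashMap's bitwise indexing
            int byMod  = Math.floorMod(hash, n);  // mathematical modulus (handles negatives)
            System.out.println(hash + " -> mask: " + byMask + ", mod: " + byMod);
            if (byMask != byMod) throw new AssertionError("mismatch for hash " + hash);
        }
    }
}
```

If n were not a power of 2, n - 1 would have 0-bits in its low positions and the AND would permanently zero out those bits of the index, leaving some buckets unreachable.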

Now look at the hash() method above: h >>> 16 shifts the 32-bit hashCode right by 16 bits, filling the vacated high bits with zeros, so only the original upper 16 bits remain; these are then XOR-ed into the lower 16 bits. Why? Because (n - 1) & hash ignores every bit above the table size, hashCodes whose differences lie mainly in the high bits would otherwise all collide. XOR-ing the high half into the low half lets the high bits participate in the bucket index and so reduces hash collisions.
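A small sketch makes this concrete. The two hashCode values below are hypothetical, chosen so they differ only in their upper 16 bits; without the spreading step they land in the same bucket, and with it they do not:

```java
public class HashSpreadDemo {
    // The same spreading step used by java.util.HashMap.hash(Object)
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16; // table length, a power of two
        // Hypothetical hashCodes differing only in the high 16 bits
        int h1 = 0x0001_0004;
        int h2 = 0x0002_0004;

        // Without spreading, (n - 1) & h sees only the low bits: both give 4.
        System.out.println(((n - 1) & h1) + " vs " + ((n - 1) & h2));

        // With spreading, the high bits influence the index: 5 vs 6.
        System.out.println(((n - 1) & spread(h1)) + " vs " + ((n - 1) & spread(h2)));
    }
}
```

The choice of XOR (rather than, say, OR or AND) matters: XOR preserves an even mix of 0s and 1s from both halves, while OR and AND would bias the result toward 1s or 0s.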

Origin www.cnblogs.com/gaopengpy/p/11923346.html