Why doesn't Java HashMap resize/rehash take a gradual approach like Redis?

Boyu Zhang:

I am wondering why the JDK HashMap rehashing process does not take the gradual approach that Redis uses. Although the rehash calculation in the JDK HashMap is quite elegant and effective, it can still take noticeable time when the original HashMap contains a large number of entries. I am not an experienced Java user, so I assume the Java designers had a consideration here that is beyond me. A gradual rehash like Redis's effectively distributes the workload across each put, delete, or get on the HashMap, which could significantly reduce the resize/rehash pause. I have also compared the two hash methods, and as far as I can tell nothing in them prevents the JDK from doing a gradual rehash. I hope someone can give me a clue or some inspiration. Thanks a lot in advance.
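For reference, this is the calculation I mean by "elegant" (a simplified sketch of my own, not the actual JDK source): because the table size is always a power of two, each entry during a resize either stays in its old bucket or moves to oldIndex + oldCapacity, decided by a single bit of the hash, with no full re-modulo needed:

```java
// Illustrative demo of the power-of-two resize split used by OpenJDK 8+
// HashMap; class and variable names are my own, not the JDK's.
public class ResizeSplitDemo {
    public static void main(String[] args) {
        int oldCap = 16;                     // old table size (power of two)
        int[] hashes = {5, 21, 37, 53};      // all land in bucket 5 at cap 16
        for (int h : hashes) {
            int oldIdx = h & (oldCap - 1);   // index in the old table
            // The single bit exposed by doubling decides the new bucket:
            int newIdx = (h & oldCap) == 0 ? oldIdx : oldIdx + oldCap;
            System.out.printf("hash=%2d: bucket %d -> %d%n", h, oldIdx, newIdx);
        }
    }
}
```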

Matt Timmermans:

If you think about the costs and benefits of incremental rehashing for something like HashMap, it turns out that the costs are not insignificant, and the benefits are not as great as you might like.

An incrementally rehashing HashMap:

  • Uses 50% more memory on average, because it needs to keep both the old table and the new table around during the incremental rehash;
  • Has a somewhat higher computational cost per operation, since every access must do a little migration work and may have to probe both tables (the sketch after this list shows where this overhead comes from);
  • Is still not entirely incremental, because allocating the new hash table array has to be done all at once;
  • Gains no improvement in the asymptotic complexity of any operation; and finally:
  • Almost nothing that would really need incremental rehashing can be implemented in Java at all, due to the unpredictable GC pauses, so why bother?
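To make the first two points concrete, here is a rough sketch of what a Redis-style incrementally rehashing map looks like in Java. This is illustrative code of my own, not the JDK's HashMap and not the actual Redis implementation; keys are assumed non-null:

```java
import java.util.Objects;

// Hedged sketch of incremental rehashing: while a rehash is in progress the
// map keeps BOTH tables alive, every operation pays for a migration step,
// and a lookup may have to probe two tables.
public class IncrementalRehashMap<K, V> {
    private static final class Node<K, V> {
        final K key; V value; Node<K, V> next;
        Node(K k, V v, Node<K, V> n) { key = k; value = v; next = n; }
    }

    private Node<K, V>[] oldTab;   // non-null only while a rehash is running
    private Node<K, V>[] newTab;
    private int migrated;          // next old-table bucket to move
    private int size;

    @SuppressWarnings("unchecked")
    public IncrementalRehashMap() { newTab = (Node<K, V>[]) new Node[8]; }

    public V get(K key) {
        step();                                    // amortized rehash work
        Node<K, V> n = find(newTab, key);
        if (n == null && oldTab != null) n = find(oldTab, key); // 2nd probe
        return n == null ? null : n.value;
    }

    public void put(K key, V value) {
        step();
        if (oldTab == null && size >= newTab.length * 3 / 4) startRehash();
        Node<K, V> n = find(newTab, key);
        if (n == null && oldTab != null) n = find(oldTab, key);
        if (n != null) { n.value = value; return; }
        int i = index(key, newTab.length);     // new entries go to newTab
        newTab[i] = new Node<>(key, value, newTab[i]);
        size++;
    }

    // Begin an incremental resize. Note the bigger array is still
    // allocated all at once -- that part is NOT incremental.
    @SuppressWarnings("unchecked")
    private void startRehash() {
        oldTab = newTab;
        newTab = (Node<K, V>[]) new Node[oldTab.length * 2];
        migrated = 0;
    }

    // Move one old bucket per call; when all are moved, drop the old table.
    private void step() {
        if (oldTab == null) return;
        Node<K, V> n = oldTab[migrated];
        while (n != null) {
            Node<K, V> next = n.next;
            int i = index(n.key, newTab.length);
            n.next = newTab[i];
            newTab[i] = n;
            n = next;
        }
        oldTab[migrated] = null;
        if (++migrated == oldTab.length) oldTab = null; // rehash finished
    }

    private Node<K, V> find(Node<K, V>[] tab, K key) {
        for (Node<K, V> n = tab[index(key, tab.length)]; n != null; n = n.next)
            if (Objects.equals(n.key, key)) return n;
        return null;
    }

    private static int index(Object key, int len) {
        return (key.hashCode() & 0x7fffffff) % len;
    }
}
```

Notice that `startRehash` still allocates the full new array up front, and every `get` or `put` pays for a `step()` call plus a possible second table probe. That is exactly the per-operation overhead and doubled table footprint described in the list above, in exchange for spreading out a pause that, on the JVM, the garbage collector can inflict on you anyway.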
