Does a HashMap's put cost grow to O(N) when every object put into it has the same hash code?
The best-case complexity for put is O(1) in time and space.
The average-case complexity for put is O(1) in time and space when amortized over N put operations.
The amortization averages the cost of growing the hash array and rebuilding the hash buckets when the map is resized.
If you don't amortize, then the worst-case performance of a single put operation (one which triggers a resize) will be O(N) in time and space.
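The amortization argument can be sketched with a back-of-the-envelope count. The snippet below is not HashMap's real code; it just simulates capacity doubling with HashMap's default initial capacity (16) and load factor (0.75), and counts how many entries get copied across all resizes while N keys are inserted:

```java
/** Sketch (not HashMap's actual implementation): counts the entries
 *  copied across capacity-doubling resizes while n keys are inserted. */
public class AmortizedResize {
    static long copiesForInserts(int n) {
        long copies = 0;
        int capacity = 16;              // HashMap's default initial capacity
        double loadFactor = 0.75;       // HashMap's default load factor
        int size = 0;
        for (int i = 0; i < n; i++) {
            size++;
            if (size > capacity * loadFactor) {   // resize trigger:
                copies += size - 1;               // every existing entry is rehashed
                capacity *= 2;
            }
        }
        return copies;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        long copies = copiesForInserts(n);
        // The resize costs form a geometric series, so the total copy
        // work stays O(n) -- here well under 2 copies per insert.
        System.out.println("copies per insert: " + (double) copies / n);
    }
}
```

Because each resize doubles the capacity, the copy costs form a geometric series whose total is bounded by a small constant times N, which is why the amortized cost per put stays O(1).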
There is another worst-case scenario, which occurs when a large proportion of the keys hash to the same hash code. In that case, the worst-case time complexity of put will be either O(N) or O(log N).
Let us define M to be the number of entries in the hash bucket with the most entries. Let us assume that we are inserting into that bucket, and that M is O(N).
Prior to Java 8, the hash chains were unordered linked lists, and searching an O(N)-element chain is O(N). The worst-case put operation was therefore O(N).
With Java 8, the implementation was changed to use balanced binary trees when 1) the chain length exceeds a threshold, and 2) the key type K implements Comparable<K>. For large enough N we can assume that the threshold is exceeded. So the worst-case time complexity of put will be:

O(log N) in the case where the keys can be ordered using Comparable<K>

O(N) in the case where the keys cannot be ordered
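The Comparable case can be exercised with String keys, which do implement Comparable. A well-known property of String.hashCode is that "Aa" and "BB" hash to the same value, so any concatenation of such two-character blocks collides with every other one. The sketch below (my own illustration) builds thousands of distinct, colliding String keys; on Java 8+ the overloaded bucket is treeified and lookups remain correct:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TreeifiedBucket {
    /** "Aa" and "BB" have equal hashCodes, so every concatenation of
     *  `blocks` such pairs yields 2^blocks distinct, colliding Strings. */
    static List<String> collidingStrings(int blocks) {
        List<String> out = new ArrayList<>(List.of(""));
        for (int b = 0; b < blocks; b++) {
            List<String> next = new ArrayList<>();
            for (String s : out) {
                next.add(s + "Aa");
                next.add(s + "BB");
            }
            out = next;
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> keys = collidingStrings(12);   // 4096 colliding keys
        System.out.println(keys.get(0).hashCode() == keys.get(4095).hashCode()); // true

        Map<String, Integer> map = new HashMap<>();
        for (int i = 0; i < keys.size(); i++) map.put(keys.get(i), i);
        // String implements Comparable<String>, so the collided bucket is
        // treeified on Java 8+ and get() stays O(log N) rather than O(N).
        System.out.println(map.get(keys.get(100)));   // prints 100
    }
}
```

This is the same trick used in the hash-collision denial-of-service attacks that motivated the Java 8 treeification change in the first place.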
Note that the javadocs (in Java 11) mention that Comparable may be used: "To ameliorate impact, when keys are Comparable, this class may use comparison order among keys to help break ties."
but it doesn't explicitly state the complexity. There are more details in the non-javadoc comments in the source code, but these are implementation specific.
The above statements are only valid for extant implementations of HashMap at the time of writing (i.e. up to Java 12). You can always check for yourself by finding and reading the source code.