✅ Estimated Time Complexity of HashMap operations
| Operation | Average Case | Worst Case (pre-Java 8) | Worst Case (Java 8+) |
|---|---|---|---|
| get(key) | O(1) | O(n) | O(log n) (with tree bins) |
| put(key, value) | O(1) | O(n) | O(log n) |
| remove(key) | O(1) | O(n) | O(log n) |
| containsKey(key) | O(1) | O(n) | O(log n) |
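To make the table concrete, here is a minimal sketch of the four operations (the map contents are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapBasics {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();

        map.put("alice", 30);                     // average O(1) insert
        map.put("bob", 25);

        Integer age = map.get("alice");           // average O(1) lookup
        boolean hasBob = map.containsKey("bob");  // average O(1) membership test
        map.remove("bob");                        // average O(1) removal

        System.out.println(age + " " + hasBob + " " + map);
    }
}
```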
🧠 Why the difference?
- In a perfect hash scenario, keys are evenly distributed across buckets → each bucket has few elements → O(1) time.
- In the worst case, all keys hash to the same bucket:
- Before Java 8, collisions were handled via linked lists → O(n) traversal in a bucket.
- Since Java 8, if a bucket becomes too dense (≥ 8 entries) and the table has at least 64 buckets, it is converted into a balanced red-black tree, so lookup time within that bucket becomes O(log n). (With fewer than 64 buckets, the table is resized instead of treeified; see the collision sketch below.)
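The worst case is easy to provoke: give every key the same hash. A minimal sketch (the BadKey class and its constant hash are illustrative, not how real keys should behave):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative key whose hashCode() sends every instance to the same bucket.
class BadKey implements Comparable<BadKey> {
    final int id;
    BadKey(int id) { this.id = id; }

    @Override public int hashCode() { return 42; }  // constant hash: every put collides
    @Override public boolean equals(Object o) {
        return o instanceof BadKey && ((BadKey) o).id == id;
    }
    @Override public int compareTo(BadKey other) {  // lets Java 8+ order the tree bin
        return Integer.compare(id, other.id);
    }
}

public class CollisionDemo {
    public static void main(String[] args) {
        Map<BadKey, Integer> map = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {
            map.put(new BadKey(i), i);  // every entry lands in the same bucket
        }
        // Pre-Java 8 this lookup would scan a 10,000-element linked list (O(n));
        // on Java 8+ the bucket is a red-black tree, so it is O(log n).
        System.out.println(map.get(new BadKey(9_999)));
    }
}
```

Implementing Comparable matters here: when all hashes collide, Java 8+ orders the tree bin with compareTo; without a comparable key, lookups in the bin fall back to slower linear tie-breaking.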
❓ Does HashMap guarantee these time complexities?
👉 No, it does not guarantee constant-time performance for get() or put(). The Java documentation says:
“This implementation provides constant-time performance for the basic operations (get and put), assuming the hash function disperses the elements properly among the buckets.”
In other words:
- Average-case O(1) is expected but not guaranteed.
- Performance heavily depends on:
  - a good hash function (hashCode(); see the sketch after this list),
  - the load factor and resizing strategy,
  - the number of collisions in the buckets.
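What a “good hash function” means in practice: compute hashCode() from all significant fields so that distinct keys tend to land in distinct buckets. A minimal sketch, assuming an illustrative Point value class:

```java
import java.util.Objects;

// Illustrative value class; a hashCode() built from all significant
// fields lets HashMap spread instances across buckets.
final class Point {
    private final int x;
    private final int y;

    Point(int x, int y) { this.x = x; this.y = y; }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;
    }

    // Objects.hash combines the fields: equal points get equal hashes,
    // and distinct points tend to hash to distinct buckets.
    @Override public int hashCode() { return Objects.hash(x, y); }
}
```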
If you want guaranteed O(log n) lookups, consider using a TreeMap (which uses a red-black tree for all entries, regardless of collisions).
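For comparison, a minimal TreeMap sketch (the keys and values are illustrative); its O(log n) guarantee comes from keeping every entry in a single red-black tree ordered by key:

```java
import java.util.Map;
import java.util.TreeMap;

public class TreeMapDemo {
    public static void main(String[] args) {
        // TreeMap keeps keys sorted and guarantees O(log n) for
        // get/put/remove/containsKey, independent of hashCode quality.
        Map<String, Integer> ages = new TreeMap<>();
        ages.put("alice", 30);
        ages.put("bob", 25);

        System.out.println(ages.get("alice"));        // O(log n) lookup
        System.out.println(ages.containsKey("bob"));  // O(log n) membership test
        System.out.println(ages);                     // keys iterate in sorted order
    }
}
```

The trade-off: TreeMap keys must be Comparable (or you must supply a Comparator), and you give up the hash map's average O(1) in exchange for a guaranteed O(log n) and sorted iteration order.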