✅ Yes, HashMap will still work

If all keys have the same `hashCode()`, the `HashMap` will:
- Map every key to the same bucket (because `index = hash & (table.length - 1)` will always be the same; see the sketch after this list).
- Store all entries in that one bucket.
- Use `equals()` to differentiate between keys inside that bucket.
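
Here is a minimal sketch of that index calculation, assuming the Java 8+ hash-spreading step and the default table size of 16 (the class name `BucketIndexDemo` is just for illustration):

```java
public class BucketIndexDemo {
    // Mirrors HashMap.hash() in Java 8+: XORs the high bits of hashCode() into the low bits.
    static int spread(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int tableLength = 16;                          // default initial capacity
        int hash = spread(42);                         // every key returns the same hashCode()
        int index = hash & (tableLength - 1);          // identical hash -> identical index, always
        System.out.println("bucket index = " + index); // the single bucket all keys share
    }
}
```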
So functionality-wise:

```java
map.put(key1, "value1");
map.put(key2, "value2");
map.get(key2); // ✅ will still return "value2"
```

As long as `equals()` works correctly, the `HashMap` will behave correctly.
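
A self-contained version of that snippet, assuming a small hypothetical key class (`ConstantHashKey`) whose `hashCode()` is constant but whose `equals()` is correct:

```java
import java.util.HashMap;
import java.util.Map;

public class SameHashDemo {
    // Hypothetical key type: constant hashCode(), correct equals().
    static final class ConstantHashKey {
        final int id;
        ConstantHashKey(int id) { this.id = id; }

        @Override public int hashCode() { return 42; }   // every instance collides
        @Override public boolean equals(Object o) {
            return o instanceof ConstantHashKey && ((ConstantHashKey) o).id == this.id;
        }
    }

    public static void main(String[] args) {
        Map<ConstantHashKey, String> map = new HashMap<>();
        map.put(new ConstantHashKey(1), "value1");
        map.put(new ConstantHashKey(2), "value2");
        System.out.println(map.get(new ConstantHashKey(2))); // prints "value2"
    }
}
```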
🚨 But performance will degrade badly

If all keys fall into one bucket:

- That bucket becomes a linked list (or a red-black tree once it holds ≥8 entries and the table has ≥64 slots).
- Every `get()`, `put()`, and `remove()` that touches that bucket becomes:
  - O(n) in Java 7 and earlier (linked list)
  - O(log n) in Java 8+ (treeified bucket)

That’s a huge step down from the expected O(1) performance of a normal `HashMap`.
💣 Example of worst-case hashCode():

```java
class BadKey {
    final int id;

    BadKey(int id) { this.id = id; }

    @Override
    public int hashCode() { return 42; }  // constant hash!

    @Override
    public boolean equals(Object o) {
        return o instanceof BadKey && ((BadKey) o).id == this.id;
    }
}
```
All keys hash to the same bucket → massive collision.
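
To make the slowdown visible, here is a rough, unscientific timing sketch (no JIT warm-up, `System.nanoTime` only), assuming the `BadKey` class above with its `int` constructor; the constant-hash lookups should be dramatically slower than lookups with well-spread `Integer` keys:

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionTiming {
    public static void main(String[] args) {
        int n = 20_000;

        Map<BadKey, Integer> colliding = new HashMap<>();
        for (int i = 0; i < n; i++) colliding.put(new BadKey(i), i);   // all land in one bucket

        Map<Integer, Integer> spread = new HashMap<>();
        for (int i = 0; i < n; i++) spread.put(i, i);                  // spread across buckets

        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) colliding.get(new BadKey(i));
        long t1 = System.nanoTime();
        for (int i = 0; i < n; i++) spread.get(i);
        long t2 = System.nanoTime();

        System.out.printf("constant-hash lookups: %d ms%n", (t1 - t0) / 1_000_000);
        System.out.printf("well-spread lookups:   %d ms%n", (t2 - t1) / 1_000_000);
    }
}
```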
✅ How does Java mitigate this?

Since Java 8, a bucket with ≥8 entries (and a table with ≥64 slots) is automatically converted from a linked list to a red-black tree to limit the degradation: `put()` and `get()` then become O(log n) for that bucket.
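
One caveat, based on the OpenJDK implementation rather than anything stated above: the red-black tree can only order same-hash keys efficiently if they implement `Comparable`; otherwise it falls back to slower tie-breaking and subtree scans. A colliding key that still sorts well might look like this (the name `ComparableBadKey` is made up for illustration):

```java
// Hypothetical variant of BadKey: same constant hashCode(), but Comparable,
// so a treeified bucket can order entries and keep lookups near O(log n).
class ComparableBadKey implements Comparable<ComparableBadKey> {
    final int id;

    ComparableBadKey(int id) { this.id = id; }

    @Override public int hashCode() { return 42; }   // still collides on purpose

    @Override public boolean equals(Object o) {
        return o instanceof ComparableBadKey && ((ComparableBadKey) o).id == this.id;
    }

    @Override public int compareTo(ComparableBadKey other) {
        return Integer.compare(this.id, other.id);   // gives the tree a usable ordering
    }
}
```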
🧠 TL;DR

| Condition | Behavior |
|---|---|
| All keys have the same hashCode() | HashMap still works ✅ |
| But performance degrades | From O(1) to O(n) or O(log n) |
| Java 8+ optimization | Treeifies the bucket at the threshold above |
So yes: `HashMap` is functionally safe even with bad hash codes, but if performance matters, always use a well-distributed `hashCode()` implementation.
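
For reference, a typical well-distributed implementation just combines the fields used in `equals()` via `Objects.hash`; this is a sketch with illustrative field names, not a prescription:

```java
import java.util.Objects;

// Illustrative key class whose hashCode() mixes all fields used by equals().
final class GoodKey {
    private final int id;
    private final String name;

    GoodKey(int id, String name) {
        this.id = id;
        this.name = name;
    }

    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof GoodKey)) return false;
        GoodKey other = (GoodKey) o;
        return id == other.id && Objects.equals(name, other.name);
    }

    @Override public int hashCode() {
        return Objects.hash(id, name);   // 31-based mixing, well spread across buckets
    }
}
```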