✅ Adding an element to the middle is generally slower in `LinkedList` than in `ArrayList`
Wait… what? You might’ve heard that `LinkedList` is good for insertions, but here’s the full picture:
🔬 Let’s break it down:
📦 `ArrayList.add(index, element)` (middle)
- Time complexity: O(n – index)
- Needs to shift elements to the right to make space
- But accessing index is fast: O(1)
- Memory locality makes shifting very fast in practice (arrays are cache-friendly)
✅ Efficient in real-world scenarios for small/mid-sized lists
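To make the shifting concrete, here is a simplified sketch of what happens under the hood (the real logic lives inside `java.util.ArrayList`; the class and method names below are illustrative, and capacity growth is omitted):

```java
import java.util.Arrays;

// Simplified sketch of the shift performed by ArrayList.add(index, e).
// The real implementation also handles growing the backing array.
public class ShiftSketch {
    static Object[] insertAt(Object[] data, int size, int index, Object e) {
        // One bulk copy moves everything from `index` one slot to the right.
        System.arraycopy(data, index, data, index + 1, size - index);
        data[index] = e;
        return data;
    }

    public static void main(String[] args) {
        Object[] data = {1, 2, 4, 5, null}; // size 4, capacity 5
        insertAt(data, 4, 2, 3);
        System.out.println(Arrays.toString(data)); // [1, 2, 3, 4, 5]
    }
}
```

The key point: the whole shift is a single `System.arraycopy` over contiguous memory, which is exactly the operation CPUs are optimized for.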
🧱 `LinkedList.add(index, element)` (middle)
- Time complexity: O(n)
- It must traverse the list from the head or the tail (whichever is closer) to find the node at that index
- Then it links in the new node — that part is O(1)
- But that traversal dominates performance
❌ Slower in practice, especially for large lists
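One caveat worth knowing: if you genuinely need many insertions at a known position in a `LinkedList`, a `ListIterator` lets you pay the traversal once and then do each `add()` at the cursor in O(1). A minimal sketch:

```java
import java.util.LinkedList;
import java.util.List;
import java.util.ListIterator;

// Inserting through a ListIterator: traverse to the position once,
// then each add() just relinks nodes at the cursor in O(1).
public class IteratorInsert {
    public static void main(String[] args) {
        List<Integer> list = new LinkedList<>(List.of(1, 2, 5, 6));
        ListIterator<Integer> it = list.listIterator(2); // walk to the middle once
        it.add(3); // O(1): relink nodes, cursor advances past the new element
        it.add(4);
        System.out.println(list); // [1, 2, 3, 4, 5, 6]
    }
}
```

The index-based `add(index, element)` can’t do this — it re-traverses from an end on every call, which is what the benchmark below punishes.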
🧪 Real-world test case:
```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

List<Integer> arrayList = new ArrayList<>();
List<Integer> linkedList = new LinkedList<>();

for (int i = 0; i < 1_000_000; i++) {
    arrayList.add(arrayList.size() / 2, i);  // works fine, relatively fast
    linkedList.add(linkedList.size() / 2, i); // much slower
}
```
This will kill performance on `LinkedList`, because it has to walk through half the list on every insertion, dereferencing one scattered node object after another.
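If you want to see the gap on your own machine, here is a rough timing harness. It is not a rigorous benchmark (no JIT warmup; use JMH for real numbers), and N is deliberately small so the `LinkedList` run finishes in reasonable time:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

// Rough timing comparison of middle insertions. Not a rigorous
// benchmark -- no warmup, single run; use JMH for serious measurements.
public class MiddleInsertTiming {
    static long timeMiddleInserts(List<Integer> list, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            list.add(list.size() / 2, i);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 50_000;
        long arrayNs = timeMiddleInserts(new ArrayList<>(), n);
        long linkedNs = timeMiddleInserts(new LinkedList<>(), n);
        System.out.printf("ArrayList:  %d ms%n", arrayNs / 1_000_000);
        System.out.printf("LinkedList: %d ms%n", linkedNs / 1_000_000);
    }
}
```

Exact numbers vary by machine and JVM, but the `LinkedList` run is typically slower by a large factor once the list grows.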
🧠 Summary:
| Operation | ArrayList | LinkedList |
|---|---|---|
| Access middle (by index) | ✅ O(1) | ❌ O(n) |
| Insert in middle (index-based) | ⚠️ O(n) (shifting) | ❌ O(n) (traversing) |
| Practical performance | ✅ Faster | ❌ Slower |
✅ Bottom line:
Even though `LinkedList` has constant-time insertion once you have the node, the cost of finding that node makes `add(index, element)` in the middle slower than in `ArrayList`.
🔍 Why is `ArrayList` faster for inserting in the middle?
Even though `ArrayList.add(index, element)` needs to shift elements, it’s still usually faster than `LinkedList`, because:
✅ 1. Fast index access
- Accessing any index in an array is O(1) — no traversal.
- Shifting elements is just copying memory.
✅ 2. Tight memory layout = better CPU cache usage
This is where caching (aka cache locality) comes in 👇
🚀 What is CPU caching / cache locality?
TL;DR:
CPU cache is small, fast memory that holds recently used data to avoid slow RAM access.
How it helps `ArrayList`:
- `ArrayList` stores elements in a contiguous block of memory (like an array).
- So when the CPU accesses `list[i]`, it likely also loads nearby elements into cache.
- This makes reading/writing/shifting fast, because:
  - Memory is close together
  - CPU can bulk-load elements into cache lines (~64 bytes at a time)
👉 This is called spatial locality — memory near what you’re accessing is also brought in and reused.
❌ `LinkedList` suffers here:
- Each node is a separate object on the heap.
- Nodes can be scattered randomly in memory.
- CPU can’t predict or prefetch nodes.
- So it incurs cache misses, which are expensive.
➜ Even though insertion is O(1) after locating a node, the cost of traversing the list dominates due to bad cache behavior.
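The locality effect is easy to demonstrate outside of lists, too. The sketch below sums the same array twice — once sequentially (cache-line friendly) and once in large strides (poor locality). Both visit every element; only the access pattern differs. Timings depend on your machine, so the example prints them rather than asserting a ratio:

```java
// Spatial locality demo: both loops touch every element of the same
// array, but the sequential walk follows cache lines while the strided
// walk jumps across them, causing far more cache misses.
public class LocalityDemo {
    static long sumSequential(int[] a) {
        long sum = 0;
        for (int i = 0; i < a.length; i++) sum += a[i];
        return sum;
    }

    static long sumStrided(int[] a, int stride) {
        long sum = 0;
        // Visit every element, but in stride-sized jumps: poor locality.
        for (int start = 0; start < stride; start++)
            for (int i = start; i < a.length; i += stride)
                sum += a[i];
        return sum;
    }

    public static void main(String[] args) {
        int[] a = new int[1 << 24]; // ~64 MB of ints, far larger than cache
        for (int i = 0; i < a.length; i++) a[i] = i % 7;

        long t0 = System.nanoTime();
        long s1 = sumSequential(a);
        long t1 = System.nanoTime();
        long s2 = sumStrided(a, 4096);
        long t2 = System.nanoTime();

        System.out.println("same sum:   " + (s1 == s2));
        System.out.println("sequential: " + (t1 - t0) / 1_000_000 + " ms");
        System.out.println("strided:    " + (t2 - t1) / 1_000_000 + " ms");
    }
}
```

Traversing a `LinkedList` behaves like the strided walk: every `next` pointer can land anywhere on the heap.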
📈 Example:
```java
ArrayList<Integer> list = new ArrayList<>();
for (int i = 0; i < 1_000_000; i++) {
    list.add(list.size() / 2, i);
}
```
This runs faster than the same code with `LinkedList`, even though each `ArrayList.add()` might shift thousands of elements, because those elements are right next to each other in memory.
🧠 Summary:
| Feature | ArrayList ✅ | LinkedList ❌ |
|---|---|---|
| Index Access | O(1) | O(n) (traverse) |
| Memory Layout | Contiguous | Fragmented |
| CPU Cache Usage | ✅ Excellent | ❌ Poor |
| Insert in Middle (Practical) | ⚠️ Still faster | ❌ Slower |
Does it have to do with memory alignment?
🧠 Recap: What is memory alignment?
Memory alignment means storing data at memory addresses that are multiples of their size or system word size (e.g. 4 or 8 bytes). This ensures:
- Fast memory access
- Proper cache line usage
- Less chance of misaligned access (which CPUs hate)
✅ How `ArrayList` benefits from alignment:
- Contiguous memory layout
  - Elements are stored in a raw array: `Object[] elementData`
  - Array elements are placed back-to-back in aligned memory
- Aligned references
  - Each object reference (to an `Integer`, `String`, etc.) is stored at an aligned memory address (e.g., every 8 bytes on a 64-bit JVM)
  - CPU can read multiple references in one cache line load
- Efficient prefetching
  - Because memory is aligned and predictable, the CPU can prefetch upcoming elements into cache automatically
💡 This is why shifting elements during `add(index, element)` in `ArrayList` is relatively fast: you’re just copying aligned memory blocks, which the CPU loves.
❌ In contrast: `LinkedList`
- Each node is a separate object in memory with:
  - Its own object header
  - Pointers to `prev`, `next`, and the item
- These nodes are scattered in memory (due to heap allocation)
- CPU cannot prefetch
- You get cache misses and slow traversal
- Memory alignment still applies — but doesn’t help when the data is all over the place
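For reference, the per-element object looks roughly like the sketch below (field names match OpenJDK’s private `LinkedList.Node`, but this is a standalone illustration, not the real class). Every insertion allocates one of these on the heap — an object header plus three references — before your value is even counted:

```java
// Simplified stand-in for the node java.util.LinkedList allocates per
// element. Each one carries an object header plus three references of
// overhead, and lands wherever the allocator puts it on the heap.
public class NodeSketch<E> {
    E item;
    NodeSketch<E> next;
    NodeSketch<E> prev;

    NodeSketch(NodeSketch<E> prev, E item, NodeSketch<E> next) {
        this.item = item;
        this.prev = prev;
        this.next = next;
    }

    public static void main(String[] args) {
        NodeSketch<String> a = new NodeSketch<>(null, "a", null);
        NodeSketch<String> b = new NodeSketch<>(a, "b", null);
        a.next = b;
        System.out.println(a.item + " -> " + a.next.item); // a -> b
    }
}
```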
📈 Visualization
`ArrayList` in memory (aligned):

```text
| ref to 0 | ref to 1 | ref to 2 | ref to 3 | ...
     ↑          ↑          ↑          ↑
  aligned    aligned    aligned    aligned
```

`LinkedList` in memory (fragmented):

```text
Node A: [data] -> [next]   (somewhere in RAM)
Node B: [data] -> [next]   (somewhere else)
Node C: [data] -> [next]   (somewhere else again)
```
🧠 Final thoughts:
✔️ `ArrayList` = fast, cache-friendly memory, thanks to alignment + contiguity
❌ `LinkedList` = heap-scattered nodes, suffers from cache misses despite being “O(1)” for insertion