Java.Core.CachingInThreads

🔥 The CPU, Caches, and Threads — What’s Going On?

1️⃣ Each CPU core has its own local cache.

This cache holds copies of variables that the thread running on that core is using. Why?
Because accessing main memory (RAM) is slow, but reading from the local cache is very fast.


2️⃣ Each thread may see a different version of the same variable.

Imagine this setup:

  • Thread 1 (on Core 1) reads flag = true, caches it.
  • Thread 2 (on Core 2) updates flag = false, writes it to main memory.
  • Thread 1 might still see flag = true, because it’s reading from its own cache — it doesn’t know Thread 2 changed it in memory.

This is called the visibility problem.
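
As a rough sketch of this scenario in code (the class name FlagDemo and the Thread.sleep delay are ours, just to make the timing concrete), the reader thread below may spin forever because nothing forces it to re-read flag from main memory:

class FlagDemo {
    static boolean flag = true;   // plain field: no visibility guarantee

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (flag) { }                        // may keep seeing a stale cached true
            System.out.println("Reader saw flag = false");
        });
        reader.start();

        Thread.sleep(100);                          // give the reader time to start spinning
        flag = false;                               // writer updates the value in main memory
    }
}

Declaring flag as static volatile would make the loop terminate reliably; the sections below walk through the same fix on a worker thread.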


🔥 What does volatile do?

When a variable is volatile, two things happen:

✅ Every read must fetch the value directly from main memory (never from the cache).
✅ Every write must flush the value directly to main memory (never just to the local cache).

This guarantees that all threads see the same value of the variable — no stale data.


🔥 Example – Why caching is dangerous without volatile

Without volatile (Bad Case)

class Worker implements Runnable {
    private boolean running = true;   // plain field: each core may cache its own copy

    @Override
    public void run() {
        while (running) {  // this might read a cached value
            // work
        }
    }

    public void stop() {
        running = false;  // another thread writes to running
    }
}
  • Thread-1 caches running = true in Core 1 cache.
  • Thread-2 calls stop() and sets running = false in Core 2 cache.
  • Core 1 may never see the change, because it keeps reading its cached running = true.

🚨 Result: Thread-1 may keep running forever.


With volatile (Good Case)

private volatile boolean running = true;
  • Thread-1 always reads running directly from main memory.
  • Thread-2 writes running = false directly to main memory.
  • Now Thread-1 sees the change on its next check of running and stops.
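
Putting the pieces together, a minimal sketch of the fixed worker (the class name Worker is ours; only the volatile keyword differs from the bad case):

class Worker implements Runnable {
    private volatile boolean running = true;   // volatile: reads/writes go through main memory

    @Override
    public void run() {
        while (running) {                      // always observes the latest value
            // work
        }
    }

    public void stop() {
        running = false;                       // becomes visible to the worker thread
    }
}

One thread runs new Thread(worker).start(); another calls worker.stop(), and the loop ends on the next check of running.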

🚀 CPU Cache Layers (For the Curious)

  • L1 Cache: per-core, very small, ultra fast
  • L2 Cache: per-core, slightly larger, still fast
  • L3 Cache: shared by all cores, slower
  • Main Memory (RAM): global memory, very slow compared to the caches

🔥 Modern CPUs + Java

Modern multi-core CPUs rely heavily on caching to be fast. This creates:

  • Speed (fast local cache reads).
  • Visibility problems (one core’s cache doesn’t know that another core changed something).

🚦 This is exactly why we need:

  • volatile: guarantees visibility (reads and writes of that variable skip the cache and go to main memory)
  • synchronized: guarantees visibility + atomicity (lock + memory flush)
  • AtomicInteger: uses low-level CPU instructions (CAS) for atomic updates, so the latest value is always visible
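
As a small illustration of the last row (the Counter class is ours), AtomicInteger from java.util.concurrent.atomic updates a shared value without a lock:

import java.util.concurrent.atomic.AtomicInteger;

class Counter {
    private final AtomicInteger count = new AtomicInteger(0);

    public void increment() {
        count.incrementAndGet();                   // atomic read-modify-write backed by CAS
    }

    public boolean resetIfCountIs(int expected) {
        return count.compareAndSet(expected, 0);   // CAS: succeeds only if the current value equals expected
    }

    public int get() {
        return count.get();                        // always returns the latest written value
    }
}

Many threads can call increment() concurrently without losing updates, which a plain count++ on an int field cannot guarantee.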

🔥 Summary Rule

  • Normal field (int x): cached per thread; changes may be invisible to other threads
  • volatile field: never cached; always read from and written to main memory
  • Inside a synchronized block: cached values are flushed to main memory when entering/exiting the block
  • AtomicInteger: uses CAS (Compare-And-Swap), guaranteeing visibility and atomicity
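
The synchronized row works the same way in code; a minimal sketch (the class name SyncCounter is ours), where leaving the lock publishes the thread’s writes and entering it refreshes the thread’s view:

class SyncCounter {
    private int count = 0;                 // plain field, but always accessed under the lock

    public synchronized void increment() {
        count++;                           // only one thread can be inside at a time
    }

    public synchronized int get() {
        return count;                      // acquiring the lock refreshes the cached value
    }
}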

💡 Final Analogy

  • Normal variables are like a personal notebook for each thread — they don’t always sync with the “official” shared notebook (main memory).
  • volatile variables are like reading/writing directly in the official notebook.
  • synchronized is like locking the notebook so only one person can write.