No — the JIT does not persist compiled code across program runs.
Each time you start your program, JIT compilation starts fresh: the JVM does not remember previously compiled hotspots from the last run. That's because JIT-compiled native code lives entirely in memory (in HotSpot's code cache), and that memory is released when the process exits.
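You can see that in-memory warm-up concretely with a minimal sketch (the class, method names, and iteration counts here are illustrative; exact timings depend on your JVM and hardware):

```java
// Sketch: observing JIT warm-up within a single run.
// Restarting the JVM repeats this whole warm-up from scratch.
public class JitWarmup {
    // A small "hot" method that HotSpot will eventually JIT-compile.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) total += (long) i * i;
        return total;
    }

    public static void main(String[] args) {
        // First call: most likely interpreted.
        long t0 = System.nanoTime();
        long result = sumOfSquares(10_000);
        long coldNanos = System.nanoTime() - t0;

        // Call it many times so it becomes "hot" and gets compiled.
        for (int i = 0; i < 20_000; i++) result = sumOfSquares(10_000);

        // Same call again: now likely running as compiled native code.
        long t1 = System.nanoTime();
        result = sumOfSquares(10_000);
        long warmNanos = System.nanoTime() - t1;

        System.out.println("result=" + result);
        System.out.println("cold(ns)=" + coldNanos + " warm(ns)=" + warmNanos);
    }
}
```

On a typical run the warm timing is far lower than the cold one, but none of that work survives a restart.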
🚀 Why Doesn’t the JVM Cache JIT Results?
There are a few reasons:
- Portability: Java’s whole promise is “Write Once, Run Anywhere” — caching native code would tie you to a specific CPU architecture, OS version, or even JVM version.
- Dynamic Optimization: JIT relies on actual runtime profiling — it observes how your code behaves in that specific execution. Patterns might change the next time you run the program, so the old optimizations might not be ideal anymore.
- Complexity: a persistent cache of optimized machine code would have to be validated and invalidated whenever the classes, JVM version, flags, or hardware change, which would add substantial complexity to the JVM for uncertain benefit.
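The "fresh every run" behavior is easy to observe with HotSpot's `-XX:+PrintCompilation` flag (a real flag; the driver class below is just an illustrative sketch):

```java
// Run this twice with:  java -XX:+PrintCompilation PrintJit
// Each run prints a fresh stream of compilation events for the same
// methods, showing that no compiled code survived the restart.
public class PrintJit {
    // Trivial hot method; it appears in the log once it is compiled.
    static int hash(int x) {
        return x * 31 + 7;
    }

    public static void main(String[] args) {
        int acc = 0;
        for (int i = 0; i < 1_000_000; i++) acc = hash(acc);
        System.out.println(acc);
    }
}
```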
💾 Are There Exceptions?
There are some advanced solutions that try to address this — for example:
- AOT (Ahead-of-Time) compilation: compiles bytecode to native code before the program starts, so optimized code is available immediately (GraalVM Native Image, for example).
- Class Data Sharing (CDS): In HotSpot JVM, this saves some class metadata (but not JIT-compiled code) to disk to improve startup time. This helps a bit, but doesn’t solve the JIT re-compilation issue.
🧰 What does this mean for you?
- If you care about long-running performance, JIT is great — it makes your program faster the longer it runs.
- If you care about fast startup, you could look into:
  - AOT compilation (e.g. GraalVM Native Image)
  - CDS / AppCDS
  - Tuning tiered compilation (interpret first, JIT only selected code)
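You can feel these trade-offs by running one program under different HotSpot flags (the flags below are real; the class itself is a hypothetical sketch, and the timing differences are typical outcomes, not guarantees):

```java
// Compare startup strategies by running this class three ways:
//   java Startup                           (default: tiered JIT)
//   java -Xint Startup                     (interpreter only, no JIT)
//   java -XX:TieredStopAtLevel=1 Startup   (C1 only: quick startup, lower peak speed)
public class Startup {
    // Busy-work loop with a deterministic result, so runs are comparable.
    static long work(int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) acc += i % 7;
        return acc;
    }

    public static void main(String[] args) {
        long t0 = System.nanoTime();
        long acc = work(5_000_000);
        System.out.println("acc=" + acc
                + " elapsed(ms)=" + (System.nanoTime() - t0) / 1_000_000);
    }
}
```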
🔥 Summary
| Run | JIT Behavior |
|---|---|
| First run | Interprets first, then JIT-compiles hot code |
| Second run | Starts from scratch (no memory of previous JIT work) |
| Long-running process | JIT works well because it can keep optimizing over time |
| Restarted process | Fresh JIT warm-up again; no reuse of previous optimizations |