How to See JIT-Compiled Code in the JVM

How to see JIT-compiled code in JVM?

Assuming you're using the Oracle HotSpot JVM (i.e. the one provided on java.com), you can add the flag

-XX:+PrintOptoAssembly

when running your code. Note that this flag only works on a debug build of HotSpot; on a product build, use -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly together with the hsdis disassembler plugin instead. Either way, this prints the optimized code generated by the JIT compiler and leaves out the rest.

If you want the less frequently executed parts of the code to be compiled (and therefore printed) as well, lower the compilation threshold with

-XX:CompileThreshold=#

where # is the number of invocations after which a method is compiled (the default for the server VM is 10,000).
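To watch compilation happen, a small workload like the following can be used (a sketch: the class name HotLoop is made up, but the flags in the comment are real HotSpot options, and -XX:-TieredCompilation makes CompileThreshold behave as a plain invocation count):

```java
// HotLoop.java — a tiny workload whose inner method quickly becomes "hot".
// Suggested run:
//   java -XX:+PrintCompilation -XX:-TieredCompilation -XX:CompileThreshold=1000 HotLoop
// Watch the output for a line mentioning HotLoop::square.
public class HotLoop {
    static long square(long x) { return x * x; }

    static long run() {
        long sum = 0;
        for (long i = 0; i < 1_000_000; i++) {
            sum += square(i);   // invoked far more often than any sane threshold
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

With -XX:+PrintCompilation, each compiled method is logged as it is compiled, so lowering the threshold makes more methods show up sooner.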

You can read more about these flags, and about how the JIT compiler works in general, in Oracle's HotSpot documentation.

Where does the JIT compiled code reside?

The HotSpot JVM keeps a Method structure in Metaspace (or PermGen in earlier versions).
It contains the method's bytecode, which is never overwritten, and a pointer to the compiled code, which is initially NULL until the method is compiled.

A method may have multiple entry points:

  • _i2i_entry - a pointer to the bytecode interpreter.
  • _code->entry_point() - an entry point to the JIT-compiled code. Compiled methods reside in the CodeCache, a special region of native memory for the VM's dynamically generated code.
  • i2c and c2i adapters for calling compiled code from the interpreter and vice versa. These adapters are needed because interpreted and compiled methods have different calling conventions (how arguments are passed, how frames are constructed, etc.).

A compiled method can have uncommon traps that fall back to the interpreter in certain rare cases. Furthermore, a Java method can be dynamically recompiled multiple times, so the JVM cannot throw away the original bytecode. There is no point in freeing it anyway, because bytecode is usually much smaller than the compiled code.
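The layout described above can be sketched as a plain-Java model (illustrative only; HotSpot's real Method and nmethod structures are C++ classes with many more fields):

```java
// Illustrative model of the per-method data described above:
// the bytecode is kept forever, while the compiled-code pointer
// starts out null/zero and is filled in by the JIT.
public class MethodModel {
    final byte[] bytecode;        // never thrown away; needed for deopt/recompilation
    long compiledEntryPoint;      // address in the CodeCache; 0 until JIT-compiled

    MethodModel(byte[] bytecode) {
        this.bytecode = bytecode;
        this.compiledEntryPoint = 0;  // execution starts in the interpreter (_i2i_entry)
    }

    boolean isCompiled() { return compiledEntryPoint != 0; }
}
```

The key property the model captures is that compilation only ever *adds* an entry point; the interpreter path remains available for deoptimization.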

How to print JIT compilation messages for all methods of a given class that get compiled to native code

If by "JIT compilation messages" you mean the generated assembly code for all methods of the given class, use the following syntax:

-XX:CompileCommand=print,org.pkg.TheGivenClass::*
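For example (the class and method names Fibo and fib are made up; the flags are real, and -XX:CompileCommand=print enables assembly printing for the matched methods, which needs the hsdis plugin for readable disassembly):

```java
// Fibo.java — to print the generated code for every compiled method of
// this class, run:
//   java -XX:+UnlockDiagnosticVMOptions '-XX:CompileCommand=print,Fibo::*' Fibo
// (without the hsdis plugin on the library path, HotSpot prints raw
// machine-code bytes instead of disassembly).
public class Fibo {
    static long fib(long n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }

    public static void main(String[] args) {
        System.out.println(fib(35));  // enough work to get fib() compiled
    }
}
```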

Disassemble Java JIT-compiled native code

Yes, there is a way to print the generated native code: run with -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly and the hsdis disassembler plugin (available since OpenJDK 7).

No, there is no way to compile your Java bytecode to native code using the JDK's JIT and save it as a native executable.

Even if this were possible, it would probably not be as useful as you think. The JVM does some very sophisticated optimizations, and it can even de-optimize code on the fly if necessary. In other words, it's not as simple as the JIT compiling your code to native machine language once, with that native code remaining unchanged while the program runs. Also, this would not let you make a native executable that is independent of the JVM and its runtime library.

What exactly is the JIT compiler inside a JVM?

Since the computer can only execute machine code, and an interpreter is slower at translating the bytecode to machine code than a compiler is, why does the JVM use an interpreter and not a compiler?

Because compiling to machine code also takes time, especially when the compiler has to analyze the code in order to optimize it. Interpreting is fast enough most of the time, and is actually faster than compile-then-run when the code only runs once or occasionally.

Also, an interpreter doesn't "translate the bytecode to machine code". It evaluates the bytecode and performs the operations it requests. The interpreter itself is machine code, but it doesn't translate bytecode; it interprets/evaluates it.
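The difference can be made concrete with a toy interpreter (a sketch using a made-up three-opcode bytecode, not real JVM bytecode): it dispatches on each opcode and performs the operation itself, producing no machine code at all:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy stack-machine interpreter for a made-up 3-opcode bytecode.
// It never emits machine code; the interpreter's own compiled body
// *is* the machine code, and it evaluates one bytecode at a time.
public class ToyInterpreter {
    static final byte PUSH = 0, ADD = 1, MUL = 2;

    static int run(byte[] code) {
        Deque<Integer> stack = new ArrayDeque<>();
        int pc = 0;
        while (pc < code.length) {
            switch (code[pc++]) {
                case PUSH -> stack.push((int) code[pc++]); // operand follows opcode
                case ADD  -> stack.push(stack.pop() + stack.pop());
                case MUL  -> stack.push(stack.pop() * stack.pop());
            }
        }
        return stack.pop();
    }

    public static void main(String[] args) {
        // (2 + 3) * 4
        byte[] program = {PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL};
        System.out.println(run(program));  // prints 20
    }
}
```

A compiler, by contrast, would take the same byte array and emit native instructions that add and multiply directly, with no dispatch loop at run time.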

Why do we not have another intermediate executable file generated by the JIT compiler for the CPU so it can quickly execute the instructions?

That would violate the Write Once, Run Anywhere paradigm of Java.

Is the JIT compiler really an interpreter that has the ability to compile frequently executed code?

No, the JIT compiler (or more accurately, the HotSpot compiler, as mentioned by EJP) is a compiler executed by the JVM as needed.

Are the terms compiler and interpreter wrongfully used interchangeably?

Correct. They cannot be used interchangeably, since they don't do the same thing. The interpreter executes bytecode. The JIT/HotSpot compiler converts bytecode to machine code, but doesn't run it.

How does the JVM decide to JIT-compile a method (categorize it as hot)?

The HotSpot compilation policy is rather complex, especially for tiered compilation, which is on by default in Java 8. It is neither a simple count of executions nor a matter of the CompileThreshold parameter alone.

The best explanation (apparently, the only reasonable one) can be found in the HotSpot sources; see advancedThresholdPolicy.hpp.

I'll summarize the main points of this advanced compilation policy:

  • Execution starts at tier 0 (interpreter).
  • The main triggers for compilation are

    1. the method invocation counter i;
    2. the backedge counter b (backward branches typically denote a loop in the code).
  • Every time the counters reach a certain frequency value (TierXInvokeNotifyFreqLog, TierXBackedgeNotifyFreqLog), the compilation policy is invoked to decide what to do next with the currently running method. Depending on the values of i and b and the current load of the C1 and C2 compiler threads, it may decide to

    • continue execution in interpreter;
    • start profiling in interpreter;
    • compile the method with C1 at tier 3, with the full profile data required for further recompilation;
    • compile the method with C1 at tier 2, with no profile but with the possibility to recompile (unlikely);
    • finally, compile the method with C1 at tier 1, with no profile or counters (also unlikely).

    Key parameters here are TierXInvocationThreshold and TierXBackEdgeThreshold. Thresholds can be dynamically adjusted for a given method depending on the length of compilation queue.

  • The compilation queue is not FIFO, but rather a priority queue.

  • C1-compiled code with profile data (tier 3) behaves similarly, except that the thresholds for switching to the next level (C2, tier 4) are much larger. E.g. an interpreted method can be compiled at tier 3 after about 200 invocations, while a C1-compiled method becomes subject to recompilation at tier 4 after 5000+ invocations.

  • A special policy is used for method inlining. Tiny methods can be inlined into the caller even if they are not "hot". Slightly larger methods can be inlined only if they are invoked frequently (InlineFrequencyRatio, InlineFrequencyCount).
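As a rough sketch, the tier-3 trigger from advancedThresholdPolicy.hpp boils down to a predicate like the following (the threshold constants are the real HotSpot defaults; the queue-length scaling factor s is simplified here to a plain parameter):

```java
// Simplified sketch of HotSpot's tier-3 trigger predicate, modeled on
// call_predicate_helper in advancedThresholdPolicy.hpp. The constants
// are the default values of the corresponding -XX flags; the real code
// derives the scale s from the C1 compile-queue length and CPU count.
public class TierPolicySketch {
    static final int TIER3_INVOCATION_THRESHOLD     = 200;  // Tier3InvocationThreshold
    static final int TIER3_MIN_INVOCATION_THRESHOLD = 100;  // Tier3MinInvocationThreshold
    static final int TIER3_COMPILE_THRESHOLD        = 2000; // Tier3CompileThreshold

    // i = invocation counter, b = backedge counter,
    // s > 1 means the compile queue is long, raising the effective thresholds.
    static boolean shouldCompileAtTier3(int i, int b, double s) {
        return i > TIER3_INVOCATION_THRESHOLD * s
            || (i > TIER3_MIN_INVOCATION_THRESHOLD * s
                && i + b > TIER3_COMPILE_THRESHOLD * s);
    }
}
```

This shows why the answer "it compiles after N calls" is too simple: a loop-heavy method can qualify through its backedge counter long before the invocation count alone would trigger, and a busy compiler queue postpones everything.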

