Differences Between Just in Time Compilation and on Stack Replacement

In general, Just-in-time compilation refers to compiling native code at runtime and executing it instead of (or in addition to) interpreting. Some VMs, such as Google V8, don't even have an interpreter; they JIT compile every function that gets executed (with varying degrees of optimization).

On Stack Replacement (OSR) is a technique for switching between different implementations of the same function. For example, you could use OSR to switch from interpreted or unoptimized code to JITed code as soon as it finishes compiling.

OSR is useful when you identify a function as "hot" while it is running. This is not necessarily because the function is called frequently; it might be called only once but spend a lot of time in a big loop that could benefit from optimization. When OSR occurs, the VM is paused, and the stack frame for the target function is replaced by an equivalent frame which may have its variables in different locations.

OSR can also occur in the other direction: from optimized code to unoptimized code or interpreted code. Optimized code may make some assumptions about the runtime behavior of the program based on past behavior. For instance, you could convert a virtual or dynamic method call into a static call if you've only ever seen one type of receiver object. If it turns out later that these assumptions were wrong, OSR can be used to fall back to a more conservative implementation: the optimized stack frame gets converted into an unoptimized stack frame. If the VM supports inlining, you might even end up converting an optimized stack frame into several unoptimized stack frames.
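To make the devirtualization example concrete, here is an illustrative sketch in plain Java of what speculatively optimized code conceptually does. This is not how a real VM expresses it (the actual transformation happens in the JIT's intermediate representation, and the guard failure triggers deoptimization rather than a Java-level fallback call); the class and method names are invented for illustration:

```java
// Illustrative sketch: what speculatively devirtualized code
// *conceptually* looks like. A real JIT performs this in its IR,
// and a failed guard triggers deoptimization, not a method call.
interface Shape { double area(); }

class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

public class DevirtSketch {
    // Generic path: an ordinary virtual dispatch.
    static double slowPath(Shape s) {
        return s.area();
    }

    // Optimized path after only ever observing Circle receivers:
    // guard on the expected type, then call the known method directly
    // (which makes it eligible for inlining).
    static double fastPath(Shape s) {
        if (s instanceof Circle) {          // guard on the assumption
            return ((Circle) s).area();     // direct, inlinable call
        }
        // Assumption violated: a real VM would deoptimize here,
        // converting the optimized frame back to an unoptimized one.
        return slowPath(s);
    }

    public static void main(String[] args) {
        System.out.println(fastPath(new Circle(1.0)));
    }
}
```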

What is the difference between Just-in-time compilation and dynamic compilation?

Admittedly, Wikipedia is confusing. First it says:

just-in-time (JIT) compilation, also known as dynamic translation...

Then it says:

JIT compilation is a form of dynamic compilation, and allows adaptive
optimization such as dynamic recompilation...

Taken together, this suggests that dynamic translation is itself a form of dynamic compilation, which does not make much sense.

Before 1995, the term dynamic compilation used to be the standard and only term for the family of techniques of compiling code at run-time. For example, check out this paper from 1985 that discusses dynamic compilation for Prolog. Many pre-1995 papers that use the term are easy to find.

However, the Java programming language was released around 1995, and the Java documents were the first to use the terms JIT compilation and JIT compiler. The earliest such document that I could find is this, although the first Java JIT compiler was developed in 1996. I've seen many papers published in that time frame that use the two terms interchangeably.

I also remember that some of the papers I've read consider JIT compilation a type of dynamic compilation.

What are the differences between a Just-in-Time-Compiler and an Interpreter?

Just-in-time compilation is the conversion of non-native code, for example bytecode, into native code just before it is executed.

From Wikipedia:

JIT builds upon two earlier ideas in run-time environments: bytecode compilation and dynamic compilation. It converts code at runtime prior to executing it natively, for example bytecode into native machine code.

An interpreter executes a program. It may or may not have a jitter.

Again, from Wikipedia:

An interpreter may be a program that either

  1. executes the source code directly,
  2. translates source code into some efficient intermediate representation (code) and immediately executes this, or
  3. explicitly executes stored precompiled code made by a compiler which is part of the interpreter system.

Both the standard Java and .NET distributions have JIT compilation, but it is not required by the standards. The JIT compilers for Java and .NET are of course different, because their intermediate bytecodes are different; the principle is the same, though.
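To illustrate what option 2/3 in the Wikipedia definition above looks like in practice, here is a minimal sketch of an interpreter for a tiny, invented stack-based bytecode. The opcodes and encoding are made up for illustration; they resemble neither JVM bytecode nor CIL:

```java
// A toy interpreter for a hypothetical stack-based bytecode.
// Opcodes are invented for illustration only.
public class TinyInterpreter {
    static final int PUSH = 0, ADD = 1, MUL = 2, HALT = 3;

    static int run(int[] code) {
        int[] stack = new int[16];
        int sp = 0;                          // stack pointer
        for (int pc = 0; ; ) {               // program counter
            switch (code[pc++]) {
                case PUSH: stack[sp++] = code[pc++]; break;
                case ADD:  sp--; stack[sp - 1] += stack[sp]; break;
                case MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
                case HALT: return stack[sp - 1];
            }
        }
    }

    public static void main(String[] args) {
        // computes (2 + 3) * 4
        int[] program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, HALT };
        System.out.println(run(program)); // prints 20
    }
}
```

A JIT compiler would instead translate such a program into native machine code once and then run that, rather than re-dispatching on every opcode.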

What are the advantages of just-in-time compilation versus ahead-of-time compilation?

The ngen tool page spilled the beans (or at least provided a good comparison of native images versus JIT-compiled images). Executables that are compiled ahead-of-time typically have the following benefits:

  1. Native images load faster because they have less startup activity and require a smaller, static amount of memory (no memory is needed for the JIT compiler);
  2. Native images can share library code, while JIT-compiled images cannot.

Just-in-time compiled executables typically have the upper hand in these cases:

  1. Native images are larger than their bytecode counterparts;
  2. Native images must be regenerated whenever the original assembly or one of its dependencies is modified.

The need to regenerate an ahead-of-time-compiled image every time one of its components changes is a huge disadvantage for native images. On the other hand, the fact that JIT-compiled images can't share library code can cause a serious memory hit. The operating system can load any native library at one physical location and share its immutable parts with every process that wants to use it, leading to significant memory savings, especially with system frameworks that virtually every program uses. (I imagine this is somewhat offset by the fact that JIT-compiled programs compile only what they actually use.)

The general consideration of Microsoft on the matter is that large applications typically benefit from being compiled ahead-of-time, while small ones generally don't.

Confusion about HotSpot JVM JIT

Assuming you are asking about the HotSpot JVM, the answer is that the remaining iterations will be executed in compiled code.

HotSpot JVM has a technique known as 'on-stack replacement' to switch from interpreter to compiled code while the method is running.

http://openjdk.java.net/groups/hotspot/docs/HotSpotGlossary.html

on-stack replacement
    Also known as 'OSR'. The process of converting an interpreted (or less optimized) stack frame into a compiled (or more optimized) stack frame. This happens when the interpreter discovers that a method is looping, requests the compiler to generate a special nmethod with an entry point somewhere in the loop (specifically, at a backward branch), and transfers control to that nmethod. A rough inverse to deoptimization.

If you run the JVM with the -XX:+PrintCompilation flag, OSR compilations are marked with a % sign:

    274   27       3       java.lang.String::lastIndexOf (52 bytes)
    275   29       3       java.lang.String::startsWith (72 bytes)
    275   28       3       java.lang.String::startsWith (7 bytes)
    275   30       3       java.util.Arrays::copyOf (19 bytes)
    276   32       4       java.lang.AbstractStringBuilder::append (29 bytes)
    276   31 s     3       java.lang.StringBuffer::append (13 bytes)
    283   33 %     3       LoopTest::myLongLoop @ 13 (43 bytes)
                 ^                                 ^
                OSR                  bytecode index of OSR entry

UPDATE

Typically after OSR compilation a regular compilation is also queued, so that the next time the method is called, it will start running directly in compiled mode.

    187   32 %     3       LoopTest::myLongLoop @ 13 (43 bytes)
    187   33       3       LoopTest::myLongLoop (43 bytes)

However, if the regular compilation is not complete by the time the method is called again, the method will start running in the interpreter and then switch to the OSR entry inside the loop.
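The LoopTest class shown in the logs above is not included in the answer. A minimal sketch of a class that could trigger an OSR compilation like the one logged (a method entered once that spends its time in a long loop) might look like this; the loop bound is arbitrary:

```java
// Hypothetical reconstruction of a class like the LoopTest in the log:
// myLongLoop is entered once but loops long enough that the interpreter's
// backedge counter requests an OSR compilation at the backward branch.
// Observe with: java -XX:+PrintCompilation LoopTest
public class LoopTest {
    static long myLongLoop(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {   // backedge counter grows here
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Called only once, but "hot" because of the loop inside.
        System.out.println(myLongLoop(100_000_000));
    }
}
```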

How does the JVM decide to JIT-compile a method (categorize a method as hot)?

HotSpot's compilation policy is rather complex, especially for tiered compilation, which is on by default in Java 8. It is neither a simple number of executions nor a matter of the CompileThreshold parameter.

The best explanation (apparently, the only reasonable one) can be found in the HotSpot sources; see advancedThresholdPolicy.hpp.

I'll summarize the main points of this advanced compilation policy:

  • Execution starts at tier 0 (interpreter).
  • The main triggers for compilation are

    1. the method invocation counter i;
    2. the backedge counter b. Backward branches typically denote a loop in the code.
  • Every time the counters reach a certain frequency (TierXInvokeNotifyFreqLog, TierXBackedgeNotifyFreqLog), the compilation policy is called to decide what to do next with the currently running method. Depending on the values of i and b and the current load of the C1 and C2 compiler threads, it may decide to

    • continue execution in the interpreter;
    • start profiling in the interpreter;
    • compile the method with C1 at tier 3, with the full profile data required for further recompilation;
    • compile the method with C1 at tier 2, with no profile but with the possibility to recompile (unlikely);
    • finally, compile the method with C1 at tier 1, with no profile or counters (also unlikely).

    Key parameters here are TierXInvocationThreshold and TierXBackEdgeThreshold. The thresholds can be dynamically adjusted for a given method depending on the length of the compilation queue.

  • The compilation queue is not FIFO, but rather a priority queue.

  • C1-compiled code with profile data (tier 3) behaves similarly, except that the thresholds for switching to the next level (C2, tier 4) are much higher. E.g. an interpreted method may be compiled at tier 3 after about 200 invocations, while a C1-compiled method becomes subject to recompilation at tier 4 only after 5000+ invocations.

  • A special policy is used for method inlining. Tiny methods can be inlined into the caller even if they are not "hot". Slightly larger methods can be inlined only if they are invoked frequently (InlineFrequencyRatio, InlineFrequencyCount).
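The counter-driven shape of this policy can be sketched as a toy model. The thresholds and the decision rule below are simplified stand-ins, not HotSpot's actual values or logic (which also weighs compiler-thread load and queue length); the point is only the structure of the decision:

```java
// Toy model of a tiered, counter-driven compilation policy.
// Thresholds and decision logic are invented simplifications,
// NOT HotSpot's real advancedThresholdPolicy.
public class TieredPolicySketch {
    static final int TIER3_THRESHOLD = 200;    // illustrative value
    static final int TIER4_THRESHOLD = 5000;   // illustrative value

    // i = invocation counter, b = backedge counter
    static int chooseTier(int i, int b) {
        int activity = i + b;                  // combined hotness signal
        if (activity >= TIER4_THRESHOLD) return 4;  // C2, fully optimized
        if (activity >= TIER3_THRESHOLD) return 3;  // C1 with profiling
        return 0;                                   // stay in interpreter
    }

    public static void main(String[] args) {
        System.out.println(chooseTier(10, 50));     // cold: interpreter
        System.out.println(chooseTier(150, 100));   // warm: C1, tier 3
        System.out.println(chooseTier(100, 6000));  // hot loop: C2, tier 4
    }
}
```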

What is the difference between Angular AOT and JIT compiler

I've read that AOT and JIT both compile TypeScript to JavaScript
whether that is server side or on the client side.

No, that is not what the AOT and JIT compilers do. TypeScript is transpiled into JavaScript by the TypeScript compiler.

Angular compiler

There are two compilers that do the hard work of compilation and code generation:

  • view compiler
  • provider compiler

The view compiler compiles component templates and generates view factories. It parses expressions and HTML elements inside the template and goes through many of the standard compiler phases:

tokens (lexer) -> abstract syntax tree (parser) -> intermediate code -> output

The provider compiler compiles module providers and generates module factories.

JIT vs AOT

These two compilers are used in both JIT and AOT compilation. JIT and AOT compilations differ in how they get the metadata associated with the component or module:

    // the view compiler needs this data
    @Component({
      providers: ...,
      template: ...
    })

    // the provider compiler needs this data
    @NgModule({
      providers: ...
    })

The JIT compiler uses the runtime to get the data. The decorator functions @Component and @NgModule are executed, and they attach metadata to the component or module class that is later read by the Angular compilers using reflective capabilities (the Reflect library).

The AOT compiler uses static code analysis provided by the TypeScript compiler to extract metadata and doesn't rely on code evaluation. It is therefore a bit more limited than the JIT compiler, since it can't evaluate code that isn't statically analyzable. For example, it requires functions to be exported:

    // this module-scoped function
    function declarations() {
      return [
        SomeComponent
      ];
    }

    // should be exported
    export function declarations() {
      return [
        SomeComponent
      ];
    }

    @NgModule({
      declarations: declarations(),
    })
    export class SomeModule {}

Again, both the JIT and AOT compilers are mostly wrappers that extract the metadata associated with a component or module; both use the underlying view and provider compilers to generate factories.

If I am compiling it when I build it with Webpack and grunt and
deploying that minified javascript how does AOT and JIT even come into
the picture?

Angular provides a webpack plugin that performs transpilation from TypeScript during the build. This plugin can also AOT-compile your project, so that you don't include the JIT compiler in the bundle and don't perform compilation on the client.

Is JIT compiler a Compiler or Interpreter?

A JIT (just-in-time) compiler is a compiler: it translates code into machine code at runtime and performs optimizations along the way, which is why it is, after all, called a compiler.

Plain HTML is not executed at all, and JavaScript was traditionally interpreted by browsers as-is; modern JavaScript engines, however, JIT-compile it too.


