Optimization by Java Compiler

javac does very little optimization, if any.

The point is that the JIT compiler does most of the optimization, and it works best when it has a lot of information - some of which would be lost if javac performed optimization too. If javac performed some sort of loop unrolling, for example, it would be harder for the JIT to do that itself in a general way; and the JIT has more information about which optimizations will actually pay off, since it knows the target platform.
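As an illustration (a hypothetical example, not taken from any of the questions below), javac compiles a loop like this essentially verbatim, while the JIT is free to transform it at runtime:

static long sumTo(int n) {
    // javac emits this loop literally: no unrolling, no strength reduction.
    // At runtime the JIT compiler may unroll or otherwise rewrite it based
    // on profiling data and the actual target CPU.
    long sum = 0;
    for (int i = 0; i < n; i++) {
        sum += i;
    }
    return sum;
}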

Should I use javac -O option to optimize?

The -O option is a no-op, according to a comment in the javac source code around line 553.

It was probably useful back when the JIT compiler was not yet efficient, or when there was no JIT compiler at all.

Why is Java compiler not optimizing a trivial method?

A typical Java virtual machine optimizes your program at runtime, not during compilation. At runtime, the JVM knows a lot more about your application, both about the actual behavior of your program and about the actual hardware your program is executed upon.

The bytecode is merely a description of how your program is supposed to behave. The runtime is free to apply any optimization to your bytecode.

Of course, one can argue that such trivial optimizations could be applied even during compilation, but in general it makes sense not to spread optimizations over several steps. Every optimization effectively causes a loss of information about the original program, and that can make other optimizations impossible. That said, not all of the "best" optimizations are obvious. A simple approach is therefore to drop (almost) all optimizations during compilation and to apply them at runtime instead.
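For instance (a hypothetical trivial method - the question's code is not reproduced here), javac compiles the following exactly as written and leaves inlining and constant folding to the JIT:

static int twice(int x) {
    return x + x;
}

static int fourTimes(int x) {
    // javac emits two invokestatic calls here; a JIT compiler is free to
    // inline twice() and reduce the whole method to a single shift or add.
    return twice(twice(x));
}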

Does the Java compiler optimize an unnecessary ternary operator?

I find that unnecessary usage of the ternary operator tends to make the code more confusing and less readable, contrary to the original intention.

That being said, the compiler's behaviour in this regard can easily be tested by comparing the bytecode that javac produces (for example, with javap -c).
Here are two mock classes to illustrate this:

Case I (without the ternary operator):

class Class {

    public static void foo(int a, int b, int c) {
        boolean val = (a == c && b != c);
        System.out.println(val);
    }

    public static void main(String[] args) {
        foo(1, 2, 3);
    }
}

Case II (with the ternary operator):

class Class {

    public static void foo(int a, int b, int c) {
        boolean val = (a == c && b != c) ? true : false;
        System.out.println(val);
    }

    public static void main(String[] args) {
        foo(1, 2, 3);
    }
}

Bytecode for foo() method in Case I:

 0: iload_0
 1: iload_2
 2: if_icmpne     14
 5: iload_1
 6: iload_2
 7: if_icmpeq     14
10: iconst_1
11: goto          15
14: iconst_0
15: istore_3
16: getstatic     #2   // Field java/lang/System.out:Ljava/io/PrintStream;
19: iload_3
20: invokevirtual #3   // Method java/io/PrintStream.println:(Z)V
23: return

Bytecode for foo() method in Case II:

 0: iload_0
 1: iload_2
 2: if_icmpne     14
 5: iload_1
 6: iload_2
 7: if_icmpeq     14
10: iconst_1
11: goto          15
14: iconst_0
15: istore_3
16: getstatic     #2   // Field java/lang/System.out:Ljava/io/PrintStream;
19: iload_3
20: invokevirtual #3   // Method java/io/PrintStream.println:(Z)V
23: return

Note that in both cases the bytecode is identical, i.e. the compiler disregards the redundant ternary operator when compiling the value of the val boolean.


EDIT:

The conversation regarding this question has gone in several directions.

As shown above, in both cases (with or without the redundant ternary) the compiled Java bytecode is identical.
Whether this can be regarded as an optimization by the Java compiler depends somewhat on your definition of optimization. As pointed out several times in other answers, it is arguably not an optimization at all: in both cases the generated bytecode is simply the most direct sequence of stack operations that performs this task, with or without the ternary.

However regarding the main question:

Obviously it would be better to just assign the statement’s result to the boolean variable, but does the compiler care?

The simple answer is no. The compiler doesn't care.

Java compiler optimizations with final local variables

Why is there a difference between these 2 methods?

a, b and n are constant variables in the doStuffFinal() method, because:

A constant variable is a final variable of primitive type or type String that is initialized with a constant expression (§15.29)

But the variables in doStuffNotFinal() aren't constant variables, because they're not final, and so expressions referring to them aren't constant expressions.

As described in JLS §15.29 (Constant Expressions), the result of a binary operator with constant expression operands is also a constant expression; so a + b is a constant expression, and so is a + b + n. And also:

Constant expressions of type String are always "interned"

Therefore, a + b + n is folded into a single interned String that appears in the constant pool, which is why you can see it when you decompile.
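The question's doStuffFinal() and doStuffNotFinal() methods are not reproduced above; the following is a hypothetical reconstruction consistent with the behaviour described:

class FinalDemo {

    static void doStuffFinal() {
        final String a = "Hello, ";
        final String b = "world ";
        final int n = 42;
        // a, b and n are constant variables, so a + b + n is a constant
        // expression: javac folds it into the single interned literal
        // "Hello, world 42", which lands in the constant pool.
        System.out.println(a + b + n);
    }

    static void doStuffNotFinal() {
        String a = "Hello, ";
        String b = "world ";
        int n = 42;
        // Not constant variables, so a + b + n is not a constant expression:
        // javac emits code that concatenates the values at runtime.
        System.out.println(a + b + n);
    }
}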



Can't javac optimize such a trivial case?

The language spec says that the constant string has to be interned in the final case; it doesn't say it can't be in the non-final case. So, sure, it could.

There's only so much time in the day; compiler implementors can only do so much. Such a trivial case is probably uninteresting to optimize for in the compiler because it's going to be pretty rare.



does that mean we'd better put the final keyword all over the place

Don't forget that javac isn't the only thing that does optimization: javac is actually pretty dumb, and literal in its translation of the Java code to bytecode. Far more interesting optimizations occur in the JIT.

Additionally, you only get this benefit of making things final in very specific cases: final String or primitive variables initialized with constant expressions. It of course depends on your codebase, but these are not going to account for a substantial portion of your variables.

So, of course you can spray them everywhere, but it's unlikely to deliver benefits that outweigh the additional visual noise of having final scattered across your code.

Do java finals help the compiler create more efficient bytecode?

The bytecode is not significantly more or less efficient if you use final, because Java bytecode compilers typically do little in the way of optimization. The efficiency bonus (if any) will be in the native code produced by the JIT compiler¹.

In theory, using final provides a hint to the JIT compiler that should help it optimize. In practice, recent HotSpot JIT compilers can do a better job by ignoring your hints. For instance, a modern JIT compiler typically performs a global analysis to find out whether a given method call is a call to a leaf method in the context of the application's currently loaded classes. This analysis is more accurate than your final hints can be, and the runtime can even detect when a newly loaded class invalidates the analysis ... and redo the analysis and native code generation for the affected code.
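A rough sketch of what that analysis buys you (hypothetical class names): the method below is not declared final, yet HotSpot can observe that no loaded class overrides it, devirtualize the call, and inline it - deoptimizing later if an overriding subclass is ever loaded.

class Greeter {
    String greet() {                 // deliberately not declared final
        return "hello";
    }
}

class Caller {
    static String callInLoop(Greeter g) {
        String s = "";
        for (int i = 0; i < 1_000_000; i++) {
            // Hot call site: with no overriding subclass loaded, the JIT can
            // treat greet() as if it were final and inline it here.
            s = g.greet();
        }
        return s;
    }
}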

There are other semantic consequences for use of final:

  • Declaring a variable as final stops you from accidentally changing it. (And expresses your intention to the reader.)
  • Declaring a method as final prevents overriding in a subclass.
  • Declaring a class as final prevents subclassing entirely.
  • Declaring a field as final stops it from being reassigned after initialization.
  • Declaring a field as final has important consequences for thread-safety; see JLS 17.5. (A sketch follows this list.)
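A minimal sketch of that thread-safety guarantee (hypothetical class names): a thread that obtains a reference to a Holder, even through a data race, is guaranteed to see the fully initialized value of its final field.

class Holder {
    final int x;
    Holder() { x = 42; }        // final field assigned in the constructor
}

class Publisher {
    static Holder holder;       // deliberately not volatile

    static void writer() {
        holder = new Holder();
    }

    static void reader() {
        Holder h = holder;
        if (h != null) {
            // Guaranteed to print 42 by the final field semantics of JLS 17.5;
            // if x were not final, a reader could legitimately observe 0.
            System.out.println(h.x);
        }
    }
}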

In the right circumstances, these effects can all be good. However, they also limit your options for reuse through subclassing. This needs to be considered when deciding whether or not to use final.

So good practice is to use final to (broadly speaking) express your design intentions, and to achieve other semantic effects that you require. If you use final solely as an optimization hint, you won't achieve much.


There are a couple of exceptions where final could lead to small performance improvements on some platforms.

  • Under certain circumstances, declaring a field as final changes the way that the bytecode compiler deals with it. I've given one example above. Another is the "constant variable" case (JLS 4.12.4), where a static final field's value is inlined by the bytecode compiler both in the current class and in other classes, and this may affect the observed behavior of code. For example, referring to a constant variable will NOT trigger class initialization; hence, adding final may change the order of class initialization. (A sketch follows this list.)

  • It is conceivable that declaring a field or local variable as final may allow minor JIT compiler optimizations that wouldn't otherwise be done. However, any field or variable that could be declared final can also be inferred to be effectively final by the JIT compiler. (It is just not clear whether the JIT compiler actually does this, and whether it affects the generated native code.)
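A minimal sketch of the constant-variable case from the first bullet (hypothetical class names): reading the inlined constant does not trigger class initialization, whereas reading a non-constant static final field does.

class Config {
    static final int SIZE = 10;                 // constant variable: inlined by javac
    static final long NOW = System.nanoTime();  // static final, but NOT a constant expression

    static {
        System.out.println("Config initialized");
    }
}

class Client {
    public static void main(String[] args) {
        // Compiled as the literal 10; "Config initialized" is not printed.
        System.out.println(Config.SIZE);
        // System.out.println(Config.NOW);  // this access, by contrast, would trigger initialization
    }
}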

However, the bottom line remains the same. You should use final to express your design intentions, not as an optimization hint.


¹ This answer assumes a recent JVM with a good JIT or AOT compiler. The earliest Sun Java implementations had no JIT compiler at all, and early Android implementations had compilers that did a poor job of optimizing; indeed, the early Android developer documentation advised various source-level micro-optimizations to compensate. That advice has since been removed.

Java compiler optimisation

The compiler will not (in general) make the call just once, as the call may have side effects or may return a different result each time - it couldn't possibly know when you want it to reuse the first result and when you want it to actually make the call again. You need to save the result yourself:

// Call the expensive method once and reuse the result.
final ExpensiveCallResult nullableResult = object.expensiveCall();
String name = null;
if (nullableResult != null) {
    name = nullableResult.getName();
}

