When Should I Use the "Strictfp" Keyword in Java

When should I use the strictfp keyword in Java?

Strictfp ensures that you get exactly the same results from your floating point calculations on every platform. If you don't use strictfp, the JVM implementation is free to use extra precision where available.

From the JLS:

Within an FP-strict expression, all intermediate values must be elements of the float value set or the double value set, implying that the results of all FP-strict expressions must be those predicted by IEEE 754 arithmetic on operands represented using single and double formats. Within an expression that is not FP-strict, some leeway is granted for an implementation to use an extended exponent range to represent intermediate results; the net effect, roughly speaking, is that a calculation might produce "the correct answer" in situations where exclusive use of the float value set or double value set might result in overflow or underflow.

In other words, it's about making sure that Write-Once-Run-Anywhere actually means Write-Once-Get-Equally-Wrong-Results-Everywhere.

With strictfp your results are portable; without it, they are more likely to be accurate.
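
As a rough illustration (a sketch only: it assumes a pre-Java 17 JVM running on hardware with an extended exponent range, such as the x87 FPU, and the class name OverflowDemo is made up; on SSE2-based JITs and on Java 17 and later the two methods behave identically), consider an expression whose intermediate result overflows the double range:

class OverflowDemo {

    // FP-strict: the intermediate d * 2.0 must overflow to Infinity,
    // so the result is Infinity on every conforming JVM.
    strictfp static double strictCalc(double d) {
        return (d * 2.0) / 2.0;
    }

    // Not FP-strict (before Java 17): the intermediate may be kept in an
    // extended-exponent register, in which case the division brings it back
    // into range and the result is d instead of Infinity.
    static double defaultCalc(double d) {
        return (d * 2.0) / 2.0;
    }

    public static void main(String[] args) {
        System.out.println(strictCalc(Double.MAX_VALUE));   // always Infinity
        System.out.println(defaultCalc(Double.MAX_VALUE));  // Infinity on most JVMs; possibly
                                                            // 1.7976931348623157E308 on an old x87-based one
    }
}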

What does the keyword strictfp mean?

The Java strictfp keyword ensures that you will get the same result from floating-point operations on every platform. The strictfp keyword can be applied to methods, classes, and interfaces.

strictfp class A {}          // strictfp applied to a class
strictfp interface M {}      // strictfp applied to an interface
class C {
    strictfp void m() {}     // strictfp applied to a method
}

The strictfp keyword cannot be applied to abstract methods, variables, or constructors.

class B {
    strictfp abstract void m();   // Illegal combination of modifiers
}
class B {
    strictfp int data = 10;       // modifier strictfp not allowed here
}
class B {
    strictfp B() {}               // modifier strictfp not allowed here
}
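
Note, however, that (prior to Java 17) applying strictfp to a class or interface makes every floating-point expression declared inside it FP-strict, including those in its methods and nested classes, so the modifier does not need to be repeated on each member. A small illustrative sketch (class names are made up):

strictfp class Calculator {

    // Implicitly FP-strict because the enclosing class is strictfp;
    // no modifier is needed on the method itself.
    double scale(double value, double factor) {
        return value * factor;
    }

    // Nested classes declared inside a strictfp class are FP-strict as well.
    class Helper {
        double half(double value) {
            return value / 2.0;
        }
    }
}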

How can we use the strictfp keyword in a real-life program?

As of Java 17 and JEP 306, you wouldn't use it:

Make floating-point operations consistently strict, rather than have both strict floating-point semantics (strictfp) and subtly different default floating-point semantics.

[T]he SSE2 (Streaming SIMD Extensions 2) extensions, shipped in Pentium 4 and later processors starting circa 2001, could support strict JVM floating-point operations in a straightforward manner without undue overhead.

Since Intel and AMD have both long supported SSE2 and later extensions which allow natural support for strict floating-point semantics, the technical motivation for having a default floating-point semantics different than strict is no longer present.

See also JLS §8.4.3.5:

The strictfp modifier on a method declaration is obsolete and should not be used in new code. Its presence or absence has no effect at run time.

As of the latest LTS release of Java, there is no longer any difference in semantics, and there is a new compiler warning for the use of the strictfp modifier.

public class Strictly {
    public strictfp double calc(double d1, double d2) {
        return d1 * d2;
    }
}

$ javac Strictly.java
Strictly.java:2: warning: [strictfp] as of release 17, all floating-point expressions are evaluated strictly and 'strictfp' is not required
    public strictfp double calc(double d1, double d2) {
           ^
1 warning

What is the use of the strictfp modifier in Java?

strictfp is a method or class modifier that forces the JVM to do floating-point math in a way that is guaranteed to be the same across different JVM implementations (stopping the JVM from cutting corners to improve performance at the possible cost of some precision/accuracy).

More information can be found on the Wikipedia entry, but it's low on detail. Hardcore information (if you care) can be found in the JVM spec.

Does Java strictfp modifier have any effect on modern CPUs?

If by “modern” you mean processors supporting the sort of SSE2 instructions that you quote in your question as produced by your compiler (mulsd, …), then the answer is no, strictfp does not make a difference, because the instruction set does not allow taking advantage of the absence of strictfp. The available instructions are already optimal for computing to the precise specifications of strictfp. In other words, on that kind of modern CPU, you get strictfp semantics all the time for the same price.

If by “modern” you mean the historical 387 FPU, then it is possible to observe a difference if an intermediate computation would overflow or underflow in strictfp mode (the difference being that, without strictfp, the intermediate may not overflow or, on underflow, may keep more precision bits than expected).

A typical strictfp computation compiled for the 387 will look like the assembly in this answer, with well-placed multiplications by well-chosen powers of two to make underflow behave the same as in IEEE 754 binary64. A round-trip of the result through a 64-bit memory location then takes care of overflows.

The same computation compiled without strictfp would produce one 387 instruction per basic operation, for instance just the multiplication instruction fmulp for a source-level multiplication. (The 387 would have been configured to use the same significand width as binary64, 53 bits, at the beginning of the program.)
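
If you want to verify this on your own machine, one option is to inspect the JIT-compiled code. The sketch below is only a rough way to do that: it assumes a HotSpot JVM with the hsdis disassembler plugin installed, and the class and method names are made up.

public class MulDemo {
    // On an SSE2-capable x86-64 JVM both methods compile to the same mulsd
    // instruction, so strictfp makes no difference to the generated code.
    strictfp static double strictMul(double a, double b) { return a * b; }
    static double plainMul(double a, double b) { return a * b; }

    public static void main(String[] args) {
        double s = 0;
        for (int i = 0; i < 1_000_000; i++) {   // warm up so the JIT compiles both methods
            s += strictMul(i, 0.5) + plainMul(i, 0.5);
        }
        System.out.println(s);
    }
}

Running it with java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly MulDemo and comparing the output for the two methods should show identical floating-point instructions on such hardware.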


