Float/Double Precision in Debug/Release Modes

They can indeed be different. According to the ECMA CLI specification (ECMA-335):

Storage locations for floating-point numbers (statics, array elements, and fields of classes) are of fixed size. The supported storage sizes are float32 and float64. Everywhere else (on the evaluation stack, as arguments, as return types, and as local variables) floating-point numbers are represented using an internal floating-point type. In each such instance, the nominal type of the variable or expression is either R4 or R8, but its value can be represented internally with additional range and/or precision. The size of the internal floating-point representation is implementation-dependent, can vary, and shall have precision at least as great as that of the variable or expression being represented. An implicit widening conversion to the internal representation from float32 or float64 is performed when those types are loaded from storage. The internal representation is typically the native size for the hardware, or as required for efficient implementation of an operation.

What this means in practice is that the following comparison may or may not hold:

class Foo
{
    double _v = ...;

    void Bar()
    {
        double v = _v;

        if( v == _v )
        {
            // Code may or may not execute here.
            // _v is 64-bit.
            // v could be either 64-bit (debug) or 80-bit (release) or something else (future?).
        }
    }
}

Take-home message: never check floating-point values for equality.

release mode uses double precision even for float variables

As discussed in the comments, this is expected. It can be side-stepped by removing the JIT's ability to keep the value in a register (which may be wider than the declared type), forcing it into a field, which has a clearly defined size:

class WorkingContext
{
    public float Value; // you'll excuse me a public field here, I trust
    public override string ToString()
    {
        return Value.ToString();
    }
}
static void Main()
{
    // start with some small magic number
    WorkingContext a = new WorkingContext(), temp = new WorkingContext();
    a.Value = 0.000000000000000013877787807814457f;
    for (; ; )
    {
        // add the small a to 1
        temp.Value = 1f + a.Value;
        // break, if a + 1 really is > 1
        if (temp.Value - 1f != 0f) break;
        // otherwise a is too small -> increase it
        a.Value *= 2f;
        Console.Out.WriteLine("current increment: " + a);
    }
    Console.Out.WriteLine("Found epsilon: " + a);
    Console.ReadKey();
}

Interestingly, I tried this with a struct first, but the JIT was able to see past my cheating (presumably because it is all on the stack).

Float values behaving differently across the release and debug builds

Release mode may have a different floating-point strategy set. Compilers offer different floating-point arithmetic modes depending on the level of optimization you want; MSVC, for example, has /fp:strict, /fp:fast, and /fp:precise.

Visual Studio debug vs. release build: comparing int and float mismatch

If we disassemble the release build:

; Line 14
push OFFSET ??_C@_03HNJPMNDP@eq?6?$AA@
call DWORD PTR __imp__printf
add esp, 4
; Line 18
xor eax, eax
; Line 20
ret 0

It's just printing eq and exiting, which means the floating-point comparison has been optimized out entirely. In the debug assembly, by contrast, we see fld and fild instructions:

; Line 9
fld DWORD PTR __real@4b800000
fstp DWORD PTR _f$[ebp]
; Line 10
fild DWORD PTR _i$[ebp]
fstp DWORD PTR _g$[ebp]
; Line 13
fild DWORD PTR _i$[ebp]

These are x87 instructions, which Visual Studio 2010 emits by default for 32-bit (IA-32) targets. I suspect that if you compile with /arch:SSE2 instead, you will get different results.

Hans Passant's comment essentially confirms what I just said.


