Is Floating-Point Math Consistent in C#? Can It Be?

Is floating-point math consistent in C#? Can it be?

I know of no way to make normal floating-point operations deterministic in .net. The JITter is allowed to create code that behaves differently on different platforms (or between different versions of .net), so using normal floats in deterministic .net code is not possible.

The workarounds I considered:

  1. Implement FixedPoint32 in C#. While this is not too hard (I have a half-finished implementation), the very small range of values makes it annoying to use: you have to be careful at all times so that you neither overflow nor lose too much precision. In the end I found this no easier than using integers directly. (A rough sketch of such a type follows this list.)
  2. Implement FixedPoint64 in C#. I found this rather hard to do: for some operations, 128-bit intermediate integers would be useful, but .net doesn't offer such a type.
  3. Implement a custom 32-bit floating point. The lack of a BitScanReverse intrinsic causes a few annoyances when implementing this, but currently I think this is the most promising path.
  4. Use native code for the math operations. Incurs the overhead of a delegate call on every math operation.
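
For what it's worth, here is a rough 16.16 sketch of what such a FixedPoint32 type could look like. The 16 fractional bits and the member names are my own illustrative choices, not the half-finished implementation mentioned above:

public struct FixedPoint32
{
    // Raw = value * 2^16, so the usable range is roughly -32768..32767 with ~1/65536 resolution.
    public readonly int Raw;
    private FixedPoint32(int raw) { Raw = raw; }

    public static FixedPoint32 FromInt(int value) => new FixedPoint32(value << 16);
    public static FixedPoint32 operator +(FixedPoint32 a, FixedPoint32 b) => new FixedPoint32(a.Raw + b.Raw);
    public static FixedPoint32 operator -(FixedPoint32 a, FixedPoint32 b) => new FixedPoint32(a.Raw - b.Raw);
    // Multiply/divide need a 64-bit intermediate so the fractional bits aren't lost or overflowed.
    public static FixedPoint32 operator *(FixedPoint32 a, FixedPoint32 b) => new FixedPoint32((int)(((long)a.Raw * b.Raw) >> 16));
    public static FixedPoint32 operator /(FixedPoint32 a, FixedPoint32 b) => new FixedPoint32((int)(((long)a.Raw << 16) / b.Raw));
    public override string ToString() => (Raw / 65536m).ToString();
}

The tiny range is exactly the annoyance from option 1: with only 16 integer bits, squaring anything much above 181 already overflows.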

I've just started a software implementation of 32-bit floating-point math. It can do about 70 million additions/multiplications per second on my 2.66 GHz i3.
https://github.com/CodesInChaos/SoftFloat . Obviously it's still very incomplete and buggy.

Is floating point arithmetic stable?


But what if the equation used to calculate the number is the same? Can I assume the outcome would be the same too?

No, not necessarily.

In particular, in some situations the JIT is permitted to use a more accurate intermediate representation - e.g. 80 bits when your original data is 64 bits - whereas in other situations it won't. That can result in seeing different results when any of the following is true:

  • You have slightly different code, e.g. using a local variable instead of a field, which can change whether the value is stored in a register or not. (That's one relatively obvious example; there are other much more subtle ones which can affect things, such as the existence of a try block in the method...)
  • You are executing on a different processor (I used to observe differences between AMD and Intel; there can be differences between different CPUs from the same manufacturer too)
  • You are executing with different optimization levels (e.g. under a debugger or not)

From the C# 5 specification section 4.1.6:

Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an "extended" or "long double" floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating-point operations with less precision, and rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating-point operations. Other than delivering more precise results, this rarely has any measurable effects. However, in expressions of the form x * y / z, where the multiplication produces a result that is outside the double range, but the subsequent division brings the temporary result back into the double range, the fact that the expression is evaluated in a higher range format may cause a finite result to be produced instead of an infinity.
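
As a hedged illustration of that last sentence (what you actually see depends on the JIT and hardware: x64 runtimes use SSE and evaluate at 64 bits, while older x86/x87 code generation could keep the intermediate at 80 bits):

double x = 1e308, y = 2, z = 2;
Console.WriteLine(x * y / z);           // Infinity when the intermediate is 64-bit (the product overflows);
                                        // an 80-bit x87 intermediate could instead yield 1E+308
Console.WriteLine((double)(x * y) / z); // the explicit cast forces the intermediate down to 64 bits, so Infinity either way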


How deterministic is floating point inaccuracy?

From what I understand, you're only guaranteed identical results provided that you're dealing with the same instruction set and compiler, and that any processors you run on adhere strictly to the relevant standards (i.e. IEEE 754). That said, unless you're dealing with a particularly chaotic system, any drift in calculation between runs isn't likely to result in buggy behavior.

Specific gotchas that I'm aware of:

  1. Some operating systems allow you to set the mode of the floating-point processor in ways that break compatibility.

  2. Floating-point intermediate results often use 80-bit precision in registers, but only 64-bit precision in memory. If a program is recompiled in a way that changes register spilling within a function, it may return different results compared to other versions. Most platforms will give you a way to force all results to be truncated to the in-memory precision.

  3. Standard library functions may change between versions. I gather that there are some fairly commonly encountered examples of this in gcc 3 vs. gcc 4.

  4. The IEEE standard itself allows some binary representations to differ, specifically NaN values, but I can't recall the details. (A quick way to inspect NaN bits in C# follows this list.)
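
On the NaN point, the raw bits are easy to inspect from C#; whether the payloads of two NaNs match is exactly the part that can vary between platforms:

double a = double.NaN;
double b = 0.0 / 0.0;
Console.WriteLine(BitConverter.DoubleToInt64Bits(a).ToString("X16")); // typically FFF8000000000000
Console.WriteLine(BitConverter.DoubleToInt64Bits(b).ToString("X16")); // the payload may or may not match the line above
Console.WriteLine(a == b);      // False - NaN never compares equal to anything, including itself
Console.WriteLine(a.Equals(b)); // True  - Equals treats all NaNs as equal regardless of payload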

Floating point arithmetic is too reliable


double x = (0.1 * 3) / 3;
Console.WriteLine("x: {0}", x); // prints "x: 0.1" here; the default ToString rounds for display (.NET Core 3.0+ prints the full 0.10000000000000002)
Console.WriteLine("x == 0.1: {0}", x == 0.1); // prints "x == 0.1: False"

Remark: based on this, don't conclude that floating-point arithmetic is unreliable in .NET. The default formatting simply rounds the value for display, which is why x prints as 0.1 even though it is not exactly equal to the literal 0.1.
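
To make the hidden difference visible, print with more significant digits (the exact strings below are approximate and can vary slightly by runtime):

double x = (0.1 * 3) / 3;
Console.WriteLine(x.ToString("G17"));     // 0.10000000000000002
Console.WriteLine((0.1).ToString("G17")); // 0.10000000000000001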

What's the benefit of accepting floating point inaccuracy in c#


Why does c# accept the inaccuracy by using floating points to store data?

"C#" doesn't accept the tradeoff of performance over accuracy; users do, or do not, accept that.

C# has three floating point types - float, double and decimal - because those three types meet the vast majority of the needs of real-world programmers.

float and double are good for "scientific" calculations where an answer that is correct to three or four decimal places is always close enough, because that's the precision that the original measurement came in with. Suppose you divide 10.00 by 3 and get 3.333333333333. Since the original measurement was probably accurate to only 0.01, the fact that the computed result is off by less than 0.0000000000004 is irrelevant. In scientific calculations, you're not representing known-to-be-exact quantities. Imprecision in the fifteenth decimal place is irrelevant if the original measurement value was only precise to the second decimal place.

This is of course not true of financial calculations. The operands to a financial calculation are usually precise to two decimal places and represent exact quantities. Decimal is good for "financial" calculations because decimal operation results are exact provided that all of the inputs and outputs can be represented exactly as decimals (and they are all in a reasonable range). Decimals still have rounding errors, of course, but the operations which are exact are precisely those that you are likely to want to be exact when doing financial calculations.
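
A quick way to see the difference in behavior, using nothing beyond the built-in types:

Console.WriteLine(0.1 + 0.2 == 0.3);     // False - none of these values has an exact binary (double) representation
Console.WriteLine(0.1m + 0.2m == 0.3m);  // True  - decimal stores all three exactly, so the arithmetic is exact too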

And what's the benefit of using it over other methods?

You should state what other methods you'd like to compare against. There are a great many different techniques for performing calculations on computers.

For example, Math.Pow(Math.Sqrt(2),2) is not exact in c#. There are programming languages that can calculate it exactly (for example, Mathematica).

Let's be clear on this point: Mathematica does not "calculate" root 2 exactly; the number is irrational, so it cannot be calculated exactly in any finite amount of storage. Instead, what Mathematica does is represent numbers as objects that describe how the number was produced. If you say "give me the square root of two", Mathematica essentially allocates an object that means "the application of the square root operator to the exact number 2". If you then square that, it has special-purpose logic that says "if you square something that was the square root of something else, give back the original value". Mathematica has objects that represent various special numbers as well, like pi or e, and a huge body of rules for how various manipulations of those numbers combine together.

Basically, it is a symbolic system; it manipulates numbers the same way people do when they do pencil-and-paper math. Most computer programs manipulate numbers like a calculator: perform the calculation immediately and round it off. If that is not acceptable then you should stick to a symbolic system.
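
To see the inexactness the question refers to (the exact digits may differ slightly between platforms and runtimes):

double r = Math.Pow(Math.Sqrt(2), 2);
Console.WriteLine(r == 2.0);          // False
Console.WriteLine(r.ToString("G17")); // typically 2.0000000000000004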

One argument I could think of is that calculating it exactly is a lot slower than just coping with the inaccuracy, but Mathematica and Matlab are used to calculate gigantic scientific problems, so I find it hard to believe those languages are really significantly slower than c#.

It's not that they're slower, though multiplication of floating points really is incredibly fast on modern hardware. It's that the symbolic calculation engine is immensely complex. It encodes all the rules of basic mathematics, and there are a lot of those rules! C# is not intended to be a professional-grade symbolic computation engine, it's intended to be a general-purpose programming language.

Is floating point math (on integers) accurate?

As long as your numbers are integers with no more significant bits than the float mantissa can hold, floating-point operations on them are exact. Rounding errors in floats come from cutting off the low-order bits, and as long as that doesn't happen there is nothing to be afraid of. But the first division by something other than a power of 2 (unless the result happens to be exactly representable), or any other operation whose result needs more bits than the mantissa has, will break this guarantee.
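
A few checks that show where the exactness ends (float has a 24-bit mantissa, so every integer up to 2^24 = 16,777,216 is representable):

Console.WriteLine(3f * 5f == 15f);                // True  - small integers and their sums/products are exact
Console.WriteLine(16777215f + 1f == 16777216f);   // True  - still exact right at 2^24
Console.WriteLine(16777216f + 1f == 16777216f);   // True  - 16,777,217 has no float representation; the result rounds back down
Console.WriteLine((1f / 3f).ToString("G9"));      // 0.333333343 - the first division by a non-power-of-2 is already inexact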

Is Double Math Consistent across Multiple Platforms?

The C# standard, ECMA-334, says in section 11.1.6, Floating point types: "Floating-point operations can be performed with higher precision than the result type of the operation."

Given that sort of rule, you cannot be sure of getting exactly the same answer everywhere. "can" means some implementations may use higher precision than others, for the same calculation.

Even if testing shows that all current implementations get the same answers for all your calculations across a range of inputs, the next compiler release on any platform might change that. If you really need identical results, you need to pick a library or similar that promises identical results.

Calculations vary in how sensitive they are to small changes in inputs. At the extreme are simulations of some physical systems such as weather, where tiny changes in inputs can expand to big changes in results.

One way to get a feeling for the behavior of your simulation is to deliberately perturb some values by e.g. one part in 1e15. Does that change the results enough to matter? However, there will always be a risk that some inputs will lead to less stable conditions than others.
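
A minimal sketch of such a perturbation check; Simulate here is just a hypothetical stand-in for whatever calculation you actually care about:

using System;

class PerturbationCheck
{
    // Hypothetical stand-in for the real calculation under test.
    static double Simulate(double input) => Math.Sin(input) * Math.Exp(input / 10.0);

    static void Main()
    {
        double input = 1.2345;
        double baseline  = Simulate(input);
        double perturbed = Simulate(input * (1 + 1e-15));  // perturb by one part in 1e15
        Console.WriteLine($"baseline:       {baseline:G17}");
        Console.WriteLine($"perturbed:      {perturbed:G17}");
        Console.WriteLine($"relative drift: {Math.Abs((perturbed - baseline) / baseline):E2}");
    }
}

If a perturbation that small already changes your results enough to matter, cross-platform floating-point differences will matter too.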

c# and floating point inaccuracies

Here's the thing, though: 8.8000000000000007 can't be exactly represented in double, either. The closest value is 8.800000000000000710542735760100185871124267578125 (which I got from Jon Skeet's DoubleConverter). You could then use Decimal.Parse on that string to get a decimal value of 8.80000000000000071054273576.

decimal d = 8.8M;                               // decimal represents 8.8 exactly
double dbl = (double)d;                         // nearest double: 8.80000000000000071054...
string s = DoubleConverter.ToExactString(dbl);  // Jon Skeet's DoubleConverter (mentioned above) prints every digit
decimal dnew = decimal.Parse(s);                // rounds back to decimal precision: 8.80000000000000071054273576

Floating point dramatic error (fractional part is completely lost)

float has a precision of only about 6-9 significant decimal digits (its mantissa is 24 bits). Your value is just too large to fit into a float without loss.

double has a precision of about 15-17 digits.

As an illustration, inspect the value of (int)43156414f or (double)43156414f - they are both 43156416
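
A quick way to check this yourself:

float f = 43156414f;                     // needs 26 significant bits; float's mantissa holds only 24
Console.WriteLine((int)f);               // 43156416
Console.WriteLine((double)f);            // 43156416
Console.WriteLine((double)43156414.5f);  // 43156416 - floats are spaced 4 apart at this magnitude, so the fraction is lost entirely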


