Is It Safe to Check Floating Point Values for Equality to 0

Is it safe to check floating point values for equality to 0?

It is safe to expect that the comparison will return true if and only if the double variable has a value of exactly 0.0 (which in your original code snippet is, of course, the case). This is consistent with the semantics of the == operator: a == b means "a is equal to b".

It is not safe (because it is not correct) to expect that the result of some calculation will be zero in double (or, more generally, floating point) arithmetic whenever the result of the same calculation in pure mathematics is zero. This is because as soon as calculations are involved, floating point rounding error appears - a concept which does not exist in real-number arithmetic.
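
A minimal sketch illustrating both points (the variable names and the particular expression are just for illustration):

#include <stdio.h>

int main(void)
{
    double a = 0.0;                 /* assigned exactly */
    printf("%d\n", a == 0.0);       /* prints 1: bit-exact comparison succeeds */

    double b = 0.1 + 0.2 - 0.3;     /* zero in real arithmetic */
    printf("%d\n", b == 0.0);       /* prints 0: b is about 5.55e-17 */
    return 0;
}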

Is it safe to test a float for 0.0 equality?

It is safe in the sense that if the value was set explicitly to 0.0f, comparing it will return true.

It is NOT safe in the sense that you should not expect that a value resulting from calculations will be exactly 0.0f.

So you're really using 0.0f as a special magic value of sorts, not as a real comparison against zero.
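
A sketch of that magic-value pattern (the function and its name are hypothetical):

#include <stdio.h>

/* 0.0f is assigned explicitly as a failure sentinel, never computed,
   so the == test below is reliable. */
float scale_for(int id)
{
    if (id <= 0)
        return 0.0f;            /* explicit sentinel: bit-exact on return */
    return 1.5f * (float)id;
}

int main(void)
{
    if (scale_for(-1) == 0.0f)
        printf("lookup failed\n");
    return 0;
}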

Comparison to 0.0 with floating point values

It's perfectly correct in your case to use floating point equality == 0.0.

It perfectly fits the intention of the function (return some value, or 0.0 if it fails). Using any other epsilon is somewhat arbitrary and requires knowledge of the range of correct values. If anything were ever to change, it could well be the range of values rather than the 0, so testing == 0.0 is no less future-proof than other solutions, IMO.

The only problem I see is that some compilers will warn about suspicious usage of equality (-Wfloat-equal)... That's as useful as warning about int a,b,c; ...; c=a+b; because such an instruction might possibly lead to a problem (integer overflow and undefined behaviour). Curiously, I never saw that second warning.

So if you want to make usage of the -Wall -Werror compiler options future proof, you might encode failure differently (with a negative value, for example) and test for foo < 0.0 - until someone discovers that floating point inequality might require a tolerance too and declares that construct suspicious as well.
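
A hedged sketch of that alternative encoding (the function name and sentinel value are hypothetical; valid results are assumed to be non-negative):

#include <math.h>
#include <stdio.h>

/* Failure encoded as a negative sentinel instead of 0.0. */
double safe_sqrt(double v)
{
    return (v < 0.0) ? -1.0 : sqrt(v);
}

int main(void)
{
    if (safe_sqrt(-4.0) < 0.0)      /* inequality test instead of == 0.0 */
        printf("failed\n");
    return 0;
}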

Comparing floating point number to zero

You are correct with your observation.

If x == 0.0, then abs(x) * epsilon is zero and you're testing whether abs(y) <= 0.0.

If y == 0.0, then you're testing abs(x) <= abs(x) * epsilon, which means either epsilon >= 1 (it isn't) or x == 0.0.

So either is_equal(val, 0.0) or is_equal(0.0, val) would be pointless; you could just say val == 0.0 if you want to accept only exactly +0.0 and -0.0.

The FAQ's recommendation in this case is of limited utility. There is no "one size fits all" floating-point comparison. You have to think about the semantics of your variables, the acceptable range of values, and the magnitude of error introduced by your computations. Even the FAQ mentions a caveat, saying this function is not usually a problem "when the magnitudes of x and y are significantly larger than epsilon, but your mileage may vary".
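
For reference, here is a sketch of the kind of relative check being discussed (the exact FAQ code may differ; the signature and epsilon handling here are assumptions):

#include <math.h>

int is_equal(double x, double y, double epsilon)
{
    return fabs(x - y) <= epsilon * fabs(x);
}

/* With x == 0.0 this tests fabs(y) <= 0.0, and with y == 0.0 it tests
   fabs(x) <= epsilon * fabs(x) -- in both cases the relative form
   degenerates to exact equality, which is the observation above. */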

Can you compare floating point values exactly to zero?

Even though 0 has an exact representation, you can't rely on the result of a calculation using floats to be exactly 0. As you noted, this is due to floating point calculation and conversion issues.

So, you should test for 0 against your tolerance epsilon.
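
In code, that looks something like the following; the epsilon value is purely illustrative and must come from your application's error analysis:

#include <math.h>

const double epsilon = 1e-9;        /* illustrative tolerance, not universal */

int is_zero(double x)
{
    return fabs(x) < epsilon;
}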

How dangerous is it to compare floating point values?

First of all, floating point values are not "random" in their behavior. Exact comparison can and does make sense in plenty of real-world usages. But if you're going to use floating point you need to be aware of how it works. Erring on the side of assuming floating point works like real numbers will get you code that quickly breaks. Erring on the side of assuming floating point results have large random fuzz associated with them (like most of the answers here suggest) will get you code that appears to work at first but ends up having large-magnitude errors and broken corner cases.

First of all, if you want to program with floating point, you should read this:

What Every Computer Scientist Should Know About Floating-Point Arithmetic

Yes, read all of it. If that's too much of a burden, you should use integers/fixed point for your calculations until you have time to read it. :-)

Now, with that said, the biggest issues with exact floating point comparisons come down to:

  1. The fact that lots of values you may write in the source, or read in with scanf or strtod, do not exist as floating point values and get silently converted to the nearest approximation. This is what demon9733's answer was talking about.

  2. The fact that many results get rounded due to not having enough precision to represent the actual result. An easy example where you can see this is adding x = 0x1fffffe and y = 1 as floats. Here, x has 24 bits of precision in the mantissa (ok) and y has just 1 bit, but when you add them, their bits are not in overlapping places, and the result would need 25 bits of precision. Instead, it gets rounded (to 0x2000000 in the default rounding mode); see the sketch after this list.

  3. The fact that many results get rounded due to needing infinitely many places for the correct value. This includes both rational results like 1/3 (which you're familiar with from decimal where it takes infinitely many places) but also 1/10 (which also takes infinitely many places in binary, since 5 is not a power of 2), as well as irrational results like the square root of anything that's not a perfect square.

  4. Double rounding. On some systems (particularly x86), floating point expressions are evaluated in higher precision than their nominal types. This means that when one of the above types of rounding happens, you'll get two rounding steps, first a rounding of the result to the higher-precision type, then a rounding to the final type. As an example, consider what happens in decimal if you round 1.49 to an integer (1), versus what happens if you first round it to one decimal place (1.5) then round that result to an integer (2). This is actually one of the nastiest areas to deal with in floating point, since the behaviour of the compiler (especially for buggy, non-conforming compilers like GCC) is unpredictable.

  5. Transcendental functions (trig, exp, log, etc.) are not specified to have correctly rounded results; the result is just specified to be correct within one unit in the last place of precision (usually referred to as 1ulp).
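
Here is the rounding example from point 2 as a compilable sketch:

#include <stdio.h>

int main(void)
{
    float x = 0x1fffffe;    /* 33554430: fits exactly in 24 mantissa bits */
    float y = 1.0f;
    float z = x + y;        /* exact sum 0x1ffffff would need 25 bits */
    printf("%.1f\n", z);    /* prints 33554432.0, i.e. 0x2000000 */
    return 0;
}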

When you're writing floating point code, you need to keep in mind what you're doing with the numbers that could cause the results to be inexact, and make comparisons accordingly. Oftentimes it will make sense to compare with an "epsilon", but that epsilon should be based on the magnitude of the numbers you are comparing, not an absolute constant. (In cases where an absolute constant epsilon would work, that's strongly indicative that fixed point, not floating point, is the right tool for the job!)

Edit: In particular, a magnitude-relative epsilon check should look something like:

if (fabs(x-y) < K * FLT_EPSILON * fabs(x+y))

where FLT_EPSILON is the constant from float.h (replace it with DBL_EPSILON for doubles or LDBL_EPSILON for long doubles) and K is a constant you choose such that the accumulated error of your computations is definitely bounded by K units in the last place (and if you're not sure you got the error bound calculation right, make K a few times bigger than what your calculations say it should be).

Finally, note that if you use this, some special care may be needed near zero, since FLT_EPSILON does not make sense for denormals. A quick fix would be to make it:

if (fabs(x-y) < K * FLT_EPSILON * fabs(x+y) || fabs(x-y) < FLT_MIN)

and likewise substitute DBL_MIN if using doubles.
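
Putting the two conditions together as a helper (a sketch; the name and the float flavor are arbitrary):

#include <float.h>
#include <math.h>

/* K is the caller's bound on accumulated error, in units in the last place. */
int nearly_equal(float x, float y, float K)
{
    float diff = fabsf(x - y);
    return diff < K * FLT_EPSILON * fabsf(x + y) || diff < FLT_MIN;
}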

Is checking a double for equality ever safe?

The reason you shouldn't check floats for equality is that floating point numbers are not perfectly precise -- there is some inaccuracy in storing certain numbers, such as those that extend too far into the mantissa, and repeating fractions (note that I'm talking about fractions that repeat in base 2). You can think of this imprecision as "rounding down": the digits that extend beyond the precision of the floating-point number are truncated, effectively rounding the value down.

If the value has not changed, it will keep that equality. However, if you change it even slightly, you probably should not use equality, but instead a range test like (x < 0.0001 && x > -0.0001).

In short: as long as you're not playing with x at a very small level, it's OK.
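
A short demonstration of both halves of that advice (the loop is just one way of "playing with x"):

#include <stdio.h>

int main(void)
{
    double x = 0.0;
    for (int i = 0; i < 10; i++)
        x += 0.1;                   /* accumulates rounding error */
    x -= 1.0;                       /* mathematically zero */

    printf("%d\n", x == 0.0);                   /* prints 0: x is about -1.1e-16 */
    printf("%d\n", x < 0.0001 && x > -0.0001);  /* prints 1: range test succeeds */
    return 0;
}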

Is it safe to compare floats strictly, given we do no operations on them?

A mass of "should"s follows.

I don't think there's anything that says an assignment from float to float (current_width = new_width) couldn't alter the value, but I would be surprised if such a thing existed. There should be no reason to make assignment between variables of the same type anything other than a direct copy.

If the incoming new_width and new_height keep their values until they change, then this comparison should not have any issues. But if they are calculated before every call they might change their value, depending on how the calculation is done. So it's not only this function that needs to be checked.

The C 2011 standard says that calculations may use greater precision than the format you assign to, but it says nothing specific about assigning one variable to another. So the only imprecision should arise in the calculation stage. The "simple assignment" section (6.5.16.1) says:

In simple assignment (=), the value of the right operand is converted to the type of the assignment expression and replaces the value stored in the object designated by the left operand.

So if the types already match, there should be no need for conversion.

So, simply put: if you don't recalculate the incoming value on every call the comparison for equality should hold true. But is there really a case where your framebuffers are sized as floats and not integers?
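
A sketch of the pattern under discussion, following the question's variable names (current_width, new_width); the framebuffer step is a placeholder:

static float current_width;
static float current_height;

void resize_if_needed(float new_width, float new_height)
{
    /* Both operands are unmodified copies, so exact comparison is fine. */
    if (current_width == new_width && current_height == new_height)
        return;

    current_width  = new_width;     /* simple assignment: a direct copy */
    current_height = new_height;
    /* ... reallocate the framebuffer here ... */
}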


