Floating Point Less-Than-Equal Comparisons After Addition and Subtraction

Floating point less-than-equal comparisons after addition and subtraction

No, there is no best practice. Unfortunately, there cannot be, because almost all floating-point calculations introduce some rounding error, and the consequences of the errors are different for different applications.

Typically, software will perform some calculations that ideally would yield some exact mathematical result x but, due to rounding errors (or other issues), produce an approximation x'. When comparing floating-point numbers, you want to ask some question about x, such as “Is x < 1?” or “Is x = 3.1415926…?” So the problem you want to solve is “How do I use x' to answer this question about x?”

There is no general solution for this. Some errors may produce an x' that is greater than 1 even though x is less than 1. Some errors may produce an x' that is less than 1 even though x is greater than 1. The solution in any specific instance depends on information about the errors that were generated while calculating x' and the specific question to be answered.

Sometimes a thorough analysis can demonstrate that certain questions about x can be answered using x'. For example, in some situations we might craft the calculations so that we know that, if x' < 1, then x < 1. Or suppose we analyze the calculations used to compute x' and can show that the final error is less than .00125. Then, if x' < .99875, we know x < 1, and, if x' > 1.00125, we know x > 1. But, if .99875 ≤ x' ≤ 1.00125, we do not know whether x < 1 or x > 1. What do we do in that situation? Is it better for your application to take the path where x < 1 or the path where x > 1? The answer is specific to each application, and there is no general best practice.
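
As a concrete sketch of that three-way decision (in Python; the function name and the .00125 bound are illustrative assumptions standing in for a real error analysis, not a general recipe):

ERROR_BOUND = 0.00125  # assumed to come from an error analysis of the calculation

def classify_against_one(x_approx):
    # x_approx is within ERROR_BOUND of the exact mathematical result x.
    if x_approx < 1 - ERROR_BOUND:   # x_approx < .99875
        return "x < 1"    # guaranteed: even worst-case error cannot flip this
    if x_approx > 1 + ERROR_BOUND:   # x_approx > 1.00125
        return "x > 1"    # likewise guaranteed
    return "unknown"      # the application must decide which branch is safer

print(classify_against_one(0.9980))   # x < 1
print(classify_against_one(1.0007))   # unknown

The "unknown" branch is exactly the application-specific decision described above; no library can make it for you.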

I will add that the amount of rounding error varies hugely from application to application, because error can compound in various ways. Some applications that perform only a few floating-point operations achieve results with small errors. Some applications that perform many floating-point operations still achieve results with modest errors. But certain patterns, such as subtracting two nearly equal intermediate results, can lead calculations astray and produce catastrophic errors. So dealing with rounding error is a custom problem for each program.
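
A small Python illustration of that compounding: each operand below carries a relative error near 1e-16, but subtracting nearly equal values cancels the digits they agree on, so the error dominates what remains ("catastrophic cancellation"):

x = (1 + 1e-15) - 1
print(x)   # 1.1102230246251565e-15 -- off from the exact 1e-15 by about 11%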

How should I do floating point comparison?

Comparing for greater/smaller is not really a problem unless you're working right at the edge of the float/double precision limit.
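
The classic boundary case in Python shows what "at the edge" means here:

print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 <= 0.3)   # False, although it is mathematically true
print(0.1 + 0.2 > 0.3)    # True, by about 4e-17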

For a "fuzzy equals" comparison, this (Java code, should be easy to adapt) is what I came up with for The Floating-Point Guide after a lot of work and taking into account lots of criticism:

public static boolean nearlyEqual(float a, float b, float epsilon) {
    final float absA = Math.abs(a);
    final float absB = Math.abs(b);
    final float diff = Math.abs(a - b);

    if (a == b) { // shortcut, handles infinities
        return true;
    } else if (a == 0 || b == 0 || diff < Float.MIN_NORMAL) {
        // a or b is zero, or both are extremely close to it;
        // relative error is less meaningful here
        return diff < (epsilon * Float.MIN_NORMAL);
    } else { // use relative error
        return diff / (absA + absB) < epsilon;
    }
}

It comes with a test suite. You should immediately dismiss any solution that doesn't have one, because it is virtually guaranteed to fail in edge cases such as one value being zero, two very small values on opposite sides of zero, or infinities.

An alternative (see link above for more details) is to convert the floats' bit patterns to integer and accept everything within a fixed integer distance.
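
A minimal Python sketch of that bit-pattern approach (assuming IEEE 754 binary32 inputs and ignoring NaNs; the helper names are mine):

import struct

def ulps_apart(a, b):
    """Distance between two floats counted in representable
    32-bit values ("ULP distance")."""
    def ordered(f):
        # Reinterpret the float's bits as a signed 32-bit integer.
        (bits,) = struct.unpack("<i", struct.pack("<f", f))
        # Floats use a sign-magnitude bit layout; remap negative values
        # so that integer order matches float order across zero.
        return bits if bits >= 0 else -2147483648 - bits
    return abs(ordered(a) - ordered(b))

def nearly_equal_ulps(a, b, max_ulps=4):
    return ulps_apart(a, b) <= max_ulps

print(ulps_apart(1.0, 1.0000001))   # 1: these round to adjacent 32-bit floats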

In any case, there probably isn't any solution that is perfect for all applications. Ideally, you'd develop/adapt your own with a test suite covering your actual use cases.

Floating Point, how much can I trust less than / greater than comparisons?

If you can guarantee that a and b are not NaNs or infinities, then you can just do:

if (a < b) {
    // handle the a < b case
} else {
    // handle the a >= b case
}

The set of all floating-point values except infinities and NaNs forms a total order (with a glitch from the two representations of zero, but that shouldn't matter for you). Working with it is not unlike working with the ordinary integers; the only difference is that the spacing between consecutive values is not constant, as it is with integers.

In fact, IEEE 754 was designed so that comparisons of non-NaN, non-infinity values of the same sign can be done with the same operations as ordinary integer comparisons (again, with a glitch around zero). So, in this specific case, you can think of these numbers as “better integers”.
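
A quick Python check of that property for non-negative finite floats (binary32 here; the helper is illustrative):

import struct

def bits32(f):
    # Bit pattern of f as an unsigned 32-bit integer.
    (u,) = struct.unpack("<I", struct.pack("<f", f))
    return u

vals = [0.0, 1e-30, 1.0, 1.5, 2.0, 1e30]
# Numeric order and bit-pattern order agree for same-sign values:
assert sorted(vals) == sorted(vals, key=bits32)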

What is the best way to compare floats for almost-equality in Python?

Python 3.5 adds the math.isclose and cmath.isclose functions as described in PEP 485.

If you're using an earlier version of Python, the equivalent function is given in the documentation.

def isclose(a, b, rel_tol=1e-09, abs_tol=0.0):
    return abs(a-b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

rel_tol is a relative tolerance; it is multiplied by the greater of the magnitudes of the two arguments, so as the values get larger, so does the allowed difference between them while still considering them equal.

abs_tol is an absolute tolerance that is applied as-is in all cases. If the difference is no greater than either of those tolerances, the values are considered equal.
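
A few illustrative calls (using math.isclose on Python 3.5+; the fallback above behaves the same):

from math import isclose

print(isclose(1.0, 1.0000009, rel_tol=1e-6))   # True:  within relative tolerance
print(isclose(1.0, 1.000002,  rel_tol=1e-6))   # False: difference is ~2e-6
print(isclose(0.0, 1e-12))                     # False: rel_tol * 0 is 0
print(isclose(0.0, 1e-12, abs_tol=1e-9))       # True:  abs_tol rescues the zero case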

Comparing float/double values using the == operator

IBM has a recommendation for comparing two floats, using division rather than subtraction - this makes it easier to select an epsilon that works for all ranges of input.

if (abs(a/b - 1) < epsilon)

As for the value of epsilon, I would use 5.96e-08 as given in this Wikipedia table, or perhaps 2x that value.
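
A hedged Python sketch of that test (the zero guard is my addition, since a/b fails when b is zero; 5.96e-08 is about 2**-24, the single-precision unit roundoff):

EPSILON = 5.96e-08   # roughly 2**-24

def rel_equal(a, b, eps=EPSILON):
    if b == 0.0:
        return a == 0.0   # division is undefined here; only 0 matches 0
    return abs(a / b - 1.0) < eps

print(rel_equal(1.0, 1.0 + 1e-9))   # True: well inside the tolerance
print(rel_equal(1.0, 1.001))        # False

Note that the test is not symmetric: rel_equal(a, b) and rel_equal(b, a) can disagree for values right at the threshold.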

What is correct way to compare 2 doubles values?

So the question will be: is there a generic solution/workaround for this?

There will not be a universal solution for finite-precision floating point that applies to all use cases. There cannot be, because the correct threshold is specific to each calculation and cannot generally be known automatically.

You have to know what you are comparing and what you expect from the comparison. A full explanation won't fit in this answer, but you can find most of the relevant information in this blog post: https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/ (not mine).


There is, however, a generic workaround that side-steps the issue: use infinite-precision (exact) arithmetic. The C++ standard library does not provide an implementation, so you would need a third-party library or a language that ships one.
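
For illustration, Python's standard fractions module provides exact rational arithmetic, which shows what such a workaround buys:

from fractions import Fraction

# Exact rational arithmetic: no rounding, so equality is meaningful.
a = Fraction(1, 10) + Fraction(2, 10)
print(a == Fraction(3, 10))   # True
print(0.1 + 0.2 == 0.3)       # False, because each term is rounded in binary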


