What Is the Correct/Standard Way to Check If Difference Is Smaller Than Machine Precision

What is the correct/standard way to check if difference is smaller than machine precision?

The machine precision of a double depends on its current value. .Machine$double.eps gives the precision when the value is 1. You can use the C function nextafter to get the machine precision for other values.

library(Rcpp)
cppFunction("double getPrec(double x) {
return nextafter(x, std::numeric_limits<double>::infinity()) - x;}")

(pr <- getPrec(1))
#[1] 2.220446e-16
1 + pr == 1                      #adding the full precision changes the value
#[1] FALSE
1 + pr/2 == 1                    #adding half the precision does not
#[1] TRUE
1 + (pr/2 + getPrec(pr/2)) == 1  #slightly more than half does
#[1] FALSE
1 + pr/2 + pr/2 == 1             #left to right: each half is absorbed
#[1] TRUE
pr/2 + pr/2 + 1 == 1             #summing the halves first gives a full step
#[1] FALSE

Adding a value a to a value b will not change b when a is at most half of b's machine precision. Checking if the difference is smaller than the machine precision is done with <. Whether the full precision or half of it is the right threshold depends on the typical case, i.e. how often an addition does not show a change: as demonstrated above, an addend of exactly half the precision may or may not be absorbed, depending on evaluation order.
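A minimal sketch of such a check, reusing the getPrec function from above (the name almostEqual is made up for illustration):

almostEqual <- function(a, b) {
  #TRUE when a and b differ by less than the machine precision
  #at the larger magnitude
  abs(a - b) < getPrec(max(abs(a), abs(b)))
}
almostEqual(1, 1 - pr/2) #difference of half the precision at 1
#[1] TRUE
almostEqual(1, 1 + 2*pr) #difference of two precision steps
#[1] FALSE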

In R the machine precision can be estimated with:

getPrecR <- function(x) {
  y <- log2(pmax(.Machine$double.xmin, abs(x)))
  ifelse(x < 0 & floor(y) == y, 2^(y - 1), 2^floor(y)) * .Machine$double.eps
}
getPrecR(1)
#[1] 2.220446e-16
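As a quick sanity check (assuming the Rcpp-based getPrec from above is still loaded), the pure-R estimate can be compared with the exact nextafter-based value:

getPrecR(0.1) == getPrec(0.1)
#[1] TRUE
getPrecR(100) == getPrec(100)
#[1] TRUE
getPrecR(2^54) == getPrec(2^54)
#[1] TRUE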

Each double value represents a range. For a simple addition, the range of the result depends on the range of each summand and also on the range of their sum.

library(Rcpp)
cppFunction("std::vector<double> getRange(double x) {return std::vector<double>{
(nextafter(x, -std::numeric_limits<double>::infinity()) - x)/2.
, (nextafter(x, std::numeric_limits<double>::infinity()) - x)/2.};}")

x <- 2^54 - 2
getRange(x)
#[1] -1 1
y <- 4.1
getRange(y)
#[1] -4.440892e-16 4.440892e-16
z <- x + y
getRange(z)
#[1] -2 2
z - x - y #Should be 0
#[1] 1.9
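The 1.9 comes from the single rounding step in x + y: the exact sum 2^54 + 2.1 is not representable, and the spacing of doubles at that magnitude is 4, so it rounds up to 2^54 + 4 (reusing x, y, z from above):

z == 2^54 + 4 #the exact sum 2^54 + 2.1 was rounded to the next double
#[1] TRUE
z - x         #so effectively 6 was added instead of 4.1
#[1] 6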

2^54 - 2.9 + 4.1 - (2^54 + 5.9) #Should be -4.7
#[1] 0
2^54 - 2.9 == 2^54 - 2 #Gain 0.9
#[1] TRUE
2^54 - 2 + 4.1 == 2^54 + 4 #Gain 1.9
#[1] TRUE
2^54 + 5.9 == 2^54 + 4 #Gain 1.9
#[1] TRUE

The three gains add up to 0.9 + 1.9 + 1.9 = 4.7 and exactly cancel the expected -4.7, leaving 0.

For higher precision, Rmpfr could be used.

library(Rmpfr)
mpfr("2", 1024L)^54 - 2.9 + 4.1 - (mpfr("2", 1024L)^54 + 5.9)
#[1] -4.700000000000000621724893790087662637233734130859375

In case the values can be converted to integers (here by scaling everything by 10), gmp could be used (which is also used inside Rmpfr).

library(gmp)
as.bigz("2")^54 * 10 - 29 + 41 - (as.bigz("2")^54 * 10 + 59)
#[1] -47

Compare floats in R

.Machine$double.eps is the difference between 1 and the smallest representable value greater than 1. The difference between 0.1 and the smallest representable value greater than 0.1 is smaller than .Machine$double.eps, and the difference between 100 and the smallest representable value greater than 100 is larger than .Machine$double.eps. Have a look at: What is the correct/standard way to check if difference is smaller than machine precision?
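A minimal check of this claim, reusing the getPrecR helper from the answer above:

getPrecR(0.1) #smaller than .Machine$double.eps
#[1] 1.387779e-17
getPrecR(100) #larger than .Machine$double.eps
#[1] 1.421085e-14
.Machine$double.eps
#[1] 2.220446e-16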

.Machine$double.eps is

.Machine$double.eps
[1] 2.220446e-16

When you do the calculations, the internally stored values will be approximately:

print(1.0 + .Machine$double.eps, 20)
#[1] 1.000000000000000222
print(1.0 - .Machine$double.eps, 20)
#[1] 0.99999999999999977796
print(0.9 + .Machine$double.eps, 20)
#[1] 0.90000000000000024425
print(2.0 + .Machine$double.eps, 20)
#[1] 2

Using tolerance = .Machine$double.eps, all.equal reports whether the difference of the internally stored values is larger than the tolerance or not. Note that all.equal returns TRUE or a character string describing the difference rather than FALSE, so wrap it in isTRUE() when using it in conditions.

To check if two numbers in R are internally stored as equal, use ==.
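A short illustration of the difference between the two checks:

x <- 0.1 + 0.2
x == 0.3 #the internally stored values differ
#[1] FALSE
isTRUE(all.equal(x, 0.3, tolerance = .Machine$double.eps))
#[1] TRUE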

Floating point less-than-equal comparisons after addition and subtraction

No, there is no best practice. Unfortunately, there cannot be, because almost all floating-point calculations introduce some rounding error, and the consequences of the errors are different for different applications.

Typically, software will perform some calculations that ideally would yield some exact mathematical result x but, due to rounding errors (or other issues), produce an approximation x'. When comparing floating-point numbers, you want to ask some question about x, such as “Is x < 1?” or “Is x = 3.1415926…?” So the problem you want to solve is “How do I use x' to answer this question about x?”

There is no general solution for this. Some errors may produce an x' that is greater than 1 even though x is less than 1. Some errors may produce an x' that is less than 1 even though x is greater than 1. The solution in any specific instance depends on information about the errors that were generated while calculating x' and the specific question to be answered.

Sometimes a thorough analysis can demonstrate that certain questions about x can be answered using x'. For example, in some situations, we might craft calculations so that we know that, if x' < 1, then x < 1. Or perhaps that, if x' < .99875, then x < 1. Say we analyze the calculations we used to calculate x' and can show that the final error is less than .00125. Then, if x' < .99875, then we know x < 1, and, if x' > 1.00125, then x > 1. But, if .99875 < x' < 1.00125, then we do not know whether x > 1 or x < 1. What do we do in that situation? Is it then better for your application to take the path where x < 1 or the path where x > 1? The answer is specific to each application, and there is no general best practice.
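As a sketch of that three-way outcome in R (the bound 0.00125 is just the example value from the paragraph above; classify is a made-up name):

classify <- function(xp, bound = 0.00125) {
  #xp is the computed approximation of x; bound is a proven error bound
  if (xp < 1 - bound) "x < 1"
  else if (xp > 1 + bound) "x > 1"
  else "undecidable from xp alone"
}
classify(0.9980)
#[1] "x < 1"
classify(1.0002)
#[1] "undecidable from xp alone"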

I will add to this that the amount of rounding error that occurs varies hugely from application to application. This is because rounding error can be compounded in various ways. Some applications with a few floating-point operations will achieve results with small errors. Some applications with many floating-point operations will also achieve results with modest errors. But certain behaviors can lead calculations astray and produce catastrophic errors. So dealing with rounding error is a custom problem for each program.

Is the use of machine epsilon appropriate for floating-point equality tests?

How to choose a value for epsilon?

Short Answer: You take a small value which fits your application's needs.

Long Answer: Nobody can know which calculations your application does and how accurate you expect your results to be. Since rounding errors sum up, machine epsilon will almost always be far too small as a tolerance, so you have to choose your own value. Depending on your needs, 0.01 may be sufficient, or maybe 0.00000000000001 will be.

The question is, do you really want/need to do equality tests on floating point values? Maybe you should redesign your algorithms.
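A small R illustration of why accumulated rounding error outgrows machine epsilon (the tolerance 1e-9 is an arbitrary application-level choice):

x <- Reduce(`+`, rep(0.1, 1000)) #1000 plain double additions
abs(x - 100) < .Machine$double.eps #machine epsilon is far too small here
#[1] FALSE
abs(x - 100) < 1e-9 #an application-chosen tolerance passes
#[1] TRUE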

R: Computing maximum floating-point error for log(exp(...))

I think you got the right answer. Here I refine the step size down to sqrt(.Machine$double.eps); you will see:

> x <- seq(0, 2, by = sqrt(.Machine$double.eps))

> max(abs(log(exp(x)) - x))
[1] 1.110725e-16

However, once your x is so large that exp(x) overflows to Inf, you will get an Inf error, e.g.,

> (x <- .Machine$double.xmax)
[1] 1.797693e+308

> max(abs(log(exp(x)) - x))
[1] Inf
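The overflow sets in far below .Machine$double.xmax itself: exp(x) exceeds the largest double once x passes log(.Machine$double.xmax), roughly 709.78:

> log(.Machine$double.xmax)
[1] 709.7827

> exp(709) #still finite
[1] 8.218407e+307

> exp(710) #overflows to Inf
[1] Inf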

What is the best way to compare floats for almost-equality in Python?

Python 3.5 adds the math.isclose and cmath.isclose functions as described in PEP 485.

If you're using an earlier version of Python, the equivalent function is given in the documentation.

def isclose(a, b, rel_tol=1e-09, abs_tol=0.0):
    return abs(a-b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

rel_tol is a relative tolerance: it is multiplied by the greater of the magnitudes of the two arguments; as the values get larger, so does the allowed difference between them while still considering them equal.

abs_tol is an absolute tolerance that is applied as-is in all cases. If the difference is less than either of those tolerances, the values are considered equal.
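Since most of this page deals with R, the same combined relative/absolute check can be written there as a direct translation (this is not a base R function):

isclose <- function(a, b, rel_tol = 1e-9, abs_tol = 0.0) {
  #same logic as the Python reference implementation above
  abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)
}
isclose(100, 100 + 1e-8)
#[1] TRUE
isclose(1e-10, 0, abs_tol = 1e-9) #near zero the absolute tolerance decides
#[1] TRUE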

Rule of thumb to test the equality of two doubles in C#?

Using double.Epsilon does NOT necessarily work. double.Epsilon gives the smallest representable value that is greater than zero. However, because of the way that floating point numbers are implemented, they have less precision the further away from zero they are, so checking for a difference of double.Epsilon could fail for two large numbers that are very close to each other.

Details: A base-2 floating point number is represented as a significand (a number between 1 and 2) multiplied by two raised to some exponent. A double has 52 bits for the fractional portion of the significand plus 11 bits for the exponent. If the exponent is a very large negative value, you get values close to double.Epsilon, but if your exponent is big enough, then even a very small difference in two significands will result in a difference much larger than double.Epsilon.

For a full discussion on how to test two floating point numbers for equality, see "Comparing Floating Point Numbers, 2012 Edition", by Bruce Dawson. To summarize, there are three main methods of comparison:

Use an absolute difference

As in Joel Coehoorn's example, but be very careful to select a value that's of an appropriate magnitude, unlike Joel's example.

Use a relative difference

Something like the following:

if (Math.Abs(a - b) / b <= maxRelativeError)
{
    return true;
}

However, there are complications; you should divide by the larger of the two values, and this function performs poorly for values close to zero unless you also add a check for a maximum absolute difference. See the paper for details.

Using units of last place

Comparison using units of last place (ULPs) means checking the last portion of the significand. (The paper refers to this as "Comparing using integers.") This is a more complicated approach but is very robust. The paper provides source code in C; for C#, you could probably use BitConverter.DoubleToInt64Bits.

In response to your edit

"How many times greater?" This is really a question of your application domain, which is probably why the .NET Framework doesn't provide a default method, but I've had good luck using the ULPs comparison with a max ULPs difference of 4.


