More Precision Than Double in Swift

More precision than Double in Swift

Yes, there is! There is Float80 exactly for that: it stores 80 bits (hence the name), i.e. 10 bytes. You can use it like any other floating-point type. Note that there are Float32, Float64 and Float80 in Swift, where Float32 is just a typealias for Float and Float64 is one for Double.
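
A quick sketch of the difference (note that Float80 maps to the x86 extended-precision format, so it is only available when building for Intel targets, not for Apple silicon or other ARM targets):

#if arch(x86_64) || arch(i386) // Float80 only exists on Intel targets
let thirdAsDouble: Double = 1 / 3   // about 15 to 16 significant decimal digits
let thirdAsFloat80: Float80 = 1 / 3 // about 18 to 19 significant decimal digits

print(thirdAsDouble)
print(thirdAsFloat80) // prints more significant digits than the Double above
#endif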

Is CGFloat more accurate than Double?

No, CGFloat does not represent a BigDecimal-type number with infinite precision. CGFloat is defined as a struct that wraps either a Double or a Float, depending on whether you are on a 64-bit or 32-bit platform; i.e., on a 64-bit platform it is equivalent to a Double, with no more or less precision or range.

The difference in the warning you see is that Double is a type native to Swift, which the compiler knows about and has enough information on to provide such a warning. CGFloat, however, is defined by Apple's frameworks, and the compiler does not know enough about it to produce the same warning.
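
One way to convince yourself of this on a 64-bit Apple platform (a small check, not part of the original answer):

import CoreGraphics

// On 64-bit Apple platforms CGFloat wraps a Double, so size and precision match exactly.
print(MemoryLayout<CGFloat>.size, MemoryLayout<Double>.size) // 8 8
print(CGFloat.NativeType.self)                               // Double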

Should I define all values as Double or mixed Double and Float when those values will be calculated frequently?

From The Swift Programming Language:

"NOTE

Double has a precision of at least 15 decimal digits, whereas the precision of Float can be as little as 6 decimal digits. The appropriate floating-point type to use depends on the nature and range of values you need to work with in your code. In situations where either type would be appropriate, Double is preferred."
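
For example, assigning the same literal to both types shows how many digits each one actually retains (a small illustration, not from the book):

let pi32: Float  = 3.14159265358979323846
let pi64: Double = 3.14159265358979323846

print(pi32) // 3.1415927 (roughly 6 to 7 significant decimal digits survive)
print(pi64) // 3.141592653589793 (roughly 15 to 16 significant decimal digits survive)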

Rounding a double value to x number of decimal places in Swift

You can use Swift's round function to accomplish this.

To round a Double to 3 decimal places, multiply it by 1000, round it, and divide the rounded result by 1000:

import Foundation // for round()

let x = 1.23556789
let y = Double(round(1000 * x) / 1000)
print(y) // 1.236

Unlike any kind of printf(...) or String(format: ...) solutions, the result of this operation is still of type Double.
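
If you need this for an arbitrary number of places, the same idea can be wrapped in a small helper; a minimal sketch (the method name rounded(toPlaces:) is just an illustration, not a standard library API):

import Foundation

extension Double {
    // Rounds to the given number of decimal places. The result is still a Double,
    // so it remains subject to binary floating-point representation error.
    func rounded(toPlaces places: Int) -> Double {
        let divisor = pow(10.0, Double(places))
        return (self * divisor).rounded() / divisor
    }
}

print(1.23556789.rounded(toPlaces: 3)) // 1.236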

EDIT:

Regarding the comments that it sometimes does not work, please read this: What Every Programmer Should Know About Floating-Point Arithmetic

Why is decimal more precise than double if it has a shorter range? (C#)

what I'm understanding here is that decimal takes more space but provides a shorter range?

Correct. It provides higher precision and smaller range. Plainly if you have a limited number of bits, you can increase precision only by decreasing range!

everyone agrees that decimal should be used when precision is required

Since that statement is false -- in particular, I do not agree with it -- any conclusion you draw from it is not sound.

The purpose of using decimal is not higher precision. It is smaller representation error. Higher precision is one way to achieve smaller representation error, but decimal does not achieve its smaller representation error by being higher precision. It achieves its smaller representation error by exactly representing decimal fractions.

Decimal is for those scenarios where the representation error of a decimal fraction must be zero, such as a financial computation.
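
Swift's counterpart here is Foundation's Decimal type, which also represents decimal fractions exactly; a small sketch of the difference:

import Foundation

let binarySum = 0.1 + 0.2
print(binarySum == 0.3) // false: 0.1, 0.2 and 0.3 all carry binary representation error

let decimalSum = Decimal(string: "0.1")! + Decimal(string: "0.2")!
print(decimalSum == Decimal(string: "0.3")!) // true: decimal fractions are stored exactly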

Also when doing a calculation like (1/3)*3, the desired result is 1, but only float and double give me 1

You got lucky. There are lots of fractions where the representation error of that computation is non-zero for both floats and doubles.

Let's do a quick check to see how many there are. We'll just make a million rationals and see:

var q = from x in Enumerable.Range(1, 1000)
        from y in Enumerable.Range(1, 1000)
        where ((double)x) / y * y != x
        select x + " " + y;
Console.WriteLine(q.Count()); // 101791

Over 10% of all small-number rationals are represented as doubles with sufficiently large representation error that they do not turn back into whole numbers when multiplied by their denominator!
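
The same check expressed in Swift, for comparison (a sketch; the loop mirrors the C# query above and should report the same count, since both use IEEE 754 doubles):

var failures = 0
for x in 1...1000 {
    for y in 1...1000 where Double(x) / Double(y) * Double(y) != Double(x) {
        failures += 1
    }
}
print(failures) // should match the 101791 reported above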

If your desire is to do exact arithmetic on arbitrary rationals then neither double nor decimal are the appropriate type to use. Use a big-rational library if you need to exactly represent rationals.

why is decimal more precise?

Decimal is more precise than double because it has more bits of precision.

But again, precision is not actually that relevant. What is relevant is that decimal has smaller representation error than double for many common fractions.

It has smaller representation error than double for representing fractions with a small power of ten in the denominator because it was designed specifically to have zero representation error for all fractions with a small power of ten in the denominator.

That's why it is called "decimal", because it represents fractions with powers of ten. It represents the decimal system, which is the system we commonly use for arithmetic.

Double, in contrast, was explicitly not designed to have small representation error. Double was designed to have the range, precision, representation error and performance that is appropriate for physics computations.

There is no bias towards exact decimal quantities in physics. There is such a bias in finance. Use decimals for finance. Use doubles for physics.

Why does converting an integer string to float and double produce different results?

OK, please look at the floating point converter at https://www.h-schmidt.net/FloatConverter/IEEE754.html. It shows you the bits stored when you enter a number, in binary and hex representation, and it also gives you the error due to conversion. The issue is with the way the number gets represented in the IEEE 754 standard. For the float conversion, the error indeed comes out to be -1.

In fact, any integer in the range 77777772 to 77777780 ends up with 77777776 as the value actually stored in the float.
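
In Swift terms (a short illustration of the same effect; the integer 77777777 needs 27 bits, more than Float's 24-bit significand can hold, while Double's 53-bit significand holds it exactly):

let s = "77777777"
let f = Float(s)!  // 24-bit significand: rounds to the nearest representable value
let d = Double(s)! // 53-bit significand: every 8-digit integer is exact

print(Int(f), Int(d)) // 77777776 77777777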

String to Double conversion loses precision in Swift

First off: you don't! What you have encountered here is called floating-point inaccuracy. Computers cannot store every number precisely, and 2.4 cannot be stored losslessly in a binary floating-point type.
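
You can see the stored value by printing more digits than the default description shows:

import Foundation

// 2.4 is stored as the nearest representable binary double, not as 2.4 exactly.
print(String(format: "%.17f", 2.4)) // 2.39999999999999991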

Secondly: since floating point is always an issue and you are dealing with money here (I guess you are trying to store 2.4 francs), your number one solution is: don't use binary floating-point numbers. Use the NSNumber you get back from the formatter's number(from:) (formerly numberFromString) and do not try to get a Double out of it.

Alternatively, shift the decimal point by multiplying and store the value as an Int (a sketch of this follows the code below).

The first solution might look something like this:

import Foundation

let formatter = NumberFormatter()
formatter.numberStyle = .decimal
if let num = formatter.number(from: "2.4") {
    let value = NSDecimalNumber(decimal: num.decimalValue)
    let output = value.multiplying(by: NSDecimalNumber(value: 10))
    print(output) // 24 (the formatter hands back a decimal-backed NSNumber by default)
}
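
The Int alternative mentioned above could look like this (a sketch; the factor of 100 assumes amounts with two decimal places, e.g. rappen):

import Foundation

// Store money as a whole number of the smallest unit to avoid floating point entirely.
let amountInRappen = 240                   // represents 2.40 CHF
let francs = Decimal(amountInRappen) / 100 // convert only for display
print(francs, "CHF")                       // 2.4 CHF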

