Swift: Decode Imprecise Decimal Correctly

You can implement your own decoding method: convert the double to a string and use that string to initialize your decimal properties:



extension LosslessStringConvertible {
    var string: String { .init(self) }
}

extension FloatingPoint where Self: LosslessStringConvertible {
    var decimal: Decimal? { Decimal(string: string) }
}

struct Root: Codable {
    let priceAfterTax, priceBeforeTax, tax, taxAmount: Decimal
}

extension Root {
    public init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        self.priceAfterTax = try container.decode(Double.self, forKey: .priceAfterTax).decimal ?? .zero
        self.priceBeforeTax = try container.decode(Double.self, forKey: .priceBeforeTax).decimal ?? .zero
        self.tax = try container.decode(Double.self, forKey: .tax).decimal ?? .zero
        self.taxAmount = try container.decode(Double.self, forKey: .taxAmount).decimal ?? .zero
    }
}


let data = Data("""
{
    "priceAfterTax": 150.00,
    "priceBeforeTax": 130.43,
    "tax": 15.00,
    "taxAmount": 19.57
}
""".utf8)

let decodedObj = try! JSONDecoder().decode(Root.self, from: data)
decodedObj.priceAfterTax  // 150.00
decodedObj.priceBeforeTax // 130.43
decodedObj.tax            // 15.00
decodedObj.taxAmount      // 19.57

How would I decode an NSDecimalNumber without loss of precision?

The issue is that JSONDecoder is just a wrapper around JSONSerialization, which internally decodes decimal numbers into Double and only afterwards converts them to Decimal. Sadly, unless you write your own JSON decoder, you cannot get around this loss of precision when decoding numeric decimals from JSON.

The only workaround currently possible is to change your backend to send all decimal numbers as Strings and then convert those into Decimals after decoding them as Strings.

For more information, have a look at this open Swift bug: SR-7054.
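A minimal sketch of both halves of the story: building a Decimal straight from a Double keeps the binary rounding error, while the double-to-string round trip from the first answer does not (the stray digits in the comment are illustrative and may vary by Foundation version):

```swift
import Foundation

// Decimal(Double) carries the Double's binary rounding error along:
let viaDouble = Decimal(98.99)                 // e.g. 98.98999999999999488 on Darwin

// Converting the Double to its shortest round-trip String first is lossless,
// because Swift prints a Double with the fewest digits that round-trip:
let viaString = Decimal(string: "\(98.99)")!   // "\(98.99)" is exactly "98.99"

print(viaString)                               // 98.99
print(viaDouble == viaString)
```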

Parsing Decimal from JSON presented as string

struct Root: Codable {
    let decimal: Decimal
}

extension Root {
    public init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        decimal = try Decimal(string: container.decode(String.self, forKey: .decimal)) ?? .zero
    }
}

let json = #"{"decimal":"0.007"}"#
do {
    let root = try JSONDecoder().decode(Root.self, from: .init(json.utf8))
    print(root)
} catch {
    print(error)
}

This will print

Root(decimal: 0.007)
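For completeness, a sketch of the encoding side of the same workaround, so the value also leaves your app as a String and the round trip stays lossless (the `encode(to:)` implementation and the explicit `CodingKeys` are my additions, not part of the original answer):

```swift
import Foundation

struct Root: Codable {
    let decimal: Decimal

    private enum CodingKeys: String, CodingKey {
        case decimal
    }

    public init(from decoder: Decoder) throws {
        let container = try decoder.container(keyedBy: CodingKeys.self)
        decimal = try Decimal(string: container.decode(String.self, forKey: .decimal)) ?? .zero
    }

    // Encode the Decimal back out as a String so no Double conversion happens.
    public func encode(to encoder: Encoder) throws {
        var container = encoder.container(keyedBy: CodingKeys.self)
        try container.encode("\(decimal)", forKey: .decimal)
    }
}

let root = try! JSONDecoder().decode(Root.self, from: Data(#"{"decimal":"0.007"}"#.utf8))
let data = try! JSONEncoder().encode(root)
print(String(data: data, encoding: .utf8)!)   // {"decimal":"0.007"}
```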

How to store 1.66 in NSDecimalNumber

In

let number: NSDecimalNumber = 1.66

the right-hand side is a floating-point number which cannot represent the value "1.66" exactly. One option is to create the decimal number from a string:

let number = NSDecimalNumber(string: "1.66")
print(number) // 1.66

Another option is to use arithmetic:

let number = NSDecimalNumber(value: 166).dividing(by: 100)
print(number) // 1.66

With Swift 3 you may consider using the "overlay value type" Decimal instead, e.g.

let num = Decimal(166)/Decimal(100)
print(num) // 1.66

Yet another option:

let num = Decimal(sign: .plus, exponent: -2, significand: 166)
print(num) // 1.66

Addendum:

Related discussions in the Swift forum:

  • Exact NSDecimalNumber via literal
  • ExpressibleByFractionLiteral

Related bug reports:

  • SR-3317
    Literal protocol for decimal literals should support precise decimal accuracy, closed as a duplicate of
  • SR-920
    Re-design builtin compiler protocols for literal convertible types.

Rounding a double value to x number of decimal places in swift

You can use Swift's round function to accomplish this.

To round a Double with 3 digits precision, first multiply it by 1000, round it and divide the rounded result by 1000:

let x = 1.23556789
let y = Double(round(1000 * x) / 1000)
print(y) // 1.236

Unlike printf(...)- or String(format: ...)-based solutions, the result of this operation still has type Double.
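If you need this in more than one place, the multiply-round-divide step can be wrapped in a small extension (a sketch; `rounded(toPlaces:)` is a hypothetical helper name, not a standard-library API, and it inherits the usual floating-point caveats):

```swift
import Foundation

extension Double {
    /// Rounds self to `places` decimal digits (hypothetical helper).
    func rounded(toPlaces places: Int) -> Double {
        let factor = pow(10.0, Double(places))
        return (self * factor).rounded() / factor
    }
}

print(1.23556789.rounded(toPlaces: 3))   // 1.236
```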

EDIT:

Regarding the comments that it sometimes does not work, please read this: What Every Programmer Should Know About Floating-Point Arithmetic

How to separate thousands from a Float value with decimals in Swift

Use Double instead of Float; the value you are specifying is not accurately representable as a Float:

// [... your current code ...]
let myDouble: Double = 1123455432.67899
let myNumber = NSNumber(value: myDouble)
// [... your current code ...]

1,123,455,432.679

The e+XX notation Float uses by default is not just for show; it is there because Float cannot store all of the digits. See:

let myFloat2: Float = 1123455432.67899
print(myFloat2 == 1123455432) // true
let notRepresentable = Float(exactly: 1123455432.67899) // nil
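Since the answer elides the surrounding code, here is a self-contained sketch of the thousands-separator formatting it presumably refers to, with a fixed locale so the output is reproducible (the `en_US` locale choice is my assumption):

```swift
import Foundation

let myDouble: Double = 1123455432.67899
let formatter = NumberFormatter()
formatter.numberStyle = .decimal                 // inserts grouping separators
formatter.maximumFractionDigits = 3
formatter.locale = Locale(identifier: "en_US")   // fixed locale for a stable result
print(formatter.string(from: NSNumber(value: myDouble))!)   // 1,123,455,432.679
```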

Mystery behind presentation of Floating Point numbers

This is purely an artifact of how an NSNumber prints itself.

JSONSerialization is implemented in Objective-C and uses Objective-C objects (NSDictionary, NSArray, NSString, NSNumber, etc.) to represent the values it deserializes from your JSON. Since the JSON contains a bare number with a decimal point as the value for the "amount" key, JSONSerialization parses it as a double and wraps it in an NSNumber.

Each of these Objective-C classes implements a description method to print itself.

The object returned by JSONSerialization is an NSDictionary. String(describing:) converts the NSDictionary to a String by sending it a description message. NSDictionary implements description by sending description to each of its keys and values, including the NSNumber value for the "amount" key.

The NSNumber implementation of description formats a double value using the printf specifier %0.16g. (I checked using a disassembler.) About the g specifier, the C standard says

Finally, unless the # flag is used, any trailing zeros are removed from the fractional portion of the result and the decimal-point wide character is removed if there is no fractional portion remaining.

The closest double to 98.39 is exactly 98.3900 0000 0000 0005 6843 4188 6080 8014 8696 8994 1406 25. So %0.16g formats that as %0.14f (see the standard for why it's 14, not 16), which gives "98.39000000000000", then chops off the trailing zeros, giving "98.39".

The closest double to 98.40 is exactly 98.4000 0000 0000 0056 8434 1886 0808 0148 6968 9941 4062 5. So %0.16g formats that as %0.14f, which gives "98.40000000000001" (because of rounding), and there are no trailing zeros to chop off.

So that's why, when you print the result of JSONSerialization.jsonObject(with:options:), you get lots of fractional digits for 98.40 but only two digits for 98.39.
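You can reproduce that %0.16g behavior directly from Swift and watch the two cases diverge:

```swift
import Foundation

// The closest double to 98.39 rounds to 98.39000000000000 at 16 significant
// digits; %g then chops the trailing zeros:
print(String(format: "%0.16g", 98.39))   // 98.39

// The closest double to 98.40 rounds to 98.40000000000001, so there is
// nothing to chop:
print(String(format: "%0.16g", 98.40))   // 98.40000000000001
```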

If you extract the amounts from the JSON object and convert them to Swift's native Double type, and then print those Doubles, you get much shorter output, because Double implements a smarter formatting algorithm that prints the shortest string that, when parsed, produces exactly the same Double.

Try this:

import Foundation

struct Price: Encodable {
    let amount: Decimal
}

func printJSON(from string: String) {
    let decimal = Decimal(string: string)!
    let price = Price(amount: decimal)

    let data = try! JSONEncoder().encode(price)
    let jsonString = String(data: data, encoding: .utf8)!
    let jso = try! JSONSerialization.jsonObject(with: data, options: []) as! [String: Any]
    let nsNumber = jso["amount"] as! NSNumber
    let double = jso["amount"] as! Double

    print("""
    Original string: \(string)
    json: \(jsonString)
    jso: \(jso)
    amount as NSNumber: \(nsNumber)
    amount as Double: \(double)

    """)
}

printJSON(from: "98.39")
printJSON(from: "98.40")
printJSON(from: "98.99")

Result:

Original string: 98.39
json: {"amount":98.39}
jso: ["amount": 98.39]
amount as NSNumber: 98.39
amount as Double: 98.39

Original string: 98.40
json: {"amount":98.4}
jso: ["amount": 98.40000000000001]
amount as NSNumber: 98.40000000000001
amount as Double: 98.4

Original string: 98.99
json: {"amount":98.99}
jso: ["amount": 98.98999999999999]
amount as NSNumber: 98.98999999999999
amount as Double: 98.99

Notice that both the actual JSON (on the lines labeled json:) and the Swift Double versions use the fewest digits in all cases. The lines that use -[NSNumber description] (labeled jso: and amount as NSNumber:) use extra digits for some values.


