Is a Double Really Unsuitable for Money

Is a double really unsuitable for money?

Very, very unsuitable. Use decimal.

double x = 3.65, y = 0.05, z = 3.7;
Console.WriteLine((x + y) == z); // false

(example from Jon's page - recommended reading ;-p)

Why not use Double or Float to represent currency?

Because floats and doubles cannot accurately represent the base 10 multiples that we use for money. This issue isn't just for Java; it applies to any programming language that uses base 2 floating-point types.

In base 10, you can write 10.25 as 1025 * 10^-2 (an integer times a power of 10). IEEE-754 floating-point numbers are different, but a very simple way to think about them is to multiply by a power of two instead. For instance, you could be looking at 164 * 2^-4 (an integer times a power of two), which is also equal to 10.25. That's not how the numbers are represented in memory, but the math implications are the same.
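
As a quick C# illustration of that idea (just a sketch, reusing the same 10.25 example):

// 10.25 is 164 * 2^-4, so the double for 10.25 is exact
Console.WriteLine(10.25 == 164.0 / 16.0);  // True
Console.WriteLine(10.25 + 10.25 == 20.5);  // True - sums of exactly representable values stay exact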

Even in base 10, this notation cannot accurately represent most simple fractions. For instance, you can't represent 1/3: the decimal representation is repeating (0.3333...), so there is no integer that you can multiply by a power of 10 to get exactly 1/3. You could settle on a long sequence of 3's and a small exponent, like 3333333333 * 10^-10, but it is not accurate: if you multiply that by 3, you won't get 1.
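
The same limitation is easy to see with C#'s decimal type, which is base 10; a small sketch:

// decimal is base 10, so 1/3 still has to be cut off somewhere
decimal third = 1m / 3m;            // 0.3333333333333333333333333333
Console.WriteLine(third * 3 == 1m); // False - multiplying back gives 0.9999999999999999999999999999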

However, for the purpose of counting money, at least for countries whose money is valued within an order of magnitude of the US dollar, usually all you need is to be able to store multiples of 10^-2, so it doesn't really matter that 1/3 can't be represented.

The problem with floats and doubles is that the vast majority of money-like numbers don't have an exact representation as an integer times a power of 2. In fact, the only multiples of 0.01 between 0 and 1 (which are significant when dealing with money because they're integer cents) that can be represented exactly as an IEEE-754 binary floating-point number are 0, 0.25, 0.5, 0.75 and 1. All the others are off by a small amount. As an analogy to the 0.333333 example, the double you get when you write 0.01 isn't exactly 0.01; it's approximately 0.010000000000000000208. Those tiny offsets show up as soon as you do arithmetic: 0.1 + 0.2, for example, does not come out as exactly 0.3.
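
A couple of quick C# checks along those lines (illustrative, not exhaustive):

Console.WriteLine(0.25 + 0.25 + 0.25 + 0.25 == 1.0); // True - 0.25 is exactly 1 * 2^-2
Console.WriteLine(0.1 + 0.1 + 0.1 == 0.3);           // False - 0.1 has no exact binary form
Console.WriteLine((0.1 + 0.2).ToString("G17"));      // 0.30000000000000004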

Representing money as a double or float will probably look good at first as the software rounds off the tiny errors, but as you perform more additions, subtractions, multiplications and divisions on inexact numbers, the errors compound and you end up with values that are visibly inaccurate. This makes floats and doubles inadequate for dealing with money, where perfect accuracy for multiples of base-10 powers is required.
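
A small C# sketch of that drift, assuming we naively add one cent a hundred times (the exact digits printed may vary slightly with formatting):

double total = 0.0;
for (int i = 0; i < 100; i++)
    total += 0.01;                        // add one cent, one hundred times
Console.WriteLine(total == 1.0);          // False
Console.WriteLine(total.ToString("G17")); // something like 1.0000000000000007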

A solution that works in just about any language is to use integers instead, and count cents. For instance, 1025 would be $10.25. Several languages also provide types designed for exact decimal arithmetic: among others, Java has the BigDecimal class, Rust has the rust_decimal crate, and C# has the built-in decimal type.
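
A minimal C# sketch of the integer-cents approach (the amounts and names here are made up for illustration; negative amounts would need a little more care when formatting):

long itemInCents = 1025;             // $10.25, stored as 1025 cents
long totalInCents = itemInCents * 3; // 3075 - integer math, always exact
Console.WriteLine($"${totalInCents / 100}.{totalInCents % 100:D2}"); // $30.75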

decimal vs double! - Which one should I use and when?

For money, always decimal. It's why it was created.

If numbers must add up correctly or balance, use decimal. This includes any financial storage or calculations, scores, or other numbers that people might calculate by hand.

If the exact value of numbers is not important, use double for speed. This includes graphics, physics, and other physical-science computations where there is already a "number of significant digits".
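
Going back to the example at the top, decimal adds up where double doesn't; a quick comparison:

double dx = 3.65, dy = 0.05;
decimal mx = 3.65m, my = 0.05m;
Console.WriteLine(dx + dy == 3.7);  // False - binary floating point
Console.WriteLine(mx + my == 3.7m); // True  - decimal stores base-10 digits exactly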

Why is using a NON-decimal data type bad for money?

Floats aren't stable for accumulating and decrementing funds. Here's a concrete example:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace BadFloat
{
    class Program
    {
        static void Main(string[] args)
        {
            Currency yourMoneyAccumulator = 0.0d;
            int count = 200000;
            double increment = 20000.01d; // not exactly representable in binary
            for (int i = 0; i < count; i++)
                yourMoneyAccumulator += increment;
            Console.WriteLine(yourMoneyAccumulator + " accumulated vs. " + increment * count + " expected");
        }
    }

    struct Currency
    {
        private const double EPSILON = 0.00005;
        public Currency(double value) { this.value = value; }
        private double value;
        public static Currency operator +(Currency a, Currency b) { return new Currency(a.value + b.value); }
        public static Currency operator -(Currency a, Currency b) { return new Currency(a.value - b.value); }
        public static Currency operator *(Currency a, double factor) { return new Currency(a.value * factor); }
        public static Currency operator *(double factor, Currency a) { return new Currency(a.value * factor); }
        public static Currency operator /(Currency a, double factor) { return new Currency(a.value / factor); }
        public static Currency operator /(double factor, Currency a) { return new Currency(factor / a.value); }
        public static explicit operator double(Currency c) { return System.Math.Round(c.value, 4); }
        public static implicit operator Currency(double d) { return new Currency(d); }
        public static bool operator <(Currency a, Currency b) { return (a.value - b.value) < -EPSILON; }
        public static bool operator >(Currency a, Currency b) { return (a.value - b.value) > +EPSILON; }
        public static bool operator <=(Currency a, Currency b) { return (a.value - b.value) <= +EPSILON; }
        public static bool operator >=(Currency a, Currency b) { return (a.value - b.value) >= -EPSILON; }
        public static bool operator ==(Currency a, Currency b) { return Math.Abs(a.value - b.value) <= EPSILON; }
        public static bool operator !=(Currency a, Currency b) { return Math.Abs(a.value - b.value) > EPSILON; }
        public bool Equals(Currency other) { return this == other; }
        public override int GetHashCode() { return ((double)this).GetHashCode(); }
        public override bool Equals(object other) { return other is Currency && this.Equals((Currency)other); }
        public override string ToString() { return this.value.ToString("C4"); }
    }
}

On my box this gives $4,000,002,000.0203 accumulated vs. 4000002000 expected in C#. That's a bad deal if it happens over many transactions in a bank - they don't have to be large, just numerous. Does that help?

What data type should I use to represent money in C#?

Use System.Decimal:

The Decimal value type represents decimal numbers ranging from positive 79,228,162,514,264,337,593,543,950,335 to negative 79,228,162,514,264,337,593,543,950,335. The Decimal value type is appropriate for financial calculations requiring large numbers of significant integral and fractional digits and no round-off errors. The Decimal type does not eliminate the need for rounding. Rather, it minimizes errors due to rounding.

Neither System.Single (float) nor System.Double (double) is precise enough to represent money values without rounding errors.
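
A short usage sketch (the price and tax rate are made-up values, not from the documentation):

decimal price = 129.99m;   // the m suffix makes the literal a decimal
decimal taxRate = 0.0825m; // hypothetical 8.25% tax rate
decimal total = Math.Round(price * (1m + taxRate), 2);
Console.WriteLine(total);  // 140.71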

Using Double data type to transport Money values

There is a loss of precision when you store data in a double, not just when you retrieve it. So no, this doesn't get around your problem: you can't magically recover precision that has already been lost.
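
A sketch of what that loss looks like in C# (the value is arbitrary, chosen to carry more significant digits than a double can hold):

decimal original = 123456789.123456789m;     // 18 significant digits
double transported = (double)original;       // a double keeps only about 15-17 of them
decimal roundTripped = (decimal)transported; // the conversion back rounds again
Console.WriteLine(original == roundTripped); // False - digits were lost in transit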

For java applications, is it safe to use BigDecimal when dealing with money, or should I use integers and create an abstraction for money?

It depends on your requirements. You may only need resolution to the nearest thousand (for example, salary requirements on a job posting website).

Assuming you mean that you do need finer granularity, BigDecimal seems perfectly suited for the job. It certainly seems "safe" to use, but without knowing exactly what you plan to do with it, it's hard to say for certain.

Java - How to get the double from an Integer (Money)

If I'm understanding your question correctly, you're probably just trying to do this:

return (double)number / 100.0;

Though a word of warning: You shouldn't be using floating-point for money due to round-off issues.

If you just want to print the number in money format, here's a 100% safe method:

System.out.print((number / 100) + ".");
int cents = number % 100;
if (cents < 10)
System.out.print("0");
System.out.println(cents);

This could probably be simplified further... Of course you can go with BigDecimal, but IMO that's smashing an ant with a sledgehammer.


