What Data Type to Use for Money in Java

What data type to use for money in Java?

Java has a Currency class that represents ISO 4217 currency codes.
BigDecimal is the best type for representing currency decimal values.

Joda-Money provides a library for representing money.

Which primitive data type can be used for storing currency or other precise values in the Java programming language?

While many will tell you not to use double, this is in fact what most investment banks do. Why? Because it is one of the least error-prone data types to use.

Why use double

  • the representation error is small and deterministic, not random.
  • the precision of double is enough for currencies not experiencing hyperinflation. E.g. the US national debt (which is an estimate anyway) can be represented to 0.1 cent precision. If you have a currency in hyperinflation, don't trade it.
  • Using long is an option, but you need to keep track of the implied decimal point correctly or you will accidentally scale a number by a factor of 10 or 100. This is far worse than adding 0.000000001 cents.
  • Using BigDecimal doesn't mean you can ignore rounding, and it makes rounding errors harder to detect. It is slower to code and run, and more error-prone to write (as it doesn't have language support like C#'s decimal). E.g. if you see 0.3333333333333332 you know it's an error, but what about 0.33 as a BigDecimal? It looks fine, but it might be the result of 1.00/3.
  • A lot of financial libraries are written in C or C++. These all use double, as there is no BigDecimal. A good portion of financial trading systems are written entirely in C/C++, and they don't use BigDecimal either. (They could use long long, but most use double.)

One thing I think everyone agrees on: don't use float unless you have many billions of such values, e.g. for back-testing. It has poor precision, and the code is actually slightly more verbose.
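The double-with-rounding approach the bullets above describe can be sketched roughly as follows. This is my own illustration, not code from the answer: roundToCents is a made-up helper name, and the approach assumes amounts stay well within double's roughly 15-digit precision.

```java
// Sketch (illustrative, not from the answer): keep money in double,
// but round to a fixed precision after accumulating arithmetic.
class DoubleMoney {
    // Round a dollar amount to the nearest cent.
    static double roundToCents(double amount) {
        return Math.round(amount * 100.0) / 100.0;
    }

    public static void main(String[] args) {
        double price = 0.1;
        double total = 0.0;
        for (int i = 0; i < 10; i++) {
            total += price;
        }
        // Raw double accumulation drifts slightly:
        System.out.println(total);               // 0.9999999999999999
        // Rounding to cents restores the intended value:
        System.out.println(roundToCents(total)); // 1.0
    }
}
```

The key discipline is that the rounding step is explicit and applied at well-defined points, which is exactly the "keep track of rounding yourself" trade-off the answer argues for.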

How to use money data type in Java, SQL, ORM

What are best practices for using a money data type in a Java application?

Use BigDecimal. Using any primitive will lead to precision problems sooner or later.
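A minimal illustration of the precision problem and how BigDecimal avoids it. Note the String constructor: new BigDecimal(0.1) would inherit double's inexact representation.

```java
import java.math.BigDecimal;

// The same sum in double and in BigDecimal.
class ExactSum {
    public static void main(String[] args) {
        System.out.println(0.1 + 0.2); // 0.30000000000000004

        BigDecimal a = new BigDecimal("0.10"); // String constructor: exact
        BigDecimal b = new BigDecimal("0.20");
        System.out.println(a.add(b));  // 0.30
    }
}
```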

And what about ORMs and SQL in the most popular databases?

Hibernate (and probably all others) can handle BigDecimal just fine. And it translates to the appropriate database type, which is usually DECIMAL.
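End to end, that mapping usually looks something like the following JPA sketch. The entity and column names are hypothetical, and precision 19 / scale 4 is just a common convention, not a requirement; this fragment is illustrative and needs a JPA provider such as Hibernate to actually run.

```java
import java.math.BigDecimal;
import jakarta.persistence.Column;
import jakarta.persistence.Entity;
import jakarta.persistence.Id;

// Illustrative JPA mapping (names are made up): precision/scale here
// correspond to a DECIMAL(19,4) column in the database.
@Entity
public class Invoice {
    @Id
    private Long id;

    @Column(precision = 19, scale = 4)
    private BigDecimal amount;
}
```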

Which data type to use for manipulating currency

You almost certainly don't want to use floating-point types (double, float, Double, Float) to handle monetary amounts, especially if you will be performing computations on them. The main reason is that many simple-looking numbers cannot be represented exactly as a double or float. One such number is 0.1.
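You can see this directly: the BigDecimal(double) constructor preserves the exact binary value of a double, so it reveals what 0.1 really is in memory.

```java
import java.math.BigDecimal;

// Inspect the double closest to 0.1 exactly.
class PointOne {
    public static void main(String[] args) {
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625
    }
}
```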

BigDecimal is therefore a much better choice for this use case.

Representing Monetary Values in Java

BigDecimal all the way. I've heard of some folks creating their own Cash or Money classes which encapsulate a cash value with the currency, but under the skin it's still a BigDecimal, probably rounded with RoundingMode.HALF_EVEN (the modern replacement for the deprecated BigDecimal.ROUND_HALF_EVEN).

Edit: As Don mentions in his answer, there are open-source projects like timeandmoney, and whilst I applaud them for trying to prevent developers from having to reinvent the wheel, I just don't have enough confidence in a pre-alpha library to use it in a production environment. Besides, if you dig around under the hood, you'll see they use BigDecimal too.
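A sketch of the kind of wrapper class the answer mentions. The class and method names are my own invention; the point is just the shape: a BigDecimal amount paired with a Currency, normalized to the currency's fraction digits with HALF_EVEN ("banker's") rounding.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.Currency;

// Illustrative Money wrapper (not from any library): immutable,
// normalized to the currency's default fraction digits.
final class Money {
    private final BigDecimal amount;
    private final Currency currency;

    Money(BigDecimal amount, Currency currency) {
        this.currency = currency;
        this.amount = amount.setScale(currency.getDefaultFractionDigits(),
                                      RoundingMode.HALF_EVEN);
    }

    Money add(Money other) {
        if (!currency.equals(other.currency)) {
            throw new IllegalArgumentException("currency mismatch");
        }
        return new Money(amount.add(other.amount), currency);
    }

    BigDecimal amount() { return amount; }

    @Override
    public String toString() {
        return currency.getCurrencyCode() + " " + amount;
    }
}
```

Rejecting cross-currency addition at the type level is most of the value of such a wrapper: it turns a silent data error into an immediate exception.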

Data Type for Money in Dart/Flutter

As Abion47 said, the best way to store money would be to use the decimal package, which is like BigDecimal in Java and has a load of useful features.

How to format decimals in a currency format?

I doubt it. The problem is that a computed value that should be 100 is rarely exactly 100 if it's a float; it's normally 99.9999999999 or 100.0000001 or something like that.

If you do want to format it that way, you have to define an epsilon: a maximum distance from an integer. If the difference is smaller than the epsilon, use integer formatting; otherwise, use decimal formatting.

Something like this would do the trick:

public String formatDecimal(float number) {
    float epsilon = 0.004f; // 4 tenths of a cent
    if (Math.abs(Math.round(number) - number) < epsilon) {
        // close enough to a whole number: drop the decimals
        return String.format("%10.0f", number);
    } else {
        return String.format("%10.2f", number);
    }
}

Why not use Double or Float to represent currency?

Because floats and doubles cannot accurately represent the base-10 multiples that we use for money. This isn't just a Java issue; it affects any programming language that uses base-2 floating-point types.

In base 10, you can write 10.25 as 1025 × 10^-2 (an integer times a power of ten). IEEE 754 floating-point numbers are different, but a very simple way to think about them is to multiply by a power of two instead. For instance, you could be looking at 164 × 2^-4 (an integer times a power of two), which is also equal to 10.25. That's not how the numbers are represented in memory, but the mathematical implications are the same.

Even in base 10, this notation cannot accurately represent most simple fractions. For instance, you can't represent 1/3: the decimal representation is repeating (0.3333...), so there is no finite integer that you can multiply by a power of ten to get 1/3. You could settle on a long sequence of 3's and a small exponent, like 3333333333 × 10^-10, but it is not accurate: if you multiply that by 3, you won't get 1.
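This limitation carries over to BigDecimal, which is why its divide() method requires a scale and a rounding mode whenever the result would not terminate. A quick sketch:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// 1/3 has no finite decimal form, so BigDecimal forces you to choose
// a scale and rounding mode explicitly.
class OneThird {
    public static void main(String[] args) {
        // BigDecimal.ONE.divide(new BigDecimal(3)) alone would throw
        // ArithmeticException: Non-terminating decimal expansion.
        BigDecimal third =
            BigDecimal.ONE.divide(new BigDecimal(3), 10, RoundingMode.HALF_UP);
        System.out.println(third);                             // 0.3333333333
        System.out.println(third.multiply(new BigDecimal(3))); // 0.9999999999
    }
}
```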

However, for the purpose of counting money, at least for countries whose money is valued within an order of magnitude of the US dollar, usually all you need is to be able to store multiples of 10^-2, so it doesn't really matter that 1/3 can't be represented.

The problem with floats and doubles is that the vast majority of money-like numbers don't have an exact representation as an integer times a power of 2. In fact, the only multiples of 0.01 between 0 and 1 (which are significant when dealing with money because they're integer cents) that can be represented exactly as an IEEE-754 binary floating-point number are 0, 0.25, 0.5, 0.75 and 1. All the others are off by a small amount. As an analogy to the 0.333333 example, if you take the floating-point value for 0.01 and you multiply it by 10, you won't get 0.1. Instead you will get something like 0.099999999786...
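This is easy to verify: the BigDecimal(double) constructor exposes the exact binary value of a double, so you can see which of the cent values listed above survive the trip through double unchanged.

```java
import java.math.BigDecimal;

// Which cent values are exact in binary floating point?
class ExactCents {
    public static void main(String[] args) {
        System.out.println(new BigDecimal(0.25)); // 0.25 (exact: 1 × 2^-2)
        System.out.println(new BigDecimal(0.01));
        // 0.01000000000000000020816... (inexact)
    }
}
```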

Representing money as a double or float will probably look good at first as the software rounds off the tiny errors, but as you perform more additions, subtractions, multiplications and divisions on inexact numbers, errors will compound and you'll end up with values that are visibly not accurate. This makes floats and doubles inadequate for dealing with money, where perfect accuracy for multiples of base 10 powers is required.

A solution that works in just about any language is to use integers instead, and count cents. For instance, 1025 would be $10.25. Several languages also have built-in types to deal with money. Among others, Java has the BigDecimal class, Rust has the rust_decimal crate, and C# has the decimal type.
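The integer-cents idea can be sketched as follows; the helper names are illustrative, and this simple formatter assumes non-negative amounts.

```java
// Count money in whole cents using a long: all arithmetic is exact
// integer math. Helper names are made up for illustration.
class Cents {
    static long fromDollarsAndCents(long dollars, long cents) {
        return dollars * 100 + cents;
    }

    // Format a non-negative cent count as dollars.cents.
    static String format(long totalCents) {
        return String.format("$%d.%02d", totalCents / 100, totalCents % 100);
    }

    public static void main(String[] args) {
        long price = fromDollarsAndCents(10, 25); // $10.25 -> 1025 cents
        long total = price * 3;                   // exact: 3075 cents
        System.out.println(format(total));        // $30.75
    }
}
```

The trade-off, as noted earlier in this document, is that the implied decimal point is now the programmer's responsibility to track.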


