Double vs. BigDecimal

Double vs. BigDecimal?

A BigDecimal is an exact way of representing numbers. A double has only limited precision. Working with doubles of very different magnitudes (say d1=1000.0 and d2=0.001) can mean the 0.001 loses precision when summing, and with a large enough difference in magnitude it is dropped altogether. With BigDecimal this would not happen.
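As a quick illustration (a minimal sketch, not part of the original answer; the class name is arbitrary), repeatedly adding 0.001 to a double accumulates representation error, while the same sum with BigDecimal stays exact:

    import java.math.BigDecimal;

    public class SumDemo {
        public static void main(String[] args) {
            // double: 0.001 has no exact binary representation, so error accumulates
            double d = 1000.0;
            for (int i = 0; i < 1000; i++) {
                d += 0.001;
            }
            System.out.println(d);   // close to, but not exactly, 1001.0

            // BigDecimal: the same sum is exact
            BigDecimal bd = new BigDecimal("1000.0");
            for (int i = 0; i < 1000; i++) {
                bd = bd.add(new BigDecimal("0.001"));
            }
            System.out.println(bd);  // exactly 1001.000
        }
    }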

The disadvantage of BigDecimal is that it is slower, and it is a bit more awkward to write algorithms with it (since +, -, * and / are not overloaded).
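To illustrate that point (a sketch with made-up values), here is the same formula written with double operators and with BigDecimal method calls; note that BigDecimal division also needs an explicit scale and rounding mode, since the exact quotient may not terminate:

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class OperatorDemo {
        public static void main(String[] args) {
            // with double the formula reads naturally
            double a = 1.5, b = 2.25, c = 4.0, x = 7.0;
            System.out.println((a + b) * c / x);

            // with BigDecimal every operation is a method call
            BigDecimal ba = new BigDecimal("1.5");
            BigDecimal bb = new BigDecimal("2.25");
            BigDecimal bc = new BigDecimal("4.0");
            BigDecimal bx = new BigDecimal("7.0");
            System.out.println(ba.add(bb).multiply(bc).divide(bx, 10, RoundingMode.HALF_UP));
        }
    }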

If you are dealing with money, or precision is a must, use BigDecimal. Otherwise doubles tend to be good enough.

I do recommend reading the javadoc of BigDecimal, as it explains these things better than I do here :)

BigDecimal vs. double: which provides better precision?

BigDecimal is an exact way of representing numbers and is not affected by rounding in the way that double is. Precision is lost at the moment a value is stored in a double, not when that double is later converted to a BigDecimal.
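One way to see this (a small sketch, not part of the quoted answer): constructing a BigDecimal directly from a double exposes the imprecision that was already baked into the double, while constructing it from a String never goes through a double at all:

    import java.math.BigDecimal;

    public class ConversionDemo {
        public static void main(String[] args) {
            // the literal 0.1 is already an approximation once it is a double;
            // BigDecimal(double) shows exactly the value the double holds
            System.out.println(new BigDecimal(0.1));
            // 0.1000000000000000055511151231257827021181583404541015625

            // the String constructor avoids the double step, so nothing is lost
            System.out.println(new BigDecimal("0.1"));
            // 0.1
        }
    }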

For more details, you can see this post: Double vs. BigDecimal?

Java: Why should we use BigDecimal instead of Double in the real world?

It's called loss of precision, and it is very noticeable when working with either very big or very small numbers. The binary representation of decimal numbers with a fractional part is in many cases an approximation rather than an exact value. To understand why, you need to read up on floating-point representation in binary. Here is a link: http://en.wikipedia.org/wiki/IEEE_754-2008. Here is a quick demonstration:

In bc (an arbitrary-precision calculator language) with precision=10:

(1/3+1/12+1/8+1/15) = 0.6083333332

(1/3+1/12+1/8) = 0.541666666666666

(1/3+1/12) = 0.416666666666666

Java double:

0.6083333333333333

0.5416666666666666

0.41666666666666663

Java float:

0.60833335

0.5416667

0.4166667
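The Java figures above can be reproduced with a few lines (a sketch; the class name is arbitrary):

    public class FractionDemo {
        public static void main(String[] args) {
            // double: matches the values listed above
            System.out.println(1.0 / 3 + 1.0 / 12 + 1.0 / 8 + 1.0 / 15);
            System.out.println(1.0 / 3 + 1.0 / 12 + 1.0 / 8);
            System.out.println(1.0 / 3 + 1.0 / 12);

            // float: noticeably less precise
            System.out.println(1f / 3 + 1f / 12 + 1f / 8 + 1f / 15);
            System.out.println(1f / 3 + 1f / 12 + 1f / 8);
            System.out.println(1f / 3 + 1f / 12);
        }
    }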

If you are a bank and are responsible for thousands of transactions every day, even if they are not all to and from one and the same account (or maybe they are), you have to have reliable numbers. Binary floating-point values are not reliable, not unless you understand how they work and their limitations.

Why is BigDecimal more precise than double?

A double is a remarkably fast floating-point data type, implemented at a very low level on many chipsets.

Its precision is sufficient for very many applications: e.g. measuring the distance from the Sun to Pluto to the nearest centimetre!

There is always a performance trade-off when considering a move to a more precise data type: the latter will be much slower, and your favourite mathematical libraries may not support it. Remember that the outputs of your program are a function of the quality of the inputs.

As a final remark: never use a double to represent cash quantities!
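A minimal sketch of why: three amounts of 0.10 do not sum to exactly 0.30 with double, but they do with BigDecimal:

    import java.math.BigDecimal;

    public class CashDemo {
        public static void main(String[] args) {
            // double: the sum carries binary rounding error
            double total = 0.1 + 0.1 + 0.1;
            System.out.println(total);         // 0.30000000000000004
            System.out.println(total == 0.3);  // false

            // BigDecimal: the sum is exactly 0.3
            BigDecimal bd = new BigDecimal("0.1")
                    .add(new BigDecimal("0.1"))
                    .add(new BigDecimal("0.1"));
            System.out.println(bd);            // 0.3
        }
    }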

Using BigDecimal as a beginner instead of Double?

double should be used whenever you are working with real numbers where perfect precision is not required. Here are some common examples:

  • Computer graphics, for several reasons: exact precision is rarely required, as few monitors have more than a low four-digit number of pixels; additionally, most trigonometric functions are available only for float and double, and trigonometry is essential to most graphics work
  • Statistical analysis; metrics like mean and standard deviation are typically expected to have at most a little more precision than the individual data points
  • Randomness (e.g. Random.nextDouble()), where the point isn't a specific number of digits; the priority is a real number over some specific distribution
  • Machine learning, where multiplication factors are being learned and specific decimal precision isn't required

For values like money, using double at any point at all is a recipe for a bad time.
BigDecimal should generally be used for money and anything else where you care about a specific number of decimal digits, but it has inferior performance and offers fewer mathematical operations.
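When a specific number of decimal digits matters, as with money, BigDecimal also lets you pin the scale and make the rounding rule explicit. A sketch, assuming a two-decimal currency and HALF_EVEN rounding purely as illustrative choices:

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class MoneyDemo {
        public static void main(String[] args) {
            BigDecimal price = new BigDecimal("19.99");
            BigDecimal taxRate = new BigDecimal("0.0825");

            // keep exactly two decimal places and state the rounding explicitly
            BigDecimal tax = price.multiply(taxRate).setScale(2, RoundingMode.HALF_EVEN);
            BigDecimal total = price.add(tax);

            System.out.println(tax);    // 1.65
            System.out.println(total);  // 21.64
        }
    }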


