Decimal VS Double! - Which One Should I Use and When

decimal vs double! - Which one should I use and when?

For money, always decimal. It's why it was created.

If numbers must add up correctly or balance, use decimal. This includes any financial storage or calculations, scores, or other numbers that people might do math on by hand.

If the exact value of numbers is not important, use double for speed. This includes graphics, physics, and other physical-science computations, where the inputs already carry a limited number of significant digits.
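
As a quick illustration of the first rule (a standalone C# sketch written for this write-up, not taken from any particular codebase), repeatedly adding a "tenth" drifts with double but stays exact with decimal:

using System;

// Add 0.1 one hundred times; the expected total is exactly 10.
double doubleSum = 0;
decimal decimalSum = 0m;
for (int i = 0; i < 100; i++)
{
    doubleSum += 0.1;    // binary floating point: 0.1 is only an approximation
    decimalSum += 0.1m;  // decimal floating point: 0.1 is stored exactly
}
Console.WriteLine(doubleSum == 10.0);    // False - the double total has drifted slightly
Console.WriteLine(decimalSum == 10.0m);  // True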

Difference between decimal, float and double in .NET?

float and double are floating binary point types (float is 32-bit; double is 64-bit). In other words, they represent a number like this:

10001.10010110011

The binary number and the location of the binary point are both encoded within the value.

decimal is a floating decimal point type. In other words, it represents a number like this:

12345.65789

Again, the number and the location of the decimal point are both encoded within the value – that's what makes decimal still a floating point type instead of a fixed point type.

The important thing to note is that humans are used to representing non-integers in a decimal form, and expect exact results in decimal representations; not all decimal numbers are exactly representable in binary floating point – 0.1, for example – so if you use a binary floating point value you'll actually get an approximation to 0.1. You'll still get approximations when using a floating decimal point as well – the result of dividing 1 by 3 can't be exactly represented, for example.
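
To make that concrete, here is a small C# sketch (an illustration added for this write-up, not part of the original answer):

using System;

// 0.1 cannot be stored exactly in binary floating point; "G17" formatting
// reveals the approximation that is actually stored in the double.
Console.WriteLine(0.1.ToString("G17"));  // 0.10000000000000001
// 0.1 is stored exactly as a decimal.
Console.WriteLine(0.1m);                 // 0.1
// 1/3 cannot be represented exactly by either type; decimal simply
// truncates after 28-29 significant digits.
Console.WriteLine(1m / 3m);              // 0.3333333333333333333333333333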

As for what to use when:

  • For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.

  • For values which are more artefacts of nature which can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain the "decimal accuracy". Floating binary point types are much faster to work with than decimals.

When should I use double instead of decimal?

I think you've summarised the advantages quite well. You are, however, missing one point. The decimal type is only more accurate at representing base-10 numbers (e.g. those used in currency/financial calculations). For arbitrary real numbers, the double type offers greater range and definitely greater speed, although decimal actually carries more significant digits (28-29 versus roughly 15-17 for double). The simple conclusion is: when considering which to use, always use double unless you need the base-10 accuracy that decimal offers.

Edit:

Regarding your additional question about the decrease in accuracy of floating-point numbers after operations, this is a slightly more subtle issue. Indeed, precision (I use the term interchangeably for accuracy here) will steadily decrease after each operation is performed. This is due to two reasons:

  1. the fact that certain numbers (most obviously decimal fractions) can't be truly represented in binary floating point form
  2. rounding errors occur, just as if you were doing the calculation by hand. Whether these errors are significant enough to warrant much thought, however, depends greatly on the context (how many operations you're performing).

In all cases, if you want to compare two floating-point numbers that should in theory be equivalent (but were arrived at using different calculations), you need to allow a certain degree of tolerance (how much varies, but is typically very small).
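
For example, a tolerance-based comparison might look like the following sketch (the tolerance value here is an arbitrary choice for illustration; pick one appropriate to your calculations):

using System;

// Two routes to the "same" value produce slightly different doubles.
double a = 0.1 * 3;         // 0.30000000000000004
double b = 0.3;
Console.WriteLine(a == b);  // False

// Compare with a tolerance instead of exact equality.
const double tolerance = 1e-9;
bool effectivelyEqual = Math.Abs(a - b) < tolerance;
Console.WriteLine(effectivelyEqual);  // True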

For a more detailed overview of the particular cases where accuracy errors can be introduced, see the Accuracy section of the Wikipedia article on floating point. Finally, if you want a seriously in-depth (and mathematical) discussion of floating-point numbers/operations at machine level, try reading the oft-quoted article What Every Computer Scientist Should Know About Floating-Point Arithmetic.

How to decide what to use - double or decimal?

Use decimal whenever you're dealing with quantities that you want to (and can) be represented exactly in base-10. That includes monetary values, because you want 2.1234 to be represented exactly as 2.1234.

Use double when you don't need an exact representation in base-10. This is usually good for handling measurements, because those are already approximations, not exact quantities.

Of course, if an exact base-10 representation is not important to you, other factors come into consideration, and they may or may not matter depending on the specific situation (see the sketch after this list):

  • double has a larger range (it can handle very large and very small magnitudes);
  • decimal has more precision (more significant digits);
  • you may need to use double to interact with some older APIs that are not aware of decimal;
  • double is faster than decimal;
  • decimal has a larger memory footprint (16 bytes versus 8 for a double).
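
A quick sketch of the range, precision, and footprint points (output values are approximate and shown for .NET's built-in double and decimal):

using System;

// Range: double reaches far larger magnitudes than decimal.
Console.WriteLine(double.MaxValue);    // ~1.7976931348623157E+308
Console.WriteLine(decimal.MaxValue);   // 79228162514264337593543950335 (~7.9E+28)

// Precision: decimal keeps 28-29 significant digits, double roughly 15-17.
Console.WriteLine(1m / 3m);            // 0.3333333333333333333333333333
Console.WriteLine(1.0 / 3.0);          // 0.3333333333333333

// Footprint: a decimal occupies twice the memory of a double.
Console.WriteLine(sizeof(double));     // 8
Console.WriteLine(sizeof(decimal));    // 16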

Decimal vs Double Speed

  1. Floating point arithmetic will almost always be significantly faster because it is supported directly by the hardware. So far, almost no widely used hardware supports decimal arithmetic (although this is changing).
  2. Financial applications should always use decimal numbers; the horror stories stemming from using floating point in financial applications are endless, and you should be able to find many such examples with a Google search.
  3. While decimal arithmetic may be significantly slower than floating point arithmetic, unless you are spending a significant amount of time processing decimal data, the impact on your program is likely to be negligible. As always, do the appropriate profiling before you start worrying about the difference (a minimal timing sketch follows this list).
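
A minimal timing sketch along those lines (the iteration count is arbitrary, and results will vary by machine and runtime; this is illustrative, not a rigorous benchmark):

using System;
using System.Diagnostics;

// Time the same arithmetic done with double and with decimal.
const int iterations = 10_000_000;

var sw = Stopwatch.StartNew();
double doubleSum = 0;
for (int i = 1; i <= iterations; i++)
    doubleSum += 1.0 / i;
sw.Stop();
Console.WriteLine($"double:  {sw.ElapsedMilliseconds} ms");

sw.Restart();
decimal decimalSum = 0m;
for (int i = 1; i <= iterations; i++)
    decimalSum += 1m / i;
sw.Stop();
Console.WriteLine($"decimal: {sw.ElapsedMilliseconds} ms");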

decimal vs double again

The reason decimal is recommended is that all numbers which can be written as non-repeating decimals can be represented exactly in a decimal type. Units of money in the real world are always non-repeating decimals. Your problem, as others have said, is that your price is, for some reason, not representable as a non-repeating decimal: it is 0.083333333.... Using a double doesn't actually help in terms of accuracy - a double cannot accurately represent 1/12 either. In this case the lack of accuracy is not causing a problem, but in others it might.

Also, and more importantly, choosing a double means there are many more numbers that you can't represent completely accurately - 0.01, 0.02, 0.03, and so on. Quite a lot of the numbers you are likely to care about can't be accurately represented as a double.

In this case the question of where the price comes from is really the important one. Wherever you are storing that price, you almost certainly aren't storing 1/12 exactly: either you are already storing an approximation, or that price is actually the result of a calculation (or you are using a very unusual storage system that stores rational numbers, but that seems wildly unlikely).

What you really want is a price that can be represented as a double. If that is what you have, but you then modify it (e.g. by dividing by 12 to get a monthly cost from an annual one), you need to do that division as late as possible. Quite possibly you also need to calculate the monthly cost as a division of the outstanding balance. What I mean by this last part is that if you are paying $10 a year in monthly instalments, you might charge $0.83 for the first month. The second month you charge ($10 - 0.83)/11, which is 0.83 again. In the fifth month you charge (10 - 0.83*4)/8, which is now 0.84 (once rounded). The next month it's (10 - 0.83*4 - 0.84)/7, and so on. This way you guarantee that the total charge is correct and don't have to worry about compounded errors.
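
A sketch of that outstanding-balance approach (the $10 annual price and 12-month term are just the figures from the example above):

using System;

// Charge an annual price in monthly instalments by always dividing the
// outstanding balance by the number of remaining months, so that rounding
// differences are absorbed and the instalments sum exactly to the total.
decimal annualPrice = 10m;
decimal charged = 0m;

for (int month = 1; month <= 12; month++)
{
    int monthsRemaining = 12 - month + 1;
    decimal instalment = Math.Round((annualPrice - charged) / monthsRemaining,
                                    2, MidpointRounding.AwayFromZero);
    charged += instalment;
    Console.WriteLine($"Month {month}: {instalment}");
}

Console.WriteLine($"Total charged: {charged}");  // exactly 10.00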

At the end of the day you are the only one to judge whether you can re-architect your system to remove all rounding errors like this or whether you have to mitigate them in some way as I've suggested. Your best bet though is to read up on everything you can about floating point numbers, both decimal and binary, so that you fully understand the implications of choosing one over the other.

to compare double and decimal should I cast double to decimal or decimal to double?

It depends on the data. Decimal has greater precision; double has greater range. If the double could be outside the decimal range, you should cast the decimal to double (or indeed you could write a method that returns a result without casting, if the double value is outside the decimal range; see example below).

In this case, it seems unlikely that the double value would be outside the decimal range, so (especially since you're working with price data) you should cast the double to decimal.

Example (could use some logic to handle NaN values):

private static int Compare(double d, decimal m)
{
    // Note: NaN is not handled here (see the note above).
    // decimal.MinValue/MaxValue round outward when cast to double, so use
    // <= / >= to avoid an OverflowException when d lands exactly on a bound.
    const double decimalMin = (double)decimal.MinValue;
    const double decimalMax = (double)decimal.MaxValue;
    if (d <= decimalMin) return -1;  // below anything a decimal can hold
    if (d >= decimalMax) return 1;   // above anything a decimal can hold
    return ((decimal)d).CompareTo(m);
}
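
A quick usage sketch (hypothetical values, assuming the method above lives in the same class):

double hugeDouble = 1.0e30;                                // outside decimal's range
Console.WriteLine(Compare(hugeDouble, decimal.MaxValue));  // 1: larger than any decimal
Console.WriteLine(Compare(19.99, 19.99m));                 // 0: 19.99 survives the cast to decimal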

Decimal vs. Double for small currency numbers that only get stored, not manipulated

Generally, when dealing with an existing data source, just follow what's already used. In this case, the SQL Server .NET data reader maps money to decimal. This means no awkward casting and nothing wasted in transmission.
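
For example, reading a money column straight into a decimal with ADO.NET might look like the sketch below (the connection string, table, and column names are hypothetical):

using System;
using Microsoft.Data.SqlClient;  // or System.Data.SqlClient on older stacks

// Hypothetical Orders table with a money column named TotalPrice.
using var connection = new SqlConnection("Server=.;Database=Shop;Integrated Security=true");
connection.Open();

using var command = new SqlCommand("SELECT TotalPrice FROM Orders", connection);
using var reader = command.ExecuteReader();

while (reader.Read())
{
    // SQL Server money maps to System.Decimal, so no casting is needed.
    decimal totalPrice = reader.GetDecimal(0);
    Console.WriteLine(totalPrice);
}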

TomTom's comment points out the case where double can give you the performance you need with acceptable precision, but unless you're dealing with millions of calculations or are starved for storage, there's no reason not to use decimal for financial data.


