Conversion of a Decimal to Double Number in C# Results in a Difference

Interesting - although I generally don't trust normal ways of writing out floating point values when you're interested in the exact results.

Here's a slightly simpler demonstration, using DoubleConverter.cs which I've used a few times before.

using System;

class Test
{
    static void Main()
    {
        decimal dcm1 = 8224055000.0000000000m;
        decimal dcm2 = 8224055000m;
        double dbl1 = (double) dcm1;
        double dbl2 = (double) dcm2;

        Console.WriteLine(DoubleConverter.ToExactString(dbl1));
        Console.WriteLine(DoubleConverter.ToExactString(dbl2));
    }
}

Results:

8224055000.00000095367431640625
8224055000

Now the question is why the original value (8224055000.0000000000), which is an integer - and exactly representable as a double - ends up with extra data in it. I strongly suspect it's due to quirks in the algorithm used to convert from decimal to double, but it's unfortunate.

It also violates section 6.2.1 of the C# spec:

For a conversion from decimal to float or double, the decimal value is rounded to the
nearest double or float value. While this conversion may lose precision, it never causes
an exception to be thrown.

The "nearest double value" is clearly just 8224055000... so this is a bug IMO. It's not one I'd expect to get fixed any time soon though. (It gives the same results in .NET 4.0b1 by the way.)

To avoid the bug, you probably want to normalize the decimal value first, effectively "removing" the extra 0s after the decimal point. This is somewhat tricky as it involves 96-bit integer arithmetic - the .NET 4.0 BigInteger class may well make it easier, but that may not be an option for you.
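The original answer doesn't show code for this, but here is one possible sketch using decimal.GetBits and the .NET 4.0 BigInteger class. The DecimalNormalizer/Normalize names are mine, and this is illustrative rather than a production-ready implementation:

using System;
using System.Numerics;

static class DecimalNormalizer
{
    // Strips trailing zeros by dividing the 96-bit mantissa by 10
    // while the scale (number of digits after the decimal point) allows it.
    public static decimal Normalize(decimal value)
    {
        int[] bits = decimal.GetBits(value);
        byte scale = (byte)((bits[3] >> 16) & 0xFF);
        bool negative = (bits[3] & unchecked((int)0x80000000)) != 0;

        // Reassemble the 96-bit unsigned mantissa from its three 32-bit parts.
        BigInteger mantissa = (new BigInteger((uint)bits[2]) << 64)
                            + (new BigInteger((uint)bits[1]) << 32)
                            + (uint)bits[0];

        while (scale > 0 && mantissa % 10 == 0)
        {
            mantissa /= 10;
            scale--;
        }

        // Split the mantissa back into lo/mid/hi and rebuild the decimal.
        int lo  = (int)(uint)(mantissa & 0xFFFFFFFF);
        int mid = (int)(uint)((mantissa >> 32) & 0xFFFFFFFF);
        int hi  = (int)(uint)((mantissa >> 64) & 0xFFFFFFFF);
        return new decimal(lo, mid, hi, negative, scale);
    }
}

If that works as intended, (double) DecimalNormalizer.Normalize(dcm1) should take the same path as dcm2 in the demonstration above and produce exactly 8224055000.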

Differences in double and decimal calculations in C#

280.585 and 280.5 are both exactly representable as short decimal fractions, as is their difference.

Assuming double is represented as IEEE 754 64-bit binary floating point, the closest double to 280.585 is 280.58499999999997953636921010911464691162109375. 280.5 is exactly representable as a double. Their difference is 0.08499999999997953636921010911464691162109375.
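A short illustration (this snippet is mine, not part of the original answer):

double dd = 280.585 - 280.5;     // double arithmetic
decimal dm = 280.585m - 280.5m;  // decimal arithmetic

Console.WriteLine(dd == 0.085);  // False - the double result carries the representation error
Console.WriteLine(dm == 0.085m); // True  - the decimal result is exactly 0.085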

In the case of the 2+2=4 calculation, all the numbers involved are small integers that are exactly representable in both double and decimal, so both should give the exact answer.

Convert.ToDouble(decimal) unexpected loss of precision

This, I believe, is a bug in Convert.ToDouble(decimal d). The C# spec says the conversion should give the closest double, but here it clearly doesn't.
Looking at the bits, we can see that the result is off by exactly one representable double.

double d = 1478110092.9070129;
decimal dc = 1478110092.9070129M;
double dcd = Convert.ToDouble(dc);
long d_bits = BitConverter.DoubleToInt64Bits(d); // 4743986451068882048
long dcd_bits = BitConverter.DoubleToInt64Bits(dcd); // 4743986451068882047
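To make the "off by one" point explicit (these two lines are mine, not from the original answer), the bit patterns differ by exactly 1, so dcd is the double immediately below d:

Console.WriteLine(d_bits - dcd_bits);                                  // 1
Console.WriteLine(BitConverter.Int64BitsToDouble(d_bits - 1) == dcd);  // True - dcd is the adjacent double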

See also this possible duplicate:

Conversion of a decimal to double number in C# results in a difference

C# double to decimal precision loss

138630.78380386264 is not exactly representable in double precision. The closest double-precision number is 138630.783803862635977566242218017578125, which agrees with your findings.

You ask why the conversion to decimal does not contain more precision. The documentation for Convert.ToDecimal() has the answer:

The Decimal value returned by this method contains a maximum of 15 significant digits. If the value parameter contains more than 15 significant digits, it is rounded using rounding to nearest. The following example illustrates how the Convert.ToDecimal(Double) method uses rounding to nearest to return a Decimal value with 15 significant digits.

The double value, rounded to nearest at 15 significant figures, is 138630.783803863 - exactly as you show above.
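A minimal check (mine, not from the original answer):

double d = 138630.78380386264;
decimal m = Convert.ToDecimal(d);

Console.WriteLine(m);  // 138630.783803863 - only 15 significant digits survive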

When is it beneficial to convert from float to double via decimal

As strange as it may seem, conversion via decimal (with Convert.ToDecimal(float)) may be beneficial in some circumstances.

It will improve the precision if it is known that the original numbers were provided by users in decimal representation and users typed no more than 7 significant digits.

To prove it I wrote a small program (see below). Here is the explanation:

As you recall from the OP this is the sequence of steps:

  1. Application B has doubles coming from two sources:
     (a) results of calculations; (b) converted from user-typed decimal numbers.
  2. Application B writes its doubles as floats into the file - effectively
     doing binary rounding from 52 binary digits (IEEE 754 double) to 23 binary digits (IEEE 754 single).
  3. Our application reads that float and converts it to a double in one of two ways:

     (a) direct assignment to double - effectively padding a 23-bit number to a 52-bit number with binary zeros (29 zero bits);

     (b) via conversion to decimal with (double)Convert.ToDecimal(float).

As Ben Voigt correctly noted, Convert.ToDecimal(float) (see the Remarks section on MSDN) rounds the result to 7 significant decimal digits. Wikipedia's IEEE 754 article on the single-precision format says the precision is 24 bits - equivalent to log10(pow(2,24)) ≈ 7.225 decimal digits. So, when we do the conversion to decimal, we lose that 0.225 of a decimal digit.

So, in the generic case, when there is no additional information about the doubles, the conversion to decimal will in most cases make us lose some precision.

But (!) if there is the additional knowledge that originally (before being written to the file as floats) the doubles were decimals with no more than 7 significant digits, then the rounding errors introduced by the decimal rounding (step 3(b) above) will compensate for the rounding errors introduced by the binary rounding (in step 2 above).
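Here is a small illustration of that compensation effect (the value 0.1234567 is my own example, not from the original answer):

double typed = 0.1234567;           // a value the user typed with 7 significant digits
float stored = (float) typed;       // written to the file as a float (binary rounding)

double direct     = stored;                             // padded back to double with zero bits
double viaDecimal = (double) Convert.ToDecimal(stored); // rounded to 7 decimal digits first

Console.WriteLine(direct == typed);     // False - the zero-padded bits don't reproduce the original
Console.WriteLine(viaDecimal == typed); // True  - rounding to 7 digits recovers 0.1234567 exactly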

In the program, to prove the statement for the generic case, I randomly generate doubles, cast each one to float, then convert it back to double (a) directly and (b) via decimal, and measure the distance between the original double and each of double (a) and double (b). If double (a) is closer to the original than double (b), I increment the pro-direct-conversion counter; in the opposite case I increment the pro-via-decimal counter. I do this in a loop of one million iterations and then print the ratio of the pro-direct counter to the pro-via-decimal counter. The ratio turns out to be about 3.7, i.e. in roughly 4 cases out of 5 the conversion via decimal spoils the number.

To prove the case when the numbers are typed in by users, I used the same program with the only change that I apply Math.Round(originalDouble, N) to the doubles. Because I get the original doubles from the Random class, they are all between 0 and 1, so the number of significant digits coincides with the number of digits after the decimal point. I ran this method in a loop for N from 1 to 15 significant digits typed by the user and plotted the result: the dependency of (how many times direct conversion is better than conversion via decimal) on the number of significant digits typed by the user.

As you can see, for 1 to 7 typed digits the conversion via decimal is always better than the direct conversion. To be exact, out of a million random numbers only 1 or 2 are not improved by the conversion via decimal.

Here is the code used for the comparison:

private static void CompareWhichIsBetter(int numTypedDigits)
{
    Console.WriteLine("Number of typed digits: " + numTypedDigits);
    Random rnd = new Random(DateTime.Now.Millisecond);
    int countDecimalIsBetter = 0;
    int countDirectIsBetter = 0;
    int countEqual = 0;

    for (int i = 0; i < 1000000; i++)
    {
        double origDouble = rnd.NextDouble();
        //Use the line below for the user-typed-in-numbers case.
        //double origDouble = Math.Round(rnd.NextDouble(), numTypedDigits);

        float x = (float)origDouble;
        double viaFloatAndDecimal = (double)Convert.ToDecimal(x);
        double viaFloat = x;

        double diff1 = Math.Abs(origDouble - viaFloatAndDecimal);
        double diff2 = Math.Abs(origDouble - viaFloat);

        if (diff1 < diff2)
            countDecimalIsBetter++;
        else if (diff1 > diff2)
            countDirectIsBetter++;
        else
            countEqual++;
    }

    Console.WriteLine("Decimal better: " + countDecimalIsBetter);
    Console.WriteLine("Direct better: " + countDirectIsBetter);
    Console.WriteLine("Equal: " + countEqual);
    Console.WriteLine("Betterness of direct conversion: " + (double)countDirectIsBetter / countDecimalIsBetter);
    Console.WriteLine("Betterness of conv. via decimal: " + (double)countDecimalIsBetter / countDirectIsBetter);
    Console.WriteLine();
}
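The answer doesn't show the driver loop, but based on the description above it would look something like this (assuming the Math.Round line inside the method is used in place of the plain NextDouble line):

static void Main()
{
    // Compare the two conversions for 1 to 15 significant digits typed by the user.
    for (int digits = 1; digits <= 15; digits++)
    {
        CompareWhichIsBetter(digits);
    }
}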

How do I display a decimal value to 2 decimal places?

decimalVar.ToString("#.##"); // returns ".5" when decimalVar == 0.5m

or

decimalVar.ToString("0.##"); // returns "0.5"  when decimalVar == 0.5m

or

decimalVar.ToString("0.00"); // returns "0.50"  when decimalVar == 0.5m

