.Net Core 3 Yields Different Floating Point Results from Version 2.2

.NET Core 3.0 introduced many floating point parsing and formatting improvements for IEEE floating point compliance. One of them is IEEE 754-2008 formatting compliance.

Before .NET Core 3.0, ToString() internally limited precision to "just" 15 places, producing a string that couldn't be parsed back to the original value. The question's values differ by a single bit.

In both .NET Framework 4.7 and .NET Core 3.0, the actual bytes remain the same. In both cases, calling

BitConverter.GetBytes(d*d*d)

produces

85, 14, 45, 178, 157, 111, 27, 64

On the other hand, BitConverter.GetBytes(6.859) produces:

86, 14, 45, 178, 157, 111, 27, 64

Even in .NET Core 3.0, parsing "6.859" produces the second byte sequence:

BitConverter.GetBytes(double.Parse("6.859"))

This is a single-bit difference. The old behavior produced a string that couldn't be parsed back to the original value.
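The single-bit difference is easy to see by comparing the raw bytes. A minimal sketch, assuming the question's input was d = 1.9 (1.9 cubed is exactly 6.859 in decimal arithmetic, matching the byte sequences above):

```csharp
using System;

// Assumption: d = 1.9, since 1.9^3 == 6.859 in decimal arithmetic.
double d = 1.9;

byte[] computed = BitConverter.GetBytes(d * d * d); // 85, 14, 45, 178, 157, 111, 27, 64
byte[] literal  = BitConverter.GetBytes(6.859);     // 86, 14, 45, 178, 157, 111, 27, 64

// Little-endian layout: the first byte holds the lowest mantissa bits,
// so the two doubles differ by exactly one unit in the last place (ULP).
Console.WriteLine(string.Join(", ", computed));
Console.WriteLine(string.Join(", ", literal));
```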

The difference is explained by this change:

ToString(), ToString("G"), and ToString("R") will now return the shortest roundtrippable string. This ensures that users end up with something that just works by default.

That's why we should always specify a precision when dealing with floating point numbers. There were improvements in this area too:

For the "G" format specifier that takes a precision (e.g. G3), the precision specifier is now always respected. For double with precisions less than 15 (inclusive) and for float with precisions less than 6 (inclusive) this means you get the same string as before. For precisions greater than that, you will get up to that many significant digits

Using ToString("G15") produces 6.859, while ToString("G16") produces 6.858999999999999, which has 16 significant digits.

That's a reminder that we should always specify a precision when working with floating point numbers, whether comparing or formatting them.
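A short sketch of the precision specifiers, again assuming the question's input was d = 1.9:

```csharp
using System;
using System.Globalization;

double d = 1.9; // assumption: the question's input (1.9^3 == 6.859 in decimal)
double cube = d * d * d;

// 15 significant digits hides the last-bit error; 16 exposes it.
Console.WriteLine(cube.ToString("G15", CultureInfo.InvariantCulture)); // 6.859
Console.WriteLine(cube.ToString("G16", CultureInfo.InvariantCulture)); // 6.858999999999999

// On .NET Core 3.0+, the default ToString() returns the shortest string
// that round-trips, which for this value needs all 16 significant digits.
Console.WriteLine(cube.ToString(CultureInfo.InvariantCulture));
```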

Why does Double.ToString(F1) behave differently in .NET Core 3.1?

There is a GitHub issue describing your exact problem, which can be found here:
https://github.com/dotnet/runtime/issues/1640

That issue is marked as closed because the root cause is a problem with Math.Round:
https://github.com/dotnet/runtime/issues/1643

So yes, this is unexpected behavior caused by a bug in .NET. The bug has a milestone of Future, which means it is not expected to be fixed in an upcoming release.

ToString has a different behavior between .NET Framework 4.6.2 and .NET Core 3.1

Many changes to floating-point were made in .NET Core 3.0, which Tanner lists in this article.

I think the one that concerns us is:

ToString(), ToString("G"), and ToString("R") will now return the shortest roundtrippable string. This ensures that users end up with something that just works by default. An example of where it was problematic was Math.PI.ToString() where the string that was previously being returned (for ToString() and ToString("G")) was 3.14159265358979; instead, it should have returned 3.1415926535897931. The previous result, when parsed, returned a value which was internally off by 7 ULP (units in last place) from the actual value of Math.PI. This meant that it was very easy for users to get into a scenario where they would accidentally lose some precision on a floating-point value when they needed to serialize/deserialize it.

So your value of 4.0584789241077042 is now formatted as the shortest string that round-trips. In other words, even though the resulting string is missing the last decimal digit ("4.058478924107704"), parsing it back to a double still gives 4.0584789241077042, because the closest value to 4.058478924107704 that can be represented by an IEEE double is 4.0584789241077042.

using System;
using System.Globalization;

double original = 4.0584789241077042;
Console.WriteLine("Original: {0:G17}", original);
// Original: 4.0584789241077042

string s = original.ToString("R", CultureInfo.InvariantCulture);
Console.WriteLine("Round-trippable: {0}", s);
// Round-trippable: 4.058478924107704

double parsed = double.Parse(s, CultureInfo.InvariantCulture);
Console.WriteLine("Parsed: {0:G17}", parsed);
// Parsed: 4.0584789241077042

Console.WriteLine("Original == Parsed: {0}", original == parsed);
// Original == Parsed: True

How to avoid -0 as double.ToString() result after porting from .NET Framework 4.7.2 to .NET 5.0?

From the linked post, it sounds like the change was made to be more compliant with IEEE 754, which all languages try to follow in order to minimize confusion - or at least to standardize on a particular form of confusion.

It's always possible that they will revert such changes if they prove too backwards-incompatible and break people's programs en masse, but since that hasn't happened by this point (the behavior is already in .NET Core 3.1, which is an LTS release, and in .NET 5) I don't think it will be. In any case, this is the wrong place to ask whether Microsoft is willing to change it - file an issue in dotnet/runtime and ask Microsoft directly.

The workaround needs to be applied everywhere, but it is simple: replace the exact string "-0" with "0" (replacing the substring "-0" could invert the sign of -0.5, for example).
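A minimal sketch of that workaround; FormatNonNegativeZero is a hypothetical helper name, not a framework API:

```csharp
using System;
using System.Globalization;

Console.WriteLine(FormatNonNegativeZero(-0.0)); // 0
Console.WriteLine(FormatNonNegativeZero(-0.5)); // -0.5

// Hypothetical helper: formats a double but never emits "-0".
static string FormatNonNegativeZero(double value)
{
    string s = value.ToString(CultureInfo.InvariantCulture);
    // Compare the whole string rather than calling Replace("-0", "0"),
    // which would mangle values such as -0.5.
    return s == "-0" ? "0" : s;
}
```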

BigFloat calculations produce different results on various machines

It could be the culture settings of these machines.

Set the culture explicitly in your source code and try again.

CultureInfo.CurrentCulture = CultureInfo.GetCultureInfo("en-US"); // your culture
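For instance, the same text parses differently depending on the culture's decimal separator, so pinning a culture (or using CultureInfo.InvariantCulture for machine-readable data) makes parsing and formatting reproducible across machines:

```csharp
using System;
using System.Globalization;

// German uses ',' as the decimal separator and '.' as the group separator.
double de = double.Parse("1,5", CultureInfo.GetCultureInfo("de-DE"));
double us = double.Parse("1.5", CultureInfo.GetCultureInfo("en-US"));

Console.WriteLine(de == us); // True: both strings mean one and a half
```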

Why are the TimeSpan values different between netcoreapp2.1 and netcoreapp3.1?

Between .NET Core 2.1 and 3.1 (circa end of 2019), the behaviour of System.TimeSpan changed so that the factory methods no longer round to the nearest 1 ms.

There is a GitHub issue about this here.

Beware: the current documentation has not been updated to reflect the change in behaviour; the output of the example code for System.TimeSpan.FromSeconds is now incorrect on the site.

In .NET Core 2.1 and the .NET Framework, the TimeSpan created is rounded to the nearest millisecond:

FromSeconds          TimeSpan
-----------          --------
0.001                00:00:00.0010000
0.0015               00:00:00.0020000
12.3456              00:00:12.3460000
123456.7898          1.10:17:36.7900000
1234567898.7654      14288.23:31:38.7650000
1                    00:00:01
60                   00:01:00
3600                 01:00:00
86400                1.00:00:00
1801220.2            20.20:20:20.2000000

In .NET Core 3.1 it is not rounded to the nearest millisecond (see, for example, 12.3456):

FromSeconds          TimeSpan
-----------          --------
0.001                00:00:00.0010000
0.0015               00:00:00.0015000
12.3456              00:00:12.3455999
123456.7898          1.10:17:36.7898000
1234567898.7654      14288.23:31:38.7654000
1                    00:00:01
60                   00:01:00
3600                 01:00:00
86400                1.00:00:00
1801220.2            20.20:20:20.2000000
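A quick sketch of the behavioural difference (the printed value depends on the runtime you target):

```csharp
using System;

// 12.3456 is not exactly representable as a double; the nearest double is
// slightly below it, and .NET Core 3.0+ no longer rounds to the nearest 1 ms.
TimeSpan t = TimeSpan.FromSeconds(12.3456);

Console.WriteLine(t);
// .NET Core 2.1 / .NET Framework: 00:00:12.3460000
// .NET Core 3.1:                  00:00:12.3455999
```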

Why do I need to do floating point arithmetic with Math.Floor

The first line uses floating point maths, which is inexact: 128.766 * 1000 might evaluate to 128765.99999999999 or something similar.

Math.Floor rounds this down to become 128765.

However, in the second line you converted the result to decimal before calling Math.Floor. Converting to decimal removes this inaccuracy, since decimal is a 128-bit base-10 type with more precision (but a smaller range) than double. See here for more info. When 128765.99999999999 is converted to decimal, it becomes 128766.
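A sketch of both lines side by side:

```csharp
using System;

// double arithmetic: 128.766 * 1000 lands just below 128766,
// so Floor drops all the way down to 128765.
double product = 128.766 * 1000; // ~128765.99999999999
Console.WriteLine(Math.Floor(product)); // 128765

// Converting the double to decimal first rounds it to 15 significant digits
// (per the Decimal(Double) conversion), snapping it back to exactly 128766.
decimal asDecimal = (decimal)product;
Console.WriteLine(Math.Floor(asDecimal)); // 128766
```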


