Why does str(float) return more digits in Python 3 than Python 2?

There's no PEP for this change; there's an issue in the bug tracker and an associated discussion on the Python developers mailing list. While I was responsible for proposing and implementing the change, I can't claim it was my idea: it arose during conversations with Guido at EuroPython 2010.

Some more details: as already mentioned in the comments, Python 3.1 introduced a new algorithm for the string repr of a float (later backported to the Python 2 series, so that it also appears in Python 2.7). As a result of this new algorithm, a "short" decimal number typed in at the prompt has a correspondingly short representation. That eliminated one of the existing reasons for the difference between str and repr, and made it possible to use the same algorithm for both. So for Python 3.2, following the discussion linked to above, str and repr were made identical.

As to why: it makes the language a little bit smaller and cleaner, and it removes the rather arbitrary choice of 12 significant digits when formatting the string. (The choice of 17 digits used for repr in Python versions prior to 2.7 is far from arbitrary, by the way: two distinct IEEE 754 binary64 floats will have distinct representations when converted to decimal with 17 significant digits, and 17 is the smallest integer with this property.)
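
As a quick illustration of both points (a small sketch, run under Python 3.2 or later, where str and repr are identical):

>>> x = 0.1
>>> str(x), repr(x)                 # identical since Python 3.2
('0.1', '0.1')
>>> format(x, '.17g')               # 17 significant digits always round-trip
'0.10000000000000001'
>>> float(format(x, '.17g')) == x
True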

Apart from simplicity, there are some less obvious benefits. One aspect of the repr versus str distinction that's been confusing for users in the past is the fact that repr automatically gets used in containers. So for example in Python 2.7:

>>> x = 1.4 * 1.5
>>> print x
2.1
>>> print [x]
[2.0999999999999996]

I'm sure there's at least one StackOverflow question asking about this phenomenon somewhere: here is one such, and another more recent one. With the simplification introduced in Python 3.2, we get this instead:

>>> x = 1.4 * 1.5
>>> print(x)
2.0999999999999996
>>> print([x])
[2.0999999999999996]

which is at least more consistent.

If you do want to be able to hide imprecisions, the right way to do it remains the same: use string formatting for precise control of the output format.

>>> print("{:.12g}".format(x))
2.1
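
The same format can also be written as an f-string in Python 3.6+ (not part of the original answer, just an equivalent spelling):

>>> print(f"{x:.12g}")
2.1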

I hope that explains some of the reasoning behind the change. I'm not going to argue that it's universally beneficial: as you point out, the old str had the convenient side-effect of hiding imprecisions. But in my opinion (of course, I'm biased), it does help eliminate a few surprises from the language.

Not able to get the exact str(float) result of Python 2 in Python 3

The problem comes from the fact that, just as some fractions are not exactly representable in decimal form (for example 1/3 = 0.33333333…), some fractions are not exactly representable in binary form (for example 1/3 = 0.01010101… in binary; the same is true of 0.4, which is why 1.5 * 0.4 cannot be exactly 0.6 as a float). If you want results that behave like decimal numbers, look at the decimal module (available in both Python 2.7 and Python 3). You could have something like this:

from decimal import Decimal

# Build the operands from strings so they are exact decimal values,
# not binary floats that have already been rounded.
a = Decimal('1.5') * Decimal('0.4')
result = str(a)
print(result)  # prints 0.60

Python 3 Float Decimal Points/Precision

In a word, you can't.

3.65 cannot be represented exactly as a float. The number that you're getting is the nearest number to 3.65 that has an exact float representation.

The difference between (older?) Python 2 and 3 is purely due to the default formatting.

I am seeing the following both in Python 2.7.3 and 3.3.0:

In [1]: 3.65
Out[1]: 3.65

In [2]: '%.20f' % 3.65
Out[2]: '3.64999999999999991118'

For an exact decimal datatype, see decimal.Decimal.
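For example, a Decimal built from a string keeps the value 3.65 exactly, and quantize gives explicit control over rounding (a small sketch; the variable name is just illustrative):

>>> from decimal import Decimal
>>> price = Decimal('3.65')           # exact, no binary rounding
>>> price * 2
Decimal('7.30')
>>> price.quantize(Decimal('0.01'))   # round to two decimal places
Decimal('3.65')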

Why does repr(float) return more digits on Google App Engine than others

The value of sys.float_repr_style is 'legacy', an option that is set at build time by GAE and can't be changed.

This "legacy" repr algorithm is the one Python used before 2.7: it computes 17 significant digits and then bases the output on those digits (stripping trailing zeros where appropriate).
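
You can check which algorithm your interpreter uses with sys.float_repr_style, and roughly reproduce the legacy output with the %.17g format (a sketch only; the real legacy algorithm also strips trailing zeros):

>>> import sys
>>> sys.float_repr_style    # 'short' on a normal CPython build, 'legacy' on GAE
'short'
>>> repr(0.1)               # 'short': shortest string that round-trips
'0.1'
>>> '%.17g' % 0.1           # roughly what the 'legacy' algorithm produces
'0.10000000000000001'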

Last decimal digit precision changes in different call of same generator function [python]

It is not the order in which you called your generator, but the way you are presenting the numbers that caused this change in output.

You are printing a list object the second time, and that's a container. Container contents are printed using repr(), whereas before you used print on the float directly, which uses str().

The repr() and str() output of floating-point numbers simply differs:

>>> lst = [0, 0.1, 0.2, 0.30000000000000004, 0.4, 0.5, 0.6, 0.7, 0.7999999999999999, 0.8999999999999999, 0.9999999999999999]
>>> print lst
[0, 0.1, 0.2, 0.30000000000000004, 0.4, 0.5, 0.6, 0.7, 0.7999999999999999, 0.8999999999999999, 0.9999999999999999]
>>> for elem in lst:
...     print elem
...
0
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.0
>>> str(lst[3])
'0.3'
>>> repr(lst[3])
'0.30000000000000004'

repr() on a float produces a result that'll let you reproduce the same value accurately. str() rounds the floating point number for presentation.
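
(That is Python 2 behaviour; under Python 3.2 and later, as explained in the first answer above, the two agree:)

>>> str(0.30000000000000004), repr(0.30000000000000004)   # Python 3
('0.30000000000000004', '0.30000000000000004')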

How to display a float with two decimal places?

You could use the string formatting operator for that:

>>> '%.2f' % 1.234
'1.23'
>>> '%.2f' % 5.0
'5.00'

The result of the operator is a string, so you can store it in a variable, print it, and so on.
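
The newer str.format and f-string spellings (not in the original answer) give the same result, and round() is an option when you want a float rather than a string:

>>> '{:.2f}'.format(1.234)
'1.23'
>>> f'{5.0:.2f}'       # Python 3.6+
'5.00'
>>> round(1.234, 2)    # rounds the float itself; the result is a float, not a string
1.23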

Decimal(str(my_float)) seems to be better than Decimal(my_float), what's going on?

The exact value of the float 1.1 is 1.100000000000000088817841970012523233890533447265625; Python isn't somehow keeping track of the original string. When Python 2 stringifies it with str, it rounds to 12 significant digits, which is why Decimal(str(my_float)) merely looks cleaner: the extra information has already been thrown away.

>>> import decimal
>>> x = 1.0/9
>>> print decimal.Decimal(x)   # prints the exact stored value
0.111111111111111104943205418749130330979824066162109375
>>> print x                    # Python 2 str: rounded to 12 significant digits
0.111111111111

Even with repr, Python (2.7 and later) uses the shortest string that rounds back to the original float when parsed with float.

>>> print repr(x)
0.1111111111111111
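
In other words, the shortest repr still round-trips exactly (a quick check):

>>> float(repr(x)) == x
True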

If you want to parse JSON and get Decimal instances instead of float, you can pass a parse_float argument to the load(s) function:

>>> json.loads('{"foo": 1.234567890123456789}', parse_float=decimal.Decimal)
{u'foo': Decimal('1.234567890123456789')}

The above call causes decimal.Decimal to be called to parse numbers in the JSON string, bypassing the rounding that would occur with intermediate float or str calls. You'll get exactly the number specified in the JSON.

Note that the API distinguishes between the functions used to parse things that look like floats, things that look like ints, and things that look like Infinity, -Infinity, or NaN. This can be a bit inconvenient if you want all 3 categories to be handled the same way:

>>> json.loads('{"foo": 1.234567890123456789}',
... parse_float=decimal.Decimal,
... parse_int=decimal.Decimal,
... parse_constant=decimal.Decimal)
{u'foo': Decimal('1.234567890123456789')}

