How Does Python Manage Int and Long

How does Python manage int and long?

int and long were "unified" a few versions back (PEP 237, starting with Python 2.2). Before that, it was possible to overflow an int through math ops.

3.x has further advanced this by eliminating long altogether and only having int.

  • Python 2: sys.maxint contains the maximum value a plain Python int can hold.

    • On a 64-bit Python 2.7 build, an int object itself takes 24 bytes; check with sys.getsizeof().
  • Python 3: sys.maxsize is not a limit on int values (Python 3 ints are arbitrary precision). It is the largest value a Py_ssize_t can hold, which in practice only caps how large an int object's internal representation can grow, as illustrated below.

    • That cap is on the order of gigabytes on 32-bit builds and exabytes on 64-bit builds.
    • An int that size would have a value of roughly 2 to the power of (8 × sys.maxsize), so for all practical purposes int is limited only by available memory.
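
For example, on a 64-bit CPython 3 build (a quick sketch; the exact byte counts from sys.getsizeof() vary by version and platform):

>>> import sys
>>> sys.getsizeof(1)          # a small int: fixed object header plus one internal digit
28
>>> sys.getsizeof(2 ** 1000)  # a bigger value simply uses a bigger object
160
>>> 2 ** 1000 > sys.maxsize   # sys.maxsize does not cap the value of an int
True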

Difference between int() and long()

int: Integers; equivalent to C longs in Python 2.x, unlimited length in Python 3.x

long: Long integers of unlimited length; exists only in Python 2.x

So, in Python 3.x, you simply use int() where Python 2.x used long().
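
For example (Python 3; the exact NameError wording can differ slightly between versions):

>>> int("123456789012345678901234567890")  # int() parses values of any size
123456789012345678901234567890
>>> long(42)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'long' is not defined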

How to indicate a long integer in Python

There is no need to mark long integers the way you would in C-style languages. Python 3 integers are always as big as necessary, so just drop the L suffix.
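
Concretely, in Python 3 the old L suffix is simply a syntax error (a sketch with output trimmed; the exact error message differs across versions):

>>> 10 ** 20        # no suffix needed; the result is an ordinary, unbounded int
100000000000000000000
>>> 100000000000000000000L
SyntaxError: invalid syntax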

Handling very large numbers in Python

Python supports a "bignum" integer type which can work with arbitrarily large numbers. In Python 2.5+, this type is called long and is separate from the int type, but the interpreter will automatically use whichever is more appropriate. In Python 3.0+, the separate long type has been dropped entirely; int itself is the bignum type.

That's just an implementation detail, though: as long as you have version 2.5 or better, just perform standard math operations, and any number which exceeds the bounds of the machine-native integer will be automatically (and transparently) converted to a bignum.

You can find all the gory details in PEP 0237.
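
For instance, on a 64-bit Python 2 the switch happens as soon as a result no longer fits in the native integer (a sketch; the exact cutoff depends on your platform):

>>> 2 ** 62            # still fits in a native int
4611686018427387904
>>> 2 ** 63            # same math, silently promoted to a bignum
9223372036854775808L
>>> type(2 ** 62), type(2 ** 63)
(<type 'int'>, <type 'long'>)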

Why does this function return a long rather than an int?

Python 2 automatically converts an int to a long if the number gets too big. You can read more about this under "How does Python manage int and long?" above.

(This automatic conversion is one of the reasons Python uses more memory and is slower than, say, C/C++, but that is another discussion.)

>>> import sys
>>> x = sys.maxint # set a variable to your system's maximum integer
>>> print type(x)
<type 'int'> # type is then set to int
>>> x += 1 # if you increase it, it gets converted into long
>>> print type(x)
<type 'long'>
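
For contrast, here is the same experiment on Python 3, where there is no sys.maxint and no promotion to worry about (a sketch; sys.maxsize is just used as a conveniently large starting value):

>>> import sys
>>> x = sys.maxsize # not an int limit in Python 3, just a big number
>>> print(type(x))
<class 'int'>
>>> x += 1 # nothing to convert to; int already has unlimited range
>>> print(type(x))
<class 'int'>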

Python: Is there a way to keep an automatic conversion from int to long int from happening?

So you want to throw out the One True Way and go retro on overflows. Silly you.

There is no good upside to the C / C++ / C# / Java style of overflow. It does not reliably raise an error condition: signed overflow is undefined behavior in C and C++ (unsigned arithmetic silently wraps modulo 2^n), and it is a well-known security risk. Why do you want this?

The Python method of seamlessly overflowing to a long is the better way. I believe this is the same behavior adopted by Perl 6.

You can use the decimal module to get finite bounds and genuine overflow behavior:

>>> from decimal import *
>>> from sys import maxint
>>> getcontext()
Context(prec=28, rounding=ROUND_HALF_EVEN, Emin=-999999999, Emax=999999999, capitals=1,
flags=[], traps=[DivisionByZero, Overflow, InvalidOperation])

>>> d=Decimal(maxint)
>>> d
Decimal('9223372036854775807')
>>> e=Decimal(maxint)
>>> f=d**e
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/decimal.py", line 2225, in __pow__
    ans = ans._fix(context)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/decimal.py", line 1589, in _fix
    return context._raise_error(Overflow, 'above Emax', self._sign)
  File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/decimal.py", line 3680, in _raise_error
    raise error(explanation)
decimal.Overflow: above Emax

You can set your precision and boundary conditions with Decimal contexts, and the overflow is nearly immediate. You can set what you trap for. You can set your max and min. Really, how does it get better than this? (I don't know about relative speed, to be honest, but I suspect it is faster than NumPy yet obviously slower than native ints...)
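
For example, you can tighten the bounds yourself and decide which conditions trap (a sketch using the standard decimal context settings; the limits chosen here are arbitrary, and the exact traceback text depends on the decimal implementation):

>>> from decimal import getcontext, Decimal, Overflow
>>> ctx = getcontext()
>>> ctx.prec = 10                # at most 10 significant digits
>>> ctx.Emax = 9                 # overflow once a result reaches 1E+10
>>> ctx.traps[Overflow] = True   # already the default; shown for explicitness
>>> Decimal(99999) * Decimal(99999)
Decimal('9999800001')
>>> Decimal(10) ** 12
Traceback (most recent call last):
  ...
decimal.Overflow: above Emax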

For your specific issue of image processing, this sounds like a natural application for some form of saturation arithmetic. If you are hitting overflows in 32-bit arithmetic, you might also check operands along the way in the obvious cases: pow, **, *. Or you could overload the arithmetic operators and check for the conditions you don't want; a sketch of that idea follows below.
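
Here is a minimal sketch of the overloaded-operator approach: a hypothetical SaturatingInt wrapper (not a standard type) that clamps the results of +, * and ** to signed 32-bit bounds instead of letting them promote to a bignum:

INT32_MIN, INT32_MAX = -(2 ** 31), 2 ** 31 - 1

class SaturatingInt(object):
    """Wraps an integer and clamps every arithmetic result to 32-bit bounds."""

    def __init__(self, value):
        self.value = max(INT32_MIN, min(INT32_MAX, int(value)))

    def _clamp(self, result):
        return SaturatingInt(max(INT32_MIN, min(INT32_MAX, result)))

    def __add__(self, other):
        return self._clamp(self.value + int(other))

    def __mul__(self, other):
        return self._clamp(self.value * int(other))

    def __pow__(self, other):
        return self._clamp(self.value ** int(other))

    def __int__(self):
        return self.value

    def __repr__(self):
        return 'SaturatingInt(%d)' % self.value

print(SaturatingInt(2000000000) + 2000000000)  # SaturatingInt(2147483647), saturated
print(SaturatingInt(3) ** 40)                  # SaturatingInt(2147483647) instead of a bignum

This is one way to get the saturation behavior mentioned above without writing an extension.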

If Decimal, saturation, or overloaded operators don't work -- you can write an extension. Heaven help you if you want to throw out the Python way of overflow to go retro...

Python integer division when integers are very long

In Python 3:

// is used for integer (floor) division; this gives you the exact result.

print(N//5)

output:

37976099630974882181694214460891074195560794810598715548942859588791116710647901151257847105814835418037954576749411059998307979336216414740

whereas / is floating-point division, so with integers this large it will introduce rounding errors (a float only keeps about 15-17 significant digits).
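
A smaller, self-contained comparison (a sketch with an arbitrary value, not the N from the question above):

>>> N = 10 ** 40 + 7
>>> N // 5          # exact integer result
2000000000000000000000000000000000000001
>>> N / 5           # a float keeps only ~15-17 significant digits
2e+39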


