Exponentials in Python: x**y vs math.pow(x, y)

Using the power operator ** will be faster as it won’t have the overhead of a function call. You can see this if you disassemble the Python code:

>>> dis.dis('7. ** i')
  1           0 LOAD_CONST               0 (7.0)
              3 LOAD_NAME                0 (i)
              6 BINARY_POWER
              7 RETURN_VALUE
>>> dis.dis('pow(7., i)')
  1           0 LOAD_NAME                0 (pow)
              3 LOAD_CONST               0 (7.0)
              6 LOAD_NAME                1 (i)
              9 CALL_FUNCTION            2 (2 positional, 0 keyword pair)
             12 RETURN_VALUE
>>> dis.dis('math.pow(7, i)')
  1           0 LOAD_NAME                0 (math)
              3 LOAD_ATTR                1 (pow)
              6 LOAD_CONST               0 (7)
              9 LOAD_NAME                2 (i)
             12 CALL_FUNCTION            2 (2 positional, 0 keyword pair)
             15 RETURN_VALUE

Note that I’m using a variable i as the exponent here because constant expressions like 7. ** 5 are actually evaluated at compile time.
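For instance, disassembling such a constant expression shows the folded result being loaded directly (output from a CPython 3.4-era interpreter; the exact constant index varies by version):

>>> dis.dis('7. ** 5')
  1           0 LOAD_CONST               2 (16807.0)
              3 RETURN_VALUE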

Now, in practice, this difference does not matter that much, as you can see when timing it:

>>> from timeit import timeit
>>> timeit('7. ** i', setup='i = 5')
0.2894785532627111
>>> timeit('pow(7., i)', setup='i = 5')
0.41218495570683444
>>> timeit('math.pow(7, i)', setup='import math; i = 5')
0.5655053168791255

So, while pow and math.pow are about twice as slow as **, they are still fast enough not to matter in most cases. Unless you can actually identify exponentiation as a bottleneck, there is no reason to choose one method over the other at the expense of clarity. This especially applies since the built-in pow, for example, offers an integrated modulo operation.


Alfe asked a good question in the comments above:

timeit shows that math.pow is slower than ** in all cases. What is math.pow() good for, anyway? Does anybody have an idea where it could be of any advantage?

The big difference between math.pow and both the built-in pow and the power operator ** is that math.pow always uses float semantics. So if you, for some reason, want to make sure you get a float back as a result, math.pow will guarantee this property.
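A quick check in the shell makes this concrete:

>>> import math
>>> 2 ** 3          # int ** int stays an int
8
>>> pow(2, 3)       # the built-in pow follows the same rules as **
8
>>> math.pow(2, 3)  # math.pow converts both arguments and returns a float
8.0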

Let's think of an example: we have two numbers, i and j, and have no idea whether they are floats or integers. But we want the result of raising i to the power of j as a float. So what options do we have?

  • We can convert at least one of the arguments to a float and then do i ** j.
  • We can do i ** j and convert the result to a float (float exponentiation is automatically used when either i or j is a float, so the result is the same).
  • We can use math.pow.

So, let’s test this:

>>> timeit('float(i) ** j', setup='i, j = 7, 5')
0.7610865891750791
>>> timeit('i ** float(j)', setup='i, j = 7, 5')
0.7930400942188385
>>> timeit('float(i ** j)', setup='i, j = 7, 5')
0.8946636625872202
>>> timeit('math.pow(i, j)', setup='import math; i, j = 7, 5')
0.5699394063529439

As you can see, math.pow is actually the fastest option here! And if you think about it, the function-call overhead no longer matters, because all of the other alternatives have to call float() as well.


In addition, it might be worth noting that the behavior of ** and pow can be overridden by implementing the special __pow__ (and __rpow__) methods for custom types. So if you don't want that (for whatever reason), math.pow is the way to go, since it bypasses those methods.
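Here is a minimal sketch of that dispatch difference; the class name and the returned string are made up for illustration:

import math

class Two:
    # Hypothetical type that overrides the power operator.
    def __pow__(self, exponent):
        return "custom __pow__ called with %r" % exponent
    def __float__(self):
        return 2.0  # lets math.pow coerce this object to a float

t = Two()
print(t ** 10)          # custom __pow__ called with 10
print(pow(t, 10))       # custom __pow__ called with 10 (same dispatch)
print(math.pow(t, 10))  # 1024.0 -- coerced via __float__, __pow__ is ignored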

Why is math.pow returning different values than the ** operator?

math.pow always converts its arguments to floats first, and floats are only approximations of the real numbers.

With integer operands, the ** operator uses Python's arbitrary-precision integers, which cannot overflow, so it always gives an exact result.
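You can see this directly in the shell:

>>> import math
>>> 3 ** 40          # exact arbitrary-precision integer
12157665459056928801
>>> math.pow(3, 40)  # rounded to the nearest double
1.2157665459056929e+19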

Difference between the built-in pow() and math.pow() for floats, in Python?

Quick Check

From the signatures, we can tell that they are different:

pow(x, y[, z])

math.pow(x, y)

Also, trying it in the shell will give you a quick idea:

>>> pow is math.pow
False

Testing the differences

Another way to understand the differences in behaviour between the two functions is to test for them:

import math
import traceback

inf = float("inf")
NaN = float("nan")

vals = [inf, NaN, 0.0, 1.0, 2.2, -1.0, -0.0, -2.2, -inf, 1, 0, 2]

tests = set()

for vala in vals:
    for valb in vals:
        tests.add((vala, valb))
        tests.add((valb, vala))

for a, b in tests:
    print("math.pow(%f,%f)" % (a, b))
    try:
        print(" %f " % math.pow(a, b))
    except Exception:
        traceback.print_exc()

    print("__builtins__.pow(%f,%f)" % (a, b))
    try:
        print(" %f " % __builtins__.pow(a, b))
    except Exception:
        traceback.print_exc()

We can then notice some subtle differences. For example:

math.pow(0.000000,-2.200000)
ValueError: math domain error

__builtins__.pow(0.000000,-2.200000)
ZeroDivisionError: 0.0 cannot be raised to a negative power

There are other differences, and the test list above is not complete (no long numbers, no complex numbers, etc.), but it gives us a pragmatic picture of how the two functions behave differently. I would also recommend extending the test above to check the type that each function returns. You could write something similar that creates a report of the differences between the two functions.
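As a sketch of such an extension (the report format here is made up; it reuses the tests set built above):

# Record the result type (or the exception) for each pair of arguments.
for a, b in sorted(tests, key=repr):
    for name, fn in (("math.pow", math.pow), ("pow", pow)):
        try:
            result = fn(a, b)
            print("%s(%r, %r) -> %r of type %s"
                  % (name, a, b, result, type(result).__name__))
        except Exception as exc:
            print("%s(%r, %r) raised %s" % (name, a, b, type(exc).__name__))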

math.pow()

math.pow() handles its arguments very differently from the builtin ** or pow(). This comes at the cost of flexibility. Having a look at the source, we can see that the arguments to math.pow() are cast directly to doubles:

static PyObject *
math_pow(PyObject *self, PyObject *args)
{
    PyObject *ox, *oy;
    double r, x, y;
    int odd_y;

    if (!PyArg_UnpackTuple(args, "pow", 2, 2, &ox, &oy))
        return NULL;
    x = PyFloat_AsDouble(ox);
    y = PyFloat_AsDouble(oy);
    /* ... */

The checks are then carried out against the doubles for validity, and then the result is passed to the underlying C math library.

builtin pow()

The built-in pow() (the same as the ** operator), on the other hand, behaves very differently: it uses the object's own implementation of the ** operator, which the end user can override if need be by replacing a number's __pow__(), __rpow__(), or __ipow__() methods.

For built-in types, it is instructive to study the difference between the power function implemented for different numeric types, for example float, long, and complex.

Overriding the default behaviour

Emulating numeric types is described here. Essentially, if you are creating a new type for numbers with uncertainty, you will have to provide the __pow__(), __rpow__(), and possibly __ipow__() methods for your type. This will allow your numbers to be used with the operator:

import math

class Uncertain:
    def __init__(self, x, delta=0):
        self.x = x
        self.delta = delta

    def __pow__(self, other):
        return Uncertain(
            self.x ** other.x,
            Uncertain._propagate_power(self, other)
        )

    @staticmethod
    def _propagate_power(A, B):
        # First-order error propagation for f = A**B:
        #   df/dA = B * A**(B - 1)    and    df/dB = A**B * ln(A)
        return math.sqrt(
            ((B.x * (A.x ** (B.x - 1))) ** 2) * A.delta * A.delta +
            (((A.x ** B.x) * math.log(A.x)) ** 2) * B.delta * B.delta
        )

In order to override math.pow() you will have to monkey patch it to support your new type:

def new_pow(a, b):
    _a = Uncertain(a)
    _b = Uncertain(b)
    return _a ** _b

math.pow = new_pow

Note that for this to work, you'll have to wrangle the Uncertain class to cope with an Uncertain instance being passed as the input to __init__().

Why is pow(a, d, n) so much faster than a**d % n?

See the Wikipedia article on modular exponentiation. Basically, when you do a**d % n, you actually have to calculate a**d, which could be quite large. But there are ways of computing a**d % n without having to compute a**d itself, and that is what pow does. The ** operator can't do this because it can't "see into the future" to know that you are going to immediately take the modulus.
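Here is a minimal sketch of the idea; the helper name mod_pow is made up, and the built-in three-argument pow uses a similar but far more optimized routine in C:

def mod_pow(base, exp, mod):
    # Right-to-left binary ("square-and-multiply") exponentiation.
    # Every intermediate value stays below mod ** 2, so nothing as
    # large as base ** exp is ever materialized.
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                   # current bit of the exponent is set
            result = result * base % mod
        base = base * base % mod      # square for the next bit
        exp >>= 1
    return result

assert mod_pow(2, 10**5, 997) == pow(2, 10**5, 997) == 2 ** 10**5 % 997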

Why is 2**100 so much faster than math.pow(2,100)?

Essentially, the reason that the power operator looks like it's doing so well in your examples is that Python has most likely folded the constant at compile time.

import dis
dis.dis('3.0 ** 100')
i = 100
dis.dis('3.0 ** i')

This gives the following output:

  1           0 LOAD_CONST               2 (5.153775207320113e+47)
              3 RETURN_VALUE

  1           0 LOAD_CONST               0 (3.0)
              3 LOAD_NAME                0 (i)
              6 BINARY_POWER
              7 RETURN_VALUE

You can see this run here: http://ideone.com/5Ari8o

So in this case you can see it's not actually a fair comparison of the performance of the power operator vs math.pow, because the result has been precomputed at compile time. When you evaluate 3.0 ** 100, no computation is performed at runtime; the folded result is just loaded. That is, as you would expect, much faster than any exponentiation performed at runtime, and it is ultimately what explains your results.

For a more fair comparison you need to force the computation to occur at runtime by using a variable:

print(timeit.timeit("3.0 ** i", setup='i=100'))

I tried making a quick benchmark for this using Python 3.4.1 on my computer:

import timeit
trials = 1000000
print("Integer exponent:")
print("pow(2, 100)")
print(timeit.timeit(stmt="pow(2, 100)", number=trials))
print("math.pow(2, 100)")
print(timeit.timeit(stmt="m_pow(2, 100)", setup='import math; m_pow=math.pow', number=trials))
print("2 ** 100")
print(timeit.timeit(stmt="2 ** i", setup='i=100', number=trials))
print("2.0 ** 100")
print(timeit.timeit(stmt="2.0 ** i", setup='i=100', number=trials))
print("Float exponent:")
print("pow(2.0, 100.0)")
print(timeit.timeit(stmt="pow(2.0, 100.0)", number=trials))
print("math.pow(2, 100.0)")
print(timeit.timeit(stmt="m_pow(2, 100.0)", setup='import math; m_pow=math.pow', number=trials))
print("2.0 ** 100.0")
print(timeit.timeit(stmt="2.0 ** i", setup='i=100.0', number=trials))
print("2.01 ** 100.01")
print(timeit.timeit(stmt="2.01 ** i", setup='i=100.01', number=trials))

results:

Integer exponent:
pow(2, 100)
0.7596459520525322
math.pow(2, 100)
0.5203307256717318
2 ** 100
0.7334983742808263
2.0 ** 100
0.30665244505310607
Float exponent:
pow(2.0, 100.0)
0.26179656874310275
math.pow(2, 100.0)
0.34543158098034743
2.0 ** 100.0
0.1768205988074767
2.01 ** 100.01
0.18460920008178894

So it looks like the conversion to a float eats up a fair amount of the execution time.

I also added a benchmark for math.pow. Note that this function is not the same as the built-in pow; see the earlier question "Difference between the built-in pow() and math.pow() for floats, in Python?" for more.

Calculate mod using the pow function in Python

It's simple: pow takes an optional 3rd argument for the modulus.

From the docs:

pow(x, y[, z])

Return x to the power y; if z is present, return x to the power y, modulo z (computed more efficiently than pow(x, y) % z). The two-argument form pow(x, y) is equivalent to using the power operator: x**y.

So you want:

pow(6, 8, 5)
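Which gives the expected result:

>>> pow(6, 8, 5)  # (6 ** 8) % 5, without ever building 6 ** 8
1
>>> 6 ** 8 % 5    # same answer, but computes 1679616 first
1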

Not only is pow(x, y, z) faster and more efficient than (x ** y) % z, it can also easily handle large values of y without using arbitrary-precision arithmetic, assuming z is a simple machine integer.

How do I do exponentiation in Python?

^ is the xor operator.

** is exponentiation.

2**3 = 8
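A quick shell comparison of the two operators:

>>> 2 ** 3  # exponentiation
8
>>> 2 ^ 3   # bitwise XOR: 0b10 ^ 0b11 == 0b01
1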

In Python, why does a negative number raised to an even power remain negative?

The ** operator binds more tightly than the unary - operator does in Python. If you want to override that, use parentheses, e.g. (-i)**4.
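For example:

>>> -2 ** 4   # parsed as -(2 ** 4)
-16
>>> (-2) ** 4
16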

https://docs.python.org/2/reference/expressions.html#operator-precedence
https://docs.python.org/3/reference/expressions.html#operator-precedence


