Time Complexity of Power()

Time complexity of power()

Exponentiation by Squaring.

A non-recursive implementation:

typedef long long LL;   // LL = long long (assumed typedef)

LL power(int a, int b)
{
    LL pow = 1;
    while (b)
    {
        if (b & 1)           // current bit of b is set: fold this power of a into the result
        {
            pow = pow * a;
            --b;             // optional: the integer division below would drop this bit anyway
        }
        a = a * a;           // square a for the next bit of b
        b = b / 2;
    }
    return pow;
}

This algorithm requires floor(log2 b) squarings and at most floor(log2 b) multiplications.

The running time is O(log b).
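
As a worked example (a hypothetical driver; the function is repeated from above so the snippet compiles on its own), take b = 13, which is binary 1101:

#include <iostream>

typedef long long LL;

LL power(int a, int b)            // same function as above
{
    LL pow = 1;
    while (b)
    {
        if (b & 1) { pow = pow * a; --b; }
        a = a * a;
        b = b / 2;
    }
    return pow;
}

int main()
{
    // b = 13 (binary 1101): pow folds in a^1, a^4, and a^8 -> a^13,
    // i.e. 3 multiplications and floor(log2 13) + 1 = 4 loop passes
    std::cout << power(2, 13) << '\n';   // prints 8192
}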


Time complexity of this power function

Yes.

Another way of thinking about the complexity of recursive functions is (number of calls made per invocation)^(height of the recursion tree).

In each call you make two recursive calls, each dividing n by two, so the height of the tree is log n, and the time complexity is 2^(log n), which is O(n).

See a much more formal proof here:

https://cs.stackexchange.com/questions/69539/time-complexity-of-recursive-power-code
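
For concreteness, here is a sketch of the kind of function that question is about (an assumption on my part, since the original code is not reproduced here; the names powSlow/powFast are mine), next to the one-call variant that drops the cost to O(log n):

#include <iostream>

typedef long long LL;

// O(n): two recursive calls per level, so about 2^(log2 n) = n calls in total
LL powSlow(LL a, LL n)
{
    if (n == 0) return 1;
    if (n % 2 == 0)
        return powSlow(a, n / 2) * powSlow(a, n / 2);   // same subproblem solved twice
    return a * powSlow(a, n / 2) * powSlow(a, n / 2);
}

// O(log n): compute the half power once and reuse it, one call per level
LL powFast(LL a, LL n)
{
    if (n == 0) return 1;
    LL half = powFast(a, n / 2);
    return (n % 2 == 0) ? half * half : a * half * half;
}

int main()
{
    std::cout << powSlow(3, 5) << ' ' << powFast(3, 5) << '\n';   // 243 243
}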

Time complexity of powering a number

Think of it like this.

If you consider multiplication of two numbers to be an operation that takes unit time, then a two-number multiplication is done in theta(1) time.
Now, in a for loop which runs n-1 times for n numbers, you apply this operation n-1 times. So the theta(1)-cost operation happens n-1 times, which makes the overall cost theta(n-1), which in asymptotic terms is theta(n).

The multiplication happens like this

  • x^1 = x
  • x^2 = x*x
  • x^3 = (x^2)*x
  • x^4 = (x^3)*x
  • ...
  • x^(n-1) = (x^(n-2))*x
  • x^n = (x^(n-1))*x

It's theta(1) for each step, as you can use the result of the previous step to calculate the overall product. For example, when you calculate x^2, you can store its value and use it while calculating x^3. Similarly, when you calculate x^4 you can use the stored value of x^3.

Now all the individual operations take theta(1) time, and if you do them n times, the total time is theta(n). Now, to calculate the complexity of x^n by induction:

  • for x^2, T(2) = theta(1)

    This is the base case for our induction.
  • Assume that T(k) = theta(k) holds for x^k
  • Then x^(k+1) = (x^k)*x, so T(k+1) = theta(k) + theta(1) = theta(k+1)

Hence, for x^n, the time complexity is T(n) = theta(n).

And if you want to sum up the complexity, you are summing it up wrong.

We know that T(2) = theta(1), the time complexity of multiplying two numbers.

  • T(n) = T(n-1) + T(2) (time complexity of multiplying (n-1) numbers, plus one more multiplication)
  • T(n) = T(n-2) + T(2) + T(2)
  • T(n) = T(n-3) + T(2) + T(2) + T(2)
  • ...
  • T(n) = T(3) + (n-3)*T(2)
  • T(n) = T(2) + (n-2)*T(2)
  • T(n) = (n-1)*T(2)
  • T(n) = (n-1)*theta(1)
  • T(n) = theta(n)

Since you know C, here is an example of how you would write a (naive) power function:

int power(int x, int n)
{
    int powerVal = 1;
    for (int i = 1; i <= n; ++i)   // n iterations, one multiplication each
    {
        powerVal = powerVal * x;
    }
    return powerVal;
}

Now, as you can see, each time through the loop a multiplication of two integers takes place, and that takes only theta(1) time. You run this loop n times, so the total complexity is theta(n).
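
A quick, hypothetical usage check that ties the loop count to n (the function is repeated so the snippet compiles on its own):

#include <iostream>

int power(int x, int n)                 // the naive function from above
{
    int powerVal = 1;
    for (int i = 1; i <= n; ++i)        // exactly n multiplications
        powerVal = powerVal * x;
    return powerVal;
}

int main()
{
    std::cout << power(2, 10) << '\n';  // prints 1024 after 10 multiplications
}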

Time Complexity of recursive Power Set function

The run-time is actually O(n*2^n). The simple explanation is that this is an asymptotically optimal algorithm insofar as the total work it does is dominated by creating the subsets which feature directly in the final output of the algorithm, with the total length of the output generated being O(n*2^n). We can also analyze an annotated implementation of the pseudo-code (in JavaScript) to show this complexity more rigorously:

function powerSet(S) {
    if (S.length == 0) return [[]];                        // O(1)
    let e = S.pop();                                       // O(1)
    let pSetWithoutE = powerSet(S);                        // T(n-1)
    let pSet = pSetWithoutE;                               // O(1)
    pSet.push(...pSetWithoutE.map(set => set.concat(e)));  // O(2*|T(n-1)| + ||T(n-1)||)
    return pSet;                                           // O(1)
}
// print example:
console.log('{');
for (let subset of powerSet([1,2,3])) console.log(`\t{`, subset.join(', '), `}`);
console.log('}');
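
To sanity-check the bound (a sketch using the annotations above): the recursive call costs T(n-1), and the push/map step copies 2^(n-1) subsets of length at most n, so T(n) = T(n-1) + O(n*2^(n-1)). Summing over all levels gives O(n*(2^0 + 2^1 + ... + 2^(n-1))) = O(n*2^n).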

Java Math.pow(a,b) time complexity

@Blindy talks about possible approaches that Java could take in implementing pow.

First of all, the general case cannot be repeated multiplication: that won't work when the exponent is not an integer. (The signature for pow is Math.pow(double, double)!)

In the OpenJDK 8 codebase, the native code implementation for pow can work in two ways:

  • The first implementation, in e_pow.c, uses a power series. The approach is described in the C comments as follows (a simplified sketch follows this list):

    * Method:  Let x = 2^n * (1+f)
    *   1. Compute and return log2(x) in two pieces:
    *          log2(x) = w1 + w2,
    *      where w1 has 53-24 = 29 bit trailing zeros.
    *   2. Perform y*log2(x) = n+y' by simulating multi-precision
    *      arithmetic, where |y'| <= 0.5.
    *   3. Return x**y = 2**n * exp(y'*log2)
  • The second implementation in w_pow.c is a wrapper for the pow function provided by the Standard C library. The wrapper deals with edge cases.
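
As a rough illustration of why that method is constant-time (a deliberately oversimplified sketch of the e_pow.c idea; it ignores the two-piece log2 splitting and every edge case, and is not the real implementation):

#include <cmath>
#include <iostream>

// Simplified sketch: x**y = 2**(y * log2(x)).
// A fixed number of floating-point operations regardless of y, hence O(1).
double pow_sketch(double x, double y)
{
    return std::exp2(y * std::log2(x));
}

int main()
{
    std::cout << pow_sketch(2.0, 0.5) << '\n';   // ~1.41421
}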

Now it is possible that the Standard C library uses CPU-specific math instructions. If it does, and the JDK build (or runtime) selected[1] the second implementation, then Java would use those instructions too.

But either way, I can see no trace of any special case code that uses repeated multiplication. You can safely assume that it is O(1).


[1] - I haven't delved into how or when the selection is / can be made.


