Multiplication of Large Integers

Multiplication of large numbers

The complexity of calculating x^n depends, of course, on the algorithm used for computing the power and on the multiplication algorithm. If you compute the power via exponentiation by squaring, you need O(log n) multiplications. In each step, you either square one number, or you multiply two numbers and square one of the two.
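For reference, here is a minimal sketch of exponentiation by squaring in Python (my own illustration, relying on Python's built-in big integers; the function name is arbitrary):

def power(x, n):
    """Compute x**n with O(log n) multiplications by repeated squaring."""
    acc = 1          # accumulated result
    square = x       # holds x^(2^i) before processing bit i of n
    while n > 0:
        if n & 1:                  # this bit of n is set: multiply into the accumulator
            acc = acc * square
        square = square * square   # square for the next bit
        n >>= 1
    return acc

assert power(3, 13) == 3 ** 13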

If x has d(x) digits, then x^n has Θ(n*d(x)) digits. In the last squaring step, you square a number of about n/2*d(x) digits (and possibly multiply that number with a smaller one); the algorithm then finishes by multiplying the repeated square x^(2^k), where 2^k <= n < 2^(k+1), with the accumulator.

Let S(m) be the cost of squaring an m-digit number (may or may not be the same as the cost M(m) of multiplying two arbitrary m-digit numbers). The squarings then need approximately

S(2^(k-1)*d(x)) + S(2^(k-2)*d(x)) + S(2^(k-3)*d(x)) + ...

work. Since the cost of squaring grows at least linearly with the number of digits (S(2m) >= 2*S(m)), each term is at most half of the one before it, so the sum lies between S(2^(k-1)*d(x)) and 2*S(2^(k-1)*d(x)); the work for the squarings is dominated by the last step. The same holds for the multiplications of the x^(2^s) with the accumulator: the final multiplication dominates the work. The final accumulator can be nearly as large as the repeated square, so the total cost of raising x to the n-th power by repeated squaring is

Θ(M(2^k*d(x))),

which is Θ(M(n*d(x))). With the naive multiplication - M(m) = O(m^2) - you then get a total cost of O(n^2*d(x)^2). Using a more advanced multiplication algorithm (Karatsuba, Toom-Cook, Schönhage-Strassen, ...), you get a much lower complexity, down to a little above O(n*d(x)*log (n*d(x)) * log log (n*d(x))).
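To give an idea of how the subquadratic algorithms work, here is a bare-bones Karatsuba sketch in Python (assumptions: non-negative integers, no base-case cutoff, and the splitting is done with Python's own arithmetic, so it illustrates the recursion rather than being a fast implementation):

def karatsuba(a, b):
    """Multiply with 3 half-size recursive products instead of 4,
    giving roughly O(m^1.585) digit operations instead of O(m^2)."""
    if a < 10 or b < 10:            # single-digit base case
        return a * b
    m = max(len(str(a)), len(str(b))) // 2
    base = 10 ** m
    a_hi, a_lo = divmod(a, base)
    b_hi, b_lo = divmod(b, base)
    hi  = karatsuba(a_hi, b_hi)
    lo  = karatsuba(a_lo, b_lo)
    mid = karatsuba(a_hi + a_lo, b_hi + b_lo) - hi - lo
    return hi * base * base + mid * base + lo

assert karatsuba(12345678, 87654321) == 12345678 * 87654321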

When you compute the power iteratively by multiplying with x (n-1 multiplications in total), let M(m,k) denote the cost of multiplying an m-digit number by a k-digit number. Since one of the factors is always x, the total cost is

M(d(x),d(x)) + M(d(x),2*d(x)) + M(d(x),3*d(x)) + ... + M(d(x),(n-1)*d(x))

With the schoolbook algorithm, where M(m,k) = m*k, this sums to n*(n-1)/2*d(x)^2, so the total cost is again Θ(n^2*d(x)^2), but with larger constant factors than for exponentiation by repeated squaring.
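For comparison, a sketch of the iterative method (again just an illustration; it assumes n >= 1):

def power_iterative(x, n):
    """Compute x**n (n >= 1) with n-1 multiplications; one factor is always x."""
    result = x
    for i in range(1, n):
        result = result * x    # the i-th multiplication costs about M(d(x), i*d(x))
    return result

assert power_iterative(3, 13) == 3 ** 13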

When you are multiplying numbers of greatly differing lengths, as happens here after a few iterations, you cannot - as far as I know - reduce the cost M(m,k) much below Θ(m*k). If m < k, view the two numbers as a 1-digit number and an r-digit number (with r*m <= k < (r+1)*m) in base b^m: you can reduce the cost of multiplying the "digits" by using the better algorithms if m is large enough, but you cannot get rid of the factor r. So this algorithm has a complexity of O(n^2*M(d(x))); the factor of n^2 cannot be eliminated.
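A sketch of that chunking idea (names are mine; the chunk products just use Python's built-in multiplication to stand in for an M(m)-cost algorithm):

def mul_unbalanced(short, big, m):
    """Multiply an ~m-digit number by a much longer one by splitting the
    longer one into r chunks of m digits: r products of cost M(m) each."""
    base = 10 ** m
    total, shift = 0, 0
    while big > 0:
        big, chunk = divmod(big, base)          # peel off the next m-digit chunk
        total += (short * chunk) * 10 ** shift  # one M(m)-cost product per chunk
        shift += m
    return total

assert mul_unbalanced(1234, 987654321987654321, 4) == 1234 * 987654321987654321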

Multiplication of very long integers

Most languages have functions or libraries that do this, usually called a bignum library (GMP is a good one).

If you want to do it yourself, I would do it the same way that people do long multiplication on paper. To do this you could either work with strings containing the number, or do it in binary using bitwise operations.

Example:

    45
 x  67
 -----
   315
+ 2700
 -----
  3015

Or in binary:

    101
  x 101
  -----
    101
   000
+ 101
  -----
  11001

Edit: After doing it in binary, I realized that it would be much simpler (and faster, of course) to code this using bitwise operations instead of strings containing the base-10 numbers. I've edited my binary multiplication example above to show the pattern: for each 1-bit in the bottom number, shift the top number left by that bit's position and add it to an accumulator variable. At the end, that variable contains the product.

To store the product, you'll have to use two 64-bit numbers and treat one of them as the low 64 bits and the other as the high 64 bits of the product. You'll have to write code that carries overflow out of bit 63 of the low word into bit 0 of the high word.
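Here is a sketch of that shift-and-add pattern, written in Python but keeping the product split into two 64-bit words with explicit masking, so the carry out of bit 63 of the low word is visible (the names and the 64-bit split are just for illustration):

MASK64 = (1 << 64) - 1

def mul_64x64_to_128(a, b):
    """Shift-and-add multiplication of two 64-bit values; the 128-bit product
    is kept as two 64-bit words (hi, lo)."""
    hi = lo = 0
    for bit in range(64):
        if (b >> bit) & 1:                        # for each 1-bit in the bottom number...
            shifted = a << bit                    # ...take the top number shifted left
            add_lo = shifted & MASK64
            add_hi = shifted >> 64
            new_lo = lo + add_lo
            carry = new_lo >> 64                  # overflow out of bit 63 of the low word
            lo = new_lo & MASK64
            hi = (hi + add_hi + carry) & MASK64   # ...carried into bit 0 of the high word
    return hi, lo

a, b = 0xFFFFFFFFFFFFFFFF, 0x123456789ABCDEF0
hi, lo = mul_64x64_to_128(a, b)
assert (hi << 64) | lo == a * b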

An interview question - implement Biginteger Multiply

Do you know how to do multiplication on paper?

    123
  x 456
  -----
    738
   615
  492
  -----
  56088

I would just implement that algorithm in code.
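For instance, a minimal Python version of that paper algorithm working directly on decimal strings (my own sketch, not a reference solution):

def multiply(a: str, b: str) -> str:
    """Schoolbook multiplication of two non-negative decimal strings."""
    result = [0] * (len(a) + len(b))            # enough columns for the full product
    for i in range(len(a) - 1, -1, -1):         # digits of a, least significant first
        for j in range(len(b) - 1, -1, -1):     # digits of b, least significant first
            pos = i + j + 1                     # column of the low digit of a[i]*b[j]
            total = int(a[i]) * int(b[j]) + result[pos]
            result[pos] = total % 10
            result[pos - 1] += total // 10      # carry into the column to the left
    digits = "".join(map(str, result)).lstrip("0")
    return digits or "0"

assert multiply("123", "456") == "56088"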

Multiply Very Large Numbers Accurately in Python

If you use fractions.Fraction you can handle larger numbers accurately, at the cost of some efficiency:

from fractions import Fraction
a = Fraction(45310630.0)
b = Fraction(1023473145)

c = int(a * b)
print(c)

Output:

46374212988031350

Some timings:

In [2]: a = Fraction(45310630.0)
...: b = Fraction(1023473145)
...:

In [3]: %timeit int(a * b)
3.92 µs ± 21.9 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

In [4]: a = 45310630.0
...: b = 1023473145
...:

In [5]: %timeit int(a * b)
479 ns ± 13.9 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
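For completeness (not part of the original answer): Python's built-in int is already arbitrary precision, so if both inputs can be turned into integers up front, a plain int multiplication is exact and needs neither Fraction nor floats:

a = 45310630.0
b = 1023473145

exact = int(a) * b        # int * int in Python is exact at any size
print(exact)              # 46374212988031350, same as the Fraction result

inexact = int(a * b)      # float * int rounds to double precision first,
print(inexact == exact)   # so this is False here: the product needs more than 53 bits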

Large numbers multiplication in MATLAB

If you are interested in what actually happens when you compute, for example, 139676498390139139676498390676498390*8745566554641239676498390676498390, then this is it:

  • both numbers are converted to IEEE 754 double precision
  • as such, the representation is not exact, because doubles can only represent integers accurately up to 2^53-1 (see the documentation of the bitmax function), i.e. about 10^15, while your numbers are on the order of 10^35
  • the multiplication is performed on double precision floating point numbers and is therefore also approximate

Have a look at this example code:

>> a=139676498390139139676498390676498390;
>> num2str(a)

ans =

139676498390139141600015724509659136

which is clearly not exactly the value you assigned to a - only the first 16 digits match.
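The 2^53 limit is easy to see in any language whose floating point type is an IEEE 754 double; a quick Python check (not MATLAB, just to illustrate the threshold):

# Every integer up to 2^53 is exactly representable as a double; 2^53 + 1 is not.
print(float(2**53) == 2**53)          # True
print(float(2**53 + 1) == 2**53 + 1)  # False: 2^53 + 1 rounds back to 2^53

a = 139676498390139139676498390676498390
print(int(float(a)))   # roughly the first 16 digits survive; the rest is rounding noise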

long integer multiplication

You can use the GNU Multiple Precision Arithmetic Library (GMP) for C++.

If you just want an easy way to multiply huge numbers (integers), here you are:

#include <iostream>
#include <string>
#include <vector>

#define SIZE 700

using namespace std;

class Bignum {

    int no[SIZE]; // decimal digits, right-aligned: no[SIZE-1] is the ones digit

public:

    Bignum operator*(const Bignum &x) const { // overload the * operator
        /*
           34 x 46
           -------
              204   // these partial products are stored in the
             136    // two-dimensional array mat[][]
           -------
             1564   // this is the value stored in "Bignum ret"
        */
        Bignum ret;
        int carry = 0;
        // One row per digit of *this, plus row SIZE which accumulates the final sum.
        // Heap allocation (vector) keeps this large scratch array off the stack.
        vector<vector<int>> mat(SIZE + 1, vector<int>(SIZE, 0));

        // Step 1: compute the shifted partial products, one row per digit of *this.
        for (int i = SIZE - 1; i >= 0; i--) {
            for (int j = SIZE - 1; j >= 0; j--) {
                carry += no[i] * x.no[j];
                int col = j - (SIZE - 1 - i); // row i is shifted left by its digit position
                int digit;
                if (carry < 10) {
                    digit = carry;
                    carry = 0;
                } else {
                    digit = carry % 10;
                    carry = carry / 10;
                }
                if (col >= 0) // a negative column would mean the product overflows SIZE digits
                    mat[i][col] = digit;
            }
        }

        // Step 2: add the rows; row i accumulates the sum of rows 0..i,
        // so row SIZE ends up holding the complete product.
        for (int i = 1; i < SIZE + 1; i++) {
            for (int j = SIZE - 1; j >= 0; j--) {
                carry += mat[i][j] + mat[i - 1][j];
                if (carry < 10) {
                    mat[i][j] = carry;
                    carry = 0;
                } else {
                    mat[i][j] = carry % 10;
                    carry = carry / 10;
                }
            }
        }

        for (int i = 0; i < SIZE; i++)
            ret.no[i] = mat[SIZE][i];
        return ret;
    }

    Bignum() {
        for (int i = 0; i < SIZE; i++)
            no[i] = 0;
    }

    Bignum(string _no) {
        for (int i = 0; i < SIZE; i++)
            no[i] = 0;
        int index = SIZE - 1;
        for (int i = _no.length() - 1; i >= 0; i--, index--)
            no[index] = _no[i] - '0';
    }

    void print() {
        int start = SIZE - 1; // index of the first non-zero digit ("0" is printed if all digits are zero)
        for (int i = 0; i < SIZE; i++) {
            if (no[i] != 0) {
                start = i;
                break;
            }
        }
        for (int i = start; i < SIZE; i++) // print from the first non-zero digit to the end
            cout << no[i];
        cout << endl;
    }
};

int main(){

Bignum n1("100122354123451234516326245372363523632123458913760187501287519875019671647109857108740138475018937460298374610938765410938457109384571039846");
Bignum n2("92759375839475239085472390845783940752398636109570251809571085701287505712857018570198713984570329867103986475103984765109384675109386713984751098570932847510938247510398475130984571093846571394675137846510874510847513049875610384750183274501978365109387460374651873496710394867103984761098347609138746297561762234873519257610");

Bignum n3 = n1*n2;
n3.print();

return 0;

}

As you can see, it multiplies two huge integers :) ... (up to 700 digits)


