Is There an Upper Bound to BigInteger?

Is there an upper bound to BigInteger?

The number is held in an int[], and the maximum size of an array is Integer.MAX_VALUE, so the maximum BigInteger is probably (2^32)^Integer.MAX_VALUE.

Admittedly, this is implementation dependent, not part of the specification.


In Java 8, some information was added to the BigInteger javadoc, giving a minimum supported range and the actual limit of the current implementation:

BigInteger must support values in the range -2^Integer.MAX_VALUE (exclusive) to +2^Integer.MAX_VALUE (exclusive) and may support values outside of that range.

Implementation note: BigInteger constructors and operations throw ArithmeticException when the result is out of the supported range of -2^Integer.MAX_VALUE (exclusive) to +2^Integer.MAX_VALUE (exclusive).

What does BigInteger having no limit mean?

There is no theoretical limit. The BigInteger class allocates as much memory as it needs for all the bits of data it is asked to hold.

There are, however, some practical limits, dictated by the memory available. And there are further technical limits, although you're very unlikely to be affected: some methods assume that the bits are addressable by int indexes, so things will start to break when you go above Integer.MAX_VALUE bits.
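To see this in practice, here is a minimal sketch (plain JDK, no extra libraries) building a value far beyond the range of long:

```java
import java.math.BigInteger;

public class BigIntegerGrowth {
    public static void main(String[] args) {
        // 2^1_000_000: about a million bits, vastly larger than Long.MAX_VALUE,
        // yet BigInteger simply allocates a bigger backing array.
        BigInteger big = BigInteger.ONE.shiftLeft(1_000_000);
        System.out.println(big.bitLength()); // 1000001
    }
}
```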

What is the upper bound of BigInteger with a character array implementation?

An 8-bit byte can hold 2^8, or 256 unique values.

4GB of memory is 2^32, or 4294967296 bytes.

Or 4294967295, if we subtract the one byte that you want to reserve for a sign

That's 34359738360 bits.

This many bits can hold 2^34359738360 unique values.

-10^x < N <= 10^x

(first character is reserved for sign).

What is x on a 32-bit system?

Wolfram Alpha suggests -10^1292913986 < N <= 10^1292913986 as the largest representable power of 10.

So x is 1,292,913,986.
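Assuming x = floor(2^32 · log10 2) (which is what the quoted figure works out to), it can be reproduced in one line without Wolfram Alpha:

```java
public class LargestPowerOfTen {
    public static void main(String[] args) {
        // x = floor(2^32 * log10(2)): the largest x with 10^x < 2^(2^32)
        long bits = 1L << 32;
        long x = (long) (bits * Math.log10(2));
        System.out.println(x); // 1292913986
    }
}
```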

Upper bound for number of digits of big integer in different base

I imagine speed is of some concern, or else you could just try the floating point-based estimate and adjust if it turned out to be too small. In that case, one can sacrifice tightness of the estimate for speed.

In the following, let dst_base be 2^w, src_base be b, and n_digits be n.

Let k(b,w) = max { j | b^j < 2^w }. This represents the largest power of b that is guaranteed to fit within a w-wide binary (non-negative) integer. Because of the relatively small number of source and destination bases, these values can be precomputed and looked up in a table, but mathematically k(b,w) = [w log 2 / log b] (where [.] denotes the integer part).

For a given n let m=ceil( n / k(b,w) ). Then the maximum number of dst_base digits required to hold a number less than b^n is:

ceil(log(b^n - 1) / log(2^w)) ≤ ceil(log(b^n) / log(2^w))
≤ ceil(m · log(b^k(b,w)) / log(2^w)) ≤ m.

In short, if you precalculate the k(b,w) values, you can quickly get an upper bound (which is not tight!) by dividing n by k, rounding up.
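A sketch of this in Java (method names are mine). Computing k exactly with a small BigInteger loop, rather than with floating point, sidesteps the edge case where b is a power of 2 and b^j equals 2^w exactly:

```java
import java.math.BigInteger;

public class DigitBound {
    // k(b, w) = max { j | b^j < 2^w }, computed exactly;
    // in practice these values would be precomputed into a table
    static int k(int b, int w) {
        BigInteger limit = BigInteger.ONE.shiftLeft(w);
        BigInteger p = BigInteger.valueOf(b);
        int j = 0;
        while (p.compareTo(limit) < 0) {
            j++;
            p = p.multiply(BigInteger.valueOf(b));
        }
        return j;
    }

    // Upper bound (not tight) on the number of base-2^w digits needed
    // to hold any number with n base-b digits: ceil(n / k(b, w))
    static long upperBound(long n, int b, int w) {
        long kk = k(b, w);
        return (n + kk - 1) / kk;
    }

    public static void main(String[] args) {
        System.out.println(k(10, 32));              // 9, since 10^9 < 2^32 <= 10^10
        System.out.println(upperBound(20, 10, 32)); // 3 words for any 20-digit decimal
    }
}
```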

What are the limits of BigDecimal and BigInteger?

You won't get NumberFormatException multiplying large numbers. If the number produced is too large, you will get a cryptic NegativeArraySizeException as the size of the array overflows.

You are more likely to get an out of memory error.

The limit is 32 · (2^31 - 1) bits for BigInteger's backing int[], though since Java 8 the documented supported range is only ±2^Integer.MAX_VALUE, i.e. about 2^(2 billion).

You can get a NumberFormatException if you

  • create a BigInteger from an empty byte[]
  • use a signum < -1 or > +1
  • try to parse a number in base >36 or < 2
  • have a string with illegal digits.

When you get an exception, you should also look at the message and the stack trace, as these usually give you the real cause.
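Each of the triggers above is easy to reproduce with the plain JDK; a short sketch:

```java
import java.math.BigInteger;

public class NfeTriggers {
    // Runs the action and returns the NumberFormatException message, if any
    static String messageOf(Runnable r) {
        try {
            r.run();
            return "no exception";
        } catch (NumberFormatException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(messageOf(() -> new BigInteger(new byte[0])));      // empty byte[]
        System.out.println(messageOf(() -> new BigInteger(2, new byte[]{1}))); // signum > +1
        System.out.println(messageOf(() -> new BigInteger("ff", 37)));         // radix > 36
        System.out.println(messageOf(() -> new BigInteger("12a3")));           // illegal digit
    }
}
```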

java find a biginteger within specific range

Use binary search. For instance, to find Z, start with Zmin = 0 and Zmax = N. Compute Zmid = (Zmin + Zmax) / 2, then compare Zmid^j with N. If Zmid^j < N, set Zmin = Zmid; otherwise, set Zmax = Zmid. You'll narrow down to the correct Z in O(log N) iterations.
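A sketch of that search with BigInteger, here finding the largest Z with Z^j <= N (method names are mine):

```java
import java.math.BigInteger;

public class IntegerRoot {
    // Largest Z such that Z^j <= n, found by binary search on [0, n]
    static BigInteger nthRoot(BigInteger n, int j) {
        BigInteger lo = BigInteger.ZERO;
        BigInteger hi = n;
        while (lo.compareTo(hi) < 0) {
            // Round the midpoint up so the loop always makes progress
            BigInteger mid = lo.add(hi).add(BigInteger.ONE).shiftRight(1);
            if (mid.pow(j).compareTo(n) <= 0) {
                lo = mid;                          // mid^j <= n: answer is mid or larger
            } else {
                hi = mid.subtract(BigInteger.ONE); // mid^j > n: answer is below mid
            }
        }
        return lo;
    }

    public static void main(String[] args) {
        System.out.println(nthRoot(BigInteger.valueOf(1_000_000), 2)); // 1000
        System.out.println(nthRoot(BigInteger.valueOf(2).pow(100), 5)); // 2^20 = 1048576
    }
}
```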

Is there a limit to the size of a BigInt or BigUint in Rust?

TL;DR: the maximum number that can be represented is roughly:

3.079 x 10^22212093154093428519

I suppose that nothing useful needs such a big number to be represented. You can be confident that num_bigint will do the job, whatever use you have for it.


In theory, there is no limit to the size of num's big integers, since the documentation says nothing about it (version 0.1.44). However, there is a concrete limit that we can calculate:

BigUint is a Vec<BigDigit>, and BigDigit is a u32. As far as I know, Rust does not define a maximum size for a Vec, but since the largest possible allocation is isize::MAX bytes, the maximum number of BigDigits (that is, u32s) is:

MAX_LEN = isize::MAX / sizeof(u32)

With this information, we can deduce that the maximum of a num::BigUint (and a num::BigInt as well) in the current implementation is:

(u32::MAX + 1)^MAX_LEN - 1 = 2^(32·MAX_LEN) - 1

To have this formula, we mimic the way we calculate u8::MAX, for example:

  • bit::MAX is 1,
  • the length is 8,
  • so the maximum is (bit::MAX + 1) ^ 8 - 1 = 255

Here is the full derivation, starting from the formula given by the num documentation:

a + b * big_digit::BASE + c * big_digit::BASE^2 + ...

Taking the max value, a == b == c == u32::MAX; call that common digit value a. For convenience, call big_digit::BASE b. The max number is then:

sum(a * b^n) where n is from 0 to (MAX_LEN - 1)

If we factor out a, we get:

a * sum(b^n) where n is from 0 to (MAX_LEN - 1)

The general formula for the sum of x^n from n = 0 to N is (x^(N+1) - 1) / (x - 1). Because the top index here is MAX_LEN - 1, the result is:

a * (b^(MAX_LEN - 1 + 1) - 1) / (b - 1)

We replace a and b with the right value, and the biggest representable number is:

u32::MAX * (2^(32·MAX_LEN) - 1) / (2^32 - 1)

u32::MAX is 2^32 - 1, so this can be simplified into:

2^(32·MAX_LEN) - 1
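The derivation is easy to sanity-check in miniature. This sketch uses Java's BigInteger (only because the rest of this page uses Java), scaling the parameters down to a base of 256 and a length of 4:

```java
import java.math.BigInteger;

public class GeometricSumCheck {
    public static void main(String[] args) {
        BigInteger a = BigInteger.valueOf(255); // stand-in for u32::MAX
        BigInteger b = BigInteger.valueOf(256); // stand-in for big_digit::BASE
        int maxLen = 4;                         // stand-in for MAX_LEN

        // sum(a * b^n) for n = 0 .. maxLen - 1, all digits at their maximum
        BigInteger sum = BigInteger.ZERO;
        for (int n = 0; n < maxLen; n++) {
            sum = sum.add(a.multiply(b.pow(n)));
        }

        // closed form: a * (b^maxLen - 1) / (b - 1) = b^maxLen - 1
        BigInteger closed = b.pow(maxLen).subtract(BigInteger.ONE);
        System.out.println(sum.equals(closed)); // true
    }
}
```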

