Is There a 128 Bit Integer in C++

Is there a 128 bit integer in gcc?

Leaving aside C23 _BitInt, a primitive 128-bit integer type in GCC is only ever available on 64-bit targets, so you need to check for availability even if you have already detected a recent GCC version. In theory gcc could support TImode integers on machines where it would take 4x 32-bit registers to hold one, but I don't think there are any cases where it does.

In C++, consider a library such as boost::multiprecision::int128_t, which hopefully uses the compiler's built-in wide types where available, giving zero overhead compared to writing your own typedef of GCC's __int128 or Clang's _BitInt(128). See also @phuclv's answer on another question.

ISO C23 lets you typedef unsigned _BitInt(128) u128;, modeled on Clang's feature originally called _ExtInt(), which works even on 32-bit machines; see a brief intro to it. At the time of writing, GCC -std=gnu2x doesn't support that syntax yet.
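For example, a minimal sketch of the C23 spelling (assuming a compiler whose BITINT_MAXWIDTH is at least 128, e.g. recent Clang with -std=c2x; the names are just illustrative):

typedef unsigned _BitInt(128) u128;

u128 mul64_wide(unsigned long long a, unsigned long long b)
{
    return (u128)a * b;   /* full 128-bit product, even on 32-bit targets */
}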


GCC 4.6 and later have __int128 / unsigned __int128 defined as a built-in type. Use #ifdef __SIZEOF_INT128__ to detect it.

GCC 4.1 and later define __int128_t and __uint128_t as built-in types. (You don't need #include <stdint.h> for these, either. Proof on Godbolt.)

I tested on the Godbolt compiler explorer for the first versions of compilers to support each of these 3 things (on x86-64). Godbolt only goes back to gcc4.1, ICC13, and clang3.0, so I've used <= 4.1 to indicate that the actual first support might have been even earlier.

         legacy         recommended(?)         One way of detecting support
         __uint128_t    [unsigned] __int128    #ifdef __SIZEOF_INT128__
gcc      <= 4.1         4.6                    4.6
clang    <= 3.0         3.1                    3.3
ICC      <= 13          <= 13                  16  (Godbolt doesn't have 14 or 15)

If you compile for a 32-bit architecture like ARM, or x86 with -m32, no 128-bit integer type is supported with even the newest version of any of these compilers. So you need to detect support before using it, if it's possible for your code to work at all without it.

The only direct CPP macro I'm aware of for detecting it is __SIZEOF_INT128__, but unfortunately some old compiler versions support the type without defining the macro. (And there's no macro for __uint128_t, only the gcc4.6-style unsigned __int128.) See How to know if __uint128_t is defined.

Some people still use ancient compiler versions like gcc4.4 on RHEL (RedHat Enterprise Linux), or similar crusty old systems. If you care about obsolete gcc versions like that, you probably want to stick to __uint128_t. And maybe detect 64-bitness in terms of sizeof(void*) == 8 as a fallback for __SIZEOF_INT128__ not being defined. (I think GNU systems always have CHAR_BIT==8, although I might be wrong about some DSPs). That will give a false negative on ILP32 ABIs on 64-bit ISAs (like x86-64 Linux x32, or AArch64 ILP32), but this is already just a fallback / bonus for people using old compilers that don't define __SIZEOF_INT128__.
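A minimal sketch of that detection strategy (the typedef name u128 is illustrative; since sizeof(void*) can't be tested by the preprocessor, UINTPTR_MAX stands in for "pointers are 64 bits", with the same ILP32 false negative mentioned above):

#include <stdint.h>

#if defined(__SIZEOF_INT128__)
typedef unsigned __int128 u128;   /* any compiler that defines the macro */
#elif defined(__GNUC__) && UINTPTR_MAX == 0xffffffffffffffffULL
typedef __uint128_t u128;         /* old 64-bit gcc: has the type but not the macro */
#else
#error "no 128-bit integer type available"
#endif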

There might be some 64-bit ISAs where gcc doesn't define __int128, or maybe even some 32-bit ISAs where gcc does define __int128, but I'm not aware of any.


Internally, GCC implements this type using integer mode TImode (GCC internals manual). (Tetra-integer = 4x the width of a 32-bit int, vs. DImode = double width, vs. SImode = plain int.) As the GCC manual points out, __int128 is supported on targets that support a 128-bit integer mode (TImode).

// __uint128_t is pre-defined equivalently to this
typedef unsigned uint128 __attribute__ ((mode (TI)));

There is an OImode in the manual, oct-int = 32 bytes, but current GCC for x86-64 complains "unable to emulate 'OI'" if you attempt to use it.


Random fact: ICC19 and g++/clang++ -E -dM define:

#define __GLIBCXX_TYPE_INT_N_0 __int128
#define __GLIBCXX_BITSIZE_INT_N_0 128

@MarcGlisse commented that this is the way you tell libstdc++ to handle extra integer types (overload abs, specialize type traits, etc.).

icpc defines that even with -xc (to compile as C, not C++), while g++ -xc and clang++ -xc don't. But compiling with actual icc (e.g. select C instead of C++ in the Godbolt language dropdown) doesn't define this macro.


The test function was:

#include <stdint.h>   // for uint64_t

#define uint128_t __uint128_t
//#define uint128_t unsigned __int128

uint128_t mul64(uint64_t a, uint64_t b) {
    return (uint128_t)a * b;
}

Compilers that support it all compile it efficiently, to

        mov     rax, rdi
        mul     rsi
        ret                     # return in RDX:RAX which mul uses implicitly

Assigning 128 bit integer in C

Am I doing something wrong or is this a bug in gcc?

The problem is in the 47942806932686753431 part, not in __uint128_t p. According to the gcc docs there's no way to write a 128-bit integer constant:

There is no support in GCC for expressing an integer constant of type __int128 for targets with long long integer less than 128 bits wide.

So, it seems that while you can have 128-bit variables, you cannot have 128-bit constants, unless your long long is 128 bits wide.

The workaround is to construct the 128-bit value from "narrower" integer constants using basic arithmetic operations, and hope for the compiler to perform constant folding.
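For the constant from the question, a minimal sketch of that workaround: 47942806932686753431 is 2 * 2^64 + 11049318785267650199, so it can be assembled from two 64-bit halves, and the compiler can fold the expression down to a single 128-bit constant:

#include <stdint.h>

static const __uint128_t p = ((__uint128_t)2 << 64) | 11049318785267650199ULL;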

use of 128 bit unsigned int in c language

  1. "Target" means the specific combination of CPU architecture and operating system that your compiler is configured to create programs for. There is a discussion at Does a list of all known target triplets in use exist?. But "integer mode" is really a concept used internally by the compiler, and only indirectly related to what the hardware can and can't do. So all this really says is "the compiler supports 128-bit integers on some targets and not on others". The easiest way to find out whether yours does is to just try to compile and run a small test program that uses __int128.

  2. Most systems' printf functions don't support __int128, so you have to write your own code to print such values, or find third-party code somewhere; a minimal sketch follows after this list. See How to print __int128 in g++? which is for C++ but still relevant.

  3. You don't need to include anything or use any special options.
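A minimal sketch of printing an unsigned __int128 in decimal (assumes GCC or Clang on a target that supports the type; the function name is illustrative). Since printf has no conversion specifier for it, peel off decimal digits by repeated division by 10:

#include <stdio.h>

static void print_u128(unsigned __int128 v)
{
    char buf[40];                 /* 2^128 - 1 has 39 decimal digits */
    char *p = buf + sizeof buf;
    *--p = '\0';
    do {
        *--p = (char)('0' + (unsigned)(v % 10));
        v /= 10;
    } while (v);
    fputs(p, stdout);
}

int main(void)
{
    print_u128(((unsigned __int128)1 << 100) + 12345);
    putchar('\n');
    return 0;
}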

Is there a 128 bit integer in C++?

GCC and Clang support __int128

Getting a 128 bits integer from command line

Take a step back, and look at what you are trying to implement. The Tiny Encryption Algorithm does not work on a 128-bit integer, but on a 128-bit key; the key is composed of four 32-bit unsigned integers.

What you actually need is a way to parse a decimal (or hexadecimal, or some other base) 128-bit unsigned integer from a string into four 32-bit unsigned integer elements.

I suggest writing a multiply-add function, which takes the quad-32-bit value, multiplies it by a 32-bit constant, and adds another 32-bit constant:

#include <stdint.h>

uint32_t muladd128(uint32_t quad[4], const uint32_t mul, const uint32_t add)
{
    uint64_t temp = 0;

    /* Work from the least significant limb quad[3] upward, propagating the
       carry in the high 32 bits of temp. */
    temp = (uint64_t)quad[3] * (uint64_t)mul + add;
    quad[3] = temp;

    temp = (uint64_t)quad[2] * (uint64_t)mul + (temp >> 32);
    quad[2] = temp;

    temp = (uint64_t)quad[1] * (uint64_t)mul + (temp >> 32);
    quad[1] = temp;

    temp = (uint64_t)quad[0] * (uint64_t)mul + (temp >> 32);
    quad[0] = temp;

    return temp >> 32;
}

The above uses most-significant-first word order (quad[0] holds the highest 32 bits). It returns nonzero if the result overflows; in fact, it returns the 32-bit overflow itself.

With that, it is very easy to parse a string describing a nonnegative 128-bit integer in binary, octal, decimal, or hexadecimal:

#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>
#include <errno.h>

static void clear128(uint32_t quad[4])
{
    quad[0] = quad[1] = quad[2] = quad[3] = 0;
}

/* muladd128() from above goes here */

static const char *parse128(uint32_t quad[4], const char *from)
{
    if (!from) {
        errno = EINVAL;
        return NULL;
    }

    while (*from == '\t' || *from == '\n' || *from == '\v' ||
           *from == '\f' || *from == '\r' || *from == ' ')
        from++;

    if (from[0] == '0' && (from[1] == 'x' || from[1] == 'X') &&
        ((from[2] >= '0' && from[2] <= '9') ||
         (from[2] >= 'A' && from[2] <= 'F') ||
         (from[2] >= 'a' && from[2] <= 'f'))) {
        /* Hexadecimal */
        from += 2;
        clear128(quad);

        while (1)
            if (*from >= '0' && *from <= '9') {
                if (muladd128(quad, 16, *from - '0')) {
                    errno = ERANGE;
                    return NULL;
                }
                from++;
            } else
            if (*from >= 'A' && *from <= 'F') {
                if (muladd128(quad, 16, *from - 'A' + 10)) {
                    errno = ERANGE;
                    return NULL;
                }
                from++;
            } else
            if (*from >= 'a' && *from <= 'f') {
                if (muladd128(quad, 16, *from - 'a' + 10)) {
                    errno = ERANGE;
                    return NULL;
                }
                from++;
            } else
                return from;
    }

    if (from[0] == '0' && (from[1] == 'b' || from[1] == 'B') &&
        (from[2] >= '0' && from[2] <= '1')) {
        /* Binary */
        from += 2;
        clear128(quad);

        while (1)
            if (*from >= '0' && *from <= '1') {
                if (muladd128(quad, 2, *from - '0')) {
                    errno = ERANGE;
                    return NULL;
                }
                from++;
            } else
                return from;
    }

    if (from[0] == '0' &&
        (from[1] >= '0' && from[1] <= '7')) {
        /* Octal */
        from += 1;
        clear128(quad);

        while (1)
            if (*from >= '0' && *from <= '7') {
                if (muladd128(quad, 8, *from - '0')) {
                    errno = ERANGE;
                    return NULL;
                }
                from++;
            } else
                return from;
    }

    if (from[0] >= '0' && from[0] <= '9') {
        /* Decimal */
        clear128(quad);

        while (1)
            if (*from >= '0' && *from <= '9') {
                if (muladd128(quad, 10, *from - '0')) {
                    errno = ERANGE;
                    return NULL;
                }
                from++;
            } else
                return from;
    }

    /* Not a recognized number. */
    errno = EINVAL;
    return NULL;
}

int main(int argc, char *argv[])
{
    uint32_t key[4];
    int arg;

    for (arg = 1; arg < argc; arg++) {
        const char *end = parse128(key, argv[arg]);
        if (end) {
            if (*end != '\0')
                printf("%s: 0x%08x%08x%08x%08x (+ \"%s\")\n", argv[arg], key[0], key[1], key[2], key[3], end);
            else
                printf("%s: 0x%08x%08x%08x%08x\n", argv[arg], key[0], key[1], key[2], key[3]);
            fflush(stdout);
        } else {
            switch (errno) {
            case ERANGE:
                fprintf(stderr, "%s: Too large.\n", argv[arg]);
                break;
            case EINVAL:
                fprintf(stderr, "%s: Not a nonnegative integer in binary, octal, decimal, or hexadecimal notation.\n", argv[arg]);
                break;
            default:
                fprintf(stderr, "%s: %s.\n", argv[arg], strerror(errno));
                break;
            }
        }
    }

    return EXIT_SUCCESS;
}

It is very straightforward to add support for Base64 and Base85, which are sometimes used; or indeed for any base less than 2^32.
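For instance, a sketch of the Base64 case (assuming the common A-Z a-z 0-9 + / alphabet; Base64 variants differ): map each character to its value with a helper like the one below, then feed it through muladd128(quad, 64, value) exactly like the other bases.

static int base64_digit(char c)
{
    if (c >= 'A' && c <= 'Z') return c - 'A';
    if (c >= 'a' && c <= 'z') return c - 'a' + 26;
    if (c >= '0' && c <= '9') return c - '0' + 52;
    if (c == '+') return 62;
    if (c == '/') return 63;
    return -1;                    /* not a Base64 digit */
}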

And, if you think about the above, it was all down to being precise about what you need.

Is a 128 bit int written or loaded in two instructions in C/C++?

The support is discussed in other answers. I'll discuss implementation issues.

Usually when reading from memory, the compiler will emit processor instructions to fetch the data from memory into a register. This may be atomic depending on how the databus is set up between the processor and the memory.

If your processor supports 128-bit transfers and the memory supports 128-bit data bus, this could be a single fetch (or write).

If your processor supports 128-bit register transfers, but the data bus is smaller, the processor will perform enough fetches to transfer the data from memory. This may or may not be atomic, depending on your definition of atomic (it's one processor instruction, but may require more than one fetch).

For processors that don't support 128-bit register transfers, the compiler will emit enough instructions to read the memory into register(s). This is for register to memory or memory to register transfers.

For memory to memory transfers (e.g. variable assignments), the compiler may choose to use block reading and writing (if your processor has support for block reading and writing). Some processors support SIMD, others may have block transfer instructions. For example, the ARM has LDM (load multiple) and STM (store multiple) instructions for loading many registers from memory and storing many registers to memory. Another method of block reading and writing is to use a DMA device (if present). The DMA can transfer data while the processor executes other instructions. However, the overhead of using the DMA may require more instructions than using 16 eight-bit (byte) transfers.

In summary, compilers are not required to support int128_t. If they do support it, there are various methods to transfer the data, depending on the processor and platform hardware support. View the assembly language to see the instructions emitted by the compiler to support int128_t.
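A small test case for doing that: the exact instructions depend on the target, ABI, and optimization level, but on x86-64 GCC/Clang a plain __int128 load like this is typically split into two 64-bit mov instructions (so it is not atomic by itself):

unsigned __int128 load128(const unsigned __int128 *p)
{
    return *p;    /* compile with -O2 -S and inspect the generated assembly */
}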

C++ standard and 128 bit integer

The reason is expense and lack of need. If the standard required a 128-bit integer type, every compiler would have to implement it. On hardware that doesn't support such an integer type natively, implementations would have to provide a way of generating code to emulate it. There simply aren't enough folks who need such a type to justify imposing it on every compiler.

128-bit arithmetic on x64 in C

GCC 4.1 introduced initial 128-bit integer support with the __int128_t and __uint128_t built-in types, but the 128-bit type has been officially supported since GCC 4.6 as __int128 / unsigned __int128.

Clang also supports those types, although I don't know since when; the first version on Godbolt (3.0.0) does support __int128_t, though.

ICC has had the same support since version 13.0.0: 128-bit integers supporting +, -, *, /, and % in the Intel C Compiler?

See also

  • Is there a 128 bit integer in gcc?
  • What gcc versions support the __int128 intrinsic type?

If you're on MSVC then there's no direct support for a 128-bit type but there are many intrinsics helping you do 128-bit operations:

  • 64*64=128: _mul128(), _umul128(), __mulh(), __umulh() (a sketch using _umul128 follows after this list)

  • 128/64=64: _div128(), _udiv128()

  • 64+64=65: The carry in an addition can be easily obtained by comparing the low part of the sum with any of the operands:

    struct uint128 {
        uint64_t H, L;
    };

    inline uint128 add(uint128 a, uint128 b)
    {
        uint128 c;
        c.L = a.L + b.L;               // add low parts
        c.H = a.H + b.H + (c.L < a.L); // add high parts and carry
        return c;
    }

    The same approach can be used for 128-bit subtraction (with a borrow instead of a carry).
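For the first bullet, a minimal sketch using _umul128 (assumes MSVC targeting x64, where it is declared in <intrin.h>; the struct and function names here are illustrative):

#include <intrin.h>
#include <stdint.h>

typedef struct { uint64_t H, L; } u128_emul;

static inline u128_emul mul_64x64(uint64_t a, uint64_t b)
{
    u128_emul c;
    c.L = _umul128(a, b, &c.H);   /* returns the low 64 bits, writes the high 64 */
    return c;
}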

There are also intrinsics for shifting, although implementing these yourself is trivial: __shiftleft128(), __shiftright128().
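If you do roll your own, a minimal sketch of a 128-bit left shift over two 64-bit limbs (reusing the u128_emul struct from the sketch above), valid for shift counts 0 < n < 64:

static inline u128_emul shl128(u128_emul a, unsigned n)   /* requires 0 < n < 64 */
{
    u128_emul c;
    c.H = (a.H << n) | (a.L >> (64 - n));   /* bits carried out of L into H */
    c.L = a.L << n;
    return c;
}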


If you're on an unsupported compiler then just use a fixed-width 128-bit type from one of the many available libraries; that will be much faster than general arbitrary-precision arithmetic. For example ttmath::UInt<4> (a 128-bit int type with four 32-bit limbs), or (u)int128_t in Boost.Multiprecision and calccrypto/uint128_t. An arbitrary-precision arithmetic library like GMP is just too costly for this. One example: Optimization story: Switching from GMP to gcc's __int128 reduced run time by 95%


