How to Enable __int128 on Visual Studio

How to enable __int128 on Visual Studio?

MSDN doesn't list it as being available, and this recent response agrees, so officially, no: there is no type called __int128, and it can't be enabled.

Additionally, never trust the syntax highlighter; it is user-editable, and thus likely to contain bogus or 'future' types. (__int128 is probably a reserved word, however, given the error, so you should avoid naming any of your own types __int128; by convention, anything prefixed with a double underscore is reserved for compiler use.)

One might think __int128 would be available on x64/IPF targets via register pairing, the way __int64 is on 32-bit targets, but right now the only 128-bit types stem from the SIMD types (__m128 and its variously typed forms).

`unsigned __int128` for Visual Studio 2019

__m128 is defined as a union thus:

typedef union __declspec(intrin_type) __declspec(align(16)) __m128 {
    float            m128_f32[4];
    unsigned __int64 m128_u64[2];
    __int8           m128_i8[16];
    __int16          m128_i16[8];
    __int32          m128_i32[4];
    __int64          m128_i64[2];
    unsigned __int8  m128_u8[16];
    unsigned __int16 m128_u16[8];
    unsigned __int32 m128_u32[4];
} __m128;

Clearly it can be used for unsigned operations via its m128_uXX members, but it cannot be used semantically like a built-in data type: it cannot be qualified with unsigned, and it cannot be used in built-in arithmetic operations. It requires functions and intrinsics specifically defined to operate on it, and it is intended for use with the SSE/SSE2 SIMD extensions.

Microsoft's implementation does not include a 128-bit integer type as an extension.

In terms of the QuickJS code, the problem is in the preceding libbf.h code there is:

#if defined(__x86_64__)
#define LIMB_LOG2_BITS 6
#else
#define LIMB_LOG2_BITS 5
#endif

#define LIMB_BITS (1 << LIMB_LOG2_BITS)

So for 64-bit compilation, LIMB_BITS == 64. I suggest one of the following solutions:

  • Use 32-bit compilation, so that LIMB_BITS == 32.
  • Modify libbf.h thus:
    #if defined(__x86_64__) && !defined(_MSC_VER)
    so that the 128-bit definitions are omitted on an MSVC build.
  • Use a Windows compiler that has a __int128 built-in, such as MinGW-w64.

I note that there is a QuickJS Visual Studio port here, but looking at the code and the Lua script that generates the VS solution, it is not immediately obvious to me how, or even whether, this issue is resolved. The readme.md file says to download and install premake5, but the link is broken; I gave up at that point.

Can I use 128-bit integer in MSVC++?

I shall leave aside the question of whether it's a good idea, or whether the physical quantity you're measuring could even in theory ever exceed a value of 2^63, or 10^19 or thereabouts. I'm sure you have your reasons. So what are your options in pure C/C++?

The answer is: not many.

  • 128-bit integers are not part of any standard, nor are they supported by the compilers I know of.
  • A 64-bit double will give you the dynamic range (10^308 or so). An excellent choice if you don't need exact answers: unfortunately, once an integer value exceeds 2^53, adding one to it no longer changes it.
  • An 80-bit extended double is natively supported by the x87 floating-point unit, which gives you a 64-bit significand together with the extended dynamic range.

So, how about rolling your own 128-bit integer arithmetic? You would really have to be a masochist. It's easy enough to do addition and subtraction (mind your carries), and with a bit of thought it's not too hard to do multiplication. Division is another thing entirely: that is seriously hard, and the likely outcome is bugs similar to the Pentium FDIV bug of the 1990s.

You could probably accumulate your counters in two (or more) 64 bit integers without much difficulty. Then convert them into doubles for the calculations at the end. That shouldn't be too hard.

After that, I'm afraid it's off to library shopping. You mentioned Boost, but there are much more specialised libraries around, such as cpp-bigint.

Not surprisingly, this question has been asked before and has a very good answer: Representing 128-bit numbers in C++.

Largest value in Visual Studio C or C++

There aren't data types for 128-bit integers that work like the ones for 64-bit sizes and below. If you want them, you'll have to implement them yourself. Using GMP or boost::multiprecision is always an option.

128-bit division intrinsic in Visual C++

I am no expert, but I dug this up:

http://research.swtch.com/2008/01/division-via-multiplication.html

Interesting stuff. Hope it helps.

EDIT: This is insightful too: http://www.gamedev.net/topic/508197-x64-div-intrinsic/

Is there a 128 bit integer in C++?

GCC and Clang support __int128


