Definition of int64_t
a) Can you explain to me the difference between int64_t and long (long int)? In my understanding, both are 64-bit integers. Is there any reason to choose one over the other?
The former is a signed integer type with exactly 64 bits. The latter is a signed integer type with at least 32 bits.
b) I tried to look up the definition of int64_t on the web, without much success. Is there an authoritative source I need to consult for such questions?
http://cppreference.com covers this here: http://en.cppreference.com/w/cpp/types/integer. The authoritative source, however, is the C++ standard (this particular bit can be found in §18.4 Integer types [cstdint]).
c) For code using int64_t to compile, I am including <iostream>, which doesn't make much sense to me. Are there other includes that provide a declaration of int64_t?

It is declared in <cstdint> or <cinttypes> (under namespace std), or in <stdint.h> or <inttypes.h> (in the global namespace).
C++: int64_t where does it come from?
They were introduced by the C99 standard.
Documentation:
http://www.cplusplus.com/reference/cstdint/
http://en.cppreference.com/w/c/types/integer
They were introduced because the standard doesn't specify a fixed width for the standard primitives, only a minimum width. So int can be 16-bit or 32-bit, depending on compiler, OS, and architecture; long varies as well, as it can be 32-bit or 64-bit. Even char can be 8 or 16 bits.
long long int vs. long int vs. int64_t in C++
You don't need to go to 64-bit to see something like this. Consider int32_t on common 32-bit platforms. It might be typedef'ed as int or as long, but obviously only one of the two at a time. int and long are of course distinct types.

It's not hard to see that there is no workaround which makes int == int32_t == long on 32-bit systems. For the same reason, there's no way to make long == int64_t == long long on 64-bit systems.
If you could, the possible consequences would be rather painful for code that overloaded foo(int), foo(long), and foo(long long) - suddenly there would be two definitions for the same overload?!
The correct solution is that your template code usually should not rely on a precise type, but on the properties of that type. The whole same_type logic could still be OK for specific cases:

long foo(long x);
std::tr1::disable_if(same_type(int64_t, long), int64_t)::type foo(int64_t);

I.e., the overload foo(int64_t) is not defined when it's exactly the same as foo(long).
[edit]
With C++11, we now have a standard way to write this:
long foo(long x);
std::enable_if<!std::is_same<int64_t, long>::value, int64_t>::type foo(int64_t);
[edit]
Or, with C++20:
long foo(long x);
int64_t foo(int64_t) requires (!std::is_same_v<int64_t, long>);
What is the difference between int64 and int64_t in C++?
int64_t is a Standard C++ type for a signed integer of exactly 64 bits. int64 is not a standard type.
The first C++ standard didn't have fixed-width types. Before int64_t was added to Standard C++, the different compilers all implemented a 64-bit type, but they used their own names for it (e.g. long long, __int64, etc.).
A likely series of events is that this project originally would typedef int64 to the 64-bit type for each compiler it was supported on. But once compilers all started to support Standard C++ better, or once the person who wrote the code found out about int64_t, the code was switched over to use the standard name.
int_least64_t vs int_fast64_t vs int64_t
On your platform, they're all names for the same underlying data type. On other platforms, they aren't.
int64_t is required to be EXACTLY 64 bits. On architectures with (for example) a 9-bit byte, it won't be available at all.

int_least64_t is the smallest data type with at least 64 bits. If int64_t is available, it will be used. But (for example) on a 9-bit-byte machine, this could be 72 bits.
int_fast64_t is the data type with at least 64 bits and the best arithmetic performance. It's there mainly for consistency with int_fast8_t and int_fast16_t, which on many machines will be 32 bits, not 8 or 16. In a few more years, there might be an architecture where 128-bit math is faster than 64-bit, but I don't think any exists today.
If you're porting an algorithm, you probably want to be using int_fast32_t, since it will hold any value your old 32-bit code can handle, but will be 64-bit if that's faster. If you're converting pointers to integers (why?) then use intptr_t.
Does C99 mandate a `int64_t` type be available, always?
Does the C99 standard mandate that a conforming compiler have a 64-bit int64_t defined (and usable)? Or is it optional, and just happens to be defined by all popular compilers?
The type is optional, in one sense, and conditionally required in a different sense. Specifically, C99 says,
The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. [...] These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two's complement representation, it shall define the corresponding typedef names.
Thus, int64_t
is optional in the sense that a conforming implementation is not required to provide any type that exactly matches the characteristics of an int64_t
, and if it doesn't, then it needn't (indeed, must not, according to another section) provide type int64_t
.
C99 does specify that there is a type long long int whose required minimum range necessitates a representation at least 64 bits wide. Now it is possible that in some implementation there is no signed integer type exactly 64 bits wide (for example, maybe int is 24 bits, long 48, and long long 96), and it is possible that there is a 64-value-bit integer type, but it contains padding bits or is not represented in two's complement. Such implementations could be fully conforming and yet not define an int64_t. In practice, though, there aren't any such implementations in common use today.
Why unsigned int64_t gives an error in C?
You cannot apply the unsigned modifier to the type int64_t. It only works on char, short, int, long, and long long.

You probably want to use uint64_t, which is the unsigned counterpart of int64_t.
Also note that int64_t et al. are defined in the header stdint.h, which you should include if you want to use these types.
Variable's type int32_t, int64_t, etc
There are no built-in types like Int32_t and Int64_t, and there is no magic suffix _t that you can add to an existing type.
The types Int32_t and Int64_t have to be defined somewhere in your code. They are probably using the Int32 and Int64 types in some way, but there is no magic involved just because the type names contain other type names. They could just as well be named ABigNumber and ABiggerNumber as far as the compiler is concerned.
Accessing hi and low part of int64_t with int32_t
First I should note that int64_t is a C99 feature, but older C89 compilers often already have support for double-word operations via extension types like long long or __int64. Check whether that's the case for your old compiler; if not, check whether your compiler has an extension to get the carry flag, like __builtin_addc() or __builtin_add_overflow(). If all of those fail, go to the next step.
Now %0 = %1 + %2; is not an assembly instruction in any architecture I know, but it looks more readable than the traditional mnemonic syntax. However, you don't even need to use assembly for multiword additions/subtractions like this. It's very simple to do directly in C, since
- basic operations in 2's complement don't depend on the signedness of the type, and
- if an overflow occurs then the result will be smaller than the operands (in unsigned), which we can use to get the carry bit.
Regarding the implementation, since your old compiler has no 64-bit type, there's no need to declare the union, and you can't do that either because int64_t wasn't declared before. You can just access the whole thing as a struct.
#if COMPILER_VERSION <= SOME_VERSION
typedef struct {
    uint32_t h;   /* high half */
    uint32_t l;   /* low half */
} uint64_t;

uint64_t add(uint64_t x, uint64_t y)
{
    uint64_t z;
    z.l = x.l + y.l;               /* add the low parts */
    z.h = x.h + y.h + (z.l < x.l); /* add the high parts and the carry */
    return z;
}
/* ... */
#else
uint64_t add(uint64_t x, uint64_t y)
{
    return x + y;
}
#endif
t = add(2, 3);

(That call form works in the #else branch, where uint64_t is a real integer type; with the struct version you would initialize the h and l members and pass the structs.)
If you need a signed type then a small change is needed:

typedef struct {
    int32_t h;   /* the high half carries the sign */
    uint32_t l;
} int64_t;

The add/sub/mul functions are still the same as in the unsigned version.
A smart modern compiler will recognize the z.l < x.l pattern and turn it into an add/adc pair on architectures that have them, so there's no comparison and/or branch there. If not, then unfortunately you still need to fall back to inline assembly.
See also
- Multiword addition in C
- Access the flags without inline assembly?
- Efficient 128-bit addition using carry flag
- An efficient way to do basic 128 bit integer calculations in C++?