long long int vs. long int vs. int64_t in C++

You don't need to go to 64-bit to see something like this. Consider int32_t on common 32-bit platforms: it might be typedef'ed as int or as long, but obviously only as one of the two at a time. int and long are, of course, distinct types.

It's not hard to see that there is no workaround which makes int == int32_t == long on 32-bit systems. For the same reason, there's no way to make long == int64_t == long long on 64-bit systems.

If you could, the possible consequences would be rather painful for code that overloaded foo(int), foo(long) and foo(long long) - suddenly they'd have two definitions for the same overload?!

The correct solution is that your template code usually should not be relying on a precise type, but on the properties of that type. The whole same_type logic could still be OK for specific cases:

long foo(long x);
std::tr1::disable_if<same_type<int64_t, long>::value, int64_t>::type foo(int64_t);

I.e., the overload foo(int64_t) is not defined when it's exactly the same as foo(long).

[edit]
With C++11, we now have a standard way to write this:

long foo(long x);
typename std::enable_if<!std::is_same<int64_t, long>::value, int64_t>::type foo(int64_t);

[edit]
Or C++20

long foo(long x);
int64_t foo(int64_t) requires (!std::is_same_v<int64_t, long>);

Why is long long not int64_t when they have the same size?

long and long long are built-in types defined by the core language.

int64_t is defined by the <cstdint> header (which in practice includes <stdint.h>). It's defined as an alias for some built-in type; in your case, as an alias for long.


Built-in types that happen to have the same size on some platform still need to be regarded as distinct types, in order not to create ambiguous calls to overloaded functions.

There are even built-in numeric types that are guaranteed to have the same size and signedness, yet are still distinct. Namely, unsigned char, char and signed char are three distinct types, each of size 1, and char must have the same signedness as one of the other two. Although I don't recommend doing it, you can technically overload functions on just this difference.


Re

Can I use long long and int64_t interchangeably on my machine, given that they are both integer types and have the same size?

That depends on what you mean by “interchangeably”.

Since int64_t with your compiler is an alias for long, you can't call a function expecting an int64_t* with a long long* pointer.

But long long is guaranteed to be at least 64 bits, and in this sense it's only with respect to readability that it matters which of long long and int64_t you choose.

Should I use long long or int64_t for portable code?

The types long long and unsigned long long are standard C and standard C++ types, each with at least 64 bits. All compilers I'm aware of provide these types, except possibly in a -pedantic mode; but in that case int64_t and uint64_t won't be available with pre-C++11 compilers either. On all of these systems <stdint.h> is available too. That is, as far as I can tell, it doesn't matter much how you spell the type.

The main goal of <stdint.h> is to provide the best match for a specific number of bits. If you need at least 64 bits but also want to take advantage of the fastest implementation of such a type, you'd use int_fast64_t or uint_fast64_t from <stdint.h> or <cstdint> (in the case of the latter, the names are defined in namespace std).

stdint types vs native types: long vs int64_t, int32_t

Yes, different CPU architectures have different sizes of fundamental types, and the fixed-width aliases map to different types. This differs across operating systems as well, not just architectures. This is normal, not a bug, and generally doesn't change between compiler versions.

To avoid this problem, either provide overloads for only fixed width types, or provide overloads for each fundamental type. Don't mix them.

In this case, it may be better to use a function template instead of overloads:

template<class T>
void write(T value)
{
    os << value;  // os: the surrounding class's output stream (std::ostream)
}


how about equivalency of char, uint8_t, int8_t?

All of those are always distinct types. std::uint8_t - when it is defined at all - is an alias of unsigned char, and std::int8_t - when defined - is an alias of signed char. Both of those are distinct from the type char.

What is the difference between int64 and int64_t in C++?

int64_t is a Standard C++ type for a signed integer of exactly 64 bits. int64 is not a standard type.

The first C++ standard didn't have fixed-width types. Before int64_t was added to Standard C++, the different compilers all implemented a 64-bit type but they used their own names for it (e.g. long long, __int64, etc.)

A likely series of events is that this project originally would typedef int64 to the 64-bit type for each compiler it was supported on. But once compilers all started to support Standard C++ better, or once the person who wrote the code found out about int64_t, the code was switched over to use the standard name.

overload ambiguous (int - int64_t vs int - double)

From [over.ics.user] table 12 we have

[Table 12 from [over.ics.user]: integral and floating-point promotions share the Promotion rank; integral, floating-point and floating-integral conversions share the Conversion rank]

As you can see, integral and floating-point promotions have the same rank, and integral and floating-point conversions have the same rank.

Now we need to determine whether 5 -> int64_t is an integer promotion or an integer conversion. If we check [conv.prom]/1 we find

A prvalue of an integer type other than bool, char16_t, char32_t, or wchar_t whose integer conversion rank (4.13) is less than the rank of int can be converted to a prvalue of type int if int can represent all the values of the source type; otherwise, the source prvalue can be converted to a prvalue of type unsigned int.

The promotion stops at int, so we have to look at [conv.integral]/1, which covers integral conversion:

A prvalue of an integer type can be converted to a prvalue of another integer type. A prvalue of an unscoped enumeration type can be converted to a prvalue of an integer type.

Which is what is going on here. So 5 -> int64_t is an integral conversion and 5 -> double is a floating-integral conversion; both are ranked the same, so the overload resolution is ambiguous.

int_least64_t vs int_fast64_t vs int64_t

On your platform, they're all names for the same underlying data type. On other platforms, they aren't.

int64_t is required to be EXACTLY 64 bits. On architectures with (for example) a 9-bit byte, it won't be available at all.

int_least64_t is the smallest data type with at least 64 bits. If int64_t is available, it will be used. But (for example) with a 9-bit byte machine, this could be 72 bits.

int_fast64_t is the data type with at least 64 bits and the best arithmetic performance. It's there mainly for consistency with int_fast8_t and int_fast16_t, which on many machines will be 32 bits, not 8 or 16. In a few more years, there might be an architecture where 128-bit math is faster than 64-bit, but I don't think any exists today.


If you're porting an algorithm, you probably want to be using int_fast32_t, since it will hold any value your old 32-bit code can handle, but will be 64-bit if that's faster. If you're converting pointers to integers (why?) then use intptr_t.


