Is "Long Long" = "Long Long Int" = "Long Int Long" = "Int Long Long"

What is the difference between long, long long, long int, and long long int in C++?

long and long int are identical. So are long long and long long int. In both cases, the int is optional.
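This is easy to check mechanically. Below is a minimal sketch in C11 (the same equivalence holds in C as in C++): _Generic selects an association based on the type of its controlling expression, so an object declared with the long specifier matching the long int association shows that the two spellings name one type.

#include <stdio.h>

int main(void) {
    long a = 0;       /* declared with the "long" specifier      */
    long long b = 0;  /* declared with the "long long" specifier */

    /* Each expression selects the association spelled with the int keyword. */
    puts(_Generic(a, long int: "long is long int", default: "long is something else"));
    puts(_Generic(b, long long int: "long long is long long int", default: "something else"));
    return 0;
}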

As to the difference between the two sets, the C++ standard mandates minimum ranges for each, and that long long is at least as wide as long.

One of the controlling parts of the standard (C++11, though this has been the case for a long time) is 3.9.1 Fundamental types, paragraph 2 (a later paragraph gives similar rules for the unsigned integral types):

There are five standard signed integer types: signed char, short int, int, long int, and long long int. In this list, each type provides at least as much storage as those preceding it in the list.

There's also Table 9 in 7.1.6.2 Simple type specifiers, which shows the "mappings" of the specifiers to actual types (showing that the int is optional), a section of which is shown below:

Specifier(s)      Type
-------------     -------------
long long int     long long int
long long         long long int
long int          long int
long              long int

Note the distinction there between the specifier and the type. The specifier is how you tell the compiler what the type is, but you can use different specifiers to end up at the same type.

Hence long on its own is neither a type nor a modifier (as your question posits); it's simply a specifier for the long int type. Ditto for long long being a specifier for the long long int type.

Although the C++ standard itself doesn't specify the minimum ranges of integral types, it does cite C99, in 1.2 Normative references, as applying. Hence the minimal ranges as set out in C99 5.2.4.2.1 Sizes of integer types <limits.h> are applicable.
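Those minimums are easy to verify against <limits.h> (or <climits> in C++). A minimal C11 sketch using the C99 5.2.4.2.1 magnitudes:

#include <limits.h>

/* These assertions must hold on every conforming implementation. */
_Static_assert(INT_MAX >= 32767, "INT_MAX must be at least 32767");
_Static_assert(LONG_MAX >= 2147483647L, "LONG_MAX must be at least 2147483647");
_Static_assert(LLONG_MAX >= 9223372036854775807LL, "LLONG_MAX must be at least 2^63 - 1");

int main(void) { return 0; }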


As for long double, that's actually a floating-point type rather than an integer type. Like the integral types, it's required to have at least as much precision as double and to provide a superset of the values of that type (meaning at least those values, not necessarily more of them).
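As a minimal sketch of that relationship, the <float.h> macros report the precision and range each floating type actually gets on the current implementation:

#include <float.h>
#include <stdio.h>

int main(void) {
    /* long double must offer at least the precision and range of double. */
    printf("double:      %d significant decimal digits, max %e\n", DBL_DIG, DBL_MAX);
    printf("long double: %d significant decimal digits, max %Le\n", LDBL_DIG, LDBL_MAX);
    return 0;
}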

What is the difference between long long and long int

There are several shorthands for built-in types.

  • short is (signed) short int
  • long is (signed) long int
  • long long is (signed) long long int.

On many systems, short is 16-bit, long is 32-bit and long long is 64-bit. However, keep in mind that the standard only requires

sizeof(char) == 1
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long) <= sizeof(long long)

A consequence of this is that, on an exotic system, sizeof(long long) == 1 is possible; that just requires a char of at least 64 bits (CHAR_BIT >= 64).
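A minimal sketch that prints the actual sizes on whatever implementation compiles it; only the ordering above is guaranteed, the exact numbers vary by platform:

#include <stdio.h>

int main(void) {
    printf("char:      %zu\n", sizeof(char));   /* always 1, by definition */
    printf("short:     %zu\n", sizeof(short));
    printf("int:       %zu\n", sizeof(int));
    printf("long:      %zu\n", sizeof(long));
    printf("long long: %zu\n", sizeof(long long));
    return 0;
}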

What's the difference between long long and long

Going by the standard, all that's guaranteed is:

  • int must be at least 16 bits
  • long must be at least 32 bits
  • long long must be at least 64 bits

On major 32-bit platforms:

  • int is 32 bits
  • long is 32 bits as well
  • long long is 64 bits

On major 64-bit platforms:

  • int is 32 bits
  • long is either 32 or 64 bits
  • long long is 64 bits as well

If you need a specific integer size for a particular application, rather than trusting the compiler to pick the size you want, #include <stdint.h> (or <cstdint>) so you can use these types:

  • int8_t and uint8_t
  • int16_t and uint16_t
  • int32_t and uint32_t
  • int64_t and uint64_t

You may also be interested in #include <stddef.h> (or <cstddef>):

  • size_t
  • ptrdiff_t
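A minimal sketch using a few of these types; the PRI* macros from <inttypes.h> (which includes <stdint.h>) supply the matching printf formats:

#include <inttypes.h>
#include <stddef.h>
#include <stdio.h>

int main(void) {
    int64_t big = INT64_C(9000000000);    /* exactly 64 bits, whatever long is */
    uint32_t mask = UINT32_C(0xFFFFFFFF); /* exactly 32 bits                   */
    size_t n = sizeof mask;               /* the type sizeof yields            */
    ptrdiff_t d = 0;                      /* the type of pointer subtraction   */

    printf("big  = %" PRId64 "\n", big);
    printf("mask = %" PRIu32 "\n", mask);
    printf("n    = %zu, d = %td\n", n, d);
    return 0;
}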

Is long long a type in C?

If you read the right bits of the standard carefully enough, you find that the monster declaration in the question is valid, even if implausible.

The 'right bits' include:

6.2.5 Types


There are five standard signed integer types, designated as signed char, short int, int, long int, and long long int. (These and other types may be designated in several additional ways, as described in 6.7.2.)

For each of the signed integer types, there is a corresponding (but different) unsigned
integer type (designated with the keyword unsigned) that uses the same amount of
storage (including sign information) and has the same alignment requirements.

6.7.2 Type specifiers


At least one type specifier shall be given in the declaration specifiers in each declaration,
and in the specifier-qualifier list in each struct declaration and type name. Each list of
type specifiers shall be one of the following multisets (delimited by commas, when there
is more than one multiset per item); the type specifiers may occur in any order, possibly
intermixed with the other declaration specifiers.

  • long long, signed long long, long long int, or
    signed long long int
  • unsigned long long, or unsigned long long int

Other declaration specifiers include storage classes (static and _Thread_local in the example), and type qualifiers (volatile and _Atomic).

6.7 Declarations

Syntax

declaration:
     declaration-specifiers init-declarator-list_opt ;
     static_assert-declaration

declaration-specifiers:
     storage-class-specifier declaration-specifiers_opt
     type-specifier declaration-specifiers_opt
     type-qualifier declaration-specifiers_opt
     function-specifier declaration-specifiers_opt
     alignment-specifier declaration-specifiers_opt

Also, as noted by Olaf in a comment:

6.11.5 Storage-class specifiers


The placement of a storage-class specifier other than at the beginning of the declaration
specifiers in a declaration is an obsolescent feature.

It is also eccentric to split up the integer type keywords (the type specifier). A more orthodox version of the declaration would be:

static _Thread_local _Atomic const volatile unsigned long long int x = 10;

(or it might drop the int).
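For illustration, here is a hedged sketch (not the exact declaration from the question): the grammar above lets the type specifiers be interleaved with qualifiers and storage-class specifiers, so a scrambled spelling like the following declares the same kind of object as the orthodox version, given a C11 compiler that supports atomics and thread-local storage.

/* "unsigned long long" is split up and intermixed with the qualifiers;
   the declaration is still valid, just hard to read. */
static _Thread_local const long volatile _Atomic unsigned long x = 10;

int main(void) { return (int)(x - 10); }   /* evaluates to 0 */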

Difference between short, long and long long int in C programming?

C was originally written for 16-bit machines, where the fastest and most convenient size of integer to work with was 16 bits. This was the original int type. Sometimes, programmers needed 32-bit numbers, so those became the long int type.

In due course, people ported C to 32-bit machines where the most convenient word size was 32 bits. They wanted to keep using int to mean the native word size, because just about every C program in the real world has int i; in it somewhere. Sometimes, though, they also needed 16-bit numbers to save space, so those became the short int type.

Time passed, and C needed to be ported to 64-bit machines. By then, there were a lot of programs in existence that assumed long was exactly 32 bits wide, and unfortunately, a lot of programs that also assumed that a long was the same size as a pointer, an IPv4 address, a file offset, a timestamp, etc. Good code uses types such as size_t, uintptr_t, and off_t instead, but old versions of the system libraries defined their library functions to use long, so a lot of legacy code had to as well. Since C99 came out, there have also been types such as int32_t in the standard library to specify exact widths, and Unix and Windows have had them under different names for a while.

As a consequence of the types being used to mean contradictory things, compilers today give programmers the choice of making long either 32 or 64 bits wide, and 64-bit compilers let int be either 32 or 64 bits wide, too. At least one real compiler defined int as 64 bits (the native word length) but long as 32 bits (so the system libraries and code would still work). The standards committee has prohibited that: long long int is guaranteed to be at least as wide as long int, and long int at least as wide as int.

Because a lot of programs couldn't redefine long as 64 bits wide without breaking, C needed a new type that meant at least 64 bits. By then, the standards committee was reluctant to introduce new keywords outside a header file, so it reused existing ones instead and named the type long long int.

In brief, you shouldn’t make assumptions about what int and long mean beyond that int is at least 16 bits wide and fast, long is at least 32 bits wide, and on a C11 compiler long is at least as wide as int but no wider than long long. For general purposes, use int. If you need an array index, use size_t or ptrdiff_t. If you want to be sure you can hold a number over 32,767, and seriously expect you might need to run on some 16-bit machine someday, use long or the fast type from <inttypes.h>. If you’re making a system call, use the same type as its arguments. If you’re storing a pointer in an integer type, use uintptr_t. If you need a number at least 64 bits wide, use long long, and if you know exactly the width you need, use the exact-width type.
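A minimal sketch of that advice in practice (the variable names are just for illustration):

#include <inttypes.h>
#include <stddef.h>
#include <stdio.h>

int main(void) {
    int count = 42;                       /* general-purpose value            */
    size_t index = 3;                     /* array index / object size        */
    int_fast32_t total = 100000;          /* at least 32 bits, and fast       */
    long long big = 1LL << 40;            /* needs at least 64 bits           */
    int32_t exact = INT32_C(123456);      /* exactly 32 bits, from <stdint.h> */
    uintptr_t where = (uintptr_t)&count;  /* a pointer stored in an integer   */

    printf("%d %zu %" PRIdFAST32 " %lld %" PRId32 " 0x%" PRIxPTR "\n",
           count, index, total, big, exact, where);
    return 0;
}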

Why negative of a long int gives overflow?

Your code has implementation-defined behavior in both cases because the expression has type long long int on your 32-bit system and its value exceeds the range of type long.

The C Standard is clear about this:

6.3.1.3 Signed and unsigned integers

  1. When a value with integer type is converted to another integer type other than _Bool, if the value can be represented by the new type, it is unchanged.
  2. Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
  3. Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.

The confusion may come from the L suffix, which does not force the constant to have type long int but only makes long int the smallest type it can have. It would be helpful if compilers flagged such confusing side effects, but those diagnostics are not mandated by the C Standard.

Here is a program to illustrate this:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Map an expression to a string naming its type, using C11 _Generic. */
#define typeof(X) _Generic((X), \
    long double:            "long double", \
    double:                 "double", \
    float:                  "float", \
    unsigned long long int: "unsigned long long int", \
    long long int:          "long long int", \
    unsigned long int:      "unsigned long int", \
    long int:               "long int", \
    unsigned int:           "unsigned int", \
    int:                    "int", \
    unsigned short:         "unsigned short", \
    short:                  "short", \
    unsigned char:          "unsigned char", \
    signed char:            "signed char", \
    char:                   "char", \
    bool:                   "bool", \
    default:                "other")

int main() {
    /* Print the spelling, type and size of each constant expression. */
#define TEST(x) printf("%8s has type %s and size %zu\n", #x, typeof(x), sizeof(x))
    TEST(4294967295);
    TEST(4294967295U);
    TEST(4294967295L);
    TEST(4294967295UL);
    TEST(4294967295LL);
    TEST(4294967295ULL);
    TEST(0xffffffff);
    TEST(0xffffffffU);
    TEST(0xffffffffL);
    TEST(0xffffffffUL);
    TEST(0xffffffffLL);
    TEST(0xffffffffULL);
    TEST(2147483647);
    TEST(2147483648);
    TEST(2147483647+1);
    TEST(-2147483647);
    TEST(-2147483648);
    TEST(-2147483647-1);

    return 0;
}

Output on a 32-bit system (with 32-bit int and long):

4294967295 has type long long int and size 8
4294967295U has type unsigned int and size 4
4294967295L has type long long int and size 8
4294967295UL has type unsigned long int and size 4
4294967295LL has type long long int and size 8
4294967295ULL has type unsigned long long int and size 8
0xffffffff has type unsigned int and size 4
0xffffffffU has type unsigned int and size 4
0xffffffffL has type unsigned long int and size 4
0xffffffffUL has type unsigned long int and size 4
0xffffffffLL has type long long int and size 8
0xffffffffULL has type unsigned long long int and size 8
2147483647 has type int and size 4
2147483648 has type long long int and size 8
2147483647+1 has type int and size 4
-2147483647 has type int and size 4
-2147483648 has type long long int and size 8
-2147483647-1 has type int and size 4
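If the goal was simply to hold that value, a hedged sketch of the portable alternatives (assuming 32-bit long, as on the system above) is to use a type whose range is guaranteed to contain it, or the platform's own limit macro:

#include <limits.h>
#include <stdio.h>

int main(void) {
    long long wide = -2147483648LL;  /* long long always reaches at least -(2^63 - 1) */
    long least = LONG_MIN;           /* whatever the most negative long is here       */

    printf("wide  = %lld\n", wide);
    printf("least = %ld\n", least);
    return 0;
}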

