Difference Between an Int and a Long in C++

What is the difference between an int and a long in C++?

It is implementation-dependent.

For example, under Windows int and long are the same size, but on Alpha systems a long was 64 bits whereas an int was 32 bits. This article covers the rules for the Intel C++ compiler on various platforms. To summarize:

OS         Architecture   Size of long
---------  -------------  ------------
Windows    IA-32          4 bytes
Windows    Intel 64       4 bytes
Windows    IA-64          4 bytes
Linux      IA-32          4 bytes
Linux      Intel 64       8 bytes
Linux      IA-64          8 bytes
Mac OS X   IA-32          4 bytes
Mac OS X   Intel 64       8 bytes
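
If you want to see what your own toolchain does, a small sketch like the following prints the sizes for the current platform (the output will differ from system to system):

#include <iostream>

int main() {
    // sizeof reports sizes in bytes; the results depend on compiler and platform
    std::cout << "sizeof(int)       = " << sizeof(int)       << '\n';
    std::cout << "sizeof(long)      = " << sizeof(long)      << '\n';
    std::cout << "sizeof(long long) = " << sizeof(long long) << '\n';
}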

Difference between short, long and long long int in C programming?

C was originally written for 16-bit machines, where the fastest and most convenient size of integer to work with was 16 bits. This was the original int type. Sometimes programmers needed 32-bit numbers, so those became the long int type.

In due course, people ported C to 32-bit machines where the most convenient word size was 32 bits. They wanted to keep using int to mean the native word size, because nearly every C program in the real world has int i; in it somewhere. But sometimes they also needed 16-bit numbers to save space, so those became the short int type.

Time passed, and C needed to be ported to 64-bit machines. By then, there were a lot of programs in existence that assumed long was exactly 32 bits wide and, unfortunately, a lot that also assumed a long was the same size as a pointer, an IPv4 address, a file offset, a timestamp, and so on. Good code uses types such as size_t, uintptr_t, and off_t instead, but old versions of the system libraries defined their library functions to use long, so a lot of legacy code had to as well. Since C99 came out, there have also been types such as int32_t in the standard library to specify exact widths, and Unix and Windows had their own equivalents under different names for a while before that.

As a consequence of the types being used to mean contradictory things, compilers today give programmers the choice between long being 32 and 64 bits wide, and 64-bit compilers let int be either 32 or 64 bits wide, too. At least one real compiler defined int as 64 bits (the native word length) but long as 32 bits (so the system libraries and code would still work). The standards committee prohibited that in C11: long long int is now guaranteed to be at least as wide as long int, and long int at least as wide as int.

Because a lot of programs couldn't redefine long as 64 bits without breaking, C needed a new type that was (at least) 64 bits wide. By then, the standards committee was reluctant to introduce new keywords outside a header file, so they reused existing ones instead and named it long long int.

In brief, you shouldn’t make assumptions about what int and long mean beyond that int is at least 16 bits wide and fast, long is at least 32 bits wide, and on a C11 compiler long is at least as wide as int but no wider than long long. For general purposes, use int. If you need an array index, use size_t or ptrdiff_t. If you want to be sure you can hold a number over 32,767, and seriously expect you might need to run on some 16-bit machine someday, use long or the fast type from <inttypes.h>. If you’re making a system call, use the same type as its arguments. If you’re storing a pointer in an integer type, use uintptr_t. If you need a number at least 64 bits wide, use long long, and if you know exactly the width you need, use the exact-width type.
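
As a rough sketch of those guidelines (the variable names are illustrative; the type choices are the point), this compiles as both C and C++:

#include <stdint.h>   // exact-width and fast types (C99 / C++11)
#include <stddef.h>   // size_t, ptrdiff_t

int count;             // general-purpose: at least 16 bits and fast
size_t index;          // array indexing and object sizes
long big_enough;       // must hold values over 32,767, even on a 16-bit target
int_fast32_t fast32;   // the "fast at-least-32-bit" type from the headers above
uintptr_t addr;        // an integer wide enough to round-trip a pointer
long long huge;        // at least 64 bits wide
int32_t exact;         // exactly 32 bits, when the width itself matters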

Long Vs. Int C/C++ - What's The Point?

When writing in C or C++, the size of every datatype is architecture- and compiler-specific. On one system int is 32 bits, but you can find systems where it is 16 or 64; the width is not fixed by the language, so it's up to the compiler.

As for long and int, the distinction comes from the days when the standard integer was 16 bits and long was a 32-bit integer - it really was longer than an int.
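
If your code really does depend on particular widths, one way to make that assumption explicit is a compile-time check. A minimal sketch (the asserted sizes are assumptions about your target, not language guarantees):

#include <climits>

// Fail the build loudly on any platform where these assumptions break.
static_assert(sizeof(int) * CHAR_BIT == 32, "this code assumes a 32-bit int");
static_assert(sizeof(long) * CHAR_BIT >= 32, "this code assumes long has at least 32 bits");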

What is the difference between long, long long, long int, and long long int in C++?

long and long int are identical. So are long long and long long int. In both cases, the int is optional.

As to the difference between the two sets, the C++ standard mandates minimum ranges for each, and that long long is at least as wide as long.

The controlling parts of the standard (C++11, but this has been around for a long time) are, for one, 3.9.1 Fundamental types, section 2 (a later section gives similar rules for the unsigned integral types):

There are five standard signed integer types: signed char, short int, int, long int, and long long int. In this list, each type provides at least as much storage as those preceding it in the list.

There's also a table 9 in 7.1.6.2 Simple type specifiers, which shows the "mappings" of the specifiers to actual types (showing that the int is optional), a section of which is shown below:

Specifier(s)     Type
--------------   --------------
long long int    long long int
long long        long long int
long int         long int
long             long int

Note the distinction there between the specifier and the type. The specifier is how you tell the compiler what the type is, but you can use different specifiers to end up at the same type.

Hence long on its own is neither a type nor a modifier as your question posits; it's simply a specifier for the long int type. Ditto for long long being a specifier for the long long int type.
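
You can verify that the different specifiers name one and the same type, for example:

#include <type_traits>

// "long" and "long int" are the same type, merely spelled differently;
// likewise "long long" and "long long int".
static_assert(std::is_same<long, long int>::value, "one type, two spellings");
static_assert(std::is_same<long long, long long int>::value, "one type, two spellings");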

Although the C++ standard itself doesn't specify the minimum ranges of integral types, it does cite C99, in 1.2 Normative references, as applying. Hence the minimal ranges as set out in C99 5.2.4.2.1 Sizes of integer types <limits.h> are applicable.
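
Concretely, those C99 minima guarantee at least the following magnitudes, which always hold on a conforming implementation:

#include <climits>

// Minimum magnitudes required by C99 5.2.4.2.1 (cited by C++11):
static_assert(INT_MAX   >= 32767,                 "int must span at least 16 bits");
static_assert(LONG_MAX  >= 2147483647L,           "long must span at least 32 bits");
static_assert(LLONG_MAX >= 9223372036854775807LL, "long long must span at least 64 bits");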


In terms of long double, that's actually a floating-point type rather than an integer one. Like the integral types, it's required to have at least as much precision as double and to provide a superset of the values of that type (meaning at least those values, not necessarily more values).
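
For instance, this sketch shows how much mantissa precision double and long double actually get on your platform:

#include <iostream>
#include <limits>

int main() {
    // digits is the number of mantissa bits; typical results are 53 for double
    // and 53, 64, or 113 for long double, depending on the platform.
    std::cout << "double:      " << std::numeric_limits<double>::digits << " bits\n";
    std::cout << "long double: " << std::numeric_limits<long double>::digits << " bits\n";
}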

Difference between long and int data types

From this reference:

An int was originally intended to be the "natural" word size of the processor. Many modern processors can handle different word sizes with equal ease.

Also, this bit:

On many (but not all) C and C++ implementations, a long is larger than an int. Today's most popular desktop platforms, such as Windows and Linux, run primarily on 32-bit processors and most compilers for these platforms use a 32-bit int which has the same size and representation as a long.

What is difference between long int a=2 and int a=2L?

In C++, all variables are declared with a type. C++ forces [1] you to specify the type explicitly, but doesn't force you to initialize the variable at all.

long int a = 2;
long int b = 2L;
long int c;

This code makes 3 variables of the same type long int.

int a = 2;
int b = 2L;
int c;

This code makes 3 variables of the same type int.

The idea of a type is roughly "the set of all values the variable can take". It doesn't (and cannot) depend on the initial value of the variable, whether it's 2 or 2L or anything else.

So, if you have two variables of different types but the same value

int a = 2L;
long int b = 2;

The difference between them is what they can do further in the code. For example:

a += 2147483647; // most likely, overflow
b += 2147483647; // probably calculates correctly

The type of the variable won't change from the point it's defined onwards.

Another example:

int x = 2.5;

Here the type of x is int, and it's initialized to 2. Even though the initializer has a different type, C++ regards the declared type of x as "more important".
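
If you'd rather have the compiler reject such lossy initializations, C++11 brace initialization disallows narrowing conversions:

int x = 2.5;   // OK: x is initialized to 2, the fraction is silently dropped
int y{2.5};    // error: narrowing conversion from double to int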


[1] BTW C++ has support for "type inference"; you can use it if you want the type of the initializer to be important:

auto a = 2L; // "a" has type "long int"
auto b = 2; // "b" has type "int"

Comparing int with long and others

int/long compare always works. The two operands are converted to a common type, in this case long, and every int can be converted to long with no problems.

int ii = ...;
long ll = ...;
if (ii < ll)
doSomethings();

unsigned/long compare always works if long's range exceeds unsigned's. If the unsigned range were [0...65535] and long's were [-2G...2G-1], then the operands are converted to long, and every unsigned value can be converted to long with no problems.

unsigned uu16 = ...;
long ll32 = ...;
if (uu16 < ll32)
doSomethings();

unsigned/long compare has trouble when long's range does not exceed unsigned's. If the unsigned range were [0...4G-1] and long's were [-2G...2G-1], then the operands are converted to unsigned long, a common type that does not encompass both ranges, and problems ensue.

unsigned uu32 = ...;
long ll32 = ...;

// problem: ll32 is converted to unsigned long, so a negative ll32
// turns into a huge positive value and the test gives the wrong answer
if (uu32 < ll32)
doSomethings();

// corrected solution: the guard ensures uu32 fits in long,
// so the cast is safe and the compare happens between two longs
if (uu32 <= LONG_MAX && (long)uu32 < ll32)
doSomethings();

// wrong solution: when ll32 < 0, the mathematical answer is false,
// yet this test comes out true
if (ll32 < 0 || uu32 < ll32)
doSomethings();

If type long long includes all of the range of unsigned, the code could do the compare with at least long long width, as below.

#include <limits.h>

unsigned uu = ...;
long ll = ...;
#if LONG_MAX >= UINT_MAX
if (uu < ll)                           // long can hold every unsigned value
#elif LLONG_MAX >= UINT_MAX
if (uu < ll*1LL)                       // widen both sides to long long
#else
if (uu <= LONG_MAX && (long)uu < ll)   // fall back to the guarded compare
#endif
doSomethings();

What is the difference between int and long?

According to the standard, int is guaranteed to be at least 16 bits and long at least 32 bits. On most 32-bit compilers they are the same, both 32 bits wide. But you shouldn't count on this, as there are 64-, 16-, and even 8-bit compilers too.

Difference between long and int in C#?

An int (aka System.Int32 within the runtime) is always a signed 32-bit integer on any platform, and a long (aka System.Int64) is always a signed 64-bit integer on any platform. So you can't cast a long with a value above Int32.MaxValue or below Int32.MinValue to an int without losing data.


