Long vs. Int C/C++ - What's the Point?

When writing in C or C++, the size of every data type is architecture- and compiler-specific. On one system an int is 32 bits, but you can find systems where it is 16 or 64; the standard does not pin the size down, so it is up to the compiler.

As for long and int, the distinction dates from the days when the standard integer was 16 bits and long was a 32-bit integer - it really was longer than an int.
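As a quick sanity check, a small program can report what the compiler at hand actually uses; this is a minimal sketch relying only on the standard <climits> and <cstdio> headers:

    #include <climits>
    #include <cstdio>

    int main() {
        // Both the sizes and the limits below are implementation-defined;
        // the standard only guarantees minimums.
        printf("int:  %zu bytes, INT_MAX  = %d\n",  sizeof(int),  INT_MAX);
        printf("long: %zu bytes, LONG_MAX = %ld\n", sizeof(long), LONG_MAX);
        return 0;
    }

On a typical 64-bit Linux build this reports 4 bytes for int and 8 for long; on 64-bit Windows both lines report 4 bytes.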

Difference between long and int data types

From this reference:

An int was originally intended to be the "natural" word size of the processor. Many modern processors can handle different word sizes with equal ease.

Also, this bit:

On many (but not all) C and C++ implementations, a long is larger than an int. Today's most popular desktop platforms, such as Windows and Linux, run primarily on 32 bit processors and most compilers for these platforms use a 32 bit int which has the same size and representation as a long.

What is the difference between an int and a long in C++?

It is implementation dependent.

For example, under Windows they are the same size, but on Alpha systems a long was 64 bits whereas an int was 32 bits. This article covers the rules for the Intel C++ compiler on various platforms. To summarize:

    OS         Architecture   Size of long
    --------   ------------   ------------
    Windows    IA-32          4 bytes
    Windows    Intel 64       4 bytes
    Windows    IA-64          4 bytes
    Linux      IA-32          4 bytes
    Linux      Intel 64       8 bytes
    Linux      IA-64          8 bytes
    Mac OS X   IA-32          4 bytes
    Mac OS X   Intel 64       8 bytes
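You can check which row of the table applies to any given build with a one-liner; a minimal sketch (LP64-style systems report 8 bytes, ILP32 and LLP64 systems report 4):

    #include <cstdio>

    int main() {
        // 8 bytes corresponds to the 64-bit Linux / Mac OS X rows above,
        // 4 bytes to the Windows and 32-bit rows.
        printf("sizeof(long) = %zu bytes\n", sizeof(long));
        return 0;
    }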

What is the difference between long, long long, long int, and long long int in C++?

long and long int are identical. So are long long and long long int. In both cases, the int is optional.

As to the difference between the two sets, the C++ standard mandates minimum ranges for each, and that long long is at least as wide as long.

The controlling parts of the standard (C++11, but this has been around for a long time) are, for one, 3.9.1 Fundamental types, section 2 (a later section gives similar rules for the unsigned integral types):

There are five standard signed integer types: signed char, short int, int, long int, and long long int. In this list, each type provides at least as much storage as those preceding it in the list.

There's also a table 9 in 7.1.6.2 Simple type specifiers, which shows the "mappings" of the specifiers to actual types (showing that the int is optional), a section of which is shown below:

    Specifier(s)    Type
    -------------   -------------
    long long int   long long int
    long long       long long int
    long int        long int
    long            long int

Note the distinction there between the specifier and the type. The specifier is how you tell the compiler what the type is but you can use different specifiers to end up at the same type.

Hence long on its own is neither a type nor a modifier as your question posits, it's simply a specifier for the long int type. Ditto for long long being a specifier for the long long int type.

Although the C++ standard itself doesn't specify the minimum ranges of integral types, it does cite C99, in 1.2 Normative references, as applying. Hence the minimal ranges as set out in C99 5.2.4.2.1 Sizes of integer types <limits.h> are applicable.
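Both the specifier/type equivalence and the minimum widths can be verified at compile time; a minimal sketch assuming a C++11 compiler:

    #include <climits>
    #include <type_traits>

    // "long" and "long int" are the same type; likewise "long long" and "long long int".
    static_assert(std::is_same<long, long int>::value, "long == long int");
    static_assert(std::is_same<long long, long long int>::value, "long long == long long int");

    // The C99 ranges cited by the C++ standard imply at least 32 bits for long
    // and at least 64 bits for long long.
    static_assert(sizeof(long) * CHAR_BIT >= 32, "long is at least 32 bits");
    static_assert(sizeof(long long) * CHAR_BIT >= 64, "long long is at least 64 bits");

    int main() {}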


In terms of long double, that's actually a floating point value rather than an integer. Similarly to the integral types, it's required to have at least as much precision as a double and to provide a superset of the values of that type (meaning at least those values, not necessarily more values).
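The same kind of check works for the floating-point case, again assuming C++11:

    #include <limits>

    // long double must be able to represent every value a double can,
    // so its precision (mantissa digits) is at least that of double.
    static_assert(std::numeric_limits<long double>::digits >=
                  std::numeric_limits<double>::digits,
                  "long double is at least as precise as double");

    int main() {}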

What is the historical context for long and int often being the same size?

From the C99 rationale (PDF) on section 6.2.5:

[...] In the 1970s, 16-bit C (for the PDP-11) first represented file information with 16-bit integers, which were rapidly obsoleted by disk progress. People switched to a 32-bit file system, first using int[2] constructs which were not only awkward, but also not efficiently portable to 32-bit hardware.

To solve the problem, the long type was added to the language, even though this required C on the PDP-11 to generate multiple operations to simulate 32-bit arithmetic. Even as 32-bit minicomputers became available alongside 16-bit systems, people still used int for efficiency, reserving long for cases where larger integers were truly needed, since long was noticeably less efficient on 16-bit systems. Both short and long were added to C, making short available for 16 bits, long for 32 bits, and int as convenient for performance. There was no desire to lock the numbers 16 or 32 into the language, as there existed C compilers for at least 24- and 36-bit CPUs, but rather to provide names that could be used for 32 bits as needed.

PDP-11 C might have been re-implemented with int as 32-bits, thus avoiding the need for long; but that would have made people change most uses of int to short or suffer serious performance degradation on PDP-11s. In addition to the potential impact on source code, the impact on existing object code and data files would have been worse, even in 1976. By the 1990s, with an immense installed base of software, and with widespread use of dynamic linked libraries, the impact of changing the size of a common data object in an existing environment is so high that few people would tolerate it, although it might be acceptable when creating a new environment. Hence, many vendors, to avoid namespace conflicts, have added a 64-bit integer to their 32-bit C environments using a new name, of which long long has been the most widely used. [...]

Long vs Integer, long vs int, what to use and when?

In Java, Long is the object (boxed) form of long, and Integer is the object form of int.

The long uses 64 bits. The int uses 32 bits, and so can only hold numbers up to ±2 billion (-2^31 to +2^31 - 1).

You should use long and int, except where you need to make use of methods inherited from Object, such as hashCode(). The java.util collections methods usually use the boxed (Object-wrapped) versions, because they need to work for any Object, and a primitive type, like int or long, is not an Object.

Another difference is that long and int are passed by value, whereas Long and Integer are passed as object references (the reference itself being passed by value), like all non-primitive Java types. So if it were possible to modify a Long or Integer (it's not; they're immutable without using JNI code), there would be another reason to use one over the other.

A final difference is that a Long or Integer could be null.

Why does long int have the same size as int? Does this modifier work at all?

The reason that MS chose to make long 32 bits even on a 64-bit system is that the existing Windows API, for historical reasons, uses a mixture of int and long for similar things, with the expectation that these are 32-bit values (some of this goes back to the days when Windows was a 16-bit system). So to ease the conversion of old code to the new 64-bit architecture, they chose to keep long at 32 bits, so that applications mixing int and long in various places would still compile.

There is nothing in the C++ standard that dictates that a long should be bigger than an int (it certainly isn't on most 32-bit systems). All the standard says is that sizeof(short) <= sizeof(int) <= sizeof(long) - and that short is at least 16 bits, if memory serves [not necessarily expressed as "shall be at least 16 bits"; I think it specifies the minimum range of values].
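Expressed as compile-time checks, the guarantees look like this (a sketch assuming C++11; the commented-out assertion is the one the standard does not make, and it would in fact fail on 64-bit Windows):

    #include <climits>

    // Guaranteed: minimum ranges and the ordering of sizes.
    static_assert(sizeof(short) * CHAR_BIT >= 16, "short covers at least 16 bits");
    static_assert(sizeof(short) <= sizeof(int),   "short <= int");
    static_assert(sizeof(int)   <= sizeof(long),  "int <= long");

    // NOT guaranteed by the standard, and false under 64-bit Windows (LLP64):
    // static_assert(sizeof(long) > sizeof(int), "long strictly larger than int");

    int main() {}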

C++ int versus long

What is faster and what is not is something that is becoming harder to predict every day. The reason is that processors are no longer "simple", and with all the complex dynamics and algorithms behind them the final speed may follow rules that are totally counter-intuitive.

The only way out is to measure and decide. Also note that what is faster depends on small details, and even for compatible CPUs what is an optimization for one can be a pessimization for another. For very critical parts, some software tries different approaches and checks their timings at run time during program initialization.

That said, as a general rule the fastest integer you can have is int. You should use other integer types only if you need them specifically (e.g. if long is larger and you need the extra range, or if short is smaller but sufficient and you need to save memory).

Even better, if you need a specific size, use a fixed-width standard type or add a typedef instead of sprinkling long around where you happen to need it. That way it will be easier to support different compilers and architectures, and the intent will also be clearer for whoever reads the code in the future.
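A minimal sketch of that advice using the fixed-width and "fast" types from <cstdint> (C++11 / C99); the alias name file_offset_t is made up here purely for illustration:

    #include <cstdint>
    #include <cstdio>

    // Exact widths when the layout matters (file formats, protocols, hashes):
    std::int32_t checksum_seed = 0x12345678;   // always exactly 32 bits
    std::int64_t big_counter   = 0;            // always exactly 64 bits

    // "Fast" types when only a minimum width matters and the compiler
    // should pick whatever is efficient on the target:
    std::int_fast32_t loop_index = 0;

    // A named alias documents intent better than bare long scattered around:
    typedef std::int64_t file_offset_t;

    int main() {
        file_offset_t pos = big_counter;
        printf("int_fast32_t is %zu bytes here\n", sizeof loop_index);
        printf("offset starts at %lld\n", (long long)pos);
        return 0;
    }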


