Difference Between Long and Int Data Types

Long vs Integer, long vs int, what to use and when?

Long is the object form of long, and Integer is the object form of int.

The long uses 64 bits. The int uses 32 bits, and so can only hold numbers up to about ±2 billion (-2³¹ to 2³¹-1).

You should generally use long and int, except where you need to make use of methods inherited from Object, such as hashCode(). Methods in the java.util collections usually use the boxed (Object-wrapped) versions, because they need to work for any Object, and a primitive type, like int or long, is not an Object.

Another difference is that long and int are passed by value, whereas with Long and Integer it is the reference that is passed by value, as with all non-primitive Java types. So if it were possible to modify a Long or Integer (it's not; they're immutable without using JNI code), there would be another reason to use one over the other.

A final difference is that a Long or Integer could be null.

Difference between long and int data types

From this reference:

An int was originally intended to be the "natural" word size of the processor. Many modern processors can handle different word sizes with equal ease.

Also, this bit:

On many (but not all) C and C++ implementations, a long is larger than an int. Today's most popular desktop platforms, such as Windows and Linux, run primarily on 32-bit processors and most compilers for these platforms use a 32-bit int which has the same size and representation as a long.
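You can check what your own compiler chose directly. A minimal C++ sketch (the output is implementation-specific; a typical 64-bit Linux build prints 4 and 8, while 64-bit Windows prints 4 and 4):

#include <iostream>

int main() {
    // Both sizes are implementation-defined; only minimum ranges are guaranteed.
    std::cout << "sizeof(int)  = " << sizeof(int) << '\n';
    std::cout << "sizeof(long) = " << sizeof(long) << '\n';
}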

What is the difference between long, long long, long int, and long long int in C++?

long and long int are identical. So are long long and long long int. In both cases, the int is optional.

As to the difference between the two sets, the C++ standard mandates minimum ranges for each, and that long long is at least as wide as long.

The controlling parts of the standard (C++11, but this has been around for a long time) are, for one, 3.9.1 Fundamental types, section 2 (a later section gives similar rules for the unsigned integral types):

There are five standard signed integer types: signed char, short int, int, long int, and long long int. In this list, each type provides at least as much storage as those preceding it in the list.
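That guarantee is easy to verify mechanically. A small C++11 sketch using static_assert (the "at least as much storage" rule implies these sizeof relations):

// Each type must provide at least as much storage as the one before it.
static_assert(sizeof(signed char) <= sizeof(short int), "short >= signed char");
static_assert(sizeof(short int) <= sizeof(int), "int >= short");
static_assert(sizeof(int) <= sizeof(long int), "long >= int");
static_assert(sizeof(long int) <= sizeof(long long int), "long long >= long");

int main() {}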

There's also a table 9 in 7.1.6.2 Simple type specifiers, which shows the "mappings" of the specifiers to actual types (showing that the int is optional), a section of which is shown below:

Specifier(s)     Type
--------------   --------------
long long int    long long int
long long        long long int
long int         long int
long             long int

Note the distinction there between the specifier and the type. The specifier is how you tell the compiler what the type is, but you can use different specifiers to end up at the same type.

Hence long on its own is neither a type nor a modifier, as your question posits; it's simply a specifier for the long int type. Ditto for long long being a specifier for the long long int type.
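A quick way to convince yourself that different specifiers name the same type is std::is_same; a sketch assuming C++11:

#include <type_traits>

// "long" and "long int" are two spellings of one type, so these hold:
static_assert(std::is_same<long, long int>::value, "same type");
static_assert(std::is_same<long long, long long int>::value, "same type");

int main() {}

One practical consequence: you can't overload a function separately on long and long int, since the two declarations have the identical signature.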

Although the C++ standard itself doesn't specify the minimum ranges of integral types, it does cite C99, in 1.2 Normative references, as applying. Hence the minimum ranges as set out in C99 5.2.4.2.1 Sizes of integer types <limits.h> are applicable.
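Those C99 minimums give you portable lower bounds; a sketch of what you may rely on, using <climits> (C++11 for LLONG_MAX and static_assert):

#include <climits>

// Minimum magnitudes from C99 5.2.4.2.1, which C++ inherits by reference:
static_assert(INT_MAX >= 32767, "int is at least 16 bits wide");
static_assert(LONG_MAX >= 2147483647L, "long is at least 32 bits wide");
static_assert(LLONG_MAX >= 9223372036854775807LL, "long long is at least 64 bits wide");

int main() {}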


As for long double, that's actually a floating-point type rather than an integer. Similarly to the integral types, it's required to have at least as much precision as a double and to provide a superset of the values of that type (meaning at least those values, not necessarily more).
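To see the precision relationship rather than assume it, std::numeric_limits reports the mantissa width of each type; a small sketch (typical output is 53 and 64 on x86 Linux, 53 and 53 with MSVC):

#include <iostream>
#include <limits>

int main() {
    // long double must be at least as precise as double; how much more
    // precise is up to the implementation.
    std::cout << "double digits:      " << std::numeric_limits<double>::digits << '\n';
    std::cout << "long double digits: " << std::numeric_limits<long double>::digits << '\n';
}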

Long Vs. Int C/C++ - What's The Point?

When writing in C or C++, every data type is architecture- and compiler-specific. On one system int is 32 bits, but you can find systems where it is 16 or 64; the size is not fixed by the language, so it's up to the compiler.

As for long and int, it comes from the days when the standard integer was 16 bits and long was a 32-bit integer, so it was indeed longer than int.

Difference between Short, long and Long long int in C programming?

C was originally written for 16-bit machines, where the fastest and most convenient size of integer to work with was 16 bits. This was the original int type. Sometimes, programmers needed 32-bit numbers, so those became the long int type.

In due course, people ported C to 32-bit machines, where the most convenient word size was 32 bits. They wanted to keep using int to mean the native word size, because, for example, just about every C program in the real world has int i; in it somewhere, but sometimes they also needed 16-bit numbers to save space. So those became the short int type.

Time passed, and C needed to be ported to 64-bit machines. By then, there were a lot of programs in existence that assumed long was exactly 32 bits wide, and, unfortunately, a lot of programs that assumed a long was the same size as a pointer, an IPv4 address, a file offset, a timestamp, and so on. Good code uses types such as size_t, uintptr_t, and off_t instead, but old versions of the system libraries defined their library functions to use long, so a lot of legacy code had to as well. Since C99 came out, there have also been types such as int32_t in the standard library to specify exact widths, and Unix and Windows have had them under different names for a while.
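A hedged C++ sketch of the point about pointer-safe and exact-width types (uintptr_t is technically optional in the standard, though ubiquitous in practice):

#include <cstdint>
#include <cstdio>

int main() {
    std::int32_t exact = 42;  // exactly 32 bits, regardless of what long is
    int value = 7;

    // uintptr_t is wide enough to round-trip an object pointer; long isn't
    // guaranteed to be (and isn't on 64-bit Windows).
    std::uintptr_t bits = reinterpret_cast<std::uintptr_t>(&value);
    int* back = reinterpret_cast<int*>(bits);

    std::printf("%d %d\n", static_cast<int>(exact), *back);  // prints "42 7"
}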

As a consequence of the types being used to mean contradictory things, compilers today give programmers the choice between long being 32 and 64 bits wide, and 64-bit compilers let int be either 32 or 64 bits wide, too. At least one real compiler defined int as 64 bits (the native word length) but long as 32 bits (so the system libraries and code would still work). The standards committee prohibited that in C11: long long int is now guaranteed to be at least as wide as long int, and long int at least as wide as int.

Because a lot of programs couldn't redefine long as 64 bits without breaking, C needed a new type that meant (at least) 64-bit. By then, the standards committee was reluctant to introduce new keywords outside a header file, so they reused an existing one instead and named it long long int.

In brief, you shouldn’t make assumptions about what int and long mean beyond that int is at least 16 bits wide and fast, long is at least 32 bits wide, and on a C11 compiler long is at least as wide as int but no wider than long long. For general purposes, use int. If you need an array index, use size_t or ptrdiff_t. If you want to be sure you can hold a number over 32,767, and seriously expect you might need to run on some 16-bit machine someday, use long or the fast type from <inttypes.h>. If you’re making a system call, use the same type as its arguments. If you’re storing a pointer in an integer type, use uintptr_t. If you need a number at least 64 bits wide, use long long, and if you know exactly the width you need, use the exact-width type.
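Rendering those rules of thumb as code, one declaration per rule (the variable names are mine, purely illustrative):

#include <cstddef>    // std::size_t, std::ptrdiff_t
#include <cstdint>    // exact-width and fast types

int main() {
    int general = 0;                    // general purposes
    std::size_t index = 0;              // array indexing / object sizes
    std::int_fast32_t counter = 40000;  // needs > 32,767, wants speed
    long long big = 1LL << 40;          // at least 64 bits, guaranteed
    std::int64_t exact = -1;            // when the exact width matters

    // Silence unused-variable warnings in this illustrative sketch.
    (void)index; (void)counter; (void)big; (void)exact;
    return general;
}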

Why Use Integer Instead of Long?

Integer variables are stored as 16-bit (2-byte) numbers

Office VBA Reference

Long (long integer) variables are stored as signed 32-bit (4-byte) numbers

Office VBA Reference

So, the benefit is in reduced memory space. An Integer takes up half the memory that a Long does. Now, we are talking about 2 bytes, so it's not going to make a real difference for individual integers; it's only a concern when you are dealing with TONS of integers (e.g. large arrays) and memory usage is critical.

BUT on a 32-bit system, the halved memory usage comes at a performance cost. When the processor actually performs a computation with a 16-bit Integer (e.g. incrementing a loop counter), the value is silently converted to a temporary Long, without the benefit of the larger range of numbers to work with: overflows still happen, and the register the processor uses to store the values for the calculation takes the same amount of memory (32 bits) either way. Performance may even be hurt because the data type has to be converted (at a very low level).
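For comparison, C and C++ behave the same way: operands narrower than int are promoted to int before arithmetic. A C++ sketch of the analogous effect (offered only as an analogy to the VBA behavior described above):

#include <type_traits>

int main() {
    short a = 1, b = 2;
    // Both 16-bit operands are widened before the addition, just as VBA
    // widens a 16-bit Integer to 32 bits to do the math.
    static_assert(std::is_same<decltype(a + b), int>::value,
                  "short + short is computed as int");
    return a + b;
}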

Not the reference I was looking for, but...

My understanding is that the underlying VB engine converts Integers to Longs even if a variable is declared as an Integer. Therefore a slight speed decrease can be noted. I have believed this for some time, and perhaps that's also why the above statement was made; I didn't ask for the reasoning.

ozgrid forums

This is the reference I was looking for.

Short answer: in 32-bit systems, 2-byte Integers are converted to 4-byte Longs. There really is no other way so that the respective bits correctly line up for any form of processing. Consider the following:

MsgBox Hex(-1) = Hex(65535) ' = True

Obviously -1 does not equal 65535, yet the computer is returning the correct answer, namely

"FFFF" = "FFFF"

However, had we coerced the -1 to a Long first, we would have got the right answer (the 65535, being greater than 32k, is automatically a Long):

MsgBox Hex(-1&) = Hex(65535) ' = False

"FFFFFFFF" = "FFFF"

Generally there is no point in VBA in declaring "As Integer" on modern systems, except perhaps for some legacy APIs that expect to receive an Integer.

pcreview forum

And at long last I found the MSDN documentation I was really, truly looking for.

Traditionally, VBA programmers have used integers to hold small numbers, because they required less memory. In recent versions, however, VBA converts all integer values to type Long, even if they're declared as type Integer. So there's no longer a performance advantage to using Integer variables; in fact, Long variables may be slightly faster because VBA does not have to convert them.

To clarify based on the comments: Integers still require less memory to store; a large array of Integers will need significantly less RAM than a Long array with the same dimensions. But because the processor needs to work with 32-bit chunks of memory, VBA converts Integers to Longs temporarily when it performs calculations.


So, in summary, there's almost no good reason to use an Integer type these days, unless you need to interop with an old API call that expects a 16-bit int, or you are working with large arrays of small integers and memory is at a premium.

One thing worth pointing out is that some old API functions may expect parameters that are 16-bit (2-byte) Integers, and if you are on a 32-bit system and try to pass an Integer (which is already 4 bytes long) by reference, it will not work, due to the difference in byte length.

Thanks to Vba4All for pointing that out.

Why not use long for all integer values

Does it make sense to use, for example, an int data type instead of a long data type?

ABSOLUTELY YES.



MEMORY / DISK USAGE

With only one variable or two you won't see a difference in performance, but as apps grow, choosing the smaller type where it suffices will reduce memory use and can increase your app's speed.


Also, looking at the Oracle primitive type documentation, you can see some advice and the memory usage of each type:

type    memory usage     recommended for
------  ---------------  ---------------------------------------------------
byte    8-bit signed     Saving memory in large arrays, where the savings actually matters.
short   16-bit signed    Same as byte.
int     32-bit signed
long    64-bit signed    When you need a range of values wider than those provided by int.
float   32-bit IEEE 754  Saving memory in large arrays of floating-point numbers; never for precise values, such as currency.

byte:

The byte data type is an 8-bit signed two's complement integer. It has a minimum value of -128 and a maximum value of 127 (inclusive). The byte data type can be useful for saving memory in large arrays, where the memory savings actually matters.

short:

The short data type is a 16-bit signed two's complement integer. It has a minimum value of -32,768 and a maximum value of 32,767 (inclusive). As with byte, the same guidelines apply: you can use a short to save memory in large arrays, in situations where the memory savings actually matters.

int:

By default, the int data type is a 32-bit signed two's complement integer, which has a minimum value of -2³¹ and a maximum value of 2³¹-1. In Java SE 8 and later, you can use the int data type to represent an unsigned 32-bit integer, which has a minimum value of 0 and a maximum value of 2³²-1.

long:

The long data type is a 64-bit two's complement integer. The signed long has a minimum value of -2⁶³ and a maximum value of 2⁶³-1. In Java SE 8 and later, you can use the long data type to represent an unsigned 64-bit long, which has a minimum value of 0 and a maximum value of 2⁶⁴-1. Use this data type when you need a range of values wider than those provided by int.

float:

The float data type is a single-precision 32-bit IEEE 754 floating point. Its range of values is beyond the scope of this discussion, but is specified in the Floating-Point Types, Formats, and Values section of the Java Language Specification. As with the recommendations for byte and short, use a float (instead of double) if you need to save memory in large arrays of floating point numbers. This data type should never be used for precise values, such as currency.
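The "large arrays" advice is about totals, not single variables, and the same arithmetic applies in any language. A C++ sketch making the savings concrete (the exact-width types have no padding, and Java's short/long have the same 16/64-bit widths):

#include <cstdint>
#include <iostream>

int main() {
    // One million elements: element width scales total memory linearly,
    // 2,000,000 bytes as 16-bit versus 8,000,000 bytes as 64-bit.
    std::cout << sizeof(std::int16_t[1000000]) << " bytes for 16-bit elements\n";
    std::cout << sizeof(std::int64_t[1000000]) << " bytes for 64-bit elements\n";
}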



CODE READABILITY

Also, it will clarify your mind and your code. Let's say you have a variable that represents the ID of an object; this object ID will never use decimals, so if you see in your code:

int id;

you will know for sure how this ID will look, whereas

double id;

won't tell you that.

Also, if you see:

int quantity;
double price;

you will know quantity won't allow decimals (only whole objects) but price will... That makes your job, and that of the other programmers who will read your code, easier.

If the size of long and int are the same on a platform - are long and int different in any way?

They are not compatible types, which you can see with a simple example:

int* iptr;
long* lptr = iptr; // compiler error here

So it mostly matters when dealing with pointers to these types. Similarly, there is the "strict aliasing rule" which makes this code undefined behavior:

int i;
long* lptr = (long*)&i;
*lptr = ...; // undefined behavior

Another subtle issue is implicit promotion. In case you have some_int + some_long, the resulting type of that expression is long (or possibly unsigned long, when either operand is unsigned). This is because of the usual arithmetic conversions; see Implicit type promotion rules. It shouldn't matter most of the time, but code such as this will fail: _Generic(some_int + some_long, int: stuff() ), since there is no long clause in the generic association list.
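In C++ the same fact is visible through decltype; a small sketch (variable names are illustrative):

#include <type_traits>

int main() {
    int some_int = 1;
    long some_long = 2L;
    // The usual arithmetic conversions make the sum a long, not an int,
    // which is exactly why the _Generic selection above finds no int match.
    static_assert(std::is_same<decltype(some_int + some_long), long>::value,
                  "int + long yields long");
    return 0;
}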

Generally, when assigning values between types, there shouldn't be any problems. In case of uint32_t, it doesn't matter which type it corresponds to, because you should treat uint32_t as a separate type anyway. I'd pick long for compatibility with small microcontrollers, where typedef unsigned int uint32_t; will break. (And obviously, typedef signed long int32_t; for the signed equivalent.)

What is difference between long int a=2 and int a=2L?

In C++, all variables are declared with a type. C++ forces¹ you to specify the type explicitly, but doesn't force you to initialize the variable at all.

long int a = 2;
long int b = 2L;
long int c;

This code makes 3 variables of the same type long int.

int a = 2;
int b = 2L;
int c;

This code makes 3 variables of the same type int.

The idea of a type is roughly "the set of all values the variable can take". It doesn't (and cannot) depend on the initial value of the variable, whether that's 2 or 2L or anything else.

So, if you have two variables of different types but the same value:

int a = 2L;
long int b = 2;

The difference between them is what they can do further in the code. For example:

a += 2147483647; // most likely, overflow
b += 2147483647; // probably calculates correctly

The type of the variable won't change from the point it's defined onwards.

Another example:

int x = 2.5;

Here the type of x is int, and it's initialized to 2. Even though the initializer has a different type, C++ regards the declared type of x as "more important".


¹ BTW, C++ has support for "type inference"; you can use it if you want the type of the initializer to be important:

auto a = 2L; // "a" has type "long int"
auto b = 2; // "b" has type "int"

