Can I assume the size of long int is always 4 bytes?
The standards say nothing regarding the exact size of any integer type aside from char. Typically, long is 32-bit on 32-bit systems and 64-bit on 64-bit systems.
The standard does however specify a minimum size. From section 5.2.4.2.1 of the C Standard:
1 The values given below shall be replaced by constant expressions suitable for use in #if preprocessing directives. Moreover, except for CHAR_BIT and MB_LEN_MAX, the following shall be replaced by expressions that have the same type as would an expression that is an object of the corresponding type converted according to the integer promotions. Their implementation-defined values shall be equal or greater in magnitude (absolute value) to those shown, with the same sign. ...

    minimum value for an object of type long int
        LONG_MIN        -2147483647 // −(2^31 − 1)

    maximum value for an object of type long int
        LONG_MAX        +2147483647 // 2^31 − 1
This says that a long int must be a minimum of 32 bits, but may be larger. On a machine where CHAR_BIT is 8, this gives a minimum size of 4 bytes. However, on a machine with, e.g., CHAR_BIT equal to 16, a long int could be 2 bytes long.
Here's a real-world example. For the following code:
#include <stdio.h>

int main(void)
{
    printf("sizeof(long) = %zu\n", sizeof(long));
    return 0;
}
Output on Debian 7 i686:
sizeof(long) = 4
Output on CentOS 7 x64:
sizeof(long) = 8
So no, you can't make any assumptions about size. If you need a type of a specific size, you can use the types defined in stdint.h. It defines the following types:
    int8_t: signed 8-bit
    uint8_t: unsigned 8-bit
    int16_t: signed 16-bit
    uint16_t: unsigned 16-bit
    int32_t: signed 32-bit
    uint32_t: unsigned 32-bit
    int64_t: signed 64-bit
    uint64_t: unsigned 64-bit
The stdint.h
header is described in section 7.20 of the standard, with exact width types in section 7.20.1.1. The standard states that these typedefs are optional, but they exist on most implementations.
What determines the size of integer in C? [duplicate]
Ultimately the compiler does, but in order for compiled code to play nicely with system libraries, most compilers match the behavior of the compiler[s] used to build the target system.
So loosely speaking, the size of int is a property of the target hardware and OS (two different OSs on the same hardware may have a different size of int, and the same OS running on two different machines may have a different size of int; there are reasonably common examples of both).
All of this is also constrained by the rules in the C standard. int must be large enough to represent all values between -32767 and 32767, for example.
Why does long int have the same size as int? Does this modifier work at all?
The reason that MS chose to make long 32 bits even on a 64-bit system is that the existing Windows API, for historical reasons, uses a mixture of int and long for similar things, and the expectation is that this is a 32-bit value (some of this goes back to times when Windows was a 16-bit system). So to make the conversion of old code to the new 64-bit architecture easier, they chose to keep long at 32 bits, so that applications mixing int and long in various places would still compile.
There is nothing in the C++ standard that dictates that a long should be bigger than an int (it certainly isn't on most 32-bit systems). All the standard says is that, in size, short <= int <= long, and that short is at least 16 bits, if memory serves [the requirement is not necessarily expressed as "at least 16 bits"; I think it is stated in terms of the range of values].
Is the int byte size fixed, or does it vary with the stored value, in C/C++?
The size of every type is constant. The value that you store in an integer has no effect on the size of the type. If you store a positive value smaller than the maximum value representable by a single byte, then the more significant bytes (if any) will contain a zero value.
The size of int is not necessarily 4 bytes. The byte size of integer types is implementation-defined.
C++ int versus long
What is faster and what is not is something that is becoming harder to predict every day. The reason is that processors are no longer "simple", and with all the complex dynamics and algorithms behind them, the final speed may follow rules that are totally counter-intuitive.
The only way out is to just measure and decide. Also note that what is faster depends on the little details, and even for compatible CPUs, what is an optimization for one can be a pessimization for the other. For very critical parts, some software just tries and checks timings for different approaches at run time during program initialization.
That said, as a general rule, the fastest integer you can have is int. You should use other integer types only if you need them specifically (e.g. if long is larger and you need the higher precision, or if short is smaller but sufficient and you need to save memory).
Even better, if you need a specific size, then use a fixed standard type or add a typedef instead of just sprinkling long around where you need it. This way it will be easier to support different compilers and architectures, and the intent will be clearer for whoever reads the code in the future.