Making 'long' 4 bytes in gcc on a 64-bit Linux machine

  1. No. On Linux x86_64 the ABI specifies that long is an 8-byte type (LP64). In fact, most if not all 64-bit Unix systems (including 64-bit OS X, AFAIK) are LP64, so this is nothing specific to Linux.

  2. Beyond fixing your code, no.

If you need a portable integer type which is large enough to store a pointer value, use intptr_t or uintptr_t (but usually wanting to store a pointer value into an integer means that you're doing something wrong, so think twice!). For an integer type which is capable of representing the difference between two pointers, use ptrdiff_t. For sizes of objects, use size_t.

Can I assume the size of long int is always 4 bytes?

The standard says nothing regarding the exact size of any integer type aside from char, whose size is exactly 1 byte. Typically, long is 32 bits on 32-bit systems and 64 bits on 64-bit systems.

The standard does however specify a minimum size. From section 5.2.4.2.1 of the C Standard:

1 The values given below shall be replaced by constant expressions
suitable for use in #if preprocessing directives. Moreover,
except for CHAR_BIT and MB_LEN_MAX, the following shall be
replaced by expressions that have the same type as would an
expression that is an object of the corresponding type converted
according to the integer promotions. Their implementation-defined
values shall be equal or greater in magnitude (absolute value) to
those shown, with the same sign.

...

  • minimum value for an object of type long int

    LONG_MIN -2147483647 // −(2^31−1)

  • maximum value for an object of type long int

    LONG_MAX +2147483647 // 2^31−1

This says that a long int must be a minimum of 32 bits, but may be larger. On a machine where CHAR_BIT is 8, this gives a minimum byte size of 4. However, on a machine with, e.g., CHAR_BIT equal to 16, a long int could be 2 bytes long.

Here's a real-world example. For the following code:

#include <stdio.h>

int main(void)
{
    printf("sizeof(long) = %zu\n", sizeof(long));
    return 0;
}

Output on Debian 7 i686:

sizeof(long) = 4

Output on CentOS 7 x64:

sizeof(long) = 8

So no, you can't make any assumptions about the size. If you need a type of a specific size, you can use the types defined in stdint.h. It defines the following types:

  • int8_t: signed 8-bit
  • uint8_t: unsigned 8-bit
  • int16_t: signed 16-bit
  • uint16_t: unsigned 16-bit
  • int32_t: signed 32-bit
  • uint32_t: unsigned 32-bit
  • int64_t: signed 64-bit
  • uint64_t: unsigned 64-bit

The stdint.h header is described in section 7.20 of the standard, with exact width types in section 7.20.1.1. The standard states that these typedefs are optional, but they exist on most implementations.

Difference between 4-byte types in 32-bit and 64-bit Linux machines

Thanks everyone. The issue was something local/specific to our code: we were assuming that pointers are always 4 bytes long. When dereferencing them, we were casting them to 4-byte variables and truncating the upper bits, which was causing crashes. Since this is a large codebase developed by many people over many years, there was a lot of legacy code with these assumptions, and I had to go over all the pointers and modify them. We have now successfully ported our code to 64-bit mode.

How to guarantee long is 4 bytes

Instead of using int and long, you will likely want to create a header file that uses typedefs (preferred) or preprocessor macros to define common type names for you to use.

#ifdef _WINDOWS
typedef unsigned long uint32;
#else
typedef unsigned int uint32;
#endif

In your union, you would use uint32 instead of long.

The header file stdint.h does exactly this, but is not always installed as a standard header file with every compiler. Visual Studio (for example) does not have it by default. You can download it from http://msinttypes.googlecode.com/svn/trunk/stdint.h if you would prefer to use it instead.

gcc, width of long int on different architectures

Don't do this - use standard types such as int32_t, uint32_t, int64_t, uint64_t, etc from <stdint.h> rather than trying to make assumptions about naked types such as long int or trying to bend the compiler to your will.

Note: The 64-bit model for any given platform (e.g. LP64 for most *nix platforms, Mac OS X, etc) is a given, so even if you could convince the compiler to use a different 64-bit model you would probably break any calls to system code, libraries, etc.

Compiling old C code Y2038 conform still results in 4 byte variables

According to this post (which is getting a little old now, and some parts of which are probably no longer relevant):

... defining _TIME_BITS=64 would cause all time functions to use 64-bit times by default. The _TIME_BITS=64 option is implemented by transparently mapping the standard functions and types to their internal 64-bit variants. Glibc would also set __USE_TIME_BITS64, which user code can test for to determine if the 64-bit variants are available.

Presumably, this includes making time_t 64-bit.

So if your version of glibc supports this at all, it looks like you're setting the wrong macro. You want:

-D_TIME_BITS=64


