Sizeof(Int) on X64

What should be the sizeof(int) on a 64-bit machine?

It doesn't have to be 64 bits; "64-bit machine" can mean many things, but it typically means that the CPU has registers that wide. The size of a type is determined by the compiler, which doesn't have to match the actual hardware (though it typically does); in fact, different compilers on the same machine can use different sizes for the same type.
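
As a minimal illustration, here is a small C program that prints the sizes chosen by whichever compiler builds it; the values in the comments are typical for the named toolchains, not guarantees.

    #include <stdio.h>

    /* Prints the sizes chosen by the compiler that builds this.
     * Typical output with MSVC on 64-bit Windows:    4 4 8
     * Typical output with gcc/clang on 64-bit Linux: 4 8 8 */
    int main(void)
    {
        printf("sizeof(int)   = %zu\n", sizeof(int));
        printf("sizeof(long)  = %zu\n", sizeof(long));
        printf("sizeof(void*) = %zu\n", sizeof(void *));
        return 0;
    }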

sizeof(int) on x64?

There are various 64-bit data models; Microsoft uses LP64 for .NET on 64-bit targets: both long and pointers are 64 bits (although C-style pointers can only be used in C# in unsafe contexts or as an IntPtr value, which cannot be used for pointer arithmetic). Contrast this with ILP64, where int is also 64 bits, and with LLP64, used by native Windows C and C++, where long stays at 32 bits and only long long and pointers are 64 bits.

Thus, on all platforms, int is 32 bits and long is 64 bits; you can see this in the names of the underlying types, System.Int32 and System.Int64.

Size of int and sizeof int pointer on a 64 bit machine

No; sizeof(int) is implementation-defined, and it is usually 4 bytes.

On the other hand, to address more than 4 GB of memory (the most a 32-bit system can address), you need your pointers to be 8 bytes wide. An int* just holds the address of "somewhere in memory", and you can't address more than 4 GB of memory with only 32 bits.
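
As a sketch of that split, the following C snippet assumes a typical 64-bit target where int stays at 4 bytes while pointers are 8 bytes; the static_assert lines encode that assumption and would fail on a 32-bit build.

    #include <assert.h>   /* static_assert (C11) */
    #include <stdio.h>

    int main(void)
    {
        int value = 42;
        int *p = &value;

        /* Assumption for a typical 64-bit target: int stays 4 bytes,
         * but a pointer needs 8 bytes to reach beyond the 4 GB limit. */
        static_assert(sizeof(int) == 4, "int expected to be 4 bytes on this target");
        static_assert(sizeof(int *) == 8, "pointers expected to be 8 bytes on this target");

        printf("sizeof(int)  = %zu\n", sizeof(int));
        printf("sizeof(int*) = %zu\n", sizeof p);
        return 0;
    }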

Size of various datatypes on Windows 64 bit platform - specifically sizeof(int)

Yes, integers in MSVC are 4 bytes. See Microsoft's documentation of MSVC data type ranges for details on the other type sizes.

For 8-byte numbers, use long long or double.
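
A short sketch of both 8-byte options in use; with MSVC (and most other current compilers) long long and double are each 8 bytes, and the format specifiers below are the standard C ones.

    #include <stdio.h>

    int main(void)
    {
        long long big = 9000000000LL;        /* 64-bit integer, even on 32-bit Windows */
        double    d   = 3.141592653589793;   /* 64-bit IEEE double */

        printf("sizeof(long long) = %zu, value = %lld\n", sizeof big, big);
        printf("sizeof(double)    = %zu, value = %f\n",   sizeof d, d);
        return 0;
    }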

Who decides the sizeof any datatype or structure (depending on 32 bit or 64 bit)?

It's ultimately the compiler. The compiler implementers can decide to emulate whatever integer size they see fit, regardless of what the CPU handles most efficiently. That said, the C (and C++) standard is written such that the compiler implementer is free to choose the fastest and most efficient sizes. For many compilers, the implementers chose to keep int at 32 bits, even though the CPU handles 64-bit integers natively and very efficiently.

I think this was done in part to preserve portability for programs written when 32-bit machines were the most common, which expected an int to be 32 bits and no wider. (It could also be, as user3386109 points out, that 32-bit data was preferred because it takes less space and can therefore be accessed faster.)

So if you want to make sure you get 64-bit integers, declare your variables as int64_t instead of int. If you know your value will fit inside 32 bits, or you don't care about the exact size, use int and let the compiler pick an efficient representation.
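
A brief sketch of the difference, using the standard <stdint.h> and <inttypes.h> headers; the variable names are purely illustrative.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t exact = INT64_C(1) << 40;  /* guaranteed to be 64 bits wide */
        int     plain = 123;               /* whatever width the compiler picks (at least 16 bits) */

        printf("exact = %" PRId64 " (%zu bytes)\n", exact, sizeof exact);
        printf("plain = %d (%zu bytes)\n", plain, sizeof plain);
        return 0;
    }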

As for other datatypes such as structs, they are composed of the base types such as int, so their sizes follow from the sizes of their members.
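
For example, a hypothetical struct like the one below gets its size from its members plus any alignment padding the compiler inserts, so the result depends on the target; the sizes in the comments are typical, not guaranteed.

    #include <stdio.h>

    struct Sample {
        char  c;   /* 1 byte                                */
        int   n;   /* typically 4 bytes                     */
        void *p;   /* 4 or 8 bytes, depending on the target */
    };

    int main(void)
    {
        /* Commonly 16 on a 64-bit target (1 + 3 padding + 4 + 8), 12 on a 32-bit one. */
        printf("sizeof(struct Sample) = %zu\n", sizeof(struct Sample));
        return 0;
    }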

Delphi SizeOf(NativeInt) vs C sizeof(int) on x86-64. Why the Size difference?

NativeInt is simply an integer that is the same size as a pointer, which is why it changes size between platforms. The documentation says exactly that:

The size of NativeInt is equivalent to the size of the pointer on the current platform.

The main use for NativeInt is to store things like operating system handles that, behind the scenes, are actually memory addresses. You are not expected to use it to perform arithmetic, store array lengths, etc. If you attempt to do that, you make it much more difficult to share code between 32-bit and 64-bit versions of your program.

You can think of Delphi NativeInt as being directly equivalent to the .NET type IntPtr. In C and C++, the OS handle types would commonly be declared as void*, which is a pointer type rather than an integer type. However, you could perfectly well use a type like intptr_t if you so wished.
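
A minimal C sketch of that idea, assuming a handle is really just an address behind the scenes; intptr_t plays the role of Delphi's NativeInt or .NET's IntPtr, and the round trip back to the original pointer is well defined.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int value = 7;
        void *handle = &value;                /* stand-in for an OS handle / memory address */

        intptr_t as_int = (intptr_t)handle;   /* pointer-sized integer, like NativeInt/IntPtr */
        void *back = (void *)as_int;          /* converting back recovers the original pointer */

        printf("round trip ok: %d\n", back == handle);
        return 0;
    }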

You use the term "native integer" to describe NativeInt, but in spite of the name it's very important to realise that NativeInt is not the native integer type of the language. That would be Integer. The native in NativeInt refers to the underlying hardware platform rather than the language.

The Delphi type Integer, the language native integer, matches up with the C type int, the corresponding language native type. And on Windows these types are 32 bits wide for both 32 and 64 bit systems.

When the Windows designers started working on 64 bit Windows, they had a keen memory of what had happened when int changed from 16 to 32 bits in the transition from 16 bit to 32 bit systems. That was no fun at all, although it was clearly the right decision. This time round, from 32 to 64, there was no compelling reason to make int a 64 bit type. Had the Windows designers done so, it would have made porting much harder work. And so they chose to leave int as a 32 bit type.

In terms of performance, the AMD64 architecture was designed to operate efficiently on 32-bit types. Since a 32-bit integer is half the size of a 64-bit integer, memory usage is reduced by keeping int at 32 bits on a 64-bit system, and that in itself brings a performance benefit.

A couple of comments:

  • You state that "C has only 64 bit pointers". That is not so. A 32-bit C compiler will generally use a flat 32-bit memory model with 32-bit pointers.
  • You also say that "in Delphi NativeInt is 64 bit in size". Again, that is not so. It is either 32 or 64 bits wide, depending on the target.

