long long implementation on a 32-bit machine
On the IA-32 architecture, 64-bit integers are implemented using two 32-bit registers (eax and edx).
There are platform-specific equivalents for C++, and you can use the stdint.h header where available (Boost provides one where it isn't).
C - Unsigned long long to double on 32-bit machine
uint64_t vs double, which has a higher range limit for covering positive numbers?
uint64_t, where supported, has 64 value bits, no padding bits, and no sign bit. It can represent all integers between 0 and 2^64 - 1, inclusive.
Substantially all modern C implementations represent double in IEEE-754 64-bit binary format, but C neither requires nor even endorses that format. It is so common, however, that it is fairly safe to assume that format, and maybe to just put in some compile-time checks against the macros defining FP characteristics. I will assume for the balance of this answer that the C implementation indeed does use that representation.
IEEE-754 binary double precision provides a 53-bit significand, so it can represent all integers between 0 and 2^53 - 1 exactly. It is a floating-point format, however, with an 11-bit binary exponent. The largest finite value it can represent is (2 - 2^-52) * 2^1023, just under 2^1024. In this sense, double has a much greater range than uint64_t, but the vast majority of integers between 0 and its maximum value cannot be represented exactly as doubles, including almost all of the numbers that can be represented exactly by uint64_t.
How to convert double into uint64_t if only the whole number part of double is needed
You can simply assign (conversion is implicit), or you can explicitly cast if you want to make it clear that a conversion takes place:
double my_double = 1.2345678e48;
uint64_t my_uint;
uint64_t my_other_uint;
my_uint = my_double;
my_other_uint = (uint64_t) my_double;
Any fractional part of the double's value will be truncated. The integer part will be preserved exactly if it is representable as a uint64_t; otherwise, the behavior is undefined.
The code you presented uses a union to overlay storage of a double and a uint64_t. That's not inherently wrong, but it's not a useful technique for converting between the two types. Casts are C's mechanism for all non-implicit value conversions.
In C, what is the size of a long for a 32-bit machine and for a 64-bit machine?
The size of long (and the sizes of objects generally) is determined by the C implementation, not by the platform programs execute on.
Generally speaking, a C implementation is a compiler plus the libraries and other supporting software needed to run C programs.1 There can be more than one C implementation for a platform. In fact, one compiler can provide multiple C implementations, selected with different switches requesting various configurations.
A general-purpose C implementation typically uses sizes for short, int, and long that work well with the target processor model (or models) and give the programmer good choices. However, C implementations can be designed for special purposes, such as supporting older code that was intended for a specific size of long. Generally speaking, a C compiler can generate instructions for whatever size of long it defines.
The C standard imposes some lower limits on the sizes of objects. The number of bits in a character, CHAR_BIT, must be at least eight. short and int must be capable of representing values from −32767 to +32767, and long must be capable of representing −2147483647 to +2147483647. It also requires that long be capable of representing all int values, that int be capable of representing all short values, and that short be capable of representing all signed char values. Other than that, the C standard imposes few requirements. It does not require that int or long be a particular size on particular platforms. And operating systems have no say in what happens inside a programming language. An operating system sets requirements for running programs and interfacing with the system, but, inside a program, software can do anything it wants. So a compiler can call 17 bits an int if it wants, and the operating system has no control over that.
Footnote
1 The C 2011 standard (draft N1570) defines an implementation, in clause 3.12, as a “particular set of software, running in a particular translation environment under particular control options, that performs translation of programs for, and supports execution of functions in, a particular execution environment.”
Windows uses long int as if on a 32-bit machine while in 64-bit mode
How can this be fixed?
By using long long or std::int64_t. long is only required / guaranteed to be at least 32 bits, and that is the size of long on (64-bit) Windows.
32-bit operating system supporting 64-bit unsigned integers: how?
Let's see what it compiles to. I've added some arithmetic to make it clearer.
6 unsigned long long i = 0;
0x0804868d <+18>: mov DWORD PTR [ebp-0x10],0x0
0x08048694 <+25>: mov DWORD PTR [ebp-0xc],0x0
7 unsigned long long j = 3;
0x0804869b <+32>: mov DWORD PTR [ebp-0x18],0x3
0x080486a2 <+39>: mov DWORD PTR [ebp-0x14],0x0
8 cout << i + j << endl;
0x080486a9 <+46>: mov ecx,DWORD PTR [ebp-0x10]
0x080486ac <+49>: mov ebx,DWORD PTR [ebp-0xc]
0x080486af <+52>: mov eax,DWORD PTR [ebp-0x18]
0x080486b2 <+55>: mov edx,DWORD PTR [ebp-0x14]
0x080486b5 <+58>: add eax,ecx
0x080486b7 <+60>: adc edx,ebx
As you can see, the compiler generates two instructions for each load and store. Addition is performed using adc (add with carry) to propagate the carry bit into the high half.
Why do C compilers specify long to be 32-bit and long long to be 64-bit?
Yes, it does make sense, but Microsoft had its own reasons for defining "long" as 32 bits.
As far as I know, of all the mainstream 64-bit systems right now, Windows is the only OS where "long" is 32 bits. On 64-bit Unix and Linux, it's 64 bits.
All compilers for Windows compile "long" as 32 bits to maintain compatibility with Microsoft.
For this reason, I avoid using "int" and "long". Occasionally I'll use "int" for error codes and booleans (in C), but I never use them for any code that is dependent on the size of the type.
Long long in C99
If long is already 8 then, why is it necessary to add another long long type? What does this do to the compiler/architecture?
"If long is already 8" is not always true, as much code exists that relies on long being 32 bits and on int being 32 or 16 bits.
Requiring long to be 64 bits would break those code bases. This is a major concern.
Yet requiring long to remain 32 bits (with no long long) would leave no standard access to 64-bit integers, hence a rationale for long long.
Allowing long to be either 32-bit or 64-bit (or another size) allows for transition.
Various functions pass in or return long, like fseek() and ftell(). They benefit from long being more than 32 bits for large-file support.
Recommended practice encourages a wider long: "The types used for size_t and ptrdiff_t should not have an integer conversion rank greater than that of signed long int unless the implementation supports objects large enough to make this necessary." This relates to memory sizes exceeding 32 bits.
Perhaps in the future an implementation may use int/long/long long/intmax_t as 32/64/128/256 bits.
In any case, I see the fixed-width types intN_t increasing in popularity over long and long long. I tend to use fixed-width types or bool, (unsigned) char, int/unsigned, size_t, (u)intmax_t, and leave signed char, (unsigned) short, (unsigned) long, (unsigned) long long for special cases.
Is it possible to implement 64-bit integers using a 32-bit compiler?
int64_t is a 64-bit integer type and is available if you include stdint.h. uint64_t is the unsigned version and is defined in the same header.
See "Are types like uint32, int32, uint64, int64 defined in any stdlib header?" for more details.
Is it possible to implement 64-bit integers on a 32-bit machine? (not considering performance issues)
Sure. There's usually no need unless you're on a really restricted instruction set, but it's definitely possible. How to implement big int in C++ explains how to do arbitrary-precision integers, which is a strictly harder problem to solve.