How Does a 32-Bit Processor Support 64-Bit Integers

How does a 32-bit processor support 64-bit integers?

Most processors include a carry flag and an overflow flag to support operations on multi-word integers. The carry flag is used for unsigned math, and the overflow flag for signed math.

For example, on an x86 you could add two unsigned 64-bit numbers (which we'll assume are in EDX:EAX and EBX:ECX) something like this:

add eax, ecx  ; add the low halves; ignores any incoming carry, but sets the carry flag
adc edx, ebx  ; add the high halves plus the carry out of the low halves
; sum in edx:eax

It's possible to implement this sort of thing in higher level languages like C++ as well, but they do a lot less to support it, so the code typically ends up substantially slower than when it's written in assembly language.
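
For example, here's a minimal C++ sketch of the same addition, assuming the 64-bit values are held as pairs of 32-bit halves (the add64 helper and its names are just illustrative, not from the original answer). Without direct access to the carry flag, the carry has to be recovered with an extra comparison, which is part of why compiled code like this tends to be slower than the two-instruction add/adc sequence above.

#include <cstdint>
#include <cstdio>

// Add two 64-bit values, each held as a pair of 32-bit halves (hi, lo).
// With no direct access to the CPU's carry flag, the carry has to be
// recovered with an extra comparison.
void add64(std::uint32_t a_hi, std::uint32_t a_lo,
           std::uint32_t b_hi, std::uint32_t b_lo,
           std::uint32_t& r_hi, std::uint32_t& r_lo)
{
    r_lo = a_lo + b_lo;                  // low halves; may wrap around
    std::uint32_t carry = (r_lo < a_lo); // wrap-around means a carry out
    r_hi = a_hi + b_hi + carry;          // high halves plus the carry
}

int main()
{
    std::uint32_t hi, lo;
    add64(0x00000000u, 0xFFFFFFFFu,      // 0x00000000FFFFFFFF
          0x00000000u, 0x00000001u,      // + 1
          hi, lo);
    std::printf("%08X%08X\n", (unsigned)hi, (unsigned)lo);  // prints 0000000100000000
}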

Most operations are basically serial in nature. When you're doing addition at the binary level, you take two input bits and produce one result bit and one carry bit. The carry bit is then used as an input when adding the next least significant bit, and so on across the word (known as a "ripple adder", because the addition "ripples" across the word).
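
As a toy illustration (a software model only, not how the hardware is actually built), here's a bit-at-a-time ripple-carry add in C++:

#include <cstdint>
#include <cstdio>

// Toy software model of a ripple-carry adder: add two words one bit at a
// time, feeding each bit's carry out into the next bit's carry in.
std::uint32_t ripple_add(std::uint32_t a, std::uint32_t b)
{
    std::uint32_t sum = 0;
    std::uint32_t carry = 0;
    for (int i = 0; i < 32; ++i) {
        std::uint32_t ai = (a >> i) & 1u;
        std::uint32_t bi = (b >> i) & 1u;
        std::uint32_t s  = ai ^ bi ^ carry;          // sum bit
        carry = (ai & bi) | (carry & (ai ^ bi));     // carry out for the next bit
        sum |= s << i;
    }
    return sum;
}

int main()
{
    std::printf("%u\n", (unsigned)ripple_add(123456789u, 987654321u)); // 1111111110
}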

There are more sophisticated adder designs, such as carry-lookahead adders, that shorten this chain of bit-to-bit dependencies, and most current hardware uses them.

In the worst case, however, adding 1 to a number that's already the largest a given word size supports will result in generating a carry from every bit to the next, all the way across the word.

That means that (to at least some extent) the word width a CPU supports imposes a limit on the maximum clock speed at which it can run. If somebody wanted to badly enough, they could build a CPU that worked with, say, 1024-bit operands. If they did that, however, they'd have two choices: either run it at a lower clock speed, or else take multiple clocks to add a single pair of operands.

Also note that as you widen operands like that, you need more storage (e.g., larger cache) to store as many operands, more gates to carry out each individual operation, and so on.

So given identical technology, you could have a 64-bit processor that ran at 4 GHz and had, say, 4 megabytes of cache, or a 1024-bit processor that ran at about 250 MHz and had, perhaps, 2 megabytes of cache.

The latter would probably be a win if most of your work was on 1024-bit (or larger) operands. Most people don't do math on 1024-bit operands very often at all though. In fact, 64-bit numbers are large enough for most purposes. As such, supporting wider operands would probably turn out to be a net loss for most people most of the time.

Is it OK to use 64-bit integers in a 32-bit application?

If I compile 32-bit code using these types, will I suffer any performance issues on 64-bit and/or 32-bit machines?

Your compiler may need to generate several machine code instructions to perform operations on the 64-bit values, slowing those operations down by several times. If that might be a concern, do some benchmarking to assess the impact on a particular program with realistic data. That issue exists whether you're executing the 32-bit executable on a 32-bit or a 64-bit machine.
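
If you do want to measure it, a rough benchmarking sketch along these lines can serve as a starting point (the loop count and timing harness here are purely illustrative; results depend heavily on compiler, optimization level, and hardware):

#include <chrono>
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Rough timing harness: run the same addition loop with 32-bit and 64-bit
// operands and compare. The volatile accumulator keeps the loop from being
// optimized away entirely.
template <typename T>
double time_adds(std::size_t n)
{
    volatile T acc = 0;
    auto start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)
        acc = acc + static_cast<T>(i);
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(stop - start).count();
}

int main()
{
    const std::size_t n = 100000000;
    std::printf("32-bit adds: %f s\n", time_adds<std::uint32_t>(n));
    std::printf("64-bit adds: %f s\n", time_adds<std::uint64_t>(n));
}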

would I ever have a reason to just use int?

Aside from performance and memory usage, there's occasionally reason to use int because other APIs, streams, etc. that you work with use int. There's also subtle documentary value in using int if it's clearly adequate; otherwise other programmers may waste time wondering why you went out of your way to use a long long.

After all, 64-bit ints are far more useful in storing large numbers.

Far more useful in storing very large numbers - sure - but that's relatively rarely needed. If you're storing something like a year or someone's age, there's just no particular point in having 64 bits.

How does a 32-bit operating system support a 64-bit unsigned integer?

Let's see what it compiles to. I've added some arithmetic to make it clearer.
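
(For reference, a self-contained program along these lines would reproduce the listing below; this is a plausible reconstruction rather than the question's exact source.)

#include <iostream>

int main()
{
    unsigned long long i = 0;
    unsigned long long j = 3;
    std::cout << i + j << std::endl;   // this line becomes the add/adc pair below
    return 0;
}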

6           unsigned long long i = 0;
0x0804868d <+18>: mov DWORD PTR [ebp-0x10],0x0
0x08048694 <+25>: mov DWORD PTR [ebp-0xc],0x0

7           unsigned long long j = 3;
0x0804869b <+32>: mov DWORD PTR [ebp-0x18],0x3
0x080486a2 <+39>: mov DWORD PTR [ebp-0x14],0x0

8           cout << i + j << endl;
0x080486a9 <+46>: mov ecx,DWORD PTR [ebp-0x10]
0x080486ac <+49>: mov ebx,DWORD PTR [ebp-0xc]
0x080486af <+52>: mov eax,DWORD PTR [ebp-0x18]
0x080486b2 <+55>: mov edx,DWORD PTR [ebp-0x14]
0x080486b5 <+58>: add eax,ecx
0x080486b7 <+60>: adc edx,ebx

As you can see, the compiler generates two instructions for each load and store, one per 32-bit half. The addition is performed with add on the low halves and adc (add with carry) on the high halves, so the carry bit propagates into the upper part.

16-bit int vs 32-bit int vs 64-bit int

"Better" is a subjective term, but some integers are more performant on certain platforms.

For example, in a 32-bit computer (referenced by terms like 32-bit platform and Win32) the CPU is optimized to handle a 32-bit value at a time, and the 32 refers to the number of bits that the CPU can consume or produce in a single cycle. (This is a really simplistic explanation, but it gets the general idea across).

In a 64-bit computer (most recent AMD and Intel processors fall into this category), the CPU is optimized to handle 64-bit values at a time.

So, on a 32-bit platform, a 16-bit integer loaded into a 32-bit register would need to be zero- or sign-extended to fill the upper 16 bits before the CPU could operate on it; a 32-bit integer would be immediately usable without any alteration, and a 64-bit integer would need to be operated on in two or more CPU cycles (once for the low 32 bits, and then again for the high 32 bits).

Conversely, on a 64-bit platform, 16-bit integers would need to be extended to fill 48 upper bits, 32-bit integers to fill 32 upper bits, and 64-bit integers could be operated on immediately.
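
For a software-level glimpse of the same widening step, here's a tiny illustrative C++ example: a 16-bit value is promoted to int before the arithmetic, and on x86 the compiler typically does that widening with a sign-extending load.

#include <cstdint>
#include <cstdio>

int main()
{
    std::int16_t small = -5;
    // C++ promotes the 16-bit value to int (typically 32 bits) before the
    // arithmetic; on x86 the compiler typically emits a sign-extending load
    // (movsx) to fill in the upper bits, the "widening" step described above.
    int widened = small + 1;
    std::printf("%d\n", widened);      // prints -4
}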

Each platform and CPU has a 'native' bit-ness (like 32 or 64), and this usually limits some of the other resources that can be accessed by that CPU (for example, the 3 GB/4 GB memory limitation of 32-bit processors). The 80386 family (and later x86 processors) made 32-bit the norm, and AMD and then Intel have since made 64-bit the norm.

How is 64-bit math accomplished on a 32-bit machine?

They use the carry bit for add and subtract. The assembler ops for "add with carry" and "subtract with carry" (or "borrow") can be used for arbitrary bit length extended precision addition and subtraction.
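
A software sketch of the same idea, assuming the numbers are stored as arrays of 32-bit words with the least significant word first (the add_words helper is hypothetical, not from the original answer):

#include <cstddef>
#include <cstdint>

// Extended-precision addition across an arbitrary number of 32-bit words,
// least significant word first: the software analogue of one add followed
// by a chain of add-with-carry instructions.
void add_words(const std::uint32_t* a, const std::uint32_t* b,
               std::uint32_t* r, std::size_t n)
{
    std::uint32_t carry = 0;
    for (std::size_t i = 0; i < n; ++i) {
        std::uint32_t t = a[i] + b[i];       // may wrap; that wrap is a carry out
        std::uint32_t c1 = (t < a[i]);
        r[i] = t + carry;                    // add the carry from the previous word
        std::uint32_t c2 = (r[i] < t);
        carry = c1 | c2;                     // at most one of these can be set
    }
}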

For multiplication, if your multiply instruction only produces a 32-bit result, you can break each operand into 16-bit halves, multiply the pairs, and then shift and add (with carry) to assemble the full 64-bit product of a 32-bit multiply. This is just the long-hand method: any two 16-bit values multiply into at most 32 bits, so limited-precision multiplies can be combined to produce arbitrary bit-length results.

FWIW, the Intel 32-bit asm "mul" instruction can put a 64-bit result in EDX:EAX so you can actually do multiplies in 32-bit chunks (with 64-bit values to add) rather than 16-bit chunks (with 32-bit values to shift and add).
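
Here's one way the 16-bit-chunk version could look in C++ (a sketch; the mul32x32 helper and its layout are just illustrative):

#include <cstdint>
#include <cstdio>

// Schoolbook 32x32 -> 64-bit multiply built from 16-bit pieces: each 16x16
// product fits in 32 bits, and the partial products are shifted into place
// and added, with the additions supplying the carries.
void mul32x32(std::uint32_t a, std::uint32_t b,
              std::uint32_t& hi, std::uint32_t& lo)
{
    std::uint32_t a_lo = a & 0xFFFFu, a_hi = a >> 16;
    std::uint32_t b_lo = b & 0xFFFFu, b_hi = b >> 16;

    std::uint32_t p0 = a_lo * b_lo;     // contributes to bits  0..31
    std::uint32_t p1 = a_lo * b_hi;     // contributes to bits 16..47
    std::uint32_t p2 = a_hi * b_lo;     // contributes to bits 16..47
    std::uint32_t p3 = a_hi * b_hi;     // contributes to bits 32..63

    // Middle column: upper half of p0 plus the low halves of p1 and p2.
    std::uint32_t mid = (p0 >> 16) + (p1 & 0xFFFFu) + (p2 & 0xFFFFu);

    lo = (p0 & 0xFFFFu) | (mid << 16);
    hi = p3 + (p1 >> 16) + (p2 >> 16) + (mid >> 16);
}

int main()
{
    std::uint32_t hi, lo;
    mul32x32(0xFFFFFFFFu, 0xFFFFFFFFu, hi, lo);
    std::printf("%08X%08X\n", (unsigned)hi, (unsigned)lo);  // prints FFFFFFFE00000001
}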


