What's the Best Way to Do Fixed-Point Math

Fixed Point Arithmetic in C Programming

The idea behind fixed-point arithmetic is that you store values multiplied by a certain amount, use the multiplied values for all calculations, and divide by the same amount when you want the result. The purpose of this technique is to use integer arithmetic (int, long...) while still being able to represent fractions.

The usual and most efficient way of doing this in C is with the bit-shifting operators (<< and >>). Shifting bits is a simple and fast operation for the ALU, and it has the property of multiplying (<<) or dividing (>>) the integer value by 2 on each shift (besides, many shifts can be done for exactly the same price as a single one). Of course, the drawback is that the multiplier must be a power of 2 (which is usually not a problem in itself, as we don't really care about the exact multiplier value).

Now let's say we want to use 32-bit integers for storing our values. We must choose a power-of-2 multiplier. Let's split the cake in two halves, so say 65536 (this is the most common case, but you can really use any power of 2 depending on your precision needs). This is 2^16, and the 16 here means that we will use the 16 least significant bits (LSB) for the fractional part. The rest (32 - 16 = 16) is for the most significant bits (MSB), the integer part.

  integer (MSB)   fraction (LSB)
       v                v
0000000000000000.0000000000000000

Let's put this in code:

#define SHIFT_AMOUNT 16 // 2^16 = 65536
#define SHIFT_MASK ((1 << SHIFT_AMOUNT) - 1) // 65535 (all LSB set, all MSB clear)

int price = 500 << SHIFT_AMOUNT;

This is the value you store (in a structure, a database, whatever). Note that int is not necessarily 32 bits in C, even though it is mostly the case nowadays. Also, without further declaration, it is signed by default. You can add unsigned to the declaration to be sure. Better than that, you can use uint32_t or uint_least32_t (declared in stdint.h) if your code strongly depends on the integer bit size (you may rely on bit-size-specific tricks). If in doubt, use a typedef for your fixed-point type and you're safer.
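For example, a minimal sketch of such a typedef with a couple of conversion macros (the names fixed_t, FIXED_SHIFT, INT_TO_FIXED and FIXED_TO_INT are illustrative, not standard):

#include <stdint.h>

typedef uint32_t fixed_t;                               // unsigned Q16.16 fixed-point

#define FIXED_SHIFT      16
#define INT_TO_FIXED(x)  ((fixed_t)(x) << FIXED_SHIFT)  // integer -> fixed
#define FIXED_TO_INT(x)  ((x) >> FIXED_SHIFT)           // fixed -> integer (truncates)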

When you want to do arithmetic on this value, you can use the 4 basic operators: +, -, * and /. You have to keep in mind that when adding or subtracting a value (+ and -), that value must also be shifted. Let's say we want to add 10 to our 500 price:

price += 10 << SHIFT_AMOUNT;

But for multiplication and division (* and /), the multiplier/divisor must NOT be shifted. Let's say we want to multiply by 3:

price *= 3;

Now let's make things more interesting by dividing the price by 4, so we end up with a non-zero fractional part:

price /= 4; // now our price is ((500 + 10) * 3) / 4 = 382.5

Those are all the rules. When you want to retrieve the real price at any point, you must right-shift:

printf("price integer is %d\n", price >> SHIFT_AMOUNT);

If you need the fractional part, you must mask out the integer bits:

printf ("price fraction is %d\n", price & SHIFT_MASK);

Of course, this value is not what we would call a decimal fraction; in fact, it is an integer in the range [0 - 65535]. But it maps exactly onto the decimal fraction range [0 - 0.9999...]. In other words, the mapping looks like: 0 => 0, 32768 => 0.5, 65535 => 0.9999...

An easy way to see it as a decimal fraction is to resort to C's built-in floating-point arithmetic at this point:

printf("price fraction in decimal is %f\n", ((double)(price & SHIFT_MASK) / (1 << SHIFT_AMOUNT)));

But if you don't have FPU support (either hardware or software), you can use your new skills like this for the complete price:

printf("price is roughly %d.%lld\n", price >> SHIFT_AMOUNT, (long long)(price & SHIFT_MASK) * 100000 / (1 << SHIFT_AMOUNT));

The number of 0's in the expression is roughly the number of digits you want after the decimal point. Don't overestimate the number of 0's given your fraction precision (no real trap here, that's quite obvious). Don't use a plain long, as sizeof(long) can be equal to sizeof(int). Use long long in case int is 32 bits, as long long is guaranteed to be at least 64 bits (or use int64_t, int_least64_t and such, declared in stdint.h). In other words, use a type twice the size of your fixed-point type, that's fair enough. Finally, if you don't have access to >= 64-bit types, maybe it's time to exercise emulating them, at least for your output.
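One caveat with the printf above: if the fractional digits happen to start with zeros (0.05, for example), a plain %lld silently drops them. A minimal fix, assuming the same Q16.16 layout and 5 printed digits, is to pad with a width specifier:

long long frac = (long long)(price & SHIFT_MASK) * 100000 / (1 << SHIFT_AMOUNT);
printf("price is roughly %d.%05lld\n", price >> SHIFT_AMOUNT, frac);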

These are the basic ideas behind fixed-point arithmetic.

Be careful with negative values. It can become tricky sometimes, especially when it's time to show the final value. Besides, C leaves some behavior implementation-defined for signed integers (right-shifting a negative value, for instance), even though platforms where this is a problem are very uncommon nowadays. You should always run minimal tests in your environment to make sure everything goes as expected. If not, you can hack around it if you know what you're doing (I won't expand on this, but it has to do with arithmetic vs. logical shift and two's complement representation). With unsigned integers, however, you're mostly safe whatever you do, as the behaviors are well defined anyway.
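As an illustration of why negatives are tricky at output time, here is a minimal sketch (my own, not part of the original text) that prints a signed Q16.16 value by working on its magnitude, assuming <stdint.h> and <stdio.h> are included:

int32_t p = -(383 << 16) - 32768;                         // -383.5 in signed Q16.16
uint32_t mag = (p < 0) ? 0u - (uint32_t)p : (uint32_t)p;  // magnitude, computed without signed overflow
printf("%s%u.%05llu\n",
       (p < 0) ? "-" : "",
       mag >> 16,                                         // integer part of the magnitude
       (unsigned long long)(mag & 0xFFFF) * 100000 / 65536);  // 5 fractional digits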

Also take note that if a 32-bit integer cannot represent values bigger than 2^32 - 1, using fixed-point arithmetic with a 2^16 multiplier limits your integer range to 2^16 - 1! (And divide all of this by 2 with signed integers, which in our example would leave us with an available range of 2^15 - 1.) The goal is then to choose a SHIFT_AMOUNT suitable to the situation. This is a tradeoff between integer part magnitude and fractional part precision.

Now for the real warnings: this technique is definitely not suitable in areas where precision is a top priority (finance, science, the military...). The usual floating-point types (float/double) are also often not precise enough there, even though they have better properties than fixed-point overall. Fixed-point has the same absolute precision whatever the value (this can be an advantage in some cases), whereas a float's precision is inversely proportional to the value's magnitude (i.e. the lower the magnitude, the more precision you get... well, it's more complex than that, but you get the point). Also, floats have a much greater range than integers of the same bit width (fixed-point or not), at the cost of a loss of precision with large values (you can even reach a magnitude where adding 1, or even greater values, has no effect at all, something that cannot happen with integers).

If you work in those sensitive areas, you're better off using libraries dedicated to arbitrary precision (go take a look at gmplib, it's free). In computing, essentially, gaining precision is about the number of bits you use to store your values. You want high precision? Use more bits. That's all.

Understanding fixed-point arithmetic

Maybe it's easiest to make an example.

Suppose you want to add two numbers, one in the format A(3, 5), and the other in the format A(2, 10) - that is, 3 integer bits and 5 fraction bits (plus a sign bit) versus 2 integer bits and 10 fraction bits (plus a sign bit).

You can do it by converting both numbers to a "common" format - that is, they should have the same number of bits in the fractional part.

A conservative way of doing that is to choose the greater number of bits. That is, convert the first number to A(3, 10) by shifting it 5 bits left. Then, add the second number.

The result of an addition has the range of the greater format, plus 1 bit. In my example, if you add A(3, 10) and A(2, 10), the result has the format A(4, 10).
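A minimal sketch of this conservative addition in C, assuming <stdint.h> is included, the A(i, f) convention above (i integer bits, f fraction bits, plus a sign bit), 16-bit storage, and illustrative values:

int16_t a = 100;              // A(3,5):  100 / 2^5   = 3.125
int16_t b = 1536;             // A(2,10): 1536 / 2^10 = 1.5
int16_t a_aligned = a << 5;   // A(3,10): fraction bits now line up (3200 / 2^10 = 3.125)
int16_t sum = a_aligned + b;  // A(4,10): 4736 / 2^10 = 4.625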

I call this the "conservative" way because you cannot lose information - it guarantees that the result is representable in the fixed-point format, without losing precision. However, in practice, you will want to use smaller formats for your calculation results. To do that, consider these ideas:

  1. You can use the less-accurate format as your common representation. In my example, you can convert the second number to A(2, 5) by shifting the integer right by 5 bits. This will lose precision, and usually this precision loss is not problematic, because you are going to add a less-precise number to it anyway.
  2. You can use 1 fewer bit for the integer part of the result. In applications, it often happens that the result cannot be too big. In this case, you can allocate 1 fewer bit to represent it. You might want to check if the result is too big, and clamp it to the needed range, as in the sketch below.
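A minimal clamping (saturation) sketch for a result that must fit in A(3, 10), i.e. a signed 14-bit range; this is my own illustration and the variable names are hypothetical:

int32_t wide = intermediate_sum;   // wider intermediate result, may exceed A(3,10)
if (wide >  8191) wide =  8191;    // saturate at  2^13 - 1
if (wide < -8192) wide = -8192;    // saturate at -2^13
int16_t clamped = (int16_t)wide;   // now safely representable in A(3,10)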

Now, on multiplication.

It's possible to multiply two fixed-point numbers directly - they can be in any format. The format of the result is the "sum of the input formats" - the integer parts and the fraction parts are added together - plus 1 extra bit for the integer part. In my example, multiplying A(3, 5) with A(2, 10) gives a number in the format A(6, 15). This is a conservative rule - the output format is able to store the result without loss of precision, but in applications, you almost always want to cut the precision of the output, because it's just too many bits.


In your case, where the number of bits for all numbers is 32, you probably want to lose precision in such a way that all intermediate results have 32 bits.

For example, multiplying A(17, 14) with A(2, 29) gives A(20, 43) - 64 bits required. You probably should keep only 32 of those bits and throw away the rest. What is the range of the result? If your twiddle factor is a number up to 4, the result is probably bounded by 2^19 (the conservative 20 integer bits above are only needed to accommodate the edge case of multiplying -2^31 by -2^31 in raw terms - it's almost always worth rejecting this edge case).

So use A(19, 12) for your output format, i.e. remove 31 bits from the fractional part of your output.

So, instead of

res = a*tw;

you probably want

int64_t res_tmp = (int64_t)a * tw;          // A(20, 43)
if (res_tmp == ((int64_t)1 << 62))          // you might want to neglect this edge case
    --res_tmp;                              // A(19, 43)
int32_t res = (int32_t)(res_tmp >> 31);     // A(19, 12)

How to do fixed-point math instead of floating-point?

Check the links below for fixed point; I'm sure they will be useful for you...

http://cnx.org/content/m11054/latest/

http://www.digitalsignallabs.com/fp.pdf

Fixed-point arithmetic in modern systems


Since I'd like to avoid forcing the CPU to create a 64-bit type in the middle of my calculations, is shifting to lower bit values the only other alternative?

You have to work with the hardware capabilities, and the only available operations you'll find are:

  1. Multiply N x N => low N bits (native C multiplication)
  2. Multiply N x N => high N bits (the C language has no operator for this)
  3. Multiply N x N => all 2N bits (cast to wider type, then multiply)

If the instruction set has #3, and the CPU implements it efficiently, then there's no need to worry about the extra-wide result it produces. For x86, you can pretty much take these as a given. Anyway, you said this wasn't an optimization question :) .

Sticking to just #1, you'll need to break the operands into pieces of (N/2) bits and do long multiplication, which is likely to generate more work. There are still cases where it's the right thing to do, for instance implementing #3 (software extended arithmetic) on a CPU that has neither #2 nor #3 natively.
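A minimal sketch of that long-multiplication idea, computing the full 64-bit product of two unsigned 32-bit values using only 32-bit multiplies of 16-bit halves (the function name is illustrative; signed operands would need extra care):

#include <stdint.h>

// full 64-bit product of two unsigned 32-bit values, using only 32-bit multiplies
void mul32x32(uint32_t a, uint32_t b, uint32_t *result_hi, uint32_t *result_lo)
{
    uint32_t a_lo = a & 0xFFFF, a_hi = a >> 16;
    uint32_t b_lo = b & 0xFFFF, b_hi = b >> 16;

    uint32_t lo_lo = a_lo * b_lo;   // contributes to bits  0..31
    uint32_t lo_hi = a_lo * b_hi;   // contributes to bits 16..47
    uint32_t hi_lo = a_hi * b_lo;   // contributes to bits 16..47
    uint32_t hi_hi = a_hi * b_hi;   // contributes to bits 32..63

    // everything that lands in bits 16..31, plus its carry into bit 32
    uint32_t mid = (lo_lo >> 16) + (lo_hi & 0xFFFF) + (hi_lo & 0xFFFF);

    *result_lo = (mid << 16) | (lo_lo & 0xFFFF);
    *result_hi = hi_hi + (lo_hi >> 16) + (hi_lo >> 16) + (mid >> 16);
}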

Is that precision loss just a part of using fixed point numbers?

log2(9.1234567890 - 9.123444) = -16.25, and you used 16 bits of precision, so yep, that's very typical.

How to connect the theory of fixed-point numbers and its practical implementation?

Fixed-point formats are used as a way to represent fractional numbers. Quite commonly, processors perform fixed-point or integer arithmetic faster or more efficiently than floating-point arithmetic. Whether fixed-point arithmetic is suitable for an application depends on what numbers the application needs to work with.

Using fixed-point formats does require converting input to the fixed-point format and converting numbers in the fixed-point format to output. But this is also true of integers and floating-point: All input must be converted to whatever internal format is used to represent it, and all output must be produced by converting from internal formats.
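As a small illustration of such conversions (my sketch, assuming a signed Q16.16 format; double is used only at the input/output boundary):

#include <stdint.h>

#define FRAC_BITS 16

int32_t to_fixed(double x)    { return (int32_t)(x * (1 << FRAC_BITS) + (x >= 0 ? 0.5 : -0.5)); }  // round to nearest
double  from_fixed(int32_t f) { return (double)f / (1 << FRAC_BITS); }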

And how does multiplying by 2^(fractional_bits) affect the number of digits after the point?

Suppose we have some number x that is represented as an integer X = x·2^f, where f is the number of fraction bits. Conceptually, X is in a fixed-point format. Similarly, we have y represented as Y = y·2^f.

If we execute an integer multiplication instruction to produce the result Z = X·Y, then Z = X·Y = (x·2^f)·(y·2^f) = x·y·2^(2f). Then, if we divide Z by 2^f (or, nearly equivalently, shift it right by f bits), we have x·y·2^f, except for any rounding errors that may have occurred in the division. And x·y·2^f is the fixed-point representation of the product of x and y.

Thus, we can effect a fixed-point multiplication by performing an integer multiplication followed by a shift.

Often, to get rounding instead of truncation, a value of half of 2^f is added before the shift, so we compute floor((X·Y + 2^(f-1)) / 2^f):

  • Multiply X by Y.
  • Add 2^(f-1).
  • Shift right f bits.
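A minimal sketch of that rounded multiplication in C, assuming signed Q16.16 operands and a 64-bit intermediate (the function name is illustrative):

#include <stdint.h>

#define FRAC 16

int32_t fixed_mul_round(int32_t x, int32_t y)
{
    int64_t z = (int64_t)x * y;         // full product, now with 2*FRAC fraction bits
    z += (int64_t)1 << (FRAC - 1);      // add 2^(f-1) so the shift rounds to nearest
    return (int32_t)(z >> FRAC);        // back to Q16.16 (relies on arithmetic right shift
                                        // for negative z, and can overflow for large inputs)
}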

Is fixed point math faster than floating point?

Fixed point is marginally useful on platforms that do not support any kind of decimal type of their own; for example, I implemented a 24-bit fixed point type for the PIC16F series microcontrollers (more on why I chose fixed point later).

However, almost every modern CPU supports floating point at the microcode or hardware level, so there isn't much need for fixed point.

Fixed point numbers are limited in the range they can represent - consider a 64-bit (32.32) fixed point vs. a 64-bit floating point: the 64-bit fixed point number has a decimal resolution of 1/2^32, while the floating point number has a decimal resolution of up to 1/2^53; the fixed point number can represent values as high as 2^31, while the floating point number can represent numbers up to 2^223. And if you need more, most modern CPUs support 80-bit floating point values.

Of course, the biggest downfall of floating point is limited precision in extreme cases - e.g. in fixed point, it would require fewer bits to represent 9000000000000000000000000000000.00000000000000000000000000000002. Of course, with floating point, you get better precision for average uses of decimal arithmetic, and I have yet to see an application where decimal arithmetic is as extreme as the above example yet also does not overflow the equivalent fixed-point size.

The reason I implemented a fixed-point library for the PIC16F rather than use an existing floating point library was code size, not speed: the 16F88 has 384 bytes of usable RAM and room for 4095 instructions total. To add two fixed point numbers of predefined width, I inlined integer addition with carry-out in my code (the fixed point doesn't move anyway); to multiply two fixed point numbers, I used a simple shift-and-add function with extended 32-bit fixed point, even though that isn't the fastest multiplication approach, in order to save even more code.
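For reference, a minimal sketch of the shift-and-add multiplication idea in C (my own illustration; the original PIC implementation is not reproduced here), assuming unsigned Q24.8 operands:

#include <stdint.h>

uint32_t fixmul_shift_add(uint32_t a, uint32_t b)   // a, b in unsigned Q24.8
{
    uint64_t acc = 0;
    for (int i = 0; i < 32; i++)
        if (b & ((uint32_t)1 << i))
            acc += (uint64_t)a << i;    // add a copy of a, shifted by the position of each set bit of b
    return (uint32_t)(acc >> 8);        // drop the extra 8 fraction bits (truncating)
}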

So, when I had need of only one or two basic arithmetic operations, I was able to add them without using up all of the program storage. For comparison, a freely available floating point library on that platform was about 60% of the total storage on the device. In contrast, software floating point libraries are mostly just wrappers around a few arithmetic operations, and in my experience, they are mostly all-or-nothing, so cutting the code size in half because you only need half of the functions doesn't work so well.

Fixed point generally doesn't provide much of an advantage in speed though, because of its limited representation range: how many bits would you need to represent 1.7E+/-308 with 15 digits of precision, the same as a 64-bit double? If my calculations are correct, you'd need somewhere around 2020 bits. I'd bet the performance of that wouldn't be so good.

Thirty years ago, when hardware floating point was relatively rare, very special-purpose fixed-point (or even scaled integer) arithmetic could provide significant gains in performance over doing software-based floating point, but only if the allowable range of values could be efficiently represented with scaled-integer arithmetic (the original Doom used this approach when no coprocessor was available, such as on my 486sx-25 in 1992 - typing this on an overclocked hyperthreaded Core i7 running at 4.0GHz with a GeForce card that has over 1000 independent floating point compute units, it just seems wrong somehow, although I'm not sure which - the 486, or the i7...).

Floating point is more general purpose due to the range of values it can represent, and with it implemented in hardware on both CPUs and GPUs, it beats fixed point in every way, unless you really need more than 80-bit floating point precision at the expense of huge fixed-point sizes and very slow code.


