Why Does Integer Division Code Give the Wrong Answer

Why does integer division code give the wrong answer?

You're dividing integers, which means that you're using integer division.

In integer division the fractional part of the result is thrown away.

Try the following:

float res = (float) quantity / standard;
^^^^^^^

The cast forces the numerator to be treated as a float, which in turn promotes the denominator to float as well, so a float division is performed instead of an int division.
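
For instance, here's a minimal sketch (the variable names come from the line above; the values are made up purely for illustration):

int quantity = 6800;
int standard = 500;

int truncated = quantity / standard;          // 13: integer division throws the .6 away
float correct = (float) quantity / standard;  // 13.6: the cast is applied before the division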

Note that if you're dealing with literals, you can change

float f = 6800 / 500; //13.0 - the integer division has already happened

to include the f suffix, which makes the numerator a float literal (again promoting the other operand):

float f = 6800f / 500; //13.6
^

Division operation is giving me the wrong result

Like others have said, you're using integer division.

Remember that int values can't hold decimal places. So if you do a calculation that would result in decimal places, the value is truncated and you're left with only the whole-number part (in other words, the integer part).

int x = 1;
int y = 2;
int z = x/y; //0

int x = 5;
int y = 2;
int z = x/y; //2

You're using int literals (whole numbers like 2 and 3), and those don't have decimal places, so Processing treats them as int values, and they obey the rules above.

If you care about the decimal places, then either store the values in float or double variables:

float x = 1;
float y = 2;
float z = x/y; //0.5

float x = 5;
float y = 2;
float z = x/y; //2.5

Or just use float literals by adding a decimal part:

float a = 2.0/3.0; //0.6666667

Processing now knows that these are float values, so you'll keep the decimal places.

What's wrong with this division?

It's doing integer division in the first example, because int is the default type for a whole-number literal. Try changing it to -1.0/9 (or 1d/9d; the d suffix indicates a double) and you should get the same answer.
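
A quick sketch of the difference (my own example, in plain Java; results are in the comments):

double a = -1 / 9;   // 0.0: both literals are ints, so integer division runs first
double b = -1.0 / 9; // -0.1111111111111111: the decimal point makes -1.0 a double
double c = -1d / 9d; // -0.1111111111111111: the d suffix has the same effect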

Why am I getting the wrong division answer?

You're doing division between two integers: parseInt(50.00) gives 50 and parseInt(5.58) gives 5, so your calculation ends up doing 50 / 5, which equals 10. To do math with your floating-point numbers, there is no need to parse them; they're already floats:

const price = 5.58; // already a float
console.log(price); // prints 5.58

const money = 50.00; // already a float
console.log(money); // prints 50

const dev = money / price;
console.log(dev); // prints 8.960573476702509

Why does Python 3.4 give the wrong answer for division of large numbers, and how can I test for divisibility?

The floating-point result is wrong because dividing two ints with / produces a float, and the exact result of your division cannot be represented exactly as a float. The exact result 11882227807719423 must be rounded to the nearest representable number:

In [1]: float(11882227807719423)
Out[1]: 1.1882227807719424e+16

To test for divisibility, stay in integer arithmetic instead: check the remainder with %, e.g. a % b == 0 (or use // and divmod), which is exact for arbitrarily large Python ints.

Why does integer division in C# return an integer and not a float?

While it is common for new programmers to make the mistake of performing integer division when they actually meant floating-point division, in actual practice integer division is a very common operation. If you assume that people rarely use it, and that every time you divide you'll always need to remember to cast to floating point, you are mistaken.

First off, integer division is quite a bit faster, so if you only need a whole-number result, you'd want to use the more efficient operation.

Secondly, there are a number of algorithms that use integer division, and if the result of division were always a floating-point number you would be forced to round the result every time. One example off the top of my head is changing the base of a number: calculating each digit involves the integer division of a number along with the remainder, rather than the floating-point division of the number.
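
As a rough sketch of that base-conversion idea (my own minimal Java example, not code from the original answer):

// Builds the digits of n (assumed non-negative) in the given base
// using integer division and the remainder.
static String toBase(int n, int base) {
    if (n == 0) return "0";
    StringBuilder digits = new StringBuilder();
    while (n > 0) {
        digits.append(Character.forDigit(n % base, base)); // remainder = next least-significant digit
        n /= base;                                         // integer division discards that digit
    }
    return digits.reverse().toString();                    // e.g. toBase(13, 2) -> "1101"
}

Every iteration needs the whole-number quotient, so if / always produced a floating-point value the result would have to be rounded back down at each step.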

For these (and other related) reasons, integer division results in an integer. If you want the floating-point division of two integers, you'll just need to remember to cast one of them to a double/float/decimal.

Int division: Why is the result of 1/3 == 0?

The two operands (1 and 3) are integers, so integer arithmetic (division, here) is used. Declaring the result variable as double just causes an implicit conversion to occur after the division.

Integer division, of course, returns the true result of the division rounded towards zero, so the exact result 0.333... becomes 0 here. (The processor doesn't actually compute a fraction and then round it, but you can think of it that way.)
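
For instance (a quick example of my own, in plain Java; the negative case shows the truncation is towards zero, not towards negative infinity):

int a = 1 / 3;   // 0: the exact 0.333... is truncated towards zero
int b = -7 / 2;  // -3: also truncated towards zero (flooring would give -4)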

Also, note that if both operands are given as floating-point literals (1.0 and 3.0), or even just one of them, then floating-point arithmetic is used, giving you 0.333....
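
A minimal sketch of where that implicit conversion happens (again my own example; results are in the comments):

double r1 = 1 / 3;   // 0.0: the int division runs first, then the int 0 is widened to double
double r2 = 1.0 / 3; // 0.3333333333333333: one floating-point operand is enough
double r3 = 1 / 3.0; // 0.3333333333333333: it can be either operand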


