Why Does Integer Division Yield a Float Instead of Another Integer

Why does integer division yield a float instead of another integer?

Take a look at PEP-238: Changing the Division Operator

The // operator will be available to request floor division unambiguously.
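A quick REPL sketch of the two operators in Python 3 (the values are arbitrary):

>>> 7 / 2       # true division: always returns a float
3.5
>>> 7 // 2      # floor division: int operands give an int result
3
>>> -7 // 2     # floors toward negative infinity, not toward zero
-4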

Why does integer division return float?

You misunderstood the operator. It is a floor division operator, not an integer division operator.

For floating point inputs, it'll still return a floored float value.

From the Binary arithmetic operations section:

The / (division) and // (floor division) operators yield the quotient of their arguments. The numeric arguments are first converted to a common type. Division of integers yields a float, while floor division of integers results in an integer; the result is that of mathematical division with the ‘floor’ function applied to the result.

The result of flooring is safe to convert to an integer.
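For example, a small REPL sketch of both points:

>>> 7.0 // 2        # float operand, so the floored result is still a float
3.0
>>> -7.0 // 2
-4.0
>>> int(7.0 // 2)   # flooring makes the conversion to int lossless
3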

Note that Python applies these rules to almost all numeric types used in binary arithmetic operations; division and floor division are not exceptions here, nor is this specific to integers and floats (try it with import fractions and then 2 // fractions.Fraction(1, 1), for example). The only exception is complex numbers, for which floor division, the modulo operator, and divmod() are not defined.
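A quick sketch of both cases (the exact error text may vary between Python versions):

>>> import fractions
>>> 2 // fractions.Fraction(1, 1)   # floor division works across numeric types
2
>>> (1 + 2j) // 2                   # ...but not for complex numbers
Traceback (most recent call last):
  ...
TypeError: unsupported operand type(s) for //: 'complex' and 'int'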

Why doesn't dividing two integers yield a float?

This is because of implicit conversion. The variables b, c, and d are of type float, but when the / operator sees two integer operands it performs integer division and returns an int; that int is only then implicitly converted to a float when it is assigned. If you want floating-point division, make at least one of the operands to / a float, as follows.

#include <stdio.h>

int main(void) {
    int a;
    float b, c, d;

    a = 750;
    b = a / 350.0f;   /* int / float  -> floating-point division */
    c = 750;
    d = c / 350;      /* float / int  -> floating-point division */
    printf("%.2f %.2f\n", b, d);
    /* output: 2.14 2.14 */
    return 0;
}

Python 3 int division operator is returning a float?

From PEP-238, which introduced the new division:

Semantics of Floor Division

Floor division will be implemented in all the Python numeric types, and will have the semantics of:

a // b == floor(a/b)

except that the result type will be the common type into which a and b are coerced before the operation.

Specifically, if a and b are of the same type, a//b will be of that type too. If the inputs are of different types, they are first coerced to a common type using the same rules used for all other arithmetic operators.

In particular, if a and b are both ints or longs, the result has the same type and value as for classic division on these types (including the case of mixed input types; int//long and long//int will both return a long).

For floating point inputs, the result is a float. For example:

3.5//2.0 == 1.0

For complex numbers, // raises an exception, since floor() of a complex number is not allowed.

For user-defined classes and extension types, all semantics are up to the implementation of the class or type.

So yes, it is supposed to behave that way. "// means integer division and should return an integer" - not quite: it means floor division and should return something equal to an integer (you'd always expect (a // b).is_integer() to be True whenever either operand is a float).
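For instance, a small REPL sketch:

>>> 7.5 // 2
3.0
>>> (7.5 // 2).is_integer()
True
>>> 7 // 2      # both operands are ints, so the result stays an int
3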

Why does integer division in C# return an integer and not a float?

While it is common for new programmers to make the mistake of performing integer division when they actually meant to use floating-point division, in actual practice integer division is a very common operation. If you assume that people rarely use it, and that every time you divide you will need to remember to cast to a floating-point type, you are mistaken.

First off, integer division is quite a bit faster, so if you only need a whole-number result you would want to use the more efficient operation.

Secondly, there are a number of algorithms that use integer division, and if the result of division was always a floating point number you would be forced to round the result every time. One example off of the top of my head is changing the base of a number. Calculating each digit involves the integer division of a number along with the remainder, rather than the floating point division of the number.

Because of these (and other related) reasons, integer division results in an integer. If you want to get the floating point division of two integers you'll just need to remember to cast one to a double/float/decimal.
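As a concrete illustration of the base-conversion point above, here is a minimal sketch in Python rather than C# (the function name and digit order are just illustrative); Python's // is its integer/floor division operator:

def to_base(n, base):
    """Return the digits of non-negative n in the given base, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        n, remainder = divmod(n, base)   # integer division plus remainder
        digits.append(remainder)
    return list(reversed(digits))

print(to_base(13, 2))   # 13 in base 2 is 1101 -> [1, 1, 0, 1]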

Why does dividing two ints not yield the right value when assigned to a double?

This is because you are using the integer-division version of operator/, which takes two ints and returns an int. In order to use the double version, which returns a double, at least one of the ints must be explicitly cast to a double.

c = a/(double)b;

Why does the division get rounded to an integer?

You're using Python 2.x, where dividing two integers performs floor division (the result is rounded down to an integer) instead of producing a floating point number.

>>> 1 / 2
0

You should make one of them a float:

>>> float(10 - 20) / (100 - 10)
-0.1111111111111111

or use from __future__ import division, which forces / to adopt the Python 3.x behavior of always returning a float.

>>> from __future__ import division
>>> (10 - 20) / (100 - 10)
-0.1111111111111111

Integer from tuple, divided, yields integer instead of float?

Your print statements say to display the values as integers (%d); try using %f instead.
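For example, a small sketch with an arbitrary value (the original tuple values aren't shown here):

>>> "%d" % 2.5    # formats the value as an integer, discarding the fraction
'2'
>>> "%f" % 2.5    # formats the value as a float
'2.500000'
>>> "%.2f" % 2.5
'2.50'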


