Why Does Integer Division in C# Return an Integer and Not a Float

Why does integer division in C# return an integer and not a float?

While it is common for new programmers to make the mistake of performing integer division when they actually meant to use floating-point division, in actual practice integer division is a very common operation. If you assume that people rarely use it, and that every time you divide you'll need to remember to cast to a floating-point type, you are mistaken.

First off, integer division is quite a bit faster, so if you only need a whole-number result you would want to use the more efficient operation.

Secondly, there are a number of algorithms that use integer division, and if the result of division were always a floating-point number you would be forced to round the result every time. One example off the top of my head is changing the base of a number: calculating each digit involves the integer division of a number along with the remainder, rather than the floating-point division of the number.
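
A minimal sketch of that base-change idea (my own illustration, not part of the original answer), producing each digit with integer division and remainder:

// Sketch only: converts a non-negative value to its digit string in the given base (2-16).
static string ToBase(int value, int radix)
{
    const string digits = "0123456789ABCDEF";
    if (value == 0) return "0";

    var result = new System.Text.StringBuilder();
    while (value > 0)
    {
        result.Insert(0, digits[value % radix]); // remainder picks the current digit
        value /= radix;                          // integer division moves to the next digit
    }
    return result.ToString();
}

// ToBase(255, 16) == "FF", ToBase(10, 2) == "1010"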

Because of these (and other related) reasons, integer division results in an integer. If you want to get the floating point division of two integers you'll just need to remember to cast one to a double/float/decimal.

Floating point division returns integer numbers

There are two problems with your code:

  1. The evident one: integer division. E.g. 1 / 2 == 0, not 0.5, since the result must be an integer.
  2. The hidden one: integer overflow. E.g. a + b can exceed int.MaxValue, and you'll get a negative result.

The most accurate implementation is

public static float Average(int a, int b)
{
    return 0.5f * a + 0.5f * b;
}

Tests:

Average(1, 2);                       // 1.5
Average(int.MaxValue, int.MaxValue); // some large positive value
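
For contrast, here is a naive sketch (hypothetical, not from the answer) that runs into both problems:

// Hypothetical naive version, for comparison only.
public static float NaiveAverage(int a, int b)
{
    return (a + b) / 2;   // int addition can overflow; int division truncates
}

// NaiveAverage(1, 2);                       // 1 (truncated, not 1.5)
// NaiveAverage(int.MaxValue, int.MaxValue); // -1 (with default unchecked arithmetic, the sum wraps around)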

Division of Integer Parameters Returning Non Float Value

This comes down to the order of operations:

float kx=(float)(img.Width / refsize.Width);

first evaluates

img.Width / refsize.Width

then casts the result (which is the integer 1) to a float.

To get your expected result, cast both widths to float before dividing (technically you can cast just one and the compiler will promote the other, but I prefer to be explicit; you never know who will maintain the code years down the road):

float kx=(float)img.Width / (float)refsize.Width;
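
A self-contained sketch of the two orderings side by side (the widths are made up, since img and refsize aren't shown here):

// Hypothetical widths standing in for img.Width and refsize.Width.
int imgWidth = 300;
int refWidth = 200;

float truncated = (float)(imgWidth / refWidth);      // casts after integer division: 1
float expected  = (float)imgWidth / (float)refWidth; // casts before division: 1.5

Console.WriteLine(truncated); // 1
Console.WriteLine(expected);  // 1.5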

Divide returns 0 instead of float

This will work the way you expect ...

float test = (float)140 / (float)1058;

By the way, your code works fine for me (it prints 0.1323251 to the console).

Float Evaluation returns integer?

float in C# has a precision of 6-9 digits according to the docs. 600851475143 divided by even the largest i in the loop (i.e. 999) still has more than 9 significant digits, so a float cannot hold the fractional part. Try switching to double or even decimal:

double input = 600851475143;
for (double i = 1; i < 1000; i++) // start at 1 to avoid dividing by zero
{
    double temp = input / i;
    if ((temp % 1) == 0) // no fractional part means i divides input evenly
    {
        Console.Write(temp + ", ");
    }
}

Also, I would say that you can use longs in this case:

long input = 600851475143;
for (long i = 1; i < 1000; i++)
{
    var x = input % i;
    if (x == 0)
    {
        Console.WriteLine(input / i);
    }
}

As for the last snippet: 10 and 9 in 10 / 9 are ints, so the result of 10 / 9 is an int equal to 1, which gives you the result you get.
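
A quick sketch of how a cast or a double literal changes that last result:

Console.WriteLine(10 / 9);         // 1 (int / int, truncated)
Console.WriteLine((double)10 / 9); // 1.1111... (double / int, promoted to double division)
Console.WriteLine(10.0 / 9);       // 1.1111... (a double literal has the same effect)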

Why is the division result between two integers truncated?

C# traces its heritage to C, so the answer to "why is it like this in C#?" is a combination of "why is it like this in C?" and "was there no good reason to change?"

The approach of C is to have a fairly close correspondence between the high-level language and low-level operations. Processors generally implement integer division as returning a quotient and a remainder, both of which are of the same type as the operands.

(So my question would be, "why doesn't integer division in C-like languages return two integers?", not "why doesn't it return a floating-point value?")

The solution was to provide separate operations for division and remainder, each of which returns an integer. In the context of C, it's not surprising that the result of each of these operations is an integer. This is frequently more accurate than floating-point arithmetic. Consider the example from your comment of 7 / 3. This value cannot be represented by a finite binary number nor by a finite decimal number. In other words, on today's computers, we cannot accurately represent 7 / 3 unless we use integers! The most accurate representation of this fraction is "quotient 2, remainder 1".
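
In C# terms (my own addition), that representation looks like this; Math.DivRem produces the quotient and remainder in one call:

int quotient  = 7 / 3; // 2
int remainder = 7 % 3; // 1

int q = Math.DivRem(7, 3, out int r); // q == 2, r == 1

Console.WriteLine($"quotient {q}, remainder {r}"); // "quotient 2, remainder 1"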

So, was there no good reason to change? I can't think of any, and I can think of a few good reasons not to change. None of the other answers has mentioned Visual Basic which (at least through version 6) has two operators for dividing integers: / converts the integers to double, and returns a double, while \ performs normal integer arithmetic.

I learned about the \ operator after struggling to implement a binary search algorithm using floating-point division. It was really painful, and integer division came in like a breath of fresh air. Without it, there was lots of special handling to cover edge cases and off-by-one errors in the first draft of the procedure.
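
As a rough sketch of why (my own illustration, not the original VB procedure), the midpoint step of a binary search is a natural fit for integer division, since the midpoint must itself be an index:

// Minimal sketch: binary search over a sorted int array.
static int BinarySearch(int[] sorted, int target)
{
    int lo = 0, hi = sorted.Length - 1;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2; // integer division: no rounding or truncation to manage
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) lo = mid + 1;
        else hi = mid - 1;
    }
    return -1; // not found
}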

From that experience, I draw the conclusion that having different operators for dividing integers is confusing.

Another alternative would be to have only one division operator, which always returns a double, and require programmers to truncate it when they want an integer result. That means performing two int-to-double conversions, a truncation, and a double-to-int conversion every time you want integer division. And how many programmers would mistakenly round or floor the result instead of truncating it? It's a more complicated system, at least as prone to programmer error, and slower.

Finally, in addition to binary search, there are many standard algorithms that employ integer arithmetic. One example is dividing collections of objects into sub-collections of similar size. Another is converting between indices in a 1-d array and coordinates in a 2-d matrix.
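
A minimal sketch of that last example, assuming a row-major layout and a hypothetical width parameter:

// Convert between a flat 1-d index and (row, column) coordinates
// of a grid with `width` columns, stored in row-major order.
static (int Row, int Col) ToCoordinates(int index, int width)
{
    return (index / width, index % width); // integer division gives the row, the remainder gives the column
}

static int ToIndex(int row, int col, int width)
{
    return row * width + col;
}

// ToCoordinates(7, 3) == (2, 1); ToIndex(2, 1, 3) == 7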

As far as I can see, no alternative to "int / int yields int" survives a cost-benefit analysis in terms of language usability, so there's no reason to change the behavior inherited from C.

In conclusion:

  • Integer division is frequently useful in many standard algorithms.
  • When the floating-point division of integers is needed, it may be invoked explicitly with a simple, short, and clear cast: (double)a / b rather than a / b.
  • Other alternatives introduce more complication for the programmer and more clock cycles for the processor.

How can I divide two integers to get a double?

You want to cast the numbers:

double num3 = (double)num1/(double)num2;

Note: if either operand is a double, double division is performed, which results in a double. So the following would work too:

double num3 = (double)num1/num2;
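
A short, self-contained illustration (num1 and num2 are made-up values, since the question doesn't give them):

int num1 = 7;
int num2 = 2;

double num3 = (double)num1 / num2; // the cast promotes the division to double
Console.WriteLine(num3);           // 3.5

double truncated = num1 / num2;    // integer division happens first, then the int 3 is converted
Console.WriteLine(truncated);      // 3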

For more information see:

Dot Net Perls

Division in double variable returning always zero

The result of 80/100 (both integers) is always 0.

Change it to 80.0/100.0
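
A minimal illustration of the difference:

double zero    = 80 / 100;     // integer division yields 0, which is then converted to 0.0
double percent = 80.0 / 100.0; // double literals force double division: 0.8

Console.WriteLine(zero);    // 0
Console.WriteLine(percent); // 0.8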


