Why do these division equations result in zero?
You're using int/int, which does everything in integer arithmetic even if you're assigning to a decimal/double/float variable.
Force one of the operands to be of the type you want to use for the arithmetic.
for (int i = 0; i <= 100; i++)
{
decimal result = i / 100m;
long result2 = i / 100;
double result3 = i / 100d;
float result4 = i / 100f;
Console.WriteLine("{0}/{1}={2} ({3},{4},{5}, {6})",
i, 100, i / 100d, result, result2, result3, result4);
}
Results:
0/100=0 (0,0,0, 0)
1/100=0.01 (0.01,0,0.01, 0.01)
2/100=0.02 (0.02,0,0.02, 0.02)
3/100=0.03 (0.03,0,0.03, 0.03)
4/100=0.04 (0.04,0,0.04, 0.04)
5/100=0.05 (0.05,0,0.05, 0.05)
(etc)
Note that this isn't showing the exact value represented by the float or the double - you can't represent 0.01 exactly as a float or a double, for example. The string formatting is effectively rounding the result. See my article on .NET binary floating point for more information, as well as a class which will let you see the exact value of a double.
I haven't bothered using 100L for result2
because the result would always be the same.
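A quick way to see that the formatted output is hiding inexact values (a minimal sketch of my own, not from the original answer): the classic 0.1 + 0.2 comparison fails, and the round-trip format reveals the stored value.

```csharp
using System;

class FloatingPointDemo
{
    static void Main()
    {
        // 0.1, 0.2 and 0.3 cannot be represented exactly in binary
        // floating point, so the "obvious" equality fails.
        Console.WriteLine(0.1 + 0.2 == 0.3);          // False
        // The "R" (round-trip) format shows the digits needed to
        // reproduce the stored value exactly.
        Console.WriteLine((0.1 + 0.2).ToString("R")); // 0.30000000000000004
    }
}
```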
Why does division result in zero instead of a decimal?
It looks like you have integer division in the second case:
tempC=((5/9)*(tempF-32))
The 5 / 9
will get truncated to zero.
To fix that, you need to make one of them a floating-point type:
tempC=((5./9.)*(tempF-32))
Division result is always zero
because in this expression
t = (1/100) * d;
1 and 100 are integer values, and integer division truncates, so the expression is the same as this:
t = (0) * d;
You need to make one of them a floating-point constant, like this:
t = (1.0/100.0) * d;
You may also want to do the same with this:
k = n / 3.0;
Division returns zero
You are working with integers here. Try using decimals for all the numbers in your calculation.
decimal share = (18m / 58m) * 100m;
What is the result of divide by zero?
It might just not halt. Integer division can be carried out in linear time through repeated subtraction: for 7/2, you can subtract 2 from 7 a total of 3 times, so that's the quotient, and the remainder (modulus) is 1. If you were to supply a divisor of 0 to an algorithm like that, unless there were a mechanism in place to prevent it, the algorithm would not halt: you can subtract 0 from 42 an infinite number of times without ever getting anywhere.
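A sketch of that repeated-subtraction algorithm (the explicit zero-divisor guard is my addition; without it the loop simply never terminates):

```csharp
using System;

class RepeatedSubtraction
{
    // Naive division of non-negative integers by repeated subtraction.
    static (int Quotient, int Remainder) Divide(int dividend, int divisor)
    {
        if (divisor == 0)
            throw new DivideByZeroException(); // without this, the loop below never halts

        int quotient = 0;
        while (dividend >= divisor)
        {
            dividend -= divisor; // subtracting 0 here would make no progress
            quotient++;
        }
        return (quotient, dividend);
    }

    static void Main()
    {
        var (q, r) = Divide(7, 2);
        Console.WriteLine($"7 / 2 = {q} remainder {r}"); // 7 / 2 = 3 remainder 1
    }
}
```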
From a type perspective, this should be intuitive. The result of an undefined computation or a non-halting one is ⊥ (“bottom”), the undefined value inhabiting every type. Division by zero is not defined on the integers, so it should rightfully produce ⊥ by raising an error or failing to terminate. The former is probably preferable. ;)
Other, more efficient (logarithmic time) division algorithms rely on series that converge to the quotient; for a dividend of 0, as far as I can tell, these will either fail to converge (i.e., fail to terminate) or produce 0. See Division on Wikipedia.
Floating-point division similarly needs a special case: to divide two floats, subtract their exponents and integer-divide their significands. Same underlying algorithm, same problem. That’s why there are representations in IEEE-754 for positive and negative infinity, as well as signed zero and NaN (for 0/0).
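Those IEEE-754 special values are easy to observe in C#: floating-point division by zero does not throw (unlike integer division), it just produces them.

```csharp
using System;

class IeeeSpecialValues
{
    static void Main()
    {
        Console.WriteLine(1.0 / 0.0);   // positive infinity
        Console.WriteLine(-1.0 / 0.0);  // negative infinity
        Console.WriteLine(0.0 / 0.0);   // NaN (0/0 is indeterminate)

        // NaN is not equal to anything, including itself,
        // so use double.IsNaN to test for it.
        Console.WriteLine(double.NaN == double.NaN);  // False
        Console.WriteLine(double.IsNaN(0.0 / 0.0));   // True
    }
}
```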
Why is 0 divided by 0 an error?
This is maths rather than programming, but briefly:
It's in some sense justifiable to assign a 'value' of positive infinity to some-strictly-positive-quantity / 0, because the limit is well-defined. However, the limit of x / y as x and y both tend to zero depends on the path they take. For example, lim (x -> 0) 2x / x is clearly 2, whereas lim (x -> 0) x / 5x is clearly 1/5. The mathematical definition of a limit requires that it is the same whatever path is followed to the limit.
Percentage calculation always returns me 0
It's probably due to the limited precision of int. Use a decimal or double instead.
When we use an integer, we lose precision.
Console.WriteLine(100 / 17); // 5
Console.WriteLine(100 / 17m); // 5.8823529411764705882352941176
Console.WriteLine(100 / 17d); // 5.88235294117647
Console.WriteLine(100 / 17f); // 5.882353
Since integer division truncates (it drops the fractional part), 0.99 as an integer is 0.
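Strictly speaking, C# integer division truncates toward zero rather than rounding down, which only matters once negative values are involved:

```csharp
using System;

class TruncationDemo
{
    static void Main()
    {
        Console.WriteLine(7 / 2);   // 3   (3.5 truncated toward zero)
        Console.WriteLine(-7 / 2);  // -3  (truncation toward zero, not floor)
        // Compare with flooring, which rounds toward negative infinity:
        Console.WriteLine((int)Math.Floor(-7 / 2.0)); // -4
    }
}
```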
Note that for precision, the types of the inputs matter.
double output = input1 * input2;
For example:
double outputA = 9 / 10;
Console.WriteLine(outputA); // 0
double outputB = 9 / 10d;
Console.WriteLine(outputB); // 0.9
double outputC = 9d / 10;
Console.WriteLine(outputC); // 0.9
Why does division by zero in the IEEE-754 standard result in an infinite value?
It's nonsense from a mathematical perspective.
Yes. No. Sort of.
The thing is: Floating-point numbers are approximations. You want to use a wide range of exponents and a limited number of digits and get results which are not completely wrong. :)
The idea behind IEEE-754 is that every operation could trigger "traps" which indicate possible problems. They are
- Illegal (senseless operation like sqrt of negative number)
- Overflow (too big)
- Underflow (too small)
- Division by zero (The thing you do not like)
- Inexact (This operation may give you wrong results because you are losing precision)
Now, many people, such as scientists and engineers, do not want to be bothered with writing trap routines. So Kahan, the driving force behind IEEE-754, decided that every operation should also return a sensible default value if no trap routines exist.
They are
- NaN for illegal values
- signed infinities for Overflow
- signed zeroes for Underflow
- NaN for indeterminate results (0/0), and signed infinities for x/0 with x != 0
- normal operation result for Inexact
The thing is that in 99% of all cases, zeroes are caused by underflow, and therefore in 99% of all cases Infinity is "correct", even if it is wrong from a strictly mathematical perspective.
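To illustrate the underflow point (a small sketch of my own; double.Epsilon is the smallest positive double value in .NET):

```csharp
using System;

class UnderflowDemo
{
    static void Main()
    {
        double tiny = double.Epsilon;  // smallest positive double
        double underflowed = tiny / 2; // too small to represent: underflows to 0
        Console.WriteLine(underflowed == 0.0); // True

        // That zero really means "something too small to represent",
        // so returning infinity for 1 / underflowed is a sensible default.
        Console.WriteLine(double.IsPositiveInfinity(1.0 / underflowed)); // True
    }
}
```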
Why C# output 0.0 for a simple Percentage formula?
Probably it's because var position and var count are being inferred as int, so the division truncates to 0 whenever position is less than count. Try changing them to double.
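For example, a cast on either operand is enough to force floating-point division (position and count here are stand-ins for the question's variables, with made-up values):

```csharp
using System;

class PercentageDemo
{
    static void Main()
    {
        int position = 3, count = 8;

        // Integer division happens first, so this is always 0 for position < count.
        double wrong = position / count * 100;          // 0
        // Casting one operand promotes the whole expression to double.
        double right = (double)position / count * 100;  // 37.5

        Console.WriteLine(wrong);
        Console.WriteLine(right);
    }
}
```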
floats being saved as 0 after calculation
Since 1, 10 and 100 are all integer values, the division is also done in integer arithmetic, truncated toward zero. In this case 1/10 = 0,
so (1/10)*100 = 0
If you do not want this try using floats:
(1.0f/10)*100
In case you're working with integer variables you have to convert them first. This can be achieved by casting, like so:
int a=1;
...
float b = ((float) a)/10; // b will be 0.1
Another quick way of doing this in a line with multiple operations is multiplying by 1.0f:
int a = 1;
int x = 100;
float c = (a * 1.0f) / x; // c will be 0.01f