Why Is the "F" Required When Declaring Floats

Why is the f required when declaring floats?

Your declaration of a float contains two parts:

  1. It declares that the variable timeRemaining is of type float.
  2. It assigns the value 0.58 to this variable.

The problem occurs in part 2.
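
For concreteness, here is the kind of declaration being discussed (the error shown is what current C# compilers report; exact wording may vary by compiler version):

float timeRemaining = 0.58;   // error CS0664: Literal of type double cannot be implicitly
                              // converted to type 'float'; use an 'F' suffix
float timeRemaining = 0.58f;  // compiles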

The right-hand side is evaluated on its own. According to the C# specification, a number containing a decimal point that doesn't have a suffix is interpreted as a double.

So we now have a double value that we want to assign to a variable of type float. In order to do this, there must be an implicit conversion from double to float. There is no such implicit conversion (only an explicit one), because you may (and in this case do) lose information in the conversion.
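
You can confirm the compile-time type of the literal yourself; this small check relies only on the standard library (GetType reports the runtime type, which here matches what the compiler inferred):

var x = 0.58;
Console.WriteLine(x.GetType());      // System.Double
Console.WriteLine(0.58f.GetType());  // System.Single (the CLR name for float)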

The reason is that the value used by the compiler isn't really 0.58, but the floating-point value closest to 0.58, which is 0.57999999999999996003197111349436454474925994873046875 for double and 0.579999983310699462890625 for float (both values are exact).
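
To see that rounding directly, you can print the stored values with round-trip precision; this sketch assumes .NET's "G17" format specifier, which emits enough digits to round-trip a double:

double d = 0.58;
float f = 0.58f;
Console.WriteLine(d.ToString("G17"));            // 0.57999999999999996
Console.WriteLine(((double)f).ToString("G17"));  // 0.5799999833106995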

Strictly speaking, the f is not required. You can avoid having to use the f suffix by casting the value to a float:

float timeRemaining = (float)0.58;

Why is the letter f used at the end of a float number?

The f indicates it's a float literal, not a double literal (which it would otherwise be). It hasn't got a particular technical name that I know of - I tend to call it the "letter suffix" if I need to refer to it specifically, though that's somewhat arbitrary!

For instance:

float f = 3.14f; // compiles
float f = 3.14;  // doesn't compile: a double literal can't go into a float without a cast

You could of course do:

float f = (float)3.14;

...which accomplishes near enough the same thing, but the F is a neater, more concise way of showing it.

Why was double chosen as the default rather than float? Well, these days the memory requirements of a double over a float aren't an issue in 99% of cases, and the extra accuracy they provide is beneficial in a lot of cases - so you could argue that's the sensible default.
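
A quick illustration of that accuracy difference (the output shown is what a modern .NET runtime prints, using shortest round-trip formatting):

Console.WriteLine(1f / 3f);   // 0.33333334         - float carries roughly 7 significant decimal digits
Console.WriteLine(1.0 / 3.0); // 0.3333333333333333 - double carries roughly 15-16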

Note that you can explicitly mark a literal as a double by putting a d at the end as well:

double d = 3.14d;

...but because such a literal is a double anyway, this has no effect. Some people might argue for it on the grounds that it makes the intended literal type clearer, but personally I think it just clutters code (unless perhaps you have a lot of float literals hanging around and you want to emphasise that this particular literal is indeed meant to be a double, and that the omission of the f isn't just a bug).

Is there a reason to always declare floats with the type suffix 'f' in C#?

The C# specification defines the list of allowed implicit numeric conversions. An implicit conversion is allowed only when the target type can hold the original type's range. That is why float distance = 0.3; is an error: the float range cannot accommodate the double range.
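
For example, these widening conversions are implicit, while the narrowing one must be explicit (a minimal sketch):

int i = 42;
float fromInt = i;           // implicit int -> float: float's range covers int
double fromFloat = 3.5f;     // implicit float -> double: double's range covers float
// float distance = 0.3;     // error: no implicit double -> float conversion
float distance = (float)0.3; // the narrowing direction requires an explicit cast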

As to efficiency, there is no difference between 3 and 3f: the compiler emits the same IL for both.

IL_0001:  ldc.r4     3.    // float distance1 = 3;
IL_0006:  stloc.0
IL_0007:  ldc.r4     3.    // float distance2 = 3f;
IL_000c:  stloc.1

What's the use of the suffix `f` on a float value?

3.00 is interpreted as a double, as opposed to 3.00f which is seen by the compiler as a float.

The f suffix simply tells the compiler which is a float and which is a double.
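
Overload resolution shows this in action; in this sketch the Print helpers are hypothetical, and the suffix alone decides which overload is called:

static void Print(float x)  => Console.WriteLine("float overload");
static void Print(double x) => Console.WriteLine("double overload");

Print(3.00f); // float overload
Print(3.00);  // double overload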

See MSDN (C++)

Why is f placed after float values?

By default, 12.3 is a double literal. To tell the compiler to treat it as a float explicitly, use the f or F suffix.

See the tutorial page on primitive types.

Why is 'f' required on floats in C#

Konrad is right.

If you see this as redundancy, you can use e.g. var:

var myFloat = 23f;

Tada, the redundant float is gone :))
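
Note that the suffix is still doing the work here: with var, it is the only thing left that states the type.

var myFloat = 23f;    // inferred as float, because of the f
var myDouble = 23.0;  // inferred as double
var myInt = 23;       // inferred as int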


