Declaration Suffix for Decimal Type

Declaration suffix for decimal type

Documented in the C# language specification, section 2.4.4:

float f = 1.2f;      // F/f: float
double d = 1.2d;     // D/d: double (optional; an unsuffixed 1.2 is already a double)
uint u = 2u;         // U/u: uint
long l = 2L;         // L/l: long (uppercase L preferred; lowercase l looks like 1)
ulong ul = 2UL;      // UL/ul (any case or order): ulong
decimal m = 2m;      // M/m: decimal

Nothing for int, byte, sbyte, short, ushort.
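
For those unsuffixed types, the literal's default type plus an implicit constant conversion does the work, which is presumably why no suffixes were added. A minimal sketch of what compiles (failing lines are commented out, with the compiler error noted):

int i = 2;           // integer literals default to int
short s = 2;         // OK: the constant 2 fits in a short (implicit constant conversion)
byte b = 2;          // OK for the same reason
// float g = 1.2;    // error CS0664: 1.2 is a double literal; write 1.2f
// decimal n = 1.2;  // error CS0664: write 1.2m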

Declaring m-suffix for a decimal type dynamically

No, decimalPoints m is not valid syntax, but you can use a cast:

decimal result = 10 * (decimal)decimalPoints;

or, better in this case: a decimal multiplied by an int results in a decimal, so no cast is needed:

decimal result = 10m * decimalPoints;
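
A minimal usage sketch (assuming decimalPoints is an int, as the surrounding discussion implies; the value is made up for illustration):

int decimalPoints = 3;                  // hypothetical input
decimal result = 10m * decimalPoints;   // the int operand is implicitly converted to decimal
Console.WriteLine(result);              // 30
Console.WriteLine(result.GetType());    // System.Decimal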

What does the M stand for in C# Decimal literal notation?

It means it's a decimal literal, as others have said. However, the origins are probably not those suggested elsewhere in this answer. From the C# Annotated Standard (the ECMA version, not the MS version):

  The decimal suffix is M/m since D/d was already taken by double. Although it has been suggested that M stands for money, Peter Golde recalls that M was chosen simply as the next best letter in decimal.

A similar annotation mentions that early versions of C# included "Y" and "S" suffixes for byte and short literals respectively. They were dropped on the grounds that they were not useful very often.

Why can't you assign a number with a decimal point to the decimal type directly, without using a type suffix?

Edit: I may have missed the last part of the question, so the overview below is hardly useful.

Anyway, the reason you can't do what you're trying to do is that there is no implicit conversion between the floating-point types and decimal. You can, however, assign it from an integer, as there is an implicit conversion from int to decimal.
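
A short sketch of which assignments compile (the failing line is commented out, with the compiler error noted):

decimal fromInt = 10;              // OK: implicit conversion from int to decimal
decimal withSuffix = 10.0m;        // OK: a decimal literal
decimal withCast = (decimal)10.0;  // OK: explicit conversion from double
// decimal fromDouble = 10.0;      // error CS0664: use the m suffix instead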


You can, but you have to use this syntax (or do an explicit cast to decimal).

decimal bankBalance = 3433.20m;

and for floats it is

float bankBalance = 3433.20f;

The default, with no suffix, is double:

double bankBalance = 3444.20;

C# suffix behind numeric literal

You are confusing two different things here:

float testFloat = 3.0F;

The float tells the compiler that the variable testFloat will be a floating point value. The F tells the compiler that the literal 3.0 is a float. The compiler needs to know both pieces before it can decide whether or not it can assign the literal to the variable with either no conversion or an implicit conversion.

For example, you can do this:

float testFloat = 3;

And that's okay, because the compiler sees 3 as an integer literal but knows it can assign it to a float without loss of precision (an implicit conversion). But if you do this:

float testFloat = 3.0;

3.0 is a literal double (because that's the default for an unsuffixed literal with a decimal point), and the compiler can't implicitly (i.e. automatically) convert a double to a float, because a float has less precision. In other words, information might be lost. So you either tell the compiler that it's a literal float:

float testFloat = 3.0f;

Or you tell it you are okay with any loss of precision by using an explicit cast:

float testFloat = (float)3.0;

Why can't C# decimals be initialized without the M suffix?

The type of a literal without the m suffix is double - it's as simple as that. You can't initialize a float that way either:

float x = 10.0; // Fail

The type of the literal should be clear from the literal itself, and the type of the variable it's assigned to should be assignable from the type of that literal. So your second example works because there's an implicit conversion from int (the type of the literal) to decimal. There's no implicit conversion from double to decimal (as it can lose information).

Personally I'd have preferred it if there'd been no default or if the default had been decimal, but that's a different matter...

What does the M stand for in decimal value assignment?

M makes the number a decimal representation in code.

To answer the second part of your question, yes they are different.

decimal current = (decimal)10.99;

is the same as

double tmp = 10.99;
decimal current = (decimal)tmp;

For everyday values this round trip through double is usually not a problem, but if you mean decimal you should specify decimal.


Update:

Wow, I was wrong. I went to check the IL to prove my point, and the compiler optimized it away.


Update 2:

I was right after all! You still need to be careful. Compare the output of these two functions:

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(Test1());
        Console.WriteLine(Test2());
        Console.ReadLine();
    }

    static decimal Test1()
    {
        return 10.999999999999999999999M;
    }

    static decimal Test2()
    {
        return (decimal)10.999999999999999999999;
    }
}

The first returns 10.999999999999999999999, but the second returns 11: the literal in Test2 is first rounded to the nearest double, which is exactly 11.0, and only then converted to decimal.


Just as a side note, double will get you 15 decimal digits of precision, but decimal will get you 96 bits of precision with a scaling factor from 0 to 28. So you can represent any number in the range (-2^96 to 2^96) / 10^(0 to 28).
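
A small sketch illustrating that difference (outputs shown as comments; the exact double output can vary slightly between runtimes):

Console.WriteLine(decimal.MaxValue);  // 79228162514264337593543950335 (2^96 - 1)
Console.WriteLine(1m / 3m);           // 0.3333333333333333333333333333 (28 digits)
Console.WriteLine(1.0 / 3.0);         // 0.3333333333333333 (double: 15-17 digits)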

Why is the f required when declaring floats?

Your declaration of a float contains two parts:

  1. It declares that the variable timeRemaining is of type float.
  2. It assigns the value 0.58 to this variable.

The problem occurs in part 2.

The right-hand side is evaluated on its own. According to the C# specification, a number containing a decimal point that doesn't have a suffix is interpreted as a double.

So we now have a double value that we want to assign to a variable of type float. In order to do this, there must be an implicit conversion from double to float. There is no such conversion, because you may (and in this case do) lose information in the conversion.

The reason is that the value used by the compiler isn't really 0.58, but the floating-point value closest to 0.58, which is 0.57999999999999996... for double and exactly 0.579999983310699462890625 for float.
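
You can see those stored values with a quick sketch ("G17" and "G9" print enough significant digits to round-trip a double and a float respectively):

double d = 0.58;                       // nearest double to 0.58
float f = 0.58f;                       // nearest float to 0.58
Console.WriteLine(d.ToString("G17"));  // 0.57999999999999996
Console.WriteLine(f.ToString("G9"));   // 0.579999983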

Strictly speaking, the f is not required. You can avoid having to use the f suffix by casting the value to a float:

float timeRemaining = (float)0.58;

