What Does the M Stand for in C# Decimal Literal Notation

What does the M stand for in C# Decimal literal notation?

It means it's a decimal literal, as others have said. However, the origins are probably not those suggested in other answers. From the C# Annotated Standard (the ECMA version, not the MS version):

The decimal suffix is M/m since D/d was already taken by double. Although it has been suggested that M stands for money, Peter Golde recalls that M was chosen simply as the next best letter in decimal.

A similar annotation mentions that early versions of C# included "Y" and "S" for byte and short literals respectively. They were dropped on the grounds of not being useful very often.

What does the M stand for in decimal value assignment?

The M suffix marks the number as a decimal literal in code.

To answer the second part of your question, yes they are different.

decimal current = (decimal)10.99;

is the same as

double tmp = 10.99;
decimal current = (decimal)tmp;

For numbers that fit comfortably within double's precision this is not a problem, but if you mean decimal you should specify decimal.


Update:

Wow, I was wrong. I went to check the IL to prove my point and the compiler had optimized it away.


Update 2:

I was right after all! You still need to be careful. Compare the output of these two functions:

using System;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(Test1());
        Console.WriteLine(Test2());
        Console.ReadLine();
    }

    static decimal Test1()
    {
        // The M suffix keeps every digit: the literal is parsed directly as a decimal.
        return 10.999999999999999999999M;
    }

    static decimal Test2()
    {
        // Without the suffix the literal is a double first, which rounds to 11,
        // and only then is it cast to decimal.
        return (decimal)10.999999999999999999999;
    }
}

The first returns 10.999999999999999999999, but the second returns 11.


Just as a side note, double will get you 15 decimal digits of precision, but decimal will get you 96 bits of precision with a scaling factor from 0 to 28. So you can represent any number in the range ((-2^96 to 2^96) / 10^(0 to 28)).
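
To see that layout concretely, here is a minimal sketch (the class and variable names are just for illustration) that uses decimal.GetBits to pull out the 96-bit integer and the scale of a small value:

using System;

class DecimalBitsDemo
{
    static void Main()
    {
        // 1.21m is stored as the 96-bit integer 121 with a scale of 2 (121 / 10^2).
        int[] parts = decimal.GetBits(1.21m);   // returns { lo, mid, hi, flags }

        int lo = parts[0];
        int mid = parts[1];
        int hi = parts[2];
        int scale = (parts[3] >> 16) & 0xFF;              // bits 16-23 hold the scale
        bool isNegative = (parts[3] & int.MinValue) != 0; // bit 31 holds the sign

        Console.WriteLine($"lo={lo}, mid={mid}, hi={hi}, scale={scale}, negative={isNegative}");
        // Prints: lo=121, mid=0, hi=0, scale=2, negative=False
    }
}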

Using 0M instead of 0 for decimal values?

It is not necessary. Integer types are converted implicitly to decimal. You have to add the M suffix only if the literal represents a floating-point number: floating-point literals without a type suffix are double, and those require an explicit cast to decimal.

decimal d = 1;      // works: int converts implicitly to decimal
decimal d2 = 1.0;   // does not compile: 1.0 is a double
decimal d3 = 1.0M;  // works: decimal literal

The literal 0 here is obviously a special case of the integer literal.
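
As a small illustration (the names here are purely for the example), the integer literal 0 and the decimal literal 0M produce the same value, while a double literal still needs the suffix or a cast:

using System;

class ZeroSuffixDemo
{
    static void Main()
    {
        decimal total = 0;    // works: the int literal 0 converts implicitly to decimal
        decimal total2 = 0M;  // works: explicit decimal literal
        Console.WriteLine(total == total2);   // True

        // decimal bad = 0.0;  // does not compile: double does not convert implicitly to decimal
        decimal ok = 0.0M;     // works: decimal literal
        Console.WriteLine(ok);
    }
}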

What does M,D mean in decimal(M,D) exactly?

As the docs say:

M is the maximum number of digits (the precision). It has a range of 1 to 65. (Older versions of MySQL allowed a range of 1 to 254.)

D is the number of digits to the right of the decimal point (the scale). It has a range of 0 to 30 and must be no larger than M.

So M stands for Maximum (number of digits overall) and D stands for Decimals (number of digits to the right of the decimal point). For example, DECIMAL(5,2) can store values from -999.99 to 999.99.

Declaration suffix for decimal type

Documented in the C# language specification, chapter 2.4.4:

float f = 1.2f;     // F or f: float
double d = 1.2d;    // D or d: double (also the default for real literals)
uint u = 2u;        // U or u: uint
long l = 2L;        // L or l: long
ulong ul = 2UL;     // UL (any casing or order): ulong
decimal m = 2m;     // M or m: decimal

Nothing for int, byte, sbyte, short, ushort.

unable to understand decimal in C#

If you don't specify a suffix, the default for a number with a decimal separator is double. M specifies that the literal is actually a decimal.

C# short/long/int literal format?

var d  = 1.0d;  // double
var d0 = 1.0; // double
var d1 = 1e+3; // double
var d2 = 1e-3; // double
var f = 1.0f; // float
var m = 1.0m; // decimal
var i = 1; // int
var ui = 1U; // uint
var ul = 1UL; // ulong
var l = 1L; // long

I think that's all... there are no literal suffixes for short/ushort/byte/sbyte.
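
For completeness, a small sketch of how you typically get by without those suffixes: constant int literals that fit the target type convert implicitly in assignments, and elsewhere a cast does the job (names here are illustrative):

using System;

class SmallIntegerLiterals
{
    static void Main()
    {
        short s = 1;      // works: the constant 1 fits in a short
        byte b = 255;     // works: the constant 255 fits in a byte
        // byte overflow = 256;   // does not compile: constant is out of range for byte

        // Outside of constant assignments an explicit cast is needed:
        object boxed = (short)1;
        Console.WriteLine(boxed.GetType());   // System.Int16
    }
}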

Is there a literal notation for decimal in IronPython?

Look at the Python docs:

Decimal instances can be constructed from integers, strings, or tuples. To create a Decimal from a float, first convert it to a string. This serves as an explicit reminder of the details of the conversion (including representation error).

Unfortunately, there is no shorthand notation like the one you are looking for; even Python has none. The standard way of 'hard coding' a decimal is to pass a string representation to the Decimal constructor:

from decimal import Decimal
my_decimal = Decimal("1.21")

If you are writing code that is heavy on .NET interop (it seems like you are), it may be better to just use the .NET Decimal datatype right away, as you proposed yourself. However, be aware that in this example you are passing a floating-point number to the Decimal constructor. You may get a small accuracy error from the conversion if the number has many more decimal places:

from System import *
my_decimal = Decimal(1.21)  # 1.21 goes through a binary double first

So you may be better off using this constructor overload:

from System import *
my_decimal = Decimal(121, 0, 0, False, 2)  # lo=121, mid=0, hi=0, not negative, scale=2 -> 121 / 10^2 == 1.21

Not the most beautiful or the shortest way of doing what you want, but the best in terms of performance and accuracy.


