C# Short/Long/Int Literal Format

C# short/long/int literal format?


var d  = 1.0d; // double
var d0 = 1.0;  // double
var d1 = 1e+3; // double
var d2 = 1e-3; // double
var f  = 1.0f; // float
var m  = 1.0m; // decimal
var i  = 1;    // int
var ui = 1U;   // uint
var ul = 1UL;  // ulong
var l  = 1L;   // long

I think that's all... there are no literal suffixes for short/ushort/byte/sbyte.

How to specify a short int literal without casting?

Short answer: no. In C#, there is no S suffix, so you cannot write var a = 123S to indicate that a is of type short. There is L for long, F for float, D for double, and M for decimal, but no S. It would be nice if there were, but there isn't.

var a = 1M;  // decimal
var a = 1L; // long
var a = 1F; // float
var a = 1D; // double
var a = 1; // int

var a = 1U; // uint
var a = 1UL; // ulong

but not

var a = 1S; // not possible, you must use (short)1;
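There is one consolation: C# permits an implicit conversion of a constant expression to short when the value fits in the target range, so the cast is only forced when you use var. A minimal sketch:

```csharp
// An int constant in range converts implicitly to short, so no cast or
// suffix is needed when the target type is declared explicitly.
short a = 123;          // OK: the constant 123 fits in short's range
// short b = 40000;     // compile-time error: 40000 is out of range for short
var c = (short)123;     // with var, a cast is the only way to get a short
```

The same implicit constant conversion applies to byte and sbyte targets.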

C# compiler number literals


var y = 0f; // y is single
var z = 0d; // z is double
var r = 0m; // r is decimal
var i = 0U; // i is unsigned int
var j = 0L; // j is long (note capital L for clarity)
var k = 0UL; // k is unsigned long (note capital L for clarity)

From the C# specification, sections 2.4.4.2 (Integer literals) and 2.4.4.3 (Real literals). Note that uppercase L and UL are preferred over their lowercase variants, as recommended by Jon Skeet, because a lowercase l is easily mistaken for the digit 1.

C# int32 literal can only be stored in long data type

What it is doing is taking the unsigned int 12 and applying unary minus (the -) to it. Negation requires a type that can represent negative values, which an unsigned int cannot.

Because a uint can hold values outside the range of int, the negated result must be converted to a long.
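You can see this promotion directly: per the specification, when unary minus is applied to a uint operand, the operand is converted to long and the result of the expression is long.

```csharp
using System;

// 12U is a uint literal; uint has no unary minus operator, so the
// operand is promoted to long and the whole expression has type long.
var negated = -12U;
Console.WriteLine(negated.GetType()); // System.Int64
```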

By default datatype of literal 0 is int?

Yes, a numeric literal is an int by default unless you specify otherwise; for example, a decimal can be declared using an M suffix:

var myDecimal = 0M;

F is for float and L is for long, but there isn't one for short; you just have to rely on a cast:

public short CardType { get; set; }
CardType = (short)0;

On your edit:

It's not a matter of casting. I want to understand why 0 cannot be short by default.

That's not really the point: integer literals have to have some type by default, and per the language specification they default to int:

The type of an integer literal is determined as follows:

  • If the literal has no suffix, it has the first of these types in which its value can be represented: int, uint, long, ulong.
  • If the literal is suffixed by U or u, it has the first of these types in which its value can be represented: uint, ulong.
  • If the literal is suffixed by L or l, it has the first of these types in which its value can be represented: long, ulong.
  • If the literal is suffixed by UL, Ul, uL, ul, LU, Lu, lU, or lu, it is of type ulong.

Which int type does var default to?

Really, what you're asking is "What type is given to integer literals in C#?", to which the answer can be found in the specification:

(Section 2.4.4.2 of the 4.0 spec)

The type of an integer literal is determined as follows:

  • If the literal has no suffix, it has the first of these types in which its value can be represented: int, uint, long, ulong.
  • If the literal is suffixed by U or u, it has the first of these types in which its value can be represented: uint, ulong.
  • If the literal is suffixed by L or l, it has the first of these types in which its value can be represented: long, ulong.
  • If the literal is suffixed by UL, Ul, uL, ul, LU, Lu, lU, or lu, it is of type ulong.

If the value represented by an integer literal is outside the range of the ulong type, a compile-time error occurs.
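Those first-type-that-fits rules are easy to demonstrate: as an unsuffixed literal grows past each type's maximum, its inferred type steps through int, uint, long, and ulong.

```csharp
// Each unsuffixed literal takes the first type in int, uint, long, ulong
// that can represent its value.
var a = 2147483647;          // int.MaxValue, so it fits in int
var b = 2147483648;          // too big for int, fits in uint
var c = 4294967296;          // too big for uint (2^32), fits in long
var d = 9223372036854775808; // too big for long (2^63), fits in ulong
```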

cast short to int in if block

When the code is compiled, it looks something like this:

for:

Int16 myShortInt;
myShortInt = Condition ? 1 : 2;

it is treated as

Int16 myShortInt;
var value = Condition ? 1 : 2; // notice that this is interpreted as an int
myShortInt = value;

while for:

if(Condition)  
{
myShortInt = 1;
}
else
{
myShortInt = 2;
}

there is no intermediate stage where the value is interpreted as an int, and each literal is treated directly as an Int16.
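A sketch of both forms side by side. Note that C# 9.0 introduced target-typed conditional expressions, so on modern compilers the ternary assignment compiles as well; on earlier versions it was the compile error described above.

```csharp
short myShortInt;
bool condition = true; // hypothetical stand-in for the original Condition

// Before C# 9.0 the next line failed to compile ("cannot implicitly
// convert type 'int' to 'short'"), because the conditional expression
// is typed as int first. Casting both operands works on any version:
myShortInt = condition ? (short)1 : (short)2;

// The if/else form assigns each literal directly, so the implicit
// constant conversion to short applies and no cast is needed.
if (condition)
    myShortInt = 1;
else
    myShortInt = 2;
```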

Does it really matter to distinct between short, int, long?

Unless you are packing large numbers of these together in some kind of structure, it will probably not affect the memory consumption at all. The best reason to use a particular integer type is compatibility with an API. Other than that, just make sure the type you pick has enough range to cover the values you need. Beyond that for simple local variables, it doesn't matter much.
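For scale, the per-value footprints are 2, 4, and 8 bytes; unless many values are packed into arrays or structs, the difference is noise.

```csharp
using System;

// sizeof on the predefined integral types is a constant expression
// and is allowed in safe code.
Console.WriteLine(sizeof(short)); // 2
Console.WriteLine(sizeof(int));   // 4
Console.WriteLine(sizeof(long));  // 8
```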

Issue in Adding large numbers in C# with int datatype

For performance reasons, C# (the language, not .NET) and every other language I know of performs an operation in the largest common type of its operands. The type is common to the operands, not to the result, which the compiler would otherwise have to guess.

So adding two int values is an int operation that may overflow. Adding an int to a long is a long operation, and adding two longs is a long operation.

It's as simple as that. If you know the result will overflow, widen one of the operands.

You can cast, but you could also just declare the variables as long in the first place.
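The overflow and the fix can be shown in a few lines: int + int wraps around in an unchecked context, while promoting one operand to long makes the whole addition a long operation.

```csharp
int big = int.MaxValue;

// int + int is an int operation; in an unchecked context (the default
// for non-constant expressions) it silently wraps around.
int wrapped = unchecked(big + 1);   // -2147483648

// Cast one operand to long and the addition is performed in long.
long correct = (long)big + 1;       // 2147483648
```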

C# short/long/int literal format?

shows you all the suffixes.

  • 1111 is an int
  • 1111UL is an unsigned long.

So, there is no need to cast if you tell the compiler the correct data type to use in the literal.


