If Int32 Is Just an Alias for Int, How Can the Int32 Class Use an Int?

If Int32 is just an alias for int, how can the Int32 class use an int?

Isn't this illegal in C#? If "int" is only an alias for "Int32", it should fail to compile with error CS0523. Is there some magic in the compiler?

Yes; the error is deliberately suppressed in the compiler. The cycle checker is skipped entirely if the type in question is a built-in type.

Normally this sort of thing is illegal:

struct S { S s; int i; }

In that case the size of S is undefined because whatever the size of S is, it must be equal to itself plus the size of an int. There is no such size.

struct S { S s; }

In that case we have no information from which to deduce the size of S.

struct Int32 { Int32 i; }

But in this case the compiler knows ahead of time that System.Int32 is four bytes because it is a very special type.

Incidentally, the details of how the C# compiler (and, for that matter, the CLR) determines when a set of struct types is cyclic is extremely interesting. I'll try to write a blog article about that at some point.

int vs Int32. Are they really aliases?

int vs. Int32 is irrelevant to this issue. That the field is displayed as int is just because the tool you use to look at it replaces those types by their aliases when it displays them. If you look at it with a lower-level tool you'll see that it doesn't know about int, only about Int32.
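
A quick way to see this for yourself (a minimal console sketch; the class name is arbitrary): at runtime, int and System.Int32 are one and the same Type.

using System;

class AliasDemo
{
    static void Main()
    {
        int x = 5;

        // The alias exists only in the C# compiler; at runtime there is
        // only System.Int32.
        Console.WriteLine(typeof(int) == typeof(Int32)); // True
        Console.WriteLine(typeof(int).FullName);         // System.Int32
        Console.WriteLine(x.GetType().Name);             // Int32
    }
}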

The problem is that the Int32 struct contains an Int32 field.

"What is the int standing on?" "You're very clever, young man, very clever," said the old lady. "But it's ints all the way down!"

The solution to this problem is magic. The runtime knows what an Int32 is and gives it special treatment, avoiding the infinite recursion. You can't write a custom struct that contains itself as a field. Int32 is a built-in type, not a normal struct; it only appears as a struct for consistency's sake.

In C#, why is int an alias for System.Int32?

I believe that their main reason was portability of programs targeting the CLR. If they were to allow a type as basic as int to be platform-dependent, making portable programs for the CLR would become a lot more difficult. The proliferation of typedef'd integral types in platform-neutral C/C++ code, used to paper over the platform-dependent built-in int, is an indirect hint as to why the designers of the CLR decided to make the built-in types platform-independent. Discrepancies like that are a big inhibitor to the "write once, run anywhere" goal of execution systems based on VMs.

Edit: More often than not, the size of an int plays into your code implicitly through bit operations rather than through arithmetic (after all, what could possibly go wrong with i++, right?). But the errors are usually more subtle. Consider the example below:

const int MaxItem = 20;
var item = new MyItem[MaxItem];
for (int mask = 1; mask != (1 << MaxItem); mask++) {
    var combination = new HashSet<MyItem>();
    for (int i = 0; i != MaxItem; i++) {
        if ((mask & (1 << i)) != 0) {
            combination.Add(item[i]);
        }
    }
    ProcessCombination(combination);
}

This code computes and processes all non-empty combinations of 20 items. As you can tell, the code fails miserably on a system with a 16-bit int, because 1 << MaxItem no longer fits, but it works fine with ints of 32 or 64 bits.

Unsafe code would provide another source of headache: when the int is fixed at some size (say, 32 bits), code that allocates 4 times as many bytes as the number of ints it needs to marshal works, even though it is technically incorrect to use 4 in place of sizeof(int). Moreover, this technically incorrect code remains portable!
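
A hedged sketch of the kind of code meant here (the helper and its Marshal calls are illustrative, not taken from the original answer): hard-coding 4 bytes per int is technically wrong, yet it keeps working precisely because the CLR fixes Int32 at 32 bits.

using System;
using System.Runtime.InteropServices;

static class MarshalExample
{
    static void CopyToUnmanaged(int[] values)
    {
        // Technically incorrect: assumes an int is 4 bytes. It works (and
        // stays portable) only because the CLR fixes Int32 at 32 bits.
        IntPtr buffer = Marshal.AllocHGlobal(values.Length * 4);
        try
        {
            Marshal.Copy(values, 0, buffer, values.Length);
            // ... hand `buffer` to native code here ...
        }
        finally
        {
            Marshal.FreeHGlobal(buffer);
        }
        // The self-describing form would be: values.Length * sizeof(int)
    }
}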

Ultimately, small things like that play heavily into the perception of a platform as "good" or "bad". Users of .NET programs do not care whether a program crashes because its programmer made a non-portable mistake or because the CLR is buggy. This is similar to the way early versions of Windows were widely perceived as unstable due to the poor quality of drivers. To most users, a crash is just another .NET program crash, not a programmer's issue. Therefore it is good for the perception of the ".NET ecosystem" to make the standard as forgiving as possible.

Why does `Int32` use `int` in its source code?

int, bool, etc. are simply aliases for built-in types like Int32 and Boolean respectively. They are shorthand, so it makes sense to use them, even in their own definitions.

The aliases are built into the compiler, so there is no "chicken and egg" situation of Int32 having to be compiled before int is accessible. There is no "original source code" for int. The "source" is for Int32 and is shared between functionality built into the CLR (for handling the primitives) and the framework (for defining functionality for manipulating those primitives). As this answer to a duplicate question explains, the source for Int32 isn't a valid definition of the type itself; that's built into the CLR. As such, the compiler has to fudge the normally illegal use of int within that file in order to allow the Int32 type to have functionality.

In C#, type int refers to Int32. Is it still a value type?

How does it become a value type instead of a reference type?

It becomes a value type the same way all other value types in .NET do: by being declared as a struct rather than a class.

Can a structure inherit, and can it hold implementations of interface methods?

A struct cannot inherit, but it can implement interfaces. Int32 is a value type. Why can't structs inherit? More on that in Why don't structs support inheritance?

By viewing the source, you can easily see that Int32 implements a bunch of interfaces:

public struct Int32 : IComparable, IFormattable, IConvertible,
    IComparable<Int32>, IEquatable<Int32>
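
To make the distinction concrete, here is a minimal sketch (the IMeasurable interface and Meters struct are invented for illustration): a user-defined struct can implement interfaces, but it cannot derive from another struct or class.

using System;

public interface IMeasurable
{
    double Magnitude { get; }
}

// Legal: a struct may implement any number of interfaces.
public struct Meters : IMeasurable, IComparable<Meters>
{
    public double Value;
    public double Magnitude => Value;
    public int CompareTo(Meters other) => Value.CompareTo(other.Value);
}

// Illegal: a struct cannot inherit from another struct or class.
// public struct Kilometers : Meters { }   // does not compile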

What is the difference between int, Int16, Int32 and Int64?

Each integer type has a different storage capacity (range of representable values):

Type     Capacity
Int16    -32,768 to +32,767
Int32    -2,147,483,648 to +2,147,483,647
Int64    -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807
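
These bounds can be verified directly from the MinValue/MaxValue constants on each type:

using System;

class Ranges
{
    static void Main()
    {
        Console.WriteLine($"Int16: {short.MinValue} to {short.MaxValue}"); // -32768 to 32767
        Console.WriteLine($"Int32: {int.MinValue} to {int.MaxValue}");     // -2147483648 to 2147483647
        Console.WriteLine($"Int64: {long.MinValue} to {long.MaxValue}");   // -9223372036854775808 to 9223372036854775807
    }
}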

As stated by James Sutherland in his answer:

int and Int32 are indeed synonymous; int will be a little more
familiar looking, Int32 makes the 32-bitness more explicit to those
reading your code. I would be inclined to use int where I just need
'an integer', Int32 where the size is important (cryptographic code,
structures) so future maintainers will know it's safe to enlarge an
int if appropriate, but should take care changing Int32 variables
in the same way.

The resulting code will be identical: the difference is purely one of
readability or code appearance.

When does it matter whether you use int versus Int32, or string versus String?

A using alias directive cannot use a keyword as the type name (but can use keywords in type argument lists):

using Handle = int; // error
using Handle = Int32; // OK
using NullableHandle = Nullable<int>; // OK

The underlying type of an enum must be specified using a keyword:

enum E : int { } // OK
enum E : Int32 { } // error

The expressions (x)+y, (x)-y, and (x)*y are interpreted differently depending on whether x is a keyword or an identifier:

(int)+y // cast +y (unary plus) to int
(Int32)+y // add y to Int32; error if Int32 is not a variable
(Int32)(+y) // cast +y to Int32

(int)-y // cast -y (unary minus) to int
(Int32)-y // subtract y from Int32; error if Int32 is not a variable
(Int32)(-y) // cast -y to Int32

(int)*y // cast *y (pointer indirection) to int
(Int32)*y // multiply Int32 by y; error if Int32 is not a variable
(Int32)(*y) // cast *y to Int32

GetType returns Int32 instead of int

C# has a number of 'types' that are actually keyword aliases for .NET CLR types. In this case, int is a C# alias for System.Int32, but the same is true of other C# types like string, which is an alias for System.String.

This means that when you get under the hood with reflection and start looking at the CLR Type objects you won't find int, string or any of the other C# type aliases because .NET and the CLR don't know about them... and nor should they.

If you want to translate from the CLR type to a C# alias you'll have to do it yourself via lookup. Something like this:

// This is the set of types from the C# keyword list.
static Dictionary<Type, string> _typeAlias = new Dictionary<Type, string>
{
    { typeof(bool), "bool" },
    { typeof(byte), "byte" },
    { typeof(char), "char" },
    { typeof(decimal), "decimal" },
    { typeof(double), "double" },
    { typeof(float), "float" },
    { typeof(int), "int" },
    { typeof(long), "long" },
    { typeof(object), "object" },
    { typeof(sbyte), "sbyte" },
    { typeof(short), "short" },
    { typeof(string), "string" },
    { typeof(uint), "uint" },
    { typeof(ulong), "ulong" },
    { typeof(ushort), "ushort" },
    // Yes, this is an odd one. Technically it's a type though.
    { typeof(void), "void" }
};

static string TypeNameOrAlias(Type type)
{
    // Lookup alias for type
    if (_typeAlias.TryGetValue(type, out string alias))
        return alias;

    // Default to CLR type name
    return type.Name;
}
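
A few example calls against the lookup above (results assume the dictionary shown):

Console.WriteLine(TypeNameOrAlias(typeof(int)));      // int
Console.WriteLine(TypeNameOrAlias(typeof(String)));   // string
Console.WriteLine(TypeNameOrAlias(typeof(DateTime))); // DateTime (no keyword alias)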

For simple types that will work fine. Generics, arrays, and Nullable<T> take a bit more work. Arrays and nullable value types are handled recursively like this:

static string TypeNameOrAlias(Type type)
{
    // Handle nullable value types
    var nullbase = Nullable.GetUnderlyingType(type);
    if (nullbase != null)
        return TypeNameOrAlias(nullbase) + "?";

    // Handle arrays
    if (type.BaseType == typeof(System.Array))
        return TypeNameOrAlias(type.GetElementType()) + "[]";

    // Lookup alias for type
    if (_typeAlias.TryGetValue(type, out string alias))
        return alias;

    // Default to CLR type name
    return type.Name;
}

This will handle things like:

Console.WriteLine(TypeNameOrAlias(typeof(int?[][])));

Generics, if you need them, are a bit more involved, but it's basically the same process: scan through the generic argument list and run the types recursively through the same method.


Nested Types

When you run TypeNameOrAlias on a nested type the result is only the name of the specific type, not the full path you'd need to specify to use it from outside the type that declares it:

public class Outer
{
    public class Inner
    {
    }
}
// TypeNameOrAlias(typeof(Outer.Inner)) == "Inner"

This resolves the issue:

static string GetTypeName(Type type)
{
    string name = TypeNameOrAlias(type);
    if (type.DeclaringType is Type dec)
    {
        return $"{GetTypeName(dec)}.{name}";
    }
    return name;
}
// GetTypeName(typeof(Outer.Inner)) == "Outer.Inner"

Generics

Generics in the .NET type system are interesting. It's relatively easy to handle things like List<int> or Dictionary<int, string> or similar. Insert this at the top of TypeNameOrAlias:

// Handle generic types
if (type.IsGenericType)
{
    string name = type.Name.Split('`').FirstOrDefault();
    IEnumerable<string> parms =
        type.GetGenericArguments()
            .Select(a => type.IsConstructedGenericType ? TypeNameOrAlias(a) : a.Name);
    return $"{name}<{string.Join(",", parms)}>";
}

Now you'll get correct results for things like TypeNameOrAlias(typeof(Dictionary<int, string>)) and so on. It also deals with generic type definitions: TypeNameOrAlias(typeof(Dictionary<,>)) will return Dictionary<TKey,TValue>.
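
For instance, with the generic handling inserted as described above:

Console.WriteLine(TypeNameOrAlias(typeof(Dictionary<int, string>))); // Dictionary<int,string>
Console.WriteLine(TypeNameOrAlias(typeof(Dictionary<,>)));           // Dictionary<TKey,TValue>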

Where things get difficult is when you nest classes inside generics. Try GetTypeName(typeof(Dictionary<int, string>.KeyCollection)) for an interesting result.


