.NET Integer vs Int16

.NET Integer vs Int16?

According to the reference below, the runtime optimizes the performance of Int32, which is recommended for counters and other frequently accessed integral variables.

From the book: MCTS Self-Paced Training Kit (Exam 70-536): Microsoft® .NET Framework 2.0—Application Development Foundation

Chapter 1: "Framework Fundamentals"

Lesson 1: "Using Value Types"

Best Practices: Optimizing performance with built-in types

The runtime optimizes the performance of 32-bit integer types (Int32 and UInt32), so use those types for counters and other frequently accessed integral variables.

For floating-point operations, Double is the most efficient type because those operations are optimized by hardware.

Also, Table 1-1 in the same section lists recommended uses for each type.
Relevant to this discussion:

  • Int16 - Interoperation and other specialized uses
  • Int32 - Whole numbers and counters
  • Int64 - Large whole numbers
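
As a rough, hypothetical illustration of that guidance (a sketch, not code from the book): a frequently used counter would normally be declared as a plain Int32, and floating-point work done with Double.

using System;

class CounterSketch
{
    static void Main()
    {
        // Int32 is the recommended type for counters and loop variables.
        int matches = 0;
        for (int i = 0; i < 100000; i++)
        {
            if (i % 7 == 0)
            {
                matches++;
            }
        }

        // Double is the preferred floating-point type for computation.
        double ratio = (double)matches / 100000;
        Console.WriteLine($"{matches} multiples of 7, ratio {ratio}");
    }
}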

What is the difference between int, Int16, Int32 and Int64?

Each integer type has a different range of storage capacity:

   Type      Capacity
   Int16     -32,768 to +32,767
   Int32     -2,147,483,648 to +2,147,483,647
   Int64     -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807
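
These ranges come straight from the MinValue and MaxValue constants each type exposes; a quick sketch to print them:

using System;

class IntegerRanges
{
    static void Main()
    {
        // Each integral type exposes its range as MinValue/MaxValue constants.
        Console.WriteLine($"Int16: {Int16.MinValue:N0} to {Int16.MaxValue:N0}");
        Console.WriteLine($"Int32: {Int32.MinValue:N0} to {Int32.MaxValue:N0}");
        Console.WriteLine($"Int64: {Int64.MinValue:N0} to {Int64.MaxValue:N0}");
    }
}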

As stated by James Sutherland in his answer:

int and Int32 are indeed synonymous; int will be a little more
familiar looking, Int32 makes the 32-bitness more explicit to those
reading your code. I would be inclined to use int where I just need
'an integer', Int32 where the size is important (cryptographic code,
structures) so future maintainers will know it's safe to enlarge an
int if appropriate, but should take care changing Int32 variables
in the same way.

The resulting code will be identical: the difference is purely one of
readability or code appearance.

Is there any difference between Integer and Int32 in VB.NET?

Functionally, there is no difference between the types Integer and System.Int32. In VB.NET, Integer is just an alias for the System.Int32 type.

The identifiers Int32 and Integer are not completely equivalent, though. Integer is always an alias for System.Int32 and is understood directly by the compiler. Int32, on the other hand, is not special-cased in the compiler and goes through normal name resolution like any other type, so it's possible for Int32 to bind to a different type in certain cases. This is very rare, though; no one should be defining their own Int32 type.

Here is a concrete repro which demonstrates the difference.

Class Int32
End Class

Module Module1
    Sub Main()
        Dim local1 As Integer = Nothing
        Dim local2 As Int32 = Nothing
        local1 = local2 ' Error!!!
    End Sub
End Module

In this case local1 and local2 are actually different types, because Int32 binds to the user defined type over System.Int32.

What is the difference between Convert.ToInt16 and (Int16)?

The numeric literal 10 is treated as an integer, and more specifically an Int32. Though you typed your variable as object, under the covers it is still an integer. A boxed value type can only be unboxed directly to its exact type, or to a nullable version of that type.

For example, this code:

int i = 10;
object o = i;
short j = (short)o;

will compile but throw an InvalidCastException at runtime, because the boxed value of i is not a short, it is an int. You have to unbox to int first, then you can cast to short:

short j = (short)(int)o;

Convert.ToInt16 sidesteps that issue; how it does so is an implementation detail. However, that method has multiple overloads that accept multiple types, including strings, so it is not the exact equivalent of a direct cast.
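
A small sketch of both approaches side by side (the variable names are mine, not from the original question):

using System;

class ConvertVsCast
{
    static void Main()
    {
        int i = 10;
        object o = i;                    // boxes an Int32

        // short bad = (short)o;         // InvalidCastException: the boxed value is an int
        short a = (short)(int)o;         // unbox to int first, then narrow to short
        short b = Convert.ToInt16(o);    // Convert handles the boxed int for us
        short c = Convert.ToInt16("10"); // and it also accepts strings, unlike a cast

        Console.WriteLine($"{a} {b} {c}"); // 10 10 10
    }
}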


Edit: I noticed I'm mixing terms here, so just to be clear for a novice C# reader: the names short and Int16 are interchangeable for a 16-bit integer, as are the names int and Int32 for a 32-bit integer. In C#, short and int are aliases for the .NET types Int16 and Int32, respectively.

Is Int16 equality test faster than Int32?

The best thing to do is to benchmark it and see for yourself.

Normally, on 32-bit architectures, there should either be no difference, or Int32 should be faster (the native word size is usually the fastest, even if smaller integers are supported). The same goes for Int64 on 64-bit architectures.
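
If you do want to measure it, a rough micro-benchmark along these lines (a sketch only; a tool like BenchmarkDotNet gives far more trustworthy numbers) will typically show no meaningful difference:

using System;
using System.Diagnostics;

class EqualityBenchmark
{
    static void Main()
    {
        const int iterations = 100000000;
        short a16 = 123, b16 = 123;
        int a32 = 123, b32 = 123;
        long hits = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            if (a16 == b16) hits++;   // short operands are promoted to int for == anyway
        }
        sw.Stop();
        Console.WriteLine($"Int16 equality: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < iterations; i++)
        {
            if (a32 == b32) hits++;
        }
        sw.Stop();
        Console.WriteLine($"Int32 equality: {sw.ElapsedMilliseconds} ms (hits: {hits})");
    }
}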

Why does Int32.Equals(Int16) return true where the reverse doesn't?

Int32 has an Equals(Int32) overload, and Int16 can be implicitly converted to an equivalent Int32. With this overload, the call compares two 32-bit integers for value equality and naturally returns true.

Int16 has its own Equals(Int16) method overload, but there is no implicit conversion from an Int32 to an Int16 (because you can have values that are out of range for a 16-bit integer). Thus the type system ignores this overload and reverts to the Equals(Object) overload. Its documentation reports:

true if obj is an instance of Int16 and equals the value of this
instance; otherwise, false.

But the value we're passing in, while it "equals the value of this instance" (1 == 1), is not an instance of Int16; it's an Int32.


The equivalent code for the b.Equals(a) call would look like this:

Int16 a = 1;
Int32 b = 1;

Int32 a_As_32Bit = a; //implicit conversion from 16-bit to 32-bit

var test1 = b.Equals(a_As_32Bit); //calls Int32.Equals(Int32)

Now it's clear we're comparing both numbers as 32-bit integers.

The equivalent code for a.Equals(b) would look like this:

Int16 a = 1;
Int32 b = 1;

object b_As_Object = b; //boxes our 32-bit integer as a System.Object

var test2 = a.Equals(b_As_Object); //calls Int16.Equals(Object)

Now it's clear we're calling a different equality method. Internally, that equality method is doing more or less this:

Int16 a = 1;
Int32 b = 1;

object b_As_Object = b;

bool test2;
if (b_As_Object is Int16) //but it's not, it's an Int32
{
    test2 = ((Int16)b_As_Object) == a;
}
else
{
    test2 = false; //and this is where your confusing result is returned
}
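
Putting the two paths together, the asymmetric result can be reproduced with a couple of lines:

using System;

class EqualsAsymmetry
{
    static void Main()
    {
        Int16 a = 1;
        Int32 b = 1;

        Console.WriteLine(b.Equals(a)); // True:  a widens to Int32, so Int32.Equals(Int32) runs
        Console.WriteLine(a.Equals(b)); // False: b is boxed, and Int16.Equals(Object) sees an Int32
    }
}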

Should I use int or Int32

ECMA-334:2006 C# Language Specification (p18):

Each of the predefined types is shorthand for a system-provided type. For example, the keyword int refers to the struct System.Int32. As a matter of style, use of the keyword is favoured over use of the complete system type name.


