Difference between Int, Int16, Int32 and Int64

What is the difference between int, Int16, Int32 and Int64?

Each integer type has a different storage size and therefore a different range of values:

Type     Capacity

Int16    -32,768 to +32,767
Int32    -2,147,483,648 to +2,147,483,647
Int64    -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807
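
To see these limits from code, here is a minimal C# sketch (the class name is just for illustration) that prints each type's MinValue and MaxValue constants:

using System;

class RangeDemo
{
    static void Main()
    {
        // Each MinValue/MaxValue constant matches the table above.
        Console.WriteLine($"Int16: {short.MinValue} to {short.MaxValue}");
        Console.WriteLine($"Int32: {int.MinValue} to {int.MaxValue}");
        Console.WriteLine($"Int64: {long.MinValue} to {long.MaxValue}");
    }
}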

As stated by James Sutherland in his answer:

int and Int32 are indeed synonymous; int will be a little more
familiar looking, Int32 makes the 32-bitness more explicit to those
reading your code. I would be inclined to use int where I just need
'an integer', Int32 where the size is important (cryptographic code,
structures) so future maintainers will know it's safe to enlarge an
int if appropriate, but should take care changing Int32 variables
in the same way.

The resulting code will be identical: the difference is purely one of
readability or code appearance.
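
As a quick check, here is a minimal C# sketch (class name chosen for illustration) confirming that the keyword and the struct name refer to the same type:

using System;

class SynonymCheck
{
    static void Main()
    {
        // int is the C# keyword for System.Int32, so these are the same type.
        Console.WriteLine(typeof(int) == typeof(Int32));   // True
        Console.WriteLine(typeof(long) == typeof(Int64));  // True
    }
}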

What's the difference between Int16, Int32 and Int64?

Int16 -> short -> 16-bit Integer

Int32 -> int -> 32-bit Integer

Int64 -> long -> 64-bit Integer
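
A small C# sketch (illustrative only) confirming the mapping; sizeof reports the size in bytes, i.e. 2, 4 and 8 bytes for 16, 32 and 64 bits:

using System;

class SizeDemo
{
    static void Main()
    {
        // short, int and long are the C# keywords for Int16, Int32 and Int64.
        Console.WriteLine(sizeof(short)); // 2 bytes = 16 bits
        Console.WriteLine(sizeof(int));   // 4 bytes = 32 bits
        Console.WriteLine(sizeof(long));  // 8 bytes = 64 bits
    }
}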

What is the difference between int and int64 in Go?

func ParseInt(s string, base int, bitSize int) (i int64, err error)

ParseInt always returns int64.

The bitSize argument defines the range of values the result must fit in.

If the value corresponding to s cannot be represented by a signed integer of the given size, err.Err = ErrRange.

http://golang.org/pkg/strconv/#ParseInt

type int int

int is a signed integer type that is at least 32 bits in size. It is a distinct type, however, and not an alias for, say, int32.

http://golang.org/pkg/builtin/#int

So int could be bigger than 32 bits in the future, or on some systems, much like int in C.

I guess that on some systems int64 might even be faster than int32, because those systems work natively with 64-bit integers.

Here is an example of an error when bitSize is 8:

http://play.golang.org/p/_osjMqL6Nj

package main

import (
    "fmt"
    "strconv"
)

func main() {
    // bitSize 8 means the result must fit in a signed 8-bit integer (-128 to 127),
    // so parsing "123456" reports the range error described above.
    i, err := strconv.ParseInt("123456", 10, 8)
    fmt.Println(i, err)
}

What is the difference between Int and Int32 in Swift?

According to the Swift Documentation

Int


In most cases, you don’t need to pick a specific size of integer to use in your code. Swift provides an additional integer type, Int, which has the same size as the current platform’s native word size:

On a 32-bit platform, Int is the same size as Int32.

On a 64-bit platform, Int is the same size as Int64.

Unless you need to work with a specific size of integer, always use Int for integer values in your code. This aids code consistency and interoperability. Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.

What's the difference between int32 and int64?

This was answered very well over here.

The key difference is storage capacity. An integer is held in bits (1s and 0s); very simply, a 64-bit integer can store a much larger (or, for negative values, much smaller) number by virtue of having more bits.

Type     Capacity

Int16    -32,768 to +32,767
Int32    -2,147,483,648 to +2,147,483,647
Int64    -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807

Int32 vs. Int64 vs. Int in C#

int is an alias for Int32

long is an alias for Int64

Their sizes will never change; just use whichever one you need.

Using them in your code has nothing to do with 32-bit versus 64-bit machines.
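
A minimal sketch of that point (the IntPtr.Size comparison is my addition for contrast, not part of the original answer): int and long keep their sizes in any process, while the pointer size is what actually varies between 32-bit and 64-bit:

using System;

class PlatformDemo
{
    static void Main()
    {
        Console.WriteLine(sizeof(int));  // always 4, in both 32-bit and 64-bit processes
        Console.WriteLine(sizeof(long)); // always 8
        Console.WriteLine(IntPtr.Size);  // 4 in a 32-bit process, 8 in a 64-bit process
    }
}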

EDIT:
In reference to the comments about thread safety, here is a good question with answers that detail the issues you need to be aware of:
Under C# is Int64 use on a 32 bit processor dangerous

.NET Integer vs Int16?

According to the reference below, the runtime optimizes the performance of Int32, and that type is recommended for counters and other frequently accessed integral variables.

From the book: MCTS Self-Paced Training Kit (Exam 70-536): Microsoft® .NET Framework 2.0—Application Development Foundation

Chapter 1: "Framework Fundamentals"

Lesson 1: "Using Value Types"

Best Practices: Optimizing performance with built-in types

The runtime optimizes the performance of 32-bit integer types (Int32 and UInt32), so use those types for counters and other frequently accessed integral variables.

For floating-point operations, Double is the most efficient type because those operations are optimized by hardware.

Also, Table 1-1 in the same section lists recommended uses for each type. Relevant to this discussion (see the sketch after the list):

  • Int16 - Interoperation and other specialized uses
  • Int32 - Whole numbers and counters
  • Int64 - Large whole numbers
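
As a rough illustration of those recommendations (a sketch with made-up variable names, not taken from the book):

using System;

class UsageSketch
{
    static void Main()
    {
        short interopValue = 42;          // Int16: interop and other specialized uses
        int counter = 0;                  // Int32: whole numbers and counters
        long totalBytes = 5_000_000_000L; // Int64: values too large for Int32

        for (int i = 0; i < 10; i++)
        {
            counter++;
        }

        Console.WriteLine($"{interopValue} {counter} {totalBytes}");
    }
}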

Should I use int or Int32?

ECMA-334:2006 C# Language Specification (p18):

Each of the predefined types is shorthand for a system-provided type. For example, the keyword int refers to the struct System.Int32. As a matter of style, use of the keyword is favoured over use of the complete system type name.

Converting to int16, int32, int64 - how do you know which one to choose?

Everyone here who has mentioned that declaring an Int16 saves RAM should get a downvote.

The answer to your question is to use the keyword "int" (or if you feel like it, use "Int32").

That gives you a range of roughly ±2.1 billion (over four billion distinct values)... Also, 32-bit processors handle those ints more efficiently... and (THE MOST IMPORTANT REASON) if you plan on using that int for almost anything, it will likely need to be an "int" (Int32).

In the .Net framework, 99.999% of numeric fields (that are whole numbers) are "ints" (Int32).

Examples: Array.Length, Process.Id, Window.Width, Button.Height, etc., etc., a million times over.
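
For instance, Array.Length is declared as an Int32, so a minimal sketch shows how any other choice forces casts or pointless widening:

using System;

class LengthDemo
{
    static void Main()
    {
        var numbers = new int[100];

        int count = numbers.Length;          // Length is an Int32, so int fits naturally
        short small = (short)numbers.Length; // anything smaller needs an explicit cast
        long big = numbers.Length;           // widening to long works but buys nothing here

        Console.WriteLine($"{count} {small} {big}");
    }
}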

EDIT: I realize that my grumpiness is going to get me down-voted... but this is the right answer.


