Should I Use Int or Int32

Should I use int or Int32

ECMA-334:2006 C# Language Specification (p18):

Each of the predefined types is shorthand for a system-provided type. For example, the keyword int refers to the struct System.Int32. As a matter of style, use of the keyword is favoured over use of the complete system type name.
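As a minimal C# sketch (a hypothetical snippet, not taken from the specification), all three spellings name the same type:

    using System;

    class AliasDemo
    {
        static void Main()
        {
            int a = 42;              // language keyword
            Int32 b = a;             // same type, no conversion involved
            System.Int32 c = b;      // fully qualified name, still the same type

            Console.WriteLine(a.GetType());                    // System.Int32
            Console.WriteLine(typeof(int) == typeof(Int32));   // True
            Console.WriteLine(c);                              // 42
        }
    }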


What is the difference between int, Int16, Int32 and Int64?

Each integer type has a different storage capacity, i.e. a different range of representable values:

   Type      Range

   Int16     -32,768 to +32,767

   Int32     -2,147,483,648 to +2,147,483,647

   Int64     -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807
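These limits can be confirmed in C# via the MinValue and MaxValue constants (a quick sketch, assuming a console application):

    using System;

    class Ranges
    {
        static void Main()
        {
            Console.WriteLine($"Int16: {Int16.MinValue} to {Int16.MaxValue}");
            Console.WriteLine($"Int32: {Int32.MinValue} to {Int32.MaxValue}");
            Console.WriteLine($"Int64: {Int64.MinValue} to {Int64.MaxValue}");
        }
    }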

As stated by James Sutherland in his answer:

int and Int32 are indeed synonymous; int will be a little more
familiar looking, Int32 makes the 32-bitness more explicit to those
reading your code. I would be inclined to use int where I just need
'an integer', Int32 where the size is important (cryptographic code,
structures) so future maintainers will know it's safe to enlarge an
int if appropriate, but should take care changing Int32 variables
in the same way.

The resulting code will be identical: the difference is purely one of
readability or code appearance.


Should I use Int32 for small numbers instead of Int or Int64 on a 64-bit architecture?

No, use Int. The Swift Programming Language is quite explicit about this:

Unless you need to work with a specific size of integer, always use Int for integer values in your code. This aids code consistency and interoperability. Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.

By "work with a specific size of integer," the documentation is describing situations such as file formats and networking protocols that are defined in terms of specific bit-widths. Even if you're only counting to 10, you should still store it in an Int.

Swift's integer types do not convert implicitly, so if you have an Int32 and a function requires an Int, you have to convert it explicitly as Int(x). This gets very cumbersome very quickly. To avoid that, Swift strongly recommends that everything be an Int unless you have a specific reason to do otherwise.
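For example (a hypothetical Swift snippet to illustrate the friction; report is a made-up function), mixing Int32 with an API that expects Int forces explicit conversions:

    func report(count: Int) {
        print("count is \(count)")
    }

    let recordCount: Int32 = 10          // a sized type, e.g. read from a file header
    // report(count: recordCount)        // error: cannot convert Int32 to Int
    report(count: Int(recordCount))      // explicit conversion required

    let simpleCount = 10                 // inferred as Int
    report(count: simpleCount)           // no conversion needed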

You should also avoid UInt, even if your value is unsigned. You should only use UInt when you mean "this machine-word-sized bit pattern" and you should only use the sized UInts (UInt32, etc) when you mean "this bit-width bit pattern." If you mean "a number" (even an unsigned number), you should use Int.

Use UInt only when you specifically need an unsigned integer type with the same size as the platform’s native word size. If this isn’t the case, Int is preferred, even when the values to be stored are known to be nonnegative. A consistent use of Int for integer values aids code interoperability, avoids the need to convert between different number types, and matches integer type inference, as described in Type Safety and Type Inference.
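A rough sketch of that guideline (the names here are invented for illustration): sized unsigned types are for externally defined bit patterns, while ordinary quantities stay Int even when they can never be negative:

    // A fixed-width bit pattern defined by a wire format: UInt32 fits the meaning.
    let magicNumber: UInt32 = 0xCAFE_BABE

    // An ordinary non-negative quantity: still an Int.
    var retryCount = 0          // inferred as Int
    retryCount += 1

    print(String(magicNumber, radix: 16), retryCount)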


There are links to further discussion of performance in the comments on the original answer. It is very true that using 32-bit integers can be a significant performance improvement when working with large data structures, particularly because of caching and locality. But as a rule, that should be hidden within a data type that manages the extra complexity, isolating performance-critical code from the main system. Shifting back and forth between 32- and 64-bit integers can easily outweigh the advantages of the smaller data if you're not careful.

So as a rule, use Int. There are advantages to using Int32 in some cases, but trying to use it as a default is as likely to hurt performance as help it, and will definitely increase code complexity dramatically.
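A sketch of what "hidden within a data type" might look like (CompactIntArray is a hypothetical type, and it assumes every stored value fits in 32 bits): storage stays compact, while the public API speaks Int so the rest of the code never juggles widths:

    /// Hypothetical compact storage: Int32 internally, Int at the boundary.
    struct CompactIntArray {
        private var storage: [Int32] = []

        mutating func append(_ value: Int) {
            storage.append(Int32(value))   // assumes the value fits in 32 bits
        }

        subscript(index: Int) -> Int {
            Int(storage[index])
        }

        var count: Int { storage.count }
    }

    var numbers = CompactIntArray()
    numbers.append(7)
    print(numbers[0], numbers.count)   // 7 1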

Difference between int32, int, int32_t, int8 and int8_t

Between int32 and int32_t (and likewise between int8 and int8_t), the difference is pretty simple: the C standard defines int8_t and int32_t, but does not define anything named int8 or int32. The latter (if they exist at all) probably come from some other header or library, most likely one that predates the addition of int8_t and int32_t in C99.

Plain int is quite a bit different from the others. Where int8_t and int32_t each have a specified size, int can be any size >= 16 bits. At different times, both 16 bits and 32 bits have been reasonably common (and for a 64-bit implementation, it should probably be 64 bits).

On the other hand, int is guaranteed to be present in every implementation of C, where int8_t and int32_t are not. It's probably open to question whether this matters to you though. If you use C on small embedded systems and/or older compilers, it may be a problem. If you use it primarily with a modern compiler on desktop/server machines, it probably won't be.

Oops -- I missed the part about char. You'd use int8_t instead of char if (and only if) you want an integer type guaranteed to be exactly 8 bits in size. If you want to store characters, you probably want to use char instead. Its size can vary (in terms of number of bits), but it's guaranteed to be exactly one byte. One slight oddity though: there's no guarantee about whether a plain char is signed or unsigned (and many compilers can make it either one, depending on a compile-time flag). If you need it to be definitely signed or definitely unsigned, you need to specify that explicitly (signed char or unsigned char).
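A small C sketch of those points (assuming a C99-or-later compiler that provides the fixed-width types):

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int32_t fixed32 = 2147483647;  /* exactly 32 bits, no padding (if provided) */
        int8_t  fixed8  = 127;         /* exactly 8 bits (if provided)              */
        int     plain   = 32767;       /* only guaranteed to hold at least 16 bits  */

        printf("int32_t: %ld, int8_t: %d, int: %d\n",
               (long)fixed32, fixed8, plain);
        printf("sizeof(int) = %zu bytes\n", sizeof(int));

        /* char is exactly one byte, but whether it is signed or unsigned is
           implementation-defined; CHAR_MIN reveals which one you got. */
        printf("char is %s here\n", CHAR_MIN < 0 ? "signed" : "unsigned");
        return 0;
    }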

What is the difference between Int and Int32 in Swift?

According to the Swift Documentation

Int


In most cases, you don’t need to pick a specific size of integer to use in your code. Swift provides an additional integer type, Int, which has the same size as the current platform’s native word size:

On a 32-bit platform, Int is the same size as Int32.

On a 64-bit platform, Int is the same size as Int64.

Unless you need to work with a specific size of integer, always use Int for integer values in your code. This aids code consistency and interoperability. Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.
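You can observe this directly (a quick sketch; the comments show what a 64-bit platform prints):

    print(Int.bitWidth)                                           // 64 on a 64-bit platform
    print(MemoryLayout<Int>.size)                                 // 8 (bytes)
    print(MemoryLayout<Int>.size == MemoryLayout<Int64>.size)     // true on 64-bit platforms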


Difference between __int32, int, and int32_t (again)


  Which of my guesses are correct?


  • int, by the C++ standard, must be >= 32 bits.

Not correct. int must be >= 16 bits. long must be >= 32 bits.

Different compilers (implementations) can make int to allocate different amount of bits

Correct.

... by default.

I don't know of a compiler with configurable int sizes - it usually depends directly on target architecture - but I suppose that would be a possibility.

Also, different compilers may / may not use bit padding

They may. They aren't required to.

__int32 (Microsoft-specific) and int32_t are here to solve the problem and:

  • Force the compiler to allocate exactly 32 bits.

  • Never use bit padding before / after this data type.

Correct. More specifically, std::int32_t is an alias for one of the fundamental types that has exactly 32 bits and no padding. If no such integer type is provided by the implementation, then the std::int32_t alias will not be provided either.

Microsoft's documentation promises that __int32 exists, that it is another name for int, and that it has 32 non-padding bits. Elsewhere, Microsoft documents that int32_t is also an alias for int. As such, there is no difference other than __int32 not being a standard name.
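Those properties can be checked at compile time; a minimal C++ sketch (the __int32 line is MSVC-specific, so it is guarded accordingly):

    #include <climits>
    #include <cstdint>
    #include <type_traits>

    // int32_t is exactly 32 bits with no padding, by definition.
    static_assert(sizeof(std::int32_t) * CHAR_BIT == 32, "int32_t must be 32 bits");

    // int is only guaranteed to be at least 16 bits; on most desktop platforms it
    // happens to be 32, but the standard does not require that.
    static_assert(sizeof(int) * CHAR_BIT >= 16, "int is at least 16 bits");

    #ifdef _MSC_VER
    // On MSVC, __int32 is documented as another name for int.
    static_assert(std::is_same<__int32, int>::value, "__int32 is int on MSVC");
    #endif

    int main() { return 0; }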

Difference between int and System::Int32

The only difference is one of explicitness about the size and range of values supported; int and System::Int32 refer to the same type.

System::Int32 makes the 32-bitness more explicit to those reading the code. Use int where you just need 'an integer', and use System::Int32 where the size is important (cryptographic code, structures), so that future maintainers are aware of it when changing the code.

Whether you use int or System::Int32, the resulting code will be identical: the difference is purely one of readability or code appearance.

Use int or Int32 while type casting?

They do exactly the same thing - they'll even compile to the same IL. (Assuming you haven't got some crazy other Int32 type somewhere...)

int is just an alias for global::System.Int32.

(This doesn't just apply to casting - it's almost anywhere that a type name is used.)
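A quick hypothetical illustration: the two casts below are the same operation and compile to the same IL:

    using System;

    class CastDemo
    {
        static void Main()
        {
            double d = 3.99;

            int a = (int)d;        // keyword alias
            Int32 b = (Int32)d;    // identical cast, identical IL

            Console.WriteLine(a);  // 3 (truncates toward zero)
            Console.WriteLine(b);  // 3
        }
    }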

int vs Int32. Are they really aliases?

int vs. Int32 is irrelevant to this issue. The field is displayed as int only because the tool you use to look at it replaces those types with their aliases when displaying them. If you look at it with a lower-level tool, you'll see that it doesn't know about int, only about Int32.

The problem is that the Int32 struct contains an Int32 field.

"What is the int standing on?" "You're very clever, young man, very clever," said the old lady. "But it's ints all the way down!"

The solution to this problem is magic. The runtime knows what an Int32 is and gives it special treatment, avoiding infinite recursion. You can't write a custom struct that contains itself as a field. Int32 is a built-in type, not a normal struct; it merely appears as a struct for consistency's sake.
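You can peek at this with reflection (a small C# sketch; the private field's name is a runtime implementation detail, so it is not hard-coded here):

    using System;
    using System.Reflection;

    class Turtles
    {
        static void Main()
        {
            // Enumerate the instance fields of System.Int32 itself.
            var fields = typeof(int).GetFields(
                BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);

            foreach (FieldInfo field in fields)
            {
                // Typically prints a single field whose type is System.Int32 -
                // the "ints all the way down" the answer jokes about.
                Console.WriteLine($"{field.Name} : {field.FieldType}");
            }
        }
    }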


