Difference between Int and Int32 in Swift

What is the difference between Int and Int32 in Swift?

According to the Swift documentation:

Int


In most cases, you don’t need to pick a specific size of integer to use in your code. Swift provides an additional integer type, Int, which has the same size as the current platform’s native word size:

On a 32-bit platform, Int is the same size as Int32.

On a 64-bit platform, Int is the same size as Int64.

Unless you need to work with a specific size of integer, always use Int for integer values in your code. This aids code consistency and interoperability. Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.
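As a quick sanity check, you can inspect Int's size and bounds at runtime (a minimal sketch in recent Swift; the numbers in the comments assume a 64-bit platform):

// Inspect Int's size and bounds on the current platform.
print(MemoryLayout<Int>.size)   // 8 on a 64-bit platform, 4 on a 32-bit platform
print(Int.min, Int.max)         // matches Int64's bounds on a 64-bit platform
print(Int.max == Int64.max)     // true on a 64-bit platform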

What is the difference between int, Int16, Int32 and Int64?

Each integer type has a different storage capacity:

Type     Capacity

Int16    -32,768 to +32,767
Int32    -2,147,483,648 to +2,147,483,647
Int64    -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807
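If you want to verify these bounds yourself, each fixed-width type exposes them as .min and .max (a quick sketch):

// Each fixed-width integer type exposes its bounds directly.
print(Int16.min, Int16.max)   // -32768 32767
print(Int32.min, Int32.max)   // -2147483648 2147483647
print(Int64.min, Int64.max)   // -9223372036854775808 9223372036854775807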

As stated by James Sutherland in his answer:

int and Int32 are indeed synonymous; int will be a little more familiar looking, Int32 makes the 32-bitness more explicit to those reading your code. I would be inclined to use int where I just need 'an integer', Int32 where the size is important (cryptographic code, structures) so future maintainers will know it's safe to enlarge an int if appropriate, but should take care changing Int32 variables in the same way.

The resulting code will be identical: the difference is purely one of readability or code appearance.

Swift: compare Int with Int32 or Int64

Comparing Across Integer Types

You can use relational operators, such as the less-than and equal-to
operators (< and ==), to compare instances of different binary integer
types. The following example compares instances of the Int, UInt, and
UInt8 types:

let x: Int = -23
let y: UInt = 1_000
let z: UInt8 = 23

if x < y {
    print("\(x) is less than \(y).")
}
// Prints "-23 is less than 1000."

if z > x {
    print("\(z) is greater than \(x).")
}
// Prints "23 is greater than -23."

From BinaryInteger - Comparing Across Integer Types

You can read the docs for this particular overload of ==, too.

Should I use Int32 for small numbers instead of Int or Int64 on a 64-bit architecture?

No, use Int. The Swift Programming Language is quite explicit about this:

Unless you need to work with a specific size of integer, always use Int for integer values in your code. This aids code consistency and interoperability. Even on 32-bit platforms, Int can store any value between -2,147,483,648 and 2,147,483,647, and is large enough for many integer ranges.

By "work with a specific size of integer," the documentation is describing situations such as file formats and networking protocols that are defined in terms of specific bit-widths. Even if you're only counting to 10, you should still store it in an Int.

Int types do not automatically convert, so if you have an Int32, and a function requires an Int, you'd have to convert it as Int(x). This gets very cumbersome very quickly. To avoid that, Swift strongly recommends everything be an Int unless you have a specific reason to do otherwise.
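For example (a minimal sketch; the function name is made up), every width mismatch forces an explicit conversion at the call site:

// Hypothetical API that takes Swift's default Int.
func updateBadgeCount(_ count: Int) { /* ... */ }

let countFromC: Int32 = 42
// updateBadgeCount(countFromC)     // error: cannot convert Int32 to Int
updateBadgeCount(Int(countFromC))   // explicit conversion needed every time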

You should also avoid UInt, even if your value is unsigned. You should only use UInt when you mean "this machine-word-sized bit pattern" and you should only use the sized UInts (UInt32, etc) when you mean "this bit-width bit pattern." If you mean "a number" (even an unsigned number), you should use Int.

Use UInt only when you specifically need an unsigned integer type with the same size as the platform’s native word size. If this isn’t the case, Int is preferred, even when the values to be stored are known to be nonnegative. A consistent use of Int for integer values aids code interoperability, avoids the need to convert between different number types, and matches integer type inference, as described in Type Safety and Type Inference.


See Peter's comments below for some links to further discussion on performance. It is very true that using 32-bit integers can be a significant performance improvement when working with large data structures, particularly because of caching and locality issues. But as a rule, this should be hidden within a data type that manages that extra complexity, isolating performance-critical code from the main system. Shifting back and forth between 32- and 64-bit integers can easily overwhelm the advantages of smaller data if you're not careful.

So as a rule, use Int. There are advantages to using Int32 in some cases, but trying to use it as a default is as likely to hurt performance as help it, and will definitely increase code complexity dramatically.
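One way to read that advice (a sketch, not a prescription) is to keep the compact storage private and expose plain Int at the boundary:

// Sketch: store Int32 internally for cache-friendliness, but expose Int
// publicly so callers never juggle mixed integer widths.
struct CompactCounts {
    private var storage: [Int32] = []

    mutating func append(_ value: Int) {
        storage.append(Int32(value))   // traps if the value doesn't fit in 32 bits
    }

    subscript(index: Int) -> Int {
        return Int(storage[index])     // widen back to Int on the way out
    }
}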

Swift difference between Int() and toInt()

Swift's Int has no constructors which accept a String.

Any time you want to convert a String to an Int, you must use variable.toInt().

You can only use Int(variable) if variable's type is in the following list:

  • Int
  • UInt8
  • Int8
  • UInt16
  • Int16
  • UInt32
  • Int32
  • UInt64
  • Int64
  • UInt
  • Float
  • Double
  • Float80
  • Any other type you write a custom Int extension for and add a custom init for.

For any other type, you must use an available toInt() method if it exists, or write your own.

The primary difference between the types on this list and those not on it is that, for the most part, Int can represent everything on the list accurately (or at least approximately), so a failable initializer isn't necessary for any of these types.

When trying to convert "Hello World!" to an Int, however, what should be returned? String's toInt() returns nil because its return type is Int? (an optional Int). To do the same thing in an init, the init must be failable (I've posted an example at the bottom of this answer).

However, if you were to implement a struct Rational to represent rational fraction numbers, it might make sense to extend Int to include a constructor that accepts a Rational number:

extension Int {
    init(_ value: Rational) {
        // your implementation
    }
}
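Filled in against a minimal, hypothetical Rational type, that might look like this (truncating division is just one arbitrary choice):

// Hypothetical Rational type for illustration only.
struct Rational {
    var numerator: Int
    var denominator: Int
}

extension Int {
    init(_ value: Rational) {
        self = value.numerator / value.denominator   // truncates toward zero
    }
}

let twoThirds = Rational(numerator: 2, denominator: 3)
print(Int(twoThirds))   // 0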

Here's a list of the available constructors for Int (the cases in which you can use Int(variable)):

/// A 64-bit signed integer value
/// type.
struct Int : SignedIntegerType {
    /// Create an instance initialized to zero.
    init()

    /// Create an instance initialized to `value`.
    init(_ value: Int)

    /// Creates an integer from its big-endian representation, changing the
    /// byte order if necessary.
    init(bigEndian value: Int)

    /// Creates an integer from its little-endian representation, changing the
    /// byte order if necessary.
    init(littleEndian value: Int)

    init(_builtinIntegerLiteral value: Builtin.Int2048)

    /// Create an instance initialized to `value`.
    init(integerLiteral value: Int)
}

extension Int {
    init(_ v: UInt8)
    init(_ v: Int8)
    init(_ v: UInt16)
    init(_ v: Int16)
    init(_ v: UInt32)
    init(_ v: Int32)
    init(_ v: UInt64)

    /// Construct a `Int` having the same bitwise representation as
    /// the least significant bits of the provided bit pattern.
    ///
    /// No range or overflow checking occurs.
    init(truncatingBitPattern: UInt64)

    init(_ v: Int64)

    /// Construct a `Int` having the same bitwise representation as
    /// the least significant bits of the provided bit pattern.
    ///
    /// No range or overflow checking occurs.
    init(truncatingBitPattern: Int64)

    init(_ v: UInt)

    /// Construct a `Int` having the same memory representation as
    /// the `UInt` `bitPattern`. No range or overflow checking
    /// occurs, and the resulting `Int` may not have the same numeric
    /// value as `bitPattern`--it is only guaranteed to use the same
    /// pattern of bits.
    init(bitPattern: UInt)
}

extension Int {
    /// Construct an instance that approximates `other`.
    init(_ other: Float)

    /// Construct an instance that approximates `other`.
    init(_ other: Double)

    /// Construct an instance that approximates `other`.
    init(_ other: Float80)
}

(You can get to this list by typing Int(0) somewhere in Swift, right-clicking, and clicking "Jump to Definition".)

Also notice that not all of these can be called as plain Int(variable); some of them must be called with an argument label, like Int(littleEndian: variable).
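For instance (a quick sketch), the bit-pattern and byte-order initializers require their labels:

// These initializers take argument labels, so plain Int(variable) won't work.
let bits: UInt = UInt.max
let sameBits = Int(bitPattern: bits)    // -1: same bits, different numeric value

let value: Int = 1
let swapped = Int(bigEndian: value)     // byte-swapped on a little-endian machine
print(sameBits, swapped)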

The only way you could use Int(variable) where variable is a String would be to add your own extension to Int:

extension Int {
    init?(_ s: String) {
        if let i = s.toInt() {
            self = i       // initialize from the parsed value
        } else {
            return nil     // parsing failed, so the init fails
        }
    }
}

But I'd recommend just sticking with variable.toInt().

Objective-C int is bridged as Int32 to Swift

Objective-C's int (or, to be precise, C's int) is only required to be at least 16 bits wide, so bridging it as a 32-bit type is not invalid. In Apple's implementations, int is in practice always 32 bits.

If I'm correct, Objective-C's int type length depends on the platform word length (i.e. it's 64 bit when running in a 64-bit environment and 32-bit when running in a 32-bit environment).

That would be NSInteger, not int. You may be confusing the two because Swift's Int is the equivalent of Objective-C's NSInteger.

You can even see it in the docs:

typealias NSInteger = Int
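For comparison, C's int comes across as CInt, which in the Swift standard library is a typealias for Int32, so an explicit widening step is needed to get back to Int (a small sketch):

// C's int bridges as CInt (a typealias for Int32); NSInteger bridges as Int.
let fromC: CInt = 42        // CInt == Int32
let widened = Int(fromC)    // explicit conversion to Swift's default Int
print(widened)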

How can I convert Int32 to Int in Swift?

The error comes from the ? after valueForKey.

Int's initializer doesn't accept optionals.

Writing myUnit.valueForKey("theNUMBER")?.intValue! still gives you an optional value, and the ! at the end doesn't help.

Just replace with this:

return Int(myUnit.valueForKey("theNUMBER")!.intValue)

But you could also do it like this if you want it to be fail-safe:

return myUnit.valueForKey("theNUMBER")?.integerValue ?? 0

And to shorten your function, you can do this:

func myNumber() -> Int {
    let myUnit = self.getObject("EntityName") as! NSManagedObject

    return myUnit.valueForKey("theNUMBER")?.integerValue ?? 0
}

Why is using a generic Int that switches between 32 and 64 bits better than explicitly using Int32 and Int64?

This subject is not widely agreed upon.

The advantage of using a generic type is portability. The same piece of code will compile and run independent of the platform's word size. It may also be faster, in some cases.

The advantage of using specific types is precision. There is no room for ambiguity, and the exact capabilities of the type are known ahead of time.

There is no single right answer. If you commit to either side for any and all purposes, you will sooner or later find yourself making an exception.


