Why Is the ! in Swift Called an 'Implicitly' Rather Than 'Explicitly' Unwrapped Optional

Why create Implicitly Unwrapped Optionals, since that implies you know there's a value?

Consider the case of an object that may have nil properties while it's being constructed and configured, but is immutable and non-nil afterwards (NSImage is often treated this way, though in its case it's still useful to mutate sometimes). Implicitly unwrapped optionals would clean up its code a good deal, with relatively low loss of safety (as long as the one guarantee held, it would be safe).
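
A minimal sketch of that pattern, using made-up ReportGenerator/Renderer types (not any Apple API): the property is nil only while the object is being configured, and is treated as permanently non-nil afterwards.

    final class Renderer {
        func render() -> String { return "rendered output" }
    }

    final class ReportGenerator {
        // Implicitly unwrapped: set exactly once during configuration,
        // never nil by the time the generator is actually used.
        var renderer: Renderer!

        func configure(renderer: Renderer) {
            self.renderer = renderer
        }

        func generate() -> String {
            // No unwrapping noise here; this traps at runtime if configure was skipped.
            return renderer.render()
        }
    }

    let generator = ReportGenerator()
    generator.configure(renderer: Renderer())
    print(generator.generate())   // rendered output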

(Edit) To be clear though: regular optionals are nearly always preferable.

Why should I unwrap implicitly unwrapped optional?

print, as part of its own internal workings, unwraps Optionals (the regular, unwrap-yourself kind). String interpolation doesn't do this for you; it just converts whatever you give it.

Here's an explanation of the last example:

  • print("y: \(y!.description)") // y: 2

    y has type Int!; here the ! explicitly (force-)unwraps it to give its Int content. description is then called on that Int and returns a String. If y were nil, this would crash.

  • print("y: \(y?.description)") // y: Optional("2")

    Optional chaining is used to call description on y, only if it's non-nil. If it's nil, then description isn't called in the first place, and the nil is propagated. The result of this expression is a String?.

  • print("y: \(y.description)") // y: 2

    Like case 1, y starts as an Int!, but this time it is implicitly unwrapped to give its Int content. description is then called on that Int and returns a String. If y were nil, this would also crash.
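
Putting the three cases together as one runnable snippet (outputs shown in comments; a newer compiler may additionally warn about interpolating an optional in the second line):

    let y: Int! = 2

    print("y: \(y!.description)")   // y: 2
    print("y: \(y?.description)")   // y: Optional("2")
    print("y: \(y.description)")    // y: 2 (traps only if y is nil)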

Why do implicitly unwrapped optionals need to unwrapped again in conditionals?

Why do we need to forcefully unwrap the Bool when we use it inside a conditional?

It's a historical issue. Once upon a time, in the early days of Swift, an Optional could be used as a condition as a way of asking whether it was nil. There is thus a residual worry that you might think if willBeSomeBool is a nil test. Therefore you have to either test explicitly for nil or unwrap so that you have a true Bool, thus proving that you know what you're doing and preventing you from misinterpreting your own results.

Look at it this way. You can't say if willBeSomeString. You can only say something more explicit, either if willBeSomeString == nil or if willBeSomeString == "howdy". So it is simplest to make willBeSomeBool obey that same pattern.
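
A minimal sketch of the explicit forms described above (willBeSomeBool is just an illustrative name taken from the question):

    var willBeSomeBool: Bool! = nil
    willBeSomeBool = true

    // if willBeSomeBool { ... }    // the bare form: rejected, for the reason above

    if willBeSomeBool == nil {      // an explicit nil test
        print("still unset")
    }

    if willBeSomeBool == true {     // an explicit comparison; nil simply compares unequal
        print("set and true")
    }

    if willBeSomeBool! {            // a force-unwrap; this traps if willBeSomeBool is nil
        print("set and true")
    }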

Implicitly unwrapped optional in Apple methods

The frequency of implicitly unwrapped optionals (IUOs) in Cocoa APIs is an artifact of importing those APIs to Swift from ObjC.

In ObjC, all references to objects are just memory pointers. As far as the language/compiler knows, it's always technically possible for any pointer to be nil. In practice, a method that takes an object parameter may be implemented to never accept nil, or a method that returns an object may be implemented to never return nil. But ObjC historically provided no way to assert this in an API's header declaration, so the compiler has to assume that a nil pointer is possible for any object reference.

In Swift, object references aren't just memory pointers; they're a language construct. This buys additional safety: we can assert that a given reference always points to an object, or use an optional type to allow for nil while requiring that the nil case be accounted for.

But dealing with optional unwrapping can be a pain in cases where it's very likely that a reference will never be nil. So we also have the IUO type, which lets you use optionals without checking them for nil (at your own risk).
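
As a quick contrast of the three kinds of reference (Widget is a made-up class, not an Apple type):

    class Widget {
        func draw() { print("drawing") }
    }

    let definite: Widget  = Widget()   // must always hold a value
    let maybe:    Widget? = Widget()   // may be nil; has to be unwrapped before use
    let probably: Widget! = Widget()   // an IUO: used as if it were guaranteed non-nil

    definite.draw()    // always safe
    maybe?.draw()      // safe: silently skipped if maybe is nil
    probably.draw()    // convenient, but traps at runtime if probably is nil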

Because the compiler doesn't know which object references in ObjC APIs (Apple's or your own) can safely be nil, it must import them all with some kind of optional type. For whatever reason, Apple chose IUOs for all imported APIs: possibly because many (but importantly not all!) imported APIs are effectively non-nullable, or possibly because it lets you write chaining code like you would in ObjC (self.view.scene.rootNode, etc.).

In some APIs (including much of Foundation and UIKit), Apple has manually audited the imported declarations to use either full optionals (e.g. UIView?) or non-optional references (UIView) instead of IUOs where semantically appropriate. But not all APIs have been audited yet, and some that have still use IUOs because that remains the most appropriate choice for those APIs. (They're pretty consistent about always importing id as AnyObject!, for example.)

So, getting back to the table in your question: it's better not to think in terms of probabilities. When dealing with an API that returns...

  • AnyObject: always has a value; it cannot be nil.
  • AnyObject?: may be nil; you must check for nil before using it.
  • AnyObject!: no guarantees. Read the header, documentation, or source code (if available) for that API to find out whether it can really be nil in your situation, and treat it accordingly. Or just assume it can be nil and code defensively, as in the sketch after this list.
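
For example, here is the defensive approach applied to a hypothetical unaudited function (made up for illustration) that the compiler has imported with an AnyObject! return type:

    import Foundation

    // Hypothetical unaudited API: imported with an IUO return type because its
    // header carries no nullability information.
    func legacyLookup(_ key: String) -> AnyObject! {
        if key == "known" {
            return NSString(string: "some value")
        }
        return nil
    }

    // Defensive treatment: behave as if the result were AnyObject? until the
    // documentation says otherwise.
    if let value = legacyLookup("unknown") {
        print("got \(value)")
    } else {
        print("nil came back, handled gracefully")
    }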

In the case of instantiateViewControllerWithIdentifier, there are two options you might call equally valid:

  1. Say you're guaranteed to never get nil because you know you put a view controller in your storyboard with that identifier and you know what class it is.

    let vc = storyboard.instantiateViewControllerWithIdentifier("vc") as! MyViewController
    // do stuff with vc
  2. Assume your storyboard can change in the future and set yourself up with a meaningful debug error in case you ever break something.

    if let vc = storyboard.instantiateViewControllerWithIdentifier("vc") as? MyViewController {
        // do something with vc
    } else {
        fatalError("missing expected storyboard content")
    }

