Swift's Use of 'is' for Function Types: Compiler Behavior Differs from Runtime Behavior

Swift: Overriding Self-requirement is allowed, but causes runtime error. Why?

Yes, there seems to be a contradiction. The Self keyword, when used as a return type, apparently means 'self as an instance of Self'. For example, given this protocol

protocol ReturnsReceived {

    /// Returns other.
    func doReturn(other: Self) -> Self
}

we can't implement it as follows

class Return: ReturnsReceived {

    func doReturn(other: Return) -> Self {
        return other // Error
    }
}

because we get a compiler error ("Cannot convert return expression of type 'Return' to return type 'Self'"), which disappears if we violate doReturn()'s contract and return self instead of other. And we can't write

class Return: ReturnsReceived {

    func doReturn(other: Return) -> Return { // Error
        return other
    }
}

because this is only allowed in a final class, even though Swift supports covariant return types. (The following actually compiles.)

final class Return: ReturnsReceived {

    func doReturn(other: Return) -> Return {
        return other
    }
}

On the other hand, as you pointed out, a subclass of Return can 'override' the Self requirement and merrily honor the contract of ReturnsReceived, as if Self were a simple placeholder for the conforming class' name.

class SubReturn: Return {

    override func doReturn(other: Return) -> SubReturn {
        // Of course this crashes if other is not a
        // SubReturn instance, but let's ignore this
        // problem for now.
        return other as! SubReturn
    }
}

I could be wrong, but I think that:

  • if Self as a return type really means 'self as an instance of
    Self', the compiler should not accept this kind of Self requirement
    overriding, because it makes it possible to return instances which
    are not self; otherwise,

  • if Self as a return type must be simply a placeholder with no further implications, then in our example the compiler should already allow overriding the Self requirement in the Return class.

That said (and no choice about the precise semantics of Self would change this), your code illustrates one of those cases where the compiler can easily be fooled, and the best it can do is generate code that defers the checks to run-time. In this case, the checks that should be delegated to the runtime have to do with casting, and in my opinion one interesting aspect revealed by your examples is that at one particular spot Swift seems not to delegate anything, which is why the inevitable crash is more dramatic than it ought to be.

Swift is able to check casts at run-time. Let's consider the following code.
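The two Mario classes aren't defined in this answer, so here is a minimal pair matching the discussion (an assumption: the only thing that matters is that FireFlowerMario subclasses SuperMario and adds throwFireballs()):

class SuperMario {
}

class FireFlowerMario: SuperMario {
    func throwFireballs() {
        print("Fireballs!")
    }
}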

let sm = SuperMario()
let ffm = sm as! FireFlowerMario
ffm.throwFireballs()

Here we create a SuperMario and downcast it to FireFlowerMario. These two classes are not unrelated, and we are assuring the compiler (as!) that we know what we are doing, so the compiler leaves it as it is and compiles the second and third lines without a hitch. However, the program fails at run-time, complaining that it

Could not cast value of type
'SomeModule.SuperMario' (0x...) to
'SomeModule.FireFlowerMario' (0x...).

when trying the cast in the second line. This is not wrong or surprising behaviour. Java, for example, would do exactly the same: compile the code, and fail at run-time with a ClassCastException. The important thing is that the application reliably crashes at run-time.

Your code is a more elaborate way to fool the compiler, but it boils down to the same problem: there is a SuperMario where a FireFlowerMario is expected. The difference is that in your case we don't get a gentle "could not cast" message but, in a real Xcode project, an abrupt and far more alarming error when calling throwFireballs().

In the same situation, Java fails (at run-time) with the same error we saw above (a ClassCastException), which means it attempts a cast (to FireFlowerMario) before calling throwFireballs() on the object returned by queryFriend(). The presence of an explicit checkcast instruction in the bytecode easily confirms this.

Swift, on the contrary, as far as I can see at the moment, does not attempt any cast before the call (no casting routine is called in the compiled code), so a horrible, uncaught error is the only possible outcome. If, instead, your code produced a run-time "could not cast" error message, or something as graceful as that, I would be completely satisfied with the behaviour of the language.

Why does UnsafeRawPointer show different results when function signatures differ in Swift?

Why do the results of the two prints differ?

Because for each function call, Swift creates a temporary variable initialised to the value returned by a.key's getter. Each function is called with a pointer to its own temporary variable, so the pointer values will likely not be the same – they refer to different variables.

The reason why temporary variables are used here is because A is a non-final class, and can therefore have its getters and setters of key overridden by subclasses (which could well re-implement it as a computed property).

Therefore in an un-optimised build, the compiler cannot just pass the address of key directly to the function, but instead has to rely on calling the getter (although in an optimised build, this behaviour can change completely).
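For reference, the setup under discussion looks roughly like this (a reconstruction, since the question's code isn't reproduced here; the exact addresses printed don't matter, only the fact that they differ):

func aaa(_ key: UnsafeRawPointer) {
    print(key)
}

func bbb(_ key: UnsafeRawPointer) {
    print(key)
}

class A {
    var key = "aaa" // non-final stored property
}

var a = A()
aaa(&a.key) // pointer to one temporary
bbb(&a.key) // pointer to a different temporary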

You'll note that if you mark key as final, you should now get consistent pointer values in both functions:

class A {
    final var key = "aaa"
}

var a = A()
aaa(&a.key) // 0x0000000100a0abe0
bbb(&a.key) // 0x0000000100a0abe0

Because now the address of key can just be directly passed to the functions, bypassing its getter entirely.

It's worth noting however that, in general, you should not rely on this behaviour. The values of the pointers you get within the functions are a pure implementation detail and are not guaranteed to be stable. The compiler is free to call the functions however it wishes, only promising you that the pointers you get will be valid for the duration of the call, and will have pointees initialised to the expected values (and if mutable, any changes you make to the pointees will be seen by the caller).

The only exception to this rule is the passing of pointers to global and static stored variables. Swift does guarantee that the pointer values you get will be stable and unique for that particular variable. From the Swift team's blog post on Interacting with C Pointers (emphasis mine):

However, interaction with C pointers is inherently
unsafe compared to your other Swift code, so care must be taken. In
particular:

  • These conversions cannot safely be used if the callee
    saves the pointer value for use after it returns. The pointer that
    results from these conversions is only guaranteed to be valid for the
    duration of a call. Even if you pass the same variable, array, or
    string as multiple pointer arguments, you could receive a different
    pointer each time. An exception to this is global or static stored
    variables. You can safely use the address of a global variable as a
    persistent unique pointer value, e.g. as a KVO context parameter.

Therefore if you made key a static stored property of A, or just a global stored variable, you are guaranteed to get the same pointer value in both function calls.
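For example, a sketch using the same aaa and bbb functions as above:

class A {
    static var key = "aaa" // static stored property
}

// Both calls now receive a pointer to the same, stable storage.
aaa(&A.key)
bbb(&A.key)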


Changing the function signature

When I change the function signature of bbb to make it the same as aaa, the results of the two prints are the same

This appears to be an optimisation thing, as I can only reproduce it in -O builds and playgrounds. In an un-optimised build, the addition or removal of an extra parameter has no effect.

(Although it's worth noting that you should not test Swift behaviour in playgrounds as they are not real Swift environments, and can exhibit different runtime behaviour to code compiled with swiftc)

The cause of this behaviour is merely a coincidence – the second temporary variable is able to reside at the same address as the first (after the first is deallocated). When you add an extra parameter to aaa, a new variable will be allocated 'between' them to hold the value of the parameter to pass, preventing them from sharing the same address.

The same address isn't observable in un-optimised builds due to the intermediate load of a in order to call the getter for the value of a.key. As an optimisation, the compiler is able to inline the value of a.key to the call-site if it has a property initialiser with a constant expression, removing the need for this intermediate load.

Therefore if you give a.key a non-deterministic value, e.g. var key = arc4random(), then you should once again observe different pointer values, as the value of a.key can no longer be inlined.
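For example, a sketch assuming an Apple platform (arc4random comes from the C library via Foundation):

import Foundation

class A {
    // A non-constant initial value: the compiler can no longer inline
    // a.key at the call site, so the intermediate load of `a` remains
    // and the two temporaries no longer happen to share an address.
    var key = arc4random()
}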

But regardless of the cause, this is a perfect example of how the pointer values for variables (which are not global or static stored variables) are not to be relied on – as the value you get can completely change depending on factors such as optimisation level and parameter count.


inout & UnsafeMutable(Raw)Pointer

Regarding your comment:

But withUnsafePointer(to:_:) always has the correct behavior I want (in fact it should, otherwise this function would be of no use), and it also has an inout parameter. So I assume there are implementation differences between these functions with inout parameters.

The compiler treats an inout parameter in a slightly different way to an UnsafeRawPointer parameter. This is because you can mutate the value of an inout argument in the function call, but you cannot mutate the pointee of an UnsafeRawPointer.
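A minimal illustration of that difference:

func addOne(_ x: inout Int) {
    x += 1 // fine: an inout argument can be mutated
}

func inspect(_ p: UnsafeRawPointer) {
    // UnsafeRawPointer only offers read access to the memory it points to.
    print(p.load(as: Int.self))
}

var n = 5
addOne(&n)  // n is now 6
inspect(&n) // prints 6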

In order to make any mutations to the value of the inout argument visible to the caller, the compiler generally has two options:

  1. Make a temporary variable initialised to the value returned by the variable's getter. Call the function with a pointer to this variable, and once the function has returned, call the variable's setter with the (possibly mutated) value of the temporary variable.

  2. If it's addressable, simply call the function with a direct pointer to the variable.

As said above, the compiler cannot use the second option for stored properties that aren't known to be final (but this can change with optimisation). However, always relying on the first option can be potentially expensive for large values, as they'll have to be copied. This is especially detrimental for value types with copy-on-write behaviour, as they depend on being unique in order to perform direct mutations to their underlying buffer – a temporary copy violates this.

To solve this problem, Swift implements a special accessor – called materializeForSet. This accessor allows the callee to either provide the caller with a direct pointer to the given variable if it's addressable, or otherwise will return a pointer to a temporary buffer containing a copy of the variable, which will need to be written back to the setter after it has been used.

The former is the behaviour you're seeing with inout – you're getting a direct pointer to a.key back from materializeForSet, therefore the pointer values you get in both function calls are the same.

However, materializeForSet is only used for function parameters that require write-back, which explains why it's not used for UnsafeRawPointer. If you make the function parameters of aaa and bbb take UnsafeMutable(Raw)Pointers (which do require write-back), you should observe the same pointer values again.

func aaa(_ key: UnsafeMutableRawPointer) {
    print(key)
}

func bbb(_ key: UnsafeMutableRawPointer) {
    print(key)
}

class A {
    var key = "aaa"
}

var a = A()

// will use materializeForSet to get a direct pointer to a.key
aaa(&a.key) // 0x0000000100b00580
bbb(&a.key) // 0x0000000100b00580

But again, as said above, this behaviour is not to be relied upon for variables that are not global or static.

Generic type not preserved when called within another generic function

You're trying to reinvent class inheritance with generics. That is not what generics are for, and they don't work that way. Generic methods are statically dispatched, which means that the code is chosen at compile time, not at runtime. An overload should never change the behavior of the function (which is what you're trying to do here). Overloads constrained by where clauses can be used to improve performance, but they cannot be used to create dynamic (runtime) dispatch.

If you must use inheritance, then you must use classes. That said, the problem you've described is better solved with a generic Task rather than a protocol. For example:

struct Task<Result> {
    let execute: () throws -> Result
}

enum TaskRunner {
    static func run<Result>(task: Task<Result>) throws -> Result {
        try task.execute()
    }
}

let specificTask = Task(execute: { "Some Result" })

print(try TaskRunner.run(task: specificTask)) // Prints "Some Result"

Notice how this eliminates the "task not supported" case. Rather than being a runtime error, it is now a compile-time error. You can no longer call this incorrectly, so you don't have to check for that case.

If you really want dynamic dispatch, it is possible, but you must implement it as dynamic dispatch, not overloads.

enum TaskRunner<T: Task> {
    static func run(task: T) throws -> T.Result {

        switch task {
        case is SpecificTask:
            // execute a SpecificTask
            return "Some Result" as! T.Result // <=== This is very fragile
        default:
            throw SomeError.error
        }
    }
}

This is fragile because of the as! T.Result. If you change the result type of SpecificTask to something other than String, it'll crash. But the important point is the case is SpecificTask, which is determined at runtime (dynamic dispatch). If you need task, and I assume you do, you'd swap that with if let task = task as? SpecificTask.
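For the dynamic-dispatch sketch above to actually compile, the surrounding declarations would need to look something like this (a reconstruction; the question's own Task protocol, SpecificTask and SomeError aren't shown here):

protocol Task {
    associatedtype Result
    func execute() throws -> Result
}

struct SpecificTask: Task {
    func execute() throws -> String { "Some Result" }
}

enum SomeError: Error {
    case error
}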

Before going down that road, I'd reconsider the design and see how this will really be called. Since the Result type is generic, you can't call arbitrary Tasks in a loop (since all the return values have to match). So it makes me wonder what kind of code can actually call run.

Swift Type Inference Not Working (Xcode 7.1.1)

Okay, I am going to answer my own question here.

After some investigation it seems that Swift wants you to implement an extension with a type constraint on the generic parameter.

extension MyClass where T : MyProtocol {
    func print() {
        b.foo(value)
    }
}

I know this doesn't really solve the problem, but it was sufficient as a workaround in my real-world use case.

The above sample would wind up looking something like the following.

protocol MyProtocol { }

class MyProtoClass : MyProtocol { }

class Bar {

    func foo<T>(value: T) {
        print("T is Generic")
    }

    func foo(value: MyProtocol) {
        print("T conforms to MyProtocol")
    }
}

class MyClass<T> {

    var value: T
    init(value: T) { self.value = value }
    var b = Bar()

    func print() {
        b.foo(value)
    }
}

extension MyClass where T : MyProtocol {

    func print() {
        b.foo(value)
    }
}

MyClass<MyProtoClass>(value: MyProtoClass()).print()
MyClass<String>(value: "").print()

Swift: returning a runtime random opaque type generates an error

Consider this: what is the type of the expression value ? Cat() : Dog()

It isn't Animal. A ternary expression needs a single type, but here you have either a Cat or a Dog. Type inference isn't going to figure out that you can erase those two different types back to some common type, even if it's possible to do so.
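A minimal sketch of the situation (Cat, Dog and the Animal protocol are assumed names from the question):

protocol Animal {}
struct Cat: Animal {}
struct Dog: Animal {}

// Does not compile: an opaque result type needs a single concrete
// underlying type, but this body can produce either a Cat or a Dog.
// func makeAnimal(_ value: Bool) -> some Animal {
//     value ? Cat() : Dog()
// }

// Erasing both branches to the common existential type explicitly works:
func makeAnimal(_ value: Bool) -> Animal {
    return value ? Cat() as Animal : Dog()
}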

Swift: Generic type inference at runtime

Generics are evaluated at compile time and assigned a single, concrete type. There is no such thing as "type inference at runtime."

I think the primary change you want is:

func delete(_ type: IDObject.Type, ids: [Int]) {

You don't want to specialize this function on type, you just want to pass type.
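To make that distinction concrete, here is a minimal sketch (IDObject is assumed from the question; the Realm-specific parts are left out):

class Object {}
class IDObject: Object {}
class Person: IDObject {}

// Generic version: T is fixed at compile time from the static type at
// the call site. A caller that only knows IDObject.Type binds T to
// IDObject, never to the dynamic type of the value it passes.
func genericDelete<T: IDObject>(_ type: T.Type, ids: [Int]) {
    print("static type:", T.self)
}

// Non-generic version: the metatype is an ordinary runtime value, so you
// get whatever was actually passed in.
func delete(_ type: IDObject.Type, ids: [Int]) {
    print("dynamic type:", type)
}

func caller(_ type: IDObject.Type) {
    genericDelete(type, ids: []) // static type: IDObject
    delete(type, ids: [])        // dynamic type: Person
}

caller(Person.self)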

It's not clear what objects(_:where:) returns, so this may break your delete method. You may need to make it less specific:

func delete(_ objects: Results<Object>) {

(This isn't a panacea for subtyping; I'm assuming that objects(_:where:) returns exactly Results<Object>.)

Wrong specialized generic function gets called in Swift 3 from an indirect call

This is indeed correct behaviour as overload resolution takes place at compile time (it would be a pretty expensive operation to take place at runtime). Therefore from within test(value:), the only thing the compiler knows about value is that it's of some type that conforms to DispatchType – thus the only overload it can dispatch to is func doBar<D : DispatchType>(value: D).
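For reference, the kind of overload-based setup the question describes looks roughly like this (a reconstruction, using the names from the solution further below):

protocol DispatchType {}
class DispatchType1: DispatchType {}
class DispatchType2: DispatchType {}

func doBar<D: DispatchType>(value: D) {
    print("general function called")
}

func doBar(value: DispatchType1) {
    print("DispatchType1 called")
}

func test<D: DispatchType>(value: D) {
    // Overload resolution happens here, at compile time, against the
    // static type D – so the generic doBar is always chosen.
    doBar(value: value)
}

test(value: DispatchType1()) // "general function called"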

Things would be different if generic functions were always specialised by the compiler, because then a specialised implementation of test(value:) would know the concrete type of value and thus be able to pick the appropriate overload. However, specialisation of generic functions is currently only an optimisation (as without inlining, it can add significant bloat to your code), so this doesn't change the observed behaviour.

One solution in order to allow for polymorphism is to leverage the protocol witness table (see this great WWDC talk on them) by adding doBar() as a protocol requirement, and implementing the specialised implementations of it in the respective classes that conform to the protocol, with the general implementation being a part of the protocol extension.

This will allow for the dynamic dispatch of doBar(), thus allowing it to be called from test(value:) and having the correct implementation called.

protocol DispatchType {
    func doBar()
}

extension DispatchType {
    func doBar() {
        print("general function called")
    }
}

class DispatchType1: DispatchType {
    func doBar() {
        print("DispatchType1 called")
    }
}

class DispatchType2: DispatchType {
    func doBar() {
        print("DispatchType2 called")
    }
}

func test<D : DispatchType>(value: D) {
    value.doBar()
}

let d1 = DispatchType1()
let d2 = DispatchType2()

test(value: d1) // "DispatchType1 called"
test(value: d2) // "DispatchType2 called"

The strange behaviour of Swift's AnyObject

As in Objective-C, where you can send arbitrary messages to id, arbitrary properties and methods can be called on an instance of AnyObject in Swift. The details are different, however, and they are documented in Interacting with Objective-C APIs in the "Using Swift with Cocoa and Objective-C" book.

Swift includes an AnyObject type that represents some kind of object. This is similar to Objective-C’s id type. Swift imports id as AnyObject, which allows you to write type-safe Swift code while maintaining the flexibility of an untyped object.

...

You can call any Objective-C method and access any property on an AnyObject value without casting to a more specific class type. This includes Objective-C compatible methods and properties marked with the @objc attribute.

...

When you call a method on a value of AnyObject type, that method call behaves like an implicitly unwrapped optional. You can use the same optional chaining syntax you would use for optional methods in protocols to optionally invoke a method on AnyObject.

Here is an example:

func tryToGetTimeInterval(obj : AnyObject) {
    let ti = obj.timeIntervalSinceReferenceDate // NSTimeInterval!
    if let theTi = ti {
        print(theTi)
    } else {
        print("does not respond to `timeIntervalSinceReferenceDate`")
    }
}

tryToGetTimeInterval(NSDate(timeIntervalSinceReferenceDate: 1234))
// 1234.0

tryToGetTimeInterval(NSString(string: "abc"))
// does not respond to `timeIntervalSinceReferenceDate`

obj.timeIntervalSinceReferenceDate is an implicitly unwrapped optional
and nil if the object does not have that property.

Here is an example of checking for and calling a method:

func tryToGetFirstCharacter(obj : AnyObject) {
    let fc = obj.characterAtIndex // ((Int) -> unichar)!
    if let theFc = fc {
        print(theFc(0))
    } else {
        print("does not respond to `characterAtIndex`")
    }
}

tryToGetFirstCharacter(NSDate(timeIntervalSinceReferenceDate: 1234))
// does not respond to `characterAtIndex`

tryToGetFirstCharacter(NSString(string: "abc"))
// 97

obj.characterAtIndex is an implicitly unwrapped optional closure. That code
can be simplified using optional chaining:

func tryToGetFirstCharacter(obj : AnyObject) {
    if let c = obj.characterAtIndex?(0) {
        print(c)
    } else {
        print("does not respond to `characterAtIndex`")
    }
}

In your case, TestClass does not have any @objc properties.

let xyz = typeAnyObject.xyz // error: value of type 'AnyObject' has no member 'xyz'

does not compile because the xyz property is unknown to the compiler.

let name = typeAnyObject.name // String!

does compile because – as you noticed – NSException has a name property.
The value, however, is nil because TestClass does not have an
Objective-C compatible name method. As above, you should use optional
binding to safely unwrap the value (or test against nil).

If your class is derived from NSObject

class TestClass : NSObject {
    var name : String?
    var xyz : String?
}

then

let xyz = typeAnyObject.xyz // String?!

does compile. (Alternatively, mark the class or the properties with @objc.)
But now

let name = typeAnyObject.name // error: Ambiguous use of `name`

does not compile anymore. The reason is that both TestClass and NSException
have a name property, but with different types (String? vs String),
so the type of that expression is ambiguous. This ambiguity can only be
resolved by (optionally) casting the AnyObject back to TestClass:

if let name = (typeAnyObject as? TestClass)?.name {
    print(name)
}

Conclusion:

  • You can call any method/property on an instance of AnyObject if that
    method/property is Objective-C compatible.
  • You have to test the implicitly unwrapped optional against nil or
    use optional binding to check that the instance actually has that
    method/property.
  • Ambiguities arise if more than one class has (Objective-C) compatible
    methods with the same name but different types.

In particular because of the last point, I would try to avoid this
mechanism if possible, and optionally cast to a known class instead
(as in the last example).


