Swift: Accessing Computed Property Through Pointer

None of this is defined behaviour. It may or may not produce expected results, or it may just crash at runtime.

When you say

let m1 = Mutator(name:"mf1", storage: &f1.bar)

Swift will allocate some memory and initialise it to the value returned by f1.bar's getter. A pointer to this memory will then be passed into Mutator's init – and after the call, Swift will then call f1.bar's setter with the (possibly changed) contents of the memory it allocated.

This memory will then be deallocated – the pointer is now no longer valid. Reading and writing to its pointee will produce undefined behaviour. Therefore, you should not persist the pointer after the call to Mutator's initialiser.
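
For illustration, the question's Mutator presumably stored the pointer it was handed, along these lines (a hypothetical reconstruction; this stored-pointer design is exactly the pattern to avoid):

struct Mutator<T> {

    let name: String

    // this pointer refers to a temporary that is deallocated as soon as
    // the initialiser returns, so every later use of it is invalid
    let storage: UnsafeMutablePointer<T>

    var value: T {
        get { return storage.pointee }                 // undefined behaviour
        nonmutating set { storage.pointee = newValue } // undefined behaviour
    }

    init(name: String, storage: UnsafeMutablePointer<T>) {
        self.name = name
        self.storage = storage
    }
}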

One way to get the behaviour you want is to use two closures for getting and setting f1.bar, both capturing f1. This ensures that the reference to f1 remains valid for as long as the closures live.

For example:

struct Mutator<T> {

    let getter: () -> T
    let setter: (T) -> Void

    var value: T {
        get {
            return getter()
        }
        nonmutating set {
            setter(newValue)
        }
    }

    init(getter: @escaping () -> T, setter: @escaping (T) -> Void) {
        self.getter = getter
        self.setter = setter
    }
}

You can then use it like so:

class Foo {
    private var data = [String : Double]()

    var bar: Double? {
        get { return self.data["bar"] }
        set { self.data["bar"] = newValue }
    }

    init(_ key: String, _ val: Double) {
        self.data[key] = val
    }
}

let f1 = Foo("bar", 1.1)
let m1 = Mutator(getter: { f1.bar }, setter: { f1.bar = $0 })

let before = m1.value
m1.value = 199.1

print("m1: before = \(before as Optional), after = \(m1.value as Optional)")
print("f1 after = \(f1.bar as Optional)")

// m1: before = Optional(1.1000000000000001), after = Optional(199.09999999999999)
// f1 after = Optional(199.09999999999999)

One downside to this approach, though, is the repetition of the value being got and set (f1.bar in this case). An alternative implementation is to use a single closure that takes a function with an inout parameter and returns the (possibly mutated) value.

struct Mutator<T> {

    let getter: () -> T
    let setter: (T) -> Void

    var value: T {
        get {
            return getter()
        }
        nonmutating set {
            setter(newValue)
        }
    }

    init(mutator: @escaping ((inout T) -> T) -> T) {

        // a function which, when applied, will call mutator with a function
        // input that just returns the inout argument passed by the caller.
        getter = {
            mutator { $0 }
        }

        // a function which, when applied with a given new value, will call
        // mutator with a function that will set the inout argument passed by
        // the caller to the new value, which will then be returned
        // (but ignored by the outer function)
        setter = { newValue in
            _ = mutator { $0 = newValue; return $0 }
        }
    }
}

// ...

let f1 = Foo("bar", 1.1)
let m1 = Mutator { $0(&f1.bar) }

The getter now simply applies the passed function, returning the inout parameter it was handed (f1.bar in this case), and the setter uses that inout parameter to assign the new value.
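
Usage is then the same as before:

let before = m1.value // calls the getter, which reads f1.bar
m1.value = 199.1      // calls the setter, which writes f1.bar

print("before = \(before as Optional), after = \(f1.bar as Optional)")
// before = Optional(1.1000000000000001), after = Optional(199.09999999999999)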

Personally, though, I prefer the first approach, despite the repetition.

Computed property not computed on set

Your app crashes because of an infinite loop: reading self.displayName inside displayName's own getter calls the getter again, recursing until the stack overflows.

On your get you have:

if (!self.displayName.isEmpty) {
    return self.displayName
}

I suggest a solution like this:

class User {

    private var compoundName = ""

    // stored properties from the question's class
    var firstName = ""
    var lastName = ""

    var displayName: String {
        get {
            guard self.compoundName.isEmpty else {
                return self.compoundName
            }
            if let firstLastNameChar = self.lastName.first {
                return "\(self.firstName) \(firstLastNameChar)"
            }
            return self.firstName
        }
        set(displayName) {
            self.compoundName = displayName
        }
    }
}
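
With this, the getter falls back to the first name plus last initial until an explicit display name has been set. A quick usage sketch:

let user = User()
user.firstName = "Jane"
user.lastName = "Doe"
print(user.displayName) // Jane D

user.displayName = "JD"
print(user.displayName) // JD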

Why is it not possible to define property observers for computed properties?

In the first code, you don't need observers, because you are already writing the code that sets the property (set). Thus, if you want to do something before/after the property gets set, you can just do it right there in the setter (set):

class A {
    private var _test = "foo"
    var test: String {
        get {
            return self._test
        }
        set {
            // will set
            self._test = newValue
            // did set
        }
    }
}

Thus, by a kind of Occam's Razor principle, it would be redundant and unnecessary to have separate setter observers: you are the setter, so there is no need to observe yourself.

In your subclass override, on the other hand, where you didn't supply a whole new computed property, the setting is going on behind your back, as it were, so as compensation you are allowed to inject set observation into the process.
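
For example, a minimal sketch of that second situation, where a subclass injects observation into a stored property it inherits:

class Base {
    var test = "foo" // the actual setting happens here, in the superclass
}

class Sub: Base {
    override var test: String {
        didSet {
            print("test changed from \(oldValue) to \(test)")
        }
    }
}

let s = Sub()
s.test = "bar" // prints: test changed from foo to bar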

UnsafeMutablePointer.pointee and didSet properties

Any time you see a construct like UnsafeMutablePointer(&intObservingThing.anInt), you should be extremely wary about whether it'll exhibit undefined behaviour. In the vast majority of cases, it will.

First, let's break down exactly what's happening here. UnsafeMutablePointer doesn't have any initialisers that take inout parameters, so what initialiser is this calling? Well, the compiler has a special conversion that allows a & prefixed argument to be converted to a mutable pointer to the 'storage' referred to by the expression. This is called an inout-to-pointer conversion.

For example:

func foo(_ ptr: UnsafeMutablePointer<Int>) {
    ptr.pointee += 1
}

var i = 0
foo(&i)
print(i) // 1

The compiler inserts a conversion that turns &i into a mutable pointer to i's storage. Okay, but what happens when i doesn't have any storage? For example, what if it's computed?

func foo(_ ptr: UnsafeMutablePointer<Int>) {
    ptr.pointee += 1
}

var i: Int {
    get { return 0 }
    set { print("newValue = \(newValue)") }
}
foo(&i)
// prints: newValue = 1

This still works, so what storage is being pointed to by the pointer? To solve this problem, the compiler:

  1. Calls i's getter, and places the resultant value into a temporary variable.
  2. Gets a pointer to that temporary variable, and passes that to the call to foo.
  3. Calls i's setter with the new value from the temporary.

Effectively doing the following:

var j = i // calling `i`'s getter
foo(&j)
i = j // calling `i`'s setter

It should hopefully be clear from this example that this imposes an important constraint on the lifetime of the pointer passed to foo – it can only be used to mutate the value of i during the call to foo. Attempting to escape the pointer and using it after the call to foo will result in a modification of only the temporary variable's value, and not i.

For example:

func foo(_ ptr: UnsafeMutablePointer<Int>) -> UnsafeMutablePointer<Int> {
    return ptr
}

var i: Int {
    get { return 0 }
    set { print("newValue = \(newValue)") }
}
let ptr = foo(&i)
// prints: newValue = 0
ptr.pointee += 1

ptr.pointee += 1 takes place after i's setter has been called with the temporary variable's new value, therefore it has no effect.

Worse than that, it exhibits undefined behaviour, as the compiler doesn't guarantee that the temporary variable will remain valid after the call to foo has ended. For example, the optimiser could de-initialise it immediately after the call.

Okay, but as long as we only get pointers to variables that aren't computed, we should be able to use the pointer outside of the call it was passed to, right? Unfortunately not; it turns out there are lots of other ways to shoot yourself in the foot when escaping inout-to-pointer conversions!

To name just a few (there are many more!):

  • A local variable is problematic for a similar reason to our temporary variable from earlier – the compiler doesn't guarantee that it will remain initialised until the end of the scope it's declared in. The optimiser is free to de-initialise it earlier.

    For example:

    func bar() {
        var i = 0
        let ptr = foo(&i)
        // Optimiser could de-initialise `i` here...

        // ...making this undefined behaviour!
        ptr.pointee += 1
    }
  • A stored variable with observers is problematic because under the hood it's actually implemented as a computed variable that calls its observers in its setter.

    For example:

    var i: Int = 0 {
        willSet(newValue) {
            print("willSet to \(newValue), oldValue was \(i)")
        }
        didSet(oldValue) {
            print("didSet to \(i), oldValue was \(oldValue)")
        }
    }

    is essentially syntactic sugar for:

    var _i: Int = 0

    func willSetI(newValue: Int) {
        print("willSet to \(newValue), oldValue was \(i)")
    }

    func didSetI(oldValue: Int) {
        print("didSet to \(i), oldValue was \(oldValue)")
    }

    var i: Int {
        get {
            return _i
        }
        set {
            willSetI(newValue: newValue)
            let oldValue = _i
            _i = newValue
            didSetI(oldValue: oldValue)
        }
    }
  • A non-final stored property on classes is problematic as it can be overridden by a computed property.
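
    For example, a minimal sketch with hypothetical Base and Sub classes:

    class Base {
        var x = 0 // a stored property here...
    }

    class Sub: Base {
        // ...but computed in the subclass, so a pointer obtained from an
        // inout-to-pointer conversion on `x` has no stable backing storage
        override var x: Int {
            get { return super.x }
            set { super.x = newValue }
        }
    }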

And this isn't even considering cases that rely on implementation details within the compiler.

For this reason, the compiler only guarantees stable and unique pointer values from inout-to-pointer conversions on global and static stored variables without observers. In any other case, attempting to escape and use a pointer from an inout-to-pointer conversion after the call it was passed to will lead to undefined behaviour.


Okay, but how does my example with the function foo relate to your example of calling an UnsafeMutablePointer initialiser? Well, UnsafeMutablePointer has an initialiser that takes an UnsafeMutablePointer argument (as a result of conforming to the underscored _Pointer protocol which most standard library pointer types conform to).

This initialiser is effectively the same as the foo function – it takes an UnsafeMutablePointer argument and returns it. Therefore when you do UnsafeMutablePointer(&intObservingThing.anInt), you're escaping the pointer produced from the inout-to-pointer conversion – which, as we've discussed, is only valid if it's used on a stored global or static variable without observers.

So, to wrap things up:

var intObservingThing = IntObservingThing(anInt: 0)
var otherPtr = UnsafeMutablePointer(&intObservingThing.anInt)
// "I was just set to 0."

otherPtr.pointee = 20

is undefined behaviour. The pointer produced from the inout-to-pointer conversion is only valid for the duration of the call to UnsafeMutablePointer's initialiser. Attempting to use it afterwards results in undefined behaviour. As matt demonstrates, if you want scoped pointer access to intObservingThing.anInt, you want to use withUnsafeMutablePointer(to:).
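
For example, assuming IntObservingThing is a struct whose anInt property has a printing didSet observer (as the output above suggests), the scoped version looks like this:

struct IntObservingThing {
    var anInt: Int {
        didSet { print("I was just set to \(anInt).") }
    }
}

var intObservingThing = IntObservingThing(anInt: 0)

withUnsafeMutablePointer(to: &intObservingThing.anInt) { ptr in
    // the pointer is only valid for the duration of this closure
    ptr.pointee = 20
}
// the write-back (and therefore the observer) runs when the closure
// returns, printing: I was just set to 20.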

I'm actually currently working on implementing a warning (which will hopefully transition to an error) that will be emitted on such unsound inout-to-pointer conversions. Unfortunately I haven't had much time lately to work on it, but all things going well, I'm aiming to start pushing it forwards in the new year, and hopefully get it into a Swift 5.x release.

In addition, it's worth noting that while the compiler doesn't currently guarantee well-defined behaviour for:

var normalThing = NormalThing(anInt: 0)
var ptr = UnsafeMutablePointer(&normalThing.anInt)
ptr.pointee = 20

From the discussion on #20467, it looks like this will likely be something that the compiler does guarantee well-defined behaviour for in a future release, due to the fact that the base (normalThing) is a fragile stored global variable of a struct without observers, and anInt is a fragile stored property without observers.
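
(For reference, NormalThing here is presumably just a plain struct along these lines:)

struct NormalThing {
    var anInt: Int // plain stored property, no observers
}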

Swift, when referencing a class property, is it making a copy of the data?

Yes and yes. It doesn't matter that the String and the Int were in a class. You asked for the String or the Int (or both); those are value types, so you got copies.

It's easy to prove this to yourself, especially with the String. Just change something about it, and then look back at what the class instance is holding: it will be unchanged.

class C {
    var stringProperty : String
    init(string: String) {
        self.stringProperty = string
    }
}
let c = C(string: "hello")
var s = c.stringProperty
s.removeLast()
print(s) // hell
print(c.stringProperty) // hello

If you want to see the class-as-reference in action, make two of the same instance and do something to one of those:

class C {
    var stringProperty : String
    init(string: String) {
        self.stringProperty = string
    }
}
let c = C(string: "hello")
let d = c
c.stringProperty = "goodbye"
print(d.stringProperty) // goodbye

Computed property with UNNotificationRequest returns Cannot convert return expression of type 'Void' to return type '[UNNotificationRequest]'

You can probably use a dispatch group, as follows. What you are actually doing is waiting for your notifications to be retrieved on another thread, and only then continuing:



var notifications: [UNNotificationRequest] {
    var retrievedNotifications: [UNNotificationRequest] = []

    let group = DispatchGroup()
    group.enter()

    // avoid deadlocks by not using the .main queue here
    DispatchQueue.global(qos: .default).async {
        UNUserNotificationCenter.current().getPendingNotificationRequests { notifications in
            retrievedNotifications = notifications
            group.leave()
        }
    }

    // wait for the completion handler to fire ...
    group.wait()

    return retrievedNotifications
}
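
A hypothetical call site, from somewhere the notifications property is visible; note that group.wait() blocks the calling thread until the completion handler fires, so avoid evaluating this property anywhere latency-sensitive:

let pending = notifications
print("pending identifiers: \(pending.map { $0.identifier })")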

Read-only access for instance property in Swift

Make origin completely private. The only public thing should be a read-only computed property. Your public incrementX and incrementY methods will still work as desired, because they are allowed to access origin.

If Point is a struct, then it is sufficient for this read-only property to return origin, because it will be a copy:

var currentOrigin : Point {
    return self.origin
}

But if you insist on making Point a class rather than a struct, then you are vending a reference to the same Point instance you are already retaining, and you cannot prevent it from being mutable. Therefore you will have to produce a copy of self.origin yourself, so that you are not vending a mutable reference to your own private instance:

var currentOrigin : Point {
    return Point(self.origin.x, self.origin.y)
}

(The way Objective-C Cocoa typically solves this sort of problem is by implementing a class cluster of immutable/mutable pairs; for example, we maintain an NSMutableString but we vend an NSString copy, so that our mutable string cannot be mutated behind our back. However, I think you are being very foolish to reject the use of a struct, since this is one of Swift's huge advantages over Objective-C, i.e. it solves the very problem you are having.)
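
Putting the class-based version together, a minimal sketch (the Shape class and the unlabelled Point initialiser are assumptions here, not from the question):

class Point {
    var x: Double
    var y: Double
    init(_ x: Double, _ y: Double) { self.x = x; self.y = y }
}

class Shape {
    private var origin = Point(0, 0)

    // read-only: callers get a fresh copy, not our private instance
    var currentOrigin: Point {
        return Point(self.origin.x, self.origin.y)
    }

    func incrementX() { self.origin.x += 1 }
    func incrementY() { self.origin.y += 1 }
}

let shape = Shape()
shape.incrementX()
shape.currentOrigin.x = 99   // mutates only the returned copy
print(shape.currentOrigin.x) // 1.0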

Swift - constant properties (e.g. strings) computed var vs let, any advantage?

IMHO, the memory used by the computed property will always be greater than the memory used by a let constant.

The reason is simple: you'll probably have the string myConstant in your symbol table, so the constant can be translated into just a pointer to the address of that string, while the computed var will be invoked as a function that then returns the pointer to the string in the symbol table.

This probably makes a (really) small difference, and I'd assume the compiler optimizes it when you have lots of accesses to the same address, somewhat like it does with collections.

I do not have any docs to verify this, but I think it is reasonable as a way to visualize and understand a little of what's going on when using a let constant versus a computed property.
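
To make the comparison concrete, a sketch of the two declarations being weighed (names are hypothetical):

// stored once; every read refers to the same storage
let myConstant = "myConstant"

// a getter that runs on every access and returns the string
var myComputedConstant: String {
    return "myConstant"
}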


