objc_sync_enter / objc_sync_exit not working with DISPATCH_QUEUE_PRIORITY_LOW

objc_sync_enter is an extremely low-level primitive and isn't intended to be used directly. It is an implementation detail of the old @synchronized system in Objective-C. Even @synchronized itself is extremely outdated and should generally be avoided.

Synchronized access in Cocoa is best achieved with GCD queues. For example, this is a common approach that achieves a reader/writer lock (concurrent reading, exclusive writing).

public class UserData {
    private let myPropertyQueue = dispatch_queue_create("com.example.mygreatapp.property", DISPATCH_QUEUE_CONCURRENT)

    private var _myProperty = "" // Backing storage
    public var myProperty: String {
        get {
            var result = ""
            dispatch_sync(myPropertyQueue) {
                result = self._myProperty
            }
            return result
        }

        set {
            dispatch_barrier_async(myPropertyQueue) {
                self._myProperty = newValue
            }
        }
    }
}

All your concurrent properties can share a single queue, or you can give each property its own queue. It depends on how much contention you expect (a writer will lock the entire queue).

The "barrier" in "dispatch_barrier_async" means that it is the only thing allowed to run on the queue at that time, so all previous reads will have completed, and all future reads will be prevented until it completes. This scheme means that you can have as many concurrent readers as you want without starving writers (since writers will always be serviced), and writes are never blocking. Only reads are blocking, and only if there is actual contention. In the normal, uncontested case, this is extremely fast.
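For reference, the same reader/writer pattern in the contemporary DispatchQueue API might look like this (a sketch, not part of the original answer):

```swift
import Dispatch

public class UserData {
    // Concurrent queue used as a reader-writer lock.
    private let myPropertyQueue = DispatchQueue(
        label: "com.example.mygreatapp.property",
        attributes: .concurrent)

    private var _myProperty = "" // Backing storage

    public var myProperty: String {
        // Reads run concurrently with other reads.
        get { myPropertyQueue.sync { _myProperty } }
        // The barrier waits for in-flight reads and holds back new work
        // until the write completes.
        set { myPropertyQueue.async(flags: .barrier) { self._myProperty = newValue } }
    }
}
```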

Atomic property wrapper only works when declared as class, not struct

This question is answered in this PR: https://github.com/apple/swift-evolution/pull/1387

I think these are the lines that really explain it:

In Swift's formal memory access model, methods on a value type are considered to access the entire value, and so calling the wrappedValue getter formally reads the entire stored wrapper, while calling the setter of wrappedValue formally modifies the entire stored wrapper.

The wrapper's value will be loaded before the call to
wrappedValue.getter and written back after the call to
wrappedValue.setter. Therefore, synchronization within the wrapper
cannot provide atomic access to its own value.
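As a concrete illustration, a lock-based wrapper must be declared as a class (a reference type) so that every access goes through the one shared lock; here is a hypothetical sketch (the `Atomic` name and implementation are mine, not from the PR):

```swift
import Foundation

// Declared as a class so all accesses share a single lock. As a struct,
// each formal access would read or write the entire wrapper value,
// copying the lock along with it, which defeats the synchronization.
@propertyWrapper
final class Atomic<Value> {
    private let lock = NSLock()
    private var value: Value

    init(wrappedValue: Value) { self.value = wrappedValue }

    var wrappedValue: Value {
        get { lock.lock(); defer { lock.unlock() }; return value }
        set { lock.lock(); defer { lock.unlock() }; value = newValue }
    }
}
```

Note that even with the class, compound operations such as `count += 1` are still not atomic: the get and the set are each locked separately, so another thread can interleave between them.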

What is the Swift equivalent to Objective-C's @synchronized?

With the advent of Swift concurrency, we would use actors.

You can use tasks to break up your program into isolated, concurrent
pieces. Tasks are isolated from each other, which is what makes it
safe for them to run at the same time, but sometimes you need to share
some information between tasks. Actors let you safely share
information between concurrent code.

Like classes, actors are reference types, so the comparison of value
types and reference types in Classes Are Reference Types applies to
actors as well as classes. Unlike classes, actors allow only one task
to access their mutable state at a time, which makes it safe for code
in multiple tasks to interact with the same instance of an actor. For
example, here’s an actor that records temperatures:

actor TemperatureLogger {
    let label: String
    var measurements: [Int]
    private(set) var max: Int

    init(label: String, measurement: Int) {
        self.label = label
        self.measurements = [measurement]
        self.max = measurement
    }
}
</actor TemperatureLogger>

You introduce an actor with the actor keyword, followed by its definition in a pair of braces. The TemperatureLogger actor has properties that other code outside the actor can access, and restricts the max property so only code inside the actor can update the maximum value.
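One might extend the logger with a method that updates its state from inside the actor; a sketch (the actor definition is repeated so the example is self-contained, and code outside the actor must use `await` for such calls):

```swift
actor TemperatureLogger {
    let label: String
    var measurements: [Int]
    private(set) var max: Int

    init(label: String, measurement: Int) {
        self.label = label
        self.measurements = [measurement]
        self.max = measurement
    }

    // Inside the actor, mutation is synchronous; only one task at a
    // time can be running code in the actor's isolated context.
    func update(with measurement: Int) {
        measurements.append(measurement)
        if measurement > max {
            max = measurement
        }
    }
}
```

From outside the actor, you would write `await logger.update(with: 27)` and `await logger.max`.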

For more information, see WWDC video Protect mutable state with Swift actors.


For the sake of completeness, the historical alternatives include:

  • GCD serial queue: This is a simple pre-concurrency approach to ensure that only one thread at a time will interact with the shared resource.

  • Reader-writer pattern with concurrent GCD queue: In the reader-writer pattern, one uses a concurrent dispatch queue to perform synchronous reads (concurrent with other reads only, not with writes) and to perform writes asynchronously with a barrier (forcing writes not to be performed concurrently with anything else on that queue). This can offer a performance improvement over a simple GCD serial solution, but in practice the advantage is modest and comes at the cost of additional complexity (e.g., you have to be careful about thread-explosion scenarios). IMHO, I tend to avoid this pattern, either sticking with the simplicity of the serial-queue pattern or, when the performance difference is critical, using a completely different pattern.

  • Locks: In my Swift tests, lock-based synchronization tends to be substantially faster than either of the GCD approaches. Locks come in a few flavors:

    • NSLock is a nice, relatively efficient lock mechanism.
    • In those cases where performance is of paramount concern, I use “unfair locks”, but you must be careful when using them from Swift (see https://stackoverflow.com/a/66525671/1271826).
    • For the sake of completeness, there is also the recursive lock. IMHO, I would favor simple NSLock over NSRecursiveLock. Recursive locks are subject to abuse and often indicate code smell.
    • You might see references to “spin locks”. Many years ago, they used to be employed where performance was of paramount concern, but they are now deprecated in favor of unfair locks.
  • Technically, one can use semaphores for synchronization, but it tends to be the slowest of all the alternatives.
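To make the lock-based option concrete, here is a minimal NSLock-protected counter (a sketch; the `Counter` type is hypothetical):

```swift
import Foundation

final class Counter {
    private let lock = NSLock()
    private var count = 0

    // The lock makes the whole read-increment-return one atomic step.
    func increment() -> Int {
        lock.lock()
        defer { lock.unlock() }
        count += 1
        return count
    }
}
```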

I outline a few of my benchmark results here.

In short, nowadays I use actors for contemporary codebases, GCD serial queues for simple scenarios in non-async-await code, and locks in those rare cases where performance is essential.

And, needless to say, we often try to reduce the need for synchronization altogether. Where we can, we use value types, so that each thread gets its own copy. And where synchronization cannot be avoided, we minimize the number of synchronization points.
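This is why value types help: each copy is independent, so there is nothing shared to synchronize. A quick illustration with Swift's value-semantic arrays:

```swift
var original = [1, 2, 3]
var copy = original   // Arrays are value types: `copy` is an independent copy
copy.append(4)

// Mutating `copy` does not affect `original`.
print(original) // [1, 2, 3]
print(copy)     // [1, 2, 3, 4]
```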

Creating a threadsafe Array, the easy way?

The synchronization mechanism in your question, with a concurrent queue and judicious use of barriers, is known as the “reader-writer” pattern. In short, it offers concurrent synchronous reads and non-concurrent asynchronous writes. This is a fine synchronization mechanism. It is not the problem here.

But there are a few problems:

  1. In the attempt to pare back the implementation, this class has become very inefficient. Consider:

    class ThreadSafeArray<Element> {
        private var array: [Element]
        private let queue = DispatchQueue(label: "ThreadsafeArray.reader-writer", attributes: .concurrent)

        init(_ array: [Element] = []) {
            self.array = array
        }
    }

    extension ThreadSafeArray {
        var threadsafe: [Element] {
            get { queue.sync { array } }
            set { queue.async(flags: .barrier) { self.array = newValue } }
        }
    }

    let numbers = ThreadSafeArray([1, 2, 3])
    numbers.threadsafe[1] = 42 // !!!

    What that numbers.threadsafe[1] = 42 line is really doing is as follows:

    • Fetching the whole array;
    • Changing the second item in a copy of the array; and
    • Replacing the whole array with a copy of the array that was just created.

    That is obviously very inefficient.

  2. The intuitive solution is to add an efficient subscript operator in the implementation:

    extension ThreadSafeArray {
        typealias Index = Int

        subscript(index: Index) -> Element {
            get { queue.sync { array[index] } }
            set { queue.async(flags: .barrier) { self.array[index] = newValue } }
        }
    }

    Then you can do:

    numbers[1] = 42

    That will perform a synchronized update of the existing array “in place”, without needing to copy the array at all. In short, it is an efficient, thread-safe mechanism.

    As one adds more and more basic “array” functionality (especially mutating methods, such as adding and removing items), one ends up with an implementation not dissimilar to the original implementation you found online. This is why the article you referenced implemented all of those methods: it exposes array-like functionality while offering an efficient and (seemingly) thread-safe interface.

  3. While the above addresses the data races, there is a deeper problem in the code sample you found online, as illuminated by your thread-safety test.

    To illustrate this, let’s first flesh out our ThreadSafeArray to add last and append(_:), and make it printable:

    class ThreadSafeArray<Element> {
        private var array: [Element]
        private let queue = DispatchQueue(label: "ThreadsafeArray.reader-writer", attributes: .concurrent)

        init(_ array: [Element] = []) {
            self.array = array
        }
    }

    extension ThreadSafeArray {
        typealias Index = Int

        subscript(index: Index) -> Element {
            get { queue.sync { array[index] } }
            set { queue.async(flags: .barrier) { self.array[index] = newValue } }
        }

        var last: Element? {
            queue.sync { array.last }
        }

        func append(_ newElement: Element) {
            queue.async(flags: .barrier) {
                self.array.append(newElement)
            }
        }
    }

    extension ThreadSafeArray: CustomStringConvertible {
        var description: String {
            queue.sync { array.description }
        }
    }

    That implementation (a simplified version of the rendition found on that web site) looks OK, as it solves the data race and avoids unnecessary copying of the array. But it has its own problems. Consider this rendition of your thread-safety test:

    let numbers = ThreadSafeArray([0])

    DispatchQueue.concurrentPerform(iterations: 1_000) { _ in
        let lastValue = numbers.last! + 1
        numbers.append(lastValue)
    }

    print(numbers) // !!!

    The strict data race is solved, but the result will not be [0, 1, 2, ..., 1000]. The problem is these lines:

    let lastValue = numbers.last! + 1
    numbers.append(lastValue)

    That does a synchronized retrieval of last followed by a separate synchronized append. The problem is that another thread might slip in between these two synchronized calls and fetch the same last value! You need to wrap the whole “fetch last value, increment it, and append this new value” in a single, synchronized task.

    To solve this, we would often give the thread-safe object a method that would provide a way to perform multiple statements as a single, synchronized, task. E.g.:

    extension ThreadSafeArray {
        func synchronized(block: @escaping (inout [Element]) -> Void) {
            queue.async(flags: .barrier) { [self] in
                block(&array)
            }
        }
    }

    Then you can do:

    let numbers = ThreadSafeArray([0])

    DispatchQueue.concurrentPerform(iterations: 1_000) { _ in
        numbers.synchronized { array in
            let lastValue = array.last! + 1
            array.append(lastValue)
        }
    }

    print(numbers) // OK
  4. So let’s return to your intuition that the author’s class can be simplified. You are right that it can and should be simplified. But my rationale is slightly different from yours.

    The complexity of the implementation is not my concern. It actually is an interesting pedagogical exercise to understand barriers and the broader reader-writer pattern.

    My concern (per point 3, above) is that the author’s implementation lulls an application developer into a false sense of security provided by the low-level thread safety. As your tests demonstrate, a higher level of synchronization is almost always needed.

    In short, I would stick to a very basic implementation, one that exposes the appropriate high-level, thread-safe interface, rather than a method-by-method and property-by-property interface to the underlying array, which will almost always be insufficient. In fact, this desire for a high-level, thread-safe interface is a motivating idea behind a more modern thread-safety mechanism, namely actors in Swift concurrency.
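For example, the same high-level interface sketched as an actor (the `SafeNumbers` name and methods are hypothetical); the read-modify-write from the thread-safety test becomes a single isolated operation:

```swift
actor SafeNumbers {
    private var array: [Int]

    init(_ array: [Int] = []) {
        self.array = array
    }

    // The whole "read last, increment, append" runs as one isolated
    // operation; no other task can interleave in the middle.
    func appendNext() {
        let lastValue = (array.last ?? 0) + 1
        array.append(lastValue)
    }

    var values: [Int] { array }
}
```

Callers would write `await numbers.appendNext()`, and the compiler enforces that the actor's state is never touched without that isolation.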


