Synchronize Properties in Swift 3 using GCD

But I think this snippet is not valid, because the internalQueue could be concurrent

But it isn't concurrent. Dispatch queues that you create are serial by default. That is the point of the technique (and the example).
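
For reference, here is a minimal sketch of that technique (the class and queue names are purely illustrative), where a private serial queue guards every read and write of the property:

import Foundation

// Illustrative example: a private serial queue guards all access to `_value`.
// DispatchQueue(label:) creates a *serial* queue by default, so reads and
// writes take turns and never overlap.
final class ThreadSafeCounter {
    private let internalQueue = DispatchQueue(label: "com.example.counter")
    private var _value = 0

    var value: Int {
        get { internalQueue.sync { _value } }
        set { internalQueue.sync { _value = newValue } }
    }

    // Compound operations must also go through the queue as a single unit.
    func increment() {
        internalQueue.sync { _value += 1 }
    }
}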

iOS GCD Sync and Async

Basically using sync from the main thread is always wrong, because now you are blocking the main thread. And that is exactly what you are doing! Only a background thread should sync onto another thread.

and there are no problems running a sync queue and a different async queue simultaneously

Yes, there are. Nothing can happen "simultaneously". Everything has to "take turns".

Or do custom sync queues block other queues (even the main queue)?

Yes. While Queue A is being called with sync and is sleeping, the calling thread (which in your code is the main thread) is blocked. You are not allowing things to "take turns". We cannot proceed to the Queue B calls because we are stuck in the Queue A calls.

You will also notice that the Queue B results arrive much faster than the Queue A results did. That is because Queue B is being called with async and is concurrent, so the whole second loop, when it runs, runs immediately, doing all the loops, and so all the results arrive together. This is even more obvious if you print Date().timeIntervalSince1970 together with your output.
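
Here is a minimal sketch of the behavior described above (the queue labels and sleep durations are purely illustrative); printing the timestamps makes the difference obvious:

import Foundation

let queueA = DispatchQueue(label: "com.example.A")                          // serial
let queueB = DispatchQueue(label: "com.example.B", attributes: .concurrent) // concurrent

// Queue A: called with sync from the current (main) thread. Each iteration
// blocks the caller for a full second, so the results trickle in one by one.
for i in 1...3 {
    queueA.sync {
        sleep(1)
        print("A \(i) at \(Date().timeIntervalSince1970)")
    }
}

// Queue B: called with async on a concurrent queue. All three work items
// start immediately, sleep in parallel, and their results arrive together.
for i in 1...3 {
    queueB.async {
        sleep(1)
        print("B \(i) at \(Date().timeIntervalSince1970)")
    }
}

sleep(2) // keep the process alive long enough to see Queue B's output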

What is the Swift equivalent to Objective-C's @synchronized?

With the advent of Swift concurrency, we would use actors.

You can use tasks to break up your program into isolated, concurrent
pieces. Tasks are isolated from each other, which is what makes it
safe for them to run at the same time, but sometimes you need to share
some information between tasks. Actors let you safely share
information between concurrent code.

Like classes, actors are reference types, so the comparison of value
types and reference types in Classes Are Reference Types applies to
actors as well as classes. Unlike classes, actors allow only one task
to access their mutable state at a time, which makes it safe for code
in multiple tasks to interact with the same instance of an actor. For
example, here’s an actor that records temperatures:

actor TemperatureLogger {
    let label: String
    var measurements: [Int]
    private(set) var max: Int

    init(label: String, measurement: Int) {
        self.label = label
        self.measurements = [measurement]
        self.max = measurement
    }
}

You introduce an actor with the actor keyword, followed by its definition in a pair of braces. The TemperatureLogger actor has properties that other code outside the actor can access, and restricts the max property so only code inside the actor can update the maximum value.

For more information, see WWDC video Protect mutable state with Swift actors.
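
To round out the quoted example, here is a hedged sketch of how calling code interacts with the actor; the update(with:) method is not part of the excerpt above and is added here purely for illustration:

extension TemperatureLogger {
    // Because this method runs on the actor, the read-modify-write of
    // `measurements` and `max` cannot interleave with other tasks.
    func update(with measurement: Int) {
        measurements.append(measurement)
        if measurement > max {
            max = measurement
        }
    }
}

// Code outside the actor must use `await`, because it may have to wait its turn.
let logger = TemperatureLogger(label: "Outdoors", measurement: 25)
Task {
    await logger.update(with: 27)
    print(await logger.max) // 27, once the update has run
}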


For the sake of completeness, the historical alternatives include:

  • GCD serial queue: This is a simple pre-concurrency approach to ensure that only one thread at a time will interact with the shared resource.

  • Reader-writer pattern with concurrent GCD queue: In reader-writer patterns, one uses a concurrent dispatch queue to perform reads synchronously and concurrently (concurrent with other reads only, never with writes), while performing writes asynchronously with a barrier (so a write never runs concurrently with anything else on that queue); a sketch appears after this list. This can offer a performance improvement over a simple GCD serial solution, but in practice the advantage is modest and comes at the cost of additional complexity (e.g., you have to be careful about thread-explosion scenarios). IMHO, I tend to avoid this pattern, either sticking with the simplicity of the serial queue pattern or, when the performance difference is critical, using a completely different pattern.

  • Locks: In my Swift tests, lock-based synchronization tends to be substantially faster than either of the GCD approaches. Locks come in a few flavors:

    • NSLock is a nice, relatively efficient lock mechanism.
    • In those cases where performance is of paramount concern, I use “unfair locks”, but you must be careful when using them from Swift (see https://stackoverflow.com/a/66525671/1271826).
    • For the sake of completeness, there is also the recursive lock. IMHO, I would favor simple NSLock over NSRecursiveLock. Recursive locks are subject to abuse and often indicate code smell.
    • You might see references to “spin locks”. Many years ago, they used to be employed where performance was of paramount concern, but they are now deprecated in favor of unfair locks.
  • Technically, one can use semaphores for synchronization, but it tends to be the slowest of all the alternatives.
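
As a concrete illustration of the reader-writer pattern mentioned above, here is a minimal sketch (the type and queue label are hypothetical): reads are synchronous and may overlap with other reads, while writes use an asynchronous barrier so they run exclusively:

import Foundation

final class SynchronizedDictionary<Key: Hashable, Value> {
    private var storage: [Key: Value] = [:]
    private let queue = DispatchQueue(label: "com.example.rw", attributes: .concurrent)

    subscript(key: Key) -> Value? {
        get {
            // Reads: synchronous, and free to run concurrently with other reads.
            queue.sync { storage[key] }
        }
        set {
            // Writes: asynchronous with a barrier, so each write waits for
            // in-flight reads to finish and blocks new reads until it is done.
            queue.async(flags: .barrier) {
                self.storage[key] = newValue
            }
        }
    }
}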

I outline a few of my benchmark results here.

In short, nowadays I use actors for contemporary codebases, GCD serial queues for simple scenarios in non-async-await code, and locks in those rare cases where performance is essential.

And, needless to say, we often try to reduce the number of synchronizations altogether. If we can, we often use value types, where each thread gets its own copy. And where synchronization cannot be avoided, we try to minimize the number of those synchronizations where possible.

Puzzled by when copy of vars to func params happens when using GCD

Other than making the local copy ..., is there a preferred method of handing this?

The simplest way to give that other thread its own local copy is through a capture list:

queue.async(group: group) { [array] in
    checkSquares(array)
}

@synchronized block versus GCD dispatch_async()

Although the functional difference might not matter much to you, it's what you'd expect: if you use @synchronized then the thread you're on is blocked until it can get exclusive execution. If you dispatch to a serial dispatch queue asynchronously then the calling thread can get on with other things, and whatever it is you're actually doing will always occur on the same, known queue.

So they're equivalent for ensuring that a third resource is used from only one queue at a time.

Dispatching could be a better idea if, say, you had a resource that is accessed by the user interface from the main queue and you wanted to mutate it. Then your user interface code doesn't need to use @synchronized explicitly, hiding the complexity of your threading scheme within the object quite naturally. Dispatching will also be a better idea if you've got a central actor that can trigger several of these changes on several different actors; that'll allow them to operate concurrently.

Synchronising is more compact and a lot easier to step debug. If what you're doing tends to be two or three lines and you'd need to dispatch it synchronously anyway then it feels like going to the effort of creating a queue isn't worth it — especially when you consider the implicit costs of creating a block and moving it over onto the heap.
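
In Swift terms (where a lock such as NSLock plays a role similar to Objective-C's @synchronized), the trade-off looks roughly like this; the shared array, the queue label, and the function names are purely illustrative:

import Foundation

var sharedResource: [String] = []
let lock = NSLock()
let resourceQueue = DispatchQueue(label: "com.example.resource") // serial

// Lock-based: the calling thread blocks until it has exclusive access,
// then does the work itself.
func appendWithLock(_ item: String) {
    lock.lock()
    sharedResource.append(item)
    lock.unlock()
}

// Queue-based: the caller enqueues the mutation and returns immediately;
// the work always happens on the same, known serial queue.
func appendWithQueue(_ item: String) {
    resourceQueue.async {
        sharedResource.append(item)
    }
}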

Do we need to declare a property atomic if we use GCD?

Using atomic is one way to synchronize a property being used from multiple threads. But there are many mechanisms for synchronizing access from multiple threads, and atomic is one with fairly limited utility. I'd suggest you refer to the Synchronization chapter of the Threading Programming Guide for a fuller discussion of alternatives (and even that fails to discuss other contemporary patterns such as GCD serial queues and reader-writer pattern with a custom, concurrent queue).

Bottom line, atomic is, by itself, neither necessary nor sufficient to ensure thread safety. In general, it has some limited utility when dealing with some simple, fundamental data type (Booleans, NSInteger) but is inadequate when dealing with more complicated logic or when dealing with mutable objects.

In short, do not assume that you should use atomic whenever you use GCD. In fact, if you use GCD, that generally obviates the need for atomic, which would only add unnecessary overhead and adversely impact performance when combined with GCD. So, if you have some property being accessed from multiple threads, you should synchronize it, but the choice of which synchronization technique to employ is a function of the specific details of the particular situation, and GCD is often a more performant and more complete solution.

Difference between dispatching to a queue with `sync` and using a work item with a `.wait` flag?

.wait is not a flag in DispatchWorkItemFlags, and that is why
your code

myQueue.async(flags: .wait) { sleep(1); print("wait") }

does not compile.

wait() is a method of DispatchWorkItem and just a wrapper for
dispatch_block_wait().

/*!
* @function dispatch_block_wait
*
* @abstract
* Wait synchronously until execution of the specified dispatch block object has
* completed or until the specified timeout has elapsed.

Simple example:

let myQueue = DispatchQueue(label: "my.queue", attributes: .concurrent)
let workItem = DispatchWorkItem {
    sleep(1)
    print("done")
}
myQueue.async(execute: workItem)
print("before waiting")
workItem.wait()
print("after waiting")

dispatchMain()
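
For contrast, the blocking behavior the question was presumably reaching for can also be had directly with sync; a separate snippet, reusing the same illustrative myQueue:

// The calling thread waits until the closure has finished executing on the queue.
myQueue.sync {
    sleep(1)
    print("done synchronously")
}
print("after sync")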

dispatch_barrier_async equivalent in Swift 3

The async() method has a flags parameter which accepts a .barrier
option:

func subscribe(subscriber: DaoDelegate) {
    (self.subscribers.q).async(flags: .barrier) {
        //...
    }
}
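
Note that the barrier only has its special effect if self.subscribers.q is a queue you created with the .concurrent attribute (on a serial queue everything is exclusive anyway, and on the global queues the flag is ignored). A self-contained sketch under that assumption, with an illustrative String array standing in for the subscriber list:

import Foundation

let q = DispatchQueue(label: "com.example.subscribers", attributes: .concurrent)
var subscribers: [String] = []   // illustrative stand-in for [DaoDelegate]

// Writes use a barrier so they run exclusively on the concurrent queue.
func subscribe(_ subscriber: String) {
    q.async(flags: .barrier) {
        subscribers.append(subscriber)
    }
}

// Reads are plain sync calls and may overlap with one another.
func currentSubscribers() -> [String] {
    q.sync { subscribers }
}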

