Is DispatchSemaphore a Good Replacement for NSLock?

Is DispatchSemaphore a good replacement for NSLock?

Yes, they serve the same function: both can be used to deal with the producer-consumer problem.

A semaphore can allow more than one thread to access a shared resource if it is configured accordingly (i.e., initialized with a count greater than one). For example, you can use it to coordinate blocks executing on the same concurrent dispatch queue:

let semaphore = DispatchSemaphore(value: 1)
let queue = DispatchQueue(label: "work", attributes: .concurrent)

queue.async {
    semaphore.wait()
    // do things with the shared resource...
    semaphore.signal()
}

Actually, the same applies to a lock, if you only want one thread to touch the resource at a time while the surrounding work runs concurrently.

I found this to be helpful: https://priteshrnandgaonkar.github.io/concurrency-with-swift-3/

Mutex alternatives in Swift

As people commented (including me), there are several ways to achieve this kind of lock, but I think a dispatch semaphore is better than the others because it appears to have the least overhead. As noted in Apple's documentation ("Replacing Semaphore Code"), it does not drop down to kernel space unless the semaphore is already locked (i.e., its count is zero), which is the only case where the code has to go into the kernel to switch the thread. I think the semaphore is non-zero most of the time (which is, of course, an app-specific matter), so we can avoid a lot of overhead.
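
To illustrate, a binary DispatchSemaphore can be wrapped as a simple mutex. This is a minimal sketch; the SemaphoreLock name is just for illustration:

import Foundation

// A minimal sketch of a DispatchSemaphore used as a mutex.
// SemaphoreLock is a hypothetical name used only for illustration.
final class SemaphoreLock {
    private let semaphore = DispatchSemaphore(value: 1)   // binary semaphore

    func withLock<T>(_ body: () throws -> T) rethrows -> T {
        semaphore.wait()              // stays in user space while uncontended
        defer { semaphore.signal() }
        return try body()
    }
}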

One more comment on dispatch semaphores, covering the opposite scenario to the above. If your threads have different execution priorities, and the higher-priority threads have to hold the semaphore for a long time, a dispatch semaphore may not be the right solution. This is because there is no "queue" among the waiting threads. What happens in this case is that the higher-priority threads acquire and lock the semaphore most of the time, while the lower-priority threads can lock it only occasionally and thus spend most of their time just waiting. If this behavior is not acceptable for your application, you should consider a dispatch queue instead.

Swift thread-safe array

Your problem is that unshift calls count. unshift is already holding the semaphore, but the first thing that count does is call wait, which causes a deadlock. You have the same problem in popLast.

Since you already have exclusive access to the array you can simply use its isEmpty property.

public func unshift() -> T? {
    var firstEl: T? = nil
    wait(); defer { signal() }
    if !dataArray.isEmpty {
        firstEl = dataArray.removeFirst()
    }
    return firstEl
}

public func pop() -> T? {
    var lastEl: T? = nil
    wait(); defer { signal() }
    if !dataArray.isEmpty {
        lastEl = dataArray.popLast()
    }
    return lastEl
}

You could also replace your DispatchSemaphore with an NSRecursiveLock, since you don't need the counting behaviour of a semaphore. NSRecursiveLock can be locked multiple times by the same thread without causing a deadlock.
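
For example, a lock-based variant might look like the following. This is a minimal sketch; the SynchronizedArray name and its stored dataArray are assumptions for illustration, not your actual type:

import Foundation

// A minimal sketch using NSRecursiveLock instead of a semaphore.
// SynchronizedArray is a hypothetical name used only for illustration.
public class SynchronizedArray<T> {
    private let lock = NSRecursiveLock()
    private var dataArray: [T] = []

    public var count: Int {
        lock.lock(); defer { lock.unlock() }
        return dataArray.count
    }

    public func unshift() -> T? {
        lock.lock(); defer { lock.unlock() }
        // Calling count here is safe: NSRecursiveLock allows the same
        // thread to lock it again without deadlocking.
        return count > 0 ? dataArray.removeFirst() : nil
    }
}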

Swift: Safe Thread using NSLock or Concurrent Queue

A concurrent queue with a barrier flag is more efficient than using NSLock in this case.

Both block other operations while a setter is running; the difference appears when you call multiple getters concurrently (i.e., in parallel):

  • NSLock: only allows one getter to run at a time.
  • Concurrent queue with barrier flag: allows multiple getters to run at a time.
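
As an illustration, here is a minimal sketch of that reader-writer setup; the ThreadSafeValue name and queue label are assumptions for illustration:

import Foundation

// A minimal sketch of the reader-writer pattern with a concurrent queue.
// ThreadSafeValue is a hypothetical name used only for illustration.
final class ThreadSafeValue<T> {
    private let queue = DispatchQueue(label: "value.sync", attributes: .concurrent)
    private var _value: T

    init(_ value: T) { _value = value }

    var value: T {
        // Reads run concurrently with each other.
        get { queue.sync { _value } }
        // Writes run alone: the barrier waits for in-flight reads and
        // blocks new work until the write finishes.
        set { queue.async(flags: .barrier) { self._value = newValue } }
    }
}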

How to Implement Semaphores in iOS Application?

Yes, it is possible.
There are quite a few synchronization tools available:

  • @synchronized
  • NSLock
  • NSCondition
  • NSConditionLock
  • GCD semaphores
  • pthread locks
  • ...

I'd suggest reading "Threading Programming Guide" and asking something more specific.
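
For instance, here is a minimal sketch of a GCD semaphore used in its counting role, limiting how many blocks run at once; the limit of 3 and the work are arbitrary examples:

import Foundation

// A minimal sketch: a counting semaphore allowing at most 3 blocks
// to run concurrently. The limit and the work are placeholders.
let semaphore = DispatchSemaphore(value: 3)
let queue = DispatchQueue.global()

for item in 0..<10 {
    queue.async {
        semaphore.wait()              // blocks once 3 items are in flight
        defer { semaphore.signal() }
        print("processing item \(item)")
    }
}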

What is the Swift equivalent to Objective-C's @synchronized?

With the advent of Swift concurrency, we would use actors.

You can use tasks to break up your program into isolated, concurrent pieces. Tasks are isolated from each other, which is what makes it safe for them to run at the same time, but sometimes you need to share some information between tasks. Actors let you safely share information between concurrent code.

Like classes, actors are reference types, so the comparison of value types and reference types in Classes Are Reference Types applies to actors as well as classes. Unlike classes, actors allow only one task to access their mutable state at a time, which makes it safe for code in multiple tasks to interact with the same instance of an actor. For example, here’s an actor that records temperatures:

actor TemperatureLogger {
    let label: String
    var measurements: [Int]
    private(set) var max: Int

    init(label: String, measurement: Int) {
        self.label = label
        self.measurements = [measurement]
        self.max = measurement
    }
}

You introduce an actor with the actor keyword, followed by its definition in a pair of braces. The TemperatureLogger actor has properties that other code outside the actor can access, and restricts the max property so only code inside the actor can update the maximum value.
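
For illustration, interacting with the actor from outside looks like this; the update(with:) method is an added sketch, not part of the quoted excerpt:

extension TemperatureLogger {
    // Inside the actor, mutating state is synchronous.
    func update(with measurement: Int) {
        measurements.append(measurement)
        if measurement > max {
            max = measurement
        }
    }
}

// Outside the actor, access is asynchronous and must be awaited.
let logger = TemperatureLogger(label: "Outdoors", measurement: 25)
Task {
    await logger.update(with: 27)
    print(await logger.max)
}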

For more information, see WWDC video Protect mutable state with Swift actors.


For the sake of completeness, the historical alternatives include:

  • GCD serial queue: This is a simple pre-concurrency approach to ensure that only one thread at a time will interact with the shared resource.

  • Reader-writer pattern with concurrent GCD queue: In the reader-writer pattern, one uses a concurrent dispatch queue to perform reads synchronously and concurrently (concurrently with other reads only, not with writes), while performing writes asynchronously with a barrier (which prevents the write from running concurrently with anything else on that queue). This can offer a performance improvement over the simple GCD serial solution, but in practice the advantage is modest and comes at the cost of additional complexity (e.g., you have to be careful about thread-explosion scenarios). IMHO, I tend to avoid this pattern, either sticking with the simplicity of the serial-queue pattern or, when the performance difference is critical, using a completely different pattern.

  • Locks: In my Swift tests, lock-based synchronization tends to be substantially faster than either of the GCD approaches (see the sketch after this list). Locks come in a few flavors:

    • NSLock is a nice, relatively efficient lock mechanism.
    • In those cases where performance is of paramount concern, I use “unfair locks”, but you must be careful when using them from Swift (see https://stackoverflow.com/a/66525671/1271826).
    • For the sake of completeness, there is also the recursive lock. IMHO, I would favor simple NSLock over NSRecursiveLock. Recursive locks are subject to abuse and often indicate code smell.
    • You might see references to “spin locks”. Many years ago, they used to be employed where performance was of paramount concern, but they are now deprecated in favor of unfair locks.
  • Technically, one can use semaphores for synchronization, but it tends to be the slowest of all the alternatives.
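
As referenced above, here is a minimal sketch of lock-based synchronization with NSLock; the Counter type is a hypothetical example:

import Foundation

// A minimal sketch of lock-based synchronization with NSLock.
// Counter is a hypothetical type used only for illustration.
final class Counter {
    private let lock = NSLock()
    private var count = 0

    func increment() {
        lock.lock()
        defer { lock.unlock() }
        count += 1
    }

    var value: Int {
        lock.lock()
        defer { lock.unlock() }
        return count
    }
}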

I outline a few of my benchmark results here.

In short, nowadays I use actors in contemporary codebases, GCD serial queues for simple scenarios in non-async-await code, and locks in those rare cases where performance is essential.

And, needless to say, we often try to reduce the need for synchronization altogether. Where we can, we use value types, so that each thread gets its own copy; and where synchronization cannot be avoided, we try to minimize how often it is needed.

What happens when you dispatch a task asynchronously inside a sync queue in Swift?

You can’t call serialQueue.sync from the block that is being executed by the serialQueue.

TL;DR:

Here is what I think is likely happening:

  1. You schedule a block A via serialQueue.async from notifyDelegate.
  2. While block A is executing, your delegate calls changeState, incorrectly assuming that the current thread is not the serialQueue’s thread.
  3. From the changeState method, while on the serialQueue’s call stack, you synchronously schedule another block B via serialQueue.sync. Block B can never start, because the serial queue is still executing block A, and block A is blocked waiting for block B to finish (see the sketch below).
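
Here is a minimal sketch of that deadlock; the function names mirror the description above, but everything else is an assumption for illustration:

import Foundation

// A minimal sketch of the deadlock described above; the function names
// mirror the question, everything else is hypothetical.
let serialQueue = DispatchQueue(label: "state.sync")

func changeState() {
    serialQueue.sync {             // block B: never starts, so sync never returns
        // mutate state...
    }
}

func notifyDelegate() {
    serialQueue.async {            // block A runs on serialQueue
        // The delegate callback, unaware it is already on serialQueue,
        // calls changeState(), which deadlocks on serialQueue.sync.
        changeState()
    }
}

notifyDelegate()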

Ways to avoid this situation:

  1. Never invoke public callbacks on the private serial queue that you use for synchronization; or

  2. Don’t use a private queue for synchronization at all; use os_unfair_lock, NSLock, or NSRecursiveLock instead. It might also improve performance.

