Is There Any Way of Locking an Object in Swift Like in C#


Hope this helps:

import Foundation

func lock(_ obj: AnyObject, blk: () -> Void) {
    objc_sync_enter(obj)
    defer { objc_sync_exit(obj) } // released even on early exit
    blk()
}

let syncObject = NSObject() // dedicated lock token
var pendingElements = 10

func foo() {
    var sum = 0

    for i in 0 ..< 10 {
        processElementAsync(i) { value in
            // Lock on a reference type: an Int is a value type and would be
            // bridged to a fresh object on each call, so it cannot serve as
            // the lock.
            lock(syncObject) {
                sum += value
                pendingElements -= 1

                if pendingElements == 0 {
                    print(sum)
                }
            }
        }
    }
}

Do I need to add thread locking to simple variables?

Yes, you do.

Imagine the situation where two threads are trying to add 1 to someValue. A thread does this by:

  1. Read someValue into a register.
  2. Add 1.
  3. Write someValue back.

If both threads do operation 1 before either does operation 3, you will get a different answer than if one thread does all three operations before the other thread does operation 1.
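In pre-concurrency Swift, the fix is to make the read-modify-write sequence a single critical section. A minimal sketch using NSLock (the counter and iteration counts are illustrative):

```swift
import Foundation

// Two concurrent workers each add 1 to a shared counter 100_000 times.
// The lock makes each read-add-write sequence indivisible, so the final
// value is exactly 200_000; without it, updates would be lost.
let lock = NSLock()
var counter = 0

DispatchQueue.concurrentPerform(iterations: 2) { _ in
    for _ in 0..<100_000 {
        lock.lock()
        counter += 1   // read, add 1, write back -- now atomic
        lock.unlock()
    }
}

print(counter) // 200000
```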

There are also more subtle issues, in that an optimising compiler might not write the modified value back from the register for some time - if at all. Also, modern CPUs have multiple cores each with its own cache. The CPU writing a value back to memory doesn't guarantee it gets to memory straight away. It may just get as far as the core's cache. You need what's called a memory barrier to ensure that everything gets neatly written back to main memory.

On the larger scale, you'll need locking to ensure consistency between the variables in your class. So, if state is meant to represent some property of someValue, e.g. whether it is odd or even, you'll need locking to ensure everybody always has a consistent view, i.e.

  1. Modify someValue.
  2. Test the new value.
  3. Set state accordingly.

The above three operations have to appear atomic; otherwise, if the object is examined after operation 1 but before operation 3, it will be in an inconsistent state.
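These three steps can be sketched as one critical section (the Tracker type and its names are illustrative, not from the question):

```swift
import Foundation

// All three steps -- modify, test, set state -- run under one lock,
// so no other thread can observe someValue and the derived state
// out of sync. Reads take the same lock for the same reason.
final class Tracker {
    private let lock = NSLock()
    private var someValue = 0
    private var isEven = true     // the derived "state"

    func add(_ delta: Int) {
        lock.lock()
        defer { lock.unlock() }
        someValue += delta              // 1. modify someValue
        isEven = someValue % 2 == 0     // 2. test the new value, 3. set state
    }

    var snapshot: (value: Int, isEven: Bool) {
        lock.lock()
        defer { lock.unlock() }
        return (someValue, isEven)      // both read under the same lock
    }
}
```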

What is the Swift equivalent to Objective-C's @synchronized?

With the advent of Swift concurrency, we would use actors.

You can use tasks to break up your program into isolated, concurrent
pieces. Tasks are isolated from each other, which is what makes it
safe for them to run at the same time, but sometimes you need to share
some information between tasks. Actors let you safely share
information between concurrent code.

Like classes, actors are reference types, so the comparison of value
types and reference types in Classes Are Reference Types applies to
actors as well as classes. Unlike classes, actors allow only one task
to access their mutable state at a time, which makes it safe for code
in multiple tasks to interact with the same instance of an actor. For
example, here’s an actor that records temperatures:

actor TemperatureLogger {
    let label: String
    var measurements: [Int]
    private(set) var max: Int

    init(label: String, measurement: Int) {
        self.label = label
        self.measurements = [measurement]
        self.max = measurement
    }
}

You introduce an actor with the actor keyword, followed by its definition in a pair of braces. The TemperatureLogger actor has properties that other code outside the actor can access, and restricts the max property so only code inside the actor can update the maximum value.
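To illustrate (restating the actor so the example is self-contained, and adding an update(with:) method that is not part of the listing above): because the method runs on the actor, only one task at a time can touch measurements or max, and outside callers must await.

```swift
import Foundation

actor TemperatureLogger {
    let label: String
    var measurements: [Int]
    private(set) var max: Int

    init(label: String, measurement: Int) {
        self.label = label
        self.measurements = [measurement]
        self.max = measurement
    }

    // Runs on the actor, so this mutation is serialized with all
    // other accesses to the actor's state.
    func update(with measurement: Int) {
        measurements.append(measurement)
        if measurement > max {
            max = measurement
        }
    }
}

// From outside the actor, calls and property reads must be awaited:
func report(on logger: TemperatureLogger) async {
    await logger.update(with: 25)
    print(await logger.max)
}
```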

For more information, see WWDC video Protect mutable state with Swift actors.


For the sake of completeness, the historical alternatives include:

  • GCD serial queue: This is a simple pre-concurrency approach to ensure that only one thread at a time will interact with the shared resource.

  • Reader-writer pattern with concurrent GCD queue: One uses a concurrent dispatch queue to perform synchronous, concurrent reads (concurrent with other reads only, not with writes) and asynchronous writes with a barrier (preventing writes from running concurrently with anything else on that queue). This can outperform a simple serial-queue solution, but in practice the advantage is modest and comes at the cost of additional complexity (e.g., you have to be careful about thread-explosion scenarios). IMHO, I tend to avoid this pattern, either sticking with the simplicity of the serial-queue pattern or, when the performance difference is critical, using a completely different pattern.

  • Locks: In my Swift tests, lock-based synchronization tends to be substantially faster than either of the GCD approaches. Locks come in a few flavors:

    • NSLock is a nice, relatively efficient lock mechanism.
    • In those cases where performance is of paramount concern, I use “unfair locks”, but you must be careful when using them from Swift (see https://stackoverflow.com/a/66525671/1271826).
    • For the sake of completeness, there is also the recursive lock. IMHO, I would favor simple NSLock over NSRecursiveLock. Recursive locks are subject to abuse and often indicate code smell.
    • You might see references to “spin locks”. Many years ago, they used to be employed where performance was of paramount concern, but they are now deprecated in favor of unfair locks.
  • Technically, one can use semaphores for synchronization, but it tends to be the slowest of all the alternatives.
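The serial-queue alternative above can be sketched as follows (the class name and queue label are illustrative):

```swift
import Foundation

// All reads and writes funnel through one serial queue, so only one
// thread at a time touches the underlying value.
final class SynchronizedCounter {
    private let queue = DispatchQueue(label: "com.example.counter") // hypothetical label
    private var value = 0

    func increment() {
        queue.sync { value += 1 }
    }

    var current: Int {
        queue.sync { value }
    }
}
```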

I outline a few of my benchmark results here.

In short, nowadays I use actors in contemporary codebases, GCD serial queues for simple scenarios in non-async-await code, and locks in those rare cases where performance is essential.

And, needless to say, we often try to reduce the number of synchronizations altogether. If we can, we often use value types, where each thread gets its own copy. And where synchronization cannot be avoided, we try to minimize the number of those synchronizations where possible.
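The value-type point can be shown in a few lines (the Point type is illustrative): each copy is independent, so no synchronization is needed between them.

```swift
// Each assignment of a struct makes an independent copy, so a thread
// (or task) mutating its own copy cannot race with anyone else's.
struct Point { var x = 0 }

var a = Point()
var b = a      // b is a copy, not a shared reference
b.x = 99

print(a.x)     // 0 -- mutating b never touched a
```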

Is it necessary to lock a C# list before adding elements to it if I do not intend to read from the list while elements are being added?

Yes, you have to lock. The Parallel.For will cause concurrent calls to Add().

On a side note:

//var Result = new List<Int32> ();
var Result = new List<Int32> (100000);
ParallelEnumerable.Range (1, 100000)

will make this a lot more efficient. Less growing of the list also means less contention on the lock.

How does a singleton property with a lock ensure thread safety?


Does Singleton.Instance.WriteLine("Hello!"); maintain the lock during the execution of the entire method of WriteLine?

No, the lock guards only the creation of your singleton. WriteLine executes unlocked (unless, of course, it obtains its own lock internally).

Is Console.WriteLine("Hello!") also completely thread safe like Singleton.Instance.WriteLine("Hello!")?

It is equally as safe or unsafe as Singleton.Instance, because the lock is not maintained outside of Instance's getter.

Anyway, I'm just confused how this makes the singleton thread safe

The lock makes the process of obtaining the instance of your singleton thread-safe. Making the methods of your singleton thread-safe is a separate task that does not depend on whether your object is a singleton. There is no simple, turn-key, one-size-fits-all solution for making a thread-unsafe object behave in a thread-safe way. You address it one method at a time.

C# Singleton with a Disposable object

I would suggest separating the resource handling from the actual usage. Assuming the resource requires disposal, this could look something like:

public class DisposableWrapper<T> where T : IDisposable
{
    private readonly Func<T> resourceFactory;
    private T resource;
    private bool constructed;
    private readonly object lockObj = new object();
    private int currentUsers = 0;

    public DisposableWrapper(Func<T> resourceFactory)
    {
        this.resourceFactory = resourceFactory;
    }

    public O Run<O>(Func<T, O> func)
    {
        lock (lockObj)
        {
            if (!constructed)
            {
                resource = resourceFactory();
                constructed = true;
            }
            currentUsers++;
        }

        try
        {
            return func(resource);
        }
        catch
        {
            // Note: swallows any exception from func and returns default.
            return default;
        }
        finally
        {
            Interlocked.Decrement(ref currentUsers);
        }
    }

    public void Run(Action<T> action)
    {
        lock (lockObj)
        {
            if (!constructed)
            {
                resource = resourceFactory();
                constructed = true;
            }
            currentUsers++;
        }

        try
        {
            action(resource);
        }
        finally
        {
            Interlocked.Decrement(ref currentUsers);
        }
    }

    public bool TryRelease()
    {
        lock (lockObj)
        {
            if (currentUsers == 0 && constructed)
            {
                constructed = false;
                resource.Dispose();
                resource = default;
                return true;
            }
            return false;
        }
    }
}

If the resource does not require disposal, I would suggest using Lazy<T> instead. Releasing the resource would then simply mean replacing the existing Lazy object with a new one and letting the old object be cleaned up by the garbage collector.

Lock on Dictionary's TryGetValue() - Performance issues

What you're trying to do here is simply not a supported scenario. The TryGetValue occurs outside of the lock, which means it is very possible for one thread to be writing to the dictionary while others simultaneously call TryGetValue. The only threading scenario inherently supported by Dictionary<TKey, TValue> is reads from multiple threads. Once you start reading and writing from multiple threads, all bets are off.

In order to make this safe you should do one of the following

  • Use a single lock for all read or write accesses to the Dictionary
  • Use a type like ConcurrentDictionary<TKey, TValue> which is designed for multi-threaded scenarios.

Is a lock required with a lazy initialization on a deeply immutable type?

That will work. Writing to references in C# is guaranteed to be atomic, as described in section 5.5 of the spec.
This is still probably not a good way to do it, because your code becomes more confusing to debug and read in exchange for a probably minor performance gain.

Jon Skeet has a great page on implementing singletons in C#.

The general advice about small optimizations like these is not to do them unless a profiler tells you this code is a hotspot. Also, you should be wary of writing code that cannot be fully understood by most programmers without checking the spec.

EDIT: As noted in the comments, even though you say you don't mind if 2 versions of your object get created, that situation is so counter-intuitive that this approach should never be used.


