Is a Reference Assignment Thread-Safe?

Is a reference assignment thread-safe?

Yes, reference updates are guaranteed to be atomic by the C# language specification:

5.5 Atomicity of variable references

Reads and writes of the following data types are atomic: bool, char, byte, sbyte, short, ushort, uint, int, float, and reference types. In addition, reads and writes of enum types with an underlying type in the previous list are also atomic. Reads and writes of other types, including long, ulong, double, and decimal, as well as user-defined types, are not guaranteed to be atomic.

However, inside a tight loop you might get bitten by register caching. That's unlikely in this case unless your method call is inlined (which might happen). Personally, I'd add the lock to keep things simple and predictable, though volatile can help here too. And note that full thread safety involves more than just atomicity.

In the case of a cache, I'd be looking at Interlocked.CompareExchange, personally: try to update, but if it fails, redo the work from scratch (starting from the new value) and try again.
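The same retry pattern can be sketched in Java with AtomicReference.compareAndSet, the conceptual counterpart of Interlocked.CompareExchange; the class and method names here are illustrative, not from the original question:

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.List;

public class CacheCas {
    // Hypothetical immutable cache snapshot; rebuilt from scratch on each attempt.
    private final AtomicReference<List<String>> cache =
            new AtomicReference<>(List.of());

    public void add(String item) {
        while (true) {
            List<String> current = cache.get();
            // Redo the work starting from the freshly observed value...
            List<String> updated = new java.util.ArrayList<>(current);
            updated.add(item);
            // ...and publish only if nobody else got there first.
            if (cache.compareAndSet(current, List.copyOf(updated))) {
                return;
            }
            // CAS failed: another thread updated the cache; loop and retry.
        }
    }

    public List<String> snapshot() {
        return cache.get();
    }
}
```

Because each attempt rebuilds its result from the value it just read, a failed CAS simply means another thread won the race, and the loop retries against the new value.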

Is reference assignment atomic in Swift 5?

In short, no, property accessors are not atomic. See the WWDC 2016 video Concurrent Programming With GCD in Swift 3, which discusses the absence of native atomic/synchronization facilities in the language. (It is a GCD talk, so when they subsequently dive into synchronization methods they focus on GCD techniques, but any synchronization method is fine.) Apple uses a variety of different synchronization methods in their own code; e.g., in ThreadSafeArrayStore they use an NSLock.


If synchronizing with locks, I might suggest an extension like the following:

extension NSLocking {
    func synchronized<T>(block: () throws -> T) rethrows -> T {
        lock()
        defer { unlock() }
        return try block()
    }
}

Apple uses this pattern in their own code, though they happen to call it withLock rather than synchronized. But the pattern is the same.

Then you can do:

public class B {
    private let lock = NSLock()
    private var a: A? // private, to prevent unsynchronized direct access to this property

    public func setA() {
        lock.synchronized {
            a = A(0)
        }
    }

    public func hasA() -> Bool {
        lock.synchronized {
            a != nil
        }
    }

    public func resetA() {
        lock.synchronized {
            guard a != nil else { return }
            a = A(1)
        }
    }
}

Or perhaps

public class B {
    private let lock = NSLock()
    private var _a: A?

    public var a: A? {
        get { lock.synchronized { _a } }
        set { lock.synchronized { _a = newValue } }
    }

    public var hasA: Bool {
        lock.synchronized { _a != nil }
    }

    public func resetA() {
        lock.synchronized {
            guard _a != nil else { return }
            _a = A(1)
        }
    }
}

I confess to some uneasiness about exposing hasA, because it practically invites the application developer to write code like:

if !b.hasA {
    b.a = ...
}

That is fine in terms of preventing simultaneous access to memory, but it introduces a logic race if two threads do it at the same time: both can pass the !hasA test, both replace the value, and the last one wins.

Instead, I might write a method to do this for us:

public class B {
    private let lock = NSLock()
    private var _a: A?

    public var a: A? {
        get { lock.synchronized { _a } }
        set { lock.synchronized { _a = newValue } }
    }

    public func withA<T>(block: (inout A?) throws -> T) rethrows -> T {
        try lock.synchronized {
            try block(&_a)
        }
    }
}

That way you can do:

b.withA { a in
    if a == nil {
        a = ...
    }
}

That is thread-safe, because we let the caller wrap the whole logical task (checking whether a is nil and, if so, initializing it) in one single synchronized step. It is a nice generalized solution to the problem, and it prevents logic races.


Now the above example is so abstract that it is hard to follow. So let's consider a practical example, a variation on Apple’s ThreadSafeArrayStore:

public class ThreadSafeArrayStore<Value> {
    private var underlying: [Value]
    private let lock = NSLock()

    public init(_ seed: [Value] = []) {
        underlying = seed
    }

    public subscript(index: Int) -> Value {
        get { lock.synchronized { underlying[index] } }
        set { lock.synchronized { underlying[index] = newValue } }
    }

    public func get() -> [Value] {
        lock.synchronized {
            underlying
        }
    }

    public func clear() {
        lock.synchronized {
            underlying = []
        }
    }

    public func append(_ item: Value) {
        lock.synchronized {
            underlying.append(item)
        }
    }

    public var count: Int {
        lock.synchronized {
            underlying.count
        }
    }

    public var isEmpty: Bool {
        lock.synchronized {
            underlying.isEmpty
        }
    }

    public func map<NewValue>(_ transform: (Value) throws -> NewValue) rethrows -> [NewValue] {
        try lock.synchronized {
            try underlying.map(transform)
        }
    }

    public func compactMap<NewValue>(_ transform: (Value) throws -> NewValue?) rethrows -> [NewValue] {
        try lock.synchronized {
            try underlying.compactMap(transform)
        }
    }
}

Here we have a synchronized array, where we define an interface to interact with the underlying array in a thread-safe manner.


Or, if you want an even more trivial example, consider a thread-safe object that keeps track of the tallest item seen so far. We would not have a hasValue boolean; instead we would incorporate that check right into our synchronized updateIfTaller method:

public class Tallest {
    private var _height: Float?
    private let lock = NSLock()

    var height: Float? {
        lock.synchronized { _height }
    }

    func updateIfTaller(_ candidate: Float) {
        lock.synchronized {
            guard let tallest = _height else {
                _height = candidate
                return
            }

            if candidate > tallest {
                _height = candidate
            }
        }
    }
}

Those are just a few examples, but hopefully they illustrate the idea.

Maintain reference then assign new object thread safe?

It's not thread-safe. There is no guarantee when, if ever, other threads will see the change to the static variable res.

You could change it to:

static volatile int[] res = new int[10];

And then other threads will be guaranteed to pick it up the next time they use the res variable.

In this particular case where you are only resetting to zero and have no dependency on the previous value of res, this is probably good enough.

In cases where you do depend on the previous value (or other shared variables) you'll need to implement further synchronization between threads.

However one warning: other threads may still be manipulating values in your "copy" variable as they may have retrieved the reference and held on to it.
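The publish-and-replace pattern described above can be sketched as follows; the class name and reset method are illustrative, not from the original question:

```java
public class SharedResults {
    // volatile guarantees other threads see the new array reference promptly.
    static volatile int[] res = new int[10];

    // Reset by swapping in a fresh array rather than zeroing in place:
    // threads still holding the old reference keep seeing (and may keep
    // mutating) the old copy, exactly as the warning above describes.
    static int[] reset() {
        int[] old = res;
        res = new int[10];
        return old;   // the "copy" the caller wanted to keep
    }
}
```

Note that volatile here only makes the reference swap visible; it does nothing to coordinate threads that are still writing into the old array through a previously read reference.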

Why is reference assignment atomic in Java?

First of all, reference assignment is atomic because the specification says so. Besides that, there is no obstacle for JVM implementors to fulfill this constraint, as 64 Bit references are usually only used on 64 Bit architectures, where atomic 64 Bit assignment comes for free.

Your main confusion stems from the assumption that the additional “Atomic References” feature means exactly that, due to its name. But the AtomicReference class offers far more, as it encapsulates a volatile reference, which has stronger memory visibility guarantees in a multi-threaded execution.

Having an atomic reference update does not necessarily imply that a thread reading the reference will also see consistent values regarding the fields of the object reachable through that reference. All it guarantees is that you will read either the null reference or a valid reference to an existing object that actually was stored by some thread. If you want more guarantees, you need constructs like synchronization, volatile references, or an AtomicReference.

AtomicReference also offers atomic update operations like compareAndSet or getAndSet. These are not possible with ordinary reference variables using built-in language constructs (but only with special classes like AtomicReferenceFieldUpdater or VarHandle).
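A small illustration of those extra operations; the values are arbitrary, and the point is that each call is a single indivisible step:

```java
import java.util.concurrent.atomic.AtomicReference;

public class AtomicDemo {
    public static String demo() {
        AtomicReference<String> ref = new AtomicReference<>("old");

        // compareAndSet: succeeds only if the current value is the expected one.
        boolean swapped = ref.compareAndSet("old", "new");   // succeeds
        boolean failed  = ref.compareAndSet("old", "other"); // fails: value is now "new"

        // getAndSet: atomically replace the value and return the previous one.
        String previous = ref.getAndSet("final");

        return swapped + "," + failed + "," + previous + "," + ref.get();
    }
}
```

Doing the same with a plain reference field would require an explicit lock, because the read and the conditional write would otherwise be two separate operations.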

reference assignment is atomic so why is Interlocked.Exchange(ref Object, Object) needed?

There are numerous questions here. Considering them one at a time:

reference assignment is atomic so why is Interlocked.Exchange(ref Object, Object) needed?

Reference assignment is atomic, but Interlocked.Exchange does not do only a reference assignment. It reads the current value of a variable, stashes away the old value, and assigns the new value to the variable, all as a single atomic operation.
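The distinction carries over directly to Java, used here only as an illustration: AtomicReference.getAndSet behaves like Interlocked.Exchange, whereas a separate read followed by a write is two operations with a window in between:

```java
import java.util.concurrent.atomic.AtomicReference;

public class ExchangeDemo {
    // NOT atomic as a whole: another thread can update ref between
    // get() and set(), and that update would be silently lost.
    static <T> T brokenExchange(AtomicReference<T> ref, T newValue) {
        T old = ref.get();   // step 1: read
        ref.set(newValue);   // step 2: write
        return old;
    }

    // Atomic: read-and-replace as one indivisible operation.
    static <T> T exchange(AtomicReference<T> ref, T newValue) {
        return ref.getAndSet(newValue);
    }
}
```

Each individual read and write in brokenExchange is itself atomic; it is the combination that is not, which is exactly why the Interlocked (or Atomic*) primitives exist.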

my colleague said that on some platforms it's not guaranteed that reference assignment is atomic. Was my colleague correct?

No. Reference assignment is guaranteed to be atomic on all .NET platforms.

My colleague is reasoning from false premises. Does that mean that their conclusions are incorrect?

Not necessarily. Your colleague could be giving you good advice for bad reasons. Perhaps there is some other reason why you ought to be using Interlocked.Exchange. Lock-free programming is insanely difficult and the moment you depart from well-established practices espoused by experts in the field, you are off in the weeds and risking the worst kind of race conditions. I am neither an expert in this field nor an expert on your code, so I cannot make a judgement one way or the other.

Passing a volatile field to it produces the warning "a reference to a volatile field will not be treated as volatile". What should I think about this?

You should understand why this is a problem in general. That will lead to an understanding of why the warning is unimportant in this particular case.

The reason the compiler gives this warning is that marking a field as volatile means: "this field is going to be updated on multiple threads, so do not generate any code that caches values of this field, and make sure that any reads or writes of this field are not 'moved forwards and backwards in time' via processor cache inconsistencies."

(I assume that you understand all that already. If you do not have a detailed understanding of the meaning of volatile and how it impacts processor cache semantics then you don't understand how it works and should not be using volatile. Lock-free programs are very difficult to get right; make sure that your program is right because you understand how it works, not right by accident.)

Now suppose you create a variable that is an alias of a volatile field by passing a ref to that field. Inside the called method, the compiler has no reason whatsoever to know that the reference needs volatile semantics! The compiler will cheerfully generate code for the method that fails to implement the rules for volatile fields, even though the variable is a volatile field. That can completely wreck your lock-free logic; the assumption is that a volatile field is always accessed with volatile semantics. It makes no sense to treat it as volatile sometimes and not at other times; you have to be consistent, otherwise you cannot guarantee consistency on other accesses.

Therefore, the compiler warns when you do this, because it is probably going to completely mess up your carefully developed lock-free logic.

Of course, Interlocked.Exchange is written to expect a volatile field and do the right thing. The warning is therefore misleading. I regret this very much; what we should have done is implement some mechanism whereby an author of a method like Interlocked.Exchange could put an attribute on the method saying "this method which takes a ref enforces volatile semantics on the variable, so suppress the warning". Perhaps in a future version of the compiler we shall do so.

C# Is value type assignment atomic?

Structs are value types. If you assign a struct to a variable/field/method parameter, the whole struct content will be copied from the source storage location to the storage location of the variable/field/method parameter (the storage location in each case being the size of the struct itself).

Copying a struct is not guaranteed to be an atomic operation. As written in the C# language specification:

Atomicity of variable references

Reads and writes of the following data types are atomic: bool, char, byte, sbyte, short, ushort, uint, int, float, and reference types. In addition, reads and writes of enum types with an underlying type in the previous list are also atomic. Reads and writes of other types, including long, ulong, double, and decimal, as well as user-defined types, are not guaranteed to be atomic. Aside from the library functions designed for that purpose, there is no guarantee of atomic read-modify-write, such as in the case of increment or decrement.



So yes, it can happen that while one thread is copying the data from a struct storage location, another thread comes along and starts copying new data from another struct into that same location. The reading thread can thus end up with a mix of old and new data (a "torn read").


As a side note, your code can also suffer from other concurrency problems due to how one of your threads writes to a variable and how that variable is used by another thread. (An answer by user acelent to another question explains this rather well in technical detail, so I will just refer to it: https://stackoverflow.com/a/46695456/2819245.) You can avoid such problems by encapsulating any access to such "thread-crossing" variables in a lock block. As an alternative to lock, and with regard to basic data types, you could also use the methods provided by the Interlocked class to access thread-crossing variables/fields in a thread-safe manner. (Alternating between lock and Interlocked methods for the same thread-crossing variable is not a good idea, though.)
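The lock-encapsulation advice can be sketched as follows; the example is in Java (which has no C#-style structs, but the pattern of guarding multi-field state is the same), and the class name is illustrative:

```java
public class SharedPoint {
    private final Object lock = new Object();
    private double x, y;   // multi-field state that must be read/written consistently

    public void set(double newX, double newY) {
        synchronized (lock) {   // both fields change inside one critical section
            x = newX;
            y = newY;
        }
    }

    public double[] get() {
        synchronized (lock) {   // readers never observe a half-updated pair
            return new double[] { x, y };
        }
    }
}
```

Because every read and write of the pair goes through the same lock, no caller can ever see an x from one update combined with a y from another, which is precisely the torn-read hazard described above.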

Thread Safety: Lock vs Reference

This is not safe without the lock. Copying the reference to the list doesn't really buy you anything in this context: it is still quite possible for the list you are currently iterating to be mutated on another thread while you iterate it, causing all sorts of badness.
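One correct variant, sketched here in Java under illustrative names: copy the list's contents (not just the reference) while holding the lock, then iterate the private snapshot outside it:

```java
import java.util.ArrayList;
import java.util.List;

public class SnapshotIteration {
    private final Object lock = new Object();
    private final List<String> items = new ArrayList<>();

    public void add(String item) {
        synchronized (lock) {
            items.add(item);
        }
    }

    public int totalLength() {
        List<String> snapshot;
        synchronized (lock) {
            // Copy the CONTENTS under the lock; copying only the reference
            // would still let other threads mutate the list mid-iteration.
            snapshot = new ArrayList<>(items);
        }
        int total = 0;
        for (String s : snapshot) {   // safe: no other thread sees this copy
            total += s.length();
        }
        return total;
    }
}
```

The trade-off is that the iteration sees a point-in-time snapshot rather than live data, but in exchange the lock is held only for the duration of the copy, not the whole traversal.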


