Reference Assignment Is Atomic, So Why Is Interlocked.Exchange(ref Object, Object) Needed?

reference assignment is atomic so why is Interlocked.Exchange(ref Object, Object) needed?

There are numerous questions here. Considering them one at a time:

reference assignment is atomic so why is Interlocked.Exchange(ref Object, Object) needed?

Reference assignment is atomic. Interlocked.Exchange does not do only reference assignment. It does a read of the current value of a variable, stashes away the old value, and assigns the new value to the variable, all as an atomic operation.
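
In other words, Interlocked.Exchange behaves conceptually like the sketch below, except that the whole sequence is indivisible. This is a simplified illustration, not the actual implementation:

// Conceptually what Interlocked.Exchange(ref location, newValue) does,
// performed as one indivisible step:
static object NaiveExchange(ref object location, object newValue)
{
    object oldValue = location;  // 1. read the current value
    location = newValue;         // 2. write the new value
    return oldValue;             // 3. hand back the old value
}
// Unlike this sketch, the real Interlocked.Exchange guarantees that no other
// thread can read or write 'location' between steps 1 and 2.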

my colleague said that on some platforms it's not guaranteed that reference assignment is atomic. Was my colleague correct?

No. Reference assignment is guaranteed to be atomic on all .NET platforms.

My colleague is reasoning from false premises. Does that mean that their conclusions are incorrect?

Not necessarily. Your colleague could be giving you good advice for bad reasons. Perhaps there is some other reason why you ought to be using Interlocked.Exchange. Lock-free programming is insanely difficult and the moment you depart from well-established practices espoused by experts in the field, you are off in the weeds and risking the worst kind of race conditions. I am neither an expert in this field nor an expert on your code, so I cannot make a judgement one way or the other.

produces the warning "a reference to a volatile field will not be treated as volatile". What should I think about this?

You should understand why this is a problem in general. That will lead to an understanding of why the warning is unimportant in this particular case.

The reason that the compiler gives this warning is because marking a field as volatile means "this field is going to be updated on multiple threads -- do not generate any code that caches values of this field, and make sure that any reads or writes of this field are not 'moved forwards and backwards in time' via processor cache inconsistencies."

(I assume that you understand all that already. If you do not have a detailed understanding of the meaning of volatile and how it impacts processor cache semantics then you don't understand how it works and should not be using volatile. Lock-free programs are very difficult to get right; make sure that your program is right because you understand how it works, not right by accident.)

Now suppose you make a variable which is an alias of a volatile field by passing a ref to that field. Inside the called method, the compiler has no reason whatsoever to know that the variable needs to have volatile semantics! The compiler will cheerfully generate code for the method that fails to implement the rules for volatile fields, even though the variable is a volatile field. That can completely wreck your lock-free logic; the assumption is always that a volatile field is always accessed with volatile semantics. It makes no sense to treat it as volatile sometimes and not at other times; you have to be consistent always, otherwise you cannot guarantee consistency on other accesses.

Therefore, the compiler warns when you do this, because it is probably going to completely mess up your carefully developed lock-free logic.

Of course, Interlocked.Exchange is written to expect a volatile field and do the right thing. The warning is therefore misleading. I regret this very much; what we should have done is implement some mechanism whereby an author of a method like Interlocked.Exchange could put an attribute on the method saying "this method which takes a ref enforces volatile semantics on the variable, so suppress the warning". Perhaps in a future version of the compiler we shall do so.
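
For example, a fragment like the following (the class and field names here are made up for illustration) produces warning CS0420 even though Interlocked.Exchange handles the volatile field correctly; in this specific pattern the warning can safely be ignored or suppressed:

using System.Threading;

class Holder
{
    private volatile object _value;

    public object Swap(object newValue)
    {
        // warning CS0420: a reference to a volatile field will not be
        // treated as volatile. Harmless here, because Interlocked.Exchange
        // itself enforces the required semantics.
        return Interlocked.Exchange(ref _value, newValue);
    }
}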

Static List assignment in C# is atomic

No, this code is not atomic - if Items is accessed from multiple threads in parallel, _items may actually get created more than once and different callers may receive a different value.

This code needs locking because it first performs a read, a branch and a write (after an expensive deserialization call). The read and the write by themselves are atomic, but - without a lock - there's nothing to prevent the system from switching to another thread between the read and the write.

In pseudo(ish) code, this is what may happen:

if (_items == null)
// Thread may be interrupted here.
{
    // Thread may be interrupted inside this call in many places,
    // so another thread may enter the body of the if() and
    // call this same function again.
    var s = ConfigurationManager.AppSettings.get_Item("Items");

    // Thread may be interrupted inside this call in many places,
    // so another thread may enter the body of the if() and
    // call this same function again.
    var i = JsonConvert.DeserializeObject(s);

    // Thread may be interrupted here.
    _items = i;
}

// Thread may be interrupted here.
return (_items);

This shows you that without locking it's possible for multiple callers to get a different instance of the Items list.

You should look into using Lazy<T>, which will make this sort of initialization a lot simpler and safer.

When should I use Lazy<T>?
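
A rough sketch of that approach, assuming a field and loading logic similar to the ones in the question (the names and the stubbed-out loader here are illustrative):

using System;
using System.Collections.Generic;

static class ItemsCache
{
    // The default LazyThreadSafetyMode.ExecutionAndPublication guarantees
    // that LoadItems runs at most once, no matter how many threads race here.
    private static readonly Lazy<List<string>> _items =
        new Lazy<List<string>>(LoadItems);

    public static List<string> Items => _items.Value;

    private static List<string> LoadItems()
    {
        // Expensive read/deserialization would go here; it runs only once.
        return new List<string> { "a", "b", "c" };
    }
}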

Also, keep in mind that List<T> itself is not thread-safe - you may want to use a different type (like ConcurrentDictionary<T1, T2> or ReadOnlyCollection<T>) or you may need to use locking around all operations against this list.

Rob, in the comments, pointed out that the question may be about whether a given assignment is atomic - a single assignment (that is, a single write) of a reference is guaranteed to be atomic, but that doesn't make this code safe because there's more than a single assignment here.

C# object type - atomicity of assignment

I am assuming OP means 'thread safe' when talking about atomicity.

A write operation is non-atomic only if it is possible for another thread to read a partially written value.

If object[] table is local to a method, each thread will get its own table and hence any operation on the table will be atomic.

Going forward I am assuming that table is shared across threads.

The OP has defined table as an array of object. Hence table[3] = 10 involves boxing.

Even though table[3] = 10 represents a chain of instructions, the operation is atomic because it eventually amounts to writing the 'address' of the boxed instance (the current CLR implementation does represent object references as memory addresses), and addresses are of the natural word size of the machine (i.e. 32 bits on a 32-bit machine and 64 bits on a 64-bit machine). Note that even though this explanation is based on the current CLR implementation, the atomicity of reference writes is guaranteed by the specification. The boxing operation and the boxed instance itself are local to the thread, so there is no way for another thread to interfere with them.
By the same reasoning, even if a value exceeding the word size (e.g. a Decimal) were being written, the operation would still be atomic (thanks to boxing).
It must be noted that the above argument holds only if the value being written has already been obtained in a thread-safe manner.

Had no boxing been involved, the usual rule that writes of the word size (or of smaller, properly aligned sizes) are atomic would apply.
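
To make the boxing explicit, the assignment in question is roughly equivalent to the following sketch (conceptual, not the emitted IL):

object[] table = new object[10];

// table[3] = 10; is conceptually:
object boxed = 10;   // box the int: allocate an object and copy the value
                     // into it (purely thread-local work)
table[3] = boxed;    // a single reference write; this is the only write to
                     // shared memory, and reference writes are atomic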

Atomic array in C11

The standard prohibits applying the _Atomic specifier to an array as a whole; for example, this

typedef double at[5];
_Atomic(at) atomic_array; // constraint violation

But the array elements may well be atomic

_Atomic(double) atomic_array[5]; // valid

If you want the access to the array as a whole to be atomic, you'd have to encapsulate the array within a structure and then apply _Atomic to the structure.

ReadOnlyCollections and Threads - Is this code safe?

Reference assignments are atomic, so yes, it is thread safe. But only as long as you don't rely on the data being ready to read the very moment after it is written. This is because of caching; you might want to throw in a volatile to prevent that.

See also reference assignment is atomic so why is Interlocked.Exchange(ref Object, Object) needed?.
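
A minimal sketch of that pattern (the type and member names are illustrative): the reference swap itself is atomic, and volatile keeps readers from holding on to a stale cached reference:

using System.Collections.Generic;
using System.Collections.ObjectModel;

class Catalog
{
    // volatile: readers always observe the most recently published collection.
    private volatile ReadOnlyCollection<string> _current =
        new ReadOnlyCollection<string>(new List<string>());

    public ReadOnlyCollection<string> Current => _current;

    public void Publish(IEnumerable<string> items)
    {
        // Build the new collection privately, then publish it with a single
        // atomic reference assignment.
        _current = new ReadOnlyCollection<string>(new List<string>(items));
    }
}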

Is assigning to ref parameter of method an atomic operation?

That depends: is Class1 genuinely a class (or interface, or delegate)? If it is a struct, it may be non-atomic (size being one major factor that affects this); reference updates, however, are always atomic - that is guaranteed by the language specification. As for "thread safe" - that is more complex - it depends on how the other threads are reading/writing the field. For example, it is not guaranteed that other threads will notice the swap immediately, unless they are doing volatile reads.

Using of Interlocked.Exchange for updating of references and Int32

It is not only about atomicity. It is also about memory visibility. A variable can be stored in main memory or in a CPU cache. If the variable is only stored in a CPU cache, it will not be visible to threads running on a different CPU. Consider the following example:

public class Test {
    private Int32 i = 5;

    public void ChangeUsingAssignment() {
        i = 10;
    }

    public void ChangeUsingInterlocked() {
        Interlocked.Exchange(ref i, 10);
    }

    public Int32 Read() {
        return Interlocked.CompareExchange(ref i, 0, 0);
    }
}

Now if you call ChangeUsingAssignment on one thread and Read on another thread, the return value may be 5, not 10. But if you call ChangeUsingInterlocked, Read will return 10 as expected.

 ----------      ------------      -------------------
|  CPU 1   | -> |  CACHE 1   | -> |                   |
 ----------      ------------     |                   |
                                  |        RAM        |
 ----------      ------------     |                   |
|  CPU 2   | -> |  CACHE 2   | -> |                   |
 ----------      ------------      -------------------

In the diagram above, the ChangeUsingAssignment method may result in the value 10 getting 'stuck' in CACHE 2 and never making it to RAM. When CPU 1 later tries to read the variable, it will get the value from RAM, where it is still 5. Using Interlocked instead of an ordinary write makes sure that the value 10 gets all the way to RAM.
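
If all you need is visibility of a single write rather than an atomic read-modify-write, a volatile field achieves the same effect. A minimal variant of the class above (names are illustrative):

public class TestVolatile {
    // volatile: writes are published to, and reads are fetched from,
    // shared memory rather than being cached indefinitely per CPU.
    private volatile int i = 5;

    public void Change() {
        i = 10;      // ordinary write, but with release semantics
    }

    public int Read() {
        return i;    // ordinary read, but with acquire semantics
    }
}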

Overuse of Interlocked.Exchange?

Using an Interlocked method does two things:

  1. It performs some series of operations that normally wouldn't be atomic, and makes them effectively atomic. In the case of Exchange you are doing the equivalent of: var temp = first; first = second; return temp; but without the risk of either variable being modified by another thread while you're doing that.
  2. It introduces a memory barrier. It's possible for compiler, runtime, and/or hardware optimizations to result in different threads having a local "copy" of a value that is technically in shared memory (normally as a result of caching variables). This can result in it taking a long time for one thread to "see" the results of a write in another thread. A memory barrier essentially syncs up all of these different versions of the same variable.

So, on to your code specifically. Your second solution isn't actually thread safe. Each individual Interlocked operation is atomic, but a sequence of Interlocked calls is not atomic as a whole. Given everything that your methods are doing, your critical sections are actually much larger; you'll need to use lock or another similar mechanism (e.g. a semaphore or monitor) to limit access to sections of code to only a single thread. In your particular case I'd imagine that the entirety of the method is a critical section. If you're really, really careful you may be able to have several smaller critical blocks, but it will be very difficult to ensure that there are no possible race conditions as a result.
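
A rough sketch of what that looks like with lock (the class, fields, and method here are placeholders for whatever the original methods actually do):

class Accumulator
{
    private readonly object _sync = new object();
    private int _count;
    private int _total;

    public void Add(int value)
    {
        // Both fields change inside one critical section, so no other thread
        // can observe _count incremented while _total is still stale.
        lock (_sync)
        {
            _count++;
            _total += value;
        }
    }

    public void Snapshot(out int count, out int total)
    {
        // Readers take the same lock, so they always see a consistent pair.
        lock (_sync)
        {
            count = _count;
            total = _total;
        }
    }
}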

As for performance, well, as it is the code doesn't work, and so performance is irrelevant.

Is the return value of Interlocked.Exchange also processed atomically?

I assume you're considering code like this:

using System;
using System.Threading;

class Test
{
    static int x = 1;
    static int y = 2;

    static void Main()
    {
        x = Interlocked.Exchange(ref y, 5);
    }
}

In that case, no, the operation isn't atomic. In IL, there are two separate actions:

  • Calling the method
  • Copying the value from the notional stack to the field

It would be entirely possible for another thread to "see" y become 5 before the return value of Interlocked.Exchange was stored in x.

Personally, if I were looking at something where you need multiple field values to be changed atomically, I'd be considering locks instead of atomic lock-free operations.
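
For example, if x and y must appear to change together, a lock makes the pair of writes atomic with respect to any reader that takes the same lock. A sketch of members that could be added to the Test class above (the names are illustrative):

static readonly object gate = new object();

static void SwapUnderLock()
{
    lock (gate)
    {
        // A reader that also locks 'gate' can never see y == 5 while x
        // still holds its old value.
        int old = y;
        y = 5;
        x = old;
    }
}

static void ReadUnderLock(out int currentX, out int currentY)
{
    lock (gate)
    {
        currentX = x;
        currentY = y;
    }
}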


