Efficient Memory Barriers

The C++11 standard library includes support for fences in <atomic> with std::atomic_thread_fence.

Calling this invokes a full fence:

std::atomic_thread_fence(std::memory_order_seq_cst);

If you want to emit only an acquire or only a release fence, you can use std::memory_order_acquire or std::memory_order_release instead.
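For example, a release fence in the writer paired with an acquire fence in the reader could look like this (a minimal sketch; the names data and ready are made up for illustration):

#include <atomic>

int data = 0;
std::atomic<bool> ready{false};

void producer()
{
    data = 42;
    // Release fence: orders the plain write to data before the relaxed store below.
    std::atomic_thread_fence(std::memory_order_release);
    ready.store(true, std::memory_order_relaxed);
}

void consumer()
{
    if (ready.load(std::memory_order_relaxed)) {
        // Acquire fence: pairs with the release fence in producer().
        std::atomic_thread_fence(std::memory_order_acquire);
        int x = data;   // guaranteed to see 42 if the load of ready saw true
        (void)x;
    }
}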

Is a function call an effective memory barrier for modern platforms?

Memory barriers aren't just there to prevent instruction reordering. Even if instructions aren't reordered, you can still run into problems with cache coherence. As for the reordering, it depends on your compiler and settings. ICC is particularly aggressive with reordering. MSVC with whole-program optimization can be, too.

If your shared data variable is declared as volatile, most compilers will generate a memory barrier around reads and writes of the variable and prevent reordering, even though that's not in the spec. This is not the correct way of using volatile, nor what it was meant for.

Avoiding concurrent structures by manually triggering memory barriers

The thing you are trying to do is called "premature optimization". You don't have a real performance problem, yet you are trying to make your entire program very complicated and possibly error-prone, without any gain.

The reason why you will never experience any (notable) gain lies in the way a lock works. You can learn a lot about it by studying the documentation of the class AbstractQueuedSynchronizer.

A Lock is built around a simple int value with volatile semantics and atomic updates. In the simplest case, i.e. without contention, locking and unlocking consist of a single atomic update of this int variable. Since you claim that you can be sure there will be only one thread accessing the data at a given time, there will be no contention, and the lock-state update has performance characteristics similar to your volatile boolean attempts, with the difference that the Lock code works reliably and is heavily tested.

The ConcurrentMap approach goes a step further and allows a lock-free read that has the potential to be even more efficient than your volatile read (depending on the actual implementation).

So you are creating a potentially slower and possibly error-prone program just because you "feel that using many concurrent hash maps per object is heavy-handed". The only answer can be: don't feel. Measure. Or just leave it as it is, as long as there is no real performance problem.

Why do I need a memory barrier?

Barrier #2 guarantees that the write to _complete gets committed immediately. Otherwise it could remain in a queued state, meaning that the read of _complete in B would not see the change made by A, even though B effectively used a volatile read.

Of course, this example does not quite do justice to the problem because A does nothing more after writing to _complete, which means the write will be committed immediately anyway since the thread terminates early.
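For reference, here is a rough C++ analogue of the example being discussed (the original is C# using Thread.MemoryBarrier; the field names _answer and _complete and the barrier numbering follow that example, and seq_cst fences stand in for the C# full barriers):

#include <atomic>
#include <cstdio>

// Relaxed atomics stand in for the C# fields so the accesses themselves
// are well-defined; the four fences mirror barriers #1-#4.
std::atomic<int>  _answer{0};
std::atomic<bool> _complete{false};

void A()
{
    _answer.store(123, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_seq_cst);     // Barrier 1
    _complete.store(true, std::memory_order_relaxed);
    std::atomic_thread_fence(std::memory_order_seq_cst);     // Barrier 2
}

void B()
{
    std::atomic_thread_fence(std::memory_order_seq_cst);     // Barrier 3
    if (_complete.load(std::memory_order_relaxed)) {
        std::atomic_thread_fence(std::memory_order_seq_cst); // Barrier 4
        std::printf("%d\n", _answer.load(std::memory_order_relaxed));
    }
}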

The answer to your question of whether the if could still evaluate to false is yes for exactly the reasons you stated. But, notice what the author says regarding this point.

Barriers 1 and 4 prevent this example from writing "0". Barriers 2 and 3 provide a freshness guarantee: they ensure that if B ran after A, reading _complete would evaluate to true.

The emphasis on "if B ran after A" is mine. It certainly could be the case that the two threads interleave. But, the author was ignoring this scenario presumably to make his point regarding how Thread.MemoryBarrier works simpler.

By the way, I had a hard time contriving an example on my machine where barriers #1 and #2 would have altered the behavior of the program. This is because the memory model regarding writes is strong in my environment. Perhaps if I had a multiprocessor machine, were using Mono, or had some other setup, I could have demonstrated it. Of course, it was easy to demonstrate that removing barriers #3 and #4 had an impact.

C - volatile and memory barriers in lockless shared memory access?

  1. Is my understanding correct and complete?

Yeah, it looks that way, except for not mentioning that C11 <stdatomic.h> made all this obsolete for almost all purposes.

There are more bad/weird things that can happen without volatile (or better, _Atomic) that you didn't list: the LWN article Who's afraid of a big bad optimizing compiler? goes into detail about things like inventing extra loads (and expecting them both to read the same value). It's aimed at Linux kernel code, where C11 _Atomic isn't how they do things.

Other than the Linux kernel, new code should pretty much always use <stdatomic.h> instead of rolling your own atomics with volatile and inline asm for RMWs and barriers. But that does continue to work because all real-world CPUs that we run threads across have coherent shared memory, so making a memory access happen in the asm is enough for inter-thread visibility, like memory_order_relaxed. See When to use volatile with multi threading? (basically never, except in the Linux kernel or maybe a handful of other codebases that already have good implementations of hand-rolled stuff).

In ISO C11, it's data-race undefined behaviour for two threads to do unsynchronized read+write on the same object, but mainstream compilers do define the behaviour, just compiling the way you'd expect, so hardware guarantees (or the lack thereof) come into play.


Other than that, yeah, looks accurate except for your final question 2: there are use-cases for memory_order_relaxed atomics, which are like volatile with no barriers, e.g. an exit_now flag.
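A minimal sketch of that exit_now use-case (the worker/main structure is just an illustration):

#include <atomic>
#include <thread>

std::atomic<bool> exit_now{false};

void worker()
{
    // Relaxed is enough: the flag only says "stop", it doesn't hand off data.
    while (!exit_now.load(std::memory_order_relaxed)) {
        // ... do a chunk of work ...
    }
}

int main()
{
    std::thread t(worker);
    // ... later, when it's time to shut down:
    exit_now.store(true, std::memory_order_relaxed);
    t.join();   // the join provides any ordering needed for results produced by the worker
}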

Or are there cases where only using barriers suffices?

No, unless you get lucky and the compiler happens to generate correct asm anyway.

Or unless other synchronization means this code only runs while no other threads are reading/writing the object. (C++20 has std::atomic_ref<T> to handle the case where some parts of the code need to do atomic accesses to data, but other parts of your program don't, and you want to let them auto-vectorize or whatever. C doesn't have any such thing yet, other than using plain variables with/without GNU C __atomic_load_n() and other builtins, which is how C++ headers implement std::atomic<T>, and which is the same underlying support that C11 _Atomic compiles to. Probably also the C11 functions like atomic_load_explicit defined in stdatomic.h, but unlike C++, _Atomic is a true keyword not defined in any header.)
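For illustration, a tiny C++20 sketch of the std::atomic_ref<T> idea, assuming a plain std::vector<int> that only this function needs to touch atomically (the function name bump is made up):

#include <atomic>
#include <cstddef>
#include <vector>

void bump(std::vector<int>& counts, std::size_t i)
{
    // Wrap a plain int for this one access; other parts of the program can
    // keep using counts as ordinary non-atomic data (when no races exist).
    std::atomic_ref<int> ref(counts[i]);
    ref.fetch_add(1, std::memory_order_relaxed);
}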

Are memory barriers necessary for atomic reference counting shared immutable data?

On x86, it will turn into a lock-prefixed assembly instruction, like LOCK XADD.

Being a single instruction, it is non-interruptible. As an added "feature", the lock prefix results in a full memory barrier:

"...locked operations serialize all outstanding load and store operations (that is, wait for them to complete)." ..."Locked operations are atomic with respect to all other memory operations and all externally visible events. Only instruction fetch and page table accesses can pass locked instructions. Locked instructions can be used to synchronize data written by one processor and read by another processor." - Intel® 64 and IA-32 Architectures Software Developer’s Manual, Chapter 8.1.2.

A memory barrier is in fact implemented as a dummy LOCK OR or LOCK AND in both the .NET and the Java JIT on x86/x64, because mfence is slower on many CPUs even when it's guaranteed to be available, as in 64-bit mode. (Does lock xchg have the same behavior as mfence?)

So you have a full fence on x86 as an added bonus, whether you like it or not. :-)

On PPC, it is different. An LL/SC pair - lwarx & stwcx - with a subtraction inside can be used to load the memory operand into a register, subtract one, then either write it back if there was no other store to the target location, or retry the whole loop if there was. An LL/SC can be interrupted (meaning it will fail and retry).

It also does not mean an automatic full fence.

This does not however compromise the atomicity of the counter in any way.

It just means that in the x86 case, you happen to get a fence as well, "for free".

On PPC, one can insert a (partial or) full fence by emitting a (lw)sync instruction.

All in all, explicit memory barriers are not necessary for the atomic counter to work properly.
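For concreteness, a minimal C++ sketch of the kind of reference-count decrement being discussed (the struct and function names are made up; std::atomic's default seq_cst ordering is assumed):

#include <atomic>

struct Shared {
    std::atomic<int> refcount{1};
    // ... immutable payload ...
};

void release(Shared* p)
{
    // fetch_sub typically compiles to LOCK XADD on x86 and to an
    // lwarx/stwcx. retry loop on PPC; atomicity holds either way.
    if (p->refcount.fetch_sub(1) == 1)
        delete p;   // we just released the last reference
}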

Which memory barriers are minimally needed for updating array elements with greater values?

Relaxed is fine; you don't need any ordering wrt. access to any other elements during the process of updating. And for accesses to the same location, ISO C++ guarantees that a "modification order" exists for each location separately, and that even relaxed operations will only ever see the same or later values in the modification order of the location being loaded or RMWed.

You're just building an atomic fetch_max primitive out of a CAS retry loop. Since the other writers are doing the same thing, the value of each location is monotonically increasing. So it's totally safe to bail out any time you see a value greater than the new_value.
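A minimal sketch of such a CAS-retry fetch_max, assuming std::atomic<int> array elements (the function name update_max is made up):

#include <atomic>

void update_max(std::atomic<int>& slot, int new_value)
{
    int cur = slot.load(std::memory_order_relaxed);
    // Bail out as soon as the stored value is already >= new_value;
    // compare_exchange_weak reloads `cur` with the latest value on failure.
    while (cur < new_value &&
           !slot.compare_exchange_weak(cur, new_value,
                                       std::memory_order_relaxed,
                                       std::memory_order_relaxed)) {
    }
}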

For the main thread to collect the results at the end, you do need release/acquire synchronization like thread.join or some kind of flag. (e.g. maybe fetch_sub(1, release) of a counter of how many threads still have work left to do, or an array of done flags so you can just do a pure store.)
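One possible shape of that end-of-work synchronization, assuming a shared counter of busy threads (the names threads_left, worker_finished and wait_for_all are made up):

#include <atomic>
#include <thread>

constexpr int NUM_THREADS = 8;              // assumption for the sketch
std::atomic<int> threads_left{NUM_THREADS};

void worker_finished()
{
    // Release: publishes this thread's relaxed array updates.
    threads_left.fetch_sub(1, std::memory_order_release);
}

void wait_for_all()   // called by the collecting thread
{
    while (threads_left.load(std::memory_order_acquire) != 0)
        std::this_thread::yield();
    // All stores made by the workers before their fetch_sub are visible here.
}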


BTW, this seems likely to be slow, with lots of time spent waiting for cache lines to bounce between cores. (Lots of false sharing.) Ideally you can restructure this so each thread works on different parts of the array (e.g. computing multiple candidates for the same index so it doesn't need any atomic stuff).

I cannot guarantee that the computed indices do not overlap. In practice, the overlapping is usually small, but it cannot be eliminated.

So apparently that's a no. And if the indices touched by different threads are in different cache lines (a chunk of 16 int32_t, i.e. one 64-byte line), then there won't be too much false sharing. (Also, if computation is expensive so you aren't producing values very fast, that's good: atomic updates aren't where your code spends most of its time.)

But if there is significant contention and the array isn't huge, you could give each thread its own output array, and collect the results at the end. e.g. have one thread do a[i] = max(a[i], b[i], c[i], d[i]) for 4 to 8 arrays per loop. (Not too many read streams at once, and not a variable number of inputs because that probably couldn't compile efficiently). This should benefit from SIMD, e.g. SSE4.1 pmaxsd doing 4 parallel max operations, so this should be limited mostly by L3 cache bandwidth.
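A sketch of that per-thread-output idea, assuming each thread has filled its own plain int array (the name merge_max is made up); a fixed, small number of inputs per pass keeps the loop simple enough to auto-vectorize (e.g. with SSE4.1 pmaxsd):

#include <algorithm>
#include <cstddef>
#include <vector>

// Fold the per-thread arrays b, c, d into the combined result a.
void merge_max(std::vector<int>& a,
               const std::vector<int>& b,
               const std::vector<int>& c,
               const std::vector<int>& d)
{
    for (std::size_t i = 0; i < a.size(); ++i)
        a[i] = std::max({a[i], b[i], c[i], d[i]});
}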

Or divide the max work between threads as a second parallel phase, with each thread doing the above over part of the output array. Or have the thread_id % 4 == 0 reduce results from itself and the next 3 threads, so you have a tree of reductions if you have a system with many threads.


