Volatile in C++11

Whether it is optimized out depends entirely on compilers and what they choose to optimize away. The C++98/03 memory model does not recognize the possibility that x could change between the setting of it and the retrieval of the value.

The C++11 memory model does recognize that x could be changed. However, it doesn't care. Non-atomic access to variables (i.e., not using std::atomic or proper mutexes) yields undefined behavior. So it's perfectly fine for a C++11 compiler to assume that x never changes between the write and reads, since undefined behavior can mean, "the function never sees x change ever."
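For illustration, here is a minimal sketch (my own code, not the original poster's) of the kind of unsynchronized access being discussed; the compiler is free to assume x never changes inside the loop:

#include <thread>

int x = 0; // plain int: accesses are non-atomic

int main()
{
    std::thread t([] {
        while (x == 0) { } // unsynchronized read of x
    });
    x = 1; // unsynchronized write to x: data race, undefined behavior
    t.join();
}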

Now, let's look at what C++11 says about volatile int x;. If you put that in there, and you have some other thread mess with x, you still have undefined behavior. Volatile does not affect threading behavior. C++11's memory model does not define reads or writes from/to x to be atomic, nor does it require the memory barriers needed for non-atomic reads/writes to be properly ordered. volatile has nothing to do with it one way or the other.

Oh, your code might work. But C++11 doesn't guarantee it.

What volatile tells the compiler is that it can't optimize memory reads from that variable. However, CPU cores have different caches, and most memory writes do not immediately go out to main memory. They get stored in that core's local cache, and may be written... eventually.

CPUs have ways to force cache lines out into memory and to synchronize memory access among different cores. These memory barriers allow two threads to communicate effectively. Merely reading, in one core, memory that was written by another core isn't enough; the core that wrote the memory needs to issue a barrier, and the core that's reading it needs that barrier to have completed before the read, in order to actually get the data.

volatile guarantees none of this. Volatile works with "hardware, mapped memory and stuff" because the hardware that writes that memory makes sure that the cache issue is taken care of. If CPU cores issued a memory barrier after every write, you could basically kiss any hope of performance goodbye. So C++11 has specific language saying when constructs are required to issue a barrier.
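As a sketch of what that "specific language" buys you in practice, here is the standard-sanctioned way to request the barriers, using release/acquire atomics (the variable names are mine, for illustration):

#include <atomic>
#include <thread>

int data = 0;
std::atomic<bool> ready{false};

int main()
{
    std::thread producer([] {
        data = 42;                                    // ordinary write
        ready.store(true, std::memory_order_release); // release: publishes data
    });
    std::thread consumer([] {
        while (!ready.load(std::memory_order_acquire)) { } // acquire: pairs with the release
        // At this point data is guaranteed to be 42.
    });
    producer.join();
    consumer.join();
}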

volatile is about memory access (when to read); threading is about memory integrity (what is actually stored there).

The C++11 memory model is specific about what operations will cause writes in one thread to become visible in another. It's about memory integrity, which is not something volatile handles. And memory integrity generally requires both threads to do something.

For example, if thread A locks a mutex, does a write, and then unlocks it, the C++11 memory model only requires that write to become visible to thread B if thread B later locks that same mutex. Until it actually acquires that particular lock, it's undefined what value is there. This stuff is laid out in great detail in section 1.10 of the standard.
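A minimal sketch of that rule (the names are mine, for illustration):

#include <mutex>
#include <thread>

std::mutex mtx;
int shared = 0;

int main()
{
    std::thread a([] {
        std::lock_guard<std::mutex> lk(mtx);
        shared = 42; // write performed under the lock
    });
    std::thread b([] {
        std::lock_guard<std::mutex> lk(mtx);
        // Only once this thread has acquired the same mutex is a's
        // write guaranteed to be visible here (assuming a locked first).
        int v = shared;
        (void)v;
    });
    a.join();
    b.join();
}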

Let's look at the code you cite, with respect to the standard. Section 1.10, p8 speaks of the ability of certain library calls to cause a thread to "synchronize with" another thread. Most of the other paragraphs explain how synchronization (and other things) build an order of operations between threads. Of course, your code doesn't invoke any of this. There is no synchronization point, no dependency ordering, nothing.

Without such protection, without some form of synchronization or ordering, 1.10 p21 comes in:

The execution of a program contains a data race if it contains two conflicting actions in different threads, at least one of which is not atomic, and neither happens before the other. Any such data race results in undefined behavior.

Your program contains two conflicting actions (reading from x and writing to x). Neither is atomic, and neither is ordered by synchronization to happen before the other.

Thus, you have achieved undefined behavior.

So the only case where you get guaranteed multithreaded behavior by the C++11 memory model is if you use a proper mutex or std::atomic<int> x with the proper atomic load/store calls.
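A sketch of that std::atomic version of the spin-wait (again, my own reconstruction):

#include <atomic>
#include <thread>

std::atomic<int> x{0};

int main()
{
    std::thread t([] {
        while (x.load() == 0) { } // atomic load: no data race
    });
    x.store(1); // atomic store: well-defined; the loop can now terminate
    t.join();
}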

Oh, and you don't need to make x volatile too. Anytime you call a (non-inline) function, that function or something it calls could modify a global variable, so the compiler cannot optimize away the read of x in the while loop. And every C++11 mechanism for synchronizing requires calling a function, which just so happens to invoke a memory barrier.

Usefulness of volatile in concurrent programming since C++11

No, in C++ the volatile keyword tells the compiler that it must not optimize accesses to that variable in any way, shape, or form. This can be useful when dealing with memory that can be changed from outside your own code, e.g. a hardware register on a custom board.
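A sketch of that use case; the register address and bit mask below are made up for illustration and depend entirely on the hardware:

#include <cstdint>

// Hypothetical memory-mapped status register (the address is invented).
volatile std::uint32_t* const STATUS_REG =
    reinterpret_cast<volatile std::uint32_t*>(0x40000000);

bool deviceReady()
{
    // volatile forces a real load on every call; the compiler may not
    // cache the value or delete "redundant" reads.
    return (*STATUS_REG & 0x1u) != 0;
}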

For a more in-depth guide to volatile, you should read Volatile vs. volatile by Herb Sutter.

Why do we use volatile keyword?

Consider this code,

int some_int = 100;

while(some_int == 100)
{
    // your code
}

When this program gets compiled, the compiler may optimize this code. If it finds that the program never makes any attempt to change the value of some_int, it may be tempted to optimize the while loop by changing it from while(some_int == 100) to something equivalent to while(true), so that execution is faster (since the condition always appears to be true). (If the compiler doesn't optimize it, it has to fetch the value of some_int and compare it with 100 on each iteration, which obviously is a little slower.)

However, sometimes optimization (of some parts of your program) may be undesirable, because someone else may be changing the value of some_int from outside the program, which the compiler is not aware of since it can't see it; but that's how you've designed it. In that case, the compiler's optimization would not produce the desired result!

So, to ensure the desired result, you need to somehow stop the compiler from optimizing the while loop. That is where the volatile keyword plays its role. All you need to do is this,

volatile int some_int = 100; //note the 'volatile' qualifier now!

In other words, I would explain this as follows:

volatile tells the compiler that,

"Hey compiler, I'm volatile and, you
know, I can be changed by some XYZ
that you're not even aware of. That
XYZ could be anything. Maybe some
alien outside this planet called
program. Maybe some lightning, some
form of interrupt, volcanoes, etc can
mutate me. Maybe. You never know who
is going to change me! So O you
ignorant, stop playing an all-knowing
god, and don't dare touch the code
where I'm present. Okay?"

Well, that is how volatile prevents the compiler from optimizing code. Now search the web to see some sample examples.


Quoting from the C++ Standard (§7.1.5.1/8):

[...] volatile is a hint to the implementation to avoid aggressive optimization involving the object because the value of the object might be changed by means undetectable by an implementation. [...]

Related topic:

Does making a struct volatile make all its members volatile?

Is it safe to remove the C++ volatile here?

In general, volatile and multithreading are orthogonal in C++11. Using volatile neither adds nor removes data races.

In this case, m_flag = true; is sequenced before the release of the mutex in the thread launched by async ([intro.execution]/p14), which in turn synchronizes with the subsequent acquire of the mutex in m_cv.wait(lock) ([thread.mutex.requirements.mutex]/p11,25), which in turn is sequenced before a subsequent read of m_flag. m_flag = true; therefore inter-thread happens before, and hence happens before, the subsequent read. ([intro.multithread]/p13-14)

Since there are no other side effects on m_flag, m_flag = true; is the visible side effect with respect to that read ([intro.multithread]/p15), and that read must therefore read what was stored by the visible side effect, i.e., true.
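The question's code isn't reproduced here, but a sketch consistent with the names used in this answer (m_flag, m_cv, a task launched by async) would look something like this:

#include <condition_variable>
#include <future>
#include <mutex>

std::mutex m_mutex;
std::condition_variable m_cv;
bool m_flag = false; // plain bool: no volatile, no atomic

int main()
{
    auto task = std::async(std::launch::async, [] {
        {
            std::lock_guard<std::mutex> lk(m_mutex);
            m_flag = true; // sequenced before the mutex release...
        }
        m_cv.notify_one(); // ...which synchronizes with wait()'s re-acquire
    });
    std::unique_lock<std::mutex> lock(m_mutex);
    m_cv.wait(lock, [] { return m_flag; }); // must eventually observe true
    task.get();
}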

A compiler that "optimizes" away that condition, regardless of whether volatile is used, would be non-conforming.

Does standard C++11 guarantee that `volatile atomic<T>` has both semantics (volatile + atomic)?

Yes, it does.

Section 29.6.5, "Requirements for operations on atomic types", says:

Many operations are volatile-qualified. The “volatile as device register” semantics have not changed in the standard. This qualification means that volatility is preserved when applying these operations to volatile objects.

I checked working drafts 2008 through 2016, and the same text is in all of them. Therefore it should apply to C++11, C++14, and C++17.
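For example (a minimal sketch of my own), the volatile-qualified overloads are selected automatically on a volatile atomic object:

#include <atomic>

volatile std::atomic<int> counter{0};

void tick()
{
    // Calls the volatile-qualified overload of fetch_add, so the
    // operation is both atomic and treated as a volatile access.
    counter.fetch_add(1);
}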

Volatile specifier ignored in C++

The struct version will probably be optimized out, as the compiler realizes that there are no side effects (no read or write of the variable a), regardless of the volatile. You basically have a no-op, a;, so the compiler can do whatever it pleases; it is not forced to unroll the loop or to optimize it out, so the volatile doesn't really matter here. In the case of ints, there seem to be no optimizations, but this is consistent with the use case of volatile: you should expect non-optimization only when you have a possible "access to an object" (i.e. a read or write) in the loop. However, what constitutes "access to an object" is implementation-defined (although most of the time it follows common sense); see EDIT 3 at the bottom.

Toy example here:

#include <iostream>
#include <chrono>

int main()
{
    volatile int a = 0;

    const std::size_t N = 100000000;

    // side effects, never optimized
    auto start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < N; ++i)
        ++a; // side effect (write)
    auto end = std::chrono::steady_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()
              << " ms" << std::endl;

    // no side effects, may or may not be optimized out
    start = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < N; ++i)
        a; // no side effect, this is a no-op
    end = std::chrono::steady_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()
              << " ms" << std::endl;
}

EDIT

The no-op is not actually optimized out for scalar types, as you can see in this minimal example. For structs, though, it is optimized out. In the example I linked, clang doesn't optimize the code without optimization flags, but optimizes both loops with -O3. gcc doesn't optimize out the loops without optimization flags either, but with optimizations on it optimizes only the first loop.
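The struct variant under discussion isn't shown in the excerpt above; a sketch of what it presumably looks like (my reconstruction):

#include <cstddef>

struct S { int x; };

int main()
{
    volatile S a{};
    for (std::size_t i = 0; i < 100000000; ++i)
        a; // no-op on a volatile struct; compilers were observed to optimize this out
}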

EDIT 2

clang spits out a warning: warning: expression result unused; assign into a variable to force a volatile load [-Wunused-volatile-lvalue]. So my initial guess was correct: the compiler can optimize out no-ops, but it is not forced to. Why it does this for structs and not for scalar types is something I don't understand, but it is the compiler's choice, and it is standard-compliant. For some reason it gives this warning only when the no-op is a struct, and doesn't give it when it's a scalar type.

Also note that you don't have a "read/write", you only have a no-op, so you shouldn't expect anything from volatile.

EDIT 3

From the golden book (C++ standard)

7.1.6.1/8 The cv-qualifiers [dcl.type.cv]

What constitutes an access to an object that has volatile-qualified type is implementation-defined. ...

So it is up to the compiler to decide when to optimize out the loops. In most cases, it follows the common sense: when reading or writing into the object.

Understanding volatile keyword in C++

The volatile keyword in C++ was inherited from C, where it was intended as a general catch-all to indicate places where a compiler should allow for the possibility that reading or writing an object might have side effects it doesn't know about. Because the kinds of side effects that could be induced vary among platforms, the Standard leaves the question of what allowances to make to compiler writers' judgments about how they can best serve their customers.

Microsoft's compilers for the 8088/8086 and later x86 have for decades been designed to support the practice of using volatile objects to build a mutex which guards "ordinary" objects. As a simple example: if thread 1 does something like:

ordinaryObject = 23;
volatileFlag = 1;
while(volatileFlag)
    doOtherStuffWhileWaiting();
useValue(ordinaryObject);

and thread 2 periodically does something like:

if (volatileFlag)
{
    ordinaryObject++;
    volatileFlag = 0;
}

then the accesses to volatileFlag would serve as a warning to Microsoft's compilers that they should refrain from making assumptions about how any preceding actions on any objects would interact with later actions. This pattern has been followed with the volatile qualifiers in other languages like C#.

Unfortunately, neither clang nor gcc includes any option to treat volatile in such a fashion; they instead require that programmers use compiler-specific intrinsics to get the same semantics that Microsoft could achieve using only the Standard keyword volatile, which was intended to be suitable for such purposes. (According to the authors of the Standard, "A volatile object is also an appropriate model for a variable shared among multiple processes"; see http://www.open-std.org/jtc1/sc22/wg14/www/C99RationaleV5.10.pdf, p. 76, ll. 25-26.)

Concurrency: Atomic and volatile in C++11 memory model

Firstly, volatile does not imply atomic access. It is designed for things like memory mapped I/O and signal handling. volatile is completely unnecessary when used with std::atomic, and unless your platform documents otherwise, volatile has no bearing on atomic access or memory ordering between threads.

If you have a global variable which is shared between threads, such as:

std::atomic<int> ai;

then the visibility and ordering constraints depend on the memory ordering parameter you use for operations, and the synchronization effects of locks, threads and accesses to other atomic variables.

In the absence of any additional synchronization, if one thread writes a value to ai then there is nothing that guarantees that another thread will see the value in any given time period. The standard specifies that it should become visible "within a reasonable amount of time", but any given access may return a stale value.

The default memory ordering of std::memory_order_seq_cst provides a single global total order for all std::memory_order_seq_cst operations across all variables. This doesn't mean that you can't get stale values, but it does mean that the value you do get determines and is determined by where in this total order your operation lies.

If you have 2 shared variables x and y, initially zero, and have one thread write 1 to x and another write 2 to y, then a third thread that reads both may see any of (0,0), (1,0), (0,2) or (1,2), since there is no ordering constraint between the operations, and thus the operations may appear in any order in the global order.

If both writes are from the same thread, which does x=1 before y=2 and the reading thread reads y before x then (0,2) is no longer a valid option, since the read of y==2 implies that the earlier write to x is visible. The other 3 pairings (0,0), (1,0) and (1,2) are still possible, depending how the 2 reads interleave with the 2 writes.
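A sketch of that single-writer case (my own code, using the default seq_cst ordering):

#include <atomic>
#include <thread>

std::atomic<int> x{0}, y{0};

int main()
{
    std::thread writer([] {
        x.store(1); // memory_order_seq_cst by default
        y.store(2);
    });
    std::thread reader([] {
        int b = y.load(); // reads y first...
        int a = x.load(); // ...then x
        // Possible results: (a,b) == (0,0), (1,0) or (1,2).
        // (0,2) is impossible: seeing y==2 implies the store to x is visible.
        (void)a; (void)b;
    });
    writer.join();
    reader.join();
}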

If you use other memory orderings such as std::memory_order_relaxed or std::memory_order_acquire then the constraints are relaxed even further, and the single global ordering no longer applies. Threads don't even necessarily have to agree on the ordering of two stores to separate variables if there is no additional synchronization.

The only way to guarantee you have the "latest" value is to use a read-modify-write operation such as exchange(), compare_exchange_strong() or fetch_add(). Read-modify-write operations have an additional constraint that they always operate on the "latest" value, so a sequence of ai.fetch_add(1) operations by a series of threads will return a sequence of values with no duplicates or gaps. In the absence of additional constraints, there's still no guarantee which threads will see which values, though.

In particular, it is important to note that the use of an RMW operation does not force changes from other threads to become visible any quicker; it just means that if the changes are not seen by the RMW, then all threads must agree that those changes are later in the modification order of that atomic variable than the RMW operation. Stores from different threads can still be delayed by arbitrary amounts of time, depending on when the CPU actually issues the store to memory (rather than just to its own store buffer), how physically far apart the CPUs executing the threads are (in the case of a multi-processor system), and the details of the cache coherency protocol.
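A sketch of the no-duplicates, no-gaps property of fetch_add (my own example):

#include <atomic>
#include <thread>
#include <vector>

std::atomic<int> ai{0};

int main()
{
    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t)
        threads.emplace_back([] {
            for (int i = 0; i < 1000; ++i)
                ai.fetch_add(1); // RMW: operates on the latest value, returns it
        });
    for (auto& th : threads)
        th.join();
    // The 4000 values returned by fetch_add are exactly 0..3999: no
    // duplicates, no gaps, though which thread got which value is unspecified.
}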

Working with atomic operations is a complex topic. I suggest you read a lot of background material, and examine published code before writing production code with atomics. In most cases it is easier to write code that uses locks, and not noticeably less efficient.

Why does volatile exist?

volatile is needed if you are reading from a spot in memory that, say, a completely separate process/device/whatever may write to.

I used to work with dual-port RAM in a multiprocessor system in straight C. We used a hardware-managed 16-bit value as a semaphore to know when the other guy was done. Essentially we did this:

void waitForSemaphore()
{
    volatile uint16_t* semPtr = WELL_KNOWN_SEM_ADDR; /* well-known address of my semaphore */
    while ((*semPtr) != IS_OK_FOR_ME_TO_PROCEED)
        ;
}

Without volatile, the optimizer sees the loop as useless (The guy never sets the value! He's nuts, get rid of that code!) and my code would proceed without having acquired the semaphore, causing problems later on.


