Why Is the volatile Qualifier Used Throughout std::atomic

Why is the volatile qualifier used throughout std::atomic?

So that volatile objects can also be atomic. The relevant quote is:

The functions and operations are defined to work with volatile objects, so that variables that should be volatile can also be atomic. The volatile qualifier, however, is not required for atomicity.

Do my atomic<> variables need to be volatile or not?

No, atomic objects don't have to be volatile.

Why do all the member functions in std::atomic appear both with and without volatile?

It probably all stems from what volatile actually is; for that, see the answers below on why volatile exists. Because the use cases are quite slim compared to everyday application development, hardly anyone usually cares. I will assume that you do not have a practical scenario where you wonder whether you should use those volatile overloads, so I will try to come up with an example where you might need them (do not judge it too harshly for realism).

volatile std::sig_atomic_t status = ~SIGINT; // any value other than SIGINT
std::atomic<int> shareable(100);

void signal_handler(int signal)
{
    status = signal;
}

// thread 1
auto old = std::signal(SIGINT, signal_handler);
std::raise(SIGINT);
int s = status;
shareable.store(10, std::memory_order_relaxed);
std::signal(SIGINT, old);

// thread 2
int i = shareable.load(std::memory_order_relaxed);

memory_order_relaxed guarantees only atomicity and modification-order consistency; it imposes no ordering relative to other side effects. volatile accesses, on the other hand, cannot be reordered relative to other volatile accesses. So here we are: in thread 2 you can observe shareable equal to 10 while status is still not SIGINT. If, however, you make shareable volatile-qualified, that ordering must be guaranteed, and for that you need the member functions to be volatile-qualified.

Why would you do something like this at all? One case I can think of is old code that uses legacy volatile-based constructs which you cannot modify for one reason or another. Hard to imagine, but I guess there might also be a need for some guaranteed ordering between atomics and volatile inline assembly. The bottom line, IMHO: wherever possible, use the atomic library instead of volatile objects. But where some volatile objects remain that you cannot get rid of, and you want to use atomic objects alongside them, you may need the volatile qualifier on the atomics to get the proper ordering guarantees, and for that you need the overloads.

UPDATE

But if all I wanted was to have atomic types usable as both volatile and non-volatile, why not just implement the former?

struct Foo {
    int k;
};

template <typename T>
struct Atomic {
    void store(T desired) volatile { t = desired; }
    T t;
};

int main(int argc, char** argv) {
    // error: no viable overloaded '='
    //        void store(T desired) volatile { t = desired; }
    Atomic<Foo>().store(Foo());
    return 0;
}

The same goes for load and the other operations: these are usually non-trivial implementations that require a copy-assignment operator and/or copy constructor, each of which can itself be volatile-qualified or not.

If volatile is useless for threading, why do atomic operations require pointers to volatile data?

It suddenly came to me that I had simply misinterpreted the meaning of volatile*. Much as const* means the pointee shouldn't be changed, volatile* means the pointee shouldn't be cached in a register. This is an additional constraint that can be freely added: just as you can convert a char* to a const char*, you can convert an int* to a volatile int*.

So applying the volatile modifier to the pointees simply ensures that atomic functions can be used on already volatile variables. For non-volatile variables, adding the qualifier is free. My mistake was to interpret the presence of the keyword in the prototypes as an incentive to use it rather than as a convenience to those using it.

Is there any sense in making std::atomic objects volatile-qualified?

No, there is absolutely no sense in making std::atomic also volatile: inside std::atomic, the code already deals with the possibility that the variable may change at any time, and with the need for other processors to be "told" that it has changed (the "telling" of other processors is not covered by volatile).

The only time you really need volatile is when you have a pointer to a piece of hardware that your code is controlling, for example reading a counter in a timer, checking which frame buffer is currently active, or telling a network card where to read the data for the next packet to send. Those sorts of things need volatile, because the compiler cannot otherwise know that their values can change at any time.

What's the use case of volatile operations on std::atomic<T>?

There is a general usefulness to volatile in that the compiler MUST not optimise away accesses to such a variable. In this case, however, I think it's mainly because the input MAY be volatile: just as with const, you can "add" but not "remove" the volatile attribute of a passed-in parameter.

Thus:

int foo(volatile int *a)
{
    ...
}

will accept:

int x;
volatile int y;

foo(&x);
foo(&y);

whereas if you hadn't written volatile, the compiler would not accept the foo(&y); variant.

Can I replace the atomic with a volatile in a one-reader/one-writer queue?

As already pointed out in several other comments, volatile is unrelated to multithreading, so it should not be used here. However, the reason a volatile performs better than an atomic is simply that with a volatile, ++n translates to plain load, inc, and store instructions, while with an atomic it translates to a more expensive lock xadd (assuming you compile for x86).

But since this is only a single-reader single-writer queue, you don't need expensive read-modify-write operations:

struct queue {
    queue() {
        tail = head = &reserved;
        n = 0;
    }
    void push(item *it) {
        tail->next = it;
        tail = it;
        auto new_n = n.load(std::memory_order_relaxed) + 1;
        n.store(new_n, std::memory_order_release);
    }
    item* pop() {
        while (n.load(std::memory_order_acquire) == used)
            ;  // spin until the producer has pushed another item
        ++used;
        head = head->next;
        return head;
    }

    item reserved;
    item *tail, *head;

    int used = 0;        // only ever touched by the consumer thread
    std::atomic<int> n;  // number of items pushed so far
};

This should perform just as well as a volatile version. If the acquire-load in pop "sees" the value written by the store-release in push, the two operations synchronize with each other, thereby establishing the required happens-before relation.

Why does volatile exist?

volatile is needed if you are reading from a spot in memory that, say, a completely separate process/device/whatever may write to.

I used to work with dual-port RAM in a multiprocessor system in straight C. We used a hardware-managed 16-bit value as a semaphore to know when the other guy was done. Essentially we did this:

void waitForSemaphore()
{
    volatile uint16_t* semPtr = WELL_KNOWN_SEM_ADDR; /* well-known address of my semaphore */
    while ((*semPtr) != IS_OK_FOR_ME_TO_PROCEED)
        ;
}

Without volatile, the optimizer sees the loop as useless (The guy never sets the value! He's nuts, get rid of that code!) and my code would proceed without having acquired the semaphore, causing problems later on.
