Do I Have to Use Atomic<Bool> for "Exit" Bool Variable

Do I have to use std::atomic<bool> for an "exit" bool variable?

Yes.

Either use atomic<bool>, or use manual synchronization through (for instance) an std::mutex. Your program currently contains a data race, with one thread potentially reading a variable while another thread is writing it. This is Undefined Behavior.

Per Paragraph 1.10/21 of the C++11 Standard:

The execution of a program contains a data race if it contains two conflicting actions in different threads, at least one of which is not atomic, and neither happens before the other. Any such data race results in undefined behavior.

The definition of "conflicting" is given in Paragraph 1.10/4:

Two expression evaluations conflict if one of them modifies a memory location (1.7) and the other one accesses or modifies the same memory location.
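For illustration, here is a minimal sketch of the atomic approach. The names worker and exit_requested are invented for this example; they are not from the original question.

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> exit_requested{false};

void worker()
{
    while (!exit_requested.load())   // atomic read: no data race
    {
        // ... do a unit of work ...
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

int main()
{
    std::thread t(worker);
    // ... some time later ...
    exit_requested.store(true);      // atomic write: the worker will observe it
    t.join();
}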

When do I really need to use std::atomic<bool> instead of bool?

No type in C++ is "atomic by nature" unless it is an std::atomic*-something. That's because the standard says so.

In practice, the actual hardware instructions emitted to manipulate an std::atomic<bool> may (or may not) be the same as those for an ordinary bool, but atomicity is a larger concept with wider ramifications (e.g. restrictions on compiler re-ordering). Furthermore, some operations (like negation) are overloaded on the atomic type and produce a distinctly different instruction on the hardware than the native, non-atomic read-modify-write sequence used for a non-atomic variable.
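As a small sketch of that last point, here is an increment (the classic read-modify-write) on a plain int next to one on std::atomic<int>; the choice of int and fetch_add is my illustration, not taken from the original answer.

#include <atomic>

int plain_counter = 0;
std::atomic<int> atomic_counter{0};

void bump()
{
    plain_counter += 1;            // load, add, store: three separable steps;
                                   // a data race if another thread does the same
    atomic_counter.fetch_add(1);   // one indivisible read-modify-write
                                   // (e.g. lock xadd on x86), which the compiler
                                   // also may not freely reorder
}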

Is it necessary to use a std::atomic to signal that a thread has finished execution?

Someone who commented on the accepted answer claims that one cannot use a simple bool variable as a signal: without a memory barrier the code is broken, and using std::atomic would be correct.

The commenter is right: a simple bool is insufficient, because the non-atomic writes made by the thread that sets thread_finished to true can be re-ordered.

Consider a thread that sets a static variable x to some very important number, and then signals its exit, like this:

x = 42;
thread_finished = true;

When your main thread sees thread_finished set to true, it assumes that the worker thread has finished. However, when your main thread examines x, it may find it set to a wrong number, because the two writes above may have been re-ordered.

Of course this is only a simplified example to illustrate the general problem. Using std::atomic for your thread_finished variable adds the necessary memory ordering (a memory barrier), making sure that all writes before it are visible once the flag is seen as true. This fixes the potential problem of out-of-order writes.

Another issue is that reads of a plain, non-atomic variable can be optimized out (for example, hoisted out of a polling loop), so the main thread might never notice the change in the thread_finished flag.



Important note: making your thread_finished volatile is not going to fix the problem; in fact, volatile should not be used in conjunction with threading - it is intended for working with memory-mapped hardware.
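A minimal sketch of the corrected signalling pattern: the variable names x and thread_finished follow the example above, and the busy-wait loop is only for illustration.

#include <atomic>
#include <cassert>
#include <thread>

int x = 0;
std::atomic<bool> thread_finished{false};

void worker()
{
    x = 42;                           // ordinary write
    thread_finished.store(true);      // seq_cst by default: the write to x
                                      // cannot be moved past this store
}

int main()
{
    std::thread t(worker);
    while (!thread_finished.load())   // once true is observed...
        ;                             // (busy-wait, only for illustration)
    assert(x == 42);                  // ...the write to x is guaranteed visible
    t.join();
}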

Is std::atomic<bool> or a normal global bool good in a single thread?

If all access to the boolean is protected by a mutex, then it does not matter either way (in terms of correctness). Use the atomic<> or don't. The correctness of your code will not suffer.

But performance might suffer: besides the fact that you would be adding atomic operations to something a mutex already protects, std::atomic<> itself has the potential to use a mutex under the hood. std::atomic_flag, however, is guaranteed to be lock-free. See the cppreference documentation for more.
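If you want to see what you are actually getting on a given platform, you can query the lock-free property at run time. A small sketch, assuming nothing beyond the standard library:

#include <atomic>
#include <cstdio>

int main()
{
    std::atomic<bool> flag{false};

    // May be implemented with an internal lock on exotic platforms.
    std::printf("atomic<bool> lock-free: %d\n", (int)flag.is_lock_free());

    // std::atomic_flag is the only type the standard guarantees to be lock-free.
    std::atomic_flag guard = ATOMIC_FLAG_INIT;
    bool was_set = guard.test_and_set();   // atomically set and return the previous value
    std::printf("atomic_flag previously set: %d\n", (int)was_set);
}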

If you have a single thread accessing that boolean, then no point having any synchronization at all.

Why does a boolean need to be atomic?

What exactly does an AtomicBool protect us from? Why can we not use a regular bool from different threads (other than the fact that the compiler won't let us)?

Anything that might go wrong, whether you can think of it or not. I hate to follow this up with something I can think of, because it doesn't matter. The rules say it's not guaranteed to work, and that should end it. Thinking that it can only fail if you can think of a way for it to fail is just wrong.

But here's one way:

// Release the lock
thread_lock = false;

Say this particular CPU doesn't have a particularly good way to set a boolean to false without using a register but does have a good single operation that negates a boolean and tests if it's zero without using a register. On this CPU, in conditions of register pressure, this might get optimized to:

  1. Negate thread_lock and test if it's zero.
  2. If the copy of thread_lock was false, negate thread_lock again.

What happens if, in between steps 1 and 2, another thread observes thread_lock to be true, even though it was false going into this operation and will be false when it's done?

std::atomic_bool for cancellation flag: is std::memory_order_relaxed the correct memory order?

As long as there is no dependency between the cancel_requested flag and anything else, you should be safe.

The code as shown looks OK, assuming you use cancel_requested only to expedite the shutdown, but also have a provision for an orderly shutdown, such as a sentinel entry in the queue (and of course that the queue itself is synchronized).

Which means your code actually looks like this:

std::thread work_thread;
std::atomic_bool cancel_requested{false};
std::mutex work_queue_mutex;
std::condition_variable work_queue_filled_cond;
std::queue<Element> work_queue;   // Element: whatever type the work items have

void thread_func()
{
    while (!cancel_requested.load(std::memory_order_relaxed))
    {
        std::unique_lock<std::mutex> lock(work_queue_mutex);
        work_queue_filled_cond.wait(lock, []{ return !work_queue.empty(); });
        auto element = work_queue.front();
        work_queue.pop();
        lock.unlock();
        if (element == exit_sentinel)
            break;
        process_next_element(element);
    }
}

void cancel()
{
    std::unique_lock<std::mutex> lock(work_queue_mutex);
    work_queue.push(exit_sentinel);
    work_queue_filled_cond.notify_one();
    lock.unlock();
    cancel_requested.store(true, std::memory_order_relaxed);
    work_thread.join();
}

And if we're that far, then cancel_requested may just as well become a regular variable; the code even becomes simpler.

std::thread work_thread;
bool cancel_requested = false;
std::mutex work_queue_mutex;
std::condition_variable work_queue_filled_cond;
std::queue<Element> work_queue;

void thread_func()
{
    while (true)
    {
        std::unique_lock<std::mutex> lock(work_queue_mutex);
        work_queue_filled_cond.wait(lock, []{ return cancel_requested || !work_queue.empty(); });
        if (cancel_requested)
            break;
        auto element = work_queue.front();
        work_queue.pop();
        lock.unlock();
        process_next_element(element);
    }
}

void cancel()
{
    std::unique_lock<std::mutex> lock(work_queue_mutex);
    cancel_requested = true;
    work_queue_filled_cond.notify_one();
    lock.unlock();
    work_thread.join();
}

memory_order_relaxed is generally hard to reason about, because it blurs the general notion of sequentially executing code. Its usefulness is therefore very limited, as Herb Sutter explains in his "atomic<> Weapons" talk.

Note that std::thread::join() by itself acts as a memory barrier between the two threads.
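To illustrate that last point, a minimal sketch (names invented): once join() returns, the main thread can read data the worker wrote without any atomics or locks.

#include <thread>

int result = 0;                     // plain, non-atomic data

void worker() { result = 42; }

int main()
{
    std::thread t(worker);
    t.join();                       // everything the thread did happens-before join() returns
    return result == 42 ? 0 : 1;    // safe: no data race, no atomics needed
}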

Is it ok to read a shared boolean flag without locking it when another thread may set it (at most once)?

It is never OK to read something possibly modified in a different thread without synchronization. What level of synchronization is needed depends on what you are actually reading. For primitive types, you should have a look at atomic reads, e.g. in the form of std::atomic<bool>.

The reason synchronization is always needed is that the processor may hold the value in a cache line (and the compiler may keep it in a register), and without synchronization it has no reason to refresh that copy with a value possibly changed in a different thread. Worse yet, without synchronization it may even write back a stale value when something stored close to it is modified and written out.
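A small sketch of one way this shows up in practice (names invented): with a plain bool the compiler is entitled to assume no other thread changes it, so this version is broken.

#include <thread>

bool stop = false;                  // plain bool, no synchronization: data race

void spin()
{
    while (!stop)                   // compiler may load 'stop' once and keep it in a
        ;                           // register, so this loop may never terminate
}

int main()
{
    std::thread t(spin);
    stop = true;                    // the worker may never observe this write
    t.join();                       // with std::atomic<bool> the loop is guaranteed to exit
}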

Interoperabilty between C and C++ atomics

The atomic_bool type in C and the std::atomic<bool> type in C++ (typedefed as std::atomic_bool) are two different types that are unrelated. Passing a std::atomic_bool to a C function expecting C's atomic_bool is Undefined Behavior. That it works at all is a combination of luck and the simple definitions of these types being compatible.

If the C++ code needs to call a C function that expects C's atomic_bool, then that is what it must use. However, the <stdatomic.h> header does not exist in C++ (prior to C++23). You'll have to provide a way for the C++ code to call C code to get a pointer to the atomic variable you need in a way that hides the type. (For example, declare a struct that holds the atomic bool; the C++ code only knows that the type exists and only deals in pointers to it.)
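A minimal sketch of that opaque-struct approach, shown as one listing although it spans two translation units; the file and function names (exit_flag.h, exit_flag_set, exit_flag_get) are invented for this example. The shared header compiles as both C and C++, and only the C side ever names atomic_bool.

/* exit_flag.h -- shared between C and C++; exposes only an incomplete type */
#ifdef __cplusplus
extern "C" {
#endif

struct exit_flag;                            /* layout hidden from C++ */
struct exit_flag *exit_flag_create(void);
void exit_flag_destroy(struct exit_flag *f);
void exit_flag_set(struct exit_flag *f);
int  exit_flag_get(struct exit_flag *f);     /* returns 0 or 1 */

#ifdef __cplusplus
}
#endif

/* exit_flag.c (C side, sketched) -- the only place atomic_bool appears:
 *   #include <stdatomic.h>
 *   #include <stdbool.h>
 *   struct exit_flag { atomic_bool value; };
 *   void exit_flag_set(struct exit_flag *f) { atomic_store(&f->value, true); }
 *   int  exit_flag_get(struct exit_flag *f) { return atomic_load(&f->value); }
 */

// C++ side: manipulates the flag only through the pointer and the C functions.
void wait_until_set(exit_flag *f)
{
    while (!exit_flag_get(f))
    {
        // ... do other work or sleep ...
    }
}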

Should I use compare_exchange_weak (or strong) when checking whether an atomic<bool> variable is set?

Wow, that is a confusing piece of code. First, the obvious point: the compare_exchange_weak version might change the value of the underlying atomic; b.load() does not, so they are not equivalent.

To explain...

while(!b.compare_exchange_weak(expected,true) && !expected);

b.compare_exchange_weak(expected,true) says

if b is equal to expected (false), make it true; otherwise set expected to the current value of b (which is then true)

Now you might ask, why does the code check the value after the exchange?

It is checking to see if another thread has already set the value, which causes the loop to exit. But then, if the code exits no matter what the value of b is, why the loop?

The loop is there because the weak version of the function can fail spuriously: it may return false without exchanging anything, for no reason at all.

So this code is equivalent to

b.compare_exchange_strong(expected,true);

I'm not sure why the author didn't write it this way, though perhaps I'm missing the context for it. I'd certainly have a problem with someone putting that in production code without a nice explanatory comment above it!
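For reference, a minimal self-contained sketch of the compare_exchange_strong semantics described above; the names b and try_claim are invented for this example.

#include <atomic>
#include <cstdio>
#include <thread>

std::atomic<bool> b{false};

void try_claim(int id)
{
    bool expected = false;
    // If b == expected (false), atomically set it to true and return true.
    // Otherwise copy the current value of b into expected and return false.
    if (b.compare_exchange_strong(expected, true))
        std::printf("thread %d set the flag\n", id);
    else
        std::printf("thread %d found it already set (expected is now %d)\n", id, (int)expected);
}

int main()
{
    std::thread t1(try_claim, 1), t2(try_claim, 2);
    t1.join();
    t2.join();
}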

See https://en.cppreference.com/w/cpp/atomic/atomic/compare_exchange for details.
See https://newbedev.com/understanding-std-atomic-compare-exchange-weak-in-c-11 for a similar discussion.


