Why Do I Need to Acquire a Lock to Modify a Shared "Atomic" Variable Before Notifying Condition_Variable

Is it necessary to acquire the lock and notify condition_variable if no thread needs to wake up?

Your on_pause function sets pause_ to true, while the predicate in check_pause verifies that it is set to false. Hence calling notify_all in on_pause is pointless, because the notified threads in check_pause will check the predicate and immediately go back to sleep. And since pause_ is atomic and you don't need to call notify_all, you also don't need the lock.
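
For illustration, a minimal sketch of that shape (the names pause_, on_pause, and check_pause follow the question; everything else here is assumed):

#include <atomic>
#include <condition_variable>
#include <mutex>

std::atomic<bool> pause_{false};
std::mutex m_;
std::condition_variable cv_;

void check_pause() {
    std::unique_lock<std::mutex> lock(m_);
    cv_.wait(lock, [] { return !pause_; });   // waiters proceed only while pause_ is false
}

void on_pause() {
    // Setting pause_ to true can never satisfy the predicate above, so a
    // notify_all here would only wake threads that immediately sleep again;
    // and without the notify, the lock is not needed either.
    pause_ = true;
}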

Is it always necessary for a notifying thread to lock the shared data during modification?

For the simple case shown below, is it necessary for the main thread to lock "stop" when it modifies it?

Yes, to prevent a race condition. Aside from the issue of accessing shared data from different threads, which by itself could be fixed with std::atomic, imagine this order of events:

worker_thread: checks the value of `stop`; it is false
main_thread: sets `stop` to true and sends the signal
worker_thread: goes to sleep on the condition variable

In this situation the worker thread could sleep forever: it would miss the fact that `stop` was set to true simply because the wakeup signal was sent before it started waiting.

Acquiring the mutex when modifying the shared data is required because only then can the worker thread treat checking the condition and either going to sleep or acting on it as a single atomic operation.
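
A minimal sketch of that stop/worker shape (names assumed, not the question's exact code) shows where the lock has to be held:

#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
bool stop = false;   // even as std::atomic<bool>, the mutex would still be needed

void worker_thread() {
    std::unique_lock<std::mutex> lock(m);
    // Checking `stop` and falling asleep happen while the mutex is held,
    // so the store + notify in the main thread cannot slip in between.
    cv.wait(lock, [] { return stop; });
    // ... react to the stop request ...
}

void request_stop() {     // called by the main thread
    {
        std::lock_guard<std::mutex> lock(m);   // modify under the same mutex
        stop = true;
    }
    cv.notify_one();      // the notification itself may be done after unlocking
}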

Does it make sense to use different mutexes with the same condition variable?

I think the wording of cppreference is somewhat awkward here. I think they were just trying to differentiate the mutex used in conjunction with the condition variable from other unrelated mutexes.

It makes no sense to use a condition variable with different mutexes. The mutex is used to make any change to the actual semantic condition (in the example it is just the variable ready) atomic and it must therefore be held whenever the condition is updated or checked. Also it is needed to ensure that a waiting thread that is unblocked can immediately check the condition without again running into race conditions.

I understand it as follows:
It is OK not to hold the lock on the mutex associated with the condition variable when notify_one is called (or any mutex at all); however, it is also OK to hold other mutexes for different reasons.

The pessimisation is not that only one mutex is used, but holding this mutex for longer than necessary when you know that another thread is supposed to immediately try to acquire it after being notified.

I think that my interpretation agrees with the explanation given in cppreference on condition variable:

The thread that intends to modify the shared variable has to

  • acquire a std::mutex (typically via std::lock_guard)

  • perform the modification while the lock is held

  • execute notify_one or notify_all on the std::condition_variable (the lock does not need to be held for notification)

Even if the shared variable is atomic, it must be modified under the mutex in order to correctly publish the modification to the waiting thread.

Any thread that intends to wait on std::condition_variable has to

  • acquire a std::unique_lock<std::mutex>, on the same mutex as used to protect the shared variable

Furthermore, the standard expressly forbids using different mutexes for wait, wait_for, or wait_until:

lock.mutex() returns the same value for each of the lock arguments supplied by all concurrently waiting (via wait, wait_for, or wait_until) threads.

Do I have to acquire lock before calling condition_variable.notify_one()?

You do not need to be holding a lock when calling condition_variable::notify_one(); holding one is not wrong either, in the sense that it is still well-defined behavior and not an error.

However, it might be a "pessimization" since whatever waiting thread is made runnable (if any) will immediately try to acquire the lock that the notifying thread holds. I think it's a good rule of thumb to avoid holding the lock associated with a condition variable while calling notify_one() or notify_all(). See Pthread Mutex: pthread_mutex_unlock() consumes lots of time for an example where releasing a lock before calling the pthread equivalent of notify_one() improved performance measurably.

Keep in mind that the lock() call in the while loop is necessary at some point, because the lock needs to be held during the while (!done) loop condition check. But it doesn't need to be held for the call to notify_one().


2016-02-27: Large update to address some questions in the comments about whether there's a race condition if the lock isn't held for the notify_one() call. I know this update is late because the question was asked almost two years ago, but I'd like to address @Cookie's question about a possible race condition if the producer (signals() in this example) calls notify_one() just before the consumer (waits() in this example) is able to call wait().

The key is what happens to i - that's the object that actually indicates whether or not the consumer has "work" to do. The condition_variable is just a mechanism to let the consumer efficiently wait for a change to i.
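
For reference, a rough reconstruction of the shape of that example (the exact original code is not reproduced here; the names i, done, waits(), and signals() follow the discussion):

#include <condition_variable>
#include <mutex>
#include <thread>

std::condition_variable cv;
std::mutex cv_m;
int i = 0;
bool done = false;

void waits()    // consumer
{
    std::unique_lock<std::mutex> lk(cv_m);
    cv.wait(lk, [] { return i == 1; });   // check-and-wait inside one critical section
    done = true;
}

void signals()  // producer
{
    // The original example loops until `done` becomes true; the essential
    // part is that `i` is only ever modified while cv_m is held.
    {
        std::lock_guard<std::mutex> lk(cv_m);
        i = 1;
    }
    cv.notify_one();   // the notify itself does not require the lock
}

int main()
{
    std::thread consumer(waits), producer(signals);
    consumer.join();
    producer.join();
}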

The producer needs to hold the lock when updating i, and the consumer must hold the lock while checking i and calling condition_variable::wait() (if it needs to wait at all). In this case, the key is that it must be the same instance of holding the lock (often called a critical section) when the consumer does this check-and-wait. Since the critical section is held when the producer updates i and when the consumer checks-and-waits on i, there is no opportunity for i to change between when the consumer checks i and when it calls condition_variable::wait(). This is the crux for a proper use of condition variables.

The C++ standard says that condition_variable::wait() behaves like the following when called with a predicate (as in this case):

while (!pred())
    wait(lock);

There are two situations that can occur when the consumer checks i:

  • if i is 0 when the consumer calls cv.wait(), then i will still be 0 when the wait(lock) part of the implementation is called - the proper use of the locks ensures that. In this case the producer has no opportunity to call the condition_variable::notify_one() in its while loop until after the consumer has called cv.wait(lk, []{return i == 1;}) (and the wait() call has done everything it needs to do to properly 'catch' a notify - wait() won't release the lock until it has done that). So in this case, the consumer cannot miss the notification.

  • if i is already 1 when the consumer calls cv.wait(), the wait(lock) part of the implementation will never be called because the while (!pred()) test will cause the internal loop to terminate. In this situation it doesn't matter when the call to notify_one() occurs - the consumer will not block.

The example here does have the additional complexity of using the done variable to signal back to the producer thread that the consumer has recognized that i == 1, but I don't think this changes the analysis at all, because all of the accesses to done (both reading and modifying) happen within the same critical sections that involve i and the condition_variable.

If you look at the question that @eh9 pointed to, Sync is unreliable using std::atomic and std::condition_variable, you will see a race condition. However, the code posted in that question violates one of the fundamental rules of using a condition variable: It does not hold a single critical section when performing a check-and-wait.

In that example, the code looks like:

if (--f->counter == 0)          // (1)
    // we have zeroed this fence's counter, wake up everyone that waits
    f->resume.notify_all();     // (2)
else
{
    unique_lock<mutex> lock(f->resume_mutex);
    f->resume.wait(lock);       // (3)
}

You will notice that the wait() at #3 is performed while holding f->resume_mutex. But the check for whether or not the wait() is necessary at step #1 is not done while holding that lock at all (much less continuously for the check-and-wait), which is a requirement for proper use of condition variables. I believe that the person who had the problem with that code snippet thought that since f->counter was a std::atomic type this would fulfill the requirement. However, the atomicity provided by std::atomic doesn't extend to the subsequent call to f->resume.wait(lock). In this example, there is a race between when f->counter is checked (step #1) and when the wait() is called (step #3).

That race does not exist in this question's example.
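
For completeness, a fix along the lines the answer describes would hold f->resume_mutex across the whole check-and-wait, roughly like this (a sketch only, assuming a one-shot fence whose counter is not reused; the type and function names are mine):

#include <condition_variable>
#include <mutex>

struct fence {                 // assumed minimal shape of the fence being discussed
    int counter;               // no longer needs to be atomic: always touched under the mutex
    std::mutex resume_mutex;
    std::condition_variable resume;
};

void arrive_and_wait(fence* f)
{
    std::unique_lock<std::mutex> lock(f->resume_mutex);
    if (--f->counter == 0)
        f->resume.notify_all();    // the last arrival decrements and notifies under the lock
    else
        f->resume.wait(lock, [f] { return f->counter == 0; });  // the predicate is re-checked
                                                                 // under the same lock, which
                                                                 // closes the race
}

Notifying while still holding the lock is the slight pessimization discussed earlier; it is correct, just possibly a bit slower.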

std::condition_variable memory writes visibility

I guess you are mixing up the memory ordering of so-called atomic values and the mechanisms of classic lock-based synchronization.

When you have a datum that is shared between threads, say an int, one thread cannot simply read it while another thread might be writing to it at the same time. Otherwise we would have a data race.

For a long time, the classic way around this has been lock-based synchronization:
The threads share at least a mutex and the int. To read or to write, any thread has to acquire the lock first, which means waiting on the mutex; mutexes are built so that this contention is safe. The thread that wins the mutex can read or change the int and should then unlock it, so others can read or write too. Using a condition variable, as you did, just makes the pattern "readers wait for a change of a value by a writer" more efficient: the readers are woken up by the cv instead of periodically locking, reading, and unlocking, which would be busy waiting.

So because you hold the lock after waiting on the mutex, or in your case after (correctly - the mutex is still needed) waiting on the condition variable, you can change the int, and readers will read the new value after the writer has written it, never the old one.

UPDATE: One thing I have to add, which might also be the cause of the confusion: condition variables are subject to so-called spurious wakeups. That means a reading thread may wake up, with the mutex locked, even though the writer never notified any thread. So you have to check whether the writer actually woke you up, which is usually done by the writer changing another datum just to signal this, or, if it is suitable, by using the same datum you already wanted to share. The lambda-predicate overload of std::condition_variable::wait exists just to make this check-and-go-back-to-sleep code look a bit prettier. Based on your question, I don't know whether you want to use your values for this job.
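
As an illustration, a reader using that predicate overload might look roughly like this (dataReady and the other names are assumed here, not taken from your code):

#include <condition_variable>
#include <mutex>

std::mutex mtx;
std::condition_variable cv;
bool dataReady = false;   // assumed flag, set by the writer while holding mtx

void reader()
{
    std::unique_lock<std::mutex> lock(mtx);
    cv.wait(lock, [] { return dataReady; });  // the flag is re-checked on every wakeup,
                                              // so a spurious wakeup just sleeps again
    // ... read the shared data ...
}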

However, the snippet for the "main" thread is incorrect or incomplete:
You are not synchronizing on the mutex in order to change the values.
You have to hold the lock for that, but notifying can be done without the lock:

std::unique_lock lock(mutex);
fillData(values);
lock.unlock();
cv.notify_all();

But these mutex-based patterns have drawbacks and are slow: only one thread at a time can do something. This is where so-called atomics, like std::atomic<int>, come into play. They can be written and read concurrently by multiple threads without a mutex. Memory ordering is only something to consider there, and an optimization, for cases where you use several of them in a meaningful way or where you don't need the "after the write, I never see the old value" guarantee. With the default memory ordering, memory_order_seq_cst, you are also fine.
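
A tiny sketch of that, with assumed names:

#include <atomic>
#include <thread>

std::atomic<int> counter{0};   // can be touched from several threads without a mutex

void work()
{
    counter.fetch_add(1);      // uses the default memory_order_seq_cst
    // counter.fetch_add(1, std::memory_order_relaxed);  // possible relaxation when only
                                                         // the final count matters
}

int main()
{
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    return counter.load() == 2 ? 0 : 1;   // both increments are always visible here
}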


