Do I Have to Acquire Lock Before Calling Condition_Variable.Notify_One()

Do I have to acquire lock before calling condition_variable.notify_one()?

You do not need to be holding a lock when calling condition_variable::notify_one(). Holding it is not wrong either - it is still well-defined behavior and not an error - but it is unnecessary.

However, it might be a "pessimization" since whatever waiting thread is made runnable (if any) will immediately try to acquire the lock that the notifying thread holds. I think it's a good rule of thumb to avoid holding the lock associated with a condition variable while calling notify_one() or notify_all(). See Pthread Mutex: pthread_mutex_unlock() consumes lots of time for an example where releasing a lock before calling the pthread equivalent of notify_one() improved performance measurably.

Keep in mind that the lock() call in the while loop is necessary at some point, because the lock needs to be held during the while (!done) loop condition check. But it doesn't need to be held for the call to notify_one().
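As a concrete sketch, the pattern described here might look like the following. The names (i, done, signals(), waits()) follow this answer's discussion, but the exact code is a reconstruction, not the original question's:

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
int i = 0;          // the consumer's "work" flag
bool done = false;  // signals back that the consumer saw i == 1

void waits() {  // consumer: check-and-wait in one critical section
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [] { return i == 1; });
    done = true;  // still under the same lock
}

void signals() {  // producer
    std::unique_lock<std::mutex> lk(m);
    i = 1;                // update the shared state under the lock
    while (!done) {       // the condition check requires the lock...
        lk.unlock();
        cv.notify_one();  // ...but the notify itself does not
        std::this_thread::yield();
        lk.lock();        // re-acquire before re-checking 'done'
    }
}
```

Note how the lock is released just for the notify_one() call and re-acquired before the while (!done) check, which matches the rule of thumb above.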


2016-02-27: Large update to address some questions in the comments about whether there's a race condition if the lock isn't held for the notify_one() call. I know this update is late because the question was asked almost two years ago, but I'd like to address @Cookie's question about a possible race condition if the producer (signals() in this example) calls notify_one() just before the consumer (waits() in this example) is able to call wait().

The key is what happens to i - that's the object that actually indicates whether or not the consumer has "work" to do. The condition_variable is just a mechanism to let the consumer efficiently wait for a change to i.

The producer needs to hold the lock when updating i, and the consumer must hold the lock while checking i and calling condition_variable::wait() (if it needs to wait at all). In this case, the key is that it must be the same instance of holding the lock (often called a critical section) when the consumer does this check-and-wait. Since the critical section is held when the producer updates i and when the consumer checks-and-waits on i, there is no opportunity for i to change between when the consumer checks i and when it calls condition_variable::wait(). This is the crux for a proper use of condition variables.

The C++ standard says that condition_variable::wait() behaves like the following when called with a predicate (as in this case):

while (!pred())
    wait(lock);

There are two situations that can occur when the consumer checks i:

  • if i is 0 when the consumer calls cv.wait(), then i will still be 0 when the wait(lock) part of the implementation is called - the proper use of the locks ensures that. In this case the producer has no opportunity to call condition_variable::notify_one() in its while loop until after the consumer has called cv.wait(lk, []{return i == 1;}) (and the wait() call has done everything it needs to do to properly 'catch' a notify - wait() won't release the lock until it has done that). So in this case, the consumer cannot miss the notification.

  • if i is already 1 when the consumer calls cv.wait(), the wait(lock) part of the implementation will never be called because the while (!pred()) test will cause the internal loop to terminate. In this situation it doesn't matter when the call to notify_one() occurs - the consumer will not block.
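That second case is easy to demonstrate in isolation: if the predicate is already true, wait() returns without blocking even though nothing ever notifies. A minimal sketch (the names here are illustrative only):

```cpp
#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
int i = 1;  // the condition is already satisfied

bool wait_returns_immediately() {
    std::unique_lock<std::mutex> lk(m);
    cv.wait(lk, [] { return i == 1; });  // pred() is true on entry, so the
                                         // internal wait(lock) never runs
    return true;  // reached without any notify_one()/notify_all()
}
```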

The example here does have the additional complexity of using the done variable to signal back to the producer thread that the consumer has recognized that i == 1, but I don't think this changes the analysis at all because all of the access to done (for both reading and modifying) are done while in the same critical sections that involve i and the condition_variable.

If you look at the question that @eh9 pointed to, Sync is unreliable using std::atomic and std::condition_variable, you will see a race condition. However, the code posted in that question violates one of the fundamental rules of using a condition variable: It does not hold a single critical section when performing a check-and-wait.

In that example, the code looks like:

if (--f->counter == 0)      // (1)
    // we have zeroed this fence's counter, wake up everyone that waits
    f->resume.notify_all(); // (2)
else
{
    unique_lock<mutex> lock(f->resume_mutex);
    f->resume.wait(lock);   // (3)
}

You will notice that the wait() at #3 is performed while holding f->resume_mutex. But the check for whether or not the wait() is necessary at step #1 is not done while holding that lock at all (much less continuously for the check-and-wait), which is a requirement for proper use of condition variables. I believe that the person who had the problem with that code snippet assumed that since f->counter was a std::atomic type this would fulfill the requirement. However, the atomicity provided by std::atomic doesn't extend to the subsequent call to f->resume.wait(lock). In this example, there is a race between when f->counter is checked (step #1) and when the wait() is called (step #3).

That race does not exist in this question's example.

Is it necessary to acquire the lock and notify condition_variable if no thread needs to wake up?

Your on_pause function sets pause_ to true, while the predicate in check_pause verifies that it is set to false. Hence calling notify_all in on_pause is pointless, because the notified threads in check_pause will check the predicate and immediately go back to sleep. And since pause_ is atomic and you don't need to call notify_all, you also don't need the lock.

Does a mutex get unlocked when calling notify on a condition variable?

Notifying does not unlock the mutex. You can tell (indirectly) because you don't pass the lock to notify_one() the way you do to wait(), which does release the mutex while it waits.

On the other side, the notified thread(s) are notified "immediately". But they won't necessarily return from wait() immediately. Before they can return from wait() they must first re-acquire the mutex, so they will block there until the notifying thread releases it.
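This is easy to observe: immediately after notify_one(), the notifying thread still holds the mutex, so another thread's try_lock() fails. A small sketch (the function name is illustrative):

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;

// Returns true if the mutex is still held after notify_one().
bool still_locked_after_notify() {
    std::unique_lock<std::mutex> lk(m);
    cv.notify_one();  // no waiters here; the point is the mutex state
    bool acquired = false;
    std::thread probe([&] {
        acquired = m.try_lock();  // fails: we still hold the mutex
        if (acquired) m.unlock();
    });
    probe.join();
    return !acquired;
}
```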

Should condition_variable.notify_all be covered by mutex lock?

There is no risk in releasing the lock before the notification, as long as the mutex was held at some point in the interval between the change to the condition-test state and the notification.

{
    std::lock_guard<std::mutex> lk(_address_mutex);
    _address = address;
}
_cv.notify_all();

Here, the mutex was unlocked only after _address was changed, so there is no risk.

If we modify _address to be atomic, naively this looks correct:

{
    std::lock_guard<std::mutex> lk(_address_mutex);
}
_address = address;
_cv.notify_all();

but it is not: here, the mutex is not held at any point between the modification of the condition test and the notification, so a notification can be missed.

_address = address;
{
    std::lock_guard<std::mutex> lk(_address_mutex);
}
_cv.notify_all();

The above, however, becomes correct again (if more than a little strange).


The risk is the following interleaving: the waiting thread evaluates the condition test (as false) while holding the mutex; the test state is then changed and the notification is sent; only afterwards does the waiting thread begin listening for a notification and release the mutex.

waiting        | signalling
---------------+--------------
lock           |
test           |
               | test changed
               | notification
listen+unlock  |

The above is an example of a missed notification.

So long as the mutex is held at some point after the test change and before the notification, this cannot happen.


