Creating a Lock That Preserves the Order of Locking Attempts in C++11

many locks on same mutex, one locks forever

A std::mutex does not guarantee fairness, so it is possible for one thread to starve another. The first thing you can try is to insert std::this_thread::yield() after unlocking and see if it helps. If this does not help, then your code likely has a logic error. Post some portion of the code and we can help you diagnose further.

Mutex ownership queue order

It's pretty safe to say that for your purpose it's arbitrary, in the sense that the operating system will wake up one of the threads waiting for the Mutex and grant ownership to it, but which thread is chosen is non-deterministic.

You could implement your own scheme for priority between your threads using a global priority index. If one of the threads waiting for the Mutex receives it while not being first in line, it gives the Mutex up right away and goes back to waiting for it to become available again. This repeats until the thread both acquires the Mutex and is first in line according to its own priority index compared to the global one.

mutex lock priority

If both threads have equal priority, there is no such guarantee by standard mutex implementations. Some OSes keep a list of "who's waiting" and will pick the longest-waiting thread when you release the mutex, but that is an implementation detail, not something you can reliably depend on.

And imagine that you have two threads, each running something like this:

m.lock();
(...) // Some code that needs locking against updates.
m.unlock();
(...) // Clearly not the same code as above.
m.lock();
(...) // Some other code that needs locking against updates.
m.unlock();

Would you want the above code to switch threads at the second lock, every time?

By the way, if both threads hold the lock for their entire loop, what is the point of the lock?

Double-Checked Lock Singleton in C++11

That implementation is not race-free. The atomic store of the singleton, while it uses release semantics, will only synchronize with the matching acquire operation—that is, the load operation that is already guarded by the mutex.

It's possible that the outer relaxed load would read a non-null pointer before the locking thread finished initializing the singleton.

The acquire that is guarded by the lock, on the other hand, is redundant. It will synchronize with any store with release semantics on another thread, but at that point (thanks to the mutex) the only thread that can possibly store is the current thread. That load doesn't even need to be atomic—no stores can happen from another thread.
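Putting the two fixes together (acquire on the outer load, relaxed inside the mutex), a corrected double-checked pattern might look like the following sketch; the `Widget` type is a placeholder of my own, not from the original question:

```cpp
#include <atomic>
#include <mutex>

struct Widget { int value = 42; };  // placeholder singleton type

std::atomic<Widget*> instance{nullptr};
std::mutex init_mutex;

Widget* get_instance() {
    // The outer load must be acquire so it synchronizes with the
    // release store made by whichever thread did the initialization.
    Widget* p = instance.load(std::memory_order_acquire);
    if (p == nullptr) {
        std::lock_guard<std::mutex> lk(init_mutex);
        // Under the mutex a relaxed load suffices: no other thread
        // can be storing concurrently.
        p = instance.load(std::memory_order_relaxed);
        if (p == nullptr) {
            p = new Widget;
            instance.store(p, std::memory_order_release);
        }
    }
    return p;
}
```

In C++11 the usual alternative is a function-local static (a "Meyers singleton"), whose initialization the language already guarantees to be thread-safe.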

See Anthony Williams' series on C++0x multithreading.

unique_lock same mutex in different thread

Because right after acquiring a lock on the mutex, each thread calls lk.unlock(), and then another thread can acquire a lock on the mutex. Only if a thread tries to lock a mutex that is already locked by a different thread does it have to wait for the mutex to become free. Since every thread in your code eventually calls lk.unlock(), there is always a chance for a different thread to get a lock on the mutex, and there is no deadlock.

A deadlock would occur for example if you have two mutexes and two threads try to lock them in different order:

// thread A
std::unique_lock<std::mutex> lk1(mutex1);
std::unique_lock<std::mutex> lk2(mutex2); // X

// thread B
std::unique_lock<std::mutex> lk2(mutex2);
std::unique_lock<std::mutex> lk1(mutex1); // X

Here it can happen that thread A locks mutex1 and thread B locks mutex2; then both wait at X for the other thread to release the other mutex, which will never happen. It's a deadlock.

2.

A lock is merely a slim RAII type. Its only purpose is to call lock() on the mutex when it is created and unlock() when it is destroyed. You can write the same code without the lock by manually locking and unlocking the mutex, but if an exception is thrown while the mutex is locked, it will never be unlocked.

What are the exact inter-thread reordering constraints on mutex.lock() and .unlock() in c++11 and up?

Almost a duplicate: How C++ Standard prevents deadlock in spinlock mutex with memory_order_acquire and memory_order_release? - that's using hand-rolled std::atomic spinlocks, but the same reasoning applies:

The compiler can't compile-time reorder mutex acquire and release in ways that could introduce a deadlock where the C++ abstract machine doesn't have one. That would violate the as-if rule.

It would effectively be introducing an infinite loop in a place the source doesn't have one, violating this rule:

ISO C++ current draft, section 6.9.2.3 Forward progress

18. An implementation should ensure that the last value (in modification order) assigned by an atomic or synchronization operation will become visible to all other threads in a finite period of time.


The ISO C++ standard doesn't distinguish compile-time vs. run-time reordering. In fact it doesn't say anything about reordering. It only says things about when you're guaranteed to see something because of synchronizes-with effects, and the existence of a modification order for each atomic object, and the total order of seq_cst operations. It's a misreading of the standard to take it as permission to nail things down into asm in a way that requires mutexes to be taken in a different order than source order.

Taking a mutex is essentially equivalent to an atomic RMW with memory_order_acquire on the mutex object. (And in fact the ISO C++ standard even groups them together in 6.9.2.3 :: 18 quoted above.)

You're allowed to see an earlier release or relaxed store or even RMW appear inside a mutex lock/unlock critical section instead of before it. But the standard requires an atomic store (or sync operation) to be visible to other threads promptly, so compile-time reordering to force it to wait until after a lock had been acquired could violate that promptness guarantee. So even a relaxed store can't compile-time / source-level reorder with a mutex.lock(), only as a run-time effect.

This same reasoning applies to mutex2.lock(). You're allowed to see reordering, but the compiler can't create a situation where the code requires that reordering to always happen, if that makes execution different from the C++ abstract machine in any important / long-term observable ways. (e.g. reordering around an unbounded wait). Creating a deadlock counts as one of those ways, whether for this reason or another. (Every sane compiler developer would agree on that, even if C++ didn't have formal language to forbid it.)

Note that mutex unlock can't block, so compile-time reordering of two unlocks isn't forbidden for that reason. (If there are no slow or potentially blocking operations in between). But mutex unlock is a "release" operation, so that's ruled out: two release stores can't reorder with each other.


And BTW, the practical mechanism for preventing compile-time reordering of mutex.lock() operations is just to make them regular function calls that the compiler doesn't know how to inline. It has to assume that functions aren't "pure", i.e. that they have side effects on global state, and thus the order might be important. That's the same mechanism that keeps operations inside the critical section: How does a mutex lock and unlock functions prevents CPU reordering?

An inlinable std::mutex written with std::atomic would end up depending on the compiler actually applying the rules about making operations visible promptly and not introducing deadlocks by reordering things at compile-time. As described in How C++ Standard prevents deadlock in spinlock mutex with memory_order_acquire and memory_order_release?


