Example For Boost Shared_Mutex (Multiple Reads/One Write)

Example for boost shared_mutex (multiple reads/one write)?

It looks like you would do something like this:

#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>

boost::shared_mutex _access;

void reader()
{
    // get shared access
    boost::shared_lock<boost::shared_mutex> lock(_access);

    // now we have shared access
}

void writer()
{
    // get upgradable access
    boost::upgrade_lock<boost::shared_mutex> lock(_access);

    // get exclusive access
    boost::upgrade_to_unique_lock<boost::shared_mutex> uniqueLock(lock);

    // now we have exclusive access
}
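
As a usage sketch (not part of the original answer), you could exercise these functions from a handful of threads. The driver below assumes the reader()/writer() definitions and the _access mutex above, and uses std::thread for brevity:

#include <thread>
#include <vector>

// Assumes _access, reader() and writer() from the snippet above.
int main()
{
    std::vector<std::thread> threads;

    // Several readers may hold the shared lock at the same time.
    for (int i = 0; i < 4; ++i)
        threads.emplace_back(reader);

    // The single writer upgrades to exclusive access.
    threads.emplace_back(writer);

    for (auto& t : threads)
        t.join();
}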

boost::shared_mutex multiple-reader / single-writer mutex

Apparently boost::shared_mutex leaves the fairness policy up to the implementation. It can be fair, reader-over-writer, or writer-over-reader, so depending on which policy your particular version uses, it is possible for writers to be starved.

Multiple-readers, single-writer locks in Boost

Boost's upgradable locking (boost::upgrade_lock) doesn't help in this situation (multiple readers that may become writers), since only a single thread may own an upgradable lock at a time. This is both implied by the quote from the documentation in the question and confirmed by the code (thread\win32\shared_mutex.hpp): if a thread tries to acquire an upgradable lock while another thread holds one, it will wait for that other thread.

I settled on using a regular (exclusive) lock for all readers and writers, which is fine in my case since the critical section is short.
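
For reference, here is a minimal sketch of that fallback (names are my own, not from the answer): every reader and writer takes the same exclusive boost::mutex, which is acceptable when the critical section is short.

#include <boost/thread/mutex.hpp>
#include <boost/thread/locks.hpp>

boost::mutex data_mutex;   // hypothetical names for illustration
int shared_value = 0;

// Readers and writers take the same exclusive lock; the serialized
// region stays tiny because the critical section is short.
int read_value()
{
    boost::unique_lock<boost::mutex> lock(data_mutex);
    return shared_value;
}

void write_value(int v)
{
    boost::unique_lock<boost::mutex> lock(data_mutex);
    shared_value = v;
}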

Boost shared_lock. Read preferred?

I'm a little late to this question, but I believe I have some pertinent information.

The shared_mutex proposals to the C++ committee, on which the Boost implementation is based, purposely did not specify an API for giving either readers or writers priority. This is because Alexander Terekhov proposed an algorithm about a decade ago that is completely fair: it lets the operating system decide whether the next thread to get the mutex is a reader or a writer, and the operating system is completely ignorant as to whether that next thread is a reader or a writer.

Because of this algorithm, the need for specifying whether a reader or writer is preferred disappears. To the best of my knowledge, the boost libs are now (boost 1.52) implemented with this fair algorithm.

The Terekhov algorithm builds the read/write mutex out of two gates: gate1 and gate2. Only one thread at a time can pass through each gate. The gates can be implemented with a mutex and two condition variables.

Both readers and writers attempt to pass through gate1. To pass through gate1, no writer thread may currently be inside it; if one is, the thread attempting to pass through gate1 blocks.

Once a reader thread passes through gate1 it has read ownership of the mutex.

When a writer thread passes through gate1, it must also pass through gate2 before obtaining write ownership of the mutex. It cannot pass through gate2 until the number of readers inside gate1 drops to zero.

This is a fair algorithm because, as long as no writer is inside gate1, it is entirely up to the OS whether the next thread to get inside gate1 is a reader or a writer. A writer becomes "prioritized" only after it has passed through gate1, and it is then next in line to obtain ownership of the mutex.
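
To make the two-gate idea concrete, here is a rough sketch of my own (not Terekhov's or Boost's actual code) of how gate1 and gate2 might be realized with one mutex and two condition variables:

#include <mutex>
#include <condition_variable>

// Illustration only: a simplified two-gate shared mutex in the spirit of
// the Terekhov algorithm. No timed waits, no error handling.
class two_gate_shared_mutex
{
    std::mutex              mut_;
    std::condition_variable gate1_;   // readers and writers queue here
    std::condition_variable gate2_;   // a writer inside gate1 waits here for readers to drain
    unsigned                state_ = 0;

    static constexpr unsigned write_entered_ = 1u << (sizeof(unsigned) * 8 - 1);
    static constexpr unsigned n_readers_     = ~write_entered_;

public:
    // Exclusive (writer) ownership
    void lock()
    {
        std::unique_lock<std::mutex> lk(mut_);
        // Gate 1: wait until no other writer is inside gate1.
        gate1_.wait(lk, [this]{ return (state_ & write_entered_) == 0; });
        state_ |= write_entered_;
        // Gate 2: wait until every reader that got inside gate1 has left.
        gate2_.wait(lk, [this]{ return (state_ & n_readers_) == 0; });
    }

    void unlock()
    {
        std::lock_guard<std::mutex> lk(mut_);
        state_ = 0;
        gate1_.notify_all();
    }

    // Shared (reader) ownership
    void lock_shared()
    {
        std::unique_lock<std::mutex> lk(mut_);
        // Gate 1: wait until no writer is inside gate1 (and the reader count fits).
        gate1_.wait(lk, [this]{ return (state_ & write_entered_) == 0 &&
                                       (state_ & n_readers_) != n_readers_; });
        ++state_;                     // low bits count the readers inside gate1
    }

    void unlock_shared()
    {
        std::lock_guard<std::mutex> lk(mut_);
        unsigned readers = (state_ & n_readers_) - 1;
        state_ = (state_ & write_entered_) | readers;
        if (state_ & write_entered_) {
            if (readers == 0)
                gate2_.notify_one();  // last reader lets the waiting writer through gate2
        } else {
            if (readers == n_readers_ - 1)
                gate1_.notify_one();  // a reader slot opened up again
        }
    }
};

Note that once no writer is inside gate1, nothing in this class chooses between the waiting readers and writers; whichever thread the OS wakes first gets in, which is exactly the fairness property described above.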

I took your example (with minor modifications) and compiled it against an example implementation of what eventually became shared_timed_mutex in C++14. The code below calls it shared_mutex, which is the name it had when it was proposed.

I got the following outputs (all with the same executable):

Sometimes:

Worked finished her work
Worked finished her work
Grabbed exclusive lock, killing system
KILLING ALL WORK
Workers are on strike. This worker refuses to work
Workers are on strike. This worker refuses to work

And sometimes:

Worked finished her work
Grabbed exclusive lock, killing system
KILLING ALL WORK
Workers are on strike. This worker refuses to work
Workers are on strike. This worker refuses to work
Workers are on strike. This worker refuses to work

And sometimes:

Worked finished her work
Worked finished her work
Worked finished her work
Worked finished her work
Grabbed exclusive lock, killing system
KILLING ALL WORK

I believe it should be theoretically possible to also obtain other outputs, though I did not confirm that experimentally.

In the interest of full disclosure, here is the exact code I executed:

#include "../mutexes/shared_mutex"
#include <thread>
#include <chrono>
#include <iostream>

using namespace std;
using namespace ting;

mutex outLock;
shared_mutex workerAccess;
bool shouldIWork = true;

class WorkerKiller
{
public:

void operator()()
{
unique_lock<shared_mutex> lock(workerAccess);

cout << "Grabbed exclusive lock, killing system" << endl;
this_thread::sleep_for(chrono::seconds(2));
shouldIWork = false;
cout << "KILLING ALL WORK" << endl;
}

private:
};

class Worker
{
public:

Worker()
{
}

void operator()()
{
shared_lock<shared_mutex> lock(workerAccess);

if (!shouldIWork) {
lock_guard<mutex> _(outLock);
cout << "Workers are on strike. This worker refuses to work" << endl;
} else {
this_thread::sleep_for(chrono::seconds(1));

lock_guard<mutex> _(outLock);
cout << "Worked finished her work" << endl;
}
}
};

int main()
{
Worker w1;
Worker w2;
Worker w3;
Worker w4;
WorkerKiller wk;

thread workerThread1(w1);
thread workerThread2(w2);

thread workerKillerThread(wk);

thread workerThread3(w3);
thread workerThread4(w4);

workerThread1.join();
workerThread2.join();
workerKillerThread.join();
workerThread3.join();
workerThread4.join();

return 0;
}

boost::shared_mutex vs boost::mutex for multiple threads writing?

Yes, boost::shared_mutex is not a fit for your case, since you don't have pure readers and you do have multiple writers. Just use boost::mutex to enforce exclusive access to the shared data.

Does a multiple-reader / single-writer implementation via boost::shared_mutex in g++-4.4 (not C++11/14) impact performance?

If writes were non-existent, one possibility would be a two-level cache: a thread-local cache in front of the normal shared cache, with the shared cache protected by a mutex or reader/writer lock.

If writes are extremely rare, you can do the same, but you need some lock-free way of invalidating the thread-local caches, e.g. an atomic integer that is incremented with every write; when a thread sees it change, it clears (or refreshes) its thread-local cache.
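
To illustrate, here is a sketch of my own with hypothetical names (shown with C++11 std::atomic and thread_local for brevity; on g++-4.4 you would reach for __thread and the GCC atomic builtins instead):

#include <atomic>
#include <map>
#include <string>
#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>

std::map<std::string, int> shared_cache;        // hypothetical shared data
boost::shared_mutex        cache_mutex;
std::atomic<int>           cache_version(0);    // bumped on every (rare) write

bool lookup(const std::string& key, int& out)
{
    // Per-thread snapshot of the cache plus the version it was taken at.
    thread_local std::map<std::string, int> local_cache;
    thread_local int local_version = -1;

    // Lock-free staleness check: if a write happened since our snapshot, refresh it.
    int v = cache_version.load(std::memory_order_acquire);
    if (v != local_version) {
        boost::shared_lock<boost::shared_mutex> lock(cache_mutex);
        local_cache   = shared_cache;
        local_version = v;
    }

    std::map<std::string, int>::const_iterator it = local_cache.find(key);
    if (it == local_cache.end())
        return false;
    out = it->second;
    return true;
}

void update(const std::string& key, int value)
{
    boost::unique_lock<boost::shared_mutex> lock(cache_mutex);
    shared_cache[key] = value;
    cache_version.fetch_add(1, std::memory_order_release);
}

In the common case a lookup touches only the thread-local map and one atomic load; the reader/writer lock is taken only when the snapshot is stale.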

Boost shared_mutex with Conditional Writers

So I got this working in an acceptable fashion. Basically, I release the shared lock if a write is needed, and the write function then acquires a unique lock. After getting the unique lock, the write function checks again whether the write is still needed, to handle the case where multiple threads thought they needed to write to the data.

I know this isn't ideal, since several threads may end up waiting for the unique lock when only one of them actually needs it. But the performance is good enough for now, and it is a significant improvement over how it was before.

#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>

class Test {
public:
    int getData() {
        boost::shared_lock<boost::shared_mutex> lock(access_);

        if (need_write) {
            // Release the shared lock before asking for exclusive access,
            // otherwise writeData() would deadlock against our own lock.
            lock.unlock();
            writeData();
            lock.lock();
        }

        // Do stuff
        return data_;
    }

    void writeData() {
        // Get exclusive access
        boost::unique_lock<boost::shared_mutex> unique_lock(access_);

        // Check again under the unique lock: another thread may already
        // have performed the write while we were waiting.
        if (!need_write) {
            return;
        }

        // Do stuff
        need_write = false;
    }

private:
    boost::shared_mutex access_;
    bool need_write;   // set elsewhere when a write becomes necessary
    int data_;         // placeholder for the protected data
};

