Threadsafe Vector Class for C++

Thread safe vector

adding (push_back) new objects may invalidate previous pointers ...

No, this operation doesn't invalidate any previously stored pointers, unless you are referring to addresses inside the vector's internal data management (which clearly isn't your scenario).

If you store raw pointers or std::shared_ptrs there, those will simply be copied and remain valid.
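To illustrate the distinction, here's a minimal sketch (names are mine, not from the question): pointers to the elements themselves may dangle after a reallocation, but the objects the stored smart pointers own stay alive and valid.

#include <memory>
#include <vector>

int main() {
    std::vector<std::shared_ptr<int>> v;
    v.push_back(std::make_shared<int>(1));

    std::shared_ptr<int> kept = v[0];    // copy of the smart pointer; the int it owns stays valid
    std::shared_ptr<int>* elem = &v[0];  // address inside the vector's own storage
    (void)elem;

    for (int i = 0; i < 1000; ++i)
        v.push_back(std::make_shared<int>(i));  // may reallocate the vector's storage

    // 'kept' (and the int it points to) is still perfectly valid: reallocation
    // only moved the shared_ptr objects, not the ints they own.
    // 'elem', however, may now dangle, because it pointed into the old storage.
    return (*kept == 1) ? 0 : 1;
}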


As mentioned in the comments, a std::vector isn't very suitable for guaranteeing thread safety in producer/consumer patterns, for a number of reasons. Neither is storing raw pointers to reference the live instances!

A queue is much better suited to this. Staying with the standard library, you can use std::deque to get well-defined access points (front(), back()) for the producer and consumer.

To make these access points thread safe (for pushing/popping values), you can easily wrap the queue in your own class and use a mutex alongside it, to secure insertion/deletion operations on the shared queue reference.

The other (and, judging from your question, major) point is: manage ownership and lifetime of the contained/referenced instances. You may also transfer ownership to the consumer, if that suits your use case (thus avoiding the overhead of shared ownership by using e.g. std::unique_ptr), see below ...


Additionally you may have a semaphore (condition variable) to notify the consumer thread that new data is available; see the sketch below.
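A minimal sketch of such a wrapper (the class and member names are mine, purely illustrative): a mutex guards the std::deque, and a condition variable wakes the consumer. Storing std::unique_ptr<T> in it transfers ownership to whichever thread pops the element.

#include <condition_variable>
#include <deque>
#include <mutex>

template <typename T>
class ThreadSafeQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(mtx_);
            queue_.push_back(std::move(value));
        }
        cv_.notify_one();  // wake a waiting consumer
    }

    T wait_and_pop() {
        std::unique_lock<std::mutex> lock(mtx_);
        cv_.wait(lock, [this] { return !queue_.empty(); });  // also guards against spurious wakeups
        T value = std::move(queue_.front());
        queue_.pop_front();
        return value;
    }

private:
    std::mutex mtx_;
    std::condition_variable cv_;
    std::deque<T> queue_;
};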


'1. Using atomic or mutexes is not enough? If I push back from one thread, another thread handling an object via pointer may end up having an invalid object?'

The lifetime (and thus thread-safe use) of the instances stored in the queue (shared container) needs to be managed separately (e.g. by storing smart pointers such as std::shared_ptr or std::unique_ptr there).
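A small illustration of the idea (assumed names, not from your code): each thread keeps its own std::shared_ptr copy, so the instance stays alive no matter which thread finishes first.

#include <memory>
#include <thread>
#include <vector>

struct Item { std::vector<float> data; };

int main() {
    auto item = std::make_shared<Item>();
    item->data.assign(100, 1.0f);

    // The lambda captures its own shared_ptr copy; the Item stays alive
    // until the last copy is destroyed, regardless of thread timing.
    std::thread consumer([item] {
        float sum = 0.0f;
        for (float f : item->data) sum += f;
        (void)sum;
    });

    item.reset();     // the producer drops its reference ...
    consumer.join();  // ... but the consumer's copy keeps the Item alive until here
}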

'2. Is there a library ...'

It can all be achieved perfectly well with the existing standard library mechanisms, IMHO.

As for point 3, see what's written above. As far as I can tell, it sounds like you're asking for something like a rw_lock (reader/writer) mutex. You can provide a surrogate for this with a suitable condition variable.
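If a reader/writer lock is really what's needed, note that since C++17 the standard library already provides one as std::shared_mutex (std::shared_timed_mutex since C++14), so you don't necessarily have to build a condition-variable surrogate yourself. A rough sketch (the class name is mine):

#include <cstddef>
#include <mutex>
#include <shared_mutex>
#include <vector>

class SharedData {
public:
    float read(std::size_t i) const {
        std::shared_lock<std::shared_mutex> lock(mtx_);  // many concurrent readers
        return values_.at(i);
    }
    void append(float v) {
        std::unique_lock<std::shared_mutex> lock(mtx_);  // exclusive writer
        values_.push_back(v);
    }
private:
    mutable std::shared_mutex mtx_;
    std::vector<float> values_;
};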

Feel free to ask for more clarification ...

Threadsafe Vector class for C++

This is difficult because of algorithms.

Suppose you wrapped vector so that all its member functions are serialised using a mutex, like Java synchronized methods. Then concurrent calls to std::remove on that vector still wouldn't be safe, because they rely on looking at the vector and changing it based on what they see.

So your LockingVector would need to specialize every template in the standard algorithms, to lock around the whole thing. But then other algorithms like std::remove_if would be calling user-defined code under the lock. Doing this silently behind the scenes is a recipe for locking inversion as soon as someone starts creating vectors of objects which themselves internally take locks around all their methods.
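To make the problem concrete, here is a hedged sketch of the kind of wrapper discussed above: even with every member function locked internally, composed operations still race, because the state can change between the individually serialised calls.

#include <mutex>
#include <vector>

// Hypothetical wrapper: every call takes the internal mutex.
template <typename T>
class LockingVector {
public:
    bool empty() const         { std::lock_guard<std::mutex> g(m_); return v_.empty(); }
    void pop_back()            { std::lock_guard<std::mutex> g(m_); v_.pop_back(); }
    void push_back(const T& x) { std::lock_guard<std::mutex> g(m_); v_.push_back(x); }
private:
    mutable std::mutex m_;
    std::vector<T> v_;
};

// Still unsafe: another thread may empty the vector between the two
// (individually locked) calls, so pop_back() can run on an empty vector.
void pop_if_nonempty(LockingVector<int>& shared_vec) {
    if (!shared_vec.empty())
        shared_vec.pop_back();
}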

In answer to your actual question: sorry, no, I don't know of one. For a quick test of the kind you need, I recommend that you start out with:

template <typename T>
class LockedVector {
private:
    SomeKindOfLock lock;   // placeholder: e.g. std::mutex or std::shared_mutex
    std::vector<T> vec;
};

Then drop it in as a replacement container, and start implementing member functions (and member typedefs, and operators) until it compiles. You'll notice pretty quickly if any of your code is using iterators on the vector in a way which simply cannot be made thread-safe from the inside out, and if need be you can temporarily change the calling code in those cases to lock the vector via public methods.
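As a hedged illustration of that last step (method names are my own choice, and std::mutex stands in for SomeKindOfLock): the wrapper can expose its lock, so calling code that still needs iterators can hold it across the whole traversal.

#include <mutex>
#include <vector>

template <typename T>
class LockedVector {
public:
    void push_back(const T& value) {
        std::lock_guard<std::mutex> guard(lock);
        vec.push_back(value);
    }

    // Escape hatch for calling code that must iterate: take the lock explicitly
    // for the whole traversal, since iterators can't be protected from the inside.
    std::unique_lock<std::mutex> acquire() { return std::unique_lock<std::mutex>(lock); }
    std::vector<T>& unsafe_ref() { return vec; }  // only valid while the caller holds the lock

private:
    std::mutex lock;
    std::vector<T> vec;
};

Calling code would then do something like: auto guard = lv.acquire(); for (auto& x : lv.unsafe_ref()) { ... } so the lock is held for the entire loop.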

C++ thread safe vector insertion

There must be only one mutex for the vector. So you should add the mutex next to the vector, e.g. as results_mutex in Resources. If results is a static member then the mutex should be a static member as well (so that there is only one mutex for the vector).

Then you must also lock the mutex on all operations accessing the vector that could potentially be executed in parallel with a call to insert_output, not only on the insert operation.

In your current code you create a new mutex on each call, making it completely pointless.
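A minimal sketch of what that could look like, assuming the question's Resources class with a static results vector (the element type and the reading function are placeholders, since they aren't shown here):

#include <mutex>
#include <string>
#include <vector>

class Resources {
public:
    static void insert_output(const std::string& line) {
        std::lock_guard<std::mutex> lock(results_mutex);  // one shared mutex, not a new one per call
        results.push_back(line);
    }

    static std::vector<std::string> copy_output() {
        std::lock_guard<std::mutex> lock(results_mutex);  // readers lock the same mutex
        return results;
    }

private:
    static std::vector<std::string> results;
    static std::mutex results_mutex;
};

// Exactly one definition of each static member for the whole program.
std::vector<std::string> Resources::results;
std::mutex Resources::results_mutex;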

cpp: how to make access of vector in class thread-safe?

I believe I have now done this, following Sam's comments, and it seems to work. Is this correct?

#include <iostream>
#include <initializer_list>
#include <memory>
#include <mutex>
#include <thread>
#include <vector>
using namespace std;

class Data {
public:
    // One mutex per Data instance, held via unique_ptr so that Data stays movable.
    unique_ptr<mutex> lockptr{ new mutex };

    void write_data(vector<float>& data) {
        datav = move(data);
    }

    vector<float>* read_data() {
        return &datav;
    }

    Data(vector<float> in) : datav{ move(in) } {
    }
    Data(const Data&) = delete;
    Data& operator=(const Data&) = delete;
    Data(Data&& old) : datav{ move(old.datav) } {
        // lockptr gets a fresh mutex from its default member initializer.
    }
    Data& operator=(Data&& old) {
        datav = move(old.datav);
        lockptr.reset(new mutex);  // replace the mutex instead of shadowing the member
        return *this;              // omitting the return is undefined behaviour
    }
private:
    vector<float> datav{};
};

void f1(vector<Data>& in) {
    for (Data& tupel : in) {
        unique_lock<mutex> lock(*(tupel.lockptr));
        vector<float>& data{ *(tupel.read_data()) };
        for (float& f : data) {
            f += 1.0f;
        }
    }
}

void f2(vector<Data>& in) {
    for (Data& tupel : in) {
        // Lock unconditionally; an unchecked try_lock() would read without the lock
        // (and unlock a mutex this thread doesn't own) whenever it fails.
        unique_lock<mutex> lock(*(tupel.lockptr));
        vector<float>& data{ *(tupel.read_data()) };
        for (float& f : data) {
            cout << f << ",";
        }
    }
}

int main() {
    vector<Data> datastore{};
    datastore.emplace_back(initializer_list<float>{ 0.2f, 0.4f });
    datastore.emplace_back(initializer_list<float>{ 0.6f, 0.8f });
    vector<float> bigfv(50, 0.3f);
    Data demo{ bigfv };
    datastore.push_back(move(demo));
    thread t1(f1, ref(datastore));
    thread t2(f2, ref(datastore));
    t1.join();
    t2.join();
}

By using the unique_ptr, I shouldn't be leaking any memory when I move an instance, right?

is vector threadsafe for read/write at different locations?

vector does not provide any thread-safety guarantees, so technically the answer would be no.

In practice, you should be able to get away with it... until the day that someone (possibly you) makes a small change in one corner of the program and all hell breaks loose. I wouldn't feel comfortable doing this in any non-trivial program.


