Boost Libraries in Multithreading-Aware Mode

What exactly does `threading=multi` do when compiling boost?

No, `threading=multi` doesn't mean that things like boost containers will suddenly become safe for concurrent access by multiple threads (that would be prohibitively expensive from a performance point of view).

Rather, what it means in theory is that boost will be compiled to be thread-aware. Basically, boost methods and classes will behave in a reasonable default way when accessed from multiple threads, much like classes in the std library: you cannot access the same object from multiple threads (unless otherwise documented), but you can safely access different objects from multiple threads. This might seem obvious, even without explicit support, but any static state used by the library would break that guarantee if left unprotected. Using `threading=multi` guarantees that any such shared state is properly guarded by a mutex or some other mechanism.

In the past, similar options for stdlib flavors were available for the C and C++ std libraries supplied by compilers, although today mostly just the multithreaded versions are available.

There is likely little downside to compiling with `threading=multi`, given that only a limited amount of static state needs to be synchronized. Your comment that your library will mostly only be called by a single thread doesn't inspire a lot of confidence - after all, those are the kinds of latent bugs that will cause you to be woken up at 3 AM by your boss after a long night of drinking.

The example of boost's shared_ptr is instructive. With `threading=single`, it is not even guaranteed that independent manipulation of two shared_ptr instances from multiple threads is safe. If they happen to point to the same object (or, in theory, under some exotic implementations, even if they don't), you will get undefined behavior, because the shared reference count won't be manipulated with the proper protections.

With `threading=multi`, this won't happen. However, it is still not safe to access the same shared_ptr instance from multiple threads. That is, it doesn't give any thread-safety guarantees which aren't documented for the object in question - but it does give the "expected/reasonable/default" guarantee of independent objects being independent. There isn't a good name for this default level of thread safety, but it is in fact what's generally offered by all standard libraries for multi-threaded languages today.

As a final point, it's worth noting that Boost.Thread is implicitly always compiled with threading=multi - since using boost's multithreaded classes is an implicit hint that multiple threads are present. Using Boost.Thread without multithreaded support would be nonsensical.

Now, all that said, the above is the theoretical idea behind compiling boost "with thread support" or "without thread support" which is the purpose of the threading= flag. In practice, since this flag was introduced, multithreading has become the default, and single threading the exception. Indeed, many compilers and linkers which defaulted to single threaded behavior, now default to multithreaded - or at least require only a single "hint" (e.g., the presence of -pthread on the command line) to flip to multithreaded.

Apart from that, there has also been a concerted effort to make the boost build be "smart" - in that it should flip to multithreaded mode when the environment favors it. That's pretty vague, but necessarily so. It gets as complicated as weak-linking the pthreads symbols, so that the decision to use MT or ST code is actually deferred to runtime - if pthreads is available at execution time, those symbols will be used; otherwise, the weakly linked stubs - which do nothing at all - will be used.

The bottom line is that `threading=multi` is correct, and harmless, for your scenario, especially if you are producing a binary you'll distribute to other hosts. If you don't specify it, it is highly likely that things will work anyway, due to the build-time and even runtime heuristics, but you run the risk of silently using empty stub methods, or otherwise using MT-unsafe code. There is little downside to using the right option - some of the gory details can also be found in the comments to this post, and in Igor's reply as well.

How do I make Boost multithreading?

-mt is just a distribution-specific naming extension.
Either edit your config file or create a symbolic link to libboost_thread:

andrey@localhost:~$ ls -l /usr/lib/libboost_thread*
-rw-r--r-- 1 root root 174308 2010-01-25 10:36 /usr/lib/libboost_thread.a
lrwxrwxrwx 1 root root 41 2009-11-04 10:10 /usr/lib/libboost_thread-gcc41-mt-1_34_1.so.1.34.1 -> libboost_thread-gcc42-mt-1_34_1.so.1.34.1
-rw-r--r-- 1 root root 49912 2008-11-01 02:55 /usr/lib/libboost_thread-gcc42-mt-1_34_1.so.1.34.1
lrwxrwxrwx 1 root root 17 2010-01-27 18:32 /usr/lib/libboost_thread-mt.a -> libboost_thread.a
lrwxrwxrwx 1 root root 25 2010-01-27 18:32 /usr/lib/libboost_thread-mt.so -> libboost_thread.so.1.40.0
lrwxrwxrwx 1 root root 25 2010-01-27 18:32 /usr/lib/libboost_thread.so -> libboost_thread.so.1.40.0
-rw-r--r-- 1 root root 89392 2010-01-25 10:36 /usr/lib/libboost_thread.so.1.40.0

Multithreading program stuck in optimized mode but runs normally in -O0

Two threads accessing a non-atomic, non-guarded variable is undefined behavior. This concerns `finished`. You could make `finished` of type `std::atomic<bool>` to fix this.

My fix:

#include <iostream>
#include <future>
#include <atomic>
#include <chrono>
#include <thread>

static std::atomic<bool> finished{ false };

int func()
{
    size_t i = 0;
    while (!finished)
        ++i;
    return static_cast<int>(i);
}

int main()
{
    auto result = std::async(std::launch::async, func);
    std::this_thread::sleep_for(std::chrono::seconds(1));
    finished = true;
    std::cout << "result =" << result.get();
    std::cout << "\nmain thread id=" << std::this_thread::get_id() << std::endl;
}

Output:

result =1023045342
main thread id=140147660588864

Live Demo on coliru


Somebody may think 'It's a bool – probably one bit. How can this be non-atomic?' (I did when I started with multi-threading myself.)

But note that lack-of-tearing is not the only thing that std::atomic gives you. It also makes concurrent read+write access from multiple threads well-defined, stopping the compiler from assuming that re-reading the variable will always see the same value.

Leaving a bool unguarded and non-atomic can cause additional issues:

  • The compiler might decide to optimize the variable into a register, or even CSE multiple accesses into one and hoist the load out of the loop.
  • The variable might be cached per CPU core. (In real life, CPUs have coherent caches, so this is not a real problem, but the C++ standard is loose enough to cover hypothetical C++ implementations on non-coherent shared memory, where atomic<bool> with memory_order_relaxed store/load would work but volatile wouldn't. Using volatile for this would be UB, even though it works in practice on real C++ implementations.)

To prevent this from happening, the compiler must be told explicitly not to perform these optimizations.


I'm a little bit surprised by the evolving discussion about the potential relevance of volatile to this issue, so I'd like to add my two cents by pointing to these two articles:

  • Is volatile useful with threads?
  • Who's afraid of a big bad optimizing compiler?

installing the python wrappers for vlfeat

On Linux, from Boost v1.40.0:

Build System

The default naming of libraries in Unix-like environment now matches
system conventions, and does not include various decorations.

Decorations are tags like:

-mt: the library was built with multithreading support enabled.

-d: encodes details that affect the library's interoperability with other compiled code. ...

Boost libraries are usually built in MT mode both on Linux and Windows, but only on Windows do you get the -mt suffix for it (e.g. take a look at Boost Libraries in Multithreading-Aware Mode above).

So your idea of linking with boost_python-mt-py27 should be safe (unfortunately, I don't know how to distinguish thread-aware libraries from regular ones).

C++ Boost on iPhone

This blog post will probably help you.

It says that Pete Goodliffe wrote a script that builds the boost libraries into an iOS framework. There is also an Xcode project compiling the whole thing.

If people tell you that you can't use custom frameworks in iOS applications, that is not true. As long as the framework is properly built (with architectures for i386, armv6, and armv7 all lipo'ed together), it will work.

I haven't tried that particular framework, but if it's correctly set up, I'm sure it will do the trick.

Keep us posted.


