Why Is std::unordered_map Slow, and How to Use It More Effectively to Alleviate That

Why is std::unordered_map slow, and can I use it more effectively to alleviate that?

The standard library's maps are, indeed, inherently slow (std::map especially, but std::unordered_map as well). Google's Chandler Carruth explains this in his CppCon 2014 talk; in a nutshell: std::unordered_map is cache-unfriendly because it uses linked lists as buckets.

This SO question mentioned some efficient hash map implementations - use one of those instead.

Is gcc std::unordered_map implementation slow? If so - why?

I found the reason: it is a problem with gcc-4.7!

With gcc-4.7

inserts: 37728
get : 2985

With gcc-4.6

inserts: 2531
get : 1565

So std::unordered_map in gcc-4.7 is broken (or at least my installations are: gcc 4.7.0 on Ubuntu, and gcc 4.7.1 on Debian testing).

I will submit a bug report. Until then: DO NOT use std::unordered_map with gcc 4.7!

C++ std::unordered_map fastest way to insert new element only if it doesn't exist

This is what try_emplace is for. The API is set up in such a way as to avoid constructing nodes, or even the value, unless it's strictly necessary. It comes from N4279.

In your case, that would be:

auto [it, success] = map.try_emplace(key, largeObject);

Each of the four options in the OP has issues:

  • map[key] = largeObject doesn't actually do what you're asking for: it would overwrite the existing item. And even if the key wasn't there, it requires default construction followed by copy assignment.

  • The approaches with count and find both require two lookups.

  • map.insert(std::make_pair<uint64_t, LargeObject>(key, largeObject)); is a single lookup but requires constructing the large object, and the pair, unconditionally.

Not mentioned in the OP is another option: map.emplace(key, largeObject);. This has the issue that it is under-specified whether the pair is created when the key already exists; on some implementations it is. The motivation for try_emplace was to specify the API precisely, so that the pair is definitely not created if the key already exists.

Choosing between std::map and std::unordered_map

As already mentioned, map allows iterating over the elements in sorted order, but unordered_map does not. This is very important in many situations, for example displaying a collection (e.g. an address book). This also manifests in other, indirect ways, like: (1) starting iteration from the iterator returned by find(), or (2) the existence of member functions like lower_bound().

Also, I think there is some difference in the worst case search complexity.

  • For map, it is O( lg N )

  • For unordered_map, it is O( N ) [This may happen when the hash function is not good leading to too many hash collisions.]

The same is applicable for worst case deletion complexity.

Why does C++11/Boost `unordered_map` not rehash when erasing?

As far as I can tell, that behavior is not so much a result of the requirement not to invalidate iterators (std::unordered_map::rehash also doesn't invalidate them) as of the complexity requirement for std::unordered_map::erase, which must take constant time on average.

I can't tell you why it was specified like this, but I can tell you why it is the right default behavior for me:

  1. In many applications, the content of my hash table is virtually
    constant after initialization anyway - so here I don't care.
  2. Where this is not the case, at least the average number of elements
    stays more or less the same (within an order of magnitude).
    So even if a lot of objects are deleted at some point in time, new
    elements will probably be added soon afterwards. In that case, it
    wouldn't really reduce the memory footprint and the overhead of
    rehashing two times (once after deletion and once after adding new elements) would usually outweigh any performance improvement I might get through a more compact table.
  3. Erasing a larger number of elements (e.g. by a filter function) would be severely slowed down by intermediate rehashes, if you could not control the heuristic (as you can when inserting elements by modifying max_load_factor).

    So finally, even in those cases where it is actually beneficial to rehash, I can usually make a better decision about when to do it (e.g. via rehash or copy and swap) than a generic heuristic in std::unordered_map could.

Again, those points are true for my typical use cases; I don't claim that they are universally true for other people's software, or that they were the motivation behind the specification of unordered_map.

Interestingly, VS2015 and libstdc++ seem to implement rehash(0) differently *:

  • libstdc++ will actually shrink (reallocate) the memory where the table is stored
  • VS2015 will decrease the table size (a.k.a. bucket number) but not reallocate the table. So even after rehashing an empty hash map, the surplus memory for the table will not be returned.

Apparently, the only portable way to minimize the memory footprint is to copy and swap.

Concerning the documentation, I agree that this should probably be mentioned explicitly somewhere, but on the other hand it is consistent with, e.g., the documentation of std::vector::erase(). Also, I'm not 100% sure if it is really impossible to write an implementation that rehashes on erase at least sometimes without violating the requirements.


*) I inferred this from the results of bucket_count and getAllocatedBytes() from your allocator, not by actually looking at the source code.

Vector of unordered_maps, searching in maps too slow

The C++ code is not too slow. The Java code is better optimized hash-wise.

  • In C++, it is unordered_map which is responsible for computing the hash. So each container in your collection will hash the string during unordered_map<string,int>::const_iterator got = map.find(key).
  • In Java, HashMap relies on the hashCode method of the object. The thing is, java.lang.String is immutable and caches its hash code, so it is computed at most once per string.

In terms of hash(string) -> int computations, your find method in C++ is O(NUM_OF_MAPS), while in Java it is O(1).

Is this slower because of two lookups instead of one?

Yes, because you search for the key twice: map[key] searches for the key exactly like map.find does, and you threw away the result of the first search.

It's like opening a drawer to see if a given object is there, saying "ah yes!" and closing the drawer, then opening it up again, searching for the object once more, and changing it.

The second code opens the drawer, searches for the object, and changes it.

There can be compiler optimizations that avoid the double search, or that reduce the second search to constant time, and there can be optimizations that keep the auto find variable out of memory (it can live in a CPU register, since its usage is very local).

The entire problem, in effect, reduces to comparing the time for two hash computations (plus walking the bucket, in case of hash collisions) against the time for one hash computation plus an access to the extra variable:

2*H < H+M

That means the double lookup is cheaper only when H < M. If M is a register access and H is a non-trivial hash, it's hard for H to be less than M.

Does std::unordered_map equality depend on insertion order

Yes, they're guaranteed to return equal in this case. The specific wording (from N4659, §[unord.req]/12) is:

Two unordered containers a and b compare equal if a.size() == b.size() and, for every equivalent-key group [Ea1, Ea2) obtained from a.equal_range(Ea1), there exists an equivalent-key group [Eb1, Eb2) obtained from b.equal_range(Ea1), such that is_permutation(Ea1, Ea2, Eb1, Eb2) returns true.

So, as long as the keys (and associated values) in one are the same as the other (but possibly in a differently-permuted order), it will compare equal.


