How to know the right max size of a vector? max_size()? But no

The problem is that vector tries to allocate a contiguous block of memory, which might not be available at that time, even though the total available memory may be much larger.

I would suggest using std::deque instead, as it does not need to allocate a contiguous block of memory.
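
A minimal sketch of the failure mode just described, with a purely illustrative element count; on machines with plenty of free contiguous address space the reservation may simply succeed:

#include <cstddef>
#include <deque>
#include <iostream>
#include <new>
#include <vector>

int main() {
    const std::size_t n = 1000000000;  // hypothetical large element count
    try {
        std::vector<int> v;
        v.reserve(n);  // needs one contiguous block of n * sizeof(int) bytes
        std::cout << "vector reservation succeeded\n";
    } catch (const std::bad_alloc&) {
        // std::deque allocates in smaller chunks, so it may succeed
        // even when a single contiguous block of that size is unavailable.
        std::deque<int> d(n);
        std::cout << "vector failed, deque succeeded\n";
    }
}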

C++ vector max_size();

Simply get the answer with:

std::vector<dataType> v;    // dataType stands for your element type
std::cout << v.max_size();  // prints the theoretical maximum element count

Or we can estimate it as (2^nativePointerBitWidth)/sizeof(dataType) - 1. For example, on a 64-bit system, long long is (typically) 8 bytes wide, so we get (2^64)/8 - 1 == 2305843009213693951.
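
A small sketch that checks this back-of-the-envelope formula against the library's own answer; note the reported value may well be smaller (for example, based on PTRDIFF_MAX), depending on the implementation:

#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    // UINT64_MAX == 2^64 - 1, so this integer division equals (2^64)/8 - 1.
    constexpr std::uint64_t estimate = UINT64_MAX / sizeof(long long);

    std::vector<long long> v;
    std::cout << "estimate:   " << estimate     << '\n'
              << "max_size(): " << v.max_size() << '\n';
}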

How can I know the real maximum size of a vector? (Not using std::vector::max_size)

Note that the max_size function returns a theoretical maximum number of elements; it says nothing about the amount of memory needed.

If we assume that sizeof(int) == 4 (pretty common), then 204324850 elements would need 817299400 bytes of contiguous memory (that's almost 780 MiB).

You get a bad_alloc exception because the vector simply can't allocate enough memory to hold all the elements.
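
If you really need a number, one approach is to probe for it at runtime, as sketched below. This is only a snapshot (the answer changes as the process and the rest of the system use memory), and on systems that overcommit memory a successful reserve() does not even guarantee the pages are really there:

#include <cstddef>
#include <iostream>
#include <stdexcept>
#include <vector>

std::size_t probe_max_elements() {
    // Binary search for the largest element count reserve() accepts right now.
    std::size_t lo = 0, hi = std::vector<int>().max_size();
    while (lo < hi) {
        std::size_t mid = lo + (hi - lo + 1) / 2;
        try {
            std::vector<int> v;
            v.reserve(mid);  // throws std::bad_alloc when the block is unavailable
            lo = mid;
        } catch (const std::exception&) {
            hi = mid - 1;
        }
    }
    return lo;
}

int main() {
    std::cout << probe_max_elements() << " ints reservable at this moment\n";
}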

Practical use of vector::max_size

It really isn't very useful.

The only theoretical use would be to check whether you need a container larger than max_size(), in which case you are in trouble. But you would probably realize that already when considering a port of your database server to a microwave oven.
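
For completeness, that one plausible pattern would look something like this sketch (the required count is a made-up placeholder):

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<char> v;
    const std::size_t needed = 5000000000;  // hypothetical requirement
    if (needed > v.max_size()) {
        std::cerr << "this container type can never hold that many elements\n";
        return 1;
    }
    v.reserve(needed);  // may still throw std::bad_alloc at runtime
}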

The committee once considered improving the function, but didn't find it useful enough to be worth a change:

max_size() isn't useful for very many things, and the existing wording is sufficiently clear for the few cases that max_size() can be used for. None of the attempts to change the existing wording were an improvement.

http://www.open-std.org/jtc1/sc22/wg21/docs/lwg-closed.html#197

Unable to allocate a large C++ std::vector that is less than std::vector::max_size()

From std::vector::max_size:

This value typically reflects the theoretical limit on the size of the container, at most std::numeric_limits<difference_type>::max(). At runtime, the size of the container may be limited to a value smaller than max_size() by the amount of RAM available.

This means that std::vector::max_size is not a good indication of the actual maximum size you can allocate, given hardware limitations.

In practice the actual maximum size will [almost] always be smaller, depending on the RAM available at runtime.
On current 64-bit systems this will always be the case (at least with currently available hardware), because the theoretical size of a 64-bit address space is far larger than available RAM sizes.
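
To see the relationship quoted above on your own implementation, you can print both values side by side (a sketch; the output is implementation-specific):

#include <iostream>
#include <limits>
#include <vector>

int main() {
    using Vec = std::vector<int>;
    std::cout << "max_size():                " << Vec().max_size() << '\n'
              << "difference_type's maximum: "
              << std::numeric_limits<Vec::difference_type>::max() << '\n';
    // On a typical 64-bit implementation max_size() is around 2^61 ints,
    // far beyond any installed RAM, just as the quote says.
}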

Confused about max size of std::vector

Why is the max value so small? It looks like the max of a uint32.

That is to be expected on 32 bit systems.

I was expecting it to be more in the range of size_t, which should be 18446744073709551615, right?

If PTRDIFF_MAX is 4294967295, then I find it surprising that SIZE_MAX would be as much as 18446744073709551615. That said, I would also find it surprising that PTRDIFF_MAX was 4294967295.

You're seeing surprising and meaningless output because the behaviour of the program is undefined, which is because you used the wrong format specifier. %u is for unsigned int and only for unsigned int. The %td specifier is for std::ptrdiff_t, the PRIdMAX macro expands to the specifier for std::intmax_t, and %zu is for std::size_t.

I recommend learning to use the C++ iostreams. It isn't quite as easy to accidentally cause undefined behaviour using them as it is when using the C standard I/O.
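
To illustrate the specifiers named above next to the iostream alternative (a sketch; the demo value is arbitrary):

#include <cinttypes>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;
    const std::size_t n = v.max_size();
    const std::ptrdiff_t d = static_cast<std::ptrdiff_t>(n / 2);  // arbitrary demo value

    std::printf("%zu\n", n);  // %zu matches std::size_t
    std::printf("%td\n", d);  // %td matches std::ptrdiff_t
    std::printf("%" PRIdMAX "\n", static_cast<std::intmax_t>(d));  // PRIdMAX for std::intmax_t

    std::cout << n << '\n';   // iostreams pick the right overload automatically
}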

Why am I getting "vector too long" when my vector surpasses 2147483648 values (i.e. half the stated maximum)?

I don't know what getting "vector too long" means, but it's typical that you don't have the entire address space available to your program. It's quite possible that half of it is reserved for the kernel.

max_size doesn't necessarily take such system limitations into consideration and is a theoretical limit that is typically not achievable in practice.

Why did I get deque's max_size() less than vector's max_size()?

I am missing one detail in your question: which types did you test the max_sizes with? Ideone's gcc 4.7.2 (on 32-bit) says that both have the same max_size, if given the same element type. For int it's 2^30 - 1, which means the maximum size of the stored data is (2^32 - 4) bytes, since sizeof(int) == 4 on that system.

This is a wild guess: have you compared vector<T>::max_size vs. deque<U>::max_size with sizeof(T) == 4 and sizeof(U) == 8? That would explain the approximate factor of 2.

Be that as it may, your experiment shows that max_size returns only a very theoretical number, since you surely can't fit 2^62 - 1 ints into memory. The -1 stems from the fact that the "first" 4 bytes have to be left empty, or else &vec[0] == NULL. And you could not have any other data in that program than the ints stored in the vector, the vector object itself included!
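
A quick check of that guess, comparing element types of different sizes (a sketch; the exact numbers are implementation-specific):

#include <deque>
#include <iostream>
#include <vector>

int main() {
    std::cout << "vector<int>:    " << std::vector<int>().max_size()    << '\n'
              << "vector<double>: " << std::vector<double>().max_size() << '\n'
              << "deque<int>:     " << std::deque<int>().max_size()     << '\n'
              << "deque<double>:  " << std::deque<double>().max_size()  << '\n';
    // With sizeof(double) == 2 * sizeof(int), the double variants typically
    // report about half the element count, matching the factor-2 observation.
}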

When using std::vector::push_back to insert an unknown number of elements, should std::vector::max_size be checked on each push?

No. push_back() will throw if it can't allocate memory.

max_size() is a theoretical limit. I've never seen it used. Maybe in a context with extremely limited memory and no exception support it could be useful, but in general no.
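
A minimal sketch of what relying on the exception looks like; be aware that on systems that overcommit memory the process may be killed by the OS before an exception ever surfaces:

#include <iostream>
#include <new>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v;
    try {
        for (;;) v.push_back(42);  // grow until allocation fails
    } catch (const std::bad_alloc&) {
        std::cout << "allocation failed at " << v.size() << " elements\n";
    } catch (const std::length_error&) {
        // Thrown if growth would exceed max_size(); no explicit check needed.
        std::cout << "length_error at " << v.size() << " elements\n";
    }
}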


