Bad Alloc Is Thrown

Is it okay to manually throw an std::bad_alloc?

You don't need to do that. You can catch the std::bad_alloc exception, log it, then rethrow it with the parameterless form of the throw statement:

logger = new CLogger(this);
try {
    videoLayer = new CVideoLayer(this);
} catch (std::bad_alloc&) {
    logger->log("Not enough memory to create the video layer.");
    throw;
}

Or, if logger is not a smart pointer (which it should be), you also have to clean it up on every exception path:

logger = new CLogger(this);
try {
    videoLayer = new CVideoLayer(this);
} catch (std::bad_alloc&) {
    logger->log("Not enough memory to create the video layer.");
    delete logger;
    throw;
} catch (...) {
    delete logger;
    throw;
}
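With smart pointers, the duplicated cleanup disappears entirely. As a sketch (CLogger and CVideoLayer stand in for the question's classes; their original constructors take `this`, omitted here):

```cpp
#include <iostream>
#include <memory>
#include <new>

// Hypothetical stand-ins for the question's CLogger and CVideoLayer.
struct CLogger {
    void log(const char* msg) { std::cerr << msg << '\n'; }
};
struct CVideoLayer {};

// With std::unique_ptr there is no manual delete on any exit path:
// if the allocation throws, `logger` is still destroyed automatically.
std::unique_ptr<CVideoLayer> makeVideoLayer(CLogger& logger) {
    try {
        return std::make_unique<CVideoLayer>();
    } catch (const std::bad_alloc&) {
        logger.log("Not enough memory to create the video layer.");
        throw; // rethrow the original exception
    }
}
```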

What is the most common reason that bad_alloc is thrown?

EDIT: The other commenters have pointed out a few interesting scenarios. I'm adding them to my response for the sake of completeness.

Case 1: Running out of memory

My understanding is that bad_alloc is thrown whenever operator new or new[] fails to allocate the requested memory. This can happen if you new a bunch of objects and forget to delete them before the pointers go out of scope (i.e., your code leaks like crazy).

Case 2: Allocating huge amounts of memory in one swoop

Allocating a large chunk of memory in one go, as in the case of a 1000 x 1000 x 1000 matrix of doubles (about 8 GB), will possibly fail because it requires a single contiguous block of that size.

There might be several free memory blocks available, but none of these are large enough.
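A minimal sketch of this case, requesting one absurdly large contiguous block (the size here is an illustrative worst case standing in for the matrix above; assumes a 64-bit system):

```cpp
#include <cstddef>
#include <new>

// Attempt one very large contiguous allocation. Returns true if it failed
// with std::bad_alloc (or std::bad_array_new_length, which derives from it).
bool huge_allocation_fails(std::size_t bytes) {
    try {
        char* p = new char[bytes];
        delete[] p; // only reached if the allocation somehow succeeded
        return false;
    } catch (const std::bad_alloc&) {
        return true;
    }
}
```

For example, `huge_allocation_fails(std::size_t(1) << 62)` asks for roughly 4 EiB, far beyond any real address space.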

Case 3: Passing an invalid value to new[]

bad_alloc is thrown if you pass a negative value as its size parameter. (Since C++11, this specific case throws std::bad_array_new_length, which derives from std::bad_alloc, so an existing bad_alloc handler still catches it.)
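A short sketch of this case:

```cpp
#include <new>

// Passing a negative element count to new[]: since C++11 this throws
// std::bad_array_new_length, which derives from std::bad_alloc, so a
// plain bad_alloc handler catches it too.
bool negative_count_is_caught(int n) {
    try {
        int* p = new int[n];
        delete[] p;
        return false;
    } catch (const std::bad_alloc&) {
        return true;
    }
}
```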

How to deal with bad_alloc in C++?

You can catch it like any other exception:

try {
    foo();
} catch (const std::bad_alloc&) {
    return -1;
}

Quite what you can usefully do from this point is up to you, but it's definitely feasible technically.

Bad alloc is thrown

It seems you're simply running out of memory. You might reason that you shouldn't be, since the individual allocations don't add up to the segment's total size.

But memory fragmentation can do this: if there is sufficient 'padding' or 'overhead' around the shared memory objects, you can run out of contiguously allocatable space.

Either store your data in a pre-allocated vector, for example, or use one of the smarter interprocess allocation algorithms:

  • http://www.boost.org/doc/libs/1_55_0/doc/html/interprocess/allocators_containers.html

The simplest way to resolve it in this instance would seem to be just making the shared memory area twice as big (minimal size is a 4K memory page on most systems, anyway).

I just used 2*size and the tests ran to completion.

Update/fixes

I've just verified that doing things "the vector way" is indeed much more efficient: replacing std::map with Boost's flat_map gets you vector storage.

The big difference is that each node in a map is dynamically allocated, so each insertion incurs a fixed overhead, and available memory is consumed linearly with the number of elements.
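The per-node overhead can be made visible with a toy counting allocator (a sketch; all names here are illustrative, and std::vector stands in for flat_map's contiguous storage):

```cpp
#include <cstddef>
#include <map>
#include <new>
#include <utility>
#include <vector>

// Counts how many separate heap allocations a container makes.
static std::size_t g_allocs = 0;

template <class T>
struct counting_allocator {
    using value_type = T;
    counting_allocator() = default;
    template <class U> counting_allocator(const counting_allocator<U>&) {}
    T* allocate(std::size_t n) {
        ++g_allocs;
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { ::operator delete(p); }
};
template <class T, class U>
bool operator==(const counting_allocator<T>&, const counting_allocator<U>&) { return true; }
template <class T, class U>
bool operator!=(const counting_allocator<T>&, const counting_allocator<U>&) { return false; }

// std::map allocates one node per element...
std::size_t map_allocations(int n) {
    g_allocs = 0;
    std::map<int, int, std::less<int>,
             counting_allocator<std::pair<const int, int>>> m;
    for (int i = 0; i < n; ++i) m[i] = i;
    return g_allocs;
}

// ...while contiguous storage (what flat_map uses) can grab one block up front.
std::size_t vector_allocations(int n) {
    g_allocs = 0;
    std::vector<std::pair<int, int>, counting_allocator<std::pair<int, int>>> v;
    v.reserve(n);
    for (int i = 0; i < n; ++i) v.emplace_back(i, i);
    return g_allocs;
}
```

Inserting 100 elements costs the map 100 separate allocations, versus a single reserved block for the contiguous container.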

(Graph: free shared memory after each insertion, map vs. flat_map.)

Observations

  • There's considerable initial overhead: 320 bytes are consumed before anything happens.
  • If, with the flat_map, you also reserve the vector capacity up front, you win a little extra storage efficiency.

The above graph was created from the output of the following program. Look for the calls to get_free_memory(). To switch map implementations, just change #if 0 into #if 1. (Note how I cleaned up some of the code that was needlessly repetitious and that used exceptions for flow control.)

#include <boost/interprocess/managed_shared_memory.hpp>
#include <boost/interprocess/containers/map.hpp>
#include <boost/interprocess/allocators/allocator.hpp>
#include <boost/interprocess/containers/string.hpp>
#include <boost/interprocess/containers/flat_map.hpp>
#include <boost/interprocess/exceptions.hpp>

#include <functional>
#include <utility>
#include <iostream>
#include <string>

#define space_name "MySharedMemory"

int main()
{
    using namespace boost::interprocess;

    // Remove shared memory on construction and destruction
    struct shm_remove
    {
        shm_remove()  { shared_memory_object::remove(space_name); }
        ~shm_remove() { shared_memory_object::remove(space_name); }
    } remover;

    typedef int KeyType;
    typedef boost::interprocess::managed_shared_memory::allocator<char>::type char_allocator;
    //typedef boost::interprocess::allocator<char, boost::interprocess::managed_shared_memory::segment_manager> char_allocator;
    //typedef boost::interprocess::basic_string<char, std::char_traits<char>, char_allocator> shm_string;

    struct certificateStorage {
        int certificate_id;
        certificateStorage(int _certificate_id, const char* _certificate, const char* _key, const char_allocator& al) :
            certificate_id(_certificate_id)
        {}
    };

#if 0 // STL map
    typedef std::pair<const int, certificateStorage> certValueType;
    typedef allocator<certValueType, boost::interprocess::managed_shared_memory::segment_manager> certShmemAllocator;
    typedef map<KeyType, certificateStorage, std::less<KeyType>, certShmemAllocator> certSHMMap;
#else // FLAT_MAP
    typedef std::pair<int, certificateStorage> certValueType; // not const key for flat_map
    typedef allocator<certValueType, boost::interprocess::managed_shared_memory::segment_manager> certShmemAllocator;
    typedef boost::container::flat_map<KeyType, certificateStorage, std::less<KeyType>, certShmemAllocator> certSHMMap;
#endif

    std::cout << "\n\n\nStarting the program.\n\n\n";

    const int numentries = 20;
    const char* elementName = "mymap";
    int size = sizeof(certificateStorage) * numentries + 1000;
    int runningsize = 0;

    std::cout << "SHM size is " << size << " bytes\n";

    try {
        managed_shared_memory shm_segment(create_only, space_name /*segment name*/, size);

        certShmemAllocator alloc_inst(shm_segment.get_segment_manager());
        char_allocator ca(shm_segment.get_allocator<char>());

        certSHMMap* mymap = shm_segment.find_or_construct<certSHMMap>(elementName)
            (std::less<int>(), alloc_inst);

        mymap->reserve(numentries); // flat_map only: std::map has no reserve()

        for (int i = 0; i < numentries; i++) {
            std::cout << "Free memory: " << shm_segment.get_free_memory() << "\n";

            certificateStorage thisCert(i, "", "", ca);
            std::cout << "Created object.\n";
            mymap->insert(certValueType(i, thisCert));
            std::cout << "Inserted object. " << i << " size is " << sizeof(thisCert) << "\n";
            runningsize += sizeof(thisCert);
            std::cout << "SHM Current size is " << runningsize << " / " << size << "\n";
        }

        std::cout << "\n\nDone Inserting\nStarting output\n";

        for (int i = 0; i < numentries; i++) {
            certificateStorage tmp = mymap->at(i);
            std::cout << "The key is: " << i << " And the value is: " << tmp.certificate_id << "\n";
        }
    }
    catch (boost::interprocess::interprocess_exception& ex) {
        std::cout << "\n shm space won't load\n";
        std::cout << "\n Why: " << ex.what() << "\n";
    }
}

Why is a bad_alloc exception thrown in the case of size_t?

You're passing the address of an uninitialized 64-bit (8 bytes on modern 64-bit systems) variable, state, and telling fread to read sizeof(int) (32 bits, 4 bytes on those same systems) bytes from the file into this variable.

This will overwrite 4 bytes of the variable with the value read, but leave the other 4 uninitialized. Which 4 bytes it overwrites depends on the architecture (the least significant on Intel CPUs, the most significant on big-endian-configured ARMs), but the result will most likely be garbage either way, because 4 bytes were left uninitialized and could contain anything.

In your case, most likely they are the most significant bytes, and contain at least one non-zero bit, meaning that you then try to allocate far beyond 4GB of memory, which you don't have.

The solution is to make state a std::uint32_t (since you apparently expect the file to contain 4 bytes representing an unsigned integer; don't forget to include <cstdint>) and to pass sizeof(std::uint32_t). In general, for every fread and similar call where you pass in a pointer and a size, make sure the object the pointer points to actually has exactly the size you pass along. Passing a size_t* together with sizeof(int) does not meet that requirement on 64-bit systems, and since the sizes of C++'s basic types are not guaranteed, you generally don't want to use them for binary I/O at all.
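A sketch of the fix, round-tripping a value through a file with a fixed-width type so that no bytes of `state` are left uninitialized ("state.bin" is an illustrative file name):

```cpp
#include <cstdint>
#include <cstdio>

// Write a 4-byte value, then read it back with a type whose size exactly
// matches what fread is told to read.
std::uint32_t roundtrip_state(std::uint32_t value) {
    std::FILE* f = std::fopen("state.bin", "wb");
    std::fwrite(&value, sizeof(std::uint32_t), 1, f);
    std::fclose(f);

    std::uint32_t state = 0;
    f = std::fopen("state.bin", "rb");
    std::fread(&state, sizeof(std::uint32_t), 1, f); // fills all 4 bytes
    std::fclose(f);
    std::remove("state.bin");
    return state;
}
```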

C++ List of Objects code throwing std::bad_alloc

You are allocating 160,000 of your As, each holding 5,000 ints of usually 4 bytes apiece. That is 160,000 * 5,000 * 4 bytes = 3,200,000,000 bytes, which is /1024 = 3,125,000 KiB and /1024 ≈ 3,051.8 MiB, so around 3 GB. That is close to the upper limit of what a 32-bit process can get, and I assume that even on x64 Windows you compiled with the default x86 settings, so the program runs as a 32-bit process under Windows' compatibility mode. Add the overhead of your 160,000 pointers (stored in one of the least space-efficient std containers), plus paging overhead, plus the likely added padding, and you run out of memory.

To get back to your original question:

  1. The list itself is placed on the "stack", i.e. it has automatic storage duration (the more correct term). That covers only its housekeeping data (for example a pointer to the first node, one to the last, and its size), not the nodes it contains. And even the nodes don't hold the big stuff, your As; they hold pointers to them, A*s. The As themselves, as with any std container except std::array, live on the heap, i.e. have dynamic storage duration, and the pointers are nothing in size compared to the 5,000 ints they point to. Because you allocated with new, your As are never cleaned up until you call delete on them. C++ is very different from Java here: your Java code probably ran as a 64-bit process, and who knows what the VM did once it saw you wouldn't use the objects again.

So if you want your As on the "stack", i.e. with automatic storage duration, you can use a std::array<A, 160000> (which is a much better version of A[160000]), but I bet you will crash your stack with such sizes. (On most operating systems you get around 2 MB of stack per thread, it can be much lower, and your call tree needs space too.)

If you want your As on the "heap", i.e. with dynamic storage duration, inside the list, use std::list<A> instead of std::list<A*> and remove your new expressions altogether. However, for several reasons the best default container is std::vector<A>, which stores the elements in one big contiguous chunk of heap memory.
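A minimal sketch of the value-semantics approach (A here is a stand-in for the question's class):

```cpp
#include <cstddef>
#include <vector>

// A stand-in for the question's A: 5000 ints, stored by value.
struct A {
    int data[5000];
};

// The vector owns its elements in one contiguous heap block:
// no new, no delete, no leaked pointers.
std::size_t bytes_for(std::size_t count) {
    std::vector<A> items(count); // allocates count * sizeof(A) bytes in one go
    return items.size() * sizeof(A);
}
```

With 100 elements this is 100 * 5,000 * 4 = 2,000,000 bytes, all freed automatically when the vector goes out of scope.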


  2. There is no such explicit limit in the C++ standard. According to §3.7.4 of ISO/IEC 14882:2014, new either gets you the requested amount (or more) or it fails, so the limit depends on your implementation, i.e. operating system and compiler. In general you can get as much as the operating system will give you, which, as I said, is around 3-4 GB for x86/32-bit processes. Otherwise it can be much more, or, in the case of embedded applications, as little as nothing (no dynamic allocation at all).

vector is throwing bad_alloc

As Neil Kirk points out, a 32-bit process is limited to 2 GB of memory by default, as stated in this MSDN page. This is true for both unmanaged and managed applications.

There are many SO questions about this, for example I have found these:

How much memory can a 32 bit process access on a 64 bit operating system?

The maximum amount of memory any single process on Windows can address

Set Maximum Memory Usage C#

In my case I think that the interop between .NET and the unmanaged code is doing some buffering and so is eating up the available memory. Ideally I should only have two or three 2D vectors of 2,600,000 x 10 elements (if a double is 8 bytes, that is still less than 1 GB in total). I will need to investigate this further.


