Can You Allocate a Very Large Single Chunk of Memory (> 4GB) in C or C++

Can you allocate a very large single chunk of memory (> 4GB) in C or C++?

Short answer: Not likely

For this to work, you would have to use a 64-bit processor.
Secondly, it would depend on the operating system's support for giving more than 4GB of address space to a single process.

In theory, it would be possible, but you would have to read the documentation for the memory allocator. You would also be more susceptible to memory fragmentation issues.

There is good information on Windows memory management.

Allocate more than 4GB memory in C

Contrary to your statement, malloc() will solve the problem, assuming you are using an implementation (which includes the compiler and library) that can produce a 64-bit executable, and have configured it to do so (i.e. to build a 64-bit target). Some toolchains are capable of building 64-bit targets but by default (e.g. as used in an associated IDE) will only produce a 32-bit executable.

If you are building a 32-bit target (i.e. producing a 32-bit executable) then, yes, malloc() will be limited to 4GB. A 32-bit executable can be executed on a 64-bit system (assuming an operating system which permits that). However, the program's use of malloc() will still be limited to 4GB in that case.

Which means you need to ensure you have a compiler that can build 64-bit programs AND use it to build a 64-bit target.

Of course, another question you should ask is whether you really need to allocate more than 4GB in a single chunk. While there are circumstances where that is appropriate, more often than not, a program which needs to do that is a sign of poor or lazy design.

Allocate memory for different types in one block

If you allocate and free lots of these structures, the savings could add up to something that's significant.

It uses a little less space, since each allocated block has some bookkeeping that records its size. It also may reduce memory fragmentation.

Also, doing this ensures that the a and b arrays are close together in memory. If they're often used together, this can improve cache hits.

Library implementers often do these micro-optimizations because they can't predict how the library will be used, and they want to work as well as possible in all cases. When you're writing your own application, you have a better idea of which code will be in inner loops that are significant for performance tuning.

Can you fit smaller sized structures into a larger chunk of memory?

Yes, that's fine. malloc will return you suitably aligned memory. Just assigning any arbitrary void * pointer to a small_structure * variable is not OK, however. That means your specific example is fine, but something like:

int function(void *p)
{
    small_structure *s = p;
    return s->data[0];
}

is not! If p isn't suitably aligned for a small_structure * pointer, you've just caused undefined behaviour.

Allocating large blocks of memory with new

With respect to new in C++/GCC/Linux (32-bit)...

It's been a while, and it's implementation dependent, but I believe new will, behind the scenes, invoke malloc(). malloc(), unless you ask for something exceeding the address space of the process, or outside of specified (ulimit/getrusage) limits, won't fail, even when your system doesn't have enough RAM + swap. For example: a 1GB malloc() on a system with 256MB of RAM and no swap will, I believe, succeed.

However, when you go use that memory, the kernel supplies the pages through a lazy-allocation mechanism. At that point, when you first read or write to that memory, if the kernel cannot allocate memory pages to your process, it kills your process.

This can be a problem on a shared computer, when a colleague's process has a slow memory leak. Especially when it starts knocking out system processes.

So the fact that you are seeing std::bad_alloc exceptions is "interesting".

Now new will run the constructor on the allocated memory, touching all those memory pages before it returns. Depending on implementation, it might be trapping the out-of-memory signal.

Have you tried this with plain ol' malloc()?

Have you tried running the "free" program? Do you have enough memory available?

As others have suggested, have you checked limit/ulimit/getrusage() for hard & soft constraints?

What does your code look like, exactly? I'm guessing new ClassFoo [ N ]. Or perhaps new char [ N ].

What is sizeof(ClassFoo)? What is N?

Allocating 64*288000 bytes (about 17.58MB) should be trivial for most modern machines... Are you running on an embedded system or something otherwise special?

Alternatively, are you linking with a custom new allocator? Does your class have its own new allocator?

Does your data structure (class) allocate other objects as part of its constructor?

Has someone tampered with your libraries? Do you have multiple compilers installed? Are you using the wrong include or library paths?

Are you linking against stale object files? Do you simply need to recompile all your source files?

Can you create a trivial test program? Just a couple lines of code that reproduces the bug? Or is your problem elsewhere, and only showing up here?

--

For what it's worth, I've allocated 2GB-plus data blocks with new in 32-bit Linux under g++. Your problem lies elsewhere.

How much memory should you be able to allocate?

As much as the OS wants to give you. By default, Windows lets a 32-bit process have 2GB of address space. And this is split into several chunks. One area is set aside for the stack, others for each executable and dll that is loaded. Whatever is left can be dynamically allocated, but there's no guarantee that it'll be one big contiguous chunk. It might be several smaller chunks of a couple of hundred MB each.

If you compile with the LargeAddressAware flag, 64-bit Windows will let you use the full 4GB address space, which should help a bit, but in general,

  • you shouldn't assume that the available memory is contiguous; be able to work with multiple smaller allocations rather than a few big ones, and
  • you should compile as a 64-bit application if you need a lot of memory.

Stack allocation in c++ for large sizes

When used with a pointer, the sizeof operator returns the size of the pointer and not what it points to.

When you allocate memory dynamically (in C++, use new instead of malloc()), you need to keep track of the number of entries yourself. Or better yet, use e.g. std::vector.


