Linux: Cannot Allocate More Than 32 GB/64 GB of Memory in a Single Process Due to Virtual Memory Limit

Single Process Maximum Possible Memory in x64 Linux

Certain kernels have different limits, but on any modern 64-bit Linux the single-process limit is still far above 32 GB (assuming the process is a 64-bit executable). Various distributions may also have set per-process limits using sysctl, so check your local environment to make sure there aren't arbitrarily low limits configured (also check ipcs -l on RPM-based systems).

The documentation for the Debian AMD64 port specifically mentions that the per-process virtual address space limit is 128 TiB (twice the physical memory limit), so that is the reasonable upper bound you are working with.

Increase maximum virtual memory size above 256 GB

Maybe you're hitting the maximum of /proc/sys/vm/max_map_count. This setting caps the number of memory mappings (mmaps) a single process may have; the default is 65530 on most kernels. So it's likely not the size of the memory you want to malloc, but the number of mappings your allocations create, that causes the error Cannot allocate memory.
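
As a rough illustration (a sketch, not part of the original answer), the following C program creates many tiny anonymous mappings until mmap fails. Alternating the protection flags keeps the kernel from merging adjacent mappings into a single VMA, so on a default system it fails with "Cannot allocate memory" after roughly vm.max_map_count mappings, even though almost no memory is actually used:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    long count = 0;
    for (;;) {
        /* Alternate protections so neighbouring mappings cannot be merged
           into one VMA; each mmap call then counts against the limit. */
        int prot = (count % 2) ? PROT_READ : PROT_READ | PROT_WRITE;
        void *p = mmap(NULL, 4096, prot,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            printf("mmap failed after %ld mappings: %s\n",
                   count, strerror(errno));
            return 0;
        }
        count++;
    }
}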

You can try to increase the maximum with:

sysctl -w vm.max_map_count=131070

See also NPTL caps maximum threads at 65528?

Why do so many applications allocate an incredibly large amount of virtual memory while not using any of it?

Languages that run their code inside virtual machines (like Java (*), C# or Python) usually assign large amounts of (virtual) memory right at startup. Part of this is necessary for the virtual machine itself, part is pre-allocated to parcel out to the application inside the VM.

With languages executing under direct OS control (like C or C++), this is not necessary. You can write applications that dynamically use just the amount of memory they actually require. However, some applications and frameworks are still designed in such a way that they request a large chunk of memory from the operating system once and then manage it themselves, in the hope of being more efficient about it than the OS (a minimal sketch of this pattern follows the list below).

There are problems with this:

  • It is not necessarily faster. Most operating systems are already quite smart about how they manage their memory. Rule #1 of optimization: measure, optimize, measure.

  • Not all operating systems have virtual memory. There are some quite capable ones out there that cannot run applications that are so "careless" as to assume you can allocate lots and lots of "not real" memory without problems.

  • You already found out that if you turn your OS from "generous" to "strict", these memory hogs fall flat on their noses. ;-)
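
A minimal sketch of the "grab a big chunk once, manage it yourself" pattern described above, assuming Linux and using made-up names (arena_init, arena_alloc): the process reserves one large region from the OS once and hands out pieces of it with a trivial bump allocator.

#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>

static unsigned char *arena_base;
static size_t arena_size;
static size_t arena_used;

/* Reserve one large region up front; physical pages are only used once touched. */
static int arena_init(size_t size)
{
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED)
        return -1;
    arena_base = p;
    arena_size = size;
    arena_used = 0;
    return 0;
}

/* Hand out pieces of the region with a trivial bump allocator. */
static void *arena_alloc(size_t n)
{
    n = (n + 15) & ~(size_t)15;            /* keep 16-byte alignment */
    if (arena_used + n > arena_size)
        return NULL;
    void *p = arena_base + arena_used;
    arena_used += n;
    return p;
}

int main(void)
{
    if (arena_init((size_t)1 << 30) != 0)  /* grab 1 GiB of address space once */
        return 1;
    int *xs = arena_alloc(1000 * sizeof(int));
    xs[0] = 42;
    printf("%d\n", xs[0]);
    return 0;
}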


(*) Java, for example, cannot expand its VM once it is started. You have to give the maximum size of the VM as a parameter (-Xmxn). Thinking "better safe than sorry" leads to severe overallocations by certain people / applications.

Wine can't use more than 32GB of RAM on Ubuntu Amazon EC2

Thanks to the tip by Alexandre Julliard, I was able to modify the VIRTUAL_HEAP_SIZE constant in dlls/ntdll/virtual.c to be 2x larger.

Alexandre stated:

The virtual heap is running out of space. Storing the page protection flags for 32Gb requires 8Mb, which is the heap limit. You can increase VIRTUAL_HEAP_SIZE in dlls/ntdll/virtual.c to work around it, but we probably want a different mechanism for such cases.

On a 64-bit build the default works out to 8 MB (sizeof(void*) is 8): 32 GB of address space at 4 KB per page is 8 M pages, so at roughly one byte of protection flags per page the heap is exhausted at 32 GB. I made the change at line 144 in dlls/ntdll/virtual.c, from:

#define VIRTUAL_HEAP_SIZE (sizeof(void*)*1024*1024)

to this:

#define VIRTUAL_HEAP_SIZE (sizeof(void*)*1024*1024*2)

locally in my wine source (version 1.9.0) and recompiled. This solved my issue.

Fail to allocate a large amount of virtual memory

I read that when you try to allocate more bytes than are available in RAM using malloc(), it allocates virtual memory

To start with, this is not correct. You always allocate virtual memory; that virtual memory is then mapped to some area of physical memory (RAM) or swap space. If swap space plus physical memory adds up to less than 100 GB, the allocation will fail (depending on the kernel's overcommit policy). The libc implementation might also fail to allocate such a large amount if it has some (configurable) limit set.
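
As a small illustration (a sketch, not part of the original answer), a plain malloc of 100 GiB either fails with "Cannot allocate memory" or succeeds without committing anything, depending on available RAM plus swap and the kernel's overcommit policy:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Request 100 GiB; success or failure depends on RAM + swap and overcommit settings. */
    void *p = malloc(100ULL * 1024 * 1024 * 1024);
    if (p == NULL)
        perror("malloc");   /* typically "Cannot allocate memory" */
    else
        puts("allocation succeeded; nothing is committed until the pages are touched");
    free(p);
    return 0;
}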

But I have a strange task: show 100 GB of virtual memory for the process in the htop tool, and it's claimed to be achievable with a single line of code.

Yes: if you just need this much virtual memory, you can reserve it without committing it. You can read up on how mmap (on *NIX) or VirtualAlloc (on Windows) can be used for this.

When you reserve a particular virtual address range, you tell the operating system that you intend to use it, so other code can't claim it; that doesn't mean you can actually access it yet. It also means the range doesn't need RAM or swap backing, so you can reserve an arbitrarily large amount (less than 2^48 bytes on your 64-bit system, of course).
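
A minimal sketch of such a reservation on Linux (my assumption, not code from the original answer) uses an anonymous mapping with PROT_NONE and MAP_NORESERVE, which should show up in the process's virtual size without being charged against RAM or swap:

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t size = 100ULL * 1024 * 1024 * 1024;   /* 100 GiB */
    /* PROT_NONE + MAP_NORESERVE: reserve the address range without committing memory. */
    void *addr = mmap(NULL, size, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (addr == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("reserved %zu bytes at %p\n", size, addr);
    getchar();   /* pause so the mapping can be inspected in htop or /proc/self/maps */
    return 0;
}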

I am not sure whether htop will include that in the value it shows; you will have to try it out.

If this doesn't add to your virtual memory count, you can map a file instead of mapping anonymously. This might create a 100 GB file on your system (assuming you have that much space), but you should then even be able to read from and write to it.

The following code can be used on Linux (it needs fcntl.h, unistd.h and sys/mman.h):

size_t size = 100ULL * 1024 * 1024 * 1024;            /* 100 GiB */
int fd = open("temp.txt", O_RDWR | O_CREAT, 0644);    /* O_CREAT requires a mode argument */
ftruncate(fd, size);                                  /* size the file so the mapping is backed */
void *addr = mmap(NULL, size, PROT_WRITE | PROT_READ, MAP_PRIVATE, fd, 0);

Soft virtual memory limit (ulimit -v)

The virtual memory limit is per process, not per user.
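
ulimit -v corresponds to the per-process RLIMIT_AS resource limit. As a sketch (not from the original answer), a process can inspect or lower its own limit with getrlimit/setrlimit, and the change affects only that process and its children:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_AS, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("RLIMIT_AS: soft=%llu hard=%llu\n",
           (unsigned long long)rl.rlim_cur, (unsigned long long)rl.rlim_max);

    /* Lower the soft limit to 1 GiB; only this process and its children are affected. */
    rl.rlim_cur = 1ULL << 30;
    if (setrlimit(RLIMIT_AS, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}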


