How to Limit CPU and RAM Resources for mongodump

How to limit CPU and RAM resources for mongodump?

You should use cgroups. Mount points and details differ between distros and kernels. For example, Debian 7.0 with the stock kernel doesn't mount cgroupfs by default and has the memory subsystem disabled (the usual advice is to reboot with cgroup_enable=memory), while openSUSE 13.1 ships with all of that out of the box (mostly thanks to systemd).

So first of all, create mount points and mount cgroupfs if not yet done by your distro:

mkdir /sys/fs/cgroup/cpu
mount -t cgroup -o cpuacct,cpu cgroup /sys/fs/cgroup/cpu

mkdir /sys/fs/cgroup/memory
mount -t cgroup -o memory cgroup /sys/fs/cgroup/memory

Create a cgroup:

mkdir /sys/fs/cgroup/cpu/shell
mkdir /sys/fs/cgroup/memory/shell

Set up the cgroup. I decided to alter CPU shares. The default value is 1024, so setting it to 128 limits the cgroup to roughly 11% of all CPU resources when there are competitors; if there are still free CPU resources, they will be given to mongodump. You may also use the cpuset controller to limit the number of cores available to it (a sketch follows the commands below).

echo 128 > /sys/fs/cgroup/cpu/shell/cpu.shares
echo 50331648 > /sys/fs/cgroup/memory/shell/memory.limit_in_bytes
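
For instance, a minimal cpuset sketch, assuming the cpuset controller is (or can be) mounted at /sys/fs/cgroup/cpuset; pinning to core 0 is just an example:

mkdir -p /sys/fs/cgroup/cpuset
mount -t cgroup -o cpuset cgroup /sys/fs/cgroup/cpuset 2>/dev/null || true
mkdir -p /sys/fs/cgroup/cpuset/shell
echo 0 > /sys/fs/cgroup/cpuset/shell/cpuset.cpus   # allow only CPU core 0
echo 0 > /sys/fs/cgroup/cpuset/shell/cpuset.mems   # memory node 0 (must be set before adding tasks)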

Now add PIDs to the cgroup; this will also affect all of their children.

echo 13065 >  /sys/fs/cgroup/cpu/shell/tasks
echo 13065 > /sys/fs/cgroup/memory/shell/tasks

I ran a couple of tests. A Python process that tried to allocate a bunch of memory was killed by the OOM killer:

myaut@zenbook:~$ python -c 'l = range(3000000)'
Killed

I also ran four infinite loops outside the cgroup and a fifth inside it. As expected, the loop running in the cgroup got only about 45% of CPU time, while the rest of them got about 355% (I have 4 cores).
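
A rough sketch of a similar experiment, using the cgroup created above (the busy loops are just placeholders for CPU-hungry work):

for i in 1 2 3 4; do sh -c 'while :; do :; done' & done                    # four loops outside the cgroup
sh -c 'echo $$ > /sys/fs/cgroup/cpu/shell/tasks; while :; do :; done' &    # a fifth loop inside it
top                                                                        # compare the %CPU of the five sh processes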

Note that none of these changes survive a reboot!

You may add this code to a script that runs mongodump, or use a more permanent solution.
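
For example, a minimal wrapper sketch; the cgroup paths, the limits, and the mongodump arguments are assumptions to adapt to your setup:

#!/bin/sh
set -e

CPU_CG=/sys/fs/cgroup/cpu/shell
MEM_CG=/sys/fs/cgroup/memory/shell

mkdir -p "$CPU_CG" "$MEM_CG"
echo 128 > "$CPU_CG/cpu.shares"                  # ~11% of CPU under contention
echo 50331648 > "$MEM_CG/memory.limit_in_bytes"  # 48 MiB

# Put this shell into the cgroup; mongodump inherits the membership.
echo $$ > "$CPU_CG/tasks"
echo $$ > "$MEM_CG/tasks"

exec mongodump --out /backup/dump "$@"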

Performance issue during mongodump

As you have a heavy load, adding a replica set is a good solution, since the backup can then be taken on a secondary node. Be aware that a replica set needs at least three servers (you can use a primary/secondary/arbiter layout, where the arbiter needs only a small amount of resources).
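
If you go that route, the dump can be pointed at a secondary; a sketch, where the replica set name, hostnames, and output path are placeholders:

mongodump --host "rs0/db1.example.com:27017,db2.example.com:27017" --readPreference=secondary --gzip --out /backup/dump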

mongodump takes a general read lock as it runs, which will have an impact if there are a lot of writes to the database being dumped.

Hint: try to take the backup when the load on the system is light.

Mongorestore seems to run out of memory and kills the mongo process

Since mongorestore continues where it left off successfully, it doesn't sound like you're running out of disk space, so focusing on memory issues is the correct response. You're definitely running out of memory during the mongorestore process.

I would highly recommend going with swap space, as this is the simplest, most reliable, least hacky, and arguably the most officially supported way to handle this problem.
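
If the machine has no swap yet, a sketch of adding a swap file (the 4 GB size and the /swapfile path are just examples):

fallocate -l 4G /swapfile        # or: dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# add "/swapfile none swap sw 0 0" to /etc/fstab to make it persistent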

Alternatively, if you're for some reason completely opposed to using swap space, you could temporarily use a node with a larger amount of memory, perform the mongorestore on this node, allow it to replicate, then take the node down and replace it with a node that has fewer resources allocated to it. This option should work, but could become quite difficult with larger data sets and is pretty overkill for something like this.

MongoDB using too much memory

Okay, so after following the clues given by loicmathieu and jstell, and digging in a little, here is what I found out about MongoDB when using the WiredTiger storage engine. I'm putting it here in case anyone runs into the same questions.

The memory-usage threads that I mentioned all date from 2012-2014, pre-date WiredTiger, and describe the behavior of the original MMAPv1 storage engine, which has neither a separate cache nor support for compression.

The WiredTiger cache setting only controls the size of memory used directly by the WiredTiger storage engine (not the total memory used by mongod). Many other things potentially take memory in a MongoDB/WiredTiger configuration, such as the following (a sketch for inspecting these numbers follows the list):

  • WiredTiger compresses disk storage, but the data in memory are uncompressed.

  • WiredTiger by default does not fsync the data on each commit, so the
    log files are also in RAM, which takes its toll on memory. It's also
    mentioned that in order to use I/O efficiently, WiredTiger chunks I/O
    requests (cache misses) together, which also seems to take some RAM
    (in fact, dirty pages, i.e. pages that have been changed/updated,
    have a list of updates on them stored in a concurrent skip list).

  • WiredTiger keeps multiple versions of records in its cache
    (Multi-Version Concurrency Control; read operations access the last
    committed version before their operation).

  • WiredTiger keeps checksums of the data in cache.

  • MongoDB itself consumes memory to handle open connections, aggregations, server-side code, and so on.
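
To see what mongod itself reports about its memory and WiredTiger cache usage, a small sketch using the mongo shell non-interactively (assuming a default localhost connection; the field names come from serverStatus output):

mongo --quiet --eval 'printjson(db.serverStatus().mem)'
mongo --quiet --eval 'print(db.serverStatus().wiredTiger.cache["bytes currently in the cache"])'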

Considering these facts, relying on show dbs; was not technically correct, since it only shows the compressed size of the datasets.

The following commands can be used in order to get the full dataset size.

db.getSiblingDB('data_server').stats()
// or
db.stats()

This results in the following:

{
    "db" : "data_server",
    "collections" : 11,
    "objects" : 266565289,
    "avgObjSize" : 224.8413545621088,
    "dataSize" : 59934900658,      // ~60 GB
    "storageSize" : 22959984640,
    "numExtents" : 0,
    "indexes" : 41,
    "indexSize" : 7757348864,      // ~7.7 GB
    "ok" : 1
}

So it seems that the actual dataset size plus its indexes account for about 68 GB of that memory.

Considering all of this, the memory usage is now pretty much expected. The good part is that it's completely okay to limit the WiredTiger cache size, since WiredTiger handles I/O operations quite efficiently (as described above).
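
For reference, a sketch of capping the WiredTiger cache at startup; the 4 GB figure and the dbpath are only examples, and the same limit can be set in mongod.conf under storage.wiredTiger.engineConfig.cacheSizeGB:

mongod --wiredTigerCacheSizeGB 4 --dbpath /var/lib/mongodb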

There also remains the problem of the OOM killer. To work around it, since we didn't have enough resources to move MongoDB off this machine, we lowered oom_score_adj for the important processes to prevent the OOM killer from targeting them for the time being (meaning we told the OOM killer not to kill our desired processes).
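
A sketch of that adjustment for mongod (the -600 value is just an example; -1000 would exempt the process from OOM kills entirely):

echo -600 > /proc/$(pidof mongod)/oom_score_adj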


