Linux "Free -M": Total, Used and Free Memory Values Don't Add Up

Linux free -m: Total, used and free memory values don't add up

For main memory, the total size can be calculated as used + free + buffers + cache, or equivalently as used + free + buff/cache, because buff/cache = buffers + cache.

The free man page defines used as "Used memory (calculated as total - free - buffers - cache)".

As the man page of free says:

total Total installed memory (MemTotal and SwapTotal in /proc/meminfo)

used Used memory (calculated as total - free - buffers - cache)

free Unused memory (MemFree and SwapFree in /proc/meminfo)

shared Memory used (mostly) by tmpfs (Shmem in /proc/meminfo, available on kernels 2.6.32, displayed as zero if not available)

buffers Memory used by kernel buffers (Buffers in /proc/meminfo)

cache Memory used by the page cache and slabs (Cached and Slab in /proc/meminfo)

buff/cache Sum of buffers and cache

available Estimation of how much memory is available for starting new applications, without swapping. Unlike the data provided by the cache or free fields, this field takes into account page cache and also that not all reclaimable memory slabs will be reclaimed due to items being in use (MemAvailable in /proc/meminfo, available on kernels 3.14, emulated on kernels 2.6.27+, otherwise the same as free)

In your case,


873224 (used) + 389320 (free) + 25493068 (buff/cache) = 26755612 (total)
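You can run the same check on your own machine. This is a minimal sketch assuming the newer procps layout, where the Mem: columns are total, used, free, shared, buff/cache and available, and used is calculated as total - free - buffers - cache (as in the man page excerpt above):

# used + free + buff/cache should sum exactly to total
free | awk '/^Mem:/ { print $3 + $4 + $6, "=", $2 }'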


My server's total memory doesn't match USED + FREE memory. I'm using the Linux free command

buff/cache: 21114

Your operating system uses "free" memory for caching.

From Wikipedia:

Usually, all physical memory not directly allocated to applications is
used by the operating system for the page cache. Since the memory
would otherwise be idle and is easily reclaimed when applications
request it, there is generally no associated performance penalty [...]

Linux free shows high memory usage but top does not

Don't look at the "Mem" line, look at the one below it.

The Linux kernel consumes as much memory as it can to provide the I/O cache (and other non-critical buffers, but the cache is going to be most of this usage). This memory is relinquished to processes when they request it. The "-/+ buffers/cache" line is showing you the adjusted values after the I/O cache is accounted for, that is, the amount of memory used by processes and the amount available to processes (in this case, 578MB used and 7411MB free).

The difference of used memory between the "Mem" and "-/+ buffers/cache" line shows you how much is in use by the kernel for the purposes of caching: 7734MB - 578MB = 7156MB in the I/O cache. If processes need this memory, the kernel will simply shrink the size of the I/O cache.
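A quick way to pull that same number out of the old-format output; the awk fields are an assumption based on the layout with the "-/+ buffers/cache" line:

# Cache size = "used" on the Mem line minus "used" on the -/+ buffers/cache line
free -m | awk '/^Mem:/ { mem_used = $3 } /buffers\/cache/ { print mem_used - $3 " MB used by the I/O cache" }'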

How can I measure the actual memory usage of an application or process?

With ps or similar tools you will only get the amount of memory pages allocated by that process. This number is correct, but:

  • does not reflect the actual amount of memory used by the application, only the amount of memory reserved for it

  • can be misleading if pages are shared, for example by several threads or by using dynamically linked libraries

If you really want to know what amount of memory your application actually uses, you need to run it within a profiler. For example, Valgrind can give you insights about the amount of memory used, and, more importantly, about possible memory leaks in your program. The heap profiler tool of Valgrind is called 'massif':

Massif is a heap profiler. It performs detailed heap profiling by taking regular snapshots of a program's heap. It produces a graph showing heap usage over time, including information about which parts of the program are responsible for the most memory allocations. The graph is supplemented by a text or HTML file that includes more information for determining where the most memory is being allocated. Massif runs programs about 20x slower than normal.

As explained in the Valgrind documentation, you need to run the program through Valgrind:

valgrind --tool=massif <executable> <arguments>

Massif writes memory usage snapshots to a file (e.g. massif.out.12345). These provide (1) a timeline of memory usage and (2), for each snapshot, a record of where in your program the memory was allocated. A great graphical tool for analyzing these files is massif-visualizer, but I found ms_print, a simple text-based tool shipped with Valgrind, to be of great help already.
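For example, to read the snapshot file from the run above (using the example file name mentioned above):

# Summarizes the snapshots as an ASCII graph plus per-snapshot allocation trees
ms_print massif.out.12345 | less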

To find memory leaks, use the (default) memcheck tool of Valgrind.
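A minimal sketch of a leak-check run, mirroring the massif invocation above (memcheck is the default tool, so no --tool flag is needed):

# Reports each leaked allocation with a stack trace
valgrind --leak-check=full <executable> <arguments>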

Where has my used memory gone?

I was able to identify and solve my issue, with the help of the information at http://linux-mm.org/Low_On_Memory.

The slabinfo entry for dentry was around 5 GB. After issuing the sync command, the dirty pages were written to disk, and echo 3 > /proc/sys/vm/drop_caches freed up more memory by dropping the caches.

In addition to the information on the above website, note that the kernel reclaims this memory at a rate that depends on vfs_cache_pressure (/proc/sys/vm/vfs_cache_pressure).
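For reference, the commands involved are the ones already mentioned above; run them as root, and how much actually gets freed depends entirely on what was cached:

sync                                  # write dirty pages back to disk first
echo 3 > /proc/sys/vm/drop_caches     # drop pagecache, dentries and inodes
cat /proc/sys/vm/vfs_cache_pressure   # reclaim aggressiveness for dentries/inodes (default 100)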

Thanks to all for your help.

Linux memory reporting discrepancy

Check the usage of the Slab cache (Slab:, SReclaimable: and SUnreclaim: in /proc/meminfo). This is a cache of in-kernel data structures, and is separate from the page cache reported by free.

If the slab cache is responsible for a large portion of your "missing memory", check /proc/slabinfo to see where it's gone. If it's dentries or inodes, you can use sync ; echo 2 > /proc/sys/vm/drop_caches to get rid of them.

You can also use the slabtop tool to show the current usage of the Slab cache in a friendly format. Pressing c will sort the list by current cache size.
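A minimal sketch of both checks, assuming the slabtop shipped with procps (-o prints once and exits, -s c sorts by cache size):

# Overall slab usage from /proc/meminfo
grep -E '^(Slab|SReclaimable|SUnreclaim):' /proc/meminfo
# Largest slab caches, sorted by cache size
slabtop -o -s c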

Peak memory usage of a Linux/Unix process

Here's a one-liner that doesn't require any external scripts or utilities and doesn't require you to start the process via another program like Valgrind or time, so you can use it for any process that's already running:

grep ^VmPeak /proc/$PID/status

(replace $PID with the PID of the process you're interested in)
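VmPeak is the peak virtual memory size; if you are more interested in the peak resident set size, the same file also has VmHWM. For example, with a placeholder PID of 1234:

# Peak virtual size (VmPeak) and peak resident set size (VmHWM)
grep -E '^Vm(Peak|HWM)' /proc/1234/status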

How to get the percentage of memory free with a Linux command?

Using the free command:

% free
             total       used       free     shared    buffers     cached
Mem:       2061712     490924    1570788          0      60984     220236
-/+ buffers/cache:      209704    1852008
Swap:       587768          0     587768

Based on this output, we grab the line with Mem and use awk to pick specific fields for our computations.

This will report the percentage of memory in use

% free | grep Mem | awk '{print $3/$2 * 100.0}'
23.8115

This will report the percentage of memory that's free

% free | grep Mem | awk '{print $4/$2 * 100.0}'
76.1885

You could create an alias for this command or put it into a tiny shell script. The output can be tailored to your needs with a printf format string along these lines (note that a literal percent sign is written as %% in awk's printf):

free | grep Mem | awk '{ printf("free: %.4f%%\n", $4/$2 * 100.0) }'
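Note that newer versions of free no longer print the "-/+ buffers/cache" line and add an "available" column instead; assuming that layout (where available is the 7th field), a more meaningful percentage is:

# Percentage of memory available for new applications without swapping
free | awk '/^Mem/ { printf("available: %.1f%%\n", $7/$2 * 100.0) }'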

How to set Apache Spark Executor memory

Since you are running Spark in local mode, setting spark.executor.memory won't have any effect, as you have noticed. The reason is that the Worker "lives" within the driver JVM process that is started when you start spark-shell, and the default heap used for that is 512 MB. You can increase that by setting spark.driver.memory to something higher, for example 5g. You can do that by either:

  • setting it in the properties file (default is $SPARK_HOME/conf/spark-defaults.conf),

    spark.driver.memory              5g
  • or by supplying the configuration setting at runtime

    $ ./bin/spark-shell --driver-memory 5g

Note that this cannot be achieved by setting it in the application, because by then it is already too late: the process has already started with a fixed amount of memory.

The reason you see 265.4 MB is that Spark dedicates spark.storage.memoryFraction * spark.storage.safetyFraction of the heap to storage memory, and by default these are 0.6 and 0.9.

512 MB * 0.6 * 0.9 would be about 276 MB; the reported value is slightly lower (265.4 MB) because the fractions are applied to the heap size the JVM actually reports (Runtime.getRuntime.maxMemory), which is a bit less than the configured 512 MB.

So be aware that not the whole amount of driver memory will be available for RDD storage.

But when you start running this on a cluster, the spark.executor.memory setting will take over when calculating the amount to dedicate to Spark's memory cache.
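For example, a cluster submission might look like the following; the master URL and jar name are placeholders:

    $ ./bin/spark-submit --master spark://master:7077 \
        --driver-memory 2g \
        --executor-memory 4g \
        your-app.jar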

Memory usage doesn't decrease when free() is used

On many operating systems, free() doesn't make the memory available for the OS again, but "only" for new calls to malloc(). This is why you don't see the memory usage go down externally, but when you increase the number of new allocations by threading, the memory is re-used so total usage doesn't go through the roof.
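You can watch this from outside the process. Assuming $PID is the process in question, the resident set size (VmRSS) typically stays flat after free() even though the allocator is reusing the blocks internally:

watch -n1 "grep -E '^Vm(RSS|Size)' /proc/$PID/status"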


