Why does the JVM consume less memory than -Xms specified?
You're looking at the resident memory - that is, the physical RAM consumed. See here for more info.
The virtual memory, however, is the memory consumed by your application, including memory swapped out to disk. You'll see there is a closer correspondence between the virtual memory and your -Xms setting.
See this ServerFault answer for more info.
Why is memory usage less than Xms?
The following line shows you the current state, not the maximum/minimum range:
java.exe 7700 Console 1 145,272 KB
Execute some memory-intensive program in your JVM and you will get a different result when you run tasklist | find {my_pid} again.
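As an illustration, here is a small program that commits a visible amount of heap and then stays alive long enough to inspect it with tasklist (the class name MemoryHog is made up for this example, not taken from the question):

```java
import java.util.ArrayList;
import java.util.List;

public class MemoryHog {
    // Allocate roughly the given number of megabytes on the heap
    static List<byte[]> allocate(int megabytes) {
        List<byte[]> blocks = new ArrayList<>();
        for (int i = 0; i < megabytes; i++) {
            blocks.add(new byte[1024 * 1024]); // 1 MB per block
        }
        return blocks;
    }

    public static void main(String[] args) throws InterruptedException {
        List<byte[]> blocks = allocate(100); // touch ~100 MB of heap
        System.out.println("Holding " + blocks.size() + " MB; inspect the process now");
        Thread.sleep(30_000); // keep the process alive while you run tasklist
    }
}
```

While the sleep is running, tasklist should report a working set well above the idle figure quoted above.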
Note that Xms guarantees that if the OS does not have this much memory (plus some additional memory) left, it will not start java.exe. However, once java.exe is started, the OS may optimize the allocation of memory based on usage; e.g. if java.exe becomes idle, the OS may allocate it less memory than Xms. Moreover, the information provided by tasklist is intended for end-users and may not closely match the JVM's internal stats. Please check this discussion for some more insight.
Why is the JVM using more memory than I am allocating
-Xmx just tells the JVM how much memory it may use for its internal heap.
The JVM needs memory for other purposes (permanent generation, temporary space etc.), plus like every binary it needs space for its own binary code, plus any libraries/DLLs/.so it loads.
The 144 MiB you quote probably contains at least some of these other memory uses.
How did you measure the memory usage? On modern OS using virtual memory, measuring memory usage of a process is not quite trivial, and cannot be expressed as a single value.
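As a sketch of why a single value is not enough: the Runtime API reports only the heap, which is just one slice of what the OS attributes to the process (the class name here is illustrative):

```java
public class HeapStats {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long committed = rt.totalMemory();         // heap currently committed by the JVM
        long used = committed - rt.freeMemory();   // heap actually occupied by objects
        long max = rt.maxMemory();                 // the -Xmx ceiling
        System.out.printf("heap used=%d KB, committed=%d KB, max=%d KB%n",
                used / 1024, committed / 1024, max / 1024);
        // None of these figures include Metaspace, the code cache,
        // thread stacks, or native allocations - only the Java heap.
    }
}
```

Comparing these numbers with the RSS of the same process shows how much memory lives outside the heap.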
Java using much more memory than heap size (or how to correctly size the Docker memory limit)
Virtual memory used by a Java process extends far beyond just the Java Heap. The JVM includes many subsystems: Garbage Collector, Class Loading, JIT compilers etc., and all these subsystems require a certain amount of RAM to function.
The JVM is not the only consumer of RAM. Native libraries (including the standard Java Class Library) may also allocate native memory, and this won't even be visible to Native Memory Tracking. The Java application itself can also use off-heap memory by means of direct ByteBuffers.
So what takes memory in a Java process?
JVM parts (mostly shown by Native Memory Tracking)
1. Java Heap
The most obvious part. This is where Java objects live. The heap takes up to -Xmx amount of memory.
2. Garbage Collector
GC structures and algorithms require additional memory for heap management. These structures are Mark Bitmap, Mark Stack (for traversing the object graph), Remembered Sets (for recording inter-region references) and others. Some of them are directly tunable, e.g. -XX:MarkStackSizeMax, while others depend on heap layout: e.g. the larger the G1 regions (-XX:G1HeapRegionSize), the smaller the remembered sets.
GC memory overhead varies between GC algorithms. -XX:+UseSerialGC and -XX:+UseShenandoahGC have the smallest overhead. G1 or CMS may easily use around 10% of the total heap size.
3. Code Cache
Contains dynamically generated code: JIT-compiled methods, the interpreter and run-time stubs. Its size is limited by -XX:ReservedCodeCacheSize (240M by default). Disable tiered compilation with -XX:-TieredCompilation to reduce the amount of compiled code and thus the Code Cache usage.
4. Compiler
The JIT compiler itself also requires memory to do its job. This can again be reduced by switching off tiered compilation or by reducing the number of compiler threads: -XX:CICompilerCount.
5. Class loading
Class metadata (method bytecodes, symbols, constant pools, annotations etc.) is stored in an off-heap area called Metaspace. The more classes are loaded, the more Metaspace is used. Total usage can be limited by -XX:MaxMetaspaceSize (unlimited by default) and -XX:CompressedClassSpaceSize (1G by default).
6. Symbol tables
Two main hashtables of the JVM: the Symbol table contains names, signatures, identifiers etc., and the String table contains references to interned strings. If Native Memory Tracking indicates significant memory usage by the String table, it probably means the application excessively calls String.intern.
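A minimal sketch of how strings end up in the String table (the class name is made up for illustration):

```java
public class InternDemo {
    public static void main(String[] args) {
        // Built at runtime, so this is a fresh heap object,
        // distinct from the "hello" literal in the constant pool
        String built = new StringBuilder("he").append("llo").toString();
        System.out.println(built == "hello");     // false: different instances

        // intern() stores/looks up the string in the JVM String table
        String canonical = built.intern();
        System.out.println(canonical == "hello"); // true: same canonical instance
    }
}
```

Every distinct interned string keeps an entry alive in that native hashtable, which is why heavy intern() use shows up in NMT.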
7. Threads
Thread stacks are also responsible for taking RAM. The stack size is controlled by -Xss. The default is 1M per thread, but fortunately things are not so bad: the OS allocates memory pages lazily, i.e. on first use, so the actual memory usage will be much lower (typically 80-200 KB per thread stack). I wrote a script to estimate how much of RSS belongs to Java thread stacks.
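The per-thread cost can also be tuned in code: the four-argument Thread constructor accepts a requested stack size in bytes, which the JVM treats as a hint and may round or ignore. A small sketch (class name is illustrative):

```java
public class SmallStackThread {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> System.out.println("running with a reduced stack");
        // null = default thread group; 256 KB requested stack
        // instead of the platform/-Xss default
        Thread t = new Thread(null, task, "small-stack", 256 * 1024);
        t.start();
        t.join();
    }
}
```

This is useful when an application needs many threads that are known to have shallow call stacks.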
There are other JVM parts that allocate native memory, but they do not usually play a big role in total memory consumption.
Direct buffers
An application may explicitly request off-heap memory by calling ByteBuffer.allocateDirect. The default off-heap limit is equal to -Xmx, but it can be overridden with -XX:MaxDirectMemorySize. Direct ByteBuffers are included in the Other section of NMT output (or Internal before JDK 11).
The amount of direct memory in use is visible through JMX, e.g. in JConsole or Java Mission Control.
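The same numbers are reachable programmatically through the standard BufferPoolMXBean from java.lang.management; a small sketch with an illustrative class name:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;
import java.util.List;

public class DirectMemoryProbe {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(16 * 1024 * 1024); // 16 MB off-heap
        List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            // The "direct" pool covers allocateDirect;
            // the "mapped" pool covers MappedByteBuffers
            System.out.printf("%s: count=%d, used=%d KB, capacity=%d KB%n",
                    pool.getName(), pool.getCount(),
                    pool.getMemoryUsed() / 1024, pool.getTotalCapacity() / 1024);
        }
        if (buf.isDirect()) { /* keep buf reachable until after the report */ }
    }
}
```

Polling these beans is a lightweight way to watch direct memory growth without attaching JConsole.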
Besides direct ByteBuffers there can be MappedByteBuffers - files mapped into the virtual memory of a process. NMT does not track them; however, MappedByteBuffers can also take physical memory, and there is no simple way to limit how much they can take. You can see the actual usage by looking at the process memory map: pmap -x <pid>
Address Kbytes RSS Dirty Mode Mapping
...
00007f2b3e557000 39592 32956 0 r--s- some-file-17405-Index.db
00007f2b40c01000 39600 33092 0 r--s- some-file-17404-Index.db
^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^
Native libraries
JNI code loaded by System.loadLibrary can allocate as much off-heap memory as it wants with no control from the JVM side. This also concerns the standard Java Class Library. In particular, unclosed Java resources may become a source of native memory leaks. Typical examples are ZipInputStream or DirectoryStream.
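The usual defense is try-with-resources, which releases the native handle behind such streams deterministically instead of waiting for the garbage collector (the class name and directory prefix below are illustrative):

```java
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class CloseResources {
    // Count directory entries, closing the native handle deterministically
    static int countEntries(Path dir) throws Exception {
        int count = 0;
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path ignored : stream) count++;
        } // stream.close() runs here, even if iteration throws
        return count;
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("demo");
        System.out.println("entries: " + countEntries(dir));
        Files.delete(dir);
    }
}
```

Forgetting the try-with-resources here would leave a native directory handle (and its associated native memory) alive until finalization, which is exactly the leak pattern described above.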
JVMTI agents, in particular the jdwp debugging agent, can also cause excessive memory consumption.
This answer describes how to profile native memory allocations with async-profiler.
Allocator issues
A process typically requests native memory either directly from the OS (via the mmap system call) or by using malloc - the standard libc allocator. In turn, malloc requests big chunks of memory from the OS using mmap, and then manages these chunks according to its own allocation algorithm. The problem is that this algorithm can lead to fragmentation and excessive virtual memory usage.
jemalloc, an alternative allocator, often appears smarter than the regular libc malloc, so switching to jemalloc may result in a smaller footprint for free.
Conclusion
There is no guaranteed way to estimate full memory usage of a Java process, because there are too many factors to consider.
Total memory = Heap + Code Cache + Metaspace + Symbol tables +
Other JVM structures + Thread stacks +
Direct buffers + Mapped files +
Native Libraries + Malloc overhead + ...
It is possible to shrink or limit certain memory areas (like the Code Cache) with JVM flags, but many others are entirely out of the JVM's control.
One possible approach to setting Docker limits would be to watch the actual memory usage in a "normal" state of the process. There are tools and techniques for investigating issues with Java memory consumption: Native Memory Tracking, pmap, jemalloc, async-profiler.
Update
Here is a recording of my presentation Memory Footprint of a Java Process.
In this video, I discuss what may consume memory in a Java process, how to monitor and restrain the size of certain memory areas, and how to profile native memory leaks in a Java application.
Is there any advantage in setting Xms and Xmx to the same value?
Well, there are a couple of things.
- The program will start with the -Xms heap size; if that value is small, it will eventually force GC to occur more frequently.
- Once the program fills the -Xms heap, the JVM requests additional memory from the OS and eventually grows to -Xmx. This takes extra time and can cause performance issues, so you might as well set the heap to that size from the beginning, saving the JVM from requesting additional memory.
It is very nicely answered here - https://developer.jboss.org/thread/149559?_sscc=t
Java consumes more memory than the Xmx argument
As the comments and answers have alluded to, there are a number of other factors to take into
account when measuring JVM memory usage. However, I don't think any answer has gone into
nearly enough depth.
JVM Memory Overview
Let's hit the question "I was wondering what is that 30MB used for?" head on. To do this, here is a simple Java class:
// HelloWorld.java
public class HelloWorld {
public static void main(String[] args) throws Exception {
System.out.println("Hello world!");
Thread.sleep(10000); // wait 10 seconds so we can get memory usage
}
}
Now compile and run it with heap constraints:
$ nohup java -Xms2m -Xmx2m HelloWorld & # run in background
$ ps aux | awk 'NR==1; /[H]elloWorld/'
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
chaospie 6204 6.0 0.1 2662860 23040 pts/2 Sl 19:15 0:00 java -Xms2m -Xmx2m HelloWorld
Looking at the RSS (Resident Set Size, i.e. how much memory this process is using) above, we see that the JVM's process is using about 23MB of memory. To see why, let's do some analysis. The quickest way to get a good overview is to turn on NativeMemoryTracking and use the jcmd tool's VM.native_memory command. So, let's run our app again:
$ nohup java -XX:NativeMemoryTracking=summary -Xms2M -Xmx2M HelloWorld &
[2] 6661
nohup: ignoring input and appending output to 'nohup.out'
$ ps aux | awk 'NR==1; /[H]elloWorld/'
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
chaospie 6661 5.0 0.1 2662860 23104 pts/2 Sl 19:21 0:00 java -XX:NativeMemoryTracking=summary -Xms2M -Xmx2M HelloWorld
$ jcmd 6661 VM.native_memory summary
6661:
Native Memory Tracking:
Total: reserved=1360145KB, committed=61177KB
- Java Heap (reserved=2048KB, committed=2048KB)
(mmap: reserved=2048KB, committed=2048KB)
- Class (reserved=1066093KB, committed=14189KB)
(classes #402)
(malloc=9325KB #146)
(mmap: reserved=1056768KB, committed=4864KB)
- Thread (reserved=20646KB, committed=20646KB)
(thread #21)
(stack: reserved=20560KB, committed=20560KB)
(malloc=62KB #110)
(arena=23KB #40)
- Code (reserved=249632KB, committed=2568KB)
(malloc=32KB #299)
(mmap: reserved=249600KB, committed=2536KB)
- GC (reserved=10467KB, committed=10467KB)
(malloc=10383KB #129)
(mmap: reserved=84KB, committed=84KB)
- Compiler (reserved=132KB, committed=132KB)
(malloc=1KB #21)
(arena=131KB #3)
- Internal (reserved=9453KB, committed=9453KB)
(malloc=9421KB #1402)
(mmap: reserved=32KB, committed=32KB)
- Symbol (reserved=1358KB, committed=1358KB)
(malloc=902KB #86)
(arena=456KB #1)
- Native Memory Tracking (reserved=143KB, committed=143KB)
(malloc=86KB #1363)
(tracking overhead=57KB)
- Arena Chunk (reserved=175KB, committed=175KB)
(malloc=175KB)
Memory Regions
Let's break it down 1:
- Java Heap : this is the heap itself.
- Class : this is Metaspace, assuming you are using Java 8.
- Thread : this shows the number of threads, and the overall memory usage of the threads (note that the stack value in this section reflects the -Xss value times the number of threads; you can get the default -Xss value with java -XX:+PrintFlagsFinal -version | grep ThreadStackSize).
- Code : the code cache - this is used by the JIT (Just In Time compiler) to cache compiled code.
- GC : space used by the garbage collector.
- Compiler : space used by the JIT when generating code.
- Symbol : this is for symbols, field names, method signatures etc.
- Native Memory Tracking : memory used by the native memory tracker itself.
- Arena Chunk : this is related to malloc arenas 2.
Much more than just the heap!
Reserved, Committed And RSS
Note that each region has a committed and a reserved section. To keep it short, reserved is what it can grow to and committed is what is currently committed to be used.
For example, see the Java Heap section: Java Heap (reserved=2048KB, committed=2048KB). reserved is our -Xmx value and committed is our -Xms value; in this case they are equal.
Note too the total committed size - it does not reflect the actual usage reported by RSS (or the RES column in top). The reason they differ is that RSS shows the size of all memory pages which have been, and still are, in use in physical memory, whereas committed shows the memory which is in use including memory which is not in physical memory 3.
There is a lot more to this, however JVM and OS memory management is a complex topic, so I hope this answers your question at least at a high level.
- See https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr022.html
- From the JVM Native Memory Tracking docs (https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr007.html#BABJGHDB):
Arena is a chunk of memory allocated using malloc. Memory is freed from these chunks in bulk, when exiting a scope or leaving an area of code. These chunks may be reused in other subsystems to hold temporary memory, for example, per-thread allocations. Arena malloc policy ensures no memory leakage. So Arena is tracked as a whole and not as individual objects. Some amount of initial memory cannot be tracked.
- To go into the difference between RSS, Reserved and Committed memory would be too much here; OS memory management is a complex topic, but see this answer for a good overview.