Java 8 with Jetty on Linux Memory Issue

Java server with Jetty: memory leak issue

After many different investigations, I came back to the Eclipse Memory Analyzer and tried once more.

But this time I decided to trust the leak report, which said:

One instance of "org.hibernate.internal.SessionFactoryImpl" loaded by
"sun.misc.Launcher$AppClassLoader @ 0xc001d4f8" occupies 2.956.808
(21,05%) bytes. The memory is accumulated in one instance of
"org.hibernate.internal.SessionFactoryImpl" loaded by
"sun.misc.Launcher$AppClassLoader @ 0xc001d4f8".

Then I shifted my investigation to my DAO implementation, expecting that I had forgotten to close an EntityManager somewhere. That wasn't the case: all of them had a proper close() call after use.

Then I realized that the problem could be in Hibernate itself, as the problematic object was a SessionFactoryImpl, so I changed my DAO implementation to evict Hibernate's shared (second-level) cache every time I create an EntityManager, since I wasn't able to find a way to disable it.

End result: IT WORKED!! :) The memory still fluctuates a little and comes back down after a few minutes, but it no longer grows out of control (it had been growing by about 1000 MB every 24 hours).

Here is the code I changed; I hope it helps someone.

public EntityManager getEntityManager() {
    if (emf == null) {
        // Lazily create the factory on first use
        if (parameters == null) {
            emf = Persistence.createEntityManagerFactory(persistenceUnitName);
        } else {
            emf = Persistence.createEntityManagerFactory(persistenceUnitName, parameters);
        }
    } else {
        // Evict everything from the shared (second-level) cache
        // before handing out a new EntityManager
        emf.getCache().evictAll();
    }
    return emf.createEntityManager();
}

The key line is: emf.getCache().evictAll();
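For completeness, here is how a DAO method might use getEntityManager() and close the EntityManager afterwards, as described above (a sketch; MyEntity and findById are illustrative names, not from the original code):

    public MyEntity findById(Long id) {
        EntityManager em = getEntityManager();
        try {
            return em.find(MyEntity.class, id);
        } finally {
            em.close(); // always release the EntityManager after use
        }
    }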

Jetty is using too much RSS memory on one of the computers. How do I decrease it?

Without more information, I'll speculate that you have different JVM implementations (one Linux, one OS X) and have likely not specified the upper bounds of memory for those JVMs.

Note that different JVMs will likely have different default upper memory limits.
Even the same Oracle JVM version will choose different maximum memory limits on different OS/hardware combinations.
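To see which maximum heap a given HotSpot JVM actually picked on each machine, you can print the final VM flags (the grep filter is just for convenience):

    java -XX:+PrintFlagsFinal -version | grep -i maxheapsize

If the two machines report different values, pinning the bounds explicitly (e.g. -Xms256m -Xmx512m) removes the guesswork.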

There's a prior question about this topic that might help: What is the default maximum heap size for Sun's JVM from Java SE 6?

Also, Maven versions differ in which plugin versions they use by default, so the difference could lie entirely within the scope of Maven and its plugins.
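To rule that out, you can compare what Maven actually resolved on both machines; mvn help:effective-pom prints the fully resolved POM, including default plugin versions:

    mvn -version
    mvn help:effective-pom > effective-pom.xml

Diffing the two effective-pom.xml files will show any plugin-version drift between the machines.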

Problems with Jetty crashing intermittently

When you say crash, do you mean the JVM segfaults and disappears? If that's the case, I'd check that you aren't exhausting the machine's available memory. Java on Linux will crash when system memory gets so low that the JVM cannot allocate up to its maximum. For example, you've set the max JVM memory to 500 MB, of which it's using 250 MB at the moment, but the Linux OS only has 128 MB available. This produces unstable results, and the JVM will segfault.

On Windows the JVM is better behaved in this scenario and throws an OutOfMemoryError when the system is running low on memory.

  1. Validate how much system memory is available around the time of your crashes (see the sketch after this list).
  2. Verify whether other processes on your box are eating up a lot of memory. Turn off anything that could be competing with the JVM.
  3. Run jconsole and connect it to your JVM; the sketch after this list shows how to expose JMX if the server is remote. That will tell you how memory is being used in your JVM process and give you a history to look back through when it does crash.
  4. Eliminate any native code you might be loading into the JVM when doing this type of testing.
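For steps 1 and 3, a minimal sketch (the JMX port 9010 and the disabled auth/SSL are illustrative choices for a trusted network; adapt them to your setup):

    # Step 1: check available system memory around crash time
    free -m

    # Step 3: expose JMX so jconsole can attach from another machine
    # (a locally running jconsole can attach to a local JVM without these flags)
    java -Dcom.sun.management.jmxremote \
         -Dcom.sun.management.jmxremote.port=9010 \
         -Dcom.sun.management.jmxremote.authenticate=false \
         -Dcom.sun.management.jmxremote.ssl=false \
         -jar start.jar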

I believe Jetty has some native code for high-volume request processing. Make sure that's not being used. You want to isolate the crashes to Java, not some strange native lib. If you take out the native stuff and find it works, then you have your answer as to what's causing it. If it continues to crash, then it could very well be what I'm describing.

You can force the JVM to claim all of its memory at startup with -Xms900m (set equal to -Xmx), which makes sure the JVM doesn't fight with other processes for memory. Once it has the full -Xmx amount allocated, it won't crash for this reason. It's not a solution, but it's an easy way to test the theory.
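For example, assuming Jetty is launched via its start.jar, a quick test run might look like this (the 900m figure mirrors the example above; on Linux you may also need -XX:+AlwaysPreTouch so the pages are actually touched at startup):

    java -Xms900m -Xmx900m -XX:+AlwaysPreTouch -jar start.jar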

Jetty webapp heap continues to grow (OutOfMemoryError: Java heap space)

I can confirm that moving to Jetty 9.3.7 removed this issue; I believe the heap space issue I faced was due to the bug (and the fix) described here.

Specifically, there was a bug in Jetty 9.3.6 (and presumably Jetty 9.3.5 as well) that did not clean up JSR 356 (client) Session objects correctly on the Jetty server, resulting in a memory leak for applications whose clients reconnect often (or where there are very many of them).
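If you pull Jetty in through Maven, the fix is a version bump; a sketch assuming the jetty-server artifact and the 9.3.7 release train:

    <dependency>
        <groupId>org.eclipse.jetty</groupId>
        <artifactId>jetty-server</artifactId>
        <version>9.3.7.v20160115</version>
    </dependency>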

Hope this helps someone.

Grails application taking more memory than possible?

The JVM is a native program which consumes native resources, including native memory. Native memory is the memory available to the runtime process, as distinguished from the Java heap memory that a Java application uses. Every virtualized resource — including the Java heap and Java threads — must be stored in native memory, along with the data used by the virtual machine as it runs.
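A minimal sketch of the distinction: direct ByteBuffers live in native memory, so a process's resident size can far exceed what the heap alone suggests while the Java heap itself stays small (the sizes below are arbitrary, and the JVM may cap this via -XX:MaxDirectMemorySize):

    import java.nio.ByteBuffer;
    import java.util.ArrayList;
    import java.util.List;

    public class DirectBufferDemo {
        public static void main(String[] args) {
            // Each buffer is allocated outside the garbage-collected heap,
            // so heap usage stays tiny while the process RSS grows.
            List<ByteBuffer> buffers = new ArrayList<>();
            for (int i = 0; i < 100; i++) {
                buffers.add(ByteBuffer.allocateDirect(10 * 1024 * 1024)); // 10 MB each
            }
            System.out.println("Holding " + buffers.size() + " direct buffers in native memory");
        }
    }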

You can also take a look at this answer, which explains this subject very nicely.

This thread shows some nice examples of how to leak native memory from Java.

The Jetty documentation also refers to this subject.

If you are using JDK 7, you can use the VisualVM Buffer Monitor plugin to monitor direct buffers that are allocated outside of the garbage-collected heap.
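If direct buffers turn out to be the culprit, you can also cap them explicitly so the failure shows up as a Java-level error instead of unbounded native growth (the 256m value is an arbitrary example):

    java -XX:MaxDirectMemorySize=256m -jar start.jar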


