Recommendations For a Heap Analysis Tool For Java

YourKit : http://www.yourkit.com/

Pros:

  • The interface is clean and it's fast
  • It opened a large 5 GB heap dump where JProfiler ground to a halt, and it only needed 1-2 GB of JVM RAM to do so.

Cons:
Of course... it's not free :(

Tool for analyzing large Java heap dumps

Normally, what I use is ParseHeapDump.sh, included with the Eclipse Memory Analyzer and described here, and I run it on one of our more beefed-up servers (download and copy over the Linux .zip distro, unzip it there). The shell script needs fewer resources than parsing the heap from the GUI, plus you can run it on a beefy server with more resources (you can allocate more by adding something like -vmargs -Xmx40g -XX:-UseGCOverheadLimit to the end of the last line of the script).
For instance, the last line of that file might look like this after modification:

./MemoryAnalyzer -consolelog -application org.eclipse.mat.api.parse "$@" -vmargs -Xmx40g -XX:-UseGCOverheadLimit

Run it like:

./path/to/ParseHeapDump.sh ../today_heap_dump/jvm.hprof

After that succeeds, it creates a number of "index" files next to the .hprof file.
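
Parsing a dump of this size can take a while, so I usually kick it off in the background so that a dropped SSH session doesn't kill it. A minimal sketch (nohup, the log file name, and the paths are just conveniences, not part of MAT itself):

nohup ./path/to/ParseHeapDump.sh ../today_heap_dump/jvm.hprof > parse.log 2>&1 &
# once it finishes, the index files sit next to the dump:
ls -lh ../today_heap_dump/jvm.*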

After creating the indices, I generate reports from them, scp those reports to my local machine, and try to see if I can find the culprit from the reports alone (just the reports, not the indices). Here's a tutorial on creating the reports.

Example report:

./ParseHeapDump.sh ../today_heap_dump/jvm.hprof org.eclipse.mat.api:suspects

Other report options:

org.eclipse.mat.api:overview and org.eclipse.mat.api:top_components
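
If I want all three reports in one go, I just loop over them; each iteration is the same single-report invocation shown above (the loop itself is only a convenience sketch):

for report in suspects overview top_components; do
    ./ParseHeapDump.sh ../today_heap_dump/jvm.hprof "org.eclipse.mat.api:$report"
done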

If those reports are not enough and I need to do some more digging (say, via OQL), I scp the indices as well as the hprof file to my local machine, and then open the heap dump (with the indices in the same directory as the heap dump) in my Eclipse MAT GUI. From there, it does not need too much memory to run.
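
The copy step itself is nothing special; a sketch of it, assuming the dump and its indices share the jvm.* prefix and the server is reachable over SSH (the host and paths are placeholders):

# copy the dump plus the generated indices so the local MAT GUI can reuse them
scp myserver:/data/today_heap_dump/jvm.* ./today_heap_dump/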

EDIT:
I'd just like to add two notes:

  • As far as I know, generating the indices is the memory-intensive part of Eclipse MAT. Once you have the indices, most of your processing in Eclipse MAT will not need that much memory.
  • Doing this with a shell script means I can run it on a headless server (and I normally do, because headless servers tend to be the most powerful ones). And if you have a server that can generate a heap dump of that size, chances are you have another server out there that can process a heap dump of that size as well (see the capture sketch just below).
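
For completeness, here is the capture sketch: how such a dump can be taken on the headless server in the first place, using standard JDK tools (the <pid> and the output path are placeholders; jmap is the older equivalent of jcmd):

# write a heap dump of live objects for the given JVM process (forces a full GC first)
jcmd <pid> GC.heap_dump /data/today_heap_dump/jvm.hprof
# or, on older JDKs:
jmap -dump:live,format=b,file=/data/today_heap_dump/jvm.hprof <pid>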

How can I analyze a heap dump in IntelliJ? (memory leak)

The best thing out there is Eclipse Memory Analyzer (MAT); IntelliJ does not have a bundled heap dump analyzer.

Tool or tricks to analyze offline Java heap dumps (.hprof)

Eclipse Memory Analyzer does everything you need.

Understanding Java Heap dump

The question about Java memory leaks is a duplicate of this, that, etc. Still, here are a few thoughts:

Start by taking a few heap snapshots as described in the answer linked above.

Then, if you know the whole application well, you can eyeball the instance counts and find which type has too many instances sticking around. For example, if you know that a class is a singleton, yet you see 100 instances of that class in memory, then that's a sure sign that something funny is going on there. Alternatively you can compare the snapshots to find which types of objects are growing in number over time; the key here is that you're looking for relative growth over some usage period.
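
If all you want is instance counts over time, a lighter-weight alternative to full heap snapshots is to diff class histograms. A sketch using jmap (the <pid> and file names are placeholders):

jmap -histo:live <pid> > histo_before.txt
# ... let the application run through a typical usage period ...
jmap -histo:live <pid> > histo_after.txt
# classes whose instance counts keep climbing are the ones to investigate
diff histo_before.txt histo_after.txt | head -50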

Once you know what's leaking, you trace back through the incoming references to find the GC root reference that is keeping those objects from being collected.

Finally, remember that it's possible that you see an OutOfMemoryError not because you're leaking memory, but rather because some part of your heap is too small for the application. To check whether this is the case:

  • Include in your question the type of VM that you're using.
  • Include in your question the arguments that you pass when starting java. What are your min, max, and PermGen heap sizes? What type of garbage collector are you using?
  • Include in your question the OOME stack trace, in case there's some useful information there.
  • Turn on verbose GC logging, so that you can see which part(s) of the heap are growing.
  • Turn on the HeapDumpOnOutOfMemoryError parameter, so that you get a heap dump at the very end when the process dies (a combined example of these flags follows this list).
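
To make the last three points concrete, here is a sketch of the kind of startup line involved; the sizes and paths are placeholders, -XX:MaxPermSize applies only to Java 7 and earlier, and the exact GC-logging flag spelling differs between older and newer JVMs:

java -Xms512m -Xmx2g -XX:MaxPermSize=256m \
     -verbose:gc -Xloggc:/var/log/myapp/gc.log \
     -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/myapp/ \
     -jar myapp.jar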

Update: I'm not sure what "kernel killed the process with out of memory error" in your latest update means, but I think you might be saying that the Linux out-of-memory (OOM) killer was invoked. Was this the case? That problem is completely separate from a Java OutOfMemoryError. For more details about what's happening, take a look at the links from the page I just linked to, including this and that. But the solution to your problem is simple: use less memory on the server in question. I suppose you could drop the min and max heap sizes of the Java process in question, but you need to be sure that you won't trigger real Java OutOfMemoryErrors. Can you move some processes elsewhere? Can you correlate the OOM killer with the startup of a specific process?
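
To confirm whether the kernel's OOM killer was involved, the kernel log is the place to look; a sketch (the exact message wording and log location vary by distribution):

# the OOM killer logs which process it killed and why
dmesg | grep -i -E "out of memory|killed process"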


