Benefits/Drawbacks of Running a 64-Bit JVM on a 64-Bit Linux Server

32-bit versus 64-bit JVM performance considerations?

The memory requirements will go up, since it is more expensive to address objects on a 64-bit architecture. How much depends heavily on your application; a wild guess would be 10%-20%, assuming you are not hashing 4 GB of Integers...

You may also run into lock contention on locks in loggers, thread pools, connection pools, singletons, etc. It is probably not a problem if your application is database-centric, but if, for example, your application stores a lot of sessions in a map and accesses that map heavily, you might hit problems. In my experience, the contention "wall" can come rather quickly.
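
As a rough sketch of the kind of change that relieves that pressure (the SessionStore class and its fields are invented for this illustration, not taken from any particular application): replacing a fully synchronized map with a ConcurrentHashMap, so that readers and writers do not all queue on a single monitor.

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hypothetical session store, used only to illustrate the contention point.
    class SessionStore {
        // Every get/put synchronizes on the same monitor, so hot access serializes all threads.
        private final Map<String, Object> contended =
                Collections.synchronizedMap(new HashMap<>());

        // ConcurrentHashMap uses finer-grained internal locking, so concurrent reads
        // (and most writes) proceed without blocking each other.
        private final Map<String, Object> scalable = new ConcurrentHashMap<>();

        Object lookup(String sessionId) {
            return scalable.get(sessionId);   // non-blocking read path
        }

        void store(String sessionId, Object session) {
            scalable.put(sessionId, session);
        }
    }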

However, there is only one way to know: test it.

Java performance 64 bit

With a 64-bit JVM you may see somewhat different performance, but you will see a much bigger difference from running a different OS on a different machine.

If you want to see whether 64-bit references are slowing you down, you can enable -XX:+UseCompressedOops, which causes the 64-bit JVM to use 32-bit references while still being able to address about 32 GB of heap.
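
If you want to confirm which mode a running HotSpot VM actually picked, the flag can be read back through the HotSpot diagnostic MXBean. This is HotSpot-specific (the interface lives in the com.sun.management package), so treat it as a sketch rather than a portable API:

    import java.lang.management.ManagementFactory;
    import com.sun.management.HotSpotDiagnosticMXBean;

    public class CheckCompressedOops {
        public static void main(String[] args) {
            HotSpotDiagnosticMXBean hotspot =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            // Prints e.g. "UseCompressedOops = true" on a 64-bit HotSpot with a heap below ~32 GB.
            System.out.println("UseCompressedOops = "
                    + hotspot.getVMOption("UseCompressedOops").getValue());
        }
    }

Running the same check under different -Xmx values also shows the point at which the VM stops using compressed references.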

Another way to test this is to use a 32-bit JVM on your system. ;)

We have a latency-sensitive system and see little performance advantage in using 32-bit references on a 64-bit JVM, since it shifts every address by 3 bits. The 32-bit JVM's smaller register set hurts us more than it helps.

EDIT: For more details

http://wikis.sun.com/display/HotSpotInternals/CompressedOops

http://docs.oracle.com/javase/7/docs/technotes/guides/vm/performance-enhancements-7.html

http://blog.juma.me.uk/2008/10/14/32-bit-or-64-bit-jvm-how-about-a-hybrid/

32 OR 64 BIT for the JVM?

It is more a matter of general performance. If you have a recent 64-bit processor and plenty of unused memory, the only likely gain from using a 32-bit JVM instead of a 64-bit one is that some working sets could fit fully in cache with 32-bit references where they would not with 64-bit ones, leading to fewer main-memory accesses. I really cannot quantify the gain, but except in very special cases I doubt it reaches 5 to 20% (note that I only doubt it; I am not sure).

But if you are on a resource-limited system, perhaps because you want to get the most out of your hardware and run many virtual machines on it, a 32-bit JVM will use far less memory than a 64-bit one. That leaves more free memory for other applications and for the system itself, avoiding swapping and letting the system cache disk I/O more effectively.

IMHO, there is no general rule. I would just use the following rule of thumb: if, with the JVM and all other applications running, the system still has enough memory left for caching I/O, I would use a 64-bit JVM; otherwise, a 32-bit one.

Of course, the above only makes sense if the Java application itself does not need so much memory that a 64-bit JVM is required anyway; if it does, consider simply adding more memory to the system, since memory is not that expensive nowadays.

Advantages of a 64 bit system

When you have 64-bits of address space to play with, you can adopt certain designs that would be very hard with less of an address space. For example, a friend recently pointed out to me that address space for thread stacks can get to be a problem with thousands of threads on a 32-bit system. But on a 64-bit system, this is no longer even remotely close to being a problem. This is the main direct benefit to developers that can affect how you write programs. And this is true regardless of how much actual memory the machine has.

Most programs I have seen converted to 64-bit have seen performance improvements because of the extra registers available.

Having 64-bit addresses can offset this performance improvement in some programs. The extra space that pointers take up means they consume more cache, which leaves less room in the cache for everything else. They also take up more memory-bus bandwidth when being transferred to and from main memory.

There is at least one project out there that proposes to recompile most programs in Linux in a sort of mixed-mode in which all the extra registers are used, but only 32-bit pointers are used. I'm interested in how this pans out because it removes the one performance disadvantage of 64-bit programs.

There is also a small (but important) subset of programs and algorithms that can make use of 64-bit registers. For example, most of the SHA-3 candidates are designed to take advantage of the ability to manipulate 64 bits of data at a time when doing bitwise operations.
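
To make that concrete, here is a toy 64-bit mixing function in Java; it is not a real SHA-3 round, just an illustration of the style, and every statement operates on a full 64-bit lane that fits in one register on a 64-bit machine.

    public class Lane64 {
        // Toy ARX-style mixing step (add/rotate/XOR) on 64-bit lanes; illustrative only.
        static long mix(long a, long b) {
            a += b;                        // 64-bit add
            b = Long.rotateLeft(b, 13);    // 64-bit rotation
            b ^= a;                        // 64-bit XOR
            return Long.rotateLeft(a ^ b, 32);
        }

        public static void main(String[] args) {
            System.out.printf("%016x%n", mix(0x0123456789abcdefL, 0xfedcba9876543210L));
        }
    }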

Lastly, since the data paths inside the CPU are now 64-bits wide, this can also mean there is more bandwidth inside the CPU for moving things around. But I would expect this to be a benefit on 64-bit CPUs running in 32-bit mode as well.

Any bugs or incompatibilities in 64 bit JVM?

We found out what the issue was. In one part of the code, a thread waited for something another thread was working on to become ready, but it did so with a bare while (!ready) {} spin loop. In the 32-bit JVM this loop would eventually get preempted, the other thread would finish and set the variable to true, and everything moved on; in the 64-bit JVM it never did, as if the spinning thread was never preempted. I understand now that we should have used wait/notify and proper locking for this, but at least temporarily throwing a sleep() into the loop fixed it. Not exactly a race condition, more of a quirk of the threading model it seems, so I'm not accepting any of the other answers since they didn't answer the question I asked.
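
For reference, the wait/notify version the answer alludes to would look roughly like this (the class and field names are placeholders); alternatively, simply declaring the flag volatile makes the plain spin loop correct under the Java memory model, since the JIT is then no longer allowed to hoist the read out of the loop.

    // Minimal sketch of the wait/notify hand-off; "Work" and "ready" are placeholder names.
    class Work {
        private final Object lock = new Object();
        private boolean ready = false;            // guarded by lock

        void awaitReady() throws InterruptedException {
            synchronized (lock) {
                while (!ready) {
                    lock.wait();                  // releases the lock and blocks until notified
                }
            }
        }

        void markReady() {
            synchronized (lock) {
                ready = true;
                lock.notifyAll();                 // wakes any thread blocked in awaitReady()
            }
        }
    }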

Supplying 64 bit specific versions of your software

Architectural benefits of Intel x64 vs. x86

  • larger address space
  • a richer register set
  • can link against external libraries or load plugins that are 64-bit

Architectural downside of x64 mode

  • all pointers (and thus many instructions too) take up 2x the memory, cutting the effective processor cache size in half in the worst case
  • cannot link against external libraries or load plugins that are 32-bit

In applications I've written, I've sometimes seen big speedups (30%) and sometimes seen big slowdowns (> 2x) when switching to 64-bit. The big speedups have happened in number crunching / video processing applications where I was register-bound.

The only big slowdown I've seen in my own code when converting to 64-bit is from a massive pointer-chasing application where one compiler made some really bad "optimizations". Another compiler generated code where the performance difference was negligible.

Benefit of porting now

Writing 64-bit-compatible code isn't that hard 99% of the time, once you know what to watch out for. Mostly, it boils down to using size_t and ptrdiff_t instead of int when referring to memory addresses (I'm assuming C/C++ code here). It can be a pain to convert a lot of code that wasn't written to be 64-bit-aware.

Even if it doesn't make sense to make a 64-bit build for your application (it probably doesn't), it's worth the time to learn what it would take to make the build so that at least all new code and future refactorings will be 64-bit-compatible.

Maximum Java heap size of a 32-bit JVM on a 64-bit OS

32-bit JVMs that expect a single large chunk of memory and use raw pointers cannot use more than 4 GB (since that is the 32-bit addressing limit, which also applies to pointers). This includes Sun and, I'm pretty sure, also IBM implementations. I do not know whether e.g. JRockit or others have a large-memory option in their 32-bit implementations.

If you expect to be hitting this limit you should strongly consider starting a parallel track validating a 64-bit JVM for your production environment so you have that ready for when the 32-bit environment breaks down. Otherwise you will have to do that work under pressure, which is never nice.


Edit 2014-05-15: Oracle FAQ:

The maximum theoretical heap limit for the 32-bit JVM is 4G. Due to various additional constraints such as available swap, kernel address space usage, memory fragmentation, and VM overhead, in practice the limit can be much lower. On most modern 32-bit Windows systems the maximum heap size will range from 1.4G to 1.6G. On 32-bit Solaris kernels the address space is limited to 2G. On 64-bit operating systems running the 32-bit VM, the max heap size can be higher, approaching 4G on many Solaris systems.

(http://www.oracle.com/technetwork/java/hotspotfaq-138619.html#gc_heap_32bit)
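
To see what ceiling a particular VM on a particular OS actually settled on, a quick check like the one below (run under whatever -Xmx you care about) is enough; Runtime.maxMemory() reports the largest heap the VM will attempt to use.

    public class MaxHeap {
        public static void main(String[] args) {
            // Roughly the effective -Xmx after the VM applies its own constraints.
            long maxBytes = Runtime.getRuntime().maxMemory();
            System.out.println("Max heap: " + (maxBytes >> 20) + " MB");
        }
    }

On a 32-bit VM, asking for more than the platform can map typically fails at startup with a "could not reserve enough space for object heap" style error rather than being silently reduced.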

JRE 32bit vs 64bit

64-bit vs. 32-bit really boils down to the size of object references, not the size of numbers.

In 32-bit mode, references are four bytes, allowing the JVM to uniquely address 2^32 bytes of memory. This is the reason 32-bit JVMs are limited to a maximum heap size of 4GB (in reality, the limit is smaller due to other JVM and OS overhead, and differs depending on the OS).

In 64-bit mode, references are (surprise) eight bytes, allowing the JVM to uniquely address 2^64 bytes of memory, which should be enough for anybody. JVM heap sizes (specified with -Xmx) in 64-bit mode can be huge.

But 64-bit mode comes with a cost: references are double the size, increasing memory consumption. This is why Oracle introduced "Compressed oops". With compressed oops enabled (which I believe is now the default), object references are shrunk to four bytes, with the caveat that the heap is limited to about four billion objects (and roughly a 32 GB -Xmx). Compressed oops are not free: there is a small computational cost to achieve this big reduction in memory consumption.
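
A quick back-of-the-envelope calculation shows where the roughly 32 GB figure comes from, assuming HotSpot's default 8-byte object alignment:

    public class OopMath {
        public static void main(String[] args) {
            // A 32-bit compressed reference can take 2^32 distinct values, and HotSpot
            // objects are 8-byte aligned by default, so the reference is effectively an
            // index into 8-byte slots: 2^32 * 8 bytes = 32 GB of addressable heap.
            long addressableBytes = (1L << 32) * 8;
            System.out.println((addressableBytes >> 30) + " GB");   // prints 32 GB
        }
    }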

As a personal preference, I always run the 64-bit JVM at home. The CPU is x64 capable, the OS is too, so I like the JVM to run in 64-bit mode as well.

How to develop to take advantage of 64 bit systems?

Anything that requires more than 4 GB of working and program memory would certainly qualify, since that is the maximum amount of memory that a 32 bit system can address directly.

Since 64-bit numbers can reside in single CPU registers, calculations that need numbers of that size see a performance improvement.
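
One everyday Java example is packing 64 boolean flags into a single long, which a 64-bit JVM can test and update with a couple of register operations (the same code runs on a 32-bit JVM, just with the long split across two registers); the variable names here are illustrative only.

    public class Bits64 {
        public static void main(String[] args) {
            long occupied = 0L;                  // 64 flags packed into one register-sized value

            occupied |= 1L << 5;                 // set slot 5
            occupied |= 1L << 63;                // set slot 63

            boolean slot5Taken = (occupied & (1L << 5)) != 0;   // test slot 5
            occupied &= ~(1L << 5);              // clear slot 5

            System.out.println(slot5Taken + ", " + Long.bitCount(occupied) + " slot(s) still set");
        }
    }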


