Low Java Single Process Thread Limit in Red Hat Linux

Low Java single process thread limit in Red Hat Linux

Updating the kernel to a newer 2.6-series version that uses NPTL threading fixed this.
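If you want to confirm which kernel and threading implementation a box is running before upgrading, something like the following should work (getconf GNU_LIBPTHREAD_VERSION reports the libpthread implementation glibc is using):

# kernel release; NPTL became the default threading library in the 2.6 series
uname -r

# threading implementation used by glibc; prints something like "NPTL 2.12"
getconf GNU_LIBPTHREAD_VERSION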

Multiple Java webapps total thread limit on linux

With help from Aris2World and lenach87, I managed to find the answer to my own question.

The root cause is the max user processes (NPROC) limit that Linux enforces on the user executing a process.

I was logged in as root during my investigation, so the ulimit -a output I looked at was for root:

[root@vm119 ~]# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 62810
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 100000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 62810
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited

However, what I should have checked was the limit for the user executing the webapps:

[root@vm119 ~]# su - user -c "ulimit -a"
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 62810
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 100000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 1024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
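Another way to confirm which NPROC value actually applies to the running webapps is to read the limits of one of the live JVM processes directly (a rough sketch; the pgrep pattern below is only an example and assumes the webapps run as "user"):

# effective per-process limits of a running Java process owned by "user"
cat /proc/$(pgrep -u user -f java | head -1)/limits | grep "Max processes"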

To raise the limit for the executing user, I manually added two lines to /etc/security/limits.conf:

[root@vm119 ~]# cat /etc/security/limits.conf | grep user
user soft nproc 4096
user hard nproc 4096
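Assuming pam_limits is enabled (it is on a stock RHEL install), a fresh login session for that user should then pick up the new value:

[root@vm119 ~]# su - user -c "ulimit -u"
4096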

Maximum number of threads per process in Linux?

Linux doesn't have a separate threads-per-process limit, just a limit on the total number of processes on the system (threads are essentially just processes with a shared address space on Linux), which you can view like this:

cat /proc/sys/kernel/threads-max

The default is the number of memory pages divided by 4. You can increase it like this:

echo 100000 > /proc/sys/kernel/threads-max

There is also a limit on the number of processes (and hence threads) that a single user may create, see ulimit/getrlimit for details regarding these limits.
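For completeness, a sketch of how the system-wide value can be inspected and made persistent across reboots on a RHEL-style box (the value 100000 is just the example from above):

# current system-wide task limit
sysctl kernel.threads-max

# persist an increased value by adding it to /etc/sysctl.conf, then reload
echo "kernel.threads-max = 100000" >> /etc/sysctl.conf
sysctl -p

# per-user process/thread cap for the current shell (set via ulimit / limits.conf)
ulimit -u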

Processes exceeding thread stack size limit on RedHat Enterprise Linux 6?

It turns out that glibc 2.11 (shipped with RHEL 6) changed the malloc behaviour so that, where possible, each thread gets allocated its own memory arena, so on a larger system you may see each arena grab up to 64 MB of virtual memory. On 64-bit systems the maximum number of arenas allowed is greater.

The fix for this was to add

export LD_PRELOAD=/path/to/libtcmalloc.so 

in the script that starts the processes, so that tcmalloc's allocator is used rather than the glibc 2.11 malloc.
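Put together, the start script ends up looking roughly like this (the tcmalloc path is a placeholder, as in the original, and the java invocation is only illustrative; Hadoop's alternative workaround of capping MALLOC_ARENA_MAX is shown commented out, see the HADOOP-7154 link below):

#!/bin/sh
# preload tcmalloc so its allocator is used instead of the glibc 2.11 malloc
export LD_PRELOAD=/path/to/libtcmalloc.so

# alternative workaround (used by Hadoop, HADOOP-7154): limit the number of malloc arenas
# export MALLOC_ARENA_MAX=4

exec java -jar /path/to/webapp.jar "$@"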

Some more information on this is available from:

Linux glibc >= 2.10 (RHEL 6) malloc may show excessive virtual memory usage
https://www.ibm.com/developerworks/mydeveloperworks/blogs/kevgrig/entry/linux_glibc_2_10_rhel_6_malloc_may_show_excessive_virtual_memory_usage?lang=en

glibc bug malloc uses excessive memory for multi-threaded applications
http://sourceware.org/bugzilla/show_bug.cgi?id=11261

Apache hadoop have fixed the problem by setting MALLOC_ARENA_MAX
https://issues.apache.org/jira/browse/HADOOP-7154


