How to Set a Global Nofile Limit to Avoid "Many Open Files" Error

How do I increase the open files limit for a non-root user?

By default (with neither -H nor -S), the ulimit command sets both the SOFT and the HARD limit. You (a non-root user) can lower the hard limit, but cannot raise it back afterwards.

Use the -S option to change only the SOFT limit, which can range from 0 up to the hard limit.

I have actually aliased ulimit to ulimit -S, so it defaults to the soft limits all the time.

alias ulimit='ulimit -S'
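The -S and -H flags also work when reading the limits, which makes the soft/hard distinction easy to see. A quick sketch:

```shell
# Print the soft and hard open-files limits separately
ulimit -S -n   # soft limit: the value processes actually hit
ulimit -H -n   # hard limit: the ceiling a non-root user cannot raise

# Changing the soft limit is allowed as long as the new value
# stays at or below the hard limit
ulimit -S -n 1024
```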

As for your issue, you're missing a column in your entries in /etc/security/limits.conf.

There should be FOUR columns, but the first is missing in your example.

* soft nofile 4096
* hard nofile 4096

The first column describes WHO the limit applies to. '*' is a wildcard matching all users, but note that it does not apply to root: to raise the limits for root, you have to explicitly enter 'root' instead of '*'.
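Putting that together, the corrected entries with explicit lines for root might look like this (the values are illustrative):

```
# /etc/security/limits.conf — four columns: <domain> <type> <item> <value>
*     soft  nofile  4096
*     hard  nofile  4096
root  soft  nofile  4096
root  hard  nofile  4096
```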

You also need to edit /etc/pam.d/common-session* and add the following line to the end:

session required pam_limits.so

Too many open files (ulimit already changed)

I just put the line ulimit -n 8192 near the top of catalina.sh, so when I run catalina start, the Java process inherits the specified limit.

Increase max open files for Ubuntu/Upstart (initctl)

This needs to be added to your Upstart script for it to work:

limit nofile 50000 50000
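In context, the stanza sits at the top level of the job file, alongside the other stanzas. A sketch of a complete job (the job name and exec line are illustrative):

```
# /etc/init/myservice.conf — illustrative Upstart job
description "my service"
start on runlevel [2345]
stop on runlevel [!2345]

# soft and hard nofile limits for the job's processes
limit nofile 50000 50000

exec /usr/local/bin/myservice
```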

"It's by design that upstart does not look at
/etc/security/limits.conf for system jobs. PAM settings are only
applied to user sessions, not to system services." from
https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/938669

Sources:

https://github.com/saltstack/salt/issues/5323

http://bryanmarty.com/blog/2012/02/10/setting-nofile-limit-upstart/

Cannot change the maximum open files per process with sysctl

For Ubuntu 17.04, see this solution.

Prior to Ubuntu 17.04:

I don't know why the above settings don't work, but it seems you can achieve the same result using the /etc/security/limits.conf file.

Set the limit in /etc/security/limits.conf

sudo bash -c "echo '* - nofile 10240' >> /etc/security/limits.conf"
  • * means all users. You can replace it by a specific username.
  • - means both soft and hard limits are set. The hard limit can only be raised by the superuser; the soft limit can be changed by a non-root user but cannot exceed the hard limit.
  • nofile is the Maximum number of open files parameter.
  • 10240 is the new limit.

Reload

Log out and log back in. sudo sysctl -p doesn't seem to be enough to reload.

You can check the new limit with:

ulimit -n

Tested on Ubuntu 16.04 and CentOS 6. Inspired by this answer.
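To verify that the limit actually applied to an already-running process (useful for daemons that never go through a login session), you can read its entry under /proc instead of logging out and back in:

```shell
# Show the open-files limits of the current shell;
# substitute a service's PID for $$ to check a daemon
grep 'Max open files' /proc/$$/limits
```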

Too many open files raised by supervisord?

I resolved it by restarting supervisord with sudo supervisorctl shutdown; you shouldn't kill the supervised process directly.

Is there any downside to setting ulimit really high?

The entire reason ulimit exists is to protect the overall performance of the system by preventing a process from using up more resources than are "normal".

"Normal" can be different depending on what you are doing, but setting limits extremely high by default would defeat the purpose of ulimit and allow any process to use up ridiculous amounts of resources. On a server without users this is less critical than a big multiuser environment, but its still a useful safeguard against buggy or exploited processes.

Your CPU probably just went up because your computer is doing more work now instead of erroring out.

PS - You also want to be sure there isn't something wrong in your Tomcat environment. It might be fine to have thousands of open files (I don't know your application), but it could also be a sign of a bug, and if it is, you've just allowed the bug's effect to become much worse. If you can explain why Tomcat needs thousands of files open, fine; if not... yikes.

Too many open files throwing an IOException

I solved my problem by following this tutorial:

https://easyengine.io/tutorials/linux/increase-open-files-limit/


