Max open files for working process
As a system administrator: the /etc/security/limits.conf file controls this on most Linux installations; it allows you to set per-user limits. You'll want a line like myuser - nofile 1000.
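A minimal sketch of such an entry; the username and values here are placeholders, not part of the original answer:

```
# /etc/security/limits.conf: per-user open-file limits (example values)
myuser  soft  nofile  10000
myuser  hard  nofile  10000
```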
Within a process: the getrlimit and setrlimit calls control most per-process resource-allocation limits. RLIMIT_NOFILE controls the maximum number of file descriptors. An unprivileged process can raise its soft limit up to the hard limit; raising the hard limit requires appropriate permissions (root, or CAP_SYS_RESOURCE).
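From a shell, the same per-process limits are visible through the ulimit builtin; a quick sketch of checking and raising them:

```shell
# Show the soft and hard limits on open file descriptors
ulimit -Sn
ulimit -Hn
# Raise the soft limit up to the hard limit; this needs no special privilege
# (raising the hard limit itself is what requires root / CAP_SYS_RESOURCE)
ulimit -Sn "$(ulimit -Hn)"
```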
Raise number of files a process can open beyond 2^20
Okay, I got it. You can't just set fs.file-max; you also have to set fs.nr_open, which has a default value of 2^20 and caps the value any single process can raise its limit to. I also removed the /etc/pam.d/common-session file I had created and commented out the session required pam_limits.so line in /etc/pam.d/sudo.
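As a sketch, both sysctls can be set persistently in /etc/sysctl.conf; the values below are arbitrary examples, not the ones from the original answer:

```
# /etc/sysctl.conf: example values, pick your own
# fs.nr_open caps what any single process may raise its limit to;
# fs.file-max caps the system-wide total of open file handles
fs.nr_open = 10485760
fs.file-max = 10485760
```

Apply the changes with sudo sysctl -p.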
Cannot change the maximum open files per process with sysctl
For Ubuntu 17.04, see this solution.
Prior to Ubuntu 17.04:
I don't know why the above settings don't work, but it seems you can get the same result by using the /etc/security/limits.conf file.
Set the limit in /etc/security/limits.conf
sudo bash -c "echo '* - nofile 10240' >> /etc/security/limits.conf"
* means all users. You can replace it with a specific username.
- means both soft and hard for the type of limit to be enforced. The hard limit can only be modified by the superuser; the soft limit can be modified by a non-root user and cannot exceed the hard limit.
nofile is the maximum-number-of-open-files parameter.
10240 is the new limit.
Reload
Log out and log back in; sudo sysctl -p doesn't seem to be enough to reload the limits.
You can check the new limit with:
ulimit -n
Tested on Ubuntu 16.04 and CentOS 6. Inspired by this answer.
Max number of open files per process in Linux
There is no issue here.
A pipe has two ends, and each end gets its own file descriptor. So each end of a pipe counts as a file against the limit.
The slight difference between 1024/2 = 512 and 510 is because your process has already opened stdin, stdout, and stderr, which count against the limit: (1024 - 3)/2 = 510.
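The arithmetic can be checked directly in the shell:

```shell
# With a limit of 1024 fds and stdin/stdout/stderr already open,
# each pipe consumes two descriptors, so at most (1024 - 3) / 2 pipes fit
limit=1024
echo $(( (limit - 3) / 2 ))    # prints 510
```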
Increase open files limit for process
After trying to resolve this issue across multiple questions, I got it down to the fact that supervisor sets its own file limit on the program. As noted in the comments, you have to use the minfds setting in supervisor.
To check that it is working, you can run:
cat /proc/$PID/limits
which should show the number you set minfds to; in my case, 100,000:
Max open files            100000               100000               files
Note that the minfds setting goes in /etc/supervisor/supervisord.conf; if you put it in your program's config file, it will do nothing.
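A sketch of where the setting belongs (the value is just an example):

```
; /etc/supervisor/supervisord.conf
[supervisord]
minfds=100000   ; minimum fd limit supervisord requires and passes on to its programs
```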
Max open files per process
It is clear now.
The ulimit command is built into the shell. You can set maxfiles using the ulimit -n command for the current shell (and every program started from that shell).
10252 files: that was my mistake. It was 253 max open files when I started my test program from the shell (253 + stdin + stdout + stderr = 256).
9469 files: the result of my test program running under Xcode; it seems that Xcode sets the maxfiles limit before running the program.
ulimit is not a system-wide setting; that's why, to set the system-wide value for maxfiles, you must use launchctl (launchd is the first process in the system; try launchctl limit) or sysctl.
And the answer is 256 files.
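A sketch of the commands involved; the launchctl and sysctl parts only apply on macOS, so they are guarded here:

```shell
# Per-shell soft limit on open files (works on both Linux and macOS)
ulimit -n
# System-wide maxfiles ceilings, macOS only
if command -v launchctl >/dev/null 2>&1; then
  launchctl limit maxfiles
  sysctl kern.maxfiles kern.maxfilesperproc
fi
```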