Raising hard limit on RLIMIT_NOFILE system-wide on Linux
You can set the limits in /etc/security/limits.conf with the syntax:
<domain> <type> <item> <value>
The <domain> can be a user (e.g. memcache) or a group.
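For example, to raise the open-file limits for a hypothetical memcache user (the user name and values here are illustrative):

```
# /etc/security/limits.conf
# <domain>  <type>  <item>   <value>
memcache    soft    nofile   65536
memcache    hard    nofile   65536
```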
How do I increase the open files limit for a non-root user?
The ulimit command, when given neither -H nor -S, sets both the soft and the HARD limit; an unprivileged user can lower the hard limit but cannot raise it back. Use the -S option to change only the SOFT limit, which can range from 0 up to the hard limit. I have actually aliased ulimit to ulimit -S, so it defaults to the soft limits all the time:
alias ulimit='ulimit -S'
As for your issue, you're missing a column in your entries in /etc/security/limits.conf. There should be FOUR columns, but the first is missing in your example:
* soft nofile 4096
* hard nofile 4096
The first column describes WHO the limit is to apply for. '*' is a wildcard, meaning all users. To raise the limits for root, you have to explicitly enter 'root' instead of '*'.
You also need to edit /etc/pam.d/common-session* and add the following line to the end:
session required pam_limits.so
Max open files for working process
As a system administrator: The /etc/security/limits.conf file controls this on most Linux installations; it allows you to set per-user limits. You'll want a line like myuser - nofile 1000.
Within a process: The getrlimit and setrlimit calls control most per-process resource limits. RLIMIT_NOFILE controls the maximum number of file descriptors. You will need appropriate permissions (on Linux, the CAP_SYS_RESOURCE capability) to raise the hard limit.
Socket accept - Too many open files
There are multiple places where Linux can have limits on the number of file descriptors you are allowed to open.
You can check the following:
cat /proc/sys/fs/file-max
That will give you the system-wide limit on file descriptors.
On the shell level, this will tell you your personal limit:
ulimit -n
This can be changed in /etc/security/limits.conf - it's the nofile param.
However, if you're closing your sockets correctly, you shouldn't receive this error unless you're opening a lot of simultaneous connections. It sounds like something is preventing your sockets from being closed appropriately. I would verify that they are being handled properly.
core file size limit has non-deterministic effects on processes
The implementation of core dumping can be found in fs/binfmt_elf.c. I'll follow the code in 3.12 and above (it changed with commit 9b56d5438), but the logic is very similar.
The code initially decides how much of a VMA (virtual memory area) to dump in vma_dump_size. For an anonymous VMA such as the brk heap, it returns the full size of the VMA. During this step, the core limit is not involved.
The first phase of writing the core dump then writes a PT_LOAD header for each VMA. This is basically a pointer that says where to find the data in the remainder of the ELF file. The actual data is written by a for loop, which is in fact a second phase.
During the second phase, elf_core_dump repeatedly calls get_dump_page to get a struct page pointer for each page of the program address space that has to be dumped. get_dump_page is a common utility function found in mm/gup.c. The comment to get_dump_page is helpful:
* Returns NULL on any kind of failure - a hole must then be inserted into
* the corefile, to preserve alignment with its headers; and also returns
* NULL wherever the ZERO_PAGE, or an anonymous pte_none, has been found -
* allowing a hole to be left in the corefile to save diskspace.
and in fact elf_core_dump calls a function in fs/coredump.c (dump_seek in your kernel, dump_skip in 3.12+) if get_dump_page returns NULL. This function calls lseek to leave a hole in the dump (actually, since this is the kernel, it calls file->f_op->llseek directly on a struct file pointer). The main difference is that dump_seek did indeed not obey the ulimit, while the newer dump_skip does.
As to why the second program shows the weird behavior, it's probably because of ASLR (address space layout randomization). Which VMA is truncated depends on the relative order of the VMAs, which is random. You could try disabling it with
echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
and see if your results are more consistent. To re-enable ASLR, use
echo 2 | sudo tee /proc/sys/kernel/randomize_va_space
How to set ulimit -n from a golang program?
It works as expected. From setrlimit(2):
The soft limit is the value that the kernel enforces for the
corresponding resource. The hard limit acts as a ceiling for the soft
limit: an unprivileged process may only set its soft limit to a value
in the range from 0 up to the hard limit, and (irreversibly) lower its
hard limit. A privileged process (under Linux: one with the
CAP_SYS_RESOURCE capability) may make arbitrary changes to either
limit value.
rlimit.go:
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var rLimit syscall.Rlimit
	err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rLimit)
	if err != nil {
		fmt.Println("Error Getting Rlimit ", err)
	}
	fmt.Println(rLimit)
	rLimit.Max = 999999
	rLimit.Cur = 999999
	err = syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rLimit)
	if err != nil {
		fmt.Println("Error Setting Rlimit ", err)
	}
	err = syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rLimit)
	if err != nil {
		fmt.Println("Error Getting Rlimit ", err)
	}
	fmt.Println("Rlimit Final", rLimit)
}
Output:
$ uname -a
Linux peterSO 3.8.0-26-generic #38-Ubuntu SMP Mon Jun 17 21:43:33 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
$ go build rlimit.go
$ ./rlimit
{1024 4096}
Error Setting Rlimit operation not permitted
Rlimit Final {1024 4096}
$ sudo ./rlimit
[sudo] password for peterSO:
{1024 4096}
Rlimit Final {999999 999999}
UPDATE:
I successfully ran rlimit.go for linux/amd64; you failed for linux/386. There were Go bugs in Getrlimit and Setrlimit for Linux 32-bit distributions. These bugs have been fixed. Using the Go default branch tip (to include the bug fixes), run the following, and update your question with the results.
$ uname -a
Linux peterSO 3.8.0-26-generic #38-Ubuntu SMP Mon Jun 17 21:46:08 UTC 2013 i686 i686 i686 GNU/Linux
$ go version
go version devel +ba52f6399462 Thu Jul 25 09:56:06 2013 -0400 linux/386
$ ulimit -Sn
1024
$ ulimit -Hn
4096
$ go build rlimit.go
$ ./rlimit
{1024 4096}
Error Setting Rlimit operation not permitted
Rlimit Final {1024 4096}
$ sudo ./rlimit
[sudo] password for peterSO:
{1024 4096}
Rlimit Final {999999 999999}
$
Check the open FD limit for a given process in Linux
Count the entries in /proc/<pid>/fd/. The hard and soft limits applying to the process can be found in /proc/<pid>/limits.