Docker Ignores limits.conf (Trying to Solve "Too Many Open Files" Error)

I was able to mitigate this issue with the following configuration:

I used Ubuntu 14.04 Linux for both the Docker machine and the host machine.

On the host machine you need to:

  • update /etc/security/limits.conf to include: * - nofile 64000
  • add to your /etc/sysctl.conf: fs.file-max = 64000
  • reload the sysctl settings: sudo sysctl -p (see the combined sketch below)
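A minimal sketch of those three host-side steps, assuming Ubuntu 14.04, sudo access, and the 64000 value used above (adjust to taste):

$ # Per-user open file limit, soft and hard, for all users
$ echo '* - nofile 64000' | sudo tee -a /etc/security/limits.conf
$ # System-wide file handle maximum
$ echo 'fs.file-max = 64000' | sudo tee -a /etc/sysctl.conf
$ # Reload sysctl so fs.file-max takes effect immediately
$ sudo sysctl -p
fs.file-max = 64000

limits.conf is applied at login (via pam_limits), so log out and back in before expecting the new nofile value in new sessions.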

Docker error: too many open files

The default limit on the number of open files is 1024. You can increase it in two ways:

  1. Run the container with the --ulimit parameter:

    docker run --ulimit nofile=5000:5000 <image-tag>
  2. Run the container in --privileged mode and execute ulimit -n 5000 inside it (both options are sketched right after this list).
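A quick sketch of both options, using alpine as a stand-in image (any image with a shell works):

$ # Option 1: set the limit from outside; no extra privileges needed
$ docker run --rm --ulimit nofile=5000:5000 alpine sh -c 'ulimit -n'
5000

$ # Option 2: run privileged and raise the limit from inside the container
$ docker run --rm --privileged alpine sh -c 'ulimit -n 5000 && ulimit -n'
5000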


Need to understand ulimit's nofile setting in host and container

For example, if a Linux OS has ulimit nofile set to 1024 (soft) and 4096 (hard), and I run docker with --ulimit nofile=10240:40960, could the container use more open files than its host?

  • Docker has the CAP_SYS_RESOURCE capability set in its permissions.
    This means that Docker is able to set a ulimit different from the host's. According to man 2 prlimit:

A privileged process (under Linux: one with the CAP_SYS_RESOURCE capability in the initial user namespace) may make arbitrary changes to either limit value.

  • So, for containers, the limits to be considered are the ones set by the docker daemon.
    You can check the docker daemon limits with this command:
$ cat /proc/$(ps -A | grep dockerd | awk '{print $1}')/limits | grep "files"
Max open files 1048576 1048576 files
  • As you can see, Docker 19 has a pretty high limit of 1048576, so your 40960 will work like a charm.

  • And if you run a docker container with --ulimit set higher than the host's limit but lower than the daemon's, you won't hit any problem and won't need to grant additional permissions, as in the example below:

$ cat /proc/$(ps -A | grep dockerd | awk '{print $1}')/limits | grep "files"
Max open files 1048576 1048576 files

$ docker run -d -it --rm --ulimit nofile=99999:99999 python python;
354de39a75533c7c6e31a1773a85a76e393ba328bfb623069d57c38b42937d03

$ cat /proc/$(ps -A | grep python | awk '{print $1}')/limits | grep "files"
Max open files 99999 99999 files
  • You can set a new limit for dockerd in the file /etc/init.d/docker:
$ cat /etc/init.d/docker | grep ulimit
ulimit -n 1048576
  • As for giving the container itself a ulimit higher than the docker daemon's, it's a bit trickier, but doable.
  • I saw you added the Kubernetes tag but didn't mention it in your question. To make this work on Kubernetes, the container needs securityContext.privileged: true; that way you can run the ulimit command as root inside the container. Here is an example (a fuller Pod sketch follows this list):
image: image-name
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
  privileged: true
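Expanding that fragment into a complete Pod you can apply directly, a minimal sketch (the pod name ulimit-demo is arbitrary, busybox stands in for your real image, and the cluster must allow privileged pods):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ulimit-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ulimit -n 65536 && ulimit -n && sleep 3600"]
    securityContext:
      privileged: true
EOF
pod/ulimit-demo created

$ kubectl logs ulimit-demo
65536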

Kolla-ansible too many open files

This was fixed with a workaround in bug https://bugs.launchpad.net/keystonemiddleware/+bug/1883659. The problem was that the neutron server was keeping memcached connections open and not closing them until the memcached container hit its open-files limit. The workaround is described in the bug link.
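To confirm you are hitting the same issue, a hedged diagnostic sketch (the container name memcached assumes kolla-ansible's default naming):

$ # Count file descriptors currently held by memcached (PID 1 inside the container)
$ docker exec memcached sh -c 'ls /proc/1/fd | wc -l'
$ # Compare against the limit the container is running with
$ docker exec memcached sh -c 'grep "open files" /proc/1/limits'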


