Kubernetes Can't Start Due to Too Many Open Files in System


You can confirm which process is hogging file descriptors by running:

lsof | awk '{print $2}' | sort | uniq -c | sort -n

That will give you a sorted list of open file descriptor counts along with the PID of each process. You can then look up each process with:

ps -p <pid>
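Putting the two steps together, here is a minimal sketch that prints the ten PIDs holding the most file descriptors along with their process names (it assumes lsof and ps are available, and is best run as root so every process is visible):

lsof | awk '{print $2}' | sort | uniq -c | sort -n | tail -10 |
while read count pid; do
    # Resolve each PID to its command name; if the process has already
    # exited, ps prints nothing for it.
    printf '%6d open files  pid %-8s %s\n' "$count" "$pid" "$(ps -p "$pid" -o comm=)"
done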

If the main hogs are Docker or Kubernetes processes, then I would recommend following along on the issue that caesarxuchao referenced.

Kubernetes - Too many open files

Have a look at https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/.
Note that you need to enable a few features to make it work.

  securityContext:
    sysctls:
    - name: fs.file-max
      value: "YOUR VALUE HERE"

Socket accept - Too many open files

There are multiple places where Linux can have limits on the number of file descriptors you are allowed to open.

You can check the following:

cat /proc/sys/fs/file-max

That will give you the system-wide limit on file descriptors.
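As a sketch, you can also see how much of that budget is currently in use and raise the limit temporarily (2097152 is only an example value; persist it in /etc/sysctl.conf if it should survive a reboot):

cat /proc/sys/fs/file-nr            # allocated, unused, and maximum handles
sudo sysctl -w fs.file-max=2097152  # raise the limit until the next reboot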

At the shell level, this will tell you your personal (per-process) limit:

ulimit -n

This can be changed in /etc/security/limits.conf; it's the nofile parameter.
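For example, a sketch of raising the nofile limit for a hypothetical user called appuser (a fresh login session is needed before ulimit -n reflects the change):

sudo tee -a /etc/security/limits.conf <<'EOF'
appuser  soft  nofile  65536
appuser  hard  nofile  65536
EOF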

However, if you're closing your sockets correctly, you shouldn't hit this limit unless you're opening a lot of simultaneous connections. It sounds like something is preventing your sockets from being closed properly; I would verify that they are being handled correctly.
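A quick way to check for a descriptor leak is to watch the process's open descriptors over time (replace <pid> with the server's PID); a count that keeps climbing under steady traffic usually means sockets are not being closed:

ls /proc/<pid>/fd | wc -l               # total open descriptors
ls -l /proc/<pid>/fd | grep -c socket   # how many of them are sockets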

Docker error: too many open files

The default limit on the number of open files is 1024. You can increase it in two ways:

  1. Run the container with the --ulimit parameter:

     docker run --ulimit nofile=5000:5000 <image-tag>

  2. Run the container in --privileged mode and execute ulimit -n 5000.
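To confirm the first option took effect, you can check the limit from inside a throwaway container (alpine is just a placeholder image):

docker run --rm --ulimit nofile=5000:5000 alpine sh -c 'ulimit -n'
# prints: 5000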



