Docker container does not inherit ulimit from host
If you want to set custom ulimits for a container, you can use the --ulimit option. For example:
docker run -it --rm --ulimit memlock=32768:32768 ubuntu sh -c "ulimit -a"
Shows:
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 8192
coredump(blocks) 0
memory(kbytes) unlimited
locked memory(kbytes) 32
process 7873
nofiles 1024
vmemory(kbytes) unlimited
locks unlimited
You can find more information in the documentation: https://docs.docker.com/engine/reference/commandline/run/
For other ways to restrict resources for a container, also see this section:
https://docs.docker.com/engine/reference/run/#runtime-constraints-on-resources
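The same check can be done from inside the container with Python's standard resource module instead of ulimit -a; a minimal sketch, which also clarifies the unit mismatch in the output above (--ulimit memlock takes bytes, while ulimit -a reports kbytes, so 32768 bytes shows up as 32):

```python
import resource

# RLIMIT_MEMLOCK corresponds to the "locked memory" row of ulimit -a.
# resource reports raw byte values, so 32768 here equals the 32 kbytes
# that ulimit -a shows for --ulimit memlock=32768:32768.
soft, hard = resource.getrlimit(resource.RLIMIT_MEMLOCK)
print(f"memlock soft={soft} hard={hard}")

# RLIMIT_NOFILE corresponds to the "nofiles" row:
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"nofile soft={soft} hard={hard}")
```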
Set up ulimit parameter in Dockerfile
You can't set ulimits for Docker containers in the Dockerfile; they need to be set when running the container from the command line. Try this:
docker run --ulimit nofile=262144:262144 IMAGE
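Since the limits have to be passed at docker run time, one pattern is to keep them next to the image definition in a small wrapper script that builds the flags. A hypothetical sketch (the format_ulimits helper is mine, not part of Docker):

```python
def format_ulimits(limits):
    """Turn {name: (soft, hard)} into docker run --ulimit arguments."""
    args = []
    for name, (soft, hard) in limits.items():
        args += ["--ulimit", f"{name}={soft}:{hard}"]
    return args

# Builds the equivalent of: docker run --ulimit nofile=262144:262144 IMAGE
cmd = ["docker", "run", *format_ulimits({"nofile": (262144, 262144)}), "IMAGE"]
print(" ".join(cmd))  # docker run --ulimit nofile=262144:262144 IMAGE
```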
Docker - ulimit differences between container and host
Resource limits may be set by Docker during container startup, and you can tune them using the --ulimit argument when launching the container. This is easy to verify by running strace on the containerd process during container startup; for example, the following command
$ docker run -it --ulimit nofile=1024 alpine
will produce the following trace:
prlimit64(7246, RLIMIT_NOFILE, {rlim_cur=1024, rlim_max=1024}, <unfinished ...>
and checking ulimit in the container gives the expected limit value:
-n: file descriptors 1024
When running the container without an explicitly specified --ulimit, this check gives a different value (probably inherited from containerd), e.g.:
-n: file descriptors 1048576
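The prlimit64 syscall seen in the strace output is also exposed in Python's standard library as resource.prlimit (Linux-only, Python 3.4+); a minimal sketch, reading a process's limits the same way Docker sets them:

```python
import os
import resource

# resource.prlimit wraps the same prlimit64 syscall that appears in the
# strace output. With no new limit supplied, it just reads the target
# process's current limits; here the target is our own PID.
soft, hard = resource.prlimit(os.getpid(), resource.RLIMIT_NOFILE)
print(f"nofile soft={soft} hard={hard}")
```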
Why is Docker allowed to set limits higher than the ones you observe by checking ulimit on your host? Let's open man 2 prlimit:
A privileged process (under Linux: one with the CAP_SYS_RESOURCE capability
in the initial user namespace) may make arbitrary changes to either limit value.
This means that any process with the CAP_SYS_RESOURCE capability may set any resource limit, and Docker has this capability. You can check it by inspecting the CapEff field of the /proc/$PID/status file, where $PID is the PID of the containerd process, and decoding this value using capsh --decode:
$ pidof docker-containerd
675
$ cat /proc/675/status | grep CapEff
CapEff: 0000003fffffffff
$ capsh --decode=0000003fffffffff
0x0000003fffffffff=cap_chown,<...>,cap_sys_resource,<...>
To summarize: yes, Docker may increase resource limits for containers because it has the privileges to do so, and you may tune these limits using the --ulimit argument.
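The capsh decoding step can be reproduced in a few lines of Python; a minimal sketch, assuming CAP_SYS_RESOURCE is capability bit 24 (its value in linux/capability.h):

```python
# CAP_SYS_RESOURCE is capability number 24 on Linux (linux/capability.h),
# so it occupies bit 24 of the CapEff bitmask in /proc/$PID/status.
CAP_SYS_RESOURCE = 24

def has_cap_sys_resource(capeff_hex):
    """Check whether a CapEff mask includes CAP_SYS_RESOURCE."""
    return bool(int(capeff_hex, 16) & (1 << CAP_SYS_RESOURCE))

# The full-capability mask from the example above:
print(has_cap_sys_resource("0000003fffffffff"))  # True

# A mask with only CAP_CHOWN (bit 0) set lacks it:
print(has_cap_sys_resource("0000000000000001"))  # False
```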
Running docker container through python API with specific ulimit
Your Python and shell commands are not identical: in the shell command you are specifying the soft limits, and in the Python code you are specifying the hard limits. The syntax for the argument to the --ulimit flag is:
<type>=<soft limit>[:<hard limit>]
And the documentation explains:
Note: If you do not provide a hard limit, the soft limit will be used for both values. If no ulimits are set, they will be inherited from the default ulimits set on the daemon.
To get identical behavior, I would try changing your Python ulimit declaration to
docker.types.Ulimit(name='stack', soft=67108864, hard=67108864)
This sounds like a shortcoming of the Python documentation, which says only that both soft and hard are optional arguments.
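The soft/hard defaulting rule quoted above can be expressed as a small parser; a sketch of the documented semantics, not Docker's actual implementation:

```python
def parse_ulimit(spec):
    """Parse '<type>=<soft>[:<hard>]'; a missing hard limit defaults to soft."""
    name, _, values = spec.partition("=")
    soft, sep, hard = values.partition(":")
    soft = int(soft)
    hard = int(hard) if sep else soft  # no hard limit given: reuse the soft one
    return name, soft, hard

# The shell form 'stack=67108864' sets BOTH limits, which is why it differs
# from a Python Ulimit object that only sets hard:
print(parse_ulimit("stack=67108864"))    # ('stack', 67108864, 67108864)
print(parse_ulimit("nofile=1024:4096"))  # ('nofile', 1024, 4096)
```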
PHP Docker - Is it possible to set ulimit at runtime?
This warning can be triggered with the following PHP snippet:
<?php
$fds = [];
for ($i = 0; $i < PHP_FD_SETSIZE; $i++) { // PHP_FD_SETSIZE is how FD_SETSIZE is exposed to userland
$fds[] = fopen(__FILE__, 'r');
}
$read = $fds;
$write = [];
$except = [];
echo sprintf("FD_SETSIZE=%d", PHP_FD_SETSIZE) . "\n";
stream_select($read, $write, $except, 0);
/*
* Warning: stream_select(): You MUST recompile PHP with a larger value of FD_SETSIZE.
* It is set to 1024, but you have descriptors numbered at least as high as 1027.
* --enable-fd-setsize=2048 is recommended, but you may want to set it
* to equal the maximum number of open files supported by your system,
* in order to avoid seeing this error again at a later date. in /in/ZfBsg on line 14
*/
FD_SETSIZE is a constant defined by POSIX (see the select(2) manpage). While the warning leads you to believe it can be mitigated
by recompiling PHP with --enable-fd-setsize=[some-larger-value], this is not true. FD_SETSIZE cannot be changed at
runtime, since its value is baked in at compile time by the C library headers.
I looked at the PHP source code for a bit and I'm not sure why --enable-fd-setsize exists at all, since it does not
seem to do anything (although I did not test this on Windows).
The only solutions to this problem seem to be:
- avoid stream_select
- avoid opening many files
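The "avoid opening many files" workaround amounts to keeping descriptor numbers below FD_SETSIZE. One defensive pattern (a language-agnostic sketch in Python, assuming the common glibc value of 1024) is to cap the process's open-file soft limit so a descriptor can never be numbered high enough to break select():

```python
import resource

# Assumption: FD_SETSIZE is 1024, the usual glibc value on Linux.
FD_SETSIZE = 1024

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
if soft > FD_SETSIZE:
    # Lower only the soft limit: open() will now fail cleanly before any
    # descriptor can outgrow the range select() supports.
    resource.setrlimit(resource.RLIMIT_NOFILE, (FD_SETSIZE, hard))

print(resource.getrlimit(resource.RLIMIT_NOFILE)[0] <= FD_SETSIZE)  # True
```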