Limiting Certain Processes to CPU % - Linux

Limiting process memory/CPU usage on Linux

You might want to investigate cgroups, as well as the older (and largely superseded) ulimit.
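For example, a minimal sketch using ulimit (the program name and the limit values are placeholders); limits set in a shell apply to that shell and to everything it spawns:

$ ulimit -t 60        # cap CPU time at 60 seconds per process
$ ulimit -v 1048576   # cap virtual memory at ~1 GiB (value is in KiB)
$ ./my_program        # hypothetical program; it inherits both limits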

Limit the percentage of CPU a process tree is allowed to use?

Yes!

Just as I was writing this question, I found out that I had been using an old version of cpulimit.

The new version supports limiting child processes too.

$ cpulimit -h
Usage: cpulimit [OPTIONS...] TARGET
   OPTIONS
      -l, --limit=N          percentage of cpu allowed from 0 to 400 (required)
      -v, --verbose          show control statistics
      -z, --lazy             exit if there is no target process, or if it dies
      -i, --include-children limit also the children processes
      -h, --help             display this help and exit
   TARGET must be exactly one of these:
      -p, --pid=N            pid of the process (implies -z)
      -e, --exe=FILE         name of the executable program file or path name
      COMMAND [ARGS]         run this command and limit it (implies -z)

Report bugs to <marlonx80@hotmail.com>.
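For example (the build command and the PID below are placeholders):

# Limit "make" and all of its child processes to ~50% of one core:
$ cpulimit --limit=50 --include-children make
# Or attach to an already-running process by PID:
$ cpulimit --limit=100 --include-children --pid=12345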

Limit CPU time of process group

I found a solution that works for me. It is still far from perfect (read the caveats before using it). I'm somewhat new to bash scripting, so any comments are welcome.

#!/bin/bash
#
# This script tries to limit the CPU time of a process group similar to
# ulimit but counting the time spent in spawned processes against the
# limit. It works by creating a temporary cgroup to run the process in
# and checking on the used CPU time of that process group. Instead of
# polling in regular intervals, the monitoring process assumes that no
# time is lost to I/O (i.e., wall clock time = CPU time) and checks in
# after the time limit. It then updates its assumption by comparing the
# actual CPU usage to the time limit and waiting again. This is repeated
# until the CPU usage exceeds its limit or the monitored process
# terminates. Once the main process terminates, all remaining processes
# in the temporary cgroup are killed.
#
# NOTE: this script still has some major limitations.
# 1) The monitored process can exceed the limit by up to one second
#    since every iteration of the monitoring process takes at least that
#    long. It can exceed the limit by an additional second by ignoring
#    the SIGXCPU signal sent when hitting the (soft) limit, but this is
#    configurable below.
# 2) It assumes there is only one CPU core. On a system with n cores,
#    waiting for t seconds gives the process n*t seconds on the CPU.
#    This could be fixed by figuring out how many CPUs the process is
#    allowed to use (using the cpuset cgroup) and dividing the remaining
#    time by that. Since sleep has a resolution of 1 second, this would
#    still introduce an error of up to n seconds.

set -e

if [ "$#" -lt 2 ]; then
echo "Usage: $(basename "$0") TIME_LIMIT_IN_S COMMAND [ ARG ... ]"
exit 1
fi
TIME_LIMIT=$1
shift

# To simulate a hard time limit, set KILL_WAIT to 0. If KILL_WAIT is
# non-zero, TIME_LIMIT is the soft limit and TIME_LIMIT + KILL_WAIT is
# the hard limit.
KILL_WAIT=1

# Update as necessary. The script needs permissions to create cgroups
# in the cpuacct hierarchy in a subgroup "timelimit". To create it use:
# sudo cgcreate -a $USER -t $USER -g cpuacct:timelimit
CGROUPS_ROOT=/sys/fs/cgroup
LOCAL_CPUACCT_GROUP=timelimit/timelimited_$$
LOCAL_CGROUP_TASKS=$CGROUPS_ROOT/cpuacct/$LOCAL_CPUACCT_GROUP/tasks

kill_monitored_cgroup() {
    SIGNAL=$1
    kill -$SIGNAL $(cat $LOCAL_CGROUP_TASKS) 2> /dev/null
}

get_cpu_usage() {
    cgget -nv -r cpuacct.usage $LOCAL_CPUACCT_GROUP
}

# Create a cgroup to measure the CPU time of the monitored process.
cgcreate -a $USER -t $USER -g cpuacct:$LOCAL_CPUACCT_GROUP

# Start the monitored process. In case it fails, we still have to clean
# up, so we disable exiting on errors.
set +e
(
    set -e
    # In case the process doesn't fork, a ulimit is more exact. If the
    # process forks, the ulimit still applies to each child process.
    ulimit -t $(($TIME_LIMIT + $KILL_WAIT))
    ulimit -S -t $TIME_LIMIT
    cgexec -g cpuacct:$LOCAL_CPUACCT_GROUP --sticky "$@"
)&
MONITORED_PID=$!

# Start the monitoring process
(
    REMAINING_TIME=$TIME_LIMIT
    while [ "$REMAINING_TIME" -gt "0" ]; do
        # Wait $REMAINING_TIME seconds for the monitored process to
        # terminate. On a single CPU the CPU time cannot exceed the
        # wall clock time. It might be less, though. In that case, we
        # will go through the loop again.
        sleep $REMAINING_TIME
        CPU_USAGE=$(get_cpu_usage)
        REMAINING_TIME=$(($TIME_LIMIT - $CPU_USAGE / 1000000000))
    done

    # Time limit exceeded. Kill the monitored cgroup.
    if [ "$KILL_WAIT" -gt "0" ]; then
        kill_monitored_cgroup XCPU
        sleep $KILL_WAIT
    fi
    kill_monitored_cgroup KILL
)&
MONITOR_PID=$!

# Wait for the monitored job to exit (either on its own or because it
# was killed by the monitor).
wait $MONITORED_PID
EXIT_CODE=$?

# Kill all remaining tasks in the monitored cgroup and the monitor.
kill_monitored_cgroup KILL
kill -KILL $MONITOR_PID 2> /dev/null
wait $MONITOR_PID 2>/dev/null

# Report actual CPU usage.
set -e
CPU_USAGE=$(get_cpu_usage)
echo "Total CPU usage: $(($CPU_USAGE / 1000000))ms"

# Clean up and exit with the return code of the monitored process.
cgdelete cpuacct:$LOCAL_CPUACCT_GROUP
exit $EXIT_CODE
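Assuming the script is saved as limit_cpu_time.sh (the file name is arbitrary) and the parent cgroup from the comments has been created, an invocation could look like this:

$ sudo cgcreate -a $USER -t $USER -g cpuacct:timelimit
$ ./limit_cpu_time.sh 30 make -j4    # give the whole make process tree 30 s of CPU time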

Limit a set of processes' CPU and memory usage

Will the program launched from this console inherit the niceness of the console itself?

The answer is yes. You can check it: run nice -n19 xterm, then start someprocess in that xterm and check its nice level using ps -efl | grep someprocess. The nice level is inherited.
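A concrete way to check (someprocess is a placeholder for any long-running program):

$ nice -n19 xterm              # start a terminal at niceness 19
# ...then, inside that xterm:
$ someprocess &
$ ps -efl | grep someprocess   # the NI column shows 19, inherited from the xterm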

How to restrict a CPU core to only 2 applications using Linux scheduler?

isolcpus is one solution, but the Linux kernel documentation declares it deprecated:

isolcpus= [KNL,SMP,ISOL] Isolate a given set of CPUs from disturbance.

[Deprecated - use cpusets instead]

The cpuset sub-system of cgroups is preferable. A tool like partrt builds on this principle to divide the CPU cores into two subsets: nrt, where all the other processes run, and rt, into which you can move your applications. The applications are thereby isolated on dedicated CPU cores.
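As a rough sketch of the underlying cpuset mechanism (the cgroup-v1 paths, core numbers, and program name are assumptions; partrt automates equivalent steps):

$ sudo cgcreate -g cpuset:/rt
$ echo 2-3 | sudo tee /sys/fs/cgroup/cpuset/rt/cpuset.cpus
$ echo 0   | sudo tee /sys/fs/cgroup/cpuset/rt/cpuset.mems    # cpuset.mems must be set before adding tasks
$ sudo cgexec -g cpuset:/rt ./my_app                          # my_app now runs only on cores 2 and 3

To truly reserve those cores for your applications alone, the cpuset of everything else (the nrt partition in partrt's terms) also has to exclude them.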


