Get Program Execution Time in the Shell

Use the built-in time keyword:


$ help time

time: time [-p] PIPELINE
Execute PIPELINE and print a summary of the real time, user CPU time,
and system CPU time spent executing PIPELINE when it terminates.
The return status is the return status of PIPELINE. The `-p' option
prints the timing summary in a slightly different format. This uses
the value of the TIMEFORMAT variable as the output format.

Example:

$ time sleep 2

real 0m2.009s
user 0m0.000s
sys 0m0.004s

How to Calculate Execution Time and Get Command Exit Code

Your quoting in your sample call is unusual. I would expect that

sh -c "ipython -c "%run ./foo.ipynb" > $(log_file "log_file_name") 2>&1"

is effectively parsed as

sh -c 'ipython -c %run' './foo.ipynb > somewhere/logs/log_file_name_somedate.log 2>&1'

As for the behaviour of time with respect to return codes, my bash (5.1.16) behaves like this:

$ time ls "I do not exist" > $(echo newfile) 2>&1

real 0m0,004s
user 0m0,004s
sys 0m0,000s

$ echo $?
2

$ cat newfile
"I do not exist": No such file or directory (os error 2)

And with respect to redirections, like this:

$ capture="$(time (ls "I do not exist" > $(echo newfile2) 2>&1) 2>&1)"

$ echo $?
2

$ echo "$capture"

real 0m0,004s
user 0m0,004s
sys 0m0,000s

$ cat newfile2
"I do not exist": No such file or directory (os error 2)

Therefore, I'd suggest you try changing your call to:

CODE_TIME="$(time (sh -c "ipython -c '%run ./foo.ipynb' > $(log_file "log_file_name") 2>&1") 2>&1)"

CODE_RESULT=$?

Calculate average execution time of a program using Bash

You could write a loop and collect the output of time command and pipe it to awk to compute the average:

avg_time() {
    #
    # usage: avg_time n command ...
    #
    n=$1; shift
    (($# > 0)) || return                    # bail if no command given
    for ((i = 0; i < n; i++)); do
        { time -p "$@" &>/dev/null; } 2>&1  # ignore the output of the command
                                            # but collect time's output in stdout
    done | awk '
        /real/ { real = real + $2; nr++ }
        /user/ { user = user + $2; nu++ }
        /sys/  { sys  = sys  + $2; ns++ }
        END {
            if (nr>0) printf("real %f\n", real/nr);
            if (nu>0) printf("user %f\n", user/nu);
            if (ns>0) printf("sys %f\n", sys/ns)
        }'
}

Example:

avg_time 5 sleep 1

would give you

real 1.000000
user 0.000000
sys 0.000000

This can be easily enhanced to:

  • sleep for a given amount of time between executions
  • sleep for a random time (within a certain range) between executions
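
A minimal sketch of the first enhancement, pausing between runs so the system settles (the `avg_time_paused` name and the `delay` argument are additions, not part of the original function; the random variant would just compute the delay from $RANDOM instead):

```shell
# avg_time_paused n delay command ...
# Like avg_time, but sleeps `delay` seconds between executions.
avg_time_paused() {
    local n=$1 delay=$2
    shift 2
    (($# > 0)) || return                  # bail if no command given
    for ((i = 0; i < n; i++)); do
        { time -p "$@" &>/dev/null; } 2>&1
        sleep "$delay"                    # not timed: outside the time block
    done | awk '
        /real/ { real += $2; nr++ }
        /user/ { user += $2; nu++ }
        /sys/  { sys  += $2; ns++ }
        END {
            if (nr > 0) printf("real %f\n", real/nr);
            if (nu > 0) printf("user %f\n", user/nu);
            if (ns > 0) printf("sys %f\n",  sys/ns)
        }'
}

avg_time_paused 3 0.1 sleep 0.2
```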

Meaning of time -p from man time:

   -p
When in the POSIX locale, use the precise traditional format

"real %f\nuser %f\nsys %f\n"

(with numbers in seconds) where the number of decimals in the
output for %f is unspecified but is sufficient to express the
clock tick accuracy, and at least one.

You may want to check out this command-line benchmarking tool as well:

sharkdp/hyperfine

Print execution time of a shell command

Don't forget that there is a difference between bash's builtin time (which is used by default when you write time command) and /usr/bin/time (which you must call by its full path, because the builtin takes precedence).

The builtin time always prints to stderr, but /usr/bin/time lets you send its output to a specific file, so you do not interfere with the executed command's stderr stream. Also, /usr/bin/time's format is configurable on the command line or via the TIME environment variable, whereas the builtin's format is controlled only by the TIMEFORMAT shell variable.

$ time factor 1234567889234567891 # builtin
1234567889234567891: 142662263 8653780357

real 0m3.194s
user 0m1.596s
sys 0m0.004s
$ /usr/bin/time factor 1234567889234567891
1234567889234567891: 142662263 8653780357
1.54user 0.00system 0:02.69elapsed 57%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+215minor)pagefaults 0swaps
$ /usr/bin/time -o timed factor 1234567889234567891 # log to file `timed`
1234567889234567891: 142662263 8653780357
$ cat timed
1.56user 0.02system 0:02.49elapsed 63%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+217minor)pagefaults 0swaps

How do I get time of a Python program's execution?

The simplest way in Python:

import time
start_time = time.time()
main()
print("--- %s seconds ---" % (time.time() - start_time))

This assumes that your program takes at least a tenth of a second to run.

Prints:

--- 0.764891862869 seconds ---
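
For comparison, the same measurement can be taken from outside the program with the shell builtin, with no changes to the Python source (a sketch; the python3 command name is an assumption, and the external timing also includes interpreter startup):

```shell
# Wall-clock time of the whole interpreter run, including startup
time python3 -c 'import time; time.sleep(0.5)'
```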

Mean of execution time of a program

If I understand correctly what average you would like to calculate, I think the code below will serve your purpose.

Some explanations on the additions to your script:

  • Lines 6-14 declare a function that expects three arguments and updates the accumulated total time, in seconds.
  • Line 26 initializes the variable total_time.
  • Lines 31 and 38 execute programs A and B respectively, using bash's time to collect the execution time. >/dev/null discards A's and B's own output; 2>&1 redirects stderr to stdout so that grep can see time's report. grep real keeps only the real line of that report (you can choose whichever of time's measurements interests you), and awk '{print $2}' keeps only its numeric part.
  • Lines 32 and 39 store the minutes part in the corresponding variable.
  • Lines 33-34 and 40-41 trim real_time down to its seconds part.
  • Lines 35 and 42 accumulate the total time by calling the function accumulate_time.
  • Line 46 calculates the average time by dividing by 5.
  • The while loop was converted to a nested for loop with an iterations variable; not strictly part of the question, but it makes the number of iterations reusable.
 1  #!/bin/bash
 2
 3  # Function that receives three arguments (total time,
 4  # minutes and seconds) and returns the accumulated time in
 5  # seconds
 6  function accumulate_time() {
 7      total_time=$1
 8      minutes=$2
 9      seconds=$3
10
11      accumulated_time_secs=$(echo "$minutes * 60 + $seconds + $total_time" | bc)
12      echo "$accumulated_time_secs"
13
14  }
15
16  g++ A.cpp -o A
17  g++ B.cpp -o B
18  Inputfiles=(X Y Z U V)
19
20  iterations=5
21
22  for j in "${Inputfiles[@]}"
23  do
24      echo $j.txt:
25      # Initialize total_time
26      total_time=0.0
27
28      for i in $(seq 1 $iterations)
29      do
30          # Execute A and capture its real time
31          real_time=`{ time ./A $j.txt >/dev/null; } 2>&1 | grep real | awk '{print $2}'`
32          minutes=${real_time%m*}
33          seconds=${real_time#*m}
34          seconds=${seconds%s*}
35          total_time=$(accumulate_time "$total_time" "$minutes" "$seconds")
36
37          # Execute B and capture its real time
38          real_time=`{ time ./B C.txt >/dev/null; } 2>&1 | grep real | awk '{print $2}'`
39          minutes=${real_time%m*}
40          seconds=${real_time#*m}
41          seconds=${seconds%s*}
42          total_time=$(accumulate_time "$total_time" "$minutes" "$seconds")
43          echo ""
44      done
45
46      average_time=$(echo "scale=3; $total_time / $iterations" | bc)
47      echo "Average time for input file $j is: $average_time"
48  done
49
50  rm -f A B

Calculate time for each step of a shell script and show total execution time

If you are OK with the time granularity of seconds, you could simply do this:

start=$SECONDS
/u01/scripts/stop.sh ${1} | tee ${stop_log}
stop=$SECONDS
/u01/scripts/kill_proc.sh ${1} | tee ${kill_log}
kill_proc=$SECONDS
/u01/scripts/detach.sh ${1} | tee ${detach_log}
detach=$SECONDS
/u01/scripts/copy.sh ${1} | tee ${copy_log}
end=$SECONDS

printf "%s\n" "stop=$((stop-start)), kill_proc=$((kill_proc-stop)), detach=$((detach-kill_proc)), copy=$((end-detach)), total=$((end-start))"

You can write a function to do this as well:

time_it() {
    local start=$SECONDS rc
    echo "$(date): Starting $*"
    "$@"; rc=$?
    echo "$(date): Finished $*; elapsed = $((SECONDS-start)) seconds"
    return $rc
}

With Bash version >= 4.2 you can use printf to print the date, rather than invoking an external command:

time_it() {
    local start=$SECONDS ts rc
    printf -v ts '%(%Y-%m-%d_%H:%M:%S)T' -1
    printf '%s\n' "$ts Starting $*"
    "$@"; rc=$?
    printf -v ts '%(%Y-%m-%d_%H:%M:%S)T' -1
    printf '%s\n' "$ts Finished $*; elapsed = $((SECONDS-start)) seconds"
    return $rc
}

And invoke it as:

start=$SECONDS
time_it /u01/scripts/stop.sh ${1} | tee ${stop_log}
time_it /u01/scripts/kill_proc.sh ${1} | tee ${kill_log}
time_it /u01/scripts/detach.sh ${1} | tee ${detach_log}
time_it /u01/scripts/copy.sh ${1} | tee ${copy_log}
echo "Total time = $((SECONDS-start)) seconds"

Related:

  • How do I measure duration in seconds in a shell script?

Execution time of C program

CLOCKS_PER_SEC is a constant which is declared in <time.h>. To get the CPU time used by a task within a C application, use:

clock_t begin = clock();

/* here, do your time-consuming job */

clock_t end = clock();
double time_spent = (double)(end - begin) / CLOCKS_PER_SEC;

Note that this returns the time as a floating point type. This can be more precise than a second (e.g. you measure 4.52 seconds). Precision depends on the architecture; on modern systems you easily get 10ms or lower, but on older Windows machines (from the Win98 era) it was closer to 60ms.

clock() is standard C; it works "everywhere". There are system-specific functions, such as getrusage() on Unix-like systems.

Java's System.currentTimeMillis() does not measure the same thing. It is a "wall clock": it can help you measure how much time it took for the program to execute, but it does not tell you how much CPU time was used. On a multitasking system (i.e. all of them), these can be widely different.
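
The difference is easy to demonstrate in the shell: sleep occupies wall-clock time while consuming almost no CPU, so real is large while user and sys stay near zero:

```shell
# real (wall clock) is ~1s; user+sys (CPU time) stays near zero
time sleep 1
```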


