How can I get the CPU time for a perl system call?
You can use Capture::Tiny to capture the STDOUT and STDERR of pretty much anything in Perl.
use Capture::Tiny 'capture';
my ($stdout, $stderr, $exit) = capture { system "time ls" };
print $stderr;
For some reason the output is missing some whitespace between the numbers and their labels on my system, but it is clear enough to parse out what you need.
0.00user 0.00system 0:00.00elapsed 0%CPU (0avgtext+0avgdata 2272maxresident)k
0inputs+8outputs (0major+111minor)pagefaults 0swaps
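The captured $stderr can then be parsed out; a minimal sketch, assuming the GNU time summary format shown above (the sample line here is hard-coded for illustration):

```perl
use strict;
use warnings;

# Sample of GNU time's summary line, as captured from STDERR above.
my $stderr = "0.00user 0.01system 0:00.02elapsed 50%CPU (0avgtext+0avgdata 2272maxresident)k\n";

# The numbers are glued to their labels, so match number-then-label.
my ($user, $system) = $stderr =~ /([\d.]+)user\s*([\d.]+)system/;
print "user=$user sys=$system total CPU=", $user + $system, "\n";
```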
Measure the runtime of a program run via perl open
Since you open a pipe, you need to start timing before the open call and stop at least after the reading is done.
use warnings;
use strict;
use Time::HiRes qw(gettimeofday tv_interval sleep);
my $t0 = [gettimeofday];
open my $read, '-|', qw(ls -l) or die "Can't open process: $!";
while (<$read>)
{
sleep 0.1;
print;
}
print "It took ", tv_interval($t0), " seconds\n";
# close pipe and check
or, to time the whole process, take the reading after calling close on the pipe (after all reading is done)
my $t0 = [gettimeofday];
open my $read, '-|', qw(ls -l) or die "Can't open process: $!";
# ... while (<$read>) { ... }
close $read or
warn $! ? "Error closing pipe: $!" : "Exit status: $?";
print "It took ", tv_interval($t0), " seconds\n";
The close blocks and waits for the program to finish.

Closing a pipe also waits for the process executing on the pipe to exit--in case you wish to look at the output of the pipe afterwards--and implicitly puts the exit status value of that command into $?
[...]

For the status check, see the $? variable in perlvar and system
If the timed program forks and doesn't wait on its children in a blocking way, this won't time them correctly.
In that case you need to identify resources that they use (files?) and monitor that.
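One more option worth noting: Perl's built-in times returns CPU seconds for the current process and for its reaped children, so after a system call (which waits for the child), the child's CPU usage shows up in the child fields:

```perl
use strict;
use warnings;

# Run a child process; system() waits for it, so it gets reaped here.
system("ls > /dev/null") == 0 or die "system failed: $?";

# times() returns (user, system, child-user, child-system) in seconds.
my ($user, $sys, $cuser, $csys) = times;
printf "child CPU: %.2f user, %.2f system\n", $cuser, $csys;
```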
I'd like to add that external commands should be put together carefully, to avoid shell-injection trouble. A good module for that is String::ShellQuote. See for example this answer and this answer
Using a module for capturing streams would free you from the shell and perhaps open other ways to run and time this more reliably. A good one is Capture::Tiny (and there are others as well).
Thanks to Håkon Hægland for comments. Thanks to ikegami for setting me straight to use close (and not waitpid).
accurately timing an execution of a system call using perl
$dt2 is a constant, but it looks like you want it to recompute the timestamp every time it is used. For that purpose, you should use a function.
use POSIX qw(strftime);   # strftime comes from the POSIX module

sub dt2 {
    return strftime('%Y%m%d-%H:%M:%S', localtime(time));
}
...
open (CRLOG, ">dump-$dt.log") || die "cannot open log: $!";
foreach my $tbls (@tbls)
{
    chomp $tbls;
    print CRLOG "TIME START => ", dt2(), "\n";
    my $crfile = qq{mysql -u foo -pmypw ...};
    system($crfile);
    print CRLOG "COMMAND => $crfile\n";
    print CRLOG "TIME END => ", dt2(), "\n";
}
close (CRLOG);
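If you also want elapsed seconds per command rather than just wall-clock timestamps, Time::HiRes can bracket the system call; a minimal sketch (the echo command is a stand-in for the mysql invocation above):

```perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

my $cmd = q{echo hello};          # stand-in for the real command
my $t0  = [gettimeofday];
system($cmd) == 0 or warn "command failed: $?";
printf "ELAPSED => %.3f seconds\n", tv_interval($t0);
```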
Control a perl script's CPU utilization?
You could lower the (Perl) process's priority, as assigned by the OS: windows or linux
example for lowest priority:
windows
start /LOW perl <myscript>
linux
nice -n 19 perl <myscript>
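On POSIX systems you can also lower the priority from inside the script itself; a sketch using the standard POSIX module's nice function:

```perl
use strict;
use warnings;
use POSIX ();

# Raise our own nice value by 19 (i.e. lower our priority).
# Note: an unprivileged process cannot lower the value back afterwards.
defined POSIX::nice(19) or warn "nice failed: $!";
print "running at reduced priority\n";
```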
Writing a Sys::Getpagesize Perl module for system call getpagesize (man page GETPAGESIZE(2))
Why do you think you need the source code to the getpagesize function? You just link to the system's version. I haven't tried it, but something like this should work:
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
#include <unistd.h> /* man 2 getpagesize says to use this */
MODULE = Sys::Getpagesize PACKAGE = Sys::Getpagesize
int
getpagesize()
But in this case, you shouldn't need to write XS at all. man 2 getpagesize says "Portable applications should employ sysconf(_SC_PAGESIZE) instead of getpagesize()."

Perl's standard POSIX module already has sysconf:
use POSIX qw(sysconf _SC_PAGESIZE);
print sysconf( _SC_PAGESIZE );
Get both STDOUT and STDERR of a program or script to a variable in perl
To get both STDOUT and STDERR into a variable, use the following code snippet with backticks (``). Make sure 2>&1 is placed at the end of your command, to redirect STDERR into STDOUT.
When a wrong command is provided,
my $Temp_return;
$Temp_return = `lse 2>&1`;
print "return = " . $Temp_return . "\n";
the error output is:
return = 'lse' is not recognized as an internal or external command, operable program or batch file.
For a correct command you will get the result accordingly.
As additional information, here are the different methods for executing a command in Perl:
system() : when you want to execute a command and are not interested in reading the console output of the executed command.
exec() : when you don't want to return to the calling Perl script.
backticks (``) : when you want to store/read the console output of the command into a Perl variable. I initially mistook backticks (``) for single quotes (''), because they look almost identical, so pay attention to which one you use.
open() : when you want to pipe the command (as input or output) to your script.
Hope this helps. :)
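The backticks method above can be combined with an exit-status check via $?; a small sketch (echo is just a placeholder command):

```perl
use strict;
use warnings;

# Capture STDOUT and STDERR together, then decode the exit status.
my $out    = `echo ok 2>&1`;
my $status = $? >> 8;    # the exit code lives in the high byte of $?

print "output = $out";
print "exit   = $status\n";
```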
BR,
Jerry James
Invoke multiple threads by perl system function
There are many ways to take advantage of multiple cores for implicitly parallel workloads.
The most obvious is to suffix an ampersand to your system call; it will then charge off and do the work in the background.
my @filenames = ("file1.xml", "file2.xml", "file3.xml", "file4.xml");
foreach my $file (@filenames)
{
#Scripts which parses the XML file
system("perl parse.pl $file &");
#Go-On don't wait till parse.pl has finished
}
That's pretty simplistic, but should do the trick. The downside of this approach is it doesn't scale too well - if you had a long list of files (say, 1000?) then they'd all kick off at once, and you may drain system resources and cause problems by doing it.
So if you want a more controlled approach, you can use either forking or threading. forking uses the C system call and starts duplicate process instances.
use Parallel::ForkManager;
my $manager = Parallel::ForkManager->new(4); # number of CPUs
my @filenames = ("file1.xml", "file2.xml", "file3.xml", "file4.xml");
foreach my $file (@filenames)
{
    # Script which parses the XML file
    $manager->start and next;
    exec("perl", "parse.pl", $file) or die "exec: $!";
    $manager->finish;
    # Go on, don't wait till parse.pl has finished
}
# and if you want to wait:
$manager->wait_all_children();
And if you wanted to do something that involved capturing output and post-processing it, I'd suggest thinking in terms of threads and Thread::Queue. But that's unnecessary if there's no synchronisation required.
(If you're thinking that might be useful, I'll offer:
Perl daemonize with child daemons)
Edit: Amended based on comments. Ikegami correctly points out:
system("perl parse.pl $file"); $manager->finish; is wasteful (three processes per worker). Use: exec("perl", "parse.pl", $file) or die "exec: $!"; (one process per worker).
How to calculate the CPU usage of a process by PID in Linux from C?
You need to parse the data out of /proc/<PID>/stat. These are the first few fields (from Documentation/filesystems/proc.txt in your kernel source):
Table 1-3: Contents of the stat files (as of 2.6.22-rc3)
..............................................................................
Field Content
pid process id
tcomm filename of the executable
state state (R is running, S is sleeping, D is sleeping in an
uninterruptible wait, Z is zombie, T is traced or stopped)
ppid process id of the parent process
pgrp pgrp of the process
sid session id
tty_nr tty the process uses
tty_pgrp pgrp of the tty
flags task flags
min_flt number of minor faults
cmin_flt number of minor faults with child's
maj_flt number of major faults
cmaj_flt number of major faults with child's
utime user mode jiffies
stime kernel mode jiffies
cutime user mode jiffies with child's
cstime kernel mode jiffies with child's
You're probably after utime and/or stime. You'll also need to read the cpu line from /proc/stat, which looks like:

cpu 192369 7119 480152 122044337 14142 9937 26747 0 0

This tells you the cumulative CPU time that's been used in various categories, in units of jiffies. You need to take the sum of the values on this line to get a time_total measure.
Read both utime and stime for the process you're interested in, and read time_total from /proc/stat. Then sleep for a second or so, and read them all again. You can now calculate the CPU usage of the process over the sampling time, with:
user_util = 100 * (utime_after - utime_before) / (time_total_after - time_total_before);
sys_util = 100 * (stime_after - stime_before) / (time_total_after - time_total_before);
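The same calculation can be sketched in Perl (assuming a Linux /proc layout as in proc(5); the field positions follow the table above, and the script samples itself via $$):

```perl
use strict;
use warnings;

# utime (field 14) and stime (field 15) from /proc/<pid>/stat.
sub proc_times {
    my ($pid) = @_;
    open my $fh, '<', "/proc/$pid/stat" or die "open: $!";
    my $line = <$fh>;
    $line =~ s/^.*\)\s+//;        # comm may contain spaces; skip past it
    my @f = split ' ', $line;
    return ($f[11], $f[12]);      # fields 14 and 15 of the full line
}

# Sum of the first "cpu" line of /proc/stat, in jiffies.
sub total_jiffies {
    open my $fh, '<', '/proc/stat' or die "open: $!";
    my ($cpu) = grep { /^cpu / } <$fh>;
    my (undef, @vals) = split ' ', $cpu;
    my $sum = 0;
    $sum += $_ for @vals;
    return $sum;
}

my ($u0, $s0) = proc_times($$);
my $t0 = total_jiffies();
sleep 1;
my ($u1, $s1) = proc_times($$);
my $t1 = total_jiffies();

printf "user: %.2f%%  sys: %.2f%%\n",
    100 * ($u1 - $u0) / ($t1 - $t0),
    100 * ($s1 - $s0) / ($t1 - $t0);
```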
Make sense?