Capture both exit status and output from a system call in R
As of R 2.15, system2 will return the command's exit status as a "status" attribute when stdout and/or stderr are TRUE. This makes it easy to get both the text output and the return value. In this example, ret ends up being a character string with a "status" attribute:
> ret <- system2("ls","xx", stdout=TRUE, stderr=TRUE)
Warning message:
running command ''ls' xx 2>&1' had status 1
> ret
[1] "ls: xx: No such file or directory"
attr(,"status")
[1] 1
> attr(ret, "status")
[1] 1
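For comparison, here is a rough shell-level sketch of what system2() is doing under the hood: redirect stderr into stdout, capture the combined text, and save the exit status before anything else can overwrite it. (The exact message and status code for a missing file vary by platform and ls implementation.)

```shell
# Capture combined stdout+stderr, then immediately save the exit status.
out=$(ls xx 2>&1)
status=$?
echo "status: $status"
echo "output: $out"
```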
Simultaneously save and print R system call output?
Since R waits for the call to complete anyway, to see the stdout in real time you generally need to poll the process for output. (Depending on your needs, you can/should poll stderr as well.)
Here's a quick example using processx.
First, I'll create a slow-output shell script; replace this with the real reason you're calling system. I've named it myscript.sh.
#!/bin/bash
for i in `seq 1 5` ; do
    sleep 3
    echo 'hello world: '$i
done
Now let's (1) start a process in the background, then (2) poll its output every second.
proc <- processx::process$new("bash", c("-c", "./myscript.sh"), stdout = "|")
output <- character(0)
while (proc$is_alive()) {
  Sys.sleep(1)
  now <- Sys.time()
  tmstmp <- sprintf("# [%s]", format(now, format = "%T"))
  thisout <- proc$read_output_lines()
  if (length(thisout)) {
    output <- c(output, thisout)
    message(tmstmp, " New output!\n", paste("#>", thisout))
  } else message(tmstmp)
}
# [13:09:29]
# [13:09:30]
# [13:09:31]
# [13:09:32]New output!
#> hello world: 1
# [13:09:33]
# [13:09:34]
# [13:09:35]New output!
#> hello world: 2
# [13:09:36]
# [13:09:37]
# [13:09:38]New output!
#> hello world: 3
# [13:09:39]
# [13:09:40]
# [13:09:41]New output!
#> hello world: 4
# [13:09:42]
# [13:09:43]
# [13:09:44]New output!
#> hello world: 5
And its output is stored:
output
# [1] "hello world: 1" "hello world: 2" "hello world: 3" "hello world: 4" "hello world: 5"
Ways that this can be extended:
- Add/store a timestamp with each message, so you know when it came in. The accuracy and utility of this depend on how frequently you want R to poll the process's stdout pipe, and on how much you really need this information.
- Run the process in the background, and even poll for it in background cycles. I use the later package and set up a self-recurring function that polls, appends, and re-submits itself into the later process queue. The benefit of this is that you can continue to use R; the drawback is that if you're running long code, then you will not see output until your current code exits and lets R breathe and do something idly. (To understand this bullet, one really must play with the later package, a bit beyond this answer.)
- Depending on your intentions, it might be more appropriate for the output to go to a file and be "permanently" stored there instead of relying on the R process to keep tabs. There are disadvantages to this, in that you now need to manage polling a file for changes, and R doesn't make that easy (it does not have, for instance, direct/easy access to inotify, so it gets even more complicated).
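Incidentally, the "save and print simultaneously" goal has a direct shell-level analogue in tee, which prints each line as it arrives while also writing a copy to a file:

```shell
# tee shows output in real time on stdout and also saves it to out.txt.
printf 'hello world: %d\n' 1 2 3 | tee out.txt
saved=$(cat out.txt)
rm out.txt
```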
Run a process, capture its output and exit code
Here is a way to get both the output and the exit code using Open3:
#!/usr/bin/env ruby
require 'open3'

Open3.popen3("ls -l") do |stdin, stdout, stderr, wait_thr|
  puts stdout.read
  puts wait_thr.value.exitstatus
end
# >> total 52
# >> -rw-r--r-- 1 arup users 0 Aug 5 09:32 a.rb
# >> drwxr-xr-x 2 arup users 4096 Jul 20 20:37 FSS
# >> drwxr-xr-x 2 arup users 4096 Jul 20 20:37 fss_dir
# >> -rw-r--r-- 1 arup users 42 Jul 19 01:36 out.txt
# .....
#...
# >> 0
The documentation for ::popen3 is very clear: stdout is an IO object, so we read it with the IO#read method. And wait_thr.value gives us a Process::Status; once we have that object, we can call its #exitstatus method to get the exit code.
Interrupt loop with system command
John, I'm not sure if this will help, but from investigating setTimeLimit, I learned that it can halt execution whenever a user is able to execute an interrupt, like Ctrl-C. See this question for some of the references. In particular, callbacks may be the way to go, and I'd check out addTaskCallback and this guide on developer.r-project.org.
Here are four other suggestions:
- Although it's a hack, a very different approach may be to invoke two R sessions: one is a master session, and the other simply exists to execute shell commands passed by the master session, which waits for confirmation that each job was done before starting the next one.
- If you can use foreach instead of for (either in parallel via %dopar%, or serially via %do% or with only one registered worker), this may be more amenable to interruptions, as it may be equivalent to the first suggestion (since it forks R).
- If you can retrieve the exit code for the external command, then that could be passed to a loop conditional. This previous Q&A will be helpful in that regard.
- If you want to have everything run in a bash script, then R could just write one long script (i.e. output a string or series of strings to a file). This could be executed, and the interrupt is guaranteed not to affect a loop, as you've unrolled the loop. Alternatively, you could write loops in bash. Here are examples. Personally, I like to apply commands to files using find (e.g. find .... -exec doStuff {} ';') or as inputs via backquotes. Unfortunately, I can't easily give well-formatted code on SO, since it embeds backquotes inside of backquotes... See this page for examples. So, it may be the case that you could have one command, no looping, and apply a function to all files meeting a particular set of criteria. Using command substitution via backquotes is a very handy trick for a bash user.
How do I execute a command and get the output of the command within C++ using POSIX?
#include <cstdio>
#include <iostream>
#include <memory>
#include <stdexcept>
#include <string>
#include <array>
std::string exec(const char* cmd) {
    std::array<char, 128> buffer;
    std::string result;
    // popen runs the command and hands us a read pipe; the unique_ptr
    // deleter guarantees pclose is called even if an exception is thrown.
    std::unique_ptr<FILE, decltype(&pclose)> pipe(popen(cmd, "r"), pclose);
    if (!pipe) {
        throw std::runtime_error("popen() failed!");
    }
    while (fgets(buffer.data(), buffer.size(), pipe.get()) != nullptr) {
        result += buffer.data();
    }
    return result;
}
Pre-C++11 version:
#include <iostream>
#include <stdexcept>
#include <stdio.h>
#include <string>
std::string exec(const char* cmd) {
    char buffer[128];
    std::string result = "";
    FILE* pipe = popen(cmd, "r");
    if (!pipe) throw std::runtime_error("popen() failed!");
    try {
        while (fgets(buffer, sizeof buffer, pipe) != NULL) {
            result += buffer;
        }
    } catch (...) {
        pclose(pipe);
        throw;
    }
    pclose(pipe);
    return result;
}
Replace popen and pclose with _popen and _pclose for Windows.
Getting STDOUT, STDERR, and response code from external *nix command in perl
Actually, the proper way to write this is:
#!/usr/bin/perl
my $cmd = 'lsss';
my $out = qx($cmd 2>&1);
my $r_c = $?;
print "output was $out\n";
print "return code = ", $r_c, "\n";
$? will be 0 if the command succeeded; on failure it holds the raw wait status (shift it with $? >> 8 to get the command's actual exit code), and it is -1 if the command could not be launched at all.
Pipe output and capture exit status in Bash
There is an internal Bash variable called $PIPESTATUS; it is an array that holds the exit status of each command in your last foreground pipeline of commands.
<command> | tee out.txt ; test ${PIPESTATUS[0]} -eq 0
Or another alternative which also works with other shells (like zsh) would be to enable pipefail:
set -o pipefail
...
The first option does not work with zsh as written, since zsh's syntax differs slightly: its equivalent array is the lowercase $pipestatus.
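A minimal demonstration of the difference: without pipefail the pipeline's status is that of the last command (tee, which succeeds), while with pipefail the failing first command shows through.

```shell
false | tee /dev/null        # tee succeeds, so $? is 0
rc_without=$?

set -o pipefail
false | tee /dev/null        # now the failing 'false' wins
rc_with=$?
set +o pipefail

echo "without pipefail: $rc_without, with pipefail: $rc_with"
```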
How to change output depending upon command exit status in shell?
I think you're mixing up two different things, success vs. failure and stdout vs stderr. The stdout vs. stderr distinction really has to do with a command's "normal" output (stdout) vs. status messages (stderr, including error messages, success messages, status messages, etc). In your case, all of the things you're printing should go to stderr, not stdout.
Success vs failure is a different question, and it's generally detected by the exit status of the command. It's a bit strange-looking, but the standard way to check the exit status of a command is to use it as the condition of an if statement, something like this:
printf "\n Installing Nano" >&2
if sudo apt install nano -y &> /dev/null; then
    # The "then" branch runs if the condition succeeds
    printf "\r Nano Installed \n" >&2
else
    # The "else" branch runs if the condition fails
    printf "\r Error Occurred \n" >&2
fi
(Using && and || instead of if ... then ... else can cause confusion and weird problems in some situations; I do not recommend it.)
Note: the >&2 redirects output to stderr, so the above sends all messages to stderr.
Bash: How to get exit code of a command while using a spinner?
wait will tell you what exit status a child PID exited with (by setting that program's exit status as its own), when given that PID as an argument.
sleep 2 & sleep_pid=$!
spinner "$sleep_pid"
wait "$sleep_pid"; exitCode=$?
echo "exitcode: $exitCode"
Note that combining multiple commands onto one line when collecting $! or $? in the second half is a practice I strongly recommend — it prevents the value you're trying to collect from being changed by mistake (as by someone adding a new log line to your code later and not realizing it has side effects).
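The spinner function itself isn't shown above; here's a minimal sketch of one, for completeness (an assumption about what such a function might look like): it animates while the given PID is alive — kill -0 merely tests whether the process still exists — and returns once it exits, after which wait collects the status.

```shell
# Hypothetical minimal spinner; cycles through four characters while
# the process identified by $1 is still running.
spinner() {
  local pid=$1 chars='|/-\' i=0 c
  while kill -0 "$pid" 2>/dev/null; do
    c=$(( i++ % 4 ))
    printf '\r%s' "${chars:c:1}"
    sleep 0.2
  done
  printf '\r'
}

sleep 1 & sleep_pid=$!
spinner "$sleep_pid"
wait "$sleep_pid"; exitCode=$?
echo "exitcode: $exitCode"
```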