How to redirect stderr and stdout to different files in the same line in script?
Just add them in one line: command 2>> error 1>> output

However, note that >> appends if the file already has data, whereas > overwrites any existing data in the file. So use command 2> error 1> output if you do not want to append.

Just for completeness' sake, you can write 1> as just >, since the default file descriptor is standard output; 1> and > are the same thing. So command 2> error 1> output becomes command 2> error > output.
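As a quick sanity check of both forms (emit here is just a stand-in for a command that writes one line to each stream):

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"   # scratch directory for the example files

# Stand-in command writing one line to stdout and one to stderr.
emit() { echo "to stdout"; echo "to stderr" >&2; }

emit 2> error 1> output    # overwrite: each file holds exactly one line
cat output                 # to stdout
cat error                  # to stderr

emit 2>> error >> output   # append: each file now holds two lines
wc -l < output             # 2
```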
Bash redirection: save stderr/stdout to different files and still print them out on a console
Here is an answer:
./yourScript.sh > >(tee stdout.log) 2> >(tee stderr.log >&2)
If your script writes to both stdout and stderr, you get two files, stdout.log and stderr.log, and all output (both streams) still goes to the console.
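To see it in action, a small stand-in for ./yourScript.sh works just as well (the sleep is a crude way to let the asynchronous tee processes flush; a later answer on this page discusses more robust synchronization):

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"   # scratch directory for the log files

# Stand-in for ./yourScript.sh: one line to each stream.
yourScript() { echo out-line; echo err-line >&2; }

yourScript > >(tee stdout.log) 2> >(tee stderr.log >&2)

sleep 0.2          # the tee processes run asynchronously; give them a moment
cat stdout.log     # out-line
cat stderr.log     # err-line
```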
Different ways to redirect stderr and stdout to a file in Bash?
> overwrites files; >> appends to files.

When you write 1> file and 2> file, both streams will overwrite file in parallel and may therefore overwrite each other – a typical race condition. command 1>> file 2>> file should keep all output of both streams.
Example:
$ n=1""000""000
$ (seq "$n" 1>&2 & seq "$n") 1> o 2> o
$ wc -l o
1000000 o
$ rm o
$ (seq "$n" 1>&2 & seq "$n") 1>> o 2>> o
$ wc -l o
2000000 o
Bash redirect stdout and stderr to separate files with timestamps
The trick is to make sure that tee, and the process substitution running your log function, exit before the script as a whole does -- so that when the shell that started the script prints its prompt, there isn't any backgrounded process that might write more output after it's done.
As a working example (albeit one focused more on being explicit than terse):
#!/usr/bin/env bash
stdout_log=stdout.log; stderr_log=stderr.log
log () {
    file=$1; shift
    while read -r line; do
        printf '%(%s)T %s\n' -1 "$line"
    done >> "$file"
}
# first, make backups of your original stdout and stderr
exec {stdout_orig_fd}>&1 {stderr_orig_fd}>&2
# for stdout: start your process substitution, record its PID, start tee, record *its* PID
exec {stdout_log_fd}> >(log "$stdout_log"); stdout_log_pid=$!
exec {stdout_tee_fd}> >(tee "/dev/fd/$stdout_log_fd"); stdout_tee_pid=$!
exec {stdout_log_fd}>&- # close stdout_log_fd so the log process can exit when tee does
# for stderr: likewise
exec {stderr_log_fd}> >(log "$stderr_log"); stderr_log_pid=$!
exec {stderr_tee_fd}> >(tee "/dev/fd/$stderr_log_fd" >&2); stderr_tee_pid=$!
exec {stderr_log_fd}>&- # close stderr_log_fd so the log process can exit when tee does
# now actually swap out stdout and stderr for the processes we started
exec 1>&$stdout_tee_fd 2>&$stderr_tee_fd {stdout_tee_fd}>&- {stderr_tee_fd}>&-
# ...do the things you want to log here...
echo "this goes to stdout"; echo "this goes to stderr" >&2
# now, replace the FDs going to tee with the backups...
exec >&"$stdout_orig_fd" 2>&"$stderr_orig_fd"
# ...and wait for the associated processes to exit.
while :; do
    ready_to_exit=1
    for pid_var in stderr_tee_pid stderr_log_pid stdout_tee_pid stdout_log_pid; do
        # kill -0 just checks whether a PID exists; it doesn't actually send a signal
        kill -0 "${!pid_var}" &>/dev/null && ready_to_exit=0
    done
    (( ready_to_exit )) && break
    sleep 0.1 # avoid a busy-loop eating unnecessary CPU by sleeping before next poll
done
So What's With The File Descriptor Manipulation?
A few key concepts to make sure we have clear:
- All subshells have their own copies of the file descriptor table as created when they were fork()ed off from their parent. From that point forward, each file descriptor table is effectively independent.
- A process reading from (the read end of) a FIFO (or pipe) won't see an EOF until all programs writing to (the write end of) that FIFO have closed their copies of the descriptor.

...so, if you create a FIFO pair, fork() off a child process, and let the child process write to the write end of the FIFO, whatever's reading from the read end will never see an EOF until not just the child, but also the parent, closes their copies.

Thus, the gymnastics you see here:

- When we run exec {stdout_log_fd}>&-, we're closing the parent shell's copy of the FIFO writing to the log function for stdout, so the only remaining copy is the one used by the tee child process -- so that when tee exits, the subshell running log exits too.
- When we run exec 1>&$stdout_tee_fd {stdout_tee_fd}>&-, we're doing two things: First, we make FD 1 a copy of the file descriptor whose number is stored in the variable stdout_tee_fd; second, we delete the stdout_tee_fd entry from the file descriptor table, so only the copy on FD 1 remains. This ensures that later, when we run exec >&"$stdout_orig_fd", we're deleting the last remaining write handle to the stdout tee process, causing tee to get an EOF on stdin (so it exits, thus closing the handle it holds on the log function's subshell and letting that subshell exit as well).
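The second bullet, the EOF rule for FIFOs, can be demonstrated directly (this is an illustrative sketch, not part of the script above; the file names are arbitrary):

```shell
#!/usr/bin/env bash
# A reader on a FIFO only sees EOF once *every* writer has closed
# its copy of the write end -- including copies held by the parent.
dir=$(mktemp -d); fifo=$dir/fifo
mkfifo "$fifo"

# Reader in the background: copies the FIFO to a file, exits on EOF.
cat "$fifo" > "$dir/out" & reader=$!

exec {wfd}> "$fifo"           # parent opens and keeps a write handle
( echo "from child" ) >&$wfd  # a child writes and exits...
kill -0 "$reader" && echo "reader still alive"  # ...no EOF yet: parent's FD is open
exec {wfd}>&-                 # close the last write handle...
wait "$reader"                # ...and now cat sees EOF and exits
cat "$dir/out"                # from child
```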
Some Final Notes On Process Management
Unfortunately, how bash handles subshells created for process substitutions has changed substantially between still-actively-deployed releases; so while in theory it's possible to use wait "$pid" to let a process substitution exit and collect its exit status, this isn't always reliable -- hence the use of kill -0.

However, if wait "$pid" worked, it would be strongly preferable, because the wait() syscall is what removes a previously-exited process's entry from the process table: it is guaranteed that a PID will not be reused (and a zombie process-table entry is left as a placeholder) if no wait() or waitpid() invocation has taken place.

Modern operating systems try fairly hard to avoid short-term PID reuse, so wraparound is not an active concern in most scenarios. However, if you're worried about this, consider using the flock-based mechanism discussed in https://stackoverflow.com/a/31552333/14122 for waiting for your process substitutions to exit, instead of kill -0.
tee stdout and stderr to separate files while retaining them on their respective streams
A process-substitution-based solution is simple, although not as simple as you might think. My first attempt seemed like it should work:
{ echo stdout; echo stderr >&2; } > >( tee ~/stdout.txt ) \
2> >( tee ~/stderr.txt )
However, it doesn't quite work as intended in bash because the second tee inherits its standard output from the original command (and hence it goes to the first tee) rather than from the calling shell. It's not clear if this should be considered a bug in bash.
It can be fixed by separating the output redirections into two separate commands:
{ { echo stdout; echo stderr >&2; } > >(tee stdout.txt ); } \
2> >(tee stderr.txt )
Update: the second tee should actually be tee stderr.txt >&2 so that what was read from standard error is printed back onto standard error.

Now, the redirection of standard error occurs in a command which does not have its standard output redirected, so it works in the intended fashion. The outer compound command has its standard error redirected to the outer tee, with its standard output left on the terminal. The inner compound command inherits its standard error from the outer (and so it also goes to the outer tee), while its standard output is redirected to the inner tee.
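With the update applied, the full corrected command can be checked like this (the echo pair stands in for a real command; the sleep is a crude way to let the asynchronous tee processes flush):

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"   # scratch directory for the log files

{ { echo stdout; echo stderr >&2; } > >(tee stdout.txt); } \
    2> >(tee stderr.txt >&2)

sleep 0.2           # the tee processes run asynchronously
cat stdout.txt      # stdout
cat stderr.txt      # stderr
```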
How to redirect to separate files and to a combined file?
Following ideas in the comments, use tee to place stdout/stderr into their own files and into a combined file:
rm -f both.log
some-command 2> >(tee err.log >>both.log) | tee out.log >> both.log
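A quick check of the combined-file approach, using a stand-in for some-command (because both appends open both.log in append mode, the two streams interleave without clobbering each other, per the race-condition answer above; the ordering between them is not fixed):

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"   # scratch directory for the log files

# Stand-in for some-command: one line to each stream.
some_command() { echo out; echo err >&2; }

rm -f both.log
some_command 2> >(tee err.log >> both.log) | tee out.log >> both.log

sleep 0.2           # let the process substitution finish writing
wc -l < both.log    # 2
cat out.log         # out
cat err.log         # err
```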
redirect stdout and stderr to one file, copy of just stderr to another
You can use an additional file descriptor and tee:
{ foo.sh 2>&1 1>&3- | tee stderr.txt; } > stdout_and_stderr.txt 3>&1
Be aware that line buffering may cause the stdout output to appear out of order. If this is a problem, there are ways to overcome that, including the use of unbuffer.
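To trace what the command does, here it is with a stand-in for foo.sh: the trailing 3>&1 gives FD 3 a copy of the combined-output file, 1>&3- then moves foo's stdout straight to that file, and stderr goes through tee, whose own stdout also lands in the file:

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"   # scratch directory for the output files

foo() { echo out; echo err >&2; }   # stand-in for foo.sh

{ foo 2>&1 1>&3- | tee stderr.txt; } > stdout_and_stderr.txt 3>&1

sort stdout_and_stderr.txt   # err + out (ordering between streams isn't fixed)
cat stderr.txt               # err
```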
How to redirect and append both standard output and standard error to a file with Bash
cmd >>file.txt 2>&1
Bash executes the redirects from left to right as follows:
- >>file.txt: Open file.txt in append mode and redirect stdout there.
- 2>&1: Redirect stderr to "where stdout is currently going". In this case, that is a file opened in append mode. In other words, the &1 reuses the file descriptor which stdout currently uses.
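Because the redirects are processed left to right, reversing their order gives a different result: cmd 2>&1 >>file.txt points stderr at the terminal first and only then moves stdout. A quick demonstration (cmd here is a stand-in writing one line to each stream):

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"   # scratch directory for the example file

cmd() { echo out; echo err >&2; }

cmd >>file.txt 2>&1   # correct order: both streams land in the file
wc -l < file.txt      # 2

rm file.txt
cmd 2>&1 >>file.txt   # wrong order: stderr already went to the terminal
wc -l < file.txt      # 1
```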