Output Stdout and Stderr to File and Screen and Stderr to File in a Limited Environment

I finally reached the goal. I should say that I was inspired by @WilliamPursell's answer.

{ "$0" "${mainArgs[@]}" 2>&1 1>&3 | tee -a "$logPath/$logFileName.err" 1>&3 & } 3>&1 | tee -a "$logPath/$logFileName.log" &

Explanation

  • Relaunch the script ("$0" "${mainArgs[@]}") with...
  • stderr redirected to stdout (2>&1), and...
  • stdout redirected to a new file descriptor (1>&3).
  • Pipe that into tee, which therefore receives only the errors, duplicates them into a file, and...
  • sends its own stdout to the new file descriptor (1>&3)...
  • with & to ensure no blocking.
  • Then group the previous commands using curly brackets.
  • Redirect the group's new file descriptor back to stdout (3>&1)...
  • and pipe it into a second tee, which receives the combined errors and normal output, writes them to a file, and displays them on screen...
  • again with & to ensure no blocking.
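
The whole pattern can be exercised with a trivial command standing in for the script relaunch; the trailing &s are dropped here so the files can be inspected immediately afterwards (err.log and out.log are placeholder names):

```shell
# Sketch of the fd-3 pattern above with a stand-in command in place of
# "$0" "${mainArgs[@]}"; no backgrounding, so it runs synchronously.
{ { echo "normal"; echo "oops" >&2; } 2>&1 1>&3 \
    | tee -a err.log 1>&3; } 3>&1 | tee -a out.log
```

Afterwards err.log contains only "oops", while out.log (and the screen) carry both lines.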

Here is the full code of my activateLogs function, for those interested. I also included its dependencies, even though they could be inlined into activateLogs.

m=0
declare -a mainArgs
if [ ! "$#" = "0" ]; then
    for arg in "$@"; do
        mainArgs[$m]=$arg
        m=$(($m + 1))
    done
fi

function containsElement()
# $1 string to find
# $2 array to search in
# return 0 if there is a match, otherwise 1
{
    local e match="$1"
    shift
    for e; do [[ "$e" == "$match" ]] && return 0; done
    return 1
}

function hasMainArg()
# $1 string to find
# return 0 if there is a match, otherwise 1
{
    local match="$1"
    containsElement "$1" "${mainArgs[@]}"
    return $?
}

function activateLogs()
# $1 = logOutput: What is the output for logs: SCREEN, DISK, BOTH. Default is DISK. Optional parameter.
{
    local logOutput=$1
    if [ "$logOutput" != "SCREEN" ] && [ "$logOutput" != "BOTH" ]; then
        logOutput="DISK"
    fi

    if [ "$logOutput" = "SCREEN" ]; then
        echo "Logs will only be output to screen"
        return
    fi

    hasMainArg "--force-log"
    local forceLog=$?

    local isFileDescriptor3Exist=$(command 2>/dev/null >&3 && echo "Y")

    if [ "$isFileDescriptor3Exist" = "Y" ]; then
        echo "Logs are configured"
    elif [ "$forceLog" = "1" ] && ([ ! -t 1 ] || [ ! -t 2 ]); then
        # Use external file descriptors if they are set, unless "--force-log" was given
        echo "Logs are configured externally"
    else
        echo "Relaunching with log files"
        local logPath="logs"
        if [ ! -d $logPath ]; then mkdir $logPath; fi

        local logFileName=$(basename "$0")"."$(date +%Y-%m-%d.%k-%M-%S)

        exec 4<> "$logPath/$logFileName.log" # File descriptor created only to get the underlying file in any output option
        if [ "$logOutput" = "DISK" ]; then
            # FROM: https://stackoverflow.com/a/45426547/214898
            exec 3<> "$logPath/$logFileName.log"
            "$0" "${mainArgs[@]}" 2>&1 1>&3 | tee -a "$logPath/$logFileName.err" 1>&3 &
        else
            # FROM: https://stackoverflow.com/a/70790574/214898
            { "$0" "${mainArgs[@]}" 2>&1 1>&3 | tee -a "$logPath/$logFileName.err" 1>&3 & } 3>&1 | tee -a "$logPath/$logFileName.log" &
        fi

        exit
    fi
}

#activateLogs "DISK"
#activateLogs "SCREEN"
activateLogs "BOTH"

echo "FIRST"
echo "ERROR" >&2
echo "LAST"
echo "LAST2"

Redirect both stdout and stderr to a single file and keep message order

The problem is that output through stdout is buffered while stderr is unbuffered.

You either have to make stdout unbuffered like stderr, or make stderr buffered like stdout. You set buffering and mode by using setvbuf.

You could also call fflush on stdout after each output to it.
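
From the shell side, without modifying the program, GNU coreutils' stdbuf can impose unbuffered output on many dynamically linked programs (it cannot help programs that set their own buffering internally, which is where setvbuf/fflush come in):

```shell
# Force unbuffered stdout and stderr so interleaved messages keep
# their relative order in the combined file (requires GNU coreutils).
stdbuf -o0 -e0 sh -c 'echo out; echo err >&2' > combined.log 2>&1
```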

Msys: how to keep STDERR on the screen and at the same time copy both STDOUT and STDERR to files

There might be no direct solution in Msys, but the >(tee ...) solution works fine on Linux, macOS, and probably Cygwin.

The workaround is to grep for just the errors and warnings we want to keep on the screen.

I have successfully used the following command for a makefile to compile C code:

make 2>&1 | tee make.log | grep -E "(([Ee]rror|warning|make):|In function|undefined)"
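
As a quick illustration of what that pattern keeps (the sample lines are made up):

```shell
# Only lines matching the error/warning pattern survive the grep.
printf '%s\n' \
    'main.c: In function main' \
    'note: something harmless' \
    'main.c:7: warning: unused variable x' \
  | grep -E "(([Ee]rror|warning|make):|In function|undefined)"
# prints the "In function" and "warning:" lines; the note is dropped
```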

Retrieve underlying file of tee command

I finally opted for the creation of a "fake" file descriptor #4 that does nothing except point to the current log file.

Redirecting the output of my bash to a log file

Basic Techniques

There are multiple ways to achieve it:

exec >outputfile.txt
command1
command2
command3
command4

This changes the standard output of the entire script to the log file.

My generally preferred way to do it is:

{
    command1
    command2
    command3
    command4
} > outputfile.txt

This does I/O redirection for all the commands within the scope of the braces. Be careful, you have to treat both { and } as if they were commands; they cannot appear just anywhere. This does not create a sub-shell — which is the main reason I favour it.

You can replace the braces with parentheses:

(
    command1
    command2
    command3
    command4
) > outputfile.txt

You can be more cavalier about the placement of the parentheses than the braces, so:

(command1
command2
command3
command4) > outputfile.txt

would also work (but do that with the braces and the shell will look for a command named {command1 and fail to find it, unless you happen to have an executable file of that name lying around). This creates a sub-shell. Any variable assignments done within the parentheses will not be seen/accessible outside the parentheses. This can be a show-stopper sometimes (but not always). The incremental cost of a sub-shell is pretty much negligible; it exists, but you're likely to be hard-pressed to measure it.
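
The variable-scope difference is easy to demonstrate:

```shell
x=0
{ x=1; } > /dev/null    # braces: runs in the current shell
echo "$x"               # prints 1
( x=2 ) > /dev/null     # parentheses: runs in a sub-shell
echo "$x"               # still prints 1; the sub-shell's x=2 is lost
```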

There's also the long-hand way:

command1 >>outputfile.txt
command2 >>outputfile.txt
command3 >>outputfile.txt
command4 >>outputfile.txt

If you wish to demonstrate that you're a neophyte shell programmer, by all means use this technique. If you wish to be considered as a more advanced shell programmer, do not.

Note that all the commands above redirect just standard output to the named file, leaving standard error going to the original destination (usually the terminal). If you want to get standard error to go to the same file, simply add 2>&1 (meaning, send file descriptor 2, standard error, to the same place as file descriptor 1, standard output) after the redirection for standard output.
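
Because redirections are processed left to right, the placement of 2>&1 matters:

```shell
ls /nonexistent > out.txt 2>&1 || true   # both streams end up in out.txt
ls /nonexistent 2>&1 > out2.txt || true  # stderr goes to the terminal:
                                         # 2>&1 was resolved while stdout
                                         # still pointed at the terminal
```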


Applying the Techniques

Addressing questions raised in the comments.

By using the 2>&1 >> $_logfile (as per my answer below) I got what I need, but now I do have also my echo ... in the output file. Is there a way to print them on screen as well as the file at the same time?

Ah, so you don't want everything to go to the file...that complicates things a bit. There are ways, of course; not necessarily straight-forward. I'd probably use exec 3>&1; to set file descriptor 3 going to the same place as 1 (standard output — or use 3>&2 if I wanted the echoes to standard error) before the main redirection. Then I'd create a function echoecho() { echo "$*"; echo "$*" >&3; } and I'd use echoecho Whatever to do the echoing. When you're done with file descriptor 3 (if you're not about to exit, when the system will close it) you can close it with exec 3>&-.

When you refer to exec that's supposed to be the command I'm executing in the individual script file I created and that I will execute in between the cycle right? (just have a look at my answer below to see how I have evolved the script). For the rest of the suggestion I completely lost you.

No; I'm referring to the Bash (shell) built-in command exec. It can be used to do I/O redirection permanently (for the rest of the script), or to replace the current script with a new program, as in exec ls -l — which is probably a bad example.

I guess now I even more confused than when I started :) Would that be possible create a small example … so I can understand it better?

The disadvantage of comments is that they're hard to format and limited in size. Those limitations are also benefits, but there comes a time when the answer has to be extended. Said time has arrived.

For the sake of discussion, I'm going to restrict myself to 2 commands instead of 4 as in the question (but this doesn't lose any generality). Those commands will be cmd1 and cmd2, and in fact those are two different names for the same script:

#!/bin/bash
for i in {01..10}
do
    echo "$0: stdout $i - $*"
    echo "$0: stderr $i - error message" >&2
done

As you can see, this script writes messages to both standard output and standard error. For example:

$ ./cmd1 trying to work
./cmd1: stdout 1 - trying to work
./cmd1: stderr 1 - error message
./cmd1: stdout 2 - trying to work
./cmd1: stderr 2 - error message

./cmd1: stdout 9 - trying to work
./cmd1: stderr 9 - error message
./cmd1: stdout 10 - trying to work
./cmd1: stderr 10 - error message
$

Now, from the answer posted by Andrea Moro we find:

#!/bin/bash

_logfile="output.txt"

# Delete the output file if it exists
if [ -f $_logfile ];
then
    rm $_logfile
fi

for file in ./shell/*
do
    $file 2>&1 >> $_logfile
done

I don't like the variable name starting with _; there's no need for it that I can see. This redirects errors to where standard output is (currently) going, and then redirects standard output to the log file. So, if the sub-directory shell contains cmd1 and cmd2, the output is:

$ bash ex1.sh
./shell/cmd1: stderr 1 - error message
./shell/cmd1: stderr 2 - error message

./shell/cmd1: stderr 9 - error message
./shell/cmd1: stderr 10 - error message
./shell/cmd2: stderr 1 - error message
./shell/cmd2: stderr 2 - error message

./shell/cmd2: stderr 9 - error message
./shell/cmd2: stderr 10 - error message
$

To get both standard output and standard error to the file, you have to use one of:

2>>$_logfile >>$_logfile
>>$_logfile 2>&1

I/O redirections are generally processed from left to right, except that piping controls where standard output (and standard error if you use |&) goes to before the I/O redirections are handled.

Adapting this script to generate information to standard output as well as logging to the log file, there are a variety of ways of working. I'm assuming the shebang line is #!/bin/bash from here on.

logfile="output.txt"
rm -f $logfile

for file in ./cmd1 ./cmd2
do
    $file trying to work >> $logfile 2>&1
done

This removes the log file if it exists (but less verbosely than before). Everything on standard output and standard error goes to the log file. We could also write:

logfile="output.txt"
{
    for file in ./cmd1 ./cmd2
    do
        $file trying to work
    done
} >$logfile 2>&1

Or the code could use parentheses in place of the braces with only minor differences in functionality that wouldn't affect this script materially. Or, indeed, in this case, we could use:

logfile="output.txt"
for file in ./cmd1 ./cmd2
do
    $file trying to work
done >$logfile 2>&1

And it is not clear that the variable is necessary, but we'll leave it in place. Note that both these use 'clobbering' I/O redirection because they create the log file just once, which in turn means there was no need to remove it (though there might be reasons to do so — related to other users running the command beforehand and leaving a non-writable file behind, but then you should probably have a date-stamped log file anyway so that isn't a problem after all).
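
A date-stamped log name, as alluded to above, only takes a command substitution (the exact format is a matter of taste):

```shell
# Produces names like output.2014-01-07.14-57-13.txt
logfile="output.$(date +'%Y-%m-%d.%H-%M-%S').txt"
```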

Clearly, if we want to echo something to the original standard output as well as to the log file, we have to do something different as both standard error and standard output are going to the log file.

One option is:

logfile="output.txt"
rm -f $logfile

for file in ./cmd1 ./cmd2
do
    echo $file $(date +'%Y-%m-%d %H:%M:%S')
    $file trying to work >> $logfile 2>&1
done

Another option is:

exec 3>&1

logfile="output.txt"
for file in ./cmd1 ./cmd2
do
    echo $file $(date +'%Y-%m-%d %H:%M:%S') >&3
    $file trying to work
done >$logfile 2>&1

exec 3>&-

Now file descriptor 3 goes to the same place as the original standard output. Inside the loop, both standard output and standard error go to the log file, but the echo … >&3 sends the standard output of echo to file descriptor 3.

If you want the same echoed output to go to both the redirected standard output and the original standard output, then you can use:

exec 3>&1
echoecho()
{
    echo "$*"
    echo "$*" >&3
}

logfile="output.txt"
for file in ./cmd1 ./cmd2
do
    echoecho $file $(date +'%Y-%m-%d %H:%M:%S')
    $file trying to work
done >$logfile 2>&1

exec 3>&-

The output from this was:

$ bash ex3.sh
./cmd1 2014-01-07 14:57:13
./cmd2 2014-01-07 14:57:13
$ cat output.txt
./cmd1 2014-01-07 14:57:13
./cmd1: stdout 1 - trying to work
./cmd1: stderr 1 - error message
./cmd1: stdout 2 - trying to work
./cmd1: stderr 2 - error message

./cmd1: stdout 9 - trying to work
./cmd1: stderr 9 - error message
./cmd1: stdout 10 - trying to work
./cmd1: stderr 10 - error message
./cmd2 2014-01-07 14:57:13
./cmd2: stdout 1 - trying to work
./cmd2: stderr 1 - error message
./cmd2: stdout 2 - trying to work
./cmd2: stderr 2 - error message

./cmd2: stdout 9 - trying to work
./cmd2: stderr 9 - error message
./cmd2: stdout 10 - trying to work
./cmd2: stderr 10 - error message
$

This is roughly what I was saying in my comments, written out in full.

How can I display intermediate pipeline results for NUL-separated data?

You can just change the tee target to a process substitution, then do the exact same thing in there.

   find . -print0 | grep -z pattern | tee >(tr '\0' '\n' > /dev/tty) | xargs -0 command

The only issue with using tee this way is that if the xargs command also prints to the screen, the output can get jumbled, since both the pipe and the process substitution are asynchronous.

Windows batch redirect output to console

This is more of a workaround.

The idea is to use a pipe program in order to redirect the output of the Windows program to the console:

SomeWinProg SomeArgs 2>&1 | SomePipeProg PipeProgArgs

As a pipe program you may use one that passes through everything, like:

SomeWinProg SomeArgs 2>&1 | findstr /r "/c:.*"

If this works or not depends on the Windows program.

Concerning the timing:

There may be some trouble when you have a long-running Windows program that produces sporadic output, or when the output is always written to the same line (no line feeds, only carriage returns) and you want to see it in real time.

In these cases, Windows programs like findstr or find are a little weak.

On GitHub there is an implementation of mtee that fits better:

SomeWinProg SomeArgs 2>&1 | mtee nul

Writing Python output to either screen or filename

Maybe something like this? It uses the symbolic name 'stdout' or 'stderr' in the constructor, or a real filename. The usage of if is limited to the constructor. By the way, I think you're trying to prematurely optimize (which is the root of all evil): you're trying to save time on if's while in real life, the program will spend much more time in file I/O; making the potential waste on your if's negligible.

import sys

class WriteLog:

    def __init__(self, output):
        self.output = output
        if output == 'stdout':
            self.logfile = sys.stdout
        elif output == 'stderr':
            self.logfile = sys.stderr
        else:
            self.logfile = open(output, 'a')

    def write(self, text):
        self.logfile.write(text)

    def close(self):
        if self.output != 'stdout' and self.output != 'stderr':
            self.logfile.close()

    def __del__(self):
        self.close()

if __name__ == '__main__':
    a = WriteLog('stdout')
    a.write('This goes to stdout\n')
    b = WriteLog('stderr')
    b.write('This goes to stderr\n')
    c = WriteLog('/tmp/logfile')
    c.write('This goes to /tmp/logfile\n')

