How to Redirect Output to a File and Stdout

How to redirect output to a file and stdout

The command you want is named tee:

foo | tee output.file

For example, if you only care about stdout:

ls -a | tee output.file

If you want to include stderr, do:

program [arguments...] 2>&1 | tee outfile

2>&1 redirects channel 2 (stderr/standard error) into channel 1 (stdout/standard output), so that both streams are written to stdout. tee then duplicates that combined stream to the given output file.
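
To see both streams flowing through tee, here is a minimal, self-contained sketch (the compound command is a stand-in for a real program, and /tmp/both.log is just an example path):

```shell
# Write one line to stdout and one to stderr, merge them with 2>&1,
# and let tee copy the combined stream to the terminal and a file.
{ echo "to stdout"; echo "to stderr" >&2; } 2>&1 | tee /tmp/both.log
# Both lines appear on the terminal, and /tmp/both.log contains both.
```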

Furthermore, if you want to append to the log file, use tee -a as:

program [arguments...] 2>&1 | tee -a outfile

How to redirect and append both standard output and standard error to a file with Bash

cmd >>file.txt 2>&1

Bash executes the redirects from left to right as follows:

  1. >>file.txt: Open file.txt in append mode and redirect stdout there.
  2. 2>&1: Redirect stderr to "where stdout is currently going". In this case, that is a file opened in append mode. In other words, &1 reuses the file descriptor that stdout is currently using.
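
Because the redirects are processed in that order, swapping them changes the result. A small bash demonstration (cmd here is a hypothetical function that writes one line to each stream):

```shell
#!/usr/bin/env bash
cmd() { echo out; echo err >&2; }   # stand-in: one line per stream
: > /tmp/redir_demo.txt             # start with an empty file

# >>file 2>&1: stdout goes to the file first, then stderr follows it.
cmd >>/tmp/redir_demo.txt 2>&1      # file gains "out" and "err"

# Reversed order: when 2>&1 runs, stdout still points at the terminal,
# so stderr goes to the terminal and only stdout reaches the file.
cmd 2>&1 >>/tmp/redir_demo.txt      # file gains only "out"
```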

Redirect output file to stdout

You can specify /dev/stdout as the filename.

qsub -o /dev/stdout -sync y awesome_script.sh

More info can be found in this answer: https://unix.stackexchange.com/questions/36403


Edit:

A common Unix idiom is that the filename - means stdin for an input
file and stdout for an output file. This has to be explicitly handled by
the program, though; it's not automatic. I can't find anything saying
whether qsub uses this idiom. Try it; it might work.

If you want to pipe the output to another command that does something
with it, you could create a named pipe (man mkfifo) and send the
output to that, with the other process reading from the pipe. Start the
read process first.
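
A minimal sketch of that pattern, with tr standing in for the reading process (with qsub, the writer step would instead be the qsub command pointing its -o at the pipe):

```shell
rm -f /tmp/demo_pipe
mkfifo /tmp/demo_pipe

# Start the reader first; it blocks until a writer opens the pipe.
tr 'a-z' 'A-Z' < /tmp/demo_pipe > /tmp/demo_out &

# Writer side: with qsub this would be the job writing to /tmp/demo_pipe.
echo "hello from the job" > /tmp/demo_pipe

wait                  # wait for the reader to drain the pipe
rm /tmp/demo_pipe
```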

For another idea, see this answer,
which explains how to use bash process substitution. I haven't used this
much, but I think it would be something like:

qsub -o >(other-command-or-pipeline) -sync y awesome_script.sh

If you simply want to see the output in real-time, you could specify
/dev/tty as the file. Or, you could output to a file and watch it with
tail -f.

Hope one of these ideas works for you.

How to redirect output to file and STDOUT and exit on error

This is exactly what the pipefail runtime option is meant for:

# Make a pipeline successful only if **all** components are successful
set -o pipefail
ls /fake/folder | tee foo.txt || exit 1
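
To see exactly what pipefail changes, compare the pipeline's exit status with and without it (false stands in for any failing command):

```shell
false | tee /dev/null
echo $?               # 0: without pipefail the status is tee's (the last command)

set -o pipefail
false | tee /dev/null
echo $?               # 1: with pipefail, the failure of false propagates
```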

If you want to be explicit about precedence, by the way, consider:

set -o pipefail
{ ls /fake/folder | tee foo.txt; } || exit 1 # same thing, but maybe more clear

...or, if you want to avoid making runtime configuration changes, you can use PIPESTATUS to check the exit status of any individual element of the most recent pipeline:

ls /fake/folder | tee foo.txt
(( ${PIPESTATUS[0]} == 0 )) || exit

If you don't want to take any of the approaches above, but are willing to use ksh extensions adopted by bash, putting tee in a process substitution rather than a pipeline prevents it from masking the exit status:

ls /fake/folder > >(tee foo.txt) || exit 1
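
With this form, $? is the status of ls itself, so a quick sanity check (assuming /fake/folder does not exist) looks like:

```shell
ls /fake/folder > >(tee foo.txt)
echo $?               # nonzero: ls failed, and tee can no longer mask it
```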

Command output redirect to file and terminal

Yes, if you redirect the output, it won't appear on the console. Use tee.

ls 2>&1 | tee /tmp/ls.txt

Redirect stdout and stderr to a file and also to console in linux

The tee command can help you out. It reads from standard input and writes to standard output and files.

So the following command will do:

some_command.sh 2>&1 | tee file.txt

Manpage: http://man7.org/linux/man-pages/man1/tee.1.html

Redirect all output to file in Bash

That part is written to stderr; use 2> to redirect it. For example:

foo > stdout.txt 2> stderr.txt

or, if you want both in the same file:

foo > allout.txt 2>&1

Note: this works in (ba)sh; check your shell for the proper syntax.


