Redirect All Output to File in Bash

Redirect all output to file using Bash on Linux?

If the server is started on the same terminal, then it's the server's stderr that is presumably being written to the terminal and which you are not capturing.

The best way to capture everything would be to run:

script output.txt

before starting up either the server or the client. This launches a new shell with all terminal output copied to output.txt as well as shown on the terminal. Then start the server from within that new shell, and then the client. Everything that you see on the screen (both your input and the output of everything writing to the terminal from within that shell) will be written to the file.

When you are done, type "exit" to exit the shell run by the script command.
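On Linux, the util-linux version of script also supports a non-interactive form: script -c runs a single command under the capture, and -q suppresses the start/done banner. A minimal sketch:

```shell
# Capture the output of one command, no interactive shell needed.
# Note: script writes through a pseudo-terminal, so captured lines end in CRLF.
script -q -c 'echo hello from inside script' session.txt
```

The interactive form shown above is still the right tool when you need to capture a whole session spanning several commands.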

How to redirect output of an entire shell script within the script itself?

Addressing the question as updated.

#...part of script without redirection...

{
#...part of script with redirection...
} > file1 2>file2 # ...and others as appropriate...

#...residue of script without redirection...

The braces '{ ... }' provide a unit of I/O redirection. The braces must appear where a command could appear - simplistically, at the start of a line or after a semi-colon. (Yes, that can be made more precise; if you want to quibble, let me know.)

You are right that you can preserve the original stdout and stderr with the redirections you showed, but it is usually simpler for the people who have to maintain the script later to understand what's going on if you scope the redirected code as shown above.
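For reference, the save-and-restore variant that the answer alludes to might look roughly like this (file1 and file2 as above; fd numbers 3 and 4 are an arbitrary choice):

```shell
exec 3>&1 4>&2             # save the original stdout on fd 3, stderr on fd 4
exec >file1 2>file2        # redirect the middle part of the script

echo "this goes to file1"
echo "this goes to file2" >&2

exec 1>&3 2>&4             # restore the original streams
exec 3>&- 4>&-             # close the saved copies
echo "back on the terminal"
```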

The relevant sections of the Bash manual are Grouping Commands and I/O Redirection. The relevant sections of the POSIX shell specification are Compound Commands and I/O Redirection. Bash has some extra notations, but is otherwise similar to the POSIX shell specification.

Redirect all output to file in Bash

That part is written to stderr; use 2> to redirect it. For example:

foo > stdout.txt 2> stderr.txt

or, if you want both streams in the same file:

foo > allout.txt 2>&1

Note: this works in (ba)sh, check your shell for proper syntax
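To check the separation yourself, here is a hypothetical demo function that writes one line to each stream:

```shell
demo() { echo "normal"; echo "oops" >&2; }   # one line to stdout, one to stderr

demo > stdout.txt 2> stderr.txt   # "normal" lands in stdout.txt, "oops" in stderr.txt
demo > allout.txt 2>&1            # both lines land in allout.txt
```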

How to redirect and append both standard output and standard error to a file with Bash

cmd >>file.txt 2>&1

Bash executes the redirects from left to right as follows:

  1. >>file.txt: Open file.txt in append mode and redirect stdout there.
  2. 2>&1: Redirect stderr to "where stdout is currently going". In this case, that is a file opened in append mode. In other words, the &1 reuses the file descriptor which stdout currently uses.
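Because the redirections are processed left to right, the order matters: swapping the two copies stderr onto the terminal's stdout before stdout is pointed at the file, so stderr never reaches the file. A sketch with a hypothetical both() function writing to both streams:

```shell
both() { echo out; echo err >&2; }   # hypothetical command using both streams

both >>right.txt 2>&1   # correct: "out" and "err" both end up in right.txt
both 2>&1 >>wrong.txt   # wrong order: "err" still goes to the terminal
```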

redirect all output in a bash script when using set -x

This is what I found with a quick search, and I remember using it myself some time ago...

Use exec to redirect both standard output and standard error of all commands in a script:

exec >"$logfile" 2>&1

For more redirection magic check out Advanced Bash Scripting Guide - I/O Redirection.

If you also want to see the output and debug on the terminal in addition to in the log file, see redirect COPY of stdout to log file from within bash script itself.

If you want to handle the destination of the set -x trace output independently of normal STDOUT and STDERR, see bash storing the output of set -x to log file.
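A common bash-specific sketch combining exec with tee via process substitution, assuming you want the output on the terminal and in an append-mode log at the same time:

```shell
#!/usr/bin/env bash
logfile=run.log

# Route stdout and stderr through tee: lines are shown on the terminal
# and appended to the log at the same time.
exec > >(tee -a "$logfile") 2>&1

echo "shown on the terminal and recorded in the log"
sleep 0.2   # give the background tee a moment to flush before exiting
```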

Shell, redirect all output to a file but still print echo

You can take the approach of duplicating the stdout file descriptor and using a custom echo function that writes to the duplicated descriptor as well as to the log.


# open fd=3 redirecting to 1 (stdout)
exec 3>&1

# function echo to show echo output on terminal
echo() {
    # write to the current stdout (the log file, after the exec below)
    # and to fd=3 (the saved terminal)
    command echo "$@"
    command echo "$@" >&3
}

# redirect stdout to a log file
exec >>logfile

printf "%s\n" "going to file"
echo "on stdout"

# close fd=3
exec 3>&-

Bash redirect all output to named pipes

If you want to be able to read the content after the file descriptor has been closed, you need to just use files. The thing with pipes is that the reading command needs to be running before the command that writes can proceed.

In such setup:

cmd1 | cmd2 | cmd3

all three commands are started together, and each writer blocks until its reader has the pipe open. So if you want to set this up using named pipes, you need to open each fifo for reading in parallel and then call printer:

printer() {
    echo "OUT" >&1
    echo "ERR" >&2
    echo "WRN" >&3
}

# Usage: mux
mux() {
    cat "${pipe1}" &
    cat "${pipe2}" &
    cat "${pipe3}"
}

mux &
printer 1>${pipe1} 2>"${pipe2}" 3>"${pipe3}"

The shell will block on this snippet:

(exec >"${pipe1}" 2>"${pipe2}"; echo "Test"; echo "Test2" >&2) &
cat < "${pipe1}"
cat < "${pipe2}"

It blocks on cat < "${pipe1}", because you need to read from both pipes for the exec to continue:

(exec >"${pipe1}" 2>"${pipe2}"; echo "Test"; echo "Test2" >&2) &
cat < "${pipe1}" &
cat < "${pipe2}"

If you want buffered output from a command, i.e. to read the output after the command has written something or exited, just use files for that - they are actually called logs.

As a workaround, you can use bash pipe internal buffering to buffer your messages:

printer() {
    echo "OUT" >&3
    echo "ERR" >&4
    echo "WRN" >&5
}

# Usage: mux
mux() {
    timeout 1 cat "${pipe1}"
    timeout 1 cat "${pipe2}"
    timeout 1 cat "${pipe3}"
}

printer 3> >(cat >$pipe1) 4> >(cat >$pipe2) 5> >(cat >$pipe3)

What happens here is that the pipes are always open for writing, even after the printer function exits, and they remain open for as long as the process substitutions are running. You can close them manually with exec 5>&-, which writes EOF to the pipe and lets cat "${pipe3}" return normally. cat "${pipe1}" would never exit if nothing closed the write side of the descriptor, which is why the timeout commands are used: they let us drain the pipes without blocking on them.
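The manual close described above can be sketched end to end with a single fifo (the names are illustrative):

```shell
mkfifo mypipe                 # create the named pipe
cat mypipe > drained.txt &    # reader in the background
exec 5>mypipe                 # open fd 5 for writing (unblocks once the reader opens)
echo "hello" >&5
exec 5>&-                     # close the write end: the reader sees EOF and exits
wait                          # collect the background cat
```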
