Pipe Only Stderr Through a Filter

Pipe only STDERR through a filter

Here's an example, modeled after "how to swap file descriptors in bash". The output of a.out is the following; the 'STDXXX: ' prefix only marks which stream each line is written to and is not part of the actual output.

STDERR: stderr output
STDOUT: more regular

./a.out 3>&1 1>&2 2>&3 3>&- | sed 's/e/E/g'
more regular
stdErr output

Quoting from the above link:

  1. First save stdout as &3 (&1 is duped into 3)
  2. Next send stdout to stderr (&2 is duped into 1)
  3. Send stderr to &3 (stdout) (&3 is duped into 2)
  4. close &3 (&- is duped into 3)
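
For readers who do not have the a.out binary handy, here is a self-contained sketch of the same swap; the cmd function below is just a stand-in that writes one line to each stream:

cmd() { echo "regular output"; echo "error output" >&2; }
cmd 3>&1 1>&2 2>&3 3>&- | sed 's/e/E/g'

Only "error output" passes through sed (becoming "Error output"); "regular output" reaches the terminal on stderr, untouched.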

How can I pipe stderr, and not stdout?

First redirect stderr to stdout (which at this point is the pipe); then redirect stdout to /dev/null (without changing where stderr is going):

command 2>&1 >/dev/null | grep 'something'

For the details of I/O redirection in all its variety, see the chapter on Redirections in the Bash reference manual.

Note that the sequence of I/O redirections is interpreted left-to-right, but pipes are set up before the I/O redirections are interpreted. File descriptors such as 1 and 2 are references to open file descriptions. The operation 2>&1 makes file descriptor 2 aka stderr refer to the same open file description as file descriptor 1 aka stdout is currently referring to (see dup2() and open()). The operation >/dev/null then changes file descriptor 1 so that it refers to an open file description for /dev/null, but that doesn't change the fact that file descriptor 2 refers to the open file description which file descriptor 1 was originally pointing to — namely, the pipe.
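
To see why the order matters, compare with the reversed sequence (command is the same placeholder as above). Because >/dev/null is processed first, the later 2>&1 copies a descriptor that already points at /dev/null, so both streams are discarded and grep reads nothing:

command >/dev/null 2>&1 | grep 'something'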

How to send stderr and stdout through separate grep filters?

Do:

{ command 2>&1 1>&3 3>&- | grep "b"; } 3>&1 1>&2 | grep "a"

Note: the above is adapted from antak's answer to "pipe stdout and stderr to two different processes in shell script?", with greps replacing its more abstract names. For how it works, see that answer.
(Nevertheless, lazy coders who only need grep might prefer a more grep-specific answer.)
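
A quick way to convince yourself the command works is to substitute a toy command that writes one line to each stream (a sketch; the strings out and err are arbitrary):

{ { echo out; echo err >&2; } 2>&1 1>&3 3>&- | grep "err"; } 3>&1 1>&2 | grep "out"

Both lines appear, but "err" can only have passed through grep "err" (it reaches the terminal on stderr), and "out" only through grep "out".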

A useful utility for distinguishing stdout from stderr is annotate-output (from the "devscripts" package), which sends both stderr and stdout to stdout along with helpful little prefixes:

annotate-output wc -c /bin/bash /bin/nosuchshell

Output:

00:29:06 I: Started wc -c /bin/bash /bin/nosuchshell
00:29:06 E: wc: /bin/nosuchshell: No such file or directory
00:29:06 O: 1099016 /bin/bash
00:29:06 O: 1099016 total
00:29:06 I: Finished with exitcode 1

That output could be parsed separately using sed, awk, or even a tee and a few greps.
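
For example, a sketch that keeps only the stderr lines and strips the timestamp prefix (assuming the "HH:MM:SS E: " format shown above):

annotate-output wc -c /bin/bash /bin/nosuchshell | sed -n 's/^[0-9:]* E: //p'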

filter stdout/stderr with sed

Using sed, you can match a pattern and delete the matching lines with the d command:

DISPLAY=":1" $PYBIN myScript.pyc 2>&1 \
| sed '/missing on display/d' \
| > myLogfile.log

I'm splitting the command across multiple lines with \.
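
If you also want to watch the filtered output on the terminal while logging it, a tee-based variant (same placeholders as above) is a possible alternative:

DISPLAY=":1" $PYBIN myScript.pyc 2>&1 \
| sed '/missing on display/d' \
| tee myLogfile.log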

Filtering stderr of subprocess captures empty bytes

An option is to replace those while loops with:

for line in iter(process.stderr.readline, b''):
    print("STDERR", line)

This would at least deal with the fact that process.stderr will never become falsy even when there is nothing to read.

Piping both stdout and stderr in bash?

(Note that &>>file appends to a file while &> would redirect and overwrite a previously existing file.)

To combine stdout and stderr you would redirect the latter to the former using 2>&1; this points stderr (file descriptor 2) at whatever stdout (file descriptor 1) currently refers to. First, for contrast, a pipeline without that redirection (the 1>&2 inside the braces simply writes the word "stderr" to the error stream):

$ { echo "stdout"; echo "stderr" 1>&2; } | grep -v std
stderr
$

stdout goes to stdout, stderr goes to stderr. grep only sees stdout, hence stderr prints to the terminal.

On the other hand:

$ { echo "stdout"; echo "stderr" 1>&2; } 2>&1 | grep -v std
$

After writing to both stdout and stderr, 2>&1 redirects stderr back to stdout and grep sees both strings on stdin, thus filters out both.

You can read more about redirection in the Redirections chapter of the Bash reference manual.

Regarding your example (POSIX):

cmd-doesnt-respect-difference-between-stdout-and-stderr 2>&1 | grep -i SomeError

or, using bash 4 or newer:

cmd-doesnt-respect-difference-between-stdout-and-stderr |& grep -i SomeError

pipe stdout and stderr to two different processes in shell script?

Use another file descriptor

{ command1 2>&3 | command2; } 3>&1 1>&2 | command3

You can use up to 7 other file descriptors: from 3 to 9.

If you want more explanation, please ask, I can explain ;-)

Test

{ { echo a; echo >&2 b; } 2>&3 | sed >&2 's/$/1/'; } 3>&1 1>&2 | sed 's/$/2/'

output:

b2
a1

Example

Produce two log files:

1. stderr only

2. stderr and stdout

{ { { command 2>&1 1>&3; } | tee err-only.log; } 3>&1; } > err-and-stdout.log

If command is echo out; echo err >&2, then we can test it like this (the inner command 2>&1 1>&3 has been written out by hand as echo out>&3; echo err>&1):

$ { { { echo out>&3;echo err>&1;}| tee err-only.log;} 3>&1;} > err-and-stdout.log
$ head err-only.log err-and-stdout.log
==> err-only.log <==
err

==> err-and-stdout.log <==
out
err
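
The symmetric variant of the same technique produces a stdout-only log plus a combined log instead (a sketch; here stderr is parked on descriptor 3 while stdout flows through the pipe):

{ { { command 2>&3; } | tee out-only.log; } 3>&1; } > out-and-err.log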

handle stderr with another script

The simplest thing to do is create a named pipe to act as the buffer between the two.

mkfifo errors           # "errors" is an arbitrary file name
compute.sh 2> errors & # Run in the background
error_handler.sh < errors

As one line:

mkfifo errors; compute.sh 2> errors & error_handler.sh < errors

Now the two processes run concurrently, and error_handler.sh can read from errors as compute.sh writes to it. The buffer is of bounded size, so compute.sh will automatically block if it gets full. Once error_handler.sh consumes some input, compute.sh will automatically resume. As long as errors aren't produced too quickly (i.e., faster than error_handler.sh can process them), compute.sh will run as if the buffer were unbounded.

If the buffer is ever emptied, error_handler.sh will block until either more input is available, or until compute.sh closes its end of the pipe by exiting.
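
A self-contained sketch of the whole pattern, with a brace group standing in for compute.sh and sed standing in for error_handler.sh:

mkfifo errors
{ echo "result"; echo "something went wrong" >&2; } 2> errors &   # stand-in for compute.sh
sed 's/^/handled: /' < errors    # stand-in for error_handler.sh; reads until the writer exits
rm errors                        # the fifo is an ordinary directory entry; remove it when done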


The regular pipeline syntax foo | bar creates an anonymous (unnamed) pipe, and is effectively a shortcut for

mkfifo tmp
foo > tmp &
bar < tmp

but limits you to connecting standard output of one command to standard input of another. Using other file descriptors requires contorted redirections. Using named pipes is slightly longer to type, but can be much clearer to read.
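
As an aside, bash process substitution can hand stderr to another process without creating the fifo yourself; bash sets up the plumbing behind the scenes. This is bash-specific, not POSIX sh:

compute.sh 2> >(error_handler.sh)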

Pipe Find command stderr to a command

Pipe stderr and stdout through different filters simultaneously, using the same descriptor-swap idea shown earlier:

(find /boot | sed 's/^/STDOUT:/' ) 3>&1 1>&2 2>&3 | sed 's/^/STDERR:/'

Sample output:

STDOUT:/boot/grub/usb_keyboard.mod
STDERR:find: `/boot/lost+found': Permission denied

The Bash redirection sequence 3>&1 1>&2 2>&3 swaps stderr and stdout.

I would modify your sample script to look like this:

#!/bin/bash
ErrorFile="error.log"
(find ./subdirectory -type f 3>&1 1>&2 2>&3 | sed "s#^#${PWD}: #" >> "$ErrorFile") 3>&1 1>&2 2>&3 | while read -r line; do
    field1=$(echo "$line" | cut -d / -f2)
    ...
done

Notice that I swapped stdout & stderr twice.

A small additional comment: look at the -printf option in the find manual page. It might be useful to you.
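
For instance, assuming GNU find, -printf can emit the directory and filename as separate fields directly, which might replace the cut in the loop above (a sketch, not part of the original script):

find ./subdirectory -type f -printf '%h\t%f\n'    # %h = leading directories, %f = basename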


