Shell: redirect stdout to /dev/null and stderr to stdout
You want
./script 2>&1 1>/dev/null | ./other-script
The order here is important. Let's assume stdin (fd 0), stdout (fd 1) and stderr (fd 2) are all connected to a tty initially, so
0: /dev/tty, 1: /dev/tty, 2: /dev/tty
The first thing that gets set up is the pipe. other-script's stdin gets connected to the pipe, and script's stdout gets connected to the pipe, so script's file descriptors so far look like:
0: /dev/tty, 1: pipe, 2: /dev/tty
Next, the redirections occur, from left to right. 2>&1
makes fd 2 go wherever fd 1 is currently going, which is the pipe.
0: /dev/tty, 1: pipe, 2: pipe
Lastly, 1>/dev/null redirects fd 1 to /dev/null
0: /dev/tty, 1: /dev/null, 2: pipe
End result, script's stdout is silenced, and its stderr is sent through the pipe, which ends up in other-script's stdin.
Also see http://bash-hackers.org/wiki/doku.php/howto/redirection_tutorial
Also note that 1>/dev/null is synonymous with, but more explicit than, >/dev/null
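Putting the whole pipeline together, here is a minimal sketch, using a hypothetical produce function to stand in for ./script and sed to stand in for ./other-script:

```shell
#!/bin/bash
# Stand-in for ./script: writes one line to each stream.
produce() {
  echo "to stdout"
  echo "to stderr" >&2
}

# stdout is already connected to the pipe; 2>&1 sends stderr there
# too, then 1>/dev/null pulls stdout alone away to /dev/null.
produce 2>&1 1>/dev/null | sed 's/^/got: /'
# prints: got: to stderr
```

Only the stderr line survives: it reaches sed through the pipe, while the stdout line is discarded.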
Redirect stderr to stdout and stdout to /dev/null when running git command
The captured output still ends up containing git output after a successful push. Why!?
Because git writes these messages to stderr as well. stderr is not only for errors; it also carries progress and status messages that aren't considered part of the end product.
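A sketch of the same behaviour, with a hypothetical fake_push function standing in for the git command:

```shell
#!/bin/bash
# Hypothetical stand-in for a git command: the "end product" goes to
# stdout, the status message to stderr -- even on success.
fake_push() {
  echo "machine-readable result"
  echo "Everything up-to-date" >&2
}

# 2>&1 points stderr at the current stdout (here, the capture),
# then >/dev/null discards the original stdout.
status=$(fake_push 2>&1 >/dev/null)
echo "$status"
# prints: Everything up-to-date
```

The command substitution captures only what ends up on stdout, which after the redirections is the original stderr stream.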
Redirect stderr to /dev/null
In order to redirect stderr to /dev/null use:
some_cmd 2>/dev/null
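For example, a quick sketch (the path is assumed not to exist on your system):

```shell
#!/bin/bash
# ls complains on stderr about the missing path; 2>/dev/null
# discards the complaint while leaving stdout untouched.
ls /definitely/not/here 2>/dev/null
echo "ls exit status: $?"
# the status is nonzero, but no error text was printed
```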
You don't need xargs here. (And you don't want it, since it performs word splitting on the filenames.)
Use find's exec option:
find . -type f -name "*.txt" -exec grep -li needle {} +
To suppress the error messages, use the -s option of grep. From man grep:
-s, --no-messages
Suppress error messages about nonexistent or unreadable files.
which gives you:
find . -type f -name "*.txt" -exec grep -lis needle {} +
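A quick sketch in a throwaway directory: one readable match plus an unreadable file that would normally make grep complain. (The file names and directory are made up for the example; note that when run as root the chmod has no effect, so the unreadable file is listed too.)

```shell
#!/bin/bash
# Throwaway directory with one readable match and one unreadable file.
dir=$(mktemp -d)
echo needle > "$dir/hit.txt"
echo needle > "$dir/locked.txt"
chmod 000 "$dir/locked.txt"

# -l lists matching files, -i ignores case, -s suppresses the
# "Permission denied" message that locked.txt would otherwise cause.
find "$dir" -type f -name "*.txt" -exec grep -lis needle {} +
# prints the path of hit.txt, with no error text

rm -rf "$dir"
```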
Is it possible to redirect stdout to a non-stderr stream inside a shell script?
For completeness, based on tripleee's comment, what I actually wanted was the following:
#!/bin/bash
echo "Send me to standard out"
echo "Send me to standard err" >&2
if ! (printf '' 1>&3) 2>&-; then
  # File descriptor 3 is not open.
  # Option 1: Redirect fd 3 to stdout.
  exec 3>&1
  # Option 2: Redirect fd 3 to /dev/null.
  # exec 3>/dev/null
fi
# No error here.
echo "Send me to stream 3" >&3
With this modification, a parent process can redirect fd 3 when it wants to; otherwise the stream is folded into stdout.
$ ./Play.sh
Send me to standard out
Send me to standard err
Send me to stream 3
$ ./Play.sh 3> /dev/null
Send me to standard out
Send me to standard err
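The probe at the top of the script can be sketched on its own: the subshell tries to write nothing to fd 3, and 2>&- hides the shell's complaint when fd 3 turns out to be closed.

```shell
#!/bin/bash
# Succeeds only if fd 3 is already open in this shell.
if ! (printf '' 1>&3) 2>&-; then
  echo "fd 3 closed; falling back to stdout"
  exec 3>&1
fi
echo "Send me to stream 3" >&3
```

Run plainly, both lines land on stdout; run with 3>/dev/null, the probe succeeds, no fallback happens, and the fd-3 line is discarded.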
How can I redirect STDERR to STDOUT, but ignore the original STDOUT?
What does not work:
The reason the last command you quoted,
cmd 1>/dev/null 2>&1 | grep pattern
does not work stems from a confusion about the order in which redirections are applied. You expected the redirections to describe the final routing, so that writes to the original standard output (fd 1) would go to /dev/null, and writes to standard error (fd 2) would go to the original standard output.
However, this is not how shell redirection works. Each redirection remaps a file descriptor, in order from left to right, by closing the "source" and duplicating the "destination" into it (see the man pages of dup(2) and close(2)). This means that in your command standard output is first replaced with /dev/null, and then standard error is replaced with standard output, which is already /dev/null.
What works:
Therefore, to obtain the desired effect, you just need to reverse the redirections. Then standard error goes to standard output, and the original standard output goes to /dev/null:
cmd 2>&1 >/dev/null | grep pattern
(Note that the 1 before > is unnecessary: for output redirection, standard output is the default.)
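Both orderings side by side, using a hypothetical helper that writes one line to each stream:

```shell
#!/bin/bash
both() { echo "out line"; echo "err line" >&2; }

# Wrong order: stdout goes to /dev/null first, then stderr is
# duplicated onto it -- grep sees nothing.
both 1>/dev/null 2>&1 | grep line
# prints nothing

# Right order: stderr first follows the pipe that stdout is on,
# then stdout alone is pointed at /dev/null.
both 2>&1 >/dev/null | grep line
# prints: err line
```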
Addendum: Charlie mentioned redirecting to &- to close a file descriptor. This form ([n]>&-) is actually specified by POSIX, and bash along with most other shells supports it, so you can also do it like this:
cmd 2>&1 >&- | grep pattern
This may be better: it can save some time, because when the command tries to write to standard output the call to write may fail immediately, without a context switch into the kernel and the driver handling /dev/null (depending on the implementation: some libc functions may catch this before making a system call, and some systems may have special handling for /dev/null). If there is a lot of output that can be worthwhile, and it's faster to type.
This will mostly work, because most programs do not care if they fail to write to standard output (who really checks the return value of printf?) and will not mind that standard output is closed. But some programs can bail out with a failure code if write fails: usually block processors, programs using a careful library for I/O, or programs logging to standard output. If it doesn't work, remember that this is a likely cause and try /dev/null.
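A sketch of that caveat, relying on the fact that bash's echo builtin notices a failed write:

```shell
#!/bin/bash
# With stdout closed, echo's write fails; bash reports a nonzero
# status (and would print a complaint, which 2>/dev/null hides).
{ echo hello; } >&- 2>/dev/null
echo "write to closed stdout returned: $?"
# the status is nonzero -- a stricter program might abort here
```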