Why exit code 141 with grep -q?

This happens because grep -q exits immediately with a zero status as soon as a match is found. The zfs command is still writing to the pipe, but there is no longer a reader (grep has exited), so the kernel sends it a SIGPIPE signal and it exits with status 141 (128 + 13, where 13 is the signal number of SIGPIPE).

Another common place where you see this behaviour is with head, e.g.:

$ seq 1 10000 | head -1
1

$ echo ${PIPESTATUS[@]}
141 0

In this case, head read the first line and terminated; when seq next tried to write to the pipe, it received SIGPIPE and exited with 141.

See "The Infamous SIGPIPE Signal" from The Linux Programmer's Guide.

Ignoring Bash pipefail for error code 141

The trap command lets you specify a command to run when encountering a signal. To ignore a signal, pass the empty string:

trap '' PIPE
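Before reaching for trap, it is worth seeing how pipefail surfaces the 141 in the first place and how narrowly it can be tolerated. The following is a minimal sketch (the seq | head pipeline is a stand-in for any producer whose reader exits early); it treats only status 141 as success rather than ignoring all errors:

```shell
#!/usr/bin/env bash
# With pipefail, an early-exiting reader turns the whole pipeline's
# status into 141, even though head itself succeeded.
set -o pipefail

seq 1 1000000 | head -1 > /dev/null
echo "pipeline status: $?"   # 141: seq was killed by SIGPIPE

# One narrower fix: treat exactly 141 as success, and nothing else.
seq 1 1000000 | head -1 > /dev/null
status=$?
[ "$status" -eq 141 ] && status=0
echo "adjusted status: $status"
```

Unlike trap '' PIPE, this leaves SIGPIPE handling untouched for everything else in the script.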

Using yes with interactive script results in exit code 141

I was able to suppress the error by appending || true:

yes | my_command || true

This works, but it suppresses any error the pipeline might raise, making my CI tests evergreen (always passing). Not ideal, but it works.
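A narrower alternative (assuming bash) is to inspect PIPESTATUS so that only yes's SIGPIPE death is ignored, while real failures of the command itself still propagate. Here my_command is a hypothetical stand-in for the real interactive command:

```shell
#!/usr/bin/env bash
# "my_command" is a stand-in: it consumes three "y" answers, then stops,
# which is enough to make yes die of SIGPIPE.
my_command() { head -3 > /dev/null; }

yes | my_command
status=${PIPESTATUS[1]}   # my_command's own status; index 0 is yes (141)
echo "my_command exited with $status"
exit "$status"
```

This way a genuine failure of my_command still fails the CI job, instead of being swallowed by || true.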

Error code 141 with tar

As a workaround we are now using cpio for archiving, which works fine for us, although I would still like to know why tar causes this issue; it has been around for a long time and has been a standard tool for years.

Why doesn't pclose(3) wait for the shell command to terminate?

Exit status 141 is a SIGPIPE error. It is well explained in the question "Why exit code 141 with grep -q?"

The issue is that your b.sh script tries to write to the pipe, but nobody is reading this pipe in your C program.

bash pipe - if first executable exits, will all downstream executables exit?

Downstream processes won't necessarily exit. When execN exits, it closes the write end of the pipe, which produces end-of-file on execN+1's standard input. But until execN+1 tries to read from standard input, it won't notice; and even then, it will simply see that it has reached end-of-file, and it can continue doing other things or exit, as it chooses.

Upstream, execN-1 won't notice that execN has exited and closed its read end of the pipe until execN-1 tries to write to its end of the pipe, at which point it will receive a SIGPIPE signal. The default action for that signal is to exit, but execN-1 can install its own handler and decide what to do if and when that situation occurs.
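Both directions can be demonstrated in a couple of lines; this sketch uses echo/cat for the downstream case and a seq | head pipeline for the upstream one:

```shell
#!/usr/bin/env bash
# Downstream: when the writer exits, the reader just sees end-of-file
# and is free to carry on with other work.
{ echo "first"; } | { cat; echo "reader continues after EOF"; }

# Upstream: when the reader exits early, the writer is killed by
# SIGPIPE (status 141) the next time it writes to the pipe.
seq 1 1000000 | head -1 > /dev/null
echo "writer status: ${PIPESTATUS[0]}"   # 141
```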

Incorrect results with bash process substitution and tail?

OK, what seems to happen is this: once the head -1 command finishes, it exits. The next time tee tries to write to the named pipe that the process substitution set up, the write fails with EPIPE and, according to man 2 write, also raises SIGPIPE in the writing process. That causes tee to exit, which forces the tail -1 to exit immediately, and the cat on the left gets a SIGPIPE as well.

We can see this a little better if we add a bit more to the process substitution with head, and make the output both more predictable and also written to stderr without relying on tee:

for i in {1..30}; do echo "$i"; echo "$i" >&2; sleep 1; done | tee >(head -1 > h.txt; echo "Head done") >(tail -1 > t.txt) >/dev/null

which when I run it gave me the output:

1
Head done
2

so it got just one more iteration of the loop before everything exited (though t.txt still only has 1 in it). If we then run

echo "${PIPESTATUS[@]}"

we see

141 141

which this question ties to SIGPIPE in a very similar fashion to what we're seeing here.

The coreutils maintainers have added this as an example to their tee "gotchas" for future posterity.

For a discussion with the devs about how this fits into POSIX compliance you can see the (closed notabug) report at http://debbugs.gnu.org/cgi/bugreport.cgi?bug=22195

If you have access to GNU coreutils 8.24 or later, tee has gained some options (not in POSIX) that can help, like -p or --output-error=warn. Without those you can take a bit of a risk but get the desired functionality from the question by trapping and ignoring SIGPIPE:

trap '' PIPE
for i in {1..30}; do echo "$i"; echo "$i" >&2; sleep 1; done | tee >(head -1 > h.txt; echo "Head done") >(tail -1 > t.txt) >/dev/null
trap - PIPE

will have the expected results in both h.txt and t.txt, but if something else happened that wanted SIGPIPE to be handled correctly you'd be out of luck with this approach.

Another hacky option would be to truncate t.txt before starting and then not let the head process substitution finish until t.txt is non-empty:

> t.txt; for i in {1..10}; do echo "$i"; echo "$i" >&2; sleep 1; done | tee >(head -1 > h.txt; echo "Head done"; while [ ! -s t.txt ]; do sleep 1; done) >(tail -1 > t.txt; date) >/dev/null
