Kill the previous command in a pipeline
What makes this tricky is that waf
misbehaves by not exiting when the pipe breaks, and it spawns off a second process that we also have to get rid of:
tmp=$(mktemp)
cat <(./waf --run scratch/myfile & echo $! > "$tmp"; wait) | awk -f filter.awk;
pkill -P $(<$tmp)
kill $(<$tmp)
- We use <(process substitution) to run waf in the background and write its PID to a temp file.
- We use cat as an intermediary to relay data from this process to awk, since cat will exit properly when the pipe is broken, allowing the pipeline to finish.
- When the pipeline's done, we kill all processes that waf has spawned (by parent PID).
- Finally, we kill waf itself.
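The same workaround can be exercised without waf itself. In the sketch below a hypothetical stubborn function that ignores SIGPIPE stands in for waf (so, like waf, it keeps running after the pipe breaks), and head -n 1 stands in for the awk filter:

```shell
#!/usr/bin/env bash
# "stubborn" stands in for waf: it ignores SIGPIPE, so it keeps running
# after the pipe breaks and has to be killed explicitly.
stubborn() { trap '' PIPE; while :; do echo data 2>/dev/null; sleep 0.1; done; }

tmp=$(mktemp)
# run the stand-in via process substitution, recording its PID in the temp file;
# head -n 1 (standing in for the awk filter) ends the pipeline after one line
captured=$(cat <(stubborn & echo $! > "$tmp"; wait) | head -n 1)
stub_pid=$(<"$tmp"); rm -f "$tmp"
pkill -P "$stub_pid" 2>/dev/null   # kill anything it spawned (by parent PID)...
kill "$stub_pid" 2>/dev/null       # ...then the stand-in itself
```

cat dies of SIGPIPE as soon as head exits, so the command substitution returns promptly even though the stand-in is still running at that point.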
Kill last program in pipe if any previous command fails
Instead of running this all as one pipeline, split off upload-to-cloud into a separate process substitution which can be independently signaled, and for which your parent shell script holds a descriptor (and thus can control the timing of reaching EOF on its stdin).
Note that upload-to-cloud needs to be written to delete content it already uploaded in the event of an unclean exit for this to work as you intend.
Assuming you have a suitably recent version of bash:
#!/usr/bin/env bash
# dynamically allocate a file descriptor; assign it to a process substitution
# store the PID of that process substitution in upload_pid
exec {upload_fd}> >(exec upload-to-cloud); upload_pid=$!
# make sure we recorded an upload_pid that refers to a process that is actually running
if ! kill -0 "$upload_pid"; then
# if this happens without any other obvious error message, check that we're bash 4.4
echo "ERROR: upload-to-cloud not started, or PID not stored" >&2
fi
set -o pipefail
if acquire_data | gzip -9 | gpg --batch -e -r me@example.com >&"$upload_fd"; then
exec {upload_fd}>&- # close the pipe writing to upload-to-cloud gracefully...
wait "$upload_pid" # ...and wait for it to exit
exit # ...then ourselves exiting with the exit status of upload-to-cloud
# (which was returned by wait, became $?, thus exit's default).
else
retval=$? # store the exit status of the failed pipeline component
kill "$upload_pid" # kill the backgrounded process of upload-to-cloud
wait "$upload_pid" # let it handle that SIGTERM...
exit "$retval" # ...and exit the script with the exit status we stored earlier.
fi
Without a new enough bash to be able to store the PID for a process substitution, the line establishing the process substitution might change to:
mkfifo upload_to_cloud.fifo
upload-to-cloud <upload_to_cloud.fifo & upload_pid=$!
exec {upload_fd}>upload_to_cloud.fifo
rm -f upload_to_cloud.fifo
...after which the rest of the script should work unmodified.
Kill next command in pipeline on failure
A short script which uses process substitution instead of named pipes would be:
#!/bin/bash
exec 4> >( ./second-process.sh )
./first-process.sh >&4 &
if ! wait $! ; then echo "error in first process" >&2; kill 0; wait; fi
It works much like with a fifo, basically using the fd as the information carrier for the IPC instead of a file name.
Two remarks: I wasn't sure whether it's necessary to close fd 4; I would assume that upon script exit the shell closes all open files. And I couldn't figure out how to obtain the PID of the process in the process substitution (anybody? At least on my Cygwin the usual $! didn't work). Therefore I resorted to killing all processes in the group, which may not be desirable (but I'm not entirely sure about the semantics).
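For what it's worth, with bash 4.4 or newer, $! taken immediately after establishing the process substitution does hold its PID, which addresses the second remark. A sketch, with cat standing in for the real consumer:

```shell
#!/usr/bin/env bash
# Sketch: with bash >= 4.4, $! right after setting up a process
# substitution holds its PID, so it can be waited for and killed
# individually instead of signalling the whole process group.
outfile=$(mktemp)
exec 4> >( cat > "$outfile" )
psub_pid=$!
kill -0 "$psub_pid" && pid_ok=yes || pid_ok=no

echo hello >&4
exec 4>&-                    # close fd 4 explicitly; the writer then sees EOF
wait "$psub_pid" 2>/dev/null # wait-by-PID on a process substitution needs 4.4+
result=$(<"$outfile"); rm -f "$outfile"
```

Closing fd 4 explicitly (rather than relying on script exit) is what lets wait return here.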
Killing a specific process in a pipeline, from later in the pipeline
You could try using a fifo to send data to telnet, so you can close it at the desired time.
rm -f myfifo
mkfifo myfifo
result=$(
telnet "$SERVER_ADD" "$SERVER_PORT" <myfifo |
( echo '{"op":"get","path":"access"}' >&5
while read -r line; do
echo "$line"
exec 5>&-
done
) 5>myfifo
)
The syntax 5>myfifo (no space) opens a new output file descriptor 5 writing to the fifo. The echo >&5 writes to this fd. The exec 5>&- closes this fd. This syntax should work in ash.
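The wiring can be tried without a server by letting cat stand in for telnet; in this sketch cat simply echoes the request back as the reply:

```shell
#!/bin/sh
# cat stands in for telnet: it reads the "request" from the fifo and
# writes it back as the "reply" on its stdout.
rm -f myfifo
mkfifo myfifo
result=$(
    cat <myfifo |
    ( echo 'request' >&5   # send the request through fd 5 into the fifo
      read -r line         # read one reply line back from cat
      echo "$line"
      exec 5>&-            # close fd 5: cat sees EOF and exits
    ) 5>myfifo
)
rm -f myfifo
```

Both ends of the fifo open concurrently (the reader in cat, the writer via 5>myfifo), so neither side blocks forever.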
How to give arguments to kill via pipe
kill $(ps -e | grep dmn | awk '{print $1}')
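If you'd rather deliver the arguments to kill through the pipe itself, xargs does that. A sketch, with a throwaway sleep standing in for dmn:

```shell
#!/usr/bin/env bash
# Sketch: feed the PIDs to kill through the pipe with xargs instead of
# command substitution. A background sleep stands in for "dmn".
sleep 300 &
victim=$!
# select PIDs whose command name is exactly "sleep" and kill them
ps -e -o pid=,comm= | awk '$2 == "sleep" {print $1}' | xargs kill 2>/dev/null || true
status=0
wait "$victim" || status=$?   # 143 = 128 + SIGTERM(15)
```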
Kill process on the left side of a pipe
By killing the whole process group instead of just bash
(the parent), you can send the kill signal to all children as well.
Syntax examples are:
kill -SIGTERM -$!
kill -- -$!
Example:
bash -c 'sleep 50 | sleep 40' & sleep 1; kill -SIGTERM -$!; wait; ps -ef | grep -c sleep
[1] 14683
[1]+ Terminated bash -c 'sleep 50 | sleep 40'
1
Note that wait here waits for bash to be effectively killed, which takes some milliseconds.
Also note that the final result (1) is the grep sleep itself. A result of 3 would show that this did not work, as two additional sleep processes would still be running.
The kill manual mentions:
-n
where n is larger than 1. All processes in process group n are signaled.
When an argument of the form '-n' is given, and it is meant to denote a
process group, either the signal must be specified first, or the argument
must be preceded by a '--' option, otherwise it will be taken as the signal
to send.
Find and kill a process in one line using bash and regex
In bash, you should be able to do:
kill $(ps aux | grep '[p]ython csp_build.py' | awk '{print $2}')
Details on its workings are as follows:
- The ps gives you the list of all the processes.
- The grep filters that based on your search string; [p] is a trick to stop you picking up the actual grep process itself.
- The awk just gives you the second field of each line, which is the PID.
- The $(x) construct means to execute x, then take its output and put it on the command line. The output of that ps pipeline inside the construct above is the list of process IDs, so you end up with a command like kill 1234 1122 7654.
Here's a transcript showing it in action:
pax> sleep 3600 &
[1] 2225
pax> sleep 3600 &
[2] 2226
pax> sleep 3600 &
[3] 2227
pax> sleep 3600 &
[4] 2228
pax> sleep 3600 &
[5] 2229
pax> kill $(ps aux | grep '[s]leep' | awk '{print $2}')
[5]+ Terminated sleep 3600
[1] Terminated sleep 3600
[2] Terminated sleep 3600
[3]- Terminated sleep 3600
[4]+ Terminated sleep 3600
and you can see it terminating all the sleepers.
Explaining the grep '[p]ython csp_build.py' bit in a bit more detail:

When you do sleep 3600 & followed by ps -ef | grep sleep, you tend to get two processes with sleep in them: the sleep 3600 and the grep sleep (because they both have sleep in them, that's not rocket science).

However, ps -ef | grep '[s]leep' won't create a process with sleep in it; it instead creates grep '[s]leep', and here's the tricky bit: the grep doesn't find it, because it's looking for the regular expression "any character from the character class [s] (which is s) followed by leep".

In other words, it's looking for sleep, but the grep process is grep '[s]leep', which doesn't have sleep in it.
When I was shown this (by someone here on SO), I immediately started using it because:

- it's one less process than adding | grep -v grep; and
- it's elegant and sneaky, a rare combination :-)
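The effect is easy to verify against a canned listing; in this sketch two printf lines imitate what ps -ef would show in each case:

```shell
#!/usr/bin/env bash
# Simulated ps output: in the plain case the grep's own command line
# contains the literal string "sleep"; in the bracketed case it does not.
plain=$(printf 'user 1 sleep 3600\nuser 2 grep sleep\n' | grep -c 'sleep')
bracket=$(printf 'user 1 sleep 3600\nuser 2 grep [s]leep\n' | grep -c '[s]leep')
echo "plain matches: $plain, bracketed matches: $bracket"
# prints: plain matches: 2, bracketed matches: 1
```

The regex [s]leep still matches the text sleep, but the string "grep [s]leep" contains no such substring, so the grep never matches itself.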
Exit when one process in pipe fails
The main issue at hand here is clearly the pipe. In bash, when executing a command of the form
command1 | command2
and command2 dies or terminates, the pipe which receives the output (/dev/stdout) from command1 becomes broken. The broken pipe, however, does not terminate command1. That only happens when command1 next tries to write to the broken pipe, upon which it is killed by SIGPIPE. A simple demonstration of this can be seen in this question.
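A minimal sketch of that delayed SIGPIPE (the loop bound is only there to guarantee termination):

```shell
#!/usr/bin/env bash
# head exits after one line; the loop on the left keeps running and is
# only killed on its *next* write to the now-broken pipe, with status
# 141 (128 + SIGPIPE, signal 13).
( for i in {1..100}; do echo tick; sleep 0.1; done ) | head -n 1 > /dev/null
left_status=${PIPESTATUS[0]}
```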
If you want to avoid this problem, you should make use of process substitution in combination with input redirection. This way, you avoid pipes. The above pipeline is then written as:
command2 < <(command1)
In the case of the OP, this would become:
./script.sh < <(tee /dev/stderr) | tee /dev/stderr
which can also be written as:
./script.sh < <(tee /dev/stderr) > >(tee /dev/stderr)