Run Shell Command and Don't Wait for Return

A single & after a command tells the shell to run it in the background: the shell moves straight on to the next command without waiting for the first to finish, and without caring whether it succeeded (unlike &&, which only continues on success).
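
For example (a minimal sketch; long_task and quick_task are placeholder commands):

long_task &        # runs in the background; the shell does not wait for it
quick_task         # starts immediately, regardless of whether long_task succeeds
wait               # optional: block here until the background job has finished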

Curl: don't wait for response

If you have a large number of requests you want to issue quickly, and you don't care about the output, there are two things you should do:

  1. Do more requests with the same connection.

For small requests, it's generally much faster to do 10 requests each on 1 connection than 1 request each on 10 connections. For Henry's HTTP POST test server, the difference is about 2.5x:

$ time for i in {1..10}; do
    curl -F foo=bar https://posttestserver.com/post.php
done
Successfully dumped 1 post variables.
View it at http://www.posttestserver.com/data/2016/06/09/11.44.48536583865
Post body was 0 chars long.
(...)
real 0m2.429s

vs

$ time {
    array=()
    for i in {1..10}; do
        array+=(--next -F foo=bar https://posttestserver.com/post.php)
    done
    curl "${array[@]}"
}
Successfully dumped 1 post variables.
View it at http://www.posttestserver.com/data/2016/06/09/11.45.461371907842
(...)
real 0m1.079s

  2. Process at most N connections in parallel, to avoid DoS'ing the host or your machine.

Here sem from GNU parallel limits the number of parallel connections to 4. This is a better approach than plain backgrounding and waiting, since it always keeps all slots busy: as soon as one job finishes, the next one starts.

for i in {1..20}; do
    sem -j 4 curl -F foo=bar https://posttestserver.com/post.php
done
sem --wait

The number of parallel requests you want depends on how beefy the host is. A realistic number could be 32 or more.

Combine the two strategies, and you should see a hefty speedup without DoS'ing yourself.
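
As a rough sketch of combining them (the batch sizes are arbitrary, and the URL is the same test server used above): each sem slot runs one curl process that reuses a single connection for five requests via --next.

for worker in {1..4}; do
    args=()
    for i in {1..5}; do
        args+=(--next -F foo=bar https://posttestserver.com/post.php)
    done
    sem -j 4 curl "${args[@]}"    # at most 4 curl processes at a time
done
sem --wait                        # block until all batches are done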

Is it possible for a bash command to continue before the previous command has finished?

Yes. By default, the commands in a bash script run serially, one after the other. You can tell bash to run a group of commands in parallel and then wait for them all to finish by doing something like this:

command1 &
command2 &
command3 &
wait

The ampersand at the end of each of the first three lines tells bash to run that command in the background. The fourth command, wait, tells bash to wait until all the child processes have exited.

Note that if you do things this way, you'll be unable to get the exit status of the child commands (and set -e won't work), so you won't be able to tell whether they succeeded or failed in the usual way.

The bash manual has more information (search for wait, about two-thirds of the way down).

Python: execute a shell command and continue without waiting, and check if it is running before executing again

You can use subprocess.Popen to do that, example:

import subprocess

command1 = subprocess.Popen(['command1', 'arg1', 'arg2'])
command2 = subprocess.Popen(['command2', 'arg1', 'arg2'])

If you need to retrieve the output, create the process with stdout=subprocess.PIPE, wait for it, and then read the pipe:

command1 = subprocess.Popen(['command1', 'arg1', 'arg2'], stdout=subprocess.PIPE)
command1.wait()
print(command1.stdout.read())

Example run:

sleep = subprocess.Popen(['sleep', '60'])
sleep.wait()
print(sleep.stdout)      # None here: sleep prints nothing and stdout was not piped
print(sleep.returncode)  # you get the exit value
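
The "check if running before executing" part of the question isn't covered above; a minimal sketch using Popen.poll() could look like this (long_task and its argument are placeholders):

import subprocess

proc = None

def start_if_not_running():
    global proc
    # poll() returns None while the process is still running,
    # and its exit code once it has terminated.
    if proc is None or proc.poll() is not None:
        proc = subprocess.Popen(['long_task', '--arg'])   # placeholder command
    else:
        print('previous run is still active, not starting a new one')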

How to wait for the first command to finish?

Shell scripts, no matter how they are executed, execute one command after the other. So your code will execute results.sh after the last command of st_new.sh has finished.

Now there is a special command which messes this up: &

cmd &

means: "Start a new background process and execute cmd in it. After starting the background process, immediately continue with the next command in the script."

That means & doesn't wait for cmd to do its work. My guess is that st_new.sh contains such a command. If that is the case, then you need to modify the script:

cmd &
BACK_PID=$!

This puts the process ID (PID) of the new background process in the variable BACK_PID. You can then wait for it to end:

while kill -0 "$BACK_PID" 2>/dev/null; do    # 2>/dev/null hides the error kill prints once the process is gone
    echo "Process is still active..."
    sleep 1
    # You can add a timeout here if you want
done

or, if you don't want any special handling or output, simply:

wait $BACK_PID

Note that some programs automatically start a background process when you run them, even if you omit the &. Check the documentation: such programs often have an option to write their PID to a file, or an option to stay in the foreground, in which case you can background them yourself with & and use $! as above.
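
For example, if the program writes its PID to a file, you can wait on that PID instead of $! (the daemon name, option, and path below are hypothetical):

some_daemon --pidfile /tmp/some_daemon.pid   # hypothetical daemon and option
DAEMON_PID=$(cat /tmp/some_daemon.pid)
while kill -0 "$DAEMON_PID" 2>/dev/null; do  # loop until the PID no longer exists
    sleep 1
done

The shell's wait builtin only works for the shell's own children, which is why a kill -0 loop is used here instead of wait.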

How to run a command in the background and get no output?

Use nohup if your background job takes a long time to finish, or if you log in to the server with SecureCRT or something like it, so the job keeps running after you log out.

Redirect the stdout and stderr to /dev/null to ignore the output.

nohup /path/to/your/script.sh > /dev/null 2>&1 &
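
If you do want to keep the output for later inspection, redirect it to a file instead of /dev/null (the log path is arbitrary):

nohup /path/to/your/script.sh > /tmp/script.log 2>&1 &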

How to wait in bash for several subprocesses to finish, and return exit code !=0 when any subprocess ends with code !=0?

wait also (optionally) takes the PID of the process to wait for, and with $! you get the PID of the last command launched in the background.
Modify the loop to store the PID of each spawned sub-process into an array, and then loop again waiting on each PID.

# run processes and store pids in array
for i in $n_procs; do
    "./${procs[i]}" &    # launch the i-th program in the background
    pids[i]=$!           # remember its PID
done

# wait for all pids
for pid in "${pids[@]}"; do
    wait "$pid"
done
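
To actually return a non-zero exit code when any subprocess fails (as the question asks), check the status that wait reports for each PID, for example:

# wait for all pids and remember whether any of them failed
fail=0
for pid in "${pids[@]}"; do
    wait "$pid" || fail=1   # wait returns that child's exit status
done
exit $fail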

Launch a shell command from within a Python script, wait for it to terminate, and return to the script

subprocess: The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.

http://docs.python.org/library/subprocess.html

Usage:

import subprocess

process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
process.wait()               # block until the command has finished
print(process.returncode)
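
On Python 3.5+, subprocess.run does the same launch-and-wait in one call:

import subprocess

result = subprocess.run(command, shell=True, stdout=subprocess.PIPE)
print(result.returncode)
print(result.stdout)         # captured standard output, as bytes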

