Run Process and Don't Wait

This call doesn't wait for the child process to terminate (on Linux). Don't ask me what close_fds does; I wrote the code some years ago. (BTW: The documentation of subprocess.Popen is confusing, IMHO.)

from subprocess import Popen

# With shell=True, pass the command line as a single string.
proc = Popen(cmd_str, shell=True,
             stdin=None, stdout=None, stderr=None, close_fds=True)

Edit:

I looked at the documentation of subprocess, and I believe the important aspect for you is stdin=None, stdout=None, stderr=None: with None the child simply inherits the parent's streams. If you pass subprocess.PIPE instead, Popen captures the program's output and you are expected to read it. close_fds=True makes the parent process's file descriptors inaccessible to the child.
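
To illustrate the difference (a minimal sketch, not part of the original answer): with None the child writes straight to the terminal, while with subprocess.PIPE the output is captured and you must read it yourself.

import subprocess
import sys

# stdout=None (the default): the child inherits this process's stdout,
# so its output appears on the terminal and nothing is captured.
subprocess.Popen([sys.executable, '-c', 'print("goes to the terminal")'])

# stdout=subprocess.PIPE: Popen captures the output and you are expected
# to read it, e.g. via communicate().
p = subprocess.Popen([sys.executable, '-c', 'print("captured")'],
                     stdout=subprocess.PIPE)
out, _ = p.communicate()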

How to run a background process and *not* wait?

Here is a verified example in the Python REPL:

>>> import subprocess
>>> import sys
>>> p = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(100)'], stdout=subprocess.PIPE, stderr=subprocess.STDOUT); print('finished')
finished

How to verify that via another terminal window:

$ ps aux | grep python

Output:

user           32820   0.0  0.0  2447684   3972 s003  S+   10:11PM   0:00.01 /Users/user/venv/bin/python -c import time; time.sleep(100)
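
As a complementary check from the same Python session (not part of the original answer), Popen.poll() returns None while the child is still running:

>>> p.poll() is None   # True while the child has not exited yet
True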

Can subprocess.call be invoked without waiting for the process to finish?

Use subprocess.Popen instead of subprocess.call:

process = subprocess.Popen(['foo', '-b', 'bar'])

subprocess.call is a wrapper around subprocess.Popen that waits for the process to terminate. See also What is the difference between subprocess.popen and subprocess.run.
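
To make the contrast concrete, here is a small sketch (assuming a POSIX sleep command is available):

import subprocess

# subprocess.call blocks until the child exits and returns its exit code.
rc = subprocess.call(['sleep', '2'])

# subprocess.Popen returns immediately; the child keeps running.
process = subprocess.Popen(['sleep', '2'])
# ... do other work while the child runs ...
rc = process.wait()  # optional: block later only if you need the exit code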

Wait until subprocess.run completes its task

According to the Python documentation, subprocess.run waits for the process to end.

The problem is that ffmpeg overwrites the input file if the input and output files are the same, so the output video becomes unusable.
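
A minimal sketch of the usual workaround (file names here are hypothetical): keep subprocess.run blocking as it normally does, but have ffmpeg write to a separate file and only replace the original afterwards.

import os
import subprocess

# run() waits for ffmpeg to finish; check=True raises if ffmpeg fails.
subprocess.run(['ffmpeg', '-y', '-i', 'input.mp4', 'output_tmp.mp4'], check=True)

# Replace the original only after the encode has succeeded.
os.replace('output_tmp.mp4', 'input.mp4')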

Python subprocess.popen() without waiting

Just don't call myProc.communicate() if you don't want to wait. subprocess.Popen will start the process.
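
A minimal sketch of that idea (the long-running child here is just a stand-in):

import subprocess
import sys

# Popen starts the child and returns immediately.
myProc = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(30)'])
print('started child with pid', myProc.pid)

# Calling myProc.communicate() or myProc.wait() here would block until the
# child exits, so simply skip those calls if you do not want to wait.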

How to NOT wait for a process to complete in batch script?

Use

START c:\wherever\whatever.exe

bash wait for all processes to finish (doesn't work)

A literal port to GNU Parallel looks like this:

task() {
  dir="$1"
  P=`pwd`
  dirname=$(basename $dir)
  echo $dirname running >> output.out
  if [[ $dirname != "backup"* ]]; then
    sed -i "s/$dirname running/$dirname is good/" $P/output.out
  else
    sed -i "s/$dirname running/$dirname ignored/" $P/output.out
  fi
}
export -f task

parallel -j8 task ::: */
echo all done

As others point out, you have race conditions when you run sed on the same file in parallel.

To avoid race conditions, you could do:

task() {
  dir="$1"
  P=`pwd`
  dirname=$(basename $dir)
  echo $dirname running
  if [[ $dirname != "backup"* ]]; then
    echo "$dirname is good" >&2
  else
    echo "$dirname ignored" >&2
  fi
}
export -f task

parallel -j8 task ::: */ >running.out 2>done.out
echo all done

You will end up with two files: running.out and done.out.

If you really just want to ignore the dirs called backup*:

task() {
  dir="$1"
  P=`pwd`
  dirname=$(basename $dir)
  echo $dirname running
  echo "$dirname is good" >&2
}
export -f task

parallel -j8 task '{=/backup/ and skip()=}' ::: */ >running.out 2>done.out
echo all done

Consider spending 20 minutes reading chapters 1 and 2 of https://doi.org/10.5281/zenodo.1146014. Your command line will love you for it.


