Using a Python Subprocess Call to Invoke a Python Script

Launch a python script from another script, with parameters in subprocess argument

The subprocess library is interpreting all of your arguments, including demo_oled_v01.py, as a single argument to python. That's why python is complaining that it cannot locate a file with that name. Try running it as:

p = subprocess.Popen(['python', 'demo_oled_v01.py', '--display', 'ssd1351',
                      '--width', '128', '--height', '128',
                      '--interface', 'spi', '--gpio-data-command', '20'])
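To see why the mashed-together form fails, here is a small self-contained sketch; a temporary stand-in script replaces demo_oled_v01.py (an assumption made purely for illustration):

```python
import os
import subprocess
import sys
import tempfile

# A tiny stand-in script that just echoes its arguments.
with tempfile.TemporaryDirectory() as d:
    script = os.path.join(d, "demo.py")
    with open(script, "w") as f:
        f.write("import sys; print(sys.argv[1:])")

    # Wrong: the script name and its options mashed into one list element.
    # Python looks for a file literally named "demo.py --width 128".
    bad = subprocess.run([sys.executable, script + " --width 128"])

    # Right: every argument is its own list element.
    good = subprocess.run([sys.executable, script, "--width", "128"])

print(bad.returncode, good.returncode)  # nonzero vs 0
```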

See the subprocess documentation for more information on Popen.

Using subprocess.call to execute python file?

subprocess.call(["python", "script2.py"]) waits for the sub-process to finish.

Just use Popen instead:

proc = subprocess.Popen(["python", "script2.py"])

You can later do proc.poll() to see whether it is finished or not, or proc.wait() to wait for it to finish (as call does), or just forget about it and do other things instead.
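A minimal sketch of the difference, using an inline -c snippet in place of script2.py so it runs standalone:

```python
import subprocess
import sys

# Start a short-lived child; the inline snippet stands in for script2.py.
proc = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(0.2)"])

# poll() returns None while the child is still running,
# or its exit code once it has finished.
print(proc.poll())

# wait() blocks until the child exits (this is what call() does internally).
returncode = proc.wait()
print(returncode)  # 0 on success
```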

BTW, you might want to ensure that the same python is called, and that the OS can find it, by using sys.executable instead of just "python":

subprocess.Popen([sys.executable, "script2.py"])

Run Python script within Python by using `subprocess.Popen` in real time

Last night I set out to do this using a pipe:

import os
import subprocess

with open("test2", "w") as f:
    f.write("""import time
print('start')
time.sleep(2)
print('done')""")

(readend, writeend) = os.pipe()

p = subprocess.Popen(['python3', '-u', 'test2'], stdout=writeend, bufsize=0)
still_open = True
output = ""
output_buf = os.read(readend, 1).decode()
while output_buf:
    print(output_buf, end="")
    output += output_buf
    if still_open and p.poll() is not None:
        os.close(writeend)
        still_open = False
    output_buf = os.read(readend, 1).decode()

This forces buffering out of the picture and reads one character at a time (to make sure we do not block writes from the process once it has filled a buffer), closing the writing end when the process finishes so that read catches the EOF correctly. Having looked at subprocess more closely, though, that turned out to be overkill. With PIPE you get most of that for free, and I ended up with the following, which seems to work fine (call read as many times as necessary to keep emptying the pipe). With just this, and assuming the process finishes, you do not have to worry about polling it or about making sure the write end of the pipe is closed to correctly detect EOF and get out of the loop:

p = subprocess.Popen(['python3', '-u', 'test2'],
                     stdout=subprocess.PIPE, bufsize=1,
                     universal_newlines=True)
output = ""
output_buf = p.stdout.readline()
while output_buf:
    print(output_buf, end="")
    output += output_buf
    output_buf = p.stdout.readline()

This is a bit less "real-time" as it is basically line buffered.
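If line buffering is too coarse, reading one character at a time from the PIPE gets closer to real-time output. A sketch, with an inline -c snippet standing in for the test2 file:

```python
import subprocess
import sys

# -u keeps the child unbuffered; universal_newlines gives us str, not bytes.
p = subprocess.Popen(
    [sys.executable, "-u", "-c", "print('start'); print('done')"],
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
output = ""
ch = p.stdout.read(1)
while ch:                  # read(1) returns '' at EOF
    print(ch, end="")
    output += ch
    ch = p.stdout.read(1)
p.wait()
```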

Note: I've added -u to your Python call, as you need to also make sure your called process' buffering does not get in the way.

Using subprocess to run Python script on Windows

Just found sys.executable - the full path to the current Python executable, which can be used to run the script (instead of relying on the shebang, which obviously doesn't work on Windows):

import sys
import subprocess

theproc = subprocess.Popen([sys.executable, "myscript.py"])
theproc.communicate()
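communicate() will also collect the child's output if you ask for a pipe. A hedged sketch, with an inline snippet standing in for myscript.py:

```python
import subprocess
import sys

theproc = subprocess.Popen(
    [sys.executable, "-c", "print('hello from child')"],
    stdout=subprocess.PIPE,
    universal_newlines=True,  # decode bytes to str
)
out, err = theproc.communicate()  # waits for the child and drains the pipe
print(out, end="")
```

err is None here because stderr was not redirected to a pipe.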

Python; how to properly call another python script as a subprocess

Every time a program is started, it receives a list of arguments it was invoked with. This is often called argv (the v stands for vector, i.e. a one-dimensional array). The program parses this list and extracts options, parameters, filenames, etc. depending on its own invocation syntax.

When working at the command line, the shell takes care of parsing the input line, starting the new program or programs, and passing them their argument lists.

When a program is called from another program, the caller is responsible for providing the correct arguments. It could delegate this work to a shell, but the price is high: substantial overhead and possibly a security risk. Avoid this approach whenever possible.

Finally to the question itself:

shpfiles = 'shapefile_a.shp shapefile_b.shp'
subprocess.call(['python', 'shapemerger.py', '%s' % shpfiles])

This will invoke python to run the script shapemerger.py with one argument: shapefile_a.shp shapefile_b.shp. The script expects filenames and receives this single name. The file "shapefile_a.shp shapefile_b.shp" does not exist, but the script probably stops before attempting to access it, because it expects 2 or more files to process.

The correct way is to pass every filename as one argument. Assuming shpfiles is a whitespace separated list:

subprocess.call(['python', 'shapemerger.py'] + shpfiles.split())

will generate a list with 4 items. It is important to understand that this approach will fail if there is a space in a filename.
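If filenames may contain spaces, shlex.split (rather than str.split) respects shell-style quoting. A small sketch:

```python
import shlex

# A quoted filename containing a space survives as one argument:
shpfiles = '"shape a.shp" shapefile_b.shp'
args = shlex.split(shpfiles)
print(args)  # ['shape a.shp', 'shapefile_b.shp']
```

The call then becomes subprocess.call(['python', 'shapemerger.py'] + args), assuming whoever builds the string quotes names that contain spaces.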

Run one subprocess after another in a single call that works in the background

subprocess.run waits until the command is complete. You could create a background thread that runs all the commands you want in a row.

import threading
import subprocess

def run_background():
    subprocess.run(['secondary.py'], shell=True, stdin=None, stdout=None, stderr=None, close_fds=True)
    subprocess.run(['tertiary.py'], shell=True, stdin=None, stdout=None, stderr=None, close_fds=True)

bg_thread = threading.Thread(target=run_background)
bg_thread.start()

Because this thread was not marked as a daemon thread, the main thread will wait for it to complete before the program exits.
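A self-contained sketch of the pattern; inline snippets stand in for secondary.py and tertiary.py, and you would pass daemon=True instead if the commands should be abandoned when the main program exits:

```python
import subprocess
import sys
import threading

def run_background():
    # Inline snippets stand in for secondary.py and tertiary.py.
    subprocess.run([sys.executable, "-c", "print('secondary')"])
    subprocess.run([sys.executable, "-c", "print('tertiary')"])

bg_thread = threading.Thread(target=run_background)  # non-daemon by default
bg_thread.start()
# ... the main thread is free to do other work here ...
bg_thread.join()  # explicit wait, just for this demonstration
```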

Calling a python script with input within a python script using subprocess

To call a Python script from another one using subprocess module and to pass it some input and to get its output:

#!/usr/bin/env python3
import os
import sys
from subprocess import check_output

script_path = os.path.join(get_script_dir(), 'a.py')
output = check_output([sys.executable, script_path],
                      input='\n'.join(['query 1', 'query 2']),
                      universal_newlines=True)

where get_script_dir() is a helper function that returns the directory containing the current script (its definition is linked in the original answer).
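A self-contained variant of the same pattern, with an inline child that echoes each stdin line back (standing in for a.py):

```python
import sys
from subprocess import check_output

# The child reads queries from stdin and prints one reply per line.
child = "import sys\nfor line in sys.stdin: print('got', line.strip())"
output = check_output([sys.executable, "-c", child],
                      input='\n'.join(['query 1', 'query 2']),
                      universal_newlines=True)
print(output, end="")
```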

A more flexible alternative is to import module a and to call a function, to get the result (make sure a.py uses if __name__=="__main__" guard, to avoid running undesirable code on import):

#!/usr/bin/env python
import a # the dir with a.py should be in sys.path

result = [a.search(query) for query in ['query 1', 'query 2']]

You could use multiprocessing to run each query in a separate process (if performing a query is CPU-intensive, this might improve time performance):

#!/usr/bin/env python
from multiprocessing import freeze_support, Pool
import a

if __name__ == "__main__":
    freeze_support()
    pool = Pool()  # use all available CPUs
    result = pool.map(a.search, ['query 1', 'query 2'])

