Keep a subprocess alive and keep giving it commands? Python
Use the standard subprocess module. You use subprocess.Popen() to start the process, and it will run in the background (i.e. at the same time as your Python program). When you call Popen(), you probably want to set the stdin, stdout and stderr parameters to subprocess.PIPE. Then you can use the stdin, stdout and stderr fields on the returned object to write and read data.
Untested example code:
from subprocess import Popen, PIPE
# Run "cat", a simple Unix program that echoes its input.
process = Popen(['/bin/cat'], stdin=PIPE, stdout=PIPE)
process.stdin.write(b'Hello\n')
process.stdin.flush()
print(repr(process.stdout.readline())) # Should print b'Hello\n'
process.stdin.write(b'World\n')
process.stdin.flush()
print(repr(process.stdout.readline())) # Should print b'World\n'
# "cat" will exit when you close stdin. (Not all programs do this!)
process.stdin.close()
print('Waiting for cat to exit')
process.wait()
print('cat finished with return code %d' % process.returncode)
Keep a subprocess alive and keep giving it commands (Python 3)
Between cat and your program is a buffer, most likely in the libc stdio implementation on the Python side. You need to flush this buffer to ensure cat has seen the bytes you wrote before putting your process to sleep waiting for cat to write some bytes back.
You can do this explicitly with a process.stdin.flush() call, or you can do it implicitly by disabling the buffer. I think the explicit form is probably better here: it's simple and clearly correct.
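For completeness, the implicit route is to pass bufsize=0 to Popen, which makes the pipes unbuffered on the Python side so no flush() is needed; a minimal sketch, again using cat as the child:

```python
from subprocess import Popen, PIPE

# bufsize=0 disables Python's buffering on the pipe objects,
# so each write() reaches the child immediately.
proc = Popen(['cat'], stdin=PIPE, stdout=PIPE, bufsize=0)
proc.stdin.write(b'ping\n')          # no flush() required
reply = proc.stdout.readline()       # cat echoes the line straight back
proc.stdin.close()
proc.wait()
```

Note this only removes buffering on the Python side; the child's own stdio may still buffer its output when it is not connected to a terminal.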
How to keep a subprocess running and keep supplying input to it in Python?
As noted, communicate() and readlines() are not fit for the task because they don't return before gdb's output has ended, i.e. gdb has exited. We want to read gdb's output until it waits for input; one way to do this is to read until gdb's prompt appears - see the function get_output() below:
from subprocess import Popen, PIPE, STDOUT
from os import read

command = ['gdb', 'demo']
process = Popen(command, stdin=PIPE, stdout=PIPE, stderr=STDOUT)

def get_output():
    output = ""
    while not output.endswith("\n(gdb) "):     # read output till prompt
        buffer = read(process.stdout.fileno(), 4096)
        if not buffer:                          # or till EOF (gdb exited)
            break
        output += buffer.decode()
    return output

def send(line):
    process.stdin.write(line.encode())          # don't forget the "\n"!
    process.stdin.flush()                       # make sure gdb sees it

print(get_output())
send("b main\n")
print(get_output())
send("r\n")
print(get_output())
send("n\n")
print(get_output())
send("n\n")
print(get_output())
send("n\n")
print(get_output())
process.stdin.close()
print(get_output())
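One caveat: os.read() blocks until the child writes something, so if the prompt never appears the loop hangs forever. A sketch of a guard using select with a timeout (using cat as a stand-in for gdb, since the technique is the same for any child process):

```python
import select
from subprocess import Popen, PIPE

proc = Popen(['cat'], stdin=PIPE, stdout=PIPE, bufsize=0)
proc.stdin.write(b'hello\n')

# Wait at most 5 seconds for the child to produce output before reading;
# if nothing arrives in time, give up instead of blocking forever.
ready, _, _ = select.select([proc.stdout], [], [], 5.0)
reply = proc.stdout.readline() if ready else b''

proc.stdin.close()
proc.wait()
```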
Keep the subprocess program running and accepting new arguments
When run returns, the program you ran has exited and is no longer in memory. There's no pre-initialized copy still in RAM to reuse (by re-invoking its main with different arguments).
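In other words, each invocation has to start a fresh process and pay the startup cost again. A minimal sketch of the loop-and-rerun pattern with subprocess.run:

```python
import subprocess

# Each run() call starts a brand-new process; no state survives
# from one call to the next.
outputs = []
for arg in ['one', 'two']:
    result = subprocess.run(['echo', arg], capture_output=True, text=True)
    outputs.append(result.stdout.strip())
print(outputs)
```

If the startup cost matters, the persistent-Popen approach from the earlier answers is the way to avoid it.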
Constantly print Subprocess output while process is running
You can use iter to process lines as soon as the command outputs them: lines = iter(fd.readline, ""). Here's a full example showing a typical use case (thanks to @jfs for helping out):
from __future__ import print_function  # Only needed on Python 2.x
import subprocess

def execute(cmd):
    popen = subprocess.Popen(cmd, stdout=subprocess.PIPE, universal_newlines=True)
    for stdout_line in iter(popen.stdout.readline, ""):
        yield stdout_line
    popen.stdout.close()
    return_code = popen.wait()
    if return_code:
        raise subprocess.CalledProcessError(return_code, cmd)

# Example
for path in execute(["locate", "a"]):
    print(path, end="")
How to keep sub-process running after main process has exited?
The multiprocessing module is generally used to split a huge task into multiple subtasks and run them in parallel to improve performance. In this case, you would want to use the subprocess module instead.
You can put your fun function in a separate file (sub.py):
import time

while True:
    print("Hello")
    time.sleep(3)
Then you can call it from the main file (main.py):
from subprocess import Popen
import time

if __name__ == '__main__':
    Popen(["python", "./sub.py"])
    time.sleep(6)
    print('Parent Exiting')
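Note that a plain Popen child can still be taken down with the parent's process group (for example by Ctrl-C in a terminal). If the child must reliably outlive the parent, one option is start_new_session=True, which runs the child in its own session; a sketch using sleep as a stand-in for sub.py:

```python
from subprocess import Popen, DEVNULL

# start_new_session=True calls setsid() in the child, detaching it
# from the parent's controlling terminal and process group.
child = Popen(['sleep', '30'], start_new_session=True,
              stdout=DEVNULL, stderr=DEVNULL)
print(child.pid)  # the parent can now exit; the child keeps running
```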
In Python need to keep openssl alive and keep giving it commands
Found an alternative method of doing this on Linux using ctypescrypto. This doesn't require any additional external dependency (just include all of the given source files in your code, as the licence allows us to do so), and in terms of performance it should be quite fast, since the functions called are compiled C.
On Linux for test.py we have to replace:
crypto_dll = os.path.join(r'C:\Python24', 'libeay32.dll')
libcrypto = cdll.LoadLibrary(crypto_dll)
with:
from ctypes.util import find_library
crypto_dll = find_library('crypto') # In my case it's 'libcrypto.so.1.0.0'
libcrypto = cdll.LoadLibrary(crypto_dll)
After changing it the following example works:
import cipher
from ctypes import cdll
from base64 import b64encode
from base64 import b64decode
libcrypto = cdll.LoadLibrary('libcrypto.so.1.0.0')
libcrypto.OpenSSL_add_all_digests()
libcrypto.OpenSSL_add_all_ciphers()
# Encryption
c = cipher.CipherType(libcrypto, 'AES-256', 'CBC')
ce = cipher.Cipher(libcrypto, c, '11111111111111111111111111111111', '1111111111111111', encrypt=True)
encrypted_text = b64encode(ce.finish("Four Five Six"))
print(encrypted_text)
# Decryption
cd = cipher.Cipher(libcrypto, c, '11111111111111111111111111111111', '1111111111111111', encrypt=False)
plain_text = cd.finish(b64decode(encrypted_text))
print(plain_text)
Run a program from python, and have it continue to run after the script is killed
The usual way to do this on Unix systems is to fork and exit if you're the parent. Have a look at os.fork().
Here's a function that does the job:
import os
import sys

def spawnDaemon(func):
    # do the UNIX double-fork magic, see Stevens' "Advanced
    # Programming in the UNIX Environment" for details (ISBN 0201563177)
    try:
        pid = os.fork()
        if pid > 0:
            # parent process, return and keep running
            return
    except OSError as e:
        print("fork #1 failed: %d (%s)" % (e.errno, e.strerror), file=sys.stderr)
        sys.exit(1)

    os.setsid()

    # do second fork
    try:
        pid = os.fork()
        if pid > 0:
            # exit from second parent
            sys.exit(0)
    except OSError as e:
        print("fork #2 failed: %d (%s)" % (e.errno, e.strerror), file=sys.stderr)
        sys.exit(1)

    # do stuff
    func()

    # all done
    os._exit(os.EX_OK)
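A usage sketch (with the double-fork logic repeated in condensed form so the snippet is self-contained): the daemonized function writes a marker file, which is the only way the parent can observe that the detached grandchild actually ran.

```python
import os
import sys
import time
import tempfile

def spawnDaemon(func):
    # condensed double-fork: detach func() fully from this process
    try:
        if os.fork() > 0:
            return                      # parent: keep running
    except OSError as e:
        print("fork #1 failed: %s" % e, file=sys.stderr)
        sys.exit(1)
    os.setsid()
    try:
        if os.fork() > 0:
            sys.exit(0)                 # exit from second parent
    except OSError as e:
        print("fork #2 failed: %s" % e, file=sys.stderr)
        sys.exit(1)
    func()
    os._exit(os.EX_OK)

# Create a marker file path before forking, so both sides know it.
fd, marker = tempfile.mkstemp()
os.close(fd)

def task():
    with open(marker, 'w') as f:
        f.write('daemon ran')

spawnDaemon(task)
time.sleep(1)  # give the detached grandchild a moment to run
```

The hypothetical task() here is just a placeholder; in practice it would be the long-running work you want to survive the parent.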