Python Spawn Off a Child Subprocess, Detach, and Exit

Spawn and detach process in python

First of all, no process is completely 'detached' from every other process, except the very first process started by the OS kernel. There is always one root process from which all other processes descend.

So there is no way to start a completely separate process that is not at least some other process's child.

What you can do is use the multiprocessing module to avoid the GIL limitation: it runs your code in a standalone process that communicates with the main process via a pipe, so they do not share the same memory space.
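A minimal multiprocessing sketch (the worker function and values are made up purely for illustration):

import multiprocessing

def worker(conn):
    # Runs in a separate process with its own memory space;
    # the result is sent back to the parent over the pipe.
    conn.send(sum(range(1000)))
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = multiprocessing.Pipe()
    p = multiprocessing.Process(target=worker, args=(child_conn,))
    p.start()
    print(parent_conn.recv())   # 499500
    p.join()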

Regarding your other questions:

  1. You can redirect stdout and stderr to whatever you like, as long as it is a valid file descriptor (see the sketch after this list).

  2. You can turn off shell=True; just make sure you understand what shell=True actually does before deciding.

  3. subprocess is just a means to execute something and capture its output or errors. Do what is recommended. If you don't care about the output or exit code, just ignore it, and pass subprocess.PIPE (or subprocess.DEVNULL) for stdout/stderr so nothing is printed on the console.
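To illustrate points 1 and 2, a small sketch; the command and the log file name are only examples:

import subprocess

# Run the command without shell=True (arguments are passed as a list) and
# send its stdout/stderr to a log file instead of the console.
log = open("child.log", "w")          # any valid file descriptor works here
p = subprocess.Popen(["ls", "-l"], stdout=log, stderr=subprocess.STDOUT)
p.wait()                              # or simply ignore the handle entirely
log.close()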

I am not sure why a standalone process is so important to you, unless the GIL is limiting your performance. Using threads instead of processes can also improve performance, because switching process context is much more expensive than switching thread context. If it is just some IO operations or simple tasks, using threads is more convenient.
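For IO-bound work a thread pool is usually enough; a minimal sketch (the URLs are only placeholders):

from concurrent.futures import ThreadPoolExecutor
import urllib.request

def fetch(url):
    # IO-bound work: the GIL is released while waiting on the network.
    with urllib.request.urlopen(url) as resp:
        return len(resp.read())

urls = ["https://example.com", "https://example.org"]
with ThreadPoolExecutor(max_workers=2) as pool:
    print(list(pool.map(fetch, urls)))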

python subprocess - detach a process

You need to ensure that the standard output and standard error of the third process are directed somewhere other than the pipe from which the af_audit_run.py is reading output.

What is going wrong with the existing code is that by using stdout=None, stderr=None you are requesting the default action (as if you had not used those keywords at all), which is to write to the same output streams as the parent process, in this case request_audit.py, using the file descriptors that are inherited when the subprocess is forked. This means that the top-level af_audit_run.py keeps waiting for output, because it does not see end-of-file on those streams until the third process has completed.

This can be seen in the output of lsof; in the following example, the third process is the command /bin/sleep 600 (see the test code at the end).

Here is part of the lsof output for the third process:

sleep   3057  myuser    0u   CHR 136,20      0t0      23 /dev/pts/20
sleep   3057  myuser    1w  FIFO   0,13      0t0 9441062 pipe
sleep   3057  myuser    2w  FIFO   0,13      0t0 9441063 pipe

and here is part of the lsof output for the top-level af_audit_run.py:

python3 3053  myuser    0u   CHR 136,20      0t0      23 /dev/pts/20
python3 3053  myuser    1u   CHR 136,20      0t0      23 /dev/pts/20
python3 3053  myuser    2u   CHR 136,20      0t0      23 /dev/pts/20
python3 3053  myuser    3r  FIFO   0,13      0t0 9441062 pipe
python3 3053  myuser    5r  FIFO   0,13      0t0 9441063 pipe

As you can see, the sleep process in this example (pid 3057) has its stdout (fd 1) and stderr (fd 2) streams connected to the write ends of the pipes which the top-level process (pid 3053) is reading from (note the matching pipe numbers in the second-last column), even though it is not directly the parent of that process.

You are specifying close_fds=True, but this is documented as follows:

"If close_fds is true, all file descriptors except 0, 1 and 2 will be closed before the child process is executed." (emphasis mine)

So it is not having any effect on the stdin, stdout or stderr streams, although any other open file descriptors would be closed in the child.

If instead of stdout=None, stderr=None you use stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, then this will explicitly direct these streams to the null device (/dev/null on Linux), and then the af_audit_run.py does not have to wait for it.

Some output from lsof in this case:

sleep   3318  myuser    0u   CHR 136,20      0t0      23 /dev/pts/20
sleep   3318  myuser    1u   CHR    1,3      0t0       6 /dev/null
sleep   3318  myuser    2u   CHR    1,3      0t0       6 /dev/null

It is also possible to use stdin=subprocess.DEVNULL, so that if the process tries to read it will see end-of-file. In this example I have not done so, and its input is still connected to the terminal device, although this does not affect whether af_audit_run.py waits for it.
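If you do want stdin redirected as well, the call would look like this (a variant of the test code below, not something the original run required):

import subprocess

# Same call as in request_audit.py below, but with stdin also pointed at
# the null device so the child sees end-of-file if it ever tries to read.
subprocess.Popen("/bin/sleep 600", shell=True,
                 stdin=subprocess.DEVNULL,
                 stdout=subprocess.DEVNULL,
                 stderr=subprocess.DEVNULL)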



Test code

af_audit_run.py

import subprocess

cmd = "python3 request_audit.py"

p = subprocess.Popen(cmd, shell=True,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
result, error = p.communicate()
print(result.decode('utf-8'))
print(error.decode('utf-8'))

request_audit.py

import subprocess

cmd = "/bin/sleep 600"

subprocess.Popen(cmd, shell=True,
                 stdout=subprocess.DEVNULL,
                 stderr=subprocess.DEVNULL)

print(5)

Python spawn detached non-Python process on Linux?

A working solution can be found in JonMc's answer here. I use it to open documents using 'xdg-open'.

You can change the stderr argument to stderr=open('/dev/null', 'w') if you do not want a logfile.
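The linked answer is not reproduced here, but the general shape is roughly as follows. The logfile path is arbitrary, and start_new_session is one way (on POSIX, Python 3.2+) to let the child outlive the parent; the linked answer may do this differently:

import subprocess

# Sketch: open a document with xdg-open, detached from this script.
# start_new_session puts the child in its own session so it keeps running
# after the parent exits; the logfile path is just an example.
with open("/tmp/xdg-open.log", "w") as log:   # or open('/dev/null', 'w')
    subprocess.Popen(["xdg-open", "document.pdf"],
                     stdin=subprocess.DEVNULL,
                     stdout=log,
                     stderr=log,
                     start_new_session=True)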

Script that installs itself to another location and runs from there

  1. Do not use "cp" to copy the script; use shutil.copy() instead.

  2. Instead of "python3", use sys.executable to start the copy with the same interpreter the original was started with (see the combined sketch after this list).

  3. subprocess.Popen() without any further arguments will work as long as the child process isn't writing anything to stdout or stderr and isn't expecting any input. In general, if you use PIPEs the child can block unless communicate() is called or the pipes are being read from and written to. You have to use os.fork() to detach from the parent (research how daemons are made), then use:


p = subprocess.Popen([sys.executable, new_path],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)
p.stdin.close()  # if you do not need it
p.communicate()

Alternatively, do not use subprocess.PIPE for stdin, stdout and stderr, and make sure that a terminal is bound to the child when forking. After os.fork() you can do whatever you want with the parent and whatever you want with the child. You can bind the child to whatever terminal you like, or start a new shell, e.g.:

pid = os.fork()
if pid == 0:
    # Code in this block runs in the child.
    # <code to change the terminal and appropriately point
    #  sys.stdout, sys.stderr and sys.stdin>
    # Note: the command after -c must be a single string.
    subprocess.Popen([os.getenv("SHELL"), "-c",
                      f"{sys.executable} {new_path}"]).communicate()

  4. Note that you can point the stdin, stderr and stdout arguments to file-like objects instead of PIPEs if you need to.

  5. To detach on Windows you can use os.startfile(), or use subprocess.Popen(...).communicate() in a thread. If you then sys.exit() the parent, the child should stay open. (That is how it worked on Windows XP with Python 2.x; I haven't tried it with Python 3 or on newer Windows versions.)
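Putting points 1 and 2 and the detach advice together, a rough sketch of the copy-and-relaunch step might look like this; the target path and the start_new_session flag are illustrative assumptions, not the original poster's code:

import shutil
import subprocess
import sys

new_path = "/opt/myscript/installed_copy.py"   # example target path

# 1. Copy the running script with shutil.copy() instead of shelling out to "cp".
shutil.copy(__file__, new_path)

# 2. Relaunch the copy with the same interpreter that is running this script.
subprocess.Popen([sys.executable, new_path],
                 stdin=subprocess.DEVNULL,
                 stdout=subprocess.DEVNULL,
                 stderr=subprocess.DEVNULL,
                 start_new_session=True)       # POSIX-only: detach into a new session

# 3. The original can now exit; the relaunched copy keeps running.
sys.exit(0)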


