When a Parent Process Is Killed by "Kill -9", Will Subprocess Also Be Killed

When a parent process is killed by kill -9 , will subprocess also be killed?

You have to make the subprocesses daemonic in order to have them killed when the parent is killed (or dies); otherwise they are adopted by init(1).

When a parent process is killed by 'kill -9', how to terminate child processes

The parent, once killed via SIGKILL, simply stops existing and thus can no longer send signals to its child processes. The child process would have to monitor its parent by itself. When the parent gets killed, the child's PPID changes to 1 - the child can watch for this and act on its parent being killed. But to "ensure that the child process will always be closed with the parent process" is not possible from the parent process - that's the nature of SIGKILL. However, if you feel brave you can always hack the kernel source and redefine SIGKILL to something else, but I wouldn't recommend it :)
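As a minimal sketch of the PPID check described above (assuming the child is not running under a subreaper, in which case the new parent may not be PID 1):

```python
import os

def parent_is_dead():
    # After the original parent dies, the child is reparented to
    # init, so its PPID becomes 1 (on systems without a subreaper).
    return os.getppid() == 1
```

The child can call this periodically and shut itself down when it returns True.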

Kill Child Process if Parent is killed in Python

I've encountered the same problem myself; here is my solution:

Before calling p.start(), set p.daemon = True. Then, as mentioned in the python.org multiprocessing documentation:

When a process exits, it attempts to terminate all of its daemonic child processes.
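A minimal sketch of this (the worker loop is a placeholder for real work):

```python
import multiprocessing
import time

def worker():
    # Placeholder child task; loops until terminated.
    while True:
        time.sleep(1)

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker)
    p.daemon = True   # must be set before p.start()
    p.start()
    # When this parent process exits normally, multiprocessing
    # attempts to terminate the daemonic child for us.
```

Note this covers a normal parent exit; a parent killed with SIGKILL gets no chance to run this cleanup.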

Python: how to kill child process(es) when parent dies?

Heh, I was just researching this myself yesterday! Assuming you can't alter the child program:

On Linux, prctl(PR_SET_PDEATHSIG, ...) is probably the only reliable choice. (If it's absolutely necessary that the child process be killed, then you might want to set the death signal to SIGKILL instead of SIGTERM; the code you linked to uses SIGTERM, but the child does have the option of ignoring SIGTERM if it wants to.)
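A Linux-only sketch of the prctl approach via ctypes (the `sleep 60` child is just a placeholder command):

```python
import ctypes
import signal
import subprocess

PR_SET_PDEATHSIG = 1  # option number from prctl(2); Linux-only

def set_pdeathsig(sig=signal.SIGKILL):
    # The returned callable runs in the child between fork() and exec()
    # (via preexec_fn), asking the kernel to deliver `sig` to the child
    # when the parent dies.
    def _set():
        libc = ctypes.CDLL("libc.so.6", use_errno=True)
        if libc.prctl(PR_SET_PDEATHSIG, sig) != 0:
            raise OSError(ctypes.get_errno(), "prctl(PR_SET_PDEATHSIG) failed")
    return _set

child = subprocess.Popen(["sleep", "60"],
                         preexec_fn=set_pdeathsig(signal.SIGKILL))
# If this parent now dies -- even via `kill -9` -- the kernel sends
# SIGKILL to the child on our behalf.
```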

On Windows, the most reliable option is to use a Job object. The idea is that you create a "Job" (a kind of container for processes), place the child process into the Job, and set the magic option that says "when no one holds a 'handle' to this Job, kill the processes that are in it". By default, the only 'handle' to the Job is the one your parent process holds; when the parent process dies, the OS goes through and closes all its handles, notices that this leaves no open handles for the Job, and then kills the child, as requested. (If you have multiple child processes, you can assign them all to the same Job.)

This answer has sample code for doing this, using the win32api module. That code uses CreateProcess to launch the child instead of subprocess.Popen, because it needs a "process handle" for the spawned child, and CreateProcess returns one by default. If you'd rather use subprocess.Popen, here's an (untested) copy of the code from that answer that uses subprocess.Popen and OpenProcess instead of CreateProcess:

import subprocess
import win32api
import win32con
import win32job

hJob = win32job.CreateJobObject(None, "")
extended_info = win32job.QueryInformationJobObject(hJob, win32job.JobObjectExtendedLimitInformation)
extended_info['BasicLimitInformation']['LimitFlags'] = win32job.JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE
win32job.SetInformationJobObject(hJob, win32job.JobObjectExtendedLimitInformation, extended_info)

child = subprocess.Popen(...)
# Convert process id to process handle:
perms = win32con.PROCESS_TERMINATE | win32con.PROCESS_SET_QUOTA
hProcess = win32api.OpenProcess(perms, False, child.pid)

win32job.AssignProcessToJobObject(hJob, hProcess)

Technically, there's a tiny race condition here: if the child dies between the Popen and OpenProcess calls, it is never assigned to the Job. You can decide whether you want to worry about that.

One downside to using a Job object is that on Vista or Win7, if your program is launched from the Windows shell (i.e., by clicking on an icon), there will probably already be a Job object assigned, and trying to create a new Job object will fail. Win8 fixes this by allowing Job objects to be nested; and if your program is run from the command line, it should be fine.

If you can modify the child (e.g., like when using multiprocessing), then probably the best option is to somehow pass the parent's PID to the child (e.g. as a command line argument, or in the args= argument to multiprocessing.Process), and then:

On POSIX: Spawn a thread in the child that just calls os.getppid() occasionally, and if the return value ever stops matching the pid passed in from the parent, then call os._exit(1). (This approach is portable to all Unixes, including OS X, while the prctl trick is Linux-specific.)
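The POSIX watchdog thread can be sketched like this (how `parent_pid` reaches the child - command-line argument, multiprocessing args, etc. - is up to you):

```python
import os
import threading
import time

def watch_parent(parent_pid, interval=1.0):
    # Runs inside the child: poll os.getppid(); if it no longer matches
    # the PID the parent handed us, the parent has died, so exit hard.
    while True:
        if os.getppid() != parent_pid:
            os._exit(1)
        time.sleep(interval)

# In the child, once parent_pid has been received:
# threading.Thread(target=watch_parent, args=(parent_pid,), daemon=True).start()
```

Making the thread daemonic ensures it never keeps the child alive on its own.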

On Windows: Spawn a thread in the child that uses OpenProcess and os.waitpid. Example using ctypes:

import os
from ctypes import WinDLL
from ctypes.wintypes import DWORD, BOOL, HANDLE
# Magic value from http://msdn.microsoft.com/en-us/library/ms684880.aspx
SYNCHRONIZE = 0x00100000
kernel32 = WinDLL("kernel32.dll")
kernel32.OpenProcess.argtypes = (DWORD, BOOL, DWORD)
kernel32.OpenProcess.restype = HANDLE
parent_handle = kernel32.OpenProcess(SYNCHRONIZE, False, parent_pid)
# Block until the parent exits (on Windows, os.waitpid accepts a process handle)
os.waitpid(parent_handle, 0)
os._exit(0)

This avoids any of the possible issues with job objects that I mentioned.

If you want to be really, really sure, then you can combine all these solutions.

Hope that helps!

How to cleanly kill subprocesses in python

There are 2 main issues here:

First issue: if you're using shell=True, you're killing the shell that runs the process, not the process itself. With its parent killed, the child process isn't killed immediately and goes defunct.

In your case, you're using sleep, which is not built-in, so you can drop shell=True, and Popen will yield the actual process id: p.terminate() will work.

You can (and should) avoid shell=True most of the time, even if it requires extra Python coding effort (piping two commands together, redirecting input/output - all those cases can be handled nicely by one or several Popen calls without shell=True).
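For example, a shell pipeline like `echo hello | tr a-z A-Z` can be expressed with two Popen objects and no shell:

```python
import subprocess

# Equivalent of the shell pipeline `echo hello | tr a-z A-Z`,
# but without shell=True:
p1 = subprocess.Popen(["echo", "hello"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["tr", "a-z", "A-Z"],
                      stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()          # so p2 gets SIGPIPE if p1 dies first
out, _ = p2.communicate()  # out == b"HELLO\n"
p1.wait()
# p2.terminate() here would act on `tr` itself, not on a wrapping shell.
```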

And (second issue): if the process is still defunct after terminating it with that fix, you can call p.wait() (from this question). Calling terminate alone isn't enough; the child must be reaped, or the Popen object garbage collected.
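Putting both fixes together (with `sleep 60` standing in for the real command):

```python
import subprocess

p = subprocess.Popen(["sleep", "60"])   # no shell=True: p.pid is sleep itself
p.terminate()   # SIGTERM goes to the real process, not a wrapping shell
p.wait()        # reap it, so it doesn't linger as a defunct (zombie) entry
```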

Why child process still alive after parent process was killed in Linux?

No, when you kill a process alone, it will not kill its children.

You have to send the signal to the process group if you want all processes in that group to receive it.

For example, if your parent process id is 1234, you specify the process group by prefixing the pid with a minus sign:

kill -9 -1234

Otherwise, orphans will be linked to init, as shown by your third screenshot (PPID of the child has become 1).
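In Python, the same group-wide kill can be sketched with os.killpg (the `sleep 60` child is just a placeholder):

```python
import os
import signal
import subprocess

# start_new_session=True runs the child in a new session, so the child's
# process group id equals its own pid and we can signal the whole group.
p = subprocess.Popen(["sleep", "60"], start_new_session=True)
pgid = os.getpgid(p.pid)
os.killpg(pgid, signal.SIGKILL)   # equivalent of `kill -9 -<pgid>`
p.wait()
```

Any grandchildren the child spawns stay in the same group (unless they create their own), so they get the signal too.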

Python Kill all subprocesses if one of them is finished

Unix and Unix-like operating systems have a SIGCHLD signal, which is sent by the OS kernel to the parent process when a child process terminates. If you have no handler for this signal, SIGCHLD is ignored by default. But if you have a handler function for it, you tell the kernel: "hey, I have a handler function; when a child process terminates, please run it."

In your case, you have many child processes; if one of them is killed or finishes its execution (via the exit() syscall), the kernel will send a SIGCHLD signal to the parent process, which is your shared code.

We have a handler for the SIGCHLD signal: the chld_handler() function. When one of the child processes terminates, SIGCHLD is sent to the parent process and chld_handler is triggered by the OS kernel. (This is called signal catching.)

With signal.signal(signal.SIGCHLD, chld_handler) we tell the kernel: "I have a handler function for SIGCHLD; don't ignore it when a child terminates." Inside chld_handler, which runs when SIGCHLD is delivered, we call signal.signal(signal.SIGCHLD, signal.SIG_IGN) to tell the kernel: "I no longer have a handler; ignore SIGCHLD." We do that because we don't need it anymore - we are about to kill the other children ourselves by looping over procs and calling p.terminate().

The complete code would look like this:

import ctypes
import os
import signal
import subprocess

libc = ctypes.CDLL("libc.so.6")

def set_pdeathsig(sig=signal.SIGTERM):
    def callable():
        return libc.prctl(1, sig)

    return callable

def chld_handler(sig, frame):
    signal.signal(signal.SIGCHLD, signal.SIG_IGN)
    print("one of the childs dead")
    for p in procs:
        p.terminate()

signal.signal(signal.SIGCHLD, chld_handler)

if __name__ == "__main__":
    procs = []
    for i in range((os.cpu_count() * 2) - 1):
        proc = subprocess.Popen(['python', "pythonscript_i_need_to_run/"], preexec_fn=set_pdeathsig(signal.SIGTERM))
        procs.append(proc)
    procs.append(subprocess.Popen(["python", "other_pythonscript_i_need_to_run"], preexec_fn=set_pdeathsig(signal.SIGTERM)))
    for proc in procs:
        proc.wait()

There is much more detail to the SIGCHLD signal, the Python signal library, and zombie processes; I won't cover everything here because there is so much of it, and I'm not an expert on all the deep details.

I hope the information above gives you some insight. If you think I'm wrong somewhere, please correct me.


