Kill Child Process When Parent Process Is Killed

How to make child process die after parent exits?

A child can ask the kernel to deliver SIGHUP (or another signal) when its parent dies by specifying the PR_SET_PDEATHSIG option in the prctl() syscall, like this:

prctl(PR_SET_PDEATHSIG, SIGHUP);

See man 2 prctl for details.

Edit: This is Linux-only
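The same call can be made from Python via ctypes. A minimal sketch, assuming a Linux system with glibc (the PR_SET_PDEATHSIG and PR_GET_PDEATHSIG constants are taken from <linux/prctl.h>):

```python
import ctypes
import signal

# Constants from <linux/prctl.h> (Linux-only)
PR_SET_PDEATHSIG = 1
PR_GET_PDEATHSIG = 2

libc = ctypes.CDLL("libc.so.6", use_errno=True)

def set_pdeathsig(sig=signal.SIGHUP):
    """Ask the kernel to deliver `sig` to this process when its parent dies."""
    if libc.prctl(PR_SET_PDEATHSIG, sig, 0, 0, 0) != 0:
        raise OSError(ctypes.get_errno(), "prctl(PR_SET_PDEATHSIG) failed")
```

A typical use is to call set_pdeathsig() in the child right after fork(), for example via the preexec_fn argument of subprocess.Popen, before the real program is exec'd.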

Are child processes created with fork() automatically killed when the parent is killed?

No. If the parent is killed, its children become children of the init process (which has process ID 1 and is launched by the kernel as the first user-space process).

The init process waits for its adopted children and reaps them when they terminate, thus freeing the resources associated with their exit status.
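This adoption is easy to observe from Python. A small POSIX-only sketch (on modern Linux the adopter may be a "subreaper" such as a session manager rather than PID 1 itself):

```python
import os
import time

def orphan_ppid():
    """Fork a grandchild, let its parent exit, and report who adopted it.

    Returns the grandchild's new parent PID (typically 1, i.e. init)."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                      # child
        if os.fork() == 0:            # grandchild
            time.sleep(0.2)           # give the child time to exit
            os.write(w, str(os.getppid()).encode())
            os._exit(0)
        os._exit(0)                   # child exits, orphaning the grandchild
    os.waitpid(pid, 0)                # reap the child
    os.close(w)
    new_parent = int(os.read(r, 32))  # blocks until the grandchild reports
    os.close(r)
    return new_parent
```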

The question was already discussed with quality answers here:
How to make child process die after parent exits?

Kill child process when parent process is killed

From this forum, credit to 'Josh'.

Application.Quit() and Process.Kill() are possible solutions, but have proven to be unreliable. When your main application dies, you are still left with child processes running. What we really want is for the child processes to die as soon as the main process dies.

The solution is to use "job objects" http://msdn.microsoft.com/en-us/library/ms682409(VS.85).aspx.

The idea is to create a "job object" for your main application, and register your child processes with the job object. If the main process dies, the OS will take care of terminating the child processes.

public enum JobObjectInfoType
{
    AssociateCompletionPortInformation = 7,
    BasicLimitInformation = 2,
    BasicUIRestrictions = 4,
    EndOfJobTimeInformation = 6,
    ExtendedLimitInformation = 9,
    SecurityLimitInformation = 5,
    GroupInformation = 11
}

[StructLayout(LayoutKind.Sequential)]
public struct SECURITY_ATTRIBUTES
{
    public int nLength;
    public IntPtr lpSecurityDescriptor;
    public int bInheritHandle;
}

[StructLayout(LayoutKind.Sequential)]
struct JOBOBJECT_BASIC_LIMIT_INFORMATION
{
    public Int64 PerProcessUserTimeLimit;
    public Int64 PerJobUserTimeLimit;
    public UInt32 LimitFlags;             // DWORD
    public UIntPtr MinimumWorkingSetSize; // SIZE_T
    public UIntPtr MaximumWorkingSetSize; // SIZE_T
    public UInt32 ActiveProcessLimit;     // DWORD
    public UIntPtr Affinity;              // ULONG_PTR
    public UInt32 PriorityClass;          // DWORD
    public UInt32 SchedulingClass;        // DWORD
}

[StructLayout(LayoutKind.Sequential)]
struct IO_COUNTERS
{
    public UInt64 ReadOperationCount;
    public UInt64 WriteOperationCount;
    public UInt64 OtherOperationCount;
    public UInt64 ReadTransferCount;
    public UInt64 WriteTransferCount;
    public UInt64 OtherTransferCount;
}

[StructLayout(LayoutKind.Sequential)]
struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
{
    public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
    public IO_COUNTERS IoInfo;
    public UIntPtr ProcessMemoryLimit;    // SIZE_T
    public UIntPtr JobMemoryLimit;        // SIZE_T
    public UIntPtr PeakProcessMemoryUsed; // SIZE_T
    public UIntPtr PeakJobMemoryUsed;     // SIZE_T
}

public class Job : IDisposable
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr CreateJobObject(IntPtr lpJobAttributes, string lpName);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool SetInformationJobObject(IntPtr hJob, JobObjectInfoType infoType, IntPtr lpJobObjectInfo, uint cbJobObjectInfoLength);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool AssignProcessToJobObject(IntPtr job, IntPtr process);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool CloseHandle(IntPtr hObject);

    private IntPtr m_handle;
    private bool m_disposed = false;

    public Job()
    {
        m_handle = CreateJobObject(IntPtr.Zero, null);

        JOBOBJECT_BASIC_LIMIT_INFORMATION info = new JOBOBJECT_BASIC_LIMIT_INFORMATION();
        info.LimitFlags = 0x2000; // JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE

        JOBOBJECT_EXTENDED_LIMIT_INFORMATION extendedInfo = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
        extendedInfo.BasicLimitInformation = info;

        int length = Marshal.SizeOf(typeof(JOBOBJECT_EXTENDED_LIMIT_INFORMATION));
        IntPtr extendedInfoPtr = Marshal.AllocHGlobal(length);
        try
        {
            Marshal.StructureToPtr(extendedInfo, extendedInfoPtr, false);

            if (!SetInformationJobObject(m_handle, JobObjectInfoType.ExtendedLimitInformation, extendedInfoPtr, (uint)length))
                throw new Exception(string.Format("Unable to set information. Error: {0}", Marshal.GetLastWin32Error()));
        }
        finally
        {
            Marshal.FreeHGlobal(extendedInfoPtr);
        }
    }

    #region IDisposable Members

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    #endregion

    private void Dispose(bool disposing)
    {
        if (m_disposed)
            return;

        if (disposing) { }

        Close();
        m_disposed = true;
    }

    public void Close()
    {
        CloseHandle(m_handle);
        m_handle = IntPtr.Zero;
    }

    public bool AddProcess(IntPtr handle)
    {
        return AssignProcessToJobObject(m_handle, handle);
    }
}

Looking at the constructor ...

JOBOBJECT_BASIC_LIMIT_INFORMATION info = new JOBOBJECT_BASIC_LIMIT_INFORMATION();
info.LimitFlags = 0x2000;

The key here is to set up the job object properly. In the constructor I'm setting the "limits" to 0x2000, which is the numeric value of JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE.

MSDN defines this flag as:

Causes all processes associated with
the job to terminate when the last
handle to the job is closed.

Once this class is set up, you just have to register each child process with the job. For example:

[DllImport("user32.dll", SetLastError = true)]
public static extern uint GetWindowThreadProcessId(IntPtr hWnd, out uint lpdwProcessId);

Excel.Application app = new Excel.ApplicationClass();

uint pid = 0;
GetWindowThreadProcessId(new IntPtr(app.Hwnd), out pid);
job.AddProcess(Process.GetProcessById((int)pid).Handle);

Python: how to kill child process(es) when parent dies?

Heh, I was just researching this myself yesterday! Assuming you can't alter the child program:

On Linux, prctl(PR_SET_PDEATHSIG, ...) is probably the only reliable choice. (If it's absolutely necessary that the child process be killed, then you might want to set the death signal to SIGKILL instead of SIGTERM; the code you linked to uses SIGTERM, but the child does have the option of ignoring SIGTERM if it wants to.)

On Windows, the most reliable option is to use a Job object. The idea is that you create a "Job" (a kind of container for processes), place the child process into the Job, and set the magic option that says "when no one holds a handle for this Job, kill the processes that are in it". By default, the only handle to the Job is the one your parent process holds; when the parent process dies, the OS closes all of its handles, notices that no open handles for the Job remain, and kills the child, as requested. (If you have multiple child processes, you can assign them all to the same Job.)

This answer has sample code for doing this, using the win32api module. That code uses CreateProcess to launch the child instead of subprocess.Popen, because it needs a "process handle" for the spawned child, and CreateProcess returns one by default. If you'd rather use subprocess.Popen, here's an (untested) copy of the code from that answer that uses subprocess.Popen and OpenProcess instead of CreateProcess:

import subprocess
import win32api
import win32con
import win32job

hJob = win32job.CreateJobObject(None, "")
extended_info = win32job.QueryInformationJobObject(hJob, win32job.JobObjectExtendedLimitInformation)
extended_info['BasicLimitInformation']['LimitFlags'] = win32job.JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE
win32job.SetInformationJobObject(hJob, win32job.JobObjectExtendedLimitInformation, extended_info)

child = subprocess.Popen(...)
# Convert process id to process handle:
perms = win32con.PROCESS_TERMINATE | win32con.PROCESS_SET_QUOTA
hProcess = win32api.OpenProcess(perms, False, child.pid)

win32job.AssignProcessToJobObject(hJob, hProcess)

Technically, there's a tiny race condition here: the child might die in between the Popen and OpenProcess calls. You can decide whether you want to worry about that.

One downside to using a job object is that on Vista or Win7, if your program is launched from the Windows shell (i.e., by clicking an icon), there will probably already be a job object assigned to it, and trying to create a new job object will fail. Win8 fixes this by allowing job objects to be nested; alternatively, if your program is run from the command line, it should be fine.

If you can modify the child (e.g., when using multiprocessing), then probably the best option is to somehow pass the parent's PID to the child (e.g., as a command-line argument, or in the args= argument to multiprocessing.Process), and then:

On POSIX: Spawn a thread in the child that just calls os.getppid() occasionally, and if the return value ever stops matching the pid passed in from the parent, then call os._exit(). (This approach is portable to all Unixes, including OS X, while the prctl trick is Linux-specific.)
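The polling approach can be sketched as follows. The _exit parameter is a hypothetical hook, added only so the behaviour can be exercised without actually terminating the process; in real use the default os._exit is what you want:

```python
import os
import threading
import time

def watch_parent(parent_pid, interval=1.0, _exit=os._exit):
    """Start a daemon thread that polls os.getppid(); when it stops
    matching parent_pid, the parent has died, so terminate this process."""
    def poll():
        while True:
            if os.getppid() != parent_pid:
                _exit(1)          # parent is gone; die immediately
            time.sleep(interval)
    t = threading.Thread(target=poll, daemon=True)
    t.start()
    return t
```

The child would call watch_parent(pid_passed_from_parent) early in its startup.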

On Windows: Spawn a thread in the child that uses OpenProcess and os.waitpid. Example using ctypes:

import os
from ctypes import WinDLL, WinError
from ctypes.wintypes import DWORD, BOOL, HANDLE
# Magic value from http://msdn.microsoft.com/en-us/library/ms684880.aspx
SYNCHRONIZE = 0x00100000
kernel32 = WinDLL("kernel32.dll")
kernel32.OpenProcess.argtypes = (DWORD, BOOL, DWORD)
kernel32.OpenProcess.restype = HANDLE
parent_handle = kernel32.OpenProcess(SYNCHRONIZE, False, parent_pid)
# Block until parent exits
os.waitpid(parent_handle, 0)
os._exit(0)

This avoids any of the possible issues with job objects that I mentioned.

If you want to be really, really sure, then you can combine all these solutions.

Hope that helps!

Avoid killing children when parent process is killed

I would recommend against your design as it's quite error prone. A better solution would decouple the workers from the server using some sort of queuing system (RabbitMQ, Celery, Redis, ...).

Nevertheless, here are a couple of "hacks" you could try out.

  1. Turn your child processes into UNIX daemons. The python daemon module could be a starting point.
  2. Instruct your child processes to ignore the SIGINT signal. The service orchestrator might work around that by issuing a SIGTERM or SIGKILL signal if child processes refuse to die. You might need to disable that feature.

    To do so, just add the following line at the beginning of the function_wrapper function:

    signal.signal(signal.SIGINT, signal.SIG_IGN)

Kill Child Process if Parent is killed in Python

I've encountered the same problem myself; here's the solution I found:

Before calling p.start(), you may set p.daemon = True. Then, as mentioned in the python.org multiprocessing documentation:

When a process exits, it attempts to terminate all of its daemonic child processes.
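A minimal sketch of that approach (note the caveat: daemonic children are terminated when the parent exits cleanly, but not if the parent is killed with SIGKILL):

```python
import multiprocessing
import time

def worker():
    while True:
        time.sleep(1)

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker)
    p.daemon = True   # must be set before start()
    p.start()
    # ... do work ...
    # On normal exit, multiprocessing terminates the daemonic child.
```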

kill child processes when parent ends

You can do this kind of thing with a finally clause. A finally clause is executed after a try block, even if the try block threw an exception or execution was aborted by the user.

So one way to approach your problem would be the following:

  1. keep track of the process IDs of the child processes your script is spawning, and

  2. kill these processes in the finally clause.

try
{
    $process = Start-Process 'notepad.exe' -PassThru

    # give the script some time before ending
    Write-Host "Begin of sleep section"
    Start-Sleep -Seconds 5
    Write-Host "End of sleep section"
}
finally
{
    # Kill the process if it still exists after the script ends.
    # This throws an exception if the process ended before the script.
    Stop-Process -Id $process.Id
}


