Run Child Processes as Different User from a Long Running Python Process

Since you mentioned a daemon, I can conclude that you are running on a Unix-like operating system. This matters, because how to do this depends on the kind of operating system. This answer applies only to Unix, including Linux and Mac OS X.

  1. Define a function that will set the gid and uid of the running process.
  2. Pass this function as the preexec_fn parameter to subprocess.Popen.

subprocess.Popen uses the fork/exec model to run your preexec_fn. That is equivalent to calling os.fork(), then preexec_fn() (in the child process), then os.exec() (in the child process), in that order. Since os.setuid, os.setgid, and preexec_fn are all supported only on Unix, this solution is not portable to other kinds of operating systems.
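
As an aside, on Python 3.9 and newer, subprocess.Popen can perform the demotion itself through its user and group parameters (these are also Unix-only). A minimal sketch; the command and user name are placeholders:

import subprocess

# Python 3.9+: Popen drops privileges itself; no preexec_fn needed.
# 'someuser' and the command are placeholders for illustration.
process = subprocess.Popen(
    ['/usr/bin/id'],
    user='someuser',   # accepts a user name or a numeric uid
    group='someuser',  # accepts a group name or a numeric gid
)
process.wait()

On older Pythons, the preexec_fn approach below is the way to go.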

The following script (Python 3) demonstrates how to do this:

import os
import pwd
import subprocess
import sys

def main(my_args=None):
    if my_args is None:
        my_args = sys.argv[1:]
    user_name, cwd = my_args[:2]
    args = my_args[2:]
    pw_record = pwd.getpwnam(user_name)
    user_name = pw_record.pw_name
    user_home_dir = pw_record.pw_dir
    user_uid = pw_record.pw_uid
    user_gid = pw_record.pw_gid
    env = os.environ.copy()
    env['HOME'] = user_home_dir
    env['LOGNAME'] = user_name
    env['PWD'] = cwd
    env['USER'] = user_name
    report_ids('starting ' + str(args))
    process = subprocess.Popen(
        args, preexec_fn=demote(user_uid, user_gid), cwd=cwd, env=env
    )
    result = process.wait()
    report_ids('finished ' + str(args))
    print('result', result)

def demote(user_uid, user_gid):
    def result():
        report_ids('starting demotion')
        # Drop the group first: once the uid is dropped, the process no
        # longer has permission to change its gid.
        os.setgid(user_gid)
        os.setuid(user_uid)
        report_ids('finished demotion')
    return result

def report_ids(msg):
    print('uid, gid = %d, %d; %s' % (os.getuid(), os.getgid(), msg))

if __name__ == '__main__':
    main()

You can invoke this script like this:

Start as root...

(hale)/tmp/demo$ sudo bash --norc
(root)/tmp/demo$ ls -l
total 8
drwxr-xr-x 2 hale wheel 68 May 17 16:26 inner
-rw-r--r-- 1 hale staff 1836 May 17 15:25 test-child.py

Become non-root in a child process...

(root)/tmp/demo$ python test-child.py hale inner /bin/bash --norc
uid, gid = 0, 0; starting ['/bin/bash', '--norc']
uid, gid = 0, 0; starting demotion
uid, gid = 501, 20; finished demotion
(hale)/tmp/demo/inner$ pwd
/tmp/demo/inner
(hale)/tmp/demo/inner$ whoami
hale

When the child process exits, we go back to root in the parent ...

(hale)/tmp/demo/inner$ exit
exit
uid, gid = 0, 0; finished ['/bin/bash', '--norc']
result 0
(root)/tmp/demo$ pwd
/tmp/demo
(root)/tmp/demo$ whoami
root

Note that having the parent process wait around for the child process to exit is for demonstration purposes only. I did this so that the parent and child could share a terminal. A daemon would have no terminal and would seldom wait around for a child process to exit.
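
In such a daemon you would typically keep the Popen objects around and reap finished children without blocking, for example by polling them. A minimal sketch (the loop is illustrative; process is the Popen object from the script above):

import time

procs = [process]  # Popen objects started earlier

while procs:
    for proc in list(procs):
        # poll() returns None while the child is still running, and
        # the exit status once it has terminated (reaping the child).
        if proc.poll() is not None:
            print('child %d exited with status %d' % (proc.pid, proc.returncode))
            procs.remove(proc)
    time.sleep(1)  # a real daemon would do its other work here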

Running a process as a different user from Python

The following answer has a really nice approach for this: https://stackoverflow.com/a/6037494/505154

There is a working code example there, but in summary: use subprocess.Popen() with a preexec_fn that sets the uid and gid (and an environment to match), so that the subprocess executes as another user.
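
Condensed to its essentials, the approach looks roughly like this (a minimal sketch; the user name and command are placeholders):

import os
import pwd
import subprocess

def demote(uid, gid):
    def set_ids():
        os.setgid(gid)  # drop the group first, while still privileged
        os.setuid(uid)
    return set_ids

pw = pwd.getpwnam('someuser')  # placeholder user
env = dict(os.environ, HOME=pw.pw_dir, USER=pw.pw_name, LOGNAME=pw.pw_name)
subprocess.Popen(['/usr/bin/id'],
                 preexec_fn=demote(pw.pw_uid, pw.pw_gid), env=env).wait()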

Run a program from Python, and have it continue to run after the script is killed

The usual way to do this on Unix systems is to fork and exit if you're the parent. Have a look at os.fork().

Here's a function that does the job:

import os
import sys

def spawnDaemon(func):
    # Do the UNIX double-fork magic; see Stevens' "Advanced
    # Programming in the UNIX Environment" for details (ISBN 0201563177).
    try:
        pid = os.fork()
        if pid > 0:
            # Parent process: return and keep running.
            return
    except OSError as e:
        print("fork #1 failed: %d (%s)" % (e.errno, e.strerror), file=sys.stderr)
        sys.exit(1)

    os.setsid()

    # do second fork
    try:
        pid = os.fork()
        if pid > 0:
            # exit from second parent
            sys.exit(0)
    except OSError as e:
        print("fork #2 failed: %d (%s)" % (e.errno, e.strerror), file=sys.stderr)
        sys.exit(1)

    # do stuff
    func()

    # all done
    os._exit(os.EX_OK)
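
For example, to launch a command that keeps running after the calling script exits (the command here is just a placeholder):

import subprocess

# Detach a worker; the parent can exit while 'sleep 60' keeps running.
spawnDaemon(lambda: subprocess.call(['sleep', '60']))
print('spawnDaemon returned; the parent may now exit')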

Does Python's multiprocessing package's spawning use the file state when the parent process started or the file state at the time of process spawning?

The answer to your question depends on the operating system.

On Linux, multiprocessing uses the fork system call by default to create child processes. As a result, the child process "inherits" the already-imported Python modules from the parent and does not re-read their sources. That is, on Linux the child will not see the changes.

On Windows, multiprocessing creates child processes using _winapi.CreateProcess. The child initialises itself from scratch, that is, it imports all source files again, including the modified ones.

Proof.

Here is a small example where the main process modifies one of the source files.

somelib.py: prints out the process ID of the process that loads it.

import os

print("SomeLib loaded in process", os.getpid())

test.py: patches somelib.py before spawning the child process

from multiprocessing import Process
# somelib prints the process ID and, if patched, an extra line
import somelib

def f(name):
    print('hello', name)

if __name__ == '__main__':
    print("The main process is patching SomeLib")
    with open("somelib.py", "a+") as patch:
        patch.write("\n\nprint('SomeLib is patched')")
    p = Process(target=f, args=('bob',))
    # Spawn the child
    p.start()
    p.join()

The output on Linux:

SomeLib loaded in process 70
The main process is patching SomeLib
hello bob

somelib.py was modified, but the child ignored that because of fork.

The output on Windows:

SomeLib loaded in process 22512
The main process is patching SomeLib
SomeLib loaded in process 17008
SomeLib is patched
hello bob

See? The child with pid 17008 "re-loaded" somelib.py and processed the modified one.
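
As a side note, if you want the Windows-like behaviour on Linux as well, you can select the spawn start method explicitly with multiprocessing.set_start_method (available since Python 3.4); since Python 3.8, macOS also defaults to spawn. A minimal sketch:

import multiprocessing

def f(name):
    print('hello', name)

if __name__ == '__main__':
    # Force 'spawn': the child starts a fresh interpreter and
    # re-imports its modules, as it does by default on Windows.
    multiprocessing.set_start_method('spawn')
    p = multiprocessing.Process(target=f, args=('bob',))
    p.start()
    p.join()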


