Run a Process to /dev/null in Python

Run a process to /dev/null in Python

For Python 3.3 and later, just use subprocess.DEVNULL:

from subprocess import DEVNULL, call

call(["/some/path/and/exec", "arg"], stdout=DEVNULL, stderr=DEVNULL)

Note that this redirects both stdout and stderr. If you only wanted to redirect stdout (as your sh line implies you might), leave out the stderr=DEVNULL part.
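On Python 3.5 and later the same redirection works with subprocess.run, which is now the recommended entry point. A minimal runnable sketch (re-running the interpreter stands in for your actual command):

```python
import subprocess
import sys

# subprocess.run (Python 3.5+) accepts the same stdout/stderr arguments;
# the command here is just a runnable placeholder.
result = subprocess.run(
    [sys.executable, "-c", "print('discarded')"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
print(result.returncode)  # 0: the child ran, but its output went nowhere
```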

If you need to be compatible with older versions, you can use os.devnull. So, this works for everything from 2.6 on (including 3.3):

import os
from subprocess import call

with open(os.devnull, 'w') as devnull:
    call(["/some/path/and/exec", "arg"], stdout=devnull, stderr=devnull)

Or, for 2.4 and later (still including 3.3):

import os
from subprocess import call

devnull = open(os.devnull, 'w')
try:
    call(["/some/path/and/exec", "arg"], stdout=devnull, stderr=devnull)
finally:
    devnull.close()

Before 2.4, there was no subprocess module, so that's as far back as you can reasonably go.

Cross platform /dev/null in Python

How about os.devnull?

import os

# os.devnull is "/dev/null" on POSIX and "nul" on Windows
f = open(os.devnull, "w")
zookeeper.set_log_stream(f)
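The point of os.devnull is that it resolves to the right null device on each platform, so the same code runs unmodified on POSIX and Windows. A quick sketch:

```python
import os

# os.devnull is "/dev/null" on POSIX systems and "nul" on Windows,
# so this works cross-platform without any conditionals.
with open(os.devnull, "w") as sink:
    sink.write("this goes nowhere\n")

print(os.devnull)  # e.g. /dev/null on Linux and macOS
```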

Can I redirect all output to /dev/null from within python?

import sys

# Keep references to the real streams so they can be restored later.
old_stdout, old_stderr = sys.stdout, sys.stderr
sys.stdout = open('/dev/null', 'w')
sys.stderr = open('/dev/null', 'w')
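The snippet above replaces the streams for the rest of the program; if you only want the redirection to be temporary, contextlib.redirect_stdout (Python 3.4+) saves and restores sys.stdout for you. A sketch:

```python
import contextlib
import os

# Silence stdout only inside the with-block; it is restored on exit.
with open(os.devnull, "w") as devnull:
    with contextlib.redirect_stdout(devnull):
        print("hidden")

print("visible again")
```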

Redirecting or appending to /dev/null

It seems the behaviour when redirecting to /dev/null via either truncate (>) or append (>>) is identical. A quick test shows that it also makes no difference timing-wise:

Content to print:

for i in range(10**4):
print("content")

Test time command:

 time python printlots.py >> /dev/null ; time python printlots.py > /dev/null

Result:

$ time python printlots.py >> /dev/null ; time python printlots.py > /dev/null

real 0m0.094s
user 0m0.047s
sys 0m0.047s

real 0m0.096s
user 0m0.031s
sys 0m0.063s

So it won't make a measurable difference which one you use. The reason both work is presumably to give developers more flexibility when using /dev/null in their code: if your program takes its output file as a parameter and appends by default, supporting append to /dev/null means you don't have to special-case that target. At least, that's what this answer assumes.
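The same symmetry holds from inside Python: opening os.devnull for truncation ("w", like >) or append ("a", like >>) behaves identically, and the device stays empty either way. A small sketch:

```python
import os

# Truncate mode ("w", like shell >) and append mode ("a", like shell >>)
# behave the same against the null device: writes are simply discarded.
for mode in ("w", "a"):
    with open(os.devnull, mode) as sink:
        sink.write("discarded\n")

print(os.path.getsize(os.devnull))  # 0: nothing accumulates
```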

Python start multiprocessing without print/logging statements from processes

If I understand you correctly, you want to suppress the printing from one of the processes.
You can achieve this by redirecting the output of the Python interpreter:
add sys.stdout = open("/dev/null", 'w') to the process you want to "mute".

Full working example below.

from multiprocessing import Process
from time import sleep
import sys

def start_viewer():
    sys.stdout = open("/dev/null", 'w')
    while True:
        print("start_viewer")
        sleep(1)

def start_server():
    while True:
        print("start_server")
        sleep(1)

if __name__ == '__main__':
    processes = [
        Process(target=start_viewer, args=()),
        Process(target=start_server, args=()),
    ]

    for p in processes:
        p.start()

Be aware that writing to /dev/null sends the prints nowhere; if you want to keep the output, redirect to a text file instead. Also, for multi-OS support you should use os.devnull.
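Both points above can be combined: point sys.stdout at a regular file to keep the output per process, or pass os.devnull to discard it portably. A sketch, where the worker function and log filename are illustrative:

```python
import sys
from multiprocessing import Process

def worker(logfile):
    # Send this process's prints to its own file; pass os.devnull
    # as logfile to discard them instead. buffering=1 line-buffers
    # so lines appear in the file as soon as they are printed.
    sys.stdout = open(logfile, "w", buffering=1)
    print("hello from worker")

if __name__ == "__main__":
    p = Process(target=worker, args=("viewer.log",))
    p.start()
    p.join()
```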

Subprocess, stderr to DEVNULL but errors are printed

Run it with setsid (just add that string in front of the command and arguments). That will stop it from opening /dev/tty to report the malloc errors. It will also prevent terminal signals, including SIGHUP when the terminal is closed, from affecting the process, which may be a good or a bad thing.

Alternatively, set the environment variable LIBC_FATAL_STDERR_ to some nonempty string; searching for that variable name turns up several similar questions.
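From Python you can get the setsid effect without the external binary: subprocess accepts start_new_session=True (Python 3.2+), which calls setsid() in the child, and the environment variable can be set on the same call. A sketch with a placeholder command:

```python
import os
import subprocess
import sys

# LIBC_FATAL_STDERR_ makes glibc report fatal errors on stderr instead
# of opening /dev/tty; start_new_session=True detaches the child from
# the controlling terminal (the Python-level equivalent of setsid).
env = dict(os.environ, LIBC_FATAL_STDERR_="1")
result = subprocess.run(
    [sys.executable, "-c", "print('child ran')"],  # placeholder command
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
    start_new_session=True,
    env=env,
)
print(result.returncode)  # 0
```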


