Child Processes Created with Python Multiprocessing Module Won't Print

Python Multiprocessing disallows printing in main process after processes are created

The problem is that you are catching all exceptions. Your code was not passing the correct arguments to the Process constructor (which raised an AssertionError), but your bare except clause was silently swallowing the exception.

The exception being swallowed is:

Traceback (most recent call last):
  File "C:\Users\MiguelAngel\Downloads\test.py", line 19, in <module>
    process = multiprocessing.Process(scrape_retailer_product, args=(retailer_products[i+j]))
  File "C:\Users\MiguelAngel\AppData\Local\Programs\Python\Python38-32\lib\multiprocessing\process.py", line 82, in __init__
    assert group is None, 'group argument must be None for now'
AssertionError: group argument must be None for now

I suppose that scrape_retailer_product is the function that should be executed in the new process. Therefore, according to the documentation, the call to the constructor should be:

process = multiprocessing.Process(target=scrape_retailer_product,
                                  args=(retailer_products[i+j],))

If you want to catch all multiprocessing exceptions, you should catch multiprocessing.ProcessError. According to the documentation, it is the base class of all multiprocessing exceptions.
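For instance, a minimal sketch of the corrected pattern (the placeholder data and the worker body are assumptions, since the full script isn't shown):

import multiprocessing

retailer_products = ["product-a", "product-b", "product-c"]  # placeholder data

def scrape_retailer_product(product):
    print("scraping", product)  # the real scraping logic goes here

if __name__ == "__main__":
    processes = []
    for product in retailer_products:
        try:
            p = multiprocessing.Process(target=scrape_retailer_product, args=(product,))
            p.start()
            processes.append(p)
        except multiprocessing.ProcessError as exc:
            # only multiprocessing's own exceptions are handled here; a wrong
            # constructor call (AssertionError/TypeError) still surfaces,
            # unlike with a bare except:
            print("could not start worker:", exc)

    for p in processes:
        p.join()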

Python Multiprocessing runs entire Program instead of called function

Especially on Windows, the main entry point of a multiprocessing script must be guarded by if __name__ == "__main__": (see the "Safe importing of main module" section of the multiprocessing programming guidelines).

from multiprocessing import Process
import time
import os

def plint():
    print("here", os.getpid())

def parallel():
    p1 = Process(target=plint)
    p1.start()

def main():
    print("test")
    while True:
        parallel()
        time.sleep(2)

if __name__ == "__main__":  # guard keeps child processes from re-running the whole script on import
    main()

Print child process index when stdout is flushed

I think this will give you an idea:

from contextlib import redirect_stdout
from io import StringIO
from multiprocessing import current_process

def my_function():
    f = StringIO()
    with redirect_stdout(f):
        library_function()
    s = f.getvalue()
    lines = s.splitlines()
    pid = current_process().pid
    for line in lines:
        print(f'[{pid}] {line}', flush=True)

def library_function():
    for i in range(4):
        print('This is line', i, flush=True)

my_function()

Prints:

[18492] This is line 0
[18492] This is line 1
[18492] This is line 2
[18492] This is line 3

If you don't want to use the process id, you can always pass my_function an additional parameter, a process_number that varies from 1 to N, where N is the number of child processes, as in the sketch below.
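For example, a minimal sketch of that variant (the number of processes and the __main__ guard are assumptions):

from contextlib import redirect_stdout
from io import StringIO
from multiprocessing import Process

def library_function():
    for i in range(4):
        print('This is line', i, flush=True)

def my_function(process_number):
    # capture everything the library prints, then re-emit it with a prefix
    f = StringIO()
    with redirect_stdout(f):
        library_function()
    for line in f.getvalue().splitlines():
        print(f'[{process_number}] {line}', flush=True)

if __name__ == '__main__':
    N = 3  # number of child processes (assumed)
    workers = [Process(target=my_function, args=(n,)) for n in range(1, N + 1)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()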

Python: multiprocessing Queue.put() in module won't send anything to parent process

If I read it correctly, your intended submodule is the class Population. However, you start your process with a parameter of type ParallelEvaluator, and I can't see you passing your Queue q to the child Process. This is what the code you provided shows:

stop_event = Event()
q = Queue()
pe = neat.ParallelEvaluator(**args)

child_process = Process(target=p.run, args=(pe.evaluate, **args))
child_process.start()

Moreover, the following lines create a race condition:

q.put(True)
print(q.get())

The get call is like a pop: it takes an element and removes it from the queue. If your sub-process doesn't read the queue between these two lines (because it is busy), the parent takes the True back itself and it never reaches the child process. Hence, it is better to use multiple queues, one for each direction. Something like:

stop_event = Event()
q_in = Queue()
q_out = Queue()
pe = neat.ParallelEvaluator(**args)

child_process = Process(target=p.run, args=(pe.evaluate, **args))
child_process.start()

i = 0
while not stop_event.is_set():
    q_in.put(True)
    print(q_out.get())
    time.sleep(5)
    i += 1
    if i == 5:
        child_process.terminate()
        stop_event.set()

And this is your submodule, the Population class:

class Population(object):
    def __init__(self):
        ...  # *initialization*

    def run(self, **args):
        while n is None or k < n:
            # *some stuff*
            accord = add_2.get()  # add_2 = q_in
            if accord == True:
                add_3.put(self.best_genome.fitness)  # add_3 = q_out
            accord = False

        return self.best_genome
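To make the one-queue-per-direction idea concrete, here is a minimal self-contained sketch; the worker function is a made-up stand-in for Population.run, since the real class isn't shown in full:

import time
from multiprocessing import Process, Queue, Event

def worker(q_in, q_out):
    # stand-in for Population.run: answer on q_out whenever the parent asks on q_in
    while True:
        accord = q_in.get()          # blocks until the parent sends a request
        if accord is True:
            q_out.put(time.time())   # placeholder for self.best_genome.fitness

if __name__ == "__main__":
    stop_event = Event()
    q_in, q_out = Queue(), Queue()
    child = Process(target=worker, args=(q_in, q_out))  # the queues must be passed explicitly
    child.start()

    i = 0
    while not stop_event.is_set():
        q_in.put(True)       # the request goes to the child ...
        print(q_out.get())   # ... and the answer comes back on the other queue
        time.sleep(1)
        i += 1
        if i == 5:
            child.terminate()
            stop_event.set()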

Function not returning in multiprocess, no errors

It turns out this is a known issue with PyTorch: multithreading inside child processes can deadlock.

https://github.com/explosion/spaCy/issues/4667

A workaround is to add the following:

import torch

torch.set_num_threads(1)  # limit torch's intra-op parallelism to a single thread
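For context, a minimal sketch of where the call can sit in a multiprocessing script (the worker function and its data are made up for illustration):

import torch
torch.set_num_threads(1)  # call this before any torch work or before child processes start

from multiprocessing import Process

def worker(data):
    # placeholder worker; the real function would run the model here
    print(torch.tensor(data).sum())

if __name__ == "__main__":
    p = Process(target=worker, args=([1.0, 2.0, 3.0],))
    p.start()
    p.join()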

