Python Multithreading Wait Till All Threads Finished

You need to use the join method of the Thread object at the end of the script.

t1 = Thread(target=call_script, args=(scriptA + argumentsA))
t2 = Thread(target=call_script, args=(scriptA + argumentsB))
t3 = Thread(target=call_script, args=(scriptA + argumentsC))

t1.start()
t2.start()
t3.start()

t1.join()
t2.join()
t3.join()

Thus the main thread will wait until t1, t2, and t3 finish execution.
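For reference, here is a self-contained runnable version of the idea above. Since the question doesn't show call_script, scriptA, or the argument lists, they're hypothetical stand-ins here; this sketch just shells out to the Python interpreter:

```python
import subprocess
import sys
from threading import Thread

# Hypothetical stand-in for the call_script function from the question:
# runs a command line (script path plus its arguments) in a subprocess.
def call_script(cmd):
    subprocess.run(cmd, check=True)

# Placeholder "script" and arguments for illustration.
scriptA = [sys.executable, "-c"]
argumentsA = ["print('A done')"]
argumentsB = ["print('B done')"]
argumentsC = ["print('C done')"]

# Note the trailing comma: args must be a tuple, and (x) without a
# comma is just x in parentheses, not a one-element tuple.
t1 = Thread(target=call_script, args=(scriptA + argumentsA,))
t2 = Thread(target=call_script, args=(scriptA + argumentsB,))
t3 = Thread(target=call_script, args=(scriptA + argumentsC,))

for t in (t1, t2, t3):
    t.start()
for t in (t1, t2, t3):
    t.join()

print("all three scripts finished")
```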

How to wait till all threads finish their work?

Your last threading example is close, but you have to collect the threads in a list, start them all at once, then wait for them to complete all at once. Here's a simplified example:

import threading
import time

# Lock to serialize console output
output = threading.Lock()

def threadfunc(a, b):
    for i in range(a, b):
        time.sleep(.01) # sleep to make the "work" take longer
        with output:
            print(i)

# Collect the threads
threads = []
for i in range(10, 100, 10):
    # Create 9 threads counting 10-19, 20-29, ... 90-99.
    thread = threading.Thread(target=threadfunc, args=(i, i + 10))
    threads.append(thread)

# Start them all
for thread in threads:
    thread.start()

# Wait for all to complete
for thread in threads:
    thread.join()

How to wait till all threads are executed without using thread.join()?

Did you try using the concurrent.futures package?

You can instantiate a ThreadPoolExecutor and start your threads by submitting to it.
Then call the executor's shutdown(wait=True) function to wait for all threads to complete.

Alternatively, use a with ThreadPoolExecutor(...) as e: statement. When you exit the with block, all your threads will have completed.
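A minimal sketch of that approach; the function name work, the sleep duration, and the worker count are just placeholders:

```python
import concurrent.futures
import time

def work(n):
    time.sleep(0.01)  # simulate I/O-bound work
    return n * n

# Exiting the with-block waits for all submitted tasks to finish,
# so no explicit join() calls are needed.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    futures = [executor.submit(work, n) for n in range(10)]

# At this point every task has completed; collect the results
# in submission order.
results = [f.result() for f in futures]
print(results)
```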

Do something after all threads finished python ( using t.start()! )

To print out "finished" at the end, you have to collect all thread objects in a list and wait for all of them to finish (using the join method) before printing. This will look like:

threads = []

for tryproxy in proxy:
    t = threading.Thread(target=check, args=(tryproxy,))
    threads.append(t)
    t.start()

for t in threads:
    t.join()

print("finished")

Make the main thread wait until all threads finish

No, x.join() only blocks the calling thread (here, the main thread). The other threads continue to execute in parallel.

for thread in threads:
    thread.join()

is somewhat more idiomatic, since you're not actually building a list.

You should also be aware that, because of CPython's global interpreter lock (GIL), only one thread executes Python bytecode at a time, so you're unlikely to get any performance gain from this unless the work is I/O-bound (e.g. hitting a remote service many times).
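A small sketch illustrating that join() blocks only the calling thread: while the main thread waits on the slow thread, the fast thread keeps running and finishes first. The function names and sleep times here are invented for illustration:

```python
import threading
import time

events = []

def slow():
    time.sleep(0.3)
    events.append("slow finished")

def fast():
    time.sleep(0.05)
    events.append("fast finished")

t_slow = threading.Thread(target=slow)
t_fast = threading.Thread(target=fast)
t_slow.start()
t_fast.start()

# Joining the slow thread blocks only the main thread; the fast
# thread continues running and finishes in the meantime.
t_slow.join()
print(events)
```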

How to wait until all threads finished?

There is missing code in your sample; I guess you are calling newthread.start() in your createNewDownloadThread() method above, aren't you?

As you may know, the usual pattern is to call thread.start() and then thread.join(), which blocks until the thread has finished.

I'd say it might work better by doing this in your for loop:

for t in ts:
    t.start()
    t.join()
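One caveat worth noting: calling join() inside the same loop as start() waits for each thread before starting the next, so the threads run one at a time. A rough timing sketch of the difference, with an invented work function and sleep time:

```python
import threading
import time

def work():
    time.sleep(0.05)  # simulate I/O-bound work

# Joining inside the start loop runs the threads one at a time.
ts = [threading.Thread(target=work) for _ in range(4)]
start = time.perf_counter()
for t in ts:
    t.start()
    t.join()
serial = time.perf_counter() - start

# Starting all threads first, then joining, lets them overlap.
ts = [threading.Thread(target=work) for _ in range(4)]
start = time.perf_counter()
for t in ts:
    t.start()
for t in ts:
    t.join()
parallel = time.perf_counter() - start

print(f"serial took {serial:.2f}s, parallel took {parallel:.2f}s")
```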

How to wait for one threading to finish then run another threading

thds = []
for k in range(5):
    thds.append(threading.Thread(target=main_worker, args=(k,)))
for t in thds:
    t.start()
for t in thds:
    t.join()

Or, even:

thds = [threading.Thread(target=main_worker, args=(k,)) for k in range(5)]
for t in thds:
    t.start()
for t in thds:
    t.join()

How to run x amount of thread and wait for thread to be finished

There is an existing implementation of a ThreadPool included in multiprocessing.
Here is an example of how to use it:

import csv
from multiprocessing.pool import ThreadPool

# The argument name "processes" is inherited from the process pool
# and is a bit confusing; it defaults to the number of CPUs if omitted.
pool = ThreadPool(processes=max_threads)

def process_row(row):
    pass # do something

# The file handle can be iterated directly instead; then you'll get
# a raw line instead of a parsed CSV row.
reader = csv.reader(open(filename))

# imap yields results lazily, in input order; imap_unordered can be
# faster but doesn't guarantee the order of results.
pool.imap(process_row, reader)

UPD: pool.imap returns an iterator. It will be evaluated automatically in an interactive console, but in a standalone script it must be consumed explicitly. Fix:

result = list(pool.imap(process_row, reader))
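For completeness, multiprocessing.pool.ThreadPool also works as a context manager, and pool.map blocks until all work items are done, returning results in input order. The square function and the input range below are just illustrative:

```python
from multiprocessing.pool import ThreadPool

def square(n):
    return n * n

# map blocks until every work item has been processed; the pool is
# cleaned up automatically when the with-block exits.
with ThreadPool(processes=4) as pool:
    results = pool.map(square, range(10))

print(results)
```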

