How to Know When All My Tasks in Grand Central Dispatch Have Finished

How do I know when all my tasks in Grand Central Dispatch have finished?

You can use dispatch groups to be notified when all tasks have completed. Here is an example from http://cocoasamurai.blogspot.com/2009/09/guide-to-blocks-grand-central-dispatch.html:

dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_group_t group = dispatch_group_create();

dispatch_group_async(group, queue, ^{
    NSLog(@"Block 1");
});

dispatch_group_async(group, queue, ^{
    NSLog(@"Block 2");
});

dispatch_group_notify(group, queue, ^{
    NSLog(@"Final block is executed last after 1 and 2");
});
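For reference, the same group pattern in Swift 3+ syntax looks roughly like this (a minimal sketch of the equivalent DispatchQueue/DispatchGroup calls):

let queue = DispatchQueue.global()
let group = DispatchGroup()

queue.async(group: group) {
    print("Block 1")
}

queue.async(group: group) {
    print("Block 2")
}

group.notify(queue: queue) {
    print("Final block is executed last after 1 and 2")
}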

Grand Central Dispatch: Check for Task Completion

You just need to dispatch back to the main queue once your background work has finished:

let globalQueue = DispatchQueue.global()
globalQueue.async {
    // Your code here
    DispatchQueue.main.async {
        self.treeview.reloadData()
    }
}

Completion handler in serial Grand Central Dispatch

You can use DispatchGroup here to achieve completion-handler-like behaviour: it lets you submit multiple tasks and track when they have all completed, even though they might run on different queues. Note that you cannot simply return the collected data from the function, because the function returns before the asynchronous work has finished; hand the results back from the group's notify block instead.

func serialGCD(links: [String], completion: @escaping ([String]) -> Void) {
    var data: [String] = []
    let serialQueue = DispatchQueue(label: "com.self.serialGCD")

    let group = DispatchGroup()

    links.forEach { link in
        group.enter()

        serialQueue.async {
            // Data task for `link` goes here; collect its result, e.g.:
            // data.append(downloadedData)

            group.leave()
        }
    }

    group.notify(queue: .main) {
        // Completion block: every task has left the group
        completion(data)
    }
}
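A call site would then receive the collected results once the group drains, for example (the URLs below are just placeholders):

serialGCD(links: ["https://example.com/a", "https://example.com/b"]) { downloaded in
    // Runs on the main queue once every task has left the group
    print("Downloaded \(downloaded.count) items")
}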

How does a serial queue/private dispatch queue know when a task is complete?

Serialisation of work on a serial dispatch queue happens at the level of the unit of work that is directly submitted to the queue. Once execution reaches the end of the submitted closure (or it returns), the next unit of work on the queue can be executed.

Importantly, any other asynchronous tasks that may have been started by the closure may still be running (or may not have even started running yet), but they are not considered.

For example, for the following code:

serialQueue.async {
    print("Start")
    backgroundQueue.async {
        functionThatTakes10Seconds()
        print("10 seconds later")
    }
    print("Done 1st")
}

serialQueue.async {
    print("Start")
    backgroundQueue.async {
        functionThatTakes10Seconds()
        print("10 seconds later")
    }
    print("Done 2nd")
}

The output would be something like:

Start
Done 1st
Start
Done 2nd
10 seconds later
10 seconds later

Note that the first 10-second task hasn't completed before the second serial task is dispatched. Now, compare:

serialQueue.async {
    print("Start")
    backgroundQueue.sync {
        functionThatTakes10Seconds()
        print("10 seconds later")
    }
    print("Done 1st")
}

serialQueue.async {
    print("Start")
    backgroundQueue.sync {
        functionThatTakes10Seconds()
        print("10 seconds later")
    }
    print("Done 2nd")
}

The output would be something like:

Start
10 seconds later
Done 1st
Start
10 seconds later
Done 2nd

Note that this time, because the 10-second task was dispatched synchronously, the serial queue was blocked and the second task didn't start until the first had completed.

In your case, there is a very good chance that the operations you are wrapping are going to dispatch asynchronous tasks themselves (since that is the nature of network operations), so a serial dispatch queue on its own is not enough.

You can use a DispatchGroup to block your serial dispatch queue until the inner asynchronous work has finished:

serialQueue.async {
    let dg = DispatchGroup()
    dg.enter()
    print("Start")
    backgroundQueue.async {
        functionThatTakes10Seconds()
        print("10 seconds later")
        dg.leave()
    }
    dg.wait()
    print("Done")
}

This will output

Start
10 seconds later
Done

The dg.wait() call blocks the serial queue until the number of dg.leave() calls matches the number of dg.enter() calls. If you use this technique, you need to be careful to ensure that every possible completion path of your wrapped operation calls dg.leave(). There are also variations of dg.wait() that take a timeout parameter.
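For example, with the group from the snippet above, the timeout variant returns a DispatchTimeoutResult you can inspect (a small sketch; the 15-second deadline is arbitrary):

switch dg.wait(timeout: .now() + 15) {
case .success:
    print("Done")              // every enter() was balanced by a leave() in time
case .timedOut:
    print("Gave up waiting")   // the 15-second deadline passed first
}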

How to monitor when recursively dispatched tasks on a queue have all completed?

The closure associated with dispatchGroup.notify isn't called until the last dispatchGroup.leave is called, so you call enter() outside the asynchronous task and leave() inside it.

Something like:

func findPath(node: Node) {
    if !node.isValid { return }

    dispatchGroup.enter()
    queue.async { // concurrent queue
        findPath(node: node.north)
        dispatchGroup.leave()
    }

    dispatchGroup.enter()
    queue.async {
        findPath(node: node.west)
        dispatchGroup.leave()
    }

    dispatchGroup.enter()
    queue.async {
        findPath(node: node.south)
        dispatchGroup.leave()
    }

    dispatchGroup.enter()
    queue.async {
        findPath(node: node.east)
        dispatchGroup.leave()
    }
}

func findPaths(startNode: Node) {
    findPath(node: startNode)
    dispatchGroup.notify(queue: .main) {
        print("All done")
    }
}
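For this to compile, the snippet assumes a shared dispatchGroup, a concurrent queue, and a Node type roughly along these lines (hypothetical declarations; the queue label and the Node layout are made up for illustration):

final class Node {
    var isValid = true
    // Neighbouring nodes; the sketch assumes the grid is ringed by nodes whose
    // isValid is false, so the recursion stops before a nil neighbour is touched.
    var north: Node!
    var west: Node!
    var south: Node!
    var east: Node!
}

let dispatchGroup = DispatchGroup()
let queue = DispatchQueue(label: "com.example.pathfinding", attributes: .concurrent)

A real implementation would also mark nodes as visited (for example by flipping isValid) before recursing into their neighbours; otherwise two valid adjacent nodes would keep re-dispatching each other indefinitely.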

When would a queue consider a task is completed?

In short, the first example starts an asynchronous network request, so the async call “finishes” as soon as that network request is submitted (but does not wait for that network request to finish).

I am assuming that what you really want to know is when the network request itself is done. Bottom line, GCD is not well suited for managing dependencies between tasks that are, themselves, asynchronous requests. Dispatching the initiation of a network request to a serial queue is undoubtedly not going to achieve what you want. (And before someone suggests using semaphores or dispatch groups to wait for the asynchronous request to finish, note that while this can solve the tactical issue, it is a pattern to be avoided because it is an inefficient use of resources and, in edge cases, can introduce deadlocks.)
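For illustration only, the pattern being warned against looks something like this (a sketch reusing the hypothetical aNetworkRequest call from the snippets below; do not copy it):

serialQueue.async {
    let semaphore = DispatchSemaphore(value: 0)
    aNetworkRequest.doneInAnotherQueue { _ in
        semaphore.signal()
    }
    // Ties up this queue (and its thread) for the full duration of the request,
    // and deadlocks outright if the completion is ever delivered on this queue.
    semaphore.wait()
}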

One pattern is to use completion handlers:

func performRequestA(completion: @escaping () -> Void) { // task A
    aNetworkRequest.doneInAnotherQueue { object in
        ...
        completion()
    }
}

Now, in practice, we would generally use the completion handler with a parameter, perhaps even a Result type:

func performRequestA(completion: @escaping (Result<Foo, Error>) -> Void) { // task A
    aNetworkRequest.doneInAnotherQueue { result in
        guard ... else {
            completion(.failure(error))
            return
        }
        let foo = ...
        completion(.success(foo))
    }
}

Then you can use the completion handler to process the results, update models, and perhaps initiate subsequent requests that depend upon the results of this request. For example:

performRequestA { result in
    switch result {
    case .failure(let error):
        print(error)

    case .success(let foo):
        // update models or initiate next step in the process here
    }
}

If you are really asking how to manage dependencies between asynchronous tasks, there are a number of more elegant patterns (e.g., Combine, a custom asynchronous Operation subclass, or the async/await pattern of SE-0296 and SE-0303, available since Swift 5.5). All of these are well suited to managing dependencies between asynchronous tasks and to controlling the degree of concurrency.
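For instance, with async/await the dependent requests read sequentially even though the underlying work stays asynchronous. A minimal sketch, with hypothetical stand-ins for the real requests:

import Foundation

struct Foo { let value: Int }

// Hypothetical async wrappers around the real network calls
func performRequestA() async throws -> Foo {
    try await Task.sleep(nanoseconds: 100_000_000)   // simulate network latency
    return Foo(value: 42)
}

func performRequestB(using foo: Foo) async throws -> String {
    try await Task.sleep(nanoseconds: 100_000_000)
    return "derived from \(foo.value)"
}

// Request B starts only after request A has produced its result
Task {
    do {
        let foo = try await performRequestA()
        let result = try await performRequestB(using: foo)
        print(result)
    } catch {
        print(error)
    }
}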

We would probably need to better understand your broader needs before making any specific recommendations. You have asked about a single dispatch, but the question is best viewed in the broader context of what you are trying to achieve. For example, I'm assuming you are asking because you have multiple asynchronous requests to initiate: Do you really need to make sure that they happen sequentially, losing all the performance benefits of concurrency? Or can you allow them to run concurrently, so that you just need to know when all of the concurrent requests are done and how to get the results in the correct order? And might you have so many concurrent requests that you need to constrain the degree of concurrency?

The answers to those questions will probably influence our recommendation of how to best manage your multiple asynchronous requests. But the answer is almost certainly not a GCD queue.
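As one illustration of the concurrent variant raised above: a dispatch group tells you when every request is done, and writing each result into its own slot preserves the original order regardless of completion order. The fetch helper and URLs below are hypothetical stand-ins for real network calls:

func fetch(_ url: String, completion: @escaping (String) -> Void) {
    // Stand-in for a real network request with variable latency
    let delay = DispatchTimeInterval.milliseconds(Int.random(in: 50...300))
    DispatchQueue.global().asyncAfter(deadline: .now() + delay) {
        completion("response for \(url)")
    }
}

let urls = ["https://example.com/1", "https://example.com/2", "https://example.com/3"]
var results = [String?](repeating: nil, count: urls.count)
let group = DispatchGroup()
let resultQueue = DispatchQueue(label: "com.example.results")   // serialises writes

for (index, url) in urls.enumerated() {
    group.enter()
    fetch(url) { value in
        resultQueue.async {
            results[index] = value          // each slot keeps the original order
            group.leave()
        }
    }
}

group.notify(queue: .main) {
    print(results.compactMap { $0 })        // all requests finished, in order
}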

How to check if two asynchronous tasks are done with success

You could do this using DispatchGroup. Try the following playground:

import Foundation
import PlaygroundSupport

// Keep the playground running so the asynchronous work has time to finish
PlaygroundPage.current.needsIndefiniteExecution = true

let dispatchGroup = DispatchGroup()

for index in 0...4 {
    dispatchGroup.enter()
    let random = drand48()
    let deadline = DispatchTime.now() + random / 1000
    print("entered \(index)")
    DispatchQueue.global(qos: .background).asyncAfter(deadline: deadline) {
        print("leaving \(index)")
        dispatchGroup.leave()
    }
}

dispatchGroup.notify(queue: .global()) {
    print("finished all")
}

which should output something similar to

entered 0
leaving 0
entered 1
entered 2
leaving 1
leaving 2
entered 3
leaving 3
entered 4
leaving 4
finished all

Grand Central Dispatch: How do I wait for the queue of blocks to complete?

The dispatch_sync() trick will only work for serial queues, which is what that tutorial is showing. dispatch_get_global_queue() returns a concurrent queue; see its documentation note:

Blocks submitted to these global concurrent queues may be executed concurrently with respect to each other.

To deal with a global concurrent queue you should use a dispatch group, also mentioned in that tutorial: submit your blocks to the group and then wait for the whole group with dispatch_group_wait().
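In Swift 3+ terms (where group.wait() is the spelling of dispatch_group_wait()), that looks roughly like this minimal sketch:

let group = DispatchGroup()
let queue = DispatchQueue.global()

for i in 1...3 {
    queue.async(group: group) {
        print("block \(i)")
    }
}

// Blocks the calling thread until every block has run
// (avoid doing this on the main queue).
group.wait()
print("all blocks finished")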


