Swift / How to Use dispatch_group with Multiple Web Service Calls

Swift / How to use dispatch_group with multiple web service calls?

The Problem

As you stated, calls to dispatch_group_enter and dispatch_group_leave must be balanced. Here, you are unable to balance them because the function that performs the actual real-time fetching only calls leave.
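
In other words, the failing pattern presumably looks something like the sketch below (a reconstruction for illustration, not the asker's exact code): enter is called once per load, but the observer block fires on every value change and calls leave each time, so the second event over-releases the group and crashes.

dispatch_group_enter(group)                      // entered once...
firebase.child("likes").observeEventType(.Value, withBlock: { snapshot in
    // ... process the snapshot ...
    dispatch_group_leave(group)                  // ...but left on every event;
})                                               // the second event triggers an
                                                 // unbalanced call to leave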

Method 1 - All calls on the group

If you take no issue with myFirebaseFunction always performing its work on that dispatch group, then you could put both enter and leave inside there, perhaps with a beginHandler and completionHandler:

func loadStuff() {
    myFirebaseFunction(beginHandler: {
        dispatch_group_enter(group)
        dispatch_group_notify(group, dispatch_get_main_queue()) {
            print("done")
        }
    }, completionHandler: { dispatch_group_leave(group) })
}

func myFirebaseFunction(beginHandler: () -> (), completionHandler: () -> ()) {

    let usersRef = firebase.child("likes")
    usersRef.observeEventType(.Value, withBlock: { snapshot in

        beginHandler()
        if snapshot.exists() {
            let sorted = (snapshot.value!.allValues as NSArray).sortedArrayUsingDescriptors([NSSortDescriptor(key: "date", ascending: false)])

            for item in sorted {
                dict.append(item as! NSDictionary)
            }
        }
        completionHandler()
    })
}

Here, the completion handler is still set to dispatch_group_leave by loadStuff, but there is also a begin handler that calls both dispatch_group_enter and dispatch_group_notify. notify needs to be called from the begin handler because we must have already entered the group before calling notify; otherwise the notify block executes immediately, since the group is empty.

The block you pass to dispatch_group_notify is called exactly once, even if more blocks are performed on the group after notify has been called. Because of this, it might be safe for each automatic firing of the observeEventType observer to happen on the group. Then, any time outside of these functions that you need to wait for a load to finish, you can simply call notify.

Edit: Because notify is registered each time beginHandler runs, this method would actually result in the notify block firing on every event, so it might not be an ideal choice.
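
To make that concrete, here is a minimal standalone sketch (not the code above) showing that every call to dispatch_group_notify registers its own one-shot block, which runs as soon as the group's enter/leave count reaches zero:

let group = dispatch_group_create()

dispatch_group_enter(group)
dispatch_group_notify(group, dispatch_get_main_queue()) {
    print("first notify")        // runs once, when the group next becomes empty
}
dispatch_group_leave(group)      // -> "first notify" is scheduled

dispatch_group_enter(group)
dispatch_group_notify(group, dispatch_get_main_queue()) {
    print("second notify")       // a fresh registration, so it fires as well
}
dispatch_group_leave(group)      // -> "second notify" is scheduled

So with Method 1, each firing of the observer registers and then triggers another notify block, which is exactly the behaviour described in the edit above.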

Method 2 - Only first call on group, several methods

If what you really need is for only the first call of observeEventType to use the group, then one option is to have two versions of myFirebaseFunction: one much like the one you already have (using observeEventType) and one using observeSingleEventOfType. Then loadStuff could call both of those functions, only passing dispatch_group_leave as a completion to one of them:

func loadStuff() {
    dispatch_group_enter(group)
    myInitialFirebaseFunction() {
        dispatch_group_leave(group)
    }

    dispatch_group_notify(group, dispatch_get_main_queue()) {
        print("done")
    }

    myFirebaseFunction({})
}

func myInitialFirebaseFunction(completionHandler: () -> ()) {

    let usersRef = firebase.child("likes")
    usersRef.observeSingleEventOfType(.Value, withBlock: { snapshot in
        processSnapshot(snapshot)
        completionHandler()
    })
}

func myFirebaseFunction(completionHandler: () -> ()) {

    let usersRef = firebase.child("likes")
    usersRef.observeEventType(.Value, withBlock: { snapshot in
        processSnapshot(snapshot)
        completionHandler()
    })
}

func processSnapshot(snapshot: FDataSnapshot) {

    if snapshot.exists() {
        let sorted = (snapshot.value!.allValues as NSArray).sortedArrayUsingDescriptors([NSSortDescriptor(key: "date", ascending: false)])

        for item in sorted {
            dict.append(item as! NSDictionary)
        }
    }
}

Method 3 - Only first call on group, no extra methods

Note that because loadStuff in "Method 2" basically loads things from Firebase twice, it might not be as efficient as you'd like. In that case you could instead use a Bool to determine whether leave should be called:

var shouldLeaveGroupOnProcess = false

func loadStuff() {
    dispatch_group_enter(group)
    shouldLeaveGroupOnProcess = true
    myFirebaseFunction() {
        if shouldLeaveGroupOnProcess {
            shouldLeaveGroupOnProcess = false
            dispatch_group_leave(group)
        }
    }

    dispatch_group_notify(group, dispatch_get_main_queue()) {
        print("done")
    }
}

func myFirebaseFunction(completionHandler: () -> ()) {

    let usersRef = firebase.child("likes")
    usersRef.observeEventType(.Value, withBlock: { snapshot in

        if snapshot.exists() {
            let sorted = (snapshot.value!.allValues as NSArray).sortedArrayUsingDescriptors([NSSortDescriptor(key: "date", ascending: false)])

            for item in sorted {
                dict.append(item as! NSDictionary)
            }
        }
        completionHandler()
    })
}

Here, even if the observeEventType block fires multiple times during the initial load, leave is guaranteed to be called only once and no crash should occur.

Swift 3

PS: Currently I'm using Swift 2.3, but an upgrade to Swift 3 is planned, so it would be great to receive an answer that works for both.

Dispatch got a complete overhaul in Swift 3 (the API is now object-oriented), so code that works unchanged in both versions isn't really possible :)

But the concepts behind each of the three methods above are the same. In Swift 3 (see the sketch after this list):

  • Create your group with one of the inits of DispatchGroup
  • dispatch_group_enter is now the instance method enter on the group
  • dispatch_group_leave is now the instance method leave on the group
  • dispatch_group_notify is now the instance method notify on the group
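
As a rough illustration, here is what "Method 3" might look like after the Swift 3 migration (a sketch only; group, shouldLeaveGroupOnProcess and myFirebaseFunction are the same names used above, and myFirebaseFunction's Firebase calls would also need updating to the Swift 3 SDK syntax):

let group = DispatchGroup()
var shouldLeaveGroupOnProcess = false

func loadStuff() {
    group.enter()
    shouldLeaveGroupOnProcess = true

    myFirebaseFunction {                     // same function as in Method 3
        if shouldLeaveGroupOnProcess {
            shouldLeaveGroupOnProcess = false
            group.leave()
        }
    }

    group.notify(queue: .main) {
        print("done")
    }
}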

Synchronise multiple web service calls in serial order in Swift

Solution: Use a DispatchSemaphore and a DispatchQueue

Rather than saving the closures, I decided to wrap everything up in a dispatch queue and use a semaphore inside it:

// Create a dispatch queue
let dispatchQueue = DispatchQueue(label: "myQueue", qos: .background)

// Create a semaphore
let semaphore = DispatchSemaphore(value: 0)

func weatherService() {

    dispatchQueue.async {
        for i in 1...10 {
            APIManager.apiGet(serviceName: self.weatherServiceURL, parameters: ["counter": i]) { (response: JSON?, error: NSError?, count: Int) in
                if let error = error {
                    print(error.localizedDescription)
                    self.semaphore.signal()
                    return
                }
                guard let response = response else {
                    self.semaphore.signal()
                    return
                }

                print("\(count) ")

                // Check by index whether this was the last service call
                if i == 10 {
                    print("Services Completed")
                }

                // Signal that the current API request has completed
                self.semaphore.signal()
            }

            // Wait until the request just issued completes before starting the next one
            self.semaphore.wait()
        }
    }
    print("Start Fetching") // prints immediately; the requests run asynchronously on dispatchQueue
}

The output (shown as a screenshot in the original post) always shows the requests completing in serial order.

How to know when multiple web calls have ended, to call completion

I'd suggest using dispatch groups:

func resetDevice(completion: @escaping () -> ()) {
    let dispatchGroup = DispatchGroup()

    for device in devices {

        dispatchGroup.enter()

        device.isValid = 0

        DeviceManager.instance.updateDevice(device).call { response in
            print("device reset")
            dispatchGroup.leave()
        }
    }

    dispatchGroup.notify(queue: DispatchQueue.main) {
        // Some code to execute when all devices have been reset
        completion()
    }
}

The group is entered once per device immediately, but not left until that device's response is received. The notify block at the end isn't called until every device has left the group.
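
For completeness, a call site would then pass in whatever should run once every response has arrived (this assumes the completion() call shown in the notify block above):

resetDevice {
    print("all devices have been reset")   // runs only after the last leave()
}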

How to do two concurrent API calls in Swift 4

This is exactly the use case for DispatchGroup. Enter the group for each call, leave the group when the call finishes, and add a notification handler to fire when they're all done. There's no need for a separate operation queue; these are already asynchronous operations.

func downloadDetails() {
    let dispatchGroup = DispatchGroup()

    dispatchGroup.enter() // <<---
    WebServiceManager.getAData(format: A, withCompletion: { (data: Any?, error: Error?) -> Void in

        if let success = data {

            DispatchQueue.main.async {
                // (success code)
                dispatchGroup.leave() // <<----
            }
        }
    })

    dispatchGroup.enter() // <<---
    webServiceManager.getBData(format: B, withCompletion: { (data: Any?, error: Error?) -> Void in

        if let success = data {

            DispatchQueue.main.async {
                // (success code)
                dispatchGroup.leave() // <<----
            }
        }
    })

    dispatchGroup.notify(queue: .main) {
        // whatever you want to do when both are done
    }
}

How can you use Dispatch Groups to wait to call multiple functions that depend on different data?

To achieve that with dispatch groups alone you would need three dispatch groups, which are entered and left accordingly:

let abGroup = DispatchGroup()
let bcGroup = DispatchGroup()
let acGroup = DispatchGroup()

abGroup.enter()
abGroup.enter()
bcGroup.enter()
bcGroup.enter()
acGroup.enter()
acGroup.enter()

// When a is updated:
abGroup.leave()
acGroup.leave()

// When b is updated:
abGroup.leave()
bcGroup.leave()

// When c is updated:
acGroup.leave()
bcGroup.leave()

Then you can wait for the completion of each group independently:

abGroup.notify(queue: .main) {
    // Do something with a and b
}
bcGroup.notify(queue: .main) {
    // Do something with b and c
}
acGroup.notify(queue: .main) {
    // Do something with a and c
}

However, this does not scale well with more tasks and dependencies.

The better approach is to add Operations to an OperationQueue, which lets you declare arbitrary dependencies between them:

let queue = OperationQueue()

let updateA = BlockOperation {
    // ...
}
queue.addOperation(updateA)

let updateB = BlockOperation {
    // ...
}
queue.addOperation(updateB)

let updateC = BlockOperation {
    // ...
}
queue.addOperation(updateC)

let doSomethingWithAandB = BlockOperation {
    // ...
}
doSomethingWithAandB.addDependency(updateA)
doSomethingWithAandB.addDependency(updateB)
queue.addOperation(doSomethingWithAandB)

let doSomethingWithBandC = BlockOperation {
    // ...
}
doSomethingWithBandC.addDependency(updateB)
doSomethingWithBandC.addDependency(updateC)
queue.addOperation(doSomethingWithBandC)

let doSomethingWithAandC = BlockOperation {
    // ...
}
doSomethingWithAandC.addDependency(updateA)
doSomethingWithAandC.addDependency(updateC)
queue.addOperation(doSomethingWithAandC)

For asynchronous requests with completion handlers, you can use a (local) dispatch group inside each block operation to wait for the completion.

Here is a self-contained example:

import Foundation

var a: String?
var b: String?
var c: String?

let queue = OperationQueue()

let updateA = BlockOperation {
    let group = DispatchGroup()
    group.enter()
    DispatchQueue.global().asyncAfter(deadline: .now() + 1.0, execute: {
        a = "A"
        group.leave()
    })
    group.wait()
    print("updateA done")
}
queue.addOperation(updateA)

let updateB = BlockOperation {
    let group = DispatchGroup()
    group.enter()
    DispatchQueue.global().asyncAfter(deadline: .now() + 2.0, execute: {
        b = "B"
        group.leave()
    })
    group.wait()
    print("updateB done")
}
queue.addOperation(updateB)

let updateC = BlockOperation {
    let group = DispatchGroup()
    group.enter()
    DispatchQueue.global().asyncAfter(deadline: .now() + 3.0, execute: {
        c = "C"
        group.leave()
    })
    group.wait()
    print("updateC done")
}
queue.addOperation(updateC)

let doSomethingWithAandB = BlockOperation {
    print("a=", a!, "b=", b!)
}
doSomethingWithAandB.addDependency(updateA)
doSomethingWithAandB.addDependency(updateB)
queue.addOperation(doSomethingWithAandB)

let doSomethingWithAandC = BlockOperation {
    print("a=", a!, "c=", c!)
}
doSomethingWithAandC.addDependency(updateA)
doSomethingWithAandC.addDependency(updateC)
queue.addOperation(doSomethingWithAandC)

let doSomethingWithBandC = BlockOperation {
    print("b=", b!, "c=", c!)
}
doSomethingWithBandC.addDependency(updateB)
doSomethingWithBandC.addDependency(updateC)
queue.addOperation(doSomethingWithBandC)

queue.waitUntilAllOperationsAreFinished()

Output:


updateA done
updateB done
a= A b= B
updateC done
a= A c= C
b= B c= C

Dispatch work to multiple queues and wait synchronously

I managed to do it spinlock-style (just sleeping the thread in a while loop):

import XCTest

class DispatchGroupTestTests: XCTestCase {

    func testIets() {
        let clazz = AsyncClass()

        var isCalled = false

        clazz.doSomething {
            isCalled = true
        }

        XCTAssert(isCalled)
    }

}

class AsyncClass {
    func doSomething(completionHandler: () -> ()) {
        var isDone = false
        let dispatchGroup = DispatchGroup()

        for _ in 0...5 {
            dispatchGroup.enter()

            DispatchQueue.global().async {
                let _ = (0...1000000).map { $0 * 10000 }

                dispatchGroup.leave()
            }
        }

        dispatchGroup.notify(queue: .global()) {
            isDone = true
        }

        while !isDone {
            print("sleepy")
            Thread.sleep(forTimeInterval: 0.1)
        }

        completionHandler()
    }
}

Now the closure isn't needed anymore, and XCTest expectations can be removed (I don't use them in the examples shown here, but I can now omit them in my 'real' testing code):

import XCTest

class DispatchGroupTestTests: XCTestCase {

    var called = false

    func testIets() {
        let clazz = AsyncClass()

        clazz.doSomething(called: &called)

        XCTAssert(called)
    }

}

class AsyncClass {
    func doSomething(called: inout Bool) {
        var isDone = false
        let dispatchGroup = DispatchGroup()

        for _ in 0...5 {
            dispatchGroup.enter()

            DispatchQueue.global().async {
                let _ = (0...1000000).map { $0 * 10000 }

                dispatchGroup.leave()
            }
        }

        dispatchGroup.notify(queue: .global()) {
            isDone = true
        }

        while !isDone {
            print("sleepy")
            Thread.sleep(forTimeInterval: 0.1)
        }

        called = true
    }
}

So I did a performance test with the measure block, and my expectations (not XCTestCase expectations) were fulfilled: it is quicker to dispatch the work to other threads and block the calling thread in a spinlock than to run all of the work on the calling thread (given that we don't want escaping blocks and we want everything synchronous, just to make calling functions inside test methods easy). I literally just filled in random computations, and this was the result:

import XCTest

let work: [() -> ()] = Array(repeating: { let _ = (0...1000000).map { $0 * 213123 / 12323 } }, count: 10)

class DispatchGroupTestTests: XCTestCase {

    func testSync() {
        measure {
            for workToDo in work {
                workToDo()
            }
        }
    }

    func testIets() {
        let clazz = AsyncClass()

        measure {
            clazz.doSomething()
        }
    }

}

class AsyncClass {
    func doSomething() {
        var isDone = false
        let dispatchGroup = DispatchGroup()

        for workToDo in work {
            dispatchGroup.enter()

            DispatchQueue.global().async {
                workToDo()

                dispatchGroup.leave()
            }
        }

        dispatchGroup.notify(queue: .global()) {
            isDone = true
        }

        while !isDone {
            Thread.sleep(forTimeInterval: 0.1)
        }
    }
}

(The measurement results were shown as a screenshot in the original post.)

Swift: Synchronous Web Service Calls

I'd suggest you adopt asynchronous patterns. For example, have a method that retrieves phone numbers asynchronously, reporting success or failure via a completion handler:

let session = NSURLSession.sharedSession()

func requestPhoneNumber(id: String, completionHandler: (String?) -> Void) {
    let request = ...

    let task = session.dataTaskWithRequest(request) { data, response, error in
        do {
            let phoneNumber = ...
            completionHandler(phoneNumber)
        }
        catch {
            completionHandler(nil)
        }
    }
    task.resume()
}

Then your first request, which retrieves all of the places, will use this asynchronous requestPhoneNumber:

// I don't know what your place structure would look like, but let's imagine an `id`,
// some `info`, and a `phoneNumber` (that we'll retrieve asynchronously).

struct Place {
    var id: String
    var placeInfo: String
    var phoneNumber: String?

    init(id: String, placeInfo: String) {
        self.id = id
        self.placeInfo = placeInfo
    }
}

func retrievePlaces(completionHandler: ([Place]?) -> Void) {
    let request = ...

    let task = session.dataTaskWithRequest(request) { data, response, error in
        // your guard statements

        do {
            // Extract results from JSON response (without `phoneNumber`, though)

            var places: [Place] = ...

            let group = dispatch_group_create()

            // now let's iterate through, asynchronously updating phone numbers

            for (index, place) in places.enumerate() {
                dispatch_group_enter(group)

                self.requestPhoneNumber(place.id) { phone in
                    if let phone = phone {
                        dispatch_async(dispatch_get_main_queue()) {
                            places[index].phoneNumber = phone
                        }
                    }
                    dispatch_group_leave(group)
                }
            }

            dispatch_group_notify(group, dispatch_get_main_queue()) {
                completionHandler(places)
            }
        }
    }
    task.resume()
}

This also adopts an asynchronous pattern, this time using a dispatch group to identify when all of the requests have finished. You'd then use the completion handler pattern when you call it:

retrievePlaces { places in
    guard places != nil else { ... }

    // update your model/UI here
}

// but not here

Note that retrievePlaces issues those requests concurrently with respect to each other (for performance reasons). If you want to constrain the degree of concurrency, you can use a semaphore (just make sure to do the waiting on a background queue, not the session's queue). The basic pattern is:

dispatch_async(dispatch_get_global_queue(QOS_CLASS_UTILITY, 0)) {
    let semaphore = dispatch_semaphore_create(4) // set this to however many you want to run concurrently

    for request in requests {
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)

        performAsynchronousRequest(...) {
            dispatch_semaphore_signal(semaphore)
        }
    }
}

So that might look like:

func retrievePlaces(completionHandler: ([Place]?) -> Void) {
    let request = ...

    let task = session.dataTaskWithRequest(request) { data, response, error in
        // your guard statements

        do {
            dispatch_async(dispatch_get_global_queue(QOS_CLASS_UTILITY, 0)) {
                // Extract results from JSON response
                var places: [Place] = ...

                let semaphore = dispatch_semaphore_create(4) // use whatever limit you want here; this does max four requests at a time

                let group = dispatch_group_create()

                for (index, place) in places.enumerate() {
                    dispatch_group_enter(group)
                    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)

                    self.requestPhoneNumber(place.id) { phone in
                        if let phone = phone {
                            dispatch_async(dispatch_get_main_queue()) {
                                places[index].phoneNumber = phone
                            }
                        }
                        dispatch_semaphore_signal(semaphore)
                        dispatch_group_leave(group)
                    }
                }

                dispatch_group_notify(group, dispatch_get_main_queue()) {
                    completionHandler(places)
                }
            }
        }
    }
    task.resume()
}

Frankly, when it's this complicated, I'll often use an asynchronous NSOperation subclass and the queue's maxConcurrentOperationCount to constrain concurrency, but that seemed beyond the scope of this question. You can also use semaphores, as shown above, to constrain concurrency. The bottom line is that rather than trying to figure out how to make the requests behave synchronously, you'll achieve the best UX and performance if you follow asynchronous patterns.
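
For reference, here is a stripped-down sketch of what that NSOperation-based approach could look like (Swift 3+ syntax; AsyncRequestOperation is a name invented here, the state bookkeeping omits the locking a production subclass would need, and the asyncAfter call simply stands in for a real network request):

import Foundation

// A deliberately simplified asynchronous Operation subclass: it stays
// "executing" until finish() is called from the request's completion handler.
final class AsyncRequestOperation: Operation {
    private let work: (AsyncRequestOperation) -> Void
    private var _executing = false
    private var _finished = false

    init(work: @escaping (AsyncRequestOperation) -> Void) {
        self.work = work
        super.init()
    }

    override var isAsynchronous: Bool { return true }
    override var isExecuting: Bool { return _executing }
    override var isFinished: Bool { return _finished }

    override func start() {
        if isCancelled { finish(); return }
        willChangeValue(forKey: "isExecuting")
        _executing = true
        didChangeValue(forKey: "isExecuting")
        work(self)                       // kick off the asynchronous call
    }

    // Call this from the request's completion handler.
    func finish() {
        willChangeValue(forKey: "isExecuting")
        willChangeValue(forKey: "isFinished")
        _executing = false
        _finished = true
        didChangeValue(forKey: "isExecuting")
        didChangeValue(forKey: "isFinished")
    }
}

// The queue's maxConcurrentOperationCount plays the role of the semaphore:
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 4    // at most four requests in flight

for i in 1...10 {
    queue.addOperation(AsyncRequestOperation { op in
        // Stand-in for an asynchronous network request:
        DispatchQueue.global().asyncAfter(deadline: .now() + 0.5) {
            print("request \(i) finished")
            op.finish()                  // the operation only finishes here
        }
    })
}

queue.waitUntilAllOperationsAreFinished()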


