Swift: Simple DispatchQueue does not run & notify correctly

You should put the group.leave() statement inside the dispatchQueue.async block as well; otherwise it executes synchronously, before the async block has finished.

@objc func buttonTapped() {
    let group = DispatchGroup()
    let dispatchQueue = DispatchQueue.global(qos: .default)

    for i in 1...4 {
        group.enter()
        dispatchQueue.async {
            print("✅ \(i)")
            group.leave()
        }
    }

    for i in 1...4 {
        group.enter()
        dispatchQueue.async {
            print("❌ \(i)")
            group.leave()
        }
    }

    group.notify(queue: DispatchQueue.main) {
        print("jobs done by group")
    }
}

DispatchGroup notify method should only run once after all events are finished in Swift

I was able to find a solution to my problem without needing to rewrite my code to queue up my requests.

I create a class variable

var counter:Int = 0

Before each "myUploadGroup.enter()" call I add the following code:

counter = counter + 1

In my "myUploadGroup.notify" block I add this code:

self.counter = self.counter - 1
if self.counter == 0 {
    print("I run once")
}

It works for me now.
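Pulled together, the pattern looks something like the sketch below. The names (myUploadGroup, the upload work) are hypothetical stand-ins for the poster's code; a private serial queue stands in for .main so the sketch runs as a script, and all the enter() calls happen up front so no notify can fire early.

```swift
import Foundation

// Sketch of the counter-guarded notify pattern described above.
// `myUploadGroup` and the uploads are hypothetical stand-ins.
var counter = 0
let myUploadGroup = DispatchGroup()
let uploadQueue = DispatchQueue.global(qos: .utility)
let notifyQueue = DispatchQueue(label: "notify")   // stands in for .main
let finished = DispatchSemaphore(value: 0)

// count and enter every batch up front, so no notify can fire early
for _ in 1...3 {
    counter += 1
    myUploadGroup.enter()
}

for _ in 1...3 {
    uploadQueue.async {
        // ... perform one upload ...
        myUploadGroup.leave()
    }
    myUploadGroup.notify(queue: notifyQueue) {
        counter -= 1            // each notify undoes one increment
        if counter == 0 {       // only the last notify sees zero
            print("I run once")
            finished.signal()
        }
    }
}

finished.wait()
```

All three notify blocks run once the group empties, but only the last one sees the counter reach zero, so the final work runs exactly once.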

Why does DispatchWorkItem notify crash?

That first example you share (which I gather is directly from the tutorial) is not well written for a couple of reasons:

  1. It's updating a variable from multiple threads. That is an inherently non-thread-safe process. It turns out that, for reasons not worth outlining here, it's not technically an issue in the author's original example, but it is a very fragile design, as illustrated by the non-thread-safe behavior that quickly manifested in your subsequent examples.

    One should always synchronize access to a variable if manipulating it from multiple threads. You can use a dedicated serial queue for this, an NSLock, a reader-writer pattern, or other mechanisms. While I'd often use another GCD queue for the synchronization, I think that would be confusing when we're focusing on the GCD behavior of DispatchWorkItem on various queues, so in my example below I'll use NSLock to synchronize access, calling lock() before I use value and unlock() when I'm done.

  2. You say that first example displays "20". That's a mere accident of timing. If you change it to say ...

    let workItem = DispatchWorkItem {
        os_log("starting")
        Thread.sleep(forTimeInterval: 2)
        value += 5
        os_log("done")
    }

    ... then it will likely say "15", not "20" because you'll see the notify for the workItem.perform() before the async call to the global queue is done. Now, you'd never use sleep in real apps, but I put it in to illustrate the timing issues.

    Bottom line, the notify on a DispatchWorkItem happens when the dispatch work item is first completed and it won't wait for the subsequent invocations of it. This code entails what is called a "race condition" between your notify block and the call you dispatched to that global queue and you're not assured which will run first.

  3. Personally, even setting aside the race conditions and the inherently non thread-safe behavior of mutating some variable from multiple threads, I'd advise against invoking the same DispatchWorkItem multiple times, at least in conjunction with notify on that work item.

  4. If you want to do a notification when everything is done, you should use a DispatchGroup, not a notify on the individual DispatchWorkItem.
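The timing claim in point 2 is easy to see in isolation. The sketch below (not from the tutorial) runs the same work item twice on a serial queue and records when notify fires; the sleep intervals are shortened just to keep the demonstration quick.

```swift
import Foundation

// Minimal demonstration: `notify` on a DispatchWorkItem fires once the
// item *first* completes; it does not wait for a later invocation of
// the same item.
let serial = DispatchQueue(label: "demo")
let start = Date()
let sem = DispatchSemaphore(value: 0)
var notifyTime = 0.0

let workItem = DispatchWorkItem {
    Thread.sleep(forTimeInterval: 0.3)
}
workItem.notify(queue: .global()) {
    notifyTime = Date().timeIntervalSince(start)
    sem.signal()
}

serial.async(execute: workItem)  // first invocation finishes around 0.3 s
serial.async(execute: workItem)  // second invocation finishes around 0.6 s

sem.wait()
print(String(format: "notify fired at ~%.1f s", notifyTime)) // ~0.3, not ~0.6
```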

Pulling this all together, you get something like:

import os.log

var value = 10
let lock = NSLock() // a lock to synchronize our access to `value`

func notifyExperiment() {
    // rather than using `DispatchWorkItem`, a reference type, and invoking it multiple times,
    // let's just define some closure or function to run some task

    func performTask(message: String) {
        os_log("starting %@", message)
        Thread.sleep(forTimeInterval: 2) // we wouldn't do this in a production app, but let's do it here for pedagogic purposes, slowing it down enough so we can see what's going on
        lock.lock()
        value += 5
        lock.unlock()
        os_log("done %@", message)
    }

    // create a dispatch group to keep track of when these tasks are done

    let group = DispatchGroup()

    // enter the group so that we don't have a race condition between dispatching tasks
    // to the queues and our notify process

    group.enter()

    // define what notification will be done when the tasks are done

    group.notify(queue: .main) {
        lock.lock()
        os_log("value = %d", value)
        lock.unlock()
    }

    // run our task once on the global queue

    DispatchQueue.global(qos: .utility).async(group: group) {
        performTask(message: "from global queue")
    }

    // run our task also on a custom queue

    let customQueue = DispatchQueue(label: "com.appcoda.delayqueue1", qos: .utility)
    customQueue.async(group: group) {
        performTask(message: "from custom queue")
    }

    // Now leave the group, balancing our `enter` at the top. The `notify` block runs
    // only when (a) all `enter` calls are balanced with `leave` calls; and (b) the
    // `async(group:)` calls are done.

    group.leave()
}

SwiftUI DispatchQueue asyncAfter stops working correctly after ten scheduled tasks

That is an odd behavior, and it even occurs for much longer time intervals between events (for example, 2 seconds).

Consider using a Timer instead:

You can use multiple single fire timers that work just like your separate dispatches:

func buttonTap() {
    var delay: Double = 0

    for _ in 1...50 {
        delay += 0.2

        Timer.scheduledTimer(withTimeInterval: delay, repeats: false) { timer in
            self.counter += 1
        }
    }
}

or a single timer that fires every 0.2 seconds:

func buttonTap() {
    Timer.scheduledTimer(withTimeInterval: 0.2, repeats: true) { timer in
        self.counter += 1
        if self.counter == 50 {
            timer.invalidate()
        }
    }
}
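If you'd rather stay in GCD, a DispatchSourceTimer gives the same repeating behavior without Timer's run-loop dependency. A sketch, with the interval shortened and the tick count arbitrary so the example finishes quickly:

```swift
import Foundation

// Sketch: a GCD timer as an alternative to Timer.scheduledTimer.
// Interval and tick count are shortened/arbitrary for the demo.
var counter = 0
let done = DispatchSemaphore(value: 0)

let timer = DispatchSource.makeTimerSource(queue: DispatchQueue(label: "tick"))
timer.schedule(deadline: .now() + 0.02, repeating: 0.02)
timer.setEventHandler {
    counter += 1
    if counter == 5 {
        timer.cancel()   // stop after five ticks, like timer.invalidate()
        done.signal()
    }
}
timer.resume()

done.wait()
print(counter) // 5
```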

Multiple requests with DispatchQueue.main.async not executing properly in Swift 3

I had a similar issue, where I populated an array with elements coming from different asynchronous network requests and when the requests were running concurrently, the final size of my array depended on the execution of the concurrent tasks.

I managed to solve my problem using a serial queue and Dispatch Groups.
This is how I would change your code:

func fetchBoard() {
    let repo = GameRepository()
    let prefs = UserDefaults.standard
    if self.jsonGame.boards.count > 0 {
        self.sortedBoardArr.reserveCapacity(self.BoardArr.count)
        let serialQueue = DispatchQueue(label: "serialQueue")
        let group = DispatchGroup()
        for board in self.jsonGame.boards {
            group.enter()
            serialQueue.async {
                repo.GetBoardInfo(gameID: self.jsonGame.Id, boardID: board, completion: { (response, errorCode) -> Void in
                    if errorCode == ErrorCode.NoError {
                        self.BoardArr.append(response)
                    }
                    group.leave()
                })
            }
        }
        group.notify(queue: DispatchQueue.main) {
            self.sortArr()
            self.collectionView.reloadData()
        }
    }
}

These two answers to similar questions here on Stack Overflow were quite helpful for me when I had a similar issue: Blocking async requests and Concurrent and serial queues.

DispatchQueue does not update the data in swift

The issue is that callWebAPI is asynchronous (it returns immediately before the request is done), so you are calling leave immediately. You could give this method a completion handler and call leave in that. And you would also call the UI update in a notify block for your group, not just dispatch it.

Easier still, retire the DispatchGroup entirely and update your UI in the completion handler you supply to callWebAPI.

For example, give callWebAPI a completion handler:

func callWebAPI(completionHandler: @escaping ([[String: Any]]?, Error?) -> Void) {
    if let url = URL(string: "https://restcountries.eu/rest/v2/all") {
        let task = URLSession.shared.dataTask(with: url) { data, response, error in
            guard let data = data, error == nil else {
                completionHandler(nil, error)
                return
            }

            do {
                let jsonResponse = try JSONSerialization.jsonObject(with: data)
                completionHandler(jsonResponse as? [[String: Any]], nil)
            } catch let parsingError {
                completionHandler(nil, parsingError)
            }
        }
        task.resume()
    }
}

And then, you can eliminate the dispatch groups, the global queues (because it’s already an asynchronous method, you don’t need to invoke this from background queue), etc., and it’s simplified to just:

func searchBar(_ searchBar: UISearchBar, textDidChange searchText: String) {
    callWebAPI { jsonResponse, error in
        guard let jsonResponse = jsonResponse, error == nil else {
            print("Error:", error ?? "Response was not correct format")
            return
        }

        print(jsonResponse)

        // Note, you don’t appear to be using `jsonResponse` at all,
        // so I presume you’d update the relevant model objects.

        DispatchQueue.main.async {
            self.filteredCountry = self.arrCountry.filter { $0.name.prefix(searchText.count) == searchText }
            self.searching = true
            self.tableView.reloadData()
        }
    }
}

As an aside, nowadays we use JSONDecoder to parse JSON to populate model objects directly, but that’s beyond the scope of this question.
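For illustration, here is a minimal JSONDecoder sketch. The Country model and its name field are assumptions based on the REST Countries payload, and the JSON is inline here rather than fetched over the network.

```swift
import Foundation

// Sketch of the JSONDecoder approach mentioned above: decode straight
// into model types instead of casting to [[String: Any]].
// `Country`/`name` are assumed field names; the data is inline.
struct Country: Decodable {
    let name: String
}

let sample = Data("""
[{"name": "Austria"}, {"name": "Belgium"}]
""".utf8)

let countries = try! JSONDecoder().decode([Country].self, from: sample)
print(countries.map(\.name)) // ["Austria", "Belgium"]
```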

DispatchQueue threads don't always set the correct results

tl;dr

You are using this non-zero semaphore to do parallel calculations, constraining the degree of concurrency to something reasonable. I would recommend concurrentPerform.

But the issue here is not how you are constraining the degree of parallelism, but rather that you are using the same properties (shared by all of these concurrent tasks) for your calculations, which means that one iteration on one thread can mutate these properties while they are being used or mutated by another parallel iteration on another thread.

So I would avoid using any shared properties at all (apart from the final array of boards). Use local variables only. And make sure to synchronize the updating of this final array so that you don't have two threads mutating it at the same time.


So, for example, if you wanted to create the boards in parallel, I would probably use concurrentPerform as outlined in my prior answer:

func populateBoards(count: Int, rows: Int, columns: Int, mineCount: Int, completion: @escaping ([Board]) -> Void) {
    var boards: [Board] = []
    let lock = NSLock()

    DispatchQueue.global().async {
        DispatchQueue.concurrentPerform(iterations: count) { index in
            let board = Board(rows: rows, columns: columns, mineCount: mineCount)
            lock.synchronize {
                boards.append(board)
            }
        }

        // `concurrentPerform` doesn't return until every iteration is done,
        // so it's now safe to hand the finished array back on the main queue
        DispatchQueue.main.async {
            completion(boards)
        }
    }
}

Note, I'm not referencing any ivars. It is all local variables, passing the result back in a closure.

And to avoid race conditions where multiple threads might be trying to update the same array of boards, I am synchronizing my access with an NSLock. (You can use whatever synchronization mechanism you want, but locks are a very performant solution, probably better than a GCD serial queue or reader-writer pattern in this particular scenario.) That synchronize method is as follows:

extension NSLocking {
    func synchronize<T>(block: () throws -> T) rethrows -> T {
        lock()
        defer { unlock() }
        return try block()
    }
}

That is a nice generalized solution (handling closures that might return values, throw errors, etc.), but if that is too complicated to follow, here is a minimalistic rendition that is sufficient for our purposes here:

extension NSLocking {
    func synchronize(block: () -> Void) {
        lock()
        block()
        unlock()
    }
}

Now, I confess that I'd probably employ a different model for the board. I would define a Square enum for the individual squares of the board, and then define a Board which is an array (for rows) of arrays (for columns) of these squares. Anyway, this is my implementation of the Board:

enum Square {
    case count(Int)
    case mine
}

struct Board {
    let rows: Int
    let columns: Int
    var squares: [[Square]]

    init(rows: Int, columns: Int, mineCount: Int) {
        self.rows = rows
        self.columns = columns

        // populate board with all zeros

        self.squares = (0..<rows).map { _ in
            Array(repeating: Square.count(0), count: columns)
        }

        // now add mines

        addMinesAndUpdateNearbyCounts(mineCount)
    }

    mutating func addMinesAndUpdateNearbyCounts(_ mineCount: Int) {
        let mines = (0..<rows * columns)
            .map { index in
                index.quotientAndRemainder(dividingBy: columns)
            }
            .shuffled()
            .prefix(mineCount)

        for (mineRow, mineColumn) in mines {
            squares[mineRow][mineColumn] = .mine
            for row in mineRow-1 ... mineRow+1 where row >= 0 && row < rows {
                for column in mineColumn-1 ... mineColumn+1 where column >= 0 && column < columns {
                    if case .count(let n) = squares[row][column] {
                        squares[row][column] = .count(n + 1)
                    }
                }
            }
        }
    }
}

extension Board: CustomStringConvertible {
    var description: String {
        var result = ""

        for row in 0..<rows {
            for column in 0..<columns {
                switch squares[row][column] {
                case .count(let n): result += String(n)
                case .mine: result += "*"
                }
            }
            result += "\n"
        }

        return result
    }
}

Anyway, I would generate 1000 9×9 boards with ten mines each like so:

exercise.populateBoards(count: 1000, rows: 9, columns: 9, mineCount: 10) { boards in
    for board in boards {
        print(board)
        print("")
    }
}

Feel free to use whatever model you want, but I'd suggest encapsulating the model for the Board in its own type. It not only abstracts the details of generating a board away from the multithreaded algorithm that creates lots of boards, but it also naturally avoids any unintended sharing of properties by the various threads.


Now all of this said, this is not a great example of parallelized code because the creation of a board is not nearly computationally intensive enough to justify the (admittedly very minor) overhead of running it in parallel. This is not a problem that is likely to benefit much from parallelized routines. Maybe you'd see some modest performance improvement, but not nearly as much as you might experience from something a little more computationally intensive.

Dispatch group - cannot notify to main thread

After reading the post suggested by Matt, I found that I was submitting a task to the main queue, and when I asked to be notified on the main thread itself, it got into a deadlock.

I have altered the code and now it is working as intended:

typealias CallBack = ([Int]) -> Void

func longCalculations(completion: @escaping CallBack) {
    let backgroundQ = DispatchQueue.global(qos: .default)
    let group = DispatchGroup()
    let lock = NSLock() // `fill` is appended to from concurrent tasks, so synchronize it

    var fill: [Int] = []
    for number in 0..<100 {
        group.enter()
        backgroundQ.async(group: group) {
            if number > 50 {
                lock.lock()
                fill.append(number)
                lock.unlock()
            }
            group.leave()
        }
    }

    group.notify(queue: DispatchQueue.main) {
        print("All Done")
        completion(fill)
    }
}

longCalculations { print($0) }

