EXC_BAD_INSTRUCTION Happens When Using dispatch_get_global_queue on iOS 7 (Swift)

I found out the reason seconds after I posted. It seems it's not me being stupid, but Apple's documentation: the quality-of-service classes

QOS_CLASS_USER_INTERACTIVE, QOS_CLASS_USER_INITIATED, QOS_CLASS_UTILITY, and QOS_CLASS_BACKGROUND

cannot be used on iOS 7 (they were introduced in iOS 8), though the dispatch_get_global_queue reference at
https://developer.apple.com/library/ios/documentation/Performance/Reference/GCD_libdispatch_Ref/#//apple_ref/c/func/dispatch_get_global_queue
doesn't bother to mention any of this. Use the dispatch priority constants instead:

DISPATCH_QUEUE_PRIORITY_HIGH, DISPATCH_QUEUE_PRIORITY_DEFAULT, DISPATCH_QUEUE_PRIORITY_LOW, DISPATCH_QUEUE_PRIORITY_BACKGROUND
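
For example, a minimal sketch in the Swift 1.x syntax of the era (the exact spelling of the QoS constants varied between early Swift versions):

// Crashes with EXC_BAD_INSTRUCTION on iOS 7, because QoS classes need iOS 8:
// let queue = dispatch_get_global_queue(Int(QOS_CLASS_USER_INITIATED.value), 0)

// Works on iOS 7 and later:
let queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
dispatch_async(queue) {
    // background work here
}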

Dispatch to concurrent queue results in execution on main thread?

Yes, it appears that dispatch_sync to a global queue can mean executing code on the main thread if the caller is on the main thread. The documentation for dispatch_sync explains:

As an optimization, this function invokes the block on the current thread when possible.
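
A minimal sketch of the effect (which thread actually runs the block is up to GCD, so treat the output as illustrative):

import Foundation

let queue = DispatchQueue.global(qos: .utility)

// Called from the main thread, sync may run the block on the
// calling (main) thread rather than a worker thread:
queue.sync {
    print("main thread? \(Thread.isMainThread)") // often true
}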

How does a DispatchQueue work? (specifically multithreading)

Why ask this?

This is a fairly broad question about the internals of Grand Central Dispatch. I had difficulty understanding the dumped output because the original WWDC '10 videos and slides for GCD are no longer public. I also didn't know about the open-source libdispatch repo (thanks Rob). That needn't be a problem, but there are no related Q&As on SO explaining the topic in detail.

Why GCD?

According to the WWDC '10 GCD transcripts (Thanks Rob), the main idea behind the API was to simplify the boilerplate associated with using the #selector API for multithreading.

Benefits of GCD

Apple released a new block-based API instead of going with function pointers, which also enabled type-safe code that wouldn't crash if the block had the wrong type signature. Using typedefs also made code cleaner in function parameters, local variables, and @property declarations. Queues let you capture code and some state as a chunk of data that gets managed, enqueued, and executed automatically behind the scenes.
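
As a small illustration of the typedef point in today's Swift (typealias plays the role typedef played for Objective-C blocks; the names here are made up):

import Foundation

// Naming the block type once keeps later declarations readable.
typealias CompletionHandler = (Data?, Error?) -> Void

func load(from url: URL, completion: @escaping CompletionHandler) {
    // ... kick off work, then call completion(data, error)
}

var storedHandler: CompletionHandler? // tidy property declaration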

The same session mentions how GCD manages low-level threads under the hood. It enqueues blocks to execute on threads when they need to be executed and then releases those threads (pthreads, to be precise) when they are no longer referenced. GCD manages threads automatically and doesn't expose this API: when a DispatchWorkItem is dequeued, GCD creates a thread for the block to execute on.
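
A minimal sketch of that hand-off, using nothing beyond the public API: wrap the block in a DispatchWorkItem and let GCD pick the pooled thread.

import Foundation

let item = DispatchWorkItem {
    // GCD decides which pooled pthread runs this.
    print("executing on: \(Thread.current)")
}

DispatchQueue.global(qos: .userInitiated).async(execute: item)
item.wait() // park the caller until the item has been dequeued and run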

Drawbacks of performSelector

performSelector:onThread:withObject:waitUntilDone: has numerous drawbacks that suggest poor design for the modern challenges of concurrency, waiting, and synchronisation, and it leads to pyramids of doom when switching threads within a function (a GCD rewrite of such a hop is sketched after the list below). Furthermore, the NSObject.performSelector family of threading methods is inflexible and limited:

  1. No options to optimise for concurrent execution, initial inactivity, or synchronisation on a particular thread, unlike GCD.
  2. Only selectors can be dispatched onto new threads (awful).
  3. Lots of threads for a given function leads to messy code (pyramids of doom).
  4. No support for queueing beyond the NSOperation API, which was limited at the time GCD was announced in iOS 4. NSOperation is a high-level, verbose API that became more powerful after incorporating elements of dispatch (the low-level library that became GCD) in iOS 4.
  5. Lots of bugs related to unhandled invalid-selector errors (no compile-time type safety).
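
Here is the promised sketch of how GCD flattens those hops into plain nested blocks; the two helper functions are stand-ins, not from any real API:

import Foundation

func expensiveComputation() -> Int { (1...1_000_000).reduce(0, +) } // stand-in
func updateUI(with result: Int) { print("result: \(result)") }      // stand-in

DispatchQueue.global(qos: .utility).async {
    let result = expensiveComputation() // off the main thread
    DispatchQueue.main.async {
        updateUI(with: result)          // one hop back, no extra selectors
    }
}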

DispatchQueue internals

I believe the xref, ref and sref are internal counters that manage reference counts for automatic reference counting. GCD calls dispatch_retain and dispatch_release in most cases when needed, so we don't need to worry about releasing a queue after all its blocks have been executed. However, there were cases when a developer could call retain and release manually when trying to ensure the queue is retained even when not directly in use. These counters allow libdispatch to crash when release is called on a queue with a positive reference count, for better error handling.

When you submit a block with DispatchQueue.global().async or similar, I believe this increments the reference counts of that queue (xref and ref).

The variables in the question are not documented explicitly, but from what I can tell:

  • xref counts the number of external references to a general DispatchQueue.
  • ref counts the total number of references to a general DispatchQueue.
  • sref counts the number of references to API serial/concurrent/runloop queues, sources and mach channels (these need to be tracked differently as they are represented using different types).

in-barrier looks like an internal state flag (cf. the .barrier flag in DispatchWorkItemFlags) that tracks whether new work items submitted to a concurrent queue should be scheduled or not. Only once the barrier work item finishes does the queue return to scheduling work items that were submitted after the barrier. in-flight means that no barrier is currently in force.
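
A reader-writer sketch of that behaviour (the queue label is made up):

import Foundation

let queue = DispatchQueue(label: "com.example.store", attributes: .concurrent)
var store: [String: Int] = [:]

queue.async { _ = store["answer"] }                   // reads run concurrently
queue.async(flags: .barrier) { store["answer"] = 42 } // waits for in-flight items, holds back later ones
queue.async { print(store["answer"] ?? "none") }      // scheduled only after the barrier finishes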

state is also not documented explicitly, but I presume it points to memory where the block can access variables from the scope in which it was scheduled.

DispatchQueue threads don't always set the correct results

tl;dr

You are using this non-zero semaphore to do parallel calculations, constraining the degree of concurrency to something reasonable. I would recommend concurrentPerform.

But the issue here is not how you are constraining the degree of the parallelism, but rather that you are using the same properties (shared by all of these concurrent tasks) for your calculations, which means that one iteration on one thread can be mutating these properties while they are being used/mutated by another parallel iteration on another thread.

So, I would avoid using any shared properties at all (short of the final array of boards). Use local variables only. And make sure to synchronize the updating of this final array so that you don't have two threads mutating it at the same time.


So, for example, if you wanted to create the boards in parallel, I would probably use concurrentPerform as outlined in my prior answer:

func populateBoards(count: Int, rows: Int, columns: Int, mineCount: Int, completion: @escaping ([Board]) -> Void) {
    var boards: [Board] = []
    let lock = NSLock()

    DispatchQueue.global().async {
        DispatchQueue.concurrentPerform(iterations: count) { index in
            let board = Board(rows: rows, columns: columns, mineCount: mineCount)
            lock.synchronize {
                boards.append(board)
            }
        }

        // concurrentPerform blocks until every iteration has finished,
        // so only now is it safe to hand the results back on the main queue
        DispatchQueue.main.async {
            lock.synchronize {
                completion(boards)
            }
        }
    }
}

Note, I'm not referencing any ivars. It is all local variables, passing the result back in a closure.

And to avoid race conditions where multiple threads might be trying to update the same array of boards, I am synchronizing access with an NSLock. (You can use whatever synchronization mechanism you want, but locks are a very performant solution, probably better than a GCD serial queue or reader-writer pattern in this particular scenario.) That synchronize method is as follows:

extension NSLocking {
    func synchronize<T>(block: () throws -> T) rethrows -> T {
        lock()
        defer { unlock() }
        return try block()
    }
}

That is a nice generalized solution (handling closures that might return values, throw errors, etc.), but if that is too complicated to follow, here is a minimalistic rendition that is sufficient for our purposes here:

extension NSLocking {
    func synchronize(block: () -> Void) {
        lock()
        block()
        unlock()
    }
}

Now, I confess that I'd probably employ a different model for the board. I would define a Square enum for the individual squares of the board, and then define a Board that is an array (of rows) of arrays (of columns) of these squares. Anyway, this is my implementation of Board:

enum Square {
    case count(Int)
    case mine
}

struct Board {
    let rows: Int
    let columns: Int
    var squares: [[Square]]

    init(rows: Int, columns: Int, mineCount: Int) {
        self.rows = rows
        self.columns = columns

        // populate board with all zeros

        self.squares = (0..<rows).map { _ in
            Array(repeating: Square.count(0), count: columns)
        }

        // now add mines

        addMinesAndUpdateNearbyCounts(mineCount)
    }

    mutating func addMinesAndUpdateNearbyCounts(_ mineCount: Int) {
        let mines = (0..<rows * columns)
            .map { index in
                index.quotientAndRemainder(dividingBy: columns)
            }
            .shuffled()
            .prefix(mineCount)

        for (mineRow, mineColumn) in mines {
            squares[mineRow][mineColumn] = .mine
            for row in mineRow-1 ... mineRow+1 where row >= 0 && row < rows {
                for column in mineColumn-1 ... mineColumn+1 where column >= 0 && column < columns {
                    if case .count(let n) = squares[row][column] {
                        squares[row][column] = .count(n + 1)
                    }
                }
            }
        }
    }
}

extension Board: CustomStringConvertible {
    var description: String {
        var result = ""

        for row in 0..<rows {
            for column in 0..<columns {
                switch squares[row][column] {
                case .count(let n): result += String(n)
                case .mine: result += "*"
                }
            }
            result += "\n"
        }

        return result
    }
}

Anyway, I would generate 1000 9×9 boards with ten mines each like so:

exercise.populateBoards(count: 1000, rows: 9, columns: 9, mineCount: 10) { boards in
    for board in boards {
        print(board)
        print("")
    }
}

Feel free to use whatever model you want, but I'd suggest encapsulating the model for the Board in its own type. It not only abstracts the details of generating a single board away from the multithreaded algorithm that creates lots of them, but it also naturally avoids any unintended sharing of properties by the various threads.


Now all of this said, this is not a great example of parallelized code because the creation of a board is not nearly computationally intensive enough to justify the (admittedly very minor) overhead of running it in parallel. This is not a problem that is likely to benefit much from parallelized routines. Maybe you'd see some modest performance improvement, but not nearly as much as you might experience from something a little more computationally intensive.

How to create a delay in Swift?

Instead of sleep, which will lock up your program if called from the UI thread, consider using NSTimer or a dispatch timer.
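
For instance, a minimal sketch with Timer (NSTimer's Swift name), assuming a running run loop, which an app's main thread always has:

import Foundation

// Fires once after four seconds without blocking the current thread.
Timer.scheduledTimer(withTimeInterval: 4.0, repeats: false) { _ in
    print("delayed work runs here")
}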

But, if you really need a delay in the current thread:

sleep(4) // blocks the current thread for four seconds

This uses the sleep function from UNIX, available after importing Foundation (or Darwin).

In Swift how to call method with parameters on GCD main thread?

Modern versions of Swift use DispatchQueue.main.async to dispatch to the main thread:

DispatchQueue.main.async {
    // your code here
}

To dispatch after a delay on the main queue, use:

DispatchQueue.main.asyncAfter(deadline: .now() + 0.1) {
    // your code here
}
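
For comparison, here is a sketch of the question's delegate call in the modern syntax (YourAppDelegateClass and addUIImage come from the question, not from UIKit):

import UIKit

DispatchQueue.main.async {
    let delegateObj = UIApplication.shared.delegate as! YourAppDelegateClass
    delegateObj.addUIImage("yourstring")
}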

Older versions of Swift used:

dispatch_async(dispatch_get_main_queue(), {
    let delegateObj = UIApplication.sharedApplication().delegate as! YourAppDelegateClass
    delegateObj.addUIImage("yourstring")
})

