Are Boolean Reads and Writes Guaranteed Atomic in Swift?

Is BOOL read/write atomic in Objective-C?

No. Without a locking construct, reading/writing a variable of any type is NOT atomic in Objective-C.

If two threads write YES at the same time to a BOOL, the result is YES regardless of which one gets in first.

Please see: Synchronizing Thread Execution
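For a concrete example of such a locking construct, here is a minimal Swift sketch; the sharedFlag variable and the background work are made up for illustration:

import Foundation

let lock = NSLock()
var sharedFlag = false // only read or written while `lock` is held

// Writer (e.g. on a background thread)
DispatchQueue.global().async {
    lock.lock()
    sharedFlag = true
    lock.unlock()
}

// Reader
lock.lock()
let snapshot = sharedFlag // cannot race with the write above
lock.unlock()
print(snapshot)

Every access goes through the same lock, so no thread ever observes a half-written value, and tools like Thread Sanitizer no longer report a data race on the flag.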

Atomic properties vs thread-safe in Objective-C

An atomic property in Objective-C guarantees that you will never see partial writes.
When a @property has the attribute atomic, it is impossible to only partially write the value. Conceptually, the synthesized setter behaves like this (the runtime actually uses an internal lock rather than @synchronized, but the effect is the same):

- (void)setProp:(NSString *)newValue {
    @synchronized (self) {
        _prop = newValue;
    }
}

So if two threads write the values @"test" and @"otherTest" at the same time, then
at any given moment the property can only hold its initial value, @"test", or @"otherTest".
nonatomic is faster, but without further synchronization a reader may get back a garbage value: not a partial string mixing @"test" and @"otherTest" (thanks @Gavin), but, for example, a pointer to an object that has already been released.

But atomic only makes individual reads and writes thread-safe; it does not guarantee thread safety for compound operations.
The Apple documentation says the following:

Consider an XYZPerson object in which both a person’s first and last
names are changed using atomic accessors from one thread. If another
thread accesses both names at the same time, the atomic getter methods
will return complete strings (without crashing), but there’s no
guarantee that those values will be the right names relative to each
other. If the first name is accessed before the change, but the last
name is accessed after the change, you’ll end up with an inconsistent,
mismatched pair of names.

I have never had a problem using atomic at all, because I design the code so that atomic properties are sufficient.
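When two values must stay consistent with each other, like the first and last name above, one way is to read and write them together through a single serial queue instead of relying on per-property atomicity. A minimal Swift sketch; the Person type and the queue label are hypothetical:

import Foundation

// All name accesses are funneled through one serial queue, so a reader can
// never observe a first name from one update and a last name from another.
final class Person {
    private let queue = DispatchQueue(label: "person.name.queue")
    private var _firstName = ""
    private var _lastName = ""

    var fullName: (first: String, last: String) {
        return queue.sync { (_firstName, _lastName) }
    }

    func setName(first: String, last: String) {
        queue.sync {
            _firstName = first
            _lastName = last
        }
    }
}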

How important is it to fix race conditions for small primitives?

This is a “benign” race, discussed in WWDC 2016 video Thread Sanitizer and Static Analysis (at about 14:40).

They point out that no race should be considered benign because:

  • It’s contingent upon the particular hardware architecture you are using, and you have no assurances that such a data race will continue to be benign under different architectures;

  • All data races (benign or otherwise) are considered undefined behavior by the C/C++ standards;

  • While this may not be an issue in your code, compilers are free to reorder instructions oblivious to what other threads might be doing, so in some cases, in the absence of some synchronization mechanism, it can lead to “very subtle bugs.”

Bottom line, even though it’s likely not essential to fix these benign races, Apple advises that you do so, regardless. Fortunately, since you’re dealing with Objective-C, it’s easily remedied by making the properties atomic.
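Swift has no atomic property attribute, so the equivalent remedy there is to add the synchronization yourself. One option is a small property wrapper that takes a lock around every read and write; the Synchronized name below is a hypothetical helper, not a standard library type:

import Foundation

// Hypothetical helper: every read and write of the wrapped value holds a lock,
// which gives a small primitive the same "no torn reads/writes" guarantee as
// an atomic Objective-C property (and removes the data race report).
@propertyWrapper
final class Synchronized<Value> {
    private let lock = NSLock()
    private var value: Value

    init(wrappedValue: Value) {
        self.value = wrappedValue
    }

    var wrappedValue: Value {
        get {
            lock.lock()
            defer { lock.unlock() }
            return value
        }
        set {
            lock.lock()
            defer { lock.unlock() }
            value = newValue
        }
    }
}

// Usage:
final class Downloader {
    @Synchronized var isFinished = false
}

As with atomic in Objective-C, this only protects individual accesses; compound operations still need a wider critical section.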

Swift | Force cancel execution of code

Your problem is, essentially, that you're checking global state when you should be checking local state. Let's say you've got operation 'L' going on, and you've just typed an 'e', so operation 'Le' is going to start. Here's what roughly happens:

func updateMyPeepz() {
    // At this point, `self.workItem` contains operation 'L'

    self.workItem.cancel() // just set isCancelled to true on operation 'L'
    self.workItem = DispatchWorkItem { /* bla bla bla */ }
    // *now* self.workItem points to operation 'Le'!
}

So later, in the work item for operation 'L', you do this:

if self.workItem.isCancelled { /* do something */ }

Unfortunately, this code is running in operation 'L', but self.workItem now points to operation 'Le'! So while operation 'L' is cancelled, and operation 'Le' is not, operation 'L' sees that self.workItem—i.e. operation 'Le'—is not cancelled. And thus, the check always returns false, and operation 'L' never actually stops.

When you use the global boolean variable, you have the same problem, because it's global and doesn't differentiate between the operations that should still be running and the ones that shouldn't (in addition to atomicity issues you're already going to introduce if you don't protect the variable with a semaphore).

Here's how to fix it:

func updateMyPeepz() {
    var workItem: DispatchWorkItem? = nil // This is a local variable

    self.currentWorkItem?.cancel() // Cancel the one that was already running

    workItem = DispatchWorkItem { /* bla */ } // Set the *local variable*, not the property

    self.currentWorkItem = workItem // now we set the property
    DispatchQueue.global(qos: .userInitiated).async(execute: workItem!) // run it (use whatever QoS fits)
}

Now, here's the crucial part. Inside your loops, check like this:

if workItem?.isCancelled ?? false // check local workItem, *not* self.workItem!
// (you can also do workItem!.isCancelled if you want, since we can be sure this is non-nil)

At the end of the workItem, set workItem to nil to get rid of the retain cycle (otherwise it'll leak):

workItem = nil // not self.workItem! Nil out *our* item, not the property

Alternatively, you can put [weak workItem] in at the top of the workItem block to prevent the cycle—then you don't have to nil it out at the end, but you should be sure to use ?? false instead of ! since you always want to assume that a weak variable can conceivably go nil at any time.
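Putting it all together, here is a minimal sketch of the whole pattern as described above; the PeepzUpdater class name, the loop, and the QoS choice are placeholders:

import Foundation

final class PeepzUpdater {
    private var currentWorkItem: DispatchWorkItem?

    func updateMyPeepz() {
        // Cancel whatever was started for the previous keystroke.
        currentWorkItem?.cancel()

        var workItem: DispatchWorkItem? = nil
        workItem = DispatchWorkItem {
            for _ in 0..<1_000_000 {
                // Check the *local* item this closure belongs to,
                // not self.currentWorkItem (which may already be a newer item).
                if workItem?.isCancelled ?? false { break }
                // ... one slice of the real work goes here ...
            }
            workItem = nil // break the retain cycle once the item finishes
        }

        currentWorkItem = workItem
        DispatchQueue.global(qos: .userInitiated).async(execute: workItem!)
    }
}

Because the closure captures the local workItem variable by reference, the check inside always refers to its own item, so starting a newer operation no longer hides the cancellation of the older one.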

How do I atomically increment a variable in Swift?

From Low-Level Concurrency APIs:

There’s a long list of OSAtomicIncrement and OSAtomicDecrement
functions that allow you to increment and decrement an integer value
in an atomic way – thread safe without having to take a lock (or use
queues). These can be useful if you need to increment global counters
from multiple threads for statistics. If all you do is increment a
global counter, the barrier-free OSAtomicIncrement versions are fine,
and when there’s no contention, they’re cheap to call.

These functions work with fixed-size integers; you can choose
the 32-bit or 64-bit variant depending on your needs:

import Foundation // OSAtomicIncrement32 comes in via Darwin/Foundation

class Counter {
    private(set) var value: Int32 = 0

    func increment() {
        OSAtomicIncrement32(&value)
    }
}

(Note: As Erik Aigner correctly noticed, OSAtomicIncrement32 and friends are deprecated as of macOS 10.12 / iOS 10. Xcode 8 suggests using the functions from <stdatomic.h> instead, but those are difficult to call from Swift; compare Swift 3: atomic_compare_exchange_strong and https://openradar.appspot.com/27161329. Therefore the GCD-based approach below seems to be the best solution for now.)

Alternatively, one can use a GCD queue for synchronization.
From Dispatch Queues in the "Concurrency Programming Guide":

... With dispatch queues, you could add both tasks to a serial
dispatch queue to ensure that only one task modified the resource at
any given time. This type of queue-based synchronization is more
efficient than locks because locks always require an expensive kernel
trap in both the contested and uncontested cases, whereas a dispatch
queue works primarily in your application’s process space and only
calls down to the kernel when absolutely necessary.

In your case, that would be:

// Swift 2:
class Counter {
    private var queue = dispatch_queue_create("your.queue.identifier", DISPATCH_QUEUE_SERIAL)
    private(set) var value: Int = 0

    func increment() {
        dispatch_sync(queue) {
            value += 1
        }
    }
}

// Swift 3:
class Counter {
    private var queue = DispatchQueue(label: "your.queue.identifier")
    private(set) var value: Int = 0

    func increment() {
        queue.sync {
            value += 1
        }
    }
}
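Note that in the classes above only increment goes through the queue; the value getter itself is an ordinary read. If other threads also read the counter, the read can be routed through the same queue as well. A sketch of that variation for the Swift 3 version (same hypothetical queue label):

// Swift 3, with synchronized reads as well:
class Counter {
    private let queue = DispatchQueue(label: "your.queue.identifier")
    private var _value: Int = 0

    var value: Int {
        return queue.sync { _value }
    }

    func increment() {
        queue.sync {
            _value += 1
        }
    }
}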

See Adding items to Swift array across multiple threads causing issues (because arrays aren't thread safe) - how do I get around that? or GCD with static functions of a struct for more sophisticated examples. This thread
What advantage(s) does dispatch_sync have over @synchronized? is also very interesting.

Why is a boolean 1 byte and not 1 bit of size?

Because the CPU can't address anything smaller than a byte.
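You can see this from Swift: a Bool carries one bit of information but still occupies a full, individually addressable byte.

// Swift: a Bool occupies one addressable byte.
print(MemoryLayout<Bool>.size)      // 1
print(MemoryLayout<Bool>.stride)    // 1
print(MemoryLayout<Bool>.alignment) // 1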


