How to Utilize NSLock to Prevent a Function from Firing Twice

How to utilize NSLock to prevent a function from firing twice?

An NSLock is a mutex; it prevents multiple threads from accessing the same resource simultaneously, which is exactly what you want to do here. Once one thread acquires the lock, other threads attempting to acquire the lock will wait until the first thread releases the lock.

You'll want to create a lock and store it somewhere that persists across function calls, most likely in an instance variable in this case. To acquire the lock, call its lock method; to release it, call unlock:

var checkOne = false
var checkTwo = false

// Create the lock
let lock = NSLock()

func functionOne() {
    // async stuff

    // Acquire the lock
    lock.lock()
    checkOne = true
    if checkOne && checkTwo {
        functionThree() // will only run if both functionOne and functionTwo have been completed
    }
    // Release the lock
    lock.unlock()
}

func functionTwo() {
    // async stuff
    lock.lock()
    checkTwo = true
    if checkOne && checkTwo {
        functionThree() // will only run if both functionOne and functionTwo have been completed
    }
    lock.unlock()
}

func functionThree() {
    // stuff
}


override func viewDidLoad() {
    super.viewDidLoad()

    functionOne()
    functionTwo()
}

A more "modern" approach is to use a DispatchQueue instead of an NSLock. Dispatch is higher-level than APIs like NSLock and NSThread; instead of directly working with locks and threads, you'll use queues.

A serial dispatch queue works like a checkout line at a store. You submit blocks of code to the queue, and it executes them one at a time in the order they were received. You can also create a concurrent dispatch queue, which executes its tasks simultaneously, by passing .concurrent to the attributes parameter of the DispatchQueue initializer.
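For example, a concurrent queue might be created like this (the label string is arbitrary):

// A concurrent queue: blocks submitted to it may run in parallel
let concurrentQueue = DispatchQueue(label: "com.example.concurrent", attributes: .concurrent)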

A serial dispatch queue is an easy way to protect a resource from being accessed by multiple threads at once -- just create a queue for that resource, and put every access to that resource on the queue.

var checkOne = false
var checkTwo = false

// Create a serial dispatch queue
let queue = DispatchQueue(label: "name of queue")

func functionOne() {
    // async stuff

    // Add a task to the queue, and execute it synchronously (i.e. wait for it to finish).
    // You can also use async to execute a task asynchronously,
    // but sync is slightly more efficient unless you need it to be asynchronous.
    queue.sync {
        checkOne = true
        if checkOne && checkTwo {
            functionThree() // will only run if both functionOne and functionTwo have been completed
        }
    }
}

func functionTwo() {
    // async stuff
    queue.sync {
        checkTwo = true
        if checkOne && checkTwo {
            functionThree() // will only run if both functionOne and functionTwo have been completed
        }
    }
}

func functionThree() {
    // stuff
}


override func viewDidLoad() {
    super.viewDidLoad()

    functionOne()
    functionTwo()
}

What is the difference (advantage and disadvantage) between using DispatchGroup and NSRecursiveLock?

Locks and groups serve very different purposes. When dealing with a series of concurrent tasks:

  • A lock is generally used to prevent these tasks from simultaneously interacting with some shared, non-thread-safe resource.

  • A group is generally used for identifying when these concurrent tasks are all complete (regardless of the order in which they finished).

For example, if processing a series of images in parallel, you might use a lock or similar mechanism to synchronize updates to some shared property (e.g. the array of results), whereas the dispatch group is used to know when all of those concurrent tasks are done.
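A minimal sketch of that pattern, assuming processImage(_:) and imagesToProcess stand in for your own long-running work and input data:

let lock = NSLock()
let group = DispatchGroup()
var results = [UIImage]()

for image in imagesToProcess {
    DispatchQueue.global().async(group: group) {
        let processed = processImage(image)  // your long-running work (placeholder)
        lock.lock()                          // the lock protects the shared results array
        results.append(processed)
        lock.unlock()
    }
}

group.notify(queue: .main) {
    // The group fires once every task above has finished, regardless of order
    print("All \(results.count) images processed")
}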

How to ensure two asynchronous tasks that 'start' together are completed before running another?

You can use when or join to start something after multiple other promises have completed. The difference is in how they handle failed promises. It sounds like you want join. Here is a concrete, though simple, example.

This first block of code is an example of how to create 2 promise chains and then wait for both of them to complete before starting the next task. The actual work being done is abstracted away into some functions. Focus on this block of code as it contains all the conceptual information you need.

Snippet

let chain1 = firstly(execute: { () -> (Promise<String>, Promise<String>) in
    let secondPieceOfInformation = "otherInfo" // This static data is for demonstration only

    // Pass 2 promises; the next `then` block will be called when both are fulfilled.
    // Promises initialized with values are already fulfilled, so the effect is identical
    // to just returning the single promise. You can return a tuple of up to 5 promises/values.
    return (fetchUserData(), Promise(value: secondPieceOfInformation))

}).then { (result: String, secondResult: String) -> Promise<String> in
    self.fetchUpdatedUserImage()
}

let chain2 = firstly {
    fetchNewsFeed() // This promise returns a dictionary

}.then { (result: [String : Any]) -> Promise<String> in

    for (key, value) in result {
        print("\(key) \(value)")
    }
    // `result` is a collection here
    return self.fetchFeedItemHeroImages()
}

join(chain1, chain2).always {
    // You can use `always` if you don't care about the earlier values

    let methodFinish = Date()
    let executionTime = methodFinish.timeIntervalSince(self.methodStart)
    print(String(format: "All promises finished %.2f seconds later", executionTime))
}

PromiseKit uses closures to provide its API. Closures have a scope, just like an if statement. If you define a value within an if statement's scope, you won't be able to access it outside of that scope.

You have several options for passing multiple pieces of data to the next then block.

  1. Use a variable that shares a scope with all of the promises (you'll likely want to avoid this as it works against you in managing the flow of asynchronous data propagation)
  2. Use a custom data type to hold both (or more) values. This can be a tuple, struct, class, or enum.
  3. Use a collection (such as a dictionary), example in chain2
  4. Return a tuple of promises, example included in chain1

You'll need to use your best judgement when choosing your method.
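For instance, option 2 might look something like this (a sketch reusing the helper functions from the complete code below; UserInfo and the image name are made up for illustration):

// A made-up container type (option 2): holds both values for the next `then`
struct UserInfo {
    let userData: String
    let imageName: String
}

let chain3 = firstly {
    fetchUserData()
}.then { (userData: String) -> Promise<UserInfo> in
    // Wrap both pieces of data in the custom type
    return Promise(value: UserInfo(userData: userData, imageName: "hero.png"))
}.then { (info: UserInfo) -> Promise<String> in
    print("\(info.userData) \(info.imageName)")
    return self.fetchUpdatedUserImage()
}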

Complete Code

import UIKit
import PromiseKit

class ViewController: UIViewController {

    let methodStart = Date()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        <<Insert The Other Code Snippet Here To Complete The Code>>

        // I'll also mention that `join` is being deprecated in PromiseKit.
        // It provides `when(resolved:)`, which acts just like `join`, and
        // `when(fulfilled:)`, which fails as soon as any of the promises fail.
        when(resolved: chain1, chain2).then { (results) -> Promise<String> in
            for case .fulfilled(let value) in results {
                // These promises succeeded, and the values will be what is returned from
                // the last promises in chain1 and chain2
                print("Promise value is: \(value)")
            }

            for case .rejected(let error) in results {
                // These promises failed
                print("Promise value is: \(error)")
            }

            return Promise(value: "finished")
        }.catch { error in
            // With the caveat that `when(resolved:)` never rejects
        }
    }

    func fetchUserData() -> Promise<String> {
        let promise = Promise<String> { (fulfill, reject) in

            // These dispatch queue delays are stand-ins for your long-running asynchronous tasks.
            // They might be network calls, or batch file processing, etc.
            // They're just here to provide a concise, illustrative, working example.
            DispatchQueue.global().asyncAfter(deadline: .now() + 2.0) {
                let methodFinish = Date()
                let executionTime = methodFinish.timeIntervalSince(self.methodStart)

                print(String(format: "promise1 %.2f seconds later", executionTime))
                fulfill("promise1")
            }
        }

        return promise
    }

    func fetchUpdatedUserImage() -> Promise<String> {
        let promise = Promise<String> { (fulfill, reject) in
            DispatchQueue.global().asyncAfter(deadline: .now() + 2.0) {
                let methodFinish = Date()
                let executionTime = methodFinish.timeIntervalSince(self.methodStart)

                print(String(format: "promise2 %.2f seconds later", executionTime))
                fulfill("promise2")
            }
        }

        return promise
    }

    func fetchNewsFeed() -> Promise<[String : Any]> {
        let promise = Promise<[String : Any]> { (fulfill, reject) in
            DispatchQueue.global().asyncAfter(deadline: .now() + 1.0) {
                let methodFinish = Date()
                let executionTime = methodFinish.timeIntervalSince(self.methodStart)

                print(String(format: "promise3 %.2f seconds later", executionTime))
                fulfill(["key1" : Date(),
                         "array" : ["my", "array"]])
            }
        }

        return promise
    }

    func fetchFeedItemHeroImages() -> Promise<String> {
        let promise = Promise<String> { (fulfill, reject) in
            DispatchQueue.global().asyncAfter(deadline: .now() + 2.0) {
                let methodFinish = Date()
                let executionTime = methodFinish.timeIntervalSince(self.methodStart)

                print(String(format: "promise4 %.2f seconds later", executionTime))
                fulfill("promise4")
            }
        }

        return promise
    }
}

Output

promise3 1.05 seconds later
array ["my", "array"]
key1 2017-07-18 13:52:06 +0000
promise1 2.04 seconds later
promise4 3.22 seconds later
promise2 4.04 seconds later
All promises finished 4.04 seconds later
Promise value is: promise2
Promise value is: promise4

If a method is called at the exact same time twice, how to only execute it once?

One approach is a BOOL instance variable that you set when entering the method and clear on leaving it. If the variable is set upon entry, you know the method is already executing and you can just return.

Assuming you're being called from multiple threads, you'll want to lock access to this critical area of checking/setting. An NSLock is good for this.

The code below has two implementations: myMethod1, which uses NSLock, and myMethod2, which uses @synchronized.

@interface MyClass : NSObject
{
    NSLock* theLock;
    BOOL isRunning;
}
@end

@implementation MyClass

- (id)init
{
    self = [super init];
    if (self != nil)
    {
        theLock = [[NSLock alloc] init];
        isRunning = NO;
    }
    return self;
}

- (void)dealloc
{
    [theLock release];
    [super dealloc];
}

// Use NSLock to guard the critical areas
- (void)myMethod1
{
    [theLock lock];

    if (isRunning == YES)
    {
        [theLock unlock]; // Unlock before returning
        return;
    }

    isRunning = YES;

    // Do fun stuff here

    isRunning = NO;

    [theLock unlock];
}

// This method uses @synchronized
- (void)myMethod2
{
    @synchronized(self)
    {
        if (isRunning == YES)
        {
            return;
        }

        isRunning = YES;

        // Do stuff here.

        isRunning = NO;
    }
}
@end
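For reference, a minimal Swift sketch of the same flag-plus-lock idea might look like this (the class and method names are illustrative):

import Foundation

class Worker {
    private let lock = NSLock()
    private var isRunning = false

    func doWorkOnce() {
        lock.lock()
        if isRunning {
            lock.unlock()   // Already executing; just return
            return
        }
        isRunning = true
        lock.unlock()

        // Do the actual (possibly long-running) work here

        lock.lock()
        isRunning = false
        lock.unlock()
    }
}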

Is there a way to make a signal similar to combineLatest without needing all the signals to initially fire?

The other solutions are technically correct but I thought this might fit your use case better.

// Only ever produces either a single `true` or a single `false`.
let doesAnyOfTheActionsLoad =
    SignalProducer<Bool, NoError>
        .init(signals)
        .flatten(.merge)        // Merge the signals together into a single signal.
        .skipNil()              // Discard `nil` values.
        .map { $0 == .pending } // Convert every value to a bool representing whether that value is `.pending`.
        .filter { $0 }          // Filter out `false`.
        .concat(value: false)   // If all signals complete without going to `.pending`, send a `false`.
        .take(first: 1)         // Only take one value (so we avoid the concatted value in the case that something loads).

How to execute a lock that waits until an animation completes?

At the bottom, in my original answer, I describe a way to achieve the requested functionality (if you initiate an animation while the prior animation is still in progress, queue up this subsequent animation to only start after the current ones are done).

While I'll keep that for historical purposes, I'd suggest an entirely different approach. Specifically, if you tap a button that should trigger an animation while a previous animation is still in progress, I'd instead remove the old animation and immediately start the new one, but do so in a manner such that the new animation picks up from wherever the current one left off.

  1. In iOS versions prior to iOS 8, the challenge is that if you start a new animation while another is in progress, the OS immediately jumps to where the current animation would have ended and awkwardly starts the new animation from there.

    The typical solution in iOS versions prior to 8 would be to:

    • grab the presentationLayer of the animated view (this is the current state of the CALayer of the UIView; if you look at the UIView itself while the animation is in progress, its properties already reflect the final values, which is why we need the presentation layer to get the current, in-flight state);

    • grab the current value of the animated property value from that presentationLayer;

    • remove the animations;

    • reset the animated property to the "current" value (so that it doesn't appear to jump to the end of the prior animation before starting the next animation);

    • initiate the animation to the "new" value;

    So, for example, if you are animating the changing of a frame which might be in progress of an animation, you might do something like:

    CALayer *presentationLayer = animatedView.layer.presentationLayer;
    CGRect currentFrame = presentationLayer.frame;
    [animatedView.layer removeAllAnimations];
    animatedView.frame = currentFrame;
    [UIView animateWithDuration:1.0 animations:^{
        animatedView.frame = newFrame;
    }];

    This completely eliminates all of the awkwardness associated with queuing up the "next" animation to run after the "current" animation (and other queued animations) completes. You end up with a far more responsive UI, too (e.g. you don't have to wait for prior animations to finish before the user's desired animation commences).

  2. In iOS 8, this process is far easier: if you initiate a new animation, it will often start not only from the current value of the animated property, but will also take into account the speed at which that property is currently changing, resulting in a seamless transition between the old animation and the new one.

    For more information about this new iOS 8 feature, I'd suggest you refer to WWDC 2014 video Building Interruptible and Responsive Interactions.

For the sake of completeness, I'll keep my original answer below, as it tries to precisely tackle the functionality outlined in the question (just uses a different mechanism to ensure the main queue isn't blocked). But I'd really suggest considering stopping the current animation and starting the new one in such a manner that it commences from wherever any in-progress animation might have left off.


Original answer:

I wouldn't recommend wrapping an animation in an NSLock (or semaphore, or any other similar mechanism), because that can end up blocking the main thread, and you never want to block the main thread. I think your intuition about using a serial queue for resizing operations is promising. You probably want a "resizing" operation that:

  • initiates the UIView animation on the main queue (all UI updates must take place on the main queue); and

  • in the animation's completion block, finishes the operation (we don't finish the operation until then, to ensure that other queued operations don't start until this one has finished).

I might suggest a resizing operation:

SizeOperation.h:

@interface SizeOperation : NSOperation

@property (nonatomic) CGFloat sizeChange;
@property (nonatomic, weak) UIView *view;

- (id)initWithSizeChange:(CGFloat)change view:(UIView *)view;

@end

SizeOperation.m:

#import "SizeOperation.h"

@interface SizeOperation ()

@property (nonatomic, readwrite, getter = isFinished) BOOL finished;
@property (nonatomic, readwrite, getter = isExecuting) BOOL executing;

@end

@implementation SizeOperation

@synthesize finished = _finished;
@synthesize executing = _executing;

- (id)initWithSizeChange:(NSInteger)change view:(UIView *)view
{
self = [super init];
if (self) {
_sizeChange = change;
_view = view;
}
return self;
}

- (void)start
{
if ([self isCancelled] || self.view == nil) {
self.finished = YES;
return;
}

self.executing = YES;

// note, UI updates *must* take place on the main queue, but in the completion
// block, we'll terminate this particular operation

[[NSOperationQueue mainQueue] addOperationWithBlock:^{
[UIView animateWithDuration:2.0 delay:0.0 options:kNilOptions animations:^{
CGRect frame = self.view.frame;
frame.size.width += self.sizeChange;
self.view.frame = frame;
} completion:^(BOOL finished) {
self.finished = YES;
self.executing = NO;
}];
}];
}

#pragma mark - NSOperation methods

- (void)setExecuting:(BOOL)executing
{
[self willChangeValueForKey:@"isExecuting"];
_executing = executing;
[self didChangeValueForKey:@"isExecuting"];
}

- (void)setFinished:(BOOL)finished
{
[self willChangeValueForKey:@"isFinished"];
_finished = finished;
[self didChangeValueForKey:@"isFinished"];
}

@end

Then define a queue for these operations:

@property (nonatomic, strong) NSOperationQueue *sizeQueue;

Make sure to instantiate this queue (as a serial queue):

self.sizeQueue = [[NSOperationQueue alloc] init];
self.sizeQueue.maxConcurrentOperationCount = 1;

And then, anything that makes the view in question grow, would do:

[self.sizeQueue addOperation:[[SizeOperation alloc] initWithSizeChange:+50.0 view:self.barView]];

And anything that makes the view in question shrink, would do:

[self.sizeQueue addOperation:[[SizeOperation alloc] initWithSizeChange:-50.0 view:self.barView]];

Hopefully this illustrates the idea. There are all sorts of possible refinements:

  • I made the animation really slow, so I could easily queue up a whole bunch, but you'd probably be using a much shorter value;

  • If using auto layout, you'd adjust the width constraint's constant and call layoutIfNeeded inside the animation block rather than adjusting the frame directly (see the sketch after this list); and

  • You probably want to add checks to not perform the frame change if the width has hit some maximum/minimum values.
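As an illustration of that auto layout point, a minimal Swift sketch might look like this (widthConstraint is assumed to be an outlet to the view's width constraint; the duration and delta are arbitrary):

// A sketch, assuming `widthConstraint` is an outlet for the view's width constraint
self.widthConstraint.constant += 50.0
UIView.animate(withDuration: 2.0) {
    self.view.layoutIfNeeded()   // Animates the layout change driven by the constraint
}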

But the key is that using locks to control animation of UI changes is inadvisable. You don't want anything that could block the main queue for more than a few milliseconds, and animation blocks are far too long to contemplate blocking the main queue. So use a serial operation queue (and if you have multiple threads that need to initiate changes, they'd all just add an operation to the same shared operation queue, thereby automatically coordinating changes initiated from all sorts of different threads).


