CountDownLatch vs. Semaphore

CountDownLatch is frequently used for the exact opposite of your example. Generally, you would have many threads blocking on await() that would all start simultaneously when the count reached zero.

final CountDownLatch countdown = new CountDownLatch(1);

for (int i = 0; i < 10; ++i) {
    Thread racecar = new Thread() {
        public void run() {
            try {
                countdown.await(); // all threads waiting here
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            System.out.println("Vroom!");
        }
    };
    racecar.start();
}
System.out.println("Go");
countdown.countDown(); // all threads start now!

You could also use this as an MPI-style "barrier" that causes all threads to wait for other threads to catch up to a certain point before proceeding.

final CountDownLatch countdown = new CountDownLatch(num_thread);

for (int i = 0; i < num_thread; ++i) {
    Thread t = new Thread() {
        public void run() {
            doSomething();
            countdown.countDown();
            System.out.printf("Waiting on %d other threads.%n", countdown.getCount());
            try {
                countdown.await(); // waits until every thread reaches this point
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            finish();
        }
    };
    t.start();
}

That all said, the CountDownLatch can safely be used in the manner you've shown in your example.

What's the point of CountDownLatch in java?

Semantically, they're different; and that matters, because it makes your code easier to read. When I see a Semaphore, I immediately start thinking "a limited amount of a shared resource." When I see a CountDownLatch, I immediately start thinking "a bunch of threads waiting for the 'go!' signal." If you give me the former in code that actually needs the latter, it's confusing.

In this sense, a Semaphore being used as a CountDownLatch is a bit like a garden-path sentence; while technically correct, it leads people astray and confuses them.

In terms of more pragmatic uses, a CountDownLatch is just simpler if that's all you need. Simpler is better!
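
To make that concrete, here is a rough sketch (worker count and messages are made up, and this is not code from the question) of the racecar example forced onto a Semaphore. It works, but nothing in it says "go signal"; the reader has to work out that the permits are really a start gate:

import java.util.concurrent.Semaphore;

public class SemaphoreAsLatch {
    public static void main(String[] args) {
        final int workers = 10;
        // Zero permits, used as a gate: nothing on this line says "go signal".
        final Semaphore gate = new Semaphore(0);

        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                try {
                    gate.acquire(); // reads like claiming a shared resource
                    System.out.println("Vroom!");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
        System.out.println("Go");
        gate.release(workers); // have to know how many waiters there are
    }
}

Compared with the single countDown() call in the latch version above, the intent here has to be reverse-engineered from the permit count.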

As for reusing a CountDownLatch, that would complicate its usage. For instance, let's say you're trying to queue up threads A, B, and C for some work. You have them await on the latch, and then you release it. Then you reset it, presumably to queue up threads D, E and F for some other work. But what happens if (due to a race condition), thread B hasn't actually been released from the first latch yet? What if it hadn't even gotten to the await() call yet? Do you close the gate on it, and tell it to wait with D, E and F for the second opening? That might even cause a deadlock, if the second opening depends on work that B is supposed to be doing!

I had the same questions you did about resetting when I first read about CountDownLatch. But in practice, I've rarely even wanted to reset one; each unit of "wait then go" (A-B-C, then D-E-F) naturally lends itself to creating its own CountDownLatch to go along with it, and things stay nice and simple.

Some threads get stuck at semaphore.acquire() (threads/semaphore/countdownlatch)

Try adding true as the second parameter on the Semaphore constructor call.

By default, there is no attempt at fairness, which you need if all renters are to take turns. Generally, a renter that has just returned a movie will get to the acquire call faster than one that was already waiting on the semaphore. With the added true argument, "this semaphore will guarantee first-in first-out granting of permits under contention" (from the Semaphore javadoc).
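
As a rough sketch of that one-argument change (the single copy and the renter loop are illustrative, not the original code):

import java.util.concurrent.Semaphore;

public class FairRentalDemo {
    // One copy of the movie; the 'true' argument enables fairness, so under
    // contention permits are granted first-in, first-out and waiting renters
    // are not beaten to acquire() by a renter that just released.
    private static final Semaphore copy = new Semaphore(1, true);

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            final int renter = i;
            new Thread(() -> {
                for (int rental = 0; rental < 2; rental++) {
                    try {
                        copy.acquire();              // wait your turn for the copy
                        try {
                            System.out.println("Renter " + renter + " has the movie");
                        } finally {
                            copy.release();          // return the copy
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }).start();
        }
    }
}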

java concurrency: lightweight nonblocking semaphore?

In the case of a single permit, you can use an AtomicBoolean:

final AtomicBoolean once = new AtomicBoolean(true);
Runnable r = new Runnable() {
    @Override public void run() {
        if (once.getAndSet(false))
            doSomething(); // only the first thread to flip the flag runs this
    }
};

If you need many permits, use your solution with compareAndSet(). Don't worry about the loop; getAndIncrement() works the same way under the covers.
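
A rough sketch of that multi-permit variant (the class and method names are made up):

import java.util.concurrent.atomic.AtomicInteger;

// A non-blocking, try-only "semaphore": tryAcquire() either takes a permit
// immediately or returns false; it never blocks.
public class NonBlockingPermits {
    private final AtomicInteger permits;

    public NonBlockingPermits(int initialPermits) {
        this.permits = new AtomicInteger(initialPermits);
    }

    public boolean tryAcquire() {
        while (true) {
            int available = permits.get();
            if (available <= 0) {
                return false;                       // nothing left, give up immediately
            }
            if (permits.compareAndSet(available, available - 1)) {
                return true;                        // we took one permit
            }
            // Another thread raced us; re-read and retry, the same kind of
            // loop that getAndIncrement() runs internally.
        }
    }

    public void release() {
        permits.incrementAndGet();                  // hand the permit back
    }
}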

CyclicBarrier and CountDownLatch?

A CountDownLatch is used for one-time synchronization. While using a CountDownLatch, any thread is allowed to call countDown() as many times as it likes. Threads that have called await() are blocked until the count reaches zero through calls to countDown() by other, unblocked threads. The javadoc for CountDownLatch states:

The await methods block until the current count reaches zero due to
invocations of the countDown() method, after which all waiting threads
are released and any subsequent invocations of await return
immediately.
...

Another typical usage would be to divide a problem into N parts,
describe each part with a Runnable that executes that portion and
counts down on the latch, and queue all the Runnables to an Executor.
When all sub-parts are complete, the coordinating thread will be able
to pass through await. (When threads must repeatedly count down in
this way, instead use a CyclicBarrier.)
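
A minimal sketch of that second usage (the "work" here is just a placeholder print):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PartitionedWork {
    public static void main(String[] args) throws InterruptedException {
        final int parts = 4;
        final CountDownLatch done = new CountDownLatch(parts);
        ExecutorService pool = Executors.newFixedThreadPool(parts);

        for (int i = 0; i < parts; i++) {
            final int part = i;
            pool.submit(() -> {
                try {
                    System.out.println("Working on part " + part); // placeholder work
                } finally {
                    done.countDown();   // count down even if the work throws
                }
            });
        }

        done.await();                   // coordinating thread passes through once all parts finish
        System.out.println("All parts complete");
        pool.shutdown();
    }
}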

In contrast, the cyclic barrier is used for multiple synchronization points, e.g. if a set of threads are running a loop/phased computation and need to synchronize before starting the next iteration/phase. As per the javadoc for CyclicBarrier:

The barrier is called cyclic because it can be re-used after the
waiting threads are released.

Unlike the CountDownLatch, each call to await() belongs to some phase and can cause the thread to block until all parties belonging to that phase have invoked await(). There is no explicit countDown() operation supported by the CyclicBarrier.
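
A small sketch of that phased pattern (worker and phase counts are illustrative):

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class PhasedComputation {
    public static void main(String[] args) {
        final int workers = 3;
        final int phases = 2;
        // Reused across iterations: every await() belongs to the current phase.
        final CyclicBarrier barrier = new CyclicBarrier(workers);

        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    for (int phase = 0; phase < phases; phase++) {
                        System.out.println("Worker " + id + " finished phase " + phase);
                        barrier.await();   // block until all workers finish this phase
                    }
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}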

Java concurrency - is there a reverse CountDownLatch?

In case anyone ever wants to know what I ended up doing, recording that here:

I ended up needing two locks:

this.workInProgressLock = new ReentrantReadWriteLock(true);
this.singleFileLock = new Semaphore(1, true);

The ReadWriteLock works mostly as described in the comment above. The main issue is that pending writeLock requests are given preference, so the sequencing is not as easily controllable as it should be.

The Semaphore does count up, but it works the opposite way from what I needed. Why did I add one? I quickly realized that background processing (tasks) is a two-way street! As initially described, background processes were allowed to start whenever needed. When they ran during certain 'critical sections' of the application, major problems arose. Consequently, the app side needed a way to lock out the background tasks. The single-permit Semaphore was a perfect fit for that purpose.

It would be nice to have a "BackgroundProcessingLock" somewhere (Guava, wherever):

public interface BackgroundProcessingLock {

    void allowBackgroundTasks();

    void preventBackgroundTasks(boolean wait);

    void awaitCompletion(); // simply waits until all background tasks complete
                            // (it cannot be named wait(), which would clash with the final Object.wait())

    void startBackgroundTask();

    void completeBackgroundTask();

    void setMaxNumberOfBackgroundTasks(int maxParallelBackgroundTasks);
}
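
For what it's worth, here is a rough sketch, not the original code, of how those two locks could back the start/complete/prevent/allow part of such an interface; all names and the exact wiring are guesses:

import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class TwoLockBackgroundGate {

    private final ReentrantReadWriteLock workInProgressLock = new ReentrantReadWriteLock(true);
    private final Semaphore singleFileLock = new Semaphore(1, true);

    // Called by a background task before it starts its work.
    public void startBackgroundTask() throws InterruptedException {
        singleFileLock.acquire();                 // respect an app-side lock-out
        try {
            workInProgressLock.readLock().lock(); // mark this task as "in progress"
        } finally {
            singleFileLock.release();
        }
    }

    // Called by a background task when its work is done.
    public void completeBackgroundTask() {
        workInProgressLock.readLock().unlock();
    }

    // Called by the application to block new tasks and wait out the in-flight ones.
    public void preventBackgroundTasks() throws InterruptedException {
        singleFileLock.acquire();                 // stop new tasks from entering
        workInProgressLock.writeLock().lock();    // wait for in-flight read locks to drain
    }

    // Called by the application to let background tasks run again.
    public void allowBackgroundTasks() {
        workInProgressLock.writeLock().unlock();
        singleFileLock.release();
    }
}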

Java concurrency: Countdown latch vs Cyclic barrier

One major difference is that CyclicBarrier takes an (optional) Runnable task which is run once the common barrier condition is met.

It also allows you to get the number of clients waiting at the barrier and the number required to trigger the barrier. Once triggered, the barrier is reset and can be used again.
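
A quick sketch showing the barrier action and those count methods (party count and messages are made up):

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BarrierActionDemo {
    public static void main(String[] args) {
        // The barrier action runs once per trip, after the last party arrives.
        final CyclicBarrier barrier = new CyclicBarrier(3,
                () -> System.out.println("Barrier tripped: merging partial results"));

        for (int i = 0; i < 3; i++) {
            final int id = i;
            new Thread(() -> {
                try {
                    System.out.println("Worker " + id + " arriving; "
                            + barrier.getNumberWaiting() + " of "
                            + barrier.getParties() + " already waiting");
                    barrier.await();   // after the trip, the barrier resets and could be reused
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}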

For simple use cases (services starting, etc.) a CountDownLatch is fine. A CyclicBarrier is useful for more complex coordination tasks, for example a parallel computation in which multiple subtasks take part, kind of like MapReduce.

Real Life Examples For CountDownLatch and CyclicBarrier

The key difference is that CountDownLatch separates threads into waiters and arrivers while all threads using a CyclicBarrier perform both roles.

  • With a latch, the waiters wait for the last arriving thread to arrive, but those arriving threads don't do any waiting themselves.
  • With a barrier, all threads arrive and then wait for the last to arrive.

Your latch example implies that all ten people must wait to lift the stone together. This is not the case. A better real world example would be an exam prompter who waits patiently for each student to hand in their test. Students don't wait once they complete their exams and are free to leave. Once the last student hands in the exam (or the time limit expires), the prompter stops waiting and leaves with the tests.
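
A minimal sketch of the exam analogy (student count and messages are made up): the students only call countDown() and leave, and the prompter is the only thread that awaits.

import java.util.concurrent.CountDownLatch;

public class ExamRoom {
    public static void main(String[] args) throws InterruptedException {
        final int students = 5;
        final CountDownLatch handedIn = new CountDownLatch(students);

        for (int i = 0; i < students; i++) {
            final int id = i;
            new Thread(() -> {
                System.out.println("Student " + id + " hands in the test and leaves");
                handedIn.countDown(); // arrive without waiting
            }).start();
        }

        handedIn.await(); // the prompter waits for the last test
        System.out.println("Prompter leaves with all the tests");
    }
}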


