Happens-Before Relationships with Volatile Fields and Synchronized Blocks in Java - and Their Impact on Non-Volatile Variables

Happens-before relationships with volatile fields and synchronized blocks in Java - and their impact on non-volatile variables?

Yes, it is guaranteed that thread 2 will print "done". Of course, that is only if the write to b in Thread 1 actually happens before the read of b in Thread 2, rather than happening at the same time, or earlier!

The heart of the reasoning here is the happens-before relationship. Multithreaded program executions are seen as being made of events. Events can be related by happens-before relationships, which say that one event happens before another. Even if two events are not directly related, if you can trace a chain of happens-before relationships from one event to another, then you can say that one happens before the other.

In your case, you have the following events:

  • Thread 1 writes to s
  • Thread 1 writes to b
  • Thread 2 reads from b
  • Thread 2 reads from s

And the following rules come into play:

  • "If x and y are actions of the same thread and x comes before y in program order, then hb(x, y)." (the program order rule)
  • "A write to a volatile field (§8.3.1.4) happens-before every subsequent read of that field." (the volatile rule)

The following happens-before relationships therefore exist:

  • Thread 1 writes to s happens before Thread 1 writes to b (program order rule)
  • Thread 1 writes to b happens before Thread 2 reads from b (volatile rule)
  • Thread 2 reads from b happens before Thread 2 reads from s (program order rule)

If you follow that chain, you can see that as a result:

  • Thread 1 writes to s happens before Thread 2 reads from s
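
For reference, here is a minimal sketch of the kind of code this chain applies to (the names s and b come from the events above; the concrete types and the "done" value are assumptions based on the discussion):

class Example {
    String s;                      // non-volatile
    volatile boolean b;            // volatile

    void thread1() {
        s = "done";                // write to s
        b = true;                  // volatile write to b
    }

    void thread2() {
        while (!b) { }             // volatile read of b, repeated until it observes true
        System.out.println(s);     // guaranteed to print "done"
    }
}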

Volatile happens-before relationship when there's mix of volatile and non-volatile fields

Your assumption that the writes after stopRequested = true; are not guaranteed to be visible to the readers is correct. The writer is not guaranteed to push those writes to shared cache/memory where they would become visible to the readers. It could just write them to its local cache, and the readers would not see the updated values.

The Java language makes guarantees about visibility, e.g. when you use a volatile variable. But it doesn't guarantee that changes to a non-volatile variable won't be visible to other threads. Such writes can still be visible, as in your case. The JVM implementation, the memory consistency model of the processor, and other aspects all influence visibility.

Note that the JLS, and the happens-before relationship, is a specification. JVM implementations and hardware often do more than the JLS requires, which can make writes visible even when the JLS does not demand it.
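
As an illustration, here is a hedged sketch of the situation described (stopRequested is the only name taken from the question; the rest is assumed). The write performed after the volatile write is not covered by its happens-before edge:

class Worker {
    volatile boolean stopRequested;
    int result;                     // non-volatile

    void writer() {
        stopRequested = true;       // volatile write
        result = 42;                // written AFTER the volatile write, so not ordered by it
    }

    void reader() {
        if (stopRequested) {        // volatile read
            System.out.println(result);  // may print 0 or 42: no happens-before edge covers that write
        }
    }
}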

Does a non-volatile variable need synchronized?

The aforementioned behavior is not guaranteed. Such visibility is only guaranteed by a happens-before relationship:

The key to avoiding memory consistency errors is understanding the happens-before relationship. This relationship is simply a guarantee that memory writes by one specific statement are visible to another specific statement.

According to the JLS, a happens-before relationship can be established in the following ways:

  1. Each action in a thread happens-before every action in that thread that comes later in the program's order.
  2. An unlock (synchronized block or method exit) of a monitor happens-before every subsequent lock (synchronized block or method entry) of that same monitor. And because the happens-before relation is transitive, all actions of a thread prior to unlocking happen-before all actions subsequent to any thread locking that monitor.
  3. A write to a volatile field happens-before every subsequent read of that same field. Writes and reads of volatile fields have similar memory consistency effects as entering and exiting monitors, but do not entail mutual exclusion locking.
  4. A call to start on a thread happens-before any action in the started thread.
  5. All actions in a thread happen-before any other thread successfully returns from a join on that thread.

So, in your particular case, you actually need either synchronization on a shared monitor or an AtomicIntegerArray in order to make access to the array thread-safe; the volatile modifier won't help as is, because it only affects the variable pointing to the array, not the array's elements.
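
For illustration, here is a hedged sketch of the two options (the class and method names, and the array size, are assumptions):

import java.util.concurrent.atomic.AtomicIntegerArray;

class Flags {
    // Option 1: AtomicIntegerArray provides volatile semantics per element
    private final AtomicIntegerArray atomicFlags = new AtomicIntegerArray(10);

    void setAtomic(int i)      { atomicFlags.set(i, 1); }
    boolean isSetAtomic(int i) { return atomicFlags.get(i) == 1; }

    // Option 2: guard a plain array with a shared monitor
    private final int[] plainFlags = new int[10];

    synchronized void setSynced(int i)      { plainFlags[i] = 1; }
    synchronized boolean isSetSynced(int i) { return plainFlags[i] == 1; }
}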

Java memory model: volatile variables and happens-before

  • i = 1 always happens-before v = 2

True. By JLS section 17.4.5,

If x and y are actions of the same thread and x comes before y in program order, then hb(x, y).


  • v = 2 happens-before vDst = v in the JMM only if it actually happens before in time
  • i = 1 happens-before iDst = i in the JMM (and iDst will predictably be assigned 1) if v = 2 actually happens before vDst = v in time

False. The happens-before order does not make guarantees about things happening before each other in physical time. From the same section of the JLS,

It should be noted that the presence of a happens-before relationship between two actions does not necessarily imply that they have to take place in that order in an implementation. If the reordering produces results consistent with a legal execution, it is not illegal.

It is, however, guaranteed that v = 2 happens-before vDst = v and i = 1 happens-before iDst = i if v = 2 comes before vDst = v in the synchronization order, a total order over the synchronization actions of an execution that is often mistaken for the real-time order.


  • Otherwise the order between i = 1 and iDst = i is undefined, and the resulting value of iDst is undefined as well

This is the case if vDst = v comes before v = 2 in the synchronization order, but actual time doesn't come into it.
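
For reference, a minimal sketch of the code the bullets above refer to (the variable names come from the question; the enclosing class is an assumption):

class Example {
    int i;                          // non-volatile
    volatile int v;                 // volatile
    int iDst, vDst;

    void writer() {
        i = 1;                      // happens-before v = 2 by program order
        v = 2;                      // volatile write
    }

    void reader() {
        vDst = v;                   // volatile read
        iDst = i;                   // guaranteed to be 1 only if the read above observed v = 2
    }
}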

Java Memory Model, volatile, and synchronized blocks accessing non-volatile variables

The safety is defined by a transitive happens-before relationship as follows:

17.4.5. Happens-before Order

Two actions can be ordered by a happens-before relationship. If one action happens-before another, then the first is visible to and ordered before the second.

If we have two actions x and y, we write hb(x, y) to indicate that x happens-before y.

  • If x and y are actions of the same thread and x comes before y in program order, then hb(x, y).
  • There is a happens-before edge from the end of a constructor of an object to the start of a finalizer (§12.6) for that object.
  • If an action x synchronizes-with a following action y, then we also have hb(x, y).
  • If hb(x, y) and hb(y, z), then hb(x, z).

The preceding section stated

An unlock action on monitor m synchronizes-with all subsequent lock actions on m (where "subsequent" is defined according to the synchronization order).

which allows us to conclude what the specification also states explicitly:

It follows from the above definitions that:

  • An unlock on a monitor happens-before every subsequent lock on that monitor.

We can apply these rules to your program:

  • The first thread assigning null to box.supplier does this before releasing the monitor (leaving the synchronized (box) { … } block). This is ordered within the thread itself due to the first bullet (“If x and y are actions of the same thread and x comes before y in program order, then hb(x, y)”)
  • A second thread subsequently acquiring the same monitor (entering the synchronized (box) { … } block) establishes a happens-before relationship with the first thread’s release of the monitor (as concluded above, “An unlock on a monitor happens-before every subsequent lock on that monitor”)
  • The second thread’s reading of the box.supplier variable within the synchronized block is again ordered with the acquisition of the monitor due to the program order (“If x and y are actions of the same thread and x comes before y in program order, then hb(x, y)”)
  • Now the three relations stated above can be combined due to the last rule, “If hb(x, y) and hb(y, z), then hb(x, z)”. This transitivity allows us to conclude that there is a thread-safe ordering between the write of null to box.supplier and the subsequent read of the box.supplier variable, both within a synchronized block on the same object.

Note that this has nothing to do with the fact that box.supplier is a member variable of the object we’re using for synchronized. The important aspect is that both threads use the same object in synchronized to establish an ordering that interacts with the other actions due to the transitivity rule.

But it is a useful convention to synchronize on the object whose member we want to access, as it makes it easier to ensure that all threads use the same object for synchronization. Still, all threads must adhere to the same convention for it to work.
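
To make the reasoning concrete, here is a hedged reconstruction of the pattern discussed above (the Box class and the Supplier type are assumptions based on the field name box.supplier):

import java.util.function.Supplier;

class Box {
    Supplier<String> supplier = () -> "initial";   // non-volatile field
}

Box box = new Box();    // shared between both threads

Thread 1:

synchronized (box) {
    box.supplier = null;                 // write, then release of box's monitor
}

Thread 2:

synchronized (box) {                     // acquisition of the same monitor
    Supplier<String> s = box.supplier;   // sees null if this lock follows Thread 1's unlock
}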

As a counter-example, consider the following code:

List<SomeType> list = …;

Thread 1:

synchronized (list) {
    list.set(1, new SomeType(…));
}

Thread 2:

List<SomeType> myList = list.subList(1, 2);

synchronized (list) {
    SomeType value = myList.get(0);
    // process value
}

Here, it is crucial for Thread 2 not to use myList for the synchronization, even though we use it for accessing the content, because it is a different object. Thread 2 must still use the original list instance for synchronization. This is a real issue with synchronizedList, whose documentation demonstrates it with an example of accessing the list via an Iterator, which must still be guarded by synchronizing on the List instance.
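
That pattern from the synchronizedList documentation looks roughly like this (a sketch, assuming the java.util imports and the SomeType element type from above; process is a placeholder for whatever is done with each element):

List<SomeType> list = Collections.synchronizedList(new ArrayList<>());

synchronized (list) {                    // must lock the wrapper list itself, not an iterator or sub-list
    Iterator<SomeType> it = list.iterator();
    while (it.hasNext()) {
        process(it.next());              // placeholder for per-element work
    }
}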

Does volatile give other normal stores and loads a happens-before relation?

Is there also an extra happens-before rule that the normal store in action1 happens-before the subsequent normal load in action2?

No, it doesn't.

The happens-before edge is between a volatile write and a subsequent volatile read of the same field.

In your example, the volatile read is missing, so there is no chaining of the happens-before relations to the non-volatile read. Therefore your program is not well-formed with respect to memory visibility. The value assigned to k may not be 100 on some hardware, etc.

To fix this, you would need to do this:

int a;
volatile int x;

public void action1() {
    a = 100;    // normal store
    x = 123;    // volatile store
}

public void action2() {
    int r = x;  // volatile load
    int k = a;  // normal load
}

I have another question: why does volatile guarantee that the following normal loads use memory instead of cache? (The Java spec doesn't explain the underlying level and only states the rules, so I don't quite understand the mechanics behind them.)

The Java spec deliberately doesn't talk about the hardware. Instead, it specifies the prerequisites for a well-formed Java program. If the program meets those prerequisites, then visibility properties are guaranteed. How they are met is the compiler writer's problem.

A consequence of the JMM specification is that on hardware with caches and multiple processors, the most obvious and efficient implementation approach is to do cache flushes, etc. But that is the compiler writer's concern ... not yours.

You (the Java programmer) do not need to know about caches, memory barriers, etc. You just need to understand the happens-before rules. But if you want to understand things in terms of the JSR 133 cookbook, then there are a few things to bear in mind:

  1. The Cookbook is not definitive, and not complete. It says so clearly.

  2. The Cookbook is only directly relevant to the behavior of well-formed programs. If the required happens-before chain is not there, then the necessary barriers are likely to be missing, and all bets are off.

  3. An actual Java implementation isn't necessarily going to do things the way that the Cookbook ... umm ... recommends.

Note that for my (corrected) version of the example, the cookbook says that there would / should be a LoadLoad barrier between the two loads.

Happens-before with different monitors

Memory effects as a result of using synchronized or volatile are simply just that - memory effects. The fact that memory is observed to be "flushed from cache" has no direct relationship back to synchronized or volatile. Happens-before relationships hold only when operations are ordered by the rules specified in JLS 17.4.5, or by being part of a happens-before chain (i.e. piggybacking) on those actions. If you perform two synchronized or volatile actions on different monitors or variables, there is NO happens-before relationship between them.

Memory effects come as a result of happens-before ordering, but happens-before never comes as a result of memory effects.
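
As a sketch of that last point (lockA, lockB, and data are assumed names), two threads that each synchronize, but on different monitors, get no happens-before edge between their actions:

class NoOrdering {
    final Object lockA = new Object();
    final Object lockB = new Object();
    int data;

    void writer() {
        synchronized (lockA) {
            data = 42;
        }
    }

    void reader() {
        synchronized (lockB) {          // different monitor: no happens-before with writer()
            System.out.println(data);   // may print 0 or 42
        }
    }
}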


