A puzzle about how Java implements volatile in the new memory model (JSR 133)

Jacky :

In the JSR 133 Java Memory Model FAQ, it states:

the new memory model places stricter constraints on reordering of volatile field accesses with other field accesses, volatile or not; anything that was visible to thread A when it writes to volatile field f becomes visible to thread B when it reads f

It also gives an example of how volatile fields can be used:

class VolatileExample {
  int x = 0;
  volatile boolean v = false;

  public void writer() {
    x = 42;
    v = true;
  }

  public void reader() {
    if (v == true) {
      //uses x - guaranteed to see 42.
    }
  }
}

In the above code, the JVM (the JIT compiler?) would insert a LoadLoad memory barrier/fence between the load of v and the load of x in reader(); see The JSR-133 Cookbook for Compiler Writers (the actual implementation depends on the underlying CPU architecture):

barriers correspond to JSR-133 ordering rules

and the hardware uses its cache coherency protocol to keep the L1, L2, ... caches consistent with main memory.
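As an illustrative sketch only (not how the JVM actually implements volatile), the barrier placement described above can be mimicked in Java 9+ with the explicit fence methods on java.lang.invoke.VarHandle. The class name VolatileExampleFences is made up for this example, and note that explicit fences alone restore ordering but not the visibility guarantee of volatile (a plain write to v might, in principle, take arbitrarily long to become visible to another thread); the point here is just to show where the StoreStore and LoadLoad barriers conceptually sit:

```java
import java.lang.invoke.VarHandle;

public class VolatileExampleFences {
    int x = 0;
    boolean v = false; // plain (non-volatile) field; ordering comes from the explicit fences

    void writer() {
        x = 42;
        VarHandle.storeStoreFence(); // orders the write of x before the write of v
        v = true;
    }

    void reader() {
        if (v) {
            VarHandle.loadLoadFence(); // orders the load of v before the load of x
            System.out.println("x = " + x);
        }
    }

    public static void main(String[] args) {
        VolatileExampleFences e = new VolatileExampleFences();
        e.writer();
        e.reader(); // single-threaded here, so this deterministically prints x = 42
    }
}
```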

But these mechanisms seem insufficient to me. In order to guarantee that reader() sees 42, must the JVM (JIT) read v (volatile) and x (non-volatile) from main memory (or the L1, L2, ... caches, which are controlled by the hardware) instead of from CPU registers?

Are there any links or documents that show the details of how the JVM implements the new memory model? The JSR-133 Cookbook for Compiler Writers only shows how memory barriers are used, but says nothing about where values live (registers, L1/L2 caches, main memory).

apangin :

The LoadLoad barrier mentioned in the JSR-133 Cookbook is not just some CPU instruction. It is a logical barrier that also constrains the JIT compiler. In particular, it means the JIT compiler will not cache the value of x in a register across the load of v, nor reorder the load of x with respect to the load of v.
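A runnable illustration of this point (the class name HappensBeforeDemo is mine): because v is volatile, the JIT cannot hoist its read out of the spin loop into a register, and the volatile write of v publishes the earlier plain write of x, so the reader is guaranteed by the JMM to see 42:

```java
public class HappensBeforeDemo {
    static int x = 0;
    static volatile boolean v = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!v) { }  // volatile read: must be re-read each iteration, not cached in a register
            // The volatile read of v synchronizes with the volatile write below,
            // so everything written before v = true is visible here.
            System.out.println("reader saw x = " + x);
        });
        reader.start();

        x = 42;   // plain write, ordered before the volatile write below
        v = true; // volatile write publishes x to the reader thread

        reader.join();
    }
}
```

If v were a plain field instead, the compiler would be free to transform the loop into `if (!v) while (true) { }`, and the reader could spin forever.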
