Does volatile give other normal stores and loads a happens-before relation?

maki XIE :

I have a question about the volatile happens-before rule, and the same question also applies to the monitor rule. According to the volatile rule, a write to a volatile variable happens-before every subsequent read of that variable.

I have the following example of a volatile write followed by a normal read. As far as I know, this volatile write should be preceded by a StoreStore memory barrier that flushes the normal store to memory so that other processors can see it (according to Doug Lea's JSR-133 Cookbook on the Java memory model).

So my question is: is there also a happens-before relation between the normal store in action1 and the subsequent normal load in action2?

int a;
volatile int x;

public void action1() {
    a = 100;  // normal store
    x = 123;  // volatile store
}

public void action2() {
    int k = a;  // normal load
}
Stephen C :

Is there also a happens-before relation between the normal store in action1 and the subsequent normal load in action2?

No, there isn't.

The happens-before relation is between a volatile write and a subsequent volatile read of the same variable.

In your example, the volatile read is missing, so there is no chain of happens-before relations reaching the non-volatile read. Your program is therefore not well-formed with respect to memory visibility: on some hardware, the value assigned to k may not be 100.

To fix this, you would need to do this:

int a;
volatile int x;

public void action1() {
    a = 100;  // normal store
    x = 123;  // volatile store
}

public void action2() {
    int tmp = x;  // volatile load
    int k = a;    // normal load
}
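Putting the two methods on separate threads makes the guarantee concrete. Below is a runnable sketch (class and method names are mine, not from the original answer): the reader spins on the volatile field until it observes the volatile write, at which point the happens-before chain guarantees that the normal store to `a` is also visible.

```java
class VisibilityDemo {
    int a;
    volatile int x;

    void writer() {
        a = 100;   // normal store
        x = 123;   // volatile store: publishes the earlier store to `a`
    }

    int reader() {
        while (x != 123) {      // volatile load: spin until the write is seen
            Thread.onSpinWait();
        }
        return a;               // happens-before guarantees this sees 100
    }

    public static void main(String[] args) throws InterruptedException {
        VisibilityDemo d = new VisibilityDemo();
        Thread w = new Thread(d::writer);
        w.start();
        System.out.println(d.reader());  // always prints 100
        w.join();
    }
}
```

If the `while (x != 123)` loop read a plain field instead, the reader could legally spin forever or return a stale value of `a`, which is exactly the malformed situation described above.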

I have another question: why does volatile guarantee that the following normal loads read from memory instead of from a cache? (The Java spec doesn't explain the underlying level; it only states the rules, so I don't quite understand the mechanics behind them.)

The Java spec deliberately doesn't talk about the hardware. Instead, it specifies the prerequisites for a well-formed Java program. If the program meets those prerequisites, then visibility properties are guaranteed. How they are met is the compiler writer's problem.

A consequence of the JMM specification is that on hardware with caches and multiple processors, the most obvious and efficient implementation approach is to do cache flushes, etc. But that is the compiler writer's concern ... not yours.

You (the Java programmer) do not need to know about caches, memory barriers, etc. You just need to understand the happens-before rules. But if you want to understand things in terms of the JSR 133 cookbook, then there are a few things to bear in mind:

  1. The Cookbook is not definitive, and not complete. It says so clearly.

  2. The Cookbook is only directly relevant to the behavior of well-formed programs. If the required happens-before chain is not there, then the necessary barriers are likely to be missing, and all bets are off.

  3. An actual Java implementation isn't necessarily going to do things the way that the Cookbook ... umm ... recommends.

Note that for my (corrected) version of the example, the cookbook says that there would / should be a LoadLoad barrier between the two loads.
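For readers who do want to connect the rules to barrier-level behavior, Java 9's VarHandle API exposes the weaker release/acquire accesses directly; these correspond roughly to the StoreStore and LoadLoad fences the cookbook describes around a volatile write and read. This sketch is illustrative only (not part of the original answer) and expresses the same publish/consume pattern:

```java
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

class ReleaseAcquireDemo {
    int a;
    int x;  // accessed via the VarHandle below, so not declared volatile

    static final VarHandle X;
    static {
        try {
            X = MethodHandles.lookup()
                    .findVarHandle(ReleaseAcquireDemo.class, "x", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    void writer() {
        a = 100;                  // normal store
        X.setRelease(this, 123);  // release store: orders the store to `a` before it
    }

    int reader() {
        while ((int) X.getAcquire(this) != 123) {  // acquire load
            Thread.onSpinWait();
        }
        return a;  // the acquire/release pairing makes 100 visible
    }
}
```

Full volatile semantics are strictly stronger (they also provide sequential consistency across all volatile accesses); release/acquire is just the minimum needed for this particular publication pattern.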
