Talk about Java concurrency from the JMM level (2) - volatile

The volatile keyword marks a special kind of field used for communication between threads. It guarantees that a read of a volatile variable always sees the most recent write to that variable by any thread; in other words, it guarantees the visibility of the variable. Just as important, there is a happens-before relationship between a write to a volatile variable and every subsequent read of it.

The semantics of volatile at the JMM level

At the JMM level, the memory semantics of the volatile keyword come down to two points:

  • Guaranteed visibility (volatile changes the memory semantics of reads and writes to the variable)
  • Disabled reordering optimizations

Of these, the second is the means by which the first is implemented. Each is described briefly below:

Guaranteed visibility

In the JMM, a variable declared volatile is, in effect, read from and written to main memory directly rather than being cached in a thread's working (local) memory:

  • When a volatile variable is written, the JMM flushes the shared variables in the writing thread's local memory out to main memory.
  • When a volatile variable is read, the JMM invalidates the reading thread's local memory; the thread then reads the shared variables from main memory.

Together, these two points give volatile variables their visibility across threads: after reading thread B reads a volatile variable, all of the shared-variable values that were visible to writing thread A before it wrote that volatile variable immediately become visible to B.
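
As a concrete illustration of this visibility guarantee, here is a minimal sketch (the class and field names are invented for this example, not taken from the article): a worker thread spins on a stop flag that another thread clears. Because the flag is volatile, every check reads the latest value, so the worker is guaranteed to observe the shutdown.

class StopFlag {
    // volatile: every read sees the latest write; without it the worker's loop
    // could keep using a stale value and might never terminate
    private volatile boolean running = true;

    void worker() {
        while (running) {          // volatile read on every iteration
            // do some work
        }
        System.out.println("worker observed running == false, exiting");
    }

    void shutdown() {
        running = false;           // volatile write, made visible to the worker
    }

    public static void main(String[] args) throws InterruptedException {
        StopFlag s = new StopFlag();
        Thread t = new Thread(s::worker);
        t.start();
        Thread.sleep(100);         // let the worker spin for a moment
        s.shutdown();
        t.join();                  // returns because the worker saw the volatile write
    }
}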

Disable reordering

To implement volatile's memory semantics, the JMM restricts compiler reordering and processor reordering separately. For processor reordering, a conservative strategy inserts memory barriers into the generated instruction sequence:

  • Insert a StoreStore barrier in front of each volatile write operation.
  • Insert a StoreLoad barrier after each volatile write operation.
  • Insert a LoadLoad barrier after each volatile read.
  • Insert a LoadStore barrier after each volatile read.

(Figure omitted: schematic of the instruction sequence generated when memory barriers are inserted for a volatile write.)
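
Since the figure is not reproduced here, the following commented sketch conveys the same information: it marks, in source form, where the conservative strategy above conceptually places the barriers around a volatile write and a volatile read. The class is invented for illustration, and the barrier comments are annotations only; they are not something javac emits into the source.

class BarrierPlacement {
    int a = 0;
    volatile boolean flag = false;

    void write() {
        a = 1;                 // ordinary write
        // StoreStore barrier: the ordinary write above cannot sink below the volatile write
        flag = true;           // volatile write
        // StoreLoad barrier: the volatile write cannot be reordered with a later volatile read/write
    }

    void read() {
        boolean f = flag;      // volatile read
        // LoadLoad barrier: later ordinary reads cannot float above the volatile read
        // LoadStore barrier: later ordinary writes cannot float above the volatile read
        if (f) {
            int i = a;         // guaranteed to see 1 when f is true
        }
    }
}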

These rules ensure two things:

  • Operations before a volatile write cannot be reordered (by the compiler or the processor) to after the volatile write.
  • Operations after a volatile read cannot be reordered (by the compiler or the processor) to before the volatile read.

These restrictions imposed by the JMM are what establish the happens-before relationship between a volatile write and the subsequent volatile reads of that variable.

The happens-before rule using volatile variables

See the code below:

class VolatileExample {
    int a = 0;
    volatile boolean flag = false;

    public void writer() {
        a = 1;              // 1: ordinary write
        flag = true;        // 2: volatile write
    }

    public void reader() {
        if (flag) {         // 3: volatile read
            int i = a;      // 4: ordinary read
            // ...
        }
    }
}

From the intent of the program, we clearly want the reader method, when executed in another thread, to observe the value written by writer: if it sees flag == true at operation 3, then the read of a at operation 4 must return 1. In other words, operation 1 must happen-before operation 4.

This is impossible to guarantee without an adequate synchronization mechanism, and the synchronization provided by the volatile keyword achieves it easily. Consider the happens-before relationships among these four operations:

  1. By the program order rule, 1 happens-before 2, and 3 happens-before 4.
  2. By the volatile variable rule, 2 happens-before 3.
  3. By the transitivity of happens-before, 1 happens-before 4.

In this way, we guarantee that operation 1 happens-before operation 4: whenever the reader sees flag == true, it also sees a == 1.
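
To see this across actual threads, here is a small runnable sketch of the same pattern (the class name, the spin loop, and the main driver are additions for illustration; VolatileExample above only checks flag once). Once the reader thread observes flag == true, the happens-before chain above guarantees it also observes a == 1.

class VolatileHappensBeforeDemo {
    static int a = 0;
    static volatile boolean flag = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!flag) {            // 3: volatile read, repeated until the write is visible
                // busy-wait
            }
            System.out.println("a = " + a);   // 4: guaranteed to print a = 1
        });
        reader.start();

        a = 1;                         // 1: ordinary write
        flag = true;                   // 2: volatile write publishes the write to a
        reader.join();
    }
}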

References

  • "Java Concurrent Programming Practice"
  • http://ifeve.com/java-memory-model-4/
  • http://ifeve.com/syn-jmm-volatile/
