The Art of Java Concurrent Programming (7): volatile memory semantics and implementation

When a shared variable is declared volatile, reads and writes of that variable are handled specially by the Java Memory Model (JMM).

The characteristics of volatile

(1) Visibility: a read of a volatile variable always sees the last write to that volatile variable made by any thread.
(2) Atomicity: a read or write of any single volatile variable is atomic, but compound operations such as volatile++ are not atomic (see the sketch below).
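
The non-atomicity of compound operations can be seen with a minimal, hypothetical sketch (class and variable names are illustrative, not from the original article): two threads incrementing a volatile counter may lose updates, because volatile++ is really a read-modify-write of three separate steps.

class VolatilePlusPlusExample {
    static volatile int count = 0;   // visibility is guaranteed, atomicity of ++ is not

    public static void main(String[] args) throws InterruptedException {
        Runnable increment = () -> {
            for (int k = 0; k < 10_000; k++) {
                count++;   // read count, add 1, write back: three steps, not one atomic step
            }
        };
        Thread t1 = new Thread(increment);
        Thread t2 = new Thread(increment);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Often prints less than 20000, because increments from the two threads can interleave.
        System.out.println(count);
    }
}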

The happens-before relationship established by a volatile write-read

A volatile write followed by a volatile read of the same variable can be used for communication between threads.
From the perspective of memory semantics, a volatile write-read pair has the same effect as a lock release-acquire pair: a volatile write has the same memory semantics as releasing a lock, and a volatile read has the same memory semantics as acquiring a lock.

class VolatileExample {
    int a = 0;
    volatile boolean flag = false;

    public void writer() {
        a = 1;              // 1
        flag = true;        // 2
    }

    public void reader() {
        if (flag) {         // 3
            int i = a;      // 4
            // ……
        }
    }
}

Suppose that after thread A executes the writer() method, thread B executes the reader() method. By the happens-before rules, the happens-before relationships established in this process fall into three categories:
(1) By the program-order rule: 1 happens-before 2, and 3 happens-before 4.
(2) By the volatile rule: 2 happens-before 3.
(3) By the transitivity rule: 1 happens-before 4.
After thread A writes the volatile variable and thread B subsequently reads the same volatile variable, all shared variables that were visible to thread A before it wrote the volatile variable become visible to thread B immediately after B reads that volatile variable.

Memory semantics of volatile write and read

When a volatile variable is written, the JMM flushes the shared variables in the writing thread's local memory to main memory.
When a volatile variable is read, the JMM invalidates the reading thread's local memory, so the thread must re-read the shared variables from main memory.
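
A minimal sketch of how these two rules combine (class and field names are illustrative, not from the article): the volatile write of ready flushes the earlier write to payload, and the volatile read of ready forces the reader to pick the fresh values up from main memory.

class FlushInvalidateExample {
    int payload = 0;                 // ordinary shared variable
    volatile boolean ready = false;  // volatile flag

    void producer() {                // run on thread A
        payload = 42;                // may initially sit in A's local memory
        ready = true;                // volatile write: flush A's local memory to main memory
    }

    void consumer() {                // run on thread B
        while (!ready) {             // volatile read: invalidate B's local memory, re-read from main memory
            Thread.onSpinWait();
        }
        System.out.println(payload); // guaranteed to print 42
    }
}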

volatile read-write summary

(1) When thread A writes a volatile variable, it is essentially sending a message (describing the modifications it made to shared variables) to whichever thread will next read that volatile variable.
(2) When thread B reads a volatile variable, it is essentially receiving the message sent by some thread beforehand (the modifications that thread made to shared variables before writing the volatile variable).
(3) When thread A writes a volatile variable and thread B then reads that volatile variable, the process is essentially thread A sending a message to thread B through main memory.

Implementation of volatile memory semantics

The JMM defines a volatile reordering rule table for compilers.
(1) When the second operation is a volatile write, it cannot be reordered regardless of what the first operation is. This rule ensures that operations before a volatile write are not reordered by the compiler to after it.
(2) When the first operation is a volatile read, it cannot be reordered regardless of what the second operation is. This rule ensures that operations after a volatile read are not reordered by the compiler to before it.
(3) When the first operation is a volatile write and the second operation is a volatile read, they cannot be reordered (the sketch below annotates these three rules on a small method).
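
A minimal annotated sketch of these rules, using hypothetical field names:

class ReorderingRulesExample {
    int a;          // ordinary variable
    volatile int v; // volatile variable

    void demo() {
        a = 1;       // ordinary write
        v = 2;       // volatile write: rule (1) forbids moving "a = 1" below this line
        int r = v;   // volatile read: rule (3) forbids reordering it with the volatile write above
        int b = a;   // ordinary read: rule (2) forbids moving it above the volatile read
    }
}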

Inserting memory barriers

To implement the memory semantics of volatile, the compiler inserts memory barriers into the instruction sequence when generating bytecode, in order to prohibit particular kinds of processor reordering.
(1) A StoreStore barrier is inserted before each volatile write.
(2) A StoreLoad barrier is inserted after each volatile write.
(3) A LoadLoad barrier is inserted after each volatile read.
(4) A LoadStore barrier is inserted after each volatile read (an annotated sketch follows the schematic below).
[Figure: schematic of the instruction sequence generated for a volatile write under the conservative barrier-insertion strategy]
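
Under this conservative strategy, the barrier placement around a volatile write and a volatile read can be sketched as comments (the class and fields here are illustrative, not from the article):

class ConservativeBarriersExample {
    int x;          // ordinary variable
    volatile int v; // volatile variable

    void write() {
        x = 1;
        // StoreStore barrier: keeps the ordinary write above from reordering below the volatile write
        v = 2;      // volatile write
        // StoreLoad barrier: keeps the volatile write from reordering with a later volatile read/write
    }

    void read() {
        int r = v;  // volatile read
        // LoadLoad barrier: keeps later ordinary reads from reordering above the volatile read
        // LoadStore barrier: keeps later ordinary writes from reordering above the volatile read
        x = r;
    }
}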
Note the StoreLoad barrier placed after the volatile write. Its purpose is to prevent the volatile write from being reordered with any volatile read or write that may follow it.
The compiler often cannot determine precisely whether a StoreLoad barrier is really needed after a volatile write (for example, when the method returns immediately after the volatile write).
To guarantee that volatile memory semantics are implemented correctly, the JMM adopts a conservative strategy: insert a StoreLoad barrier after every volatile write, or alternatively before every volatile read.
The common usage pattern for volatile write-read memory semantics is one writer thread writing a volatile variable and multiple reader threads reading the same volatile variable. When the number of reader threads greatly outnumbers the writer threads, choosing to insert the StoreLoad barrier after the volatile write brings a considerable efficiency gain (a sketch of this pattern follows below).
This reflects a characteristic of how the JMM is implemented: guarantee correctness first, then pursue efficiency.
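
A minimal sketch of that one-writer / many-readers pattern, with hypothetical names not taken from the article: a single writer thread publishes new state through one volatile write, and reader threads pay only a cheap volatile read on their hot path.

import java.util.Map;

class ConfigHolder {
    private volatile Map<String, String> current = Map.of();

    // Called by the single writer thread: one volatile write per update.
    void update(Map<String, String> next) {
        current = next;
    }

    // Called by many reader threads: a cheap volatile read on the hot path.
    Map<String, String> get() {
        return current;
    }
}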

Barrier type | Instruction example | Explanation
LoadLoad Barriers | Load1; LoadLoad; Load2 | Ensures that the data of Load1 is loaded before Load2 and all subsequent load instructions.
StoreStore Barriers | Store1; StoreStore; Store2 | Ensures that the data of Store1 is visible to other processors (flushed to main memory) before Store2 and all subsequent store instructions.
LoadStore Barriers | Load1; LoadStore; Store2 | Ensures that the data of Load1 is loaded before Store2 and all subsequent store instructions are flushed to main memory.
StoreLoad Barriers | Store1; StoreLoad; Load2 | Ensures that the data of Store1 becomes visible to other processors (flushed to main memory) before Load2 and all subsequent load instructions. A StoreLoad barrier makes all memory access instructions (loads and stores) before the barrier complete before any memory access instruction after the barrier executes.

The StoreLoad barrier is an "all-purpose" barrier: it has the combined effect of the other three barriers. Most modern multiprocessors support it. Executing this barrier is expensive, because the processor typically has to flush all data in its write buffer to memory (fully flushing the buffer).

Examples

class VolatileBarrierExample {
    int a;
    volatile int v1 = 1;
    volatile int v2 = 2;

    void readAndWrite() {
        int i = v1;   // first volatile read
        int j = v2;   // second volatile read
        a = i + j;    // ordinary write
        v1 = i + 1;   // first volatile write
        v2 = j * 2;   // second volatile write
    }
    // …… other methods
}

For the readAndWrite() method, the compiler can optimize the barrier placement when generating bytecode, as follows.
Note that the final StoreLoad barrier cannot be omitted: the method returns immediately after the second volatile write, so the compiler cannot determine whether a volatile read or write will follow, and for safety it keeps the StoreLoad barrier here.
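
Applying the rules above, the optimized barrier placement for the readAndWrite() method of VolatileBarrierExample might look roughly like this (a sketch consistent with the description, not an exact reproduction of the figure):

void readAndWrite() {
    int i = v1;   // first volatile read
    // LoadLoad barrier: keeps the reads below from reordering above the first volatile read
    int j = v2;   // second volatile read
    // LoadStore barrier: keeps the ordinary write below from reordering above the volatile reads
    a = i + j;    // ordinary write
    // StoreStore barrier: keeps the ordinary write from reordering below the first volatile write
    v1 = i + 1;   // first volatile write
    // StoreStore barrier: keeps the first volatile write from reordering below the second
    v2 = j * 2;   // second volatile write
    // StoreLoad barrier: kept, because the method returns immediately and a volatile read/write may follow
}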
