A thorough understanding of the volatile keyword

1. volatile Introduction

From the previous article in our in-depth series on Java keywords, we learned about synchronized. Java has another powerful tool, the volatile keyword, which can be said to stand shoulder to shoulder with synchronized. What secrets does it hold? Let's discuss.

From the previous article we learned that synchronized is blocking synchronization, and under fierce thread contention it is upgraded to a heavyweight lock. volatile, by contrast, can be called the most lightweight synchronization mechanism provided by the Java virtual machine. But it is not easy to understand correctly, and that makes it a recurring problem in concurrent programming; many programmers simply reach for synchronized whenever they hit a thread-safety issue. The Java memory model tells us that each thread copies shared variables from main memory into its working memory, and the execution engine then operates on the data in the working memory. When is a thread's update in working memory written back to main memory? For ordinary variables this timing is unspecified, but for volatile variables the Java virtual machine makes a special guarantee: a thread's change to a volatile variable is immediately perceived by other threads, so stale (dirty) reads of the data do not occur, ensuring the "visibility" of the data.

So far we have this general idea: declaring a variable volatile ensures that every thread obtains the latest value of that variable, which avoids stale reads.

2. volatile implementation principle

How is volatile implemented? Consider a very simple line of Java code:

instance = new Instance(); // instance is a volatile field

When a write to this volatile-modified shared variable is compiled, the generated assembly contains an extra Lock-prefixed instruction (you can verify this with certain tools; here I will only state the result). The magic must lie in this Lock instruction. So what does a Lock-prefixed instruction do on a multi-core processor? It has two major effects:

1. It causes the data of the current processor's cache line to be written back to system memory;
2. This write-back invalidates the data cached at that memory address by other CPUs.

To increase processing speed, the processor does not communicate with memory directly; it first reads data from system memory into its internal caches (L1, L2, or others) before operating on it, and it is not known when the result will be written back to memory. If a write is performed on a variable declared volatile, the JVM sends a Lock-prefixed instruction to the processor, which writes the cache line containing the variable's data back to system memory. However, even after the write-back, other processors may still hold the old value in their caches, and computations on that stale value would be incorrect. Therefore, to keep each processor's cache consistent on a multiprocessor system, a **cache coherence protocol** is implemented: **each processor checks, by sniffing the data propagated on the bus, whether its own cached values have expired**. When a processor finds that the memory address corresponding to one of its cache lines has been modified, it marks that cache line invalid; when it next needs to operate on that data, it re-reads the data from system memory into its cache. From this analysis we can draw the following conclusions:

1. A Lock-prefixed instruction causes the processor's cache line to be written back to memory;
2. one processor's cache being written back to memory causes the corresponding cache lines of other processors to be invalidated;
3. when a processor finds its local cache invalid, it re-reads the variable's data from memory and thus obtains the latest value.

This mechanism is what allows every thread to obtain the latest value of a volatile variable.

3. volatile of happens-before relationship

After the analysis above, we know that volatile variables rely on the **cache coherence protocol** to ensure that each thread gets the latest value, i.e. the "visibility" of the data. Let us continue with our usual way of analyzing the problem (I have always believed that developing your own way of thinking is the most important thing, and a capability worth continually cultivating). My entry point for analyzing concurrency has been **two cores, three properties**. The two cores: the JMM memory model (main memory and working memory) and happens-before; the three properties: atomicity, visibility, and ordering (articles summarizing the three properties will follow, and we can discuss them then). Without further ado, let's first look at one of the two cores: volatile's happens-before relationship.

Among the six [happens-before rules](https://juejin.im/post/5ae6d309518825673123fd0e) there is one for volatile: **the volatile variable rule: a write to a volatile field happens-before every subsequent read of that volatile field.** Let's apply this rule to a concrete piece of code:

public class VolatileExample {
    private int a = 0;
    private volatile boolean flag = false;
    public void writer(){
        a = 1;          //1
        flag = true;   //2
    }
    public void reader(){
        if(flag){      //3
            int i = a; //4
        }
    }
}

The happens-before relationships for the example code above are shown in the figure below:

Figure: derivation of the happens-before relationships in VolatileExample

Suppose thread A first executes the writer method and thread B then executes the reader method. In the figure, each arrow between two nodes in the code represents a happens-before relationship: black arrows are derived from the **program order rule**, the red arrow from the **volatile variable rule** (a write to a volatile variable happens-before any subsequent read of that volatile variable), and blue arrows from the **transitivity rule**. Here 2 happens-before 3. By the definition of happens-before, if A happens-before B, then the result of A is visible to B and A is ordered before B. So the result of operation 2 is visible to operation 3: once thread A changes the volatile variable flag to true, thread B perceives it immediately.
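To make this guarantee concrete, here is a small driver that runs writer and reader on two threads (the VolatileExampleDemo class and its main method are my own illustrative sketch, not code from the article): once the reader observes flag == true at step 3, the chain 1 hb 2 hb 3 hb 4 guarantees it also sees a == 1 at step 4.

```java
// Illustrative driver (class name and main are assumptions, not from the article).
public class VolatileExampleDemo {
    private int a = 0;
    private volatile boolean flag = false;

    public void writer() {
        a = 1;          // 1: ordinary write
        flag = true;    // 2: volatile write
    }

    public int reader() {
        if (flag) {     // 3: volatile read
            return a;   // 4: must observe a == 1, because 1 hb 2 hb 3 hb 4
        }
        return -1;      // flag not yet visible
    }

    public static void main(String[] args) throws InterruptedException {
        VolatileExampleDemo demo = new VolatileExampleDemo();
        Thread writer = new Thread(demo::writer);
        Thread reader = new Thread(() -> {
            int r;
            while ((r = demo.reader()) == -1) { /* spin until flag becomes visible */ }
            System.out.println("reader saw a = " + r);
        });
        reader.start();
        writer.start();
        writer.join();
        reader.join();
    }
}
```

Note that the reader can never print 0: by the time the volatile read of flag returns true, the earlier ordinary write a = 1 is guaranteed to be visible.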

4. volatile memory semantics

Continuing with the **two cores** approach: having analyzed the happens-before relationship, we now go a step further and analyze volatile's memory semantics (studying it this way should give everyone a firmer grasp of the material rather than leaving you at a loss; if you agree with this approach, please give it a like, it is an encouragement to me). Using the code above as an example again, suppose thread A executes the writer method first and thread B then executes the reader method. Initially, flag and a in each thread's local memory are in their initial state. The figure below shows the state after thread A performs the volatile write.

Figure: state after thread A performs the volatile write

After the volatile write, the copy of the shared variable in the other thread's local memory is marked invalid, so when thread B needs to read it, it must fetch the latest value from main memory. The figure below shows the memory changes when thread B reads the same volatile variable.

Figure: state after thread B reads the volatile variable

Viewed end to end, thread A and thread B have performed one round of communication: when thread A writes the volatile variable, it is effectively sending thread B a message saying "your current value is stale"; when thread B then reads the volatile variable, it is receiving the message thread A just sent. Since its value is stale, what can thread B do? Naturally, it can only go to main memory to fetch the fresh value.

Good: we have now covered both of the **two cores**, happens-before and the memory semantics. Not satisfied yet? Suddenly discovering how much you love learning (smiley face)? Then let's add one more piece of substance: how volatile's memory semantics are implemented.

4.1 Implementation of volatile's memory semantics

We all know that, as a performance optimization, the JMM allows the compiler and processor to reorder instruction sequences as long as the correct semantics are preserved. So what do we do when we need to prevent reordering? The answer is to insert memory barriers.
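The one-line example from the beginning of the article, instance = new Instance(), comes from exactly this kind of situation: the classic double-checked-locking singleton. A sketch under that assumption (the Singleton class below is my illustration, not code from the article): without volatile, the write publishing the reference could be reordered before the constructor finishes, letting another thread see a half-constructed object; the volatile write forbids that reordering.

```java
// Illustrative double-checked-locking singleton (my sketch, not from the article).
public class Singleton {
    // volatile forbids reordering of "publish reference" before "run constructor"
    private static volatile Singleton instance;
    private final int value;

    private Singleton() { value = 42; }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, under the lock
                    instance = new Singleton(); // volatile write
                }
            }
        }
        return instance;
    }

    public int getValue() { return value; }

    public static void main(String[] args) {
        System.out.println(Singleton.getInstance().getValue());
    }
}
```

Without the volatile modifier this pattern is broken under the JMM, which is precisely why reordering has to be constrained.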

Memory barriers

The JMM defines four kinds of memory barriers, shown in the figure below.

Figure: the four kinds of JMM memory barriers

When generating the instruction sequence, the Java compiler inserts memory barrier instructions at appropriate places to forbid particular kinds of processor reordering. To implement volatile's memory semantics, the JMM restricts certain kinds of compiler and processor reordering; for the compiler, the JMM defines a volatile reordering rule table:

Figure: volatile reordering rule table

"NO" means reordering is forbidden. To implement volatile's memory semantics, the compiler, when generating bytecode, inserts memory barriers into the instruction sequence to forbid particular kinds of **processor reordering**. Since it is practically impossible for the compiler to find an optimal placement that minimizes the total number of barriers, the JMM adopts a conservative strategy:

1. insert a StoreStore barrier **before** each volatile write;
2. insert a StoreLoad barrier **after** each volatile write;
3. insert a LoadLoad barrier **after** each volatile read;
4. insert a LoadStore barrier **after** each volatile read.

Note that a volatile write has a memory barrier **inserted before it and another after it**, whereas a volatile read has **two memory barriers inserted after it**.

**StoreStore barrier**: forbids reordering of the ordinary writes above it with the volatile write below it;

**StoreLoad barrier**: prevents the volatile write above it from reordering with any volatile read/write that may follow;

**LoadLoad barrier**: forbids all ordinary reads below it from reordering with the volatile read above it;

**LoadStore barrier**: forbids all ordinary writes below it from reordering with the volatile read above it.
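To visualize the conservative strategy, here is an annotated sketch (the class and field names are my own; the comments mark where the JMM's conservative strategy conceptually places barriers, though a JIT compiler may optimize some of them away on a given architecture):

```java
// Illustrative sketch of conceptual barrier placement (names are my own).
public class BarrierPlacement {
    private int plain = 0;
    private volatile int vol = 0;

    public void write() {
        plain = 1;            // ordinary write
        // StoreStore barrier: the ordinary write above cannot sink below the volatile write
        vol = 1;              // volatile write
        // StoreLoad barrier: the volatile write cannot reorder with a later volatile read/write
    }

    public int read() {
        int v = vol;          // volatile read
        // LoadLoad barrier: later ordinary reads cannot float above the volatile read
        // LoadStore barrier: later ordinary writes cannot float above the volatile read
        int p = plain;        // ordinary read
        return v + p;
    }

    public static void main(String[] args) {
        BarrierPlacement b = new BarrierPlacement();
        b.write();
        System.out.println(b.read()); // single-threaded demo: 1 + 1
    }
}
```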

The two diagrams below illustrate this; the images are taken from an excellent book, 《Java并发编程的艺术》 (The Art of Java Concurrency Programming).

Figure: memory barriers inserted for a volatile write

Figure: memory barriers inserted for a volatile read

5. An example

Now that we have grasped the essence of volatile, I think we can all answer the question posed at the beginning of the article. The corrected code is:

public class VolatileDemo {
    private static volatile boolean isOver = false;
 
    public static void main(String[] args) {
        Thread thread = new Thread(new Runnable() {
            @Override
            public void run() {
                while (!isOver) ;
            }
        });
        thread.start();
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        isOver = true;
    }
}

Note the difference: **isOver is now declared as a volatile variable**. After the main thread sets isOver to true, the copy of the variable in the worker thread's working memory is invalidated, so the thread must re-read it from main memory. It then sees the latest value, true, exits the infinite loop, and the thread stops smoothly. The problem is solved, and knowledge gained :). (If you found this helpful, please give it a like; it is an encouragement to me.)

Origin www.cnblogs.com/wangwudi/p/12303772.html