Still don't understand the volatile keyword? Let's take a closer look

From the previous article's study of the JMM, you should already know that the Java Memory Model standardizes how the JVM lets programmers disable caching and compiler optimizations on demand. Concretely, these mechanisms are the three keywords volatile, synchronized, and final, plus the six Happens-Before rules.

In this article, let's take a closer look at the memory semantics and implementation of the volatile keyword, a high-frequency interview topic. Understanding volatile will also be a great help when we later study the thread-safe containers and open-source framework source code built on the java.util.concurrent package, because you will see volatile used everywhere in their implementations. After reading this article, you should clearly understand why this keyword is needed in so many concurrent scenarios, and you should be able to handle it smoothly in big-company interviews.

What volatile looks like from the outside

To learn a technology, we need to see the essence through the phenomena. So the question is: what is volatile? What does it look like from the outside? What are its specific characteristics?

A good way to understand the characteristics of volatile is to treat each single read or write of a volatile variable as if the same lock were used to synchronize that individual read or write. This may be hard to grasp in the abstract, so let's look at some code.
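Here is a minimal sketch of such a program: a class with one volatile field and three methods, one that writes the field, one that performs a compound read-modify-write on it, and one that reads it (the class and method names are illustrative):

```java
class VolatileFeaturesExample {
    volatile long vl = 0L;      // a shared field declared volatile

    public void set(long l) {
        vl = l;                 // a single volatile write
    }

    public void getAndIncrement() {
        vl++;                   // a compound (read-modify-write) operation on a volatile field
    }

    public long get() {
        return vl;              // a single volatile read
    }
}
```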

Assuming multiple threads call the three methods of the program above, that program is semantically equivalent to the following one, in which the field is an ordinary variable and every single read and write is synchronized with the same lock.
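A sketch of that semantically equivalent locked version (again, the names are illustrative):

```java
class VolatileFeaturesExample {
    long vl = 0L;                            // an ordinary field, no longer volatile

    public synchronized void set(long l) {   // a single write, guarded by the same lock
        vl = l;
    }

    public void getAndIncrement() {          // the compound operation: locked read, compute, locked write
        long temp = get();
        temp += 1L;
        set(temp);
    }

    public synchronized long get() {         // a single read, guarded by the same lock
        return vl;
    }
}
```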

From the code above, we can see that a single read or write of a shared variable modified by volatile has the same effect as acquiring a lock: a read of a volatile variable always sees the last write to it. Note, however, that for a sequence of several volatile operations, or for compound operations such as volatile++, the atomicity of the operation as a whole is not guaranteed.

In summary, a variable modified by volatile has these two characteristics:

  1. Visibility: a read of a volatile variable always sees the last write (by any thread) to that volatile variable.
  2. Atomicity: a read or write of any single volatile variable is atomic, but compound operations such as volatile++ are not atomic (see the sketch below this list).
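To make the second point concrete, here is a small sketch (not from the original article) in which ten threads each increment a volatile counter with ++. Because ++ is a read-modify-write compound operation, increments can be lost and the final value usually ends up below the expected total; the class name and counts are made up for illustration:

```java
public class VolatileIncrementDemo {
    static volatile int counter = 0;   // visible to all threads, but ++ is still not atomic

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    counter++;         // read, add, write: three steps that can interleave across threads
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // Expected 100000, but the printed value is usually smaller because some increments are lost.
        System.out.println("counter = " + counter);
    }
}
```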

Having seen what volatile looks like from the outside, you will naturally ask: how does volatile guarantee visibility, and the atomicity of a single operation on a single variable?

Don't worry; next, let's look at how the JMM guarantees this.

Memory semantics and implementation of volatile

Memory semantics of a volatile write:
When a volatile variable is written, the JMM flushes the value of the shared variable from the writing thread's local memory to main memory.

Memory semantics of a volatile read:

When a volatile variable is read, the JMM invalidates the reading thread's local memory, so the thread then reads the shared variable from main memory.

To put it bluntly, this amounts to disabling the CPU cache for that variable.

For example, declaring a volatile variable as volatile int x = 0 tells the compiler that reads and writes of this variable must not be served from the CPU cache alone; they must go to main memory.
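As a rough illustration of the flush-then-invalidate semantics described above (a sketch, not code from the original article; the class and field names are made up): with a volatile flag, a write by one thread becomes visible to another thread's subsequent reads.

```java
public class VolatileVisibilityDemo {
    static volatile boolean running = true;     // without volatile, the reader thread might never see the update

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // busy-wait; each iteration re-reads 'running' from main memory because it is volatile
            }
            System.out.println("reader observed running == false");
        });
        reader.start();

        Thread.sleep(100);
        running = false;                         // volatile write: flushed to main memory
        reader.join();                           // terminates because the reader sees the new value
    }
}
```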

In the previous article, we learned that reordering falls into two kinds: compiler reordering and processor reordering. To implement the volatile memory semantics, the JMM restricts each of these two kinds of reordering.

The following table shows the volatile reordering rules that the JMM specifies for the compiler (NO means the two operations must not be reordered):

| First operation \ Second operation | Ordinary read/write | volatile read | volatile write |
| --- | --- | --- | --- |
| Ordinary read/write |  |  | NO |
| volatile read | NO | NO | NO |
| volatile write |  | NO | NO |

For example, the last cell of the first data row means: when the first operation in the program is an ordinary read or write and the second operation is a volatile write, the compiler must not reorder these two operations.

To implement the volatile memory semantics, when generating the bytecode the compiler inserts memory barriers into the instruction sequence to prohibit specific kinds of processor reordering. A conservative insertion strategy is listed below, followed by an annotated sketch after the list.

  1. Insert a StoreStore barrier before each volatile write operation.
  2. Insert a StoreLoad barrier after each volatile write operation.
  3. Insert a LoadLoad barrier after each volatile read operation.
  4. Insert a LoadStore barrier after each volatile read operation.
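As a rough, hypothetical illustration of the strategy above (the barriers are shown only as comments; in reality they are emitted by the compiler, not written in Java source, and the class and field names here are made up):

```java
class BarrierSketch {
    int a = 0;
    volatile int v = 0;

    void write() {
        a = 1;                 // ordinary write
        // StoreStore barrier: earlier ordinary writes cannot be reordered below the volatile write
        v = 2;                 // volatile write
        // StoreLoad barrier: the volatile write cannot be reordered with later volatile reads/writes
    }

    void read() {
        int r1 = v;            // volatile read
        // LoadLoad barrier: later ordinary reads cannot be reordered above the volatile read
        // LoadStore barrier: later ordinary writes cannot be reordered above the volatile read
        int r2 = a;            // ordinary read
        System.out.println(r1 + r2);
    }
}
```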

As Java programmers, it is enough to know that these memory barriers guarantee correct volatile memory semantics on any processor platform and in any program. The processor-specific implementation details are beyond the scope of this article; those who are interested can explore them on their own.

Extension: why JSR-133 enhanced the memory semantics of volatile

Before explaining this problem, let's look at a piece of code:
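A sketch of the kind of example usually used here (the class, field, and method names are illustrative, but the values match the discussion below):

```java
class VolatileExample {
    int x = 0;
    volatile boolean v = false;

    public void writer() {
        x = 42;                       // 1: ordinary write
        v = true;                     // 2: volatile write
    }

    public void reader() {
        if (v) {                      // 3: volatile read
            System.out.println(x);    // 4: ordinary read; what value of x is seen here?
        }
    }
}
```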

In this sample code, suppose thread A executes the writer() method; by the volatile write semantics, it flushes "v = true" to main memory. Suppose thread B then executes the reader() method; by the volatile read semantics, thread B reads v from main memory. If thread B sees v == true, what value of x does thread B see?

Intuitively it should be 42, but what is it actually? That depends on the Java version: on versions earlier than 1.5, x may be 42 or 0; on 1.5 and later, x is guaranteed to be 42.

Analysis: in the old Java memory model before JSR-133, reordering between volatile variables was not allowed, but reordering between a volatile variable and an ordinary variable was allowed.

In other words, under the old memory model, when thread A executes (1) x = 42 and (2) v = true, the two statements have no data dependency and may be reordered. As a result, thread B may observe v == true while the write to x has not yet taken effect, so it reads x as 0.

Therefore, in the old memory model, a volatile write followed by a volatile read did not have the memory semantics of a lock release followed by a lock acquire. To provide an inter-thread communication mechanism lighter than locks, the JSR-133 expert group decided to enhance the memory semantics of volatile. How was it enhanced?

From the perspective of the compiler reordering rules and the processor memory-barrier insertion strategy: whenever reordering between a volatile variable and an ordinary variable could break the volatile memory semantics, that reordering is forbidden by the compiler reordering rules and by the memory-barrier insertion strategy.

From a programmer's perspective, this is expressed as a Happens-Before rule. How to understand the specific Happens-Before rules will be covered in follow-up articles in this Java concurrency series, so stay tuned.

 

 
