Concurrent programming | thoroughly understand volatile

When should the volatile keyword be used?

In multi-threaded development, several threads may operate on the same shared variable. If you want each thread's modification of that shared variable to be immediately visible to the other threads, you need to declare the variable with the volatile keyword.

Why is a thread's modification of a shared variable not immediately visible to other threads?

To explain this, we have to look at the structure of the Java memory model (JMM). Memory is divided into shared main memory and per-thread working (private) memory. When a thread starts, it first reads variables from main memory into its own working memory, and all subsequent modifications are performed there. Data modified in working memory is not synchronized back to main memory immediately; it is only written back when the processor flushes it, and the timing of that flush is not guaranteed. During this window, the thread's changes to the variable are not visible to other threads.
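As a minimal sketch of the problem (the class and field names are illustrative, not from the original post), the reader thread below may spin forever because the main thread's write to the non-volatile flag is never guaranteed to become visible to it:

public class VisibilityDemo {
    private static boolean flag = false;   // note: not volatile

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!flag) {
                // busy-wait; the cached value of flag may never be refreshed
            }
            System.out.println("reader finally saw flag = true");
        });
        reader.start();
        Thread.sleep(1000);
        flag = true;   // this write may stay in the main thread's working memory
    }
}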

Why can volatile make a thread's modification of a variable visible to other threads?

The cause of the visibility problem was discussed above. To solve it, the thread must flush the variable to main memory immediately after modifying it. That alone is not enough: other threads must also be told that their cached copy of the variable is now invalid and has to be re-read from main memory. Of course, the writing thread does not notify the others itself, so how are the other threads made to read from main memory again? Let's see how the volatile keyword completes the communication between threads:

1. Thread A modifies the volatile variable.
2. The JMM flushes the modified shared variable to main memory.
3. When thread B reads the volatile variable, the JMM invalidates the copy in B's local memory.
4. Thread B reads the latest value directly from main memory.
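A minimal sketch of that communication (the class and field names are illustrative, not from the original post):

class Shared {
    volatile boolean flag = false;       // the volatile variable shared by thread A and thread B
}

// Thread A:
//     shared.flag = true;              // 1. A writes the volatile variable
//                                      // 2. the JMM flushes the new value to main memory
// Thread B:
//     if (shared.flag) { ... }         // 3. the JMM invalidates B's locally cached copy
//                                      // 4. B reads the latest value directly from main memory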


Why are the memory semantics of volatile writes and reads said to be equivalent to the memory semantics of releasing and acquiring a lock?

We know that after a volatile variable is written, the JMM flushes the shared variable to main memory. Releasing a lock does exactly the same thing: the JMM must guarantee that the data modified in local memory while the lock was held is visible to other threads, so when the lock is released the shared variable is flushed to main memory immediately. A volatile read invalidates the local copy of the shared variable and re-reads the data from main memory; acquiring a lock performs the same operation, so that the thread sees the data that was flushed to main memory when the previous thread released the lock. To summarize: the memory semantics of writing and reading a volatile variable are equivalent to the memory semantics of releasing and acquiring a lock.
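A rough sketch of this equivalence (the names are illustrative): every plain write made before the volatile write becomes visible after the volatile read, just as writes made while a lock is held become visible to the next thread that acquires that lock.

public class Publication {
    private static int data = 0;                     // an ordinary, non-volatile field
    private static volatile boolean ready = false;   // the volatile "signal"

    static void writer() {
        data = 42;        // plain write
        ready = true;     // volatile write: like releasing a lock, it flushes to main memory
    }

    static void reader() {
        if (ready) {      // volatile read: like acquiring a lock, it reloads from main memory
            System.out.println(data);   // guaranteed to print 42, never a stale 0
        }
    }
}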

Many people say that volatile prohibits instruction reordering. What does that mean?

When executing a program, compilers and processors often reorder instructions to improve performance.
The compiler only reorders in ways that do not change single-threaded semantics. For example, in the following code, statements 1 and 2 may be reordered.

a = 1;          // 1
if (flag) {
    a = 1;      // 2
}

In a multi-threaded situation, however, such reordering can produce results that differ from what we expect.
To forbid these reorderings, the JMM inserts memory barriers into the instruction stream, which prevent the processor and the compiler from reordering the affected code.
So prohibiting instruction reordering is about solving otherwise unpredictable problems in multi-threaded code.
Volatile has exactly this property.
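A rough sketch, following the usual JMM description, of the barriers conceptually inserted around volatile accesses (the actual instructions emitted depend on the processor and the JIT):

// volatile write:
//     StoreStore barrier   (earlier ordinary writes cannot be moved after the volatile write)
//     write of the volatile variable
//     StoreLoad barrier    (the volatile write cannot be reordered with later reads)
//
// volatile read:
//     read of the volatile variable
//     LoadLoad barrier     (later reads cannot be moved before the volatile read)
//     LoadStore barrier    (later writes cannot be moved before the volatile read)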

Using volatile's prohibition of instruction reordering to fix the hidden danger in the double-checked locking singleton

There are two ways to implement a singleton: lazy initialization (the "lazy man" mode) and eager initialization (the "hungry man" mode). Eager initialization is thread-safe, while naive lazy initialization is not, so the code below uses double-checked locking to avoid the thread-safety problem. But is it really safe?
public final class VirtualCore {
    private static VirtualCore instance = null;

    private VirtualCore() {
    }

    public static VirtualCore get() {
        if (instance == null) {
            synchronized (VirtualCore.class) {
                if (instance == null) {
                    instance = new VirtualCore();
                }
            }
        }
        return instance;
    }
}

Looking at the code, we perform two null checks, so it should be safe. Unfortunately it is not, because instruction reordering may happen inside new VirtualCore(): that single statement actually breaks down into three steps.

memory = allocate();       // 1: allocate memory for the object
ctorInstance(memory);      // 2: initialize the object
instance = memory;         // 3: point instance at the newly allocated memory

If steps 2 and 3 are swapped by instruction reordering, the following can happen: thread A is still in the middle of new VirtualCore() when thread B sees a non-null instance and uses the not-yet-initialized object, which can result in a null pointer or other errors from using a partially constructed object.
Solution: declare instance as volatile, which explicitly prohibits the processor from reordering these instructions and makes the code thread-safe.
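A sketch of the corrected class (the same as above, only the declaration of instance changes):

public final class VirtualCore {
    // volatile forbids reordering steps 2 and 3 above, so no thread can ever
    // observe a non-null but not-yet-initialized instance
    private static volatile VirtualCore instance = null;

    private VirtualCore() {
    }

    public static VirtualCore get() {
        if (instance == null) {
            synchronized (VirtualCore.class) {
                if (instance == null) {
                    instance = new VirtualCore();
                }
            }
        }
        return instance;
    }
}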

Summary: Through these questions, this post should have given you a deeper understanding of volatile. More tutorials on concurrent programming will follow; thanks for your attention. You can follow my public account "Le Zai Talk" for more.


Search WeChat for [Le Zai open talk] and follow me; reply [Dry goods] and plenty of interview materials and must-read books for architects will be waiting for you, covering Java basics, Java concurrency, microservices, middleware, and more.
The more you read without thinking, the more you will feel you know a lot; the more you read and think, the more clearly you will see how little you know. --Voltaire
