volatile and synchronized

Copyright statement: This article is an original article by the blogger and may not be reproduced without the blogger's permission. http://www.cnblogs.com/jokermo/

0. Preface


Both volatile and synchronized are tools for solving thread-safety problems. To understand and use these two keywords, you first need to understand what causes a thread-safety problem in the first place. The causes can be summarized in two points:

  • The thread task operates on shared data.
  • The thread task performs multiple operations on that shared data. While one thread is partway through those operations, another thread joins in, corrupting the data.

The idea for solving such problems:

  • Ensure that, for a given period, the code operating on the shared data is executed by only one thread, and that no other thread may participate while it executes. This is where volatile and synchronized come in.

1. The Java Memory Model (JMM)

The JMM shields Java programs from the memory-access differences between hardware and operating systems, so that a Java program achieves consistent memory-access behavior on every platform.

  • Main memory: can be loosely compared to the computer's RAM, though they are not exactly the same. Main memory is shared by all threads; for each shared variable (such as a static variable, or an instance in heap memory), main memory holds the master copy.
  • Working memory: can be loosely compared to the CPU cache, though again not exactly the same. Each thread has its own working memory, which holds a copy of each shared variable the thread uses.

 

All thread operations on shared variables must be performed in working memory; a thread cannot read or write main-memory variables directly. Operating on main memory directly would be too slow, so the JVM uses the faster working memory, much as a CPU uses its cache in front of RAM. Threads cannot access each other's working memory, so variable values can only be passed between threads through main memory.

 

2. synchronized

synchronized can be applied to a block of code or to a method, and it guarantees both visibility and atomicity.

  • Applied to a method, it is called a synchronized method; the lock of a synchronized method is this (for a static method, the Class object).
  • Applied to a block of code, it is called a synchronized block; an object shared by the threads serves as the lock.

Visibility: synchronized (like Lock) ensures that only one thread at a time acquires the lock and executes the synchronized code, and that its changes to variables are flushed to main memory before the lock is released.

Atomicity: the synchronized code either does not execute at all, or executes to completion without interference.

Synchronized blocks are generally preferable: a synchronized method may also cover code that touches no shared resources, needlessly blocking other threads from the whole method, so its performance tends to be worse than a well-scoped synchronized block.
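As a minimal sketch of the preferred style, the counter below guards only the shared field with a synchronized block on a dedicated lock object (the class and field names here are illustrative):

```java
public class SyncCounter {
    private int count = 0;
    // A dedicated lock object keeps the critical section as small as possible.
    private final Object lock = new Object();

    public void increment() {
        synchronized (lock) {   // only the shared-data operation is locked
            count++;
        }
    }

    public int get() {
        synchronized (lock) {
            return count;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) c.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // With the lock, no increments are lost.
        System.out.println(c.get()); // always 20000
    }
}
```

Without the synchronized block, the two threads' read-modify-write sequences on count could interleave and the final value would often be less than 20000.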

3. volatile

The volatile keyword has several properties, the most important of which is that it guarantees the visibility of the modified variable to all threads. Visibility here means that when a thread writes the variable, the new value is immediately flushed to main memory, and when another thread reads the variable, it fetches the latest value from main memory. This is backed by Java's happens-before rules.

However, volatile does not guarantee atomicity. Because of this, errors can still occur under concurrency, so volatile is only appropriate under certain conditions. The following situations are suitable:

  • The result of the operation does not depend on the variable's current value, or only a single thread ever modifies the variable. In other words, volatile is not enough when multiple threads read-modify-write the shared variable concurrently.
  • The variable does not participate in invariants together with other state variables.
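The first condition is the classic stop-flag pattern: one thread writes the flag, others only read it. A minimal sketch (class and field names are illustrative):

```java
public class StopFlag {
    // volatile guarantees the worker promptly sees the write made by main.
    private static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // busy-wait until another thread sets the flag
            }
            System.out.println("stopped");
        });
        worker.start();
        Thread.sleep(100);   // let the worker spin briefly
        stop = true;         // single writer, so volatile is sufficient
        worker.join();
    }
}
```

Without volatile, the worker is allowed to keep reading a stale copy of stop from its working memory and might never terminate.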

The first condition is easy to understand, so let's take a closer look at the second one.

volatile static int start = 3;
volatile static int end = 6;

Thread A executes the following code:

while (start < end) {
    // do something
}

Thread B executes the following code:

start+=3;

end+=3;

In this case, if thread B runs while thread A is looping, start may be updated to 6 before end is updated to 9. For a moment start == end, so the condition start < end is false and thread A incorrectly breaks out of the while loop.
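The dangerous interleaving can be reproduced deterministically in a single thread by stopping thread B's work halfway (this is a simulation of the timing, not a real race):

```java
public class InvariantDemo {
    static volatile int start = 3;
    static volatile int end = 6;

    public static void main(String[] args) {
        // Simulate the moment where thread B has executed start += 3
        // but has not yet executed end += 3.
        start += 3;   // start becomes 6, end is still 6
        boolean loopWouldExit = !(start < end);   // thread A's loop condition fails
        System.out.println(start + " " + end + " " + loopWouldExit);
    }
}
```

Even though both fields are volatile, the invariant start < end is transiently violated, which is exactly why variables tied together by an invariant need a lock rather than volatile.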

 

4. Instruction reordering

What is instruction reordering?

Instruction reordering means that the JVM may rearrange the order of instructions when compiling Java code, or the CPU may do so when executing the compiled instructions.

The purpose of instruction reordering is to optimize the program's running efficiency without changing its result. Note that "without changing the result" here means the result is unchanged only under a single thread.

Under multiple threads, instruction reordering can produce incorrect results:

/* Thread A */
boolean b = false;
context = test();
b = true;

/* Thread B */
while (!b) {
    sleep(2000);
    ...
}
doAfter(context);

Executed in program order, this code produces the correct result; but if instruction reordering occurs, it may effectively run as the following:

/* Thread A */
boolean b = false;
b = true;
context = test();

/* Thread B */
while (!b) {
    sleep(2000);
    ...
}
doAfter(context);

Now it is quite possible that context has not yet been assigned while b is already true. Thread B then exits its waiting loop and executes doAfter(context), and the result naturally goes wrong.

5. Memory barriers

To prevent harmful instruction reordering, memory barriers are used. A memory barrier, also known as a memory fence or fence instruction, is a barrier instruction that causes the CPU or compiler to enforce an ordering constraint on the memory operations issued before and after it. This usually means that operations issued before the barrier are guaranteed to execute before operations issued after it.

There are four types of memory barriers:

  • LoadLoad barrier :

    Abstract Scenario: Load1; LoadLoad; Load2

    Load1 and Load2 represent two read instructions. Before the data to be read by Load2 is accessed, ensure that the data to be read by Load1 has been read.

  • StoreStore Barrier:

    Abstract scene: Store1; StoreStore; Store2

    Store1 and Store2 represent two write instructions. Before Store2 writes are performed, ensure that Store1's write operations are visible to other processors

  • LoadStore barrier:

    Abstract Scenario: Load1; LoadStore; Store2

    Before Store2 is written, ensure that the data to be read by Load1 has been read.

  • StoreLoad barrier:

    Abstract Scenario: Store1; StoreLoad; Load2

    The write of Store1 is guaranteed to be visible to all processors before Load2 reads. The StoreLoad barrier has the largest overhead of the four barriers.
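Since Java 9, the java.lang.invoke.VarHandle class exposes explicit fence methods that correspond roughly to these barrier types (storeStoreFence, loadLoadFence, fullFence for the strongest StoreLoad case, plus acquireFence/releaseFence for combinations). A single-threaded sketch of where such fences would sit in a publish sequence:

```java
import java.lang.invoke.VarHandle;

public class FenceDemo {
    static int data = 0;
    static boolean ready = false;

    public static void main(String[] args) {
        data = 42;                    // Store1: write the payload
        VarHandle.storeStoreFence();  // StoreStore: data's write becomes visible before ready's
        ready = true;                 // Store2: publish the flag
        VarHandle.fullFence();        // full barrier, subsumes StoreLoad (the most expensive)
        System.out.println(ready + " " + data);
    }
}
```

In a single thread the fences change nothing observable; their effect is to constrain what other processors may see, which is why this sketch only illustrates placement.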

6. volatile and instruction reordering

What does volatile do?

 

After a variable is declared volatile, the JVM does two things for us:

1. Before each volatile write, insert a StoreStore barrier; after the write, insert a StoreLoad barrier.

2. After each volatile read, insert a LoadLoad barrier and a LoadStore barrier.

This may seem a bit abstract, so let's revisit the thread A code from earlier:

boolean b = false;
context = test();
b = true;

What effect does adding the volatile modifier to b bring?

 

volatile boolean b = false;
context = test();
StoreStore barrier
b = true;
StoreLoad barrier

Because of the StoreStore barrier, the ordinary write context = test() above the barrier and the volatile write b = true below it can no longer swap places, which successfully prevents the harmful instruction reordering.

Summary:

1. volatile feature one

Guarantees visibility of variables between threads. The visibility guarantee is built on the CPU's memory-barrier instructions, abstracted by JSR-133 as the happens-before rules.

2. volatile feature two

Prevents instruction reordering at both compile time and run time. The JVM compiler obeys the memory-barrier constraints at compile time, and the runtime relies on CPU barrier instructions to prevent reordering.

Note: Most of the content comes from "Code Farmer Turning Over", I am a porter.

 
