JVM Series: The Java Memory Model (JMM)

The Java Memory Model (JMM) is not the same thing as the JVM runtime data area. The two are completely different concepts and must not be confused.

One, the difference between the JMM and the JVM runtime data area

The JVM runtime data area is a logical division of the memory a Java process occupies at runtime: the method area, the heap, the virtual machine stacks, the native method stacks, and the program counter. These regions are simply different uses, organized through different data structures under the JVM's management, of the memory the Java process has requested.

The Java Memory Model, by contrast, is a specification governing how the Java language reads and writes shared variables (more precisely, the memory operations behind shared variables) under multi-threaded concurrency. Its main job is to shield Java programs from the differences between operating systems and hardware when they access shared memory, and to solve problems such as visibility and atomicity across threads.

After we write a program, the compiler and the processor apply optimizations to improve performance. There are many such optimizations; instruction reordering is one example. Instructions get reordered and performance improves, but do we still get the execution result we expect?

The premise of any optimization is that the result of execution remains correct, and that requires extra guarantees. The JMM gives Java programmers exactly those guarantees: the result is still correct after optimization, and performance still improves.
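A minimal sketch of the visibility guarantee the JMM provides (the class and field names below are mine, not from the article): a writer thread publishes a value through a volatile flag, and the reader is guaranteed to see it. Without volatile, the reader's spin loop might never observe the write.

```java
public class VisibilityDemo {
    private static volatile boolean flag = false; // volatile: the write is visible to the reader
    private static int payload = 0;               // ordinary field, published via the flag

    public static int readAfterSignal() {
        Thread writer = new Thread(() -> {
            payload = 42;   // 1: ordinary write
            flag = true;    // 2: volatile write, happens-before the read of flag below
        });
        writer.start();
        while (!flag) { }   // spin until the volatile write becomes visible
        return payload;     // guaranteed 42: //1 happens-before //2 happens-before this read
    }

    public static void main(String[] args) {
        System.out.println(readAfterSignal()); // prints 42
    }
}
```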

Two, the happens-before principle

How does the JMM ensure that results are still correct after optimizations made for performance? The JVM specification lays down rules for the Java virtual machine's multi-threaded memory operations: the happens-before principle. It is mainly embodied in the two keywords volatile and synchronized.

The eight happens-before rules (the happens-before principle cannot be read literally as "one operation occurs earlier in time than another"):

  • Program order (single-thread) rule: within the same thread, an operation written earlier happens-before an operation written later.
  • Monitor lock rule: an unlock of a lock happens-before every subsequent lock of that same lock.
  • Volatile rule: a write to a volatile variable happens-before every subsequent read of that variable.
  • Transitivity rule: if operation A happens-before operation B, and operation B happens-before operation C, then operation A happens-before operation C.
  • Thread start rule: a call to a thread's start() method happens-before every action in the started thread.
  • Thread interruption rule: a call to a thread's interrupt() method happens-before the interrupted thread's code detects the interruption.
  • Thread termination rule: every operation in a thread happens-before any detection that the thread has terminated.
  • Finalizer rule: the completion of an object's initialization happens-before the start of its finalize() method.
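Two of the rules above, thread start and thread termination, can be demonstrated with a short sketch (class and field names are my own, not from the article): a value written before start() is visible inside the thread, and a value written inside the thread is visible after join() returns.

```java
public class StartJoinDemo {
    static int beforeStart = 0;
    static int insideThread = 0;

    // Returns {beforeStart, insideThread} as observed after join().
    public static int[] run() {
        beforeStart = 1;                 // written before start()
        Thread t = new Thread(() -> {
            int seen = beforeStart;      // thread start rule: start() happens-before this read, so seen == 1
            insideThread = seen + 1;     // written inside the thread
        });
        t.start();
        try {
            t.join();                    // termination rule: all writes in t happen-before join() returning
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return new int[]{beforeStart, insideThread};
    }

    public static void main(String[] args) {
        int[] r = run();
        System.out.println(r[0] + " " + r[1]); // prints "1 2"
    }
}
```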

Within the same thread, an operation written earlier happens-before an operation written later: many articles understand this as "code written earlier executes earlier in time than code written later", but instruction reordering really can make code written later run before code written earlier. That reading treats happens-before as "occurs before in time". In fact, happens-before carries no temporal meaning here at all. For example, the following code:

int a = 3;      //1
int b = a + 1; //2

Here the assignment to b at //2 uses the variable a, so Java's program order rule guarantees that the value of a at //2 must be 3, not 0 or any other value: //1 is written before //2, so the assignment to a at //1 must be visible to //2. Because //2 uses the variable a assigned at //1, and the Java memory model provides the program order rule, the JVM does not allow //1 and //2 to be reordered; that is, //2 cannot happen before //1. But for the following code:

int a = 3;
int b = 4;

The two statements have no direct dependencies, so instruction reordering may occur, that is, the assignment to b may precede the assignment to a.

An unlock of a lock happens-before every subsequent lock of that same lock: without further ado, look at the following code:

public class A {

    public int var;

    private static A a = new A();

    private A() {
    }

    public static A getInstance() {
        return a;
    }

    public synchronized void method1() {
        var = 3;
    }

    public synchronized void method2() {
        int b = var;
    }

    public void method3() {
        synchronized (new A()) { // note: NOT the same lock as method1/method2
            var = 4;
        }
    }
}

// code executed by thread 1:
A.getInstance().method1();

// code executed by thread 2:
A.getInstance().method2();

// code executed by thread 3:
A.getInstance().method3();

If "thread 1" executes first and "thread 2" executes right after it, then because "thread 1" must release the lock when it finishes method1, and "thread 2" must acquire that same lock before entering method2, the monitor lock rule applies: the variable var seen in method2 must be 3, and therefore b must be 3 as well. But if the order is "thread 1", "thread 3", "thread 2", is the value of b in method2 3 or 4? The answer: either 3 or 4. "Thread 3" does unlock after executing method3, and "thread 2" does lock afterwards, but the two threads are not using the same lock, so these two operations do not match any of the eight happens-before rules. The JMM therefore cannot guarantee that "thread 3"'s modification of var is visible to "thread 2", even though "thread 3" runs before "thread 2" in time.
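A hedged sketch of the fix (class name B and the demo method are my own, not from the article): if method3 synchronizes on this, the same singleton monitor the other two methods lock, then the unlock/lock rule covers all three threads and the last write is guaranteed visible.

```java
public class B {
    private int var;
    private static final B instance = new B();
    private B() { }
    public static B getInstance() { return instance; }

    public synchronized void method1() { var = 3; }
    public synchronized int method2() { return var; }

    public void method3() {
        synchronized (this) { // now the SAME lock as method1/method2
            var = 4;
        }
    }

    // Run thread 1, then thread 3, then read via method2.
    public static int demo() {
        B b = getInstance();
        runAndWait(new Thread(b::method1));
        runAndWait(new Thread(b::method3));
        return b.method2(); // unlock/lock of the same monitor: guaranteed to see 4
    }

    private static void runAndWait(Thread t) {
        t.start();
        try { t.join(); } catch (InterruptedException e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        System.out.println(B.demo()); // prints 4
    }
}
```

The join() calls only impose the "thread 1 → thread 3 → read" order for the demo; the visibility guarantee being illustrated comes from every thread locking the same monitor.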

A write to a volatile variable happens-before every subsequent read of that variable:

volatile int a;

// thread 1
a = 1; //1

// thread 2
b = a; //2

If "thread 1" executes //1 and "thread 2" executes //2, and "thread 2" runs after "thread 1" has finished, then the volatile rule applies, so the value of a read in "thread 2" must be 1.

If operation A happens-before operation B, and operation B happens-before operation C, then operation A happens-before operation C: consider the following code block:

volatile int var;
int b;
int c;

// thread 1
b = 4; //1
var = 3; //2

// thread 2
c = var; //3
c = b; //4

Suppose "thread 1" executes //1 and //2, and "thread 2" executes //3 and //4. If the execution order is //1 //2 //3 //4, we have the following derivation (hb(a,b) means a happens-before b):

Because hb(//1,//2) and hb(//3,//4) (program order rule),
and hb(//2,//3) (volatile rule),
we get hb(//1,//3), from which hb(//1,//4) follows (transitivity),
so the final value of the variable c is 4.
If instead the execution order is //1 //3 //2 //4, then the value read at //4 cannot be determined: the pair //2 and //3 no longer matches any of the eight rules (the volatile write at //2 now comes after the read at //3), so nothing can be inferred through transitivity.
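The first execution order can be made concrete by having thread 2 spin until it observes the volatile write, which forces the //1 //2 //3 //4 ordering the derivation assumes (the class name is mine, not from the article):

```java
public class TransitivityDemo {
    static volatile int var = 0; // volatile: write at //2 happens-before the read at //3
    static int b = 0;            // ordinary field, published through var

    public static int run() {
        final int[] c = new int[1];
        Thread t1 = new Thread(() -> {
            b = 4;    // 1
            var = 3;  // 2: volatile write
        });
        Thread t2 = new Thread(() -> {
            while (var != 3) { } // 3: spin until the volatile write at //2 is observed
            c[0] = b;            // 4: hb(1,2), hb(2,3), hb(3,4) => hb(1,4), so b reads as 4
        });
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) { throw new RuntimeException(e); }
        return c[0];
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 4
    }
}
```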

With the four rules above explained in detail, the remaining four rules should be self-evident.

Three, summary

To summarize: the happens-before principle is mainly embodied in the two keywords volatile and synchronized.

  • volatile is the visibility guarantee the JVM provides for shared variables under multi-threaded reads and writes. Its main effects are to forbid caching of a volatile-modified shared variable (this involves the CPU cache and the cache coherence protocol) and to forbid reordering around it (reordering: an optimization to improve performance, given that CPU processing speed far exceeds memory read/write speed). It does not, however, guarantee the atomicity of operations on the shared variable.
  • synchronized is the lock mechanism provided by the JVM. Through the properties of locks and memory barriers it guarantees the atomicity, visibility, and ordering of operations in the locked region.
  • What threads contend for is the "ownership" of a piece of memory in the object header of the lock object in the heap (a static synchronized method locks the class object; a non-static one locks the current object; a synchronized block locks the specified object). Only one thread at a time can hold this "ownership", i.e. the lock is exclusive, and this exclusivity guarantees the atomicity of operations in the locked region.
  • Load barriers and store barriers inserted before and after the code guarantee the visibility of operations on shared variables inside the locked block or method.
  • Acquire barriers and release barriers inserted before and after the code guarantee the ordering of operations on shared variables inside the locked block or method.
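The first bullet's caveat — volatile guarantees visibility but not atomicity — can be sketched as follows (class and field names are mine, not from the article): two threads increment both a volatile int and an AtomicInteger; the volatile counter can lose updates because ++ is a read-modify-write, while the atomic counter never does.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicityDemo {
    static volatile int volatileCounter = 0;                 // visible, but ++ is not atomic
    static final AtomicInteger atomicCounter = new AtomicInteger();

    // Returns {volatileCounter, atomicCounter} after two threads each add 10_000.
    public static int[] run() {
        Runnable work = () -> {
            for (int i = 0; i < 10_000; i++) {
                volatileCounter++;               // read-modify-write: concurrent updates can be lost
                atomicCounter.incrementAndGet(); // CAS loop: never loses an update
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) { throw new RuntimeException(e); }
        return new int[]{volatileCounter, atomicCounter.get()};
    }

    public static void main(String[] args) {
        int[] r = run();
        System.out.println("volatile: " + r[0] + ", atomic: " + r[1]); // atomic is always 20000
    }
}
```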

reference:

https://www.cnblogs.com/tiancai/p/9636199.html

https://zhuanlan.zhihu.com/p/92341957

Finished, call it a day!

[ Disseminating knowledge, sharing value ] — thank you, friends, for your attention and support. I am [ Zhuge Xiaoyuan ], an Internet migrant worker struggling through hesitation.
