Java Memory Model (JMM) Part 6: In-depth Understanding of synchronized (2)

This article continues from the previous one, Java Memory Model (JMM) Part 6: In-depth Understanding of synchronized (1).

3.7 Lock Elimination

Lock elimination is another JVM lock optimization, and a more thorough one. When the JIT compiler compiles a piece of code at runtime (just-in-time compilation, which can be understood as compiling a method right before it is executed for the first time), it may find through escape analysis that code which requests a lock actually has no shared-data contention at all. In that case the JVM eliminates the unnecessary synchronization, saving the cost of meaningless lock requests. Why would anyone synchronize when there is clearly no data contention? As programmers we would not write such code deliberately, but the program is not always what we think: even if we never use synchronization explicitly, we often use JDK built-in classes such as StringBuffer, Vector and Hashtable, whose methods lock internally, for example StringBuffer.append() and Vector.add().

public void add(String str1, String str2) {
	// StringBuffer is thread-safe, but sb is only used inside this method and never escapes,
	// so it cannot be referenced by other threads; the JVM automatically eliminates the internal lock
	StringBuffer sb = new StringBuffer();
	sb.append(str1)
	  .append(str2);
}

Code like the above is very common, especially when concatenating SQL, HQL and similar strings. Because sb is a local variable that is never referenced by another thread, yet StringBuffer.append() is a synchronized method, the JVM removes its lock through the lock elimination mechanism.
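Conceptually, once the lock is eliminated the synchronized append() calls behave as if the method had been written with the unsynchronized StringBuilder from the start. The sketch below only illustrates that effect (the JIT transforms compiled code, it does not rewrite the source); on HotSpot the optimization depends on escape analysis and lock elimination being enabled (the -XX:+DoEscapeAnalysis and -XX:+EliminateLocks flags, both on by default in current versions, though defaults can vary by JVM and version).

public void add(String str1, String str2) {
	// Equivalent effect after lock elimination: sb never escapes this method,
	// so dropping StringBuffer's internal lock makes the calls behave like
	// the unsynchronized StringBuilder below
	StringBuilder sb = new StringBuilder();
	sb.append(str1)
	  .append(str2);
}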

 

3.8 Lock Coarsening

In principle, when writing code we always recommend keeping the scope of a synchronized block as small as possible, synchronizing only over the actual scope of the shared data. This keeps the number of operations that need synchronization to a minimum, so that if there is lock contention, a waiting thread can acquire the lock as soon as possible. In most cases this principle is sound. However, if a series of consecutive operations repeatedly locks and unlocks the same lock, or even locks and unlocks inside a loop body, then even without any thread contention the frequent mutual-exclusion operations cause unnecessary performance loss.

Lock coarsening merges multiple consecutive lock/unlock operations on the same lock into a single lock with a larger scope, as in the following code example:

 

public void vectorTest(){
	Vector<String> vector = new Vector<String>();
	for(int i = 0 ; i < 10 ; i++){
		vector.add(i + "");
	}

	System.out.println(vector);
}
Adding collection elements inside a loop like this is very common. We know that Vector's add() is a synchronized method: every add must acquire the lock and release it when the call finishes. When the JVM detects that the same object (vector) is being locked and unlocked continuously in this way, it merges the operations into a single, wider lock/unlock pair. In effect, the locking is moved outside the for loop, replacing many lock/unlock operations with a single one.
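Conceptually, the coarsened result is as if the loop had been written with a single synchronized block around it. Because Vector's methods synchronize on the Vector instance itself (and synchronized is reentrant), the sketch below is a fair illustration of the effect; the real transformation happens in the JIT-compiled code, not in the source:

public void vectorTestCoarsened() {
	Vector<String> vector = new Vector<String>();
	// one coarsened lock around the whole loop instead of one lock per add() call
	synchronized (vector) {
		for (int i = 0; i < 10; i++) {
			vector.add(i + ""); // reenters the monitor already held by this thread
		}
	}
	System.out.println(vector);
}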

 

3.9 synchronized cannot be interrupted by interrupt()

A thread's interrupt() method can only interrupt a thread that is blocked, or about to block, in the WAITING / TIMED_WAITING states entered by calling join(), wait(), sleep() and similar methods, because these methods are aware of the interrupt operation and throw InterruptedException. Saying that synchronized cannot be interrupted by interrupt() means that when a thread executing a synchronized method or block finds the object lock already held, it is suspended by the underlying runtime and enters the blocked state. Because no method that can throw InterruptedException is involved, the blocked thread cannot be woken from this state, so it stays blocked waiting for the lock. The following code illustrates this:

import java.util.concurrent.TimeUnit;

public class ThreadTest implements Runnable {

	public ThreadTest() {
		// Start a thread that acquires this instance's lock and never releases it
		new Thread() {
			public void run() {
				f(); // lock acquired by this inner thread
			}
		}.start();
	}

	public synchronized void f() {
		System.out.println("Trying to call f()");
		while (true) // never releases the lock
			Thread.yield();
	}

	public void run() {
		// check the interrupt flag before each call
		while (true) {
			if (Thread.interrupted()) {
				System.out.println("Interrupt thread!!");
				break;
			} else {
				f(); // blocks here waiting for the lock held by the inner thread
			}
		}
	}

	public static void main(String[] args) throws InterruptedException {
		Thread t = new Thread(new ThreadTest());
		t.start();
		TimeUnit.SECONDS.sleep(1);
		// Interrupting the thread has no effect: t is blocked on the object lock
		t.interrupt();
	}
}

When the ThreadTest instance is created, its constructor immediately starts an inner thread that executes the synchronized method f(), and that thread holds the object lock forever. When thread t is then created and calls f(), it stays in a lock-waiting, blocked state. Although interrupt() is called later, t cannot be interrupted, because no method that throws InterruptedException is involved; t can only keep waiting until the object lock is released.
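By contrast, when a thread is blocked in one of the interrupt-aware methods mentioned above (sleep(), wait(), join()), the same interrupt() call does take effect immediately. A minimal contrasting sketch (not from the original article, assumes Java 8+ for the lambda):

import java.util.concurrent.TimeUnit;

public class SleepInterruptTest {
	public static void main(String[] args) throws InterruptedException {
		Thread t = new Thread(() -> {
			try {
				// sleep() is interrupt-aware and throws InterruptedException
				TimeUnit.SECONDS.sleep(60);
			} catch (InterruptedException e) {
				System.out.println("Interrupted while sleeping");
			}
		});
		t.start();
		TimeUnit.SECONDS.sleep(1);
		t.interrupt(); // takes effect: the sleeping thread wakes up right away
	}
}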

 

3.10 Summary of synchronized locks and lock optimizations

| Lock | Principle | Advantage | Shortcoming | Applicable scenario |
| --- | --- | --- | --- | --- |
| Biased lock | Once a thread acquires the lock, the lock enters biased mode; when the same thread requests the lock again, no synchronization is needed at all. | Eliminates the overhead of repeatedly acquiring/releasing the lock. | If contention is fierce, revoking the bias brings extra cost. | Most of the time only one and the same thread accesses the synchronized block repeatedly, with no contention. |
| Lightweight lock | Lock acquisition and release are implemented purely with CAS instructions and spinning. | Avoids switching the thread from user mode to kernel mode; the thread is not blocked, improving response time. | Under lock contention, spinning consumes CPU. | No real contention; multiple threads execute alternately and the synchronized block finishes quickly. |
| Heavyweight lock | Implemented by the underlying operating system, blocking contending threads and waking blocked threads. | Few advantages compared with the other cases. | Blocking/waking a thread requires switching between user mode and kernel mode, which is expensive, and blocking slows thread response. | Fierce contention with complex synchronized blocks that take a long time to execute. |
| Spin / adaptive spin | Execute a number of meaningless instructions instead of suspending the thread, avoiding the cost of suspend/resume state transitions. | Avoids switching the thread from user mode to kernel mode. | May consume CPU for nothing. | While a lightweight lock is contended, before switching to kernel mode (lock inflation). |
| Lock elimination | Through escape analysis, remove synchronization on locks that protect no shared data. | Eliminates the unnecessary cost of acquiring/releasing locks. | Done entirely by the JVM itself, relying on JIT compilation. | JIT-compiled code, typically around JDK built-in APIs such as StringBuffer and Vector. |
| Lock coarsening | Merge multiple consecutive acquire/release operations on the same lock into a single acquire/release with a larger scope. | Eliminates the overhead of repeatedly acquiring/releasing the lock. | Relies entirely on the JVM itself. | Synchronized blocks inside loop bodies. |

