[Concurrent Programming] --- synchronized lock upgrading + an introduction to the other synchronized keyword optimizations in JDK 1.6

Source Address: https://github.com/nieandsun/concurrent-study.git


1 Overview of the synchronized keyword optimizations in JDK 1.6

The previous articles "[Concurrent Programming] - Understanding the principle of the synchronized keyword from the perspective of bytecode instructions" and "[Concurrent Programming] - Further understanding the principle of the synchronized keyword from the perspective of the JVM source code" have already explained that, before JDK 1.6, the processing logic of the synchronized keyword was: as soon as a thread wanted to enter a synchronized block, it would call a kernel function to compete for ownership of the monitor object associated with the lock.

Calling a kernel function involves switching between kernel mode and user mode, which consumes a lot of system resources and reduces efficiency. Moreover, studies have shown that in most cases synchronized code is executed alternately by threads; alternating execution produces no real concurrency and therefore no concurrency-safety problems, so the pre-JDK 1.6 mechanism behind the synchronized keyword clearly had a certain problem.

Doug Lea's ReentrantLock in fact solves this problem quite well; you can have a look at my article "[Concurrent Programming] - ReentrantLock source code analysis 1: the processing logic for alternately executed synchronized methods".

Probably because synchronized is one of Java's original keywords, the HotSpot virtual machine development team spent a lot of effort in JDK 1.6 implementing a variety of lock optimization techniques, including biased locking (Biased Locking), lightweight locking (Lightweight Locking), adaptive spinning (Adaptive Spinning), lock elimination (Lock Elimination) and lock coarsening (Lock Coarsening). These techniques are all designed to share data between threads more efficiently and to resolve contention problems, thereby improving program performance.


2 The synchronized lock upgrade process

The lock upgrade path is: no lock -> biased lock -> lightweight lock -> heavyweight lock


2.1 Biased locking (Biased Locking) - applies to the case where the same thread repeatedly enters the synchronized block


2.1.1 What is biased locking

Biased locking is an important optimization introduced in JDK 6, because the HotSpot authors found in practice that in most cases a lock is not only free of multi-threaded contention but is also always acquired repeatedly by the same thread. Biased locking was introduced to make lock acquisition cheaper for that thread.

The "biased" in biased locking means "partial to" or "in favour of": the lock is biased towards the first thread that acquires it, and that thread's ID is stored in the lock object's header. From then on, whenever the thread enters or exits the synchronized block, it only needs to check that the object header still records a biased lock, the right lock flag and its own ThreadID.

However, a biased lock must be revoked as soon as other threads start to compete for it, so the cost of revoking the bias must be lower than the cost of the CAS atomic operations it saves; otherwise biased locking does more harm than good.


2.1.2 Biased lock acquisition + revocation principle

[Locking]
When the first thread acquires the lock and enters the synchronized block, biased locking proceeds as follows:

  • (1) The virtual machine sets the lock flag bits in the object header to "01" and the bias bit to "1", i.e. biased mode (the last three bits of the Mark Word become "101").
  • (2) It then uses a CAS operation to record the ID of the thread that acquires the lock in the object's Mark Word. If the CAS succeeds, then every time the thread holding the biased lock later enters the related synchronized block, the virtual machine performs no synchronization operations at all, which is what makes a biased lock so efficient (a conceptual sketch follows the Mark Word figure below).

This is easier to understand in combination with the Mark Word storage layout (see my post "[Concurrent Programming] - The memory layout of a Java object can actually be proved!!!"):
[Figure: Mark Word layout table for the different lock states]
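To make the fast path in step (2) more concrete, here is a purely conceptual sketch in plain Java. This is not HotSpot code: the class and field names are invented for the illustration, and an AtomicLong simply stands in for the thread-ID bits of the Mark Word.

import java.util.concurrent.atomic.AtomicLong;

/**
 * Conceptual sketch of the biased-lock fast path (NOT the real HotSpot logic):
 * the first owner is recorded once with a CAS, after which re-entry by the
 * same thread only needs a cheap equality check.
 */
class ConceptualBiasedLock {
    private static final long UNBIASED = 0L;
    // stands in for the thread-ID bits of the Mark Word
    private final AtomicLong biasedOwner = new AtomicLong(UNBIASED);

    void enter() {
        long self = Thread.currentThread().getId();
        // fast path: the lock is already biased towards us, nothing more to do
        if (biasedOwner.get() == self) {
            return;
        }
        // first acquisition: try to bias the lock with a single CAS
        if (biasedOwner.compareAndSet(UNBIASED, self)) {
            return;
        }
        // another thread owns the bias; in HotSpot this is where revocation
        // and the upgrade to a lightweight lock would happen
        throw new IllegalStateException("bias would be revoked here");
    }
}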


[Revocation]
A biased lock is revoked as follows:

  • (1) Revoking a biased lock has to wait for the global safepoint (determined by the JVM; typical safepoints are loop back edges and method return points).
  • (2) The thread that holds the biased lock is suspended, and the JVM checks whether the lock object is still locked.
  • (3) The biased lock is then revoked, and the object reverts either to the unlocked state (flag bits "01") or to the lightweight-locked state (flag bits "00").

[Locking + revocation flow chart]
The acquisition and revocation of a biased lock can be expressed by the following figure:
This figure, which explains the process very well, comes from "The Art of Java Concurrency Programming", so I borrow it here ☺☺☺.
[Figure: biased lock acquisition and revocation flow]

2.1.3 Verifying biased locking

Biased locking is enabled by default from Java 6 onwards, but it is only activated a few seconds after the application starts; this delay can be turned off with the -XX:BiasedLockingStartupDelay=0 parameter. If you are sure that under normal circumstances all locks in the application are contended, you can disable biased locking altogether with the -XX:-UseBiasedLocking parameter.

The verification program is as follows (it uses the JOL library, org.openjdk.jol:jol-core, to print the object layout):

package com.nrsc.ch1.base.jmm.syn_study.upgrade;
import org.openjdk.jol.info.ClassLayout;
public class BiasedLockingDemo {
    
    private static class MyThread extends Thread {
        // static: the field is initialized only once, so every iteration uses the same lock object
        static Object obj = new Object();

        @Override
        public void run() {
            for (int i = 0; i < 3; i++) {
                synchronized (obj) {
                    // print the memory layout of the lock object
                    System.out.println(ClassLayout.parseInstance(obj).toPrintable());
                }
            }
        }
    }

    public static void main(String[] args) {
        MyThread mt = new MyThread();
        mt.start();
    }
}
  • To get the desired result, you need to add the following VM parameter at run time:
-XX:BiasedLockingStartupDelay=0
  • The result is as follows:

[Figure: JOL output showing the biased lock]

In the output, the first 56 bits of the Mark Word (the green part) store the ThreadId and the epoch, and you can see that these 56 bits are identical in all three printouts;
the last three bits of the Mark Word (the yellow part) are 101.

This result is consistent with the table described in 2.1.2.


2.1.4 Benefits of biased locking

Biased locking further improves performance when only one thread executes the synchronized block; it applies to the case where a single thread repeatedly acquires the same lock. In other words, biased locking improves the performance of synchronized code that is never contended.

It is also a trade-off style optimization, which means it is not always beneficial to the running program. If most locks in the program are always accessed by several different threads, as is typical with a thread pool, then bias mode is simply redundant.

Biased locking is disabled by default in JDK 5 but has been enabled by default since JDK 6. As mentioned above, it is only activated a few seconds after the application starts; the delay can be removed with the -XX:BiasedLockingStartupDelay=0 parameter, and if you are sure that under normal circumstances all locks in the application are contended, you can disable biased locking with the -XX:-UseBiasedLocking parameter.


Summary

  • Biased locking principle:

When the lock object is acquired by a thread for the first time, the virtual machine sets the flag bits in its object header to "101", i.e. biased mode, and at the same time uses a CAS operation to record the ID of the acquiring thread in the object's Mark Word. If the CAS succeeds, then every time the thread holding the biased lock later enters the related synchronized block, the virtual machine performs no synchronization operations at all, which is what makes a biased lock so efficient.

  • Benefits of biased locking

Biased locking further improves performance when only one thread executes the synchronized block; it applies to the case where a single thread repeatedly acquires the same lock. In other words, biased locking improves the performance of synchronized code that is never contended.


2.2 Lightweight locking (Lightweight Locking) - applies to threads entering the synchronized block alternately


2.2.1 What is a lightweight lock

The lightweight lock is a new locking mechanism added in JDK 6. The name "lightweight" is relative to the traditional lock implemented with an OS monitor, which is therefore called the "heavyweight" lock. The first thing that needs to be emphasized is that lightweight locks are not intended to replace heavyweight locks.

The purpose of introducing the lightweight lock is to avoid the performance overhead of the heavyweight lock when multiple threads execute the synchronized block alternately. If multiple threads enter the critical section at the same time, the lightweight lock inflates into a heavyweight lock, which is why the lightweight lock is not a replacement for the heavyweight lock.


2.2.2 Lightweight lock acquisition + release principle

[Locking]
When biased locking is disabled, or when contention between multiple threads causes a biased lock to be upgraded to a lightweight lock, the JVM tries to acquire the lightweight lock in the following steps:

  • (1) The JVM checks whether the object is currently in the unlocked state (hashcode | age | 0 | 01). If it is, the JVM first creates a space called a Lock Record in the current thread's stack frame to store a copy of the lock object's current Mark Word (officially this copy carries a "Displaced" prefix, i.e. Displaced Mark Word), copies the object's Mark Word into the Lock Record, and points the Lock Record's owner field at the lock object.
  • (2) The JVM then attempts to use a CAS operation to update the object's Mark Word to a pointer to the Lock Record. If the CAS succeeds, the thread has won the lock, the lock flag bits become "00", and the thread executes the synchronized block.
  • (3) If the CAS fails, the JVM checks whether the object's Mark Word already points to the current thread's stack frame. If it does, the current thread already holds this lock and simply executes the synchronized block; otherwise the lock has been seized by another thread, the lightweight lock must inflate into a heavyweight lock, the flag bits become "10", and the threads waiting behind it are blocked.

[Release]
A lightweight lock is released with a CAS operation, in the following steps:

  • (1) Retrieve the Displaced Mark Word that was stored in the Lock Record when the lightweight lock was acquired.
  • (2) Use a CAS operation to replace the object's current Mark Word with that data; if the CAS succeeds, the lock has been released successfully.
  • (3) If the CAS fails, another thread has tried to acquire the lock in the meantime, and the lightweight lock must be inflated into a heavyweight lock as it is released.

[Locking + release flow chart]
Here I again borrow the figure from "The Art of Java Concurrency Programming" ☺☺☺ (a conceptual Java sketch of the CAS-based acquire and release follows the figure).
[Figure: lightweight lock acquisition and release flow]
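The acquire and release steps above can be sketched very roughly in plain Java. Again, this is only a conceptual illustration and not the real HotSpot implementation: an AtomicReference stands in for the Mark Word, and a Thread reference stands in for the pointer to the owner's Lock Record.

import java.util.concurrent.atomic.AtomicReference;

/**
 * Conceptual sketch of a lightweight lock (NOT the real HotSpot logic).
 */
class ConceptualLightweightLock {
    // null = unlocked; otherwise "points to" the owning thread's lock record
    private final AtomicReference<Thread> markWord = new AtomicReference<>();

    boolean tryAcquire() {
        Thread self = Thread.currentThread();
        // step (2): CAS the "Mark Word" from unlocked to a pointer to our record
        if (markWord.compareAndSet(null, self)) {
            return true;                    // we own the lightweight lock
        }
        // step (3): CAS failed - is it a reentrant acquire by ourselves?
        if (markWord.get() == self) {
            return true;                    // already held by this thread
        }
        return false;                       // real contention: HotSpot would inflate here
    }

    boolean tryRelease() {
        // release: CAS the "Mark Word" back to the unlocked value
        return markWord.compareAndSet(Thread.currentThread(), null);
    }
}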


2.2.3 Verifying the lightweight lock

If you are interested, try it yourself; you may get some unexpected results...
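One possible experiment is sketched below. It assumes the JOL library (org.openjdk.jol:jol-core) is on the classpath and is best run with -XX:-UseBiasedLocking, so that the first acquisition produces a lightweight ("thin") lock instead of a biased one; the exact output still depends on your JVM version and flags.

import org.openjdk.jol.info.ClassLayout;

public class LightweightLockingDemo {
    static final Object obj = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            synchronized (obj) {
                // the flag bits should show "00", i.e. a lightweight (thin) lock
                System.out.println(ClassLayout.parseInstance(obj).toPrintable());
            }
        });
        t1.start();
        t1.join();   // the threads run strictly one after another, so there is no real contention

        Thread t2 = new Thread(() -> {
            synchronized (obj) {
                System.out.println(ClassLayout.parseInstance(obj).toPrintable());
            }
        });
        t2.start();
        t2.join();
    }
}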


2.2.4 Benefits of the lightweight lock

The lightweight lock is based on the empirical rule that "for the vast majority of locks, there is no contention during their entire lifetime". If that assumption is broken, then on top of the cost of mutual exclusion there is the extra cost of the CAS operations, so under multi-threaded contention the lightweight lock is actually slower than the heavyweight lock.

When multiple threads execute the synchronized block alternately, however, it avoids the performance cost of the heavyweight lock.


2.3 Spin locks

From the groundwork laid in the previous articles you should already know that locking with the monitor requires kernel functions to park and unpark threads, and parking and unparking a thread forces the CPU to switch back and forth between user mode and kernel mode. Frequent park and unpark operations are a heavy burden for the CPU and put a lot of pressure on the operating system's concurrent performance.

At the same time, the virtual machine development team noticed that in many applications the locked state of shared data only lasts for a very short period of time, too short to make blocking and waking threads worthwhile. If the physical machine has more than one processor and can run two or more threads in parallel, we can ask the thread that requests the lock to "wait a moment", without giving up its processor time, and see whether the thread holding the lock releases it soon. To make the thread wait, we simply let it execute a busy loop (spin); this technique is called a spin lock. Spin locks were already introduced in JDK 1.4.2 but were off by default and had to be enabled with the -XX:+UseSpinning parameter; since JDK 1.6 they are enabled by default.

Spin waiting cannot replace blocking, and leaving aside the requirement on the number of processors, spin waiting itself, although it avoids the overhead of a thread switch, still occupies processor time. So if the lock is held only for a short time, spin waiting works very well; conversely, if the lock is held for a long time, the spinning thread just wastes processor resources without doing any useful work and hurts performance. The spin wait must therefore have a limit: if the spin exceeds the limit without acquiring the lock, the thread should be suspended in the traditional way. The default spin count is 10, and it can be changed with the -XX:PreBlockSpin parameter.
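To make the spinning idea concrete, here is a minimal user-level spin lock with a bounded spin and a crude fallback. It only illustrates the concept and is not how HotSpot implements spinning; the spin limit of 10 simply mirrors the default spin count mentioned above.

import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.locks.LockSupport;

class BoundedSpinLock {
    private static final int SPIN_LIMIT = 10;   // mirrors the default spin count
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    void lock() {
        Thread self = Thread.currentThread();
        int spins = 0;
        // spin: keep retrying the CAS without giving up the CPU
        while (!owner.compareAndSet(null, self)) {
            if (++spins >= SPIN_LIMIT) {
                // spin budget exhausted: stop burning CPU and back off
                // (HotSpot would fall back to blocking on the monitor here)
                LockSupport.parkNanos(1_000);
                spins = 0;
            }
        }
    }

    void unlock() {
        owner.compareAndSet(Thread.currentThread(), null);
    }
}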


2.4 Adaptive spin locks

JDK 1.6 also introduced the adaptive spin lock. "Adaptive" means the spin time is no longer fixed; instead it is determined by the previous spin time on the same lock and by the state of the lock's owner. If a spin wait on a given lock has just succeeded and the thread holding the lock is still running, the virtual machine assumes that spinning is very likely to succeed again and allows the spin to last relatively longer, for example 100 loop iterations. Conversely, if spinning on a lock rarely succeeds, future attempts to acquire that lock may skip the spin entirely, to avoid wasting processor resources. With adaptive spinning, as the program runs and the profiling information keeps improving, the virtual machine's prediction of a lock's state becomes more and more accurate, and the virtual machine becomes more and more "smart".
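As a rough illustration of the "adaptive" part, the sketch below grows the spin budget after a successful spin and shrinks it after a failed one. The concrete adjustment rule (doubling and halving) is invented for this example and is not HotSpot's heuristic.

import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.locks.LockSupport;

class AdaptiveSpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();
    private volatile int spinBudget = 10;   // adapted as the lock is used

    void lock() {
        Thread self = Thread.currentThread();
        for (int spins = 0; spins < spinBudget; spins++) {
            if (owner.compareAndSet(null, self)) {
                // spinning paid off: allow a longer spin next time
                spinBudget = Math.min(spinBudget * 2, 1_000);
                return;
            }
        }
        // spinning failed: spin less next time, then fall back to parking
        spinBudget = Math.max(spinBudget / 2, 1);
        while (!owner.compareAndSet(null, self)) {
            LockSupport.parkNanos(10_000);
        }
    }

    void unlock() {
        owner.compareAndSet(Thread.currentThread(), null);
    }
}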


3 Introduction to the other synchronized keyword optimizations in JDK 1.6


3.1 Lock elimination

Lock elimination means that the just-in-time compiler (JIT) removes, at run time, locks on code that requires synchronization according to the source but where no contention on shared data can possibly occur. Lock elimination is mainly based on escape analysis: if the JIT can determine that none of the data used by a piece of code escapes to the heap where other threads could access it, that data can be treated as if it lived on the stack, i.e. as thread-private, and locking it requires no synchronization at all. Whether a variable escapes has to be determined by the virtual machine using data-flow analysis. A programmer may wonder: if I clearly know there is no data contention, why would there be synchronization to eliminate in the first place? In fact, many synchronization measures are not added by the programmer; synchronized code is far more common in Java programs than most readers would imagine.

For example, the following very simple piece of code only concatenates three strings and outputs the result; neither the source text nor the semantics of the program involve any synchronization.

public class Demo01 {
    
    public static void main(String[] args) {
        contactString("aa", "bb", "cc");
    }
    public static String contactString(String s1, String s2, String s3) {
        return new StringBuffer().append(s1).append(s2).append(s3).toString();
    }
}

StringBuffer.append() is a synchronized method, and the lock is the new StringBuffer() object itself. The virtual machine discovers that its dynamic scope is confined to the inside of the contactString() method: the reference to the new StringBuffer() object never "escapes" from contactString(), so no other thread can ever access it. Therefore, although there is a lock, it can safely be eliminated, and after just-in-time compilation this code ignores all synchronization and executes directly.
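By contrast, if the StringBuffer escapes the method, for example by being returned to the caller, other threads could in principle reach it, so the locks inside append() can no longer be eliminated. In HotSpot, escape analysis and lock elimination are controlled by the -XX:+DoEscapeAnalysis and -XX:+EliminateLocks flags. A sketch of such an escaping variant:

public class Demo01Escape {

    public static void main(String[] args) {
        System.out.println(contactStringEscaping("aa", "bb", "cc"));
    }

    // The StringBuffer is returned to the caller, so the object "escapes" the
    // method; the JIT can no longer prove it is thread-private and therefore
    // cannot eliminate the locks taken inside append().
    public static StringBuffer contactStringEscaping(String s1, String s2, String s3) {
        return new StringBuffer().append(s1).append(s2).append(s3);
    }
}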


3.2 Lock coarsening

In principle, when writing code we are always advised to keep the scope of a synchronized block as small as possible and to synchronize only over the actual scope of the shared data, so that the number of operations that need to be synchronized is as small as possible and, if there is lock contention, the waiting threads can obtain the lock as soon as possible.

In most cases this principle is correct. However, if a series of consecutive operations repeatedly lock and unlock the same object, or the lock operation even appears inside a loop body, then even without any thread contention the frequent mutex operations cause unnecessary performance loss. For example, the following code:

class Demo02 {
    public static void main(String[] args) {
        StringBuffer sb = new StringBuffer();
        // StringBuffer's methods are synchronized; there is no need to go through the
        // locking logic on every append - the whole for loop could be synchronized instead
        // ---> the JVM's lock coarsening may do exactly that for you
        for (int i = 0; i < 100; i++) {
            sb.append("aa");
        }
        System.out.println(sb.toString());
    }
}

By now you should understand what lock coarsening is; it can be defined as follows:

When the JVM detects that a series of small operations all use the same lock object, it enlarges (coarsens) the scope of the synchronized block so that it covers the whole string of operations, and the lock then only needs to be acquired once.
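Conceptually, the coarsened form of the loop above is roughly equivalent to writing the synchronization by hand around the whole loop, as in the sketch below. The JVM performs this transformation internally on the compiled code; it does not rewrite your source.

class Demo02Coarsened {
    public static void main(String[] args) {
        StringBuffer sb = new StringBuffer();
        // roughly what lock coarsening achieves: the lock on sb is taken once
        // around the whole loop instead of once per append() call
        synchronized (sb) {
            for (int i = 0; i < 100; i++) {
                sb.append("aa");
            }
        }
        System.out.println(sb.toString());
    }
}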


end


Origin blog.csdn.net/nrsc272420199/article/details/105232637