Synchronized, what you must know (Part II: lock upgrade)

In Part I we discussed the fundamental difference between the use and the implementation of synchronized. This part continues with the lock upgrade process of synchronized. Since JDK 1.6, the JVM has applied fairly complicated optimizations to the synchronized keyword, all with the goal of improving its performance.

Test environment for this article:
JDK version: java version "1.8.0_221"
JDK mode: Java HotSpot(TM) 64-Bit Server VM (build 25.221-b11, mixed mode)
Operating system: Windows 10 Enterprise, x64, laptop
Memory: 8 GB DDR3
CPU: Intel i7-5500U 2.4GHz
JVM parameters: -Xmx512m -Xms512m -XX:+UseParallelGC

Object header

Before talking about the synchronized lock upgrade process, we first need to understand one thing: the object header. In the JVM implementation, every object has an object header, which the JVM uses to store metadata about the object. Part of the object header is officially called the Mark Word, and it is the key to the various lock implementations. On a 32-bit system the Mark Word is 32 bits of data, and on a 64-bit system it is 64 bits; it stores the object's hash code, the object's GC age, lock pointer information, and so on. In short, whether a lock object is currently held, and what kind of lock it holds, is recorded in the Mark Word.
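As a side note, the object header can be inspected from Java code. The sketch below is one way to do it, assuming the JOL (Java Object Layout) library org.openjdk.jol:jol-core is on the classpath; it just prints the layout of a plain object, including its header bytes, and the class name is made up for this example.

import org.openjdk.jol.info.ClassLayout;

public class ObjectHeaderDemo {
    public static void main(String[] args) {
        Object o = new Object();
        // prints the object's layout, including the header bytes (the Mark Word)
        System.out.println(ClassLayout.parseInstance(o).toPrintable());
    }
}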

Biased locking

The core idea of biased locking

If thread contention appears in the system, the bias of the thread holding the synchronization lock is revoked. In short, if a thread t1 acquires the lock first, it enters biased locking mode; when t1 requests the same lock again, it does not need to go through lock acquisition, which saves the acquisition operation and thereby improves performance. While the lock is biased, if another thread tries to acquire it, t1 exits biased locking mode.

Representation of a biased lock

When a lock object is in biased locking mode, its object header stores the following information:

[ID of the thread holding the biased lock | biased-lock timestamp (epoch) | object age | fixed to 1, indicating a biased lock | last two bits 01, meaning biased / unlocked]
[Thread_id | epoch | age | 1 | 01]

When t1 tries to acquire the lock again, the JVM can tell directly from this information whether the current thread holds the biased lock.
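To observe this in practice, a small sketch like the following can print the Mark Word before and while the lock is held. It assumes the JOL library mentioned above and a JDK 8 JVM started with -XX:+UseBiasedLocking -XX:BiasedLockingStartupDelay=0; the class name is made up for this example.

import org.openjdk.jol.info.ClassLayout;

public class BiasedLockDemo {
    public static void main(String[] args) {
        Object lock = new Object();
        // header before any locking: biasable, but no owning thread recorded yet
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        synchronized (lock) {
            // with biased locking active, the header should now record this thread's ID
            // together with the biased flag (1) and the lock bits 01 described above
            System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        }
    }
}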

Biased locking performance test

So what does the JVM's biased locking optimization actually achieve? Let's look at the practical effect with the following test code:

import java.util.List;
import java.util.Vector;

public class BiasedLockTest {                       // class name chosen for this example

    private static List<Integer> list = new Vector<>();

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < 10000000; i++) {
            list.add(i + 2);                        // Vector.add is synchronized
        }
        long end = System.currentTimeMillis();
        System.out.println(end - start);            // elapsed time in milliseconds
    }
}

To make the test results accurate, we need to configure two additional parameters: -XX:+UseBiasedLocking enables biased locking, and -XX:BiasedLockingStartupDelay=0 makes the JVM enable biased locking immediately at startup (if it is not set, the JVM by default only enables biased locking about 4 seconds after startup). The code above uses a Vector, initialized in advance as a static field; as is well known, the access methods of Vector are synchronized internally, so every call to add acquires the lock on the list object. I ran this ten times in a row, and the final output was roughly between 440 and 470. After disabling biased locking (-XX:-UseBiasedLocking), the same code run ten times produced values of roughly 670 to 690. Excluding other factors, the estimated performance gap is around 20%.
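For reference, the two runs compared above roughly correspond to the following startup parameter sets, combined with the memory settings from the test environment listed at the top:

-Xmx512m -Xms512m -XX:+UseBiasedLocking -XX:BiasedLockingStartupDelay=0	// run with biased locking enabled immediately
-Xmx512m -Xms512m -XX:-UseBiasedLocking						// run with biased locking disabled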

Problems with biased locking

Biased locking reduces the performance overhead of locking as much as possible when there is no multi-threaded contention for the resource. But one issue should not be ignored: when thread contention inside the application is intense and a large number of threads keep requesting the lock, the lock can hardly stay in biased mode. In that case biased locking not only fails to improve performance, it may actually degrade system performance. So if thread contention in the system is fierce, simply turn biased locking off: -XX:-UseBiasedLocking

Lightweight lock

If the biased lock fails (another thread contends for it), the JVM does not immediately suspend the thread; instead it revokes the bias, after which the object may be in one of two states: an unbiased, unlocked state, or an unbiased, locked (lightweight) state. When a thread holds the lightweight lock, the object's Mark Word looks like this: [ptr | 00]. The last two bits are 00, and ptr in the object header is simply a pointer into the stack space of the thread holding the lock; so, to decide whether a thread holds the lightweight lock of an object, the JVM only needs to check whether the pointer in the object header falls within the stack address range of the current thread.

Internally, the JVM actually implements the lightweight lock with a BasicObjectLock object, which contains a BasicLock object and a pointer to the locked object. In the JVM implementation, the object's original Mark Word is first copied into the BasicLock, and then an atomic CAS operation is used to write the address of the BasicLock into the object's Mark Word. If the CAS succeeds, the lock has been acquired; otherwise the lock acquisition fails, and a failed lightweight lock may then undergo lock inflation!
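The snippet below is only a conceptual sketch of this protocol in plain Java, not HotSpot's actual native implementation: a stand-in "mark word" slot is copied into a lock record and then swapped for a pointer to that record with a single CAS, and a failed CAS corresponds to the failure case that may lead to lock inflation. All class and field names here are invented for illustration.

import java.util.concurrent.atomic.AtomicReference;

// Conceptual sketch of the lightweight-lock protocol described above; not HotSpot code.
class LightweightLockSketch {

    // plays the role of the BasicLock record that lives on the locking thread's stack
    static final class LockRecord {
        Object displacedMarkWord;               // copy of the object's original mark word
    }

    // stand-in for the Mark Word slot in the object header
    private final AtomicReference<Object> markWord = new AtomicReference<>("unlocked-mark");

    boolean tryLightweightLock(LockRecord record) {
        Object original = markWord.get();
        record.displacedMarkWord = original;                   // step 1: copy the mark word into the record
        return markWord.compareAndSet(original, record);       // step 2: CAS the record's address into the header
        // a false return is the "lock failure" case that may trigger lock inflation
    }

    void lightweightUnlock(LockRecord record) {
        // restore the original mark word if this record still owns the lightweight lock
        markWord.compareAndSet(record, record.displacedMarkWord);
    }
}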

Spinlocks

After lock inflation, the thread will most likely be suspended directly at the operating system level, which means a switch from user mode to kernel mode, and the performance cost of that is relatively high! So after lock inflation the JVM makes one last effort to avoid suspending the thread: it uses a spin lock.

Spinning here means that if the current thread fails to get the lock, it is not suspended immediately; instead it executes an empty loop (the spin). After N iterations, the current thread requests the lock again. If it succeeds, execution continues normally; if it still cannot get the lock, the thread is suspended.
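As an illustration of the idea (not the JVM's internal logic), here is a minimal bounded spin lock sketched in plain Java: the thread busy-loops on a CAS a fixed number of times and only then falls back to parking, which stands in for the OS-level suspension mentioned above. All names are invented for this example.

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

// Conceptual sketch only: spin a bounded number of times, then fall back to blocking.
class BoundedSpinLockSketch {

    private static final int MAX_SPINS = 10;           // the article mentions a default of 10 spins
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        int spins = 0;
        while (!locked.compareAndSet(false, true)) {   // the "empty loop" (spin)
            if (++spins >= MAX_SPINS) {
                // in the real JVM the thread would be suspended here; a brief park
                // is used as a simple stand-in for that fallback
                LockSupport.parkNanos(1_000);
                spins = 0;
            }
        }
    }

    void unlock() {
        locked.set(false);
    }
}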

Problems with spin locks

Spin locks have different performance costs in different scenarios. For example, when lock contention is not very intense and the lock is held only briefly, a spin lock can effectively avoid operating-system-level suspension of threads, reducing the number of switches from user mode to kernel mode. On the other hand, when lock contention is fierce, the lock is held for a long time, or the number of threads is large, then once one thread acquires the lock, the other N threads competing for it will all be spinning; after a certain amount of time they still cannot acquire the lock and are suspended by the operating system anyway, so the spinning only wastes more time and CPU resources (many threads spinning emptily).

Spin lock settings

In JDK 6 the JVM provides a parameter to enable the spin lock, -XX:+UseSpinning, which can be used together with -XX:PreBlockSpin to set the number of spins of the spin lock (the default is 10 spins).

However, from JDK 7 onward the JVM abandoned these two parameters: the spin lock is enabled by default, and the JVM adjusts the number of spins automatically.

Lock elimination

Lock elimination is a more direct lock optimization technique. It is an optimization performed by the JVM's JIT compiler: while scanning the program's runtime context, the JVM directly removes locks on shared resources that cannot possibly be contended, saving unnecessary locking operations.

For example, the following code:

private static String t(String s1, String s2){
    StringBuffer buffer = new StringBuffer();
    buffer.append(s1);
    buffer.append(s2);
    return buffer.toString();
}

As we all know, StringBuffer can be described as the locked, thread-safe version of StringBuilder. But in the code above, the scope of the variable buffer is limited to the method body and it cannot escape the method, so there is obviously no need to do the string concatenation in a thread-safe way; for code like this, the lock can be optimized away by lock elimination.

Add the following startup parameters:

-server					// lock elimination requires Server mode
-Xcomp					// use compiled mode
-XX:+DoEscapeAnalysis 			// enable escape analysis
-XX:+EliminateLocks 			// enable lock elimination
-XX:BiasedLockingStartupDelay=0         // enable biased locking immediately at JVM startup
-XX:+UseBiasedLocking			// enable biased locking

A configuration closely related to lock elimination is escape analysis, so let me briefly explain it. Escape analysis roughly means checking whether a variable escapes a certain scope. In the example above, the variable buffer does not escape the scope of the method t, that is, no escape occurs; in this case the JVM can apply the lock elimination optimization to buffer. Conversely, consider the following version of t:

private static StringBuffer t(String s1, String s2){
    StringBuffer buffer = new StringBuffer();
    buffer.append(s1);
    buffer.append(s2);
    return buffer;
}

In the method above, the variable buffer escapes: it leaves the inside of t and is exposed to the outside, so the JVM cannot eliminate the lock operations on buffer.

In theory, performance should improve after enabling lock elimination. However, in my actual tests with the configuration above, there was no significant difference between enabling and disabling lock elimination, so I am not posting the test results here. Interested readers can test it themselves, and anyone who knows the reason is welcome to leave a comment to discuss.




Origin juejin.im/post/5defb228518825124c50d583