Four optimizations of synchronized in JDK 1.6

The core optimizations of synchronized fall into the following four categories:

1. Lock expansion (lock upgrading)
2. Lock elimination
3. Lock coarsening
4. Adaptive spin locks

1. Lock expansion

Lock expansion refers to the process by which a synchronized lock upgrades from the lock-free state to a biased lock, then to a lightweight lock, and finally to a heavyweight lock. This process is called lock expansion or lock upgrading.
Before JDK 1.6, synchronized was always a heavyweight lock: releasing and acquiring it required switching from user mode to kernel mode, and that switch is relatively expensive. With the lock-expansion mechanism, synchronized can also be in the lock-free, biased-lock, or lightweight-lock state, so most concurrent scenarios no longer need the user-mode-to-kernel-mode switch. This greatly improves the performance of synchronized.
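The lock states themselves live in the object header's mark word inside the JVM and cannot be observed from plain Java code. Purely as an illustration of the one-way upgrade order described above, here is a toy sketch (the class and method names are our own, not JVM APIs):

```java
// Toy model of the lock-upgrade order. The real states are JVM-internal;
// this enum only illustrates that upgrades move in one direction while
// contention lasts, never backwards.
public class LockState {
    enum State { UNLOCKED, BIASED, LIGHTWEIGHT, HEAVYWEIGHT }

    // An upgrade is only valid if it moves forward along the path.
    static boolean isValidUpgrade(State from, State to) {
        return to.ordinal() > from.ordinal();
    }

    public static void main(String[] args) {
        System.out.println(isValidUpgrade(State.BIASED, State.LIGHTWEIGHT)); // true
        System.out.println(isValidUpgrade(State.HEAVYWEIGHT, State.BIASED)); // false
    }
}
```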

2. Lock elimination

Many people know about the lock-expansion mechanism in synchronized but know little about the other three optimizations, which can cost them in interviews, so this article covers each of them in turn.

Lock elimination means that when the JVM can prove a piece of code has no possibility of being shared or contended across threads, it removes the synchronization lock covering that code, improving program performance.

Lock elimination is backed by the data from escape analysis. Methods such as StringBuffer's append() or Vector's add() can often have their locks eliminated, as in the following code:

public String method() {
    // sb is a local variable and never escapes this method,
    // so the JIT can eliminate the lock inside append()
    StringBuffer sb = new StringBuffer();
    for (int i = 0; i < 10; i++) {
        sb.append("i:" + i);
    }
    return sb.toString();
}

Decompiling the generated bytecode shows that the string concatenation "i:" + i is compiled into calls on an unsynchronized, thread-unsafe StringBuilder. More importantly, because the StringBuffer variable sb is a local variable that never escapes the method, escape analysis allows the JIT to apply lock elimination to its append() calls, running the code without locking and speeding up the program.
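For contrast, here is a hypothetical sketch (class and field names are ours) of a case where escape analysis cannot help: the StringBuffer is a shared field, so it escapes the method, other threads may reach it, and the JIT must keep the synchronization in append(). Whether elimination actually happens in the local case always depends on the JIT compiler in use.

```java
// Sketch contrasting an escaping StringBuffer with a non-escaping one.
public class EscapeExample {
    // Shared field: escapes the method, so append()'s lock must be kept.
    private final StringBuffer shared = new StringBuffer();

    public String sharedAppend() {
        for (int i = 0; i < 10; i++) {
            shared.append("i:" + i); // lock kept: other threads can reach 'shared'
        }
        return shared.toString();
    }

    // Local variable: never escapes, so the JIT is free to eliminate the lock.
    public String localAppend() {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < 10; i++) {
            sb.append("i:" + i);
        }
        return sb.toString();
    }
}
```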

3. Lock coarsening

Lock coarsening refers to merging a series of consecutive lock and unlock operations on the same object into a single lock with a larger scope.

We usually hear that lock "refinement" improves performance: shrink the scope of each lock as much as possible so that, when contention occurs, waiting threads can acquire the lock sooner. So how can lock coarsening, which does the opposite, also improve performance?

Lock refinement is indeed right in most cases, but a long run of consecutive lock and unlock operations carries unnecessary overhead of its own and hurts execution efficiency, as in this code:

public String method() {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < 10; i++) {
        // pseudocode: acquire the lock
        sb.append("i:" + i);
        // pseudocode: release the lock
    }
    return sb.toString();
}

Setting compiler optimizations aside: if the lock is taken inside the for loop, its scope is very small, but every iteration pays for one lock and one unlock, which performs poorly. If we instead place a single lock around the whole loop, this code, which operates on the same object throughout, performs much better, as the following pseudocode shows:

public String method() {
    StringBuilder sb = new StringBuilder();
    // pseudocode: acquire the lock
    for (int i = 0; i < 10; i++) {
        sb.append("i:" + i);
    }
    // pseudocode: release the lock
    return sb.toString();
}

This is the role of lock coarsening: when the JVM detects a series of consecutive lock and unlock operations on the same object, it merges them into one lock with a larger scope, improving the program's execution efficiency.
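The transformation above can be written out by hand with explicit synchronized blocks (the JIT performs it automatically for back-to-back locks on the same object). This sketch uses names of our own; both methods produce the same string, but the second enters the monitor once instead of ten times:

```java
// Hand-written illustration of lock coarsening with explicit monitors.
public class CoarsenExample {
    public static String fineGrained() {
        StringBuilder sb = new StringBuilder();
        Object lock = new Object();
        for (int i = 0; i < 10; i++) {
            synchronized (lock) {   // lock/unlock on every iteration
                sb.append("i:").append(i);
            }
        }
        return sb.toString();
    }

    public static String coarsened() {
        StringBuilder sb = new StringBuilder();
        Object lock = new Object();
        synchronized (lock) {       // one lock/unlock around the whole loop
            for (int i = 0; i < 10; i++) {
                sb.append("i:").append(i);
            }
        }
        return sb.toString();
    }
}
```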

4. Adaptive spin lock

A spin lock tries to acquire the lock by looping, as in the following pseudocode:

// pseudocode: keep trying until the lock is acquired
while (!tryLock()) {
    // busy-wait (spin) until the lock becomes free
}

The advantage of a spin lock is that it avoids suspending and resuming threads: both operations require a switch from user mode to kernel mode, which is relatively slow, so spinning can, to a certain extent, avoid the performance overhead of thread suspension and resumption.

However, spinning for a long time without acquiring the lock also wastes resources, so a spin lock usually has a fixed spin limit to bound the cost of endless spinning. The spin lock in synchronized is more "intelligent" than that: it is an adaptive spin lock.

With an adaptive spin lock, the number of spins is no longer a fixed value but changes dynamically, based on how the previous spin on that lock went. If the last spin acquired the lock, the next attempt is likely to succeed too, so the spin count is increased. If the last spin failed to acquire the lock, this one will probably fail as well, so to avoid wasting resources the thread spins less, or not at all. Simply put: if a thread's spin succeeds, the next spin budget grows; if it fails, the next budget shrinks.
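The behavior described above can be sketched as a toy adaptive spin lock: the spin budget grows after a successful acquisition and shrinks when spinning runs out. All names here are our own, and this is only a sketch of the idea, not the JVM's actual implementation (which works inside the object's mark word and parks threads instead of yielding):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Toy adaptive spin lock: the spin budget adapts to recent history.
public class AdaptiveSpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);
    private volatile int spinLimit = 100; // starting spin budget (heuristic)

    public void lock() {
        int spins = 0;
        while (!locked.compareAndSet(false, true)) {
            if (++spins >= spinLimit) {
                // Spinning ran out: shrink the next budget and yield
                // (a real lock would park the thread here instead).
                spinLimit = Math.max(10, spinLimit / 2);
                spins = 0;
                Thread.yield();
            }
        }
        // Acquired the lock: grow the budget, betting that the lock is
        // usually held only briefly.
        spinLimit = Math.min(10_000, spinLimit * 2);
    }

    public void unlock() {
        locked.set(false);
    }

    public boolean isLocked() {
        return locked.get();
    }
}
```

Note that the budget updates are only a heuristic; they are not atomic with respect to each other, which is acceptable here because they merely tune how long threads spin.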

Source: blog.csdn.net/weixin_45163291/article/details/130963413