Understanding lock upgrades in one article (biased lock, lightweight lock, heavyweight lock)

Prerequisite knowledge: synchronized

Before Java SE 1.6, synchronized was known as a heavyweight lock. In Java SE 1.6, however, synchronized was optimized: biased locks and lightweight locks were introduced, along with a new lock storage structure and upgrade process, which reduced the performance cost of acquiring and releasing locks. In some cases synchronized is therefore no longer so "heavy".

A synchronized method uses the ACC_SYNCHRONIZED access flag. When the method is invoked, the invocation instruction checks whether the method's ACC_SYNCHRONIZED flag is set. If it is, the executing thread first acquires the synchronization lock, then executes the method body, and finally releases the lock when the method completes.

Synchronized blocks are implemented with the monitorenter and monitorexit instructions. After compilation, monitorenter is inserted at the beginning of the synchronized block, and monitorexit is inserted at the normal exit and on the exception path. The JVM guarantees that every monitorenter has a matching monitorexit. Every object has a monitor associated with it; while that monitor is held, the object is locked. When a thread executes monitorenter, it tries to acquire ownership of the object's monitor, i.e. it tries to acquire the object's lock.
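A minimal sketch of such a block (class and field names are illustrative, not from the article). Disassembling the compiled class with `javap -c MonitorDemo` shows the monitorenter instruction at the start of the block and two monitorexit instructions: one for the normal exit and one for the exception path.

```java
// Sketch: a synchronized block compiles to monitorenter/monitorexit.
public class MonitorDemo {
    private final Object lock = new Object();
    private int count = 0;

    public void increment() {
        synchronized (lock) {  // monitorenter is emitted here
            count++;
        }                      // monitorexit on normal exit, plus a second
                               // monitorexit on the hidden exception path
    }

    public int getCount() {
        return count;
    }
}
```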

Synchronized lock upgrade

The background of synchronized lock optimization:
using locks guarantees data safety, but it degrades performance;
lock-free code improves performance through thread parallelism, but it sacrifices thread safety. A synchronized lock determines which kind of lock it currently is from the lock flag bits in the object header's Mark Word.
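The trade-off above can be demonstrated with a small sketch (class and method names are illustrative): two threads each bump a shared counter 100,000 times. The synchronized method never loses an update, while the plain method may, because `count++` is a non-atomic read-modify-write.

```java
// Sketch: locking trades some performance for data safety.
public class LockTradeoff {
    private long unsafeCount = 0;
    private long safeCount = 0;

    void unsafeIncrement() { unsafeCount++; }           // racy: not atomic
    synchronized void safeIncrement() { safeCount++; }  // guarded by this object's monitor

    /** Runs two competing threads; returns the synchronized counter. */
    public long run() throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                unsafeIncrement();
                safeIncrement();
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // safeCount is always exactly 200,000; unsafeCount may be smaller.
        return safeCount;
    }
}
```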

Why does lock upgrading exist?

In Java 5 and earlier there was only synchronized, a heavyweight lock backed by a heavyweight operation at the operating-system level. Under fierce lock contention, performance drops, because every block and wake-up requires a transition between user mode and kernel mode.

Java threads are mapped to native operating-system threads. Blocking or waking a thread therefore requires the operating system to intervene, which means switching between user mode and kernel mode. This switch consumes significant system resources: user mode and kernel mode each have their own dedicated memory space, registers, and so on. Switching from user mode to kernel mode requires passing many variables and parameters to the kernel, and the kernel must also save register values and variables so that it can switch back to user mode and continue working after the kernel-mode call ends.

In early Java releases, synchronized was a heavyweight lock and was inefficient, because the monitor lock is implemented on top of the operating system's underlying Mutex Lock. Both suspending and resuming a thread require entering kernel mode, so blocking or waking a Java thread requires the OS to switch the CPU's state, and that state switch costs processor time. If the synchronized block is very simple, the switch may take longer than executing the user code itself, so the time cost is relatively high. This is why early synchronized was inefficient. After Java 6, lightweight locks and biased locks were introduced to reduce the performance cost of acquiring and releasing locks.

Biased lock

As the name suggests, it is biased toward the first thread that acquires the lock.

When a piece of synchronized code is repeatedly entered by the same thread with no other thread contending, that thread acquires the lock automatically on subsequent accesses. The biased lock exists for exactly this case: a single thread acquiring the same lock many times. Its purpose is to improve performance when only one thread is executing the synchronized code.

  • If only one thread accesses the synchronized code at run time and there is no multi-thread contention, the thread does not need to trigger full synchronization; a biased lock is applied instead. When the thread reaches the synchronized block again, it checks whether the thread currently holding the lock is itself; if so, it simply continues. Since the lock was never released, there is no need to lock again. If a single thread uses the lock from beginning to end, the biased lock adds almost no overhead and performance is very high (thread context switching is avoided). In other words, when there is no resource contention the biased lock eliminates the synchronization path, not even performing CAS operations, which directly improves program performance.
  • If another thread grabs the lock while the program is running, the thread holding the biased lock is suspended, the JVM revokes the bias, and the lock reverts to a standard lightweight lock. The biased lock improves performance by eliminating synchronization primitives when there is no resource contention; once a second thread joins the lock contention, the biased lock is upgraded to a lightweight lock (spin lock). Upgrading requires revoking the biased lock, and revocation triggers a stop-the-world (STW) pause.
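The first bullet's fast path can be modeled with a toy class (this is an illustrative model, not the JVM's real implementation): the owner's thread ID is installed once via CAS, and re-entry by the same thread only compares IDs, performing no atomic operation at all.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy model of biased locking: one CAS to install the bias, then plain
// reads for every re-entry by the owning thread.
public class ToyBiasedLock {
    private static final long UNBIASED = 0L;
    private final AtomicLong biasedOwner = new AtomicLong(UNBIASED);

    /** @return true if the current thread may enter without a heavier lock */
    public boolean tryBiasedEnter() {
        long self = Thread.currentThread().getId();
        long owner = biasedOwner.get();
        if (owner == self) {
            return true;  // already biased to us: plain read, no CAS
        }
        if (owner == UNBIASED) {
            // one-time CAS to install the bias toward this thread
            return biasedOwner.compareAndSet(UNBIASED, self);
        }
        return false;     // biased to another thread: would trigger revocation/upgrade
    }
}
```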

 

In fact, biased locking is enabled by default since JDK 1.6, but its activation is delayed (about 4 seconds after JVM startup),
so you need to add the parameter -XX:BiasedLockingStartupDelay=0 to make it take effect immediately when the program starts.

Enable biased locking:

-XX:+UseBiasedLocking -XX:BiasedLockingStartupDelay=0

Disable biased locking (after disabling, the program enters the lightweight lock state directly by default):

-XX:-UseBiasedLocking

 

Lightweight lock

Applicable scenario: multiple threads compete for the lock, but at any moment at most one thread is contending; there is no heavy lock contention and no thread blocking.

Threads do contend for the lock, but the conflicts when acquiring it are extremely short. The essence is a spin lock based on CAS.

Main purpose: in the absence of heavy multi-thread contention, use CAS to avoid the performance cost that a heavyweight lock incurs by using the operating system's mutex. Put plainly: spin first, and upgrade to blocking if spinning fails.
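The "spin on CAS instead of blocking" idea can be sketched as a minimal spin lock (illustrative only; a real JVM would also fall back to a heavyweight lock after spinning too long):

```java
import java.util.concurrent.atomic.AtomicReference;

// Minimal CAS-based spin lock: contenders loop on compareAndSet
// instead of blocking in the kernel.
public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread self = Thread.currentThread();
        // Spin until we swap the owner from null to ourselves.
        while (!owner.compareAndSet(null, self)) {
            Thread.onSpinWait();  // busy-wait hint to the CPU (JDK 9+)
        }
    }

    public void unlock() {
        // Only the owner can release; CAS guards against stray unlocks.
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```

Two threads incrementing a shared counter under this lock never lose an update, because only the thread that wins the CAS enters the critical section.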

Upgrade timing: when biased locking is disabled, or multiple threads compete for a biased lock, the biased lock is upgraded to a lightweight lock.

Suppose thread A has already acquired the lock and thread B then tries to grab the same object's lock. Since the object's lock is held by A and is currently a biased lock, thread B finds that the thread ID in the object header's Mark Word is not its own (it is A's), so B performs a CAS operation hoping to obtain the lock.

At this point, thread B's CAS has two possible outcomes:

If the CAS succeeds, the thread ID in the Mark Word is replaced with B's own ID (A→B) and the lock is re-biased to the new thread (the biased lock is handed over, which is equivalent to the current thread "being" released from the lock). The lock remains in the biased state: thread A is done, and thread B takes over.

If the CAS fails, the biased lock is upgraded to a lightweight lock (the biased flag is set to 0 and the lock flag to 00). The lightweight lock is then held by the thread that originally held the biased lock, which continues executing the synchronized code, while the competing thread B spins, waiting to acquire the lightweight lock.

Before Java 6

Spinning is enabled by default. The spin count defaults to 10, and spinning also stops if the number of spinning threads exceeds half the number of CPU cores.

After Java 6

Spinning became adaptive: the number of spins is no longer fixed, but is decided from the state of the thread that owns the lock and the outcome of the previous spin on the same lock.

If a spin succeeds, the maximum spin count for the next attempt increases, because the JVM assumes that since it succeeded last time, it will likely succeed again. Conversely, if spins rarely succeed, the spin count is reduced, or spinning is skipped entirely, to avoid wasting CPU cycles.

Heavyweight lock

Applicable scenario: a large number of threads compete for the lock and conflicts are frequent.

Heavyweight lock principle

The synchronized heavyweight lock in Java is implemented by entering and exiting a Monitor object. At compile time, a monitorenter instruction is inserted at the start of the synchronized block and a monitorexit instruction at the end.

When a thread executes monitorenter, it tries to acquire ownership of the object's Monitor. If it succeeds, it acquires the lock and the current thread's ID is stored in the Monitor's owner field; the object stays locked, and no other thread can obtain the Monitor until the owning thread exits the synchronized block.
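Monitor ownership can be observed from Java code with `Thread.holdsLock` (the class and method names below are illustrative): inside the synchronized block the current thread owns the monitor; after monitorexit it no longer does.

```java
// Sketch: while a thread is inside a synchronized block it owns the
// object's monitor; Thread.holdsLock reports that ownership.
public class MonitorOwnership {
    /** @return { ownership inside the block, ownership after the block } */
    public static boolean[] check() {
        Object monitor = new Object();
        boolean inside, outside;
        synchronized (monitor) {
            inside = Thread.holdsLock(monitor);  // true: we hold the monitor
        }
        outside = Thread.holdsLock(monitor);     // false: monitorexit released it
        return new boolean[] { inside, outside };
    }
}
```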


Origin: blog.csdn.net/m0_62436868/article/details/129909296