Differences and Analysis of Biased Locks, Lightweight Locks, and Heavyweight Locks

The JVM makes many optimizations to built-in locks in exchange for performance, and the escalating lock allocation strategy is one of them. Understanding the fundamental problems that biased locks, lightweight locks, and heavyweight locks are meant to solve, as well as how the various locks are allocated and expanded (inflated), helps in writing and optimizing lock-based concurrent programs.

The allocation and expansion process of built-in locks is fairly complicated. Limited by time and energy, this part of the article is compiled from multiple sources on the Internet; it also serves as a reference for later analysis of the JVM source code. Readers who already have a basic understanding of locks at each level can skip this article.

1 Fundamental problems hidden under built-in locks

The built-in lock is the most convenient thread synchronization tool provided by the JVM. It is used by adding the synchronized keyword to a code block or method declaration. Built-in locks simplify the concurrency model; moreover, as the JVM is upgraded, programs directly enjoy the JVM's lock optimizations without any code changes. Starting from the simple heavyweight lock, the gradually expanding lock allocation strategy applies various optimization techniques to solve the fundamental problems hidden beneath built-in locks.
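As a quick reference, here is a minimal example of taking a built-in lock (the Counter class is hypothetical, purely for illustration):

```java
public class Counter {
    private int count = 0;

    // Each call acquires the monitor lock of `this` before entering the body.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}
```

Which level of lock (biased, lightweight, or heavyweight) actually backs these synchronized methods is decided by the JVM at runtime, which is exactly what the rest of this article discusses.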

1.1 Heavyweight lock

Built-in locks are abstracted as monitor locks in Java. Prior to JDK 1.6, monitor locks could be thought of as directly corresponding to mutexes in the underlying operating system. The cost of this synchronization method is very high, including switching between kernel mode and user mode caused by system calls, thread switching caused by thread blocking, and so on. Therefore, this lock was later called a "heavyweight lock".

1.1.1 Spinlocks

First of all, the switching between kernel mode and user mode is not easy to optimize. However, spin locks can reduce the thread switching (suspending and resuming threads) caused by thread blocking.

If the granularity of a lock is small, its holding time is usually short (the exact holding time cannot be known in advance, but it is reasonable to assume that many locks satisfy this property). For threads competing for such locks, the cost of a thread switch caused by blocking is comparable to the lock holding time, so reducing the thread switches caused by blocking can yield a large performance improvement. The details are as follows:

  • When the current thread fails to compete for the lock, it would normally block itself

  • Instead of blocking immediately, it spins (busy-waits, e.g. in a bounded empty for loop) for a while

  • While spinning, it keeps re-competing for the lock

  • If the lock is acquired before the spin ends, the acquisition succeeds; otherwise, the thread blocks itself once the spin ends

If the old owner releases the lock within the spin time, the current thread does not need to block itself (and does not need to be resumed when the lock is released later), saving one thread switch.
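The spin-then-block procedure above can be sketched with a toy lock. This is only an illustrative model, not how HotSpot implements it; the SPIN_LIMIT constant and the yield-based fallback are assumptions:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Toy spin-then-block lock, modeling the steps listed above.
public class SpinThenBlockLock {
    private static final int SPIN_LIMIT = 100; // illustrative spin budget
    private final AtomicBoolean held = new AtomicBoolean(false);

    public void lock() {
        // Phase 1: spin for a bounded number of attempts, re-competing each time.
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (held.compareAndSet(false, true)) {
                return; // acquired during the spin: no thread switch was needed
            }
            Thread.onSpinWait(); // hint to the CPU that we are busy-waiting
        }
        // Phase 2: spin failed; fall back to "blocking" (yield stands in for
        // actually suspending the thread on a mutex).
        while (!held.compareAndSet(false, true)) {
            Thread.yield();
        }
    }

    public void unlock() {
        held.set(false);
    }

    public boolean isHeld() {
        return held.get();
    }
}
```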

The condition "the lock is held for a relatively short time" can be relaxed. In fact, as long as the lock competition time is relatively short (for example, thread 2 competes for the lock just as thread 1 is about to release it), spinning still has a good chance of acquiring the lock. This usually occurs in scenarios where locks are held for a long time but competition is not intense.

Shortcomings:

  • On a single-core processor there is no real parallelism. If the current thread does not block itself, the old owner cannot run and the lock will never be released; in that case any amount of spinning is a waste. More generally, if there are many threads and few processors, spinning also causes a lot of unnecessary waste.

  • Spin locks occupy the CPU. For compute-intensive tasks this optimization is usually not worth the cost; reducing the use of locks is a better choice.

  • If the lock competition time is relatively long, spinning usually fails to acquire the lock, and the CPU time consumed by spinning is wasted. This typically occurs in scenarios where the lock is held for a long time and competition is fierce; in such cases the spin lock should be actively disabled.

Use the -XX:-UseSpinning parameter to turn off spin-lock optimization, and the -XX:PreBlockSpin parameter to modify the default spin count.

1.1.2 Adaptive spin

Adaptive means that the spin time is no longer fixed but is determined by the previous spin time on the same lock and the state of the lock's owner:

  • If, on the same lock object, a spin wait has just successfully acquired the lock, and the thread holding the lock is running, the virtual machine considers another successful spin likely and allows the spin to wait relatively longer, say 100 loop iterations.

  • Conversely, if spinning on a certain lock rarely succeeds, then future acquisitions of that lock may shorten the spin time or even skip the spin entirely, to avoid wasting processor resources.

Adaptive spin addresses the problem of "uncertain lock competition time".

It is difficult for the JVM to know the exact lock competition time, and handing that analysis over to the user would violate the original design intention of built-in locks. Adaptive spin assumes that different threads hold the same lock object for roughly the same time and that the degree of competition tends to be stable, so the next spin time can be adjusted according to the time and result of the previous spin.
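The adjustment idea can be sketched as a per-lock spin budget. The starting value of 100, the doubling/halving policy, and the bounds below are illustrative assumptions, not HotSpot's actual heuristics:

```java
// Toy per-lock spin budget: grow after a successful spin, shrink after a failure.
public class AdaptiveSpinBudget {
    private static final int MIN = 0, MAX = 1000; // assumed bounds
    private int budget = 100;                     // assumed starting value

    public int current() {
        return budget;
    }

    public void record(boolean spinSucceeded) {
        if (spinSucceeded) {
            budget = Math.min(MAX, budget * 2); // spinning paid off: wait longer next time
        } else {
            budget = Math.max(MIN, budget / 2); // spinning wasted CPU: cut it back
        }
    }
}
```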

Shortcoming:

However, adaptive spin cannot completely solve this problem. If the default spin count is set unreasonably (too high or too low), the adaptive process will have difficulty converging to an appropriate value.

1.2 Lightweight locks

The goal of spin locks is to reduce the cost of thread switching. If lock competition is fierce, we must rely on heavyweight locks to block the threads that lose the competition; but if there is no actual lock competition at all, applying for a heavyweight lock is a waste. The goal of lightweight locks is to reduce the performance cost of using heavyweight locks when there is no actual competition, including the kernel/user mode switches caused by system calls and the thread switches caused by thread blocking.

As the name implies, lightweight locks are defined relative to heavyweight locks. When using a lightweight lock, there is no need to apply for a mutex; the thread simply uses CAS to update part of the object's Mark Word to point to a Lock Record in its thread stack. If the update succeeds, the lightweight lock is acquired and the lock state is recorded as lightweight; otherwise, some thread has already acquired the lightweight lock and there is now lock competition (so a lightweight lock is no longer suitable), and the lock expands into a heavyweight lock.

Mark Word is part of the object header; each thread has its own thread stack (virtual machine stack), which records the basic information of threads and function calls. The two belong to the basic content of the JVM and will not be introduced here.

Of course, since lightweight locks naturally target scenarios without lock competition, if there is competition but it is not intense, spin locks can still be used as an optimization, expanding to a heavyweight lock only after the spin fails.
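The acquire-or-expand decision of a lightweight lock comes down to a single CAS. Below is a toy model only: the real Mark Word is a bit field inside the object header rather than an AtomicReference, and the LockRecord class here is a stand-in for the real lock record in the thread stack:

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy model of lightweight locking via one CAS on the "mark word".
public class LightweightLockModel {
    // Stand-in for the lock record allocated in the owning thread's stack frame.
    public static class LockRecord { }

    // null = unlocked; non-null = points to the owner's lock record.
    private final AtomicReference<LockRecord> markWord = new AtomicReference<>();

    /** Returns true if the lightweight lock was acquired; false means another
     *  thread holds it, i.e. there is competition and the lock would expand
     *  (after optional spinning) into a heavyweight lock. */
    public boolean tryLightweightLock(LockRecord record) {
        return markWord.compareAndSet(null, record);
    }

    public void release(LockRecord record) {
        markWord.compareAndSet(record, null);
    }
}
```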

Shortcoming:

Similar to spin locks:

  • If lock competition is fierce, lightweight locks quickly expand into heavyweight locks, and the work of maintaining the lightweight lock becomes a waste.

1.3 Biased lock

In the absence of actual competition, some scenarios can be optimized further. If not only is there no actual competition, but a single thread uses the lock from beginning to end, then even maintaining a lightweight lock is wasteful. The goal of biased locks is to reduce the performance cost of lightweight locks when there is no competition and only one thread uses the lock. A lightweight lock requires at least one CAS on every acquisition and release, whereas a biased lock needs only one CAS, at initialization.

"Biased" means the lock assumes that only the first thread to apply for it will ever use it (no other thread will apply for the lock). Therefore, it suffices to CAS the owner into the Mark Word (essentially also an update, but with an initial expected value of null). If the record succeeds, the biased lock is acquired and the lock state is recorded as biased. From then on, as long as the current thread equals the owner, the lock is obtained directly at zero cost; otherwise, another thread is competing, and the lock expands into a lightweight lock.

Biased locks cannot be optimized with spin locks, because as soon as another thread applies for the lock, the assumption behind the biased lock is broken.
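The one-time CAS of biased locking can likewise be modeled in a few lines. Again a toy sketch: the real JVM stores a thread id inside the Mark Word, and the names here are illustrative:

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy model of biased locking: one CAS installs the owner; afterwards the
// owner re-enters with a plain read, at zero cost.
public class BiasedLockModel {
    private final AtomicReference<Thread> biasedOwner = new AtomicReference<>();

    /** true  -> lock held (newly biased, or already biased to this thread)
     *  false -> the bias belongs to another thread: the assumption is broken
     *           and the lock would expand into a lightweight lock. */
    public boolean tryBiasedLock() {
        Thread me = Thread.currentThread();
        if (biasedOwner.get() == me) {
            return true; // zero-cost re-entry for the biased owner
        }
        // expected == null: only the very first applicant can succeed.
        return biasedOwner.compareAndSet(null, me);
    }
}
```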

Shortcoming:

Similarly, if other threads do apply for the lock, the biased lock quickly expands into a lightweight lock.

However, this side effect is much smaller.
If desired, biased locking can be disabled with the parameter -XX:-UseBiasedLocking (it is enabled by default).

1.4 Summary

See below for the detailed process of biased lock, lightweight lock, heavyweight lock allocation and expansion. It will involve some knowledge of Mark Word and CAS.

Biased locks, lightweight locks, and heavyweight locks are suitable for different concurrency scenarios:

  • Biased lock: There is no actual competition, and only the first thread to apply for a lock will use the lock in the future.

  • Lightweight lock: no actual competition, multiple threads use locks alternately; short-term lock competition is allowed.

  • Heavyweight lock: There is actual competition, and the lock competition time is long.

In addition, if the lock competition time is short, spin locks can be used to further optimize the performance of lightweight locks and heavyweight locks to reduce thread switching.

If the degree of lock competition increases gradually (rather than abruptly), then expanding step by step from biased locks to heavyweight locks can improve the overall performance of the system.

2 Lock allocation and expansion process

To reiterate, this part is mainly compiled from multiple sources on the Internet. The core is a flowchart organized by another author, which is quite detailed and basically logically consistent.

Some basic problems and solutions in the use of built-in locks have been described above, and the implementation principles have been briefly mentioned. The detailed lock allocation and expansion process is as follows:

There is one questionable point in the figure:
according to the flow shown, if the lock is found to have expanded to a heavyweight lock, the current thread is blocked directly on the mutex.
However, one of the great benefits of spin locks is reducing the overhead of thread switching. There is no need to block the current thread immediately here; as with a lightweight lock, it could spin for a while and block only after the spin fails.

Two points in particular:

  • When CAS records the owner, expected == null and newValue == ownerThreadId. Therefore, only the first thread that applies for the biased lock can succeed; every later thread inevitably fails (including threads that detect the biased state and attempt the owner CAS concurrently).

  • A built-in lock can only expand step by step in the order biased lock, lightweight lock, heavyweight lock; it cannot "shrink" back. This rests on another JVM assumption: "once the assumption behind an upper-level lock is broken, that assumption is considered never to hold again in the future".
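The "no shrinking" rule above can be expressed as a tiny state machine (an illustrative sketch, not JVM code):

```java
// Lock states in their expansion order; a lock may only move forward.
enum LockState { BIASED, LIGHTWEIGHT, HEAVYWEIGHT }

class InflatingLock {
    private LockState state = LockState.BIASED;

    LockState current() {
        return state;
    }

    void expandTo(LockState target) {
        // The ordinal comparison enforces "no shrinking": requests to move
        // backwards are simply ignored.
        if (target.ordinal() > state.ordinal()) {
            state = target;
        }
    }
}
```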

In addition, when a heavyweight lock is released, one blocked thread needs to be woken up. This part of the logic is basically the same as in ReentrantLock.

Simplifying the above diagram gives the following:


Origin blog.csdn.net/qq_37284798/article/details/129746291