Overview of various locks

Optimistic locking and pessimistic locking

Pessimistic locking takes a conservative attitude toward data being modified by the outside world: it assumes the data is likely to be modified by other threads, so it locks the data before processing begins and keeps it locked for the entire duration of processing.

The implementation of pessimistic locking usually relies on the locking mechanism provided by the database; that is, an exclusive lock is placed on the record in the database before the data is operated on.

If the lock cannot be acquired, the data is being modified by another thread, and the current thread either waits or throws an exception.

If the lock is acquired successfully, the record is operated on, and the exclusive lock is released after the transaction commits.
[Image: updateEntry method using pessimistic locking — calling query at code (1), modifying the entry at code (2), and calling update at code (3)]
Assume that the updateEntry, query, and update methods are all advised by a transaction aspect, with transaction propagation set to REQUIRED.

When the updateEntry method executes, if the upper-layer caller has not already opened a transaction, one is opened immediately, and then code (1) is executed.

Code (1) calls the query method, which queries a record from the database based on the specified id.

Since the transaction propagation is REQUIRED, query does not open a new transaction but joins the transaction opened by updateEntry. That is, the work done in query is not committed until updateEntry finishes and commits its transaction, which means the record lock is held until updateEntry ends.

Code (2) modifies the obtained record, and code (3) writes the modified content back to the database. Likewise, the update method in code (3) does not open a new transaction but joins updateEntry's transaction.

That is, the updateEntry, query, and update methods share the same transaction.

When multiple threads call the updateEntry method at the same time with the same id, only one thread succeeds in executing code (1); the others block, because only one thread at a time can obtain the lock on the corresponding record. Until the lock-holding thread releases the lock (after updateEntry finishes and the transaction commits), the other threads must wait — that is, only one thread can modify the record at a time.
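The pictured code is not reproduced here, but the flow it describes can be sketched in plain Java (class and method names are made up for illustration): the per-record database lock taken by the query is simulated with a ReentrantLock per id, so only one thread at a time can run the query-modify-update sequence for a given record.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the pessimistic flow: the database's per-record exclusive
// lock is simulated with one ReentrantLock per id.
public class PessimisticDemo {
    private final Map<Long, ReentrantLock> rowLocks = new ConcurrentHashMap<>();
    private final Map<Long, String> table = new ConcurrentHashMap<>();

    public PessimisticDemo() {
        table.put(1L, "initial");
    }

    public void updateEntry(long id, String newValue) {
        ReentrantLock lock = rowLocks.computeIfAbsent(id, k -> new ReentrantLock());
        lock.lock();                      // (1) like "select ... for update": take the record lock
        try {
            String entry = table.get(id); // query the record
            entry = newValue;             // (2) modify the in-memory copy
            table.put(id, entry);         // (3) write it back
        } finally {
            lock.unlock();                // released when the "transaction" commits
        }
    }

    public String query(long id) {
        return table.get(id);
    }
}
```

While one thread is inside the try block for id=1, any other thread calling updateEntry(1L, ...) blocks on lock.lock(), mirroring the behavior described above.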

Optimistic locking is the counterpart of pessimistic locking. It assumes that, under normal circumstances, the data will not conflict, so it does not take an exclusive lock before accessing the record; instead, it detects conflicts only when the data is submitted for update.

Specifically, the caller decides what to do based on the number of rows the update affects. Rewritten to use optimistic locking, the example above looks like this.
[Image: updateEntry rewritten with optimistic locking — query at code (1), modification at code (2), and a version-checked update at code (3)]
When multiple threads call the updateEntry method with the same id, they can all execute code (1) at the same time, each obtaining the record for that id into its own thread-local stack, and then all execute code (2) at the same time to modify their own copies. After modification, the attribute values in the different threads' entries may differ.

Then multiple threads can execute code (3) at the same time. Code (3) adds the condition version=#{version} to the WHERE clause of the update statement, and sets version to its original value plus 1 in the SET clause. The meaning is: if a record with id=#{id} and version=#{version} exists in the database, update it and increment its version — somewhat like a CAS operation.

Assuming that multiple threads execute updateEntry at the same time with the same id, the Entry each obtains at code (1) is the same, including the same version value (assume version=0 here).

When multiple threads execute code (3), the update statement itself is atomic. If thread A's update succeeds, the version of the record changes from 0 to 1. When the other threads then execute code (3), they find no record with version=0 in the database, so the update returns 0 affected rows.
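The effect of the version-checked update can be sketched in plain Java (OptimisticTable and its methods are made up for illustration; synchronized stands in for the atomicity the database guarantees for a single UPDATE statement):

```java
// Sketch of the conditional write "update entry set value=?, version=version+1
// where id=? and version=?", which returns the number of affected rows.
public class OptimisticTable {
    private long version = 0;
    private String value = "initial";

    // Mimics the atomic conditional UPDATE.
    public synchronized int update(String newValue, long expectedVersion) {
        if (version != expectedVersion) {
            return 0;                      // no row matched "version = #{version}"
        }
        value = newValue;
        version = expectedVersion + 1;     // version becomes the old value plus 1
        return 1;                          // one row affected
    }

    public synchronized long version() { return version; }
    public synchronized String value() { return value; }
}
```

If two callers both read version=0 and then both call update with expectedVersion=0, the first returns 1 and the second returns 0, exactly as in the thread-A scenario above.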

At the business level, a return value of 0 tells you the update did not succeed. There are then two options: the business can do nothing, or it can retry. If you choose to retry, updateEntry can be modified as follows.

[Image: updateEntry with a bounded retry loop — retryNum attempts, re-querying at code (1.1) and updating at code (3.1)]
The above code uses retryNum to set the number of retries after a failed update. If code (3.1) returns 0, the record obtained at code (1.1) has already been modified by another thread; the loop then repeats, re-reading the latest data at code (1.1) and attempting the update at code (3.1) again.

This is similar to a CAS spin, except that instead of looping indefinitely, the number of attempts is bounded.
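The bounded-retry pattern can be sketched as a self-contained example (the in-memory store and all names are made up; synchronized again stands in for the atomicity of a single UPDATE):

```java
// Sketch of the bounded-retry pattern: each attempt re-reads the latest
// version, then tries the conditional update, at most retryNum times.
public class RetryDemo {
    private long version = 0;
    private String value = "initial";

    private synchronized long readVersion() {  // code (1.1): fetch the latest data
        return version;
    }

    // Mimics the atomic conditional UPDATE; returns the affected-row count.
    private synchronized int conditionalUpdate(String v, long expected) {
        if (version != expected) return 0;
        value = v;
        version++;
        return 1;
    }

    public boolean updateEntry(String newValue, int retryNum) {
        for (int i = 0; i < retryNum; i++) {
            long snapshot = readVersion();                    // (1.1) re-read
            if (conditionalUpdate(newValue, snapshot) == 1) { // (3.1) try the update
                return true;                                  // success
            }
            // 0 rows affected: another thread got there first; loop and retry
        }
        return false; // still failing after retryNum attempts
    }

    public synchronized String value() { return value; }
}
```

Like a CAS spin, the loop keeps retrying on conflict, but gives up after retryNum attempts instead of spinning forever.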

Optimistic locking does not use the database's locking mechanism; it is generally implemented with a version column in the table or with business status fields. Because optimistic locking takes no lock until commit, it cannot cause deadlock.

Fair locks and unfair locks

By the preemption mechanism threads use to acquire them, locks can be divided into fair and unfair locks. With a fair lock, threads acquire the lock in the order they requested it: the earliest requester acquires it first. With an unfair lock, a late arrival can jump the queue at runtime — first come is not necessarily first served.
ReentrantLock provides fair and unfair lock implementations.

  • Fair lock: ReentrantLock pairLock = new ReentrantLock(true).
  • Unfair lock: ReentrantLock pairLock = new ReentrantLock(false). If no argument is passed to the constructor, the default is an unfair lock.
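The two constructions can be checked directly: isFair() is a real ReentrantLock method that reports which policy an instance uses.

```java
import java.util.concurrent.locks.ReentrantLock;

// Demonstrates the fair, unfair, and default ReentrantLock constructions.
public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock fairLock = new ReentrantLock(true);    // fair
        ReentrantLock unfairLock = new ReentrantLock(false); // unfair
        ReentrantLock defaultLock = new ReentrantLock();     // default: unfair

        System.out.println(fairLock.isFair());    // true
        System.out.println(unfairLock.isFair());  // false
        System.out.println(defaultLock.isFair()); // false
    }
}
```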

For example, suppose thread A already holds the lock; when thread B requests it, B is suspended. When A releases the lock, if thread C also happens to be requesting it at that moment, then under the unfair policy either B or C may get the lock, depending on the thread scheduling policy, with no further intervention needed. Under the fair policy, C must be suspended so that B, which requested first, acquires the lock.
If there is no fairness requirement, prefer unfair locks, because fairness brings extra performance overhead.

Exclusive locks and shared locks

Depending on whether the lock can only be held by a single thread or can be held by multiple threads, locks can be divided into exclusive locks and shared locks.

Exclusive locks ensure that only one thread can obtain the lock at any time, and ReentrantLock is implemented in an exclusive manner.

Shared locks can be held by multiple threads at the same time, such as ReadWriteLock, which allows a resource to be read by multiple threads at the same time.
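A small sketch of the difference, using ReentrantReadWriteLock (an implementation of ReadWriteLock) and its tryLock methods to probe whether the lock can be taken:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// The read (shared) lock can be held by several threads at once, while
// the write (exclusive) lock excludes everyone else.
public class SharedLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

        rw.readLock().lock();                      // main thread holds a read lock
        boolean[] results = new boolean[2];
        Thread t = new Thread(() -> {
            results[0] = rw.readLock().tryLock();  // second reader succeeds
            if (results[0]) rw.readLock().unlock();
            results[1] = rw.writeLock().tryLock(); // writer fails while a reader holds the lock
            if (results[1]) rw.writeLock().unlock();
        });
        t.start();
        t.join();
        rw.readLock().unlock();

        System.out.println(results[0]); // true: the read lock is shared
        System.out.println(results[1]); // false: the write lock is exclusive
    }
}
```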

An exclusive lock is a form of pessimistic locking. Requiring a mutex for every access limits concurrency: even though read operations do not affect data consistency, an exclusive lock allows only one thread to access the resource at a time, so other threads must wait for the current thread to release the lock before they can even read.

A shared lock is a form of optimistic locking: it relaxes the locking condition and allows multiple threads to perform read operations at the same time.

What is a reentrant lock

When a thread tries to acquire an exclusive lock held by another thread, it blocks. But does a thread block when it tries to acquire a lock it already holds? If not, the lock is said to be reentrant: once a thread holds the lock, it can enter code protected by that lock any number of times.

[Image: a class with synchronized methods helloA and helloB, where helloA calls helloB]
When helloB is called, the built-in lock is acquired first, and then the message is printed.

When helloA is called, the built-in lock is acquired on entry, and helloA then calls helloB, which tries to acquire the same built-in lock again. If the built-in lock were not reentrant, the calling thread would block forever at that point.
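The pictured methods are not reproduced here, but based on the description they presumably look like the following sketch (method bodies assumed):

```java
// Both methods synchronize on the same object's built-in lock; helloA
// calls helloB while already holding that lock.
public class Hello {
    public synchronized void helloA() {
        System.out.println("hello A");
        helloB(); // re-acquires the lock we already hold; this works only
                  // because synchronized is reentrant (otherwise: self-deadlock)
    }

    public synchronized void helloB() {
        System.out.println("hello B");
    }
}
```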

In fact, the synchronized built-in lock is a reentrant lock. A reentrant lock works by maintaining, inside the lock, an owner indicator recording which thread currently holds it, together with an associated counter.

The counter starts at 0, meaning the lock is not held by any thread. When a thread acquires the lock, the counter becomes 1; other threads that then try to acquire the lock find that the owner is not themselves and are blocked and suspended.

But when the thread that holds the lock acquires it again, it sees that it is the owner and simply increments the counter; each release decrements the counter.

When the counter reaches 0, the owner indicator inside the lock is reset to null, and the blocked threads are woken to compete for the lock.
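The owner-plus-counter scheme can be sketched as follows (illustrative only; this is not how ReentrantLock or synchronized is actually implemented internally):

```java
// Minimal reentrant lock built from an owner field and a counter,
// following the scheme described above.
public class SimpleReentrantLock {
    private Thread owner = null;  // which thread currently holds the lock
    private int count = 0;        // how many times the owner has entered

    public synchronized void lock() {
        Thread current = Thread.currentThread();
        if (owner == current) {   // re-entry by the owner: just bump the counter
            count++;
            return;
        }
        while (owner != null) {   // someone else holds it: block
            try { wait(); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        owner = current;          // counter goes 0 -> 1
        count = 1;
    }

    public synchronized void unlock() {
        if (owner != Thread.currentThread())
            throw new IllegalMonitorStateException();
        if (--count == 0) {       // fully released: reset owner, wake waiters
            owner = null;
            notifyAll();
        }
    }

    public synchronized boolean isHeldByCurrentThread() {
        return owner == Thread.currentThread();
    }
}
```

A second lock() by the owner does not block; the lock is only truly released when unlock() has been called as many times as lock().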

Spin locks

Since threads in Java correspond one-to-one to operating-system threads, when a thread fails to acquire a lock (an exclusive lock, for example), it is suspended, which requires a switch to kernel mode.

When the lock later becomes available, waking the suspended thread also requires a switch to kernel mode.

Switching between user mode and kernel mode is relatively expensive, which hurts concurrency performance to some extent.

With a spin lock, when the current thread finds the lock already held by another thread, it does not block itself immediately. Instead, it retries the acquisition several times without giving up the CPU (the default is 10 attempts; the -XX:PreBlockSpin parameter sets this value), on the chance that the holding thread releases the lock within those few attempts.

If the lock is not obtained after the specified number of attempts, the current thread will be blocked and suspended.

In effect, a spin lock spends CPU time to avoid the overhead of thread blocking and rescheduling, but that CPU time may well be wasted.
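The idea can be sketched with a minimal test-and-set spin lock (illustrative only; the JVM's own spinning for synchronized is adaptive and internal):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A thread that fails to get the lock keeps retrying on the CPU instead
// of blocking and switching to kernel mode.
public class SpinLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    public void lock() {
        while (!held.compareAndSet(false, true)) {
            Thread.onSpinWait(); // busy-wait: no kernel-mode switch, but burns CPU
        }
    }

    public void unlock() {
        held.set(false);
    }
}
```

Two threads incrementing a shared counter under this lock never interleave inside the critical section, at the cost of spinning whenever the lock is contended.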

Origin blog.csdn.net/zhuyufan1986/article/details/135450092