Summary of common Java locks: fair locks, reentrant locks, exclusive locks, mutexes, optimistic locks, segmented locks, biased locks, spin locks, and more.

Preface

Articles on concurrency mention all kinds of locks, such as fair locks and optimistic locks. This article introduces the common classifications of locks:

1. Fair lock / unfair lock

2. Reentrant lock / non-reentrant lock

3. Exclusive lock / shared lock

4. Mutex / read-write lock

5. Optimistic lock / pessimistic lock

6. Segmented lock

7. Biased lock / lightweight lock / heavyweight lock

8. Spin lock

These terms do not all refer to the state of a lock: some describe a lock's characteristics, and some describe a lock design. The sections below explain each of these terms.

Fair lock/Unfair lock

Fair lock

A fair lock means that multiple threads acquire the lock in the order in which they requested it.

Unfair lock

With an unfair lock, threads do not necessarily acquire the lock in the order in which they requested it: a thread that requests the lock later may acquire it before a thread that requested it earlier. This can cause priority inversion or starvation.

For Java's ReentrantLock, a constructor parameter specifies whether the lock is fair; the default is unfair. The advantage of an unfair lock is higher throughput than a fair lock.
synchronized is also an unfair lock. Since it does not implement thread queuing through AQS the way ReentrantLock does, there is no way to make it fair.
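A minimal sketch of the constructor flag (the class name FairLockDemo is ours, for illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();    // default: unfair
        ReentrantLock fair = new ReentrantLock(true);  // true requests a fair (FIFO) lock

        System.out.println(unfair.isFair());
        System.out.println(fair.isFair());
    }
}
```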

Reentrant lock/non-reentrant lock

Reentrant lock

Broadly speaking, a reentrant lock is one that can be acquired repeatedly and recursively: after an outer method takes the lock, inner methods can still take it (provided it is the same object or class) without deadlocking. ReentrantLock and synchronized are both reentrant locks.

synchronized void setA() throws Exception{
   Thread.sleep(1000);
   setB();
}
synchronized void setB() throws Exception{
   Thread.sleep(1000);
}

The code above illustrates reentrancy: setA() calls setB() while holding the lock. If the lock were not reentrant, the current thread could not enter setB(), and would deadlock on itself.

Non-reentrant lock

A non-reentrant lock, by contrast, cannot be acquired recursively; a recursive acquisition deadlocks. A classic illustration simulates a non-reentrant lock with a spin lock:

import java.util.concurrent.atomic.AtomicReference;

public class UnreentrantLock {

   private AtomicReference<Thread> owner = new AtomicReference<Thread>();

   public void lock() {
       Thread current = Thread.currentThread();
       // The classic "spin" idiom, also used inside AtomicInteger:
       // loop until the CAS succeeds in installing this thread as owner.
       for (;;) {
           if (owner.compareAndSet(null, current)) {
               return;
           }
       }
   }

   public void unlock() {
       Thread current = Thread.currentThread();
       owner.compareAndSet(current, null);
   }
}

The code is simple: an atomic reference stores the owning thread. If the same thread calls lock() twice without calling unlock() in between, the second call spins forever, producing a deadlock; this lock is not reentrant. In practice, though, forcing the same thread to release and re-acquire the lock on every nested call would waste resources on needless scheduling switches.

Turn it into a reentrant lock:

import java.util.concurrent.atomic.AtomicReference;

public class ReentrantLockDemo {

   private AtomicReference<Thread> owner = new AtomicReference<Thread>();
   private int state = 0;

   public void lock() {
       Thread current = Thread.currentThread();
       if (current == owner.get()) {
           state++;  // already the owner: just bump the reentry count
           return;
       }
       // The classic "spin" idiom, also used inside AtomicInteger:
       // loop until the CAS succeeds in installing this thread as owner.
       for (;;) {
           if (owner.compareAndSet(null, current)) {
               return;
           }
       }
   }

   public void unlock() {
       Thread current = Thread.currentThread();
       if (current == owner.get()) {
           if (state != 0) {
               state--;  // undo one reentry; the lock is still held
           } else {
               owner.compareAndSet(current, null);  // last release: free the lock
           }
       }
   }
}

Before each operation, check whether the current thread already holds the lock; if so, just adjust the state counter instead of releasing and re-acquiring the lock each time.

Implementation of reentrant lock in ReentrantLock

Here is the lock acquisition method for unfair locks:

final boolean nonfairTryAcquire(int acquires) {
   final Thread current = Thread.currentThread();
   int c = getState();
   if (c == 0) {
       if (compareAndSetState(0, acquires)) {
           setExclusiveOwnerThread(current);
           return true;
       }
   }
   // the reentrant path: the current thread already owns the lock
   else if (current == getExclusiveOwnerThread()) {
       int nextc = c + acquires;
       if (nextc < 0) // overflow
           throw new Error("Maximum lock count exceeded");
       setState(nextc);
       return true;
   }
   return false;
}

AQS maintains a private volatile int state to count reentries. This avoids frequent acquire/release pairs, which both improves efficiency and avoids deadlock.
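The reentry count kept in state can be observed from user code via ReentrantLock.getHoldCount(); a small sketch (the class name HoldCountDemo is ours):

```java
import java.util.concurrent.locks.ReentrantLock;

public class HoldCountDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                              // first acquisition: state 0 -> 1
        lock.lock();                              // reentrant acquisition: state 1 -> 2
        System.out.println(lock.getHoldCount());  // 2
        lock.unlock();                            // state 2 -> 1, lock still held
        System.out.println(lock.isHeldByCurrentThread());
        lock.unlock();                            // state 1 -> 0, fully released
        System.out.println(lock.getHoldCount());  // 0
    }
}
```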

Exclusive lock / shared lock

When you read ReentrantLock and ReentrantReadWriteLock in the java.util.concurrent (JUC) package, you will find that one is an exclusive lock and the other is a shared lock.

Exclusive lock

The lock can only be held by one thread at a time.

Shared lock

The lock can be held by multiple threads at once; the typical example is the read lock in ReentrantReadWriteLock. Its read lock is shared, while its write lock is exclusive.

The sharing of read locks makes concurrent reads very efficient, but read-write, write-write, and write-read access are all mutually exclusive.

Exclusive and shared locks are both implemented on top of AQS; by overriding different methods, either kind can be built.
synchronized is, of course, an exclusive lock.

Mutex / read-write lock

Mutex

A mutex performs a lock operation before accessing a shared resource and an unlock operation after the access completes. Once locked, any other thread that tries to lock it blocks until the current holder unlocks.

If more than one thread is blocked when the lock is released, all of them are made ready; the first to run performs the lock operation, and the rest wait again. In this way, only one thread at a time can access the resource protected by the mutex.
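As an illustrative sketch of this discipline (the MutexCounter class is ours, not from the article), synchronized gives exactly this lock-before-access, unlock-after-access behavior:

```java
public class MutexCounter {
    private int count = 0;
    private final Object mutex = new Object();

    // Only one thread at a time can execute the critical section.
    public void increment() {
        synchronized (mutex) {  // lock before touching shared state
            count++;
        }                       // unlock on exit, even on exception
    }

    public int get() {
        synchronized (mutex) {
            return count;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        MutexCounter c = new MutexCounter();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get());  // 40000: no increments are lost
    }
}
```

Without the synchronized blocks, the four threads would race on count++ and the final total would usually fall short of 40000.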

Read-write lock

A read-write lock is both a mutex and a shared lock: read mode is shared, while write mode is mutually exclusive (an exclusive lock).

There are three states of read-write lock: read locked state, write locked state and unlocked state

In Java, the read-write lock is the ReadWriteLock interface, implemented by ReentrantReadWriteLock.

Only one thread at a time can hold the read-write lock in write mode, but multiple threads can hold it in read mode simultaneously, which is what enables high read concurrency. While the write lock is held, any thread trying to acquire either lock blocks until the write lock is released. While a read lock is held, other threads may also acquire the read lock, but no thread may acquire the write lock until all read locks are released. To keep would-be writers from starving, once the read-write lock sees a thread waiting for the write lock, it blocks subsequent threads from acquiring the read lock. Read-write locks are therefore well suited to resources that are read far more often than they are written.
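A minimal sketch of this usage pattern with ReentrantReadWriteLock (the RwCache class is our own illustration):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCache {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private int value;

    public int read() {
        rw.readLock().lock();   // shared: many readers may hold this at once
        try {
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }

    public void write(int v) {
        rw.writeLock().lock();  // exclusive: blocks all readers and writers
        try {
            value = v;
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```

The lock/try/finally shape guarantees the lock is released even if the guarded code throws.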

Optimistic lock / pessimistic lock

Pessimistic lock

Always assume the worst: every time you read the data, you assume someone else will modify it, so you lock on every access, and anyone else who wants the data blocks until they can take the lock (the shared resource is used by one thread at a time; others block and take over only after it is released). Traditional relational databases use many such locking mechanisms: row locks, table locks, read locks, write locks, all taken before the operation. In Java, exclusive locks such as synchronized and ReentrantLock embody the pessimistic-locking idea.

Optimistic lock

Always assume the best: every time you read the data, you assume no one else will modify it, so you do not lock. Only when updating do you check whether someone else updated the data in the meantime, typically with a version-number mechanism or the CAS algorithm. Optimistic locking suits read-heavy workloads and can improve throughput. Database mechanisms like write_condition are really optimistic locks, and the atomic variable classes in Java's java.util.concurrent.atomic package are implemented with CAS, an optimistic-locking technique.
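A sketch of the optimistic read-compute-CAS-retry loop using AtomicInteger (the OptimisticCounter class name is ours):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Optimistic update: read without locking, compute, then CAS; retry on conflict.
    public int increment() {
        for (;;) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next;  // no other thread changed the value in between
            }
            // CAS failed: another thread updated concurrently, loop and retry
        }
    }

    public int get() {
        return value.get();
    }
}
```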

Segment lock

Segmented locking is really a lock design rather than a specific kind of lock. ConcurrentHashMap (in JDK 7 and earlier) uses segmented locks to achieve efficient concurrent operation.

The locking mechanism of concurrent container classes is based on segmented locks with smaller granularity. Segmented locks are also one of the important means to improve the performance of multiple concurrent programs.

In concurrent programs, serialized operations hurt scalability and context switches hurt performance; lock contention causes both problems at once. An exclusive lock protects a resource in an essentially serial way: only one thread can access it at a time. Exclusive locks are therefore the biggest threat to scalability.

We generally have three ways to reduce the degree of lock competition:

1. Reduce the holding time of the lock.
2. Reduce the frequency of lock requests.
3. Replace exclusive locks with coordination mechanisms that allow higher concurrency.

In some cases, the lock-splitting technique can be extended further, splitting one lock into a set of locks over independent objects; this becomes segmented locking.

Put more simply:

The container holds multiple locks, each guarding part of the container's data. When threads access data in different segments, there is no lock contention between them, which improves concurrent access efficiency. This is the lock-striping technique used by ConcurrentHashMap: the data is stored in segments, each segment gets its own lock, and while one thread holds a segment's lock, the data in other segments remains accessible to other threads.

For example, ConcurrentHashMap uses an array of 16 locks; each lock protects 1/16 of the hash buckets, and the Nth bucket is protected by lock N mod 16. Assuming a reasonable hash function spreads the keys evenly, this cuts the requests on any one lock to roughly 1/16 of the total. It is this technique that lets ConcurrentHashMap support up to 16 concurrent writer threads.
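A simplified sketch of the striping idea (the StripedCounter class is our illustration, not ConcurrentHashMap's actual code):

```java
import java.util.concurrent.locks.ReentrantLock;

// 16 locks, each guarding 1/16 of the slots, mirroring the
// segment-lock scheme described above.
public class StripedCounter {
    private static final int STRIPES = 16;
    private final ReentrantLock[] locks = new ReentrantLock[STRIPES];
    private final long[] counts = new long[STRIPES];

    public StripedCounter() {
        for (int i = 0; i < STRIPES; i++) locks[i] = new ReentrantLock();
    }

    private int stripeFor(Object key) {
        // the Nth bucket is guarded by the (N mod 16)th lock
        return Math.floorMod(key.hashCode(), STRIPES);
    }

    public void add(Object key, long delta) {
        int s = stripeFor(key);
        locks[s].lock();  // contention only among keys that hash to the same stripe
        try {
            counts[s] += delta;
        } finally {
            locks[s].unlock();
        }
    }

    public long total() {
        long sum = 0;
        for (int i = 0; i < STRIPES; i++) {
            locks[i].lock();
            try { sum += counts[i]; } finally { locks[i].unlock(); }
        }
        return sum;
    }
}
```

Threads touching keys in different stripes never contend, which is exactly why striping scales better than one lock over the whole container.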

Biased lock / Lightweight lock / Heavyweight lock

The state of the lock:

1. No lock state

2. Biased lock state

3. Lightweight lock state

4. Heavyweight lock status

The lock state is recorded in a field of the object header (the Mark Word).
The four states escalate gradually as contention increases, and the escalation is one-way: a lock can be upgraded but never downgraded.
These four states are not locks in the Java language; they are optimizations the JVM applies (when synchronized is used) to make acquiring and releasing locks more efficient.

Bias lock

A biased lock means that once a piece of synchronized code has been accessed by one thread, that thread automatically acquires the lock on subsequent entries, reducing the cost of acquisition.

Lightweight lock

A lightweight lock arises when a biased lock is accessed by a second thread: the biased lock is upgraded to a lightweight lock, and the other thread tries to acquire it by spinning rather than blocking, which improves performance.

Heavyweight lock

A heavyweight lock arises when spinning on a lightweight lock fails: a thread will not spin forever, and after a certain number of unsuccessful spins it blocks and the lock inflates into a heavyweight lock. A heavyweight lock blocks other waiting threads and reduces performance.

Spin lock

We know that the CAS algorithm is an implementation of optimistic locking, and the CAS algorithm involves spin locks, so here I will tell you what a spin lock is.

A brief review of the CAS algorithm

CAS stands for Compare And Swap, a well-known lock-free algorithm. Lock-free programming synchronizes variables between threads without using locks, i.e., without any thread being blocked, so it is also called non-blocking synchronization. The CAS algorithm involves three operands:

1. V, the memory value to be read and written

2. A, the expected value to compare against

3. B, the new value to be written

When updating a variable, the value at memory location V is changed to B only if the current value at V equals the expected value A; otherwise nothing happens. Usually this runs in a spin: keep retrying until the update succeeds.
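The three operands are visible directly in AtomicInteger.compareAndSet (the CasDemo class is our illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(10);  // memory value V = 10

        boolean ok = v.compareAndSet(10, 11);     // expected A = 10, new value B = 11
        System.out.println(ok + " " + v.get());   // true 11: A matched V, so V became B

        boolean stale = v.compareAndSet(10, 12);  // A = 10 no longer matches V = 11
        System.out.println(stale + " " + v.get()); // false 11: no change was made
    }
}
```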

What is a spin lock?

A spin lock means that when a thread tries to acquire a lock already held by another thread, it waits in a loop, repeatedly checking whether the lock can be acquired, and exits the loop only once it succeeds.

Spin locks, like mutexes, are a mechanism for protecting the mutually exclusive use of a shared resource: either kind of lock has at most one holder at any time, i.e., at most one execution unit can hold the lock. The two differ in their scheduling behavior. With a mutex, if the resource is occupied, the requester goes to sleep. A spin lock never puts the caller to sleep: if the lock is held by another execution unit, the caller loops, checking whether the holder has released it. That looping is where the name "spin" comes from.

How does Java implement spin locks?

Here is a simple example:

import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
   private AtomicReference<Thread> cas = new AtomicReference<Thread>();
   public void lock() {
       Thread current = Thread.currentThread();
       // spin with CAS until this thread is installed as the owner
       while (!cas.compareAndSet(null, current)) {
           // do nothing, just spin
       }
   }
   public void unlock() {
       Thread current = Thread.currentThread();
       cas.compareAndSet(current, null);
   }
}

The lock() method uses CAS. When the first thread A acquires the lock, CAS succeeds and A skips the while loop. If A has not yet released the lock when thread B tries to acquire it, B's CAS fails, so B enters the while loop and keeps checking until A calls unlock() to release the lock.

Problems with spin locks

1. If a thread holds the lock for too long, it will cause other threads waiting to acquire the lock to enter the loop waiting, consuming CPU. Improper use will cause extremely high CPU usage.
2. The spin lock implemented above is not fair: the thread that has waited longest is not guaranteed to get the lock first. Unfair locks can cause "thread starvation".

Advantages of spin locks

1. A spin lock never triggers a thread state switch; the thread stays in user mode and remains active throughout, avoiding unnecessary context switches and executing quickly.
2. A non-spinning lock puts the thread into the blocked state when the lock is unavailable, entering the kernel; when the lock is acquired, the thread must be restored from the kernel, requiring a thread context switch. (After a thread blocks, it enters kernel scheduling, and the system switches back and forth between user mode and kernel mode, which seriously hurts lock performance.)

Reentrant spin lock and non-reentrant spin lock

A careful look at the SpinLock code above shows it does not support reentrancy: once a thread has acquired the lock, acquiring it again before release cannot succeed, because the CAS fails and the second attempt spins in the while loop. With a reentrant lock, the second acquisition would succeed.

Moreover, even if the second acquisition could succeed, the first unlock would then release a lock that had been acquired twice, which is wrong.

To make the lock reentrant, we introduce a counter that records how many times the owning thread has acquired it.

import java.util.concurrent.atomic.AtomicReference;

public class ReentrantSpinLock {
   private AtomicReference<Thread> cas = new AtomicReference<Thread>();
   private int count;
   public void lock() {
       Thread current = Thread.currentThread();
       if (current == cas.get()) { // already the owner: bump the reentry count and return
           count++;
           return;
       }
       // not the owner: spin with CAS until the lock is free
       while (!cas.compareAndSet(null, current)) {
           // do nothing, just spin
       }
   }
   public void unlock() {
       Thread cur = Thread.currentThread();
       if (cur == cas.get()) {
           if (count > 0) { // count > 0 means reentrant acquisitions: release one by decrementing
               count--;
           } else { // count == 0: release the lock itself, so acquisitions and releases balance
               cas.compareAndSet(cur, null);
           }
       }
   }
}

Spin locks and mutex locks

1. Both spin locks and mutual exclusion locks are mechanisms for protecting resource sharing.

2. Whether it is a spin lock or a mutex lock, there can be at most one holder at any time.

3. A thread that requests a mutex goes to sleep if the lock is occupied; a thread that requests a spin lock does not sleep, but loops waiting for the lock to be released.

Spin lock summary

1. Spin lock: When a thread acquires a lock, if the lock is held by another thread, the current thread will wait in a loop until the lock is acquired.

2. During the spin lock waiting period, the state of the thread will not change, and the thread is always in user mode and active.

3. If the spin lock holds the lock for too long, it will cause other threads waiting to acquire the lock to exhaust the CPU.

4. The spin lock itself cannot guarantee fairness, nor can it guarantee reentrancy.

5. Based on spin locks, locks with fairness and reentrant properties can be realized.

Author: 搜云库技术团队
Original: segmentfault.com/a/1190000017766364
