Java "lock" (reading notes for the 2018 Meituan Review technical articles collection)


1. Optimistic lock vs. pessimistic lock

A pessimistic lock assumes that another thread will certainly modify the data while it is being used, so it acquires a lock before touching the data, guaranteeing that the data cannot be modified by other threads in the meantime. (synchronized and Lock)
An optimistic lock assumes that no other thread will modify the data while it is being used, so it does not lock at all. Instead, before writing an update it checks whether another thread has updated the data in the meantime. If the data has not been updated, the current thread writes its modified value successfully; if it has been updated by another thread, the outcome depends on the implementation (for example, reporting an error or automatically retrying).

  • Pessimistic lock is suitable for scenarios where there are many write operations. Locking first can ensure that the data is correct during write operations.
  • Optimistic lock is suitable for scenarios with many read operations, and the feature of no lock can greatly improve the performance of read operations.
// ------------------------- How a pessimistic lock is used -------------------------
// synchronized
public synchronized void testMethod() {
    // operate on the shared resource
}

// ReentrantLock
private ReentrantLock lock = new ReentrantLock(); // all threads must share the same lock instance
public void modifyPublicResources() {
    lock.lock();
    try {
        // operate on the shared resource
    } finally {
        lock.unlock(); // release in finally so the lock is freed even if an exception is thrown
    }
}

// ------------------------- How an optimistic lock is used -------------------------
private AtomicInteger atomicInteger = new AtomicInteger(); // all threads must share the same AtomicInteger
atomicInteger.incrementAndGet(); // atomically increment by 1

CAS vs. synchronized

In terms of design philosophy:

  • synchronized is a pessimistic lock: it pessimistically assumes that contention in the program is heavy, so it guards against it by locking.
  • CAS is an optimistic approach: it optimistically assumes that contention is light, so the thread simply keeps retrying the update until it succeeds.
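As a concrete illustration of the optimistic retry style, the sketch below spells out the CAS loop explicitly using AtomicInteger.compareAndSet (this is essentially what incrementAndGet does internally; CasCounter is a name chosen here for illustration):

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger();

    // Optimistic update: read the current value, compute the new value,
    // and write it back only if no other thread changed it in the meantime.
    public int increment() {
        for (;;) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) { // CAS succeeds only if value is still 'current'
                return next;
            }
            // CAS failed: another thread updated the value first, so retry.
        }
    }
}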

2. Spin lock vs. adaptive spin lock

Spin lock

Blocking or waking up a Java thread requires the operating system to switch CPU state, and this state transition costs processor time (context switching is not free). If the body of a synchronized block is very simple, the time spent on the state transition can exceed the time spent executing the user code.
If the physical machine has multiple processors and can run two or more threads in parallel, we can let the thread that requests the lock later keep its CPU time instead of giving it up, and simply wait to see whether the thread holding the lock releases it soon.

To make the current thread "wait a moment", we let it spin. If the thread holding the synchronized resource has released the lock by the time the spin finishes, the current thread can acquire the resource directly without blocking, avoiding the overhead of a thread switch. This is the spin lock.

Disadvantage: spinning cannot replace blocking. Although spin-waiting avoids the overhead of thread switching, it occupies processor time.
The spin-wait must therefore be bounded: if the spin exceeds the limit (10 iterations by default, adjustable with -XX:PreBlockSpin) and the lock has still not been acquired, the thread should be suspended.

The implementation principle behind spin locks is CAS. In AtomicInteger, the auto-increment operation calls Unsafe, whose source code contains a do-while loop that is exactly a spin: if updating the value fails, the loop retries until the modification succeeds.
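Below is a minimal sketch of a spin lock built directly on CAS (an illustration of the idea, not the JVM's internal implementation; SpinLock is a name chosen here): the acquiring thread keeps retrying compareAndSet instead of blocking.

import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    // Holds the thread that currently owns the lock, or null when the lock is free.
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Spin until the CAS from null -> current succeeds; the thread never blocks.
        while (!owner.compareAndSet(null, current)) {
            // busy-wait: burns CPU time, which is why real spinning is bounded
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        // Only the owning thread can release the lock.
        owner.compareAndSet(current, null);
    }
}

Note that this sketch is non-reentrant: if the owning thread calls lock() again it will spin forever, which previews the reentrant vs. non-reentrant discussion later on.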

Adaptive spin lock

Adaptive means the spin duration (iteration count) is no longer fixed; it is determined by the previous spin time on the same lock and the state of the lock owner. If, on the same lock object, a spin-wait has just succeeded in acquiring the lock and the thread holding the lock is currently running, the virtual machine assumes this spin is also likely to succeed and allows it to last relatively longer.
If spinning has rarely succeeded for a particular lock, future acquisition attempts may skip the spin entirely and block the thread directly, to avoid wasting processor resources. (Related spin-based locks: TicketLock, CLHLock, and MCSLock.)

3. Lock-free vs. biased lock vs. lightweight lock vs. heavyweight lock

The layout of a Java object in memory (HotSpot VM): object header, instance data, and alignment padding.
Lock elimination: during JIT compilation, the context is analyzed and locks that can never be contended are removed (for example, the internal synchronization of a StringBuffer("...") that never escapes the method).
Lock coarsening: the scope of a lock is expanded to avoid repeatedly locking and unlocking (for example, hoisting a lock out of a while loop).
(Tables in the original article: the data stored in the Mark Word; the storage structure of the Java object header; how the Mark Word in the object header changes with the lock flag bits as the program runs.)
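As an optional way to inspect the object header and watch the Mark Word change, the sketch below uses the OpenJDK JOL library (an assumption of these notes, not something the original article requires; it needs the org.openjdk.jol:jol-core dependency, and what you see depends on the JVM version and flags such as whether biased locking is enabled):

import org.openjdk.jol.info.ClassLayout;

public class MarkWordDemo {
    public static void main(String[] args) {
        Object lock = new Object();
        // Print the object layout, including the header bits (lock flag, hash, thread ID, ...).
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());

        synchronized (lock) {
            // Inside the synchronized block the header reflects the current lock state
            // (lightweight or heavyweight, depending on contention and JVM settings).
            System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        }
    }
}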

Biased lock

In most cases a lock is not only free of multi-thread contention but is also acquired repeatedly by the same thread. Biased locking was introduced to make lock acquisition cheaper for that thread.
Core idea: once a thread acquires the lock, the lock enters biased mode and the Mark Word switches to the biased-lock layout. When the same thread requests the lock again, no synchronization operation is needed: it only has to check that the lock flag bits in the Mark Word indicate a biased lock and that the thread ID stored in the Mark Word equals its own, which saves a large amount of lock-acquisition work.

Revoking a biased lock requires waiting for a global safepoint (a point in time at which no bytecode is being executed).

Lightweight lock

A biased lock applies while only a single thread enters the synchronized block; when a second thread joins the contention, it is upgraded to a lightweight lock (the threads then execute the synchronized block alternately, spinning rather than blocking).

Heavyweight lock

When multiple threads contend for the same lock at the same time, the lightweight lock inflates into a heavyweight lock.

Comparison of the advantages and disadvantages of locks

(Table in the original article: features of the four lock states and a comparison of their advantages and disadvantages.)
A biased lock handles locking by comparing the Mark Word, avoiding even the cost of a CAS operation.
A lightweight lock handles locking with CAS operations and spinning, avoiding the performance cost of blocking and waking up threads.
A heavyweight lock blocks every thread except the one that owns the lock.

4. Fair lock vs. unfair lock

Fair lock
A fair lock means multiple threads acquire the lock in the order in which they requested it: threads go straight into a queue, and only the thread at the head of the queue can obtain the lock.

  • Advantages: threads waiting for the lock will not starve.
  • Disadvantages: overall throughput is lower than with an unfair lock; every thread in the waiting queue except the first is blocked, and the CPU cost of waking up blocked threads is higher than with an unfair lock.
    Fair-lock analogy: a single service window — take a number, wait in your seat, and go to the window when your number is called (i.e., when the thread is woken up).

Unfair lock
With an unfair lock, a thread first tries to acquire the lock directly and only falls back to the tail of the waiting queue if that attempt fails. If the lock happens to be free at that moment, the thread acquires it immediately without blocking, so a thread that requested the lock later may obtain it before threads that requested it earlier.

  • Advantages: it reduces the overhead of waking up threads, and overall throughput is higher, because a thread has a chance to acquire the lock directly without blocking and the CPU does not have to wake every waiting thread.
  • Disadvantages: threads in the waiting queue may starve, or wait a long time before acquiring the lock.
    Implementation of unfair locks

ReentrantLock source code: the implementation of fair and unfair locks (fairness is chosen when the lock is constructed; see the sketch below).
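A minimal usage-level sketch (the FairSync/NonfairSync internals are not reproduced here); the constructor argument selects fairness:

import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // true  -> fair lock: threads acquire the lock in request order (FIFO queue).
    private final ReentrantLock fairLock = new ReentrantLock(true);
    // false (or the no-arg constructor) -> unfair lock: a thread may grab the lock
    // directly if it happens to be free, ahead of queued threads.
    private final ReentrantLock unfairLock = new ReentrantLock(false);

    public void doWork() {
        fairLock.lock();
        try {
            // critical section protected by the fair lock
        } finally {
            fairLock.unlock();
        }
    }
}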

5. Reentrant lock vs. non-reentrant lock

A reentrant lock, also known as a recursive lock, means that when a thread has acquired the lock in an outer method, inner methods called by the same thread automatically acquire the lock as well (provided the lock object is the same object or class), instead of blocking on a lock the thread already holds but has not yet released.
Both ReentrantLock and synchronized in Java are reentrant locks. One advantage of reentrancy is that it avoids this kind of deadlock to a certain extent.

public class Widget {
    public synchronized void doSomething() {
        System.out.println("method 1 running...");
        doOthers();
    }
    public synchronized void doOthers() {
        System.out.println("method 2 running...");
    }
}

Both methods in the class are guarded by the intrinsic lock (synchronized), and doSomething() calls doOthers(). Because the intrinsic lock is reentrant, the same thread can directly acquire the lock of the current object again when it calls doOthers() and proceed with the operation.

If the lock were non-reentrant, the current thread would have to release the object lock it acquired in doSomething() before it could enter doOthers(). Since the lock is still held by the current thread and is not released, the attempt to acquire it again blocks forever, and the thread deadlocks on itself.
An analogy for reentrancy: a villager brings several buckets to the well to fetch water; the administrator binds the well's lock to the person, so the same person can fill all of his buckets before the lock is released.

But with a non-reentrant lock, the administrator only allows the lock to be bound to one bucket of the same person: after the lock is bound to the first bucket it is never released, so the second bucket can never be bound or filled. The current thread deadlocks, and no thread in the entire waiting queue can be woken up.
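For comparison, here is the same reentrant pattern with an explicit ReentrantLock (a minimal sketch; ReentrantWidget is a name chosen here for illustration): re-entering the lock simply increments a hold count, and each lock() must be paired with an unlock().

import java.util.concurrent.locks.ReentrantLock;

public class ReentrantWidget {
    private final ReentrantLock lock = new ReentrantLock();

    public void doSomething() {
        lock.lock();               // hold count becomes 1
        try {
            System.out.println("method 1 running...");
            doOthers();            // re-entering the same lock does not block
        } finally {
            lock.unlock();         // hold count back to 0: the lock is actually released
        }
    }

    public void doOthers() {
        lock.lock();               // hold count becomes 2 when called from doSomething()
        try {
            System.out.println("method 2 running...");
        } finally {
            lock.unlock();         // hold count back to 1
        }
    }
}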

Source-code comparison: ReentrantLock vs. a non-reentrant lock (NonReentrantLock) — why a non-reentrant lock deadlocks when the same thread acquires the synchronized resource repeatedly.

Both ReentrantLock and NonReentrantLock are built on the parent class AQS (AbstractQueuedSynchronizer), which maintains a synchronization state, status, used to count lock acquisitions (reentries). The initial value of status is 0.

When a thread tries to acquire the lock, a reentrant lock first reads the status value and then tries to update it. If status == 0, no other thread is executing the synchronized code, so status is set to 1 and the current thread proceeds. If status != 0, the lock checks whether the current thread is the one that already holds it; if so, status is incremented (status + 1) and the current thread acquires the lock again.

A non-reentrant lock simply reads the current status value and tries to update it: if status != 0, the acquisition fails and the current thread is blocked, even when it is the thread that already holds the lock.
When the lock is released, a reentrant lock likewise first reads the current status value, on the premise that the current thread is the one holding the lock. If status - 1 == 0, every repeated acquisition by the current thread has been matched by a release, and only then does the thread actually release the lock.
A non-reentrant lock simply sets status to 0 after confirming that the current thread holds the lock, releasing it in one step.
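A minimal sketch of the reentrant counting described above, built on AbstractQueuedSynchronizer (a simplified illustration, not the JDK's ReentrantLock source; SimpleReentrantLock is a name chosen here):

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SimpleReentrantLock {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            Thread current = Thread.currentThread();
            int status = getState();
            if (status == 0) {
                // Nobody holds the lock: try to grab it with a CAS on the state.
                if (compareAndSetState(0, acquires)) {
                    setExclusiveOwnerThread(current);
                    return true;
                }
            } else if (current == getExclusiveOwnerThread()) {
                // Reentrant case: same thread, just bump the count.
                setState(status + acquires);
                return true;
            }
            return false; // held by another thread: the caller will queue and block
        }

        @Override
        protected boolean tryRelease(int releases) {
            if (Thread.currentThread() != getExclusiveOwnerThread()) {
                throw new IllegalMonitorStateException();
            }
            int status = getState() - releases;
            boolean free = (status == 0);
            if (free) {
                setExclusiveOwnerThread(null); // last matching unlock: really free the lock
            }
            setState(status);
            return free;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}

A non-reentrant version would drop the getExclusiveOwnerThread() branch in tryAcquire, which is exactly why a second acquisition by the same thread blocks forever.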

6. Exclusive lock vs. shared lock

An exclusive lock (also called a mutex) can be held by only one thread at a time. If thread T places an exclusive lock on data A, no other thread can place any kind of lock on A. The thread holding the exclusive lock can both read and modify the data.
Both synchronized in the JDK and Lock implementation classes such as ReentrantLock in JUC are exclusive (mutual-exclusion) locks.

Shared lock means that the lock can be held by multiple threads. If thread T adds a shared lock to data A, other threads can only add a shared lock to A, and cannot add an exclusive lock. The thread that obtains the shared lock can only read the data and cannot modify the data .
ReentrantReadWriteLock
In ReentrantReadWriteLock, the read lock (ReadLock) and the write lock (WriteLock) are both backed by the same Sync object, but they acquire it differently: the read lock is a shared lock and the write lock is an exclusive lock.
The shared read lock makes concurrent reads very efficient, while read-write, write-read, and write-write accesses remain mutually exclusive; because reads and writes are separated, the concurrency of ReentrantReadWriteLock is much higher than that of an ordinary mutex.
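A minimal usage sketch of this read/write separation (SimpleCache is a hypothetical wrapper, using only JDK classes):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SimpleCache {
    private final Map<String, Object> map = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public Object get(String key) {
        rwLock.readLock().lock();       // shared: many readers may hold this at once
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(String key, Object value) {
        rwLock.writeLock().lock();      // exclusive: blocks all readers and other writers
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}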

In an exclusive lock, the value of state is usually 0 or 1 (for a reentrant lock, state is the number of reentries); in a shared lock, state is the number of holds of the lock.
However, ReentrantReadWriteLock contains two locks, one for reading and one for writing, so the counts of both must be recorded in a single integer state variable. The state is therefore "split bitwise" into two halves: the upper 16 bits hold the read-lock count and the lower 16 bits hold the write-lock count.
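A small sketch of that bit split (the shift and mask mirror what ReentrantReadWriteLock's internal Sync does; the class and method names here are chosen for illustration):

public class ReadWriteState {
    // The 32-bit state is split into two 16-bit halves:
    //   high 16 bits = number of read locks held
    //   low  16 bits = write-lock (reentrant) hold count
    static final int SHARED_SHIFT   = 16;
    static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1; // 0x0000FFFF

    static int readCount(int state)  { return state >>> SHARED_SHIFT; } // upper 16 bits
    static int writeCount(int state) { return state & EXCLUSIVE_MASK; } // lower 16 bits
}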

  • When a thread holds the read lock, it cannot acquire the write lock
    (when acquiring the write lock, if the read lock is found to be held at all, the acquisition fails immediately, regardless of whether the read lock is held by the current thread).

  • When a thread holds the write lock, it can still acquire the read lock
    (when acquiring the read lock, the acquisition fails only if the write lock is held by some other thread; if the write lock is held by the current thread, the read lock is granted).

Because other threads may be holding the read lock at the same moment a thread acquires it, a thread that holds a read lock cannot be "upgraded" to the write lock.
A thread that has obtained the write lock, on the other hand, already has exclusive access to both reads and writes, so it can safely be allowed to acquire the read lock as well. While holding both, it can release the write lock and keep the read lock, "downgrading" the write lock to a read lock.

To sum up (see the sketch below):
if a thread wants to hold the write lock and the read lock at the same time, it must acquire the write lock first and then the read lock;
a write lock can be "downgraded" to a read lock;
a read lock cannot be "upgraded" to a write lock.
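A sketch of the downgrade order summarized above (acquire the write lock, then the read lock, then release the write lock), loosely following the pattern shown in the ReentrantReadWriteLock Javadoc; DowngradeDemo is a name chosen for illustration:

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DowngradeDemo {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private Object data;

    public Object updateAndRead(Object newValue) {
        rwLock.writeLock().lock();          // 1. acquire the write lock (exclusive)
        try {
            data = newValue;                // modify the data
            rwLock.readLock().lock();       // 2. acquire the read lock while still holding the write lock
        } finally {
            rwLock.writeLock().unlock();    // 3. release the write lock: now "downgraded" to a read lock
        }
        try {
            return data;                    // read while holding only the read lock
        } finally {
            rwLock.readLock().unlock();     // 4. finally release the read lock
        }
    }
}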

ReentrantLock, by contrast, is an exclusive lock for both read and write operations.


Origin blog.csdn.net/eluanshi12/article/details/84771250