[Multithreading] Classification of Locks in Java

Preface

I believe many readers get confused by the various kinds of locks while learning multithreading, so today I am going to review them. If you like the article, please remember to like, favorite, and share!

1. Optimistic Locks and Pessimistic Locks

First, let's talk about optimistic and pessimistic locking. The two concepts were originally proposed by database designers and were later implemented in the Java concurrency package (java.util.concurrent). They are two optimization strategies aimed at different levels of contention between threads. If threads do not compete for a shared variable, or the competition is mild, optimistic locking reduces the time overhead caused by blocking for synchronization. If the competition between threads is fierce, pessimistic locking reduces the time overhead caused by thread spinning.

Specifically:

  1. Optimistic locking: the caller takes an optimistic view of the current contention between threads, assuming there is little or no competition and that the shared variable is not modified frequently. In that case there is no need to lock the shared variable to guarantee thread safety. Optimistic locking is usually implemented with the CAS (compare-and-swap) algorithm, which relies on an atomic hardware instruction provided by the CPU to compare and exchange a value in a single step.
  2. Pessimistic locking: in contrast to optimistic locking, pessimistic locking assumes that the competition between threads for the shared variable is fierce, and that blocking synchronization is needed to guarantee thread safety. A pessimistic lock avoids the performance overhead of redundant CPU spinning, but because locking blocks threads, it should be used only where appropriate.

If you are not yet familiar with the CAS algorithm or with optimistic and pessimistic locking, see the article CAS algorithm (optimistic locking and pessimistic locking) to deepen your understanding.

To summarize: optimistic and pessimistic locks describe the different strategies designers choose under different levels of contention between threads over shared variables.
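The optimistic-locking idea above can be sketched with a classic CAS retry loop. This is a minimal sketch using `java.util.concurrent.atomic.AtomicInteger` (its `compareAndSet` is backed by the CPU's atomic compare-and-swap instruction); the class and method names are my own, not from the original article.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // Optimistic update: read the current value, compute the new one,
    // and retry if another thread changed it in between (a CAS loop).
    static int incrementOptimistically(AtomicInteger counter) {
        int current;
        do {
            current = counter.get();
            // compareAndSet succeeds only if the value is still `current`;
            // on failure we spin and try again instead of blocking.
        } while (!counter.compareAndSet(current, current + 1));
        return current + 1;
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);
        System.out.println(incrementOptimistically(counter)); // prints 1
    }
}
```

Note that under heavy contention this loop may retry many times, which is exactly the "redundant spinning" cost that pessimistic locks avoid.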

2. Shared Locks and Exclusive Locks

First, we need to know what an exclusive lock is: the locks mentioned so far, such as synchronized and ReentrantLock, are exclusive locks. They allow only one thread at a time to access the protected code; threads that do not hold the lock can only block and wait to be woken up. This clearly does not meet the performance requirements of many everyday business scenarios, so read-write locks came into being.
A read-write lock allows multiple threads to access the same lock object at the same time. It is divided into two parts: a read lock and a write lock. The read lock is also called a shared lock: threads holding the shared lock can access the same lock object simultaneously without blocking. The write lock is an exclusive lock: while a thread holds the write lock, no other thread may access the lock object, so contenders block.
Using a read-write lock is also very simple: call the readLock() and writeLock() methods of ReentrantReadWriteLock.
A simple example of using shared and exclusive locks:

Assumption: thread A requests a shared lock, thread B requests an exclusive lock, and thread C requests a shared lock.

  1. Thread A enters the synchronized code block holding the shared lock.
  2. Thread B tries to enter the block. Because thread A holds the shared lock, thread B must wait for thread A to release it before entering.
  3. Thread A exits the code block, and thread B enters it with the exclusive lock.
  4. Thread C tries to enter the block. Because thread B holds the exclusive lock, thread C must wait for thread B to release it before entering.
  5. Thread B exits the code block, and thread C enters it with the shared lock.
  6. Thread A tries to enter the block again. Because both thread A and thread C hold shared locks, thread A is allowed to execute the code concurrently with thread C.

In short: threads holding shared locks can execute together, while a shared lock and an exclusive lock are mutually exclusive and cause blocking.
One more thing to note: once a thread is waiting for the exclusive (write) lock, all shared (read) lock requests that arrive after it must also wait. This guarantees that read operations see the data written by the latest writer rather than stale values.
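The walkthrough above maps directly onto `ReentrantReadWriteLock`, via the `readLock()` and `writeLock()` methods mentioned earlier. Here is a minimal sketch of a value guarded by a read-write lock; the class name `ReadWriteCache` is my own illustration, not from the original article.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteCache {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private int value;

    // Multiple readers may hold the read (shared) lock at the same time.
    public int read() {
        lock.readLock().lock();
        try {
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }

    // The write (exclusive) lock excludes all other readers and writers.
    public void write(int newValue) {
        lock.writeLock().lock();
        try {
            value = newValue;
        } finally {
            lock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        ReadWriteCache cache = new ReadWriteCache();
        cache.write(42);
        System.out.println(cache.read()); // prints 42
    }
}
```

The unlock calls sit in `finally` blocks so the lock is released even if the guarded code throws, which is the standard idiom for explicit Lock objects.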

For how to use ReentrantLock, refer to this article on the use of lock objects.

3. Fair Locks and Unfair Locks

According to the mechanism by which threads acquire them, locks can be divided into fair locks and unfair locks. With a fair lock, threads acquire the lock in the order in which they requested it: the earlier a thread asks, the earlier it gets the lock. Concretely, when a thread reaches the synchronized code block, it first checks whether a waiting queue exists; if one does, it does not try to acquire the lock but joins the end of the queue. An unfair lock, by contrast, does not hand out the lock in arrival order: a newly arriving thread does not check whether other threads are already waiting, but directly attempts to acquire the lock once. If it succeeds, it runs the synchronized code immediately, which is faster on average; if it fails, it joins the waiting queue.
There is an interesting detail here: the waiting queue of synchronized differs from that of ReentrantLock. synchronized itself is an unfair lock and places a newly arriving thread at the head of its waiting queue, while ReentrantLock does the opposite and places the most recently arrived thread at the tail of the queue.
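With ReentrantLock, the fairness policy described above is chosen at construction time: the single boolean constructor argument selects a fair (FIFO) or unfair (barging) lock. A minimal sketch, with class names of my own choosing:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        // true -> fair lock: waiting threads acquire it in FIFO order.
        ReentrantLock fairLock = new ReentrantLock(true);

        // The default (no argument, or false) is an unfair lock: a newly
        // arriving thread may "barge" and grab the lock ahead of the queue.
        ReentrantLock unfairLock = new ReentrantLock();

        System.out.println(fairLock.isFair());   // prints true
        System.out.println(unfairLock.isFair()); // prints false
    }
}
```

Unfair is the default because barging avoids an extra thread handoff and usually gives higher throughput, at the cost of possible starvation of long-waiting threads.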

4. Reentrant Locks and Non-Reentrant Locks

When a thread tries to acquire an exclusive lock already held by another thread, it blocks. But what happens if a thread tries to acquire an exclusive lock that it already holds itself? If the thread does not block in this situation, we call the lock a reentrant lock: a thread that holds a reentrant lock can enter the synchronized region guarded by that lock any number of times without blocking.
Here is a small question: does Java support non-reentrant locks? Why or why not?
Feel free to leave your opinion in the comments.
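Reentrancy is easy to see with ReentrantLock, which counts how many times the owning thread has acquired it. In this sketch (class and method names are mine), an outer method that holds the lock calls an inner method that acquires it again without blocking:

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public int outer() {
        lock.lock();          // first acquisition by this thread
        try {
            return inner();   // re-enter while still holding the lock
        } finally {
            lock.unlock();
        }
    }

    private int inner() {
        lock.lock();          // second acquisition: no blocking, count goes to 2
        try {
            // getHoldCount() reports how many times the current thread holds it
            return lock.getHoldCount();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(new ReentrancyDemo().outer()); // prints 2
    }
}
```

Each `lock()` must be balanced by an `unlock()`: the lock is only fully released when the hold count drops back to zero.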

5. Biased Locks, Lightweight Locks, and Heavyweight Locks

Biased lock: Java's designers found that in many cases the code guarded by a lock is accessed repeatedly by the same single thread. To reduce the cost of that thread re-acquiring the lock, Java introduced the biased lock: when the owning thread enters the synchronized block again, no synchronization check is triggered at all, which greatly speeds up the synchronized code.
Lightweight lock: from JDK 1.6 on, lightweight locks were introduced to reduce the performance overhead of acquiring and releasing locks. They are used when there is contention between threads but it is not fierce; the main idea is to use thread spinning instead of blocking to improve the program's response time.
Heavyweight lock: Java provides the built-in atomic lock synchronized, which is implemented by abstracting the built-in lock as a monitor. Before JDK 1.6, using synchronized was very expensive: it operated directly on the operating system's mutex, and both suspending and waking a thread had to be done by the OS kernel, which consumes considerable performance. That is why this kind of lock is also called a heavyweight lock.
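All three states back the same language construct: a synchronized block compiles to monitorenter/monitorexit bytecodes (visible with `javap -c`), and the JVM chooses biased, lightweight, or heavyweight handling for the object's monitor at runtime depending on contention. A minimal sketch of a monitor-guarded counter, with names of my own choosing:

```java
public class MonitorDemo {
    private int count = 0;

    // synchronized methods lock `this` object's monitor; the JVM decides
    // internally whether the lock is biased, lightweight, or heavyweight.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        MonitorDemo demo = new MonitorDemo();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) demo.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // The monitor serializes the increments, so no updates are lost.
        System.out.println(demo.get()); // prints 2000
    }
}
```

With a single thread the JVM can keep this lock biased or lightweight; sustained contention from the second thread is what can inflate it to a heavyweight, OS-backed monitor.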
Here is a comparison of the advantages and disadvantages of the three:

Biased lock:

  1. Advantages: locking and unlocking require no extra operations; the gap compared with executing an unsynchronized method is only nanoseconds.
  2. Disadvantages: when a biased lock is upgraded to a lightweight lock, revoking the bias causes a stop-the-world (STW) pause and therefore lag.
  3. Applicable scenario: only one thread ever accesses the synchronized block.

Lightweight lock:

  1. Advantages: does not block threads, so there is no cost of putting threads to sleep and waking them up, which improves the program's response time.
  2. Disadvantages: excessive spinning wastes extra CPU.
  3. Applicable scenario: response time matters and the synchronized block executes very quickly.

Heavyweight lock:

  1. Advantages: contending threads do not spin, so they do not consume CPU while waiting.
  2. Disadvantages: threads block, and response time is slow.
  3. Applicable scenario: throughput matters and the synchronized block takes a long time to execute.

If you still have questions, see the comparison of Java locks in this article to learn the characteristics of these three locks and their upgrade process in detail.

Well, that's it for today's classification of locks. If you feel you have gained something, please like, favorite, and share.
I wish all of you a satisfying job in the new year, promotions, raises, and the pinnacle of life!

Origin blog.csdn.net/xiaoai1994/article/details/111476653