Still confusing pessimistic locks and optimistic locks? You just haven't understood the mechanism yet

Locks exist because of concurrency problems. In this post I only write down some questions that may come up in an interview, together with their answers; I am not going to give an in-depth explanation of every locking mechanism. Interviewers usually drill in from a single starting point, so I start with thread-safety problems and work towards lock problems.


Talk about thread safety issues

Thread safety is a problem specific to multithreading. It can be simply understood as follows: a method or an instance is thread safe if it can be used in a multithreaded environment without problems.

In Java multithreaded programming, there are several ways to achieve thread safety (a short sketch follows the list):

  • Use the synchronized keyword (the simplest way).
  • Use the atomic classes in the java.util.concurrent.atomic package, for example AtomicInteger.
  • Use the locks in the java.util.concurrent.locks package, for example ReentrantLock.
  • Use thread-safe collections such as ConcurrentHashMap.
  • Use the volatile keyword to guarantee variable visibility (reads go to main memory instead of the thread's cached copy).
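
A minimal sketch contrasting three of the approaches above (the class and field names are my own, not from the original post): a synchronized counter, an atomic counter, and a volatile flag.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadSafeCounters {

    // 1. synchronized: only one thread at a time may execute this method
    private int syncCount = 0;
    public synchronized void incrementSynchronized() {
        syncCount++;
    }

    // 2. Atomic class: lock-free increment backed by CAS
    private final AtomicInteger atomicCount = new AtomicInteger(0);
    public void incrementAtomic() {
        atomicCount.incrementAndGet();
    }

    // 3. volatile: guarantees visibility of the flag across threads
    //    (but does NOT make compound operations such as count++ atomic)
    private volatile boolean running = true;
    public void stop() {
        running = false;
    }
    public boolean isRunning() {
        return running;
    }
}
```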

What are pessimistic locks and optimistic locks


An optimistic lock corresponds to the optimists in life, who always assume things will turn out well, while a pessimistic lock corresponds to the pessimists, who always assume things will go wrong. Each attitude has its advantages and disadvantages; which one is better depends on the scenario.

Pessimistic lock

A pessimistic lock pessimistically assumes that every concurrent operation may cause a lost update, so it takes an exclusive lock on every query.

Every time it reads the data, it assumes someone else will modify it, so it locks the data on every read; anyone else who wants the data then blocks until they can acquire the lock. Traditional relational databases use many locking mechanisms of this kind, such as row locks, table locks, read locks, and write locks, all of which are acquired before the operation is performed.

  • Pessimistic locking always assumes the worst case: a shared resource is used by only one thread at a time, other threads block, and the resource is handed over only after it has been released. Exclusive locks in Java such as synchronized and ReentrantLock are implementations of the pessimistic-lock idea (see the sketch below).
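
A minimal sketch of the pessimistic style using ReentrantLock (the class and the withdraw example are my own, not from the original post): the data is locked exclusively before it is touched, and other threads block until the lock is released.

```java
import java.util.concurrent.locks.ReentrantLock;

public class PessimisticAccount {
    private final ReentrantLock lock = new ReentrantLock();
    private int balance = 100;

    // Pessimistic style: take an exclusive lock before touching the data,
    // so any other thread that wants it blocks until we release the lock.
    public void withdraw(int amount) {
        lock.lock();
        try {
            if (balance >= amount) {
                balance -= amount;
            }
        } finally {
            lock.unlock(); // always release in finally
        }
    }
}
```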

CAS Optimistic Lock

CAS is an optimistic-locking technique. When multiple threads try to update the same variable with CAS at the same time, only one of them succeeds in updating the value; the others fail. A failed thread is not suspended; it is simply told that it lost this round of the race and may try again.

A CAS operation involves three operands: the memory location V, the expected original value A, and the new value B. If the value at the memory location matches the expected original value, the processor atomically updates the location to the new value; otherwise it does nothing. In either case it returns the value that was at that location before the CAS instruction (some CAS variants only report whether the CAS succeeded, without returning the current value). CAS effectively says: "I think location V should contain value A; if it does, put B there; otherwise, leave it alone and just tell me the current value of that location." This is the same principle as optimistic locking's conflict check plus update: no read causes a lost update, and a version field can be used for control.
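
A minimal sketch of the optimistic, retry-on-failure style using AtomicInteger.compareAndSet (the class and method names are my own): a losing thread is never blocked, it just loops and tries again.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Optimistic style: read the current value, compute the new value,
    // and only write it back if nobody changed it in the meantime.
    public int increment() {
        while (true) {
            int expected = value.get();      // "I think V contains A"
            int update = expected + 1;       // the new value B
            if (value.compareAndSet(expected, update)) {
                return update;               // CAS succeeded
            }
            // CAS failed: another thread won the race; loop and retry
        }
    }
}
```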

  • Optimistic locking always assumes the best case: every time it reads data, it assumes nobody will modify it, so it does not lock; instead, at update time it checks whether anyone else has updated the data in the meantime, typically via a version-number mechanism or the CAS algorithm. Optimistic locks suit read-heavy applications and can improve throughput. The write_condition-style mechanisms provided by databases are in fact optimistic locks, and the atomic variable classes in Java's java.util.concurrent.atomic package are implemented with CAS, one implementation of optimistic locking.

Usage scenarios of the two locks

From the introduction above we can see that each lock has its own strengths and weaknesses; neither is simply better than the other. Optimistic locks suit write-light, read-heavy scenarios, where conflicts really do happen rarely: they save the overhead of locking and increase the overall throughput of the system. If writes dominate, conflicts occur frequently, the upper-layer application keeps retrying, and performance drops, so a pessimistic lock is more appropriate in write-heavy scenarios.

Talk about the CAS 'ABA' problem

An important premise of the CAS algorithm is that it reads the data from memory at one moment and compares-and-swaps it at a later moment; within that time window the data may have been changed and then changed back.

For example, thread one fetches A from memory location V. Meanwhile thread two also fetches A from V, performs some operations that turn it into B, and then changes the value at V back to A. When thread one now performs its CAS, it finds that V still contains A, so its CAS succeeds. But even though thread one's CAS succeeded, that does not mean the process was problem-free.

Some optimistic-lock implementations solve the ABA problem with a version number. Every data modification carries a version number; the modification is executed only if the carried version matches the data's current version, and the version is then incremented by 1, otherwise the update fails. Because the version number grows with every operation and never decreases, the ABA problem cannot occur.
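
The JDK ships the same idea as AtomicStampedReference, whose integer stamp plays the role of the version number. A minimal sketch (the class and method names are my own):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaSafeUpdate {
    // The stamp acts as the version number: it only ever increases here, so
    // even if the value goes A -> B -> A, a stale CAS fails because the
    // stamp no longer matches.
    private final AtomicStampedReference<String> ref =
            new AtomicStampedReference<>("A", 0);

    public boolean tryUpdate(String newValue) {
        int[] stampHolder = new int[1];
        // Read the current value and its stamp together.
        String current = ref.get(stampHolder);
        int stamp = stampHolder[0];
        // Succeeds only if BOTH the reference and the stamp are unchanged.
        return ref.compareAndSet(current, newValue, stamp, stamp + 1);
    }
}
```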

Optimistic lock business scenarios and implementation methods

Optimistic Lock:

  • Every time it reads the data, it does not worry that the data will be modified, so it does not lock on read; instead, when updating the data it checks whether the data has been modified by someone else in the meantime. If another thread has modified the data, the update is not performed; otherwise it is. Since the data is never locked, other threads can read and write it during that period. This is better suited to scenarios where read operations are frequent; if there are many writes, the likelihood of conflicts rises, and to keep the data consistent the application layer must keep re-reading the data, which adds a large number of query operations and lowers the throughput of the system (see the sketch below).
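
A common way to implement this in the business/database layer is an UPDATE guarded by the version column. A minimal JDBC sketch, assuming a hypothetical account table with id, balance and version columns (not from the original post):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OptimisticAccountDao {

    // Optimistic update: only succeeds if the row's version is still the one
    // we read earlier; the version is bumped in the same statement.
    public boolean updateBalance(Connection conn, long id, long newBalance,
                                 int expectedVersion) throws SQLException {
        String sql = "UPDATE account "
                   + "SET balance = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, newBalance);
            ps.setLong(2, id);
            ps.setInt(3, expectedVersion);
            return ps.executeUpdate() == 1; // 0 rows => someone else updated first, retry
        }
    }
}
```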

Briefly talk about spin locks

A spin lock is implemented by keeping the current thread executing inside a loop body; the thread can enter the critical section only when the loop condition is changed by another thread.

The implementation uses a CAS atomic operation: lock() sets owner to the current thread with an expected original value of null, and unlock() sets owner back to null with an expected value of the current thread (see the sketch below).
When a thread acquires this non-reentrant spin lock for the first time, there is no problem; but if the same thread calls lock() again, the reference held by the lock is no longer null, so the thread mistakenly concludes that some other thread holds the lock.
When a second thread calls lock() while owner is not null, it keeps executing the loop until the first thread calls unlock() and sets owner to null; only then can the second thread enter the critical section.
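
A minimal sketch of the non-reentrant spin lock described above, holding the owner in an AtomicReference (the class name is my own):

```java
import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    // owner == null means the lock is free.
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Spin until we manage to swing owner from null to ourselves.
        // Non-reentrant: if the same thread calls lock() twice, this loop
        // never exits (the problem described above).
        while (!owner.compareAndSet(null, current)) {
            // busy-wait: keep looping; the thread is never suspended
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        // Only the owner may release the lock: swing owner back to null.
        owner.compareAndSet(current, null);
    }
}
```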

Because a spin lock just keeps the current thread executing the loop body without changing the thread's state, it responds quickly. But as the number of threads grows, performance drops noticeably, because every spinning thread occupies CPU time. Spin locks are most suitable when lock contention is low and locks are held only for short periods.

At last

That is all for this article. I hope it is helpful to your study, and I would appreciate your support.


Origin blog.csdn.net/w1103576/article/details/109313294