Multithreading (Advanced 1: Lock Strategy)

Table of contents

1. Optimistic locking and pessimistic locking

2. Lightweight locks and heavyweight locks

3. Spin locks and suspend-wait locks

4. Ordinary mutex locks and read-write locks

5. Fair locks and unfair locks

6. Reentrant locks and non-reentrant locks

7. A simple comparison between synchronized and Linux mutex locks

8. Synchronized adaptation

1. Lock upgrade

(1) Biased lock stage

(2) Lightweight lock stage

(3) Heavyweight lock stage

2. Lock elimination

3. Lock coarsening


1. Optimistic locking and pessimistic locking

Optimistic locking: before locking, the thread assumes the probability of lock conflict is low, so it does little preparation work when locking. Because there is less to do, acquiring the lock tends to be fast.

Pessimistic locking: before locking, the thread assumes the probability of lock conflict is high, so it does a lot of preparation work when locking. Because there is more to do, acquiring the lock tends to be slower.
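The optimistic approach is commonly realized with a CAS (compare-and-swap) loop. Here is a minimal sketch using `AtomicInteger` (the class name `CasCounter` is just for illustration): the thread assumes no one else changed the value, and retries only when that assumption turns out wrong.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Optimistic: read the current value, assume no conflict happened,
    // and retry the CAS only if another thread changed it meanwhile.
    public void increment() {
        int oldVal;
        do {
            oldVal = value.get();
        } while (!value.compareAndSet(oldVal, oldVal + 1));
    }

    public int get() {
        return value.get();
    }

    public static void main(String[] args) throws InterruptedException {
        CasCounter c = new CasCounter();
        Runnable task = () -> {
            for (int i = 0; i < 50_000; i++) c.increment();
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // always 100000, with no mutex at all
    }
}
```

Under low contention the CAS almost always succeeds on the first try, which is exactly why the optimistic style is cheap.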


2. Lightweight locks and heavyweight locks

Lightweight lock: after the code has actually run, lock conflicts turn out to be rare, so the cost of locking is low and acquisition is fast.

Heavyweight lock: after the code has actually run, lock conflicts turn out to be frequent, so the cost of locking is high and acquisition is slow.

Note: the difference from optimistic and pessimistic locks is one of perspective. Optimistic/pessimistic describe an estimate made before locking, while lightweight/heavyweight describe the cost actually observed after locking.


3. Spin locks and suspend-wait locks

Spin lock: a typical implementation of a lightweight lock. If an attempt to acquire the lock fails, the thread does not give up the competition; it immediately loops and tries again, and keeps looping until the acquisition succeeds.

This rapid, repeated retrying is called "spinning". A spin lock is also an optimistic lock: the moment another thread releases the lock, the spinning thread can acquire it, so acquisition is very fast. But if many threads are competing for the lock, spin locks are a poor choice, because all the spinning wastes CPU resources.

Suspend-wait lock: a typical implementation of a heavyweight lock, and also a pessimistic lock. If an attempt to acquire the lock fails, the thread is suspended and waits for a while. During that time it does not compete for the lock at all, so the CPU it would have burned is freed up for other work. After a while it tries to lock again; if it fails, it repeats the process, and if it succeeds, it holds the lock.

Suspending and waking a thread involves the kernel scheduler, which means more work per acquisition, so acquiring the lock takes longer.
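A minimal spin lock can be sketched with an atomic CAS. This is an illustrative sketch only (class and method names are made up; there is no fairness or backoff), not a production implementation:

```java
import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    // Holds the owning thread, or null when the lock is free.
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Spin: keep retrying the CAS until it succeeds. The thread never
        // suspends, so it grabs the lock the instant the holder releases
        // it, at the cost of burning CPU while it waits.
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait(); // hint to the CPU that we are busy-waiting
        }
    }

    public void unlock() {
        // Only the owning thread may release the lock.
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```

Contrast this with a suspend-wait lock, where a failed `lock()` would park the thread in the kernel instead of looping.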


4. Ordinary mutex locks and read-write locks

Ordinary mutex lock: similar to synchronized; the only operations involved are lock and unlock.

Read-write lock: there are two ways to lock: acquiring a read lock and acquiring a write lock.

There is no conflict (no blocking) between a read lock and another read lock.

There is a conflict (blocking) between a write lock and another write lock.

There is a conflict (blocking) between a read lock and a write lock.

When a thread holds a read lock, other threads can read but not write.

When a thread holds a write lock, other threads can neither read nor write.

Why introduce read-write lock?

If two threads only read, the operation itself is safe and needs no locking. But if we use synchronized, read and read will still compete for the lock (blocking). We also cannot simply leave reads unlocked, because if one thread reads while another thread writes, there is definitely a thread-safety problem.

Read-write locks can solve the above problems very well.

When there are only read operations, nothing is modified, so every thread reads the same unchanged data; taking an exclusive lock there just wastes resources. It is better to take a read lock instead: reads do not compete with each other, and lock competition happens only when a write is involved. This avoids a great deal of unnecessary overhead (the cost of lock conflicts).
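In Java, the standard library's `ReentrantReadWriteLock` implements exactly these rules. A minimal sketch (the `SharedCounter` class name is just for illustration):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedCounter {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int count = 0;

    // Write lock: exclusive. Conflicts with both readers and writers.
    public void increment() {
        rwLock.writeLock().lock();
        try {
            count++;
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    // Read lock: shared. Any number of readers may hold it at once,
    // so read-heavy workloads see almost no lock competition.
    public int get() {
        rwLock.readLock().lock();
        try {
            return count;
        } finally {
            rwLock.readLock().unlock();
        }
    }
}
```

The `try/finally` pattern guarantees the lock is released even if the guarded code throws.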


5. Fair locks and unfair locks

Fair lock: when threads have been waiting for different lengths of time, the thread that has waited longest acquires the lock first, in order (first come, first served).

Unfair lock: there is no ordering among the waiting threads; scheduling is effectively random, and each thread grabs the lock as best it can.

Fair locks effectively solve the problem of thread starvation. Implementing one requires an extra data structure: a queue that records the order in which threads arrived.
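In Java, `ReentrantLock` lets you choose between the two policies in its constructor (synchronized offers no such choice). A short sketch, with an illustrative class name:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        // true -> fair: waiting threads acquire the lock FIFO via an
        // internal queue, which prevents starvation at some throughput cost.
        ReentrantLock fairLock = new ReentrantLock(true);

        // false (the default) -> unfair: a newly arriving thread may
        // "barge" ahead of queued threads; usually higher throughput.
        ReentrantLock unfairLock = new ReentrantLock();

        System.out.println(fairLock.isFair());   // true
        System.out.println(unfairLock.isFair()); // false
    }
}
```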


6. Reentrant locks and non-reentrant locks

Reentrant lock: with synchronized, a thread that already holds a lock can acquire the same lock again, so locked regions can be nested. This is implemented with a counter: each acquisition increments it, each release decrements it, and the lock is truly released only when the counter returns to zero. That counter is what makes the lock reentrant.

Non-reentrant lock: the operating system's native lock cannot be acquired twice in a row by the same thread; attempting to do so deadlocks.
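A tiny sketch of synchronized's reentrancy (the class name is illustrative). If the monitor were not reentrant, `outer()` would deadlock against itself the moment it called `inner()`:

```java
public class ReentrantDemo {
    // Both methods lock `this`. Because synchronized is reentrant, the
    // JVM keeps a per-lock acquisition counter: outer() raises it to 1,
    // inner() raises it to 2, and the monitor is truly released only
    // when the counter falls back to 0.
    public synchronized String outer() {
        return "outer->" + inner(); // same thread re-acquires the same monitor
    }

    public synchronized String inner() {
        return "inner";
    }
}
```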


7. A simple comparison between synchronized and Linux mutex locks

synchronized: adaptive between optimistic and pessimistic locking

              adaptive between lightweight and heavyweight locking

              adaptive between spinning and suspend-waiting

              not a read-write lock

              an unfair lock

              a reentrant lock

mutex:        a pessimistic lock

              a heavyweight lock

              a suspend-wait lock

              not a read-write lock

              an unfair lock

              a non-reentrant lock


8. Synchronized adaptation

synchronized is heavily optimized internally and is adaptive; you can usually use it without much thought, and its efficiency will not be bad.

1. Lock upgrade

(1) Biased lock stage

Core idea: "lazy mode", that is, do not actually lock until you must; if locking can be postponed, postpone it. A so-called biased lock is not a real lock at all, just a very lightweight marker.

As long as no other thread competes for the lock, the thread holding the biased lock can acquire it almost for free, because the next thread to take the lock is most likely the same thread the lock is biased toward. In short: when there is lock competition, biasing does not improve efficiency; when there is no competition, biasing improves efficiency greatly.

The marker is an attribute stored in the lock object, and each lock carries its own. The first time a lock object is locked, no competition is involved, and the lock is in the biased-lock stage. As soon as lock competition appears, it is upgraded to the lightweight-lock stage.

(2) Lightweight lock stage

The lightweight lock here is implemented by spinning. When there is some lock competition, but not much, the lock stays in the lightweight stage. Its advantage: the moment another thread releases the lock, a spinning thread can grab it immediately. Its disadvantage: spinning is a loop that keeps retrying the lock, so it consumes extra CPU.

When too many threads compete, lightweight locking stops being appropriate: one thread holds the lock while all the others loop, retrying and failing, which wastes a great deal of CPU. At that point the lock is upgraded from the lightweight stage to the heavyweight stage.

(3) Heavyweight lock stage

The heavyweight lock here is implemented by suspend-and-wait. When many threads compete for the lock at once, the threads that fail to acquire it do not keep retrying; they block and wait, try again after a while, and if still unsuccessful block again, repeating until they succeed. Blocking gives up the CPU, which can then be used for other work.

Note: locks can only be upgraded here, never downgraded. That is true of the current version; future versions could, in principle, add downgrading.

2. Lock elimination

Lock elimination is also an internal optimization of synchronized. Sometimes code that clearly does not need a lock is locked anyway; in that case the compiler simply removes the lock, since lock operations themselves consume hardware resources.

Note: the difference between lock elimination and biased locking

        Lock elimination: for code the compiler can tell at a glance involves no thread-safety issue, the lock is removed at compile/JIT time.

        Biased lock: the absence of lock competition is discovered only after the code runs.
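A classic candidate for lock elimination (hedged: whether elimination actually happens depends on the JVM and its escape analysis): every method of `StringBuffer` is synchronized, but when the buffer never escapes the method, those locks are provably useless and the JIT may remove them.

```java
public class LockEliminationDemo {
    // `sb` is a local variable that never escapes this method, so no
    // other thread can ever touch it. Every append() call still locks
    // the StringBuffer, but escape analysis can prove the locks are
    // pointless, allowing the JIT to eliminate them entirely.
    public static String concat(String a, String b, String c) {
        StringBuffer sb = new StringBuffer();
        sb.append(a);
        sb.append(b);
        sb.append(c);
        return sb.toString();
    }
}
```

The behavior is identical with or without the optimization; only the locking overhead disappears.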

3. Lock coarsening

Normally we prefer finer-grained locks, which is better for concurrency, but sometimes making the lock coarser improves efficiency, and then we want the lock to be coarser.

Lock coarsening: merging multiple fine-grained lock acquisitions into one coarse-grained acquisition.

Locking and unlocking repeatedly within a stretch of code consumes extra hardware resources. If those repeated lock/unlock operations can be optimized into a single lock and unlock around the whole stretch, efficiency improves; that is exactly the purpose of lock coarsening.
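The transformation can be sketched as follows (class and method names are illustrative; the JIT performs this automatically for adjacent synchronized blocks on the same lock):

```java
public class CoarseningDemo {
    private final Object lock = new Object();
    private int sum = 0;

    // Before coarsening: lock and unlock once per iteration.
    public int fineGrained(int n) {
        for (int i = 1; i <= n; i++) {
            synchronized (lock) {
                sum += i;
            }
        }
        return sum;
    }

    // After coarsening: one lock/unlock around the whole loop.
    // Same result, far fewer lock operations.
    public int coarsened(int n) {
        synchronized (lock) {
            for (int i = 1; i <= n; i++) {
                sum += i;
            }
        }
        return sum;
    }
}
```

Both methods compute the same sum; the coarsened version just pays the locking cost once instead of n times.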

Origin blog.csdn.net/cool_tao6/article/details/134886923