What are the four necessary conditions for deadlock in Java multithreading? How can deadlock be avoided?

Generally, all four of the following conditions must hold at the same time for a deadlock to occur:
1. Mutual exclusion: at least one of the resources involved must be non-shareable. That is, for a period of time the resource can be held by only one thread, and other threads must wait until the holder releases it.
2. Hold and wait: a thread already holds at least one resource and requests a new one that is currently held by another thread; the requesting thread blocks, but does not release the resources it already holds.
3. No preemption: a resource cannot be forcibly taken away from the thread that holds it; it can only be released voluntarily by that thread once it is done with it.
4. Circular wait: a circular chain of threads exists in which each thread waits for a resource held by the next (in the simplest case, the first thread waits for the second to release a resource while the second waits for the first).
Because a deadlock requires all four conditions to hold simultaneously, breaking any one of them is enough to prevent deadlock.

How to avoid deadlock?

Answer:

Fix the order in which locks are acquired. For example:

Suppose a thread may operate on a certain resource only when it holds both lock A and lock B. How can deadlock be avoided with multiple threads?
Make the lock-acquisition order fixed: for instance, stipulate that only a thread that has already acquired lock A is eligible to acquire lock B. Acquiring locks in a fixed order avoids deadlock.
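As a minimal sketch of this ordering rule (the class, lock, and counter names here are illustrative, not from the original): every thread takes lock A before lock B, so no circular wait can form and the program always terminates.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OrderedLocking {
    static final Object lockA = new Object();
    static final Object lockB = new Object();
    static final AtomicInteger done = new AtomicInteger();

    // Rule: any thread that needs both locks must acquire lockA first,
    // then lockB. With a fixed order there can be no circular wait.
    static void doWork(String name) {
        synchronized (lockA) {
            synchronized (lockB) {
                System.out.println(name + " holds A and B");
                done.incrementAndGet();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> doWork("t1"));
        Thread t2 = new Thread(() -> doWork("t2"));
        t1.start(); t2.start();
        t1.join(); t2.join(); // always finishes: deadlock is impossible here
    }
}
```

Contrast this with the deadlock demo later in this article, where the two threads take the same two locks in opposite orders.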

So how should we deal with deadlock during development?

1) Prevent deadlock.

This is the simplest and most intuitive approach: prevent deadlock in advance by imposing restrictions that break one or more of the four necessary conditions. Deadlock prevention is relatively easy to implement and is widely used. However, because the imposed restrictions are often too strict, it may reduce resource utilization and system throughput.

2) Avoid deadlock.

This is also a form of advance prevention, but instead of imposing restrictions up front to break the four necessary conditions, it uses some method during dynamic resource allocation to keep the system from entering an unsafe state, thereby avoiding deadlock.

3) Detect deadlock.

This approach imposes no restrictions in advance and does not check whether the system is entering an unsafe state; it allows deadlock to occur at run time. Through a detection mechanism set up by the system, a deadlock can be discovered promptly, the processes and resources involved can be determined exactly, and appropriate measures can then be taken to remove the deadlock.

4) Recover from deadlock.

This is the companion measure to deadlock detection. When a deadlock is detected, the affected processes must be freed from the deadlocked state. The common implementation is to abort or suspend some processes in order to reclaim their resources, then allocate those resources to the processes blocked waiting for them, moving them to the ready state so they can continue running. Detection and recovery can yield better resource utilization and throughput, but they are also the hardest to implement.

Java

/**
 * Thread 1 acquires resource1 first; to continue it needs resource2, but
 * resource2 is held by Thread 2, so it can only wait for Thread 2 to release
 * it. Meanwhile Thread 2 needs resource1 and can only wait for Thread 1 to
 * release it. Both threads end up waiting on each other and neither can
 * proceed: a deadlock.
 */
public class DeadLock {

    public static void main(String[] args) {
        deadLock();
    }

    private static void deadLock() {
        // The two resources
        final Object resource1 = "resource1";
        final Object resource2 = "resource2";

        // The first thread takes resource1 first, then tries to take resource2
        Thread t1 = new Thread() {
            public void run() {
                // Try to acquire resource1
                synchronized (resource1) {
                    // resource1 acquired
                    System.out.println("Thread 1: locked resource1");
                    // Sleep for a while so the other thread can take resource2
                    try {
                        Thread.sleep(50);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    // Try to acquire resource2; if it is unavailable,
                    // this thread waits forever
                    synchronized (resource2) {
                        System.out.println("Thread 1: locked resource2");
                    }
                }
            }
        };

        // The second thread takes resource2 first, then tries to take resource1
        Thread t2 = new Thread() {
            public void run() {
                // Try to acquire resource2
                synchronized (resource2) {
                    // resource2 acquired
                    System.out.println("Thread 2: locked resource2");
                    // Sleep for a while so the other thread can take resource1
                    try {
                        Thread.sleep(50);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    // Try to acquire resource1; if it is unavailable,
                    // this thread waits forever
                    synchronized (resource1) {
                        System.out.println("Thread 2: locked resource1");
                    }
                }
            }
        };

        // Start both threads
        t1.start();
        t2.start();
    }
}

Another type of deadlock: recursive deadlock. A recursive function is one that invokes itself, directly or indirectly, in its body.
There are two kinds of recursion: direct and indirect. Direct recursion means the function calls itself; indirect recursion means the function calls another function, which in turn calls it back.
When is recursion used? Generally, when a piece of logic needs loop-like iteration but the number of iterations is not known in advance. For example, to find a certain file in a folder that contains many nested subfolders and files, you have to use recursion, because you do not know beforehand how many levels deep the folder tree goes.
The advantage of recursion is that it makes the code very concise, and some scenarios practically require it, such as the file search above. Recursion is useful, but it can also cause trouble: if you use it in a multi-threaded environment you have to face synchronization, and recursive code combined with synchronization is prone to problems.
Multi-threaded recursion means that a method on the recursive chain may be executed by more than one thread. In the code below, recursive() and businessLogic() call each other, and they are not confined to a single thread.

 
public class Test {

    public void recursive() {
        this.businessLogic();
    }

    public synchronized void businessLogic() {
        System.out.println("handling business logic");
        this.recursive();
    }
}

The code above is dangerous. Its flow is: recursive() is executed first, which enters the synchronized businessLogic() and takes the lock; after the print statement, businessLogic() calls recursive() again, which calls businessLogic() again, and so on without bound. Note that Java's intrinsic locks are reentrant, so the same thread re-entering businessLogic() does not block on itself; what actually happens is unbounded recursion while holding the lock, which ends in a StackOverflowError, and in the meantime every other thread that calls businessLogic() is blocked indefinitely. With a non-reentrant lock, the second entry really would deadlock the thread against itself.
The rule to take from this example is that locking a method on a recursive chain is dangerous (the recursive chain here being recursive() calling businessLogic(), and businessLogic() calling back into recursive()). The way to solve the problem is to avoid locking on the recursive chain. See the following example:

 
public class Test {

    public void recursive() {
        this.businessLogic();
    }

    public void businessLogic() {
        System.out.println("handling business logic");
        this.saveToDB();
        this.recursive();
    }

    public synchronized void saveToDB() {
        System.out.println("saving to database");
    }
}

saveToDB() is not on the recursive chain, so locking it causes no such problem. Locking inside recursion is dangerous; if a lock truly cannot be avoided, apply it at the smallest possible granularity in the code to reduce the probability of deadlock.

Avoid deadlock

Deadlock can be avoided in some situations. Three techniques are commonly used to avoid deadlock:

  • Locking sequence
  • Lock time limit
  • Deadlock detection

1. Locking sequence

When multiple threads need the same locks, but locks are added in different orders, deadlocks are prone to occur.
If you can ensure that all threads acquire locks in the same order, then deadlocks will not occur. Look at the following example:

Thread 1:
  lock A 
  lock B

Thread 2:
   wait for A
   lock C (when A locked)

Thread 3:
   wait for A
   wait for B
   wait for C

If a thread (such as thread 3) needs several locks, it must acquire them in the fixed order: it may take a later lock only after it has obtained all the locks that come earlier in the order.

For example, thread 2 and thread 3 can only try to acquire lock C after acquiring lock A (Translator's Note: acquiring lock A is a necessary condition for acquiring lock C). Because thread 1 already owns lock A, threads 2 and 3 need to wait until lock A is released. Then they must successfully lock A before they try to lock B or C.

Acquiring locks in a fixed order is an effective deadlock prevention mechanism. However, it requires knowing in advance all the locks that may be needed (so they can be ordered), which is sometimes impossible to predict.

2. Locking time limit

Another way to avoid deadlock is to put a timeout on lock acquisition: if the time limit is exceeded while trying to acquire a lock, the thread abandons that lock request. If a thread does not successfully acquire all the locks it needs within the given limit, it rolls back and releases every lock it has acquired, waits a random period of time, and then tries again. The random wait gives other threads a chance to acquire the same locks, and lets the application keep running even when a lock was not obtained (the thread can do other work before coming back to repeat the locking logic).

The following is an example that shows a scenario where two threads try to acquire the same two locks in different orders, then roll back and try again after a timeout occurs.

Thread 1 locks A
Thread 2 locks B

Thread 1 attempts to lock B but is blocked
Thread 2 attempts to lock A but is blocked

Thread 1's lock attempt on B times out
Thread 1 backs up and releases A as well
Thread 1 waits randomly (e.g. 257 millis) before retrying.

Thread 2's lock attempt on A times out
Thread 2 backs up and releases B as well
Thread 2 waits randomly (e.g. 43 millis) before retrying.

In the above example, thread 2 retries roughly 200 milliseconds earlier than thread 1, so it can successfully acquire both locks first. Thread 1 then tries to acquire lock A and waits. When thread 2 finishes, thread 1 can also successfully acquire both locks (unless thread 2 or some other thread grabs one of them before thread 1 gets both).

It should be noted that due to the lock timeout, we cannot assume that this scenario must be a deadlock. It may also be because the thread that acquired the lock (causing other threads to time out) takes a long time to complete its task.

In addition, if a large number of threads compete for the same batch of locks at the same time, then even with the timeout-and-rollback mechanism, the threads may retry repeatedly and still never obtain the locks. With only two threads and a retry timeout drawn from between 0 and 500 milliseconds, this is unlikely to happen, but with 10 or 20 threads the situation is different, because the probability that several threads draw the same retry delay (or delays close enough to cause problems) is much higher.
(Translator's note: the timeout-and-retry mechanism is meant to avoid simultaneous contention, but with many threads it becomes likely that two or more of them will pick the same or nearly the same back-off; after a contention-caused timeout they then start retrying at the same moment, producing a new round of contention and new problems.)

There is a problem with this mechanism: a timeout cannot be set on a synchronized block in Java. You need to create a custom lock, or use the utilities in the java.util.concurrent package introduced in Java 5. Writing a custom lock class is not complicated, but it is beyond the scope of this article; a later article in the Java concurrency series will cover custom locks.
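The timed-lock idea can be sketched with java.util.concurrent's ReentrantLock (the transfer method, timeout values, and retry count below are illustrative assumptions, not from the original): try each lock with a timeout, roll back everything on failure, and back off for a random interval before retrying.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLocking {
    // Try to take both locks within a time limit. On failure, release
    // everything already held, wait a random interval, and retry.
    static boolean transfer(ReentrantLock first, ReentrantLock second)
            throws InterruptedException {
        for (int attempt = 0; attempt < 10; attempt++) {
            if (first.tryLock(50, TimeUnit.MILLISECONDS)) {
                try {
                    if (second.tryLock(50, TimeUnit.MILLISECONDS)) {
                        try {
                            return true; // both locks held: do the work here
                        } finally {
                            second.unlock();
                        }
                    }
                } finally {
                    first.unlock(); // roll back: release what we acquired
                }
            }
            // Random back-off so competing threads don't retry in lockstep
            Thread.sleep(ThreadLocalRandom.current().nextInt(100));
        }
        return false; // gave up after repeated timeouts
    }
}
```

Because both locks are released before the back-off, a competing thread that took them in the opposite order gets a chance to finish, which is exactly the rollback behavior described in the scenario above.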

3. Deadlock detection

Deadlock detection is a better deadlock prevention mechanism, it is mainly for those scenarios where it is impossible to achieve sequential locks and lock timeout is not feasible.

Whenever a thread acquires a lock, it will be recorded in the data structure (map, graph, etc.) related to the thread and the lock. In addition, whenever a thread requests a lock, it also needs to be recorded in this data structure.

When a thread fails to request a lock, the thread can traverse the lock relationship graph to see if there is a deadlock. For example, thread A requests lock 7, but lock 7 is held by thread B at this time. At this time, thread A can check whether thread B has requested the lock currently held by thread A. If thread B does have such a request, then a deadlock has occurred (thread A owns lock 1, requesting lock 7; thread B owns lock 7, requesting lock 1).

Of course, deadlocks are generally more complicated than two threads holding each other's locks: thread A waits for thread B, B waits for C, C waits for D, and D in turn waits for A. For thread A to detect this deadlock, it must transitively examine the locks requested along the chain: starting from the lock requested by thread B, thread A reaches thread C, then thread D, and then finds that the lock thread D requests is held by thread A itself. At that point it knows a deadlock has occurred.
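The chain-walking just described can be sketched with a hypothetical wait-for map (a simplifying assumption here: each blocked thread is recorded as waiting on exactly one lock owner at a time):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class WaitForGraph {
    // waitsFor.get(t) = the thread that t is currently waiting on,
    // i.e. the owner of the lock t requested.
    private final Map<String, String> waitsFor = new HashMap<>();

    public void addWait(String waiter, String owner) {
        waitsFor.put(waiter, owner);
    }

    // Walk the chain starting at 'start'. If the walk comes back to
    // 'start' (or revisits any node), the waits form a cycle: deadlock.
    public boolean hasDeadlock(String start) {
        Set<String> seen = new HashSet<>();
        String current = waitsFor.get(start);
        while (current != null && seen.add(current)) {
            if (current.equals(start)) return true;
            current = waitsFor.get(current);
        }
        return current != null; // revisited a node: a cycle blocks 'start'
    }
}
```

For the A→B→C→D→A scenario above, hasDeadlock("A") follows the chain through B, C, and D and reports the cycle. (In real code, the JDK's ThreadMXBean.findDeadlockedThreads() performs this kind of detection on monitor and ownable-synchronizer locks.)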

The following is a diagram of the relationship between lock occupancy and request among four threads (A, B, C, and D). Data structures like this can be used to detect deadlocks.
(Figure: the lock-ownership and lock-request relationships among threads A, B, C, and D.)
So what should these threads do when a deadlock is detected?

One possible approach is to release all locks, roll back, and wait a random period of time before trying again. This is similar to a simple lock timeout. The difference is that only the deadlock has occurred will be rolled back, not because the lock request has timed out. Although there are rollbacks and waits, if a large number of threads compete for the same batch of locks, they will still deadlock repeatedly (Editor's Note: The reason is similar to the timeout, which cannot fundamentally reduce the contention).

A better solution is to set the priority for these threads, let one (or several) threads roll back, and the remaining threads continue to hold the locks they need as if there is no deadlock. If the priority given to these threads is fixed, the same batch of threads will always have a higher priority. To avoid this problem, you can set a random priority when a deadlock occurs.

Banker's algorithm

We can regard the operating system as a banker: the resources managed by the operating system correspond to the banker's funds, and a process requesting the operating system to allocate resources corresponds to a customer asking the banker for a loan.
To keep the funds safe, the banker stipulates:
(1) A customer can be accepted only if the customer's maximum demand for funds does not exceed the banker's existing funds;
(2) A customer may borrow in installments, but the total borrowed cannot exceed the maximum demand;
(3) When the banker's existing funds cannot satisfy a customer's loan request, the loan may be postponed, but the customer is guaranteed to receive it within a finite time;
(4) After a customer has obtained all the funds needed, the customer will return all of them within a finite time.

The operating system allocates resources to processes according to the banker's rules. When a process first requests resources, the system checks the process's maximum demand: if the currently available resources can satisfy that maximum, resources are allocated according to the current request; otherwise allocation is postponed. When a running process requests more resources, the system first checks whether the request exceeds the total resources remaining; if so, the request is refused. Otherwise it checks whether granting the request would leave the system in a safe state: if yes, resources are allocated according to the current request; if not, allocation is likewise postponed.
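The safety check at the heart of the banker's algorithm can be sketched as follows (the class name and the matrices used to exercise it are illustrative): repeatedly find a process whose remaining need fits within the currently available resources, let it run to completion and return its allocation, and report "unsafe" if some process can never be chosen.

```java
import java.util.Arrays;

public class Banker {
    // available[r]   = free units of resource r
    // allocation[p][r] = units of r currently held by process p
    // need[p][r]       = units of r that p may still request (max - allocation)
    public static boolean isSafe(int[] available, int[][] allocation, int[][] need) {
        int n = allocation.length, m = available.length;
        int[] work = Arrays.copyOf(available, m);
        boolean[] finished = new boolean[n];
        boolean progress = true;
        while (progress) {
            progress = false;
            for (int p = 0; p < n; p++) {
                if (finished[p]) continue;
                boolean canRun = true;
                for (int r = 0; r < m; r++) {
                    if (need[p][r] > work[r]) { canRun = false; break; }
                }
                if (canRun) {
                    // Pretend p runs to completion and returns its resources
                    for (int r = 0; r < m; r++) work[r] += allocation[p][r];
                    finished[p] = true;
                    progress = true;
                }
            }
        }
        for (boolean f : finished) if (!f) return false;
        return true; // a safe completion sequence exists
    }
}
```

The OS would run this check with the state that would result from a tentative grant; the grant is made only if the resulting state is safe, otherwise it is postponed.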


Origin blog.csdn.net/weixin_42118981/article/details/111467309