[java] Detailed explanation of java multi-threading and thread pool

Preface

What is a thread? What is multithreading?

Thread: the smallest unit of execution that the operating system can schedule. It is contained within a process and is the actual working unit of the process; a thread is a single sequential flow of control within a program.
Multithreading: a technique in which multiple threads execute concurrently.

The role, benefits and disadvantages of multithreading

Role: make full use of CPU resources by using multiple threads to do several things at the same time without them interfering with each other.

Benefits:
Long-running tasks, such as downloading images or videos, can be handed to background threads.
Multi-core processors can be exploited: concurrent execution makes the system faster and smoother, which improves the user experience.

Disadvantages:
① A large number of threads reduces code readability.
② More threads consume more memory.
③ Thread-safety problems can arise.

Daemon threads and user threads

Daemon thread: a thread that serves the other threads; the JVM exits once only daemon threads remain. Example: the GC thread.
User thread: a thread created by the application. Example: the main thread.

Extension:
thread.setDaemon(false) marks the thread as a user thread (the default).
thread.setDaemon(true) marks the thread as a daemon thread; it must be called before start().
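
A minimal sketch (the class name and printed messages are illustrative) of marking a daemon thread before start() and watching it stop when the last user thread exits:

public class DaemonDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread daemon = new Thread(() -> {
            while (true) {
                System.out.println("daemon thread working...");
                try {
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    return;
                }
            }
        }, "daemon");

        daemon.setDaemon(true); // must be called before start(), otherwise IllegalThreadStateException
        daemon.start();

        Thread.sleep(1500);     // main (a user thread) works for a while
        System.out.println("main ends; the JVM exits and the daemon thread is stopped with it");
    }
}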

The difference between concurrency and parallelism

Concurrency: one processor handles multiple tasks by switching between them (one person alternately taking bites from two apples).
Parallelism: multiple processors handle multiple tasks at the same time (two people each eating an apple at the same time).

1. Thread status and common methods

1. Various state transition diagrams of threads

(Figure: thread state transition diagram covering NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, and TERMINATED.)

2. Common thread-related methods

① wait()

  • wait(): the thread waits and releases the lock. It must be called inside a synchronized block or synchronized method, otherwise an error (IllegalMonitorStateException) is thrown; the thread then enters the waiting state. It is used together with notify() and notifyAll().
  • wait(long timeout): wait for at most the specified time. If the thread is woken up within that time, it continues to execute; if the timeout expires, it resumes on its own.
public class TestWait implements Runnable {

    private Object lock;

    public TestWait(Object lock) {
        this.lock = lock;
    }

    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        // create two threads sharing the same lock
        Thread t1 = new Thread(new TestWait(lock), "t1");
        Thread t2 = new Thread(new TestWait(lock), "t2");
        t1.start();
        t2.start();
    }

    @Override
    public void run() {
        synchronized (lock) {
            System.out.println(Thread.currentThread().getName() + " is about to wait");
            try {
                // wake up a waiting thread (if any), moving it to the ready state
                lock.notify();

                // wait and release the lock
                lock.wait();
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
            System.out.println(Thread.currentThread().getName() + " finished waiting, continuing");
        }
    }
}

Output:
t2 is about to wait
t1 is about to wait
t2 finished waiting, continuing

The output is not unique, but one thread always fails to finish.
The steps above are:
1. t2 runs, calls notify() to wake a waiting thread (the wait set is still empty), then calls wait() and joins the wait set.
2. t1 runs, calls notify() to wake a waiting thread (this time t2 is woken; since t2 is the only thread in the wait set, it is the one that gets the CPU), then calls wait() and joins the wait set.
3. t2 continues and finishes; t1 is still in the wait set and is never woken, so t1 never completes.


Note: a woken thread leaves the wait set, enters the ready state, and then competes for the CPU.

② sleep(long timeout)

  • sleep(): the thread sleeps and does not release the lock; it continues executing on its own after the specified time. It is often used to simulate network delay.
public class TestSleep implements Runnable {

    public static void main(String[] args) throws InterruptedException {
        // create two threads
        Thread t1 = new Thread(new TestSleep(), "t1");
        Thread t2 = new Thread(new TestSleep(), "t2");
        t1.start();
        t2.start();
    }

    @Override
    public void run() {
        System.out.println(Thread.currentThread().getName() + " is about to sleep");
        try {
            // sleep for two seconds, then continue; any lock held is not released
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        System.out.println(Thread.currentThread().getName() + " finished sleeping, continuing");
    }
}

Output:
t1 is about to sleep
t2 is about to sleep
t1 finished sleeping, continuing
t2 finished sleeping, continuing

The output order is not unique.

③ join()

  • join(): makes the current thread wait until the specified thread has finished, so two threads that would otherwise run alternately are merged into sequential execution. It is implemented on top of wait(). It suits cases where several threads must complete before the main thread may continue. For example, if three people go to a restaurant together, the food is only served once all three have arrived.

Normal execution

public class TestJoin {

    public static void main(String[] args) {
        // thread t1
        Thread t1 = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(2000);
                    System.out.println("t1 executed");
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });

        // thread t2
        Thread t2 = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(3000);
                    System.out.println("t2 executed");
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });

        t1.start();
        t2.start();
        System.out.println("main executed");
    }
}

Output:
main executed
t1 executed
t2 executed

Let the main thread (main) execute last, and let t1 and t2 execute first.

public class TestJoin {

    public static void main(String[] args) throws InterruptedException {
        // thread t1
        Thread t1 = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    System.out.println("t1 executed");
                    Thread.sleep(4000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });

        // thread t2
        Thread t2 = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    System.out.println("t2 executed");
                    Thread.sleep(3000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });

        t1.start();
        t2.start();
        // t1 joins the current thread (main), so main waits for t1 to finish before continuing; t2 is not affected
        t1.join();
        System.out.println("main executed");
    }
}

Output:
t2 executed
t1 executed
main executed

or

t1 executed
t2 executed
main executed

Let t1 and t2 execute in order

public class TestJoin {

    public static void main(String[] args) throws InterruptedException {
        // thread t1
        Thread t1 = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    System.out.println("t1 executed");
                    Thread.sleep(2000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });

        // thread t2
        Thread t2 = new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    System.out.println("t2 executed");
                    Thread.sleep(3000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        });

        t1.start();
        // t1 joins the current thread, so main waits for t1 to finish; t2 is only started after t1 completes
        t1.join();
        t2.start();
    }
}

Output:
t1 executed
t2 executed

④ yield()

  • yield(): the currently running thread yields, giving up the CPU and returning to the ready state, after which it competes for the CPU again.
public class TestYield {

    public static void main(String[] args) throws InterruptedException {
        // thread t1
        Thread t1 = new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("t1 executing");
                Thread.yield(); // t1 gives up the CPU and returns to the ready state
                System.out.println("t1 finished");
            }
        });

        // thread t2
        Thread t2 = new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("t2 executing");
                System.out.println("t2 finished");
            }
        });

        t1.start();
        t2.start();
    }
}

If t1 runs first, it yields the CPU and returns to the ready state; t2 then runs to completion, after which t1 competes for the CPU again and continues.

⑤ notify()和notifyAll()

  • notify(): wakes up one thread in the waiting state (chosen arbitrarily), which then enters the ready state.
public class TestNotify implements Runnable {

    static Object lock = new Object();

    public static void main(String[] args) {
        Thread thread1 = new Thread(new TestNotify(), "t1");
        Thread thread2 = new Thread(new TestNotify(), "t2");
        thread1.start();
        thread2.start();
        try {
            Thread.sleep(1000); // let t1 and t2 run first
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        synchronized (lock) {
            System.out.println(Thread.currentThread().getName() + " acquired the lock");
            System.out.println("before notify, " + thread1.getName() + " state is " + thread1.getState());
            System.out.println("before notify, " + thread2.getName() + " state is " + thread2.getState());
            lock.notify(); // wake up one waiting thread (chosen arbitrarily)
        }
        try {
            Thread.sleep(1000); // let the woken thread run
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("after notify, " + thread1.getName() + " state is " + thread1.getState());
        System.out.println("after notify, " + thread2.getName() + " state is " + thread2.getState());
    }

    @Override
    public void run() {
        synchronized (lock) {
            System.out.println(Thread.currentThread().getName() + " started");
            try {
                lock.wait(); // enter the waiting state
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(Thread.currentThread().getName() + " finished");
        }
    }
}

Output:
t1 started
t2 started
main acquired the lock
before notify, t1 state is WAITING
before notify, t2 state is WAITING
t1 finished
after notify, t1 state is TERMINATED
after notify, t2 state is WAITING

The output is not unique. As shown above, notify() woke one thread; here it happened to be t1, but it could just as well have been t2. The thread that is not woken stays in the waiting state forever, so the program never terminates.
  • notifyAll(): wakes up all threads in the wait set; they enter the ready state and compete for the lock.
package com.navi.vpx;

public class TestNotify implements Runnable {

    static Object lock = new Object();

    public static void main(String[] args) {
        Thread thread1 = new Thread(new TestNotify(), "t1");
        Thread thread2 = new Thread(new TestNotify(), "t2");
        thread1.start();
        thread2.start();
        try {
            Thread.sleep(1000); // let t1 and t2 run first
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        synchronized (lock) {
            System.out.println(Thread.currentThread().getName() + " acquired the lock");
            System.out.println("before notifyAll, " + thread1.getName() + " state is " + thread1.getState());
            System.out.println("before notifyAll, " + thread2.getName() + " state is " + thread2.getState());
            lock.notifyAll(); // wake up all waiting threads
        }
        try {
            Thread.sleep(1000); // let the woken threads run
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("after notifyAll, " + thread1.getName() + " state is " + thread1.getState());
        System.out.println("after notifyAll, " + thread2.getName() + " state is " + thread2.getState());
    }

    @Override
    public void run() {
        synchronized (lock) {
            System.out.println(Thread.currentThread().getName() + " started");
            try {
                lock.wait(); // enter the waiting state
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(Thread.currentThread().getName() + " finished");
        }
    }
}
Output:
t1 started
t2 started
main acquired the lock
before notifyAll, t1 state is WAITING
before notifyAll, t2 state is WAITING
t2 finished
t1 finished
after notifyAll, t1 state is TERMINATED
after notifyAll, t2 state is TERMINATED


The output is not unique. As shown above, notifyAll() wakes all waiting threads; they all enter the ready state, compete for the lock, eventually finish, and the program terminates.

3.What is the difference between wait() and sleep()?

① wait() comes from Object, sleep() comes from Thread.

② wait() will release the lock, sleep() will not release the lock.

③ wait() can only be used in synchronized methods or code blocks, sleep() can be used anywhere.

④ wait() must be woken up by notify()/notifyAll() (or by its timeout), while sleep() resumes on its own when the time is up; both declare InterruptedException, which has to be caught or declared.

4.Why are the wait(), notify(), and notifyAll() methods defined in the Object class instead of the Thread class?

① A lock can be any object. If these methods were defined in the Thread class, only Thread objects could be used as locks.

② When a thread enters a critical section (a synchronized block or method) in Java, it only needs to acquire the lock of some object; it does not matter which thread holds the lock.

③ These methods are the communication mechanism between Java threads, and that mechanism works through object monitors (the same monitors used by synchronized), so the Object class is the natural place to define them; this way every Java object can take part in thread communication.

2. Ways to implement multi-threading

1. Inherit the Thread class (only one class can be inherited)

public class MyThread extends Thread {

    @Override
    public void run() {
        System.out.println("run() executed");
    }

    public static void main(String[] args) {
        // create the thread object
        MyThread myThread = new MyThread();

        // start the thread, which calls run()
        myThread.start();
    }
}

2. Implement the Runnable interface (can implement multiple interfaces)

public class MyRunable implements Runnable {

    // overridden run() method
    @Override
    public void run() {
        System.out.println("run() executed");
    }

    public static void main(String[] args) {
        // create the Runnable object
        MyRunable myRunable = new MyRunable();

        // create the thread object
        Thread thread = new Thread(myRunable);

        // start the thread, which calls run()
        thread.start();
    }
}

3. Use Callable and FutureTask to implement multi-threading with returned results

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class MyCallable implements Callable<String> {

    // overridden call() method
    @Override
    public String call() {
        System.out.println("call() executed");
        return "return value of call()";
    }

    public static void main(String[] args) throws ExecutionException, InterruptedException {
        // create the Callable object
        MyCallable myCallable = new MyCallable();

        // wrap it in a FutureTask
        FutureTask<String> futureTask = new FutureTask<>(myCallable);

        // create the thread object
        Thread thread = new Thread(futureTask);

        // start the thread, which calls call()
        thread.start();

        // receive the return value of call()
        String result = futureTask.get();
        System.out.println(result);
    }
}

4. Use a thread pool – explained in detail below

5. Additional question: the difference between Runnable and Callable

① Runnable's run() has no return value; Callable's call() returns a value.

② Runnable's run() cannot throw checked exceptions (they must be handled inside run()), while Callable's call() can declare and throw checked exceptions, which the caller can catch, as shown in the sketch below.
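
A minimal sketch (the class name is illustrative) contrasting the two: run() must handle checked exceptions internally, while call() can declare them, and the caller then sees them wrapped in an ExecutionException from Future.get():

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;

public class RunnableVsCallable {
    public static void main(String[] args) throws InterruptedException {
        // Runnable: no return value, checked exceptions must be handled inside run()
        Runnable runnable = () -> {
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace(); // cannot be rethrown as a checked exception
            }
        };
        new Thread(runnable).start();

        // Callable: returns a value and may declare checked exceptions;
        // whether it throws here depends on the clock, purely for illustration
        Callable<String> callable = () -> {
            if (System.currentTimeMillis() % 2 == 0) {
                throw new Exception("checked exception from call()");
            }
            return "result from call()";
        };
        FutureTask<String> task = new FutureTask<>(callable);
        new Thread(task).start();

        try {
            System.out.println(task.get()); // the checked exception surfaces here
        } catch (ExecutionException e) {
            System.out.println("caught: " + e.getCause().getMessage());
        }
    }
}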

3. Thread pool (4 major methods, 7 major parameters, 4 rejection strategies)

1. Benefits of thread pool

① Threads are scarce resources. Using a thread pool can reduce the creation and destruction of threads, and each thread can be reused.

② The number of threads in the thread pool can be adjusted according to the needs of the system to prevent the server from crashing due to excessive memory consumption.

2. Seven parameters (using bank counter as an example)

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler)
  • corePoolSize: the number of core threads (the minimum number of counters kept open)

The minimum number of threads the pool keeps alive; once created they are not reclaimed (unless allowCoreThreadTimeOut is set to true, in which case idle core threads are also reclaimed after keepAliveTime). When the pool's execute() method receives a new task, it first checks whether the number of running threads has reached corePoolSize; if not, a new thread is created to execute the task.

    /** ThreadPoolExecutor source:
     * If false (default), core threads stay alive even when idle.
     * If true, core threads use keepAliveTime to time out waiting
     * for work.
     */
    private volatile boolean allowCoreThreadTimeOut;   // whether core threads can be reclaimed
  • maximumPoolSize: maximum number of threads (the maximum number of counters that can be opened)

The maximum number of threads the pool is allowed to create.

When a new task arrives, the core threads are all busy, the work queue is full, and the current thread count is still below the maximum, a new (non-core) thread is created to execute the task.

  • keepAliveTime: idle thread keep-alive time (how long an unused counter stays open before it is closed)

When a reclaimable thread stays idle for longer than keepAliveTime, it is reclaimed.

Threads that can be reclaimed:
① Core threads, if allowCoreThreadTimeOut is set to true.
② Threads beyond the core pool size (non-core threads).

  • unit: the time unit of keepAliveTime

Options: NANOSECONDS (nanoseconds, 1000 nanoseconds = 1 microsecond), MICROSECONDS (microseconds, 1000 microseconds = 1 millisecond), MILLISECONDS (milliseconds, 1000 milliseconds = 1 second), SECONDS (seconds), MINUTES (minutes), HOURS (hours), DAYS (days).
  • workQueue: work queue (the waiting area; only when it is full are extra counters opened, up to the maximum)

The queue that holds tasks waiting to be executed.

  • threadFactory: thread factory

The factory used to create new threads (for example, to give them readable names).

  • handler: rejection policy (all counters are busy and the waiting area is full, so new customers are turned away)

The handler invoked for tasks that cannot be accepted because both the pool and the queue are full.
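
To make the seven parameters concrete, here is a minimal sketch of constructing a pool by hand (the sizes, queue capacity, and thread-name prefix are illustrative choices, not values from the original article); constructing ThreadPoolExecutor directly like this is also what the Alibaba manual suggests instead of the Executors factory methods.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ManualPoolDemo {
    public static void main(String[] args) {
        // a simple thread factory that gives threads readable names
        ThreadFactory namedFactory = new ThreadFactory() {
            private final AtomicInteger count = new AtomicInteger(1);
            @Override
            public Thread newThread(Runnable r) {
                return new Thread(r, "demo-pool-" + count.getAndIncrement());
            }
        };

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                   // corePoolSize: 2 counters always open
                4,                                   // maximumPoolSize: at most 4 counters
                60L, TimeUnit.SECONDS,               // keepAliveTime + unit for idle non-core threads
                new ArrayBlockingQueue<>(10),        // workQueue: bounded waiting area of 10 tasks
                namedFactory,                        // threadFactory
                new ThreadPoolExecutor.CallerRunsPolicy()); // handler: rejection policy

        for (int i = 0; i < 20; i++) {
            final int taskId = i;
            pool.execute(() ->
                    System.out.println(Thread.currentThread().getName() + " runs task " + taskId));
        }
        pool.shutdown();
    }
}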

3. Four major methods

(Figure: the four Executors factory methods: newCachedThreadPool, newFixedThreadPool, newSingleThreadExecutor, and newScheduledThreadPool, each described below.)

Note: why is it not recommended to use the Executors factory methods that come with the JDK to create a thread pool?

(The reasoning comes from the Alibaba Java Development Manual: FixedThreadPool and SingleThreadExecutor use an unbounded work queue, which can pile up requests and cause OOM, while CachedThreadPool and ScheduledThreadPool allow up to Integer.MAX_VALUE threads, which can also cause OOM.)

① ExecutorService executor = Executors.newCachedThreadPool()

Creates a cacheable thread pool. If the pool grows beyond what is needed, idle threads are reclaimed flexibly; if no idle thread is available, a new one is created.
Advantage: threads are reused and reclaimed flexibly.
Disadvantage: if too many tasks are submitted, too many threads are created and OOM (out of memory) can occur.

 	public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }

② ExecutorService executor = Executors.newFixedThreadPool()

Creates a thread pool with a fixed number of worker threads.
Advantage: reusing a fixed set of threads improves efficiency and avoids the overhead of repeatedly creating threads.
Disadvantage: the work queue is unbounded, so a large number of requests can pile up in the queue and cause OOM.

	// If the core threads are all busy, new tasks are stored in the queue; the fixed threads are reused and are not reclaimed even when idle.
	public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }

	public LinkedBlockingQueue() {
        this(Integer.MAX_VALUE);
    }

③ ExecutorService executor = Executors.newSingleThreadExecutor()

Creates a pool with a single worker thread.
Advantage: tasks are guaranteed to execute in order, and if the thread dies because of an exception, a new one is created to replace it.
Disadvantage: with only one thread it may not be fast, and the unbounded queue can accumulate a large number of requests and cause OOM.

    public static ExecutorService newSingleThreadExecutor() {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>()));
    }

	public LinkedBlockingQueue() {
        this(Integer.MAX_VALUE);
    }

④ ExecutorService executor = Executors.newScheduledThreadPool()

Creates a fixed-size thread pool that supports delayed and periodic task execution.
Advantage: supports delayed and periodic execution.
Disadvantage: the maximum pool size is Integer.MAX_VALUE, so too many tasks can create too many threads and cause OOM.

    public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) {
        return new ScheduledThreadPoolExecutor(corePoolSize);
    }

    public ScheduledThreadPoolExecutor(int corePoolSize) {
        super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
              new DelayedWorkQueue());
    }

Because this thread pool is quite special, let’s write an example to enhance understanding.
Example 1: run a task once after a 5-second delay.

ScheduledExecutorService scheduledThreadPool = Executors.newScheduledThreadPool(5);

scheduledThreadPool.schedule(new Runnable() {

    @Override
    public void run() {
        System.out.println("executed after a 5-second delay");
    }
}, 5, TimeUnit.SECONDS);


Example 2: run a task after an initial 1-second delay and then every 5 seconds.

ScheduledExecutorService scheduledThreadPool = Executors.newScheduledThreadPool(5);

scheduledThreadPool.scheduleAtFixedRate(new Runnable() {

    @Override
    public void run() {
        System.out.println("initial delay of 1 second, then executed every 5 seconds");
    }
}, 1, 5, TimeUnit.SECONDS);

4. Four rejection policies

  • new ThreadPoolExecutor.AbortPolicy()
    The rejected task is discarded and a RejectedExecutionException is thrown (the default policy).
  • new ThreadPoolExecutor.CallerRunsPolicy()
    The task is neither discarded nor does it cause an exception; it is executed by the thread that submitted it (that is, not by a pool thread but by the caller of the thread pool).
  • new ThreadPoolExecutor.DiscardPolicy()
    The rejected task is silently discarded and no exception is thrown.
  • new ThreadPoolExecutor.DiscardOldestPolicy()
    The oldest task waiting in the queue is discarded and the rejected task is submitted again.
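
A minimal sketch (pool sizes and task counts are arbitrary) showing CallerRunsPolicy in action: once the single worker and the one-slot queue are full, the remaining tasks run on the main thread instead of being dropped:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1,                                // one core thread, one thread max
                0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),         // queue holds a single waiting task
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 5; i++) {
            final int taskId = i;
            pool.execute(() -> {
                System.out.println(Thread.currentThread().getName() + " runs task " + taskId);
                try {
                    Thread.sleep(200);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        // tasks that exceed the pool and queue capacity are printed with the thread name "main"
    }
}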

4. Thread deadlock

1. What is deadlock?

Processes (or threads) wait for resources held by one another, so all of them block and none can make progress. Deadlock wastes large amounts of system resources and can bring the system down, so it must be avoided.

2. The difference between deadlock, starvation, and infinite loop

Deadlock: processes wait for resources held by one another, so all of them block and none can make progress. Deadlock wastes large amounts of system resources and can bring the system down, so it must be avoided.
Starvation: a thread never obtains the resources it needs (for example, under the Shortest Process First (SPF) algorithm, if short processes keep arriving, a long process may never get the processor and "starves").
Infinite loop: a process cannot break out of some loop during its execution.

3. Four necessary conditions for deadlock

  • Mutual exclusion: a resource held by one thread cannot be used by other threads at the same time.
  • Hold and wait: a thread that already holds resources requests new ones and, while blocked waiting for them, does not release the resources it holds.
  • No preemption: resources already granted to a thread cannot be forcibly taken away before the thread has finished using them.
  • Circular wait: several threads form a head-to-tail cycle in which each waits for a resource held by the next.

4. When does deadlock occur?

① Competition for system resources
Competition for non-preemptible resources (tape drives, printers) can lead to deadlock; a preemptible resource such as the CPU does not cause deadlock.
② An improper order of progress
Requesting and releasing resources in an improper order can also lead to deadlock. For example: thread A and thread B each hold resource 1 and resource 2 respectively; A then requests resource 2 and B requests resource 1. Because each resource is held by the other thread, both block and a deadlock results (see the lock-ordering sketch after this section).
③ Improper use of semaphores
For example, in the producer-consumer problem, performing the mutual-exclusion P operation before the synchronization P operation can cause deadlock.

Semaphore: essentially a variable (an integer or a more complex record) that can represent the number of units of some resource in the system.
It comes with a pair of primitives, wait(S) and signal(S), abbreviated P(S) and V(S). The primitives can be thought of as functions we write ourselves, named wait and signal; the semaphore S in parentheses is the argument passed in when the function is called.

The producer-consumer problem involves both mutual exclusion and synchronization.
A producer and a consumer share an initially empty buffer of size n:
they must access the buffer mutually exclusively (mutual exclusion);
the producer may put a product into the buffer only when it is not full, otherwise it must wait (synchronization);
the consumer may take a product out only when the buffer is not empty, otherwise it must wait (synchronization).
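
A minimal sketch of case ② above (class, lock, and thread names are illustrative): two threads acquire the same two locks in opposite orders and deadlock:

public class DeadlockDemo {
    private static final Object resource1 = new Object();
    private static final Object resource2 = new Object();

    public static void main(String[] args) {
        // thread A: locks resource1 first, then asks for resource2
        new Thread(() -> {
            synchronized (resource1) {
                System.out.println("A holds resource1");
                sleep(100); // give B time to lock resource2
                synchronized (resource2) {
                    System.out.println("A holds resource2");
                }
            }
        }, "A").start();

        // thread B: locks resource2 first, then asks for resource1, forming a circular wait
        new Thread(() -> {
            synchronized (resource2) {
                System.out.println("B holds resource2");
                sleep(100);
                synchronized (resource1) {
                    System.out.println("B holds resource1");
                }
            }
        }, "B").start();
        // both threads block forever; a thread dump would report the deadlock
    }

    private static void sleep(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}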

5. How to solve deadlock?

① Prevent deadlock

  • Break mutual exclusion
    Turn resources that are used exclusively into shared resources where possible, for example with SPOOLing technology; however, many resources cannot give up mutual exclusion, and for some it must be preserved.
  • Break "no preemption"
    Ⅰ When a process requests new resources and is refused, it releases the resources it already holds and applies again later. Resources are released even if they are not used up, which breaks the no-preemption condition.
    Ⅱ When the resources a process needs are held by another process, the operating system can step in and forcibly take them away. This usually takes the priority of each process into account.
    Disadvantages:
    (1) The implementation is relatively complex.
    (2) Releasing already-acquired resources may undo work that was already done, so this method only suits resources whose state is easy to save and restore.
    (3) Repeatedly requesting and releasing resources increases system overhead and reduces throughput (throughput is the amount of data successfully processed or transmitted, measured in bits, bytes, packets, and so on).
    (4) With approach Ⅰ, a process may never get the resources it wants, which can lead to starvation.
  • Break "hold and wait"
    Use static allocation: a process requests all the resources it needs before it starts running, and it is not started until all of them are granted. Once running, those resources stay with it, and it requests nothing else.
    Disadvantage:
    Resources are held for the whole run, so utilization is extremely low and resources are badly wasted. It can also lead to starvation.
  • Break circular wait
    Use ordered resource allocation: number the system resources, and require every process to request them in increasing order of number; resources of the same kind must be requested all at once (see the lock-ordering sketch after this list).
    Disadvantage:
    Resource numbers are relatively fixed, so adding new resources is inconvenient because numbers may need to be reassigned.
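
A minimal sketch of the ordered-allocation idea (names are illustrative), applied to the earlier two-lock example: both threads acquire the locks in the same fixed order, so no circular wait can form:

public class LockOrderingDemo {
    private static final Object resource1 = new Object(); // always acquired first
    private static final Object resource2 = new Object(); // always acquired second

    public static void main(String[] args) {
        Runnable work = () -> {
            // every thread takes resource1 before resource2, breaking circular wait
            synchronized (resource1) {
                System.out.println(Thread.currentThread().getName() + " holds resource1");
                synchronized (resource2) {
                    System.out.println(Thread.currentThread().getName() + " holds resource2");
                }
            }
        };
        new Thread(work, "A").start();
        new Thread(work, "B").start();
        // both threads always complete; no deadlock is possible
    }
}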

② Avoid deadlock

Banker's algorithm: when a new process enters the system, it must declare the maximum amount of each resource it may need, and this cannot exceed the total amount the system owns. When a process requests a set of resources, the system first checks whether enough resources are available to grant the request; if so, it further checks whether granting them would leave the system in an unsafe state. Only if the state remains safe are the resources allocated; otherwise the process must wait.
Safe state: a state in which the system can allocate resources in some order such that every process can run to completion. As long as one safe sequence can be found, the state is safe; there may of course be more than one safe sequence.
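
A minimal sketch of the safety check at the heart of the banker's algorithm (the matrices, values, and method names are illustrative, not from the original article): it looks for a safe sequence by repeatedly picking a process whose remaining need can be satisfied by the currently available resources:

import java.util.ArrayList;
import java.util.List;

public class BankersSafetyCheck {

    // returns true if a safe sequence exists for the given state
    static boolean isSafe(int[] available, int[][] allocation, int[][] need) {
        int n = allocation.length;          // number of processes
        int m = available.length;           // number of resource types
        int[] work = available.clone();
        boolean[] finished = new boolean[n];
        List<Integer> safeSequence = new ArrayList<>();

        boolean progress = true;
        while (progress) {
            progress = false;
            for (int i = 0; i < n; i++) {
                if (!finished[i] && lessOrEqual(need[i], work)) {
                    // process i can run to completion and return its allocation
                    for (int j = 0; j < m; j++) {
                        work[j] += allocation[i][j];
                    }
                    finished[i] = true;
                    safeSequence.add(i);
                    progress = true;
                }
            }
        }
        System.out.println("safe sequence found: " + safeSequence);
        for (boolean f : finished) {
            if (!f) return false;
        }
        return true;
    }

    private static boolean lessOrEqual(int[] need, int[] work) {
        for (int j = 0; j < need.length; j++) {
            if (need[j] > work[j]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // example state: 3 processes, 2 resource types (values are made up for illustration)
        int[] available = {3, 3};
        int[][] allocation = {{1, 0}, {2, 1}, {1, 1}};
        int[][] need = {{2, 2}, {1, 1}, {3, 1}};
        System.out.println("state is safe: " + isSafe(available, allocation, need));
    }
}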

③ Detection and resolution of deadlock

Deadlock detection

Ⅰ. Use a data structure (a resource-allocation graph) to record resource requests and allocations;
Ⅱ. Provide an algorithm that uses this information to detect whether the system has entered a deadlocked state.
(Figure: resource-allocation graph used in the simplification steps below.)
(1) In the graph, find a process Pi that is neither blocked nor isolated, that is, a process each of whose request edges asks for no more than the number of units of that resource currently free in the system (in the figure, R1 has no free units while R2 does). If all of a process's request edges satisfy this condition, the process can run to completion and then release all the resources it holds. Remove all of its request edges and allocation edges, turning it into an isolated node; in the figure, P1 is such a process, so all of P1's edges are removed.
(2) The resources released by Pi can be used to wake processes that were blocked waiting for them, so a previously blocked process may become unblocked; in the figure, P2 then satisfies the condition. After repeating this simplification, if all edges in the graph can be removed, the graph is said to be completely reducible and the system is not deadlocked.
Directed edge: if the edge between vertices Vi and Vj has a direction, it is called a directed edge; this is a term from graph theory.

Deadlock recovery

Ⅰ Resource preemption: suspend some deadlocked processes (temporarily move them to external storage), preempt their resources, and allocate those resources to other deadlocked processes, while making sure the suspended processes are not starved of resources for too long.
Ⅱ Process termination: forcibly terminate some or even all of the deadlocked processes and reclaim their resources. This is simple to implement, but the cost can be high: processes that have already run for a long time have to be re-executed when they are killed.
Ⅲ Process rollback: roll one or more deadlocked processes back far enough to escape the deadlock. This requires the system to keep historical information about processes and set restore points.

How to decide "who to attack (select process to resolve deadlock)"

  • process priority
  • how long it has already been running
  • how much longer it needs to complete
  • how many resources the process has used
  • whether the process is interactive or batch

5. Thread safety

1. Thread safety mainly consists of three aspects:

  • Atomicity: an operation is indivisible; either it executes completely and cannot be interrupted, or it does not execute at all.
  • Visibility: when multiple threads access the same variable and one thread modifies its value, the other threads can immediately see the modified value.
  • Orderliness: the program executes in the order written in the code (no visible harmful reordering).

2. Ensure atomicity

  • Use locks: synchronized or Lock.
  • Use CAS (compare-and-swap, e.g. compareAndSet), which is a CPU-level concurrency primitive (see the sketch below).
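
A minimal sketch (class name and iteration counts are arbitrary) of atomicity via CAS: the plain int counter can lose updates under contention, while AtomicInteger, which uses compare-and-swap internally, does not:

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicityDemo {
    static int plainCount = 0;                                // not atomic: ++ is read-modify-write
    static AtomicInteger atomicCount = new AtomicInteger(0);  // atomic via CAS

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                plainCount++;                  // may lose updates
                atomicCount.incrementAndGet(); // CAS loop, never loses updates
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println("plainCount  = " + plainCount);        // usually less than 20000
        System.out.println("atomicCount = " + atomicCount.get()); // always 20000
    }
}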

3. Ensure visibility

  • Use locks: synchronized or Lock.
  • Use the volatile keyword (see the sketch below).
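
A minimal sketch (class and field names are illustrative) of visibility: because the flag is volatile, the worker thread promptly sees the write made by main and stops; with a non-volatile flag it might spin forever on a stale cached value:

public class VisibilityDemo {
    // volatile guarantees that writes to this flag are visible to other threads
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy loop; with a non-volatile flag this loop might never observe the change
            }
            System.out.println("worker saw running = false and stopped");
        });
        worker.start();

        Thread.sleep(1000);
        running = false; // this write becomes visible to the worker because the flag is volatile
        worker.join();
    }
}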

4. Ensure orderliness

  • Use volatile keyword
  • Use the synchronized keyword.

5. The difference between volatile and synchronized

① volatile can only be applied to variables; synchronized can be applied to code blocks and methods (at the object or class level).
② volatile provides visibility but not atomicity; synchronized provides both atomicity and visibility.
③ volatile never blocks a thread; synchronized can cause threads to block.
④ volatile is a lightweight thread-synchronization mechanism, so it generally performs better than synchronized.

6. The difference between synchronized and Lock

① synchronized is a keyword; Lock is a Java interface, and its common implementation ReentrantLock is non-fair by default (as its source shows).
② synchronized suits small amounts of synchronized code; Lock suits larger amounts.
③ synchronized releases the lock automatically; with Lock, unlock() must be called manually, typically in a finally block, otherwise deadlock is easy to cause (see the sketch below).
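
A minimal sketch (class and field names are illustrative) of the Lock usage pattern point ③ describes: lock() before the critical section and unlock() in a finally block, so the lock is released even if an exception is thrown:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final Lock lock = new ReentrantLock(); // non-fair by default
    private int count = 0;

    public void increment() {
        lock.lock();          // acquire the lock manually
        try {
            count++;          // critical section
        } finally {
            lock.unlock();    // always release in finally, even if an exception occurs
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LockDemo demo = new LockDemo();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                demo.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("count = " + demo.count); // always 20000
    }
}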

I will write about the causes of thread-safety problems and explain them in more detail in a later post.

Origin blog.csdn.net/twotwo22222/article/details/128450613