[Learning Java from 0 to 1] 14 Java multi-threading

1. Multithreading Overview

In everyday life, many things can be done at the same time. A person can listen to music while cleaning the room, or eat while watching TV. The same is true when using a computer: you can browse the web while printing a document, or chat while copying files. The technology that lets a computer work on multiple tasks at the same time is multithreading. Java is one of the languages with built-in support for multithreading, which allows a program to execute multiple fragments of code at the same time.

1.1 Introduction of multi-threading

As can be seen from the program's call flow in the figure above, the program has only one flow of execution, so it is a single-threaded program. If a program has multiple flows of execution, it is a multi-threaded program.

1.2 Overview of Multithreading

1.2.1 What is a process?

A process is a running program; it is the independent unit by which the system allocates resources and schedules work. Each process has its own memory space and system resources.

In an operating system, each independently executing program can be called a process, that is, a "running program." Most computers today run multi-tasking operating systems that can execute multiple applications at the same time; the most common are Windows, Linux, and Unix. Under the Windows operating system used in this book, right-click the taskbar and choose [Start Task Manager] to open the Task Manager. The [Processes] tab lists the programs currently running, that is, all processes in the system, such as chrome.exe and QQ.exe. The Task Manager window is shown in the figure.

(Figure: the Windows Task Manager showing the list of running processes)

A multi-tasking operating system appears to run processes concurrently. For example, you can listen to music and chat at the same time. In fact these processes are not running at the same instant: all programs are executed by the CPU, and a single CPU (core) can run only one program, and therefore only one process, at any point in time. The operating system gives each process a limited slice of CPU time; the CPU executes one process during that slice and then switches to another process for the next slice. Because the CPU runs very fast and can switch between processes in a very short time, it feels as though multiple programs are executing at the same time.

1.2.2 What is the significance of multi-process?

A single-process computer could only do one thing at a time, but today's computers can do several things at once, for example play a game (the game process) while listening to music (the music process). In other words, modern computers support multiple processes and can perform several tasks within the same period of time. This improves CPU utilization and allows several parts of code to appear to run at the same time.

In fact, when multiple applications execute "at the same time", the CPU is switching rapidly between them. The switching order is effectively random, and each switch takes time, which reduces efficiency.

1.2.3 What is a thread?

Each running program is a process, and within a process there can be multiple execution units running simultaneously; each of these execution units is a thread. Every process in the operating system has at least one thread. For example, when a Java program is started, a process is created, a thread is created in that process by default, and the code in the main() method runs on this thread.

A thread is a unit of execution, an execution path of a program: a single sequential flow of control within a process. If a process has only one execution path, it is a single-threaded program; if it has multiple execution paths, it is a multi-threaded program.

When code executes strictly in the order it is called, with no two pieces of code running alternately, the program is single-threaded. To make multiple sections of code run alternately, you need to create multiple threads, that is, a multi-threaded program. "Multithreading" means that one process can spawn multiple threads during execution; these threads run independently of one another and can execute concurrently.

The execution process of a multi-threaded program is shown in the figure

(Figure: execution flow of a multi-threaded program)

The multiple threads shown in the figure appear to execute at the same time, but in fact they do not: like processes, they take turns on the CPU. Because the CPU runs very fast, they give the impression of executing simultaneously.

1.2.4 What is the significance of multi-threading?

Multithreading does not make a single program execute faster; its purpose is to improve CPU utilization and the responsiveness of the application. Running a program is really a matter of competing for the CPU: its resources and its right to execute. Multiple processes compete for this, and a process with more execution paths has a higher chance of being granted CPU time. We cannot guarantee which thread will get the CPU at which moment, so the execution of threads is random.

1.2.5 The relationship between threads and processes

A thread is the smallest unit of CPU scheduling. At the same time, threads are a limited system resource: they cannot be created without bound, and creating and destroying them has a cost. A process generally refers to an execution unit, meaning a program or application on a PC or mobile device. A process can contain multiple threads, so the relationship between processes and threads is one of containment.

  • A thread can only belong to one process, and a process can have multiple threads, but there must be at least one thread.

  • Resources are allocated to processes, and all threads of the same process share all resources of that process.

  • The processor is assigned to threads, that is, the threads are actually running on the processor.

  • During execution, threads need to synchronize with one another. Threads of different processes must use message passing (inter-process communication) to synchronize.

  • Threads have their own stacks and local variables, but there is no separate address space between them, so a crash in one thread can bring down the entire process. Multi-process programs are therefore more robust than multi-threaded ones, but switching between processes costs more time and resources and is less efficient. However, for concurrent operations that must share certain variables, only threads, not processes, can be used.

1.2.6 The difference between processes and threads

  • Scheduling: threads are the basic unit of CPU scheduling and dispatch; processes are the basic unit of resource ownership.
  • Concurrency: Not only processes can execute concurrently, but multiple threads of the same process can also execute concurrently.
  • Own resources: A process is an independent unit that owns resources. A thread does not own system resources, but it can access resources belonging to the process.
  • System overhead: When creating or canceling a process, because the system has to allocate and reclaim resources for it, the system overhead is significantly greater than the overhead when creating or canceling a thread. When switching processes, more resources are consumed and the efficiency is lower.
  • Robustness: a process has an independent address space, so in protected mode the crash of one process does not affect other processes, while threads are just different execution paths within one process. Threads have their own stacks and local variables but no separate address space: when a process dies, all of its threads die with it. Multi-process programs are therefore more robust than multi-threaded ones; crashes of separate processes do not affect each other, but the crash of one thread brings down the whole process, taking the other threads with it.

In general: a process is just a collection of resources. Real program execution is done by threads. When the program starts, the operating system creates a main thread for you. Each thread has its own stack.

When Android starts the program, we will allocate a main thread (UI thread). If there is no special processing, all our operations are completed in the UI thread.

1.2.7 What is parallelism and concurrency?

Concurrency means that, logically, multiple programs are running during the same period of time; parallelism means that, physically, multiple programs are running at the same point in time. So can true parallelism be achieved? Yes: multiple CPUs (or CPU cores) make it possible, provided the system knows how to schedule and control them.

PS:

  • There can be multiple execution paths in a process, which is called multithreading.
  • There must be at least one thread in a process.
  • The purpose of opening multiple threads is to run multiple parts of code at the same time. Each thread has its own running content, which can be called the task to be performed by the thread.

1.3 Java program operation principle

The java command starts the Java virtual machine. Starting the JVM is equivalent to starting an application, that is, starting a process. That process automatically starts a "main thread", and the main thread then calls the main method of the specified class; so the main method runs on the main thread. All of the programs we wrote before this point were single-threaded.

Thinking: Is the startup of the JVM virtual machine single-threaded or multi-threaded?

Answer: starting the JVM starts multiple threads; at least two are easy to identify.

  • The thread that executes the main function; that thread's task code is defined in the main function.

  • The thread responsible for garbage collection. Calling the gc method of the System class suggests that the garbage collector run (and eventually call finalize() on unreachable objects), but not necessarily immediately.

2. Multi-threaded implementation

Since threads depend on processes, a process must be created first. Processes are created by the operating system, so creating one means calling a system function, and Java cannot call system functions directly; therefore Java cannot create threads entirely on its own. However, Java (through the JVM) can call native C/C++ code, and that C/C++ code calls the system functions that create processes and threads. Java then wraps this in classes for us to use, and with those classes we can write multi-threaded programs.

2.1 Multithreading implementation scheme 1: inherit the Thread class and override the run() method

  • Define a class that extends the Thread class
  • Override the run method of the Thread class
  • Create a thread by creating an object of the Thread subclass
  • Call the start() method to start the thread; the JVM then calls the thread's run() method to execute its task
package cn.itcast;

// Multithreading implementation 1: extend the Thread class and override the run() method
// 1. Define a class that extends Thread.
class MyThread extends Thread {

    private String name;

    MyThread(String name) {
        this.name = name;
    }

    // 2. Override the run method of the Thread class.
    public void run() {
        for (int x = 0; x < 5; x++) {
            System.out.println(name + "...x=" + x + "...ThreadName="
                    + Thread.currentThread().getName());
        }
    }
}

class ThreadTest {

    public static void main(String[] args) {
        // 3. Create threads by creating objects of the Thread subclass directly.
        MyThread d1 = new MyThread("黑马程序员");
        MyThread d2 = new MyThread("中关村在线");
        // 4. Call start() to start each thread; the thread's task (run method) then executes.
        d1.start();
        d2.start();
        for (int x = 0; x < 5; x++) {
            System.out.println("x = " + x + "...over..."
                    + Thread.currentThread().getName());
        }
    }
}

operation result:

黑马程序员...x=0...ThreadName=Thread-0
中关村在线...x=0...ThreadName=Thread-1
x = 0...over...main
中关村在线...x=1...ThreadName=Thread-1
黑马程序员...x=1...ThreadName=Thread-0
中关村在线...x=2...ThreadName=Thread-1
x = 1...over...main
中关村在线...x=3...ThreadName=Thread-1
黑马程序员...x=2...ThreadName=Thread-0
中关村在线...x=4...ThreadName=Thread-1
x = 2...over...main
x = 3...over...main
x = 4...over...main
黑马程序员...x=3...ThreadName=Thread-0
黑马程序员...x=4...ThreadName=Thread-0

2.1.2 Why override the run() method?

The Thread class describes threads, and a thread needs a task, so the Thread class also describes that task: the task is embodied in the run() method. In other words, run() is the method that encapsulates the task a custom thread will execute; the code defined in run() is the code the thread runs. So to use this approach you inherit from Thread, override run(), and put the code to be run inside it.

2.1.3 Which method is used to start the thread

The start() method is called to start the thread, not the run() method. The run() method only encapsulates the code executed by the thread. Calling run() is just a call to a normal method and cannot start the thread.
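
A minimal sketch (class name is illustrative) of the difference just described: run() is an ordinary method call on the current thread, while start() asks the JVM to schedule the task on a new thread.

class RunVsStart extends Thread {
    public void run() {
        System.out.println("executed on: " + Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        RunVsStart t = new RunVsStart();
        t.run();   // prints "executed on: main" -- no new thread is created
        t.start(); // prints something like "executed on: Thread-0" -- a new thread runs the task
    }
}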

2.1.4 Can a thread be started more than once?

No. Calling start() a second time on the same thread throws an IllegalThreadStateException.

2.1.5 The difference between run() and start() methods

  • run(): merely encapsulates the code the thread executes; calling it directly is just an ordinary method call
  • start(): starts the thread, and the JVM then calls the thread's run() method

2.1.6 Basic getting and setting methods of Thread class

  • public final String getName(): Get the name of the thread
  • public final void setName(String name): Set the name of the thread
  • Thread(String name): Name the thread through the construction method

Thinking: How to get the name of the thread where the main method is located?

public static Thread currentThread() // returns the thread in which the current code is executing
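
A minimal sketch (class name is illustrative) answering the thinking question: Thread.currentThread() returns the thread executing the current code, so inside main() it returns the main thread, whose default name is "main".

public class MainThreadNameDemo {
    public static void main(String[] args) {
        Thread main = Thread.currentThread();
        System.out.println(main.getName()); // main
        main.setName("my-main");            // setName() also works on the main thread
        System.out.println(main.getName()); // my-main
    }
}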

2.2 Multi-threading implementation plan 2: Implement Runnable interface

  • Define a class that implements the Runnable interface
  • Override the run method of the interface, putting the thread's task code into it
  • Create a thread object through the Thread class, passing the Runnable implementation object to the Thread constructor. Why? Because the thread's task is encapsulated in the run method of the Runnable implementation, the task to run must be specified when the thread object is created.
  • Call the start method of the thread object to start the thread
package cn.itcast;

// Multithreading implementation 2: implement the Runnable interface
// 1. Define a class that implements Runnable.
class MyThread implements Runnable {

    // 2. Override the run method of the interface and put the thread's task code in it.
    public void run() {
        show();
    }

    public void show() {
        for (int x = 0; x < 5; x++) {
            System.out.println(Thread.currentThread().getName() + "..." + x);
        }
    }
}

class ThreadTest {

    public static void main(String[] args) {
        MyThread d = new MyThread();
        // 3. Create Thread objects, passing the Runnable implementation to the Thread constructor.
        Thread t1 = new Thread(d);
        Thread t2 = new Thread(d);
        // 4. Call start() on the thread objects to start the threads.
        t1.start();
        t2.start();
    }
}

operation result:

Thread-0...0
Thread-1...0
Thread-0...1
Thread-1...1
Thread-0...2
Thread-1...2
Thread-0...3
Thread-1...3
Thread-0...4
Thread-1...4
  • How to get the thread name: Thread.currentThread().getName()
  • How to give a thread a name: setName(), or the Thread(Runnable target, String name) constructor (see the sketch after this list)
  • The benefits of the interface approach:
    • It avoids the limitation imposed by Java's single inheritance, so this second way of creating threads is the more commonly used one.
    • It suits cases where several pieces of code in the same program process the same resource; it effectively separates the thread from the program's code and data, which better reflects object-oriented design.
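
A minimal sketch (names are illustrative) of the two naming options listed above, assuming a task that simply prints its own thread name:

public class NamedRunnableDemo {
    public static void main(String[] args) {
        Runnable task = new Runnable() {
            public void run() {
                System.out.println("running in " + Thread.currentThread().getName());
            }
        };
        Thread t1 = new Thread(task, "worker-1"); // name given in the constructor
        Thread t2 = new Thread(task);
        t2.setName("worker-2");                   // name set before starting
        t1.start();
        t2.start();
    }
}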

2.3 Multi-threaded program implementation scheme three: implement the Callable interface

  • Create a thread pool object to control how many thread objects to create.
public static ExecutorService newFixedThreadPool(int nThreads)
  • Threads in this kind of pool can execute tasks represented by Runnable objects or Callable objects.
  • Just call the following method
    • Future<?> submit(Runnable task)
      submits a Runnable task for execution and returns a Future representing the task
    • <T> Future<T> submit(Callable<T> task)
      submits a Callable task for execution and returns a Future representing the pending results of the task.
  • End thread: shutdown() closes the thread pool
package cn.itcast;

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Callable is a generic interface.
// The type parameter is the return type of the call() method.
class MyCallable implements Callable<Object> {

    public Object call() throws Exception {
        for (int x = 0; x < 100; x++) {
            System.out.println(Thread.currentThread().getName() + ":" + x);
        }
        return null;
    }
}

/*
 * Multithreading implementation 3:
 * A: Create a thread pool object that controls how many threads to create:
 *    public static ExecutorService newFixedThreadPool(int nThreads)
 * B: Threads in this pool can execute tasks represented by Runnable or Callable objects.
 * C: Submit tasks with:
 *    Future<?> submit(Runnable task)
 *    <T> Future<T> submit(Callable<T> task)
 * D: Shut the pool down when done.
 */
public class CallableDemo {

    public static void main(String[] args) {
        // Create the thread pool object
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Submit tasks represented by Runnable or Callable objects
        pool.submit(new MyCallable());
        pool.submit(new MyCallable());

        // Shut down the pool
        pool.shutdown();
    }
}

operation result: the two pool threads each print their counter from 0 to 99 together with their thread names, interleaved (output omitted here).

Advantages and Disadvantages of Implementing Callable

  • Benefits: it can have a return value (read through the returned Future, as in the sketch below), and it can throw checked exceptions.
  • Disadvantages: the code is relatively complex, so this approach is used less often.
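
A minimal sketch (class and variable names are illustrative) showing the advantage listed above: call() returns a value that can be read through the Future returned by submit().

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableResultDemo {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        Future<Integer> future = pool.submit(new Callable<Integer>() {
            public Integer call() throws Exception {
                int sum = 0;
                for (int i = 1; i <= 100; i++) {
                    sum += i;
                }
                return sum;
            }
        });
        System.out.println("sum = " + future.get()); // blocks until the result is ready: 5050
        pool.shutdown();
    }
}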

2.4 Using multi-threading with anonymous inner classes

// Create a new thread by overriding run() in an anonymous Thread subclass
new Thread() {
    @Override
    public void run() {
        super.run();
        // code
    }
}.start();

// Create a new thread by passing an anonymous Runnable object
new Thread(new Runnable() {
    @Override
    public void run() {
        // code
    }
}).start();
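
Since Java 8, the second form can be written more compactly with a lambda, because Runnable is a functional interface. A small sketch, equivalent to the anonymous Runnable above:

new Thread(() -> {
    // code
}).start();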

2.5 Single-threaded and multi-threaded running processes

(Figure: execution flow of a single-threaded program versus a multi-threaded program)

It can be seen from the figure that a single-threaded program executes in the order in which the code is called, whereas in a multi-threaded program the main() method and the run() method of the MyThread class can run at the same time without affecting each other. That is the difference between single-threading and multi-threading.

3. Thread scheduling and thread control

Multiple threads in the program are executed concurrently. If a thread wants to be executed, it must obtain the right to use the CPU. The Java virtual machine allocates CPU usage rights to each thread in the program according to a specific mechanism. This mechanism is called thread scheduling.

In computers there are two thread-scheduling models: the time-sharing scheduling model and the preemptive scheduling model. Time-sharing scheduling lets all threads take turns using the CPU, dividing the CPU time slices evenly among them. Preemptive scheduling lets the runnable thread with the highest priority use the CPU first; among threads of equal priority, one is chosen at random, and when it gives up the CPU another thread is chosen at random to receive it. The Java virtual machine uses preemptive scheduling by default. Normally programmers do not need to care about this, but for certain needs the behavior must be influenced so that the program itself controls how the CPU is scheduled.

3.1 Thread Scheduling

If a computer has only one CPU, the CPU can execute only one instruction at any moment, and a thread can execute its instructions only when it is given a CPU time slice, that is, the right to use the CPU. So how does Java schedule threads?

3.1.1 There are two scheduling models for threads

Time-sharing scheduling model: All threads take turns using the CPU, and the time slice occupied by each thread on the CPU is evenly distributed.

Preemptive scheduling model: Prioritize threads with higher priority to use the CPU. If threads have the same priority, one will be randomly selected. The thread with higher priority will get relatively more CPU time slices.

Java uses a preemptive scheduling model.

3.1.2 How to set and get thread priority

In an application, if you want to schedule threads, the most direct way is to set the priority of the thread. Threads with a higher priority have a greater chance of getting CPU execution, while threads with a lower priority have a smaller chance of getting CPU execution. The priority of a thread is represented by an integer between 1 and 10. The larger the number, the higher the priority. In addition to directly using numbers to represent the thread's priority, you can also use the three static constants provided in the Thread class to represent the thread's priority, as shown in the table.

  • Thread.MAX_PRIORITY: the maximum priority a thread can have, with a value of 10.
  • Thread.NORM_PRIORITY: the default priority of a thread, with a value of 5.
  • Thread.MIN_PRIORITY: the minimum priority a thread can have, with a value of 1.

While the program is running, every thread in the ready state has a priority; the main thread, for example, has normal priority. A thread's priority is not fixed: it can be set with the setPriority(int newPriority) method of the Thread class, whose parameter accepts an integer between 1 and 10 or one of the three static constants of the Thread class.

public final int getPriority(); // get the thread's priority
public final void setPriority(int newPriority); // set the thread's priority

Note: the default priority of a thread is 5, and the valid range is 1 to 10. A higher priority only means the thread is more likely to be granted a CPU time slice; you would have to run the program many times, or loop many times, to observe the effect clearly.

package cn.itcast.chapter10.example06;

/**
 * How two threads with different priorities behave at run time
 */
public class Example06 {

    public static void main(String[] args) {
        // Create two threads
        Thread minPriority = new Thread(new Task(), "优先级较低的线程 ");
        Thread maxPriority = new Thread(new Task(), "优先级较高的线程 ");
        minPriority.setPriority(Thread.MIN_PRIORITY); // set the priority to 1
        maxPriority.setPriority(Thread.MAX_PRIORITY); // set the priority to 10
        // Start both threads
        minPriority.start();
        maxPriority.start();
    }
}

// The task class for the threads
class Task implements Runnable {

    @Override
    public void run() {
        for (int i = 0; i < 10; i++) {
            System.out.println(Thread.currentThread().getName() + "正在输出" + i);
        }
    }
}

3.2 Thread control

Commonly used thread-control methods:

  • sleep(long millis): thread sleep; pauses the currently executing thread for the given time and puts it into a timed waiting state.
  • join(): thread joining; the current thread waits until the thread whose join() was called finishes before continuing, so that thread effectively "jumps the queue" and runs first.
  • yield(): thread yielding; pauses the currently executing thread object and lets other threads execute.
  • setDaemon(boolean on): marks the thread as a daemon (background) thread or a user thread. When only daemon threads are left running, the Java virtual machine exits. This method must be called before the thread is started (see the sketch after this list).
  • stop(): stops the thread; deprecated, but still available.
  • interrupt(): interrupts the thread; a thread blocked in sleep/wait/join is woken with an InterruptedException.
  • setPriority(int newPriority): changes the thread's priority.
  • isInterrupted(): tests whether the thread has been interrupted.
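
A minimal sketch (class and message names are illustrative) of setDaemon() from the list above: the daemon thread is killed automatically when the last user thread (here, main) finishes.

public class DaemonDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread daemon = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    System.out.println("daemon is working...");
                    try {
                        Thread.sleep(200);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        });
        daemon.setDaemon(true); // must be called before start()
        daemon.start();
        Thread.sleep(600);      // main works for a while, then exits
        System.out.println("main is done; the JVM exits and the daemon dies with it");
    }
}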

Thread sleep

Threads with higher priority tend to run first, and those with lower priority later. If you want to control threads yourself, pausing the executing thread and giving the CPU to other threads, you can use the static method sleep(long millis). It pauses the currently executing thread for a period of time and puts it into a timed waiting state: after calling sleep(long millis), the current thread will not execute within the specified time (millis), so other threads get a chance to run.

The sleep(long millis) method is declared to throw InterruptedException, so a caller must either catch the exception or declare that it throws it.

package cn.itcast.chapter10.example07;

/**
 * Using the sleep(long millis) method in a program
 */
public class Example07 {

    public static void main(String[] args) throws Exception {
        // Create a thread
        new Thread(new Task()).start();
        for (int i = 1; i <= 10; i++) {
            if (i == 5) {
                Thread.sleep(2000); // the current (main) thread sleeps for 2 seconds
            } else {
                Thread.sleep(500);
            }
            System.out.println("main主线程正在输出:" + i);
        }
    }
}

// The task class for the thread
class Task implements Runnable {

    @Override
    public void run() {
        for (int i = 1; i <= 10; i++) {
            try {
                if (i == 3) {
                    Thread.sleep(2000); // the current thread sleeps for 2 seconds
                } else {
                    Thread.sleep(500);
                }
                System.out.println("线程一正在输出:" + i);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}

Thread queue jumping

In real life we often see people "jumping the queue". The Thread class provides the join() method to achieve something similar: when thread A calls threadB.join(), thread A blocks until thread B finishes executing, and only then does A continue to run.

In other words, join() lets the target thread cut in line: the thread that calls someThread.join() is blocked until someThread has finished, and then continues.

Worker worker1 = new Worker("work-1");
Worker worker2 = new Worker("work-2");

worker1.start();
System.out.println("启动线程1");
try {
    worker1.join();
    System.out.println("启动线程2");
    worker2.start();
    worker2.join();
} catch (InterruptedException e) {
    e.printStackTrace();
}

System.out.println("主线程继续执行");

class Worker extends Thread {

    public Worker(String name) {
        super(name);
    }

    @Override
    public void run() {
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("work in " + getName());
    }
}

Output results

启动线程1
work in work-1
启动线程2
work in work-2
主线程继续执行

Case code 2

package cn.itcast.chapter10.example09;

/**
 * Thread queue jumping: using the join() method
 */
public class Example09 {

    public static void main(String[] args) throws Exception {
        // Create the thread
        Thread t = new Thread(new EmergencyThread(), "线程一");
        t.start(); // start the thread
        for (int i = 1; i < 6; i++) {
            System.out.println(Thread.currentThread().getName() + "输出:" + i);
            if (i == 2) {
                t.join(); // call the join() method
            }
            Thread.sleep(500); // sleep for 500 milliseconds
        }
    }
}

class EmergencyThread implements Runnable {

    public void run() {
        for (int i = 1; i < 6; i++) {
            System.out.println(Thread.currentThread().getName() + "输出:" + i);
            try {
                Thread.sleep(500); // sleep for 500 milliseconds
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

Thread yielding

In a basketball game, players from both teams keep fighting for the ball. The player who gets it holds and shoots it for a while, then gives it up, and the others start fighting for it again. This is rather like thread yielding in a Java program: under certain circumstances the executing thread gives up the CPU so that other threads can run.

Thread yielding is done with the yield() method, which is somewhat similar to sleep(): both can pause the currently running thread. The difference is that yield() does not block the thread; it simply moves it back to the ready state and lets the scheduler reschedule it. After a thread calls yield(), only threads whose priority is equal to or higher than the current thread's get the chance to execute.

In short, the thread that calls yield() gives its remaining execution time to other threads in the ready state, voluntarily surrendering its right to execute.

class YieldThread extends Thread {

    private static final int MAX = 5; // loop count (added so the snippet compiles)

    public YieldThread(String name) {
        super(name);
    }

    public synchronized void run() {
        for (int i = 0; i < MAX; i++) {
            System.out.printf("%s ,优先级为 : %d ----> %d\n", this.getName(), this.getPriority(), i);
            // yield when i equals 2
            if (i == 2) {
                Thread.yield();
            }
        }
    }
}

YieldThread t1 = new YieldThread("thread-1");
YieldThread t2 = new YieldThread("thread-2");
t1.start();
t2.start();

operation result:

thread-1 ,优先级为:5 ----> 0
thread-1 ,优先级为:5 ----> 1
thread-1 ,优先级为:5 ----> 2
thread-1 ,优先级为:5 ----> 0
thread-1 ,优先级为:5 ----> 1
thread-1 ,优先级为:5 ----> 2
thread-1 ,优先级为:5 ----> 3
thread-1 ,优先级为:5 ----> 4
thread-1 ,优先级为:5 ----> 3
thread-1 ,优先级为:5 ----> 4

Case code 2

package cn.itcast.chapter10.example08;

/**
 * Thread yielding: using the yield() method
 */
// Define YieldThread, which extends Thread
class YieldThread extends Thread {

    // A constructor that takes the thread name
    public YieldThread(String name) {
        super(name); // call the superclass constructor
    }

    public void run() {
        for (int i = 0; i < 6; i++) {
            System.out.println(Thread.currentThread().getName() + "---" + i);
            if (i == 3) {
                System.out.print("线程让步:");
                Thread.yield(); // the thread yields at this point
            }
        }
    }
}

public class Example08 {

    public static void main(String[] args) {
        // Create two threads
        Thread t1 = new YieldThread("线程A");
        Thread t2 = new YieldThread("线程B");
        // Start both threads
        t1.start();
        t2.start();
    }
}

**Basic status of the process**

The basic states usually discussed for processes are Ready, Running, and Blocked; in addition there are the less often discussed Created and Terminated states.

  • Ready state: When the process has been allocated all necessary resources except the CPU, it can be executed immediately as long as it obtains the CPU. The state of the process at this time is called the ready state. There may be multiple processes in the ready state in a system, and they are usually queued into a queue, called the ready queue.
  • Running status: The process has obtained the CPU and its program is executing. In a single-processor system, there is only one process in the execution state; in a multi-processor system, there are multiple processes in the execution state.
  • Blocked state: When the executing process is temporarily unable to continue execution due to an event, it gives up the processor and is in a suspended state, that is, the execution of the process is blocked. This suspended state is called blocked state, sometimes also called waiting status or blocked status. Typical events that cause process blocking include: requesting I/O, applying for buffer space, etc. Usually such blocked processes are also placed in a queue. Some systems arrange blocked processes into multiple queues based on different reasons for blocking.

The switching of the three states is shown in the figure below:

4. Thread life cycle

In Java, every object has a life cycle, and threads are no exception. A thread's life cycle begins when the Thread object is created and ends when the code in run() finishes normally or the thread throws an uncaught exception (Exception) or error (Error). The whole life cycle can be divided into five stages: new (New), ready (Runnable), running (Running), blocked (Blocked), and dead (Terminated). The state a thread is in indicates what it is currently doing, and through certain operations a thread can move between these states, as shown in the figure.

(Figure: the thread life-cycle state transitions)

The figure shows how the thread states convert into one another; the arrows indicate the direction of conversion. A single arrow means the conversion goes only one way: for example, a thread can move from the new state to the ready state but not back. A double arrow means the two states can convert into each other: for example, the ready and running states. A single figure cannot fully describe the differences between the states, so the five states of the thread life cycle are explained in detail below:

1. New status (New)

When a thread object has just been created it is in the new state. It cannot run yet: like any other Java object it has only been allocated memory by the Java virtual machine and does not yet show any of the dynamic characteristics of a thread.

2. Ready state (Runnable)

When the thread object calls the start() method, the thread enters the ready state. A thread in the ready state sits in the thread queue: it is eligible to run, but whether it actually gets the CPU and starts running depends on the system's scheduling.

3. Running status (Running)

If a thread in the ready state obtains the CPU and begins executing the body of its run() method, it is in the running state. A started thread does not necessarily run continuously: when it has used up the time the system allotted, the system takes the CPU away from it so that other threads get a chance to execute. Note that only a thread in the ready state can move to the running state.

4. Blocked state (Blocked)

Under certain special circumstances, such as being suspended deliberately or performing a time-consuming input/output operation, an executing thread gives up the CPU and temporarily stops running: it enters the blocked state. A blocked thread cannot go straight back into the ready queue; only when the cause of the blocking is removed can it return to the ready state.

The ways a thread moves from the running state into the blocked state, and how it gets back to the ready state, are as follows:

  • When a thread tries to acquire an object's synchronization lock and the lock is held by another thread, the current thread enters the blocked state. It returns to the ready state once it acquires the lock (that is, once the other thread releases it).
  • When a thread calls a blocking IO method, it enters the blocked state and returns to the ready state only when the IO method returns.
  • When a thread calls an object's wait() method, it also enters the blocked state; it returns to the ready state only when another thread calls notify() (or notifyAll()) to wake it.
  • When a thread calls Thread's sleep(long millis) method, it enters the blocked state; when the sleep time elapses, it automatically returns to the ready state.
  • When thread A calls thread B's join() method, thread A enters the blocked state and returns to the ready state only after thread B has finished running.

Note that a thread can only go from the blocked state to the ready state; it cannot go directly to the running state. In other words, a thread that stops being blocked re-enters the runnable pool and waits to be scheduled by the system.

5. Dead state (Terminated)

A thread enters the dead state when its run() method finishes normally, when its stop() method is called, or when it throws an uncaught exception (Exception) or error (Error). Once dead, the thread is no longer eligible to run and cannot transition to any other state.

4.1 Thread status

Thread states and their descriptions:

  • New: the thread object has been created but not started; calling start() moves it to the ready state.
  • Ready: the thread is qualified to execute but does not currently hold the right to execute.
  • Running: the thread has both the qualification and the right to execute.
  • Blocked: encountering sleep() or wait() makes the thread give up its qualification and right to execute; when the sleep time elapses or notify() is called, it regains the qualification and returns to the ready state.
  • Dead: the thread is interrupted or its run() method ends.

4.2 Thread life cycle diagram

**Pending (suspended) state**

In many systems a process has only the three states above, but some systems add new states, the most important of which is the suspended state. The reasons for introducing it are:

(1) End-user request. When an end user notices something suspicious while a program is running, they may want to stop it temporarily, that is, suspend the executing process; if the process is in the ready state and not currently executing, it temporarily stops being scheduled, so that the user can examine its execution or modify the program. We call this quiescent state the suspended state.

(2) Parent process request. Sometimes the parent process wants to suspend one of its child processes in order to examine and modify the child process, or to coordinate activities between child processes.

(3) The need for load regulation. When the workload in the real-time system is heavy and may affect the control of real-time tasks, the system can suspend some unimportant processes to ensure that the system can run normally.

(4) Operating system needs. The operating system sometimes wants to suspend certain processes in order to check running resource usage or for accounting purposes.

Add the new and final states as shown in the figure below:

5. Thread safety issues

5.1 Criteria for judging whether a program has thread safety issues

  • Is it a multi-threaded environment?
  • Is there any shared data?
  • Whether multiple statements operate on shared data

5.2 How to solve the multi-thread safety problem?

The basic idea: remove the conditions that cause the safety problem.
Solution: the synchronization mechanism, that is, synchronized code blocks and synchronized methods. Simply lock the code in which multiple statements operate on the shared data so that at any moment only one thread can execute it.

5.2.1 Solving thread safety issues, implementation 1: synchronized code blocks, with the following format

When multiple threads use the same shared resource, the code that manipulates the shared resource can be placed inside a code block marked with the synchronized keyword; such a block is called a synchronized code block.

synchronized (lock) {
    // code that needs to be synchronized
}

Here lock is the lock object, which is the key to the synchronized block. While one thread is executing the synchronized block, other threads cannot enter it and are blocked. When the current thread leaves the block, all threads compete for execution again, and whichever thread wins enters the block and executes the code inside. This repeats until the shared resource has been fully processed. The process is like a public phone booth: only after the previous person finishes the call and comes out can the next person make a call.

package cn.itcast;

// Ticket-selling example implemented with a synchronized code block
class Ticket implements Runnable {

    private int num = 10;
    Object obj = new Object();

    public void run() {
        while (true) {
            // lock the code that may cause problems
            synchronized (obj) {
                if (num > 0) {
                    // print the thread name and the remaining ticket number
                    System.out.println(Thread.currentThread().getName()
                            + "...sale..." + num--);
                }
            }
        }
    }
}

class TicketDemo {

    public static void main(String[] args) {
        // Create thread objects via the Thread class, passing the Runnable implementation to the constructor.
        Ticket t = new Ticket();
        Thread t1 = new Thread(t);
        Thread t2 = new Thread(t);
        Thread t3 = new Thread(t);
        Thread t4 = new Thread(t);
        // Call start() on each thread object to start the threads.
        t1.start();
        t2.start();
        t3.start();
        t4.start();
    }
}

operation result:

Thread-0...sale...10
Thread-0...sale...9
Thread-0...sale...8
Thread-0...sale...7
Thread-0...sale...6
Thread-0...sale...5
Thread-0...sale...4
Thread-0...sale...3
Thread-0...sale...2
Thread-0...sale...1
  • Synchronization solves the safety problem by relying on a particular object; that object functions as a lock.
  • Which objects can be used in a synchronized code block? Any object, but all the threads must use the same object.
  • Prerequisites for synchronization: there are multiple threads, and they all use the same lock object.
  • Benefit of synchronization: it solves the multi-threaded safety problem.
  • Drawback of synchronization: when there are many threads, every thread has to check the synchronization lock, which costs resources and silently reduces the program's running efficiency.

5.3 Solving thread safety issues, implementation 2: synchronized methods

Synchronized method: simply add the synchronized keyword to the method declaration.

package cn.itcast;

// Ticket-selling example implemented with a synchronized method
class Ticket implements Runnable {

    // define 10 tickets
    private static int tickets = 10;

    public void run() {
        while (true) {
            sellTicket();
        }
    }

    private synchronized void sellTicket() {
        if (tickets > 0) {
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            System.out.println(Thread.currentThread().getName() + "正在出售第"
                    + (tickets--) + "张票 ");
        }
    }
}

class TicketDemo {

    public static void main(String[] args) {
        // Create thread objects via the Thread class, passing the Runnable implementation to the constructor.
        Ticket t = new Ticket();
        Thread t1 = new Thread(t);
        Thread t2 = new Thread(t);
        Thread t3 = new Thread(t);
        Thread t4 = new Thread(t);
        // Call start() on each thread object to start the threads.
        t1.start();
        t2.start();
        t3.start();
        t4.start();
    }
}

operation result:

Thread-0正在出售第10张票 
Thread-0正在出售第9张票 
Thread-0正在出售第8张票 
Thread-3正在出售第7张票 
Thread-3正在出售第6张票 
Thread-3正在出售第5张票 
Thread-3正在出售第4张票 
Thread-3正在出售第3张票 
Thread-3正在出售第2张票 
Thread-3正在出售第1张票 

A synchronized code block's lock is an object of any type that you define yourself, so does a synchronized method also have a lock, and if so, what is it? Yes: a synchronized method's lock is the object on which the method is called, that is, the object referred to by this. The benefit is that the synchronized method is shared by all threads while the object that owns the method is unique to them all, which guarantees the uniqueness of the lock. While one thread is executing the method, no other thread can enter it until the first thread finishes, which achieves thread synchronization.

Sometimes the method that needs synchronizing is a static method. A static method can be called directly as ClassName.methodName() without creating an object, which raises a question: if no object is created, the lock of a static synchronized method cannot be this, so what is it? The lock of a static synchronized method in Java is the Class object of the class that declares the method. This object is created automatically when the class is loaded and can be obtained directly as ClassName.class.

PS:

  • What is the lock object of a synchronized instance method? The this object.
  • What is the lock object of a static synchronized method? The Class object of the class (the class's bytecode object).
  • So should we use synchronized methods or synchronized code blocks?

If the lock object is this you can consider a synchronized method; otherwise, and whenever you can, prefer a synchronized code block. A sketch of the two lock objects follows.
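
A minimal sketch (class and field names are illustrative) of the two lock objects discussed above: an instance method synchronizes on this, a static method synchronizes on the Class object, so each synchronized block below is equivalent to the corresponding synchronized method.

class Counter {
    private static int total = 0;
    private int count = 0;

    public synchronized void increment() {            // locks on this
        count++;
    }

    public void incrementBlock() {
        synchronized (this) {                          // same lock as increment()
            count++;
        }
    }

    public static synchronized void incrementTotal() { // locks on Counter.class
        total++;
    }

    public static void incrementTotalBlock() {
        synchronized (Counter.class) {                 // same lock as incrementTotal()
            total++;
        }
    }
}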

5.3.1 Thread-safe and non-thread-safe classes

Some classes, such as StringBuffer, Vector, and Hashtable, are thread-safe but relatively inefficient, so they are rarely used today. Instead, the Collections utility class is used to wrap non-thread-safe collections:

List<String> list = Collections.synchronizedList(new ArrayList<String>()); // obtain a thread-safe List
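
A small sketch of using the wrapped list (assuming the usual java.util imports and that the list is shared between threads): the wrapper makes the individual methods thread-safe, but iteration still has to be synchronized manually on the list itself, as the Collections documentation requires.

List<String> list = Collections.synchronizedList(new ArrayList<String>());
list.add("a");
list.add("b");
synchronized (list) {              // manual synchronization is required while iterating
    for (String s : list) {
        System.out.println(s);
    }
}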

5.3.2 Use of Lock in JDK5

Although we can reason about which lock a synchronized block or method uses, the code never shows explicitly where the lock is acquired and where it is released. To express locking and unlocking more clearly, JDK 5 introduced a new lock object: Lock.

Lock interface: The Lock implementation provides a wider range of locking operations than can be obtained using synchronized methods and statements, and this implementation allows a more flexible structure.

  • void lock(): Get the lock

  • void unlock(): Release the lock

  • ReentrantLock: Lock implementation class

package cn.itcast;

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class SellTicket implements Runnable {

    // define the tickets
    private int tickets = 100;

    // define the lock object
    private Lock lock = new ReentrantLock();

    public void run() {
        while (true) {
            try {
                // acquire the lock
                lock.lock();
                if (tickets > 0) {
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    System.out.println(Thread.currentThread().getName()
                            + "正在出售第" + (tickets--) + "张票");
                }
            } finally {
                // release the lock
                lock.unlock();
            }
        }
    }
}

/*
 * Although we can reason about which lock object a synchronized block or method uses, the code
 * never shows explicitly where the lock is acquired and released. To make this explicit,
 * JDK 5 introduced a new lock object: Lock.
 *
 * Lock: void lock() acquires the lock; void unlock() releases it. ReentrantLock is an implementation of Lock.
 */
public class SellTicketDemo {

    public static void main(String[] args) {
        // create the shared resource object
        SellTicket st = new SellTicket();

        // create three ticket windows
        Thread t1 = new Thread(st, "窗口1");
        Thread t2 = new Thread(st, "窗口2");
        Thread t3 = new Thread(st, "窗口3");

        // start the threads
        t1.start();
        t2.start();
        t3.start();
    }
}

operation result:

...
窗口2正在出售第15张票
窗口2正在出售第14张票
窗口2正在出售第13张票
窗口2正在出售第12张票
窗口2正在出售第11张票
窗口3正在出售第10张票
窗口3正在出售第9张票
窗口3正在出售第8张票
窗口3正在出售第7张票
窗口3正在出售第6张票
窗口3正在出售第5张票
窗口3正在出售第4张票
窗口3正在出售第3张票
窗口1正在出售第2张票
窗口2正在出售第1张票

The difference in usage between synchronized and lock

synchronized: applied to whatever needs to be synchronized. It can be placed on a method or around a specific code block, and in the code-block form the object to be locked is named in parentheses.

Lock: you must explicitly mark where locking starts and ends. ReentrantLock is generally used as the lock, and the threads involved must share the same ReentrantLock object for the lock to take effect. The lock and unlock points are written out explicitly with lock() and unlock(), and unlock() is normally placed in a finally block so the lock is always released.

The usage difference is relatively simple, so I won’t go into details here. If you don’t understand, you can take a look at the basic Java syntax.

The performance difference between synchronized and lock

synchronized is managed and executed by the JVM, while Lock is lock-control code written in Java. In Java 1.5 synchronized performed poorly: it was a heavyweight operation that had to call into the operating system, so taking the lock could cost more system time than the work being protected, and the Lock object provided by Java performed better. With Java 1.6 this changed: synchronized has clear semantics and allows many optimizations, such as adaptive spinning, lock elimination, lock coarsening, lightweight locks, and biased locks, so on Java 1.6 synchronized performs no worse than Lock. The Java team has also stated that it favors synchronized and that future versions still have room to optimize it.

synchronized originally used a pessimistic locking strategy: a thread takes an exclusive lock, and other threads can only block and wait for it to be released. Blocking and waking threads forces the CPU to perform context switches, and when many threads compete for the lock, frequent context switching makes it very inefficient.

Lock, by contrast, is built on an optimistic approach: an operation is attempted each time without blocking, on the assumption that there is no conflict; if the attempt fails because of a conflict it is retried until it succeeds. A sketch of this retry idea follows.
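
The retry-on-conflict idea described above can be seen most directly in the atomic classes of java.util.concurrent.atomic, which rely on CAS (compare-and-swap); a small illustrative sketch (class name is ours), not a claim about how any particular Lock implementation is written:

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);

        // incrementAndGet() internally reads the value, tries to CAS it to value + 1,
        // and retries if another thread changed it in the meantime -- no blocking lock is taken.
        System.out.println(counter.incrementAndGet()); // 1

        // The same retry loop written out by hand:
        int oldValue;
        do {
            oldValue = counter.get();
        } while (!counter.compareAndSet(oldValue, oldValue + 1));
        System.out.println(counter.get()); // 2
    }
}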

6. Deadlock

The so-called deadlock (DeadLock) refers to a situation in which two or more processes (or threads), competing for resources during execution, end up waiting for each other; without outside intervention none of them can proceed. The system is then said to be deadlocked, and these processes, forever waiting for one another, are called deadlocked processes.

The main causes of deadlock are:

  • Insufficient system resources.
  • An unsuitable order in which processes run and advance.
  • Improper allocation of resources.

If system resources are sufficient and every process's resource requests can be satisfied, deadlock is very unlikely; otherwise processes can fall into deadlock while competing for limited resources. Deadlock can also occur when processes run and advance in an unfortunate order or at unfortunate speeds.

Four necessary conditions for deadlock to occur

  • Mutual exclusion condition: a resource can be used by only one process at a time.
  • Hold-and-wait condition: a process that is blocked waiting for a resource keeps the resources it has already obtained.
  • Non-preemption condition: resources a process has obtained cannot be forcibly taken away before it is finished with them.
  • Circular wait condition: a circular chain of processes exists in which each waits for a resource held by the next.

These four conditions are all necessary for deadlock: whenever a deadlock occurs they all hold, and if any one of them is broken, deadlock cannot occur.

Deadlock release and prevention

By understanding the causes of deadlock, especially the four necessary conditions, you can avoid, prevent, and eliminate deadlocks as far as possible. In system design and process scheduling, take care to keep these four conditions from all holding at once and to choose a reasonable resource-allocation algorithm so that processes do not occupy system resources forever. Also prevent a process from holding resources while it waits. While the system runs, it dynamically checks every resource request it could grant and decides, based on the check, whether to allocate: if allocating the resource could lead the system into deadlock, the allocation is refused; otherwise it is granted. Resource allocation must therefore be planned properly.

1. Orderly resource allocation method

In this algorithm, all resources are numbered uniformly according to some rule (for example, printer = 1, tape drive = 2, disk = 3), and requests must be made in ascending order of these numbers. The system requires a requesting process to:

1. Apply at one time for all the resources of the same category that it will need;
2. When applying for resources of different categories, apply in order of the category numbers. For example, process PA uses resources in the order R1, R2, while process PB uses them in the order R2, R1; with dynamic allocation this can form a circular wait and cause deadlock.

With ordered resource allocation: R1 is numbered 1 and R2 is numbered 2;
PA must request in the order R1, R2;
PB must also request in the order R1, R2.

This breaks the circular-wait condition and prevents deadlock: a later requester cannot block an earlier one over a partially held resource, and the process that applied first is guaranteed to be able to finish. (See the sketch below for the same idea expressed with Java locks.)
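
A minimal sketch (resource names R1/R2 and thread names PA/PB are illustrative) of the ordered-allocation rule expressed with Java locks: both threads acquire the locks in the same fixed order, so a circular wait cannot arise.

public class LockOrderingDemo {
    private static final Object R1 = new Object();
    private static final Object R2 = new Object();

    public static void main(String[] args) {
        Runnable task = new Runnable() {
            public void run() {
                // Every thread locks R1 before R2, never the reverse,
                // so no cycle of "waiting for each other" can form.
                synchronized (R1) {
                    synchronized (R2) {
                        System.out.println(Thread.currentThread().getName() + " uses R1 and R2");
                    }
                }
            }
        };
        new Thread(task, "PA").start();
        new Thread(task, "PB").start();
    }
}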

2. Banker’s Algorithm

The best-known deadlock-avoidance algorithm is the banker's algorithm proposed by E. W. Dijkstra in 1968. It checks each applicant's maximum demand for resources, and a request is granted only if the resources currently available in the system can still satisfy that demand.

In this way the applicant can finish its computation quickly and release the resources it occupies, which guarantees that all processes in the system can complete, so deadlock is avoided.

3. Methods to eliminate deadlocks

  • Abort all the deadlocked processes.
  • Abort the deadlocked processes one by one until the deadlock no longer exists.
  • Force the deadlocked processes to release the resources they hold, one at a time, until the deadlock disappears.
  • Forcibly take enough resources from other processes and give them to the deadlocked processes to break the deadlock.

Disadvantages of synchronization: lower efficiency, and if synchronization is nested, deadlock can easily occur.
Deadlock: two or more threads waiting for each other because they are competing for resources during execution.

package cn.itcast;

class Ticket implements Runnable {

    private static int num = 100;
    Object obj = new Object();
    boolean flag = true;

    public void run() {
        if (flag) {
            while (true) {
                synchronized (obj) { // first take the obj lock, then show() takes the this lock
                    show();
                }
            }
        } else {
            while (true) {
                show();              // first take the this lock, then the obj lock inside show()
            }
        }
    }

    public synchronized void show() {
        synchronized (obj) {
            if (num > 0) {
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(Thread.currentThread().getName()
                        + "...function..." + num--);
            }
        }
    }
}

class DeadLockDemo {

    public static void main(String[] args) {
        Ticket t = new Ticket();
        Thread t1 = new Thread(t);
        Thread t2 = new Thread(t);

        t1.start();
        try {
            Thread.sleep(10);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        t.flag = false;
        t2.start();
    }
}

operation result: after printing a few tickets the program stops producing output and hangs (the original screenshot is omitted).

Cause Analysis:

As the result shows, the program locks up and cannot proceed. The synchronized block in the run method must obtain the obj lock before it can call show(), and show(), being a synchronized method, must obtain the this lock before it can enter its own synchronized(obj) block. When thread t1 has acquired the obj lock and is executing the synchronized block, thread t2 has acquired the this lock and entered show(). Now t1 cannot call show() because it cannot get the this lock held by t2, and t2 cannot enter the synchronized(obj) block inside show() because it cannot get the obj lock held by t1. Each waits for the other, and a deadlock results.
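
When a program hangs like this, the JDK can confirm the deadlock: running jstack against the process prints a deadlock report, and the same information is available programmatically through java.lang.management. The small sketch below is an addition to the original text:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Prints the threads that the JVM currently considers deadlocked, if any.
public class DeadlockReporter {

    public static void report() {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long[] ids = bean.findDeadlockedThreads();   // null when no deadlock exists
        if (ids == null) {
            System.out.println("no deadlock detected");
            return;
        }
        for (ThreadInfo info : bean.getThreadInfo(ids)) {
            System.out.println("deadlocked: " + info.getThreadName()
                    + " waiting for " + info.getLockName());
        }
    }
}

Calling DeadlockReporter.report() from a separate watchdog thread in DeadLockDemo would print the two stuck threads once they block on each other.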

7. Communication between threads

Multiple threads may work on the same resource while performing different tasks; this requires communication between the threads. To support it, Java provides the wait/notify (waiting/wake-up) mechanism.
Methods involved in the wait/notify mechanism:

  • wait(): puts the current thread into a frozen (waiting) state; the thread is stored in the lock's wait pool.

  • notify(): wakes up one thread from the lock's wait pool (which one is arbitrary).

  • notifyAll(): wakes up all threads in the lock's wait pool.

PS:

  • These methods must be called from within synchronized code, because they operate on threads that hold or wait for a lock.

  • Be clear about which lock's threads are being operated on!

  • What is the difference between wait and sleep? (See the summary in section 12.4.)

4. Why are wait, notify and notifyAll defined in the Object class? Because these methods are monitor methods, and the monitor is in fact the lock. Since the lock can be any object, methods that must be callable on any lock object have to be defined in the Object class.

5. Producer-consumer problem:

package cn.itcast;

class Student {
    String name;
    int age;
    boolean flag;   // false: no data yet, produce; true: data available, consume
}

class SetThread implements Runnable {

    private Student s;
    private int x = 0;

    public SetThread(Student s) {
        this.s = s;
    }

    public void run() {
        while (true) {
            synchronized (s) {
                // check whether there is already unconsumed data
                if (s.flag) {
                    try {
                        s.wait();   // t1 waits and releases the lock
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }

                if (x % 2 == 0) {
                    s.name = "张三";
                    s.age = 15;
                } else {
                    s.name = "李四";
                    s.age = 16;
                }
                x++;

                // flip the flag
                s.flag = true;
                // wake up the waiting thread; being woken up does not mean it runs
                // immediately, it still has to compete for the CPU
                s.notify();
            }
        }
    }
}

class GetThread implements Runnable {

    private Student s;

    public GetThread(Student s) {
        this.s = s;
    }

    public void run() {
        while (true) {
            synchronized (s) {
                if (!s.flag) {
                    try {
                        s.wait();   // t2 waits and releases the lock at once; when it is
                                    // woken up later, execution resumes from here
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }

                System.out.println(s.name + "---" + s.age);

                // flip the flag
                s.flag = false;
                // wake up the waiting thread (t1)
                s.notify();
            }
        }
    }
}

public class StudentDemo {

    public static void main(String[] args) {
        // create the shared resource
        Student s = new Student();

        // the producing and consuming tasks
        SetThread st = new SetThread(s);
        GetThread gt = new GetThread(s);

        // the thread objects
        Thread t1 = new Thread(st);
        Thread t2 = new Thread(gt);

        // start the threads
        t1.start();
        t2.start();
    }
}

operation result: the two threads alternate, printing 张三---15 and 李四---16 in turn.
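
One caveat worth noting (an addition to the original text): with a single producer and a single consumer the if/notify pair above works, but as soon as there are several producers or consumers the condition must be re-checked in a while loop and notifyAll() should be used, otherwise a thread may resume while the condition it waited for is still false. A minimal variant of SetThread illustrating the pattern (it reuses the Student class defined above):

class SafeSetThread implements Runnable {

    private final Student s;
    private int x = 0;

    SafeSetThread(Student s) {
        this.s = s;
    }

    public void run() {
        while (true) {
            synchronized (s) {
                while (s.flag) {          // while, not if: re-check the condition after waking up
                    try {
                        s.wait();
                    } catch (InterruptedException e) {
                        return;           // stop producing if interrupted
                    }
                }
                s.name = (x % 2 == 0) ? "张三" : "李四";
                s.age = (x % 2 == 0) ? 15 : 16;
                x++;
                s.flag = true;
                s.notifyAll();            // wake every waiting thread, not just one
            }
        }
    }
}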

8. Thread group

ThreadGroup is used in Java to represent a thread group, which can classify and manage a batch of threads. Java allows programs to directly control thread groups. By default, all threads belong to the main thread group.

public final ThreadGroup getThreadGroup()   // returns the thread group this thread belongs to

Thread(ThreadGroup group, Runnable target, String name)   // constructor that places the thread in the given group
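
A small sketch (not part of the original text) showing both of these APIs:

public class ThreadGroupDemo {

    public static void main(String[] args) {
        // By default a new thread joins the creating thread's group (here: "main").
        Thread t1 = new Thread(() -> {}, "t1");
        System.out.println(t1.getThreadGroup().getName());   // main

        // Explicitly place threads into a named group.
        ThreadGroup group = new ThreadGroup("workers");
        Thread t2 = new Thread(group, () -> {}, "t2");
        System.out.println(t2.getThreadGroup().getName());   // workers
        System.out.println(group.activeCount());             // number of started, live threads in the group
    }
}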

9. Thread pool

9.1 What is a thread pool?

Baidu Encyclopedia defines it this way: **a thread pool is a form of multithreaded processing in which tasks are added to a queue and started automatically once threads have been created; thread-pool threads are background threads.** Each thread uses the default stack size, runs at the default priority, and is in a multithreaded apartment.

9.2 Why use thread pool?

  • Creating and destroying threads carries system overhead; doing it too frequently hurts processing efficiency.

For example, let T1 be the time spent creating a thread, T2 the time spent executing the task, and T3 the time spent destroying the thread. If T1 + T3 > T2, starting a dedicated thread for this task is clearly not worthwhile. A thread pool caches threads, so an existing idle thread can run the new task and the T1 + T3 overhead is avoided.

  • Too many concurrent threads seize system resources and cause blocking.

We know that threads can share system resources. If too many threads are executed at the same time, it may cause insufficient system resources and cause blocking. The use of thread pools can effectively control the maximum number of concurrent threads and avoid the above problems.

  • Do some simple management of threads.

For example: delayed execution, timed loop execution strategies, etc., can be well implemented using thread pools.

This part first mentions the small concurrency toolkit added in Java 5, java.util.concurrent.atomic, then introduces the concept of a thread pool and demonstrates the different ways Java 5 can create threads, and then introduces the Callable and Future objects used to obtain a result after a thread finishes; lock technology is covered in a separate article.

The thread concurrency library in Java5 is in the java.util.concurrent package and its subpackages

9.3 Inheritance structure of Executor class

(Figure: inheritance diagram — Executor → ExecutorService → AbstractExecutorService → ThreadPoolExecutor)

Executor is the top-level interface of the thread pool and has only one method execute() for executing tasks.

ExecutorService is a sub-interface of Executor, which contains some commonly used methods in thread pools.

| Method | Description |
| --- | --- |
| execute() | executes a task |
| shutdown() | accepts no new tasks after the call; tasks already submitted are still executed |
| shutdownNow() | accepts no new tasks after the call; waiting tasks are removed from the queue and an attempt is made to stop running tasks |
| isShutdown() | whether the thread pool has been shut down |
| isTerminated() | whether all tasks in the thread pool have completed |
| submit() | submits a task (returns a Future) |
| invokeAll() | executes a collection of tasks |
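
The Callable and Future pair mentioned above is used through submit(). A short sketch (an addition to the original text):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitDemo {

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // submit() accepts a Callable and returns a Future holding the result.
        Future<Integer> future = pool.submit(() -> {
            int sum = 0;
            for (int i = 1; i <= 100; i++) sum += i;
            return sum;
        });

        System.out.println(future.get());   // blocks until the result is ready: 5050
        pool.shutdown();                     // stop accepting new tasks, finish queued ones
    }
}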

9.4 ThreadPoolExecutor

ThreadPoolExecutor is the default implementation of ExecutorService and the class underlying the Executors factory methods.

9.4.1 Construction method

public ThreadPoolExecutor(
    int corePoolSize,                   // core pool size
    int maximumPoolSize,                // maximum number of threads
    long keepAliveTime,                 // keep-alive time for idle threads
    TimeUnit unit,                      // time unit of keepAliveTime
    BlockingQueue<Runnable> workQueue,  // blocking queue that holds tasks waiting to be executed
    ThreadFactory threadFactory,        // factory used to create new threads
    RejectedExecutionHandler handler    // handler invoked when a task is rejected
)
9.4.1.1 int corePoolSize

The size of the core pool. This parameter is closely related to the implementation principle of the thread pool described later. After the pool is created it contains no threads by default; threads are created only when tasks arrive, unless prestartAllCoreThreads() or prestartCoreThread() is called first. As their names suggest, these two methods pre-create either corePoolSize threads or a single thread before any task arrives. So, by default, the pool starts with 0 threads; when a task comes in, a new thread is created to run it, and once the number of threads reaches corePoolSize, further tasks are placed into the work (cache) queue instead.

9.4.1.2 int maximumPoolSize

The maximum number of threads in the thread pool. This parameter is also a very important parameter. It indicates the maximum number of threads that can be created in the thread pool.

9.4.1.3 long keepAliveTime

Indicates how long a thread may stay idle before it is terminated. By default keepAliveTime only applies while the pool contains more than corePoolSize threads: if such an extra thread stays idle for keepAliveTime it is terminated, until the thread count has dropped back to corePoolSize. If allowCoreThreadTimeOut(boolean) is called, however, keepAliveTime also applies when the pool has no more than corePoolSize threads, so the count can drop all the way to 0.

9.4.1.4 TimeUnit unit

The time unit of keepAliveTime; there are 7 values:

  • TimeUnit.DAYS // days
  • TimeUnit.HOURS // hours
  • TimeUnit.MINUTES // minutes
  • TimeUnit.SECONDS // seconds
  • TimeUnit.MILLISECONDS // milliseconds
  • TimeUnit.MICROSECONDS // microseconds
  • TimeUnit.NANOSECONDS // nanoseconds
9.4.1.5 RejectedExecutionHandler

When both the thread pool and the workQueue are full, the following built-in policies decide what happens to a newly submitted task:

  • ThreadPoolExecutor.AbortPolicy
    the default policy: the task is discarded and a RejectedExecutionException is thrown.

  • ThreadPoolExecutor.DiscardPolicy
    the task is also discarded, but no exception is thrown.

  • ThreadPoolExecutor.DiscardOldestPolicy
    the task at the head of the queue is discarded and the submission is retried (this may repeat).

  • ThreadPoolExecutor.CallerRunsPolicy
    the new task is not queued; if the pool has not been shut down, the task runs in the calling thread itself.
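
Putting the constructor parameters and a rejection policy together, here is a small illustrative configuration (the numbers are arbitrary examples, not recommendations from the original text):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfigDemo {

    public static void main(String[] args) {
        // 2 core threads, at most 4 threads, idle non-core threads die after 60 seconds,
        // a bounded queue of 10, and CallerRunsPolicy so overflow tasks run in the caller.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(10),
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.CallerRunsPolicy());

        for (int i = 0; i < 20; i++) {
            final int task = i;
            pool.execute(() -> System.out.println(
                    Thread.currentThread().getName() + " runs task " + task));
        }
        pool.shutdown();
    }
}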

9.5 Processing strategy after the task is submitted to the thread pool

1. If the current number of threads in the pool is less than corePoolSize, a new thread is created to execute each arriving task.

2. If the current number of threads is >= corePoolSize, each arriving task is first offered to the task cache queue.

2.1 If it is added successfully, the task waits in the queue until an idle thread takes it out and executes it.

2.2 If adding fails (usually because the cache queue is full), the pool tries to create a new thread to execute the task.

3. If the number of threads has already reached maximumPoolSize, the task rejection policy is applied.

If the pool holds more than corePoolSize threads and a thread stays idle longer than keepAliveTime, that thread is terminated, until the count is back down to corePoolSize; if core threads are also allowed to time out, idle core threads whose idle time exceeds keepAliveTime are terminated as well.

9.6 Introduction to blocking queues

9.6.1 BlockingQueue

| Blocking queue | Description |
| --- | --- |
| BlockingQueue | top-level interface of blocking queues, mainly used to implement producer–consumer queues |
| BlockingDeque | a blocking double-ended queue (deque) |
| SynchronousQueue | a synchronous queue with no stored capacity (direct hand-off, alternating): after an element is put, the producer must wait for another thread to take it before putting again. The static factory method Executors.newCachedThreadPool uses this queue |
| LinkedBlockingQueue | an (optionally) unbounded, linked-list-based FIFO blocking queue that supports concurrent access. The static factory method Executors.newFixedThreadPool() uses this queue |
| LinkedBlockingDeque | a concurrent blocking deque backed by a doubly linked list; supports both FIFO and FILO use, i.e. thread-safe insertion and removal at both the head and the tail |
| ConcurrentLinkedQueue | an unbounded thread-safe queue based on linked nodes (a non-blocking queue, not a BlockingQueue) |
| ArrayBlockingQueue | an array-backed bounded (fixed-size) blocking queue; only put and take block, and it supports an optional fairness policy |
| PriorityBlockingQueue | a priority-based blocking queue, ordered by the elements' natural ordering or by the Comparator passed to the constructor. The request queue in Volley uses a priority queue |
| DelayQueue | a delay queue |

9.6.2 Queuing strategy

Direct submission

A good default choice for a work queue is SynchronousQueue, which hands tasks directly to threads without otherwise holding them: if no thread is immediately available to run a task, the attempt to queue it fails, so a new thread is constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct submission generally requires an unbounded maximumPoolSize to avoid rejection of newly submitted tasks, which in turn admits the possibility of unbounded thread growth when commands continue to arrive, on average, faster than they can be processed.

Unbounded queue

Using an unbounded queue (for example a LinkedBlockingQueue without a predefined capacity) causes new tasks to wait in the queue whenever all corePoolSize threads are busy, so no more than corePoolSize threads are ever created (and the value of maximumPoolSize therefore has no effect). This is appropriate when each task is completely independent of the others, so tasks cannot affect each other's execution, for example in a web page server. This kind of queuing can smooth out transient bursts of requests, but it admits the possibility of unbounded queue growth when commands continue to arrive, on average, faster than they can be processed.

Bounded queue

A bounded queue (such as an ArrayBlockingQueue) helps prevent resource exhaustion when used with a finite maximumPoolSize, but it is harder to tune and control. Queue size and maximum pool size have to be traded off against each other: a large queue with a small pool minimizes CPU usage, OS resources and context-switching overhead, but can artificially reduce throughput; if tasks frequently block (for example if they are I/O bound), the system may be able to schedule time for more threads than you would otherwise permit. A small queue generally requires a larger pool, which keeps the CPUs busier but may incur unacceptable scheduling overhead, which also decreases throughput.

9.6.3 BlockingQueue

| Operation | Throws exception | Special value | Blocks | Times out |
| --- | --- | --- | --- | --- |
| Insert | add(e) | offer(e) | put(e) | offer(e, time, unit) |
| Remove | remove() | poll() | take() | poll(time, unit) |
| Examine | element() | peek() | not applicable | not applicable |

A BlockingQueue does not accept null elements. Implementations throw NullPointerException on attempts to add, put or offer a null, because null is used as a sentinel value to indicate that a poll operation failed.

A BlockingQueue may be capacity-bounded. At any given time it has a remainingCapacity beyond which no additional elements can be put without blocking. A BlockingQueue without any intrinsic capacity constraint always reports a remaining capacity of Integer.MAX_VALUE.

BlockingQueue implementations are designed primarily for producer–consumer queues, but they additionally support the Collection interface, so it is possible, for example, to remove an arbitrary element with remove(x). Such operations are generally not performed very efficiently, however, and are intended only for occasional use, such as when a queued message is cancelled.

BlockingQueue implementations are thread-safe: all queuing methods achieve their effects atomically, using internal locks or other forms of concurrency control. However, the bulk Collection operations addAll, containsAll, retainAll and removeAll are not necessarily performed atomically unless an implementation states otherwise; it is therefore possible, for example, for addAll(c) to fail (throwing an exception) after adding only some of the elements in c.

A BlockingQueue does not intrinsically support any kind of "close" or "shutdown" operation to indicate that no more items will be added. The need for and the use of such features tend to be implementation-dependent. A common tactic, for example, is for producers to insert special end-of-stream or poison objects, which consumers interpret accordingly when they take them.
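
A minimal sketch of that poison-object tactic (the POISON sentinel and the class name are illustrative additions, not from the original):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PoisonPillDemo {

    private static final String POISON = "EOF";   // sentinel marking "no more data"

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(5);

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String item = queue.take();
                    if (item == POISON) break;    // the producer signalled the end of the stream
                    System.out.println("consumed " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        for (int i = 0; i < 3; i++) queue.put("item-" + i);
        queue.put(POISON);                         // tell the consumer to stop
        consumer.join();
    }
}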

9.6.4 BlockingDeque

A double-ended blocking queue: elements can be inserted and removed at both ends, with blocking variants of the operations.

9.6.5 ArrayBlockingQueue

A bounded blocking queue backed by an array. It orders elements FIFO (first in, first out). Its capacity must be specified when the object is created, just like an array, because internally the elements are stored in an array. Bounded means it cannot hold an unlimited number of elements: there is an upper limit on how many elements it can hold at one time, which is set at construction and cannot be changed afterwards (it is array-based, so like an array its size is fixed once initialized).

The example below uses two ArrayBlockingQueues to make a pooled thread and the main thread take turns (mutual exclusion, alternating execution):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingQueueCondition {

    public static void main(String[] args) {
        ExecutorService service = Executors.newSingleThreadExecutor();
        final Business3 business = new Business3();
        service.execute(new Runnable() {
            public void run() {
                for (int i = 0; i < 50; i++) {
                    business.sub();
                }
            }
        });
        for (int i = 0; i < 50; i++) {
            business.main();
        }
    }
}

class Business3 {

    BlockingQueue<Integer> subQueue  = new ArrayBlockingQueue<Integer>(1);
    BlockingQueue<Integer> mainQueue = new ArrayBlockingQueue<Integer>(1);

    {
        try {
            mainQueue.put(1);   // let the sub thread run first
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public void sub() {
        try {
            mainQueue.take();               // wait for permission to run
            for (int i = 0; i < 10; i++) {
                System.out.println(Thread.currentThread().getName() + " : " + i);
            }
            subQueue.put(1);                // hand the turn to main()
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void main() {
        try {
            subQueue.take();                // wait for permission to run
            for (int i = 0; i < 5; i++) {
                System.out.println(Thread.currentThread().getName() + " : " + i);
            }
            mainQueue.put(1);               // hand the turn back to sub()
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Output:

pool-1-thread-1 : 0
pool-1-thread-1 : 1
pool-1-thread-1 : 2
pool-1-thread-1 : 3
pool-1-thread-1 : 4
pool-1-thread-1 : 5
pool-1-thread-1 : 6
pool-1-thread-1 : 7
pool-1-thread-1 : 8
pool-1-thread-1 : 9
main : 0
main : 1
main : 2
main : 3
main : 4
pool-1-thread-1 : 0
pool-1-thread-1 : 1
pool-1-thread-1 : 2
pool-1-thread-1 : 3
pool-1-thread-1 : 4
pool-1-thread-1 : 5
pool-1-thread-1 : 6
pool-1-thread-1 : 7
pool-1-thread-1 : 8
pool-1-thread-1 : 9
main : 0
main : 1
main : 2
main : 3
main : 4
pool-1-thread-1 : 0
pool-1-thread-1 : 1
pool-1-thread-1 : 2
pool-1-thread-1 : 3
pool-1-thread-1 : 4
pool-1-thread-1 : 5
pool-1-thread-1 : 6
pool-1-thread-1 : 7
pool-1-thread-1 : 8
pool-1-thread-1 : 9
...

9.6.6 LinkedBlockingQueue

A blocking queue with an optionally bounded capacity that orders elements FIFO (first in, first out). If no capacity is specified when it is created, the default is Integer.MAX_VALUE. Linked queues typically have higher throughput than array-based queues, but less predictable performance in most concurrent applications.

9.6.7 SynchronousQueue

A synchronous queue. It has no stored capacity: each insert must wait for another thread's remove, and vice versa. It can be thought of as holding at most a single element in transit; a thread trying to insert blocks until another thread takes the element away, and a thread trying to take blocks until another thread inserts one. Calling it a queue is therefore something of an exaggeration; it is more like a rendezvous point.
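
A tiny hand-off sketch (an addition to the original text):

import java.util.concurrent.SynchronousQueue;

public class SynchronousQueueDemo {

    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> handoff = new SynchronousQueue<>();

        new Thread(() -> {
            try {
                // take() blocks until another thread calls put()
                System.out.println("received: " + handoff.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        Thread.sleep(500);        // put() below blocks until the taking thread is ready
        handoff.put("hello");     // hands the element directly to the waiting thread
    }
}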

9.6.8 DelayQueue

A delay queue: it holds elements until a specific delay has expired, and an element can only be taken from it once its delay has elapsed. Elements placed in it must implement the java.util.concurrent.Delayed interface.
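
A minimal sketch of an element that implements Delayed (class and field names are illustrative additions):

import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayQueueDemo {

    // An element only becomes visible to take() after its delay expires.
    static class DelayedTask implements Delayed {
        private final String name;
        private final long runAt;                     // absolute time in milliseconds

        DelayedTask(String name, long delayMillis) {
            this.name = name;
            this.runAt = System.currentTimeMillis() + delayMillis;
        }

        public long getDelay(TimeUnit unit) {
            return unit.convert(runAt - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
        }

        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                                other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<DelayedTask> queue = new DelayQueue<>();
        queue.put(new DelayedTask("later", 2000));
        queue.put(new DelayedTask("sooner", 500));
        System.out.println(queue.take().name);   // "sooner", after about 0.5 s
        System.out.println(queue.take().name);   // "later", after about 2 s
    }
}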

9.6.9 PriorityBlockingQueue

A priority-based blocking queue, ordered by the elements' natural ordering or by the Comparator supplied to the constructor. Application example: Volley.

9.6.10 Producer–consumer

Producers produce tasks and consumers consume them, so a task queue is needed: producers insert tasks into the queue, and consumers take tasks out of it and execute them.

9.7 The thread-pool utility class Executors

Executors is a class added in JDK 1.5 that provides static factory methods for producing commonly used thread pools; ThreadPoolExecutor is the implementation underlying the Executors class.

| Method | Description |
| --- | --- |
| newCachedThreadPool() | creates a thread pool of cached, reusable threads |
| newFixedThreadPool() | creates a thread pool of a fixed size |
| newScheduledThreadPool() | creates a thread pool that supports delayed and periodic task execution |
| newSingleThreadExecutor() | creates a single-thread pool: there is always one worker thread, and if it dies a successor is created |
| defaultThreadFactory() | returns the default thread factory |

9.8 The thread-pool programming model

Under the thread-pool programming model, a task is submitted to the pool as a whole rather than handed to a particular thread; once the pool receives the task, it looks internally for an idle thread and gives the task to it. That is the encapsulation the pool provides.

Remember: tasks are submitted to the pool as a whole. A single thread can execute only one task at a time, but many tasks can be submitted to the same pool at once.

Examples:

  • creating a fixed-size thread pool
  • creating a cached thread pool
  • using a thread pool as a timer/scheduler
  • creating a single-thread pool (there is always one thread in the pool; when it dies a successor is created)

Note:

A scheduler always works with relative time. To run a task at a specific wall-clock time, for example tomorrow at 10:00, subtract the current time from tomorrow 10:00 to get the delay, as sketched below.
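
As a small illustration of that calculation (this sketch uses java.time, which is an addition to the original text):

import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ScheduleAtClockTime {

    public static void main(String[] args) {
        // delay = (tomorrow 10:00) - now, then hand that delay to the scheduler
        LocalDateTime target = LocalDateTime.of(
                LocalDateTime.now().toLocalDate().plusDays(1), LocalTime.of(10, 0));
        long delayMillis = Duration.between(LocalDateTime.now(), target).toMillis();

        Executors.newScheduledThreadPool(1).schedule(
                () -> System.out.println("running at 10:00"),
                delayMillis, TimeUnit.MILLISECONDS);
    }
}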

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ThreadPoolTest {

    public static void main(String[] args) {

        // fixed-size pool: at most 3 tasks run at the same time
        //ExecutorService threadPool = Executors.newFixedThreadPool(3);

        // cached pool: threads are created as needed, so all submitted tasks can run
        //ExecutorService threadPool = Executors.newCachedThreadPool();

        // single-thread pool: there is always exactly one thread alive;
        // if it dies, a new successor thread is created
        ExecutorService threadPool = Executors.newSingleThreadExecutor();

        for (int i = 1; i <= 10; i++) {
            // an anonymous inner class can only use local variables that are
            // (effectively) final, and i changes, so copy it into a final variable
            final int task = i;
            threadPool.execute(new Runnable() {
                @Override
                public void run() {
                    for (int j = 1; j <= 10; j++) {
                        System.out.println(Thread.currentThread().getName()
                                + " is looping of " + j + "  for task of " + task);
                    }
                }
            });
        }
        // verify that all 10 tasks have been submitted to the pool
        System.out.println("all of 10 tasks have committed! ");
        //threadPool.shutdown();      // finish the queued tasks, then terminate the threads
        //threadPool.shutdownNow();   // stop the threads immediately

        // use a thread pool as a timer

        Executors.newScheduledThreadPool(3).schedule(
                new Runnable() {            // the task
                    @Override
                    public void run() {
                        System.out.println("bombing!");
                    }
                },
                5,                          // run 5 seconds from now
                TimeUnit.SECONDS);          // time unit

        // run once after an initial delay, then repeat at a fixed rate
        Executors.newScheduledThreadPool(3).scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                System.out.println("bombing!");
            }
        }, 10,                              // first run after 10 seconds
           3,                               // then every 3 seconds
           TimeUnit.SECONDS);
    }
}

9.9 A simple thread-pool manager

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.ThreadPoolExecutor.AbortPolicy;
import java.util.concurrent.TimeUnit;

/**
 * A simple thread-pool manager that exposes three kinds of pools.
 */
public class ThreadManager {

    public static final String DEFAULT_SINGLE_POOL_NAME = "DEFAULT_SINGLE_POOL_NAME";

    private static ThreadPoolProxy mLongPool = null;
    private static Object mLongLock = new Object();

    private static ThreadPoolProxy mShortPool = null;
    private static Object mShortLock = new Object();

    private static ThreadPoolProxy mDownloadPool = null;
    private static Object mDownloadLock = new Object();

    private static Map<String, ThreadPoolProxy> mMap = new HashMap<String, ThreadPoolProxy>();
    private static Object mSingleLock = new Object();

    /** Returns the pool for download tasks. */
    public static ThreadPoolProxy getDownloadPool() {
        synchronized (mDownloadLock) {
            if (mDownloadPool == null) {
                mDownloadPool = new ThreadPoolProxy(3, 3, 5L);
            }
            return mDownloadPool;
        }
    }

    /** Returns a pool for long-running tasks (typically network calls), kept separate from the
     *  short-task queue so that important short tasks are not blocked behind them. */
    public static ThreadPoolProxy getLongPool() {
        synchronized (mLongLock) {
            if (mLongPool == null) {
                mLongPool = new ThreadPoolProxy(5, 5, 5L);
            }
            return mLongPool;
        }
    }

    /** Returns a pool for short tasks (typically local I/O or SQL), kept separate from long tasks
     *  so they do not wait a long time in the same queue. */
    public static ThreadPoolProxy getShortPool() {
        synchronized (mShortLock) {
            if (mShortPool == null) {
                mShortPool = new ThreadPoolProxy(2, 2, 5L);
            }
            return mShortPool;
        }
    }

    /** Returns a single-thread pool: all tasks run in the order they were submitted,
     *  which removes the need for explicit synchronization. */
    public static ThreadPoolProxy getSinglePool() {
        return getSinglePool(DEFAULT_SINGLE_POOL_NAME);
    }

    /** Returns the named single-thread pool: all tasks run in the order they were submitted. */
    public static ThreadPoolProxy getSinglePool(String name) {
        synchronized (mSingleLock) {
            ThreadPoolProxy singlePool = mMap.get(name);
            if (singlePool == null) {
                singlePool = new ThreadPoolProxy(1, 1, 5L);
                mMap.put(name, singlePool);
            }
            return singlePool;
        }
    }

    public static class ThreadPoolProxy {

        private ThreadPoolExecutor mPool;
        private int mCorePoolSize;
        private int mMaximumPoolSize;
        private long mKeepAliveTime;

        private ThreadPoolProxy(int corePoolSize, int maximumPoolSize, long keepAliveTime) {
            mCorePoolSize = corePoolSize;
            mMaximumPoolSize = maximumPoolSize;
            mKeepAliveTime = keepAliveTime;
        }

        /** Executes a task; if the pool has been shut down, a new pool is created first. */
        public synchronized void execute(Runnable run) {
            if (run == null) {
                return;
            }
            if (mPool == null || mPool.isShutdown()) {
                // Parameter notes:
                // while the thread count is below mCorePoolSize, a new thread is created for each task;
                // once it equals mCorePoolSize, tasks are placed into the BlockingQueue;
                // when the queue is full, additional threads are created up to mMaximumPoolSize,
                // beyond which tasks are handed to the RejectedExecutionHandler;
                // mKeepAliveTime is how long an idle thread survives when the queue is empty
                // (the next parameter is its time unit);
                // the ThreadFactory is used to create each new thread.
                mPool = new ThreadPoolExecutor(mCorePoolSize, mMaximumPoolSize, mKeepAliveTime,
                        TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>(),
                        Executors.defaultThreadFactory(), new AbortPolicy());
            }
            mPool.execute(run);
        }

        /** Cancels a task that is still waiting in the queue. */
        public synchronized void cancel(Runnable run) {
            if (mPool != null && (!mPool.isShutdown() || mPool.isTerminating())) {
                mPool.getQueue().remove(run);
            }
        }

        /** Returns whether a task is still waiting in the queue. */
        public synchronized boolean contains(Runnable run) {
            if (mPool != null && (!mPool.isShutdown() || mPool.isTerminating())) {
                return mPool.getQueue().contains(run);
            } else {
                return false;
            }
        }

        /** Shuts the pool down immediately; running tasks are interrupted. */
        public void stop() {
            if (mPool != null && (!mPool.isShutdown() || mPool.isTerminating())) {
                mPool.shutdownNow();
            }
        }

        /** Shuts the pool down gracefully: already-submitted tasks finish before it terminates. */
        public synchronized void shutdown() {
            if (mPool != null && (!mPool.isShutdown() || mPool.isTerminating())) {
                mPool.shutdown();   // shutdown(), not shutdownNow(), so queued tasks still run
            }
        }
    }
}
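
A short usage sketch for the ThreadManager above (an addition to the original text):

public class ThreadManagerDemo {

    public static void main(String[] args) {
        // submit a quick local task to the "short" pool defined by ThreadManager
        ThreadManager.getShortPool().execute(new Runnable() {
            public void run() {
                System.out.println("short task on " + Thread.currentThread().getName());
            }
        });
    }
}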

10. Using timers

A timer is a widely used threading tool that schedules tasks to run on a background thread, either once or repeatedly at regular intervals. In Java, scheduled execution can be implemented with the Timer and TimerTask classes.

Timer

A facility for threads to schedule tasks for future execution in a background thread; a task may be scheduled to run once or to repeat at regular intervals.

| Method declaration | Description |
| --- | --- |
| public Timer() | constructor |
| public void schedule(TimerTask task, long delay) | schedules the task to run once after the given delay |
| public void schedule(TimerTask task, long delay, long period) | schedules the task for repeated fixed-delay execution, starting after the given delay |

TimerTask

A task that can be scheduled by a Timer for one-time or repeated execution.

| Method declaration | Description |
| --- | --- |
| public abstract void run() | the action to be performed by this timer task |
| public boolean cancel() | cancels this timer task |
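
A minimal Timer/TimerTask sketch (an addition to the original text):

import java.util.Timer;
import java.util.TimerTask;

public class TimerDemo {

    public static void main(String[] args) {
        final Timer timer = new Timer();
        timer.schedule(new TimerTask() {
            public void run() {
                System.out.println("task executed");
                timer.cancel();    // stop the timer's background thread
            }
        }, 2000);                  // run once after a 2-second delay
    }
}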

In real projects the Quartz scheduler is commonly used instead: an open-source job-scheduling framework written entirely in Java.

11. Thread execution architecture

12. Multithreading summary

12.1 Three ways to implement multithreading

  1. Extend the Thread class and override run().
  2. Implement the Runnable interface: new Thread(new Runnable(){…});
  3. Implement the Callable interface, used together with a thread pool.

12.2 Benefits of implementing Runnable

  • The thread's task is separated from the Thread subclass and encapsulated on its own, separating data from program logic and packaging the task as an object, in keeping with object-oriented design.
  • It avoids the limitation of Java's single inheritance, which is why the second way of creating threads is the more commonly used one.

12.3 Communication between threads

  • Multiple threads work on the same resource but perform different tasks; this is when inter-thread communication is needed.
  • Methods involved in the wait/notify mechanism:
    • wait(): puts the thread into a frozen (waiting) state; the waiting thread is stored in the lock's wait pool.
    • notify(): wakes up one thread in the lock's wait pool (which one is arbitrary).
    • notifyAll(): wakes up all threads in the lock's wait pool.

12.4 Differences between wait and sleep

  • wait can be called with or without a timeout; sleep must be given a time.
  • Inside synchronized code they treat the CPU and the lock differently:
    • wait: gives up the CPU and releases the lock; defined in Object.
    • sleep: gives up the CPU but does not release the lock; defined in Thread.
  • Both sleep and wait throw the checked InterruptedException and must handle it; notify and notifyAll do not throw checked exceptions.

12.5 Common methods

| Method declaration | Description |
| --- | --- |
| String getName() | gets the thread's name |
| void setName(String name) | sets the thread's name |
| static Thread currentThread() | gets the currently executing thread |
| int getPriority() | gets the thread priority |
| void setPriority(int newPriority) | sets the thread priority (1-10) |
| static void sleep(long millis) | puts the thread to sleep |
| void join() | the calling thread waits until this thread finishes |
| static void yield() | the thread yields (courtesy) |
| setDaemon(boolean on) | marks the thread as a background/daemon thread |
| void stop() | outdated and deprecated |
| interrupt() | interrupts the thread |
| isInterrupted() | whether the thread has been interrupted |

12.6 Thread life cycle

new, ready, running, blocking (synchronous blocking, waiting blocking, other blocking), dead


Origin blog.csdn.net/qq_34988304/article/details/132662683