Detailed explanation and summary of threads, multithreading and thread pools

What is a process?

A process is a running instance of a program: the dynamic execution of that program at a point in time. A program usually corresponds to one process, but it can also be run as several processes, so there is no strict one-to-one correspondence between programs and processes. A process can contain many threads, but it has exactly one main thread. No two processes in the system are identical (at the very least, their start times differ).

What is a thread?

A thread is the smallest unit that the operating system can schedule for execution. It is part of a process and is the actual unit of execution within the process: a single sequential flow of control. A process can contain multiple threads that run concurrently, each performing a different task. The thread is the basic unit of independent scheduling and dispatch. Threads in the same process share the process's system resources, such as the virtual address space, file descriptors, and signal handlers, but each thread has its own call stack, its own register context, and its own thread-local storage.

4 ways to create threads

  • Inherit the Thread class and override the run() method
public class ThreadTest extends Thread {

	@Override
	public void run() {
		// The task executed by the new thread goes here
		System.out.println("Extend the Thread class");
	}

	public static void main(String[] args) {
		ThreadTest thread = new ThreadTest();
		// start() creates the new thread, which then invokes run()
		thread.start();
	}
}
  • Implement the Runnable interface and implement the run() method
public class ThreadTest implements Runnable {

	@Override
	public void run() {
		System.out.println("Implement the Runnable interface");
	}

	public static void main(String[] args) {
		// Wrap the Runnable in a Thread and start it
		Thread thread = new Thread(new ThreadTest());
		thread.start();
	}
}
  • Implement the Callable interface and create a thread through FutureTask
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class ThreadTest implements Callable<Integer> {

	@Override
	public Integer call() throws Exception {
		int i = 0;
		System.out.println("Implement the Callable interface");
		return i;
	}

	public static void main(String[] args) {
		// Create the Callable implementation
		ThreadTest threadTest = new ThreadTest();
		// Wrap the Callable in a FutureTask
		FutureTask<Integer> futureTask = new FutureTask<Integer>(threadTest);
		// Create the thread
		Thread thread = new Thread(futureTask);
		// Start the thread
		thread.start();
	}
}
  • Using a thread pool
    Java provides four common thread pools through the Executors class, namely:
    (1) newCachedThreadPool creates a cacheable thread pool. If the pool grows beyond what is currently needed, idle threads are flexibly reclaimed; if no idle thread is available, a new one is created.
    (2) newFixedThreadPool creates a fixed-size thread pool, which caps the number of concurrently running threads; excess tasks wait in a queue.
    (3) newScheduledThreadPool creates a fixed-size thread pool that supports delayed and periodic task execution.
    (4) newSingleThreadExecutor creates a single-threaded pool, which uses a single worker thread to execute tasks, guaranteeing that tasks are executed sequentially in the order they are submitted.
    To have a thread pool execute a task, the task must implement the Runnable or Callable interface.
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadTest implements Callable<Integer> {

	@Override
	public Integer call() throws Exception {
		int i = 0;
		System.out.println("Implement the Callable interface");
		return i;
	}

	public static void main(String[] args) throws InterruptedException, ExecutionException {
		// Obtain a thread pool
		ExecutorService cachedThreadPool = Executors.newCachedThreadPool();
		// Submit the Callable task; the returned Future wraps the pending result
		Future<Integer> submit = cachedThreadPool.submit(new ThreadTest());
		// Block until the task finishes and fetch its result
		Integer i = submit.get();
		// Shut the pool down so the program can exit promptly
		cachedThreadPool.shutdown();
	}
}

The difference between Runnable interface and Callable interface

The run() method of the Runnable interface has no return value and cannot throw checked exceptions, while the call() method of the Callable interface returns a value and may throw checked exceptions.
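
To make the contrast concrete, here is a minimal sketch I am adding (the task bodies and the single-thread pool are arbitrary choices, not from the original article):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RunnableVsCallable {
	public static void main(String[] args) throws Exception {
		ExecutorService pool = Executors.newSingleThreadExecutor();

		// Runnable: no return value, cannot throw checked exceptions
		Runnable runnable = () -> System.out.println("run() returns nothing");
		pool.execute(runnable);

		// Callable: returns a value and may throw checked exceptions
		Callable<Integer> callable = () -> 42;
		Future<Integer> future = pool.submit(callable);
		System.out.println("call() returned: " + future.get());

		pool.shutdown();
	}
}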

The difference between execute() method and submit() method

(1) The execute() method is used to submit tasks that do not need a return value, so there is no way to tell whether the task was executed successfully.
(2) The submit() method is used to submit tasks that do need a return value. The thread pool returns a Future object; through this Future you can check whether the task completed successfully, and you can obtain the return value with Future's get() method. get() blocks the current thread until the task finishes, while get(long timeout, TimeUnit unit) blocks the current thread for at most the given time and then returns; at that point the task may not have completed yet.
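
A small illustrative sketch of the two submission styles (the task, the 100 ms of simulated work and the 1-second timeout are made-up values):

import java.util.concurrent.*;

public class ExecuteVsSubmit {
	public static void main(String[] args) throws Exception {
		ExecutorService pool = Executors.newFixedThreadPool(2);

		// execute(): fire and forget, no handle to the result
		pool.execute(() -> System.out.println("executed, no Future returned"));

		// submit(): returns a Future that carries the result (or the exception)
		Future<String> future = pool.submit(() -> {
			Thread.sleep(100);          // simulate some work
			return "done";
		});

		// Blocks for at most 1 second; throws TimeoutException if not finished in time
		String result = future.get(1, TimeUnit.SECONDS);
		System.out.println("submit result: " + result);

		pool.shutdown();
	}
}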

Thread states


  1. New: a thread object has just been created.
  2. Runnable (ready): after the thread object is created, another thread calls its start() method; the thread is now ready to run and waits for the scheduler to select it.
  3. Running: a thread in the ready state has been given the CPU and is executing its code.
  4. Blocked: the thread gives up the CPU for some reason and temporarily stops running; it can move to the running state again only after first returning to the ready state. There are three kinds of blocking:
    (1) Wait blocking: the running thread calls wait(), and the JVM puts the thread into the waiting pool of that object (note: wait() releases the lock it holds).
    (2) Synchronization blocking: the running thread tries to acquire an object's synchronization lock that is held by another thread, so the JVM puts the thread into the lock pool.
    (3) Other blocking: the running thread calls sleep() or join(), or issues an I/O request, and the JVM puts the thread into the blocked state. When sleep() times out, join() sees the target thread terminate or times out, or the I/O completes, the thread returns to the ready state (note: sleep() does not release the locks it holds).
  5. Dead (terminated): the thread finishes run(), or exits run() because of an exception, and its life cycle ends.
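
As a rough illustration I am adding (note that Java's own Thread.State enum uses slightly different names: NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED), a thread's current state can be observed with getState():

public class ThreadStateDemo {
	public static void main(String[] args) throws InterruptedException {
		Thread t = new Thread(() -> {
			try {
				Thread.sleep(500);      // puts the thread into TIMED_WAITING
			} catch (InterruptedException e) {
				Thread.currentThread().interrupt();
			}
		});

		System.out.println(t.getState()); // NEW: created but not started
		t.start();
		System.out.println(t.getState()); // usually RUNNABLE right after start()
		Thread.sleep(100);
		System.out.println(t.getState()); // TIMED_WAITING while inside sleep()
		t.join();
		System.out.println(t.getState()); // TERMINATED after run() finishes
	}
}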

The difference between general thread and daemon thread

A daemon thread is a thread that provides a background service while the program runs; the garbage collection thread is a typical example. Such a thread is not an indispensable part of the program: when all non-daemon threads have finished, the program terminates, and all remaining daemon threads in the process are killed along with it.
Difference: the only difference lies in when the JVM exits. Daemon threads exist to serve other threads; once all user threads have exited there is nothing left for the daemons to serve, and the JVM exits. Daemon threads are often created automatically by the JVM (though not necessarily; users can create them too), whereas user threads are the threads created by the program itself.
Points to note when using daemon threads (see the sketch after this list):
(1) thread.setDaemon(true) must be called before thread.start(); otherwise an IllegalThreadStateException is thrown. You cannot turn an already running regular thread into a daemon thread.
(2) A new thread created inside a daemon thread is also a daemon thread.
(3) A daemon thread should never access persistent resources such as files or databases, because it can be interrupted at any time, even in the middle of an operation.
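
A minimal sketch of point (1) above, assuming nothing beyond the standard library; the sleep durations are arbitrary:

public class DaemonDemo {
	public static void main(String[] args) throws InterruptedException {
		Thread daemon = new Thread(() -> {
			while (true) {
				System.out.println("daemon working...");
				try {
					Thread.sleep(200);
				} catch (InterruptedException e) {
					return;
				}
			}
		});

		// Must be set before start(), otherwise IllegalThreadStateException
		daemon.setDaemon(true);
		daemon.start();

		// When main (the last user thread) ends, the JVM exits and the daemon dies with it
		Thread.sleep(600);
		System.out.println("main finished, JVM exits, daemon is killed");
	}
}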

Thread methods

(1) The difference between sleep and wait
(1) sleep is a method of the Thread class. It suspends the current thread for the specified time and gives other threads a chance to execute, but the thread keeps any locks it holds and automatically resumes when the time is up. Calling sleep does not release the object lock. sleep() puts the current thread into a blocked state, and it will not execute during the specified time.
(2) wait is a method of the Object class. Calling wait on an object makes the current thread give up that object's lock and enter the object's waiting pool. Only after notify() (or notifyAll()) is called on that object does the thread move into the object's lock pool, where it becomes ready to compete for the lock and, once it acquires it, continues running.
Differences:
1. sleep is a method of the Thread class, while wait is a method of the Object class.
2. Calling sleep does not release any object locks the thread holds, while wait releases the lock of the object it is called on.
3. wait, notify and notifyAll can only be used inside synchronized methods or synchronized blocks, while sleep can be used anywhere (their scopes of use differ).
4.
(1) sleep puts a thread to sleep for a given amount of time, after which it automatically wakes up and becomes runnable (ready); it does not enter the running state immediately, because the thread scheduler also needs time to resume it. Note that sleep is a static method, which means it only ever acts on the currently executing thread. It is wrong to try to put the thread object to sleep with thread.sleep(5000): that call only makes the current thread (here, the main thread) sleep, not the thread named thread. In the code below, the test thread runs its for loop 100 times, and only after about five seconds does the main thread print "main finished!".

public class SleepTest implements Runnable {

	@Override
	public void run() {
		for (int i = 0; i < 100; i++) {
			System.out.println("test thread running: " + i);
		}
	}

	public static void main(String[] args) throws InterruptedException {
		// Create the test thread
		Thread thread = new Thread(new SleepTest());
		// Start the thread
		thread.start();
		// This sleeps the main thread for 5 seconds, NOT the thread object:
		// sleep() is static and always acts on the currently executing thread
		thread.sleep(5000);
		System.out.println("main finished!");
	}
}

(2) Once wait() has been called on an object, notify() or notifyAll() must later be called on that object to wake the waiting thread.
(2) yield(), join(), notify(), notifyAll()
(1) yield() pauses the current thread so that threads of the same or higher priority get a chance to run; if there are none, yield() has no effect and the current thread continues executing immediately.
(2) join() makes the currently executing thread wait for another thread to finish before continuing. For example, tt.join() waits for thread tt to end; without that line, main simply continues executing without waiting for tt.

When tt.join() is not used

public class JoinTest implements Runnable {

	@Override
	public void run() {
		System.out.println("tt thread starts!");
		System.out.println("tt thread ends!");
	}

	public static void main(String[] args) throws InterruptedException {
		// Create the test thread
		Thread tt = new Thread(new JoinTest());
		System.out.println("main starts!");
		// Start the thread
		tt.start();
		// Without join(), main does not wait for tt
		// tt.join();
		System.out.println("main finished!");
	}
}

The result of the operation is:

main starts!
main finished!
tt thread starts!
tt thread ends!

When using tt.join()

public class JoinTest implements Runnable {

	@Override
	public void run() {
		System.out.println("tt thread starts!");
		System.out.println("tt thread ends!");
	}

	public static void main(String[] args) throws InterruptedException {
		// Create the test thread
		Thread tt = new Thread(new JoinTest());
		System.out.println("main starts!");
		// Start the thread
		tt.start();
		// Make the main thread wait until tt finishes
		tt.join();
		System.out.println("main finished!");
	}
}

The result of the operation is:

main starts!
tt thread starts!
tt thread ends!
main finished!

(3) notify() wakes up a single thread waiting on this object and allows it to resume. If several threads are waiting on the object, only one of them is woken; which one is chosen depends on how the JVM and operating system schedule threads.
(4) notifyAll() wakes up all threads waiting on this object. It does not hand the object's lock to all of them; they have to compete for it. Only the thread that acquires the lock becomes ready to run; the threads that fail to acquire it continue to wait for the lock.
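
A minimal wait/notify sketch added for illustration (the lock object and messages are made up); it shows that wait() must be called while holding the monitor and that notify() wakes one waiter:

public class WaitNotifyDemo {
	private static final Object lock = new Object();

	public static void main(String[] args) throws InterruptedException {
		Thread waiter = new Thread(() -> {
			synchronized (lock) {
				try {
					System.out.println("waiter: waiting, releasing the lock");
					lock.wait();                       // releases the lock and waits
					System.out.println("waiter: woken up, lock reacquired");
				} catch (InterruptedException e) {
					Thread.currentThread().interrupt();
				}
			}
		});
		waiter.start();

		Thread.sleep(200);                             // give the waiter time to call wait()
		synchronized (lock) {
			System.out.println("main: notifying");
			lock.notify();                             // wakes one thread waiting on lock
		}
	}
}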

What is multithreading?

Running multiple threads at the same time in one program to carry out different tasks is called multithreading.

Advantages and problems of multithreading

Advantages:
(1) Better resource utilization
(2) Simpler program design in some cases
(3) More responsive programs
Learn more: http://ifeve.com/benefits/
Problems:
(1) Thread synchronization issues
(2) Thread safety issues

What is a deadlock?

Deadlock is a state in which each of a group of concurrent processes waits for resources held by the others, and none of them releases its own resources before obtaining the ones it is waiting for; as a result every process wants resources, none of them gets any, and none of them can make further progress.

Necessary conditions for deadlock

(1) Mutual exclusion. A resource can be occupied by only one process at a time; it cannot be held by two or more processes simultaneously. Exclusive resources such as a CD-ROM drive or a printer must be released by the occupying process before another process can use them; this is determined by the nature of the resource itself. A single-plank bridge is such an exclusive resource: people coming from both sides cannot cross it at the same time.

(2) No preemption. Before a process finishes using a resource it has acquired, another process cannot forcibly take that resource away; it can only be released voluntarily by its owner. A person crossing the single-plank bridge cannot force the person coming the other way to retreat, nor push them off the bridge; the person on the bridge must finish crossing (that is, actively release the resource) before the other person can cross.

(3) Hold and wait. A process already holds at least one resource while requesting a new one; because the new resource is held by another process, the requester blocks, but it keeps holding the resources it already has while it waits. On the single-plank bridge, two people meet in the middle: A has walked over part of the bridge (holding some resources) and needs the rest of the deck (requesting new resources), but that part is occupied by B, who has walked over the other section. A cannot get through and neither advances nor retreats, and B is in exactly the same situation.

(4) Circular wait. There is a sequence of processes {P1, P2, ..., Pn} such that P1 waits for a resource held by P2, P2 waits for a resource held by P3, ..., and Pn waits for a resource held by P1, forming a circular chain of waiting. In the single-plank bridge example, A waits for the part of the deck that B occupies and B waits for the part that A occupies, so they wait for each other in a cycle.

All four conditions above must hold simultaneously for a deadlock to occur. In other words, as long as any one of these necessary conditions is broken, the deadlock can be eliminated.
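
As an illustration I am adding (class and lock names are made up), two threads that acquire the same two locks in opposite order will usually deadlock, satisfying all four conditions at once:

public class DeadlockDemo {
	private static final Object lockA = new Object();
	private static final Object lockB = new Object();

	public static void main(String[] args) {
		Thread t1 = new Thread(() -> {
			synchronized (lockA) {                 // hold lockA ...
				sleep(100);
				synchronized (lockB) {             // ... and wait for lockB
					System.out.println("t1 got both locks");
				}
			}
		});
		Thread t2 = new Thread(() -> {
			synchronized (lockB) {                 // hold lockB ...
				sleep(100);
				synchronized (lockA) {             // ... and wait for lockA
					System.out.println("t2 got both locks");
				}
			}
		});
		t1.start();
		t2.start();                                // both threads block forever (circular wait)
	}

	private static void sleep(long millis) {
		try {
			Thread.sleep(millis);
		} catch (InterruptedException e) {
			Thread.currentThread().interrupt();
		}
	}
}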

Prevention of deadlock

(1) Break the mutual exclusion condition. That is, allow several processes to access certain resources at the same time. However, some resources simply cannot be shared, such as printers; this is determined by the nature of the resource itself, so this approach has little practical value.

(2) Break the no-preemption condition. That is, allow a process to forcibly take certain resources from their current holder. In practice this means that when a process already holds some resources and requests new ones that cannot be granted immediately, it must release everything it holds and request again later; the released resources can then be allocated to other processes. This amounts to covertly preempting the resources the process had occupied. This way of preventing deadlock is hard to implement and degrades system performance.

(3) Break the hold-and-wait condition. A resource pre-allocation strategy can be used: before it starts running, a process requests all the resources it will ever need from the system at once. If not all of them can be satisfied, none are allocated and the process does not run for the time being; only when the system can satisfy all of the process's resource requirements does it allocate everything in one go. Since a running process already holds everything it needs, holding some resources while requesting more never happens, so deadlock cannot occur. However, this strategy has the following disadvantages:

1) In many cases a process cannot know all the resources it will need before it executes, because execution is dynamic and unpredictable;
2) Resource utilization is low. Regardless of when the allocated resources are actually used, a process can run only after it has obtained all of them. Even a resource that the process uses only once is held for the process's whole lifetime, which is clearly a great waste of resources;
3) Process concurrency is reduced. Because resources are limited and partly wasted, fewer processes can be granted everything they need at once.

(4) Break the circular-wait condition by allocating resources in a fixed order. With this strategy, resources are classified and numbered in advance and are allocated in order of their numbers, so that resource requests cannot form a cycle. Every process must request resources strictly in increasing order of resource number: a process holding lower-numbered resources may request higher-numbered ones, so no cycle can form and deadlock is prevented. Compared with the previous strategy, this greatly improves resource utilization and system throughput, but it still has the following shortcomings:

1) It restricts how processes may request resources, it is difficult to number all the resources in a system sensibly, and it increases system overhead;
2) To follow the numbering order, resources that are not yet needed may have to be requested in advance, which increases the time a process holds resources.
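
Applied to the Java deadlock sketch above, breaking the circular wait simply means that every thread acquires the locks in the same fixed order; this ordering fix is my own illustration, not part of the original text:

public class LockOrderingDemo {
	private static final Object lockA = new Object();   // lower "number"
	private static final Object lockB = new Object();   // higher "number"

	// Every thread acquires lockA before lockB, so no cycle of waiting can form
	private static void doWork(String name) {
		synchronized (lockA) {
			synchronized (lockB) {
				System.out.println(name + " got both locks");
			}
		}
	}

	public static void main(String[] args) {
		new Thread(() -> doWork("t1")).start();
		new Thread(() -> doWork("t2")).start();
	}
}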

Deadlock avoidance

Banker's Algorithm etc.
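
For reference, here is a hedged sketch of the safety check at the heart of the Banker's Algorithm; the Available vector, Allocation matrix and Need matrix in main are example data I made up:

import java.util.Arrays;

public class BankersSafetyCheck {

	// Returns true if the system is in a safe state
	static boolean isSafe(int[] available, int[][] allocation, int[][] need) {
		int n = allocation.length;            // number of processes
		int m = available.length;             // number of resource types
		int[] work = Arrays.copyOf(available, m);
		boolean[] finished = new boolean[n];

		boolean progress = true;
		while (progress) {
			progress = false;
			for (int p = 0; p < n; p++) {
				if (!finished[p] && canRun(need[p], work)) {
					// Pretend process p runs to completion and releases its allocation
					for (int r = 0; r < m; r++) {
						work[r] += allocation[p][r];
					}
					finished[p] = true;
					progress = true;
				}
			}
		}
		for (boolean f : finished) {
			if (!f) return false;             // some process can never finish: unsafe state
		}
		return true;
	}

	static boolean canRun(int[] need, int[] work) {
		for (int r = 0; r < need.length; r++) {
			if (need[r] > work[r]) return false;
		}
		return true;
	}

	public static void main(String[] args) {
		// Example data (made up): 3 processes, 2 resource types
		int[] available = {3, 3};
		int[][] allocation = {{0, 1}, {2, 0}, {3, 0}};
		int[][] need = {{2, 2}, {1, 2}, {3, 0}};
		System.out.println("Safe state: " + isSafe(available, allocation, need));
	}
}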

Original address: https://blog.csdn.net/abigale1011/article/details/6450845

What is a thread pool

In object-oriented programming, creating and destroying objects takes time, because creating an object requires memory and possibly other resources. This is especially true in Java, where the virtual machine tracks every object so that it can be garbage collected once it is no longer used. One way to improve the efficiency of a service program is therefore to reduce the number of objects that are created and destroyed, especially resource-intensive objects; this is the motivation for "pooled resource" techniques. A thread pool, as the name implies, creates a number of executable threads in advance and puts them into a pool (container). When a thread is needed it is taken from the pool instead of being created, and when it is no longer needed it is returned to the pool instead of being destroyed. This reduces the cost of creating and destroying thread objects.

Advantages of thread pool

First: reduced resource consumption. Reusing already created threads lowers the cost of thread creation and destruction.

Second: improved response speed. When a task arrives, it can be executed without waiting for a thread to be created.

Third: better thread manageability. Threads are a scarce resource; creating them without limit not only consumes system resources but also reduces system stability. A thread pool allows threads to be allocated, tuned, and monitored in a uniform way.

Create thread pool

The Executor interface introduced in Java 5 defines a tool for executing tasks; its subtype ExecutorService is the thread pool interface. Configuring a thread pool by hand is fairly complicated, especially when the principles behind thread pools are not well understood, so the Executors utility class provides static factory methods that produce the commonly used thread pools, as shown below:
1. newFixedThreadPool creates a thread pool with a fixed number of worker threads. Each time a task is submitted, a worker thread is created until the pool reaches its maximum size; after that, submitted tasks are held in the pool's queue.
2. newCachedThreadPool creates a cacheable thread pool with these characteristics:
1) There is almost no limit on the number of worker threads it can create (in fact the limit is Integer.MAX_VALUE), so threads can be added to the pool flexibly.
2) If no task is submitted to the pool for a while, that is, if a worker thread stays idle for a certain time (60 seconds by default), the worker thread terminates automatically. If a new task is submitted after that, the pool creates a new worker thread.
3. newSingleThreadExecutor creates a single-threaded Executor, i.e. it creates only one worker thread to execute tasks. If that thread ends because of an exception, a new one replaces it, so sequential execution is still guaranteed. The biggest feature of a single worker thread is that tasks are guaranteed to execute one after another, and no more than one task is active at any given time.
4. newScheduledThreadPool creates a fixed-size thread pool that supports delayed and periodic task execution, similar to Timer.

The drawbacks of Executors returning thread pool objects

  • FixedThreadPool and SingleThreadExecutor: the request queue is allowed to grow up to length Integer.MAX_VALUE, so a large number of requests may pile up and cause OOM (OutOfMemoryError);
  • CachedThreadPool and ScheduledThreadPool: the number of threads allowed to be created is Integer.MAX_VALUE, so a large number of threads may be created and cause OOM (OutOfMemoryError);
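
Because of these drawbacks, a common alternative is to construct a ThreadPoolExecutor directly with a bounded queue. The sketch below uses pool sizes, queue capacity and keep-alive time that I chose for illustration; they are not prescribed by the article:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolDemo {
	public static void main(String[] args) {
		ThreadPoolExecutor pool = new ThreadPoolExecutor(
				2,                                   // core pool size
				4,                                   // maximum pool size
				60, TimeUnit.SECONDS,                // keep-alive time for idle non-core threads
				new ArrayBlockingQueue<>(100),       // bounded queue: at most 100 waiting tasks
				new ThreadPoolExecutor.CallerRunsPolicy()); // policy when pool and queue are full

		for (int i = 0; i < 10; i++) {
			final int taskId = i;
			pool.execute(() -> System.out.println("task " + taskId + " on " + Thread.currentThread().getName()));
		}
		pool.shutdown();
	}
}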

Thread pool running process

For the thread pool's internal execution flow, this blogger's write-up is recommended:
https://blog.csdn.net/u011240877/article/details/73440993

Origin blog.csdn.net/qq_47768542/article/details/109129897