Thread Pool Source Code: Interpretation and Principles

foreword

Veteran Programmer Lao Wang

Lao Wang is a programmer who drifted in Beijing for more than ten years. Too old to keep working overtime yet unable to get promoted, he switched to the bathing industry and opened a bathing center, yes, a perfectly regular one. The bathhouse he liked most back in Beijing was called "Tsinghua Pool", so after some thought he named his own bathing center "Thread Pool".

Thread Pool Bath Center
After "Thread Pool" opened, Lao Wang noticed that some customers wanted pedicures, so he recruited a pedicure technician and added the extra service to increase income. As pedicure customers grew in number, he recruited four more technicians to earn even more.
After a while, business kept getting better and more and more customers came for pedicures. But Lao Wang already had 5 pedicure technicians in the shop; recruiting more would mean wages he could not afford. Yet what if the technicians were simply too busy? Lao Wang is a smart man, and he immediately thought of a solution: have the customers line up, and whenever a technician finishes and becomes free, call in the next customer from the line.

Busy weekends
On weekends, several times the usual number of customers came to the bathing center. The queue for pedicures grew too long and the customers became impatient. Lao Wang reacted immediately and urgently borrowed 5 pedicure technicians from other bathing centers to serve the queued customers, greatly shortening the line.
Still, sometimes business was so hot that even with the urgently recruited technicians, the wait was very long. When new customers arrived, Lao Wang could only smile and say, "Come again next time; next time I'll find you a good technician," turning them away.
After the weekend, the shop could not afford to keep idle staff, so Lao Wang dismissed all the urgently recruited technicians.

Lao Wang's management method
Lao Wang's business is booming; soon he will open branches, raise funds, go public, and reach the pinnacle of his life. Since he has been so successful, let us review his management method:

1. Introduction to thread pool

1.1 Basic concept of thread

The thread life cycle is as follows:

[figure: thread life-cycle state diagram]

NEW: java.lang.Thread.State.NEW

public static void thread_state_NEW() {
    Thread thread = new Thread();
    System.out.println(thread.getState()); // NEW
}
RUNNABLE: java.lang.Thread.State.RUNNABLE

public static void thread_state_RUNNABLE() {
    Thread thread = new Thread();
    thread.start();
    System.out.println(thread.getState()); // typically RUNNABLE (the empty run() may already have finished)
}
TIMED_WAITING: java.lang.Thread.State.TIMED_WAITING

public static void thread_state_SLEEP() throws InterruptedException {
    Thread thread3 = new Thread(() -> {
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    });
    thread3.start();
    Thread.sleep(500);
    System.out.println(thread3.getState()); // TIMED_WAITING
}
WAITING: java.lang.Thread.State.WAITING

public static void thread_state_WAITING() throws InterruptedException {
    Thread thread2 = new Thread(new Runnable() {
        public void run() {
            LockSupport.park();
        }
    });
    thread2.start();
    Thread.sleep(500);
    System.out.println(thread2.getState()); // WAITING
    LockSupport.unpark(thread2);
}
BLOCKED: java.lang.Thread.State.BLOCKED

public static void thread_state_BLOCKED() throws InterruptedException {
    final byte[] lock = new byte[0];
    Thread thread1 = new Thread(() -> {
        synchronized (lock) {
            try {
                Thread.sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    });
    thread1.start();
    Thread thread2 = new Thread(() -> {
        synchronized (lock) {
        }
    });
    thread2.start();
    Thread.sleep(1000);

    System.out.println(thread1.getState()); // TIMED_WAITING (sleeping while holding the lock)
    System.out.println(thread2.getState()); // BLOCKED (waiting to enter the synchronized block)
}
TERMINATED: java.lang.Thread.State.TERMINATED

public static void thread_state_TERMINATED() throws InterruptedException {
    Thread thread = new Thread();
    thread.start();
    Thread.sleep(1000);
    System.out.println(thread.getState()); // TERMINATED
}

1.2 Basic concept of thread pool

1.2.1 Why use thread pool

There are also some precautions for using the thread pool in the project. Refer to the "Java Development Manual - Taishan Edition" for instructions:
[Mandatory] Thread resources must be provided through a thread pool; explicitly creating threads in the application is not allowed.
Explanation: the advantage of a thread pool is that it reduces the time spent creating and destroying threads and the overhead on system resources, solving the problem of insufficient resources. Without a thread pool, the system may create a large number of threads of the same type, leading to memory exhaustion or excessive context switching.
[Mandatory] Thread pools must not be created with Executors; use ThreadPoolExecutor instead. This approach makes the thread pool's running rules explicit to developers and avoids the risk of resource exhaustion.

The disadvantages of the thread pool objects returned by Executors are as follows:
FixedThreadPool and SingleThreadPool:
The allowed request queue length is Integer.MAX_VALUE, which may accumulate a large number of requests, resulting in OOM.
CachedThreadPool:
The number of threads allowed to be created is Integer.MAX_VALUE, which may create a large number of threads, resulting in OOM.
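Both limits can be checked directly on the pools Executors returns; here is a minimal sketch that casts to ThreadPoolExecutor to inspect the configuration (the pool sizes passed in are arbitrary illustration values):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class ExecutorsRiskDemo {
    public static void main(String[] args) {
        // FixedThreadPool: its LinkedBlockingQueue is effectively unbounded
        ThreadPoolExecutor fixed = (ThreadPoolExecutor) Executors.newFixedThreadPool(2);
        System.out.println(fixed.getQueue().remainingCapacity()); // 2147483647

        // CachedThreadPool: maximumPoolSize is Integer.MAX_VALUE
        ThreadPoolExecutor cached = (ThreadPoolExecutor) Executors.newCachedThreadPool();
        System.out.println(cached.getMaximumPoolSize());          // 2147483647

        fixed.shutdown();
        cached.shutdown();
    }
}
```

This is exactly why the manual mandates ThreadPoolExecutor: the two MAX_VALUE limits are invisible at the call site of `Executors.newFixedThreadPool` or `Executors.newCachedThreadPool`.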

1.2.2 Principle

Thread pool (ThreadPool): a thread pool is a buffer pool that stores threads. After executing a task, a thread does not die; it returns to the pool and becomes idle, waiting for the next task. This is what gives a thread pool its advantages over manually created threads and why it is commonly used in high-concurrency scenarios. Compared with creating threads by hand, a thread pool:
reduces system resource consumption, reusing existing threads to avoid the cost of repeated thread creation and destruction;
improves system response speed: when a task arrives it can execute immediately on an existing thread, without waiting for a new thread to be created;
makes the number of concurrent threads easy to control: creating threads without limit can consume too much memory and cause OOM;
saves the CPU cost of switching threads (saving the current thread's execution context and restoring another's);
provides more powerful features, such as delayed and periodic execution (compare Timer vs ScheduledThreadPoolExecutor).
Common thread pool structure (UML):

[figure: thread pool class hierarchy (UML)]

Executor: the top-level interface. Executor provides one idea: decoupling task submission from task execution.
ExecutorService: extends the task-execution capability, adding methods that produce a Future for one or a batch of asynchronous tasks, and methods for managing and controlling the thread pool, such as stopping it.
AbstractExecutorService: an upper-layer abstract class that strings the task-execution process together, so that lower-layer implementations only need to focus on a single task-execution method.
ThreadPoolExecutor: the most commonly used thread pool implementation. On the one hand it maintains its own life cycle; on the other it manages threads and tasks, combining the two to execute tasks in parallel.

1.2.3 Thread pool status

[figure: thread pool state transitions]

RUNNING: Accept new tasks and process queued tasks.
SHUTDOWN: Do not accept new tasks, but process queued tasks.
STOP: Do not accept new tasks, do not process queued tasks, and interrupt ongoing tasks.
TIDYING: All tasks have been terminated, the workerCount is zero, and the thread transitioning to the TIDYING state will run the terminated() hook method.
TERMINATED: terminated() completed.
These are the five states of the thread pool. So where are these five states recorded? Keep that question in mind; the details follow below.
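The externally observable side of these transitions can be sketched with the query methods ThreadPoolExecutor exposes (there is no public getter for the state itself; isShutdown()/isTerminated() are its visible projections):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        System.out.println(pool.isShutdown());   // false: pool is RUNNING
        pool.shutdown();                         // RUNNING -> SHUTDOWN
        System.out.println(pool.isShutdown());   // true
        // With no queued tasks, SHUTDOWN -> TIDYING -> TERMINATED happens quickly
        pool.awaitTermination(1, TimeUnit.SECONDS);
        System.out.println(pool.isTerminated()); // true
    }
}
```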

1.2.4 Execution process

[figure: task execution flow]

Hypothetical scenario:
Create a thread pool, add tasks in an infinite loop, and debug to watch how the worker count and the queue grow.
After waiting for a while, check whether the worker count falls back to the core size.

The conclusion first:
When a task is added, if the thread count has not reached corePoolSize, a new thread is created directly to execute it.
Once corePoolSize is reached, tasks are put into the queue.
When the queue is full, if maximumPoolSize has not been reached, new threads continue to be created.
When maximumPoolSize is reached, new tasks are rejected according to the rejection policy.
Idle threads are released after keepAliveTime, down to corePoolSize.

2. Working principle

Parameter introduction
First, let's understand the constructor of ThreadPoolExecutor.
From the source code, we can see that the constructor of ThreadPoolExecutor has 7 parameters, namely corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, threadFactory, and handler. The following will explain these 7 parameters one by one.

[figure: ThreadPoolExecutor constructor with its 7 parameters]
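Putting the seven parameters together, here is a minimal sketch of calling the full constructor directly (all sizes and the queue capacity are arbitrary illustration values):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ConstructorDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                30, TimeUnit.SECONDS,                 // keepAliveTime + unit
                new ArrayBlockingQueue<>(10),         // workQueue (bounded)
                Executors.defaultThreadFactory(),     // threadFactory
                new ThreadPoolExecutor.AbortPolicy()  // handler (rejection policy)
        );
        executor.execute(() ->
                System.out.println("task running on " + Thread.currentThread().getName()));
        executor.shutdown();
    }
}
```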

2.1 corePoolSize number of core threads:

A minimum number of threads will be maintained in the thread pool. Even if these threads are idle, they will not be destroyed unless allowCoreThreadTimeOut is set. The minimum number of threads here is corePoolSize. After the task is submitted to the thread pool, it will first check whether the current number of threads has reached the corePoolSize, if not, a new thread will be created to process the task.
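A detail worth noting: core threads are created lazily when tasks arrive, not at construction time. A small sketch (pool sizes are arbitrary illustration values):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CoreDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 10, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        System.out.println(pool.getPoolSize()); // 0: no threads exist yet
        pool.execute(() -> {});
        System.out.println(pool.getPoolSize()); // 1: first task created a core thread
        pool.shutdown();
    }
}
```

(If eager creation is wanted, `prestartAllCoreThreads()` starts all core threads up front.)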

2.2 maximumPoolSize The maximum number of threads in the thread pool:

After the current number of threads reaches corePoolSize, if tasks continue to be submitted to the thread pool, the tasks will be cached in the work queue (described later). If the queue is also full, a new thread will be created to handle this. The thread pool will not create new threads indefinitely, it will have a limit on the maximum number of threads, which is specified by maximumPoolSize.

2.3 keepAliveTime idle thread survival time:

If a thread is idle and the current thread count is greater than corePoolSize, the idle thread will be destroyed after a specified time; that time is set by keepAliveTime.

2.4 unit Idle thread survival time unit:

The unit of keepAliveTime; commonly TimeUnit.SECONDS or TimeUnit.MILLISECONDS.

2.5 workQueue work queue:

Task queue, a blocking queue for transferring and saving tasks waiting to be executed. When the initialization of corePoolSize is completed, the next task will be directly stored in the queue, and the thread will spin to obtain the task through the getTask() method. Common queue setups are as follows:

①ArrayBlockingQueue Array blocking queue:
Array-based bounded blocking queue, sorted by FIFO. When new tasks come in, they will be placed at the end of the queue, and a bounded array can prevent resource exhaustion. When the number of threads in the thread pool reaches corePoolSize, and a new task comes in, the task will be placed at the end of the queue, waiting to be scheduled. If the queue is already full, a new thread is created, and if the number of threads has reached maxPoolSize, the rejection strategy will be executed.

② ※ LinkedBlockingQueue linked-list blocking queue (note: a capacity can be specified):
A linked-list-based blocking queue, unbounded by default (maximum capacity Integer.MAX_VALUE, although a capacity can be specified), sorted FIFO. Because the queue is effectively unbounded, once the thread count reaches corePoolSize, new tasks are always stored in the queue and new threads are essentially never created up to maxPoolSize (Integer.MAX_VALUE tasks are hard to reach), so with this work queue the maxPoolSize parameter effectively has no effect.

③ SynchronousQueue synchronous queue:
A blocking queue that does not cache tasks. A producer putting a task must wait until a consumer takes it out. In other words, a new task is never cached; it is handed directly to a thread for execution. If no thread is available, a new thread is created; if the thread count has reached maxPoolSize, the rejection policy is executed.

④PriorityBlockingQueue Priority blocking queue:
an unbounded blocking queue with priority, the priority is realized by the parameter Comparator.
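The difference between these queues can be observed directly. The sketch below (queue capacity is an arbitrary illustration value) shows why a pool backed by a SynchronousQueue creates a new thread whenever no worker is idle: offer() fails immediately unless a consumer is already blocked on the queue:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class QueueDemo {
    public static void main(String[] args) {
        // SynchronousQueue holds nothing: offer() fails with no waiting taker
        BlockingQueue<Runnable> sync = new SynchronousQueue<>();
        System.out.println(sync.offer(() -> {}));  // false

        // ArrayBlockingQueue buffers up to its capacity, then rejects
        BlockingQueue<Runnable> array = new ArrayBlockingQueue<>(1);
        System.out.println(array.offer(() -> {})); // true  (capacity 1)
        System.out.println(array.offer(() -> {})); // false (full)
    }
}
```

ThreadPoolExecutor.execute uses exactly this non-blocking offer() when deciding between "queue the task" and "create a non-core thread".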

2.6 threadFactory thread factory:

The factory used when creating a new thread can be used to set the thread name, whether it is a daemon thread, etc.

2.7 handler rejection policy:

When the work queue has reached its maximum capacity and the thread count has also reached its maximum, how should a newly submitted task be handled? The rejection policy exists to answer this question. The JDK provides 4 built-in rejection policies:
① CallerRunsPolicy
Under this policy, the rejected task's run method is executed directly in the caller's thread.
② AbortPolicy
Under this policy, the task is discarded and a RejectedExecutionException is thrown.
ps: this is the default policy (Spring's ThreadPoolTaskExecutor also defaults to it)
③ DiscardPolicy
Under this policy, the task is discarded silently and nothing else is done.
④ DiscardOldestPolicy
Under this policy, the oldest task in the queue is discarded, and then the rejected task is offered to the queue again.
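To see a rejection policy in action, here is a small sketch (pool sizes are deliberately chosen so rejection triggers on the second submission). With CallerRunsPolicy, the rejected task runs synchronously on the submitting thread:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0, TimeUnit.SECONDS,
                new SynchronousQueue<>(),
                new ThreadPoolExecutor.CallerRunsPolicy());

        pool.execute(() -> sleep(500)); // occupies the single worker thread
        // Worker busy, queue won't buffer, max reached -> rejected ->
        // CallerRunsPolicy runs the task on the caller (main) thread
        pool.execute(() ->
                System.out.println("ran on " + Thread.currentThread().getName()));
        pool.shutdown();
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) {}
    }
}
```

CallerRunsPolicy is a simple back-pressure mechanism: the submitting thread is slowed down because it has to execute the overflow task itself.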

3 Source code analysis

Having introduced the basics of the thread pool above, we can now begin the source code analysis, starting with a fundamental concept in the source: ctl.

3.1 Basic concept: CTL

What is "ctl"?
ctl is an atomic integer that packs two conceptual fields.
1) workerCount: Indicates the effective number of threads;
2) runState: Indicates the running state of the thread pool, including RUNNING, SHUTDOWN, STOP, TIDYING, TERMINATED and other states.

The int type has 32 bits, of which the lower 29 bits of ctl are used to represent workerCount, and the upper 3 bits are used to represent runState, as shown in the figure below.

[figure: ctl bit layout (high 3 bits runState, low 29 bits workerCount)]

Source code introduction:

/**
     * The main pool control state, ctl, is an atomic integer packing two
     * conceptual fields: workerCount, the effective number of threads; and
     * runState, the run state (running, shutdown, etc.). To represent both
     * in a single int, workerCount is limited to (2^29) - 1: the low 29 bits
     * of the int hold workerCount and the high 3 bits hold runState, so
     * together they fit exactly in one int.
     */
    // On initialization the effective thread count is 0, so ctl is:
    // 1110 0000 0000 0000 0000 0000 0000 0000 (RUNNING | 0)
    private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
    // The high 3 bits hold the run state; this is the number of bits the
    // run state is shifted left, i.e. 29
    private static final int COUNT_BITS = Integer.SIZE - 3;
    // Thread-count capacity; the low 29 bits hold the effective thread count:
    // 0001 1111 1111 1111 1111 1111 1111 1111
    private static final int CAPACITY   = (1 << COUNT_BITS) - 1;
 
    /**
     * Ordering: RUNNING < SHUTDOWN < STOP < TIDYING < TERMINATED.
     * The source code frequently uses this ordering in condition checks.
     * 1110 0000 0000 0000 0000 0000 0000 0000  RUNNING
     * 0000 0000 0000 0000 0000 0000 0000 0000  SHUTDOWN
     * 0010 0000 0000 0000 0000 0000 0000 0000  STOP
     * 0100 0000 0000 0000 0000 0000 0000 0000  TIDYING
     * 0110 0000 0000 0000 0000 0000 0000 0000  TERMINATED
     */
    private static final int RUNNING    = -1 << COUNT_BITS; // running
    private static final int SHUTDOWN   =  0 << COUNT_BITS; // shutdown
    private static final int STOP       =  1 << COUNT_BITS; // stop
    private static final int TIDYING    =  2 << COUNT_BITS; // tidying
    private static final int TERMINATED =  3 << COUNT_BITS; // terminated

runState getter:

/**
 * Get the run state: c is the value of ctl. ~CAPACITY has the high 3 bits
 * set and the low 29 bits clear, so the result is the high 3 bits of ctl,
 * i.e. the run state.
 */
private static int runStateOf(int c)     { return c & ~CAPACITY; }

workerCount getter:

/**
     * Get the effective thread count: c is the value of ctl. CAPACITY has
     * the high 3 bits clear and the low 29 bits set, so the result is the
     * low 29 bits of ctl, i.e. the effective thread count.
     */
    private static int workerCountOf(int c)  { return c & CAPACITY; }
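The packing and unpacking can be verified in isolation. Below is a small standalone class (a hypothetical demo mirroring the JDK's field names, not the JDK class itself) that reproduces the same constants and bit operations:

```java
public class CtlDemo {
    // Mirrors ThreadPoolExecutor's constants (names copied for clarity)
    static final int COUNT_BITS = Integer.SIZE - 3;      // 29
    static final int CAPACITY   = (1 << COUNT_BITS) - 1; // low 29 bits set
    static final int RUNNING    = -1 << COUNT_BITS;      // high 3 bits = 111

    static int ctlOf(int rs, int wc) { return rs | wc; }
    static int runStateOf(int c)     { return c & ~CAPACITY; }
    static int workerCountOf(int c)  { return c & CAPACITY; }

    public static void main(String[] args) {
        int c = ctlOf(RUNNING, 5); // RUNNING state with 5 workers
        System.out.println(runStateOf(c) == RUNNING);  // true
        System.out.println(workerCountOf(c));          // 5
        // The full 32-bit pattern: 111 (state) followed by the 29-bit count
        System.out.println(Integer.toBinaryString(c)); // 11100000000000000000000000000101
    }
}
```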

3.2 What are the advantages of such a design of CTL?

The main advantage of ctl's design is that it turns operations on runState and workerCount into a single atomic operation.
runState and workerCount are the two most important attributes in the normal operation of a thread pool; what the pool should do at any given moment depends on the values of both.

Therefore, whether querying or modifying, operations on these two attributes must happen at the "same moment", that is, atomically, or confusion will ensue. Storing them in two separate variables would require extra locking to guarantee atomicity, which obviously adds overhead. Packing both into one AtomicInteger avoids any lock overhead, and runState and workerCount can each be extracted with simple bit operations.

3.3 Source Code Debugging Scenario

Still the hypothetical scenario from above:
Create a thread pool and add tasks in an infinite loop, debugging to watch the growth of the worker count and the queue.
Core threads: 3; bounded queue: 2; maximum threads: 5; keep-alive time: 20 s; custom rejection policy.
After waiting for a while, check whether the worker count falls back to the core size.
Task scenario:
During the first days after "Thread Pool" opened there were big promotions; customers arrived faster than they could be served, so the numbers gradually climb.
From the eighth day on, the task volume drops; consumption outpaces production, and a gradual decline can be observed.

/**
     * desc : fall-back scenario
     */
    @SneakyThrows
    private static void test_threadPoolExecutor_down_core() {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                3,
                5,
                20,
                TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(2),
                new RejectedExecutionHandler() {
                    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
                        System.out.println("Full! Come back another day... rejecting task ~~~~~~~~~~~~~~~");
                    }
                });
 
        // Opening-days peak: the pool fills up instantly
        for (int i = 0; i < 100; i++) {
            int finalI = i + 1;
            executor.execute(() -> {
                try {
                    System.out.println("~~~~~ New job: customer No. " + finalI + " ~~~~~");
                    System.out.println("Queued tasks: " + executor.getQueue().size()
                            + "  current thread count: " + executor.getPoolSize());
                    Thread.sleep(10 * 1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            });
            if (i >= 7) {
                if (i == 8) {
                    System.out.println("Task peak is over | dividing line!!!!!!!!!!");
                }
                // Tasks are now produced more slowly than threads can execute
                // them (threads have spare capacity)
                Thread.sleep(15 * 1000);
            } else {
                Thread.sleep(1L * 1000);
            }
        }
    }

Here is the result:
Threads gradually increase to 3, then the queue grows to 2, then threads increase to 5. When the 7th task arrives, the pool reaches its peak.

[figure: console output during the ramp-up phase]

Later, tasks are produced more slowly: queued tasks drop from 2 to 0, and then the thread count gradually falls from 5 back to 3.

[figure: console output during the fall-back phase]

The case above shows the thread pool changing dynamically. Next, the cause of this behavior is analyzed from the source code. [ps: we follow the source from the broad strokes down into the details]

3.4 Source code debugging process

public void execute(Runnable command) {
        // Defensive fault tolerance
        if (command == null)
            throw new NullPointerException();
        int c = ctl.get();
        // case 1 -> worker count below corePoolSize: addWorker
        if (workerCountOf(c) < corePoolSize) {
            // add a worker - core
            if (addWorker(command, true))
                return;
            c = ctl.get();
        }
        // case 2 -> if the pool is still RUNNING, offer the task to the queue
        if (isRunning(c) && workQueue.offer(command)) {
            // re-check the state
            int recheck = ctl.get();
            // if the pool has been shut down, remove the task and reject it
            if (! isRunning(recheck) && remove(command))
                reject(command);
            // otherwise, if no thread is working, create an empty worker that
            // will fetch tasks from the queue to execute
            else if (workerCountOf(recheck) == 0)
                addWorker(null, false);
        }
        // case 3 -> queue is full too; call addWorker again, but note that
        // core=false, opening the door up to maximumPoolSize
        else if (!addWorker(command, false)) {
            // case 4 -> beyond max, addWorker returns false: reject
            reject(command);
        }
    }

We then enter the addWorker method, which takes two parameters (the task, and whether this is a core thread). Its internal logic is as follows:

/**
     * desc : thread creation process
     */
    private boolean addWorker(Runnable firstTask, boolean core) {
        // Step 1: increment the workerCount part of ctl via CAS
        retry:
        for (;;) {
            int c = ctl.get();
            int rs = runStateOf(c);
 
            // Check whether the pool state allows running
            // (STOP and beyond: return false directly)
            if (rs >= SHUTDOWN &&
                ! (rs == SHUTDOWN &&
                   firstTask == null &&
                   ! workQueue.isEmpty()))
                return false;
 
            for (;;) {
                // Get the running thread count and check whether it can grow
                int wc = workerCountOf(c);
                if (wc >= CAPACITY ||
                    wc >= (core ? corePoolSize : maximumPoolSize))
                    return false;
                // Conditions met: CAS-increment workerCount atomically, then break
                if (compareAndIncrementWorkerCount(c))
                    break retry;
                // Increment failed: re-read the pool state to decide between
                // the inner loop and the outer retry
                c = ctl.get();
                if (runStateOf(c) != rs)
                    continue retry;
            }
        }
 
        // Step 2: create a new Worker and put it into the workers set (a HashSet)
        boolean workerStarted = false;
        boolean workerAdded = false;
        Worker w = null;
        try {
            // Conditions met: create a new Worker wrapping the task
            w = new Worker(firstTask);
            final Thread t = w.thread;
            if (t != null) {
                // Lock to avoid thread-safety issues on the workers set
                final ReentrantLock mainLock = this.mainLock;
                mainLock.lock();
                try {
                    // Re-check the run state to guard against shutdown
                    int rs = runStateOf(ctl.get());
 
                    if (rs < SHUTDOWN ||
                        (rs == SHUTDOWN && firstTask == null)) {
                        if (t.isAlive()) // precheck that t is startable
                            throw new IllegalThreadStateException();
                        // add the worker
                        workers.add(w);
                        int s = workers.size();
                        if (s > largestPoolSize)
                            largestPoolSize = s;
                        workerAdded = true;
                    }
                } finally {
                    mainLock.unlock();
                }
                if (workerAdded) {
                    // Added successfully: start the thread
                    t.start();
                    workerStarted = true;
                }
            }
        } finally {
            // On failure: decrement ctl and remove the worker from the set
            if (! workerStarted)
                addWorkerFailed(w);
        }
        return workerStarted;
    }

Here it is worth highlighting the Worker class itself:

private final class Worker extends AbstractQueuedSynchronizer
        implements Runnable {
    /** Thread this worker is running in.  Null if factory fails. */
    final Thread thread;
    /** Initial task to run.  Possibly null. */
    Runnable firstTask;
}

Worker is a worker thread. It implements the Runnable interface and holds a thread (thread) and an initial task (firstTask). thread is created via the ThreadFactory when the constructor is called and is used to execute tasks; firstTask saves the first task passed in, which may be null. If it is non-null, the thread executes this task immediately on startup, which corresponds to creating a core thread; if it is null, a thread is created to execute the tasks in the task queue (workQueue), which corresponds to creating a non-core thread.

Worker extends AQS and uses AQS to implement an exclusive lock. It deliberately does not use ReentrantLock: the non-reentrancy built on AQS reflects the thread's current execution state and is used when reclaiming (recycling) threads.

[figure: Worker class structure]
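The non-reentrant lock idea Worker builds on AQS can be illustrated with a minimal sketch. The class below is NOT the JDK's Worker; it is a hypothetical, simplified non-reentrant lock in the same spirit: once locked, even the owning thread cannot acquire it again, which lets the pool distinguish "idle" from "running a task" when deciding whether a thread may be interrupted:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Simplified sketch: state 0 = unlocked, 1 = locked. tryAcquire never
// succeeds while locked, even for the same thread -> non-reentrant.
class NonReentrantLock extends AbstractQueuedSynchronizer {
    protected boolean tryAcquire(int unused) {
        return compareAndSetState(0, 1);
    }
    protected boolean tryRelease(int unused) {
        setState(0);
        return true;
    }
    void lock()        { acquire(1); }
    boolean tryLock()  { return tryAcquire(1); }
    void unlock()      { release(1); }
}

public class WorkerLockDemo {
    public static void main(String[] args) {
        NonReentrantLock lock = new NonReentrantLock();
        System.out.println(lock.tryLock()); // true
        System.out.println(lock.tryLock()); // false: same thread cannot re-acquire
        lock.unlock();
        System.out.println(lock.tryLock()); // true again after unlock
    }
}
```

A ReentrantLock would return true on the second tryLock from the same thread, hiding the "is this worker currently busy?" signal the pool relies on.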

Let us continue into the Worker class and look at its run method:

// While the worker executes runWorker(), it loops continuously: first it
// checks whether it carries a task of its own and, if so, executes it
while (task != null || (task = getTask()) != null)
 
 
// If no task is bound, getTask() is called to fetch one from the queue
    private Runnable getTask() {
        boolean timedOut = false; // Did the last poll() time out?
        // Spin to fetch a task
        for (;;) {
            int c = ctl.get();
            int rs = runStateOf(c);
 
            // Check if queue empty only if necessary.
            if (rs >= SHUTDOWN && (rs >= STOP || workQueue.isEmpty())) {
                decrementWorkerCount();
                return null;
            }
 
            int wc = workerCountOf(c);
 
            // Decide whether this worker is subject to a timeout. Important!
            // This determines whether the current thread will be released.
            // On first entry allowCoreThreadTimeOut = false, so the key check
            // is wc > corePoolSize
            boolean timed = allowCoreThreadTimeOut || wc > corePoolSize;
            // If the thread count exceeds max, or the poll in the previous
            // loop timed out, this thread should terminate: atomically
            // decrement the thread count
            if ((wc > maximumPoolSize || (timed && timedOut))
                && (wc > 1 || workQueue.isEmpty())) {
                if (compareAndDecrementWorkerCount(c))
                    return null;
                continue;
            }
            try {
                // Important!
                // If the thread is releasable, poll with a timeout of
                // keepAliveTime. Otherwise the thread is never released:
                // take() blocks here until a new task arrives
                Runnable r = timed ?
                    workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
                    workQueue.take();
                if (r != null)
                    return r;
                // Reaching here means a releasable thread's wait timed out;
                // set the flag so the next iteration decrements the count
                timedOut = true;
            } catch (InterruptedException retry) {
                timedOut = false;
            }
        }
    }

Finally, the exit of the Worker (the release of the thread):

    private void processWorkerExit(Worker w, boolean completedAbruptly) {
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            // Tally total completed tasks && release the worker
            completedTaskCount += w.completedTasks;
            workers.remove(w);
        } finally {
            mainLock.unlock();
        }
        // Attempt to terminate the pool
        tryTerminate();
}

Complete process review:

[figure: complete execution flow]

3.5 Notes

How does the thread pool ensure that core threads are not destroyed?
How do non-core threads die after keepAliveTime?
When the thread count is below corePoolSize and idle threads exist, does adding a task create a new thread or reuse an existing one?
What is the difference between core threads and non-core threads?
What locks are used in the thread pool, and why?

Origin blog.csdn.net/chuige2013/article/details/131119377