Thread Pool - ThreadPoolExecutor Detailed Explanation

For a source-code analysis of ThreadPoolExecutor, refer to the JUC column series of articles.

Basic overview

Thread pool: A container that holds multiple threads. The threads in the container can be reused, eliminating the need for frequent creation and destruction of thread objects.

Thread pool role:

  • Reduce resource consumption: fewer threads are created and destroyed, and each worker thread can be reused to execute multiple tasks.
  • Improve response speed: when a task arrives, it can be executed immediately by an existing thread instead of waiting for a new thread to be created.
  • Improve thread manageability: creating threads without limit not only consumes system resources but also reduces system stability; a thread pool allows uniform allocation, tuning and monitoring.

The core idea of the thread pool is thread reuse: the same thread can be reused to handle multiple tasks.

Pooling (Pool): a programming technique whose core idea is resource reuse; it can improve application performance under heavy request load and reduce the overhead of frequently establishing system connections.

custom thread pool


explanation

The code implements a simple thread pool that only implements the core thread count and does not implement a maximum thread count: when the number of threads in the pool reaches coreSize, new tasks are put directly into the queue, and when the queue is full the rejection policy is applied; no maximum thread count maxSize is set.

1) Custom rejection policy interface

  • Design pattern - strategy pattern: abstract the concrete operation into an interface; the concrete implementation is passed in by the caller.
// 拒绝策略
@FunctionalInterface
interface RejectPolicy<T> {
    void reject(BlockingQueue<T> queue, T task);
}

2) Custom task queue

// 阻塞队列 用来协调生产者与消费者
class BlockingQueue<T> {

    // 1.任务队列
    private Deque<T> queue = new ArrayDeque<>();

    // 2.锁
    private ReentrantLock lock = new ReentrantLock();

    // 3.生产者条件变量
    private Condition fullWaitSet = lock.newCondition();

    // 4.消费者条件变量
    private Condition emptyWaitSet = lock.newCondition();

    // 5.容量
    private int capcity;

    public BlockingQueue(int capcity) {
        this.capcity = capcity;
    }

    // 带超时阻塞获取
    public T poll(long timeout, TimeUnit unit) {
        lock.lock();
        try {
            // 将 timeout 统一转换为 纳秒
            long nanos = unit.toNanos(timeout);
            while (queue.isEmpty()) {
                try {
                    // 返回值是剩余时间
                    if (nanos <= 0) {
                        return null;
                    }
                    nanos = emptyWaitSet.awaitNanos(nanos);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            T t = queue.removeFirst();
            fullWaitSet.signal();
            return t;
        } finally {
            lock.unlock();
        }
    }

    // 阻塞获取
    public T take() {
        lock.lock();
        try {
            while (queue.isEmpty()) {
                try {
                    emptyWaitSet.await();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            T t = queue.removeFirst();
            fullWaitSet.signal();
            return t;
        } finally {
            lock.unlock();
        }
    }

    // 阻塞添加
    public void put(T task) {
        lock.lock();
        try {
            while (queue.size() == capcity) {
                try {
                    log.debug("等待加入任务队列 {} ...", task);
                    fullWaitSet.await();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            log.debug("加入任务队列 {}", task);
            queue.addLast(task);
            emptyWaitSet.signal();
        } finally {
            lock.unlock();
        }
    }

    // 带超时时间阻塞添加
    public boolean offer(T task, long timeout, TimeUnit timeUnit) {
        lock.lock();
        try {
            long nanos = timeUnit.toNanos(timeout);
            while (queue.size() == capcity) {
                try {
                    if (nanos <= 0) {
                        return false;
                    }
                    log.debug("等待加入任务队列 {} ...", task);
                    nanos = fullWaitSet.awaitNanos(nanos);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
            log.debug("加入任务队列 {}", task);
            queue.addLast(task);
            emptyWaitSet.signal();
            return true;
        } finally {
            lock.unlock();
        }
    }

    public int size() {
        lock.lock();
        try {
            return queue.size();
        } finally {
            lock.unlock();
        }
    }

    public void tryPut(RejectPolicy<T> rejectPolicy, T task) {
        lock.lock();
        try {
            // 判断队列是否满
            if (queue.size() == capcity) {
                // 要执行的拒绝策略
                rejectPolicy.reject(this, task);
            } else {
                // 有空闲
                log.debug("加入任务队列 {}", task);
                queue.addLast(task);
                emptyWaitSet.signal();
            }
        } finally {
            lock.unlock();
        }
    }
}

3) Custom thread pool

class ThreadPool {

    // 任务队列
    private BlockingQueue<Runnable> taskQueue;

    // 线程集合
    private HashSet<Worker> workers = new HashSet<>();

    // 核心线程数
    private int coreSize;

    // 获取任务时的超时时间
    private long timeout;

    private TimeUnit timeUnit;

    // 拒绝策略
    private RejectPolicy<Runnable> rejectPolicy;

    // 执行任务
    public void execute(Runnable task) {
        // 当任务数没有超过 coreSize 时,直接交给 worker 对象执行
        // 如果任务数超过 coreSize 时,加入任务队列暂存
        synchronized (workers) {
            if (workers.size() < coreSize) {
                Worker worker = new Worker(task);
                log.debug("新增 worker{}, {}", worker, task);
                workers.add(worker);
                worker.start();
            } else {
                //taskQueue.put(task);
                // 1) 死等
                // 2) 带超时等待
                // 3) 让调用者放弃任务执行
                // 4) 让调用者抛出异常
                // 5) 让调用者自己执行任务
                // 策略模式-把具体的操作抽象成接口,具体的实现由调用者传递进来
                taskQueue.tryPut(rejectPolicy, task);
            }
        }
    }

    public ThreadPool(int coreSize, long timeout, TimeUnit timeUnit, int queueCapcity, RejectPolicy<Runnable> rejectPolicy) {
        this.coreSize = coreSize;
        this.timeout = timeout;
        this.timeUnit = timeUnit;
        this.taskQueue = new BlockingQueue<>(queueCapcity);
        // 拒绝策略的具体的实现通过调用者使用构造方法传递进来
        this.rejectPolicy = rejectPolicy;
    }

    class Worker extends Thread {

        private Runnable task;

        public Worker(Runnable task) {
            this.task = task;
        }

        @Override
        public void run() {
            // 执行任务
            // 1) 当 task 不为空,执行任务
            // 2) 当 task 执行完毕,再接着从任务队列获取任务并执行
            //while(task != null || (task = taskQueue.take()) != null) {
            while (task != null || (task = taskQueue.poll(timeout, timeUnit)) != null) {
                try {
                    log.debug("正在执行...{}", task);
                    task.run();
                } catch (Exception e) {
                    e.printStackTrace();
                } finally {
                    task = null;
                }
            }
            synchronized (workers) {
                log.debug("worker 被移除{}", this);
                workers.remove(this);
            }
        }
    }
}

4) Test

public class TestPool {

    public static void main(String[] args) {
        ThreadPool threadPool = new ThreadPool(1,
                1000, TimeUnit.MILLISECONDS, 1, (queue, task) -> {
            // 调用者选择拒绝策略
            // 1) 死等
            //queue.put(task);
            // 2) 带超时等待
            //queue.offer(task, 1500, TimeUnit.MILLISECONDS);
            // 3) 让调用者放弃任务执行
            //log.debug("放弃{}", task);
            // 4) 让调用者抛出异常
            //throw new RuntimeException("任务执行失败 " + task);
            // 5) 让调用者自己执行任务
            task.run();
        });
        for (int i = 0; i < 4; i++) {
            int j = i;
            threadPool.execute(() -> {
                try {
                    Thread.sleep(1000L);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                log.debug("{}", j);
            });
        }
    }
}

Rejection policy demonstration

For a clearer demonstration, the core thread count is set to 1 and the blocking queue capacity to 1, so only one task can wait in the queue at a time.

  • 1) Wait indefinitely (dead wait)

    Increase the task's execution time (the sleep duration in the code) so that the submitting thread has to wait until a queue slot frees up, which demonstrates the indefinite wait.

    18:05:24.718 c.ThreadPool [main] - 新增 workerThread[Thread-0,5,main], cn.itcast.n8.TestPool$$Lambda$2/245672235@66d33a
    18:05:24.722 c.BlockingQueue [main] - 加入任务队列 cn.itcast.n8.TestPool$$Lambda$2/245672235@2c8d66b2
    18:05:24.722 c.ThreadPool [Thread-0] - 正在执行...cn.itcast.n8.TestPool$$Lambda$2/245672235@66d33a
    18:05:24.722 c.BlockingQueue [main] - 等待加入任务队列 cn.itcast.n8.TestPool$$Lambda$2/245672235@5a39699c ...
    
  • 2) Wait with timeout

    Three tasks are submitted in total; the enqueue timeout is 500 ms while each task takes 1000 ms to execute.

    As the log shows, the program no longer waits forever. Task 0 runs first and task 1 is added to the queue. After 1 s task 0 finishes, and another second later task 1 finishes as well; task 2, however, times out while waiting to be enqueued, never enters the blocking queue, and is therefore never executed.

    18:10:40.295 c.ThreadPool [main] - 新增 workerThread[Thread-0,5,main], cn.itcast.n8.TestPool$$Lambda$2/245672235@66d33a
    18:10:40.298 c.BlockingQueue [main] - 加入任务队列 cn.itcast.n8.TestPool$$Lambda$2/245672235@2c8d66b2
    18:10:40.298 c.BlockingQueue [main] - 等待加入任务队列 cn.itcast.n8.TestPool$$Lambda$2/245672235@5a39699c ...
    18:10:40.298 c.ThreadPool [Thread-0] - 正在执行...cn.itcast.n8.TestPool$$Lambda$2/245672235@66d33a
    18:10:41.311 c.TestPool [Thread-0] - 0
    18:10:41.311 c.ThreadPool [Thread-0] - 正在执行...cn.itcast.n8.TestPool$$Lambda$2/245672235@2c8d66b2
    18:10:42.317 c.TestPool [Thread-0] - 1
    18:10:43.321 c.ThreadPool [Thread-0] - worker 被移除Thread[Thread-0,5,main]
    
  • 3) Let the caller abandon the task execution

    Doing nothing in the policy already means giving up the task; here a log statement is added to make that visible.

    Three tasks are submitted; the queue is full right away, so task 2 is abandoned immediately and only tasks 0 and 1 are executed.

    18:19:41.920 c.ThreadPool [main] - 新增 workerThread[Thread-0,5,main], cn.itcast.n8.TestPool$$Lambda$2/245672235@66d33a
    18:19:41.924 c.BlockingQueue [main] - 加入任务队列 cn.itcast.n8.TestPool$$Lambda$2/245672235@2c8d66b2
    18:19:41.925 c.ThreadPool [Thread-0] - 正在执行...cn.itcast.n8.TestPool$$Lambda$2/245672235@66d33a
    18:19:41.925 c.TestPool [main] - 放弃cn.itcast.n8.TestPool$$Lambda$2/245672235@5a39699c
    18:19:42.931 c.TestPool [Thread-0] - 0
    18:19:42.932 c.ThreadPool [Thread-0] - 正在执行...cn.itcast.n8.TestPool$$Lambda$2/245672235@2c8d66b2
    18:19:43.941 c.TestPool [Thread-0] - 1
    18:19:44.951 c.ThreadPool [Thread-0] - worker 被移除Thread[Thread-0,5,main]
    
  • 4) Let the caller throw an exception

    Throwing an exception prevents the remaining tasks from being submitted. This time the number of tasks is raised to 4.

    The exception is thrown in the main thread while submitting task 2, so task 2 and the task after it are never submitted; tasks 0 and 1 still complete.

    18:47:31.348 c.ThreadPool [main] - 新增 workerThread[Thread-0,5,main], cn.itcast.n8.TestPool$$Lambda$2/245672235@66d33a
    18:47:31.352 c.BlockingQueue [main] - 加入任务队列 cn.itcast.n8.TestPool$$Lambda$2/245672235@2c8d66b2
    18:47:31.352 c.ThreadPool [Thread-0] - 正在执行...cn.itcast.n8.TestPool$$Lambda$2/245672235@66d33a
    Exception in thread "main" java.lang.RuntimeException: 任务执行失败cn.itcast.n8.TestPool$$Lambda$2/245672235@5a39699c
    	at cn.itcast.n8.TestPool.lambda$main$0(TestPool.java:25)
    	at cn.itcast.n8.BlockingQueue.tryPut(TestPool.java:250)
    	at cn.itcast.n8.ThreadPool.execute(TestPool.java:83)
    	at cn.itcast.n8.TestPool.main(TestPool.java:31)
    18:47:32.353 c.TestPool [Thread-0] - 0
    18:47:32.354 c.ThreadPool [Thread-0] - 正在执行...cn.itcast.n8.TestPool$$Lambda$2/245672235@2c8d66b2
    18:47:33.363 c.TestPool [Thread-0] - 1
    18:47:34.375 c.ThreadPool [Thread-0] - worker 被移除Thread[Thread-0,5,main]
    
  • 5) Let the caller perform the task itself

    The policy simply calls the task's run() method directly, which means the task is executed by the main thread itself.

    Of the 4 tasks in total, tasks 0 and 1 are executed by [Thread-0], while tasks 2 and 3 are executed by the main thread.

    18:58:03.790 c.ThreadPool [main] - 新增 workerThread[Thread-0,5,main], cn.itcast.n8.TestPool$$Lambda$2/245672235@66d33a
    18:58:03.794 c.BlockingQueue [main] - 加入任务队列 cn.itcast.n8.TestPool$$Lambda$2/245672235@2c8d66b2
    18:58:03.794 c.ThreadPool [Thread-0] - 正在执行...cn.itcast.n8.TestPool$$Lambda$2/245672235@66d33a
    18:58:04.800 c.TestPool [Thread-0] - 0
    18:58:04.800 c.TestPool [main] - 2
    18:58:05.810 c.TestPool [main] - 3
    18:58:05.810 c.ThreadPool [Thread-0] - 正在执行...cn.itcast.n8.TestPool$$Lambda$2/245672235@2c8d66b2
    18:58:06.814 c.TestPool [Thread-0] - 1
    18:58:07.817 c.ThreadPool [Thread-0] - worker 被移除Thread[Thread-0,5,main]
    

ThreadPoolExecutor


The threads in the thread pool are all non-daemon threads.

1) Thread pool status

ThreadPoolExecutor uses the high 3 bits of int to represent the thread pool status, and the low 29 bits to represent the number of threads.

State name   High 3 bits   Accepts new tasks   Processes queued tasks   Description
RUNNING      111           Y                   Y                        -
SHUTDOWN     000           N                   Y                        Accepts no new tasks, but processes the remaining tasks in the blocking queue
STOP         001           N                   N                        Interrupts the tasks being executed and discards the tasks in the blocking queue
TIDYING      010           -                   -                        All tasks have finished, the active thread count is 0, about to become TERMINATED
TERMINATED   011           -                   -                        Terminal state

Comparing the values numerically: TERMINATED > TIDYING > STOP > SHUTDOWN > RUNNING (RUNNING has its highest bit set, so as a signed int it is negative and compares lowest).

This information is stored in a single atomic variable, ctl, so that the thread pool state and the thread count are combined into one value and can be updated together with a single CAS operation.

// c 为旧值, ctlOf 返回结果为新值
ctl.compareAndSet(c, ctlOf(targetState, workerCountOf(c)));

// rs 为高 3 位代表线程池状态, wc 为低 29 位代表线程个数,ctl 是合并它们
private static int ctlOf(int rs, int wc) { return rs | wc; }

2) Constructor

public ThreadPoolExecutor(int corePoolSize,
 						  int maximumPoolSize,
 						  long keepAliveTime,
 						  TimeUnit unit,
 						  BlockingQueue<Runnable> workQueue,
 						  ThreadFactory threadFactory,
 						  RejectedExecutionHandler handler)
  • corePoolSize: the number of core threads (the maximum number of threads that are kept alive)
  • maximumPoolSize: the maximum number of threads
  • keepAliveTime: keep-alive time for the emergency (non-core) threads
  • unit: time unit for keepAliveTime
  • workQueue: the blocking queue that holds waiting tasks
  • threadFactory: thread factory, lets you give threads meaningful names when they are created
  • handler: rejection policy (a combined example follows this list)
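As a combined illustration of the seven parameters, here is a minimal, self-contained sketch; the pool sizes, queue capacity, thread-name prefix and task bodies are arbitrary choices for demonstration, not values taken from the text:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadPoolExecutorDemo {
    public static void main(String[] args) {
        AtomicInteger idx = new AtomicInteger(1);
        ThreadFactory namedFactory = r -> new Thread(r, "biz-pool-" + idx.getAndIncrement());

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                          // corePoolSize
                4,                                          // maximumPoolSize
                60, TimeUnit.SECONDS,                       // keepAliveTime + unit for emergency threads
                new ArrayBlockingQueue<>(10),               // bounded workQueue
                namedFactory,                               // threadFactory: gives workers readable names
                new ThreadPoolExecutor.CallerRunsPolicy()); // handler: the caller runs rejected tasks

        for (int i = 0; i < 20; i++) {
            int j = i;
            pool.execute(() -> System.out.println(Thread.currentThread().getName() + " -> " + j));
        }
        pool.shutdown();
    }
}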

How it works:


  • At first there are no threads in the pool. When a task is submitted, the pool creates a new thread to execute it.
  • When the number of threads has reached corePoolSize and none of them is idle, newly submitted tasks are added to the workQueue until a thread becomes free.
  • If a bounded queue was chosen and the tasks exceed its capacity, up to maximumPoolSize - corePoolSize emergency threads are created to help out.
  • If the thread count has reached maximumPoolSize and new tasks keep arriving, the rejection policy is executed.
    • The JDK provides 4 rejection policy implementations:
      • AbortPolicy throws RejectedExecutionException to the caller; this is the default policy
      • CallerRunsPolicy lets the caller run the task itself
      • DiscardPolicy silently discards the task
      • DiscardOldestPolicy discards the oldest task in the queue and enqueues this one instead
    • Other well-known frameworks also provide implementations:
      • Dubbo logs a message and dumps the thread stacks before throwing RejectedExecutionException, to make troubleshooting easier
      • Netty creates a new thread to run the task
      • ActiveMQ tries to enqueue the task with a timeout (60 s), similar to our earlier custom rejection policy (see the sketch after this list)
      • PinPoint uses a chain of rejection policies and tries each policy in the chain in turn
  • When the peak has passed, emergency threads beyond corePoolSize that have had no work for a while are terminated to save resources; this idle time is controlled by keepAliveTime and unit.
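For comparison, an ActiveMQ-style "wait with timeout, then reject" policy could be sketched roughly like this (my own illustration of the idea, not the actual ActiveMQ source; the 60-second timeout mirrors the description above):

import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative policy: wait up to 60 s for a queue slot, then give up with an exception
public class TimedWaitPolicy implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        try {
            if (!executor.getQueue().offer(r, 60, TimeUnit.SECONDS)) {
                throw new RejectedExecutionException("queue still full after waiting, task rejected: " + r);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RejectedExecutionException("interrupted while waiting to enqueue task", e);
        }
    }
}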


Based on this constructor, the JDK's Executors class provides many factory methods that create thread pools for different purposes.

3) newFixedThreadPool

Create a fixed size thread pool

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

features

  • The core thread count equals the maximum thread count (no emergency threads are ever created), so no keep-alive timeout is needed
  • The blocking queue is unbounded and can hold any number of tasks

evaluation

Suitable when the amount of work is known and the tasks are relatively time-consuming
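A minimal usage sketch (thread count, task bodies and class name are arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FixedPoolDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2); // exactly 2 reusable worker threads
        for (int i = 1; i <= 5; i++) {
            int taskId = i;
            pool.execute(() ->
                    System.out.println(Thread.currentThread().getName() + " runs task " + taskId));
        }
        pool.shutdown(); // without this the non-daemon workers keep the JVM alive
    }
}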

4) newCachedThreadPool

Cached thread pool

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}

features

  • The core thread count is 0, the maximum thread count is Integer.MAX_VALUE, and the idle keep-alive time of emergency threads is 60 s, which means
    • All threads are emergency threads (they can be reclaimed after 60 s of idleness)
    • Emergency threads can be created without limit
  • The queue is a SynchronousQueue, which has no capacity: a task cannot be put in unless a thread is already waiting to take it (hand over the money with one hand, hand over the goods with the other)
SynchronousQueue<Integer> integers = new SynchronousQueue<>();
new Thread(() -> {
    try {
        log.debug("putting {} ", 1);
        integers.put(1);
        log.debug("{} putted...", 1);
        log.debug("putting...{} ", 2);
        integers.put(2);
        log.debug("{} putted...", 2);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
},"t1").start();

sleep(1);

new Thread(() -> {
    try {
        log.debug("taking {}", 1);
        integers.take();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
},"t2").start();

sleep(1);

new Thread(() -> {
    try {
        log.debug("taking {}", 2);
        integers.take();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
},"t3").start();

output

11:48:15.500 c.TestSynchronousQueue [t1] - putting 1 
11:48:16.500 c.TestSynchronousQueue [t2] - taking 1 
11:48:16.500 c.TestSynchronousQueue [t1] - 1 putted... 
11:48:16.500 c.TestSynchronousQueue [t1] - putting...2 
11:48:17.502 c.TestSynchronousQueue [t3] - taking 2 
11:48:17.503 c.TestSynchronousQueue [t1] - 2 putted...

evaluation

The pool as a whole behaves like this: the number of threads keeps growing with the amount of work, with no upper limit; when the tasks are done, threads are released after being idle for 1 minute.

Suitable when tasks arrive in dense bursts but each task executes quickly.
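A small usage sketch (class name and task bodies are arbitrary; whether the last task actually reuses a cached thread depends on timing, hence the hedged comment):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CachedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newCachedThreadPool();

        // a burst of short tasks: each gets a newly created (or already idle) thread
        for (int i = 0; i < 3; i++) {
            pool.execute(() -> System.out.println("burst  -> " + Thread.currentThread().getName()));
        }

        TimeUnit.MILLISECONDS.sleep(500);

        // a later task is likely to reuse one of the now-idle cached threads
        pool.execute(() -> System.out.println("reused -> " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}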

5) newSingleThreadExecutor

single thread thread pool

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>()));
}

use cases:

  • Multiple tasks are expected to be queued and executed one by one. The thread count is fixed at 1; when there is more than one task, the extra tasks wait in an unbounded queue. After the tasks finish, the single thread is not released.

differences:

  • If you create a single thread yourself to run tasks serially and a task fails with an exception, the thread dies and there is no remedy; the thread pool, by contrast, creates a new thread to keep the pool operating normally.
  • With Executors.newSingleThreadExecutor() the thread count is always 1 and cannot be changed
    • FinalizableDelegatedExecutorService applies the decorator pattern and exposes only the ExecutorService interface, so the methods specific to ThreadPoolExecutor cannot be called
  • Executors.newFixedThreadPool(1) starts with 1 thread, but the size can be changed later (see the sketch after this list)
    • It exposes a ThreadPoolExecutor object, so after a cast you can call setCorePoolSize and similar methods to modify it
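A small sketch of the difference described above (the class name is illustrative):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class SingleVsFixedDemo {
    public static void main(String[] args) {
        // newFixedThreadPool(1) exposes a real ThreadPoolExecutor, so its size can be changed later
        ThreadPoolExecutor fixed = (ThreadPoolExecutor) Executors.newFixedThreadPool(1);
        fixed.setMaximumPoolSize(2); // raise the cap first
        fixed.setCorePoolSize(2);    // then the core size

        // newSingleThreadExecutor() is wrapped by a delegate that only exposes ExecutorService
        ExecutorService single = Executors.newSingleThreadExecutor();
        System.out.println(single instanceof ThreadPoolExecutor); // false: no way to resize it

        fixed.shutdown();
        single.shutdown();
    }
}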

6) Submitting tasks

// 执行任务
void execute(Runnable command);

// 提交任务 task,用返回值 Future 获得任务执行结果
<T> Future<T> submit(Callable<T> task);

// 提交 tasks 中所有任务
<T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks)
        throws InterruptedException;

// 提交 tasks 中所有任务,带超时时间
<T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks,
                              long timeout, TimeUnit unit)
        throws InterruptedException;

// 提交 tasks 中所有任务,哪个任务先成功执行完毕,返回此任务执行结果,其它任务取消
<T> T invokeAny(Collection<? extends Callable<T>> tasks)
        throws InterruptedException, ExecutionException;

// 提交 tasks 中所有任务,哪个任务先成功执行完毕,返回此任务执行结果,其它任务取消,带超时时间
<T> T invokeAny(Collection<? extends Callable<T>> tasks,
                long timeout, TimeUnit unit)
        throws InterruptedException, ExecutionException, TimeoutException;
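A brief sketch exercising the submit family (task bodies and class name are arbitrary):

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // submit: fetch the result through a Future
        Future<String> future = pool.submit(() -> "single result");
        System.out.println(future.get());

        // invokeAll: wait for every task and collect all Futures
        List<Callable<Integer>> tasks = Arrays.asList(() -> 1, () -> 2, () -> 3);
        for (Future<Integer> f : pool.invokeAll(tasks)) {
            System.out.println(f.get());
        }

        // invokeAny: return the first result that completes successfully, cancel the rest
        System.out.println("first finished: " + pool.invokeAny(tasks));

        pool.shutdown();
    }
}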

7) Close the thread pool

  • shutdown

    /*
    线程池状态变为 SHUTDOWN
    - 不会接收新任务
    - 但已提交任务会执行完
    - 此方法不会阻塞调用线程的执行
    */
    void shutdown();
    
    public void shutdown() {
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            checkShutdownAccess();
            // 修改线程池状态
            advanceRunState(SHUTDOWN);
            // 仅会打断空闲线程
            interruptIdleWorkers();
            onShutdown(); // 扩展点 ScheduledThreadPoolExecutor
        } finally {
            mainLock.unlock();
        }
        // 尝试终结(没有运行的线程可以立刻终结,如果还有运行的线程也不会等)
        tryTerminate();
    }
    
  • shutdownNow

    /*
    线程池状态变为 STOP
    - 不会接收新任务
    - 会将队列中的任务返回
    - 并用 interrupt 的方式中断正在执行的任务
    */
    List<Runnable> shutdownNow();
    
    public List<Runnable> shutdownNow() {
        List<Runnable> tasks;
        final ReentrantLock mainLock = this.mainLock;
        mainLock.lock();
        try {
            checkShutdownAccess();
            // 修改线程池状态
            advanceRunState(STOP);
            // 打断所有线程
            interruptWorkers();
            // 获取队列中剩余任务
            tasks = drainQueue();
        } finally {
            mainLock.unlock();
        }
        // 尝试终结
        tryTerminate();
        return tasks;
    }
    
  • other methods

    // 不在 RUNNING 状态的线程池,此方法就返回 true
    boolean isShutdown();
    
    // 线程池状态是否是 TERMINATED
    boolean isTerminated();
    
    // 调用 shutdown 后,由于调用线程并不会等待所有任务运行结束,因此如果它想在线程池 TERMINATED 后做些事情,可以利用此方法等待
    boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedException;
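A small usage sketch combining shutdown with awaitTermination (the 5-second timeout and class name are arbitrary):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.execute(() -> System.out.println("task running"));

        pool.shutdown();                                  // stop accepting new tasks, let queued ones finish
        if (pool.awaitTermination(5, TimeUnit.SECONDS)) { // block the caller until TERMINATED (or timeout)
            System.out.println("pool terminated");
        } else {
            pool.shutdownNow();                           // give up: interrupt workers and drain the queue
        }
    }
}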
    

Worker threads in asynchronous mode

Worker Thread mode

definition

Let a limited number of worker threads (Worker Thread) take turns processing an unlimited number of tasks asynchronously. It can be classified as a division-of-labour pattern; its typical implementation is the thread pool, and it also reflects the flyweight pattern from the classic design patterns.

For example, the waiters (threads) at Haidilao handle each guest's order (task) in turn. Assigning a dedicated waiter to every guest would cost too much (compare another multi-threaded design pattern: Thread-Per-Message).

Note that different task types should use different thread pools; this avoids starvation and improves efficiency.

For example, if one restaurant worker both greets guests (task type A) and cooks in the back kitchen (task type B), efficiency is clearly poor; splitting the staff into waiters (thread pool A) and chefs (thread pool B) is more reasonable, and you can of course imagine an even finer division of labour.

starvation

Starvation can occur in a fixed-size thread pool:

  • Two workers are two threads in the same thread pool
  • Their job has two stages: taking orders for the guests and cooking in the kitchen
    • Taking an order: the order must be taken first, then the worker waits for the dish to be ready and serves it; during this time the order-taking worker has to wait
    • Cooking in the back kitchen: nothing special, just cook
  • For example, worker A handles an order and then waits for worker B to finish the dish before serving it; they cooperate just fine
  • But now two guests arrive at the same time. Both worker A and worker B are busy taking orders, nobody is left to cook, and both end up waiting forever: starvation
public class TestDeadLock {

    static final List<String> MENU = Arrays.asList("地三鲜", "宫保鸡丁", "辣子鸡丁", "烤鸡翅");
    static Random RANDOM = new Random();

    static String cooking() {
        return MENU.get(RANDOM.nextInt(MENU.size()));
    }

    public static void main(String[] args) {
        ExecutorService executorService = Executors.newFixedThreadPool(2);
        executorService.execute(() -> {
            log.debug("处理点餐...");
            Future<String> f = executorService.submit(() -> {
                log.debug("做菜");
                return cooking();
            });
            try {
                log.debug("上菜: {}", f.get());
            } catch (InterruptedException | ExecutionException e) {
                e.printStackTrace();
            }
        });
        /*executorService.execute(() -> {
            log.debug("处理点餐...");
            Future<String> f = executorService.submit(() -> {
                log.debug("做菜");
                return cooking();
            });
            try {
                log.debug("上菜: {}", f.get());
            } catch (InterruptedException | ExecutionException e) {
                e.printStackTrace();
            }
        });*/
    }
}

output

17:21:27.883 c.TestDeadLock [pool-1-thread-1] - 处理点餐...
17:21:27.891 c.TestDeadLock [pool-1-thread-2] - 做菜
17:21:27.891 c.TestDeadLock [pool-1-thread-1] - 上菜: 烤鸡翅

Possible output when uncommented

17:08:41.339 c.TestDeadLock [pool-1-thread-2] - 处理点餐... 
17:08:41.339 c.TestDeadLock [pool-1-thread-1] - 处理点餐...

At this point starvation has occurred, yet jconsole does not detect any deadlock.

How can this starvation be resolved?

You can increase the size of the thread pool, but that is not a fundamental solution. As mentioned earlier, different task types should use different thread pools, for example:

public class TestDeadLock {

    static final List<String> MENU = Arrays.asList("地三鲜", "宫保鸡丁", "辣子鸡丁", "烤鸡翅");
    static Random RANDOM = new Random();

    static String cooking() {
        return MENU.get(RANDOM.nextInt(MENU.size()));
    }

    public static void main(String[] args) {
        ExecutorService waiterPool = Executors.newFixedThreadPool(1);
        ExecutorService cookPool = Executors.newFixedThreadPool(1);
        waiterPool.execute(() -> {
            log.debug("处理点餐...");
            Future<String> f = cookPool.submit(() -> {
                log.debug("做菜");
                return cooking();
            });
            try {
                log.debug("上菜: {}", f.get());
            } catch (InterruptedException | ExecutionException e) {
                e.printStackTrace();
            }
        });
        waiterPool.execute(() -> {
            log.debug("处理点餐...");
            Future<String> f = cookPool.submit(() -> {
                log.debug("做菜");
                return cooking();
            });
            try {
                log.debug("上菜: {}", f.get());
            } catch (InterruptedException | ExecutionException e) {
                e.printStackTrace();
            }
        });
    }
}

output

17:25:14.626 c.TestDeadLock [pool-1-thread-1] - 处理点餐... 
17:25:14.630 c.TestDeadLock [pool-2-thread-1] - 做菜
17:25:14.631 c.TestDeadLock [pool-1-thread-1] - 上菜: 地三鲜
17:25:14.632 c.TestDeadLock [pool-1-thread-1] - 处理点餐... 
17:25:14.632 c.TestDeadLock [pool-2-thread-1] - 做菜
17:25:14.632 c.TestDeadLock [pool-1-thread-1] - 上菜: 辣子鸡丁

How many threads is it appropriate to create?

  • Too few: the program cannot fully utilize system resources and starvation becomes likely
  • Too many: more thread context switches and higher memory consumption

CPU-intensive operations

  • The usual choice is CPU core count + 1, which achieves optimal CPU utilization; the +1 ensures that if a thread is suspended by a page fault (operating system) or for some other reason, the spare thread can step in so that no CPU cycles are wasted

I/O intensive operations

  • The CPU is not always busy. Business computation uses CPU resources, but during I/O operations, remote RPC calls and database access the CPU sits idle; multithreading can be used to raise its utilization.
  • The empirical formula is as follows (see the small helper after this list):
    • Thread count = core count * desired CPU utilization * total time (CPU compute time + wait time) / CPU compute time
    • For example, on a 4-core CPU where compute time is 50% and wait time is 50%, with a desired CPU utilization of 100%, the formula gives 4 * 100% * 100% / 50% = 8
    • For example, on a 4-core CPU where compute time is 10% and wait time is 90%, with a desired CPU utilization of 100%, the formula gives 4 * 100% * 100% / 10% = 40
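The empirical formula can be wrapped in a tiny helper for quick estimates (a sketch; the method and class names are my own):

// Sketch: turn the empirical formula into a helper
// threads = cores * target utilization * (compute time + wait time) / compute time
public class PoolSizeEstimator {

    static int suggestedThreads(int cores, double targetUtilization,
                                double computeTimeMs, double waitTimeMs) {
        return (int) Math.ceil(cores * targetUtilization * (computeTimeMs + waitTimeMs) / computeTimeMs);
    }

    public static void main(String[] args) {
        System.out.println(suggestedThreads(4, 1.0, 50, 50)); // 4 cores, 50% compute / 50% wait -> 8
        System.out.println(suggestedThreads(4, 1.0, 10, 90)); // 4 cores, 10% compute / 90% wait -> 40
    }
}
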
custom thread pool

Implemented above.

8) Task scheduling thread pool

Before the "task scheduling thread pool" function is added, you can use java.util.Timer to implement the timing function. The advantage of Timer is that it is easy to use, but since all tasks are scheduled by the same thread, all tasks are serialized. For execution, only one task can be executed at the same time, and the delay or exception of the previous task will affect the subsequent tasks.

public static void main(String[] args) {
    Timer timer = new Timer();
    TimerTask task1 = new TimerTask() {
        @Override
        public void run() {
            log.debug("task 1");
            sleep(2);
        }
    };
    TimerTask task2 = new TimerTask() {
        @Override
        public void run() {
            log.debug("task 2");
        }
    };
    // 使用 timer 添加两个任务,希望它们都在 1s 后执行
    // 但由于 timer 内只有一个线程来顺序执行队列中的任务,因此『任务1』的延时,影响了『任务2』的执行
    timer.schedule(task1, 1000);
    timer.schedule(task2, 1000);
}

output

20:46:09.444 c.TestTimer [main] - start... 
20:46:10.447 c.TestTimer [Timer-0] - task 1 
20:46:12.448 c.TestTimer [Timer-0] - task 2

Rewrite using ScheduledExecutorService:

ScheduledExecutorService executor = Executors.newScheduledThreadPool(2); // 如果线程池大小设置为 1 两个任务还是会串行执行
// 添加两个任务,希望它们都在 1s 后执行
executor.schedule(() -> {
    System.out.println("任务1,执行时间:" + new Date());
    //int i = 1 / 0; // 即使有异常 也不影响第二个线程的执行
    try {
        Thread.sleep(2000);
    } catch (InterruptedException e) {
    }
}, 1000, TimeUnit.MILLISECONDS); // 参数:任务对象,延时时间,时间单位
executor.schedule(() -> {
    System.out.println("任务2,执行时间:" + new Date());
}, 1000, TimeUnit.MILLISECONDS);

output

任务1,执行时间:Thu Jan 03 12:45:17 CST 2019 
任务2,执行时间:Thu Jan 03 12:45:17 CST 2019

scheduleAtFixedRate (scheduled execution) example:

ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
log.debug("start...");
pool.scheduleAtFixedRate(() -> {
    log.debug("running...");
}, 1, 1, TimeUnit.SECONDS); // 参数:任务对象,延时时间,执行的间隔时间,时间单位

output

21:45:43.167 c.TestTimer [main] - start... 
21:45:44.215 c.TestTimer [pool-1-thread-1] - running... 
21:45:45.215 c.TestTimer [pool-1-thread-1] - running... 
21:45:46.215 c.TestTimer [pool-1-thread-1] - running... 
21:45:47.215 c.TestTimer [pool-1-thread-1] - running...

scheduleAtFixedRate example (task execution time exceeds interval):

ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
log.debug("start...");
pool.scheduleAtFixedRate(() -> {
    log.debug("running...");
    sleep(2);
}, 1, 1, TimeUnit.SECONDS);

Output analysis: the first run is delayed by 1 s; after that, because the task's execution time (2 s) is longer than the interval (1 s), the effective interval is stretched to 2 s:

21:44:30.311 c.TestTimer [main] - start... 
21:44:31.360 c.TestTimer [pool-1-thread-1] - running... 
21:44:33.361 c.TestTimer [pool-1-thread-1] - running... 
21:44:35.362 c.TestTimer [pool-1-thread-1] - running... 
21:44:37.362 c.TestTimer [pool-1-thread-1] - running...

scheduleWithFixedDelay (real interval time) example:

ScheduledExecutorService pool = Executors.newScheduledThreadPool(1);
log.debug("start...");
pool.scheduleWithFixedDelay(() -> {
    log.debug("running...");
    sleep(2);
}, 1, 1, TimeUnit.SECONDS); // 参数:任务对象,延时时间,任务与任务之间真正的间隔时间,时间单位

Output analysis: the first run is delayed by 1 s. With scheduleWithFixedDelay the delay is measured from the end of the previous task to the start of the next one, so the effective interval is 3 s (2 s execution + 1 s delay):

21:40:55.078 c.TestTimer [main] - start... 
21:40:56.140 c.TestTimer [pool-1-thread-1] - running... 
21:40:59.143 c.TestTimer [pool-1-thread-1] - running... 
21:41:02.145 c.TestTimer [pool-1-thread-1] - running... 
21:41:05.147 c.TestTimer [pool-1-thread-1] - running...

evaluation

The pool as a whole behaves like this: the number of threads is fixed, and when there are more tasks than threads, the tasks wait in an unbounded queue.

The threads are not released after the tasks finish. This pool is used for delayed or recurring tasks.

9) Handling task execution exceptions correctly

  • Method 1: Actively catch exceptions

    ExecutorService pool = Executors.newFixedThreadPool(1);
    pool.submit(() -> {
        try {
            log.debug("task1");
            int i = 1 / 0;
        } catch (Exception e) {
            log.error("error:", e);
        }
    });
    

    output

    21:59:04.558 c.TestTimer [pool-1-thread-1] - task1 
    21:59:04.562 c.TestTimer [pool-1-thread-1] - error: 
    java.lang.ArithmeticException: / by zero 
     		at cn.itcast.n8.TestTimer.lambda$main$0(TestTimer.java:28) 
     		at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
     		at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
     		at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
     		at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
     		at java.lang.Thread.run(Thread.java:748)
    
  • Method 2: Use Future

    ExecutorService pool = Executors.newFixedThreadPool(1);
    Future<Boolean> f = pool.submit(() -> {
        log.debug("task1");
        int i = 1 / 0;
        return true;
    });
    log.debug("result:{}", f.get());
    

    output

    21:54:58.208 c.TestTimer [pool-1-thread-1] - task1 
    Exception in thread "main" java.util.concurrent.ExecutionException: 
    java.lang.ArithmeticException: / by zero 
     		at java.util.concurrent.FutureTask.report(FutureTask.java:122) 
     		at java.util.concurrent.FutureTask.get(FutureTask.java:192) 
     		at cn.itcast.n8.TestTimer.main(TestTimer.java:31) 
    Caused by: java.lang.ArithmeticException: / by zero 
     		at cn.itcast.n8.TestTimer.lambda$main$0(TestTimer.java:28) 
     		at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
     		at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
     		at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
     		at java.lang.Thread.run(Thread.java:748)
    

Scheduled task application

How can a task be executed regularly, every Thursday at 18:00:00?

// 获得当前时间
LocalDateTime now = LocalDateTime.now();
// 获取本周四 18:00:00.000
LocalDateTime thursday =
        now.with(DayOfWeek.THURSDAY).withHour(18).withMinute(0).withSecond(0).withNano(0);
// 如果当前时间已经超过 本周四 18:00:00.000, 那么找下周四 18:00:00.000
if (now.compareTo(thursday) >= 0) {
    thursday = thursday.plusWeeks(1);
}
// 计算时间差,即延时执行时间
long initialDelay = Duration.between(now, thursday).toMillis();
// 计算间隔时间,即 1 周的毫秒值
long oneWeek = 7 * 24 * 3600 * 1000;
ScheduledExecutorService executor = Executors.newScheduledThreadPool(2);
System.out.println("开始时间:" + new Date());
executor.scheduleAtFixedRate(() -> {
    System.out.println("执行时间:" + new Date());
}, initialDelay, oneWeek, TimeUnit.MILLISECONDS);

10) Tomcat thread pool

Where does Tomcat use the thread pool?


  • LimitLatch limits the number of connections (flow control), similar to Semaphore in JUC, which is covered later
  • The Acceptor is only responsible for accepting new socket connections
  • The Poller is only responsible for watching the socket channels for readable I/O events
  • Once a channel is readable, it is wrapped into a task object (socketProcessor) and submitted to the Executor thread pool for processing
  • The worker threads in the Executor thread pool are ultimately responsible for processing the requests

The Tomcat thread pool extends ThreadPoolExecutor, with slightly different behavior:

  • If the total number of threads reaches maximumPoolSize
    • It does not throw RejectedExecutionException immediately
    • Instead it tries once more to put the task into the queue, and only throws RejectedExecutionException if that also fails

Source code (tomcat-7.0.42):

public void execute(Runnable command, long timeout, TimeUnit unit) {
    submittedCount.incrementAndGet();
    try {
        super.execute(command);
    } catch (RejectedExecutionException rx) {
        if (super.getQueue() instanceof TaskQueue) {
            final TaskQueue queue = (TaskQueue)super.getQueue();
            try {
                if (!queue.force(command, timeout, unit)) {
                    submittedCount.decrementAndGet();
                    throw new RejectedExecutionException("Queue capacity is full.");
                }
            } catch (InterruptedException x) {
                submittedCount.decrementAndGet();
                Thread.interrupted();
                throw new RejectedExecutionException(x);
            }
        } else {
            submittedCount.decrementAndGet();
            throw rx;
        }
    }
}

TaskQueue.java

public boolean force(Runnable o, long timeout, TimeUnit unit) throws InterruptedException {
    if ( parent.isShutdown() )
        throw new RejectedExecutionException(
                "Executor not running, can't force a command into the queue"
        );
    // forces the item onto the queue, to be used if the task is rejected
    return super.offer(o, timeout, unit);
}
  • Connector configuration

    Configuration item    Default   Description
    acceptorThreadCount   1         Number of acceptor threads (establish connections)
    pollerThreadCount     1         Number of poller threads (multiplexed channel monitoring)
    minSpareThreads       10        Core thread count, i.e. corePoolSize
    maxThreads            200       Maximum thread count, i.e. maximumPoolSize
    executor              -         Executor name, used to reference the Executor configured below

  • Executor thread configuration

    Configuration item        Default            Description
    threadPriority            5                  Thread priority
    daemon                    true               Whether the threads are daemon threads
    minSpareThreads           25                 Core thread count, i.e. corePoolSize
    maxThreads                200                Maximum thread count, i.e. maximumPoolSize
    maxIdleTime               60000              Thread keep-alive time in milliseconds, default 1 minute
    maxQueueSize              Integer.MAX_VALUE  Queue length
    prestartminSpareThreads   false              Whether core threads are started when the server starts




References for this article: Dark horse programmers, in-depth Java concurrent programming, the full set of JUC concurrent programming tutorials


Origin blog.csdn.net/weixin_53407527/article/details/128604828