[Thread pool] ThreadPoolExecutor structure and constructor parameters in detail

Introduction to thread pool

Problems when not using a thread pool

  • It is expensive to repeatedly create and destroy threads
  • Too many threads take up a lot of memory

Using a thread pool is similar to a planned economy: it controls the total amount of resources and reuses threads. There are three advantages:

  1. Faster response. Reusing existing threads eliminates the delay caused by thread creation.
  2. Reasonable use of CPU and memory. Controlling the number of threads avoids memory overflow from too many threads and wasted CPU from too few.
  3. Unified management of threads, for example collecting statistics.

Examples of thread pool usage scenarios:

  • A server receiving a large number of requests: using a thread pool greatly reduces the number of thread creations and destructions and improves server throughput.
  • More generally, whenever you use multiple threads in development, consider using a thread pool.

Thread pool structure

ThreadPoolExecutor extends AbstractExecutorService, which in turn implements ExecutorService (and thus Executor). All four classes/interfaces in the hierarchy can be treated as a thread pool through upcasting (polymorphism). ThreadPoolExecutor contains 5 nested classes: 4 are implementations of the RejectedExecutionHandler rejection-policy interface, and 1 is the Worker class, which maintains the interrupt-control state of the thread running a task, along with other minor bookkeeping.

Executors is a thread pool utility class whose static factory methods create thread pools quickly, for example Executors.newFixedThreadPool(10).
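For instance, a minimal usage sketch with the standard JDK API:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolDemo {
    public static void main(String[] args) {
        // A pool with 10 reusable threads.
        ExecutorService pool = Executors.newFixedThreadPool(10);
        // The task runs on one of the pooled threads instead of a newly created one.
        pool.submit(() -> System.out.println(Thread.currentThread().getName()));
        // Stop accepting new tasks and let already submitted tasks finish.
        pool.shutdown();
    }
}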

Take a bakery as an example.
Suppose a bakery normally employs 5 bakers. As orders come in during the day, the 5 bakers gradually get to work. Orders that cannot be handled right away are pinned to the order board on the wall, where they wait to be processed in order.
When there are too many orders for the 5 bakers to keep up with, the shop hires temporary bakers, up to 5 more. The number of temporary bakers is adjusted according to the workload, and a temporary baker who is idle for too long is let go.
Once the shop has closed, or all 10 bakers are busy and the order board is full, new orders are refused.
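A sketch of the bakery expressed as a ThreadPoolExecutor; the order-board capacity of 20 is an assumed value for illustration:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class Bakery {
    static final ThreadPoolExecutor POOL = new ThreadPoolExecutor(
            5,                              // corePoolSize: the 5 regular bakers
            10,                             // maximumPoolSize: at most 10 bakers in total
            60L, TimeUnit.SECONDS,          // keepAliveTime: an idle temporary baker is let go after 60s
            new ArrayBlockingQueue<>(20));  // workQueue: the order board on the wall (capacity assumed)
}

Refusing new orders corresponds to the rejection policy, discussed below.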

Thread pool constructor

ThreadPoolExecutor has four constructors, the largest taking 7 parameters. The first 5 (corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue) are required; the thread factory and the rejection policy are optional.

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue) {
    // Delegates to the full constructor, filling in the default
    // thread factory and the default rejection policy (AbortPolicy).
    this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
         Executors.defaultThreadFactory(), defaultHandler);
}


Thread addition strategy

  1. If the number of threads is less than corePoolSize, a new thread is created to run the new task even if other worker threads are idle. (Threads are not created all at once: each submitted task adds one thread until corePoolSize is reached.)
  2. If the number of threads is greater than or equal to corePoolSize, the task is placed in the blocking queue.
  3. If the queue is full and the number of threads is less than maximumPoolSize, a new (non-core) thread is created to run the task.
  4. If the queue is full and the number of threads is greater than or equal to maximumPoolSize, the task is rejected. (A demonstration sketch follows this list.)
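A minimal demonstration with assumed sizes (core 2, maximum 4, queue capacity 2); with long-running tasks, tasks 1-2 start core threads, tasks 3-4 wait in the queue, tasks 5-6 start non-core threads, and task 7 is rejected:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class AdditionOrderDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(2));
        for (int i = 1; i <= 7; i++) {
            final int n = i;
            try {
                pool.execute(() -> {
                    try { Thread.sleep(5000); } catch (InterruptedException ignored) { }
                    System.out.println("task " + n + " ran on " + Thread.currentThread().getName());
                });
            } catch (RejectedExecutionException ex) {
                // Default AbortPolicy: the 7th task is rejected with an exception.
                System.out.println("task " + n + " rejected");
            }
        }
        pool.shutdown();
    }
}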


Characteristics of adding and removing threads

  1. If corePoolSize and maximumPoolSize are set to the same value, you get a fixed-size thread pool. In that case the work queue is usually unbounded and keepAliveTime is 0L (see the sketch after this list).
  2. The pool is designed to keep the number of threads small and to add new threads only when the load becomes heavy.
  3. Setting maximumPoolSize to Integer.MAX_VALUE allows the pool to accommodate an arbitrary number of concurrent tasks.
  4. New threads beyond corePoolSize are created only when the queue is full. With an unbounded queue such as LinkedBlockingQueue, the number of threads therefore never exceeds corePoolSize.
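A sketch of point 1, a fixed-size pool built directly (essentially what Executors.newFixedThreadPool(5) creates):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class FixedPool {
    // corePoolSize == maximumPoolSize plus an unbounded queue gives a fixed-size pool.
    static final ThreadPoolExecutor POOL = new ThreadPoolExecutor(
            5, 5,                           // core and maximum thread counts are the same
            0L, TimeUnit.MILLISECONDS,      // keepAliveTime 0L (no threads beyond the core are ever created)
            new LinkedBlockingQueue<>());   // unbounded queue: surplus tasks simply wait here
}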

Rejection strategy of thread pool

When rejection happens (see the two call sites of the reject method inside execute):

final void reject(Runnable command) {
    // Delegates to the configured rejection policy (RejectedExecutionHandler).
    handler.rejectedExecution(command, this);
}
  1. When the Executor has been shut down, newly submitted tasks are rejected;
  2. When the Executor uses finite bounds for both the maximum number of threads and the work-queue capacity, and both are saturated.

Rejection strategy

  1. AbortPolicy: throws a RejectedExecutionException, making the failed submission visible to the caller. This is the default policy.
  2. DiscardPolicy: silently discards the new task without throwing an exception.
  3. DiscardOldestPolicy: discards the oldest task in the queue and retries the submission.
  4. CallerRunsPolicy: lets the thread that submitted the task run it. The advantage is that no work is lost and the submission rate slows down naturally: when the pool and work queue are full, the submitting thread (for example the main thread) executes the task itself, which gives the pool time to catch up. (A configuration sketch follows this list.)
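A minimal configuration sketch using the full constructor; the pool and queue sizes are assumed values:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class RejectionConfig {
    static final ThreadPoolExecutor POOL = new ThreadPoolExecutor(
            2, 4, 60L, TimeUnit.SECONDS,                 // assumed sizes
            new ArrayBlockingQueue<>(10),                // bounded queue, so saturation is possible
            Executors.defaultThreadFactory(),
            new ThreadPoolExecutor.CallerRunsPolicy());  // overflow tasks run on the submitting thread
}

The handler can also be swapped at runtime with setRejectedExecutionHandler.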

Source code analysis

public interface RejectedExecutionHandler {
    void rejectedExecution(Runnable r, ThreadPoolExecutor executor);
}

The rejection-policy interface has only one method, and its purpose is clear.

All four rejection policies implement the RejectedExecutionHandler interface, and their implementations are short and clear. Only two of them, DiscardOldestPolicy and CallerRunsPolicy, are shown here.

public static class DiscardOldestPolicy implements RejectedExecutionHandler {
    public DiscardOldestPolicy() { }

    public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
        // 1. If the pool is RUNNING and saturated, discard the oldest queued task
        //    and retry the new task through execute.
        // 2. If the pool is not RUNNING, the task is simply dropped.
        if (!e.isShutdown()) {
            e.getQueue().poll();
            e.execute(r);
        }
    }
}

public static class CallerRunsPolicy implements RejectedExecutionHandler {
    public CallerRunsPolicy() { }

    // 1. If the pool is RUNNING and saturated, run the task on the caller's thread.
    // 2. If the pool is not RUNNING, the task is simply dropped.
    public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
        if (!e.isShutdown()) {
            r.run();
        }
    }
}

Note that calling run() directly on the Runnable is a synchronous call, so the caller thread, i.e. the thread that invoked the pool's execute method, runs the task.
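Because the interface has a single abstract method, a custom policy can also be written as a lambda. A hypothetical sketch that logs the rejection and then drops the task:

import java.util.concurrent.RejectedExecutionHandler;

class LoggingPolicy {
    // Hypothetical policy: record the rejection, then silently drop the task.
    static final RejectedExecutionHandler LOG_AND_DISCARD = (r, executor) ->
            System.err.println("Rejected a task, queue size = " + executor.getQueue().size());
}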

Work queue

There are three general strategies for work queues.

  • Direct handoff: a good default is SynchronousQueue, which hands tasks straight to threads without otherwise holding them. If no thread is immediately available, the attempt to queue the task fails and a new thread is created. This usually requires an unbounded maximumPoolSize.
  • Unbounded queue: for example LinkedBlockingQueue; it smooths out bursts of requests, but tasks can pile up without limit.
  • Bounded queue: for example ArrayBlockingQueue; it helps prevent resource exhaustion, but the queue size and the maximum pool size must be tuned against each other, which is harder. (A sketch of all three follows this list.)
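A sketch of the three choices side by side; the sizes are assumed values:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class QueueChoices {
    // Direct handoff: no buffering, so the maximum pool size must be large.
    static final ThreadPoolExecutor DIRECT = new ThreadPoolExecutor(
            0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<>());

    // Unbounded queue: threads beyond the core size are never created.
    static final ThreadPoolExecutor UNBOUNDED = new ThreadPoolExecutor(
            4, 4, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());

    // Bounded queue: queue capacity and maximum pool size must be tuned together.
    static final ThreadPoolExecutor BOUNDED = new ThreadPoolExecutor(
            4, 8, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(100));
}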

Thread factory

public interface ThreadFactory {
    Thread newThread(Runnable r);
}

ThreadFactory has only one method, so it is a functional interface.

static class DefaultThreadFactory implements ThreadFactory {
    // Counts the thread pools (used in the name prefix)
    private static final AtomicInteger poolNumber = new AtomicInteger(1);
    // Thread group that new threads belong to
    private final ThreadGroup group;
    // Counts the threads created by this factory
    private final AtomicInteger threadNumber = new AtomicInteger(1);
    // Name prefix of the threads (identifies the owning pool)
    private final String namePrefix;

    DefaultThreadFactory() {
        SecurityManager s = System.getSecurityManager();
        // Use the security manager's thread group, or the group of the thread
        // that created this factory
        group = (s != null) ? s.getThreadGroup() :
                              Thread.currentThread().getThreadGroup();
        namePrefix = "pool-" +
                      poolNumber.getAndIncrement() +
                     "-thread-";
    }

    public Thread newThread(Runnable r) {
        Thread t = new Thread(group, r,
                              namePrefix + threadNumber.getAndIncrement(),
                              0);
        // Only produce user (non-daemon) threads
        if (t.isDaemon())
            t.setDaemon(false);
        // Use normal priority (NORM_PRIORITY = 5)
        if (t.getPriority() != Thread.NORM_PRIORITY)
            t.setPriority(Thread.NORM_PRIORITY);
        return t;
    }
}
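Because ThreadFactory is a functional interface, a custom factory that gives threads a business-specific name can be supplied as a lambda. A small sketch; the name prefix "order-worker-" is an assumed example:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

class NamedFactoryDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(1);
        // Naming threads after the business they serve makes thread dumps easier to read.
        ThreadFactory factory = r -> new Thread(r, "order-worker-" + counter.getAndIncrement());
        ExecutorService pool = Executors.newFixedThreadPool(2, factory);
        pool.submit(() -> System.out.println(Thread.currentThread().getName()));
        pool.shutdown();
    }
}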

Common thread pool

Summary of the parameters of the thread pools produced by Executors:

  • FixedThreadPool: corePoolSize = maximumPoolSize = n, keepAliveTime = 0, unbounded LinkedBlockingQueue
  • SingleThreadExecutor: corePoolSize = maximumPoolSize = 1, keepAliveTime = 0, unbounded LinkedBlockingQueue
  • CachedThreadPool: corePoolSize = 0, maximumPoolSize = Integer.MAX_VALUE, keepAliveTime = 60 s, SynchronousQueue
  • ScheduledThreadPool: corePoolSize = n, maximumPoolSize = Integer.MAX_VALUE, keepAliveTime = 0, DelayedWorkQueue

The default thread factory is Executors.defaultThreadFactory(), and the default rejection policy is AbortPolicy.

  • SingleThreadPool: internally the same as FixedThreadPool, except that the number of threads is fixed at 1.
  • CachedThreadPool: creates a cacheable thread pool. It is an unbounded pool that automatically reclaims surplus threads. It uses a synchronous handoff queue (SynchronousQueue): a task is handed directly to an idle thread, or a new thread is created for it. A thread that stays idle (runs no task) for 60 seconds is reclaimed.
  • ScheduledThreadPool: a thread pool that supports scheduled and periodic task execution.
  • WorkStealingPool: new in JDK 8, built on the ForkJoinPool framework added in the same release; it uses the available CPU cores to run tasks in parallel and suits time-consuming, divide-and-conquer work such as recursive tasks with little or no locking. (A usage sketch follows this list.)
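A quick sketch of creating these pools through the Executors factory methods:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class FactoryDemo {
    public static void main(String[] args) {
        ExecutorService fixed = Executors.newFixedThreadPool(4);
        ExecutorService single = Executors.newSingleThreadExecutor();
        ExecutorService cached = Executors.newCachedThreadPool();
        ExecutorService stealing = Executors.newWorkStealingPool();
        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2);

        // Run a task every 5 seconds after an initial 1-second delay
        // (keeps running until the pool is shut down).
        scheduled.scheduleAtFixedRate(
                () -> System.out.println("tick on " + Thread.currentThread().getName()),
                1, 5, TimeUnit.SECONDS);
    }
}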

Points to note about thread pools

Avoid task accumulation, avoid an excessive increase in the number of threads, and watch for thread leaks. When the thread count is high there may be a thread leak: threads have finished their tasks but are never reclaimed, which usually points to a problem in the task logic.


Origin blog.csdn.net/LIZHONGPING00/article/details/105155805