Multi-threading: thread pools (key points, to be covered in more depth later)

Advantages of the thread pool

Overall, the thread pool has the following advantages:
(1) Reduced resource consumption. Reusing already-created threads lowers the cost of thread creation and destruction.
(2) Improved response speed. When a task arrives it can be executed immediately, without waiting for a thread to be created.
(3) Better thread manageability. Threads are a scarce resource; creating them without limit not only consumes system resources but also reduces system stability. A thread pool allows threads to be allocated, tuned, and monitored in a unified way.

The ThreadPoolExecutor constructor takes seven parameters:

corePoolSize (required): the number of core threads. By default core threads stay alive even when idle, but if allowCoreThreadTimeOut is set to true, idle core threads are also reclaimed after the timeout.
maximumPoolSize (required): the maximum number of threads the pool may hold. Once this many threads are active and the work queue is full, new tasks are handed to the rejection policy.
keepAliveTime (required): the idle timeout for threads. Non-core threads that stay idle longer than this are reclaimed; if allowCoreThreadTimeOut is true, core threads are reclaimed after the timeout as well.
unit (required): the time unit for keepAliveTime. Common values are TimeUnit.MILLISECONDS, TimeUnit.SECONDS and TimeUnit.MINUTES.
workQueue (required): the task queue. Runnable tasks submitted through the pool's execute() method are held in this queue, which is a BlockingQueue implementation.
threadFactory (optional): the thread factory, used to customize how new threads are created for the pool.
handler (optional): the rejection policy, i.e. the saturation strategy executed when the maximum number of threads is reached and the queue is full.
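
For reference, a minimal sketch of constructing a ThreadPoolExecutor with all seven parameters spelled out (the pool sizes, queue capacity and timeout below are arbitrary example values):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadPoolParametersExample {
    public static void main(String[] args) {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                60L,                                  // keepAliveTime
                TimeUnit.SECONDS,                     // unit
                new ArrayBlockingQueue<>(100),        // workQueue
                Executors.defaultThreadFactory(),     // threadFactory
                new ThreadPoolExecutor.AbortPolicy()  // handler (rejection policy)
        );
        executor.execute(() -> System.out.println("task executed"));
        executor.shutdown();
    }
}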

Four ways to create a thread pool:

newCachedThreadPool
creates a cacheable thread pool. If the pool grows beyond what processing requires, idle threads are recycled flexibly; if no idle thread can be reused, a new thread is created.
newFixedThreadPool
creates a thread pool with a fixed number of worker threads. Each time a task is submitted a worker thread is created, until the number of worker threads reaches the pool's fixed size; after that, submitted tasks are held in the pool's queue.
newSingleThreadExecutor
creates a single-threaded Executor, i.e. only one worker thread is created to execute tasks, which guarantees that all tasks run sequentially in submission order (FIFO). If this thread ends abnormally, a new one takes its place, so sequential execution is still guaranteed.
newScheduledThreadPool
creates a fixed-size thread pool that supports delayed and periodic task execution.
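
A brief sketch using these Executors factory methods (the pool sizes and delays are arbitrary example values):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ExecutorsFactoryExample {
    public static void main(String[] args) {
        ExecutorService cached = Executors.newCachedThreadPool();
        ExecutorService fixed = Executors.newFixedThreadPool(4);
        ExecutorService single = Executors.newSingleThreadExecutor();
        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2);

        cached.execute(() -> System.out.println("cached pool task"));
        fixed.execute(() -> System.out.println("fixed pool task"));
        single.execute(() -> System.out.println("single-thread task"));
        // Run a task after a 1-second delay and then every 5 seconds
        scheduled.scheduleAtFixedRate(
                () -> System.out.println("periodic task"), 1, 5, TimeUnit.SECONDS);

        cached.shutdown();
        fixed.shutdown();
        single.shutdown();
        // scheduled is left running so the periodic task keeps firing
    }
}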

Rejection policy (handler)

The rejection policy of a thread pool determines how new task submissions are handled once the pool has reached its maximum number of threads and the work queue is full. There are four built-in rejection policies:

  1. AbortPolicy (default policy): directly throws a RejectedExecutionException, so the submission fails and normal processing is interrupted.
  2. CallerRunsPolicy: runs the rejected task in the calling thread (the thread that submitted it). If tasks are submitted too quickly, the caller can become overloaded and slow down the rest of the system.
  3. DiscardPolicy: silently discards the rejected task without any processing.
  4. DiscardOldestPolicy: discards the oldest task in the work queue and retries submitting the current task; tasks discarded this way are never executed.
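
To make the saturation behavior concrete, here is a small sketch (pool and queue sizes are arbitrary) in which the third task is rejected and, under CallerRunsPolicy, runs in the submitting thread:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionPolicyExample {
    public static void main(String[] args) {
        // 1 core thread, 1 max thread, queue of capacity 1: the 3rd task is rejected
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());
        for (int i = 1; i <= 3; i++) {
            final int taskId = i;
            executor.execute(() -> {
                // Under CallerRunsPolicy the rejected task prints the main thread's name
                System.out.println("task " + taskId + " on " + Thread.currentThread().getName());
                try {
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        executor.shutdown();
    }
}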

The best choice depends on the requirements and the scenario of the application. In general, a custom rejection policy can be used to meet specific needs.
Here are some possible custom rejection policies:

  1. Limit the capacity of the queue and, when the queue is full, execute the task directly in the submitting thread instead of queuing it. This ensures that no task is discarded, but may put excessive load on the submitting thread.
  2. Implement an adaptive rejection policy: when the number of threads has reached the maximum, gradually raise the pool's maximum capacity so that rejections occur less often and task requests are handled more smoothly.
  3. Put the task into a distributed task queue so that it can be dispatched to multiple nodes for execution, improving overall throughput.

In the end, the best solution is to choose an appropriate rejection policy based on the specific business needs and scenario, and to adjust and optimize it as required.
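
As an illustration of the first idea above, a minimal custom handler (the class name is made up for this sketch) that logs the rejection and then runs the task in the submitting thread, much like CallerRunsPolicy:

import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

// Hypothetical custom rejection policy: log, then run the task in the caller's thread
public class LogAndCallerRunsPolicy implements RejectedExecutionHandler {

    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        System.err.println("Thread pool saturated, running task in the submitting thread: " + r);
        if (!executor.isShutdown()) {
            r.run();
        }
    }
}

It can be registered in the same way as a built-in policy, e.g. executor.setRejectedExecutionHandler(new LogAndCallerRunsPolicy()).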

Implementation process

When a task is submitted via execute(), the pool first tries to run it on a core thread; if all core threads are busy, the task is placed in the work queue; if the queue is full, a non-core thread is created, up to maximumPoolSize; and if that limit is also reached, the rejection policy is applied.
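
A tiny sketch (sizes are arbitrary) that makes this flow visible by printing the thread count and queue length as tasks are submitted:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExecuteFlowExample {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 30L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(2));
        for (int i = 1; i <= 6; i++) {
            pool.execute(() -> {
                try {
                    Thread.sleep(1000); // keep the thread busy so later tasks queue up
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            System.out.println("threads=" + pool.getPoolSize()
                    + ", queued=" + pool.getQueue().size());
        }
        pool.shutdown();
    }
}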

Sample code:

thread:
    # Number of core threads
    corePoolSize:
    # Maximum number of threads
    maxPoolSize:
    # Capacity of the task queue
    queueCapacity:
    # Keep-alive time (seconds) for non-core threads
    keepAlive:
    # Seconds to wait for running tasks to finish on shutdown
    awaitTerminationSeconds:

package com.demo.config;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.AsyncConfigurer;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

import java.util.Optional;
import java.util.concurrent.Executor;
import java.util.concurrent.ThreadPoolExecutor;

/**
 * Thread pool configuration
 */
@Configuration
public class ThreadPoolTaskExecutorConfig implements AsyncConfigurer {

    // Number of CPU cores on the current machine
    public static final int CPU_NUM = Runtime.getRuntime().availableProcessors();

    @Value("${thread.corePoolSize}")
    private Integer corePoolSize;

    @Value("${thread.maxPoolSize}")
    private Integer maxPoolSize;

    @Value("${thread.queueCapacity}")
    private Integer queueCapacity;

    @Value("${thread.keepAlive}")
    private Integer keepAlive;

    @Value("${thread.awaitTerminationSeconds}")
    private Integer awaitTerminationSeconds;

    @Override
    @Bean
    public Executor getAsyncExecutor() {
        ThreadPoolTaskExecutor threadPoolTaskExecutor = new ThreadPoolTaskExecutor();
        // Number of core threads
        threadPoolTaskExecutor.setCorePoolSize(orDefault(corePoolSize, CPU_NUM));
        // Maximum number of threads
        threadPoolTaskExecutor.setMaxPoolSize(orDefault(maxPoolSize, CPU_NUM * 2));
        // Capacity of the task queue
        threadPoolTaskExecutor.setQueueCapacity(orDefault(queueCapacity, 3));
        // Keep-alive (maximum idle) time for non-core threads, in seconds
        threadPoolTaskExecutor.setKeepAliveSeconds(orDefault(keepAlive, 10));
        threadPoolTaskExecutor.setThreadNamePrefix("test-thread-");
        // Rejection policy: run the rejected task in the caller's thread
        threadPoolTaskExecutor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        // Seconds to wait for running tasks to finish when the pool shuts down
        threadPoolTaskExecutor.setAwaitTerminationSeconds(orDefault(awaitTerminationSeconds, 10));
        threadPoolTaskExecutor.initialize();
        return threadPoolTaskExecutor;
    }

    /**
     * Null check with fallback.
     *
     * @param value        original value
     * @param defaultValue fallback value used when value is null
     * @return value if non-null, otherwise defaultValue
     */
    public Integer orDefault(Integer value, Integer defaultValue) {
        return Optional.ofNullable(value).orElse(defaultValue);
    }

}

import java.util.concurrent.Executor;

import com.demo.config.ThreadPoolTaskExecutorConfig;

public class ThreadPoolDemo {

    public static void main(String[] args) {
        // Instantiated directly (outside Spring), so the @Value fields stay null
        // and the fallback defaults in the config class are used
        ThreadPoolTaskExecutorConfig threadPoolTaskExecutorConfig = new ThreadPoolTaskExecutorConfig();
        // Create the thread pool
        Executor executor = threadPoolTaskExecutorConfig.getAsyncExecutor();
        executor.execute(() -> System.out.println("task content"));
    }
}
