Java multithreading: the pitfalls of thread pools

I. Introduction

Most of us use a thread pool for multi-threaded development at work: create a task, throw it into the pool, and forget about it. In general there is nothing wrong with that, but in real projects this casual usage can hurt badly. If the business logic of each task is time-consuming and traffic is heavy, for example when the system runs a big promotion, there is a high probability that tasks will pile up in the pool, memory will be exhausted, and the system will be dragged down. So how do you use the Java thread pool correctly? Let's discuss this question together.

Two, ExecutorService interface

Beginning with JDK 1.5, the ExecutorService interface was added, and it is the officially recommended way to create thread pools.

2.1, Single thread pool

//Create a thread pool with a single core thread
ExecutorService singlePool = Executors.newSingleThreadExecutor();

Specific implementation code:

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}

Both corePoolSize and maximumPoolSize are set to 1, but the blocking queue is an unbounded LinkedBlockingQueue, which is a hidden risk of memory exhaustion. Under heavy traffic, if business tasks keep arriving faster than the single thread can process them, the ever-growing queue will eventually exhaust memory.
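A minimal sketch of how this goes wrong (the task count and the 100 ms sleep are made-up numbers for illustration): tasks arrive faster than the single worker can drain them, so the unbounded queue just keeps growing. The pool is built directly with the same configuration as newSingleThreadExecutor() so that the queue size can be inspected.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SingleThreadBacklogDemo {
    public static void main(String[] args) {
        // Same configuration as Executors.newSingleThreadExecutor(), built
        // directly so we can look at the queue.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(1, 1,
                0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());

        // Submit tasks much faster than the single worker can finish them.
        for (int i = 0; i < 10000; i++) {
            pool.execute(() -> {
                try {
                    Thread.sleep(100); // simulate slow business logic
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // Almost every task is still sitting in the unbounded queue.
        System.out.println("queued tasks: " + pool.getQueue().size());
        pool.shutdownNow();
    }
}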

2.2, Fixed thread pool

//Create a thread pool with a fixed number of threads
ExecutorService fixPool = Executors.newFixedThreadPool(50);

The specific implementation code is as follows:

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

A fixed thread pool is essentially the single thread pool with N threads; it uses the same unbounded LinkedBlockingQueue and therefore carries the same hidden risk, so I won't repeat it.

2.3, Cached thread pool

//Create an unbounded (cached) thread pool
ExecutorService cachePool = Executors.newCachedThreadPool();

The specific implementation code is as follows:

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}

The hidden danger of the cached thread pool is even greater: maximumPoolSize is Integer.MAX_VALUE, which is effectively unlimited, and the SynchronousQueue holds no tasks at all, so whenever no idle thread is available a brand-new thread is created. At a traffic peak the pool can spawn threads without bound, and the risk of blowing up memory is even higher.
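A minimal sketch of that behaviour (the task count and the 5-second sleep are made-up values): because every task blocks for a while, no worker is ever idle when the next task arrives, so the pool creates roughly one thread per task.

import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

public class CachedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // newCachedThreadPool() returns a ThreadPoolExecutor, so the cast is safe.
        ThreadPoolExecutor cachePool =
                (ThreadPoolExecutor) Executors.newCachedThreadPool();

        for (int i = 0; i < 1000; i++) {
            cachePool.execute(() -> {
                try {
                    Thread.sleep(5000); // simulate slow business logic
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        Thread.sleep(1000);
        // Roughly one worker thread per submitted task.
        System.out.println("pool size: " + cachePool.getPoolSize());
        cachePool.shutdownNow();
    }
}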

2.4, Understanding the core parameters of the thread pool

All three factory methods above ultimately create the pool through the ThreadPoolExecutor class. Looking at the constructor of ThreadPoolExecutor, its full form takes seven parameters; the six described below are the core ones for configuring a pool (the remaining one, a ThreadFactory, is used to create and name the worker threads).
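For reference, this is what a fully spelled-out construction looks like; the concrete values are placeholders for illustration, not recommendations.

ThreadPoolExecutor pool = new ThreadPoolExecutor(
        5,                                    // corePoolSize: threads kept alive even when idle
        20,                                   // maximumPoolSize: upper bound on worker threads
        60L,                                  // keepAliveTime: how long surplus threads may stay idle
        TimeUnit.SECONDS,                     // unit of keepAliveTime
        new LinkedBlockingQueue<>(100),       // workQueue: bounded task queue
        Executors.defaultThreadFactory(),     // threadFactory: creates and names worker threads
        new ThreadPoolExecutor.AbortPolicy()  // handler: rejection policy when pool and queue are full
);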

2.4.1, corePoolSize (number of core threads)

Core threads stay alive even when there are no tasks to execute. While the number of worker threads is below corePoolSize, a new thread is created for each incoming task, even if other threads are idle, until the core size is reached. Only if allowCoreThreadTimeOut is set to true (the default is false) are idle core threads shut down after the keep-alive time.

2.4.2, maximumPoolSize (maximum number of threads)

The maximum number of worker threads the pool allows to run. To see how it relates to corePoolSize, think of a basketball arena: under normal circumstances it seats 5,000 spectators for a game, which corresponds to corePoolSize. For a popular match, temporary seats are added and up to 8,000 fans can pour in, which corresponds to maximumPoolSize.

2.4.3, keepAliveTime (thread idle time)

When a thread has been idle for keepAliveTime, it exits (is closed), until the number of threads drops back to the core size. If allowCoreThreadTimeOut=true is set, threads keep exiting until the pool holds no threads at all.
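A small sketch of letting even core threads time out; the 30-second keep-alive is just an example value.

// By default, only threads beyond corePoolSize are reclaimed after 30s of idleness.
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        5, 20, 30L, TimeUnit.SECONDS, new LinkedBlockingQueue<>(100));

// With this switched on, idle core threads are reclaimed too,
// so the pool can shrink all the way down to zero threads.
pool.allowCoreThreadTimeOut(true);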

2.4.4, TimeUnit (unit of thread idle time)

The time unit applied to keepAliveTime, for example TimeUnit.SECONDS or TimeUnit.MILLISECONDS.

2.4.5, workQueue (task queue)

Also called the blocking queue. When all core threads are busy, newly submitted tasks are placed in this queue and wait there until a thread becomes free to execute them.
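A sketch of the difference between an unbounded and a bounded queue (the capacity of 100 is an arbitrary example):

// Unbounded: never refuses a task, but can grow until memory runs out.
BlockingQueue<Runnable> unbounded = new LinkedBlockingQueue<>();

// Bounded: holds at most 100 waiting tasks; once it is full and the pool has
// reached maximumPoolSize, the rejection handler is invoked for new tasks.
BlockingQueue<Runnable> bounded = new ArrayBlockingQueue<>(100);

// A LinkedBlockingQueue with an explicit capacity behaves the same way.
BlockingQueue<Runnable> alsoBounded = new LinkedBlockingQueue<>(100);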

2.4.6, RejectedExecutionHandler (task rejection handler)

When the number of threads has reached the maximum and the task queue is full, newly submitted tasks are handed to the rejection handler.

Three, the four built-in rejection policies

Setting the rejection policy correctly is extremely important: you must decide in advance what should happen once the pool is saturated, because choosing the wrong policy can silently break your business logic. The ThreadPoolExecutor class provides four built-in policies:

3.1, AbortPolicy

The task is discarded and a RejectedExecutionException (a runtime exception) is thrown to the submitter. This is the default policy.

3.2, CallerRunsPolicy

The rejected task is not discarded; instead it runs directly in the thread that submitted it, which also slows the submitter down and provides a form of back-pressure.

3.3, DiscardPolicy

The task is silently discarded; no exception is thrown and nothing else happens.

3.4, DiscardOldestPolicy

The oldest task at the head of the queue (the one that entered first and would be executed next) is kicked out, and the pool then retries executing the newly submitted task.
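To use one of the built-in policies, pass it as the last constructor argument; CallerRunsPolicy is shown here purely as an example.

// When the pool and the bounded queue are both full, the thread that calls
// execute() runs the task itself instead of the task being thrown away.
ThreadPoolExecutor pool = new ThreadPoolExecutor(5, 20, 10L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<>(15), new ThreadPoolExecutor.CallerRunsPolicy());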

3.5, Custom policy

Of course, the JDK also lets you define your own rejection policy: just implement the RejectedExecutionHandler interface.
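A minimal sketch of such a handler. The class name LogRejectedExecutionHandler matches the one used in the example of the next section, but the log-and-drop behaviour here is only an illustration of what the custom logic could look like, not the author's actual implementation.

import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

public class LogRejectedExecutionHandler implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        // Record enough context to diagnose the overload, then decide what to do
        // with the task: drop it, persist it for later replay, run it in the
        // caller thread, and so on. Here we simply log and drop it.
        System.err.println("Task rejected: " + r
                + ", activeThreads=" + executor.getActiveCount()
                + ", queuedTasks=" + executor.getQueue().size());
    }
}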

Four, the recommended practice:

I suggest that you use thread pools like this:

ThreadPoolExecutor threadPool = new ThreadPoolExecutor(5, 20, 10,
            TimeUnit.SECONDS, new LinkedBlockingQueue<>(15), new LogRejectedExecutionHandler());

Set the core parameters of each thread pool explicitly, and always define a custom rejection policy whose logic matches your business requirements; relying blindly on the built-in policies can easily leave holes in your business logic. As for the concrete parameter values, choose the most reasonable values and rejection policy according to the actual machine performance, CPU performance, number of CPU cores and similar factors. In short, using a thread pool well means continuously tuning it while observing how the machine behaves.
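A sketch of using and shutting down such a pool; the task body and the 60-second timeout are placeholders.

// Submit work as usual; saturation is now handled by the custom rejection handler.
threadPool.execute(() -> {
    // business logic goes here
});

// Shut down gracefully: stop accepting new tasks, then wait for queued ones to finish.
threadPool.shutdown();
try {
    if (!threadPool.awaitTermination(60, TimeUnit.SECONDS)) {
        threadPool.shutdownNow(); // give up on whatever is still running
    }
} catch (InterruptedException e) {
    threadPool.shutdownNow();
    Thread.currentThread().interrupt();
}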

Five, summary:

There is no once-and-for-all configuration. As the name suggests, a thread pool is a pool: the number of tasks flowing in and out must be controlled reasonably, and a reasonable rejection policy must be set.
Like the basketball arena, its throughput always has a ceiling, so the flow of people entering and leaving must be managed. If more people enter than leave, the arena becomes overcrowded; if fewer enter than leave, its capacity is wasted. When it is overcrowded, admission has to be limited and the reason explained to the fans, which is exactly what the rejection policy does. Okay, that's all; I hope it is useful to everyone, and you are welcome to leave a message so we can discuss it together.

Origin blog.csdn.net/datuanyuan/article/details/109097752