Comic: A look at the thread growth and recycling strategy of the thread pool

First, the question

public static ExecutorService newThreadPool() {
  return new ThreadPoolExecutor(
    30, 60,
    60L, TimeUnit.MILLISECONDS,
    new LinkedBlockingQueue<Runnable>());
}

Today, let's borrow this question to talk about the threads maintained in the thread pool: what are their growth and recycling strategies?

Second, the thread pool strategy

2.1 Thread pool parameters

When we talk about the growth strategy of threads in the thread pool, the most eye-catching parameters are the core thread count (corePoolSize) and the maximum thread count (maximumPoolSize). But looking only at these two is not comprehensive enough: the growth is also related to the task waiting queue.

Let's take a look at the most complete parameter construction method of ThreadPoolExecutor:

public ThreadPoolExecutor(
  int corePoolSize,
  int maximumPoolSize,
  long keepAliveTime,
  TimeUnit unit,
  BlockingQueue<Runnable> workQueue,
  ThreadFactory threadFactory,
  RejectedExecutionHandler handler) {
  // ...
}

Briefly, each parameter means:

  • corePoolSize: the number of core threads;
  • maximumPoolSize: the maximum number of threads in the thread pool;
  • keepAliveTime: the maximum idle time of threads beyond the core thread count before they are reclaimed;
  • unit: the time unit of keepAliveTime;
  • workQueue: the task waiting queue of the thread pool;
  • threadFactory: the thread factory, used to create threads for the thread pool;
  • handler: the rejection strategy, applied when the thread pool cannot accept a task.
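To make the parameters concrete, here is a minimal sketch of constructing a pool with all seven parameters. The class name and the sizes (2 core threads, 4 maximum, a bounded queue of 10) are hypothetical, chosen only for illustration:

```java
import java.util.concurrent.*;

public class PoolConfigDemo {
    public static void main(String[] args) {
        // Illustrative configuration: 2 core threads, up to 4 threads,
        // idle non-core threads reclaimed after 30 seconds of inactivity,
        // a bounded queue holding 10 tasks, the default thread factory,
        // and the default rejection policy (AbortPolicy).
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 4,
            30L, TimeUnit.SECONDS,
            new ArrayBlockingQueue<Runnable>(10),
            Executors.defaultThreadFactory(),
            new ThreadPoolExecutor.AbortPolicy());

        System.out.println(pool.getCorePoolSize());    // 2
        System.out.println(pool.getMaximumPoolSize()); // 4
        pool.shutdown();
    }
}
```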

The configuration of many of these parameters affects the others. For example, an improperly configured task waiting queue (workQueue) may cause the threads in the pool to never grow to the configured maximum thread count (maximumPoolSize).

2.2 The growth strategy of threads in the thread pool

As you can see, the growth strategy of the thread pool's threads is related to three parameters:

  • corePoolSize: the number of core threads;
  • maximumPoolSize: the maximum number of threads;
  • workQueue: the task waiting queue.

The relationship between them is illustrated in the figure below.

Next we look at the ideal strategy for the growth of threads in the thread pool.

By default, the thread pool is initially empty. When a new task arrives, the pool creates a thread through the thread factory (threadFactory) to process it.

New tasks keep triggering thread creation until the number of threads reaches the core thread count (corePoolSize). After that, thread creation stops and new tasks are placed into the task waiting queue (workQueue).

New tasks continue to enter the waiting queue. Once the queue is full, the pool starts creating threads again to process tasks, until the number of threads reaches the configured maximumPoolSize.

At this point, the number of threads has reached the maximum, there are no idle threads, and the task queue is full. If yet another task arrives, the thread pool's rejection strategy (handler) is triggered; the default policy, AbortPolicy, throws a RejectedExecutionException.

At this point, the growth strategy of the thread is clear, we can understand the complete process through the following figure.

The key piece here is the task waiting queue. Whatever its implementation, the threads in the pool only grow toward the maximum thread count once the queue is full, and a queue can only become full if it is a bounded queue.
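The growth steps above can be observed directly with a bounded queue. This sketch uses small, hypothetical sizes (1 core thread, max 3, queue of 2) and blocks every task on a latch so the pool cannot drain while we count its threads:

```java
import java.util.concurrent.*;

public class GrowthDemo {
    public static void main(String[] args) {
        // Hypothetical sizes: 1 core thread, up to 3 threads, bounded queue of 2.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 3,
            60L, TimeUnit.SECONDS,
            new ArrayBlockingQueue<Runnable>(2));

        CountDownLatch hold = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { hold.await(); } catch (InterruptedException ignored) {}
        };

        // Task 1 creates the core thread; tasks 2 and 3 fill the queue;
        // tasks 4 and 5 find the queue full, so new threads are created
        // up to maximumPoolSize.
        for (int i = 0; i < 5; i++) pool.execute(blocker);

        System.out.println(pool.getPoolSize()); // 3: grown to maximumPoolSize

        hold.countDown();
        pool.shutdown();
    }
}
```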

This is the hidden pit in the example at the beginning of the article. Let's review the thread pool constructed earlier.

public static ExecutorService newThreadPool() {
  return new ThreadPoolExecutor(
    30, 60,
    60L, TimeUnit.MILLISECONDS,
    new LinkedBlockingQueue<Runnable>());
}

Although the maximum thread count is greater than the core thread count, the waiting queue is a LinkedBlockingQueue. As the name suggests, this is a blocking queue based on a linked list, and when its default constructor is used, its capacity is set to Integer.MAX_VALUE, which can simply be understood as an unbounded queue.

public LinkedBlockingQueue() {
  this(Integer.MAX_VALUE);
}

public LinkedBlockingQueue(int capacity) {
  if (capacity <= 0) throw new IllegalArgumentException();
  this.capacity = capacity;
  last = head = new Node<E>(null);
}

That is why the maximumPoolSize configured for this thread pool can never take effect: its waiting queue is never full, so the number of threads never grows beyond corePoolSize.
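This can be verified with a sketch shaped like the pool at the start of the article, but with smaller, hypothetical numbers (2 core, 4 max). Even with far more pending tasks than threads, the unbounded queue absorbs everything and the pool never grows past corePoolSize:

```java
import java.util.concurrent.*;

public class UnboundedQueueDemo {
    public static void main(String[] args) {
        // Same shape as the opening example, with hypothetical sizes:
        // 2 core threads, max 4, and an unbounded LinkedBlockingQueue.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 4,
            60L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>()); // never "full"

        CountDownLatch hold = new CountDownLatch(1);
        for (int i = 0; i < 100; i++) {
            pool.execute(() -> {
                try { hold.await(); } catch (InterruptedException ignored) {}
            });
        }

        // 100 tasks are pending, yet the pool holds only corePoolSize threads:
        // the unbounded queue is never full, so no extra threads are created.
        System.out.println(pool.getPoolSize()); // 2

        hold.countDown();
        pool.shutdown();
    }
}
```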

2.3 Thread shrinking strategy in the thread pool

The tasks executed in the thread pool always finish eventually. When the pool is then left holding a large number of idle threads, a shrinking strategy reclaims the redundant ones.

The shrinking strategy of threads in the thread pool is related to the following parameters:

  • corePoolSize: number of core threads;
  • maximumPoolSize: the maximum number of threads in the thread pool;
  • keepAliveTime: how long threads beyond the core thread count may stay idle before being reclaimed;
  • unit: the time unit of keepAliveTime;

We are already familiar with corePoolSize and maximumPoolSize; the other controls here are keepAliveTime, the idle survival time, and unit, its time unit.

When the number of threads in the pool exceeds the core thread count and the task volume drops, some threads will inevitably sit idle with no tasks to execute. If a thread's idle time exceeds the duration configured by keepAliveTime and unit, it is reclaimed.

It should be noted that the thread pool only manages threads; the threads it creates are not themselves marked as "core" or "non-core". The pool only tracks the total number of threads, and once that number drops to corePoolSize, the recycling stops.

Threads within the core thread count can also be reclaimed. You can enable this via the allowCoreThreadTimeOut(true) method: once an idle core thread exceeds the duration configured by keepAliveTime and unit, it too is reclaimed.

public void allowCoreThreadTimeOut(boolean value) {
  if (value && keepAliveTime <= 0)
    throw new IllegalArgumentException("Core threads must have nonzero keep alive times");
  if (value != allowCoreThreadTimeOut) {
    allowCoreThreadTimeOut = value;
    if (value)
      interruptIdleWorkers();
  }
}

The precondition for calling allowCoreThreadTimeOut(true) is that keepAliveTime must be greater than 0.
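Both the precondition and the flag itself can be exercised in a small sketch (pool sizes hypothetical). A zero keepAliveTime makes the call throw, as the source of allowCoreThreadTimeOut above shows; with a positive keepAliveTime the flag is accepted:

```java
import java.util.concurrent.*;

public class CoreTimeoutDemo {
    public static void main(String[] args) {
        // With keepAliveTime == 0, enabling core-thread timeout is illegal.
        ThreadPoolExecutor zeroKeepAlive = new ThreadPoolExecutor(
            2, 4, 0L, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(10));
        try {
            zeroKeepAlive.allowCoreThreadTimeOut(true);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: keepAliveTime is 0");
        }
        zeroKeepAlive.shutdown();

        // With a positive keepAliveTime the flag can be enabled,
        // and idle core threads become eligible for reclamation.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 4, 1L, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(10));
        pool.allowCoreThreadTimeOut(true);
        System.out.println(pool.allowsCoreThreadTimeOut()); // true
        pool.shutdown();
    }
}
```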

2.4 Filling in the gaps

1. The waiting queue also affects the rejection strategy

If the waiting queue is configured as an unbounded queue, it not only prevents the thread count from growing from the core thread count to the maximum, but also means the configured rejection strategy will never be executed.

That is because the rejection strategy only takes effect when the number of worker threads has reached the maximum thread count (maximumPoolSize) and the waiting queue is full at the same time.
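With a bounded queue, the rejection path is easy to reach in a sketch. Here, hypothetical sizes of 1 thread maximum and a queue of 1 mean the third task must be rejected; a custom handler counts rejections instead of throwing:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class RejectionDemo {
    public static void main(String[] args) {
        // 1 thread max, queue of 1: the 3rd task cannot be accepted.
        AtomicInteger rejected = new AtomicInteger();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            1, 1, 0L, TimeUnit.SECONDS,
            new ArrayBlockingQueue<Runnable>(1),
            (task, executor) -> rejected.incrementAndGet()); // custom handler

        CountDownLatch hold = new CountDownLatch(1);
        for (int i = 0; i < 3; i++) {
            pool.execute(() -> {
                try { hold.await(); } catch (InterruptedException ignored) {}
            });
        }
        // Task 1 occupies the only thread, task 2 fills the queue,
        // task 3 triggers the RejectedExecutionHandler.
        System.out.println(rejected.get()); // 1

        hold.countDown();
        pool.shutdown();
    }
}
```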

2. The core threads can be "warmed up"

As mentioned earlier, by default the threads in the pool grow as tasks arrive. But if needed, we can also prepare the pool's core threads in advance to handle sudden bursts of highly concurrent tasks; flash-sale (panic-buying) systems often have exactly this need.

In that case, you can call prestartCoreThread() or prestartAllCoreThreads() to create core threads ahead of time; this is what is meant by "warming up".
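A quick sketch of warming up, with hypothetical sizes (4 core threads, max 8): prestartCoreThread() starts one core thread per call, while prestartAllCoreThreads() starts all the remaining ones at once:

```java
import java.util.concurrent.*;

public class WarmUpDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            4, 8, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(100));

        System.out.println(pool.getPoolSize()); // 0: empty before warm-up

        // Start a single core thread...
        pool.prestartCoreThread();
        System.out.println(pool.getPoolSize()); // 1

        // ...or start all remaining core threads at once.
        pool.prestartAllCoreThreads();
        System.out.println(pool.getPoolSize()); // 4

        pool.shutdown();
    }
}
```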

3. What should I do in scenarios that require an unbounded queue?

Requirements vary, and we will certainly run into scenarios that call for an unbounded queue; in such a scenario, the configured maximumPoolSize is ineffective.

In this case, you can follow the way Executors.newFixedThreadPool() creates its thread pool, and keep corePoolSize and maximumPoolSize the same.

public static ExecutorService newFixedThreadPool(int nThreads) {
  return new ThreadPoolExecutor(
    nThreads, nThreads,
    0L, TimeUnit.MILLISECONDS,
    new LinkedBlockingQueue<Runnable>());
}

Here the core thread count equals the maximum thread count. Only after this number of threads is reached are tasks placed into the waiting queue, which guarantees that all of the threads we configured are actually used.

4. Is the thread pool fair?

The so-called fairness means that the first-come tasks will be executed first. This is obviously unfair in the thread pool.

Setting aside the fact that the pool's threads are scheduled by the operating system, which already makes the execution order of tasks impossible to guarantee, the pool is unfair even when judged purely by the order in which tasks are submitted.

First, once all the core threads are occupied, new tasks enter the task queue. If the queue is full, subsequent tasks are handled directly by newly created threads, until the thread count reaches the maximum.

So although the tasks in the queue were submitted to the pool earlier, they are processed later than the tasks handed to the newly created threads. From the tasks' point of view, this is still unfair.

3. Summary moment

In this article, we talked about the strategy of increasing and shrinking the number of threads in the thread pool.

Here we briefly summarize:

1. Growth strategy. By default, the thread pool creates core threads as tasks arrive to execute them; once the core threads are all busy, new tasks are placed into the waiting queue. When the queue is full, the pool resumes creating threads until the maximum thread count is reached. Beyond that, new tasks can only trigger the rejection strategy, which by default throws an exception.

2. Shrinkage strategy. When the thread count is greater than the core thread count, and there are idle threads whose idle time exceeds keepAliveTime, those idle threads are reclaimed until the thread count equals the core thread count.

In short, remember to use unbounded queues with caution.

Origin blog.csdn.net/Coo123_/article/details/104538269