[JUC Series] Thoroughly Understanding the Thread Pool

The purposes of using a thread pool:

  • Reduce resource consumption: creating and destroying threads consumes system resources
  • Improve response speed: creating and destroying threads takes time
  • Centralized management: threads are managed in one place, which prevents the abuse of multi-threading

The "Alibaba Java Development Manual" states that thread resources must be provided through a thread pool, and threads must not be created explicitly in application code. It also forbids creating thread pools via Executors: they must be created directly through ThreadPoolExecutor. The Executors factory methods in the JDK, such as newFixedThreadPool(), newSingleThreadExecutor(), and newCachedThreadPool(), do create thread pools, but they are not flexible enough.

1 Rules for setting the number of threads

Depending on the workload, tasks can be divided into IO-intensive and compute-intensive, and the rule of thumb for sizing the pool differs by type:

  • IO-intensive: the pool can be relatively large, approximately the number of CPU cores * 2.
  • Compute-intensive: the pool should be as small as possible, approximately the number of CPU cores.
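
These rules of thumb can be expressed with Runtime.availableProcessors(). A minimal sketch (the 2x multiplier is the heuristic above, not a hard rule):

```java
public class PoolSizing {
    // compute-intensive: roughly one thread per core
    static int cpuBoundSize(int cores) { return cores; }

    // IO-intensive: roughly twice the core count (heuristic from the rule above)
    static int ioBoundSize(int cores)  { return cores * 2; }

    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("CPU-bound pool size: " + cpuBoundSize(cores));
        System.out.println("IO-bound pool size:  " + ioBoundSize(cores));
    }
}
```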

2 Principle of Thread Pool

The implementation principle of the Java thread pool is actually quite simple: it is a set of worker threads (workerSet) plus a blocking queue (workQueue). When a task is submitted, the pool first puts it into the workQueue, and the threads in the workerSet continuously take tasks from the workQueue and execute them. When the workQueue is empty, workers block until a task becomes available, then take it and continue executing.
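
The workerSet + workQueue mechanism above can be sketched as a minimal pool. This is a simplified, hypothetical model, not the JDK implementation; take() blocks while the queue is empty, which is exactly the blocking behavior described:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Simplified model of the thread-pool principle: workers loop forever,
// taking tasks from a shared blocking queue.
public class MiniPool {
    private final BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<>();

    public MiniPool(int workers) {
        for (int i = 0; i < workers; i++) {
            Thread t = new Thread(() -> {
                try {
                    while (true) {
                        Runnable task = workQueue.take(); // blocks while queue is empty
                        task.run();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();   // worker exits
                }
            });
            t.setDaemon(true);
            t.start();
        }
    }

    public void execute(Runnable task) {
        workQueue.offer(task);
    }
}
```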

3 thread pool process

When a task is submitted to the thread pool, the approximate execution flow is as follows:

  • The pool first checks whether the number of currently running threads is less than corePoolSize. If so, a new worker thread is created to run the task; if all core threads are busy, go to the second step.
  • Check whether the BlockingQueue is full; if it is not full, put the task into the BlockingQueue. Otherwise, go to the third step.
  • Create a new thread, up to maximumPoolSize. If creating another worker thread would push the number of running threads beyond maximumPoolSize, the task is handed to the RejectedExecutionHandler.
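
The three steps above can be sketched as a single decision function. This is illustrative only: the names and simplified structure are ours, not the real JDK internals of ThreadPoolExecutor.execute():

```java
public class ExecuteFlow {
    // Hypothetical sketch of the decision order when a task is submitted.
    static String decide(int running, int corePoolSize, int maximumPoolSize, boolean queueFull) {
        if (running < corePoolSize)    return "create core thread";     // step 1
        if (!queueFull)                return "enqueue task";           // step 2
        if (running < maximumPoolSize) return "create non-core thread"; // step 3
        return "reject";                       // handed to RejectedExecutionHandler
    }
}
```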

4 Thread pool parameters

  • corePoolSize: the number of core threads. When a task is submitted, if the number of threads in the pool is less than corePoolSize, a new thread is created to run it, until the thread count equals corePoolSize. Once the count reaches corePoolSize, newly submitted tasks are placed in the blocking queue to wait for execution. If the pool's prestartAllCoreThreads() method is called, the pool creates and starts all core threads in advance.
  • workQueue: the blocking queue used to hold tasks waiting to be executed. The JDK provides, among others, the following blocking queues:
    • ArrayBlockingQueue: a bounded blocking queue backed by an array; tasks are ordered FIFO.
    • LinkedBlockingQueue: a blocking queue backed by a linked list; tasks are ordered FIFO and throughput is usually high. When no capacity is specified, it defaults to Integer.MAX_VALUE (effectively unbounded).
    • SynchronousQueue: a blocking queue that holds no elements; each insert operation must wait for another thread to take the element, otherwise the insert blocks. Throughput is usually higher than LinkedBlockingQueue.
    • PriorityBlockingQueue: an unbounded blocking queue with priority ordering.
    • DelayQueue: similar to PriorityBlockingQueue, an unbounded priority blocking queue implemented with a binary heap. All elements must implement the Delayed interface; an element can only be taken from the queue once its delay has expired.
  • maximumPoolSize: the maximum number of threads allowed in the pool. If the blocking queue is full and tasks keep being submitted, and the current thread count is less than maximumPoolSize, the pool keeps creating threads to run them. When the blocking queue is unbounded, maximumPoolSize has no effect, because tasks that cannot be run by core threads are always placed in the workQueue.
  • keepAliveTime: how long a non-core thread survives while idle, i.e. how long a thread is kept alive when it has no task to execute.
  • unit: the time unit of keepAliveTime.
  • threadFactory: the factory for creating threads. A custom thread factory can give each newly created thread a recognizable name. The default is DefaultThreadFactory.
  • handler: the saturation (rejection) policy. When the blocking queue is full, the thread count has reached maximumPoolSize, and no thread is idle, newly submitted tasks must be handled by some policy. The thread pool provides four built-in policies:
    • AbortPolicy: throw a RejectedExecutionException directly; this is the default;
    • CallerRunsPolicy: run the task on the caller's own thread;
    • DiscardOldestPolicy: discard the oldest task at the head of the queue and retry the current task;
    • DiscardPolicy: silently discard the task.
      We can also implement the RejectedExecutionHandler interface to customize the saturation policy for the actual application scenario, for example logging rejected tasks or persisting them for later processing.
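
A custom saturation policy along the lines suggested above might look like this. It is a sketch: logging to stderr is our illustrative choice, and in practice the task could instead be persisted and re-submitted:

```java
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

// Custom rejection policy: log the rejected task instead of throwing.
public class LoggingRejectedHandler implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        System.err.println("Task rejected, pool saturated: " + r);
        // the task could also be persisted here and re-submitted later
    }
}
```

It is passed as the last argument of the ThreadPoolExecutor constructor, in place of one of the four built-in policies.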

5 Executors source code analysis

FixedThreadPool (fixed-length thread pool)


Features:

  • A LinkedBlockingQueue with no capacity specified is used; corePoolSize equals maximumPoolSize

Drawbacks:

  • Because the queue capacity defaults to Integer.MAX_VALUE, tasks can pile up without bound, consuming a lot of memory and even causing OOM
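
For reference (since the source screenshot is missing), in JDK 8 Executors.newFixedThreadPool(nThreads) boils down to essentially this:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FixedDemo {
    // core == max, so the pool size is fixed; the queue is unbounded by default
    static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }
}
```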

ScheduledThreadPool (timed thread pool)


Features:

  • An unbounded DelayedWorkQueue is used, which orders tasks by their scheduled execution time

Drawbacks:

  • Because the queue is unbounded, tasks can pile up, consuming a lot of memory and even causing OOM

CachedThreadPool (cacheable thread pool)


Features:

  • maximumPoolSize is Integer.MAX_VALUE and corePoolSize is 0, with a SynchronousQueue as the work queue. A thread that has been idle for 60s is recycled, so in the extreme case the pool holds no thread resources at all.

Drawbacks:

  • A burst of tasks may create a huge number of threads, even causing OOM
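
For reference, in JDK 8 Executors.newCachedThreadPool() boils down to essentially this:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CachedDemo {
    // corePoolSize 0, unbounded thread count; idle threads die after 60s
    static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }
}
```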

SingleThreadExecutor (single-thread thread pool)


Features:

  • There is only one core thread. If that thread dies due to an exception, a new thread is created to continue the tasks. The single thread guarantees that submitted tasks execute sequentially. An unbounded LinkedBlockingQueue is used.

Drawbacks:

  • Because the queue is unbounded, SingleThreadExecutor never rejects a task, i.e. the saturation policy never fires, and tasks can pile up without bound
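
For reference, the ThreadPoolExecutor underlying Executors.newSingleThreadExecutor() is configured like this (the JDK additionally wraps it in a non-reconfigurable delegate):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SingleDemo {
    // one core thread, one max thread, unbounded queue
    static ExecutorService singleThreadPool() {
        return new ThreadPoolExecutor(1, 1,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }
}
```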

6 custom thread pool

Looking at the source code, we found that although four thread pool implementations are provided, each has drawbacks and leaves little room for customization. This is why Alibaba recommends against using the Executors factory methods. Below is a custom thread pool:

Description:

  • Number of core threads: 5
  • Maximum number of threads: 9
  • Non-core thread idle survival time: 20s
  • Task queue: ArrayBlockingQueue with length 1
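
Since the original code screenshot is missing, here is a sketch matching the parameters listed above (the class and method names are ours):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPoolDemo {
    static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                5,                            // corePoolSize
                9,                            // maximumPoolSize
                20, TimeUnit.SECONDS,         // idle keep-alive for non-core threads
                new ArrayBlockingQueue<>(1),  // bounded task queue of length 1
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.AbortPolicy());
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = newPool();
        for (int i = 1; i <= 10; i++) {       // submit 10 tasks of ~10s each
            final int id = i;
            pool.execute(() -> {
                System.out.println(Thread.currentThread().getName() + " runs task " + id);
                try { Thread.sleep(10_000); } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
    }
}
```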

Description:
Execute 10 tasks in a row; each task takes 10s to run.

Effect:
First, the 5 core threads are used, and the 6th task is placed in the task queue. Because the queue length is 1, each subsequent task triggers a check against the maximum thread count, so tasks 7 to 10 cause the pool to grow to its maximum of 9 threads. After 10s a thread becomes idle and then executes the 6th task.

Origin blog.csdn.net/qq_41979344/article/details/113483681