The correct way to use the Java thread pool (repost)

Prerequisites

  1. jdk == 1.8

The hidden dangers of using Executors

Let's first look at a piece of code. Suppose we want to create a fixed thread pool with 4 threads. The code looks like this:
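A minimal sketch of that snippet (the class and field names here are placeholders):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FixedPoolExample {
    // One line is all it takes: a fixed thread pool with 4 worker threads.
    static final ExecutorService EXECUTOR = Executors.newFixedThreadPool(4);
}
```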

Executors is a utility class provided in the java.util.concurrent package for quickly creating different types of thread pools.

Isn't it simple? Just one line of code creates a thread pool. For personal or temporary projects this is genuinely fine, and development is very fast. But in larger projects this practice is generally prohibited.

WHY???

Because thread pools created through Executors carry performance risks, which we can see by looking at the source code. When Executors creates a thread pool, the work queue it uses, new LinkedBlockingQueue<Runnable>(), is an unbounded queue. If tasks keep being added faster than they are processed, memory usage grows without limit. In other words, because an unbounded queue is used, memory consumption in the project becomes uncontrollable; a heap monitoring graph of such a pool shows the old generation gradually filling up as thread tasks keep being added.
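For reference, this is essentially what java.util.concurrent.Executors does in JDK 8 (comment added):

```java
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  // unbounded queue: pending tasks can pile up without limit
                                  new LinkedBlockingQueue<Runnable>());
}
```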

Of course, besides the memory problem it has some other drawbacks as well, which are covered in the parameter descriptions below.

The correct way to create a thread pool

In fact, the problem is easy to solve. Since the convenience methods have limitations, we build a ThreadPoolExecutor ourselves; it only takes a few more lines of code.

The ThreadPoolExecutor constructor looks like this:
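```java
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler)
```

(This is the full seven-argument form; shorter overloads exist that fall back to the default thread factory and the default AbortPolicy rejection handler.)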

Parameter Description:

  • corePoolSize: the number of core threads;
  • maximumPoolSize: the maximum number of threads allowed in the thread pool;
  • keepAliveTime: the thread keep-alive time; a thread beyond the core count is destroyed once it has been idle for keepAliveTime;
  • unit: the time unit of keepAliveTime;
  • workQueue: the work queue holding tasks waiting to be executed;
  • threadFactory: the factory used to create threads, useful for naming and distinguishing threads created by different thread pools;
  • handler: the rejection logic applied when the work queue is full and the maximum number of threads has been reached.

Specific code

  • Custom ThreadFactory. Besides giving the threads created in newThread(Runnable r) custom names to make troubleshooting easier, the factory can also apply other custom settings when creating a thread, such as attaching a specific context to it.

  • Custom RejectedExecutionHandler. It can record exception information and choose among different handling strategies: run the task on the calling thread, throw an exception directly, wait and then retry adding the task, and so on.

  • Create the custom thread pool itself (a combined sketch of all three pieces follows).
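A minimal combined sketch of the three pieces above, assuming a simple naming ThreadFactory and a log-then-run-on-caller rejection policy; the class and method names are made up for illustration, and the parameter values (core 4, max 8, keep-alive 10s, queue capacity 10) match the walkthrough in the next section:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class CustomThreadPool {

    // Custom ThreadFactory: names each thread so it is easy to spot in thread dumps.
    static class NamedThreadFactory implements ThreadFactory {
        private final AtomicInteger counter = new AtomicInteger(1);
        private final String prefix;

        NamedThreadFactory(String prefix) {
            this.prefix = prefix;
        }

        @Override
        public Thread newThread(Runnable r) {
            // Other custom settings (daemon flag, priority, context) could also go here.
            return new Thread(r, prefix + "-" + counter.getAndIncrement());
        }
    }

    // Custom RejectedExecutionHandler: log the rejection, then run the task on the caller.
    static class LogAndRunOnCallerPolicy implements RejectedExecutionHandler {
        @Override
        public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
            System.err.println("Task rejected, pool is saturated: " + executor);
            if (!executor.isShutdown()) {
                r.run(); // fall back to executing in the submitting thread
            }
        }
    }

    // Create the custom thread pool: core 4, max 8, keep-alive 10s, bounded queue of 10.
    public static ThreadPoolExecutor create() {
        return new ThreadPoolExecutor(
                4, 8,
                10, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(10),
                new NamedThreadFactory("biz-pool"),
                new LogAndRunOnCallerPolicy());
    }
}
```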

Thread pool internal processing logic

Let's walk through the pool's internal processing logic with some examples. Based on the specific code above, we have a thread pool with a core thread count of 4, a maximum thread count of 8, a keep-alive time of 10s, and a work queue capacity of 10. (A small demo that reproduces these numbers follows the list below.)

  • Initializing the thread pool: no thread task added yet

    • No threads have been created in the pool. Live threads: 0, work queue: 0.
  • Core thread count not reached: add 4 thread tasks

    • Since the current number of live threads is less than the core thread count, a new thread is created for each task. Live threads: 4, work queue: 0.
  • Core thread count is full: add a 5th thread task

    • If there is an idle thread in the pool, the task is handed to that thread. Live threads: 4, work queue: 0.
    • If all threads are currently busy running tasks, the task joins the work queue. Live threads: 4, work queue: 1. (Note: a task in the work queue is not executed until a thread becomes idle.)
  • The work queue is not full: assuming the added tasks are all long-running operations (they will not finish any time soon), add 9 more such tasks

    • Live threads: 4, work queue: 10.
  • Work queue full & max threads not reached: add 4 more tasks

    • When the work queue is full and there are no idle threads, additional threads are created to handle the new tasks. Live threads: 8, work queue: 10.
  • Work queue full & max threads full: add 1 more task

    • The RejectedExecutionHandler is triggered and the task is handed to the configured rejection handler. Live threads: 8, work queue: 10.
  • After the tasks finish and no new ones arrive, the temporarily added threads (those beyond the core count) are destroyed after 10s (keepAliveTime).
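A small demo sketch, assuming the same configuration (core 4, max 8, keep-alive 10s, queue capacity 10), that reproduces these numbers by submitting long-running tasks and printing the pool state after each submission; the class name and sleep duration are arbitrary:

```java
import java.util.concurrent.*;

public class PoolBehaviorDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 8, 10, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(10),
                Executors.defaultThreadFactory(),
                (r, executor) -> System.err.println("rejected: " + r));

        Runnable longTask = () -> {
            try {
                Thread.sleep(60_000); // simulate a long-running task
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        // Tasks 1-4 create the core threads; tasks 5-14 fill the queue;
        // tasks 15-18 grow the pool to 8 threads; a 19th task is rejected.
        for (int i = 1; i <= 18; i++) {
            pool.execute(longTask);
            System.out.printf("after task %2d: live threads = %d, queue size = %d%n",
                    i, pool.getPoolSize(), pool.getQueue().size());
        }
        pool.execute(longTask); // 19th task: triggers the rejection handler

        pool.shutdownNow();
    }
}
```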

Summary

Finally, when we use a thread pool, we need to choose its configuration according to the usage scenario: the combination of corePoolSize and maximumPoolSize, the keep-alive time, and the choice of work queue. For example, scheduled tasks can be implemented by switching to a delay queue. Some of the methods Executors provides in the concurrent package really are convenient, but we should use them with reservations, so as not to dig too many holes in the project.
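As an illustration of the delay-queue point: the JDK's own ScheduledThreadPoolExecutor extends ThreadPoolExecutor and pairs it with an internal delayed work queue. A minimal usage sketch (the task and timings are made up):

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ScheduledPoolDemo {
    public static void main(String[] args) {
        // ScheduledThreadPoolExecutor = ThreadPoolExecutor + a delayed work queue.
        ScheduledThreadPoolExecutor scheduler = new ScheduledThreadPoolExecutor(2);

        // Run a task every 5 seconds, starting after a 1-second delay.
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("tick " + System.currentTimeMillis()),
                1, 5, TimeUnit.SECONDS);
    }
}
```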

Extension

For long-running IO tasks, blindly throwing them at a thread pool is often not the best solution. Asynchronous IO plus single-threaded polling, with a fixed thread pool cooperating at the upper layer, may work better. This is similar to the selector-based polling in the Reactor model.
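A schematic sketch of that idea, not a full Reactor: completed IO events are modeled here as a plain BlockingQueue, a single poller thread waits on it, and the actual processing is handed to a fixed pool. All names and types below are illustrative stand-ins:

```java
import java.util.concurrent.*;

public class SinglePollerSketch {
    // Stand-in for completed IO events delivered by an async IO layer.
    private final BlockingQueue<String> completedIo = new LinkedBlockingQueue<>();
    // Upper-layer workers that do the actual processing.
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    public void start() {
        // A single thread polls for ready events, similar to a Reactor's selector loop,
        // and dispatches each event to the worker pool.
        Thread poller = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    String event = completedIo.take();
                    workers.execute(() -> handle(event));
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "io-poller");
        poller.start();
    }

    private void handle(String event) {
        System.out.println("processing " + event + " on " + Thread.currentThread().getName());
    }

    public void submitIoResult(String event) {
        completedIo.offer(event);
    }
}
```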

 
