Scalable Java thread pool executor

Sharing some study notes from a recent lesson.

Java thread pool executors tend to queue tasks instead of creating new threads. Fortunately, there are several ways to work around this.

Ideally, any thread pool that executes a program's tasks should behave as follows (a constructor sketch follows this list):

  • Create an initial set of threads (the core pool size) up front to handle the load.
  • If the load increases, create more threads, up to the maximum number of threads (the maximum pool size), to handle it.
  • If the number of threads would otherwise grow beyond the maximum pool size, queue the tasks instead.
  • If a bounded queue is used and the queue is full, apply a rejection policy.
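
For reference, here is how those knobs map onto the standard java.util.concurrent.ThreadPoolExecutor constructor. This is a minimal sketch: the sizes, queue capacity, and rejection policy are arbitrary example values, and, as discussed below, the stock executor does not actually grow its thread count in the order the list describes.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class PoolKnobs {
    static ThreadPoolExecutor create() {
        return new ThreadPoolExecutor(
                10,                                   // core pool size: initial set of threads
                50,                                   // maximum pool size: upper bound under load
                60L, TimeUnit.SECONDS,                // how long idle extra threads are kept alive
                new ArrayBlockingQueue<>(1000),       // bounded task queue
                new ThreadPoolExecutor.AbortPolicy()  // rejection policy when pool and queue are full
        );
    }
}
```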

The following diagram depicts this process: when the load is low, only the initial core threads are created to handle the tasks.

[Diagram: low load, only the initial core threads handle the tasks]

As more tasks come in, and assuming the total number of threads created is still below the maximum pool size, more threads are created to handle the load (the task queue is still empty).

[Diagram: more tasks arrive, additional threads are created up to the maximum pool size, queue still empty]

If the total number of tasks exceeds the total number of threads (initial plus extended), the task queue begins to fill:

[Diagram: tasks outnumber threads, the task queue starts to fill]

Unfortunately, the Java ThreadPoolExecutor (TPE) tends to queue tasks instead of creating new threads. That is, once the initial core threads are occupied, tasks are added to the queue, and only after the queue reaches its limit (which can happen only with a bounded queue) are additional threads created. If the queue is unbounded, the extra threads are never created at all, as shown in the diagram below.

[Diagram: default ThreadPoolExecutor behavior, tasks are queued before any extra threads are created]

  1. The initial core threads are created to handle the load.
  2. Once the number of tasks exceeds the number of core threads, the queue begins to fill with tasks.
  3. Only after the queue fills up are the additional (extended) threads created.

This is the part of the TPE code where the problem lies:

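The snippet below is an abridged paraphrase of ThreadPoolExecutor.execute() from OpenJDK (the exact source varies by JDK version); it shows that the queue's offer() is tried before any non-core thread is started.

```java
// Abridged from java.util.concurrent.ThreadPoolExecutor#execute (OpenJDK).
public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
    int c = ctl.get();
    // 1. Fewer than corePoolSize workers: start a new core thread for the task.
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    // 2. Otherwise, try to queue the task first.
    if (isRunning(c) && workQueue.offer(command)) {
        int recheck = ctl.get();
        if (!isRunning(recheck) && remove(command))
            reject(command);
        else if (workerCountOf(recheck) == 0)
            addWorker(null, false);
    }
    // 3. Only if offer() returned false (i.e. a full bounded queue) is an
    //    extra, non-core thread created; if that also fails, reject the task.
    else if (!addWorker(command, false))
        reject(command);
}
```

Notice that step 2 always wins while the queue has capacity, which is why an unbounded queue prevents the pool from ever growing past the core size.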

There are several possible solutions:

Solution 1: Adjust the pool size

Set corePoolSize and maximumPoolSize to the same value, and set allowCoreThreadTimeOut to true.
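
A minimal sketch of this configuration (the pool size of 100 and the 60-second keep-alive are arbitrary example values):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class SameSizePool {
    static ThreadPoolExecutor create() {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                100, 100,                      // corePoolSize == maximumPoolSize
                60L, TimeUnit.SECONDS,         // idle threads terminate after 60 seconds
                new LinkedBlockingQueue<>());  // tasks queue once all 100 threads are busy
        executor.allowCoreThreadTimeOut(true); // without this, the "core" threads never time out
        return executor;
    }
}
```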

Advantages

  • No custom coding required.

Disadvantages

  • Because threads are created and terminated very frequently, there is no real thread caching.
  • It does not scale gracefully.

Solution 2: Override the offer method

  • Override the offer method of the delegated TransferQueue and try to hand the task directly to one of the idle worker threads; if no thread is waiting, return false.
  • Implement a custom RejectedExecutionHandler that always adds the task to the queue (a sketch combining both steps follows this list).

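A minimal sketch of this approach, assuming a ThreadPoolExecutor backed by a LinkedTransferQueue (the class name and pool parameters are illustrative):

```java
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class TransferOfferPool {
    static ThreadPoolExecutor create(int core, int max) {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                core, max, 60L, TimeUnit.SECONDS,
                new LinkedTransferQueue<Runnable>() {
                    @Override
                    public boolean offer(Runnable task) {
                        // Hand the task straight to an idle worker if one is waiting;
                        // returning false makes the executor create a new thread instead.
                        return tryTransfer(task);
                    }
                });
        // Once the maximum pool size is reached, the task is "rejected";
        // this handler simply puts it on the queue instead.
        executor.setRejectedExecutionHandler((task, pool) -> {
            try {
                pool.getQueue().put(task);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RejectedExecutionException("Interrupted while queueing task", e);
            }
        });
        return executor;
    }
}
```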

Advantages

  • The TransferQueue ensures that a thread is not created unnecessarily and transfers the work directly to a waiting thread.

Disadvantages

  • A custom rejection handler cannot be used, because it is already being used to insert tasks into the queue.

Solution 3: Use a custom queue

Use a custom queue (a TransferQueue) and override its offer method to do the following (a sketch follows this list):

  1. Try to transfer the task directly to a waiting thread, if there is one.
  2. If that fails and the maximum pool size has not been reached yet, create an extended thread by returning false from the offer method.
  3. Otherwise, insert the task into the queue.

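A minimal sketch of such a queue, assuming it extends LinkedTransferQueue (the names ScalingQueue and setExecutor are illustrative, not from any library):

```java
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class ScalingQueue extends LinkedTransferQueue<Runnable> {
    private ThreadPoolExecutor executor;

    // Set after the executor is constructed; this back-reference is the
    // cyclic dependency mentioned under "Disadvantages" below.
    void setExecutor(ThreadPoolExecutor executor) {
        this.executor = executor;
    }

    @Override
    public boolean offer(Runnable task) {
        // 1. Transfer the task directly to a waiting worker, if any.
        if (tryTransfer(task)) {
            return true;
        }
        // 2. If the pool can still grow, report "queue full" so the
        //    executor creates an extended thread.
        if (executor.getPoolSize() < executor.getMaximumPoolSize()) {
            return false;
        }
        // 3. Otherwise, queue the task normally.
        return super.offer(task);
    }
}

class ScalingPool {
    static ThreadPoolExecutor create(int core, int max) {
        ScalingQueue queue = new ScalingQueue();
        ThreadPoolExecutor executor =
                new ThreadPoolExecutor(core, max, 60L, TimeUnit.SECONDS, queue);
        queue.setExecutor(executor);
        return executor;
    }
}
```

Unlike Solution 2, the rejection handler is not needed for queueing here, so a custom RejectedExecutionHandler can still be used.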

Advantages

  • The TransferQueue ensures that a thread is not created unnecessarily and transfers the work directly to a waiting thread.
  • A custom rejection handler can be used.

Disadvantages

  • There is a cyclic dependency between the queue and the executor.

Solution 4: Use a custom thread pool executor

Use a custom thread pool executor written specifically for this purpose. It uses LIFO scheduling, as described in Systems @Facebook Scale.
