Java Thread Pools, Part 2: Task Queue Rules

Let's start from a single line of code that creates a thread pool with a fixed number of threads:

newFixedThreadPool = Executors.newFixedThreadPool(THREAD_POOL_SIZE);

Stepping into this method, we find it is simply a ThreadPoolExecutor created with a set of default arguments. In fact the constructor takes quite a few parameters, and they deserve a closer look:

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
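For context, here is a minimal usage sketch of such a fixed pool (the value of THREAD_POOL_SIZE and the dummy tasks are placeholders of my own):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FixedPoolDemo {
    private static final int THREAD_POOL_SIZE = 10; // placeholder value

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(THREAD_POOL_SIZE);
        for (int i = 0; i < 100; i++) {
            final int taskId = i;
            // tasks beyond the 10 running ones wait in the pool's internal queue
            pool.execute(() -> System.out.println(
                    "task " + taskId + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();                              // stop accepting new tasks
        pool.awaitTermination(10, TimeUnit.SECONDS);  // let queued tasks drain
    }
}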

So let's illustrate the ThreadPoolExecutor parameters with an example:

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue)

corePoolSize == 10: the pool maintains 10 worker threads; once created, they are kept alive even when no work is coming in at all, or when far more work is queued than they can handle.

maximumPoolSize == 20: the pool can grow to at most 20 threads and no more :) Whether it ever grows past 10, though, depends on the parameters below.

 

keepAliveTime + TimeUnit == 200ms: when the pool has grown beyond the 10 core threads because they could not keep up, any of the extra threads that then sits idle for 200 ms is terminated. In other words, this is an idle timeout for the threads above corePoolSize, not a timeout for queued tasks.
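Putting the example values together, a minimal sketch of the five-argument constructor might look like this (the ArrayBlockingQueue and its capacity of 100 are assumptions of mine; the queue choices themselves are discussed next):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfigSketch {
    // 10 core threads are always kept; up to 10 extra threads are created only
    // when the queue (assumed capacity 100) is full; an extra thread that sits
    // idle for 200 ms is reclaimed.
    static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                10,                          // corePoolSize
                20,                          // maximumPoolSize
                200L, TimeUnit.MILLISECONDS, // keepAliveTime + unit
                new ArrayBlockingQueue<Runnable>(100));
    }
}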

 

BlockingQueue: the queuing policy. There are three kinds:

 

1. Direct handoff (SynchronousQueue): a zero-capacity queue; a task is handed straight to a thread and processed immediately. When no thread is free, a new thread is started to handle the task, which is exactly how the pool breaks past corePoolSize on its way toward maximumPoolSize. If even the maximum number of threads cannot keep up, tasks are rejected outright! For this reason the JDK recommends setting maximumPoolSize as large as practical. (A small demo follows the quoted javadoc below.)

 

  • Direct handoffs. A good default choice for a work queue is a SynchronousQueue that hands off tasks to threads without otherwise holding them. Here, an attempt to queue a task will fail if no threads are immediately available to run it, so a new thread will be constructed. This policy avoids lockups when handling sets of requests that might have internal dependencies. Direct handoffs generally require unbounded maximumPoolSizes to avoid rejection of new submitted tasks. This in turn admits the possibility of unbounded thread growth when commands continue to arrive on average faster than they can be processed. 
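A quick sketch of the direct-handoff behavior, using deliberately tiny numbers (corePoolSize = 1, maximumPoolSize = 2 and the latch-based blocking tasks are all my own choices for the demo):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class DirectHandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 200L, TimeUnit.MILLISECONDS,
                new SynchronousQueue<Runnable>());
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };

        pool.execute(blocker); // runs on thread 1 (the core thread)
        pool.execute(blocker); // no free thread, handoff fails -> thread 2 created beyond core
        try {
            pool.execute(blocker); // pool already at maximumPoolSize -> rejected
        } catch (RejectedExecutionException e) {
            System.out.println("third task rejected");
        }

        release.countDown();   // let the two blocked tasks finish
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}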

2. Unbounded queue (LinkedBlockingQueue): the pool keeps corePoolSize threads working off the queue; when they cannot keep up, new tasks simply pile up in the queue without limit. Because the queue never refuses a task, no more than corePoolSize threads are ever created, and maximumPoolSize (and with it the keep-alive timeout) has no effect. (A small demo follows the quoted javadoc below.)

Unbounded queues. Using an unbounded queue (for example a LinkedBlockingQueue without a predefined capacity) will cause new tasks to wait in the queue when all corePoolSize threads are busy. Thus, no more than corePoolSize threads will ever be created. (And the value of the maximumPoolSize therefore doesn't have any effect.) This may be appropriate when each task is completely independent of others, so tasks cannot affect each others execution; for example, in a web page server. While this style of queuing can be useful in smoothing out transient bursts of requests, it admits the possibility of unbounded work queue growth when commands continue to arrive on average faster than they can be processed.
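The corresponding behavior for an unbounded queue, again with small made-up numbers (corePoolSize = 2, maximumPoolSize = 10): the pool never grows past the 2 core threads and everything else queues up.

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // core 2, max 10: with an unbounded queue the maximum is never reached
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 10, 200L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>());
        CountDownLatch release = new CountDownLatch(1);

        for (int i = 0; i < 50; i++) {
            pool.execute(() -> {
                try { release.await(); } catch (InterruptedException ignored) { }
            });
        }
        System.out.println("pool size  = " + pool.getPoolSize());     // 2
        System.out.println("queue size = " + pool.getQueue().size()); // 48

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}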

3. Bounded queue (ArrayBlockingQueue): works much like option 2, but lets you cap the system's resource consumption. The catch is that the balance between queue size and pool size is often very hard to strike, especially with tasks that block frequently or run for a long time. (A small demo follows the quoted javadoc below.)

Bounded queues. A bounded queue (for example, an ArrayBlockingQueue) helps prevent resource exhaustion when used with finite maximumPoolSizes, but can be more difficult to tune and control. Queue sizes and maximum pool sizes may be traded off for each other: Using large queues and small pools minimizes CPU usage, OS resources, and context-switching overhead, but can lead to artificially low throughput. If tasks frequently block (for example if they are I/O bound), a system may be able to schedule time for more threads than you otherwise allow. Use of small queues generally requires larger pool sizes, which keeps CPUs busier but may encounter unacceptable scheduling overhead, which also decreases throughput.
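And a sketch of the bounded case (corePoolSize = 2, maximumPoolSize = 4 and a queue capacity of 3 are values I picked just for the demo): the pool fills its core threads first, then the queue, then creates extra threads up to the maximum, and only after that starts rejecting.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 200L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<Runnable>(3));
        CountDownLatch release = new CountDownLatch(1);
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };

        for (int i = 0; i < 7; i++) {
            pool.execute(blocker); // 2 core threads, then 3 queued, then 2 extra threads
        }
        System.out.println("pool size  = " + pool.getPoolSize());     // 4
        System.out.println("queue size = " + pool.getQueue().size()); // 3
        try {
            pool.execute(blocker); // queue full and pool at max -> rejected
        } catch (RejectedExecutionException e) {
            System.out.println("8th task rejected");
        }

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}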

Now let's look more closely at where each of the three queuing strategies fits:

Strategy 1: it avoids lockups when handling sets of requests that may have internal dependencies. When task threads can end up waiting on one another, strategy 1 is the sensible choice.

Strategy 2: this style of queuing is useful for smoothing out transient bursts of requests. When commands arrive, on average, faster than they can be processed, the pool buffers them and turns the work asynchronous. It is a very good fit for web servers.

 

Strategy 3: a conservative, avalanche-proof approach. Use it when it is acceptable for tasks to be held up, and pair it with a relatively large keepAliveTime so the extra threads stay around through a burst. The difficulty is striking the balance between thread scheduling, CPU, and memory, so in practice it probably isn't used that often.

My own takeaways:

1. Thread pools are an excellent tool for making work asynchronous. For web applications with sharp traffic peaks in particular, asynchronize everything you can, above all log writing. For web applications strategy 2 is by far the most common choice, with strategy 3 used occasionally.

2. Both strategy 1 and strategy 2 leave something unbounded (the thread count in strategy 1, the queue in strategy 2), so explosive growth and excessive resource consumption are possible. With a large backlog of tasks sitting in memory, restarting the service becomes painful :) Under these two strategies I therefore suggest running the pooled work as its own standalone service rather than gluing it into the business process; the business process and the pool process can talk over UDP packets or JMS.

3. Going asynchronous does not mean the dequeue rate is allowed to stay below the enqueue rate! The debt has to be repaid sooner or later. So once the work is asynchronous, the second order of business is to optimize how fast the task pool actually executes its tasks.
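One cheap way to check whether the dequeue side is keeping up (a sketch of my own; the one-second sampling interval is arbitrary) is to poll the executor's built-in counters: a queue size that keeps climbing means tasks arrive faster than they are executed.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolMonitor {
    // Logs pool statistics every second; the caller shuts the returned
    // scheduler down when monitoring is no longer needed.
    public static ScheduledExecutorService watch(ThreadPoolExecutor pool) {
        ScheduledExecutorService monitor = Executors.newSingleThreadScheduledExecutor();
        monitor.scheduleAtFixedRate(() -> System.out.printf(
                "queued=%d active=%d completed=%d%n",
                pool.getQueue().size(),
                pool.getActiveCount(),
                pool.getCompletedTaskCount()), 1, 1, TimeUnit.SECONDS);
        return monitor;
    }
}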


Reposted from 723242038.iteye.com/blog/1972781