Advanced knowledge of the Java concurrent thread pool

First, why use a thread pool?

A thread pool limits and manages resources (including the execution of tasks). Each thread pool also maintains some basic statistics, such as the number of tasks that have been completed.

Benefits of using a thread pool:

  • Reduced resource consumption. Reusing already-created threads avoids the cost of repeated thread creation and destruction.
  • Improved response speed. When a task arrives, it can often be executed immediately instead of waiting for a thread to be created.
  • Improved thread manageability. Threads are a scarce resource: creating them without limit not only consumes system resources but also lowers system stability. A thread pool allows threads to be allocated, tuned, and monitored in a unified way.

 

Second, how to create a thread pool?

(1) Creating a thread pool with ThreadPoolExecutor

new ThreadPoolExecutor(corePoolSize, maximumPoolSize, keepAliveTime, unit, runnableTaskQueue, handler);

Creating a thread pool requires the following parameters:

  1. corePoolSize: the basic (core) size of the thread pool.
  2. runnableTaskQueue: the task queue, a blocking queue that holds tasks waiting to be executed. Several implementations are available:
    • ArrayBlockingQueue: a bounded blocking queue backed by an array; its elements are ordered FIFO;
    • LinkedBlockingQueue: a blocking queue backed by a linked list, also FIFO-ordered, with throughput usually higher than ArrayBlockingQueue. The static factory method Executors.newFixedThreadPool() uses this queue;
    • SynchronousQueue: a blocking queue that stores no elements. Each insert operation must wait until another thread performs a corresponding removal; otherwise the insert blocks. Throughput is usually higher than LinkedBlockingQueue. The static factory method Executors.newCachedThreadPool() uses this queue;
    • PriorityBlockingQueue: an unbounded blocking queue with priority ordering.
  3. maximumPoolSize: the maximum number of threads in the pool.
  4. ThreadFactory: the factory used to create threads; a custom thread factory can give each created thread a more meaningful name.
  5. RejectedExecutionHandler (saturation policy): when both the queue and the thread pool are full, the pool is saturated, and a policy must be applied to newly submitted tasks. The default policy is AbortPolicy, which throws an exception for tasks that cannot be handled. Since JDK 1.5 the Java thread pool framework has provided four policies:
    • AbortPolicy: throw an exception directly;
    • CallerRunsPolicy: run the task on the caller's own thread;
    • DiscardOldestPolicy: discard the oldest queued task and execute the current task;
    • DiscardPolicy: silently discard the task.

You can also implement the RejectedExecutionHandler interface to define a custom policy for your application scenario, for example logging or persisting the tasks that cannot be processed.
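As a minimal sketch of such a custom policy (the class name LogAndDiscardPolicy and its counter are illustrative, not part of the JDK), a handler might log and count rejected tasks instead of throwing:

```java
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative custom saturation policy: log the rejected task and discard it,
// keeping a counter so rejections can be monitored.
public class LogAndDiscardPolicy implements RejectedExecutionHandler {
    public final AtomicInteger rejectedCount = new AtomicInteger();

    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        rejectedCount.incrementAndGet();
        // A real application might instead persist the task for later replay.
        System.err.println("Task rejected, pool saturated: " + r);
    }
}
```

An instance is passed as the last constructor argument of ThreadPoolExecutor, replacing the default AbortPolicy.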

  • keepAliveTime: how long an idle worker thread is kept alive. If there are many tasks and each task is short, this value can be increased to improve thread utilization. This parameter takes effect only when the number of threads in the pool is greater than corePoolSize.
  • TimeUnit: the unit of the keep-alive time; the options are days (DAYS), hours (HOURS), minutes (MINUTES), milliseconds (MILLISECONDS), microseconds (MICROSECONDS, a thousandth of a millisecond), and nanoseconds (NANOSECONDS, a thousandth of a microsecond).
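Putting the parameters together, here is a sketch of an explicitly configured pool (the pool sizes and the 100-slot queue are arbitrary illustration values):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExplicitPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                     // corePoolSize
                4,                                     // maximumPoolSize
                60L, TimeUnit.SECONDS,                 // keepAliveTime + its TimeUnit
                new ArrayBlockingQueue<Runnable>(100), // bounded runnableTaskQueue
                Executors.defaultThreadFactory(),      // ThreadFactory
                new ThreadPoolExecutor.AbortPolicy()); // saturation policy (the default)

        for (int i = 0; i < 10; i++) {
            final int id = i;
            pool.execute(() ->
                    System.out.println("task " + id + " ran on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        // One of the pool's built-in statistics mentioned above:
        System.out.println("completed tasks: " + pool.getCompletedTaskCount());
    }
}
```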

 

(2) Creating a thread pool with the Executors factory methods

a、newFixedThreadPool

FixedThreadPool is a thread pool with a fixed number of reusable threads. Source code:

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

  1. If the number of currently running threads is less than corePoolSize, a new thread is created to execute the task.
  2. Once the pool has warmed up (the number of running threads equals corePoolSize), tasks are added to the LinkedBlockingQueue.
  3. After finishing the task from step 1, each thread repeatedly takes tasks from the LinkedBlockingQueue in a loop and executes them.

FixedThreadPool uses the unbounded queue LinkedBlockingQueue as its work queue (the queue capacity is Integer.MAX_VALUE). Using an unbounded queue has the following consequences:

  1. Once the number of threads in the pool reaches corePoolSize, new tasks wait in the unbounded queue, so the number of threads never exceeds corePoolSize.
  2. Because of 1, maximumPoolSize is an ineffective parameter when the queue is unbounded.
  3. Because of 1 and 2, keepAliveTime is also an ineffective parameter when the queue is unbounded.
  4. Because the queue is unbounded, a running FixedThreadPool never rejects tasks.
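A small sketch illustrating point 1: even with many more tasks than threads, newFixedThreadPool(2) never runs more than 2 worker threads (the task count of 20 is arbitrary):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FixedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        Set<String> workerNames = ConcurrentHashMap.newKeySet();

        // Submit far more tasks than threads; the extras wait in the unbounded queue.
        for (int i = 0; i < 20; i++) {
            pool.execute(() -> workerNames.add(Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        // At most 2 distinct worker threads ever ran the 20 tasks.
        System.out.println("distinct worker threads: " + workerNames.size());
    }
}
```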

b、newSingleThreadExecutor(): single-thread thread pool

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}

SingleThreadExecutor is an Executor that uses a single worker thread, so the number of threads running in the pool is always 1. Its workQueue is the unbounded LinkedBlockingQueue, so no matter how many tasks arrive they simply queue up: the next task runs only after the previous one finishes. From this angle the second parameter, maximumPoolSize, is meaningless: maximumPoolSize only matters when more tasks queue up than the workQueue can hold, and with an unbounded workQueue the queued tasks can never exceed its capacity, so whatever value maximumPoolSize is set to makes no difference.

  1. If the number of currently running threads is less than corePoolSize (i.e., no thread is running in the pool), a new thread is created to execute the task.
  2. Once the pool has warmed up (the number of running threads equals corePoolSize), tasks are added to the LinkedBlockingQueue.
  3. After finishing the task from step 1, the thread repeatedly takes tasks from the LinkedBlockingQueue in an endless loop.

c、newCachedThreadPool

CachedThreadPool is a thread pool that creates new threads as needed. Source code:

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}

corePoolSize is set to 0 and maximumPoolSize is set to Integer.MAX_VALUE, i.e., the maximum pool is unbounded. keepAliveTime is set to 60L, meaning an idle thread waits at most 60 seconds for a new task before being terminated.

CachedThreadPool uses a SynchronousQueue, which has no capacity, as its work queue, while the maximum pool is unbounded. This means that if the main thread submits tasks faster than the threads in the pool can process them, CachedThreadPool keeps creating new threads. In the extreme case, CachedThreadPool can exhaust CPU and memory by creating too many threads.

  1. First SynchronousQueue.offer(Runnable task) is executed. If an idle thread in the maximum pool is currently executing SynchronousQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS), the main thread's offer pairs with the idle thread's poll: the task is handed to the idle thread and execute() completes. Otherwise, go to step 2.
  2. When the maximum pool is initially empty, or contains no idle thread, no thread is executing SynchronousQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS), so step 1 fails. In this case CachedThreadPool creates a new thread to execute the task, and execute() completes.
  3. After the thread created in step 2 finishes its task, it executes SynchronousQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS). This poll lets the now-idle thread wait in the SynchronousQueue for at most 60 seconds. If the main thread submits a new task within 60 seconds (i.e., performs step 1), the idle thread executes it; otherwise the idle thread terminates. Because threads idle for 60 seconds are terminated, a CachedThreadPool that stays idle for a long time holds no resources.
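A sketch of the step-2 behaviour: when every worker is busy, each new submission fails the SynchronousQueue handoff and forces a new thread (the count of 4 tasks is arbitrary; the latches only make the timing deterministic):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CachedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newCachedThreadPool();
        CountDownLatch release = new CountDownLatch(1);
        CountDownLatch started = new CountDownLatch(4);
        Set<String> workerNames = ConcurrentHashMap.newKeySet();

        // All 4 tasks block, so no worker is ever idle in poll():
        // every offer() fails and a fresh thread is created per task.
        for (int i = 0; i < 4; i++) {
            pool.execute(() -> {
                workerNames.add(Thread.currentThread().getName());
                started.countDown();
                try { release.await(); } catch (InterruptedException e) { }
            });
        }
        started.await();
        System.out.println("threads created: " + workerNames.size()); // 4
        release.countDown();
        pool.shutdown();
    }
}
```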

 

d、newScheduledThreadPool

Creates a fixed-size thread pool whose tasks can be executed after a delay or periodically.
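A sketch of delayed execution (the 200 ms delay and pool size of 1 are arbitrary illustration values):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ScheduledDemo {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        long start = System.nanoTime();

        // Runs once, no earlier than 200 ms from now; the Callable returns its own delay.
        ScheduledFuture<Long> future = scheduler.schedule(
                () -> System.nanoTime() - start, 200, TimeUnit.MILLISECONDS);

        long elapsedNanos = future.get(); // blocks until the task has run
        System.out.println("ran after " + TimeUnit.NANOSECONDS.toMillis(elapsedNanos) + " ms");
        scheduler.shutdown();
    }
}
```

For periodic work, scheduleAtFixedRate and scheduleWithFixedDelay on the same interface repeat a task at a fixed rate or with a fixed gap between runs.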

 

Third, the blocking queue BlockingQueue

This interface mainly provides two methods, put() and take(). The former puts an object into the queue, waiting until a slot frees up if the queue is full; the latter takes an object from the head, waiting until one becomes available if the queue is empty.
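The put()/take() behaviour can be sketched with a small producer–consumer pair on a 2-slot ArrayBlockingQueue (the capacity and element count are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PutTakeDemo {
    public static void main(String[] args) throws InterruptedException {
        // Capacity 2: the producer must block once two elements are waiting.
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put(i); // blocks while the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        int sum = 0;
        for (int i = 0; i < 5; i++) {
            sum += queue.take(); // blocks while the queue is empty
        }
        producer.join();
        System.out.println("sum = " + sum); // sum = 15
    }
}
```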

Both FixedThreadPool and SingleThreadExecutor are built on the unbounded LinkedBlockingQueue. LinkedBlockingQueue uses two locks, takeLock and putLock, for the take and put operations respectively. Because enqueue and dequeue use different locks, LinkedBlockingQueue can enqueue and dequeue concurrently; however, since it is implemented as a linked list, lookups are somewhat slower.

CachedThreadPool uses a SynchronousQueue.

Thread pool task queues fall into three kinds: bounded queues, unbounded queues, and synchronous handoff.

  • Unbounded queue: as requests keep arriving, the queue grows without limit, so resources can eventually be exhausted.
  • Bounded queue: e.g. a bounded LinkedBlockingQueue or an ArrayBlockingQueue; this avoids resource exhaustion, but what happens to new tasks once the queue fills up? A saturation policy runs: abort (throw an exception), discard (drop the task), or caller-runs (hand the task back to the caller). The queue size is usually tuned together with the pool size. With a very large or unbounded queue, tasks may wait in line for a long time; in that case a synchronous handoff can pass tasks directly to worker threads. A synchronous handoff is not a real queue, but a mechanism for transferring tasks between threads.
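The handoff mechanism can be sketched with SynchronousQueue directly: offer() fails when no consumer is waiting, while put() blocks until a take() pairs with it:

```java
import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> queue = new SynchronousQueue<>();

        // No thread is waiting in take(), so nothing is stored: offer() returns false.
        System.out.println(queue.offer("x")); // false

        Thread consumer = new Thread(() -> {
            try {
                System.out.println("received: " + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        queue.put("handoff"); // blocks until the consumer's take() pairs with it
        consumer.join();
    }
}
```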

 


Origin www.cnblogs.com/reformdai/p/11099978.html