Quick Notes on Threads and Concurrency [2]

Thread Pools

Executors

A thread pool, much like a connection pool, avoids repeatedly creating and destroying threads by reusing them.
The Executors class acts as a factory for thread pools and provides a number of factory methods for pools with different behaviors.

Typical ones include:

  • A pool with a fixed number of threads (newFixedThreadPool)

When the submitted tasks exceed the number of threads, the extra tasks wait in an unbounded LinkedBlockingQueue.

    /**
     * Creates a thread pool that reuses a fixed number of threads
     * operating off a shared unbounded queue.  At any point, at most
     * {@code nThreads} threads will be active processing tasks.
     * If additional tasks are submitted when all threads are active,
     * they will wait in the queue until a thread is available.
     * If any thread terminates due to a failure during execution
     * prior to shutdown, a new one will take its place if needed to
     * execute subsequent tasks.  The threads in the pool will exist
     * until it is explicitly {@link ExecutorService#shutdown shutdown}.
     *
     * @param nThreads the number of threads in the pool
     */
    public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }
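
A minimal usage sketch (the FixedPoolDemo class and the printed task bodies are illustrative, not part of the original notes): with two worker threads, the remaining tasks wait in the unbounded queue until a thread frees up.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FixedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // 2 worker threads; extra tasks queue up in the unbounded LinkedBlockingQueue
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 5; i++) {
            final int id = i;
            pool.execute(() -> System.out.println(
                    Thread.currentThread().getName() + " runs task " + id));
        }
        pool.shutdown();                                // stop accepting new tasks
        pool.awaitTermination(1, TimeUnit.MINUTES);     // wait for queued tasks to finish
    }
}
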
  • An on-demand cached thread pool (newCachedThreadPool)

(1) Creates threads on demand, reusing previously constructed threads when available
(2) Typically improves the performance of programs that execute many short-lived asynchronous tasks
(3) If no idle thread is available when a task is submitted, a new thread is created at once; threads that stay idle for 60 seconds are terminated and removed

    /**
     * (1)Creates a thread pool that creates new threads as needed, but
     * will reuse previously constructed threads when they are
     * available.  
     * (2)These pools will typically improve the performance
     * of programs that execute many short-lived asynchronous tasks.
     * (3)Calls to {@code execute} will reuse previously constructed
     * threads if available. If no existing thread is available, a new
     * thread will be created and added to the pool. Threads that have
     * not been used for sixty seconds are terminated and removed from
     * the cache.
     *  Thus, a pool that remains idle for long enough will
     * not consume any resources. Note that pools with similar
     * properties but different details (for example, timeout parameters)
     * may be created using {@link ThreadPoolExecutor} constructors.
     *
     * @return the newly created thread pool
     */
    public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }
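
A minimal usage sketch of the cached pool, under the same caveat that the demo class and tasks are illustrative: a burst of short tasks reuses threads, and threads left idle for 60 seconds are reclaimed.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CachedPoolDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newCachedThreadPool();
        // A burst of short-lived tasks: new threads are created as needed,
        // then reused; threads idle for 60 seconds are terminated and removed.
        for (int i = 0; i < 10; i++) {
            final int id = i;
            pool.execute(() -> System.out.println(
                    Thread.currentThread().getName() + " handles request " + id));
        }
        pool.shutdown();
    }
}
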
  • A scheduled thread pool (newScheduledThreadPool)

    Runs commands after a given delay, or executes them periodically.

    /**
     * Creates a thread pool that can schedule commands to run after a
     * given delay, or to execute periodically.
     */
    public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) {
        return new ScheduledThreadPoolExecutor(corePoolSize);
    }
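
A minimal sketch of periodic scheduling (the heartbeat task and the delay/period values are illustrative assumptions): the command first runs after a 1-second delay and then every 5 seconds.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledPoolDemo {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        // First run after 1 second, then every 5 seconds, until shutdown() is called
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("heartbeat at " + System.currentTimeMillis()),
                1, 5, TimeUnit.SECONDS);
    }
}
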

ThreadPoolExecutor

Both newCachedThreadPool and newFixedThreadPool are built on ThreadPoolExecutor under the hood:

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

 /**
     * @param corePoolSize the number of threads to keep in the pool, even
     *        if they are idle, unless {@code allowCoreThreadTimeOut} is set
     * @param maximumPoolSize the maximum number of threads to allow in the
     *        pool
     * @param keepAliveTime when the number of threads is greater than
     *        the core, this is the maximum time that excess idle threads
     *        will wait for new tasks before terminating.
     * @param unit the time unit for the {@code keepAliveTime} argument
     * @param workQueue the queue to use for holding tasks before they are
     *        executed.  This queue will hold only the {@code Runnable}
     *        tasks submitted by the {@code execute} method.
     * @param threadFactory the factory to use when the executor
     *        creates a new thread
     * @param handler the handler to use when execution is blocked
     *        because the thread bounds and queue capacities are reached
     */
    public ThreadPoolExecutor(int corePoolSize,                    // number of core threads kept in the pool
                              int maximumPoolSize,                  // maximum number of threads allowed in the pool
                              long keepAliveTime,                   // idle survival time for threads beyond corePoolSize
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,    // queue holding tasks waiting to execute
                              ThreadFactory threadFactory,          // factory used to create new threads
                              RejectedExecutionHandler handler) {   // rejection policy
        // ...
    }
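
A minimal sketch of calling this constructor directly; the pool sizes, queue capacity, thread naming and CallerRunsPolicy below are illustrative choices, not values from the notes.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ManualPoolDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                30L, TimeUnit.SECONDS,                // keepAliveTime for non-core threads
                new ArrayBlockingQueue<>(100),        // bounded work queue
                r -> new Thread(r, "worker-" + counter.incrementAndGet()), // ThreadFactory
                new ThreadPoolExecutor.CallerRunsPolicy()); // rejected tasks run on the caller thread
        pool.execute(() -> System.out.println(Thread.currentThread().getName() + " started"));
        pool.shutdown();
    }
}
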

The work queues most commonly used here are:

  • SynchronousQueue
    Direct hand-off queue

An insert operation waits for a corresponding remove operation, and vice versa.
When a task is submitted to a pool backed by this queue, a new thread is created if no idle thread is available; once the maximum pool size is reached, the rejection policy kicks in (a hand-off sketch follows after this queue list).

/**
 * (1)A BlockingQueue in which each insert
 * operation must wait for a corresponding remove operation by another
 * thread, and vice versa.  
 * (2)A synchronous queue does not have any
 * internal capacity, not even a capacity of one. 
 * (3) You cannot {@code peek} at a synchronous queue because an element is only present when you try to remove it; you cannot insert an element
 * (using any method) unless another thread is trying to remove it;
 * (4)you cannot iterate as there is nothing to iterate. 
 * (5) The <em>head</em> of the queue is the element that the first queued
 * inserting thread is trying to add to the queue; if there is no such
 * queued thread then no element is available for removal and
 * {@code poll()} will return {@code null}. */

public class SynchronousQueue<E> extends AbstractQueue<E>
    implements BlockingQueue<E>, java.io.Serializable
  • ArrayBlockingQueue

    Bounded queue

    (1) FIFO (first in, first out)
    (2) Fixed capacity, set at construction and not resizable

  • LinkedBlockingQueue

    Unbounded (by default) ordered queue

    (1) FIFO (first in, first out)
    (2) Capacity defaults to Integer.MAX_VALUE, so the queue can keep growing as tasks arrive
    (3) Per the JDK docs, linked queues typically have higher throughput than array-based queues, but less predictable performance
    (4) Under frequent task submission the queue can balloon quickly, risking memory exhaustion

  • PriorityBlockingQueue

    Priority queue: execution order is determined by task priority rather than submission order (elements must be Comparable or a Comparator must be supplied)
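
To make the SynchronousQueue hand-off concrete, here is a minimal sketch (thread and element names are illustrative): put() blocks until another thread calls take(), and vice versa.

import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> queue = new SynchronousQueue<>();

        Thread consumer = new Thread(() -> {
            try {
                // take() blocks until a producer hands an element over
                System.out.println("received: " + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "consumer");
        consumer.start();

        // put() blocks until the consumer is ready to take the element
        queue.put("task-1");
        consumer.join();
    }
}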

For the rejection policies, see the JDK documentation. ThreadPoolExecutor ships with four built-in handlers: AbortPolicy (the default, which throws RejectedExecutionException), CallerRunsPolicy, DiscardPolicy, and DiscardOldestPolicy.

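A minimal sketch of the default AbortPolicy (the pool sizes and queue capacity are deliberately tiny, illustrative values): once the single worker is busy and the one-slot queue is full, the next execute() is rejected with a RejectedExecutionException.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class AbortPolicyDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy()); // the default rejection policy
        Runnable slow = () -> {
            try { Thread.sleep(1000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };
        try {
            pool.execute(slow); // occupies the single worker thread
            pool.execute(slow); // fills the one-slot queue
            pool.execute(slow); // no capacity left -> AbortPolicy throws
        } catch (RejectedExecutionException e) {
            System.out.println("rejected: " + e.getMessage());
        } finally {
            pool.shutdown();
        }
    }
}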

How should the thread pool size be estimated?

A post on server performance and IO optimization (服务器性能IO优化) gives an estimation formula:

Optimal thread count = ((thread wait time + thread CPU time) / thread CPU time) * number of CPUs

For example, if each thread spends on average 0.5s of CPU time and 1.5s waiting (non-CPU time, e.g. IO), and the machine has 8 CPU cores, the formula gives ((0.5 + 1.5) / 0.5) * 8 = 32. The formula can also be rewritten as:

Optimal thread count = (ratio of thread wait time to thread CPU time + 1) * number of CPUs

This leads to a simple conclusion:
the larger the share of time a thread spends waiting, the more threads are needed; the larger the share of CPU time, the fewer threads are needed.
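
A tiny sketch of the arithmetic, using the same illustrative timings as the example above (0.5s CPU time, 1.5s wait, 8 cores):

public class PoolSizing {
    /** Optimal threads = ((wait time + CPU time) / CPU time) * number of CPUs. */
    static int optimalThreads(double waitSeconds, double cpuSeconds, int cpus) {
        return (int) (((waitSeconds + cpuSeconds) / cpuSeconds) * cpus);
    }

    public static void main(String[] args) {
        // 0.5s CPU time, 1.5s wait time per task on an 8-core machine -> 32
        System.out.println(optimalThreads(1.5, 0.5, 8));
        // Or use the actual core count of the current machine
        System.out.println(optimalThreads(1.5, 0.5, Runtime.getRuntime().availableProcessors()));
    }
}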


Reposted from blog.csdn.net/hjw199089/article/details/80700029