Turing Institute of Architects (VIP) - Java Concurrent Programming: Thread Pools

1、Executor

public interface Executor {

    // Execute the given Runnable task
    void execute(Runnable command);
}

ExecutorService:

  1. Provides several ways to submit tasks, including support for Callable
  2. Defines the thread-pool lifecycle operations (shutdown, termination checks, and so on)
public interface ExecutorService extends Executor {
  
    // Initiates an orderly shutdown: previously submitted tasks are executed, but no new tasks are accepted.
    void shutdown();

    /**
     * Shuts the executor down immediately. Main characteristics:
     * 1. Attempts to stop all actively executing tasks; success is not guaranteed, but a best effort is made
     *    (e.g. tasks are interrupted via Thread.interrupt, so tasks that do not respond to interruption may never terminate);
     * 2. Halts the processing of tasks that were submitted but not yet started;
     *
     * @return the list of tasks that were submitted but never started
     */
    List<Runnable> shutdownNow();

    // Returns true if this executor has been shut down.
    boolean isShutdown();

    // Returns true only if the executor has been shut down and all tasks have completed.
    boolean isTerminated();

    // Blocks until all tasks have completed after a shutdown request, or the timeout expires,
    // or the current thread is interrupted, whichever happens first.
    boolean awaitTermination(long timeout, TimeUnit unit)
        throws InterruptedException;

    // Accepts a Callable and returns a Future representing the task.
    <T> Future<T> submit(Callable<T> task);

    // Submits a Runnable task for execution and returns a Future representing that task.
    <T> Future<T> submit(Runnable task, T result);

    // Submits a Runnable task for execution and returns a Future representing that task.
    // Note: the Future's get method returns null upon successful completion.
    Future<?> submit(Runnable task);

    // Executes all tasks in the given collection and, once all of them have completed,
    // returns a list of Futures holding their status and results.
    // Note: this method is synchronous. Future.isDone() is true for every element of the returned list.
    <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks)
        throws InterruptedException;

    // Executes all tasks in the given collection and returns a list of Futures holding their status and
    // results when either all tasks have completed or the timeout expires, whichever happens first.
    <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks,
                                  long timeout, TimeUnit unit)
        throws InterruptedException;

    // Executes the given tasks and returns the result of the first one that completes successfully
    // (without throwing an exception). Upon normal or exceptional return, the remaining tasks are cancelled.
    <T> T invokeAny(Collection<? extends Callable<T>> tasks)
        throws InterruptedException, ExecutionException;

    // Executes the given tasks and returns the result of one that completes successfully (without throwing
    // an exception) before the given timeout expires.
    // Upon normal or exceptional return, the remaining tasks are cancelled.
    <T> T invokeAny(Collection<? extends Callable<T>> tasks,
                    long timeout, TimeUnit unit)
        throws InterruptedException, ExecutionException, TimeoutException;
}

AbstractExecutorService:

  1. Aggregates the various submit entry points: every task is wrapped into a RunnableFuture and handed to a single call, execute(RunnableFuture ftask);
  2. Implements the invokeAll/invokeAny logic; the common invokeAny path is aggregated in doInvokeAny
public Future<?> submit(Runnable task) {
    if (task == null) throw new NullPointerException();
    RunnableFuture<Void> ftask = newTaskFor(task, null);
    execute(ftask);
    return ftask;
}

public <T> Future<T> submit(Runnable task, T result) {
    if (task == null) throw new NullPointerException();
    RunnableFuture<T> ftask = newTaskFor(task, result);
    execute(ftask);
    return ftask;
}

public <T> Future<T> submit(Callable<T> task) {
    if (task == null) throw new NullPointerException();
    RunnableFuture<T> ftask = newTaskFor(task);
    execute(ftask);
    return ftask;
}

ScheduledExecutorService:

An executor that can schedule tasks to run after a given delay, or to execute periodically.

public interface ScheduledExecutorService extends ExecutorService {

    // Schedules a Runnable to run once after the given delay.
    public ScheduledFuture<?> schedule(Runnable command, long delay, TimeUnit unit);

    // Schedules a Callable to run once after the given delay.
    public <V> ScheduledFuture<V> schedule(Callable<V> callable, long delay, TimeUnit unit);

    // Schedules a Runnable to run periodically: first after initialDelay, then every period,
    // measured from the start of the previous execution.
    public ScheduledFuture<?> scheduleAtFixedRate(Runnable command,
                                                  long initialDelay,
                                                  long period,
                                                  TimeUnit unit);

    // Schedules a Runnable to run periodically: first after initialDelay, then with the given delay
    // between the end of one execution and the start of the next.
    public ScheduledFuture<?> scheduleWithFixedDelay(Runnable command,
                                                     long initialDelay,
                                                     long delay,
                                                     TimeUnit unit);

}

Executors:

  1. Provides factory methods for creating thread pools
  2. Provides helper methods for operating on the created executors

Core factory methods:

  • newCachedThreadPool creates a cached thread pool: if the pool grows beyond what is currently needed, idle threads are reclaimed flexibly; if no idle thread is available, a new thread is created.
  • newFixedThreadPool creates a fixed-size thread pool, which caps the maximum number of concurrent threads; excess tasks wait in the queue.
  • newScheduledThreadPool creates a fixed-size thread pool that supports delayed and periodic task execution.
  • newSingleThreadExecutor creates a single-threaded executor: a single worker thread executes all tasks, guaranteeing they run in the specified order (FIFO, LIFO, or by priority, depending on the queue). A usage sketch follows this list.
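A minimal usage sketch of these factory methods (the pool sizes here are arbitrary example values, not recommendations):

// Illustrative only; all classes come from java.util.concurrent.
ExecutorService cached = Executors.newCachedThreadPool();                 // grows and shrinks on demand
ExecutorService fixed = Executors.newFixedThreadPool(4);                  // at most 4 worker threads
ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2); // delayed/periodic tasks
ExecutorService single = Executors.newSingleThreadExecutor();             // one worker, sequential execution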

Core API:

1. execute(Runnable command): executes a Runnable task
2. submit(task): submits a Callable or Runnable task and returns a Future representing that task
3. shutdown(): shuts the executor down after previously submitted tasks have completed; no new tasks are accepted
4. shutdownNow(): attempts to stop all executing tasks and shuts the executor down
5. isTerminated(): tests whether all tasks have completed after a shutdown
6. isShutdown(): tests whether the ExecutorService has been shut down. A submit/shutdown sketch follows this list.
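A short sketch of the submit/shutdown life cycle (the task and timeout values are illustrative; checked-exception handling is omitted):

ExecutorService pool = Executors.newFixedThreadPool(2);

// submit a Callable and wait for its result via the returned Future
Future<Integer> future = pool.submit(() -> 1 + 1);
Integer result = future.get();          // blocks until the task completes; returns 2

pool.shutdown();                        // stop accepting new tasks
if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
    pool.shutdownNow();                 // force shutdown if tasks do not finish in time
}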

Core parameters:

  • corePoolSize: the number of threads to keep in the pool, even if they are idle
  • maximumPoolSize: the maximum number of threads allowed in the pool
  • keepAliveTime: when the number of threads exceeds the core size, the maximum time an excess idle thread waits for new tasks before terminating
  • unit: the time unit of the keepAliveTime parameter
  • workQueue: the queue that holds tasks before they are executed; it only holds Runnable tasks submitted via the execute method
  • threadFactory: the factory used when the executor creates a new thread
  • handler: the rejection handler, invoked when the thread limit and the queue capacity are both exhausted

Operating logic (a constructor sketch follows this list):

  • If the current pool size (poolSize) is less than corePoolSize, a new thread is created to run the task
  • If poolSize is greater than or equal to corePoolSize and the wait queue is not full, the task is added to the queue
  • If poolSize is greater than or equal to corePoolSize but less than maximumPoolSize and the queue is full, a new thread is created to run the task
  • If poolSize has reached maximumPoolSize and the queue is full, the rejection policy is invoked
  • A pooled thread does not exit immediately after finishing a task; it checks the queue for further work. If it receives no new task within keepAliveTime, the thread exits
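A sketch of a manually configured ThreadPoolExecutor that exercises these parameters (all the values here are illustrative, not recommendations):

ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2,                                        // corePoolSize
        4,                                        // maximumPoolSize
        60L, TimeUnit.SECONDS,                    // keepAliveTime + unit for excess idle threads
        new ArrayBlockingQueue<Runnable>(100),    // bounded workQueue
        Executors.defaultThreadFactory(),         // threadFactory
        new ThreadPoolExecutor.AbortPolicy());    // handler: reject by throwing RejectedExecutionException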

2、Implementation Principles

Configuration parameters:

/**
 * Creates a ThreadPoolExecutor with the given parameters.
 *
 * @param corePoolSize    maximum number of threads in the core pool
 * @param maximumPoolSize maximum number of threads in the whole pool
 * @param keepAliveTime   how long idle threads are kept alive
 * @param unit            the time unit of keepAliveTime
 * @param workQueue       the task queue, holding tasks that have been submitted but not yet executed
 * @param threadFactory   the thread factory (determines how a new thread is created)
 * @param handler         the rejection policy (applied when too many tasks fill up the work queue)
 */

ThreadPoolExecutor logically divides the threads it manages into two parts:

  • The core thread pool (its size corresponds to corePoolSize),
  • The non-core thread pool (its size corresponds to maximumPoolSize - corePoolSize).

When a task is submitted to the thread pool, a worker thread, called a Worker, may be created; logically, a Worker belongs to either the core pool or the non-core pool, and which one it belongs to is determined by corePoolSize, maximumPoolSize and the current total number of Workers.

The core pool and the non-core pool are purely logical concepts: during task scheduling, ThreadPoolExecutor uses corePoolSize and maximumPoolSize to decide how each task should be handled.

Working principle:

Internally, ThreadPoolExecutor defines an AtomicInteger field, ctl, which packs both the pool state and the worker count into a single variable by bit fields: the low 29 bits store the worker thread count, and the high 3 bits store the thread pool state:

/**
 * Holds the pool state and the worker thread count:
 * low 29 bits: worker thread count
 * high 3 bits: thread pool state
 */
private final AtomicInteger ctl = new AtomicInteger(ctlOf(RUNNING, 0));
 
private static final int COUNT_BITS = Integer.SIZE - 3;
 
// Maximum worker count: 2^29 - 1
private static final int CAPACITY = (1 << COUNT_BITS) - 1;  // 00011111 11111111 11111111 11111111
 
// Thread pool states
// 11100000 00000000 00000000 00000000
private static final int RUNNING = -1 << COUNT_BITS;   
// 00000000 00000000 00000000 00000000
private static final int SHUTDOWN = 0 << COUNT_BITS;   
// 00100000 00000000 00000000 00000000
private static final int STOP = 1 << COUNT_BITS;     
// 01000000 00000000 00000000 00000000
private static final int TIDYING = 2 << COUNT_BITS;    
// 01100000 00000000 00000000 00000000
private static final int TERMINATED = 3 << COUNT_BITS;      

As the code shows, ThreadPoolExecutor defines five thread pool states:

  • RUNNING: accepts new tasks and processes tasks already in the blocking queue
  • SHUTDOWN: does not accept new tasks, but still processes tasks already in the queue
  • STOP: does not accept new tasks, does not process queued tasks, and interrupts running tasks
  • TIDYING: all tasks have terminated and the worker count is zero; the pool transitions to TIDYING and is about to call the terminated() hook
  • TERMINATED: the terminated() hook has finished executing
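For reference, the packing and unpacking helpers that go with ctl (as they appear in the JDK 8 source) are simple bit operations:

// Extract the high 3 state bits / low 29 count bits, and pack them back together.
private static int runStateOf(int c)     { return c & ~CAPACITY; }
private static int workerCountOf(int c)  { return c & CAPACITY; }
private static int ctlOf(int rs, int wc) { return rs | wc; }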

Worker threads are maintained internally in a HashSet:

/**
 * The set of worker threads.
 */
private final HashSet<Worker> workers = new HashSet<Worker>();

Worker:

/**
 * A Worker represents one worker thread in the pool and may be associated with a task.
 * It extends the AQS framework; its synchronization state is defined as:
 * -1: initial state
 * 0:  unlocked
 * 1:  locked
 */
private final class Worker extends AbstractQueuedSynchronizer implements Runnable {
 
    /**
     * The thread associated with this Worker.
     */
    final Thread thread;
    /**
     * Initial task to run.  Possibly null.
     */
    Runnable firstTask;
    /**
     * Per-thread task counter
     */
    volatile long completedTasks;
 
 
    Worker(Runnable firstTask) {
        setState(-1); // initial synchronization state
        this.firstTask = firstTask;
        // each Worker asks the thread factory to create its thread
        this.thread = getThreadFactory().newThread(this);
    }
 
    /**
     * Run the task(s).
     */
    public void run() {
        runWorker(this);
    }
 
    /**
     * Whether the lock is held.
     */
    protected boolean isHeldExclusively() {
        return getState() != 0;
    }
 
    /**
     * Try to acquire the lock.
     */
    protected boolean tryAcquire(int unused) {
        if (compareAndSetState(0, 1)) {
            setExclusiveOwnerThread(Thread.currentThread());
            return true;
        }
        return false;
    }
 
    /**
     * Try to release the lock.
     */
    protected boolean tryRelease(int unused) {
        setExclusiveOwnerThread(null);
        setState(0);
        return true;
    }
 
    public void lock() {
        acquire(1);
    }
 
    public boolean tryLock() {
        return tryAcquire(1);
    }
 
    public void unlock() {
        release(1);
    }
 
    public boolean isLocked() {
        return isHeldExclusively();
    }
 
    /**
     * Interrupt the thread (only once it has left the initial state).
     */
    void interruptIfStarted() {
        Thread t;
        if (getState() >= 0 && (t = thread) != null && !t.isInterrupted()) {
            try {
                t.interrupt();
            } catch (SecurityException ignore) {
            }
        }
    }
}

Thread Factory:

The following is the default thread factory

public static ThreadFactory defaultThreadFactory() {
    return new DefaultThreadFactory();
}

/**
 * The default thread factory.
 */
static class DefaultThreadFactory implements ThreadFactory {
    private static final AtomicInteger poolNumber = new AtomicInteger(1);
    private final ThreadGroup group;
    private final AtomicInteger threadNumber = new AtomicInteger(1);
    private final String namePrefix;
 
    DefaultThreadFactory() {
        SecurityManager s = System.getSecurityManager();
        group = (s != null) ? s.getThreadGroup() : Thread.currentThread().getThreadGroup();
        namePrefix = "pool-" + poolNumber.getAndIncrement() + "-thread-";
    }
 
    public Thread newThread(Runnable r) {
        Thread t = new Thread(group, r, namePrefix + threadNumber.getAndIncrement(), 0);
        if (t.isDaemon())
            t.setDaemon(false);
        if (t.getPriority() != Thread.NORM_PRIORITY)
            t.setPriority(Thread.NORM_PRIORITY);
        return t;
    }
}
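In practice it is common to supply a custom factory so that pool threads get meaningful names; a minimal sketch follows (the name prefix and pool sizes are hypothetical examples):

ThreadFactory namedFactory = new ThreadFactory() {
    private final AtomicInteger counter = new AtomicInteger(1);

    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, "order-worker-" + counter.getAndIncrement()); // hypothetical name prefix
        t.setDaemon(false); // keep them as normal (non-daemon) threads
        return t;
    }
};
ExecutorService pool = new ThreadPoolExecutor(2, 4, 60L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>(), namedFactory, new ThreadPoolExecutor.AbortPolicy());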

The key to the whole execute() flow is the following two points:

  1. If the number of worker threads is less than the core pool limit (corePoolSize), a new worker thread is created to run the task directly;
  2. If the number of worker threads is greater than or equal to corePoolSize, the task is first added to the work queue to wait. If enqueuing fails (for example because the queue is full), then, as long as the overall pool is not yet full (corePoolSize ≤ worker count < maximumPoolSize), a new worker thread is created to run the task immediately; otherwise the rejection policy is applied.

(Figure: worker thread life cycle)

Rejection policy:

ThreadPoolExecutor applies the rejection policy in the following two cases:

  1. When the core pool is full and the task queue is also full: it first checks whether the non-core pool still has room; if it does, a worker thread is created (belonging to the non-core pool), otherwise the rejection policy is applied;
  2. When a task is submitted after the ThreadPoolExecutor has been shut down.

The four built-in rejection policies (a sketch of supplying one follows this list):

  • AbortPolicy (default): throws a RejectedExecutionException
  • DiscardPolicy: does nothing; the rejected task is silently dropped and left to be garbage collected
  • DiscardOldestPolicy: discards the oldest task in the task queue and then executes the current task
  • CallerRunsPolicy: runs the task on the submitting (caller's) thread, which naturally slows down the rate at which new tasks are submitted
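A sketch of choosing a built-in policy or supplying a custom RejectedExecutionHandler (the logging behavior and sizes here are only examples):

// Built-in policy: run rejected tasks on the caller's thread to apply back-pressure.
RejectedExecutionHandler callerRuns = new ThreadPoolExecutor.CallerRunsPolicy();

// Custom policy (illustrative): log and drop; a real handler might record metrics or persist the task.
RejectedExecutionHandler logAndDrop = (r, executor) ->
        System.err.println("Task rejected, pool is saturated: " + r);

ThreadPoolExecutor pool = new ThreadPoolExecutor(2, 4, 60L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(10), Executors.defaultThreadFactory(), logAndDrop);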

3、Source Code Analysis

Construction:

ExecutorService executorService = Executors.newFixedThreadPool(100);

Creating the ThreadPoolExecutor instance:

// Create the ThreadPoolExecutor
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

// Delegates to the full constructor, using the default thread factory and the default rejection policy
public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue) {
    this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
         Executors.defaultThreadFactory(), defaultHandler);
}

/**
 * Creates a ThreadPoolExecutor with the given parameters.
 *
 * @param corePoolSize    maximum number of threads in the core pool
 * @param maximumPoolSize maximum number of threads in the whole pool
 * @param keepAliveTime   how long idle threads are kept alive
 * @param unit            the time unit of keepAliveTime
 * @param workQueue       the task queue, holding tasks that have been submitted but not yet executed
 * @param threadFactory   the thread factory (determines how a new thread is created)
 * @param handler         the rejection policy (applied when too many tasks fill up the work queue)
 */
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler) {
    if (corePoolSize < 0 ||
        maximumPoolSize <= 0 ||
        maximumPoolSize < corePoolSize ||
        keepAliveTime < 0)
        throw new IllegalArgumentException();
    if (workQueue == null || threadFactory == null || handler == null)
        throw new NullPointerException();
    this.acc = System.getSecurityManager() == null ? 
        null : AccessController.getContext();
    this.corePoolSize = corePoolSize;
    this.maximumPoolSize = maximumPoolSize;
    this.workQueue = workQueue;
    this.keepAliveTime = unit.toNanos(keepAliveTime);
    this.threadFactory = threadFactory;
    this.handler = handler;
}

Task execution (execute):

public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();
  
    int c = ctl.get();
    // CASE 1: worker count < core pool limit
    if (workerCountOf(c) < corePoolSize) {
        // add a worker thread and run the task
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    // CASE 2: if the pool is running and the task can be queued, re-check the state after enqueuing
    if (isRunning(c) && workQueue.offer(command)) {
        int recheck = ctl.get();
        // re-check: if the pool is no longer running, remove the task and reject it
        if (!isRunning(recheck) && remove(command))
            reject(command);
        // if there are no worker threads at all, add one with a null first task
        else if (workerCountOf(recheck) == 0)
            addWorker(null, false);
    }
    // CASE 3: try to add a (non-core) worker directly; if that fails, reject the task
    else if (!addWorker(command, false))
        reject(command);
}

addWorker()

/**
 * Adds a worker thread and executes the task.
 *
 * @param firstTask if non-null, a new worker thread is created to execute this firstTask immediately;
                                        otherwise existing worker threads are reused, fetching tasks from the work queue
 * @param core      which logical pool the worker belongs to: true - core pool, false - non-core pool
 */
private boolean addWorker(Runnable firstTask, boolean core) {
    retry:
    for (;;) {
        int c = ctl.get();
        // read the thread pool state
        int rs = runStateOf(c);

        /**
         * This if statement determines the cases in which the pool no longer accepts new work:
         * 1. The pool state is STOP, TIDYING or TERMINATED (any of the three);
         * 2. The pool state is ≥ SHUTDOWN and firstTask != null:
                    once the state is ≥ SHUTDOWN, no new task submissions are accepted, so return directly;
         * 3. The pool state is ≥ SHUTDOWN and the queue is empty:
                    there are no tasks left in the queue, so nothing needs to be executed and we can return directly.
         */
        if (rs >= SHUTDOWN &&
            !(rs == SHUTDOWN && firstTask == null && !workQueue.isEmpty()))
            return false;

        for (;;) {
            int wc = workerCountOf(c);
            // check whether the capacity is exceeded:
            // 1. the absolute maximum capacity; 2. core==true checks against corePoolSize, core==false against maximumPoolSize
            if (wc >= CAPACITY || wc >= (core ? corePoolSize : maximumPoolSize))
                return false;
            // increment the worker count
            if (compareAndIncrementWorkerCount(c))
                break retry;
            c = ctl.get();  // Re-read ctl
            // the pool state has changed; retry from the outer loop
            if (runStateOf(c) != rs)
                continue retry;
            // else CAS failed due to workerCount change; retry inner loop
        }
    }

    boolean workerStarted = false;
    boolean workerAdded = false;
    Worker w = null;
    try {
        // create the Worker (an AQS subclass)
        w = new Worker(firstTask);
        final Thread t = w.thread;
        if (t != null) {
            final ReentrantLock mainLock = this.mainLock;
            // lock before touching the worker set
            mainLock.lock();
            try {
                // Recheck while holding lock.
                // Back out on ThreadFactory failure or if
                // shut down before lock acquired.
                int rs = runStateOf(ctl.get());

                if (rs < SHUTDOWN ||
                    (rs == SHUTDOWN && firstTask == null)) {
                    if (t.isAlive()) // precheck that t is startable
                        throw new IllegalThreadStateException();
                    // add the worker to the worker set
                    workers.add(w);
                    int s = workers.size();
                    if (s > largestPoolSize)
                        largestPoolSize = s;
                    workerAdded = true;
                }
            } finally {
                mainLock.unlock();
            }
            if (workerAdded) {
                // start the worker thread; it begins executing tasks
                t.start();
                workerStarted = true;
            }
        }
    } finally {
        if (! workerStarted)
            // handle the failure to add a worker
            addWorkerFailed(w);
    }
    return workerStarted;
}

After the worker thread is started, its run() method is invoked:

public void run() {
    runWorker(this);
}

final void runWorker(Worker w) {
    Thread wt = Thread.currentThread();
    Runnable task = w.firstTask;
    w.firstTask = null;
    w.unlock(); // allow interrupts
    boolean completedAbruptly = true;
    try {
        // when task == null, fetch the next task from the queue via getTask()
        while (task != null || (task = getTask()) != null) {
            w.lock();
            /**
             * The purpose of the if statement below:
             * 1. Ensure that when the pool state is STOP/TIDYING/TERMINATED, the thread wt that is about
                    to run the task carries the interrupt flag (in any of those states, no new tasks may run);
             * 2. Ensure that when the pool state is RUNNING/SHUTDOWN, the thread wt is not interrupted.
             */
            if ((runStateAtLeast(ctl.get(), STOP) ||
                 (Thread.interrupted() &&
                  runStateAtLeast(ctl.get(), STOP))) &&
                !wt.isInterrupted())
                wt.interrupt();
            try {
                // pre-execution hook; an empty implementation in JDK 1.8
                beforeExecute(wt, task);
                Throwable thrown = null;
                try {
                    // run the task
                    task.run();
                } catch (RuntimeException x) {
                    thrown = x; throw x;
                } catch (Error x) {
                    thrown = x; throw x;
                } catch (Throwable x) {
                    thrown = x; throw new Error(x);
                } finally {
                    // post-execution hook; an empty implementation in JDK 1.8
                    afterExecute(task, thrown);
                }
            } finally {
                task = null;
                // update this worker's completed-task count
                w.completedTasks++;
                w.unlock();
            }
        }
        // Reaching this point means the worker neither carried its own task nor obtained one from the queue.
        // completedAbruptly is false on a normal exit; it stays true if an exception caused the exit.
        completedAbruptly = false;
    } finally {
          // handle the worker thread's exit
        processWorkerExit(w, completedAbruptly);
    }
}  

processWorkerExit(Worker w, boolean completedAbruptly)

private void processWorkerExit(Worker w, boolean completedAbruptly) {
      // the worker thread is exiting because of an exception
    if (completedAbruptly) 
        // decrement the worker count (on a normal exit, getTask() has already decremented it)
        decrementWorkerCount();

    final ReentrantLock mainLock = this.mainLock;
    mainLock.lock();
    try {
        // completedTaskCount records the total number of tasks completed by the pool
        completedTaskCount += w.completedTasks;
        // remove the worker from the worker set (the Worker object will then be garbage collected)
        workers.remove(w);
    } finally {
        mainLock.unlock();
    }
        // depending on the pool state, decide whether the pool should be terminated
    tryTerminate();

    int c = ctl.get();
    if (runStateLessThan(c, STOP)) {
        // the worker thread exited normally
        if (!completedAbruptly) {
            int min = allowCoreThreadTimeOut ? 0 : corePoolSize;
            if (min == 0 && ! workQueue.isEmpty())
                min = 1;
            if (workerCountOf(c) >= min)
                return; // replacement not needed
        }
        // create a replacement worker thread
        addWorker(null, false);
    }
}

4、ScheduledThreadPoolExecutor

To support delayed and periodic task scheduling, ScheduledThreadPoolExecutor wraps every Runnable task into a RunnableScheduledFuture.

  • The task queue of ScheduledThreadPoolExecutor is a specialized delay queue
  • DelayedWorkQueue: similar to DelayQueue, but every element in the queue must implement the RunnableScheduledFuture interface.

Implementation principle:

Construction:

The ScheduledThreadPoolExecutor constructors all delegate internally to the parent ThreadPoolExecutor constructor; the key point is the choice of task queue: DelayedWorkQueue.
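For reference, the single-argument constructor in the JDK 8 source looks roughly like this:

public ScheduledThreadPoolExecutor(int corePoolSize) {
    // NANOSECONDS refers to TimeUnit.NANOSECONDS; the non-core pool limit is Integer.MAX_VALUE,
    // and the queue is always a DelayedWorkQueue.
    super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
          new DelayedWorkQueue());
}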

Scheduling:

The core scheduling methods are schedule, scheduleAtFixedRate and scheduleWithFixedDelay.

1. The task is wrapped into a ScheduledFutureTask

2. The pool state is checked and the task is added to the DelayedWorkQueue

  1. First, when the task is submitted, the pool state is checked; if it is not RUNNING, the rejection policy is executed.
  2. Then the task is added to the blocking queue. (Since DelayedWorkQueue is an unbounded queue, the add always succeeds.)
  3. Then a worker thread is created; whether it goes to the core or the non-core pool is explained below.

If the core pool is not full, the new worker thread is placed in the core pool. If the core pool is already full, ScheduledThreadPoolExecutor does not create a worker thread belonging to the non-core pool the way ThreadPoolExecutor does; it simply returns. In other words, in ScheduledThreadPoolExecutor, once the core pool is full, no further worker threads are created. A usage sketch follows.
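A small sketch of the three scheduling methods (the delays and tasks are purely illustrative; exception handling is omitted):

ScheduledExecutorService ses = Executors.newScheduledThreadPool(2);

// run once, 5 seconds from now
ses.schedule(() -> System.out.println("one-shot"), 5, TimeUnit.SECONDS);

// run every 10 seconds, measured from the START of the previous execution
ses.scheduleAtFixedRate(() -> System.out.println("fixed rate"), 0, 10, TimeUnit.SECONDS);

// run again 10 seconds after the END of the previous execution
ses.scheduleWithFixedDelay(() -> System.out.println("fixed delay"), 0, 10, TimeUnit.SECONDS);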

Production Practice

How do you set a reasonable thread pool size?

Analyze the characteristics of the tasks:

  1. Nature of the task: CPU-intensive, IO-intensive, or mixed.
  2. Task priority: high, medium, low.
  3. Task execution time: long, medium, short.
  4. Task dependencies: whether the task depends on other resources, such as database connections.
  • CPU-intensive tasks should be given as few threads as possible, for example number of CPUs + 1
  • IO-intensive tasks should be given as many threads as reasonable: since IO operations do not occupy the CPU, do not let the CPU sit idle; increase the thread count, for example to 2 * number of CPUs + 1
  • For mixed tasks, split them into an IO-intensive part and a CPU-intensive part if possible and handle each separately, provided the two parts take roughly the same time to run; if their running times differ greatly, splitting is not worthwhile
  • If a task depends on other resources, for example waiting for a database connection to return results, then the longer the wait, the longer the CPU sits idle, and the more threads should be configured in order to make better use of the CPU.

A formula for a reasonable thread pool size:

Optimal thread count = ((thread wait time + thread CPU time) / thread CPU time) * number of CPUs

which can be rewritten as:

Optimal thread count = (ratio of thread wait time to thread CPU time + 1) * number of CPUs

Conclusion: the higher the proportion of time a thread spends waiting, the more threads are needed; the higher the proportion of time it spends on the CPU, the fewer threads are needed.
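As a worked example with hypothetical numbers: if each task waits 50 ms on IO and uses 5 ms of CPU time, and the machine has 8 cores, the formula gives (50 / 5 + 1) * 8 = 88 threads.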

Thread pool configuration for different concurrency scenarios:

  1. High concurrency, short task execution time: configure as few threads as possible, e.g. CPU core count + 1
  2. High concurrency, long-running business tasks: for a system under heavy pressure, the solution should come from the architecture as far as possible, for example asynchronous processing to smooth peaks and decouple components, rather than from thread pool configuration alone
  3. Low concurrency, long task execution time:
    1. If the time is mostly spent on IO operations, increase the thread count so the CPU is not left idle and more tasks can be executed
    2. If the time is mostly spent on computation, reduce CPU context switching by keeping the thread count consistent with the CPU core count.

"Java Concurrency in combat."

Nthreads = NCPU * UCPU * (1 + W/C)

among them:

NCPU is the number of processor cores can be produced by Runtime.getRuntime (). AvailableProcessors () to give

UCPU CPU utilization is desired (between zero and the value should be between 1)

W / C is the ratio of the waiting time and the calculated time
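A small sketch of applying this formula in code (the utilization target and W/C ratio are hypothetical values you would measure for your own workload):

int nCpu = Runtime.getRuntime().availableProcessors();
double targetUtilization = 0.8;   // hypothetical: aim for 80% CPU utilization
double waitToComputeRatio = 5.0;  // hypothetical: threads wait 5x as long as they compute
int nThreads = (int) (nCpu * targetUtilization * (1 + waitToComputeRatio));

ExecutorService pool = Executors.newFixedThreadPool(nThreads);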

For CPU-intensive workloads, configure resident core threads: determine the core thread count from the workload up front and only create additional threads when the core threads are insufficient, to avoid wasting resources.

For CPU-intensive tasks, also choose the blocking queue carefully: in high-concurrency scenarios, heavy CAS contention on the queue consumes CPU and hurts performance.

Origin: www.cnblogs.com/zuier/p/11388872.html