Thread pool principles and usage scenarios

1. Thread pool description:
Multi-threading technology mainly addresses the problem of scheduling multiple threads of execution on a processing unit; it can significantly reduce the processor's idle time and increase its throughput.
Suppose the time a server needs to complete one task is made up of T1, the time to create the thread; T2, the time to execute the task in that thread; and T3, the time to destroy the thread.

If T1 + T3 is much larger than T2, a thread pool is worth using to improve server performance.
            A thread pool consists of four basic components:
            1. Thread pool manager (ThreadPool): creates and manages the pool, including creating it, destroying it, and adding new tasks;
            2. Worker thread (PoolWorker): a thread in the pool; it waits when there is no task and executes tasks in a loop;
            3. Task interface (Task): the interface every task must implement so that worker threads can schedule its execution; it mainly defines the task's entry point, the clean-up work after the task finishes, the task's execution state, and so on;
            4. Task queue (taskQueue): holds tasks that have not yet been processed and provides a buffering mechanism.

Thread pool technology focuses on shortening or rescheduling T1 and T3 so as to improve server performance. It moves T1 and T3 to the server program's start-up and shutdown phases, or to idle periods, so that there is no T1 or T3 overhead while the server is handling client requests.
A thread pool not only shifts when T1 and T3 occur; it also dramatically reduces the number of threads created. Consider an example:
Suppose a server has to handle 50,000 requests per day and each request needs its own thread. With a thread pool, the number of threads is usually fixed, so the total number of threads created never exceeds the pool size, whereas without a pool the server would create 50,000 threads. A pool is normally far smaller than 50,000, so a pooled server does not waste time creating 50,000 threads while handling requests, which improves efficiency.

The implementation below does not define a task interface; instead, Runnable objects are handed to the thread pool manager (ThreadPool), which then takes care of the rest.

package mine.util.thread;

import java.util.LinkedList;  
import java.util.List;  

/**
 * Thread pool class / thread manager: creates threads, executes tasks,
 * destroys the threads and reports basic pool information.
 */
public final class ThreadPool {
    // Default number of threads in the pool
    private static int worker_num = 5;
    // Worker threads
    private WorkThread[] workThreads;
    // Number of finished tasks
    private static volatile int finished_task = 0;
    // Task queue, used as a buffer; List itself is not thread-safe
    private List<Runnable> taskQueue = new LinkedList<Runnable>();
    private static ThreadPool threadPool;

    // Create a pool with the default number of threads
    private ThreadPool() {
        this(5);
    }

    // Create a pool; worker_num is the number of worker threads in the pool
    private ThreadPool(int worker_num) {
        ThreadPool.worker_num = worker_num;
        workThreads = new WorkThread[worker_num];
        for (int i = 0; i < worker_num; i++) {
            workThreads[i] = new WorkThread();
            workThreads[i].start();// start the pool's threads
        }
    }

    // Singleton: get a pool with the default number of threads
    public static ThreadPool getThreadPool() {
        return getThreadPool(ThreadPool.worker_num);
    }

    // Singleton: get a pool with the given number of worker threads (worker_num1 > 0);
    // if worker_num1 <= 0, the default number of worker threads is used
    public static ThreadPool getThreadPool(int worker_num1) {
        if (worker_num1 <= 0)
            worker_num1 = ThreadPool.worker_num;
        if (threadPool == null)
            threadPool = new ThreadPool(worker_num1);
        return threadPool;
    }

    // Execute a task; in fact this only adds the task to the queue,
    // and the pool decides when it actually runs
    public void execute(Runnable task) {
        synchronized (taskQueue) {
            taskQueue.add(task);
            taskQueue.notify();
        }
    }

    // Execute a batch of tasks; again this only adds them to the queue,
    // and the pool decides when they actually run
    public void execute(Runnable[] task) {
        synchronized (taskQueue) {
            for (Runnable t : task)
                taskQueue.add(t);
            taskQueue.notify();
        }
    }

    // Execute a batch of tasks; again this only adds them to the queue,
    // and the pool decides when they actually run
    public void execute(List<Runnable> task) {
        synchronized (taskQueue) {
            for (Runnable t : task)
                taskQueue.add(t);
            taskQueue.notify();
        }
    }

    // Destroy the pool; this method waits until all tasks have been taken
    // from the queue before destroying the worker threads
    public void destroy() {
        while (!taskQueue.isEmpty()) {// tasks still pending, sleep a little
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        // stop the worker threads and null them out
        for (int i = 0; i < worker_num; i++) {
            workThreads[i].stopWorker();
            workThreads[i] = null;
        }
        threadPool = null;
        taskQueue.clear();// clear the task queue
    }

    // Return the number of worker threads
    public int getWorkThreadNumber() {
        return worker_num;
    }

    // Return the number of finished tasks; "finished" here only means the task has
    // left the task queue, it may not actually have completed execution
    public int getFinishedTasknumber() {
        return finished_task;
    }

    // Return the length of the task queue, i.e. the number of tasks not yet processed
    public int getWaitTasknumber() {
        return taskQueue.size();
    }

    // Override toString to report pool information: worker thread count and finished task count
    @Override
    public String toString() {
        return "WorkThread number:" + worker_num + "  finished task number:"
                + finished_task + "  wait task number:" + getWaitTasknumber();
    }

    /**
     * Inner class: the worker thread
     */
    private class WorkThread extends Thread {
        // Whether this worker is still active; used to stop the worker
        private boolean isRunning = true;

        /*
         * The heart of the pool: if the task queue is not empty, take a task and run it;
         * if the queue is empty, wait
         */
        @Override
        public void run() {
            Runnable r = null;
            while (isRunning) {// once the worker is marked inactive, run() returns and the thread ends
                synchronized (taskQueue) {
                    while (isRunning && taskQueue.isEmpty()) {// queue is empty
                        try {
                            taskQueue.wait(20);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                    if (!taskQueue.isEmpty())
                        r = taskQueue.remove(0);// take a task
                }
                if (r != null) {
                    r.run();// execute the task
                    finished_task++;
                }
                r = null;
            }
        }

        // Stop working: let the thread finish run() naturally and terminate
        public void stopWorker() {
            isRunning = false;
        }
    }
}

package mine.util.thread;

// Test the thread pool
public class TestThreadPool {
    public static void main(String[] args) {
        // Create a pool with 3 threads
        ThreadPool t = ThreadPool.getThreadPool(3);
        t.execute(new Runnable[] { new Task(), new Task(), new Task() });
        t.execute(new Runnable[] { new Task(), new Task(), new Task() });
        System.out.println(t);
        t.destroy();// destroy only after all tasks have completed
        System.out.println(t);
    }

    // Task class
    static class Task implements Runnable {
        private static volatile int i = 1;

        @Override
        public void run() {// execute the task
            System.out.println("Task " + (i++) + " completed");
        }
    }
}

Run results:

WorkThread number:3  finished task number:0  wait task number:6
Task 1 completed
Task 2 completed
Task 3 completed
Task 4 completed
Task 5 completed
Task 6 completed
WorkThread number:3  finished task number:6  wait task number:0

Analysis: since there is no task interface, any Runnable can be submitted, so the pool cannot accurately determine whether a task has really completed (true completion means its run() method has finished); it only knows that the task has left the task queue, i.e. it is either being executed or has already completed.
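One way to close this gap, sketched below as an assumed extension of the code above (the Task name echoes the task-interface component described earlier; the done() method is illustrative and not part of the original implementation), is to let each task report its own completion so the pool can count tasks that have genuinely finished rather than merely left the queue:

// Hypothetical task contract for the pool above; the interface and method
// names are illustrative, not part of the original code.
public interface Task extends Runnable {
    // Called by the worker thread after run() returns normally,
    // so the pool can count tasks that have truly finished executing.
    void done();
}

A worker could then call ((Task) r).done() after r.run() returns, giving each task a hook for its own clean-up and letting the pool distinguish "dequeued" from "completed".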

2. The thread pool provided by the Java class library:

The thread pool provided by Java is much more powerful. Once you understand how a thread pool works, the pools in the class library will not feel unfamiliar.

Article 2:

Java thread pool notes

One: a brief introduction
Thread usage occupies an extremely important place in Java. In JDK versions before 1.4, support for thread pools was very poor. This improved greatly from JDK 1.5 on, with the addition of the java.util.concurrent package, which focuses on threads and thread pools in Java and is a great help for the threading problems we face in development.

Two: the thread pool
What a thread pool does:

The job of a thread pool is to limit the number of threads executing in the system.
Depending on the environment, the number of threads can be set automatically or manually to get the best result: too few threads waste system resources, too many make the system congested and inefficient. The thread pool controls the thread count; other tasks wait in a queue. When one task finishes, the next task is taken from the head of the queue and started; if the queue is empty, the pool's threads wait. When a new task needs to run, it starts immediately if an idle worker thread is available; otherwise it enters the queue.

Why use a thread pool:

1. It reduces the number of thread creations and destructions; each worker thread is reused and can execute many tasks.

2. The number of worker threads can be tuned to the system's capacity, preventing the server from exhausting memory (each thread needs roughly 1 MB of memory; the more threads you open, the more memory is consumed, until the process finally crashes).

In Java, the top-level thread pool interface is Executor, but strictly speaking Executor is not a thread pool, only a tool for executing tasks. The real thread pool interface is ExecutorService.
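As a minimal sketch of the difference (not taken from the article): Executor offers only execute(Runnable), while ExecutorService adds lifecycle control such as shutdown() and submit(), which returns a Future.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorServiceDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // submit() returns a Future, which the bare Executor interface cannot do
        Future<Integer> result = pool.submit(() -> 1 + 1);
        System.out.println("result = " + result.get()); // blocks until the task finishes
        pool.shutdown();
    }
}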

The more important classes:

ExecutorService

The real thread pool interface.

ScheduledExecutorService

Similar in capability to Timer/TimerTask; solves the problem of tasks that need to run repeatedly.

ThreadPoolExecutor

The default implementation of ExecutorService.

ScheduledThreadPoolExecutor

Extends ThreadPoolExecutor and implements the ScheduledExecutorService interface; the implementation class for periodic task scheduling.

Configuring a thread pool is relatively complex, and if the principles behind it are not well understood, the resulting configuration is likely to be suboptimal. For that reason the Executors class provides static factory methods that generate some commonly used thread pools.

1. newSingleThreadExecutor

Creates a single-threaded pool. Only one thread works in this pool, which is equivalent to executing all tasks serially in a single thread. If this sole thread terminates because of an exception, a new thread takes its place. This pool guarantees that tasks are executed in the order in which they were submitted.

2. newFixedThreadPool

Creates a fixed-size pool. A thread is created for each submitted task until the pool reaches its maximum size, after which the size stays constant; if a thread terminates because of an execution exception, the pool adds a new thread to replace it.

3. newCachedThreadPool

Creates a cached pool. If the pool grows larger than the current workload requires, idle threads (those that have not run a task for 60 seconds) are reclaimed; when the number of tasks increases, the pool intelligently adds new threads. This pool places no limit on its own size; the maximum size depends entirely on how many threads the operating system (or JVM) can create.

4. newScheduledThreadPool

Creates a pool of unlimited size that supports scheduled and periodic task execution.

Examples

1: newSingleThreadExecutor

MyThread.java

public class MyThread extends Thread {

    @Override
    public void run() {
        System.out.println(Thread.currentThread().getName() + " is running...");
    }
}

TestSingleThreadExecutor.java

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TestSingleThreadExecutor {

    public static void main(String[] args) {
        // Create a single-threaded executor
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Create objects that implement Runnable; Thread of course implements Runnable too
        Thread t1 = new MyThread();
        Thread t2 = new MyThread();
        Thread t3 = new MyThread();
        Thread t4 = new MyThread();
        Thread t5 = new MyThread();

        // Hand the tasks to the pool for execution
        pool.execute(t1);
        pool.execute(t2);
        pool.execute(t3);
        pool.execute(t4);
        pool.execute(t5);

        // Shut down the pool
        pool.shutdown();
    }
}

Output:

pool-1-thread-1 is running...

pool-1-thread-1 is running...

pool-1-thread-1 is running...

pool-1-thread-1 is running...

pool-1-thread-1 is running...

2: newFixedThreadPool

TestFixedThreadPool.java

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TestFixedThreadPool {

    public static void main(String[] args) {
        // Create a reusable thread pool with a fixed number of threads (2)
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Create objects that implement Runnable; Thread of course implements Runnable too
        Thread t1 = new MyThread();
        Thread t2 = new MyThread();
        Thread t3 = new MyThread();
        Thread t4 = new MyThread();
        Thread t5 = new MyThread();

        // Hand the tasks to the pool for execution
        pool.execute(t1);
        pool.execute(t2);
        pool.execute(t3);
        pool.execute(t4);
        pool.execute(t5);

        // Shut down the pool
        pool.shutdown();
    }
}

Output:

pool-1-thread-1 is running...

pool-1-thread-2 is running...

pool-1-thread-1 is running...

pool-1-thread-2 is running...

pool-1-thread-1 is running...

3: newCachedThreadPool

TestCachedThreadPool.java

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TestCachedThreadPool {

    public static void main(String[] args) {
        // Create a cached thread pool
        ExecutorService pool = Executors.newCachedThreadPool();

        // Create objects that implement Runnable; Thread of course implements Runnable too
        Thread t1 = new MyThread();
        Thread t2 = new MyThread();
        Thread t3 = new MyThread();
        Thread t4 = new MyThread();
        Thread t5 = new MyThread();

        // Hand the tasks to the pool for execution
        pool.execute(t1);
        pool.execute(t2);
        pool.execute(t3);
        pool.execute(t4);
        pool.execute(t5);

        // Shut down the pool
        pool.shutdown();
    }
}

Output:

pool-1-thread-2 is running...

pool-1-thread-4 is running...

pool-1-thread-3 is running...

pool-1-thread-1 is running...

pool-1-thread-5 is running...

4: newScheduledThreadPool

TestScheduledThreadPoolExecutor.java

import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class TestScheduledThreadPoolExecutor {

    public static void main(String[] args) {
        ScheduledThreadPoolExecutor exec = new ScheduledThreadPoolExecutor(1);
        exec.scheduleAtFixedRate(new Runnable() {// runs at a fixed interval (the commented-out throw would trigger an exception each time)
            @Override
            public void run() {
                // throw new RuntimeException();
                System.out.println("================");
            }
        }, 1000, 5000, TimeUnit.MILLISECONDS);

        exec.scheduleAtFixedRate(new Runnable() {// prints the system time at a fixed interval, showing that the two tasks do not affect each other
            @Override
            public void run() {
                System.out.println(System.nanoTime());
            }
        }, 1000, 2000, TimeUnit.MILLISECONDS);
    }
}

Output

================

8384644549516

8386643829034

8388643830710

================

8390643851383

8392643879319

8400643939383

Three: ThreadPoolExecutor in detail
The signature of ThreadPoolExecutor's complete constructor is: ThreadPoolExecutor(int corePoolSize, int maximumPoolSize, long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory, RejectedExecutionHandler handler). Its parameters are described below (a construction sketch follows the list).

corePoolSize - the number of threads to keep in the pool, even if they are idle.

maximumPoolSize - the maximum number of threads allowed in the pool.

keepAliveTime - when the number of threads is greater than the core size, this is the maximum time that excess idle threads will wait for new tasks before terminating.

unit - the time unit of the keepAliveTime argument.

workQueue - the queue used to hold tasks before they are executed. It only holds the Runnable tasks submitted by the execute method.

threadFactory - the factory used when the executor creates a new thread.

handler - the handler used when execution is blocked because the thread bounds and queue capacity have been reached.
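To make the seven parameters concrete, here is a construction sketch; the sizes, queue capacity and policy chosen here are illustrative only, not values recommended by the text.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfigDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // corePoolSize: threads kept even when idle
                4,                                    // maximumPoolSize: upper bound on pool size
                30, TimeUnit.SECONDS,                 // keepAliveTime + unit for excess idle threads
                new ArrayBlockingQueue<Runnable>(10), // workQueue: bounded buffer of pending tasks
                Executors.defaultThreadFactory(),     // threadFactory: creates the worker threads
                new ThreadPoolExecutor.AbortPolicy());// handler: applied when pool and queue are full
        pool.execute(() -> System.out.println(Thread.currentThread().getName() + " running"));
        pool.shutdown();
    }
}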

ThreadPoolExecutor is the implementation class underlying the Executors factory methods.

In the JDK help documentation there is a passage like this:

"Programmers are strongly encouraged to use the more convenient Executors factory methods Executors.newCachedThreadPool() (an unbounded thread pool with automatic thread reclamation), Executors.newFixedThreadPool(int) (a fixed-size thread pool) and Executors.newSingleThreadExecutor() (a single background thread),

which preconfigure settings for the most common usage scenarios."

Here is the source of several of these factory methods:

ExecutorService newFixedThreadPool(int nThreads): fixed-size thread pool.

We can see that corePoolSize and maximumPoolSize are the same (in fact, as explained later, when an unbounded queue is used the maximumPoolSize parameter is meaningless). And what do the values of keepAliveTime and unit signify? That this implementation does not keep excess idle threads alive at all! The BlockingQueue chosen is LinkedBlockingQueue, whose distinguishing characteristic is that it is unbounded.

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

ExecutorService newSingleThreadExecutor(): single-threaded.

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}

ExecutorService newCachedThreadPool(): an unbounded thread pool whose threads are reclaimed automatically.

This implementation is more interesting. First, the pool is unbounded, so maximumPoolSize is as large as it can be (Integer.MAX_VALUE). Second, the BlockingQueue chosen is SynchronousQueue, which may look a little strange; put simply, in this queue each insert operation must wait for a corresponding remove operation by another thread.

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}
Let us begin with the BlockingQueue<Runnable> workQueue parameter. The JDK documentation actually explains it very clearly: there are three types of queue.

Any BlockingQueue may be used to transfer and hold submitted tasks. The queue interacts with the pool size as follows:

If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing. (If fewer than corePoolSize threads are currently running, the task is not put into the queue; a new thread is started to run it directly.)

If corePoolSize or more threads are running, the Executor always prefers queuing the request rather than adding a new thread.

If the request cannot be queued, a new thread is created unless doing so would exceed maximumPoolSize, in which case the task is rejected.
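A small sketch (values chosen only for illustration) makes the three rules visible: with corePoolSize 1, maximumPoolSize 2 and a bounded queue of capacity 1, the first task starts a core thread, the second waits in the queue, the third forces a second thread, and a fourth would be rejected.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolGrowthDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 30, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(1));
        Runnable slow = () -> {
            try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
        };
        pool.execute(slow); // fewer than corePoolSize threads -> a new core thread is created
        pool.execute(slow); // core thread busy -> the task is queued
        pool.execute(slow); // queue full -> a second (non-core) thread is created
        System.out.println("pool size = " + pool.getPoolSize()
                + ", queued = " + pool.getQueue().size()); // expected: 2 and 1
        pool.shutdown();
    }
}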

The three types of queue.

There are three general queuing strategies:

Direct hand-off. A good default choice for a work queue is SynchronousQueue, which hands tasks straight to threads without otherwise holding them. Here, an attempt to queue a task will fail if no thread is immediately available to run it, so a new thread is constructed. This strategy avoids lockups when handling sets of requests that might have internal dependencies. Direct hand-offs usually require an unbounded maximumPoolSize to avoid rejecting newly submitted tasks, which in turn admits the possibility of unbounded thread growth when commands keep arriving faster, on average, than they can be processed.

Unbounded queue. Using an unbounded queue (for example a LinkedBlockingQueue without a predefined capacity) causes new tasks to wait in the queue whenever all corePoolSize threads are busy. Thus no more than corePoolSize threads are ever created (and the value of maximumPoolSize therefore has no effect). This is suitable when each task is completely independent of the others, so tasks cannot affect each other's execution, for example in a web page server. This kind of queuing can smooth out transient bursts of requests, but it admits the possibility of unbounded queue growth when commands keep arriving faster, on average, than they can be processed.

Bounded queue. When used with a finite maximumPoolSize, a bounded queue (such as ArrayBlockingQueue) helps prevent resource exhaustion, but it can be harder to tune and control. Queue size and maximum pool size can be traded off against each other: using large queues and small pools minimizes CPU usage, operating-system resources and context-switching overhead, but may lead to lower throughput. If tasks block frequently (for example, if they are I/O bound), the system may be able to schedule more threads than you would otherwise allow. Using small queues generally requires larger pool sizes, which keeps the CPUs busier but may incur unacceptable scheduling overhead, which also reduces throughput.

Choosing a BlockingQueue.

Example 1: the direct hand-off strategy, namely SynchronousQueue.

First, SynchronousQueue is unbounded, i.e. there is no limit on how many tasks it can accept; but because of the nature of this queue, once an element has been offered, the producer must wait for another thread to take it before anything else can be added. The thread that takes the task is either a core thread or a newly created one. Now imagine the following scenario.

We construct a ThreadPoolExecutor with the following parameters:

new ThreadPoolExecutor(
        2, 3, 30, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>(),
        new RecorderThreadFactory("CookieRecorderPool"),
        new ThreadPoolExecutor.CallerRunsPolicy());

Suppose both core threads are already running.

Now another task (A) arrives. Following the rule above, "if corePoolSize or more threads are running, the Executor always prefers queuing the request rather than adding a new thread", the pool first tries to hand A over through the queue.
Then task (B) arrives while the two core threads still have not finished. The pool again tries to queue it first, but because a SynchronousQueue is used, the task cannot actually sit in the queue.
At this point the third rule applies: "if the request cannot be queued, a new thread is created unless doing so would exceed maximumPoolSize, in which case the task is rejected". So a new thread is created to run the task.
So far so good. But what if these three tasks are still unfinished and two more tasks arrive in a row? The first of them tries to join the queue and cannot, and the thread count has already reached maximumPoolSize, so the saturation (rejection) policy has to be applied.
Therefore, using SynchronousQueue usually requires maximumPoolSize to be unbounded, which avoids the situation above (if you want to limit concurrency, use a bounded queue directly instead). The JDK states the purpose of SynchronousQueue very clearly: this strategy avoids lockups when handling sets of requests that might have internal dependencies.

What does this mean? If tasks A1 and A2 depend on each other and A1 needs to run first, then submit A1 before A2: with a SynchronousQueue we can guarantee that A1 is handed to a thread first, and that A2 cannot even enter the queue until A1 has been taken for execution.
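A tiny sketch (not from the article) of the hand-off behaviour just described: offer() on a SynchronousQueue succeeds only if another thread is already waiting in take() at that moment, which is exactly why a pool backed by it either hands the task to an idle worker, creates a new thread, or rejects it, but never queues it.

import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    public static void main(String[] args) {
        SynchronousQueue<String> q = new SynchronousQueue<String>();
        // No consumer is waiting in take(), so the element cannot be handed off.
        System.out.println(q.offer("task")); // prints false
    }
}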

Example 2: the unbounded queue strategy, namely LinkedBlockingQueue.

Take newFixedThreadPool and walk through the rules mentioned earlier:

If fewer than corePoolSize threads are running, the Executor always prefers adding a new thread rather than queuing. So as tasks keep arriving, what happens once the core is full?

If corePoolSize or more threads are running, the Executor always prefers queuing the request rather than adding a new thread. OK, now tasks go into the queue; so when would a new thread ever be added?

If the request cannot be queued, a new thread is created unless doing so would exceed maximumPoolSize, in which case the task is rejected. Here is the interesting part: can queuing ever fail? Unlike SynchronousQueue with its special behaviour, an unbounded queue can always accept another task (resource exhaustion is, of course, another matter). In other words, a thread beyond corePoolSize is never created, so the value of maximumPoolSize never matters! The corePoolSize threads keep running, and whenever one becomes free it takes the next task from the queue and runs it. So you must guard against the number of queued tasks soaring: for example, if tasks run for a long time and are submitted much faster than they are processed, the queue keeps growing and memory is soon exhausted.
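If the unbounded growth described above is a concern, LinkedBlockingQueue also has a constructor that takes a capacity; the sketch below keeps the fixed-size layout of newFixedThreadPool but bounds the queue (the sizes are illustrative), so a flood of tasks hits the saturation policy instead of exhausting memory.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedFixedPoolDemo {
    public static void main(String[] args) {
        // Fixed-size pool, but with a bounded queue; once 1000 tasks are waiting,
        // further submissions are handled by the rejection policy (AbortPolicy by default).
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(1000));
        pool.execute(() -> System.out.println("running on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}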

Example 3: the bounded queue strategy, using ArrayBlockingQueue.

This is the most complex to use, and there is some truth in the JDK's reluctance to recommend it. Compared with the above, its biggest strength is that it prevents resource exhaustion.

For example, consider the following constructor call:

new ThreadPoolExecutor(
        2, 4, 30, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>(2),
        new RecorderThreadFactory("CookieRecorderPool"),
        new ThreadPoolExecutor.CallerRunsPolicy());

Assume that no task ever finishes executing.

The first arrivals, A and B, are run directly; then, if C and D arrive, they are placed in the queue; if E and F arrive after that, threads are added to run E and F. But if yet another task arrives, the queue cannot accept it and the thread count has already reached its maximum, so the rejection policy is used to handle it.

keepAliveTime

The JDK's explanation is: when the number of threads is greater than the core size, this is the maximum time that excess idle threads will wait for new tasks before terminating.

A bit of a mouthful, but in fact not hard to understand: most applications that use a "pool" need to configure similar parameters, for example the maxIdle and minIdle parameters of the DBCP database connection pool.

Think of the pool as a workshop: the corePoolSize threads are the boss's regular staff, and threads beyond the core are workers "borrowed" when the work piles up. As the saying goes, "what is borrowed must be returned"; the question is when. If a borrowed worker is sent back the moment he finishes a task and more work then shows up, the boss has to borrow him all over again; going back and forth like that would certainly give the boss a headache.

A reasonable strategy: since the workers have been borrowed anyway, keep them around a little longer. Only when, after a "certain period" of time, it turns out they are no longer needed are they sent back. That "certain period" is exactly what keepAliveTime means, and TimeUnit is the unit in which the keepAliveTime value is measured.
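In code, keepAliveTime can also be changed after construction, and since JDK 6 even the core threads can be allowed to time out; a brief sketch (the values are illustrative):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class KeepAliveDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 8, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(100));
        // Idle threads beyond the core are reclaimed after the 60 seconds set above.
        pool.allowCoreThreadTimeOut(true);           // let idle core threads be reclaimed too
        pool.setKeepAliveTime(30, TimeUnit.SECONDS); // the timeout can be adjusted at runtime
        pool.shutdown();
    }
}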

RejectedExecutionHandler

Another situation: even after borrowing extra workers, tasks keep pouring in and everyone is still too busy, so the whole team has to start turning work away.

The RejectedExecutionHandler interface gives you the chance to customize how rejected tasks are handled. ThreadPoolExecutor already includes four policies by default; since their source code is very simple, it is posted directly below.

CallerRunsPolicy: the thread that called execute runs the task itself. This policy provides a simple feedback-control mechanism that slows down the rate at which new tasks are submitted.

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    if (!e.isShutdown()) {
        r.run();
    }
}

This policy clearly does not want to drop the task; since the pool has no capacity left, the thread that called execute simply runs the task itself.

AbortPolicy: the handler rejects the task by throwing a runtime RejectedExecutionException.

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    throw new RejectedExecutionException();
}

This policy simply throws an exception and discards the task.

DiscardPolicy: a task that cannot be executed is silently dropped.

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
}

This policy is almost the same as AbortPolicy: the task is discarded, but no exception is thrown.

DiscardOldestPolicy: if the executor has not been shut down, the task at the head of the work queue is removed and execution is retried (if it fails again, the procedure repeats).

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    if (!e.isShutdown()) {
        e.getQueue().poll();
        e.execute(r);
    }
}

This policy is a little more involved: provided the pool has not been shut down, it first discards the oldest task waiting in the queue and then tries to run the new task. It needs to be used with some care.

The idea: while the threads are all busy, each new task kicks the oldest task out of the queue and takes its place; the next task to arrive again kicks out whatever is then the oldest.
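Beyond the four built-in policies, RejectedExecutionHandler can also be implemented directly; here is a sketch of a handler that just logs the rejection (the class name is made up for illustration), installed either through the constructor's handler parameter or with setRejectedExecutionHandler().

import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

// Illustrative custom policy: record the rejection instead of throwing or silently dropping.
public class LoggingRejectionHandler implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
        // A real application might write to a logger or bump a metrics counter here.
        System.err.println("Rejected task " + r + "; pool state: " + e);
    }
}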

To sum up:

keepAliveTime and maximumPoolSize are closely related to the type of BlockingQueue. If the BlockingQueue is unbounded, maximumPoolSize is never reached, and keepAliveTime naturally has no meaning either.

Conversely, if the core size is small, the bounded BlockingQueue's capacity is also small, and keepAliveTime is set small as well, then under a steady stream of tasks the system will be constantly creating and reclaiming threads.

newFixedThreadPool is exactly the first case: an unbounded LinkedBlockingQueue, core and maximum size equal, and a keepAliveTime of zero.

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}
