Java thread pools explained: this one article is enough

Introduction
I believe everyone has run into thread-pool questions in interviews; they come up very frequently. If you don't understand thread pools, or only understand them superficially, they are worth studying properly, because they are genuinely interesting.

Why use a thread pool

We sometimes start a thread like this for convenience or when writing a test:

        new Thread(new Runnable() {
            @Override
            public void run() {
                // processing logic...
            }
        }).start();

There is nothing wrong with this, but when you have many tasks to run, frequently creating and destroying threads carries a real cost in system overhead and execution efficiency. If you run your code through the Alibaba Java Coding Guidelines checker, it will warn you: do not create threads explicitly, use a thread pool instead. If the time spent creating and destroying a thread exceeds the time the task itself takes to execute, the exercise is counterproductive; it would be better to keep the thread alive. So we start looking for a way to manage a group of threads, and that is exactly what a thread pool provides, as the short example below shows.
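As a rough illustration (the pool size of 4 and the task count are arbitrary choices for this sketch), the same work can be handed to a reusable pool of worker threads instead of spawning a new thread per task:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PoolInsteadOfRawThreads {
        public static void main(String[] args) {
            // A small pool reuses a fixed set of worker threads instead of
            // creating and destroying one thread per task.
            ExecutorService pool = Executors.newFixedThreadPool(4);
            for (int i = 0; i < 10; i++) {
                final int taskId = i;
                pool.execute(() -> System.out.println(
                        Thread.currentThread().getName() + " runs task " + taskId));
            }
            pool.shutdown(); // stop accepting new tasks; already submitted tasks still run
        }
    }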

The thread pool model

To understand anything, we need to understand its model, that is, its basic components and the role each one plays. Before explaining the thread pool model, there are a few related types we need to look at:

public interface Executor {
    // An interface whose single method executes a submitted task
    void execute(Runnable command);
}
public interface ExecutorService extends Executor {
    // An interface that extends Executor and adds common APIs for submitting tasks, shutting down, and so on
    void shutdown();

    List<Runnable> shutdownNow();

    boolean isShutdown();

    boolean isTerminated();

    boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedException;

    <T> Future<T> submit(Callable<T> task);

    <T> Future<T> submit(Runnable task, T result);

    Future<?> submit(Runnable task);

    <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks)
        throws InterruptedException;

    <T> List<Future<T>> invokeAll(Collection<? extends Callable<T>> tasks,
                                  long timeout, TimeUnit unit)
        throws InterruptedException;


    <T> T invokeAny(Collection<? extends Callable<T>> tasks)
        throws InterruptedException, ExecutionException;

    <T> T invokeAny(Collection<? extends Callable<T>> tasks,
                    long timeout, TimeUnit unit)
        throws InterruptedException, ExecutionException, TimeoutException;
}

There is also the Executors class, which provides several factory methods for obtaining ExecutorService instances; and then there is the most important class, ThreadPoolExecutor, which we are about to explain and which indirectly implements the ExecutorService interface (a quick usage sketch follows).
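For example, a minimal sketch using one of the Executors factory methods mentioned above: submit() hands a Callable to the pool and returns a Future that lets the caller wait for the result.

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class SubmitDemo {
        public static void main(String[] args) throws Exception {
            ExecutorService service = Executors.newSingleThreadExecutor();

            // submit(Callable) returns a Future representing the pending result
            Future<Integer> future = service.submit(new Callable<Integer>() {
                @Override
                public Integer call() {
                    return 1 + 1;
                }
            });

            System.out.println("result = " + future.get()); // blocks until the task finishes
            service.shutdown();
        }
    }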

To understand the thread pool model, we focus on the ThreadPoolExecutor class. First, let's look at its constructors:

    // Constructor with five parameters
    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue) {
        this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
             Executors.defaultThreadFactory(), defaultHandler);
    }

    // Constructor with six parameters (adds a ThreadFactory)
    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              ThreadFactory threadFactory) {
        this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
             threadFactory, defaultHandler);
    }

    // Constructor with six parameters (adds a RejectedExecutionHandler)
    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              RejectedExecutionHandler handler) {
        this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
             Executors.defaultThreadFactory(), handler);
    }

    // Constructor with seven parameters (adds both a ThreadFactory and a RejectedExecutionHandler)
    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              ThreadFactory threadFactory,
                              RejectedExecutionHandler handler) {
        if (corePoolSize < 0 ||
            maximumPoolSize <= 0 ||
            maximumPoolSize < corePoolSize ||
            keepAliveTime < 0)
            throw new IllegalArgumentException();
        if (workQueue == null || threadFactory == null || handler == null)
            throw new NullPointerException();
        this.corePoolSize = corePoolSize;
        this.maximumPoolSize = maximumPoolSize;
        this.workQueue = workQueue;
        this.keepAliveTime = unit.toNanos(keepAliveTime);
        this.threadFactory = threadFactory;
        this.handler = handler;
    }

Let's walk through these parameters (a complete construction example follows the list):

  • int corePoolSize: The number of core threads in the pool. When a thread pool is created, no threads are created by default; threads are created only as tasks are submitted. As long as the current number of threads has not reached corePoolSize, a new core thread is created to run each submitted task, even if existing core threads are idle. Core threads are normally never reclaimed by the pool, unless you set allowCoreThreadTimeOut to true, in which case a core thread whose idle time exceeds the timeout is also reclaimed. If you call prestartAllCoreThreads, as the name suggests, the core threads are created in the pool up front.

  • int maximumPoolSize: The maximum number of threads in the thread pool, including the number of core threads.

  • long keepAliveTime: This parameter means that when the idle time of non-core threads in the thread pool reaches this set value, they will be destroyed and recycled. If you set the allowCoreThreadTimeOut value to true, the core thread will also be recycled when the idle time is greater than the set value.

  • TimeUnit unit: This is the unit of the above keepAliveTime parameter, which can be DAYS (days), HOURS (hours), MINUTES (minutes), SECONDS (seconds), MILLISECONDS (milliseconds), MICROSECONDS (microseconds), NANOSECONDS (nanoseconds).

  • BlockingQueue workQueue: Literally, a blocking queue. The commonly used queues are explained below:
    1 ArrayBlockingQueue
    A bounded, array-backed blocking queue. While the number of threads is below corePoolSize, a new core thread is created to execute each submitted task. Once the pool has reached the core size, tasks are added to this queue and wait. If the queue is full, a new non-core thread is created to execute the task. If the number of threads has reached maximumPoolSize, the pool is saturated at that moment and the rejection policy is applied (by default an exception is thrown).
    2 SynchronousQueue
    A synchronous handoff queue that holds no tasks itself: each task is handed directly to a thread for processing. If all threads are busy, a new thread is created to handle the task. When using this queue, maximumPoolSize is usually set to Integer.MAX_VALUE; since the queue stores nothing, if the pool could not create a new thread once the maximum was reached, submissions would fail.
    3 LinkedBlockingQueue
    An unbounded blocking queue backed by a linked list (a capacity can be set, but usually is not). While the number of threads is below corePoolSize, each newly submitted task gets a new thread. Once the core size is reached, tasks wait in the queue. Because the queue is unbounded, the total number of threads never grows beyond the core size, so the value of maximumPoolSize is meaningless here; typically corePoolSize = maximumPoolSize.

  • ThreadFactory threadFactory
    An interface; we can implement its single method to customize the threads the pool creates, for example to give them more meaningful names:

    private class CustomThreadFactory implements ThreadFactory {

        private AtomicInteger count = new AtomicInteger(0);

        @Override
        public Thread newThread(Runnable r) {
            Thread t = new Thread(r);
            String threadName = CustomThreadPoolExecutor.class.getSimpleName() + count.addAndGet(1);
            System.out.println(threadName);
            t.setName(threadName);
            return t;
        }
    }
  • RejectedExecutionHandler handler
    This is also an interface; it decides what happens when the pool can no longer accept a task (for example when the pool is saturated or has been shut down). The JDK ships four implementations, shown below; the default is AbortPolicy, which throws a RejectedExecutionException.
    public static class CallerRunsPolicy implements RejectedExecutionHandler {
        // Runs the rejected task directly on the thread that called execute()
        public CallerRunsPolicy() { }

        public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
            if (!e.isShutdown()) {
                r.run();
            }
        }
    }

    public static class AbortPolicy implements RejectedExecutionHandler {
        // The default policy: rejects the task by throwing RejectedExecutionException
        public AbortPolicy() { }

        public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
            throw new RejectedExecutionException("Task " + r.toString() +
                                                 " rejected from " +
                                                 e.toString());
        }
    }


    public static class DiscardPolicy implements RejectedExecutionHandler {
        // Silently discards the rejected task and does nothing
        public DiscardPolicy() { }

        public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
        }
    }


    public static class DiscardOldestPolicy implements RejectedExecutionHandler {
        // Drops the oldest task waiting in the queue, then retries the rejected task
        public DiscardOldestPolicy() { }

        public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
            if (!e.isShutdown()) {
                e.getQueue().poll();
                e.execute(r);
            }
        }
    }
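Putting the parameters together, here is the construction example promised above: a hedged sketch using the seven-argument constructor. The pool sizes, queue capacity, and thread name are arbitrary illustration values, not recommendations.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class CustomPoolDemo {
        public static void main(String[] args) {
            ThreadPoolExecutor executor = new ThreadPoolExecutor(
                    2,                                        // corePoolSize
                    4,                                        // maximumPoolSize
                    30, TimeUnit.SECONDS,                     // keepAliveTime for non-core threads
                    new ArrayBlockingQueue<Runnable>(10),     // bounded work queue
                    r -> new Thread(r, "demo-pool-thread"),   // ThreadFactory (name is illustrative)
                    new ThreadPoolExecutor.AbortPolicy());    // rejection policy (the default one)

            executor.allowCoreThreadTimeOut(true); // optionally let idle core threads time out too
            executor.prestartAllCoreThreads();     // optionally create the core threads up front

            executor.execute(() ->
                    System.out.println("task running on " + Thread.currentThread().getName()));
            executor.shutdown();
        }
    }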

Execution strategy of ThreadPoolExecutor

In fact, the execution strategy was already covered in some detail above when explaining the blocking queues, because the strategy follows from the rules of the specific queue and therefore differs from queue to queue. Let's summarize.

ArrayBlockingQueue: when a task arrives,
1 If the number of threads has not reached the core size, a new core thread is created to execute the task
2 If the number of threads has reached the core size, the task is placed in the queue to wait for execution
3 If the queue is also full, a non-core thread is created to execute the task
4 If the queue is full and the pool has reached its maximum number of threads, the RejectedExecutionHandler rejection policy is applied (a sketch of this fill order follows).
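A small sketch that makes this fill order visible; the tiny sizes are deliberate, and the 1-second sleeps just keep the workers busy while the four tasks are submitted:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.RejectedExecutionException;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class BoundedQueueDemo {
        public static void main(String[] args) {
            // 1 core thread, max 2 threads, queue capacity 1:
            // task 1 -> core thread, task 2 -> queue, task 3 -> non-core thread, task 4 -> rejected
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    1, 2, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(1));

            Runnable sleepy = () -> {
                try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
            };

            for (int i = 1; i <= 4; i++) {
                try {
                    pool.execute(sleepy);
                    System.out.println("task " + i + " accepted");
                } catch (RejectedExecutionException e) {
                    System.out.println("task " + i + " rejected"); // default AbortPolicy throws
                }
            }
            pool.shutdown();
        }
    }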

SynchronousQueue: because this queue holds no tasks, the core size is usually set to 0 and maximumPoolSize to Integer.MAX_VALUE, so that thread creation is never blocked by the maximum limit. When a task arrives:
1 A thread is used to execute the task directly
2 If all threads are busy, a new thread is created to handle the task (see the sketch below)
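This is the same configuration that newCachedThreadPool() uses, as shown further down; a minimal sketch of building it by hand:

    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class HandoffQueueDemo {
        public static void main(String[] args) {
            // 0 core threads, effectively unlimited maximum, 60-second idle timeout:
            // every task is handed directly to a thread, creating a new one if all are busy
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    0, Integer.MAX_VALUE, 60, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());

            for (int i = 0; i < 5; i++) {
                pool.execute(() -> System.out.println(Thread.currentThread().getName()));
            }
            pool.shutdown();
        }
    }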

LinkedBlockingQueue (unbounded): when a task arrives,
1 If the number of threads has not reached the core size, a new core thread is created to execute the task
2 If the number of threads has reached the core size, the task is placed in the queue to wait for execution
3 Because the queue is unbounded, the total number of threads never exceeds the core size; extra tasks simply accumulate in the queue. The value of maximumPoolSize is therefore meaningless here, and usually corePoolSize = maximumPoolSize.

Common thread pools in Executors

public class Executors {
    // uses a LinkedBlockingQueue as the work queue
    public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }

    // uses a SynchronousQueue as the work queue
    public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }

    // uses a LinkedBlockingQueue; core size and maximum size are both 1
     public static ExecutorService newSingleThreadExecutor() {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>()));
    }

    // used for scheduling tasks to run after a delay or periodically
    public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) {
        return new ScheduledThreadPoolExecutor(corePoolSize);
    }
    // (constructor of ScheduledThreadPoolExecutor, shown here for reference)
    public ScheduledThreadPoolExecutor(int corePoolSize) {
        super(corePoolSize, Integer.MAX_VALUE,
              DEFAULT_KEEPALIVE_MILLIS, MILLISECONDS,
              new DelayedWorkQueue());
    }
}

The above four common thread pools are explained separately:

1 newFixedThreadPool(): Uses a LinkedBlockingQueue as the task queue, and the maximum number of threads equals the core size, matching what was described above: because the queue is unbounded, the total number of threads never exceeds the core size, the maximum setting has no effect, and extra tasks wait in the queue. This caps the level of concurrency, with excess tasks waiting in the queue.

2 newCachedThreadPool(): Uses a SynchronousQueue as the queue, which holds no tasks; the core size is 0 and the maximum number of threads is Integer.MAX_VALUE, so whenever no existing thread is idle, a new thread is created for the incoming task, with essentially no limit on the number of threads.

3 newSingleThreadExecutor(): A single-threaded pool, which can be thought of as a special case of newFixedThreadPool with one thread; tasks are executed one at a time in the FIFO order of the queue.

4 newScheduledThreadPool(): Supports running tasks after a delay or periodically, for example driving a banner carousel; a small scheduling sketch follows.
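For instance, a periodic task can be scheduled like this (a minimal sketch; the 1-second initial delay and 2-second period are arbitrary values):

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ScheduledDemo {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

            // run a task every 2 seconds after an initial 1-second delay,
            // e.g. advancing a banner carousel; the scheduler keeps running
            // until shutdown() is called
            scheduler.scheduleAtFixedRate(
                    () -> System.out.println("tick at " + System.currentTimeMillis()),
                    1, 2, TimeUnit.SECONDS);
        }
    }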

That's all for thread pools.
