Mastering the thread pool: taking control of your threads

1. Why do you need a thread pool

Modern CPUs are extremely fast, and to make full use of that performance and improve throughput we use multiple threads in our programs. Under high concurrency, however, frequently creating and destroying threads actually slows the program down. To manage thread resources and reduce the overhead spent on thread creation and destruction, the thread pool was introduced.

2. When should you use a thread pool?

When a server receives a large number of tasks, using a thread pool significantly reduces the number of threads created and destroyed, which improves performance.
In practical development, once you need to create more than five threads, it is worth managing them with a thread pool.

3. Thread pool parameters and characteristics

Figure (thread pool parameters): https://user-gold-cdn.xitu.io/2020/1/12/16f9a146e43c9ba2?imageView2/0/w/1280/h/960/format/webp/ignore-error/1

3.1 corePoolSize and maxPoolSize

corePoolSize: when the thread pool is first created it contains no threads; threads are created only when tasks arrive, up to the core pool size.

maxPoolSize: the pool may add some extra threads on top of the core threads, but the total number of threads is capped at maxPoolSize. For example, if the workload is very heavy on the first day and very light on the next, maxPoolSize gives the pool the flexibility to scale its capacity up and back down.

3.2 Rules for adding threads

If the number of threads is less than corePoolSize, a new thread is created to run the task even if other threads are idle.
If the number of threads is greater than or equal to corePoolSize, the task is placed in the work queue (as long as the queue has room).
If the queue is full and the number of threads is less than maxPoolSize, a new thread is created to run the task.
If the queue is full and the number of threads is greater than or equal to maxPoolSize, the task is rejected.
A small sketch after the flow diagram below illustrates these four rules.
Execution flow:
Figure (task submission flow): https://user-gold-cdn.xitu.io/2020/1/12/16f9a29c384d9962?imageView2/0/w/1280/h/960/format/webp/ignore-error/1
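
A rough sketch of these rules (not from the JDK; all parameter values and the class name are illustrative): with corePoolSize = 2, maxPoolSize = 4 and a queue of capacity 2, the first two tasks create core threads, the next two are queued, the next two create extra threads, and the seventh is rejected.

import java.util.concurrent.*;

public class AddThreadRuleDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4,                          // corePoolSize, maxPoolSize
                60L, TimeUnit.SECONDS,         // keepAliveTime
                new ArrayBlockingQueue<>(2));  // bounded work queue of capacity 2

        for (int i = 1; i <= 7; i++) {
            try {
                // each task holds its thread for 2 s so the pool fills up
                pool.execute(() -> {
                    try { Thread.sleep(2000); } catch (InterruptedException ignored) { }
                });
                System.out.println("task " + i + " accepted, pool size = " + pool.getPoolSize());
            } catch (RejectedExecutionException e) {
                // thrown by the default AbortPolicy once the queue and maxPoolSize are both exhausted
                System.out.println("task " + i + " rejected");
            }
        }
        pool.shutdown();
    }
}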

3.3 How the parameters change the pool's behavior

If corePoolSize and maxPoolSize are set to the same value, the pool has a fixed size.
If you want the pool to keep only a small number of threads and grow only when the load increases, set corePoolSize small and maxPoolSize larger.
If maxPoolSize is set to a very large value such as Integer.MAX_VALUE, the pool can accommodate an arbitrary number of concurrent tasks.
Threads beyond corePoolSize are created only when the queue is full, so with an unbounded queue (such as LinkedBlockingQueue) the number of threads never exceeds corePoolSize.

3.4 keepAliveTime

If the current number of threads is greater than corePoolSize and the excess threads stay idle for longer than keepAliveTime, they are terminated.

The keepAliveTime parameter reduces resource consumption when the pool holds more threads than it needs.
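
As an illustration (a fragment, values are arbitrary), keepAliveTime is simply the third and fourth constructor arguments; by default it applies only to the threads above corePoolSize, and allowCoreThreadTimeOut(true) extends it to the core threads as well.

// fragment: assumes java.util.concurrent.* is imported
// idle threads beyond the 2 core threads are reclaimed after 30 seconds of inactivity
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2, 10,
        30L, TimeUnit.SECONDS,           // keepAliveTime and its unit
        new ArrayBlockingQueue<>(100));

// optionally let the core threads time out as well
pool.allowCoreThreadTimeOut(true);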

3.5 threadFactory

New threads are created by the ThreadFactory. The default, Executors.defaultThreadFactory(), creates threads that belong to the same thread group, have NORM_PRIORITY priority, and are not daemon threads. If you supply your own ThreadFactory, you can change the thread name, thread group, priority, daemon status, and so on. In most cases the default factory is sufficient.
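
For example, a minimal custom ThreadFactory (a sketch; the class and prefix names are made up) that gives threads readable names while keeping the default priority and non-daemon status:

import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

class NamedThreadFactory implements ThreadFactory {
    private final String prefix;
    private final AtomicInteger counter = new AtomicInteger(1);

    NamedThreadFactory(String prefix) {
        this.prefix = prefix;
    }

    @Override
    public Thread newThread(Runnable r) {
        // e.g. "order-pool-1", "order-pool-2", ...
        Thread t = new Thread(r, prefix + "-" + counter.getAndIncrement());
        t.setDaemon(false);                   // normal (non-daemon) threads, like the default factory
        t.setPriority(Thread.NORM_PRIORITY);  // same default priority as defaultThreadFactory
        return t;
    }
}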

3.6 workQueue

Direct handoff (SynchronousQueue): suitable when there are few tasks; it only hands tasks over and cannot store them, so when using this queue maxPoolSize needs to be set fairly large.

Unbounded queue (LinkedBlockingQueue): if an unbounded queue is used as the workQueue, the maxPoolSize setting effectively has no effect. The advantage is that it absorbs bursts of traffic; the disadvantage is that if tasks are submitted faster than they can be processed, the queue keeps growing and eventually causes an OOM error.

Bounded queue (ArrayBlockingQueue): with a bounded queue maxPoolSize is meaningful; you can set the queue size and thereby keep the thread pool's resource usage under control.
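
The three options are just different BlockingQueue implementations passed in as the workQueue constructor argument; a quick sketch (capacities are arbitrary):

// fragment: assumes java.util.concurrent.* is imported
// direct handoff: stores nothing, so pair it with a large maxPoolSize
BlockingQueue<Runnable> handoff   = new SynchronousQueue<>();

// unbounded queue: maxPoolSize is effectively ignored and tasks can pile up
BlockingQueue<Runnable> unbounded = new LinkedBlockingQueue<>();

// bounded queue: capacity 100, which makes maxPoolSize meaningful
BlockingQueue<Runnable> bounded   = new ArrayBlockingQueue<>(100);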

4. Should a thread pool be created manually or with the JDK factory methods?

Creating it manually is better, because it forces us to understand how the pool operates and helps avoid the risk of resource exhaustion.

4.1 Problems with using the JDK's prepackaged thread pools directly

newFixedThreadPool

public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }

newFixedThreadPool passes the same value for corePoolSize and maxPoolSize, so the number of threads is fixed. A keepAliveTime of 0L means idle threads are destroyed immediately, and the workQueue is an unbounded LinkedBlockingQueue. The potential problem is that when tasks are submitted faster than they are processed, a large number of tasks pile up in the workQueue and cause an OOM error.

4.2 Demonstrating the newFixedThreadPool memory-overflow problem

/**
 * Demonstrates the OOM problem of a newFixedThreadPool pool
 */
public class FixedThreadPoolOOM {

    private static ExecutorService executorService = Executors.newFixedThreadPool(1);

    public static void main(String[] args) {
        for (int i = 0; i < Integer.MAX_VALUE; i++) {
            executorService.execute(new SubThread());
        }
    }
}

class SubThread implements Runnable {

    @Override
    public void run() {
        try {
            //keep the task running for a very long time
            Thread.sleep(1000000000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

Set the JVM heap arguments to a small value (screenshot omitted).

Running the program then ends with a java.lang.OutOfMemoryError (screenshot omitted).

4.3 newSingleThreadExecutor

Print the name of the thread used by the pool:

public class SingleThreadExecutor {
    public static void main(String[] args) {
        ExecutorService executorService = Executors.newSingleThreadExecutor();
        for (int i = 0; i < 1000; i++) {
            executorService.execute(new Task());
        }
    }
}
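
The Task class used in this and the following examples is not shown here; a minimal guess at what it looks like is a Runnable that simply prints the name of the executing thread:

class Task implements Runnable {
    @Override
    public void run() {
        // print which pool thread executed this task
        System.out.println(Thread.currentThread().getName());
    }
}

With newSingleThreadExecutor, every task then prints the same thread name.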


Looking at the newSingleThreadExecutor source:

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}

The source shows that newSingleThreadExecutor is essentially the same as newFixedThreadPool; only the corePoolSize and maxPoolSize values differ (both are 1), so newSingleThreadExecutor has the same memory-overflow problem.

4.4 newCachedThreadPool

newCachedThreadPool, also known as the cacheable thread pool, is an unbounded pool that automatically reclaims idle threads.

public class CachedThreadPool {
    public static void main(String[] args) {
        ExecutorService executorService = Executors.newCachedThreadPool();
        for (int i = 0; i < 1000; i++) {
            executorService.execute(new Task());
        }
    }
}

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}

newCachedThreadPool sets maxPoolSize to Integer.MAX_VALUE, so threads can be created without limit, which may eventually cause an OOM error.

4.5 newScheduledThreadPool

This pool supports the execution of periodic tasks.

public class ScheduledThreadPoolTest {
    public static void main(String[] args) {
        ScheduledExecutorService scheduledExecutorService =
                Executors.newScheduledThreadPool(10);
//        scheduledExecutorService.schedule(new Task(), 5, TimeUnit.SECONDS);
        scheduledExecutorService.scheduleAtFixedRate(new Task(), 1, 3, TimeUnit.SECONDS);
    }
}

public ScheduledThreadPoolExecutor(int corePoolSize) {
    super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
          new DelayedWorkQueue());
}


4.6 The right way to create a thread pool

Set the thread pool parameters yourself according to the business scenario, for example based on how much memory is available, and give the threads your own meaningful names.
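
A sketch of what creating the pool yourself can look like (all values are illustrative, and NamedThreadFactory is the custom factory sketched in section 3.5, not a JDK class):

// fragment: assumes java.util.concurrent.* is imported
ThreadPoolExecutor orderPool = new ThreadPoolExecutor(
        4,                                    // corePoolSize: sized for the expected steady load
        8,                                    // maxPoolSize: extra headroom for bursts
        60L, TimeUnit.SECONDS,                // reclaim the extra threads after 60 s of idleness
        new ArrayBlockingQueue<>(200),        // bounded queue so tasks cannot pile up without limit
        new NamedThreadFactory("order-pool"), // readable thread names for logs and thread dumps
        new ThreadPoolExecutor.CallerRunsPolicy()); // slow the submitter down instead of dropping tasks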

4.7 How many threads should a thread pool have?

CPU-bound tasks (encryption, hash computation, etc.): the optimal thread count is 1 to 2 times the number of CPU cores.
Time-consuming I/O tasks (database reads and writes, file and network I/O, etc.): the optimal thread count is usually many times the number of CPU cores. Use JVM monitoring of how busy the threads are as a guide, and make sure idle threads can keep picking up work. A reference formula recommended by Brian Goetz:

Thread count = number of CPU cores × (1 + average wait time / average compute time)
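
Applying the formula in code (the wait and compute times are assumed numbers that you would measure yourself, for example via JVM monitoring):

// fragment
int cores = Runtime.getRuntime().availableProcessors();
double avgWaitMs    = 50;   // measured average time a task spends waiting on I/O
double avgComputeMs = 10;   // measured average time a task spends on the CPU

int nThreads = (int) (cores * (1 + avgWaitMs / avgComputeMs));
// e.g. 8 cores * (1 + 50 / 10) = 48 threads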

5. Comparing the characteristics of these thread pools


FixedThreadPool: corePoolSize and maxPoolSize are passed in manually and are equal, so tasks are executed by a fixed number of threads.

SingleThreadExecutor: corePoolSize and maxPoolSize are both 1, so a single thread executes all tasks from start to finish.

CachedThreadPool: it has no core threads to maintain; a thread is created whenever one is needed, and because the keep-alive time is 60 seconds it relies on that parameter to reclaim idle threads automatically.

ScheduledThreadPool: this pool can run scheduled tasks; corePoolSize is passed in manually, maxPoolSize is Integer.MAX_VALUE, and it also reclaims idle threads automatically.

5.1 Why do FixedThreadPool and SingleThreadExecutor use a LinkedBlockingQueue?

Because these two pools have the same core and maximum thread counts, they cannot grow to match an unpredictable task load, so they compensate by using an unbounded queue.

5.2 Why does CachedThreadPool use a SynchronousQueue?

Because the cached pool's maximum thread count is effectively unlimited, a thread can simply be created whenever a task arrives, so there is no need for a queue to store tasks. Avoiding the queue hand-off removes a transfer step and improves execution efficiency.

5.3 Why does ScheduledThreadPool use the delay queue DelayedWorkQueue?

Because ScheduledThreadPool is a pool for delayed tasks, a delay queue makes it easy to hold each task until its scheduled execution time.

5.4 workStealingPool, added in JDK 1.8

workStealingPool is suited to workloads that spawn subtasks, such as traversing a binary tree.
workStealingPool threads can steal work from each other's queues.
It is best not to lock inside the tasks, and the execution order is not guaranteed (see the sketch below).
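
A minimal usage sketch: Executors.newWorkStealingPool() returns an ExecutorService backed by a ForkJoinPool, and as noted above the output order is not guaranteed.

// fragment: assumes java.util.concurrent.* is imported and the caller declares throws InterruptedException
ExecutorService pool = Executors.newWorkStealingPool(); // parallelism defaults to the number of CPU cores

for (int i = 0; i < 10; i++) {
    final int n = i;
    pool.submit(() -> System.out.println("task " + n + " on " + Thread.currentThread().getName()));
}

// the work-stealing threads are daemons, so keep the JVM alive long enough to see the output
pool.awaitTermination(1, TimeUnit.SECONDS);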

6. The right way to stop a thread pool

shutdown: calling shutdown() does not necessarily stop the pool immediately; it merely initiates the shutdown process. Threads in the pool may still be running and the queue may still hold pending tasks, so the pool cannot stop on the spot. When this method is called, the pool finishes the currently running tasks and the tasks waiting in the queue before shutting down, and any new tasks submitted during this period are rejected.

/**
 * Demonstrates shutting down a thread pool
 */
public class ShutDown {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executorService = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 1000; i++) {
            executorService.execute(new ShutDownTask());
        }
        Thread.sleep(1500);
        executorService.shutdown();

        //submit another task after shutdown; it will be rejected
        executorService.execute(new ShutDownTask());
    }
}

class ShutDownTask implements Runnable {

    @Override
    public void run() {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName());
    }
}


isShutdown: can be used to check whether shutdown() has been called on the pool.

/**
 * Demonstrates shutting down a thread pool
 */
public class ShutDown {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executorService = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 1000; i++) {
            executorService.execute(new ShutDownTask());
        }
        Thread.sleep(1500);
        System.out.println(executorService.isShutdown());
        executorService.shutdown();
        System.out.println(executorService.isShutdown());
        //submitting another task here would be rejected
//        executorService.execute(new ShutDownTask());
    }
}

class ShutDownTask implements Runnable {

    @Override
    public void run() {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName());
    }
}


isTerminated: checks whether the pool has terminated completely (shutdown was initiated and all tasks have finished).

/**
 * Demonstrates shutting down a thread pool
 */
public class ShutDown {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executorService = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 1000; i++) {
            executorService.execute(new ShutDownTask());
        }
        Thread.sleep(1500);
        System.out.println(executorService.isShutdown());
        executorService.shutdown();
        System.out.println(executorService.isShutdown());
        System.out.println(executorService.isTerminated());
        //submitting another task here would be rejected
//        executorService.execute(new ShutDownTask());
    }
}

If the loop count is changed to 100 and the code sleeps for 10 seconds before the first isTerminated() call, isTerminated() then returns true, because by that time all tasks have finished (screenshots omitted).

awaitTermination: pass in a waiting time; the call blocks until the pool has terminated, the timeout elapses, or the thread is interrupted, and then reports whether the pool terminated. It is mainly used for checking and waiting.

//after waiting up to 3 s, report whether the pool has terminated (returns a boolean)
System.out.println(executorService.awaitTermination(3L, TimeUnit.SECONDS));

shutdownNow: when this method is called, the pool stops immediately (running tasks are interrupted) and the tasks that have not yet been processed are returned. If those tasks still need to run, the returned list can be submitted to a pool again.
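
A small sketch of keeping the returned tasks (anotherExecutor is a hypothetical second pool):

// fragment: assumes java.util.List and java.util.concurrent.* are imported
List<Runnable> notExecuted = executorService.shutdownNow(); // stop now, get back the tasks still in the queue
System.out.println(notExecuted.size() + " tasks were never started");

// if they still matter, they can be resubmitted later, e.g.:
// notExecuted.forEach(anotherExecutor::execute);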

7. Too many tasks: how are they rejected?

7.1 When rejection happens

New tasks are rejected when the Executor has been shut down.
They are also rejected when the Executor uses finite bounds for both the maximum thread count and the work-queue capacity, and both are saturated.

7.2 Rejection policies

AbortPolicy (abort): rejects the task by throwing a RejectedExecutionException.
DiscardPolicy (discard): silently discards the task without notifying the submitter.
DiscardOldestPolicy (discard oldest): discards the task that has been waiting in the queue the longest to make room for the new one.
CallerRunsPolicy: if, for example, the main thread submits a task while the pool is saturated, under this policy the submitting thread runs the task itself.

Summary: the fourth policy is smarter than the first three and avoids the losses they cause. Because the submitting thread is kept busy running the task, the submission rate naturally slows down, producing a negative-feedback effect.
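
The policy is simply the last constructor argument of ThreadPoolExecutor; a short sketch (pool sizes are arbitrary):

// fragment: assumes java.util.concurrent.* is imported
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        2, 4, 60L, TimeUnit.SECONDS,
        new ArrayBlockingQueue<>(10),
        new ThreadPoolExecutor.CallerRunsPolicy()); // or AbortPolicy / DiscardPolicy / DiscardOldestPolicy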

8. Using hooks to add extra behavior to the thread pool (useful for logging)

/**
 * Demonstrates hook methods that run before and after each task (a pauseable thread pool)
 */
public class PauseableThreadPool extends ThreadPoolExecutor {

    private boolean isPaused;
    private final ReentrantLock lock = new ReentrantLock();
    private Condition unPaused = lock.newCondition();

    public PauseableThreadPool(int corePoolSize, int maximumPoolSize, long keepAliveTime,
                               TimeUnit unit, BlockingQueue<Runnable> workQueue) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
    }


    public PauseableThreadPool(int corePoolSize, int maximumPoolSize, long keepAliveTime,
                               TimeUnit unit, BlockingQueue<Runnable> workQueue, ThreadFactory threadFactory) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, threadFactory);
    }


    public PauseableThreadPool(int corePoolSize, int maximumPoolSize, long keepAliveTime,
                               TimeUnit unit, BlockingQueue<Runnable> workQueue, RejectedExecutionHandler handler) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, handler);
    }

    public PauseableThreadPool(int corePoolSize, int maximumPoolSize, long keepAliveTime,
                               TimeUnit unit, BlockingQueue<Runnable> workQueue,
                               ThreadFactory threadFactory, RejectedExecutionHandler handler) {
        super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue, threadFactory, handler);
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        super.beforeExecute(t, r);
        lock.lock();
        try {
            while (isPaused) {
                unPaused.await();
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            lock.unlock();
        }
    }

    private void pause() {
        lock.lock();
        try {
            isPaused = true;
        } finally {
            lock.unlock();
        }
    }

    public void resume() {
        lock.lock();
        try {
            isPaused = false;
            //wake up all paused worker threads
            unPaused.signalAll();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        PauseableThreadPool pauseableThreadPool = new PauseableThreadPool(10, 20, 10L,
                TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        Runnable runnable = new Runnable() {
            @Override
            public void run() {
                System.out.println("Task executed");
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        };
        for (int i = 0; i < 10000; i++) {
            pauseableThreadPool.execute(runnable);
        }
        Thread.sleep(1500);
        pauseableThreadPool.pause();
        System.out.println("The thread pool has been paused");
        Thread.sleep(1500);
        pauseableThreadPool.resume();
        System.out.println("The thread pool has been resumed");
    }
}


9. How the thread pool works internally

9.1 Components of a thread pool

Thread pool manager
Worker threads
Task queue
Tasks

9.2 The Executor family

Executor: the top-level interface; the other interfaces and classes extend or implement it. It declares a single method:
void execute(Runnable command);
ExecutorService: a sub-interface of Executor that adds new methods, such as the shutdown-related methods mentioned in section 6.
Executors: a utility class that provides factory methods for creating thread pools.

9.3 How the thread pool reuses threads for different tasks

The same thread is reused to execute different tasks.

Source code analysis

public void execute(Runnable command) {
    // throw a NullPointerException if the task is null
    if (command == null)
        throw new NullPointerException();
    
    int c = ctl.get();
    // if the current thread count is less than corePoolSize, add a new Worker
    if (workerCountOf(c) < corePoolSize) {
        // command is the task itself; see the addWorker method below
        // the second argument means the thread count is checked against corePoolSize
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    // at this point the thread count is greater than or equal to corePoolSize
    // if the pool is still running, put the task into the work queue
    if (isRunning(c) && workQueue.offer(command)) {
        // re-check the pool state
        int recheck = ctl.get();
        // if the pool is no longer running, remove the task and reject it
        if (! isRunning(recheck) && remove(command))
            reject(command);
        else if (workerCountOf(recheck) == 0)   // make sure a queued task is not left without a thread to run it
            addWorker(null, false);
    }
    // otherwise try to add a non-core thread; if that fails (max reached or pool shut down), reject the task
    else if (!addWorker(command, false))
        reject(command);
}

Inside addWorker() the task is wrapped in a Worker; the Worker class's runWorker() method is where tasks are actually executed:

w = new Worker(firstTask);

private final class Worker
    extends AbstractQueuedSynchronizer
    implements Runnable
{

final void runWorker(Worker w) {
    Thread wt = Thread.currentThread();
    // take the worker's first task
    Runnable task = w.firstTask;
    w.firstTask = null;
    w.unlock(); // allow interrupts
    boolean completedAbruptly = true;
    try {
        // keep looping as long as there is a task, or another one can be fetched from the queue
        while (task != null || (task = getTask()) != null) {
            w.lock();
            if ((runStateAtLeast(ctl.get(), STOP) ||
                 (Thread.interrupted() &&
                  runStateAtLeast(ctl.get(), STOP))) &&
                !wt.isInterrupted())
                wt.interrupt();
            try {
                beforeExecute(wt, task);
                Throwable thrown = null;
                try {
                    // task is a Runnable; calling run() directly executes it on this worker thread
                    task.run();
                } catch (RuntimeException x) {
                    thrown = x; throw x;
                } catch (Error x) {
                    thrown = x; throw x;
                } catch (Throwable x) {
                    thrown = x; throw new Error(x);
                } finally {
                    afterExecute(task, thrown);
                }
            } finally {
                task = null;
                w.completedTasks++;
                w.unlock();
            }
        }
        completedAbruptly = false;
    } finally {
        processWorkerExit(w, completedAbruptly);
    }
}

Summary: the core idea is that a worker keeps fetching tasks and, as long as a task is available, calls its run() method directly. This is how one thread is reused, so the same thread ends up executing many different tasks.

10. Thread pool states

RUNNING: accepts new tasks and processes queued tasks.
SHUTDOWN: does not accept new tasks but still processes queued tasks.
STOP: does not accept new tasks, does not process queued tasks, and interrupts tasks in progress; this is the state produced by calling shutdownNow().
TIDYING: all tasks have terminated and workerCount is zero; the pool transitions to TIDYING and runs the terminated() hook method.
TERMINATED: the terminated() hook has completed.

11. Precautions when using a thread pool

Avoid task pile-up (accumulated tasks easily lead to memory overflow).
Avoid an excessive increase in the number of threads (a cached thread pool can create too many threads).
Watch out for thread leaks (threads that have finished their work but are never reclaimed).

Source: https://juejin.im/post/5e1b1fcce51d454d3046a3de
