Java Basics Series: Thoughts on ThreadPoolExecutor Principle Analysis (Part 15)

Foreword

For an analysis of the thread pool principle, see http://objcoding.com/2019/04/25/threadpool-running/. Readers who are not yet familiar with the principle are encouraged to read that article first. Here I will share my own understanding of the thread pool; if anything is wrong, criticism and corrections are welcome.

Thread pool thinking

We can think of a thread pool as a set of pre-instantiated standby threads ready to execute application tasks. A pool improves performance by running multiple tasks concurrently while avoiding the time and memory overhead of creating a thread per task; for example, a web server instantiates its thread pool at startup so that no time is spent creating threads when client requests arrive. Compared with creating a thread for every task, a pool also prevents resources (processors, cores, memory, and so on) from being exhausted: once a certain number of threads have been created, additional tasks are placed in a waiting queue until a thread becomes available. The following simple example illustrates the principle:

    public static void main(String[] args) {

        ArrayBlockingQueue<Runnable> arrayBlockingQueue = new ArrayBlockingQueue<>(5);

        ThreadPoolExecutor poolExecutor =
                new ThreadPoolExecutor(2,
                        5, Long.MAX_VALUE, TimeUnit.NANOSECONDS, arrayBlockingQueue);

        for (int i = 0; i < 11; i++) {
            try {
                poolExecutor.execute(new Task());
            } catch (RejectedExecutionException ex) {
                System.out.println("rejected task = " + (i + 1));
            }
            printStatus(i + 1, poolExecutor);
        }
    }

    static void printStatus(int taskSubmitted, ThreadPoolExecutor e) {
        StringBuilder s = new StringBuilder();
        s.append("work pool size = ")
                .append(e.getPoolSize())
                .append(", core pool size = ")
                .append(e.getCorePoolSize())
                .append(", queue size = ")
                .append(e.getQueue().size())
                .append(", queue remaining capacity = ")
                .append(e.getQueue().remainingCapacity())
                .append(", maximum pool size = ")
                .append(e.getMaximumPoolSize())
                .append(", tasks submitted = ")
                .append(taskSubmitted);

        System.out.println(s.toString());
    }

    static class Task implements Runnable {

        @Override
        public void run() {
            while (true) {
                try {
                    Thread.sleep(1000000);
                } catch (InterruptedException e) {
                    break;
                }
            }
        }
    }

The example above illustrates the basic behavior of the thread pool. We declare a bounded queue with capacity 5 and instantiate a pool with a core pool size of 2 and a maximum pool size of 5; thread creation is left to the default thread factory, the rejection policy is the default (AbortPolicy, which throws RejectedExecutionException), and 11 tasks are submitted. The pool starts with no threads. When the first task is submitted, the first worker thread is created and handed that task. As long as the current number of worker threads is less than the configured core pool size, every newly submitted task causes a new worker thread to be created, even if previously created core threads are idle. (Note: while the worker count does not exceed the core pool size, each Worker is created with the submitted task as its firstTask, bypassing the blocking queue.) Once the core pool size is exceeded, tasks are placed in the blocking queue. Once the blocking queue is full, worker threads are created again, up to the maximum pool size; beyond that the rejection policy executes, which is why the eleventh task above is rejected (2 core threads + 5 queued tasks + 3 extra threads = 10 accepted tasks). When the blocking queue is unbounded (such as a LinkedBlockingQueue), the configured maximum pool size obviously never takes effect. Let us restate the question: once the number of worker threads reaches the core pool size, what exactly does the pool do as more and more tasks are submitted?

1. If any core thread is idle (a worker created earlier whose assigned tasks have completed), it takes over and executes the newly submitted task.

2. If no idle core thread is available, each newly submitted task enters the configured work queue until a core thread can process it. If the work queue is full but there are still not enough free threads to keep up, the pool resumes creating worker threads, and new tasks are executed by them. Once the number of worker threads reaches the maximum pool size, the pool stops creating workers, and all tasks submitted after that point are rejected.
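The queue-first behavior described above is easy to observe. The following sketch (the pool sizes, class name, and sleep time are my own illustrative choices) submits four long-running tasks to a pool with core size 2: the pool stays at two threads and the two extra tasks wait in the queue, because the queue still has room:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueFirstDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(5));

        // Four long-running tasks: two occupy the core threads, the other
        // two wait in the queue. No non-core thread is created while the
        // queue still has remaining capacity.
        for (int i = 0; i < 4; i++) {
            pool.execute(() -> {
                try { Thread.sleep(500); } catch (InterruptedException ignored) {}
            });
        }
        System.out.println("pool size = " + pool.getPoolSize());      // 2
        System.out.println("queue size = " + pool.getQueue().size()); // 2
        pool.shutdownNow();
    }
}
```

Swap the ArrayBlockingQueue for an unbounded LinkedBlockingQueue and the pool size never grows past 2 no matter how many tasks are submitted, which is exactly why the configured maximum pool size is ineffective with unbounded queues.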

From point 2 we know that once the core pool size is reached, new tasks enter the blocking queue (as long as it is not full); we can regard this as a queue-first mechanism. This raises a question: why not create non-core threads first, expanding the pool, and only start queueing once the maximum pool size is reached? Might that be better for efficiency and performance than the default? Looked at another way, if you do not want tasks to enter the queue so soon, why not simply configure a larger core pool size? Because more core threads also means more threads sitting idle when the system is off-peak. In other words, eagerly creating non-core threads during peaks could, in theory, absorb large bursts of tasks better than queueing first, without paying for those threads the rest of the time. So how can we change the default behavior? Let us first look at how a submitted task is handled:

public void execute(Runnable command) {
    if (command == null)
        throw new NullPointerException();

    int c = ctl.get();
    if (workerCountOf(c) < corePoolSize) {
        if (addWorker(command, true))
            return;
        c = ctl.get();
    }
    if (isRunning(c) && workQueue.offer(command)) {
        int recheck = ctl.get();
        if (! isRunning(recheck) && remove(command))
            reject(command);
        else if (workerCountOf(recheck) == 0)
            addWorker(null, false);
    }
    else if (!addWorker(command, false))
        reject(command);
}

Step one: if the current number of worker threads is less than the core pool size, a core thread is created to execute the task. Step two: otherwise, if the pool is still in the running state, the task is offered to the blocking queue. Step three: if the offer fails, a non-core thread is created, and if that also fails the task is rejected. Notice that whether the pool queues a task or creates a non-core thread hinges entirely on what workQueue.offer returns. So if we could tell whether any core thread is currently idle, we could let a task enter the queue only when an idle thread is waiting to pick it up, and otherwise report failure from offer so that the pool creates a new worker. To do this we override the offer method and keep a count of idle core threads, extending the bounded blocking queue used above:

public class CustomArrayBlockingQueue<E> extends ArrayBlockingQueue<E> {

    private final AtomicInteger idleThreadCount = new AtomicInteger();

    public CustomArrayBlockingQueue(int capacity) {
        super(capacity);
    }

    @Override
    public boolean offer(E e) {
        // Only queue the task if at least one core thread is idle;
        // otherwise report failure so the pool creates a new worker.
        return idleThreadCount.get() > 0 && super.offer(e);
    }
}

Unfortunately, analysis of the thread pool source shows there is no way to ask the pool directly for its idle core threads, but we can track idleness ourselves. The relevant part of getTask(), the method workers use to fetch tasks, is as follows:

boolean timed = allowCoreThreadTimeOut || wc > corePoolSize;

if ((wc > maximumPoolSize || (timed && timedOut))
    && (wc > 1 || workQueue.isEmpty())) {
    if (compareAndDecrementWorkerCount(c))
        return null;
    continue;
}

try {
    Runnable r = timed ?
        workQueue.poll(keepAliveTime, TimeUnit.NANOSECONDS) :
        workQueue.take();
    if (r != null)
        return r;
    timedOut = true;
} catch (InterruptedException retry) {
    timedOut = false;
}

The core of getTask is excerpted above. If the worker count is greater than the core pool size, timed is true and the worker fetches tasks from the blocking queue with poll and a timeout. If it is not, the worker calls take; a thread calling take is, at that moment, idle. If the queue is empty, the thread blocks inside take until a task becomes available, remaining idle the whole time, so we increment the counter before the call; once take returns, the thread is no longer idle, so we decrement the counter afterwards. We override the take method accordingly:

@Override
public E take() throws InterruptedException {
    idleThreadCount.incrementAndGet();
    try {
        return super.take();
    } finally {
        idleThreadCount.decrementAndGet();
    }
}

Next, consider the case where timed is true. Here the thread uses the poll method, and any thread that has entered poll is currently idle, so we override this method in the work queue as well, incrementing the counter on entry. The underlying poll call then has two outcomes: either the queue stays empty for the whole timeout and poll returns null, after which the worker times out and soon exits the pool; or a task is returned and the thread is no longer idle. In both cases we decrement the counter when the call completes:

@Override
public E poll(long timeout, TimeUnit unit) throws InterruptedException {
    idleThreadCount.incrementAndGet();
    try {
        return super.poll(timeout, unit);
    } finally {
        idleThreadCount.decrementAndGet();
    }
}

With offer, poll, and take overridden, the pool now expands with non-core threads whenever no core thread is idle, but we are not finished: once the maximum pool size is reached, rejected tasks should still be added to the blocking queue. So finally we use our custom blocking queue together with a custom rejection handler, as follows:

CustomArrayBlockingQueue<Runnable> arrayBlockingQueue = new CustomArrayBlockingQueue<>(5);

ThreadPoolExecutor poolExecutor =
        new ThreadPoolExecutor(10,
                100, Long.MAX_VALUE, TimeUnit.NANOSECONDS, arrayBlockingQueue,
                Executors.defaultThreadFactory(), (r, executor) -> {
            if (!executor.getQueue().add(r)) {
                System.out.println("task rejected");
            }
        });

for (int i = 0; i < 150; i++) {
    try {
        poolExecutor.execute(new Task());
    } catch (RejectedExecutionException ex) {
        System.out.println("rejected task = " + (i + 1));
    }
    printStatus(i + 1, poolExecutor);
}

Above we implement a custom rejection handler that puts rejected tasks into the blocking queue; if the queue is full and can accept no more tasks, we could fall back to the default rejection policy or another handler. Note that the handler enqueues via the add method, and the add inherited from AbstractQueue delegates to offer, throwing IllegalStateException when offer returns false, which our override does whenever no thread is idle. So we must also override add to enqueue directly:

@Override
public boolean add(E e) {
    // Bypass the overridden offer and enqueue unconditionally.
    return super.offer(e);
}
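Putting the overridden offer, poll, take, and add together, here is a minimal runnable sketch of the whole idea (the class names, pool sizes, and sleep time are my own illustrative choices, not part of the original): with no idle core thread, the pool grows to its maximum size before any task is queued.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class EagerDemo {

    // Queue whose offer only succeeds when some core thread is idle,
    // so the pool expands to maximumPoolSize before queueing.
    static class EagerQueue<E> extends ArrayBlockingQueue<E> {
        private final AtomicInteger idleThreadCount = new AtomicInteger();

        EagerQueue(int capacity) { super(capacity); }

        @Override
        public boolean offer(E e) {
            return idleThreadCount.get() > 0 && super.offer(e);
        }

        @Override
        public E take() throws InterruptedException {
            idleThreadCount.incrementAndGet();
            try { return super.take(); } finally { idleThreadCount.decrementAndGet(); }
        }

        @Override
        public E poll(long timeout, TimeUnit unit) throws InterruptedException {
            idleThreadCount.incrementAndGet();
            try { return super.poll(timeout, unit); } finally { idleThreadCount.decrementAndGet(); }
        }

        @Override
        public boolean add(E e) {
            // Called from the rejection handler: enqueue unconditionally.
            return super.offer(e);
        }
    }

    public static void main(String[] args) {
        EagerQueue<Runnable> queue = new EagerQueue<>(5);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60, TimeUnit.SECONDS, queue,
                Executors.defaultThreadFactory(),
                (r, executor) -> {
                    if (!executor.getQueue().add(r)) {
                        System.out.println("task rejected");
                    }
                });

        // Six long-running tasks: the first four each get a worker thread
        // (the pool grows straight to its maximum of 4); the last two are
        // pushed into the queue by the rejection handler.
        for (int i = 0; i < 6; i++) {
            pool.execute(() -> {
                try { Thread.sleep(500); } catch (InterruptedException ignored) {}
            });
        }
        System.out.println("pool size = " + pool.getPoolSize());      // 4
        System.out.println("queue size = " + pool.getQueue().size()); // 2
        pool.shutdownNow();
    }
}
```

A similar idea appears in, for example, Tomcat's thread pool, whose TaskQueue overrides offer to prefer creating threads up to the maximum before queueing.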

Summary

The details above are only reflections prompted by the thread pool's default implementation. Whether this approach actually improves performance for large bursts of tasks would need to be measured, and there may be aspects not fully thought through; the analysis stops here for now.


Origin www.cnblogs.com/CreateMyself/p/12639215.html