[Repost] Java programming logic (78) - thread pool

In the previous section, we explored the task execution service in the Java concurrency package. In practice, the main mechanism behind the task execution service is the thread pool, and that is what we explore in this section.

basic concepts

A thread pool is, as the name suggests, a pool of threads: it contains several threads whose purpose is to execute tasks submitted to the pool, and after completing a task they do not exit but continue to wait for or execute new tasks. A thread pool consists of two main parts: a task queue and worker threads. A worker thread runs a main loop that takes tasks from the queue and executes them; the queue holds the tasks waiting to be executed.
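
Conceptually, a worker thread's main loop can be sketched like this (a simplified illustration only, not ThreadPoolExecutor's actual implementation; the class name is made up for the demo):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SimpleWorkerDemo {
    // the task queue holds the tasks waiting to be executed
    static BlockingQueue<Runnable> taskQueue = new LinkedBlockingQueue<>();

    public static void main(String[] args) throws InterruptedException {
        // a worker thread: a main loop that takes tasks from the queue and runs them
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    Runnable task = taskQueue.take(); // wait for the next task
                    task.run();                       // run it, then loop for the next one
                }
            } catch (InterruptedException e) {
                // exit the loop when interrupted
            }
        });
        worker.start();

        taskQueue.put(() -> System.out.println("hello from the worker thread"));
        Thread.sleep(100);  // give the worker a moment to run the task
        worker.interrupt(); // stop the worker; a real thread pool manages this itself
    }
}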

The concept is similar to various queuing scenarios in everyday life, such as queuing to buy tickets at a train station, to register at a hospital, or to do business at a bank. In general, several windows provide the service; the windows are like worker threads, and the waiting line is like the task queue. In real life, though, each window often has its own separate queue, which is hard to make fair. With the development of information technology, unified virtual queues have become more and more common: you take a number and are served when your number is called.

The advantages of a thread pool are obvious:

  • It reuses threads, avoiding the overhead of thread creation
  • When there are too many tasks, it queues them instead of creating more threads, reducing system resource consumption and contention and ensuring that tasks complete in an orderly way

The thread pool implementation class in the Java concurrency package is ThreadPoolExecutor. It extends AbstractExecutorService and implements ExecutorService, and its basic usage is similar to what we saw in the previous section, so we will not repeat it here. However, ThreadPoolExecutor has some important parameters that are essential for understanding and using thread pools properly, so let's explore them next.

Understanding the thread pool

Constructors

ThreadPoolExecutor has several constructors, all of which require some parameters. The main ones are:

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue)
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler) 

The second constructor takes two additional parameters, threadFactory and handler. Usually these two parameters are not needed, and the first constructor uses default values for them.

The parameters corePoolSize, maximumPoolSize, keepAliveTime, and unit control the number of threads in the pool; workQueue is the task queue; threadFactory configures how threads are created; and handler is the task rejection policy. Let's discuss these parameters in detail.

Thread pool size

The size of the thread pool is mainly related to four parameters:

  • corePoolSize: the number of core threads
  • maximumPoolSize: the maximum number of threads
  • keepAliveTime and unit: the idle survival time of extra threads

maximumPoolSize is the maximum number of threads in the pool. The number of threads changes dynamically, but this is the upper limit: no matter how many tasks there are, the pool will never create more threads than this value.

corePoolSize is the number of core threads in the pool. This does not mean that this many threads are created at the start; in fact, right after the pool is created, no threads are created at all.

Under normal circumstances, when a new task arrives, if the current number of threads is less than corePoolSize, a new thread is created to execute the task. Note that this happens even if some existing threads happen to be idle.

However, if the number of threads is greater than or equal to corePoolSize, a new thread is not created immediately; the pool first tries to enqueue the task. It must be emphasized that it "tries" to enqueue rather than "blocks waiting" to enqueue: if the queue is full or the task cannot be enqueued immediately for some other reason, the task is not queued; instead, the pool checks whether the number of threads has reached maximumPoolSize, and if not, it keeps creating threads until the count reaches maximumPoolSize.
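
As a small sketch of this rule (the numbers are chosen only for illustration): with corePoolSize 2, maximumPoolSize 4, and a bounded queue of capacity 2, the first two tasks create core threads, the next two are queued, and the following two cause extra threads to be created up to the maximum.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolGrowthDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2, 4, 10, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(2));

        Runnable sleepTask = () -> {
            try {
                Thread.sleep(1000); // keep the thread busy so later tasks pile up
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        };

        for (int i = 1; i <= 6; i++) {
            executor.execute(sleepTask);
            // typically prints 1, 2, 2, 2, 3, 4: two core threads, two queued tasks, then extra threads
            System.out.println("after task " + i + ": pool size = " + executor.getPoolSize());
        }
        executor.shutdown();
    }
}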

The purpose of keepAliveTime is to release redundant thread resources. It means that when the number of threads in the pool exceeds corePoolSize, an extra idle thread (that is, a non-core thread) waiting for a new task has a maximum waiting time, keepAliveTime; if that time elapses without a new task, the thread is terminated. A value of 0 means an excess idle thread is terminated as soon as it finds no task to run.

Besides being specified in the constructor, these parameters can also be viewed and modified through getter/setter methods:

public void setCorePoolSize(int corePoolSize)
public int getCorePoolSize()
public int getMaximumPoolSize()
public void setMaximumPoolSize(int maximumPoolSize)
public long getKeepAliveTime(TimeUnit unit)
public void setKeepAliveTime(long time, TimeUnit unit)

In addition to these static parameters, ThreadPoolExecutor also exposes some dynamic numbers about its threads and tasks:

// returns the current number of threads
public int getPoolSize()
// returns the largest number of threads the pool has ever reached
public int getLargestPoolSize()
// returns the number of tasks completed since the pool was created
public long getCompletedTaskCount()
// returns the total number of tasks, including completed ones and those still queued or executing
public long getTaskCount()

queue

ThreadPoolExecutor requires the queue to be a blocking queue, BlockingQueue. We introduced various BlockingQueue implementations in section 76; all of them can be used as the thread pool's queue, for example:

  • LinkedBlockingQueue: a linked-list-based blocking queue; a maximum length can be specified, but it is unbounded by default
  • ArrayBlockingQueue: an array-based bounded blocking queue
  • PriorityBlockingQueue: a heap-based unbounded blocking priority queue
  • SynchronousQueue: a synchronous blocking queue with no actual storage space

It must be emphasized that if the queue is unbounded, the number of threads can only reach corePoolSize at most: once corePoolSize is reached, new tasks always queue up, and the maximumPoolSize parameter becomes meaningless.

On the other hand, for SynchronousQueue, as we know, there is no actual storage space for elements. When the pool tries to enqueue a task, the enqueue succeeds only if an idle thread happens to be waiting to accept a task; otherwise, a new thread is always created, until maximumPoolSize is reached.
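
A brief sketch of the unbounded-queue case (the numbers are assumptions for illustration): even though maximumPoolSize is 4, the pool never grows beyond corePoolSize, because enqueuing always succeeds.

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2, 4, 10, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>()); // unbounded queue

        for (int i = 0; i < 100; i++) {
            executor.execute(() -> {
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        // stays at corePoolSize (2); the other tasks simply wait in the queue
        System.out.println("pool size = " + executor.getPoolSize());
        executor.shutdown();
    }
}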

Task rejection policy

If the queue is bounded and maximumPoolSize is limited, what happens when the queue is full, the number of threads has reached maximumPoolSize, and a new task arrives? At that point, the task triggers the thread pool's rejection policy.

By default, the task-submitting methods such as execute/submit/invokeAll throw an exception of type RejectedExecutionException.

However, the rejection policy is customizable. ThreadPoolExecutor provides four implementations:

  • ThreadPoolExecutor.AbortPolicy: the default; throws an exception
  • ThreadPoolExecutor.DiscardPolicy: silently ignores the new task; it is neither executed nor does it cause an exception
  • ThreadPoolExecutor.DiscardOldestPolicy: throws away the task that has waited longest, then enqueues the new one
  • ThreadPoolExecutor.CallerRunsPolicy: executes the task in the submitter's thread rather than handing it to a pool thread

They are all public static inner classes of ThreadPoolExecutor, and all implement the RejectedExecutionHandler interface, which is defined as:

public interface RejectedExecutionHandler {
    void rejectedExecution(Runnable r, ThreadPoolExecutor executor);
}

When the thread pool cannot accept a task, it calls the rejection policy's rejectedExecution method.

The rejection policy can be specified in the constructor, or through the following method:

public void setRejectedExecutionHandler(RejectedExecutionHandler handler)

The default RejectedExecutionHandler is an AbortPolicy instance, as follows:

private static final RejectedExecutionHandler defaultHandler =
    new AbortPolicy();

AbortPolicy's rejectedExecution implementation simply throws an exception, as follows:

public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    throw new RejectedExecutionException("Task " + r.toString() +
                                         " rejected from " +
                                         e.toString());
}

It should be emphasized that the rejection policy is only triggered when the queue is bounded and maximumPoolSize is limited.

If the queue is unbounded, tasks that cannot yet be serviced simply keep queuing up, but that is not necessarily desirable, because the waiting requests can consume a very large amount of memory and may even cause an out-of-memory error.

If the queue is bounded but maximumPoolSize is unlimited, too many threads may be created, saturating the CPU and memory so that no task can make progress.

So, in scenarios with very heavy task loads, letting the rejection policy take effect is an important way to keep the system stable.
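
As a sketch of how the interface can be used, the handler below (a hypothetical example, not one of the built-in policies) logs the rejection and then discards the task; it could be passed to the second constructor shown above or set through setRejectedExecutionHandler.

import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

// hypothetical example: log the rejection, then silently drop the task
public class LogAndDiscardPolicy implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        System.err.println("task rejected, pool size = " + executor.getPoolSize()
                + ", queue size = " + executor.getQueue().size());
        // neither rethrow nor execute: the task is discarded after being logged
    }
}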

Thread factory

The thread pool also accepts a ThreadFactory parameter. It is an interface, defined as:

public interface ThreadFactory {
    Thread newThread(Runnable r);
}

This interface creates a Thread from a Runnable. The default implementation used by ThreadPoolExecutor is DefaultThreadFactory, a static inner class of the Executors class. It mainly gives the created thread a name, sets the daemon property to false, and sets the thread priority to the standard default. The thread name has the format: pool-<pool number>-thread-<thread number>.

If you need custom thread attributes, such as the name, you can provide your own ThreadFactory implementation.
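
For example, the factory below (the class name and naming scheme are made up for illustration) gives each thread a custom name prefix and marks it as a daemon thread; it can be passed as the threadFactory argument of the second constructor shown earlier.

import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

// hypothetical example: custom thread names plus the daemon flag
public class NamedThreadFactory implements ThreadFactory {
    private final String prefix;
    private final AtomicInteger counter = new AtomicInteger(1);

    public NamedThreadFactory(String prefix) {
        this.prefix = prefix;
    }

    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r, prefix + "-thread-" + counter.getAndIncrement());
        t.setDaemon(true); // just for illustration; DefaultThreadFactory sets daemon to false
        return t;
    }
}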

Special configuration of core threads

When the number of threads is no more than corePoolSize, we call these threads core threads. By default:

  • Core threads are not pre-created; they are created only when there are tasks
  • Core threads are not terminated when idle; the keepAliveTime parameter does not apply to them

However, ThreadPoolExecutor has the following methods to change this default behavior.

// pre-start all core threads; returns the number of threads started
public int prestartAllCoreThreads()
// pre-start one core thread; returns false if all core threads have already been created
public boolean prestartCoreThread()
// if the argument is true, the keepAliveTime parameter also applies to core threads
public void allowCoreThreadTimeOut(boolean value)
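
A short usage sketch of these methods (the parameter values are assumptions for illustration):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CoreThreadConfigDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                4, 8, 30, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(100));

        // create all 4 core threads up front instead of lazily on the first tasks
        executor.prestartAllCoreThreads();

        // allow idle core threads to time out after 30 seconds as well
        executor.allowCoreThreadTimeOut(true);

        executor.shutdown();
    }
}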

Executors factory class

The Executors class provides static factory methods for conveniently creating pre-configured thread pools. The main methods are:

public static ExecutorService newSingleThreadExecutor()
public static ExecutorService newFixedThreadPool(int nThreads)
public static ExecutorService newCachedThreadPool() 

newSingleThreadExecutor is basically equivalent to:

public static ExecutorService newSingleThreadExecutor() {
    return new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>());
}

It uses a single thread and an unbounded LinkedBlockingQueue; the thread does not time out after creation, and it executes all tasks in order. This thread pool is suitable whenever tasks need to be executed sequentially.

The code for newFixedThreadPool is:

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

It uses a fixed number of threads, nThreads, an unbounded LinkedBlockingQueue, and the threads do not time out after creation. As with newSingleThreadExecutor, because the queue is unbounded, too many queued tasks can consume a very large amount of memory.

The code for newCachedThreadPool is:

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}

Its corePoolSize is 0, maximumPoolSize is Integer.MAX_VALUE, keepAliveTime is 60 seconds, and the queue is a SynchronousQueue.

This means that when a new task arrives, if an idle thread happens to be waiting for a task, one of the idle threads accepts it; otherwise a new thread is always created, with no practical limit on the total number of threads. For any idle thread, if no new task arrives within 60 seconds, it is terminated.

In practice, should you use newFixedThreadPool or newCachedThreadPool?

Under high system load, newFixedThreadPool queues new tasks, ensuring there are enough resources to process the tasks actually running, whereas newCachedThreadPool creates a thread for every task, and too many threads competing for CPU and memory can make it hard for any task to complete. In that case, newFixedThreadPool is more suitable.

However, if the system load is not too high and individual tasks are short, newCachedThreadPool may be more efficient, because tasks can be handed directly to an idle thread without queuing.

When very high load is possible, neither is a good choice: newFixedThreadPool's problem is that the queue can grow too long, and newCachedThreadPool's problem is that too many threads may be created. In that case, you should configure a ThreadPoolExecutor yourself, passing parameters appropriate to the actual situation.
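
For example, here is a sketch of a custom configuration (all numbers and the policy choice are assumptions to be tuned for the actual workload): the bounded queue limits memory use, maximumPoolSize limits the thread count, and CallerRunsPolicy makes the submitting thread run the task itself under overload instead of losing it.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                10,                                     // corePoolSize
                20,                                     // maximumPoolSize
                60, TimeUnit.SECONDS,                   // keepAliveTime for non-core threads
                new ArrayBlockingQueue<Runnable>(500)); // bounded queue limits queued memory
        // under overload, the submitting thread runs the task itself instead of it being rejected
        executor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());

        // ... submit tasks with executor.execute(...) or executor.submit(...)
        executor.shutdown();
    }
}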

Thread pool deadlock

When submitting tasks to a thread pool, one situation needs special attention: tasks that depend on each other, which may lead to deadlock. For example, task A, while executing, submits a task B to the same task execution service and then waits for B to finish.

If task A is submitted to a single-threaded pool, a deadlock occurs: A is waiting for B's result, while B is waiting in the queue to be scheduled.

If it is submitted to a thread pool with a limited number of threads, deadlock can also occur. Let's look at a simple example:

public class ThreadPoolDeadLockDemo {
    private static final int THREAD_NUM = 5;
    static ExecutorService executor = Executors.newFixedThreadPool(THREAD_NUM);

    static class TaskA implements Runnable {
        @Override
        public void run() {
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            Future<?> future = executor.submit(new TaskB());
            try {
                future.get();
            } catch (Exception e) {
                e.printStackTrace();
            }
            System.out.println("finished task A");
        }
    }

    static class TaskB implements Runnable {
        @Override
        public void run() {
            System.out.println("finished task B");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 5; i++) {
            executor.execute(new TaskA());
        }
        Thread.sleep(2000);
        executor.shutdown();
    }
}

This code uses newFixedThreadPool to create a pool with 5 threads. The main program submits five TaskA instances; each TaskA submits a TaskB and then waits for that TaskB to finish. Since all the threads are occupied by TaskA, the TaskB instances can only wait in the queue, so the program deadlocks.

How to solve this problem?

One solution is to replace newFixedThreadPool with newCachedThreadPool: thread creation is no longer limited, so the problem goes away.

Another solution is to use a SynchronousQueue, which avoids the deadlock. Why? For an ordinary queue, enqueuing just places the task in the queue; for a SynchronousQueue, a successful enqueue means a thread has already accepted the task for processing. If the enqueue fails, more threads can be created up to maximumPoolSize, and if maximumPoolSize has been reached, the rejection mechanism is triggered. Either way, there is no deadlock. We replace the code that creates the executor with:

static ExecutorService executor = new ThreadPoolExecutor(
        THREAD_NUM, THREAD_NUM, 0, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>());

With only the queue type changed, running the same program no longer deadlocks, but the calls inside TaskA that submit TaskB throw RejectedExecutionException, because the enqueue fails and the maximum number of threads has also been reached.

summary

This section introduced the basic concepts of the thread pool and discussed the meaning of its main parameters in detail. Understanding these parameters is very important for using thread pools properly, and interdependent tasks need special attention to avoid deadlock.

ThreadPoolExecutor implements the producer/consumer pattern: the worker threads are the consumers, the task submitters are the producers, and the thread pool maintains the task queue itself. When we encounter a similar producer/consumer problem, we should prefer to use a thread pool directly rather than reinvent the wheel by managing our own task queue and consumer threads.

In asynchronous task programs, a common scenario is that the main thread submits multiple asynchronous tasks and then wants to process the results as the tasks complete, handling them in completion order. For this scenario, the Java concurrency package provides a convenient class, CompletionService, which we will explore in the next section.
