Java concurrent programming summary 10: thread pool framework

Mainly based on blog posts written previously:

1: In-depth understanding of thread pools and their use

2: java.util.concurrent.Executor, ExecutorService, ThreadPoolExecutor, Executors, ThreadFactory

Framework summary:

1. The main types, from the top of the hierarchy down: java.util.concurrent.Executor, ExecutorService, ThreadPoolExecutor, Executors, ThreadFactory

According to the above summary:

  1. Executor: the top-level abstract interface of the executor framework; it decouples task submission from task execution.
  2. ExecutorService: builds on Executor, adding executor life-cycle management and asynchronous task execution.
  3. ScheduledExecutorService: builds on ExecutorService, adding delayed and periodic execution of tasks.
  4. Executors: a static factory class that produces concrete executors.
  5. ThreadFactory: a thread factory used to create threads; it removes the tedium of creating threads by hand and lets the creation logic be reused.
  6. AbstractExecutorService: an abstract implementation of ExecutorService that serves as the base for the various concrete executors.
  7. ThreadPoolExecutor: the thread pool executor, and the most commonly used one; it manages threads as a pool.
  8. ScheduledThreadPoolExecutor: builds on ThreadPoolExecutor, adding support for periodic task scheduling.
  9. ForkJoinPool: the Fork/Join thread pool, introduced in JDK 1.7; it is the core class of the Fork/Join framework.

2. ThreadPoolExecutor

submit(), invokeAll(), and invokeAny() in ExecutorService all ultimately call the execute() method, so execute() is the core of the core;

Its important parameters are corePoolSize, maximumPoolSize, the rejection strategy, and the blocking queue.
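A minimal sketch of how these parameters fit together when constructing a pool by hand (the class name and tasks here are made up for illustration):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolParamsDemo {
    public static void main(String[] args) throws InterruptedException {
        // corePoolSize = 2, maximumPoolSize = 4, keepAliveTime = 60s,
        // a bounded queue of 10, and AbortPolicy as the rejection strategy
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(10),
                new ThreadPoolExecutor.AbortPolicy());

        for (int i = 0; i < 5; i++) {
            final int id = i;
            pool.execute(() ->
                    System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

With a bounded queue like this, tasks beyond corePoolSize first queue up; only when the queue is full are extra threads created up to maximumPoolSize, and beyond that the rejection strategy fires.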

  • newFixedThreadPool: a thread pool whose corePoolSize and maximumPoolSize are equal; the blocking queue is a LinkedBlockingQueue whose capacity is the maximum integer value.
  • newSingleThreadExecutor: a thread pool with only one thread; the blocking queue is a LinkedBlockingQueue. Extra tasks submitted to the pool are held in the blocking queue and executed when the thread is free.
  • newCachedThreadPool: a cached thread pool whose cached threads survive for 60 seconds by default. Its corePoolSize is 0, its maximumPoolSize is Integer.MAX_VALUE, and the blocking queue is a SynchronousQueue.
  • newScheduledThreadPool: a scheduled thread pool that can execute tasks periodically, typically used for periodic data synchronization.
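The first and third factory methods above reduce to plain ThreadPoolExecutor constructions; a sketch of the equivalent configurations (the variable names are my own):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FactoryDemo {
    public static void main(String[] args) {
        // Executors.newFixedThreadPool(3) is equivalent to:
        ExecutorService fixed = new ThreadPoolExecutor(
                3, 3, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>()); // unbounded: capacity Integer.MAX_VALUE

        // Executors.newCachedThreadPool() is equivalent to:
        ExecutorService cached = new ThreadPoolExecutor(
                0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS,
                new SynchronousQueue<Runnable>());    // hand-off queue with no capacity

        fixed.shutdown();
        cached.shutdown();
    }
}
```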

About the blocking queue :

  • Choose an appropriate blocking queue. Both newFixedThreadPool and newSingleThreadExecutor use unbounded blocking queues, which can consume a lot of memory. A bounded blocking queue avoids excessive memory usage, but raises the question of what to do with new tasks once the queue is full: with a bounded queue you must choose an appropriate rejection strategy, and the queue size and pool size must be tuned together. For very large or unbounded thread pools, a SynchronousQueue can be used to avoid queuing altogether and hand tasks directly from the producer to a worker thread.
  • Because the core pool fills up easily, when using a SynchronousQueue the maximumPoolSize generally needs to be set fairly large; otherwise enqueuing fails easily and the rejection strategy ends up being executed. This is why the cached thread pool provided by Executors uses a SynchronousQueue as its task queue by default.
  • SynchronousQueue has no capacity and uses a lock-free algorithm, so its performance is better, but every enqueue operation must wait for a dequeue operation, and vice versa.

Rejection strategy :

  • AbortPolicy: discards the task and throws RejectedExecutionException.
  • CallerRunsPolicy: as long as the thread pool is not shut down, runs the rejected task directly in the caller's thread. The task is not really discarded, but the throughput of the submitting thread is very likely to drop sharply.
  • DiscardOldestPolicy: discards the oldest request in the queue, i.e. the task that would be executed next, and then retries submitting the current task.
  • DiscardPolicy: discards the task silently, without any processing.
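A sketch of CallerRunsPolicy in action, using a deliberately tiny pool so the third task is rejected (the class name and latch setup are my own):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectDemo {
    public static void main(String[] args) throws InterruptedException {
        // one thread, a queue of capacity 1: the third task must be rejected
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());

        CountDownLatch gate = new CountDownLatch(1);
        pool.execute(() -> {                       // occupies the single worker thread
            try { gate.await(); } catch (InterruptedException ignored) { }
        });
        pool.execute(() -> { });                   // fills the queue

        String[] ranOn = new String[1];
        pool.execute(() -> ranOn[0] = Thread.currentThread().getName()); // rejected -> runs on the caller
        System.out.println("rejected task ran on: " + ranOn[0]);         // main

        gate.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Because the rejected task runs synchronously in the submitting thread, submission itself slows down, which is the throughput drop mentioned above.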

3. ExecutorService

The core method of ExecutorService is submit, which submits a task for execution. Readers who look at the source of ThreadPoolExecutor will find that it does not override submit; instead it reuses the template method of its parent class AbstractExecutorService and implements only the execute method itself.
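From the caller's side, the template looks like this: submit wraps the task in a Future, hands it to execute, and returns the Future for collecting the result (a minimal sketch; the class name is my own):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);

        // submit() (defined in AbstractExecutorService) wraps the Callable in a
        // FutureTask and passes it to execute(); the Future exposes the result.
        Future<Integer> f = pool.submit(() -> 21 * 2);
        System.out.println(f.get()); // 42, blocking until the task completes

        pool.shutdown();
    }
}
```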

4. The implementation principles of ArrayBlockingQueue and LinkedBlockingQueue, explained and compared:

ArrayBlockingQueue internally stores its data in an array (the items field). To guarantee thread safety it uses a ReentrantLock, and to support blocking insertion and removal it uses Condition objects: when a consumer thread trying to take data is blocked, it is placed in the notEmpty waiting queue; when a producer thread trying to insert data is blocked, it is placed in the notFull waiting queue.

LinkedBlockingQueue is an optionally bounded blocking queue implemented with a linked list. Its capacity can be specified when the object is constructed; if it is not, the default capacity is Integer.MAX_VALUE.

Its main difference from ArrayBlockingQueue is that LinkedBlockingQueue uses two separate locks (takeLock and putLock) to make insertion and removal thread-safe. Accordingly, two conditions derived from these locks (notEmpty and notFull) implement blocking insertion and removal.

Comparison of the two:

Similarity: both ArrayBlockingQueue and LinkedBlockingQueue implement blocking insertion and removal of elements through a condition-notification mechanism, and both are thread-safe.

Differences:

  1. ArrayBlockingQueue is implemented on top of an array, while LinkedBlockingQueue uses a linked-list data structure;
  2. ArrayBlockingQueue uses only one lock for both insertion and removal, while LinkedBlockingQueue uses putLock for insertion and takeLock for removal. This reduces the chance of a thread entering the WAITING state because it cannot acquire the lock, and thereby improves the efficiency of concurrently executing threads.
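The two queues behave the same from the caller's point of view; a sketch of the capacity difference described above (the variable names are my own):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> arrayQ = new ArrayBlockingQueue<>(2);  // capacity is required
        BlockingQueue<Integer> linkedQ = new LinkedBlockingQueue<>(); // defaults to Integer.MAX_VALUE

        arrayQ.put(1);
        arrayQ.put(2);
        // a further put() would block on the notFull condition until take() frees a slot;
        // offer() instead fails immediately when the queue is full
        boolean accepted = arrayQ.offer(3);
        System.out.println("array offer when full: " + accepted); // false

        linkedQ.put(1);                 // effectively never full at its default capacity
        System.out.println("array take: " + arrayQ.take()); // 1; signals notFull
    }
}
```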



Origin blog.csdn.net/ScorpC/article/details/113914729