[2023] Java multithreaded thread pools explained: a summary (with code examples)

1. There are three main reasons why Java uses thread pools

  1. Creating and destroying threads consumes system resources; a thread pool lets created threads be reused.
  2. It controls the degree of concurrency. Too many concurrent threads can exhaust resources and crash the server. (This is the main reason.)
  3. It allows threads to be managed in a unified way.

There are 7 ways to create a thread pool, which fall into 2 categories:

1. Thread pools created directly with ThreadPoolExecutor;

2. Thread pools created through the Executors factory methods.

2. Thread pool creation methods:

  • Executors.newFixedThreadPool: Creates a fixed-size thread pool, which controls the number of concurrent threads; excess tasks wait in the queue;
  • Executors.newCachedThreadPool: Creates a cacheable thread pool. If there are more threads than the workload needs, idle threads are reclaimed after a period of time; if there are not enough, new threads are created;
  • Executors.newSingleThreadExecutor: Creates a single-threaded pool, which guarantees first-in, first-out execution order;
  • Executors.newScheduledThreadPool: Creates a thread pool that can run delayed tasks;
  • Executors.newSingleThreadScheduledExecutor: Creates a single-threaded pool that can run delayed tasks;
  • Executors.newWorkStealingPool: Creates a work-stealing thread pool in which task execution order is not guaranteed [added in JDK 1.8].
  • ThreadPoolExecutor: The most fundamental way to create a thread pool. It takes 7 parameters, discussed in detail later.

How to add tasks:
submit can run tasks with or without a return value (it returns a Future); execute can only run tasks without a return value.
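A minimal sketch of the difference (the class name is illustrative):

```java
import java.util.concurrent.*;

public class SubmitVsExecute {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // execute: fire-and-forget, accepts a Runnable, no return value
        pool.execute(() -> System.out.println("executed via execute()"));

        // submit: also accepts a Callable and returns a Future with the result
        Future<Integer> future = pool.submit(() -> 1 + 1);
        System.out.println("submit result: " + future.get()); // blocks until done

        pool.shutdown();
    }
}
```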

Creating threads with a thread factory:


Features provided:

1. Set the naming rules for threads (in the thread pool).

2. Set the priority of the thread.

3. Set thread grouping.

4. Set the thread type (user thread, daemon thread).
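The four features above can be sketched with a custom ThreadFactory (class and prefix names are illustrative):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedThreadFactoryDemo {
    // Custom factory: sets the thread name, priority, and daemon status
    static class NamedThreadFactory implements ThreadFactory {
        private final AtomicInteger counter = new AtomicInteger(1);
        private final String prefix;

        NamedThreadFactory(String prefix) { this.prefix = prefix; }

        @Override
        public Thread newThread(Runnable r) {
            Thread t = new Thread(r, prefix + "-" + counter.getAndIncrement());
            t.setPriority(Thread.NORM_PRIORITY); // 2. priority
            t.setDaemon(false);                  // 4. user thread, not daemon
            return t;
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool =
                Executors.newFixedThreadPool(2, new NamedThreadFactory("worker"));
        Future<String> name = pool.submit(() -> Thread.currentThread().getName());
        System.out.println(name.get()); // e.g. worker-1
        pool.shutdown();
    }
}
```

A thread group could also be set by passing a ThreadGroup to the Thread constructor inside newThread.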

3. Create a thread pool

1. Create a fixed-size thread pool
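A minimal sketch (class name and pool size are illustrative):

```java
import java.util.concurrent.*;

public class FixedPoolDemo {
    public static void main(String[] args) throws Exception {
        // At most 3 threads run at once; extra tasks wait in the queue
        ExecutorService pool = Executors.newFixedThreadPool(3);
        for (int i = 0; i < 5; i++) {
            final int id = i;
            pool.execute(() ->
                System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```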

2. Create a cacheable thread pool and observe how threads are created for the tasks

  1. Advantage: threads can be reused within a certain period instead of creating a new thread for every task.
  2. Disadvantage: it suits bursts of many short tasks, but because the number of threads is unbounded it may consume a lot of resources.
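A minimal sketch (class name is illustrative):

```java
import java.util.concurrent.*;

public class CachedPoolDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newCachedThreadPool();
        // A burst of short tasks: threads are created as needed and
        // reused; idle threads are reclaimed after ~60 seconds
        for (int i = 0; i < 10; i++) {
            pool.execute(() ->
                System.out.println(Thread.currentThread().getName()));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```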

3. Create a delayed/scheduled task thread pool

  1. schedule(): runs a task once after a given delay;
  2. scheduleAtFixedRate: the start time of the previous run is the reference time for the next scheduled run (reference time + period = next execution);
  3. scheduleWithFixedDelay: the end time of the previous run is the reference time for the next scheduled run.
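The three methods can be sketched as follows (class name and delays are illustrative):

```java
import java.util.concurrent.*;

public class ScheduledPoolDemo {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);

        // schedule(): run once after a 1-second delay
        pool.schedule(() -> System.out.println("one-shot"), 1, TimeUnit.SECONDS);

        // scheduleAtFixedRate(): next run measured from the previous run's START
        pool.scheduleAtFixedRate(() -> System.out.println("fixed rate"),
                0, 2, TimeUnit.SECONDS);

        // scheduleWithFixedDelay(): next run measured from the previous run's END
        pool.scheduleWithFixedDelay(() -> System.out.println("fixed delay"),
                0, 2, TimeUnit.SECONDS);

        Thread.sleep(3000);  // let a few runs happen, then stop
        pool.shutdownNow();
    }
}
```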
4. Create a single-threaded thread pool
5. Create a work-stealing thread pool
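A minimal sketch (class name is illustrative; the pool is backed by a ForkJoinPool, so task execution order is not guaranteed):

```java
import java.util.concurrent.*;
import java.util.stream.IntStream;

public class WorkStealingDemo {
    public static void main(String[] args) throws Exception {
        // Uses all available processors by default
        ExecutorService pool = Executors.newWorkStealingPool();
        int sum = pool.submit(() ->
                IntStream.rangeClosed(1, 100).parallel().sum()).get();
        System.out.println(sum); // 5050
        pool.shutdown();
    }
}
```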

⭐6. Create with ThreadPoolExecutor
ThreadPoolExecutor takes 7 parameters:

  1. corePoolSize: the number of core threads kept in the pool;
  2. maximumPoolSize: the maximum number of threads allowed;
  3. keepAliveTime: how long idle non-core threads are kept alive;
  4. unit: the time unit of keepAliveTime;
  5. workQueue: the blocking queue that holds waiting tasks;
  6. threadFactory: the factory used to create new threads;
  7. handler: the rejection policy applied when the pool and queue are full.
Rejection policies

  • The JDK provides 4 built-in policies, plus the option of a custom one:
    1. AbortPolicy (the default): throws RejectedExecutionException;
    2. CallerRunsPolicy: the calling thread runs the task itself;
    3. DiscardPolicy: silently discards the task;
    4. DiscardOldestPolicy: discards the oldest queued task and retries;
    5. Custom: implement the RejectedExecutionHandler interface.
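A custom policy is just a RejectedExecutionHandler implementation; here is a minimal sketch that logs rejected tasks instead of throwing (names and sizes are illustrative):

```java
import java.util.concurrent.*;

public class CustomRejectDemo {
    public static void main(String[] args) {
        // Custom strategy: log the rejected task instead of throwing
        RejectedExecutionHandler logAndDrop = (task, executor) ->
                System.out.println("rejected: " + task);

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),  // tiny queue fills quickly
                logAndDrop);

        for (int i = 0; i < 5; i++) {
            pool.execute(() -> {
                try { Thread.sleep(200); } catch (InterruptedException ignored) {}
            });
        }
        pool.shutdown();
    }
}
```

With one thread and a queue of size one, most of the five tasks are handed to the custom handler rather than executed.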

Execution flow

  1. If the total number of threads < corePoolSize, a new core thread is created to run the task even if existing threads are idle (so the core thread count quickly reaches corePoolSize). Note that this step requires a global lock.
  2. When the total number of threads >= corePoolSize, new tasks enter the task queue to wait, and idle core threads fetch tasks from the queue in order and execute them (this is where thread reuse happens).
  3. When the queue is full, there are too many tasks and some "temporary workers" are needed: a non-core thread is created to run the task. Note that this step also requires a global lock.
  4. When the queue is full and the total number of threads has reached maximumPoolSize, the rejection policy described above is applied.
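Putting the 7 parameters together, a minimal sketch (parameter values are illustrative):

```java
import java.util.concurrent.*;

public class ThreadPoolExecutorDemo {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                               // corePoolSize
                4,                               // maximumPoolSize
                60L, TimeUnit.SECONDS,           // keepAliveTime + unit
                new LinkedBlockingQueue<>(10),   // workQueue (bounded)
                Executors.defaultThreadFactory(),// threadFactory
                new ThreadPoolExecutor.CallerRunsPolicy()); // rejection policy

        for (int i = 0; i < 8; i++) {
            final int id = i;
            pool.execute(() -> System.out.println("task " + id));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```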

Blocking queues

We assume a scenario where producers keep producing resources and consumers keep consuming resources. The resources are stored in a buffer pool. The producer stores the produced resources in the buffer pool. The consumer gets the resources from the buffer pool for consumption. This is the famous producer-consumer model .

This pattern simplifies development: it removes the code dependency between the producer and consumer classes, and it decouples the process of producing data from the process of consuming it, smoothing out load imbalances between the two.

When we implement this pattern ourselves, multiple threads operate on shared variables (the resources), which easily causes thread-safety issues such as repeated consumption and deadlock, especially with multiple producers and consumers. In addition, when the buffer pool is empty we need to block consumers and wake producers, and when it is full we need to block producers and wake consumers. All of this wait/notify logic has to be implemented by hand.

BlockingQueue is an important data structure in the java.util.concurrent package. Unlike an ordinary queue, BlockingQueue provides thread-safe queue access. Many of the advanced synchronization classes in the concurrent package are implemented on top of BlockingQueue.

BlockingQueue is generally used in the producer-consumer model. The producer is the thread that adds elements to the queue, and the consumer is the thread that takes elements from the queue. BlockingQueue is a container for storing elements .

The blocking queue provides four sets of methods for inserting, removing, and checking elements:

Method \ Behavior   Throws exception   Returns special value   Always blocks   Times out
Insert              add(e)             offer(e)                put(e)          offer(e, time, unit)
Remove              remove()           poll()                  take()          poll(time, unit)
Examine             element()          peek()                  -               -

Blocking queue: BlockingQueue workQueue

  • Maintains Runnable task objects waiting to be executed.
  1. LinkedBlockingQueue
    1. Chained blocking queue; the underlying data structure is a linked list. The default capacity is Integer.MAX_VALUE, but a capacity can also be specified.
  2. ArrayBlockingQueue
    1. Array blocking queue, the underlying data structure is an array, and the size of the queue needs to be specified.
  3. SynchronousQueue
    1. Synchronous queue, internal capacity is 0, each put operation must wait for a take operation, and vice versa.
  4. DelayQueue
    1. Delay queue, the element in the queue can only be obtained from the queue when its specified delay time expires.
  5. PriorityBlockingQueue
    1. Unbounded blocking queue; priority is determined by the Comparator passed to the constructor.
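The producer-consumer scenario described earlier can be sketched with an ArrayBlockingQueue as the buffer pool (class name and sizes are illustrative):

```java
import java.util.concurrent.*;

public class ProducerConsumerDemo {
    public static void main(String[] args) throws Exception {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(2);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    buffer.put(i);  // blocks while the buffer is full
                    System.out.println("produced " + i);
                }
            } catch (InterruptedException ignored) {}
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    int v = buffer.take();  // blocks while the buffer is empty
                    System.out.println("consumed " + v);
                }
            } catch (InterruptedException ignored) {}
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```

All the wait/notify logic lives inside the queue; the producer and consumer never reference each other.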

The principle of the blocking queue

The blocking queue is implemented mainly with a Lock and multiple Condition objects for blocking control.

Start with the constructor. Besides initializing the queue's size and whether its lock is fair, it creates two conditions (monitors) on the same lock, notEmpty and notFull. These two monitors can be understood as marking groups: a thread performing a put waits on notFull, marking it as a producer; a thread performing a take waits on notEmpty, marking it as a consumer.
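This design can be sketched as a minimal blocking queue built on ReentrantLock with notEmpty/notFull conditions (a simplified, illustrative version of what ArrayBlockingQueue does, not its actual source):

```java
import java.util.concurrent.locks.*;

public class SimpleBlockingQueue<E> {
    private final Object[] items;
    private int putIndex, takeIndex, count;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition(); // consumers wait here
    private final Condition notFull  = lock.newCondition(); // producers wait here

    public SimpleBlockingQueue(int capacity) { items = new Object[capacity]; }

    public void put(E e) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length) notFull.await(); // block producer when full
            items[putIndex] = e;
            putIndex = (putIndex + 1) % items.length;
            count++;
            notEmpty.signal(); // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public E take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) notEmpty.await(); // block consumer when empty
            E e = (E) items[takeIndex];
            takeIndex = (takeIndex + 1) % items.length;
            count--;
            notFull.signal(); // wake one waiting producer
            return e;
        } finally {
            lock.unlock();
        }
    }
}
```

Note the `while` loops around `await()`: a woken thread must re-check the condition, since another thread may have changed the queue state before it reacquired the lock.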


Origin blog.csdn.net/weixin_52315708/article/details/131521785