The meaning of the seven parameters of the thread pool

What is the meaning of the seven parameters of the thread pool?

The so-called seven parameters of the thread pool are the seven parameters passed when creating a thread pool with ThreadPoolExecutor, as shown in the constructor signature below:

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler) {
}

The seven parameters are:

  1. corePoolSize: the number of core threads.
  2. maximumPoolSize: the maximum number of threads.
  3. keepAliveTime: the idle-thread keep-alive time.
  4. TimeUnit: the time unit.
  5. BlockingQueue: the thread pool's task queue.
  6. ThreadFactory: the factory that creates threads.
  7. RejectedExecutionHandler: the rejection policy.
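
As a quick reference, here is a minimal sketch that spells out all seven parameters explicitly; the concrete values (2 core threads, a maximum of 4, a queue of 100, and so on) are only illustrative, not recommendations:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SevenParamsDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                60,                                   // keepAliveTime
                TimeUnit.SECONDS,                     // unit
                new ArrayBlockingQueue<>(100),        // workQueue
                Executors.defaultThreadFactory(),     // threadFactory
                new ThreadPoolExecutor.AbortPolicy()  // handler (rejection policy)
        );
        pool.submit(() -> System.out.println("task running in " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}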

Parameter 1: corePoolSize

Number of core threads: the number of threads that live in the thread pool long-term.

This is like the wealthy households of ancient times hiring "long-term workers" who serve the family year after year. These workers are relatively stable: no matter how much or how little work there is in a given year, they will not be dismissed.

Parameter 2: maximumPoolSize

Maximum number of threads: the largest number of threads the thread pool is allowed to create. When the pool's task queue is full, the pool can grow up to this many threads.

This is the maximum number of people the household can employ. For example, during a festival or a family birthday there is too much work for the "long-term workers" alone, so some "short-term workers" are hired to help out. The maximum number of threads is the total of "long-term workers" + "short-term workers"; in other words, the total headcount cannot exceed maximumPoolSize.

Precautions

The value of the maximum number of threads maximumPoolSize must not be less than the number of core threads corePoolSize; otherwise an IllegalArgumentException is thrown when the pool is constructed at runtime.
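A tiny sketch that reproduces the check, using deliberately invalid, illustrative sizes (corePoolSize = 10, maximumPoolSize = 5):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class InvalidPoolSizeDemo {
    public static void main(String[] args) {
        // corePoolSize (10) is greater than maximumPoolSize (5), so the constructor
        // rejects the arguments and throws java.lang.IllegalArgumentException.
        new ThreadPoolExecutor(10, 5, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
    }
}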

Parameter 3: keepAliveTime

Idle-thread keep-alive time: when the thread pool has no tasks to execute, idle threads beyond the core size are destroyed after this amount of idle time; at most maximumPoolSize (maximum number of threads) - corePoolSize (number of core threads) threads are reclaimed this way.

Sticking with the household analogy: when the household is busy it hires some "short-term workers", and once the work is done and things quiet down those "short-term workers" are dismissed. keepAliveTime describes the (maximum) time a "short-term worker" can stay on in the household after the work has dried up.

Parameter 4: TimeUnit

Time unit: the unit in which the idle-thread keep-alive time is expressed; this parameter works together with parameter 3. Parameter 3 is a long value, so if you pass 1, does that 1 mean one day, one hour, or one second? Parameter 4 decides. TimeUnit has the following 7 values:

  • TimeUnit.DAYS: days
  • TimeUnit.HOURS: hours
  • TimeUnit.MINUTES: minutes
  • TimeUnit.SECONDS: seconds
  • TimeUnit.MILLISECONDS: milliseconds
  • TimeUnit.MICROSECONDS: microseconds
  • TimeUnit.NANOSECONDS: nanoseconds
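
A small sketch of how keepAliveTime and TimeUnit work together; the pool sizes and the 30-second timeout are illustrative assumptions:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class KeepAliveDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 6,
                30, TimeUnit.SECONDS,            // idle non-core threads are reclaimed after 30 seconds
                new LinkedBlockingQueue<>(10));
        // Optionally let core threads time out as well (see the case analysis later in this article).
        pool.allowCoreThreadTimeOut(true);
        // keepAliveTime can be read back in any unit.
        System.out.println("keepAliveTime = " + pool.getKeepAliveTime(TimeUnit.MILLISECONDS) + " ms");
        pool.shutdown();
    }
}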

Parameter 5: BlockingQueue

In Java, BlockingQueue is an interface. Its implementations include ArrayBlockingQueue, DelayQueue, LinkedBlockingQueue, PriorityBlockingQueue, SynchronousQueue and others. They differ mainly in their storage structure and in how elements are operated on, but the principle behind the take and put operations is similar: the purpose is blocking access.

bounded and unbounded

Bounded queue: a queue with a fixed size, for example a LinkedBlockingQueue created with a fixed capacity, or a queue whose size is 0, i.e. a SynchronousQueue that merely hands elements off between producers and consumers.

Unbounded queue: a queue without a fixed size. The characteristic of such queues is that elements can keep being enqueued until the queue overflows. In practice it is almost impossible to reach such a capacity (more than Integer.MAX_VALUE elements), so from the user's point of view the queue is effectively "unbounded". An example is a LinkedBlockingQueue created without a fixed capacity.

blocking and non-blocking

Blocking and non-blocking describe the state of the caller (the program) while it waits for a result (or input). With a blocking call, the current thread is suspended until the result is returned. With a non-blocking call, if the result cannot be obtained immediately the call returns without suspending the current thread, so the caller has to poll periodically to check the processing status.

enqueue

  • add(E e): (non-blocking) calls offer; if offer returns false, throws IllegalStateException("Queue full").
  • offer(E e): (non-blocking) if the queue is not full, inserts the element and returns true immediately; if the queue is full, returns false immediately.
  • put(E e): (blocking) if the queue is full, blocks until space becomes available or the thread is interrupted.
  • offer(E e, long timeout, TimeUnit unit): inserts the element at the tail of the queue; if the queue is full, it waits until one of the following three things happens:
    1. it is woken up
    2. the waiting time expires
    3. the current thread is interrupted

dequeue

  • poll(): (non-blocking) if the queue is empty, returns null immediately; otherwise removes and returns the head element.
  • remove(): (non-blocking) removes the element at the head of the queue; if the queue is empty, throws NoSuchElementException.
  • take(): (blocking) if the queue is empty, blocks until an element becomes available or the thread is interrupted.
  • poll(long timeout, TimeUnit unit): if the queue is not empty, dequeues the head element; if the queue is empty and the timeout has expired, returns null; if the queue is empty and the timeout has not expired, it waits until one of the following three things happens:
    1. it is woken up
    2. the waiting time expires
    3. the current thread is interrupted

view element

  • element(): calls peek() to look at the head element; if peek() returns null, throws NoSuchElementException.
  • peek(): looks at the head element without removing it; returns null if there is none.
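
The following sketch contrasts the blocking and non-blocking variants on a small ArrayBlockingQueue; the capacity of 2 and the element values are arbitrary:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class QueueOpsDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

        System.out.println(queue.offer("a"));                      // true  (non-blocking insert)
        System.out.println(queue.offer("b"));                      // true
        System.out.println(queue.offer("c"));                      // false (queue is full)
        System.out.println(queue.offer("c", 1, TimeUnit.SECONDS)); // waits up to 1 second, then false

        System.out.println(queue.peek());  // "a" - looks at the head without removing it
        System.out.println(queue.poll());  // "a" - removes the head (non-blocking)
        System.out.println(queue.take());  // "b" - would block if the queue were empty
        System.out.println(queue.poll());  // null - queue is empty, returns immediately
    }
}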

blocking queue

Blocking queue: a special queue that adds two extra capabilities on top of an ordinary queue.

Namely:

  1. When the queue is empty, a consumer thread trying to take an element is blocked, and it is woken up when a producer puts an element in.
  2. When the queue is full, a producer thread trying to add an element is blocked, and it is woken up when a consumer takes an element out.

In the thread pool, the blocking queue is the queue that holds all of the pool's pending tasks. It can be set to any of the following implementations:

- ArrayBlockingQueue: a bounded blocking queue backed by an array.
    Characteristics: the queue is implemented on top of an array, and a single internal lock guards both insertion and removal, so each put or take operation is atomic.
    Capacity: a size must be specified at creation time and cannot be changed afterwards.
    Elements: null elements cannot be inserted.
- LinkedBlockingQueue: an optionally bounded blocking queue backed by a linked list.
    Characteristics: it has two locks, an enqueue lock and a dequeue lock (ReentrantLock + Condition), one for insertion and one for removal, so the two sides do not interfere with each other; the two locks control the blocking and waking of puts and takes, and an internal AtomicInteger count keeps the element count accurate.
    Capacity: defaults to Integer.MAX_VALUE.
    Elements: null elements cannot be inserted.
- SynchronousQueue: a blocking queue that stores no elements; tasks are handed directly to a thread instead of being held.
    It is an unbuffered hand-off queue: compared with a buffered BlockingQueue it removes the middleman (the buffer), so the consumer has to go to the "market" and get the goods directly from the producer.
    Characteristics: strictly one-to-one; if either the producer or the consumer is missing, the other side blocks; both fair and non-fair modes exist.
    Capacity: the size is always 0 and the remaining capacity is also 0.
    Elements: null elements cannot be inserted.
- PriorityBlockingQueue: an unbounded blocking queue that supports priority ordering.
    Characteristics: an unbounded queue that relies on a Comparator (or the elements' natural ordering) to position the elements; the maximum size does not exceed Integer.MAX_VALUE - 8.
    Capacity: the default initial capacity is 11; the elements are stored in an array that grows as needed.
    Elements: null elements cannot be inserted.
- DelayQueue: an unbounded blocking queue implemented with a priority queue; an element can only be taken once its delay has expired.
    Characteristics: stores Delayed elements and can be used to implement delayed tasks and similar features.
    Capacity: the underlying priority queue used for storage has a default initial capacity of 11.
    Elements: null elements cannot be inserted; the stored elements are implementations of Delayed.
    take: the internal priority queue keeps the elements ordered, and getDelay() determines the remaining time; an element can only be taken once its delay has run out.
- LinkedBlockingDeque: a double-ended blocking queue backed by a linked list.
    Characteristics: BlockingDeque is a double-ended queue; when an element cannot be inserted it blocks the thread trying to insert, and when an element cannot be taken it blocks the thread trying to take.
    Capacity: the capacity can be specified (to prevent excessive growth); if not specified, it defaults to Integer.MAX_VALUE.
    Elements: supports both FIFO and FILO access (elements can be inserted and removed at both the head and the tail) in a thread-safe way.
- LinkedTransferQueue: an unbounded blocking queue backed by a linked list. It is similar to SynchronousQueue but also offers non-blocking methods.
    public interface TransferQueue<E> extends BlockingQueue<E> {
        // Transfers the element to a waiting consumer immediately, if possible.
        // If a consumer is already waiting to receive it (in take or a timed poll(long, TimeUnit)), the element is handed over immediately; otherwise returns false.
        boolean tryTransfer(E e);

        // Transfers the element to a consumer, waiting if necessary.
        // If a consumer is already waiting to receive it (in take or a timed poll(long, TimeUnit)), the element is handed over immediately; otherwise this waits until the element is received by a consumer.
        void transfer(E e) throws InterruptedException;

        // Same as the method above, but with a timeout.
        boolean tryTransfer(E e, long timeout, TimeUnit unit) throws InterruptedException;

        // Returns true if at least one consumer is waiting.
        boolean hasWaitingConsumer();

        // Returns an estimate of the number of waiting consumers.
        int getWaitingConsumerCount();
    }
    Characteristics: implements the TransferQueue interface, which extends BlockingQueue and mainly adds the tryTransfer and transfer methods.
    Comparison: compared with SynchronousQueue (in fair mode) it keeps track of its length and can be queried; compared with LinkedBlockingQueue it has higher performance (it uses CAS spinning); compared with ConcurrentLinkedQueue it can block. It can therefore be seen as a superset of ConcurrentLinkedQueue, SynchronousQueue and LinkedBlockingQueue, which makes it useful for comparative study. Since it has come up, ConcurrentLinkedQueue is described briefly below.
    Summary:
    ArrayBlockingQueue: the length of the backing array must be specified when the queue is created.
    LinkedBlockingQueue: implemented with internal Node objects; default size Integer.MAX_VALUE.
    PriorityBlockingQueue: a priority queue with a default initial capacity of 11; a Comparator is needed to compare the elements.
    DelayQueue: a delay queue whose elements must implement Delayed; backed by a priority queue with a default initial capacity of 11.
    SynchronousQueue: a hand-off queue with a size of 0; a producer and a consumer must both be present, otherwise either one blocks.
    LinkedTransferQueue: adds the transfer methods; tryTransfer and transfer can detect whether a thread is waiting for data and, if so, hand the new element straight to that thread instead of putting it into the queue.
-----------------------------------------------------------------------------------------
As an aside, a few words about ConcurrentLinkedQueue.
- ConcurrentLinkedQueue: a non-blocking, unbounded, thread-safe queue, the counterpart of the blocking LinkedBlockingQueue. Like it, ConcurrentLinkedQueue is a FIFO queue implemented with a linked list, but it uses no locks at all, relying on spinning + CAS to achieve thread safety.
    Characteristics:
        1. null elements cannot be enqueued.
        2. The next pointer of the last enqueued element is null.
        3. The item of every undeleted node in the queue is non-null and reachable by traversing from the head node.
        4. Deleting a node sets its item to null; iterating over the queue skips nodes whose item is null.
        5. The head and tail pointers do not necessarily point to the actual head and tail nodes; they may lag behind.

Of these, LinkedBlockingQueue is the most commonly used, and the thread pool's queuing strategy is closely tied to the BlockingQueue it is given.
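
To show how the choice of queue shapes the pool's behaviour, the sketch below builds two pools roughly equivalent to what the Executors.newFixedThreadPool and Executors.newCachedThreadPool factory methods create in common JDK implementations; it is a simplified illustration, not the JDK source:

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueChoiceDemo {
    public static void main(String[] args) {
        // Roughly what Executors.newFixedThreadPool(4) builds: a fixed number of core
        // threads in front of an (effectively) unbounded LinkedBlockingQueue.
        ThreadPoolExecutor fixed = new ThreadPoolExecutor(
                4, 4, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());

        // Roughly what Executors.newCachedThreadPool() builds: no core threads, a
        // SynchronousQueue hand-off, and a new thread whenever no idle thread is available.
        ThreadPoolExecutor cached = new ThreadPoolExecutor(
                0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<>());

        fixed.shutdown();
        cached.shutdown();
    }
}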

Parameter 6: ThreadFactory

Thread factory: the factory method the thread pool calls whenever it creates a thread. Through it you can set the thread priority, the thread naming rules, and the thread type (user thread or daemon thread). An example of using a custom thread factory is shown below:

public static void main(String[] args) {
    // Create a custom thread factory
    ThreadFactory threadFactory = new ThreadFactory() {
        @Override
        public Thread newThread(Runnable r) {
            // Create a thread for the pool
            Thread thread = new Thread(r);
            // Set the thread name
            thread.setName("Thread-" + r.hashCode());
            // Set the thread priority (maximum value: 10)
            thread.setPriority(Thread.MAX_PRIORITY);
            // ......
            return thread;
        }
    };
    // Create the thread pool with the custom thread factory
    ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(
            10, 10, 0, TimeUnit.SECONDS,
            new LinkedBlockingQueue<>(), threadFactory);
    threadPoolExecutor.submit(new Runnable() {
        @Override
        public void run() {
            Thread thread = Thread.currentThread();
            System.out.println(String.format("Thread: %s, priority: %d",
                                             thread.getName(), thread.getPriority()));
        }
    });
    // Shut down the pool so the program can exit
    threadPoolExecutor.shutdown();
}

When the program runs, it prints the thread name and a priority of 10 set by the custom thread factory, which shows that the custom thread factory takes effect: both the thread name and the thread priority are set through it.

Parameter 7: RejectedExecutionHandler

Rejection policy: the strategy executed when a new task cannot be accepted because the task queue is full and the number of threads has reached maximumPoolSize. There are 4 built-in rejection policies:

  • AbortPolicy: discards the task and throws a RejectedExecutionException.
  • CallerRunsPolicy: runs the task on the thread that submitted it.
  • DiscardOldestPolicy: discards the oldest task at the head of the queue and then retries submitting the current task (repeating this process if necessary).
  • DiscardPolicy: also discards the task, but without throwing an exception.

The thread pool's default policy is AbortPolicy: reject the task and throw an exception.
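
A small sketch of plugging a rejection policy into the constructor; the deliberately tiny pool (one thread, a queue of capacity 1) and the choice of CallerRunsPolicy are illustrative only:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RejectionDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.CallerRunsPolicy()); // overflow tasks run on the submitting thread

        for (int i = 0; i < 5; i++) {
            final int taskId = i;
            pool.execute(() -> System.out.println(
                    "task " + taskId + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}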

Case analysis

When the number of threads in the pool is less than corePoolSize, a new task causes a new thread to be created to execute it, even if there are idle threads in the pool at that moment.

When the number of threads has reached corePoolSize, new tasks are put into the workQueue and wait there to be scheduled by the pool.

When the workQueue is full and the number of threads is still below maximumPoolSize, a new (non-core) thread is created to execute the task.

When the workQueue is full and the number of threads has reached maximumPoolSize, new tasks are handed to the RejectedExecutionHandler.

When the number of threads exceeds corePoolSize, the threads above the core size are reclaimed once their idle time reaches keepAliveTime.

If allowCoreThreadTimeOut(true) is set, the core threads (those within corePoolSize) are also reclaimed once their idle time reaches keepAliveTime.
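
The same submission flow, expressed as a simplified sketch; it paraphrases the steps above and is not the actual ThreadPoolExecutor source:

public class SubmissionFlowSketch {
    // A simplified paraphrase of the decision order described above; it is not the
    // real ThreadPoolExecutor implementation, just the same flow expressed as code.
    static String decide(int poolSize, int corePoolSize, int maximumPoolSize, boolean queueHasSpace) {
        if (poolSize < corePoolSize)    return "start a new core thread for the task";
        if (queueHasSpace)              return "put the task into the workQueue";
        if (poolSize < maximumPoolSize) return "start a new non-core thread for the task";
        return "hand the task to the RejectedExecutionHandler";
    }

    public static void main(String[] args) {
        System.out.println(decide(1, 2, 4, true));   // below corePoolSize -> new core thread
        System.out.println(decide(2, 2, 4, true));   // core threads busy, queue has space -> enqueue
        System.out.println(decide(2, 2, 4, false));  // queue full, below maximumPoolSize -> new thread
        System.out.println(decide(4, 2, 4, false));  // queue full, at maximumPoolSize -> rejection policy
    }
}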

Summary

This article introduces the seven parameters of the thread pool:

  1. corePoolSize: the number of core threads, i.e. the number of threads the pool keeps under normal conditions; the household's "long-term workers".
  2. maximumPoolSize: the maximum number of threads, i.e. the largest number of threads the pool may have when it is busy; the household's "long-term workers" + "short-term workers".
  3. keepAliveTime: the keep-alive time of idle threads, i.e. the longest a "short-term worker" may stay on once there is no more work.
  4. TimeUnit: the time unit, used together with parameter 3 to describe its unit.
  5. BlockingQueue: the thread pool's task queue, the container that holds the tasks waiting to be executed.
  6. ThreadFactory: the thread factory used by the pool to create threads; through it the thread naming rules, priority and thread type can be set.
  7. RejectedExecutionHandler: the rejection policy, i.e. the strategy executed when the number of tasks exceeds what the thread pool can hold.

Origin blog.csdn.net/weixin_44834205/article/details/127667428