Java Source Code Analysis and Interview Questions: The Application of Queues in Other Java Source Code

This blog series accompanies an imooc column on Java source code analysis and real interview questions from major-company interviewers.
The GitHub repositories for the column are:
Source analysis: https://github.com/luanqiu/java8
Article demos: https://github.com/luanqiu/java8_demo
(Take a look if you need them.)


Besides providing APIs for developers to use directly, queues are also closely integrated with other APIs in Java, such as thread pools and locks. The thread pool uses the queue API directly, while the lock borrows the idea of a queue and reimplements it. Thread pools and locks are APIs we use all the time at work, and interviewers ask about them frequently; the queue plays a vital role in the implementation of both. Let's take a look.

1 The combination of queue and thread pool

1.1 The role of queues in the thread pool

Everyone has probably used a thread pool. For example, suppose we want to create a fixed-size thread pool and have a running thread print out a sentence. We would write code like this:

ExecutorService executorService = Executors.newFixedThreadPool(10);
// submit() submits a task to the pool
// Thread.currentThread() returns the current thread
executorService.submit(() -> System.out.println(Thread.currentThread().getName() + " is run"));
// Printed result (we print the name of the current thread):
pool-1-thread-1 is run

The Executors class in the code is a concurrency utility class that helps us construct thread pools more conveniently. The newFixedThreadPool method constructs a fixed-size thread pool; since we pass in 10, the pool can create at most 10 threads.

In real work, however, we cannot control the size of the incoming traffic. Here we set a maximum of 10 threads, but if 100 requests arrive at once, 10 threads will certainly be too busy to handle them all. What about the remaining 90 requests?

This is where the queue comes in. We put the requests that the threads cannot digest yet into the queue and let them wait there; as the threads free up, they take requests out of the queue and consume them one by one.

Let's draw a picture to explain:
[Figure: requests queueing in front of the thread pool]
The right side of the figure shows the 10 threads consuming requests at full capacity; the left side shows the remaining requests queued up, waiting to be consumed.

As you can see, the queue occupies a very important position in the thread pool: when the threads in the pool are all busy, requests can wait in the queue and be consumed gradually.
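To see this buffering concretely, here is a minimal runnable sketch (the task count, sleep time, and class name are all illustrative assumptions): we submit 100 tasks to a 10-thread fixed pool, then peek at the backing queue, where roughly 90 tasks should still be waiting.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class QueueBufferDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executorService = Executors.newFixedThreadPool(10);
        // Submit 100 tasks at once; only 10 can run concurrently
        for (int i = 0; i < 100; i++) {
            executorService.submit(() -> {
                try {
                    TimeUnit.MILLISECONDS.sleep(500); // simulate some work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        // newFixedThreadPool is backed by a ThreadPoolExecutor, so we can
        // inspect its work queue: roughly 90 tasks should be waiting here
        ThreadPoolExecutor pool = (ThreadPoolExecutor) executorService;
        System.out.println("tasks waiting in queue: " + pool.getQueue().size());
        executorService.shutdown();
    }
}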

Next, let's look at which types of queues the thread pool uses and what roles they play.

1.2 Types of queues used in the thread pool

1.2.1 LinkedBlockingQueue queue

newFixedThreadPool

As we just said, newFixedThreadPool is a fixed-size thread pool, meaning that once the pool is initialized, the number of threads in it does not change (by default, the pool does not recycle core threads). Let's look at the source code of newFixedThreadPool:

// When ThreadPoolExecutor is initialized, the first parameter is coreSize and the second is maxSize.
// coreSize == maxSize means the number of threads is fixed at initialization, hence a "fixed" thread pool.
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

In the source code you can see that a ThreadPoolExecutor is initialized. ThreadPoolExecutor is the underlying thread pool API; we will cover it in detail in the thread pool chapter. Its fifth constructor parameter is the queue, and the thread pool chooses different queues for different scenarios. Here a LinkedBlockingQueue is created with its default constructor, which means the queue's maximum capacity is Integer.MAX_VALUE: when the pool's processing capacity is exhausted, up to that many tasks can be stored in the blocking queue.

In actual work, however, using newFixedThreadPool directly is often not recommended, mainly because it uses the default constructor of LinkedBlockingQueue: the queue capacity is far too large, and for requests that require a real-time response, an oversized queue is often harmful.

For example, suppose we use the thread pool above: 10 threads and a queue bounded only by Integer.MAX_VALUE. When concurrent traffic is heavy, say 10,000 requests per second, 10 threads cannot keep up at all, and many requests become blocked in the queue. The 10 threads keep consuming steadily, but draining the queue takes time; suppose it takes 3 seconds. These real-time requests all have timeouts, say 2 seconds by default. Once 2 seconds have passed, the request has timed out and an error is returned, yet many tasks are still waiting in the queue; even if they complete later, the results can no longer be returned to the caller.

In this situation, the caller sees the interface return a timeout error, but the server-side task is still queued and may execute successfully 3 seconds later. The caller has no way to perceive this; when the caller retries, it may find that the earlier request already succeeded.

If the caller is a web page, the experience is even worse: the first request shows an error, and when the user refreshes the page, it turns out the previous request already succeeded. That is a very bad experience.

Therefore, we want the queue to be bounded, with its size set according to the actual consumption rate, so that queued requests can still be executed before the interface times out.
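For illustration, here is a hedged sketch of how we might construct such a pool ourselves (the capacity of 200 and the choice of rejection policy are assumptions; tune them to your actual traffic and timeout):

// coreSize == maxSize == 10, like newFixedThreadPool, but with a bounded queue.
// A capacity of 200 is an illustrative guess: pick a value small enough that
// queued work can still finish before the caller's timeout.
// When both the threads and the queue are full, AbortPolicy rejects new tasks
// immediately, so the caller fails fast instead of timing out.
ThreadPoolExecutor pool = new ThreadPoolExecutor(
        10, 10,
        0L, TimeUnit.MILLISECONDS,
        new LinkedBlockingQueue<>(200),
        new ThreadPoolExecutor.AbortPolicy());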

This scenario is somewhat involved, so to make it easier to follow, here is a picture of the whole process:
[Figure: requests timing out on the caller's side while their tasks are still queued on the server]
In real work, this kind of problem has caused very serious production incidents, so we must be careful when using this pool.

newSingleThreadExecutor

Like newFixedThreadPool, newSingleThreadExecutor is also backed by a LinkedBlockingQueue, but the pool has only one thread underneath, which means it can handle only one request at a time; the remaining requests queue up and wait to be executed. Let's look at the source code of newSingleThreadExecutor:

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        // The first two parameters fix the pool at a single thread,
        // so it can consume only one task at a time.
        // The fifth parameter is a LinkedBlockingQueue: requests beyond the
        // single thread's capacity queue up and wait.
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}

As you can see, the bottom layer again uses the default constructor of LinkedBlockingQueue, so the queue's maximum capacity is Integer.MAX_VALUE.
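A small usage sketch of the single-thread behavior (the output shown is illustrative): every task runs on the same thread, strictly in submission order, and tasks submitted while one is running wait in the LinkedBlockingQueue.

ExecutorService single = Executors.newSingleThreadExecutor();
for (int i = 0; i < 3; i++) {
    final int taskId = i;
    single.submit(() -> System.out.println("task " + taskId + " on " + Thread.currentThread().getName()));
}
single.shutdown();
// Possible output -- one thread, strict submission order:
// task 0 on pool-1-thread-1
// task 1 on pool-1-thread-1
// task 2 on pool-1-thread-1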

1.2.2 SynchronousQueue

Besides newFixedThreadPool, the other factory methods for creating thread pools correspond to different queues. Let's look at newCachedThreadPool, whose bottom layer uses a SynchronousQueue. The source code is as follows:

public static ExecutorService newCachedThreadPool() {
    // The fifth parameter is a SynchronousQueue
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}

A SynchronousQueue does not hold elements at all: every insertion must wait for another thread to take the element out. Its advantage is that tasks are handed directly to a thread with no queueing delay, and since the pool's maxSize is Integer.MAX_VALUE, bursts of requests are absorbed by creating new threads. The disadvantage is that putting data into the queue cannot return immediately; it has to wait for a thread to take the data. If the request volume is large and the consumption capacity is poor, a large number of requests (and threads) will be held up until they are slowly consumed, so use it with caution in everyday work.
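A minimal runnable sketch of this handoff behavior (the class name is an illustrative assumption): offer() fails immediately when no consumer is waiting, while put() blocks until some thread takes the element.

import java.util.concurrent.SynchronousQueue;

public class SynchronousQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> queue = new SynchronousQueue<>();

        // The queue holds no elements, so offer() with no waiting consumer fails
        System.out.println("offer with no consumer: " + queue.offer("task")); // false

        // With a consumer blocked in take(), the handoff succeeds
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("taken: " + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        queue.put("task"); // blocks until the consumer takes it
        consumer.join();
    }
}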

1.2.3 DelayedWorkQueue queue

newScheduledThreadPool creates a thread pool for scheduled tasks. The underlying source code is as follows:
[Figure: source code of newScheduledThreadPool, constructing a ScheduledThreadPoolExecutor backed by a DelayedWorkQueue]
From the screenshot you can see that the underlying queue is a DelayedWorkQueue, a delay queue: the pool's delayed-execution capability is provided by the DelayedWorkQueue itself. New delayed tasks go into the queue first, and only when a task's delay has expired can the pool take it out of the queue and execute it.

The newSingleThreadScheduledExecutor method likewise relies on the delay capability of DelayedWorkQueue, the difference being that the former executes tasks with a single thread.
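A small usage sketch of the delay behavior (the pool size and the 3-second delay are arbitrary choices): the task sits in the DelayedWorkQueue first, and a worker thread can only take it out and run it once the delay has expired.

ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
// The task goes into the DelayedWorkQueue first; only after 3 seconds can a
// worker thread take it from the queue and execute it
scheduler.schedule(
        () -> System.out.println(Thread.currentThread().getName() + " executed after the delay"),
        3, TimeUnit.SECONDS);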

1.3 Summary

From the source code of the thread pool, we can see:

  1. In the design of the thread pool, the queue buffers data and defers its execution: when the pool's consumption capacity is saturated, requests can wait in the queue while the pool consumes them gradually.
  2. Depending on the scenario, the thread pool chooses among queues such as DelayedWorkQueue, SynchronousQueue, and LinkedBlockingQueue to implement different capabilities, for example using the delay feature of DelayedWorkQueue to implement scheduled execution.

2 The combination of queue and lock

When we write locking code, we usually write something like this:

ReentrantLock lock = new ReentrantLock();
// Acquire the lock before the try block: if lock() were inside try and failed,
// the finally block would try to unlock a lock we never acquired
lock.lock();
try {
    // do something
} finally {
    lock.unlock();
}

Initialize the lock -> acquire the lock -> execute the business logic -> release the lock: that is the normal flow. But we know that only one thread at a time can hold the lock, so what do the other threads do when they fail to acquire it?

They wait. Threads that fail to acquire the lock wait in a queue; when the lock is released, they compete for it. Here is a schematic diagram:
[Figure: threads queued in the synchronization queue while one thread holds the lock]
The part marked in red in the figure is the synchronization queue. Threads that fail to acquire the lock queue up there, and when the lock is released, the threads in the synchronization queue start competing for it.

As you can see, one function of the queue inside a lock is to manage the threads that fail to acquire it, letting them wait patiently.

The synchronization queue is not implemented with an existing queue API, but its underlying structure and ideas are consistent with the queues we have studied, so learning the queue chapters well is very helpful for understanding the lock's synchronization queue.
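To observe the synchronization queue from the outside, here is a hedged sketch (the thread count, the sleep, and the class name are illustrative assumptions) using ReentrantLock's getQueueLength(), which returns an estimate of how many threads are waiting to acquire the lock:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class SyncQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();

        lock.lock(); // the main thread holds the lock
        try {
            // Start 3 threads that all block in lock(); they park in the sync queue
            for (int i = 0; i < 3; i++) {
                new Thread(() -> {
                    lock.lock();
                    try {
                        // do something
                    } finally {
                        lock.unlock();
                    }
                }).start();
            }
            TimeUnit.MILLISECONDS.sleep(200); // give the threads time to enqueue
            System.out.println("threads waiting for the lock: " + lock.getQueueLength()); // likely 3
        } finally {
            lock.unlock(); // the queued threads now compete for the lock
        }
    }
}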

3 Summary

The queue data structure is truly important: it plays a significant role in both of these heavyweight APIs, the thread pool and the lock. We need to be very clear about the data structures underlying the various queues and understand how data is enqueued and dequeued. The queue chapters are on the complicated side, so I suggest you debug them a lot; we also provide some debug demos on GitHub that you can try.
