Recalling a production problem: CompletableFuture's default thread pool

In JDK 7, we typically used an ExecutorService backed by a thread pool. The Executors factory class offers four common ways to create one:

Executors.newSingleThreadExecutor()

Executors.newFixedThreadPool()

Executors.newCachedThreadPool()

Executors.newScheduledThreadPool()
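As a refresher, here is a minimal sketch of how each of the four factory methods is typically used (the pool sizes passed in are illustrative, not recommendations):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

public class ExecutorsDemo {
    public static void main(String[] args) {
        // one worker thread, tasks run sequentially in submission order
        ExecutorService single = Executors.newSingleThreadExecutor();
        // a fixed number of worker threads with an unbounded queue
        ExecutorService fixed = Executors.newFixedThreadPool(4);
        // creates threads on demand and reuses idle ones (60s keep-alive)
        ExecutorService cached = Executors.newCachedThreadPool();
        // supports delayed and periodic task scheduling
        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(2);

        single.submit(() -> System.out.println("single"));

        single.shutdown();
        fixed.shutdown();
        cached.shutdown();
        scheduled.shutdown();
    }
}
```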

In JDK 8, CompletableFuture was born. It simplifies writing asynchronous tasks and provides many methods for composing and transforming their results.
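For example, a computation can be started asynchronously and its result transformed without any manual thread management:

```java
import java.util.concurrent.CompletableFuture;

public class CfDemo {
    public static void main(String[] args) {
        // supplyAsync runs the supplier on ForkJoinPool.commonPool() by default;
        // thenApply chains a transformation onto its result
        CompletableFuture<Integer> f = CompletableFuture
                .supplyAsync(() -> 21)
                .thenApply(n -> n * 2);
        System.out.println(f.join()); // prints 42
    }
}
```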

Back to the point. The production problem was this: when traffic surged, responses became very slow, and all requests only slowly received their results.

Troubleshooting steps:

① Check CPU and memory usage

CPU usage was very low, around 5%, and memory usage stayed flat, which basically ruled both of them out.

② Check GC

No full GC had occurred. Young GC had increased a little, but the average pause was 50 ms, which is normal.

③ Analyze the heap dump

In a 126 MB dump, one class occupied 60 MB. At first I suspected a memory leak, but analysis showed this part was a cache that had to be there. Admittedly, the dump taken at the time was a bit small, so the observation was not very precise.

④ Check the stack traces

I found many ForkJoinPool.commonPool-worker threads in the WAITING state. Anyone who has used CompletableFuture knows that it is backed by the ForkJoin pool.

Why were so many threads waiting? The production server above had only two cores, so the pool had only one worker thread executing. Why one? Let's look at the source code.

 
 
@Test
public void test12() throws InterruptedException { // a quick unit test first
    CompletableFuture.runAsync(() -> { // set a breakpoint here
        System.out.println("111");
    });
    Thread.sleep(400000);
}

Let's step through the source code piece by piece.

public static CompletableFuture<Void> runAsync(Runnable runnable) { // the method that runs the task asynchronously
    return asyncRunStage(asyncPool, runnable);
}

What is asyncPool? Let's look at how it is set.

private static final Executor asyncPool = useCommonPool ?
    ForkJoinPool.commonPool() : new ThreadPerTaskExecutor();

What is useCommonPool?

private static final boolean useCommonPool =
    (ForkJoinPool.getCommonPoolParallelism() > 1);
public static int getCommonPoolParallelism() {
    return commonParallelism;
}
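You can observe the common pool's parallelism directly on your own machine. A small sketch:

```java
import java.util.concurrent.ForkJoinPool;

public class ParallelismDemo {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        int par = ForkJoinPool.getCommonPoolParallelism();
        // unless overridden with -Djava.util.concurrent.ForkJoinPool.common.parallelism,
        // par is max(cores - 1, 1), as the static initializer below will show
        System.out.println("cores=" + cores + ", commonPoolParallelism=" + par);
    }
}
```

On a two-core machine this prints a parallelism of 1, which matches the production symptom.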

commonParallelism is the pool's parallelism level. Where does it come from?

static {
    // initialize field offsets for CAS etc
    ......
    commonMaxSpares = DEFAULT_COMMON_MAX_SPARES;
    defaultForkJoinWorkerThreadFactory =
        new DefaultForkJoinWorkerThreadFactory();
    modifyThreadPermission = new RuntimePermission("modifyThread");
    common = java.security.AccessController.doPrivileged
        (new java.security.PrivilegedAction<ForkJoinPool>() {
            public ForkJoinPool run() { return makeCommonPool(); }}); // focus on the makeCommonPool method
    int par = common.config & SMASK; // SMASK is 65535 (sixteen 1 bits), so the AND just extracts the low bits of common.config; we still need to see how config is set
    commonParallelism = par > 0 ? par : 1; // if par is not positive, fall back to 1
}
private static ForkJoinPool makeCommonPool() {
    int parallelism = -1; // the parallelism defaults to -1
    ForkJoinWorkerThreadFactory factory = null;
    ......
    if (parallelism < 0 && // not overridden by a system property
        (parallelism = Runtime.getRuntime().availableProcessors() - 1) <= 0)
        // here it is: the pool's parallelism = number of CPU cores - 1
        parallelism = 1;
    if (parallelism > MAX_CAP)
        parallelism = MAX_CAP;
    return new ForkJoinPool(parallelism, factory, handler, LIFO_QUEUE,
                            "ForkJoinPool.commonPool-worker-"); // the worker thread name prefix
}
                            

That completes the analysis. Now work backwards from it.

Since the production server had few cores, and the CompletableFuture code used the default pool, the number of worker threads was the number of cores minus one. When a flood of requests arrived and the processing logic was complex, many tasks sat waiting for execution, which slowly dragged the server down.

Resize the thread pool

In the book "Java Concurrency in Practice" (http://mng.bz/979c), Brian Goetz and his co-authors give many pertinent suggestions on sizing thread pools. This matters: if a pool has too many threads, they end up competing for scarce processor and memory resources, wasting a lot of time on context switches. Conversely, if it has too few threads, as was the case in this application, some processor cores go underutilized. Brian Goetz suggests estimating the pool size from the expected processor utilization with the formula:

N_threads = N_CPU * U_CPU * (1 + W/C)

where:

❑ N_CPU is the number of processor cores, obtainable via Runtime.getRuntime().availableProcessors()

❑ U_CPU is the target CPU utilization (a value between 0 and 1)

❑ W/C is the ratio of wait time to compute time
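The formula can be sketched directly in code. The utilization target and wait/compute ratio below are assumed values for illustration; measure your own workload to get real ones:

```java
public class PoolSizing {
    public static void main(String[] args) {
        int nCpu = Runtime.getRuntime().availableProcessors();
        double uCpu = 0.8;            // target 80% CPU utilization (assumed)
        double waitOverCompute = 4.0; // e.g. 40ms waiting per 10ms computing (assumed)

        // N_threads = N_CPU * U_CPU * (1 + W/C)
        int nThreads = (int) (nCpu * uCpu * (1 + waitOverCompute));
        System.out.println("suggested pool size: " + nThreads);
    }
}
```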

If that is too long-winded, the usual rules of thumb for sizing a thread pool are:

If the service is CPU-intensive, set the pool size to the number of cores.

If the service is I/O-intensive, set it to the number of cores * 2.
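Putting it all together, the practical fix for this incident is to stop relying on the common pool and pass a properly sized executor as the second argument to CompletableFuture.runAsync/supplyAsync. A minimal sketch, where the pool sizes are assumptions to be tuned for your workload:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPoolDemo {
    public static void main(String[] args) {
        // a dedicated pool so business tasks no longer starve on the tiny common pool
        ExecutorService pool = new ThreadPoolExecutor(
                10, 20,                          // core and max pool size (assumed)
                60, TimeUnit.SECONDS,            // keep-alive for idle threads
                new LinkedBlockingQueue<>(1000),             // bounded queue
                new ThreadPoolExecutor.CallerRunsPolicy());  // back-pressure when saturated

        // the two-argument overload runs the task on our pool, not ForkJoinPool.commonPool()
        CompletableFuture<String> f = CompletableFuture.supplyAsync(
                () -> Thread.currentThread().getName(), pool);

        System.out.println(f.join().startsWith("pool-")); // prints true
        pool.shutdown();
    }
}
```

CallerRunsPolicy is one reasonable saturation choice here: when the queue fills, the submitting thread runs the task itself, which naturally slows down intake instead of dropping work.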
