"In-depth Understanding of the Java Virtual Machine, 2nd Edition" notes (5): tuning case analysis and practice

Concept

Practice is the only criterion for testing truth

Case studies

Program deployment strategy on high-performance hardware

Scenario 1

An online document website with about 150,000 PV/day recently upgraded its hardware. The new machine has 4 CPUs and 16GB of physical memory, the operating system is 64-bit CentOS 5.4, and Resin is used as the web server.

No other applications are deployed on the server for now, so all hardware resources are available to this moderately trafficked website. To make the best use of the hardware, the administrator chose a 64-bit JDK 1.5 and fixed the Java heap at 12GB with the -Xms and -Xmx parameters. After a period of use, the results proved unsatisfactory: the site frequently became unresponsive for long stretches.

Problem analysis 1

Monitoring of the server showed that the unresponsiveness was caused by GC pauses. The virtual machine ran in Server mode, so by default it used the throughput-oriented collector to reclaim the 12GB heap, and a single Full GC paused for as long as 14 seconds. Moreover, because of the program's design, serving a document requires loading it from disk into memory, so document deserialization produces many large objects in memory, many of which are promoted into the old generation and cannot be cleaned up by Minor GC.

Under these conditions, even a 12GB heap is quickly exhausted, producing a pause of over ten seconds roughly every ten minutes, which left both developers and administrators very frustrated.

Setting aside the program code, the main deployment problem is clear: reclaiming an oversized heap causes overly long pauses. Before the hardware upgrade, with a 1.5GB heap on the 32-bit system, users only felt that the site was slow, with no obvious pauses; that is exactly why the hardware was upgraded in the first place. But if the memory allocated to the Java heap is now reduced again, the investment in hardware is largely wasted.


There are currently two main ways to deploy programs on high-performance hardware :

  • Use a single large heap through a 64-bit JDK.
  • Build a logical cluster of several 32-bit virtual machines to utilize the hardware resources.

About Full GC

The administrator in this case adopted the first approach. For strongly interactive, pause-sensitive systems, the prerequisite for giving the Java virtual machine a very large heap is being confident that the application's Full GC frequency can be kept low enough not to affect users, for example one Full GC every ten-plus hours or even once a day. Then the Full GC can be triggered by a timed task in the middle of the night, or the application server can even be restarted automatically, to keep available memory at a stable level.
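The "timed task in the middle of the night" idea can be sketched roughly as below. This is an illustration, not the case's actual code; it assumes explicit GC is permitted (-XX:+DisableExplicitGC is not set) and picks 03:00 arbitrarily.

```java
import java.time.Duration;
import java.time.LocalDateTime;
import java.time.LocalTime;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class NightlyGcTask {
    /** Milliseconds from 'now' until the next occurrence of 'target' (e.g. 03:00). */
    static long delayUntil(LocalDateTime now, LocalTime target) {
        LocalDateTime next = now.toLocalDate().atTime(target);
        if (!next.isAfter(now)) {
            next = next.plusDays(1); // today's slot already passed; use tomorrow's
        }
        return Duration.between(now, next).toMillis();
    }

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r);
            t.setDaemon(true); // don't keep the JVM alive just for this task
            return t;
        });
        long initialDelay = delayUntil(LocalDateTime.now(), LocalTime.of(3, 0));
        // Request a Full GC at 03:00 every day; this is a no-op if -XX:+DisableExplicitGC is set.
        scheduler.scheduleAtFixedRate(System::gc, initialDelay,
                TimeUnit.DAYS.toMillis(1), TimeUnit.MILLISECONDS);
    }
}
```

Restarting the application server on the same schedule would achieve the same goal more bluntly, without relying on System.gc() being honored.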

The key to controlling Full GC frequency is whether the vast majority of objects in the application live briefly and die young: most objects' survival time should not be too long, and in particular there should be no batches of long-lived large objects, so that the old generation's space stays stable.

In most web applications, the lifetime of the main objects should be request-scoped or page-scoped; session-scoped and globally-scoped long-lived objects are relatively rare. As long as the code is written sensibly, it should be possible to run normally in a very large heap without Full GC, and the site's response speed is then well guaranteed even with a huge heap.

Problems you may encounter using a 64-bit JDK to manage large memory

In addition, readers who plan to manage large memory with a 64-bit JDK should also consider the following potential problems:

  • Long pauses caused by memory reclamation.
  • At this stage, 64-bit JDKs generally score lower in performance tests than 32-bit JDKs.
  • The program must be guaranteed to be stable enough, because if the application overflows the heap it is almost impossible to produce a heap dump snapshot (the dump file would be over ten GB or even larger), and even if a snapshot were produced it would be almost impossible to analyze.
  • The same program generally consumes more memory on a 64-bit JDK than on a 32-bit JDK, due to factors such as pointer bloat and data-type alignment padding.

Create a logical cluster

The problems above sound a bit scary, so at this stage many administrators still choose the second approach: build a logical cluster of several 32-bit virtual machines to utilize the hardware resources. Concretely, start multiple application-server processes on one physical machine, each assigned a different port, then put a load balancer in front to distribute requests as a reverse proxy. Readers need not worry much about the performance cost of the balancer's forwarding: even with a 64-bit JDK, many applications run on more than one server anyway, so a front-end balancer is usually present regardless.

Since the only purpose of building a logical cluster on one physical machine is to use the hardware resources as fully as possible, there is no need for high-availability features such as state retention or hot failover, nor for absolutely even load across the virtual machine processes, so an affinity cluster without Session replication is a very good choice. We only need to ensure the cluster has affinity, that is, the balancer always routes a given user's requests to a fixed cluster node by some rule (usually by SessionID), so that the development stage needs essentially no special consideration for the cluster environment.
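Affinity routing by session ID can be as simple as a deterministic hash. This sketch is purely illustrative (real balancers have their own rule engines); it just shows why the same session always lands on the same node.

```java
public class AffinityRouter {
    /** Deterministically map a session ID to one of 'nodes' cluster nodes. */
    static int nodeFor(String sessionId, int nodes) {
        // Math.floorMod avoids a negative index when hashCode() is negative
        return Math.floorMod(sessionId.hashCode(), nodes);
    }

    public static void main(String[] args) {
        // The same session ID always maps to the same node,
        // so no Session replication between nodes is needed.
        System.out.println("session -> node " + nodeFor("JSESSIONID-1234", 5));
    }
}
```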

Problems that may be encountered when using logical clusters

Of course, few solutions are without drawbacks. Deploying with a logical cluster may run into the following problems:

  • Try to avoid contention between nodes for global resources. Disk contention is the most typical: if every node accesses the same disk file at the same time (concurrent writes are especially error-prone), IO exceptions are easy to trigger.
  • It is hard to use certain resource pools with maximum efficiency. Connection pools, for example, are usually established independently on each node, so some nodes' pools may be full while others still have plenty of spare capacity. A centralized JNDI pool can help, but it adds complexity and possible extra performance overhead.
  • Each node is still inevitably subject to the 32-bit memory limit. On a 32-bit Windows platform each process can use only 2GB of memory, and after accounting for overhead outside the heap, the heap can generally be opened up to at most about 1.5GB. On some Linux or UNIX systems (such as Solaris) this can be raised to 3GB or even close to 4GB, but a 32-bit process is still capped at a maximum of 4GB (2^32) of address space.
  • Applications that make heavy use of local caches (such as large HashMaps used as K/V caches) waste a great deal of memory in a logical cluster, because every logical node keeps its own copy of the cache. In that situation, consider replacing the local cache with a centralized cache.

Final solution

After introducing these two deployment methods, we return to the case. The final deployment plan was adjusted as follows:

  1. Build a logical cluster of 5 32-bit JDK processes, each budgeted at 2GB of memory (with the heap fixed at 1.5GB), occupying 10GB of memory in total.
  2. In addition, set up an Apache service in front as the load-balancing entry point.
  3. Since users care most about response speed, and the main pressure on a document service falls on disk and memory access while CPU sensitivity is low, switch to the CMS collector for garbage collection.
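For reference, the per-process JVM flags implied by this plan might look like the line below. Only the heap size and collector choice come from the case; everything else about each Resin instance (ports, paths) would be configured separately per node.

```
-Xms1500m -Xmx1500m -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
```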

After the deployment was adjusted, the service had no more long pauses, and its speed improved greatly compared with before the hardware upgrade.

Memory overflow caused by synchronization between clusters

Scenario 2

There is a B/S-based MIS system. The hardware is two HP minicomputers, each with 2 CPUs and 8GB of memory; the server is WebLogic 9.2. Each machine starts 3 WebLogic instances, forming a 6-node affinity cluster.

Because it is an affinity cluster, there is no Session synchronization between the nodes, yet some requirements call for sharing certain data across all nodes. Initially this data was kept in the database, but read/write contention was fierce and the performance impact was large, so JBossCache was later introduced to build a global cache.

After the global cache was enabled, the service ran normally for a fairly long time, but recently it has begun to suffer memory overflow problems from time to time.

Problem analysis 2

Before the overflows occurred, memory reclamation had always been normal, recovering to a stable amount of free space after each collection. A memory leak on some rarely exercised code path was suspected, but the administrator reported that the program had not been updated or upgraded recently and that no special operations had been performed. The only option was to run the service for a while with the -XX:+HeapDumpOnOutOfMemoryError parameter. After the most recent overflow, the administrator sent back the heap dump file, which turned out to contain a huge number of org.jgroups.protocols.pbcast.NAKACK objects.

JBossCache uses its own JGroups for data communication within the cluster, and JGroups uses a protocol stack so that the various features needed for sending and receiving packets can be freely combined. Incoming and outgoing packets pass through the up() and down() methods of each layer of the protocol stack, where the NAKACK layer guarantees the ordering and retransmission of packets. The JBossCache protocol stack is shown in the figure below.

[Figure: the JBossCache protocol stack]

Because transmission may fail and require retransmission, outgoing messages must be retained in memory until it is confirmed that all nodes registered in GMS (Group Membership Service) have received them correctly. This MIS system also had a global Filter responsible for security checks: every incoming request updated the user's last-operation time, and that time was synchronized to all nodes so that a user could not log in on multiple machines within a given period. In actual use, a single page often generates several or even dozens of requests, so this Filter made network interaction between cluster nodes extremely frequent. Once the network could no longer keep up with the transmission demands, retransmission data kept accumulating in memory, and a memory overflow soon followed.

The problems in this case lie both in a flaw of JBossCache and in a flaw of the MIS system's implementation. Similar memory overflow exceptions had been discussed many times on the official JBossCache mailing list, and later versions are said to have improved matters. The more significant flaw is in usage: if cluster-shared data is synchronized with a cluster cache like JBossCache, frequent reads are fine, because each node holds a local copy and reads cost little, but writes must not be too frequent, because each write incurs substantial network synchronization overhead.

Overflow error caused by off-heap memory

Scenario 3

A small school project: a B/S-based electronic examination system. So that clients can receive test data from the server in real time, the system uses reverse AJAX technology (also known as Comet or server-side push), with CometD 1.1.1 chosen as the server-side push framework. The server is Jetty 7.1.4; the hardware is an ordinary PC with a Core i5 CPU and 4GB of memory, running a 32-bit Windows operating system.

During testing it was found that the server threw memory overflow exceptions from time to time, and not necessarily on every run; but if it crashed even once during a formal exam, the whole electronic exam would descend into chaos. The webmaster tried enlarging the heap as far as possible (about 1.6GB is the maximum on a 32-bit system), but this had essentially no effect; if anything, overflow exceptions seemed to be thrown even more frequently. Adding -XX:+HeapDumpOnOutOfMemoryError produced nothing: no file was generated when the overflow was thrown. In desperation, the administrator attached jstat and kept staring at the screen, only to find that GC was not frequent and that Eden, the Survivor spaces, the old generation and the permanent generation all reported "emotionally stable, little pressure", yet memory overflow exceptions kept being thrown, putting the administrator under great strain. Finally, after another overflow, the exception stack was found in the system log, as shown below.

[Figure: memory overflow exception stack from the system log]

Problem analysis 3

The operating system limits the memory each process can manage. On the 32-bit Windows platform used by this server the limit is 2GB, of which 1.6GB was allocated to the Java heap. Direct Memory is not part of that 1.6GB, so at most it can carve something out of the remaining 0.4GB. The key to the overflow in this application is this: although the virtual machine will reclaim Direct Memory during garbage collection, Direct Memory cannot, like the new generation or the old generation, notify the collector when it finds its space insufficient. It can only wait for a Full GC triggered when the old generation fills up, and then "incidentally" get its obsolete objects cleaned up.

Otherwise it has to wait until the memory overflow exception is thrown, catch it, and then "shout System.gc()!" in the catch block. If the virtual machine still refuses to listen (for example, the -XX:+DisableExplicitGC switch is on), it can only watch the heap sit there with plenty of free memory while it is forced to throw a memory overflow exception itself. And the CometD 1.1.1 framework used in this case happens to perform a great deal of NIO, which uses Direct Memory.
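The off-heap allocation at issue can be seen with a few lines of NIO. This minimal sketch only shows where the memory comes from: direct buffers are reserved in native memory, capped by -XX:MaxDirectMemorySize rather than by -Xmx.

```java
import java.nio.ByteBuffer;

public class DirectMemoryDemo {
    /** Allocate 'count' direct buffers of 'size' bytes each; returns total bytes reserved. */
    static long allocateDirect(int count, int size) {
        long total = 0;
        for (int i = 0; i < count; i++) {
            // allocateDirect reserves native (off-heap) memory, not Java heap memory
            ByteBuffer buf = ByteBuffer.allocateDirect(size);
            total += buf.capacity();
        }
        return total;
    }

    public static void main(String[] args) {
        // On the 32-bit machine in this case, such allocations compete for
        // the ~0.4GB left over after the 1.6GB heap, not for the heap itself.
        System.out.println("reserved " + allocateDirect(4, 1024 * 1024) + " bytes off-heap");
    }
}
```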

From practical experience, beyond the Java heap and the permanent generation, the following areas also consume noticeable memory; the sum of all of them is bounded by the operating system's per-process memory limit:

  • Direct Memory: its size can be adjusted via -XX:MaxDirectMemorySize; when it is insufficient, OutOfMemoryError or OutOfMemoryError: Direct buffer memory is thrown.
  • Thread stacks: adjustable via -Xss; when insufficient, StackOverflowError is thrown (the stack cannot grow "vertically", i.e. a new stack frame cannot be allocated) or OutOfMemoryError: unable to create new native thread (no room "horizontally", i.e. a new thread cannot be created).
  • Socket buffers: every Socket connection has a Receive and a Send buffer, occupying roughly 37KB and 25KB respectively; with many connections this adds up to a considerable amount. If they cannot be allocated, an IOException: Too many open files may be thrown.
  • JNI code: if the code calls native libraries through JNI, the memory those libraries use is not in the heap.
  • Virtual machine and GC: executing the virtual machine's and the collector's own code also consumes some memory.

External commands cause the system to slow down

Scenario 4

A digital campus application system runs on a 4-CPU Solaris 10 machine, with GlassFish as the middleware. During large-scale concurrent stress testing, request response times were found to be slow. The operating system's mpstat tool showed high CPU usage, and most of the CPU was being consumed by something other than the application itself. This is abnormal: normally the user application's CPU usage should dominate when the system is working properly.

With a Solaris 10 DTrace script, one can check which system calls are consuming the most CPU in the current situation. Running DTrace showed that it was the "fork" system call. As everyone knows, "fork" is how UNIX-like systems spawn new processes, yet in the Java virtual machine, user-written Java code has at most the concept of threads; no processes should be spawned.

Problem analysis 4

This was a very unusual phenomenon. Through this system's developers, the answer was finally found: each user request needed to run an external shell script to obtain some system information, and the script was invoked via Java's Runtime.getRuntime().exec(). This achieves the goal, but it is a very expensive operation in the Java virtual machine: even if the external command itself finishes quickly, the cost of creating a process on every frequent call is considerable. To execute such a command, the JVM first clones a process with the same environment variables as the current virtual machine, uses the new process to run the external command, and finally exits that process. Done frequently, this is costly not only in CPU but also in memory.

After the user removed the shell-script invocation as suggested and obtained the information through the Java API instead, the system quickly returned to normal.
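What "use the Java API instead" looks like depends on what the shell script actually gathered, which the notes do not record. Assuming it was basic OS facts, an in-process sketch could be:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import java.lang.management.RuntimeMXBean;

public class SystemInfo {
    /** Gather basic system information in-process, with no fork() and no child process. */
    static String describe() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        RuntimeMXBean rt = ManagementFactory.getRuntimeMXBean();
        return os.getName() + " / " + os.getArch()
                + ", cpus=" + os.getAvailableProcessors()
                + ", uptime=" + rt.getUptime() + "ms";
    }

    public static void main(String[] args) {
        // The JVM answers directly instead of cloning itself to run a shell.
        System.out.println(describe());
    }
}
```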

The server JVM process crashes

Scenario 5

A B/S-based MIS system. The hardware is two HP machines, each with 2 CPUs and 8GB of memory; the server is WebLogic 9.2. After a period of normal operation, it was recently found that the virtual machine processes of the cluster nodes frequently shut down while running: an hs_err_pid###.log file is left behind and the process disappears. Nodes on both physical machines have experienced these process crashes.

The system logs show that shortly before each crash, the node's virtual machine process produced a large number of identical exceptions, as shown in the code below.

[Figure: exceptions logged shortly before the crash]

This is an exception for a remote connection being disconnected abnormally. The system administrator learned that the system had recently been integrated with an OA portal: whenever the to-do items of the MIS system's workflow change, the OA portal must be notified through a web service so that the changes are synchronized to the portal. Testing several of the to-do synchronization web services with SoapUI revealed that a call took as long as 3 minutes to return, and the result was an interrupted connection.

Problem analysis 5

Because the MIS system has many users and to-do items change rapidly, the web-service calls were made asynchronously so as not to be dragged down by the OA system's speed. But because the two systems' speeds were completely mismatched, the longer the system ran, the more incomplete web-service invocations accumulated, so waiting threads and Socket connections grew and grew until they exceeded the virtual machine's capacity and the process crashed. Solution: have the OA portal repair the unusable integration interface, and replace the asynchronous calls with a producer/consumer message queue; the system then returned to normal.
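A minimal sketch of the producer/consumer queue idea (the class and method names here are hypothetical, not from the actual fix): a bounded BlockingQueue decouples fast MIS producers from the slow OA consumer, so the backlog is bounded instead of accumulating threads and sockets without limit.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class TodoSyncQueue {
    // Bounded queue: when the OA side is slow, producers get a 'false' (and can
    // drop, merge or retry) instead of piling up threads and connections.
    private final BlockingQueue<String> queue;

    TodoSyncQueue(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    /** Producer side: the MIS workflow publishes a to-do change. */
    boolean publish(String todoChange) {
        return queue.offer(todoChange); // non-blocking: returns false when full
    }

    /** Consumer side: a worker drains changes one by one and calls the OA web service. */
    String take() throws InterruptedException {
        return queue.take();
    }

    public static void main(String[] args) throws InterruptedException {
        TodoSyncQueue sync = new TodoSyncQueue(1000);
        sync.publish("todo-42 updated");
        System.out.println("consumed: " + sync.take());
    }
}
```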

Inappropriate data structure leads to excessive memory usage

Scenario 6

There is a backend RPC server using a 64-bit virtual machine, with memory configured as -Xms4g -Xmx8g -Xmn1g and the ParNew+CMS collector combination. While serving external traffic, Minor GC pauses are usually within 30 milliseconds, which is entirely acceptable. But the business needs to load a data file of about 80MB into memory every 10 minutes for analysis, which creates more than a million HashMap<Long, Long> entries in memory; during that period, Minor GC pauses exceed 500 milliseconds, which is unacceptable. The details are shown in the GC log below.

[Figure: GC log during data-file analysis]

Problem analysis 6

Observing this case, the usual Minor GC is very short because most objects in the new generation can be cleared: after a Minor GC, Eden and the Survivor spaces are essentially empty. During analysis of the data file, however, the 800MB Eden fills up quickly and triggers a GC, and after the Minor GC most new-generation objects are still alive. The ParNew collector uses a copying algorithm, whose efficiency rests on the premise that most objects are short-lived; when too many objects survive, copying them all to a Survivor space and keeping their references correct becomes a heavy burden, so the GC pause time lengthens markedly.

Without modifying the program, solving this purely from the GC-tuning angle, one can effectively remove the Survivor spaces (with the parameters -XX:SurvivorRatio=65536 -XX:MaxTenuringThreshold=0 or -XX:+AlwaysTenure), so that surviving new-generation objects go straight into the old generation after the first Minor GC and are cleaned up there by the next Major GC. This treats the symptom but has significant side effects; the permanent fix is to modify the program, because the root cause is that the HashMap<Long, Long> structure is far too space-inefficient for storing this data file.

Let us analyze the space efficiency in detail. In a HashMap<Long, Long> structure, only the two long values stored as Key and Value are payload: 16B (2 x 8B). Once each long is boxed into a java.lang.Long object, it gains an 8B Mark Word and an 8B Klass pointer on top of the 8B storing the long value itself. After the two Long objects form a Map.Entry, there is an additional 16B object header, then an 8B next field and a 4B int hash field, plus 4B of alignment padding, and finally an 8B reference to this Entry from the HashMap's table. So storing two long values actually costs (Long(24B) x 2) + Entry(32B) + HashMap Ref(8B) = 88B, a space efficiency of 16B/88B = 18%, which is far too low.
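A sketch of a more space-efficient alternative, assuming the keys can be kept sorted: two parallel long[] arrays store the same mapping at 16B of payload per entry, with none of the boxing and Entry overhead computed above. The class itself is illustrative, not from the book.

```java
import java.util.Arrays;

public class CompactLongMap {
    // Parallel primitive arrays: 16 bytes per entry of real data,
    // versus ~88 bytes per entry for HashMap<Long, Long>.
    private final long[] keys;
    private final long[] values;

    /** Keys must be sorted ascending; lookup is O(log n) by binary search. */
    CompactLongMap(long[] sortedKeys, long[] values) {
        this.keys = sortedKeys;
        this.values = values;
    }

    Long get(long key) {
        int i = Arrays.binarySearch(keys, key);
        return i >= 0 ? values[i] : null; // null when the key is absent
    }

    public static void main(String[] args) {
        CompactLongMap map = new CompactLongMap(
                new long[]{1L, 5L, 9L}, new long[]{10L, 50L, 90L});
        System.out.println(map.get(5L)); // prints 50
    }
}
```

Since the data file is reloaded in bulk every 10 minutes, sorting once at load time is a reasonable trade for the large memory saving.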

Long pause caused by Windows virtual memory

Scenario 7

There is a GUI desktop program with a heartbeat function: it sends a heartbeat signal every 15 seconds, and if the peer does not reply within 30 seconds, the connection to the peer program is considered broken. After launch, the heartbeat check was found to produce occasional false positives. The logs revealed the cause: every so often the program produced no log output at all for intervals of around a minute, that is, it was in a paused state.

Because this is a desktop program needing little memory (-Xmx256m), GC was not initially suspected as the cause of the pauses. But after adding the parameters -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCDateStamps -Xloggc:gclog.log, the GC log file confirmed that the pauses were indeed caused by GC. Most collections stayed within 100 milliseconds, but occasionally a GC took close to a minute.
[Figure: GC pause times from the GC log]

Locating the long pause in the GC log (with the -XX:+PrintReferenceGC parameter added), the fragment below was found. It shows that the GC action itself did not take long; most of the time was spent between preparing to start the GC and actually starting it.

[Figure: GC log fragment showing the long pause]

Problem analysis 7

Besides the GC log, one characteristic of this GUI program's memory behavior was also observed: when it is minimized, its occupied memory shown in the resource manager drops dramatically while its virtual memory does not change. The suspicion was that on minimization, the program's working memory is automatically swapped out to the disk page file, so that a later GC must first page that memory back in, producing the abnormally long GC pause.

This conjecture was confirmed after checking MSDN. To avoid this phenomenon in a Java GUI program, add the parameter -Dsun.awt.keepWorkingSetOnMinimize=true. Many AWT programs use this parameter (for example the Visual VM bundled with the JDK) to ensure the program responds immediately after being restored from minimization. After adding this parameter, the problem in this case was solved.

Hands-on practice: tuning Eclipse's running speed

Because environments differ greatly, the experiment is not reproduced in detail here; only the key points are recorded.

Program running status before tuning

VisualGC plug-in for VisualVM

[Figure: VisualGC view of the program before tuning]


A simple Eclipse plug-in development exercise: displaying Eclipse's startup time


1. Download and install the JDK and Eclipse. Note: you need to download Eclipse for RCP and RAP Developers, otherwise you cannot create a Plug-in Development project.

2. Create a new project. After installation, open Eclipse and click File -> New Project. Select Plug-in Project and click Next. Create a project named com.developer.showtime, leaving all parameters at their default values.

3. Under the src folder of the com.developer.showtime project, create a class named ShowTime with the following code:

package com.developer.showtime;

import org.eclipse.jface.dialogs.MessageDialog;
import org.eclipse.swt.widgets.Display;
import org.eclipse.swt.widgets.Shell;
import org.eclipse.ui.IStartup;

public class ShowTime implements IStartup {
    public void earlyStartup() {
        // Runs on the UI thread once the workbench has started
        Display.getDefault().syncExec(new Runnable() {
            public void run(){
                // eclipse.startTime is set by the Eclipse launcher when the JVM starts
                long eclipseStartTime = Long.parseLong(System.getProperty("eclipse.startTime"));
                long costTime = System.currentTimeMillis() - eclipseStartTime;
                Shell shell = Display.getDefault().getActiveShell();
                String message = "Eclipse start in " + costTime + "ms";
                MessageDialog.openInformation(shell, "Information", message);
            }
        });
    }
}

4. Modify the plugin.xml file as follows:

<?xml version="1.0" encoding="UTF-8"?>
<?eclipse version="3.4"?>

<plugin>
   <extension point="org.eclipse.ui.startup">
         <startup class="com.developer.showtime.ShowTime"/>
   </extension>
</plugin>

5. Trial run

Right-click the project and choose Run As -> Eclipse Application. A new Eclipse instance runs, and after it starts it displays the time the startup took.

6. Export the plug-in.

Right-click Export -> Deployable plug-ins and fragments, enter the export path under Directory, and click Finish. A plugins directory is generated at that location, containing the plug-in jar: com.developer.showTime_1.0.0.201110161216.jar. Copy this jar into the plugins directory under the Eclipse installation directory, then start Eclipse to see how long it takes to start.

[Figure: Eclipse startup-time dialog]

Performance changes and compatibility issues when upgrading to JDK 1.6

Upgrading from JDK 1.5 to 1.6 does not necessarily bring performance improvements.

Optimization of compilation time and class loading time

-Xverify:none skips the bytecode verification process, shortening class-loading time


Compile Time

Compile time here refers to the time the virtual machine's JIT compiler (Just-In-Time Compiler) spends compiling hot spot code.

We know that, to achieve the Java language's cross-platform nature, Java code is compiled into Class files stored as bytecode (ByteCode), and the virtual machine executes bytecode instructions by interpretation, which is much slower than C/C++ code compiled into native binaries.

To address the speed of interpreted execution, since JDK 1.2 the virtual machine has had two built-in runtime compilers. If a Java method is invoked often enough, it is judged to be hot code and handed to the JIT compiler to be compiled just in time into native code, improving execution speed (this is the origin of the HotSpot virtual machine's name).

Code compiled dynamically at runtime may even be better than the output of C/C++ static compilation, because the runtime compiler can collect a great deal of information that a static compiler cannot know, and can apply very aggressive optimizations, deoptimizing and falling back when an optimization's assumptions stop holding. So as long as the Java program itself has no problems in its code (mainly leaks, such as memory leaks and connection leaks), then as more and more of the code is compiled, it should run faster and faster.

The biggest disadvantage of Java runtime compilation is that compiling consumes the program's normal running time; this is the "compile time" referred to above.


The virtual machine provides the -Xint parameter to disable the compiler, forcing the virtual machine to execute bytecode in a purely interpreted mode.

But this kind of optimization turns out to bring no real benefit.


At the other end from interpreted execution, the virtual machine also has a more powerful compiler: when running in -client mode it uses the lightweight compiler code-named C1, and there is also a relatively heavyweight compiler code-named C2 that can apply more optimization measures.

If Eclipse is started with a virtual machine in -server mode, the C2 compiler is used. VisualGC then shows the virtual machine spending more than 15 seconds compiling code during startup. If the reader's habit is to leave Eclipse open for long periods, the extra compilation time consumed by C2 is eventually earned back through faster running speed, so using -server mode is also a good choice.

Adjust the memory settings to control the frequency of garbage collection

The following parameters can be used to request the virtual machine to generate GC logs:

  • -XX:+PrintGCTimeStamps prints GC timestamps
  • -XX:+PrintGCDetails prints GC details
  • -verbose:gc prints GC information; its output is already covered by the parameter above, so it can be omitted
  • -Xloggc:gc.log writes the GC log to the file gc.log
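Put together, a command line that captures a GC log might look like this (MyApp is a placeholder main class, not from the notes):

```
java -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Xloggc:gc.log MyApp
```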

Full GCs are mostly triggered by expansion of the old generation's capacity, and partly by expansion of the permanent generation's space.

-Xms/-Xmx
-XX:PermSize/-XX:MaxPermSize

Setting the parameters above (with -Xms equal to -Xmx, and -XX:PermSize equal to -XX:MaxPermSize) fixes the capacities of the Java heap and the permanent generation, avoiding automatic expansion at runtime.


-XX:+DisableExplicitGC suppresses GCs explicitly triggered by System.gc()

Choose a collector to reduce latency

While a build is compiling, the user continues other coding work, so low pause latency is what matters in this scenario.

CMS is the most suitable collector for this scenario

-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC

(The ParNew collector is already the default new-generation collector once the CMS collector is selected; it is written out only to make the configuration clearer.) These parameters tell the virtual machine to use the ParNew and CMS collectors for the young and old generations respectively.
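Assembling the flags discussed in this section, the -vmargs portion of an eclipse.ini could look like the sketch below; the specific sizes are illustrative guesses for a machine with a few GB of RAM, not values prescribed by the notes.

```
-vmargs
-Xverify:none
-Xms512m
-Xmx512m
-XX:PermSize=96m
-XX:MaxPermSize=96m
-XX:+DisableExplicitGC
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
```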


Origin blog.csdn.net/u011863024/article/details/114463874