What should I do if my application frequently triggers Full GC, and how do I troubleshoot it?

Full GC happens in every application, so how often counts as "frequent"? My personal rule of thumb is that more than 2 occurrences per day already qualifies as frequent. Frequent Full GC also causes real problems for the application: every GC triggers a stop-the-world (STW) pause that suspends all worker threads, and if the pause lasts long enough, software that monitors the application via heartbeats may conclude that it is no longer alive and kick off its error handling.

Here I want to share a general approach to troubleshooting Full GC. Because the root cause of GC differs from business to business, there are many different optimizations and remedies, and I will not discuss those here.

Normally, we still need a heap dump taken at the time of the Full GC so that we can analyze which data is occupying the memory. There are two ways to obtain the dump file:

1. Add the JVM startup parameters -XX:+HeapDumpBeforeFullGC and -XX:+HeapDumpAfterFullGC

This method must be configured before the application starts. When you no longer need it, you also have to modify the JVM parameters and restart the application again.
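For example, the startup command would look something like this (a sketch only; app.jar and the dump path are placeholders for your own application and directory):

java -XX:+HeapDumpBeforeFullGC -XX:+HeapDumpAfterFullGC -XX:HeapDumpPath=/temp/ -jar app.jar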

2. Set the flags at runtime with the jinfo command (the method commonly used in production environments)

There is no need to restart the JVM, and the change takes effect immediately. Once the dump file has been generated, the flags can be cleared again. Full GC usually recurs frequently, and there is no need to export a dump every single time, so after you have captured one dump sample you can clear the flags and then analyze the dump file at your leisure.

The first step is to obtain the pid of the Java process, for example with jps (or ps, etc.):

#jps

5940 Main

3012 Jps

The second step is to use the jinfo command to set the VM flags:

#jinfo -flag +HeapDumpBeforeFullGC 5940

#jinfo -flag +HeapDumpAfterFullGC 5940

Use #jinfo -flags pid to check whether the flags have taken effect.
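You can also query an individual flag by name; while the flag is enabled, the output should look roughly like this (exact formatting may vary by JDK version):

#jinfo -flag HeapDumpBeforeFullGC 5940

-XX:+HeapDumpBeforeFullGC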

 

If the output lists -XX:+HeapDumpBeforeFullGC and -XX:+HeapDumpAfterFullGC, the flags have taken effect. Please note: some people check with ps or similar commands, see no such JVM parameters, and conclude that the setting did not work. That is because these flags were set temporarily, so they do not show up in ps, and they will also be lost the next time the JVM starts.

The dump file will be generated the next time a Full GC occurs. If you cannot find the file afterwards, remember to configure the dump file location with the parameter -XX:HeapDumpPath=/temp/
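HeapDumpPath is, to my knowledge, also a manageable flag, so it should be possible to set it at runtime with jinfo in the same way, without a restart (the path here is just an example):

#jinfo -flag HeapDumpPath=/temp/ 5940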

 

After the dump file is retrieved, we can clear the originally set parameters:

#jinfo -flag -HeapDumpBeforeFullGC 5940

#jinfo -flag -HeapDumpAfterFullGC 5940

Use #jinfo -flags pid again to check whether it worked.

If the output shows the change, it has taken effect. Please note: after clearing, the flags do not disappear from the output; instead the "+" in front of each flag becomes a "-", and "-" means the flag is now disabled.
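Querying a single flag by name makes the before/after difference easy to see; after clearing, the output should look roughly like this (again, exact formatting may vary by JDK version):

#jinfo -flag HeapDumpBeforeFullGC 5940

-XX:-HeapDumpBeforeFullGC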

We can refer to the help description of jinfo (run jinfo -h):
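Roughly, on JDK 8 (the exact wording differs between versions), the relevant part of the usage output is:

-flag <name>         to print the value of the named VM flag
-flag [+|-]<name>    to enable or disable the named VM flag
-flag <name>=<value> to set the named VM flag to the given value
-flags               to print VM flags
-sysprops            to print Java system properties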

 

 

 

To analyze the dump file, we can use tools such as MAT (Eclipse Memory Analyzer). For an example, refer to: Through the mybatis source code, analyze an OutOfMemoryError caused by improper use of mybatis.

Origin blog.csdn.net/kevin_mails/article/details/103404883