java.lang.OutOfMemoryError: GC overhead limit exceeded: problem analysis and solution

1. Error reproduction

2022-12-29 10:12:07.210 ERROR 73511 --- [nio-8001-exec-6] o.a.c.c.C.[.[.[/].[dispatcherServlet]    : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Handler dispatch failed; nested exception is java.lang.OutOfMemoryError: GC overhead limit exceeded] with root cause

java.lang.OutOfMemoryError: GC overhead limit exceeded

The cause of this error: the JVM throws it when garbage collection takes a large amount of time but frees only a small amount of space. By Sun's official definition, the error is thrown when more than 98% of the total time is spent in GC and less than 2% of the heap is recovered. It usually means the heap is too small; in other words, there is simply not enough memory.
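To make the condition concrete, here is a minimal sketch (my own illustration, not this project's code) that will typically end in exactly this error when run on a small heap, for example with java -Xmx32m -XX:+UseParallelGC GcOverheadDemo, because every object stays reachable and each collection reclaims almost nothing:

import java.util.HashMap;
import java.util.Map;

// Hypothetical reproduction sketch: the map keeps every entry reachable,
// so successive collections free almost nothing and the JVM eventually
// reports "GC overhead limit exceeded" instead of making progress.
public class GcOverheadDemo {
    public static void main(String[] args) {
        Map<Long, String> retained = new HashMap<>();
        long i = 0;
        while (true) {
            retained.put(i, "value-" + i); // stays reachable forever
            i++;
        }
    }
}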

The startup command for this project was as follows, with both the initial (-Xms) and maximum (-Xmx) heap set to 256m:

nohup java -Xms256m -Xmx256m -Dspring.profiles.active=test -jar ...

When the data volume had not yet grown dramatically, this memory was sufficient, because our files were basically under 1 MB. Later, changes in the algorithm's business logic pushed the data volume for a single request to more than 50 MB, and the OOM above appeared. For a single such request, YGC reached 81 and FGC reached 51 according to the GC statistics captured at the time.
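For reference, GC counts like these can be read from a running JVM with jstat; the following prints the YGC and FGC columns (among others) once per second, where <pid> is a placeholder for the process id:

jstat -gcutil <pid> 1000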

2. Problem analysis

There are many different OOM scenarios; here I will only give a simple analysis of the one I ran into. The error message was:

java.lang.OutOfMemoryError: GC overhead limit exceeded

The root cause of this problem is that the heap was not given enough memory when the project started, but there are optimizations we can make on our own side as well. On the code side, the data-processing logic can be made lighter; if the memory overhead is still too large, the JVM startup parameters have to be re-tuned. On the JVM side, we can optimize the heap allocation directly. Either way, we need to understand our business, refine it into a business model, analyze the memory overhead of that model, and size the JVM memory based on that analysis.
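As an example of making the data processing lighter, the sketch below (a hypothetical illustration, not our actual code) contrasts reading an import file into memory in one piece with streaming it line by line, so that only a small window of the data is live on the heap at any time:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

// Hypothetical illustration: two ways of handling a large (e.g. 50 MB) import file.
public class ImportSizingDemo {

    // Heavy variant: the whole file is materialized on the heap as one String.
    static long countNonBlankAllAtOnce(Path file) throws IOException {
        String content = Files.readString(file); // the full ~50 MB lives on the heap at once
        return content.lines().filter(line -> !line.isBlank()).count();
    }

    // Lighter variant: lines are read lazily and become garbage as soon as they are processed.
    static long countNonBlankStreaming(Path file) throws IOException {
        try (Stream<String> lines = Files.lines(file)) { // only a small buffer is live at a time
            return lines.filter(line -> !line.isBlank()).count();
        }
    }
}

The same idea applies to the algorithm output: writing results out in batches instead of accumulating one large structure keeps the per-request footprint small.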

3. Problem solving

Now that the problem is understood, I will mainly share the process of solving it. Many people may ask: what is the main basis for sizing the JVM memory? How much memory is it appropriate to allocate in the end?

Personally, I think the key lies in understanding the system's main business, building a business model for the main flow, and then roughly estimating the amount of data a request brings into that model; this holds even for highly concurrent systems. Take our system as an example: the largest memory consumers in our model are the user data import plus the data produced by the algorithm, so we can estimate the approximate memory overhead of one request under this model. If there is high concurrency, multiply by the number of such requests per second, and it becomes easy to work out how much memory the JVM should be given at startup. I adjusted the startup command as follows:

nohup java -Xms1024m -Xmx1024m -Dspring.profiles.active=test -jar ...

The basis for my sizing calculation was:

1. The places that consume the most memory in the system are: the user-imported data + the processed data after the algorithm runs;

2. The data produced by the algorithm is about 50 MB, meaning that when a request arrives this data alone occupies roughly 50 MB of memory;

3. The other memory consumption of such a request is estimated at another 50 MB, because the data imported by the user also takes a lot of memory;

4. On top of that there are still plenty of ordinary create/read/update/delete operations, estimated at 100 MB;

5. Finally, expand the main overhead calculated above by a factor of 5 to 10 (rough arithmetic below).
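Putting these estimates together, the dominant request works out to roughly 50 MB + 50 MB + 100 MB = 200 MB; taking the low end of that margin (about 5 times) gives roughly 1 GB, which is consistent with the 1024m heap chosen below.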

So I allocated 1024m of JVM heap for a single server, and under normal circumstances there has been no problem.

After the adjustment, the difference is obvious: YGC was carried out 41 times and FGC was not carried out at all.

4. Summary

In fact, no matter how much theoretical knowledge we learn, it is never as valuable as one problem actually encountered in production. Of course, if there are any mistakes in what I have shared here, please point them out; only in this way can we keep improving.

Reference document: Java OOM Basics: Common OutOfMemoryError Scenario 2: GC overhead limit exceeded Problem Explanation | HeapDump Performance Community

Original article: blog.csdn.net/whc888666/article/details/128496598