OutOfMemoryError series (2): GC overhead limit exceeded


 

This is the second article in the OutOfMemoryError series.

The Java Runtime Environment has a built-in garbage collection (GC) module. Many earlier programming languages have no automatic garbage collection mechanism, so programmers must manually write code to allocate and free memory in order to reuse heap memory.

In a Java program, we only need to worry about allocating memory. When a block of memory is no longer used, the garbage collection (Garbage Collection) module cleans it up automatically. For details, please refer to the GC performance optimization series of articles; in general, the JVM's built-in garbage collection algorithms can cope with most business scenarios.

The java.lang.OutOfMemoryError: GC overhead limit exceeded error means that the program has essentially exhausted all available memory and GC cannot clean it up.

Cause Analysis

When the JVM throws java.lang.OutOfMemoryError: GC overhead limit exceeded, the signal it sends is this: the proportion of time spent on garbage collection is too large, and the amount of useful work done is too small. By default, if more than 98% of the time is spent on GC and less than 2% of memory is recovered, the JVM throws this error.

java.lang.OutOfMemoryError: GC overhead limit exceeded
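For reference, a minimal sketch of how these thresholds map to JVM flags, assuming the HotSpot parallel collector (-XX:+UseParallelGC); the values shown below are the defaults:

// GC time threshold (percent of total time); default is 98
-XX:GCTimeLimit=98
// minimum percentage of heap that must be freed by a GC; default is 2
-XX:GCHeapFreeLimit=2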

Note that the java.lang.OutOfMemoryError: GC overhead limit exceeded error is only thrown in the extreme case where repeated GC cycles recover less than 2% of memory. What would happen if the GC overhead limit error were not thrown? The little memory cleaned up by GC would quickly fill up again, forcing the GC to run once more. This creates a vicious cycle: CPU usage stays at 100% while the GC accomplishes nothing, and users see the system hang; an operation that used to take a few milliseconds now takes several minutes to complete.

This is also a good example of the fail-fast principle.

Examples

The following code adds data to a Map in an infinite loop, which can lead to the "GC overhead limit exceeded" error:

package com.cncounter.rtime;
import java.util.Map;
import java.util.Random;
public class TestWrapper {
    public static void main(String args[]) throws Exception {
        Map map = System.getProperties();
        Random r = new Random();
        while (true) {
            map.put(r.nextInt(), "value");
        }
    }
}

 

JVM configuration parameter: -Xmx12m. The error produced during execution is as follows:

Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.Hashtable.addEntry(Hashtable.java:435)
    at java.util.Hashtable.put(Hashtable.java:476)
    at com.cncounter.rtime.TestWrapper.main(TestWrapper.java:11)

The error message you encounter will not necessarily be this one. Indeed, the JVM parameters we used were as follows:

java -Xmx12m -XX:+UseParallelGC TestWrapper

 

We soon see the java.lang.OutOfMemoryError: GC overhead limit exceeded error message. But this example is actually somewhat of a trap: with a different heap size or a different GC algorithm, the error message produced is not the same. For example, when the Java heap is set to 10 MB:

java -Xmx10m -XX:+UseParallelGC TestWrapper

 

The error produced is then as follows:

Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
    at java.util.Hashtable.rehash(Hashtable.java:401)
    at java.util.Hashtable.addEntry(Hashtable.java:425)
    at java.util.Hashtable.put(Hashtable.java:476)
    at com.cncounter.rtime.TestWrapper.main(TestWrapper.java:11)

Readers should try modifying the parameters and observing the actual behavior; the error message and stack trace may not be the same.

Here the java.lang.OutOfMemoryError: Java heap space error is thrown while the Map is rehashing. If another garbage collection algorithm is used, such as -XX:+UseConcMarkSweepGC or -XX:+UseG1GC, the error is caught by the default exception handler, but without stack trace information, because there is no way to fill in the stack trace when the Exception is created.

For example, with the following configuration:

-Xmx12m -XX:+UseG1GC

Running on Windows 7 x64 with Java 8, the error message produced is:

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "main"
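When no stack trace is available, you can still at least log such errors by installing a default uncaught exception handler. A minimal sketch, not part of the original example (the class name is illustrative):

// Illustrative sketch: register a handler that logs uncaught errors such as OutOfMemoryError.
public class OomLogger {
    public static void main(String[] args) {
        Thread.setDefaultUncaughtExceptionHandler((thread, error) -> {
            // Keep the handler simple: allocating objects here may itself fail under memory pressure.
            System.err.println("Uncaught error in " + thread.getName() + ": " + error);
        });
        // ... application code that may exhaust the heap ...
    }
}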

Readers are advised to experiment with different memory configurations and garbage collection algorithms.

These real cases show that, when resources are limited, you cannot accurately predict the specific way in which a program will die. So, when facing such errors, you should not tie your handling to one specific error message.

Solution

There is a "solution" that avoids the trouble by simply not throwing the "java.lang.OutOfMemoryError: GC overhead limit exceeded" error message: add the following startup parameter:

// not recommended
-XX:-UseGCOverheadLimit
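For completeness, a full command line combining this flag with the earlier example could look like this (again, not recommended):

java -Xmx12m -XX:+UseParallelGC -XX:-UseGCOverheadLimit TestWrapper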

We strongly recommend against specifying this option: it does not really solve the problem, it only slightly delays the out of memory error, and extra work is still required in the end. Specifying this option hides the original java.lang.OutOfMemoryError: GC overhead limit exceeded error, turning it into the more common java.lang.OutOfMemoryError: Java heap space error message.

Note: sometimes the GC overhead limit error is triggered because the heap memory allocated to the JVM is simply too small. In that case, just increase the heap size.

In most cases, however, increasing the heap does not solve the problem. For example, if the program has a memory leak, increasing the heap only postpones the java.lang.OutOfMemoryError: Java heap space error.
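A minimal, hypothetical sketch of such a leak: a static collection that only ever grows, so any heap size will eventually be exhausted:

import java.util.ArrayList;
import java.util.List;

public class LeakExample {
    // Objects added here are never released, so the heap eventually fills up
    // no matter how large -Xmx is.
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void main(String[] args) {
        while (true) {
            CACHE.add(new byte[1024 * 1024]); // "cache" 1 MB and never evict it
        }
    }
}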

Of course, increasing the heap may also lengthen GC pauses, which affects the program's throughput or latency.

If you want to solve the problem at its root, you need to examine the memory allocation code. In short, you need to answer the following questions:

  1. What kind of objects are taking up the most memory?

  2. In which part of the code are these objects allocated?

Clarifying these points may take several days. The general procedure is as follows:

  • Obtain permission to perform a heap dump on the production server. A "dump" is a snapshot of heap memory that can be used for later analysis. These snapshots may contain confidential information such as passwords and credit card numbers, so corporate security restrictions sometimes make it difficult to obtain a heap dump from the production environment (sample dump commands are shown after this list).

  • Take the heap dump at the right moment. In general, the analysis requires comparing multiple heap dump files; if the timing is wrong, the snapshot may be "wasted". In addition, each heap dump "freezes" the JVM, so in a production environment you cannot perform many dump operations, otherwise the system will slow down or hang and you will be in big trouble.

  • Use another machine to load the dump file. If the problematic JVM has 8GB of heap, the machine analyzing the heap dump generally needs more than 8GB of memory. Then open a dump analysis tool (we recommend Eclipse MAT, but you can also use other tools).

  • Find the GC roots that retain the most memory in the snapshot. For details, see: Solving OutOfMemoryError (Part 6) - Dump is not a waste. This may be a bit difficult for beginners, but it will deepen your understanding of heap memory structure and navigation mechanisms.

  • Next, identify the code that may be allocating large numbers of objects. If you are very familiar with the whole system, you may locate the problem quickly; with bad luck, only overtime investigation will do.
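As a concrete starting point for taking the dumps mentioned above, the standard JDK tools can be used; the process id and file paths below are placeholders:

// take an on-demand heap dump of a running JVM (live objects only)
jmap -dump:live,format=b,file=/tmp/heap.hprof <pid>

// or let the JVM write a dump automatically when an OutOfMemoryError occurs
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heap.hprof com.yourcompany.YourClass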

To put in a plug: we recommend Plumbr, the only Java Monitoring Solution with Automatic Root Cause Detection. Plumbr can capture all java.lang.OutOfMemoryError occurrences and also find other performance problems, such as the most memory-intensive data structures, and so on.

Plumbr collects data in the background, including heap memory usage (only the statistical distribution of objects, not the actual data), as well as various problems that are hard to find in a heap dump. If a java.lang.OutOfMemoryError occurs, it performs the necessary processing without stopping the application. Below is a Plumbr alert for java.lang.OutOfMemoryError:

Plumbr OutOfMemoryError incident alert

Its power lies in the fact that, without any additional tools or analysis, you can directly see:

  • Which objects occupy the most memory (here 271 instances of com.example.map.impl.PartitionContainer consume 173MB of memory, while the heap is only 248MB)

  • Where these objects were created (mostly in the MetricManagerImpl class, at line 304)

  • What currently references these objects (the complete reference chain starting from the GC root)

With this information you can get to the root of the problem, for example by slimming down the data structures/models so that they use only the memory they need.

Of course, if the memory analysis and the report generated by Plumbr show that the memory occupied by the objects is perfectly reasonable, and the source code does not need to be changed, then simply increase the heap memory. In that case, modify the JVM startup parameters and (proportionally) increase the following value:

java -Xmx1024m com.yourcompany.YourClass

Here the maximum heap memory is configured as 1GB. Adjust this value to your actual situation. If the JVM still throws OutOfMemoryError, you may need to consult the manual, or use tools to analyze and diagnose again.
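One way to re-check whether GC overhead really is the problem is to enable GC logging and inspect how much time each collection takes and how much memory it frees. A sketch, assuming Java 8 flags (Java 9 and later use -Xlog:gc* instead):

java -Xmx1024m -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps com.yourcompany.YourClass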

Original link:  https://plumbr.eu/outofmemoryerror/gc-overhead-limit-exceeded

Translation Date: August 25, 2017

Translator: anchor (http://blog.csdn.net/renfufei)
