Notes on an OOM that occurred during online testing of a project

A recent project passed its tests on the test line with no problems in the code. In production, however, an OOM appeared, so we began to investigate.

First, start from the logs

Caused by: java.lang.OutOfMemoryError: Java heap space
	at com.alibaba.fastjson.serializer.SerializeWriter.expandCapacity(SerializeWriter.java:290)
	at com.alibaba.fastjson.serializer.SerializeWriter.writeStringWithDoubleQuote(SerializeWriter.java:870)
	at com.alibaba.fastjson.serializer.SerializeWriter.writeString(SerializeWriter.java:2113)
	at com.alibaba.fastjson.serializer.StringCodec.write(StringCodec.java:46)
	at com.alibaba.fastjson.serializer.StringCodec.write(StringCodec.java:35)
	at com.alibaba.fastjson.serializer.MapSerializer.write(MapSerializer.java:270)
	at com.alibaba.fastjson.serializer.MapSerializer.write(MapSerializer.java:44)
	at com.alibaba.fastjson.serializer.ListSerializer.write(ListSerializer.java:137)
	at com.alibaba.fastjson.serializer.JSONSerializer.write(JSONSerializer.java:281)
	at com.alibaba.fastjson.JSON.toJSONString(JSON.java:673)
	at com.alibaba.fastjson.JSON.toJSONString(JSON.java:611)
	at com.alibaba.fastjson.JSON.toJSONString(JSON.java:576)

The error log points at Alibaba's fastjson. I remembered that older versions of fastjson had a known OOM issue in toJSONString, and sure enough, the version we were using was 1.2.47, which does have this problem. We upgraded fastjson to the latest version, 1.2.62, and retested.
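For context, the call shape in the stack trace (ListSerializer → MapSerializer → StringCodec) corresponds to serializing a List of Maps with String values. We cannot know the project's actual payload, so the sketch below is invented purely to reproduce that call shape:

    import com.alibaba.fastjson.JSON;
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class SerializeDemo {
        public static void main(String[] args) {
            // Invented payload shaped like the stack trace suggests:
            // a List of Maps with String values.
            List<Map<String, String>> rows = new ArrayList<>();
            for (int i = 0; i < 5_000_000; i++) {
                Map<String, String> row = new HashMap<>();
                row.put("id", String.valueOf(i));
                row.put("value", "row-" + i);
                rows.add(row);
            }
            // SerializeWriter grows its internal char[] by doubling
            // (the expandCapacity frame in the trace); a large enough
            // payload exhausts the heap during serialization.
            String json = JSON.toJSONString(rows);
            System.out.println(json.length());
        }
    }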

With the upgraded version the error still appeared, this time as Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded.
The logs no longer contained any fastjson-related errors, and offered no other useful information... so we took a different approach.

Second, analyze the heap dump file

Since the log files offered no useful information, we decided to inspect the program's heap dump instead.
Use the following command to generate a heap dump of a running Java program:

  • jmap -dump:format=b,file=/data/heapdump.hprof <pid>
    The file argument is the path of the generated dump, heapdump.hprof is the generated file name, and <pid> is the id of the Java process to dump.
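If it is more convenient to trigger the dump from inside the application, the JDK exposes the same capability through HotSpotDiagnosticMXBean. A minimal sketch (the output path here is just an example):

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;

    public class HeapDumper {
        public static void dump(String path, boolean liveOnly) throws Exception {
            // Obtain the HotSpot diagnostic MBean from the platform MBean server.
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    server, "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            // liveOnly = true dumps only reachable objects,
            // like the "live" option of jmap.
            bean.dumpHeap(path, liveOnly);
        }

        public static void main(String[] args) throws Exception {
            dump("/data/heapdump.hprof", true);
        }
    }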

Once the dump is generated, analyze it with the Eclipse Memory Analyzer (MAT): click File > Open Heap Dump and open the saved heapdump.hprof file (the .hprof suffix is required, otherwise the file cannot be found and opened).
[screenshots omitted]
Using the default overview, you can see that a single thread is holding 2.6 GB of memory.

[screenshot omitted]
Drilling into the thread's details showed that database connections were consuming a large share of the resources, so we went through every piece of database code and checked whether each connection was closed properly. Since our project uses a domestic database rather than MySQL, this was considerably harder to troubleshoot than it would have been with MySQL...

In the end we did find a connection that was not being closed properly, but even after fixing the code the OOM still occurred. Only after checking every SQL statement one by one did we find the root cause: other departments had been running load tests against our production database, leaving behind a large amount of invalid data. With a single table now holding more than twenty million rows, the query result set had grown so large that it overflowed the JVM heap...
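Two of the fixes here lend themselves to a short sketch. The JDBC snippet below is hypothetical (the t_order table, its columns, and the DataSource are invented, and the LIMIT/OFFSET syntax is MySQL-style, so adjust it for your database): try-with-resources guarantees the connection, statement, and result set are closed even on exceptions, and reading one bounded page at a time keeps a twenty-million-row table from being materialized in the heap all at once.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;
    import javax.sql.DataSource;

    public class OrderDao {
        private final DataSource dataSource;

        public OrderDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Reads one bounded page instead of the whole table;
        // try-with-resources closes ResultSet, Statement and Connection
        // even if an exception is thrown.
        public List<String> loadPage(int offset, int pageSize) throws SQLException {
            String sql = "SELECT order_no FROM t_order ORDER BY id LIMIT ? OFFSET ?";
            List<String> result = new ArrayList<>();
            try (Connection conn = dataSource.getConnection();
                 PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setFetchSize(1000);   // hint the driver to stream rows
                ps.setInt(1, pageSize);
                ps.setInt(2, offset);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        result.add(rs.getString("order_no"));
                    }
                }
            }
            return result;
        }
    }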

Third, a summary of the causes

In our project, the OOMs we encountered came down to three main causes:

  • 1. A low, un-upgraded fastjson version caused an OOM; upgrading the version fixes it.
  • 2. Database connections were not closed properly, wasting resources during heavy database activity.
  • 3. Load-test data in the production database was not removed promptly, which caused problems for our testing... (see the cleanup sketch after this list)
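As a follow-up to point 3, the cleanup itself is best done in small batches so that one giant DELETE does not cause problems of its own. A hypothetical sketch (the table, column, and the source = 'pressure_test' marker are invented, and DELETE ... LIMIT is MySQL-style syntax):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class TestDataCleaner {
        // Deletes invalid load-test rows in batches of 1000 so a single
        // huge DELETE does not lock the table or exhaust transaction logs.
        public static int purge(Connection conn) throws SQLException {
            String sql = "DELETE FROM t_order WHERE source = 'pressure_test' LIMIT 1000";
            int total = 0;
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                int deleted;
                do {
                    deleted = ps.executeUpdate();
                    total += deleted;
                } while (deleted > 0);
            }
            return total;
        }
    }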


Source: blog.csdn.net/weixin_43107388/article/details/103701476