(Translation) Possible reasons for "OutOfMemoryError: Java heap space" in Elasticsearch

Original: http://stackoverflow.com/questions/30803404/what-are-the-possible-reasons-behind-the-java-lang-outofmemoryerror-java-heap-sp

When using Elasticsearch, we often run into an `OutOfMemoryError: Java heap space` error, but the logs do not describe what caused it. All we see is something like this:

2015-04-09 13:56:47,527 DEBUG action.index [Emil Blonsky] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
Caused by: java.lang.OutOfMemoryError: Java heap space

Possible reasons for this type of error are as follows (in part):

1. Too much data is loaded into memory, especially fielddata (used for sorting and aggregations).

2. Misconfiguration: the heap size you set does not take effect, perhaps because it was set in the wrong place, so the default heap (min 256 MB, max 1 GB) is used, which is not enough.

3. Too much data is indexed at once, for example the bulk request size is set too large.

4. Queries ask for too many results at once (the `size` parameter is set too large).

5. The master node runs out of memory; in this case the cluster state may be the cause. If the indices use a lot of aliases, the cluster state can consume a lot of memory.
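For points 1 and 2, the heap and fielddata limits can be checked in configuration. A minimal sketch, assuming a 1.x-era Elasticsearch (matching the log above); the 40%/60% values are illustrative assumptions, not recommendations:

```yaml
# config/elasticsearch.yml
# Cap how much heap the fielddata cache may occupy (illustrative value)
indices.fielddata.cache.size: 40%
# Circuit breaker: reject requests that would push fielddata past this limit
indices.breaker.fielddata.limit: 60%
```

In those versions the heap itself is set via the `ES_HEAP_SIZE` environment variable (e.g. `ES_HEAP_SIZE=4g`), which sets both `-Xms` and `-Xmx`; editing `-Xmx` in the wrong startup script is a common way for point 2 to happen.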
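For point 3, the client can bound each bulk request by both document count and payload bytes instead of sending one huge request. A hypothetical helper sketch (the function name, the 1000-action cap, and the 5 MB cap are my own illustrative choices, not an Elasticsearch API):

```python
import json

def chunk_bulk(docs, max_actions=1000, max_bytes=5 * 1024 * 1024):
    """Yield lists of docs, each capped by action count and serialized size."""
    batch, batch_bytes = [], 0
    for doc in docs:
        size = len(json.dumps(doc).encode("utf-8"))
        # Flush the current batch if adding this doc would exceed either cap
        if batch and (len(batch) >= max_actions or batch_bytes + size > max_bytes):
            yield batch
            batch, batch_bytes = [], 0
        batch.append(doc)
        batch_bytes += size
    if batch:
        yield batch
```

Each yielded batch can then be sent as one bulk request; smaller, bounded requests keep the indexing buffers on the node from growing without limit.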

 

PS: A node that hits an OOM must be restarted.

 
