ELK error report and solution summary

ES cluster error report
Reference article: http://www.cnblogs.com/reblue520/p/6972138.html
1. The field is too long
[2017-08-02T00:00:03,182][DEBUG][o.e.a.b.TransportShardBulkAction] [node3] [gzq-rest07-notify-2017-08-01][3] failed to execute bulk item (index) index
java.lang.IllegalArgumentException: Document contains at least one immense term in field="message" (whose UTF8 encoding is longer than the max length 32766)
This error appears only in the Work Circle application's logs. The current fix follows http://rockybean.info/2015/02/09/elasticsearch-immense-term-exception
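A minimal sketch of this kind of fix, not the linked article's exact steps: cap how much of the "message" field is indexed with ignore_above, applied through an index template. The template name, the gzq-rest07-notify-* pattern, the "logs" mapping type, and the local endpoint are assumptions for illustration.

```python
# Hedged sketch: stop values longer than Lucene's 32766-byte term limit from
# aborting bulk items by limiting how much of the keyword field gets indexed.
import requests

ES = "http://localhost:9200"  # assumed Elasticsearch HTTP endpoint

template = {
    "template": "gzq-rest07-notify-*",   # assumed daily index pattern
    "mappings": {
        "logs": {                         # assumed mapping type
            "properties": {
                "message": {
                    "type": "keyword",
                    # strings longer than this many characters stay in _source but
                    # are not indexed; 8191 * 4 bytes covers worst-case UTF-8 terms
                    "ignore_above": 8191
                }
            }
        }
    }
}

resp = requests.put(ES + "/_template/gzq-notify-message-fix", json=template)
print(resp.status_code, resp.text)
```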
2. Cluster event processing timed out (put-mapping)
[2017-08-03T08:01:56,086][DEBUG][o.e.a.b.TransportShardBulkAction] [node3] [bro-conn-log-2017-08-03][0] failed to execute bulk item (index) index {[bro-conn-log-2017-08-03]
org.elasticsearch.cluster.metadata.ProcessClusterEventTimeoutException: failed to process cluster event (put-mapping) within 30s
Plan to try the approach described at http://blog.csdn.net/ypc123ypc/article/details/69945031
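Before applying that plan, it helps to confirm the 30s put-mapping timeout really comes from a backed-up master task queue. A minimal sketch under assumptions (local endpoint, bro-conn-log-* daily naming, mappings already defined in a template): check the pending cluster tasks, then pre-create the next day's index so its first bulk write does not trigger a dynamic mapping update under load.

```python
# Hedged sketch: inspect master backlog and pre-create tomorrow's daily index.
import datetime
import requests

ES = "http://localhost:9200"  # assumed Elasticsearch HTTP endpoint

# 1. A long pending-task list points at cluster-state (put-mapping) pressure.
pending = requests.get(ES + "/_cluster/pending_tasks").json()
print("pending cluster tasks:", len(pending.get("tasks", [])))

# 2. Pre-create the next daily index (mappings come from the existing template),
#    so the first bulk request of the day does not wait on a put-mapping event.
tomorrow = (datetime.date.today() + datetime.timedelta(days=1)).isoformat()
resp = requests.put(ES + "/bro-conn-log-" + tomorrow)  # assumed index naming
print(resp.status_code, resp.text)
```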
3. Connection timeout
[2017-08-04T06:09:11,571][WARN ][o.e.m.j.JvmGcMonitorService] [node1] [gc][68115] overhead, spent [571ms] collecting in the last [1s]
4. Insufficient memory
[2017-08-05T06:52:29,940][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [node6] fatal error in thread [elasticsearch[node6][generic][T#375]], exiting
java.lang.OutOfMemoryError: Java heap space
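Items 3 and 4 both point at JVM heap pressure: long GC pauses produce the overhead warnings, and at the limit the heap OOM kills the node. Before and after raising the heap in jvm.options, a quick per-node heap report shows which nodes are close to their maximum. A sketch assuming a local endpoint:

```python
# Hedged sketch: report heap usage per node from the nodes stats API.
import requests

ES = "http://localhost:9200"  # assumed Elasticsearch HTTP endpoint

stats = requests.get(ES + "/_nodes/stats/jvm").json()
for node in stats["nodes"].values():
    mem = node["jvm"]["mem"]
    print("{:<12} heap {:>3}%  used {} MB / max {} MB".format(
        node["name"],
        mem["heap_used_percent"],
        mem["heap_used_in_bytes"] // (1024 * 1024),
        mem["heap_max_in_bytes"] // (1024 * 1024),
    ))
```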
5. The node leaves the cluster
[2017-08-08T10:33:22,537][WARN ][r.suppressed ] path: /x3-router*-2017-06-09, params: {index=x3-router*-2017-06-09}
org.elasticsearch.transport.RemoteTransportException: [node11][10.20.3.25:19300][indices:admin/delete]
6. Failed to create index
[2017-08-10T09:10:43,664][DEBUG][o.e.a.a.i.m.p.TransportPutMappingAction] [node10-rep1] failed to put mappings on indices [[[db-postgresql-2017-08-10/AkH1cVBCSLewd_Vki--vaA]]], type [logs]

Logstash error report
[2017-08-03T14:13:10,613][ERROR][logstash.outputs.elasticsearch] Action
Reason: a large backlog of unassigned shards prevented new indices from being allocated evenly across the data nodes, so all write requests went to the same server; the channel became blocked and data could not be sent normally.
Solution: run the unassigned-shard reallocation script on 10.17.4.247 and then resend the data (http://aishu.iteye.com/blog/2363208); a sketch of such a reroute call is shown below.
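The script on 10.17.4.247 is not reproduced here; the sketch below only illustrates the idea under stated assumptions (local endpoint, a chosen target data node, and that the stuck shards are replicas): list UNASSIGNED shards via _cat/shards and ask _cluster/reroute to allocate each replica onto the target node.

```python
# Hedged sketch: reallocate unassigned replica shards with the reroute API.
import requests

ES = "http://localhost:9200"   # assumed Elasticsearch HTTP endpoint
TARGET_NODE = "node3"          # assumed data node to receive the replicas

shards = requests.get(ES + "/_cat/shards?format=json").json()
unassigned = [s for s in shards if s["state"] == "UNASSIGNED" and s["prirep"] == "r"]

for shard in unassigned:
    command = {
        "commands": [{
            "allocate_replica": {
                "index": shard["index"],
                "shard": int(shard["shard"]),
                "node": TARGET_NODE,
            }
        }]
    }
    resp = requests.post(ES + "/_cluster/reroute", json=command)
    print(shard["index"], shard["shard"], resp.status_code)
```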
[2017-08-03T14:13:34,708][WARN ][logstash.filters.grok ] Timeout executing grok
Tried the approach at http://blog.csdn.net/ypc123ypc/article/details/69945031
