Flume memory optimization

1) Problem description: when the consumer Flume agent is started, it throws the following exception:
ERROR hdfs.HDFSEventSink: process failed
java.lang.OutOfMemoryError: GC overhead limit exceeded
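
Before changing anything, you can check what heap the agent is currently running with (a quick sketch, assuming the agent is still up; the Flume agent's main class is org.apache.flume.node.Application, so it appears as "Application" in jps output):

[atguigu@hadoop102 conf]$ jps -v | grep Application

If no large -Xmx appears in the output, the agent is running with the launcher's small default heap (the stock flume-ng script sets only -Xmx20m), which matches the OutOfMemoryError above.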

2) Solution steps:
(1) In the /opt/module/flume/conf/flume-env.sh file on the hadoop102 server, add the following configuration:
export JAVA_OPTS="-Xms100m -Xmx2000m -Dcom.sun.management.jmxremote"
(2) Synchronize the configuration to the hadoop103 and hadoop104 servers:
[atguigu@hadoop102 conf]$ xsync flume-env.sh
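
For reference, the relevant section of flume-env.sh after the edit looks like the sketch below (the values are the ones from step (1); tune them to your machine's memory):

# -Xms: initial heap size, allocated at JVM startup
# -Xmx: maximum heap size the JVM may grow to
# -Dcom.sun.management.jmxremote: enable JMX so heap usage can be monitored
export JAVA_OPTS="-Xms100m -Xmx2000m -Dcom.sun.management.jmxremote"

JAVA_OPTS is only read when the JVM starts, so the running Flume agents must be restarted for the new heap settings to take effect.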

3) Flume memory parameter settings and optimization
The JVM heap is generally set to 4G or higher.
-Xmx and -Xms are best set to the same value to reduce the performance impact of memory jitter; if they are set inconsistently, frequent full GCs are easily triggered.
-Xms specifies the minimum (initial) size of the JVM heap, allocated at startup; -Xmx specifies the maximum size the heap is allowed to grow to, allocated on demand. If the two are not equal, the heap allocated at startup may be too small, and the resulting memory shortage easily triggers frequent full GCs.
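
Putting this advice into practice, a production flume-env.sh would look something like the following sketch (4096m is an assumed size; choose it according to the machine's available memory):

# Equal -Xms and -Xmx: the whole heap is allocated up front, so the JVM never
# resizes the heap and the full GCs that resizing can trigger are avoided
export JAVA_OPTS="-Xms4096m -Xmx4096m -Dcom.sun.management.jmxremote"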
