Hadoop NameNode fails to start with "GC overhead limit exceeded"

Reference:
https://blog.csdn.net/rongyongfeikai2/article/details/82878578

As described in that post, the NameNode runs out of heap during startup because the fsimage contains too many inodes to load into memory.

To estimate how much memory the fsimage will occupy, dump it with the offline image viewer and count the objects using the following commands (replace ${fsimage_path} with the path to your fsimage file):

./hdfs oiv -p XML -printToScreen -i ${fsimage_path} -o /tmp/a

cat  /tmp/a | egrep "<inode><id>|<block><id>" | wc -l | awk '{printf "Objects=%d : Suggested Xms=%0dm Xmx=%0dm\n", $1, (($1 / 1000000 )*1024), (($1 / 1000000 )*1024)}'
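For example, if the dump contained about 70 million inode/block objects (an illustrative figure, not taken from this cluster), the pipeline would print:

Objects=70000000 : Suggested Xms=71680m Xmx=71680m

In other words, the formula allots roughly 1 GB of heap per million inode/block objects.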




In hadoop-env.sh, add -Xms and -Xmx settings to HADOOP_NAMENODE_OPTS:
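A minimal sketch of what that entry can look like, using the 200g value described next (size it from the object count estimated above):

# hadoop-env.sh: give the NameNode JVM a fixed heap via HADOOP_NAMENODE_OPTS
export HADOOP_NAMENODE_OPTS="-Xms200g -Xmx200g $HADOOP_NAMENODE_OPTS"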

I set both to 200g here; after the configuration change, restart Hadoop.

Other references also suggest setting HADOOP_HEAPSIZE and HADOOP_NAMENODE_INIT_HEAPSIZE to appropriate values. In this case I raised HADOOP_HEAPSIZE from 24576 to 36864 (MB) and HADOOP_NAMENODE_INIT_HEAPSIZE from 40000 to 60000 (MB), but the NameNode still failed to start with the same error.
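For reference, this is roughly how those two settings look in hadoop-env.sh with the values I tried (interpreted as MB; the exact form of these variables varies by Hadoop version):

# hadoop-env.sh: values from the attempt above, in MB; not sufficient on their own here
export HADOOP_HEAPSIZE=36864
export HADOOP_NAMENODE_INIT_HEAPSIZE=60000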
