HDFS 2.7.4 is deployed in our production environment, and a few exceptions have come up recently; this memo records them and their fixes:
1. dfs.datanode.directoryscan.throttle.limit.ms.per.sec
After the DataNode has been running for a while, the following error shows up in its log:
ERROR org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: dfs.datanode.directoryscan.throttle.limit.ms.per.sec set to value below 1 ms/sec. Assuming default value of 1000
A quick search shows this is a known bug; per the JIRA, the default shipped in hdfs-default.xml falls below the 1 ms/sec minimum the code enforces, so the DirectoryScanner logs this error and falls back to 1000:
https://issues.apache.org/jira/browse/HDFS-9274
Solution:
Edit hdfs-site.xml and add the following property:
<property>
  <name>dfs.datanode.directoryscan.throttle.limit.ms.per.sec</name>
  <value>1000</value>
</property>
After restarting HDFS, the error no longer appears.
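To confirm the value the DataNode actually picks up, hdfs getconf -confKey dfs.datanode.directoryscan.throttle.limit.ms.per.sec prints the effective setting. The same check can be done programmatically through Hadoop's Configuration API; a minimal sketch, assuming only the stock Hadoop client jars on the classpath (the class name is ours, and the 1000 fallback mirrors the log message):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;

public class ThrottleCheck {
    public static void main(String[] args) {
        // HdfsConfiguration loads hdfs-default.xml and hdfs-site.xml from the classpath.
        Configuration conf = new HdfsConfiguration();
        // Read the throttle key; 1000 is the value the DataNode assumes when the key is invalid.
        int limit = conf.getInt(
                "dfs.datanode.directoryscan.throttle.limit.ms.per.sec", 1000);
        System.out.println("effective throttle limit: " + limit + " ms/sec");
        if (limit < 1) {
            System.out.println("below 1 ms/sec: the DirectoryScanner will log the "
                    + "ERROR above and assume 1000");
        }
    }
}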
2. DataXceiver error processing WRITE_BLOCK operation
After the DataNode has been running for a while, the following error shows up in its log:
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: xxxxxx:50010:DataXceiver error processing WRITE_BLOCK operation src: /aaaaaa:58294 dst: /bbbbbb:50010
A quick search suggests the DataNode has exhausted its data-transfer (xceiver) threads, so the limit needs to be raised; the code default for dfs.datanode.max.transfer.threads is 4096 in 2.7.x, which can be too low under heavy write load.
Solution:
Edit hdfs-site.xml and add the following property:
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>8192</value>
</property>
After restarting HDFS, the error no longer appears.
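Whether 8192 is enough depends on the write load, so it is worth watching how many xceiver threads a DataNode actually uses. In 2.7.x the DataNode's JMX servlet (on the web UI port, 50075 by default) exposes an XceiverCount attribute on the DataNodeInfo bean; the rough sketch below polls it over HTTP. The host:port handling and the regex-based JSON scraping are illustration-only shortcuts, not how a production monitor should be built:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class XceiverCheck {
    public static void main(String[] args) throws Exception {
        // DataNode web UI address; 50075 is the default dfs.datanode.http.address port.
        String dn = args.length > 0 ? args[0] : "localhost:50075";
        URL url = new URL("http://" + dn
                + "/jmx?qry=Hadoop:service=DataNode,name=DataNodeInfo");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        // Pull the XceiverCount attribute out of the JSON response.
        Matcher m = Pattern.compile("\"XceiverCount\"\\s*:\\s*(\\d+)").matcher(body);
        if (m.find()) {
            System.out.println("active xceiver threads on " + dn + ": " + m.group(1));
        } else {
            System.out.println("XceiverCount not found in JMX output");
        }
    }
}

If the count keeps crowding the configured maximum, raise dfs.datanode.max.transfer.threads further; each transfer thread holds open files, so the DataNode's file-descriptor limit (ulimit -n) usually has to grow with it.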