Fixing DataNodes that fail to start: org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool

Problem description:
FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to master/192.168.235.129:8020. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:478)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1338)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1304)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:226)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:867)
at java.lang.Thread.run(Thread.java:748)
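
Root cause: this error usually means the DataNode's on-disk storage no longer matches the NameNode's namespace, most often because the NameNode was reformatted and now carries a new clusterID while the DataNode still holds the old one. You can confirm the mismatch by comparing the clusterID recorded in each side's VERSION file. A minimal check, where the two directory paths are placeholders standing in for whatever dfs.namenode.name.dir and dfs.datanode.data.dir are set to in your hdfs-site.xml:

# Paths below are assumptions; substitute your configured storage directories.
NN_DIR=/opt/hadoop/dfs/name      # dfs.namenode.name.dir (assumed)
DN_DIR=/opt/hadoop/dfs/data      # dfs.datanode.data.dir (assumed)
grep clusterID "$NN_DIR/current/VERSION"   # run on the NameNode host
grep clusterID "$DN_DIR/current/VERSION"   # run on each DataNode host

If the two clusterID values differ, the DataNode refuses to load its block pool and exits with the error above.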
Fix (note: this wipes all data stored in HDFS, so it is only suitable for a test cluster or one whose data is disposable):
1. Stop the cluster: stop-dfs.sh
2. Delete all files under the directories configured as dfs.namenode.name.dir and dfs.datanode.data.dir (on every node).
3. Reformat the NameNode: bin/hadoop namenode -format
4. Start the cluster: start-dfs.sh
A consolidated sketch of these steps follows the list.
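
The same procedure as a script, under the assumption that the two storage paths below match your dfs.namenode.name.dir and dfs.datanode.data.dir settings; run the cleanup step on every node in the cluster:

#!/usr/bin/env bash
# WARNING: destroys all HDFS data. Directory values are assumptions.
NN_DIR=/opt/hadoop/dfs/name
DN_DIR=/opt/hadoop/dfs/data

stop-dfs.sh                        # 1. stop HDFS
rm -rf "$NN_DIR"/* "$DN_DIR"/*     # 2. clear NameNode/DataNode storage
bin/hadoop namenode -format        # 3. reformat ('bin/hdfs namenode -format' on newer releases)
start-dfs.sh                       # 4. start HDFS
hdfs dfsadmin -report              # confirm that all DataNodes registered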

Reposted from blog.csdn.net/perfer258/article/details/81432798