Recovering a lost NameNode and DataNode in a Hadoop cluster

If the NameNode is lost, proceed as follows:

From the bin directory of hadoop-2.7.3:
Format the NameNode: hadoop namenode -format
Check the running processes: jps
Stop HDFS: stop-dfs.sh
Start HDFS: start-dfs.sh
Run jps again to confirm the NameNode is back.
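
A minimal sketch of the same sequence as shell commands, assuming Hadoop is installed under /usr/local/hadoop/hadoop-2.7.3 (the path that appears in the log below) and that the sbin scripts are not on the PATH:

cd /usr/local/hadoop/hadoop-2.7.3
bin/hadoop namenode -format   # recreate the NameNode metadata (hdfs namenode -format is the non-deprecated form)
jps                           # list the running Java daemons
sbin/stop-dfs.sh              # stop HDFS
sbin/start-dfs.sh             # start HDFS again
jps                           # NameNode should now appear in the list

Keep in mind that every format gives the NameNode a new clusterID, which is exactly what causes the DataNode problem described next.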

If a DataNode is lost:
Work in the tmp folder under hadoop-2.7.3, i.e. the hadoop.tmp.dir (see the sketch after these steps):
cd tmp/dfs/name/current
vim VERSION (copy the clusterID)
Then switch back and into the data directory:
cd tmp/dfs/data/current
vim VERSION (paste the copied clusterID over the old one)

Stop HDFS: stop-dfs.sh
Start HDFS: start-dfs.sh

Done.
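
A minimal sketch of that clusterID copy done from the shell instead of vim, assuming the tmp folder is /usr/local/hadoop/hadoop-2.7.3/tmp (the path from the log below); adjust it to your own hadoop.tmp.dir:

HTMP=/usr/local/hadoop/hadoop-2.7.3/tmp                                      # assumed tmp folder
NN_CID=$(grep '^clusterID=' $HTMP/dfs/name/current/VERSION | cut -d= -f2)    # the NameNode's clusterID
sed -i "s/^clusterID=.*/clusterID=$NN_CID/" $HTMP/dfs/data/current/VERSION   # write it into the DataNode's VERSION
stop-dfs.sh && start-dfs.sh                                                  # restart HDFS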

Checking the logs:
Open the DataNode log file under hadoop on the worker node and scroll to the end; the error reads as follows:

java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop/hadoop-2.7.3/tmp/dfs/data: namenode clusterID = CID-9d5c6194-c8ed-498b-bab0-4d88d8801e9e; datanode clusterID = CID-b8aec67f-13ee-4971-ae10-935d083ff734
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:775)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:300)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:416)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
at java.lang.Thread.run(Thread.java:748)
2018-10-11 20:43:54,686 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool (Datanode Uuid unassigned) service to hive/192.168.222.118:9000. Exiting.
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:574)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
at java.lang.Thread.run(Thread.java:748)
2018-10-11 20:43:54,687 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool (Datanode Uuid unassigned) service to hive/192.168.222.118:9000
2018-10-11 20:43:54,788 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool (Datanode Uuid unassigned)
2018-10-11 20:43:56,789 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2018-10-11 20:43:56,791 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2018-10-11 20:43:56,793 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hive/192.168.222.118
************************************************************/

The DataNode cannot start because the clusterIDs of the NameNode and the DataNode do not match.
This happens when hdfs namenode -format is run more than once: every format generates a new clusterID for the NameNode, while the DataNodes keep their original clusterID.
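
For reference, the clusterID lives in the VERSION file of each storage directory. An illustrative DataNode VERSION file is sketched below; only the clusterID line matters here, and its value is the mismatching one from the log above (the other fields are placeholders, not real output):

#Thu Oct 11 20:43:54 CST 2018
storageID=DS-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
clusterID=CID-b8aec67f-13ee-4971-ae10-935d083ff734
cTime=0
datanodeUuid=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
storageType=DATA_NODE
layoutVersion=-56

To clear the mismatch, that clusterID has to be changed to the NameNode's value, which in the log above is CID-9d5c6194-c8ed-498b-bab0-4d88d8801e9e.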

Solution:
cat /home/hadoop-2.7.2/tmp/dfs/name/current/VERSION and copy the NameNode's clusterID.
Replace the clusterID in hadoop-2.7.2/data/current/VERSION on every DataNode machine with that value (a sketch follows below).
Done.
Restart with start-all.sh.
Everything comes up normally.
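
A minimal sketch of applying that replacement to every DataNode machine over ssh, assuming passwordless ssh and using hypothetical hostnames (slave1, slave2) together with the paths from the steps above; replace both with your own:

NN_CID=$(grep '^clusterID=' /home/hadoop-2.7.2/tmp/dfs/name/current/VERSION | cut -d= -f2)
for host in slave1 slave2; do   # hypothetical DataNode hostnames
  ssh $host "sed -i 's/^clusterID=.*/clusterID=$NN_CID/' /home/hadoop-2.7.2/data/current/VERSION"
done
start-all.sh                    # restart the cluster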

Reposted from blog.csdn.net/qq_43617838/article/details/85291476