Hadoop formatting: java.io.IOException: Incompatible clusterIDs in /home/lxh/hadoop/hdfs/data: namenode clusterID

1 Overview
  Solving the problem where the DataNode does not start when starting HDFS in Hadoop. The error was:

java.io.IOException: Incompatible clusterIDs in /home/lxh/hadoop/hdfs/data: namenode clusterID = CID-a3938a0b-57b5-458d-841c-d096e2b7a71c; datanode clusterID = CID-200e6206-98b5-44b2-9e48-262871884eeb

2 Description of the problem
  After executing start-dfs.sh, the printed log indicated that both the NameNode and the DataNode start operations were carried out:

Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/lxh/hadoop/hadoop-2.4.1/logs/hadoop-lxh-namenode-ubuntu.out
localhost: starting datanode, logging to /home/lxh/hadoop/hadoop-2.4.1/logs/hadoop-lxh-datanode-ubuntu.out


  However, checking the result with jps showed that the DataNode had not actually started.

10256 ResourceManager
29634 NameNode
29939 SecondaryNameNode
30054 Jps
10399 NodeManager

3 Finding the problem
  This was puzzling, since the cluster had just been running properly and a wordcount test job had been executed. Thinking back over the recent operations: a dfs format had been performed (hdfs namenode -format and hdfs datanode -format), and this situation appeared after the restart. Could it be related to the format? Check the log:

2014-08-08 00:32:08,787 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting. 
java.io.IOException: Incompatible clusterIDs in /home/lxh/hadoop/hdfs/data: namenode clusterID = CID-a3938a0b-57b5-458d-841c-d096e2b7a71c; datanode clusterID = CID-200e6206-98b5-44b2-9e48-262871884eeb
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:477)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:226)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:254)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:974)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:945)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:278)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
at java.lang.Thread.run(Thread.java:745)
2014-08-08 00:32:08,790 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
2014-08-08 00:32:08,791 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
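
  For reference, the datanode's daemon log can be inspected with a command like the one below. The .log file name is an assumption that parallels the .out file shown in the startup output above; adjust it to the actual user and hostname.

tail -n 100 /home/lxh/hadoop/hadoop-2.4.1/logs/hadoop-lxh-datanode-ubuntu.log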

  As the log describes, the failure occurs because the DataNode's clusterID and the NameNode's clusterID do not match.

  Find the cause: check whether things really are as the log describes.

  Open hdfs-site.xml to find the directories configured for the namenode and the datanode, then open the current/VERSION file under each of them and compare.
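
  The configured directories can also be queried directly from the command line with hdfs getconf; the property keys below are the standard Hadoop 2.x names, not values taken from this post's configuration:

hdfs getconf -confKey dfs.namenode.name.dir
hdfs getconf -confKey dfs.datanode.data.dir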

${datanode}/current/VERSION:

storageID=DS-be8dfa2b-17b1-4c9f-bbfe-4898956a39ed
clusterID=CID-200e6206-98b5-44b2-9e48-262871884eeb
cTime=0
datanodeUuid=406b6d6a-0cb1-453d-b689-9ee62433b15d
storageType=DATA_NODE
layoutVersion=-55

  

${namenode}/current/VERSION:

namespaceID=670379
clusterID=CID-a3938a0b-57b5-458d-841c-d096e2b7a71c
cTime=0
storageType=NAME_NODE
blockpoolID=BP-325596647-127.0.1.1-1407429078192
layoutVersion=-56
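
  A quick way to put the two values side by side (here ${namenode} and ${datanode} stand for the directories found in hdfs-site.xml):

grep clusterID ${namenode}/current/VERSION ${datanode}/current/VERSION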

  

  Sure enough, it was exactly as the log recorded. So modify the clusterID in the datanode's VERSION file to match the namenode's, then start dfs again (execute start-dfs.sh) and check with jps; this time everything starts normally.

10256 ResourceManager
30614 NameNode
30759 DataNode
30935 SecondaryNameNode
31038 Jps
10399 NodeManager
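
  The fix above can be scripted roughly as follows. This is only a sketch reusing the clusterID values and the data directory from this particular setup; substitute your own values.

# stop HDFS before touching the storage files
stop-dfs.sh
# rewrite the datanode clusterID so it matches the namenode clusterID
sed -i 's/CID-200e6206-98b5-44b2-9e48-262871884eeb/CID-a3938a0b-57b5-458d-841c-d096e2b7a71c/' \
    /home/lxh/hadoop/hdfs/data/current/VERSION
# start HDFS again and confirm the DataNode is up
start-dfs.sh
jps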

4 Cause of the problem
  After executing hdfs namenode -format, the namenode's current directory is deleted and rebuilt, so the clusterID in its VERSION file changes, while the clusterID in the datanode's VERSION file stays the same. The two clusterIDs are therefore inconsistent.

  To avoid this, after formatting the namenode you can either delete the datanode's current folder, or modify the clusterID in the datanode's VERSION file so that it matches the one in the namenode's VERSION file, and then restart dfs.
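
  The first alternative (deleting the datanode's current folder) might look like the following sketch, using the data directory from this setup. Note that it throws away whatever blocks that datanode stored, so it is only reasonable right after a fresh format:

# stop HDFS first
stop-dfs.sh
# remove the datanode's storage so it re-registers with the namenode's new clusterID
rm -rf /home/lxh/hadoop/hdfs/data/current
# start HDFS again
start-dfs.sh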

Origin www.cnblogs.com/felixzh/p/12069843.html