Hadoop HA cluster HDFS construction - error record

  1. Live Nodes show as 0

Normally all 3 datanodes should appear as live nodes.
* Step 1: View the datanode log: `tail -100 /home/hadoop-2.5.1/logs/hadoop-root-datanode-node2.log`
* Symptom: the clusterIDs of the namenode and datanode are inconsistent
* Cause: repeated formatting. After dfs was formatted the first time, Hadoop was started and used, and then the format command (hdfs namenode -format) was run again. Reformatting regenerates the namenode's clusterID, while the datanode's clusterID stays unchanged.
* Solution: copy the clusterID from the VERSION file under name/current into the VERSION file under data/current, overwriting the old value so the two are consistent.
* Restart the cluster
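The clusterID fix above can be scripted. This is a minimal sketch: the `name/current` and `data/current` paths and the `CID-...` values are illustrative stand-ins (substitute your actual `dfs.namenode.name.dir` and `dfs.datanode.data.dir` locations); it mocks the two VERSION files in a temp directory so the commands are self-contained.

```shell
#!/bin/sh
set -e

# Mock the namenode and datanode metadata dirs (replace with your real paths).
tmp=$(mktemp -d)
mkdir -p "$tmp/name/current" "$tmp/data/current"
echo "clusterID=CID-namenode-1234" > "$tmp/name/current/VERSION"
echo "clusterID=CID-datanode-9999" > "$tmp/data/current/VERSION"

# Extract the namenode's clusterID ...
cid=$(grep '^clusterID=' "$tmp/name/current/VERSION" | cut -d= -f2)

# ... and overwrite the datanode's clusterID with it.
sed -i "s/^clusterID=.*/clusterID=$cid/" "$tmp/data/current/VERSION"

# The datanode VERSION now carries the namenode's clusterID.
grep '^clusterID=' "$tmp/data/current/VERSION"
```

On a real cluster, stop the datanode before editing its VERSION file and restart it afterwards.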

Reference link: https://my.oschina.net/zhongwenhao/blog/603744

2. Eclipse fails to upload files to Hadoop: cause and solution
Symptom: files cannot be uploaded to HDFS from Eclipse.
Cause: the client user does not have write permission on HDFS.
Solution: edit the file `hadoop/etc/hadoop/hdfs-site.xml` (e.g. with vim).
Add this configuration:

<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
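A note on the property name: in Hadoop 2.x this setting is called `dfs.permissions.enabled`, and the older `dfs.permissions` key is kept only as a deprecated alias. Disabling permission checking is acceptable for a development sandbox but not for production. The equivalent 2.x-style fragment:

```xml
<!-- hdfs-site.xml: Hadoop 2.x name for the permission-check switch.
     Disables HDFS permission checking entirely; dev/test use only. -->
<property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
</property>
```

After editing hdfs-site.xml, restart the namenode for the change to take effect.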
  3. Hadoop error: could only be replicated to 0 nodes, instead of 1
    Cause: formatting Hadoop multiple times left the version information inconsistent
    Solution: stop all services and reformat
    • Stop all services first: stop-all.sh
    • Format: hdfs namenode -format
    • Restart all services: start-all.sh
      Note: reformatting can trigger problem 1 again!
