Restoring Apache HDFS to a single NameNode


Following the earlier federation and viewfs experiments, this post restores the environment to a single NameNode. The recovery steps are straightforward.

1. Stop HDFS:
stop-dfs.sh
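
Before touching the configs, it is worth confirming that every daemon actually exited. A minimal check, run on each node (the process names are the standard ones printed by jps for Hadoop 2.x):

jps | grep -E 'NameNode|DataNode|SecondaryNameNode'
# no output means the HDFS daemons are down and the configs are safe to edit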

2. Restore core-site.xml:
<configuration>
        <property>
                <!-- Point the default filesystem back at the single NameNode -->
                <name>fs.defaultFS</name>
                <value>hdfs://192-168-100-142:9999</value>
        </property>
        <property>
                <!-- Trash retention, in minutes -->
                <name>fs.trash.interval</name>
                <value>3</value>
        </property>
</configuration>
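
As a quick sanity check after editing (assuming the hadoop binaries are on the PATH), hdfs getconf echoes back the value the client will actually resolve:

hdfs getconf -confKey fs.defaultFS
# expected: hdfs://192-168-100-142:9999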

3. Restore hdfs-site.xml:
<configuration>
        <property>
                <!-- Default block replication factor -->
                <name>dfs.replication</name>
                <value>3</value>
        </property>
        <property>
                <!-- Local directory for the NameNode's fsimage and edit logs -->
                <name>dfs.namenode.name.dir</name>
                <value>/opt/hadoop/dfs/name</value>
        </property>
        <property>
                <!-- Local directory where DataNodes store blocks -->
                <name>dfs.datanode.data.dir</name>
                <value>/opt/hadoop/dfs/data</value>
        </property>
        <property>
                <!-- SecondaryNameNode HTTP/checkpoint endpoint -->
                <name>dfs.namenode.secondary.http-address</name>
                <value>192-168-100-217:50090</value>
        </property>
</configuration>
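
Likewise, hdfs getconf can confirm the SecondaryNameNode host was picked up as intended:

hdfs getconf -secondaryNameNodes
# expected: 192-168-100-217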


4. Sync the configuration files to the other nodes:
scp core-site.xml hdfs-site.xml root@192-168-100-217:/usr/local/hadoop-2.7.6/etc/hadoop
scp core-site.xml hdfs-site.xml root@192-168-100-225:/usr/local/hadoop-2.7.6/etc/hadoop
scp core-site.xml hdfs-site.xml root@192-168-100-34:/usr/local/hadoop-2.7.6/etc/hadoop
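
With more nodes, the same copy is easier to express as a loop; this is just a convenience sketch using the same three hosts and target path as above:

for host in 192-168-100-217 192-168-100-225 192-168-100-34; do
        scp core-site.xml hdfs-site.xml root@${host}:/usr/local/hadoop-2.7.6/etc/hadoop
done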


5. Start HDFS:
start-dfs.sh

After startup, only the directory data that originally lived under 192-168-100-142 is visible; the data that belonged to the NameNode added during the federation experiment can no longer be seen.
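
To see this for yourself, the standard client commands are enough; the listing should contain only directories from the original namespace:

hdfs dfs -ls /        # only the directories that lived on 192-168-100-142 show up
hdfs dfsadmin -report # all DataNodes now report to the single NameNode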

