HDFS HA

Unpack the distribution:

 tar -zxvf hadoop-2.7.3.tar.gz -C /opt/modules/

Delete unneeded files (documentation and the Windows .cmd scripts):

rm -rf doc/
rm -rf *.cmd

Configuration files:

In etc/hadoop/hadoop-env.sh, set JAVA_HOME:

export JAVA_HOME=/opt/modules/jdk1.8.0_91
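
To confirm the JDK path resolves, run any Hadoop command that prints and exits, for example:

bin/hadoop version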

core-site.xml

    <!-- Default filesystem URI; it points at the nameservice ID, not a single host -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>

    <!-- Base directory for HDFS data and metadata -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/modules/hadoop-2.7.3/data/tmpData</value>
    </property>

    <!-- User identity the HDFS web UI uses when browsing files (default is dr.who) -->
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>huadian</value>
    </property>
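
The hadoop.tmp.dir directory is created when the NameNode is formatted, but creating it up front avoids ownership surprises (assuming the huadian user runs the daemons):

mkdir -p /opt/modules/hadoop-2.7.3/data/tmpData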

hdfs-site.xml

    <!-- Disable HDFS permission checks (acceptable for a test cluster) -->
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
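
With permission checking disabled, any client identity can modify the namespace; once the cluster is running (after step6 below), a write from any user should succeed, e.g. (the path is just an example):

bin/hdfs dfs -mkdir -p /user/anybody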

slaves (one DataNode host per line):

bigdata-hpsk02.huadian.com
bigdata-hpsk03.huadian.com
bigdata-hpsk04.huadian.com
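
The start scripts, and the sshfence method configured below, log in to these hosts over SSH, so passwordless login should already work. A quick check against the hosts in slaves:

for h in bigdata-hpsk02.huadian.com bigdata-hpsk03.huadian.com bigdata-hpsk04.huadian.com; do
    ssh $h hostname
done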

Check which native compression libraries are supported:

 bin/hadoop checknative

Replace the bundled native libraries if checknative reports missing codecs:
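
A minimal sketch of the swap, run from the Hadoop home and assuming the recompiled libraries sit in ~/native (a hypothetical path):

mv lib/native lib/native.bak    # keep the originals
cp -r ~/native lib/native       # hypothetical source directory
bin/hadoop checknative          # the codecs should now report true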

hdfs-site.xml

    <!--HDFS HA Using QJM -->

    <!-- Logical nameservice ID shared by the two NameNodes -->
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    <!-- IDs of the two NameNodes within nameservice ns1 -->
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC addresses of the two NameNodes -->
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>bigdata-hpsk01.huadian.com:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>bigdata-hpsk02.huadian.com:8020</value>
    </property>
    <!-- Web UI (HTTP) addresses of the two NameNodes -->
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>bigdata-hpsk01.huadian.com:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>bigdata-hpsk02.huadian.com:50070</value>
    </property>

    <!-- JournalNodes that hold the shared edit log -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://bigdata-hpsk01.huadian.com:8485;bigdata-hpsk02.huadian.com:8485;bigdata-hpsk03.huadian.com:8485/ns1</value>
    </property>
    <!-- Proxy provider the client uses to locate the active NameNode -->
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing: kill the old active over SSH before failover, to prevent split-brain -->
    <property>
      <name>dfs.ha.fencing.methods</name>
      <value>sshfence</value>
    </property>

    <property>
      <name>dfs.ha.fencing.ssh.private-key-files</name>
      <value>/home/huadian/.ssh/id_rsa</value>
    </property>

    <!-- Local directory where each JournalNode stores its edits -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/modules/hadoop-2.7.3/data/jn</value>
    </property>
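
These properties all go inside the <configuration> element of hdfs-site.xml. Once saved, the values can be sanity-checked before starting anything:

bin/hdfs getconf -confKey dfs.nameservices        # ns1
bin/hdfs getconf -confKey dfs.ha.namenodes.ns1    # nn1,nn2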

Distribute to the other nodes:

scp -r ./hadoop-2.7.3/ bigdata-hpsk03.huadian.com:/opt/modules/
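
The copy has to reach every other node, so a loop over the remaining hosts saves repetition:

for h in bigdata-hpsk02.huadian.com bigdata-hpsk03.huadian.com bigdata-hpsk04.huadian.com; do
    scp -r ./hadoop-2.7.3/ $h:/opt/modules/
done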

Start-up test
step1: On each JournalNode host (the three machines in the qjournal address above), start the journalnode service

sbin/hadoop-daemon.sh start journalnode
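
On each of the three hosts, jps should now show a JournalNode process:

jps | grep JournalNode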

step2: On nn1, format the NameNode and start it

    bin/hdfs namenode -format
    (the command should exit with status 0)
    sbin/hadoop-daemon.sh start namenode
    http://bigdata-hpsk01.huadian.com:50070
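
Checking the exit status just means reading the shell's exit code immediately after the command:

echo $?    # 0 means the format succeeded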

step3: On nn2, sync nn1's metadata

    bin/hdfs namenode -bootstrapStandby
    (again, the exit status should be 0)

step4: On nn2, start the NameNode (both NameNodes come up in standby)

    sbin/hadoop-daemon.sh start namenode
    http://bigdata-hpsk02.huadian.com:50070
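
At this point both NameNodes should report standby:

bin/hdfs haadmin -getServiceState nn1    # standby
bin/hdfs haadmin -getServiceState nn2    # standby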

step5: Transition nn1 to Active

bin/hdfs haadmin -transitionToActive nn1
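
Confirm the switch took effect; the same tool can also drive a manual failover between the two NameNodes later:

bin/hdfs haadmin -getServiceState nn1    # active
bin/hdfs haadmin -failover nn1 nn2       # optional: hand the active role to nn2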

step6: From nn1, start all the DataNodes

sbin/hadoop-daemons.sh start datanode
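
A final smoke test: write a file through the nameservice URI and read it back (the /tmp/hatest path is just an example):

bin/hdfs dfs -mkdir -p /tmp/hatest
bin/hdfs dfs -put etc/hadoop/core-site.xml /tmp/hatest/
bin/hdfs dfs -cat /tmp/hatest/core-site.xml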


Reposted from blog.csdn.net/liyongshun_123/article/details/80301905