[Big Data] Hadoop HA Configuration

Copyright notice: This is an original article by the author; reproduction without the author's permission is prohibited. https://blog.csdn.net/u014679456/article/details/80189298

Prerequisite: the ZooKeeper environment has already been set up following the previous article.

1 Cluster Plan

| bigdata01.com   | bigdata02.com   | bigdata03.com |
| --------------- | --------------- | ------------- |
| namenode        | namenode        |               |
| datanode        | datanode        | datanode      |
| journalnode     | journalnode     | journalnode   |
| zkfc            | zkfc            |               |
| resourcemanager | resourcemanager |               |
| nodemanager     | nodemanager     | nodemanager   |

2 HDFS HA

2.1 vi hdfs-site.xml

<configuration>

    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>

    <property>
      <name>dfs.nameservices</name>
      <value>ns</value>
    </property>

    <property>
      <name>dfs.ha.namenodes.ns</name>
      <value>nn1,nn2</value>
    </property>

    <property>
      <name>dfs.namenode.rpc-address.ns.nn1</name>
      <value>bigdata01.com:8020</value>
    </property>

    <property>
      <name>dfs.namenode.rpc-address.ns.nn2</name>
      <value>bigdata02.com:8020</value>
    </property>

    <property>
      <name>dfs.namenode.http-address.ns.nn1</name>
      <value>bigdata01.com:50070</value>
    </property>

    <property>
      <name>dfs.namenode.http-address.ns.nn2</name>
      <value>bigdata02.com:50070</value>
    </property>

    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>

    <property>
      <name>dfs.namenode.shared.edits.dir</name>
      <value>qjournal://bigdata01.com:8485;bigdata02.com:8485;bigdata03.com:8485/ns</value>
    </property>

    <property>
      <name>dfs.journalnode.edits.dir</name>
      <value>/opt/modules/hadoop-2.5.0/data/dfs/jn</value>
    </property>

    <property>
      <name>dfs.client.failover.proxy.provider.ns</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

    <property>
      <name>dfs.ha.fencing.methods</name>
      <value>sshfence</value>
    </property>

    <property>
      <name>dfs.ha.fencing.ssh.private-key-files</name>
      <value>/home/kfk/.ssh/id_rsa</value>
    </property>

    <property>
      <name>dfs.ha.automatic-failover.enabled.ns</name>
      <value>true</value>
    </property>
</configuration>

2.2 vi core-site.xml

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns</value>
    </property>

    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>kfk</value>
    </property>

    <property>
       <name>ha.zookeeper.quorum</name>
       <value>bigdata01.com:2181,bigdata02.com:2181,bigdata03.com:2181</value>
    </property>


    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/modules/hadoop-2.5.0/data/tmp</value>
    </property>

2.3 Distribute hdfs-site.xml and core-site.xml to the remaining machines
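The distribution can be done with scp; a minimal sketch, assuming the Hadoop installation lives at /opt/modules/hadoop-2.5.0 on every node and passwordless SSH is already set up for the kfk user (as implied by the fencing key configured above):

```shell
# Run from the Hadoop home directory on bigdata01.com;
# hostnames follow the cluster plan above.
for host in bigdata02.com bigdata03.com; do
  scp etc/hadoop/hdfs-site.xml etc/hadoop/core-site.xml \
      kfk@${host}:/opt/modules/hadoop-2.5.0/etc/hadoop/
done
```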

2.4 Starting HA

First Step:

Initialize Hadoop HA.

1. Start a journalnode on each node:

sbin/hadoop-daemon.sh start journalnode

2. On [nn1], format the NameNode and start it:

bin/hdfs namenode -format
sbin/hadoop-daemon.sh start namenode

3. On [nn2], sync nn1's metadata:

bin/hdfs namenode -bootstrapStandby

4. Start nn2:

sbin/hadoop-daemon.sh start namenode

5. Transition nn1 to Active:

bin/hdfs haadmin -transitionToActive nn1

6. Start the datanodes on all nodes:

sbin/hadoop-daemons.sh start datanode
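After the initialization steps above, you can confirm the expected daemons with jps on each host; a quick sanity check matching the cluster plan (the expected process lists below follow that plan, not a captured output):

```shell
# On bigdata01.com expect: NameNode, DataNode, JournalNode
# On bigdata02.com expect: NameNode, DataNode, JournalNode
# On bigdata03.com expect: DataNode, JournalNode
jps
```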

Second Step:

Enable automatic failover.

1. First stop all HDFS services:

sbin/stop-dfs.sh

2. Start the ZooKeeper cluster:

# run on every machine
bin/zkServer.sh start

3. Initialize the HA state in ZooKeeper:

bin/hdfs zkfc -formatZK

4. Start the HDFS services:

sbin/start-dfs.sh

5. Start the DFSZKFailoverController (zkfc) service:

sbin/hadoop-daemon.sh start zkfc
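Once everything is up, you can verify which NameNode is active and exercise the failover; a quick check, using the nameservice and NameNode IDs (ns, nn1, nn2) configured above:

```shell
# Query each NameNode's HA state; one should report "active", the other "standby".
bin/hdfs haadmin -getServiceState nn1
bin/hdfs haadmin -getServiceState nn2

# To test automatic failover, kill the active NameNode process (e.g. on
# bigdata01.com) and re-run getServiceState: zkfc should promote nn2 to active.
```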

3 YARN HA

3.1 vi yarn-site.xml


<configuration>

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value> 
    </property>

    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>10000</value> 
    </property>

    <property>
       <name>yarn.resourcemanager.ha.enabled</name>
       <value>true</value>
     </property>

     <property>
       <name>yarn.resourcemanager.cluster-id</name>
       <value>cluster1</value>
     </property>

     <property>
       <name>yarn.resourcemanager.ha.rm-ids</name>
       <value>rm1,rm2</value>
     </property>

     <property>
       <name>yarn.resourcemanager.hostname.rm1</name>
       <value>bigdata01.com</value>
     </property>
     <property>
       <name>yarn.resourcemanager.hostname.rm2</name>
       <value>bigdata02.com</value>
     </property>

     <property>
       <name>yarn.resourcemanager.zk-address</name>
       <value>bigdata01.com:2181,bigdata02.com:2181,bigdata03.com:2181</value>
     </property>

     <property>
       <name>yarn.resourcemanager.recovery.enabled</name>
       <value>true</value>
     </property>

      <property>
       <name>yarn.resourcemanager.store.class</name>
       <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
     </property>

</configuration>

3.2 Distribute yarn-site.xml to the other machines

3.3 Startup

  1. Start the resourcemanagers
  2. Start the nodemanagers
  3. Test with a MapReduce wordcount job
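The three steps above can be sketched as shell commands; a minimal sketch, assuming Hadoop 2.5.0's bundled examples jar and hypothetical HDFS input/output paths:

```shell
# On bigdata01.com and bigdata02.com (start-yarn.sh only starts the local RM,
# so the standby ResourceManager must be started by hand on the other host):
sbin/yarn-daemon.sh start resourcemanager

# On every node:
sbin/yarn-daemon.sh start nodemanager

# Check which ResourceManager is active (rm1/rm2 as configured above):
bin/yarn rmadmin -getServiceState rm1
bin/yarn rmadmin -getServiceState rm2

# Smoke-test with the bundled wordcount example
# (/user/kfk/input and /user/kfk/output are placeholder paths):
bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar \
    wordcount /user/kfk/input /user/kfk/output
```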
