Hadoop 2.X HA ZooKeeper Cluster Setup

/**
 * author: it-WScheng
 * Note: please credit the source when reposting
 */

Hadoop 2.X HA Setup

First, prepare four machines: node1, node2, node3, node4.

1. Edit core-site.xml

vi /opt/hadoop/etc/hadoop/core-site.xml

<configuration>
    <!-- Clients address HDFS by the logical nameservice name, not a single NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://myhadoop</value>
    </property>
    <!-- ZooKeeper quorum used for automatic NameNode failover -->
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>node1:2181,node2:2181,node3:2181</value>
    </property>
    <!-- Base directory for Hadoop's local storage -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop2</value>
    </property>
</configuration>
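
Once the cluster is up, clients refer to the nameservice rather than a specific host, and the failover proxy provider (configured in hdfs-site.xml below) resolves it to the active NameNode. For example:

hdfs dfs -ls hdfs://myhadoop/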

2. Edit hdfs-site.xml

vi /opt/hadoop/etc/hadoop/hdfs-site.xml

<configuration>
    <!-- Logical name of the HA pair; must match fs.defaultFS -->
    <property>
        <name>dfs.nameservices</name>
        <value>myhadoop</value>
    </property>
    <!-- IDs of the two NameNodes in this nameservice -->
    <property>
        <name>dfs.ha.namenodes.myhadoop</name>
        <value>nn1,nn2</value>
    </property>
    <!-- RPC addresses of the two NameNodes -->
    <property>
        <name>dfs.namenode.rpc-address.myhadoop.nn1</name>
        <value>node1:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.myhadoop.nn2</name>
        <value>node2:8020</value>
    </property>
    <!-- Web UI addresses of the two NameNodes -->
    <property>
        <name>dfs.namenode.http-address.myhadoop.nn1</name>
        <value>node1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.myhadoop.nn2</name>
        <value>node2:50070</value>
    </property>
    <!-- JournalNode quorum that stores the shared edit log -->
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node2:8485;node3:8485;node4:8485/myhadoop</value>
    </property>
    <!-- Class clients use to locate the active NameNode -->
    <property>
        <name>dfs.client.failover.proxy.provider.myhadoop</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fence the old active NameNode over SSH during failover -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <!-- Local directory each JournalNode stores edits in -->
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/jn/data</value>
    </property>
    <!-- Let the ZKFC drive failover automatically via ZooKeeper -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
</configuration>
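
sshfence requires that each NameNode can SSH to the other as root without a password, using the key file configured above. A minimal sketch, assuming no keys exist yet (run on node1, then repeat on node2 with the target host swapped):

ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
ssh-copy-id root@node2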

3. Edit mapred-site.xml

vi /opt/hadoop/etc/hadoop/mapred-site.xml

<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
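
Note that a stock Hadoop 2.x tarball usually ships this file only as a template, so you may need to create it first:

cp /opt/hadoop/etc/hadoop/mapred-site.xml.template /opt/hadoop/etc/hadoop/mapred-site.xml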

4. Edit yarn-site.xml

vi /opt/hadoop/etc/hadoop/yarn-site.xml

<configuration>
    <!-- Single ResourceManager on node1 (RM HA is not configured here) -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node1</value>
    </property>
    <!-- Auxiliary shuffle service required by MapReduce -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- The property name must embed the service name above: ...mapreduce_shuffle.class -->
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>
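
Once the cluster is up (step 13), you can sanity-check this YARN setup by running one of the bundled example jobs. The jar path below is typical for a 2.x tarball but varies by version:

yarn jar /opt/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10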

5. Prepare ZooKeeper

a) Three ZooKeeper nodes: node1, node2, node3.
Extract zookeeper-3.4.10 under /opt/.
b) Configure the zoo.cfg file:
1. cd /opt/zookeeper-3.4.10/conf
2. cp zoo_sample.cfg zoo.cfg
3. vi zoo.cfg
① Change dataDir=/opt/zookeeper
② Append:
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
c) Create a myid file in the dataDir on every node:
vi /opt/zookeeper/myid
The file contents are 1, 2, and 3 on node1, node2, and node3 respectively, as sketched below.
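
A minimal sketch of c), assuming the dataDir does not exist yet (run the matching pair on each node):

mkdir -p /opt/zookeeper
echo 1 > /opt/zookeeper/myid    # 2 on node2, 3 on node3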

6. Configure Hadoop's slaves file (list the DataNodes)

vi /opt/hadoop/etc/hadoop/slaves
node2
node3
node4
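
All four nodes need the same Hadoop configuration. Assuming the same install path everywhere and passwordless SSH from node1, a minimal sketch to push it out:

for h in node2 node3 node4; do scp /opt/hadoop/etc/hadoop/* $h:/opt/hadoop/etc/hadoop/; done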

7. Start ZooKeeper on node1, node2, and node3 (in /opt/zookeeper-3.4.10/bin): ./zkServer.sh start
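
Before moving on, you can confirm the quorum has formed; one node should report leader and the other two follower:

./zkServer.sh status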

8. Start the three JournalNodes on node2, node3, and node4 (per dfs.namenode.shared.edits.dir): ./hadoop-daemon.sh start journalnode
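
A quick check on each of node2-node4, since the format step below needs the JournalNode quorum running:

jps    # should list a JournalNode process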

9. Format HDFS on one of the NameNodes (node1 or node2):

./hdfs namenode -format

10. Copy the newly formatted metadata to the other NameNode

a) Start the NameNode that was just formatted:

./hadoop-daemon.sh start namenode

b) On the NameNode that was not formatted, run:

./hdfs namenode -bootstrapStandby

c) Start the second NameNode:

./hadoop-daemon.sh start namenode

11. Initialize the HA state in ZooKeeper (run on one of the NameNodes):

./hdfs zkfc -formatZK

12. Stop the daemons started above: ./stop-dfs.sh

13. Start the full cluster: ./start-all.sh (in 2.x this simply runs start-dfs.sh followed by start-yarn.sh)
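
If everything came up, jps on each node should roughly match the layout implied by the configs above (process names from a typical 2.x build):

node1: NameNode, DFSZKFailoverController, QuorumPeerMain, ResourceManager
node2: NameNode, DFSZKFailoverController, QuorumPeerMain, JournalNode, DataNode, NodeManager
node3: QuorumPeerMain, JournalNode, DataNode, NodeManager
node4: JournalNode, DataNode, NodeManager

The two NameNode web UIs (node1:50070 and node2:50070) should show one active and one standby.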

Reposted from blog.csdn.net/wsc912406860/article/details/81383269