Implementing HA for a Hadoop Cluster

Software versions:

  • CentOS 6.7
  • jdk-8u171-linux-x64.tar.gz
  • zookeeper-3.4.12.tar.gz
  • hadoop-2.7.4-with-centos-6.7.tar.gz

I. Planning the node roles for the cluster

1. Node role plan (7 nodes)

    node01   namenode          zkfc
    node02   namenode          zkfc
    node03   resourcemanager
    node04   resourcemanager
    node05   datanode          nodemanager   zookeeper   journalnode
    node06   datanode          nodemanager   zookeeper   journalnode
    node07   datanode          nodemanager   zookeeper   journalnode

2. Node role plan (3 nodes)

    node01   namenode   datanode   resourcemanager   nodemanager   zookeeper   journalnode   zkfc
    node02   namenode   datanode   resourcemanager   nodemanager   zookeeper   journalnode   zkfc
    node03              datanode                     nodemanager   zookeeper   journalnode

II. Building the HA cluster (using the 3-node plan)

Install and configure the ZooKeeper cluster. This step is omitted here; see the blog post below for details.

https://blog.csdn.net/jinYwuM/article/details/81210353
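
For reference, a minimal ZooKeeper setup for the three nodes could look like the sketch below; the data directory /export/data/zkdata is an assumption, so adjust it to your own installation.

    # conf/zoo.cfg (identical on node01, node02, node03)
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/export/data/zkdata
    clientPort=2181
    server.1=node01:2888:3888
    server.2=node02:2888:3888
    server.3=node03:2888:3888

    # On each node, write its own id (1, 2 or 3) into the myid file:
    echo 1 > /export/data/zkdata/myid    # use 2 on node02 and 3 on node03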

Modify the following configuration files.

1. Edit hadoop-env.sh
export JAVA_HOME=/export/servers/jdk1.8.0_171

2. Edit core-site.xml

<configuration>
<!-- The cluster (nameservice) name is specified here; it must match dfs.nameservices in hdfs-site.xml -->
  <property>
	<name>fs.defaultFS</name>
	<value>hdfs://hadoop-cluster</value>
  </property>
<!-- Default base directory where the NameNode, DataNode, JournalNode, etc. store their data -->
  <property>
	<name>hadoop.tmp.dir</name>
	<value>/export/data/hadoop</value>
  </property>
<!-- Addresses and ports of the ZooKeeper ensemble; the number of nodes must be odd and at least three -->
  <property>
	<name>ha.zookeeper.quorum</name>
	<value>node01:2181,node02:2181,node03:2181</value>
  </property>
</configuration>

Note: the /export/data/hadoop directory must be created manually.
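
For example, from node01 (assuming root SSH access to the other nodes):

    mkdir -p /export/data/hadoop
    ssh root@node02 "mkdir -p /export/data/hadoop"
    ssh root@node03 "mkdir -p /export/data/hadoop"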

3. Edit hdfs-site.xml

<configuration>
<!-- The HDFS nameservice is hadoop-cluster; it must match the value used in core-site.xml -->
  <property>
	<name>dfs.nameservices</name>
	<value>hadoop-cluster</value>
  </property>
<!-- The hadoop-cluster nameservice has two NameNodes: nn1 and nn2 -->
  <property>
	<name>dfs.ha.namenodes.hadoop-cluster</name>
	<value>nn1,nn2</value>
  </property>
<!-- RPC address of nn1 -->
  <property>
	<name>dfs.namenode.rpc-address.hadoop-cluster.nn1</name>
	<value>node01:9000</value>
  </property>
<!-- HTTP address of nn1 -->
  <property>
	<name>dfs.namenode.http-address.hadoop-cluster.nn1</name>
	<value>node01:50070</value>
  </property>
<!-- RPC address of nn2 -->
  <property>
	<name>dfs.namenode.rpc-address.hadoop-cluster.nn2</name>
	<value>node02:9000</value>
  </property>
<!-- HTTP address of nn2 -->
  <property>
	<name>dfs.namenode.http-address.hadoop-cluster.nn2</name>
	<value>node02:50070</value>
  </property>
<!-- Shared location on the JournalNodes where the NameNodes store their edits metadata -->
  <property>
	<name>dfs.namenode.shared.edits.dir</name>
	<value>qjournal://node01:8485;node02:8485;node03:8485/hadoop-cluster</value>
  </property>
<!-- Local directory where each JournalNode keeps its data -->
  <property>
	<name>dfs.journalnode.edits.dir</name>  
	<value>/export/data/journaldata</value>
  </property>
<!-- Enable automatic failover between the NameNodes -->
  <property>
	<name>dfs.ha.automatic-failover.enabled</name>
	<value>true</value>
  </property>
<!-- Implementation class clients use to find the active NameNode and fail over on failure -->
  <property>
	<name>dfs.client.failover.proxy.provider.hadoop-cluster</name>
	<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
<!-- Fencing methods; multiple methods are separated by newlines, one method per line -->
  <property>
	<name>dfs.ha.fencing.methods</name>
	<value>sshfence</value>
  </property>
<!-- The sshfence mechanism requires passwordless SSH between the NameNode hosts -->
  <property>
	<name>dfs.ha.fencing.ssh.private-key-files</name>
	<value>/root/.ssh/id_rsa</value>
  </property>
<!-- Timeout (in milliseconds) for the sshfence mechanism -->
  <property>
	<name>dfs.ha.fencing.ssh.connect-timeout</name>
	<value>30000</value>
  </property>
</configuration>
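
Because sshfence is configured, node01 and node02 must be able to SSH into each other as root without a password. If that is not already set up, a minimal sketch (run on both node01 and node02) is:

    ssh-keygen -t rsa            # accept the defaults if no key exists yet
    ssh-copy-id root@node01
    ssh-copy-id root@node02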

4. Edit mapred-site.xml

<configuration>
<!-- Run MapReduce jobs on YARN -->
  <property>
	<name>mapreduce.framework.name</name>
	<value>yarn</value>
  </property>
</configuration>	

5. Edit yarn-site.xml

<configuration>
<!-- Enable ResourceManager HA -->
  <property>
	<name>yarn.resourcemanager.ha.enabled</name>
	<value>true</value>
  </property>
<!-- Cluster id for the ResourceManager pair -->
  <property>
	<name>yarn.resourcemanager.cluster-id</name>
	<value>yrc</value>
  </property>
<!-- Logical ids of the two ResourceManagers -->
  <property>
	<name>yarn.resourcemanager.ha.rm-ids</name>
	<value>rm1,rm2</value>
  </property>
<!-- Hostnames of the two ResourceManagers -->
  <property>
	<name>yarn.resourcemanager.hostname.rm1</name>
	<value>node01</value>
  </property>
  <property>
	<name>yarn.resourcemanager.hostname.rm2</name>
	<value>node02</value>
  </property>
<!-- ZooKeeper ensemble address -->
  <property>
	<name>yarn.resourcemanager.zk-address</name>
	<value>node01:2181,node02:2181,node03:2181</value>
  </property>
  <property>
	<name>yarn.nodemanager.aux-services</name>
	<value>mapreduce_shuffle</value>
  </property>
</configuration>

6. Edit slaves

node01
node02
node03
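
All of the files modified above must be identical on every node. If they were edited only on node01, they can be pushed out with something like the following (the installation path /export/servers/hadoop-2.7.4 is an assumption):

    scp -r /export/servers/hadoop-2.7.4/etc/hadoop root@node02:/export/servers/hadoop-2.7.4/etc/
    scp -r /export/servers/hadoop-2.7.4/etc/hadoop root@node03:/export/servers/hadoop-2.7.4/etc/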

III. Starting the HA cluster for the first time (the same procedure applies when re-formatting)

1. Start the ZooKeeper cluster
            Start ZooKeeper on node01, node02, and node03.
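            For example, on each of the three nodes (zkServer.sh lives in ZooKeeper's bin directory):
            zkServer.sh start
            zkServer.sh status    # one node should report Mode: leader, the other two Mode: follower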

2. Manually start a JournalNode on each of node01, node02, and node03
            hadoop-daemon.sh start journalnode
            Check with jps that the process is running normally.
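            For reference, the output should include at least a JournalNode process, plus QuorumPeerMain for ZooKeeper (process ids will differ):
            jps
            # 2131 QuorumPeerMain
            # 2376 JournalNode
            # 2412 Jps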

3. Format the NameNode (on node01)
            hdfs namenode -format
            Copy the generated directory (the hadoop directory) to the corresponding location on node02:
            scp -r /export/data/hadoop root@node02:/export/data/
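            Alternatively, node02 can pull the metadata itself with bootstrapStandby instead of the scp above; note that this variant requires the freshly formatted NameNode on node01 to be running first:
            hadoop-daemon.sh start namenode          # on node01
            hdfs namenode -bootstrapStandby          # on node02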

4. Format ZKFC (run on node01 only)
            hdfs zkfc -formatZK

5. Start HDFS (run on node01)
            start-dfs.sh
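            Afterwards, one NameNode should be active and the other standby; this can be checked on the command line or via the web UIs at node01:50070 and node02:50070:
            hdfs haadmin -getServiceState nn1
            hdfs haadmin -getServiceState nn2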

6. Start YARN (run on node01)
            start-yarn.sh

7. Start a separate ResourceManager process on node02
            yarn-daemon.sh start resourcemanager 
            When shutting down the cluster, the ResourceManager on node02 must also be stopped separately.
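            The ResourceManager HA state can be checked the same way (rm1 and rm2 are the ids configured in yarn-site.xml):
            yarn rmadmin -getServiceState rm1
            yarn rmadmin -getServiceState rm2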

Note: the first startup must follow the steps above in order, otherwise it will fail. The main exception looks like the following.

The root cause is that the JournalNodes had not been started and initialized, so the NameNode could not create its journal directory on them during formatting. Re-run the format following the steps above in order (a recovery sketch follows the log below).

18/08/08 20:24:20 INFO ipc.Client: Retrying connect to server: node1/192.168.66.201:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/08/08 20:24:20 INFO ipc.Client: Retrying connect to server: node2/192.168.66.202:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/08/08 20:24:20 INFO ipc.Client: Retrying connect to server: node3/192.168.66.203:8485. Already tried 8 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/08/08 20:24:21 INFO ipc.Client: Retrying connect to server: node2/192.168.66.202:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/08/08 20:24:21 INFO ipc.Client: Retrying connect to server: node1/192.168.66.201:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/08/08 20:24:21 INFO ipc.Client: Retrying connect to server: node3/192.168.66.203:8485. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/08/08 20:24:21 WARN namenode.NameNode: Encountered exception during format: 
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 3 exceptions thrown:
192.168.66.203:8485: Call From node1/192.168.66.201 to node3:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.66.201:8485: Call From node1/192.168.66.201 to node1:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.66.202:8485: Call From node1/192.168.66.201 to node2:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:247)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
	at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:901)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:194)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:995)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1441)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1566)
18/08/08 20:24:21 ERROR namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Unable to check if JNs are ready for formatting. 3 exceptions thrown:
192.168.66.203:8485: Call From node1/192.168.66.201 to node3:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.66.201:8485: Call From node1/192.168.66.201 to node1:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
192.168.66.202:8485: Call From node1/192.168.66.201 to node2:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:247)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.hasSomeData(QuorumJournalManager.java:232)
	at org.apache.hadoop.hdfs.server.common.Storage.confirmFormat(Storage.java:901)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.confirmFormat(FSImage.java:194)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:995)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1441)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1566)
18/08/08 20:24:21 INFO util.ExitUtil: Exiting with status 1
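
If a failed format has already left partial data behind, one way to recover (a rough sketch; the paths are the ones configured earlier, and this wipes any existing HDFS data, so use it only on a fresh cluster) is to stop the HDFS daemons, clear the data directories on all three nodes, and repeat the procedure from step 1:

    stop-dfs.sh
    rm -rf /export/data/hadoop/* /export/data/journaldata/*    # run on node01, node02 and node03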

Reposted from blog.csdn.net/jinYwuM/article/details/82120060