Hadoop 2.5.1 Cluster Setup (Part 2): Building an HDFS Cluster with Automatic HA Failover

Building an HDFS cluster with automatic HA failover (compared with the manual-failover HA cluster, this adds a ZooKeeper ensemble)
----------------------------------------------------------------------------------------------
zookeeper:hadoop2-1、hadoop2-2、hadoop2-3
namenode:hadoop2-1和hadoop2-2
datanode:hadoop2-3、hadoop2-4、hadoop2-5、hadoop2-6
journalnode:hadoop2-1、hadoop2-2、hadoop2-3
 

2.0 Set up the ZooKeeper ensemble and start it
2.0.1 On hadoop2-1, unpack the ZooKeeper tarball, rename the directory to zookeeper, and rename conf/zoo_sample.cfg to conf/zoo.cfg
  Edit conf/zoo.cfg:
  (1) dataDir=/usr/local/zookeeper/data
  (2) append the following lines
     server.1=hadoop2-1:2888:3888
     server.2=hadoop2-2:2888:3888
     server.3=hadoop2-3:2888:3888
  Create the data directory: mkdir zookeeper/data
  Write this node's id: echo 1 > zookeeper/data/myid
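  For reference, a minimal zoo.cfg after these edits might look like the following (tickTime, initLimit, syncLimit, and clientPort are the stock values shipped in zoo_sample.cfg; keep whatever your sample file contains if it differs):

  tickTime=2000
  initLimit=10
  syncLimit=5
  clientPort=2181
  dataDir=/usr/local/zookeeper/data
  server.1=hadoop2-1:2888:3888
  server.2=hadoop2-2:2888:3888
  server.3=hadoop2-3:2888:3888

  Note that clientPort 2181 must match the ports listed later in ha.zookeeper.quorum.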
  
  Copy the zookeeper directory to hadoop2-2 and hadoop2-3:
  scp -rq zookeeper  hadoop2-2:/usr/local
  scp -rq zookeeper  hadoop2-3:/usr/local

  On hadoop2-2, run echo 2 > zookeeper/data/myid
  On hadoop2-3, run echo 3 > zookeeper/data/myid
2.0.2 Start
  On hadoop2-1, hadoop2-2, and hadoop2-3, run zookeeper/bin/zkServer.sh start
2.0.3 Verify
  Run zookeeper/bin/zkCli.sh
  Once connected, run ls /
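  To confirm that the ensemble has elected a leader, you can also run the following on each of the three nodes; one should report Mode: leader and the other two Mode: follower:

  zookeeper/bin/zkServer.sh status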
 
 
2.1 Configuration files (hadoop-env.sh, core-site.xml, hdfs-site.xml, slaves)
2.1.1 hadoop-env.sh
  export JAVA_HOME=/usr/local/jdk1.7.0-45
2.1.2 core-site.xml

<property>
<name>fs.defaultFS</name>
<value>hdfs://cluster1</value>
</property>

<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/tmp</value>
</property>

<property>
<name>ha.zookeeper.quorum</name>
<value>hadoop2-1:2181,hadoop2-2:2181,hadoop2-3:2181</value>
</property>
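Note: as in any Hadoop site file, the <property> blocks above (and those in hdfs-site.xml below) must sit inside the file's top-level <configuration>...</configuration> element; they are shown bare here for brevity.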

2.1.3 hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>3</value>
</property>

<property>
<name>dfs.nameservices</name>
<value>cluster1</value>
</property>

<property>
<name>dfs.ha.namenodes.cluster1</name>
<value>hadoop2-1,hadoop2-2</value>
</property>

<property>
<name>dfs.namenode.rpc-address.cluster1.hadoop2-1</name>
<value>hadoop2-1:9000</value>
</property>

<property>
<name>dfs.namenode.http-address.cluster1.hadoop2-1</name>
<value>hadoop2-1:50070</value>
</property>

<property>
<name>dfs.namenode.rpc-address.cluster1.hadoop2-2</name>
<value>hadoop2-2:9000</value>
</property>

<property>
<name>dfs.namenode.http-address.cluster1.hadoop2-2</name>
<value>hadoop2-2:50070</value>
</property>

<property>
<name>dfs.ha.automatic-failover.enabled.cluster1</name>
<value>true</value>
</property>

<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hadoop2-1:8485;hadoop2-2:8485;hadoop2-3:8485/cluster1</value>
</property>

<property>
<name>dfs.journalnode.edits.dir</name>
<value>/usr/local/hadoop/tmp/journal</value>
</property>

<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>

<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>

<property>
<name>dfs.client.failover.proxy.provider.cluster1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
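The sshfence method above only works if each NameNode host can SSH to the other without a password using the private key configured in dfs.ha.fencing.ssh.private-key-files. If that is not already in place, a minimal sketch (assuming the root account, as the /root/.ssh/id_rsa path suggests):

  ssh-keygen -t rsa              # on hadoop2-1 and on hadoop2-2, accept the default key path
  ssh-copy-id root@hadoop2-2     # run on hadoop2-1
  ssh-copy-id root@hadoop2-1     # run on hadoop2-2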

2.1.4 slaves
hadoop2-3
hadoop2-4
hadoop2-5
hadoop2-6
 

2.1.5 Delete the hadoop directory on the other nodes, then copy the hadoop directory from hadoop2-1 to each of them, as sketched below.
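  Mirroring the ZooKeeper copy earlier, a sketch of this step (assuming hadoop lives under /usr/local and the commands are run from /usr/local on hadoop2-1):

  scp -rq hadoop  hadoop2-2:/usr/local
  scp -rq hadoop  hadoop2-3:/usr/local
  scp -rq hadoop  hadoop2-4:/usr/local
  scp -rq hadoop  hadoop2-5:/usr/local
  scp -rq hadoop  hadoop2-6:/usr/local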
2.2 Format the ZK failover state
  On hadoop2-1, run hadoop/bin/hdfs zkfc -formatZK
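  If the format succeeds, ZooKeeper now holds a znode for the nameservice; one way to check from any ZooKeeper node:

  zookeeper/bin/zkCli.sh
  ls /hadoop-ha          # should list [cluster1]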
2.3 Start the JournalNode cluster
  On hadoop2-1, hadoop2-2, and hadoop2-3, run hadoop/sbin/hadoop-daemon.sh start journalnode
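  On each of the three nodes, jps should now show a JournalNode process; the directory set in dfs.journalnode.edits.dir is created on first use:

  jps    # expect JournalNode, plus QuorumPeerMain from ZooKeeper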
2.4 Format and start the NameNodes
  On hadoop2-1, run hadoop/bin/hdfs namenode -format
  On hadoop2-1, run hadoop/sbin/hadoop-daemon.sh start namenode
  On hadoop2-2, run hadoop/bin/hdfs namenode -bootstrapStandby
  On hadoop2-2, run hadoop/sbin/hadoop-daemon.sh start namenode
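  At this point both NameNodes are running but neither is active yet, because automatic failover has not been started. jps on each host should show a NameNode process, and the web UIs at http://hadoop2-1:50070 and http://hadoop2-2:50070 should both report standby state.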
2.5 Start the DataNodes
  On hadoop2-1, run hadoop/sbin/hadoop-daemons.sh start datanode (note the plural daemons.sh: it starts a DataNode on every host listed in slaves)
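  To verify, jps on any of hadoop2-3 through hadoop2-6 should show a DataNode process.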
2.6 Start the ZKFCs
  On hadoop2-1 and hadoop2-2, start the ZKFC with hadoop/sbin/hadoop-daemon.sh start zkfc
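  Once both ZKFCs are up, one NameNode is elected active. A quick check (the arguments are the NameNode IDs from dfs.ha.namenodes.cluster1):

  hadoop/bin/hdfs haadmin -getServiceState hadoop2-1
  hadoop/bin/hdfs haadmin -getServiceState hadoop2-2

  One command should print active and the other standby.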
 

Summary:
  What automatic failover adds over manual failover:
  (1) Configuration: core-site.xml gains the ha.zookeeper.quorum property, and hdfs-site.xml sets dfs.ha.automatic-failover.enabled.cluster1 to true
  (2) Operations: format the ZK state with bin/hdfs zkfc -formatZK, and start the ZKFC with sbin/hadoop-daemon.sh start zkfc
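
A simple way to exercise the automatic failover (a rough sketch; get the PID from jps on whichever NameNode is currently active, here assumed to be hadoop2-1):

  jps                                                   # note the NameNode pid
  kill -9 <NameNode pid>
  hadoop/bin/hdfs haadmin -getServiceState hadoop2-2    # should report active within seconds

Restart the killed NameNode afterwards with hadoop/sbin/hadoop-daemon.sh start namenode; it will come back as standby.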

Reposted from yehao0716.iteye.com/blog/2152079