I believe that after reading the previous blog post, "Take you to quickly understand NameNode HA and YARN HA, laying the foundation for building a Hadoop HA cluster!", you must be eagerly looking forward to building an HA cluster yourself (❛ᴗ❛). No need to panic: this post brings you a detailed, step-by-step tutorial for building an HA cluster!
Writing this up is not easy, so please give it a like before you read on!
Article Directory
- Build Hadoop HA Cluster
- <1> Hadoop cluster installation and configuration
- ① Back up the existing cluster
- ② Extract a fresh installation
- ③ Configure HDFS
- ④ Modify core-site.xml
- ⑤ Modify hdfs-site.xml
- ⑦ Modify mapred-site.xml
- ⑧ Modify yarn-site.xml
- ⑨ Modify slaves
- ⑩ Configure passwordless login
- <2> Startup process
- ① Start the ZooKeeper cluster
- ② Manually start the JournalNodes
- ③ Format the NameNode
- ④ Format ZKFC (can be run on the active NameNode)
- ⑤ Start HDFS (run on node01)
- ⑥ Start YARN
- <3> Browser Access
- <4> Extras
Build Hadoop HA Cluster
Friendly reminder
All of the following operations are performed on top of an existing Hadoop cluster. If you have not built one yet, I recommend first reading the blog post "Hadoop (CDH) to build a distributed environment" to get a normal, working cluster up, then reading "zookeeper Detailed installation" to install ZooKeeper on that cluster, and only then coming back to this more advanced post.
<1> Hadoop cluster installation and configuration
① Back up the existing cluster
Because we already have a working cluster, the first thing to do is shut down the existing Hadoop cluster.
stop-all.sh
Then back up the previous Hadoop installation directory (on all three nodes):
cd /export/servers/
mv hadoop-2.6.0-cdh5.14.0 hadoop-2.6.0-cdh5.14.0_bk
② Extract a fresh installation
Go to the directory that holds the Hadoop installation package and extract it again:
cd /export/softwares/
tar -zxvf hadoop-2.6.0-cdh5.14.0.tar.gz -C ../servers/
③ Configure HDFS
Note that in Hadoop 2.x all of the configuration files are located in the $HADOOP_HOME/etc/hadoop directory.
Normally this step would also involve adding the necessary system environment variables, but since those were already taken care of when we built the original cluster, we can skip them here. This is exactly why I recommend building a normal cluster first. A typical environment variable setup is sketched below for reference.
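If your environment variables are not set up yet, a typical configuration in /etc/profile looks roughly like the following. This is only a sketch based on the install path used in this post; adjust it to your own layout:
# assumed Hadoop environment variables in /etc/profile (adjust the path to your installation)
export HADOOP_HOME=/export/servers/hadoop-2.6.0-cdh5.14.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# reload the profile so the variables take effect
source /etc/profile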
④ Modify core-site.xml
Go into the configuration directory:
cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
Edit core-site.xml and add the following configuration:
vim core-site.xml
<configuration>
<!-- The cluster name is specified here; the value comes from the configuration in hdfs-site.xml -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://cluster1</value>
</property>
<!-- This path is the default common directory where the NameNode, DataNode, JournalNode, etc. store their data -->
<property>
<name>hadoop.tmp.dir</name>
<value>/export/servers/hadoop-2.6.0-cdh5.14.0/HAhadoopDatas/tmp</value>
</property>
<!-- Addresses and ports of the ZooKeeper cluster. Note that the number of nodes must be odd and at least three -->
<property>
<name>ha.zookeeper.quorum</name>
<value>node01:2181,node02:2181,node03:2181</value>
</property>
</configuration>
⑤ Modify hdfs-site.xml
vim hdfs-site.xml
<configuration>
<!-- Set the HDFS nameservice to cluster1; this must match the value in core-site.xml -->
<property>
<name>dfs.nameservices</name>
<value>cluster1</value>
</property>
<!-- cluster1 has two NameNodes: nn1 and nn2 -->
<property>
<name>dfs.ha.namenodes.cluster1</name>
<value>nn1,nn2</value>
</property>
<!-- RPC address of nn1 -->
<property>
<name>dfs.namenode.rpc-address.cluster1.nn1</name>
<value>node01:8020</value>
</property>
<!-- HTTP address of nn1 -->
<property>
<name>dfs.namenode.http-address.cluster1.nn1</name>
<value>node01:50070</value>
</property>
<!-- RPC address of nn2 -->
<property>
<name>dfs.namenode.rpc-address.cluster1.nn2</name>
<value>node02:8020</value>
</property>
<!-- HTTP address of nn2 -->
<property>
<name>dfs.namenode.http-address.cluster1.nn2</name>
<value>node02:50070</value>
</property>
<!-- Where the NameNode edits metadata is stored on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://node01:8485;node02:8485;node03:8485/cluster1</value>
</property>
<!-- Where the JournalNode stores its data on local disk -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/export/servers/hadoop-2.6.0-cdh5.14.0/journaldata</value>
</property>
<!-- Enable automatic failover when the NameNode fails -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- Which implementation class is responsible for performing failover when this cluster has a failure -->
<property>
<name>dfs.client.failover.proxy.provider.cluster1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing methods; multiple methods are separated by newlines, i.e. one method per line -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
</value>
</property>
<!-- The sshfence method requires passwordless SSH -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<!-- Timeout (in milliseconds) for the sshfence method -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
</configuration>
⑦ Modify mapred-site.xml
First create the file from the template:
cp mapred-site.xml.template mapred-site.xml
Edit mapred-site.xml and add the following:
<configuration>
<!-- Specify YARN as the MapReduce framework -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
⑧ Modify yarn-site.xml
Edit yarn-site.xml and add the following:
vim yarn-site.xml
<configuration>
<!-- Enable ResourceManager HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- Specify the RM cluster id -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yrc</value>
</property>
<!-- Specify the logical names of the RMs -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<!-- Specify the address of each RM -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>node01</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>node02</value>
</property>
<!-- Specify the ZooKeeper cluster addresses -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>node01:2181,node02:2181,node03:2181</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
⑨ Modify slaves
vim slaves
node01
node02
node03
Copy the configured Hadoop installation to all the other nodes. You could of course use xsync here, but plain scp works just as well:
cd /export/servers/
scp -r hadoop-2.6.0-cdh5.14.0 node02:$PWD
scp -r hadoop-2.6.0-cdh5.14.0 node03:$PWD
⑩ Configure passwordless login
Normally you would configure passwordless login to and from the standby node here, but since this was already set up when we installed the original cluster, this step can be skipped ~
# First, configure passwordless login from node01 to node01, node02 and node03
# generate a key pair on node01
ssh-keygen
# copy the public key to the other nodes, including node01 itself
ssh-copy-id node01
ssh-copy-id node02
ssh-copy-id node03
# Note: the two NameNodes also need passwordless SSH between each other,
# because sshfence logs in over SSH to fence the other NameNode during failover
# generate a key pair on node02
ssh-keygen
# copy the public key to node01
ssh-copy-id node01
<2> Startup process
Follow the steps below in strict order, otherwise success cannot be guaranteed.
① Start the ZooKeeper cluster
Start ZooKeeper on node01, node02 and node03 respectively:
bin/zkServer.sh start
# check the status: one leader and two followers
bin/zkServer.sh status
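If the cluster is healthy, the (abridged) output of zkServer.sh status should show one leader and two followers; which node ends up as the leader may vary:
# on the leader node
Mode: leader
# on the two follower nodes
Mode: follower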
② Manually start the JournalNodes
Run on node01, node02 and node03 respectively:
hadoop-daemon.sh start journalnode
# run jps to verify: node01, node02 and node03 should each now show a JournalNode process
③ Format the NameNode
# run the following command on node01:
hdfs namenode -format
# after formatting, HDFS initialization files are generated under the directory configured as hadoop.tmp.dir in core-site.xml;
# copy everything under that directory to the machine that hosts the other NameNode
cd /export/servers/hadoop-2.6.0-cdh5.14.0
scp -r HAhadoopDatas/ node02:/export/servers/hadoop-2.6.0-cdh5.14.0/
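Alternatively, instead of copying the metadata by hand with scp, the standby NameNode can pull it over itself. This is just another way to achieve the same result; run it on node02 while the JournalNodes are up:
# run on node02 to copy the namespace metadata from the formatted NameNode
hdfs namenode -bootstrapStandby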
④ Format ZKFC (can be run on the active NameNode)
hdfs zkfc -formatZK
⑤ Start HDFS (run on node01)
start-dfs.sh
⑥ Start YARN
start-yarn.sh
You also need to manually start the backup ResourceManager on the standby node (node02):
yarn-daemon.sh start resourcemanager
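To confirm that ResourceManager HA is up, you can query the state of each RM (rm1 and rm2 are the ids configured in yarn-site.xml above); one should report active and the other standby:
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2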
<3> Browser Access
After completing the configuration above, run jps to view the current processes:
[root@node01 helloworld]# jps
14305 QuorumPeerMain
15186 NodeManager
14354 JournalNode
14726 DataNode
20887 Jps
15096 ResourceManager
15658 NameNode
14991 DFSZKFailoverController
Finally, you can access the web UIs in a browser.
Access node01:
http://node01:50070
You can see that the NameNode on this node is currently in the active state.
Access node02:
http://node02:50070
You can see that the NameNode on this node is currently in the standby state.
This shows that our HA cluster has been deployed successfully ~
Next, let's upload a file to HDFS:
hadoop fs -put /etc/profile /profile
In the web UI you can see the newly uploaded file.
Then kill the NameNode on the active node (i.e. node01):
kill -9 <pid of NN>
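If you are not sure which pid to use, jps will tell you; the grep below is just one convenient way to find it:
jps | grep NameNode    # the first column is the pid to pass to kill -9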
Now visit node02 in the browser:
http://node02:50070
You will find that after node01 goes "down", the NameNode on node02 switches to the active state!
Run the following command on node02; the data in the cluster is exactly the same as it was before node01 went down.
hadoop fs -ls /
-rw-r--r-- 3 root supergroup 1926 2014-02-06 15:36 /profile
We just manually "killed" the NameNode on node01; now let's start it again manually:
hadoop-daemon.sh start namenode
We have now verified HDFS HA; next let's verify YARN!
On any node, run the WordCount demo program that ships with Hadoop:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.14.0.jar wordcount /profile /out
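Once the job finishes, you can inspect the result straight from the command line; /out is the output path used in the command above:
hadoop fs -cat /out/part-r-*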
If you see results like those above, our HA cluster build can be considered a success !!!
<4> Extras
OK, and you're done! Here are a few extra commands for testing the HA cluster:
hdfs dfsadmin -report                 # view status information for each HDFS node
hdfs haadmin -getServiceState nn1     # get the HA state of a NameNode
hadoop-daemon.sh start zkfc           # start a single ZKFC process on its own
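You can also peek into ZooKeeper to see the znodes used for failover. The znode names below are the usual defaults created by hdfs zkfc -formatZK and by ResourceManager HA, so treat them as assumptions:
# open a client session from the ZooKeeper install directory
bin/zkCli.sh -server node01:2181
# then, inside the zkCli shell:
ls /hadoop-ha              # znode used by the HDFS failover controllers
ls /yarn-leader-election   # znode used for ResourceManager leader election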
That's all for this post. If you are interested in big data technology, feel free to follow the blogger so we can learn from each other and make progress together!