In the last article we finished installing a Hadoop cluster, but from the node layout we can see that the NameNode only exists on the master machine; once that machine goes down, HDFS stops serving. We therefore need a mechanism to keep the NameNode highly available, and the same risk exists for the ResourceManager. This article uses ZooKeeper to make both the NameNode and the ResourceManager highly available.
1 System, software and prerequisites
- A Hadoop cluster already installed on the three machines
https://www.jianshu.com/u/0b75036451ae
- A ZooKeeper cluster installed and started on the three machines
https://www.jianshu.com/p/48f142f876d4
2 Procedure
2.1 Stop the services on all nodes and clear the old HDFS data (the ZooKeeper data can be cleared beforehand as well)
ssh master
hadoop-2.5.2/sbin/stop-all.sh
cd hadoop-2.5.2/dfs
rm -rf *
mkdir data
mkdir name
# do the same cleanup on the other two machines
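If you prefer to do the same cleanup from master instead of logging in to each slave, a minimal sketch (assuming the same /root/hadoop-2.5.2 layout on slave1 and slave2 and passwordless SSH, which the fencing setup below needs anyway):
# recreate empty dfs/data and dfs/name on both slaves
for node in slave1 slave2; do
  ssh $node "cd /root/hadoop-2.5.2/dfs && rm -rf * && mkdir data name"
done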
2.2 Modify the configuration files on master and copy them to the other nodes
- Modify core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/root/hadoop-2.5.2/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://myha01/</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>master:2181,slave1:2181,slave2:2181</value>
</property>
<property>
<name>ha.zookeeper.session-timeout.ms</name>
<value>5000</value>
</property>
</configuration>
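Since ha.zookeeper.quorum points at the three ZooKeeper servers, it is worth confirming the quorum is actually running before the ZKFC formatting step later. A quick check, assuming ZooKeeper is installed under /root/zookeeper-3.4.x (adjust the path to your installation):
# one node should report "Mode: leader", the other two "Mode: follower"
for node in master slave1 slave2; do
  ssh $node "/root/zookeeper-3.4.x/bin/zkServer.sh status"
done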
- Modify hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:50090</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/root/hadoop-2.5.2/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/root/hadoop-2.5.2/dfs/data</value>
</property>
<!-- The nameservice is myha01 and must match the value in core-site.xml.
dfs.ha.namenodes.[nameservice id] assigns a unique identifier to every NameNode in the nameservice.
Configure a comma-separated list of NameNode IDs; this is how DataNodes recognize all the NameNodes.
For example, here "myha01" is the nameservice ID and "nn1" and "nn2" are the NameNode identifiers.
-->
<property>
<name>dfs.nameservices</name>
<value>myha01</value>
</property>
<!-- myha01 has two NameNodes: nn1 and nn2 -->
<property>
<name>dfs.ha.namenodes.myha01</name>
<value>nn1,nn2</value>
</property>
<!-- RPC address of nn1 -->
<property>
<name>dfs.namenode.rpc-address.myha01.nn1</name>
<value>master:9000</value>
</property>
<!-- HTTP address of nn1 -->
<property>
<name>dfs.namenode.http-address.myha01.nn1</name>
<value>master:50070</value>
</property>
<!-- RPC address of nn2 -->
<property>
<name>dfs.namenode.rpc-address.myha01.nn2</name>
<value>slave1:9000</value>
</property>
<!-- HTTP address of nn2 -->
<property>
<name>dfs.namenode.http-address.myha01.nn2</name>
<value>slave1:50070</value>
</property>
<!-- Shared storage location for the NameNode edits metadata, i.e. the list of JournalNodes.
The URL format is qjournal://host1:port1;host2:port2;host3:port3/journalId.
Using the nameservice as the journalId is recommended; the default port is 8485. -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://master:8485;slave1:8485;slave2:8485/myha01</value>
</property>
<!-- Local directory where the JournalNode stores its data -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/root/hadoop-2.5.2/data/journaldata</value>
</property>
<!-- Enable automatic NameNode failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- Failover implementation used by clients to locate the active NameNode -->
<property>
<name>dfs.client.failover.proxy.provider.myha01</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing methods; multiple methods are separated by newlines, one per line -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
</value>
</property>
<!-- The sshfence mechanism requires passwordless SSH -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<!-- Timeout for the sshfence mechanism (ms) -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
<property>
<name>ha.failover-controller.cli-check.rpc-timeout.ms</name>
<value>60000</value>
</property>
</configuration>
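Because dfs.ha.fencing.methods uses sshfence, failover only works if each NameNode host can SSH to the other one without a password using the key configured above. A quick sanity check from master (and the same from slave1 back to master):
# should print the remote hostname without asking for a password
ssh -i /root/.ssh/id_rsa slave1 hostname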
- Modify yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Enable ResourceManager HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- Cluster id for the ResourceManager HA pair -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yrc</value>
</property>
<!-- Logical ids of the ResourceManagers -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<!-- Hostnames of the two ResourceManagers -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>master</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>slave1</value>
</property>
<!-- ZooKeeper cluster addresses -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>master:2181,slave1:2181,slave2:2181</value>
</property>
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>86400</value>
</property>
<!-- Enable automatic recovery of ResourceManager state -->
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<!-- Store the ResourceManager state on the ZooKeeper cluster -->
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
</configuration>
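With recovery enabled and ZKRMStateStore, the ResourceManager keeps its state in ZooKeeper, by default under the /rmstore znode (yarn.resourcemanager.zk-state-store.parent-path is not overridden here). Once YARN is running you can peek at it with zkCli.sh; the ZooKeeper path below is an assumption, adjust it to your installation:
/root/zookeeper-3.4.x/bin/zkCli.sh -server master:2181
# then, inside the ZooKeeper shell:
ls /rmstore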
- Modify slaves
master
slave1
slave2
- Copy the configuration files to the other machines
cd /root/hadoop-2.5.2/etc/hadoop
scp * slave1:$PWD
scp * slave2:$PWD
2.3 Start the services
- 1 Start the JournalNodes on master, slave1 and slave2
cd /root/hadoop-2.5.2/sbin
./hadoop-daemons.sh start journalnode
- 2 Format the ZKFC state in ZooKeeper on master
cd /root/hadoop-2.5.2/bin
./hdfs zkfc -formatZK
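hdfs zkfc -formatZK creates the HA znode for the nameservice in ZooKeeper (by default under /hadoop-ha). To confirm it worked, you can check with zkCli.sh; again the ZooKeeper path is an assumption:
/root/zookeeper-3.4.x/bin/zkCli.sh -server master:2181
# then, inside the ZooKeeper shell:
ls /hadoop-ha
# should list myha01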
- 3 Format the NameNode on master
cd /root/hadoop-2.5.2/bin
./hdfs namenode -format
- 4 Start the NameNode on master
cd /root/hadoop-2.5.2/sbin
./hadoop-daemon.sh start namenode
- 5 Bootstrap and start the standby NameNode on slave1
cd /root/hadoop-2.5.2/bin
./hdfs namenode -bootstrapStandby
cd /root/hadoop-2.5.2/sbin
./hadoop-daemon.sh start namenode
- 6 Make nn1 (master) the active NameNode
cd /root/hadoop-2.5.2/bin
./hdfs haadmin -transitionToActive --forcemanual nn1
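At this point both NameNodes should be up, nn1 active and nn2 standby. You can check their states with:
cd /root/hadoop-2.5.2/bin
./hdfs haadmin -getServiceState nn1
./hdfs haadmin -getServiceState nn2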
- 7 Start the DataNodes from master
cd /root/hadoop-2.5.2/sbin
./hadoop-daemons.sh start datanode
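To confirm the three DataNodes registered with the active NameNode:
cd /root/hadoop-2.5.2/bin
./hdfs dfsadmin -report
# should show three live datanodes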
- 8 Start YARN on master and slave1
cd /root/hadoop-2.5.2/sbin
./start-yarn.sh
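Once both ResourceManagers are up you can check which one is active (rm1 and rm2 are the ids from yarn-site.xml):
cd /root/hadoop-2.5.2/bin
./yarn rmadmin -getServiceState rm1
./yarn rmadmin -getServiceState rm2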
- 9 Start ZKFC from master
cd /root/hadoop-2.5.2/sbin
./hadoop-daemons.sh start zkfc
After everything is started, run jps on each node; the processes should look like this:
Master:
[root@master ~]# jps
14962 Jps
12474 NameNode
12161 JournalNode
12757 DataNode
14838 NodeManager
7930 QuorumPeerMain
13757 ResourceManager
13366 DFSZKFailoverController
slave1:
[root@slave1 ~]# jps
10564 DataNode
10378 NameNode
12161 NodeManager
8736 QuorumPeerMain
11231 DFSZKFailoverController
10159 JournalNode
12287 Jps
12673 ResourceManager
slave2:
[root@slave2 ~]# jps
12706 Jps
11449 NodeManager
10764 QuorumPeerMain
10884 JournalNode
11171 DataNode
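To verify that automatic failover really works, one possible test (a sketch; only do this on a cluster that carries no important data) is to stop the active NameNode on master, watch nn2 take over, and then bring nn1 back:
# on master
/root/hadoop-2.5.2/sbin/hadoop-daemon.sh stop namenode
/root/hadoop-2.5.2/bin/hdfs haadmin -getServiceState nn2
# should now report active
/root/hadoop-2.5.2/sbin/hadoop-daemon.sh start namenode
/root/hadoop-2.5.2/bin/hdfs haadmin -getServiceState nn1
# the restarted nn1 comes back as standby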