The HA cluster is built on top of a fully distributed setup.
Purpose:
Eliminate the single point of failure.
Cluster planning:
| | NN1 | NN2 | DN | ZK | ZKFC | JNN | RM |
|---|---|---|---|---|---|---|---|
| hadoop100 | * | | * | * | * | * | * |
| hadoop101 | | * | * | * | * | * | * |
| hadoop102 | | | * | * | | * | |
In the table above, NN, DN, ZK, ZKFC, JNN, and RM stand for:
NN: NameNode
DN: DataNode
ZK: ZooKeeper
ZKFC: ZKFailoverController (ZooKeeper failover controller)
JNN: JournalNode
RM: ResourceManager
Preparation:
1. Configure time synchronization (optional)
Run on all nodes:
yum -y install ntp
ntpdate ntp1.aliyun.com
2. Configure the hostname mapping on every host (if you are unsure how, look it up yourself)
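For reference, the mapping just means every node's /etc/hosts resolves the three hostnames; the IP addresses below are placeholders, substitute your own:
192.168.1.100 hadoop100
192.168.1.101 hadoop101
192.168.1.102 hadoop102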
3. Set up passwordless SSH login
Run on all nodes:
ssh-keygen -t rsa    # generate the key pair; press Enter at every prompt until it completes
ssh-copy-id hadoop101    # send the public key to the other nodes (see the note below)
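Note: for passwordless login to work across the whole cluster (and for the sshfence method configured later), each node should copy its key to every host in the cluster, for example:
ssh-copy-id hadoop100
ssh-copy-id hadoop102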
4. Turn off the firewall
systemctl stop firewalld.service    # stop it for now
systemctl disable firewalld.service    # disable it permanently
Building the high-availability Hadoop cluster
Install the JDK (a minimal CentOS install ships without a JDK, so you can simply extract your own; a graphical install requires uninstalling the bundled JDK first, as sketched below)
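A minimal sketch for the graphical-install case (standard RPM usage; the exact package names vary, check what is actually listed on your machine):
rpm -qa | grep -i jdk              # list the bundled JDK packages
rpm -e --nodeps <package name>     # remove each one, then extract your own JDK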
Extract the Hadoop installation package
tar xf hadoop2.7.7... -C /module/ha/
Change into the Hadoop configuration directory
cd /module/ha/hadoop/etc/hadoop
In the three env files (hadoop-env.sh, yarn-env.sh, mapred-env.sh), find JAVA_HOME and set it to the JDK installation path, as in the example below
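For example, assuming the JDK was extracted to /module/jdk1.8.0 (use your actual path), each of the three files gets:
export JAVA_HOME=/module/jdk1.8.0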
Configure core-site.xml
vi core-site.xml
<!-- Logical name of the HDFS nameservice; pick any name you like -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<!-- Path where Hadoop stores its metadata -->
<property>
<name>hadoop.tmp.dir</name>
<value>/module/ha/hadoop/data</value>
</property>
<!-- ZooKeeper quorum addresses -->
<property>
<name>ha.zookeeper.quorum</name>
<value>hadoop100:2181,hadoop101:2181,hadoop102:2181</value>
</property>
vi hdfs-site.xml
<!-- Name of the fully distributed cluster (nameservice) -->
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<!-- NameNodes that belong to the cluster -->
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<!-- RPC address of nn1 -->
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>hadoop100:9000</value>
</property>
<!-- RPC address of nn2 -->
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>hadoop101:9000</value>
</property>
<!-- HTTP address of nn1 -->
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>hadoop100:50070</value>
</property>
<!-- HTTP address of nn2 -->
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>hadoop101:50070</value>
</property>
<!-- Where the NameNode shared edits are stored on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hadoop100:8485;hadoop101:8485;hadoop102:8485/mycluster</value>
</property>
<!-- Fencing method: ensures only one NameNode serves clients at a time -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<!-- sshfence requires passwordless SSH login -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<!-- Storage directory on the JournalNode servers -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/module/ha/hadoop/data/jn</value>
</property>
<!-- Disable permission checking -->
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
<!-- Client failover proxy provider: how clients locate the active NameNode -->
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Enable automatic failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
Configure mapred-site.xml
vi mapred-site.xml
Add the following configuration
<!-- Run MapReduce on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
Configure yarn-site.xml (optional)
vi yarn-site.xml
Add the following configuration
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Enable ResourceManager HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- Declare the two ResourceManagers -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>cluster-yarn1</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>hadoop100</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>hadoop101</value>
</property>
<!-- ZooKeeper quorum addresses -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>hadoop100:2181,hadoop101:2181,hadoop102:2181</value>
</property>
<!-- Enable automatic recovery -->
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<!-- Store ResourceManager state in the ZooKeeper cluster -->
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
Configure slaves
vi slaves
hadoop100
hadoop101
hadoop102
After configuration is finished, distribute Hadoop to all nodes
scp -r -p /module/ha/hadoop/ root@hadoop101:/module/ha/
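Do the same for the remaining node, assuming the same directory layout:
scp -r -p /module/ha/hadoop/ root@hadoop102:/module/ha/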
ZooKeeper installation
Extract the archive
tar xf <zookeeper archive> -C <destination path>
Configure the environment variables yourself (a sketch follows)
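A minimal sketch, assuming ZooKeeper was extracted to /module/zookeeper (adjust to your path); append to /etc/profile and reload it:
export ZOOKEEPER_HOME=/module/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin
source /etc/profile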
In the conf directory, rename and edit zoo_sample.cfg
mv zoo_sample.cfg zoo.cfg
vi zoo.cfg
Set the path where ZooKeeper stores its data:
dataDir=/module/zookeeper/zk
At the bottom of the file, add the following entries:
server.1=hadoop100:2888:3888
server.2=hadoop101:2888:3888
server.3=hadoop102:2888:3888
Change where ZooKeeper writes its logs (optional, can be left as is)
vi bin/zkEnv.sh
Change the log directory path to wherever you want the logs stored, as in the example below
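For example, assuming the logs should go to /module/zookeeper/logs (any writable directory works), set the log directory variable in zkEnv.sh to:
ZOO_LOG_DIR="/module/zookeeper/logs"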
Create the data directory configured for ZooKeeper
mkdir /module/zookeeper/zk
Distribute ZooKeeper and its configuration to all nodes
Write the corresponding id on each node
echo 1 > /module/zookeeper/zk/myid    # on hadoop100
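The other nodes get the ids declared by the server.N entries in zoo.cfg:
echo 2 > /module/zookeeper/zk/myid    # on hadoop101
echo 3 > /module/zookeeper/zk/myid    # on hadoop102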
On each node of the cluster, run the startup script to start the ZooKeeper service
zkServer.sh start
Deployment details
1. The JournalNodes must be started first
hadoop-daemons.sh start journalnode
2. On one of the nodes configured as a NameNode, format HDFS
bin/hdfs namenode -format
3. On that NameNode, start the namenode process
sbin/hadoop-daemon.sh start namenode
4. On the other NameNode node, synchronize the metadata
bin/hdfs namenode -bootstrapStandby
5. On one of the NameNode nodes, format ZKFC
bin/hdfs zkfc -formatZK
6. On one of the NameNode nodes, start the Hadoop cluster
sbin/start-dfs.sh
Use the jps command on each node to check whether the expected processes are present (see the sketch below).
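Roughly, per the planning table (process names as printed by jps):
hadoop100: NameNode, DataNode, JournalNode, DFSZKFailoverController, QuorumPeerMain
hadoop101: NameNode, DataNode, JournalNode, DFSZKFailoverController, QuorumPeerMain
hadoop102: DataNode, JournalNode, QuorumPeerMain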
At this point, the HA Hadoop cluster build is complete!
To enable ResourceManager HA, run on one of the nodes configured for RM
sbin/start-yarn.sh
On the other RM node, also run
sbin/yarn-daemon.sh start resourcemanager
Check that all nodes show the expected processes; at this point the RM configuration of the HA cluster is complete!
Frequently used commands
Start the NameNode process: hadoop-daemon.sh start namenode
Start the DataNode process: hadoop-daemon.sh start datanode
Processes needed in the HA environment:
ZooKeeper: zkServer.sh start / zkServer.sh stop to start and stop; zkServer.sh status to view leader/follower status
JournalNode: hadoop-daemon.sh start journalnode to start; hadoop-daemon.sh stop journalnode to stop
ZKFC format: hdfs zkfc -formatZK; start the zkfc process: hadoop-daemon.sh start zkfc; stop the zkfc process: hadoop-daemon.sh stop zkfc
NameNode metadata synchronization: hdfs namenode -bootstrapStandby
Start the YARN process:
yarn-daemon.sh start resourcemanager
Simulating automatic failover (switching the standby NameNode to active)
1. Use jps to find the NameNode process id
jps
2. Kill the namenode process
kill -9 <namenode process id>
Then check the two NameNode web pages to see whether the active status has switched over (or use the commands below)
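Alternatively, the state can be checked from the command line with hdfs haadmin (nn1 and nn2 are the ids defined in hdfs-site.xml):
bin/hdfs haadmin -getServiceState nn1
bin/hdfs haadmin -getServiceState nn2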
3. Restart the namenode process:
hadoop-daemon.sh start namenode