Hadoop HDFS High Availability: Installation and Testing of ZooKeeper Automatic Failover

Installation
Based on CentOS 7, not a minimal install: select part of the Server services plus the Development Tools group. The root user is used throughout, because permissions and security behave differently at daemon startup when another user is used.
Step 1: Download Hadoop from hadoop.apache.org
Choose one of the recommended download mirrors:
https://hadoop.apache.org/releases.html

Step 2: Download the JDK
http://www.oracle.com/technetwork/pt/java/javase/downloads/jdk8-downloads-2133151.html

Step 4: Unpack the downloaded files
Unpack the JDK archive:
Command ## tar -zxvf /root/Download/jdk-8u192-linux-x64.tar.gz -C /opt
Unpack the Hadoop archive:
Command ## tar -zxvf /root/Download/hadoop-2.9.2.tar.gz -C /opt

Step 5: Install JSVC
Command ## rpm -ivh apache-commons-daemon-jsvc-1.0.13-7.el7.x86_64.rpm

Step 6: Set the hostnames
Command ## vi /etc/hosts
Add entries for every server involved:
192.168.209.131 jacksun01.com
192.168.209.132 jacksun02.com
192.168.209.133 jacksun03.com
Set this host's own name:
Command ## vi /etc/hostname
jacksun01.com

Step 7: Passwordless SSH (mutual trust)
Note: the root user is used here, so the home directory below is /root.

If you configure some other user xxxx instead, the home directory is /home/xxxx/.

# Run the following on the master node:
# ssh-keygen -t rsa -P ''   # press Enter at every prompt until the key pair is generated
Command # ssh-keygen -t rsa; cd /root/.ssh; ssh-copy-id jacksun01.com; ssh-copy-id jacksun02.com; ssh-copy-id jacksun03.com
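
A quick sanity check (a minimal sketch, assuming the three hostnames above resolve from this node): each command should print the remote hostname without asking for a password.
# for h in jacksun01.com jacksun02.com jacksun03.com; do ssh $h hostname; done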

Step 8: Add environment variables
Command # vi /root/.bash_profile

PATH=/usr/local/webserver/mysql/bin:/usr/python/bin:/opt/hadoop-2.9.2/etc/hadoop:/opt/jdk/bin:/opt/hadoop-2.9.2/bin:/opt/hadoop-2.9.2/sbin:$PATH:$HOME/bin:/opt/spark/bin:/opt/spark/sbin:/opt/hive/bin:/opt/flume/bin:/opt/kafka/bin
export PATH
JAVA_HOME=/opt/jdk
export JAVA_HOME
export HADOOP_HOME=/opt/hadoop-2.9.2
export LD_LIBRARY_PATH=/usr/local/lib:/usr/python/lib:/usr/local/webserver/mysql/lib

export SPARK_HOME=/opt/spark
export PATH=$PATH:$SPARK_HOME/bin
export HIVE_HOME=/opt/hive
export HIVE_CONF_DIR=$HIVE_HOME/conf
export PATH=$PATH:$HIVE_HOME/bin
export YARN_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export SQOOP_HOME=/opt/sqoop
export PATH=$PATH:$SQOOP_HOME/bin
export FLUME_HOME=/opt/flume
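
After saving, reload the profile and confirm the tools resolve. This is a minimal check; it assumes the unpacked JDK directory (e.g. jdk1.8.0_192) has been renamed or symlinked to /opt/jdk so that JAVA_HOME above is valid.
# source /root/.bash_profile
# java -version
# hadoop version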

Step 9: Edit vi /opt/hadoop-2.9.2/etc/hadoop/hadoop-env.sh
Add:

export JAVA_HOME=/opt/jdk
export HDFS_DATANODE_SECURE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_NAMENODE_USER=root
export JSVC_HOME=/usr/bin

Step 10: Edit vi /opt/hadoop-2.9.2/etc/hadoop/core-site.xml

<!-- Put site-specific property overrides in this file. -->

<configuration>
<!-- Default file system (NameNode nameservice) URI -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<!-- Directory where the JournalNode stores its edits files -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/opt/hadoop-2.9.2/jndata</value>
</property>
<!-- Base directory for Hadoop runtime files -->
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/hadoop-2.9.2/tmp</value>
</property>
</configuration>

Step 11: Edit vi /opt/hadoop-2.9.2/etc/hadoop/hdfs-site.xml

<!-- Put site-specific property overrides in this file. -->
<configuration>
<!-- Logical name (nameservice ID) of the cluster -->
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<!-- NameNode IDs within the nameservice -->
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<!-- NameNode RPC addresses -->
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>jacksun01.com:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>jacksun02.com:8020</value>
</property>
<!-- NameNode HTTP addresses -->
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>jacksun01.com:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>jacksun02.com:50070</value>
</property>
<!-- Shared edits JournalNode (JN) servers -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://jacksun01.com:8485;jacksun02.com:8485;jacksun03.com:8485/mycluster</value>
</property>
<!-- Java class that HDFS clients use to contact the active NameNode -->
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing method -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<!-- Path to the SSH private key used for fencing -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>

<!-- NameNode metadata directory -->
<property>
<name>dfs.namenode.name.dir</name>
<value>/opt/hadoop-2.9.2/name</value>
</property>
<!-- DataNode data directory -->
<property>
<name>dfs.datanode.data.dir</name>
<value>/opt/hadoop-2.9.2/data</value>
</property>
</configuration>
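
The local directories referenced in core-site.xml and hdfs-site.xml do not exist yet. Hadoop creates most of them on startup, but creating them explicitly on every node is a harmless precaution (a sketch, assuming the paths configured above):
# mkdir -p /opt/hadoop-2.9.2/jndata /opt/hadoop-2.9.2/tmp /opt/hadoop-2.9.2/name /opt/hadoop-2.9.2/data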

Step 12: Edit vi /opt/hadoop-2.9.2/etc/hadoop/mapred-site.xml
If this file does not exist, copy it from the template:
# cp /opt/hadoop-2.9.2/etc/hadoop/mapred-site.xml.template /opt/hadoop-2.9.2/etc/hadoop/mapred-site.xml

<configuration>
<!-- Tell the MapReduce framework to use YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=/opt/hadoop-2.9.2</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=/opt/hadoop-2.9.2</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>jacksun01.com:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>jacksun01.com:19888</value>
</property>
</configuration>
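
The JobHistory addresses above only take effect if the history server is actually running; it is not started by start-dfs.sh or start-yarn.sh. A sketch, run on jacksun01.com once the cluster is up (Step 19):
# cd /opt/hadoop-2.9.2; ./sbin/mr-jobhistory-daemon.sh start historyserver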

Step 13: Edit vi /opt/hadoop-2.9.2/etc/hadoop/yarn-site.xml
<configuration>
<!-- Reducers fetch map output via mapreduce_shuffle -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Enable log aggregation -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- Directory for aggregated logs -->
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/opt/hadoop-2.9.2/logs</value>
</property>
<property>
<!-- Node where the ResourceManager runs -->
<name>yarn.resourcemanager.hostname</name>
<value>jacksun01.com</value>
</property>
<property>
<!-- Node that serves yarn.log.server.url (the JobHistory server) -->
<name>yarn.log.server.url</name>
<value>http://jacksun01.com:19888/jobhistory/logs</value>
</property>
</configuration>


Step 14: Edit vi /opt/hadoop-2.9.2/etc/hadoop/slaves   # list the DataNode hosts
jacksun01.com
jacksun02.com
jacksun03.com
## The cluster here is built by cloning a single VM, which still yields a distributed deployment. With multiple physical machines, the same configuration plus mutual passwordless SSH gives a truly distributed cluster.

Step 15: Edit vi /opt/hadoop-2.9.2/etc/hadoop/yarn-env.sh
YARN_RESOURCEMANAGER_USER=root
YARN_NODEMANAGER_USER=root

Step 16: Clone the server
Shut down Linux: halt or init 0 (reboot or init 6)
A) Copy the VM files to create the servers jacksun02.com and jacksun03.com
B) Edit ## vi /etc/hostname on each clone
C) Re-establish passwordless SSH (mutual trust) between all nodes
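
On each clone, the hostname and static IP have to be adjusted before the SSH trust is re-established. A sketch for the second node; hostnamectl is the standard CentOS 7 tool, and ifcfg-ens33 is an assumed interface name that depends on your VM:
# hostnamectl set-hostname jacksun02.com
# vi /etc/sysconfig/network-scripts/ifcfg-ens33   # assumed interface name; set IPADDR=192.168.209.132
# systemctl restart network
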
Step 17: Start the JournalNode daemon on each of the JournalNode machines
# cd /opt/hadoop-2.9.2; ./sbin/hadoop-daemon.sh start journalnode
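
Run this on every host listed in dfs.namenode.shared.edits.dir; a minimal check is that jps reports a JournalNode process on each of them:
# /opt/jdk/bin/jps | grep JournalNode
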
Step 18: Format HDFS
# cd /opt/hadoop-2.9.2/etc/hadoop/
# hdfs namenode -format
Format only once. Formatting repeatedly can leave DataNodes unable to register (cluster ID mismatch); if you really need to reformat, delete the data directories first.
A)If you are setting up a fresh HDFS cluster, you should first run the format command (hdfs namenode -format) on one of NameNodes.

B)If you have already formatted the NameNode, or are converting a non-HA-enabled cluster to be HA-enabled, you should now copy over the contents of your NameNode metadata directories to the other, unformatted NameNode by running the command “hdfs namenode -bootstrapStandby” on the unformatted NameNode. Running this command will also ensure that the JournalNodes (as configured by dfs.namenode.shared.edits.dir) contain sufficient edits transactions to be able to start both NameNodes.

C)If you are converting a non-HA NameNode to be HA, you should run the command “hdfs namenode -initializeSharedEdits”, which will initialize the JournalNodes with the edits data from the local NameNode edits directories.
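
For this two-NameNode cluster, case B) applies after the initial format: bootstrap the second NameNode. A sketch, assuming the JournalNodes from Step 17 are still running and jacksun01.com has just been formatted:
# ssh jacksun02.com
# cd /opt/hadoop-2.9.2; ./bin/hdfs namenode -bootstrapStandby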

Step 19: Start HDFS and YARN on the appropriate nodes
sbin/start-dfs.sh
sbin/start-yarn.sh
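
A quick way to see which daemons ended up on which node (a minimal sketch; the full JDK path is used because a non-interactive SSH session does not source /root/.bash_profile):
# for h in jacksun01.com jacksun02.com jacksun03.com; do echo "== $h =="; ssh $h /opt/jdk/bin/jps; done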

Step 20: Verify the installation

Without automatic failover (configured later), both NameNodes come up in standby; check the states and promote nn1 manually:
hdfs haadmin -getAllServiceState
hdfs haadmin -transitionToActive nn1

Usage: haadmin
[-transitionToActive <serviceId>]
[-transitionToStandby <serviceId>]
[-failover [--forcefence] [--forceactive] <serviceId> <serviceId>]
[-getServiceState <serviceId>]
[-getAllServiceState]
[-checkHealth <serviceId>]
[-help <command>]

Step 22: Upload a file to test
# cd ~
# vi helloworld.txt
# hdfs dfs -put helloworld.txt helloworld.txt
# cd /opt/hadoop-2.9.2; bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.2.jar wordcount helloworld.txt output2
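
To inspect the result (a minimal check, assuming the job above finished and wrote to output2 under the current HDFS user's home directory):
# hdfs dfs -ls output2
# hdfs dfs -cat output2/part-r-00000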

========================ZooKeeper Automatic Failover=======================================================================
Automatic failover relies on: failure detection, active NameNode election, health monitoring, ZooKeeper session management, and ZooKeeper-based election.

Step 1: Time synchronization
A) Install the NTP packages
Check whether the ntp packages are installed; if not, install them with rpm or yum, which is quick and straightforward.
[root@localhost ~]# rpm -qa | grep ntp
ntpdate-4.2.6p5-1.el6.x86_64
fontpackages-filesystem-1.41-1.1.el6.noarch
ntp-4.2.6p5-1.el6.x86_64
B) Configure vi /etc/ntp.conf
# Add/adjust the local network restriction
# Hosts on local network are less restricted.
restrict 192.168.209.131 mask 255.255.255.0 nomodify notrap

# Comment out the upstream servers
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

# Add or uncomment the local clock entries
server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10

C) Add the following to the /etc/sysconfig/ntpd file:
SYNC_HWCLOCK=yes

D) On the other nodes, edit the crontab (crontab -e) and add:
0-59/10 * * * * /usr/sbin/ntpdate jacksun01.com
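
After restarting ntpd on jacksun01.com, the other nodes can do a one-off sync and you can check the peer status on the server (a sketch using standard ntp tools):
# systemctl enable ntpd; systemctl restart ntpd   # on jacksun01.com
# /usr/sbin/ntpdate -u jacksun01.com              # on jacksun02.com / jacksun03.com
# ntpq -p                                         # on jacksun01.com, show peers and offsets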

Step 2: Configuring automatic failover
Add the following to hdfs-site.xml (the Apache docs place ha.zookeeper.quorum in core-site.xml; since both files are synced to all nodes in the next step, either location works):

<!-- Enable automatic failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- ZooKeeper quorum servers -->
<property>
<name>ha.zookeeper.quorum</name>
<value>jacksun01.com:2181,jacksun02.com:2181,jacksun03.com:2181</value>
</property>

Step 3: Sync the configuration files to the other nodes
cd /opt/hadoop-2.9.2/etc/hadoop/; scp core-site.xml hdfs-site.xml yarn-site.xml root@jacksun02.com:/opt/hadoop-2.9.2/etc/hadoop/ ; scp core-site.xml hdfs-site.xml yarn-site.xml root@jacksun03.com:/opt/hadoop-2.9.2/etc/hadoop/

Step 4: Stop all Hadoop daemons
cd /opt/hadoop-2.9.2;./sbin/stop-all.sh

Step 5: Start ZooKeeper on all three nodes
cd /opt/zookeeper;./bin/zkServer.sh start
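
This assumes ZooKeeper has already been unpacked to /opt/zookeeper and configured as a three-node ensemble. Once it is running on every node, one node should report leader and the other two follower (a minimal check):
# /opt/zookeeper/bin/zkServer.sh status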


Step 6: Initializing HA state in ZooKeeper

cd /opt/hadoop-2.9.2;./bin/hdfs zkfc -formatZK
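
zkfc -formatZK creates a znode under /hadoop-ha in ZooKeeper; it can be confirmed from the ZooKeeper CLI (a minimal check, run against any quorum member):
# /opt/zookeeper/bin/zkCli.sh -server jacksun01.com:2181
ls /hadoop-ha        # inside the CLI; should list the mycluster znode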

Step 7: Test
Kill the active NameNode and verify that the standby NameNode becomes active within a few seconds.
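
Before testing, restart HDFS so that the ZKFC daemons come up alongside the NameNodes (start-dfs.sh starts them automatically once automatic failover is enabled). A sketch of the test, assuming jacksun01.com currently holds the active NameNode:
# cd /opt/hadoop-2.9.2; ./sbin/start-dfs.sh
# hdfs haadmin -getAllServiceState        # note which NameNode is active
# /opt/jdk/bin/jps | grep NameNode        # on the active node, find the NameNode pid
# kill -9 <NameNode pid>
# hdfs haadmin -getAllServiceState        # the former standby should now report active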


Reposted from www.cnblogs.com/sundy818/p/10115389.html