Installing distributed Hadoop 2.6.0 + HBase + Hive on CentOS 7.5 (offline installation from CDH 5.14.2 tarballs)

Host environment

Basic configuration:

Nodes: 5
OS: CentOS Linux release 7.5.1804 (Core)
Memory: 8 GB

Recommended configuration:

Nodes: 5
OS: CentOS Linux release 7.5.1804 (Core)
Memory: 16 GB

Note: In production, allocate memory according to actual requirements. If you are only building virtual machines in VMware, 1-2 GB per host is enough.

Software environment

Software    Version
jdk         jdk-8u172-linux-x64
hadoop      hadoop-2.6.0-cdh5.14.2
zookeeper   zookeeper-3.4.5-cdh5.14.2
hbase       hbase-1.2.0-cdh5.14.2
hive        hive-1.1.0-cdh5.14.2

Note: All CDH5 packages can be downloaded from http://archive.cloudera.com/cdh5/cdh/5/

Host planning

The roles of the 5 nodes are planned as follows:

Hostname CDHNode1 CDHNode2 CDHNode3 CDHNode4 CDHNode5
IP 192.168.223.201 192.168.223.202 192.168.223.203 192.168.223.204 192.168.223.205
namenode yes yes no no no
datanode no no yes yes yes
resourcemanager yes yes no no no
journalnode yes yes yes yes yes
zookeeper yes yes yes no no
hmaster (hbase) yes yes no no no
regionserver (hbase) no no yes yes yes

Note: Keep an odd number of JournalNode and ZooKeeper instances; for high availability use no fewer than 3 nodes. The detailed reasons will be covered later.

Pre-installation preparation

  1. Disable SELinux on all nodes
sed -i 's/^SELINUX=.*$/SELINUX=disabled/g' /etc/selinux/config 
setenforce 0
  2. Disable the firewall (firewalld or iptables) on all nodes
systemctl disable firewalld
systemctl stop firewalld
systemctl disable iptables
systemctl stop iptables
  3. Enable time synchronization (ntpdate) on all nodes
echo "*/5 * * * * /usr/sbin/ntpdate asia.pool.ntp.org | logger -t NTP" >> /var/spool/cron/root
  4. Set the locale and time zone on all nodes
echo 'export TZ=Asia/Shanghai' >> /etc/profile
echo 'export LANG=en_US.UTF-8' >> /etc/profile
. /etc/profile
  5. Add the hadoop user on all nodes
useradd -m hadoop
echo '123456' | passwd --stdin hadoop
# set the shell prompt (PS1)
su - hadoop
echo 'export PS1="\u@\h:\$PWD>"' >> ~/.bash_profile
. ~/.bash_profile
  6. Set up passwordless SSH logins between the hadoop users. First, generate a key pair on CDHNode1
su - hadoop
ssh-keygen -t rsa    # press Enter at every prompt to generate the hadoop user's public and private keys
cd .ssh
vi id_rsa.pub        # remove the trailing hostname "hadoop@CDHNode1" from the end of the public key
cat id_rsa.pub > authorized_keys
chmod 600 authorized_keys

Archive the .ssh directory

su - hadoop
zip -r ssh.zip .ssh

Then distribute ssh.zip to the hadoop user's home directory on CDHNode2-5 and unzip it there to complete the passwordless login setup.
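
A minimal sketch of the distribution step, assuming password-based SSH login as the hadoop user is still possible at this point and that the unzip package is installed on every node:

for node in CDHNode2 CDHNode3 CDHNode4 CDHNode5; do
  scp ~/ssh.zip ${node}:/home/hadoop/                                 # prompts for the hadoop password once per node
  ssh ${node} "cd /home/hadoop && unzip -o ssh.zip && rm -f ssh.zip"
done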

  7. Tune kernel parameters and limits such as the maximum number of open files and processes. The optimal values may differ from host to host, so no specific method is given here; however, if the Hadoop environment is for production use this tuning is mandatory, because the Linux defaults can lead to poor cluster performance.
  8. On the datanode hosts (CDHNode3-5), mount a 15 GB data disk at /chunk1 and, after mounting, grant ownership of the directory to the hadoop user (see the sketch below).

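A minimal sketch of the mount step, to be run as root on each of CDHNode3-5; the device name /dev/sdb1 is only a placeholder and must be replaced with the actual data disk on your hosts:

mkfs.xfs /dev/sdb1                                        # format the data disk (placeholder device name)
mkdir -p /chunk1
mount /dev/sdb1 /chunk1
echo '/dev/sdb1 /chunk1 xfs defaults 0 0' >> /etc/fstab   # remount automatically at boot
chown -R hadoop:hadoop /chunk1                            # grant the mount point to the hadoop user
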
Note: All of the operations above must be performed as root. The operating system environment is now ready; the actual installation starts below, and unless otherwise noted all later operations are performed as the hadoop user.

Install JDK 1.8

The JDK must be installed on every node, in the same way on each node. Extract jdk-8u172-linux-x64.tar.gz:

tar zxvf jdk-8u172-linux-x64.tar.gz
mkdir -p /home/hadoop/app
mv jdk1.8.0_172 /home/hadoop/app/jdk    # the tarball extracts to a jdk1.8.0_172 directory
rm -f jdk-8u172-linux-x64.tar.gz

Configure environment variables: vi ~/.bash_profile and add the following:

#java
export JAVA_HOME=/home/hadoop/app/jdk
export CLASSPATH=.:$JAVA_HOME/lib:$CLASSPATH
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin

Load the environment variables

. ~/.bash_profile

Verify the installation with java -version:

java version "1.8.0_172"
Java(TM) SE Runtime Environment (build 1.8.0_172-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.172-b11, mixed mode)

If you see output like the above, the installation succeeded.

Install ZooKeeper

Install on CDHNode1 first.

Extract zookeeper-3.4.5-cdh5.14.2.tar.gz:

tar zxvf zookeeper-3.4.5-cdh5.14.2.tar.gz
mv zookeeper-3.4.5-cdh5.14.2 /home/hadoop/app/zookeeper
rm -f zookeeper-3.4.5-cdh5.14.2.tar.gz

Set environment variables: vi ~/.bash_profile and add the following:

#zk
export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
export PATH=$PATH:$ZOOKEEPER_HOME/bin 

Load the environment variables

. ~/.bash_profile

Add the configuration file: vi /home/hadoop/app/zookeeper/conf/zoo.cfg and add the following:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# data directory and transaction log directory
dataDir=/home/hadoop/data/zookeeper/zkdata
dataLogDir=/home/hadoop/data/zookeeper/zkdatalog
# the port at which the clients will connect
clientPort=2181
# server.<id>=<hostname>:<peer communication port>:<leader election port>
server.1=CDHNode1:2888:3888
server.2=CDHNode2:2888:3888
server.3=CDHNode3:2888:3888
# When the membership changes, just add or remove the corresponding server.N lines here
# (the configuration on every node must be updated), then start the new nodes or stop the removed ones.
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

Create the required directories

mkdir -p /home/hadoop/data/zookeeper/zkdata
mkdir -p /home/hadoop/data/zookeeper/zkdatalog
mkdir -p /home/hadoop/app/zookeeper/logs 

Create the myid file: vim /home/hadoop/data/zookeeper/zkdata/myid and add:

1

Note: This number comes from the line server.1=CDHNode1:2888:3888 in zoo.cfg (the digit after "server."), so CDHNode2 gets 2 and CDHNode3 gets 3.

Configure the log directory: vim /home/hadoop/app/zookeeper/libexec/zkEnv.sh and change the following parameters to:

ZOO_LOG_DIR="$ZOOKEEPER_HOME/logs"
ZOO_LOG4J_PROP="INFO,ROLLINGFILE"

Note: /home/hadoop/app/zookeeper/libexec/zkEnv.sh and /home/hadoop/app/zookeeper/bin/zkEnv.sh have identical content. The startup script /home/hadoop/app/zookeeper/bin/zkServer.sh reads /home/hadoop/app/zookeeper/libexec/zkEnv.sh first and only falls back to /home/hadoop/app/zookeeper/bin/zkEnv.sh when the former does not exist.

vim /home/hadoop/app/zookeeper/conf/log4j.properties and change the following parameters to:

zookeeper.root.logger=INFO, ROLLINGFILE
zookeeper.log.dir=/home/hadoop/app/zookeeper/logs
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender 

Copy ZooKeeper to CDHNode2-3

scp ~/.bash_profile CDHNode2:/home/hadoop
scp ~/.bash_profile CDHNode3:/home/hadoop
scp -pr /home/hadoop/app/zookeeper CDHNode2:/home/hadoop/app
scp -pr /home/hadoop/app/zookeeper CDHNode3:/home/hadoop/app
ssh CDHNode2 "mkdir -p /home/hadoop/data/zookeeper/zkdata;mkdir -p /home/hadoop/data/zookeeper/zkdatalog;mkdir -p /home/hadoop/app/zookeeper/logs"
ssh CDHNode2 "echo 2 > /home/hadoop/data/zookeeper/zkdata/myid"
ssh CDHNode3 "mkdir -p /home/hadoop/data/zookeeper/zkdata;mkdir -p /home/hadoop/data/zookeeper/zkdatalog;mkdir -p /home/hadoop/app/zookeeper/logs"
ssh CDHNode3 "echo 3 > /home/hadoop/data/zookeeper/zkdata/myid"

Start ZooKeeper on all 3 nodes

/home/hadoop/app/zookeeper/bin/zkServer.sh start

Check the node status

/home/hadoop/app/zookeeper/bin/zkServer.sh status

If one node is the leader and the other two are followers, ZooKeeper is installed successfully.
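
For reference, on a healthy node the status command prints roughly the following (Mode shows follower on the other two nodes; the output is illustrative and may differ slightly between builds):

JMX enabled by default
Using config: /home/hadoop/app/zookeeper/bin/../conf/zoo.cfg
Mode: leader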

Check the processes

jps

The QuorumPeerMain process is ZooKeeper.

Stop ZooKeeper

/home/hadoop/app/zookeeper/bin/zkServer.sh stop

Install Hadoop

Install on CDHNode1 first, then copy to the other nodes. Extract hadoop-2.6.0-cdh5.14.2.tar.gz:

tar zxvf hadoop-2.6.0-cdh5.14.2.tar.gz
mv hadoop-2.6.0-cdh5.14.2 /home/hadoop/app/hadoop
rm -f hadoop-2.6.0-cdh5.14.2.tar.gz

Set environment variables: vi ~/.bash_profile and add the following:

#hadoop
HADOOP_HOME=/home/hadoop/app/hadoop
PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HOME PATH 

Load the environment variables

. ~/.bash_profile

Configure HDFS

Edit /home/hadoop/app/hadoop/etc/hadoop/hadoop-env.sh and change the following:

export JAVA_HOME=/home/hadoop/app/jdk

Edit /home/hadoop/app/hadoop/etc/hadoop/core-site.xml:

<?xml version="1.0" encoding="UTF-8"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <!-- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. See accompanying LICENSE file. --> <!-- Put site-specific property overrides in this file. --> <configuration> <property> <name>fs.defaultFS</name> <value>hdfs://cluster1</value> </property> <!-- 这里的值指的是默认的HDFS路径 ,取名为cluster1 --> <property> <name>hadoop.tmp.dir</name> <value>/home/hadoop/data/tmp</value> </property> <!-- hadoop的临时目录,如果需要配置多个目录,需要逗号隔开,data目录需要我们自己创建 --> <property> <name>ha.zookeeper.quorum</name> <value>CDHNode1:2181,CDHNode2:2181,CDHNode3:2181</value> </property> <!-- 配置Zookeeper 管理HDFS --> </configuration> 

Edit /home/hadoop/app/hadoop/etc/hadoop/hdfs-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- Block replication factor of 3 -->
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/data/hdfs/name</value>
  </property>
  <!-- NameNode metadata directory; separate multiple directories with ',' -->
  <property>
    <name>dfs.data.dir</name>
    <value>/chunk1</value>
  </property>
  <!-- DataNode data directory; separate multiple directories with ',' -->
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
  <!-- HDFS permission checking disabled -->
  <property>
    <name>dfs.nameservices</name>
    <value>cluster1</value>
  </property>
  <!-- Nameservice; must match the value of fs.defaultFS. With NameNode HA there are two NameNodes, and cluster1 is the single entry point exposed to clients -->
  <property>
    <name>dfs.ha.namenodes.cluster1</name>
    <value>CDHNode1,CDHNode2</value>
  </property>
  <!-- NameNodes belonging to nameservice cluster1; these are logical names and may be chosen freely as long as they do not repeat -->
  <property>
    <name>dfs.namenode.rpc-address.cluster1.CDHNode1</name>
    <value>CDHNode1:9000</value>
  </property>
  <!-- CDHNode1 RPC address -->
  <property>
    <name>dfs.namenode.http-address.cluster1.CDHNode1</name>
    <value>CDHNode1:50070</value>
  </property>
  <!-- CDHNode1 HTTP address -->
  <property>
    <name>dfs.namenode.rpc-address.cluster1.CDHNode2</name>
    <value>CDHNode2:9000</value>
  </property>
  <!-- CDHNode2 RPC address -->
  <property>
    <name>dfs.namenode.http-address.cluster1.CDHNode2</name>
    <value>CDHNode2:50070</value>
  </property>
  <!-- CDHNode2 HTTP address -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Enable automatic failover -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://CDHNode1:8485;CDHNode2:8485;CDHNode3:8485;CDHNode4:8485;CDHNode5:8485/cluster1</value>
  </property>
  <!-- JournalNode quorum -->
  <property>
    <name>dfs.client.failover.proxy.provider.cluster1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Class responsible for performing the failover when cluster1 has a failure -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop/data/journaldata/jn</value>
  </property>
  <!-- Local directory where each JournalNode stores the shared NameNode edit log -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>shell(/bin/true)</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>10000</value>
  </property>
  <!-- Default fencing settings to guard against split-brain -->
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
  </property>
</configuration>

Edit /home/hadoop/app/hadoop/etc/hadoop/slaves:

CDHNode3
CDHNode4
CDHNode5

Configure YARN

Edit /home/hadoop/app/hadoop/etc/hadoop/mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- Run MapReduce on YARN; this is a difference from Hadoop 1 -->
</configuration>

Edit /home/hadoop/app/hadoop/etc/hadoop/yarn-site.xml:

<?xml version="1.0"?>
<!-- Site specific YARN configuration properties -->
<configuration>
  <property>
    <name>yarn.resourcemanager.connect.retry-interval.ms</name>
    <value>2000</value>
  </property>
  <!-- Connection retry interval -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- Enable HA -->
  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Enable automatic failover -->
  <property>
    <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yarn-rm-cluster</value>
  </property>
  <!-- Name the YARN cluster yarn-rm-cluster -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <!-- Name the ResourceManagers rm1 and rm2 -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>CDHNode1</value>
  </property>
  <!-- Hostname of ResourceManager rm1 -->
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>CDHNode2</value>
  </property>
  <!-- Hostname of ResourceManager rm2 -->
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <!-- Enable ResourceManager recovery -->
  <property>
    <name>yarn.resourcemanager.zk.state-store.address</name>
    <value>CDHNode1:2181,CDHNode2:2181,CDHNode3:2181</value>
  </property>
  <!-- ZooKeeper address -->
  <property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>CDHNode1:2181,CDHNode2:2181,CDHNode3:2181</value>
  </property>
  <!-- ZooKeeper address -->
  <property>
    <name>yarn.resourcemanager.address.rm1</name>
    <value>CDHNode1:8032</value>
  </property>
  <!-- rm1 port -->
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm1</name>
    <value>CDHNode1:8034</value>
  </property>
  <!-- rm1 scheduler port -->
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>CDHNode1:8088</value>
  </property>
  <!-- rm1 webapp port -->
  <property>
    <name>yarn.resourcemanager.address.rm2</name>
    <value>CDHNode2:8032</value>
  </property>
  <!-- rm2 port -->
  <property>
    <name>yarn.resourcemanager.scheduler.address.rm2</name>
    <value>CDHNode2:8034</value>
  </property>
  <!-- rm2 scheduler port -->
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>CDHNode2:8088</value>
  </property>
  <!-- rm2 webapp port -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <!-- Shuffle service required by MapReduce -->
</configuration>

Create the corresponding directories

mkdir -p /home/hadoop/data/tmp
mkdir -p /home/hadoop/data/hdfs/name
mkdir -p /home/hadoop/data/journaldata/jn
mkdir -p /home/hadoop/data/pid
touch /home/hadoop/app/hadoop/etc/hadoop/excludes

Copy Hadoop to CDHNode2-5

scp ~/.bash_profile CDHNode2:/home/hadoop
scp ~/.bash_profile CDHNode3:/home/hadoop
scp ~/.bash_profile CDHNode4:/home/hadoop
scp ~/.bash_profile CDHNode5:/home/hadoop

scp -pr /home/hadoop/app/hadoop CDHNode2:/home/hadoop/app
scp -pr /home/hadoop/app/hadoop CDHNode3:/home/hadoop/app
scp -pr /home/hadoop/app/hadoop CDHNode4:/home/hadoop/app
scp -pr /home/hadoop/app/hadoop CDHNode5:/home/hadoop/app
ssh CDHNode2 "mkdir -p /home/hadoop/data/tmp;mkdir -p /home/hadoop/data/hdfs/name;mkdir -p /home/hadoop/data/journaldata/jn;mkdir -p /home/hadoop/data/pid;touch /home/hadoop/app/hadoop/etc/hadoop/excludes"
ssh CDHNode3 "mkdir -p /home/hadoop/data/tmp;mkdir -p /home/hadoop/data/hdfs/name;mkdir -p /home/hadoop/data/journaldata/jn;mkdir -p /home/hadoop/data/pid;touch /home/hadoop/app/hadoop/etc/hadoop/excludes"
ssh CDHNode4 "mkdir -p /home/hadoop/data/tmp;mkdir -p /home/hadoop/data/hdfs/name;mkdir -p /home/hadoop/data/journaldata/jn;mkdir -p /home/hadoop/data/pid;touch /home/hadoop/app/hadoop/etc/hadoop/excludes"
ssh CDHNode5 "mkdir -p /home/hadoop/data/tmp;mkdir -p /home/hadoop/data/hdfs/name;mkdir -p /home/hadoop/data/journaldata/jn;mkdir -p /home/hadoop/data/pid;touch /home/hadoop/app/hadoop/etc/hadoop/excludes"

Cluster initialization

Start ZooKeeper on CDHNode1-3

/home/hadoop/app/zookeeper/bin/zkServer.sh start

Start the JournalNode on CDHNode1-5

/home/hadoop/app/hadoop/sbin/hadoop-daemon.sh start journalnode

Run jps; if a JournalNode process is listed, it started normally.
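
For example, on a node that also runs ZooKeeper the jps output might look like this (the process IDs are illustrative):

2101 QuorumPeerMain
2345 JournalNode
2412 Jps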

First, run the format steps on the primary node CDHNode1

/home/hadoop/app/hadoop/bin/hdfs namenode -format    # format the namenode
/home/hadoop/app/hadoop/bin/hdfs zkfc -formatZK      # format the HA state in ZooKeeper
/home/hadoop/app/hadoop/bin/hdfs namenode            # start the namenode (stays in the foreground)

Note: After the last command, the process stays in the foreground and waits. Only after the next step has been completed on CDHNode2 should you press Ctrl+C to stop this namenode process.

On CDHNode2, synchronize the namenode metadata

/home/hadoop/app/hadoop/bin/hdfs namenode -bootstrapStandby    # copy the metadata from the active namenode to the standby

After the synchronization completes, press Ctrl+C on CDHNode1 to stop the namenode process.

Then stop the JournalNode on all nodes

/home/hadoop/app/hadoop/sbin/hadoop-daemon.sh stop journalnode

Start HDFS

If the steps above completed without problems, all HDFS-related processes can be started with a single script from any host in the cluster; running it on the primary namenode is generally recommended.

/home/hadoop/app/hadoop/sbin/start-dfs.sh

Note: start-dfs.sh works by logging in to each node via passwordless SSH and starting the relevant processes, so you may hit the host-key confirmation prompt on the first SSH connection to each node; be aware of this.
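
One way to avoid those interactive prompts is to record every host key in advance on the node that runs the script; a minimal sketch, run as the hadoop user with the hostnames used in this cluster:

for node in CDHNode1 CDHNode2 CDHNode3 CDHNode4 CDHNode5; do
  ssh-keyscan -H ${node} >> ~/.ssh/known_hosts    # append each node's host key non-interactively
done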

Check the HDFS namenode status through the web UI

http://CDHNode1:50070 http://CDHNode2:50070

Check the HDFS datanode status through the web UI

http://cdhnode3:50075 http://cdhnode4:50075 http://cdhnode5:50075

Test uploading a file to HDFS. Create a local file test.txt (vi test.txt) with the following content:

hadoop CDH
hello world
CDH hadoop

hdfs dfs -mkdir /test          # create a directory on HDFS
hdfs dfs -put test.txt /test   # upload the file to HDFS
hdfs dfs -ls /test             # check whether test.txt was uploaded successfully

If these operations complete without problems, HDFS is configured correctly.
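
To double-check the upload, the file can be read back from HDFS (same path as above):

hdfs dfs -cat /test/test.txt   # should print the three lines created locally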

Start YARN

First, run on CDHNode1:

/home/hadoop/app/hadoop/sbin/start-yarn.sh

Then run on CDHNode2:

/home/hadoop/app/hadoop/sbin/yarn-daemon.sh start resourcemanager

Check the YARN ResourceManager status through the web UI

http://CDHNode1:8088 http://CDHNode2:8088

Check the YARN NodeManager status through the web UI

http://cdhnode3:8042/node http://cdhnode4:8042/node http://cdhnode5:8042/node

Check the ResourceManager HA states

/home/hadoop/app/hadoop/bin/yarn rmadmin -getServiceState rm1
/home/hadoop/app/hadoop/bin/yarn rmadmin -getServiceState rm2 

The node reported as active is the primary; the one reported as standby is the backup.
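
For reference, a typical result looks like this (which ResourceManager ends up active depends on startup order):

/home/hadoop/app/hadoop/bin/yarn rmadmin -getServiceState rm1    # prints: active
/home/hadoop/app/hadoop/bin/yarn rmadmin -getServiceState rm2    # prints: standby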

WordCount example test

hadoop jar /home/hadoop/app/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.14.2.jar wordcount /test/test.txt /test/out 

If this runs without errors, YARN is installed successfully.
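
The job result can be inspected afterwards; the output path matches the one passed to the job above:

hdfs dfs -cat /test/out/part-r-00000    # per-word counts produced by the wordcount job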

Whole-cluster start/stop order

Startup

Start ZooKeeper on CDHNode1-3

/home/hadoop/app/zookeeper/bin/zkServer.sh start

Start HDFS

/home/hadoop/app/hadoop/sbin/start-dfs.sh

Start YARN

# First, on CDHNode1:
/home/hadoop/app/hadoop/sbin/start-yarn.sh
# Then, on CDHNode2:
/home/hadoop/app/hadoop/sbin/yarn-daemon.sh start resourcemanager

Shutdown

Stop YARN

# First, on CDHNode2:
/home/hadoop/app/hadoop/sbin/yarn-daemon.sh stop resourcemanager
# Then, on CDHNode1:
/home/hadoop/app/hadoop/sbin/stop-yarn.sh

Stop HDFS

/home/hadoop/app/hadoop/sbin/stop-dfs.sh

Stop ZooKeeper on CDHNode1-3

/home/hadoop/app/zookeeper/bin/zkServer.sh stop

To be continued: HBase and Hive sections will be added later.

Reposted from www.cnblogs.com/leffss/p/9184171.html