Overview
ZooKeeper and Hadoop were covered in the first two articles. This post installs HBase on top of that cluster. The nodes are laid out as follows (the HBase master and regionserver entries are what this article installs):
| Machine | Installed software | Processes |
| --- | --- | --- |
| focuson1 | zookeeper; hadoop NameNode; hadoop DataNode; hbase master; hbase regionserver | JournalNode; DataNode; QuorumPeerMain; NameNode; NodeManager; DFSZKFailoverController; HMaster; HRegionServer |
| focuson2 | zookeeper; hadoop NameNode; hadoop DataNode; yarn; hbase master; hbase regionserver | NodeManager; ResourceManager; JournalNode; DataNode; QuorumPeerMain; NameNode; DFSZKFailoverController; HMaster; HRegionServer |
| focuson3 | zookeeper; hadoop DataNode; yarn; hbase regionserver | NodeManager; ResourceManager; JournalNode; DataNode; QuorumPeerMain; HRegionServer |
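A quick way to confirm a node matches the table is to compare `jps` output against the expected process list. The following helper is a sketch that is not part of the original guide; the `EXPECTED` list below is focuson1's row from the table, and `check_processes` is a hypothetical name:

```shell
# Expected JVM processes on focuson1, taken from the table above.
EXPECTED="QuorumPeerMain NameNode DataNode JournalNode NodeManager DFSZKFailoverController HMaster HRegionServer"

check_processes() {
  # $1 = jps output, one "pid ProcessName" per line
  missing=""
  for p in $EXPECTED; do
    # grep -w so e.g. "NameNode" does not match "SecondaryNameNode"
    printf '%s\n' "$1" | grep -qw "$p" || missing="$missing $p"
  done
  if [ -z "$missing" ]; then
    echo "OK"
  else
    echo "missing:$missing"
  fi
}

# On the node itself you would run:  check_processes "$(jps)"
```

Adjust `EXPECTED` per node (focuson3, for example, has no NameNode or HMaster).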
Note: for the ZooKeeper and Hadoop setup, see the earlier articles.
Installation steps
1. Upload the tarball to the user's home directory, then unpack it:

```shell
cd /usr/local/src/
mkdir hbase
cd hbase
mv ~/hbase-1.4.3.tar.gz .
tar -xvf hbase-1.4.3.tar.gz
rm -f hbase-1.4.3.tar.gz
```
2. Configuration

Config file 1: $HBASE_HOME/conf/hbase-env.sh

```shell
# By default HBase manages its own ZooKeeper (and would need a ZooKeeper
# config path). Set this to false so HBase connects to the ZooKeeper
# cluster we built ourselves.
export HBASE_MANAGES_ZK=false
export JAVA_HOME=/usr/local/src/java/jdk1.7.0_51
```
Config file 2: $HBASE_HOME/conf/hbase-site.xml

```xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <!-- Point at the ns1 nameservice, which resolves to both NameNodes in the
         Hadoop HA cluster. Hard-coding a single NameNode fails whenever that
         node happens to be standby. No port is needed here. -->
    <value>hdfs://ns1/hbase</value>
  </property>
  <!-- Run in distributed mode -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- Two masters, registered with ZooKeeper for high availability -->
  <property>
    <name>hbase.master</name>
    <value>focuson1:60000,focuson2:60000</value>
  </property>
  <!-- The ZooKeeper cluster -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>focuson1:2181,focuson2:2181,focuson3:2181</value>
  </property>
  <!-- Required; without it the web UI is unreachable -->
  <property>
    <name>hbase.master.info.port</name>
    <value>60010</value>
  </property>
</configuration>
```
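Before starting the cluster it can be worth sanity-checking that the values above made it into the file as intended. This is a rough sketch, not part of the original guide; `get_property` is a hypothetical helper and the awk-based parsing only handles simple `<name>`/`<value>` pairs like the ones in this file:

```shell
# Pull one property value out of an hbase-site.xml-style file.
get_property() {
  # $1 = path to the XML file, $2 = property name
  awk -v name="$2" 'BEGIN { RS = "</property>" }
    $0 ~ "<name>" name "</name>" {
      if (match($0, /<value>[^<]*<\/value>/))
        # strip the <value> and </value> wrappers (7 and 8 chars)
        print substr($0, RSTART + 7, RLENGTH - 15)
    }' "$1"
}

# Example (path assumed from this guide):
#   get_property /usr/local/src/hbase/hbase-1.4.3/conf/hbase-site.xml hbase.rootdir
# should print hdfs://ns1/hbase
```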
Config step 3: copy Hadoop's hdfs-site.xml into the HBase conf directory:

```shell
cp /usr/local/src/hadoop/hadoop-2.6.0/etc/hadoop/hdfs-site.xml /usr/local/src/hbase/hbase-1.4.3/conf/
```
3. Startup

Run on focuson1 (starts focuson1's master plus all three regionservers):

```
[root@focuson1 hbase-1.4.3]# ./bin/start-hbase.sh
running master, logging to /usr/local/src/hbase/hbase-1.4.3/logs/hbase-root-master-focuson1.out
focuson3: running regionserver, logging to /usr/local/src/hbase/hbase-1.4.3/bin/../logs/hbase-root-regionserver-focuson3.out
focuson1: running regionserver, logging to /usr/local/src/hbase/hbase-1.4.3/bin/../logs/hbase-root-regionserver-focuson1.out
focuson2: running regionserver, logging to /usr/local/src/hbase/hbase-1.4.3/bin/../logs/hbase-root-regionserver-focuson2.out
```

Then run on focuson2 (starts only the second master; the regionservers are already running and are not started again):

```
[root@focuson2 hbase-1.4.3]# ./bin/start-hbase.sh
running master, logging to /usr/local/src/hbase/hbase-1.4.3/bin/../logs/hbase-root-master-focuson2.out
focuson3: regionserver running as process 6705. Stop it first.
focuson1: regionserver running as process 17806. Stop it first.
focuson2: regionserver running as process 13949. Stop it first.
```
4. Verification

Verification 1: focuson1's master is active and focuson2's is standby. After killing the HMaster process on focuson1, focuson2's master becomes active (screenshots omitted).

Verification 2: HBase keeps running normally no matter which NameNode is currently active.
5. A few problems encountered when connecting to HDFS

Problem 1: if hbase.rootdir points at a single NameNode instead of the HA nameservice, then whenever that node (here focuson1) happens to be in standby state, the master fails with:

```
2018-05-02 00:01:20,746 FATAL [focuson1:16000.activeMasterManager] master.HMaster: Failed to become active master
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1719)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1350)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4132)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:838)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:821)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
```
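The fix is the nameservice-based `hbase.rootdir` already used in config file 2. For contrast, a sketch of the two variants (the `9000` port in the broken variant is only illustrative; use whatever RPC port your NameNode actually exposes):

```xml
<!-- Works with HA: ns1 resolves to whichever NameNode is active -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://ns1/hbase</value>
</property>

<!-- Triggers the error above whenever focuson1 is standby: -->
<!--
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://focuson1:9000/hbase</value>
</property>
-->
```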
Problem 2: once HA is configured, the HMaster starts fine but the HRegionServers fail with:

```
2018-05-02 01:36:41,083 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
    at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2812)
    at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:64)
    at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:127)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2827)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2810)
    ... 5 more
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: ns1
    at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:378)
    at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:320)
    at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:678)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
    at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:1037)
    at org.apache.hadoop.hbase.util.FSUtils.isValidWALRootDir(FSUtils.java:1080)
    at org.apache.hadoop.hbase.util.FSUtils.getWALRootDir(FSUtils.java:1062)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeFileSystem(HRegionServer.java:659)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:602)
    ... 10 more
Caused by: java.net.UnknownHostException: ns1
    ... 27 more
```
Copying hdfs-site.xml into the HBase conf directory, as in config step 3 above, solves this completely.
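The copy works because hdfs-site.xml is what defines the ns1 nameservice for the HDFS client embedded in HBase; without it, "ns1" is treated as a plain hostname and fails DNS lookup. Roughly, the relevant entries look like this (the nn1/nn2 logical names and the 9000 ports are assumptions based on a typical HA setup; check your own hdfs-site.xml):

```xml
<property>
  <name>dfs.nameservices</name>
  <value>ns1</value>
</property>
<property>
  <name>dfs.ha.namenodes.ns1</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1.nn1</name>
  <value>focuson1:9000</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1.nn2</name>
  <value>focuson2:9000</value>
</property>
<!-- Lets HDFS clients locate the currently active NameNode behind ns1 -->
<property>
  <name>dfs.client.failover.proxy.provider.ns1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```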
---
Done!