Building an HBase Cluster (Fully Distributed)

Overview

ZooKeeper and Hadoop were covered in the previous two articles; this article installs HBase on top of that existing cluster. The nodes are laid out as follows (the HBase components are the new additions in this installation; they were highlighted in red in the original post):

| Machine | Installed software | Processes |
| --- | --- | --- |
| focuson1 | zookeeper; hadoop NameNode; hadoop DataNode; hbase master; hbase regionserver | JournalNode; DataNode; QuorumPeerMain; NameNode; NodeManager; DFSZKFailoverController; HMaster; HRegionServer |
| focuson2 | zookeeper; hadoop NameNode; hadoop DataNode; yarn; hbase master; hbase regionserver | NodeManager; ResourceManager; JournalNode; DataNode; QuorumPeerMain; NameNode; DFSZKFailoverController; HMaster; HRegionServer |
| focuson3 | zookeeper; hadoop DataNode; yarn; hbase regionserver | NodeManager; ResourceManager; JournalNode; DataNode; QuorumPeerMain; HRegionServer |

Note: see the earlier posts for the ZooKeeper and Hadoop setup:

zookeeper build

hadoop build

Installation steps

1. Upload the package to the user's home directory and extract it

cd /usr/local/src/
mkdir hbase
cd hbase
mv ~/hbase-1.4.3.tar.gz .
tar -xvf hbase-1.4.3.tar.gz
rm -f hbase-1.4.3.tar.gz
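The steps above can be wrapped in a small reusable function. The sketch below exercises it against a dummy tarball under /tmp (a stand-in so the demo runs anywhere); for the real install, the tarball is hbase-1.4.3.tar.gz in your home directory and the destination is /usr/local/src/hbase.

```shell
# Sketch of the unpack step as a function. The /tmp paths and the dummy
# tarball are stand-ins for this demo only.
unpack_into() {
  local tarball=$1 dest=$2
  mkdir -p "$dest"
  tar -xf "$tarball" -C "$dest"   # extract into the target directory
  rm -f "$tarball"                # drop the archive once extracted
}

# demo: build a dummy hbase-1.4.3 tarball, then unpack it
mkdir -p /tmp/hb-demo/src/hbase-1.4.3
touch /tmp/hb-demo/src/hbase-1.4.3/README.txt
tar -cf /tmp/hb-demo/hbase-1.4.3.tar -C /tmp/hb-demo/src hbase-1.4.3
unpack_into /tmp/hb-demo/hbase-1.4.3.tar /tmp/hb-demo/hbase
ls /tmp/hb-demo/hbase/hbase-1.4.3
```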

2. Configuration files

Configuration file one: $HBASE_HOME/conf/hbase-env.sh

# By default HBase manages its own ZooKeeper (and you would have to point it
# at a ZooKeeper config file); set this to false so HBase uses the external
# ZooKeeper cluster we built earlier.
export HBASE_MANAGES_ZK=false
export JAVA_HOME=/usr/local/src/java/jdk1.7.0_51
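Since these edits must be repeated on every node, an idempotent snippet helps. This is only a sketch: it works on a scratch copy under /tmp, and appends each export only if it is not already present; point ENV_FILE at the real $HBASE_HOME/conf/hbase-env.sh on each node.

```shell
# Idempotently apply the two settings. ENV_FILE is a scratch copy in /tmp
# so the sketch is safe to run; the real file is $HBASE_HOME/conf/hbase-env.sh.
ENV_FILE=/tmp/hbase-env-demo.sh
touch "$ENV_FILE"
grep -q '^export HBASE_MANAGES_ZK=' "$ENV_FILE" || \
  echo 'export HBASE_MANAGES_ZK=false' >> "$ENV_FILE"
grep -q '^export JAVA_HOME=' "$ENV_FILE" || \
  echo 'export JAVA_HOME=/usr/local/src/java/jdk1.7.0_51' >> "$ENV_FILE"
grep '^export' "$ENV_FILE"   # show what is now set
```

Because each line is guarded by a grep, running the snippet twice does not duplicate the exports.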

Configuration file two: $HBASE_HOME/conf/hbase-site.xml

<configuration>
<property>
    <name>hbase.rootdir</name>
<!-- Point at the ns1 nameservice, which resolves to both NameNodes for high
     availability. If you point at a single NameNode and it happens to be in
     standby, an error is reported. No port is needed here. -->
 <value>hdfs://ns1/hbase</value>
  </property>
<!--Specify as distributed-->
 <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
<!--Specify two masters, registered with ZooKeeper, for high availability -->
 <property>
    <name>hbase.master</name>
    <value>focuson1:60000,focuson2:60000</value>
  </property>
<!--zookeeper cluster-->
 <property>
    <name>hbase.zookeeper.quorum</name>
    <value>focuson1:2181,focuson2:2181,focuson3:2181</value>
  </property>
<!--Without this setting the web UI cannot be reached-->
 <property>
    <name>hbase.master.info.port</name>
    <value>60010</value>
  </property>
</configuration>
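A quick way to sanity-check the file after editing is to pull values back out. The sketch below writes a minimal copy of the configuration to /tmp (so it runs standalone) and extracts one property with sed, which is adequate for flat one-line name/value entries; check the real $HBASE_HOME/conf/hbase-site.xml the same way.

```shell
# Write a minimal copy of the config to /tmp and read a property back.
# The /tmp path is only for this demo.
CONF=/tmp/hbase-site-demo.xml
cat > "$CONF" <<'EOF'
<configuration>
  <property><name>hbase.rootdir</name><value>hdfs://ns1/hbase</value></property>
  <property><name>hbase.cluster.distributed</name><value>true</value></property>
  <property><name>hbase.zookeeper.quorum</name><value>focuson1:2181,focuson2:2181,focuson3:2181</value></property>
</configuration>
EOF
# crude extraction; fine for one-line <property> entries like these
rootdir=$(sed -n 's:.*<name>hbase.rootdir</name><value>\(.*\)</value>.*:\1:p' "$CONF")
echo "rootdir=$rootdir"   # prints rootdir=hdfs://ns1/hbase
```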

Configuration file three: copy Hadoop's hdfs-site.xml into the HBase conf directory

cp /usr/local/src/hadoop/hadoop-2.6.0/etc/hadoop/hdfs-site.xml /usr/local/src/hbase/hbase-1.4.3/conf/
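This copy matters because HBase must be able to resolve the ns1 nameservice from hdfs-site.xml. The sketch below writes a minimal hdfs-site.xml to /tmp as a stand-in and checks for the dfs.nameservices entry; run the same grep against the copied file in the HBase conf directory.

```shell
# Demo: verify that hdfs-site.xml defines the ns1 nameservice. The /tmp file
# is a minimal stand-in; the real file is
# /usr/local/src/hbase/hbase-1.4.3/conf/hdfs-site.xml after the copy.
HDFS_SITE=/tmp/hdfs-site-demo.xml
cat > "$HDFS_SITE" <<'EOF'
<configuration>
  <property><name>dfs.nameservices</name><value>ns1</value></property>
</configuration>
EOF
# -A1 also covers files where <name> and <value> are on separate lines
if grep -A1 '<name>dfs.nameservices</name>' "$HDFS_SITE" | grep -q 'ns1'; then
  echo "ns1 nameservice defined"
else
  echo "ns1 missing: expect UnknownHostException" >&2
fi
```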

3. Start

Execute on focuson1 (this starts focuson1's master and the region servers on all three nodes):

[root@focuson1 hbase-1.4.3]# ./bin/start-hbase.sh
running master, logging to /usr/local/src/hbase/hbase-1.4.3/logs/hbase-root-master-focuson1.out
focuson3: running regionserver, logging to /usr/local/src/hbase/hbase-1.4.3/bin/../logs/hbase-root-regionserver-focuson3.out
focuson1: running regionserver, logging to /usr/local/src/hbase/hbase-1.4.3/bin/../logs/hbase-root-regionserver-focuson1.out
focuson2: running regionserver, logging to /usr/local/src/hbase/hbase-1.4.3/bin/../logs/hbase-root-regionserver-focuson2.out

Execute on focuson2 (only the second master starts; the region servers are already running):

[root@focuson2 hbase-1.4.3]# ./bin/start-hbase.sh
running master, logging to /usr/local/src/hbase/hbase-1.4.3/bin/../logs/hbase-root-master-focuson2.out
focuson3: regionserver running as process 6705. Stop it first.
focuson1: regionserver running as process 17806. Stop it first.
focuson2: regionserver running as process 13949. Stop it first.

4. Verify

Verification 1: focuson1's master is active and focuson2's is standby.

Kill the HMaster process on focuson1 and focuson2's master becomes active (screenshot omitted).

Verification 2: HBase runs normally no matter which NameNode is currently active.

5. A few small problems came up when connecting to HDFS.

Question 1. Before the ns1 nameservice was used, hbase.rootdir pointed at focuson1 directly; whenever focuson1 was in the standby state, the following error was reported:

2018-05-02 00:01:20,746 FATAL [focuson1:16000.activeMasterManager] master.HMaster: Failed to become active master
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1719)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1350)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:4132)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:838)
	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:821)
	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)

Question 2. After configuring the ns1 nameservice for high availability, HMaster started normally but HRegionServer failed with the following error:

2018-05-02 01:36:41,083 ERROR [main] regionserver.HRegionServerCommandLine: Region server exiting
java.lang.RuntimeException: Failed construction of Regionserver: class org.apache.hadoop.hbase.regionserver.HRegionServer
        at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2812)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.start(HRegionServerCommandLine.java:64)
        at org.apache.hadoop.hbase.regionserver.HRegionServerCommandLine.run(HRegionServerCommandLine.java:87)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:127)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.main(HRegionServer.java:2827)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.constructRegionServer(HRegionServer.java:2810)
        ... 5 more
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: ns1
        at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:378)
        at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:320)
        at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:176)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:678)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
        at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
        at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:1037)
        at org.apache.hadoop.hbase.util.FSUtils.isValidWALRootDir(FSUtils.java:1080)
        at org.apache.hadoop.hbase.util.FSUtils.getWALRootDir(FSUtils.java:1062)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeFileSystem(HRegionServer.java:659)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:602)
        ... 10 more
Caused by: java.net.UnknownHostException: ns1
        ... 27 more

Copying hdfs-site.xml into the HBase conf directory (configuration file three above) solves this: HBase can then resolve the ns1 nameservice to the actual NameNodes.
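For reference, the entries in hdfs-site.xml that make ns1 resolvable look roughly like the fragment below. This is a sketch: the nn1/nn2 logical names and the 9000 RPC port are assumptions here; the actual values come from the Hadoop HA setup in the earlier article.

```xml
<!-- Sketch of the hdfs-site.xml entries HBase needs to resolve ns1;
     nn1/nn2 and port 9000 are illustrative values. -->
<property>
  <name>dfs.nameservices</name>
  <value>ns1</value>
</property>
<property>
  <name>dfs.ha.namenodes.ns1</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1.nn1</name>
  <value>focuson1:9000</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ns1.nn2</name>
  <value>focuson2:9000</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.ns1</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```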

---

Done!