Setting up a fully distributed HBase cluster

Today I will walk you through building a fully distributed HBase cluster:

1. Confirm the environment

HBase stores its data on an HDFS cluster, so you need a working HDFS cluster first. We also use ZooKeeper to coordinate the HBase cluster, so a ZooKeeper cluster must be installed on the machines as well.
Hadoop cluster (the original screenshots show the jps output on each node): there are three machines in the local environment, master, slave1, and slave2, which host the HDFS NameNode and DataNodes; QuorumPeerMain is the ZooKeeper Java process. After confirming that the environment above is working, we can install the HBase cluster.
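If you want to reproduce that check, here is a minimal sketch (which daemon runs on which node depends on how your HDFS roles are laid out, so treat the exact process list as an assumption):

# on master
jps
# expected to include:
#   NameNode
#   QuorumPeerMain

# on slave1 and slave2
jps
# expected to include:
#   DataNode
#   QuorumPeerMain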

2. Upload the HBase installation package

Note that because HBase depends on the Hadoop cluster, the HBase version must match the Hadoop version. My local Hadoop is version 2.7.3 (you can check it with hadoop version).

Following the official documentation at http://hbase.apache.org/book.html#java, check the compatibility matrix between HBase and Hadoop versions.
Based on that, the HBase version I downloaded is 2.1.8; upload it to the server.
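If you prefer to download the package directly on the server instead of uploading it, a minimal sketch follows (the archive mirror URL and the /home/hbase target directory are assumptions; adjust them to your environment):

cd /home/hbase
wget https://archive.apache.org/dist/hbase/2.1.8/hbase-2.1.8-bin.tar.gz
# or, from your workstation, upload the tarball you already downloaded:
scp hbase-2.1.8-bin.tar.gz root@master:/home/hbase/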

3. Unzip

tar -zxvf hbase-2.1.8-bin.tar.gz
My folder is /home/hbase; after extracting, it looks as sketched below.
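A rough sketch of the resulting layout (the exact listing on your machine may differ):

cd /home/hbase
ls
# hbase-2.1.8  hbase-2.1.8-bin.tar.gz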

4. Modify the configuration files

Enter the conf directory under the HBase installation:

  1. Modify hbase-env.sh

Two things need to be changed: the installation location of the JDK, and the setting that makes HBase use the external ZooKeeper instead of its bundled one. You can run echo $JAVA_HOME to find the JDK installation location; the relevant lines are sketched below.
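A minimal sketch of the two lines to change in hbase-env.sh (the JDK path below is an assumption; use whatever echo $JAVA_HOME prints on your machine):

# point HBase at the local JDK (assumed path; replace with your own)
export JAVA_HOME=/usr/local/jdk1.8.0_144
# do not let HBase manage its own ZooKeeper; we use the external cluster
export HBASE_MANAGES_ZK=false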

  2. Modify hbase-site.xml
<!-- The path where HBase stores data on HDFS -->
<property>
		<name>hbase.rootdir</name>
		<value>hdfs://master:9000/hbase</value>
</property>
<!-- Run HBase in distributed mode -->
<property>
		<name>hbase.cluster.distributed</name>
		<value>true</value>
</property>
<!-- ZooKeeper quorum addresses, separated by commas -->
<property>
		<name>hbase.zookeeper.quorum</name>
		<value>master:2181,slave1:2181,slave2:2181</value>
</property>

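Note that these property blocks must sit inside the configuration element of hbase-site.xml; a minimal skeleton of the file:

<configuration>
		<!-- the three property blocks shown above go here -->
</configuration>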

  3. Modify regionservers

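The regionservers file lists the hosts that will run a RegionServer, one hostname per line. A minimal sketch, assuming the RegionServers run on the two slave nodes (add master as well if you also want a RegionServer there):

slave1
slave2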

  4. Note: you also need to copy Hadoop's hdfs-site.xml and core-site.xml into HBase's conf directory

First, find where these files live. The path is /home/hadoop/hadoop-2.7.3/etc/hadoop, and from there we copy the two files over:

cp /home/hadoop/hadoop-2.7.3/etc/hadoop/hdfs-site.xml ./


cp /home/hadoop/hadoop-2.7.3/etc/hadoop/core-site.xml ./


  5. Finally, copy the configured HBase to the other two servers:
scp -r hbase-2.1.8 slave1:/home/hbase/
scp -r hbase-2.1.8 slave2:/home/hbase/

5. Start HBase

Enter the bin directory on the master node and run

./start-hbase.sh

This starts HBase. At this point some jar package conflicts will be reported:
We need to modify or delete the duplicate jar; my approach here is to rename the jar shipped with HBase:

cd lib/client-facing-thirdparty/
mv slf4j-log4j12-1.7.25.jar slf4j-log4j12-1.7.25.jar.bak

Finally, use the stop script in the bin directory to shut down HBase and then restart it:

./stop-hbase.sh
./start-hbase.sh
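
To verify that the daemons came up, a quick jps check on each node helps (a sketch; which node runs HRegionServer depends on your regionservers file):

# on master
jps
# should now also include:
#   HMaster

# on slave1 / slave2
jps
# should now also include:
#   HRegionServer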

Successful start!
Since my version is 2.1.8, the web UI port is 16010:

http://master:16010/master-status



Origin blog.csdn.net/thetimelyrain/article/details/104039924