Today I will show you how to build a fully distributed HBase cluster:
1. Environment check:
HBase stores its data on HDFS, so you need a running HDFS cluster first. HBase also relies on ZooKeeper to coordinate the cluster, so a ZooKeeper cluster must be installed on the machines as well.
As shown in the figure:
hadoop cluster:
The local environment has three machines: master, slave1, and slave2, which serve as the NameNode and DataNodes of HDFS; QuorumPeerMain is the ZooKeeper Java process. Once the environment above checks out, we can install the HBase cluster.
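As a quick way to confirm the environment, run jps on each node and check that the HDFS and ZooKeeper daemons are up before installing HBase. A sketch of what the output might look like on the master node (process IDs are illustrative, and which nodes run NameNode vs. DataNode depends on your layout):

```
jps
# sample output on master (PIDs will differ):
2901 NameNode
3144 QuorumPeerMain
```

The slave nodes should similarly show a DataNode and a QuorumPeerMain process.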
2. Upload the HBase installation package
Note that because HBase runs on top of the Hadoop cluster, the HBase version must be compatible with the Hadoop version. My local Hadoop is 2.7.3 (run hadoop version to check yours).
Consult the compatibility matrix on the official site: http://hbase.apache.org/book.html#java
Based on that, I downloaded HBase 2.1.8 and uploaded it to the server.
3. Unpack
tar -zxvf hbase-2.1.8-bin.tar.gz
My install directory is /home/hbase; the result looks like this:
4. Modify the configuration files
Enter the conf directory:
- Modify hbase-env.sh
Two things need to change here: set JAVA_HOME to the JDK installation location (run echo $JAVA_HOME to find it), and tell HBase to use the external ZooKeeper cluster instead of its own embedded one.
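A minimal sketch of the two lines to set in conf/hbase-env.sh (the JDK path below is an assumption; substitute your own echo $JAVA_HOME output):

```sh
# Point HBase at the local JDK (path is illustrative; use your own)
export JAVA_HOME=/usr/local/jdk1.8.0_181
# Do not let HBase manage ZooKeeper; we run our own external quorum
export HBASE_MANAGES_ZK=false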
- Modify hbase-site.xml
<!-- Path where HBase stores data on HDFS -->
<property>
<name>hbase.rootdir</name>
<value>hdfs://master:9000/hbase</value>
</property>
<!-- Run HBase in fully distributed mode -->
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<!-- ZooKeeper quorum addresses; separate multiple hosts with "," -->
<property>
<name>hbase.zookeeper.quorum</name>
<value>master:2181,slave1:2181,slave2:2181</value>
</property>
- Modify regionservers
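The regionservers file lists one hostname per line for the nodes that should run a RegionServer. Assuming all three machines host regions (adjust to your own layout), conf/regionservers would contain:

```
master
slave1
slave2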
- Note: you also need to copy Hadoop's hdfs-site.xml and core-site.xml into hbase/conf.
First locate these files; in my setup the path is /home/hadoop/hadoop-2.7.3/etc/hadoop.
Then copy the two files over (run this from hbase/conf):
cp /home/hadoop/hadoop-2.7.3/etc/hadoop/hdfs-site.xml ./
cp /home/hadoop/hadoop-2.7.3/etc/hadoop/core-site.xml ./
- Finally, copy the configured HBase directory to the other two servers:
scp -r hbase-2.1.8 slave1:/home/hbase/
scp -r hbase-2.1.8 slave2:/home/hbase/
5. Start HBase
Enter the bin directory on the master node and run
./start-hbase.sh
to start HBase. At this point a jar conflict will be reported (Hadoop and HBase each ship an slf4j-log4j binding):
We need to remove or rename the duplicate jar; my approach is to rename the one shipped with HBase:
cd lib/client-facing-thirdparty/
mv slf4j-log4j12-1.7.25.jar slf4j-log4j12-1.7.25.jar.bak
Finally, use the stop script in the bin directory to shut HBase down, then start it again:
./stop-hbase.sh
./start-hbase.sh
HBase started successfully!
Since my version is 2.1.8, the web UI port is 16010:
http://master:16010/master-status
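Besides the web UI, you can sanity-check the cluster with jps again: after a successful start, the master node should now additionally show an HMaster process, and every node listed in regionservers should show an HRegionServer. A sketch on the master node (process IDs are illustrative):

```
jps
# sample output after starting HBase (PIDs will differ):
2901 NameNode
3144 QuorumPeerMain
4210 HMaster
4378 HRegionServer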