HBase installation and deployment (very detailed)

One: Before installing, make sure ZooKeeper and Hadoop are already installed, then start the ZooKeeper and Hadoop clusters. To start the ZooKeeper cluster, execute bin/zkServer.sh start in the ZooKeeper installation directory on each node; then, in the Hadoop installation directory, execute sbin/start-dfs.sh and sbin/start-yarn.sh.
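The startup sequence above can be sketched as the following commands. The installation paths (/opt/module/zookeeper-3.4.10 and /opt/module/hadoop-2.7.2) are taken from the directories used later in this article; adjust them to your own layout.

```shell
# On each ZooKeeper node: start the ZooKeeper server.
cd /opt/module/zookeeper-3.4.10
bin/zkServer.sh start
bin/zkServer.sh status   # should report Mode: leader or Mode: follower

# On the Hadoop master node: start HDFS and YARN.
cd /opt/module/hadoop-2.7.2
sbin/start-dfs.sh
sbin/start-yarn.sh
jps                      # expect NameNode, ResourceManager, etc.
```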

Two: HBase installation and deployment
1. Execute the following on the hadoop102 machine:

Unzip the tar package into the /opt/module directory:

[root@hadoop102 software]# tar -zxvf hbase-1.3.1-bin.tar.gz -C /opt/module

2. Enter the /opt/module directory and rename hbase-1.3.1 to hbase

[root@hadoop102 module]# mv  hbase-1.3.1/  hbase

3. Enter the conf directory under hbase; we need to modify the following three files in this directory.


1) Modify the hbase-env.sh file: set JAVA_HOME to your JDK path and, since we use an external ZooKeeper cluster here, set HBASE_MANAGES_ZK=false so that HBase does not start its own bundled ZooKeeper.
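A minimal hbase-env.sh fragment for this setup; the JAVA_HOME path below is an assumption and must be replaced with your actual JDK location:

```shell
# hbase-env.sh (fragment)
# Path to your JDK -- this value is an assumption; use your real path.
export JAVA_HOME=/opt/module/jdk1.8.0_144
# Use the external ZooKeeper cluster configured in hbase-site.xml
# instead of the ZooKeeper instance bundled with HBase.
export HBASE_MANAGES_ZK=false
```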

2) Add the following content to the hbase-site.xml file. Note that the host in the hbase.rootdir value must be the master (NameNode) host of your Hadoop cluster, the value of hbase.zookeeper.quorum must list the host names of your own cluster, and hbase.zookeeper.property.dataDir must be the directory where your ZooKeeper actually stores its data.

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://hadoop102:9000/HBase</value>
</property>

<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>

<!-- New in 0.98+: earlier versions had no .port property, and the default port was 60000 -->
<property>
  <name>hbase.master.port</name>
  <value>16000</value>
</property>

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>hadoop102,hadoop103,hadoop104</value>
</property>

<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/opt/module/zookeeper-3.4.10/zkData</value>
</property>

3) In the regionservers file, list each host name of the cluster. The file ships with a single localhost line; delete it first, then add the following content. Here hadoop102, hadoop103, and hadoop104 are my host names; configure it with your own.
hadoop102
hadoop103
hadoop104
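As a sketch, the regionservers file can also be written from the shell. The target path below is a hypothetical demo directory, not the real conf directory; on an actual cluster, point it at /opt/module/hbase/conf.

```shell
# Hypothetical demo path; on a real cluster use /opt/module/hbase/conf.
CONF_DIR=/tmp/hbase-conf-demo
mkdir -p "$CONF_DIR"

# Replace the default single-line "localhost" content with the
# cluster host names, one per line.
printf '%s\n' hadoop102 hadoop103 hadoop104 > "$CONF_DIR/regionservers"

cat "$CONF_DIR/regionservers"
```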

4. Soft link the Hadoop configuration files into the HBase conf directory
by executing the following on hadoop102:

ln -s /opt/module/hadoop-2.7.2/etc/hadoop/core-site.xml /opt/module/hbase/conf/core-site.xml
ln -s /opt/module/hadoop-2.7.2/etc/hadoop/hdfs-site.xml /opt/module/hbase/conf/hdfs-site.xml
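Linking (rather than copying) keeps HBase's view of the HDFS client settings in sync with Hadoop's copies. The step can be verified with a self-contained sketch; the temp directories below are placeholders for the real Hadoop and HBase conf paths above.

```shell
# Self-contained demo of the symlink step using temp dirs; on a real
# cluster SRC is Hadoop's etc/hadoop and DST is /opt/module/hbase/conf.
SRC=/tmp/hadoop-conf-demo
DST=/tmp/hbase-conf-demo-links
mkdir -p "$SRC" "$DST"
echo '<configuration/>' > "$SRC/core-site.xml"
echo '<configuration/>' > "$SRC/hdfs-site.xml"

for f in core-site.xml hdfs-site.xml; do
  ln -sf "$SRC/$f" "$DST/$f"
done

ls -l "$DST"   # both entries should show up as symlinks
```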

5. Copy hbase to the other machines, hadoop103 and hadoop104, by executing the following on hadoop102. The copy takes about 90 seconds.

scp -r /opt/module/hbase     root@hadoop103:/opt/module/
scp -r /opt/module/hbase     root@hadoop104:/opt/module/

6. After the above configuration, we can start the HBase service (note that the Hadoop and ZooKeeper clusters must already be running). On hadoop102, execute bin/start-hbase.sh under /opt/module/hbase. Note that we only need to start it on one machine; there is no need to execute bin/start-hbase.sh on the other two. After it finishes, check the processes on hadoop102, 103, and 104, and you will see that HBase has started successfully.

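A quick way to confirm the processes from hadoop102; this assumes passwordless SSH between the nodes, which the scp step above already relies on:

```shell
# Check HBase processes on every node with jps. Expect HMaster (plus an
# HRegionServer) on hadoop102 and an HRegionServer on each other node.
for host in hadoop102 hadoop103 hadoop104; do
  echo "== $host =="
  ssh "$host" jps
done
```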

To shut HBase down, we can directly execute bin/stop-hbase.sh on one of the machines.

The HBase web UI can be accessed directly in a browser at the master's address on port 16010, e.g. http://hadoop102:16010.


Origin blog.csdn.net/weixin_44080445/article/details/107436127