Installation and deployment of an HBase cluster

Table of contents

1. Download the HBase binary package

2. Configure environment variables

3. Configure ZooKeeper and HBase settings

4. Modify the regionservers file

5. Rename the conflicting jar package

6. Check whether the relevant processes have started

7. Visit the HBase web UI

The cluster is deployed on three nodes:

192.168.20.11 node1
192.168.20.12 node2
192.168.20.13 node3
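
These hostnames are assumed to resolve from every machine; if DNS is not available, add the mappings to /etc/hosts on node1, node2, and node3:

[hadoop@node1 ~]$ sudo vim /etc/hosts
192.168.20.11 node1
192.168.20.12 node2
192.168.20.13 node3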

 

1. Download the HBase binary package

Run the following on node1, node2, and node3.

[hadoop@node1 ~]$ cd /usr/local
[hadoop@node1 local]$ sudo wget https://archive.apache.org/dist/hbase/2.4.8/hbase-2.4.8-bin.tar.gz
[hadoop@node1 local]$ sudo tar -xvf hbase-2.4.8-bin.tar.gz
[hadoop@node1 local]$ sudo ln -s hbase-2.4.8 hbase
[hadoop@node1 local]$ sudo chown -R hadoop:hadoop hbase
[hadoop@node1 local]$ sudo chown -R hadoop:hadoop hbase-2.4.8

##################################################### 

2. Configure environment variables

Run the following on node1, node2, and node3.

[hadoop@node1 ~]$ cd /usr/local/hbase/conf
[hadoop@node1 conf]$ vim hbase-env.sh
export JAVA_HOME=/usr/local/jdk
export HBASE_HEAPSIZE=4G
export HBASE_MANAGES_ZK=false
export HBASE_PID_DIR=/data/hbase
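
HBASE_PID_DIR points at /data/hbase, and hbase-site.xml below also places hbase.tmp.dir under /data/hbase/tmp; these directories are assumed not to exist yet, so create them on every node:

[hadoop@node1 conf]$ sudo mkdir -p /data/hbase/tmp
[hadoop@node1 conf]$ sudo chown -R hadoop:hadoop /data/hbase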

#####################################################  

3. Configure ZooKeeper and HBase settings

Run the following on node1, node2, and node3.

[hadoop@node1 conf]$ cd /usr/local/hbase/conf
# Configure the ZooKeeper and HBase settings
# Replace the contents of hbase-site.xml with the following
[hadoop@node1 conf]$ vim hbase-site.xml 


 
<configuration>
  <!-- Whether HBase runs in distributed (cluster) mode -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- ZooKeeper quorum hosts -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node1,node2,node3</value>
  </property>
  <!-- ZooKeeper data directory -->
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/data/zookeeper/data</value> 
  </property>
  <!-- HDFS path under which HBase stores its data -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node1:9000/hbase</value>
  </property>
  <!-- Local HBase temporary data directory -->
  <property>
    <name>hbase.tmp.dir</name>
    <value>/data/hbase/tmp</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>

  <!-- Phoenix secondary index WAL codec -->
  <property>
    <name>hbase.regionserver.wal.codec</name>
    <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
  </property>

  <property>
    <name>hbase.column.max.version</name>
    <value>2</value>
  </property>
  <property>
    <name>hbase.regionserver.global.memstore.size</name>
    <value>0.2</value>
  </property>
  <property>
    <name>hbase.regionserver.global.memstore.size.lower.limit</name>
    <value>0.8</value>
  </property>

  <!-- Effective block cache size = heap size * hfile.block.cache.size; a restart is required for changes to take effect. Increase it when HDFS reads dominate the workload. -->
  <property>
    <name>hfile.block.cache.size</name>
    <value>0.2</value>
  </property>

  <!-- Memstore flush size in bytes; 134217728 = 128 MB (the default) -->
  <property>
    <name>hbase.hregion.memstore.flush.size</name>
    <value>134217728</value>
  </property>

  <!-- RegionServer RPC handler count; default 30 -->
  <property>
    <name>hbase.regionserver.handler.count</name>
    <value>48</value>
  </property>

  <!-- Phoenix error-->
  <property>
    <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name>
    <value>300000</value>
  </property>

  <!-- Region Split Policy -->
  <property>
    <name>hbase.regionserver.region.split.policy</name>
    <value>org.apache.hadoop.hbase.regionserver.DisabledRegionSplitPolicy</value>
  </property>

  
  <!-- Compaction settings; hbase.hregion.majorcompaction defaults to 604800000 ms (7 days), and 0 disables automatic major compactions -->
  <property>
    <name>hbase.hstore.compactionThreshold</name>
    <value>5</value>
  </property>
  <property>
    <name>hbase.hstore.compaction.max</name>
    <value>10</value>
  </property>
  <property>
    <name>hbase.hstore.blockingStoreFiles</name>
    <value>16</value>
  </property>
  <property>
    <name>hbase.hregion.majorcompaction</name>
    <value>0</value>
  </property>

  <property>
    <name>hbase.hstore.compaction.throughput.higher.bound</name>
    <value>20971520</value>
    <description>Upper bound for compaction throughput: 20 MB/sec</description>
  </property>
  <property>
    <name>hbase.hstore.compaction.throughput.lower.bound</name>
    <value>10485760</value>
    <description>Lower bound for compaction throughput: 10 MB/sec</description>
  </property>
</configuration>
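
Note that IndexedWALEditCodec is shipped by Apache Phoenix, not by HBase itself, so the Phoenix server jar has to be on every RegionServer's classpath before this configuration is rolled out; otherwise the RegionServers will fail to start. A minimal sketch, assuming Phoenix 5.1.2 for HBase 2.4 and that the jar sits in the current directory (match the name to your actual Phoenix release):

[hadoop@node1 ~]$ sudo cp phoenix-server-hbase-2.4-5.1.2.jar /usr/local/hbase/lib/
[hadoop@node1 ~]$ sudo chown hadoop:hadoop /usr/local/hbase/lib/phoenix-server-hbase-2.4-5.1.2.jar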

#####################################################  

4. Modify the regionservers file

Run the following on node1, node2, and node3.

[hadoop@node1 conf]$ vim regionservers 
node1
node2
node3
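
Rather than editing the same files three times, the copies edited on node1 can be pushed to the other nodes (assuming passwordless SSH is set up for the hadoop user):

[hadoop@node1 conf]$ scp hbase-env.sh hbase-site.xml regionservers hadoop@node2:/usr/local/hbase/conf/
[hadoop@node1 conf]$ scp hbase-env.sh hbase-site.xml regionservers hadoop@node3:/usr/local/hbase/conf/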

#####################################################  

5. Rename the conflicting jar package

Both HBase and Hadoop ship slf4j-log4j12-1.7.30.jar, which causes a logging binding conflict when HBase starts.

Rename HBase's copy of the jar:

[hadoop@node1 lib]$ cd /usr/local/hbase/lib/client-facing-thirdparty/
[hadoop@node1 client-facing-thirdparty]$ mv slf4j-log4j12-1.7.30.jar slf4j-log4j12-1.7.30.jar.bak
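
A quick check that the rename took effect; the .bak file should now be the only slf4j-log4j12 entry:

[hadoop@node1 client-facing-thirdparty]$ ls slf4j*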

#####################################################  

6. Check whether the relevant processes have started
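
The checks below assume HDFS and ZooKeeper are already running and that the cluster has been started from node1:

[hadoop@node1 ~]$ /usr/local/hbase/bin/start-hbase.sh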

[root@node1 ~]# jps
42128 HRegionServer
42848 ResourceManager
33729 QuorumPeerMain
41893 HMaster
40664 NameNode
40824 DataNode
44442 Jps



[root@node2 ~]# jps
15410 HRegionServer
17013 Jps
15813 NodeManager
15021 DataNode
15133 SecondaryNameNode
1583 Bootstrap
12223 QuorumPeerMain




[root@node3 ~]# jps
13058 NodeManager
7766 QuorumPeerMain
13531 Jps
12715 HRegionServer
12412 DataNode

#####################################################  

7. Visit the HBase web UI
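
With default ports, the HBase 2.x Master web UI listens on 16010 and each RegionServer UI on 16030, so the pages should be reachable at:

http://192.168.20.11:16010   (HMaster UI on node1)
http://192.168.20.11:16030   (HRegionServer UI on node1)

As a final health check, status in the HBase shell should report one active master and three live region servers:

[hadoop@node1 ~]$ /usr/local/hbase/bin/hbase shell
hbase:001:0> status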

 
