HBase quick start (installation and deployment)

The installation package has been uploaded to my resources.

2. Cluster building
2.1 Installation
2.1.1 Upload and decompress the HBase installation package
tar -xvzf hbase-2.1.0.tar.gz -C ../server/
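If the flags are unfamiliar, here is a self-contained sketch of the same extract step on a throwaway archive (temporary paths only; the real package name is as above):

```shell
# Demo of the tar flags on a dummy archive:
# -c/-x create/extract, -z gzip, -f archive file, -C target directory.
work=$(mktemp -d)
mkdir -p "$work/hbase-2.1.0/bin"
tar -czf "$work/pkg.tar.gz" -C "$work" hbase-2.1.0   # build a dummy package
mkdir -p "$work/server"
tar -xzf "$work/pkg.tar.gz" -C "$work/server"        # same shape as the real command
ls "$work/server"                                    # prints: hbase-2.1.0
rm -rf "$work"
```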

2.1.2 Modify the HBase configuration files
2.1.2.1 hbase-env.sh

cd /export/server/hbase-2.1.0/conf
vim hbase-env.sh

Line 28

export JAVA_HOME=/export/server/jdk1.8.0_241/
export HBASE_MANAGES_ZK=false

2.1.2.2 hbase-site.xml

vim hbase-site.xml
<configuration>
        <property>
                <name>hbase.rootdir</name>
                <value>hdfs://node1.itcast.cn:8020/hbase</value>
        </property>

        <property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>

   <!-- New after HBase 0.98; earlier versions had no .port property and the default port was 60000 -->
        <property>
                <name>hbase.master.port</name>
                <value>16000</value>
        </property>

        <property>
                <name>hbase.zookeeper.quorum</name>
                <value>node1.itcast.cn:2181,node2.itcast.cn:2181,node3.itcast.cn:2181</value>
        </property>

        <property>
                <name>hbase.zookeeper.property.dataDir</name>
         <value>/export/servers/zookeeper-3.4.5-cdh5.14.0/zkdata</value>
        </property>
</configuration>
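Property names in this file are easy to mistype and HBase will silently ignore an unknown name. A small sketch that greps for the properties set above (demonstrated on a temp copy; on the cluster, point CONF at the real hbase-site.xml):

```shell
# Sketch: check that the required property names appear in the config.
# The file written here is a sample standing in for the real one.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<configuration>
  <property><name>hbase.rootdir</name><value>hdfs://node1.itcast.cn:8020/hbase</value></property>
  <property><name>hbase.cluster.distributed</name><value>true</value></property>
  <property><name>hbase.master.port</name><value>16000</value></property>
  <property><name>hbase.zookeeper.quorum</name><value>node1.itcast.cn:2181,node2.itcast.cn:2181,node3.itcast.cn:2181</value></property>
</configuration>
EOF
for prop in hbase.rootdir hbase.cluster.distributed hbase.master.port hbase.zookeeper.quorum; do
  grep -q "<name>$prop</name>" "$CONF" && echo "ok: $prop" || echo "MISSING: $prop"
done
rm -f "$CONF"
```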

2.1.3 Configure environment variables

Configure Hbase environment variables

vim /etc/profile
export HBASE_HOME=/export/server/hbase-2.1.0
export PATH=$PATH:${HBASE_HOME}/bin

# Load environment variables

source /etc/profile
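A quick sanity check that the variables took effect (a sketch; the path is the one configured above, re-exported here so the check is self-contained):

```shell
# Sketch: after sourcing /etc/profile, confirm HBase's bin directory
# is on PATH.
export HBASE_HOME=/export/server/hbase-2.1.0
export PATH=$PATH:${HBASE_HOME}/bin
case ":$PATH:" in
  *":${HBASE_HOME}/bin:"*) echo "HBASE_HOME/bin is on PATH" ;;
  *)                       echo "PATH not updated" ;;
esac
```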
2.1.4 Copy the htrace jar to lib

HBase 2.1 ships this jar under client-facing-thirdparty, but the server processes need it on the main classpath, otherwise the HMaster may fail to start:

cp $HBASE_HOME/lib/client-facing-thirdparty/htrace-core-3.1.0-incubating.jar $HBASE_HOME/lib/
2.1.5 Modify the regionservers file
vim regionservers 
node1.itcast.cn
node2.itcast.cn
node3.itcast.cn
2.1.6 Distribute the installation package and configuration files
cd /export/server
scp -r hbase-2.1.0/ node2.itcast.cn:$PWD
scp -r hbase-2.1.0/ node3.itcast.cn:$PWD
scp /etc/profile node2.itcast.cn:/etc
scp /etc/profile node3.itcast.cn:/etc
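With more worker nodes these scp pairs grow quickly; the same step can be written as a loop (a sketch; the scp lines are commented out so the skeleton can be dry-run anywhere):

```shell
# Sketch: distribute the install directory and profile to each worker
# node in one loop. Uncomment the scp lines on the real cluster.
NODES="node2.itcast.cn node3.itcast.cn"
for host in $NODES; do
  echo "distributing to $host"
  # scp -r /export/server/hbase-2.1.0/ "$host:/export/server/"
  # scp /etc/profile "$host:/etc"
done
```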

Load environment variables on node2.itcast.cn and node3.itcast.cn

source /etc/profile

2.1.7 Start HBase

cd /export/onekey

Start ZooKeeper

./start-zk.sh

Start HDFS

start-dfs.sh

Start HBase

start-hbase.sh
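The order matters: HBase stores its data in HDFS and coordinates through ZooKeeper, so both must be up before HBase starts. A hypothetical one-key wrapper (script names as above; the real invocations are commented out so the skeleton runs anywhere):

```shell
#!/bin/bash
# Hypothetical start-all wrapper enforcing the order ZK -> HDFS -> HBase.
set -e
for svc in ./start-zk.sh start-dfs.sh start-hbase.sh; do
  echo "starting: $svc"
  # "$svc"   # uncomment on the cluster
done
```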

2.1.8 Verify that HBase started successfully

Start the hbase shell client

hbase shell

Enter status; a healthy cluster reports no dead servers:

[root@node1 onekey]# hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/export/server/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/export/server/hbase-2.1.0/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
Version 2.1.0, re1673bb0bbfea21d6e5dba73e013b09b8b49b89b, Tue Jul 10 17:26:48 CST 2018
Took 0.0034 seconds                                                                                                                                           
Ignoring executable-hooks-1.6.0 because its extensions are not built. Try: gem pristine executable-hooks --version 1.6.0
Ignoring gem-wrappers-1.4.0 because its extensions are not built. Try: gem pristine gem-wrappers --version 1.4.0
2.4.1 :001 > status
1 active master, 0 backup masters, 3 servers, 0 dead, 0.6667 average load
Took 0.4562 seconds                                                                                                                                           
2.4.1 :002 >
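The interactive check above can also be scripted: hbase shell reads commands from stdin, so a health check can pipe in status and inspect the summary line. A sketch (the live call is commented out; the sample line from the transcript above stands in for it):

```shell
# Sketch: non-interactive cluster health check. On the cluster,
# uncomment the hbase shell line instead of using the sample.
# status_line=$(echo "status" | hbase shell 2>/dev/null | grep "average load")
status_line="1 active master, 0 backup masters, 3 servers, 0 dead, 0.6667 average load"
case "$status_line" in
  *", 0 dead,"*) echo "cluster healthy" ;;
  *)             echo "cluster has dead regionservers" ;;
esac
```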
2.2 WebUI
http://node1.itcast.cn:16010/master-status
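The web UI can be probed from a script as well (a sketch; the curl call is commented out because it needs the running cluster, so 200 is substituted as the expected code):

```shell
# Sketch: check the HMaster web UI (default info port 16010) answers.
URL="http://node1.itcast.cn:16010/master-status"
# code=$(curl -s -o /dev/null -w '%{http_code}' "$URL")
code=200   # simulated; on the cluster use the curl line above
[ "$code" = "200" ] && echo "web UI reachable: $URL"
```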

2.3 Installation directory description
Directory       Description
bin             all HBase-related commands
conf            all HBase configuration files
hbase-webapps   the HBase web UI application
lib             Java libraries HBase depends on
logs            HBase log files

2.4 Reference hardware configuration

Typical memory configuration for each Java process in a cluster with roughly 800TB of storage:

Process             Heap   Description
NameNode            8GB    every 100TB of data or every 1 million files consumes roughly 1GB of NameNode heap
SecondaryNameNode   8GB    replays the primary NameNode's EditLog in memory, so it needs the same configuration as the NameNode
DataNode            1GB    adequate
ResourceManager     4GB    adequate (this is the recommended configuration for MapReduce)
NodeManager         2GB    adequate (this is the recommended configuration for MapReduce)
HBase HMaster       4GB    lightweight load; a moderate heap is enough
HBase RegionServer  12GB   most of the available memory, while leaving enough room for the operating system cache and task processes
ZooKeeper           1GB    moderate

Recommendations:
The master machine runs the NameNode, ResourceManager, and HBase HMaster: around 24GB.
Slave machines run the DataNode, NodeManager, and HBase RegionServer: 24GB and above.
Choose the number of processes on a node according to the number of CPU cores, for example two 4-core CPUs = 8 cores, so each Java process can occupy a core of its own (recommendation: 8-core CPUs).
More memory is not automatically better: the larger the Java heap, the more fragmentation it accumulates and the longer garbage collection takes. Setting a RegionServer heap to 64GB, for example, is a poor choice: a full GC then causes a long pause, and if the pause is long enough the Master may conclude the node has died and remove it from the cluster.
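Applied to this deployment, the sizing above would translate into hbase-env.sh settings roughly like the following (a sketch; exact values and GC flags depend on the JVM and workload):

```shell
# Hypothetical hbase-env.sh fragment matching the sizing table above:
# 4 GB heap for the HMaster, 12 GB for each RegionServer.
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xms4g -Xmx4g"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xms12g -Xmx12g"
```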

Origin blog.csdn.net/xianyu120/article/details/114790970