Hadoop big data platform construction and application (3) pseudo-distributed HBase environment construction

Foreword:

Three operating modes of HBase:

Stand-alone mode

Pseudo-distributed mode (these notes only record this construction process)

Fully distributed mode

Environment requirements:

Java environment variables are set and take effect;

Hadoop is installed and starts normally;

Passwordless SSH login works;

HBase is already installed.
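
Each prerequisite can be checked quickly from the shell before continuing. This is only a sketch; the paths follow the ones used in this note and may differ on your machine.

java -version                  # Java environment variables take effect
jps                            # list JVM processes; NameNode/DataNode should appear if Hadoop is running
ssh localhost exit && echo ok  # prints ok without asking for a password if passwordless SSH works
ls /home/hbase                 # HBase is unpacked at the expected location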

Switch to the HBase conf directory and edit hbase-env.sh:

root@user01:/home/hbase/conf# vi hbase-env.sh

 

Add at the end:

export JAVA_HOME=/home/jdk/jdk1.8.0_171

export HBASE_CLASSPATH=/home/hbase/conf

export HBASE_MANAGES_ZK=true
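
A quick way to confirm the changes took effect (assuming the same paths as above; adjust if your JDK or HBase lives elsewhere):

ls /home/jdk/jdk1.8.0_171/bin/java                                                    # the JDK path referenced by JAVA_HOME exists
grep -E 'JAVA_HOME|HBASE_CLASSPATH|HBASE_MANAGES_ZK' /home/hbase/conf/hbase-env.sh    # the three lines were appended

With HBASE_MANAGES_ZK=true, HBase starts and manages its own ZooKeeper instance, which is why an HQuorumPeer process appears later.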

Configure hbase-site.xml:

root@user01:/home/hbase/conf# vi hbase-site.xml

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://localhost:9000/hbase</value>
</property>

<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>

Note: if HMaster (HM) and HQuorumPeer (HQ) start normally and then exit on their own, it is very likely that "true" was mistyped; I ran into exactly this low-level mistake in my experiment. Since these values were entered by hand, refer to the experiment screenshots for the details.
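
For reference, the whole file can also be written in one step with a heredoc. This is just a sketch; it assumes the same HDFS address as above and adds the <configuration> wrapper that hbase-site.xml needs.

cat > /home/hbase/conf/hbase-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
EOF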

Start up:

start-dfs.sh    # start Hadoop
start-hbase.sh  # start HBase

Check the startup:

Running jps should show the three HBase processes: HMaster (HM), HQuorumPeer (HQ), and HRegionServer (HR).
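
A sketch of the check (process IDs will differ, so only the names matter):

jps
# Expected HBase processes:
#   HMaster        (HM)
#   HQuorumPeer    (HQ)   <- the ZooKeeper instance managed by HBase
#   HRegionServer  (HR)
# plus the Hadoop processes: NameNode, DataNode, SecondaryNameNode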

Enter the HBase shell:

hbase shell   # opens the HBase command-line shell
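
A quick smoke test inside the shell (a sketch; the table name 'test' and column family 'cf' are arbitrary examples):

create 'test', 'cf'                # create a table with one column family
put 'test', 'row1', 'cf:a', 'v1'   # write one cell
scan 'test'                        # read it back
disable 'test'
drop 'test'                        # clean up the throwaway table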

exit

Stop HBase:

stop-hbase.sh

Note:

On the startup order of Hadoop and HBase:

Start Hadoop first, then HBase; shut down in the reverse order (stop HBase first, then Hadoop). Sticking to this order keeps startup and shutdown clean, otherwise problems are likely.
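
Put together (a sketch; stop-dfs.sh is the standard Hadoop counterpart of the start-dfs.sh script used above):

start-dfs.sh     # 1. start Hadoop (HDFS) first
start-hbase.sh   # 2. then start HBase

stop-hbase.sh    # 1. stop HBase first
stop-dfs.sh      # 2. then stop Hadoop (HDFS)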
