Note: HBase depends heavily on ZooKeeper and Hadoop. Before installing HBase, make sure that ZooKeeper and Hadoop have already been started and that both services are running normally.
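The prerequisite check above can be sketched as a small script. This is only an illustration: the daemon names below are the usual ones reported by jps for ZooKeeper and HDFS and may differ in your deployment.

```shell
# Hypothetical pre-flight check: warn if the expected ZooKeeper/Hadoop
# daemons are not visible in the jps output on this machine.
required_services="QuorumPeerMain NameNode DataNode"
for svc in $required_services; do
  if jps 2>/dev/null | grep -q "$svc"; then
    echo "ok: $svc is running"
  else
    echo "WARN: $svc not found in jps output"
  fi
done
```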
Step one: download the corresponding HBase installation package
All CDH packages for this version can be downloaded from:
http://archive.cloudera.com/cdh5/cdh/5/
The corresponding HBase version can be downloaded from:
http://archive.cloudera.com/cdh5/cdh/5/hbase-1.2.0-cdh5.14.0.tar.gz
Step two: upload and unpack the archive
Upload the archive to the /export/soft path on node01 and unpack it:
cd /export/soft/
tar -zxvf hbase-1.2.0-cdh5.14.0.tar.gz -C ../servers/
Step three: modify the configuration files
Edit the configuration files on the first machine:
cd /export/servers/hbase-1.2.0-cdh5.14.0/conf
Modify the first file, hbase-env.sh
Set JAVA_HOME and disable HBase's built-in ZooKeeper:
vim hbase-env.sh
export JAVA_HOME=/export/servers/jdk1.8.0_141
export HBASE_MANAGES_ZK=false
Modify the second file, hbase-site.xml
vim hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://node01:8020/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<!-- New after 0.98: earlier versions did not have hbase.master.port; the default port was 60000 -->
<property>
<name>hbase.master.port</name>
<value>16000</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>node01:2181,node02:2181,node03:2181</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/export/servers/zookeeper-3.4.5-cdh5.14.0/zkdatas</value>
</property>
</configuration>
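As a sanity check, the presence of the five properties above can be verified with a small script. This is a sketch only: the helper name check_hbase_site is made up here, and the path assumes the install location used throughout this guide.

```shell
# Hypothetical helper: verify that hbase-site.xml defines all required properties.
check_hbase_site() {
  conf_file="$1"
  missing=""
  for prop in hbase.rootdir hbase.cluster.distributed hbase.master.port \
              hbase.zookeeper.quorum hbase.zookeeper.property.dataDir; do
    grep -q "<name>$prop</name>" "$conf_file" 2>/dev/null || missing="$missing $prop"
  done
  if [ -z "$missing" ]; then
    echo "all required properties present"
  else
    echo "missing:$missing"
  fi
}

check_hbase_site /export/servers/hbase-1.2.0-cdh5.14.0/conf/hbase-site.xml
```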
Modify the third file, regionservers
vim regionservers
node01
node02
node03
Create a backup-masters configuration file to enable HMaster high availability
cd /export/servers/hbase-1.2.0-cdh5.14.0/conf
vim backup-masters
node01
node02
node03
Step four: distribute the installation package to the other machines
Copy the HBase installation package from the first machine to the other machines:
cd /export/servers/
scp -r hbase-1.2.0-cdh5.14.0/ node02:$PWD
scp -r hbase-1.2.0-cdh5.14.0/ node03:$PWD
Step five: copy the configuration files or create soft links on all three machines
Because HBase needs to read configuration information from Hadoop's core-site.xml and hdfs-site.xml, copy these two files on all three machines:
cp /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop/core-site.xml /export/servers/hbase-1.2.0-cdh5.14.0/conf/core-site.xml
cp /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop/hdfs-site.xml /export/servers/hbase-1.2.0-cdh5.14.0/conf/hdfs-site.xml
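The step title also mentions soft links. As an alternative sketch (using the same paths as above, with an illustrative helper name), linking instead of copying means later changes to the Hadoop configuration are picked up by HBase automatically:

```shell
# Hypothetical alternative: soft-link the Hadoop config files instead of copying.
# Usage: link_hadoop_conf SRC_DIR DEST_DIR  (run on each of the three nodes)
link_hadoop_conf() {
  src_dir="$1"; dest_dir="$2"
  for f in core-site.xml hdfs-site.xml; do
    ln -sf "$src_dir/$f" "$dest_dir/$f" 2>/dev/null \
      || echo "skip: could not link $f (check that $dest_dir exists)"
  done
}

link_hadoop_conf /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop \
                 /export/servers/hbase-1.2.0-cdh5.14.0/conf
```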
Step six: add the HBASE_HOME environment variable on all three machines
vim /etc/profile
export HBASE_HOME=/export/servers/hbase-1.2.0-cdh5.14.0
export PATH=$HBASE_HOME/bin:$PATH
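After editing /etc/profile, reload it and confirm the change took effect. A minimal sketch, assuming the install path above:

```shell
# After editing /etc/profile, reload it in the current shell:
#   . /etc/profile
# The same exports are repeated here so the check below is self-contained.
export HBASE_HOME=/export/servers/hbase-1.2.0-cdh5.14.0
export PATH="$HBASE_HOME/bin:$PATH"

# Verify that the HBase bin directory is now on PATH.
case ":$PATH:" in
  *":$HBASE_HOME/bin:"*) echo "PATH contains $HBASE_HOME/bin" ;;
  *) echo "PATH update failed" ;;
esac
```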
Step seven: start the HBase cluster
Execute the following commands on the first machine to start the cluster:
cd /export/servers/hbase-1.2.0-cdh5.14.0
bin/start-hbase.sh
Warning: HBase may print a warning at startup. It is caused by a difference between JDK 7 and JDK 8: if JDK 8 is installed on the Linux server, this warning appears. It can be resolved by commenting out the "HBASE_MASTER_OPTS" and "HBASE_REGIONSERVER_OPTS" settings in hbase-env.sh on all machines. The warning does not affect normal operation, however, so it can also simply be ignored.
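Concretely, the stock hbase-env.sh for this HBase line sets PermGen JVM options that JDK 8 no longer supports, which is what triggers the warning. The fragment below shows those settings commented out (a sketch of the fix; the exact lines in your hbase-env.sh may differ slightly):

```shell
# In hbase-env.sh, comment out the JDK 7-only PermGen settings:
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
```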
A single daemon can also be started with the following commands.
Command to start HMaster:
bin/hbase-daemon.sh start master
Command to start HRegionServer:
bin/hbase-daemon.sh start regionserver
To avoid an HMaster single point of failure, additional HMaster processes can be started on the node02 and node03 machines, achieving HMaster high availability:
bin/hbase-daemon.sh start master
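Once everything is started, each node can be checked for the expected daemons. A sketch, assuming passwordless ssh between the nodes as set up for Hadoop:

```shell
# Hypothetical post-start check: list the HBase daemons on every node.
for node in node01 node02 node03; do
  echo "--- $node ---"
  ssh -o BatchMode=yes -o ConnectTimeout=3 "$node" jps 2>/dev/null \
    | grep -E 'HMaster|HRegionServer' \
    || echo "no HBase daemons reported (is $node reachable?)"
done
```

With all backup masters running, node01 should report HMaster and HRegionServer, and node02/node03 should each report both as well.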
Step eight: view the web UI
Open the following address in a browser:
http://node01:60010/master-status