1 Hadoop environment successfully built (prerequisite)
2 HBase installation (install on the master node, then copy to the slave nodes with scp)
2.1 Upload hbase-2.2.6-bin.tar.gz
2.2 Unzip hbase-2.2.6-bin.tar.gz to the /app directory
2.3 Modify HBase configuration file
2.3.1 /app/hbase-2.2.6/conf/hbase-env.sh
Add at the end:
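The lines to add are not shown in the original; a minimal sketch of what is typically added for this setup (assuming the JDK path from section 2.5, and HBASE_MANAGES_ZK=false because section 3 installs a standalone ZooKeeper ensemble that is started by hand):
export JAVA_HOME=/app/jdk1.8.0_261
export HBASE_MANAGES_ZK=false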
2.3.2 /app/hbase-2.2.6/conf/hbase-site.xml
The content of vi /app/hbase-2.2.6/conf/hbase-site.xml is:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <!-- Root directory for HBase data on HDFS -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <!-- Disable the stream-capability check (needed with some HDFS setups) -->
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
  <!-- Run HBase in fully distributed mode -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- Hosts of the external ZooKeeper ensemble (installed in section 3) -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2</value>
  </property>
</configuration>
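Note: the hdfs://master:9000 prefix in hbase.rootdir must match fs.defaultFS in Hadoop's core-site.xml. If HDFS runs as an HA nameservice (e.g. mycluster), use the nameservice name here instead; see section 2.7.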
2.3.3 /app/hbase-2.2.6/conf/regionservers
vi /app/hbase-2.2.6/conf/regionservers
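The file's contents are not shown in the original; based on section 4.4, which starts regionservers on slave1 and slave2, it presumably lists:
slave1
slave2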
2.3.4 /app/hbase-2.2.6/conf/backup-masters
vi /app/hbase-2.2.6/conf/backup-masters
slave1
2.4 Copy HBase to the slave nodes
scp -r /app/hbase-2.2.6 angel@slave2:/app
scp -r /app/hbase-2.2.6 angel@slave1:/app
2.5 Edit HBase environment variables for all nodes
$ vi /home/angel/.profile
export JAVA_HOME=/app/jdk1.8.0_261
export PATH=$JAVA_HOME/bin:$PATH
export HADOOP_HOME=/app/hadoop-2.8.5
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HBASE_HOME=/app/hbase-2.2.6
export PATH=$PATH:$HBASE_HOME/bin
export ZOOKEEPER_HOME=/app/zookeeper-3.4.14
export PATH=$PATH:$ZOOKEEPER_HOME/bin
2.6 Make the HBase environment variables take effect on all nodes
$ source /home/angel/.profile
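A quick sanity check that the variables took effect:
$ echo $HBASE_HOME
$ hbase version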
2.7 Fix for HBase being unable to find the HDFS nameservice mycluster
HBase needs Hadoop's client configuration to resolve an HA nameservice, so copy it into the HBase conf directory:
cp /app/hadoop-2.8.5/etc/hadoop/core-site.xml /app/hbase-2.2.6/conf/
cp /app/hadoop-2.8.5/etc/hadoop/hdfs-site.xml /app/hbase-2.2.6/conf/
Note: run these copies on all node machines.
3 Install ZooKeeper
3.1 Upload zookeeper-3.4.14.tar.gz and unpack it to the /app directory
3.2 Edit zoo.cfg
cp /app/zookeeper-3.4.14/conf/zoo_sample.cfg /app/zookeeper-3.4.14/conf/zoo.cfg
vi /app/zookeeper-3.4.14/conf/zoo.cfg
Add at the end (dataDir must point at the directory where myid is created in section 3.3; the sample config defaults it to /tmp/zookeeper):
dataDir=/app/zookeeper-3.4.14/data
maxClientCnxns=60
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
server.0=172.25.0.10:2888:3888
server.1=172.25.0.11:2888:3888
server.2=172.25.0.12:2888:3888
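The N in server.N must match the myid value written on that host (sections 3.3 and 3.5): 0 on master (172.25.0.10), 1 on slave1 (172.25.0.11), 2 on slave2 (172.25.0.12).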
3.3 Create myid (on the master node)
mkdir /app/zookeeper-3.4.14/data
echo 0 > /app/zookeeper-3.4.14/data/myid
3.4 Copy ZooKeeper to the other nodes
scp -r /app/zookeeper-3.4.14/ angel@slave1:/app
scp -r /app/zookeeper-3.4.14/ angel@slave2:/app
3.5 Create myid on the other nodes
On slave node 1, slave1:
echo 1 > /app/zookeeper-3.4.14/data/myid
On slave node 2, slave2:
echo 2 > /app/zookeeper-3.4.14/data/myid
3.6 Start ZooKeeper on all nodes
zkServer.sh start
3.7 View ZooKeeper status
Run zkServer.sh status on each node. In this run:
Master node master: Mode: follower
Slave node 1, slave1: Mode: follower
Slave node 2, slave2: Mode: leader
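(Which node is elected leader depends on the election at startup; any of the three may end up as leader.)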
4 HBase startup test
4.1 Start Hadoop
$ start-dfs.sh
$ start-yarn.sh
$ mr-jobhistory-daemon.sh start historyserver
4.2 Start HBase
Start HBase on the master node:
$ start-hbase.sh
Note: this order is not standard. Normally ZooKeeper should be started first and HBase afterwards. However, on my setup, starting ZooKeeper first and then HBase did not work, so here HBase is started first and ZooKeeper afterwards.
4.3 Start ZooKeeper
On all three node machines (master node master, slave node 1 slave1, slave node 2 slave2):
$ zkServer.sh start    (start the service)
$ zkServer.sh status    (view the service status)
$ zkServer.sh stop    (stop the service)
4.4 View processes
Run jps on each machine. Master node master: 6 processes. Slave node 1 slave1 and slave node 2 slave2: 4 processes each.
If a process is missing, start the regionserver on the slave1 and slave2 nodes:
$ hbase-daemon.sh start regionserver
Start master on the master node
$ hbase-daemon.sh start master
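For reference, a plausible breakdown of the process counts above (an assumption based on the roles configured in this guide; the exact set depends on where the Hadoop daemons run):
Master node master: NameNode, SecondaryNameNode, ResourceManager, JobHistoryServer, QuorumPeerMain, HMaster
Slave nodes: DataNode, NodeManager, QuorumPeerMain, HRegionServer
(slave1 may additionally show a standby HMaster, per section 2.3.4)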
4.5 Web page test
http://master:16010/
http://slave1:16030/
http://slave2:16030/
At this point, HBase deployment is complete!
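As a final smoke test, the standard HBase shell commands below create, list, and drop a table (the table name t1 and column family cf are placeholders):
$ hbase shell
hbase> status
hbase> create 't1', 'cf'
hbase> list
hbase> disable 't1'
hbase> drop 't1'
hbase> exit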
A follow-up article covers HBase table management.