| | NameNode | DataNode | JournalNode | Zookeeper | HMaster | HRegionServer |
|-------|---|---|---|---|---|---|
| node1 | 1 |   |   |   | 1 |   |
| node2 | 1 | 1 | 1 | 1 |   | 1 |
| node3 |   | 1 | 1 | 1 |   | 1 |
| node4 |   | 1 | 1 | 1 | 1 | 1 |
1. First, extract the HBase installation package on node1.
2. Modify the configuration files.

hbase-env.sh:

export JAVA_HOME=/usr/java/jdk1.8.0_11
export HBASE_MANAGES_ZK=false   # disable HBase's built-in ZooKeeper cluster
hbase-site.xml:

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://node2:8020/hbase</value>
  <!-- a clean directory on HDFS, used to store HBase's data -->
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
  <!-- run in fully distributed mode -->
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>node2,node3,node4</value>
  <!-- the ZooKeeper cluster; ports are not needed -->
</property>
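As a sketch, the hbase-site.xml above can be written out from the shell with a heredoc. A demo path `./conf-demo` is assumed here so the snippet runs anywhere; on the cluster the target would be the HBase conf directory (/opt/hbase0.98/conf in this guide). The values match the configuration shown above.

```shell
# Write hbase-site.xml into a demo conf directory (./conf-demo is an
# assumption for illustration; use /opt/hbase0.98/conf on the cluster).
mkdir -p ./conf-demo
cat > ./conf-demo/hbase-site.xml <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node2:8020/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node2,node3,node4</value>
  </property>
</configuration>
EOF
```

Note that the properties must sit inside a single `<configuration>` root element, which the inline snippets above omit.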
Configure the regionservers file with the slave nodes:
node2
node3
node4
Configure backup-masters to set the standby master (note: the backup-masters file does not exist by default and must be created manually):
node4
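A minimal sketch of creating both files from the shell, one host per line. A demo path `./conf-demo` is assumed so the snippet runs anywhere; on the cluster the target would be /opt/hbase0.98/conf as used in the copy step below.

```shell
# Demo conf directory (assumption for illustration; the real target is
# the HBase conf directory, /opt/hbase0.98/conf in this guide).
mkdir -p ./conf-demo

# regionservers: one region server host per line
printf '%s\n' node2 node3 node4 > ./conf-demo/regionservers

# backup-masters: must be created manually; lists the standby master
printf '%s\n' node4 > ./conf-demo/backup-masters
```

Both files are plain host lists; HBase reads them line by line when starting daemons.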
3. Copy hdfs-site.xml from the Hadoop configuration directory into HBase's conf directory (HBase stores its data on HDFS, so it needs the HDFS client configuration).
cp /opt/hadoop-2.6.5/etc/hadoop/hdfs-site.xml /opt/hbase0.98/conf/
4. Distribute the HBase directory on node1 to the same path on node2, node3, and node4.
scp -r hbase0.98/ root@node2:`pwd`
scp -r hbase0.98/ root@node3:`pwd`
scp -r hbase0.98/ root@node4:`pwd`
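The three scp commands above can be collapsed into one loop. Shown here as a dry run that only prints each command; on the cluster, replace the `echo "$cmd"` with `eval "$cmd"` (or call scp directly) to actually copy.

```shell
# Dry-run sketch: build and print the distribution command for each node.
for host in node2 node3 node4; do
  cmd="scp -r hbase0.98/ root@$host:$(pwd)"
  echo "$cmd"   # replace with eval "$cmd" to perform the copy
done
```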
5. Start HBase.

On node1 (the master node), run start-hbase.sh to start the active master.
On node4 (the backup master node), run start-hbase.sh to start the standby master.