HBase environment setup
Modify the configuration files
Modify hbase-env.sh
Before the change, the file contains the commented-out default:
# export HBASE_MANAGES_ZK=true
Change it to:
export HBASE_MANAGES_ZK=false
This tells HBase not to start its own ZooKeeper when it starts; ZooKeeper is started separately by the user.
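If you prefer to apply this edit from the shell rather than an editor, a sed one-liner such as the following also works (a sketch; it assumes the default commented line is present verbatim):
Command: sed -i 's/^# export HBASE_MANAGES_ZK=true/export HBASE_MANAGES_ZK=false/' hbase-env.sh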
Modify hbase-site.xml
The <configuration> element is empty before the change. Add the following properties to hbase-site.xml:
<property>
  <!-- Directory in HDFS where HBase stores its data; dmcluster is the HDFS HA nameservice -->
  <name>hbase.rootdir</name>
  <value>hdfs://dmcluster/hbase</value>
</property>
<property>
  <!-- Run HBase in fully distributed mode -->
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <!-- Hosts of the external ZooKeeper ensemble -->
  <name>hbase.zookeeper.quorum</name>
  <value>app-11,app-12,app-13</value>
</property>
<property>
  <!-- ZooKeeper data directory (relevant when HBase manages ZooKeeper itself) -->
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/hadoop/HBase/hbase-2.2.0/zookeeper</value>
</property>
<property>
  <!-- Port on which ZooKeeper clients connect -->
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>
<property>
  <!-- Skip the hflush/hsync stream-capability check on the underlying filesystem -->
  <name>hbase.unsafe.stream.capability.enforce</name>
  <value>false</value>
</property>
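Because hbase.rootdir points at the HDFS nameservice dmcluster rather than a single NameNode host, HBase must be able to resolve that nameservice through the Hadoop client configuration. Two quick sanity checks (a sketch; the second assumes the netcat utility is installed and that ZooKeeper's ruok four-letter command is allowed):
Command: hdfs getconf -confKey dfs.nameservices
This should print dmcluster.
Command: echo ruok | nc app-11 2181
A running ZooKeeper answers imok.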
Modify regionservers
Replace the default contents of regionservers so that it lists the three nodes, one per line:
app-11
app-12
app-13
Add a backup-masters file
The file contains a single line, app-13, which makes app-13 a standby HMaster.
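If you prefer to create these two files from the shell, a one-liner each suffices (run inside the conf directory):
Command: printf 'app-11\napp-12\napp-13\n' > regionservers
Command: echo app-13 > backup-masters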
Install HBase
Download the HBase installation package
1. Switch to the root user
Command: sudo /bin/bash
2. Create the HBase directory
Command: mkdir /hadoop/HBase
3. Change the owner of the HBase directory to the hadoop user
Command: chown hadoop:hadoop /hadoop/HBase
4. Switch to the hadoop user
Command: su - hadoop
5. Enter the HBase installation directory
Command: cd /hadoop/HBase/
6. Download the HBase installation package
Command: wget http://archive.apache.org/dist/hbase/2.2.0/hbase-2.2.0-bin.tar.gz
7. Extract the installation package
Command: tar -xzf hbase-2.2.0-bin.tar.gz
Replace the configuration files
8. Enter the configuration directory
Command: cd hbase-2.2.0/conf/
9. Delete the configuration files that will be replaced
Command: rm -rf hbase-env.sh hbase-site.xml regionservers
10. Copy the pre-modified configuration files from the /tmp/Spark-stack/HBase/conf directory into the current directory
Command: cp /tmp/Spark-stack/HBase/conf/* ./
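A quick check that the copy brought over everything expected (assuming the prepared directory also contains the backup-masters file described earlier):
Command: ls -l hbase-env.sh hbase-site.xml regionservers backup-masters
Command: grep HBASE_MANAGES_ZK hbase-env.sh
The grep should print export HBASE_MANAGES_ZK=false.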
Modify environment variables
11. Open the environment variable file
Command: vi ~/.bashrc
12. Add the following two lines:
export HBASE_HOME=/hadoop/HBase/hbase-2.2.0
export PATH=${HBASE_HOME}/bin:$PATH
13. Make the environment variables take effect
Command: source ~/.bashrc
14. Check whether the environment variables are in effect
Command: echo $PATH
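A more direct check than inspecting PATH (assuming JAVA_HOME is already configured for the Hadoop stack):
Command: which hbase
Command: hbase version
The first should print /hadoop/HBase/hbase-2.2.0/bin/hbase, and the second should report version 2.2.0.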
Install HBase on the other two machines
15. First create the HBase directory on app-12 and app-13
Commands:
ssh hadoop@app-12 "mkdir /hadoop/HBase"
ssh hadoop@app-13 "mkdir /hadoop/HBase"
16. Copy HBase to app-12 and app-13
Commands:
scp -r -q /hadoop/HBase/hbase-2.2.0 hadoop@app-12:/hadoop/HBase/
scp -r -q /hadoop/HBase/hbase-2.2.0 hadoop@app-13:/hadoop/HBase/
17. Copy the environment variables to app-12 and app-13
Commands:
scp ~/.bashrc hadoop@app-12:~/
scp ~/.bashrc hadoop@app-13:~/
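To confirm the copied variables work on the remote nodes, source the file explicitly, since non-interactive ssh sessions do not always read ~/.bashrc (and some distributions' .bashrc files return early for non-interactive shells, in which case log in and check instead):
Command: ssh hadoop@app-12 'source ~/.bashrc && echo $HBASE_HOME'
Command: ssh hadoop@app-13 'source ~/.bashrc && echo $HBASE_HOME'
Each should print /hadoop/HBase/hbase-2.2.0.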
Cleanup (remove leftover files from a previous failed installation)
18. Remove the hbase directory in HDFS (skip this step if the directory does not exist)
Command: hdfs dfs -rm -r -f /hbase
19. Remove the hbase node in ZooKeeper; otherwise errors such as "Master is initializing" may occur.
Command: echo 'rmr /hbase' | zkCli.sh
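Note: rmr is deprecated in newer ZooKeeper releases; if the CLI rejects it, the equivalent command is:
Command: echo 'deleteall /hbase' | zkCli.sh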
20. Start HBase on app-12, because app-12 is the HBase master
Command: ssh hadoop@app-12 "cd /hadoop/HBase/hbase-2.2.0/bin && ./start-hbase.sh"
21. Check whether the startup succeeded
Command: ssh hadoop@app-12 "jps"
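If the start succeeded, jps on app-12 should list an HMaster and an HRegionServer. Given the regionservers and backup-masters files above, it is worth checking the other nodes too; app-11 should show an HRegionServer, and app-13 both an HRegionServer and the standby HMaster:
Command: ssh hadoop@app-11 "jps"
Command: ssh hadoop@app-13 "jps"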
22. Open the HBase web monitoring page
URL: http://app-12:16010 (the default HMaster web UI port; each RegionServer serves its own UI on port 16030)
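A headless check from the command line (a sketch using curl):
Command: curl -s -o /dev/null -w '%{http_code}\n' http://app-12:16010
A response of 200 means the master UI is up.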
Set up the automation script
23. Register HBase for automatic startup
Command: vi /hadoop/config.conf
24. Add export HBASE_IS_INSTALL=True
25. Make the environment variables take effect
Command: source ~/.bashrc
26. Confirm that the startAll.sh script includes HBase, as in the sketch below
Command: vi /hadoop/startAll.sh
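For reference, the HBase portion of such a start-all script typically follows the pattern below; this is an illustration, not the actual contents of /hadoop/startAll.sh, and it reuses the start command from step 20:
if [ "$HBASE_IS_INSTALL" = "True" ]; then
    ssh hadoop@app-12 "cd /hadoop/HBase/hbase-2.2.0/bin && ./start-hbase.sh"
fi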
For more detailed learning content, watch the course Spark Fast Big Data Processing, or search for Spark Yu Haifeng.