Fully distributed cluster based on HBase

This fully distributed cluster is based on HBase, but its ZooKeeper differs from the previous one. As we all know, HBase ships with a built-in ZooKeeper to simplify cluster deployment, and most of the time the built-in ZooKeeper makes deploying a cluster very convenient. This is quite similar to SolrCloud's cluster deployment in Solr: SolrCloud also has a built-in ZooKeeper, and Solr can take care of starting it, just as HBase takes care of starting its bundled ZooKeeper.

In fact, most distributed application frameworks depend on ZooKeeper's unified coordination service. Of course, instead of using the built-in one, we can also install and maintain an independent ZooKeeper cluster ourselves; everyone can weigh the pros and cons, and Sanxian will not comment here. Let's get to the topic: configuring an independent ZooKeeper cluster to manage HBase.



Before that, it should be noted that if an external ZooKeeper is used, its version is recommended to match the ZooKeeper version bundled with HBase, so as to avoid inexplicable errors as much as possible. The steps are summarized as follows:

1. Configure HBase's hbase-env.sh file
2. Configure the downloaded ZooKeeper 3.4.5
3. Distribute ZooKeeper to each node



The first step is to configure HBase's hbase-env.sh; the key setting is sketched below.
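A minimal sketch of hbase-env.sh, assuming a JDK under /usr/local/jdk (adjust the path to your installation); the essential line when using an external ZooKeeper is HBASE_MANAGES_ZK=false, which stops HBase from launching its bundled ZooKeeper:

# hbase-env.sh (sketch)
export JAVA_HOME=/usr/local/jdk   # assumption: point this at your JDK
# do not let HBase start and stop the bundled ZooKeeper
export HBASE_MANAGES_ZK=false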



The second step is to configure ZooKeeper: in its conf directory, rename zoo_sample.cfg to zoo.cfg, and in the dataDir directory (create it manually) add a myid file whose content is the number x taken from that node's server.x line. Modify zoo.cfg as follows:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/root/zookeeper/data
clientPort=2181

server.1=10.2.143.5:2887:3887
server.2=10.2.143.36:2888:3888
server.3=10.2.143.37:2889:3889
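
Each node's myid must match its server.x entry above; a sketch, assuming the dataDir from the config:

echo 1 > /root/zookeeper/data/myid   # on 10.2.143.5  (server.1)
echo 2 > /root/zookeeper/data/myid   # on 10.2.143.36 (server.2)
echo 3 > /root/zookeeper/data/myid   # on 10.2.143.37 (server.3)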

The third step is to use the scp command to copy ZooKeeper to the other nodes. Note that the number of ZooKeeper nodes should be odd; 3 or 5 is generally recommended, and you can configure more to improve the stability of the cluster.
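
A sketch of the distribution step, assuming ZooKeeper was unpacked under /root/zookeeper on the first node (the path is an assumption; the IPs come from zoo.cfg above):

scp -r /root/zookeeper root@10.2.143.36:/root/
scp -r /root/zookeeper root@10.2.143.37:/root/
# then write each node's matching myid, as shown earlier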



Finally, we can turn off the firewall and start the cluster. Pay attention to the startup order: start the Hadoop cluster first, then start ZooKeeper on each node, and finally start the HBase cluster. After a successful startup, jps prints output like the following:
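
A sketch of the start sequence, assuming the standard Hadoop, ZooKeeper, and HBase scripts are on the PATH; the jps process names listed are the typical ones for this layout:

start-all.sh        # on the Hadoop master: Hadoop comes up first
zkServer.sh start   # run on every ZooKeeper node
start-hbase.sh      # on the HBase master, last
jps                 # typical: NameNode/DataNode, QuorumPeerMain, HMaster/HRegionServer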


A screenshot of accessing the HBase web homepage (served by default on the master's port 60010 in HBase releases of this vintage) is as follows:


Next, we use the Java API to operate HBase:
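A minimal sketch of a table-creation client, assuming the classic HBaseAdmin API of 0.9x-era HBase; the class name, table name, and column family are hypothetical, while the quorum addresses and client port come from the zoo.cfg above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

public class HbaseClusterTest {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // point the client at the external ZooKeeper quorum
        conf.set("hbase.zookeeper.quorum", "10.2.143.5,10.2.143.36,10.2.143.37");
        conf.set("hbase.zookeeper.property.clientPort", "2181");

        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("mytable")); // hypothetical table name
        desc.addFamily(new HColumnDescriptor("cf"));                                // hypothetical column family
        admin.createTable(desc);
        admin.close();
        System.out.println("create table done");
    }
}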

Then we verify in the HBase shell on the server that the table-creation step just now succeeded:
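A sketch of the check (the table name matches the hypothetical one from the Java sketch):

hbase shell
list                 # the new table should appear in the listing
describe 'mytable'   # shows the table's column family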

So far, we have successfully deployed a cluster of HBase managed by an external ZooKeeper. Finally, pay attention to the order of stopping the cluster: shut down HBase first, then ZooKeeper, and finally Hadoop. Well, now you can take your curiosity and try to deploy with confidence.
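
A sketch of the stop sequence, mirroring the start scripts above:

stop-hbase.sh      # on the HBase master, first
zkServer.sh stop   # on every ZooKeeper node
stop-all.sh        # shut Hadoop down last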
