HBase summary (2) - HBase installation

This article introduces two HBase installation methods: standalone installation and pseudo-distributed installation.

The prerequisite for installation is that Hadoop has been installed successfully, and the Hadoop version must match the HBase version.

I will install HBase 0.94.11, which requires Hadoop 1.2.1.

HBase download address: http://mirror.bit.edu.cn/apache/hbase/hbase-0.94.11/

Unzip the downloaded hbase-0.94.11 archive to a suitable directory, such as /usr/hbase-0.94.11.

Then rename hbase-0.94.11 to hbase:

mv hbase-0.94.11 hbase
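The unpack-and-rename steps above can be sketched end to end. The snippet below simulates them in a temporary directory with a dummy tarball, so it does not need root; in a real installation the archive comes from the mirror above and the target directory is /usr:

```shell
set -e
workdir=$(mktemp -d)           # stand-in for /usr; avoids needing root
cd "$workdir"

# Create a dummy hbase-0.94.11.tar.gz purely for this demonstration
mkdir hbase-0.94.11
tar -zcf hbase-0.94.11.tar.gz hbase-0.94.11
rm -r hbase-0.94.11

# The two steps from the article:
tar -zxf hbase-0.94.11.tar.gz  # unpack the release
mv hbase-0.94.11 hbase         # rename to a version-free path
ls -d hbase                    # the install directory now exists
```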

First, add the bin directory under the HBase home to the system PATH: modify /etc/profile and append the following line:

export  PATH=$PATH:/usr/hbase/bin
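The current shell will not see the new PATH until /etc/profile is re-read (with source /etc/profile, or by logging in again). The sketch below demonstrates this against a temporary file rather than the real /etc/profile:

```shell
profile=$(mktemp)                        # stand-in for /etc/profile
echo 'export PATH=$PATH:/usr/hbase/bin' >> "$profile"
. "$profile"                             # same effect as: source /etc/profile
echo "$PATH" | grep -o '/usr/hbase/bin'  # confirm the entry is now on PATH
```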

 

1. Standalone installation

Modify the configuration file hbase-env.sh in the conf directory under the HBase home.

First, modify the following properties in hbase-env.sh:

export JAVA_HOME=/usr/java/jdk1.6

export HBASE_MANAGES_ZK=true   # HBase manages its own ZooKeeper; no separate ZooKeeper installation is required



2. Pseudo-distributed installation

Modify the configuration files hbase-env.sh and hbase-site.xml in the conf directory under the HBase home.

First, modify the following properties in hbase-env.sh:

export JAVA_HOME=/usr/java/jdk1.6 

export HBASE_CLASSPATH=/usr/hadoop/conf   # point HBase at Hadoop's configuration directory

export HBASE_MANAGES_ZK=true

 


Then, modify the hbase-site.xml file:

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <!-- Adjust this value to match your own Hadoop configuration -->
        <value>hdfs://192.168.70.130:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
</configuration>
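The hbase.rootdir value must match the HDFS address (fs.default.name) in Hadoop's core-site.xml, including host and port; otherwise HBase cannot reach the filesystem. A matching core-site.xml fragment would look like this, where the host and port are the article's example values:

```xml
<property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.70.130:9000</value>
</property>
```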




After completing the above steps, HBase can be started normally. Startup order: start Hadoop first, then HBase. Shutdown order: stop HBase first, then Hadoop.


 


First start Hadoop. (If Hadoop is already running, do not start it again; just check that its processes are correct. If they are not, debug Hadoop until it runs normally before starting HBase.) Also take HDFS out of safe mode: bin/hadoop dfsadmin -safemode leave

start-all.sh     # start Hadoop
jps              # view the processes

 

2564 SecondaryNameNode
2391 DataNode
2808 TaskTracker
2645 JobTracker
4581 Jps
2198 NameNode

 

Start HBase:

start-hbase.sh    

 

Check again with jps:

2564 SecondaryNameNode
2391 DataNode
4767 HQuorumPeer
2808 TaskTracker
2645 JobTracker
5118 Jps
4998 HRegionServer
4821 HMaster
2198 NameNode

The HBase-related processes (HMaster, HRegionServer, and HQuorumPeer) have started.

hbase shell

     

This enters shell mode:

HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.12, r1524863, Fri Sep 20 00:25:45 UTC 2013

hbase(main):001:0> 

The HBase status page can be viewed at http://localhost:60010/master-status

 

 


Stopping HBase: if an error occurs while operating HBase, the cause can be found in the logs subdirectory under the HBase installation directory.

 

First stop HBase:

stop-hbase.sh

Then stop Hadoop:

stop-all.sh

 

 

 

Troubleshooting:

1. Error:

localhost: Exception in thread "main" org.apache.hadoop.ipc.RPC$VersionMismatch: Protocol org.apache.hadoop.hdfs.protocol.ClientProtocol version mismatch. (client = 42, server = 41)

If you hit this error, fix it by replacing a jar. (It generally does not occur with newer versions of Hadoop and HBase.)

Replace the jar in HBase: use hadoop-1.2.1-core.jar from {HADOOP_HOME} to replace hadoop-1.2.1-append-r1056497.jar in {HBASE_HOME}/lib. If the jar is not replaced, HMaster will fail to start when HBase boots, because the Hadoop and HBase client protocols do not match.
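The jar swap can be sketched as follows. The snippet simulates it in temporary directories with empty placeholder files; the real jar names should be verified against what is actually present in {HADOOP_HOME} and {HBASE_HOME}/lib before deleting anything:

```shell
set -e
HADOOP_HOME=$(mktemp -d)     # stand-ins for the real install dirs
HBASE_HOME=$(mktemp -d)
mkdir "$HBASE_HOME/lib"
touch "$HADOOP_HOME/hadoop-1.2.1-core.jar"
touch "$HBASE_HOME/lib/hadoop-1.2.1-append-r1056497.jar"

# The swap described above:
rm "$HBASE_HOME/lib/hadoop-1.2.1-append-r1056497.jar"
cp "$HADOOP_HOME/hadoop-1.2.1-core.jar" "$HBASE_HOME/lib/"
ls "$HBASE_HOME/lib"         # now contains only the Hadoop core jar
```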

2. Error:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/........../lib/slf4j-log4j12-1.5.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/........../slf4j-log4j12-1.5.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

The slf4j-api-1.4.3.jar and slf4j-log4j12-1.4.3.jar in Hadoop's lib directory conflict with different versions of the same two jars in HBase's lib directory. Replacing the ones in HBase with the ones from Hadoop resolves the conflict.

3. Strange problems sometimes occur when installing fully distributed HBase. First check the clocks on the servers in the cluster; they should not differ by much and ideally should be identical.

 

NTP: the clocks across the cluster must be kept basically consistent. Small differences are tolerable, but large skew causes strange behavior. Run NTP or something similar to synchronize the time.

If you hit strange failures, or problems when querying, check whether the system time is correct.

To set the clock on a cluster node: date -s "2012-02-13 14:00:00"

4. Error:

2014-10-09 14:11:53,824 WARN org.apache.hadoop.hbase.master.AssignmentManager: Failed assignment of -ROOT-,,0.70236052 to 127.0.0.1,60020,1412820297393, trying to assign elsewhere instead; retry=0
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to /127.0.0.1:60020 after attempts=1
at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:242)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1278)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1235)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1222)
at org.apache.hadoop.hbase.master.ServerManager.getServerConnection(ServerManager.java:496)
at org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:429)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1592)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1329)
at org.apache.hadoop.hbase.master.AssignCallable.call(AssignCallable.java:44)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.net.ConnectException: Connection refused

 

The solution is as follows:

The problem is in the /etc/hosts configuration: the hostname is mapped to 127.0.1.1. Change 127.0.1.1 to 127.0.0.1 and the error goes away.
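The /etc/hosts fix can be sketched with sed. The snippet below works on a sample copy rather than the real file, and the hostname myhost is a placeholder; substitute your node's actual hostname:

```shell
set -e
hosts=$(mktemp)                                   # sample copy of /etc/hosts
printf '127.0.0.1 localhost\n127.0.1.1 myhost\n' > "$hosts"

# Rewrite the 127.0.1.1 hostname mapping to 127.0.0.1:
sed -i 's/^127\.0\.1\.1/127.0.0.1/' "$hosts"
grep myhost "$hosts"                              # prints: 127.0.0.1 myhost
```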
