HBase installation and configuration

1. Download the installation package hbase-0.98.6-cdh5.3.6.tar.gz and extract it.

Link: https://pan.baidu.com/s/1vsz2Cqh2cp0n99sHS_xBzg
Extraction code: 4abh
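The download-and-extract step might be scripted as follows; the tarball name comes from the guide, while the install directory here is an assumption (adjust it to your environment):

```shell
# Sketch: unpack the downloaded tarball into an install directory.
# INSTALL_DIR is an assumed location; point it at your real install root.
TARBALL=hbase-0.98.6-cdh5.3.6.tar.gz
INSTALL_DIR=${INSTALL_DIR:-./server}
mkdir -p "$INSTALL_DIR"
if [ -f "$TARBALL" ]; then
  # -z: gunzip, -x: extract, -f: file, -C: target directory
  tar -zxf "$TARBALL" -C "$INSTALL_DIR"
fi
```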

2. In the conf directory, edit hbase-env.sh: set JAVA_HOME and configure whether HBase should manage its own ZooKeeper:

export JAVA_HOME=/home/cmcc/server/jdk1.8.0_181
export HBASE_MANAGES_ZK=false
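The two settings above can be appended to conf/hbase-env.sh in one step, for example (the conf path here defaults to a local directory for illustration; point it at your real $HBASE_HOME/conf):

```shell
# Sketch: append the step-2 settings to conf/hbase-env.sh.
HBASE_CONF=${HBASE_CONF:-./conf}
mkdir -p "$HBASE_CONF"
cat >> "$HBASE_CONF/hbase-env.sh" <<'EOF'
export JAVA_HOME=/home/cmcc/server/jdk1.8.0_181
export HBASE_MANAGES_ZK=false
EOF
```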

3. Edit hbase-site.xml in the conf directory:

1) HBase root directory on HDFS, pointing at the NameNode (for a single node, use that node's address):
 <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop1:9000/hbase</value>
</property>
2) Whether to run HBase in distributed mode:

 <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
 </property>

3) Configure the HBase master port
  (1) First method: specify only the port number, because HMaster will run in high-availability mode:

    <property>
        <name>hbase.master.port</name>
        <value>60000</value>
    </property>

  (2) Second method: pin the master to a fixed host and port (via the legacy hbase.master property):

    <property>
        <name>hbase.master</name>
        <value>hadoop1:60000</value>
    </property>

4) Configure ZooKeeper. The number of ZooKeeper nodes must be odd; if there is more than one node, configure it as: <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
  <property>
      <name>hbase.zookeeper.quorum</name>
      <value>hadoop1:2181</value>
  </property>

  5) Configure the ZooKeeper data directory:

  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/cmcc/server/zookeeper/data</value>
  </property>

  6) Configure the ZooKeeper client port:

  <property>
      <name>hbase.zookeeper.property.clientPort</name>
      <value>2181</value>
   </property>

  7) Set to false when running on the local file system; set to true when running on HDFS:
   <property>
       <name>hbase.unsafe.stream.capability.enforce</name>
       <value>true</value>
   </property>
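Putting properties 1)–7) together, a minimal conf/hbase-site.xml for this setup might look like the following (hostnames and paths follow the examples above; the three-node quorum variant is shown):

```xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop1:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.master.port</name>
    <value>60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/cmcc/server/zookeeper/data</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>true</value>
  </property>
</configuration>
```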

4. Edit the regionservers file (the equivalent of Hadoop's slaves file).

  For a single node, add: hadoop1

  For multiple nodes, add:

    hadoop1

    hadoop2

    hadoop3
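Writing the regionservers file for the three-node example above can be sketched as (the conf path defaults to a local directory for illustration; use your real $HBASE_HOME/conf):

```shell
# Sketch: generate conf/regionservers for the example cluster hadoop1..hadoop3.
HBASE_CONF=${HBASE_CONF:-./conf}
mkdir -p "$HBASE_CONF"
# One hostname per line, overwriting any existing file.
printf '%s\n' hadoop1 hadoop2 hadoop3 > "$HBASE_CONF/regionservers"
```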

5. Delete all jars in HBase's lib directory whose names start with hadoop-, then copy in the corresponding jars from your Hadoop installation; likewise replace the bundled ZooKeeper jar with the one from your ZooKeeper installation.

  First enter the Hadoop directory and search for each jar, copying it to a staging directory, e.g. find -name hadoop-annotations, then copy the results to /home/cmcc/server/t1/; finally copy all the jars into HBase's lib directory (if it is a cluster, do not forget to do the same on the other machines). The jars needed are:

hadoop-annotations-2.5.0.jar
hadoop-auth-2.5.0-cdh5.3.6.jar
hadoop-client-2.5.0-cdh5.3.6.jar
hadoop-common-2.5.0-cdh5.3.6.jar
hadoop-hdfs-2.5.0-cdh5.3.6.jar
hadoop-mapreduce-client-app-2.5.0-cdh5.3.6.jar
hadoop-mapreduce-client-common-2.5.0-cdh5.3.6.jar
hadoop-mapreduce-client-core-2.5.0-cdh5.3.6.jar
hadoop-mapreduce-client-hs-2.5.0-cdh5.3.6.jar
hadoop-mapreduce-client-hs-plugins-2.5.0-cdh5.3.6.jar
hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.6.jar
hadoop-mapreduce-client-jobclient-2.5.0-cdh5.3.6-tests.jar
hadoop-mapreduce-client-shuffle-2.5.0-cdh5.3.6.jar
hadoop-yarn-api-2.5.0-cdh5.3.6.jar
hadoop-yarn-applications-distributedshell-2.5.0-cdh5.3.6.jar
hadoop-yarn-applications-unmanaged-am-launcher-2.5.0-cdh5.3.6.jar
hadoop-yarn-client-2.5.0-cdh5.3.6.jar
hadoop-yarn-common-2.5.0-cdh5.3.6.jar
hadoop-yarn-server-applicationhistoryservice-2.5.0-cdh5.3.6.jar
hadoop-yarn-server-common-2.5.0-cdh5.3.6.jar
hadoop-yarn-server-nodemanager-2.5.0-cdh5.3.6.jar
hadoop-yarn-server-resourcemanager-2.5.0-cdh5.3.6.jar
hadoop-yarn-server-tests-2.5.0-cdh5.3.6.jar
hadoop-yarn-server-web-proxy-2.5.0-cdh5.3.6.jar
zookeeper-3.4.5-cdh5.3.6.jar
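The delete-then-copy step above might be scripted as follows; both directory locations are assumptions (they default to local paths here so the sketch is safe to run — point them at your real installs):

```shell
# Sketch: swap HBase's bundled hadoop-*.jar files for the cluster's own jars.
HBASE_LIB=${HBASE_LIB:-./hbase-0.98.6-cdh5.3.6/lib}
HADOOP_HOME=${HADOOP_HOME:-./hadoop-2.5.0-cdh5.3.6}
mkdir -p "$HBASE_LIB"
# Remove the Hadoop jars that shipped with HBase.
find "$HBASE_LIB" -maxdepth 1 -name 'hadoop-*.jar' -delete
# Copy in the jars from the running Hadoop installation, if present.
if [ -d "$HADOOP_HOME" ]; then
  find "$HADOOP_HOME" -name 'hadoop-*.jar' -exec cp {} "$HBASE_LIB"/ \;
fi
```

On a cluster, repeat this (or rsync the lib directory) on every machine.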

6. Copy hbase+hadoop_repository.tar.gz and CDH_HadoopJar.tar.gz into the lib directory; both can be downloaded from the network disk (if it is a cluster, do not forget to do the same on the other machines).

7. Copy Hadoop's core-site.xml and hdfs-site.xml into HBase's conf directory (if it is a cluster, do not forget to do the same on the other machines).
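This copy step can be sketched as below; the two directory names are assumptions (they default to local paths for illustration), and the scp line shows how the same files would be pushed to another node:

```shell
# Sketch: copy Hadoop's HDFS client configs into HBase's conf directory.
HADOOP_CONF=${HADOOP_CONF:-./hadoop-conf}
HBASE_CONF=${HBASE_CONF:-./hbase-conf}
mkdir -p "$HADOOP_CONF" "$HBASE_CONF"
for f in core-site.xml hdfs-site.xml; do
  if [ -f "$HADOOP_CONF/$f" ]; then
    cp "$HADOOP_CONF/$f" "$HBASE_CONF/"
  fi
done
# On a cluster, push the conf dir to every node as well, e.g.:
#   scp -r "$HBASE_CONF" cmcc@hadoop2:/home/cmcc/server/hbase-0.98.6-cdh5.3.6/
```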

8. Start the service:

bin/start-hbase.sh

After startup, jps should show an HMaster process (and an HRegionServer process on each node listed in regionservers); for HBase 0.98 the master web UI listens on port 60010 by default.

 


Origin www.cnblogs.com/redhat0019/p/11842035.html