HBase fully distributed cluster setup

Summary:

  Hadoop stand-alone, pseudo-distributed, and fully distributed installation

  Hadoop 2.8 HA cluster setup

  Summary of issues encountered while building the Hadoop cluster

Fully distributed HBase installation steps:

  Note: this HBase cluster is built on top of the Hadoop cluster set up earlier (after that cluster was built, running jps on each node shows the following processes):

  

Host            | Alias | Installed software                              | Running processes                                                          | Web address
192.168.248.138 | cdh1  | hadoop2.8, jdk1.8, hbase1.4.0                   | NameNode, DFSZKFailoverController, HMaster                                 | http://cdh1:50070  http://cdh1:16010/master-status
192.168.248.139 | cdh2  | hadoop2.8, jdk1.8                               | NameNode, DFSZKFailoverController                                          | http://cdh2:50070
192.168.248.140 | cdh3  | hadoop2.8, jdk1.8                               | ResourceManager                                                            |
192.168.248.141 | cdh4  | hadoop2.8, jdk1.8, zookeeper3.4.13              | QuorumPeerMain, JournalNode, DataNode, NodeManager                         |
192.168.248.142 | cdh5  | hadoop2.8, jdk1.8, zookeeper3.4.13, hbase1.4.0  | QuorumPeerMain, JournalNode, DataNode, NodeManager, HMaster, HRegionServer | http://cdh5:16010/master-status
192.168.248.143 | cdh6  | hadoop2.8, jdk1.8, zookeeper3.4.13, hbase1.4.0  | QuorumPeerMain, JournalNode, DataNode, NodeManager, HRegionServer          |
192.168.248.144 | cdh7  | hadoop2.8, jdk1.8, hbase1.4.0                   | JournalNode, DataNode, NodeManager, HRegionServer                          |

   1> Choose an HBase version compatible with Hadoop 2.8; here I chose HBase 1.4. The exact version compatibility can be looked up online (e.g. on Baidu).

   2> Upload the HBase tarball to the /hadoop directory and extract it (with root privileges).
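
    A minimal sketch of this step; the tarball name and the rename are assumptions, adjust them to the file you actually downloaded:

      # run as root on the node where HBase is configured first
      cd /hadoop
      tar -zxvf hbase-1.4.0-bin.tar.gz     # extract the HBase distribution into /hadoop
      mv hbase-1.4.0 hbase                 # optional: rename to a shorter directory name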

    

   3> Configure hbase-env.sh and hbase-site.xml

    Note: before modifying the configuration files, first add the HBase environment variables to /etc/profile. This step was explained many times while building the Hadoop cluster, so only a sketch is given below.
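
    A sketch of the /etc/profile additions; the install path /hadoop/hbase is an assumption, use the directory you extracted HBase into:

      export HBASE_HOME=/hadoop/hbase
      export PATH=$PATH:$HBASE_HOME/bin
      # reload the profile so the variables take effect in the current shell
      source /etc/profile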

    

    Modify hbase-env.sh under the HBase conf directory ($HBASE_HOME/conf):

      export JAVA_HOME=/hadoop/jdk1.8.0_181    # change this to your own JDK path

      export HBASE_MANAGES_ZK=false            # do not use the ZooKeeper bundled with HBase; use our own ZooKeeper cluster

    Modify hbase-site.xml [look at the properties carefully and they will make sense; a sketch follows]:
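
      A minimal sketch of an hbase-site.xml for this topology. The HDFS nameservice name ns1 is an assumption and must match the nameservice defined in your hdfs-site.xml; cdh4, cdh5 and cdh6 are the ZooKeeper nodes from the table above:

      <configuration>
        <!-- where HBase stores its data on HDFS; "ns1" must match the HA nameservice in hdfs-site.xml -->
        <property>
          <name>hbase.rootdir</name>
          <value>hdfs://ns1/hbase</value>
        </property>
        <!-- run HBase in fully distributed mode -->
        <property>
          <name>hbase.cluster.distributed</name>
          <value>true</value>
        </property>
        <!-- the external ZooKeeper quorum (the nodes running QuorumPeerMain) -->
        <property>
          <name>hbase.zookeeper.quorum</name>
          <value>cdh4,cdh5,cdh6</value>
        </property>
      </configuration>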

      

    4> Modify the regionservers file to configure the slave (region server) nodes, as shown below.
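
      The regionservers file (also under $HBASE_HOME/conf) lists one region-server host per line; for this cluster it contains:

      cdh5
      cdh6
      cdh7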

      

    Note: HBase is supposed to be started on cdh5, so cdh5 becomes an HMaster; for high availability I also start it on the NameNode host cdh1, so both cdh1 and cdh5 run HMaster, while cdh5, cdh6 and cdh7 run HRegionServer once the build is finished [you will see this later]. A sketch of how the second HMaster can be brought up follows.
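
      Both options below are standard HBase mechanisms for running a second HMaster; which one was actually used here is an assumption, pick either:

      # option 1: list the extra master in conf/backup-masters before starting,
      #           so that start-hbase.sh on cdh1 also brings up an HMaster on cdh5
      echo "cdh5" > $HBASE_HOME/conf/backup-masters

      # option 2: after start-hbase.sh has been run on cdh1, start a master by hand on cdh5
      $HBASE_HOME/bin/hbase-daemon.sh start master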

     5> Synchronize HBase to the machines cdh5, cdh6 and cdh7

      scp -r  $HBASE_HOME  cdh5:/hadoop/

      scp -r  $HBASE_HOME  cdh6:/hadoop/

      scp -r  $HBASE_HOME  cdh7:/hadoop/

    6> Start HBase on the primary node cdh1, and the whole cluster will be started along with it

  Note! Note! Note! Before starting the HBase cluster you must make sure the nodes' clocks are synchronized; otherwise the HBase cluster will not start, or only part of it will start when the clock skew happens to be within the allowed range. Remember this.

    The simplest way to synchronize the time is  date -s "2019-05-31 09:02:00"  [with this method the synchronization is lost after a reboot]. You can also use ntpdate (this is not lost after a reboot, because it synchronizes the time online).
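
    A sketch of the online synchronization, assuming ntpdate is installed and every node can reach an NTP server (the server name below is only an example):

      # run on every node (cdh1 .. cdh7)
      ntpdate ntp.aliyun.com     # one-shot sync against an NTP server
      date                       # confirm all nodes now show the same time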

    Once the time is synchronized you can start the HBase cluster: go to $HBASE_HOME/bin and run start-hbase.sh.
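
    A quick way to confirm the startup, assuming the processes land on the nodes as in the table above:

      # on cdh1
      cd $HBASE_HOME/bin
      ./start-hbase.sh           # starts HMaster here and HRegionServer on the hosts listed in regionservers

      # then on each node
      jps                        # cdh1 and cdh5 should show HMaster; cdh5, cdh6, cdh7 should show HRegionServer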

    

     At this point the fully distributed HBase cluster is built as well.


Web access:

   http://192.168.248.138:16010/master-status

  

 


Origin www.cnblogs.com/huhongy/p/10953647.html