HBase Distributed Environment Setup

I. Environment Preparation

Hostname  OS          IP               NTP (slewed sync)          JDK   ZooKeeper  Hadoop  HBase
node1     CentOS 7.8  192.168.198.241  syncs with ntp.aliyun.com  JDK8  3.5.8      3.2.1   2.2.4
node2     CentOS 7.8  192.168.198.242  syncs with node1           JDK8  3.5.8      3.2.1   2.2.4
node3     CentOS 7.8  192.168.198.243  syncs with node1           JDK8  3.5.8      3.2.1   2.2.4

Dependency version requirements from the HBase official site

Download links:

JDK

hbase

hadoop

zookeeper


II. System Settings

1. Static IP address (node1 as example)

vim /etc/sysconfig/network-scripts/ifcfg-ens33
#Modify as follows:
BOOTPROTO=static
IPADDR=192.168.198.241
NETMASK=255.255.255.0
GATEWAY=192.168.198.2
DNS1=8.8.8.8
DNS2=114.114.114.114
ONBOOT=yes
systemctl restart network
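
Optionally, a quick sanity check that the static address and gateway are working (assuming the interface is ens33, as above):

ip addr show ens33
ping -c 3 192.168.198.2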

2. Hostname and hosts mapping (node1 as example)

hostname
hostnamectl set-hostname node1
hostname

vim /etc/hosts
#Add the following entries:
192.168.198.241 node1
192.168.198.242 node2
192.168.198.243 node3

3. Disable the firewall (node1 as example)

systemctl stop firewalld
systemctl disable firewalld
firewall-cmd --state

4. Disable SELinux (node1 as example)

vim /etc/selinux/config
#Modify as follows:
SELINUX=disabled
getenforce
setenforce 0
getenforce

5. Create user and group (node1 as example)

groupadd hadoop
useradd -g hadoop hadoop
passwd hadoop

6. Raise resource limits for the hadoop user (node1 as example)

vim /etc/security/limits.conf
#Append the following:
hadoop soft nproc 2047
hadoop hard nproc 16384
hadoop soft nofile 10240
hadoop hard nofile 65535
vim /etc/pam.d/login

#Append the following:
session    required     pam_limits.so
service sshd restart
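
Optionally, confirm the new limits from a fresh hadoop session:

su - hadoop
ulimit -u    #max user processes; should report 2047 (soft nproc)
ulimit -n    #max open files; should report 10240 (soft nofile)
exit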

7. Passwordless SSH login (run on node1 only)

su - hadoop
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@node1
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@node2
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@node3

#Test passwordless login
ssh hadoop@node1
exit
ssh hadoop@node2
exit
ssh hadoop@node3
exit

III. NTP Installation and Configuration (node1 as example)

NTP server setup
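
The linked article covers the full NTP setup. As a minimal sketch (assuming the ntp package from the CentOS base repository, and the plan from the environment table: node1 syncs with ntp.aliyun.com, node2/node3 sync with node1):

#All nodes (as root)
yum install -y ntp

#node1: in /etc/ntp.conf, point to the Aliyun time server
server ntp.aliyun.com iburst
#node1 may also need a restrict line allowing the cluster subnet to query it
restrict 192.168.198.0 mask 255.255.255.0 nomodify notrap

#node2 / node3: in /etc/ntp.conf, point to node1
server 192.168.198.241 iburst

#Optional: for slewed synchronization, add -x to OPTIONS in /etc/sysconfig/ntpd

#All nodes: enable, start, and verify peers
systemctl enable ntpd
systemctl start ntpd
ntpq -p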

IV. JDK Installation (node1 as example)

1. Configure the yum repository

su - root
yum install -y wget
cd /etc/yum.repos.d
mv CentOS-Base.repo CentOS-Base.repo.bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.163.com/.help/CentOS7-Base-163.repo
yum makecache

2. Install openjdk-devel

yum install -y java-1.8.0-openjdk-devel.x86_64
Check the java installation directory (symlink chain):
which java
ls -lrt /usr/bin/java
ls -lrt /etc/alternatives/java

vim /etc/profile
#Append the following:
JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export JAVA_HOME
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/jre/lib/rt.jar
export CLASSPATH
PATH=$JAVA_HOME/bin:$PATH
export PATH
source /etc/profile
echo $JAVA_HOME
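
Optionally, a quick check that the JDK is installed and on the PATH:

java -version
javac -version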

V. Upload and Extract the Installation Packages

Note: ZooKeeper, Hadoop, and HBase are all installed and configured as the hadoop user. Install and configure them on node1 first, then distribute them to the other nodes with scp.

su - root
mkdir /home/softs
cd /home/softs
Upload the packages:
apache-zookeeper-3.5.8-bin.tar.gz  hadoop-3.2.1.tar.gz  hbase-2.2.4-bin.tar.gz  hbase-2.2.4-client-bin.tar.gz

su - hadoop
tar -zxvf /home/softs/apache-zookeeper-3.5.8-bin.tar.gz
tar -zxvf /home/softs/hadoop-3.2.1.tar.gz
tar -zxvf /home/softs/hbase-2.2.4-bin.tar.gz
tar -zxvf /home/softs/hbase-2.2.4-client-bin.tar.gz

VI. ZooKeeper Installation and Configuration

1.node1

mkdir -p /home/hadoop/data/zookeeper/data
mkdir -p /home/hadoop/data/zookeeper/logs
echo '1'>/home/hadoop/data/zookeeper/data/myid
cd /home/hadoop/apache-zookeeper-3.5.8-bin
cd conf/
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg
Content as follows:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/home/hadoop/data/zookeeper/data
dataLogDir=/home/hadoop/data/zookeeper/logs
# the port at which the clients will connect
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
# the maximum number of client connections.
# increase this if you need to handle more clients
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=300000
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

Note: there must be no spaces before, after, or inside server.1=node1:2888:3888. If the service fails to start, use ./zkServer.sh start-foreground to debug.

[hadoop@node1 bin]$ ./zkServer.sh --help
ZooKeeper JMX enabled by default
Using config: /home/hadoop/apache-zookeeper-3.5.8-bin/bin/../conf/zoo.cfg
Usage: ./zkServer.sh [--config <conf-dir>] {start|start-foreground|stop|restart|status|print-cmd}
[hadoop@node1 bin]$ ./zkServer.sh start-foreground

Configuration file notes:

① tickTime: the basic time unit, used as the heartbeat interval between ZooKeeper servers and between clients and servers; a heartbeat is sent every tickTime. The minimum session timeout is 2 × tickTime.

② dataDir: the location where in-memory database snapshots are stored and, unless otherwise specified, the transaction logs of database updates. Note: choose the log location carefully; a dedicated log device greatly improves performance, while placing the logs on a busy device hurts it considerably.

③ clientPort: the port that listens for client connections.

④ initLimit: the time, in ticks (tickTime units), allowed for followers to connect and sync to the leader during initialization. If the initial connection takes longer than this, it is considered failed.

⑤ syncLimit: the maximum time, in ticks, allowed for a request and its acknowledgement between the leader and a follower. If a follower cannot communicate with the leader within this time, the follower is dropped.

⑥ server.A=B:C:D

A: a number identifying the server;

B: the server's IP address (or hostname);

C: the port followers use to communicate with the leader;

D: the port used for leader election between the ZooKeeper servers.

The first thing to modify is dataDir; create that directory at the specified location.

The second thing to add is the server.A=B:C:D entries, where A corresponds to the myid file introduced below, B is the IP address of each cluster node, and C:D are the port settings.

cd /home/hadoop/apache-zookeeper-3.5.8-bin/conf
vim  java.env
Add the following:
export JVMFLAGS="-Xms1g -Xmx1g $JVMFLAGS"

Start ZooKeeper:

cd /home/hadoop/apache-zookeeper-3.5.8-bin/bin
./zkServer.sh start
./zkServer.sh status
[hadoop@node1 bin]$ jps -l
7395 sun.tools.jps.Jps
7284 org.apache.zookeeper.server.quorum.QuorumPeerMain

Stop ZooKeeper:

cd /home/hadoop/apache-zookeeper-3.5.8-bin/bin
./zkServer.sh stop
./zkServer.sh status

Login verification:

./zkCli.sh -server node1:2181
[zk: node1:2181(CONNECTED) 0] ls /
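
Optionally, a short read/write smoke test inside zkCli.sh; the znode name /test below is just an example:

[zk: node1:2181(CONNECTED) 1] create /test "hello"
[zk: node1:2181(CONNECTED) 2] get /test
[zk: node1:2181(CONNECTED) 3] delete /test
[zk: node1:2181(CONNECTED) 4] quit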

Distribute the files to the other nodes:

scp -r apache-zookeeper-3.5.8-bin/ hadoop@node2:/home/hadoop/
scp -r apache-zookeeper-3.5.8-bin/ hadoop@node3:/home/hadoop/
scp -r data hadoop@node3:/home/hadoop/
scp -r data hadoop@node2:/home/hadoop/

2.node2

echo '2'>/home/hadoop/data/zookeeper/data/myid

3.node3

echo '3'>/home/hadoop/data/zookeeper/data/myid
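
Once ZooKeeper has also been started on node2 and node3 (the same zkServer.sh start as on node1), the roles can be checked from node1; one node should report leader and the other two follower. A small helper loop, assuming the passwordless SSH set up earlier:

for h in node1 node2 node3; do
  echo "== $h =="
  ssh hadoop@$h "/home/hadoop/apache-zookeeper-3.5.8-bin/bin/zkServer.sh status"
done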

VII. Hadoop Installation and Configuration

Hostname  Roles
node1     NameNode, Secondary NameNode, DataNode, ResourceManager, NodeManager
node2     DataNode, NodeManager
node3     DataNode, NodeManager

1. Set environment variables

su - hadoop
vim ~/.bashrc
#hadoop + hbase 
export HADOOP_HOME=/home/hadoop/hadoop-3.2.1
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HBASE_HOME=/home/hadoop/hbase-2.2.4
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin
source ~/.bashrc
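
Optionally, confirm that the new variables are picked up:

which hadoop      #should print /home/hadoop/hadoop-3.2.1/bin/hadoop
hadoop version    #should report Hadoop 3.2.1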

2. Configure hadoop-env.sh and yarn-env.sh

cd $HADOOP_HOME
vim etc/hadoop/hadoop-env.sh
#Append at the end:
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk

vim etc/hadoop/yarn-env.sh
#Append at the end:
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk

3. Configure core-site.xml with the HDFS address and port

cd $HADOOP_HOME/etc/hadoop
vim core-site.xml

#Modify as follows:
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node1:9988</value>
  </property>
  <property>
      <name>hadoop.tmp.dir</name>
          <value>/home/hadoop/data/hadoop/data/tmp</value>
  </property>
  <property>
      <name>io.file.buffer.size</name>
          <value>131072</value>
  </property>
  <property>
      <name>ipc.server.tcpnodelay</name>
          <value>true</value>
  </property>
  <property>
      <name>io.native.lib.available</name>
          <value>true</value>
  </property>
  <property>
      <name>io.compression.codecs</name>
<value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,com.hadoop.compression.lzo.LzoCodec,com.hadoop.compression.lzo.LzopCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec</value>
  </property>
  <property>
      <name>io.compression.codec.lzo.class</name>
          <value>com.hadoop.compression.lzo.LzoCodec</value>
  </property>
</configuration>

Create the data directories:

cd /home/hadoop/data
mkdir -p hadoop/data/tmp

4. Configure hdfs-site.xml

cd $HADOOP_HOME/etc/hadoop
vim hdfs-site.xml

#Modify as follows:

Note: replication is the number of data replicas; the default is 3, and errors will be reported if there are fewer than 3 slave nodes.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.block.size</name>
        <value>67108864</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>node1:50071</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.datanode.max.xcievers</name>
        <!-- the HBase reference guide recommends at least 4096 -->
        <value>4096</value>
    </property>
    <property>
        <name>dfs.namenode.handler.count</name>
        <value>20</value>
    </property>
    <property>
        <name>dfs.datanode.handler.count</name>
        <value>50</value>
    </property>
    <property>
        <name>dfs.hosts</name>
        <value></value>
    </property>
    <property>
        <name>dfs.hosts.exclude</name>
        <value></value>
    </property>
    <property>
        <name>dfs.client.read.shortcircuit</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/home/hadoop/data/hadoop/data/data</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/home/hadoop/data/hadoop/data/name</value>
    </property>
</configuration>

Create the data directories:

cd /home/hadoop/data/hadoop/data
mkdir data
mkdir name

5. Configure mapred-site.xml (MapReduce framework and JobHistory server address/ports)

vim mapred-site.xml
Content as follows:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
   <property>
      <name>mapreduce.framework.name</name>
          <value>yarn</value>
  </property>
  <property>
      <name>mapreduce.jobhistory.address</name>
          <value>node1:7070</value>
  </property>
  <property>
      <name>mapreduce.jobhistory.webapp.address</name>
          <value>node1:7080</value>
  </property>
  <property>
      <name>dfs.client.read.shortcircuit</name>
          <value>false</value>
  </property>
</configuration>

6. Configure yarn-site.xml

vim yarn-site.xml
Content as follows:
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <!--
             <property>
        <name>yarn.application.classpath</name>
        <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce1/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce1/lib/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce2/*,$HADOOP_MAPRED_HOME/share/hadoop/mapreduce2/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
    </property>
    -->
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>node1:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>node1:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>node1:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>node1:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>node1:8899</value>
    </property>
    <property>
        <name>dfs.client.read.shortcircuit</name>
        <value>false</value>
    </property>
</configuration>

7. Create new files in the $HADOOP_HOME/etc/hadoop directory

vim masters
node1

#Note: in newer versions (3.x), the slaves file has been renamed to workers
vim workers
node1
node2
node3

8. Format the filesystem

cd $HADOOP_HOME/bin
hadoop namenode -format

9. Distribute files to the other nodes

cd ~
scp -r hadoop-3.2.1 hadoop@node2:/home/hadoop
scp -r hadoop-3.2.1 hadoop@node3:/home/hadoop
cd ~/data
scp -r hadoop hadoop@node2:/home/hadoop/data/
scp -r hadoop hadoop@node3:/home/hadoop/data/

10. Start HDFS and YARN; the startup scripts are in $HADOOP_HOME/sbin (only needs to be run on node1)

cd $HADOOP_HOME/sbin
./start-all.sh
./mr-jobhistory-daemon.sh start historyserver

(Screenshot: Hadoop processes after successful startup)
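
The running daemons can also be cross-checked with jps on each node. Based on the role table at the start of this part, the expected processes are roughly:

#node1
jps
#expected: NameNode, SecondaryNameNode, DataNode, ResourceManager, NodeManager,
#          JobHistoryServer, QuorumPeerMain

#node2 / node3
jps
#expected: DataNode, NodeManager, QuorumPeerMain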

11. Operation commands

cd $HADOOP_HOME/bin
hdfs dfsadmin -report

cd $HADOOP_HOME/sbin
#Start the Hadoop cluster
./start-all.sh
#Stop the Hadoop cluster
./stop-all.sh
#Job history details
./mr-jobhistory-daemon.sh start historyserver


Some simple operation commands:
#List all files under the root directory (empty right after installation)
hdfs dfs -ls /
#Upload a file to the root directory
hdfs dfs -put file /
#The file just uploaded should now appear
hdfs dfs -ls /
#Delete a file
hdfs dfs -rm filepath

HDFS cluster web UI: http://192.168.198.241:50071

YARN web UI: http://192.168.198.241:8899

VIII. HBase Installation and Deployment

1. Configure environment variables

vim .bash_profile

#Append the following:
export CLASSPATH=$CLASSPATH:$HBASE_HOME/lib/*
source .bash_profile

2. Modify hbase-env.sh

cd $HBASE_HOME/conf
vim hbase-env.sh

Modify the following:
export HBASE_MANAGES_ZK=false

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk

3. Set the HRegionServer hosts

cd $HBASE_HOME/conf
vim regionservers

Content as follows:
node1
node2
node3

4. Modify the configuration file hbase-site.xml

Reference:

HBase official documentation

For the default configuration, refer to hbase-default.xml in the source package hbase-2.2.4-src.tar.gz:

hbase-2.2.4-src\hbase-2.2.4\hbase-common\src\main\resources\hbase-default.xml
cd $HBASE_HOME
mkdir -p var/run/hadoop-hdfs

cd $HBASE_HOME/conf
vim hbase-site.xml
Content as follows:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>
        <property>
                <name>hbase.rootdir</name>
                <value>hdfs://node1:9988/hbase</value>
        </property>
        <property>
                <name>hbase.replication</name>
                <value>true</value>
        </property>
        <property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>
        <property>
                <name>hbase.master</name>
                <value>hdfs://node1:60000</value>
        </property>
        <property>
                <name>hbase.zookeeper.quorum</name>
                <value>node1,node2,node3</value>
        </property>
        <property>
                <name>hbase.zookeeper.property.clientPort</name>
                <value>2181</value>
        </property>          
        <property>
                <name>zookeeper.session.timeout</name>
                <value>60000</value>
        </property>
        <property>
                <name>hbase.hregion.memstore.flush.size</name>
                <value>268435456</value>
        </property>
        <property>
                <name>hbase.regionserver.handler.count</name>
                <value>100</value>
        </property>
        <property>
                <name>hbase.hregion.max.filesize</name>
                <value>21474836480</value>
        </property>
        <property>
                <name>hfile.block.cache.size</name>
                <value>0.4</value>
        </property>
        <property>
                <name>hbase.coprocessor.region.classes</name>
                <value>org.apache.hadoop.hbase.coprocessor.AggregateImplementation</value>
        </property>
        <property>
                <name>ipc.server.tcpnodelay</name>
                <value>true</value>
        </property>
        <property>
                <name>hbase.hstore.compactionThreshold</name>
                <value>300</value>
        </property>
        <property>
                <name>dfs.client.read.shortcircuit</name>
                <value>false</value>
        </property>
        <property>
                <name>dfs.domain.socket.path</name>
                <value>/home/hadoop/hbase-2.2.4/var/run/hadoop-hdfs</value>
        </property>
        <property>
                <name>hbase.hregion.memstore.block.multiplier</name>
                <value>8</value>
        </property>
        <property>
                <name>hbase.server.thread.wakefrequency</name>
                <value>5000</value>
        </property>
        <property>
                <name>hbase.metrics.showTableName</name>
                <value>false</value>
        </property>
        <property>
                <name>fs.hdfs.impl</name>
                <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
        </property>
        <property>
                <name>replication.source.size.capacity</name>
                <value>4194304</value>
        </property>
        <property>
                <name>replication.source.nb.capacity</name>
                <value>2000</value>
        </property>
        <property>
                <name>replication.source.ratio</name>
                <value>1</value>
        </property>
        <property>
          <name>hbase.unsafe.stream.capability.enforce</name>
          <value>false</value>
        </property>
        <property>
          <name>hbase.wal.provider</name>
          <value>multiwal</value>
        </property>
</configuration>

5. Distribute the files to the other nodes

cd ~
scp -r hbase-2.2.4 hadoop@node2:/home/hadoop/
scp -r hbase-2.2.4 hadoop@node3:/home/hadoop/

node2:

cd $HBASE_HOME
mkdir -p var/run/hadoop-hdfs

node3:

cd $HBASE_HOME
mkdir -p var/run/hadoop-hdfs

6. Start and stop HBase (on the master node)

cd $HBASE_HOME/bin
./start-hbase.sh

./stop-hbase.sh

#Start a single slave node separately; typically used to restart a regionserver that failed to start (e.g. port already in use)
bin/hbase-daemon.sh start regionserver 

(Screenshot: HBase processes after successful startup)
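
As with Hadoop, jps gives a quick cross-check of what should now be running in addition to the existing daemons:

#node1
jps
#expected additions: HMaster, HRegionServer

#node2 / node3
jps
#expected addition: HRegionServer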

Master web UI:

http://192.168.198.241:16010/master-status

Note: 16010 is the default HBase master UI port; setting it to -1 disables the UI. (Different from the 0.9x releases.)

RegionServer web UI:

http://192.168.198.241:16030/rs-status

http://192.168.198.242:16030/rs-status

http://192.168.198.243:16030/rs-status

Note: 16030 is the default HBase regionserver UI port; setting it to -1 disables the UI. (Different from the 0.9x releases.)

7. Operation commands

#Enter the HBase console
hbase shell
#List all tables
list
#Create a table named hbase with column family info
create 'hbase','info'
#Scan the entire hbase table
scan 'hbase'
#Disable and drop the table
disable 'hbase'
drop 'hbase'
#Truncate (empty) the table
truncate 'hbase'
#Write data
put 'hbase','123','info:sex','male'
#Delete the sex column of row 123
delete 'hbase','123','info:sex'
#Delete the entire row 123
deleteall 'hbase','123'
#Read row 123
get 'hbase','123'
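
A few additional HBase shell commands that are useful for a quick health check (standard shell commands, run in the same session):

#Cluster summary: active/dead regionservers and average load
status
#Per-regionserver detail
status 'detailed'
#HBase version
version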

Problem summary:

Problem 1:

java.net.ConnectException: connect(2) error: Connection refused when trying to connect to '/home/hadoop/hbase-2.2.4/var/run/hadoop-hdfs'
        at org.apache.hadoop.net.unix.DomainSocket.connect0(Native Method)
        at org.apache.hadoop.net.unix.DomainSocket.connect(DomainSocket.java:256)
        at org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory.createSocket(DomainSocketFactory.java:165)
        at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.nextDomainPeer(BlockReaderFactory.java:792)
        at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.createShortCircuitReplicaInfo(BlockReaderFactory.java:530)
        at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.create(ShortCircuitCache.java:764)
        at org.apache.hadoop.hdfs.shortcircuit.ShortCircuitCache.fetchOrCreate(ShortCircuitCache.java:702)
        at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getBlockReaderLocal(BlockReaderFactory.java:486)
        at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:367)
        at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:696)
        at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:655)
        at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:926)
        at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:982)
        at java.io.DataInputStream.read(DataInputStream.java:149)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:206)
        at org.apache.hadoop.hbase.util.FSUtils.getVersion(FSUtils.java:344)
        at org.apache.hadoop.hbase.util.FSUtils.checkVersion(FSUtils.java:422)
        at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:273)
        at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153)
        at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:124)
        at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:904)
        at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2124)
        at org.apache.hadoop.hbase.master.HMaster.lambda$run$0(HMaster.java:580)
        at java.lang.Thread.run(Thread.java:748)
2020-09-05 09:11:28,312 WARN  [master/node1:16000:becomeActiveMaster] shortcircuit.ShortCircuitCache: ShortCircuitCache(0x6c0c27c7): failed to load 1073741825_BP-1132550743-192.168.198.241-1599284943761

Solution: modify hbase-site.xml and set the option below to false (disable HDFS short-circuit local reads). The exact cause has not been identified; the stock Apache Hadoop build may be missing some file, so trying a CDH build next time could be worthwhile. If you know the cause, please leave a comment, thanks!
<property>
    <name>dfs.client.read.shortcircuit</name>
    <value>false</value>
</property>

Running the hadoop checknative command shows that files are indeed missing, but libhadoop.so is present.


Reposted from blog.csdn.net/ory001/article/details/108425963