Hive/HBase/Sqoop Installation Guide

HIVE INSTALL

1. Download the installation package: https://mirrors.tuna.tsinghua.edu.cn/apache/hive/hive-2.3.3/
2. Upload it to the target directory on the Linux server and unpack it:

mkdir hive
mv apache-hive-2.3.3-bin.tar.gz hive
cd hive
tar -zxvf apache-hive-2.3.3-bin.tar.gz
mv apache-hive-2.3.3-bin apache-hive-2.3.3

### The installation directory is: /app/hive/apache-hive-2.3.3


3. Configure environment variables:
sudo vi /etc/profile
Add the following:

export HIVE_HOME=/app/hive/apache-hive-2.3.3
export PATH=$PATH:$HIVE_HOME/bin

:wq  #save and quit
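After editing /etc/profile, reload it in the current shell so the new variables take effect (the paths below are the example paths used throughout this guide; adjust them to your own layout):

```shell
# Reload the profile in the current shell session
source /etc/profile

# Verify the variable is set and the hive binary is on PATH
echo $HIVE_HOME   # expect /app/hive/apache-hive-2.3.3
which hive        # expect $HIVE_HOME/bin/hive
```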


4. Edit the Hive configuration files.
hive-env.sh (modify existing entries; add any that are missing):

cd /app/hive/apache-hive-2.3.3/conf
cp hive-env.sh.template hive-env.sh
### Add the following to the file -- uncomment each line and change the paths to your own directories
export HADOOP_HEAPSIZE=1024
export HADOOP_HOME=/app/hadoop/hadoop-2.7.7   #Hadoop installation directory
export HIVE_CONF_DIR=/app/hive/apache-hive-2.3.3/conf
export HIVE_HOME=/app/hive/apache-hive-2.3.3
export HIVE_AUX_JARS_PATH=/app/hive/apache-hive-2.3.3/lib
export JAVA_HOME=/app/lib/jdk

  

Create the HDFS directories (this assumes Hadoop is already installed and running; use the absolute paths that hive-site.xml will reference below):

hdfs dfs -mkdir -p /app/hive/apache-hive-2.3.3/hive_site_dir/warehouse
hdfs dfs -mkdir -p /app/hive/apache-hive-2.3.3/hive_site_dir/tmp
hdfs dfs -mkdir -p /app/hive/apache-hive-2.3.3/hive_site_dir/log
hdfs dfs -chmod -R 777 /app/hive/apache-hive-2.3.3/hive_site_dir/warehouse
hdfs dfs -chmod -R 777 /app/hive/apache-hive-2.3.3/hive_site_dir/tmp
hdfs dfs -chmod -R 777 /app/hive/apache-hive-2.3.3/hive_site_dir/log
Create a local temporary directory:
cd /app/hive/apache-hive-2.3.3
mkdir tmp
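As a quick sanity check (requires a running HDFS), confirm the directories exist with the expected permissions:

```shell
# List the directories just created in HDFS and check their permission bits
hdfs dfs -ls /app/hive/apache-hive-2.3.3/hive_site_dir
# Each entry (warehouse, tmp, log) should show drwxrwxrwx
```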

  

hive-site.xml (modify the existing entries):
cp hive-default.xml.template hive-site.xml
vi hive-site.xml
>> Configure the database connection settings: ConnectionURL / ConnectionUserName / ConnectionPassword / ConnectionDriverName

<!--mysql database connection setting -->
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://10.28.85.149:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8</value>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>szprd</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>szprd</value>
</property>
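Note that hive-site.xml is XML: a bare & joining the URL parameters makes the file unparseable, so it must be written as &amp;amp;. A quick shell illustration of the escaping:

```shell
# XML requires '&' in attribute/element text to be escaped as '&amp;'
url='jdbc:mysql://10.28.85.149:3306/hive?createDatabaseIfNotExist=true&characterEncoding=UTF-8'
escaped=$(printf '%s' "$url" | sed 's/&/\&amp;/g')
echo "$escaped"
# -> jdbc:mysql://10.28.85.149:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8
```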

  

>> Configure the HDFS directories

<property>
<name>hive.exec.scratchdir</name>
<!--<value>/tmp/hive</value>-->
<value>/app/hive/apache-hive-2.3.3/hive_site_dir/tmp</value>
<description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
</property>

<property>
<name>hive.metastore.warehouse.dir</name>
<value>/app/hive/apache-hive-2.3.3/hive_site_dir/warehouse</value>
</property>

<property>
<name>hive.exec.local.scratchdir</name>
<!--<value>${system:java.io.tmpdir}/${system:user.name}</value> -->
<value>/app/hive/apache-hive-2.3.3/tmp/${system:user.name}</value>
<description>Local scratch space for Hive jobs</description>
</property>

<property>
<name>hive.downloaded.resources.dir</name>
<!--<value>${system:java.io.tmpdir}/${hive.session.id}_resources</value>-->
<value>/app/hive/apache-hive-2.3.3/tmp/${hive.session.id}_resources</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>

<property>
<name>hive.querylog.location</name>
<!--<value>${system:java.io.tmpdir}/${system:user.name}</value>-->
<value>/app/hive/apache-hive-2.3.3/hive_site_dir/log/${system:user.name}</value>
<description>Location of Hive run time structured log file</description>
</property>


<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
<description>
Enforce metastore schema version consistency.
True: Verify that the version information stored in the metastore is compatible with the one from the Hive jars. Also disable automatic
schema migration attempts. Users are required to manually migrate the schema after a Hive upgrade, which ensures
proper metastore schema migration. (Default)
False: Warn if the version information stored in the metastore doesn't match the one from the Hive jars.
</description>
</property>

After editing the configuration file, save and quit with :wq

5. Download a MySQL JDBC driver of a suitable version and copy the jar into the lib directory of the Hive installation:
https://dev.mysql.com/downloads/connector/j/
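Before initializing the metastore, it can help to confirm that the MySQL instance configured in hive-site.xml is reachable with the configured credentials (the host, user, and password below are this guide's example values):

```shell
# Connect to the metastore database host with the credentials from hive-site.xml
# and run a trivial query; a failure here means schematool will also fail.
mysql -h 10.28.85.149 -P 3306 -u szprd -pszprd -e "SELECT 1;"
```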

6. Initialize the metastore database (be sure to run this before starting Hive for the first time; if it fails, double-check the database settings in hive-site.xml):

cd /app/hive/apache-hive-2.3.3/bin
./schematool -initSchema -dbType mysql
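If initialization succeeds, schematool prints "schemaTool completed" and the metastore tables appear in MySQL. A quick check (again using this guide's example credentials):

```shell
# The hive database should now contain the metastore tables (DBS, TBLS, VERSION, ...)
mysql -h 10.28.85.149 -u szprd -pszprd -e "USE hive; SHOW TABLES;"

# Or ask schematool itself for the schema version
cd /app/hive/apache-hive-2.3.3/bin
./schematool -info -dbType mysql
```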

  

7. Start Hive:
hive     #with the environment variable set in /etc/profile, this can be run from any directory


8. To start Hive with logs printed to the console in real time (run from the bin directory of the Hive installation):

./hive -hiveconf hive.root.logger=DEBUG,console
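Once the CLI starts, a minimal non-interactive smoke test can confirm the metastore and warehouse directories are working (the table name below is just an example):

```shell
# Create a throwaway table, list tables, then drop it
hive -e "CREATE TABLE IF NOT EXISTS smoke_test (id INT); SHOW TABLES; DROP TABLE smoke_test;"
```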


HBASE INSTALL


1. Download the HBase installation package: http://hbase.apache.org/downloads.html


2. Unpack it: tar -zxvf hbase-1.2.6.1-bin.tar.gz


3. Configure environment variables (append at the end of the file):
vi /etc/profile

#HBase Setting
export HBASE_HOME=/app/hbase/hbase-1.2.6.1
export PATH=$PATH:$HBASE_HOME/bin

  

4. Edit the configuration file hbase-env.sh:

export HBASE_MANAGES_ZK=false
export HBASE_PID_DIR=/app/hadoop/hadoop-2.7.7/pids   #create this directory first if it does not exist
export JAVA_HOME=/app/lib/jdk   #JDK installation directory

 

Edit the configuration file hbase-site.xml:
Add the following properties inside the configuration element:

<property>
<name>hbase.rootdir</name>
<value>hdfs://192.168.1.202:9000/hbase</value>
</property>


<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/vc/dev/MQ/ZK/zookeeper-3.4.12</value>
</property>


<property>
<name>zookeeper.znode.parent</name>
<value>/hbase</value>
</property>


<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>

<property>
<name>hbase.unsafe.stream.capability.enforce</name>
<value>false</value>
<description>
Controls whether HBase will check for stream capabilities (hflush/hsync). Disable this if you intend to run on LocalFileSystem, denoted by a rootdir with the 'file://' scheme, but be mindful of the NOTE below.
WARNING: Setting this to false blinds you to potential data loss and inconsistent system state in the event of process and/or node failures. If HBase is complaining of an inability to use hsync or hflush it's most likely not a false positive.
</description>
</property>

  

5. Start ZooKeeper:
Go to the bin directory of the ZooKeeper installation and run: ./zkServer.sh start
Then start the client: ./zkCli.sh
Once it connects, create the parent znode for HBase: create /hbase hbase

6. Start HBase:
From the HBase bin directory: ./start-hbase.sh
./hbase shell  #once the shell starts, HBase commands can be run here
list  #no error means the installation succeeded
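Beyond list, a short scripted session can verify basic reads and writes (the table name below is a throwaway example):

```shell
# Run a few HBase shell commands non-interactively via a heredoc
hbase shell <<'EOF'
create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:msg', 'hello'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'
EOF
```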

7. Access the HBase web UI: http://10.28.85.149:16010/master-status   #the IP is the current server's IP; the port is 16010


SQOOP INSTALL

1. Download the installation package: https://mirrors.tuna.tsinghua.edu.cn/apache/sqoop/1.4.7/


2. Unpack it: tar -zxvf sqoop-1.4.7.bin__hadoop-2.6.0.tar.gz

Rename the directory: mv sqoop-1.4.7.bin__hadoop-2.6.0 sqoop-1.4.7_hadoop-2.6.0


3. Configure environment variables in /etc/profile:

#Sqoop Setting
export SQOOP_HOME=/app/sqoop/sqoop-1.4.7_hadoop-2.6.0
export PATH=$PATH:$SQOOP_HOME/bin

  

4. Copy the MySQL JDBC driver jar into the lib directory of the Sqoop installation:

https://dev.mysql.com/downloads/connector/j/

5. Edit the configuration file in the conf directory of the Sqoop installation:
vi sqoop-env.sh

#Set path to where bin/hadoop is available
export HADOOP_COMMON_HOME=/app/hadoop/hadoop-2.7.7

#Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/app/hadoop/hadoop-2.7.7

#set the path to where bin/hbase is available
export HBASE_HOME=/app/hbase/hbase-1.2.6.1

#Set the path to where bin/hive is available
export HIVE_HOME=/app/hive/apache-hive-2.3.3

#Set the path to where the zookeeper config dir is
export ZOOCFGDIR=/app/zookeeper/zookeeper-3.4.12

  

6. Try the commands:

sqoop help     #list the available sqoop commands

sqoop version  #print the sqoop version
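As a first end-to-end check, Sqoop can list the databases on the MySQL server configured earlier (host and credentials are this guide's example values; warnings about HCatalog or Accumulo not being installed can be ignored):

```shell
# List databases on the MySQL server through the JDBC driver copied into lib/
sqoop list-databases \
  --connect jdbc:mysql://10.28.85.149:3306/ \
  --username szprd \
  --password szprd
```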

PS:

To stop HBase, run: stop-hbase.sh. If an error about the pid file appears, see this post: https://blog.csdn.net/xiao_jun_0820/article/details/35222699

Hadoop installation guide: http://note.youdao.com/noteshare?id=0cae2da671de0f7175376abb8e705406

ZooKeeper installation guide: http://note.youdao.com/noteshare?id=33e37b0967da40660920f755ba2c03f0

Reposted from www.cnblogs.com/DFX339/p/9550213.html