Hive Study Notes (1) - Hive Introduction, Installation, and Configuration

Copyright notice: This is the blogger's original article and may not be reproduced without permission. https://blog.csdn.net/u012292754/article/details/86504386

1 Hive Introduction

  • Open-sourced by Facebook to handle statistical analysis of massive volumes of structured logs
  • A data warehouse tool built on top of Hadoop that maps structured data files to tables
  • Essence: it translates HQL into MapReduce programs
  • Flexible and extensible: supports UDFs and custom storage formats (a registration sketch follows this list)
  • Well suited to offline (batch) data processing
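
To illustrate the UDF point, this is roughly what registering a custom function looks like in the hive shell; the jar path and class name below are hypothetical placeholders, not part of this installation:

-- Hypothetical UDF registration (jar path and class name are made up):
ADD JAR /home/hadoop/jars/my-udfs.jar;                            -- add the jar to the session
CREATE TEMPORARY FUNCTION my_lower AS 'com.example.udf.MyLower';  -- bind it to a name
SELECT my_lower('HELLO');                                         -- use it like a built-in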

1.1 A Data Warehouse Built on Hadoop

  • Uses HQL as the query interface
  • Uses HDFS for storage
  • Uses MapReduce for computation (the sketch below ties the three together)
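
A minimal sketch of the three layers working together; the table name and HDFS path are invented for illustration:

-- HQL as the query interface, HDFS as storage, MapReduce as compute
-- (table name and LOCATION are hypothetical):
CREATE EXTERNAL TABLE demo_logs(line string)
LOCATION '/user/hive/demo_logs';     -- reads files already sitting in HDFS
SELECT count(*) FROM demo_logs;      -- compiled and executed as a MapReduce job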

2 Hive Architecture

  • User interfaces: Client
CLI (hive shell), JDBC/ODBC (accessing Hive from Java), Web UI
  • Metadata: Metastore
Table name, the database the table belongs to, the table owner, column/partition fields,
the table type (e.g. whether it is an external table), and the directory where the
table's data is stored.

Stored by default in the embedded Derby database; using MySQL for the Metastore is
recommended (see section 4).
  • Driver: contains the parser, compiler, optimizer, and executor
- Parser: converts the SQL string into an abstract syntax tree (AST), generally using a
third-party library such as ANTLR, then analyzes the AST: does the table exist, do the
columns exist, is the SQL semantically valid

- Compiler: compiles the AST into a logical execution plan
- Optimizer: optimizes the logical execution plan
- Executor: converts the logical plan into a runnable physical plan, e.g. on MR, Tez,
or Spark (see the EXPLAIN sketch below)
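
The output of this pipeline can be inspected directly: EXPLAIN prints the plan the
Driver produces without running the query. A minimal sketch (bf_log is the table
created later in section 3.3.1):

-- Show the generated plan instead of executing the query:
EXPLAIN SELECT count(*) FROM bf_log;
-- The output lists the stage graph, e.g. a map-reduce Stage-1 followed by a
-- fetch stage: the physical plan handed to MapReduce.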

3 Hive Installation

Version used: hive-1.1.0-cdh5.7.0

3.1 Related Hadoop Configuration

  • core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node1:8020</value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/appsData/hdpData/tmp</value>
    </property>
</configuration>
  • hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>

    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/appsData/hdpData/namenode</value>
    </property>

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/appsData/hdpData/datanode</value>
    </property>
</configuration>
  • mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node1:10020</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node1:19888</value>
    </property>
</configuration>

  • yarn-site.xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node1</value>
    </property>

    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>

    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>640800</value>
    </property>

    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>

    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>4</value>
    </property>
</configuration>
  • slaves
node1
node2
node3

3.1.1 Start the Hadoop Components

  • start-dfs.sh
  • mr-jobhistory-daemon.sh start historyserver
[hadoop@node1 ~]$ mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /home/hadoop/apps/hadoop-2.6.0-cdh5.7.0/logs/mapred-hadoop-historyserver-node1.out
  • start-yarn.sh

3.2 Hive Configuration

The configuration files live under /home/hadoop/apps/hive-1.1.0-cdh5.7.0/conf

  • hive-env.sh
HADOOP_HOME=/home/hadoop/apps/hadoop-2.6.0-cdh5.7.0
export HIVE_CONF_DIR=/home/hadoop/apps/hive-1.1.0-cdh5.7.0/conf
  • Create the Hive directories on HDFS and adjust their permissions (a verification sketch follows the listing below)
hadoop fs -mkdir /tmp
hadoop fs -mkdir -p /user/hive/warehouse
hadoop fs -chmod g+x /tmp
hadoop fs -chmod g+x /user/hive/warehouse

[hadoop@node1 ~]$ hdfs dfs -ls -R /
19/01/16 12:45:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
drwxrwx---   - hadoop supergroup          0 2019-01-16 12:28 /tmp
drwxrwx---   - hadoop supergroup          0 2019-01-16 12:28 /tmp/hadoop-yarn
drwxrwx---   - hadoop supergroup          0 2019-01-16 12:28 /tmp/hadoop-yarn/staging
drwxrwx---   - hadoop supergroup          0 2019-01-16 12:28 /tmp/hadoop-yarn/staging/history
drwxrwx---   - hadoop supergroup          0 2019-01-16 12:28 /tmp/hadoop-yarn/staging/history/done
drwxrwxrwt   - hadoop supergroup          0 2019-01-16 12:28 /tmp/hadoop-yarn/staging/history/done_intermediate
drwxr-xr-x   - hadoop supergroup          0 2019-01-16 12:40 /user
drwxr-xr-x   - hadoop supergroup          0 2019-01-16 12:40 /user/hive
drwxrwxr-x   - hadoop supergroup          0 2019-01-16 12:40 /user/hive/warehouse
[hadoop@node1 ~]$ 
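
Once the hive shell is running (next section), the warehouse location can be checked
against the directories created above; a small sketch using standard session commands:

-- Print the configured warehouse directory (defaults to /user/hive/warehouse):
set hive.metastore.warehouse.dir;
-- The hive shell can also run HDFS commands directly:
dfs -ls /user/hive/warehouse;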

3.3 Start Hive

[hadoop@node1 bin]$ ./hive
ls: cannot access /home/hadoop/apps/spark-2.2.2-bin-2.6.0-cdh5.7.0/lib/spark-assembly-*.jar: No such file or directory
2019-01-16 12:47:35,758 WARN  [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present.  Continuing without it.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/apps/hbase-1.2.0-cdh5.7.0/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/apps/hadoop-2.6.0-cdh5.7.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2019-01-16 12:47:35,857 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Logging initialized using configuration in jar:file:/home/hadoop/apps/hive-1.1.0-cdh5.7.0/lib/hive-common-1.1.0-cdh5.7.0.jar!/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive> 

3.3.1 Test 1

hive> show databases;
OK
default
Time taken: 0.204 seconds, Fetched: 1 row(s)
hive> use default;
OK
Time taken: 0.017 seconds
hive> create table bf_log(ip string,user string,requesturl string);
OK
Time taken: 0.287 seconds
hive> show tables;
OK
bf_log
hive_wordcount
Time taken: 0.038 seconds, Fetched: 2 row(s)
hive> desc bf_log;
OK
ip                  	string              	                    
user                	string              	                    
requesturl          	string              	                    
Time taken: 0.103 seconds, Fetched: 3 row(s)
hive> select count(*) from bf_log;
Query ID = hadoop_20190116143333_437d8f9f-c49a-4617-a610-4d79e14fb6c2
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1547613046309_0001, Tracking URL = http://node1:8088/proxy/application_1547613046309_0001/
Kill Command = /home/hadoop/apps/hadoop-2.6.0-cdh5.7.0/bin/hadoop job  -kill job_1547613046309_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2019-01-16 14:37:36,087 Stage-1 map = 0%,  reduce = 0%
2019-01-16 14:37:41,289 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.34 sec
2019-01-16 14:37:46,503 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.87 sec
MapReduce Total cumulative CPU time: 2 seconds 870 msec
Ended Job = job_1547613046309_0001
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 2.87 sec   HDFS Read: 6461 HDFS Write: 2 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 870 msec
OK
0
Time taken: 23.504 seconds, Fetched: 1 row(s)
hive> 
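The reducer hints printed in the job output above are ordinary session settings; a
hedged sketch of using them (the values are arbitrary examples):

-- Tune reducer parallelism for subsequent queries (example values):
set hive.exec.reducers.bytes.per.reducer=134217728;  -- ~128 MB of input per reducer
set mapreduce.job.reduces=2;                         -- or pin an explicit reducer count
-- A group-by can fan out over several reducers, unlike the single-reducer count(*):
select ip, count(*) from bf_log group by ip;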

3.3.2 Test 2

  • Create the table: create table student(id int,name string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
hive> create table student(id int,name string) ROW FORMAT DELIMITED FIELDS 
    > TERMINATED BY '\t';
OK
Time taken: 0.06 seconds
hive> show tables;
OK
bf_log
hive_wordcount
student
Time taken: 0.017 seconds, Fetched: 3 row(s)

Test data
(screenshot of student.txt: tab-separated id and name rows, as shown by the select below)

  • Load the data: load data local inpath '/home/hadoop/student.txt' into table student;
hive> select * from student;
OK
1001	MIke
1002	John
1003	Mary
Time taken: 0.099 seconds, Fetched: 3 row(s)
hive> select id from student;
OK
1001
1002
1003
Time taken: 0.075 seconds, Fetched: 3 row(s)
hive> 

4 Using MySQL

  • Copy the MySQL JDBC driver jar into Hive's lib directory
    (screenshot of the driver jar in the lib directory)
  • Check the installed MySQL packages
[hadoop@node1 ~]$ rpm -qa | grep mysql
mysql-community-server-5.7.10-1.el7.x86_64
mysql-community-common-5.7.10-1.el7.x86_64
mysql-community-libs-5.7.10-1.el7.x86_64
mysql-community-client-5.7.10-1.el7.x86_64

  • hive-site.xml
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://node1:3306/hivemetastore?createDatabaseIfNotExist=true&amp;useSSL=false</value>
        <description>JDBC connect string for a JDBC metastore</description>
    </property>

    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
        <description>Driver class name for a JDBC metastore</description>
    </property>

    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
        <description>username to use against metastore database</description>
    </property>

    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>root</value>
        <description>password to use against metastore database</description>
    </property>

    <property>
        <name>hive.cli.print.header</name>
        <value>true</value>
    </property>

    <property>
        <name>hive.cli.print.current.db</name>
        <value>true</value>
    </property>
</configuration>
  • Inspect the metastore data in MySQL
    (screenshot of the metastore tables in MySQL)
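
A hedged sketch of the same check from the mysql client; the database name comes from
the ConnectionURL above, and DBS/TBLS are standard Hive metastore tables:

-- Run in the mysql client on node1:
USE hivemetastore;
SHOW TABLES;                             -- DBS, TBLS, COLUMNS_V2, SDS, ...
SELECT TBL_NAME, TBL_TYPE FROM TBLS;     -- tables Hive has registered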
