Big Data Operations Collection: Basic Hive Operations

Contents

  • Introduction
  • Hive data types
  • Hive data storage
  • Hive data models
  • Hive DDL (data definition language)
  • Loading data into Hive
  • Exporting data from Hive
  • Hive DML operations
  • Introduction to Hive UDFs

Main Content

Introduction

Within the Hadoop ecosystem, Hive plays the role of a data warehouse: it manages the data stored in Hadoop and lets you query it. Essentially, Hive is a SQL interpreter that translates SQL statements into MapReduce jobs. Tables and columns in SQL map to files on HDFS and to columns within those files. Hive's default location on HDFS is /user/hive/warehouse.

Hive data types:

  1. Integer types: TINYINT, SMALLINT, INT, BIGINT.
  2. Text types: VARCHAR (length 1 to 65535), CHAR (up to 255), STRING.
  3. Date/time types: TIMESTAMP (a point in time), DATE (a calendar date).
  4. Boolean and binary: BOOLEAN holds the two values true and false; BINARY stores variable-length binary data.
  5. Floating-point types: FLOAT, DOUBLE.
  6. Complex types: ARRAY / MAP / STRUCT / UNION.
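
The primitive and complex types combine in ordinary DDL. A minimal sketch (the table and its columns are hypothetical, chosen only to exercise each type):

    CREATE TABLE IF NOT EXISTS user_profile (
      uid        BIGINT,
      name       STRING,
      height     DOUBLE,
      birthday   DATE,
      created_at TIMESTAMP,
      active     BOOLEAN,
      tags       ARRAY<STRING>,
      props      MAP<STRING, STRING>,
      address    STRUCT<city:STRING, street:STRING>
    )
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '\t'
    COLLECTION ITEMS TERMINATED BY ','
    MAP KEYS TERMINATED BY ':';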

Hive data storage

  1. Hive stores its data on Hadoop HDFS.
  2. Hive has no storage format of its own; its storage structures are databases, files, tables, and views.
  3. By default Hive can load plain text files (TextFile) directly, and it also supports SequenceFile. If you specify the column and row delimiters when creating a table, Hive can parse the data.
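
For example, a sketch (table names are illustrative) contrasting the TextFile default with an explicit SequenceFile table:

    -- TextFile is the default; the delimiter tells Hive how to parse each line
    CREATE TABLE logs_text (line STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    STORED AS TEXTFILE;

    -- SequenceFile is a binary container format; fill it from an existing table
    CREATE TABLE logs_seq (line STRING) STORED AS SEQUENCEFILE;
    INSERT OVERWRITE TABLE logs_seq SELECT line FROM logs_text;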

Hive data models

  1. Internal (managed) tables: conceptually similar to tables in a database. Each table has a corresponding directory in which Hive stores its data. For example, a table test lives on HDFS at /user/hive/warehouse/test. Dropping the table deletes both the metadata and the data.
  2. Partitioned tables: in Hive, each partition of a table corresponds to a subdirectory under the table's directory, and all of a partition's data is stored in that subdirectory. If table test has two partition columns, date and city, the partition date=20130201, city=bj maps to the HDFS subdirectory /user/hive/warehouse/test/date=20130201/city=bj.
  3. External tables: point at data that already exists on HDFS and may also define partitions. They are organized in the metadata exactly like internal tables, but the actual data is stored quite differently. An internal table involves a creation step and a data-loading step (the two can be combined in a single statement), and loading moves the actual data into the warehouse directory. An external table involves a single step: creating the table and attaching the data happen together, the data is not moved into the warehouse directory, and Hive merely records a link to the external data. Dropping an external table deletes only that link. (All three kinds are sketched below.)
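
A minimal sketch of the three table kinds (names and the external location are illustrative; dt stands in for the date partition column, since DATE is a keyword in newer Hive versions):

    -- Internal (managed) table: data lives under /user/hive/warehouse/<table>
    CREATE TABLE test (uid INT, name STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

    -- Partitioned table: each (dt, city) pair becomes its own subdirectory
    CREATE TABLE test_part (uid INT, name STRING)
    PARTITIONED BY (dt STRING, city STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

    -- External table: DROP TABLE removes only the link, never the data
    CREATE EXTERNAL TABLE test_ext (uid INT, name STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    LOCATION '/data/test_ext';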

Hive DDL (data definition language)

  • Creating a database
hive> show databases;
OK
default
Time taken: 0.049 seconds, Fetched: 1 row(s)

hive> create database test;
OK
Time taken: 0.201 seconds

hive> show databases;
OK
default
test
Time taken: 0.021 seconds, Fetched: 2 row(s)

hive> use test;
OK
Time taken: 0.02 seconds

hive> show tables;
OK
Time taken: 0.014 seconds

Or, specifying a location:
hive> create database hive_test location '/hive/hive_test';
OK
Time taken: 0.017 seconds
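
To see where a database lives, or to remove one, the standard statements should apply (a sketch; console output omitted):

    DESCRIBE DATABASE hive_test;       -- shows the HDFS location
    DROP DATABASE hive_test;           -- fails while the database still holds tables
    DROP DATABASE hive_test CASCADE;   -- drops the contained tables as well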
  • Creating a table
hive> CREATE TABLE IF NOT EXISTS employee ( eid int, name String,
> salary String, destination String)
> COMMENT 'Employee details'
> ROW FORMAT DELIMITED
> FIELDS TERMINATED BY '\t'
> LINES TERMINATED BY '\n'
> STORED AS TEXTFILE;
OK
Time taken: 0.052 seconds
  • Altering a table: the ALTER statement
ALTER TABLE name RENAME TO new_name
ALTER TABLE name ADD COLUMNS (col_spec[, col_spec ...])
ALTER TABLE name DROP [COLUMN] column_name
ALTER TABLE name CHANGE column_name new_name new_type
ALTER TABLE name REPLACE COLUMNS (col_spec[, col_spec ...])

1. Renaming a table with RENAME TO: rename employee to emp.

hive> ALTER TABLE employee RENAME TO emp;
OK
Time taken: 0.107 seconds

hive> show tables;
OK
emp
Time taken: 0.012 seconds, Fetched: 1 row(s)

2. Changing column names and column data types

		- First, inspect the table's structure:
		  hive> desc emp;
		  OK
		  eid int
		  name string
		  salary string
		  destination string
		  Time taken: 0.07 seconds, Fetched: 4 row(s)

		- Rename name to ename, and change salary's data type to DOUBLE.
		  hive> ALTER TABLE emp CHANGE name ename String;
		  OK
		  Time taken: 0.118 seconds
      
		  hive> ALTER TABLE emp CHANGE salary salary Double;
		  OK
		  Time taken: 0.077 seconds
      
		  hive> desc emp;
		  OK
		  eid int
		  ename string
		  salary double
		  destination string
		  Time taken: 0.055 seconds, Fetched: 4 row(s)

		- Add a column: dept
		  hive> ALTER TABLE emp ADD COLUMNS (dept STRING COMMENT 'Department name');
		  OK
		  Time taken: 0.071 seconds
		  hive> desc emp;
		  OK
		  eid int
		  ename string
		  salary double
		  destination string
		  dept string Department name
		  Time taken: 0.073 seconds, Fetched: 5 row(s)
      
      Checking the table structure again shows the new dept field. (REPLACE COLUMNS, listed in the syntax above but not demonstrated here, is sketched below.)
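
The REPLACE COLUMNS form rewrites the whole column list in one statement. A sketch against emp (for a text table this changes only the metadata, not the underlying files):

    ALTER TABLE emp REPLACE COLUMNS (
      eid INT,
      ename STRING,
      salary DOUBLE,
      destination STRING,
      dept STRING
    );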
  • Dropping a table
hive> show tables;
OK
emp
employee
Time taken: 0.011 seconds, Fetched: 2 row(s)

hive> DROP TABLE IF EXISTS employee;
OK
Time taken: 0.295 seconds
hive> show tables;
OK
emp
Time taken: 0.011 seconds, Fetched: 1 row(s)
  • Truncating a table: first inspect it, then truncate.
hive> select * from employee;
OK
1201 Gopal 45000.0 Technical manager
1202 Manisha 45000.0 Proof reader
1203 Masthanvali 40000.0 Technicali writer
1204 Kiran 40000.0 Hr Admin
1205 Kranthi 30000.0 Op Admin
Time taken: 0.031 seconds, Fetched: 5 row(s)

hive> truncate table employee;
OK
Time taken: 0.064 seconds
hive> select * from employee;
OK
Time taken: 0.054 seconds

Loading data into Hive

  • Loading data
- Local data source: /home/hadoop/sample.txt
  hadoop@data2:~$ vim sample.txt
  1201 Gopal 45000 Technical manager
  1202 Manisha 45000 Proof reader
  1203 Masthanvali 40000 Technical writer
  1204 Kiran 40000 Hr Admin
  1205 Kranthi 30000 Op Admin
  
- Loading from the local filesystem
  hive> LOAD DATA LOCAL INPATH '/home/hadoop/sample.txt' OVERWRITE INTO TABLE employee;
  Loading data to table test.employee
  Table test.employee stats: [numFiles=1, numRows=0, totalSize=201, rawDataSize=0]
  OK
  Time taken: 0.513 seconds
  
- Loading from HDFS (here the path refers to a file already on HDFS; the file is moved into the table's directory):
  	hive> LOAD DATA INPATH '/home/hadoop/sample.txt' OVERWRITE INTO TABLE employee;

- Or use a hadoop command directly: 'hadoop fs -put /home/hadoop/sample.txt /user/hive/warehouse/test.db/employee/'

- Inspect the data:
  hive> select * from employee;
  OK
  1201 Gopal 45000 Technical
  1202 Manisha 45000 Proof
  1203 Masthanvali 40000 Technical writer
  1204 Kiran 40000 Hr
  1205 Kranthi 30000 Op
  Time taken: 0.07 seconds, Fetched: 5 row(s)

- Check the path on HDFS:
  hadoop@data2:~$ hadoop fs -ls /user/hive/warehouse/test.db
  Found 2 items
  drwxr-xr-x - hadoop supergroup 0 2017-05-15 12:43 /user/hive/warehouse/test.db/emp
  drwxr-xr-x - hadoop supergroup 0 2017-05-15 13:10 /user/hive/warehouse/test.db/employee
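
For a partitioned table (see the data-model section above), LOAD DATA must also name the target partition. A sketch against a hypothetical employee_part table partitioned by dt:

    LOAD DATA LOCAL INPATH '/home/hadoop/sample.txt'
    OVERWRITE INTO TABLE employee_part
    PARTITION (dt = '20170515');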

Exporting data from Hive

1. Exporting to the local filesystem:
    1.1. The command
       hive> insert overwrite local directory '/home/hadoop/emp'
       > select * from emp;
       Query ID = hadoop_20170515153232_366cdc86-2146-423b-ab07-18779323edb6
       Total jobs = 1
       Launching Job 1 out of 1
       Number of reduce tasks is set to 0 since there's no reduce operator
       Starting Job = job_1492396415914_1296, Tracking URL = http://data1.XXXXXX.cn:8088/proxy/application_1492396415914_1296/
       Kill Command = /software/hadoop-2.6.0-cdh5.9.0/bin/hadoop job -kill job_1492396415914_1296
       Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
       2017-05-15 15:32:18,465 Stage-1 map = 0%, reduce = 0%
       2017-05-15 15:32:23,584 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.23 sec
       MapReduce Total cumulative CPU time: 1 seconds 230 msec
       Ended Job = job_1492396415914_1296
       Copying data to local directory /home/hadoop/emp
       MapReduce Jobs Launched:
       Stage-Stage-1: Map: 1   Cumulative CPU: 1.23 sec   HDFS Read: 3583 HDFS Write: 330 SUCCESS
       Total MapReduce CPU Time Spent: 1 seconds 230 msec
       OK
       Time taken: 9.642 seconds
    1.2. Check the directory:
       hadoop@data2:~$ cd /home/hadoop/emp/
       hadoop@data2:~/emp$ ll
       total 16
       drwxrwxr-x 2 hadoop hadoop 4096 May 15 15:32 ./
       drwxr-xr-x 15 hadoop hadoop 4096 May 15 15:32 ../
       -rw-r--r-- 1 hadoop hadoop 330 May 15 15:32 000000_0
       -rw-r--r-- 1 hadoop hadoop 12 May 15 15:32 .000000_0.crc
       hadoop@data2:~/emp$ vim 000000_0
       1201^A Gopal^A45000^A Technical^Amanager
       1202^AManisha^A45000^AProof^Areader
       1203^AMasthanvali^A40000^ATechnicali^Awriter
       1204^AKiran^A40000^AHr^AAdmin
       1205^AKranthi^A30000^AOp^AAdmin
       1206^AGopal^A45000^A Technical^Amanager
       1207^AManisha 45000^AProof^Areader^A\N
       1208^AMasthanvali^A40000^ATechnicali^Awriter
       1209^AKiran^A40000^AHr^AAdmin
       1210^AKranthi^A30000^AOp^AAdmin

    1.3. The default field separator is ^A (\x01). For more readable output, specify the column separator yourself:
       hive> insert overwrite local directory '/home/hadoop/emp'
       > row format delimited
       > fields terminated by '\t'
       > select * from emp;

    1.4. Check the data again:
       hadoop@data2:~/emp$ ll
       total 16
       drwxrwxr-x 2 hadoop hadoop 4096 May 15 15:42 ./
       drwxr-xr-x 15 hadoop hadoop 4096 May 15 15:42 ../
       -rw-r--r-- 1 hadoop hadoop 330 May 15 15:42 000000_0
       -rw-r--r-- 1 hadoop hadoop 12 May 15 15:42 .000000_0.crc
       hadoop@data2:~/emp$ cat 000000_0
       1201 Gopal 45000 Technical manager
       1202 Manisha 45000 Proof reader
       1203 Masthanvali 40000 Technicali writer
       1204 Kiran 40000 Hr Admin
       1205 Kranthi 30000 Op Admin
       1206 Gopal 45000 Technical manager
       1207 Manisha 45000 Proof reader \N
       1208 Masthanvali 40000 Technicali writer
       1209 Kiran 40000 Hr Admin
       1210 Kranthi 30000 Op Admin
       
2. Exporting to HDFS:
    2.1. Drop the LOCAL keyword:
      hive> insert overwrite directory '/home/hadoop/emp'
      > select * from emp;

    2.2. Check the directory:
      hadoop@data2:~/emp$ hadoop fs -ls /home/hadoop/emp
      Found 1 items
      -rwxr-xr-x 3 hadoop supergroup 330 2017-05-15 15:36 /home/hadoop/emp/000000_0

    2.3. Or use a hadoop command:
		hadoop fs -get /user/hive/warehouse/test.db/emp/* /home/hadoop/hive
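
Besides INSERT ... DIRECTORY and hadoop fs -get, Hive 0.8+ also offers EXPORT/IMPORT, which copy the table metadata along with the data. A sketch (the target path and restored table name are illustrative):

    EXPORT TABLE emp TO '/tmp/emp_export';            -- data plus metadata onto HDFS
    IMPORT TABLE emp_restored FROM '/tmp/emp_export'; -- recreate the table from the export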

Hive DML operations

1. SELECT ... WHERE ...
    hive> select * from emp where salary > 40000 ;
    OK
    1201 Gopal 45000 Technical manager
    1202 Manisha 45000 Proof reader
    Time taken: 0.116 seconds, Fetched: 2 row(s)

2. Inserting query results from one table into another: INSERT INTO TABLE ... SELECT ... FROM ... (emp_bak is assumed to have the same schema as emp):
    hive> insert into table emp_bak select eid,ename,salary,destination,dept from emp where eid < 1206 ;
    hive> select * from emp_bak;
    OK
    1201 Gopal 45000 Technical manager
    1202 Manisha 45000 Proof reader
    1203 Masthanvali 40000 Technicali writer
    1204 Kiran 40000 Hr Admin
    1205 Kranthi 30000 Op Admin
    Time taken: 0.035 seconds, Fetched: 5 row(s)

3. Overwriting a table: INSERT OVERWRITE TABLE ... SELECT ... FROM ...
hive> insert overwrite table emp_bak select eid,ename,salary,destination,dept from emp where eid >= 1206 ;
hive> select * from emp_bak;
OK
1206 Gopal 45000 Technical manager
1207 Manisha 45000 Proof reader NULL
1208 Masthanvali 40000 Technicali writer
1209 Kiran 40000 Hr Admin
1210 Kranthi 30000 Op Admin
Time taken: 0.034 seconds, Fetched: 5 row(s)
4. Joins
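
The CUSTOMERS and ORDERS tables used below are assumed to have roughly the following schemas, inferred from the queries and their output (only the referenced columns are sketched):

    CREATE TABLE CUSTOMERS (ID INT, NAME STRING, AGE INT)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

    CREATE TABLE ORDERS (CUSTOMER_ID INT, `DATE` TIMESTAMP, AMOUNT DOUBLE)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
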
		4.1. JOIN
          hive> SELECT c.ID, c.NAME, c.AGE, o.AMOUNT
          > FROM CUSTOMERS c JOIN ORDERS o
          > ON (c.ID = o.CUSTOMER_ID);
          Query ID = hadoop_20170515135454_acb274d2-2be7-456d-9a3d-9a13e8e2086e
          Total jobs = 1
          Execution log at: /tmp/hadoop/hadoop_20170515135454_acb274d2-2be7-456d-9a3d-9a13e8e2086e.log
          2017-05-15 01:55:02 Starting to launch local task to process map join; maximum memory = 477626368
          2017-05-15 01:55:03 Dump the side-table for tag: 1 with group count: 3 into file: file:/tmp/hadoop/fbbf05c3-c70f-4a16-9033-5d57119a18d0/hive_2017-05-15_13-54-59_741_1498271684945237186-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile01--.hashtable
          2017-05-15 01:55:03 Uploaded 1 File to: file:/tmp/hadoop/fbbf05c3-c70f-4a16-9033-5d57119a18d0/hive_2017-05-15_13-54-59_741_1498271684945237186-1/-local-10003/HashTable-Stage-3/MapJoin-mapfile01--.hashtable (332 bytes)
          2017-05-15 01:55:03 End of local task; Time Taken: 0.983 sec.
          Execution completed successfully
          MapredLocal task succeeded
          Launching Job 1 out of 1
          Number of reduce tasks is set to 0 since there's no reduce operator
          Starting Job = job_1492396415914_1283, Tracking URL = http://data1.XXXXXX.cn:8088/proxy/application_1492396415914_1283/
          Kill Command = /software/hadoop-2.6.0-cdh5.9.0/bin/hadoop job -kill job_1492396415914_1283
          Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 0
          2017-05-15 13:55:08,292 Stage-3 map = 0%, reduce = 0%
          2017-05-15 13:55:13,419 Stage-3 map = 100%, reduce = 0%, Cumulative CPU 2.81 sec
          MapReduce Total cumulative CPU time: 2 seconds 810 msec
          Ended Job = job_1492396415914_1283
          MapReduce Jobs Launched:
          Stage-Stage-3: Map: 1   Cumulative CPU: 2.81 sec   HDFS Read: 6527 HDFS Write: 46 SUCCESS
          Total MapReduce CPU Time Spent: 2 seconds 810 msec
          OK
          2 Kali 31 2050
          3 Cham 20 3000
          4 Muffi 25 1500
          Time taken: 14.722 seconds, Fetched: 3 row(s)

		4.2. LEFT OUTER JOIN: returns all rows from the left table, plus the matching rows from the right table, or NULL where the JOIN condition finds no match.
        hive> SELECT c.ID, c.NAME, o.AMOUNT, o.DATE
        > FROM CUSTOMERS c
        > LEFT OUTER JOIN ORDERS o
        > ON (c.ID = o.CUSTOMER_ID);
        Total MapReduce CPU Time Spent: 1 seconds 590 msec
        OK
        1 Ramsh NULL NULL
        2 Kali 2050 2009-05-08 00:00:00
        3 Cham 3000 2009-10-08 00:00:00
        4 Muffi 1500 2009-11-20 00:00:00
        5 Kaush NULL NULL
        Time taken: 14.277 seconds, Fetched: 5 row(s)

		4.3. RIGHT OUTER JOIN: returns all rows from the right table, plus the matching rows from the left table, or NULL where there is no match.
        hive> SELECT c.ID, c.NAME, o.AMOUNT, o.DATE
        > FROM CUSTOMERS c
        > RIGHT OUTER JOIN ORDERS o
        > ON (c.ID = o.CUSTOMER_ID);
        Total MapReduce CPU Time Spent: 2 seconds 200 msec
        OK
        3 Cham 3000 2009-10-08 00:00:00
        2 Kali 2050 2009-05-08 00:00:00
        4 Muffi 1500 2009-11-20 00:00:00

		4.4. FULL OUTER JOIN: the result contains every record from both tables, with NULL filling in wherever either side lacks a match.
        hive> SELECT c.ID, c.NAME, o.AMOUNT, o.DATE
        > FROM CUSTOMERS c
        > FULL OUTER JOIN ORDERS o
        > ON (c.ID = o.CUSTOMER_ID);
        Total MapReduce CPU Time Spent: 4 seconds 740 msec
        OK
        1 Ramsh NULL NULL
        2 Kali 2050 2009-05-08 00:00:00
        3 Cham 3000 2009-10-08 00:00:00
        4 Muffi 1500 2009-11-20 00:00:00
        5 Kaush NULL NULL
        Time taken: 15.693 seconds, Fetched: 5 row(s)

Introduction to Hive UDFs (user-defined functions)

1. First, extend the UDF class.

2. Override the evaluate method:

import org.apache.hadoop.hive.ql.exec.UDF;

/**
 * Strips special characters and whitespace from a name using a regex.
 * Returns the cleaned value if anything remains; otherwise returns null.
 */
public class NameUDF extends UDF {
    // Regex matching punctuation, symbols, and whitespace
    public static final String nameRegx = "\\pP|\\pS|\\s";

    public String evaluate(String name) {

        // Guard against null and empty input
        if (name != null && !"".equals(name)) {
            // Strip the special characters by replacing them with the empty string
            name = name.replaceAll(nameRegx, "");
            if ("".equals(name)) {
                return null;
            } else {
                return name;
            }
        }
        return null;
    }
}

3. Upload the JAR to HDFS

hadoop@data2:~$ hadoop fs -put dw-udf-0.0.1-SNAPSHOT.jar /user/udf/

4. Add the JAR

hive> add jar hdfs://XXXXX:9000/user/udf/dw-udf-0.0.1-SNAPSHOT.jar;
converting to local hdfs://XXXXX:9000/user/udf/dw-udf-0.0.1-SNAPSHOT.jar
Added [/tmp/fbbf05c3-c70f-4a16-9033-5d57119a18d0_resources/dw-udf-0.0.1-SNAPSHOT.jar] to class path
Added resources: [hdfs://XXXX:9000/user/udf/dw-udf-0.0.1-SNAPSHOT.jar]

5. Create a temporary function

hive> create temporary function FN_CLS_Name as 'cn.XXXXXX.scrm.udf.NameUDF';
OK
Time taken: 0.013 seconds
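
A quick sanity check before touching real tables (assuming Hive 0.13+, where SELECT needs no FROM clause; the expected results follow from the regex in NameUDF):

    select FN_CLS_Name('Go pal!@#');   -- space, punctuation and symbols stripped: 'Gopal'
    select FN_CLS_Name('@@#');         -- nothing survives the regex, so the UDF returns NULL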

6. Use the UDF

- Inspect the table:
  hive> select * from emp_bak;
  OK
  1201 @@# 45000 Technical manager
  1202 Manisha 45000 Proof reader
  1203 Masthanvali 40000 Technicali writer
  1204 Kiran 40000 Hr Admin
  1205 Kranthi 30000 Op Admin
  Time taken: 0.033 seconds, Fetched: 5 row(s)

- Apply the UDF, then check the result:
  hive> insert overwrite table emp_bak select eid,FN_CLS_Name(ename),salary,destination,dept from emp_bak ;
  hive> select * from emp_bak;
  OK
  1201 NULL 45000 Technical manager
  1202 Manisha 45000 Proof reader
  1203 Masthanvali 40000 Technicali writer
  1204 Kiran 40000 Hr Admin
  1205 Kranthi 30000 Op Admin
  Time taken: 0.043 seconds, Fetched: 5 row(s)
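
A temporary function disappears when the session ends. On Hive 0.13+ a permanent function can be registered straight from the JAR on HDFS, avoiding the ADD JAR step in every session; a sketch reusing the path and class above:

    CREATE FUNCTION FN_CLS_Name AS 'cn.XXXXXX.scrm.udf.NameUDF'
    USING JAR 'hdfs://XXXXX:9000/user/udf/dw-udf-0.0.1-SNAPSHOT.jar';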

Reposted from blog.csdn.net/mnbvxiaoxin/article/details/104951558