Hive load data operation
1. Load data
Syntax:
load data [local] inpath 'filepath' [overwrite] into table table_name [partition(part1=val1,part2=val2)]
Notes:
1. The load operation is a simple copy/move: it places the data file into the location that corresponds to the Hive table.
2. filepath can be:
a relative path, e.g. project/data1
an absolute path, e.g. /user/hive/project/data1
a full URI including the scheme, e.g. hdfs://namenode:9000/user/hive/project/data1
3. The local keyword
If local is specified, the load command looks for filepath on the local file system. Without local, the file is located via the URI given in inpath.
4. The overwrite keyword
If overwrite is used, the contents of the target table (or partition) are deleted first, and the contents of the file/directory pointed to by filepath are then added to the table/partition.
Without overwrite, if the target table (partition) already contains a file whose name conflicts with a file name in filepath, the existing file is replaced by the new one.
1.1 Load local data
-- create the table
create table tb_load1(id int,name string)
row format delimited fields terminated by ',';
-- load data from the local file system
load data local inpath '/home/hadoop/load1.txt' into table tb_load1;
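To follow along, the input file for the load above can be generated by hand. A minimal sketch (the rows are invented, and /tmp/load1.txt stands in for the /home/hadoop/load1.txt path used above):

```shell
# Write three comma-delimited id,name rows matching tb_load1's schema.
printf '1,tom\n2,jerry\n3,alice\n' > /tmp/load1.txt
cat /tmp/load1.txt
```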
1.2 Load hdfs data
load data inpath '/hive/test/load2.txt' into table tb_load1;
After a successful load from HDFS, the source file disappears from its original location: the load moves the file into the table's directory rather than copying it.
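The move can be observed directly with hdfs dfs (a sketch only, since it needs a running cluster; /user/hive/warehouse is the default warehouse location and may differ in your installation):

```shell
hdfs dfs -ls /hive/test/                      # before the load: load2.txt is present
# run the load statement above in the Hive CLI, then:
hdfs dfs -ls /hive/test/                      # after the load: load2.txt is gone
hdfs dfs -ls /user/hive/warehouse/tb_load1/   # ...it now sits in the table's directory
```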
1.3 Loading data into partition table
-- create a partitioned table
create table tb_load2(id int ,name string)
partitioned by (sex string)
row format delimited fields terminated by ',';
-- load the data; each file must contain only the rows for its target partition
load data inpath '/hive/test/load_part_male.txt' into table tb_load2 partition (sex='male');
load data inpath '/hive/test/load_part_female.txt' into table tb_load2 partition (sex='female');
1.4 Using the overwrite keyword
load data local inpath '/home/hadoop/load3.txt' overwrite into table tb_load1;
With overwrite, the previous contents of tb_load1 are replaced by the data in load3.txt.
2. Inserting data with the insert statement
Syntax:
-- query another table and insert into (or overwrite) the target table
insert overwrite/into table table_name
[partition(part=val,part2=val2,...)]
select fields,... from tb_other;
Multi-insert syntax:
from table_name t
insert overwrite table tb1 [partition(col=val,...)]
select statement
insert overwrite table tb2 [partition(col=val,...)]
select statement
...;
2.1 Simple use of the insert statement
create table tb_select1 (id int,name string,sex string)
row format delimited fields terminated by ',';
create table tb_insert1(id int,name string);
insert overwrite table tb_insert1 select id,name from tb_select1;
View the data now in tb_insert1:
select * from tb_insert1;
Append rows with insert into:
insert into table tb_insert1 select id,name from tb_select1 limit 2;
Query result:
2.2 Inserting into partitions with insert
create table tb_insert_part(id int,name string)
partitioned by(sex string);
insert overwrite table tb_insert_part partition(sex = 'male')
select id,name from tb_select1 where sex='male';
Query the result:
select * from tb_insert_part;
2.3 Multiple inserts
create table tb_mutil_insert1(id int,name string)
partitioned by(sex string);
create table tb_mutil_insert2(id int,name string)
partitioned by(sex string);
from tb_select1 t
insert overwrite table tb_mutil_insert1 partition (sex='male')
select t.id,t.name where t.sex='male'
insert overwrite table tb_mutil_insert2 partition (sex='female')
select t.id,t.name where t.sex='female';
Query result:
2.4 Dynamic Partition Insertion
create table tb_dy_part(id int,name string) partitioned by(sex string);
insert overwrite table tb_dy_part partition(sex)
select id,name,sex from tb_select1;
Dynamic partition inserts run in strict mode by default, which requires at least one static partition column. With none, the insert above fails unless the dynamic partition mode is switched to nonstrict:
FAILED: SemanticException [Error 10096]: Dynamic partition strict mode requires at least one static partition column. To turn this off set hive.exec.dynamic.partition.mode=nonstrict
Set the dynamic partition mode to nonstrict:
set hive.exec.dynamic.partition.mode=nonstrict;
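Put together, a session relying on fully dynamic partitioning might look like this (a sketch; on older Hive releases hive.exec.dynamic.partition may also default to false and need enabling first):

```sql
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
-- the dynamic partition column (sex) must come last in the select list
insert overwrite table tb_dy_part partition(sex)
select id, name, sex from tb_select1;
```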
3. Inserting query results into a new table with create table ... as select
create table tb_create_mode as
select id,name from tb_select1;
Query result:
4. Export data
4.1 Export a single query result to a local directory
Note: insert overwrite directory replaces the contents of the target directory, so in practice point it at a dedicated export directory rather than at a directory like /home/hadoop/ that holds other files.
insert overwrite local directory '/home/hadoop/'
select id,name from tb_select1;
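The files written by a directory export use Hive's default field separator, \001 (Ctrl-A). Since Hive 0.11 a directory export can declare its own row format, which makes the output readable as plain comma-separated text (a sketch reusing tb_select1; the target directory name is invented):

```sql
insert overwrite local directory '/home/hadoop/export_csv'
row format delimited fields terminated by ','
select id, name from tb_select1;
```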
4.2 Export multiple query results to HDFS
from tb_select1 t
insert overwrite directory '/hive/test/male'
select t.id,t.name,t.sex where t.sex='male'
insert overwrite directory '/hive/test/female'
select t.id,t.name,t.sex where t.sex='female';