[Hive] Deduplicating table data, and saving Hive query results locally or to HDFS

I. Deduplicating Hive table data

1. Copy the table structure

CREATE TABLE <new_table> LIKE <old_table>;

2. Insert the deduplicated data

insert overwrite table <new_table>
select t.id, t.local_path
from (
    select id, local_path,
           row_number() over (distribute by id sort by local_path) as rn
    from <old_table>
) t
where t.rn = 1;

The general pattern is:

insert overwrite table <new_table>
select <columns>
from (
    select <columns>,
           row_number() over (distribute by <column with duplicates> sort by <column used to order the duplicates>) as rn
    from <old_table>
) t
where t.rn = 1;

You can also skip creating a new table: running INSERT OVERWRITE against the original table replaces its contents with the deduplicated rows in place.
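The keep-one-row-per-key pattern above can be sketched outside Hive. Below is a minimal Python example using sqlite3, where row_number() over (partition by ... order by ...) plays the role of Hive's row_number() over (distribute by ... sort by ...). The table and column values are made up for illustration, and it assumes an SQLite build with window-function support (3.25+).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table old_table (id integer, local_path text)")
con.executemany("insert into old_table values (?, ?)",
                [(1, "/a"), (1, "/b"), (2, "/c")])  # id=1 appears twice

# Number the rows within each id group, then keep only rn = 1
rows = con.execute("""
    select id, local_path
    from (
        select id, local_path,
               row_number() over (partition by id order by local_path) as rn
        from old_table
    ) t
    where t.rn = 1
    order by id
""").fetchall()
print(rows)  # [(1, '/a'), (2, '/c')]
```

One row survives for id=1 (the first by local_path order), exactly as the Hive statement keeps the first row of each distribute-by group.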

3. Deduplicating a partitioned table

# Deduplicate a partitioned Hive table
# Grab the header row to build a comma-separated column list
fieldsTmp=$(hive -e "SET hive.cli.print.header=true;select * from ${dbname}.${table_name} limit 0;" | sed -e "s/\t/,/g;s/data\.//g" | grep -v "WARN")
fields2=${fieldsTmp//,${table_name}.dt/}   # drop the partition column (dt)
fields=${fields2//${table_name}./}         # strip the "table_name." prefix
hive -e "
use $dbname;
insert overwrite table $table_name partition(dt=$date_slice)
select $fields
from (
    select $fields, row_number() over(distribute by $tab_primarykey sort by $key_code desc) as rn
    from $table_name
) t where t.rn=1"

The variables that must be assigned in the example code are:

${dbname}: the database name

${table_name}: the table name

${date_slice}: the partition value

${tab_primarykey}: the column that contains duplicates

${key_code}: the column used to order rows within each group of duplicates

In fields2=${fieldsTmp//,${table_name}.dt/}, dt is your partition column.

II. Extracting a Hive table's column names in a script

# Save to a text file
hive -e "SET hive.cli.print.header=true;select * from dbname.tablename limit 0;" | sed -e "s/\t/,/g;s/data\.//g" | grep -v "WARN" > fields.csv

# Assign to a variable
fields=$(hive -e "SET hive.cli.print.header=true;select * from dbname.tablename limit 0;" | sed -e "s/\t/,/g;s/data\.//g" | grep -v "WARN")


This yields the Hive table's column names as a comma-separated list.
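The sed pipeline above does two things to the printed header line: it turns tab separators into commas and strips a fixed prefix from each column name (in the original, the literal string "data."). A minimal Python sketch of the same cleanup, using a hypothetical header for a table named student:

```python
# Hypothetical header printed by "SET hive.cli.print.header=true; ... limit 0":
# Hive prefixes each column with the table name and separates columns with tabs.
header = "student.id\tstudent.name\tstudent.score"

# Tabs -> commas, then strip the "student." prefix (the two sed substitutions)
fields = header.replace("\t", ",").replace("student.", "")
print(fields)  # id,name,score
```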

III. Saving Hive query results locally or to HDFS

Save query results to the local filesystem:

hive -e "use test;insert overwrite local directory '/tmp/score' ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' select * from student"

To save to HDFS instead, simply drop the LOCAL keyword from the command above.
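A rough Python stand-in for what the command produces: each result row becomes one comma-delimited line in a file inside the target directory. The sample rows and the 000000_0 file name are illustrative assumptions (Hive names its output files in a similar style, but the exact name varies).

```python
import os
import tempfile

# Stand-in result set for "select * from student"
rows = [(1, "alice", 90), (2, "bob", 85)]

# Hive writes files such as 000000_0 inside the target directory
out_dir = tempfile.mkdtemp()
path = os.path.join(out_dir, "000000_0")

# FIELDS TERMINATED BY ',' -> join each row's fields with a comma
with open(path, "w") as f:
    for row in rows:
        f.write(",".join(map(str, row)) + "\n")
```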
 

Reposted from blog.csdn.net/qq_44065303/article/details/112786504