A Sqoop shell script

#!/bin/bash
cur_date=$(date +%Y-%m-%d)
table=pay
hive_db=ods
hive_table=pay
dir=bigdata/public/ods

# Most recent date-named directory under the table path. Taking the last
# field of each `hdfs dfs -ls` line first ensures only directory names,
# not file modification timestamps, are matched.
get_latest_date() {
        hdfs dfs -ls "/$dir/$table" | awk '{print $NF}' \
                | grep -oE "[0-9]{4}-[0-9]{2}-[0-9]{2}" | sort | tail -n 1
}

if hdfs dfs -test -e "/$dir/$table"
then
        yesterday=$(get_latest_date)
else
        # First run: no directory yet, so pull the full table.
        yesterday=1900-01-01
fi

# If today's partition already exists, drop it from HDFS and Hive so the
# import can be re-run safely, then look up the previous watermark again.
if [ "$yesterday" = "$cur_date" ]
then
        hdfs dfs -rm -r "/$dir/$table/$cur_date"
        hive -e "alter table $hive_db.$hive_table drop partition (pdate='$cur_date');"
        yesterday=$(get_latest_date)
fi

# Fall back to a full load if no previous date could be found.
if [ -z "$yesterday" ]
then
        yesterday=1900-01-01
fi

# Import rows created or edited in [yesterday, today) from the source
# MySQL schema (the database name in the JDBC URL is the MySQL schema,
# not the Hive database). With -m 1 only one mapper runs, so --split-by
# takes effect only if the mapper count is raised.
output=$( \
sqoop import \
--connect jdbc:mysql://192.168.241.1/database \
--username UppDev \
--password Upp!Dev123 \
--target-dir "/${dir}/${table}/${cur_date}" \
--query "select * from ${table} where \$CONDITIONS and ((CAST(create_date as DATE) < '${cur_date}' and CAST(create_date as DATE) >= '${yesterday}') or (CAST(last_edit_date as DATE) < '${cur_date}' and CAST(last_edit_date as DATE) >= '${yesterday}'))" \
--fields-terminated-by '\001' \
--split-by fmid \
--m 1 \
2>&1)

rc=$?
if [ $rc -ne 0 ]; then
        echo "sqoop import failed (exit code $rc)"
        echo "---------------------------------------------------------------------------------------------"
        echo "$output"
        # Don't register a Hive partition for a failed import.
        exit $rc
fi
# Make sure the external table exists, then register today's directory
# as a new partition.
hive -e "
use $hive_db;
create external table if not exists $hive_table (
fmid string
) partitioned by (pdate string) location 'hdfs://ns1/${dir}/${table}';
alter table $hive_table add partition (pdate='${cur_date}') location 'hdfs://ns1/${dir}/${table}/${cur_date}';"
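
After a run, the newly registered partition can be checked with a one-liner (using the ods database and pay table configured above):

hive -e "show partitions ods.pay;"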

The recently improved Sqoop script

What it does:

It pulls one day's increment from the database at a time; the first run pulls the full table. Each run creates a date-named folder under the table's directory and registers an external partitioned table.
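
For example, after three daily runs the table directory might look like this (hypothetical dates, using the paths configured in the script):

/bigdata/public/ods/pay/2018-08-16    <- Hive partition pdate='2018-08-16'
/bigdata/public/ods/pay/2018-08-17    <- Hive partition pdate='2018-08-17'
/bigdata/public/ods/pay/2018-08-18    <- Hive partition pdate='2018-08-18'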

Each run looks up the most recent date under that directory. If it matches today's date, the existing data for today is deleted and re-imported.
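
The same lookup the script performs can be run by hand to see which date the next run will treat as the watermark (a sketch, using the example path from the script):

hdfs dfs -ls /bigdata/public/ods/pay | awk '{print $NF}' \
        | grep -oE "[0-9]{4}-[0-9]{2}-[0-9]{2}" | sort | tail -n 1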

As a result, re-running the incremental script does not fail, and there is no need to manually delete the HDFS files or drop the current day's partition from the Hive table.
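
Because re-runs are safe, the script can simply be scheduled once a day, for example via cron (a sketch; the script path and run time below are hypothetical):

30 1 * * * /opt/scripts/sqoop_pay_import.sh >> /var/log/sqoop_pay_import.log 2>&1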


Reposted from blog.csdn.net/ck978105293/article/details/81811144