Class Notes, 2020-09-29 (Sqoop Introduction and Data Migration)

I. Sqoop Overview

Sqoop is a tool for transferring data between Hadoop and relational databases (see the official site).

  • Imports data from an RDBMS into HDFS, Hive, or HBase
  • Exports data from HDFS back to an RDBMS
  • Uses MapReduce for imports and exports, providing parallel operation and fault tolerance

Target users

  • System administrators and database administrators
  • Big data analysts, big data development engineers, etc.

II. Sqoop Operations (related documentation)

1. Importing data from an RDBMS into HDFS

sqoop import \
--connect jdbc:mysql://localhost:3306/retail_db \
--driver com.mysql.jdbc.Driver \
--table customers \
--username root \
--password ok \
--target-dir /data/retail_db/customers \
--m 3

sqoop-import is an alias for sqoop import.
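
After the import finishes, the result can be inspected directly on HDFS; with three map tasks the table lands as three part-m files. A minimal check, assuming the target directory used above:

# list the generated part files and peek at the first one
hdfs dfs -ls /data/retail_db/customers
hdfs dfs -cat /data/retail_db/customers/part-m-00000 | head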

  • Filtering the imported rows with a WHERE clause
sqoop import \
 --connect jdbc:mysql://localhost:3306/retail_db \
 --driver com.mysql.jdbc.Driver \
 --table orders --where "order_id < 500" \
 --username root \
 --password ok \
 --delete-target-dir \
 --target-dir /data/retail_db/orders \
 --m 3

  • Selecting the imported columns with --columns
sqoop import \
 --connect jdbc:mysql://localhost:3306/retail_db \
 --driver com.mysql.jdbc.Driver \
 --table customers \
 --columns "customer_id,customer_fname,customer_lname" \
 --username root \
 --password ok \
 --delete-target-dir \
 --target-dir /data/retail_db/customers \
 --m 3

  • Importing data with a free-form query; the query must contain the literal \$CONDITIONS placeholder, which Sqoop replaces in each map task with that task's split range over the --split-by column
sqoop import \
 --connect jdbc:mysql://localhost:3306/retail_db \
 --driver com.mysql.jdbc.Driver \
 --query "select * from orders where order_status!='CLOSED' and \$CONDITIONS"  \
 --username root \
 --password ok \
 --split-by order_id \
 --delete-target-dir \
 --target-dir /data/retail_db/orders \
 -m 3
  • Incremental imports with Sqoop. --incremental selects the mode:
    1) append: only new records are appended
    2) lastmodified: records that were added or updated are re-imported
 sqoop import \
 --connect jdbc:mysql://localhost:3306/retail_db \
 --table orders \
 --username root \
 --password ok \
 --incremental append \
 --check-column order_date \
 --last-value '2013-07-24 00:00:00' \
 --target-dir /data/retail_db/orders \
 --m 3 
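
The command above uses append mode. A lastmodified import follows the same pattern; the sketch below is only an illustration of the syntax: order_date stands in for what would normally be a "last updated" timestamp column maintained by the source table, and --merge-key tells Sqoop how to merge re-imported rows into the existing target directory by primary key.

# lastmodified mode: re-import rows whose check column is newer than --last-value,
# then merge them with the data already in --target-dir on the order_id key
sqoop import \
--connect jdbc:mysql://localhost:3306/retail_db \
--table orders \
--username root \
--password ok \
--incremental lastmodified \
--check-column order_date \
--last-value '2013-07-24 00:00:00' \
--merge-key order_id \
--target-dir /data/retail_db/orders \
--m 3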

2. Importing data into Hive


| Argument | Description |
| --- | --- |
| --hive-home <dir> | Override $HIVE_HOME |
| --hive-import | Import tables into Hive (uses Hive's default delimiters if none are set) |
| --hive-overwrite | Overwrite existing data in the Hive table |
| --create-hive-table | If set, the job fails if the target Hive table already exists; false by default |
| --hive-table <table-name> | Sets the table name to use when importing into Hive |
| --hive-drop-import-delims | Drops \n, \r, and \01 from string fields when importing into Hive |
| --hive-delims-replacement | Replaces \n, \r, and \01 in string fields with a user-defined string when importing into Hive |
| --hive-partition-key | Name of the Hive field on which the imported data is partitioned |
| --hive-partition-value <v> | String value used as the partition value for this import job |
| --map-column-hive <map> | Override the default SQL-to-Hive type mapping for the configured columns |
  • 1. Copy the Hive jars into Sqoop's lib directory
 # copy the required Hive jars so Sqoop can talk to Hive
 cp /opt/install/hive/lib/hive-common-1.1.0-cdh5.14.2.jar /opt/install/sqoop/lib/
 cp /opt/install/hive/lib/hive-shims* /opt/install/sqoop/lib/

  • 2. Import into Hive
 sqoop import \
 --connect jdbc:mysql://localhost:3306/retail_db  \
 --table orders \
 --username root  \
 --password ok \
 --hive-import  \
 --create-hive-table \
--hive-database retail_db \
--hive-table orders \
--m 3

  • 3. Import into a Hive partitioned table; note that the partition column must not be imported as an ordinary column (the query below therefore selects only order_id and order_status)
 sqoop import \
 --connect jdbc:mysql://localhost:3306/retail_db  \
 --driver com.mysql.jdbc.Driver \
 --query "select order_id,order_status from orders where order_date&gt;='2013-11-03' and order_date&lt;'2013-11-04' and \$CONDITIONS" \
 --username root  \
 --password ok \
 --delete-target-dir \
 --target-dir /data/retail_db/orders \
 --split-by order_status \
 --hive-import  \
 --create-hive-table \
 --hive-database retail_db \
 --hive-table orders \
 --hive-partition-key "order_date" \
 --hive-partition-value "2013-11-03" \
 --m 3
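
After the job completes, the new table and its partition can be checked from the Hive CLI; a quick sanity check, assuming the hive client is on the PATH:

# confirm the table exists, the 2013-11-03 partition was created, and rows landed in it
hive -e "use retail_db; show tables; show partitions orders; select * from orders limit 5;"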

3. Importing data into HBase


| Argument | Description |
| --- | --- |
| --column-family <family> | Sets the target column family for the import |
| --hbase-create-table | If specified, create missing HBase tables |
| --hbase-row-key <col> | Specifies which input column to use as the row key; if the input table has a composite key, <col> must be a comma-separated list of the composite key columns |
| --hbase-table <table-name> | Specifies an HBase table to use as the target instead of HDFS |
| --hbase-bulkload | Enables bulk loading |
#create the target table and column family in the HBase shell first
create 'emp_hbase_import','details'
#import from MySQL into HBase
 sqoop import  \
 --connect jdbc:mysql://localhost:3306/sqoop \
 --username root  \
 --password ok \
 --table emp \
 --columns "EMPNO,ENAME,JOB,SAL,COMM"  \
 --hbase-table emp_hbase_import \
 --column-family details  \
 --hbase-row-key "EMPNO" \
 --m 1
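
The imported rows can then be inspected from the HBase shell; a small sanity check, run non-interactively by piping a scan into the shell:

# scan a few rows of the freshly imported table
echo "scan 'emp_hbase_import', {LIMIT => 5}" | hbase shell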

4. Exporting data from HDFS to MySQL

#first create an empty table in MySQL (the 1=2 predicate copies only the schema, no rows)
create table customers_demo as select * from customers where 1=2;
#create an HDFS directory and upload the data file
hdfs dfs -mkdir /customerinput
hdfs dfs -put customers.csv /customerinput/
#export from HDFS to MySQL
sqoop export \
--connect jdbc:mysql://localhost:3306/retail_db \
--username root \
--password ok \
--table customers_demo \
--export-dir /customerinput/ \
--fields-terminated-by ',' \
--m 1 
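
Whether the export succeeded can be verified from the MySQL client; a quick check, assuming the same credentials as above:

# count the exported rows and sample a few of them
mysql -uroot -pok -e "select count(*) from retail_db.customers_demo; select * from retail_db.customers_demo limit 5;"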

5. Sqoop options-file scripts

#Sqoop options-file script
#1. Write the options file (e.g. job_RDBMS2HDFS.opt), one option or value per line, with the following content
#############################
import
--connect
jdbc:mysql://hadoop01:3306/retail_db
--driver
com.mysql.jdbc.Driver
--table
customers
--username
root
--password
root
--target-dir
/data/retail_db/customers
--delete-target-dir
--m
3
##############################
#2. Run Sqoop with the options file
sqoop --options-file job_RDBMS2HDFS.opt

6. Sqoop jobs

 #create a job; note there must be a space before import (i.e. "-- import", not "--import")
 sqoop job \
 --create mysqlToHdfs \
 -- import \
 --connect jdbc:mysql://localhost:3306/retail_db \
 --table orders \
 --username root \
 --password ok \
 --incremental append \
 --check-column order_date \
 --last-value '0' \
 --target-dir /data/retail_db/orders \
 --m 3 
 #list saved jobs
 sqoop job --list
 #run the job; in practice it is often scheduled with crontab
 sqoop job --exec mysqlToHdfs
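
For scheduled runs, the exec command can be dropped into a crontab entry. A sketch only, assuming Sqoop is installed under /opt/install/sqoop as in the jar-copy step above; note that by default sqoop job prompts for the database password at execution time, so unattended runs need either the metastore configured to store it (sqoop.metastore.client.record.password=true) or a --password-file.

# example schedule: run the incremental job every day at 01:00 and append output to a log file
0 1 * * * /opt/install/sqoop/bin/sqoop job --exec mysqlToHdfs >> /tmp/sqoop_mysqlToHdfs.log 2>&1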


Reposted from blog.csdn.net/m0_48758256/article/details/108867095