Installing and Using Sqoop on CentOS 7

Background

Sqoop migrates data between MySQL, HDFS, Hive, HBase, and other big-data components.

Installation

1. Upload sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz to the CentOS 7 machine.
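For example, with scp from whatever machine holds the tarball (the destination host below is a placeholder; the /home/szc path matches the steps that follow):

# <centos7-host> is a placeholder for the CentOS 7 machine's address
scp sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz root@<centos7-host>:/home/szc/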

2. Extract it and rename the directory:

[root@localhost szc]# tar -zxvf sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz

[root@localhost szc]# mv sqoop-1.4.6.bin__hadoop-2.0.4-alpha sqoop-1.4.6

3. Enter the sqoop-1.4.6 directory and edit the configuration file:

[root@localhost szc]# cd sqoop-1.4.6/

[root@localhost sqoop-1.4.6]# mv conf/sqoop-env-template.sh conf/sqoop-env.sh

[root@localhost sqoop-1.4.6]# vim conf/sqoop-env.sh

Add the following environment variables; only the components you have actually installed need entries:

#Set path to where bin/hadoop is available

export HADOOP_COMMON_HOME=/home/szc/cdh/hadoop-2.5.0-cdh5.3.6

#Set path to where hadoop-*-core.jar is available

export HADOOP_MAPRED_HOME=/home/szc/cdh/hadoop-2.5.0-cdh5.3.6


#set the path to where bin/hbase is available

#export HBASE_HOME=


#Set the path to where bin/hive is available

export HIVE_HOME=/home/szc/apache-hive-2.3.7


#Set the path for where zookeper config dir is

export ZOOCFGDIR=/home/szc/zookeeper/zookeeper-3.4.9/conf

4. Upload the MySQL JDBC driver JAR into Sqoop's lib directory.
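For example (the connector file name below is hypothetical; use whatever driver version matches your MySQL server):

# hypothetical JAR name; substitute the driver you actually downloaded
[root@localhost sqoop-1.4.6]# cp /home/szc/mysql-connector-java-8.0.20.jar lib/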

5. Verify Sqoop:

[root@localhost sqoop-1.4.6]# bin/sqoop help
Warning: /home/szc/sqoop-1.4.6/bin/../../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /home/szc/sqoop-1.4.6/bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/szc/sqoop-1.4.6/bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
20/05/07 07:39:50 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6

usage: sqoop COMMAND [ARGS]

Available commands:
  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  import-mainframe   Import datasets from a mainframe server to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information

See 'sqoop help COMMAND' for information on a specific command.

List the databases on the MySQL server. Note the tool is named list-databases (plural); the first attempt below fails because of the missing s:

[root@localhost sqoop-1.4.6]# bin/sqoop list-database --connect jdbc:mysql://192.168.0.102 --username root --password root

Warning: /home/szc/sqoop-1.4.6/bin/../../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /home/szc/sqoop-1.4.6/bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/szc/sqoop-1.4.6/bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
No such sqoop tool: list-database. See 'sqoop help'.

[root@localhost sqoop-1.4.6]# bin/sqoop list-databases --connect jdbc:mysql://192.168.0.102 --username root --password root

Warning: /home/szc/sqoop-1.4.6/bin/../../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /home/szc/sqoop-1.4.6/bin/../../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/szc/sqoop-1.4.6/bin/../../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
20/05/07 07:45:11 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
20/05/07 07:45:11 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
20/05/07 07:45:11 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.

information_schema
azkaban
ke
knowlegegraph
mysql
oozie
performance_schema
sys
test
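As the WARN line above says, putting --password on the command line is insecure; -P makes Sqoop prompt for the password interactively instead. A sketch of the same listing with -P:

# -P reads the password from the terminal rather than the command line
[root@localhost sqoop-1.4.6]# bin/sqoop list-databases --connect jdbc:mysql://192.168.0.102 --username root -P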

Usage Examples

Importing a full table from MySQL into HDFS

[root@localhost sqoop-1.4.6]# bin/sqoop import --connect jdbc:mysql://192.168.0.102:3306/test --username root --password root --table users --target-dir /user/root/sqoop/users --delete-target-dir --num-mappers 1 --fields-terminated-by "\t"

The import tool copies data in. The options give the database JDBC URL, the username and password, the source table, and the HDFS target path; --delete-target-dir removes the target path first if it already exists, --num-mappers sets the number of mappers, and --fields-terminated-by sets the field delimiter.

When the job finishes, inspect the output file:

[root@localhost sqoop-1.4.6]# /home/szc/cdh/hadoop-2.5.0-cdh5.3.6/bin/hadoop fs -cat /user/root/sqoop/users/part-m-00000

20/05/07 07:58:43 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

1    songzeceng    szc    [email protected]
2    zeceng    szc    [email protected]
3    szc    sda    fd
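You can also list the target directory; alongside part-m-00000 there should be a _SUCCESS marker left by the MapReduce job:

[root@localhost sqoop-1.4.6]# /home/szc/cdh/hadoop-2.5.0-cdh5.3.6/bin/hadoop fs -ls /user/root/sqoop/users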

Importing a subset from MySQL into HDFS with a free-form query

[root@localhost sqoop-1.4.6]# bin/sqoop import --connect jdbc:mysql://192.168.0.102:3306/test --username root --password root --target-dir /user/root/sqoop/users --delete-target-dir --num-mappers 1 --fields-terminated-by "\t" --query 'select * from users where id <=2 and $CONDITIONS;'

--query supplies the SQL statement. Besides your own predicates, the WHERE clause must include $CONDITIONS, which Sqoop uses to pass each mapper its share of the WHERE condition. If the --query value is wrapped in double quotes, $CONDITIONS must be escaped with a backslash, i.e. --query "select * from users where id <=2 and \$CONDITIONS;". --table and --query cannot be used together. (With more than one mapper, one extra option is needed; see the sketch after the result below.)

Check the result:

[root@localhost sqoop-1.4.6]# /home/szc/cdh/hadoop-2.5.0-cdh5.3.6/bin/hadoop fs -cat /user/root/sqoop/users/part-m-00000

20/05/07 08:05:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

1    songzeceng    szc    [email protected]
2    zeceng    szc    [email protected]
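A free-form --query import with more than one mapper also requires --split-by, so Sqoop knows which column to partition the rows on. A sketch, assuming id is a suitable split column:

# hypothetical 2-mapper variant; --split-by is mandatory when --num-mappers > 1 with --query
[root@localhost sqoop-1.4.6]# bin/sqoop import --connect jdbc:mysql://192.168.0.102:3306/test --username root --password root --target-dir /user/root/sqoop/users --delete-target-dir --num-mappers 2 --split-by id --fields-terminated-by "\t" --query "select * from users where id <= 2 and \$CONDITIONS"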

Importing specific columns from MySQL into HDFS

Use --columns to pick the columns to import:

[root@localhost sqoop-1.4.6]# bin/sqoop import --connect jdbc:mysql://192.168.0.102:3306/test --username root --password root --table users --target-dir /user/root/sqoop/users --delete-target-dir --num-mappers 1 --fields-terminated-by "\t" --columns username,email

Check the result:

[root@localhost sqoop-1.4.6]# /home/szc/cdh/hadoop-2.5.0-cdh5.3.6/bin/hadoop fs -cat /user/root/sqoop/users/part-m-00000

20/05/07 08:12:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

songzeceng    [email protected]
zeceng    [email protected]
szc    fd

Importing rows that match a condition from MySQL into HDFS

Use --where to give the filter condition:

[root@localhost sqoop-1.4.6]# bin/sqoop import --connect jdbc:mysql://192.168.0.102:3306/test --username root --password root --table users --target-dir /user/root/sqoop/users --delete-target-dir --num-mappers 1 --fields-terminated-by "\t" --where "id=1"

Check the result:

[root@localhost sqoop-1.4.6]# /home/szc/cdh/hadoop-2.5.0-cdh5.3.6/bin/hadoop fs -cat /user/root/sqoop/users/part-m-00000

20/05/07 08:14:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

1    songzeceng    szc    [email protected]

--where can also be combined with --columns:

[root@localhost sqoop-1.4.6]# bin/sqoop import --connect jdbc:mysql://192.168.0.102:3306/test --username root --password root --table users --target-dir /user/root/sqoop/users --delete-target-dir --num-mappers 1 --fields-terminated-by "\t" --where "id=1" --columns=username,email

Check the result:

[root@localhost sqoop-1.4.6]# /home/szc/cdh/hadoop-2.5.0-cdh5.3.6/bin/hadoop fs -cat /user/root/sqoop/users/part-m-00000

20/05/07 08:15:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

songzeceng    [email protected]

However, --where cannot be combined with --query; if you need a filter inside a free-form query, fold it into --query itself, as sketched below.
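A sketch that expresses both the column list and the condition of the previous examples through --query alone:

[root@localhost sqoop-1.4.6]# bin/sqoop import --connect jdbc:mysql://192.168.0.102:3306/test --username root --password root --target-dir /user/root/sqoop/users --delete-target-dir --num-mappers 1 --fields-terminated-by "\t" --query "select username, email from users where id = 1 and \$CONDITIONS"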

Importing data from MySQL into Hive

[root@localhost sqoop-1.4.6]# bin/sqoop import --connect jdbc:mysql://192.168.0.102:3306/test --username root --password root --table users --hive-import --hive-overwrite --hive-table users --num-mappers 1 --fields-terminated-by "\t"

--hive-import tells Sqoop to import into Hive, --hive-overwrite overwrites any data already in the table, and --hive-table names the target Hive table, which is created if it does not exist. The table lives in the default Hive database, so its files land under /user/hive/warehouse/users, which the export example below reads back.

Check the result:

hive> select * from users;
OK

1    songzeceng    szc    [email protected]
2    zeceng    szc    [email protected]
3    szc    sda    fd

Time taken: 0.741 seconds, Fetched: 3 row(s)

Exporting data from HDFS/Hive to MySQL

[root@localhost sqoop-1.4.6]# bin/sqoop export --connect jdbc:mysql://192.168.0.102:3306/test --username root --password root --table users_sqoop --num-mappers 1 --export-dir /user/hive/warehouse/users --input-fields-terminated-by "\t"

export marks this as an export job. --export-dir names the HDFS directory to export, here the Hive warehouse directory that the Hive import above populated, and --input-fields-terminated-by gives the field delimiter used within each line of the input files.

Sqoop does not create the MySQL target table by default, and you must make sure the primary keys do not collide. The result:

mysql> select * from users_sqoop;

+----+------------+----------+------------+
| id | name       | password | email      |
+----+------------+----------+------------+
|  1 | songzeceng | szc      | [email protected] |
|  2 | zeceng     | szc      | [email protected] |
|  3 | szc        | sda      | fd         |
+----+------------+----------+------------+

3 rows in set
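Because the target table is not created automatically, it must exist before the export runs. A minimal sketch of a matching schema (the column types are assumptions inferred from the data above):

mysql> -- column types below are assumptions; match them to your source table
mysql> create table users_sqoop(id int primary key, name varchar(255), password varchar(255), email varchar(255));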

Running Sqoop from an options file

Create a job directory and write sqoop_test.opt:

[root@localhost sqoop-1.4.6]# mkdir job

[root@localhost sqoop-1.4.6]# vim job/sqoop_test.opt

Its contents are below; each option and each value goes on its own line:

export
--connect
jdbc:mysql://192.168.0.102:3306/test
--username
root
--password
root
--table
users_sqoop
--num-mappers
1
--export-dir
/user/hive/warehouse/users
--input-fields-terminated-by
"\t"

Run it, with --options-file pointing at the file:

[root@localhost sqoop-1.4.6]# bin/sqoop --options-file job/sqoop_test.opt

Check the result in MySQL (the users_sqoop table was emptied beforehand).

Before running the file:

mysql> select * from users_sqoop;
Empty set

After running it:

mysql> select * from users_sqoop;

+----+------------+----------+------------+
| id | name       | password | email      |
+----+------------+----------+------------+
|  1 | songzeceng | szc      | [email protected] |
|  2 | zeceng     | szc      | [email protected] |
|  3 | szc        | sda      | fd         |
+----+------------+----------+------------+

3 rows in set
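Sqoop ignores blank lines and lines beginning with # in an options file, so the file can be annotated. A sketch of the same job with comments:

# export the users data from the Hive warehouse back into MySQL
export
--connect
jdbc:mysql://192.168.0.102:3306/test
--username
root
# -P or --password-file would be safer than a plaintext password here
--password
root
--table
users_sqoop
--num-mappers
1
--export-dir
/user/hive/warehouse/users
--input-fields-terminated-by
"\t"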

Conclusion

That covers everything except HBase, which I did not try; see the official documentation for that.

Reposted from blog.csdn.net/qq_37475168/article/details/107306286