Basic usage of Sqoop2

Sqoop1.99.7

1. HDFS → MySQL

1. Start the server: sqoop2-server start

[root@slave2 bin]# sqoop2-server start

Setting conf dir: /opt/hadoop/packages/sqoop-1.99.7/bin/../conf

Sqoop home directory: /opt/hadoop/packages/sqoop-1.99.7

Starting the Sqoop2 server...

Sqoop2 server started.

2. Start the client: sqoop2-shell

[root@slave2 bin]# sqoop2-shell

Setting conf dir: /opt/hadoop/packages/sqoop-1.99.7/bin/../conf

Sqoop home directory: /opt/hadoop/packages/sqoop-1.99.7

Loading resource file .sqoop2rc

sqoop:000> set server --host slave2.server --port 12000 --webapp sqoop

Server is set successfully

===> OK

sqoop:000> set option --name verbose --value true

Verbose option was changed to true

===> OK

Resource file loaded.

Sqoop Shell: Type 'help' or '\h' for help.

The configuration file ./.sqoop2rc was written beforehand; the shell reads it automatically at startup, which is why the commands above are echoed under "Loading resource file .sqoop2rc".
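Judging from the two commands echoed between "Loading resource file .sqoop2rc" and "Resource file loaded." above, the file presumably contains the following (a sketch for this particular environment; adjust host and port to yours):

```shell
# ./.sqoop2rc — executed by sqoop2-shell at startup
set server --host slave2.server --port 12000 --webapp sqoop
set option --name verbose --value true
```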

3. Test the connection; if the versions are displayed, the connection succeeded

sqoop:000> show version --all

client version:

Sqoop 1.99.7 source revision 435d5e61b922a32d7bce567fe5fb1a9c0d9b1bbb

Compiled by abefine on Tue Jul 19 16:08:27 PDT 2016

0 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

server version:

Sqoop 1.99.7 source revision 435d5e61b922a32d7bce567fe5fb1a9c0d9b1bbb

Compiled by abefine on Tue Jul 19 16:08:27 PDT 2016

API versions:

[v1]

sqoop:000>

4. Create the HDFS link object

 

sqoop:000> create link --connector hdfs-connector

Creating link for connector with name hdfs-connector

Please fill following values to create new link object

Name: HDFS            // name of the link to create

HDFS cluster

URI: hdfs://192.168.111.71:9000   // value of fs.defaultFS in the Hadoop config

Conf directory: /opt/hadoop/packages/hadoop-2.6.0/etc/hadoop  // directory containing the Hadoop configuration files

Additional configs::

There are currently 0 values in the map:

entry#

New link was successfully created with validation status OK and name HDFS

5. Create the MySQL link object

sqoop:000> create link --connector generic-jdbc-connector

Creating link for connector with name generic-jdbc-connector

Please fill following values to create new link object

Name: MYSQL         // name of the link to create

Database connection

Driver class: com.mysql.jdbc.Driver        // JDBC driver class

Connection String: jdbc:mysql://192.168.111.73:3306/test  // database connection string

Username: root                                             // username

Password: ******                                        // password

Fetch Size:

Connection Properties:

There are currently 0 values in the map:

entry#

SQL Dialect

Identifier enclose:      // enter a single space here

New link was successfully created with validation status OK and name MYSQL

Identifier enclose:

This property specifies the delimiter placed around SQL identifiers. Some SQL dialects quote identifiers with double quotes, as in select * from "table_name", but that delimiter causes errors in MySQL. The property defaults to the double quote, so you cannot simply press Enter to accept it; it must be overridden. Here I overrode it with a single space.
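As an illustration of why the default breaks (assuming MySQL's default sql_mode, where ANSI_QUOTES is disabled):

```sql
-- In MySQL's default mode, double quotes delimit strings, not identifiers:
SELECT * FROM "people";   -- syntax error: "people" is parsed as a string literal
-- MySQL's native identifier delimiter is the backtick:
SELECT * FROM `people`;   -- works
-- With the identifier-enclose character overridden to a space, the generated
-- SQL effectively uses unquoted identifiers: SELECT * FROM people
```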

 

 

View the created links:

sqoop:000> show link

+-------+------------------------+---------+
| Name  | Connector Name         | Enabled |
+-------+------------------------+---------+
| HDFS  | hdfs-connector         | true    |
| MYSQL | generic-jdbc-connector | true    |
+-------+------------------------+---------+

6. Create the job

sqoop:000> create job --from HDFS --to MYSQL

Creating job for links with from name HDFS and to name MYSQL

Please fill following values to create new job object

Name: HDFStoMYSQL       // job name

Input configuration

Input directory: /connect    // HDFS directory the data is read from

Override null value:

Null value:

Incremental import

Incremental type:

  0 :NONE

  1 :NEW_FILES

Choose:

Last imported date:

Database target

Schema name: test                             // target database (schema) to import into

Table name: people                             // target table to import into

Column names:

There are currently 0 values in the list:

element#

Staging table:

Clear stage table:

Throttling resources

Extractors:

Loaders:

Classpath configuration

Extra mapper jars:

There are currently 0 values in the list:

element#

New job was successfully created with validation status OK and name HDFStoMYSQL

Note: the table name and the table SQL statement are mutually exclusive. If a table name is provided, no table SQL statement should be given; if a table SQL statement is provided, no table name should be given.
Table column names should only be provided together with a table name.

 

7. Preparation

Create the target table in MySQL:

mysql> CREATE TABLE people (

   -> name varchar(255),

   -> sex varchar(255),

   -> id int(2)

   -> );

Query OK, 0 rows affected (0.16 sec)
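The job's Input directory was /connect, so that HDFS path must hold delimited text whose fields match the people columns in order. A minimal sketch of preparing sample data follows; the file name and the unquoted comma-separated layout are assumptions — the exact field quoting the HDFS connector accepts depends on its CSV intermediate format:

```shell
# Create a local sample file matching the people table layout (name, sex, id).
cat > people.csv <<'EOF'
xiaoming,man,1
xiaohong,woman,2
EOF
```

The file can then be uploaded to the job's input directory with the Hadoop client, e.g. `hdfs dfs -mkdir -p /connect && hdfs dfs -put people.csv /connect/`.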

8. Run the job

View the jobs:

sqoop:000> show job

+----+-------------+-----------------------+--------------------------------+---------+
| Id | Name        | From Connector        | To Connector                   | Enabled |
+----+-------------+-----------------------+--------------------------------+---------+
| 8  | HDFStoMYSQL | HDFS (hdfs-connector) | MYSQL (generic-jdbc-connector) | true    |
+----+-------------+-----------------------+--------------------------------+---------+

 

Start the job:

sqoop:000> start job --name HDFStoMYSQL

Submission details

Job Name: HDFStoMYSQL

Server URL: http://slave2.server:12000/sqoop/

Created by: root

Creation date: 2017-04-11 07:17:33 CST

Lastly updated by: root

External ID: job_1491832257951_0006

         http://master.server:8088/proxy/application_1491832257951_0006/

Target Connector schema: Schema{name=test.people,columns=[

         Text{name=name,nullable=true,type=TEXT,charSize=null},

         Text{name=sex,nullable=true,type=TEXT,charSize=null},

         FixedPoint{name=id,nullable=true,type=FIXED_POINT,byteSize=4,signed=true}]}

2017-04-11 07:17:33 CST: BOOTING  - Progress is not available

Progress can be monitored at http://192.168.111.71:8088.
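Besides the YARN web UI, the job can also be checked from the Sqoop2 shell itself, per the Sqoop 1.99.7 command reference:

```shell
sqoop:000> status job --name HDFStoMYSQL
```

A running job can likewise be aborted with `stop job --name HDFStoMYSQL`.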

 

9. Verify

mysql> select * from people;

+----------+-------+------+
| name     | sex   | id   |
+----------+-------+------+
| xiaoming | man   |    1 |
| xiaohong | woman |    2 |
+----------+-------+------+

2 rows in set (0.13 sec)

Reposted from blog.csdn.net/eat_shopping/article/details/72599019