Import a txt file into a MySQL table, then use Sqoop to import the table data into HDFS

1. Create a user for the data export and grant it privileges.

Log in to the first node of the MySQL cluster as root.

Under the root account, create a sqoopuser account for each MySQL server in the cluster; all subsequent export steps run as sqoopuser. (In a real production environment, the export would be taken from a replica instead.)
create user 'sqoopuser'@'192.168.135.135' identified by '123456';
create user 'sqoopuser'@'192.168.135.145' identified by '123456';
create user 'sqoopuser'@'192.168.135.155' identified by '123456';

Grant sqoopuser its privileges. (ALL PRIVILEGES on *.* is broader than an import needs; in practice SELECT on the source database is enough.)

grant all privileges on *.* to 'sqoopuser'@'192.168.135.135';
grant all privileges on *.* to 'sqoopuser'@'192.168.135.145';
grant all privileges on *.* to 'sqoopuser'@'192.168.135.155';
flush privileges;
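
To verify that the account works, connect from one of the three granted hosts and list its privileges (a quick check; assumes the mysql client is installed there):

mysql -h 192.168.135.135 -u sqoopuser -p123456 -e 'SHOW GRANTS;'

Putting the password on the command line is acceptable on a test box but should be avoided in production.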


2. Create a table and load data for the later export.

create table if not exists TImport(
    name varchar(100),
    id int,
    age int
) engine=innodb charset=gb2312;

load data local infile '/var/lib/mysql-files/m.txt' into table TImport fields terminated by ',' lines terminated by '\r\n';
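
The file has to match the delimiters declared above: comma-separated fields in table-column order (name, id, age) and CRLF line endings. A hypothetical m.txt:

zhangsan,1,20
lisi,2,25
wangwu,3,30

Note that on MySQL 8.0 the server has local_infile disabled by default (enable it with set global local_infile=1;), and newer clients additionally need the --local-infile option, or the statement is rejected.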


As a sanity check, the loaded rows can be dumped back out to a file:

select name,id,age into outfile '/var/lib/mysql-files/out.txt' fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n' from TImport;
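
select ... into outfile can only write under the directory named by secure_file_priv (commonly /var/lib/mysql-files on packaged installs) and fails if out.txt already exists. To look at the result, assuming shell access on the database host:

sudo cat /var/lib/mysql-files/out.txt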

3. Import into HDFS with Sqoop


bin/sqoop import \
--connect jdbc:mysql://192.168.135.135:3306/mysql \
--username sqoopuser \
--password 123456 \
--query 'select name,id,age from TImport where $CONDITIONS limit 2 ' \
--target-dir /opt/datas \
--delete-target-dir \
--num-mappers 1 \
--compress \
--compression-codec org.apache.hadoop.io.compress.BZip2Codec \
--direct \
--fields-terminated-by ','

Notes: /opt/datas is a path in HDFS, not on the local filesystem, and it does not need to be created beforehand; --delete-target-dir removes it first if it already exists. The literal $CONDITIONS placeholder is mandatory in any --query import (Sqoop substitutes split predicates into it), and the single quotes keep the shell from expanding it. Also be aware that MySQL direct mode (--direct shells out to mysqldump) does not support free-form --query imports in some Sqoop versions; if the job errors out, drop --direct and rerun.
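
Once the job completes, the output can be checked from the Hadoop side. A minimal check (the part-file name below is typical for a single-mapper job but may differ):

bin/hdfs dfs -ls /opt/datas
bin/hdfs dfs -text /opt/datas/part-m-00000.bz2

hdfs dfs -text decompresses known codecs such as bzip2, so the two rows selected by the --query should print as comma-separated text.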

Reposted from blog.csdn.net/u011500419/article/details/89481893