Flume Data Collection

1. Create the directory that Flume will monitor. If it is not created beforehand, Flume will throw an error on startup.
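For example, a minimal sketch assuming the agent runs with permission to create directories under /root (the path matches the spoolDir used in the configuration below):

mkdir -p /root/logs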

2. Configure Flume

source: spooling directory

channel: memory

sink: HDFS

# Define the agent name and the names of the source, channel, and sink
a4.sources = s1
a4.channels = c1
a4.sinks = k1

# Define the source: watch a spooling directory
a4.sources.s1.type = spooldir
a4.sources.s1.spoolDir = /root/logs

# Define the channel: memory-backed
a4.channels.c1.type = memory
a4.channels.c1.capacity = 10000
a4.channels.c1.transactionCapacity = 100

# Define an interceptor that adds a timestamp header to each event (needed for the %Y%m%d escape in the HDFS path)
a4.sources.s1.interceptors = i1
a4.sources.s1.interceptors.i1.type = org.apache.flume.interceptor.TimestampInterceptor$Builder

# Define the sink: write to HDFS
a4.sinks.k1.type = hdfs
a4.sinks.k1.hdfs.path = hdfs://ns1/flume/%Y%m%d
a4.sinks.k1.hdfs.filePrefix = events-
a4.sinks.k1.hdfs.fileType = DataStream
# Do not roll files based on event count
a4.sinks.k1.hdfs.rollCount = 0
# Roll a new file once the current one reaches 128 MB
a4.sinks.k1.hdfs.rollSize = 134217728
# Roll a new file every 60 seconds
a4.sinks.k1.hdfs.rollInterval = 60

# Wire the source, channel, and sink together
a4.sources.s1.channels = c1
a4.sinks.k1.channel = c1

3. Copy Hadoop jars and configuration into Flume

Copy hadoop-common-x.x.x.jar, commons-configuration-x.x.jar, hadoop-auth-x.x.x.jar, and hadoop-hdfs-x.x.x.jar into flume/lib; Flume uses the Hadoop API when writing data to HDFS.

Copy core-site.xml and hdfs-site.xml into flume/conf; Flume needs them to know the Hadoop cluster's configuration.
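A sketch of those copies, assuming the standard Hadoop 2.x tarball layout and that $HADOOP_HOME and $FLUME_HOME point at the two installations (exact jar names and locations vary by Hadoop release):

# Hadoop client jars that the HDFS sink needs on Flume's classpath
cp $HADOOP_HOME/share/hadoop/common/hadoop-common-*.jar $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/common/lib/commons-configuration-*.jar $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/common/lib/hadoop-auth-*.jar $FLUME_HOME/lib/
cp $HADOOP_HOME/share/hadoop/hdfs/hadoop-hdfs-*.jar $FLUME_HOME/lib/
# Cluster configuration so Flume can resolve the ns1 nameservice
cp $HADOOP_HOME/etc/hadoop/core-site.xml $HADOOP_HOME/etc/hadoop/hdfs-site.xml $FLUME_HOME/conf/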

4. Start Flume with the configuration file a4.conf:

bin/flume-ng agent -n a4 -c conf -f conf/a4.conf -Dflume.root.logger=INFO,console

From this point on, any file dropped into /root/logs is picked up by Flume and uploaded to HDFS.
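A quick end-to-end check, assuming a sample file named /root/test.log (hypothetical) and an HDFS client on the PATH:

cp /root/test.log /root/logs/
hdfs dfs -ls /flume/$(date +%Y%m%d)

Once the file has been ingested, the spooldir source renames it in /root/logs with a .COMPLETED suffix by default, and its events appear under the dated directory on HDFS.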


Reposted from mvplee.iteye.com/blog/2248347