A commonly used Flume configuration for collecting a dynamically growing file

tail-hdfs.conf
#### This handles a dynamic file: events are collected as they are appended to the file
Read the data with the tail command and sink it to HDFS
Startup command:
bin/flume-ng agent -c conf/ -f tail-hdfs.conf -n ag1 -Dflume.root.logger=INFO,console
########
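
To try the flow end to end, once the agent is started you can append test lines to the tailed file with a simple shell loop like the one below (the log path is the one used in this config; adjust it to your environment):

while true; do
  echo "test line $(date +%s)" >> /usr/local/nginx/logs/log.frame.access.log
  sleep 1
done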

# Define the names of the three components (source, sink, channel)
ag1.sources = source1
ag1.sinks = sink1
ag1.channels = channel1

# Configure the source component
ag1.sources.source1.type = exec
ag1.sources.source1.command = tail -F /usr/local/nginx/logs/log.frame.access.log

# Configure the sink component
ag1.sinks.sink1.type = hdfs
ag1.sinks.sink1.hdfs.path = hdfs://hdp-1:9000/nginx_log/%y-%m-%d/%H-%M
ag1.sinks.sink1.hdfs.filePrefix = app_log
ag1.sinks.sink1.hdfs.fileSuffix = .log
ag1.sinks.sink1.hdfs.batchSize = 100
ag1.sinks.sink1.hdfs.fileType = DataStream
ag1.sinks.sink1.hdfs.writeFormat = Text

## roll: rules that control when the sink rolls over to a new file
## roll by file size (bytes)
ag1.sinks.sink1.hdfs.rollSize = 5120
## roll by number of events
ag1.sinks.sink1.hdfs.rollCount = 1000000
## roll by time interval (seconds)
ag1.sinks.sink1.hdfs.rollInterval = 60
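## when more than one roll condition is configured, the file rolls as soon as any one of them is met;
## setting a value to 0 disables that particular condition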

## rules that control how the target directory is generated (round the path timestamp down)
ag1.sinks.sink1.hdfs.round = true
ag1.sinks.sink1.hdfs.roundValue = 10
ag1.sinks.sink1.hdfs.roundUnit = minute

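## use the agent's local time to fill the %y-%m-%d/%H-%M escapes in hdfs.path
## (otherwise each event would need a timestamp header, e.g. from a timestamp interceptor)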
ag1.sinks.sink1.hdfs.useLocalTimeStamp = true

# Configure the channel component
ag1.channels.channel1.type = memory
## maximum number of events held in the channel
ag1.channels.channel1.capacity = 500000
## capacity of a single transaction: 600 events (should be at least the sink's hdfs.batchSize)
ag1.channels.channel1.transactionCapacity = 600

# Bind the source and sink to the channel
ag1.sources.source1.channels = channel1
ag1.sinks.sink1.channel = channel1
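
Once the agent is running and events are flowing, one way to confirm that files are landing in the time-bucketed directories is to list and sample them with the HDFS CLI (hdp-1:9000 is the NameNode address used above; adjust to your cluster). Files that are still being written carry a .tmp suffix until they roll:

hdfs dfs -ls -R /nginx_log/
hdfs dfs -cat /nginx_log/*/*/app_log*.log | head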
