Connecting multiple Flume agents


1. Requirement

        Use Flume on example01 to collect the output of a tail command and forward it to example02, where a second agent stores the received data on an HDFS cluster.
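For a quick test, something has to be appending to the log file that the source tails. A minimal sketch, assuming the path from the configuration below (the message format is arbitrary):

while true; do echo "test-$(date +%s)" >> /home/hadoop/logs/test.log; sleep 1; done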


2. Implementation

1) The tail-avro.conf file on example01
a1.sources = r1
a1.sinks = k1
a1.channels = c1

#The source is configured the same way as in the single-machine setup
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/hadoop/logs/test.log

#The key part is the sink configuration
#It binds to the service address of the other machine, not this one: the avro sink is the sending side (an avro client) that pushes events to example02
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = example02
a1.sinks.k1.port = 4141
a1.sinks.k1.batch-size = 2

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

#Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
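Before starting the sender, it is worth checking that example01 can reach the avro port on example02. A quick sketch, assuming netcat is installed:

nc -zv example02 4141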

2) The avro-hdfs.conf file on example02
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
#The avro source is the receiving service; bind it on this machine
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4141


#Configure the HDFS sink; the escape sequences in the path (%y-%m-%d/%H%M) are filled in from the event timestamp
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = /flume/events/%y-%m-%d/%H%M/
a1.sinks.k1.hdfs.filePrefix = events-

#Round the timestamp used in the path down to 10-minute buckets
a1.sinks.k1.hdfs.round = true
a1.sinks.k1.hdfs.roundValue = 10
a1.sinks.k1.hdfs.roundUnit = minute

#Roll to a new file every 3 seconds, 500 bytes, or 20 events, whichever comes first
a1.sinks.k1.hdfs.rollInterval = 3
a1.sinks.k1.hdfs.rollSize = 500
a1.sinks.k1.hdfs.rollCount = 20

#Flush events to HDFS in batches of 5, taking the timestamp from the local clock instead of an event header
a1.sinks.k1.hdfs.batchSize = 5
a1.sinks.k1.hdfs.useLocalTimeStamp = true

#Write plain text instead of the default SequenceFile
a1.sinks.k1.hdfs.fileType = DataStream

#Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100


#Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
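Once events are flowing, the output can be inspected on the HDFS side. A sketch of the check, with paths following the hdfs.path and filePrefix settings above:

hdfs dfs -ls -R /flume/events/
hdfs dfs -cat '/flume/events/*/*/events-*'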

3) Startup commands (run from the Flume home directory)

Start the receiving agent on example02 first, so its avro source is already listening when the sender connects:

bin/flume-ng agent --conf conf --conf-file conf/avro-hdfs.conf --name a1 -Dflume.root.logger=INFO,console

Then start the sending agent on example01:

bin/flume-ng agent --conf conf --conf-file conf/tail-avro.conf --name a1 -Dflume.root.logger=INFO,console
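To verify the receiving agent independently of the tail source, Flume ships an avro client that sends the lines of a file as events. A sketch, run from the Flume home directory of any machine that can reach example02 (the file path is just an example):

bin/flume-ng avro-client --conf conf -H example02 -p 4141 -F /home/hadoop/logs/test.log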



