1. Start the Kafka cluster
I won't cover setting up the Kafka cluster itself here; if you haven't built one before, see my earlier posts.
Start the ZooKeeper cluster first, then the Kafka cluster (run these commands on every node):
bin/zkServer.sh start
bin/kafka-server-start.sh config/server.properties
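If, as the broker list further down suggests, the cluster runs on cdh0, cdh1, and cdh2, a small loop can bring everything up in one pass. This is only a sketch: the install paths /opt/zookeeper and /opt/kafka are assumptions, so substitute your own.

# assumed layout: same ZooKeeper/Kafka install paths on every node
for host in cdh0 cdh1 cdh2; do
  ssh $host "/opt/zookeeper/bin/zkServer.sh start"
  # -daemon runs the broker in the background so ssh returns
  ssh $host "/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties"
done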
2. Create the Kafka topic
bin/kafka-topics.sh --zookeeper cdh0:2181 --create --replication-factor 3 --partitions 3 --topic ctlog
3. Check that the topic was created
bin/kafka-topics.sh --zookeeper cdh0:2181 --list
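Beyond just listing topic names, --describe shows the partition leaders and replica assignment, which confirms the replication factor of 3 actually took effect:

bin/kafka-topics.sh --zookeeper cdh0:2181 --describe --topic ctlog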
4. Start a Kafka console consumer to wait for the data from Flume
bin/kafka-console-consumer.sh --bootstrap-server cdh0:9092 --topic ctlog --from-beginning
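Before wiring up Flume, you can optionally sanity-check the topic end to end with a console producer in a second terminal; anything you type there should show up in the consumer above:

bin/kafka-console-producer.sh --broker-list cdh0:9092 --topic ctlog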
5. Configure Flume
Create ct_log.conf:
# define the agent's source, sink, and channel names
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# source: tail log.csv from the start of the file (-c +0) and keep following it (-F)
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F -c +0 /opt/package/log.csv
a1.sources.r1.shell = /bin/bash -c
# sink: Kafka sink writing to the ctlog topic on the three brokers
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.brokerList = cdh0:9092,cdh1:9092,cdh2:9092
a1.sinks.k1.topic = ctlog
a1.sinks.k1.batchSize = 20
a1.sinks.k1.requiredAcks = 1
# channel: in-memory buffer of up to 1000 events, 100 per transaction
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
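Note that brokerList, topic, batchSize, and requiredAcks are the legacy Kafka sink property names. On Flume 1.7 or newer they are deprecated, and an equivalent sink block using the current names would look like this:

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = cdh0:9092,cdh1:9092,cdh2:9092
a1.sinks.k1.kafka.topic = ctlog
a1.sinks.k1.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1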
6. Run Flume
bin/flume-ng agent --conf conf/ --name a1 --conf-file testjob/ct_log.conf
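If nothing reaches the consumer, rerunning the agent with a console logger makes it easy to see what it is doing:

bin/flume-ng agent --conf conf/ --name a1 --conf-file testjob/ct_log.conf -Dflume.root.logger=INFO,console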
At this point the pipeline is essentially complete: the data-generation code from earlier keeps appending records to log.csv, Flume tails that file and ships the data to Kafka, and the console consumer started in step 4 consumes it.
You should now see the records printed in the consumer terminal.