Connecting Flume to Kafka

Hello everyone! For connecting Flume to Kafka, the steps below can serve as a reference.

 

Essentially, Flume acts as a Kafka producer: it tails a log file and pushes each new line into Kafka, and a Kafka console consumer displays what arrives.

Step 1: Edit the Flume-Kafka configuration file, flume-kafka.sh, in Flume's conf directory (the full script is listed at the end of this post)

Note: this step assumes that a topic named kafkatest has already been created in Kafka, for example as sketched below.
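A minimal sketch of creating the topic with Kafka's bundled CLI; the ZooKeeper address hadoop:2181 is taken from the consumer command used later in this post, and the single-partition, single-replica settings are assumptions for a one-broker test setup:

 kafka-topics.sh --create --zookeeper hadoop:2181 --replication-factor 1 --partitions 1 --topic kafkatest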

Step 2: Start the Flume agent with that configuration

 bin/flume-ng agent -c conf -f conf/flume-kafka.sh -n agent

Note: the value after -n must match the agent name used inside the configuration file (agent here); I have been bitten by this before.

(Screenshot: Flume has started and is waiting for input.)

Step 3: Start the consumer program of Kafka

kafka-console-consumer.sh --topic kafkatest --zookeeper hadoop:2181
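A side note: on newer Kafka releases the console consumer connects to a broker rather than to ZooKeeper; a sketch assuming the broker listens on hadoop:9092:

 kafka-console-consumer.sh --topic kafkatest --bootstrap-server hadoop:9092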

Step 4: Append data to the log file and check whether it shows up on the Kafka consumer

  Data can be appended to the log file with an echo command, a crontab entry, or a scheduler such as Azkaban (a crontab sketch follows below).
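For example, a hypothetical crontab entry that appends a timestamped line to the monitored file every minute (the message text is only an illustration):

 * * * * * echo "heartbeat $(date)" >> /root/test/abc.log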

[root@hadoop test]# ll
total 40
-rw-r--r-- 1 root root     1 Mar 11 13:14 abc.log
-rw-r--r-- 1 root root   675 Aug  2  2018 derby.log
-rw-r--r-- 1 root root    85 Jul  9  2018 hbase.txt
drwxr-xr-x 5 root root  4096 Aug  2  2018 metastore_db
-rw-r--r-- 1 root root    36 Oct 20  2017 person.txt
-rw-r--r-- 1 root root   239 Jun 29  2018 stu.txt
-rw-r--r-- 1 root root 14246 Feb 22 16:59 zookeeper.out
[root@hadoop test]# echo " hello hai" >> abc.log
[root@hadoop test]# echo " hello hai1" >> abc.log

 Check whether Kafka has consumed the new lines:

[root@hadoop ~]# kafka-console-consumer.sh --topic kafkatest --zookeeper hadoop:2181
 hello hai
 hello hai1

As you can see, Kafka has consumed the two new lines, which verifies the pipeline end to end.

 

The flume-kafka.sh script is as follows:

# Start flume: bin/flume-ng agent -c conf -f conf/flume-kafka.sh -n agent


agent.sources = s1
agent.channels = c1
agent.sinks = k1

# Exec source: tail the log file and feed each new line into the channel
agent.sources.s1.type=exec
agent.sources.s1.command=tail -F /root/test/abc.log
agent.sources.s1.channels=c1

# Memory channel
agent.channels.c1.type=memory
agent.channels.c1.capacity=10000
agent.channels.c1.transactionCapacity=100

# Configure a Kafka sink
agent.sinks.k1.type=org.apache.flume.sink.kafka.KafkaSink
# Set the Kafka broker address and port (list all brokers)
agent.sinks.k1.brokerList=192.168.16.100:9092
# Set the Kafka topic
agent.sinks.k1.topic=kafkatest
# Set the serializer
agent.sinks.k1.serializer.class=kafka.serializer.StringEncoder
# Wire the sink to the channel
agent.sinks.k1.channel=c1
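A side note, not from the original post: the brokerList and serializer.class properties above belong to the older Flume Kafka sink; on Flume 1.7 and later the sink is configured with different property names. A minimal sketch of the equivalent sink section, assuming the same broker and topic:

agent.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.k1.kafka.bootstrap.servers = 192.168.16.100:9092
agent.sinks.k1.kafka.topic = kafkatest
agent.sinks.k1.channel = c1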

 
