Integrating Flume with Kafka to simulate real-time producer data
Introduction
Flume can monitor a log file in real time: whenever a new record is appended, Flume detects it and can forward the new data to Kafka. In real production, every user action generates a record that is written to a log file or database, and Flume then pulls the data from the log.
Task: Use a shell script to simulate user behavior. Ten records are generated per second and appended to the log; Flume pulls the data from the log and forwards it to Kafka. Existing data file: cmcc.json; target log file: cmcc.log
1. Write the script readcmcc.sh, which appends records from cmcc.json to cmcc.log at a rate of 10 per second
#!/bin/bash
# Read cmcc.json line by line and append each record to cmcc.log,
# sleeping 0.1 s between records (10 records per second).
while read -r line
do
    echo "$line" >> /root/log/cmcc.log
    sleep 0.1
done < /root/log/cmcc.json
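The loop logic can be checked locally without touching /root/log. The sketch below uses temporary files and drops the 0.1 s sleep so it finishes instantly; the sample JSON records are made up for the demo:

```shell
#!/bin/bash
# Self-contained demo of the append loop: copy each line of a
# temporary source file to a temporary destination file.
src=$(mktemp) && dst=$(mktemp)
printf '%s\n' '{"id":1}' '{"id":2}' '{"id":3}' > "$src"
while read -r line
do
    echo "$line" >> "$dst"
done < "$src"
wc -l < "$dst"   # prints 3
```

Note that `while read -r line` preserves each line as-is; the unquoted `for line in $(cat …)` form in many tutorials would split records on every space.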
2. Write the Flume configuration
agent.sources = s1
agent.channels = c1
agent.sinks = k1
agent.sources.s1.type=exec
# The file to monitor
agent.sources.s1.command=tail -F /root/log/cmcc.log
agent.sources.s1.channels=c1
agent.channels.c1.type=memory
agent.channels.c1.capacity=10000
agent.channels.c1.transactionCapacity=100
# Configure a Kafka sink
agent.sinks.k1.type=org.apache.flume.sink.kafka.KafkaSink
# Kafka broker addresses and ports (all brokers)
agent.sinks.k1.brokerList=hadoop01:9092,hadoop02:9092,hadoop03:9092
# Kafka topic to write to
agent.sinks.k1.topic=cmcc2
# Serialization method
agent.sinks.k1.serializer.class=kafka.serializer.StringEncoder
# Wire the sink to the channel
agent.sinks.k1.channel=c1
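Unless the brokers have topic auto-creation enabled, the target topic should exist before the agent starts. A sketch of the creation command, assuming the same ZooKeeper address used elsewhere in this tutorial; the partition and replication counts are example values:

```shell
# Create the target topic ahead of time (example partition/replication values).
bin/kafka-topics.sh --create --zookeeper hadoop02:2181 \
  --topic cmcc2 --partitions 3 --replication-factor 2
```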
3. Start Kafka
# Start the Kafka broker:
nohup bin/kafka-server-start.sh config/server.properties &
# List the Kafka topics:
bin/kafka-topics.sh --list --zookeeper hadoop02:2181
# Consume the data in the topic:
bin/kafka-console-consumer.sh --zookeeper hadoop02:2181 --from-beginning --topic cmcc2
4. Start the Flume agent
bin/flume-ng agent -c conf -f conf/flume_kafka.sh -n agent -Dflume.root.logger=INFO,console
5. Execute the shell script
sh readcmcc.sh
6. Verify on the Kafka side
# Check that the target topic has been created:
bin/kafka-topics.sh --list --zookeeper hadoop02:2181
# Consume the data from this topic:
bin/kafka-console-consumer.sh --zookeeper hadoop02:2181 --from-beginning --topic cmcc2
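A note for newer clusters: from Kafka 2.2 onward the --zookeeper flag of these command-line tools is deprecated in favor of --bootstrap-server, which points at a broker rather than ZooKeeper. Equivalent commands, assuming hadoop01:9092 is one of the brokers listed in the sink configuration:

```shell
# Same checks on Kafka >= 2.2, addressing a broker instead of ZooKeeper.
bin/kafka-topics.sh --list --bootstrap-server hadoop01:9092
bin/kafka-console-consumer.sh --bootstrap-server hadoop01:9092 \
  --from-beginning --topic cmcc2
```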