SparkStreaming Study Notes 4 - 2020-2-15 - SparkStreaming Real-Time Stream Processing Project in Practice

12-8 Generating a batch of data every minute with a cron scheduler

1. Online crontab tool

https://tool.lu/crontab/

Edit the crontab:

crontab -e

*/1 * * * * /home/hadoop/data/project/log_generator.sh

To disable the job, comment the line out with #.
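For reference, a minimal sketch of what log_generator.sh might look like, assuming the Python log generator from the earlier notes is named generate_log.py and appends its records to logs/access.log (both the script name and the append behavior are assumptions):

#!/bin/bash
# Sketch (assumed names): invoke the Python log generator once.
# cron runs this script every minute, so each run produces one batch
# of records appended to logs/access.log.
python /home/hadoop/data/project/generate_log.py

Remember to make the script executable (chmod u+x /home/hadoop/data/project/log_generator.sh) before the cron job fires.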

2. Feeding the Python log generator's output into Flume

Create a Flume agent configuration named streaming_project.conf.


Component selection: access.log ==> console output

source: exec
channel: memory
sink: logger

Full contents of streaming_project.conf:
exec-memory-logger.sources = exec-source
exec-memory-logger.sinks = logger-sink
exec-memory-logger.channels = memory-channel

exec-memory-logger.sources.exec-source.type = exec
exec-memory-logger.sources.exec-source.command = tail -F /home/hadoop/data/project/logs/access.log
exec-memory-logger.sources.exec-source.shell = /bin/sh -c

exec-memory-logger.channels.memory-channel.type = memory

exec-memory-logger.sinks.logger-sink.type = logger

exec-memory-logger.sources.exec-source.channels = memory-channel
exec-memory-logger.sinks.logger-sink.channel = memory-channel
 

Start command:

flume-ng agent --name exec-memory-logger --conf $FLUME_HOME/conf --conf-file /home/hadoop/data/project/streaming_project.conf -Dflume.root.logger=INFO,console
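To sanity-check the agent, append a test line to the tailed file and watch the Flume console; the logger sink should print an Event for it (by default the logger sink only logs the first 16 bytes of each event body):

# Append an arbitrary test record to the file the exec source is tailing;
# the logger sink prints the corresponding Event to the Flume console.
echo "test-record" >> /home/hadoop/data/project/logs/access.log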

3. Logs ==> Kafka

(1) Start ZooKeeper:

Enter the directory:

cd /home/hadoop/app/zookeeper-3.4.5-cdh5.7.0/bin

Start command:

./zkServer.sh start
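Before starting Kafka, you can optionally confirm the ZooKeeper server is up:

# A healthy single-node instance reports "Mode: standalone"
./zkServer.sh status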

(2) Start the Kafka server:

Enter the directory:

cd /home/hadoop/app/kafka_2.11-0.9.0.0/bin/

Start command:

./kafka-server-start.sh -daemon /home/hadoop/app/kafka_2.11-0.9.0.0/config/server.properties
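The streamingtopic topic has to exist before Flume can sink to it (unless the broker is configured to auto-create topics). On this single-broker setup it can be created and verified as follows:

# Create the topic with replication factor 1 and one partition (single broker)
./kafka-topics.sh --create --zookeeper hadoop000:2181 --replication-factor 1 --partitions 1 --topic streamingtopic

# Confirm the topic exists
./kafka-topics.sh --list --zookeeper hadoop000:2181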

Modify the Flume configuration so that the sink writes to Kafka, and save the result as streaming_project2.conf:

exec-memory-kafka.sources = exec-source
exec-memory-kafka.sinks = kafka-sink
exec-memory-kafka.channels = memory-channel

exec-memory-kafka.sources.exec-source.type = exec
exec-memory-kafka.sources.exec-source.command = tail -F /home/hadoop/data/project/logs/access.log
exec-memory-kafka.sources.exec-source.shell = /bin/sh -c

exec-memory-kafka.channels.memory-channel.type = memory

exec-memory-kafka.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
exec-memory-kafka.sinks.kafka-sink.brokerList = hadoop000:9092
exec-memory-kafka.sinks.kafka-sink.topic = streamingtopic
exec-memory-kafka.sinks.kafka-sink.batchSize = 5
exec-memory-kafka.sinks.kafka-sink.requiredAcks = 1

exec-memory-kafka.sources.exec-source.channels = memory-channel
exec-memory-kafka.sinks.kafka-sink.channel = memory-channel
 

(3) Start a Kafka console consumer to watch the topic:

kafka-console-consumer.sh --zookeeper hadoop000:2181 --topic streamingtopic

(4) Start Flume:

flume-ng agent --name exec-memory-kafka --conf $FLUME_HOME/conf --conf-file /home/hadoop/data/project/streaming_project2.conf -Dflume.root.logger=INFO,console
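With the cron job, Flume, and Kafka all running, an end-to-end check is to compare the newest generated records against what the console consumer from step (3) prints; each minute's batch should appear in both places:

# Show the latest records written by the log generator; the same lines
# should appear in the kafka-console-consumer window shortly afterwards.
tail -5 /home/hadoop/data/project/logs/access.log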

