Flume and Kafka Connection Test

First, create two directories on the Linux system: one to hold the configuration file (flumetest) and one to hold the files to be read (flume).

Flume configuration file (the one for the Kafka connection):

# File name: kafka.properties

# Configuration:

# Name the agent's components
a1.sources = s1
a1.channels = c1
a1.sinks = k1

# netcat source: listens for newline-terminated text on the given host/port
a1.sources.s1.type = netcat
a1.sources.s1.bind = 192.168.123.102
a1.sources.s1.port = 44455

# In-memory channel (default capacity)
a1.channels.c1.type = memory

# Kafka sink: publishes each event to topic t1 on the listed broker
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = t1
a1.sinks.k1.kafka.bootstrap.servers = 192.168.123.103:9092

# Wire the source and sink to the channel
a1.sources.s1.channels = c1
a1.sinks.k1.channel = c1
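
The memory channel above runs on Flume's defaults (capacity of 100 events). If the source can outpace the Kafka sink, the channel can be sized up explicitly; a minimal sketch using Flume's standard memory-channel properties (the numbers are illustrative):

a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 100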

ZooKeeper must be started first.
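
A minimal sketch of that step, assuming a standalone ZooKeeper install (the install path is an assumption; run it on every ZooKeeper node):

[hadoop@hadoop02 zookeeper-3.4.10]$ bin/zkServer.sh start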

Start the Kafka cluster (every configured node must be started):

[hadoop@hadoop02 kafka_2.11-1.0.0]$ bin/kafka-server-start.sh config/server.properties
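
This keeps the broker in the foreground. To run it in the background instead, the same script accepts a -daemon flag:

[hadoop@hadoop02 kafka_2.11-1.0.0]$ bin/kafka-server-start.sh -daemon config/server.properties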

The Kafka cluster must already have the topic t1, since that is what the sink writes to:

a1.sinks.k1.kafka.topic = t1
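
If the topic does not exist yet, it can be created with the ZooKeeper-based tooling that ships with this Kafka version (replication factor and partition count here are illustrative):

[hadoop@hadoop02 kafka_2.11-1.0.0]$ bin/kafka-topics.sh --create --zookeeper hadoop02:2181 --replication-factor 1 --partitions 1 --topic t1
[hadoop@hadoop02 kafka_2.11-1.0.0]$ bin/kafka-topics.sh --list --zookeeper hadoop02:2181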

Start Flume (--name a1 must match the agent name used in kafka.properties, and -Dflume.root.logger=INFO,console sends the agent's log output to the terminal):

[hadoop@hadoop02 apache-flume-1.8.0-bin]$ flume-ng agent --conf conf --conf-file /home/hadoop/apps/apache-flume-1.8.0-bin/flumetest/kafka.properties --name a1 -Dflume.root.logger=INFO,console

On hadoop03, start a Kafka console consumer to read the topic:

[hadoop@hadoop03 kafka_2.11-1.0.0]$ bin/kafka-console-consumer.sh --zookeeper hadoop02:2181 --from-beginning --topic t1       
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
ok
aaa
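
The warning is about the deprecated ZooKeeper-based consumer. The same test also works with the new consumer, assuming the broker on hadoop03 is the one listening on 9092:

[hadoop@hadoop03 kafka_2.11-1.0.0]$ bin/kafka-console-consumer.sh --bootstrap-server hadoop03:9092 --from-beginning --topic t1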

Then connect from hadoop02 to the netcat source:

[hadoop@hadoop02 kafka_2.11-1.0.0]$ telnet 192.168.123.102 44455  
Trying 192.168.123.102...
Connected to 192.168.123.102.
Escape character is '^]'.
aaa
OK
The aaa sent here shows up in the Kafka consumer output on hadoop03 (the OK line is the netcat source acknowledging the received event).
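
To end the telnet session, press Ctrl+] and then type quit at the telnet> prompt.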


Reposted from blog.csdn.net/qq_41851454/article/details/80245454