Flume + Kafka test case: Flume configuration.
a1.sources = s1
a1.channels = c1
a1.sinks = k1

a1.sources.s1.type = netcat
a1.sources.s1.bind = master
a1.sources.s1.port = 44444

a1.channels.c1.type = memory

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
# The topic is t1. Newer Flume versions use the kafka.-prefixed name
# k1.kafka.topic; older versions use k1.topic without the prefix.
a1.sinks.k1.topic = t1
# Older versions use brokerList; newer versions use kafka.bootstrap.servers.
a1.sinks.k1.brokerList = master:9092

a1.sources.s1.channels = c1
a1.sinks.k1.channel = c1
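For Flume 1.7 and later, the same sink is configured with the kafka.-prefixed property names from the Flume user guide; a minimal sketch, assuming the same master:9092 broker and t1 topic as above:

```properties
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = t1
a1.sinks.k1.kafka.bootstrap.servers = master:9092
a1.sinks.k1.channel = c1
```

Mixing the old and new property names is exactly the kind of version mismatch the summary below warns about.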
1. Start Kafka.
kafka-server-start.sh config/server.properties
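Before moving on, it is worth confirming the broker actually came up. A quick check, assuming the JDK's jps tool is on the PATH and a broker is running on this host (this requires the live cluster from the step above):

```shell
# The broker runs as a JVM process named "Kafka"
jps | grep Kafka
# List existing topics through ZooKeeper (same address as in step 2)
kafka-topics.sh --list --zookeeper master:2181
```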
2. Create the Kafka topic; the topic configured in Flume is t1.
# --replication-factor is 1 here because Kafka was started only on the master;
# the slave nodes are not running Kafka. Setting it greater than 1 would require
# starting Kafka on the slave nodes as well, since the replication factor cannot
# exceed the number of live brokers.
# --partitions 2: more partitions spread the data and ease the pressure when a
# single broker holds too much, but they do not fundamentally solve a memory
# shortage — that still has to be addressed by adding machines.
kafka-topics.sh --create --zookeeper master:2181 --replication-factor 1 --partitions 2 --topic t1
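After creating the topic, its partition and replica layout can be verified with the standard --describe option (again assuming the cluster from the previous steps is running):

```shell
kafka-topics.sh --describe --zookeeper master:2181 --topic t1
```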
3. Start Flume.
flume-ng agent -c conf -f conf/kafka_test.conf -n a1 -Dflume.root.logger=INFO,console
4. Start a Kafka console consumer to check whether the messages come through.
kafka-console-consumer.sh --bootstrap-server master:9092 --topic t1
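If the consumer is started only after messages have already been sent, the standard --from-beginning flag of the console consumer replays the earlier messages:

```shell
kafka-console-consumer.sh --bootstrap-server master:9092 --topic t1 --from-beginning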
5. Since the Flume configuration uses a netcat source, send messages to the monitored port with telnet.
# If telnet is not installed, install it with: yum install telnet
# localhost is the local machine; port 44444 is the port specified in the Flume
# configuration file, which Flume listens on after it starts.
telnet localhost 44444
6. Test.
telnet localhost 44444
> hello
> world
> nice
Check the Kafka consumer window: the corresponding messages have arrived.
# kafka-console-consumer.sh --bootstrap-server master:9092 --topic t1
hello
world
nice
Summary: the Flume configuration file was wrong from the start, and it took a long round of debugging to get it right — that really should not have happened. Second, learn to read the relevant log output after starting Flume: after startup, the corresponding Kafka topic should have been inspected, but it was not looked at closely, which is why several earlier debugging rounds made no sense — no matter what, the Kafka consumer could not get any data. It finally turned out that when the Flume configuration file is incorrect, Flume publishes to its default topic instead of t1. So the problem was in the Flume configuration after all, specifically the sink section: some property names differ between Flume versions and have to be converted before the run succeeds. Always remember to check the logs.