Hadoop learning: a problem encountered (Flume-to-Kafka error)

Disclaimer: This is an original article by the blogger; do not reproduce without permission. https://blog.csdn.net/madongyu1259892936/article/details/89333919
org.apache.kafka.common.errors.InterruptException: Flush interrupted.
	at org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:546)
	at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:236)
	at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
	at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
	at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
	at org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:57)
	at org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:425)
	at org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:544)
	... 4 more
2019-04-16 15:09:31,285 ERROR [SinkRunner-PollingRunner-DefaultSinkProcessor] flume.SinkRunner: Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: Failed to publish events
	at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:264)
	at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
	at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
	at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.InterruptException: Flush interrupted.
	at org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:546)
	at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:236)
	... 3 more
Caused by: java.lang.InterruptedException
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:998)
	at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
	at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
	at org.apache.kafka.clients.producer.internals.ProduceRequestResult.await(ProduceRequestResult.java:57)
	at org.apache.kafka.clients.producer.internals.RecordAccumulator.awaitFlushCompletion(RecordAccumulator.java:425)
	at org.apache.kafka.clients.producer.KafkaProducer.flush(KafkaProducer.java:544)
	... 4 more

The cause: the hosts file on the Flume host has no hostname-to-IP mappings for the Kafka cluster. Kafka brokers advertise themselves by hostname in their metadata; if the Flume host cannot resolve those hostnames, the Kafka producer inside the Flume sink cannot complete its sends, the flush blocks, and it is eventually interrupted, producing the InterruptException above. Adding the broker hostnames to /etc/hosts (or fixing DNS) resolves the error.
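As a sketch of the fix, the entries below would go in /etc/hosts on the Flume host. The hostnames and IP addresses here are placeholders, not from the original article; substitute the actual hostnames your Kafka brokers advertise (check them with `getent hosts <broker-hostname>` before and after editing):

```
# /etc/hosts on the Flume host -- example mappings only
192.168.1.101  kafka01
192.168.1.102  kafka02
192.168.1.103  kafka03
```

After adding the mappings, restart the Flume agent so the Kafka producer re-resolves the broker addresses.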
