ERROR Failed to send requests for topics

When everything runs on the same machine (localhost), the Kafka client works properly: the producer and consumer can send and receive messages without trouble. But once the producer and the broker are deployed on two different machines, the default configuration fails with the error "kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries".
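For context, the old Scala producer that throws this exception (the async kafka.producer classes in the stack trace below) bootstraps topic metadata from the brokers listed in its configuration. A minimal sketch of the relevant client settings, with queue-server1 as a placeholder host name:

```properties
# Brokers the client fetches topic metadata from. This address must be
# reachable from the producer machine -- pointing it at localhost only
# works when the producer runs on the broker host itself.
metadata.broker.list=queue-server1:9092

# Message serializer and async send mode (matches the ProducerSendThread
# seen in the stack trace)
serializer.class=kafka.serializer.StringEncoder
producer.type=async
```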
 

[2020-01-09 12:42:43,841] ERROR Failed to send requests for topics cache-message with correlation ids in [17,24] (kafka.producer.async.DefaultEventHandler)
[2020-01-09 12:42:43,841] ERROR Error in handling batch of 1 events (kafka.producer.async.ProducerSendThread)
kafka.common.FailedToSendMessageException: Failed to send messages after 3 tries.
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
        at kafka.utils.Utils$.read(Utils.scala:376)
        at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
        at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
        at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
        at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100)
        at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:74)
        at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:71)
        at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
        at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
        at kafka.producer.BrokerPartitionInfo.getBrokerPartitionInfo(BrokerPartitionInfo.scala:49)
        at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$getPartitionListForTopic(DefaultEventHandler.scala:186)
        at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:150)
        at kafka.producer.async.DefaultEventHandler$$anonfun$partitionAndCollate$1.apply(DefaultEventHandler.scala:149)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:60)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at kafka.producer.async.DefaultEventHandler.partitionAndCollate(DefaultEventHandler.scala:149)
        at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:95)
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
        at kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104)
        at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87)
        at kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67)
        at scala.collection.immutable.Stream.foreach(Stream.scala:526)
        at kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66)
        at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44)
[2020-01-09 12:42:43,500] WARN Fetching topic metadata with correlation id 19 for topics [Set(cache-message)] from broker [id:2,host:192.168.0.108,port:2181] failed (kafka.client.ClientUtils$)
java.io.EOFException: Received -1 when reading from channel, socket has likely been closed.
        at kafka.utils.Utils$.read(Utils.scala:376)

The solution is very simple: just set the host name in Kafka's configuration file, server.properties:

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=queue-server1

The reason can in fact be read from the comment: this property designates the broker's address (strictly speaking, the network interface, or NIC, it listens on). A related property can be seen just below it:

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=

In other words, producers and consumers connect to the broker by its advertised host name (advertised.host.name). If this value is not set, the value of host.name above is used; and if host.name is not provided either, the value returned by java.net.InetAddress.getCanonicalHostName() is used.
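To see which name that fallback is likely to produce on a given machine, and whether that name is usable from the outside, one can check from a shell (a diagnostic sketch; getent is assumed to be available, as on most Linux systems):

```shell
# The canonical host name that java.net.InetAddress.getCanonicalHostName()
# will typically return on this machine.
hostname -f

# What that name resolves to. A common pitfall: /etc/hosts maps the machine
# name to 127.0.0.1, so remote producers and consumers are told to connect
# to a loopback address and fail.
getent hosts "$(hostname -f)" || echo "host name does not resolve"
```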

Indeed, looking at the broker registration in ZooKeeper shows that the broker's host defaults to localhost, so of course access from other machines cannot succeed.
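The registration can be inspected directly: each broker registers its endpoint under /brokers/ids/&lt;id&gt; in ZooKeeper. A sketch using the zookeeper-shell.sh script shipped with Kafka, against the ZooKeeper address from the log above and broker id 2 as seen there (adjust both for your deployment):

```shell
# Open an interactive ZooKeeper shell, then inspect the broker registration.
# Before the fix, "host" typically shows localhost; after setting host.name
# it should show the reachable address, e.g. queue-server1.
bin/zookeeper-shell.sh 192.168.0.108:2181
get /brokers/ids/2
```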


 
 


Origin blog.csdn.net/m0_37598953/article/details/103951936