kafka java.io.EOFException: Received -1 when reading from channel, socket has likely been closed

I am re-sending this thread from my personal account because my company
e-mail (visibletechnologies.com) is not receiving replies from your mailing
list.


To recap, I was seeing a steady stream of exceptions from both the broker
and a client. Neha recommended raising the ulimit on open file descriptors.
Our machines were at the default of 1024; raising the limit to 65536
dramatically reduced the broker-side exception spew.
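
As a sanity check, the limit the JVM actually inherits can differ from what
the shell reports, so it can be worth logging it from inside the process. A
minimal sketch, assuming an Oracle/OpenJDK runtime that exposes
com.sun.management.UnixOperatingSystemMXBean (the class and its methods are
standard JDK; the class name FdLimitCheck and the output format are just
illustrative):

    import java.lang.management.ManagementFactory;
    import java.lang.management.OperatingSystemMXBean;
    import com.sun.management.UnixOperatingSystemMXBean;

    public class FdLimitCheck {
        public static void main(String[] args) {
            OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
            if (os instanceof UnixOperatingSystemMXBean) {
                UnixOperatingSystemMXBean unixOs = (UnixOperatingSystemMXBean) os;
                // Maximum file descriptors this process may open (the effective ulimit -n)
                System.out.println("max fds:  " + unixOs.getMaxFileDescriptorCount());
                // File descriptors currently open; sockets count against this too
                System.out.println("open fds: " + unixOs.getOpenFileDescriptorCount());
            }
        }
    }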

However, repeating the tests from yesterday, I am still seeing the
following repeated block of log entries on our client:

2013-03-08 18:44:41,063 INFO kafka.consumer.SimpleConsumer: Reconnect due to socket error:
java.io.EOFException: Received -1 when reading from channel, socket has likely been closed.
        at kafka.utils.Utils$.read(Utils.scala:373)
        at kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:67)
        at kafka.network.Receive$class.readCompletely(Transmission.scala:56)
        at kafka.network.BoundedByteBufferReceive.readCompletely(BoundedByteBufferReceive.scala:29)
        at kafka.network.BlockingChannel.receive(BlockingChannel.scala:100)
        at kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:124)
        at kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:122)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SimpleConsumer.scala:161)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:161)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1$$anonfun$apply$mcV$sp$1.apply(SimpleConsumer.scala:161)
        at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply$mcV$sp(SimpleConsumer.scala:160)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:160)
        at kafka.consumer.SimpleConsumer$$anonfun$fetch$1.apply(SimpleConsumer.scala:160)
        at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
        at kafka.consumer.SimpleConsumer.fetch(SimpleConsumer.scala:159)
        at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:93)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:50)
2013-03-08 18:44:41,391 INFO kafka.consumer.ConsumerFetcherManager: [ConsumerFetcherManager-1362767252595] removing fetcher on topic VTFull-enriched, partition 0
2013-03-08 18:44:41,392 INFO kafka.utils.VerifiableProperties: Verifying properties
2013-03-08 18:44:41,393 INFO kafka.utils.VerifiableProperties: Property broker.list is overridden to 10.10.2.123:9092
2013-03-08 18:44:41,393 INFO kafka.utils.VerifiableProperties: Property clientid is overridden to K-Router
2013-03-08 18:44:41,393 INFO kafka.client.ClientUtils$: Fetching metadata for topic Set(VTFull-enriched)
2013-03-08 18:44:41,393 INFO kafka.producer.SyncProducer: Connected to 10.10.2.123:9092 for producing
2013-03-08 18:44:41,402 INFO kafka.producer.SyncProducer: Disconnecting from 10.10.2.123:9092
2013-03-08 18:44:41,402 INFO kafka.consumer.ConsumerFetcherManager: [ConsumerFetcherManager-1362767252595] adding fetcher on topic VTFull-enriched, partion 0, initOffset 14935 to broker 0 with fetcherId 0
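
For what it's worth, the "Fetching metadata" lines above are the consumer
rediscovering the partition leader before it re-adds the fetcher. That lookup
can be reproduced in isolation against the same broker with the 0.8 javaapi;
a rough sketch (the socket timeout, buffer size, and client id below are
placeholder values, not our real settings):

    import java.util.Collections;

    import kafka.javaapi.PartitionMetadata;
    import kafka.javaapi.TopicMetadata;
    import kafka.javaapi.TopicMetadataRequest;
    import kafka.javaapi.TopicMetadataResponse;
    import kafka.javaapi.consumer.SimpleConsumer;

    public class MetadataCheck {
        public static void main(String[] args) {
            // Same broker the metadata request in the logs goes to
            SimpleConsumer consumer =
                new SimpleConsumer("10.10.2.123", 9092, 100000, 64 * 1024, "metadata-check");
            try {
                TopicMetadataRequest request =
                    new TopicMetadataRequest(Collections.singletonList("VTFull-enriched"));
                TopicMetadataResponse response = consumer.send(request);
                for (TopicMetadata topic : response.topicsMetadata()) {
                    for (PartitionMetadata partition : topic.partitionsMetadata()) {
                        // Print which broker currently leads each partition
                        System.out.println("partition " + partition.partitionId()
                            + " leader " + partition.leader());
                    }
                }
            } finally {
                consumer.close();
            }
        }
    }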


We are currently using the high-level consumer.
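
Our consumption path is essentially the stock high-level consumer pattern; a
simplified sketch of what that looks like (the ZooKeeper address, group id,
and class name below are placeholders, not our production configuration):

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class HighLevelConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "10.10.2.123:2181"); // placeholder ZK address
            props.put("group.id", "k-router-group");            // placeholder group id

            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

            // One stream for the single-partition topic shown in the logs above
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                connector.createMessageStreams(Collections.singletonMap("VTFull-enriched", 1));

            ConsumerIterator<byte[], byte[]> it =
                streams.get("VTFull-enriched").get(0).iterator();
            while (it.hasNext()) {
                byte[] payload = it.next().message();
                // hand the payload off to the rest of the pipeline
            }
        }
    }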

Jun's question about Java 5 bugs: we are using Java 1.6, so I would assume
we have that fix. I will, however, follow up and see if we can confirm
whether this bug is biting us.
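
One easy way to confirm which runtime the client process is really launching
with (rather than whatever happens to be first on the PATH) is to log the
JVM's own version properties at startup; a trivial sketch:

    public class JvmVersionCheck {
        public static void main(String[] args) {
            // Report the runtime the client process is actually using
            System.out.println("java.version         = " + System.getProperty("java.version"));
            System.out.println("java.runtime.version = " + System.getProperty("java.runtime.version"));
            System.out.println("java.vm.vendor       = " + System.getProperty("java.vm.vendor"));
        }
    }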


[Reposter's note, translated:] In other words, switch the compiler to 6.0; on 5.0 this is probably a bug.


Reposted from lizhanxin.iteye.com/blog/2211307