Kafka error: org.apache.kafka.clients.consumer.CommitFailedException

The full stack trace is as follows:

org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:600)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:498)
    at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1104)
    at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1072)
    at com.tanjie.kafka.ConsumerDemo.main(ConsumerDemo.java:29)

Analysis:

The Kafka consumer's poll() method takes a timeout in milliseconds. For example, poll(2000) means the consumer waits up to 2 seconds for messages: if the call starts at 00:00, the consumer keeps fetching until records arrive or the timeout expires at 00:02.

max.poll.interval.ms: the maximum allowed interval between two consecutive calls to poll(). If the consumer takes longer than this to process a batch and call poll() again, the group coordinator considers it failed. For illustration, suppose max.poll.interval.ms = 2000.

How do the poll() timeout and this property interact? Consider the following two scenarios.

Suppose poll(2000) returns 10 messages, and processing all 10 takes 1000 ms. Since 1000 ms < max.poll.interval.ms = 2000 ms, the consumer calls poll() again within the allowed interval, and no error occurs.

If, however, processing the 10 messages takes 3000 ms (for example because of network issues), that exceeds max.poll.interval.ms = 2000 ms, so the consumer fails to call poll() in time. The coordinator considers it dead and triggers a rebalance, reassigning its partitions to other consumers in the group. But the "dead" consumer is in fact still alive: it finishes processing its batch and then tries to commit, which fails with CommitFailedException. Meanwhile, the consumer that was newly assigned the partition re-reads the same messages, so the same messages end up being consumed twice.
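The arithmetic in the two scenarios above can be sketched as a small check. This is purely illustrative; the per-record processing cost and the interval are assumed numbers, not real Kafka API calls:

```java
public class PollIntervalCheck {

    // Returns true when the total processing time for one batch exceeds
    // the configured max.poll.interval.ms, i.e. a rebalance would be triggered.
    static boolean willTriggerRebalance(int records, long msPerRecord, long maxPollIntervalMs) {
        long processingMs = records * msPerRecord;
        return processingMs > maxPollIntervalMs;
    }

    public static void main(String[] args) {
        // 10 records at 100 ms each = 1000 ms < 2000 ms: safe
        System.out.println(willTriggerRebalance(10, 100, 2000));
        // 10 records at 300 ms each = 3000 ms > 2000 ms: rebalance, commit fails
        System.out.println(willTriggerRebalance(10, 300, 2000));
    }
}
```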

Solution: either reduce the amount of work per poll, by lowering max.poll.records so each batch returned by poll() is smaller, or increase max.poll.interval.ms so the consumer has more time to process each batch before the next poll() is due.
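A minimal sketch of those two tunings, built with java.util.Properties and the standard Kafka consumer configuration keys. The broker address, group id, and the specific values chosen here are assumptions for illustration, not recommendations:

```java
import java.util.Properties;

public class ConsumerConfigSketch {

    static Properties buildProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "demo-group");              // assumed group id
        // Fewer records per poll(), so each batch finishes faster:
        props.put("max.poll.records", "100");
        // More time allowed between consecutive poll() calls (10 minutes here):
        props.put("max.poll.interval.ms", "600000");
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildProps();
        // These props would then be passed to new KafkaConsumer<>(props).
        System.out.println(props.getProperty("max.poll.records"));
        System.out.println(props.getProperty("max.poll.interval.ms"));
    }
}
```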

For reference only.

Reference: https://www.cnblogs.com/syp172654682/p/9723108.html


Original post: https://blog.csdn.net/John_Kry/article/details/90376131