Solving the kafka.common.ConsumerRebalanceFailedException with multiple Kafka consumers

Scenario: multiple Kafka consumers consume data from the same topic, and every record must be consumed.
Approach: create the topic with multiple partitions (the number of partitions must be greater than or equal to the number of consumers) and run all the consumers in the same consumer group.
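Why the partition count must be at least the consumer count: within one group, each partition is owned by exactly one consumer at a time, so any consumers beyond the partition count sit idle. A minimal Python sketch of range-style assignment (illustrative only, not Kafka's actual implementation):

```python
# Range-style assignment: sort the consumers, then split the partition
# list into contiguous slices, one slice per consumer.
def assign(partitions, consumers):
    n, k = len(partitions), len(consumers)
    per, extra = divmod(n, k)
    result, start = {}, 0
    for i, c in enumerate(sorted(consumers)):
        count = per + (1 if i < extra else 0)
        result[c] = partitions[start:start + count]
        start += count
    return result

# 3 partitions, 2 consumers: both consumers get work.
print(assign([0, 1, 2], ["consumer-1", "consumer-2"]))
# 1 partition, 2 consumers: one consumer is left with nothing.
print(assign([0], ["consumer-1", "consumer-2"]))
```

With 3 partitions and 2 consumers, one consumer owns two partitions and the other owns one, which is why the topic below is created with 3 partitions for a 2-consumer test.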

Step 1: create the topic with multiple partitions and verify it, as shown below:

[root@hadoop ~]# kafka-topics.sh --list --zookeeper hadoop:2181
[root@hadoop ~]# kafka-topics.sh --create --zookeeper hadoop:2181 --topic kafkatest --partitions 3 --replication-factor 1
Created topic "kafkatest".
[root@hadoop ~]# kafka-topics.sh --list --zookeeper hadoop:2181
kafkatest
[root@hadoop ~]# kafka-topics.sh --describe --zookeeper hadoop:2181
Topic:kafkatest	PartitionCount:3	ReplicationFactor:1	Configs:
	Topic: kafkatest	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
	Topic: kafkatest	Partition: 1	Leader: 0	Replicas: 0	Isr: 0
	Topic: kafkatest	Partition: 2	Leader: 0	Replicas: 0	Isr: 0

The output shows that the topic was created successfully with 3 partitions (3 because two consumers are used in the verification below, and 3 >= 2).

Step 2: add a second consumer configuration file, consumer2.properties, to prepare for the test below:

#consumer group id
group.id=group1

Note: change only the group.id line; keep the rest of the file identical to the stock consumer.properties.

Step 3: start the producer, but do not enter any data yet. (Screenshot omitted.)

The producer has started and the cursor is waiting for input.

Step 4: open two new windows and start a consumer in each, simulating two consumers. (Screenshots omitted.)

Both consumers have started and are waiting to consume the producer's data.

Step 5: in the producer window, start entering data to verify that the consumers receive it. (Screenshot omitted.)

Step 6: check each consumer's output. One consumer consumes normally, while the other reports the following error:

[root@hadoop ~]# kafka-console-consumer.sh --topic kafkatest --zookeeper hadoop:2181 --consumer.config /usr/local/kafka/config/consumer2.properties
[2019-03-16 14:52:57,873] ERROR [group1_hadoop-1552719160845-67df1458], error during syncedRebalance (kafka.consumer.ZookeeperConsumerConnector)
kafka.common.ConsumerRebalanceFailedException: group1_hadoop-1552719160845-67df1458 can't rebalance after 4 retries
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:633)
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anon$1.run(ZookeeperConsumerConnector.scala:551)
[2019-03-16 14:53:06,009] ERROR [group1_hadoop-1552719160845-67df1458], error during syncedRebalance (kafka.consumer.ZookeeperConsumerConnector)
kafka.common.ConsumerRebalanceFailedException: group1_hadoop-1552719160845-67df1458 can't rebalance after 4 retries
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener.syncedRebalance(ZookeeperConsumerConnector.scala:633)
	at kafka.consumer.ZookeeperConsumerConnector$ZKRebalancerListener$$anon$1.run(ZookeeperConsumerConnector.scala:551)


The identifier "group1_hadoop-1552719160845-67df1458" in the log shows that the group.id from the consumer configuration file has taken effect.

The key line is: "kafka.common.ConsumerRebalanceFailedException: group1_hadoop-1552719160845-67df1458 can't rebalance after 4 retries"

The cause, based on the Kafka 0.8-era documentation and FAQ (the original screenshot of the official docs is omitted): the old ZooKeeper-based consumer performs rebalancing inside each client. During a rebalance, every consumer in the group releases its partitions and tries to re-claim a share by writing ownership nodes in ZooKeeper, retrying up to rebalance.max.retries times with rebalance.backoff.ms between attempts. If another consumer's ownership nodes have not been released within that window (for example, because its ZooKeeper session has not yet timed out), the retries are exhausted and the rebalance fails with this exception.

Having found the cause, modify the consumer configuration file consumer2.properties as shown below:

#consumer group id
group.id=group1

#consumer timeout
#consumer.timeout.ms=5000
zookeeper.session.timeout.ms=5000
zookeeper.connection.timeout.ms=10000
rebalance.backoff.ms=2000
rebalance.max.retries=10
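A common rule of thumb for these settings (my reading of the Kafka 0.8 FAQ, not an official formula) is that rebalance.max.retries * rebalance.backoff.ms should exceed zookeeper.session.timeout.ms, so that a departed consumer's stale ownership nodes in ZooKeeper have time to expire before the surviving consumer gives up retrying. Checking the values above:

```python
# Values taken from the consumer2.properties shown above.
zookeeper_session_timeout_ms = 5000
rebalance_backoff_ms = 2000
rebalance_max_retries = 10

# Total time the consumer keeps retrying the rebalance.
retry_window_ms = rebalance_max_retries * rebalance_backoff_ms

print(retry_window_ms)                                 # 20000
print(retry_window_ms > zookeeper_session_timeout_ms)  # True
```

With the old defaults (4 retries, 2000 ms backoff) the retry window is only 8000 ms, a much tighter margin, which matches the "can't rebalance after 4 retries" failure seen above.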

Restart Kafka, delete and recreate the topic, restart the producer and both consumers, and enter data to verify. (Producer screenshot omitted.)

Both consumers now receive data. (Screenshots of the two consumers omitted.)

The two consumers now share the topic's data between them: no record is consumed twice and none is missed.


Origin blog.csdn.net/zhaoxiangchong/article/details/88668973