Different Kafka consumers require different group IDs

In one of my projects, two consumers consume two different topics. A default consumer group id is configured in the configuration file, and neither consumer specifies its own group id, so both consumers end up sharing the same group id.

# Specify the default consumer group id
spring.kafka.consumer.group-id=test-message-group
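
The two consumers were, roughly, two plain @KafkaListener methods like the sketch below; neither declares a groupId, so both fall back to the default group id from the property above (the class name, method names, and payload type are illustrative, not the project's actual code):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class MessageListeners {

    // No groupId on the annotation: this listener uses spring.kafka.consumer.group-id
    @KafkaListener(topics = "HolderMsg")
    public void onHolderMsg(String message) {
        // handle HolderMsg records
    }

    // Same here: this listener also falls back to test-message-group
    @KafkaListener(topics = "TcMsg")
    public void onTcMsg(String message) {
        // handle TcMsg records
    }
}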

However, while debugging the two consumers with breakpoints, I found that they only worked intermittently: sometimes they rebalanced constantly and also failed to send heartbeats. This happened especially often when a breakpoint was held for a long time while reading data. And as long as a consumer cannot complete the rebalance, it consumes no data.

The log output in the abnormal state is shown below: one consumer cannot complete the rebalance, while the other consumer cannot send heartbeats.

[Consumer clientId=consumer-1, groupId=test-message-group] Attempt to heartbeat failed since group is rebalancing
[Consumer clientId=consumer-1, groupId=test-message-group] Attempt to heartbeat failed since group is rebalancing
[Consumer clientId=consumer-1, groupId=test-message-group] Attempt to heartbeat failed since group is rebalancing
[Consumer clientId=consumer-2, groupId=test-message-group] (Re-)joining group
[Consumer clientId=consumer-2, groupId=test-message-group] (Re-)joining group
[Consumer clientId=consumer-1, groupId=test-message-group] Attempt to heartbeat failed since group is rebalancing
[Consumer clientId=consumer-1, groupId=test-message-group] Attempt to heartbeat failed since group is rebalancing

The log output when both consumers work normally is shown below; both report "Successfully joined group with generation XXX".

[Consumer clientId=consumer-1, groupId=test-message-group] (Re-)joining group
[Consumer clientId=consumer-2, groupId=test-message-group] Successfully joined group with generation 125
[Consumer clientId=consumer-1, groupId=test-message-group] Successfully joined group with generation 125
[Consumer clientId=consumer-1, groupId=test-message-group] Setting newly assigned partitions: HolderMsg-0, HolderMsg-1, HolderMsg-2
[Consumer clientId=consumer-2, groupId=test-message-group] Setting newly assigned partitions: TcMsg-2, TcMsg-0, TcMsg-1
[Consumer clientId=consumer-1, groupId=test-message-group] Setting offset for partition HolderMsg-0 to the committed offset FetchPosition{offset=7, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=192.168.202.128:9092 (id: 0 rack: null), epoch=0}}
[Consumer clientId=consumer-2, groupId=test-message-group] Setting offset for partition TcMsg-2 to the committed offset FetchPosition{offset=4, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=192.168.202.128:9092 (id: 0 rack: null), epoch=0}}
[Consumer clientId=consumer-1, groupId=test-message-group] Setting offset for partition HolderMsg-1 to the committed offset FetchPosition{offset=5, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=192.168.202.128:9092 (id: 0 rack: null), epoch=0}}
[Consumer clientId=consumer-2, groupId=test-message-group] Setting offset for partition TcMsg-0 to the committed offset FetchPosition{offset=2, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=192.168.202.128:9092 (id: 0 rack: null), epoch=0}}
[Consumer clientId=consumer-1, groupId=test-message-group] Setting offset for partition HolderMsg-2 to the committed offset FetchPosition{offset=7, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=192.168.202.128:9092 (id: 0 rack: null), epoch=0}}
[Consumer clientId=consumer-2, groupId=test-message-group] Setting offset for partition TcMsg-1 to the committed offset FetchPosition{offset=3, offsetEpoch=Optional[0], currentLeader=LeaderAndEpoch{leader=192.168.202.128:9092 (id: 0 rack: null), epoch=0}}


After several rounds of debugging, I noticed that the two consumers were repeatedly rejoining the group, and that the group was the same for both. My guess was that sharing the same group id made them interfere with each other. After giving each of the two consumers its own group id (see the sketch below), the abnormal behavior never occurred again.
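
A minimal sketch of the fix, assuming the same two @KafkaListener methods as above (the group names here are made up for illustration): give each listener its own groupId, which takes precedence over the default from spring.kafka.consumer.group-id:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class MessageListeners {

    // Each listener now has its own consumer group, so a rebalance in one
    // group no longer drags the other consumer into it.
    @KafkaListener(topics = "HolderMsg", groupId = "holder-msg-group")
    public void onHolderMsg(String message) {
        // handle HolderMsg records
    }

    @KafkaListener(topics = "TcMsg", groupId = "tc-msg-group")
    public void onTcMsg(String message) {
        // handle TcMsg records
    }
}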

Based on this observation, I looked up some material and found a more detailed explanation in a blog post:

https://olnrao.wordpress.com/2015/05/15/apache-kafka-case-of-mysterious-rebalances/

The article explains that when a consumer registers with ZooKeeper, its entry in the Consumer Identifiers Registry is stored under the ZooKeeper path /consumers/[group_id]/ids/[consumer_connector_id]. The registered consumer nodes form a tree, and whenever a consumer joins or leaves, every consumer watching that tree is notified and a rebalance is triggered.
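
With the shared group id from this project, that registry would contain entries along these lines (the consumer_connector_id values are placeholders; they are generated by the client):

/consumers/test-message-group/ids/[consumer_connector_id of consumer 1]
/consumers/test-message-group/ids/[consumer_connector_id of consumer 2]

Both consumers sit under the same group node, so each of them watches the same subtree and gets notified, and rebalances, whenever either one joins or leaves.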

A consumer's registration path in ZooKeeper has nothing to do with the topic; it is bound to the group id, because a single consumer group can consume multiple different topics. So if consumers of different topics share the same group id, any membership change among the consumers of one topic, such as a consumer joining or leaving, triggers a rebalance for all consumers in that group. That is exactly what can cause problems while debugging. (Newer consumers, like the ones in the logs above, register with the broker-side group coordinator rather than with ZooKeeper, but the group-wide rebalance behavior is the same.)

Therefore, different Kafka consumers should use different group ids in order to reduce their influence on each other.


Origin: www.cnblogs.com/zhaoshizi/p/12297646.html