In-Depth Understanding of Kafka (Part 4): Consumer Offset Management

1、Offset Topic

Consumers record their current position by committing offsets, so that when a consumer crashes or a new consumer joins the group and a partition rebalance is triggered, each consumer may be assigned different partitions and can resume from the last committed position. In the Kafka version I tested, consumers send these commits as messages to a special topic, "__consumer_offsets", as shown:

(Figure: offset topic)

Each commit message contains the following:

Field      Content
Key        consumer group, topic, partition
Payload    offset, metadata, timestamp

Messages sent to the "__consumer_offsets" topic are partitioned by the consumer-group key, so all commit messages from one consumer group go to a single partition.
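As a sketch of this routing: the broker picks the partition from the group id alone, computing (roughly) `abs(groupId.hashCode()) % offsets.topic.num.partitions`, where 50 is the default partition count. The snippet below reimplements Java's `String.hashCode` in Python to illustrate; it is a model, not a Kafka API call:

```python
# Sketch (not Kafka API): how all commit messages for one consumer group
# land on the same partition of "__consumer_offsets". The broker computes
# (groupId.hashCode() & 0x7FFFFFFF) % offsets.topic.num.partitions.

OFFSETS_TOPIC_NUM_PARTITIONS = 50  # Kafka default for offsets.topic.num.partitions

def java_string_hashcode(s: str) -> int:
    """Java's String.hashCode(): h = 31*h + ch, over a signed 32-bit int."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 0x1_0000_0000 if h >= 0x8000_0000 else h

def offsets_partition_for(group_id: str) -> int:
    # Mask the sign bit (like Kafka's Utils.toPositive) before the modulo.
    return (java_string_hashcode(group_id) & 0x7FFFFFFF) % OFFSETS_TOPIC_NUM_PARTITIONS

# The partition depends only on the group id, so every commit from the
# same group goes to the same partition of the offsets topic.
p = offsets_partition_for("my-consumer-group")
print(0 <= p < OFFSETS_TOPIC_NUM_PARTITIONS)  # True
```

Because the key hashes only on the group id, all of a group's commits are totally ordered within one partition, which is what lets the broker keep only the latest offset per (group, topic, partition).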

(Figure: logical layout of the offset topic)

2、Offset Commit

The logic of committing an offset is in fact the same as that of a normal Kafka producer sending data.


When a consumer starts, it creates a built-in producer for the "__consumer_offsets" topic, used to commit offset data.


An offset commit is handled with the same logic as a normal produce request.

(Figure: offset topic commit)

The "__consumer_offsets" topic is created automatically in the cluster when the first offset-commit request is submitted.

3、Offset Commit Strategies

Two problems can occur when committing offsets: duplicate consumption and missed consumption.

  • If the committed offset is smaller than the offset of the last message the client processed, duplicate consumption occurs. Scenario: consume first, then commit. If processing succeeds but the commit fails, the next fetch starts from the old offset, so the already-processed messages are delivered again.
  • If the committed offset is larger than the offset of the last message the client processed, missed consumption occurs. Scenario: commit first, then consume. If the commit succeeds but processing fails, the next fetch starts from the new offset, so the failed messages are never processed.
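The two failure orderings can be simulated without a broker. Below is a toy in-memory model (none of these names are Kafka APIs): one "session" handles a message with a failure injected into the second step, then a rebalanced consumer resumes from the last committed offset:

```python
# Toy model (no Kafka API) of the two failure orderings above.

def run(messages, commit_first, second_step_fails):
    committed = 0    # next offset to fetch, per the last successful commit
    processed = []
    offset = committed
    msg = messages[offset]
    if commit_first:
        committed = offset + 1       # step 1: commit
        if not second_step_fails:
            processed.append(msg)    # step 2: process (fails -> message lost)
    else:
        processed.append(msg)        # step 1: process
        if not second_step_fails:
            committed = offset + 1   # step 2: commit (fails -> offset stays old)
    # After a rebalance, the next consumer resumes from the committed offset.
    for off in range(committed, len(messages)):
        processed.append(messages[off])
    return processed

msgs = ["m0", "m1", "m2"]
# Process-then-commit, commit fails: m0 is handled twice (duplicate consumption).
print(run(msgs, commit_first=False, second_step_fails=True))  # ['m0', 'm0', 'm1', 'm2']
# Commit-then-process, processing fails: m0 is never handled (missed consumption).
print(run(msgs, commit_first=True, second_step_fails=True))   # ['m1', 'm2']
```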

Depending on the business scenario, choosing an appropriate commit strategy can effectively mitigate the duplicate- and missed-consumption problems.

3.1、Automatic Commit

Automatic commit is the simplest approach: one parameter enables it, and another sets the commit interval. The drawback: if some messages have been consumed but the interval has not yet elapsed when a new consumer joins or the current consumer dies, a partition rebalance is triggered and consumption resumes from the last committed offset, causing duplicate consumption. Shortening the commit interval narrows this window but cannot eliminate it.

3.2、Synchronously Committing the Current Offset

With automatic commit disabled, you can commit the current offset explicitly through the synchronous commit API. This gives you control over when the commit happens, but at the cost of throughput: a synchronous commit blocks until the broker responds, and it has a built-in retry mechanism.
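The block-and-retry behavior can be modeled as a simple loop. In this sketch `send_commit` is a stand-in for the real network call, not a Kafka API:

```python
# Model of a synchronous commit: block, retrying on failure, until the
# commit succeeds or retries are exhausted. `send_commit` is a stand-in.

def commit_sync(offset, send_commit, max_retries=3):
    for attempt in range(1, max_retries + 2):  # initial try + max_retries
        if send_commit(offset):
            return attempt                     # attempts the commit took
    raise RuntimeError("commit failed after retries")

# Fake broker that rejects the commit twice, then accepts it:
responses = iter([False, False, True])
print(commit_sync(42, lambda off: next(responses)))  # 3
```

The caller makes no progress until `commit_sync` returns, which is exactly why throughput suffers compared to the asynchronous variant.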

3.3、Asynchronously Committing the Current Offset

Asynchronous commit keeps that control and also improves consumer throughput, but it has no retry mechanism, and by itself it still cannot solve the duplicate-consumption problem.
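The reason async commits are not retried: a retried commit of an old offset could complete after a commit of a newer offset and move the stored position backwards. The sketch below models the hazard and the usual guard (applying a commit only if it is newer than the last one applied); the class is illustrative, not a Kafka API:

```python
# Why async commits are not retried: if commit(100) is retried and its
# response lands after commit(200), the stored offset moves backwards.

class OffsetStore:
    def __init__(self):
        self.committed = -1
    def apply_naive(self, offset):
        self.committed = offset            # no guard: can go backwards
    def apply_guarded(self, offset):
        if offset > self.committed:        # ignore stale (retried) commits
            self.committed = offset

arrival_order = [100, 200, 100]  # the retry of 100 arrives last

naive = OffsetStore()
for off in arrival_order:
    naive.apply_naive(off)
print(naive.committed)    # 100 -- offset moved backwards

guarded = OffsetStore()
for off in arrival_order:
    guarded.apply_guarded(off)
print(guarded.committed)  # 200
```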

3.4、Combining Synchronous and Asynchronous Commits

Use asynchronous commits for speed during normal operation. When shutting down the consumer, use a synchronous commit; if it fails, keep retrying until the commit succeeds or an unrecoverable error occurs. Note that neither synchronous nor asynchronous commits can completely avoid duplicate or missed consumption.
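The usual shape of this strategy is an async commit inside the poll loop and one blocking sync commit in a finally block on shutdown. A sketch with stand-in commit functions (`commit_async` and `commit_sync` here are parameters, not Kafka API calls):

```python
# Combined strategy: fire-and-forget async commits while running,
# one blocking, retried sync commit on close.

def consume_loop(records, commit_async, commit_sync):
    last_offset = -1
    try:
        for offset, msg in records:
            # ... process msg ...
            last_offset = offset
            commit_async(offset)      # fast, not retried, while running
    finally:
        if last_offset >= 0:
            commit_sync(last_offset)  # blocking, retried, on shutdown

async_calls, sync_calls = [], []
consume_loop([(0, "a"), (1, "b")], async_calls.append, sync_calls.append)
print(async_calls)  # [0, 1]
print(sync_calls)   # [1]
```

The finally block guarantees that even if processing raises, the consumer makes one last reliable attempt to persist its position before leaving the group.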

3.5、Committing a Specified Offset

Automatic, synchronous, and asynchronous commits all commit the latest offset returned by the last poll. By committing a specified offset instead, you can further reduce duplicate and missed consumption, but the consumer-side business logic becomes more complex, and the application must track offsets itself.
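This pattern can be sketched as tracking per-partition positions and committing an explicit map, mirroring the shape of the Java client's `commitSync(Map<TopicPartition, OffsetAndMetadata>)` overload. In the sketch, `do_commit` is a stand-in for the real commit call:

```python
# Committing a specified offset: the application tracks its own positions
# per (topic, partition) and commits an explicit map mid-batch, instead of
# committing "everything the last poll returned". `do_commit` is a stand-in.

def process_batch(records, do_commit, commit_every=2):
    current = {}   # (topic, partition) -> next offset to consume
    count = 0
    for topic, partition, offset, value in records:
        # ... process value ...
        current[(topic, partition)] = offset + 1  # commit position is offset + 1
        count += 1
        if count % commit_every == 0:
            do_commit(dict(current))              # commit at a chosen point

commits = []
records = [("t", 0, 0, "a"), ("t", 0, 1, "b"), ("t", 1, 5, "c"), ("t", 0, 2, "d")]
process_batch(records, commits.append)
print(commits)
# [{('t', 0): 2}, {('t', 0): 3, ('t', 1): 6}]
```

Note the committed position is `offset + 1` (the next offset to read), which matches the convention the Kafka consumer uses for committed offsets.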
