Consumer stops consuming: the consumer-side setting fetch.message.max.bytes caps the size of a single fetched message and defaults to 1 MB. If the broker holds a message larger than 1 MB, the consumer stalls on that partition and stops consuming. The fix is to raise the limit in the consumer configuration, for example props.put("fetch.message.max.bytes", "10485760"); (10 MB).
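As a sketch, the raised fetch limit looks like the following consumer configuration (the ZooKeeper address and group id are placeholders, and the property names assume the classic high-level consumer):

```java
import java.util.Properties;

public class ConsumerConfigExample {
    // Build consumer properties for the classic high-level consumer.
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder address
        props.put("group.id", "example-group");           // hypothetical group id
        // Raise the per-message fetch limit from the 1 MB default (1048576 bytes)
        // to 10 MB so that oversized messages no longer stall consumption.
        props.put("fetch.message.max.bytes", "10485760");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("fetch.message.max.bytes"));
    }
}
```

Note that the broker side has a matching limit (message.max.bytes), so the consumer fetch size must be at least as large as the biggest message the broker is allowed to accept.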
ConsumerRebalanceFailedException: make sure that rebalance.max.retries * rebalance.backoff.ms > zookeeper.session.timeout.ms. With the default values the two sides are roughly in balance, but if the cluster has many machines, or the group consumes many topics, it is recommended to set rebalance.max.retries higher so the total retry window comfortably exceeds the ZooKeeper session timeout.
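The rule above can be checked programmatically before starting a consumer. The sketch below assumes the commonly cited defaults for the old consumer (rebalance.max.retries = 4, rebalance.backoff.ms = 2000, zookeeper.session.timeout.ms = 6000); verify them against your Kafka version:

```java
import java.util.Properties;

public class RebalanceConfigCheck {
    // Returns true when the total rebalance retry window
    // (rebalance.max.retries * rebalance.backoff.ms) exceeds the
    // ZooKeeper session timeout, as the rule of thumb requires.
    static boolean rebalanceWindowCoversSession(Properties props) {
        long retries = Long.parseLong(props.getProperty("rebalance.max.retries", "4"));
        long backoffMs = Long.parseLong(props.getProperty("rebalance.backoff.ms", "2000"));
        long sessionMs = Long.parseLong(props.getProperty("zookeeper.session.timeout.ms", "6000"));
        return retries * backoffMs > sessionMs;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("rebalance.max.retries", "10");          // raised from the assumed default of 4
        props.put("rebalance.backoff.ms", "2000");         // assumed default
        props.put("zookeeper.session.timeout.ms", "6000"); // assumed default
        // 10 * 2000 ms = 20000 ms > 6000 ms, so the check passes.
        System.out.println(rebalanceWindowCoversSession(props));
    }
}
```

Raising rebalance.max.retries (rather than the backoff) keeps individual retry attempts fast while still widening the total window.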
Problems encountered during the use of Kafka