Spring Boot Kafka integration: consumption modes (AckMode) and manual acknowledgement

Dependency management

Add the spring-kafka dependency to the pom.xml file:

		<dependency>
			<groupId>org.springframework.kafka</groupId>
			<artifactId>spring-kafka</artifactId>
		</dependency>

Configuration file modification

Use the following configuration when you need to configure the AckMode yourself:

spring:
  application:
    name: base.kafka
  kafka:
    bootstrap-servers: kafka-address1:port,kafka-address2:port,kafka-address3:port
    producer:
      # Number of retries when a write fails. When the leader node fails, a replica node takes over as the new leader,
      # and writes may fail during that window. With retries set to 0 the producer does not resend; with retries enabled
      # the record is resent once the replica has fully become the leader, so no message is lost.
      retries: 0
      # The number of acknowledgements the producer requires the leader to have received before considering a request
      # complete. This controls the durability of records sent to the server. Possible values:
      # acks = 0: the producer does not wait for any acknowledgement from the server; the record is added to the socket buffer and considered sent immediately. There is no guarantee the server received it, the retries setting has no effect (the client generally does not learn of failures), and the offset returned for each record is always -1.
      # acks = 1: the leader writes the record to its local log and responds without waiting for full acknowledgement from all replicas. If the leader fails right after acknowledging the record but before the replicas have copied it, the record is lost.
      # acks = all: the leader waits for the full set of in-sync replicas to acknowledge the record. This guarantees the record is not lost as long as at least one in-sync replica remains alive. It is the strongest guarantee and is equivalent to acks = -1.
      # Allowed values: all, -1, 0, 1
      acks: 1
    consumer:
      group-id: testGroup
      # Where to start reading when there is no committed offset: earliest reads from the beginning of the partition,
      # latest reads from the end of the log. We normally use earliest.
      auto-offset-reset: earliest
      # Commit offsets automatically
      enable-auto-commit: true
      max-poll-records: 2
server:
  port: 8060
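
Note that the manual acknowledgement modes discussed below require automatic offset committing to be disabled. If you prefer to keep the whole setup in the configuration file instead of a custom factory bean, Spring Boot also exposes the ack mode as a property. A minimal sketch using the standard spring.kafka.* properties (values are examples only):

spring:
  kafka:
    consumer:
      # manual acknowledgement requires auto-commit to be turned off
      enable-auto-commit: false
    listener:
      # equivalent to setting AckMode.MANUAL_IMMEDIATE on the container factory
      ack-mode: manual_immediate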

Consuming Kafka messages

The offset-commit modes supported by spring-kafka are defined in the AckMode enumeration (ContainerProperties.AckMode). The differences between the modes are described below:

	/**
	 * The offset commit behavior enumeration.
	 */
	public enum AckMode {

		/**
		 * Commit after each record is processed by the listener.
		 */
		RECORD,

		/**
		 * Commit whatever has already been processed before the next poll.
		 */
		BATCH,

		/**
		 * Commit pending updates after
		 * {@link ContainerProperties#setAckTime(long) ackTime} has elapsed.
		 */
		TIME,

		/**
		 * Commit pending updates after
		 * {@link ContainerProperties#setAckCount(int) ackCount} has been
		 * exceeded.
		 */
		COUNT,

		/**
		 * Commit pending updates after
		 * {@link ContainerProperties#setAckCount(int) ackCount} has been
		 * exceeded or after {@link ContainerProperties#setAckTime(long)
		 * ackTime} has elapsed.
		 */
		COUNT_TIME,

		/**
		 * User takes responsibility for acks using an
		 * {@link AcknowledgingMessageListener}.
		 */
		MANUAL,

		/**
		 * User takes responsibility for acks using an
		 * {@link AcknowledgingMessageListener}. The consumer
		 * immediately processes the commit.
		 */
		MANUAL_IMMEDIATE,

	}

AckMode modes

| AckMode | Effect |
| --- | --- |
| MANUAL | After each batch of poll() records has been processed by the consumer listener (ListenerConsumer), the offsets are committed once Acknowledgment.acknowledge() is called manually. |
| MANUAL_IMMEDIATE | Commit immediately after Acknowledgment.acknowledge() is called manually. |
| RECORD | Commit after each record has been processed by the consumer listener (ListenerConsumer). |
| BATCH | Commit after each batch of poll() records has been processed by the consumer listener (ListenerConsumer). |
| TIME | After each batch of poll() records has been processed by the consumer listener (ListenerConsumer), commit if more than ackTime has passed since the last commit. |
| COUNT | After each batch of poll() records has been processed by the consumer listener (ListenerConsumer), commit if the number of records processed since the last commit is greater than or equal to ackCount. |
| COUNT_TIME | Commit when either the TIME or the COUNT condition is met. |
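
With the non-manual modes (RECORD, BATCH, TIME, COUNT, COUNT_TIME) the container commits the offsets itself, so the listener does not need an Acknowledgment parameter. A minimal sketch for contrast (the topic name here is a placeholder, not from the original setup):

    // Sketch: with a non-manual AckMode the container commits offsets automatically.
    @KafkaListener(topics = "demo-topic", groupId = "testGroup")
    public void onMessageAuto(String message) {
        // process the record; the container commits according to the configured AckMode
        System.out.println("received: " + message);
    }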

The configuration class for the listener factory:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.support.converter.BatchMessagingMessageConverter;
import org.springframework.kafka.support.converter.StringJsonMessageConverter;

/**
 * Kafka consumer configuration.
 */
@Configuration
@EnableKafka
public class KafkaConsumerConfig {
    @Value("${spring.kafka.bootstrap-servers}")
    private String servers;
    // Session timeout: the consumer sends heartbeats periodically through the ConsumerCoordinator.
    // If the session expires, the consumer is considered dead and its work is reassigned to other consumers in the group.
    // Note that the coordinator shares the underlying channel with the KafkaConsumer and is also driven by poll(), but runs in a separate thread.
    @Value("${spring.kafka.consumer.session.timeout}")
    private String sessionTimeout;

    @Value("${spring.kafka.consumer.concurrency}")
    private int concurrency;
    // Maximum number of records returned by a single poll().
    // Do not set this too high; consider how fast your business logic can process a batch.
    @Value("${spring.kafka.consumer.maxpoll.records}")
    private int maxPollRecords;
    // Maximum interval between two poll() calls. If it is exceeded, the consumer is considered dead and a rebalance is triggered.
    // This value must be larger than the total time needed to process all messages of one batch.
    // Maximum 500000
    // 2 minutes
    @Value("${spring.kafka.consumer.maxpoll.interval}")
    private int maxPollIntervalMS;

    @Value("${spring.kafka.consumer.group-id}")
    private String groupId;


    @Bean
    public StringJsonMessageConverter converter() {
        return new StringJsonMessageConverter();
    }

    @Bean
    public KafkaListenerContainerFactory<?> batchDataFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        Map<String, Object> consumerConfig = consumerConfigs();
        consumerConfig.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        ConsumerFactory<String, String> consumerFactory = new DefaultKafkaConsumerFactory<>(consumerConfig);
        factory.setConsumerFactory(consumerFactory);
        factory.setConcurrency(concurrency);
        // Enable batch consumption; the batch size is controlled by ConsumerConfig.MAX_POLL_RECORDS_CONFIG.
        factory.setBatchListener(true);
        factory.setMessageConverter(new BatchMessagingMessageConverter());
        // Set how offsets are committed.
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        factory.getContainerProperties().setPollTimeout(3000);
        return factory;
    }


    public Map<String, Object> consumerConfigs() {
        Map<String, Object> propsMap = new HashMap<>();
        propsMap.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
        propsMap.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        propsMap.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, sessionTimeout);
        propsMap.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        propsMap.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        // Number of records per batch.
        propsMap.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, this.maxPollRecords);
        propsMap.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, this.maxPollIntervalMS);
        return propsMap;
    }

    @Bean
    public TestMessages listener() {
        return new TestMessages();
    }

}
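
The factory above reads several custom properties through @Value that do not appear in the earlier application.yml. A sketch of the extra entries it expects (the property names come from the code above; the values are examples only):

spring:
  kafka:
    consumer:
      concurrency: 3
      session:
        timeout: 30000
      maxpoll:
        records: 100
        interval: 120000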

The listener that uses the factory configured above:

import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class TestMessages {

    private static final Logger log = LoggerFactory.getLogger(TestMessages.class);

    /**
     * MANUAL: after each batch of poll() records has been processed by the consumer listener (ListenerConsumer),
     * the offsets are committed once Acknowledgment.acknowledge() is called manually.
     * @param message the batch of records
     * @param ack the manual acknowledgment handle
     */
    @KafkaListener(containerFactory = "batchDataFactory", topics = "kafka(topic name)")
    public void onMessageManual(List<Object> message, Acknowledgment ack) {
        log.info("batchDataFactory number of records in this batch: {}", message.size());
        message.forEach(item -> log.info("batchDataFactory record content: {}", item));
        ack.acknowledge(); // commit the offsets
    }

}

MANUAL_IMMEDIATE

Commits the offsets immediately after Acknowledgment.acknowledge() is called manually.

Similarities and differences between MANUAL and MANUAL_IMMEDIATE


Similarities
Both modes require the listener to call ack.acknowledge() manually to complete consumption of a message; otherwise the records will be delivered to the consumer again, for example when the consumer instance is restarted.

Differences
MANUAL: after all the records from the last poll() have been processed, the acknowledgment is queued and the offsets are committed in one operation, i.e. at the end of the batch (the behaviour is then similar to BATCH).
MANUAL_IMMEDIATE: the offsets are committed as soon as acknowledge() is called on the listener thread, so within a batch they can be committed one by one (see the sketch below).
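
To commit truly record by record with MANUAL_IMMEDIATE, you can also use a non-batch listener and acknowledge every record as it is processed. A minimal sketch; recordDataFactory is a hypothetical factory configured like batchDataFactory but without setBatchListener(true):

    // Sketch only: "recordDataFactory" is assumed to use AckMode.MANUAL_IMMEDIATE without batch listening.
    @KafkaListener(containerFactory = "recordDataFactory", topics = "kafka(topic name)")
    public void onMessageManualImmediate(String message, Acknowledgment ack) {
        // process the single record ...
        ack.acknowledge(); // the offset of this record is committed immediately
    }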

For the other modes, change this setting in the batch listener factory class:

    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
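
For example, switching the same factory to BATCH mode (a sketch) would look like this; with BATCH the container commits by itself, so the listener no longer needs the Acknowledgment parameter:

    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.BATCH);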

The content above may be unclear or contain mistakes. If you find any, please let me know and I will correct it as soon as possible. If this article helped you, please give it a thumbs up; your encouragement keeps me going.

Origin blog.csdn.net/Angel_asp/article/details/131003406