Apache Kafka - Flexible Control of Kafka Consumption: Dynamically Starting and Stopping Listeners




Overview

In practical applications, it is often necessary to dynamically enable or disable a Kafka consumer's listening according to business needs. For example, consumption of a topic may need to be suspended for a certain period, or started only when certain conditions are met.

In Spring Boot, the facilities provided by Spring Kafka make it possible to start, stop, pause, and resume consumer listening dynamically at runtime.


Approach

First, configure the Kafka consumer's properties. In Spring Boot, this is done by adding the corresponding entries to application.properties or application.yml.

Here is an example configuration:

spring.kafka.consumer.bootstrap-servers=<Kafka server address>
spring.kafka.consumer.group-id=<consumer group ID>
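
The equivalent entries in application.yml form, for reference (the values are placeholders):

spring:
  kafka:
    consumer:
      bootstrap-servers: <Kafka server address>
      group-id: <consumer group ID>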

Next, create a Kafka consumer: use the @KafkaListener annotation to specify the topic to listen to, and write the corresponding message-handling method. For example:

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KafkaConsumer {

    @KafkaListener(topics = "<Kafka topic>")
    public void receive(String message) {
        // handle the received message
    }
}

Now, the following two methods can be used to start, stop, pause, and resume consumption dynamically:

Method 1: Use the autoStartup attribute of the @KafkaListener annotation

The @KafkaListener annotation has an attribute named autoStartup, which controls whether the consumer starts automatically. Its default value is "true", meaning the listener starts with the application. If it is set to "false", the consumer will not start automatically.

@KafkaListener(topics = "<Kafka topic>", autoStartup = "false")
public void receive(String message) {
    // handle the received message
}
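
Because autoStartup is declared as a String, it also accepts property placeholders, so the initial state can be driven by configuration instead of being hard-coded. A minimal sketch, assuming a custom property named kafka.listener.auto-start (the property name is illustrative):

@KafkaListener(topics = "<Kafka topic>", autoStartup = "${kafka.listener.auto-start:false}")
public void receive(String message) {
    // handle the received message
}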

To start such a consumer dynamically at runtime, you can start it manually via the KafkaListenerEndpointRegistry bean:

@Autowired
private KafkaListenerEndpointRegistry endpointRegistry;

// start the consumer
endpointRegistry.getListenerContainer("<KafkaListener id>").start();

Similarly, you can use the stop() method to stop the consumer:

// stop the consumer
endpointRegistry.getListenerContainer("<KafkaListener id>").stop();

Method 2: Use the pause() and resume() methods of the KafkaListenerEndpointRegistry bean

The KafkaListenerEndpointRegistry bean provides pause() and resume() methods for suspending and resuming a consumer's listening. Unlike stop(), pause() does not shut the container down: the consumer stays in its group and the container keeps polling (with its partitions paused), so no rebalance is triggered.

@Autowired
private KafkaListenerEndpointRegistry endpointRegistry;

// pause the consumer's listening
endpointRegistry.getListenerContainer("<KafkaListener id>").pause();

// resume the consumer's listening
endpointRegistry.getListenerContainer("<KafkaListener id>").resume();

With these methods, consumption can be started, stopped, paused, and resumed dynamically at runtime.
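
The two approaches can also be wrapped in a small helper service. The following is a minimal sketch (the class name and the IllegalArgumentException guard are illustrative, not from the original article):

import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.listener.MessageListenerContainer;
import org.springframework.stereotype.Service;

@Service
public class KafkaListenerControlService {

    private final KafkaListenerEndpointRegistry registry;

    public KafkaListenerControlService(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    /** Start the container if it is not running, stop it otherwise. */
    public void toggle(String listenerId) {
        MessageListenerContainer container = registry.getListenerContainer(listenerId);
        if (container == null) {
            throw new IllegalArgumentException("No listener container with id " + listenerId);
        }
        if (container.isRunning()) {
            container.stop();   // full stop: the consumer leaves the group
        } else {
            container.start();
        }
    }

    /** Pause without leaving the consumer group (no rebalance). */
    public void pause(String listenerId) {
        registry.getListenerContainer(listenerId).pause();
    }

    /** Resume a previously paused container. */
    public void resume(String listenerId) {
        registry.getListenerContainer(listenerId).resume();
    }
}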


Code


import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.listener.ConcurrentMessageListenerContainer;
import org.springframework.kafka.listener.ConsumerAwareListenerErrorHandler;
import org.springframework.kafka.listener.ContainerProperties;

import java.util.HashMap;
import java.util.Map;

/**
 * @author artisan
 */
@Slf4j
@Configuration
public class KafkaConfig {

    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServer;

    @Value("${spring.kafka.consumer.auto-offset-reset}")
    private String autoOffsetReset;

    @Value("${spring.kafka.consumer.enable-auto-commit}")
    private String enableAutoCommit;

    @Value("${spring.kafka.consumer.key-deserializer}")
    private String keyDeserializer;

    @Value("${spring.kafka.consumer.value-deserializer}")
    private String valueDeserializer;

    @Value("${spring.kafka.consumer.group-id}")
    private String group_id;

    @Value("${spring.kafka.consumer.max-poll-records}")
    private String maxPollRecords;

    @Value("${spring.kafka.consumer.max-poll-interval-ms}")
    private String maxPollIntervalMs;

    @Value("${spring.kafka.listener.concurrency}")
    private Integer concurrency;

    private final String consumerInterceptor = "net.zf.module.system.kafka.interceptor.FailureRateInterceptor";


    /**
     * Consumer configuration.
     */
    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>(32);
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetReset);
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, enableAutoCommit);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, keyDeserializer);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, valueDeserializer);
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, maxPollRecords);
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, maxPollIntervalMs);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, consumerInterceptor);
        return props;
    }


    /**
     * Batch listener container factory.
     */
    @Bean
    public KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> batchFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(consumerConfigs()));
        // manual acknowledgment: the listener must call Acknowledgment.acknowledge()
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
        factory.setBatchListener(true);
        factory.setConcurrency(concurrency);
        return factory;
    }




    /**
     * Error handler for exceptions thrown by the listener.
     */
    @Bean
    public ConsumerAwareListenerErrorHandler consumerAwareListenerErrorHandler() {
        return (message, exception, consumer) -> {
            // log.error("message {}, cause: {}", message, exception.getMessage());
            log.error("consumerAwareListenerErrorHandler called");
            return null;
        };
    }

}
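
For reference, here is a configuration sketch that supplies every property the class reads via @Value. The values are illustrative; note that spring.kafka.consumer.max-poll-interval-ms is not a standard Spring Boot key, but @Value can bind any property name:

spring:
  kafka:
    bootstrap-servers: localhost:9092
    consumer:
      group-id: attack-consumer-group
      auto-offset-reset: latest
      enable-auto-commit: false
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      max-poll-records: 500
      max-poll-interval-ms: 300000
    listener:
      concurrency: 3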


Usage

    @KafkaListener(topicPattern = KafkaTopicConstant.ATTACK_MESSAGE + ".*",
            containerFactory = "batchFactory",
            errorHandler = "consumerAwareListenerErrorHandler",
            id = "attackConsumer")
    public void processMessage(List<String> records, Acknowledgment ack) {
        log.info("AttackKafkaConsumer thread {}, records pulled in this batch: {}", Thread.currentThread().getId(), records.size());
        try {
            List<AttackMessage> attackMessages = new ArrayList<>();
            records.forEach(record ->
                    messageExecutorFactory.process(KafkaTopicConstant.ATTACK_MESSAGE).execute(record, attackMessages));
            if (!attackMessages.isEmpty()) {
                attackMessageESService.addDocuments(attackMessages, false);
            }
        } finally {
            // manually acknowledge the batch (AckMode.MANUAL)
            ack.acknowledge();
        }
    }

In this code, the @KafkaListener annotation marks the method as a Kafka consumer:

  • The topicPattern parameter specifies the pattern of topics the consumer listens to: all topics whose names start with KafkaTopicConstant.ATTACK_MESSAGE.
  • The containerFactory parameter names the factory bean used to create the Kafka listener container (batchFactory above).
  • The errorHandler parameter names the error handler for exceptions thrown by the listener.
  • The id parameter sets the consumer's id, which is also the key used to look up its container in the KafkaListenerEndpointRegistry.

In the listener method, the records parameter holds the batch of message records, and the ack parameter is used to manually acknowledge that they have been consumed.

The method first logs the current thread ID and the number of records pulled in this batch. It then processes the records one by one, collecting the results in a list named attackMessages. If the list is not empty, its contents are written to Elasticsearch.

Finally, the batch is manually acknowledged.


Control


import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

 
@Slf4j
@RestController
public class KafkaConsumerController {


    @Autowired
    private KafkaListenerEndpointRegistry registry;

    /**
     * Start listening.
     */
    @GetMapping("/start")
    public void start() {
        // if the listener container is not running, start it
        if (!registry.getListenerContainer("attackConsumer").isRunning()) {
            log.info("start");
            registry.getListenerContainer("attackConsumer").start();
        }
        // also resume it, in case it was merely paused
        registry.getListenerContainer("attackConsumer").resume();
        log.info("resume over");
    }

    /**
     * Pause listening.
     */
    @GetMapping("/pause")
    public void pause() {
        // pause the listener without stopping the container
        registry.getListenerContainer("attackConsumer").pause();
        log.info("pause");
    }
}
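
With the application running, a GET request to /start starts the attackConsumer container if it is stopped and resumes it if it was merely paused, while a GET request to /pause suspends consumption without removing the consumer from its group, so a later /start picks up where consumption left off without triggering a rebalance.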
    

Extension

KafkaListenerEndpointRegistry

KafkaListenerEndpointRegistry is a component provided by Spring Kafka for managing the registration and lifecycle of Kafka consumer listeners. It is an interface that exposes methods for managing Kafka listener containers: registering and starting them, pausing and resuming them, and so on.

When the @KafkaListener annotation is used in a Spring Boot application, Spring Kafka automatically creates a KafkaListenerEndpointRegistry instance and uses it to manage all Kafka listener containers. It is a core Spring Kafka component for monitoring and controlling Kafka consumers.
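
Beyond looking containers up by id, the registry can also enumerate everything it manages, which is handy for building an overview of listener states. A minimal sketch (the helper method itself is illustrative):

import java.util.HashMap;
import java.util.Map;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;

public static Map<String, Boolean> listenerStates(KafkaListenerEndpointRegistry registry) {
    // map each registered listener id to whether its container is currently running
    Map<String, Boolean> states = new HashMap<>();
    for (String id : registry.getListenerContainerIds()) {
        states.put(id, registry.getListenerContainer(id).isRunning());
    }
    return states;
}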

