Three ways to use Kafka in Spring (listener, container, stream)

This article introduces three ways to use Kafka in Spring. The container approach is the most flexible but also the most complicated to develop, the stream approach is the easiest to use, and the listener approach is the most widely used because it was provided earliest.
For the complete code, refer to the sample project: https://github.com/qihaiyan/springcamp/tree/master/spring-kafka

I. Overview

In real projects, scenarios that use Kafka are very common; especially in event-driven architectures, Kafka is practically a standard component.

II. KafkaListener

KafkaListener is probably the most widely used approach at present. It is easy to develop and easy to understand, although in terms of ease of use it is gradually being superseded by spring-cloud-stream.
KafkaListener is an annotation. Add it to a method, and that method can process received Kafka messages. The Kafka topic to consume is specified through the topics parameter of the annotation. The topics parameter supports SpEL expressions, and multiple Kafka topics can be consumed at the same time:

@KafkaListener(topics = "test-topic")
public void receive(ConsumerRecord<String, String> consumerRecord) {
    // The received message value is available from the ConsumerRecord
    String payload = consumerRecord.value();
    log.info("received payload='{}'", payload);
}

The parameter of the annotated method is a ConsumerRecord, which holds the message received from Kafka.
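
As an illustration of the multi-topic support mentioned above, the topics attribute can also list several topics and resolve names from configuration through SpEL or property placeholders. A minimal sketch (the app.kafka.extra-topic property is a hypothetical example, not part of the sample project):

@KafkaListener(topics = {"test-topic", "${app.kafka.extra-topic}"})
public void receiveFromSeveralTopics(ConsumerRecord<String, String> consumerRecord) {
    // consumerRecord.topic() tells which of the subscribed topics the message came from
    log.info("received payload='{}' from topic='{}'", consumerRecord.value(), consumerRecord.topic());
}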

At the same time, Kafka needs to be configured in the application configuration, where the address of the Kafka server and the serialization method can be specified:

spring.kafka:
    bootstrap-servers: 192.168.1.1:9092
    consumer:
      group-id: utgroup
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
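
For completeness, a minimal sketch of sending a test message to this topic, so the listener above has something to receive. It assumes Spring Boot's auto-configured KafkaTemplate with its default String serializers; the sending code is illustrative and not part of the listener itself:

@Autowired
private KafkaTemplate<String, String> kafkaTemplate;

public void sendTestMessage() {
    // Publish a message to the topic consumed by the @KafkaListener above
    kafkaTemplate.send("test-topic", "hello kafka");
}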

III. ConcurrentMessageListenerContainer

With ConcurrentMessageListenerContainer, Kafka messages can be processed programmatically. The advantage of this approach is that the topic is specified in code, so the topic configuration can be stored anywhere, for example in a database, and different topics can be chosen based on runtime conditions. This flexibility is not possible with the other two approaches.

Configuration of ConcurrentMessageListenerContainer:

@Component
public class MessageListenerContainerConsumer {

    public static final String LISTENER_CONTAINER_TOPIC = "container-topic";

    public Set<String> consumedMessages = new HashSet<>();

    @PostConstruct
    void start() {
        // Callback invoked for every record received from the topic
        MessageListener<String, String> messageListener = record -> {
            System.out.println("MessageListenerContainerConsumer received message: " + record.value());
            consumedMessages.add(record.value());
        };

        ConcurrentMessageListenerContainer<String, String> container =
                new ConcurrentMessageListenerContainer<>(
                        consumerFactory(),
                        containerProperties(LISTENER_CONTAINER_TOPIC, messageListener));

        container.start();
    }

    private DefaultKafkaConsumerFactory<String, String> consumerFactory() {
        // Consumer properties: broker address, consumer group and offset-reset policy
        Map<String, Object> consumerConfig = new HashMap<>();
        consumerConfig.put(BOOTSTRAP_SERVERS_CONFIG, System.getProperty("spring.kafka.bootstrap-servers"));
        consumerConfig.put(GROUP_ID_CONFIG, "groupId");
        consumerConfig.put(AUTO_OFFSET_RESET_CONFIG, "earliest");
        return new DefaultKafkaConsumerFactory<>(consumerConfig,
                new StringDeserializer(), new StringDeserializer());
    }

    private ContainerProperties containerProperties(String topic, MessageListener<String, String> messageListener) {
        ContainerProperties containerProperties = new ContainerProperties(topic);
        containerProperties.setMessageListener(messageListener);
        return containerProperties;
    }
}

The code above defines the class MessageListenerContainerConsumer, a Spring bean. In the bean's @PostConstruct initialization method we create a ConcurrentMessageListenerContainer and specify a topic. Here a constant is used for convenience of demonstration (public static final String LISTENER_CONTAINER_TOPIC = "container-topic"), but the topic can in fact be any variable: it can be read from a database or computed dynamically from the actual scenario, so the topic can be configured very flexibly.
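
To make the dynamic-topic point concrete, here is a minimal sketch of the same start() method, reusing the consumerFactory() and containerProperties() helpers shown above; the topicRepository bean and its findActiveTopic() method are hypothetical stand-ins for any runtime lookup:

@PostConstruct
void start() {
    // Hypothetical: resolve the topic at runtime, e.g. from a database table,
    // instead of hard-coding it in a constant or a configuration file.
    String topic = topicRepository.findActiveTopic();

    MessageListener<String, String> messageListener = record -> consumedMessages.add(record.value());

    ConcurrentMessageListenerContainer<String, String> container =
            new ConcurrentMessageListenerContainer<>(
                    consumerFactory(),
                    containerProperties(topic, messageListener));
    container.start();
}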

IV. spring-cloud-stream

spring-cloud-stream is a sub-project of Spring Cloud whose goal is to provide an event-driven programming framework. spring-cloud-stream abstracts Kafka very well; besides Kafka it also supports RabbitMQ. Apart from the configuration file, there is no trace of Kafka in the program, which means we don't need to care about the underlying Kafka details during development. If you want to switch from Kafka to RabbitMQ, you only need to change the imported jar package (the binder dependency) and the configuration file.

For details, see spring's official documentation: https://spring.io/projects/spring-cloud-stream

spring-cloud-stream is built on spring-cloud-function. We only need to implement a functional interface in the program to process Kafka messages, so the code is very simple.

@SpringBootApplication
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @Bean
    public Function<String, Object> handle() {
        // Receives the message payload from the input binding; the return value goes to the output binding
        return String::toUpperCase;
    }
}

Apart from the configuration file, only one method, public Function<String, Object> handle(), is needed to process Kafka messages. This method has no relationship with Kafka at all; it is an ordinary function, which shows how far the abstraction goes.
Note that the name of the method must match the binding name in the configuration file.

Configuration:

spring:
  cloud.stream:
    bindings:
      handle-in-0:
        destination: testEmbeddedIn
        content-type: text/plain
        group: utgroup
      handle-out-0:
        destination: testEmbeddedOut
    kafka:
      binder:
        brokers: 192.168.1.1:9092
        configuration:
          key.serializer: org.apache.kafka.common.serialization.ByteArraySerializer
          value.serializer: org.apache.kafka.common.serialization.ByteArraySerializer

Note the two configuration entries handle-in-0 and handle-out-0: handle refers to the handle method in the code above, public Function<String, Object> handle(). spring-cloud-stream establishes the mapping between code and configuration through the method name and the binding names in the configuration file, following the <functionName>-in-<index> / <functionName>-out-<index> convention; for example, if the bean method were renamed to process, the bindings would be process-in-0 and process-out-0. This is an embodiment of the idea of convention over configuration.
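
Following the same convention, a consume-only handler can be declared as a java.util.function.Consumer bean. A minimal sketch (the bean name logMessage and its binding are illustrative, not part of the sample project):

@Bean
public Consumer<String> logMessage() {
    // The input binding would be logMessage-in-0, following the <functionName>-in-<index> convention
    return message -> System.out.println("received: " + message);
}

When more than one function bean is defined, spring-cloud-stream also expects the functions to be listed in the spring.cloud.function.definition property so that it knows which ones to bind.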
