Multiple consumers subscribe to a Kafka Topic (using @KafkaListener and KafkaTemplate)


Scenario : A Producer publishes a message to a Topic, and multiple Consumers subscribe to that Topic. Each Consumer specifies its own ConsumerGroup, so the same message can be consumed by multiple different ConsumerGroups.

Versions: JDK 1.8, Spring Boot 2.6.3, kafka_2.12-2.8.0, spring-kafka 2.8.2.

Kafka cluster installation : https://blog.csdn.net/zhangbeizhen18/article/details/131156084

1. Basic concepts

Topic : Kafka classifies messages by Topic; every message published to Kafka must specify a Topic.

Producer : The message producer, the client that sends messages to the Broker.

Consumer : The message consumer, the client that reads messages from the Broker.

ConsumerGroup : Each Consumer belongs to a specific ConsumerGroup. A message can be consumed by multiple different ConsumerGroups, but within one ConsumerGroup only a single Consumer consumes it.

publish : Publishing, i.e. using a Producer to write data to Kafka.

subscribe : Subscribing, i.e. using a Consumer to read data from Kafka.

2. Configure Kafka information in microservices

(1) Add dependencies in pom.xml

<dependency>
  <groupId>org.springframework.kafka</groupId>
  <artifactId>spring-kafka</artifactId>
  <version>2.8.2</version>
</dependency>

Note: under the hood, the spring-kafka framework uses the native kafka-clients library. For this example the corresponding kafka-clients version is 3.0.0.

(2) Configure Kafka information in application.yml

For configuration, refer to the configuration on the official website: https://kafka.apache.org/documentation/

(1) application.yml configuration content

spring:
  kafka:
    # IP addresses and ports of the Kafka cluster, format: ip:port
    bootstrap-servers:
      - 192.168.19.161:29092
      - 192.168.19.162:29092
      - 192.168.19.163:29092
    # Producer
    producer:
      # Number of retries when the client fails to send to the server
      retries: 2
      # When multiple records are sent to the same partition, the producer batches them
      # into fewer requests, which improves performance on both the client and the server.
      # This setting controls the default batch size (in bytes).
      batch-size: 16384
      # Total memory (in bytes) the producer can use to buffer records waiting to be sent to the server
      buffer-memory: 33554432
      # Serializer class for keys
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      # Serializer class for values
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      # Number of acknowledgments the producer requires the leader to have received before
      # considering a request complete; this controls the durability of sent records.
      # acks=0: the producer does not wait for any acknowledgment from the server. The record
      #   is added to the socket buffer immediately and considered sent. There is no guarantee
      #   the server received it, and retries has no effect (the client generally won't know
      #   about any failure). The offset returned for each record is always -1.
      # acks=1: the leader writes the record to its local log and responds without waiting for
      #   all followers to fully acknowledge it. If the leader fails right after acknowledging
      #   the record, but before the followers have replicated it, the record is lost.
      # acks=all (or acks=-1): the leader waits for the full set of in-sync replicas to
      #   acknowledge the record, which guarantees the record is not lost as long as at least
      #   one in-sync replica remains alive.
      acks: -1
    consumer:
      # Enable automatic commit of the consumer's offsets to Kafka
      enable-auto-commit: true
      # Interval (in milliseconds) at which the consumer's offsets are auto-committed
      auto-commit-interval: 1000
      # What to do when there is no initial offset in Kafka, or the current offset no longer exists:
      # earliest: automatically reset to the earliest offset
      # latest: automatically reset to the latest offset
      # none: throw an exception
      auto-offset-reset: latest
      # Maximum number of records returned by a single call to poll
      max-poll-records: 500
      # Maximum time (in milliseconds) a fetch request may block
      fetch-max-wait: 500
      # Minimum amount of data (in bytes) the server should return for a fetch request
      fetch-min-size: 1
      # Heartbeat interval (in milliseconds)
      heartbeat-interval: 3000
      # Deserializer class for keys
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      # Deserializer class for values
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

(2) Analysis

The configuration properties class ships in the Spring Boot auto-configuration jar: spring-boot-autoconfigure-2.6.3.jar.

Class: org.springframework.boot.autoconfigure.kafka.KafkaProperties.

It is activated through the @ConfigurationProperties annotation, with the prefix: spring.kafka.

The spring-kafka framework is configured differently for a stand-alone Kafka and a Kafka cluster:

In the bootstrap-servers property, a stand-alone instance is configured with a single ip:port pair, while a cluster can be configured with multiple ip:port pairs, as the sketch below shows.
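A minimal sketch of the stand-alone variant (the host and port here are placeholders taken from the cluster example, not a real installation):

spring:
  kafka:
    # Stand-alone Kafka: a single ip:port entry is enough
    bootstrap-servers:
      - 192.168.19.161:29092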

(3) Loading logic

When the Spring Boot microservice starts, Spring Boot reads the configuration in application.yml, matches it against KafkaProperties in spring-boot-autoconfigure-2.6.3.jar, and injects the values into the corresponding properties. Once the microservice is up, the KafkaProperties configuration can be used seamlessly in the Spring environment.

The spring-kafka framework injects the KafkaProperties configuration into KafkaTemplate, which operates the producer (Producer).

The spring-kafka framework uses KafkaProperties together with @KafkaListener to operate the consumer (Consumer).
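For illustration, here is a minimal sketch (a hypothetical class, not part of the example project) that injects the bound KafkaProperties bean and prints a few of its values after startup:

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.stereotype.Component;

@Component
public class KafkaPropertiesInspector implements CommandLineRunner {
  private final KafkaProperties kafkaProperties;

  public KafkaPropertiesInspector(KafkaProperties kafkaProperties) {
    this.kafkaProperties = kafkaProperties;
  }

  @Override
  public void run(String... args) {
    // Values bound from spring.kafka.bootstrap-servers and spring.kafka.producer.acks
    System.out.println(kafkaProperties.getBootstrapServers());
    System.out.println(kafkaProperties.getProducer().getAcks());
  }
}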

3. Producer (ChangjiangDeltaCityProducerController)

(1) sample code

import com.alibaba.fastjson.JSONObject;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.UUID;

@RestController
@RequestMapping("/hub/example/delta/producer")
@Slf4j
public class ChangjiangDeltaCityProducerController {
  //1. Inject KafkaTemplate
  @Autowired
  private KafkaTemplate<String, String> kafkaTemplate;
  //2. Define the Kafka Topic
  private final String topicName = "hub-topic-city-delta";
  @GetMapping("/f01_1")
  public Object f01_1(String msgContent) {
    try {
      //3. Build the business data object
      String uuid = UUID.randomUUID().toString().replace("-", "");
      long now = System.currentTimeMillis();
      String msgKey = "delta" + ":" + uuid + ":" + now;
      MsgDto msgDto = MsgDto.buildDto(uuid, now, msgContent);
      String msgData = JSONObject.toJSONString(msgDto);
      log.info("KafkaProducer writing to Topic: {} on the Kafka cluster, Key:", topicName);
      log.info(msgKey);
      log.info("KafkaProducer writing to Topic: {} on the Kafka cluster, Data:", topicName);
      log.info(msgData);
      //4. Use KafkaTemplate to write (topic, key, data) to the Kafka cluster
      kafkaTemplate.send(topicName, msgKey, msgData);
    } catch (Exception e) {
      log.info("Producer failed to write to Topic.");
      e.printStackTrace();
    }
    return "Write succeeded";
  }
}

(2) Code analysis

KafkaTemplate writes a JSON string to the Topic hub-topic-city-delta on the Kafka cluster, publishing a message that is then delivered to the subscribed consumers.
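Note that send() is asynchronous: in spring-kafka 2.8 it returns a ListenableFuture<SendResult<K, V>>, so a callback can be attached to verify the outcome. A minimal sketch, reusing the names from the controller above:

kafkaTemplate.send(topicName, msgKey, msgData).addCallback(
    // Success: log where the record landed
    result -> log.info("Sent to partition {} at offset {}",
        result.getRecordMetadata().partition(),
        result.getRecordMetadata().offset()),
    // Failure: log the exception
    ex -> log.error("Send failed", ex));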

4. Consumer 1 (HangzhouCityConsumer)

(1) sample code

import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
@Slf4j
public class HangzhouCityConsumer {
  // 1. Define the Kafka Topic (must be a static final constant to be usable in the annotation)
  private static final String topicName = "hub-topic-city-delta";
  // 2. Use @KafkaListener to listen to the Topic on the Kafka cluster
  @KafkaListener(
      topics = {topicName},
      groupId = "hub-topic-city-delta-group-hangzhou")
  public void consumeMsg(ConsumerRecord<?, ?> record) {
    try {
        //3. A message fetched from the cluster is delivered as a ConsumerRecord
        String msgKey = (String) record.key();
        String msgData = (String) record.value();
        log.info("HangzhouCityConsumer consumed from Topic: {} on the Kafka cluster, raw Key:", topicName);
        log.info(msgKey);
        log.info("HangzhouCityConsumer consumed from Topic: {} on the Kafka cluster, raw Data:", topicName);
        log.info(msgData);
    } catch (Exception e) {
        log.info("HangzhouCityConsumer failed to consume from Topic.");
        e.printStackTrace();
    }
  }
}

(2) Code analysis

The topics attribute of @KafkaListener specifies the topic to listen to: hub-topic-city-delta.

The groupId attribute of @KafkaListener specifies the consumer group: hub-topic-city-delta-group-hangzhou.
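To see the other half of the ConsumerGroup semantics, consider a hypothetical extra listener (not part of the example project) that joins the same group as HangzhouCityConsumer. Each message would then be delivered to only one of the two listeners, with the topic's partitions balanced between them:

import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
@Slf4j
public class HangzhouCityBackupConsumer {
  // Same group as HangzhouCityConsumer: messages are load-balanced, not duplicated
  @KafkaListener(
      topics = {"hub-topic-city-delta"},
      groupId = "hub-topic-city-delta-group-hangzhou")
  public void consumeMsg(ConsumerRecord<?, ?> record) {
    log.info("Backup consumer in the same group received Key: {}", record.key());
  }
}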

5. Consumer 2 (ShanghaiCityConsumer)

(1) sample code

import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
@Slf4j
public class ShanghaiCityConsumer {
  // 1. Define the Kafka Topic (must be a static final constant to be usable in the annotation)
  private static final String topicName = "hub-topic-city-delta";
  // 2. Use @KafkaListener to listen to the Topic on the Kafka cluster
  @KafkaListener(
          topics = {topicName},
          groupId = "hub-topic-city-delta-group-shanghai")
  public void consumeMsg(ConsumerRecord<?, ?> record) {
    try {
        //3. A message fetched from the cluster is delivered as a ConsumerRecord
        String msgKey = (String) record.key();
        String msgData = (String) record.value();
        log.info("ShanghaiCityConsumer consumed from Topic: {} on the Kafka cluster, raw Key:", topicName);
        log.info(msgKey);
        log.info("ShanghaiCityConsumer consumed from Topic: {} on the Kafka cluster, raw Data:", topicName);
        log.info(msgData);
    } catch (Exception e) {
        log.info("ShanghaiCityConsumer failed to consume from Topic.");
        e.printStackTrace();
    }
  }
}

(2) Code analysis

The topics attribute of @KafkaListener specifies the topic to listen to: hub-topic-city-delta.

The groupId attribute of @KafkaListener specifies the consumer group: hub-topic-city-delta-group-shanghai.
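As a side note, a single listener can also be scaled within one application instance via the concurrency attribute of @KafkaListener. A minimal sketch (a hypothetical class, not part of the example project):

import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
@Slf4j
public class ShanghaiCityConcurrentConsumer {
  // concurrency = "3" starts up to 3 consumer threads in this one instance, all in the
  // same group; it only takes effect if the topic has at least 3 partitions.
  @KafkaListener(
      topics = {"hub-topic-city-delta"},
      groupId = "hub-topic-city-delta-group-shanghai",
      concurrency = "3")
  public void consumeMsg(ConsumerRecord<?, ?> record) {
    log.info("{} received Key: {}", Thread.currentThread().getName(), record.key());
  }
}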

6. Test

(1) Use Postman to call the producer and write data

Request URL: http://127.0.0.1:18208/hub-208-kafka/hub/example/delta/producer/f01_1

Parameter: msgContent="The Yangtze River Delta economic belt is powerful"

(2) Producer log

KafkaProducer writing to Topic: hub-topic-city-delta on the Kafka cluster, Key:
delta:b5a669933f4041588d53d53c22888943:1687789723647
KafkaProducer writing to Topic: hub-topic-city-delta on the Kafka cluster, Data:
{"msgContent":"The Yangtze River Delta economic belt is powerful","publicTime":"2023-06-26 22:28:43","uuid":"b5a669933f4041588d53d53c22888943"}

(3) Consumer 1 log

HangzhouCityConsumer consumed from Topic: hub-topic-city-delta on the Kafka cluster, raw Key:
delta:b5a669933f4041588d53d53c22888943:1687789723647
HangzhouCityConsumer consumed from Topic: hub-topic-city-delta on the Kafka cluster, raw Data:
{"msgContent":"The Yangtze River Delta economic belt is powerful","publicTime":"2023-06-26 22:28:43","uuid":"b5a669933f4041588d53d53c22888943"}

(4) Consumer 2 log

ShanghaiCityConsumer consumed from Topic: hub-topic-city-delta on the Kafka cluster, raw Key:
delta:b5a669933f4041588d53d53c22888943:1687789723647
ShanghaiCityConsumer consumed from Topic: hub-topic-city-delta on the Kafka cluster, raw Data:
{"msgContent":"The Yangtze River Delta economic belt is powerful","publicTime":"2023-06-26 22:28:43","uuid":"b5a669933f4041588d53d53c22888943"}

(5) Conclusion

Each Consumer specifies a different ConsumerGroup, so the same message is consumed by multiple different ConsumerGroups, as the two consumer logs show.

7. Auxiliary class

import lombok.Builder;
import lombok.Data;

import java.io.Serializable;
import java.util.Date;

// DateUtil is assumed here to be Hutool's cn.hutool.core.date.DateUtil
import cn.hutool.core.date.DateUtil;

@Data
@Builder
public class MsgDto implements Serializable {
  private String uuid;
  private String publicTime;
  private String msgContent;
  public static MsgDto buildDto(String uuid,
                      long publicTime,
                      String msgContent) {
      // Format the epoch-millisecond timestamp as "yyyy-MM-dd HH:mm:ss"
      return builder().uuid(uuid)
          .publicTime(DateUtil.formatDateTime(new Date(publicTime)))
          .msgContent(msgContent).build();
  }
}
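For reference, a quick usage sketch that mirrors what the producer does (JSONObject is fastjson's com.alibaba.fastjson.JSONObject, as in the controller; the values are taken from the test logs above):

MsgDto dto = MsgDto.buildDto("b5a669933f4041588d53d53c22888943",
    1687789723647L, "The Yangtze River Delta economic belt is powerful");
String json = JSONObject.toJSONString(dto);
// json: {"msgContent":"The Yangtze River Delta economic belt is powerful",
//        "publicTime":"2023-06-26 22:28:43","uuid":"b5a669933f4041588d53d53c22888943"}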

That's all, thanks.

June 26, 2023
