Integrate Kafka client (kafka-clients) to operate Kafka in Spring Boot microservices

Scenario : Integrate the Kafka client kafka-clients-3.0.0 into a Spring Boot microservice to operate Kafka: use the native KafkaProducer of kafka-clients to operate the Kafka producer (Producer), and the native KafkaConsumer to operate the Kafka consumer (Consumer).

Versions : JDK 1.8, Spring Boot 2.6.3, kafka_2.12-2.8.0, kafka-clients-3.0.0.

Kafka installation : https://blog.csdn.net/zhangbeizhen18/article/details/129071395

1. Basic concepts

Event : An event records the fact that "something happened" in the world or in your business. It is also called a record or a message in the documentation.

Broker : A Kafka node is a broker; multiple Brokers can form a Kafka cluster.

Topic : Kafka classifies messages according to Topic, and each message published to Kafka needs to specify a Topic.

Producer : The message producer, the client that sends messages to the Broker.

Consumer : The message consumer, the client that reads messages from the Broker.

ConsumerGroup : Each Consumer belongs to a specific ConsumerGroup. A message can be consumed by multiple different ConsumerGroups, but only one Consumer within a given ConsumerGroup can consume it.

Partition : A Topic can be divided into multiple partitions; the messages within each partition are ordered.

publish : Use a Producer to write data to Kafka.

subscribe : Use a Consumer to read data from Kafka.
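
One point from the Partition definition worth making concrete: ordering is only guaranteed within a single partition, and the record key determines which partition a record lands in. A minimal sketch (topic name and payloads are illustrative):

import org.apache.kafka.clients.producer.ProducerRecord;

// Records with the same non-null key are hashed to the same partition,
// so their relative order is preserved; records with different keys may
// land in different partitions, across which there is no ordering.
ProducerRecord<String, String> first = new ProducerRecord<>("hub-topic-city-02", "city-2023061501", "update 1");
ProducerRecord<String, String> second = new ProducerRecord<>("hub-topic-city-02", "city-2023061501", "update 2");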

2. Configure Kafka information in microservices

(1) Add dependencies in pom.xml

pom.xml file:

<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>3.0.0</version>
</dependency>

Analysis: use the native kafka-clients dependency, version 3.0.0, to operate Kafka's producers, consumers, and topics.
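
The sections below assume the topics already exist on the broker. If they need to be created from code, kafka-clients also provides an AdminClient; a minimal sketch, assuming the same broker address as in section 3 (partition and replica counts are illustrative):

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.19.203:29001");
try (AdminClient adminClient = AdminClient.create(props)) {
    // 1 partition, replication factor 1 -- enough for a single-broker test setup
    NewTopic topic = new NewTopic("hub-topic-city-02", 1, (short) 1);
    // all().get() blocks until the broker confirms; it can throw
    // InterruptedException/ExecutionException, handle or declare as needed
    adminClient.createTopics(Collections.singleton(topic)).all().get();
}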

3. Configure Kafka producers and consumers

To use the native kafka-clients, you need to configure a KafkaProducer and a KafkaConsumer and inject the Kafka configuration into these two objects; the producer and consumer are then operated through them.

The configuration details are documented in the configuration section of the official website: https://kafka.apache.org/documentation/

3.1 Configure KafkaProducer producer

(1) Sample code

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.Serializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KafkaConfig {
  @Bean
  public KafkaProducer<String, String> kafkaProducer() {
    Map<String, Object> configs = new HashMap<>();
    // IP and port of the Kafka server, format: (ip:port)
    configs.put("bootstrap.servers", "192.168.19.203:29001");
    // Number of retries when the client fails to send to the server
    configs.put("retries", 2);
    // When multiple records are sent to the same partition, the producer tries to batch them into fewer requests.
    // This helps the performance of both client and server; this setting controls the default batch size (in bytes).
    configs.put("batch.size", 16384);
    // Total bytes of memory the producer can use to buffer records waiting to be sent to the server
    configs.put("buffer.memory", 33554432);
    // Number of acknowledgments the producer requires the leader node to have received before considering a request
    // complete; this controls the durability of sent records on the server side.
    // acks=0: the producer will not wait for any acknowledgment from the server. The record is immediately added to the socket buffer and considered sent. There is no guarantee the server received it, the retries setting has no effect (the client generally won't know about any failures), and the offset returned for each record is always -1.
    // acks=1: the leader node writes the record to its local log and responds to the producer without waiting for full acknowledgment from all follower nodes. If the leader fails immediately after acknowledging the record but before the followers have replicated it, the record is lost.
    // acks=all (or acks=-1): the leader node waits for the full set of in-sync replicas to acknowledge the record; this guarantees the record is not lost as long as at least one in-sync replica remains alive.
    configs.put("acks", "-1");
    // Serializer class used for keys
    Serializer<String> keySerializer = new StringSerializer();
    // Serializer class used for values
    Serializer<String> valueSerializer = new StringSerializer();
    // Create the Kafka producer
    return new KafkaProducer<>(configs, keySerializer, valueSerializer);
  }
}

(2) Code analysis

Inject Kafka configuration information into KafkaProducer and create a KafkaProducer object.

Use the @Configuration and @Bean annotations to register the KafkaProducer object in Spring's IOC container, so that KafkaProducer can be used in the Spring environment.

The underlying configuration class of KafkaProducer is ProducerConfig, which can be referred to during configuration.

Fully qualified name: org.apache.kafka.clients.producer.ProducerConfig.
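
Raw string keys like "buffer.memory" are easy to mistype, and an unknown key is only reported as a runtime warning. A minimal alternative sketch of the same settings using the ProducerConfig constants:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;

Map<String, Object> configs = new HashMap<>();
// Same settings as in 3.1, with compile-checked constant names
configs.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.19.203:29001");
configs.put(ProducerConfig.RETRIES_CONFIG, 2);
configs.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
configs.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
configs.put(ProducerConfig.ACKS_CONFIG, "-1");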

3.2 Configure KafkaConsumer consumer

(1) Sample code

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class KafkaConfig {
  @Bean
  public KafkaConsumer<String, String> kafkaConsumer() {
    Map<String, Object> configs = new HashMap<>();
    // IP and port of the Kafka server, format: (ip:port)
    configs.put("bootstrap.servers", "192.168.19.203:29001");
    // Enable automatic commit of the consumer's offset to Kafka
    configs.put("enable.auto.commit", true);
    // Interval in milliseconds at which the consumer's offset is auto-committed
    configs.put("auto.commit.interval.ms", 5000);
    // What to do when there is no initial offset in Kafka or the current offset no longer exists:
    // earliest: when the offset is invalid, automatically reset to the earliest offset
    // latest: when the offset is invalid, automatically reset to the latest offset
    // none: when the offset is invalid, throw an exception
    configs.put("auto.offset.reset", "latest");
    // Maximum time in milliseconds the server blocks a fetch request
    configs.put("fetch.max.wait.ms", 500);
    // Minimum number of bytes the server should return for a fetch request
    configs.put("fetch.min.bytes", 1);
    // Heartbeat interval in milliseconds
    configs.put("heartbeat.interval.ms", 3000);
    // Maximum number of records returned by a single call to poll
    configs.put("max.poll.records", 500);
    // Consumer group id
    configs.put("group.id", "hub-topic-city-01-group");
    // Deserializer class used for keys
    Deserializer<String> keyDeserializer = new StringDeserializer();
    // Deserializer class used for values
    Deserializer<String> valueDeserializer = new StringDeserializer();
    // Create the Kafka consumer
    return new KafkaConsumer<>(configs, keyDeserializer, valueDeserializer);
  }
}

(2) Code analysis

Inject Kafka configuration information into KafkaConsumer and create a KafkaConsumer object.

Use the @Configuration and @Bean annotations to register the KafkaConsumer object in Spring's IOC container, so that KafkaConsumer can be used in the Spring environment.

The underlying configuration class of KafkaConsumer is ConsumerConfig, which can be referred to during configuration.

Fully qualified name: org.apache.kafka.clients.consumer.ConsumerConfig.
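
The same applies on the consumer side; a minimal sketch of the settings from 3.2 using the ConsumerConfig constants:

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;

Map<String, Object> configs = new HashMap<>();
// Same settings as in 3.2, with compile-checked constant names
configs.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.19.203:29001");
configs.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, true);
configs.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, 5000);
configs.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
configs.put(ConsumerConfig.FETCH_MAX_WAIT_MS_CONFIG, 500);
configs.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1);
configs.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, 3000);
configs.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);
configs.put(ConsumerConfig.GROUP_ID_CONFIG, "hub-topic-city-01-group");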

4. Use KafkaProducer to operate the Kafka producer (Producer)

Use the KafkaProducer of the native kafka-clients to operate the Kafka producer (Producer).

Fully qualified name of KafkaProducer: org.apache.kafka.clients.producer.KafkaProducer.

(1) Sample code

// JSONObject is Alibaba fastjson (assumed dependency of the original article)
import com.alibaba.fastjson.JSONObject;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/hub/example/producer")
@Slf4j
public class UseKafkaProducerController {
  @Autowired
  private KafkaProducer<String, String> kafkaProducer;
  private final String topicName = "hub-topic-city-02";
  @GetMapping("/f01_1")
  public Object f01_1() {
    try {
        // 1. Fetch the business data
        CityDTO cityDTO = CityDTO.buildDto(2023061501L, "Hangzhou", "Hangzhou is a nice city");
        String cityStr = JSONObject.toJSONString(cityDTO);
        log.info("Writing data to Kafka Topic: {}:", topicName);
        log.info(cityStr);
        // 2. Write the data to Kafka with KafkaProducer
        ProducerRecord<String, String> producerRecord = new ProducerRecord<>(topicName, cityStr);
        kafkaProducer.send(producerRecord);
    } catch (Exception e) {
        log.info("Producer exception while writing to Topic.");
        e.printStackTrace();
    }
    return "Write successful";
  }
}

(2) Code analysis

Create a ProducerRecord object, specifying the Kafka topic name and the data to be written; a ProducerRecord is one piece of data to be written into Kafka.

Pass the ProducerRecord to the send method of KafkaProducer, and the Producer writes the data to Kafka's Broker node.
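
One caveat: send is asynchronous, so the controller above returns "Write successful" before the broker has acknowledged anything. A minimal sketch of checking the outcome via the callback overload of send (variable names as in the controller above):

kafkaProducer.send(producerRecord, (metadata, exception) -> {
    if (exception != null) {
        // The send failed after exhausting the configured retries
        log.error("Write to Kafka failed.", exception);
    } else {
        // RecordMetadata reports where the broker persisted the record
        log.info("Written to topic {} partition {} at offset {}.",
                metadata.topic(), metadata.partition(), metadata.offset());
    }
});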

5. Use KafkaConsumer to operate the Kafka consumer (Consumer)

Use the KafkaConsumer of the native kafka-clients to operate the Kafka consumer (Consumer).

Fully qualified name of KafkaConsumer: org.apache.kafka.clients.consumer.KafkaConsumer.

(1) Sample code

import java.time.Duration;
import java.util.Collection;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

// ThreadUtil and JSONUtil are from Hutool, Lists is from Guava
// (assumed dependencies of the original article)
import cn.hutool.core.thread.ThreadUtil;
import cn.hutool.json.JSONUtil;
import com.google.common.collect.Lists;
import lombok.extern.slf4j.Slf4j;

@Component
@Slf4j
public class UseKafkaConsumer implements InitializingBean {
  @Autowired
  private KafkaConsumer<String, String> kafkaConsumer;
  private final String topicName = "hub-topic-city-02";
  @Override
  public void afterPropertiesSet() throws Exception {
    // KafkaConsumer is not thread-safe, so all consumer calls stay on this one thread
    Thread thread = new Thread(() -> {
        log.info("Starting a thread to listen on Topic: {}", topicName);
        ThreadUtil.sleep(1000);
        Collection<String> topics = Lists.newArrayList(topicName);
        kafkaConsumer.subscribe(topics);
        while (true) {
            ConsumerRecords<String, String> consumerRecords = kafkaConsumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> consumerRecord : consumerRecords) {
                // 1. Get the consumed data from the ConsumerRecord
                String originalMsg = consumerRecord.value();
                log.info("Raw data consumed from Kafka: " + originalMsg);
                // 2. Convert the consumed data into a DTO object
                CityDTO cityDTO = JSONUtil.toBean(originalMsg, CityDTO.class);
                log.info("Consumed data converted to DTO object: " + cityDTO.toString());
            }
        }
    });
    thread.start();
  }
}

(2) Code analysis

The while (true) loop keeps polling the KafkaConsumer, so the consumer effectively listens to the Kafka topic in real time.

Use the subscribe method of KafkaConsumer to subscribe to the Kafka Topic that needs to be monitored.

Use the poll method of KafkaConsumer to fetch the consumed messages as ConsumerRecord objects.

Obtain the actual business data from each ConsumerRecord.
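
A practical caveat about this loop: while (true) never exits, and KafkaConsumer is not safe for multi-threaded access, so the polling thread cannot simply be stopped from outside. A minimal shutdown sketch (hypothetical additions to the UseKafkaConsumer class above), built on wakeup, the one KafkaConsumer method that may be called from another thread:

import javax.annotation.PreDestroy;

import org.apache.kafka.common.errors.WakeupException;

// Wrap the poll loop inside the consumer thread:
try {
    while (true) {
        ConsumerRecords<String, String> consumerRecords = kafkaConsumer.poll(Duration.ofMillis(1000));
        // ... handle records as in the sample above ...
    }
} catch (WakeupException e) {
    // Expected when wakeup() is called during shutdown; fall through to close.
} finally {
    kafkaConsumer.close();
}

// Called by Spring on application shutdown; makes the blocked poll() throw WakeupException.
@PreDestroy
public void shutdown() {
    kafkaConsumer.wakeup();
}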

6. Test

(1) Use Postman to call the producer to write data

Request URL: http://127.0.0.1:18209/hub-209-kafka/hub/example/producer/f01_1

(2) Consumers automatically consume data

Log information:

Writing data to Kafka Topic: hub-topic-city-02:
{"cityDescribe":"Hangzhou is a nice city","cityId":2023061501,"cityName":"Hangzhou","updateTime":"2023-06-17 11:27:52"}
Raw data consumed from Kafka: {"cityDescribe":"Hangzhou is a nice city","cityId":2023061501,"cityName":"Hangzhou","updateTime":"2023-06-17 11:27:52"}
Consumed data converted to DTO object: CityDTO(cityId=2023061501, cityName=Hangzhou, cityDescribe=Hangzhou is a nice city, updateTime=Sat Jun 17 11:27:52 CST 2023)

7. Auxiliary class

import java.util.Date;

import com.fasterxml.jackson.annotation.JsonFormat;
import lombok.Builder;
import lombok.Data;

@Data
@Builder
public class CityDTO {
  private Long cityId;
  private String cityName;
  private String cityDescribe;
  @JsonFormat(pattern = "yyyy-MM-dd HH:mm:ss")
  private Date updateTime;
  public static CityDTO buildDto(Long cityId, String cityName,
                                 String cityDescribe) {
      return builder().cityId(cityId)
              .cityName(cityName).cityDescribe(cityDescribe)
              .updateTime(new Date()).build();
  }
}

Above, thanks.

June 17, 2023

Origin: blog.csdn.net/zhangbeizhen18/article/details/131265438