Use the spring-kafka Java API to operate Topics of a Kafka cluster

Scenario: Integrate spring-kafka 2.8.2 into a Spring Boot microservice to create and delete Topics on a Kafka cluster.

Versions: JDK 1.8, Spring Boot 2.6.3, kafka_2.12-2.8.0, spring-kafka 2.8.2.

Kafka cluster installation: https://blog.csdn.net/zhangbeizhen18/article/details/131156084

1. Configure Kafka information in the microservice

1.1 Add dependencies in pom.xml

pom.xml file:

<dependency>
  <groupId>org.springframework.kafka</groupId>
  <artifactId>spring-kafka</artifactId>
  <version>2.8.2</version>
</dependency>

Analysis: The spring-kafka version should generally match the version that your Spring Boot release is built against.

Note: under the hood, the spring-kafka framework uses the native kafka-clients library; in this example the corresponding kafka-clients version is 3.0.0.
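
To double-check which kafka-clients version actually landed on the classpath, one option (a minimal sketch; AppInfoParser is a utility class shipped inside kafka-clients) is to print the version the client library reports about itself:

import org.apache.kafka.common.utils.AppInfoParser;

public class KafkaClientsVersionCheck {
  public static void main(String[] args) {
      // Prints the version and commit id of the kafka-clients jar on the classpath
      System.out.println("kafka-clients version: " + AppInfoParser.getVersion());
      System.out.println("kafka-clients commit : " + AppInfoParser.getCommitId());
  }
}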

1.2 Configure Kafka information in application.yml

The configuration options are documented on the official website: https://kafka.apache.org/documentation/

(1) application.yml configuration content

spring:
  kafka:
    # IPs and ports of the Kafka cluster brokers, format: (ip:port)
    bootstrap-servers:
      - 192.168.19.161:29092
      - 192.168.19.162:29092
      - 192.168.19.163:29092
    # Producer
    producer:
      # Number of retries after the client fails to send to the server
      retries: 2
      # When multiple records are sent to the same partition, the producer batches them
      # together into fewer requests. This helps performance on both the client and the
      # server. This setting controls the default batch size in bytes.
      batch-size: 16384
      # Total bytes of memory the producer can use to buffer records waiting to be sent to the server
      buffer-memory: 33554432
      # Serializer class used for keys
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      # Serializer class used for values
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      # Number of acknowledgments the producer requires the leader to receive before
      # considering a request complete; controls the durability of records on the server side.
      # acks=0: the producer does not wait for any acknowledgment from the server. The record
      #   is immediately added to the socket buffer and considered sent. There is no guarantee
      #   the server received the record, and the retries setting has no effect (the client
      #   generally does not learn of failures). The offset returned for each record is always -1.
      # acks=1: the leader writes the record to its local log and responds without waiting for
      #   all followers to fully acknowledge. If the leader fails right after acknowledging the
      #   record, but before the followers have replicated it, the record is lost.
      # acks=all (or acks=-1): the leader waits for the full set of in-sync replicas to
      #   acknowledge the record. This guarantees the record is not lost as long as at least
      #   one in-sync replica remains alive.
      acks: -1
    consumer:
      # Enable automatic commit of the consumer's offsets to Kafka
      enable-auto-commit: true
      # Interval between automatic offset commits, in milliseconds
      auto-commit-interval: 1000
      # What to do when there is no initial offset in Kafka, or the current offset no longer exists:
      # earliest: automatically reset to the earliest offset
      # latest: automatically reset to the latest offset
      # none: throw an exception
      auto-offset-reset: latest
      # Maximum number of records returned by a single call to poll
      max-poll-records: 500
      # Maximum time the server blocks a fetch request, in milliseconds
      fetch-max-wait: 500
      # Minimum number of bytes the server should return for a fetch request
      fetch-min-size: 1
      # Heartbeat interval, in milliseconds
      heartbeat-interval: 3000
      # Deserializer class used for keys
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      # Deserializer class used for values
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

(2) Analysis

The configuration properties class ships in the Spring Boot auto-configuration jar: spring-boot-autoconfigure-2.6.3.jar.

Class: org.springframework.boot.autoconfigure.kafka.KafkaProperties.

It is bound with the @ConfigurationProperties annotation, using the prefix spring.kafka.

The spring-kafka configuration differs between a stand-alone Kafka instance and a Kafka cluster:

In the bootstrap-servers property, a stand-alone instance is configured with a single ip:port pair, while a cluster can be configured with multiple ip:port pairs.

1.3 Loading logic

When the Spring Boot microservice starts, Spring Boot reads the configuration in application.yml, binds it to the KafkaProperties class from spring-boot-autoconfigure-2.6.3.jar, and injects the values into the corresponding properties. Once the microservice is up, the KafkaProperties configuration can be obtained from the Spring environment.
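
As a quick sanity check, KafkaProperties can be injected like any other bean. The sketch below (a hypothetical controller and path, not part of the original example) simply returns the bound bootstrap servers:

import java.util.List;

import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class KafkaPropertiesCheckController {
  private final KafkaProperties kafkaProperties;

  public KafkaPropertiesCheckController(KafkaProperties kafkaProperties) {
      this.kafkaProperties = kafkaProperties;
  }

  // Returns the list bound from spring.kafka.bootstrap-servers in application.yml
  @GetMapping("/kafka-properties/bootstrap-servers")
  public List<String> bootstrapServers() {
      return kafkaProperties.getBootstrapServers();
  }
}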

The spring-kafka framework then injects the KafkaProperties configuration into a KafkaAdmin bean.

From that configuration, KafkaAdminClient is used to create an AdminClient, and the AdminClient is then used to operate on Topics.
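
Roughly speaking, that wiring amounts to the sketch below. This is a simplification of what Spring Boot's KafkaAutoConfiguration does, not the actual auto-configuration code:

import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaAdmin;

@Configuration
public class KafkaAdminConfigSketch {
  // Builds a KafkaAdmin from the admin-related properties bound from application.yml
  @Bean
  public KafkaAdmin kafkaAdmin(KafkaProperties kafkaProperties) {
      return new KafkaAdmin(kafkaProperties.buildAdminProperties());
  }
}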

2. Use AdminClient to create a Kafka cluster Topic

Fully qualified name of AdminClient: org.apache.kafka.clients.admin.AdminClient

Although spring-kafka is integrated, operating Topics on the Kafka cluster mainly goes through the kafka-clients API.

(1) Sample code

import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Set;

import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.KafkaAdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaAdmin;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/hub/example/cluster/topic")
@Slf4j
public class OperateKafkaClusterTopicController {
  @Autowired
  private KafkaAdmin kafkaAdmin;
  private final String topicName = "hub-topic-city-info-001";
  @GetMapping("/f01_1")
  public Object f01_1() {
      //1. Get the Kafka cluster configuration from the KafkaAdmin bean
      Map<String, Object> configs = kafkaAdmin.getConfigurationProperties();
      //2. Create the AdminClient (try-with-resources closes it when done)
      try (AdminClient adminClient = KafkaAdminClient.create(configs)) {
          //3. List the Topics that already exist in the Kafka cluster
          Set<String> topicSet = adminClient.listTopics().names().get();
          log.info("Number of Topics already in the Kafka cluster: {}, list:", topicSet.size());
          topicSet.forEach(System.out::println);
          //4. Create the Topic in the Kafka cluster
          if (!topicSet.contains(topicName)) {
              log.info("Creating Topic: {}", topicName);
              // Topic name, number of partitions, replication factor
              NewTopic newTopic = new NewTopic(topicName, 1, (short) 1);
              Collection<NewTopic> newTopics = Collections.singletonList(newTopic);
              // Block until the creation has completed on the cluster
              adminClient.createTopics(newTopics).all().get();
              topicSet = adminClient.listTopics().names().get();
              log.info("After creation, number of Topics in the Kafka cluster: {}, list:", topicSet.size());
              topicSet.forEach(System.out::println);
          }
      } catch (Exception e) {
          log.error("Exception while creating Topic.", e);
      }
      return "Creation succeeded";
  }
}

(2) Code analysis

The KafkaAdmin bean injected by the spring-kafka framework is used here mainly to obtain the configuration.

To operate Topics on the Kafka cluster, first create an AdminClient, then use its API to create the Topic.

To create a topic, you only need to specify the topic name, the number of partitions, and the replication factor.
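
As an aside, spring-kafka can also create Topics declaratively: on application startup, KafkaAdmin creates any beans of type NewTopic that it finds in the context. A minimal sketch, reusing the same Topic settings as the example above:

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class TopicDeclarationSketch {
  // KafkaAdmin creates this Topic on startup if it does not already exist
  @Bean
  public NewTopic cityInfoTopic() {
      return TopicBuilder.name("hub-topic-city-info-001")
              .partitions(1)
              .replicas(1)
              .build();
  }
}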

3. Use AdminClient to delete a Kafka cluster Topic

Fully qualified name of AdminClient: org.apache.kafka.clients.admin.AdminClient

Although spring-kafka is integrated, operating Topics on the Kafka cluster mainly goes through the kafka-clients API.

(1) Sample code

import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Set;

import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.DeleteTopicsResult;
import org.apache.kafka.clients.admin.KafkaAdminClient;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaAdmin;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/hub/example/cluster/topic")
@Slf4j
public class OperateKafkaClusterTopicController {
  @Autowired
  private KafkaAdmin kafkaAdmin;
  private final String topicName = "hub-topic-city-info-001";
  @GetMapping("/f01_2")
  public Object f01_2() {
      //1. Get the Kafka cluster configuration from the KafkaAdmin bean
      Map<String, Object> configs = kafkaAdmin.getConfigurationProperties();
      //2. Create the AdminClient (try-with-resources closes it when done)
      try (AdminClient adminClient = KafkaAdminClient.create(configs)) {
          //3. List the Topics that already exist in the Kafka cluster
          Set<String> topicSet = adminClient.listTopics().names().get();
          log.info("Number of Topics already in the Kafka cluster: {}, list:", topicSet.size());
          topicSet.forEach(System.out::println);
          //4. Delete the Topic from the Kafka cluster
          if (topicSet.contains(topicName)) {
              log.info("Deleting Topic: {}", topicName);
              Collection<String> topics = Collections.singletonList(topicName);
              DeleteTopicsResult deleteTopicsResult = adminClient.deleteTopics(topics);
              // Block until the deletion has completed on the cluster
              deleteTopicsResult.all().get();
              topicSet = adminClient.listTopics().names().get();
              log.info("After deletion, number of Topics in the Kafka cluster: {}, list:", topicSet.size());
              topicSet.forEach(System.out::println);
          }
      } catch (Exception e) {
          log.error("Exception while deleting Topic.", e);
      }
      return "Deletion succeeded";
  }
}

(2) Code analysis

The KafkaAdmin bean injected by the spring-kafka framework is used here mainly to obtain the configuration.

To operate Topics on the Kafka cluster, first create an AdminClient, then use its API to delete the Topic.

To delete a Topic, you only need to specify the Topic name.
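
Since deleteTopics returns futures, failures only surface when the result is awaited with get(). A minimal sketch of unwrapping the cause (the helper name is illustrative and assumes an AdminClient created as above):

import java.util.Collections;
import java.util.concurrent.ExecutionException;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.errors.UnknownTopicOrPartitionException;

public class DeleteTopicSafelySketch {
  // Deletes a Topic and distinguishes "topic does not exist" from other failures
  public static void deleteTopicSafely(AdminClient adminClient, String topicName)
          throws InterruptedException, ExecutionException {
      try {
          adminClient.deleteTopics(Collections.singletonList(topicName)).all().get();
      } catch (ExecutionException e) {
          if (e.getCause() instanceof UnknownTopicOrPartitionException) {
              // The Topic was already gone; treat it as deleted
              return;
          }
          throw e;
      }
  }
}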

4. Test

Create request URL: http://127.0.0.1:18208/hub-208-kafka/hub/example/cluster/topic/f01_1

Delete request URL: http://127.0.0.1:18208/hub-208-kafka/hub/example/cluster/topic/f01_2

Above, thanks.

June 18, 2023
