Kafka command-line scripts: description and use in Java

1. Command line usage

1.1. topic command

1. To view the topic command parameters, run the script with no arguments (Windows is used as the example here).

bin\windows\kafka-topics.bat


2. Create a topic named first, with five partitions and one replica

bin\windows\kafka-topics.bat  --bootstrap-server localhost:9092 --create --partitions 5 --replication-factor 1 --topic first

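The same topic creation can be done from Java with the AdminClient API in kafka-clients (the dependency shown in section 2.1 is enough). A minimal sketch, assuming a broker at localhost:9092; the class name CreateTopicDemo is illustrative:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class CreateTopicDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Topic "first": 5 partitions, replication factor 1
            NewTopic first = new NewTopic("first", 5, (short) 1);
            admin.createTopics(Collections.singleton(first)).all().get();
        }
    }
}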
3. List all topics on the current server

bin\windows\kafka-topics.bat --list --bootstrap-server localhost:9092


4. View the details of the topic first

bin\windows\kafka-topics.bat --bootstrap-server localhost:9092 --describe --topic first

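Listing and describing topics also map directly onto AdminClient calls. A minimal sketch, under the same assumptions as above; AdminTopicsDemo is an illustrative name:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

import java.util.Collections;
import java.util.Properties;

public class AdminTopicsDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Equivalent of --list
            System.out.println(admin.listTopics().names().get());
            // Equivalent of --describe --topic first
            TopicDescription desc = admin.describeTopics(Collections.singleton("first"))
                    .all().get().get("first");
            System.out.println(desc);
        }
    }
}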
5. Modify the number of partitions (note: the number of partitions can only be increased, never reduced)

bin\windows\kafka-topics.bat --bootstrap-server localhost:9092 --alter --topic first --partitions 6


6. Delete the topic. On Windows this operation can run into file-permission problems; the details can be seen in the log of the command window where Kafka was started, and fixing the file permissions is enough. If this problem occurs, clear the contents of the two previously configured directories, data and kafka-logs, and restart.

bin\windows\kafka-topics.bat --bootstrap-server localhost:9092 --delete --topic first

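Steps 5 and 6 likewise have AdminClient equivalents. A minimal sketch, with the same assumptions as the earlier AdminClient examples; note that deleting the topic here is exactly the operation that can hit the Windows permission problem described in step 6:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

import java.util.Collections;
import java.util.Properties;

public class AlterDeleteTopicDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Equivalent of --alter --partitions 6 (the count can only grow)
            admin.createPartitions(
                    Collections.singletonMap("first", NewPartitions.increaseTo(6))).all().get();
            // Equivalent of --delete --topic first
            admin.deleteTopics(Collections.singleton("first")).all().get();
        }
    }
}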

1.2. Producer command-line operations

1. To view the producer command parameters, run the script with no arguments (Windows is used as the example here).

.\bin\windows\kafka-console-producer.bat


2. Send messages. Two messages are sent here: first hello, then world.

.\bin\windows\kafka-console-producer.bat --bootstrap-server localhost:9092 --topic first

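The console producer corresponds to the KafkaProducer API that section 2.1 uses later. A minimal sketch that sends the same two messages, assuming a broker at localhost:9092; ProducerDemo is an illustrative name:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class ProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // try-with-resources closes the producer, which also flushes pending sends
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("first", "hello"));
            producer.send(new ProducerRecord<>("first", "world"));
        }
    }
}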

1.3. Consumer command-line operations

1. To view the consumer command parameters, run the script with no arguments (Windows is used as the example here).

.\bin\windows\kafka-console-consumer.bat


2. Receive messages. Because the consumer was not yet running when we sent the earlier messages, that first batch of data is not received here, even though it is stored in the topic: by default the console consumer only reads messages produced after it starts.

.\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic first


3. Read all the data in the topic (including historical data). You can see that we obtain all the data from both before and after the consumer came online, six messages in total.

.\bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --from-beginning --topic first

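In Java, the --from-beginning flag corresponds to setting auto.offset.reset=earliest for a consumer group that has no committed offsets yet. A minimal sketch; ConsumerDemo and the group id demo-group are illustrative:

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ConsumerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");
        // Start from the earliest offset when this group has no committed offset,
        // i.e. the equivalent of the console consumer's --from-beginning flag
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("first"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }
}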

1.4. Script description

| Script | Description |
| --- | --- |
| connect-standalone.sh | Starts a single-node Kafka Connect component in standalone mode. |
| connect-distributed.sh | Starts the Kafka Connect component in multi-node distributed mode. |
| kafka-acls.sh | Manages Kafka ACLs, e.g. which users may access which topics. |
| kafka-delegation-tokens.sh | Manages delegation tokens. Delegation-token-based authentication is a lightweight mechanism that complements SASL authentication. |
| kafka-topics.sh | Manages topics. |
| kafka-console-producer.sh | Produces messages from the console. |
| kafka-console-consumer.sh | Consumes messages from the console. |
| kafka-producer-perf-test.sh | Runs producer performance tests. |
| kafka-consumer-perf-test.sh | Runs consumer performance tests. |
| kafka-delete-records.sh | Deletes messages from Kafka partitions. Rarely needed, since Kafka has its own automatic retention-based deletion. |
| kafka-dump-log.sh | Dumps the contents of Kafka message files, including message metadata and the message body. |
| kafka-log-dirs.sh | Queries the disk usage of each log path on each broker. |
| kafka-mirror-maker.sh | Mirrors data between Kafka clusters. |
| kafka-preferred-replica-election.sh | Performs preferred-replica leader election, switching the leader of the specified topics. |
| kafka-reassign-partitions.sh | Performs partition replica reassignment and replica log-directory migration. |
| kafka-run-class.sh | Runs any Kafka class that has a main method. |
| kafka-server-start.sh | Starts the broker process. |
| kafka-server-stop.sh | Stops the broker process. |
| kafka-streams-application-reset.sh | Resets the offsets of a Kafka Streams application so it can re-consume data. |
| kafka-verifiable-producer.sh | Tests and verifies producer functionality. |
| kafka-verifiable-consumer.sh | Tests and verifies consumer functionality. |
| trogdor.sh | Kafka's test framework, used to run various benchmark and load tests. |
| kafka-broker-api-versions.sh | Verifies compatibility between servers and clients across different Kafka versions. |

1.5. Shutting down Kafka

1. Be sure to shut down Kafka first and only then shut down ZooKeeper; otherwise the data may become corrupted.

If the data does become corrupted, the easiest fix is to clear the contents of the data and kafka-logs directories and restart.

2. Shut down both services:

.\bin\windows\kafka-server-stop.bat
.\bin\windows\zookeeper-server-stop.bat


1.6. Choosing the number of partitions and Kafka performance testing

1. The main tools are the two scripts kafka-producer-perf-test.bat and kafka-consumer-perf-test.bat; for details, refer to the article How Kafka chooses the number of partitions and Kafka performance testing. Example invocations are sketched below.
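For reference, invocations might look like the following (a sketch: the record count, record size, and unbounded throughput are arbitrary choices to adjust for your own test):

bin\windows\kafka-producer-perf-test.bat --topic first --num-records 100000 --record-size 100 --throughput -1 --producer-props bootstrap.servers=localhost:9092

bin\windows\kafka-consumer-perf-test.bat --bootstrap-server localhost:9092 --topic first --messages 100000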

2. Use in Java

2.1. Using the native client

1. Dependency

        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka-clients</artifactId>
            <version>3.4.0</version>
        </dependency>

2. Send and consume messages. The specific code is as follows:

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;

public class KafkaConfig {

    public static void main(String[] args) {
        // Declare the topic
        String topic = "first";
        // Create the consumer
        Properties consumerConfig = new Properties();
        consumerConfig.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "192.168.189.128:9092,192.168.189.128:9093,192.168.189.128:9094");
        consumerConfig.put(ConsumerConfig.GROUP_ID_CONFIG, "boot-kafka");
        consumerConfig.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        consumerConfig.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(consumerConfig);
        // Subscribe to the topic and poll for messages in a loop on a background thread
        kafkaConsumer.subscribe(Arrays.asList(topic));
        new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    ConsumerRecords<String, String> records = kafkaConsumer.poll(Duration.ofMillis(10000));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println(record.value());
                    }
                }
            }
        }).start();
        // Create the producer
        Properties producerConfig = new Properties();
        producerConfig.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "192.168.189.128:9092,192.168.189.128:9093,192.168.189.128:9094");
        producerConfig.put(ProducerConfig.CLIENT_ID_CONFIG, "boot-kafka-client");
        producerConfig.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        producerConfig.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(producerConfig);
        // Send a message to the topic
        producer.send(new ProducerRecord<>(topic, "hello," + System.currentTimeMillis()));
        // Flush so the record is actually transmitted before main() returns
        producer.flush();
    }
}

2.2. Using Spring Boot

1. Dependency

        <!-- Instead of the raw Kafka client, use the Spring integration; it is more convenient -->
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
            <!-- The version can be omitted; Spring Boot picks one for us. Override it if you have special requirements -->
            <!--            <version>3.0.2</version>-->
        </dependency>

2. Configuration file

server:
  port: 7280
  servlet:
    context-path: /thermal-emqx2kafka
  shutdown: graceful

spring:
  application:
    name: thermal-api-demonstration-tdengine
  lifecycle:
    timeout-per-shutdown-phase: 30s
  mvc:
    pathmatch:
      matching-strategy: ant_path_matcher  # otherwise Spring Boot 2.6+ and Swagger have problems; see https://blog.csdn.net/qq_41027259/article/details/125747298
  kafka:
    bootstrap-servers: 127.0.0.1:9092  # host:port of the Kafka brokers to connect to, e.g. 192.168.189.128:9092,192.168.189.128:9093,192.168.189.128:9094
    #properties.key-serializer: # additional client properties, common to both producers and consumers, e.g. org.apache.kafka.common.serialization.StringSerializer
    producer: # producer
      retries: 3 # number of retries
      #acks: 1 # ack level: how many partition replicas must hold the record before the broker acknowledges to the producer (0, 1, all/-1)
      #batch-size: 16384 # maximum amount of data sent in one batch
      #buffer-memory: 33554432 # producer-side buffer size
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer: # consumer
      group-id: test-consumer-group # default consumer group ID; can be viewed and modified in Kafka's config/consumer.properties
      #enable-auto-commit: true # whether to auto-commit offsets
      #auto-commit-interval: 100 # offset commit delay (how long after a message is received before its offset is committed)
      #auto-offset-reset: latest # earliest or latest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

3. Sending a message

package cn.jt.thermalemqx2kafka.kafka.controller;

import com.alibaba.fastjson.JSON;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import java.util.HashMap;
import java.util.Map;

/**
 * @author GXM
 * @version 1.0.0
 * @Description TODO
 * @createTime 2023年08月17日
 */
@Slf4j
@RestController
@RequestMapping("/test")
public class TestController {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @GetMapping("/mock")
    public String sendKafkaMessage() {
        Map<String, Object> data = new HashMap<>(2);
        data.put("id", 1);
        data.put("name", "gkj");
        kafkaTemplate.send("first", JSON.toJSONString(data));
        return "ok";
    }
}
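With the application running, a GET request to http://localhost:7280/thermal-emqx2kafka/test/mock (the port and context path configured above) triggers one send to the first topic, and the listener below should then log the JSON payload.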

4. Receiving messages

package cn.jt.thermalemqx2kafka.kafka.config;

import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Component;

/**
 * @author GXM
 * @version 1.0.0
 * @Description TODO
 * @createTime 2023年08月17日
 */
@Slf4j
@Component
public class KafkaListener {

    @org.springframework.kafka.annotation.KafkaListener(topics = "first")
    private void handler(String content) {
        log.info("consumer received: {} ", content);
    }
}
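If the listener needs more than the message payload, it can declare the full ConsumerRecord parameter instead of a String, which exposes the key, partition, and offset. A minimal sketch; KafkaRecordListener is an illustrative name, and the groupId matches the configuration above:

package cn.jt.thermalemqx2kafka.kafka.config;

import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.stereotype.Component;

@Slf4j
@Component
public class KafkaRecordListener {

    // Receiving the whole ConsumerRecord gives access to the key, partition,
    // and offset of each message, not just the String payload
    @org.springframework.kafka.annotation.KafkaListener(topics = "first", groupId = "test-consumer-group")
    private void handler(ConsumerRecord<String, String> record) {
        log.info("partition={}, offset={}, key={}, value={}",
                record.partition(), record.offset(), record.key(), record.value());
    }
}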
