Kafka command reference

Kafka version: 2.3.0
Environment: CentOS 7

Commands

start

# Start ZooKeeper
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
# Start Kafka
bin/kafka-server-start.sh -daemon config/server.properties

stop

bin/kafka-server-stop.sh

consumer

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic testTopic
  • --from-beginning: consume from the beginning of the log instead of only new messages
  • --group: specify the consumer group to join

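The two options can be combined. A sketch that only assembles and prints the command line (the group name my_group is hypothetical; a running broker is needed to actually consume):

```shell
# Hypothetical consumer group "my_group"; this sketch only builds and
# echoes the command so it is self-contained without a broker.
cmd="bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
--topic testTopic --from-beginning --group my_group"
echo "$cmd"
```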
View consumption details of a consumer group

bin/kafka-consumer-groups.sh \
--bootstrap-server 10.128.0.53:9092,10.128.0.54:9092,10.128.0.55:9092 \
--describe \
--group group_save_clickhouse

GROUP                 TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                     HOST            CLIENT-ID
group_save_clickhouse VehicleLoc      0          0               0               0               consumer-2-43bca1c6-2696-4894-9ba0-0a1994e9df8b /10.128.0.56    consumer-2
group_save_clickhouse VehicleLoc      1          0               0               0               consumer-2-43bca1c6-2696-4894-9ba0-0a1994e9df8b /10.128.0.56    consumer-2
group_save_clickhouse VehicleLoc      2          68473           68473           0               consumer-2-43bca1c6-2696-4894-9ba0-0a1994e9df8b /10.128.0.56    consumer-2
group_save_clickhouse originVehInfo   0          227             227             0               consumer-1-3810c5e9-281d-410c-bed7-461b33b5de13 /10.128.0.56    consumer-1
group_save_clickhouse originVehInfo   1          0               0               0               consumer-1-3810c5e9-281d-410c-bed7-461b33b5de13 /10.128.0.56    consumer-1
group_save_clickhouse originVehInfo   2          78335           78335           0               consumer-1-3810c5e9-281d-410c-bed7-461b33b5de13 /10.128.0.56    consumer-1
  • CURRENT-OFFSET is the latest offset this consumer group has committed for the partition, i.e. how far it has consumed.
  • LOG-END-OFFSET is the offset of the latest message produced to the partition.
  • LAG = LOG-END-OFFSET − CURRENT-OFFSET, the number of messages not yet consumed.
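Since LAG is just LOG-END-OFFSET minus CURRENT-OFFSET, it can be recomputed from any --describe row. A sketch using awk on a sample row (the offset values 68400/68473 are made up for illustration; columns are GROUP, TOPIC, PARTITION, CURRENT-OFFSET, LOG-END-OFFSET, LAG):

```shell
# Recompute LAG from the CURRENT-OFFSET ($4) and LOG-END-OFFSET ($5)
# columns of a sample kafka-consumer-groups.sh --describe row.
echo "group_save_clickhouse VehicleLoc 2 68400 68473 73" |
  awk '{ print $2 "-" $3 ": lag = " ($5 - $4) }'
# → VehicleLoc-2: lag = 73
```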

Reset the consumption offset to the latest

bin/kafka-consumer-groups.sh --bootstrap-server \
10.128.0.53:9092,10.128.0.54:9092,10.128.0.55:9092 --group group_originVeh_prod \
--topic originVehInfo --reset-offsets --to-latest --execute

GROUP                          TOPIC                          PARTITION  NEW-OFFSET     
group_originVeh_prod           originVehInfo                  0          4163505        
group_originVeh_prod           originVehInfo                  2          9438050        
group_originVeh_prod           originVehInfo                  1          3995722
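--to-latest is only one of the reset targets; kafka-consumer-groups.sh also accepts --to-earliest, --to-offset, and --shift-by, and without --execute it runs as a dry run that only prints the offsets it would set. A sketch that assembles such a dry-run command (broker and group taken from the example above; only echoed here, so no cluster is needed):

```shell
# Dry run: prints the offsets that WOULD be set without committing them.
# Swap --dry-run for --execute to actually commit the reset.
cmd="bin/kafka-consumer-groups.sh \
--bootstrap-server 10.128.0.53:9092 \
--group group_originVeh_prod --topic originVehInfo \
--reset-offsets --to-earliest --dry-run"
echo "$cmd"
```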

producer

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic testTopic

Write messages from a file to Kafka

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic testTopic < test.txt

topic

Query the details of a topic

bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic testTopic

Topic:testTopic    PartitionCount:1    ReplicationFactor:3    Configs:segment.bytes=1073741824   
    Topic: testTopic    Partition: 0    Leader: 1    Replicas: 2,1,0    Isr: 1,0,2  
  • The first line of the result is a summary of all partitions; each following line describes one partition. With a single partition, only one detail line is shown.
  • Partition: the partition number
  • Replicas: the broker IDs that hold a replica of this partition
  • Isr: the broker IDs of in-sync replicas (alive and caught up with the leader)
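Because Isr lists only the in-sync replicas, a partition whose Isr is shorter than its Replicas list is under-replicated. A sketch that checks this on sample --describe rows (the second row is hypothetical, with broker 0 dropped from the ISR):

```shell
# Flag under-replicated partitions: compare the length of the Replicas
# list ($8) with the Isr list ($10) in kafka-topics.sh --describe rows.
printf '%s\n' \
  "Topic: testTopic Partition: 0 Leader: 1 Replicas: 2,1,0 Isr: 1,0,2" \
  "Topic: testTopic Partition: 1 Leader: 2 Replicas: 0,2,1 Isr: 2,1" |
awk '{
  n_rep = split($8, r, ",")    # number of assigned replicas
  n_isr = split($10, s, ",")   # number of in-sync replicas
  if (n_isr < n_rep) print "partition " $4 " under-replicated"
}'
# → partition 1 under-replicated
```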

Query topic list

bin/kafka-topics.sh --list --zookeeper localhost:2181

Delete topic

bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic testTopic

Create a topic with the given number of partitions and replicas (the replication factor cannot exceed the number of brokers in the cluster)

bin/kafka-topics.sh --create --zookeeper localhost:2181 --topic testTopic --partitions 2 --replication-factor 2
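The constraint above can be checked before issuing the command. A minimal sketch with the broker count hard-coded to 3 (in practice you would count the brokers registered under /brokers/ids in ZooKeeper):

```shell
# Refuse to build the create command if the requested replication
# factor exceeds the broker count (hard-coded to 3 for this sketch).
brokers=3
replication=2
if [ "$replication" -gt "$brokers" ]; then
  echo "error: replication factor $replication exceeds $brokers brokers"
else
  echo "ok: bin/kafka-topics.sh --create --zookeeper localhost:2181 \
--topic testTopic --partitions 2 --replication-factor $replication"
fi
```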

View the messages stored in a topic's log segment file

bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files 00000000000000000038.log --print-data-log

How to locate the log file:

  • Find the log directory from the log.dirs path configured in server.properties.
  • Enter the "<topic name>-<partition number>" directory under it.
  • The partition's log is split into many segment files (rolled by size or time); each file name is the offset of the first message in that segment, so pick the segment covering the offsets you want.
  • Dump it with the command above.
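The layout those steps describe can be mimicked with a throwaway directory: each partition gets a directory named <topic>-<partition>, and each segment file is named after its base offset (so 00000000000000000038.log starts at offset 38). A self-contained sketch:

```shell
# Mimic Kafka's on-disk layout in a temp dir: <log.dirs>/<topic>-<partition>/
# with segment files named after the base offset of their first message.
dir=$(mktemp -d)
mkdir -p "$dir/testTopic-0"
touch "$dir/testTopic-0/00000000000000000000.log" \
      "$dir/testTopic-0/00000000000000000038.log"
# List the segments; to dump a real one you would run:
#   bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
#     --files 00000000000000000038.log --print-data-log
ls "$dir/testTopic-0"
rm -rf "$dir"
```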


Origin blog.csdn.net/weixin_43932590/article/details/119246469