Kafka command notes

Kafka version: 2.3.0
Environment: CentOS 7

Commands

start

# Start ZooKeeper
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
# Start Kafka
bin/kafka-server-start.sh -daemon config/server.properties
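
To quickly confirm the broker came up, one option (a minimal sketch, assuming the default listener on localhost:9092) is to list topics against it:

bin/kafka-topics.sh --list --bootstrap-server localhost:9092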

stop

bin/kafka-server-stop.sh

consumer

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic testTopic
  • --from-beginning: consume from the earliest offset instead of the latest
  • --group: specify the consumer group (see the combined example below)
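
For example, to replay a topic from the beginning under a named group (a minimal sketch; the group name my_group is a placeholder):

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic testTopic --from-beginning --group my_group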

Describe a consumer group (view consumption details)

bin/kafka-consumer-groups.sh \
--bootstrap-server 10.128.0.53:9092,10.128.0.54:9092,10.128.0.55:9092 \
--describe \
--group group_save_clickhouse

GROUP                 TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                     HOST            CLIENT-ID
group_save_clickhouse VehicleLoc      0          0               0               0               consumer-2-43bca1c6-2696-4894-9ba0-0a1994e9df8b /10.128.0.56    consumer-2
group_save_clickhouse VehicleLoc      1          0               0               0               consumer-2-43bca1c6-2696-4894-9ba0-0a1994e9df8b /10.128.0.56    consumer-2
group_save_clickhouse VehicleLoc      2          68473           68473           0               consumer-2-43bca1c6-2696-4894-9ba0-0a1994e9df8b /10.128.0.56    consumer-2
group_save_clickhouse originVehInfo   0          227             227             0               consumer-1-3810c5e9-281d-410c-bed7-461b33b5de13 /10.128.0.56    consumer-1
group_save_clickhouse originVehInfo   1          0               0               0               consumer-1-3810c5e9-281d-410c-bed7-461b33b5de13 /10.128.0.56    consumer-1
group_save_clickhouse originVehInfo   2          78335           78335           0               consumer-1-3810c5e9-281d-410c-bed7-461b33b5de13 /10.128.0.56    consumer-1
  • CURRENT-OFFSET: the group's committed offset for the partition, i.e. the highest offset it has consumed so far
  • LOG-END-OFFSET: the offset of the latest message in the partition
  • LAG: the difference between the two, i.e. how far the consumer is behind (worked example below)
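
As a worked example with the output above: for partition 2 of VehicleLoc, LAG = LOG-END-OFFSET - CURRENT-OFFSET = 68473 - 68473 = 0, so the group is fully caught up on that partition.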

Reset consumer offsets to the latest position

bin/kafka-consumer-groups.sh --bootstrap-server \
10.128.0.53:9092,10.128.0.54:9092,10.128.0.55:9092 --group group_originVeh_prod \
--topic originVehInfo --reset-offsets --to-latest --execute

GROUP                          TOPIC                          PARTITION  NEW-OFFSET     
group_originVeh_prod           originVehInfo                  0          4163505        
group_originVeh_prod           originVehInfo                  2          9438050        
group_originVeh_prod           originVehInfo                  1          3995722
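
The same tool supports other reset targets; a minimal sketch of common variants (broker address, datetime and shift values are placeholders, and the group must have no active members when the reset is executed):

# Preview the change without applying it
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group group_originVeh_prod --topic originVehInfo --reset-offsets --to-earliest --dry-run
# Rewind to a point in time
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group group_originVeh_prod --topic originVehInfo --reset-offsets --to-datetime 2021-08-01T00:00:00.000 --execute
# Shift back by 100 messages on every partition
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group group_originVeh_prod --topic originVehInfo --reset-offsets --shift-by -100 --execute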

producer

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic testTopic

Write messages from a file to Kafka

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic testTopic < test.txt
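
To produce keyed messages from the console (a sketch; parse.key and key.separator are console-producer properties, and ':' is an arbitrary separator choice):

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic testTopic --property parse.key=true --property key.separator=:

Each input line is then read as key:value, split at the separator.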

topic

Describe a topic

bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic testTopic

Topic:testTopic    PartitionCount:1    ReplicationFactor:3    Configs:segment.bytes=1073741824   
    Topic: testTopic    Partition: 0    Leader: 1    Replicas: 2,1,0    Isr: 1,0,2  
  • The first line is a summary of all partitions; each following line describes one partition (with a single partition there is only one such line)
  • Partition: the partition number
  • Replicas: the broker IDs that hold replicas of the partition
  • Isr: the broker IDs of the in-sync (alive) replicas

List topics

bin/kafka-topics.sh --list --zookeeper localhost:2181

Delete a topic

bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic testTopic

Create a topic; --partitions sets the number of partitions and --replication-factor sets the number of replicas (which must not exceed the number of brokers in the cluster)

bin/kafka-topics.sh --create --zookeeper localhost:2181 --topic testTopic --partitions 2 --replication-factor 2
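
Topic-level settings can be passed at creation time with --config (a sketch; the topic name testTopic2 and the 7-day retention.ms value are placeholders):

bin/kafka-topics.sh --create --zookeeper localhost:2181 --topic testTopic2 --partitions 2 --replication-factor 2 --config retention.ms=604800000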

Dump the on-disk log of messages written to a topic

bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files 00000000000000000038.log --print-data-log

How to locate the log segment file:

  • First locate the log directory configured as log.dirs in server.properties
  • Enter the directory named "topicName-partitionNumber"
  • The log is split into segment files over time; pick the segment covering the time range you are interested in
  • Inspect it with the command above (a fuller sketch follows below)
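
Putting it together (a sketch; /tmp/kafka-logs and the segment file name are placeholders for your own log.dirs path and segment):

bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /tmp/kafka-logs/testTopic-0/00000000000000000000.log --print-data-log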

Reposted from blog.csdn.net/weixin_43932590/article/details/119246469