## Kafka version
- These notes are based on Kafka 0.10.0.1.
## Common commands

### Check the Kafka version
- Inspect the installed Kafka jar name to determine the version:

```shell
# find / -name \*kafka_\* | head -1 | grep -o '\kafka[^\n]*'
```
### Topics
- Create a topic named testTopic with 5 partitions and a replication factor of 1:

```shell
# bin/kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 5 --topic testTopic
```
- Check a topic's largest (or smallest) offsets; for `--time`, -1 means the largest offset and -2 the smallest:

```shell
# bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic topic-A --time -1
topic-A:2:60
topic-A:4:60
topic-A:1:60
topic-A:3:60
topic-A:0:60
```
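Each output line has the form `topic:partition:offset`. A small Python sketch (a standalone illustration, not part of Kafka's tooling) totals the per-partition offsets; with `--time -1` these are the log-end offsets, so if the earliest offsets (`--time -2`) are all 0, the sum is the total number of messages ever written to the topic:

```python
# Sum per-partition offsets from GetOffsetShell output lines.
def total_offsets(lines):
    total = 0
    for line in lines:
        # rsplit tolerates topic names that themselves contain ':'
        topic, partition, offset = line.rsplit(":", 2)
        total += int(offset)
    return total

output = [
    "topic-A:2:60",
    "topic-A:4:60",
    "topic-A:1:60",
    "topic-A:3:60",
    "topic-A:0:60",
]
print(total_offsets(output))  # 300
```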
- List all topics in the cluster:

```shell
# bin/kafka-topics.sh --zookeeper zookeeper:2181 --list
```
- Show topic details:

```shell
# bin/kafka-topics.sh --describe --zookeeper zookeeper:2181 --topic testTopic
Topic:testTopic  PartitionCount:5  ReplicationFactor:1  Configs:
  Topic: testTopic  Partition: 0  Leader: 1  Replicas: 1  Isr: 1
  Topic: testTopic  Partition: 1  Leader: 1  Replicas: 1  Isr: 1
  Topic: testTopic  Partition: 2  Leader: 1  Replicas: 1  Isr: 1
  Topic: testTopic  Partition: 3  Leader: 1  Replicas: 1  Isr: 1
  Topic: testTopic  Partition: 4  Leader: 1  Replicas: 1  Isr: 1
```
- Alter topic settings. The partition count can only be increased, never decreased; to reduce it you must delete the whole topic and recreate it:

```shell
# bin/kafka-topics.sh --zookeeper zookeeper:2181 --partitions 6 --topic testTopic --alter
```
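Note that increasing the partition count also changes which partition keyed messages land in, since the default partitioner takes the key's hash modulo the number of partitions. A minimal sketch of that effect, using plain integers as stand-ins for key hash values (Kafka's default partitioner actually uses a murmur2 hash of the key):

```python
# Integers 0..29 stand in for key hash values; Kafka's default
# partitioner computes hash(key) % num_partitions the same way.
def partition_for(key_hash, num_partitions):
    return key_hash % num_partitions

hashes = range(30)
moved = sum(1 for h in hashes
            if partition_for(h, 5) != partition_for(h, 6))
print(moved)  # 25 of the 30 keys map to a different partition after 5 -> 6
```

So after growing a topic, messages with the same key may no longer go to the partition where their earlier messages live, which matters if consumers rely on per-key ordering.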
- Delete a topic. The broker must have delete.topic.enable=true, otherwise the delete request is ignored. Deleting a topic discards all of its data:

```shell
# bin/kafka-topics.sh --zookeeper zookeeper:2181 --delete --topic topicAgroup
```
### Producing
- Produce messages from the console:

```shell
# bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
```
### Consuming
- Consume messages from the console:

```shell
# bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --new-consumer --from-beginning --consumer.config config/consumer.properties
# bin/kafka-simple-consumer-shell.sh --broker-list localhost:9092 --topic test --partition 0 --offset 1234 --max-messages 10
```
- Show a consumer group's offset information. CURRENT-OFFSET is the most recently committed consumer offset, LOG-END-OFFSET is the offset of the latest message written to the cluster, and LOG-END-OFFSET minus CURRENT-OFFSET gives the number of messages backed up (the LAG column).
  1. When the group is running:

  ```shell
  # bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --new-consumer --describe --group topicAGroup
  GROUP        TOPIC    PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  OWNER
  topicAGroup  topic-A  0          20              60              40   consumer-1_/172.17.0.1
  topicAGroup  topic-A  1          20              60              40   consumer-1_/172.17.0.1
  topicAGroup  topic-A  2          20              60              40   consumer-1_/172.17.0.1
  topicAGroup  topic-A  3          20              60              40   consumer-1_/172.17.0.1
  topicAGroup  topic-A  4          20              60              40   consumer-1_/172.17.0.1
  ```

  2. When the group is not running:

  ```shell
  # bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --new-consumer --describe --group topicAGroup
  Consumer group `topicAGroup` does not exist or is rebalancing.
  ```
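The LAG column is just per-partition arithmetic, and summing it across partitions gives the group's total backlog. A quick sketch of that calculation using the numbers from the describe output above:

```python
# (current_offset, log_end_offset) per partition, as in the
# describe output: all five partitions at 20 committed of 60 written.
partitions = [(20, 60)] * 5
lags = [end - current for current, end in partitions]
total_backlog = sum(lags)
print(lags, total_backlog)  # [40, 40, 40, 40, 40] 200
```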
- List consumer groups:

```shell
# bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 --list
topicAGroup
```
- Read the __consumer_offsets topic. Every consumer's offsets are written as messages into this internal topic; to decode them you need the kafka.coordinator.GroupMetadataManager$OffsetsMessageFormatter formatter. The partition that holds a group's offsets is determined by Math.abs(groupId.hashCode()) % numPartitions, where numPartitions defaults to 50. The groupId examined here is testgroup, and Math.abs("testgroup".hashCode()) % 50 = 27, so its offsets live in partition 27. The output includes the offset the consumer has committed, here 100:

```shell
# bin/kafka-simple-consumer-shell.sh --topic __consumer_offsets --partition 27 --broker-list localhost:9092 --formatter "kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter"
[testgroup,testTopic,0]::[OffsetMetadata[100,NO_METADATA],CommitTime 1535613323875,ExpirationTime 1535699723875]
[testgroup,testTopic,0]::[OffsetMetadata[100,NO_METADATA],CommitTime 1535613324884,ExpirationTime 1535699724884]
[testgroup,testTopic,0]::[OffsetMetadata[100,NO_METADATA],CommitTime 1535613325888,ExpirationTime 1535699725888]
[testgroup,testTopic,0]::[OffsetMetadata[100,NO_METADATA],CommitTime 1535613326893,ExpirationTime 1535699726893]
[testgroup,testTopic,0]::[OffsetMetadata[100,NO_METADATA],CommitTime 1535613327903,ExpirationTime 1535699727903]
[testgroup,testTopic,0]::[OffsetMetadata[100,NO_METADATA],CommitTime 1535613328912,ExpirationTime 1535699728912]
[testgroup,testTopic,0]::[OffsetMetadata[100,NO_METADATA],CommitTime 1535613329920,ExpirationTime 1535699729920]
....
```
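The partition lookup can be reproduced without a JVM. This Python sketch reimplements Java's String.hashCode (including its signed 32-bit overflow) to locate the __consumer_offsets partition for a group id:

```python
def java_string_hashcode(s):
    """Java's String.hashCode: h = 31*h + c per char, wrapped to 32 bits."""
    h = 0
    for c in s:
        h = (31 * h + ord(c)) & 0xFFFFFFFF
    # Reinterpret the 32-bit value as signed, as Java does
    return h - 0x100000000 if h >= 0x80000000 else h

def offsets_partition(group_id, num_partitions=50):
    # Mirrors Math.abs(groupId.hashCode()) % numPartitions
    return abs(java_string_hashcode(group_id)) % num_partitions

print(offsets_partition("testgroup"))  # 27
```

This is handy for deciding which partition to pass to kafka-simple-consumer-shell.sh when inspecting a different group.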
### Cluster
- Describe all topics in the cluster:

```shell
# bin/kafka-topics.sh --describe --zookeeper zookeeper:2181
Topic:__consumer_offsets  PartitionCount:50  ReplicationFactor:1  Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
  Topic: __consumer_offsets  Partition: 0  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 1  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 2  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 3  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 4  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 5  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 6  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 7  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 8  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 9  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 10  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 11  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 12  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 13  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 14  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 15  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 16  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 17  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 18  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 19  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 20  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 21  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 22  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 23  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 24  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 25  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 26  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 27  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 28  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 29  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 30  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 31  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 32  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 33  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 34  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 35  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 36  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 37  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 38  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 39  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 40  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 41  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 42  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 43  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 44  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 45  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 46  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 47  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 48  Leader: 1  Replicas: 1  Isr: 1
  Topic: __consumer_offsets  Partition: 49  Leader: 1  Replicas: 1  Isr: 1
Topic:testTopic  PartitionCount:5  ReplicationFactor:1  Configs:
  Topic: testTopic  Partition: 0  Leader: 1  Replicas: 1  Isr: 1
  Topic: testTopic  Partition: 1  Leader: 1  Replicas: 1  Isr: 1
  Topic: testTopic  Partition: 2  Leader: 1  Replicas: 1  Isr: 1
  Topic: testTopic  Partition: 3  Leader: 1  Replicas: 1  Isr: 1
  Topic: testTopic  Partition: 4  Leader: 1  Replicas: 1  Isr: 1
Topic:testTopic3  PartitionCount:1  ReplicationFactor:1  Configs:
  Topic: testTopic3  Partition: 0  Leader: 1  Replicas: 1  Isr: 1
Topic:topic-A  PartitionCount:5  ReplicationFactor:1  Configs:
  Topic: topic-A  Partition: 0  Leader: 1  Replicas: 1  Isr: 1
  Topic: topic-A  Partition: 1  Leader: 1  Replicas: 1  Isr: 1
  Topic: topic-A  Partition: 2  Leader: 1  Replicas: 1  Isr: 1
  Topic: topic-A  Partition: 3  Leader: 1  Replicas: 1  Isr: 1
  Topic: topic-A  Partition: 4  Leader: 1  Replicas: 1  Isr: 1
Topic:topicAGroup  PartitionCount:1  ReplicationFactor:1  Configs:
  Topic: topicAGroup  Partition: 0  Leader: 1  Replicas: 1  Isr: 1
Topic:topicAgroup  PartitionCount:1  ReplicationFactor:1  Configs:
  Topic: topicAgroup  Partition: 0  Leader: 1  Replicas: 1  Isr: 1
```
### Benchmarking
- Run the producer performance test:

```shell
# bin/kafka-producer-perf-test.sh --topic test --num-records 100 --record-size 1 --throughput 100 --producer-props bootstrap.servers=localhost:9092
```