Kafka: Common Commands


Topic creation, viewing, modification, and deletion

Automatic creation

  • Topics can be created automatically via the auto.create.topics.enable property. By default, this property is true.
  • Therefore, when a producer application writes data to a topic that does not exist in the Kafka cluster, that topic is created automatically with the default number of partitions and replicas.
  • The default partition count is controlled by the property num.partitions = 2 in the $KAFKA_HOME/config/server.properties file.
  • The default replication factor is controlled by the property default.replication.factor = 1 in the $KAFKA_HOME/config/server.properties file.
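Taken together, the relevant lines in $KAFKA_HOME/config/server.properties would look like this (values as described above; adjust them to your own deployment):

```properties
# Allow producers to trigger topic auto-creation (default: true)
auto.create.topics.enable=true
# Partition count used for auto-created topics
num.partitions=2
# Replication factor used for auto-created topics
default.replication.factor=1
```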

Manual creation

  • Use /export/servers/kafka/bin/kafka-topics.sh

1. Create a topic named order with a replication factor of 3 and 6 partitions

kafka-topics.sh --zookeeper node01:2181 --create --topic order --replication-factor 3 --partitions 6

2. View the current list of topics

kafka-topics.sh  --zookeeper node01:2181 --list
  • If you are interested, you can also inspect the data on disk and in ZooKeeper:
Look inside Kafka's message data storage directory /export/data/kafka/kafka-logs:
cd /export/data/kafka/kafka-logs
ll

You can also connect to ZooKeeper with the zkCli.sh script to inspect topic partition and metadata information:
/export/servers/zookeeper/bin/zkCli.sh 
ls /brokers/topics/order/partitions
get /brokers/topics/order
quit

3. View the details of the specified topic

kafka-topics.sh --zookeeper node01:2181 --describe  --topic order


4. Modify a topic (only config parameters here; changing partitions and replicas is covered separately below)

Create a new topic with 1 partition and 1 replica
kafka-topics.sh  --zookeeper node01:2181 --create --topic user --replication-factor 1 --partitions 1  --config max.message.bytes=102400
View the overridden config parameters
kafka-topics.sh --zookeeper node01:2181 --describe --topic user --topics-with-overrides  
Change the maximum message size
kafka-topics.sh  --zookeeper node01:2181  --alter --topic user --config max.message.bytes=204800 
View again
kafka-topics.sh --zookeeper node01:2181 --describe --topic user --topics-with-overrides  

5. Delete a topic

Create a topic
kafka-topics.sh  --zookeeper node01:2181 --create --topic test_delete --replication-factor 1 --partitions 1  
Delete the topic
kafka-topics.sh  --zookeeper node01:2181  --delete --topic test_delete 
List topics
kafka-topics.sh  --zookeeper node01:2181  --list
Note: for the topic to actually be deleted, the following must be configured in server.properties:

delete.topic.enable=true

6. Writing a command across multiple lines

kafka-topics.sh  --zookeeper node01:2181  \
--list
  • For example:
kafka-topics.sh  --zookeeper node01:2181 \
--create \
--topic test_delete \
--replication-factor 1 \
--partitions 1  

Topic partition and replica management

Topic partitions

  • Topic partitions exist to improve concurrent reads and writes.
  • The number of partitions of a topic can only be increased, never decreased; attempting to decrease it results in an error.
  • This is because each partition stores its messages in segment files, where each message is uniquely identified by its offset. Shrinking the partition count would mean merging files, and the offsets would collide.
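To see why increasing partitions still matters for keyed data, consider how a key is mapped to a partition. The sketch below is a simplified illustration only: Kafka's real default partitioner uses murmur2 hashing, not this toy byte-sum hash.

```python
# Simplified sketch of keyed partition assignment.
# NOTE: Kafka's real default partitioner uses murmur2 hashing;
# this toy byte-sum hash only illustrates the idea.

def assign_partition(key: bytes, num_partitions: int) -> int:
    # Hash the key deterministically and map it onto one partition.
    return sum(key) % num_partitions

# The same key always lands in the same partition...
assert assign_partition(b"order-42", 6) == assign_partition(b"order-42", 6)

# ...but changing the partition count can move a key to a different
# partition, which is why increasing partitions reshuffles keyed data.
before = assign_partition(b"order-42", 3)
after = assign_partition(b"order-42", 6)
print(before, after)
```

This is also why adding partitions to a topic with keyed messages breaks the ordering guarantee per key: old messages stay where they were, while new messages for the same key may go elsewhere.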

1. Demonstrate adding partitions

View the topic
kafka-topics.sh  --zookeeper node01:2181 --describe --topic user
Change the partition count (increase only)
kafka-topics.sh  --zookeeper node01:2181 --alter --topic user --partitions 3 
View again
kafka-topics.sh  --zookeeper node01:2181 --describe --topic user


Replica management (for reference only)

  • Generally, the number of partitions and replicas is specified when creating a topic. The partition count can only be increased, never decreased.
  • The replica count can be changed, but it is more troublesome: it involves rebalancing leaders and followers, so in practice it is rarely modified.
  • This means that when designing a topic you should plan the partition count (estimate data volume and concurrency) and the replica count (2 to 3 is usually sufficient; more hurts performance) up front.
Create a topic
kafka-topics.sh --zookeeper node01:2181 --create --topic user2 --replication-factor 1 --partitions  6  --config max.message.bytes=102400
View it
kafka-topics.sh --describe --zookeeper node01:2181 --topic user2
  • Modify the replica assignment through the following JSON file
    • vim user2_replicas.json
{
    "version": 1,
    "partitions": [
        { "topic": "user2", "partition": 0, "replicas": [2, 0, 1] },
        { "topic": "user2", "partition": 1, "replicas": [0, 1, 2] },
        { "topic": "user2", "partition": 2, "replicas": [1, 2, 0] },
        { "topic": "user2", "partition": 3, "replicas": [2, 1, 0] },
        { "topic": "user2", "partition": 4, "replicas": [0, 2, 1] },
        { "topic": "user2", "partition": 5, "replicas": [1, 0, 2] }
    ]
}
Run the reassignment script to change the replica count
kafka-reassign-partitions.sh --zookeeper node01:2181 --reassignment-json-file user2_replicas.json --execute
Check the execution result
kafka-reassign-partitions.sh --zookeeper node01:2181 --reassignment-json-file user2_replicas.json --verify
View the resulting partition assignment
kafka-topics.sh --describe --zookeeper node01:2181 --topic user2
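The replica lists in the JSON above rotate leaders and followers across the three brokers so no single broker leads every partition. A small sketch of generating such a round-robin plan (an illustration only, not the algorithm kafka-reassign-partitions.sh itself uses):

```python
import json

def round_robin_assignment(topic: str, num_partitions: int,
                           brokers: list, rf: int) -> dict:
    """Build a reassignment plan with replicas rotated across brokers."""
    partitions = []
    for p in range(num_partitions):
        # Rotate the broker list so each partition gets a different leader.
        replicas = [brokers[(p + i) % len(brokers)] for i in range(rf)]
        partitions.append({"topic": topic, "partition": p, "replicas": replicas})
    return {"version": 1, "partitions": partitions}

# Same shape as user2_replicas.json: 6 partitions over brokers 0, 1, 2.
plan = round_robin_assignment("user2", 6, [0, 1, 2], rf=3)
print(json.dumps(plan, indent=2))
```

The first broker in each replicas list is the preferred leader, so rotating the list spreads leadership evenly across the cluster.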

Console producer usage (for testing)

  • The /export/servers/kafka/bin/kafka-console-producer.sh script can be used as a console producer client
  • 1. Use the following command to send a message to the specified topic
kafka-console-producer.sh --broker-list node01:9092 --topic test_topic
  • Note: the test_topic topic will be created automatically. It is better to create it manually and specify the topic's partition and replica counts explicitly.

Console consumer usage (for testing)

  • In the Kafka system, consumer implementations are divided into new and old APIs.
    • New: in Kafka 0.10.0.x and later, the Kafka system by default stores the metadata (committed offsets) generated by consumer instances in an internal topic named __consumer_offsets.
      • Use --bootstrap-server
    • Old: in Kafka versions prior to 0.10.0.x, the Kafka system by default stored the metadata generated by consumer instances in the ZooKeeper cluster.
      • Use --zookeeper
  • 1. Use the console consumer to test receiving messages
  • Recommended:
kafka-console-consumer.sh --bootstrap-server node01:9092  --topic test_topic --from-beginning 
  • You can also use the following, but it is deprecated:
kafka-console-consumer.sh --zookeeper node01:2181  --topic test_topic --from-beginning 
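Conceptually, the new consumer tracks committed offsets keyed by (group, topic, partition). The in-memory model below is only an illustration of that bookkeeping, not Kafka's actual __consumer_offsets storage format:

```python
# Minimal model of consumer-group offset bookkeeping.
# Kafka actually stores commits as messages in __consumer_offsets;
# this dict only illustrates the (group, topic, partition) -> offset idea.

offsets = {}

def commit(group: str, topic: str, partition: int, offset: int) -> None:
    """Record the group's committed offset for one partition."""
    offsets[(group, topic, partition)] = offset

def fetch_position(group: str, topic: str, partition: int) -> int:
    """Where the group resumes; --from-beginning corresponds to
    starting at offset 0 when the group has no committed offset yet."""
    return offsets.get((group, topic, partition), 0)

commit("g1", "test_topic", 0, 42)
print(fetch_position("g1", "test_topic", 0))  # existing group resumes at 42
print(fetch_position("g2", "test_topic", 0))  # new group starts from the beginning
```

This is why two console consumers started with different group IDs can each read the full topic independently: their offsets are tracked separately per group.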


Origin blog.csdn.net/qq_46893497/article/details/114178735