【Kafka】Kafka Basic Concepts Notes

1. Two Modes

As a message queue, Kafka works in one of two modes:

  1. Point-to-point mode
  2. Publish/subscribe mode

1.1 Point-to-Point Mode

Characteristics:

  • The consumer actively pulls data, and a message is deleted from the queue once it has been received



1.2 Publish/Subscribe Mode

  • There can be multiple topics (e.g. page views, likes, favorites, comments)
  • Data is not deleted after consumers consume it
  • Consumers are independent of each other, and each of them can consume the data (see the sketch below)
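
As a minimal sketch of this behavior with Kafka's own console tools (the broker address node1:9092 and the topic name first follow the examples later in these notes; the group names g1 and g2 are made up for illustration), two consumers in different consumer groups each receive every message, and the messages stay on the broker:

# terminal 1: a consumer in the (hypothetical) group g1
bin/kafka-console-consumer.sh --bootstrap-server node1:9092 --topic first --group g1

# terminal 2: a consumer in the (hypothetical) group g2 independently receives the same messages
bin/kafka-console-consumer.sh --bootstrap-server node1:9092 --topic first --group g2

# terminal 3: every message produced here shows up in both consumers
bin/kafka-console-producer.sh --bootstrap-server node1:9092 --topic first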



2. Basic Architecture

Kafka's basic architecture:

  1. To make scaling easier and increase throughput, a topic is divided into multiple partitions
  2. To go along with partitioning, Kafka introduces the concept of consumer groups: the consumers in a group consume in parallel, and a partition can only be consumed by one consumer within a group, which avoids duplicate consumption
  3. To improve availability, each partition has several replicas
  4. ZooKeeper records which replica is the leader; since Kafka 2.8.0 it is also possible to configure Kafka to run without ZooKeeper


  • Consumer Group (CG): a consumer group consists of multiple consumers. Each consumer in the group consumes data from different partitions; to avoid duplicate consumption, a partition can only be consumed by one consumer within the group. Consumer groups do not affect one another. Every consumer belongs to some consumer group, i.e. the consumer group is the logical subscriber (a sketch follows this list).
  • Broker: one Kafka server is one broker. A cluster consists of multiple brokers, and one broker can host multiple topics.
  • Topic: can be thought of as a queue; both producers and consumers work against a topic.
  • Partition: for scalability, a very large topic can be spread across multiple brokers (i.e. servers); a topic can be divided into multiple partitions, and each partition is an ordered queue.
  • Replica: each partition of a topic has several replicas: one Leader and several Followers.
  • Leader: the "primary" among a partition's replicas; producers send data to the leader, and consumers read data from the leader.
  • Follower: a "secondary" among a partition's replicas; it synchronizes data from the Leader in real time and keeps its data in sync with the Leader. When the Leader fails, one of the Followers becomes the new Leader.
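
As a rough sketch of the consumer group concept (assuming a topic first with 3 partitions on a broker at node1:9092, as set up later in these notes, and a made-up group name g1), starting two console consumers with the same --group makes Kafka split the partitions between them, so each message is consumed by only one member of the group:

# terminal 1 and terminal 2: two consumers in the same (hypothetical) group g1
bin/kafka-console-consumer.sh --bootstrap-server node1:9092 --topic first --group g1
bin/kafka-console-consumer.sh --bootstrap-server node1:9092 --topic first --group g1

# Kafka assigns the topic's partitions between the two group members,
# so any given message is delivered to only one consumer in the group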

3. Topic Command Line Operations

3.1 View Topic Command Parameters

① View the parameters of the topic operation command

# run from the Kafka installation directory
bin/kafka-topics.sh

After running the command, the console lists all available parameters and their meanings.



3.2 Create Topic

② Create a topic named first with 1 partition; since the cluster has 3 nodes, the replication factor is set to 3

bin/kafka-topics.sh --bootstrap-server node1:9092 --create --partitions 1 --replication-factor 3 --topic first

3.3 View all Topics

③ List all topics on the current server

bin/kafka-topics.sh --bootstrap-server node1:9092 --list

3.4 View Topic details

④ View the details of the topic first

bin/kafka-topics.sh --bootstrap-server node1:9092 --describe --topic first


  • Replicas: 1,2,0 means the partition's replicas are stored on three nodes (brokers 1, 2 and 0)
  • Leader: 1 means the leader replica is stored on the broker with id 1, while the other two brokers store follower replicas (a sketch of watching a leader change follows this list)
  • Isr: 1,2,0 is the set of in-sync replicas, i.e. the followers that keep up with the leader's data; the ISR is the replica set Kafka uses to guarantee data consistency and reliability
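
A small sketch of how the Leader and Isr columns can be watched during a failover (the broker ids and the host names node1/node2 are assumptions based on the 3-node cluster used in these notes): stop the broker that currently holds the leader replica, then describe the topic again from one of the remaining brokers; one of the followers should have been elected as the new leader, and the stopped broker should have dropped out of the ISR.

# before: for example Leader: 1, Isr: 1,2,0
bin/kafka-topics.sh --bootstrap-server node1:9092 --describe --topic first

# on the host running broker 1, stop that broker
bin/kafka-server-stop.sh

# after: a former follower (e.g. 2 or 0) is now the leader, and 1 is no longer in the ISR
# (node2 is assumed to be a host running one of the remaining brokers)
bin/kafka-topics.sh --bootstrap-server node2:9092 --describe --topic first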

3.5 Modify the number of partitions

⑤ Note: the number of partitions can only be increased but not decreased

bin/kafka-topics.sh --bootstrap-server node1:9092 --alter --partitions 3 --topic first
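
The "increase only" rule can be checked directly: asking for fewer partitions than the topic currently has is rejected by the broker (a sketch; the exact error text may vary between Kafka versions):

# this is rejected: the topic already has 3 partitions, and Kafka cannot reduce the partition count
bin/kafka-topics.sh --bootstrap-server node1:9092 --alter --partitions 1 --topic first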

Check the details of the topic first again after the modification; it should now show 3 partitions.



3.6 Delete Topic

⑥ Delete the topic first

bin/kafka-topics.sh --bootstrap-server node1:9092 --delete --topic first

4. Producer Command Line Operations

4.1 View Producer Command Parameters

View the parameters of the console producer command

bin/kafka-console-producer.sh



4.2 Send Messages to a Topic

bin/kafka-console-producer.sh --bootstrap-server node1:9092 --topic first


A message hello world was sent to the topic first.
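
As a small extension (a sketch that goes beyond the original notes), the console producer can also send key-value messages by enabling key parsing through --property; the ":" separator below is an arbitrary choice:

# each input line is split into key and value at the ":" separator,
# e.g. typing  user1:hello  sends key "user1" with value "hello"
bin/kafka-console-producer.sh --bootstrap-server node1:9092 --topic first --property parse.key=true --property key.separator=: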


5. Consumer Command Line Operations

5.1 View Consumer Command Parameters

bin/kafka-console-consumer.sh



5.2 Consume Messages from a Topic

Consume messages from the topic first:

bin/kafka-console-consumer.sh --bootstrap-server node1:9092 --topic first


We find that the cursor keeps blinking but no messages arrive. This is because, by default, the console consumer only reads messages produced after it starts; to read all data (including historical data), add the --from-beginning parameter:

bin/kafka-console-consumer.sh --bootstrap-server node1:9092 --from-beginning --topic first


This way, the historical data is read as well.
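
Going one step further (a sketch; the group name mygroup is made up), consuming with an explicit --group lets Kafka track that group's offsets, which can then be inspected with the kafka-consumer-groups.sh tool:

# consume as part of a named consumer group
bin/kafka-console-consumer.sh --bootstrap-server node1:9092 --topic first --group mygroup

# inspect the group's partition assignment and offsets (current offset, log end offset, lag)
bin/kafka-consumer-groups.sh --bootstrap-server node1:9092 --describe --group mygroup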


Origin: blog.csdn.net/Decade_Faiz/article/details/131566058