Kafka Workflow | Command-Line Operations

1. Overview

Synchronous communication | asynchronous communication:
with asynchronous messaging, a single consumer means point-to-point,
and multiple consumers mean publish/subscribe.

Kafka is a distributed message queue that combines both models.
Benefits of a message queue:
sending and receiving are decoupled; redundancy (backups); scalability; flexibility and peak-load handling; recoverability; ordering; buffering

2. Architecture

Roles: producer, cluster (brokers), consumer

Producer: TopicA lives on some node; when a single node cannot hold all of it,
partitions can be used: messages are stored in a distributed way, with one topic split into several partitions; each partition is itself a message queue; the messages in a partition also need replicas (e.g. ReplicationA/0).

Data consistency: the other nodes may sync messages at different speeds.
Leader and follower here refer to the replicas of a partition: each partition has several replicas, and one of them is elected leader.
The leader handles all reads and writes; the followers only keep backup copies of the data.
(In ZooKeeper, leader and follower refer to server nodes instead.)
Messages within a partition are ordered, and each message is numbered with an offset (starting from 0). If a consumer has read message 1, it saves 1 to zk; the next read resumes from 1, which prevents the same data from being consumed again.
Some consumers have limited throughput, hence consumer groups: multiple consumers jointly consume one topic.
Each consumer in a group consumes whole partitions of the topic; two consumers in the same group cannot consume the same partition.
Consumption is fastest, and resources are used most efficiently, when the number of partitions equals the number of consumers.
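Once the cluster from section 3 is up, this behaviour can be observed from the command line; a minimal sketch, assuming the console consumer accepts a group id via --consumer-property:

### Run the same command in two terminals: both consumers join group g1 and split the 3 partitions of first between them
[kris@hadoop101 bin]$ ./kafka-console-consumer.sh --bootstrap-server hadoop101:9092 --topic first --consumer-property group.id=g1
[kris@hadoop102 bin]$ ./kafka-console-consumer.sh --bootstrap-server hadoop101:9092 --topic first --consumer-property group.id=g1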

Partitions: ① give messages distributed storage; ② let consumers consume concurrently without interfering with each other, which raises consumption throughput.
Consumers only talk to the leader for reads and writes; followers merely store backup copies of the data.

zk: ① topic and broker/partition metadata is stored in zk; ② consumer offsets were also stored in zk, but since version 0.9 offsets are kept in Kafka itself (the internal __consumer_offsets topic).

A topic is a classification of messages. When a topic's data exceeds the capacity of a single broker, it is split into partitions. Common misconception: it is not the case that partition 1 is used only after partition 0 is full, and partition 2 only after partition 1 is full.
The replication factor must be <= the number of broker nodes.

Creating a topic also creates its partitions, and each partition's leader should be placed on a different node so the load is balanced.

3. Installing the Kafka Cluster

tar -zxvf kafka_2.11-0.11.0.0.tgz -C /opt/module/
[kris@hadoop101 module]$ mv kafka_2.11-0.11.0.0/ kafka
Create a logs directory under /opt/module/kafka
[kris@hadoop101 kafka]$ mkdir logs

[kris@hadoop101 kafka]$ cd config/
[kris@hadoop101 config]$ vim server.properties  # edit the configuration file
# globally unique broker id; must not be duplicated
broker.id=0
# enable topic deletion
delete.topic.enable=true
# number of threads handling network requests
num.network.threads=3
# number of threads handling disk I/O
num.io.threads=8
# send socket buffer size
socket.send.buffer.bytes=102400
# receive socket buffer size
socket.receive.buffer.bytes=102400
# maximum size of a socket request
socket.request.max.bytes=104857600
# directory where Kafka stores its message data (log segments)
log.dirs=/opt/module/kafka/logs
# default number of partitions per topic on this broker
num.partitions=1
# number of threads per data dir used to recover and clean up data
num.recovery.threads.per.data.dir=1
# maximum time a segment file is retained before deletion
log.retention.hours=168
# ZooKeeper connection string
zookeeper.connect=hadoop101:2181,hadoop102:2181,hadoop103:2181

Distribute the installation package

[kris@hadoop101 module]$ xsync kafka/

On hadoop102 and hadoop103, edit /opt/module/kafka/config/server.properties and set broker.id=1 and broker.id=2 respectively.

Note: broker.id must not be duplicated.
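The same edits can also be pushed out from hadoop101; a minimal sketch, assuming passwordless ssh between the nodes and the paths used above:

[kris@hadoop101 module]$ ssh hadoop102 "sed -i 's/^broker.id=0/broker.id=1/' /opt/module/kafka/config/server.properties"
[kris@hadoop101 module]$ ssh hadoop103 "sed -i 's/^broker.id=0/broker.id=2/' /opt/module/kafka/config/server.properties"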

Start the cluster

Start Kafka on hadoop101, hadoop102 and hadoop103 in turn:
[kris@hadoop101 kafka]$ bin/kafka-server-start.sh config/server.properties &  ## append & to start in the background
[kris@hadoop102 kafka]$ bin/kafka-server-start.sh config/server.properties &
[kris@hadoop103 kafka]$ bin/kafka-server-start.sh config/server.properties &
Stop the cluster
[kris@hadoop101 kafka]$ bin/kafka-server-stop.sh stop
[kris@hadoop102 kafka]$ bin/kafka-server-stop.sh stop
[kris@hadoop103 kafka]$ bin/kafka-server-stop.sh stop
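Starting and stopping each broker by hand quickly gets tedious. Below is a minimal group-control sketch (a hypothetical kf.sh helper, assuming passwordless ssh and the install path above; -daemon is the start script's built-in way to run in the background instead of appending &):

#!/bin/bash
# kf.sh start|stop -- start or stop Kafka on every node of the cluster
case $1 in
start)
    for host in hadoop101 hadoop102 hadoop103; do
        echo "----- starting kafka on $host -----"
        ssh $host "/opt/module/kafka/bin/kafka-server-start.sh -daemon /opt/module/kafka/config/server.properties"
    done
;;
stop)
    for host in hadoop101 hadoop102 hadoop103; do
        echo "----- stopping kafka on $host -----"
        ssh $host "/opt/module/kafka/bin/kafka-server-stop.sh"
    done
;;
esac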


jps -l shows the full main class of each running process:
[kris@hadoop101 ~]$ jps -l
3444 org.apache.zookeeper.server.quorum.QuorumPeerMain
3524 kafka.Kafka
3961 sun.tools.jps.Jps
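To check all three nodes at once, a small ssh loop works (just an illustration; it assumes passwordless ssh and that jps is on the PATH of a non-interactive shell):

[kris@hadoop101 ~]$ for host in hadoop101 hadoop102 hadoop103; do echo "===== $host ====="; ssh $host jps -l; done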

4. Kafka Command-Line Operations

① Create a topic

### Create a topic: specify its name, the number of partitions, and the replication factor
[kris@hadoop101 bin]$ ./kafka-topics.sh --zookeeper hadoop101:2181 --create --topic first --partitions 3 --replication-factor 3
Created topic "first".
[kris@hadoop101 bin]$ ./kafka-topics.sh --zookeeper hadoop101:2181 --list
first
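Two related topic commands, for reference (deletion works because delete.topic.enable=true was set in server.properties; the partition count can only ever be increased):

### Delete a topic (shown for reference only; the rest of this walkthrough keeps first)
[kris@hadoop101 bin]$ ./kafka-topics.sh --zookeeper hadoop101:2181 --delete --topic first
### Increase the number of partitions of an existing topic
[kris@hadoop101 bin]$ ./kafka-topics.sh --zookeeper hadoop101:2181 --alter --topic first --partitions 6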
[zk: localhost:2181(CONNECTED) 1] ls /cluster
[id]
[zk: localhost:2181(CONNECTED) 2] ls /cluster/id
[]
[zk: localhost:2181(CONNECTED) 3] get /cluster/id
{"version":"1","id":"ujFrs7F7SVuO2JwXw62vow"}
cZxid = 0x700000014
ctime = Wed Feb 27 11:51:23 CST 2019
mZxid = 0x700000014
mtime = Wed Feb 27 11:51:23 CST 2019
pZxid = 0x700000014
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 45
numChildren = 0 
[zk: localhost:2181(CONNECTED) 4] ls /brokers
[ids, topics, seqid]
[zk: localhost:2181(CONNECTED) 5] ls /brokers/topics
[first]
[zk: localhost:2181(CONNECTED) 6] ls /brokers/topics/first
[partitions]
[zk: localhost:2181(CONNECTED) 7] ls /brokers/topics/first/partitions
[0, 1, 2]
[zk: localhost:2181(CONNECTED) 8] ls /brokers/topics/first/partitions/0
[state]
[zk: localhost:2181(CONNECTED) 9] ls /brokers/topics/first/partitions/0/state
[]

② Produce messages | consume messages

Run the console producer:
[kris@hadoop101 bin]$ ./kafka-console-producer.sh --broker-list hadoop101:9092 --topic first
>Hello kafka!
>kris
>smile
   
Console consumer ①: by default it starts from the latest offset, so it only sees messages produced after it starts; earlier messages are not delivered, and new data has to be produced before anything shows up.
[kris@hadoop101 bin]$ ./kafka-console-consumer.sh --zookeeper hadoop101:2181 --topic first
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
alex


Console consumer ②: --from-beginning consumes from the start
[kris@hadoop101 bin]$ ./kafka-console-consumer.sh --zookeeper hadoop101:2181 --topic first --from-beginning      
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
smile
kris
Hello kafka!
alex
A single consumer reads the 3 partitions one partition at a time, so the overall order differs from the order in which the messages were produced.
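Ordering within a single partition can be checked directly; a minimal sketch, assuming this version's console consumer supports the --partition option:

[kris@hadoop101 bin]$ ./kafka-console-consumer.sh --bootstrap-server hadoop101:9092 --topic first --partition 0 --from-beginning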
The first consumer group:
[zk: localhost:2181(CONNECTED) 12] ls /consumers    ## the consumers must still be online; these auto-generated groups and their offsets are only temporary
[console-consumer-27938, console-consumer-90053]    ## the console consumers' group ids
[zk: localhost:2181(CONNECTED) 13] ls /consumers/console-consumer-27938
[ids, owners, offsets]
[zk: localhost:2181(CONNECTED) 14] ls /consumers/console-consumer-27938/offsets
[first]
[zk: localhost:2181(CONNECTED) 15] ls /consumers/console-consumer-27938/offsets/first
[0, 1, 2]
[zk: localhost:2181(CONNECTED) 16] ls /consumers/console-consumer-27938/offsets/first/0
[]
[zk: localhost:2181(CONNECTED) 17] get /consumers/console-consumer-27938/offsets/first/0
1
cZxid = 0x80000003f
ctime = Wed Feb 27 18:26:11 CST 2019
mZxid = 0x80000003f
mtime = Wed Feb 27 18:26:11 CST 2019
pZxid = 0x80000003f
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 1
numChildren = 0   
[zk: localhost:2181(CONNECTED) 18] get /consumers/console-consumer-27938/offsets/first/1
1
[zk: localhost:2181(CONNECTED) 19] get /consumers/console-consumer-27938/offsets/first/2
2   
   
   
   
The second consumer group:
[zk: localhost:2181(CONNECTED) 26] get /consumers/console-consumer-90053/offsets/first/0
1
[zk: localhost:2181(CONNECTED) 27] get /consumers/console-consumer-90053/offsets/first/1
1
[zk: localhost:2181(CONNECTED) 28] get /consumers/console-consumer-90053/offsets/first/2
2  
③ Offsets stored in Kafka itself: --bootstrap-server
[kris@hadoop101 bin]$ ./kafka-console-consumer.sh --bootstrap-server hadoop101:9092 --topic first --from-beginning
Hello kafka!
alex
kris
smile
With --bootstrap-server the offsets are no longer stored in ZooKeeper but in Kafka itself (the internal __consumer_offsets topic in the data directory); the message data is still stored in the partitions first-0/first-1/first-2:
drwxrwxr-x. 2 kris kris  4096 2月  27 19:02 __consumer_offsets-0
drwxrwxr-x. 2 kris kris  4096 2月  27 19:02 __consumer_offsets-12
drwxrwxr-x. 2 kris kris  4096 2月  27 19:02 __consumer_offsets-15
drwxrwxr-x. 2 kris kris  4096 2月  27 19:02 __consumer_offsets-18
drwxrwxr-x. 2 kris kris  4096 2月  27 19:02 __consumer_offsets-21
drwxrwxr-x. 2 kris kris  4096 2月  27 19:02 __consumer_offsets-24
drwxrwxr-x. 2 kris kris  4096 2月  27 19:02 __consumer_offsets-27
drwxrwxr-x. 2 kris kris  4096 2月  27 19:02 __consumer_offsets-3
drwxrwxr-x. 2 kris kris  4096 2月  27 19:02 __consumer_offsets-30
drwxrwxr-x. 2 kris kris  4096 2月  27 19:02 __consumer_offsets-33
drwxrwxr-x. 2 kris kris  4096 2月  27 19:02 __consumer_offsets-36
drwxrwxr-x. 2 kris kris  4096 2月  27 19:02 __consumer_offsets-39
drwxrwxr-x. 2 kris kris  4096 2月  27 19:02 __consumer_offsets-42
drwxrwxr-x. 2 kris kris  4096 2月  27 19:02 __consumer_offsets-45
drwxrwxr-x. 2 kris kris  4096 2月  27 19:02 __consumer_offsets-48
drwxrwxr-x. 2 kris kris  4096 2月  27 19:02 __consumer_offsets-6
drwxrwxr-x. 2 kris kris  4096 2月  27 19:02 __consumer_offsets-9
[kris@hadoop101 __consumer_offsets-0]$ ll
总用量 0
-rw-rw-r--. 1 kris kris 10485760 2月  27 19:02 00000000000000000000.index
-rw-rw-r--. 1 kris kris        0 2月  27 19:02 00000000000000000000.log
-rw-rw-r--. 1 kris kris 10485756 2月  27 19:02 00000000000000000000.timeindex
-rw-rw-r--. 1 kris kris        0 2月  27 19:02 leader-epoch-checkpoint


[kris@hadoop101 kafka]$ bin/kafka-topics.sh --zookeeper hadoop101:2181 --list
__consumer_offsets
first
[zk: localhost:2181(CONNECTED) 41] ls /brokers/topics
[first, __consumer_offsets]
[zk: localhost:2181(CONNECTED) 47] ls /brokers/topics/__consumer_offsets/partitions
[44, 45, 46, 47, 48, 49, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43]
[zk: localhost:2181(CONNECTED) 48] ls /brokers/topics/__consumer_offsets/partitions/0
[state]
[zk: localhost:2181(CONNECTED) 49] ls /brokers/topics/__consumer_offsets/partitions/0/state
[]
[zk: localhost:2181(CONNECTED) 50] get /brokers/topics/__consumer_offsets/partitions/0/state
{"controller_epoch":2,"leader":0,"version":1,"leader_epoch":0,"isr":[0]}
cZxid = 0x80000008c
ctime = Wed Feb 27 19:02:06 CST 2019
mZxid = 0x80000008c
mtime = Wed Feb 27 19:02:06 CST 2019
pZxid = 0x80000008c
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 72
numChildren = 0
A topic is a logical concept; a partition is a physical one (a directory on disk):
[kris@hadoop101 kafka]$ cd logs/
[kris@hadoop101 logs]$ ll
drwxrwxr-x. 2 kris kris  4096 2月  27 18:21 first-0
drwxrwxr-x. 2 kris kris  4096 2月  27 18:21 first-1
drwxrwxr-x. 2 kris kris  4096 2月  27 18:21 first-2
[kris@hadoop101 logs]$ cd first-0
[kris@hadoop101 first-0]$ ll
总用量 8
-rw-rw-r--. 1 kris kris 10485760 2月  27 18:07 00000000000000000000.index
-rw-rw-r--. 1 kris kris       73 2月  27 18:21 00000000000000000000.log
-rw-rw-r--. 1 kris kris 10485756 2月  27 18:07 00000000000000000000.timeindex
-rw-rw-r--. 1 kris kris        8 2月  27 18:21 leader-epoch-checkpoint
The data is serialized to disk rather than held in memory, under first-0, together with its index files.


[kris@hadoop101 kafka]$ bin/kafka-topics.sh --zookeeper hadoop101:2181 --describe --topic first
Topic:first     PartitionCount:3        ReplicationFactor:3     Configs:
        Topic: first    Partition: 0    Leader: 1       Replicas: 1,0,2 Isr: 1,0,2  ## Isr = in-sync replica set
        Topic: first    Partition: 1    Leader: 2       Replicas: 2,1,0 Isr: 2,1,0
        Topic: first    Partition: 2    Leader: 0       Replicas: 0,2,1 Isr: 0,2,1
The test Kafka cluster has 3 nodes in total.
For topic first, partition 0: the leader is on the broker with broker.id=1, the replicas live on brokers 1, 0 and 2, and all replicas are alive and in sync with the leader.
Leader: the node that handles reads and writes for the given partition; any node may become the leader.
Replicas: the list of nodes that store replicas of the given partition, regardless of whether they are the leader or even alive.
Isr: the set of in-sync replicas; every node in this set is alive and caught up with the leader, and a replica that falls behind is removed from the ISR.

Order in which the messages were produced:
>Hello kafka!
>kris
>smile
>alex
Where they ended up:
partition first-0 holds smile (consumer offset 1)
partition first-1 holds kris (consumer offset 1)
partition first-2 holds Hello kafka! and alex (consumer offset 2, as seen in zk via get /consumers/console-consumer-90053/offsets/first/2)

A partition may have several replicas, and a leader has to be elected among them;
the producer and consumer interact only with this leader, while the other replicas act as followers that copy data from it.
Push vs. pull:
the producer pushes messages to the broker; the consumer pulls messages from the broker (this is also how Flume pulls data out of Kafka to consume it).

Consumers in different consumer groups may consume the same partition's data.

Partitioning rules:
each message is a (key, value) pair;
if a key is given, the partition is chosen from the key's hashcode modulo the number of partitions;
if no key is given, partitions are assigned round-robin (like the round_robin sink processor in Flume).

Write path and acknowledgements:
the producer sends to the partition's leader, which appends the message to its local log (data file plus index);
the followers then sync the data from the leader;
with ack=1 the leader acknowledges as soon as it has written locally, so data can be lost if the leader fails before the followers have synced; waiting until all followers have synced (ack=-1/all) avoids that.
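To see key-based partitioning from the command line, the console producer can be told to parse a key out of each line; a minimal sketch (parse.key and key.separator are console-producer properties; the keys below are made-up examples):

[kris@hadoop101 bin]$ ./kafka-console-producer.sh --broker-list hadoop101:9092 --topic first --property parse.key=true --property key.separator=:
>user1:hello        ## lines with the same key always go to the same partition
>user1:world
>user2:hi           ## a different key may hash to a different partition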

 

Reading the offsets saved in Kafka (the __consumer_offsets topic)
1) Edit the config file config/consumer.properties
exclude.internal.topics=false
2) Read the offsets
bin/kafka-console-consumer.sh --topic __consumer_offsets --zookeeper hadoop101:2181 --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter" --consumer.config config/consumer.properties --from-beginning
Key = (console consumer group id | topic name | partition), value = (offset, commit time, expiration time):
[console-consumer-81371,first,1]::[OffsetMetadata[1,NO_METADATA],CommitTime 1551272323753,ExpirationTime 1551358723753]
[console-consumer-81371,first,0]::[OffsetMetadata[1,NO_METADATA],CommitTime 1551272323753,ExpirationTime 1551358723753]
[console-consumer-81371,first,2]::[OffsetMetadata[2,NO_METADATA],CommitTime 1551272328754,ExpirationTime 1551358728754]
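The same offsets (plus lag) can also be read with the consumer-groups tool; a minimal sketch, reusing the group id from the output above (the group must still exist for --describe to return anything):

[kris@hadoop101 kafka]$ bin/kafka-consumer-groups.sh --bootstrap-server hadoop101:9092 --list
[kris@hadoop101 kafka]$ bin/kafka-consumer-groups.sh --bootstrap-server hadoop101:9092 --describe --group console-consumer-81371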


Reposted from www.cnblogs.com/shengyang17/p/10443115.html