Kafka: Description and Installation

I. Features of Kafka

1. Characteristics of a message queue

2. Producer-consumer model

3. First-in, first-out (FIFO) ordering guarantee

4. Reliability guarantees
   4.1. The broker itself does not lose data
   4.2. Consumers do not lose data: "at least once" and "exactly once"

5. "At least once" means a message may be delivered more than once, i.e. duplicates are possible

6. The "exactly once" mechanism is somewhat more complex (see the producer-config sketch below)
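
A hedged sketch, not from the original post: on the producer side, the "at least once" vs. "exactly once" trade-off is driven by a few standard producer configs. Since Kafka 0.11 the producer can be made idempotent, so broker-side retries no longer create duplicates; the file name producer.properties below is just an assumed example.

# producer.properties (hypothetical example)
# acks=all: wait for the in-sync replicas, so the broker side does not silently drop data
acks=all
# retries > 0 gives "at least once": a retried send may be written twice...
retries=3
# ...unless idempotence is enabled (Kafka >= 0.11), which lets the broker de-duplicate retries
enable.idempotence=true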

II. Kafka's Architecture

1. producer: the message producer

2. consumer: the message consumer

3. broker: a server in the Kafka cluster, responsible for handling message read/write requests and storing messages

4. topic: a message queue / category

5. The queue embodies the producer-consumer model

6. A broker is an agent; at the Kafka cluster layer there are in fact many brokers

7. A topic is equivalent to a queue

8. Not shown in the diagram is ZooKeeper: some of the metadata in this architecture is stored in ZooKeeper, and management of the whole cluster also depends heavily on ZooKeeper (a quick way to inspect this is shown below)
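
A hedged aside, not in the original post: the metadata Kafka keeps in ZooKeeper can be inspected with the zookeeper-shell.sh tool shipped in Kafka's bin directory. The node1:2181 address matches the ZooKeeper ensemble used later in this post.

zookeeper-shell.sh node1:2181 ls /brokers/ids
# prints the ids of the registered brokers, e.g. [1, 2]
zookeeper-shell.sh node1:2181 ls /brokers/topics
# prints the topics whose metadata lives in ZooKeeper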

III. Kafka's Message Storage and Produce/Consume Model

1. A topic is divided into multiple partitions

2. Messages within each partition are strictly ordered, and each message has a sequence number called an offset

3. A partition belongs to exactly one broker, while one broker can manage multiple partitions

4. Messages are written directly to files, without passing through an in-memory buffer

5. Messages are deleted according to a time-based retention policy, not as soon as they are consumed

6. The producer decides which partition each message is written to, either by round-robin load balancing or by a hash-based partitioning strategy (see the keyed console-producer example below)
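
A hedged illustration of the hash-based strategy, not in the original post: the console producer can send keyed messages via its parse.key and key.separator properties, and messages with the same key are hashed to the same partition. The hosts and topic name match the cluster set up later in this post.

kafka-console-producer.sh --broker-list node2:9092,node3:9092 --topic mytopic --property parse.key=true --property key.separator=:
>user1:first event for user1
>user1:second event for user1
>user2:first event for user2

Both user1 messages hash to the same partition and therefore stay in order relative to each other; the user2 message may land on a different partition.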

IV. How Does Kafka Produce, Consume, and Store Messages?

1. Messages in Kafka are organized by topic. You can simply picture a topic as a queue; each topic is then split into many partitions for parallelism. Within each partition the messages are ordered, like an ordered queue, and each message has a sequence number, e.g. 0 to 12; data is read from the front and written at the back.

2. A partition belongs to one broker, and one broker can manage multiple partitions. For example, if a topic has 6 partitions and there are 2 brokers, each broker manages 3 partitions.

3. A partition can simply be thought of as a file: when data arrives, it is appended to that partition. Kafka differs from many messaging systems here; many systems delete a message once it has been consumed, whereas Kafka deletes data according to a time-based retention policy rather than on consumption. In Kafka there is no notion of "consumed", only of "expired".

4. The producer itself decides which partition to write to, and there are several strategies for this; for example, with hash partitioning related data lands in the same partition, so you do not need to join data across multiple partitions afterwards.

V. Kafka's Message Consumption Model

1. Each consumer maintains on its own which offset it has consumed up to

2. Every consumer belongs to a group

3. Within a group, consumption follows the queue model
  3.1. Each consumer consumes different partitions
  3.2. Therefore a message is consumed only once within a group

4. Across groups, consumption follows the publish-subscribe model (see the sketch after this list)
  4.1. Each group consumes independently, without affecting the others
  4.2. Therefore a message is consumed once by each group
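
A hedged sketch of both models, not in the original post, using the console consumer's --consumer-property flag to set group.id; hosts and topic match the cluster set up later in this post.

# Queue model: run this on two nodes with the same group.id and
# the partitions of mytopic are split between the two consumers.
kafka-console-consumer.sh --bootstrap-server node2:9092,node3:9092 --topic mytopic --consumer-property group.id=group1

# Publish-subscribe model: a different group.id gets its own full copy of every message.
kafka-console-consumer.sh --bootstrap-server node2:9092,node3:9092 --topic mytopic --consumer-property group.id=group2 --from-beginning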

VI. Characteristics of Kafka

1. Characteristics of a messaging system: producer-consumer model, FIFO
   Kafka provides the basic guarantees of a messaging system: there is a basic producer-consumer model, and ordering is FIFO within a partition but not across partitions. Of course, a topic can be configured with a single partition, which makes it strictly FIFO.

2. High performance: a single node supports thousands of clients, with throughput of hundreds of MB/s
   This is close to the limit of the network card

3. Durability: messages are persisted directly on ordinary disks, and performance is still good
   Data is written straight to disk, simply appended to the partition file. The first benefit is direct persistence: the data is not lost. The second is that writes are sequential, and consumption is also a sequential read, so persistence and ordering are preserved at the same time; this works well because sequential disk access is fast.

4. Distributed: data replication, traffic load balancing, and scalability
   Kafka is distributed with data replicas: the same data is copied to different brokers, so that when a disk fails the data is not lost. With 3 replicas, for example, data is lost only if the disks on all 3 machines fail, which is very robust under heavy use. It also offers load balancing and scalability, including online scaling without stopping the service.

5. Very flexible: long message retention + clients maintaining their own consumption state
   Consumption is very flexible. First, messages are persisted for a relatively long span of time, e.g. a day or a week; second, each consumer maintains its own consumption state (which offset it has reached). This supports the queue model, the publish-subscribe (broadcast) model, and a rollback (replay) model, as sketched below.
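
A hedged sketch of the rollback model, not in the original post: because messages are retained and the consumer owns its offsets, consumption can be replayed. The first command replays a topic from the start; the second assumes Kafka 0.11+, where kafka-consumer-groups.sh gained a --reset-offsets option, and uses an example group name (group1).

# replay everything still retained in the topic
kafka-console-consumer.sh --bootstrap-server node2:9092 --topic mytopic --from-beginning

# roll an existing group back to the earliest retained offsets
kafka-consumer-groups.sh --bootstrap-server node2:9092 --group group1 --topic mytopic --reset-offsets --to-earliest --execute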
 

VII. Installing Kafka

This installation uses 2 nodes (node2 and node3).

1. Ideally, set up passwordless SSH between the cluster nodes

2. Install the JDK

[root@node2 ~]# java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)


3. NTP time synchronization (synchronize with node1)

[root@node2 yum.repos.d]# ntpdate node1
25 Jul 07:20:28 ntpdate[13877]: step time server 192.168.2.231 offset -0.587725 sec
[root@node3 yum.repos.d]# ntpdate node1
25 Jul 07:20:36 ntpdate[13953]: adjust time server 192.168.2.231 offset -0.054590 sec


4. Install ZooKeeper (node1, node2, node3)

[root@node1 ~]#  zkServer.sh status
JMX enabled by default
Using config: /home/bigdata/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
[root@node3 ~]#  zkServer.sh status
JMX enabled by default
Using config: /home/bigdata/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader


5. Install Kafka

5.1. Extract the archive

tar -zxvf kafka_2.12-0.11.0.2.tgz -C /home/bigdata/

5.2. Edit the configuration file

[root@node3 config]# pwd
/home/bigdata/kafka_2.12-0.11.0.2/config
[root@node3 config]# vi server.properties 

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2

# Switch to enable topic deletion or not, default value is false
#delete.topic.enable=true

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://node3:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=2

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

############################# Log Basics #############################

# A comma seperated list of directories under which to store log files
log.dirs=/opt/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended for to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=node1:2181,node2:2181,node3:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
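
Note (added for clarity, not in the original post): the file above is node3's copy. Each broker needs its own broker.id and listeners value; assuming node2 is the broker with id 1, its server.properties would differ only in these lines:

# on node2 (assumed values)
broker.id=1
listeners=PLAINTEXT://node2:9092
# everything else, including zookeeper.connect and log.dirs, stays the same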

5.3. Configure environment variables

vi /etc/profile

export KAFKA_HOME=/home/bigdata/kafka_2.12-0.11.0.2
export PATH=$PATH:$KAFKA_HOME/bin

source /etc/profile

5.4. Start Kafka (needed on both nodes)

[root@node2 kafka_2.12-0.11.0.2]# pwd
/home/bigdata/kafka_2.12-0.11.0.2
[root@node2 kafka_2.12-0.11.0.2]# kafka-server-start.sh config/server.properties &
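
Starting with a trailing & keeps the broker attached to the current shell. As an alternative (not from the original post), the start script also accepts a -daemon flag that runs the broker in the background and sends its output to the logs directory:

kafka-server-start.sh -daemon config/server.properties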

5.5. Check the processes

[root@node3 kafka_2.12-0.11.0.2]# jps
13968 QuorumPeerMain
14048 Kafka
14340 Jps


6. Basic usage of Kafka

6.1. Create a topic

[root@node2 kafka_2.12-0.11.0.2]# kafka-topics.sh --create --zookeeper node1:2181,node2:2181,node3:2181 --replication-factor 2 --partitions 1 --topic mytopic
Created topic "mytopic".

6.2. List the topics that have been created

[root@node2 kafka_2.12-0.11.0.2]# kafka-topics.sh --list --zookeeper node1:2181,node2:2181,node3:2181
my_test
my_topic
mytopic
test

6.3. Describe a topic

[root@node2 kafka_2.12-0.11.0.2]# kafka-topics.sh --describe --zookeeper node1:2181,node2:2181,node3:2181 --topic my_topic
Topic:my_topic	PartitionCount:1	ReplicationFactor:2	Configs:
	Topic: my_topic	Partition: 0	Leader: 2	Replicas: 2,1	Isr: 2,1

6.4. Produce messages to the topic

[root@node2 kafka_2.12-0.11.0.2]# kafka-console-producer.sh --broker-list node2:9092,node3:9092 --topic mytopic
>asdsa
>sdas
>

6.5. Consume messages from the topic

[root@node3 kafka_2.12-0.11.0.2]# kafka-console-consumer.sh --zookeeper node1:2181,node2:2181,node3:2181 --topic mytopic --from-beginning  
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
asdsa
sdas
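
The warning above refers to the old ZooKeeper-based consumer. As a hedged alternative (not in the original post), the same messages can be read with the new consumer by pointing at the brokers instead of ZooKeeper:

kafka-console-consumer.sh --bootstrap-server node2:9092,node3:9092 --topic mytopic --from-beginning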


Reposted from blog.csdn.net/afafawfaf/article/details/81163198