Kafka old consumer configuration parameters

         This document covers the contents of the config/consumer.properties file in the Kafka installation directory. These settings apply to the old consumer in earlier Kafka versions. The original English documentation is long and hard to follow, so I spent four days translating it, hoping it helps everyone. If you have any questions, please leave a message.

 

Each parameter below is listed with its name, default value, and description.

group.id (default: none)
A unique string that identifies the consumer group this consumer belongs to. Multiple processes indicate that they are all part of the same consumer group by setting the same group id.

zookeeper.connect (default: none)
Specifies the ZooKeeper connection string in the form hostname:port, where hostname and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when one machine is down, you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3. The server may also have a ZooKeeper chroot path as part of its connection string, which puts its data under some path in the global ZooKeeper namespace. If so, the consumer should use the same chroot path in its connection string. For example, to give a chroot path of /chroot/path, you would give the connection string as hostname1:port1,hostname2:port2,hostname3:port3/chroot/path.
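As a concrete sketch, a consumer.properties fragment for a hypothetical three-node ensemble with a /kafka chroot might look like this (the hostnames and group name are placeholders, not values from the original document):

```properties
# Hypothetical three-node ZooKeeper ensemble with a /kafka chroot
zookeeper.connect=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/kafka
group.id=my-consumer-group
```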

consumer.id (default: null)
Generated automatically if not set.

socket.timeout.ms (default: 30 * 1000)
The socket timeout for network requests. The actual timeout set will be fetch.wait.max.ms + socket.timeout.ms.

socket.receive.buffer.bytes (default: 64 * 1024)
The socket receive buffer for network requests.

fetch.message.max.bytes (default: 1024 * 1024)
The number of bytes of messages to attempt to fetch for each topic partition in each fetch request. These bytes are read into memory for each partition, so this helps control the memory used by the consumer. The fetch request size must be at least as large as the maximum message size the server allows, or else it is possible for the producer to send messages larger than the consumer can fetch.

num.consumer.fetchers (default: 1)
The number of fetcher threads used to fetch data.

auto.commit.enable (default: true)
If true, periodically commit to ZooKeeper the offsets of messages already fetched by the consumer. This committed offset will be used when the process fails as the position from which the new consumer will begin.

auto.commit.interval.ms (default: 60 * 1000)
The frequency, in milliseconds, at which consumer offsets are committed to ZooKeeper.

queued.max.message.chunks (default: 2)
The maximum number of message chunks buffered for consumption. Each chunk can be up to fetch.message.max.bytes.
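To make the memory impact of fetch.message.max.bytes and queued.max.message.chunks concrete, here is a rough back-of-the-envelope sketch. The formula is my own sizing estimate under the stated assumptions, not an exact formula from the Kafka documentation:

```python
# Back-of-the-envelope upper bound on fetch buffer memory for one
# consumer, assuming up to fetch.message.max.bytes can be buffered per
# partition and up to queued.max.message.chunks chunks are queued.
# (A rough sizing estimate, not an exact Kafka formula.)

def fetch_memory_upper_bound(num_partitions,
                             fetch_message_max_bytes=1024 * 1024,
                             queued_max_message_chunks=2):
    return num_partitions * fetch_message_max_bytes * queued_max_message_chunks

# A consumer reading 50 partitions with the default settings:
mb = fetch_memory_upper_bound(50) / (1024 * 1024)
print(mb)  # 100.0 (about 100 MB in the worst case)
```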

rebalance.max.retries

4

当一个新消费者加入一个消费者组,消费者集合尝试重平衡负载来分配分区给每个消费者。当这个分配正在发生时如果消费者集合改变了,重平衡会失败并重试。此设置控制在放弃之前最多尝试次数。

fetch.min.bytes (default: 1)
The minimum amount of data the server should return for a fetch request. If insufficient data is available, the request will wait for that much data to accumulate before answering.

fetch.wait.max.ms (default: 100)
The maximum amount of time the server will block before answering a fetch request when there is not enough data to immediately satisfy fetch.min.bytes.
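fetch.min.bytes and fetch.wait.max.ms work together to trade latency for throughput. A sketch with illustrative values (not recommendations from the original document):

```properties
# Wait up to 500 ms for at least 64 KB of data per fetch request;
# larger batches mean fewer round trips but higher latency
fetch.min.bytes=65536
fetch.wait.max.ms=500
```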

rebalance.backoff.ms (default: 2000)
The backoff time between retries during a rebalance. If not set explicitly, the value of zookeeper.sync.time.ms is used.

refresh.leader.backoff.ms (default: 200)
The backoff time to wait before trying to determine the leader of a partition that has just lost its leader.

auto.offset.reset (default: largest)
What to do when there is no initial offset in ZooKeeper or the offset is out of range:
* smallest: automatically reset the offset to the smallest offset
* largest: automatically reset the offset to the largest offset
* anything else: throw an exception to the consumer

consumer.timeout.ms (default: -1)
Throw a timeout exception to the consumer if no message is available for consumption within the specified interval.

exclude.internal.topics (default: true)
Whether messages from internal topics (such as offsets) should be exposed to the consumer.

client.id (default: group id value)
The client id is a user-specified string sent with each request to help trace calls. It should logically identify the application making the request.

zookeeper.session.timeout.ms (default: 6000)
The ZooKeeper session timeout. If the consumer fails to heartbeat to ZooKeeper within this period, it is considered dead and a rebalance will occur.

 

zookeeper.connection.timeout.ms (default: 6000)
The maximum time the client waits while establishing a connection to ZooKeeper.

zookeeper.sync.time.ms (default: 2000)
How far a ZooKeeper follower can lag behind the ZooKeeper leader.

offsets.storage (default: zookeeper)
Select where offsets should be stored (zookeeper or kafka).

offsets.channel.backoff.ms (default: 1000)
The backoff period when reconnecting the offsets channel or retrying failed offset fetch/commit requests.

offsets.channel.socket.timeout.ms (default: 10000)
The socket timeout when reading responses for offset fetch/commit requests. This timeout is also used for the ConsumerMetadata requests that are used to query for the offset manager.

offsets.commit.max.retries (default: 5)
Retry a failed offset commit up to this many times. This retry count applies only to offset commits during shutdown. It does not apply to commits originating from the auto-commit thread. It also does not apply to attempts to query for the offset coordinator before committing offsets; i.e., if a consumer metadata request fails for any reason, it will be retried, and that retry does not count toward this limit.

dual.commit.enabled (default: true)
If you are using "kafka" as offsets.storage, you can dual-commit offsets to ZooKeeper (in addition to Kafka). This is required during migration from ZooKeeper-based offset storage to Kafka-based offset storage. For any given consumer group, it is safe to turn this off after all instances in that group have been upgraded to a new version that commits offsets to the broker (instead of directly to ZooKeeper).
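A sketch of the migration sequence described above, as consumer.properties settings (this assumes the upgrade order given in the text):

```properties
# Step 1: while old and new consumers coexist, commit to both backends
offsets.storage=kafka
dual.commit.enabled=true

# Step 2: once every instance in the group commits offsets to the
# broker, dual commits can be turned off:
# dual.commit.enabled=false
```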

partition.assignment.strategy (default: range)
Select between the "range" and "roundrobin" strategies for assigning partitions to consumer streams. The round-robin partition assignor lays out all the available partitions and all the available consumer threads, then does a round-robin assignment from partitions to consumer threads. If the subscriptions of all consumer instances are identical, partitions will be uniformly distributed (i.e., the partition ownership counts will differ by at most one across all consumer threads). Round-robin assignment is permitted only if: (a) every topic has the same number of streams within each consumer instance; and (b) the set of subscribed topics is identical for every consumer instance in the group. Range partitioning works on a per-topic basis. For each topic, we lay out the available partitions in numeric order and the consumer threads in lexicographic order, then divide the number of partitions by the total number of consumer streams (threads) to determine the number of partitions to assign to each consumer. If the partitions do not divide evenly, the first few consumers will have one extra partition.
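The two strategies described above can be sketched in a few lines of Python. This is a simplified illustration for a single topic, not Kafka's actual implementation:

```python
# Minimal sketch of the "range" and "roundrobin" partition assignment
# strategies for a single topic (illustration only).

def range_assign(partitions, threads):
    """Range: partitions in numeric order, threads in lexicographic
    order; the first (partitions % threads) threads get one extra."""
    threads = sorted(threads)
    per_thread, extra = divmod(len(partitions), len(threads))
    assignment, start = {}, 0
    for i, t in enumerate(threads):
        count = per_thread + (1 if i < extra else 0)
        assignment[t] = partitions[start:start + count]
        start += count
    return assignment

def roundrobin_assign(partitions, threads):
    """Round-robin: deal partitions to threads one at a time."""
    threads = sorted(threads)
    assignment = {t: [] for t in threads}
    for i, p in enumerate(partitions):
        assignment[threads[i % len(threads)]].append(p)
    return assignment

parts = list(range(5))         # partitions 0..4 of one topic
consumers = ["c1-0", "c1-1"]   # two consumer threads

print(range_assign(parts, consumers))       # {'c1-0': [0, 1, 2], 'c1-1': [3, 4]}
print(roundrobin_assign(parts, consumers))  # {'c1-0': [0, 2, 4], 'c1-1': [1, 3]}
```

Note how range assignment gives the first thread the extra partition, while round-robin interleaves them.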

 
