| Attribute | Default | Description |
| --- | --- | --- |
| broker.id | (none) | Required parameter; the unique identifier of the broker. |
| log.dirs | /tmp/kafka-logs | The directory where Kafka data is stored. Multiple directories can be specified, separated by commas. When a new partition is created, it is placed in the directory that currently holds the fewest partitions. |
| port | 9092 | The port on which the broker accepts client connections. |
| zookeeper.connect | null | The ZooKeeper connection string, in the format hostname1:port1,hostname2:port2,hostname3:port3. One or more hosts may be listed; for reliability it is recommended to list them all. This setting can also include a ZooKeeper chroot path under which all data for this Kafka cluster is stored; to keep the cluster separate from other applications, specifying a dedicated path is recommended, in the format hostname1:port1,hostname2:port2,hostname3:port3/chroot/path. Note that consumers must use the same connection string, including the chroot path. |
| message.max.bytes | 1000000 | The maximum message size the server will accept. Note that consumers must be configured with a fetch size (fetch.message.max.bytes) at least this large; otherwise a consumer cannot consume messages that exceed its fetch size. |
| num.io.threads | 8 | The number of I/O threads the server uses to execute read and write requests. This should be at least the number of disks on the server. |
| queued.max.requests | 500 | The number of requests that may be queued for the I/O threads. If the number of outstanding requests exceeds this value, the network threads stop accepting new requests. |
| socket.send.buffer.bytes | 100 * 1024 | The SO_SNDBUF buffer the server prefers for socket connections. |
| socket.receive.buffer.bytes | 100 * 1024 | The SO_RCVBUF buffer the server prefers for socket connections. |
| socket.request.max.bytes | 100 * 1024 * 1024 | The maximum request size the server will accept, used to prevent out-of-memory errors; it should be smaller than the Java heap size. |
| num.partitions | 1 | The default number of partitions, used when a topic is created without an explicit partition count. It is recommended to change this to 5. |
| log.segment.bytes | 1024 * 1024 * 1024 | The size of a segment file. When this size is exceeded, a new segment is created automatically. This value can be overridden by topic-level parameters. |
| log.roll.{ms,hours} | 24 * 7 hours | The maximum time before a new segment file is rolled, even if the size limit has not been reached. This value can be overridden by topic-level parameters. |
| log.retention.{ms,minutes,hours} | 7 days | The retention period for Kafka segment logs. Segments older than this are deleted. This parameter can be overridden by topic-level parameters. When the data volume is large, it is recommended to lower this value. |
| log.retention.bytes | -1 | The maximum capacity of each partition. If the data volume exceeds this value, partition data is deleted. Note that this parameter applies per partition, not per topic. It can be overridden by topic-level parameters. |
| log.retention.check.interval.ms | 5 minutes | How often log segments are checked against the deletion policy. |
| auto.create.topics.enable | true | Whether topics are created automatically. It is recommended to set this to false so that topic management stays strictly controlled and producers cannot create topics by mistake. |
| default.replication.factor | 1 | The default replication factor. It is recommended to change this to 2. |
| replica.lag.time.max.ms | 10000 | If the leader receives no fetch request from a follower within this window, it removes the follower from the ISR (in-sync replicas). |
| replica.lag.max.messages | 4000 | If a replica falls behind the leader by more than this many messages, the leader removes it from the ISR. |
| replica.socket.timeout.ms | 30 * 1000 | The socket timeout for requests sent from replicas to the leader. |
| replica.socket.receive.buffer.bytes | 64 * 1024 | The socket receive buffer for network requests to the leader when replicating data. |
| replica.fetch.max.bytes | 1024 * 1024 | The number of bytes of messages to attempt to fetch for each partition in the fetch requests the replicas send to the leader. |
| replica.fetch.wait.max.ms | 500 | The maximum amount of time to wait for data to arrive on the leader in the fetch requests sent by the replicas to the leader. |
| num.replica.fetchers | 1 | The number of threads used to replicate messages from leaders. Increasing this value increases the degree of I/O parallelism in the follower broker. |
| fetch.purgatory.purge.interval.requests | 1000 | The purge interval (in number of requests) of the fetch request purgatory. |
| zookeeper.session.timeout.ms | 6000 | The ZooKeeper session timeout. If the broker does not send a heartbeat to ZooKeeper within this time, ZooKeeper considers the node dead. If this value is too low, nodes are easily marked as dead; if it is too high, a genuinely dead node is detected too late. |
| zookeeper.connection.timeout.ms | 6000 | The timeout for clients connecting to ZooKeeper. |
| zookeeper.sync.time.ms | 2000 | How far a ZooKeeper follower may lag behind the ZooKeeper leader. |
| controlled.shutdown.enable | true | Enables controlled broker shutdown. If enabled, the broker moves all leaders it hosts to other brokers before shutting itself down. Enabling this is recommended, as it increases cluster stability. |
| auto.leader.rebalance.enable | true | If enabled, the controller automatically tries to balance partition leadership among the brokers by periodically returning leadership to the "preferred" replica of each partition, if it is available. |
| leader.imbalance.per.broker.percentage | 10 | The percentage of leader imbalance allowed per broker. The controller rebalances leadership if this ratio exceeds the configured value for a broker. |
| leader.imbalance.check.interval.seconds | 300 | The frequency with which to check for leader imbalance. |
| offset.metadata.max.bytes | 4096 | The maximum amount of metadata clients may save with their offsets. |
| connections.max.idle.ms | 600000 | Idle connection timeout: the server's socket processor threads close connections that have been idle longer than this. |
| num.recovery.threads.per.data.dir | 1 | The number of threads per data directory used for log recovery at startup and for flushing at shutdown. |
| unclean.leader.election.enable | true | Whether replicas not in the ISR may be elected leader as a last resort, even though doing so may result in data loss. |
| delete.topic.enable | false | Whether topic deletion is enabled. It is recommended to set this to true. |
| offsets.topic.num.partitions | 50 | The number of partitions for the offset commit topic. Since changing this after deployment is currently unsupported, a higher setting is recommended for production (e.g., 100-200). |
| offsets.topic.retention.minutes | 1440 | Offsets older than this age are marked for deletion. The actual purge occurs when the log cleaner compacts the offsets topic. |
| offsets.retention.check.interval.ms | 600000 | The frequency at which the offset manager checks for stale offsets. |
| offsets.topic.replication.factor | 3 | The replication factor for the offset commit topic. A higher setting (e.g., three or four) is recommended to ensure higher availability. If the offsets topic is created while fewer brokers than the replication factor are available, it will be created with fewer replicas. |
| offsets.topic.segment.bytes | 104857600 | The segment size for the offsets topic. Since it is a compacted topic, this should be kept relatively low to enable faster log compaction and loading. |
| offsets.load.buffer.size | 5242880 | An offset load occurs when a broker becomes the offset manager for a set of consumer groups (i.e., when it becomes leader for an offsets topic partition). This setting is the batch size (in bytes) used when reading from the offsets segments while loading offsets into the offset manager's cache. |
| offsets.commit.required.acks | -1 | The number of acknowledgements required before an offset commit is accepted. This is similar to the producer's acknowledgement setting. In general, the default should not be overridden. |
| offsets.commit.timeout.ms | 5000 | An offset commit is delayed until the required number of replicas have received it or this timeout is reached. This is similar to the producer request timeout. |
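
To make the recommendations in the table concrete, here is a minimal `server.properties` sketch that applies them. The broker id, hostnames, log directories, and the `/kafka` chroot are placeholders, not values from the table.

```properties
# Minimal server.properties sketch applying the table's recommendations.
# broker.id, hostnames, log directories, and the /kafka chroot are placeholders.

broker.id=0                                      # required; must be unique per broker
log.dirs=/data/kafka-logs-1,/data/kafka-logs-2   # move off /tmp; comma-separated list

# A dedicated chroot path keeps this cluster's data separate in ZooKeeper.
# Consumers must use the same connection string, including the chroot.
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka

num.partitions=5                 # recommended default partition count
default.replication.factor=2     # recommended replication factor
auto.create.topics.enable=false  # strict topic management; no accidental topics
delete.topic.enable=true         # allow topic deletion
```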
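The size-related settings also interact: a message the broker accepts must fit within the fetch sizes used by replicas and consumers, or it will be replicated and consumed only with difficulty or not at all. A sketch of the alignment follows; `fetch.message.max.bytes` refers to the old consumer and is an assumption about the client in use.

```properties
# Sketch: keep fetch sizes at least as large as the largest accepted message.

# broker
message.max.bytes=1000000        # largest message the broker accepts
replica.fetch.max.bytes=1048576  # should be >= message.max.bytes so followers can replicate large messages

# consumer (old consumer setting; assumed client)
fetch.message.max.bytes=1048576  # should be >= message.max.bytes so large messages can be consumed
```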