Kafka Notes 2: Kafka Server Configuration

After downloading and unzipping Kafka, there are three main configuration files under kafka/config, covering the server, producer, and consumer:
  • server.properties -- server (broker) configuration
  • producer.properties -- producer configuration
  • consumer.properties -- consumer configuration
This post introduces the configuration parameters of the server.
 
server.properties 
# The broker's globally unique number within the Kafka cluster; no two brokers may share an ID.
broker.id=0
 
# listeners specifies the address list the broker listens on for client connections, i.e. the broker addresses clients connect to.
# The format is protocol://hostname:port, with multiple addresses separated by commas.
# protocol is the protocol type; Kafka currently supports PLAINTEXT, SSL, SASL_SSL, etc. If security authentication is not enabled, plain PLAINTEXT is fine.
# hostname is the host name; it is best not to leave it empty. Here it is localhost.
# port is the service port on which producers and consumers establish connections; here it is 9092.
listeners=PLAINTEXT://localhost:9092
 
# advertised.listeners is mainly used in IaaS (Infrastructure as a Service) environments.
# If the machine has multiple NICs, e.g. a public NIC and a private NIC, listeners is bound to the private IP for inter-broker communication,
# while advertised.listeners is bound to the public IP for external clients to use.
advertised.listeners=PLAINTEXT://localhost:9092
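The private/public split described above can be sketched with named listeners. The IP addresses and listener names below are made up for illustration; `listener.security.protocol.map` and `inter.broker.listener.name` are the standard Kafka settings for this pattern:

```properties
# Hypothetical two-NIC broker: brokers talk to each other on the private address,
# external clients are told to use the public one.
listeners=INTERNAL://10.0.0.5:9092,EXTERNAL://10.0.0.5:9093
advertised.listeners=INTERNAL://10.0.0.5:9092,EXTERNAL://203.0.113.10:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```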
 
# Number of threads the broker uses to handle network requests, i.e. threads that receive messages; this generally does not need to be modified.
# Receiving threads put incoming messages into memory, from which they are later written to disk.
num.network.threads=3
 
# Number of threads used to write messages from memory to disk,
# i.e. the number of threads handling disk I/O.
num.io.threads=8
 
# Send buffer size of the socket.
socket.send.buffer.bytes=102400
 
# Receive buffer size of the socket.
socket.receive.buffer.bytes=102400
 
# Maximum size of a single socket request.
socket.request.max.bytes=104857600
 
# Path where Kafka stores its log (message data) files.
log.dirs=/usr/local/kafka/log/kafka
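`log.dirs` also accepts a comma-separated list, so partition data can be spread across several disks. The paths below are hypothetical:

```properties
# Hypothetical layout: spread partition data across two disks.
log.dirs=/data1/kafka-logs,/data2/kafka-logs
```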
 
# Default number of partitions per topic on this broker.
num.partitions=1
 
# Logs are retained for seven days by default; expired data is cleaned up, and cleanup and recovery are done by background threads.
# This sets the number of threads per data directory used to recover logs at startup and to flush them at shutdown.
num.recovery.threads.per.data.dir=1
 
# Replication factor of the consumer offsets topic.
offsets.topic.replication.factor=1
 
# Replication factor of the transaction state topic; set it higher in production to ensure availability.
transaction.state.log.replication.factor=1
 
# The min.insync.replicas setting applied to the transaction state topic.
transaction.state.log.min.isr=1
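In production these internal topics are usually given real redundancy. A sketch of commonly recommended values, assuming a cluster of at least three brokers:

```properties
# With 3+ brokers: survive one broker failure without losing the
# offsets or transaction state topics.
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
```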
 
# Receiving threads put messages into memory first, and the messages are then flushed from memory to disk.
# The flush is controlled by two thresholds: a time threshold and a count threshold.
# This is the count threshold: when the number of accumulated messages reaches it, a flush to disk is triggered. The next parameter is the time threshold.
log.flush.interval.messages=10000
 
# Time threshold for buffered messages, in milliseconds: once reached, a flush from memory to disk is triggered.
log.flush.interval.ms=1000
 
# Maximum time to retain log files, in hours; the default is seven days (168 hours). Expired data is deleted.
log.retention.hours=168
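Retention can also be set in finer-grained time units or capped by size; when several time settings are present, the finer unit wins (log.retention.ms overrides log.retention.minutes, which overrides log.retention.hours), and log.retention.bytes applies per partition. A sketch with assumed values:

```properties
# Keep at most 3 days of data (3 * 24 * 3600 * 1000 ms)
# and at most 10 GiB per partition, whichever limit is hit first.
log.retention.ms=259200000
log.retention.bytes=10737418240
```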
 
# A topic partition is stored on disk as a set of segment files.
# This parameter controls the maximum size of each segment file, in bytes; the default is 1 GB.
log.segment.bytes=1073741824
 
# Interval, in milliseconds, at which log segments are checked against the retention policies,
# i.e. how often to check whether any segment has expired or exceeded the size limits above.
log.retention.check.interval.ms=300000
 
# ZooKeeper cluster address; there may be several, separated by commas.
# The broker uses ZooKeeper to store its metadata.
zookeeper.connect=localhost:2181
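`zookeeper.connect` accepts several host:port pairs plus an optional chroot path, so one ZooKeeper ensemble can serve several Kafka clusters under different paths. The hostnames below are made up:

```properties
# Hypothetical three-node ensemble with a /kafka chroot.
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka
```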
 
# ZooKeeper connection timeout, in milliseconds.
zookeeper.connection.timeout.ms=6000
 
# Time the group coordinator waits for more consumers to join a new group before performing the first rebalance.
# A longer delay means fewer rebalances, but increases the wait before consumption begins.
group.initial.rebalance.delay.ms=0
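A small delay lets most group members join before the first rebalance fires; 0 is convenient for development, while a few seconds is a common production choice (the 3-second value below is an illustrative assumption, not a mandated default):

```properties
# Wait 3 s for consumers to join before the first rebalance.
group.initial.rebalance.delay.ms=3000
```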
 


Source: www.cnblogs.com/sunshineliulu/p/12012000.html