Kafka producer client configuration parameters

 

  When the producer sends messages to the broker, several parameters need to be configured to make sure the messages are sent successfully.

acks                                 # specifies how many partition replicas must receive a copy of the message before the producer considers it sent successfully

        acks = 0             # the producer does not wait for any response from the server after sending the message
        acks = 1             # as soon as the partition leader writes the message successfully, the producer receives a success response from the server
        acks = -1 (or all)   # after sending a message, the producer waits until all replicas in the ISR have written the message before it receives a success response from the server
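A minimal sketch of how acks is set on the producer client (the broker address, topic name, and callback handling are illustrative placeholders, not part of the configuration above):

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class AcksDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.1.1.2:9092");   // placeholder broker address
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // wait for all ISR replicas to acknowledge the write before the send counts as successful
            props.put(ProducerConfig.ACKS_CONFIG, "all");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // the callback reports whether the message satisfied the acks requirement
                producer.send(new ProducerRecord<>("demo-topic", "key", "value"), (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace();
                    } else {
                        System.out.println("sent to partition " + metadata.partition() + ", offset " + metadata.offset());
                    }
                });
            }
        }
    }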

batch.size = 16384                   # maximum cache space of a ProducerBatch, default 16KB

bootstrap.servers = [192.1.1.2:9092] # kafka cluster address

buffer.memory = 33554432             # maximum message storage space of the RecordAccumulator, default 32MB

client.id =                          # client id

compression.type = none              # message compression type ("gzip", "snappy", "lz4")
      # compressing messages can greatly reduce network traffic and network IO, thereby improving overall performance;
      # this trades CPU time for space, so if latency requirements are strict, compression is not recommended

connections.max.idle.ms = 540000     # how long a connection may stay idle before it is closed, default 9 minutes

enable.idempotence = false           # whether to enable the idempotent producer

interceptor.classes = []             # interceptor configuration

key.serializer = class org.apache.kafka.common.serialization.IntegerSerializer    # serializer for the key

linger.ms = 0                        # a ProducerBatch is sent when it is full or when this delay has elapsed;
      # waiting longer lets more ProducerRecords join a ProducerBatch before it is sent

max.block.ms = 60000                 # maximum time the KafkaProducer send() and partitionsFor() methods may block;
      # these methods block when the producer's send buffer is full or no metadata is available

max.in.flight.requests.per.connection = 5    # maximum number of unacknowledged requests per connection between the client and a broker
      # (sent to the broker but the response not yet received); if no further requests can be sent once this limit is
      # exceeded, this parameter can be used to judge whether messages are accumulating

max.request.size = 1048576           # maximum size of a request the producer client can send, default 1MB
      # (changing it is not recommended, since it is linked with other settings)

metadata.max.age.ms = 300000         # metadata refresh interval, 5 minutes

retries = 0                          # number of producer retries, default 0, i.e. no retry when any exception occurs.
      # Two kinds of exceptions can occur when sending data: recoverable and unrecoverable. Recoverable exceptions,
      # such as a leader election or network jitter, disappear once the network stabilizes or the election finishes,
      # so setting retries greater than 0 lets the data be resent normally. Unrecoverable exceptions, such as
      # exceeding max.request.size, cannot be fixed by retrying.

retry.backoff.ms = 100               # interval between retries; ideally estimate the recovery time of the exception
      # so that the total retry time is longer than the time the exception needs to recover

value.serializer = class org.apache.kafka.common.serialization.StringSerializer   # serializer for the value
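For reference, a sketch that wires several of the parameters above into a KafkaProducer; the broker address and topic name are placeholders, and the non-default values chosen for linger.ms, compression.type, and retries are only illustrative assumptions, not recommendations from the list above:

    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.IntegerSerializer;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ProducerConfigDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.1.1.2:9092");   // placeholder kafka cluster
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);          // ProducerBatch size, 16KB
            props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432L);   // RecordAccumulator space, 32MB
            props.put(ProducerConfig.LINGER_MS_CONFIG, 10);              // wait up to 10ms for a batch to fill (assumed value)
            props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");    // trade CPU time for less network IO (assumed value)
            props.put(ProducerConfig.RETRIES_CONFIG, 3);                 // retry recoverable errors such as leader election (assumed value)
            props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 100);      // interval between retries
            props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 1048576);  // keep the default 1MB request limit

            try (KafkaProducer<Integer, String> producer = new KafkaProducer<>(props)) {
                for (int i = 0; i < 10; i++) {
                    producer.send(new ProducerRecord<>("demo-topic", i, "message-" + i));
                }
                producer.flush(); // make sure buffered batches are sent before closing
            }
        }
    }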

 
