I. Producer
1. Important configuration
# high-priority configuration
# comma-separated list of host:port pairs used to establish the initial connection to the Kafka cluster
spring.kafka.producer.bootstrap-servers = TopKafka1:9092,TopKafka2:9092,TopKafka3:9092
# A value greater than 0 makes the client retransmit any record whose send failed. Note that these retries are no different from the client resending after receiving an error. Retries can change the order of records: if two messages are sent to the same partition and the first fails while the second succeeds, the second message may end up appearing before the first.
spring.kafka.producer.retries = 0
# Whenever multiple records are sent to the same partition, the producer tries to batch them into fewer requests,
# which improves performance on both the server and the client. This setting controls the batch size in bytes; the default is 16384.
spring.kafka.producer.batch-size = 16384
# Total memory the producer can use to buffer records. If records are produced faster than they can be delivered to the broker, the producer blocks or throws an exception, as indicated by "block.on.buffer.full". This setting roughly corresponds to the total memory the producer will use, but it is not a hard limit, since not all producer memory is used for buffering: some additional memory is used for compression (if compression is enabled) and for maintaining in-flight requests.
spring.kafka.producer.buffer-memory = 33554432
# Serializer class for keys; implements the org.apache.kafka.common.serialization.Serializer interface
spring.kafka.producer.key-serializer = org.apache.kafka.common.serialization.StringSerializer
# Serializer class for values; implements the org.apache.kafka.common.serialization.Serializer interface
spring.kafka.producer.value-serializer = org.apache.kafka.common.serialization.StringSerializer
# The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following values are allowed:
# acks=0: the producer will not wait for any acknowledgment from the server; the record is added to the socket buffer and immediately considered sent. In this case there is no guarantee the server received the record, and the retries configuration has no effect (the client generally will not know of any failure). The offset returned for each record is always -1.
# acks=1: the leader writes the record to its local log but responds without waiting for full acknowledgment from all replicas. In this case, if the leader fails immediately after acknowledging the record but before the replicas have copied it, the record is lost.
# acks=all: the leader waits for the full set of in-sync replicas to acknowledge the record. This guarantees the record is not lost as long as at least one in-sync replica remains alive. It is the strongest available guarantee, and is equivalent to acks=-1.
# Allowed values: all, -1, 0, 1
spring.kafka.producer.acks = -1
# A string sent to the server with every request. Its purpose is to track the source of requests, for example to allow an ip/port permit list for applications that may send messages. The application may set an arbitrary string, since it serves no functional purpose beyond tracking and logging.
spring.kafka.producer.client-id = 1
# Compression type for data produced. The default is no compression. Valid values are none, gzip, and snappy. Compression works best with batching: the more messages in a batch, the better the compression ratio.
spring.kafka.producer.compression-type = none
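The properties above map one-to-one onto native Kafka producer settings. As a quick sanity sketch (a plain Python dict keyed by the native property names, not the spring-kafka API; the broker hosts and client values are this article's placeholders):

```python
# Plain-dict sketch of the producer settings above, keyed by the
# native Kafka property names that spring.kafka.producer.* maps onto.
producer_config = {
    "bootstrap.servers": "TopKafka1:9092,TopKafka2:9092,TopKafka3:9092",
    "retries": 0,                # no automatic retransmission on failure
    "batch.size": 16384,         # bytes batched per partition request
    "buffer.memory": 33554432,   # total memory for buffering records
    "acks": "-1",                # equivalent to acks=all: wait for all ISRs
    "compression.type": "none",  # one of none, gzip, snappy
}

def validate_producer_config(cfg):
    """Check the internal consistency of the sketch above."""
    if cfg["acks"] not in ("0", "1", "-1", "all"):
        raise ValueError("acks must be one of 0, 1, -1, all")
    if cfg["batch.size"] > cfg["buffer.memory"]:
        raise ValueError("batch.size cannot exceed buffer.memory")
    return True
```

A real application would pass these same keys (with the spring.kafka.producer. prefix) through Spring Boot configuration instead.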
2. Other configuration
# lower-priority configuration
# The interval in milliseconds after which metadata is forcibly refreshed, even if no partition leadership changes have been seen. Default: 5 * 60 * 1000 = 300000
spring.kafka.producer.properties.metadata.max.age.ms = 300000
# The producer groups any records that arrive between request transmissions into a single batched request. Normally this only happens when records are produced faster than they can be sent. Under some circumstances, however, the client may want to reduce the number of requests even under moderate load. This setting accomplishes that by adding a small artificial delay: rather than sending a record immediately, the producer waits up to the given delay so that other records can be sent together and batched. This can be thought of as analogous to the Nagle algorithm in TCP. The setting gives an upper bound on the delay added for batching: once batch.size worth of records accumulates for a partition, they are sent immediately regardless of this setting, but if fewer bytes than that have accumulated, the producer will "linger" for the specified time waiting for more records to arrive. The default is 0, i.e. no delay. Setting linger.ms=5, for example, would reduce the number of requests but add up to 5ms of latency.
spring.kafka.producer.properties.linger.ms = 0
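The send-or-linger rule just described can be sketched as follows (illustrative logic only, not the client's actual implementation): a batch goes out as soon as it reaches batch.size, and linger.ms only matters while the batch is still smaller than that.

```python
def should_send(batch_bytes, waited_ms, batch_size=16384, linger_ms=0):
    """Send when the batch is full (batch.size wins regardless of
    linger.ms) or when the batch has already lingered for linger.ms."""
    return batch_bytes >= batch_size or waited_ms >= linger_ms

# With the default linger.ms=0, every record is eligible to be sent
# immediately, however small the batch is.
assert should_send(batch_bytes=100, waited_ms=0)
```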
# Socket send buffer size when transmitting data. Default: 128 * 1024 = 131072
spring.kafka.producer.properties.send.buffer.bytes = 131072
# Socket receive buffer size when reading data. Default: 32 * 1024 = 32768
spring.kafka.producer.properties.receive.buffer.bytes = 32768
# The maximum size of a request in bytes. It is also effectively a cap on the maximum record size. Note that the server has its own cap on record size, which may differ from this setting. This setting limits the number of record batches the producer sends in a single request, to avoid oversized requests. Default: 1 * 1024 * 1024 = 1048576
spring.kafka.producer.properties.max.request.size = 1048576
# How long to wait before attempting to reconnect after a failed connection. This avoids the client reconnecting in a tight loop. Default: 50
spring.kafka.producer.properties.reconnect.backoff.ms = 50
# Maximum total wait when the producer client repeatedly fails to connect to a Kafka broker. On each consecutive connection failure the reconnect backoff increases exponentially, with 20% random jitter added each time to avoid connection storms. Default: 1000
# spring.kafka.producer.properties.reconnect.backoff.max.ms = 1000
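The exponential-growth-plus-jitter behaviour described here can be sketched with a hypothetical helper (not Kafka client code; the 20% jitter figure is the one quoted above):

```python
import random

def reconnect_backoff_ms(failures, base_ms=50, max_ms=1000, rng=random.random):
    """The backoff doubles with each consecutive failure, is capped at
    reconnect.backoff.max.ms, and gets up to 20% random jitter so that
    many clients do not reconnect in lockstep (a connection storm)."""
    wait = min(base_ms * (2 ** failures), max_ms)
    return wait * (1 + 0.2 * rng())
```

For example, after the fifth straight failure the base wait is min(50 * 32, 1000) = 1000 ms, plus jitter.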
# How long the producer may block; blocking occurs when the buffer has no free space or metadata is missing. Default: 60 * 1000 = 60000
spring.kafka.producer.properties.max.block.ms = 60000
# How long to wait before retrying a failed produce request. This avoids sending retries in a tight loop on repeated failures. Default: 100
spring.kafka.producer.properties.retry.backoff.ms = 100
# The metrics system maintains a configurable number of samples over a sliding window. This setting configures the window size: for example, we might keep two samples over 30s. When a window expires, the oldest window is erased and overwritten. Default: 30000
spring.kafka.producer.properties.metrics.sample.window.ms = 30000
# Number of samples maintained for computing metrics. Default: 2
spring.kafka.producer.properties.metrics.num.samples = 2
# The highest recording level for metrics. Default: Sensor.RecordingLevel.INFO.toString()
# spring.kafka.producer.properties.metrics.recording.level = Sensor.RecordingLevel.INFO.toString()
# A list of classes to use as metrics reporters. Each implements the MetricReporter interface, which allows plugging in classes that are notified when new metrics are created. The JmxReporter, which registers JMX statistics, is always included. Default: Collections.emptyList()
# spring.kafka.producer.properties.metric.reporters = Collections.emptyList()
# Kafka can have multiple requests in flight on one connection (so-called in-flight requests), which reduces overhead, but if an error occurs and retries happen, the order in which data is written may change. Default: 5
spring.kafka.producer.properties.max.in.flight.requests.per.connection = 5
# How long before an idle connection is closed. Default: 9 * 60 * 1000 = 540000
spring.kafka.producer.properties.connections.max.idle.ms = 540000
# Partitioner class. Default: org.apache.kafka.clients.producer.internals.DefaultPartitioner
spring.kafka.producer.properties.partitioner.class = org.apache.kafka.clients.producer.internals.DefaultPartitioner
# Maximum time the client waits for the response to a request. If no response is received within this time, the client resends the request; once the retry count is exceeded, an exception is thrown. Default: 30 * 1000 = 30000
spring.kafka.producer.properties.request.timeout.ms = 30000
# Custom interceptors. Default: none
# spring.kafka.producer.properties.interceptor.classes =
# Whether to enable idempotence. If true, the producer ensures exactly one copy of each message is written; if false, a producer that retries after a broker failure may write duplicates of the retried message into the stream.
# spring.kafka.producer.properties.enable.idempotence = false
# Maximum time (in ms) the transaction coordinator will wait for a transaction status update from the producer before proactively aborting an ongoing transaction.
spring.kafka.producer.properties.transaction.timeout.ms = 60000
# TransactionalId to use for transactional delivery. It enables reliability semantics that span multiple producer sessions, because it lets the client guarantee that transactions using the same TransactionalId have completed before starting any new transaction. Without a TransactionalId, the producer is limited to idempotent delivery. Note that if a TransactionalId is configured, enable.idempotence must be enabled. The default is empty, which means transactions cannot be used.
# spring.kafka.producer.properties.transactional.id = null
Connection storm
When applications start up, the number of connections from each application server can surge abnormally. Suppose the connection pool is configured with a min of 3 and a max of 10, and normal traffic uses about 5 connections. When the applications restart, the connection count of each one may spike to 10, and at that instant some applications may not even get a connection. Once startup completes, the connection counts slowly return to normal. This is called a connection storm.
II. Consumer
1. Important configuration
# comma-separated list of host:port pairs used to establish the initial connection to the Kafka cluster
spring.kafka.consumer.bootstrap-servers = TopKafka1:9092,TopKafka2:9092,TopKafka3:9092
# A string that uniquely identifies the group of consumer processes this consumer belongs to. Processes that set the same group id belong to the same consumer group. Default: ""
spring.kafka.consumer.group-id = TyyLoveZyy
# Maximum number of records returned by a single poll; max.poll.records worth of data needs to be processed within session.timeout.ms. Default: 500
spring.kafka.consumer.max-poll-records = 500
# Heartbeat interval for the consumer; must not exceed session.timeout.ms. Default: 3000
spring.kafka.consumer.heartbeat-interval = 3000
# If true, the offsets of messages the consumer fetches are automatically committed (synchronized to zookeeper in the old consumer). The committed offset is used by a new consumer when the process dies. Default: true
spring.kafka.consumer.enable-auto-commit = true
# Frequency at which the consumer automatically commits offsets to zookeeper. Default: 5000
spring.kafka.consumer.auto-commit-interval = 5000
# When the offset is uninitialized, the following three policies can be set (default: latest):
# earliest
# if a committed offset exists for the partition, consume from the committed offset; if no offset has been committed, consume from the beginning
# latest
# if a committed offset exists for the partition, consume from the committed offset; if no offset has been committed, consume only data newly produced to the partition
# none
# if committed offsets exist for every partition of the topic, consume from those offsets; if any partition lacks a committed offset, throw an exception
spring.kafka.consumer.auto-offset-reset = earliest
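The three policies above can be summarised as a small decision function (an illustrative sketch of the semantics just listed, not consumer internals):

```python
def starting_offset(committed, beginning, end, policy="earliest"):
    """A committed offset always wins; the auto.offset.reset policy only
    applies when no offset was ever committed (committed is None)."""
    if committed is not None:
        return committed
    if policy == "earliest":
        return beginning   # consume the partition from the start
    if policy == "latest":
        return end         # consume only newly produced records
    raise LookupError("no committed offset for partition")  # policy "none"
```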
# Minimum number of bytes the server should return for a fetch request. If not enough data is available, the request waits until enough data accumulates before answering. Default: 1
spring.kafka.consumer.fetch-min-size = 1
# When a fetch request is sent to the broker, the broker may block (while the total size of records in the topic is less than fetch.min.bytes), in which case the fetch request takes relatively long. This setting configures how long at most the consumer waits for the response.
spring.kafka.consumer.fetch-max-wait = 500
# Identifier of the consumer process. A human-readable value makes it easier to trace problems. Default: ""
spring.kafka.consumer.client-id = 1
# Deserializer class for keys; implements the org.apache.kafka.common.serialization.Deserializer interface
spring.kafka.consumer.key-deserializer = org.apache.kafka.common.serialization.StringDeserializer
# Deserializer class for values; implements the org.apache.kafka.common.serialization.Deserializer interface
spring.kafka.consumer.value-deserializer = org.apache.kafka.common.serialization.StringDeserializer
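As with the producer section, the consumer keys above map onto native Kafka property names. A plain-dict sketch with the values used in this article (the group id and hosts are the article's placeholders):

```python
# Plain-dict sketch of the consumer settings above, keyed by the
# native Kafka property names that spring.kafka.consumer.* maps onto.
consumer_config = {
    "bootstrap.servers": "TopKafka1:9092,TopKafka2:9092,TopKafka3:9092",
    "group.id": "TyyLoveZyy",        # consumers sharing this id form one group
    "max.poll.records": 500,
    "heartbeat.interval.ms": 3000,
    "enable.auto.commit": True,
    "auto.commit.interval.ms": 5000,  # only relevant when auto commit is on
    "auto.offset.reset": "earliest",
    "fetch.min.bytes": 1,
    "fetch.max.wait.ms": 500,
}

def validate_consumer_config(cfg):
    """Check the internal consistency of the sketch above."""
    if not cfg["group.id"]:
        raise ValueError("a non-empty group.id is needed for group management")
    if cfg["auto.offset.reset"] not in ("earliest", "latest", "none"):
        raise ValueError("unknown auto.offset.reset policy")
    return True
```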
2. Other configuration
# The consumer pulls data from the server. If it does not issue a poll() to the server for longer than max.poll.interval.ms, even though the heartbeat thread keeps running, the consumer is considered stuck; it is removed from the consumer group and its partitions are reassigned. Default: 300000
spring.kafka.consumer.properties.max.poll.interval.ms = 300000
# Session timeout. If the consumer does not send a heartbeat within this time, it is considered dead and a rebalance is triggered. The value must lie within the broker's [group.min.session.timeout.ms, group.max.session.timeout.ms] range. Default: 10000
spring.kafka.consumer.properties.session.timeout.ms = 10000
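These bounds can be expressed as a small check. The broker-side defaults group.min.session.timeout.ms=6000 and group.max.session.timeout.ms=300000 are assumptions here for illustration, and "heartbeat at most a third of the session timeout" is a common recommendation rather than something stated above:

```python
def session_config_ok(session_ms, heartbeat_ms,
                      group_min_ms=6000, group_max_ms=300000):
    """session.timeout.ms must fall inside the broker's allowed window,
    and the heartbeat must fire well before the session can expire."""
    return (group_min_ms <= session_ms <= group_max_ms
            and heartbeat_ms <= session_ms // 3)
```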
# Choose between the "range" and "roundrobin" strategies for assigning partitions to consumer data streams. The round-robin partition assignor lays out all available partitions and all available consumer threads and assigns partitions to consumer threads in a cycle. Round-robin assignment is only permitted when: (1) every topic has the same number of streams within each consumer instance, and (2) the set of subscribed topics is identical for every consumer instance in the group.
spring.kafka.consumer.properties.partition.assignment.strategy = range
# The maximum size of records fetched from a broker in a single fetch request. If the first record in the first non-empty partition of the fetch is larger than this setting, the record is still returned; in that case only this one record is returned. Default: 50 * 1024 * 1024 = 52428800
spring.kafka.consumer.properties.fetch.max.bytes = 52428800
# Metadata refresh interval. The refresh is performed even if no partition subscription relationships have changed. Default: 5 * 60 * 1000 = 300000
spring.kafka.consumer.properties.metadata.max.age.ms = 300000
# The maximum size of records fetched from a single partition in one fetch request. If the first record in the first non-empty partition of the fetch is larger than this setting, the record is still returned; in that case only this one record is returned. The broker and topic also impose their own limits on the size of messages producers may send, so when setting this value, refer to the broker's message.max.bytes and the topic's max.message.bytes configuration. Default: 1 * 1024 * 1024 = 1048576
spring.kafka.consumer.properties.max.partition.fetch.bytes = 1048576
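The "first record is always returned" rule can be sketched like this (illustrative per-partition logic, not broker code):

```python
def take_from_partition(record_sizes, max_partition_fetch_bytes=1048576):
    """Accumulate records until max.partition.fetch.bytes would be
    exceeded, but always return the first record even if it alone is
    over the limit, so one oversized record cannot stall the consumer."""
    taken, total = [], 0
    for size in record_sizes:
        if taken and total + size > max_partition_fetch_bytes:
            break
        taken.append(size)
        total += size
    return taken
```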
# TCP send buffer size. Default: 128 * 1024 = 131072; if set to -1, the operating system default is used
spring.kafka.consumer.properties.send.buffer.bytes = 131072
# Consumer receive buffer size, used when creating the socket connection. Range: [-1, Integer.MAX]. If set to -1, the operating system default is used. Default: 64 * 1024 = 65536 (64 KB)
spring.kafka.consumer.properties.receive.buffer.bytes = 65536
# How long to wait before attempting to reconnect after a failed connection. This avoids the client reconnecting in a tight loop. Default: 50
spring.kafka.consumer.properties.reconnect.backoff.ms = 50
# Maximum total wait when the client repeatedly fails to connect to a Kafka broker. On each consecutive connection failure the reconnect backoff increases exponentially, with 20% random jitter added each time to avoid connection storms. Default: 1000
spring.kafka.consumer.properties.reconnect.backoff.max.ms = 1000
# How long to wait before retrying a failed request. This avoids sending retries in a tight loop on repeated failures. Default: 100
spring.kafka.consumer.properties.retry.backoff.ms = 100
# The metrics system maintains a configurable number of samples over a sliding window. This setting configures the window size: for example, we might keep two samples over 30s. When a window expires, the oldest window is erased and overwritten. Default: 30000
spring.kafka.consumer.properties.metrics.sample.window.ms = 30000
# Number of samples maintained for computing metrics. Default: 2
spring.kafka.consumer.properties.metrics.num.samples = 2
# The highest recording level for metrics. Default: Sensor.RecordingLevel.INFO.toString()
# spring.kafka.consumer.properties.metrics.recording.level = Sensor.RecordingLevel.INFO.toString()
# A list of classes to use as metrics reporters. Each implements the MetricReporter interface, which allows plugging in classes that are notified when new metrics are created. The JmxReporter, which registers JMX statistics, is always included. Default: Collections.emptyList()
# spring.kafka.consumer.properties.metric.reporters = Collections.emptyList()
# Automatically check the CRC32 of consumed records. This ensures messages were not corrupted on the wire or on disk. The check adds some overhead, so it may be disabled where very high performance is sought. Default: true
spring.kafka.consumer.properties.check.crcs = true
# Connection idle timeout. Since the consumer only connects to brokers (the Coordinator is also a broker), this applies to connections between the consumer and brokers. Default: 9 * 60 * 1000 = 540000
spring.kafka.consumer.properties.connections.max.idle.ms = 540000
# Maximum time the client waits for the response to a request. If no response is received within this time, the client resends the request; once the retry count is exceeded, an exception is thrown. Default: 30000
spring.kafka.consumer.properties.request.timeout.ms = 30000
# The default timeout for blocking KafkaConsumer APIs. The same KIP also adds overloads of the blocking APIs that take an explicit timeout per call, instead of the default timeout configured by default.api.timeout.ms. In particular, a new poll(Duration) API was added that does not block waiting for dynamic partition assignment; the old poll(long) API is deprecated and will be removed in a future release. Overloads were also added for other KafkaConsumer methods, such as partitionsFor, listTopics, offsetsForTimes, beginningOffsets, endOffsets and close, which accept a Duration. Default: 60 * 1000 = 60000
spring.kafka.consumer.properties.default.api.timeout.ms = 60000
# Custom interceptors. Default: Collections.emptyList()
# spring.kafka.consumer.properties.interceptor.classes = Collections.emptyList()
# Whether messages from internal topics should be excluded from the consumer's subscription. Default: true
spring.kafka.consumer.properties.exclude.internal.topics = true
# Whether the consumer leaves the group when it is closed (internal setting). Default: true
spring.kafka.consumer.properties.internal.leave.group.on.close = true
# Isolation level for reading transactionally written messages. Default: IsolationLevel.READ_UNCOMMITTED.toString().toLowerCase(Locale.ROOT)
# spring.kafka.consumer.properties.isolation.level = IsolationLevel.READ_UNCOMMITTED.toString().toLowerCase(Locale.ROOT)
Author: less Yi-day
Source: CSDN
Original: https://blog.csdn.net/u014774648/article/details/90110508
Copyright: this is an original article by the blogger; please include a link to the original when reposting.
spring-kafka producer-consumer configuration in detail
Reposted from: www.cnblogs.com/yx88/p/11013338.html