Setting the maximum message size for sends in spring-kafka

Environment

  1. Spring Boot 2
  2. Spring Cloud
  3. spring-kafka
  4. Kafka 2.2.0

Scenario

The application sends messages through spring-kafka's built-in KafkaTemplate, but the message payload is too large and exceeds the default size limit, so the send fails with the following error:

The message is 2044510 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.

Cause

  1. The message size limit configured on the Kafka broker (message.max.bytes) is too small, so the broker rejects the write;

  2. spring-kafka's default limit is only 1 MB, which produces the error above. Before sending, the producer used by spring-kafka checks each serialized record against its local configuration and refuses to send anything over the limit, so the error occurs on the client even when the broker is configured to accept larger messages. The documentation for max.request.size in org.apache.kafka.clients.producer.ProducerConfig confirms that the local setting is checked first, and nothing is sent if it is not satisfied:

    The maximum size of a request in bytes. This setting will limit the number of record  batches the producer will send in a single request to avoid sending huge requests. 
    This is also effectively a cap on the maximum record batch size. 
    Note that the server has its own cap on record batch size which may be different from this.
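
Because of this local check, the failure surfaces on the client as a failed send future, without the broker ever seeing the request. Below is a minimal sketch of observing that failure with spring-kafka 2.2's ListenableFuture callback API; the class name, the topic, and the assumption of a String-keyed KafkaTemplate bean are all illustrative:

    import org.apache.kafka.common.errors.RecordTooLargeException;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Component;

    @Component
    public class LargeMessageSender {

        private final KafkaTemplate<String, String> kafkaTemplate;

        public LargeMessageSender(KafkaTemplate<String, String> kafkaTemplate) {
            this.kafkaTemplate = kafkaTemplate;
        }

        public void send(String topic, String payload) {
            kafkaTemplate.send(topic, payload).addCallback(
                    result -> System.out.println("Sent: " + result.getRecordMetadata()),
                    ex -> {
                        // The oversized record is rejected locally, so the failure
                        // lands here; unwrap the cause to reach the Kafka exception.
                        Throwable cause = (ex.getCause() != null) ? ex.getCause() : ex;
                        if (cause instanceof RecordTooLargeException) {
                            System.err.println("Exceeds max.request.size: " + cause.getMessage());
                        }
                    });
        }
    }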
    

Solution

Kafka configuration

  1. Add the following to server.properties (a per-topic override is sketched after this list):

    message.max.bytes=5242880
    # The number of message bytes each replica attempts to fetch per partition; must be greater than or equal to message.max.bytes
    replica.fetch.max.bytes=6291456
    
  2. Add the following to producer.properties:

    # The maximum size of a request in bytes; must be smaller than message.max.bytes
    max.request.size=5242880
    
  3. Add the following to consumer.properties:

    # The number of message bytes fetched per topic partition in each fetch request; must be greater than or equal to message.max.bytes
    fetch.message.max.bytes=6291456
    
  4. Restart Kafka

    # Stop Kafka
    sh kafka-server-stop.sh
    # Start Kafka
    nohup sh kafka-server-start.sh ../config/server.properties &
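
Step 1 raises the limit broker-wide. If only certain topics need large messages, the broker default can be left alone and overridden per topic with the topic-level max.message.bytes setting instead. A sketch using the kafka-configs.sh tool that ships with Kafka 2.2; the ZooKeeper address and topic name are illustrative:

    # Override the limit for a single topic (demo-topic and the ZooKeeper address are examples)
    sh kafka-configs.sh --zookeeper localhost:2181 \
      --entity-type topics --entity-name demo-topic \
      --alter --add-config max.message.bytes=5242880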
    

Spring Boot configuration changes

Add the following setting to the application configuration file:

spring.kafka.producer.properties.max.request.size=5242880
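
The same producer limit can also be raised programmatically by defining the producer factory yourself instead of relying on the property passthrough. A minimal sketch, assuming String keys and values and a broker at localhost:9092 (both assumptions, not part of the original setup):

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.core.DefaultKafkaProducerFactory;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.kafka.core.ProducerFactory;

    @Configuration
    public class KafkaProducerConfig {

        @Bean
        public ProducerFactory<String, String> producerFactory() {
            Map<String, Object> props = new HashMap<>();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            // Raise the client-side limit to match the broker's message.max.bytes
            props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 5242880);
            return new DefaultKafkaProducerFactory<>(props);
        }

        @Bean
        public KafkaTemplate<String, String> kafkaTemplate() {
            return new KafkaTemplate<>(producerFactory());
        }
    }

Either approach ends up in the same ProducerConfig; the property form is less code, while the bean form keeps all producer tuning in one place.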

Reposted from blog.csdn.net/u013084266/article/details/103297784