Spring Cloud Stream Elmhurst SR1 翻译

Spring Cloud Stream Reference Guide

Table of Contents

Spring Cloud Stream Core

1. Quick Start

You can try Spring Cloud Stream in less than 5 minutes, even before you jump into any details, by following this three-step guide.

 

按照这个三步指南,您可以在不到5分钟的时间内尝试Spring Cloud Stream,甚至无需事先深入了解任何细节。

 

We show you how to create a Spring Cloud Stream application that receives messages coming from the messaging middleware of your choice (more on this later) and logs received messages to the console. We call it LoggingConsumer. While not very practical, it provides a good introduction to some of the main concepts and abstractions, making it easier to digest the rest of this user guide.

 

我们将向您展示如何创建一个Spring Cloud Stream应用程序,该应用程序接收来自您选择的消息传递中间件的消息(稍后将详细介绍)并将收到的消息记录到控制台。我们称之为LoggingConsumer。虽然不太实用,但它提供了一些主要概念和抽象的良好介绍,使其更容易消化本用户指南的其余部分。

 

The three steps are as follows:

  1. Creating a Sample Application by Using Spring Initializr
  2. Importing the Project into Your IDE
  3. Adding a Message Handler, Building, and Running

 

这三个步骤如下:

  1. 使用Spring Initializr创建示例应用程序
  2. 将项目导入IDE
  3. 添加消息处理程序,构建和运行

 

1.1. Creating a Sample Application by Using Spring Initializr

 

To get started, visit the Spring Initializr. From there, you can generate our LoggingConsumer application. To do so:

  1. In the Dependencies section, start typing stream. When the “Cloud Stream” option appears, select it.
  2. Start typing either 'kafka' or 'rabbit'.
  3. Select “Kafka” or “RabbitMQ”.
    Basically, you choose the messaging middleware to which your application binds. We recommend using the one you have already installed or feel more comfortable with installing and running. Also, as you can see from the Initializr screen, there are a few other options you can choose. For example, you can choose Gradle as your build tool instead of Maven (the default).
  4. In the Artifact field, type 'logging-consumer'.
    The value of the Artifact field becomes the application name. If you chose RabbitMQ for the middleware, your Spring Initializr should now be as follows:

  5. Click the Generate Project button.
    Doing so downloads the zipped version of the generated project to your hard drive.
  6. Unzip the file into the folder you want to use as your project directory.

 

要开始使用,请访问Spring Initializr。从那里,您可以生成我们的LoggingConsumer应用程序。为此:

  1. 在“依赖关系(Dependencies)”部分中,开始键入stream。当出现“Cloud Stream”选项时,选择它。
  2. 开始输入'kafka'或'rabbit'。
  3. 选择“Kafka”或“RabbitMQ”。

基本上,您选择的是应用程序要绑定的消息中间件。我们建议使用您已经安装好的,或者您在安装和运行方面更熟悉的那个。此外,从Initializr页面中可以看到,还有其他一些选项可供选择。例如,您可以选择Gradle作为构建工具,而不是Maven(默认值)。

  4. 在“Artifact(工件)”字段中,键入'logging-consumer'。

Artifact字段的值成为应用程序名称。如果你选择RabbitMQ作为中间件,你的Spring Initializr现在应该如下:

  5. 单击“Generate Project(生成项目)”按钮。

这样做会将生成的项目的压缩版本下载到硬盘驱动器。

  6. 将文件解压缩到要用作项目目录的文件夹中。

 

We encourage you to explore the many possibilities available in the Spring Initializr. It lets you create many different kinds of Spring applications.

我们鼓励您探索Spring Initializr中的许多可能性。它允许您创建许多不同类型的Spring应用程序。

 

1.2. Importing the Project into Your IDE   将项目导入IDE

 

Now you can import the project into your IDE. Keep in mind that, depending on the IDE, you may need to follow a specific import procedure. For example, depending on how the project was generated (Maven or Gradle), you may need to follow specific import procedure (for example, in Eclipse or STS, you need to use File → Import → Maven → Existing Maven Project).

现在,您可以将项目导入IDE。请记住,根据IDE,您可能需要遵循特定的导入过程。例如,根据项目的生成方式(Maven或Gradle),您可能需要遵循特定的导入过程(例如,在Eclipse或STS中,您需要使用File→Import→Maven→Existing Maven Project)。

 

Once imported, the project must have no errors of any kind. Also, src/main/java should contain com.example.loggingconsumer.LoggingConsumerApplication.

导入后,项目必须没有任何错误。另外,src/main/java应该包含com.example.loggingconsumer.LoggingConsumerApplication。

 

Technically, at this point, you can run the application’s main class. It is already a valid Spring Boot application. However, it does not do anything, so we want to add some code.

从技术上讲,此时,您可以运行应用程序的主类。它已经是一个有效的Spring Boot应用程序。但是,它没有做任何事情,所以我们想添加一些代码。

 

1.3. Adding a Message Handler, Building, and Running   添加消息处理器,构建,并运行

 

Modify the com.example.loggingconsumer.LoggingConsumerApplication class to look as follows:

将com.example.loggingconsumer.LoggingConsumerApplication类修改为如下所示:

 

@SpringBootApplication
@EnableBinding(Sink.class)
public class LoggingConsumerApplication {

        public static void main(String[] args) {
                SpringApplication.run(LoggingConsumerApplication.class, args);
        }

        @StreamListener(Sink.INPUT)
        public void handle(Person person) {
                System.out.println("Received: " + person);
        }

        public static class Person {
                private String name;

                public String getName() {
                        return name;
                }

                public void setName(String name) {
                        this.name = name;
                }

                public String toString() {
                        return this.name;
                }
        }
}

 

As you can see from the preceding listing:

  • We have enabled Sink binding (input-no-output) by using @EnableBinding(Sink.class). Doing so signals to the framework to initiate binding to the messaging middleware, where it automatically creates the destination (that is, a queue, topic, or other) that is bound to the Sink.INPUT channel.
  • We have added a handler method to receive incoming messages of type Person. Doing so lets you see one of the core features of the framework: It tries to automatically convert incoming message payloads to type Person.

 

从前面的清单中可以看出:

  • 我们通过使用@EnableBinding(Sink.class)启用了Sink绑定(只有输入,没有输出)。这样做会向框架发出信号,启动与消息中间件的绑定,并自动创建绑定到Sink.INPUT通道的目标(即队列、主题等)。
  • 我们添加了一个handler方法来接收类型为Person的传入消息。这样做可以让您看到框架的核心功能之一:它会尝试自动将传入的消息负载转换为Person类型。

 

You now have a fully functional Spring Cloud Stream application that listens for messages. From here, for simplicity, we assume you selected RabbitMQ in step one. Assuming you have RabbitMQ installed and running, you can start the application by running its main method in your IDE.

您现在拥有了一个功能齐全、可以侦听消息的Spring Cloud Stream应用程序。从这里开始,为简单起见,我们假设您在第一步中选择了RabbitMQ。假设您已安装并运行RabbitMQ,则可以在IDE中运行其main方法来启动应用程序。

 

You should see the following output:

你应该看到以下输出:

 

        --- [ main] c.s.b.r.p.RabbitExchangeQueueProvisioner : declaring queue for inbound: input.anonymous.CbMIwdkJSBO1ZoPDOtHtCg, bound to: input
        --- [ main] o.s.a.r.c.CachingConnectionFactory       : Attempting to connect to: [localhost:5672]
        --- [ main] o.s.a.r.c.CachingConnectionFactory       : Created new connection: rabbitConnectionFactory#2a3a299:0/SimpleConnection@66c83fc8. . .
        . . .
        --- [ main] o.s.i.a.i.AmqpInboundChannelAdapter      : started inbound.input.anonymous.CbMIwdkJSBO1ZoPDOtHtCg
        . . .
        --- [ main] c.e.l.LoggingConsumerApplication         : Started LoggingConsumerApplication in 2.531 seconds (JVM running for 2.897)

 

Go to the RabbitMQ management console or any other RabbitMQ client and send a message to input.anonymous.CbMIwdkJSBO1ZoPDOtHtCg. The anonymous.CbMIwdkJSBO1ZoPDOtHtCg part represents the group name and is generated, so it is bound to be different in your environment. For something more predictable, you can use an explicit group name by setting spring.cloud.stream.bindings.input.group=hello (or whatever name you like).

转到RabbitMQ管理控制台或任何其他RabbitMQ客户端,并向input.anonymous.CbMIwdkJSBO1ZoPDOtHtCg发送一条消息。其中anonymous.CbMIwdkJSBO1ZoPDOtHtCg部分代表组名称,它是自动生成的,因此在您的环境中必然会有所不同。如果想要更可预测的名称,您可以通过设置spring.cloud.stream.bindings.input.group=hello(或任何您喜欢的名称)来使用显式的组名。

 

The contents of the message should be a JSON representation of the Person class, as follows:

消息的内容应该是Person类的JSON表示,如下所示:

 

{"name":"Sam Spade"}

 

Then, in your console, you should see:

然后,在您的控制台中,您应该看到:

 

Received: Sam Spade

 

You can also build and package your application into a boot jar (by using ./mvnw clean install) and run the built JAR by using the java -jar command.

您还可以将应用程序构建并打包成boot jar(使用./mvnw clean install),然后使用java -jar命令运行构建好的JAR。

 

Now you have a working (albeit very basic) Spring Cloud Stream application.

现在,您已经拥有了一个可以正常工作的(尽管非常基础的)Spring Cloud Stream应用程序。

 

2. What’s New in 2.0?

 

Spring Cloud Stream introduces a number of new features, enhancements, and changes. The following sections outline the most notable ones:

 

Spring Cloud Stream引入了许多新功能,增强功能和更改。以下部分概述了最值得注意的部分:

 

2.1. New Features and Components   新功能和组件

 

  • Polling Consumers: Introduction of polled consumers, which lets the application control message processing rates. See “Using Polled Consumers” for more details. You can also read this blog post for more details.
  • Micrometer Support: Metrics has been switched to use Micrometer. MeterRegistry is also provided as a bean so that custom applications can autowire it to capture custom metrics. See “Metrics Emitter” for more details.
  • New Actuator Binding Controls: New actuator binding controls let you both visualize and control the Bindings lifecycle. For more details, see Binding visualization and control.
  • Configurable RetryTemplate: Aside from providing properties to configure RetryTemplate, we now let you provide your own template, effectively overriding the one provided by the framework. To use it, configure it as a @Bean in your application.

 

  • 轮询消费者:引入了轮询消费者,让应用程序可以控制消息处理速率。有关详细信息,请参阅“使用轮询的消费者”。您还可以阅读相关博客文章了解更多详情。
  • Micrometer支持:指标(Metrics)已切换为使用Micrometer。MeterRegistry也作为bean提供,以便自定义应用程序可以自动装配它来捕获自定义指标。有关详细信息,请参阅“Metrics Emitter(指标发射器)”,另见本列表之后的第一个示例草图。
  • 新的Actuator绑定控制:新的actuator绑定控制允许您可视化并控制Bindings的生命周期。有关更多详细信息,请参阅绑定可视化和控制。
  • 可配置的RetryTemplate:除了提供用于配置RetryTemplate的属性之外,我们现在还允许您提供自己的模板,从而有效地覆盖框架提供的模板。要使用它,请在应用程序中将其配置为@Bean,见本列表之后的第二个示例草图。
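
The following two sketches illustrate the Micrometer and RetryTemplate items above. They are minimal examples rather than excerpts from this guide: the class names, metric name, and retry settings are assumptions. The first sketch autowires the MeterRegistry bean to record a custom counter.

下面两个草图分别演示了上面的Micrometer和RetryTemplate条目。它们只是最小示例,并非摘自本指南:其中的类名、指标名称和重试设置均为假设。第一个草图自动装配MeterRegistry bean来记录一个自定义计数器。

import io.micrometer.core.instrument.MeterRegistry;

@EnableBinding(Sink.class)
public class MeteredLoggingConsumer {

        private final MeterRegistry meterRegistry;

        public MeteredLoggingConsumer(MeterRegistry meterRegistry) {
                this.meterRegistry = meterRegistry;
        }

        @StreamListener(Sink.INPUT)
        public void handle(Person person) {
                // "logging.consumer.received" is a hypothetical metric name
                meterRegistry.counter("logging.consumer.received").increment();
                System.out.println("Received: " + person);
        }
}

The second sketch registers a custom RetryTemplate as a @Bean by using the Spring Retry API, which overrides the template otherwise provided by the framework.

第二个草图使用Spring Retry API将自定义RetryTemplate注册为@Bean,从而覆盖框架原本提供的模板。

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.retry.backoff.FixedBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

@Configuration
public class RetryConfiguration {

        @Bean
        public RetryTemplate myRetryTemplate() {
                RetryTemplate template = new RetryTemplate();
                template.setRetryPolicy(new SimpleRetryPolicy(5));      // at most 5 attempts (assumed value)
                FixedBackOffPolicy backOff = new FixedBackOffPolicy();
                backOff.setBackOffPeriod(2000L);                        // wait 2 seconds between attempts (assumed value)
                template.setBackOffPolicy(backOff);
                return template;
        }
}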

 

2.2. Notable Enhancements   值得注意的增强功能

 

This version includes the following notable enhancements:

 

此版本包括以下显着增强功能:

 

2.2.1. Both Actuator and Web Dependencies Are Now Optional

 

This change slims down the footprint of the deployed application in the event that neither actuator nor web dependencies are required. It also lets you switch between the reactive and conventional web paradigms by manually adding one of the following dependencies.

如果既不需要actuator依赖也不需要web依赖,此更改可以减小已部署应用程序的体积。它还允许您通过手动添加以下依赖项之一,在响应式和传统Web范式之间切换。

 

The following listing shows how to add the conventional web framework:

以下清单显示了如何添加传统的Web框架:

 

<dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
</dependency>

 

The following listing shows how to add the reactive web framework:

以下清单显示了如何添加响应式Web框架:

 

<dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>

 

The following listing shows how to add the actuator dependency:

以下列表显示了如何添加执行器依赖项:

 

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

 

2.2.2. Content-type Negotiation Improvements   内容类型协商改进

 

One of the core themes for version 2.0 is improvements (in both consistency and performance) around content-type negotiation and message conversion. The following summary outlines the notable changes and improvements in this area. See the “Content Type Negotiation” section for more details. Also this blog post contains more detail.

  • All message conversion is now handled only by MessageConverter objects.
  • We introduced the @StreamMessageConverter annotation to provide custom MessageConverter objects.
  • We introduced the default content type of application/json, which needs to be taken into consideration when migrating 1.3 applications or operating in the mixed mode (that is, 1.3 producer → 2.0 consumer).
  • Messages with textual payloads and a contentType of text/…​ or …​/json are no longer converted to Message<String> for cases where the argument type of the provided MessageHandler cannot be determined (that is, public void handle(Message<?> message) or public void handle(Object payload)). Furthermore, a strong argument type may not be enough to properly convert messages, so the contentType header may be used as a supplement by some MessageConverters.

 

2.0版本的核心主题之一是围绕内容类型协商和消息转换的改进(在一致性和性能两方面)。以下摘要概述了该领域的显著变化和改进。有关详细信息,请参阅“内容类型协商”部分。另外,这篇博客文章中包含更多细节。

  • 现在,所有消息转换仅由MessageConverter对象处理。
  • 我们引入了@StreamMessageConverter注解来提供自定义的MessageConverter对象,见本列表之后的示例草图。
  • 我们引入了application/json作为默认内容类型,在迁移1.3应用程序或在混合模式下运行(即1.3生产者→2.0消费者)时需要考虑这一点。
  • 对于无法确定所提供的MessageHandler参数类型的情况(即public void handle(Message<?> message)或public void handle(Object payload)),带有文本负载且contentType为text/…​或…​/json的消息不再被转换为Message<String>。此外,强类型参数可能不足以正确转换消息,因此contentType头可能被某些MessageConverter用作补充。
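
As a rough illustration of the @StreamMessageConverter item above, the following sketch registers a custom MessageConverter. The application/person MIME type and the conversion logic are assumptions made up for this example; it reuses the Person POJO from the Quick Start.

作为上面@StreamMessageConverter条目的粗略示例,下面的草图注册了一个自定义MessageConverter。其中的application/person MIME类型和转换逻辑都是为本示例假设的;它复用了快速入门中的Person POJO。

import org.springframework.cloud.stream.annotation.StreamMessageConverter;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;
import org.springframework.messaging.converter.AbstractMessageConverter;
import org.springframework.messaging.converter.MessageConverter;
import org.springframework.util.MimeType;

@Configuration
public class CustomConverterConfiguration {

        @Bean
        @StreamMessageConverter
        public MessageConverter personMessageConverter() {
                return new AbstractMessageConverter(MimeType.valueOf("application/person")) {

                        @Override
                        protected boolean supports(Class<?> clazz) {
                                return Person.class.equals(clazz);
                        }

                        @Override
                        protected Object convertFromInternal(Message<?> message, Class<?> targetClass, Object conversionHint) {
                                // Illustrative only: treat the raw byte[] payload as the person's name
                                Person person = new Person();
                                person.setName(new String((byte[]) message.getPayload()));
                                return person;
                        }
                };
        }
}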

 

2.3. Notable Deprecations   值得注意的废弃

 

As of version 2.0, the following items have been deprecated:

 

从2.0版开始,不推荐使用以下项目:

 

2.3.1. Java Serialization (Java Native and Kryo)   Java序列化(Java原生和Kryo)

 

JavaSerializationMessageConverter and KryoMessageConverter remain for now. However, we plan to move them out of the core packages and support in the future. The main reason for this deprecation is to flag the issue that type-based, language-specific serialization could cause in distributed environments, where Producers and Consumers may depend on different JVM versions or have different versions of supporting libraries (that is, Kryo). We also wanted to draw the attention to the fact that Consumers and Producers may not even be Java-based, so polyglot style serialization (i.e., JSON) is better suited.

 

JavaSerializationMessageConverter和KryoMessageConverter目前仍然保留。但是,我们计划在未来将它们从核心包中移出并停止支持。这次弃用的主要原因是要指出基于类型的、特定于语言的序列化在分布式环境中可能引起的问题:生产者和消费者可能依赖不同的JVM版本,或者使用不同版本的支持库(即Kryo)。我们还想提请注意,消费者和生产者甚至可能不是基于Java的,因此多语言风格的序列化(即JSON)更为合适。

 

2.3.2. Deprecated Classes and Methods   不推荐使用的类和方法

 

The following is a quick summary of notable deprecations. See the corresponding {spring-cloud-stream-javadoc-current}[javadoc] for more details.

  • SharedChannelRegistry. Use SharedBindingTargetRegistry.
  • Bindings. Beans qualified by it are already uniquely identified by their type — for example, provided Source, Processor, or custom bindings:

public interface Sample {

        String OUTPUT = "sampleOutput";

        @Output(Sample.OUTPUT)
        MessageChannel output();
}

  • HeaderMode.raw. Use none, headers, or embeddedHeaders.
  • ProducerProperties.partitionKeyExtractorClass in favor of partitionKeyExtractorName and ProducerProperties.partitionSelectorClass in favor of partitionSelectorName. This change ensures that both components are Spring configured and managed and are referenced in a Spring-friendly way.
  • BinderAwareRouterBeanPostProcessor. While the component remains, it is no longer a BeanPostProcessor and will be renamed in the future.
  • BinderProperties.setEnvironment(Properties environment). Use BinderProperties.setEnvironment(Map<String, Object> environment).

 

以下是显着弃用的快速摘要。有关更多详细信息,请参阅相应的{spring-cloud-stream-javadoc-current} [javadoc]。

  • SharedChannelRegistry。使用SharedBindingTargetRegistry。
  • Bindings。由它限定的bean已经可以通过其类型唯一标识,例如框架提供的Source、Processor或自定义的绑定接口:

public interface Sample {

        String OUTPUT = "sampleOutput";

        @Output(Sample.OUTPUT)
        MessageChannel output();
}

  • HeaderMode.raw。使用none、headers或embeddedHeaders。
  • ProducerProperties.partitionKeyExtractorClass已弃用,改用partitionKeyExtractorName;ProducerProperties.partitionSelectorClass已弃用,改用partitionSelectorName。此更改确保这两个组件都由Spring配置和管理,并以对Spring友好的方式引用,见本列表之后的示例草图。
  • BinderAwareRouterBeanPostProcessor。虽然该组件仍然保留,但它不再是BeanPostProcessor,并且将来会重命名。
  • BinderProperties.setEnvironment(Properties environment)。使用BinderProperties.setEnvironment(Map<String, Object> environment)。
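
The following sketch shows the replacement style for the partition-key item above: a PartitionKeyExtractorStrategy registered as a bean and then referenced by name in configuration. The bean name, the customerId header, and the output binding name are assumptions for illustration.

下面的草图展示了上面分区键条目所推荐的替代方式:将PartitionKeyExtractorStrategy注册为bean,然后在配置中按名称引用。其中的bean名称、customerId消息头和output绑定名称均为示意性假设。

import org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class PartitioningConfiguration {

        // Referenced by name in configuration, for example:
        // spring.cloud.stream.bindings.output.producer.partitionKeyExtractorName=customerKeyExtractor
        @Bean
        public PartitionKeyExtractorStrategy customerKeyExtractor() {
                // Uses a hypothetical 'customerId' header as the partition key
                return message -> message.getHeaders().get("customerId");
        }
}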

 

This section goes into more detail about how you can work with Spring Cloud Stream. It covers topics such as creating and running stream applications.

本节详细介绍了如何使用Spring Cloud Stream。它涵盖了创建和运行流应用程序等主题。

 

3. Introducing Spring Cloud Stream   介绍Spring Cloud Stream

 

Spring Cloud Stream is a framework for building message-driven microservice applications. Spring Cloud Stream builds upon Spring Boot to create standalone, production-grade Spring applications and uses Spring Integration to provide connectivity to message brokers. It provides opinionated configuration of middleware from several vendors, introducing the concepts of persistent publish-subscribe semantics, consumer groups, and partitions.

 

Spring Cloud Stream是一个用于构建消息驱动的微服务应用程序的框架。Spring Cloud Stream构建于Spring Boot之上,用于创建独立的生产级Spring应用程序,并使用Spring Integration提供与消息代理的连接。它提供了多个供应商中间件的约定式(opinionated)配置,并引入了持久化发布-订阅语义、消费者组,以及分区的概念。

 

You can add the @EnableBinding annotation to your application to get immediate connectivity to a message broker, and you can add @StreamListener to a method to cause it to receive events for stream processing. The following example shows a sink application that receives external messages:

 

您可以将@EnableBinding注解添加到应用程序以立即连接到消息代理,并且可以将@StreamListener注解添加到方法以使其接收流处理事件。以下示例显示了接收外部消息的接收器应用程序:

 

@SpringBootApplication
@EnableBinding(Sink.class)
public class VoteRecordingSinkApplication {

  public static void main(String[] args) {
    SpringApplication.run(VoteRecordingSinkApplication.class, args);
  }

  @StreamListener(Sink.INPUT)
  public void processVote(Vote vote) {
      votingService.recordVote(vote);
  }
}

 

The @EnableBinding annotation takes one or more interfaces as parameters (in this case, the parameter is a single Sink interface). An interface declares input and output channels. Spring Cloud Stream provides the Source, Sink, and Processor interfaces. You can also define your own interfaces.

 

@EnableBinding注解接收一个或多个接口参数(在这种情况下,该参数是一个单个的Sink接口)。接口声明输入和输出管道。Spring Cloud Stream提供了Source,Sink,和Processor接口。您还可以定义自己的接口。

 

The following listing shows the definition of the Sink interface:

 

下面显示了Sink接口的定义:

 

public interface Sink {

  String INPUT = "input";

  @Input(Sink.INPUT)
  SubscribableChannel input();
}

 

The @Input annotation identifies an input channel, through which received messages enter the application. The @Output annotation identifies an output channel, through which published messages leave the application. The @Input and @Output annotations can take a channel name as a parameter. If a name is not provided, the name of the annotated method is used.

 

@Input注解标识一个输入管道,通过它接收进入应用程序的消息。@Output注解标识一个输出通道,通过它发布离开应用程序的消息。@Input和@Output注解可以接收管道名称作为参数。如果未提供名称,则使用注解方法的名称。

 

Spring Cloud Stream creates an implementation of the interface for you. You can use this in the application by autowiring it, as shown in the following example (from a test case):

 

Spring Cloud Stream为您创建了一个接口实现。您可以通过自动装配在应用程序中使用它,如以下示例所示(来自测试用例):

 

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = VoteRecordingSinkApplication.class)
@WebAppConfiguration
@DirtiesContext
public class StreamApplicationTests {

  @Autowired
  private Sink sink;

  @Test
  public void contextLoads() {
    assertNotNull(this.sink.input());
  }
}

 

4. Main Concepts   主要概念

 

Spring Cloud Stream provides a number of abstractions and primitives that simplify the writing of message-driven microservice applications. This section gives an overview of the following:

 

Spring Cloud Stream提供了许多抽象和原语,简化了消息驱动的微服务应用程序的编写。本节概述了以下内容:

 

 

4.1. Application Model   应用程序模型

 

A Spring Cloud Stream application consists of a middleware-neutral core. The application communicates with the outside world through input and output channels injected into it by Spring Cloud Stream. Channels are connected to external brokers through middleware-specific Binder implementations.

 

Spring Cloud Stream应用程序由中间件中立的核心组成。应用程序通过Spring Cloud Stream注入其中的输入和输出管道与外界通信。通过中间件特定的Binder实现,将管道连接到外部代理。

 

Figure 1. Spring Cloud Stream Application

 

4.1.1. Fat JAR   胖JAR

 

Spring Cloud Stream applications can be run in stand-alone mode from your IDE for testing. To run a Spring Cloud Stream application in production, you can create an executable (or “fat”) JAR by using the standard Spring Boot tooling provided for Maven or Gradle. See the Spring Boot Reference Guide for more details.

 

Spring Cloud Stream应用程序可以在IDE中以独立模式运行以进行测试。要在生产中运行Spring Cloud Stream应用程序,可以使用为Maven或Gradle提供的标准Spring Boot工具创建可执行(或“胖”)JAR。有关更多详细信息,请参见Spring Boot Reference Guide

 

4.2. The Binder Abstraction   Binder抽象

 

Spring Cloud Stream provides Binder implementations for Kafka and Rabbit MQ. Spring Cloud Stream also includes a TestSupportBinder, which leaves a channel unmodified so that tests can interact with channels directly and reliably assert on what is received. You can also use the extensible API to write your own Binder.

 

Spring Cloud Stream为KafkaRabbit MQ提供了Binder实现。Spring Cloud Stream还包含一个TestSupportBinder,它保留了一个未修改的管道,以便测试可以直接与管道交互,并可靠地断言收到的内容。您还可以使用可扩展API编写自己的Binder。

 

Spring Cloud Stream uses Spring Boot for configuration, and the Binder abstraction makes it possible for a Spring Cloud Stream application to be flexible in how it connects to middleware. For example, deployers can dynamically choose, at runtime, the destinations (such as the Kafka topics or RabbitMQ exchanges) to which channels connect. Such configuration can be provided through external configuration properties and in any form supported by Spring Boot (including application arguments, environment variables, and application.yml or application.properties files). In the sink example from the Introducing Spring Cloud Stream section, setting the spring.cloud.stream.bindings.input.destination application property to raw-sensor-data causes it to read from the raw-sensor-data Kafka topic or from a queue bound to the raw-sensor-data RabbitMQ exchange.

 

Spring Cloud Stream使用Spring Boot进行配置,Binder抽象使Spring Cloud Stream应用程序可以灵活地连接到中间件。例如,部署者可以在运行时动态选择管道连接的目的地(例如Kafka主题或RabbitMQ交换)。可以通过外部配置属性以及Spring Boot支持的任何形式(包括应用程序参数,环境变量,和application.yml或application.properties文件)来提供此类配置。在Introducing Spring Cloud Stream部分的接收器示例中,将spring.cloud.stream.bindings.input.destination应用程序属性设置为raw-sensor-data以使其从raw-sensor-data Kafka主题或绑定到raw-sensor-data RabbitMQ交换的队列中读取。

 

Spring Cloud Stream automatically detects and uses a binder found on the classpath. You can use different types of middleware with the same code. To do so, include a different binder at build time. For more complex use cases, you can also package multiple binders with your application and have it choose the binder( and even whether to use different binders for different channels) at runtime.

 

Spring Cloud Stream自动检测并使用类路径中找到的绑定器。您可以使用具有相同代码的不同类型的中间件。为此,请在构建时包含不同的绑定器。对于更复杂的用例,您还可以在应用程序中打包多个绑定器,并让它在运行时选择绑定器(甚至是否为不同的通道使用不同的绑定器)。

 

4.3. Persistent Publish-Subscribe Support   持久化发布-订阅支持

 

Communication between applications follows a publish-subscribe model, where data is broadcast through shared topics. This can be seen in the following figure, which shows a typical deployment for a set of interacting Spring Cloud Stream applications.

 

应用程序之间的通信遵循发布 - 订阅模型,其中数据通过共享主题广播。这可以在下图中看到,该图显示了一组交互式Spring Cloud Stream应用程序的典型部署。

 

Figure 2. Spring Cloud Stream Publish-Subscribe

 

Data reported by sensors to an HTTP endpoint is sent to a common destination named raw-sensor-data. From the destination, it is independently processed by a microservice application that computes time-windowed averages and by another microservice application that ingests the raw data into HDFS (Hadoop Distributed File System). In order to process the data, both applications declare the topic as their input at runtime.

 

传感器向HTTP端点报告的数据将发送到名为raw-sensor-data的公共目的地。从目的地开始,它由一个计算时间窗平均值的微服务应用程序和另一个将原始数据摄入HDFS(Hadoop分布式文件系统)的微服务应用程序单独处理。为了处理数据,两个应用程序都将主题声明为运行时的输入。

 

The publish-subscribe communication model reduces the complexity of both the producer and the consumer and lets new applications be added to the topology without disruption of the existing flow. For example, downstream from the average-calculating application, you can add an application that calculates the highest temperature values for display and monitoring. You can then add another application that interprets the same flow of averages for fault detection. Doing all communication through shared topics rather than point-to-point queues reduces coupling between microservices.

 

发布 - 订阅通信模型降低了生产者和消费者的复杂性,并允许将新应用程序添加到拓扑中,而不会中断现有流程。例如,在平均值计算应用程序的下游,您可以添加计算显示和监视的最高温度值的应用程序。然后,您可以添加另一个应用程序来解释相同的平均流量以进行故障检测。通过共享主题而不是点对点队列进行所有通信可以减少微服务之间的耦合。

 

While the concept of publish-subscribe messaging is not new, Spring Cloud Stream takes the extra step of making it an opinionated choice for its application model. By using native middleware support, Spring Cloud Stream also simplifies use of the publish-subscribe model across different platforms.

 

虽然发布 - 订阅消息的概念并不新鲜,但Spring Cloud Stream采取了额外的步骤,使其成为其应用程序模型的自觉选择。通过使用原生中间件支持,Spring Cloud Stream还简化了跨不同平台的发布 - 订阅模型的使用。

 

4.4. Consumer Groups   消费者组

 

While the publish-subscribe model makes it easy to connect applications through shared topics, the ability to scale up by creating multiple instances of a given application is equally important. When doing so, different instances of an application are placed in a competing consumer relationship, where only one of the instances is expected to handle a given message.

 

虽然发布 - 订阅模型使通过共享主题轻松连接应用程序,但通过创建给定应用程序的多个实例来扩展的能力同样重要。执行此操作时,应用程序的不同实例将放置在竞争的消费者关系中,其中只有一个实例需要处理给定的消息。

 

Spring Cloud Stream models this behavior through the concept of a consumer group. (Spring Cloud Stream consumer groups are similar to and inspired by Kafka consumer groups.) Each consumer binding can use the spring.cloud.stream.bindings.<channelName>.group property to specify a group name. For the consumers shown in the following figure, this property would be set as spring.cloud.stream.bindings.<channelName>.group=hdfsWrite or spring.cloud.stream.bindings.<channelName>.group=average.

 

Spring Cloud Stream通过消费者组的概念对此行为进行建模。(Spring Cloud Stream消费者组与Kafka消费者组类似并受其启发。)每个消费者绑定都可以使用spring.cloud.stream.bindings.<channelName>.group属性来指定组名称。对于下图中显示的消费者,此属性将设置为spring.cloud.stream.bindings.<channelName>.group=hdfsWrite或spring.cloud.stream.bindings.<channelName>.group=average。

 

Figure 3. Spring Cloud Stream Consumer Groups

 

All groups that subscribe to a given destination receive a copy of published data, but only one member of each group receives a given message from that destination. By default, when a group is not specified, Spring Cloud Stream assigns the application to an anonymous and independent single-member consumer group that is in a publish-subscribe relationship with all other consumer groups.

 

订阅给定目的地的所有组都会收到已发布数据的副本,但每个组中只有一个成员从该目的地接收给定的消息。默认情况下,当未指定组时,Spring Cloud Stream会将应用程序分配给与所有其他消费者组处于发布 - 订阅关系的一个匿名且独立的单成员消费者组。

 

4.5. Consumer Types   消费者类型

 

Two types of consumer are supported:

  • Message-driven (sometimes referred to as Asynchronous)
  • Polled (sometimes referred to as Synchronous)

 

支持两种类型的消费者:

  • 消息驱动(有时称为异步)
  • 轮询(有时称为同步)

 

Prior to version 2.0, only asynchronous consumers were supported. A message is delivered as soon as it is available and a thread is available to process it.

When you wish to control the rate at which messages are processed, you might want to use a synchronous consumer.

 

在2.0版之前,仅支持异步消费者。消息一旦可用就会传递,并且有一个线程可以处理它。

如果要控制处理消息的速率,可能需要使用同步消费者。

 

4.5.1. Durability   持久性

 

Consistent with the opinionated application model of Spring Cloud Stream, consumer group subscriptions are durable. That is, a binder implementation ensures that group subscriptions are persistent and that, once at least one subscription for a group has been created, the group receives messages, even if they are sent while all applications in the group are stopped.

 

与Spring Cloud Stream的固定应用模型一致,消费者组订阅是持久的。也就是说,绑定器实现确保组订阅是持久的,并且一旦创建了组的至少一个订阅,该组就接收消息,即使它们是在组中的所有应用程序都被停止时发送的。

 

Anonymous subscriptions are non-durable by nature. For some binder implementations (such as RabbitMQ), it is possible to have non-durable group subscriptions.

匿名订阅本质上是非持久的。对于某些绑定器实现(例如RabbitMQ),可以具有非持久的组订阅。

 

In general, it is preferable to always specify a consumer group when binding an application to a given destination. When scaling up a Spring Cloud Stream application, you must specify a consumer group for each of its input bindings. Doing so prevents the application’s instances from receiving duplicate messages (unless that behavior is desired, which is unusual).

 

通常,在将应用程序绑定到给定目的地时,最好始终指定消费者组。扩展Spring Cloud Stream应用程序时,必须为每个输入绑定指定一个消费者组。这样做可以防止应用程序的实例接收重复的消息(除非需要这种行为,这是不正常的)。

 

4.6. Partitioning Support   分区支持

 

Spring Cloud Stream provides support for partitioning data between multiple instances of a given application. In a partitioned scenario, the physical communication medium (such as the broker topic) is viewed as being structured into multiple partitions. One or more producer application instances send data to multiple consumer application instances and ensure that data identified by common characteristics are processed by the same consumer instance.

 

Spring Cloud Stream支持在给定应用程序的多个实例之间对数据进行分区。在分区场景中,物理通信介质(例如代理主题)被视为被划分成多个分区。一个或多个生产者应用程序实例将数据发送到多个消费者应用程序实例,并确保具有共同特征的数据由同一个消费者实例处理。

 

Spring Cloud Stream provides a common abstraction for implementing partitioned processing use cases in a uniform fashion. Partitioning can thus be used whether the broker itself is naturally partitioned (for example, Kafka) or not (for example, RabbitMQ).

 

Spring Cloud Stream提供了一种通用抽象,用于以统一的方式实现分区处理用例。因此,无论代理本身是否天然支持分区(例如Kafka天然分区,而RabbitMQ不是),都可以使用分区。

 

Figure 4. Spring Cloud Stream Partitioning

 

Partitioning is a critical concept in stateful processing, where it is critical (for either performance or consistency reasons) to ensure that all related data is processed together. For example, in the time-windowed average calculation example, it is important that all measurements from any given sensor are processed by the same application instance.

 

分区是有状态处理中的一个关键概念,其中确保所有相关数据一起处理至关重要(出于性能或一致性原因)。例如,在时间窗口平均值计算示例中,重要的是来自任何给定传感器的所有测量值都由同一应用程序实例处理。

 

To set up a partitioned processing scenario, you must configure both the data-producing and the data-consuming ends.

要设置分区处理方案,必须同时配置数据生成和数据消费两端。

 

5. Programming Model   编程模型

 

To understand the programming model, you should be familiar with the following core concepts:

  • Destination Binders: Components responsible to provide integration with the external messaging systems.
  • Destination Bindings: Bridge between the external messaging systems and application provided Producers and Consumers of messages (created by the Destination Binders).
  • Message: The canonical data structure used by producers and consumers to communicate with Destination Binders (and thus other applications via external messaging systems).

 

要了解编程模型,您应该熟悉以下核心概念:

  • 目标绑定器:负责提供与外部消息系统集成的组件。
  • 目标绑定:外部消息系统与应用程序提供的消息生产者和消费者之间的桥梁(由目标绑定器创建)。
  • 消息:生产者和使用者用于与目标绑定器(以及通过外部消息系统的其他应用程序)通信的规范数据结构。

 

 

5.1. Destination Binders   目标绑定器

 

Destination Binders are extension components of Spring Cloud Stream responsible for providing the necessary configuration and implementation to facilitate integration with external messaging systems. This integration is responsible for connectivity, delegation, and routing of messages to and from producers and consumers, data type conversion, invocation of the user code, and more.

 

目标绑定器是Spring Cloud Stream的扩展组件,负责提供必要的配置和实现,以促进与外部消息系统的集成。此集成负责消息在生产者和消费者之间的连接、委派和路由,以及数据类型转换、用户代码的调用等。

 

Binders handle a lot of the boilerplate responsibilities that would otherwise fall on your shoulders. However, to accomplish that, the binder still needs some help in the form of a minimalistic yet required set of instructions from the user, which typically come in the form of some type of configuration.

 

绑定器处理了许多样板性的职责,否则这些工作就要落在您的肩上。然而,为了实现这一点,绑定器仍然需要用户以最简但必需的指令形式提供一些帮助,这通常以某种配置的形式出现。

 

While it is out of scope of this section to discuss all of the available binder and binding configuration options (the rest of the manual covers them extensively), Destination Binding does require special attention. The next section discusses it in detail.

 

虽然讨论所有可用的绑定器和绑定配置选项超出了本节的范围(本手册的其余部分将对其进行全面介绍),但目标绑定确实需要特别注意。下一节将详细讨论它。

 

5.2. Destination Bindings   目标绑定

 

As stated earlier, Destination Bindings provide a bridge between the external messaging system and application-provided Producers and Consumers.

 

如前所述,目标绑定在外部消息系统和应用程序提供的生产者和消费者之间提供了一个桥梁。

 

Applying the @EnableBinding annotation to one of the application’s configuration classes defines a destination binding. The @EnableBinding annotation itself is meta-annotated with @Configuration and triggers the configuration of the Spring Cloud Stream infrastructure.

 

将@EnableBinding注解应用于应用程序的某个配置类即可定义一个目标绑定。@EnableBinding注解本身用@Configuration进行了元注解,它会触发Spring Cloud Stream基础设施的配置。

 

The following example shows a fully configured and functioning Spring Cloud Stream application that receives the payload of the message from the INPUT destination as a String type (see Content Type Negotiation section), logs it to the console and sends it to the OUTPUT destination after converting it to upper case.

 

以下示例显示了一个完全配置且正常运行的Spring Cloud Stream应用程序,该应用程序将来自INPUT目标的消息负载接收为String类型(请参阅内容类型协商部分),将消息负载记录到控制台,并在将其转换为大写后将其发送到OUTPUT目标。

 

@SpringBootApplication
@EnableBinding(Processor.class)
public class MyApplication {

        public static void main(String[] args) {
                SpringApplication.run(MyApplication.class, args);
        }

        @StreamListener(Processor.INPUT)
        @SendTo(Processor.OUTPUT)
        public String handle(String value) {
                System.out.println("Received: " + value);
                return value.toUpperCase();
        }
}

 

As you can see, the @EnableBinding annotation can take one or more interface classes as parameters. The parameters are referred to as bindings, and they contain methods representing bindable components. These components are typically message channels (see Spring Messaging) for channel-based binders (such as Rabbit, Kafka, and others). However, other types of bindings can provide support for the native features of the corresponding technology. For example, the Kafka Streams binder (formerly known as KStream) allows native bindings directly to Kafka Streams (see Kafka Streams for more details).

 

如您所见,@EnableBinding注解可以接收一个或多个接口类作为参数。这些参数称为绑定,它们包含表示可绑定组件的方法。这些组件通常是基于通道的绑定器(例如Rabbit,Kafka等)的消息通道(请参阅Spring Messaging)。然而,其他类型的绑定可以为相应技术的原生特征提供支持。例如,Kafka Streams binder(以前称为KStream)允许直接原生绑定到Kafka Streams(有关详细信息,请参阅Kafka Streams)。

 

Spring Cloud Stream already provides binding interfaces for typical message exchange contracts, which include:

  • Sink: Identifies the contract for the message consumer by providing the destination from which the message is consumed.
  • Source: Identifies the contract for the message producer by providing the destination to which the produced message is sent.
  • Processor: Encapsulates both the sink and the source contracts by exposing two destinations that allow consumption and production of messages.

 

Spring Cloud Stream已经为典型的消息交换协定提供了绑定接口,其中包括:

  • Sink:通过提供消费消息的来源目标,标识消息消费者的契约。
  • Source:通过提供生成的消息要发送到的目标,标识消息生产者的契约。
  • Processor:通过暴露两个允许消费和生产消息的目标,同时封装了Sink和Source契约。

 

public interface Sink {

  String INPUT = "input";

  @Input(Sink.INPUT)
  SubscribableChannel input();
}

public interface Source {

  String OUTPUT = "output";

  @Output(Source.OUTPUT)
  MessageChannel output();
}

 

public interface Processor extends Source, Sink {}

 

While the preceding example satisfies the majority of cases, you can also define your own contracts by defining your own bindings interfaces and use @Input and @Output annotations to identify the actual bindable components.

 

虽然前面的示例满足大多数情况,但您也可以通过定义自己的绑定接口以及使用@Input和@Output注解标识实际的可绑定组件来定义自己的合同。

 

For example:

 

public interface Barista {

    @Input
    SubscribableChannel orders();

    @Output
    MessageChannel hotDrinks();

    @Output
    MessageChannel coldDrinks();
}

 

Using the interface shown in the preceding example as a parameter to @EnableBinding triggers the creation of the three bound channels named orders, hotDrinks, and coldDrinks, respectively.

 

使用前面例子中显示的接口作为@EnableBinding注解的一个参数将触发三个绑定通道的创建,分别是命名为orders,hotDrinks和coldDrinks。

 

You can provide as many binding interfaces as you need, as arguments to the @EnableBinding annotation, as shown in the following example:

 

您可以根据需要提供尽可能多的绑定接口,作为@EnableBinding注解的参数,如以下示例所示:

 

@EnableBinding(value = { Orders.class, Payment.class })

 

In Spring Cloud Stream, the bindable MessageChannel components are the Spring Messaging MessageChannel (for outbound) and its extension, SubscribableChannel (for inbound).

 

在Spring Cloud Stream中,可绑定MessageChannel组件是Spring Messaging MessageChannel(用于出站)及其扩展SubscribableChannel(用于入站)。

 

Pollable Destination Binding   可轮询的目的地绑定

 

While the previously described bindings support event-based message consumption, sometimes you need more control, such as rate of consumption.

 

虽然之前描述的绑定支持基于事件的消息消费,但有时您需要更多控制,例如消费速率。

 

Starting with version 2.0, you can now bind a pollable consumer:

 

从2.0版开始,您现在可以绑定可轮询消费者:

 

The following example shows how to bind a pollable consumer:

 

以下示例显示如何绑定可轮询消费者:

 

public interface PolledBarista {

    @Input
    PollableMessageSource orders();

    . . .
}

 

In this case, an implementation of PollableMessageSource is bound to the orders “channel”. See Using Polled Consumers for more details.

 

在这种情况下,PollableMessageSource的实现被绑定到orders“通道”。有关详细信息,请参阅使用轮询的使用者

 

Customizing Channel Names   自定义管道名称

 

By using the @Input and @Output annotations, you can specify a customized channel name for the channel, as shown in the following example:

 

通过使用@Input和@Output注解,您可以为通道指定自定义通道名称,如以下示例所示:

 

public interface Barista {
    @Input("inboundOrders")
    SubscribableChannel orders();
}

 

In the preceding example, the created bound channel is named inboundOrders.

 

在前面的示例中,创建的绑定通道被命名为inboundOrders。

 

Normally, you need not access individual channels or bindings directly (other than configuring them via the @EnableBinding annotation). However, there may be times, such as testing or other corner cases, when you do.

 

通常,您无需直接访问单个通道或绑定(除了通过@EnableBinding注解配置它们之外)。但是,在某些情况下(例如测试或其他特殊场景),您可能需要这样做。

 

Aside from generating channels for each binding and registering them as Spring beans, for each bound interface, Spring Cloud Stream generates a bean that implements the interface. That means you can have access to the interfaces representing the bindings or individual channels by auto-wiring either in your application, as shown in the following two examples:

 

除了为每个绑定生成通道并将它们注册为Spring bean之外,对于每个绑定接口,Spring Cloud Stream都会生成一个实现该接口的bean。这意味着您可以通过在应用程序中自动装配来访问表示绑定或单个通道的接口,如以下两个示例所示:

 

Autowire Binding interface

 

自动装配绑定接口

 

@Autowired
private Source source;

public void sayHello(String name) {
    source.output().send(MessageBuilder.withPayload(name).build());
}

 

Autowire individual channel

 

自动装配单个通道

 

@Autowired
private MessageChannel output;

public void sayHello(String name) {
    output.send(MessageBuilder.withPayload(name).build());
}

 

You can also use standard Spring’s @Qualifier annotation for cases when channel names are customized or in multiple-channel scenarios that require specifically named channels.

 

您还可以在自定义通道名称的情况下或在需要特定命名通道的多通道方案中使用标准Spring的@Qualifier注解。

 

The following example shows how to use the @Qualifier annotation in this way:

 

以下示例显示如何以这种方式使用@Qualifier注解:

 

@Autowired
@Qualifier("myChannel")
private MessageChannel output;

 

5.3. Producing and Consuming Messages   生产及消费消息

 

You can write a Spring Cloud Stream application by using either Spring Integration annotations or Spring Cloud Stream native annotations.

 

您可以使用Spring Integration注释或Spring Cloud Stream原生注释编写Spring Cloud Stream应用程序。

 

5.3.1. Spring Integration Support

 

Spring Cloud Stream is built on the concepts and patterns defined by Enterprise Integration Patterns and relies in its internal implementation on an already established and popular implementation of Enterprise Integration Patterns within the Spring portfolio of projects: Spring Integration framework.

 

Spring Cloud Stream建立在Enterprise Integration Patterns所定义的概念和模式之上,其内部实现依赖于Spring项目组合中一个已经成熟且流行的企业集成模式实现:Spring Integration框架。

 

So it is only natural for it to support the foundation, semantics, and configuration options that are already established by Spring Integration.

For example, you can attach the output channel of a Source to a MessageSource and use the familiar @InboundChannelAdapter annotation, as follows:

 

所以,它自然而然地支持Spring Integration已经建立的基础、语义和配置选项。

例如,您可以将Source的输出通道附加到MessageSource上并使用熟悉的@InboundChannelAdapter注释,如下所示:

 

@EnableBinding(Source.class)
public class TimerSource {

  @Bean
  @InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "10", maxMessagesPerPoll = "1"))
  public MessageSource<String> timerMessageSource() {
    return () -> new GenericMessage<>("Hello Spring Cloud Stream");
  }
}

  

Similarly, you can use @Transformer or @ServiceActivator while providing an implementation of a message handler method for a Processor binding contract, as shown in the following example:

 

同样,您可以使用@Transformer或@ServiceActivator注解,同时为处理器绑定契约提供消息处理程序方法的实现,如以下示例所示:

 

@EnableBinding(Processor.class)
public class TransformProcessor {

  @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
  public Object transform(String message) {
    return message.toUpperCase();
  }
}

 

While this may be skipping ahead a bit, it is important to understand that, when you consume from the same binding using @StreamListener annotation, a pub-sub model is used. Each method annotated with @StreamListener receives its own copy of a message, and each one has its own consumer group. However, if you consume from the same binding by using one of the Spring Integration annotation (such as @Aggregator, @Transformer, or @ServiceActivator), those consume in a competing model. No individual consumer group is created for each subscription.

虽然这可能稍微超前了一点,但重要的是要理解:当您使用@StreamListener注解从同一个绑定消费时,使用的是发布-订阅模型。每个使用@StreamListener注解的方法都会收到自己的消息副本,并且每个方法都有自己的消费者组。但是,如果您通过使用Spring Integration注解之一(例如@Aggregator、@Transformer或@ServiceActivator)从同一个绑定消费,则这些方法以竞争模型消费,不会为每个订阅创建单独的消费者组。

 

5.3.2. Using @StreamListener Annotation   使用@StreamListener注解

 

Complementary to its Spring Integration support, Spring Cloud Stream provides its own @StreamListener annotation, modeled after other Spring Messaging annotations (@MessageMapping, @JmsListener, @RabbitListener, and others) and provides conveniences, such as content-based routing and others.

 

作为其Spring Integration支持的补充,Spring Cloud Stream提供了自己的@StreamListener注解,它仿照其他Spring Messaging注解(@MessageMapping、@JmsListener、@RabbitListener等)设计,并提供诸如基于内容的路由等便利。

 

@EnableBinding(Sink.class)
public class VoteHandler {

  @Autowired
  VotingService votingService;

  @StreamListener(Sink.INPUT)
  public void handle(Vote vote) {
    votingService.record(vote);
  }
}

 

As with other Spring Messaging methods, method arguments can be annotated with @Payload, @Headers, and @Header.

 

与其他Spring Messaging的方法一样,方法的参数可以使用@Payload,@Headers,和@Header注解。
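
For example, a handler along the following lines can receive the converted payload together with an individual header and the full header map. This is a sketch that reuses the Vote type from the earlier examples; the header access is illustrative only.

例如,类似下面这样的处理方法可以同时接收转换后的负载、单个消息头以及完整的消息头映射。这只是一个沿用前面示例中Vote类型的草图,对消息头的访问仅作示意。

import java.util.Map;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.handler.annotation.Headers;
import org.springframework.messaging.handler.annotation.Payload;

@EnableBinding(Sink.class)
public class AnnotatedArgumentsHandler {

        @StreamListener(Sink.INPUT)
        public void handle(@Payload Vote vote,
                        @Header(name = "contentType", required = false) String contentType,
                        @Headers Map<String, Object> headers) {
                System.out.println("Received " + vote + " (contentType=" + contentType + ", headers=" + headers + ")");
        }
}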

 

For methods that return data, you must use the @SendTo annotation to specify the output binding destination for data returned by the method, as shown in the following example:

 

对于返回数据的方法,必须使用@SendTo注释指定方法返回的数据的输出绑定目标,如以下示例所示:

 

@EnableBinding(Processor.class)
public class TransformProcessor {

  @Autowired
  VotingService votingService;

  @StreamListener(Processor.INPUT)
  @SendTo(Processor.OUTPUT)
  public VoteResult handle(Vote vote) {
    return votingService.record(vote);
  }
}

 

5.3.3. Using @StreamListener for Content-based routing   使用@StreamListener进行基于内容的路由

 

Spring Cloud Stream supports dispatching messages to multiple handler methods annotated with @StreamListener based on conditions.

 

Spring Cloud Stream支持根据条件将消息分派给多个使用@StreamListener注解的处理程序方法。

 

In order to be eligible to support conditional dispatching, a method must satisfy the following conditions:

  • It must not return a value.
  • It must be an individual message handling method (reactive API methods are not supported).

 

为了有资格支持条件分派,方法必须满足以下条件:

  • 它不能返回值。
  • 它必须是单独的消息处理方法(不支持反应式API方法)。

 

The condition is specified by a SpEL expression in the condition argument of the annotation and is evaluated for each message. All the handlers that match the condition are invoked in the same thread, and no assumption must be made about the order in which the invocations take place.

 

条件由注解的condition参数中的SpEL表达式指定,并针对每条消息进行求值。匹配条件的所有处理程序都在同一个线程中调用,并且不能对调用发生的顺序做任何假设。

 

In the following example of a @StreamListener with dispatching conditions, all the messages bearing a header type with the value bogey are dispatched to the receiveBogey method, and all the messages bearing a header type with the value bacall are dispatched to the receiveBacall method.

 

在以下带有调度条件的@StreamListener示例中,所有带有值为bogey的type头的消息都会被分派到receiveBogey方法,所有带有值为bacall的type头的消息都会被分派到receiveBacall方法。

 

@EnableBinding(Sink.class)
@EnableAutoConfiguration
public static class TestPojoWithAnnotatedArguments {

    @StreamListener(target = Sink.INPUT, condition = "headers['type']=='bogey'")
    public void receiveBogey(@Payload BogeyPojo bogeyPojo) {
       // handle the message
    }

    @StreamListener(target = Sink.INPUT, condition = "headers['type']=='bacall'")
    public void receiveBacall(@Payload BacallPojo bacallPojo) {
       // handle the message
    }
}

 

Content Type Negotiation in the Context of condition   条件上下文中的内容类型协商

 

It is important to understand some of the mechanics behind content-based routing using the condition argument of @StreamListener, especially in the context of the type of the message as a whole. It may also help if you familiarize yourself with the Content Type Negotiation before you proceed.

 

理解使用@StreamListener的condition参数进行基于内容的路由背后的一些机制非常重要,尤其是在涉及消息整体类型的上下文中。在继续之前,先熟悉一下内容类型协商可能也会有所帮助。

 

Consider the following scenario:

 

请考虑以下情形:

 

@EnableBinding(Sink.class)
@EnableAutoConfiguration
public static class CatsAndDogs {

    @StreamListener(target = Sink.INPUT, condition = "payload.class.simpleName=='Dog'")
    public void bark(Dog dog) {
       // handle the message
    }

    @StreamListener(target = Sink.INPUT, condition = "payload.class.simpleName=='Cat'")
    public void purr(Cat cat) {
       // handle the message
    }
}

 

The preceding code is perfectly valid. It compiles and deploys without any issues, yet it never produces the result you expect.

 

上述代码完全有效。它编译和部署没有任何问题,但它永远不会产生您期望的结果。

 

That is because you are testing something that does not yet exist in the state you expect. That is because the payload of the message has not yet been converted from the wire format (byte[]) to the desired type. In other words, it has not yet gone through the type conversion process described in the Content Type Negotiation.

 

那是因为你正在测试一些在你期望的状态下尚不存在的东西。这是因为消息的有效负载尚未从有线格式(byte[])转换为所需类型。换句话说,它尚未经历内容类型协商中描述的类型转换过程。

 

So, unless you use a SpEL expression that evaluates raw data (for example, the value of the first byte in the byte array), use message header-based expressions (such as condition = "headers['type']=='dog'").

 

因此,除非使用对原始数据求值的SpEL表达式(例如,字节数组中第一个字节的值),否则请使用基于消息头的表达式(例如condition = "headers['type']=='dog'")。
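
For example, the preceding scenario could be rewritten along the following lines. This is a sketch that assumes the producer sets a type header on each message:

例如,前面的场景可以改写成类似下面的样子。这是一个草图,假设生产者在每条消息上都设置了type消息头:

@EnableBinding(Sink.class)
@EnableAutoConfiguration
public static class CatsAndDogs {

    @StreamListener(target = Sink.INPUT, condition = "headers['type']=='dog'")
    public void bark(Dog dog) {
       // handle the message
    }

    @StreamListener(target = Sink.INPUT, condition = "headers['type']=='cat'")
    public void purr(Cat cat) {
       // handle the message
    }
}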

 

At the moment, dispatching through @StreamListener conditions is supported only for channel-based binders (not for reactive programming) support.

目前,通过@StreamListener条件进行调度只支持基于通道的绑定器(不支持响应式编程)。

 

5.3.4. Using Polled Consumers   使用轮询的消费者

 

When using polled consumers, you poll the PollableMessageSource on demand. Consider the following example of a polled consumer:

 

使用轮询消费者时,您可以按需轮询PollableMessageSource。考虑以下轮询消费者的示例:

 

public interface PolledConsumer {

    @Input
    PollableMessageSource destIn();

    @Output
    MessageChannel destOut();
}

 

Given the polled consumer in the preceding example, you might use it as follows:

 

鉴于前面示例中的轮询消费者,您可以按如下方式使用它:

 

@Bean
public ApplicationRunner poller(PollableMessageSource destIn, MessageChannel destOut) {
    return args -> {
        while (someCondition()) {
            try {
                if (!destIn.poll(m -> {
                    String newPayload = ((String) m.getPayload()).toUpperCase();
                    destOut.send(new GenericMessage<>(newPayload));
                })) {
                    Thread.sleep(1000);
                }
            }
            catch (Exception e) {
                // handle failure (throw an exception to reject the message);
            }
        }
    };
}

 

The PollableMessageSource.poll() method takes a MessageHandler argument (often a lambda expression, as shown here). It returns true if the message was received and successfully processed.

 

PollableMessageSource.poll()方法接受一个MessageHandler参数(通常是lambda表达式,如此处所示)。如果收到并成功处理了消息,则返回true。

 

As with message-driven consumers, if the MessageHandler throws an exception, messages are published to error channels, as discussed in “[binder-error-channels]”.

 

与消息驱动的消费者一样,如果MessageHandler抛出异常,则将消息发布到错误通道,如“ [binder-error-channels] ”中所述。

 

Normally, the poll() method acknowledges the message when the MessageHandler exits. If the method exits abnormally, the message is rejected (not re-queued). You can override that behavior by taking responsibility for the acknowledgment, as shown in the following example:

 

通常,poll()方法在MessageHandler退出时确认消息。如果方法异常退出,则拒绝该消息(不重新排队)。您可以通过承担确认责任来覆盖该行为,如以下示例所示:

 

@Bean
public ApplicationRunner poller(PollableMessageSource dest1In, MessageChannel dest2Out) {
    return args -> {
        while (someCondition()) {
            if (!dest1In.poll(m -> {
                StaticMessageHeaderAccessor.getAcknowledgmentCallback(m).noAutoAck();
                // e.g. hand off to another thread which can perform the ack
                // or acknowledge(Status.REQUEUE)
            })) {
                Thread.sleep(1000);
            }
        }
    };
}

 

You must ack (or nack) the message at some point, to avoid resource leaks.

您必须在某个时候确认(或否定确认)消息,以避免资源泄漏。

Some messaging systems (such as Apache Kafka) maintain a simple offset in a log. If a delivery fails and is re-queued with StaticMessageHeaderAccessor.getAcknowledgmentCallback(m).acknowledge(Status.REQUEUE);, any later successfully ack’d messages are redelivered.

某些消息系统(例如Apache Kafka)在日志中维护一个简单的偏移量。如果使用StaticMessageHeaderAccessor.getAcknowledgmentCallback(m).acknowledge(Status.REQUEUE);传递失败并重新排队,则会重新传递任何以后成功确认的消息。

 

There is also an overloaded poll method, for which the definition is as follows:

 

还有一个重载的poll方法,其定义如下:

 

poll(MessageHandler handler, ParameterizedTypeReference<?> type)

 

The type is a conversion hint that allows the incoming message payload to be converted, as shown in the following example:

 

type是一个转换提示,允许转换传入消息负载,如以下示例所示:

 

boolean result = pollableSource.poll(received -> {
            Map<String, Foo> payload = (Map<String, Foo>) received.getPayload();
            ...
}, new ParameterizedTypeReference<Map<String, Foo>>() {});

 

5.4. Error Handling   错误处理

 

Errors happen, and Spring Cloud Stream provides several flexible mechanisms to handle them. The error handling comes in two flavors:

  • application: The error handling is done within the application (custom error handler).
  • system: The error handling is delegated to the binder (re-queue, DL, and others). Note that the techniques are dependent on binder implementation and the capability of the underlying messaging middleware.

 

错误发生时,Spring Cloud Stream提供了几种灵活的机制来处理它们。错误处理有两种形式:

  • application错误处理在应用程序中完成(自定义错误处理程序)。
  • system将错误处理委托给绑定器(重新排队,DL,等)。请注意,这些技术取决于绑定器实现和底层消息中间件的功能。

 

Spring Cloud Stream uses the Spring Retry library to facilitate successful message processing. See Retry Template for more details. However, when all fails, the exceptions thrown by the message handlers are propagated back to the binder. At that point, binder invokes custom error handler or communicates the error back to the messaging system (re-queue, DLQ, and others).

 

Spring Cloud Stream使用Spring Retry库来促进消息处理成功。有关详细信息,请参阅Retry Template。但是,当全部失败时,消息处理程序抛出的异常将传播回绑定器。此时,绑定器调用自定义错误处理程序或将错误传回消息系统(重新排队,DLQ,等)。

 

Application Error Handling   应用程序错误处理

 

There are two types of application-level error handling. Errors can be handled at each binding subscription or a global handler can handle all the binding subscription errors. Let’s review the details.

 

有两种类型的应用程序级错误处理。可以在每个绑定订阅处处理错误,或者全局处理程序可以处理所有绑定订阅错误。我们来看看细节。

 

Figure 5. A Spring Cloud Stream Sink Application with Custom and Global Error Handlers

 

For each input binding, Spring Cloud Stream creates a dedicated error channel with the following semantics: <destinationName>.errors.

 

对于每个输入绑定,Spring Cloud Stream都会创建一个专用的错误通道,其命名语义为<destinationName>.errors。

 

The <destinationName> consists of the name of the binding (such as input) and the name of the group (such as myGroup).

<destinationName>由绑定的名称(例如input)和组的名称(例如myGroup)组成。

 

Consider the following:

 

考虑以下:

 

@StreamListener(Sink.INPUT) // destination name 'input.myGroup'
public void handle(Person value) {
        throw new RuntimeException("BOOM!");
}

@ServiceActivator(inputChannel = Processor.INPUT + ".myGroup.errors") //channel name 'input.myGroup.errors'
public void error(Message<?> message) {
        System.out.println("Handling ERROR: " + message);
}

 

In the preceding example the destination name is input.myGroup and the dedicated error channel name is input.myGroup.errors.

 

在前面的示例中,目标名称是input.myGroup,专用错误通道名称是input.myGroup.errors。

 

The use of @StreamListener annotation is intended specifically to define bindings that bridge internal channels and external destinations. Given that the destination specific error channel does NOT have an associated external destination, such channel is a prerogative of Spring Integration (SI). This means that the handler for such destination must be defined using one of the SI handler annotations (i.e., @ServiceActivator, @Transformer etc.).

@StreamListener注释的使用专门用于定义桥接内部通道和外部目标的绑定。鉴于目标特定的错误通道没有关联的外部目标,此类通道是Spring Integration(SI)的特权。这意味着必须使用SI处理程序注释之一(即@ServiceActivator,@Transformer等)定义此类目标的处理程序。

If group is not specified, an anonymous group is used (something like input.anonymous.2K37rb06Q6m2r51-SPIDDQ), which is not suitable for error handling scenarios, since you do not know what it is going to be until the destination is created.

如果未指定组则使用匿名组(类似于input.anonymous.2K37rb06Q6m2r51-SPIDDQ),这不适合错误处理,因为在创建目标之前您不知道它将是什么。

 

Also, in the event you are binding to the existing destination such as:

 

此外,如果您绑定到现有目标,例如:

 

spring.cloud.stream.bindings.input.destination=myFooDestination
spring.cloud.stream.bindings.input.group=myGroup

 

the full destination name is myFooDestination.myGroup and then the dedicated error channel name is myFooDestination.myGroup.errors.

 

则完整的目标名称是myFooDestination.myGroup,专用的错误通道名称是myFooDestination.myGroup.errors。
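
For instance, a subscriber for that error channel could look like the following sketch, which follows the same pattern as the earlier example with only the channel name adjusted:

例如,该错误通道的订阅者可以类似于下面的草图,它与前面的示例采用相同的模式,只是调整了通道名称:

@ServiceActivator(inputChannel = "myFooDestination.myGroup.errors")
public void handleError(Message<?> message) {
        System.out.println("Handling ERROR: " + message);
}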

 

Back to the example…​

 

回到例子......

 

The handle(..) method, which subscribes to the channel named input, throws an exception. Given there is also a subscriber to the error channel input.myGroup.errors all error messages are handled by this subscriber.

 

订阅input通道的handle(..)方法会抛出异常。鉴于还存在input.myGroup.errors错误通道的订阅者,因此所有错误消息都由该订阅者处理。

 

If you have multiple bindings, you may want to have a single error handler. Spring Cloud Stream automatically provides support for a global error channel by bridging each individual error channel to the channel named errorChannel, allowing a single subscriber to handle all errors, as shown in the following example:

 

如果您有多个绑定,则可能需要单个错误处理程序。Spring Cloud Stream通过将每个独立的错误通道桥接到命名为errorChannel的通道自动为全局错误通道提供支持,允许单个订阅者处理所有错误,如以下示例所示:

 

@StreamListener("errorChannel")
public void error(Message<?> message) {
        System.out.println("Handling ERROR: " + message);
}

 

This may be a convenient option if error handling logic is the same regardless of which handler produced the error.

 

如果错误处理逻辑相同,无论哪个处理程序产生错误,这可能是一个方便的选项。

 

Also, error messages sent to the errorChannel can be published to the specific destination at the broker by configuring a binding named error for the outbound target. This option provides a mechanism to automatically send error messages to another application bound to that destination or for later retrieval (for example, audit). For example, to publish error messages to a broker destination named myErrors, set the following property:

 

此外,通过将命名为error的绑定配置为出站目标,可以将发送到errorChannel的错误消息发布到代理的特定目标。此选项提供了一种机制,可以将错误消息自动发送到绑定到该目标的另一个应用程序,或供以后检索(例如,审计)。例如,要将错误消息发布到命名为myErrors的代理目标,请设置以下属性:

 

spring.cloud.stream.bindings.error.destination=myErrors

 

The ability to bridge global error channel to a broker destination essentially provides a mechanism which connects the application-level error handling with the system-level error handling.

将全局错误通道桥接到代理目标的能力实质上提供了一种将应用程序级错误处理与系统级错误处理相连接的机制。

 

System Error Handling   系统错误处理

 

System-level error handling implies that the errors are communicated back to the messaging system and, given that not every messaging system is the same, the capabilities may differ from binder to binder.

 

系统级错误处理意味着将错误传递回消息系统,并且假设并非每个消息系统都相同,则功能可能因绑定器而异。

 

That said, in this section we explain the general idea behind system level error handling and use Rabbit binder as an example. NOTE: Kafka binder provides similar support, although some configuration properties do differ. Also, for more details and configuration options, see the individual binder’s documentation.

 

也就是说,在本节中,我们将解释系统级错误处理背后的一般概念,并以Rabbit绑定器为例。注意:虽然某些配置属性有所不同,但Kafka绑定器提供了类似的支持。另外,有关更多详细信息和配置选项,请参阅各个绑定器的文档。

 

If no internal error handlers are configured, the errors propagate to the binders, and the binders subsequently propagate those errors back to the messaging system. Depending on the capabilities of the messaging system such a system may drop the message, re-queue the message for re-processing or send the failed message to DLQ. Both Rabbit and Kafka support these concepts. However, other binders may not, so refer to your individual binder’s documentation for details on supported system-level error-handling options.

 

如果未配置内部错误处理程序,则错误会传播到绑定器,然后绑定器会将这些错误传播回消息系统。根据消息系统的功能,这样的系统可以丢弃消息,重新排队消息以进行重新处理或将失败的消息发送到DLQ。Rabbit和Kafka都支持这些概念。但是,其他绑定器可能不会,因此请参阅各个绑定器的文档,以获取有关受支持的系统级错误处理选项的详细信息。

 

Drop Failed Messages   丢弃失败消息

 

By default, if no additional system-level configuration is provided, the messaging system drops the failed message. While acceptable in some cases, in most cases it is not, and we need some recovery mechanism to avoid message loss.

 

默认情况下,如果未提供其他系统级配置,则消息系统将丢弃失败的消息。虽然在某些情况下可接受,但在大多数情况下,它不是,我们需要一些恢复机制来避免消息丢失。

 

DLQ - Dead Letter Queue   死信队列

 

DLQ allows failed messages to be sent to a special destination: the Dead Letter Queue.

 

DLQ允许将失败的消息发送到特殊目的地: - 死信队列。

 

When configured, failed messages are sent to this destination for subsequent re-processing or auditing and reconciliation.

 

配置后,失败的消息将发送到此目标,以便后续重新处理或审核和协调。

 

For example, continuing on the previous example and to set up the DLQ with Rabbit binder, you need to set the following property:

 

例如,继续上一个示例并使用Rabbit绑定器设置DLQ,您需要设置以下属性:

 

spring.cloud.stream.rabbit.bindings.input.consumer.auto-bind-dlq=true

 

Keep in mind that, in the above property, input corresponds to the name of the input destination binding. The consumer indicates that it is a consumer property and auto-bind-dlq instructs the binder to configure DLQ for input destination, which results in an additional Rabbit queue named input.myGroup.dlq.

 

请记住,在上面的属性中,input对应于输入目标绑定的名称。consumer表示它是一个消费者属性,并且auto-bind-dlq指示绑定器为input目标配置DLQ,这会生成一个命名为input.myGroup.dlq的额外Rabbit队列。

 

Once configured, all failed messages are routed to this queue with an error message similar to the following:

 

配置完成后,所有失败的消息都将路由到此队列,并显示类似于以下内容的错误消息:

 

delivery_mode:        1
headers:
x-death:
count:        1
reason:        rejected
queue:        input.hello
time:        1522328151
exchange:
routing-keys:        input.myGroup
Payload {"name":"Bob"}

 

As you can see from the above, your original message is preserved for further actions.

 

从上面的内容可以看出,您的原始消息会被保留以供进一步操作。

 

However, one thing you may have noticed is that there is limited information on the original issue with the message processing. For example, you do not see a stack trace corresponding to the original error. To get more relevant information about the original error, you must set an additional property:

 

但是,您可能注意到的一件事是,有关消息处理的原始问题的信息有限。例如,您没有看到与原始错误对应的堆栈跟踪。要获取有关原始错误的更多相关信息,您必须设置其他属性:

 

spring.cloud.stream.rabbit.bindings.input.consumer.republish-to-dlq=true

 

Doing so forces the internal error handler to intercept the error message and add additional information to it before publishing it to DLQ. Once configured, you can see that the error message contains more information relevant to the original error, as follows:

 

这样做会强制内部错误处理程序拦截错误消息,并在将其发布到DLQ之前向其添加其他信息。配置完成后,您可以看到错误消息包含与原始错误相关的更多信息,如下所示:

 

delivery_mode:        2
headers:
x-original-exchange:
x-exception-message:        has an error
x-original-routingKey:        input.myGroup
x-exception-stacktrace:        org.springframework.messaging.MessageHandlingException: nested exception is
      org.springframework.messaging.MessagingException: has an error, failedMessage=GenericMessage [payload=byte[15],
      headers={amqp_receivedDeliveryMode=NON_PERSISTENT, amqp_receivedRoutingKey=input.hello, amqp_deliveryTag=1,
      deliveryAttempt=3, amqp_consumerQueue=input.hello, amqp_redelivered=false, id=a15231e6-3f80-677b-5ad7-d4b1e61e486e,
      amqp_consumerTag=amq.ctag-skBFapilvtZhDsn0k3ZmQg, contentType=application/json, timestamp=1522327846136}]
      at org.spring...integ...han...MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:107)
      at. . . . .
Payload {"name":"Bob"}

 

This effectively combines application-level and system-level error handling to further assist with downstream troubleshooting mechanics.

 

这有效地结合了应用程序级和系统级错误处理,以进一步帮助下游故障排除机制。

 

Re-queue Failed Messages   重新排队失败消息

 

As mentioned earlier, the currently supported binders (Rabbit and Kafka) rely on RetryTemplate to facilitate successful message processing. See Retry Template for details. However, for cases when the max-attempts property is set to 1, internal reprocessing of the message is disabled. At this point, you can facilitate message re-processing (re-tries) by instructing the messaging system to re-queue the failed message. Once re-queued, the failed message is sent back to the original handler, essentially creating a retry loop.

 

如前所述,当前支持的绑定器(Rabbit和Kafka)依赖于RetryTemplate以促进消息的成功处理。有关详细信息,请参阅Retry Template。但是,对于max-attempts属性设置为1的情况,将禁用消息的内部重新处理。此时,您可以通过指示消息系统重新排队失败的消息来促进消息重新处理(重新尝试)。重新排队后,失败的消息将被发送回原始处理程序,实质上是创建重试循环。

 

This option may be feasible for cases where the nature of the error is related to some sporadic yet short-term unavailability of some resource.

 

对于错误的性质与某些资源的某些零星但短期不可用相关的情况,此选项可能是可行的。

 

To accomplish that, you must set the following properties:

 

要实现此目的,您必须设置以下属性:

 

spring.cloud.stream.bindings.input.consumer.max-attempts=1
spring.cloud.stream.rabbit.bindings.input.consumer.requeue-rejected=true

 

In the preceding example, max-attempts is set to 1, essentially disabling internal re-tries, and requeue-rejected (short for requeue rejected messages) is set to true. Once set, the failed message is resubmitted to the same handler and loops continuously, or until the handler throws AmqpRejectAndDontRequeueException, essentially allowing you to build your own re-try logic within the handler itself.

 

在前面的示例中,将max-attempts设置为1基本上禁用内部重试和requeue-rejected(重新排队拒绝消息的简称)被设置为true。一旦设置,失败的消息将重新提交到同一个处理程序并继续循环或直到处理程序抛出AmqpRejectAndDontRequeueException,基本上允许您在处理程序本身内构建自己的重试逻辑。
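The following is a minimal sketch of such handler-level retry logic; process(..) and isRecoverable(..) are hypothetical application methods used only for illustration:

@StreamListener(Sink.INPUT)
public void handle(Person person) {
    try {
        process(person); // hypothetical business logic that may fail
    } catch (RuntimeException e) {
        if (!isRecoverable(e)) { // hypothetical check for a permanent failure
            // Stop the re-queue loop; the broker then drops (or dead-letters) the message
            throw new AmqpRejectAndDontRequeueException("Giving up", e);
        }
        throw e; // re-throwing causes the message to be re-queued and retried
    }
}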

 

Retry Template   重试模板

 

The RetryTemplate is part of the Spring Retry library. While it is out of scope of this document to cover all of the capabilities of the RetryTemplate, we will mention the following consumer properties that are specifically related to the RetryTemplate:

 

RetryTemplate是Spring Retry库的一部分。虽然涵盖RetryTemplate的所有功能超出了本文档的范围,但我们仍将提及以下与RetryTemplate特别相关的消费者属性:

 

maxAttempts

The number of attempts to process the message.

处理消息的尝试次数。

 

Default: 3.

backOffInitialInterval

The backoff initial interval on retry.

重试时的退避初始间隔。

 

Default: 1000 milliseconds.

backOffMaxInterval

The maximum backoff interval.

最大退避间隔。

 

Default: 10000 milliseconds.

backOffMultiplier

The backoff multiplier.

退避乘数。

 

Default: 2.0.

 

While the preceding settings are sufficient for the majority of the customization requirements, they may not satisfy certain complex requirements, at which point you may want to provide your own instance of the RetryTemplate. To do so, configure it as a bean in your application configuration. The application-provided instance overrides the one provided by the framework. Also, to avoid conflicts, you must qualify the instance of the RetryTemplate you want to be used by the binder as @StreamRetryTemplate. For example,

 

虽然前面的设置足以满足大多数自定义要求,但它们可能无法满足某些复杂要求,您可能希望提供自己的RetryTemplate实例。为此,请将其配置为应用程序配置中的bean。应用程序提供的实例将覆盖框架提供的实例。另外,为了避免冲突,您必须将想要被绑定器使用的RetryTemplate实例限定为@StreamRetryTemplate。例如,

 

@StreamRetryTemplate
public RetryTemplate myRetryTemplate() {
    return new RetryTemplate();
}

 

As you can see from the above example you don’t need to annotate it with @Bean since @StreamRetryTemplate is a qualified @Bean.

 

从上面的例子可以看出,你不需要使用@Bean注释它,因为@StreamRetryTemplate是合格的@Bean。
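The following sketch shows what a more customized instance might look like, using the SimpleRetryPolicy and ExponentialBackOffPolicy classes from Spring Retry; the specific values are arbitrary and only illustrative:

@StreamRetryTemplate
public RetryTemplate myRetryTemplate() {
    RetryTemplate template = new RetryTemplate();

    // Retry at most 5 times (illustrative value)
    SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
    retryPolicy.setMaxAttempts(5);
    template.setRetryPolicy(retryPolicy);

    // Exponential backoff starting at 2 seconds, capped at 30 seconds (illustrative values)
    ExponentialBackOffPolicy backOffPolicy = new ExponentialBackOffPolicy();
    backOffPolicy.setInitialInterval(2000);
    backOffPolicy.setMultiplier(2.0);
    backOffPolicy.setMaxInterval(30000);
    template.setBackOffPolicy(backOffPolicy);

    return template;
}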

 

5.5. Reactive Programming Support   反应式编程支持

 

Spring Cloud Stream also supports the use of reactive APIs where incoming and outgoing data is handled as continuous data flows. Support for reactive APIs is available through spring-cloud-stream-reactive, which needs to be added explicitly to your project.

 

Spring Cloud Stream还支持使用响应式API,其中传入和传出数据作为连续数据流进行处理。可以通过spring-cloud-stream-reactive支持反应式API,需要将其明确添加到您的项目中。

 

The programming model with reactive APIs is declarative. Instead of specifying how each individual message should be handled, you can use operators that describe functional transformations from inbound to outbound data flows.

 

具有反应式API的编程模型是声明性的。您可以使用描述从入站数据流到出站数据流的功能转换的运算符,而不是指定应如何处理每条消息。

 

At present, Spring Cloud Stream supports only the Reactor API. In the future, we intend to support a more generic model based on Reactive Streams.

 

目前Spring Cloud Stream仅支持Reactor API。将来,我们打算支持基于Reactive Streams的更通用的模型。

 

The reactive programming model also uses the @StreamListener annotation for setting up reactive handlers. The differences are that:

  • The @StreamListener annotation must not specify an input or output, as they are provided as arguments and return values from the method.
  • The arguments of the method must be annotated with @Input and @Output, indicating which input or output the incoming and outgoing data flows connect to, respectively.
  • The return value of the method, if any, is annotated with @Output, indicating the output where data should be sent.

 

反应式编程模型还使用@StreamListener注释来设置反应式处理程序。不同之处在于:

  • @StreamListener注释不能指定输入或输出,因为它们被提供为该方法的参数和返回值。
  • 方法参数必须用@Input和@Output注释,分别指示传入和传出数据流连接到哪个输入或输出。
  • 方法返回值(如果有)用@Output注释,表示应该发送数据的输出。

 

Reactive programming support requires Java 1.8.

反应式编程支持需要Java 1.8。

As of Spring Cloud Stream 1.1.1 and later (starting with release train Brooklyn.SR2), reactive programming support requires the use of Reactor 3.0.4.RELEASE and higher. Earlier Reactor versions (including 3.0.1.RELEASE, 3.0.2.RELEASE and 3.0.3.RELEASE) are not supported. spring-cloud-stream-reactive transitively retrieves the proper version, but it is possible for the project structure to manage the version of the io.projectreactor:reactor-core to an earlier release, especially when using Maven. This is the case for projects generated by using Spring Initializr with Spring Boot 1.x, which overrides the Reactor version to 2.0.8.RELEASE. In such cases, you must ensure that the proper version of the artifact is released. You can do so by adding a direct dependency on io.projectreactor:reactor-core with a version of 3.0.4.RELEASE or later to your project.

从Spring Cloud Stream 1.1.1及更高版本开始(从版本系列Brooklyn.SR2开始),反应式编程支持需要使用Reactor 3.0.4.RELEASE和更高版本。不支持早期的Reactor版本(包括3.0.1.RELEASE,3.0.2.RELEASE和3.0.3.RELEASE)。spring-cloud-stream-reactive传递性地检索正确的版本,但项目结构可以管理io.projectreactor:reactor-core早期版本的版本,尤其是在使用Maven时。对于使用Spring Initializr和Spring Boot 1.x生成的项目就是这种情况,它将Reactor版本覆盖到2.0.8.RELEASE。在这种情况下,您必须确保释放正确版本的工件。您可以通过向项目io.projectreactor:reactor-core的版本3.0.4.RELEASE或更高版本添加直接依赖项来实现此目的。
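With Maven, such a direct dependency could look like the following sketch (any 3.0.4.RELEASE or later version works):

<dependency>
  <groupId>io.projectreactor</groupId>
  <artifactId>reactor-core</artifactId>
  <version>3.0.4.RELEASE</version>
</dependency>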

The use of term, “reactive”, currently refers to the reactive APIs being used and not to the execution model being reactive (that is, the bound endpoints still use a 'push' rather than a 'pull' model). While some backpressure support is provided by the use of Reactor, we do intend, in a future release, to support entirely reactive pipelines by the use of native reactive clients for the connected middleware.

术语“反应式”的使用目前指的是所使用的反应式API而不是被动反应的执行模型(即,绑定端点仍然使用“推”而不是'拉'模型)。虽然使用Reactor提供了一些背压支持,但我们打算在未来的版本中通过使用连接中间件的原生反应式客户端来支持完全的反应式管道。

 

Reactor-based Handlers   基于Reactor的处理程序

 

A Reactor-based handler can have the following argument types:

  • For arguments annotated with @Input, it supports the Reactor Flux type. The parameterization of the inbound Flux follows the same rules as in the case of individual message handling: It can be the entire Message, a POJO that can be the Message payload, or a POJO that is the result of a transformation based on the Message content-type header. Multiple inputs can be provided.
  • For arguments annotated with @Output, it supports the FluxSender type, which connects a Flux produced by the method with an output. Generally speaking, specifying outputs as arguments is only recommended when the method can have multiple outputs.

 

基于Reactor的处理程序可以具有以下参数类型:

  • 对于带@Input注释的参数,它支持Reactor Flux类型。入站Flux的参数化遵循与单个消息处理相同的规则:它可以是整个Message,可以是Message负载的POJO,或者是基于Message内容类型头的转换结果的POJO 。提供多个输入。
  • 对于带@Output注释的参数,它支持FluxSender类型,该类型将方法生成的Flux与输出连接。一般而言,仅当方法可以具有多个输出时,才建议将输出指定为参数。

 

A Reactor-based handler supports a return type of Flux. In that case, it must be annotated with @Output. We recommend using the return value of the method when a single output Flux is available.

 

基于Reactor的处理程序支持Flux返回类型。在这种情况下,它必须使用@Output注释。我们建议在单个输出Flux可用时使用方法的返回值。

 

The following example shows a Reactor-based Processor:

 

以下示例显示了基于Reactor的Processor:

 

@EnableBinding(Processor.class)
@EnableAutoConfiguration
public static class UppercaseTransformer {

  @StreamListener
  @Output(Processor.OUTPUT)
  public Flux<String> receive(@Input(Processor.INPUT) Flux<String> input) {
    return input.map(s -> s.toUpperCase());
  }
}

 

The same processor using output arguments looks like the following example:

 

使用输出参数的同一处理器类似于以下示例:

 

@EnableBinding(Processor.class)
@EnableAutoConfiguration
public static class UppercaseTransformer {

  @StreamListener
  public void receive(@Input(Processor.INPUT) Flux<String> input,
     @Output(Processor.OUTPUT) FluxSender output) {
     output.send(input.map(s -> s.toUpperCase()));
  }
}

 

Reactive Sources   反应源

 

Spring Cloud Stream reactive support also provides the ability for creating reactive sources through the @StreamEmitter annotation. By using the @StreamEmitter annotation, a regular source may be converted to a reactive one. @StreamEmitter is a method level annotation that marks a method to be an emitter to outputs declared with @EnableBinding. You cannot use the @Input annotation along with @StreamEmitter, as the methods marked with this annotation are not listening for any input. Rather, methods marked with @StreamEmitter generate output. Following the same programming model used in @StreamListener, @StreamEmitter also allows flexible ways of using the @Output annotation, depending on whether the method has any arguments, a return type, and other considerations.

 

Spring Cloud Stream反应式支持还通过@StreamEmitter注释提供了创建反应源的功能。通过使用@StreamEmitter注释,可以将常规源转换为反应源。@StreamEmitter是一个方法级别的注释,用于将方法标记为到使用@EnableBinding声明的输出的发射器。您不能同时使用@Input注释和@StreamEmitter注释,因为使用此注释标记的方法不会侦听任何输入。相反,标记为@StreamEmitter的方法生成输出。遵循@StreamListener,@StreamEmitter中使用的相同的编程模型还允许灵活的方式使用@Output注释,具体取决于方法是否具有任何参数,返回类型,和其他注意事项。

 

The remainder of this section contains examples of using the @StreamEmitter annotation in various styles.

 

本节的其余部分包含各种样式的使用@StreamEmitter注释的示例。

 

The following example emits the Hello, World message every millisecond and publishes to a Reactor Flux:

 

以下示例每毫秒发出一次Hello, World消息并发布到Reactor Flux:

 

@EnableBinding(Source.class)
@EnableAutoConfiguration
public static class HelloWorldEmitter {

  @StreamEmitter
  @Output(Source.OUTPUT)
  public Flux<String> emit() {
    return Flux.intervalMillis(1)
            .map(l -> "Hello World");
  }
}

 

In the preceding example, the resulting messages in the Flux are sent to the output channel of the Source.

 

在前面的示例中,将Flux中的结果消息发送到Source的输出通道。

 

The next example is another flavor of an @StreamEmitter that sends a Reactor Flux. Instead of returning a Flux, the following method uses a FluxSender to programmatically send a Flux from a source:

 

下一个例子是发送Reactor Flux的@StreamEmmitter的另一种例子。以下方法使用FluxSender以编程方式发送来自源的Flux,而不是返回Flux:

 

@EnableBinding(Source.class)
@EnableAutoConfiguration
public static class HelloWorldEmitter {

  @StreamEmitter
  @Output(Source.OUTPUT)
  public void emit(FluxSender output) {
    output.send(Flux.intervalMillis(1)
            .map(l -> "Hello World"));
  }
}

 

The next example is exactly the same as the above snippet in functionality and style. However, instead of using an explicit @Output annotation on the method, it uses the annotation on the method parameter.

 

下一个示例在功能和样式上与上述代码段完全相同。但是,它不使用方法上的显式@Output注释,而是使用方法参数上的注释。

 

@EnableBinding(Source.class)
@EnableAutoConfiguration
public static class HelloWorldEmitter {

  @StreamEmitter
  public void emit(@Output(Source.OUTPUT) FluxSender output) {
    output.send(Flux.intervalMillis(1)
            .map(l -> "Hello World"));
  }
}

 

The last example in this section is yet another flavor of writing reactive sources by using the Reactive Streams Publisher API and taking advantage of the support for it in the Spring Integration Java DSL. The Publisher in the following example still uses Reactor Flux under the hood, but, from an application perspective, that is transparent to the user and only needs Reactive Streams and the Java DSL for Spring Integration:

 

本节的最后一个示例是另一种使用Reactive Streams Publisher API编写反应源的方法,并利用Spring Integration Java DSL中对它的支持。下面的例子中的Publisher仍然在引擎盖下使用Reactor Flux,但是,从应用的角度来看,这是对用户透明的,只需要Reactive 流和Spring Integration的Java DSL:

 

@EnableBinding(Source.class)
@EnableAutoConfiguration
public static class HelloWorldEmitter {

  @StreamEmitter
  @Output(Source.OUTPUT)
  @Bean
  public Publisher<Message<String>> emit() {
    return IntegrationFlows.from(() ->
                new GenericMessage<>("Hello World"),
        e -> e.poller(p -> p.fixedDelay(1)))
        .toReactivePublisher();
  }
}

 

6. Binders   绑定器

 

Spring Cloud Stream provides a Binder abstraction for use in connecting to physical destinations at the external middleware. This section provides information about the main concepts behind the Binder SPI, its main components, and implementation-specific details.

 

Spring Cloud Stream提供了一个Binder抽象,用于连接外部中间件的物理目标。本节提供有关Binder SPI背后的主要概念,其主要组件,以及特定于实现的细节的信息。

 

6.1. Producers and Consumers   生产者和消费者

 

The following image shows the general relationship of producers and consumers:

 

下图显示了生产者和消费者的一般关系:

 

Figure 6. Producers and Consumers

 

A producer is any component that sends messages to a channel. The channel can be bound to an external message broker with a Binder implementation for that broker. When invoking the bindProducer() method, the first parameter is the name of the destination within the broker, the second parameter is the local channel instance to which the producer sends messages, and the third parameter contains properties (such as a partition key expression) to be used within the adapter that is created for that channel.

 

生产者是向通道发送消息的任何组件。可以将通道绑定到具有该代理的Binder实现的外部消息代理。调用bindProducer()方法时,第一个参数是代理中目标的名称,第二个参数是生产者向其发送消息的本地通道实例,第三个参数包含要在为该通道创建的适配器中使用的属性(如分区键表达式)。

 

A consumer is any component that receives messages from a channel. As with a producer, the consumer’s channel can be bound to an external message broker. When invoking the bindConsumer() method, the first parameter is the destination name, and a second parameter provides the name of a logical group of consumers. Each group that is represented by consumer bindings for a given destination receives a copy of each message that a producer sends to that destination (that is, it follows normal publish-subscribe semantics). If there are multiple consumer instances bound with the same group name, then messages are load-balanced across those consumer instances so that each message sent by a producer is consumed by only a single consumer instance within each group (that is, it follows normal queueing semantics).

 

消费者是从通道接收消息的任何组件。与生产者一样,消费者的通道可以绑定到外部消息代理。调用bindConsumer()方法时,第一个参数是目标名称,第二个参数提供逻辑消费者组的名称。由给定目标的消费者绑定表示的每个组接收生产者发送到该目标的每个消息的副本(即,它遵循正常的发布 - 订阅语义)。如果有多个使用相同组名绑定的消费者实例,则会在这些消费者实例之间对消息进行负载平衡,以便生产者发送的每条消息仅由每个组中的单个消费者实例消费(即,它遵循正常的队列语义)。
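For instance (a sketch using the binding properties described later in this guide, with illustrative destination and group names), two instances of an application started with the following settings would join the group billing and share the messages published to the orders destination:

spring.cloud.stream.bindings.input.destination=orders
spring.cloud.stream.bindings.input.group=billing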

 

6.2. Binder SPI   绑定器SPI

 

The Binder SPI consists of a number of interfaces, out-of-the box utility classes, and discovery strategies that provide a pluggable mechanism for connecting to external middleware.

 

绑定器SPI由许多接口,开箱即用的实用程序类,和发现策略组成,这些策略提供了可连接到外部中间件的可插拔机制。

 

The key point of the SPI is the Binder interface, which is a strategy for connecting inputs and outputs to external middleware. The following listing shows the definition of the Binder interface:

 

SPI的关键点是Binder接口,这是一种将输入和输出连接到外部中间件的策略。以下清单显示了Binder接口的定义:

 

public interface Binder<T, C extends ConsumerProperties, P extends ProducerProperties> {

    Binding<T> bindConsumer(String name, String group, T inboundBindTarget, C consumerProperties);

    Binding<T> bindProducer(String name, T outboundBindTarget, P producerProperties);
}

 

The interface is parameterized, offering a number of extension points:

  • Input and output bind targets. As of version 1.0, only MessageChannel is supported, but this is intended to be used as an extension point in the future.
  • Extended consumer and producer properties, allowing specific Binder implementations to add supplemental properties that can be supported in a type-safe manner.

A typical binder implementation consists of the following:

  • A class that implements the Binder interface;
  • A Spring @Configuration class that creates a bean of type Binder along with the middleware connection infrastructure.
  • A META-INF/spring.binders file found on the classpath containing one or more binder definitions, as shown in the following example:
    kafka:\
    org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfiguration

 

接口已参数化,提供了许多扩展点:

  • 输入和输出绑定目标。从版本1.0开始,仅支持MessageChannel,但这将在未来用作扩展点。
  • 扩展的消费者和生产者属性,允许特定的Binder实现添加可以以类型安全的方式支持的补充属性。

典型的绑定器实现包括以下内容:

  • 一个实现Binder接口的类;
  • 一个Spring @Configuration类,它创建一个与中间件连接基础结构一起的Binder类型的bean。
  • 在包含一个或多个绑定器定义的类路径中找到的META-INF/spring.binders文件,如以下示例所示:

kafka:\

org.springframework.cloud.stream.binder.kafka.config.KafkaBinderConfiguration

 

6.3. Binder Detection   绑定器检测

 

Spring Cloud Stream relies on implementations of the Binder SPI to perform the task of connecting channels to message brokers. Each Binder implementation typically connects to one type of messaging system.

 

Spring Cloud Stream依赖于Binder SPI的实现来执行将通道连接到消息代理的任务。每个Binder实现通常连接到一种类型的消息系统。

 

6.3.1. Classpath Detection   类路径检测

 

By default, Spring Cloud Stream relies on Spring Boot’s auto-configuration to configure the binding process. If a single Binder implementation is found on the classpath, Spring Cloud Stream automatically uses it. For example, a Spring Cloud Stream project that aims to bind only to RabbitMQ can add the following dependency:

 

默认情况下,Spring Cloud Stream依靠Spring Boot的自动配置来配置绑定过程。如果在类路径上找到单个Binder实现,则Spring Cloud Stream会自动使用它。例如,旨在仅绑定到RabbitMQ的Spring Cloud Stream项目可以添加以下依赖项:

 

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-binder-rabbit</artifactId>
</dependency>

 

For the specific Maven coordinates of other binder dependencies, see the documentation of that binder implementation.

 

有关其他绑定器依赖项的特定Maven坐标,请参阅该绑定器实现的文档。

 

6.4. Multiple Binders on the Classpath   类路径上的多个绑定器

 

When multiple binders are present on the classpath, the application must indicate which binder is to be used for each channel binding. Each binder configuration contains a META-INF/spring.binders file, which is a simple properties file, as shown in the following example:

 

当类路径上存在多个绑定器时,应用程序必须指示每个通道绑定使用哪个绑定器。每个绑定器配置都包含一个META-INF/spring.binders文件,该文件是一个简单的属性文件,如以下示例所示:

 

rabbit:\
org.springframework.cloud.stream.binder.rabbit.config.RabbitServiceAutoConfiguration

 

Similar files exist for the other provided binder implementations (such as Kafka), and custom binder implementations are expected to provide them as well. The key represents an identifying name for the binder implementation, whereas the value is a comma-separated list of configuration classes that each contain one and only one bean definition of type org.springframework.cloud.stream.binder.Binder.

 

其他提供的绑定器实现(例如Kafka)存在类似的文件,并且预期自定义绑定器实现也提供它们。键表示绑定器实现的标识名称,而值是以逗号分隔的配置类列表,每个配置类包含一个且仅包含一个org.springframework.cloud.stream.binder.Binder类型的bean定义。

 

Binder selection can either be performed globally, using the spring.cloud.stream.defaultBinder property (for example, spring.cloud.stream.defaultBinder=rabbit) or individually, by configuring the binder on each channel binding. For instance, a processor application (that has channels named input and output for read and write respectively) that reads from Kafka and writes to RabbitMQ can specify the following configuration:

 

绑定器选择可以全局执行,使用spring.cloud.stream.defaultBinder属性(例如spring.cloud.stream.defaultBinder=rabbit),或者单独执行,通过在每个通道绑定上配置绑定器。例如,从Kafka读取并写入RabbitMQ的处理器应用程序(具有已命名input和output分别用于读取和写入的通道)可指定以下配置:

 

spring.cloud.stream.bindings.input.binder=kafka
spring.cloud.stream.bindings.output.binder=rabbit

 

6.5. Connecting to Multiple Systems   连接多个系统

 

By default, binders share the application’s Spring Boot auto-configuration, so that one instance of each binder found on the classpath is created. If your application should connect to more than one broker of the same type, you can specify multiple binder configurations, each with different environment settings.

 

默认情况下,绑定器共享应用程序的Spring Boot自动配置,以便创建在类路径中找到的每个绑定器的一个实例。如果您的应用程序应连接到多个相同类型的代理,则可以指定多个绑定器配置,每个配置具有不同的环境设置。

 

Turning on explicit binder configuration disables the default binder configuration process altogether. If you do so, all binders in use must be included in the configuration. Frameworks that intend to use Spring Cloud Stream transparently may create binder configurations that can be referenced by name, but they do not affect the default binder configuration. In order to do so, a binder configuration may have its defaultCandidate flag set to false (for example, spring.cloud.stream.binders.<configurationName>.defaultCandidate=false). This denotes a configuration that exists independently of the default binder configuration process.

启用显式绑定器配置会完全禁用默认绑定器配置过程。如果这样做,则所有正在使用的绑定器必须包含在配置中。打算透明地使用Spring Cloud Stream的框架可以创建通过名称引用的绑定器配置,但它们不会影响默认的绑定器配置。为此,绑定器配置可以将其defaultCandidate标志设置为false(例如,spring.cloud.stream.binders.<configurationName>.defaultCandidate=false)。这表示独立于默认绑定器配置过程而存在的配置。

 

The following example shows a typical configuration for a processor application that connects to two RabbitMQ broker instances:

 

以下示例显示连接到两个RabbitMQ代理实例的处理器应用程序的典型配置:

 

spring:
  cloud:
    stream:
      bindings:
        input:
          destination: thing1
          binder: rabbit1
        output:
          destination: thing2
          binder: rabbit2
      binders:
        rabbit1:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: <host1>
        rabbit2:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                host: <host2>

 

6.6. Binding visualization and control   绑定可视化和控制

 

Since version 2.0, Spring Cloud Stream supports visualization and control of the Bindings through Actuator endpoints.

 

从2.0版开始,Spring Cloud Stream通过执行器端点支持绑定的可视化和控制。

 

Starting with version 2.0, actuator and web are optional. You must first add one of the web dependencies as well as add the actuator dependency manually. The following example shows how to add the dependency for the Web framework:

 

从版本2.0开始,执行器和Web是可选的,您必须首先添加一个Web依赖项,并手动添加执行器依赖项。以下示例显示如何添加Web框架的依赖项:

 

<dependency>
     <groupId>org.springframework.boot</groupId>
     <artifactId>spring-boot-starter-web</artifactId>
</dependency>

 

The following example shows how to add the dependency for the WebFlux framework:

 

以下示例显示如何为WebFlux框架添加依赖项:

 

<dependency>
       <groupId>org.springframework.boot</groupId>
       <artifactId>spring-boot-starter-webflux</artifactId>
</dependency>

 

You can add the Actuator dependency as follows:

 

您可以按如下方式添加执行器依赖关系:

 

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

 

To run Spring Cloud Stream 2.0 apps in Cloud Foundry, you must add spring-boot-starter-web and spring-boot-starter-actuator to the classpath. Otherwise, the application will not start due to health check failures.

要在Cloud Foundry中运行Spring Cloud Stream 2.0的应用程序,您必须添加spring-boot-starter-web和spring-boot-starter-actuator到classpath中。否则,由于运行状况检查失败,应用程序将无法启动。

 

You must also enable the bindings actuator endpoints by setting the following property: --management.endpoints.web.exposure.include=bindings.

 

您还必须通过设置以下属性来启用绑定执行器端点:--management.endpoints.web.exposure.include=bindings。

 

Once those prerequisites are satisfied, you should see the following in the logs when the application starts:

 

一旦满足这些先决条件。应用程序启动时,您应该在日志中看到以下内容:

 

: Mapped "{[/actuator/bindings/{name}],methods=[POST]. . .
: Mapped "{[/actuator/bindings],methods=[GET]. . .
: Mapped "{[/actuator/bindings/{name}],methods=[GET]. . .

 

To visualize the current bindings, access the following URL: <host>:<port>/actuator/bindings

 

要显示当前绑定,请访问以下URL:<host>:<port>/actuator/bindings

 

Alternatively, to see a single binding, access one of the URLs similar to the following: <host>:<port>/actuator/bindings/myBindingName

 

或者,要查看单个绑定,请访问与以下内容类似的其中一个URL:<host>:<port>/actuator/bindings/myBindingName

 

You can also stop, start, pause, and resume individual bindings by posting to the same URL while providing a state argument as JSON, as shown in the following examples:

 

您还可以通过发布POST请求到同一URL来停止,启动,暂停,和恢复单个绑定,同时提供state参数作为JSON,如以下示例所示:

 

curl -d '{"state":"STOPPED"}' -H "Content-Type: application/json" -X POST <host>:<port>/actuator/bindings/myBindingName curl -d '{"state":"STARTED"}' -H "Content-Type: application/json" -X POST <host>:<port>/actuator/bindings/myBindingName curl -d '{"state":"PAUSED"}' -H "Content-Type: application/json" -X POST <host>:<port>/actuator/bindings/myBindingName curl -d '{"state":"RESUMED"}' -H "Content-Type: application/json" -X POST <host>:<port>/actuator/bindings/myBindingName

 

PAUSED and RESUMED work only when the corresponding binder and its underlying technology supports it. Otherwise, you see the warning message in the logs. Currently, only Kafka binder supports the PAUSED and RESUMED states.

PAUSED和RESUMED只有在相应的绑定器及其底层技术支持时才能工作。否则,您会在日志中看到警告消息。目前,只有Kafka绑定器支持PAUSED和RESUMED状态。

 

6.7. Binder Configuration Properties   绑定器配置属性

 

The following properties are available when customizing binder configurations. These properties are exposed via org.springframework.cloud.stream.config.BinderProperties.

 

自定义绑定器配置时,可以使用以下属性。这些属性通过org.springframework.cloud.stream.config.BinderProperties暴露。

 

They must be prefixed with spring.cloud.stream.binders.<configurationName>.

它们必须以spring.cloud.stream.binders.<configurationName>为前缀。

 

type

The binder type. It typically references one of the binders found on the classpath — in particular, a key in a META-INF/spring.binders file.

绑定器类型。它通常引用类路径中找到的一个绑定器 - 特别是META-INF/spring.binders文件中的一个键。

 

By default, it has the same value as the configuration name.

默认情况下,它具有与配置名称相同的值。

 

inheritEnvironment

Whether the configuration inherits the environment of the application itself.

配置是否继承应用程序本身的环境。

 

Default: true.

environment

Root for a set of properties that can be used to customize the environment of the binder. When this property is set, the context in which the binder is being created is not a child of the application context. This setting allows for complete separation between the binder components and the application components.

一组属性的根,可用于自定义绑定器的环境。设置此属性后,创建绑定器的上下文不是应用程序上下文的子项。此设置允许绑定器组件和应用组件之间的完全分离。

 

Default: empty.

defaultCandidate

Whether the binder configuration is a candidate for being considered a default binder or can be used only when explicitly referenced. This setting allows adding binder configurations without interfering with the default processing.

绑定器配置是否可以被视为默认绑定器,或者只能在显式引用时使用。此设置允许添加绑定器配置,而不会干扰默认处理。

 

Default: true.

 

7. Configuration Options   配置选项

 

Spring Cloud Stream supports general configuration options as well as configuration for bindings and binders. Some binders let additional binding properties support middleware-specific features.

 

Spring Cloud Stream支持常规配置选项以及绑定和绑定器的配置。某些绑定器允许其他绑定属性支持特定于中间件的功能。

 

Configuration options can be provided to Spring Cloud Stream applications through any mechanism supported by Spring Boot. This includes application arguments, environment variables, and YAML or .properties files.

 

可以通过Spring Boot支持的任何机制向Spring Cloud Stream应用程序提供配置选项。这包括应用程序参数,环境变量,以及YAML或.properties文件。

 

7.1. Binding Service Properties   绑定服务属性

 

These properties are exposed via org.springframework.cloud.stream.config.BindingServiceProperties

 

这些属性通过org.springframework.cloud.stream.config.BindingServiceProperties暴露。

 

spring.cloud.stream.instanceCount

The number of deployed instances of an application. Must be set for partitioning on the producer side. Must be set on the consumer side when using RabbitMQ and with Kafka if autoRebalanceEnabled=false.

应用程序的已部署实例数。必须在生产者端设置以进行分区。使用RabbitMQ和Kafka(如果autoRebalanceEnabled=false)时必须在消费者端设置autoRebalanceEnabled=false。

 

Default: 1.

spring.cloud.stream.instanceIndex

The instance index of the application: A number from 0 to instanceCount - 1. Used for partitioning with RabbitMQ and with Kafka if autoRebalanceEnabled=false. Automatically set in Cloud Foundry to match the application’s instance index.

应用程序的实例索引:从0到instanceCount - 1的数字。用于RabbitMQ和Kafka(如果autoRebalanceEnabled=false)的分区。在云计算中自动设置以匹配应用程序的实例索引。

 

spring.cloud.stream.dynamicDestinations

A list of destinations that can be bound dynamically (for example, in a dynamic routing scenario). If set, only listed destinations can be bound.

可动态绑定的目标列表(例如,在动态路由方案中)。如果设置,则只能绑定列出的目标。

 

Default: empty (letting any destination be bound).

默认值:空(允许绑定任何目标)。

 

spring.cloud.stream.defaultBinder

The default binder to use, if multiple binders are configured. See Multiple Binders on the Classpath.

如果配置了多个绑定器,则使用默认绑定器。请参阅类路径上的多个绑定器

 

Default: empty.

spring.cloud.stream.overrideCloudConnectors

This property is only applicable when the cloud profile is active and Spring Cloud Connectors are provided with the application. If the property is false (the default), the binder detects a suitable bound service (for example, a RabbitMQ service bound in Cloud Foundry for the RabbitMQ binder) and uses it for creating connections (usually through Spring Cloud Connectors). When set to true, this property instructs binders to completely ignore the bound services and rely on Spring Boot properties (for example, relying on the spring.rabbitmq.* properties provided in the environment for the RabbitMQ binder). The typical usage of this property is to be nested in a customized environment when connecting to multiple systems.

 

此属性仅在cloud配置文件处于活动状态且Spring Cloud Connectors随应用程序提供时才适用。如果属性是false(默认值),则绑定器会检测到合适的绑定服务(例如,绑定在云计算中的RabbitMQ绑定器的RabbitMQ服务)并使用它来创建连接(通常通过Spring Cloud Connectors)。设置true为时,此属性指示绑定器完全忽略绑定服务并依赖Spring Boot属性(例如,依赖于RabbitMQ绑定器环境中提供的spring.rabbitmq.*属性)。在连接到多个系统时,此属性的典型用法是嵌套在自定义环境中。

 

Default: false.

spring.cloud.stream.bindingRetryInterval

The interval (in seconds) between retrying binding creation when, for example, the binder does not support late binding and the broker (for example, Apache Kafka) is down. Set it to zero to treat such conditions as fatal, preventing the application from starting.

 

例如,当绑定器不支持后期绑定和代理(例如,Apache Kafka)时,重试绑定创建之间的间隔(以秒为单位)已关闭。将其设置为零以将此类条件视为致命的,从而阻止应用程序启动。

 

Default: 30

 

7.2. Binding Properties   绑定属性

 

Binding properties are supplied by using the format of spring.cloud.stream.bindings.<channelName>.<property>=<value>. The <channelName> represents the name of the channel being configured (for example, output for a Source).

 

绑定属性使用spring.cloud.stream.bindings.<channelName>.<property>=<value>的格式提供。<channelName>表示被配置的通道名称(例如,output为Source)。

 

To avoid repetition, Spring Cloud Stream supports setting values for all channels, in the format of spring.cloud.stream.default.<property>=<value>.

 

为避免重复,Spring Cloud Stream支持设置所有通道的值,格式为spring.cloud.stream.default.<property>=<value>。

 

In what follows, we indicate where we have omitted the spring.cloud.stream.bindings.<channelName>. prefix and focus just on the property name, with the understanding that the prefix is included at runtime.

 

在下文中,我们指出我们在哪里省略了spring.cloud.stream.bindings.<channelName>.前缀并仅关注属性名称,并理解在运行时会包含该前缀。

 

7.2.1. Common Binding Properties   通用绑定属性

 

These properties are exposed via org.springframework.cloud.stream.config.BindingProperties

 

这些属性通过org.springframework.cloud.stream.config.BindingProperties暴露。

 

The following binding properties are available for both input and output bindings and must be prefixed with spring.cloud.stream.bindings.<channelName>. (for example, spring.cloud.stream.bindings.input.destination=ticktock).

 

以下绑定属性可用于输入和输出绑定,并且必须以spring.cloud.stream.bindings.<channelName>.(例如spring.cloud.stream.bindings.input.destination=ticktock)为前缀。

 

Default values can be set by using the spring.cloud.stream.default prefix (for example, spring.cloud.stream.default.contentType=application/json).

 

可以使用spring.cloud.stream.default前缀设置默认值(例如spring.cloud.stream.default.contentType=application/json)。

 

destination

The target destination of a channel on the bound middleware (for example, the RabbitMQ exchange or Kafka topic). If the channel is bound as a consumer, it could be bound to multiple destinations, and the destination names can be specified as comma-separated String values. If not set, the channel name is used instead. The default value of this property cannot be overridden.

 

绑定中间件上通道的目标(例如,RabbitMQ交换或Kafka主题)。如果通道绑定为消费者,则可以绑定到多个目标,并且可以将目标名称指定为逗号分隔String值。如果未设置,则使用通道名称。无法覆盖此属性的默认值。

 

group

The consumer group of the channel. Applies only to inbound bindings. See Consumer Groups.

通道的消费者组。仅适用于入站绑定。见消费者组

 

Default: null (indicating an anonymous consumer).

默认值: null(表示匿名消费者)。

 

contentType

The content type of the channel. See “Content Type Negotiation”.

通道的内容类型。请参阅“ 内容类型协商 ”。

 

Default: null (no type coercion is performed).

默认值: null(不执行类型强制)。

 

binder

The binder used by this binding. See “Multiple Binders on the Classpath” for details.

此绑定使用的绑定器。有关详细信息,请参阅“ 类路径上的多个绑定器 ”。

 

Default: null (the default binder is used, if it exists).

默认值: null(如果存在,使用默认绑定器)。

 

7.2.2. Consumer Properties   消费者属性

These properties are exposed via org.springframework.cloud.stream.binder.ConsumerProperties

 

这些属性通过org.springframework.cloud.stream.binder.ConsumerProperties暴露。

 

The following binding properties are available for input bindings only and must be prefixed with spring.cloud.stream.bindings.<channelName>.consumer. (for example, spring.cloud.stream.bindings.input.consumer.concurrency=3).

 

以下绑定属性仅可用于输入绑定,并且必须以spring.cloud.stream.bindings.<channelName>.consumer.(例如spring.cloud.stream.bindings.input.consumer.concurrency=3)为前缀。

 

Default values can be set by using the spring.cloud.stream.default.consumer prefix (for example, spring.cloud.stream.default.consumer.headerMode=none).

可以使用spring.cloud.stream.default.consumer前缀(例如,spring.cloud.stream.default.consumer.headerMode=none)设置默认值。

 

concurrency

The concurrency of the inbound consumer.

入站消费者的并发性。

 

Default: 1.

partitioned

Whether the consumer receives data from a partitioned producer.

消费者是否从分区生产者接收数据。

 

Default: false.

headerMode

When set to none, disables header parsing on input. Effective only for messaging middleware that does not support message headers natively and requires header embedding. This option is useful when consuming data from non-Spring Cloud Stream applications when native headers are not supported. When set to headers, it uses the middleware’s native header mechanism. When set to embeddedHeaders, it embeds headers into the message payload.

 

设置none为时,禁用输入上的header解析。仅对本身不支持消息headers并且需要header嵌入的消息中间件有效。当不支持原生headers时,从非Spring Cloud Stream应用程序中消费数据时,此选项很有用。设置为headers时,它使用中间件的原生header机制。设置为embeddedHeaders时,它会将headers嵌入到消息负载中。

 

Default: depends on the binder implementation.

默认值:取决于绑定器实现。

 

maxAttempts

If processing fails, the number of attempts to process the message (including the first). Set to 1 to disable retry.

如果处理失败,则为处理消息的尝试次数(包括第一次)。设置1为禁用重试。

 

Default: 3.

backOffInitialInterval

The backoff initial interval on retry.

重试时的退避初始间隔。

 

Default: 1000.

backOffMaxInterval

The maximum backoff interval.

最大退避间隔。

 

Default: 10000.

backOffMultiplier

The backoff multiplier.

退避乘数。

 

Default: 2.0.

instanceIndex

When set to a value greater than or equal to zero, it allows customizing the instance index of this consumer (if different from spring.cloud.stream.instanceIndex). When set to a negative value, it defaults to spring.cloud.stream.instanceIndex. See “Instance Index and Instance Count” for more information.

 

当设置为大于等于零的值时,它允许自定义此消费者的实例索引(如果与spring.cloud.stream.instanceIndex不同)。设置为负值时,默认为spring.cloud.stream.instanceIndex。有关详细信息,请参阅“ 实例索引和实例计数 ”。

 

Default: -1.

instanceCount

When set to a value greater than or equal to zero, it allows customizing the instance count of this consumer (if different from spring.cloud.stream.instanceCount). When set to a negative value, it defaults to spring.cloud.stream.instanceCount. See “Instance Index and Instance Count” for more information.

 

设置为大于等于零的值时,它允许自定义此消费者的实例计数(如果与spring.cloud.stream.instanceCount不同)。设置为负值时,默认为spring.cloud.stream.instanceCount。有关详细信息,请参阅“ 实例索引和实例计数 ”。

 

Default: -1.

useNativeDecoding

When set to true, the inbound message is deserialized directly by the client library, which must be configured correspondingly (for example, setting an appropriate Kafka consumer value deserializer). When this configuration is being used, the inbound message unmarshalling is not based on the contentType of the binding. When native decoding is used, it is the responsibility of the producer to use an appropriate encoder (for example, the Kafka producer value serializer) to serialize the outbound message. Also, when native encoding and decoding is used, the headerMode=embeddedHeaders property is ignored and headers are not embedded in the message. See the producer property useNativeEncoding.

 

设置为true时,客户端库直接反序列化入站消息,必须相应地对其进行配置(例如,设置适当的Kafka生产者值反序列化器)。使用此配置时,入站消息解组不基于绑定的contentType。当使用原生解码时,生产者有责任使用适当的编码器(例如,Kafka生产者值序列化器)来序列化出站消息。此外,使用原生编码和解码时,将忽略headerMode=embeddedHeaders属性,并且不会在消息中嵌入headers。查看生产者属性useNativeEncoding。

 

Default: false.

 

7.2.3. Producer Properties   生产者属性

 

These properties are exposed via org.springframework.cloud.stream.binder.ProducerProperties

这些属性通过org.springframework.cloud.stream.binder.ProducerProperties暴露。

 

The following binding properties are available for output bindings only and must be prefixed with spring.cloud.stream.bindings.<channelName>.producer. (for example, spring.cloud.stream.bindings.output.producer.partitionKeyExpression=payload.id).

 

以下绑定属性仅可用于输出绑定,并且必须以spring.cloud.stream.bindings.<channelName>.producer.(例如spring.cloud.stream.bindings.input.producer.partitionKeyExpression=payload.id)为前缀。

 

Default values can be set by using the prefix spring.cloud.stream.default.producer (for example, spring.cloud.stream.default.producer.partitionKeyExpression=payload.id).

 

可以使用前缀spring.cloud.stream.default.producer(例如,spring.cloud.stream.default.producer.partitionKeyExpression=payload.id)设置默认值。

 

partitionKeyExpression

A SpEL expression that determines how to partition outbound data. If set, or if partitionKeyExtractorClass is set, outbound data on this channel is partitioned. partitionCount must be set to a value greater than 1 to be effective. Mutually exclusive with partitionKeyExtractorClass. See “Partitioning Support”.

 

一个SpEL表达式,用于确定如何对出站数据进行分区。如果设置,或者设置了partitionKeyExtractorClass,则对此通道上的出站数据进行分区。partitionCount必须设置为大于1的值才能生效。与partitionKeyExtractorClass互斥。请参阅“ 分区支持 ”。

 

Default: null.

partitionKeyExtractorClass

A PartitionKeyExtractorStrategy implementation. If set, or if partitionKeyExpression is set, outbound data on this channel is partitioned. partitionCount must be set to a value greater than 1 to be effective. Mutually exclusive with partitionKeyExpression. See “Partitioning Support”.

 

一个PartitionKeyExtractorStrategy实现。如果设置,或者设置了partitionKeyExpression,则对此通道上的出站数据进行分区。partitionCount必须设置为大于1的值才能生效。与partitionKeyExpression互斥。请参阅“ 分区支持 ”。

 

Default: null.

partitionSelectorClass

A PartitionSelectorStrategy implementation. Mutually exclusive with partitionSelectorExpression. If neither is set, the partition is selected as the hashCode(key) % partitionCount, where key is computed through either partitionKeyExpression or partitionKeyExtractorClass.

 

一个PartitionSelectorStrategy实现。与partitionSelectorExpression互斥。如果没有设置,则分区被选择为hashCode(key) % partitionCount,其中key通过partitionKeyExpression或partitionKeyExtractorClass计算。

 

Default: null.

partitionSelectorExpression

A SpEL expression for customizing partition selection. Mutually exclusive with partitionSelectorClass. If neither is set, the partition is selected as the hashCode(key) % partitionCount, where key is computed through either partitionKeyExpression or partitionKeyExtractorClass.

 

用于自定义分区选择的SpEL表达式。与partitionSelectorClass互斥。如果没有设置,则分区被选择为hashCode(key) % partitionCount,其中key通过partitionKeyExpression或partitionKeyExtractorClass计算。

 

Default: null.

partitionCount

The number of target partitions for the data, if partitioning is enabled. Must be set to a value greater than 1 if the producer is partitioned. On Kafka, it is interpreted as a hint. The larger of this value and the partition count of the target topic is used instead.

 

如果启用了分区,则为数据的目标分区数。如果生产者已分区,则必须设置为大于1的值。在Kafka上,它被解释为暗示。使用较大的这个和目标主题的分区计数来代替。

 

Default: 1.

requiredGroups

A comma-separated list of groups to which the producer must ensure message delivery even if they start after it has been created (for example, by pre-creating durable queues in RabbitMQ).

 

逗号分隔的组列表,生产者必须确保消息传递给它们,即使它们在它创建之后启动(例如,通过在RabbitMQ中预先创建持久队列)。

 

headerMode

When set to none, it disables header embedding on output. It is effective only for messaging middleware that does not support message headers natively and requires header embedding. This option is useful when producing data for non-Spring Cloud Stream applications when native headers are not supported. When set to headers, it uses the middleware’s native header mechanism. When set to embeddedHeaders, it embeds headers into the message payload.

 

设置none为时,它会禁用输出中的header嵌入。它仅对于本身不支持消息headers并且需要header嵌入的消息中间件有效。当不支持原生headers时,在为非Spring Cloud Stream应用程序生成数据时,此选项很有用。设置为headers时,它使用中间件的原生header机制。设置为embeddedHeaders时,它会将headers嵌入到消息负载中。

 

Default: Depends on the binder implementation.

默认值:取决于绑定器实现。

 

useNativeEncoding

When set to true, the outbound message is serialized directly by the client library, which must be configured correspondingly (for example, setting an appropriate Kafka producer value serializer). When this configuration is being used, the outbound message marshalling is not based on the contentType of the binding. When native encoding is used, it is the responsibility of the consumer to use an appropriate decoder (for example, the Kafka consumer value de-serializer) to deserialize the inbound message. Also, when native encoding and decoding is used, the headerMode=embeddedHeaders property is ignored and headers are not embedded in the message. See the consumer property useNativeDecoding.

 

设置true为时,出站消息由客户端库直接序列化,必须相应地配置(例如,设置适当的Kafka生产者值序列化器)。使用此配置时,出站消息编组不基于绑定的contentType。当使用原生编码时,消费者有责任使用适当的解码器(例如,Kafka消费者值反序列化器)来反序列化入站消息。此外,使用原生编码和解码时,将忽略headerMode=embeddedHeaders属性,并且不会在消息中嵌入headers。查看消费者属性useNativeDecoding。

 

Default: false.

errorChannelEnabled

When set to true, if the binder supports asynchronous send results, send failures are sent to an error channel for the destination. See “[binder-error-channels]” for more information.

 

设置为时true,如果绑定器支持异步发送结果,则发送失败将发送到目标的错误通道。有关详细信息,请参阅“ [binder-error-channels] ”。

 

Default: false.
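Putting a few of the preceding producer properties together, a partitioned output binding could be configured as in the following sketch (the key expression and partition count are illustrative assumptions):

spring.cloud.stream.bindings.output.producer.partitionKeyExpression=payload.id
spring.cloud.stream.bindings.output.producer.partitionCount=4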

 

7.3. Using Dynamically Bound Destinations   使用动态绑定目标

 

Besides the channels defined by using @EnableBinding, Spring Cloud Stream lets applications send messages to dynamically bound destinations. This is useful, for example, when the target destination needs to be determined at runtime. Applications can do so by using the BinderAwareChannelResolver bean, registered automatically by the @EnableBinding annotation.

 

除了使用@EnableBinding定义的通道外,Spring Cloud Stream还允许应用程序将消息发送到动态绑定的目标。例如,当需要在运行时确定目标时,这很有用。应用程序可以通过使用由@EnableBinding注释自动注册的BinderAwareChannelResolver bean来实现。

 

The 'spring.cloud.stream.dynamicDestinations' property can be used for restricting the dynamic destination names to a known set (whitelisting). If this property is not set, any destination can be bound dynamically.

 

'spring.cloud.stream.dynamicDestinations'属性可用于将动态目标名称限制为已知集合(白名单)。如果未设置此属性,则可以动态绑定任何目标。
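For example, the following setting (with illustrative destination names) would allow only the customers and orders destinations to be bound dynamically:

spring.cloud.stream.dynamicDestinations=customers,orders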

 

The BinderAwareChannelResolver can be used directly, as shown in the following example of a REST controller using a path variable to decide the target channel:

 

可直接使用BinderAwareChannelResolver,如图在下面的REST controller 例子中,使用路径变量来决定目标通道:

 

@EnableBinding
@Controller
public class SourceWithDynamicDestination {

    @Autowired
    private BinderAwareChannelResolver resolver;

    @RequestMapping(path = "/{target}", method = POST, consumes = "*/*")
    @ResponseStatus(HttpStatus.ACCEPTED)
    public void handleRequest(@RequestBody String body, @PathVariable("target") String target,
           @RequestHeader(HttpHeaders.CONTENT_TYPE) Object contentType) {
        sendMessage(body, target, contentType);
    }

    private void sendMessage(String body, String target, Object contentType) {
        resolver.resolveDestination(target).send(MessageBuilder.createMessage(body,
                new MessageHeaders(Collections.singletonMap(MessageHeaders.CONTENT_TYPE, contentType))));
    }
}

 

Now consider what happens when we start the application on the default port (8080) and make the following requests with CURL:

 

现在考虑当我们在默认端口(8080)上启动应用程序并使用CURL发出以下请求时会发生什么:

 

curl -H "Content-Type: application/json" -X POST -d "customer-1" http://localhost:8080/customers

 

curl -H "Content-Type: application/json" -X POST -d "order-1" http://localhost:8080/orders

 

The destinations, 'customers' and 'orders', are created in the broker (in the exchange for Rabbit or in the topic for Kafka) with names of 'customers' and 'orders', and the data is published to the appropriate destinations.

 

目的地,“customers”和“orders”,在代理(在Rabbit的交换中或在Kafka的主题中)中创建,其名称为“customers”和“orders”,并且数据将发布到适当的目的地。

 

The BinderAwareChannelResolver is a general-purpose Spring Integration DestinationResolver and can be injected in other components — for example, in a router using a SpEL expression based on the target field of an incoming JSON message. The following example includes a router that reads SpEL expressions:

 

BinderAwareChannelResolver是一个通用的Spring Integration DestinationResolver,可以注入其他组件 - 例如,在路由器中使用基于传入JSON消息的target字段的SpEL表达式。以下示例包含一个读取SpEL表达式的路由器:

 

@EnableBinding
@Controller
public class SourceWithDynamicDestination {

    @Autowired
    private BinderAwareChannelResolver resolver;

    @RequestMapping(path = "/", method = POST, consumes = "application/json")
    @ResponseStatus(HttpStatus.ACCEPTED)
    public void handleRequest(@RequestBody String body, @RequestHeader(HttpHeaders.CONTENT_TYPE) Object contentType) {
        sendMessage(body, contentType);
    }

    private void sendMessage(Object body, Object contentType) {
        routerChannel().send(MessageBuilder.createMessage(body,
                new MessageHeaders(Collections.singletonMap(MessageHeaders.CONTENT_TYPE, contentType))));
    }

    @Bean(name = "routerChannel")
    public MessageChannel routerChannel() {
        return new DirectChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "routerChannel")
    public ExpressionEvaluatingRouter router() {
        ExpressionEvaluatingRouter router =
            new ExpressionEvaluatingRouter(new SpelExpressionParser().parseExpression("payload.target"));
        router.setDefaultOutputChannelName("default-output");
        router.setChannelResolver(resolver);
        return router;
    }
}

 

The Router Sink Application uses this technique to create the destinations on-demand.

 

路由器接收器应用程序使用此技术按需创建目的地。

 

If the channel names are known in advance, you can configure the producer properties as with any other destination. Alternatively, if you register a NewBindingCallback<> bean, it is invoked just before the binding is created. The callback takes the generic type of the extended producer properties used by the binder. It has one method:

 

如果事先知道通道名称,则可以将生产者属性配置为与任何其他目标一样。或者,如果您注册NewBindingCallback<> bean,则会在创建绑定之前调用它。回调采用绑定器使用的扩展生产者属性的泛型类型。它有一个方法:

 

void configure(String channelName, MessageChannel channel, ProducerProperties producerProperties,
        T extendedProducerProperties);

 

The following example shows how to use the RabbitMQ binder:

 

以下示例显示如何使用RabbitMQ绑定器:

 

@Bean
public NewBindingCallback<RabbitProducerProperties> dynamicConfigurer() {
    return (name, channel, props, extended) -> {
        props.setRequiredGroups("bindThisQueue");
        extended.setQueueNameGroupOnly(true);
        extended.setAutoBindDlq(true);
        extended.setDeadLetterQueueName("myDLQ");
    };
}

 

If you need to support dynamic destinations with multiple binder types, use Object for the generic type and cast the extended argument as needed.

如果需要支持具有多个绑定器类型的动态目标,请使用Object泛型类型并根据需要转换扩展参数。

 

8. Content Type Negotiation   内容类型协商

 

Data transformation is one of the core features of any message-driven microservice architecture. Given that, in Spring Cloud Stream, such data is represented as a Spring Message, a message may have to be transformed to a desired shape or size before reaching its destination. This is required for two reasons:

  1. To convert the contents of the incoming message to match the signature of the application-provided handler.
  2. To convert the contents of the outgoing message to the wire format.

 

数据转换是任何消息驱动的微服务架构的核心功能之一。鉴于此,在Spring Cloud Stream中,此类数据表示为Spring Message,消息在到达目标之前可能必须被转换为所需的形状或大小。这有两个原因:

  1. 转换传入消息的内容以匹配应用程序提供的处理程序的签名。
  2. 将传出消息的内容转换为有线格式。

 

The wire format is typically byte[] (that is true for the Kafka and Rabbit binders), but it is governed by the binder implementation.

 

有线格式通常是byte[](对于Kafka和Rabbit绑定器也是如此),但它受绑定器实现的控制。

 

In Spring Cloud Stream, message transformation is accomplished with an org.springframework.messaging.converter.MessageConverter.

 

在Spring Cloud Stream中,消息转换是通过消息转换器org.springframework.messaging.converter.MessageConverter完成的。

 

As a supplement to the details to follow, you may also want to read the following blog post.

作为要遵循的细节的补充,您可能还想阅读以下博客文章。

 

8.1. Mechanics   机制

 

To better understand the mechanics and the necessity behind content-type negotiation, we take a look at a very simple use case by using the following message handler as an example:

 

为了更好地理解内容类型协商背后的机制和必要性,我们通过使用以下消息处理程序作为示例来查看一个非常简单的用例:

 

@StreamListener(Processor.INPUT)
@SendTo(Processor.OUTPUT)
public String handle(Person person) {..}

 

For simplicity, we assume that this is the only handler in the application (we assume there is no internal pipeline).

为简单起见,我们假设这是应用程序中唯一的处理程序(我们假设没有内部管道)。

 

The handler shown in the preceding example expects a Person object as an argument and produces a String type as an output. In order for the framework to succeed in passing the incoming Message as an argument to this handler, it has to somehow transform the payload of the Message type from the wire format to a Person type. In other words, the framework must locate and apply the appropriate MessageConverter. To accomplish that, the framework needs some instructions from the user. One of these instructions is already provided by the signature of the handler method itself (Person type). Consequently, in theory, that should be (and, in some cases, is) enough. However, for the majority of use cases, in order to select the appropriate MessageConverter, the framework needs an additional piece of information. That missing piece is contentType.

 

前面示例中显示的处理程序将Person对象作为参数,并生成String类型作为输出。为了使框架成功将传入Message作为参数传递给此处理程序,它必须以某种方式将Message类型的负载从有线格式转换为Person类型。换句话说,框架必须找到并应用适当的MessageConverter。为此,框架需要用户的一些指示。其中一条指令已由处理程序方法本身(Person类型)的签名提供。因此,从理论上讲,这应该(并且在某些情况下)应该足够了。但是,对于大多数用例,要选择合适的MessageConverter,框架需要额外的信息。那个缺失的部分是contentType。

 

Spring Cloud Stream provides three mechanisms to define contentType (in order of precedence):

  1. HEADER: The contentType can be communicated through the Message itself. By providing a contentType header, you declare the content type to use to locate and apply the appropriate MessageConverter.
  2. BINDING: The contentType can be set per destination binding by setting the spring.cloud.stream.bindings.input.content-type property.

The input segment in the property name corresponds to the actual name of the destination (which is “input” in our case). This approach lets you declare, on a per-binding basis, the content type to use to locate and apply the appropriate MessageConverter.

  3. DEFAULT: If contentType is not present in the Message header or the binding, the default application/json content type is used to locate and apply the appropriate MessageConverter.

 

Spring Cloud Stream提供了三种机制来定义contentType(按优先顺序排列):

  1. HEADER:contentType可以通过Message本身进行通信。通过提供contentType header,您可以声明要用于查找和应用适当的MessageConverter的内容类型。
  2. BINDING:每个目标绑定都可以通过spring.cloud.stream.bindings.input.content-type属性设置contentType。

 

属性名称中的input段对应于目标的实际名称(在我们的示例中为“input”)。此方法允许您基于每个绑定声明用于查找和应用适当MessageConverter的内容类型。

  3. DEFAULT:如果Message header或绑定中不存在contentType,则默认application/json内容类型用于查找和应用适当的MessageConverter。

 

As mentioned earlier, the preceding list also demonstrates the order of precedence in case of a tie. For example, a header-provided content type takes precedence over any other content type. The same applies for a content type set on a per-binding basis, which essentially lets you override the default content type. However, it also provides a sensible default (which was determined from community feedback).

 

如前所述,前面的列表还展示了出现冲突时的优先顺序。例如,header提供的内容类型优先于任何其他内容类型。基于每个绑定设置的内容类型同样如此,它实际上允许您覆盖默认内容类型。不过,框架也提供了合理的默认值(根据社区反馈确定)。
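For example (a hedged sketch: the binding name output and the use of MessageBuilder are illustrative), the binding-level default could be set with spring.cloud.stream.bindings.output.content-type=application/json, while a contentType header set on the Message itself takes precedence over it:

// org.springframework.messaging.support.MessageBuilder, org.springframework.messaging.MessageHeaders
Message<Person> message = MessageBuilder.withPayload(person)
        .setHeader(MessageHeaders.CONTENT_TYPE, "application/json")
        .build();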

 

Another reason for making application/json the default stems from the interoperability requirements driven by distributed microservices architectures, where producer and consumer not only run in different JVMs but can also run on different non-JVM platforms.

 

使application/json成为默认值的另一个原因源于分布式微服务架构驱动的互操作性要求,其中生产者和消费者不仅在不同的JVM中运行,而且还可以在不同的非JVM平台上运行。

 

When the non-void handler method returns, if the return value is already a Message, that Message becomes the payload. However, when the return value is not a Message, the new Message is constructed with the return value as the payload while inheriting headers from the input Message minus the headers defined or filtered by SpringIntegrationProperties.messageHandlerNotPropagatedHeaders. By default, there is only one header set there: contentType. This means that the new Message does not have a contentType header set, thus ensuring that the contentType can evolve. You can always opt out of returning a Message from the handler method, where you can inject any header you wish.

 

当非void处理程序方法返回时,如果返回值已经是Message,那么该Message将成为载荷。但是,当返回值不是Message时,会以返回值作为载荷构造新的Message,同时继承输入Message的headers,但去除由SpringIntegrationProperties.messageHandlerNotPropagatedHeaders定义或过滤的headers。默认情况下,那里只设置了一个header:contentType。这意味着新的Message没有设置contentType header,从而确保contentType可以演进。您也可以选择从处理程序方法返回Message,这样就可以注入任何所需的header。
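As a hedged illustration of that last point (reusing the earlier Person handler; the custom header name is hypothetical), returning a Message yourself gives you full control over the outgoing headers:

@StreamListener(Processor.INPUT)
@SendTo(Processor.OUTPUT)
public Message<String> handle(Person person) {
    // Building the Message yourself lets you set any headers, including contentType
    return MessageBuilder.withPayload(person.toString())
            .setHeader(MessageHeaders.CONTENT_TYPE, MimeTypeUtils.TEXT_PLAIN)
            .setHeader("customHeader", "someValue")
            .build();
}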

 

If there is an internal pipeline, the Message is sent to the next handler by going through the same process of conversion. However, if there is no internal pipeline or you have reached the end of it, the Message is sent back to the output destination.

 

如果存在内部管道,则通过相同的转换过程将Message发送到下一个处理程序。但是,如果没有内部管道或者您已到达它的末尾,则会将Message发送回输出目标。

 

8.1.1. Content Type versus Argument Type   内容类型与参数类型

 

As mentioned earlier, for the framework to select the appropriate MessageConverter, it requires argument type and, optionally, content type information. The logic for selecting the appropriate MessageConverter resides with the argument resolvers (HandlerMethodArgumentResolvers), which trigger right before the invocation of the user-defined handler method (which is when the actual argument type is known to the framework). If the argument type does not match the type of the current payload, the framework delegates to the stack of the pre-configured MessageConverters to see if any one of them can convert the payload. As you can see, the Object fromMessage(Message<?> message, Class<?> targetClass); operation of the MessageConverter takes targetClass as one of its arguments. The framework also ensures that the provided Message always contains a contentType header. When no contentType header was already present, it injects either the per-binding contentType header or the default contentType header. The combination of contentType and argument type is the mechanism by which the framework determines whether a message can be converted to a target type. If no appropriate MessageConverter is found, an exception is thrown, which you can handle by adding a custom MessageConverter (see “User-defined Message Converters”).

 

如前所述,框架要选择适当的MessageConverter,需要参数类型以及可选的内容类型信息。选择适当MessageConverter的逻辑位于参数解析器(HandlerMethodArgumentResolvers)中,它们在调用用户定义的处理程序方法之前触发(此时框架已知实际参数类型)。如果参数类型与当前载荷的类型不匹配,则框架委托给预先配置的MessageConverters栈,以查看其中是否有任何一个可以转换载荷。如您所见,MessageConverter的 Object fromMessage(Message<?> message, Class<?> targetClass); 操作将targetClass作为其参数之一。该框架还确保提供的Message始终包含contentType header。如果尚未存在contentType header,则会注入每个绑定的contentType header或默认contentType header。contentType与参数类型的组合是框架确定消息是否可以转换为目标类型的机制。如果找不到合适的MessageConverter,则抛出异常,您可以通过添加自定义MessageConverter来处理该异常(请参阅“用户定义的消息转换器”)。

 

But what if the payload type matches the target type declared by the handler method? In this case, there is nothing to convert, and the payload is passed unmodified. While this sounds pretty straightforward and logical, keep in mind handler methods that take a Message<?> or Object as an argument. By declaring the target type to be Object (which is an instanceof everything in Java), you essentially forfeit the conversion process.

 

但是,如果载荷类型与处理程序方法声明的目标类型匹配怎么办?在这种情况下,没有任何东西需要转换,载荷将原样传递。虽然这听起来非常简单且合乎逻辑,但请注意那些以Message<?>或Object作为参数的处理程序方法。通过将目标类型声明为Object(在Java中它是一切类型的instanceof),您实际上就放弃了转换过程。
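A minimal sketch of such a handler (assuming the Rabbit or Kafka binder, where the wire format is byte[]; the manual decoding step is illustrative) might look as follows:

// import java.nio.charset.StandardCharsets;
@StreamListener(Processor.INPUT)
public void handle(Message<?> message) {
    // No conversion is attempted for Message<?> (or Object) arguments,
    // so the payload is typically still in the wire format.
    Object payload = message.getPayload();
    if (payload instanceof byte[]) {
        String text = new String((byte[]) payload, StandardCharsets.UTF_8);
        // deserialize 'text' yourself as needed
    }
}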

 

Do not expect Message to be converted into some other type based only on the contentType. Remember that the contentType is complementary to the target type. If you wish, you can provide a hint, which MessageConverter may or may not take into consideration.

不要指望仅基于contentType就能将Message转换为其他某种类型。请记住,contentType是对目标类型的补充。如果您愿意,可以提供一个提示,MessageConverter可能会也可能不会考虑它。

 

8.1.2. Message Converters   消息转换器

 

MessageConverters define two methods:

MessageConverters定义两种方法:

 

Object fromMessage(Message<?> message, Class<?> targetClass);

 

Message<?> toMessage(Object payload, @Nullable MessageHeaders headers);

 

It is important to understand the contract of these methods and their usage, specifically in the context of Spring Cloud Stream.

 

了解这些方法及其用法的合同非常重要,特别是在Spring Cloud Stream的上下文中。

 

The fromMessage method converts an incoming Message to an argument type. The payload of the Message could be any type, and it is up to the actual implementation of the MessageConverter to support multiple types. For example, some JSON converter may support the payload type as byte[], String, and others. This is important when the application contains an internal pipeline (that is, input → handler1 → handler2 →. . . → output) and the output of the upstream handler results in a Message which may not be in the initial wire format.

 

fromMessage方法将传入的Message转换为参数类型。Message的载荷可以是任何类型,是否支持多种类型取决于MessageConverter的实际实现。例如,某些JSON转换器可以支持byte[]、String等载荷类型。当应用程序包含内部管道(即,输入→处理程序1→处理程序2→...→输出),并且上游处理程序的输出生成的Message可能不是初始有线格式时,这一点很重要。

 

However, the toMessage method has a more strict contract and must always convert Message to the wire format: byte[].

 

但是,toMessage方法具有更严格的合同,并且必须始终将Message转换为有线格式:byte[]。

 

So, for all intents and purposes (and especially when implementing your own converter) you regard the two methods as having the following signatures:

 

因此,对于所有意图和目的(尤其是在实现您自己的转换器时),您认为这两种方法具有以下签名:

 

Object fromMessage(Message<?> message, Class<?> targetClass);

 

Message<byte[]> toMessage(Object payload, @Nullable MessageHeaders headers);

 

8.2. Provided MessageConverters   已提供的消息转换器

 

As mentioned earlier, the framework already provides a stack of MessageConverters to handle most common use cases. The following list describes the provided MessageConverters, in order of precedence (the first MessageConverter that works is used):

  1. ApplicationJsonMessageMarshallingConverter: Variation of the org.springframework.messaging.converter.MappingJackson2MessageConverter. Supports conversion of the payload of the Message to/from POJO for cases when contentType is application/json (DEFAULT).
  2. TupleJsonMessageConverter: DEPRECATED Supports conversion of the payload of the Message to/from org.springframework.tuple.Tuple.
  3. ByteArrayMessageConverter: Supports conversion of the payload of the Message from byte[] to byte[] for cases when contentType is application/octet-stream. It is essentially a pass through and exists primarily for backward compatibility.
  4. ObjectStringMessageConverter: Supports conversion of any type to a String when contentType is text/plain. It invokes Object’s toString() method or, if the payload is byte[], a new String(byte[]).
  5. JavaSerializationMessageConverter: DEPRECATED Supports conversion based on java serialization when contentType is application/x-java-serialized-object.
  6. KryoMessageConverter: DEPRECATED Supports conversion based on Kryo serialization when contentType is application/x-java-object.
  7. JsonUnmarshallingConverter: Similar to the ApplicationJsonMessageMarshallingConverter. It supports conversion of any type when contentType is application/x-java-object. It expects the actual type information to be embedded in the contentType as an attribute (for example, application/x-java-object;type=foo.bar.Cat).

 

如前所述,框架已经提供了一个MessageConverters栈来处理大多数常见用例。以下列表按优先顺序(使用的第一个有效的MessageConverter)描述了所提供的MessageConverters:

  1. ApplicationJsonMessageMarshallingConverter:org.springframework.messaging.converter.MappingJackson2MessageConverter的变体。支持的Message载荷转换到POJO,或者相反,当contentType是application/json(默认)时。
  2. TupleJsonMessageConverter:DEPRECATED 支持Message的负载转换为org.springframework.tuple.Tuple,或者相反。
  3. ByteArrayMessageConverter:支持Message的载荷转换从byte[]到byte[],当contentType是application/octet-stream的情况下。它本质上是一种传递,主要用于向后兼容。
  4. ObjectStringMessageConverter:支持任何类型到String的转换,当contentType是text/plain时。它调用Object的toString()方法,或者,如果负载是byte[],则调用new String(byte[])。
  5. JavaSerializationMessageConverter:DEPRECATED 支持基于Java序列化的转换,当contentType为application/x-java-serialized-object时。
  6. KryoMessageConverter:DEPRECATED 支持基于Kryo序列化的转换,当contentType为application/x-java-object时。
  7. JsonUnmarshallingConverter:类似于ApplicationJsonMessageMarshallingConverter。当contentType是application/x-java-object时,它支持任何类型的转换。它期望将实际类型信息嵌入到contentType属性中(例如,application/x-java-object;type=foo.bar.Cat)。

 

When no appropriate converter is found, the framework throws an exception. When that happens, you should check your code and configuration and ensure you did not miss anything (that is, ensure that you provided a contentType by using a binding or a header). However, most likely, you found some uncommon case (such as a custom contentType perhaps) and the current stack of provided MessageConverters does not know how to convert. If that is the case, you can add a custom MessageConverter. See User-defined Message Converters.

 

如果找不到合适的转换器,框架将抛出异常。当发生这种情况时,您应该检查您的代码和配置,并确保您没有遗漏任何内容(即,确保您通过使用绑定或header提供了contentType)。但是,最有可能的是,您发现了一些不常见的情况(例如自定义contentType)并且当前提供的MessageConverters栈不知道如何转换。如果是这种情况,您可以添加自定义MessageConverter。请参阅用户定义的消息转换器。

 

8.3. User-defined Message Converters   用户定义的消息转换器

 

Spring Cloud Stream exposes a mechanism to define and register additional MessageConverters. To use it, implement org.springframework.messaging.converter.MessageConverter, configure it as a @Bean, and annotate it with @StreamMessageConverter. It is then appended to the existing stack of `MessageConverter`s.

 

Spring Cloud Stream公开了一种定义和注册附加MessageConverters的机制。要使用它,请实现org.springframework.messaging.converter.MessageConverter,将其配置为@Bean,并使用@StreamMessageConverter注释它。然后将它附加到`MessageConverter`s的现有栈上。

 

It is important to understand that custom MessageConverter implementations are added to the head of the existing stack. Consequently, custom MessageConverter implementations take precedence over the existing ones, which lets you override as well as add to the existing converters.

重要的是要理解自定义MessageConverter实现被添加到现有栈的头部。因此,自定义MessageConverter实现优先于现有实现,这使您可以覆盖以及添加到现有转换器。

 

The following example shows how to create a message converter bean to support a new content type called application/bar:

 

以下示例说明如何创建消息转换器bean以支持名为application/bar的新内容类型:

 

@EnableBinding(Sink.class)

@SpringBootApplication

public static class SinkApplication {

 

    ...

 

    @Bean

    @StreamMessageConverter

    public MessageConverter customMessageConverter() {

        return new MyCustomMessageConverter();

    }

}

 

public class MyCustomMessageConverter extends AbstractMessageConverter {

 

    public MyCustomMessageConverter() {

        super(new MimeType("application", "bar"));

    }

 

    @Override

    protected boolean supports(Class<?> clazz) {

        return (Bar.class.equals(clazz));

    }

 

    @Override

    protected Object convertFromInternal(Message<?> message, Class<?> targetClass, Object conversionHint) {

        Object payload = message.getPayload();

        return (payload instanceof Bar ? payload : new Bar((byte[]) payload));

    }

}
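To route inbound messages that carry no contentType header to this converter, you could (as a hedged example for the Sink's input binding) set the binding-level content type accordingly:

spring.cloud.stream.bindings.input.content-type=application/bar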

 

Spring Cloud Stream also provides support for Avro-based converters and schema evolution. See “Schema Evolution Support” for details.

 

Spring Cloud Stream还为基于Avro的转换器和模式演变提供支持。有关详细信息,请参阅“ 架构演进支持 ”。

 

9. Schema Evolution Support   架构演进支持

Spring Cloud Stream provides support for schema evolution so that the data can be evolved over time and still work with older or newer producers and consumers and vice versa. Most serialization models, especially the ones that aim for portability across different platforms and languages, rely on a schema that describes how the data is serialized in the binary payload. In order to serialize the data and then to interpret it, both the sending and receiving sides must have access to a schema that describes the binary format. In certain cases, the schema can be inferred from the payload type on serialization or from the target type on deserialization. However, many applications benefit from having access to an explicit schema that describes the binary data format. A schema registry lets you store schema information in a textual format (typically JSON) and makes that information accessible to various applications that need it to receive and send data in binary format. A schema is referenceable as a tuple consisting of:

  • A subject that is the logical name of the schema
  • The schema version
  • The schema format, which describes the binary format of the data

The following sections go through the details of the various components involved in the schema evolution process.

 

Spring Cloud Stream为模式演变提供支持,以便数据可以随着时间的推移而发展,并且仍然可以与较旧或较新的生产者和消费者一起使用,反之亦然。大多数序列化模型,特别是那些旨在跨不同平台和语言进行可移植性的模型,依赖于描述如何在二进制负载中序列化数据的模式。为了序列化数据然后解释它,发送方和接收方都必须能够访问描述二进制格式的模式。在某些情况下,可以从序列化的负载类型或反序列化的目标类型推断出模式。但是,许多应用程序受益于能够访问描述二进制数据格式的显式模式。模式注册表允许您以文本格式(通常是JSON)存储模式信息,并使该信息可供需要它以二进制格式接收和发送数据的各种应用程序访问。模式可作为元组引用,包括:

  • 作为架构的逻辑名称的主题
  • 架构版本
  • 模式格式,描述数据的二进制格式

以下部分将详细介绍模式演变过程中涉及的各个组件。

 

9.1. Schema Registry Client   架构注册表客户端

 

The client-side abstraction for interacting with schema registry servers is the SchemaRegistryClient interface, which has the following structure:

 

用于与模式注册表服务器交互的客户端抽象是SchemaRegistryClient接口,它具有以下结构:

 

public interface SchemaRegistryClient {

 

    SchemaRegistrationResponse register(String subject, String format, String schema);

 

    String fetch(SchemaReference schemaReference);

 

    String fetch(Integer id);

 

}

 

Spring Cloud Stream provides out-of-the-box implementations for interacting with its own schema server and for interacting with the Confluent Schema Registry.

 

Spring Cloud Stream提供了开箱即用的实现,可以与自己的架构服务器进行交互,并与Confluent Schema Registry进行交互。

 

A client for the Spring Cloud Stream schema registry can be configured by using the @EnableSchemaRegistryClient, as follows:

 

可以使用@EnableSchemaRegistryClient配置Spring Cloud Stream模式注册表的客户端,如下:

 

  @EnableBinding(Sink.class)

  @SpringBootApplication

  @EnableSchemaRegistryClient

  public static class AvroSinkApplication {

    ...

  }

 

The default converter is optimized to cache not only the schemas from the remote server but also the parse() and toString() methods, which are quite expensive. Because of this, it uses a DefaultSchemaRegistryClient that does not cache responses. If you intend to change the default behavior, you can use the client directly on your code and override it to the desired outcome. To do so, you have to add the property spring.cloud.stream.schemaRegistryClient.cached=true to your application properties.

默认转换器经过优化,不仅可以缓存来自远程服务器的模式,还可以缓存非常昂贵的parse()和toString()方法。因此,它使用不缓存响应的DefaultSchemaRegistryClient。如果您打算更改默认行为,可以直接在代码上使用客户端并将其覆盖到所需的结果。为此,您必须将属性spring.cloud.stream.schemaRegistryClient.cached=true添加到应用程序属性中。

 

9.1.1. Schema Registry Client Properties   架构注册表客户端属性

The Schema Registry Client supports the following properties:

 

Schema Registry Client支持以下属性:

 

spring.cloud.stream.schemaRegistryClient.endpoint

The location of the schema-server. When setting this, use a full URL, including protocol (http or https), port, and context path.

 

架构服务器的位置。设置此项时,请使用完整的URL,包括协议(http或https),端口和上下文路径。

 

Default

localhost:8990/

spring.cloud.stream.schemaRegistryClient.cached

Whether the client should cache schema server responses. Normally set to false, as the caching happens in the message converter. Clients using the schema registry client should set this to true.

 

客户端是否应缓存架构服务器响应。通常设置为false,因为缓存发生在消息转换器中。使用模式注册表客户端的客户端应将此设置为true。

 

Default

true

 

9.2. Avro Schema Registry Client Message Converters   Avro架构注册表客户端消息转换器

 

For applications that have a SchemaRegistryClient bean registered with the application context, Spring Cloud Stream auto configures an Apache Avro message converter for schema management. This eases schema evolution, as applications that receive messages can get easy access to a writer schema that can be reconciled with their own reader schema.

 

对于在应用程序上下文中注册了SchemaRegistryClient bean的应用程序,Spring Cloud Stream会自动配置Apache Avro消息转换器以进行模式管理。这样可以简化模式演变,因为接收消息的应用程序可以轻松访问可与自己的读取器模式协调的编写器模式。

 

For outbound messages, if the content type of the channel is set to application/*+avro, the MessageConverter is activated, as shown in the following example:

 

对于出站消息,如果通道的内容类型设置为application/*+avro,则MessageConverter激活,如以下示例所示:

 

spring.cloud.stream.bindings.output.contentType=application/*+avro

 

During the outbound conversion, the message converter tries to infer the schema of each outbound message (based on its type) and register it to a subject (based on the payload type) by using the SchemaRegistryClient. If an identical schema is already found, then a reference to it is retrieved. If not, the schema is registered, and a new version number is provided. The message is sent with a contentType header by using the following scheme: application/[prefix].[subject].v[version]+avro, where prefix is configurable and subject is deduced from the payload type.

 

在出站转换期间,消息转换器尝试推断每个出站消息的模式(基于其类型),并使用SchemaRegistryClient将其注册到主题(基于负载类型)。如果已找到相同的模式,则获取对其的引用。如果不是,则注册模式,并提供新的版本号。通过以下模式使用contentType header发送消息:application/[prefix].[subject].v[version]+avro,其中prefix是可配置的并且subject从负载类型推导出。

 

For example, a message of the type User might be sent as a binary payload with a content type of application/vnd.user.v2+avro, where user is the subject and 2 is the version number.

 

例如,User类型的消息可以作为二进制载荷发送,其内容类型为application/vnd.user.v2+avro,其中user是主题,2是版本号。

 

When receiving messages, the converter infers the schema reference from the header of the incoming message and tries to retrieve it. The schema is used as the writer schema in the deserialization process.

 

接收消息时,转换器会从传入消息的header中推断出架构引用,并尝试获取它。该模式在反序列化过程中用作编写器模式。

 

9.2.1. Avro Schema Registry Message Converter Properties   Avro架构注册表消息转换器属性

 

If you have enabled Avro based schema registry client by setting spring.cloud.stream.bindings.output.contentType=application/*+avro, you can customize the behavior of the registration by setting the following properties.

 

如果通过设置spring.cloud.stream.bindings.output.contentType=application/*+avro启用了基于Avro的架构注册表客户端,则可以通过设置以下属性来自定义注册行为。

 

spring.cloud.stream.schema.avro.dynamicSchemaGenerationEnabled

Enable if you want the converter to use reflection to infer a Schema from a POJO.

 

如果希望转换器使用反射从POJO中推断架构,则启用。

 

Default: false

spring.cloud.stream.schema.avro.readerSchema

Avro compares schema versions by looking at a writer schema (origin payload) and a reader schema (your application payload). See the Avro documentation for more information. If set, this overrides any lookups at the schema server and uses the local schema as the reader schema. Default: null

 

Avro通过查看编写器模式(原始负载)和读取器模式(您的应用程序负载)来比较模式版本。有关更多信息,请参阅Avro文档。如果设置,则会覆盖架构服务器上的任何查找,并使用本地架构作为读取器模式。默认:null

 

spring.cloud.stream.schema.avro.schemaLocations

Registers any .avsc files listed in this property with the Schema Server.

 

使用架构服务器注册此属性中列出的所有.avsc文件。

 

Default: empty

spring.cloud.stream.schema.avro.prefix

The prefix to be used on the Content-Type header.

 

要在Content-Type header上使用的前缀。

 

Default: vnd
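The following is a hedged application.properties sketch that combines these settings; the schema file name and registry endpoint are illustrative:

spring.cloud.stream.bindings.output.contentType=application/*+avro
spring.cloud.stream.schema.avro.dynamicSchemaGenerationEnabled=true
spring.cloud.stream.schema.avro.schemaLocations=classpath:schemas/User.avsc
spring.cloud.stream.schema.avro.prefix=vnd
spring.cloud.stream.schemaRegistryClient.endpoint=http://localhost:8990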

 

9.3. Apache Avro Message Converters   Apache Avro消息转换器

 

Spring Cloud Stream provides support for schema-based message converters through its spring-cloud-stream-schema module. Currently, the only serialization format supported out of the box for schema-based message converters is Apache Avro, with more formats to be added in future versions.

 

The spring-cloud-stream-schema module contains two types of message converters that can be used for Apache Avro serialization:

  • Converters that use the class information of the serialized or deserialized objects or a schema with a location known at startup.
  • Converters that use a schema registry. They locate the schemas at runtime and dynamically register new schemas as domain objects evolve.

 

Spring Cloud Stream通过其spring-cloud-stream-schema模块为基于模式的消息转换器提供支持。目前,基于模式的消息转换器开箱即用的唯一序列化格式是Apache Avro,未来版本中将添加更多格式。

 

spring-cloud-stream-schema模块包含两种类型的消息转换器,可用于Apache Avro序列化:

  • 使用序列化或反序列化对象的类信息或具有启动时已知位置的模式的转换器。
  • 使用模式注册表的转换器。他们在运行时定位模式,并在域对象发展时动态注册新模式。

 

9.4. Converters with Schema Support   具有架构支持的转换器

The AvroSchemaMessageConverter supports serializing and deserializing messages either by using a predefined schema or by using the schema information available in the class (either reflectively or contained in the SpecificRecord). If you provide a custom converter, then the default AvroSchemaMessageConverter bean is not created. The following example shows a custom converter:

 

AvroSchemaMessageConverter通过使用预定义的模式,或使用类中可用的模式信息(反射性的或包含在SpecificRecord中)支持序列化和反序列化消息。如果您提供自定义转换器,则不会创建默认的AvroSchemaMessageConverter bean。以下示例显示了自定义转换器:

 

To use custom converters, you can simply add it to the application context, optionally specifying one or more MimeTypes with which to associate it. The default MimeType is application/avro.

 

要使用自定义转换器,只需将其添加到应用程序上下文中,可以选择指定一个或多个与之关联的MimeTypes。默认MimeType是application/avro。

 

If the target type of the conversion is a GenericRecord, a schema must be set.

 

如果转换的目标类型是GenericRecord,则必须设置模式。

 

The following example shows how to configure a converter in a sink application by registering the Apache Avro MessageConverter without a predefined schema. In this example, note that the mime type value is avro/bytes, not the default application/avro.

 

以下示例显示如何通过注册没有预定义模式的Apache Avro MessageConverter来在接收器应用程序中配置转换器。在此示例中,请注意mime类型值avro/bytes,而不是默认值application/avro。

 

@EnableBinding(Sink.class)

@SpringBootApplication

public static class SinkApplication {

 

  ...

 

  @Bean

  public MessageConverter userMessageConverter() {

      return new AvroSchemaMessageConverter(MimeType.valueOf("avro/bytes"));

  }

}

 

Conversely, the following application registers a converter with a predefined schema (found on the classpath):

 

相反,以下应用程序使用预定义模式(在类路径中找到)注册转换器:

 

@EnableBinding(Sink.class)

@SpringBootApplication

public static class SinkApplication {

 

  ...

 

  @Bean

  public MessageConverter userMessageConverter() {

      AvroSchemaMessageConverter converter = new AvroSchemaMessageConverter(MimeType.valueOf("avro/bytes"));

      converter.setSchemaLocation(new ClassPathResource("schemas/User.avro"));

      return converter;

  }

}

 

9.5. Schema Registry Server   架构注册表服务器

 

Spring Cloud Stream provides a schema registry server implementation. To use it, you can add the spring-cloud-stream-schema-server artifact to your project and use the @EnableSchemaRegistryServer annotation, which adds the schema registry server REST controller to your application. This annotation is intended to be used with Spring Boot web applications, and the listening port of the server is controlled by the server.port property. The spring.cloud.stream.schema.server.path property can be used to control the root path of the schema server (especially when it is embedded in other applications). The spring.cloud.stream.schema.server.allowSchemaDeletion boolean property enables the deletion of a schema. By default, this is disabled.

 

Spring Cloud Stream提供架构注册服务器实现。要使用它,您可以将spring-cloud-stream-schema-server工件添加到项目中并使用@EnableSchemaRegistryServer注释,该注释将模式注册表服务器REST控制器添加到您的应用程序。此注释旨在与Spring Boot Web应用程序一起使用,并且服务器的侦听端口由server.port属性控制。spring.cloud.stream.schema.server.path属性可用于控制模式服务器的根路径(特别是当它嵌入其他应用程序时)。spring.cloud.stream.schema.server.allowSchemaDeletion布尔属性允许删除模式。默认情况下,此功能处于禁用状态。
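For example (a hedged sketch; the port and path values are illustrative), an embedded schema registry server might be configured as follows:

server.port=8990
spring.cloud.stream.schema.server.path=/schema-registry
spring.cloud.stream.schema.server.allowSchemaDeletion=true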

 

The schema registry server uses a relational database to store the schemas. By default, it uses an embedded database. You can customize the schema storage by using the Spring Boot SQL database and JDBC configuration options.

 

模式注册表服务器使用关系数据库来存储模式。默认情况下,它使用嵌入式数据库。您可以使用Spring Boot SQL数据库和JDBC配置选项自定义架构存储。

 

The following example shows a Spring Boot application that enables the schema registry:

 

以下示例显示了启用架构注册表的Spring Boot应用程序:

 

@SpringBootApplication
@EnableSchemaRegistryServer
public class SchemaRegistryServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(SchemaRegistryServerApplication.class, args);
    }
}

 

9.5.1. Schema Registry Server API   架构注册表服务器API

 

The Schema Registry Server API consists of the following operations:

 

Schema Registry Server API包含以下操作:

 

 

Registering a New Schema   注册新架构

 

To register a new schema, send a POST request to the / endpoint.

The / accepts a JSON payload with the following fields:

  • subject: The schema subject
  • format: The schema format
  • definition: The schema definition

 

要注册新架构,请向/端点发送POST请求。

/接受具有以下字段的JSON载荷:

  • subject:架构主题
  • format:架构格式
  • definition:架构定义

 

Its response is a schema object in JSON, with the following fields:

  • id: The schema ID
  • subject: The schema subject
  • format: The schema format
  • version: The schema version
  • definition: The schema definition

 

它的响应是JSON中的模式对象,包含以下字段:

  • id:架构ID
  • subject:架构主题
  • format:架构格式
  • version:架构版本
  • definition:架构定义
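As a hedged illustration, a request body for registering an Avro schema might look like the following (the subject name and schema definition are purely illustrative):

{
  "subject": "user",
  "format": "avro",
  "definition": "{\"type\":\"record\",\"name\":\"User\",\"fields\":[{\"name\":\"name\",\"type\":\"string\"}]}"
}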

 

Retrieving an Existing Schema by Subject, Format, and Version   按主题,格式,和版本检索现有架构

 

To retrieve an existing schema by subject, format, and version, send GET request to the /{subject}/{format}/{version} endpoint.

Its response is a schema object in JSON, with the following fields:

  • id: The schema ID
  • subject: The schema subject
  • format: The schema format
  • version: The schema version
  • definition: The schema definition

 

要按主题,格式和版本检索现有架构,请将GET请求发送到/{subject}/{format}/{version}端点。

它的响应是JSON中的模式对象,包含以下字段:

  • id:架构ID
  • subject:架构主题
  • format:架构格式
  • version:架构版本
  • definition:架构定义

 

Retrieving an Existing Schema by Subject and Format   按主题和格式检索现有架构

 

To retrieve an existing schema by subject and format, send a GET request to the /subject/format endpoint.

Its response is a list of schemas with each schema object in JSON, with the following fields:

  • id: The schema ID
  • subject: The schema subject
  • format: The schema format
  • version: The schema version
  • definition: The schema definition

 

要按主题和格式检索现有架构,GET请向/subject/format端点发送请求。

它的响应是JSON中每个模式对象的模式列表,包含以下字段:

  • id:架构ID
  • subject:架构主题
  • format:架构格式
  • version:架构版本
  • definition:架构定义

 

Retrieving an Existing Schema by ID   按ID检索现有架构

 

To retrieve a schema by its ID, send a GET request to the /schemas/{id} endpoint.

Its response is a schema object in JSON, with the following fields:

  • id: The schema ID
  • subject: The schema subject
  • format: The schema format
  • version: The schema version
  • definition: The schema definition

 

要通过其ID检索架构,GET请向/schemas/{id}端点发送请求。

它的响应是JSON中的模式对象,包含以下字段:

  • id:架构ID
  • subject:架构主题
  • format:架构格式
  • version:架构版本
  • definition:架构定义

 

Deleting a Schema by Subject, Format, and Version   按主题,格式,和版本删除架构

 

To delete a schema identified by its subject, format, and version, send a DELETE request to the /{subject}/{format}/{version} endpoint.

要删除由其主题,格式和版本标识的模式,DELETE请向/{subject}/{format}/{version}端点发送请求。

 

Deleting a Schema by ID   按ID删除架构

 

To delete a schema by its ID, send a DELETE request to the /schemas/{id} endpoint.

要按其ID删除架构,DELETE请向/schemas/{id}端点发送请求。

 

Deleting a Schema by Subject  按主题删除架构

DELETE /{subject}

Delete existing schemas by their subject.

DELETE /{subject}

按主题删除现有架构。

 

This note applies to users of Spring Cloud Stream 1.1.0.RELEASE only. Spring Cloud Stream 1.1.0.RELEASE used the table name, schema, for storing Schema objects. Schema is a keyword in a number of database implementations. To avoid any conflicts in the future, starting with 1.1.1.RELEASE, we have opted for the name SCHEMA_REPOSITORY for the storage table. Any Spring Cloud Stream 1.1.0.RELEASE users who upgrade should migrate their existing schemas to the new table before upgrading.

本说明仅适用于Spring Cloud Stream 1.1.0.RELEASE的用户。Spring Cloud Stream 1.1.0.RELEASE使用表名schema来存储Schema对象。Schema是许多数据库实现中的关键字。为了避免将来出现任何冲突,从1.1.1.RELEASE开始,我们选择了SCHEMA_REPOSITORY作为存储表的名称。任何升级的Spring Cloud Stream 1.1.0.RELEASE用户都应该在升级之前将其现有架构迁移到新表。

 

9.5.2. Using Confluent’s Schema Registry   使用Confluent的架构注册表

 

The default configuration creates a DefaultSchemaRegistryClient bean. If you want to use the Confluent schema registry, you need to create a bean of type ConfluentSchemaRegistryClient, which supersedes the one configured by default by the framework. The following example shows how to create such a bean:

 

默认配置创建一个DefaultSchemaRegistryClient bean。如果要使用Confluent模式注册表,则需要创建一个ConfluentSchemaRegistryClient类型的bean,它取代框架默认配置的bean。以下示例显示如何创建此类bean:

 

@Bean

public SchemaRegistryClient schemaRegistryClient(@Value("${spring.cloud.stream.schemaRegistryClient.endpoint}") String endpoint){

  ConfluentSchemaRegistryClient client = new ConfluentSchemaRegistryClient();

  client.setEndpoint(endpoint);

  return client;

}

 

The ConfluentSchemaRegistryClient is tested against Confluent platform version 4.0.0.

ConfluentSchemaRegistryClient针对Confluent平台版本4.0.0进行测试。

 

9.6. Schema Registration and Resolution   架构注册和解析

 

To better understand how Spring Cloud Stream registers and resolves new schemas and its use of Avro schema comparison features, we provide two separate subsections:

 

为了更好地了解Spring Cloud Stream如何注册和解析新架构及其对Avro架构比较功能的使用,我们提供了两个单独的小节:

 

 

9.6.1. Schema Registration Process (Serialization)   架构注册过程(序列化)

 

The first part of the registration process is extracting a schema from the payload that is being sent over a channel. Avro types such as SpecificRecord or GenericRecord already contain a schema, which can be retrieved immediately from the instance. In the case of POJOs, a schema is inferred if the spring.cloud.stream.schema.avro.dynamicSchemaGenerationEnabled property is set to true (the default).

 

注册过程的第一部分是从通过通道发送的负载中提取模式。Avro类型,例如SpecificRecord或GenericRecord已经包含模式,可以立即从实例中检索。在POJO的情况下,如果spring.cloud.stream.schema.avro.dynamicSchemaGenerationEnabled属性设置为true(默认值),则推断出模式。

 

Figure 7. Schema Writer Resolution Process

 

Once a schema is obtained, the converter loads its metadata (version) from the remote server. First, it queries a local cache. If no result is found, it submits the data to the server, which replies with versioning information. The converter always caches the results to avoid the overhead of querying the Schema Server for every new message that needs to be serialized.

 

获取模式之后,转换器从远程服务器加载其元数据(版本)。首先,它查询本地缓存。如果未找到任何结果,则会将数据提交给服务器,服务器会回复版本信息。转换器始终缓存结果,以避免为每个需要序列化的新消息进行查询架构服务器的开销。

 

Figure 8. Schema Registration Process

With the schema version information, the converter sets the contentType header of the message to carry the version information — for example: application/vnd.user.v1+avro.

 

使用模式版本信息,转换器设置消息的contentType header以携带版本信息 - 例如:application/vnd.user.v1+avro。

 

9.6.2. Schema Resolution Process (Deserialization)   架构解析过程(反序列化)

 

When reading messages that contain version information (that is, a contentType header with a scheme like the one described under “Schema Registration Process (Serialization)”), the converter queries the Schema server to fetch the writer schema of the message. Once it has found the correct schema of the incoming message, it retrieves the reader schema and, by using Avro’s schema resolution support, reads it into the reader definition (setting defaults and any missing properties).

 

当读取包含版本信息的消息(即,具有类似“ 模式注册过程(序列化) ”中描述的方案的contentType header)时,转换器查询模式服务器以获取消息的写入器模式。一旦找到传入消息的正确模式,它就会检索读取器模式,并通过使用Avro的模式解析支持将其读入读取器定义(设置默认值和任何缺少的属性)。

 

Figure 9. Schema Reading Resolution Process

 

You should understand the difference between a writer schema (the application that wrote the message) and a reader schema (the receiving application). We suggest taking a moment to read the Avro terminology and understand the process. Spring Cloud Stream always fetches the writer schema to determine how to read a message. If you want to get Avro’s schema evolution support working, you need to make sure that a readerSchema was properly set for your application.

您应该了解编写器模式(编写消息的应用程序)和读取器模式(接收应用程序)之间的区别。我们建议花点时间阅读Avro术语并理解该过程。Spring Cloud Stream始终获取编写器模式以确定如何阅读消息。如果您希望Avro的架构演变支持能够正常工作,您需要确保您的应用程序正确设置了读取器模式readerSchema。

 

10. Inter-Application Communication   应用程序之间通信

 

Spring Cloud Stream enables communication between applications. Inter-application communication is a complex issue spanning several concerns, as described in the following topics:

 

Spring Cloud Stream支持应用程序之间的通信。跨应用程序通信是一个复杂的问题,涉及多个问题,如以下主题中所述:

 

 

10.1. Connecting Multiple Application Instances   连接多个应用程序实例

 

While Spring Cloud Stream makes it easy for individual Spring Boot applications to connect to messaging systems, the typical scenario for Spring Cloud Stream is the creation of multi-application pipelines, where microservice applications send data to each other. You can achieve this scenario by correlating the input and output destinations of “adjacent” applications.

 

虽然Spring Cloud Stream使独立的Spring Boot应用程序连接到消息系统很容易,但是Spring Cloud Stream的典型场景是创建多个应用程序管道,其中微服务应用彼此之间发送数据。你可以通过关联“相邻”应用程序的输入和输出目标来实现此方案。

 

Suppose a design calls for the Time Source application to send data to the Log Sink application. You could use a common destination named ticktock for bindings within both applications.

 

假设一个设计要求Time Source应用程序将数据发送到Log Sink应用程序。您可以在两个应用程序中使用绑定的公共目标,命名为ticktock。

 

Time Source (that has the channel name output) would set the following property:

 

Time Source(具有通道名称output)将设置以下属性:

 

spring.cloud.stream.bindings.output.destination=ticktock

 

Log Sink (that has the channel name input) would set the following property:

 

Log Sink(具有通道名称input)将设置以下属性:

 

spring.cloud.stream.bindings.input.destination=ticktock

 

10.2. Instance Index and Instance Count   实例索引和实例计数

 

When scaling up Spring Cloud Stream applications, each instance can receive information about how many other instances of the same application exist and what its own instance index is. Spring Cloud Stream does this through the spring.cloud.stream.instanceCount and spring.cloud.stream.instanceIndex properties. For example, if there are three instances of a HDFS sink application, all three instances have spring.cloud.stream.instanceCount set to 3, and the individual applications have spring.cloud.stream.instanceIndex set to 0, 1, and 2, respectively.

 

在扩展Spring Cloud Stream应用程序时,每个实例都可以接收有关同一应用程序存在多少其他实例以及它自己的实例索引的信息。Spring Cloud Stream通过spring.cloud.stream.instanceCount和spring.cloud.stream.instanceIndex属性实现此目的。例如,如果HDFS接收器应用程序有三个实例,所有三个实例都将spring.cloud.stream.instanceCount设置为3,并且各个应用程序分别将spring.cloud.stream.instanceIndex设置为0、1和2。

 

When Spring Cloud Stream applications are deployed through Spring Cloud Data Flow, these properties are configured automatically; when Spring Cloud Stream applications are launched independently, these properties must be set correctly. By default, spring.cloud.stream.instanceCount is 1, and spring.cloud.stream.instanceIndex is 0.

 

当Spring Cloud Stream应用程序通过Spring Cloud Data Flow部署时,这些属性会自动配置; 当Spring Cloud Stream应用程序独立启动时,必须正确设置这些属性。默认情况下,spring.cloud.stream.instanceCount是1,spring.cloud.stream.instanceIndex是0。
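For example (a hedged sketch; the jar name is illustrative), the second of three manually launched instances would be started as follows:

java -jar hdfs-sink.jar \
    --spring.cloud.stream.instanceCount=3 \
    --spring.cloud.stream.instanceIndex=1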

 

In a scaled-up scenario, correct configuration of these two properties is important for addressing partitioning behavior (see below) in general, and the two properties are always required by certain binders (for example, the Kafka binder) in order to ensure that data are split correctly across multiple consumer instances.

 

在扩展方案中,正确配置这两个属性对于解决分区行为(见下文)非常重要,并且某些绑定器(例如,Kafka绑定器)始终需要这两个属性,以确保数据在多个消费者实例之间正确分割。

 

10.3. Partitioning   分区

Partitioning in Spring Cloud Stream consists of two tasks:

 

Spring Cloud Stream中的分区包含两个任务:

 

 

10.3.1. Configuring Output Bindings for Partitioning   配置输出绑定以进行分区

 

You can configure an output binding to send partitioned data by setting one and only one of its partitionKeyExpression or partitionKeyExtractorName properties, as well as its partitionCount property.

 

您可以通过设置partitionKeyExpression或partitionKeyExtractorName属性中的一个(且只能设置其中一个),以及partitionCount属性,将输出绑定配置为发送分区数据。

 

For example, the following is a valid and typical configuration:

 

例如,以下是有效且典型的配置:

 

spring.cloud.stream.bindings.output.producer.partitionKeyExpression=payload.id
spring.cloud.stream.bindings.output.producer.partitionCount=5

 

Based on that example configuration, data is sent to the target partition by using the following logic.

 

基于该示例配置,使用以下逻辑将数据发送到目标分区。

 

A partition key’s value is calculated for each message sent to a partitioned output channel based on the partitionKeyExpression. The partitionKeyExpression is a SpEL expression that is evaluated against the outbound message for extracting the partitioning key.

 

对于发送到分区输出通道的每条消息,基于partitionKeyExpression计算分区key的值。partitionKeyExpression是一个SpEL表达式,该表达式针对提取分区key的出站消息进行评估。

 

If a SpEL expression is not sufficient for your needs, you can instead calculate the partition key value by providing an implementation of org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy and configuring it as a bean (by using the @Bean annotation). If you have more than one bean of type org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy available in the Application Context, you can further filter it by specifying its name with the partitionKeyExtractorName property, as shown in the following example:

 

如果SpEL表达式不足以满足您的需要,您可以通过提供org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy实现并将其配置为bean(通过使用@Bean注释)来计算分区key值。如果在应用程序上下文中有多个org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy类型的bean可用,则可以通过使用partitionKeyExtractorName属性指定其名称来进一步过滤它,如以下示例所示:

 

--spring.cloud.stream.bindings.output.producer.partitionKeyExtractorName=customPartitionKeyExtractor
--spring.cloud.stream.bindings.output.producer.partitionCount=5
. . .
@Bean
public CustomPartitionKeyExtractorClass customPartitionKeyExtractor() {
    return new CustomPartitionKeyExtractorClass();
}
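The CustomPartitionKeyExtractorClass referenced above is not shown in the original text; a minimal sketch, assuming the key is carried in a (hypothetical) message header, could look as follows:

public class CustomPartitionKeyExtractorClass implements PartitionKeyExtractorStrategy {

    @Override
    public Object extractKey(Message<?> message) {
        // Use a header as the partition key, falling back to the payload itself
        Object key = message.getHeaders().get("partitionKey");
        return (key != null ? key : message.getPayload());
    }
}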

In previous versions of Spring Cloud Stream, you could specify the implementation of org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy by setting the spring.cloud.stream.bindings.output.producer.partitionKeyExtractorClass property. Since version 2.0, this property is deprecated, and support for it will be removed in a future version.

在以前版本的Spring Cloud Stream中,您可以通过设置spring.cloud.stream.bindings.output.producer.partitionKeyExtractorClass属性来指定org.springframework.cloud.stream.binder.PartitionKeyExtractorStrategy实现。从版本2.0开始,不推荐使用此属性,并且将在以后的版本中删除对该属性的支持。

 

Once the message key is calculated, the partition selection process determines the target partition as a value between 0 and partitionCount - 1. The default calculation, applicable in most scenarios, is based on the following formula: key.hashCode() % partitionCount. This can be customized on the binding, either by setting a SpEL expression to be evaluated against the 'key' (through the partitionSelectorExpression property) or by configuring an implementation of org.springframework.cloud.stream.binder.PartitionSelectorStrategy as a bean (by using the @Bean annotation). Similar to the PartitionKeyExtractorStrategy, you can further filter it by using the spring.cloud.stream.bindings.output.producer.partitionSelectorName property when more than one bean of this type is available in the Application Context, as shown in the following example:

 

一旦计算出消息key,分区选择过程就将目标分区确定为0和partitionCount - 1之间的值。适用于大多数情况的默认计算基于以下公式:key.hashCode() % partitionCount。这可以在绑定上自定义,通过设置要根据'key'(通过partitionSelectorExpression属性)计算的SpEL表达式,或者通过配置org.springframework.cloud.stream.binder.PartitionSelectorStrategy bean 的实现(通过使用@Bean注释)来定制。与PartitionKeyExtractorStrategy类似,如果在应用程序上下文中有多个此类型的bean可用时,您可以使用spring.cloud.stream.bindings.output.producer.partitionSelectorName属性进一步过滤它,如以下示例所示:

 

--spring.cloud.stream.bindings.output.producer.partitionSelectorName=customPartitionSelector
. . .
@Bean
public CustomPartitionSelectorClass customPartitionSelector() {
    return new CustomPartitionSelectorClass();
}
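Similarly, a minimal sketch of the CustomPartitionSelectorClass referenced above (it simply mirrors the default key.hashCode() % partitionCount formula while avoiding negative results) could be:

public class CustomPartitionSelectorClass implements PartitionSelectorStrategy {

    @Override
    public int selectPartition(Object key, int partitionCount) {
        // Equivalent to the default formula, but always non-negative
        return Math.floorMod(key.hashCode(), partitionCount);
    }
}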

 

In previous versions of Spring Cloud Stream you could specify the implementation of org.springframework.cloud.stream.binder.PartitionSelectorStrategy by setting the spring.cloud.stream.bindings.output.producer.partitionSelectorClass property. Since version 2.0, this property is deprecated and support for it will be removed in a future version.

在以前版本的Spring Cloud Stream中,您可以通过设置spring.cloud.stream.bindings.output.producer.partitionSelectorClass属性来指定org.springframework.cloud.stream.binder.PartitionSelectorStrategy实现。从版本2.0开始,不推荐使用此属性,并且将在以后的版本中删除对该属性的支持。

 

10.3.2. Configuring Input Bindings for Partitioning   配置输入绑定以进行分区

 

An input binding (with the channel name input) is configured to receive partitioned data by setting its partitioned property, as well as the instanceIndex and instanceCount properties on the application itself, as shown in the following example:

 

通过设置输入绑定(通道名称为input)的partitioned属性,以及在应用程序本身上设置instanceIndex和instanceCount属性,可以将其配置为接收分区数据,如下例所示:

 

spring.cloud.stream.bindings.input.consumer.partitioned=true
spring.cloud.stream.instanceIndex=3
spring.cloud.stream.instanceCount=5

 

The instanceCount value represents the total number of application instances between which the data should be partitioned. The instanceIndex must be a unique value across the multiple instances, with a value between 0 and instanceCount - 1. The instance index helps each application instance to identify the unique partition(s) from which it receives data. It is required by binders using technology that does not support partitioning natively. For example, with RabbitMQ, there is a queue for each partition, with the queue name containing the instance index. With Kafka, if autoRebalanceEnabled is true (default), Kafka takes care of distributing partitions across instances, and these properties are not required. If autoRebalanceEnabled is set to false, the instanceCount and instanceIndex are used by the binder to determine which partition(s) the instance subscribes to (you must have at least as many partitions as there are instances). The binder allocates the partitions instead of Kafka. This might be useful if you want messages for a particular partition to always go to the same instance. When a binder configuration requires them, it is important to set both values correctly in order to ensure that all of the data is consumed and that the application instances receive mutually exclusive datasets.

 

instanceCount值表示应在其间分区数据的应用程序实例的总数。instanceIndex必须是跨多个实例的唯一值,值介于0和instanceCount - 1之间。实例索引可帮助每个应用程序实例识别从中接收数据的唯一分区。绑定器需要使用不支持原生分区的技术。例如,使用RabbitMQ,每个分区都有一个队列,队列名称包含实例索引。使用Kafka,如果autoRebalanceEnabled是true(默认),则Kafka负责跨实例分发分区,并且不需要这些属性。如果autoRebalanceEnabled设置为false,则binder使用instanceCount和instanceIndex来确定实例所订阅的分区(您必须至少具有与实例一样多的分区)。绑定器分配分区而不是Kafka。如果您希望特定分区的消息始终转到同一个实例,这可能很有用。当绑定器配置需要它们时,重要的是正确设置两个值以确保消费所有数据并且应用程序实例接收互斥数据集。

 

While a scenario in which using multiple instances for partitioned data processing may be complex to set up in a standalone case, Spring Cloud Dataflow can simplify the process significantly by populating both the input and output values correctly and by letting you rely on the runtime infrastructure to provide information about the instance index and instance count.

 

虽然在独立部署的场景中,使用多个实例进行分区数据处理的设置可能比较复杂,但Spring Cloud Dataflow可以通过正确填充输入和输出值,并让您依赖运行时基础结构来提供实例索引和实例计数的信息,从而显著简化这一过程。

 

11. Testing

 

Spring Cloud Stream provides support for testing your microservice applications without connecting to a messaging system. You can do that by using the TestSupportBinder provided by the spring-cloud-stream-test-support library, which can be added as a test dependency to the application, as shown in the following example:

 

Spring Cloud Stream支持在不连接消息系统的情况下测试您的微服务应用程序。您可以使用spring-cloud-stream-test-support库提供的TestSupportBinder,可以将其作为测试依赖项添加到应用程序中,如以下示例所示:

 

   <dependency>
       <groupId>org.springframework.cloud</groupId>
       <artifactId>spring-cloud-stream-test-support</artifactId>
       <scope>test</scope>
   </dependency>

 

The TestSupportBinder uses the Spring Boot autoconfiguration mechanism to supersede the other binders found on the classpath. Therefore, when adding a binder as a dependency, you must make sure that the test scope is being used.

TestSupportBinder使用Spring Boot自动配置机制来取代类路径上的其他绑定器。因此,在添加绑定器作为依赖项时,必须确保使用的是test范围。

 

The TestSupportBinder lets you interact with the bound channels and inspect any messages sent and received by the application.

 

TestSupportBinder让你与绑定通道交互并检查应用程序发送和接收的任何消息。

 

For outbound message channels, the TestSupportBinder registers a single subscriber and retains the messages emitted by the application in a MessageCollector. They can be retrieved during tests and have assertions made against them.

 

对于出站消息通道,TestSupportBinder注册单个订阅者并保留应用程序在MessageCollector中发出的消息。可以在测试期间检索它们并对它们进行断言。

 

You can also send messages to inbound message channels so that the consumer application can consume the messages. The following example shows how to test both input and output channels on a processor:

 

您还可以将消息发送到入站消息通道,以便消费者应用程序可以消费消息。以下示例显示如何在处理器上测试输入和输出通道:

 

@RunWith(SpringRunner.class)

@SpringBootTest(webEnvironment= SpringBootTest.WebEnvironment.RANDOM_PORT)

public class ExampleTest {

 

  @Autowired

  private Processor processor;

 

  @Autowired

  private MessageCollector messageCollector;

 

  @Test

  @SuppressWarnings("unchecked")

  public void testWiring() {

    Message<String> message = new GenericMessage<>("hello");

    processor.input().send(message);

    Message<String> received = (Message<String>) messageCollector.forChannel(processor.output()).poll();

    assertThat(received.getPayload(), equalTo("hello world"));

  }

 

 

  @SpringBootApplication

  @EnableBinding(Processor.class)

  public static class MyProcessor {

 

    @Autowired

    private Processor channels;

 

    @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)

    public String transform(String in) {

      return in + " world";

    }

  }

}

 

In the preceding example, we create an application that has an input channel and an output channel, both bound through the Processor interface. The bound interface is injected into the test so that we can have access to both channels. We send a message on the input channel, and we use the MessageCollector provided by Spring Cloud Stream’s test support to capture that the message has been sent to the output channel as a result. Once we have received the message, we can validate that the component functions correctly.

 

在前面的示例中,我们创建了一个具有输入通道和输出通道的应用程序,两者都通过Processor接口绑定。绑定接口被注入到测试中,以便我们可以访问两个通道。我们在输入通道上发送消息,我们使用Spring Cloud Stream的测试支持提供的MessageCollector来捕获消息已经被发送到输出通道的结果。收到消息后,我们可以验证组件是否正常运行。

 

11.1. Disabling the Test Binder Autoconfiguration   关闭测试绑定器自动配置

 

The intent behind the test binder superseding all the other binders on the classpath is to make it easy to test your applications without making changes to your production dependencies. In some cases (for example, integration tests) it is useful to use the actual production binders instead, and that requires disabling the test binder autoconfiguration. To do so, you can exclude the org.springframework.cloud.stream.test.binder.TestSupportBinderAutoConfiguration class by using one of the Spring Boot autoconfiguration exclusion mechanisms, as shown in the following example:

 

测试绑定器取代类路径上所有其他绑定器的目的是使测试应用程序变得很容易,而无需更改生产依赖项。在某些情况下(例如,集成测试),使用实际的生产绑定器代替是有用的,这需要禁用测试绑定器自动配置。为此,您可以使用Spring Boot自动配置排除机制之一排除org.springframework.cloud.stream.test.binder.TestSupportBinderAutoConfiguration类,如以下示例所示:

 

    @SpringBootApplication(exclude = TestSupportBinderAutoConfiguration.class)

    @EnableBinding(Processor.class)

    public static class MyProcessor {

 

        @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)

        public String transform(String in) {

            return in + " world";

        }

    }

 

When autoconfiguration is disabled, the test binder is available on the classpath, and its defaultCandidate property is set to false so that it does not interfere with the regular user configuration. It can be referenced under the name, test, as shown in the following example:

 

禁用自动配置时,测试绑定器在类路径上仍然可用,并且其defaultCandidate属性设置为false,因此不会干扰常规用户配置。它可以在名称test下引用,如以下示例所示:

 

spring.cloud.stream.defaultBinder=test

 

12. Health Indicator   健康指标

 

Spring Cloud Stream provides a health indicator for binders. It is registered under the name binders and can be enabled or disabled by setting the management.health.binders.enabled property.

 

Spring Cloud Stream为绑定器提供了健康指示器。它是在名称binders下注册的,可以通过设置management.health.binders.enabled属性来启用或禁用。

 

By default management.health.binders.enabled is set to false. Setting management.health.binders.enabled to true enables the health indicator, allowing you to access the /health endpoint to retrieve the binder health indicators.

 

默认management.health.binders.enabled设置为false。设置management.health.binders.enabled为true启用健康指示器,允许您访问/health端点以检索绑定器健康指示器。
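For example:

management.health.binders.enabled=true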

 

Health indicators are binder-specific and certain binder implementations may not necessarily provide a health indicator.

 

健康指标是特定于绑定器的,某些绑定器实现可能不一定提供健康指示器。

 

13. Metrics Emitter   指标发射器

 

Spring Boot Actuator provides dependency management and auto-configuration for Micrometer, an application metrics facade that supports numerous monitoring systems.

 

Spring Boot Actuator为Micrometer提供依赖关系管理和自动配置,Micrometer是一个支持众多监控系统的应用程序指标外观。

 

Spring Cloud Stream provides support for emitting any available micrometer-based metrics to a binding destination, allowing for periodic collection of metric data from stream applications without relying on polling individual endpoints.

 

Spring Cloud Stream支持将任何可用的基于Micrometer的度量指标发送到绑定目标,从而可以定期从流应用程序收集度量数据,而无需依赖轮询各个端点。

 

Metrics Emitter is activated by defining the spring.cloud.stream.bindings.applicationMetrics.destination property, which specifies the name of the binding destination used by the current binder to publish metric messages.

 

通过定义spring.cloud.stream.bindings.applicationMetrics.destination属性来激活度量标准发射器,该属性指定当前绑定器用于发布度量标准消息的绑定目标的名称。

 

For example:

spring.cloud.stream.bindings.applicationMetrics.destination=myMetricDestination

 

The preceding example instructs the binder to bind to myMetricDestination (that is, Rabbit exchange, Kafka topic, and others).

 

前面的示例指示绑定器绑定到myMetricDestination(即,Rabbit交换,Kafka主题,和其他)。

 

The following properties can be used for customizing the emission of metrics:

 

以下属性可用于自定义指标的发布:

 

spring.cloud.stream.metrics.key

The name of the metric being emitted. Should be a unique value per application.

 

要发出的指标名称。每个应用程序应该是唯一值。

 

Default: ${spring.application.name:${vcap.application.name:${spring.config.name:application}}}

spring.cloud.stream.metrics.properties

Allows white listing application properties that are added to the metrics payload

 

允许添加到指标负载的白名单应用程序属性

 

Default: null.

spring.cloud.stream.metrics.meter-filter

Pattern to control the 'meters' one wants to capture. For example, specifying spring.integration.* captures metric information for meters whose name starts with spring.integration.

 

用于控制想要捕获的'meters'(计量器)的模式。例如,指定spring.integration.*会捕获名称以spring.integration开头的meters的度量信息。

 

Default: all 'meters' are captured.

spring.cloud.stream.metrics.schedule-interval

Interval to control the rate of publishing metric data.

 

用于控制发布度量标准数据的速率的时间间隔。

 

Default: 1 min

 

Consider the following:

考虑以下:

 

java -jar time-source.jar \

    --spring.cloud.stream.bindings.applicationMetrics.destination=someMetrics \

    --spring.cloud.stream.metrics.properties=spring.application** \

    --spring.cloud.stream.metrics.meter-filter=spring.integration.*

 

The following example shows the payload of the data published to the binding destination as a result of the preceding command:

 

以下示例显示了作为上述命令的结果发布到绑定目标的数据的负载:

 

{
    "name": "application",
    "createdTime": "2018-03-23T14:48:12.700Z",
    "properties": {
    },
    "metrics": [
        {
            "id": {
                "name": "spring.integration.send",
                "tags": [
                    {
                        "key": "exception",
                        "value": "none"
                    },
                    {
                        "key": "name",
                        "value": "input"
                    },
                    {
                        "key": "result",
                        "value": "success"
                    },
                    {
                        "key": "type",
                        "value": "channel"
                    }
                ],
                "type": "TIMER",
                "description": "Send processing time",
                "baseUnit": "milliseconds"
            },
            "timestamp": "2018-03-23T14:48:12.697Z",
            "sum": 130.340546,
            "count": 6,
            "mean": 21.72342433333333,
            "upper": 116.176299,
            "total": 130.340546
        }
    ]
}

 

Given that the format of the Metric message has slightly changed after migrating to Micrometer, the published message will also have a STREAM_CLOUD_STREAM_VERSION header set to 2.x to help distinguish between Metric messages from the older versions of the Spring Cloud Stream.

鉴于度量消息的格式在迁移到Micrometer后略有变化,发布的消息还会带有一个设置为2.x的STREAM_CLOUD_STREAM_VERSION header,以帮助与旧版本Spring Cloud Stream的度量消息区分开来。

 

14. Samples

 

For Spring Cloud Stream samples, see the spring-cloud-stream-samples repository on GitHub.

 

有关Spring Cloud Stream示例,请参阅GitHub上的spring-cloud-stream-samples存储库。

 

14.1. Deploying Stream Applications on CloudFoundry   在CloudFoundry上部署流应用程序

On CloudFoundry, services are usually exposed through a special environment variable called VCAP_SERVICES.

 

在CloudFoundry上,服务通常通过名为VCAP_SERVICES的特殊环境变量公开。

 

When configuring your binder connections, you can use the values from an environment variable as explained on the dataflow Cloud Foundry Server docs.

 

配置绑定器连接时,可以使用环境变量中的值,如数据流Cloud Foundry Server文档中所述。

 

Binder Implementations

 

15. Apache Kafka Binder

 

15.1. Usage

 

To use Apache Kafka binder, you need to add spring-cloud-stream-binder-kafka as a dependency to your Spring Cloud Stream application, as shown in the following example for Maven:

 

要使用Apache Kafka绑定器,您需要将spring-cloud-stream-binder-kafka作为依赖项添加到Spring Cloud Stream应用程序中,如以下Maven示例所示:

 

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>

 

Alternatively, you can also use the Spring Cloud Stream Kafka Starter, as shown in the following example for Maven:

 

或者,您也可以使用Spring Cloud Stream Kafka Starter,如下面的Maven示例所示:

 

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>

 

15.2. Apache Kafka Binder Overview   概述

 

The following image shows a simplified diagram of how the Apache Kafka binder operates:

 

下图显示了Apache Kafka绑定器如何运行的简化图:

 

Figure 10. Kafka Binder

The Apache Kafka Binder implementation maps each destination to an Apache Kafka topic. The consumer group maps directly to the same Apache Kafka concept. Partitioning also maps directly to Apache Kafka partitions as well.

 

Apache Kafka Binder实现将每个目标映射到Apache Kafka主题。消费者组直接映射到相同的Apache Kafka概念。分区也直接映射到Apache Kafka分区。

 

The binder currently uses the Apache Kafka kafka-clients 1.0.0 jar and is designed to be used with a broker of at least that version. This client can communicate with older brokers (see the Kafka documentation), but certain features may not be available. For example, with versions earlier than 0.11.x.x, native headers are not supported. Also, 0.11.x.x does not support the autoAddPartitions property.

 

绑定器当前使用Apache Kafka kafka-clients 1.0.0 jar,旨在与至少该版本的代理一起使用。此客户端可以与较旧的代理进行通信(请参阅Kafka文档),但某些功能可能不可用。例如,对于早于0.11.xx的版本,不支持原生headers。此外,0.11.xx不支持autoAddPartitions属性。

 

15.3. Configuration Options

This section contains the configuration options used by the Apache Kafka binder.

For common configuration options and properties pertaining to binder, see the core documentation.

 

本节包含Apache Kafka绑定器使用的配置选项。

有关绑定器的常见配置选项和属性,请参阅核心文档

 

Kafka Binder Properties

spring.cloud.stream.kafka.binder.brokers

A list of brokers to which the Kafka binder connects.

Kafka绑定器连接的brokers列表。

 

Default: localhost.

spring.cloud.stream.kafka.binder.defaultBrokerPort

brokers allows hosts specified with or without port information (for example, host1,host2:port2). This sets the default port when no port is configured in the broker list.

 

brokers允许使用具有或不具有端口信息的主机(例如,host1,host2:port2)。这在代理列表中未配置端口时设置默认端口。

 

Default: 9092.

spring.cloud.stream.kafka.binder.configuration

Key/Value map of client properties (both producers and consumer) passed to all clients created by the binder. Due to the fact that these properties are used by both producers and consumers, usage should be restricted to common properties — for example, security settings.

 

客户端属性(生产者和消费者)的键/值映射传递给由绑定器创建的所有客户端。由于生产者和消费者都使用这些属性,因此应将使用限制为通用属性 - 例如,安全设置。

 

Default: Empty map.
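
As an illustration, the binder-level properties above can be combined in application.properties. This is a minimal sketch: the host names kafka1 and kafka2 are placeholders, and security.protocol=SSL simply stands in for a setting shared by producers and consumers.

spring.cloud.stream.kafka.binder.brokers=kafka1,kafka2:9093
spring.cloud.stream.kafka.binder.defaultBrokerPort=9092
spring.cloud.stream.kafka.binder.configuration.security.protocol=SSL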

spring.cloud.stream.kafka.binder.headers

The list of custom headers that are transported by the binder. Only required when communicating with older applications (⇐ 1.3.x) with a kafka-clients version < 0.11.0.0. Newer versions support headers natively.

 

由绑定器传输的自定义headers列表。仅在与旧版应用程序(⇐ 1.3.x)通信且kafka kafka-clients <0.11.0.0时才需要。较新版本本身支持headers。

 

Default: empty.

spring.cloud.stream.kafka.binder.healthTimeout

The time to wait to get partition information, in seconds. Health reports as down if this timer expires.

 

等待获取分区信息的时间,以秒为单位。如果此计时器到期,健康状况将报告为关闭。

 

Default: 10.

spring.cloud.stream.kafka.binder.requiredAcks

The number of required acks on the broker. See the Kafka documentation for the producer acks property.

 

broker所需的确认数量。有关生产者acks属性,请参阅Kafka文档。

 

Default: 1.

spring.cloud.stream.kafka.binder.minPartitionCount

Effective only if autoCreateTopics or autoAddPartitions is set. The global minimum number of partitions that the binder configures on topics on which it produces or consumes data. It can be superseded by the partitionCount setting of the producer or by the value of instanceCount * concurrency settings of the producer (if either is larger).

 

仅在设置autoCreateTopics或autoAddPartitions时生效。绑定器在其生成或消费数据的主题上配置的全局最小分区数。它可以被生产者的partitionCount设置或生产者的instanceCount * concurrency设置的值取代(如果其中任何一个更大)。

 

Default: 1.

spring.cloud.stream.kafka.binder.replicationFactor

The replication factor of auto-created topics if autoCreateTopics is active. Can be overridden on each binding.

 

autoCreateTopics处于活动状态时,自动创建的主题的复制因子。可以在每个绑定上重写。

 

Default: 1.

spring.cloud.stream.kafka.binder.autoCreateTopics

If set to true, the binder creates new topics automatically. If set to false, the binder relies on the topics being already configured. In the latter case, if the topics do not exist, the binder fails to start.

 

如果设置为true,则绑定器会自动创建新主题。如果设置为false,则绑定器依赖于已配置的主题。在后一种情况下,如果主题不存在,则绑定器无法启动。

 

This setting is independent of the auto.topic.create.enable setting of the broker and does not influence it. If the server is set to auto-create topics, they may be created as part of the metadata retrieval request, with default broker settings.

此设置与代理的auto.topic.create.enable设置无关,并且不会影响它。如果服务器设置为自动创建主题,则可以使用默认代理设置将它们创建为元数据检索请求的一部分。

 

Default: true.

spring.cloud.stream.kafka.binder.autoAddPartitions

If set to true, the binder creates new partitions if required. If set to false, the binder relies on the partition size of the topic being already configured. If the partition count of the target topic is smaller than the expected value, the binder fails to start.

 

如果设置为true,则绑定器会根据需要创建新分区。如果设置为false,则绑定器依赖于已配置主题的分区大小。如果目标主题的分区计数小于预期值,则绑定器无法启动。

 

Default: false.

spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix

Enables transactions in the binder. See transaction.id in the Kafka documentation and Transactions in the spring-kafka documentation. When transactions are enabled, individual producer properties are ignored and all producers use the spring.cloud.stream.kafka.binder.transaction.producer.* properties.

 

启用绑定器中的事务。见Kafka文档中的transaction.id和spring-kafka文档中的事务。启用事务时,将忽略单独的producer属性,并且所有生产者都使用spring.cloud.stream.kafka.binder.transaction.producer.*属性。

 

Default null (no transactions)

spring.cloud.stream.kafka.binder.transaction.producer.*

Global producer properties for producers in a transactional binder. See spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix and Kafka Producer Properties and the general producer properties supported by all binders.

 

事务绑定器中生产者的全局生产者属性。请参阅spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix和Kafka Producer属性以及所有绑定器支持的常规生产者属性。

 

Default: See individual producer properties.
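
A minimal sketch of enabling a transactional binder. The prefix value tx- and the Kafka producer settings below are illustrative only; the spring.cloud.stream.kafka.binder.transaction.producer.* prefix is assumed to compose with the Kafka producer properties described later in this section.

spring.cloud.stream.kafka.binder.transaction.transactionIdPrefix=tx-
spring.cloud.stream.kafka.binder.transaction.producer.configuration.acks=all
spring.cloud.stream.kafka.binder.transaction.producer.configuration.retries=10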

spring.cloud.stream.kafka.binder.headerMapperBeanName

The bean name of a KafkaHeaderMapper used for mapping spring-messaging headers to and from Kafka headers. Use this, for example, if you wish to customize the trusted packages in a DefaultKafkaHeaderMapper that uses JSON deserialization for the headers.

 

用于映射spring-messaging headers与Kafka headers之间的KafkaHeaderMapper的bean名称。例如,如果您希望在使用用于headers的JSON反序列化的DefaultKafkaHeaderMapper中自定义受信任的包,请使用此选项。

 

Default: none.
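
For example, a custom header mapper can be declared as a bean and referenced by name. The sketch below is an assumption for illustration: the bean name myHeaderMapper and the trusted package com.acme.events are made up, while DefaultKafkaHeaderMapper itself comes from spring-kafka.

@Bean
public KafkaHeaderMapper myHeaderMapper() {
    DefaultKafkaHeaderMapper mapper = new DefaultKafkaHeaderMapper();
    // trust an extra package when header values are deserialized from JSON
    mapper.addTrustedPackages("com.acme.events");
    return mapper;
}

The binder is then pointed at the bean:

spring.cloud.stream.kafka.binder.headerMapperBeanName=myHeaderMapper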

 

Kafka Consumer Properties   Kafka消费者属性

 

The following properties are available for Kafka consumers only and must be prefixed with spring.cloud.stream.kafka.bindings.<channelName>.consumer..

 

以下属性仅适用于Kafka消费者,必须带有前缀spring.cloud.stream.kafka.bindings.<channelName>.consumer.。

 

admin.configuration

A Map of Kafka topic properties used when provisioning topics — for example, spring.cloud.stream.kafka.bindings.input.consumer.admin.configuration.message.format.version=0.9.0.0

 

Kafka主题属性的Map,配置主题时使用-例如,spring.cloud.stream.kafka.bindings.input.consumer.admin.configuration.message.format.version=0.9.0.0

 

Default: none.

admin.replicas-assignment

A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments. Used when provisioning new topics. See the NewTopic Javadocs in the kafka-clients jar.

 

副本分配的Map <Integer,List <Integer >>,其中键是分区,值是赋值。在配置新主题时使用。查看kafka-clients jar中的NewTopic Javadocs。

默认值:无。

 

Default: none.

admin.replication-factor

The replication factor to use when provisioning topics. Overrides the binder-wide setting. Ignored if replicas-assignments is present.

 

配置主题时使用的复制因子。覆盖绑定器范围的设置。如果replicas-assignments存在则忽略。

 

Default: none (the binder-wide default of 1 is used).

默认值:none(使用绑定器范围的默认值1)。

 

autoRebalanceEnabled

When true, topic partitions are automatically rebalanced between the members of a consumer group. When false, each consumer is assigned a fixed set of partitions based on spring.cloud.stream.instanceCount and spring.cloud.stream.instanceIndex. This requires both the spring.cloud.stream.instanceCount and spring.cloud.stream.instanceIndex properties to be set appropriately on each launched instance. The value of the spring.cloud.stream.instanceCount property must typically be greater than 1 in this case.

 

true时,主题分区会在消费者组的成员之间自动重新平衡。false时,为每个消费者分配一组基于spring.cloud.stream.instanceCount和spring.cloud.stream.instanceIndex的固定分区。这需要在每个已启动的实例上正确设置spring.cloud.stream.instanceCount和spring.cloud.stream.instanceIndex属性。在这种情况下,spring.cloud.stream.instanceCount属性的值通常必须大于1。

 

Default: true.
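
For example, to pin partitions to instances instead of relying on rebalancing, each instance of a two-instance deployment could be started with properties along the following lines (the binding name input is a placeholder; the second instance would use spring.cloud.stream.instanceIndex=1):

spring.cloud.stream.kafka.bindings.input.consumer.autoRebalanceEnabled=false
spring.cloud.stream.instanceCount=2
spring.cloud.stream.instanceIndex=0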

ackEachRecord

When autoCommitOffset is true, this setting dictates whether to commit the offset after each record is processed. By default, offsets are committed after all records in the batch of records returned by consumer.poll() have been processed. The number of records returned by a poll can be controlled with the max.poll.records Kafka property, which is set through the consumer configuration property. Setting this to true may cause a degradation in performance, but doing so reduces the likelihood of redelivered records when a failure occurs. Also, see the binder requiredAcks property, which also affects the performance of committing offsets.

 

当autoCommitOffset是true时,此设置指示每个记录处理之后是否提交偏移量。默认情况下,在处理完consumer.poll()返回的记录批中的所有记录后,将提交偏移量。可以使用max.poll.recordsKafka属性控制轮询返回的记录数,该属性通过消费者configuration属性设置。将此设置为true可能会导致性能下降,但这样做会降低发生故障时重新传送记录的可能性。另外,请参阅binder requiredAcks属性,该属性也会影响提交偏移量的性能。

 

Default: false.

autoCommitOffset

Whether to autocommit offsets when a message has been processed. If set to false, a header with the key kafka_acknowledgment of the type org.springframework.kafka.support.Acknowledgment header is present in the inbound message. Applications may use this header for acknowledging messages. See the examples section for details. When this property is set to false, Kafka binder sets the ack mode to org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode.MANUAL and the application is responsible for acknowledging records. Also see ackEachRecord.

 

是否在处理消息时自动提交偏移量。如果设置为false,则入站消息中将出现带有org.springframework.kafka.support.Acknowledgment类型的kafka_acknowledgment key的header。应用程序可以使用此header来确认消息。有关详细信息,请参阅示例部分。当此属性设置为false时,Kafka binder将ack模式设置为org.springframework.kafka.listener.AbstractMessageListenerContainer.AckMode.MANUAL,应用程序负责确认记录。另见ackEachRecord。

 

Default: true.

autoCommitOnError

Effective only if autoCommitOffset is set to true. If set to false, it suppresses auto-commits for messages that result in errors and commits only for successful messages. It allows a stream to automatically replay from the last successfully processed message, in case of persistent failures. If set to true, it always auto-commits (if auto-commit is enabled). If not set (the default), it effectively has the same value as enableDlq, auto-committing erroneous messages if they are sent to a DLQ and not committing them otherwise.

 

仅在autoCommitOffset设置为true时有效。如果设置为false,则禁止对导致错误的消息进行自动提交,仅自动提交成功的消息。它允许流在上次成功处理的消息中自动重放,以防出现持续故障。如果设置为true,则始终自动提交(如果启用了自动提交)。如果没有设置(默认值),它实际上具有与enableDlq相同的值,如果它们被发送到DLQ则自动提交错误消息,否则不提交它们。

 

Default: not set.

resetOffsets

Whether to reset offsets on the consumer to the value provided by startOffset.

 

是否将消费者的偏移重置为startOffset提供的值。

 

Default: false.

startOffset

The starting offset for new groups. Allowed values: earliest and latest. If the consumer group is set explicitly for the consumer 'binding' (through spring.cloud.stream.bindings.<channelName>.group), 'startOffset' is set to earliest. Otherwise, it is set to latest for the anonymous consumer group. Also see resetOffsets (earlier in this list).

 

新组的起始偏移量。允许的值:earliest和latest。如果为消费者“绑定”(通过spring.cloud.stream.bindings.<channelName>.group)明确设置了消费者组,则将“startOffset”设置为earliest。否则,它将为匿名使用者组设置为latest。另见resetOffsets(在此列表的前面)。

 

Default: null (equivalent to earliest).

默认值:null(相当于earliest)。

 

enableDlq

When set to true, it enables DLQ behavior for the consumer. By default, messages that result in errors are forwarded to a topic named error.<destination>.<group>. The DLQ topic name can be configured by setting the dlqName property. This provides an alternative option to the more common Kafka replay scenario for the case when the number of errors is relatively small and replaying the entire original topic may be too cumbersome. See Dead-Letter Topic Processing for more information. Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: x-original-topic, x-exception-message, and x-exception-stacktrace as byte[].

 

设置为true时,它会为消费者启用DLQ行为。默认情况下,导致错误的消息将转发到名为error.<destination>.<group>的主题。可以通过设置dlqName属性来配置DLQ主题名称。对于错误数量相对较小并且重放整个原始主题的情况可能过于繁琐的情况,这为更常见的Kafka重放场景提供了备选选项。有关详细信息,请参阅死信主题处理处理。从2.0版开始,发送到DLQ主题的消息已使用以下标题得到增强:x-original-topic,x-exception-message,和x-exception-stacktrace作为byte[]。

 

Default: false.

configuration

Map with a key/value pair containing generic Kafka consumer properties.

 

包含通用Kafka消费者属性的键/值对映射。

 

Default: Empty map.
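
For example, as mentioned under ackEachRecord, the Kafka max.poll.records setting can be passed through this map (the binding name input and the value 100 are placeholders):

spring.cloud.stream.kafka.bindings.input.consumer.configuration.max.poll.records=100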

dlqName

The name of the DLQ topic to receive the error messages.

 

用于接收错误消息的DLQ主题的名称。

 

Default: null (If not specified, messages that result in errors are forwarded to a topic named error.<destination>.<group>).

 

默认值:null(如果未指定,则导致错误的消息将转发到名为error.<destination>.<group>的主题)。
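
Putting enableDlq and dlqName together, a consumer binding could be configured as in the following sketch (the binding name input and the topic name myAppDlq are placeholders):

spring.cloud.stream.kafka.bindings.input.consumer.enableDlq=true
spring.cloud.stream.kafka.bindings.input.consumer.dlqName=myAppDlq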

 

dlqProducerProperties

Using this, DLQ-specific producer properties can be set. All the properties available through kafka producer properties can be set through this property.

 

使用它,可以设置DLQ特定的生产者属性。可以通过此属性设置通过kafka生产者属性提供的所有属性。

 

Default: Default Kafka producer properties.

 

默认值:默认Kafka生产者属性。

 

standardHeaders

Indicates which standard headers are populated by the inbound channel adapter. Allowed values: none, id, timestamp, or both. Useful if using native deserialization and the first component to receive a message needs an id (such as an aggregator that is configured to use a JDBC message store).

 

指示入站通道适配器填充的标准headers。允许值:none,id,timestamp,或both。如果使用本地反序列化并且第一个接收消息的组件需要id(例如配置为使用JDBC消息存储的聚合器),则非常有用。

 

Default: none

converterBeanName

The name of a bean that implements RecordMessageConverter. Used in the inbound channel adapter to replace the default MessagingMessageConverter.

 

实现RecordMessageConverter的bean 名称。在入站通道适配器中用于替换默认的MessagingMessageConverter。

 

Default: null

idleEventInterval

The interval, in milliseconds, between events indicating that no messages have recently been received. Use an ApplicationListener<ListenerContainerIdleEvent> to receive these events. See Example: Pausing and Resuming the Consumer for a usage example.

 

指示最近未收到消息的事件之间的间隔(以毫秒为单位)。使用ApplicationListener<ListenerContainerIdleEvent>来接收这些事件。有关用法示例,请参阅示例:暂停和恢复使用者

 

Default: 30000

 

Kafka Producer Properties   Kafka生产者属性

 

The following properties are available for Kafka producers only and must be prefixed with spring.cloud.stream.kafka.bindings.<channelName>.producer..

 

以下属性仅适用于Kafka生产者,必须以spring.cloud.stream.kafka.bindings.<channelName>.producer.为前缀。

 

admin.configuration

A Map of Kafka topic properties used when provisioning new topics — for example, spring.cloud.stream.kafka.bindings.output.producer.admin.configuration.message.format.version=0.9.0.0

 

Kafka主题属性的Map,配置新主题时使用-例如,spring.cloud.stream.kafka.bindings.output.producer.admin.configuration.message.format.version=0.9.0.0

 

Default: none.

admin.replicas-assignment

A Map<Integer, List<Integer>> of replica assignments, with the key being the partition and the value being the assignments. Used when provisioning new topics. See NewTopic javadocs in the kafka-clients jar.

 

副本分配的Map <Integer,List <Integer >>,其中键是分区,值是赋值。在配置新主题时使用。请参阅kafka-clients jar中的NewTopic javadocs。

 

Default: none.

admin.replication-factor

The replication factor to use when provisioning new topics. Overrides the binder-wide setting. Ignored if replicas-assignments is present.

 

配置新主题时使用的复制因子。覆盖绑定器范围的设置。如果replicas-assignments存在则忽略。

 

Default: none (the binder-wide default of 1 is used).

 

默认值:none(使用绑定器范围的默认值1)。

 

bufferSize

Upper limit, in bytes, of how much data the Kafka producer attempts to batch before sending.

 

Kafka生产者在发送之前尝试批量处理的数据的上限(以字节为单位)。

 

Default: 16384.

sync

Whether the producer is synchronous.

 

生产者是否是同步的。

 

Default: false.

batchTimeout

How long the producer waits to allow more messages to accumulate in the same batch before sending the messages. (Normally, the producer does not wait at all and simply sends all the messages that accumulated while the previous send was in progress.) A non-zero value may increase throughput at the expense of latency.

 

生产者在发送消息之前等待允许更多消息在同一批次中累积的时间。(通常,生产者根本不会等待,只是发送在上一次发送过程中累积的所有消息。)非零值可能会以延迟为代价来增加吞吐量。

 

Default: 0.

messageKeyExpression

A SpEL expression evaluated against the outgoing message used to populate the key of the produced Kafka message — for example, headers['myKey']. The payload cannot be used because, by the time this expression is evaluated, the payload is already in the form of a byte[].

 

针对用于填充生成的Kafka消息的key的传出消息评估的SpEL表达式 - 例如,headers['myKey']。无法使用负载,因为在评估此表达式时,负载已经是byte[]的形式。

 

Default: none.
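
For instance, to use a header set by the application as the Kafka message key (the binding name output and the header name myKey are placeholders):

spring.cloud.stream.kafka.bindings.output.producer.messageKeyExpression=headers['myKey']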

headerPatterns

A comma-delimited list of simple patterns to match Spring messaging headers to be mapped to the Kafka Headers in the ProducerRecord. Patterns can begin or end with the wildcard character (asterisk). Patterns can be negated by prefixing with !. Matching stops after the first match (positive or negative). For example !ask,as* will pass ash but not ask. id and timestamp are never mapped.

 

逗号分隔的简单模式列表,用于匹配被映射到ProducerRecord中的Kafka Headers的Spring消息头。模式可以以通配符(星号)开头或结尾。可以通过添加前缀来否定模式!。首次匹配后即停止(正面或负面)。例如,!ask,as*将传递ash但不传递ask。 id和timestamp永远不会映射。

 

Default: * (all headers - except the id and timestamp)

 

默认值: * (所有标题 - 除了id和timestamp)

 

configuration

Map with a key/value pair containing generic Kafka producer properties.

 

包含通用Kafka生产者属性的键/值对映射。

 

Default: Empty map.
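
For example, a generic Kafka producer setting such as compression.type can be passed through this map (the binding name output and the value gzip are placeholders):

spring.cloud.stream.kafka.bindings.output.producer.configuration.compression.type=gzip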

 

The Kafka binder uses the partitionCount setting of the producer as a hint to create a topic with the given partition count (in conjunction with the minPartitionCount, the maximum of the two being the value being used). Exercise caution when configuring both minPartitionCount for a binder and partitionCount for an application, as the larger value is used. If a topic already exists with a smaller partition count and autoAddPartitions is disabled (the default), the binder fails to start. If a topic already exists with a smaller partition count and autoAddPartitions is enabled, new partitions are added. If a topic already exists with a larger number of partitions than the maximum of (minPartitionCount or partitionCount), the existing partition count is used.

Kafka绑定器使用生产者的partitionCount设置作为提示来创建具有给定分区计数的主题(结合使用minPartitionCount,两者的最大值是正在使用的值)。在为绑定器配置minPartitionCount和为应用程序配置partitionCount时要小心,因为使用的值越大。如果主题已存在且分区计数较小且autoAddPartitions已禁用(默认值),则绑定器无法启动。如果已存在具有较小分区计数且autoAddPartitions已启用的主题,则会添加新分区。如果主题已存在且分区数大于(minPartitionCount或partitionCount)的最大分区数,则使用现有分区计数。

 

Usage examples

 

In this section, we show the use of the preceding properties for specific scenarios.

 

在本节中,我们将展示对特定方案使用前面的属性。

 

Example: Setting autoCommitOffset to false and Relying on Manual Acking

 

This example illustrates how one may manually acknowledge offsets in a consumer application.

 

此示例说明了如何在消费者应用程序中手动确认偏移量。

 

This example requires that spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset be set to false. Use the corresponding input channel name for your example.

 

此示例需要将spring.cloud.stream.kafka.bindings.input.consumer.autoCommitOffset设置为false。使用相应的输入通道名称作为示例。

 

@SpringBootApplication

@EnableBinding(Sink.class)

public class ManuallyAcknowdledgingConsumer {

 

 public static void main(String[] args) {

     SpringApplication.run(ManuallyAcknowdledgingConsumer.class, args);

 }

 

 @StreamListener(Sink.INPUT)

 public void process(Message<?> message) {

     Acknowledgment acknowledgment = message.getHeaders().get(KafkaHeaders.ACKNOWLEDGMENT, Acknowledgment.class);

     if (acknowledgment != null) {

         System.out.println("Acknowledgment provided");

         acknowledgment.acknowledge();

     }

 }

}

 

Example: Security Configuration

 

Apache Kafka 0.9 supports secure connections between client and brokers. To take advantage of this feature, follow the guidelines in the Apache Kafka Documentation as well as the Kafka 0.9 security guidelines from the Confluent documentation. Use the spring.cloud.stream.kafka.binder.configuration option to set security properties for all clients created by the binder.

 

Apache Kafka 0.9支持客户端和代理之间的安全连接。要利用此功能,请遵循Apache Kafka文档中的准则以及Confluent文档中的Kafka 0.9 安全准则。使用spring.cloud.stream.kafka.binder.configuration选项为绑定器创建的所有客户端设置安全性属性。

 

For example, to set security.protocol to SASL_SSL, set the following property:

 

例如,要设置security.protocol为SASL_SSL,请设置以下属性:

 

spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_SSL

 

All the other security properties can be set in a similar manner.

 

可以以类似的方式设置所有其他安全属性。
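
For instance, SSL-related client settings can be passed the same way. In the following sketch, the truststore path and password are placeholders, while the property names are standard Kafka client settings:

spring.cloud.stream.kafka.binder.configuration.ssl.truststore.location=/path/to/client.truststore.jks
spring.cloud.stream.kafka.binder.configuration.ssl.truststore.password=changeit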

 

When using Kerberos, follow the instructions in the reference documentation for creating and referencing the JAAS configuration.

 

使用Kerberos时,请按照参考文档中的说明创建和引用JAAS配置。

 

Spring Cloud Stream supports passing JAAS configuration information to the application by using a JAAS configuration file and using Spring Boot properties.

 

Spring Cloud Stream支持使用JAAS配置文件和Spring Boot属性将JAAS配置信息传递给应用程序。

 

Using JAAS Configuration Files   使用JAAS配置文件

 

The JAAS and (optionally) krb5 file locations can be set for Spring Cloud Stream applications by using system properties. The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using a JAAS configuration file:

 

可以使用系统属性为Spring Cloud Stream应用程序设置JAAS和(可选)krb5文件位置。以下示例显示如何使用JAAS配置文件启动使用SASL和Kerberos的Spring Cloud Stream应用程序:

 

 java -Djava.security.auth.login.config=/path.to/kafka_client_jaas.conf -jar log.jar \
   --spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
   --spring.cloud.stream.bindings.input.destination=stream.ticktock \
   --spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT

 

Using Spring Boot Properties   使用Spring Boot属性

 

As an alternative to having a JAAS configuration file, Spring Cloud Stream provides a mechanism for setting up the JAAS configuration for Spring Cloud Stream applications by using Spring Boot properties.

 

作为拥有JAAS配置文件的替代方法,Spring Cloud Stream提供了一种使用Spring Boot属性为Spring Cloud Stream应用程序设置JAAS配置的机制。

 

The following properties can be used to configure the login context of the Kafka client:

 

以下属性可用于配置Kafka客户端的登录上下文:

 

spring.cloud.stream.kafka.binder.jaas.loginModule

The login module name. It does not normally need to be set.

 

登录模块名称。没有必要在正常情况下设置。

 

Default: com.sun.security.auth.module.Krb5LoginModule.

spring.cloud.stream.kafka.binder.jaas.controlFlag

The control flag of the login module.

 

登录模块的控制标志。

 

Default: required.

spring.cloud.stream.kafka.binder.jaas.options

Map with a key/value pair containing the login module options.

 

包含登录模块选项的键/值对映射。

 

Default: Empty map.

 

The following example shows how to launch a Spring Cloud Stream application with SASL and Kerberos by using Spring Boot configuration properties:

 

以下示例说明如何使用Spring Boot配置属性启动带有SASL和Kerberos的Spring Cloud Stream应用程序:

 

 java --spring.cloud.stream.kafka.binder.brokers=secure.server:9092 \
   --spring.cloud.stream.bindings.input.destination=stream.ticktock \
   --spring.cloud.stream.kafka.binder.autoCreateTopics=false \
   --spring.cloud.stream.kafka.binder.configuration.security.protocol=SASL_PLAINTEXT \
   --spring.cloud.stream.kafka.binder.jaas.options.useKeyTab=true \
   --spring.cloud.stream.kafka.binder.jaas.options.storeKey=true \
   --spring.cloud.stream.kafka.binder.jaas.options.keyTab=/etc/security/keytabs/kafka_client.keytab \
   --spring.cloud.stream.kafka.binder.jaas.options.principal=kafka-client-1@EXAMPLE.COM

 

The preceding example represents the equivalent of the following JAAS file:

 

上面的示例与以下JAAS文件等效:

 

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_client.keytab"
    principal="[email protected]";
};

 

If the topics required already exist on the broker or will be created by an administrator, autocreation can be turned off and only client JAAS properties need to be sent.

 

如果所需主题已存在于代理上或将由管理员创建,则可以关闭自动创建,并且只需要发送客户端JAAS属性。

 

Do not mix JAAS configuration files and Spring Boot properties in the same application. If the -Djava.security.auth.login.config system property is already present, Spring Cloud Stream ignores the Spring Boot properties.

不要在同一个应用程序中混合使用JAAS配置文件和Spring Boot属性。如果-Djava.security.auth.login.config系统属性已存在,则Spring Cloud Stream会忽略Spring Boot属性。

Be careful when using the autoCreateTopics and autoAddPartitions with Kerberos. Usually, applications may use principals that do not have administrative rights in Kafka and Zookeeper. Consequently, relying on Spring Cloud Stream to create/modify topics may fail. In secure environments, we strongly recommend creating topics and managing ACLs administratively by using Kafka tooling.

使用Kerberos时使用autoCreateTopics和autoAddPartitions要小心。通常,应用程序可能使用在Kafka和Zookeeper中没有管理权限的主体。因此,依赖Spring Cloud Stream来创建/修改主题可能会失败。在安全环境中,我们强烈建议您使用Kafka工具创建主题和管理ACL。

 

Example: Pausing and Resuming the Consumer   暂停和恢复消费者

If you wish to suspend consumption but not cause a partition rebalance, you can pause and resume the consumer. This is facilitated by adding the Consumer as a parameter to your @StreamListener. To resume, you need an ApplicationListener for ListenerContainerIdleEvent instances. The frequency at which events are published is controlled by the idleEventInterval property. Since the consumer is not thread-safe, you must call these methods on the calling thread.

 

如果您希望暂停消费但不会导致分区重新平衡,则可以暂停和恢复消费者。这可以通过将Consumer作为参数添加到您的@StreamListener来达成。要恢复,您需要一个ListenerContainerIdleEvent实例的ApplicationListener。发布事件的频率由idleEventInterval属性控制。由于消费者不是线程安全的,因此必须在调用线程上调用这些方法。

 

The following simple application shows how to pause and resume:

 

以下简单的应用程序显示了如何暂停和恢复:

 

@SpringBootApplication

@EnableBinding(Sink.class)

public class Application {

 

public static void main(String[] args) {

SpringApplication.run(Application.class, args);

}

 

@StreamListener(Sink.INPUT)

public void in(String in, @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {

System.out.println(in);

consumer.pause(Collections.singleton(new TopicPartition("myTopic", 0)));

}

 

@Bean

public ApplicationListener<ListenerContainerIdleEvent> idleListener() {

return event -> {

System.out.println(event);

if (event.getConsumer().paused().size() > 0) {

event.getConsumer().resume(event.getConsumer().paused());

}

};

}

 

}

 

15.4. Error Channels   错误管道

Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel. See Error Handling for more information.

 

The payload of the ErrorMessage for a send failure is a KafkaSendFailureException with properties:

  • failedMessage: The Spring Messaging Message<?> that failed to be sent.
  • record: The raw ProducerRecord that was created from the failedMessage

 

There is no automatic handling of producer exceptions (such as sending to a Dead-Letter queue). You can consume these exceptions with your own Spring Integration flow.

 

从版本1.3开始,绑定器无条件地为每个消费者目标向错误通道发送异常,并且还可以配置为将异步生产者发送失败发送到错误通道。有关更多信息,请参阅错误处理

 

发送失败的错误消息ErrorMessage的负载是一个KafkaSendFailureException,具有以下属性:

  • failedMessage:发送失败的Spring Messaging Message<?>。
  • record:从失败消息failedMessage中创建的原始生产者记录ProducerRecord

 

生产者异常没有自动处理(例如发送到死信队列)。您可以使用自己的Spring Integration流程来消费这些异常。
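
As a sketch of what such a flow might look like, the following service activator listens on the global errorChannel and inspects the exception described above. The channel wiring shown here is an assumption for illustration; see the Error Handling section of the core documentation for the exact channel names and how to enable the producer error channel.

@ServiceActivator(inputChannel = "errorChannel")
public void handleSendFailure(ErrorMessage errorMessage) {
    if (errorMessage.getPayload() instanceof KafkaSendFailureException) {
        KafkaSendFailureException failure = (KafkaSendFailureException) errorMessage.getPayload();
        // failedMessage is the Spring Messaging Message<?>; record is the raw ProducerRecord
        System.out.println("Send failed for record: " + failure.getRecord());
    }
}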

 

15.5. Kafka Metrics   Kafka指标

Kafka binder module exposes the following metrics:

spring.cloud.stream.binder.kafka.someGroup.someTopic.lag: This metric indicates how many messages have not yet been consumed from a given binder’s topic by a given consumer group. For example, if the value of the metric spring.cloud.stream.binder.kafka.myGroup.myTopic.lag is 1000, the consumer group named myGroup has 1000 messages waiting to be consumed from the topic called myTopic. This metric is particularly useful for providing auto-scaling feedback to a PaaS platform.

 

Kafka绑定器模块公开以下指标:

spring.cloud.stream.binder.kafka.someGroup.someTopic.lag:此度量标准指示给定的消费者组从给定的绑定器主题尚未消费的消息数。例如,如果度量标准的spring.cloud.stream.binder.kafka.myGroup.myTopic.lag值为1000,则名为myGroup的消费者组具有1000个等待从myTopic主题消费的消息。此指标对于向PaaS平台提供自动缩放反馈特别有用。

 

15.6. Dead-Letter Topic Processing   死信Topic处理

Because you cannot anticipate how users would want to dispose of dead-lettered messages, the framework does not provide any standard mechanism to handle them. If the reason for the dead-lettering is transient, you may wish to route the messages back to the original topic. However, if the problem is a permanent issue, that could cause an infinite loop. The sample Spring Boot application within this topic is an example of how to route those messages back to the original topic, but it moves them to a “parking lot” topic after three attempts. The application is another spring-cloud-stream application that reads from the dead-letter topic. It terminates when no messages are received for 5 seconds.

 

因为您无法预测用户将如何处理死信消息,所以框架不提供任何标准机制来处理它们。如果死信的原因是暂时的,您可能希望将消息路由回原始主题。但是,如果问题是一个永久性问题,那么可能会导致无限循环。本主题中的示例Spring Boot应用程序是如何将这些消息路由回原始主题的示例,但是在三次尝试之后它将它们移动到“停车场”主题。该应用程序是另一个Spring-cloud-stream应用程序,它从死信主题中读取。它在5秒内没有收到任何消息时终止。

 

The examples assume the original destination is so8400out and the consumer group is so8400.

 

这些示例假设原始目标是so8400out,而消费者组是so8400。

 

There are a couple of strategies to consider:

  • Consider running the rerouting only when the main application is not running. Otherwise, the retries for transient errors are used up very quickly.
  • Alternatively, use a two-stage approach: Use this application to route to a third topic and another to route from there back to the main topic.

 

有几种策略需要考虑:

  • 考虑仅在主应用程序未运行时运行重新路由。否则,瞬态错误的重试会很快耗尽。
  • 或者,使用两阶段方法:使用此应用程序路由到第三个主题,使用另一个主题从那里路由回主要主题。

 

The following code listings show the sample application:

 

以下代码清单显示了示例应用程序:

 

application.properties

spring.cloud.stream.bindings.input.group=so8400replay

spring.cloud.stream.bindings.input.destination=error.so8400out.so8400

 

spring.cloud.stream.bindings.output.destination=so8400out

spring.cloud.stream.bindings.output.producer.partitioned=true

 

spring.cloud.stream.bindings.parkingLot.destination=so8400in.parkingLot

spring.cloud.stream.bindings.parkingLot.producer.partitioned=true

 

spring.cloud.stream.kafka.binder.configuration.auto.offset.reset=earliest

 

spring.cloud.stream.kafka.binder.headers=x-retries

 

Application

 

@SpringBootApplication

@EnableBinding(TwoOutputProcessor.class)

public class ReRouteDlqKApplication implements CommandLineRunner {

 

    private static final String X_RETRIES_HEADER = "x-retries";

 

    public static void main(String[] args) {

        SpringApplication.run(ReRouteDlqKApplication.class, args).close();

    }

 

    private final AtomicInteger processed = new AtomicInteger();

 

    @Autowired

    private MessageChannel parkingLot;

 

    @StreamListener(Processor.INPUT)

    @SendTo(Processor.OUTPUT)

    public Message<?> reRoute(Message<?> failed) {

        processed.incrementAndGet();

        Integer retries = failed.getHeaders().get(X_RETRIES_HEADER, Integer.class);

        if (retries == null) {

            System.out.println("First retry for " + failed);

            return MessageBuilder.fromMessage(failed)

                    .setHeader(X_RETRIES_HEADER, new Integer(1))

                    .setHeader(BinderHeaders.PARTITION_OVERRIDE,

                            failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))

                    .build();

        }

        else if (retries.intValue() < 3) {

            System.out.println("Another retry for " + failed);

            return MessageBuilder.fromMessage(failed)

                    .setHeader(X_RETRIES_HEADER, new Integer(retries.intValue() + 1))

                    .setHeader(BinderHeaders.PARTITION_OVERRIDE,

                            failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))

                    .build();

        }

        else {

            System.out.println("Retries exhausted for " + failed);

            parkingLot.send(MessageBuilder.fromMessage(failed)

                    .setHeader(BinderHeaders.PARTITION_OVERRIDE,

                            failed.getHeaders().get(KafkaHeaders.RECEIVED_PARTITION_ID))

                    .build());

        }

        return null;

    }

 

    @Override

    public void run(String... args) throws Exception {

        while (true) {

            int count = this.processed.get();

            Thread.sleep(5000);

            if (count == this.processed.get()) {

                System.out.println("Idle, terminating");

                return;

            }

        }

    }

 

    public interface TwoOutputProcessor extends Processor {

 

        @Output("parkingLot")

        MessageChannel parkingLot();

 

    }

 

}

 

15.7. Partitioning with the Kafka Binder   使用Kafka绑定器进行分区

 

Apache Kafka supports topic partitioning natively.

 

Apache Kafka原生支持主题分区。

 

Sometimes it is advantageous to send data to specific partitions — for example, when you want to strictly order message processing (all messages for a particular customer should go to the same partition).

 

有时将数据发送到特定分区是有利的 - 例如,当您要严格订购消息处理时(特定客户的所有消息都应该转到同一分区)。

 

The following example shows how to configure the producer and consumer side:

 

以下示例显示如何配置生产者和消费者方:

 

@SpringBootApplication

@EnableBinding(Source.class)

public class KafkaPartitionProducerApplication {

 

    private static final Random RANDOM = new Random(System.currentTimeMillis());

 

    private static final String[] data = new String[] {

            "foo1", "bar1", "qux1",

            "foo2", "bar2", "qux2",

            "foo3", "bar3", "qux3",

            "foo4", "bar4", "qux4",

            };

 

    public static void main(String[] args) {

        new SpringApplicationBuilder(KafkaPartitionProducerApplication.class)

            .web(false)

            .run(args);

    }

 

    @InboundChannelAdapter(channel = Source.OUTPUT, poller = @Poller(fixedRate = "5000"))

    public Message<?> generate() {

        String value = data[RANDOM.nextInt(data.length)];

        System.out.println("Sending: " + value);

        return MessageBuilder.withPayload(value)

                .setHeader("partitionKey", value)

                .build();

    }

 

}

 

application.yml

spring:
  cloud:
    stream:
      bindings:
        output:
          destination: partitioned.topic
          producer:
            partitioned: true
            partition-key-expression: headers['partitionKey']
            partition-count: 12

 

The topic must be provisioned to have enough partitions to achieve the desired concurrency for all consumer groups. The above configuration supports up to 12 consumer instances (6 if their concurrency is 2, 4 if their concurrency is 3, and so on). It is generally best to “over-provision” the partitions to allow for future increases in consumers or concurrency.

必须配置主题以具有足够的分区以实现所有消费者组的所需并发性。上面的配置最多支持12个消费者实例(如果它们concurrency是2,则为6,如果它们的并发性为3,则为4,依此类推)。通常最好“过度配置”分区以允许将来增加消费者或并发性。

The preceding configuration uses the default partitioning (key.hashCode() % partitionCount). This may or may not provide a suitably balanced algorithm, depending on the key values. You can override this default by using the partitionSelectorExpression or partitionSelectorClass properties.

上述配置使用默认分区(key.hashCode() % partitionCount)。根据键值,这可能会或可能不会提供适当平衡的算法。您可以使用partitionSelectorExpression或partitionSelectorClass属性覆盖此默认值。

 

Since partitions are natively handled by Kafka, no special configuration is needed on the consumer side. Kafka allocates partitions across the instances.

 

由于分区由Kafka原生处理,因此在消费者方面不需要特殊配置。Kafka在实例之间分配分区。

 

The following Spring Boot application listens to a Kafka stream and prints (to the console) the partition ID to which each message goes:

 

以下Spring Boot应用程序侦听Kafka流并打印(到控制台)每条消息所针对的分区ID:

 

@SpringBootApplication

@EnableBinding(Sink.class)

public class KafkaPartitionConsumerApplication {

 

    public static void main(String[] args) {

        new SpringApplicationBuilder(KafkaPartitionConsumerApplication.class)

            .web(false)

            .run(args);

    }

 

    @StreamListener(Sink.INPUT)

    public void listen(@Payload String in, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {

        System.out.println(in + " received from partition " + partition);

    }

 

}

 

application.yml

spring:
  cloud:
    stream:
      bindings:
        input:
          destination: partitioned.topic
          group: myGroup

 

You can add instances as needed. Kafka rebalances the partition allocations. If the instance count (or instance count * concurrency) exceeds the number of partitions, some consumers are idle.

 

您可以根据需要添加实例。Kafka重新平衡分区分配。如果实例计数(或instance count * concurrency)超过分区数,则某些消费者处于空闲状态。

 

16. Apache Kafka Streams Binder

 

16.1. Usage

 

For using the Kafka Streams binder, you just need to add it to your Spring Cloud Stream application, using the following Maven coordinates:

 

要使用Kafka Streams绑定器,只需使用以下Maven坐标将其添加到Spring Cloud Stream应用程序:

 

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>
</dependency>

 

16.2. Kafka Streams Binder Overview

 

Spring Cloud Stream’s Apache Kafka support also includes a binder implementation designed explicitly for Apache Kafka Streams binding. With this native integration, a Spring Cloud Stream "processor" application can directly use the Apache Kafka Streams APIs in the core business logic.

 

Spring Cloud Stream的Apache Kafka支持还包括为Apache Kafka Streams绑定明确设计的绑定器实现。通过这种本地集成,Spring Cloud Stream“processor”应用程序可以直接在核心业务逻辑中使用 Apache Kafka Streams API。

 

The Kafka Streams binder implementation builds on the foundation provided by the Kafka Streams support in the Spring Kafka project.

 

Kafka Streams绑定器实现建立在Spring Kafka 项目中Kafka Streams提供的基础之上。

 

As part of this native integration, the high-level Streams DSL provided by the Kafka Streams API is available for use in the business logic, too.

 

作为此原生集成的一部分,Kafka Streams API提供的高级Streams DSL也可用于业务逻辑。

 

An early version of the Processor API support is available as well.

 

还提供了早期版本的Processor API支持。

 

As noted early on, Kafka Streams support in Spring Cloud Stream is strictly only available for use in the Processor model: a model in which messages are read from an inbound topic, business processing can be applied, and the transformed messages can be written to an outbound topic. It can also be used in Processor applications with a no-outbound destination.

 

如前所述,Kafka Streams在Spring Cloud Stream中的支持严格仅适用于处理器模型。可以应用从入站主题读取的消息,业务处理以及转换后的消息可以写入出站主题的模型。它也可以在没有出站目的地的处理器应用程序中使用。

 

16.2.1. Streams DSL

This application consumes data from a Kafka topic (e.g., words), computes word count for each unique word in a 5 seconds time window, and the computed results are sent to a downstream topic (e.g., counts) for further processing.

 

该应用程序使用来自Kafka主题(例如words)的数据,在5秒时间窗口中计算每个唯一单词的单词计数,并且将计算结果发送到下游主题(例如counts)以进行进一步处理。

 

@SpringBootApplication

@EnableBinding(KStreamProcessor.class)

public class WordCountProcessorApplication {

 

@StreamListener("input")

@SendTo("output")

public KStream<?, WordCount> process(KStream<?, String> input) {

return input

                .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))

                .groupBy((key, value) -> value)

                .windowedBy(TimeWindows.of(5000))

                .count(Materialized.as("WordCounts-multi"))

                .toStream()

                .map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value, new Date(key.window().start()), new Date(key.window().end()))));

    }

 

public static void main(String[] args) {

SpringApplication.run(WordCountProcessorApplication.class, args);

}

}

 

Once built as an uber-jar (e.g., wordcount-processor.jar), you can run the above example like the following.

 

一旦构建为超级jar(例如,wordcount-processor.jar),您可以运行上面的示例,如下所示。

 

java -jar wordcount-processor.jar  --spring.cloud.stream.bindings.input.destination=words --spring.cloud.stream.bindings.output.destination=counts

 

This application will consume messages from the Kafka topic words and the computed results are published to an output topic counts.

 

此应用程序将消费来自Kafka主题words的消息,并将计算结果发布到输出主题counts。

 

Spring Cloud Stream will ensure that the messages from both the incoming and outgoing topics are automatically bound as KStream objects. As a developer, you can exclusively focus on the business aspects of the code, i.e. writing the logic required in the processor. Setting up the Streams DSL specific configuration required by the Kafka Streams infrastructure is automatically handled by the framework.

 

Spring Cloud Stream将确保来自传入和传出主题的消息自动绑定为KStream对象。作为开发人员,您可以专注于代码的业务方面,即编写处理器中所需的逻辑。设置Kafka Streams基础结构所需的Streams DSL特定配置由框架自动处理。

 

16.3. Configuration Options

 

This section contains the configuration options used by the Kafka Streams binder.

For common configuration options and properties pertaining to binder, refer to the core documentation.

 

本节包含Kafka Streams绑定器使用的配置选项。

有关绑定器的常用配置选项和属性,请参阅核心文档

 

16.3.1. Kafka Streams Properties

 

The following properties are available at the binder level and must be prefixed with spring.cloud.stream.kafka.streams.binder. literal.

 

在绑定器级别可以使用以下属性,并且必须以spring.cloud.stream.kafka.streams.binder.为前缀。

 

configuration

Map with a key/value pair containing properties pertaining to Apache Kafka Streams API. This property must be prefixed with spring.cloud.stream.kafka.streams.binder.. Following are some examples of using this property.

 

包含与Apache Kafka Streams API相关的属性的键/值对映射。此属性必须以spring.cloud.stream.kafka.streams.binder.为前缀。以下是使用此属性的一些示例。

 

spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms=1000

 

For more information about all the properties that may go into streams configuration, see StreamsConfig JavaDocs in Apache Kafka Streams docs.

 

有关可能进入流配置的所有属性的更多信息,请参阅Apache Kafka Streams文档中的StreamsConfig JavaDocs。

 

brokers

Broker URL

Default: localhost

zkNodes

Zookeeper URL

Default: localhost

serdeError

Deserialization error handler type. Possible values are - logAndContinue, logAndFail or sendToDlq

 

反序列化错误处理程序类型。可能的值是 - logAndContinue,logAndFail或sendToDlq

 

Default: logAndFail

applicationId

Application ID for all the stream configurations in the current application context. You can override the application id for an individual StreamListener method using the group property on the binding. You have to ensure that you are using the same group name for all input bindings in the case of multiple inputs on the same methods.

 

当前应用程序上下文中所有流配置的应用程序ID。您可以使用绑定上的group属性覆盖单个StreamListener方法的应用程序ID。在相同方法的多个输入的情况下,您必须确保为所有输入绑定使用相同的组名。

 

Default: default
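
For example, the application id for the StreamListener bound to input in the word-count sample could be overridden through the group property on that binding (the group name wordcount-group is a placeholder):

spring.cloud.stream.bindings.input.group=wordcount-group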

 

The following properties are only available for Kafka Streams producers and must be prefixed with spring.cloud.stream.kafka.streams.bindings.<binding name>.producer. literal.

 

以下属性仅适用于Kafka Streams生产者,并且必须以spring.cloud.stream.kafka.streams.bindings.<binding name>.producer.为前缀。

 

keySerde

key serde to use

 

要使用的键正反序列化

 

Default: none.

valueSerde

value serde to use

 

要使用的值正反序列化

 

Default: none.

useNativeEncoding

flag to enable native encoding

 

启用原生编码的标志

 

Default: false.

 

The following properties are only available for Kafka Streams consumers and must be prefixed with spring.cloud.stream.kafka.streams.bindings.<binding name>.consumer. literal.

 

以下属性仅适用于Kafka Streams消费者,并且必须以spring.cloud.stream.kafka.streams.bindings.<binding name>.consumer.为前缀。

 

keySerde

key serde to use

 

要使用的键正反序列化

 

Default: none.

valueSerde

value serde to use

 

要使用的值正反序列化

 

Default: none.

materializedAs

state store to materialize when using incoming KTable types

 

使用传入的KTable类型时要具体化的状态存储

 

Default: none.

useNativeDecoding

flag to enable native decoding

 

启用原生解码的标志

 

Default: false.

dlqName

DLQ topic name.

 

DLQ主题名称。

 

Default: none.

 

16.3.2. TimeWindow properties:

 

Windowing is an important concept in stream processing applications. The following properties are available to configure time-window computations.

 

窗口化是流处理应用程序中的一个重要概念。以下属性可用于配置时间窗口计算。

 

spring.cloud.stream.kafka.streams.timeWindow.length

When this property is given, you can autowire a TimeWindows bean into the application. The value is expressed in milliseconds.

 

给出此属性后,您可以将TimeWindows bean自动装入应用程序。该值以毫秒表示。

 

Default: none.

spring.cloud.stream.kafka.streams.timeWindow.advanceBy

Value is given in milliseconds.

 

值以毫秒为单位。

 

Default: none.
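
For example, a hopping window of 30 seconds that advances by 5 seconds could be configured with the following properties (the values are placeholders). Once they are present, a TimeWindows bean can be autowired into the application, as the branching example later in this chapter does:

spring.cloud.stream.kafka.streams.timeWindow.length=30000
spring.cloud.stream.kafka.streams.timeWindow.advanceBy=5000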

 

16.4. Multiple Input Bindings   多个输入绑定

 

For use cases that requires multiple incoming KStream objects or a combination of KStream and KTable objects, the Kafka Streams binder provides multiple bindings support.

 

对于需要多个传入KStream对象或KStream和KTable对象组合的用例,Kafka Streams绑定器提供多个绑定支持。

 

Let’s see it in action.

 

让我们看看它的实际效果。

 

16.4.1. Multiple Input Bindings as a Sink   多个输入绑定作为接收器

 

@EnableBinding(KStreamKTableBinding.class)

.....

.....

@StreamListener

public void process(@Input("inputStream") KStream<String, PlayEvent> playEvents,

                    @Input("inputTable") KTable<Long, Song> songTable) {

                    ....

                    ....

}

 

interface KStreamKTableBinding {

 

    @Input("inputStream")

    KStream<?, ?> inputStream();

 

    @Input("inputTable")

    KTable<?, ?> inputTable();

}

 

In the above example, the application is written as a sink, i.e. there are no output bindings and the application has to decide concerning downstream processing. When you write applications in this style, you might want to send the information downstream or store them in a state store (See below for Queryable State Stores).

 

在上面的示例中,应用程序被写为接收器,即没有输出绑定,应用程序也必须决定下游处理。以此样式编写应用程序时,您可能希望将信息发送到下游或将其存储在状态存储中(请参阅下面的可查询状态存储)。

 

In the case of incoming KTable, if you want to materialize the computations to a state store, you have to express it through the following property.

 

在传入KTable的情况下,如果要将计算具体化到状态存储,则必须通过以下属性表达它。

 

spring.cloud.stream.kafka.streams.bindings.inputTable.consumer.materializedAs: all-songs

 

16.4.2. Multiple Input Bindings as a Processor   多个输入绑定作为处理器

 

@EnableBinding(KStreamKTableBinding.class)

....

....

 

@StreamListener

@SendTo("output")

public KStream<String, Long> process(@Input("input") KStream<String, Long> userClicksStream,

                                     @Input("inputTable") KTable<String, String> userRegionsTable) {

....

....

}

 

interface KStreamKTableBinding extends KafkaStreamsProcessor {

 

    @Input("inputX")

    KTable<?, ?> inputTable();

}

 

16.5. Multiple Output Bindings (aka Branching)   多个输出绑定(又称分支)

Kafka Streams allow outbound data to be split into multiple topics based on some predicates. The Kafka Streams binder provides support for this feature without compromising the programming model exposed through StreamListener in the end user application.

 

Kafka Streams允许基于某些谓词将出站数据拆分为多个主题。Kafka Streams绑定器为此功能提供支持,而不会影响最终用户应用程序中通过StreamListener公开的编程模型。

 

You can write the application in the usual way as demonstrated above in the word count example. However, when using the branching feature, you are required to do a few things. First, you need to make sure that your return type is KStream[] instead of a regular KStream. Second, you need to use the SendTo annotation containing the output bindings in the order (see example below). For each of these output bindings, you need to configure destination, content-type etc., complying with the standard Spring Cloud Stream expectations.

 

您可以按照常规方式编写应用程序,如上面单词计数示例中所示。但是,在使用分支功能时,您需要执行一些操作。首先,您需要确保返回类型是KStream[]而不是常规类型KStream。其次,您需要在订单中使用包含输出绑定的SendTo注释(请参阅下面的示例)。对于每个输出绑定,您需要配置目标,内容类型等,符合标准的Spring Cloud Stream期望。

 

Here is an example:

 

这是一个例子:

 

@EnableBinding(KStreamProcessorWithBranches.class)

@EnableAutoConfiguration

public static class WordCountProcessorApplication {

 

    @Autowired

    private TimeWindows timeWindows;

 

    @StreamListener("input")

    @SendTo({"output1","output2","output3})

    public KStream<?, WordCount>[] process(KStream<Object, String> input) {

 

Predicate<Object, WordCount> isEnglish = (k, v) -> v.word.equals("english");

Predicate<Object, WordCount> isFrench =  (k, v) -> v.word.equals("french");

Predicate<Object, WordCount> isSpanish = (k, v) -> v.word.equals("spanish");

 

return input

.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+")))

.groupBy((key, value) -> value)

.windowedBy(timeWindows)

.count(Materialized.as("WordCounts-1"))

.toStream()

.map((key, value) -> new KeyValue<>(null, new WordCount(key.key(), value, new Date(key.window().start()), new Date(key.window().end()))))

.branch(isEnglish, isFrench, isSpanish);

    }

 

    interface KStreamProcessorWithBranches {

 

            @Input("input")

            KStream<?, ?> input();

 

            @Output("output1")

            KStream<?, ?> output1();

 

            @Output("output2")

            KStream<?, ?> output2();

 

            @Output("output3")

            KStream<?, ?> output3();

        }

}

 

Properties:

spring.cloud.stream.bindings.output1.contentType: application/json

spring.cloud.stream.bindings.output2.contentType: application/json

spring.cloud.stream.bindings.output3.contentType: application/json

spring.cloud.stream.kafka.streams.binder.configuration.commit.interval.ms: 1000

spring.cloud.stream.kafka.streams.binder.configuration:

  default.key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde

  default.value.serde: org.apache.kafka.common.serialization.Serdes$StringSerde

spring.cloud.stream.bindings.output1:

  destination: foo

  producer:

    headerMode: raw

spring.cloud.stream.bindings.output2:

  destination: bar

  producer:

    headerMode: raw

spring.cloud.stream.bindings.output3:

  destination: fox

  producer:

    headerMode: raw

spring.cloud.stream.bindings.input:

  destination: words

  consumer:

    headerMode: raw

 

16.6. Message Conversion   消息转换

 

Similar to message-channel based binder applications, the Kafka Streams binder adapts to the out-of-the-box content-type conversions without any compromise.

 

与基于消息通道的绑定器应用程序类似,Kafka Streams绑定器可以适应开箱即用的内容类型转换,而不会有任何妥协。

 

It is typical for Kafka Streams operations to know the type of SerDe’s used to transform the key and value correctly. Therefore, it may be more natural to rely on the SerDe facilities provided by the Apache Kafka Streams library itself for the inbound and outbound conversions rather than using the content-type conversions offered by the framework. On the other hand, you might already be familiar with the content-type conversion patterns provided by the framework and might want to continue using them for inbound and outbound conversions.

 

Kafka Streams操作通常会知道用于正确转换键和值的SerDe的类型。因此,依赖于Apache Kafka Streams库本身在入站和出站转换中提供的SerDe工具而不是使用框架提供的内容类型转换可能更为自然。另一方面,您可能已经熟悉框架提供的内容类型转换模式,并且您希望继续用于入站和出站转换。

 

Both the options are supported in the Kafka Streams binder implementation.

 

Kafka Streams绑定器实现中都支持这两个选项。

 

Outbound serialization   出站序列化

 

If native encoding is disabled (which is the default), then the framework will convert the message using the contentType set by the user (otherwise, the default application/json will be applied). It will ignore any SerDe set on the outbound in this case for outbound serialization.

 

如果禁用原生编码(这是默认设置),则框架将使用用户设置的contentType转换消息(否则,将应用默认的application/json)。在这种情况下,它将忽略出站序列化的出站上的任何SerDe设置。

 

Here is the property to set the contentType on the outbound.

 

以下是在出站上设置contentType属性。

 

spring.cloud.stream.bindings.output.contentType: application/json

 

Here is the property to enable native encoding.

 

以下是启用原生编码的属性。

 

spring.cloud.stream.bindings.output.nativeEncoding: true

 

If native encoding is enabled on the output binding (user has to enable it as above explicitly), then the framework will skip any form of automatic message conversion on the outbound. In that case, it will switch to the Serde set by the user. The valueSerde property set on the actual output binding will be used. Here is an example.

 

如果在输出绑定上启用了原生编码(用户必须如上所述显式启用它),那么框架将跳过出站的任何形式的自动消息转换。在这种情况下,它将切换到用户设置的Serde。将使用在实际输出绑定上设置的valueSerde属性。这是一个例子。

 

spring.cloud.stream.kafka.streams.bindings.output.producer.valueSerde: org.apache.kafka.common.serialization.Serdes$StringSerde

 

If this property is not set, then it will use the "default" SerDe: spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde.

 

如果未设置此属性,则它将使用“默认”SerDe : spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde.

 

It is worth mentioning that the Kafka Streams binder does not serialize the keys on the outbound - it simply relies on Kafka itself. Therefore, you either have to specify the keySerde property on the binding or it will default to the application-wide common keySerde.

 

值得一提的是,Kafka Streams 绑定器不会在出站时序列化keys - 它只依赖于Kafka本身。因此,您必须在绑定上指定keySerde属性,否则它将默认为应用程序范围的公共keySerde。

 

Binding level key serde:

 

绑定级别的key serde:

 

spring.cloud.stream.kafka.streams.bindings.output.producer.keySerde

 

Common Key serde:

公共Key serde:

 

spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde

 

If branching is used, then you need to use multiple output bindings. For example,

 

如果使用分支,则需要使用多个输出绑定。例如,

 

interface KStreamProcessorWithBranches {

 

            @Input("input")

            KStream<?, ?> input();

 

            @Output("output1")

            KStream<?, ?> output1();

 

            @Output("output2")

            KStream<?, ?> output2();

 

            @Output("output3")

            KStream<?, ?> output3();

        }

 

If nativeEncoding is set, then you can set different SerDe’s on individual output bindings as below.

 

如果设置了nativeEncoding,那么您可以在各个输出绑定上设置不同的SerDe,如下所示。

 

spring.cloud.stream.kafka.streams.bindings.output1.producer.valueSerde=IntegerSerde
spring.cloud.stream.kafka.streams.bindings.output2.producer.valueSerde=StringSerde
spring.cloud.stream.kafka.streams.bindings.output3.producer.valueSerde=JsonSerde

 

Then if you have SendTo like this, @SendTo({"output1", "output2", "output3"}), the KStream[] from the branches are applied with proper SerDe objects as defined above. If you are not enabling nativeEncoding, you can then set different contentType values on the output bindings as below. In that case, the framework will use the appropriate message converter to convert the messages before sending to Kafka.

 

然后,如果您有这样的SendTo,@SendTo({"output1", "output2", "output3"}),分支中的KStream[]将应用上面定义的适当的SerDe对象。如果未启用nativeEncoding,则可以在输出绑定上设置不同的contentType值,如下所示。在这种情况下,框架将使用适当的消息转换器在发送到Kafka之前转换消息。

 

spring.cloud.stream.bindings.output1.contentType: application/json
spring.cloud.stream.bindings.output2.contentType: application/java-serialized-object
spring.cloud.stream.bindings.output3.contentType: application/octet-stream

 

Inbound Deserialization   入站反序列化

 

Similar rules apply to data deserialization on the inbound.

 

类似的规则适用于入站数据反序列化。

 

If native decoding is disabled (which is the default), then the framework will convert the message using the contentType set by the user (otherwise, the default application/json will be applied). It will ignore any SerDe set on the inbound in this case for inbound deserialization.

 

如果禁用原生解码(这是默认设置),则框架将使用用户设置的contentType转换消息(否则,将应用默认的application/json)。在这种情况下,它将忽略入站反序列化的入站上的任何SerDe集。

 

Here is the property to set the contentType on the inbound.

 

以下是在入站中设置contentType属性。

 

spring.cloud.stream.bindings.input.contentType: application/json

 

Here is the property to enable native decoding.

 

以下是启用原生解码的属性。

 

spring.cloud.stream.bindings.input.nativeDecoding: true

 

If native decoding is enabled on the input binding (the user has to enable it explicitly, as shown above), the framework skips any message conversion on the inbound and instead uses the SerDe set by the user. The valueSerde property set on the actual input binding is used. Here is an example.

 

如果在输入绑定上启用了原生解码（用户必须如上所述明确启用它），那么框架将跳过对入站进行任何消息转换。在这种情况下，它将切换到用户设置的SerDe。将使用在实际输入绑定上设置的valueSerde属性。这是一个例子。

 

spring.cloud.stream.kafka.streams.bindings.input.consumer.valueSerde: org.apache.kafka.common.serialization.Serdes$StringSerde

 

If this property is not set, it will use the default SerDe: spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde.

 

如果未设置此属性,则将使用默认的SerDe: spring.cloud.stream.kafka.streams.binder.configuration.default.value.serde.

 

It is worth mentioning that the Kafka Streams binder does not deserialize the keys on the inbound - it simply relies on Kafka itself. Therefore, you either have to specify the keySerde property on the binding or it defaults to the application-wide common keySerde.

 

值得一提的是,Kafka Streams绑定器不会对入站keys进行反序列化 - 它只依赖于Kafka本身。因此,您必须在绑定上指定keySerde属性,否则它将默认为应用程序范围的公共keySerde。

 

Binding level key serde:

 

绑定级别的key serde:

 

spring.cloud.stream.kafka.streams.bindings.input.consumer.keySerde

 

Common Key serde:

公共Key serde:

 

spring.cloud.stream.kafka.streams.binder.configuration.default.key.serde

 

As in the case of KStream branching on the outbound, the benefit of setting the value SerDe per binding is that, if you have multiple input bindings (multiple KStream objects) and they all require separate value SerDe's, you can configure them individually. If you use the common configuration approach, this feature is not applicable.

 

与出站时KStream分支的情况一样,每个绑定设置值SerDe的好处是,如果您有多个输入绑定(多个KStreams对象)并且它们都需要单独的SerDe值,那么您可以单独配置它们。如果使用公共配置方法,则此功能将不适用。
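Putting the inbound properties above together, a minimal sketch in application.yml might look like the following (the binding name input and the Serdes classes are only examples):

spring:
  cloud:
    stream:
      bindings:
        input:
          nativeDecoding: true
      kafka:
        streams:
          bindings:
            input:
              consumer:
                keySerde: org.apache.kafka.common.serialization.Serdes$StringSerde
                valueSerde: org.apache.kafka.common.serialization.Serdes$LongSerde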

 

16.7. Error Handling   错误处理

 

Apache Kafka Streams provides the capability for natively handling exceptions from deserialization errors. For details on this support, please see the Kafka Streams documentation. Out of the box, Apache Kafka Streams provides two kinds of deserialization exception handlers - logAndContinue and logAndFail. As the names indicate, the former logs the error and continues processing the next records, while the latter logs the error and fails. logAndFail is the default deserialization exception handler.

 

Apache Kafka Streams提供了原生处理反序列化错误异常的功能。有关此支持的详细信息，请参阅Kafka Streams文档。开箱即用，Apache Kafka Streams提供了两种反序列化异常处理程序 - logAndContinue和logAndFail。如名称所示，前者将记录错误并继续处理下一条记录，后者将记录错误并失败。logAndFail是默认的反序列化异常处理程序。

 

16.7.1. Handling Deserialization Exceptions   处理反序列化异常

 

Kafka Streams binder supports a selection of exception handlers through the following properties.

 

Kafka Streams binder通过以下属性支持一系列异常处理程序。

 

spring.cloud.stream.kafka.streams.binder.serdeError: logAndContinue

 

In addition to the above two deserialization exception handlers, the binder also provides a third one for sending the erroneous records (poison pills) to a DLQ topic. Here is how you enable this DLQ exception handler.

 

除了上述两个反序列化异常处理程序之外,绑定器还提供了第三个反序列化异常处理程序,用于将错误记录(毒丸)发送到DLQ主题。以下是启用此DLQ异常处理程序的方法。

 

spring.cloud.stream.kafka.streams.binder.serdeError: sendToDlq

 

When the above property is set, all the deserialization error records are automatically sent to the DLQ topic.

 

设置上述属性后,所有反序列化错误记录将自动发送到DLQ主题。

 

spring.cloud.stream.kafka.streams.bindings.input.consumer.dlqName: foo-dlq

 

If this is set, then the error records are sent to the topic foo-dlq. If this is not set, then it will create a DLQ topic with the name error.<input-topic-name>.<group-name>.

 

如果设置了此项,则会将错误记录发送到主题foo-dlq。如果未设置,则会创建名称为error.<input-topic-name>.<group-name>的DLQ主题。
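For example, a minimal sketch combining the two properties above (the binding name input and the topic name foo-dlq are only examples):

spring.cloud.stream.kafka.streams.binder.serdeError: sendToDlq
spring.cloud.stream.kafka.streams.bindings.input.consumer.dlqName: foo-dlq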

 

A couple of things to keep in mind when using the exception handling feature in Kafka Streams binder.

  • The property spring.cloud.stream.kafka.streams.binder.serdeError is applicable for the entire application. This implies that if there are multiple StreamListener methods in the same application, this property is applied to all of them.
  • The exception handling for deserialization works consistently with native deserialization and framework provided message conversion.

 

在Kafka Streams绑定器中使用异常处理功能时要记住几件事。

  • spring.cloud.stream.kafka.streams.binder.serdeError属性适用于整个应用。这意味着如果在同一个应用程序中有多个StreamListener方法,则此属性将应用于所有这些方法。
  • 反序列化的异常处理与原生反序列化和框架提供的消息转换一致。

 

16.7.2. Handling Non-Deserialization Exceptions   处理非反序列化异常

 

For general error handling in Kafka Streams binder, it is up to the end user applications to handle application level errors. As a side effect of providing a DLQ for deserialization exception handlers, Kafka Streams binder provides a way to get access to the DLQ sending bean directly from your application. Once you get access to that bean, you can programmatically send any exception records from your application to the DLQ.

 

对于Kafka Streams绑定器中的一般错误处理,最终用户应用程序可以处理应用程序级错误。作为为反序列化异常处理程序提供DLQ的副作用,Kafka Streams绑定器提供了一种直接从应用程序访问发送bean的DLQ的方法。一旦访问该bean,就可以以编程方式将任何异常记录从应用程序发送到DLQ。

 

Robust error handling remains difficult with the high-level DSL; Kafka Streams does not yet natively support error handling there.

 

使用高级DSL仍然难以进行强大的错误处理; Kafka Streams本身并不支持错误处理。

 

However, when you use the low-level Processor API in your application, there are options to control this behavior. See below.

 

但是,在应用程序中使用低级Processor API时,可以选择控制此行为。见下文。

 

@Autowired
private SendToDlqAndContinue dlqHandler;

@StreamListener("input")
@SendTo("output")
public KStream<?, WordCount> process(KStream<Object, String> input) {

    input.process(() -> new Processor() {

        ProcessorContext context;

        @Override
        public void init(ProcessorContext context) {
            this.context = context;
        }

        @Override
        public void process(Object o, Object o2) {
            try {
                .....
                .....
            }
            catch(Exception e) {
                //explicitly provide the kafka topic corresponding to the input binding as the first argument.
                //DLQ handler will correctly map to the dlq topic from the actual incoming destination.
                dlqHandler.sendToDlq("topic-name", (byte[]) o, (byte[]) o2, context.partition());
            }
        }

        .....
        .....
    });
}

 

16.8. Interactive Queries   交互式查询

As part of the public Kafka Streams binder API, we expose a class called QueryableStoreRegistry. You can access this as a Spring bean in your application. An easy way to get access to this bean from your application is to "autowire" the bean in your application.

 

作为公共Kafka Streams binder API的一部分,我们公开了一个名为QueryableStoreRegistry的类。您可以在应用程序中将其作为Spring bean进行访问。从应用程序访问此bean的一种简单方法是在应用程序中“自动装配”该bean。

 

@Autowired
private QueryableStoreRegistry queryableStoreRegistry;

 

Once you gain access to this bean, you can query for the particular state store that you are interested in. See below.

 

一旦获得对此bean的访问权限,就可以查询您感兴趣的特定状态存储。见下文。

 

ReadOnlyKeyValueStore<Object, Object> keyValueStore =
                                                queryableStoreRegistry.getQueryableStoreType("my-store", QueryableStoreTypes.keyValueStore());
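As an illustration only (not part of the binder API documentation), the queried store could then be exposed over HTTP; the store name my-store, the key and value types, and the endpoint path are assumptions:

@RestController
public class CountsController {

    @Autowired
    private QueryableStoreRegistry queryableStoreRegistry;

    @GetMapping("/counts/{key}")
    public Long count(@PathVariable("key") String key) {
        // Look up the state store by name and query it by key.
        ReadOnlyKeyValueStore<String, Long> store =
                queryableStoreRegistry.getQueryableStoreType("my-store", QueryableStoreTypes.keyValueStore());
        return store.get(key);
    }
}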

 

16.9. Accessing the underlying KafkaStreams object   访问底层的KafkaStreams对象

The StreamsBuilderFactoryBean from spring-kafka that is responsible for constructing the KafkaStreams object can be accessed programmatically. Each StreamsBuilderFactoryBean is registered as stream-builder, appended with the StreamListener method name. For example, if your StreamListener method is named process, the stream builder bean is named stream-builder-process. Since this is a factory bean, it should be accessed by prepending an ampersand (&) when accessing it programmatically. The following is an example, and it assumes that the StreamListener method is named process:

 

可以通过编程方式访问负责构造KafkaStreams对象的spring-kafka中的StreamBuilderFactoryBean。每个StreamBuilderFactoryBean都被注册为stream-builder并附加StreamListener方法名称。例如,如果您的StreamListener方法被命名process,则流构建器bean的名称为stream-builder-process。由于这是一个工厂bean,因此应该通过在以编程方式访问它时添加一个&符号(&)来访问它。以下是一个示例,它假定该StreamListener方法命名为process

 

StreamsBuilderFactoryBean streamsBuilderFactoryBean = context.getBean("&stream-builder-process", StreamsBuilderFactoryBean.class);
                        KafkaStreams kafkaStreams = streamsBuilderFactoryBean.getKafkaStreams();

 

17. RabbitMQ Binder

 

17.1. Usage

 

To use the RabbitMQ binder, you can add it to your Spring Cloud Stream application, by using the following Maven coordinates:

 

要使用RabbitMQ绑定器,可以使用以下Maven坐标将其添加到Spring Cloud Stream应用程序中:

 

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-stream-binder-rabbit</artifactId>
</dependency>

 

Alternatively, you can use the Spring Cloud Stream RabbitMQ Starter, as follows:

 

或者,您可以使用Spring Cloud Stream RabbitMQ Starter,如下所示:

 

<dependency>
  <groupId>org.springframework.cloud</groupId>
  <artifactId>spring-cloud-starter-stream-rabbit</artifactId>
</dependency>

 

17.2. RabbitMQ Binder Overview

 

The following simplified diagram shows how the RabbitMQ binder operates:

 

以下简化图显示了RabbitMQ绑定器的运行方式:

 

Figure 11. RabbitMQ Binder

 

By default, the RabbitMQ Binder implementation maps each destination to a TopicExchange. For each consumer group, a Queue is bound to that TopicExchange. Each consumer instance has a corresponding RabbitMQ Consumer instance for its group’s Queue. For partitioned producers and consumers, the queues are suffixed with the partition index and use the partition index as the routing key. For anonymous consumers (those with no group property), an auto-delete queue (with a randomized unique name) is used.

 

默认情况下,RabbitMQ Binder实现将每个目标映射到一个TopicExchange。对于每个消费者组,都有一个Queue与此TopicExchange绑定。每个消费者实例都有一个与其组Queue对应的RabbitMQ Consumer实例。对于分区生产者和消费者,队列以分区索引为后缀,并使用分区索引作为路由键。对于匿名消费者(没有group属性的消费者),使用自动删除队列(具有随机的唯一名称)。

 

By using the optional autoBindDlq option, you can configure the binder to create and configure dead-letter queues (DLQs) (and a dead-letter exchange DLX, as well as routing infrastructure). By default, the dead letter queue has the name of the destination, appended with .dlq. If retry is enabled (maxAttempts > 1), failed messages are delivered to the DLQ after retries are exhausted. If retry is disabled (maxAttempts = 1), you should set requeueRejected to false (the default) so that failed messages are routed to the DLQ, instead of being re-queued. In addition, republishToDlq causes the binder to publish a failed message to the DLQ (instead of rejecting it). This feature lets additional information (such as the stack trace in the x-exception-stacktrace header) be added to the message in headers. This option does not need retry enabled. You can republish a failed message after just one attempt. Starting with version 1.2, you can configure the delivery mode of republished messages. See the republishDeliveryMode property.

 

通过使用可选autoBindDlq选项,您可以配置绑定器以创建和配置死信队列(DLQ)(以及死信交换DLX,以及路由基础结构)。默认情况下,死信队列的名称即目标名称,追加.dlq后缀。如果启用了重试(maxAttempts > 1),则在重试耗尽后,失败的消息将传递到DLQ。如果禁用重试(maxAttempts = 1),则应设置requeueRejected为false(默认值),以便将失败的消息路由到DLQ,而不是重新排队。此外,republishToDlq导致绑定器将失败的消息发布到DLQ(而不是拒绝它)。此功能可以将其他信息(例如,x-exception-stacktrace header中的堆栈跟踪)添加到headers中的消息。此选项不需要重试。只需一次尝试即可重新发布失败的消息。从1.2版开始,您可以配置重新发布的消息的传递模式。查看republishDeliveryMode属性。

 

Setting requeueRejected to true (with republishToDlq=false ) causes the message to be re-queued and redelivered continually, which is likely not what you want unless the reason for the failure is transient. In general, you should enable retry within the binder by setting maxAttempts to greater than one or by setting republishToDlq to true.

设置requeueRejected为true(with republishToDlq=false)会导致消息重新排队并连续重新传递,这可能不是您想要的,除非失败的原因是暂时的。通常,您应该通过设置maxAttempts为大于1或通过设置republishToDlq为true在绑定器中开启重试。
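For example, a minimal sketch enabling retry together with a DLQ and republishing (the binding name input and the group myGroup are only examples):

spring.cloud.stream.bindings.input.group=myGroup
spring.cloud.stream.bindings.input.consumer.max-attempts=3
spring.cloud.stream.rabbit.bindings.input.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.input.consumer.republish-to-dlq=true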

 

See RabbitMQ Binder Properties for more information about these properties.

 

有关这些属性的更多信息,请参见RabbitMQ Binder属性

 

The framework does not provide any standard mechanism to consume dead-letter messages (or to re-route them back to the primary queue). Some options are described in Dead-Letter Queue Processing.

 

该框架没有提供任何标准机制来消费死信消息(或将它们重新路由回主队列)。死信队列处理中描述了一些选项。

 

When multiple RabbitMQ binders are used in a Spring Cloud Stream application, it is important to disable 'RabbitAutoConfiguration' to avoid the same configuration from RabbitAutoConfiguration being applied to the two binders. You can exclude the class by using the @SpringBootApplication annotation.

当在Spring Cloud Stream应用程序中使用多个RabbitMQ绑定器时,禁用“RabbitAutoConfiguration”以避免将相同的RabbitAutoConfiguration配置应用于两个绑定器非常重要。您可以使用@SpringBootApplication注释排除此类。
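For example, a minimal sketch of such an exclusion (the class name is only an example):

@SpringBootApplication(exclude = RabbitAutoConfiguration.class)
public class MultiBinderApplication {

    public static void main(String[] args) {
        SpringApplication.run(MultiBinderApplication.class, args);
    }
}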

 

Starting with version 2.0, the RabbitMessageChannelBinder sets the RabbitTemplate.usePublisherConnection property to true so that the non-transactional producers avoid deadlocks on consumers, which can happen if cached connections are blocked because of a memory alarm on the broker.

 

从版本2.0开始，RabbitMessageChannelBinder将RabbitTemplate.usePublisherConnection属性设置为true，以便非事务生产者避免在消费者上死锁，如果由于代理上的内存警报而阻塞高速缓存连接，则可能发生这种情况。

 

17.3. Configuration Options   配置选项

This section contains settings specific to the RabbitMQ Binder and bound channels.

For general binding configuration options and properties, see the Spring Cloud Stream core documentation.

 

本节包含特定于RabbitMQ Binder和绑定通道的设置。

有关常规绑定配置选项和属性,请参阅Spring Cloud Stream核心文档

 

RabbitMQ Binder Properties   RabbitMQ绑定器属性

By default, the RabbitMQ binder uses Spring Boot's ConnectionFactory. Consequently, it supports all Spring Boot configuration options for RabbitMQ. (For reference, see the Spring Boot documentation.) RabbitMQ configuration options use the spring.rabbitmq prefix.

 

默认情况下,RabbitMQ绑定器使用Spring Boot的ConnectionFactory。因此,它支持RabbitMQ的所有Spring Boot配置选项。(有关参考,请参阅Spring Boot文档)。RabbitMQ配置选项使用spring.rabbitmq前缀。
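For example, the standard Spring Boot connection properties can be set as follows (the values shown are only examples):

spring.rabbitmq.host=localhost
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
spring.rabbitmq.virtual-host=/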

 

In addition to Spring Boot options, the RabbitMQ binder supports the following properties:

 

除Spring Boot选项外,RabbitMQ binder还支持以下属性:

 

spring.cloud.stream.rabbit.binder.adminAddresses

A comma-separated list of RabbitMQ management plugin URLs. Only used when nodes contains more than one entry. Each entry in this list must have a corresponding entry in spring.rabbitmq.addresses. Only needed if you use a RabbitMQ cluster and wish to consume from the node that hosts the queue. See Queue Affinity and the LocalizedQueueConnectionFactory for more information.

 

以逗号分隔的RabbitMQ管理插件URL列表。仅在nodes包含多个条目时使用。此列表中的每个条目都必须在spring.rabbitmq.addresses中包含相应的条目。仅在您使用RabbitMQ集群并希望从承载队列的节点消费时才需要。有关更多信息,请参阅Queue Affinity和LocalizedQueueConnectionFactory

 

Default: empty.

spring.cloud.stream.rabbit.binder.nodes

A comma-separated list of RabbitMQ node names. When more than one entry is present, it is used to locate the server address where a queue is located. Each entry in this list must have a corresponding entry in spring.rabbitmq.addresses. Only needed if you use a RabbitMQ cluster and wish to consume from the node that hosts the queue. See Queue Affinity and the LocalizedQueueConnectionFactory for more information.

 

以逗号分隔的RabbitMQ节点名称列表。当存在多个条目时,用于查找队列所在的服务器地址。此列表中的每个条目都必须在spring.rabbitmq.addresses中包含相应的条目。仅在您使用RabbitMQ集群并希望从承载队列的节点消费时才需要。有关更多信息,请参阅Queue Affinity和LocalizedQueueConnectionFactory

 

Default: empty.

spring.cloud.stream.rabbit.binder.compressionLevel

The compression level for compressed bindings. See java.util.zip.Deflater.

 

压缩绑定的压缩级别。见java.util.zip.Deflater。

 

Default: 1 (BEST_SPEED).

spring.cloud.stream.binder.connection-name-prefix

A connection name prefix used to name the connection(s) created by this binder. The name is this prefix followed by #n, where n increments each time a new connection is opened.

 

用于命名此绑定器创建的连接的连接名称前缀。名称是此前缀后跟#n,其中每次打开新连接时n递增。

 

Default: none (Spring AMQP default).

 

RabbitMQ Consumer Properties   RabbitMQ消费者属性

 

The following properties are available for Rabbit consumers only and must be prefixed with spring.cloud.stream.rabbit.bindings.<channelName>.consumer..

 

以下属性仅适用于Rabbit消费者,必须以spring.cloud.stream.rabbit.bindings.<channelName>.consumer.为前缀。

 

acknowledgeMode

The acknowledge mode.

 

确认模式。

 

Default: AUTO.

autoBindDlq

Whether to automatically declare the DLQ and bind it to the binder DLX.

 

是否自动声明DLQ并将其绑定到绑定器DLX。

 

Default: false.

bindingRoutingKey

The routing key with which to bind the queue to the exchange (if bindQueue is true). For partitioned destinations, -<instanceIndex> is appended.

 

用于将队列绑定到交换机的路由密钥(如果bindQueue是true)。对于分区目的地,追加-<instanceIndex>。

 

Default: #.

bindQueue

Whether to bind the queue to the destination exchange. Set it to false if you have set up your own infrastructure and have previously created and bound the queue.

 

是否将队列绑定到目标交换机。如果您已设置自己的基础架构并且之前已创建并绑定队列,请将其设置为false。

 

Default: true.

deadLetterQueueName

The name of the DLQ

 

DLQ的名称

 

Default: prefix+destination.dlq

deadLetterExchange

A DLX to assign to the queue. Relevant only if autoBindDlq is true.

 

要分配给队列的DLX。仅在autoBindDlq是true时相关。

 

Default: 'prefix+DLX'

deadLetterRoutingKey

A dead letter routing key to assign to the queue. Relevant only if autoBindDlq is true.

 

用于分配给队列的死信路由密钥。仅在autoBindDlq是true时相关。

 

Default: destination

declareExchange

Whether to declare the exchange for the destination.

 

是否声明目的地的交换。

 

Default: true.

delayedExchange

Whether to declare the exchange as a Delayed Message Exchange. Requires the delayed message exchange plugin on the broker. The x-delayed-type argument is set to the exchangeType.

 

是否将交换声明为为Delayed Message Exchange。需要代理上的延迟消息交换插件。x-delayed-type参数设置为exchangeType。

 

Default: false.

dlqDeadLetterExchange

If a DLQ is declared, a DLX to assign to that queue.

 

如果声明了DLQ,则为分配给该队列的DLX。

 

Default: none

dlqDeadLetterRoutingKey

If a DLQ is declared, a dead letter routing key to assign to that queue.

 

如果声明了DLQ,则为分配给该队列的死信路由密钥。

 

Default: none

dlqExpires

How long before an unused dead letter queue is deleted (in milliseconds).

 

删除未使用的死信队列需要多长时间(以毫秒为单位)。

 

Default: no expiration

dlqLazy

Declare the dead letter queue with the x-queue-mode=lazy argument. See “Lazy Queues”. Consider using a policy instead of this setting, because using a policy allows changing the setting without deleting the queue.

 

声明带有x-queue-mode=lazy参数的死信队列。请参阅“懒惰队列”。请考虑使用策略而不是此设置,因为使用策略允许更改设置而不删除队列。

 

Default: false.

dlqMaxLength

Maximum number of messages in the dead letter queue.

 

死信队列中的最大消息数。

 

Default: no limit

dlqMaxLengthBytes

Maximum number of total bytes in the dead letter queue from all messages.

 

所有消息中死信队列中的最大总字节数。

 

Default: no limit

dlqMaxPriority

Maximum priority of messages in the dead letter queue (0-255).

 

死信队列中消息的最大优先级(0-255)。

 

Default: none

dlqTtl

Default time to live to apply to the dead letter queue when declared (in milliseconds).

 

声明时应用于死信队列的默认时间(以毫秒为单位)。

 

Default: no limit

durableSubscription

Whether the subscription should be durable. Only effective if group is also set.

 

订阅是否应该是持久的。仅group设置时有效。

 

Default: true.

exchangeAutoDelete

If declareExchange is true, whether the exchange should be auto-deleted (that is, removed after the last queue is removed).

 

如果declareExchange为true,则是否应自动删除交换(即,在删除最后一个队列后删除)。

 

Default: true.

exchangeDurable

If declareExchange is true, whether the exchange should be durable (that is, it survives broker restart).

 

如果declareExchange是true,则交换是否应该是持久的(即,它在代理重启后仍然存在)。

 

Default: true.

exchangeType

The exchange type: direct, fanout or topic for non-partitioned destinations and direct or topic for partitioned destinations.

 

交换类型:direct,fanout或用于非分区目标的topic和用于分区目标的direct或topic。

 

Default: topic.

exclusive

Whether to create an exclusive consumer. Concurrency should be 1 when this is true. Often used when strict ordering is required but enabling a hot standby instance to take over after a failure. See recoveryInterval, which controls how often a standby instance attempts to consume.

 

是否创建独家消费者。如果是true,则并发应该是1。通常在需要严格排序时使用,但在发生故障后启用热备用实例。请参阅recoveryInterval,它控制备用实例尝试使用的频率。

 

Default: false.

expires

How long before an unused queue is deleted (in milliseconds).

 

删除未使用的队列需要多长时间(以毫秒为单位)。

 

Default: no expiration

failedDeclarationRetryInterval

The interval (in milliseconds) between attempts to consume from a queue if it is missing.

 

队列缺失时,尝试从队列中消费的时间间隔(以毫秒为单位)。

 

Default: 5000

headerPatterns

Patterns for headers to be mapped from inbound messages.

 

从入站消息中映射的headers的模式。

 

Default: ['*'] (all headers).

lazy

Declare the queue with the x-queue-mode=lazy argument. See “Lazy Queues”. Consider using a policy instead of this setting, because using a policy allows changing the setting without deleting the queue.

 

声明带有x-queue-mode=lazy参数的队列。请参阅“懒惰队列”。请考虑使用策略而不是此设置,因为使用策略允许更改设置而不删除队列。

 

Default: false.

maxConcurrency

The maximum number of consumers.

 

最大消费者数量。

 

Default: 1.

maxLength

The maximum number of messages in the queue.

 

队列中的最大消息数。

 

Default: no limit

maxLengthBytes

The maximum number of total bytes in the queue from all messages.

 

所有消息中队列中的最大总字节数。

 

Default: no limit

maxPriority

The maximum priority of messages in the queue (0-255).

 

队列中消息的最大优先级(0-255)。

 

Default: none

missingQueuesFatal

When the queue cannot be found, whether to treat the condition as fatal and stop the listener container. Defaults to false so that the container keeps trying to consume from the queue — for example, when using a cluster and the node hosting a non-HA queue is down.

 

当无法找到队列时,是否将条件视为致命并停止监听器容器。默认设置为false以便容器继续尝试从队列中消费 - 例如,在使用群集时,托管非HA队列的节点已关闭。

 

Default: false

prefetch

Prefetch count.

 

预取计数。

 

Default: 1.

prefix

A prefix to be added to the name of the destination and queues.

 

要添加到destination和队列名称的前缀。

 

Default: "".

queueDeclarationRetries

The number of times to retry consuming from a queue if it is missing. Relevant only when missingQueuesFatal is true. Otherwise, the container keeps retrying indefinitely.

 

如果丢失,则从队列重试消费的次数。只有当missingQueuesFatal是true时有关。否则,容器将无限期地重试。

 

Default: 3

queueNameGroupOnly

When true, consume from a queue with a name equal to the group. Otherwise the queue name is destination.group. This is useful, for example, when using Spring Cloud Stream to consume from an existing RabbitMQ queue.

 

如果为true,则从名称等于group的队列中消费。否则队列名称是destination.group。例如,当使用Spring Cloud Stream从现有RabbitMQ队列中消费时,这很有用。

 

Default: false.

recoveryInterval

The interval between connection recovery attempts, in milliseconds.

 

连接恢复尝试之间的间隔,以毫秒为单位。

 

Default: 5000.

requeueRejected

Whether delivery failures should be re-queued when retry is disabled or republishToDlq is false.

 

当重试被关闭或republishToDlq的false时,是否发送故障应重新排队。

 

Default: false.

republishDeliveryMode

When republishToDlq is true, specifies the delivery mode of the republished message.

 

当republishToDlq是true时,指定重新发布消息的传递方式。

 

Default: DeliveryMode.PERSISTENT

republishToDlq

By default, messages that fail after retries are exhausted are rejected. If a dead-letter queue (DLQ) is configured, RabbitMQ routes the failed message (unchanged) to the DLQ. If set to true, the binder republishes failed messages to the DLQ with additional headers, including the exception message and stack trace from the cause of the final failure.

 

默认情况下,拒绝重试后失败的邮件。如果配置了死信队列(DLQ),RabbitMQ会将失败的消息(未更改)路由到DLQ。如果设置为true,则绑定器会使用其他headers将失败的消息重新发布到DLQ,包括异常消息和最终失败原因的堆栈跟踪。

 

Default: false

transacted

Whether to use transacted channels.

 

是否使用事务化通道。

 

Default: false.

ttl

Default time to live to apply to the queue when declared (in milliseconds).

 

声明时应用于队列的默认时间(以毫秒为单位)。

 

Default: no limit

txSize

The number of deliveries between acks.

 

确认之间的交付数量。

 

Default: 1.
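As an illustration, a few of the consumer properties above applied to a binding named input (the names and values are only examples):

spring.cloud.stream.rabbit.bindings.input.consumer.acknowledge-mode=AUTO
spring.cloud.stream.rabbit.bindings.input.consumer.prefetch=10
spring.cloud.stream.rabbit.bindings.input.consumer.max-concurrency=5
spring.cloud.stream.rabbit.bindings.input.consumer.ttl=60000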

 

Rabbit Producer Properties   Rabbit生产者属性

 

The following properties are available for Rabbit producers only and must be prefixed with spring.cloud.stream.rabbit.bindings.<channelName>.producer..

 

以下属性仅适用于Rabbit生产者,必须带有spring.cloud.stream.rabbit.bindings.<channelName>.producer.前缀。

 

autoBindDlq

Whether to automatically declare the DLQ and bind it to the binder DLX.

 

是否自动声明DLQ并将其绑定到绑定器DLX。

 

Default: false.

batchingEnabled

Whether to enable message batching by producers. Messages are batched into one message according to the following properties (described in the next three entries in this list): 'batchSize', batchBufferLimit, and batchTimeout. See Batching for more information.

 

是否启用生产者的消息批处理。根据以下属性将消息批处理为一条消息(在此列表的下三个条目中描述):'batchSize',batchBufferLimit,和batchTimeout。有关更多信息,请参阅批处理

 

Default: false.

batchSize

The number of messages to buffer when batching is enabled.

 

启用批处理时要缓冲的消息数。

 

Default: 100.

batchBufferLimit

The maximum buffer size when batching is enabled.

 

启用批处理时的最大缓冲区大小。

 

Default: 10000.

batchTimeout

The batch timeout when batching is enabled.

 

批处理启用时的批处理超时。

 

Default: 5000.

bindingRoutingKey

The routing key with which to bind the queue to the exchange (if bindQueue is true). Only applies to non-partitioned destinations. Only applies if requiredGroups are provided and then only to those groups.

 

用于将队列绑定到交换机的路由密钥(如果bindQueue是true)。仅适用于非分区目标。仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: #.

bindQueue

Whether to bind the queue to the destination exchange. Set it to false if you have set up your own infrastructure and have previously created and bound the queue. Only applies if requiredGroups are provided and then only to those groups.

 

是否将队列绑定到目标交换机。如果您已设置自己的基础架构并且之前已创建并绑定队列,请将其设置为false。仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: true.

compress

Whether data should be compressed when sent.

 

是否应在发送时压缩数据。

 

Default: false.

deadLetterQueueName

The name of the DLQ. Only applies if requiredGroups are provided and then only to those groups.

 

DLQ的名称,仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: prefix+destination.dlq

deadLetterExchange

A DLX to assign to the queue. Relevant only when autoBindDlq is true. Applies only when requiredGroups are provided and then only to those groups.

 

要分配给队列的DLX。只有当autoBindDlq是true时有关。仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: 'prefix+DLX'

deadLetterRoutingKey

A dead letter routing key to assign to the queue. Relevant only when autoBindDlq is true. Applies only when requiredGroups are provided and then only to those groups.

 

用于分配给队列的死信路由密钥。只有当autoBindDlq是true时有关。仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: destination

declareExchange

Whether to declare the exchange for the destination.

 

是否声明目的地的交换。

 

Default: true.

delayExpression

A SpEL expression to evaluate the delay to apply to the message (x-delay header). It has no effect if the exchange is not a delayed message exchange.

 

用于评估应用于消息(x-delay header)的延迟的SpEL表达式。如果交换不是延迟消息交换,则无效。

 

Default: No x-delay header is set.

delayedExchange

Whether to declare the exchange as a Delayed Message Exchange. Requires the delayed message exchange plugin on the broker. The x-delayed-type argument is set to the exchangeType.

 

是否将交换声明为Delayed Message Exchange。需要代理上的延迟消息交换插件。x-delayed-type参数设置为exchangeType。

 

Default: false.

deliveryMode

The delivery mode.

 

投递模式。

 

Default: PERSISTENT.

dlqDeadLetterExchange

When a DLQ is declared, a DLX to assign to that queue. Applies only if requiredGroups are provided and then only to those groups.

 

声明DLQ时,则为将分配给该队列的DLX。仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: none

dlqDeadLetterRoutingKey

When a DLQ is declared, a dead letter routing key to assign to that queue. Applies only when requiredGroups are provided and then only to those groups.

 

声明DLQ时,则为分配给该队列的死信路由键。仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: none

dlqExpires

How long (in milliseconds) before an unused dead letter queue is deleted. Applies only when requiredGroups are provided and then only to those groups.

 

删除未使用的死信队列之前的时间(以毫秒为单位)。仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: no expiration

dlqLazy

Declare the dead letter queue with the x-queue-mode=lazy argument. See “Lazy Queues”. Consider using a policy instead of this setting, because using a policy allows changing the setting without deleting the queue. Applies only when requiredGroups are provided and then only to those groups.

 

使用x-queue-mode=lazy参数声明死信队列。请参阅“懒惰队列”。请考虑使用策略而不是此设置，因为使用策略允许更改设置而不删除队列。仅在提供requiredGroups时适用，然后仅适用于这些组。

 

Default: false.

dlqMaxLength

Maximum number of messages in the dead letter queue. Applies only if requiredGroups are provided and then only to those groups.

 

死信队列中的最大消息数。仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: no limit

dlqMaxLengthBytes

Maximum number of total bytes in the dead letter queue from all messages. Applies only when requiredGroups are provided and then only to those groups.

 

所有消息中死信队列中的最大总字节数。仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: no limit

dlqMaxPriority

Maximum priority of messages in the dead letter queue (0-255) Applies only when requiredGroups are provided and then only to those groups.

 

死信队列中消息的最大优先级(0-255)仅在提供requiredGroups时才适用,然后仅适用于这些组。

 

Default: none

dlqTtl

Default time (in milliseconds) to live to apply to the dead letter queue when declared. Applies only when requiredGroups are provided and then only to those groups.

 

声明时应用于死信队列的默认时间(以毫秒为单位)。仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: no limit

exchangeAutoDelete

If declareExchange is true, whether the exchange should be auto-delete (it is removed after the last queue is removed).

 

如果declareExchange是true,是否应该自动删除交换(在删除最后一个队列后删除它)。

 

Default: true.

exchangeDurable

If declareExchange is true, whether the exchange should be durable (survives broker restart).

 

如果declareExchange是true,交换是否应该是持久的(在broker重启后仍然存活)。

 

Default: true.

exchangeType

The exchange type: direct, fanout or topic for non-partitioned destinations and direct or topic for partitioned destinations.

 

交换类型:direct,fanout或用于非分区目标的topic和用于分区目标的direct或topic。

 

Default: topic.

expires

How long (in milliseconds) before an unused queue is deleted. Applies only when requiredGroups are provided and then only to those groups.

 

删除未使用的队列之前的时间(以毫秒为单位)。仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: no expiration

headerPatterns

Patterns for headers to be mapped to outbound messages.

 

要映射到出站消息的headers模式。

 

Default: ['*'] (all headers).

lazy

Declare the queue with the x-queue-mode=lazy argument. See “Lazy Queues”. Consider using a policy instead of this setting, because using a policy allows changing the setting without deleting the queue. Applies only when requiredGroups are provided and then only to those groups.

 

使用x-queue-mode=lazy参数声明队列。请参阅“懒惰队列”。请考虑使用策略而不是此设置,因为使用策略允许更改设置而不删除队列。仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: false.

maxLength

Maximum number of messages in the queue. Applies only when requiredGroups are provided and then only to those groups.

 

队列中的最大消息数。仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: no limit

maxLengthBytes

Maximum number of total bytes in the queue from all messages. Only applies if requiredGroups are provided and then only to those groups.

 

所有消息中队列中的最大总字节数。仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: no limit

maxPriority

Maximum priority of messages in the queue (0-255). Only applies if requiredGroups are provided and then only to those groups.

 

队列中消息的最大优先级(0-255)。仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: none

prefix

A prefix to be added to the name of the destination exchange.

 

要添加到destination交换机名称的前缀。

 

Default: "".

queueNameGroupOnly

When true, consume from a queue with a name equal to the group. Otherwise the queue name is destination.group. This is useful, for example, when using Spring Cloud Stream to consume from an existing RabbitMQ queue. Applies only when requiredGroups are provided and then only to those groups.

 

当true时,使用名称等于group的队列消费。否则队列名称是destination.group。例如,当使用Spring Cloud Stream从现有RabbitMQ队列中消费时,这很有用。仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: false.

routingKeyExpression

A SpEL expression to determine the routing key to use when publishing messages. For a fixed routing key, use a literal expression, such as routingKeyExpression='my.routingKey' in a properties file or routingKeyExpression: '''my.routingKey''' in a YAML file.

 

一个SpEL表达式,用于确定发布消息时要使用的路由键。对于固定路由键,请使用文字表达式,例如在属性文件中routingKeyExpression='my.routingKey'或在YAML文件中routingKeyExpression: '''my.routingKey'''。

 

Default: destination or destination-<partition> for partitioned destinations.

 

默认值:用于分区目的地的destination或destination-<partition>。

 

transacted

Whether to use transacted channels.

 

是否使用事务化通道。

 

Default: false.

ttl

Default time (in milliseconds) to live to apply to the queue when declared. Applies only when requiredGroups are provided and then only to those groups.

 

声明时应用于队列的默认时间(以毫秒为单位)。仅在提供requiredGroups时适用,然后仅适用于这些组。

 

Default: no limit
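As an illustration, a few of the producer properties above applied to a binding named output (the names and values are only examples):

spring.cloud.stream.rabbit.bindings.output.producer.delivery-mode=NON_PERSISTENT
spring.cloud.stream.rabbit.bindings.output.producer.compress=true
spring.cloud.stream.rabbit.bindings.output.producer.routing-key-expression='my.routingKey'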

 

In the case of RabbitMQ, content type headers can be set by external applications. Spring Cloud Stream supports them as part of an extended internal protocol used for any type of transport — including transports, such as Kafka (prior to 0.11), that do not natively support headers.

在RabbitMQ的情况下,内容类型headers可以由外部应用程序设置。Spring Cloud Stream支持它们作为扩展内部协议的一部分,用于任何类型的传输 - 包括传输,如Kafka(0.11之前),非原支持headers。

 

17.4. Retry With the RabbitMQ Binder   使用RabbitMQ绑定器重试

 

When retry is enabled within the binder, the listener container thread is suspended for any back off periods that are configured. This might be important when strict ordering is required with a single consumer. However, for other use cases, it prevents other messages from being processed on that thread. An alternative to using binder retry is to set up dead lettering with time to live on the dead-letter queue (DLQ) as well as dead-letter configuration on the DLQ itself. See “RabbitMQ Binder Properties” for more information about the properties discussed here. You can use the following example configuration to enable this feature:

  • Set autoBindDlq to true. The binder creates a DLQ. Optionally, you can specify a name in deadLetterQueueName.
  • Set dlqTtl to the back off time you want to wait between redeliveries.
  • Set the dlqDeadLetterExchange to the default exchange. Expired messages from the DLQ are routed to the original queue, because the default deadLetterRoutingKey is the queue name (destination.group). Setting to the default exchange is achieved by setting the property with no value, as shown in the next example.

 

在绑定器中启用重试时,将暂停监听器容器线程以用于配置的任何后退时段。当单个消费者需要严格的订购时,这可能很重要。但是,对于其他用例,它会阻止在该线程上处理其他消息。使用绑定器重试的另一种方法是使用死信队列(DLQ)上的生存时间以及DLQ本身上的死信配置设置死信。有关此处讨论的属性的更多信息,请参阅“ RabbitMQ Binder属性 ”。您可以使用以下示例配置来启用此功能:

  • 设置autoBindDlq为true。绑定器创建DLQ。(可选)您可以在deadLetterQueueName中指定名称。
  • 设置dlqTtl为您想要在重新开始之间等待的退避时间。
  • 设置dlqDeadLetterExchange为默认交换。来自DLQ的过期消息将路由到原始队列,因为默认的deadLetterRoutingKey是队列名称(destination.group)。通过将属性设置为无值来实现设置为默认交换,如下一个示例所示。

 

To force a message to be dead-lettered, either throw an AmqpRejectAndDontRequeueException or set requeueRejected to true (the default) and throw any exception.

 

要将消息强制为死信,请抛出AmqpRejectAndDontRequeueException或设置requeueRejected为true(默认值)并抛出任何异常。

 

The loop continues without end, which is fine for transient problems, but you may want to give up after some number of attempts. Fortunately, RabbitMQ provides the x-death header, which lets you determine how many cycles have occurred.

 

循环继续没有结束,这对于瞬态问题很好,但是你可能想在经过一些尝试后放弃。幸运的是,RabbitMQ提供了x-death header,可以让您确定发生了多少次循环。

 

To acknowledge a message after giving up, throw an ImmediateAcknowledgeAmqpException.

 

放弃后要确认一条消息,抛出ImmediateAcknowledgeAmqpException。

 

Putting it All Together   所有的放在一起

The following configuration creates an exchange myDestination with queue myDestination.consumerGroup bound to a topic exchange with a wildcard routing key #:

 

以下配置创建一个交换器myDestination，并使用通配符路由键#将队列myDestination.consumerGroup绑定到该主题交换器:

 

---
spring.cloud.stream.bindings.input.destination=myDestination
spring.cloud.stream.bindings.input.group=consumerGroup
#disable binder retries
spring.cloud.stream.bindings.input.consumer.max-attempts=1
#dlx/dlq setup
spring.cloud.stream.rabbit.bindings.input.consumer.auto-bind-dlq=true
spring.cloud.stream.rabbit.bindings.input.consumer.dlq-ttl=5000
spring.cloud.stream.rabbit.bindings.input.consumer.dlq-dead-letter-exchange=
---

 

This configuration creates a DLQ bound to a direct exchange (DLX) with a routing key of myDestination.consumerGroup. When messages are rejected, they are routed to the DLQ. After 5 seconds, the message expires and is routed to the original queue by using the queue name as the routing key, as shown in the following example:

 

此配置使用路由密钥为myDestination.consumerGroup创建一个直接绑定到exchange(DLX)的DLQ。当消息被拒绝时,它们将被路由到DLQ。5秒后,消息将过期,并使用队列名称作为路由密钥路由到原始队列,如以下示例所示:

 

Spring Boot application

@SpringBootApplication

@EnableBinding(Sink.class)

public class XDeathApplication {

 

    public static void main(String[] args) {

        SpringApplication.run(XDeathApplication.class, args);

    }

 

    @StreamListener(Sink.INPUT)

    public void listen(String in, @Header(name = "x-death", required = false) Map<?,?> death) {

        if (death != null && death.get("count").equals(3L)) {

            // giving up - don't send to DLX

            throw new ImmediateAcknowledgeAmqpException("Failed after 4 attempts");

        }

        throw new AmqpRejectAndDontRequeueException("failed");

    }

 

}

 

Notice that the count property in the x-death header is a Long.

 

请注意,x-death header中的count属性是Long。

 

17.5. Error Channels   错误管道

 

Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel. See “[binder-error-channels]” for more information.

 

从版本1.3开始,绑定器无条件地为每个消费者目标向错误通道发送异常,并且还可以配置为将异步生成器发送失败发送到错误通道。有关详细信息,请参阅“ [binder-error-channels] ”。

 

RabbitMQ has two types of send failures:

  • Returned messages
  • Negatively acknowledged Publisher Confirms

 

RabbitMQ有两种类型的发送失败:

  • 返回的消息(returned messages)
  • 被否定确认的发布者确认(negatively acknowledged Publisher Confirms)

 

The latter is rare. According to the RabbitMQ documentation "[A nack] will only be delivered if an internal error occurs in the Erlang process responsible for a queue.".

 

后者很少见。根据RabbitMQ文档,“只有在负责队列的Erlang进程中发生内部错误时才会传递[A nack]。”

 

As well as enabling producer error channels (as described in “[binder-error-channels]”), the RabbitMQ binder only sends messages to the channels if the connection factory is appropriately configured, as follows:

  • ccf.setPublisherConfirms(true);
  • ccf.setPublisherReturns(true);

 

除了启用生产者错误通道(如“ [binder-error-channels] ”中所述),如果连接工厂配置正确,RabbitMQ绑定器仅向通道发送消息,如下所示:

  • ccf.setPublisherConfirms(true);
  • ccf.setPublisherReturns(true);

 

When using Spring Boot configuration for the connection factory, set the following properties:

  • spring.rabbitmq.publisher-confirms
  • spring.rabbitmq.publisher-returns

 

将Spring Boot配置用于连接工厂时,请设置以下属性:

  • spring.rabbitmq.publisher-confirms
  • spring.rabbitmq.publisher-returns

 

The payload of the ErrorMessage for a returned message is a ReturnedAmqpMessageException with the following properties:

  • failedMessage: The spring-messaging Message<?> that failed to be sent.
  • amqpMessage: The raw spring-amqp Message.
  • replyCode: An integer value indicating the reason for the failure (for example, 312 - No route).
  • replyText: A text value indicating the reason for the failure (for example, NO_ROUTE).
  • exchange: The exchange to which the message was published.
  • routingKey: The routing key used when the message was published.

 

返回消息的ErrorMessage的负载是ReturnedAmqpMessageException,具有以下属性的:

  • failedMessage:发送失败的spring-messaging Message<?>。
  • amqpMessage:原始的spring-amqp Message。
  • replyCode:一个整数值,指示失败的原因(例如,312 - 无路由)。
  • replyText:指示失败原因的文本值(例如,NO_ROUTE)。
  • exchange:消息发布的交换。
  • routingKey:发布消息时使用的路由密钥。

 

For negatively acknowledged confirmations, the payload is a NackedAmqpMessageException with the following properties:

  • failedMessage: The spring-messaging Message<?> that failed to be sent.
  • nackReason: A reason (if available — you may need to examine the broker logs for more information).

 

对于否定确认的确认,负载是一个NackedAmqpMessageException,具有以下属性:

  • failedMessage:发送失败的spring-messaging Message<?>。
  • nackReason:一个原因(如果可用 - 您可能需要检查代理日志以获取更多信息)。

 

There is no automatic handling of these exceptions (such as sending to a dead-letter queue). You can consume these exceptions with your own Spring Integration flow.

 

没有自动处理这些异常(例如发送到死信队列)。您可以使用自己的Spring Integration流程来使用这些异常。
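As a sketch only (the destination name myDestination is an example, and the channel naming is an assumption based on the error-channel convention referenced above), such a flow could be as simple as a service activator on the producer's error channel:

@ServiceActivator(inputChannel = "myDestination.errors")
public void handleSendFailure(ErrorMessage errorMessage) {
    Throwable cause = errorMessage.getPayload();
    if (cause instanceof ReturnedAmqpMessageException) {
        // e.g. log the failure and decide whether to re-send or park the message
        System.out.println("Returned: " + cause.getMessage());
    }
    else if (cause instanceof NackedAmqpMessageException) {
        System.out.println("Nacked: " + cause.getMessage());
    }
}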

 

17.6. Dead-Letter Queue Processing   死信队列处理

 

Because you cannot anticipate how users would want to dispose of dead-lettered messages, the framework does not provide any standard mechanism to handle them. If the reason for the dead-lettering is transient, you may wish to route the messages back to the original queue. However, if the problem is a permanent issue, that could cause an infinite loop. The following Spring Boot application shows an example of how to route those messages back to the original queue but moves them to a third “parking lot” queue after three attempts. The second example uses the RabbitMQ Delayed Message Exchange to introduce a delay to the re-queued message. In this example, the delay increases for each attempt. These examples use a @RabbitListener to receive messages from the DLQ. You could also use RabbitTemplate.receive() in a batch process.

 

因为您无法预测用户将如何处理死信消息,所以框架不提供任何标准机制来处理它们。如果死信的原因是暂时的,您可能希望将消息路由回原始队列。但是,如果问题是一个永久性问题,那么可能会导致无限循环。以下Spring Boot应用程序显示了如何将这些消息路由回原始队列但在三次尝试后将它们移动到第三个“停车场”队列的示例。第二个示例使用RabbitMQ延迟消息交换为重新排队的消息引入延迟。在此示例中,每次尝试的延迟都会增加。这些示例使用@RabbitListener来接收来自DLQ的消息。您也可以RabbitTemplate.receive()在批处理中使用。

 

The examples assume the original destination is so8400in and the consumer group is so8400.

 

这些示例假设原始目标是so8400in,而消费者组是so8400。

 

Non-Partitioned Destinations   未分区目标

The first two examples are for when the destination is not partitioned:

 

前两个示例适用于目标未分区的情况:

 

@SpringBootApplication

public class ReRouteDlqApplication {

 

    private static final String ORIGINAL_QUEUE = "so8400in.so8400";

 

    private static final String DLQ = ORIGINAL_QUEUE + ".dlq";

 

    private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";

 

    private static final String X_RETRIES_HEADER = "x-retries";

 

    public static void main(String[] args) throws Exception {

        ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);

        System.out.println("Hit enter to terminate");

        System.in.read();

        context.close();

    }

 

    @Autowired

    private RabbitTemplate rabbitTemplate;

 

    @RabbitListener(queues = DLQ)

    public void rePublish(Message failedMessage) {

        Integer retriesHeader = (Integer) failedMessage.getMessageProperties().getHeaders().get(X_RETRIES_HEADER);

        if (retriesHeader == null) {

            retriesHeader = Integer.valueOf(0);

        }

        if (retriesHeader < 3) {

            failedMessage.getMessageProperties().getHeaders().put(X_RETRIES_HEADER, retriesHeader + 1);

            this.rabbitTemplate.send(ORIGINAL_QUEUE, failedMessage);

        }

        else {

            this.rabbitTemplate.send(PARKING_LOT, failedMessage);

        }

    }

 

    @Bean

    public Queue parkingLot() {

        return new Queue(PARKING_LOT);

    }

 

}

 

@SpringBootApplication

public class ReRouteDlqApplication {

 

    private static final String ORIGINAL_QUEUE = "so8400in.so8400";

 

    private static final String DLQ = ORIGINAL_QUEUE + ".dlq";

 

    private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";

 

    private static final String X_RETRIES_HEADER = "x-retries";

 

    private static final String DELAY_EXCHANGE = "dlqReRouter";

 

    public static void main(String[] args) throws Exception {

        ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);

        System.out.println("Hit enter to terminate");

        System.in.read();

        context.close();

    }

 

    @Autowired

    private RabbitTemplate rabbitTemplate;

 

    @RabbitListener(queues = DLQ)

    public void rePublish(Message failedMessage) {

        Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders();

        Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);

        if (retriesHeader == null) {

            retriesHeader = Integer.valueOf(0);

        }

        if (retriesHeader < 3) {

            headers.put(X_RETRIES_HEADER, retriesHeader + 1);

            headers.put("x-delay", 5000 * retriesHeader);

            this.rabbitTemplate.send(DELAY_EXCHANGE, ORIGINAL_QUEUE, failedMessage);

        }

        else {

            this.rabbitTemplate.send(PARKING_LOT, failedMessage);

        }

    }

 

    @Bean

    public DirectExchange delayExchange() {

        DirectExchange exchange = new DirectExchange(DELAY_EXCHANGE);

        exchange.setDelayed(true);

        return exchange;

    }

 

    @Bean

    public Binding bindOriginalToDelay() {

        return BindingBuilder.bind(new Queue(ORIGINAL_QUEUE)).to(delayExchange()).with(ORIGINAL_QUEUE);

    }

 

    @Bean

    public Queue parkingLot() {

        return new Queue(PARKING_LOT);

    }

 

}

 

Partitioned Destinations   已分区目标

 

With partitioned destinations, there is one DLQ for all partitions. We determine the original queue from the headers.

 

对于已分区目标,所有分区都有一个DLQ。我们从headers中确定原始队列。

 

republishToDlq=false

 

When republishToDlq is false, RabbitMQ publishes the message to the DLX/DLQ with an x-death header containing information about the original destination, as shown in the following example:

 

当republishToDlq是false,RabbitMQ使用含有关于原始目的地信息的x-death header将消息发布到DLX/DLQ,如图以下示例:

 

@SpringBootApplication
public class ReRouteDlqApplication {

    private static final String ORIGINAL_QUEUE = "so8400in.so8400";

    private static final String DLQ = ORIGINAL_QUEUE + ".dlq";

    private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";

    private static final String X_DEATH_HEADER = "x-death";

    private static final String X_RETRIES_HEADER = "x-retries";

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);
        System.out.println("Hit enter to terminate");
        System.in.read();
        context.close();
    }

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @SuppressWarnings("unchecked")
    @RabbitListener(queues = DLQ)
    public void rePublish(Message failedMessage) {
        Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders();
        Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
        if (retriesHeader == null) {
            retriesHeader = Integer.valueOf(0);
        }
        if (retriesHeader < 3) {
            headers.put(X_RETRIES_HEADER, retriesHeader + 1);
            List<Map<String, ?>> xDeath = (List<Map<String, ?>>) headers.get(X_DEATH_HEADER);
            String exchange = (String) xDeath.get(0).get("exchange");
            List<String> routingKeys = (List<String>) xDeath.get(0).get("routing-keys");
            this.rabbitTemplate.send(exchange, routingKeys.get(0), failedMessage);
        }
        else {
            this.rabbitTemplate.send(PARKING_LOT, failedMessage);
        }
    }

    @Bean
    public Queue parkingLot() {
        return new Queue(PARKING_LOT);
    }

}

 

republishToDlq=true

 

When republishToDlq is true, the republishing recoverer adds the original exchange and routing key to headers, as shown in the following example:

 

当republishToDlq是true时,重新发布恢复器将原始交换和路由关键添加到headers中,因为显示在下面的例子:

 

@SpringBootApplication
public class ReRouteDlqApplication {

    private static final String ORIGINAL_QUEUE = "so8400in.so8400";

    private static final String DLQ = ORIGINAL_QUEUE + ".dlq";

    private static final String PARKING_LOT = ORIGINAL_QUEUE + ".parkingLot";

    private static final String X_RETRIES_HEADER = "x-retries";

    private static final String X_ORIGINAL_EXCHANGE_HEADER = RepublishMessageRecoverer.X_ORIGINAL_EXCHANGE;

    private static final String X_ORIGINAL_ROUTING_KEY_HEADER = RepublishMessageRecoverer.X_ORIGINAL_ROUTING_KEY;

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(ReRouteDlqApplication.class, args);
        System.out.println("Hit enter to terminate");
        System.in.read();
        context.close();
    }

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @RabbitListener(queues = DLQ)
    public void rePublish(Message failedMessage) {
        Map<String, Object> headers = failedMessage.getMessageProperties().getHeaders();
        Integer retriesHeader = (Integer) headers.get(X_RETRIES_HEADER);
        if (retriesHeader == null) {
            retriesHeader = Integer.valueOf(0);
        }
        if (retriesHeader < 3) {
            headers.put(X_RETRIES_HEADER, retriesHeader + 1);
            String exchange = (String) headers.get(X_ORIGINAL_EXCHANGE_HEADER);
            String originalRoutingKey = (String) headers.get(X_ORIGINAL_ROUTING_KEY_HEADER);
            this.rabbitTemplate.send(exchange, originalRoutingKey, failedMessage);
        }
        else {
            this.rabbitTemplate.send(PARKING_LOT, failedMessage);
        }
    }

    @Bean
    public Queue parkingLot() {
        return new Queue(PARKING_LOT);
    }

}

 

17.7. Partitioning with the RabbitMQ Binder   使用RabbitMQ绑定器进行分区

 

RabbitMQ does not support partitioning natively.

 

RabbitMQ本身不支持分区。

 

Sometimes, it is advantageous to send data to specific partitions — for example, when you want to strictly order message processing, all messages for a particular customer should go to the same partition.

 

有时，将数据发送到特定分区是有利的 - 例如，当您需要严格保证消息处理顺序时，特定客户的所有消息都应该转到同一分区。

 

The RabbitMessageChannelBinder provides partitioning by binding a queue for each partition to the destination exchange.

 

RabbitMessageChannelBinder通过将每个分区的队列绑定到目的地交换提供分区。

 

The following Java and YAML examples show how to configure the producer:

 

以下Java和YAML示例显示如何配置生产者:

 

Producer

 

@SpringBootApplication

@EnableBinding(Source.class)

public class RabbitPartitionProducerApplication {

 

    private static final Random RANDOM = new Random(System.currentTimeMillis());

 

    private static final String[] data = new String[] {

            "abc1", "def1", "qux1",

            "abc2", "def2", "qux2",

            "abc3", "def3", "qux3",

            "abc4", "def4", "qux4",

            };

 

    public static void main(String[] args) {

        new SpringApplicationBuilder(RabbitPartitionProducerApplication.class)

            .web(false)

            .run(args);

    }

 

    @InboundChannelAdapter(channel = Source.OUTPUT, poller = @Poller(fixedRate = "5000"))

    public Message<?> generate() {

        String value = data[RANDOM.nextInt(data.length)];

        System.out.println("Sending: " + value);

        return MessageBuilder.withPayload(value)

                .setHeader("partitionKey", value)

                .build();

    }

 

}

 

application.yml

    spring:
      cloud:
        stream:
          bindings:
            output:
              destination: partitioned.destination
              producer:
                partitioned: true
                partition-key-expression: headers['partitionKey']
                partition-count: 2
                required-groups:
                - myGroup

 

The configuration in the preceding example uses the default partitioning (key.hashCode() % partitionCount). This may or may not provide a suitably balanced algorithm, depending on the key values. You can override this default by using the partitionSelectorExpression or partitionSelectorClass properties.

The required-groups property is required only if you need the consumer queues to be provisioned when the producer is deployed. Otherwise, any messages sent to a partition are lost until the corresponding consumer is deployed.

 

前面示例中的配置使用默认分区(key.hashCode() % partitionCount)。根据键值,这可能会或可能不会提供适当平衡的算法。您可以使用partitionSelectorExpression或partitionSelectorClass属性覆盖此默认值。

仅当您需要在部署生产者时配置消费者队列时,才需要required-groups属性。否则,在部署相应的消费者之前,发送到分区的任何消息都将丢失。

 

The following configuration provisions a topic exchange:

 

以下配置提供了主题交换:

 

 

The following queues are bound to that exchange:

 

以下队列绑定到该交换:

 

 

The following bindings associate the queues to the exchange:

 

以下绑定将队列关联到交换:

 

 

The following Java and YAML examples continue the previous examples and show how to configure the consumer:

 

以下Java和YAML示例继续前面的示例,并说明如何配置消费者:

 

Consumer

@SpringBootApplication

@EnableBinding(Sink.class)

public class RabbitPartitionConsumerApplication {

 

    public static void main(String[] args) {

        new SpringApplicationBuilder(RabbitPartitionConsumerApplication.class)

            .web(false)

            .run(args);

    }

 

    @StreamListener(Sink.INPUT)

    public void listen(@Payload String in, @Header(AmqpHeaders.CONSUMER_QUEUE) String queue) {

        System.out.println(in + " received from queue " + queue);

    }

 

}

 

application.yml

    spring:
      cloud:
        stream:
          bindings:
            input:
              destination: partitioned.destination
              group: myGroup
              consumer:
                partitioned: true
                instance-index: 0

 

The RabbitMessageChannelBinder does not support dynamic scaling. There must be at least one consumer per partition. The consumer’s instanceIndex is used to indicate which partition is consumed. Platforms such as Cloud Foundry can have only one instance with an instanceIndex.

RabbitMessageChannelBinder不支持动态扩展。每个分区必须至少有一个消费者。消费者的instanceIndex用于指示消费了哪个分区。Cloud Foundry等平台只能有一个带有instanceIndex的实例。

 

Appendices

 

Appendix A: Building

 

A.1. Basic Compile and Test   基本编译和测试

 

To build the source you will need to install JDK 1.7.

 

要构建源代码,您需要安装JDK 1.7。

 

The build uses the Maven wrapper so you don’t have to install a specific version of Maven. To enable the tests for Redis, Rabbit, and Kafka bindings you should have those servers running before building. See below for more information on running the servers.

 

构建使用Maven包装器,因此您不必安装特定版本的Maven。要为Redis,Rabbit,和Kafka绑定启用测试,您应该在构建之前运行这些服务器。有关运行服务器的更多信息,请参见下文。

 

The main build command is

 

主构建命令是

 

$ ./mvnw clean install

 

You can also add '-DskipTests' if you like, to avoid running the tests.

 

如果愿意,您还可以添加'-DskipTests',以避免运行测试。
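For example:

$ ./mvnw clean install -DskipTests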

 

You can also install Maven (>=3.3.3) yourself and run the mvn command in place of ./mvnw in the examples below. If you do that you also might need to add -P spring if your local Maven settings do not contain repository declarations for spring pre-release artifacts.

您也可以自己安装Maven(> = 3.3.3)并在下面的示例中运行mvn命令代替./mvnw。如果这样做，当您的本地Maven设置不包含spring pre-release工件的存储库声明时，您可能还需要添加-P spring。

Be aware that you might need to increase the amount of memory available to Maven by setting a MAVEN_OPTS environment variable with a value like -Xmx512m -XX:MaxPermSize=128m. We try to cover this in the .mvn configuration, so if you find you have to do it to make a build succeed, please raise a ticket to get the settings added to source control.

请注意,您可能需要通过设置MAVEN_OPTS值为的环境变量来增加Maven可用的内存量-Xmx512m -XX:MaxPermSize=128m。我们尝试在.mvn配置中介绍这一点,因此如果您发现必须这样做才能使构建成功,请提出一个票证以将设置添加到源代码管理中。

 

The projects that require middleware generally include a docker-compose.yml, so consider using Docker Compose to run the middleware servers in Docker containers. See the README in the scripts demo repository for specific instructions about the common cases of mongo, rabbit and redis.

 

需要中间件的项目通常包括docker-compose.yml,因此请考虑使用 Docker Compose在Docker容器中运行middeware服务器。有关mongo,rabbit,和redis常见情况的具体说明,请参阅脚本演示存储库中的README 。

 

A.2. Documentation

There is a "full" profile that will generate documentation.

 

有一个“完整”的配置文件将生成文档。

 

A.3. Working with the code   使用代码

 

If you don’t have an IDE preference we would recommend that you use Spring Tools Suite or Eclipse when working with the code. We use the m2eclipe eclipse plugin for maven support. Other IDEs and tools should also work without issue.

 

如果您没有IDE首选项,我们建议您在使用代码时使用 Spring Tools Suite或 Eclipse。我们使用 m2eclipe eclipse插件来支持maven。其他IDE和工具也应该没有问题。

 

A.3.1. Importing into eclipse with m2eclipse   使用m2eclipse导入eclipse

 

We recommend the m2eclipe eclipse plugin when working with eclipse. If you don’t already have m2eclipse installed it is available from the "eclipse marketplace".

 

在使用eclipse时,我们建议使用m2eclipe eclipse插件。如果您还没有安装m2eclipse,可以从“eclipse marketplace”获得。

 

Unfortunately m2e does not yet support Maven 3.3, so once the projects are imported into Eclipse you will also need to tell m2eclipse to use the .settings.xml file for the projects. If you do not do this you may see many different errors related to the POMs in the projects. Open your Eclipse preferences, expand the Maven preferences, and select User Settings. In the User Settings field click Browse and navigate to the Spring Cloud project you imported selecting the .settings.xml file in that project. Click Apply and then OK to save the preference changes.

 

不幸的是,m2e还不支持Maven 3.3,所以一旦将项目导入Eclipse,您还需要告诉m2eclipse将.settings.xml文件用于这些项目。如果不这样做,您可能会看到许多与项目中的POM相关的错误。打开Eclipse首选项,展开Maven首选项,然后选择用户设置。在“用户设置”字段中,单击“浏览”并导航到导入的Spring Cloud项目,选择该项目中的.settings.xml文件。单击应用,然后单击确定以保存首选项更改。

 

Alternatively you can copy the repository settings from .settings.xml into your own ~/.m2/settings.xml.

或者,您可以将.settings.xml中的存储库设置复制到您自己的~/.m2/settings.xml中。
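
A minimal sketch of what such an entry in ~/.m2/settings.xml might look like (the repository ids and URLs below are illustrative assumptions, not copied from the project's .settings.xml):

这样的~/.m2/settings.xml条目的最简示意如下(下面的仓库id和URL只是说明性的假设,并非从项目的.settings.xml中复制):

<settings>
  <profiles>
    <profile>
      <id>spring</id>
      <repositories>
        <!-- illustrative Spring milestone/snapshot repositories -->
        <repository>
          <id>spring-milestones</id>
          <url>https://repo.spring.io/milestone</url>
        </repository>
        <repository>
          <id>spring-snapshots</id>
          <url>https://repo.spring.io/snapshot</url>
          <snapshots><enabled>true</enabled></snapshots>
        </repository>
      </repositories>
    </profile>
  </profiles>
  <activeProfiles>
    <activeProfile>spring</activeProfile>
  </activeProfiles>
</settings>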

 

A.3.2. Importing into eclipse without m2eclipse   不使用m2eclipse导入eclipse

 

If you prefer not to use m2eclipse you can generate eclipse project metadata using the following command:

 

如果您不想使用m2eclipse,可以使用以下命令生成eclipse项目元数据:

 

$ ./mvnw eclipse:eclipse

 

The generated eclipse projects can be imported by selecting import existing projects from the file menu.

 

可以通过从file菜单中选择import existing projects导入生成的eclipse项目。

 

Contributing   贡献

 

Spring Cloud is released under the non-restrictive Apache 2.0 license, and follows a very standard Github development process, using Github tracker for issues and merging pull requests into master. If you want to contribute even something trivial please do not hesitate, but follow the guidelines below.

 

Spring Cloud是在非限制性的Apache 2.0许可下发布的,遵循非常标准的Github开发流程:使用Github跟踪器管理问题,并将拉取请求合并到master中。哪怕您想贡献的只是一些微不足道的东西,也请不要犹豫,但请遵循以下指南。

 

A.4. Sign the Contributor License Agreement   签署贡献者许可协议

 

Before we accept a non-trivial patch or pull request we will need you to sign the contributor’s agreement. Signing the contributor’s agreement does not grant anyone commit rights to the main repository, but it does mean that we can accept your contributions, and you will get an author credit if we do. Active contributors might be asked to join the core team, and given the ability to merge pull requests.

 

在我们接受重要的补丁或拉取请求之前,我们需要您签署贡献者协议。签署贡献者协议不会授予任何人对主存储库的提交权限,但这确实意味着我们可以接受您的贡献,而且如果我们接受,您将获得作者署名。活跃的贡献者可能会被邀请加入核心团队,并获得合并拉取请求的权限。

 

A.5. Code Conventions and Housekeeping   代码约定和内务管理

None of these is essential for a pull request, but they will all help. They can also be added after the original pull request but before a merge.

  • Use the Spring Framework code format conventions. If you use Eclipse you can import formatter settings using the eclipse-code-formatter.xml file from the Spring Cloud Build project. If using IntelliJ, you can use the Eclipse Code Formatter Plugin to import the same file.
  • Make sure all new .java files have a simple Javadoc class comment with at least an @author tag identifying you, and preferably at least a paragraph on what the class is for.
  • Add the ASF license header comment to all new .java files (copy from existing files in the project).
  • Add yourself as an @author to the .java files that you modify substantially (more than cosmetic changes).
  • Add some Javadocs and, if you change the namespace, some XSD doc elements.
  • A few unit tests would help a lot as well — someone has to do it.
  • If no-one else is using your branch, please rebase it against the current master (or other target branch in the main project).
  • When writing a commit message, please follow these conventions. If you are fixing an existing issue, please add Fixes gh-XXXX at the end of the commit message (where XXXX is the issue number).

 

这些都不是拉取请求所必需的,但它们都会有所帮助。它们也可以在原始拉取请求之后但在合并之前添加。

  • 使用Spring Framework代码格式约定。如果使用Eclipse,可以使用Spring Cloud Build项目中的eclipse-code-formatter.xml文件导入格式化程序设置。如果使用IntelliJ,可以使用Eclipse Code Formatter Plugin导入同一文件。
  • 确保所有新的.java文件都有一个简单的Javadoc类注释,其中至少有一个标识您的@author标记,最好还有至少一段描述该类用途的说明。
  • 将ASF许可证头注释添加到所有新的.java文件中(从项目中的现有文件复制)。
  • 在您进行实质性修改(而不仅仅是外观上的更改)的.java文件中,将您自己添加为@author。
  • 添加一些Javadoc,如果更改了命名空间,还要添加一些XSD doc元素。
  • 一些单元测试也会有很大帮助 - 总得有人来做。
  • 如果没有其他人在使用您的分支,请将其rebase到当前的master(或主项目中的其他目标分支)。
  • 编写提交消息时,请遵循这些约定;如果要修复现有问题,请在提交消息的末尾添加Fixes gh-XXXX(其中XXXX是问题编号)。

Last updated 2018-07-11 12:49:33 UTC
