Spring Cloud Distributed Messaging: Spring Cloud Stream Custom Channels, Grouping, and Partitioning

In Spring Cloud Distributed Messaging—Introduction and Getting Started with Spring Cloud Stream, we briefly introduced Spring Cloud Stream and built a simple example using the default input and output channels it provides. In this article, we will use custom channels as an example to introduce more advanced applications of Spring Cloud Stream. If you are not familiar with Spring Cloud Stream, you can read that introductory article first.

In the previous article, we introduced the two annotations @Input and @Output, which define input and output channels respectively. The Sink and Source interfaces provided by Spring Cloud Stream use these two annotations, and we use the same annotations to define custom channels. For example, the following code defines an input channel and an output channel for logs:

// Define the log input channel
public interface LogSink {
    String INPUT = "logInput";

    @Input(INPUT)
    SubscribableChannel input();
}

// Define the log output channel
public interface LogSource {
    String OUTPUT = "logOutput";

    @Output(OUTPUT)
    MessageChannel output();
}

After writing the channel interfaces, we need to configure them in the configuration file: bind each channel to a binder, and bind the binder to the message middleware. The configuration for the channels above is as follows:

spring:
  cloud:
    stream:
      bindings: # binds channels to binders
        logInput: # channel name; if @Input does not specify one, it defaults to the method name
          destination: log # where to receive messages from: a topic or queue name (an exchange in RabbitMQ)
          binder: logBinder # bind to the binder named logBinder
        logOutput: # channel name; if @Output does not specify one, it defaults to the method name
          destination: log # where to send messages to: a topic or queue name (an exchange in RabbitMQ)
          binder: logBinder # bind to the binder named logBinder
      binders: # binder configuration
        logBinder: # the binder named logBinder
          type: rabbit # the binder type is RabbitMQ
          environment: # runtime environment for this binder
            spring:
              rabbitmq:
                host: 10.0.10.63  # host
                port: 5672        # port
                username: guest   # username
                password: guest   # password
server:
  port: 8092

Then use the @EnableBinding annotation to bind the channel interface. The message-receiving code is shown below. The code for sending the message is not shown here: you only need @EnableBinding(LogSource.class), and can then inject LogSource with the @Autowired annotation.

@EnableBinding(LogSink.class)
public class LogReceiver {
    private static Logger logger = LoggerFactory.getLogger(LogReceiver.class);

    @StreamListener(LogSink.INPUT)
    public void receive(String payload) {
        logger.info("Received: " + payload);
    }
}
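As a complement, the sending side can be sketched as follows. This is a minimal illustration; the LogSender class and its send method are hypothetical names, not from the original article:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.messaging.support.MessageBuilder;

// Illustrative sender: binds the LogSource interface and writes to its output channel.
@EnableBinding(LogSource.class)
public class LogSender {

    @Autowired
    private LogSource logSource;

    public void send(String logLine) {
        // Wrap the payload in a Message and send it through the bound output channel
        logSource.output().send(MessageBuilder.withPayload(logLine).build());
    }
}
```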

In the code above, we defined the input channel and the output channel in separate interfaces. If there are many channels, this means defining many interfaces, so we can instead define multiple channels in a single interface; @EnableBinding then only needs to bind that one interface. For the code below, we only need @EnableBinding(LogChannel.class), and inject the LogChannel interface with @Autowired when sending messages.

public interface LogChannel {
    String INPUT = "service1logInput";

    @Input(INPUT)
    SubscribableChannel service1input();

    String INPUT2 = "service2logInput";

    @Input(INPUT2)
    SubscribableChannel service2input();

    String OUTPUT = "service1logOutput";

    @Output(OUTPUT)
    MessageChannel service1logOutput();

    String OUTPUT2 = "service2logOutput";

    @Output(OUTPUT2)
    MessageChannel service2logOutput();
}

Spring Cloud Stream is built on the concepts and patterns defined by Enterprise Integration Patterns, and its internal implementation relies on the established and popular implementation of those patterns in the Spring project portfolio: the Spring Integration framework. It therefore naturally supports the foundations, semantics, and configuration options of Spring Integration. For example, you can attach a Source's output channel to a MessageSource with @InboundChannelAdapter. Similarly, you can use @Transformer or @ServiceActivator when implementing the message-handler methods of the Processor binding contract. Code examples follow:

@EnableBinding(Source.class)
public class TimerSource {
  @Bean
  @InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "10", maxMessagesPerPoll = "1"))
  public MessageSource<String> timerMessageSource() {
    return () -> new GenericMessage<>("Hello Spring Cloud Stream");
  }
}

@EnableBinding(Processor.class)
public class TransformProcessor {
  @Transformer(inputChannel = Processor.INPUT, outputChannel = Processor.OUTPUT)
  public Object transform(String message) {
    return message.toUpperCase();
  }
}

Regarding the @StreamListener annotation: it is Spring Cloud Stream's complement to Spring Integration. Like other Spring Messaging annotations (such as @JmsListener), it provides features such as routing. A method annotated with @StreamListener may return a value, but you must then use the @SendTo annotation to specify the output destination for the returned data, as in the following code:

@EnableBinding(Processor.class)
public class TransformProcessor {
  @Autowired
  VotingService votingService;
  @StreamListener(Processor.INPUT)
  @SendTo(Processor.OUTPUT)
  public VoteResult handle(Vote vote) {
    return votingService.record(vote);
  }
}

Spring Cloud Stream supports dispatching messages to multiple handler methods based on conditions declared on the @StreamListener annotation. These methods cannot have return values and must be individual message-handling methods. We can pass a SpEL expression to the condition attribute of @StreamListener, and only the handler whose expression matches will be invoked. A code example follows:

@EnableBinding(Sink.class)
@EnableAutoConfiguration
public static class TestPojoWithAnnotatedArguments {
    @StreamListener(target = Sink.INPUT, condition = "headers['type']=='bogey'")
    public void receiveBogey(@Payload BogeyPojo bogeyPojo) {
       // handle the message
    }
    @StreamListener(target = Sink.INPUT, condition = "headers['type']=='bacall'")
    public void receiveBacall(@Payload BacallPojo bacallPojo) {
       // handle the message
    }
}
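For these conditions to match, the producer must set the type header when sending. A minimal sketch of such a sender; the TypedSender class is a hypothetical name, and it assumes the BogeyPojo type from the example above:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.support.MessageBuilder;

// Illustrative sender: sets the "type" header that the SpEL conditions above inspect.
@EnableBinding(Source.class)
public class TypedSender {

    @Autowired
    private Source source;

    public void sendBogey(BogeyPojo bogey) {
        // headers['type']=='bogey' will route this message to receiveBogey
        source.output().send(
            MessageBuilder.withPayload(bogey)
                          .setHeader("type", "bogey")
                          .build());
    }
}
```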

The publish-subscribe model makes it easy for applications to interact through shared topics, but the ability to scale out by creating multiple instances of a given application is equally important. When doing so, the instances of the application are placed in a competing-consumer relationship in which only one instance should process a given message. Spring Cloud Stream models this with the concept of consumer groups: you configure a group with spring.cloud.stream.bindings.<channelName>.group, for example spring.cloud.stream.bindings.<channelName>.group=hdfsWrite or spring.cloud.stream.bindings.<channelName>.group=average.

All groups that subscribe to a given destination receive a copy of the published data, but only one member of each group receives a given message from that destination. By default, when no group is specified, Spring Cloud Stream assigns the application to an anonymous, independent single-member consumer group that is in a publish-subscribe relationship with all other consumer groups.
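In a YAML configuration file, the same group setting looks like this (the channel and group names are illustrative, reusing the logInput channel from earlier):

```yaml
spring:
  cloud:
    stream:
      bindings:
        logInput:
          destination: log
          group: hdfsWrite   # all instances sharing this group compete for each message
```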

In addition to grouping, Spring Cloud Stream supports partitioning data between multiple instances of a given application. In a partitioned scenario, the physical communication medium (such as a broker topic) is viewed as being structured into multiple partitions. One or more producer application instances send data to multiple consumer application instances, ensuring that data identified by common characteristics is processed by the same consumer instance. Spring Cloud Stream provides a common abstraction for implementing partitioned processing use cases in a uniform way, so partitioning can be used whether the broker is naturally partitioned (for example, Kafka) or not (for example, RabbitMQ). The partition structure and required configuration are as follows:

# Producer configuration
# The SpEL expression used to derive the partition key
spring.cloud.stream.bindings.<channel-name>.producer.partitionKeyExpression=payload
# The number of message partitions
spring.cloud.stream.bindings.<channel-name>.producer.partitionCount=2

# Consumer configuration
# Enable partitioning on the consumer side
spring.cloud.stream.bindings.<channel-name>.consumer.partitioned=true
# The total number of consumer instances
spring.cloud.stream.instanceCount=2
# The index of the current instance, starting from 0
spring.cloud.stream.instanceIndex=1
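To see why the same key always lands on the same consumer instance: with the default strategy, the target partition is derived roughly as the key's hash code modulo partitionCount. The following plain-Java sketch illustrates that idea only; PartitionSelectorSketch is a hypothetical class, not Spring Cloud Stream's actual implementation:

```java
// Sketch of default-style partition selection: the same key always
// maps to the same partition index, so one consumer instance handles
// all messages carrying that key.
public class PartitionSelectorSketch {

    public static int selectPartition(Object key, int partitionCount) {
        // Mask the sign bit so the result is always non-negative
        return (key.hashCode() & Integer.MAX_VALUE) % partitionCount;
    }

    public static void main(String[] args) {
        // With partitionCount=2, repeated calls with the same payload
        // yield the same partition index.
        System.out.println(selectPartition("user-42", 2));
        System.out.println(selectPartition("user-42", 2));
    }
}
```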

This article introduced custom channel names in Spring Cloud Stream, as well as advanced applications such as grouping, partitioning, and routing. In the next article, we will cover Spring Cloud Stream exception handling and some advanced configuration.

Origin blog.csdn.net/wk19920726/article/details/108397676