[Redis] Redis Learning Tutorial (10) Using Redis to Implement Message Queuing

Requirements that message queues need to meet:

  1. Ordering: the order in which messages are consumed must match the order in which they were sent; otherwise business errors may occur.
  2. Message confirmation: a message that has already been confirmed as consumed (ACKed) must not be delivered again.
  3. Persistence: messages must be persisted so they are not lost; if a consumer shuts down unexpectedly, it can fetch the unprocessed messages again after restarting.

Redis provides three different ways to implement message queues:

  1. List: simulate a message queue on top of the List structure
  2. Pub/Sub: a publish/subscribe messaging model
  3. Stream: a relatively complete message queue model

1. Based on the List structure

Because a List is implemented as a linked list, operating on elements at its head and tail is O(1), which fits the message queue model very well.

If your business needs are simple enough and you want to use Redis as a queue, the first thing that comes to mind is the List data type.

Commonly used commands:

  • LPUSH: publish a message (push it onto the queue)
  • RPOP: pull a message
  • BRPOP: pull a message, blocking if the queue is empty

The model is very simple: the producer LPUSHes messages onto one end of the list, and the consumer RPOPs them off the other end.
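For example, a minimal redis-cli session (the key name queue is arbitrary):

LPUSH queue msg1
LPUSH queue msg2
RPOP queue      # returns "msg1"
RPOP queue      # returns "msg2"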

When there are no messages in the queue, RPOP returns NULL.

Consumer logic is usually written as an "infinite loop" that continuously pulls messages from the queue and processes them. The pseudocode usually looks like this:

while true:
    msg = redis.rpop("queue")
    // no message, keep looping
    if msg == null:
        continue
    // process the message
    handle(msg)

Problem 1: if the queue is empty, the consumer keeps polling anyway, causing "CPU idling": this not only wastes CPU cycles, it also puts needless pressure on Redis.

How to solve this problem?

When the queue is empty, we can "sleep" for a while and try to pull messages again. The code can be modified like this:

while true:
    msg = redis.rpop("queue")
    // no message: sleep for 2 s
    if msg == null:
        sleep(2)
        continue
    // process the message
    handle(msg)

This solves the CPU idling problem.

Problem 2: but it brings another issue: if a new message arrives while the consumer is sleeping, there is a "delay" before the consumer processes it.

Assuming the sleep time is set to 2 s, a new message may wait up to 2 s before it is processed.

To shorten this delay, you can only reduce the sleep time; but the shorter the sleep, the more likely the CPU idling problem comes back.

Redis does provide blocking commands for pulling messages: BRPOP / BLPOP, where the B stands for Block(ing).

Now, you can pull messages like this:

while true:
    // block waiting if there is no message; 0 means no timeout
    msg = redis.brpop("queue", 0)
    if msg == null:
        continue
    // process the message
    handle(msg)

When pulling messages with BRPOP in blocking mode, you can also pass in a "timeout": if it is set to 0, there is no timeout and the call does not return until a new message arrives; otherwise, NULL is returned once the timeout expires.

Note: if the timeout is set too long and the connection stays inactive, Redis Server may judge it to be a dead connection and forcibly kick the client offline. Therefore, with this solution, the client must have a reconnection mechanism.

Implemented using Jedis: https://blog.csdn.net/jam_yin/article/details/130967040
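A minimal Jedis sketch of this pattern (an illustrative outline, not the linked article's code; the host, port, key name, and timeout are assumptions):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.exceptions.JedisConnectionException;
import java.util.List;

public class ListQueueConsumer {

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            // open (or reopen) a connection; try-with-resources closes it on failure
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                while (true) {
                    // block for up to 5 seconds; returns [key, value] or null on timeout
                    List<String> result = jedis.brpop(5, "queue");
                    if (result == null) {
                        continue; // timed out, poll again
                    }
                    handle(result.get(1));
                }
            } catch (JedisConnectionException e) {
                // the server dropped the connection; wait briefly, then reconnect
                Thread.sleep(1000);
            }
        }
    }

    private static void handle(String msg) {
        System.out.println("consumed: " + msg);
    }
}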

Advantages:

  • Messages are stored in Redis, not limited by JVM memory
  • Redis persistence (RDB/AOF) keeps the data reasonably safe
  • Message ordering is preserved

Disadvantages:

  • No repeated consumption: once a consumer pops a message, it is removed from the List and cannot be consumed again by other consumers; that is, multiple groups of consumers cannot process the same batch of data.
  • Message loss: if the consumer crashes after popping a message, that message is lost, because a POP removes the message from the list immediately, whether or not the consumer processes it successfully, so it can never be consumed again.

2. Based on the Pub/Sub model

[Redis] Redis Learning Tutorial (9) Publishing Pub and Subscribing Sub

Redis provides the following commands to complete publishing and subscribing operations:

  • SUBSCRIBE: subscribe to one or more channels
  • UNSUBSCRIBE: unsubscribe from one or more channels
  • PSUBSCRIBE: subscribe to one or more patterns
  • PUNSUBSCRIBE: unsubscribe from one or more patterns

2.1 Publish and subscribe through channels (Channel)

1. Consumers subscribe to the queue

Use the SUBSCRIBE command to start two consumers that "subscribe" to the same queue.

At this point, both consumers block, waiting for new messages to arrive.

2. The producer publishes a message

3. The consumers unblock and receive the message
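An illustrative redis-cli walkthrough of these three steps (the channel name queue is arbitrary):

# in two separate consumer connections:
SUBSCRIBE queue

# in the producer connection:
PUBLISH queue msg1      # returns 2, the number of subscribers that received it

# each blocked consumer then receives:
1) "message"
2) "queue"
3) "msg1"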

The Pub/Sub solution not only supports blocking message pulls, it also satisfies the requirement that multiple groups of consumers consume the same batch of data.

2.2 Use pattern matching to implement publish and subscribe

1. Consumers subscribe to the queues

The consumer subscribes to all channels matching the pattern queue.*

2. The producer publishes messages

The producer publishes messages to queue.p1 and queue.p2 respectively.

3. The consumer unblocks and receives the messages

The consumer receives the messages published to both channels.
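Illustratively (channel and message names are arbitrary):

# consumer:
PSUBSCRIBE queue.*

# producer:
PUBLISH queue.p1 msg1
PUBLISH queue.p2 msg2

# for each message, the consumer receives the pattern, the concrete channel, and the payload:
1) "pmessage"
2) "queue.*"
3) "queue.p1"
4) "msg1"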

The biggest advantage of Pub/Sub is that it supports multiple groups of producers and consumers; its biggest problem is data loss.

Data may be lost in any of the following scenarios:

  • a consumer goes offline
  • Redis goes down
  • messages pile up (the buffer overflows)

Pub/Sub is implemented very simply. It is not built on any data type and does not store any data; it merely establishes a "data forwarding channel" between producers and consumers and forwards matching data from one end to the other.

A complete publish/subscribe flow looks like this:

  1. When a consumer subscribes to a queue, Redis records a mapping: queue -> consumer
  2. When a producer publishes a message to that queue, Redis looks up the corresponding consumers in the mapping and forwards the message to them

During the entire process, there is no data storage and everything is forwarded in real time.

This design leads to the problems mentioned above. If a consumer crashes, it can only receive new messages after it comes back online; anything the producer published while it was offline is thrown away, because it could not be delivered to the consumer. If all consumers are offline, every message the producer publishes is discarded, because no consumer can be found.

So, when using Pub/Sub, note that consumers must subscribe to the queue before the producer publishes, otherwise messages are lost. Pub/Sub operations are never written to RDB or AOF, so when Redis crashes and restarts, all Pub/Sub data is gone.

Why does Pub/Sub also lose data when dealing with "message backlog"?

When consumers cannot keep up with producers, a data backlog builds up.

With a List used as a queue, a backlog simply makes the linked list longer; the most direct impact is that Redis memory keeps growing until the consumers drain all the data from the list.

Pub/Sub, however, handles this differently: when messages back up, consumption can fail and messages can be lost!

Looking at the implementation details of Pub/Sub: when a consumer subscribes to a queue, Redis allocates a "buffer" for that consumer on the server; the buffer is simply a chunk of memory. When a producer publishes a message, Redis first writes it into each consumer's buffer, and the consumer then reads messages from its buffer and processes them.

However, the problem lies in this buffer.

The buffer actually has a configurable upper limit. If a consumer pulls messages too slowly, the messages the producer publishes back up in the buffer and its memory keeps growing. Once the configured limit is exceeded, Redis "forcibly" kicks the consumer offline; consumption fails and the data is lost.

You can see the default limits in the Redis configuration file:

client-output-buffer-limit pubsub 32mb 8mb 60

  • 32mb: once the buffer exceeds 32 MB, Redis immediately kicks the consumer offline.
  • 8mb + 60: if the buffer stays above 8 MB for 60 seconds, Redis also kicks the consumer offline.

This feature of Pub/Sub is quite different from the list queue: list actually belongs to the "pull" model, while Pub/Sub actually belongs to the "push" model.

  • Data in a List can keep accumulating in memory, and consumers can "pull" it whenever they like.
  • Pub/Sub first "pushes" each message into the consumer's buffer on the Redis Server and then waits for the consumer to take it. When production and consumption speeds do not match, the buffer's memory starts to grow; to cap it, Redis has a mechanism that forcibly kicks consumers offline.

Advantages:

  1. Supports publish/subscribe, with multiple groups of producers and consumers processing messages

Disadvantages:

  1. When a consumer goes offline, data is lost
  2. Persistence is not supported; if Redis goes down, data is lost
  3. When messages pile up and the buffer overflows, the consumer is forced offline and data is lost

3. Stream-based message queue

During the development of Redis, its author also built an open-source project called disque, positioned as an in-memory distributed message queue middleware. For various reasons the project never really took off, so in Redis 5.0 the author ported disque's functionality into Redis and defined a new data type for it: Stream.

A Stream is essentially a key in Redis, and its commands fall into two categories: message queue commands and consumer group commands.

Message queue commands:

  • XADD: add a message to the end of the queue
  • XREAD: read messages (blocking or non-blocking), returning messages with IDs greater than the specified ID
  • XLEN: get the number of messages in the Stream
  • XDEL: delete a message
  • XRANGE: get a list of messages (a range can be specified), ignoring deleted messages
  • XREVRANGE: like XRANGE, but in reverse, with IDs from large to small
  • XTRIM: limit the length of the Stream, trimming it when it grows too long

Consumer group commands:

  • XGROUP CREATE: create a consumer group
  • XREADGROUP: read messages within a consumer group
  • XACK: acknowledge a message, marking it as "processed"
  • XGROUP SETID: set the ID of the last delivered message for a consumer group
  • XGROUP DELCONSUMER: delete a consumer from a consumer group
  • XPENDING: show details of pending (read but unacknowledged) messages
  • XCLAIM: transfer ownership of a message (a message that has gone unprocessed for too long can be claimed by another consumer)
  • XINFO: print details of a Stream, consumer, or group
  • XINFO GROUPS: print consumer group details
  • XINFO STREAM: print Stream details

3.1 Read messages through XREAD command

The command is as follows:

  • XADD: publish a message. XADD key [NOMKSTREAM] [MAXLEN|MINID [= | ~] threshold [LIMIT count]] *|ID field value [field value ...]
    • [NOMKSTREAM]: do not create the queue automatically if it does not exist (by default, XADD creates it)
    • [MAXLEN|MINID [= | ~] threshold [LIMIT count]]: cap the number of messages kept in the queue
    • *|ID: the unique ID of the message; * means the ID is auto-generated by Redis, in the format timestamp-sequence
    • field value [field value ...]: the message entry sent to the queue, as key-value pairs

For example: create a queue named mystream and send the message {"name": "zzc", "age": 26} to it, letting Redis generate the ID automatically:

xadd mystream * name zzc age 26
  • XREAD: read messages. XREAD [COUNT count] [BLOCK milliseconds] STREAMS key [key ...] ID [ID ...]
    • [COUNT count]: the maximum number of messages to read at a time
    • [BLOCK milliseconds]: whether to block when there is no message, and for how long
    • STREAMS key [key ...]: which queue(s) to read from; key is the queue name
    • ID [ID ...]: the starting ID; only messages with greater IDs are returned. 0: start from the first message; $: start from the latest message

For example: read the latest messages from the queue named mystream, one message at a time

XREAD COUNT 1 BLOCK 0 STREAMS mystream $

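An illustrative producer/consumer session (start the consumer first: with $, XREAD only returns messages that arrive after the call starts blocking):

# consumer, blocking with no timeout:
XREAD COUNT 1 BLOCK 0 STREAMS mystream $

# producer, in another connection:
XADD mystream * name zzc age 26     # returns the auto-generated ID, e.g. "1694000000000-0"

# the consumer then unblocks and returns the newly added message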

3.2 Read messages through consumer group commands

Consumer group: multiple consumers grouped together, monitoring the same queue. It has the following characteristics:

  • Message distribution: messages in the queue are divided among the consumers in the group rather than delivered to each of them repeatedly, which speeds up processing.
  • Message ID: the consumer group maintains the ID of the last delivered message; even if a consumer crashes and restarts, reading resumes from that mark, ensuring every message is consumed.
  • Message confirmation: after a consumer reads a message, the message enters the pending state and is stored in a pending-list. Once processing is complete, the consumer must confirm it with ACK to mark it as processed and remove it from the pending-list.

The command is as follows:

  • XGROUP CREATE: create a consumer group. XGROUP CREATE key groupName ID|$ [MKSTREAM]
    • key: queue name
    • groupName: consumer group name
    • ID: starting ID. 0: from the first message; $: from the latest message
    • MKSTREAM: create the queue automatically if it does not exist (by default, the command fails when the key is missing)

Create a consumer group: create the group mystreamGroup on the queue mystream, reading from the first message:

XGROUP CREATE mystream mystreamGroup 0
  • XREADGROUP: read messages via a consumer group. XREADGROUP GROUP group consumer [COUNT count] [BLOCK milliseconds] [NOACK] STREAMS key [key ...] ID [ID ...]
    • group: consumer group name
    • consumer: consumer name; if it does not exist, the consumer is created automatically
    • count: the maximum number of messages for this read
    • milliseconds: the maximum time to wait when there is no message
    • NOACK: no manual ACK needed; messages are confirmed automatically as they are read
    • STREAMS key: the queue name
    • ID: the starting ID. ">": start from the next undelivered message (the usual choice); any other ID: fetch already-delivered but unacknowledged messages from the pending-list starting at that ID, e.g. 0 starts from the first message in the pending-list

Consumer c1 reads one message from the consumer group mystreamGroup on the queue mystream, blocking for up to 2000 milliseconds if none is available:

XREADGROUP GROUP mystreamGroup c1 COUNT 1 BLOCK 2000 STREAMS mystream >

Other commands:

// delete the specified consumer group
XGROUP DESTROY key groupName
// add a consumer to the specified consumer group
XGROUP CREATECONSUMER key groupName consumername
// delete the specified consumer from a consumer group
XGROUP DELCONSUMER key groupName consumername

Producer:

The producer sends two messages:

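For example (the field values are illustrative):

XADD mystream * name zzc age 26
XADD mystream * name zzc2 age 27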

Create a consumer group:

To let two groups of consumers process the same batch of data, you need to create two consumer groups; 0-0 means pulling messages from the very beginning, as shown below.
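For example, creating two groups g1 and g2 (the group names are illustrative):

XGROUP CREATE mystream g1 0-0
XGROUP CREATE mystream g2 0-0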

Consumers:

After the consumer groups are created, we can attach a "consumer" to each "consumer group" so that each group processes the same batch of data independently: the first group starts consuming, then the second group does the same, as shown below.
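Illustratively (the consumer name c1 is arbitrary; each group receives the same two messages, because every group tracks its own last-delivered ID):

# first group:
XREADGROUP GROUP g1 c1 COUNT 2 BLOCK 2000 STREAMS mystream >
# second group:
XREADGROUP GROUP g2 c1 COUNT 2 BLOCK 2000 STREAMS mystream >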

As you can see, both groups of consumers obtain the same batch of data for processing. This achieves "subscription"-style consumption by multiple groups of consumers.

3.2.1 When message processing fails, Stream ensures the message is not lost and can be consumed again

If a consumer reads a message but fails to process it successfully (for example, the consumer process crashes), the message would otherwise be lost, because other consumers in the group are never delivered that message again.

After a group of consumers has processed the message, they need to execute the XACK command to inform Redis. At this time, Redis will mark the message as "processing completed"

  • XPENDING: to avoid losing messages when a consumer crashes after reading a message but before finishing it, Stream maintains a pending-list recording messages that have been read but not acknowledged. XPENDING key group [start end count] [consumer]
    • key: queue name
    • group: consumer group name
    • start: range start; - means the minimum ID
    • end: range end; + means the maximum ID
    • count: number of entries to return
  • XACK: confirm messages that have been read and processed; once acknowledged, a message counts as fully handled and is removed from the pending-list. XACK key group ID [ID ...]

Query the messages that have been delivered but not yet acknowledged, then ACK them:
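Illustratively (the message ID is made up):

# summary of the group's pending entries:
XPENDING mystream mystreamGroup
# detailed entries, up to 10:
XPENDING mystream mystreamGroup - + 10
# acknowledge one message:
XACK mystream mystreamGroup 1694000000000-0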

If the consumer crashes, XACK is never sent, and Redis keeps the message in the pending-list.

When the consumer comes back online, it can read its unacknowledged messages again (for example, XREADGROUP with ID 0 reads from the start of its pending-list). In this way, data is not lost even if a consumer fails.

3.2.2 Code implementation

①: Add the Redis dependency

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

②: Configuration

spring:
  redis:
    host: localhost
    port: 6379
    password:
    timeout: 2000s
    # when lettuce.pool settings are present, the Lettuce connection pool is used
    lettuce:
      pool:
        max-active: 8  # maximum connections in the pool (negative = unlimited), default 8
        max-wait: -1ms # maximum blocking wait time for the pool (negative = unlimited), default -1ms
        max-idle: 8    # maximum idle connections in the pool, default 8
        min-idle: 0    # minimum idle connections in the pool, default 0
  main:
    allow-circular-references: true

redis:
  mq:
    streams:
      # stream key name
      - name: redis:mq:streams:key1
        groups:
          # consumer group name
          - name: group1
            # consumer names
            consumers: group1-con1, group1-con2
      - name: redis:mq:streams:key2
        groups:
          - name: group2
            consumers: group2-con1, group2-con2
      - name: redis:mq:streams:key3
        groups:
          - name: group3
            consumers: group3-con1, group3-con2

Queues, consumer groups, and consumers are configured through configuration files.

③: Redis configuration class

@Slf4j
@Configuration
public class RedisConfig {

    @Resource
    private RedisMqProperties redisMqProperties;

    @Resource
    private RedisStreamUtil redisStreamUtil;

    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(redisConnectionFactory);

        ObjectMapper om = new ObjectMapper();
        om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        om.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
        // JSON serialization config
        Jackson2JsonRedisSerializer<Object> jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer<>(Object.class);
        jackson2JsonRedisSerializer.setObjectMapper(om);
        // String serialization
        StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();
        // use String serialization for all keys
        template.setKeySerializer(stringRedisSerializer);
        // use Jackson serialization for all values
        template.setValueSerializer(jackson2JsonRedisSerializer);
        // use String serialization for hash keys
        template.setHashKeySerializer(stringRedisSerializer);
        // use Jackson serialization for hash values
        template.setHashValueSerializer(jackson2JsonRedisSerializer);
        template.afterPropertiesSet();
        return template;
    }

    @Bean
    public RedisMessageListenerContainer container(RedisConnectionFactory redisConnectionFactory, RedisMessageListener listener, MessageListenerAdapter adapter) {
        RedisMessageListenerContainer container = new RedisMessageListenerContainer();
        // set the connection factory
        container.setConnectionFactory(redisConnectionFactory);
        // all subscriptions must be registered here; new PatternTopic("topic") is the published topic.
        // Multiple message listeners can be added, bound to different channels.
        container.addMessageListener(listener, new PatternTopic("topic1"));
        container.addMessageListener(adapter, new PatternTopic("topic2"));
        // set the serializer: ① both publisher and subscriber must configure serialization;
        // ② this must come after the listeners are registered, otherwise the receiver gets no messages
        Jackson2JsonRedisSerializer seria = new Jackson2JsonRedisSerializer(Object.class);
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        objectMapper.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
        seria.setObjectMapper(objectMapper);
        container.setTopicSerializer(seria);
        return container;
    }

    @Bean
    public MessageListenerAdapter listenerAdapter(PrintMessageReceiver printMessageReceiver) {
        MessageListenerAdapter receiveMessage = new MessageListenerAdapter(printMessageReceiver, "receiveMessage");
        Jackson2JsonRedisSerializer seria = new Jackson2JsonRedisSerializer(Object.class);
        ObjectMapper objectMapper = new ObjectMapper();
        objectMapper.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        objectMapper.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
        seria.setObjectMapper(objectMapper);
        receiveMessage.setSerializer(seria);
        return receiveMessage;
    }

    @Bean
    public List<Subscription> subscription(RedisConnectionFactory factory){
        List<Subscription> resultList = new ArrayList<>();
        AtomicInteger index = new AtomicInteger(1);
        int processors = Runtime.getRuntime().availableProcessors();
        ThreadPoolExecutor executor = new ThreadPoolExecutor(processors, processors, 0, TimeUnit.SECONDS,
                new LinkedBlockingDeque<>(), r -> {
            Thread thread = new Thread(r);
            thread.setName("async-stream-consumer-" + index.getAndIncrement());
            thread.setDaemon(true);
            return thread;
        });
        StreamMessageListenerContainer.StreamMessageListenerContainerOptions<String, MapRecord<String, String, String>> options =
                StreamMessageListenerContainer
                        .StreamMessageListenerContainerOptions
                        .builder()
                        // maximum number of messages fetched per poll
                        .batchSize(5)
                        .executor(executor)
                        .pollTimeout(Duration.ofSeconds(1))
                        .errorHandler(throwable -> log.error("[MQ handler exception]" + throwable.getMessage()))
                        .build();
        for (RedisMqStream redisMqStream : redisMqProperties.getStreams()) {
            String streamName = redisMqStream.getName();
            RedisMqGroup redisMqGroup = redisMqStream.getGroups().get(0);

            initStream(streamName, redisMqGroup.getName());
            var listenerContainer = StreamMessageListenerContainer.create(factory, options);
            // manually ACK messages
            Subscription subscription = listenerContainer.receive(Consumer.from(redisMqGroup.getName(), redisMqGroup.getConsumers()[0]),
                    StreamOffset.create(streamName, ReadOffset.lastConsumed()), new ReportReadMqListener());
            // auto-ACK messages
           /* Subscription subscription = listenerContainer.receiveAutoAck(Consumer.from(redisMqGroup.getName(), redisMqGroup.getConsumers()[0]),
                    StreamOffset.create(streamName, ReadOffset.lastConsumed()), new ReportReadMqListener());*/
            resultList.add(subscription);
            listenerContainer.start();
        }
        ReportReadMqListener.redisStreamUtil = redisStreamUtil;
        return resultList;
    }

    private void initStream(String key, String group) {
        boolean hasKey = redisStreamUtil.hasKey(key);
        if (!hasKey) {
            Map<String, Object> map = new HashMap<>(1);
            map.put("field", "value");
            // create the stream (by adding a placeholder entry)
            String result = redisStreamUtil.addMap(key, map);
            // create the consumer group
            redisStreamUtil.createGroup(key, group);
            // delete the placeholder entry used for initialization
            redisStreamUtil.del(key, result);
            log.info("stream:{}-group:{} initialize success", key, group);
        }
    }

}

④: Java class corresponding to the configuration of the consumer group

RedisMqProperties: all queues

@Data
@Configuration
@EnableConfigurationProperties
@ConfigurationProperties(prefix = "redis.mq")
public class RedisMqProperties {

    // all queues
    public List<RedisMqStream> streams;

}

RedisMqStream:Queue encapsulation class

@Data
public class RedisMqStream {

    // queue name
    public String name;

    // consumer groups
    public List<RedisMqGroup> groups;

}

RedisMqGroup: Consumer Group

@Data
public class RedisMqGroup {

    // consumer group name
    private String name;

    // consumers
    private String[] consumers;

}

⑤: RedisStreamUtil: Tool class for operating Stream

@Component
public class RedisStreamUtil {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    // create a consumer group
    public String createGroup(String key, String group){
        return redisTemplate.opsForStream().createGroup(key, group);
    }

    // get consumer info
    public StreamInfo.XInfoConsumers queryConsumers(String key, String group){
        return redisTemplate.opsForStream().consumers(key, group);
    }

    // get consumer group info
    public StreamInfo.XInfoGroups queryGroups(String key) {
        return redisTemplate.opsForStream().groups(key);
    }

    // add a Map message
    public String addMap(String key, Map<String, Object> value){
        return redisTemplate.opsForStream().add(key, value).getValue();
    }

    // read messages
    public List<MapRecord<String, Object, Object>> read(String key){
        return redisTemplate.opsForStream().read(StreamOffset.fromStart(key));
    }

    // acknowledge consumption
    public Long ack(String key, String group, String... recordIds){
        return redisTemplate.opsForStream().acknowledge(key, group, recordIds);
    }

    // delete messages; when all messages on a node are deleted, the node destroys itself automatically
    public Long del(String key, String... recordIds){
        return redisTemplate.opsForStream().delete(key, recordIds);
    }

    // check whether the key exists
    public boolean hasKey(String key){
        Boolean aBoolean = redisTemplate.hasKey(key);
        return aBoolean != null && aBoolean;
    }
}

⑥:Consumer

@Slf4j
@Component
public class ReportReadMqListener implements StreamListener<String, MapRecord<String, String, String>> {

    public static RedisStreamUtil redisStreamUtil;

    @Override
    public void onMessage(MapRecord<String, String, String> message) {
        // the stream's key
        String streamKey = message.getStream();
        // the message ID
        RecordId recordId = message.getId();
        // the message body
        Map<String, String> msg = message.getValue();
        // TODO: processing logic

        log.info("【streamKey】= " + streamKey + ",【recordId】= " + recordId + ",【msg】=" + msg);
        // once processing is done, ACK and delete the message; group is the consumer group name
        StreamInfo.XInfoGroups xInfoGroups = redisStreamUtil.queryGroups(streamKey);
        xInfoGroups.forEach(xInfoGroup -> redisStreamUtil.ack(streamKey, xInfoGroup.groupName(), recordId.getValue()));
        redisStreamUtil.del(streamKey, recordId.getValue());
    }
}

⑦: Publish a message

@GetMapping("/testStream")
public String testStream() {
    HashMap<String, Object> message = new HashMap<>(2);
    message.put("body", "message body");
    message.put("sendTime", "message send time");
    String streamKey = "redis:mq:streams:key2";
    redisStreamUtil.addMap(streamKey, message);
    return "testStream";
}

4. Summary

Comparing the three approaches:

  • List: preserves ordering and benefits from Redis persistence, and BRPOP gives blocking pulls; but each message can be consumed only once, and it is lost if the consumer crashes after popping it.
  • Pub/Sub: supports multiple groups of producers and consumers; but nothing is persisted, so data is lost when consumers are offline, when Redis restarts, or when the buffer overflows.
  • Stream: the most complete model: messages are persisted, reads can block, consumer groups distribute work, and the pending-list plus XACK ensures messages survive consumer failures.

Origin: blog.csdn.net/sco5282/article/details/132904956