Implementing a message queue with Redis (real-time consumption + ack mechanism)

Message queues

First, a brief introduction.

MQ is mainly used for:

  • Decoupling applications

  • Asynchronous messaging

  • Traffic peak shaving and valley filling

The most commonly used implementations today include ActiveMQ, RabbitMQ, ZeroMQ, Kafka, MetaMQ, and RocketMQ.

There are detailed explanations of each of these online, so I won't go into them here. This article only covers how to use Redis to implement a lightweight MQ.

Why use Redis to implement a lightweight MQ?

In real business development, even without heavy traffic, decoupling and asynchrony are needed almost everywhere, and that is exactly what MQ is for. At the same time, MQ is a rather heavy component. If we use RabbitMQ, for example, we have to set up a server for it; if we care about availability, we have to build a cluster; and when something goes wrong in production, we also have to troubleshoot it. For a small or medium-sized business, the rest of the implementation may not carry this much weight, and such a heavyweight component multiplies the workload.

Fortunately, the list data structure provided by Redis is very suitable for message queues.
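As a quick illustration of the idea (a minimal sketch using the Jedis client against a local Redis instance; the key name task_queue is made up for this example), a producer LPUSHes messages onto a list and a consumer RPOPs them off, which gives FIFO behaviour:

import redis.clients.jedis.Jedis;

public class ListQueueDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Producer side: push messages onto the head of the list
            jedis.lpush("task_queue", "task-1");
            jedis.lpush("task_queue", "task-2");

            // Consumer side: pop from the tail, so messages come out in FIFO order
            String task;
            while ((task = jedis.rpop("task_queue")) != null) {
                System.out.println("consumed: " + task);
            }
        }
    }
}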

But how do we achieve real-time consumption? And how do we implement an ack mechanism? These are the keys to the implementation.

How to achieve real-time consumption?

The approach commonly found online is to use Redis's blocking list operations BLPOP or BRPOP, i.e. a blocking pop from the list.

Let's see how blocking pops are used:

BRPOP key [key ...] timeout

The description for this command is:

1. When there are no elements to pop in the given list(s), the connection is blocked by BRPOP until the wait times out or an element becomes available to pop.

2. When multiple keys are given, the lists are checked in the order of the keys, and the tail element of the first non-empty list is popped.

Apart from that, BRPOP behaves the same as BLPOP, except that it pops from the opposite end of the list (a short Jedis example follows the list below).

From this, a blocking list pop has two characteristics:

1. If there are no tasks in the list, the connection will be blocked

2. The blocking of the connection has a timeout
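For reference, here is roughly what a blocking pop looks like with the Jedis client (a sketch; the 5-second timeout and the key name task_queue are arbitrary choices for illustration). A null or empty reply means the timeout expired with nothing to pop:

import java.util.List;
import redis.clients.jedis.Jedis;

public class BlockingPopDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Block for up to 5 seconds waiting for an element on task_queue
            List<String> reply = jedis.brpop(5, "task_queue");
            if (reply == null || reply.isEmpty()) {
                System.out.println("timed out, nothing to pop");
            } else {
                // reply.get(0) is the key, reply.get(1) is the popped element
                System.out.println("popped " + reply.get(1) + " from " + reply.get(0));
            }
        }
    }
}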

The problem is now obvious: how should this timeout be set? Can we guarantee that a worker stays blocked exactly until a message enters the queue? Clearly not, because the timeout and the arrival of a message are not tied to each other.

Fortunately, Redis also supports Pub/Sub (publish/subscribe). When message A is pushed onto the queue list, a notification B is published (PUBLISH) to a channel. A worker subscribed to that channel receives B, knows that message A is now in the list, and can then LPOP or RPOP in a loop to consume the list. The process looks like this:

[Flow diagram: the producer pushes message A onto the list and publishes notification B to the channel; the subscribed worker then pops messages from the list.]
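The article only shows the consumer side below; for completeness, the producer side could look roughly like this (a sketch: the key name, channel name and payload are made up, and in the article's own code they would come from RedisKeyUtil and the channel variable instead):

import redis.clients.jedis.Jedis;

public class FileProducer {

    // Illustrative names; the article's code derives these from RedisKeyUtil and `channel`
    private static final String QUEUE_KEY = "syn:file:queue";
    private static final String CHANNEL = "syn-file-channel";

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Message A: enqueue the work item; rpush pairs with the consumer's lpop for FIFO order
            jedis.rpush(QUEUE_KEY, "new-file-001");
            // Message B: publish a notification so subscribed workers wake up and drain the list
            jedis.publish(CHANNEL, "new-file");
        }
    }
}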

The worker can be a separate thread or an independent service; it acts as both the Consumer and the business processor. An example is given below.

Real-time consumption example

The example scenario: a worker needs to synchronize files, and synchronization should start as soon as a file is generated.

First, start a thread for the worker that subscribes to the channel:

@Service
public class SubscribeService {

    @Resource
    private RedisService redisService;

    @Resource
    private SynListener synListener; // the subscriber

    @PostConstruct
    public void subscribe() {
        new Thread(new Runnable() {
            @Override
            public void run() {
                LogCvt.info("Service subscribed to channel: {}", channel);
                redisService.subscribe(synListener, channel);
            }
        }).start();
    }
}

The SynListener in the code is the declared subscriber, and the channel is the subscribed channel name. The specific subscription logic is as follows:

@Service
public class SynListener extends JedisPubSub {

    @Resource
    private DispatchMessageHandler dispatchMessageHandler;

    @Override
    public void onMessage(String channel, String message) {
        LogCvt.info("channel:{}, received message:{}", channel, message);
        try {
            // handle the business (synchronize files)
            dispatchMessageHandler.synFile();
        } catch (Exception e) {
            LogCvt.error(e.getMessage(), e);
        }
    }
}

When handling the business, messages are consumed from the list:

@Service
public class DispatchMessageHandler {

    @Resource
    private RedisService redisService;

    @Resource
    private MessageHandler messageHandler;

    public void synFile() {
        while (true) {
            try {
                String message = redisService.lpop(RedisKeyUtil.syn_file_queue_key());
                if (null == message) {
                    break;
                }
                Thread.currentThread().setName(Tools.uuid());
                // process the queued data
                messageHandler.synfile(message);
            } catch (Exception e) {
                LogCvt.error(e.getMessage(), e);
            }
        }
    }
}

In this way, we achieve the purpose of real-time consumption of messages.

How to implement the ack mechanism?

ack: the message acknowledgement (Acknowledge) mechanism.

First look at the ack mechanism of RabbitMQ:

  • The Broker delivers a message to the Consumer; when the Consumer has finished processing it, it sends an ACK back to the Broker to confirm that the message has been handled and can be removed from the queue. If no ACK comes back, the Broker considers the processing to have failed and redelivers the message (and subsequent messages) to another Consumer, with the redeliver flag set to true.

  • This acknowledgement mechanism is similar to the one in TCP/IP, except that establishing a TCP/IP connection requires a three-way handshake, while RabbitMQ only needs a single ACK.

  • Note that RabbitMQ redelivers a message to another Consumer if and only if it detects that the Consumer's connection has dropped without the ACK being sent, so there is no need to worry that a message will be redelivered just because it takes a long time to process.

So how should we implement the ack mechanism when using Redis as the message queue?

Two points to note:

  1. If the worker fails to process a message, the message should be rolled back to the original pending queue

  2. If the worker crashes, the message should also be rolled back to the original pending queue

The first point can be handled in the business code: when processing fails, push the message back onto the pending queue.
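In code, that rollback can be as simple as pushing the message back inside the catch block. A minimal sketch (using Jedis directly rather than the article's RedisService wrapper; the key name and the process method are placeholders):

import redis.clients.jedis.Jedis;

public class RollbackOnFailureDemo {

    private static final String PENDING_KEY = "syn:file:queue"; // illustrative key name

    // Drain the pending queue; if processing fails, roll the message back
    public static void drain(Jedis jedis) {
        String message;
        while ((message = jedis.lpop(PENDING_KEY)) != null) {
            try {
                process(message);
            } catch (Exception e) {
                // Point 1: processing failed, so push the message back onto the pending queue
                jedis.rpush(PENDING_KEY, message);
            }
        }
    }

    private static void process(String message) {
        // placeholder for the real business logic (e.g. synchronizing a file)
    }
}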

Implementation plan

(This plan mainly addresses the case where the worker crashes; a code sketch follows the list below.)

  1. Maintain two queues: a pending queue and a doing queue.

  2. The worker is defined as a thread pool (ThreadPool).

  3. After a message is dequeued from the pending queue, the worker assigns a thread to process it: the current timestamp and the current thread name are appended to the message, which is then pushed onto the doing queue.

  4. A scheduled task scans the doing queue at fixed intervals and checks each element's timestamp. If an element has timed out, check with the worker's ThreadPoolExecutor whether the corresponding thread still exists; if it does, cancel the current task and roll back its work. Finally, pop the task from the doing queue and push it back onto the pending queue.

  5. If a worker thread fails while processing the business, it actively rolls the task back itself: it removes the task from the doing queue and pushes it back onto the pending queue.
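Here is a rough sketch of the pending/doing pattern described above (the key names, message tagging format, 60-second timeout and 10-second scan interval are all arbitrary choices; a real worker would also take its connections from a JedisPool instead of sharing one Jedis instance across threads):

import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import redis.clients.jedis.Jedis;

public class AckQueueSketch {

    private static final String PENDING_KEY = "syn:file:pending";
    private static final String DOING_KEY = "syn:file:doing";
    private static final long TIMEOUT_MS = 60_000;

    // Step 3: move a message from pending to doing, tagged with a timestamp and thread name
    public static String take(Jedis jedis) {
        String message = jedis.lpop(PENDING_KEY);
        if (message == null) {
            return null;
        }
        String tagged = System.currentTimeMillis() + "|" + Thread.currentThread().getName() + "|" + message;
        jedis.rpush(DOING_KEY, tagged);
        return tagged;
    }

    // Ack: processing succeeded, so remove the tagged entry from the doing queue
    public static void ack(Jedis jedis, String tagged) {
        jedis.lrem(DOING_KEY, 1, tagged);
    }

    // Step 4: periodically scan the doing queue and roll timed-out entries back to pending
    public static void startTimeoutScanner(Jedis jedis) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            List<String> doing = jedis.lrange(DOING_KEY, 0, -1);
            for (String tagged : doing) {
                String[] parts = tagged.split("\\|", 3);
                long startedAt = Long.parseLong(parts[0]);
                if (System.currentTimeMillis() - startedAt > TIMEOUT_MS) {
                    // Remove from doing and re-enqueue the original message on pending
                    jedis.lrem(DOING_KEY, 1, tagged);
                    jedis.rpush(PENDING_KEY, parts[2]);
                }
            }
        }, 10, 10, TimeUnit.SECONDS);
    }
}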

Summary

Redis has significant limitations as a message queue: given its primary features and intended uses, it can only implement a lightweight MQ. A final thought: there is no absolutely best technology, only the technology that best fits the business. Dedicated to all developers.
