Implementing a priority queue with Redis

Reposted from: https://www.cnblogs.com/nullcc/p/5924244.html

Question: In a high-concurrency messaging application, how do you process requests according to priority?

Answer: use Redis.

Detailed explanation:

There are two key points: the large volume of concurrent requests, and the priority attached to each request.

First, high concurrency. A messaging system's server inevitably receives many requests from clients. These requests are generally asynchronous: the user does not have to wait for the request to be processed. For this kind of demand we need something that can buffer a large number of message requests, and Redis is well suited to the job. The number of messages Redis can buffer is essentially bounded only by available memory, and all we need are the two most basic queue operations, enqueue and dequeue, both of which run in O(1), so performance is very high.

Concretely, Redis provides a list structure, which we can use to build a FIFO (first-in, first-out) queue; all requests wait in this queue to be processed. Redis lists support several common commands, including LPUSH, RPUSH, LPOP and RPOP. To build a FIFO queue we can pair LPUSH with RPOP (or RPUSH with LPOP); note that enqueue and dequeue must operate on opposite ends of the list.
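The FIFO behavior of LPUSH plus RPOP can be sketched as follows. This is a minimal illustration that simulates a single Redis list with a Python deque (no server required); with a real Redis instance you would issue the same commands through a client library such as redis-py. The class name is made up for this example.

```python
from collections import deque

class FakeRedisList:
    """Simulates Redis LPUSH/RPOP on one list key (illustration only)."""
    def __init__(self):
        self.items = deque()

    def lpush(self, value):
        # insert at the head (left end), O(1) -- like LPUSH
        self.items.appendleft(value)

    def rpop(self):
        # remove from the tail (right end), O(1) -- like RPOP
        return self.items.pop() if self.items else None

queue = FakeRedisList()
queue.lpush("req1")   # enqueue on the left
queue.lpush("req2")
print(queue.rpop())   # prints "req1": first in, first out
```

Because we push on one end and pop on the other, the element that entered first leaves first; pushing and popping on the same end (LPUSH + LPOP) would give a stack instead.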

Now the second keyword: request priority. Let's start with the simplest scenario, three priorities: high, medium, and low. We can set up three list structures, say queue_h, queue_m and queue_l, one for each priority. The flow can be written like this:

First, create one list for each of the 3 priorities.

Write side:

1. Based on the request's priority, LPUSH the data onto the corresponding list.

Read side:

1. Poll on a timer, checking the lengths of the high-, medium- and low-priority lists in order (using the LLEN command). If a list's length is greater than 0, that queue has requests that need to be processed immediately.

2. RPOP the data from that list, then process it.

Note that because of the priorities, medium- and low-priority requests are processed only after all high-priority requests have been handled; that is the basic premise.
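The write side and read side above can be sketched together. This is an illustration only: it simulates the three Redis lists with deques and exposes lpush/llen/rpop helpers mirroring the commands named in the text; the queue names queue_h/queue_m/queue_l come from the article, while the enqueue/dequeue function names are assumptions.

```python
from collections import deque

# one simulated Redis list per priority, as in the text
queues = {"queue_h": deque(), "queue_m": deque(), "queue_l": deque()}

def lpush(name, value):             # like LPUSH
    queues[name].appendleft(value)

def llen(name):                     # like LLEN
    return len(queues[name])

def rpop(name):                     # like RPOP
    return queues[name].pop() if queues[name] else None

PRIORITY_TO_QUEUE = {"high": "queue_h", "medium": "queue_m", "low": "queue_l"}

def enqueue(request, priority):
    # write side: LPUSH onto the list matching the request's priority
    lpush(PRIORITY_TO_QUEUE[priority], request)

def dequeue():
    # read side: check lists from high to low priority; a non-empty
    # higher-priority list is always drained before a lower one is read
    for name in ("queue_h", "queue_m", "queue_l"):
        if llen(name) > 0:
            return rpop(name)
    return None                     # all queues empty
```

In a real deployment the read side would run this check in a polling loop (or use a blocking pop such as BRPOP on the highest-priority key first) rather than calling dequeue once.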

Some may ask: what if there are far more than 3 priority levels, say 1000? Creating 1000 lists would be painful. Yet this situation is not impossible; some systems really do have that many priorities.

We can handle this requirement by segmenting the priorities, e.g. 0-99, 100-199, ..., 900-999: first divide the priority range into equal segments, then use a sorted set within each segment. A Redis sorted set keeps its elements ordered by score; it is implemented with a skip list, so inserting an element costs O(log N), which holds up well even with a fairly large amount of data. If there are still too many requests, the priorities can be segmented more finely to reduce the number of elements in each sorted set. When a request arrives, first determine which priority segment it belongs to, then put it into that segment's sorted set. On the processing side, a service traverses the segments from highest priority to lowest and takes the highest-priority request directly (fetching the highest- or lowest-scored element of a sorted set costs O(log N), e.g. via ZPOPMAX or ZPOPMIN).
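The segmented scheme can be sketched as follows. This is a minimal illustration that simulates each segment's sorted set with a Python list kept ordered via bisect; against a real Redis server you would instead ZADD the request with its priority as the score and take the top element with ZPOPMAX. The segment count, range, and function names are assumptions for the example.

```python
import bisect

SEGMENT_SIZE = 100          # assumed: priorities 0-999 split into 10 segments
NUM_SEGMENTS = 10

# each segment stands in for one Redis sorted set: a list of
# (priority, request) tuples kept sorted by priority, as ZADD
# keeps members ordered by score
segments = [[] for _ in range(NUM_SEGMENTS)]

def enqueue(request, priority):
    seg = priority // SEGMENT_SIZE                    # pick the segment
    bisect.insort(segments[seg], (priority, request)) # ordered insert, like ZADD

def dequeue():
    # walk segments from highest to lowest; within a segment the last
    # element has the highest score, so popping it mimics ZPOPMAX
    for seg in reversed(segments):
        if seg:
            priority, request = seg.pop()
            return request
    return None                                       # everything empty
```

Scanning at most 10 segments keeps the per-dequeue overhead small while each sorted set stays short, which is exactly the trade-off the segmentation is meant to buy.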

