Thinking about flash sales and rate limiting with the help of Redis


Recently, a group chat turned to flash sales (seckill) and rate limiting. I have not built such an application myself, but I have dealt with fairly large data volumes and concurrency in my work.

So a simple model is proposed:

var count = rds.inc(key);

if(count > 1000) throw "Sold out!";

Thanks to Redis's single-threaded model, its INCR is safe: it is guaranteed to add exactly one each time and return the incremented result. If the value was 234, incrementing makes it 235, and 235 is what gets returned. No other request can interleave in the middle and cause 236 or anything else to come back.

In effect, the job of INCR is to hand out queue tickets: each request claims one slot. Once a request holds its ticket number, it can check whether the quota has been exceeded, then produce the seckill result at the business layer, or even run more complex logic on top.
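As a concrete illustration of this queue-ticket model, here is a Python sketch (the original pseudocode is C#-flavored). `FakeRedis` is a hypothetical in-process stand-in whose lock mimics Redis's single-threaded atomicity; real code would call `incr` on an actual Redis client.

```python
import threading

# Hypothetical in-process stand-in for Redis INCR; the lock mimics
# the atomicity Redis gets from its single-threaded command loop.
class FakeRedis:
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def incr(self, key):
        # Like Redis INCR: atomically add 1 and return the new value.
        with self._lock:
            self._data[key] = self._data.get(key, 0) + 1
            return self._data[key]

LIMIT = 1000
rds = FakeRedis()
winners = []
winners_lock = threading.Lock()

def try_buy():
    ticket = rds.incr("sale")   # claim a queue ticket
    if ticket > LIMIT:          # over quota: sold out
        return
    with winners_lock:
        winners.append(ticket)

threads = [threading.Thread(target=try_buy) for _ in range(3000)]
for t in threads: t.start()
for t in threads: t.join()

print(len(winners))  # exactly LIMIT requests win tickets 1..1000
```

Because the counter never decreases here, ticket numbers are unique, and exactly the first 1000 tickets win regardless of how the 3000 threads interleave.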

The rate limiting mentioned by Six in the group likely rests on one consideration: the count for a given key should be capped at around 1000, with a deviation of about 1% being acceptable.

So there is an improved model:

var count = rds.inc(key);

if(count > 1000){

    rds.dec(key);

    throw "Exceeded limit!";

}

Only one line was added: once the limit is exceeded, the ticket is handed back. ^_^
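The decrement-back variant can be sketched the same way in Python. `FakeRedis` is again a hypothetical lock-based stand-in for Redis INCR/DECR. Note the "around 1000, ~1% deviation" point from above: in-flight over-limit requests can transiently inflate the counter, so the number of successes may fall slightly short of the limit, but it never exceeds it, and the counter settles back to exactly the success count.

```python
import threading

# Hypothetical stand-in for Redis INCR/DECR (real code would call
# incr/decr on a Redis client); the lock mimics Redis's atomicity.
class FakeRedis:
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def incr(self, key):
        with self._lock:
            self._data[key] = self._data.get(key, 0) + 1
            return self._data[key]

    def decr(self, key):
        with self._lock:
            self._data[key] = self._data.get(key, 0) - 1
            return self._data[key]

LIMIT = 1000
rds = FakeRedis()
successes = []
s_lock = threading.Lock()

def try_buy():
    count = rds.incr("sale")
    if count > LIMIT:
        rds.decr("sale")       # over the limit: give the ticket back
        return
    with s_lock:
        successes.append(count)

threads = [threading.Thread(target=try_buy) for _ in range(3000)]
for t in threads: t.start()
for t in threads: t.join()

# The counter settles at the number of successes, never above LIMIT.
print(rds._data["sale"], len(successes))
```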

 

Using Redis has a clear advantage here: many application servers can contend against the same counter together.

Of course, for very large flash sales this model is not necessarily adequate. Say you are selling 100,000 phones and 3 million users show up: they all pile onto one counter in the same instant.

Here is a workaround you can try: prepare 10 Redis instances, each holding a quota of 10,000. Each incoming request picks a random number, or hashes the user ID and takes it modulo 10, to select which instance to contend on.
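A quick Python sketch of the shard-selection step (the hash-modulo scheme and the use of MD5 are illustrative assumptions, not prescribed by the text), showing how evenly 100,000 user IDs spread over 10 shards:

```python
import hashlib

NUM_SHARDS = 10        # 10 Redis instances, each holding quota 10,000

def pick_shard(user_id):
    # Hash the user id and take it modulo the shard count
    # (MD5 chosen here purely for illustration).
    h = hashlib.md5(str(user_id).encode()).hexdigest()
    return int(h, 16) % NUM_SHARDS

counts = [0] * NUM_SHARDS
for uid in range(100_000):
    counts[pick_shard(uid)] += 1

print(counts)  # each shard lands close to 10,000
```

With large request volumes the per-shard counts stay close to the 10,000 average, which is exactly the statistical balance the paragraph above relies on.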

The same sharding idea applies directly to scenarios with even more users. In general, when the volume is large, both random selection and hashing are statistically well balanced.

 

The above is a simple scheme for large-scale flash sales, but what about the small-data case, say only tens of thousands of concurrent requests?

In small-data scenarios with a single application instance, you can consider doing without Redis entirely.

Basic model:

var n = Interlocked.Increment(ref count);

if(n > 1000) throw "Sold out!";

Intermediate model:

private volatile Int32 count;

var old = 0;

do {

    old = count;

    if(old >= 1000) throw "Sold out!";

}while(Interlocked.CompareExchange(ref count, old + 1, old) != old);

CAS atomic operations are a fine thing. The x86 instruction set has a dedicated instruction, CMPXCHG, that guarantees the compare-and-exchange is atomic at the processor level. For most systems, crossing the 100,000 tps threshold and approaching 1 million tps requires going lock-free, and CAS is the simplest lock-free primitive to understand. It occasionally suffers from the ABA problem, but many solutions to that are well known.
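The CAS loop above can be mirrored in Python for illustration. Python has no user-level CAS primitive, so the `AtomicInt` class below is a hypothetical stand-in whose lock plays the role of the CPU's CMPXCHG; the loop structure is the same as the C# `Interlocked.CompareExchange` version.

```python
import threading

class AtomicInt:
    # Minimal compare-and-swap cell. The lock stands in for the
    # hardware CMPXCHG instruction; the *callers* remain lock-free
    # in structure, retrying on CAS failure.
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_exchange(self, expected, new):
        # Returns the value seen; the swap happened iff it == expected.
        with self._lock:
            seen = self._value
            if seen == expected:
                self._value = new
            return seen

LIMIT = 1000
count = AtomicInt(0)

def try_take():
    while True:
        old = count.load()
        if old >= LIMIT:
            return False                       # sold out
        if count.compare_exchange(old, old + 1) == old:
            return True                        # won the CAS race

results, res_lock = [], threading.Lock()

def worker():
    ok = try_take()
    with res_lock:
        results.append(ok)

threads = [threading.Thread(target=worker) for _ in range(2500)]
for t in threads: t.start()
for t in threads: t.join()

print(sum(results))  # exactly LIMIT winners, count never overshoots
```

Unlike the increment-then-check model, the CAS loop never lets the counter exceed the limit even transiently, which is why it needs no decrement-back step.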

 

Real-world scenarios may impose more complex requirements, but that is another matter. Here we have only covered a few simple, easy-to-use models.
