Seckill (flash sale) system technical points

1. The server request pressure problem caused by high concurrency

Since this is a seckill system, many people will be trying to buy the same product at the same moment, so the first requirement is that the site can withstand tens of thousands of visits. It cannot behave like a university course-registration system that collapses and locks everyone out the moment a few thousand students try to grab classes.
To relieve this pressure:
1: Make the front-end page static. On the seckill page, besides the seckill button and the countdown, there are background images, product recommendations and so on; that data should not be requested from the server over and over. When the seckill page refreshes, use partial refresh so that only the latest data is fetched with the smallest possible payload.
2: Rate-limit at the front end. For example, the seckill button can only be submitted once and is then grayed out, which is a simple way to prevent repeated submissions from ordinary users on the page (it cannot stop requests sent directly with tools).
3: Rate-limit the back-end interface. A custom annotation can be used to cache the user's id in Redis together with a request counter, limiting how many times that user may call the interface within a given number of seconds (a sketch of this follows after this list).
4: Cache product data at the back end. When users query product data, going back to the database for every request would certainly overwhelm it, so the product data needs to be cached in Redis (while watching out for Redis cache breakdown, cache penetration and similar issues; a simple read-through cache is sketched after this list).
5: Cluster deployment. In a high-concurrency environment a single application server and a single Redis instance will struggle to meet the demand, so the system needs to be deployed as a cluster, and Redis needs cluster deployment as well.
6: Distributed session. User login state cannot simply be cached in one node's in-process session; Redis can be used as a distributed session store instead (a Spring-based setup is sketched after this list).
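
A minimal sketch of the back-end rate limiting from point 3, assuming Spring Data Redis is available. The class name, key format and limits are illustrative; in the original idea the limits would come from a custom annotation read by an interceptor or AOP aspect.

import java.util.concurrent.TimeUnit;
import org.springframework.data.redis.core.StringRedisTemplate;

// Illustrative limiter: at most maxCount requests per user per time window.
public class SeckillRateLimiter {
    private final StringRedisTemplate redisTemplate;

    public SeckillRateLimiter(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public boolean allowRequest(String userId, String uri, int maxCount, int seconds) {
        String key = "rateLimit:" + uri + ":" + userId;
        Long count = redisTemplate.opsForValue().increment(key); // atomic INCR, creates the key on first call
        if (count != null && count == 1) {
            redisTemplate.expire(key, seconds, TimeUnit.SECONDS); // start the time window on the first request
        }
        return count != null && count <= maxCount; // reject once the quota for this window is used up
    }
}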
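
For point 4, a sketch of reading product data through a Redis cache. GoodsVo and GoodsMapper are placeholders for whatever DTO/DAO the project actually uses; the short-lived empty marker is one common way to soften the cache-penetration problem mentioned above.

import java.util.concurrent.TimeUnit;
import org.springframework.data.redis.core.RedisTemplate;

public class GoodsCache {
    private final RedisTemplate<String, Object> redisTemplate;
    private final GoodsMapper goodsMapper; // placeholder DAO

    public GoodsCache(RedisTemplate<String, Object> redisTemplate, GoodsMapper goodsMapper) {
        this.redisTemplate = redisTemplate;
        this.goodsMapper = goodsMapper;
    }

    public GoodsVo getGoods(Long goodsId) {
        String key = "goods:" + goodsId;
        Object cached = redisTemplate.opsForValue().get(key);
        if (cached instanceof GoodsVo) {
            return (GoodsVo) cached;   // cache hit: the database is not touched
        }
        if (cached != null) {
            return null;               // empty marker: this id is known not to exist
        }
        GoodsVo goods = goodsMapper.selectById(goodsId);
        if (goods == null) {
            // cache a short-lived empty marker so repeated lookups for a missing id
            // (cache penetration) do not all fall through to the database
            redisTemplate.opsForValue().set(key, "EMPTY", 60, TimeUnit.SECONDS);
            return null;
        }
        redisTemplate.opsForValue().set(key, goods, 5, TimeUnit.MINUTES);
        return goods;
    }
}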
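
For point 6, if the project is a Spring application, one common setup is spring-session-data-redis, which moves HttpSession data into Redis so that any node in the cluster can serve a logged-in user. This sketch assumes that dependency is on the classpath.

import org.springframework.context.annotation.Configuration;
import org.springframework.session.data.redis.config.annotation.web.http.EnableRedisHttpSession;

// With the spring-session-data-redis dependency present, this single annotation
// stores session data in Redis instead of in one application node's memory.
@Configuration
@EnableRedisHttpSession
public class SessionConfig {
}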





2. When the seckill time arrives: the problem of many users placing orders at once

The moment the seckill starts is when the number of user requests is at its largest. How should these requests be handled?

1. Traffic peak shaving. What is peak shaving? Under normal circumstances, the traffic of a seckill system looks roughly like this: in the few minutes before the seckill, participants arrive one after another and the request volume gradually climbs; at the moment the seckill starts the traffic reaches its maximum (the peak), and afterwards it slowly falls off.

Peak shaving means keeping the traffic from jumping up in an instant and instead letting it grow within a controlled range. Why does a seckill system need it, or put differently, what is the downside of a sharp peak? The server's processing resources are fixed: capacity is the same whether it is busy or idle, so with a sharp peak the servers are extremely busy for a moment and then sit idle, yet to guarantee service quality the resources have to be provisioned for the busiest moment, which wastes them. For example, if the peak needs a cluster of 6 servers while the follow-up requests could be handled by 3, then 3 servers are idle most of the time. So user requests should be smoothed out as much as possible. Some practical ideas: queuing, answering questions (captchas), and layer-by-layer filtering.

1. Message queues for peak shaving; 2. A traffic peak-shaving funnel: layer-by-layer filtering; 3. Verification codes for peak shaving.

1. Message queues for peak shaving. The easiest solution to think of is to use a message queue to buffer the instantaneous traffic: synchronous direct calls are turned into asynchronous indirect pushes, with a queue in the middle that absorbs the burst of traffic on one end while messages are pushed out smoothly on the other end. This keeps the flow of requests into the rest of the system smooth, but the traffic at the entrance of the message queue is still very large and may overwhelm the queue itself.

2. Traffic peak-shaving funnel: layer-by-layer filtering. Another method suited to the seckill scenario is to filter requests layer by layer so that invalid requests are weeded out. With a few checks the system can judge which requests probably come from scripts, which are repeated or otherwise invalid, which come from abnormal users, and which come from professional bargain hunters ("wool parties"), and filter them out level by level. In effect requests are handled with a "funnel" design, where each layer lets fewer requests through than the one above it.
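
A rough sketch of that layering, reusing names that appear elsewhere in this post (User, RespBean, RespBeanEnum, emptyStockMap). The helper methods (checkCaptcha, checkPath, hasOrdered, preDeductStockAndEnqueue) and the extra RespBeanEnum constants are illustrative only; each check is cheaper than the layer below it, so bad requests are dropped before they reach Redis or the database.

public RespBean doSeckill(User user, Long goodsId, String path, String captcha) {
    if (user == null) {
        return RespBean.error(RespBeanEnum.SESSION_ERROR);   // layer 1: must be logged in
    }
    if (!checkCaptcha(user, goodsId, captcha)) {
        return RespBean.error(RespBeanEnum.ERROR_CAPTCHA);   // layer 2: captcha (point 3 below)
    }
    if (!checkPath(user, goodsId, path)) {
        return RespBean.error(RespBeanEnum.REQUEST_ILLEGAL); // layer 3: dynamic URL must match (see point 5)
    }
    if (Boolean.TRUE.equals(emptyStockMap.get(goodsId))) {
        return RespBean.error(RespBeanEnum.EMPTY_STOCK);     // layer 4: local sold-out flag, no Redis round trip
    }
    if (hasOrdered(user.getId(), goodsId)) {
        return RespBean.error(RespBeanEnum.REPEAT_ERROR);    // layer 5: one order per user
    }
    // only requests that survive every layer reach the Redis stock pre-deduction
    return preDeductStockAndEnqueue(user, goodsId);
}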

 

3. Verification codes. When the user clicks the seckill button, a verification code pops up, and the request is only sent after the code is entered. What is the benefit of this? It takes a user roughly 1-3 s to enter the code, so the requests received by the server are spread over that 1-3 s interval instead of arriving in the same instant.
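
On the server side this could look roughly like the following, assuming the correct answer was written to Redis with a short TTL when the captcha image was generated for this user and goods id; the key format is made up here, and redisTemplate is the same Redis template assumed elsewhere in this post.

// Verify the captcha answer before accepting the seckill request.
public boolean checkCaptcha(User user, Long goodsId, String captcha) {
    if (user == null || captcha == null) {
        return false;
    }
    String key = "captcha:" + user.getId() + ":" + goodsId;
    String expected = (String) redisTemplate.opsForValue().get(key);
    if (expected != null && expected.equals(captcha)) {
        redisTemplate.delete(key);   // each captcha can only be used once
        return true;
    }
    return false;
}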

 

3. Use MQ to place orders asynchronously. A seckill system is a high-concurrency system, and adopting an asynchronous processing model can greatly increase the concurrency it can sustain. Asynchronous processing is, in fact, one way of achieving peak shaving.

For example, after confirming the inventory, the order handler does not call the secKill method directly to generate the order; instead it sends a message to the MQ message queue and lets the queue's consumer process the order asynchronously, roughly as in the sketch below.
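
A hedged sketch of that flow, assuming RabbitMQ through Spring AMQP; the queue name, the SeckillMessage class and seckillService are illustrative. The request thread only enqueues a message, and a listener creates the order at its own pace.

// Producer side (in the seckill controller), after the Redis pre-deduction succeeds:
rabbitTemplate.convertAndSend("seckillQueue", new SeckillMessage(user, goodsId));
// return to the client immediately; it can poll later for the order result

// Consumer side (a separate listener bean) drains the queue smoothly and
// calls the real ordering logic (the secKill method mentioned above):
@RabbitListener(queues = "seckillQueue")
public void receiveSeckillMessage(SeckillMessage message) {
    seckillService.secKill(message.getUser(), message.getGoodsId());
}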

 

4. Solve the oversold problem with Redis's atomic operations

 

Before placing an order, check the number of items cached in Redis; only if stock remains is the inventory deducted and the order placed, as shown in the following code.

The decrement method reads the value of the Redis key and subtracts 1 as one atomic operation (it maps to Redis's DECR command). While one thread is doing the decrement, other threads can never read the same value; every concurrent caller gets a distinct result.

// pre-deduct the stock
Long decrement = valueOperations.decrement("seckillGoods:" + goodsId);
if (decrement < 0) {
    // less than 0 means there is no stock left: mark the goods as sold out in the
    // in-memory map, restore the Redis counter, and reject the request
    emptyStockMap.put(goodsId, true);
    valueOperations.increment("seckillGoods:" + goodsId);
    return RespBean.error(RespBeanEnum.EMPTY_STOCK);
}

When a single Redis instance is used for this kind of check (or as a simple distributed lock), its fundamental limitations must be kept in mind. The following code, for example, is a problematic version: suppose there is exactly one item left. Before any thread has executed the third line (the set), several threads can all execute the first line (the get) and each read the same value, so multiple threads believe there is still 1 item and all go through the ordering flow at the same time, which oversells the product.

Integer o = (Integer) valueOperations.get("seckillGoods:" + goodsId);  // read the current stock
o--;                                                                   // modify it locally
valueOperations.set("seckillGoods:" + goodsId, o);                     // write it back: not atomic with the read
if (o < 0) {
    // less than 0 means there is no stock left: mark the goods as sold out in memory
    emptyStockMap.put(goodsId, true);
    valueOperations.increment("seckillGoods:" + goodsId);
    return RespBean.error(RespBeanEnum.EMPTY_STOCK);
}

 5. Link exposure problem

The URL of the purchase interface should not be fixed. If it is fixed and someone learns the address in advance, they can prepare a script and a scheduled task that fires requests the moment the seckill starts; a script is far faster than a human finger, so the scalpers would very likely carry off every item. The way to solve this is to generate the URL dynamically: each user gets their own unique URL, and when rushing to buy, the client does not call a fixed, publicly known address but first obtains the real, dynamically generated address and then sends the request to it.
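
A sketch of one way to implement this, in line with the description above: issue a random, short-lived path per user and goods, store it in Redis, and have the real ordering endpoint verify it. The key format, TTL and helper names are illustrative (java.util.UUID and TimeUnit are assumed to be imported).

// Step 1: before buying, the client asks for its one-off seckill path.
public String createSeckillPath(User user, Long goodsId) {
    String path = UUID.randomUUID().toString().replace("-", "");
    // valid for 60 seconds and only for this user and goods id
    redisTemplate.opsForValue().set(
            "seckillPath:" + user.getId() + ":" + goodsId, path, 60, TimeUnit.SECONDS);
    return path;
}

// Step 2: the real ordering endpoint only accepts requests whose path
// matches the one that was issued to this user.
public boolean checkPath(User user, Long goodsId, String path) {
    String expected = (String) redisTemplate.opsForValue()
            .get("seckillPath:" + user.getId() + ":" + goodsId);
    return path != null && path.equals(expected);
}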

 
 


Origin blog.csdn.net/qq_45171957/article/details/122408110