High-concurrency solutions for flash-sale (seckill) scenarios

 

 

1. Seckill Architecture Design Concepts:

Traffic limiting: Since only a small fraction of users can succeed in a seckill, most of the traffic should be throttled, letting only a small portion through to the service back end.

Peak shaving: A flash sale draws a huge influx of users at once, so there is a very high instantaneous peak the moment buying opens. Peak traffic is a major cause of system overload, so turning an instantaneous spike into a steady flow spread over a period of time is a key idea when designing a seckill system. Common peak-shaving techniques include caching and message middleware.

Asynchronous processing: A seckill system is a high-concurrency system, and handling requests asynchronously can greatly increase its throughput. In fact, asynchronous processing is one way of implementing peak shaving.

In-memory caching: The biggest bottleneck of a seckill system is usually database reads and writes. Because they involve disk I/O, their performance is low; moving part of the data or business logic into an in-memory cache improves efficiency dramatically.

Scalability: To support more users and higher concurrency, the system is best designed to be elastic and scalable: when traffic surges, simply add machines. During Double Eleven events, platforms such as Taobao and JD.com add large numbers of machines to cope with the transaction peak.

 

2. Concrete Implementation Plan:

2.1 Front-end part:

Browser side (JS):

Page staticization: Make all static elements on the activity page static, minimize dynamic elements, and use a CDN to absorb the peak.

Prevent duplicate submissions: After the user submits, gray out the button so the request cannot be submitted again.

User rate limiting: Allow each user to submit a request only once within a given time window; for example, limiting by IP can be adopted.
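The per-IP rate limiting mentioned above can be sketched server-side as a fixed-window counter. This is a minimal illustration; the class and method names (`IpRateLimiter`, `allow`) are hypothetical, not from the original article, and a production system would more likely use a gateway or Redis-based limiter.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of per-IP rate limiting using a fixed time window.
public class IpRateLimiter {
    private final long windowMillis;   // window length
    private final int maxRequests;     // allowed requests per window per IP
    // For each IP: [windowStartMillis, requestCountInWindow]
    private final Map<String, long[]> counters = new ConcurrentHashMap<>();

    public IpRateLimiter(long windowMillis, int maxRequests) {
        this.windowMillis = windowMillis;
        this.maxRequests = maxRequests;
    }

    // Returns true if a request from this IP is allowed at time nowMillis.
    public synchronized boolean allow(String ip, long nowMillis) {
        long[] state = counters.computeIfAbsent(ip, k -> new long[]{nowMillis, 0});
        if (nowMillis - state[0] >= windowMillis) { // window expired: reset
            state[0] = nowMillis;
            state[1] = 0;
        }
        if (state[1] < maxRequests) {
            state[1]++;
            return true;
        }
        return false; // over the limit: reject
    }

    public static void main(String[] args) {
        IpRateLimiter limiter = new IpRateLimiter(1000, 1); // 1 request/second/IP
        System.out.println(limiter.allow("1.2.3.4", 0));    // true
        System.out.println(limiter.allow("1.2.3.4", 10));   // false, same window
        System.out.println(limiter.allow("1.2.3.4", 2000)); // true, new window
    }
}
```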

 

2.2 Back-end solution

The measures above intercept only part of the requests. When the number of seckill users is large, even if each user sends just one request, the request volume hitting the service layer is still huge. For example, if 1,000,000 (100W) users compete for 100 phones at the same time, the service layer faces at least 1,000,000 concurrent requests.

1. Use a message queue to buffer requests: Since the service layer knows there are only 100 phones in stock, there is no need to pass all 1,000,000 requests through to the database. Instead, write the requests into a message queue first; the database layer subscribes to the messages and decrements inventory. A request whose inventory decrement succeeds returns "seckill success"; otherwise it returns "seckill ended".
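The queue-buffering idea can be sketched with a bounded in-memory queue standing in for the message middleware. All names here (`SeckillQueueDemo`, `submit`, `consumeAll`) are illustrative assumptions; a real system would use a broker such as RocketMQ or Kafka rather than an in-process queue.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a bounded queue stands in for the message queue. Only as many
// requests as there is stock are accepted; the rest fail fast.
public class SeckillQueueDemo {
    static final int STOCK = 3;
    // Bounded queue: once it holds STOCK requests, further offers fail.
    static final BlockingQueue<Long> requestQueue = new ArrayBlockingQueue<>(STOCK);

    // Service layer: try to enqueue; an immediate "seckill ended" if full.
    static boolean submit(long userId) {
        return requestQueue.offer(userId);
    }

    // "Database layer" consumer: drain the queue and decrement inventory.
    static List<Long> consumeAll() {
        AtomicInteger stock = new AtomicInteger(STOCK);
        List<Long> winners = new ArrayList<>();
        Long userId;
        while ((userId = requestQueue.poll()) != null) {
            if (stock.decrementAndGet() >= 0) {
                winners.add(userId); // would persist the order here
            }
        }
        return winners;
    }

    public static void main(String[] args) {
        for (long u = 1; u <= 10; u++) {
            System.out.println("user " + u + " -> "
                    + (submit(u) ? "queued" : "seckill ended"));
        }
        System.out.println("winners: " + consumeAll()); // first 3 users
    }
}
```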

2. Use a cache for read requests: A ticketing service such as 12306 is a typical read-heavy, write-light workload; most requests are queries, so a cache can offload pressure from the database.

3. Use a cache for write requests: The cache can also handle write requests. For example, move the inventory counter from the database into Redis and perform all inventory decrements in Redis; a background process then asynchronously synchronizes the successful seckill requests to the database.
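The decrement-in-cache pattern can be sketched as follows. In production this would be an atomic `DECR` against a real Redis key (e.g. via a Redis client); here an `AtomicInteger` stands in for the Redis counter purely to show the check-after-decrement logic, and the class and method names are hypothetical.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the Redis inventory-decrement pattern. An AtomicInteger stands
// in for the Redis "stock" key; both decrements are atomic operations.
public class RedisStockDemo {
    private final AtomicInteger stock; // stand-in for the Redis stock counter

    public RedisStockDemo(int initialStock) {
        this.stock = new AtomicInteger(initialStock);
    }

    // Mirrors the Redis flow: decrement, then check whether stock went negative.
    public boolean trySeckill(long userId) {
        int left = stock.decrementAndGet(); // atomic, like Redis DECR
        if (left < 0) {
            stock.incrementAndGet();        // roll back the over-decrement
            return false;                   // sold out: seckill ended
        }
        // Success: the (userId, order) record would be queued here for a
        // background process to write back to the database asynchronously.
        return true;
    }
}
```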

 

Specific back-end implementation scheme 1:

A high-concurrency seckill service can be implemented with the Redis distributed cache.

Use the value of the atomic class AtomicInteger (from the Java concurrent package) as the key, and the user ID as the value.

Use the Redis list data type: rpush each incoming user ID onto the tail of the list. Once the seckill resource cap is reached, stop inserting into the list and return immediately.

Then lpop elements from the list; each popped user ID is treated as a seckill winner, for whom inventory deduction, order creation, and similar operations are performed asynchronously.
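Scheme 1's rpush/lpop flow can be sketched like this. A `ConcurrentLinkedQueue` stands in for the Redis list (with real Redis, the `rpush`/`llen`/`lpop` calls would go through a client such as Jedis); note that a separate length check plus push is not atomic, so production code would typically wrap both in a Lua script. Names here are illustrative.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of scheme 1: bounded rpush of user IDs, lpop of winners.
public class RedisListSeckill {
    private final int limit;                                 // seckill resource cap
    private final AtomicInteger size = new AtomicInteger(0); // mirrors LLEN
    private final ConcurrentLinkedQueue<Long> list = new ConcurrentLinkedQueue<>();

    public RedisListSeckill(int limit) {
        this.limit = limit;
    }

    // "rpush" the user ID unless the cap is reached; return whether queued.
    public boolean tryEnqueue(long userId) {
        if (size.incrementAndGet() > limit) {
            size.decrementAndGet();
            return false;      // cap reached: stop inserting, return directly
        }
        list.offer(userId);    // rpush to the tail of the list
        return true;
    }

    // "lpop" one winner; the caller deducts inventory and creates the order
    // asynchronously. Returns null when the list is empty.
    public Long popWinner() {
        return list.poll();
    }
}
```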

 

Specific back-end implementation scheme 2:

Use a message queue: put the requesting users' IDs into the queue, then consume them asynchronously.
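Scheme 2 can be sketched as a producer/consumer pair, with a `LinkedBlockingQueue` standing in for the message broker; in production the producer and consumer would be separate services connected by real middleware. The names and the poison-pill shutdown signal are illustrative assumptions.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of scheme 2: user IDs are enqueued by the request path and a
// separate consumer thread drains and processes them asynchronously.
public class AsyncConsumeDemo {
    static final BlockingQueue<Long> queue = new LinkedBlockingQueue<>();
    static final AtomicInteger processed = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        // Consumer thread: take user IDs and process orders one by one.
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    long userId = queue.take();
                    if (userId < 0) break;       // poison pill: stop consuming
                    processed.incrementAndGet(); // would deduct stock / create order
                }
            } catch (InterruptedException ignored) {
            }
        });
        consumer.start();

        for (long u = 1; u <= 5; u++) queue.put(u); // request path: enqueue IDs
        queue.put(-1L);                             // signal shutdown
        consumer.join();
        System.out.println("processed " + processed.get() + " requests");
    }
}
```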

 

 
