Design ideas for a seckill (flash-sale) system under high concurrency

1 Overview

A seckill system is hard to build because a huge number of requests flood in within a very short time, all competing for limited service resources. The resulting load can overwhelm the system, paralyze its services, and even bring it down entirely. This article introduces the pain points of seckill systems and optimization ideas for addressing them.

2 What exactly is a seckill system?

Typical examples include the 12306 Spring Festival train-ticket rush and the timed flash-sale events run by major e-commerce companies, such as Xiaomi's online phone sales. Anyone who has tried to grab a train ticket knows that the moment tickets are released, they may sell out in less than a second.

3 Difficulties of the seckill system

(1) High concurrency within a short window puts heavy load on the system

(2) The contested resource (stock) is limited, so database lock contention is severe

(3) The seckill traffic must not impact other business lines

4 Common Internet Layered Architectures

(1) Client layer: the pages the user operates on mobile or PC; the domain name is resolved via DNS and routed to Nginx

(2) Reverse proxy layer: Nginx typically acts as a reverse proxy, load-balancing client requests across the back-end site services. Nginx itself can also be scaled horizontally to multiple instances, each deployed in a master-slave high-availability setup.

(3) Site layer: the site layer can be horizontally scaled to multiple instances to absorb the high concurrent load generated by client requests. Session information shared across the web servers can be stored centrally in a distributed cache service (Redis, Memcached).

(4) Service layer: the service layer can likewise be horizontally scaled to multiple instances; this is the microservice style that is most popular today.

(5) Database layer: common database-layer deployment patterns include read-write separation and sharding (splitting databases and tables).

5 Architectural principles of the seckill system

(1) Try to intercept requests upstream

For a seckill system, the bottleneck is generally the database layer. Resources are limited: if the database holds 10,000 tickets and 1 million requests arrive in an instant, then 990,000 of those requests are useless. To protect the limited underlying database resources, intercept requests as far upstream as possible.
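The idea of intercepting upstream can be sketched as an admission counter placed in front of the database. This is an illustrative in-process sketch, not the article's implementation; in production the counter would typically be an atomic Redis `INCR`, and the `RequestGate` name and `limit` parameter are assumptions for the example.

```python
import threading

class RequestGate:
    """Upstream interceptor: once `limit` requests have been admitted,
    every later request is rejected immediately, so only a bounded
    number of requests ever reach the database."""

    def __init__(self, limit):
        self.limit = limit
        self._count = 0
        self._lock = threading.Lock()

    def try_admit(self):
        with self._lock:
            if self._count < self.limit:
                self._count += 1
                return True
            return False

# 10,000 tickets, 1,000,000 requests: 990,000 are rejected upstream
# and never touch the database.
gate = RequestGate(limit=10_000)
admitted = sum(gate.try_admit() for _ in range(1_000_000))
print(admitted)  # 10000
```

The gate answers most requests with a cheap in-memory check, which is exactly the "useless requests should die early" principle the section describes.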

(2) Make full use of the cache

Caching not only greatly improves data-access speed, it also absorbs access pressure that would otherwise hit the underlying database. Make full use of caching for read-heavy, write-light business scenarios.
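A read-through cache with a TTL illustrates the point. This is a minimal sketch with an in-memory dict standing in for Redis/Memcached; the function names, the product schema, and the 5-second TTL are assumptions for the example.

```python
import time

db_reads = 0  # counts how often the "database" is actually hit

def load_product_from_db(product_id):
    """Stand-in for a slow database query (hypothetical schema)."""
    global db_reads
    db_reads += 1
    return {"id": product_id, "stock": 100}

_cache = {}          # stand-in for Redis/Memcached
_TTL_SECONDS = 5.0   # entries expire after this long

def get_product(product_id):
    entry = _cache.get(product_id)
    if entry is not None and entry[1] > time.monotonic():
        return entry[0]                       # cache hit
    value = load_product_from_db(product_id)  # miss: read the DB
    _cache[product_id] = (value, time.monotonic() + _TTL_SECONDS)
    return value

for _ in range(1000):
    get_product(42)
print(db_reads)  # 1 -- the other 999 reads were served from cache
```

For a read-heavy workload like product detail pages, the database sees one read per TTL window instead of one per request.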

(3) Hotspot isolation

Business isolation: for example, 12306 sells tickets in different time slots, spreading the hotspot traffic out over time and reducing system load.

System isolation: isolate the seckill system at both the software and hardware level. Beyond software isolation, dedicated hardware can further reduce the risk that the traffic spike endangers other systems.

Data isolation: use a separate cache cluster or database to store the hot data.

6 Optimization plan

(1) On-page optimization, such as:

  • Gray out the button after clicking: prevents users from submitting duplicate requests
  • JS control: allow only one submission within a given time window
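The "one submission per time window" rule would live in browser JS, but the same logic is worth enforcing server-side as well, since JS can be bypassed. Below is a sketch of that rule in Python; the `SubmitThrottle` name is an assumption, and a real deployment would key the check per user in Redis (e.g. `SET key 1 NX EX window`).

```python
import time

class SubmitThrottle:
    """At most one submission per `window` seconds.
    Illustrative sketch of the per-user rate limit."""

    def __init__(self, window):
        self.window = window
        self._last = float("-inf")  # no submission seen yet

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self._last >= self.window:
            self._last = now
            return True
        return False

t = SubmitThrottle(window=1.0)
print(t.allow(now=0.0), t.allow(now=0.5), t.allow(now=1.5))
# True False True
```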

(2) Web server layer optimization, such as:

  • Dynamic/static separation: serve rarely changing static pages directly from Nginx or a CDN, so that only dynamic pages are requested from the web servers
  • Page caching
  • Nginx reverse proxying to scale the web servers horizontally

(3) Back-end service layer optimization

  • Use caches (Redis, Memcached): put read-heavy, write-light business data into the cache. In a seckill, for example, the frequently updated commodity inventory can be kept in Redis

Note: when putting inventory into the Redis cache, it is best to split it into multiple copies stored under different keys. For example, an inventory of 100,000 can be split into 10 copies of 10,000 under 10 different keys. Distributing the data this way spreads contention and yields higher read/write throughput.
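The key-splitting note can be sketched as follows. This is a minimal single-process sketch with a dict standing in for Redis (where the decrement would be an atomic `DECR`); the key naming scheme `stock:{i}` and the random-shard-first strategy are assumptions for the example.

```python
import random

SHARDS = 10

def init_stock(store, total):
    """Split `total` stock evenly across SHARDS counter keys."""
    per_shard = total // SHARDS
    for i in range(SHARDS):
        store[f"stock:{i}"] = per_shard

def try_deduct(store):
    """Pick a random shard first (spreads contention across keys),
    then scan the rest so a sale only fails when every shard is empty."""
    start = random.randrange(SHARDS)
    for offset in range(SHARDS):
        key = f"stock:{(start + offset) % SHARDS}"
        if store[key] > 0:
            store[key] -= 1   # would be DECR on real Redis
            return True
    return False

store = {}  # stand-in for a Redis instance
init_stock(store, 100_000)
sold = sum(try_deduct(store) for _ in range(100_000))
print(sold, try_deduct(store))  # 100000 False
```

Because each request usually touches only one of the ten keys, hot-key contention on a single counter is avoided while the total stock is still never oversold.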

  • Queue processing: put requests into a queue and drain them to the underlying DB at a controllable rate
  • Asynchronous processing: for example, send the order notifications for successful seckill purchases asynchronously through a message queue (RabbitMQ, Kafka)
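The queue idea can be sketched with Python's standard-library `queue.Queue` standing in for a real message broker such as RabbitMQ or Kafka; the worker and field names are assumptions for the example. Producers enqueue orders instantly, while a single consumer drains them at a rate the database can sustain.

```python
import queue
import threading

order_queue = queue.Queue(maxsize=1000)  # bounded: gives back-pressure
processed = []

def db_worker():
    """Single consumer draining the queue, so the DB sees writes at a
    controlled rate regardless of how many requests arrive at once."""
    while True:
        order = order_queue.get()
        if order is None:          # sentinel: shut down
            break
        processed.append(order)    # stand-in for the actual DB write
        order_queue.task_done()

t = threading.Thread(target=db_worker)
t.start()
for i in range(100):               # burst of 100 incoming orders
    order_queue.put({"order_id": i})
order_queue.join()                 # wait until every order is written
order_queue.put(None)
t.join()
print(len(processed))  # 100
```

The bounded `maxsize` is the important design choice: when the queue is full, `put()` blocks (or fails fast with `put_nowait`), pushing back on the upstream layers instead of letting unbounded work pile up in memory.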

(4) DB layer optimization

  • Read-write separation
  • Sharding (splitting tables and databases)
  • Database clustering
