A series of questions about the seckill system

How do you implement a blocking queue? How do you handle the oversold problem? How does the whole flow fit together?

Design a seckill system
Features: high concurrency; the number of requests far exceeds the inventory, so only a few can succeed; the logic is relatively simple: place the order and decrement the inventory.
Design concepts: **traffic limiting:** only a small fraction of the traffic should reach the backend; **peak clipping:** convert the instantaneous traffic spike into a smooth flow (e.g. via asynchronous processing); **in-memory caching:** the biggest bottleneck of a seckill system is usually database reads and writes, which are disk I/O and therefore slow, so moving data or business logic into an in-memory cache greatly improves efficiency; **distributed processing.**
Process: front-end seckill page → server controller (gateway) → service layer → database layer.
What the front-end (browser) can do: make page elements static wherever possible (static content never touches the server) and serve them via CDN to absorb the peak; prohibit duplicate submission: after the user submits, gray out the button; user rate limiting: allow each user only one request within a given window, e.g. by limiting per IP.
The server controller layer (gateway layer)
Restrict the access frequency per uid (user ID): the browser-level interception above can be bypassed by malicious attackers or plug-ins, so the server control layer must also limit the visit frequency of each uid.
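As a sketch of the per-uid frequency limit (the limit of one request per 10 seconds is illustrative, not from the original), a fixed-window counter at the gateway might look like:

```python
import time
from collections import defaultdict

class UidRateLimiter:
    """Allow at most `limit` requests per uid within each `window`-second window."""
    def __init__(self, limit=1, window=10):
        self.limit = limit
        self.window = window
        self.counters = defaultdict(lambda: [0, 0.0])  # uid -> [count, window_start]

    def allow(self, uid, now=None):
        now = time.time() if now is None else now
        count, start = self.counters[uid]
        if now - start >= self.window:          # new window: reset the counter
            self.counters[uid] = [1, now]
            return True
        if count < self.limit:                  # still within this window's quota
            self.counters[uid][0] += 1
            return True
        return False                            # over quota: reject the request
```

A production gateway would keep these counters in Redis so all instances share them; this in-memory version only shows the windowing logic.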

Service layer
1. Use a message queue to buffer requests: since the service layer knows there are only, say, 100 phones in stock, there is no need to pass all 1,000,000 requests to the database. Write the requests into a message queue first; the database layer subscribes to the queue and decrements the inventory. If the decrement succeeds, return "seckill succeeded"; if it fails, return "seckill ended".
2. Use a cache for read requests: a ticketing service like 12306 is a typical read-heavy, write-light business; most requests are queries, so a cache can take much of the pressure off the database.
3. **Use a cache for write requests:** the cache can also absorb writes. For example, move the inventory counter from the database into Redis, perform all inventory decrements in Redis, and let a background process synchronize the successful seckill requests from Redis back to the database. (Redis is a non-relational store that keeps data in memory, so it reads and writes much faster.)
A seckill system has enormous concurrency but very few successful requests, so if traffic is not intercepted up front, the database may suffer read/write lock conflicts, even deadlocks, and requests will eventually time out.
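The queue-buffering idea in point 1 can be sketched in-process, with `queue.Queue` standing in for a real message broker and a single consumer playing the role of the database layer (the request and stock counts are illustrative):

```python
import queue
import threading

def run_seckill(total_requests, stock):
    """Buffer seckill requests in a queue; one consumer decrements the stock."""
    requests = queue.Queue()
    results = {}

    def consumer():
        nonlocal stock
        while True:
            uid = requests.get()
            if uid is None:             # sentinel: no more requests
                break
            if stock > 0:               # single consumer, so check-then-decrement is safe
                stock -= 1
                results[uid] = "success"
            else:
                results[uid] = "sold out"

    t = threading.Thread(target=consumer)
    t.start()
    for uid in range(total_requests):   # producers: every request just enqueues
        requests.put(uid)
    requests.put(None)
    t.join()
    return results
```

With 1,000 requests and 100 units of stock, exactly 100 callers get "success"; the queue has turned the burst into a serial, lossless stream for the inventory writer.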

Summary: The seckill system is characterized by instantaneous high concurrency peaks.
The first is front-end traffic limiting.
For example, the button can be clicked only once, and traffic is limited per IP;
for example, static pages: routine actions such as browsing products never reach the server; the server is hit only when the seckill time arrives and the user actively clicks the seckill button;
for example, a CDN lets users fetch content from a nearby node, which reduces network congestion and improves response speed and hit rate;
for example, raising the participation threshold (members only) or selling in batches. These last two are product-manager decisions, not technical ones.
The second point: since there are only a few items, most users' requests fail, and inventory is written only when an order succeeds. This is a typical read-heavy, write-light scenario, which is exactly where a cache belongs.
In a read-heavy scenario, a flood of read requests can overwhelm the database, so use a cache such as Redis. That brings its own problems: cache breakdown (a hot key expires under load; keep hot data from expiring, or rebuild it under a lock), cache penetration (requests for keys that never exist; validate input, use a Bloom filter, or cache a fixed placeholder value), and cache avalanche (many keys expire at once; stagger the expiration times). At larger scale you can move to a Redis cluster, which has cluster-related issues of its own.
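The "rebuild under a lock" defense against cache breakdown can be sketched in memory (the `load_from_db` callback is a stand-in for the real database query): only the first thread to miss rebuilds the key, and everyone else reuses its result.

```python
import threading

class Cache:
    """Cache-aside reads with a lock so a hot key is rebuilt only once."""
    def __init__(self, load_from_db):
        self.data = {}
        self.lock = threading.Lock()
        self.load_from_db = load_from_db
        self.db_hits = 0                     # instrumentation for the example

    def get(self, key):
        if key in self.data:                 # fast path: cache hit, no lock
            return self.data[key]
        with self.lock:                      # only one thread rebuilds at a time
            if key in self.data:             # double-check after acquiring the lock
                return self.data[key]
            self.db_hits += 1
            value = self.load_from_db(key)
            self.data[key] = value
            return value
```

Even if 50 threads miss the hot key simultaneously, the database is queried once; a production version would use a per-key lock (or Redis `SET NX`) rather than one global mutex.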

The third point concerns inventory.
In a real flash sale, deducting the inventory is not the end of the story: if the user fails to pay within a certain time, the deducted stock must be added back. This is the concept of withholding (pre-deducting) inventory.
Then there is the oversold problem. When reducing inventory we usually check that the stock is greater than 0 and, if so, decrement it; but these two operations are not atomic, so it is quite possible that the check passes and another user buys the item before our decrement runs.
One solution is a mutex (pessimistic lock), so that multiple threads cannot touch the shared counter at the same time; but its performance is poor.
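A sketch of the pessimistic-lock fix: hold a mutex across the check and the decrement so the pair becomes effectively atomic (class and field names are illustrative).

```python
import threading

class Inventory:
    """Stock counter whose check-then-decrement runs under a mutex."""
    def __init__(self, stock):
        self.stock = stock
        self.sold = 0
        self.lock = threading.Lock()

    def try_buy(self):
        with self.lock:              # check and decrement as one critical section
            if self.stock > 0:
                self.stock -= 1
                self.sold += 1
                return True          # got one unit
            return False             # sold out: no oversell possible
```

Hammering this with 100 threads making 20 attempts each still sells exactly the initial stock, never more; the cost is that every attempt, successful or not, serializes on the lock.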

**Optimistic lock: use CAS with a version number; suitable for read-heavy, write-light scenarios.** Somewhat more efficient.
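A sketch of the optimistic-lock approach: read the stock and its version without holding any lock, then apply the decrement only if the version is unchanged, retrying on conflict. The internal lock here stands in for the atomicity the database gives a single `UPDATE ... WHERE version = ?` statement; all names are illustrative.

```python
import threading

class VersionedStock:
    """Inventory row with a version column, updated via compare-and-swap."""
    def __init__(self, stock):
        self.stock = stock
        self.version = 0
        self._lock = threading.Lock()  # stands in for the DB's atomic row update

    def read(self):
        with self._lock:               # consistent snapshot of (stock, version)
            return self.stock, self.version

    def compare_and_swap(self, expected_version, new_stock):
        with self._lock:               # the DB would make this single UPDATE atomic
            if self.version != expected_version:
                return False           # someone updated first: caller must retry
            self.stock = new_stock
            self.version += 1
            return True

def try_buy(row, max_retries=100):
    for _ in range(max_retries):
        stock, version = row.read()    # optimistic read, no lock held while deciding
        if stock <= 0:
            return False               # sold out
        if row.compare_and_swap(version, stock - 1):
            return True                # CAS succeeded: we got one unit
    return False                       # gave up under heavy contention
```

No thread blocks while deciding whether to buy; a failed CAS just means someone else's purchase landed first, so the caller re-reads and retries, which is why this suits read-heavy, write-light contention.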

The real concurrency hits the seckill action itself; the actual concurrency on the order and payment functions is very small. So when designing a seckill system, separate ordering and payment from the seckill main path, and in particular process ordering asynchronously through MQ.
If you use MQ, you need to pay attention to the following issues:
Message loss: there are many causes, such as network problems, a broker crash, or an MQ server disk failure. (Solution: keep a local message table with a "pending" status; only after successful consumption does the confirmation callback change the status to "processed"; scan the table periodically and retry any message still pending.)
Duplicate consumption: if the network times out while the consumer is acking a message, the message can be redelivered and consumed again; and because the sender adds a retry mechanism, the probability of duplicates rises further. So how do you solve the duplicate-message problem? (Add a message-processing table: first check whether the message is already in the table and, if so, return directly; if not, place the order and insert the message into the table, making the two steps atomic.)
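The parenthetical fix for duplicate consumption, sketched in memory (a real system would back the processing table with a database table carrying a unique index on the message ID; names here are illustrative):

```python
import threading

class IdempotentConsumer:
    """Consumer that drops redelivered messages via a processed-message table."""
    def __init__(self):
        self.processed = set()          # stands in for the message-processing table
        self.lock = threading.Lock()    # a DB unique index would enforce this atomically
        self.orders = []

    def handle(self, message_id, payload):
        with self.lock:                 # check-and-insert must be one atomic step
            if message_id in self.processed:
                return "duplicate"      # already handled: just ack and drop it
            self.processed.add(message_id)
        self.orders.append(payload)     # place the order exactly once
        return "processed"
```

Redelivering the same message ID after an ack timeout is now harmless: the second delivery is acked but creates no second order.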

(Summary: front-end traffic limiting (button, IP, CDN); read-heavy, write-light, so add a cache (and handle the cache-related issues); solve inventory overselling (optimistic or pessimistic lock); use a message queue to process the order and payment asynchronously (peak shaving), handling message loss and duplicate consumption.)

What are the implementation forms of a message queue?

The roles of a message queue: decoupling, asynchrony, peak shaving.
Decoupling separates the producer from its many consumers: A only needs to write the message to the queue and does not care who consumes it; if a consumer crashes or times out, that is no longer A's concern.
Asynchrony and peak shaving: for example, A is the high-pressure core business (the seckill itself) and order payment is a secondary business. If everything had to finish in one synchronous pass, the main flow would be delayed too long and lock contention would be heavy; instead, stash the work in a queue first. Once the seckill succeeds, the secondary tasks are in no hurry to complete.

There are mature products such as RabbitMQ and Kafka, as well as in-process blocking queues: bounded or unbounded, linked-list based, and so on.
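To the opening question "how to implement a blocking queue": a minimal bounded blocking queue can be built from one mutex and two condition variables, which is essentially what standard-library queues do. A sketch:

```python
import threading
from collections import deque

class BoundedBlockingQueue:
    """FIFO queue where put() blocks when full and take() blocks when empty."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()                       # linked-list-like storage
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.not_full:
            while len(self.items) >= self.capacity:
                self.not_full.wait()               # block until space frees up
            self.items.append(item)
            self.not_empty.notify()                # wake one waiting consumer

    def take(self):
        with self.not_empty:
            while not self.items:
                self.not_empty.wait()              # block until an item arrives
            item = self.items.popleft()
            self.not_full.notify()                 # wake one waiting producer
            return item
```

The `while` loops (rather than `if`) re-check the predicate after every wakeup, which guards against spurious wakeups; both conditions share the same lock so the predicate checks and the deque operations stay consistent.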


Origin: blog.csdn.net/weixin_53344209/article/details/130274373