[SpringBoot Mall Spike System Project Summary 25] Highlights of the project's difficulties and how they were solved (source address)

[SpringBoot Mall Spike System] Course Description
What is a spike (flash sale)
Spike scenarios usually appear when an e-commerce site runs a promotion, or during holidays when people scramble for train tickets on the 12306 website. For scarce goods or special offers, an e-commerce site will generally sell a limited quantity at an appointed time. Because of the special nature of these goods, they attract a large number of customers, all of whom rush to the spike page at the agreed moment.

Characteristics of spike scenarios
A large number of users buy at the same moment, so the site's instantaneous traffic surges.
The number of access requests is usually far greater than the inventory, so only a few users can spike successfully.
The spike business process is relatively simple: in general, place an order and reduce inventory.
Unlike other business scenarios, a spike is an instantaneous event with very high concurrency, and handling that concurrency is exactly the problem we need to solve. Spike traffic is a typical short, intense burst of access.

Spike architecture design
Rate limiting: since only a small number of users can spike successfully, most of the traffic should be restricted, allowing only a small portion of it through to the back-end services.
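As a minimal sketch of this idea (not the course's actual code; all names are hypothetical), a fixed-window counting limiter admits at most a set number of requests per window and rejects the rest immediately:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Fixed-window rate limiter: admit at most `capacity` requests per window,
// shed the rest before they reach the back end.
public class SpikeRateLimiter {
    private final int capacity;
    private final AtomicInteger used = new AtomicInteger(0);

    public SpikeRateLimiter(int capacity) { this.capacity = capacity; }

    /** Try to admit one request; true = pass to back end, false = reject. */
    public boolean tryAcquire() {
        while (true) {
            int cur = used.get();
            if (cur >= capacity) return false;              // window full: shed load
            if (used.compareAndSet(cur, cur + 1)) return true;
        }
    }

    /** Called by a scheduled task at each window boundary. */
    public void resetWindow() { used.set(0); }
}
```

In a real deployment this counter would live in Redis (INCR plus EXPIRE) or in a gateway such as Nginx, so that all application servers share one limit.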

Peak shaving (clipping): a spike system faces an instantaneous influx of users, so the rush to buy produces a very high instantaneous peak. Peak traffic is a major reason systems get overwhelmed, so smoothing an instantaneous high flow into a steady flow over time is an important design idea for spike systems. Common implementations of peak shaving use buffering and message-queue middleware.

Asynchronous processing: a spike system is a highly concurrent system; handling requests asynchronously can greatly improve system concurrency, and is in fact one way to implement peak shaving.
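The two ideas above can be sketched together with a bounded in-process queue standing in for RabbitMQ (an assumption for the sake of a self-contained example): the burst is absorbed or rejected at the queue, and a worker drains it at its own pace.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Peak-shaving sketch: a bounded queue absorbs a burst; anything beyond its
// capacity is rejected instantly instead of hitting the database.
public class PeakShavingDemo {
    public static int process(int burstSize, int queueCapacity) {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(queueCapacity);
        // Burst phase: offer() fails fast once the buffer is full.
        for (int i = 0; i < burstSize; i++) {
            queue.offer(i);
        }
        // Drain phase: the worker pulls requests at the back end's own pace.
        int processed = 0;
        while (queue.poll() != null) processed++;
        return processed;   // at most queueCapacity requests reach the back end
    }
}
```

With RabbitMQ the same shape holds: producers publish spike requests, and consumers acknowledge and process them at whatever rate the database can sustain.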

In-memory caching: the biggest bottleneck of a spike system is usually database reads and writes, because database access involves disk I/O and is slow. If we can move some of the data or business logic into an in-memory cache, efficiency improves greatly.
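A minimal read-through cache illustrates the point (a sketch with hypothetical names; the loader function stands in for a MyBatis database query): hot data such as goods details is served from memory, and the slow database path runs only on a miss.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Read-through cache sketch: the database loader runs only on a cache miss.
public class ReadThroughCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> dbLoader;  // stands in for a MyBatis query
    private int dbHits = 0;                 // how many times the DB was touched

    public ReadThroughCache(Function<K, V> dbLoader) { this.dbLoader = dbLoader; }

    public synchronized V get(K key) {
        V v = cache.get(key);
        if (v == null) {                    // miss: go to the database once
            dbHits++;
            v = dbLoader.apply(key);
            cache.put(key, v);
        }
        return v;
    }

    public synchronized int dbHits() { return dbHits; }
}
```

In the project itself this role is played by Redis, which additionally shares the cache across all application servers and supports persistence.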

Scalability: if we want to support more users and higher concurrency, the best design is an elastically scalable system: when traffic arrives, just add machines. This is how Taobao and JD add large numbers of machines to handle the Double Eleven shopping peak.

Spike system architecture design ideas
Intercept requests upstream and reduce downstream pressure: a spike system sees huge concurrency, but very few of the requests actually succeed. If requests are not intercepted at the front, the database suffers read/write lock contention and requests eventually time out.
Make full use of caching: caching can greatly increase the system's read and write speed.
Use message queues: a message queue can shave the peak and buffer a large number of concurrent requests. This is asynchronous processing: back-end services actively pull request messages from the queue at a rate matching their own processing capacity.
Front-end measures
Static pages: make all elements on the event page static, and minimize dynamic elements. Use a CDN to absorb the peak.
Prevent re-submission: gray out the button after the user submits and forbid repeated submission.
Per-user rate limiting: only allow a user to submit one request within a certain period, for example by throttling per IP.

Back-end measures
Limit uid (UserID) access frequency: the measures above block repeated requests from the browser, but for scripts, plug-ins, or other malicious clients, the server-side control layer also needs to restrict access frequency per uid.
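A per-uid fixed-window counter captures the idea (a sketch; such courses typically implement it with Redis INCR plus key expiry, which the scheduled reset stands in for here):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Per-uid frequency limit: each uid may hit the spike interface at most
// `maxPerWindow` times per window; further hits are rejected.
public class UidFrequencyLimiter {
    private final int maxPerWindow;
    private final Map<Long, Integer> counts = new ConcurrentHashMap<>();

    public UidFrequencyLimiter(int maxPerWindow) { this.maxPerWindow = maxPerWindow; }

    public boolean allow(long uid) {
        int n = counts.merge(uid, 1, Integer::sum);  // atomic per-uid increment
        return n <= maxPerWindow;                    // over the limit: reject
    }

    /** In Redis this happens via key expiry; here, a scheduled reset. */
    public void resetWindow() { counts.clear(); }
}
```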

The above intercepts only part of the requests. When the number of spike users is large, even if each user sends only one request, the service layer still receives a very large number of requests. For example, if 100,000 (10W) users simultaneously try to grab 10 phones, the service layer faces at least 100,000 concurrent requests.

Use a message queue to buffer requests: the service layer knows there are only 10 phones in stock, so there is no need to pass all 100,000 requests to the database. Instead, write these requests into a message-queue buffer; the database layer subscribes to the messages and reduces inventory. A request whose inventory reduction succeeds returns "spike success"; one that fails returns "spike ended".
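The inventory-reduction step behind the queue can be sketched as an atomic decrement (hypothetical names; in the project this happens in the queue consumer before the order is written): exactly as many requests succeed as there are units in stock, no matter how many arrive.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Atomic stock decrement: only requests that win a unit return success;
// once stock is gone, every later request gets "spike ended".
public class StockService {
    private final AtomicInteger stock;

    public StockService(int initialStock) { this.stock = new AtomicInteger(initialStock); }

    /** One queued spike request: true = spike success, false = spike ended. */
    public boolean tryDecrement() {
        while (true) {
            int s = stock.get();
            if (s <= 0) return false;                     // sold out
            if (stock.compareAndSet(s, s - 1)) return true;
        }
    }

    public int remaining() { return stock.get(); }
}
```

The compare-and-set loop is what prevents overselling: two requests can both see stock = 1, but only one of them can decrement it.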

Use caching to answer read requests: for ticket sales on 12306 and similar spike scenarios, the workload is read-heavy and write-light; most requests are queries, so a cache can share the database's read pressure.

Use caching to handle write requests: a cache can also handle writes. For example, we can move the inventory figure from the database into Redis and perform all inventory decrements in Redis, then have a background process synchronize the users' successful spike requests from Redis to the database.
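A self-contained sketch of that flow, with an in-process map standing in for Redis and a list standing in for the orders table (assumptions made so the example runs without external services): all decrements happen on the cache side, and a "background job" flushes the winners to the database.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

// Write-through-the-cache sketch: stock lives in "Redis", winners are queued,
// and a background sync writes them into the "orders table".
public class CacheWriteDemo {
    private final Map<String, Integer> redis = new ConcurrentHashMap<>();
    private final Queue<Long> pendingWinners = new ConcurrentLinkedQueue<>();
    private final List<Long> dbOrders = new ArrayList<>();

    public CacheWriteDemo(String goodsKey, int stock) { redis.put(goodsKey, stock); }

    /** Decrement happens only in the cache, never directly in the DB. */
    public synchronized boolean spike(String goodsKey, long uid) {
        int left = redis.getOrDefault(goodsKey, 0);
        if (left <= 0) return false;        // a real Redis DECR would go negative
        redis.put(goodsKey, left - 1);
        pendingWinners.add(uid);            // winner queued for DB sync
        return true;
    }

    /** Background job: flush winning requests from the cache side to the DB. */
    public synchronized int syncToDatabase() {
        int flushed = 0;
        Long uid;
        while ((uid = pendingWinners.poll()) != null) { dbOrders.add(uid); flushed++; }
        return flushed;
    }

    public List<Long> orders() { return dbOrders; }
}
```

With real Redis, the decrement-and-check step is usually a single DECR (or a small Lua script) so it stays atomic across many application servers.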
Database layer
The database layer is the most fragile layer. In general application design, requests should be intercepted upstream, so that the database layer bears only the access requests "within its capability". With the queues and caches introduced at the service layer as described above, the database at the bottom can stay calm.

Technology stack
Front-end: Thymeleaf, Bootstrap, jQuery
Back-end: SpringBoot, MyBatis
Middleware: RabbitMQ, Redis, Druid
Note: Thymeleaf is a server-side template engine; RabbitMQ handles asynchronous order placement; Redis serves as the cache (it has many advantages over Memcached, e.g. Redis supports persistence); Druid is a connection pool developed by Alibaba whose benefit is monitoring: it can observe the connections inside the pool, the maximum number of connections, the maximum concurrency, the longest connection time, and other metrics.

Key techniques in the project
Distributed session, so that multiple servers can respond to the same user.
Redis as a cache, improving access speed and concurrency and reducing database pressure.
Page staticization, caching pages in the browser and separating the front end from the back end to reduce server pressure.
A message queue to complete orders asynchronously, improving user experience and shaving/throttling peak traffic.
Security optimization: double MD5 hashing of the password with hidden salts, a hidden spike interface address with rate limiting to prevent scripted abuse, and a mathematical-formula CAPTCHA.
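The double-MD5 scheme can be sketched as follows (salt values and method names here are hypothetical): the browser sends md5(salt1 + password) so the plaintext never crosses the wire, and the server stores md5(salt2 + clientHash) with a per-user random salt.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Double-MD5 sketch: one hash in the browser, a second salted hash on the server.
public class DoubleMd5 {
    private static final String CLIENT_SALT = "1a2b3c4d"; // hypothetical fixed salt

    public static String md5(String s) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(s.getBytes(StandardCharsets.UTF_8)))
                sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    /** First pass, done in the browser before the form is submitted. */
    public static String clientPass(String plaintext) { return md5(CLIENT_SALT + plaintext); }

    /** Second pass, done on the server with a per-user random salt. */
    public static String serverPass(String clientHash, String userSalt) {
        return md5(userSalt + clientHash);
    }
}
```

Note that MD5 is used here only because it is what such course projects teach; for new systems a slow password hash such as bcrypt would be the safer choice.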
----------------
Disclaimer: this article is an original article by CSDN blogger "pitt1997" under the CC 4.0 BY-SA copyright agreement; when reproducing, please attach the original source link and this statement.
Original link: https://blog.csdn.net/Brad_PiTt7/article/details/90603428 (spike system optimization study notes)


Origin blog.csdn.net/hmh13548571896/article/details/104023742