High-concurrency solutions (easy to understand)

1. Front-end solution:

1. Static page

Page staticization (e.g. pre-rendering pages with HttpClient) is used to cache the homepage content: all static elements on the activity page are made static, and dynamic elements are minimized. A CDN is used to absorb peak traffic.


2. User rate limiting

Each user is allowed to submit only one request within a given time window; for example, rate limiting by IP can be used.
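The per-IP limit can be sketched as a fixed-window counter. This is a minimal in-process version; the class name and parameters are illustrative, and a production setup would typically keep the counters in Redis so all gateway nodes share them:

```python
import time
from collections import defaultdict

class FixedWindowRateLimiter:
    """Allow at most `limit` requests per `window_seconds` for each key (e.g. a client IP)."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.counters = defaultdict(lambda: [0.0, 0])  # key -> [window_start, count]

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        window_start, count = self.counters[key]
        if now - window_start >= self.window:
            self.counters[key] = [now, 1]   # new window: reset the counter
            return True
        if count < self.limit:
            self.counters[key][1] = count + 1
            return True
        return False                        # over the limit inside this window

limiter = FixedWindowRateLimiter(limit=1, window_seconds=5.0)
print(limiter.allow("203.0.113.7"))  # → True  (first request in the window)
print(limiter.allow("203.0.113.7"))  # → False (duplicate within the window)
```

A sliding-window or token-bucket variant smooths out the burst that a fixed window permits at window boundaries.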


3. Prevent duplicate submissions

After the user submits, the button is grayed out so the request cannot be submitted again.

2. Back-end solution:

1. Server controller layer (gateway layer)

The server-side controller layer needs to limit the access frequency for each uid.

2. Service layer (4 types)

a. Message queue cache request

Commonly used message queues include RabbitMQ, RocketMQ, ActiveMQ, Kafka, ZeroMQ, MetaMQ, etc.

Common scenarios for message queues:

Application decoupling: applications exchange messages through the queue instead of calling each other directly, so a failed interface call does not bring down the entire process;
Asynchronous processing: multiple applications consume the same message from the queue and process it concurrently, reducing total processing time compared with serial calls;
Rate limiting and peak shaving: widely used in flash sales and rush-buying activities to keep a traffic spike from overwhelming the application;
Message-driven systems: the system is split into the message queue, message producers, and message consumers; producers generate messages, and consumers (there may be several) process them.
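The peak-shaving role above can be sketched with an in-process queue standing in for a real MQ such as RabbitMQ or Kafka. This is only a shape-of-the-idea demo: a burst of requests is absorbed by a bounded buffer and drained by a consumer at its own pace:

```python
import queue
import threading

# A bounded in-process queue stands in for the MQ: producers enqueue
# flash-sale requests at burst rate; a consumer drains them steadily.
request_queue = queue.Queue(maxsize=100)
processed = []

def consumer():
    while True:
        order_id = request_queue.get()
        if order_id is None:        # sentinel: shut down the worker
            break
        processed.append(order_id)  # real code would create the order here

worker = threading.Thread(target=consumer)
worker.start()

# The burst of incoming requests is absorbed by the queue (peak shaving);
# put() would block (back-pressure) if the queue were full.
for order_id in range(10):
    request_queue.put(order_id)

request_queue.put(None)  # signal the consumer to stop
worker.join()
print(len(processed))  # → 10
```

With a real broker the producer and consumer live in different services, and the queue's depth limit is what protects the downstream database from the spike.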

b. Caching: store hot data in a Redis cache
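The usual pattern here is cache-aside: read from the cache first, fall back to the database on a miss, then populate the cache. A plain dict stands in for Redis below so the sketch stays self-contained; with redis-py you would use `get`/`setex` calls instead, plus a TTL:

```python
cache = {}                 # stand-in for Redis
db_reads = {"count": 0}    # counts how often we hit the "database"

def load_from_db(product_id: str) -> dict:
    db_reads["count"] += 1
    return {"id": product_id, "stock": 100}  # illustrative row

def get_product(product_id: str) -> dict:
    if product_id in cache:        # cache hit: no database access
        return cache[product_id]
    value = load_from_db(product_id)
    cache[product_id] = value      # populate the cache for later readers
    return value

get_product("p1")          # miss -> reads the database
get_product("p1")          # hit  -> served from cache
print(db_reads["count"])   # → 1
```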

c. Synchronization mechanism

d. Transactions + locks to prevent data corruption under concurrency
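The lock-based protection in point d can be sketched with a stock counter. Without the lock, concurrent "check stock, then decrement" sequences can interleave and oversell; the lock makes check-and-decrement atomic, playing the role a database transaction (e.g. `SELECT ... FOR UPDATE`) plays server-side:

```python
import threading

stock = {"remaining": 5}
sold = []
lock = threading.Lock()

def buy(user_id: int):
    with lock:                    # critical section: check-and-decrement is atomic
        if stock["remaining"] > 0:
            stock["remaining"] -= 1
            sold.append(user_id)  # this user's order succeeds

threads = [threading.Thread(target=buy, args=(i,)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(sold), stock["remaining"])  # → 5 0  (never oversold)
```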

3. Database solution:

1. Split data into sub-databases and sub-tables (sharding)

Split one database into multiple databases: multiple databases can withstand higher concurrency, and sharding reduces the load on any single database.

Split one table into multiple tables to keep database performance from degrading as the data volume grows.
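A sharding router can be as simple as hashing the shard key to a stable (database, table) pair. The layout below (2 databases × 4 tables, names like `order_db_0` / `orders_2`) is hypothetical, just to show the routing arithmetic:

```python
import hashlib

NUM_DBS, TABLES_PER_DB = 2, 4  # hypothetical layout: 8 shards total

def route(user_id: str) -> tuple:
    """Hash the shard key to a stable (database, table) pair."""
    h = int(hashlib.md5(user_id.encode()).hexdigest(), 16)
    shard = h % (NUM_DBS * TABLES_PER_DB)
    db = "order_db_%d" % (shard // TABLES_PER_DB)
    table = "orders_%d" % (shard % TABLES_PER_DB)
    return db, table

print(route("user_42"))  # the same user always maps to the same shard
```

Hash routing spreads load evenly but makes cross-shard queries and resharding harder; range-based routing is the usual alternative.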

2. Separation of database reading and writing

Most workloads read far more than they write, so there is no need to send every request to one database. Set up a master-slave architecture: write to the master, read from the slaves, separating reads from writes. When read traffic grows, add more slave databases.
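The routing decision can be sketched as a small class that sends anything non-SELECT to the primary and spreads reads across replicas round-robin. The class and node names are illustrative; real middleware (e.g. a database proxy) also has to pin reads after a write to the primary to avoid replication lag:

```python
import itertools

class ReadWriteRouter:
    """Route writes to the primary; spread reads across replicas round-robin."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self._replicas = itertools.cycle(replicas)

    def pick(self, sql: str) -> str:
        if sql.lstrip().upper().startswith("SELECT"):
            return next(self._replicas)   # read -> next replica in rotation
        return self.primary               # write/DDL -> primary

router = ReadWriteRouter("primary", ["replica1", "replica2"])
print(router.pick("INSERT INTO orders VALUES (1)"))  # → primary
print(router.pick("SELECT * FROM orders"))           # → replica1
print(router.pick("SELECT * FROM orders"))           # → replica2
```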

4. Server solution:

1. Load Balance Cluster

Servers can be organized into a load-balancing cluster to share the system's work and reduce the resource burden on any single server.


Origin blog.csdn.net/m0_58823014/article/details/129953264