Design a high-concurrency system

The upgrade path is: initial system → add load balancing → database sharding (splitting into multiple databases and tables) + read/write separation → cache cluster + message middleware cluster.

1. Initial system

Suppose the application server has 4 cores and 8 GB of RAM, and the database server has 16 cores and 32 GB. With 10,000 daily active users, the system handles about 10 requests per second at the application level and 30 requests per second at the database level.
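To make these numbers concrete, here is a rough back-of-envelope estimate. The per-user request rate and the database fan-out per request are assumptions chosen to reproduce the 10 req/s and 30 req/s figures above; they are not from the original post.

```java
// Back-of-envelope capacity estimate (the per-user rate and DB fan-out are assumed values).
public class CapacityEstimate {
    public static void main(String[] args) {
        long dailyActiveUsers = 10_000;
        long requestsPerUserPerDay = 86;   // assumption: ~86 requests per user per day
        long dbQueriesPerRequest = 3;      // assumption: each app request issues ~3 DB queries

        double appQps = dailyActiveUsers * requestsPerUserPerDay / 86_400.0; // seconds in a day
        double dbQps = appQps * dbQueriesPerRequest;

        System.out.printf("app ~%.0f req/s, db ~%.0f req/s%n", appQps, dbQps); // ~10 and ~30
    }
}
```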

2. Add load balancing

The number of users grows 50-fold to 500,000 daily active users, with roughly 500 requests per second hitting the application and 1,500 requests per second hitting the database at peak.

Problem: the application server's CPU load is too high, while the database can still cope. Solution: deploy several application servers and spread requests across them with a load balancer, as in the sketch below.
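A minimal round-robin sketch of what the load balancer does: each incoming request goes to the next application server in the list. The server addresses are made up for illustration; in practice this role is usually played by Nginx, LVS, or a cloud load balancer rather than application code.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Round-robin request distribution across application servers (illustrative sketch).
public class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger counter = new AtomicInteger();

    public RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    public String next() {
        // floorMod keeps the index valid even after the counter overflows
        int idx = Math.floorMod(counter.getAndIncrement(), servers.size());
        return servers.get(idx);
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080")); // hypothetical addresses
        for (int i = 0; i < 6; i++) {
            System.out.println("request " + i + " -> " + lb.next());
        }
    }
}
```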

3. Database sharding (split databases and tables) + read/write separation

The number of users keeps growing, reaching 10 million registered users and 1 million daily active users.

Problem: the application level can still be handled by adding machines behind the load balancer, but the database becomes overloaded once the request volume reaches about 3,000/s. Solution: split the data across multiple databases and tables, and separate reads from writes so that writes go to the master and reads go to replicas, as in the sketch below.
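A minimal sketch of routing by user ID to a database shard and table, plus read/write separation. The shard counts, replica count, and naming are illustrative assumptions, not values from the original post.

```java
// Route a user ID to a database shard and table; send writes to the master and reads to a replica.
public class ShardRouter {
    private static final int DB_COUNT = 4;     // assumption: 4 physical databases
    private static final int TABLE_COUNT = 8;  // assumption: 8 user tables per database

    static String routeWrite(long userId) {
        long db = userId % DB_COUNT;
        long table = (userId / DB_COUNT) % TABLE_COUNT;
        return "master_db_" + db + ".user_" + table;
    }

    static String routeRead(long userId, int replicaCount) {
        long db = userId % DB_COUNT;
        long table = (userId / DB_COUNT) % TABLE_COUNT;
        long replica = userId % replicaCount;  // naive replica choice; real middleware balances load
        return "replica_" + replica + "_of_db_" + db + ".user_" + table;
    }

    public static void main(String[] args) {
        long userId = 123_456L;
        System.out.println("write -> " + routeWrite(userId));
        System.out.println("read  -> " + routeRead(userId, 2));
    }
}
```

In practice this routing is usually done by sharding middleware (for example ShardingSphere or MyCat) rather than hand-written code.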

4. Cache cluster + message middleware cluster

The number of users continues to grow.

Problem: the application level can keep adding machines to carry higher concurrency. At the database level, write concurrency keeps rising, which can be met by scaling up the database servers; read concurrency also keeps rising, which can be met by adding more slave (replica) databases.

However, scaling the database this way is expensive, which motivates the two measures below.

Pressure from many reads and few writes: cache cluster

When writing to the database, also write a copy of the data to the cache cluster, and let the cache cluster absorb most of the read requests, as in the sketch below.
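A minimal sketch of this idea: writes go to the database and the cache, and reads are served from the cache first. A ConcurrentHashMap stands in for the cache cluster (in practice this would be Redis or a similar distributed cache), and the database calls are stubs.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Write to the database and the cache together; serve reads from the cache when possible.
public class CachedUserStore {
    private final Map<Long, String> cache = new ConcurrentHashMap<>(); // stand-in for the cache cluster

    public void save(long userId, String profile) {
        writeToDatabase(userId, profile);  // primary write
        cache.put(userId, profile);        // keep the cache in sync so reads can hit it
    }

    public String load(long userId) {
        String cached = cache.get(userId);
        if (cached != null) {
            return cached;                 // most read traffic is absorbed here
        }
        String fromDb = readFromDatabase(userId);
        if (fromDb != null) {
            cache.put(userId, fromDb);     // repopulate the cache after a miss
        }
        return fromDb;
    }

    private void writeToDatabase(long userId, String profile) { /* stubbed out */ }
    private String readFromDatabase(long userId) { return null;  /* stubbed out */ }

    public static void main(String[] args) {
        CachedUserStore store = new CachedUserStore();
        store.save(42L, "alice");
        System.out.println("read -> " + store.load(42L)); // served from the cache
    }
}
```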

High write pressure: message middleware cluster

Make write requests asynchronous: for example, a burst of 500 write requests per second is first published to the MQ, which performs peak shaving and valley filling; consumers then drain the queue at a steady rate of about 100/s and persist the data to the database, as in the sketch below.
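A minimal sketch of peak shaving with a queue. An in-process BlockingQueue stands in for the message middleware cluster (RocketMQ, Kafka, RabbitMQ, or similar): a burst of 500 writes is absorbed by the queue, while a single consumer drains it at roughly 100 messages per second and writes to the database.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Peak shaving: bursty producers enqueue; a consumer persists at a steady ~100 msg/s.
public class PeakShavingDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> mq = new ArrayBlockingQueue<>(10_000); // stand-in for the MQ cluster

        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String msg = mq.take();
                    Thread.sleep(10); // throttle to ~100 messages/s, the steady database write rate
                    System.out.println("persisted " + msg + ", backlog=" + mq.size());
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.setDaemon(true);
        consumer.start();

        // Simulate a one-second burst of 500 write requests hitting the system.
        for (int i = 0; i < 500; i++) {
            mq.put("write-" + i);
        }
        Thread.sleep(3_000); // let the consumer drain part of the backlog before the demo exits
    }
}
```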

 


Source: blog.csdn.net/wh672843916/article/details/105503764