3. System optimization under high concurrency: write-operation stress test

For a long time I assumed that write operations behaved much the same as reads, so here I simulate a flash-sale (spike) scenario: with a quantity of 500, how long does it take for the orders to complete once the rush starts, and what does the queueing look like?

Test scenario: after a request comes in, it is authenticated against Redis, then the flow checks whether the product exists (a table at the million-row level), fetches the user's details, decrements the inventory for the order (million-row level), persists the order, and increments the sales count (million-row level).
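To make the flow concrete, here is a minimal sketch of what such an order path might look like in a Spring-style service. All class, mapper, and key names are assumptions for illustration, not the author's actual code.

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical sketch of the stress-tested order flow; all names are assumed.
@Service
public class OrderService {

    @Autowired private StringRedisTemplate redisTemplate; // token check
    @Autowired private ItemMapper itemMapper;             // item table, ~1M rows
    @Autowired private UserMapper userMapper;             // user details
    @Autowired private StockMapper stockMapper;           // stock table, ~1M rows
    @Autowired private OrderMapper orderMapper;           // order persistence
    @Autowired private SalesMapper salesMapper;           // sales counter, ~1M rows

    @Transactional
    public Long createOrder(String token, Long itemId, Long userId, int amount) {
        // 1. Authenticate the request against Redis
        if (redisTemplate.opsForValue().get("token_" + token) == null) {
            throw new IllegalStateException("invalid or expired token");
        }
        // 2. Check that the item exists (million-row table)
        ItemDO item = itemMapper.selectById(itemId);
        if (item == null) {
            throw new IllegalArgumentException("item not found");
        }
        // 3. Load the user's details (would be attached to the order record)
        UserDO user = userMapper.selectById(userId);
        // 4. Decrement inventory; 0 rows affected means sold out
        if (stockMapper.decreaseStock(itemId, amount) == 0) {
            throw new IllegalStateException("sold out");
        }
        // 5. Persist the order and 6. increase the sales count
        Long orderId = orderMapper.insertOrder(itemId, userId, amount);
        salesMapper.increaseSales(itemId, amount);
        return orderId;
    }
}
```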


Under the previous configuration, Redis only caches the product information, while the order reads and writes all go straight to the database (a sketch of this cached read follows the results below):

While stock remains:

After the stock is sold out:

With continuous read and write operations all hitting the database, the performance is...
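For reference, "Redis records the product information" in this baseline would roughly be a cache-aside read, something like the fragment below (key format, TTL, and the fastjson serialization are all assumptions; `TimeUnit` is `java.util.concurrent.TimeUnit`):

```java
// Fragment of the same hypothetical service: only the item details come from
// Redis; stock, orders, and sales still hit the database directly.
public ItemDO getItemCached(Long itemId) {
    String key = "item_" + itemId;                       // assumed key format
    String cached = redisTemplate.opsForValue().get(key);
    if (cached != null) {
        return JSON.parseObject(cached, ItemDO.class);   // com.alibaba.fastjson.JSON
    }
    ItemDO item = itemMapper.selectById(itemId);         // fall back to the database
    if (item != null) {
        redisTemplate.opsForValue().set(key, JSON.toJSONString(item),
                10, TimeUnit.MINUTES);                   // short TTL, assumed
    }
    return item;
}
```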


After introducing RocketMQ + Redis (the application waits for the MQ send result to return before responding; a producer sketch follows the results below):

While there is still stock:

Comparing the path that goes through the database with the one served from Redis, the gap between the two is more than 1,000.
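The "application waits for MQ to return" setup corresponds to a synchronous RocketMQ send: the request thread blocks until the broker acknowledges the order message. A minimal producer sketch, with the group name, topic, and payload format assumed:

```java
import org.apache.rocketmq.client.producer.DefaultMQProducer;
import org.apache.rocketmq.client.producer.SendResult;
import org.apache.rocketmq.client.producer.SendStatus;
import org.apache.rocketmq.common.message.Message;
import java.nio.charset.StandardCharsets;

public class OrderProducer {

    private final DefaultMQProducer producer;

    public OrderProducer(String nameServer) throws Exception {
        producer = new DefaultMQProducer("order_producer_group"); // assumed group
        producer.setNamesrvAddr(nameServer);
        producer.start();
    }

    /** Synchronous send: the caller blocks until the broker acknowledges. */
    public boolean sendCreateOrder(Long itemId, Long userId, int amount) throws Exception {
        String body = itemId + "," + userId + "," + amount;        // assumed payload format
        Message msg = new Message("order_topic",                   // assumed topic name
                body.getBytes(StandardCharsets.UTF_8));
        SendResult result = producer.send(msg);                    // waits for the MQ result
        return result.getSendStatus() == SendStatus.SEND_OK;
    }
}
```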


When the application does not wait for a return value from MQ:

There is not much difference between the two. As for rate limiting, such as a token bucket for the flash sale, that is not covered here.
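The "no return value" case could be implemented with RocketMQ's one-way send, where the request thread does not wait for any acknowledgement from the broker (same assumed producer and topic as above):

```java
// Fire-and-forget variant of the producer above: no SendResult comes back,
// so the HTTP response does not depend on the broker's acknowledgement.
public void sendCreateOrderOneway(Long itemId, Long userId, int amount) throws Exception {
    String body = itemId + "," + userId + "," + amount;            // assumed payload format
    Message msg = new Message("order_topic", body.getBytes(StandardCharsets.UTF_8));
    producer.sendOneway(msg);                                      // returns immediately
}
```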

In general, write operations inevitably lose some performance because they have to interact with the database. The best we can do is use asynchronous techniques to push that cost to the back end; whether this is acceptable for the user experience still depends on the actual scenario.
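Pushing the cost "to the back" means the heavy database work runs in an MQ consumer instead of the user's request thread. A consumer sketch under the same assumptions; `persistOrder` is a hypothetical helper that would perform the stock decrement, order insert, and sales update:

```java
import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;
import org.apache.rocketmq.common.message.MessageExt;
import java.nio.charset.StandardCharsets;

public class OrderConsumer {

    public void start(String nameServer, OrderService orderService) throws Exception {
        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("order_consumer_group");
        consumer.setNamesrvAddr(nameServer);
        consumer.subscribe("order_topic", "*");                    // assumed topic name
        consumer.registerMessageListener((MessageListenerConcurrently) (msgs, context) -> {
            for (MessageExt msg : msgs) {
                String[] p = new String(msg.getBody(), StandardCharsets.UTF_8).split(",");
                // The slow work (stock decrement, order insert, sales update)
                // now happens here, off the user's request path.
                orderService.persistOrder(Long.valueOf(p[0]), Long.valueOf(p[1]),
                        Integer.parseInt(p[2]));
            }
            return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;      // ack; failures are redelivered
        });
        consumer.start();
    }
}
```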

Next, the system will be optimized further by scaling out the database and setting up a Redis master-slave structure.

Origin blog.csdn.net/haozi_rou/article/details/105500298