The basics of the spike business

A spike (flash sale) is a very common business scenario: at a fixed moment, a large number of users rush to snap up a small quantity of discounted products. It drives exposure for both the product and the e-commerce site, increases user traffic, and thereby lifts overall sales.

For example, during this year's epidemic, the major e-commerce sites all ran spike sales for masks.

The general spike flow

1. The spike page fetches the server time dynamically, and the front end runs a countdown against it.
2. When the countdown ends, the spike starts.
3. The front end calls the back-end spike interface.
4. The spike actually executes (the inventory is decremented and the order row is inserted).

Inventory decrement is usually handled in redis, because at the moment the spike starts, a flood of requests arrives at once.
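The critical property of the redis decrement is that the stock check and the decrement happen atomically, so concurrent requests can never oversell. As a minimal, self-contained sketch of that pattern (an in-memory stand-in for a guarded redis DECR; the class and method names are my own, not from the source):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Models the atomic "check stock, then decrement" step a redis
// counter (e.g. a Lua-guarded DECR) would perform during a spike.
public class SpikeStock {
    private final AtomicInteger stock;

    public SpikeStock(int initial) {
        this.stock = new AtomicInteger(initial);
    }

    // Returns true if this caller won a unit of stock.
    // The CAS loop makes check-and-decrement a single atomic step,
    // so stock can never go below zero under concurrency.
    public boolean tryDecrement() {
        while (true) {
            int current = stock.get();
            if (current <= 0) {
                return false; // sold out
            }
            if (stock.compareAndSet(current, current - 1)) {
                return true;
            }
        }
    }

    public int remaining() {
        return stock.get();
    }
}
```

A plain "read, check, write" sequence would have a race window between the check and the write; the atomic compare-and-set (or, in real redis, a single Lua script) closes it.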

Order-placing spike

Let me describe a spike scenario from a previous project: a spike that places a real order.
That is, everyone is allowed to attempt an order; only when the order insert succeeds and the subsequent inventory decrement succeeds does it count as a successful spike.
Compared with the flow above, this version is heavier and harder to implement.
In this case the database is unavoidable.
One complication is whether to buffer all operations in nosql first, or to publish an mq message and let the database digest the work slowly; either way, that can cause inconsistencies.
So if the user volume is not especially large, you can also consider letting the database take the load directly.
If you do hit the database directly, you must also decide the ordering: insert the order first and then decrement inventory, or decrement inventory first and then insert the order.
To give the conclusion up front: it is better to insert the order first and then decrement the inventory.

Taking MySQL as the DB: placing the order is an insert. With an index in place, the insert takes a row-level lock on the new row only, and can sustain on the order of 40,000 operations per second. Decrementing inventory is an update; when it hits an index it also takes a row-level lock, but this one is an exclusive lock on the same hot product row, so every update must wait for the previous transaction to release the lock before it can proceed.

This is where the ordering matters. Under MySQL's two-phase locking protocol, locks acquired in a transaction are held until commit, so we should place the hotspot operation (the inventory update) as close to the commit as possible. That minimizes how long the row-level lock on the hot row is held, and throughput improves naturally.
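As an illustrative sketch of that ordering (table and column names here are assumptions, not from the source), a single MySQL transaction would look like:

```sql
START TRANSACTION;

-- Non-hotspot statement first: the insert locks only the brand-new order row.
INSERT INTO spike_order (user_id, product_id, create_time)
VALUES (42, 1001, NOW());

-- Hotspot statement last, right before COMMIT: the exclusive lock on the
-- shared product row is held for the shortest possible time.
UPDATE spike_stock
SET stock = stock - 1
WHERE product_id = 1001 AND stock > 0;

COMMIT;
```

If the update came first, every concurrent transaction would queue on the hot product row for the full duration of the insert as well.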

Purchase-qualification spike

Take JD.com's mask spike, for example. A successful spike only means you have obtained the qualification to purchase; you then jump to the settlement page, where the purchase can succeed. Users who did not obtain the qualification may, for a short time, also reach the settlement page, but their purchase will eventually fail. Once the inventory is exhausted, the page is simply grayed out, indicating the spike is over.

This is much simpler to implement than the order-placing spike above: redis keeps an atomic counter initialized to the inventory; a successful spike decrements the counter and, at the same time, records the user who succeeded. Only users with such a record can complete the purchase afterwards.
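The counter-plus-record idea can be sketched in memory as follows (a stand-in for a redis counter plus a redis set of qualified users; class and method names are my own assumptions):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Qualification spike: an atomic counter for stock, plus a set
// recording which users won the qualification to purchase.
public class QualificationSpike {
    private final AtomicInteger stock;
    private final Set<String> qualified = ConcurrentHashMap.newKeySet();

    public QualificationSpike(int initialStock) {
        this.stock = new AtomicInteger(initialStock);
    }

    // Returns true if the user won a qualification; at most one per user.
    public boolean trySpike(String userId) {
        if (!qualified.add(userId)) {
            return false; // user already holds a qualification
        }
        if (stock.decrementAndGet() < 0) {
            stock.incrementAndGet();   // undo the over-decrement
            qualified.remove(userId);  // no qualification granted
            return false;
        }
        return true;
    }

    // The settlement page checks this before allowing the real purchase.
    public boolean mayPurchase(String userId) {
        return qualified.contains(userId);
    }
}
```

In production the same two steps (decrement and record) would live in one redis Lua script so they stay atomic across app servers.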

mysql

If you run a small mall service with a limited budget, and have neither the time nor the money to operate highly available nosql, mq, and similar services, then there is really no choice but to let mysql take the load. After all, a small mall does not want to spend much on this kind of activity, and fortunately the user volume is small; I think mysql, used well, can withstand quite a lot.

For example, you can write a stored procedure and put the insert and the update inside it. The advantage is fewer round trips between the java client and mysql, which ultimately shortens the time transaction locks are held.
That said, I have never seen stored procedures used this way in a real production environment; some departments handling large concurrency even forbid them. So the stored-procedure approach is more of a folk remedy than established practice, useful mainly as a direction for thinking.
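For completeness, such a stored procedure might be sketched like this (MySQL syntax; all table, column, and procedure names are assumptions for illustration):

```sql
DELIMITER //

CREATE PROCEDURE spike_place_order(
    IN  p_user_id    BIGINT,
    IN  p_product_id BIGINT,
    OUT p_result     INT  -- 1 = success, 0 = sold out
)
BEGIN
    START TRANSACTION;

    -- Order first (locks only the new row)...
    INSERT INTO spike_order (user_id, product_id, create_time)
    VALUES (p_user_id, p_product_id, NOW());

    -- ...hot inventory row last, right before COMMIT.
    UPDATE spike_stock
    SET stock = stock - 1
    WHERE product_id = p_product_id AND stock > 0;

    IF ROW_COUNT() = 1 THEN
        SET p_result = 1;
        COMMIT;
    ELSE
        SET p_result = 0;
        ROLLBACK;  -- no stock left: undo the order insert too
    END IF;
END //

DELIMITER ;
```

Both statements execute in one server-side call, so the client round trip no longer sits inside the lock-holding window.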

Cache

cdn cache

The static resources of the spike page, including css and js that rarely change, can be rendered statically and pushed to the cdn cache.

redis cache

The redis cache exists to block most requests before they reach the database.
How strictly the cache and the database must stay consistent depends on the business.
For data that almost never changes, rely on a redis expiration time, and reload from the database once the key expires.
For frequently changing data, synchronize redis on every database change.
For ordinary business, as long as the product manager is reliable, data can be made effectively immutable: if an item is wrong, discard it and create a new one.
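The expiration-based strategy above is the classic cache-aside pattern. A minimal in-memory sketch (standing in for redis GET/SETEX; the class and method names are my own):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Cache-aside with TTL: read from cache; on a miss or expired entry,
// reload from the database (the loader) and repopulate the cache.
public class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public V get(K key, Function<K, V> loader) {
        Entry<V> e = map.get(key);
        if (e == null || System.currentTimeMillis() > e.expiresAt) {
            V fresh = loader.apply(key);  // "query the database again"
            map.put(key, new Entry<>(fresh, System.currentTimeMillis() + ttlMillis));
            return fresh;
        }
        return e.value;
    }

    // For frequently changing data: call this after every database write.
    public void put(K key, V value) {
        map.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }
}
```

For nearly-static data only `get` is needed; for volatile data, `put` after each database update keeps the two in sync.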

Serialization

Since redis cannot store objects directly, you generally store json or a byte array.
Java native serialization is generally fine: just store the byte array. Or, a bit more coarsely, store a json string. If neither suits you, choose another serialization method.
Here I personally recommend protobuf-based serialization.
The protostuff library implements the protobuf serialization protocol; compared with java's native serialization it is faster and produces smaller output.
For why it is faster and smaller, see: https://github.com/eishay/jvm-serializers/wiki
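For reference, the baseline being compared against, java native serialization to a byte[] suitable for storage in redis, can be sketched with the standard library alone (protostuff would replace this with its own schema-based writer; the utility class name here is my own):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.io.UncheckedIOException;

// Java native serialization round-trip: object -> byte[] -> object.
// The byte[] is what would be stored as the redis value.
public class SerializeUtil {
    public static byte[] toBytes(Serializable obj) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }

    @SuppressWarnings("unchecked")
    public static <T> T fromBytes(byte[] bytes) {
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (T) ois.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Swapping in protostuff keeps the same byte[]-in, byte[]-out shape while shrinking the payload and speeding up both directions.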

Origin blog.csdn.net/java_zhangshuai/article/details/105499779