Using a Redis cache to prevent duplicate back-end requests under high concurrency

During a recent stress test, we found that under high concurrency, repeated requests can cause some interfaces to operate on the database in ways you do not expect. Take user check-in as an example: a user should only be able to check in once per day. To prevent multiple check-ins, the interface first queries the database to see whether the user has already checked in today; if so, the request is rejected, otherwise the check-in record is inserted, the user's points are updated, and so on. However, these database operations take time. Under high concurrency, this check clearly cannot stop repeated submissions: a user can check in several times in one day as long as the requests arrive within a very short window (I verified this myself).

This leads to the question we'll discuss today: how to handle repeated submissions.
First, let's look at the exact cause of the repeated-request problem (allow me to paste in a passage):

In business development, we often face the problem of preventing duplicate requests. When the server's response to a request involves modifying data or changing state, duplicates can do real harm. The consequences are especially severe in trading systems, after-sales rights protection, and payment systems.

Jitter in front-end operations, rapid repeated clicks, slow network communication, or a slow back-end response all increase the probability that the back end processes the same request more than once.

To de-bounce and block rapid repeat operations on the front end, the first thing that comes to mind is a layer of control there: when an operation is triggered, pop up a confirmation dialog, disable the button, show a countdown, and so on. I won't go into detail here.

However, front-end restrictions only solve part of the problem; they are not thorough enough on their own.

With that long-winded explanation of the cause out of the way, let's talk about the concrete solution:

Our approach handles repeated requests with a Redis-backed counter. A repeated request means multiple requests arriving within a very short span of time, too short for a database query to catch, so we use a cached counter to stop it instead. At the start of the interface, define a counter in the cache with a short expiry (a few seconds up to a minute or two is enough to cover a burst of duplicates). Each time a request from the same member arrives, the counter is incremented by 1 (using the member's id plus some fixed string as the key), and any request that sees a value greater than 1 is rejected. This guarantees that only one request gets through while the cache entry lives. That is the whole implementation. (Tested and working.)
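The counter idea above can be sketched as follows. `FakeRedis` is an in-memory stand-in so the example runs without a server; in production you would swap in a real client such as redis-py, whose `incr` and `expire` calls have the same shape (and are atomic on the server). The key prefix `checkin:guard:` is an illustrative name of my own, not from the original article.

```python
import time


class FakeRedis:
    """Minimal in-memory stand-in for a Redis client (illustration only)."""

    def __init__(self):
        self._store = {}  # key -> (value, expires_at or None)

    def incr(self, key):
        value, expires_at = self._store.get(key, (0, None))
        if expires_at is not None and time.time() >= expires_at:
            value, expires_at = 0, None  # entry expired: start over
        value += 1
        self._store[key] = (value, expires_at)
        return value

    def expire(self, key, seconds):
        if key in self._store:
            value, _ = self._store[key]
            self._store[key] = (value, time.time() + seconds)


def try_check_in(redis_client, member_id, ttl_seconds=60):
    """Return True if this request may proceed, False if it is a duplicate."""
    key = f"checkin:guard:{member_id}"  # member id + a fixed prefix as the key
    count = redis_client.incr(key)      # atomic INCR in real Redis
    if count == 1:
        # First request in the window: start the dedup countdown.
        redis_client.expire(key, ttl_seconds)
        return True
    return False
```

Only the first call inside the TTL window returns `True`; every duplicate sees a counter greater than 1 and is rejected, while a different member's key is unaffected.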

Of the several common ways to handle duplicate submissions, the Redis caching mechanism is, in my view, the best, and it is the one I rely on here.
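One refinement worth knowing, not from the original article but a standard Redis idiom: the INCR-then-EXPIRE pair above is two commands, so if the process dies between them the key can linger without a TTL. Redis's `SET key value NX EX ttl` does both in a single atomic command. A sketch under the same assumptions as before (`FakeRedis` mimics redis-py's `set(..., nx=True, ex=...)`, which returns `True` on success and `None` if the key already exists; the `guard:` prefix is hypothetical):

```python
import time


class FakeRedis:
    """In-memory stand-in mimicking redis-py's SET ... NX EX behaviour."""

    def __init__(self):
        self._expiry = {}  # key -> expires_at

    def set(self, key, value, nx=False, ex=None):
        now = time.time()
        alive = key in self._expiry and self._expiry[key] > now
        if nx and alive:
            return None  # key already set and unexpired: duplicate request
        self._expiry[key] = now + (ex if ex is not None else float("inf"))
        return True


def acquire_guard(client, member_id, ttl=60):
    """Set the guard key only if absent, with a TTL, in one atomic command."""
    return client.set(f"guard:{member_id}", "1", nx=True, ex=ttl) is True
```

Because the existence check, the write, and the expiry are one command, there is no window in which a crash can leave a permanent guard key behind.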
