How MySQL and Redis keep their data consistent

For a small company's single-server setup, it is usually enough to update or delete the Redis cache whenever the corresponding MySQL data is updated or deleted. Generally there are two ways to order the operations (a sketch of the first option follows the list):

  1. Update MySQL first, then delete (or update) Redis
  2. Delete (or update) Redis first, then update MySQL
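A minimal sketch of option 1, assuming the Jedis client for Redis and plain JDBC for MySQL (the connection string, table, column and key names are placeholders, not from the original article):

import redis.clients.jedis.Jedis;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class UserCacheWriter {

    // Option 1: update MySQL first, then delete the Redis key.
    public static void updateUserName(long userId, String newName) throws Exception {
        // 1. Update the row in MySQL.
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/test", "root", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "UPDATE user SET name = ? WHERE id = ?")) {
            ps.setString(1, newName);
            ps.setLong(2, userId);
            ps.executeUpdate();
        }
        // 2. Delete the cached copy so the next read repopulates it from MySQL.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.del("user:" + userId);
        }
    }
}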

But whichever order is used, two problems can arise:

  1. The two steps are not atomic, so there is a short time window between them. A request that arrives inside that window may read inconsistent data; for example, with option 1 another request can still read the stale cached value after MySQL has been updated but before the Redis key has been deleted.
  2. The system may fail after the first step has completed but before the second step does, which leaves the data in MySQL and Redis inconsistent.

Solutions:

Delayed double deletion strategy

The delayed double-deletion strategy is a common way to keep database and cache data consistent in a distributed system, but it does not provide strong consistency. In fact, no matter which solution is used, dirty data in Redis cannot be avoided completely, only mitigated; eliminating it entirely requires synchronization locks and support at the business-logic level.

When the business program is running, measure how long the read-from-database-and-write-to-cache path of the business logic takes, and estimate the delay from that. Because this scheme deletes the cache value again after waiting for a period of time following the first deletion, it is called "delayed double deletion".

The flow is: clear the cache first, then update the database, and finally (after a delay of N seconds) clear the cache again, i.e. two deletions with a delay in between:

RedisUtils.del(key);        // 1. delete the cache first
updateDB(user);             // 2. update the data in the database
Thread.sleep(N * 1000L);    // 3. wait N seconds (Thread.sleep takes milliseconds)
RedisUtils.del(key);        // 4. delete the cache key again

The delay of N seconds above must be longer than the time a concurrent read request needs to write the value back into Redis. Reason: if the delay is shorter than that write time, request 1 performs its second cache deletion while request 2 has not yet finished writing the (stale) value into the cache, so the stale value written afterwards is never removed.

The delayed double-deletion strategy is only one means of synchronizing the database and the cache. It can work when system concurrency is not high, but the delay before the second deletion is difficult to tune.
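One practical drawback of the snippet above is that Thread.sleep blocks the calling thread for the whole delay. Below is a minimal sketch of the same strategy with the second deletion scheduled asynchronously, assuming the Jedis client and a shared single-thread scheduler (this variant and its names are not from the original article, just one common way to implement the delay):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DelayedDoubleDelete {
    private final JedisPool pool = new JedisPool("localhost", 6379);   // assumed Redis address
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void updateWithDoubleDelete(String key, Runnable dbUpdate, long delaySeconds) {
        // 1. First deletion of the cache key.
        try (Jedis jedis = pool.getResource()) {
            jedis.del(key);
        }
        // 2. Update the database (the caller supplies the actual write).
        dbUpdate.run();
        // 3. Second deletion after the delay, without blocking the caller.
        scheduler.schedule(() -> {
            try (Jedis jedis = pool.getResource()) {
                jedis.del(key);
            }
        }, delaySeconds, TimeUnit.SECONDS);
    }
}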

Asynchronously synchronize data to Redis through MQ

In this approach, whenever the database is written (insert, delete or update), the corresponding change is also published to the message queue server through MQ; the MQ client subscribes to the messages and applies them to the Redis cache. This method requires the ordering of the messages to be guaranteed, and under high concurrency there will be a certain delay.
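The article does not name a specific message queue or Redis client; the following is a minimal sketch assuming Apache Kafka, the Jedis client, and a hypothetical topic "user-cache-sync" (addresses and the key scheme are placeholders):

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import redis.clients.jedis.Jedis;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class MqCacheSync {

    // Producer side: after the database write succeeds, publish the change event.
    // Using the user id as the message key keeps all events for the same row in
    // one partition, which preserves their order.
    public static void publishUpdate(KafkaProducer<String, String> producer,
                                     String userId, String userJson) {
        producer.send(new ProducerRecord<>("user-cache-sync", userId, userJson));
    }

    // Consumer side: apply each change event to the Redis cache.
    public static void runConsumer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "redis-cache-sync");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
             Jedis jedis = new Jedis("localhost", 6379)) {
            consumer.subscribe(Collections.singletonList("user-cache-sync"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Overwrite the cached entry with the latest value from the event.
                    jedis.set("user:" + record.key(), record.value());
                }
            }
        }
    }
}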

Subscribe to the database binlog through the Canal component

In this approach, the client subscribes to MySQL's binlog: every insert, delete or update event is converted into JSON data and published through MQ, and the consumer subscribes to the messages and applies them to the Redis cache. This handles well the inconsistency between MySQL and Redis caused by data updates in highly concurrent, distributed services, and even data changed manually through a database client with raw SQL is synchronized to Redis.
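Below is a minimal sketch of a Canal client that reads binlog row events and invalidates the corresponding Redis keys directly (the article routes the events through MQ first; this sketch skips that hop for brevity, and the server address, destination, table filter and key scheme are assumptions):

import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.CanalEntry;
import com.alibaba.otter.canal.protocol.Message;
import redis.clients.jedis.Jedis;

import java.net.InetSocketAddress;

public class CanalRedisSync {
    public static void main(String[] args) throws Exception {
        // Connect to the canal server; address and destination are assumptions.
        CanalConnector connector = CanalConnectors.newSingleConnector(
                new InetSocketAddress("127.0.0.1", 11111), "example", "", "");
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            connector.connect();
            connector.subscribe("test\\.user");               // only watch the test.user table
            while (true) {
                Message message = connector.getWithoutAck(100); // fetch a batch of binlog entries
                long batchId = message.getId();
                if (batchId == -1 || message.getEntries().isEmpty()) {
                    Thread.sleep(1000);                         // no new entries yet
                    continue;
                }
                for (CanalEntry.Entry entry : message.getEntries()) {
                    if (entry.getEntryType() != CanalEntry.EntryType.ROWDATA) {
                        continue;                               // skip transaction begin/end entries
                    }
                    CanalEntry.RowChange rowChange =
                            CanalEntry.RowChange.parseFrom(entry.getStoreValue());
                    for (CanalEntry.RowData rowData : rowChange.getRowDatasList()) {
                        // Invalidate the cached row; the next read repopulates it from MySQL.
                        // Assumes the first column of the row is the primary key.
                        String id = rowChange.getEventType() == CanalEntry.EventType.DELETE
                                ? rowData.getBeforeColumns(0).getValue()
                                : rowData.getAfterColumns(0).getValue();
                        jedis.del("user:" + id);
                    }
                }
                connector.ack(batchId);                         // confirm the batch was processed
            }
        } finally {
            connector.disconnect();
        }
    }
}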

Origin: blog.csdn.net/lwpoor123/article/details/130240148