MySQL and Redis Cache Data Consistency Schemes (Reprinted)

Background

In high-concurrency business scenarios, the database is in most cases the weakest link under concurrent access. So a Redis cache layer is introduced, and requests hit Redis first rather than going straight to MySQL or another database.

This article mainly addresses reading data from the Redis cache; in this scenario, reads generally proceed according to the flow chart: check the cache first, fall back to the database on a miss, then backfill the cache.
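As a minimal sketch of that read flow (in-memory maps stand in for Redis and MySQL here, since the original gives no concrete client API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CacheAsideRead {
    // In-memory stand-ins for Redis and MySQL, purely for illustration.
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final Map<String, String> database = new ConcurrentHashMap<>();

    // Cache-aside read: try Redis first; on a miss, read MySQL and
    // backfill the cache so the next request is served from Redis.
    static String read(String key) {
        String value = cache.get(key);
        if (value == null) {
            value = database.get(key);
            if (value != null) {
                cache.put(key, value); // backfill the cache
            }
        }
        return value;
    }
}
```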

The cached-read step is generally unproblematic, but when it comes to data updates, where both the database and the cache must change, inconsistency between the cache (Redis) and the database (MySQL) appears easily.

Whether you write the MySQL database first and then delete the Redis cache, or delete the cache first and then write the database, data inconsistency can occur. For example:

1. If the Redis cache is deleted first and another thread reads before MySQL has been written, it finds the cache empty, reads the old value from the database, and writes it back into the cache; the cache is now dirty.

2. If the database is written first and the writing thread goes down before it deletes the cache, the cache still holds the old value, and data inconsistency also occurs.

Because reads and writes run concurrently and their ordering cannot be guaranteed, cache/database inconsistency problems will arise.

How can this be solved? Two solutions follow, the simpler one first; choose based on your business needs and engineering cost.

Cache and database consistency solutions
1. The first scheme: delayed double delete

Perform a redis.del(key) operation both before and after writing the database, and set a reasonable sleep interval in between.

Pseudo code:

```java
public void write(String key, Object data) {
    redis.delKey(key);       // first delete
    db.updateData(data);     // write the database
    try {
        Thread.sleep(500);   // wait out in-flight reads (see below)
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    redis.delKey(key);       // second delete clears any stale backfill
}
```

2. The specific steps are:

1) Delete the cache

2) Write the database

3) Sleep for 500 ms

4) Delete the cache again
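To see why the second delete matters, the steps above can be simulated deterministically. In this sketch, in-memory maps stand in for Redis and MySQL, and the interleaving shown is exactly the worst case the 500 ms sleep is meant to cover:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DoubleDeleteDemo {
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final Map<String, String> database = new ConcurrentHashMap<>();

    // A concurrent reader doing cache-aside: miss -> read DB -> backfill.
    static String read(String key) {
        String v = cache.get(key);
        if (v == null) {
            v = database.get(key);
            if (v != null) cache.put(key, v);
        }
        return v;
    }

    // Worst-case interleaving of the four steps with one racing reader.
    static void writeWithRacingReader(String key, String newValue) {
        cache.remove(key);            // step 1: first delete
        read(key);                    // racing reader backfills the OLD value
        database.put(key, newValue);  // step 2: write the database
        // step 3: Thread.sleep(500) would happen here; the reader is done
        cache.remove(key);            // step 4: second delete clears stale data
    }
}
```

After the second delete, the stale backfill is gone and the next read pulls the new value from the database.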

So how was this 500 milliseconds determined? How long should the sleep actually be?

You need to measure how long your own project's read business logic takes. The purpose of the sleep is to ensure that the write request's final delete happens after concurrent read requests have finished, so it removes any dirty data those reads backfilled into the cache.

Of course, this strategy should also account for the time Redis and MySQL master-slave synchronization take. The final sleep time for the write: take the measured duration of the read business logic as a baseline and add a few hundred milliseconds, for example sleeping one second.

3. Set a cache expiration time

In theory, setting an expiration time on cached keys is the fallback that guarantees eventual consistency. All writes go to the database; once a cached key reaches its expiration time, subsequent read requests naturally fetch the new value from the database and backfill the cache.
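A minimal sketch of expiry, with a timestamped in-memory map standing in for Redis (a real deployment would simply set a TTL on the key, e.g. via SETEX):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TtlCache {
    // Each entry stores the value plus an absolute expiry time in millis.
    static final class Entry {
        final String value;
        final long expiresAt;
        Entry(String value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    static final Map<String, Entry> cache = new ConcurrentHashMap<>();

    // Analogous to Redis SETEX: store a value with a time-to-live.
    static void setex(String key, String value, long ttlMillis) {
        cache.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    // Expired entries read as misses, so the next read refills from the DB.
    static String get(String key) {
        Entry e = cache.get(key);
        if (e == null || System.currentTimeMillis() > e.expiresAt) {
            cache.remove(key);
            return null;
        }
        return e.value;
    }
}
```

Once an entry expires, `get` behaves as a cache miss, which is what forces the read path back to the database for the fresh value.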

4. Drawbacks of this scheme

Combining the double-delete strategy with cache-timeout settings, the worst case is that data remains inconsistent within the timeout window, and the latency of write requests is also increased.

2. The second scheme: asynchronously update the cache (binlog-based subscription and synchronization)

1. Overall technical idea:

MySQL binlog incremental subscription + message queue consumption + applying the incremental data updates to Redis

1) Read Redis: hot data is essentially all in Redis

2) Write MySQL: all CRUD operations go to MySQL

3) Update Redis: use MySQL's binlog of data operations to update Redis

2. Redis update procedure

1) Data operations fall into two categories:

One is full (writing all the data to Redis in one pass)

The other is incremental (real-time updates)

"Incremental" here refers to MySQL's update, insert, and delete change data.

2) After reading and parsing the binlog, use a message queue to push the data-update messages to each Redis node.

This way, whenever MySQL produces a new write, update, or delete operation, the corresponding binlog message can be pushed to Redis, and Redis then updates itself according to the binlog record.

In fact, this mechanism is very similar to MySQL's master-slave replication, because MySQL keeps its replicas consistent precisely through the binlog.

Here you can use canal (an open-source framework from Alibaba) to subscribe to the MySQL binlog; canal imitates the replication requests a MySQL slave sends to the master, so updating Redis this way achieves the same effect.
You can also use python-mysql-replication to read the MySQL binlog, together with pika (a Python RabbitMQ client) to publish and subscribe.
Of course, other third-party message-push tools such as Kafka or RabbitMQ can also be used to deliver the updates to Redis.
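A sketch of the consumer side of this pipeline, assuming parsed binlog events arrive from the message queue as simple (operation, key, value) records. The event shape here is an assumption for illustration; canal actually delivers richer row-change entries that you would map onto your cache keys:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class BinlogCacheUpdater {
    // Simplified stand-in for a parsed binlog row-change event type.
    enum Op { INSERT, UPDATE, DELETE }

    // In-memory stand-in for Redis, purely for illustration.
    static final Map<String, String> cache = new ConcurrentHashMap<>();

    // Apply one event consumed from the queue: inserts and updates set
    // the key, deletes remove it, keeping the cache in step with MySQL.
    static void apply(Op op, String key, String value) {
        if (op == Op.DELETE) {
            cache.remove(key);
        } else { // INSERT or UPDATE
            cache.put(key, value);
        }
    }
}
```

Because the cache is driven only by committed binlog events, the application's write path never touches Redis directly, which is what removes the read/write race of the first scheme.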

Reprinted
Author: Java Advanced Advanced Guide
Link: https://www.jianshu.com/p/b28fb9d5acb7


Origin www.cnblogs.com/vinic-xxm/p/11917998.html