Several scenarios in which Redis and MySQL data consistency comes up

1. MySQL holds the persistent data, Redis serves read-only data

After Redis starts, it loads its data from the database.

Read request:

Read requests that do not need strong consistency go to Redis; requests that need strong consistency read directly from MySQL.
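A minimal sketch of that read routing in Python with the redis-py client; the in-memory `mysql` dict is a hypothetical stand-in for the real database layer, used only so the example runs without a MySQL server:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
mysql = {"stock:1001": "100"}   # hypothetical stand-in for MySQL

def read(key, strong_consistency=False):
    if strong_consistency:
        # Strongly consistent reads bypass the cache and hit MySQL directly.
        return mysql.get(key)
    # All other reads are served from Redis, which was preloaded at startup.
    return r.get(key)
```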

Write request:

Write the data to the database first, then update Redis. (If you write Redis first and the MySQL write fails and the transaction rolls back, you are left with dirty data in Redis.)
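A minimal sketch of that write order, under the same assumptions as above (redis-py client, dict as a hypothetical MySQL stand-in); the point is simply that Redis is only touched after the database write has succeeded:

```python
import redis

r = redis.Redis(decode_responses=True)
mysql = {}                      # hypothetical stand-in for MySQL

def write(key, value):
    # 1. Write the database first (inside a transaction in a real system).
    #    If this write fails and rolls back, Redis is never touched,
    #    so no dirty data ends up in the cache.
    mysql[key] = value
    # 2. Only after the database write succeeds, update Redis.
    r.set(key, value)
```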

2. MySQL and Redis handle different types of data

MySQL handles data with real-time consistency requirements, such as financial and transaction data.

Redis handles data with weaker real-time requirements, such as the site's hottest-posts list, friend lists, and so on.

When concurrency is not high: for reads, check Redis first; if the value is not there, go to MySQL and write the result back into Redis. For writes, write directly to MySQL, and after the write succeeds also write to Redis (you can define a trigger on MySQL CRUD operations that writes the data to Redis when it fires, or parse the binlog on the Redis side and replay the corresponding operations).
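A minimal sketch of this low-concurrency path (read Redis first, fall back to MySQL and backfill; write MySQL first, then Redis). The trigger and binlog variants mentioned above are not shown, and the `mysql` dict is again a hypothetical stand-in:

```python
import redis

r = redis.Redis(decode_responses=True)
mysql = {}                           # hypothetical stand-in for MySQL

def read(key):
    value = r.get(key)
    if value is not None:
        return value                 # cache hit
    value = mysql.get(key)           # cache miss: read MySQL
    if value is not None:
        r.set(key, value)            # write the result back into Redis
    return value

def write(key, value):
    mysql[key] = value               # write MySQL first
    r.set(key, value)                # after it succeeds, mirror into Redis
```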

When concurrency is high: reads work as above; writes are asynchronous. Write to Redis and return immediately, then write the data to MySQL on a schedule.
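A rough sketch of that asynchronous write path under the same assumptions: the write goes to Redis and returns immediately, while a background job periodically flushes dirty keys into MySQL. The dirty-key tracking and the 5-second interval are illustrative choices, not from the original post:

```python
import threading
import time

import redis

r = redis.Redis(decode_responses=True)
mysql = {}                           # hypothetical stand-in for MySQL
dirty_keys = set()                   # keys written to Redis but not yet MySQL

def async_write(key, value):
    r.set(key, value)                # write Redis and return right away
    dirty_keys.add(key)

def flush_to_mysql(interval=5):
    # Periodically persist whatever is dirty back into MySQL.
    while True:
        time.sleep(interval)
        for key in list(dirty_keys):
            mysql[key] = r.get(key)
            dirty_keys.discard(key)

threading.Thread(target=flush_to_mysql, daemon=True).start()
```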

A few examples:

1. When updating data, for example a product's inventory: the inventory is currently 100 and should become 99. We first update the database to 99 and then delete the cache; if deleting the cache fails, the database says 99 while the cache still says 100, so the database and the cache are inconsistent.

Solution: 
In this situation, delete the cache first and then update the database. If deleting the cache fails, do not update the database at all. If the delete succeeds but the database update fails, the next query simply misses the cache and reads the old value from the database, so the database and the cache remain consistent.
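A minimal sketch of the "delete the cache first, then update the database" order: a failed cache delete aborts the whole update, and a failed database update just means the next read reloads the old value, so both sides stay consistent. The `mysql` dict is the same hypothetical stand-in:

```python
import redis

r = redis.Redis(decode_responses=True)
mysql = {"stock:1001": "100"}        # hypothetical stand-in for MySQL

def update_stock(product_id, new_stock):
    key = f"stock:{product_id}"
    try:
        r.delete(key)                # 1. delete the cache first
    except redis.RedisError:
        return False                 # delete failed: do NOT update the database
    try:
        mysql[key] = str(new_stock)  # 2. then update the database
    except Exception:
        # Database update failed: the cache is already empty, so the next
        # read simply reloads the old value from MySQL and stays consistent.
        return False
    return True
```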

2. Under high concurrency: suppose the cache has just been deleted and the database update has not finished yet, and another request queries the data, finds nothing in the cache, and goes to the database. Using the inventory example again: the database still holds 100, so the query reads 100 and inserts it into the cache; after that, the original thread finishes updating the database to 99, and the database and the cache are inconsistent again.

Solution:
When this can happen, use queues to serialize the work. Create several queues, say 20; hash the item ID and take it modulo the number of queues to pick the queue. A data-update request is first pushed onto its queue and is executed when it is taken off. If, while an update is in flight, a read hits the scenario above (it checks the cache and finds nothing), it looks at the queue to see whether an update for the same item ID is in progress; if so, it also pushes a query request onto that queue and waits for the cache refresh to complete.
One optimization: if a query request for that item is already in the queue, do not enqueue another one. Instead, poll the cache in a loop for roughly 200 ms; if the cache still has no value by then, read the (possibly old) value directly from the database.
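A rough sketch of the queue-based serialization plus the polling optimization, under stated assumptions: 20 in-process queues, one worker thread per queue, routing by `hash(product_id) % 20`, and a read path that polls the cache for about 200 ms before falling back to the (possibly stale) database value. All names here are illustrative, and the variant that enqueues duplicate query requests is not shown:

```python
import queue
import threading
import time

import redis

NUM_QUEUES = 20
queues = [queue.Queue() for _ in range(NUM_QUEUES)]
r = redis.Redis(decode_responses=True)
mysql = {}                                       # hypothetical stand-in for MySQL

def route(product_id):
    # Hash the item ID and take it modulo the queue count.
    return queues[hash(product_id) % NUM_QUEUES]

def enqueue_update(product_id, new_stock):
    route(product_id).put((product_id, new_stock))

def worker(q):
    # All updates for one product land in the same queue, so
    # delete-cache / update-DB / refresh-cache runs serially per product.
    while True:
        product_id, new_stock = q.get()
        key = f"stock:{product_id}"
        r.delete(key)
        mysql[key] = str(new_stock)
        r.set(key, new_stock)                    # refresh the cache

for q in queues:
    threading.Thread(target=worker, args=(q,), daemon=True).start()

def read_stock(product_id, timeout=0.2):
    key = f"stock:{product_id}"
    deadline = time.time() + timeout
    while time.time() < deadline:                # poll the cache for ~200 ms
        value = r.get(key)
        if value is not None:
            return value
        time.sleep(0.01)
    # Still no cache entry: take the (possibly old) value from the database.
    return mysql.get(key)
```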

Second, pay attention to the problem scenarios this solution must handle under high concurrency:

(1) Read requests blocking for too long

Because the read path is now slightly asynchronous, pay close attention to read timeouts: every read request must return within the timeout window. The biggest risk of this solution is that data is updated very frequently, a large number of update operations pile up in the queues, many read requests then time out, and a large number of requests fall through directly to the database. For this scenario, do enough realistic load testing; if the load is too high, add machines according to actual needs.

(2) Request concurrency too high

Again, do load testing against several realistic scenarios: what the QPS is at peak concurrency, whether the machines can carry it (add machines if they cannot), and what the read/write ratio looks like.

(3) Request routing with multiple service instances deployed

The service may be deployed as multiple instances, so you must ensure that requests performing data updates and requests performing cache refreshes are routed through the nginx server to the same service instance.

(4) Hot items skewing the routing

Read requests for certain hot items can be extremely heavy, and they all land in the same queue on the same machine, which may put too much pressure on that server. However, the cache is only emptied when the product data is updated, and only then does read/write concurrency arise, so if the update frequency is not too high the impact of this problem is limited, though some servers may indeed carry a higher load.

 
