On Redis Caching: Cache Avalanche, Cache Breakdown, and Cache Penetration

In day-to-day development, when a system involves high concurrency and large amounts of data, we often use Redis as a cache. The cache, however, can run into problems such as cache avalanche, cache penetration, and cache breakdown. Here I use simple examples to explain each problem and its solutions.

Cache Avalanche

Suppose I launch a new website. On the first day a few people visit, which makes me happy, but I have overlooked one thing: the cache entries created for that day's visits all expire at almost the same time. Later, as more people visit, a large number of requests will hit the database at that moment, and the database may go down because it cannot bear the load.
When the cache server restarts, or a large number of cache entries expire within the same short period, this is called a cache avalanche.

Solutions

  1. Set a different expiration time for each key. For example, add a random value of 0-10 minutes to the default expiration time to avoid keys expiring all at once (see the sketch after this list).

  2. Set up a secondary cache with a doubled expiration time. When the primary cache expires, a notification triggers a background thread to rebuild the real cache, and the old data from the secondary cache is returned in the meantime.

  3. Warm the cache. Load the relevant data into the cache before the formal deployment, and refresh it with a scheduled task after going live.
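
As a rough illustration of point 1, here is a minimal sketch using redis-py; the host, key name, and TTL values are only assumptions for the example:

```python
import random
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

BASE_TTL = 3600  # default expiration time: one hour

def set_with_jitter(key: str, value: str) -> None:
    # Add a random 0-10 minute offset so keys written together
    # do not all expire at the same moment.
    jitter = random.randint(0, 600)
    r.set(key, value, ex=BASE_TTL + jitter)

set_with_jitter("article:42", "cached page html")
```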

Cache Breakdown

Suppose a big star publishes a blog post on the site and a huge number of people come just to read it. At the moment the related cache entry expires, those requests may all reach the database at once.
When a single piece of hot data receives highly concurrent access right as it expires, this is called cache breakdown.

Solutions

  1. Set hot data to never expire.
  2. After a cache miss, use a mutex to limit how many threads can read the database and rewrite the cache. When a thread finds that the key has expired, it acquires a lock on that key and then loads the value from the database; other threads wait instead of querying the database themselves. The lock must have an expiration time of its own, otherwise, if the thread holding it hangs, everyone else is blocked permanently (see the sketch after this list).
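
A rough sketch of the mutex approach in point 2, again with redis-py; `load_from_db` is a hypothetical database query, and the lock and cache TTLs are only example values:

```python
import time
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def load_from_db(key: str) -> str:
    # Hypothetical placeholder for the real database query.
    return "value loaded from the database"

def get_hot_key(key: str, lock_ttl: int = 10, cache_ttl: int = 3600):
    value = r.get(key)
    if value is not None:
        return value

    lock_key = f"lock:{key}"
    # SET NX EX: only one caller acquires the lock, and the lock expires
    # on its own, so a crashed thread cannot block everyone else forever.
    if r.set(lock_key, "1", nx=True, ex=lock_ttl):
        try:
            value = load_from_db(key)
            r.set(key, value, ex=cache_ttl)  # rebuild the cache
            return value
        finally:
            r.delete(lock_key)
    # Someone else is rebuilding the cache: wait briefly, then retry.
    time.sleep(0.05)
    return get_hot_key(key, lock_ttl, cache_ttl)
```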

Cache Penetration

Suppose I have not set up a firewall, and a malicious user uses a tool such as Postman to send unreasonable requests, e.g. id = -1. No such data can exist in the cache, so every request goes to the database; a sustained attack puts heavy pressure on the database and may even bring it down.
When the requested key does not exist at all, so requests bypass the cache and hit the database directly, this is called cache penetration.

Solutions

  1. Configure a server firewall, and monitor and restrict IPs that send large numbers of illegal requests. This is ineffective against attackers who keep switching IPs.
  2. Add validation at the interface layer. For example, check user authentication and validate parameters, filtering out invalid ones.
  3. When the database returns nothing, still cache the empty result, setting a short expiration time for the null value (see the sketch after this list). The drawback is that many null values may end up cached, which can also affect data consistency.
  4. Use a Bloom filter. Hash all possibly existing data into a sufficiently large bitmap; if any of a key's bits is 0, the data definitely does not exist, so the request returns directly without touching the database. The drawbacks are that the Bloom filter becomes large when the total data volume is large, and it is not perfectly accurate: a non-existent key may happen to collide with bits set by other data (a false positive).
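
As a sketch of point 3 (caching empty results), the snippet below assumes redis-py and a hypothetical `query_db` lookup; the sentinel value and TTLs are arbitrary:

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

NULL_MARKER = b"__NULL__"  # sentinel meaning "the database has no such row"

def query_db(user_id: str):
    # Hypothetical placeholder for the real database lookup.
    return None  # pretend the id does not exist

def get_user(user_id: str):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return None if cached == NULL_MARKER else cached

    row = query_db(user_id)
    if row is None:
        # Cache the miss briefly so repeated bogus ids (e.g. id = -1)
        # stop hammering the database.
        r.set(key, NULL_MARKER, ex=60)
        return None

    r.set(key, row, ex=3600)
    return row
```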

Summary

"One foot in mind that" of course we can also take advantage of high availability features built redis clusters , or when the traffic surge, the service issues arise, in order to ensure core services are still available, non-essential services to return some default value, or tips, allowing users to refresh again.
