Redis cache breakdown, cache penetration, and cache avalanche

1. Cache breakdown

  • Definition: keys in the cache generally have an expiration time. When a hot key expires at a moment when there are many concurrent requests for it, all of those requests miss the cache and hit the DB at once, which can overwhelm the database.

  • Solutions:

  • 1. Use a mutex lock: when the cache misses, do not let every request query the database. Instead, each thread first tries to acquire a lock using an atomic cache operation that reports whether it succeeded, such as Redis SETNX (set if not exists) or Memcache add. Only the thread that acquires the lock queries the database and rebuilds the cache; the others wait briefly and retry the cache. Disadvantages: may cause deadlock or thread-pool blocking.

  • 2. Acquire the mutex in advance: the key's TTL in Redis is timeout1, and a logical timeout timeout2 is stored inside the value, with timeout2 < timeout1. When timeout2 expires, a thread extends timeout2 and resets the value in Redis, rebuilding the cache before the physical TTL is reached.

  • 3. Never expire (for hotspot data): set no physical expiration time on the key; store the expiration time inside the value instead. When the value is about to logically expire, rebuild the cache from a background asynchronous thread. This is called logical expiration.
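The mutex approach above can be sketched as follows. This is a minimal illustration, not a production implementation: `InMemoryCache` is a stand-in for Redis with SETNX-like semantics, and `load_from_db` is a hypothetical database query.

```python
import threading
import time

class InMemoryCache:
    """Tiny stand-in for Redis: stores (value, expires_at) pairs."""
    def __init__(self):
        self._data = {}
        self._lock = threading.Lock()

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expires_at = item
        if expires_at is not None and time.time() > expires_at:
            del self._data[key]
            return None
        return value

    def setnx(self, key, value, ttl=None):
        """Set key only if absent; returns True on success (like Redis SETNX)."""
        with self._lock:
            if self.get(key) is not None:
                return False
            expires_at = time.time() + ttl if ttl else None
            self._data[key] = (value, expires_at)
            return True

    def set(self, key, value, ttl=None):
        expires_at = time.time() + ttl if ttl else None
        self._data[key] = (value, expires_at)

    def delete(self, key):
        self._data.pop(key, None)

cache = InMemoryCache()

def load_from_db(key):
    # Hypothetical slow database query.
    return f"db-value-for-{key}"

def get_with_mutex(key, ttl=60, lock_ttl=3):
    """Cache-breakdown protection: only one caller rebuilds an expired key."""
    value = cache.get(key)
    if value is not None:
        return value
    lock_key = f"lock:{key}"
    if cache.setnx(lock_key, "1", ttl=lock_ttl):  # acquired the mutex
        try:
            value = load_from_db(key)
            cache.set(key, value, ttl=ttl)
            return value
        finally:
            cache.delete(lock_key)
    # Another caller holds the lock: wait briefly, then retry the cache.
    time.sleep(0.05)
    return get_with_mutex(key, ttl, lock_ttl)
```

With real Redis you would use `SET key value NX EX lock_ttl` so the lock itself expires, which is what prevents the deadlock the text warns about if the lock holder crashes.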

2. Cache penetration

  • Definition: refers to requests for a key that does not exist in the database. Since the database holds no such value, nothing is ever cached for it, so every request goes straight to the database; heavy concurrent access to such keys can overwhelm the database.

Solutions:

  • 1. Use a Bloom filter: hash all keys that actually exist in the database into a large bitmap. A request for a key that is not in the bitmap is intercepted before it ever reaches the database.

  • 2. If a key cannot be found in either the cache or the database, cache it as key-null with a short expiration time, such as 30 seconds (setting it too long would keep normal requests from seeing newly written data). This prevents an attacker from brute-forcing the database by repeatedly requesting the same non-existent id.

  • 3. Intercept invalid requests at the entry point, for example directly rejecting any request with id <= 0.
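The Bloom-filter idea in solution 1 can be sketched in a few dozen lines. This is a simplified illustration using `hashlib` from the standard library; the `lookup` function and the preloaded ids are hypothetical, and real systems would use a tuned library implementation instead.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over an m-bit array."""
    def __init__(self, size_bits=1 << 20, num_hashes=5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key):
        # Derive k positions by salting the key with the hash index.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        # False positives are possible; false negatives are not.
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

# Preload the filter with every id that actually exists in the database.
bf = BloomFilter()
for existing_id in ("user:1", "user:2", "user:3"):
    bf.add(existing_id)

def lookup(key):
    if not bf.might_contain(key):
        return None  # intercepted: never reaches cache or database
    return f"query cache/db for {key}"  # hypothetical downstream lookup
```

Because a Bloom filter can return false positives but never false negatives, a rejected key is guaranteed absent from the database, which is exactly what makes it safe as a penetration shield.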

3. Cache avalanche

  • Definition: refers to a large amount of cached data expiring at the same time. The requests that would normally be served from the cache all go directly to the database, putting excessive pressure on it.

Solutions:

  • 1. Add locks or queues (such as MQ) to ensure single-threaded writes to the cache, so that a flood of concurrent requests does not reach the database when keys expire.

  • 2. Stagger cache expiration times, for example by adding a random value of 1-5 minutes to the base TTL. This reduces the overlap in expiration times and lowers the probability of keys expiring collectively.
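The TTL-staggering technique in solution 2 is a one-liner; the sketch below shows it with an assumed one-hour base TTL and a 0-5 minute random offset (both numbers are illustrative, not from the original text).

```python
import random

BASE_TTL = 3600  # assumed base expiration of one hour, in seconds

def jittered_ttl(base=BASE_TTL, max_jitter=300):
    """Add a random 0-5 minute offset so keys do not all expire together."""
    return base + random.randint(0, max_jitter)

# Writing many keys with jittered TTLs spreads their expiry over the window.
ttls = [jittered_ttl() for _ in range(1000)]
```

A key written with `SET key value EX jittered_ttl()` then expires somewhere in the 60-65 minute window instead of at exactly the same instant as its neighbors.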

Origin blog.csdn.net/JISOOLUO/article/details/104686923