Redis: cache penetration, cache breakdown, and cache avalanche

1. Cache penetration

Cache penetration occurs when a query asks for data that exists in neither the cache nor the database. Because most cache policies load data passively, and for fault-tolerance reasons nothing is written to the cache when the storage layer returns no data, every request for such a key goes straight through to the storage layer, defeating the purpose of the cache. If users keep issuing such requests under high traffic, they can put enormous pressure on the DB; an attacker deliberately hammering nonexistent keys can cause serious problems for the application.

Solutions:

1. When a key has no corresponding data in the database, cache a default value for it, such as "NULL", and give that placeholder an expiration time. Until the placeholder expires, all lookups for the key are absorbed by the cache. If data for the key later appears in the DB, the next access after the placeholder expires will miss the cache, query the DB, and pick up the new value.

2. Add validation at the interface layer, such as user authentication and basic sanity checks on request parameters; for example, intercept requests with id <= 0 directly.
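The two measures above can be sketched together. This is a minimal illustration using plain Python dicts to stand in for Redis and the database; all names (`NULL_SENTINEL`, `query`, the key format) are invented for the example, and a real deployment would use a Redis client instead:

```python
import time

# Toy stand-ins for Redis and the database.
cache = {}                    # key -> (value, expires_at)
db = {"item:1": "widget"}

NULL_SENTINEL = "NULL"        # placeholder cached for keys missing from the DB
NULL_TTL = 60                 # short TTL so real data can show up later
DATA_TTL = 300

def cache_get(key):
    entry = cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]
    return None

def cache_set(key, value, ttl):
    cache[key] = (value, time.time() + ttl)

def query(key):
    # Interface-layer check: reject obviously invalid ids up front.
    if not key or key.startswith("item:-"):
        return None
    value = cache_get(key)
    if value is not None:
        return None if value == NULL_SENTINEL else value
    value = db.get(key)
    if value is None:
        # Cache the miss so repeated requests stop at the cache, not the DB.
        cache_set(key, NULL_SENTINEL, NULL_TTL)
        return None
    cache_set(key, value, DATA_TTL)
    return value
```

After the first lookup of a nonexistent key, subsequent lookups within the placeholder's TTL never touch the DB.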

2. Cache breakdown

Cache breakdown occurs when the cache entry for a hotspot key (such as a flash-sale product) expires at exactly the moment traffic surges. A large number of concurrent requests arrive for that key, find the cache expired, and each tries to load the data from the backend DB and write it back to the cache. During the window before the data has been fully reloaded, all of these concurrent requests hit the DB directly, putting enormous pressure on it.

Cache breakdown, also known as the hot-key problem, is the most classic of these three problems.

Solutions:

1. Put a mutex around rebuilding a hot key, so that only one request reloads it from the DB.

2. Set hot data to never expire.

3. Apply resource protection and service degradation.

3. Cache avalanche

Cache avalanche occurs when a large batch of keys is given the same expiration time and they all expire at once, just as traffic surges; with the cache effectively empty, almost all requests fall through to the DB, whose load spikes instantly and may even bring the machine down. It differs from cache breakdown: breakdown is many concurrent requests for the same expired key, while an avalanche is many different keys expiring together, sending queries for many different pieces of data to the database.

Solutions:

1. Set hot data to never expire.

2. Add a random offset to each expiration time, so that large batches of keys do not expire at the same moment.

3. If the Redis cache is deployed as a cluster, distribute the hot keys evenly across the cache nodes.
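Item 2, randomized expiration, amounts to adding jitter when computing each key's TTL. A minimal sketch, with the base TTL and jitter window chosen arbitrarily for illustration:

```python
import random

BASE_TTL = 3600     # base expiry: one hour
JITTER = 600        # up to ten extra minutes, drawn per key

def ttl_with_jitter():
    # Spread expirations so keys written in the same batch
    # do not all expire at the same instant.
    return BASE_TTL + random.randint(0, JITTER)

# TTLs for a batch of 1000 keys written together.
ttls = [ttl_with_jitter() for _ in range(1000)]
```

Instead of 1000 keys expiring in the same second, expirations are now spread across a ten-minute window, so the reload traffic reaching the DB is smoothed out.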

Origin: blog.csdn.net/qq_33500554/article/details/91796421