Redis cache breakdown, penetration, and avalanche: causes and solutions

1. Cache breakdown
There is no data in the cache, but the data exists in the database. Highly concurrent requests all rush to the database at the same time, and the pressure on the DB spikes.

When Redis is asked for a key that is not in the cache, the request that then goes to the DB is what we call "cache breakdown". The data is missing from the cache but present in the database (usually because the cached entry has expired). Because so many users request the key concurrently, none of them find it in the cache, so they all query the database at the same time and the database load rises sharply.

Causes of breakdown:
First-time access (the key has never been cached)
Malicious access to a non-existent key
The key has expired

Reasonable mitigations:
Warm the cache when the server starts, writing hot keys in advance
Standardize key naming and intercept invalid keys at a middleware layer
Use a mutex lock so that only one request rebuilds an expired key (see the sketch after this list)
For some high-frequency keys, set a reasonable TTL or never let them expire
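
A minimal sketch of the mutex-lock approach, assuming a redis-py client and a hypothetical load_from_db() helper; only the request that wins the lock rebuilds the cache, while the others briefly retry against the cache:

```python
import time
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def get_with_mutex(key, ttl=300, lock_ttl=10, retries=50):
    """Read-through cache that lets only one caller rebuild an expired key."""
    for _ in range(retries):
        value = r.get(key)
        if value is not None:
            return value
        # SET NX acts as a mutex: only one request wins and queries the DB.
        if r.set(f"lock:{key}", "1", nx=True, ex=lock_ttl):
            try:
                value = load_from_db(key)      # hypothetical DB query
                r.set(key, value, ex=ttl)      # repopulate the cache
                return value
            finally:
                r.delete(f"lock:{key}")        # release the lock
        time.sleep(0.05)                       # others wait and retry the cache
    return None

def load_from_db(key):
    # Placeholder for the real database lookup.
    return "value-from-db"
```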

2. Cache penetration
There is no data in the cache and no data in the database. Malicious requests can overwhelm the database.

The data corresponding to the key exists neither in the cache nor in the data source. Every request for this key misses the cache and goes straight to the data source, which may overwhelm it. For example, requesting user information with a user id that does not exist will miss both the cache and the database; if a hacker exploits this to flood the system with such requests, the database may be overwhelmed.

Reasonable mitigations:
Bloom filter: hash all possible keys into a sufficiently large bitmap; a key that definitely does not exist is intercepted by the bitmap, which shields the underlying storage system from those queries.
Cache empty objects: when the storage layer misses, cache the returned empty object as well and give it a short expiration time; later requests for this key are answered from the cache, protecting the back-end data source (see the sketch below).
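
A minimal sketch of the cache-empty-objects approach, again assuming redis-py and a hypothetical query_db() helper; a key that is missing from both cache and database is stored as a sentinel with a short TTL, so repeated requests for the same non-existent key stop reaching the database:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

NULL_PLACEHOLDER = "__NULL__"   # sentinel meaning "not in the database"

def get_user(user_id, ttl=600, null_ttl=60):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        # A cached sentinel means the DB was already checked and has no row.
        return None if cached == NULL_PLACEHOLDER else cached
    row = query_db(user_id)                 # hypothetical DB lookup
    if row is None:
        # Cache the miss with a short TTL so attackers cannot hammer the DB.
        r.set(key, NULL_PLACEHOLDER, ex=null_ttl)
        return None
    r.set(key, row, ex=ttl)
    return row

def query_db(user_id):
    # Placeholder for the real database query.
    return None
```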

3. Cache avalanche
A large amount of cached data expires at the same time, and highly concurrent requests rush to the database, which can bring it down.

A large amount of cached data reaches its expiration time at once while the query volume is huge, putting excessive pressure on the database and possibly causing downtime. The difference from cache breakdown: breakdown is many concurrent queries for the same piece of data, while an avalanche means many different keys have expired, so many queries miss the cache and go to the database.

Reasonable circumvention plan:

1. Add a random offset to the expiration time of cached data to prevent a large amount of data from expiring at the same time (see the sketch after this list).
2. If the cache is deployed as a distributed cluster, spread hot data evenly across the different cache nodes.
3. Set hotspot data to never expire.
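
A minimal sketch of randomized expiration, assuming redis-py; each key gets its base TTL plus a random jitter, so a batch of keys written together does not expire together:

```python
import random
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def set_with_jitter(key, value, base_ttl=3600, jitter=600):
    """Write a key with base_ttl plus a random 0..jitter seconds offset."""
    ttl = base_ttl + random.randint(0, jitter)
    r.set(key, value, ex=ttl)

# Example: warming a batch of keys that would otherwise share one TTL.
for user_id in range(1000):
    set_with_jitter(f"user:{user_id}", "cached-profile")
```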

If anything here falls short, please point it out, thanks!

Origin blog.csdn.net/d1332508051/article/details/107847047