Cache penetration, cache breakdown, and cache avalanche: what they are and how to deal with them

Usually, we use caching middleware such as Redis to hold the database's hot data, so that as many reads as possible are served from the cache. The purpose is to reduce the pressure on the database.


So what are cache penetration, cache breakdown, and cache avalanche?

Cache Penetration

When a key does not exist in Redis, the request falls through to the database. If the data does not exist in the database either, nothing is ever written back to the cache, so every such request queries Redis and then the database anyway. This is cache penetration.


For example, suppose a query interface only serves IDs in the range 1~1000, but a large number of malicious users request IDs from 1000 ~ … to query information that does not exist.

This puts huge pressure on the database, and the Redis cache becomes useless.

Solution

There are two general approaches:

Method 1: Use a Bloom filter. First add all the keys that exist in Redis to the Bloom filter. When the Bloom filter says a key does not exist, it definitely does not exist in Redis, so most malicious requests can be filtered out. Note that the Bloom filter is not 100% accurate: when it says a key exists, the key may in fact not exist in Redis.
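
Below is a minimal sketch of Method 1, assuming Guava's BloomFilter is available (any Bloom filter implementation, such as Redisson's RBloomFilter, follows the same pattern); the class name, the Long ID type, the capacity, and the false-positive rate are illustrative assumptions, not code from the original article.

    import com.google.common.hash.BloomFilter;
    import com.google.common.hash.Funnels;

    public class IdBloomFilter {

        // Expect up to 1,000 valid IDs with a 1% false-positive rate (illustrative numbers)
        private final BloomFilter<Long> idFilter =
                BloomFilter.create(Funnels.longFunnel(), 1_000, 0.01);

        // Add every ID that actually exists, e.g. loaded from the database at startup
        public void preload(Iterable<Long> allValidIds) {
            allValidIds.forEach(idFilter::put);
        }

        // false -> the ID definitely does not exist: reject the request immediately
        // true  -> the ID probably exists (false positives possible): continue to Redis / the database
        public boolean mayExist(Long id) {
            return idFilter.mightContain(id);
        }
    }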

Method 2: Cache an empty value

When the database query returns no data, cache an empty value in Redis as well and return it, so that the next time the same request comes in, Redis can answer it directly without querying the database.
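
A minimal sketch of Method 2, assuming a Spring Data Redis StringRedisTemplate; the key prefix, the queryFromDb helper, and the TTL values are illustrative assumptions. Giving the empty value a short expiration keeps it from occupying the cache for long.

    public String getById(Long id) {
        String key = "item:" + id;
        String cached = redisTemplate.opsForValue().get(key);
        if (cached != null) {
            // Either real data or the cached empty marker: in both cases the database is skipped
            return cached.isEmpty() ? null : cached;
        }
        String value = queryFromDb(id); // hypothetical database query
        if (value == null) {
            // Cache an empty value with a short TTL so repeated misses do not hit the database
            redisTemplate.opsForValue().set(key, "", 60, java.util.concurrent.TimeUnit.SECONDS);
            return null;
        }
        redisTemplate.opsForValue().set(key, value, 30, java.util.concurrent.TimeUnit.MINUTES);
        return value;
    }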

Cache Breakdown

When a hot key in Redis expires at a certain moment, a large number of concurrent requests for that key all hit the database at once, which may bring the database down. This is cache breakdown.

The difference from cache penetration: in cache breakdown the key has expired in Redis but the data does exist in the database, while in cache penetration the data exists neither in Redis nor in the database.


The core problem with cache breakdown is that a large number of requests all fail to find the data in Redis and turn to the database, even though the data does exist there. So we only need to control how many requests actually reach the database:

Let a single request query the database, write the result back to Redis once it is found, and then let the other requests read from Redis, which by then already holds the data.

Solution

Add a distributed lock. It can be implemented with Redis or MySQL; implementing a distributed lock with Redis is relatively simple.

Check whether the lock key exists. If it does not, create the key and value and set an expiration time at the same time; to prevent deadlock, the set-if-absent and the expiration must be applied atomically, which is what setIfAbsent with a timeout does.

    // Atomic operation: set the lock value (a UUID) with a 30-second expiration
    Boolean aBoolean = redisTemplate.opsForValue().setIfAbsent(RedisConst.SKUKEY_LOCK + skuId, uuid, 30, TimeUnit.SECONDS);

The key uses a business prefix plus an identifier, and the value is set to a unique identifier of the lock holder (the UUID above), so that after the lock expires, one thread cannot accidentally delete a lock that is now held by another thread.
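
Putting it together, here is a minimal sketch of the whole breakdown-protection flow, assuming a Spring Data Redis StringRedisTemplate; the key prefixes, TTL values, back-off interval, and the loadFromDb placeholder are illustrative assumptions rather than the article's original code.

    public class SkuCacheService {

        private final org.springframework.data.redis.core.StringRedisTemplate redisTemplate;

        public SkuCacheService(org.springframework.data.redis.core.StringRedisTemplate redisTemplate) {
            this.redisTemplate = redisTemplate;
        }

        public String getSkuInfo(Long skuId) {
            String cacheKey = "sku:info:" + skuId;
            String cached = redisTemplate.opsForValue().get(cacheKey);
            if (cached != null) {
                return cached; // cache hit, no database access needed
            }

            String lockKey = "sku:lock:" + skuId;
            String uuid = java.util.UUID.randomUUID().toString();
            // Atomic "set if absent" with a 30-second expiration, as in the snippet above
            Boolean locked = redisTemplate.opsForValue()
                    .setIfAbsent(lockKey, uuid, 30, java.util.concurrent.TimeUnit.SECONDS);

            if (Boolean.TRUE.equals(locked)) {
                try {
                    String dbValue = loadFromDb(skuId); // only the lock holder queries the database
                    redisTemplate.opsForValue()
                            .set(cacheKey, dbValue, 1, java.util.concurrent.TimeUnit.HOURS);
                    return dbValue;
                } finally {
                    // Delete the lock only if we still own it: compare the stored UUID and delete atomically via Lua
                    String lua = "if redis.call('get', KEYS[1]) == ARGV[1] "
                            + "then return redis.call('del', KEYS[1]) else return 0 end";
                    redisTemplate.execute(
                            new org.springframework.data.redis.core.script.DefaultRedisScript<>(lua, Long.class),
                            java.util.Collections.singletonList(lockKey), uuid);
                }
            } else {
                // Someone else holds the lock: back off briefly, then read the cache again
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                return getSkuInfo(skuId);
            }
        }

        private String loadFromDb(Long skuId) {
            // Placeholder for the real database query
            return "sku-" + skuId;
        }
    }

The Lua script is what makes "compare the value, then delete" a single atomic step, so an expired lock that has since been acquired by another thread is never deleted by the previous holder.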

Cache Avalanche

Cache avalanche: a large number of hot keys in Redis expire at the same time, so the database suddenly has to handle a flood of requests and may eventually crash.

Compared with cache breakdown, the difference is the number of expired hot keys: breakdown is about a single hot key expiring, while an avalanche is many keys expiring at once.

Solution

When setting the expiration time of hot keys, add a random offset so that they do not all expire at the same moment.
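
A minimal sketch, again assuming a Spring Data Redis StringRedisTemplate; the 30-minute base TTL and the 0–300 second jitter range are illustrative values.

    // Spread out expiration by adding a random offset to the base TTL
    long baseTtlSeconds = java.util.concurrent.TimeUnit.MINUTES.toSeconds(30);
    long jitterSeconds = java.util.concurrent.ThreadLocalRandom.current().nextLong(0, 300);
    redisTemplate.opsForValue().set(cacheKey, value,
            baseTtlSeconds + jitterSeconds, java.util.concurrent.TimeUnit.SECONDS);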
