Talking about Redis caching: how to implement it in a real project

Hello students, today Qianfeng Lao Xu will talk with you about how Redis is used in real projects, especially how to implement caching. In job interviews, Redis caching questions come up again and again, such as cache penetration, cache breakdown, and cache avalanche, along with the common solutions to these problems.

1. What is cache avalanche

Cache avalanche refers to a large number of keys in the cache expiring at the same time, or Redis crashing outright, so that a flood of query requests hits the database directly. The database's query load spikes suddenly and it may even go down. So how do we solve these problems?

For the case where a large number of keys expire at the same time, the solution is relatively simple: spread out the expiration time of each key, for example by adding a random offset, so that their expiration points are distributed as evenly as possible.
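As a rough sketch of this idea, assuming the Python redis-py client (the TTL and jitter values here are arbitrary example numbers):

```python
import random
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

BASE_TTL = 3600   # base expiration of one hour
JITTER = 600      # up to ten extra minutes, chosen at random per key

def cache_with_jitter(key: str, value: str) -> None:
    """Cache a value with a randomized TTL so keys do not all expire together."""
    ttl = BASE_TTL + random.randint(0, JITTER)
    r.set(key, value, ex=ttl)
```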

For the case where Redis itself fails, Redis's high-availability options, such as master-replica replication with Sentinel or Redis Cluster, can be used when deploying Redis.

2. What is cache breakdown

Cache breakdown means that when a piece of hot data in the cache expires, a large number of query requests bypass the cache and hit the database directly before the hot data is reloaded into the cache. This causes a sudden spike in database pressure, blocking a large number of requests or even bringing the database down. We can solve this problem in the following ways.

The first is to set the hot key to never expire; the second is to use a distributed lock so that, at any moment, only one query request reloads the hot data into the cache, while the other threads simply wait for that thread to finish and then read the data from Redis.

The first method is relatively simple: when writing a hot key, just don't set an expiration time on it. Another way to achieve the same "never expires" effect is to set the expiration time as usual, but also run a scheduled background task that refreshes the cache periodically. The second method, the distributed lock, is sketched below.
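Here is a minimal sketch of the distributed-lock approach, assuming the Python redis-py client; load_from_db is a hypothetical stand-in for the real database query. The lock is taken with SET NX EX, so only the first caller rebuilds the cache while the others briefly wait and retry against Redis.

```python
import time
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_from_db(key: str) -> str:
    """Hypothetical placeholder for the real database query."""
    raise NotImplementedError

def get_hot_data(key: str, ttl: int = 300, retries: int = 50) -> str:
    """Read a hot key; on a cache miss, let only one caller rebuild it."""
    for _ in range(retries):
        value = r.get(key)
        if value is not None:
            return value

        # SET NX EX: only the first caller acquires the lock, and the lock
        # expires on its own so a crashed rebuilder cannot block everyone.
        if r.set(f"lock:{key}", "1", nx=True, ex=10):
            try:
                value = load_from_db(key)
                r.set(key, value, ex=ttl)   # reload the hot data into the cache
                return value
            finally:
                r.delete(f"lock:{key}")

        time.sleep(0.05)  # another caller holds the lock; wait and retry
    raise TimeoutError(f"could not rebuild cache for {key}")
```

The lock key's own expiration matters: if the rebuilding thread crashes, the lock still disappears after a few seconds, so other requests are not blocked forever.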

3. What is cache penetration

Cache penetration refers to querying data that exists in neither the cache nor the database, so every such query passes straight through the cache, hits the database, and finally returns empty. If a user frantically fires off queries for this non-existent data, the database comes under heavy pressure and may even go down.

There are generally two ways to solve cache penetration: the first is to cache empty objects, and the second is to use a Bloom filter.

The first method is easy to understand: when the data cannot be found in the database, we cache an empty object and set a short expiration time on it, so the next time the same data is queried it can be answered directly from the cache, reducing the pressure on the database.
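A minimal sketch of this approach, again assuming redis-py; query_db is a hypothetical helper that returns None when the record does not exist, and the sentinel string and TTLs are example values:

```python
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

EMPTY_MARKER = "__NULL__"   # sentinel standing in for "no such record"

def query_db(key: str):
    """Hypothetical placeholder: returns the row, or None if it does not exist."""
    raise NotImplementedError

def get_with_null_caching(key: str, ttl: int = 600, null_ttl: int = 60):
    """Cache database misses as short-lived empty objects."""
    value = r.get(key)
    if value == EMPTY_MARKER:
        return None                      # known-missing data, skip the database
    if value is not None:
        return value

    value = query_db(key)
    if value is None:
        # Cache the "empty object" briefly so repeated queries for the
        # non-existent key stop hammering the database.
        r.set(key, EMPTY_MARKER, ex=null_ttl)
        return None

    r.set(key, value, ex=ttl)
    return value
```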

But this solution has two disadvantages:

(1) The cache layer needs extra memory to store these empty objects; when there are many of them, a lot of memory is wasted;

(2) It can cause inconsistency between the cache layer and the storage layer. Even if a short expiration time is set for the empty object, the cached "empty" value and the real data can disagree within that window, for example if the record is inserted into the database right after the empty object is cached.

The second option is to use a Bloom filter, which is also the recommended approach. The Bloom filter records all keys that actually exist, so a request for a key that the filter says does not exist can be rejected before it ever reaches the cache or the database.
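A Bloom filter can live in application memory or in Redis itself (the RedisBloom module offers a ready-made one). Purely as an illustration of the idea, not of any particular library, here is a tiny Bloom filter built on Redis bitmaps with SETBIT/GETBIT; the key name, bit-array size, and hash count are arbitrary example values:

```python
import hashlib
import redis

r = redis.Redis(host="localhost", port=6379)

BLOOM_KEY = "bloom:users"   # example key name for the bit array
BLOOM_SIZE = 2 ** 20        # number of bits; sized for illustration only
NUM_HASHES = 3

def _bit_positions(item: str):
    """Derive NUM_HASHES bit offsets for an item using salted MD5 hashes."""
    for seed in range(NUM_HASHES):
        digest = hashlib.md5(f"{seed}:{item}".encode()).hexdigest()
        yield int(digest, 16) % BLOOM_SIZE

def bloom_add(item: str) -> None:
    """Record an existing key in the filter when it is written to the database."""
    for pos in _bit_positions(item):
        r.setbit(BLOOM_KEY, pos, 1)

def bloom_might_contain(item: str) -> bool:
    """False means the key definitely does not exist and the request can be
    rejected immediately. True may occasionally be a false positive."""
    return all(r.getbit(BLOOM_KEY, pos) for pos in _bit_positions(item))
```

On a query, bloom_might_contain is checked first; only keys that might exist go on to the cache and then the database.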

Now do you know how to solve the three problems of cache penetration, cache breakdown, and cache avalanche? Follow Xiaoqian for more practical content every day!
