Redis cache avalanche, cache penetration, and cache breakdown: a detailed explanation

[Figure: the normal flow of a user request through the cache — a normal, simple caching process]

When a user (let's call him Tudou) visits an e-commerce site, the site first asks Redis whether the data the user requested is already in the cache.

If the data is cached in Redis, Redis returns it directly and the site displays it to the user.

If the data the user requested is not in Redis, the application falls back to the database. Once the value is found in the database, two things happen: first, the data is written back into the Redis cache; second, the queried data is returned to the site to serve the user.
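A minimal sketch of this read path, assuming Python with the redis-py client and a hypothetical `query_db` function standing in for the real database lookup (the key naming and TTL are illustrative, not from the original post):

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

CACHE_TTL = 3600  # illustrative: cache entries live for one hour


def query_db(product_id):
    # Hypothetical stand-in for the real database query.
    return {"id": product_id, "name": "demo item"}


def get_product(product_id):
    key = f"product:{product_id}"

    # 1. Ask Redis whether the requested data is already cached.
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)

    # 2. Cache miss: fall back to the database.
    value = query_db(product_id)

    # 3. Write the value back into the cache, then return it to the caller.
    if value is not None:
        r.set(key, json.dumps(value), ex=CACHE_TTL)
    return value
```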

Cache Avalanche:

For example: on Double Twelve, users flood onto an e-commerce site to buy goods. A user clicks in and lands on the home page, which displays a wide variety of products. Most of these products are cached in Redis under many different keys, and those keys were all given a cache time of three or four hours. When that time is up, the keys expire together and the cache becomes invalid all at once, so a large number of user requests go straight to the database. The database cannot handle the sudden peak and collapses.

Four solutions are listed below:

The first: stagger the cache expiration times so the keys do not all expire at once; add a random offset when initializing each key's cache time to avoid simultaneous expiration (see the sketch after this list).

The second: build a Redis cluster and spread the hot keys across different nodes of the cluster.

The third: do not set an expiration time on the cache at all.

The fourth: set up a scheduled task that regularly refreshes the cache expiration times.
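A minimal sketch of the first solution (randomized expiration), again assuming redis-py; the base TTL of three hours and the thirty-minute jitter window are illustrative choices:

```python
import json
import random
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

BASE_TTL = 3 * 60 * 60  # base cache time: three hours
JITTER = 30 * 60        # up to thirty minutes of random offset


def cache_with_jitter(key, value):
    # A random offset on each key's TTL spreads expirations out, so hot keys
    # written at the same moment do not all become invalid at the same moment.
    ttl = BASE_TTL + random.randint(0, JITTER)
    r.set(key, json.dumps(value), ex=ttl)
```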

Cache Penetration:

For example: Tudou builds a website and it becomes very popular, attracting people of all ages, those who should come and those who should not. Jealous competitors decide to take Tudou down a notch. They notice that the keys in the Redis cache are never negative numbers, so a request that uses a negative key finds nothing in Redis and goes straight through to the database. Exploiting this, the competitors deliberately request data that cannot exist in the cache: they write a few loops that send a flood of requests with keys like -1 or other negative values. Because none of this data exists in Redis, every request penetrates through to the database, which suddenly receives a huge number of requests and crashes.

Solutions:

The first: when a request does penetrate through to the database, cache the requested key in Redis regardless of whether the database returned a value (that is, cache the empty result as well). The next identical request is then intercepted and handled by the Redis cache. However, the attackers may simply switch to a different parameter next time, so this treats the symptom rather than the cause (demonstrated in the sketch after this list).

The second: block the IP of the malicious requester, but the attacker may switch to another IP, so this too is only a stopgap.

The third: validate the request parameters and return immediately if they are not legal (for example, a negative ID).

The fourth: use a Bloom filter (see the sketch after this list).
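A minimal sketch combining parameter validation, a Bloom filter, and empty-result caching, assuming redis-py with the Bloom filter kept as a plain Redis bitmap; the key names, bit count, hash scheme, and the hypothetical `query_db` stub are all illustrative choices, not the original author's code:

```python
import hashlib
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

BLOOM_KEY = "bloom:product_ids"  # Redis bitmap backing the Bloom filter
BLOOM_BITS = 2 ** 20             # illustrative filter size (~1M bits)
NULL_TTL = 60                    # cache "not found" results briefly
CACHE_TTL = 3600


def query_db(product_id):
    # Hypothetical stand-in for the real database query; None means no row.
    return None


def _bloom_offsets(value, k=3):
    # Derive k bit positions from the value using differently salted hashes.
    return [
        int(hashlib.md5(f"{i}:{value}".encode()).hexdigest(), 16) % BLOOM_BITS
        for i in range(k)
    ]


def bloom_add(product_id):
    # Call this whenever a real product id is created.
    for off in _bloom_offsets(product_id):
        r.setbit(BLOOM_KEY, off, 1)


def bloom_might_contain(product_id):
    # False means "definitely not in the database"; True means "possibly".
    return all(r.getbit(BLOOM_KEY, off) for off in _bloom_offsets(product_id))


def get_product(product_id):
    # Third solution: reject obviously invalid parameters up front.
    if not isinstance(product_id, int) or product_id <= 0:
        return None

    # Fourth solution: keys the Bloom filter has never seen cannot be in the DB.
    if not bloom_might_contain(product_id):
        return None

    key = f"product:{product_id}"
    cached = r.get(key)
    if cached is not None:
        return None if cached == "__null__" else json.loads(cached)

    value = query_db(product_id)
    if value is None:
        # First solution: cache the empty result so repeat requests stop at Redis.
        r.set(key, "__null__", ex=NULL_TTL)
        return None

    r.set(key, json.dumps(value), ex=CACHE_TTL)
    return value
```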

Cache Breakdown:

Another example: on Double Eleven, Lao Ma puts the old Beijing cloth shoes he has collected for many years up for auction. Everyone wants to own the shoes Lao Ma wore on his way to success, so the programmers, following his instructions, put them on the shelves. The auction runs for four hours at a time, and the shoe data is cached in Redis for four and a half hours. When another round of the auction is held, the cached entry for the shoes has already expired, so a massive number of user requests reach the database in an instant; the database cannot carry the load and goes down. Lao Ma is not happy, and he "successfully" sends the programmer to Africa to dig coal for him.

Solutions:

1. Make the cache entry never expire, which is definitely not a good option.

2. Use a mutex lock: put a lock around the step that queries the database. When a large number of requests come in, only one thread grabs the lock, goes to the database for the data, and writes it back to Redis. The threads that fail to grab the lock sleep for a few seconds and then go back to Redis for the data, which greatly reduces the pressure of all those requests hitting the database.
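A minimal sketch of the mutex approach, assuming redis-py; the lock key name, timeouts, and the hypothetical `query_db` stub are illustrative. SET with NX and EX acquires the lock atomically, so only one caller rebuilds the expired hot key:

```python
import json
import time
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

CACHE_TTL = 4 * 60 * 60  # the hot key's normal cache time (illustrative)
LOCK_TTL = 10            # lock auto-expires so a crashed holder cannot block forever


def query_db(key):
    # Hypothetical stand-in for the real database query.
    return {"item": "old Beijing cloth shoes"}


def get_hot_value(key):
    while True:
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)

        # Cache miss: only the thread that wins this lock may query the database.
        lock_key = f"lock:{key}"
        if r.set(lock_key, "1", nx=True, ex=LOCK_TTL):
            try:
                value = query_db(key)
                r.set(key, json.dumps(value), ex=CACHE_TTL)
                return value
            finally:
                r.delete(lock_key)

        # Threads that did not get the lock wait briefly, then retry Redis
        # instead of hitting the database.
        time.sleep(0.05)
```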


Origin: blog.csdn.net/m0_64550837/article/details/127012758