A Detailed Explanation of Redis Memory Eviction Strategies

Redis memory pressure

Redis is an in-memory database. When business volume grows too large, memory becomes insufficient (once usage reaches the configured maxmemory value). Of course, a machine with enough memory can hold more data, but simply adding hardware is not a real solution to the problem.

Redis provides two solutions:

  • Setting a key timeout (TTL)
  • The LRU eviction algorithm

Setting a data timeout (TTL)

We use data timeouts all the time, and in some scenarios setting a timeout directly satisfies a business requirement:

  • Verification code expiration time
  • User token expiration time
  • ...

But have you ever thought about how TTL expiration is actually carried out?

Redis expiration strategy

Redis's expiration strategy combines two approaches: periodic deletion and lazy deletion.

Periodic deletion

Redis puts every key that has an expiration time into a separate dictionary, and later periodically scans this dictionary to delete the keys that have expired.

By default, Redis performs ten expiration scans per second (one every 100 ms). An expiration scan does not traverse all the keys in the expires dictionary; instead it uses a simple greedy strategy:

  1. Randomly sample 20 keys from the expires dictionary
  2. Delete the expired keys among those 20
  3. If more than 1/4 of the sampled keys were expired, repeat from step 1

The random sampling here avoids traversing an enormous number of keys when the dataset is large, which would otherwise put a heavy burden on the CPU.
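The greedy scan loop above can be sketched in Python. This is a simplified simulation, not Redis source code: the `expires` dict stands in for Redis's internal expires dictionary, and the function name is invented.

```python
import random

def expired_scan(expires, now, sample_size=20, repeat_ratio=0.25):
    """One round of Redis-style expired-key scanning (simplified sketch).

    `expires` maps key -> absolute expiry timestamp. The sample-and-delete
    step repeats while more than 1/4 of the sampled keys were expired.
    """
    while True:
        keys = list(expires)
        if not keys:
            return
        sample = random.sample(keys, min(sample_size, len(keys)))
        expired = [k for k in sample if expires[k] <= now]
        for k in expired:
            del expires[k]  # delete only the expired keys in the sample
        if len(expired) <= len(sample) * repeat_ratio:
            return

# Odd-numbered keys expired at t=0; even-numbered keys live until t=100.
expires = {f"key{i}": (0 if i % 2 else 100) for i in range(100)}
expired_scan(expires, now=50)  # removes most of the expired keys
```

Note that the loop always terminates: if no sampled key was expired, the ratio is zero and the scan stops, which is what bounds the CPU cost per round.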

Lazy deletion

Lazy deletion means that when a client accesses a key, Redis checks the key's expiration time and, if it has expired, deletes it immediately and returns nothing. In everyday use, this mechanism is triggered whenever we query a key that has already expired.

Periodic deletion handles expired keys at a coarse, global level. Because it is based on random sampling, some expired keys will inevitably not be deleted in time; lazy deletion serves as a compensating strategy for those cases.
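Lazy deletion can be sketched as a wrapper that checks the expiry on every read. This is a toy model, not the Redis implementation; `LazyExpiringDict` and its methods are invented names.

```python
import time

class LazyExpiringDict:
    """Toy model of lazy (on-access) expiration."""

    def __init__(self):
        self._data = {}     # key -> value
        self._expires = {}  # key -> absolute expiry time (monotonic clock)

    def set(self, key, value, ttl=None):
        self._data[key] = value
        if ttl is not None:
            self._expires[key] = time.monotonic() + ttl
        else:
            self._expires.pop(key, None)

    def get(self, key):
        exp = self._expires.get(key)
        if exp is not None and time.monotonic() >= exp:
            # Expired: delete now and behave as if the key never existed.
            del self._data[key]
            del self._expires[key]
            return None
        return self._data.get(key)
```

The key point is that an expired entry keeps occupying memory until either a periodic scan samples it or a client read triggers this check.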

LRU

What is the LRU (Least Recently Used) algorithm? A classic implementation pairs a hash map with a doubly linked list:

  1. When adding a key, first append a new node at the tail of the linked list. If this exceeds the LRU capacity threshold, evict the node at the head of the list and delete the corresponding entry from the hash map.
  2. When modifying the value of a key, first update the value in the corresponding node, then move that node to the tail of the list.
  3. When accessing the value of a key, move the accessed node to the tail of the list.
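The three steps above map directly onto Python's `collections.OrderedDict`, which combines a hash map with a doubly linked list internally. A minimal sketch (the class name is ours, not Redis's):

```python
from collections import OrderedDict

class LRUCache:
    """Textbook LRU: hash map + doubly linked list (OrderedDict is both)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._od = OrderedDict()

    def get(self, key):
        if key not in self._od:
            return None
        self._od.move_to_end(key)      # step 3: accessed node goes to the tail
        return self._od[key]

    def put(self, key, value):
        if key in self._od:
            self._od.move_to_end(key)  # step 2: updated node goes to the tail
        self._od[key] = value
        if len(self._od) > self.capacity:
            self._od.popitem(last=False)  # step 1: evict the head node
```

With a capacity of 2, inserting a, b, reading a, then inserting c evicts b, because b is now the least recently used entry.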

Redis uses an approximate LRU algorithm: it evicts data by random sampling. Each time, 5 keys (the default) are sampled at random and the least recently used key among them is evicted. The sample size can be changed through the maxmemory-samples parameter, for example: maxmemory-samples 10. The larger maxmemory-samples is, the closer the eviction result gets to strict LRU, but the more CPU it costs.

Redis chose approximate LRU to keep memory and CPU consumption as low as possible while still meeting the policy's goals well enough. When the hardware can afford it, maxmemory-samples can be raised to bring the behavior arbitrarily close to the standard LRU algorithm.
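The sampling idea can be sketched like this. It is a simplified model: `last_access` stands in for the per-key access clock Redis keeps internally, and the function name is invented.

```python
import random

def approx_lru_evict(last_access, samples=5):
    """Approximate LRU: sample `samples` keys at random and evict the one
    with the oldest access time (the maxmemory-samples idea)."""
    pool = random.sample(list(last_access), min(samples, len(last_access)))
    victim = min(pool, key=last_access.get)  # oldest access time in the sample
    del last_access[victim]
    return victim
```

When `samples` is at least as large as the keyspace, this degenerates into exact LRU, which is why raising maxmemory-samples approaches strict LRU at a higher CPU cost.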

To implement approximate LRU, Redis adds an extra 24-bit field to each key, storing the time the key was last accessed.

Redis 3.0 optimized the approximate LRU algorithm. The new algorithm maintains a candidate pool (size 16) whose entries are sorted by access time. The first batch of randomly sampled keys goes straight into the pool; after that, a randomly sampled key is only inserted if its access time is older than the smallest access time already in the pool, until the pool fills up. Once the pool is full, inserting a new key pushes out the entry with the newest access time (the most recently accessed one). When a key must be evicted, the entry with the oldest access time (idle the longest) is taken from the pool and evicted.
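A rough sketch of the Redis 3.0 candidate-pool idea. This is heavily simplified (real Redis keeps a fixed array of 16 eviction-pool structs keyed by idle time); all names here are invented for illustration.

```python
import random

POOL_SIZE = 16

def refill_pool(pool, last_access, samples=5):
    """Sampled keys enter the pool while there is room, or when their
    access time is older than the newest entry; the pool stays sorted
    from oldest access time to most recent."""
    for key in random.sample(list(last_access), min(samples, len(last_access))):
        if any(k == key for k, _ in pool):
            continue  # already a candidate
        if len(pool) < POOL_SIZE or last_access[key] < pool[-1][1]:
            pool.append((key, last_access[key]))
            pool.sort(key=lambda kv: kv[1])
            if len(pool) > POOL_SIZE:
                pool.pop()  # push out the most recently accessed entry

def evict_from_pool(pool, store, last_access):
    """Evict the pool entry that has been idle the longest."""
    key, _ = pool.pop(0)  # oldest access time sits at the front
    store.pop(key, None)
    last_access.pop(key, None)
    return key
```

Keeping candidates across eviction rounds is what makes this noticeably closer to true LRU than plain per-round sampling.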

Memory eviction policies

1. noeviction: when memory usage exceeds the configured limit, write commands return an error and no keys are evicted

2. allkeys-lru: when adding a key would exceed the limit, the least recently used keys are evicted first, across all keys, using the LRU algorithm

3. volatile-lru: when adding a key would exceed the limit, the least recently used key is evicted first, but only from the set of keys that have an expiration time

4. allkeys-random: when adding a key would exceed the limit, keys are evicted at random from all keys

5. volatile-random: when adding a key would exceed the limit, keys are evicted at random from the set of keys that have an expiration time

6. volatile-ttl: among keys with an expiration time configured, the keys closest to expiring are evicted first

7. volatile-lfu: among keys with an expiration time configured, the least frequently used key is evicted (Redis 4.0 and later)

8. allkeys-lfu: among all keys, the least frequently used key is evicted (Redis 4.0 and later)
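These policies are selected with the maxmemory-policy directive in redis.conf (or at runtime via CONFIG SET). For example, a cache-style deployment might use:

```conf
maxmemory 100mb
maxmemory-policy allkeys-lru
maxmemory-samples 5
```

The 100mb limit and the choice of allkeys-lru here are illustrative values, not recommendations; pick them to match your workload.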

LFU

LFU (Least Frequently Used) arrived with Redis 4.0, because "least recently used" is not always accurate. Consider the access pattern below: if eviction happens at the moment marked "|", A's last access lies furthest in the past, so LRU would evict A. Yet A is clearly used more frequently than B, so the reasonable choice is to evict B. LFU was born for exactly this situation.

A~~A~~A~~A~~A~~A~~A~~A~~A~~A~~~|

B~~~~~B~~~~~B~~~~~B~~~~~~~~~~~~B|
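In code, the two policies pick opposite victims for this pattern. A toy comparison, with access times and counts read off the diagram above (the timestamps are made up for illustration):

```python
# A was accessed 10 times but last at t=90; B only 5 times but last at t=100.
last_access = {"A": 90, "B": 100}
access_count = {"A": 10, "B": 5}

lru_victim = min(last_access, key=last_access.get)    # LRU evicts "A"
lfu_victim = min(access_count, key=access_count.get)  # LFU evicts "B"
```

Real Redis LFU does not store exact counts: it packs an 8-bit logarithmic counter (with periodic decay) into the same 24-bit field used for LRU timestamps.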

 

References:

https://zhuanlan.zhihu.com/p/105587132

https://www.cnblogs.com/vegetableDD/p/11890570.html

https://www.bilibili.com/video/BV1Cb411j7RA?p=8

https://segmentfault.com/a/1190000017555834

 


Origin: blog.csdn.net/qq_25805331/article/details/109287489