What should I do if the Redis cache is full?

The Redis cache stores its data in memory. As the amount of data to be cached grows, the limited cache space will inevitably fill up. What should we do then? In this article, we will talk about the data eviction mechanism that kicks in once the cache is full.

It is worth noting that the expiration policy and the memory eviction policy are two completely different concepts in Redis. The expiration policy determines how Redis deletes key-value pairs that have expired, while the memory eviction mechanism determines which key-value pairs Redis deletes once its memory usage exceeds the configured maximum, so that Redis can keep running efficiently.

Redis maximum running memory

The memory eviction mechanism is only triggered when Redis's memory usage reaches a certain threshold: the maximum memory we have set. This value is controlled by the maxmemory configuration item in the Redis configuration file.

The execution process of memory eviction is shown in the figure below:

(figure: memory eviction execution flow)

Query the maximum running memory

We can use the config get maxmemory command to view the configured maximum memory:

127.0.0.1:6379> config get maxmemory
1) "maxmemory"
2) "0"

We can see that the value is 0, which is the default on 64-bit operating systems. When maxmemory is 0, there is no memory size limit.

Note: For 32-bit operating systems, the default maximum memory value is 3GB.
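If you do want a limit, maxmemory can be set in the configuration file. A minimal sketch (the 1gb value is purely illustrative; size it to your deployment):

```
# redis.conf
maxmemory 1gb
```

The same limit can also be applied at runtime with config set maxmemory 1gb, which takes effect immediately but, like any config set change, is lost when Redis restarts.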

Memory eviction policies

View the Redis memory eviction policy

We can use the config get maxmemory-policy command to view the current Redis memory eviction policy:

127.0.0.1:6379> config get maxmemory-policy
1) "maxmemory-policy"
2) "noeviction"

As we can see, this Redis instance uses the noeviction policy: when memory usage exceeds the configured maximum, no data is evicted, and new write operations report an error instead.
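Under noeviction, once memory is over the limit, a write command fails along these lines (transcript sketch based on Redis's standard out-of-memory reply):

```
127.0.0.1:6379> set key value
(error) OOM command not allowed when used memory > 'maxmemory'.
```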

Classification of memory eviction policies

Before Redis 4.0, there were the following 6 eviction policies:

  1. noeviction : evict nothing; when memory is insufficient, new write operations report an error. This is Redis's default memory eviction policy;

  2. allkeys-lru : evict the least recently used key among all keys;

  3. allkeys-random : randomly evict any key;

  4. volatile-lru : evict the least recently used key among keys with an expiration time set;

  5. volatile-random : randomly evict any key with an expiration time set;

  6. volatile-ttl : evict keys that are closer to expiring first.

Redis 4.0 added two new eviction policies:

  1. volatile-lfu : evict the least frequently used key among keys with an expiration time set;

  2. allkeys-lfu : evict the least frequently used key among all keys.

Here allkeys-xxx means data is evicted from all keys, while volatile-xxx means data is evicted only from keys with an expiration time set.

Modify the Redis memory eviction policy

There are two ways to set the memory eviction policy. Both have their own pros and cons, and users need to weigh them for themselves.

  • Method 1: set it with the "config set maxmemory-policy <policy>" command. The advantage is that it takes effect immediately, without restarting the Redis service; the disadvantage is that the setting is lost when Redis restarts.

  • Method 2: set "maxmemory-policy <policy>" in the Redis configuration file. The advantage is that the configuration survives a restart of the Redis service; the disadvantage is that Redis must be restarted for the setting to take effect.
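For example, switching a running instance to allkeys-lru with Method 1 looks like this (the policy name here is just an illustration):

```
127.0.0.1:6379> config set maxmemory-policy allkeys-lru
OK
```

To make the change permanent (Method 2), put the line maxmemory-policy allkeys-lru in redis.conf. Redis also provides the config rewrite command, which writes the current runtime configuration back into the configuration file.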

Memory eviction algorithms

From the classification of eviction policies above, we can see that apart from random eviction and no eviction, there are two main eviction algorithms: the LRU algorithm and the LFU algorithm.

LRU algorithm

The full name of LRU is Least Recently Used. It is a commonly used page replacement algorithm that selects the page that has gone unused for the longest time and evicts it.

1. LRU algorithm implementation

The LRU algorithm is typically built on a linked list. The elements in the list are ordered by access time: the most recently used key is moved to the head of the list, so when eviction is needed, only the elements at the tail of the list have to be removed.
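The linked-list scheme described above can be sketched in Python (an illustrative toy, not Redis's actual implementation), using an OrderedDict to play the role of the list:

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: the most recently used key moves to one end of
    the OrderedDict; eviction pops the opposite (least recent) end."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)   # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")           # "a" becomes most recently used
cache.put("c", 3)        # evicts "b", the least recently used key
print(list(cache.data))  # ['a', 'c']
```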

2. Approximate LRU algorithm

To save memory, Redis uses an approximate LRU algorithm instead. It adds an extra field to the existing data structure to record the last access time of each key. When evicting, Redis takes a random sample of keys, 5 by default (configurable via the maxmemory-samples option), and evicts from the sample the one that has gone unused the longest.
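The sampling idea can be sketched as follows (a hypothetical simplification, not Redis source: each key carries a last-access timestamp, and eviction samples a few keys and drops the stalest one):

```python
import random

def evict_one(store, access_time, sample_size=5):
    """Pick `sample_size` random keys and evict the one whose
    last-access time is the oldest (approximate LRU)."""
    sample = random.sample(list(store), min(sample_size, len(store)))
    victim = min(sample, key=lambda k: access_time[k])
    del store[victim]
    del access_time[victim]
    return victim

store = {"a": 1, "b": 2, "c": 3}
access_time = {"a": 100, "b": 50, "c": 200}   # "b" is the stalest key
victim = evict_one(store, access_time, sample_size=3)
print(victim)  # sample covers every key here, so "b" is always chosen
```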

3. Disadvantages of the LRU algorithm

The LRU algorithm has a shortcoming: if a key that has not been used for a long time is accessed just once recently, it will not be evicted, even if it is the least frequently used entry in the cache. For this reason, Redis 4.0 introduced the LFU algorithm. Let's take a look at it.

LFU algorithm

The full name of LFU is Least Frequently Used. The least-frequently-used algorithm evicts data based on the total number of accesses. Its core idea is: "if data has been accessed many times in the past, it is likely to be accessed frequently in the future."

LFU solves the problem of a key escaping eviction just because it was accessed once recently, and in this respect it is more reasonable than the LRU algorithm.

Each object header in Redis records LFU information. The source code is as follows:

typedef struct redisObject {
    unsigned type:4;
    unsigned encoding:4;
    unsigned lru:LRU_BITS; /* LRU time (relative to global lru_clock) or
                            * LFU data (least significant 8 bits frequency
                            * and most significant 16 bits access time). */
    int refcount;
    void *ptr;
} robj;

In Redis, the LFU field is split into two parts: a 16-bit ldt (last decrement time) and an 8-bit logc (logistic counter).

  1. logc stores the access frequency. The maximum value an 8-bit integer can represent is 255. The smaller its value, the lower the access frequency and the more likely the key is to be evicted;

  2. ldt stores the time logc was last updated.
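A simplified sketch of how an 8-bit logarithmic counter of this kind can work, modeled on the scheme Redis describes for logc (the constants and function names here are illustrative, not Redis source):

```python
import random

LFU_INIT_VAL = 5    # counter value given to newly created keys
LOG_FACTOR = 10     # higher factor -> counter grows more slowly

def lfu_incr(counter):
    """Probabilistically increment the counter so it grows roughly
    logarithmically with the real access count (saturates at 255)."""
    if counter == 255:
        return counter
    baseval = max(counter - LFU_INIT_VAL, 0)
    if random.random() < 1.0 / (baseval * LOG_FACTOR + 1):
        counter += 1
    return counter

def lfu_decay(counter, idle_minutes, decay_time=1):
    """Decay the counter by one for each decay_time minutes of idleness,
    so keys that stop being accessed gradually become eviction candidates."""
    periods = idle_minutes // decay_time
    return max(counter - periods, 0)

c = LFU_INIT_VAL
for _ in range(10_000):
    c = lfu_incr(c)
print(c <= 255)                         # never overflows 8 bits
print(lfu_decay(100, idle_minutes=30))  # 70
```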

Summary

To sum up, the Redis memory eviction policy and the expiration policy are completely different concepts. The memory eviction policy exists to keep Redis's memory usage in check: Redis compares its memory usage against the maxmemory parameter to decide whether to evict data, and the maxmemory-policy parameter decides which eviction policy to use. Since Redis 4.0 there have been 8 eviction policies. The default policy is noeviction, which evicts nothing when memory is exceeded but reports an error for new write operations.


Origin blog.csdn.net/LinkSLA/article/details/132540280