Detailed Explanation of Redis Memory Eviction Policies

1. Introduction

Redis

Redis is a high-performance, in-memory, non-relational data store that supports multiple data structures such as strings, hashes, lists, sets, sorted sets, and HyperLogLog. Redis is used in applications for scenarios such as caching, message queues, and data structure storage. Its advantages are fast response times, rich data structures, and good scalability.

Memory management

Redis stores all of its data in memory, so memory management is a key concern. When Redis memory usage reaches the configured maximum memory limit (maxmemory), Redis either stops accepting write requests or evicts some keys according to the configured eviction policy in order to free memory.
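The limit itself is set with the maxmemory directive, either in redis.conf or at runtime; a minimal sketch (the 100mb value is just an illustrative placeholder):

# In redis.conf: cap Redis memory usage (100mb is an arbitrary example value)
maxmemory 100mb

# Or at runtime from redis-cli
CONFIG SET maxmemory 100mb

# Inspect current usage and the active limit and policy
INFO memory

The INFO memory output includes fields such as used_memory_human, maxmemory_human, and maxmemory_policy, which show how close the server is to the limit and which eviction policy is active.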

2. Memory eviction policies

1. Why do we need an eviction policy

When Redis memory usage reaches the maxmemory limit, writes can no longer be satisfied without freeing memory. Therefore, to keep the Redis service stable, Redis takes measures when memory reaches the limit, such as evicting keys or rejecting further write commands.

2. Classification of eviction policies

Redis provides eight memory eviction policies (a configuration example follows this list):

(1)noeviction

Do not evict anything: when memory is full, commands that would use more memory simply return an error. No data is lost, but writes fail until memory is freed or the limit is raised.

(2)allkeys-lru

Evict the least recently used keys from the entire keyspace. This approach usually keeps hot data in memory, although heavy eviction can leave behind some memory fragmentation and wasted space.

(3)allkeys-lfu

Evict the keys with the lowest access frequency from the entire keyspace. This approach works well when access frequency differs noticeably between keys, so that frequently used data is retained.

(4)volatile-lru

Evict the least recently used keys, considering only keys that have an expiration time (TTL) set. This approach is suitable for usage scenarios such as caching.

(5)volatile-lfu

Evict the keys with the lowest access frequency, considering only keys that have an expiration time (TTL) set.

(6)volatile-ttl

Evict the keys closest to expiring, considering only keys that have an expiration time (TTL) set. This approach is suitable for usage scenarios such as caching.

(7)allkeys-random

Evict keys at random from the entire keyspace.

(8)volatile-random

Evict keys at random, but only among keys that have an expiration time (TTL) set.
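The active policy is selected with the maxmemory-policy directive; a minimal sketch of both ways to set it (the policy name is just an example):

# In redis.conf
maxmemory-policy allkeys-lru

# Or at runtime from redis-cli (takes effect immediately, but is not persisted unless CONFIG REWRITE is run)
CONFIG SET maxmemory-policy allkeys-lru
CONFIG GET maxmemory-policy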

3. Detailed description of each policy

To prevent memory from growing past the limit, when there is not enough memory to hold more data Redis evicts some key-value pairs according to the configured eviction policy to make room for new ones. The eviction policies supported by Redis are described in detail below.

noeviction

This is the default policy of Redis: when memory is full, any write command that would allocate more memory (such as SET, HSET, and so on) returns an error, while read commands continue to work. In this case, an administrator needs to manually delete some keys or raise the memory limit before writes can succeed again.
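For illustration, once the limit has been reached under noeviction, a write from redis-cli is rejected with an OOM error along the following lines (the key name is hypothetical):

127.0.0.1:6379> SET newkey "value"
(error) OOM command not allowed when used memory > 'maxmemory'.

Read commands such as GET continue to succeed, which is why this policy is acceptable when the data set must never be silently dropped.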

allkeys-lru

This policy applies the LRU (least recently used) algorithm to all keys: the key-value pairs that have not been accessed recently, or have gone unused the longest, are evicted first.

# Configure the allkeys-lru policy
maxmemory-policy allkeys-lru
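The idle time that LRU eviction is based on can be inspected per key with OBJECT IDLETIME; a small redis-cli sketch (mykey is a hypothetical key):

127.0.0.1:6379> SET mykey "hello"
OK
127.0.0.1:6379> OBJECT IDLETIME mykey
(integer) 0

The returned value is the number of seconds since the key was last read or written; keys with large idle times are the ones an LRU policy evicts first. Note that this subcommand is not available while an LFU policy is active.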

volatile-lru

In this mode, Redis only considers keys that have an expiration time set, and among those it evicts the least recently used ones first. Keys without an expiration time are never evicted under this policy, no matter how long they have gone unused.

# Configure the volatile-lru policy
maxmemory-policy volatile-lru
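To see which keys are even eligible under a volatile-* policy, note that only keys given an expiration time become candidates; a quick sketch with hypothetical key names:

127.0.0.1:6379> SET session:42 "payload" EX 3600
OK
127.0.0.1:6379> SET config:site "payload"
OK

Here session:42 (created with a TTL of 3600 seconds) can be evicted by volatile-lru when memory runs out, while config:site, which has no expiration, is never touched by any volatile-* policy.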

allkeys-random

In this mode, Redis evicts keys chosen at random from the entire keyspace, regardless of access frequency or expiration time. It is cheap to run and reasonable when all keys are accessed with roughly equal probability.

# Configure the allkeys-random policy
maxmemory-policy allkeys-random

volatile-random

Similar to allkeys-random, but keys are evicted at random only from the set of keys that have an expiration time set.

# Configure the volatile-random policy
maxmemory-policy volatile-random

volatile-ttl

In this mode, Redis evicts the keys closest to expiring, considering only keys with an expiration time set. It is similar to volatile-lru, except that candidates are ranked by remaining TTL rather than by how recently they were used.

# Configure the volatile-ttl policy
maxmemory-policy volatile-ttl
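The remaining TTL this policy ranks keys by can be checked with the TTL command (key names, values, and TTLs are illustrative):

127.0.0.1:6379> SET cache:a "x" EX 60
OK
127.0.0.1:6379> SET cache:b "y" EX 3600
OK
127.0.0.1:6379> TTL cache:a
(integer) 60

Under volatile-ttl, cache:a (60 seconds left) is evicted before cache:b (3600 seconds left) when memory needs to be freed.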

volatile-lfu

This policy uses the LFU (Least Frequently Used) algorithm: among the keys that have an expiration time set, the ones accessed least frequently are evicted first.

# Configure the volatile-lfu policy
maxmemory-policy volatile-lfu
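While an LFU policy is active, the access-frequency counter Redis maintains for a key can be inspected with OBJECT FREQ; a brief sketch (hotkey is a hypothetical key, and the exact counter value will vary because it grows probabilistically):

127.0.0.1:6379> SET hotkey "v" EX 3600
OK
127.0.0.1:6379> GET hotkey
"v"
127.0.0.1:6379> OBJECT FREQ hotkey
(integer) 5

OBJECT FREQ returns an error unless maxmemory-policy is set to one of the LFU policies; keys with the lowest counters are the ones volatile-lfu evicts first.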

LFU (Least Frequently Used) algorithm

Besides volatile-lfu, Redis also supports allkeys-lfu, which applies the LFU algorithm to the entire keyspace rather than only to keys with an expiration time. This policy suits scenarios where the least frequently used key-value pairs should be evicted regardless of whether they have a TTL.

# Configure the allkeys-lfu policy
maxmemory-policy allkeys-lfu
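How the LFU frequency counter grows and decays can be tuned in redis.conf; the values shown below are the documented defaults:

# Higher values make the counter grow more slowly for frequently accessed keys (default 10)
lfu-log-factor 10
# Minutes that must elapse before an idle key's counter is decayed (default 1)
lfu-decay-time 1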

LRU (Least Recently Used) algorithm

The LRU algorithm is what the allkeys-lru and volatile-lru policies are built on; there is no separate lru value for maxmemory-policy. Note that Redis implements an approximate LRU: on each eviction it samples a small number of keys and evicts the best candidate among them, trading exactness for speed and memory. The sample size is configurable.

# Number of keys sampled per eviction by the approximate LRU/LFU/TTL algorithms (default 5)
maxmemory-samples 5

4. How to choose a suitable eviction policy

When using Redis, to maintain system stability and performance, we need to choose an eviction policy that fits the business scenario. The sections below cover the factors to consider when making the choice and how to pick an appropriate policy for different scenarios.

Factors to consider when choosing an eviction policy

System tolerance

System tolerance refers to how much evicted cache data the system can afford to lose. When tolerance is high, more aggressive eviction policies such as allkeys-lru or allkeys-random can be used. When tolerance is low, more conservative policies should be chosen, such as noeviction or the volatile-* policies, which only touch keys that were explicitly given an expiration time.

The importance of cached data

Different cached data has different levels of importance. For example, some cached data may correspond to important information such as a user's personal information or payment orders; such data needs to be protected with a stricter eviction policy. For cached data that is only used for auxiliary calculations, a looser eviction policy can be used.

Memory usage

When memory is plentiful, more aggressive eviction policies can be used to improve system performance. When memory is tight, more conservative eviction policies should be chosen to keep the system running stably.

Business scenario

Different business scenarios have different requirements for the eviction policy. For example, some cached data needs to be kept for a long time; in that case, choose a policy that favors keeping long-lived, frequently used data. For data that is updated frequently or is only needed for temporary calculations, a more aggressive policy can be used.

How to choose an appropriate eviction policy for the business scenario

noeviction policy

This is the default eviction policy of Redis. It never evicts cached data, and when Redis memory reaches the limit, write operations fail. It is suitable when most of the data in the system is indispensable and must not be deleted. In this situation, cache penetration and cache avalanche usually also need to be handled at the application level.

# Add the following configuration to the Redis configuration file
maxmemory-policy noeviction

volatile-lru policy

This policy evicts the least recently used (Least Recently Used) cached data among keys that have an expiration time. It can be used when cached data has a clear expiration time and the probability of re-reading old data is relatively small.

# Add the following configuration to the Redis configuration file
maxmemory-policy volatile-lru
maxmemory 4mb

volatile-ttl policy

This policy evicts the cached data with the shortest remaining time to live. It is usually suitable for scenarios where cached items have similar expiration times or where suitable expiration times are hard to estimate.

# Add the following configuration to the Redis configuration file
maxmemory-policy volatile-ttl
maxmemory 4mb

allkeys-lru policy

This policy evicts the least recently used data from the entire keyspace. It is suitable when there are no restrictions on which cached data may be evicted and the workload's data and access patterns are unpredictable, so it is hard to tell hot keys from cold keys in advance.

# Add the following configuration to the Redis configuration file
maxmemory-policy allkeys-lru
maxmemory 4mb
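Whichever policy is chosen, it is worth confirming in production that evictions behave as expected. The evicted_keys counter reported by INFO stats shows how many keys have been evicted since the server started (output abbreviated):

127.0.0.1:6379> INFO stats
# Stats
evicted_keys:0

The keyspace_hits and keyspace_misses counters in the same section help judge whether the chosen policy is keeping the right data in memory.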
