Redis cache eviction strategy

When Redis is used as a cache, it is often convenient to have it automatically evict old data as new data is added. This behavior is well known because it is the default behavior of memcached, even though LRU is actually only one of the eviction methods Redis supports. The following mainly describes the maxmemory directive, which limits the amount of memory Redis may use, and the LRU algorithm Redis uses, which is an approximate LRU.

  • maxmemory configuration directive

       The maxmemory directive limits memory usage. It can be set in the redis.conf file or at runtime via the CONFIG SET command. For example, adding the directive maxmemory 100mb to redis.conf limits memory usage to 100 MB.
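
As a side note, the same limit can be applied at runtime through CONFIG SET. Below is a minimal sketch using the Python redis-py client; the host, port, and the 100mb value are illustrative assumptions rather than values from a real deployment.

    import redis

    # Assumes a Redis instance reachable at localhost:6379 (illustrative).
    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Apply the same limit at runtime; equivalent to `maxmemory 100mb` in redis.conf.
    r.config_set("maxmemory", "100mb")

    # Redis reports the limit back in bytes.
    print(r.config_get("maxmemory"))  # e.g. {'maxmemory': '104857600'}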

  • When set to 0, there is no limit. The default is unlimited on 64-bit systems and 3 GB on 32-bit systems.

  • When memory usage reaches the limit, several strategies are available. For example, Redis can return an error for commands that would allocate more memory, or it can evict old data to make room for new data and keep memory usage within the limit.

  • eviction strategy

The eviction policy is configured through the maxmemory-policy directive and mainly includes the following policies:

  1. noeviction: when memory usage reaches the limit, any command that would allocate more memory returns an error.

  2. allkeys-lru: evict the least recently used keys from the primary keyspace.

  3. volatile-lru: evict the least recently used keys from the keyspace of keys that have an expiration time set.

  4. allkeys-random: evict random keys from the primary keyspace.

  5. volatile-random: evict random keys from the keyspace of keys that have an expiration time set.

  6. volatile-ttl: among keys that have an expiration time set, evict the keys with the earliest expiration time first.

  7. The volatile-* policies behave like noeviction when no key satisfies the condition (for example, when no key has an expiration time set).

A note on the primary keyspace versus the keyspace of keys with an expiration time: when we store a batch of keys in Redis, there is one hash table that holds those keys and their values (the primary keyspace). If some of those keys also have an expiration time set, they are additionally recorded in a second hash table, whose values are the corresponding expiration times. The keyspace with expiration times set is therefore a subset of the primary keyspace.
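
To make the distinction concrete, here is a small illustration using redis-py; the key names and the 60-second TTL are made up for the example. Only the key given an expiration time is also tracked in the expires table, so only it is a candidate for the volatile-* policies.

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Both keys live in the primary keyspace.
    r.set("page:home", "<html>...</html>")
    r.set("session:42", "user-data", ex=60)  # also recorded in the expires table (60 s TTL)

    # TTL is -1 for keys without an expiration time, positive for keys that have one.
    print(r.ttl("page:home"))   # -1  -> not a volatile-* eviction candidate
    print(r.ttl("session:42"))  # ~60 -> eligible under volatile-lru / volatile-random / volatile-ttl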

Now that we have seen which eviction policies Redis provides, how do we choose one? The policy is specified with the following configuration directive:

    # maxmemory-policy noeviction

But what value should this be? To answer that, we need to understand how our application accesses the data set stored in Redis and what our requirements are. Redis also supports changing the eviction policy at runtime, which lets us adjust it in real time without restarting the instance.
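
For instance, switching to allkeys-lru at runtime might look like the following redis-py sketch (the policy chosen here is purely illustrative):

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Change the eviction policy without restarting the instance.
    r.config_set("maxmemory-policy", "allkeys-lru")
    print(r.config_get("maxmemory-policy"))  # {'maxmemory-policy': 'allkeys-lru'}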

  • Applicable scenarios for the different policies:

    1. allkeys-lru: if our application's cache accesses follow a power-law distribution (that is, some data is relatively hot), or we do not know the access distribution, choose the allkeys-lru policy.

    2. allkeys-random: use this policy if our application accesses all cache keys with roughly equal probability.

    3. volatile-ttl: this policy lets us hint to Redis which keys are better candidates for eviction by giving them shorter expiration times.

In addition, the volatile-lru and volatile-random policies are suitable when a single Redis instance is used both as a cache and as a persistent store; however, the same effect can also be achieved by running two separate Redis instances. It is worth noting that setting an expiration time on a key actually costs extra memory, so the allkeys-lru policy is recommended for more efficient memory use.

  • How the eviction process works: it is easiest to understand through the following steps (see the sketch after this list):

  1. A client runs a command that allocates memory;

  2. Redis checks memory usage, finds that the limit is exceeded, and applies the eviction policy;

  3. The next command is executed, and so on;

  4. In effect, Redis's memory usage repeatedly fluctuates around the limit, which is how the eviction policy can be observed and verified;

  5. When a single command consumes a lot of memory, the limit can be visibly exceeded for a noticeable period of time;
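
A rough observation script along those lines is sketched below. It assumes a local test instance that already has a small maxmemory and a non-noeviction policy configured; the key names, value sizes, and iteration counts are arbitrary.

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Keep writing ~10 KB values so memory usage repeatedly crosses the configured limit.
    for i in range(10_000):
        r.set(f"filler:{i}", "x" * 10_000)
        if i % 1_000 == 0:
            used = r.info("memory")["used_memory"]
            evicted = r.info("stats")["evicted_keys"]
            print(f"keys written={i:>6}  used_memory={used:>12}  evicted_keys={evicted}")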

  • Approximate LRU Algorithm

The LRU algorithm used by Redis is an approximate implementation; that is, the evicted key is not necessarily the key that has genuinely gone unaccessed the longest. Instead, Redis samples a small number of candidate keys and evicts the least recently used key among the samples.
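
As an illustration of the sampling idea only (not Redis's actual C implementation, and without the eviction-pool refinement mentioned below), a minimal self-contained sketch might look like this:

    import random
    import time

    class SampledLRUCache:
        """Toy cache that evicts by sampling, roughly like Redis's approximate LRU."""

        def __init__(self, max_items, samples=5):
            self.max_items = max_items
            self.samples = samples
            self.store = {}        # key -> value
            self.last_access = {}  # key -> timestamp of the last access

        def get(self, key):
            if key in self.store:
                self.last_access[key] = time.monotonic()
                return self.store[key]
            return None

        def set(self, key, value):
            if key not in self.store and len(self.store) >= self.max_items:
                self._evict_one()
            self.store[key] = value
            self.last_access[key] = time.monotonic()

        def _evict_one(self):
            # Sample a few keys and evict the least recently used key among the samples.
            candidates = random.sample(list(self.store), min(self.samples, len(self.store)))
            victim = min(candidates, key=lambda k: self.last_access[k])
            del self.store[victim]
            del self.last_access[victim]

    cache = SampledLRUCache(max_items=3)
    for k in ("a", "b", "c", "d"):
        cache.set(k, k.upper())  # inserting "d" triggers one sampled eviction
    print(sorted(cache.store))   # three of the four keys survive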

Redis 3.0 made some improvements so that the approximate LRU result is closer to true LRU while keeping performance high.

An important property of Redis's LRU is that its precision can be tuned, that is, the number of keys sampled per eviction can be adjusted. This is done with the following directive:

    maxmemory-samples 5

The main reason Redis uses an approximate LRU is to save memory; for most applications, the approximate and exact versions are effectively equivalent.



