Using Redis as a cache server

Redis is often used as a cache, and in that role it is very convenient to have old data automatically evicted as new data is added. This behavior is well known in the developer community because it is the default behavior of the popular memcached system.

LRU is really just one of the eviction strategies Redis supports, and the maxmemory directive is what limits the memory size. This article first covers the maxmemory directive in general, then discusses the LRU algorithm used by Redis in depth. In fact, the LRU in Redis is only an approximation of the true LRU algorithm; why that is so is discussed below.

Configuring the maxmemory directive

The maxmemory directive was already touched on in the article "Configuration of the Redis Database"; here it is discussed in more depth. maxmemory tells Redis to hold its dataset within a fixed amount of memory. As already discussed, this directive can be set in the redis.conf configuration file, or at runtime using the config set command in newer versions of Redis.

For example, to limit Redis to 100 MB of memory, the following line can be used in redis.conf:

maxmemory 100MB

Note that setting maxmemory to 0 means Redis has no memory limit; this is the default on 64-bit systems, while 32-bit systems use an implicit default limit of 3GB.

When the specified memory limit is reached, Redis can follow one of several policies, selected via another configuration directive. For example, it can simply return an error to the client, or it can evict some old data so that the new data can be stored.

Eviction policies

The maxmemory-policy directive configures Redis's eviction policy. The possible values are as follows:

1) noeviction: when the configured memory limit is reached and a client runs a command that would use more memory (such as sadd), Redis simply returns an error to the client.

2) allkeys-lru: evict the least recently used (LRU) keys first, freeing memory so that the new data can be stored.

3) volatile-lru: evict the least recently used (LRU) keys first, but only among keys that have an expire set, so that the new data can be stored.

4) allkeys-random: evict random keys until enough memory is freed for the new data.

5) volatile-random: evict random keys, but only among keys that have an expire set, so that the new data can be stored.

6) volatile-ttl: evict keys that have an expire set, preferring those with a shorter remaining time to live (the ttl command reports a key's remaining time before expiration).

volatile-lru, volatile-random, and volatile-ttl behave like noeviction when their precondition is not met: that there are keys with an expire set available to evict.
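The policy list above can be summarized in a minimal Python sketch. This is a toy model, not Redis internals: keys map to a hypothetical (last_access_time, ttl) pair, where ttl is None for keys without an expire set.

```python
import random

def pick_victim(keys, policy):
    """Return the key a given maxmemory-policy would evict, or None.

    `keys` maps key -> (last_access_time, ttl_or_None); an illustrative
    model, not the Redis implementation.
    """
    if policy.startswith("volatile"):
        # volatile-* policies only consider keys with an expire set ...
        pool = {k: v for k, v in keys.items() if v[1] is not None}
        if not pool:
            # ... and degrade to noeviction when no such key exists.
            return None
    else:
        pool = dict(keys)

    if policy == "noeviction":
        return None
    if policy.endswith("random"):
        return random.choice(list(pool))
    if policy.endswith("lru"):
        # Least recently used: smallest last-access timestamp.
        return min(pool, key=lambda k: pool[k][0])
    if policy == "volatile-ttl":
        # Shortest remaining time to live.
        return min(pool, key=lambda k: pool[k][1])
    raise ValueError("unknown policy: " + policy)
```

For example, with keys `{"a": (100, None), "b": (50, 30), "c": (75, 10)}`, allkeys-lru and volatile-lru both pick "b", while volatile-ttl picks "c".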

It is very important to choose an eviction policy that fits your application's access pattern. That said, maxmemory-policy can be reconfigured at runtime without downtime, so you can monitor the cache hit rate while changing it and adjust accordingly.

Some common rules of thumb:

1) If you are not sure, allkeys-lru is a good choice. It at least ensures that the keys kept in memory are the frequently accessed ones rather than idle ones, which improves the hit rate of client requests.

        2) If all keys are expected to have equal access probability, or all keys are scanned periodically, using allkeys-random is a good choice.

3) If you want to steer eviction yourself by giving different keys different expiration times, volatile-ttl is a good choice.

The volatile-lru and volatile-random policies are mainly useful when a single instance holds both persistent data and cached data. However, it is usually better to run two separate Redis instances for those two roles.

It is worth noting that setting an expiration time on a key consumes some extra memory, so a policy such as allkeys-lru, which needs no expires, is more memory-efficient and reduces the pressure of eviction.

How eviction works

The eviction process runs as follows:

1) A client executes a new command that adds more data to the database;

2) Redis checks memory usage against the maxmemory setting; if the limit is exceeded, keys are evicted according to the configured policy;

3) The new command is executed and the data is saved in the database.

In this way memory usage repeatedly crosses the limit and is then brought back under it by evicting keys according to the policy.

If a single command causes a large amount of memory to be used, the limit may be noticeably exceeded for a while.
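The check-evict-execute loop above can be sketched in a few lines of Python. This is a toy model under an LRU policy: the cache is a plain dict with hypothetical per-key sizes and access times, not Redis code.

```python
def apply_write(cache, sizes, access, maxmemory, key, size, now):
    """Evict LRU keys until `size` fits under maxmemory, then store the key.

    `sizes` maps key -> memory cost, `access` maps key -> last-access time;
    both are illustrative bookkeeping, not Redis internals.
    """
    used = sum(sizes.values())
    # Step 2: evict according to the (LRU) policy until the write fits.
    while used + size > maxmemory and cache:
        victim = min(cache, key=lambda k: access[k])  # least recently used
        used -= sizes.pop(victim)
        del cache[victim], access[victim]
    if used + size > maxmemory:
        raise MemoryError("value larger than maxmemory")
    # Step 3: the new command executes and the data is saved.
    cache[key] = object()
    sizes[key] = size
    access[key] = now
```

For example, with maxmemory of 10 units, writing keys of size 4 at times 1, 2, 3 causes the oldest key to be evicted when the third write arrives.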

       Approximate LRU Algorithm

Redis's LRU algorithm is not an exact implementation. This means Redis cannot always pick the best eviction candidate, that is, the key that has gone unaccessed for the longest time among all keys. Tracking true LRU over a huge keyspace is possible, but the time and memory cost make it not worthwhile. Instead, Redis uses an approximate LRU: it samples a small number of keys and evicts the best LRU candidate among the sample.

Nonetheless, version 3.0 of Redis improves on this by keeping a set of good LRU candidates across evictions (think of it as a pool), which makes the algorithm's behavior much closer to that of true LRU.

The approximate LRU algorithm also gives users a way to tune its accuracy: changing the sample size. Statistically, as the sample size grows the behavior approaches true LRU, but the cost of each eviction grows with it, so users must choose a balance between the two, and that tuning is part of the fun.
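The sampling idea can be shown in a short Python sketch. This is an illustration of the technique, not the Redis implementation: last_access maps each key to a hypothetical access clock, and the victim is the least recently used key within a random sample.

```python
import random

def approx_lru_victim(last_access, samples=5, rng=random):
    """Approximate LRU: sample `samples` keys, evict the LRU of the sample.

    `last_access` maps key -> a monotonically increasing access timestamp;
    an illustrative model, not Redis internals.
    """
    keys = list(last_access)
    sample = rng.sample(keys, min(samples, len(keys)))
    return min(sample, key=lambda k: last_access[k])
```

When the sample size reaches the number of keys, this degenerates to exact LRU; with a sample size of 1 it is equivalent to random eviction, which shows why a larger sample tracks true LRU more closely.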

The sample size is configured in redis.conf as follows:

maxmemory-samples 5

Benchmarks show that with a sample size of 5, Redis 3.0 outperforms Redis 2.8: the former retains more of the recently accessed keys. With a sample size of 10, the behavior of Redis 3.0 becomes very close to that of true LRU.

In practice, if you are using Redis 3.0, you can raise the sample size to 10 to get closer to true LRU behavior at the cost of a little extra CPU, and then observe the difference in miss rate.

Finally, the sample size can also be changed at runtime with the config set command:

config set maxmemory-samples <count>

 

 
