Summary of Java interview questions (10)--Interviewer: What should I do if Redis memory is full?

Original link: juejin.im/post/5d674ac2e51d4557ca7fdd70

Editor: amateur grass

  • Redis memory size

  • Redis memory eviction

  • LRU algorithm

  • Implementation of LRU in Redis

  • LFU algorithm

  • A final question


Redis memory size

We know that Redis is an in-memory key-value database. Since the system's memory is limited, we can configure the maximum amount of memory Redis is allowed to use.

1. Configure via configuration file

Set the memory limit by adding the following line to the redis.conf configuration file under the Redis installation directory:

 
# Set the maximum memory Redis may use to 100MB
maxmemory 100mb

The Redis configuration file does not have to be the redis.conf under the installation directory; when starting the Redis server, you can pass an argument specifying which configuration file to use.

2. Set via command

Redis also supports changing the memory limit dynamically at runtime via commands:

# Set the maximum memory Redis may use to 100MB
127.0.0.1:6379> config set maxmemory 100mb

# Get the configured maximum memory limit
127.0.0.1:6379> config get maxmemory

If the maximum memory is not set, or is set to 0, memory usage is unlimited on 64-bit operating systems, and capped at 3GB on 32-bit operating systems.

Redis memory eviction

Since Redis's maximum memory can be capped, the configured memory can eventually be used up. What happens then if we keep writing data to Redis once memory is exhausted?

In fact, Redis defines several eviction policies to handle this situation:

  • noeviction (default) : stop serving write requests and return an error directly (except DEL and a few other special commands)

  • allkeys-lru : evict keys using the LRU algorithm, considering all keys

  • volatile-lru : evict keys using the LRU algorithm, considering only keys with an expiration time set

  • allkeys-random : evict random keys, considering all keys

  • volatile-random : evict random keys, considering only keys with an expiration time set

  • volatile-ttl : among keys with an expiration time set, evict by time-to-live; keys expiring sooner are evicted first

When using volatile-lru , volatile-random , or volatile-ttl , if there are no keys eligible for eviction, an error is returned just as with noeviction .

How to get and set the eviction policy

Get the current eviction policy:

127.0.0.1:6379> config get maxmemory-policy

Set the eviction policy in the configuration file (edit redis.conf):

maxmemory-policy allkeys-lru

Change the eviction policy at runtime via command:

127.0.0.1:6379> config set maxmemory-policy allkeys-lru

LRU algorithm

What is LRU?

As mentioned above, when Redis has used up its maximum allowed memory, the LRU algorithm can be used to evict keys. So what is the LRU algorithm?

LRU (Least Recently Used) is a cache replacement algorithm: it discards the least recently used entries first.

When memory is used as a cache, the cache size is usually fixed. When the cache is full and new data keeps arriving, some old data has to be evicted to free space for the new data.

This is where the LRU algorithm applies. Its core idea: if a piece of data has not been used recently, it is unlikely to be used in the near future, so it can be evicted first.

A simple LRU implementation in Java:

import java.util.HashMap;
import java.util.Map;

public class LRUCache<K, V> {
    // maximum number of entries the cache can hold
    private final int capacity;
    // current number of entries
    private int count;
    // key -> node lookup
    private final Map<K, Node> nodeMap;
    // sentinel head and tail of the doubly linked list
    private final Node head;
    private final Node tail;

    public LRUCache(int capacity) {
        if (capacity < 1) {
            throw new IllegalArgumentException(String.valueOf(capacity));
        }
        this.capacity = capacity;
        this.nodeMap = new HashMap<>();
        // sentinel head and tail nodes avoid null checks at the list ends
        this.head = new Node(null, null);
        this.tail = new Node(null, null);
        head.next = tail;
        tail.pre = head;
    }

    public void put(K key, V value) {
        Node node = nodeMap.get(key);
        if (node == null) {
            if (count >= capacity) {
                // evict the least recently used node first
                removeNode();
            }
            node = new Node(key, value);
            addNode(node);
        } else {
            // update the value and mark the node as most recently used
            node.value = value;
            moveNodeToHead(node);
        }
    }

    public V get(K key) {
        Node node = nodeMap.get(key);
        if (node == null) {
            return null;
        }
        moveNodeToHead(node);
        return node.value;
    }

    private void removeNode() {
        // the node just before the tail sentinel is the least recently used
        Node node = tail.pre;
        removeFromList(node);
        nodeMap.remove(node.key);
        count--;
    }

    private void removeFromList(Node node) {
        Node pre = node.pre;
        Node next = node.next;

        pre.next = next;
        next.pre = pre;

        node.next = null;
        node.pre = null;
    }

    private void addNode(Node node) {
        // new nodes are the most recently used, so they go to the head
        addToHead(node);
        nodeMap.put(node.key, node);
        count++;
    }

    private void addToHead(Node node) {
        Node next = head.next;
        next.pre = node;
        node.next = next;
        node.pre = head;
        head.next = node;
    }

    private void moveNodeToHead(Node node) {
        // unlink the node, then re-insert it at the head
        removeFromList(node);
        addToHead(node);
    }

    private class Node {
        final K key;
        V value;
        Node pre;
        Node next;

        Node(K key, V value) {
            this.key = key;
            this.value = value;
        }
    }
}
 

 

The above code implements a simple LRU cache. It is short and commented; read it carefully and it is easy to follow.
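For comparison, the JDK's LinkedHashMap can provide the same LRU behavior with far less code when constructed in access order. This is an illustrative sketch (the class name LinkedHashLru is ours, not from the article): every get/put moves an entry to the back of the iteration order, and removeEldestEntry evicts the front entry once capacity is exceeded.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache built on the JDK's LinkedHashMap; accessOrder=true moves
// an entry to the tail of the iteration order on every access.
public class LinkedHashLru<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LinkedHashLru(int capacity) {
        super(16, 0.75f, true); // true = access order instead of insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // evict the least recently used entry once we exceed capacity
        return size() > capacity;
    }
}
```

For example, in a cache of capacity 2, after put(1), put(2), get(1), put(3), key 2 is the one evicted, since get(1) marked key 1 as recently used.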

Implementation of LRU in Redis

Approximate LRU algorithm

Redis uses an approximate LRU algorithm, which is not quite the same as a strict LRU algorithm.

The approximate LRU algorithm evicts data via random sampling: each time, 5 keys (by default) are sampled at random, and the least recently used key among them is evicted.

The sample size can be changed with the maxmemory-samples parameter, e.g. maxmemory-samples 10. The larger maxmemory-samples is, the closer the eviction results are to strict LRU.

To implement the approximate LRU algorithm, Redis adds an extra 24-bit field to each key, storing the time the key was last accessed.
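The sampling idea can be sketched in a few lines of Java. This is a simplification for illustration only (the class and method names are invented here): real Redis stores a 24-bit clock per key inside its own dictionary rather than using a HashMap.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Simplified sketch of approximate LRU: instead of maintaining a full
// recency ordering, sample a few random keys and evict the one that was
// accessed the longest ago.
public class SampledLru {
    // key -> logical "clock" value of its last access
    private final Map<String, Long> lastAccess = new HashMap<>();
    private final Random random = new Random(42);
    private long clock = 0;

    // record an access to a key (creates it if absent)
    public void touch(String key) {
        lastAccess.put(key, ++clock);
    }

    // sample `samples` random keys and evict the least recently used of them
    public String evictOne(int samples) {
        List<String> keys = new ArrayList<>(lastAccess.keySet());
        String victim = null;
        long oldest = Long.MAX_VALUE;
        for (int i = 0; i < samples && !keys.isEmpty(); i++) {
            String candidate = keys.get(random.nextInt(keys.size()));
            long t = lastAccess.get(candidate);
            if (t < oldest) {
                oldest = t;
                victim = candidate;
            }
        }
        lastAccess.remove(victim);
        return victim;
    }

    public int size() {
        return lastAccess.size();
    }
}
```

The trade-off is clear from the sketch: sampling avoids maintaining a global ordering of all keys, at the cost of occasionally evicting a key that is not the true global LRU victim.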

Redis 3.0's optimization of approximate LRU

Redis 3.0 improved the approximate LRU algorithm. The new algorithm maintains a candidate pool (of size 16) in which entries are sorted by access time; the first batch of randomly sampled keys goes straight into the pool.

Each subsequently sampled key is placed into the pool only if its access time is earlier than at least one entry already in the pool, until the pool is full.

Once the pool is full, inserting a new key pushes out the entry with the latest access time (the most recently accessed one).

When a key must be evicted, the key with the earliest access time (idle the longest) is taken directly from the pool and evicted.
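The candidate pool can be sketched roughly as follows. This is an illustrative simplification (the names are invented here, and a real implementation must also handle keys with identical access times, which a TreeMap keyed by time does not):

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch of an eviction candidate pool: sampled keys enter a small pool
// ordered by last-access time, and eviction always takes the stalest entry.
public class EvictionPool {
    static final int POOL_SIZE = 16;
    // maps last-access time -> key; firstKey() is the stalest candidate
    private final TreeMap<Long, String> pool = new TreeMap<>();

    // offer a sampled key; it enters only if the pool has room, or if it is
    // staler (earlier access time) than the freshest entry already in the pool
    public void offer(String key, long lastAccess) {
        if (pool.size() < POOL_SIZE) {
            pool.put(lastAccess, key);
        } else if (lastAccess < pool.lastKey()) {
            pool.pollLastEntry(); // drop the most recently accessed candidate
            pool.put(lastAccess, key);
        }
    }

    // evict the key with the earliest last-access time (idle the longest)
    public String evict() {
        Map.Entry<Long, String> stalest = pool.pollFirstEntry();
        return stalest == null ? null : stalest.getValue();
    }
}
```

The pool lets Redis remember good eviction candidates across sampling rounds, which is why Redis 3.0 with the same sample size tracks strict LRU more closely than Redis 2.8.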

Comparison of LRU algorithm

We can compare the accuracy of the LRU variants with an experiment. First add n items to Redis, using up all of its available memory, then add another n/2 new items; at this point part of the old data must be evicted.

Under strict LRU, the evicted items should be exactly the n/2 items that were added first.

The resulting comparison chart of the LRU variants (the image, linked in the original post, is not reproduced here) contains points in three different colors:

  • Light gray points are evicted data

  • Gray points are old data that has not been evicted

  • Green points are newly added data

The chart shows that Redis 3.0 with 10 samples produces results closest to strict LRU, and that with the same 5 samples, Redis 3.0 performs better than Redis 2.8.

LFU algorithm

The LFU (Least Frequently Used) algorithm is a new eviction policy added in Redis 4.0.

Its core idea is to evict keys based on how frequently they have been accessed recently: rarely accessed keys are evicted first, while frequently accessed keys are kept.

The LFU algorithm reflects how hot a key really is better than LRU does. Under LRU, a key that has not been accessed for a long time but happens to be accessed once is treated as hot data and kept, while keys that are more likely to be accessed in the future may be evicted instead.

Under LFU this does not happen, because a single access is not enough to make a key hot data.
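For reference, the LFU counter in Redis is not a plain frequency count but an 8-bit logarithmic counter that is incremented probabilistically, so even heavily accessed keys saturate slowly. Below is a rough Java sketch of that increment under the default lfu-log-factor of 10 (the class name is ours, and the details are a simplification of Redis's behavior, not a faithful port):

```java
import java.util.Random;

// Sketch of a probabilistic logarithmic LFU counter: the higher the counter
// already is, the lower the probability that an access increments it.
public class LfuCounter {
    static final int LFU_INIT_VAL = 5;   // new keys start here, as in Redis
    static final double LOG_FACTOR = 10; // redis.conf lfu-log-factor default
    private final Random random = new Random(7);

    public int logIncr(int counter) {
        if (counter >= 255) {
            return 255; // 8-bit saturation
        }
        double r = random.nextDouble();
        double baseval = Math.max(counter - LFU_INIT_VAL, 0);
        // increment probability shrinks as the counter grows
        double p = 1.0 / (baseval * LOG_FACTOR + 1);
        return r < p ? counter + 1 : counter;
    }
}
```

With this scheme a counter value near 255 implies an enormous number of accesses, which is how Redis fits a meaningful frequency signal into 8 bits (alongside a decay mechanism, controlled by lfu-decay-time, that lowers counters of keys that stop being accessed).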

There are two LFU policies:

  • volatile-lfu : use the LFU algorithm to evict keys, considering only keys with an expiration time set

  • allkeys-lfu : use the LFU algorithm to evict keys, considering all keys

These two eviction policies are set and used in the same way as the ones described above. Note, however, that they can only be set on Redis 4.0 and later; setting them on an older version results in an error.

A final question

Finally, a small question: attentive readers may have noticed that this article does not explain why Redis uses an approximate LRU algorithm rather than an exact one.

Think about it for yourself, and feel free to discuss and study it together.

 

Origin: blog.csdn.net/lsx2017/article/details/114040684