Redis out of memory

Setting the Redis memory limit

Redis is an in-memory key-value database. Since the machine's memory is limited, we can configure the maximum amount of memory Redis is allowed to use.

1. Setting it in the configuration file

Add the following setting to the redis.conf configuration file in the Redis installation directory:

# Set the maximum memory Redis may use to 100 MB
maxmemory 100mb

Redis does not have to use the redis.conf file from the installation directory; when starting the Redis server you can pass a parameter specifying which configuration file to use.
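For example, the path of a configuration file can be given as the first argument when starting the server (the path below is just a placeholder):

redis-server /path/to/redis.conf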

2. Setting it with a command

Redis also supports changing the memory limit dynamically at runtime with a command:

# Set the maximum memory Redis may use to 100 MB
127.0.0.1:6379> config set maxmemory 100mb
# Read back the configured maximum memory
127.0.0.1:6379> config get maxmemory

If the maximum memory is not set, or is set to 0, Redis places no limit on memory usage on 64-bit operating systems, and uses an implicit limit of 3 GB on 32-bit operating systems.

What happens when Redis runs out of memory

Since the maximum memory Redis may occupy can be capped, that memory will eventually run out. When it does, can we keep adding data to Redis even though there is no memory left?

Redis defines several eviction policies to deal with this situation:

  • noeviction (the default policy): writes that require more memory are refused and return an error (DEL and a few other special commands are still allowed)

  • allkeys-lru: evict the least recently used keys, considering all keys

  • volatile-lru: evict the least recently used keys, considering only keys with an expiration time set

  • allkeys-random: evict random keys, chosen from all keys

  • volatile-random: evict random keys, chosen from keys with an expiration time set

  • volatile-ttl: among keys with an expiration time set, evict according to the expiration time: the sooner a key expires, the sooner it is evicted

With volatile-lru, volatile-random, and volatile-ttl, if there are no keys eligible for eviction, Redis falls back to behaving like noeviction and returns an error.
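For example, under the default noeviction policy, a write that needs memory beyond the limit is rejected with an OOM error along these lines (the exact wording may differ between Redis versions):

127.0.0.1:6379> set somekey somevalue
(error) OOM command not allowed when used memory > 'maxmemory'.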

How to get and set the eviction policy

Get the current eviction policy:

127.0.0.1:6379> config get maxmemory-policy

Set the eviction policy in the configuration file (edit redis.conf):

maxmemory-policy allkeys-lru

Change the eviction policy with a command:

127.0.0.1:6379> config set maxmemory-policy allkeys-lru

LRU algorithm

What is LRU?

As mentioned above, when Redis reaches its maximum memory it can use the LRU algorithm to evict keys. So what exactly is the LRU algorithm?

LRU (Least Recently Used) is a cache replacement algorithm. When memory is used as a cache, the cache size is usually fixed. When the cache is full and new data still needs to be added, some old data must be evicted to free space for the new data. The LRU algorithm can be used for this. Its core idea: if a piece of data has not been used recently, it is unlikely to be used in the near future, so it can be evicted first.

Implementing a simple LRU cache in Java

import java.util.HashMap;
import java.util.Map;

public class LRUCache<K, V> {
    // maximum number of entries the cache may hold
    private final int capacity;
    // current number of entries
    private int count;
    // key -> node lookup table
    private final Map<K, Node<K, V>> nodeMap;
    // sentinel head and tail of the doubly linked list (most recently used near the head)
    private final Node<K, V> head;
    private final Node<K, V> tail;

    public LRUCache(int capacity) {
        if (capacity < 1) {
            throw new IllegalArgumentException(String.valueOf(capacity));
        }
        this.capacity = capacity;
        this.nodeMap = new HashMap<>();
        // sentinel nodes avoid null checks for an empty head or tail
        this.head = new Node<>(null, null);
        this.tail = new Node<>(null, null);
        head.next = tail;
        tail.pre = head;
    }

    public void put(K key, V value) {
        Node<K, V> node = nodeMap.get(key);
        if (node == null) {
            if (count >= capacity) {
                // evict the least recently used node first
                removeNode();
            }
            node = new Node<>(key, value);
            // add the new node at the head (most recently used position)
            addNode(node);
        } else {
            // update the value and mark the node as most recently used
            node.value = value;
            moveNodeToHead(node);
        }
    }

    public V get(K key) {
        Node<K, V> node = nodeMap.get(key);
        if (node == null) {
            return null;
        }
        moveNodeToHead(node);
        return node.value;
    }

    private void removeNode() {
        // the node just before the tail sentinel is the least recently used
        Node<K, V> node = tail.pre;
        removeFromList(node);
        nodeMap.remove(node.key);
        count--;
    }

    private void removeFromList(Node<K, V> node) {
        Node<K, V> pre = node.pre;
        Node<K, V> next = node.next;

        pre.next = next;
        next.pre = pre;

        node.next = null;
        node.pre = null;
    }

    private void addNode(Node<K, V> node) {
        // new nodes go to the head of the list
        addToHead(node);
        nodeMap.put(node.key, node);
        count++;
    }

    private void addToHead(Node<K, V> node) {
        Node<K, V> next = head.next;
        next.pre = node;
        node.next = next;
        node.pre = head;
        head.next = node;
    }

    private void moveNodeToHead(Node<K, V> node) {
        // unlink the node and re-insert it right after the head sentinel
        removeFromList(node);
        addToHead(node);
    }

    private static class Node<K, V> {
        K key;
        V value;
        Node<K, V> pre;
        Node<K, V> next;

        Node(K key, V value) {
            this.key = key;
            this.value = value;
        }
    }
}

The code above implements a simple LRU cache. It is short and commented, so a careful read should make it easy to follow.
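A quick usage example (the class name LRUCacheDemo is just for illustration) shows the eviction order: with capacity 2, touching "a" makes "b" the least recently used key, so "b" is the one evicted when "c" is added.

public class LRUCacheDemo {
    public static void main(String[] args) {
        LRUCache<String, Integer> cache = new LRUCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");                      // "a" is now the most recently used key
        cache.put("c", 3);                   // capacity exceeded: "b" is evicted
        System.out.println(cache.get("a")); // 1
        System.out.println(cache.get("b")); // null, evicted
        System.out.println(cache.get("c")); // 3
    }
}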

How Redis implements LRU

Approximate LRU algorithm

Redis uses an approximate LRU algorithm, which is not quite the same as a conventional LRU. The approximate LRU algorithm evicts data by random sampling: each time, it randomly samples 5 keys (by default) and evicts the least recently used key among them.

The sample size can be changed with the maxmemory-samples parameter. The larger maxmemory-samples is, the closer the eviction result comes to strict LRU, as in the example below.
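For example, to make each eviction sample 10 keys, add the following line to redis.conf (it can also be changed at runtime with config set maxmemory-samples 10):

maxmemory-samples 10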

To implement the approximate LRU algorithm, Redis adds an extra 24-bit field to each key, which stores the time the key was last accessed.

The Redis 3.0 optimization of approximate LRU

Redis 3.0 improved the approximate LRU algorithm. The new algorithm maintains a candidate pool (of size 16) whose entries are kept sorted by access time. The first batch of randomly sampled keys goes straight into the pool; after that, a randomly sampled key is added only if its last access time is earlier than the smallest access time currently in the pool, until the pool is full. Once the pool is full, when a new key needs to go in, the entry with the latest access time (the most recently accessed one) is removed from the pool to make room.

When eviction is needed, Redis simply picks the key in the pool with the earliest access time (the one that has gone unaccessed the longest) and evicts it.
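As a rough illustration only (this is not Redis's actual C code, and the class and method names here are made up for the sketch), the candidate-pool idea could look like this in Java:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Rough sketch of the Redis 3.0 candidate-pool idea; not the real implementation.
public class EvictionPoolSketch {
    private static final int POOL_SIZE = 16;

    // One candidate: a key and the time it was last accessed.
    private static class Candidate {
        final String key;
        final long lastAccessTime;

        Candidate(String key, long lastAccessTime) {
            this.key = key;
            this.lastAccessTime = lastAccessTime;
        }
    }

    // Pool kept sorted by last access time, oldest entry first.
    private final List<Candidate> pool = new ArrayList<>();

    // Offer a randomly sampled key to the pool.
    public void offer(String key, long lastAccessTime) {
        if (pool.size() >= POOL_SIZE) {
            // Pool is full: only keys older than the newest entry get in,
            // and the most recently accessed entry is dropped to make room.
            Candidate newest = pool.get(pool.size() - 1);
            if (lastAccessTime >= newest.lastAccessTime) {
                return;
            }
            pool.remove(pool.size() - 1);
        }
        pool.add(new Candidate(key, lastAccessTime));
        pool.sort(Comparator.comparingLong((Candidate c) -> c.lastAccessTime));
    }

    // When eviction is needed: take the key that has gone unaccessed the longest.
    public String pickVictim() {
        if (pool.isEmpty()) {
            return null;
        }
        return pool.remove(0).key;
    }
}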

Comparing the LRU algorithms

We can compare the accuracy of the different LRU algorithms experimentally: first add n pieces of data to Redis until the available memory is used up, then add another n/2 new entries. At this point some data must be evicted, and under strict LRU the evicted data should be exactly the first n/2 entries that were added. The comparison of the algorithms is shown in the figure below (source):

In the figure, the points appear in three different colors:

  • Light gray points are the data that were evicted

  • Gray points are the old data that were not evicted

  • Green points are the newly added data

We can see that Redis 3.0 with a sample size of 10 comes closest to strict LRU. With the same sample size of 5, Redis 3.0 also outperforms Redis 2.8.

LFU algorithm

The LFU algorithm is a new eviction strategy added in Redis 4.0. Its full name is Least Frequently Used. Its core idea is to evict keys based on how frequently they have been accessed recently: rarely accessed keys are evicted first, while frequently accessed keys stay.

LFU reflects the "hotness" of a key better than LRU. With LRU, a key that has not been accessed for a long time but happens to be touched once is suddenly treated as hot data and will not be evicted, while keys that are actually likely to be accessed in the future may be evicted instead. This does not happen with LFU, because a single access is not enough to make a key count as hot data.
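To make the idea concrete, here is a tiny illustrative LFU cache in Java. It is not how Redis implements LFU internally; the class name and the exact-counter approach are assumptions for the sketch, which simply counts accesses per key and evicts the key with the smallest count.

import java.util.HashMap;
import java.util.Map;

// Toy LFU cache: evicts the key with the lowest access count.
// Purely illustrative; not how Redis implements LFU internally.
public class SimpleLFUCache<K, V> {
    private final int capacity;
    private final Map<K, V> values = new HashMap<>();
    private final Map<K, Long> accessCounts = new HashMap<>();

    public SimpleLFUCache(int capacity) {
        this.capacity = capacity;
    }

    public V get(K key) {
        if (!values.containsKey(key)) {
            return null;
        }
        // Every access bumps the key's frequency counter.
        accessCounts.merge(key, 1L, Long::sum);
        return values.get(key);
    }

    public void put(K key, V value) {
        if (!values.containsKey(key) && values.size() >= capacity) {
            evictLeastFrequentlyUsed();
        }
        values.put(key, value);
        accessCounts.merge(key, 1L, Long::sum);
    }

    private void evictLeastFrequentlyUsed() {
        K victim = null;
        long minCount = Long.MAX_VALUE;
        // A linear scan is fine for a sketch; real implementations use better structures.
        for (Map.Entry<K, Long> entry : accessCounts.entrySet()) {
            if (entry.getValue() < minCount) {
                minCount = entry.getValue();
                victim = entry.getKey();
            }
        }
        if (victim != null) {
            values.remove(victim);
            accessCounts.remove(victim);
        }
    }
}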

LFU comes in two policies:

  • volatile-lfu: evict keys using LFU, considering only keys with an expiration time set

  • allkeys-lfu: evict keys using LFU, considering all keys

These two policies are set in the same way as described earlier. The one thing to note is that they are only available in Redis 4.0 and later; trying to set them on an earlier version results in an error.
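For example (the error text on pre-4.0 versions may differ; treat this transcript as approximate):

127.0.0.1:6379> config set maxmemory-policy allkeys-lfu
OK

# On a Redis version earlier than 4.0, the same command is rejected, roughly:
127.0.0.1:6379> config set maxmemory-policy allkeys-lfu
(error) ERR Invalid argument 'allkeys-lfu' for CONFIG SET 'maxmemory-policy'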

A final question

Finally, a small question to leave you with: careful readers may have noticed that I did not explain why Redis uses an approximate LRU algorithm rather than an exact one. Feel free to share your answer in the comments so we can discuss and learn together.

Origin www.cnblogs.com/zhangfengshi/p/11599485.html