[Redis] redis expiration policy

redis expiration policy

When using redis for caching, we usually set an expiration time on keys. So how does redis actually get rid of that expired data?

The answer is: periodic deletion + lazy deletion.

  • Periodic deletion: redis periodically (by default every 100ms) takes a random sample of keys that have an expiration time set and deletes the expired ones. Because the sampling is random, many expired keys can escape the check and keep wasting memory.
  • Lazy deletion: this is where lazy deletion comes in. When a client accesses a key, redis first checks whether that key has expired; if it has, redis deletes it before returning (see the snippet after this list).
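
To make lazy deletion concrete, here is a sketch of what it looks like from a client's point of view (a redis-cli session; the key name and the timing are made up for illustration):

127.0.0.1:6379> SET session:42 "data" EX 10
OK
127.0.0.1:6379> TTL session:42
(integer) 10
... wait more than 10 seconds ...
127.0.0.1:6379> GET session:42
(nil)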

Periodic deletion + lazy deletion sounds perfect, but if an expired key is never sampled and never accessed again, a large amount of stale data can still sit in memory. This is where redis's memory eviction policies come in.

redis memory eviction policies

  • noeviction: when memory is insufficient, new writes simply return an error.
  • allkeys-lru: when memory is insufficient, evict the least recently used key among all keys.
  • allkeys-random: when memory is insufficient, evict a random key among all keys.
  • volatile-lru: when memory is insufficient, evict the least recently used key among the keys that have an expiration time set.
  • volatile-random: when memory is insufficient, evict a random key among the keys that have an expiration time set.
  • volatile-ttl: when memory is insufficient, among the keys that have an expiration time set, evict the one closest to expiring (smallest TTL) first.

A way to remember the six policies:

  • No eviction, just report an error: noeviction.
  • Evict among all keys: 1. allkeys-lru 2. allkeys-random
  • Evict among the keys with an expiration time set: 1. volatile-lru 2. volatile-random 3. volatile-ttl

The most commonly used policy is allkeys-lru.
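
To enable it, set a memory limit and the policy in redis.conf (the 100mb limit below is just an example value); the same setting can also be changed at runtime with CONFIG SET:

maxmemory 100mb
maxmemory-policy allkeys-lru

127.0.0.1:6379> CONFIG SET maxmemory-policy allkeys-lru
OK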

Implementing a simple LRU (least recently used) algorithm

package com.amber;

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

public class LRULinkedHashMap<K, V> extends LinkedHashMap<K, V> {
    // maximum capacity; once exceeded, the eldest entry is evicted
    private final int maxCapacity;
    // default load factor
    private static final float DEFAULT_LOAD_FACTOR = 0.75f;

    public LRULinkedHashMap(int maxCapacity) {
        super(maxCapacity, DEFAULT_LOAD_FACTOR, true);
        this.maxCapacity = maxCapacity;
    }
    
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // evict the eldest (least recently used) entry once size exceeds maxCapacity
        return size() > maxCapacity;
    }

    public static void main(String[] args) {
        LRULinkedHashMap<String, String> lruLinkedHashMap = new LRULinkedHashMap<>(5);
        lruLinkedHashMap.put("1", "1");
        lruLinkedHashMap.put("2", "2");
        lruLinkedHashMap.put("3", "3");
        lruLinkedHashMap.put("4", "4");
        lruLinkedHashMap.put("5", "5");
        Iterator<Map.Entry<String, String>> iterator = lruLinkedHashMap.entrySet().iterator();
        while (iterator.hasNext()) {
            System.out.println(iterator.next());
        }
        lruLinkedHashMap.get("1"); // refresh "1" so it becomes the most recently used
        System.out.println("exceeding max capacity");
        lruLinkedHashMap.put("6", "6");
        iterator = lruLinkedHashMap.entrySet().iterator();
        while (iterator.hasNext()) {
            System.out.println(iterator.next());
        }

    }
}

Result:

1=1
2=2
3=3
4=4
5=5
exceeding max capacity
3=3
4=4
5=5
1=1
6=6

Process finished with exit code 0

From the output above, when the map exceeds its maximum capacity it evicts "2", the least recently used entry, rather than "1", the first one inserted, because the get("1") call refreshed "1". That is a simple LRU at work.

super(maxCapacity, DEFAULT_LOAD_FACTOR, true);

This calls the parent class constructor:

    public LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder) {
        super(initialCapacity, loadFactor);
        this.accessOrder = accessOrder;
    }

When accessOrder is true, the entry accessed most recently is moved to the tail of the iteration order; the default is false, which keeps insertion order.
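
A minimal sketch of that behavior on a plain LinkedHashMap (the class name AccessOrderDemo is made up for illustration):

import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: iteration runs from least to most recently accessed
        Map<String, String> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("a", "1");
        map.put("b", "2");
        map.put("c", "3");
        map.get("a"); // accessing "a" moves it to the tail
        System.out.println(map); // prints {b=2, c=3, a=1}
    }
}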

Source: www.cnblogs.com/amberbar/p/11771273.html