Simple Design and Implementation of the LRU Cache Algorithm


Foreword

LRU stands for Least Recently Used. Its basic idea is that data that has not been used recently is unlikely to be used in the near future, so when the cache is full and new data arrives, the least recently used entries are evicted first. For example, with a capacity of 3, after accessing A, B, C and then A again, inserting D evicts B, the least recently used entry.


1. The order of keys in LinkedHashMap

LinkedHashMap is a special Map implementation. Internally it uses a doubly linked list to record the order in which keys were added to or accessed in the map:

1. When a LinkedHashMap is constructed with the no-argument constructor, it records the insertion order of keys by default. In the following example, the output is: {F=100, A=200, M=300}

 public static void main(String[] args) {
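        // the no-arg constructor keeps keys in insertion order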
        LinkedHashMap<Object,Object> map=new LinkedHashMap<>();
        map.put("F", 100);
        map.put("A", 200);
        map.put("M", 300);
        map.get("A");
        System.out.println(map);

    }

2. When a LinkedHashMap is constructed with the three-argument constructor and accessOrder set to true, it records the access order of keys instead. In the following example, get("A") moves A behind M, so the output is:

{F=100, M=300, A=200, P=400}

 public static void main(String[] args) {
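        // third constructor argument accessOrder=true: the map records access order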
        LinkedHashMap<Object,Object> map=new LinkedHashMap<>(3,0.75f,true);
        map.put("F", 100);
        map.put("A", 200);
        map.put("M", 300);
        map.get("A");
        map.put("P",400);
        System.out.println(map);

    }

3. The LinkedHashMap class has a removeEldestEntry() method. It is invoked on every put to decide whether the map has exceeded its capacity. It returns a boolean: if true, the eldest entry (in access-order mode, the least recently accessed one; in insertion-order mode, the earliest added) is removed so the map stays within capacity. The default implementation returns false.

Create an anonymous inner subclass of LinkedHashMap and override removeEldestEntry() so that when the map exceeds its capacity, the eldest entry is removed as a new one is added. The example is as follows, and the output is:

{F=100, A=200, E=400}

With this behavior, we can design a simple LRU cache based on LinkedHashMap.

 public static void main(String[] args) {
        LinkedHashMap<Object,Object> map=
                new LinkedHashMap(3,0.75f,true){
                    //called on every put to decide whether the map has exceeded its capacity;
                    //returning true removes the eldest (least recently accessed) entry
                    @Override
                    protected boolean removeEldestEntry(Map.Entry eldest) {
                        return size()>3;//the default implementation returns false (never remove, even when full)
                    }
                };
        map.put("F", 100);
        map.put("A", 200);
        map.put("M", 300);
        map.get("F");
        map.get("A");
        map.put("E", 400);
        System.out.println(map);

    }



2. Designing the LRU Cache

1. Define the Cache interface

This interface serves as the common specification; the code is as follows:

public interface Cache {
    
    void putObject(Object key,Object value);

    Object getObject(Object key);

    Object removeObject(Object key);

    void clear();

    int size();
}

2. Implement the interface

A simple Cache implementation:

/**
 * A simple Cache implementation
 * 1) Storage structure: hash table (backed by Java's HashMap)
 * 2) Eviction algorithm: none (grows until memory runs out)
 * 3) Thread safety: no
 * 4) References to cached objects: strong references
 * 5) Object retrieval: shallow (returns the object reference)
 */
public class PerpetualCache implements Cache{

    private HashMap<Object,Object> cache=new HashMap<>();
    @Override
    public void putObject(Object key, Object value) {
        cache.put(key, value);
    }

    @Override
    public Object getObject(Object key) {
        return cache.get(key);
    }

    @Override
    public Object removeObject(Object key) {
        return cache.remove(key);
    }

    @Override
    public void clear() {
        cache.clear();
    }

    @Override
    public int size() {
        return cache.size();
    }

    @Override
    public String toString() {
        return "PerpetualCache{" +
                "cache=" + cache.toString() +
                '}';
    }
}

3. Design the LruCache

The LruCache class has four attributes:

Cache cache, used to store the data

int maxCap, the maximum capacity

LinkedHashMap<Object,Object> keyAccessOrders, used to record the access order of keys

Object eldEstKey, used to record the least recently accessed key

a. When we add an element to the cache, we also put its key into keyAccessOrders. The value stored in keyAccessOrders can be anything, but the key must be the same key used in the cache, because the key is what maintains the order.

b. Likewise, when we access, remove or clear entries in the cache, keyAccessOrders must be kept in sync.

c. When the container is full, eldEstKey is set to the least recently accessed key, so checking whether eldEstKey is non-null tells us whether the container is full. If it is, remove that key's entry from the cache (it has already been removed from keyAccessOrders automatically) and reset eldEstKey to null.

public class LruCache implements Cache{

    private Cache cache;                                   //underlying Cache that stores the data
    private int maxCap;                                    //maximum capacity
    private LinkedHashMap<Object,Object> keyAccessOrders;  //records the access order of keys
    private Object eldEstKey;                              //least recently accessed key (set when the cache is full)

    public LruCache(Cache cache,int maxCap){
        this.cache=cache;
        this.maxCap=maxCap;
        this.keyAccessOrders=new LinkedHashMap<Object,Object>(maxCap,0.75f,true){
            //called on every put; returning true removes the eldest (least recently accessed) entry
            @Override
            protected boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
                boolean isFull= size()>maxCap;
                if(isFull)eldEstKey=eldest.getKey();//remember the least recently accessed key
                return isFull;
            }
        };
    }

    @Override
    public void putObject(Object key, Object value) {
        //1. add the new element to the underlying Cache
        cache.putObject(key, value);
        //2. record the access order of the key
        keyAccessOrders.put(key, key);
        //3. if the Cache is full, remove the least recently accessed key/value
        if(eldEstKey!=null){
            cache.removeObject(eldEstKey);
            eldEstKey=null;
        }
    }

    @Override
    public Object getObject(Object key) {
        //1. record the access order of the key
        keyAccessOrders.get(key);
        //2. return the value for the given key from the cache
        return cache.getObject(key);
    }

    @Override
    public Object removeObject(Object key) {
        Object object = cache.removeObject(key);
        keyAccessOrders.remove(key);
        return object;
    }

    @Override
    public void clear() {
           cache.clear();
           keyAccessOrders.clear();
    }

    @Override
    public int size() {
        return cache.size();
    }

    @Override
    public String toString() {
        return "LruCache{" +
                "cache=" + cache.toString() +
                '}';
    }

Running the following test produces the output: LruCache{cache=PerpetualCache{cache={A=100, D=400, E=500}}}

public static void main(String[] args) {
        Cache cache=new LruCache(//adds the LRU eviction policy
                new PerpetualCache(),//stores the data
                3);
        cache.putObject("A", 100);
        cache.putObject("B", 200);
        cache.putObject("C", 300);
        cache.getObject("A");
        cache.putObject("D", 400);
        cache.putObject("E", 500);
        System.out.println(cache);
    }

}

3. Usage

Take a real business scenario as an example:

When the client sends a request to the server for the first time, the console prints "Get Data from Database": the key is not found in the cache, so the data is fetched from the database and then stored in the cache.

When the client sends the same request again, as long as the cached entry has not been evicted, the console prints nothing, because the server returns the data directly from the cache.

private Cache cache=new LruCache(new PerpetualCache(),3);
@GetMapping("/list")
public List<User> list(){
    Object obj = cache.getObject("userListKey");
    if(obj != null){
        return (List<User>)obj;//downcast to the cached list type
    }
    System.out.println("Get Data from Database");
    List<User> list = mapper.list();
    cache.putObject("userListKey",list);
    return list;
}
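
The example above only reads from the cache. If the underlying data changes, the cached list would keep being served until it is evicted. Below is a minimal sketch of a companion write endpoint, assuming a hypothetical mapper.save(User) method (not shown in the original), that removes the cached entry so the next /list request reloads from the database:

@PostMapping("/save")
public void save(@RequestBody User user){
    mapper.save(user);//hypothetical write method, assumed for illustration
    cache.removeObject("userListKey");//invalidate the cached list so the next /list call hits the database
}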


Summary

If you only need a quick LRU implementation, the design can be even simpler: just rely on the key ordering of the LinkedHashMap class. The design above is more flexible, however, and it also makes it easy to enhance the cache with further decoration later.
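
For reference, here is a minimal sketch of that simpler approach, building the cache directly on LinkedHashMap as in section 1. The class name SimpleLruCache is only for illustration and is not part of the design above.

import java.util.LinkedHashMap;
import java.util.Map;

//A minimal LRU cache built directly on LinkedHashMap (illustrative sketch)
public class SimpleLruCache<K,V> extends LinkedHashMap<K,V>{

    private final int maxCap;//maximum capacity

    public SimpleLruCache(int maxCap){
        super(maxCap,0.75f,true);//accessOrder=true: iteration order follows access order
        this.maxCap=maxCap;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K,V> eldest) {
        return size()>maxCap;//evict the least recently accessed entry once over capacity
    }
}

The trade-off is that this version is tied to LinkedHashMap as its storage, whereas the Cache/LruCache split above lets the storage backend be swapped out or decorated further.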

Origin: blog.csdn.net/weixin_72125569/article/details/126754008