Summary of Java Data Structures and Algorithms (35) -- LRU Algorithm: Principle and Implementation


What is LRU

Even in modern computers, memory is still a relatively scarce and expensive resource, so using and managing the limited amount available to deliver good performance to users is a problem worth solving.

LRU (Least Recently Used) means "least recently used"; it is a classic memory (cache) eviction policy.

In plain terms, the LRU algorithm assumes that data accessed frequently in the recent past is likely to be accessed again soon, so such data is retained, while data that has not been accessed recently is evicted first.

LRU algorithm implementation ideas

According to this definition, an LRU cache needs:

  • A parameter cap as the maximum capacity.
  • A data structure to store the data that can 1) cheaply mark an entry as the most recently accessed, and 2) cheaply find and evict the least recently used entry once the capacity cap is reached.

Here, the data structure we use is a hashmap plus a doubly linked list:

1. The hashmap's get and put methods run in O(1) time, so we can look up and store entries quickly.

2. A doubly linked list lets us reach the nodes immediately before and after any given node, so adding a node or deleting a node we hold a reference to is also O(1).
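Incidentally, this hashmap-plus-doubly-linked-list combination is exactly how the JDK's own LinkedHashMap is built, so before hand-rolling one, the idea can be sketched on top of it (the class name JdkLruCache is made up for this example):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch (not the article's implementation): LinkedHashMap is a hash table
// whose entries are additionally threaded on a doubly linked list. With
// accessOrder=true it re-links an entry on every access, and overriding
// removeEldestEntry turns it into a ready-made LRU cache.
public class JdkLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int cap;

    public JdkLruCache(int cap) {
        super(16, 0.75f, true); // accessOrder = true → iteration order is LRU → MRU
        this.cap = cap;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > cap; // evict the least recently used entry on overflow
    }

    public static void main(String[] args) {
        JdkLruCache<Integer, Integer> cache = new JdkLruCache<>(3);
        cache.put(1, 1);
        cache.put(2, 2);
        cache.put(3, 3);
        cache.get(1);    // touching key 1 makes it most recently used
        cache.put(4, 4); // key 2 is now the least recently used → evicted
        System.out.println(cache.keySet()); // [3, 1, 4]
    }
}
```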

As shown below:

(Figure: principle and implementation of the LRU algorithm — the hashmap's entries point at nodes of the doubly linked list.)

When key2 is accessed again, its corresponding node is moved to the head of the linked list.

(Figure: after the access, key2's node sits at the head of the list.)

Assuming cap = 3: when key4 is inserted, the capacity is exceeded, so the node at the tail of the linked list (the least recently used entry, here key1's node) is evicted, and key1 is removed from the map as well.
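The walkthrough above can be simulated with a short sketch (the names accesses and recency are made up here), using an ArrayDeque as a stand-in recency list with the most recently used key at the front:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class EvictionTrace {
    public static void main(String[] args) {
        int cap = 3;
        Deque<Integer> recency = new ArrayDeque<>(); // front = most recently used
        int[] accesses = {1, 2, 3, 2, 4}; // keys accessed in order
        for (int key : accesses) {
            recency.remove(key);  // re-accessing a key moves it to the front
            recency.addFirst(key);
            if (recency.size() > cap) {
                int evicted = recency.removeLast(); // least recently used
                System.out.println("evicted key " + evicted);
            }
        }
        System.out.println(recency); // [4, 2, 3]
    }
}
```

Note this sketch costs O(n) per access because `Deque.remove(Object)` scans the queue; the point of the hashmap + doubly linked list design is to bring that down to O(1).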

Simple implementation of LRU

The Node class stores a key, a val, and references to the previous and next nodes.

class Node{
    public int key;
    public int val;
    public Node next;
    public Node previous;

    public Node() {
    }

    public Node(int key, int val) {
        this.key = key;
        this.val = val;
    }
}

The doubly linked list has three fields: size, a head sentinel node, and a tail sentinel node.

It provides the following API:

  • addFirst(Node node): insert a node at the head of the list
  • remove(): remove the last node
  • remove(Node node): remove a specific node
  • size(): return the number of nodes in the list

class DoubleList{
    private int size;
    private Node head;
    private Node tail;

    public DoubleList() {
        this.head = new Node();
        this.tail = new Node();
        size = 0;
        head.next = tail;
        tail.previous = head;
    }

    public void addFirst(Node node){
        Node temp = head.next;
        head.next = node;
        node.previous = head;
        node.next = temp;
        temp.previous = node;
        size++;
    }

    public void remove(Node node){
        if(null==node|| node.previous==null|| node.next==null){
            return;
        }

        node.previous.next = node.next;
        node.next.previous = node.previous;
        node.next=null;
        node.previous=null;
        size--;
    }

    public Node remove(){
        if(size<=0) return null;
        // the last real node sits just before the tail sentinel
        Node last = tail.previous;
        remove(last);
        return last;
    }

    public int size(){
        return size;
    }
}

LRU algorithm implementation class

API:

  • get(int key): return the value for key, or -1 if the key is absent; a hit also moves the node to the head of the list, marking it most recently used
  • put(int key, int val): if the key already exists in the map, remove its old node and insert a fresh one at the head; otherwise add the new entry to both the map and the list, evicting the least recently used entry once the capacity is exceeded

import java.util.HashMap;
import java.util.Map;

public class LRUCache {

    Map<Integer,Node> map;
    DoubleList cache;
    int cap;

    public LRUCache(int cap) {
        map = new HashMap<>();
        cache = new DoubleList();
        this.cap = cap;
    }

    public int get(int key){
        Node node = map.get(key);
        if(node==null) return -1;
        // move the hit node to the head so it counts as most recently used
        cache.remove(node);
        cache.addFirst(node);
        return node.val;
    }

    public void put(int key, int val){
        Node node = new Node(key,val);
        if(map.get(key)!=null){
            // key already present: replace the old node with the new one at the head
            cache.remove(map.get(key));
            cache.addFirst(node);
            map.put(key,node);
            return;
        }

        map.put(key,node);
        cache.addFirst(node);
        if(cache.size()>cap){
            // evict the tail node and remove its key from the map as well,
            // otherwise the map keeps a stale entry forever
            Node last = cache.remove();
            map.remove(last.key);
        }
    }

    public static void main(String[] args) {
        // test with cap = 3
        LRUCache lruCache = new LRUCache(3);
        lruCache.put(1,1);
        lruCache.put(2,2);
        lruCache.put(3,3);
        // <1,1> moves to the head of the list
        lruCache.put(1,1);
        // <4,4> moves to the head of the list; <2,2> is evicted
        lruCache.put(4,4);
    }

}

LRU application scenarios

  • Low-level memory management: page replacement algorithms
  • General-purpose cache services such as memcached and Redis
  • Assorted application-level business scenarios
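As a concrete example, Redis ships an (approximate) LRU eviction policy that is enabled through configuration; a redis.conf sketch (the 100mb limit and sample count are illustrative values):

```
# cap memory usage and evict approximately least-recently-used keys
maxmemory 100mb
maxmemory-policy allkeys-lru
# keys sampled per eviction; larger values approximate true LRU more closely
maxmemory-samples 5
```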


Author: ZzAllenZz

Source: https://segmentfault.com/a/1190000039256321


Origin blog.csdn.net/lsx2017/article/details/114040825