Data Structures and Algorithms - LRU

LRU (Least Recently Used) is a page replacement algorithm used in memory management. When data that is not currently in memory must be loaded and memory is full, the operating system uses LRU to decide which blocks to evict from memory to make room for the new data.

There are three common cache eviction policies:

  • FIFO - First In, First Out (a simple queue)
  • LFU - Least Frequently Used
    If a piece of data has been accessed only a few times recently, it is unlikely to be accessed often in the future
  • LRU - Least Recently Used
    If a piece of data has not been accessed recently, the probability that it will be accessed in the future is low

Scheme 1: O(n)

Maintain a single ordered linked list: the tail holds the most recently used entry and the head the least recently used. When a piece of data is accessed, traverse the list and handle the following cases:

  1. If the data is already cached in the list, find its node, delete it, and insert it at the tail
  2. If it is not cached
    • If the list is not full, insert it directly at the tail
    • If the list is full, remove the head node and insert the new node at the tail

With a plain singly linked list, the lookup operation takes O(n), so a hash table is usually added to speed up lookups. However, when the cache is full, the least recently used entry must be removed first, so the usage order still has to be maintained.
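As a concrete illustration, here is a minimal sketch of Scheme 1 using a plain Python list in place of the linked list (this class is illustrative, not from the original post; both lookup and removal are O(n)):

# coding:utf-8


class LRUCache:
    """Scheme 1: one ordered sequence, O(n) per access.

    The tail of the list is the most recently used entry,
    the head the least recently used.
    """

    def __init__(self, max_length: int = 5):
        self.max_length = max_length
        self.items = []  # (key, value) pairs, least recently used first

    def get(self, key):
        for i, (k, v) in enumerate(self.items):
            if k == key:
                # Hit: move the entry to the tail (most recently used)
                self.items.append(self.items.pop(i))
                return v
        return None

    def put(self, key, value):
        for i, (k, _) in enumerate(self.items):
            if k == key:
                # Already cached: remove the stale entry first
                self.items.pop(i)
                break
        else:
            if len(self.items) >= self.max_length:
                # Full: evict the head (least recently used)
                self.items.pop(0)
        self.items.append((key, value))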

Scheme 2: O(1)

Ordered hash table: implemented with a hash table plus a doubly linked list. The hash table provides fast O(1) lookup, and the doubly linked list stores the entries in usage order.
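Before the ready-made implementations in the next section, here is a hand-rolled sketch of this design (a minimal illustration, not from the original post; it assumes sentinel head and tail nodes to avoid edge-case handling):

# coding:utf-8


class Node:
    """Doubly linked list node holding one cache entry."""

    def __init__(self, key=None, value=None):
        self.key = key
        self.value = value
        self.prev = None
        self.next = None


class LRUCache:
    """Hash table for O(1) lookup + doubly linked list for usage order."""

    def __init__(self, max_length: int = 5):
        self.max_length = max_length
        self.map = {}  # key -> Node
        # Sentinels: head.next is the least recently used entry,
        # tail.prev the most recently used
        self.head = Node()
        self.tail = Node()
        self.head.next = self.tail
        self.tail.prev = self.head

    def _unlink(self, node):
        node.prev.next = node.next
        node.next.prev = node.prev

    def _append(self, node):
        # Insert just before the tail sentinel (most recently used)
        node.prev = self.tail.prev
        node.next = self.tail
        self.tail.prev.next = node
        self.tail.prev = node

    def get(self, key):
        node = self.map.get(key)
        if node is None:
            return None
        self._unlink(node)  # move the hit entry to the tail in O(1)
        self._append(node)
        return node.value

    def put(self, key, value):
        if key in self.map:
            node = self.map[key]
            node.value = value
            self._unlink(node)
            self._append(node)
            return
        if len(self.map) >= self.max_length:
            lru = self.head.next  # least recently used entry
            self._unlink(lru)
            del self.map[lru.key]
        node = Node(key, value)
        self.map[key] = node
        self._append(node)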

Implementation

Using OrderedDict

# coding:utf-8

from collections import OrderedDict


class LRUCache(object):
    """
    借助OrderedDict的有序性实现, 内部使用了双向链表
    """

    def __init__(self, max_length: int = 5):
        self.max_length = max_length
        self.o_dict = OrderedDict()

    def get(self, key):
        """
        如果找到的话移动到尾部
        :param key:
        :return:
        """
        if key not in self.o_dict:
            return None
        # Test membership rather than truthiness so falsy values count as hits
        self.o_dict.move_to_end(key)
        return self.o_dict[key]

    def put(self, key, value):
        if key in self.o_dict:
            self.o_dict.move_to_end(key)
        else:
            if len(self.o_dict) >= self.max_length:
                # Pop the least recently used entry (the front of the dict)
                self.o_dict.popitem(last=False)
        self.o_dict[key] = value


if __name__ == "__main__":
    lru = LRUCache(max_length=3)
    lru.put(1, "a")
    lru.put(2, "b")
    lru.put(3, "c")
    assert lru.o_dict == OrderedDict([(1, 'a'), (2, 'b'), (3, 'c')])
    lru.get(2)
    assert lru.o_dict == OrderedDict([(1, 'a'), (3, 'c'), (2, 'b')])
    lru.put(4, "d")
    assert lru.o_dict == OrderedDict([(3, 'c'), (2, 'b'), (4, "d")])
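Here OrderedDict.move_to_end(key) and OrderedDict.popitem(last=False) both run in O(1), and dict lookup is O(1) on average, so this version matches the O(1) bound of Scheme 2.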

Using dict and deque

# coding:utf-8

from collections import deque


class LRUCache(object):
    def __init__(self, max_length: int = 5):
        self.max_length = max_length
        self.cache = dict()
        self.keys = deque()

    def get(self, key):
        if key in self.cache:
            value = self.cache[key]
            self.keys.remove(key)
            self.keys.append(key)
        else:
            value = None
        return value

    def put(self, key, value):
        if key in self.cache:
            self.keys.remove(key)
            self.keys.append(key)
        else:
            if len(self.keys) >= self.max_length:
                # Evict the least recently used key and drop its cached value
                evicted = self.keys.popleft()
                del self.cache[evicted]
            self.keys.append(key)
        self.cache[key] = value


if __name__ == "__main__":
    lru = LRUCache(max_length=3)
    lru.put(1, "a")
    lru.put(2, "b")
    lru.put(3, "c")
    assert lru.keys == deque([1, 2, 3])
    lru.get(2)
    assert lru.keys == deque([1, 3, 2])
    lru.put(4, "d")
    assert lru.keys == deque([3, 2, 4])
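Note that deque.remove(key) scans the queue linearly, so get and put here are O(n) in the worst case; this version is simpler, but unlike the OrderedDict one it does not achieve the O(1) bound of Scheme 2.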

lru_cache

You can also use lru_cache, the caching decorator that ships with Python 3's functools module:

from functools import lru_cache


@lru_cache(maxsize=32)
def fibs(n: int):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fibs(n-1) + fibs(n-2)


if __name__ == '__main__':
    print(fibs(10))
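Functions wrapped by lru_cache also expose cache_info() and cache_clear() for inspecting and resetting the cache:

print(fibs.cache_info())  # e.g. CacheInfo(hits=8, misses=11, maxsize=32, currsize=11)
fibs.cache_clear()        # empty the cache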

Applications

  • Redis's LRU eviction policy
  • Java's LinkedHashMap
    implemented with a hash table plus a doubly linked list

Summary

  • Arrays use indexes for fast random access, but the drawback is that they require contiguous memory
  • Linked lists do not need contiguous memory, but lookup is slow
  • Combining a hash table with a linked list (or skip list) combines the advantages of arrays and linked lists

References

  • The Beauty of Data Structures and Algorithms (数据结构与算法之美), Wang Zheng
