LeetCode 146: LRU Cache - a hash table + doubly linked list solution with detailed notes

Problem Description

Using the data structures you have mastered, design and implement an LRU (Least Recently Used) cache mechanism.

Implement the LRUCache class:

  • LRUCache(int capacity) initializes the LRU cache with a positive integer capacity capacity
  • int get(int key) returns the value of key if it exists in the cache; otherwise returns -1
  • void put(int key, int value) updates the value of key if it already exists; otherwise it inserts the key-value pair. When the cache reaches its capacity, it should evict the least recently used entry before writing the new data, making room for the new value.

Example

Input
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]
Output
[null, null, null, 1, null, -1, null, -1, 3, 4]

Explanation
LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1); // cache is {1=1}
lRUCache.put(2, 2); // cache is {1=1, 2=2}
lRUCache.get(1);    // returns 1
lRUCache.put(3, 3); // this operation evicts key 2; cache is {1=1, 3=3}
lRUCache.get(2);    // returns -1 (not found)
lRUCache.put(4, 4); // this operation evicts key 1; cache is {4=4, 3=3}
lRUCache.get(1);    // returns -1 (not found)
lRUCache.get(3);    // returns 3
lRUCache.get(4);    // returns 4

Approach

The problem asks us to design and implement an LRU cache mechanism using data structures of our own choosing. My first reaction was to use a hash table, and to implement the cache's put and get methods through insertions, deletions, lookups, and updates on that table. At first I considered keeping a hash table that counts how many times each element has been operated on, in order to decide which element to evict, but that turned out not to work: an operation count cannot capture the order in which elements were used. Since ordering is involved, the data structure that best expresses ordering is a linked list, and here we use a doubly linked one. I keep thinking a singly linked list might also work, though removing a node then requires walking from the head to find its predecessor; I haven't tried it, so if you're interested, give it a shot and let me know in the comments. Now let me walk through the general logic of the solution.
In the initialization method, we define a doubly linked list with two dummy (sentinel) nodes and a hash table, and record the cache's capacity and current size.
Next is inserting an element. If the element is already in the cache, we just find its node, update its value, and move the node to the head of the list.
If the element is not in the cache, we first check whether the cache has reached its capacity. If not, we add the element directly and move its node to the front of the doubly linked list (the most recently used element always sits at the head, so elements that have not been used for a long time drift toward the tail). If the capacity is reached, we delete the tail node of the list, remove the corresponding entry from the hash table, then insert the new element and move its node to the head of the list.
Last is fetching an element. If the requested element is not in the cache, we return -1 directly; if it is, we move its node to the head of the list and return its value.

Note: the clever part of this problem is the combination of a doubly linked list and a hash table. Each node of the doubly linked list stores one element, ordered by recency of use (recently used elements near the head, long-untouched ones near the tail), so evicting the least recently used element simply means deleting the tail node of the list. The hash table cache keeps the mapping from each key to its node, giving constant-time lookup.
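As a cross-check on this design (not part of the original solution), Python's standard library already couples a hash table with a doubly linked list in collections.OrderedDict, so the same mechanism fits in a few lines. The class name LRUCacheOD below is my own:

```python
from collections import OrderedDict

class LRUCacheOD:
    """Minimal LRU cache sketch built on OrderedDict.

    Convention: most recently used key sits at the END of the dict,
    so the least recently used key is at the front.
    """
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()

    def get(self, key: int) -> int:
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)  # mark key as most recently used
        return self.cache[key]

    def put(self, key: int, value: int) -> None:
        if key in self.cache:
            self.cache.move_to_end(key)  # refresh recency before updating
        self.cache[key] = value          # new keys are appended at the end
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used

# Replaying the example sequence from the problem statement:
c = LRUCacheOD(2)
c.put(1, 1)
c.put(2, 2)
print(c.get(1))  # 1
c.put(3, 3)      # evicts key 2
print(c.get(2))  # -1
```

This is only a shortcut for verification; the point of the exercise is to build the list and table by hand, as below.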

That covers the idea and the logic; now let's look at the concrete implementation:

Code

# Node class for the doubly linked list
class dual_linkList_node:
    def __init__(self, key=0, val=0):
        self.key = key
        self.val = val
        self.prev = None
        self.next = None

# The LRU cache class
class LRUCache:
    # Initialize the LRU cache
    def __init__(self, capacity: int):
        self.cache = dict() # dictionary storing the key -> node mapping
        self.head = dual_linkList_node() # dummy head node of the doubly linked list
        self.tail = dual_linkList_node() # dummy tail node of the doubly linked list
        self.head.next = self.tail # head points forward to tail
        self.tail.prev = self.head # tail points back to head; the empty list is now set up
        self.capacity = capacity # capacity of the LRUCache
        self.size = 0 # current number of cached elements

    # Elements are stored in the doubly linked list; the mapping lives in the dictionary #
    def get(self, key: int) -> int: # fetch an element from the cache
        if key in self.cache: # if present, look up its node by key, move the node to the head, and return its value
            node = self.cache[key]
            self.move_to_head(node)
            return node.val
        else: # if absent, return -1
            return -1

    # Unlink a node from the doubly linked list
    def remove_node(self, node):
        node.prev.next = node.next # node's predecessor points forward to node's successor
        node.next.prev = node.prev # node's successor points back to node's predecessor

    # Insert node right after the head
    def add_to_head(self, node):
        # both node and self.head need their links updated
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    # Move a node to the position right after the head
    def move_to_head(self, node):
        self.remove_node(node) # first unlink the node from the list
        self.add_to_head(node) # then re-insert it after the head

    # Remove the tail element of the doubly linked list
    def remove_tail(self):
        # head and tail are dummy nodes, so removing the tail element means removing the node just before the dummy tail
        node = self.tail.prev
        self.remove_node(node)
        return node

    # Insert or update an element
    def put(self, key: int, value: int) -> None:
        if key not in self.cache:
            # key does not exist: create a new node
            node = dual_linkList_node(key, value)
            # add it to the hash table
            self.cache[key] = node
            # add it to the head of the doubly linked list
            self.add_to_head(node)
            self.size += 1
            if self.size > self.capacity:
                # over capacity: remove the tail node of the list
                removed = self.remove_tail()
                # and drop the corresponding entry from the hash table
                self.cache.pop(removed.key)
                self.size -= 1
        else:
            # key already exists: fetch its node, update the value, and move the node to the head
            node = self.cache[key]
            node.val = value
            self.move_to_head(node)
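As an aside, not part of the solution above: Python's standard library ships this same mechanism as a memoization decorator, functools.lru_cache, which evicts the least recently used call results once maxsize is exceeded. A small illustration (the function square is just an example):

```python
from functools import lru_cache

@lru_cache(maxsize=2)  # cache at most 2 distinct call results
def square(x):
    return x * x

# Fill the cache: after these calls it holds the results for 1 and 3
square(1)
square(2)
square(1)  # hit: 1 becomes the most recently used entry
square(3)  # miss: cache is full, so 2 (least recently used) is evicted

info = square.cache_info()
print(info.hits, info.misses, info.currsize)
```

cache_info() reports hits, misses, maxsize, and currsize, which is a convenient way to observe the eviction behavior described in this problem.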

Result


Reprinted from: blog.csdn.net/Just_do_myself/article/details/118499267