Unity study notes – implementing an LRU cache in C#

What is LRU

In computer systems, LRU (Least Recently Used) is a cache replacement algorithm. A cache is a medium from which a computer system can fetch data at high speed. When cache space runs low, a cache replacement algorithm must evict some cached data to make room, so that data accessed later still has cache space available.

The LRU algorithm divides cached data into "hot data" and "cold data". Hot data is used frequently; cold data is rarely used. When the cache is full and new data needs to be added, LRU evicts the data that has not been accessed for the longest time, the "cold data", and keeps the "hot data" in the cache. Because hot data is likely to be used again, this effectively improves the cache hit rate, reduces memory usage, and optimizes system performance.

In layman's terms, data that has been accessed frequently and recently is more likely to be retained, while data that is rarely accessed gets evicted.

LRU core idea

The core idea of LRU is the principle of temporal locality: data accessed by a program within a period of time has a high probability of being accessed again in the near future, so it is worth caching, while data that has not been accessed for a long time is unlikely to be accessed soon, so it is a good candidate for eviction. Based on this principle, when the cache is full, the LRU algorithm replaces the data that has not been used for the longest time, which keeps the more commonly used data in the cache and improves the cache hit rate.

Specifically, the cache maintains a linked list that records the order in which entries were used. Whenever an entry is accessed, its node is moved to the tail of the list. Whenever the cache is full, the entry at the head of the list is evicted.

This guarantees that the node at the tail of the list is the most recently used entry, and the node at the head is the one that has gone unused the longest. Moving list nodes maintains the usage order, and evicting from the head keeps the cache within its size limit. In short, the core idea of the LRU algorithm is to evict the data that has gone unused the longest and keep the most recently used data, thereby optimizing the performance of the system.
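
As a quick trace, suppose the capacity is 2 and we access three keys A, B, and C (head = least recently used, tail = most recently used):

Put A    list: A
Put B    list: A, B
Get A    list: B, A
Put C    cache full, evict head B; list: A, C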

Code implementation one: doubly linked list + hash table

using System;
using System.Collections.Generic;

public class LRUCache<K, V>
{
    private int capacity;
    // Maps each key to its node in the linked list, for O(1) lookup.
    private Dictionary<K, LinkedListNode<Tuple<K, V>>> dict;
    // Keeps entries ordered from least recently used (head) to most recently used (tail).
    private LinkedList<Tuple<K, V>> linkedList;

    public LRUCache(int capacity)
    {
        this.capacity = capacity;
        this.dict = new Dictionary<K, LinkedListNode<Tuple<K, V>>>();
        this.linkedList = new LinkedList<Tuple<K, V>>();
    }

    public V Get(K key)
    {
        if (!dict.ContainsKey(key))
        {
            // Cache miss: return the default value of V (0 for int, null for reference types).
            return default(V);
        }

        // Cache hit: move the node to the tail to mark it as most recently used.
        var node = dict[key];
        linkedList.Remove(node);
        linkedList.AddLast(node);

        return node.Value.Item2;
    }

    public void Put(K key, V value)
    {
        if (dict.ContainsKey(key))
        {
            // Key already cached: unlink the old node; it is replaced below.
            var node = dict[key];
            linkedList.Remove(node);
        }

        var newNode = new LinkedListNode<Tuple<K, V>>(Tuple.Create(key, value));
        dict[key] = newNode;
        linkedList.AddLast(newNode);

        // Over capacity: evict the least recently used entry at the head.
        if (dict.Count > capacity)
        {
            var firstNode = linkedList.First;
            linkedList.RemoveFirst();
            dict.Remove(firstNode.Value.Item1);
        }
    }
}

// Usage:
var lruCache = new LRUCache<string, int>(2);
lruCache.Put("a", 1);
lruCache.Put("b", 2);
Console.WriteLine(lruCache.Get("a")); // Output: 1
lruCache.Put("c", 3);
Console.WriteLine(lruCache.Get("b")); // Output: 0 (not found)

Analysis

Using a doubly linked list (each node holds pointers to both its previous and next node) allows a node to be deleted in O(1) time: to remove a node, we only need to update the pointers of its two neighbors. The hash table complements this by locating the node for any key in O(1) time, so both Get and Put run in constant time.
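
As a minimal sketch of why removal is O(1) (a hypothetical Node class for illustration only, not the actual internals of LinkedList<T>):

// Hypothetical doubly linked node, for illustration.
class Node<T>
{
    public T Value;
    public Node<T> Prev;
    public Node<T> Next;
}

// Unlinking a node touches only its two neighbors, hence O(1).
static void Unlink<T>(Node<T> node)
{
    if (node.Prev != null) node.Prev.Next = node.Next;
    if (node.Next != null) node.Next.Prev = node.Prev;
    node.Prev = null;
    node.Next = null;
}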

Code implementation two: OrderedDictionary

using System.Collections.Specialized;

namespace Tools
{
    public class LRUCache<K, V>
    {
        // OrderedDictionary preserves insertion order: index 0 is the least recently used entry.
        private OrderedDictionary dict;
        private int capacity;

        public LRUCache(int capacity)
        {
            this.capacity = capacity;
            dict = new OrderedDictionary();
        }

        public V Pop(K key)
        {
            if (!dict.Contains(key))
            {
                // Cache miss: return the default value of V.
                return default(V);
            }

            // Cache hit: remove and re-add the entry so it moves to the end (most recently used).
            var value = (V)dict[key];
            dict.Remove(key);
            dict.Add(key, value);
            return value;
        }

        public void Push(K key, V value)
        {
            if (dict.Contains(key))
            {
                dict.Remove(key);
            }
            else if (dict.Count >= capacity)
            {
                // Cache full: evict the least recently used entry at index 0.
                dict.RemoveAt(0);
            }

            dict.Add(key, value);
        }
    }
}
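
A usage sketch, mirroring the first implementation (here Push stores a value and Pop retrieves it while refreshing its position):

var lruCache = new Tools.LRUCache<string, int>(2);
lruCache.Push("a", 1);
lruCache.Push("b", 2);
Console.WriteLine(lruCache.Pop("a")); // Output: 1
lruCache.Push("c", 3);                // "b" is the oldest entry and is evicted
Console.WriteLine(lruCache.Pop("b")); // Output: 0 (not found)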

Analysis

OrderedDictionary is a collection type built into .NET (in System.Collections.Specialized): an ordered collection of key-value pairs that supports retrieving and traversing entries in insertion order.
Compared with the previous implementation, the advantage is concise code that uses only one data structure. The disadvantage is lower efficiency: OrderedDictionary is non-generic, so value-type keys and values are boxed as object, and Remove and RemoveAt must shift the internal list, which is O(n).

OrderedDictionary is similar to Dictionary, but it has the following differences:

  • OrderedDictionary maintains its entries in the order they were added (see the snippet after this list).
  • OrderedDictionary internally maintains both a hash table and an ordered list of its entries. This means an OrderedDictionary is not as efficient as a plain Dictionary, because every addition or removal of a key-value pair must update both structures.
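
A quick snippet confirming the insertion-order guarantee (illustrative keys and values):

var od = new System.Collections.Specialized.OrderedDictionary();
od.Add("first", 1);
od.Add("second", 2);
od.Add("third", 3);
foreach (System.Collections.DictionaryEntry entry in od)
{
    Console.WriteLine($"{entry.Key} = {entry.Value}"); // prints first, second, third in order
}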

Project use case

In Unity development we often use object pools. A pool can itself grow without bound, so we can apply the LRU idea to cap its memory usage, as sketched below.
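
As a rough sketch of the idea only (a single-prefab pool; the class and member names here are illustrative, and the next article develops a proper version): the pool's idle list is kept in LRU order, and when it exceeds its capacity, the instance that has sat unused the longest is destroyed rather than kept in memory.

using System.Collections.Generic;
using UnityEngine;

// Illustrative sketch: an object pool whose idle list is capped LRU-style.
public class LruObjectPool
{
    private readonly int capacity;
    // Head = least recently returned instance, tail = most recently returned.
    private readonly LinkedList<GameObject> idle = new LinkedList<GameObject>();

    public LruObjectPool(int capacity) { this.capacity = capacity; }

    public GameObject Acquire(GameObject prefab)
    {
        if (idle.Count > 0)
        {
            // Reuse the most recently returned instance.
            var go = idle.Last.Value;
            idle.RemoveLast();
            go.SetActive(true);
            return go;
        }
        return Object.Instantiate(prefab);
    }

    public void Release(GameObject instance)
    {
        instance.SetActive(false);
        idle.AddLast(instance);
        if (idle.Count > capacity)
        {
            // Over capacity: destroy the coldest instance instead of pooling it.
            var coldest = idle.First.Value;
            idle.RemoveFirst();
            Object.Destroy(coldest);
        }
    }
}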

Coming up

In the next article, we will put this into practice and implement an LRU object pool.

Closing

I previously wrote an article about object pools; looking back, it is not very good, so I plan to revise it:
Unity study notes – how to use the object pool to generate game objects elegantly and easily

Origin blog.csdn.net/qq_52855744/article/details/132191122