Look at others - Glide's LruCache

In the Android world, Glide is an excellent image loading framework, and its GitHub star count shows just how popular it is. Today I had some time to look at the implementation of LruCache inside Glide. As always, I follow one principle when reading source code: don't rush into the details; extract the skeleton first, then fill in the rest.

1. Basic operation interface

For an LRU cache, these four methods cover essentially all of the day-to-day operations. They are synchronized to guarantee thread safety and data consistency.

/**
* T = key
* Y = value
*/
public class LruCache<T, Y> {

	/**
	* Whether the cache contains this key.
	*/
	public synchronized boolean contains(T key);

	/**
	* Fetch the target value from the cache.
	*/
	public synchronized Y get(T key);

	/**
	* Put an item into the cache.
	*/
	public synchronized Y put(T key, Y item);

	/**
	* Remove an item from the cache.
	*/
	public synchronized Y remove(T key);

}
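As a quick illustration of that four-method surface, here is a minimal runnable sketch of my own (not Glide's actual class): `MiniLruCache`, its name, and the idea that every entry counts as size 1 are assumptions for the demo, but the backing map matches what Glide uses.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical simplified cache demonstrating the four-method surface.
// Backed by an access-ordered LinkedHashMap, like Glide's real LruCache.
public class MiniLruCache<T, Y> {
    private final Map<T, Y> cache = new LinkedHashMap<>(100, 0.75f, true);

    // Whether the cache contains this key
    public synchronized boolean contains(T key) {
        return cache.containsKey(key);
    }

    // Fetch the target value from the cache
    public synchronized Y get(T key) {
        return cache.get(key);
    }

    // Put an item into the cache; returns the previous value, if any
    public synchronized Y put(T key, Y item) {
        return cache.put(key, item);
    }

    // Remove an item from the cache
    public synchronized Y remove(T key) {
        return cache.remove(key);
    }

    public static void main(String[] args) {
        MiniLruCache<String, String> c = new MiniLruCache<>();
        c.put("a", "1");
        System.out.println(c.contains("a")); // true
        System.out.println(c.get("a"));      // 1
        System.out.println(c.remove("a"));   // 1
        System.out.println(c.contains("a")); // false
    }
}
```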

2. The inevitable underlying data structure

We all know that LRU is usually implemented on top of a linked list, but a plain linked list has poor lookup performance. LinkedHashMap is a better choice. Let's look at the definition of LruCache.

public class LruCache<T, Y> {

  /**
   * Initial capacity 100, load factor 0.75 (resizing is triggered once the map
   * is 75% full), and accessOrder = true so iteration follows access order.
   */
  private final Map<T, Entry<Y>> cache = new LinkedHashMap<>(100, 0.75f, true);
  private final long initialMaxSize;
  private long maxSize;
  private long currentSize;
...

LinkedHashMap balances the efficiency of insertion, deletion, and lookup. Now let's examine the details, starting with the implementation of put.
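The third constructor argument is the key to the whole design. A small demo of my own (not from Glide) shows what accessOrder = true does: the map iterates from least recently used to most recently used, so merely calling get reorders entries.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: iteration goes from least- to most-recently used
        Map<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.get("a"); // touching "a" moves it to the tail of the linked list
        System.out.println(map.keySet()); // [b, c, a]
    }
}
```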

3. Implementation details

3.1 put

Put handles two cases: if the key does not exist, the item is added; if it already exists, the old value is replaced.

  public synchronized Y put(@NonNull T key, @Nullable Y item) {
  	// itemSize is always 1 (the default getSize implementation)
    final int itemSize = getSize(item);
    // Edge case: an item as large as the whole cache can never be stored
    if (itemSize >= maxSize) {
    	// onItemEvicted is a callback; subclasses can override it to be
    	// notified of evictions
      onItemEvicted(key, item);
      return null;
    }
	// Grow the current size by 1
    if (item != null) {
      currentSize += itemSize;
    }
    // Store the item in the LinkedHashMap
    @Nullable Entry<Y> old = cache.put(key, item == null ? null : new Entry<>(item, itemSize));
    // old != null means an item with the same key already existed,
    // so this put is a replacement
    if (old != null) {
      currentSize -= old.size; // every size is 1, so this subtracts 1; currentSize should stay in sync with the LinkedHashMap's size

      if (!old.value.equals(item)) {
        // Same key, different value: tell subclasses that old was evicted
        onItemEvicted(key, old.value);
      }
    }
    evict(); // we will look at this later

    return old != null ? old.value : null;
  }

3.2 get

get simply reads straight from the LinkedHashMap. Strange: where is the LRU bookkeeping? Accessing an entry should trigger some kind of move operation. Don't worry, we'll get to that later.

  public synchronized Y get(@NonNull T key) {
    Entry<Y> entry = cache.get(key);
    return entry != null ? entry.value : null;
  }

3.3 remove

It is basically a straightforward LinkedHashMap operation.

  public synchronized Y remove(@NonNull T key) {
    Entry<Y> entry = cache.remove(key);
    if (entry == null) {
      return null;
    }
    currentSize -= entry.size;
    return entry.value;
  }

As you can see, put, get, and remove mostly delegate to the LinkedHashMap API, so LinkedHashMap must be the one adjusting the linked list. Let's take a look at its source code.

4. LinkedHashMap does the heavy lifting

LinkedHashMap extends HashMap, so we first look for clues in HashMap.

    // Callbacks to allow LinkedHashMap post-actions
    void afterNodeAccess(Node<K,V> p) { }      // called after a node is accessed
    void afterNodeInsertion(boolean evict) { } // called after a node is inserted
    void afterNodeRemoval(Node<K,V> p) { }     // called after a node is removed

HashMap leaves these three methods empty; as the comment makes clear, they are hooks reserved for LinkedHashMap. Let's see how LinkedHashMap implements them.

After node access

	/**
	 * After an access, move the node to the tail; the tail corresponds to the
	 * most recently used item in LRU terms.
	 */
    void afterNodeAccess(Node<K,V> e) { // move node to last
        LinkedHashMapEntry<K,V> last;
        if (accessOrder && (last = tail) != e) {
            LinkedHashMapEntry<K,V> p =
                (LinkedHashMapEntry<K,V>)e, b = p.before, a = p.after;
            p.after = null;
            if (b == null)
                head = a;
            else
                b.after = a;
            if (a != null)
                a.before = b;
            else
                last = b;
            if (last == null)
                head = p;
            else {
                p.before = last;
                last.after = p;
            }
            tail = p;
            ++modCount;
        }
    }

After node deletion

    void afterNodeRemoval(Node<K,V> e) { // unlink
        LinkedHashMapEntry<K,V> p =
            (LinkedHashMapEntry<K,V>)e, b = p.before, a = p.after;
        p.before = p.after = null; // detach p's forward and backward pointers
        if (b == null)             // p was the head, so a becomes the new head
            head = a;
        else
            b.after = a;           // link b forward to a; a's back-link is fixed below
        if (a == null)             // p was the tail, so b becomes the new tail
            tail = b;
        else
            a.before = b;          // link a back to b, completing the unlink of p
    }

After a new node is inserted, should the head node be deleted? The head is the oldest entry. Whether it gets deleted depends on the return value of removeEldestEntry.

    void afterNodeInsertion(boolean evict) { // possibly remove eldest
        LinkedHashMapEntry<K,V> first;
        if (evict && (first = head) != null && removeEldestEntry(first)) {
            K key = first.key;
            removeNode(hash(key), key, null, false, true);
        }
    }

LinkedHashMap's removeEldestEntry returns false by default, which means insertion never actively deletes the oldest entry.

    protected boolean removeEldestEntry(Map.Entry<K,V> eldest) {
        return false;
    }

To get LRU eviction, you could simply override this method to return true whenever the cache exceeds its capacity, but the Glide author did not do that. So how does Glide delete old data? Remember the evict() method in LruCache?

  /**
   * Removes the least recently used items from the cache until the current size is less than the
   * given size.
   *
   * @param size The size the cache should be less than.
   */
  protected synchronized void trimToSize(long size) {
    Map.Entry<T, Entry<Y>> last;
    Iterator<Map.Entry<T, Entry<Y>>> cacheIterator;
    while (currentSize > size) {
      cacheIterator = cache.entrySet().iterator();
      last = cacheIterator.next();
      final Entry<Y> toRemove = last.getValue();
      currentSize -= toRemove.size;
      final T key = last.getKey();
      cacheIterator.remove();
      onItemEvicted(key, toRemove.value);
    }
  }

  private void evict() {
    trimToSize(maxSize);
  }

Yes, Glide chose manual management: when maxSize is exceeded, it iterates the LinkedHashMap from the front, where the oldest entry (the head of the linked list) sits, and removes entries until the cache fits.

Summary

Today we walked through the implementation details of LruCache inside Glide. As you can see, there is nothing too profound; the source file is fairly simple, 208 lines in total. At the same time, I admire that even such a simple LRU component ships with well-written unit tests. That kind of polish is part of the gap between ourselves and its authors.

Thanks for reading.


Origin blog.csdn.net/lucky_tom/article/details/120667038