Glide's DiskLruCache (the third cache level: disk)

Reading other people's excellent source code really makes you realize how rough your own code has been.

In Glide, many objects are created through factory classes, and DiskCache is no exception.

Start with GlideBuilder's build method:


  @NonNull
  Glide build(@NonNull Context context) {
    if (sourceExecutor == null) {
      sourceExecutor = GlideExecutor.newSourceExecutor();
    }

    if (diskCacheExecutor == null) {
      diskCacheExecutor = GlideExecutor.newDiskCacheExecutor();
    }

    ....
    // The default disk cache factory is instantiated here
    if (diskCacheFactory == null) {
      diskCacheFactory = new InternalCacheDiskCacheFactory(context);
    }

    if (engine == null) {
      engine =
          new Engine(
              memoryCache,
              diskCacheFactory, // the factory is passed on to Engine
              diskCacheExecutor,
              sourceExecutor,
              GlideExecutor.newUnlimitedSourceExecutor(),
              GlideExecutor.newAnimationExecutor(),
              isActiveResourceRetentionAllowed);
    }

    ....
 }

InternalCacheDiskCacheFactory builds the cache directory under the application's private (internal) storage.

You can also supply a DiskCache.Factory of your own from outside instead of using the built-in one, for example via an AppGlideModule as sketched below.
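A minimal sketch, assuming a Glide 4.x setup with the annotation processor; the module name MyAppGlideModule, the directory name "glide_disk_cache" and the 100 MB size are arbitrary examples, and ExternalPreferredCacheDiskCacheFactory is just one of the factories Glide ships with (DiskLruCacheFactory and InternalCacheDiskCacheFactory are plugged in the same way):

import android.content.Context;
import androidx.annotation.NonNull;

import com.bumptech.glide.GlideBuilder;
import com.bumptech.glide.annotation.GlideModule;
import com.bumptech.glide.load.engine.cache.ExternalPreferredCacheDiskCacheFactory;
import com.bumptech.glide.module.AppGlideModule;

@GlideModule
public class MyAppGlideModule extends AppGlideModule {
  @Override
  public void applyOptions(@NonNull Context context, @NonNull GlideBuilder builder) {
    long diskCacheSizeBytes = 100L * 1024 * 1024; // 100 MB, arbitrary example size
    // Replace the default InternalCacheDiskCacheFactory with a custom factory.
    builder.setDiskCache(
        new ExternalPreferredCacheDiskCacheFactory(context, "glide_disk_cache", diskCacheSizeBytes));
  }
}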

Now look at the Engine constructor:

  Engine(MemoryCache cache,
      DiskCache.Factory diskCacheFactory,
      GlideExecutor diskCacheExecutor,
      GlideExecutor sourceExecutor,
      GlideExecutor sourceUnlimitedExecutor,
      GlideExecutor animationExecutor,
      Jobs jobs,
      EngineKeyFactory keyFactory,
      ActiveResources activeResources,
      EngineJobFactory engineJobFactory,
      DecodeJobFactory decodeJobFactory,
      ResourceRecycler resourceRecycler,
      boolean isActiveResourceRetentionAllowed) {
    this.cache = cache;
    this.diskCacheProvider = new LazyDiskCacheProvider(diskCacheFactory);
    ...
}

The diskCacheFactory is just wrapped here; nothing special. To spell it out: DiskCache.Factory is wrapped in a LazyDiskCacheProvider, which implements DecodeJob.DiskCacheProvider. From that structure we can tell that DiskCacheProvider is an interface defined inside DecodeJob, and it is through this interface that DecodeJob talks to the disk cache, which is why the factory needs wrapping.


  private static class LazyDiskCacheProvider implements DecodeJob.DiskCacheProvider {

    private final DiskCache.Factory factory;
    private volatile DiskCache diskCache;

    LazyDiskCacheProvider(DiskCache.Factory factory) {
      this.factory = factory;
    }

    @VisibleForTesting
    synchronized void clearDiskCacheIfCreated() {
      if (diskCache == null) {
        return;
      }
      diskCache.clear();
    }

    @Override
    public DiskCache getDiskCache() {
      if (diskCache == null) {
        synchronized (this) {
          if (diskCache == null) {
            diskCache = factory.build();
          }
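          // factory.build() can return null (e.g. if the cache directory cannot be created),
          // in which case Glide falls back to DiskCacheAdapter, a no-op DiskCache.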
          if (diskCache == null) {
            diskCache = new DiskCacheAdapter();
          }
        }
      }
      return diskCache;
    }
  }

The diskCacheProvider is passed in when a DecodeJob is constructed and is then handed on to DecodeHelper, which exposes the following method:

  DiskCache getDiskCache() {
    return diskCacheProvider.getDiskCache();
  }

Looking back at LazyDiskCacheProvider, factory.build() ends up in DiskLruCacheFactory's build method. Here is that class:

package com.bumptech.glide.load.engine.cache;

import java.io.File;

/**
 * Creates an {@link com.bumptech.glide.disklrucache.DiskLruCache} based disk cache in the specified
 * disk cache directory.
 *
 * <p>If you need to make I/O access before returning the cache directory use the {@link
 * DiskLruCacheFactory#DiskLruCacheFactory(CacheDirectoryGetter, long)} constructor variant.
 */
// Public API.
@SuppressWarnings("unused")
public class DiskLruCacheFactory implements DiskCache.Factory {
  private final long diskCacheSize;
  private final CacheDirectoryGetter cacheDirectoryGetter;

  /**
   * Interface called out of UI thread to get the cache folder.
   */
  public interface CacheDirectoryGetter {
    File getCacheDirectory();
  }

  public DiskLruCacheFactory(final String diskCacheFolder, long diskCacheSize) {
    this(new CacheDirectoryGetter() {
      @Override
      public File getCacheDirectory() {
        return new File(diskCacheFolder);
      }
    }, diskCacheSize);
  }

  public DiskLruCacheFactory(final String diskCacheFolder, final String diskCacheName,
                             long diskCacheSize) {
    this(new CacheDirectoryGetter() {
      @Override
      public File getCacheDirectory() {
        return new File(diskCacheFolder, diskCacheName);
      }
    }, diskCacheSize);
  }

  /**
   * When using this constructor {@link CacheDirectoryGetter#getCacheDirectory()} will be called out
   * of UI thread, allowing to do I/O access without performance impacts.
   *
   * @param cacheDirectoryGetter Interface called out of UI thread to get the cache folder.
   * @param diskCacheSize        Desired max bytes size for the LRU disk cache.
   */
  // Public API.
  @SuppressWarnings("WeakerAccess")
  public DiskLruCacheFactory(CacheDirectoryGetter cacheDirectoryGetter, long diskCacheSize) {
    this.diskCacheSize = diskCacheSize;
    this.cacheDirectoryGetter = cacheDirectoryGetter;
  }

  @Override
  public DiskCache build() {
    File cacheDir = cacheDirectoryGetter.getCacheDirectory();

    if (cacheDir == null) {
      return null;
    }

    if (!cacheDir.mkdirs() && (!cacheDir.exists() || !cacheDir.isDirectory())) {
      return null;
    }

    return DiskLruCacheWrapper.create(cacheDir, diskCacheSize);
  }
}

Now follow DiskLruCacheWrapper.create(cacheDir, diskCacheSize).

DiskLruCacheWrapper implements the DiskCache interface, and create() returns a DiskLruCacheWrapper instance.

Everything above is Glide's own plumbing and does not actually touch the disk yet. The real work starts in DiskLruCacheWrapper, which wraps DiskLruCache, a library that is well known in the Android world and recommended by Google. In fact DiskLruCache could be swapped for another implementation, which is how the upper layers stay decoupled from the underlying cache; the stub below shows how little the upper layers actually depend on.
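A minimal sketch, not Glide source (although Glide's own DiskCacheAdapter does essentially the same thing): the engine only ever sees the DiskCache interface, so any implementation can be supplied through a DiskCache.Factory.

import java.io.File;

import com.bumptech.glide.load.Key;
import com.bumptech.glide.load.engine.cache.DiskCache;

// A do-nothing disk cache: every lookup misses and every write is dropped.
public class NoOpDiskCache implements DiskCache {
  @Override
  public File get(Key key) {
    return null; // always report a cache miss
  }

  @Override
  public void put(Key key, DiskCache.Writer writer) {
    // drop all writes
  }

  @Override
  public void delete(Key key) {}

  @Override
  public void clear() {}
}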

Now for the main part: DiskLruCache itself.

The underlying idea is actually not complicated. First of all, it relies on a LinkedHashMap created with accessOrder = true, which moves the most recently accessed entries to the tail of the list (so the head holds the least recently used entries).
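A tiny standalone example (plain Java, not Glide code) of what accessOrder = true does:

import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
  public static void main(String[] args) {
    // accessOrder = true: iteration goes from least to most recently accessed.
    Map<String, String> map = new LinkedHashMap<>(16, 0.75f, true);
    map.put("a", "1");
    map.put("b", "2");
    map.put("c", "3");
    map.get("a");                     // accessing "a" moves it to the tail
    System.out.println(map.keySet()); // [b, c, a] -> "b" is now the eviction candidate
  }
}

With the access-ordered map in mind, here is DiskLruCacheWrapper.put():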

  @Override
  public void put(Key key, Writer writer) {
    // We want to make sure that puts block so that data is available when put completes. We may
    // actually not write any data if we find that data is written by the time we acquire the lock.
    String safeKey = safeKeyGenerator.getSafeKey(key);
    writeLocker.acquire(safeKey);
    try {
      if (Log.isLoggable(TAG, Log.VERBOSE)) {
        Log.v(TAG, "Put: Obtained: " + safeKey + " for for Key: " + key);
      }
      try {
        // We assume we only need to put once, so if data was written while we were trying to get
        // the lock, we can simply abort.
        DiskLruCache diskCache = getDiskCache();
        Value current = diskCache.get(safeKey);
        if (current != null) {
          return;
        }

        DiskLruCache.Editor editor = diskCache.edit(safeKey);
        if (editor == null) {
          throw new IllegalStateException("Had two simultaneous puts for: " + safeKey);
        }
        try {
          File file = editor.getFile(0);
          if (writer.write(file)) {
            editor.commit();
          }
        } finally {
          editor.abortUnlessCommitted();
        }
      } catch (IOException e) {
        if (Log.isLoggable(TAG, Log.WARN)) {
          Log.w(TAG, "Unable to put to disk cache", e);
        }
      }
    } finally {
      writeLocker.release(safeKey);
    }
  }

When a piece of data needs to be stored, roughly the following happens:

(1) A key is generated together with an Entry. The Entry wraps the key, a long array and two File arrays; the array length is the valueCount passed in when the DiskLruCache is instantiated, i.e. how many cache files a single key can map to. The cleanFiles array holds the locations of the committed cache files, the dirtyFiles array holds the locations of the temporary files being written, and the long array records the size of each cache file.

(2) The key and Entry are put into lruEntries (the LinkedHashMap), and the Entry is wrapped in an Editor; the Editor and Entry reference each other, and the Editor is handed out to the caller. At this point a DIRTY line is written to the journal, marking that a write has started.

(3) Through the Editor the caller obtains the file to write to and writes the data itself (the actual write happens outside the cache, which keeps things flexible).

(4) The crucial step is commit(): it measures the written files, fills in the long array, and updates the total size in use. At this point a CLEAN line is written to the journal.

(5) Finally, and just as important, if size now exceeds maxSize, entries are evicted from the head of the list until size falls back below maxSize.

get() is much simpler: it just looks up the File for a key. A rough end-to-end sketch of the edit/commit/get flow follows below.
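A minimal sketch, not taken from Glide's source, of driving the DiskLruCache fork directly to make the edit → write → commit → get flow concrete; the directory, key and size values are arbitrary examples:

import com.bumptech.glide.disklrucache.DiskLruCache;

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class DiskLruCacheDemo {
  // Write one entry and read it back; valueCount = 1 means one file per key.
  static void demo(File cacheDir) throws IOException {
    DiskLruCache cache =
        DiskLruCache.open(cacheDir, /*appVersion=*/ 1, /*valueCount=*/ 1, /*maxSize=*/ 10 * 1024 * 1024);

    DiskLruCache.Editor editor = cache.edit("some_safe_key");
    if (editor != null) {
      try {
        File dirtyFile = editor.getFile(0); // temporary "dirty" file to write into
        try (FileOutputStream out = new FileOutputStream(dirtyFile)) {
          out.write("hello".getBytes());
        }
        editor.commit(); // journal gets a CLEAN line, the entry size is recorded
      } finally {
        editor.abortUnlessCommitted(); // no-op after a successful commit
      }
    }

    DiskLruCache.Value value = cache.get("some_safe_key");
    if (value != null) {
      File cleanFile = value.getFile(0); // the committed "clean" cache file
      // ... read from cleanFile ...
    }
  }
}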

There are plenty of articles about this online; it is worth reading a few more yourself.

Topics to dig into further:

(1) The implementation and purpose of SafeKeyGenerator and DiskCacheWriteLocker

(2) How LinkedHashMap works internally

(3) What Java's File.renameTo does and how to use it

Reposted from blog.csdn.net/lizhongyisailang/article/details/104162500