Java Cache Eviction Algorithms

Contents

1、Redis Cache Eviction Algorithms

2、Memcached Cache Eviction Algorithms

3、Ehcache Cache Eviction Algorithms

4、Guava Cache Eviction Algorithms

4.1 Eviction by Size

4.2 Eviction by Weight

4.3 Eviction by Time

4.4 Weak Keys

4.5 Soft Values

5、Caffeine Cache Eviction Algorithms

5.1 Size-based

5.2 Time-based

5.3 Reference-based

6、A Java Implementation of the LFU Eviction Algorithm

7、A Java Implementation of the LRU Eviction Algorithm

8、A Java Implementation of the FIFO Eviction Algorithm


1、Redis Cache Eviction Algorithms

   From the Redis documentation: https://redis.io/topics/config

       If you plan to use Redis just as a cache where every key will have an expire set, you may consider using the following configuration instead (assuming a max memory limit of 2 megabytes as an example):

maxmemory 2mb
maxmemory-policy allkeys-lru
# Interview question: what does maxmemory-samples do?
maxmemory-samples 5

The following policies are available:

  • noeviction: return errors when the memory limit was reached and the client is trying to execute commands that could result in more memory to be used (most write commands, but DEL and a few more exceptions).
  • allkeys-lru: evict keys by trying to remove the less recently used (LRU) keys first, in order to make space for the new data added.
  • volatile-lru: evict keys by trying to remove the less recently used (LRU) keys first, but only among keys that have an expire set, in order to make space for the new data added.
  • allkeys-random: evict keys randomly in order to make space for the new data added.
  • volatile-random: evict keys randomly in order to make space for the new data added, but only evict keys with an expire set.
  • volatile-ttl: evict keys with an expire set, and try to evict keys with a shorter time to live (TTL) first, in order to make space for the new data added.

Since Redis 4.0, two additional policies are available: allkeys-lfu and volatile-lfu.

Question 1: Why does Redis use an approximate LFU algorithm?
Question 2: How can the accuracy of the approximate LFU algorithm be improved?
Question 3: How do you choose the maxmemory-policy that suits your workload?
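
As a hands-on complement, here is a minimal sketch that applies the same settings at runtime through CONFIG SET. It assumes a locally running Redis instance and the Jedis client, neither of which is part of the original text:

import redis.clients.jedis.Jedis;

public class MaxmemoryPolicyDemo {
    public static void main(String[] args) {
        // Assumes Redis is reachable on localhost:6379; Jedis is used purely for illustration.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Runtime equivalent of the redis.conf snippet shown above.
            jedis.configSet("maxmemory", "2mb");
            jedis.configSet("maxmemory-policy", "allkeys-lru");
            jedis.configSet("maxmemory-samples", "5");

            // Read the policy back to confirm it took effect.
            System.out.println(jedis.configGet("maxmemory-policy"));
        }
    }
}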

2、Memcached Cache Eviction Algorithms

https://memcached.org/blog/modern-lru/

3、Ehcache Cache Eviction Algorithms

        Reference: the official Ehcache documentation.

<cache name="myCache"
      maxEntriesLocalDisk="10000" eternal="false" timeToIdleSeconds="3600"
      timeToLiveSeconds="0" memoryStoreEvictionPolicy="LFU">
</cache>

Note the following about the myCache configuration:

  • Accessing an entry in myCache that has been idle for more than an hour (timeToIdleSeconds) causes that element to be evicted.
  • If an entry expires but is not accessed, and no resource constraints force eviction, then the expired entry remains in place.
  • Entries in myCache can remain in the cache forever if accessed at least once per 60 minutes (timeToLiveSeconds). However, unexpired entries may still be flushed based on other limitations (see How to Size Caches).
  • In all, myCache can store a maximum of 10000 entries (maxEntriesLocalDisk). This is the effective maximum number of entries myCache is allowed. Note, however, that this value may be exceeded as it is overridden by settings such as pinning.

memoryStoreEvictionPolicy supports the following eviction policies:

  1. LRU - least recently used
  2. LFU - least frequently used
  3. FIFO - first in first out, the oldest element by creation time
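
For readers who configure Ehcache in code rather than XML, a rough equivalent of the myCache configuration above can be sketched against the Ehcache 2.x API. Note that this is only a sketch: it limits maxEntriesLocalHeap (the heap store), whereas the XML above limits maxEntriesLocalDisk.

import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;
import net.sf.ehcache.config.CacheConfiguration;
import net.sf.ehcache.store.MemoryStoreEvictionPolicy;

public class EhcacheLfuDemo {
    public static void main(String[] args) {
        CacheManager manager = CacheManager.newInstance();

        // Roughly mirrors the XML above: 10000 entries, 1h idle timeout, no TTL, LFU in-memory eviction.
        Cache myCache = new Cache(new CacheConfiguration("myCache", 10000)
                .eternal(false)
                .timeToIdleSeconds(3600)
                .timeToLiveSeconds(0)
                .memoryStoreEvictionPolicy(MemoryStoreEvictionPolicy.LFU));
        manager.addCache(myCache);

        myCache.put(new Element("key", "value"));
        System.out.println(myCache.get("key"));

        manager.shutdown();
    }
}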

4、Guava Cache Eviction Algorithms

        Reference: see the original guide.

4.1 Eviction by Size

      We can limit the size of our cache using maximumSize(). If the cache reaches the limit, the oldest items will be evicted.

@Test
public void whenCacheReachMaxSize_thenEviction() {
    CacheLoader<String, String> loader;
    loader = new CacheLoader<String, String>() {
        @Override
        public String load(String key) {
            return key.toUpperCase();
        }
    };
    LoadingCache<String, String> cache;
    cache = CacheBuilder.newBuilder().maximumSize(3).build(loader);

    cache.getUnchecked("first");
    cache.getUnchecked("second");
    cache.getUnchecked("third");
    cache.getUnchecked("forth");
    assertEquals(3, cache.size());
    assertNull(cache.getIfPresent("first"));
    assertEquals("FORTH", cache.getIfPresent("forth"));
}

4.2 Eviction by Weight

We can also limit the cache size using a custom weight function. In the following code, we use the length as our custom weight function:

@Test
public void whenCacheReachMaxWeight_thenEviction() {
    CacheLoader<String, String> loader;
    loader = new CacheLoader<String, String>() {
        @Override
        public String load(String key) {
            return key.toUpperCase();
        }
    };

    Weigher<String, String> weighByLength;
    weighByLength = new Weigher<String, String>() {
        @Override
        public int weigh(String key, String value) {
            return value.length();
        }
    };

    LoadingCache<String, String> cache;
    cache = CacheBuilder.newBuilder()
      .maximumWeight(16)
      .weigher(weighByLength)
      .build(loader);

    cache.getUnchecked("first");
    cache.getUnchecked("second");
    cache.getUnchecked("third");
    cache.getUnchecked("last");
    assertEquals(3, cache.size());
    assertNull(cache.getIfPresent("first"));
    assertEquals("LAST", cache.getIfPresent("last"));
}

Note: The cache may remove more than one record to leave room for a new large one.

4.3 Eviction by Time

Besides using size to evict old records, we can also use time. In the following example, we customize our cache to remove records that have been idle for 2ms:

@Test
public void whenEntryIdle_thenEviction()
  throws InterruptedException {
    CacheLoader<String, String> loader;
    loader = new CacheLoader<String, String>() {
        @Override
        public String load(String key) {
            return key.toUpperCase();
        }
    };

    LoadingCache<String, String> cache;
    cache = CacheBuilder.newBuilder()
      .expireAfterAccess(2,TimeUnit.MILLISECONDS)
      .build(loader);

    cache.getUnchecked("hello");
    assertEquals(1, cache.size());

    cache.getUnchecked("hello");
    Thread.sleep(300);

    cache.getUnchecked("test");
    assertEquals(1, cache.size());
    assertNull(cache.getIfPresent("hello"));
}

We can also evict records based on their total live time. In the following example, the cache will remove the records after 2ms of being stored:

@Test
public void whenEntryLiveTimeExpire_thenEviction()
  throws InterruptedException {
    CacheLoader<String, String> loader;
    loader = new CacheLoader<String, String>() {
        @Override
        public String load(String key) {
            return key.toUpperCase();
        }
    };

    LoadingCache<String, String> cache;
    cache = CacheBuilder.newBuilder()
      .expireAfterWrite(2,TimeUnit.MILLISECONDS)
      .build(loader);

    cache.getUnchecked("hello");
    assertEquals(1, cache.size());
    Thread.sleep(300);
    cache.getUnchecked("test");
    assertEquals(1, cache.size());
    assertNull(cache.getIfPresent("hello"));
}

4.4  Weak Keys

         Next, let's see how to make our cache keys have weak references – allowing the garbage collector to collect cache keys that are not referenced elsewhere.
By default, both cache keys and values have strong references but we can make our cache store the keys using weak references using weakKeys() as in the following example:

@Test
public void whenWeakKeyHasNoRef_thenRemoveFromCache() {
    CacheLoader<String, String> loader;
    loader = new CacheLoader<String, String>() {
        @Override
        public String load(String key) {
            return key.toUpperCase();
        }
    };

    LoadingCache<String, String> cache;
    cache = CacheBuilder.newBuilder().weakKeys().build(loader);
}

4.5  Soft Values

         We can allow the garbage collector to collect our cached values by using softValues() as in the following example:

@Test
public void whenSoftValue_thenRemoveFromCache() {
    CacheLoader<String, String> loader;
    loader = new CacheLoader<String, String>() {
        @Override
        public String load(String key) {
            return key.toUpperCase();
        }
    };

    LoadingCache<String, String> cache;
    cache = CacheBuilder.newBuilder().softValues().build(loader);
}
Note: Many soft references may affect the system performance – it's preferred to use maximumSize().

5、Caffeine Cache Eviction Algorithms

      Reference: the Caffeine documentation.

Caffeine provides three types of eviction: size-based eviction, time-based eviction, and reference-based eviction.

5.1 Size-based

// Evict based on the number of entries in the cache
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
    .maximumSize(10_000)
    .build(key -> createExpensiveGraph(key));

// Evict based on the number of vertices in the cache
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
    .maximumWeight(10_000)
    .weigher((Key key, Graph graph) -> graph.vertices().size())
    .build(key -> createExpensiveGraph(key));

5.2 Time-based

// Evict based on a fixed expiration policy
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
    .expireAfterAccess(5, TimeUnit.MINUTES)
    .build(key -> createExpensiveGraph(key));
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
    .expireAfterWrite(10, TimeUnit.MINUTES)
    .build(key -> createExpensiveGraph(key));

// Evict based on a varying expiration policy
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
    .expireAfter(new Expiry<Key, Graph>() {
      public long expireAfterCreate(Key key, Graph graph, long currentTime) {
        // Use wall clock time, rather than nanotime, if from an external resource
        long seconds = graph.creationDate().plusHours(5)
            .minus(System.currentTimeMillis(), MILLIS)
            .toEpochSecond();
        return TimeUnit.SECONDS.toNanos(seconds);
      }
      public long expireAfterUpdate(Key key, Graph graph, 
          long currentTime, long currentDuration) {
        return currentDuration;
      }
      public long expireAfterRead(Key key, Graph graph,
          long currentTime, long currentDuration) {
        return currentDuration;
      }
    })
    .build(key -> createExpensiveGraph(key));

5.3 Reference-based

// Evict when neither the key nor value are strongly reachable
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
    .weakKeys()
    .weakValues()
    .build(key -> createExpensiveGraph(key));

// Evict when the garbage collector needs to free memory
LoadingCache<Key, Graph> graphs = Caffeine.newBuilder()
    .softValues()
    .build(key -> createExpensiveGraph(key));

6、A Java Implementation of the LFU Eviction Algorithm

LFU (least frequently used) evicts entries based on how often they are used; usage is measured by access count. When the cache is full, expired objects are pruned first. If the cache is still full after pruning, the least accessed entry (the one with the smallest access count) is removed, and that minimum count is subtracted from the access counts of all remaining entries so that newly added entries can compete on a fair footing.

Reference: cn.hutool.cache.impl.LFUCache from the HuTool toolkit

/**
 * Prune expired objects.<br>
 * If the cache is still full after pruning, remove the least accessed entry (smallest access count)
 * and subtract that minimum count from every remaining entry so that new entries can compete fairly.
 *
 * @return number of entries removed
 */
@Override
protected int pruneCache() {
	int count = 0;
	CacheObj<K, V> comin = null;

	// Remove expired objects and find the least accessed entry
	Iterator<CacheObj<K, V>> values = cacheMap.values().iterator();
	CacheObj<K, V> co;
	while (values.hasNext()) {
		co = values.next();
		if (co.isExpired()) {
			values.remove();
			onRemove(co.key, co.obj);
			count++;
			continue;
		}

		// Track the entry with the smallest access count
		if (comin == null || co.accessCount.get() < comin.accessCount.get()) {
			comin = co;
		}
	}

	// Decrease every entry's access count and remove the ones that drop to 0 or below
	if (isFull() && comin != null) {
		long minAccessCount = comin.accessCount.get();

		values = cacheMap.values().iterator();
		CacheObj<K, V> co1;
		while (values.hasNext()) {
			co1 = values.next();
			if (co1.accessCount.addAndGet(-minAccessCount) <= 0) {
				values.remove();
				onRemove(co1.key, co1.obj);
				count++;
			}
		}
	}

	return count;
}
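
A quick usage sketch of this class (assuming the HuTool library is on the classpath; CacheUtil is HuTool's cache factory):

import cn.hutool.cache.CacheUtil;
import cn.hutool.cache.impl.LFUCache;

public class LfuCacheDemo {
    public static void main(String[] args) {
        // Capacity of 3; when the cache is full, the pruneCache() shown above runs on put.
        LFUCache<String, String> cache = CacheUtil.newLFUCache(3);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");          // bumps the access count of "a"
        cache.put("c", "3");
        cache.put("d", "4");     // cache is full, so the least frequently accessed entries are pruned
        System.out.println(cache.containsKey("a")); // "a" has been accessed, so it should survive pruning
    }
}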

7、A Java Implementation of the LRU Eviction Algorithm

           LRU (least recently used) evicts entries based on when they were last used. Objects are placed in the cache when accessed; when the cache is full, the entry that has gone unused the longest is removed.
This cache is based on LinkedHashMap, so every time a cached object is accessed its key is moved to the most-recently-used end of the internal linked list. The algorithm is simple and very fast, and its clear advantage over FIFO is that frequently used objects are unlikely to be evicted.
The drawback is that access is not as fast once the cache is full.

Reference: cn.hutool.cache.impl.LRUCache from the HuTool toolkit

/**
 * Only prune expired entries; the LRU behavior itself is delegated to <code>LinkedHashMap</code>.
 */
@Override
protected int pruneCache() {
	if (!isPruneExpiredActive()) {
		return 0;
	}
	int count = 0;
	Iterator<CacheObj<K, V>> values = cacheMap.values().iterator();
	CacheObj<K, V> co;
	while (values.hasNext()) {
		co = values.next();
		if (co.isExpired()) {
			values.remove();
			onRemove(co.key, co.obj);
			count++;
		}
	}
	return count;
}
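
Since the LRU ordering itself is delegated to LinkedHashMap, here is a minimal, self-contained sketch (not HuTool's implementation, and without the expiry handling) of a bounded LRU cache built directly on LinkedHashMap's access order:

import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public SimpleLruCache(int capacity) {
        // accessOrder = true: each get()/put() moves the entry to the most-recently-used end
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once the capacity is exceeded
        return size() > capacity;
    }

    public static void main(String[] args) {
        SimpleLruCache<String, String> cache = new SimpleLruCache<>(3);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");
        cache.get("a");          // "a" becomes the most recently used entry
        cache.put("d", "4");     // evicts "b", the least recently used entry
        System.out.println(cache.keySet()); // [c, a, d]
    }
}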

8、A Java Implementation of the FIFO Eviction Algorithm

Entries keep being added until the cache is full. When it is full, expired entries are pruned first; if the cache is still full after pruning, the earliest-added entry (the one at the head of the linked list) is removed.
Pros: simple and fast.
Cons: inflexible; there is no guarantee that the most frequently used entries are kept.

Reference: cn.hutool.cache.impl.FIFOCache from the HuTool toolkit

/**
 * First-in-first-out pruning strategy.<br>
 * First iterate over the cache and remove expired entries; if the cache is still full afterwards,
 * remove the first (oldest) cached entry.
 */
@Override
protected int pruneCache() {
	int count = 0;
	CacheObj<K, V> first = null;

	// Remove expired entries and remember the head of the linked list (the earliest-added entry)
	Iterator<CacheObj<K, V>> values = cacheMap.values().iterator();
	while (values.hasNext()) {
		CacheObj<K, V> co = values.next();
		if (co.isExpired()) {
			values.remove();
			count++;
		}
		if (first == null) {
			first = co;
		}
	}

	// If the cache is still full after pruning, remove the first cached entry
	if (isFull() && null != first) {
		cacheMap.remove(first.key);
		onRemove(first.key, first.obj);
		count++;
	}
	return count;
}
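
For comparison with the LRU sketch in the previous section, a minimal FIFO cache (again a standalone sketch, not HuTool's implementation, and without expiry handling) needs only LinkedHashMap's default insertion order:

import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleFifoCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public SimpleFifoCache(int capacity) {
        // The default LinkedHashMap keeps insertion order, so the eldest entry is always the first one added
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the first-inserted entry once the capacity is exceeded, no matter how often it is read
        return size() > capacity;
    }

    public static void main(String[] args) {
        SimpleFifoCache<String, String> cache = new SimpleFifoCache<>(3);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");
        cache.get("a");          // reads do not change insertion order
        cache.put("d", "4");     // evicts "a", the entry that was added first
        System.out.println(cache.keySet()); // [b, c, d]
    }
}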

Reposted from blog.csdn.net/s2008100262/article/details/111150346