What are Redis's expiration policies? What memory eviction mechanisms are there? How would you hand-write an LRU implementation?

What are Redis's expiration policies?

Setting an expiration time:

    When you set a key you can attach an expire time to it, after which the key becomes invalid. For example, you can specify that a key may only live for one hour, or for 10 minutes; once that time is up, the cached value expires.
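
    As a quick illustration (the Jedis client and the connection details below are assumptions of this sketch, not part of the original notes), attaching a TTL to a key from Java might look like this:

import redis.clients.jedis.Jedis;

public class ExpireExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Set the key first, then attach a TTL of 600 seconds (10 minutes).
            jedis.set("session:42", "some-value");
            jedis.expire("session:42", 600);

            // Or do both in one step with SETEX.
            jedis.setex("session:43", 600, "some-value");

            // TTL reports the remaining lifetime in seconds (-1 = no TTL, -2 = key gone).
            System.out.println(jedis.ttl("session:42"));
        }
    }
}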

  Redis's expiration policy is:

      periodic deletion plus lazy deletion.

    Periodic deletion: by default, every 100ms Redis randomly samples some of the keys that have an expiration time set, checks whether they have expired, and deletes the ones that have.

        Suppose Redis holds 100,000 keys, all with an expiration time set. If it checked all 100,000 every few hundred milliseconds, Redis would essentially grind to a halt: the CPU load would be extremely high, all of it spent checking for expired keys.

        Note that Redis does not traverse every key with an expiration time every 100ms; that would be a performance disaster. In fact, every 100ms Redis only randomly samples some of those keys to check and delete.

    Lazy deletion: when you fetch a key, Redis checks whether that key has an expiration time set and whether it has already expired. If it has expired, it is deleted at that moment and nothing is returned to you.

    In other words, a key is not necessarily removed at the instant it expires; instead, when you later query that key, Redis lazily checks it again.

      Combining these two mechanisms ensures that expired keys eventually get removed (a small sketch of the idea follows below).
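
    To make the two mechanisms concrete, here is a minimal, purely illustrative Java sketch of a toy key-value store that mimics them: a get() that lazily drops an expired key on read, and an activeExpireCycle() that samples a few random keys and evicts the expired ones. All class and method names are invented for the illustration; this is a simplified model of the idea, not Redis's actual implementation.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class ToyExpiringStore {

    private final Map<String, String> data = new HashMap<>();
    private final Map<String, Long> expireAt = new HashMap<>(); // key -> absolute expiry time in ms
    private final Random random = new Random();

    public void set(String key, String value, long ttlMillis) {
        data.put(key, value);
        expireAt.put(key, System.currentTimeMillis() + ttlMillis);
    }

    // Lazy deletion: the expiry check happens only when the key is actually read.
    public String get(String key) {
        Long deadline = expireAt.get(key);
        if (deadline != null && deadline <= System.currentTimeMillis()) {
            data.remove(key);
            expireAt.remove(key);
            return null; // expired: delete it now and return nothing
        }
        return data.get(key);
    }

    // Periodic deletion: sample a few random keys that carry a TTL and evict the expired ones.
    // Redis runs a comparable cycle roughly every 100ms instead of scanning every key.
    public void activeExpireCycle(int sampleSize) {
        List<String> keysWithTtl = new ArrayList<>(expireAt.keySet());
        long now = System.currentTimeMillis();
        for (int i = 0; i < sampleSize && !keysWithTtl.isEmpty(); i++) {
            String key = keysWithTtl.get(random.nextInt(keysWithTtl.size()));
            Long deadline = expireAt.get(key);
            if (deadline != null && deadline <= now) {
                data.remove(key);
                expireAt.remove(key);
            }
        }
    }
}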

    But there is still a problem. What if periodic deletion misses a lot of expired keys, and you never query them afterwards, so lazy deletion never kicks in either? If a large number of expired keys pile up in memory, Redis's memory will be exhausted.

        What do you do then? That is where memory eviction comes in.

What memory eviction mechanisms are there?

    noeviction: when memory is insufficient to hold newly written data, new write operations report an error. Hardly anyone uses this.

    allkeys-lru: when memory is insufficient to hold newly written data, remove the least recently used key from the entire key space (whether or not it has an expiration time set).

    allkeys-random: when memory is insufficient to hold newly written data, remove a random key from the key space. Hardly anyone uses this either: why evict at random when you would surely rather evict the least recently used key?

    volatile-random: when memory is insufficient to hold newly written data, remove a random key from among the keys that have an expiration time set.

    volatile-ttl: when memory is insufficient to hold newly written data, among the keys that have an expiration time set, evict the keys whose expiration time is soonest first. (A sketch of how to switch between these policies follows below.)
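
    In practice the policy is chosen with the maxmemory and maxmemory-policy settings in redis.conf, or switched at runtime with CONFIG SET. A minimal sketch, again assuming the Jedis client (the client choice and the values shown are illustrative):

import redis.clients.jedis.Jedis;

public class EvictionConfigExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Cap memory at 100mb and evict the least recently used key across the whole key space.
            jedis.configSet("maxmemory", "100mb");
            jedis.configSet("maxmemory-policy", "allkeys-lru");

            // Read the setting back to verify the change.
            System.out.println(jedis.configGet("maxmemory-policy"));
        }
    }
}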

How would you hand-write an LRU cache implementation?

import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> extends LinkedHashMap<K, V> {

    // Maximum number of entries the cache may hold.
    private final int CACHE_SIZE;

    public LRUCache(int cacheSize) {
        // The initial capacity is sized so the backing HashMap never needs to resize.
        // The final `true` makes LinkedHashMap keep entries in access order:
        // the least recently used entry sits at the head, the most recently used at the tail.
        super((int) Math.ceil(cacheSize / 0.75) + 1, 0.75f, true);
        CACHE_SIZE = cacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Once the map holds more entries than the cache size,
        // the eldest (least recently used) entry is removed automatically.
        return size() > CACHE_SIZE;
    }
}
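
A brief usage sketch of the class above (the demo wrapper below is illustrative and assumes the LRUCache class just defined is on the classpath): with a capacity of 3, touching an entry with get() keeps it alive, and inserting a fourth entry evicts the least recently used one.

public class LRUCacheDemo {
    public static void main(String[] args) {
        LRUCache<Integer, String> cache = new LRUCache<>(3);
        cache.put(1, "a");
        cache.put(2, "b");
        cache.put(3, "c");

        cache.get(1);        // touch key 1 so it becomes the most recently used
        cache.put(4, "d");   // capacity exceeded: key 2 (the least recently used) is evicted

        System.out.println(cache.keySet());   // prints [3, 1, 4]
    }
}

Extending LinkedHashMap this way keeps the implementation down to a constructor and one overridden method, because the access-ordered linked list already maintains exactly the recency ordering an LRU cache needs.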

 


Origin www.cnblogs.com/qingmuchuanqi48/p/11129862.html