4. What are redis's expiration policies? What are the memory eviction mechanisms? Can you handwrite an LRU implementation?

Author: Zhonghua Shishan (中华石杉)

Interview questions

What are redis's expiration policies? What are the memory eviction mechanisms? Can you handwrite an LRU implementation?

Interviewer psychological analysis

If you don't even know this question, freeze up when it's asked, and can't answer it, then when you write production code you'll assume as a matter of course that data written to redis will always be there. Later the system develops all kinds of bugs, and then who takes responsibility?

There are two common problems:

  • Why did data written to redis disappear?

Some of you may have run into this: redis in production keeps losing data; you write something in, and a little while later it's gone. My goodness, if you're asking this question, it shows you aren't using redis correctly. redis is a cache, and you're treating it as durable storage, aren't you?

What is a cache? It's memory being used as a cache. Is memory unlimited? No, memory is precious and limited, while disk is cheap and plentiful. A machine might have a few dozen GB of memory but several TB of disk space. redis delivers high-performance, highly concurrent reads and writes precisely because it works out of memory.

Since memory is limited, say redis can only use 10 GB, what happens if you write 20 GB of data into it? Of course it has to throw away 10 GB of data and keep the other 10 GB. Which data does it throw away, and which does it keep? Naturally it throws away the infrequently accessed data and keeps the frequently accessed data.

  • The data has clearly expired, so why is it still occupying memory?

That is determined by redis's expiration policy.

Interview question analysis

redis expiration policy

redis's expiration policy is: periodic deletion + lazy deletion.

Periodic deletion means that, by default, every 100ms redis randomly samples some of the keys that have an expiration time set, checks whether they have expired, and deletes the ones that have.

Suppose redis holds 100,000 keys, all with expiration times set. If it checked all 100,000 keys every few hundred milliseconds, redis would basically grind to a halt: the CPU load would spike, all of it spent checking for expired keys. Note that redis does not traverse every key with an expiration time every 100ms; that would be a performance disaster. In reality, every 100ms redis randomly samples some keys to check and delete.
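
A note on that 100ms figure: as far as I know it comes from redis's background task frequency, which is governed by the hz setting in redis.conf and defaults to 10 runs per second, i.e. roughly one pass every 100ms. A purely illustrative snippet showing the default:

# redis.conf (default value, shown only for illustration)
hz 10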

But the problem is that periodic deletion can leave many expired keys undeleted long past their expiry time. What then? That's where lazy deletion comes in. It means that when you fetch a key, redis checks: if this key has an expiration time set, has it expired? If it has, redis deletes it right then and returns nothing to you.

When a key is fetched, if it has already expired, it is deleted and nothing is returned.
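
To make the two mechanisms concrete, here is a toy Java sketch of a cache that expires entries both lazily on read and via a periodic random-sample sweep. It is only a simplified model of the idea, not redis's actual implementation (redis is written in C and, among other differences, only samples keys that actually have a TTL); all class and method names below are made up for illustration.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;

class ExpiringCache {

    private static class Entry {
        final String value;
        final long expireAtMillis; // absolute expiry timestamp

        Entry(String value, long expireAtMillis) {
            this.value = value;
            this.expireAtMillis = expireAtMillis;
        }

        boolean isExpired() {
            return System.currentTimeMillis() >= expireAtMillis;
        }
    }

    private final Map<String, Entry> store = new ConcurrentHashMap<>();
    private final Random random = new Random();

    void setWithTtl(String key, String value, long ttlMillis) {
        store.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    // Lazy deletion: the expiry check only happens when the key is read.
    String get(String key) {
        Entry e = store.get(key);
        if (e == null) {
            return null;
        }
        if (e.isExpired()) {
            store.remove(key); // delete on access and return nothing
            return null;
        }
        return e.value;
    }

    // Periodic deletion: meant to be called on a timer (say every 100ms); it
    // checks only a small random sample of keys instead of scanning all of them.
    void sweepRandomSample(int sampleSize) {
        List<String> keys = new ArrayList<>(store.keySet());
        for (int i = 0; i < sampleSize && !keys.isEmpty(); i++) {
            String key = keys.get(random.nextInt(keys.size()));
            Entry e = store.get(key);
            if (e != null && e.isExpired()) {
                store.remove(key);
            }
        }
    }
}

In a real service you would drive sweepRandomSample from something like a ScheduledExecutorService; the point is simply that the sweep looks at a bounded sample rather than at every key.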

But there is still a problem. What if periodic deletion misses a lot of expired keys, and you never query them either, so lazy deletion never kicks in? A large pile of expired keys accumulates in memory and exhausts redis's memory. Then what?

The answer is: go through the memory eviction mechanism.

Memory eviction mechanism

redis has the following memory eviction policies (a config sketch follows the list):

  • noeviction: when memory is insufficient to hold newly written data, new write operations return an error. Hardly anyone uses this; it's really nasty.
  • allkeys-lru: when memory is insufficient to hold newly written data, remove the least recently used key from the whole key space (this is the most commonly used).
  • allkeys-random: when memory is insufficient to hold newly written data, remove a random key from the whole key space. Hardly anyone uses this either; why evict at random when you should obviously evict the least recently used key?
  • volatile-lru: when memory is insufficient to hold newly written data, remove the least recently used key from among the keys that have an expiration time set (generally not very appropriate).
  • volatile-random: when memory is insufficient to hold newly written data, randomly remove a key from among the keys that have an expiration time set.
  • volatile-ttl: when memory is insufficient to hold newly written data, remove the key with the earliest expiration time from among the keys that have an expiration time set.
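
Which of these policies actually runs is something you configure yourself, and eviction only kicks in once a memory ceiling is set. A redis.conf sketch (the 10gb limit is just an illustrative value matching the example earlier):

maxmemory 10gb
maxmemory-policy allkeys-lru

The policy can also be switched at runtime with CONFIG SET maxmemory-policy allkeys-lru.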

Handwriting an LRU algorithm

You could try to hand-write the most primitive LRU algorithm on the spot, but the amount of code is too large; it doesn't seem realistic.

You aren't expected to hand-build your own LRU from scratch, but you should at least know how to use an existing JDK data structure to implement a Java version of an LRU cache.

import java.util.LinkedHashMap;
import java.util.Map;

class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int CACHE_SIZE;

    /**
     * Pass in the maximum number of entries to cache.
     *
     * @param cacheSize cache size
     */
    public LRUCache(int cacheSize) {
        // true makes LinkedHashMap order entries by access: the most recently
        // accessed entry moves to the tail of the iteration order, and the
        // least recently accessed entry sits at the head as the "eldest".
        super((int) Math.ceil(cacheSize / 0.75) + 1, 0.75f, true);
        CACHE_SIZE = cacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Once the map holds more entries than the cache size, the eldest
        // (least recently accessed) entry is removed automatically.
        return size() > CACHE_SIZE;
    }
}
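
A quick usage sketch of the class above (wrap it in a main method to actually run it): with a capacity of 3, touching k1 before inserting a 4th entry makes k2 the least recently used entry, so k2 is the one that gets evicted.

LRUCache<String, String> cache = new LRUCache<>(3);
cache.put("k1", "v1");
cache.put("k2", "v2");
cache.put("k3", "v3");
cache.get("k1");                    // touch k1 so it becomes most recently used
cache.put("k4", "v4");              // exceeds the capacity of 3, so the eldest entry (k2) is evicted
System.out.println(cache.keySet()); // prints [k3, k1, k4]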


Source: www.cnblogs.com/morganlin/p/11980165.html