Java architecture series -- Redis expired-key handling and memory eviction mechanisms

Deletion policies: how does Redis handle expired keys?

Setting an expiration time on a key does not free the memory it occupies the moment it expires; the server's memory stays occupied until the key is actually deleted. Redis removes expired keys using two strategies:

  • Periodic deletion
  • Lazy deletion

Periodic deletion
Redis puts every key that has an expiration time into a separate dictionary and periodically traverses this dictionary to delete expired keys. The deletion strategy works as follows (a simplified Java sketch follows the list):

  1. By default, Redis runs an expired-key scan ten times per second (once every 100 ms). A scan does not traverse every key in the expires dictionary; it uses a simple greedy strategy instead.
  2. Randomly pick 20 keys from the expires dictionary;
  3. Delete those of the 20 that have already expired;
  4. If more than 1/4 of the sampled keys were expired, go back to step 2.
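To make the sampling loop concrete, here is a minimal sketch in Java. It only illustrates the greedy strategy described above and is not Redis's actual C implementation; the `expires` map and all class, method, and key names are invented for the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.concurrent.ConcurrentHashMap;

// Illustration of the periodic-scan greedy strategy: sample 20 keys,
// delete the expired ones, repeat while more than 1/4 were expired.
public class PeriodicExpireSketch {
    // Stand-in for Redis's internal "expires" dictionary: key -> expiry timestamp (ms).
    private final Map<String, Long> expires = new ConcurrentHashMap<>();
    private final Random random = new Random();

    void put(String key, long ttlMillis) {
        expires.put(key, System.currentTimeMillis() + ttlMillis);
    }

    // One scan cycle, as run roughly every 100 ms.
    void scanOnce() {
        boolean repeat;
        do {
            List<String> sample = sampleKeys(20);
            long now = System.currentTimeMillis();
            int expiredInSample = 0;
            for (String key : sample) {
                Long deadline = expires.get(key);
                if (deadline != null && deadline <= now) {
                    expires.remove(key);   // drop the expired key
                    expiredInSample++;
                }
            }
            // Repeat while more than 1/4 of the sampled keys were expired.
            repeat = !sample.isEmpty() && expiredInSample * 4 > sample.size();
        } while (repeat);
    }

    private List<String> sampleKeys(int count) {
        List<String> keys = new ArrayList<>(expires.keySet());
        List<String> sample = new ArrayList<>();
        for (int i = 0; i < count && !keys.isEmpty(); i++) {
            sample.add(keys.remove(random.nextInt(keys.size())));
        }
        return sample;
    }

    public static void main(String[] args) {
        PeriodicExpireSketch sketch = new PeriodicExpireSketch();
        sketch.put("user:1", 0);       // already expired
        sketch.put("user:2", 60_000);  // expires in a minute
        sketch.scanOnce();
        System.out.println(sketch.expires.keySet()); // only user:2 remains
    }
}
```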

Lazy deletion
In addition to the periodic scan, Redis uses a lazy strategy to remove expired keys: when a client accesses a key, Redis checks the key's expiration time, and if the key has expired it is deleted on the spot and nothing (a nil reply) is returned.
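From the client's point of view, lazy deletion is invisible: an expired key simply behaves as if it no longer exists. Below is a minimal sketch using the Jedis client, assuming a Redis server running on localhost:6379; the key name is made up for the example, and any client library would show the same behavior.

```java
import redis.clients.jedis.Jedis;

// Demonstrates the observable effect of lazy deletion: once the TTL has
// passed, reading the key returns nil (null in Jedis), even though Redis
// may only physically remove it when it is accessed or hit by a scan.
public class LazyDeleteDemo {
    public static void main(String[] args) throws InterruptedException {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.set("session:42", "alice");
            jedis.expire("session:42", 1);                 // expire after 1 second

            System.out.println(jedis.get("session:42"));   // "alice"
            Thread.sleep(1500);                            // wait past the TTL
            System.out.println(jedis.get("session:42"));   // null: key treated as gone
        }
    }
}
```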

Why use both periodic deletion and lazy deletion?

If Redis relied on scanning alone and checked every key: suppose Redis holds 100,000 keys, all with expiration times. Checking all 100,000 keys every few hundred milliseconds would basically kill Redis; CPU load would be very high, all of it spent checking for expired keys.

The problem with periodic deletion, though, is that many keys can stay in memory past their expiration time because the random sampling never happened to pick them. What then? That is where lazy deletion comes in: when you fetch a key, Redis checks whether the key has an expiration time and whether it has already expired; if it has, the key is deleted and nothing is returned.

So a key is not necessarily removed at the moment it expires; Redis lazily checks it again when you query it.

Combining these two methods ensures that expired keys are eventually removed.

Cache eviction: what should Redis do when its memory is full?

When memory is full, Redis could spill data to the hard drive, but that makes little sense: the disk is far slower than memory and would hurt Redis's performance. Instead, once memory usage reaches the configured limit, Redis applies a cache eviction mechanism controlled by its maxmemory policy.
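Eviction only kicks in when a memory limit is actually configured. Below is a minimal sketch, under the same local-Redis/Jedis assumptions as above, of setting that limit at runtime with CONFIG SET; in production the maxmemory directive would normally live in redis.conf, and the 100mb figure is just an example value.

```java
import redis.clients.jedis.Jedis;

// Sets a memory cap at runtime so that the eviction policies described
// below have something to trigger on.
public class MaxMemoryConfig {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.configSet("maxmemory", "100mb");
            System.out.println(jedis.configGet("maxmemory")); // confirm the limit
        }
    }
}
```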

There are only a handful of eviction policies; here is a brief description of each:
volatile-lru
Evict the least recently used key from the set of keys that have an expiration time. Redis does not guarantee it picks the truly least recently used key of that whole set; it randomly samples a few keys and evicts the least recently used among them. When the memory limit is reached and there is nothing left in that set to evict, new writes fail.
volatile-ttl
Evict the key closest to expiring from the set of keys that have an expiration time. Again, Redis does not examine the whole set; it randomly samples a few keys and evicts the one with the shortest remaining TTL. When the memory limit is reached and that set is empty, new writes fail.
volatile-random
Evict a randomly chosen key from the set of keys that have an expiration time. When the memory limit is reached and that set is empty, new writes fail.
allkeys-lru
Evict the least recently used key from the whole keyspace (again based on random sampling). When the memory limit is reached, an approximately least-recently-used key is removed to make room for the new write.
allkeys-random
Evict a randomly chosen key from the whole keyspace. When the memory limit is reached, a random key is removed to make room for the new write.
noeviction
Evict nothing. When the memory limit is reached, no data is removed and no new data can be written; commands that would allocate more memory return an error.
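Which policy fits depends on the workload; for a pure cache with no must-keep keys, allkeys-lru is a common choice. Here is a minimal sketch, under the same local-Redis/Jedis assumptions as above, of switching the policy at runtime:

```java
import redis.clients.jedis.Jedis;

// Switches the eviction policy at runtime. With allkeys-lru, writes keep
// succeeding after maxmemory is hit because Redis evicts approximately
// least-recently-used keys from the whole keyspace to make room.
public class EvictionPolicyConfig {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.configSet("maxmemory-policy", "allkeys-lru");
            System.out.println(jedis.configGet("maxmemory-policy")); // confirm the policy
        }
    }
}
```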

Origin blog.csdn.net/No_Game_No_Life_/article/details/104296744