What should you do when Redis runs out of memory?

Setting the Redis memory size

We know that Redis is an in-memory key-value database. Since the system's memory is limited, we can configure the maximum amount of memory Redis is allowed to use.

1. Via the configuration file

Set the maximum memory size by adding the following setting to the redis.conf configuration file in the Redis installation directory:

# set the maximum memory Redis can use to 100MB
maxmemory 100mb
If Redis is not using the redis.conf file in the installation directory, you can pass the configuration file to use as a parameter when starting the Redis service.
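For example (the path here is just illustrative):

redis-server /path/to/redis.conf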

2. Via a command

Redis also supports changing the memory limit dynamically at runtime with a command:

// set the maximum memory Redis can use to 100MB
127.0.0.1:6379> config set maxmemory 100mb
// get the maximum memory Redis can use
127.0.0.1:6379> config get maxmemory
If the maximum memory size is not set, or it is set to 0, Redis does not limit its memory usage on 64-bit operating systems, and uses at most 3GB of memory on 32-bit operating systems.

What happens when Redis runs out of memory

Since we can set a maximum amount of memory for Redis, the configured memory can eventually be used up. When the memory runs out and we keep adding data to Redis, is there really no memory left for it?

Redis actually defines several strategies to deal with this situation:

noeviction (default policy): no longer serves write requests and returns an error directly (DEL and a few other special requests are exceptions)

allkeys-lru: evicts keys from the entire key space using the LRU algorithm

volatile-lru: evicts keys that have an expiration time set, using the LRU algorithm

allkeys-random: evicts keys at random from the entire key space

volatile-random: evicts keys at random from the keys that have an expiration time set

volatile-ttl: among the keys that have an expiration time set, evicts keys according to their expiration time, with keys expiring sooner evicted first

When volatile-lru, volatile-random or volatile-ttl is used and there are no keys that can be evicted, an error is returned just as with noeviction.
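For example, with the default noeviction policy, once the memory limit is reached a write command fails with an OOM error (the exact wording may differ between Redis versions):

127.0.0.1:6379> set foo bar
(error) OOM command not allowed when used memory > 'maxmemory'.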

How to get and set the eviction policy

Get the current eviction policy:

127.0.0.1:6379> config get maxmemory-policy

Set the eviction policy through the configuration file (modify redis.conf):

maxmemory-policy allkeys-lru

Change the eviction policy with a command:

127.0.0.1:6379> config set maxmemory-policy allkeys-lru

LRU algorithm

What is LRU?

As mentioned above, when the maximum memory Redis is allowed to use is exhausted, the LRU algorithm can be used to evict data. So what exactly is the LRU algorithm?

LRU (Least Recently Used), i.e. least recently used, is a cache replacement algorithm. When memory is used as a cache, the cache size is usually fixed. When the cache is full and we keep adding data to it, some old data has to be evicted to free memory for the new data. This is where the LRU algorithm can be used. Its core idea is: if a piece of data has not been used recently, the probability that it will be used in the future is very small, so it can be evicted.

Implementing a simple LRU algorithm in Java

import java.util.HashMap;
import java.util.Map;

class LRUCache<K, V> {
    // capacity of the cache
    private int capacity;
    // number of nodes currently in the cache
    private int count;
    // cache nodes
    private Map<K, Node<K, V>> nodeMap;
    private Node<K, V> head;
    private Node<K, V> tail;

    public LRUCache(int capacity) {
        if (capacity < 1) {
            throw new IllegalArgumentException(String.valueOf(capacity));
        }
        this.capacity = capacity;
        this.nodeMap = new HashMap<>();
        // initialize sentinel head and tail nodes to avoid null checks on them
        Node<K, V> headNode = new Node<>(null, null);
        Node<K, V> tailNode = new Node<>(null, null);
        headNode.next = tailNode;
        tailNode.pre = headNode;
        this.head = headNode;
        this.tail = tailNode;
    }

    public void put(K key, V value) {
        Node<K, V> node = nodeMap.get(key);
        if (node == null) {
            if (count >= capacity) {
                // evict the least recently used node first
                removeNode();
            }
            node = new Node<>(key, value);
            // add the new node
            addNode(node);
        } else {
            // update the value and move the node to the head
            node.value = value;
            moveNodeToHead(node);
        }
    }

    public Node<K, V> get(K key) {
        Node<K, V> node = nodeMap.get(key);
        if (node != null) {
            moveNodeToHead(node);
        }
        return node;
    }

    private void removeNode() {
        Node<K, V> node = tail.pre;
        // remove it from the linked list
        removeFromList(node);
        nodeMap.remove(node.key);
        count--;
    }

    private void removeFromList(Node<K, V> node) {
        Node<K, V> pre = node.pre;
        Node<K, V> next = node.next;
        pre.next = next;
        next.pre = pre;
        node.next = null;
        node.pre = null;
    }

    private void addNode(Node<K, V> node) {
        // add the node to the head of the list
        addToHead(node);
        nodeMap.put(node.key, node);
        count++;
    }

    private void addToHead(Node<K, V> node) {
        Node<K, V> next = head.next;
        next.pre = node;
        node.next = next;
        node.pre = head;
        head.next = node;
    }

    public void moveNodeToHead(Node<K, V> node) {
        // remove it from the linked list
        removeFromList(node);
        // add it back at the head
        addToHead(node);
    }

    static class Node<K, V> {
        K key;
        V value;
        Node<K, V> pre;
        Node<K, V> next;

        public Node(K key, V value) {
            this.key = key;
            this.value = value;
        }
    }
}
The code above implements a simple LRU algorithm. The code is quite short, and with the comments added, a careful read makes it easy to understand.
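As a quick sanity check, here is a minimal usage sketch for the LRUCache above (the class name LRUCacheDemo and the sample keys are just for illustration):

public class LRUCacheDemo {
    public static void main(String[] args) {
        LRUCache<String, Integer> cache = new LRUCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");        // "a" becomes the most recently used key
        cache.put("c", 3);     // capacity exceeded, so the least recently used key "b" is evicted
        System.out.println(cache.get("b"));        // null, "b" was evicted
        System.out.println(cache.get("a").value);  // 1, "a" is still cached
    }
}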

How Redis implements LRU

Approximate LRU algorithm

Redis uses an approximate LRU algorithm, which is not quite the same as the conventional LRU algorithm. The approximate LRU algorithm evicts data by random sampling: each time it randomly takes 5 keys (by default) and evicts the least recently used key among them.

The number of samples can be changed with the maxmemory-samples parameter, for example: maxmemory-samples 10. The larger the maxmemory-samples value, the closer the eviction results are to the strict LRU algorithm.
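For example, the sample size can be set in redis.conf or changed at runtime (assuming your Redis version allows this parameter to be changed with CONFIG SET):

# in redis.conf
maxmemory-samples 10

127.0.0.1:6379> config set maxmemory-samples 10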

To implement the approximate LRU algorithm, Redis adds an extra 24-bit field to every key, which stores the time the key was last accessed.

Redis 3.0's optimization of the approximate LRU algorithm

Redis 3.0 optimizes the approximate LRU algorithm. The new algorithm maintains a candidate pool (of size 16) in which the data is sorted by access time. The keys from the first random sampling all go into the pool; after that, each randomly sampled key only enters the pool if its last access time is smaller than the smallest access time in the pool, until the candidate pool is full. Once the pool is full, whenever a new key needs to be put in, the key in the pool with the largest last access time (the most recently accessed one) is removed.

When eviction is needed, the key in the pool with the smallest last access time (the one that has gone unaccessed the longest) is selected and evicted.
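To make the idea concrete, here is a rough, illustrative Java sketch of the candidate-pool mechanism. This is not Redis's actual C implementation; the class name, the key list and the lastAccess map are assumptions made for the example, and the insertion rule is simplified to "the pool has room, or the key is older than a candidate already in the pool":

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;

class EvictionPoolSketch<K> {
    private static final int POOL_SIZE = 16;
    private static final int SAMPLES = 5;
    // candidates kept sorted from oldest to newest last-access time
    private final List<K> pool = new ArrayList<>();
    private final Random random = new Random();

    // Randomly sample keys and offer them to the candidate pool.
    void sample(List<K> allKeys, Map<K, Long> lastAccess) {
        for (int i = 0; i < SAMPLES; i++) {
            K candidate = allKeys.get(random.nextInt(allKeys.size()));
            if (pool.contains(candidate)) {
                continue;
            }
            long newestInPool = pool.isEmpty()
                    ? Long.MAX_VALUE
                    : lastAccess.get(pool.get(pool.size() - 1));
            // a key enters the pool if the pool still has room, or if it was
            // accessed earlier than a candidate already in the pool
            if (pool.size() < POOL_SIZE || lastAccess.get(candidate) < newestInPool) {
                pool.add(candidate);
                pool.sort((a, b) -> Long.compare(lastAccess.get(a), lastAccess.get(b)));
                if (pool.size() > POOL_SIZE) {
                    // the pool overflowed: drop the most recently accessed candidate
                    pool.remove(pool.size() - 1);
                }
            }
        }
    }

    // When eviction is needed, take the candidate that has gone unaccessed the longest.
    K pickVictim() {
        return pool.isEmpty() ? null : pool.remove(0);
    }
}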

Comparison of the LRU algorithms

We can compare the accuracy of the LRU algorithms with an experiment: first add n data items to Redis until its available memory is used up, then add another n/2 new data items, which forces part of the data to be evicted. Under a strict LRU algorithm, the evicted data should be the first n/2 items that were added. The following figure compares the LRU algorithms (source):

[Figure: comparison of the eviction results of the different LRU algorithms]

In the figure you can see dots in three different colors:

  • Light gray dots are the data that was evicted
  • Gray dots are the old data that was not evicted
  • Green dots are the newly added data

We can see that Redis 3.0 with a sample size of 10 is closest to strict LRU. With the same sample size of 5, Redis 3.0 is also better than Redis 2.8.

LFU algorithm

The LFU algorithm is a new eviction policy added in Redis 4.0. Its full name is Least Frequently Used, and its core idea is to evict keys based on how frequently they have been accessed recently: keys that are rarely accessed are evicted first, and keys that are accessed often are kept.

The LFU algorithm better reflects how hot a key actually is. With the LRU algorithm, a key that has not been accessed for a long time but happens to be accessed once is treated as hot data and will not be evicted, while some keys that are likely to be accessed in the future may be evicted instead. This does not happen with the LFU algorithm, because a single access is not enough to make a key hot data.

LFU comes with two policies:

  • volatile-lfu: evicts keys using the LFU algorithm among the keys that have an expiration time set
  • allkeys-lfu: evicts keys using the LFU algorithm among all keys

These two policies are set in the same way as described earlier, but note that they can only be set on Redis 4.0 and above; setting them on a version below Redis 4.0 results in an error.
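For example, on Redis 4.0 or above:

127.0.0.1:6379> config set maxmemory-policy allkeys-lfu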

A final question

Finally, a small question to leave you with: some readers may have noticed that I did not explain why Redis uses an approximate LRU algorithm rather than an exact LRU algorithm. Feel free to give your answer in the comments section, and we can discuss and learn from it together.

Origin www.cnblogs.com/longxok/p/11504911.html