Redis data structures, persistence, and cache eviction strategies

Redis is single-threaded and performs well: all data lives in memory, so every operation is memory-level, and being single-threaded also avoids the performance cost of switching between threads. Redis implements I/O multiplexing with epoll: connection information and events are put into a queue, fed in turn to the file event dispatcher, and the dispatcher distributes each event to the corresponding event handler.

 

1. Redis data structures and common commands

String, list, set, hash, zset (sorted set)

Overall, Redis stores data as key-value pairs; the different data types are just different forms of the value.

 

String: the simplest data structure; for example, we can serialize an object into a JSON string and store it (a small Java sketch follows the commands below).

  set key value - store data
  get key - retrieve data
  exists key - check whether the key exists; returns 1 if it does, otherwise 0
  del key - delete the data; returns the number of keys removed

  mset key1 value1 key2 value2 key3 value3 ... - store multiple key-value pairs
  mget key1 key2 key3 ... - get the values of multiple keys; returns a list of values, like a batch Map lookup

  expire key seconds - set the key's time to live in seconds
  setex key seconds value - set the value with a time to live in seconds (equivalent to SET followed by EXPIRE)
  setnx key value - set the value only if the key does not exist; returns 1 if it was set, 0 if the key already exists (a distributed lock can be built on this)
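A minimal sketch of how these string commands and a simple setnx-based lock might look from application code, assuming a Java client such as Jedis (the client library, key names, and values here are illustrative, not from the original post):

  import redis.clients.jedis.Jedis;

  public class StringDemo {
      public static void main(String[] args) {
          try (Jedis jedis = new Jedis("localhost", 6379)) {
              // set / get: store an object serialized as a JSON string
              jedis.set("user:1", "{\"name\":\"tom\",\"age\":20}");
              System.out.println(jedis.get("user:1"));

              // setex: value with a 60-second time to live
              jedis.setex("captcha:1", 60, "9527");

              // setnx: only succeeds if the key does not exist yet (simple lock)
              if (jedis.setnx("lock:order:1", "holder-1") == 1) {
                  jedis.expire("lock:order:1", 10); // avoid holding the lock forever
                  // ... do the protected work ...
                  jedis.del("lock:order:1");        // release the lock
              }
          }
      }
  }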

 

List: unlike Java's List, the Redis list behaves like a linked list that can be used as a queue or a stack. This means inserts and deletes are fast, but positioning by index is relatively slow. When the last element is popped, the data structure is automatically deleted and its memory reclaimed.

The Redis list is often used as a queue for asynchronous processing: tasks that need delayed processing are pushed into a Redis list, and another thread polls the list and processes the data (see the sketch after the commands below).

 

  rpush key value1 value2 value3 ... - push data onto the list
  llen key - view the list length
  lpop key - pop in insertion order (FIFO, queue-like)
  rpop key - pop in reverse insertion order (LIFO, somewhat like a stack)

  Once all elements have been popped, the list is reclaimed, so each element can only be taken once.
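A rough sketch of the queue pattern described above, again assuming the Jedis client (queue name and task payloads are made up for illustration):

  import redis.clients.jedis.Jedis;

  public class TaskQueueDemo {
      public static void main(String[] args) {
          try (Jedis jedis = new Jedis("localhost", 6379)) {
              // producer: push tasks onto the tail of the list
              jedis.rpush("task:queue", "task-1", "task-2", "task-3");
              System.out.println("queue length: " + jedis.llen("task:queue"));

              // consumer: poll tasks from the head of the list (FIFO)
              String task;
              while ((task = jedis.lpop("task:queue")) != null) {
                  System.out.println("processing " + task);
              }

              // once the last element is popped, the key itself is gone
              System.out.println("key exists: " + jedis.exists("task:queue"));
          }
      }
  }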

 

Hash: similar to Java's HashMap. Compared with a string, it lets us store and read individual fields of an object, whereas with a string the whole object has to be serialized and deserialized. Of course, the hash structure consumes more storage than a plain string (a small example follows the commands below).

  hset redisKey hashKey1 value1 - insert data (one field at a time)
  hset redisKey hashKey2 value2
  hgetall redisKey - get all data; fields and values are returned alternately
  hlen redisKey - view the number of fields in the hash
  hget redisKey hashKey - get the value of the given field
  hmset redisKey hashKey1 value1 hashKey2 value2 hashKey3 value3 - insert multiple fields at once
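A small sketch of the field-level access described above, with a made-up user object and the Jedis client assumed:

  import java.util.Map;
  import redis.clients.jedis.Jedis;

  public class HashDemo {
      public static void main(String[] args) {
          try (Jedis jedis = new Jedis("localhost", 6379)) {
              // store individual fields of a user object
              jedis.hset("user:1", "name", "tom");
              jedis.hset("user:1", "age", "20");

              // read one field without deserializing the whole object
              System.out.println("name = " + jedis.hget("user:1", "name"));

              // read everything back as a map, and check the field count
              Map<String, String> user = jedis.hgetAll("user:1");
              System.out.println(user);
              System.out.println("fields: " + jedis.hlen("user:1"));
          }
      }
  }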

 

Set: similar to a HashSet; like the list, once the last element has been removed the structure is cleaned up and the data cannot be retrieved again (a brief example follows the commands).

  sadd key value - add an element
  sadd key value1 value2 - add multiple elements at once
  smembers key - view all elements
  sismember key value - check whether a value is in the set; returns 1 if it is
  scard key - view the set size
  spop key - pop one element
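A brief sketch of the set commands, again with the Jedis client and made-up values:

  import redis.clients.jedis.Jedis;

  public class SetDemo {
      public static void main(String[] args) {
          try (Jedis jedis = new Jedis("localhost", 6379)) {
              jedis.sadd("tags", "redis", "cache", "nosql");         // bulk add
              System.out.println(jedis.smembers("tags"));            // view all elements
              System.out.println(jedis.sismember("tags", "redis"));  // membership check
              System.out.println(jedis.scard("tags"));               // set size
              System.out.println(jedis.spop("tags"));                // pop one element
          }
      }
  }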

 

Atomic counter operations

If the value is an integer, increment operations are also available (these can be used to implement things like distributed locks or rate limiting); the value can grow up to the maximum signed 64-bit long, beyond which Redis returns an error. A small example follows the commands.

incr key - increment by 1; if the key does not exist it starts from 0, so the first incr returns 1

incrby key step - increment by the given step size
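A tiny sketch of the counter commands, with an illustrative key name and the Jedis client assumed:

  import redis.clients.jedis.Jedis;

  public class CounterDemo {
      public static void main(String[] args) {
          try (Jedis jedis = new Jedis("localhost", 6379)) {
              // incr starts from 0 when the key does not exist yet
              System.out.println(jedis.incr("page:views"));       // 1
              System.out.println(jedis.incr("page:views"));       // 2
              System.out.println(jedis.incrBy("page:views", 10)); // 12
          }
      }
  }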

 

2. Redis persistence

Although Redis operates entirely in memory, it also provides persistence.

One mechanism is RDB snapshots.

Redis saves a snapshot of the in-memory dataset to a binary file named dump.rdb.

 

 

 

Redis can be configured to automatically save the dataset whenever the condition "at least M changes to the dataset within N seconds" is met.
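For reference, this condition is configured with save <seconds> <changes> lines in redis.conf; the values below are the commonly cited defaults, shown here as an illustration:

 save 900 1      # save after 900 seconds if at least 1 key changed
 save 300 10     # save after 300 seconds if at least 10 keys changed
 save 60 10000   # save after 60 seconds if at least 10000 keys changed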

 

 

 

The other mechanism is AOF (append-only file).

Snapshots alone are not reliable: if the service fails after the last snapshot but before the next snapshot condition is reached, the data written in between cannot be recovered, because only the snapshotted version can be restored. This is what AOF is for: every write command is appended to a file, and when Redis restarts it re-executes the commands in that file to rebuild all the data in memory.

AOF is enabled with appendonly yes; it is off by default.

 

 

AOF offers three fsync strategies for flushing data to disk:

fsync on every write: very slow, but the safest

fsync once per second: you might lose up to one second of data

never fsync: let the operating system flush the data when it sees fit; fastest but least safe

The default is to fsync once per second.
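These strategies map to the appendfsync directive in redis.conf; a typical configuration looks roughly like this:

 appendonly yes
 # appendfsync always   # fsync on every write: slow but safest
 appendfsync everysec   # fsync once per second (the default)
 # appendfsync no       # let the OS decide when to flush: fastest, least safe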

 

 

 

Mixed persistence:

RDB snapshots restore data quickly but may lose a significant amount of it, so data recovery is usually done by replaying the AOF log instead; however, AOF replay is relatively slow, especially with large volumes of data. Redis 4.0 therefore introduced mixed persistence: when the AOF file is rewritten, Redis first writes an RDB snapshot of the data at that point, then appends the incremental commands generated since that snapshot, and the merged result overwrites the original appendonly.aof file. On restart, Redis first loads the RDB portion of the file and then replays the incremental commands in the AOF portion.

Enable mixed persistence with: aof-use-rdb-preamble yes

With mixed persistence, part of appendonly.aof is in RDB file format and the rest is in AOF format.

 

3. Cache eviction strategies:

When Redis's memory usage exceeds the limit of physical memory, data starts being swapped between memory and disk frequently. Performance drops sharply, and for a heavily accessed Redis instance the result is effectively the same as being unavailable.

In a production environment swapping is not acceptable, so to cap memory usage Redis provides the maxmemory configuration parameter to keep memory from growing beyond expectations.

When actual memory usage exceeds maxmemory, Redis offers several optional policies (maxmemory-policy) that let the user decide how to free up space so that reads and writes can continue to be served.

 

 maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
#
# Note: with any of the above policies, Redis will return an error on write
#       operations, when there are no suitable keys for eviction.
#
#       At the date of writing these commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction

noeviction: do not process further write requests (del and read requests can still proceed). This guarantees no data is lost, but write-related business can no longer continue. This is the default eviction policy.

volatile-lru: evict only keys that have an expiration time set, with the least recently used keys evicted first. Keys without an expiration time are never evicted, so data that must be persisted is not suddenly lost.

volatile-ttl: same as above, except that instead of LRU it uses the key's remaining time to live (TTL); keys with a smaller TTL are evicted first.

volatile-random: same as above, but the evicted key is chosen at random from the keys with an expiration time set.

allkeys-lru: unlike volatile-lru, the eviction candidates for this policy are all keys, not just keys with an expiration time set. This means keys without an expiration time can also be evicted.

allkeys-random: same as above, but the eviction target is a random key.

The volatile-xxx policies only evict keys with an expiration time set, while the allkeys-xxx policies consider all keys. If you use Redis purely as a cache, use allkeys-xxx; clients then do not need to attach an expiration time when writing to the cache. If you also rely on Redis's persistence, use a volatile-xxx policy so that keys without an expiration time, which are meant to be permanent, are never evicted by the LRU algorithm.
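As a concrete illustration (the size is arbitrary, not a recommendation), a pure-cache deployment might combine a memory cap with an allkeys policy like this:

 maxmemory 2gb
 maxmemory-policy allkeys-lru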

 


Origin www.cnblogs.com/nijunyang/p/11443001.html