Redis cache penetration, cache avalanche, and persistence

Redis

Redis is an in-memory database with persistence support. It supports five data types: string, list, hash, set, and zset (sorted set).

Redis is high performance (read speeds of up to roughly 110,000 operations per second and write speeds of up to roughly 80,000 per second).
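
As a quick illustration, here is a minimal sketch using the redis-py client that touches each of the five data types; the connection settings, key names, and values are assumptions chosen for demonstration only.

import redis

# Connect to a local Redis instance (assumed to be running on the default port).
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

r.set("greeting", "hello")                        # string
r.rpush("recent_views", "item:1", "item:2")       # list
r.hset("user:1", "name", "alice")                 # hash
r.sadd("tags", "redis", "cache")                  # set
r.zadd("leaderboard", {"alice": 100, "bob": 80})  # zset (sorted set)

print(r.get("greeting"))
print(r.lrange("recent_views", 0, -1))
print(r.hgetall("user:1"))
print(r.smembers("tags"))
print(r.zrange("leaderboard", 0, -1, withscores=True))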

Redis cache penetration

     Cache penetration happens when a query asks for data that does not exist. For example, suppose there are only 100 products in total and someone queries product 101 or 102, which do not exist. Normally we look in Redis first; since the data is not there, the request falls through to the database. If a large number of such requests arrive, they all land on the database, and because the data does not exist there either, the cache is as good as useless and the pressure on the database grows.

Solution:

      To solve this, we can cache a null value: even when the database query finds nothing, a null placeholder is written into the Redis cache, so the next identical request hits the cache instead of the database. However, too many null values take up cache memory and waste space, so we give each one a short expiration time, for example three minutes; after three minutes the null entry is cleared and the space is freed. This introduces another problem: if someone inserts record 101 within those three minutes, a user reading the cache still gets the stale null value, so the cache and the database become inconsistent. To solve that, when data is added in the background we also actively update the corresponding entry in the cache, or delete it from the cache so that the next read queries the database again and repopulates it.
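
A minimal sketch of this null-value approach with the redis-py client is shown below; the product key naming, the FAKE_DB stand-in for a real database, the placeholder string, and the 3-minute TTL are illustrative assumptions rather than anything prescribed above.

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

NULL_PLACEHOLDER = "__NULL__"   # sentinel meaning "the database has no such row"
NULL_TTL = 180                  # cache empty results for 3 minutes only
DATA_TTL = 3600                 # normal entries can live longer

FAKE_DB = {"1": "keyboard", "2": "mouse"}   # stand-in for the real product table

def get_product_from_db(product_id):
    # Stand-in for a real database query; returns None when the product does not exist.
    return FAKE_DB.get(product_id)

def get_product(product_id):
    key = f"product:{product_id}"
    cached = r.get(key)
    if cached is not None:
        # Either real data or a previously cached "not found" marker.
        return None if cached == NULL_PLACEHOLDER else cached

    value = get_product_from_db(product_id)
    if value is None:
        # Cache the miss with a short TTL so repeated queries for a missing id
        # stop hammering the database, while the memory is reclaimed quickly.
        r.set(key, NULL_PLACEHOLDER, ex=NULL_TTL)
        return None

    r.set(key, value, ex=DATA_TTL)
    return value

def add_product(product_id, value):
    # When new data is added, update the cache entry as well (or delete it),
    # so a cached null placeholder does not keep serving stale "not found" results.
    FAKE_DB[product_id] = value
    r.set(f"product:{product_id}", value, ex=DATA_TTL)

With this sketch, querying a missing product id twice hits the database only once, and adding the product afterwards immediately replaces the cached placeholder.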

 

Redis cache avalanche

    If a large number of cache entries expire at the same time within a short period (or the cache itself fails), massive cache penetration occurs: all the queries fall on the database at once, resulting in a cache avalanche.

Solution:

1. Give different keys different expiration times, so that the points in time at which cache entries expire are spread out as evenly as possible.

2. Use a double-cache policy (both points are sketched in the code after this list).

A1 is the primary cache and A2 is a copy of it; when A1 has expired, the request can still be served from A2.

A1 is given a short expiration time, A2 a long one.
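
Both points can be sketched with the redis-py client as follows; the A1:/A2: key prefixes, the base TTL, and the jitter range are assumptions chosen just for illustration.

import random
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

BASE_TTL = 600            # nominal expiry for the primary cache (A1)
JITTER = 300              # spread expirations over an extra 0-5 minutes
LONG_TTL = 24 * 3600      # the backup copy (A2) lives much longer

def cache_set(key, value):
    # Point 1: randomize the expiry so keys written together do not all expire together.
    r.set(f"A1:{key}", value, ex=BASE_TTL + random.randint(0, JITTER))
    # Point 2: keep a long-lived copy to fall back on once A1 has expired.
    r.set(f"A2:{key}", value, ex=LONG_TTL)

def cache_get(key):
    value = r.get(f"A1:{key}")
    if value is not None:
        return value
    # A1 expired: serve the long-term copy while A1 gets rebuilt
    # (for example by a background job or the next database read).
    return r.get(f"A2:{key}")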

 

Redis persistence

 

There are two persistence mechanisms, AOF and RDB; we generally use the default RDB mode because it is more efficient.

RDB persistence writes a snapshot of the in-memory data set to disk at specified intervals.

In practice Redis forks a child process, which first writes the data set to a temporary file; once the write succeeds, the temporary file replaces the previous snapshot file. The snapshot is stored in a compressed binary format, and this write-then-replace scheme effectively prevents a failed write from also losing the previously saved data.
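
A snapshot can also be triggered and checked by hand; the short sketch below simply issues the standard BGSAVE and LASTSAVE commands through the redis-py client.

import redis

r = redis.Redis(host="localhost", port=6379)

# BGSAVE asks Redis to fork a child process and write the RDB snapshot
# in the background, as described above.
r.bgsave()

# LASTSAVE returns the time of the most recent successful snapshot,
# which lets you confirm that the dump completed.
print(r.lastsave())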

 

The specific configuration is done in redis.conf: find the save directives, which specify that if at least a given number of keys change within a given number of seconds, a snapshot is persisted.

 

save 900 1
# After 900 seconds (15 minutes), if at least 1 key has changed, dump a memory snapshot.

save 300 10
# After 300 seconds (5 minutes), if at least 10 keys have changed, dump a memory snapshot.

save 60 10000
# After 60 seconds (1 minute), if at least 10000 keys have changed, dump a memory snapshot.
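
The same save rules can also be inspected or adjusted at runtime with CONFIG GET / CONFIG SET instead of editing redis.conf; a small sketch, assuming a local instance with the default rules:

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Show the currently active snapshot rules, e.g. {'save': '900 1 300 10 60 10000'}.
print(r.config_get("save"))

# Change them on the fly; this lasts only until restart unless the running
# configuration is written back to redis.conf (e.g. with CONFIG REWRITE).
r.config_set("save", "900 1 300 10 60 10000")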

 

 

 

 

 

 

 

 

Origin www.cnblogs.com/ycq-qiang/p/11139897.html