Redis: Problems and Solutions

Have you run into problems while using Redis, such as cache avalanche, cache penetration, or blocking? What causes these problems, and how should we solve them? This article walks through each of them.

Blocking

Because Redis has a single-threaded architecture, all reads and writes are handled by one main thread, so when blocking occurs it can be fatal.

Internal causes

(1) Irrational use of APIs or data structures, such as running slow commands against large collections. Recent slow operations can be inspected with the slow log:

# get the 10 most recent slow-query entries
slowlog get 10

(2) CPU saturation
(3) Persistence-related blocking: fork blocking, AOF flush-to-disk (fsync) blocking, and transparent huge pages slowing copy-on-write during writes

External causes

(1) CPU competition from other processes on the same host
(2) Memory swapping
(3) Network problems: connection refused, network latency, NIC soft interrupts

Solutions

(1) Add anomaly monitoring on the application side so problems can be traced to the specific node
(2) Monitor Redis itself (CPU, memory, slow log, latency)

Cache consistency

Update policies

Redis data usually has a life cycle: it needs to be removed or updated after a certain time, which keeps the cache within a controllable size. There are three kinds of cache update policy.

(1) Eviction by LRU / LFU / FIFO algorithm

When cache usage exceeds a preset maximum, an eviction policy is triggered to remove data.

LRU: the least recently used entry is evicted first
LFU: the least frequently used entry is evicted first
FIFO: first in, first out
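To make the eviction policies concrete, here is a minimal LRU cache sketch in Java built on LinkedHashMap's access-order mode. This is only an illustration; Redis itself uses an approximate LRU based on sampling, not an exact list.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: a LinkedHashMap in access order evicts the
// least recently used entry once capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a", so "b" becomes the eldest
        cache.put("c", "3"); // capacity exceeded: evicts "b"
        System.out.println(cache.containsKey("b")); // false
        System.out.println(cache.containsKey("a")); // true
    }
}
```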

(2) Timeout-based eviction

When data is put into the cache, an expiration time is set, and the data is deleted automatically when that time passes. This is implemented with the expire command.
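As a sketch of how timeout-based eviction works, the toy Java cache below stores an expiry timestamp per key and lazily drops expired entries on read. This mirrors the spirit of Redis's EXPIRE with lazy deletion (Redis additionally deletes expired keys periodically in the background).

```java
import java.util.HashMap;
import java.util.Map;

// Toy cache with per-key TTL: reads lazily drop expired entries.
public class TtlCache {
    private static class Entry {
        final String value;
        final long expiresAt; // epoch millis
        Entry(String value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<String, Entry> map = new HashMap<>();

    public void set(String key, String value, long ttlMillis) {
        map.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    public String get(String key) {
        Entry e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() >= e.expiresAt) {
            map.remove(key); // lazy deletion on access
            return null;
        }
        return e.value;
    }

    public static void main(String[] args) throws InterruptedException {
        TtlCache cache = new TtlCache();
        cache.set("session", "abc", 50); // expires in 50 ms
        System.out.println(cache.get("session")); // abc
        Thread.sleep(80);
        System.out.println(cache.get("session")); // null
    }
}
```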

(3) Proactive update

After data is changed (created, updated, or deleted), immediately remove the related data from Redis.
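The proactive-update flow can be sketched as a cache-aside pattern. The two maps below are hypothetical in-memory stand-ins for the real database and for Redis:

```java
import java.util.HashMap;
import java.util.Map;

// Cache-aside proactive update: write to the store first, then
// delete the cached copy so the next read repopulates fresh data.
public class CacheAside {
    static Map<String, String> storage = new HashMap<>(); // stands in for the DB
    static Map<String, String> cache = new HashMap<>();   // stands in for Redis

    static String read(String key) {
        String v = cache.get(key);
        if (v == null) {
            v = storage.get(key);
            if (v != null) cache.put(key, v); // repopulate on miss
        }
        return v;
    }

    static void update(String key, String value) {
        storage.put(key, value); // 1. update the source of truth
        cache.remove(key);       // 2. invalidate the stale cache entry
    }

    public static void main(String[] args) {
        update("user:1", "Alice");
        System.out.println(read("user:1")); // Alice (loaded from storage)
        update("user:1", "Bob");            // stale cache entry removed
        System.out.println(read("user:1")); // Bob
    }
}
```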

Cache penetration

1. Concept

Cache penetration means querying for data that does not exist in the database: every such query goes through to the database, every query comes back empty, and nothing ever gets cached. If a large number of these requests comes in, it can overwhelm the database.

2. Causes

(1) Bugs in the business code or problems in the data itself
(2) Malicious attacks or crawlers producing large numbers of empty hits

3. Solutions

(1) Caching empty objects

When the storage layer misses, an empty object is still kept in the cache layer, so subsequent accesses fetch it from the cache, protecting the backend data source.
The code is as follows:

String get(String key) {
    // fetch from the cache
    String cacheValue = cache.get(key);
    // cache miss
    if (StringUtils.isBlank(cacheValue)) {
        // fetch from storage and cache the result, even if it is empty
        String storageValue = storage.get(key);
        cache.set(key, storageValue);
        // if the stored value is empty, set an expiration time (300 seconds)
        // so the empty object does not sit in the cache forever
        if (storageValue == null) {
            cache.expire(key, 60 * 5);
        }
        return storageValue;
    } else {
        // cache hit
        return cacheValue;
    }
}

This solution fits scenarios where the hit rate is low and the data changes frequently with high real-time requirements.

(2) Bloom filter interception

Put all existing keys into a Bloom filter. Every query must first pass through the filter: if the filter says the key does not exist, the request is discarded immediately and never reaches the cache or storage layer.
This solution fits scenarios where the hit rate is low, the data is relatively fixed, and real-time requirements are low.
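For illustration, here is a toy Bloom filter in Java using a BitSet and double hashing. A production system would normally use a tested library (for example Guava's BloomFilter) or the RedisBloom module rather than this sketch:

```java
import java.util.BitSet;

// Toy Bloom filter used as a request gate: keys that were never
// added are (usually) rejected before hitting cache or storage.
// False positives are possible; false negatives are not.
public class BloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashCount;

    public BloomFilter(int size, int hashCount) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashCount = hashCount;
    }

    // Derive k indexes from two base hashes (double hashing).
    private int index(String key, int i) {
        int h1 = key.hashCode();
        int h2 = h1 >>> 16;
        return Math.floorMod(h1 + i * (h2 | 1), size);
    }

    public void add(String key) {
        for (int i = 0; i < hashCount; i++) bits.set(index(key, i));
    }

    public boolean mightContain(String key) {
        for (int i = 0; i < hashCount; i++)
            if (!bits.get(index(key, i))) return false;
        return true;
    }

    public static void main(String[] args) {
        BloomFilter filter = new BloomFilter(1 << 16, 4);
        filter.add("user:1001");
        System.out.println(filter.mightContain("user:1001")); // true
        // A key that was never added is almost certainly rejected,
        // so the request would be dropped before reaching storage.
    }
}
```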

Bottomless pit

1. Concept

To meet business needs, a large number of nodes is added to the Redis cluster, but performance gets worse instead of better.

2. Causes

On a single instance, fetching many keys in bulk takes one network round trip. In a cluster the keys are spread across nodes, so the more nodes involved, the more network IO is required, and performance degrades.

3. Solutions

(1) Serial commands: issue the get commands one key at a time
(2) Serial IO: group the keys by node, then send one batched request per node, node after node
(3) Parallel IO: group the keys by node, then send the per-node requests concurrently
(4) hash_tag: force related keys into the same hash slot so a single node can serve the whole batch
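The parallel-IO idea can be sketched as follows: group the keys by their owning node, then fetch each node's batch concurrently, so total latency is roughly one round trip instead of one per node. The node routing below is a plain hash over mock in-memory nodes for illustration; real Redis Cluster assigns keys to 16384 slots via CRC16.

```java
import java.util.*;
import java.util.concurrent.*;

// Sketch of parallel IO for a clustered bulk get.
public class ParallelMget {
    static List<Map<String, String>> nodes = new ArrayList<>();

    static int nodeFor(String key) {
        return Math.floorMod(key.hashCode(), nodes.size());
    }

    static Map<String, String> mget(List<String> keys) throws Exception {
        // 1. group the keys by owning node
        Map<Integer, List<String>> byNode = new HashMap<>();
        for (String k : keys)
            byNode.computeIfAbsent(nodeFor(k), n -> new ArrayList<>()).add(k);

        // 2. one batched fetch per node, run in parallel
        ExecutorService pool = Executors.newFixedThreadPool(byNode.size());
        List<Future<Map<String, String>>> futures = new ArrayList<>();
        for (Map.Entry<Integer, List<String>> e : byNode.entrySet()) {
            futures.add(pool.submit(() -> {
                Map<String, String> part = new HashMap<>();
                for (String k : e.getValue()) part.put(k, nodes.get(e.getKey()).get(k));
                return part;
            }));
        }

        // 3. merge the per-node results
        Map<String, String> result = new HashMap<>();
        for (Future<Map<String, String>> f : futures) result.putAll(f.get());
        pool.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        nodes.add(new HashMap<>());
        nodes.add(new HashMap<>());
        for (String k : List.of("a", "b", "c")) nodes.get(nodeFor(k)).put(k, k.toUpperCase());
        System.out.println(mget(List.of("a", "b", "c"))); // contains a=A, b=B, c=C
    }
}
```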

Cache avalanche

1. Concept

Cache avalanche means that many cache entries are given the same expiration time, so they all expire at the same moment, every query falls through to the database at once, and the database can be brought down.

2. Solutions

(1) Give different keys different expiration times, so that cache misses are spread as evenly as possible over time.
(2) If the cache is deployed as a distributed cluster, spread the hot data evenly across the cache nodes.
(3) Set hot data to never expire.
(4) Add a second-level cache.
(5) After a cache miss, use locks or queues to limit the number of threads that read the database and write the cache. For example, allow only one thread per key to query the data and write the cache, while the other threads wait.
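Solution (1), spreading out the expiration points, is often done by adding random jitter to a base TTL. A minimal sketch (the helper name is made up for illustration):

```java
import java.util.concurrent.ThreadLocalRandom;

// Add random jitter to a base TTL so a batch of keys cached at the
// same moment does not also expire at the same moment.
public class TtlJitter {
    // Returns the base TTL plus up to maxJitterSeconds extra seconds.
    static long ttlWithJitter(long baseSeconds, long maxJitterSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextLong(maxJitterSeconds + 1);
    }

    public static void main(String[] args) {
        // e.g. cache.expire(key, ttlWithJitter(3600, 300))
        for (int i = 0; i < 3; i++) {
            System.out.println(ttlWithJitter(3600, 300)); // somewhere in [3600, 3900]
        }
    }
}
```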


Origin: blog.csdn.net/Code_shadow/article/details/99734212