Reflections on the Redis cache

As traffic grows and systems become more complex, response performance becomes a central concern, and using a cache becomes a priority. Redis, as the leading cache middleware, is a project interviewers almost always ask about. This article shares a few common Redis interview questions:

 

Cache avalanche

 

1.1 What is a cache avalanche?

 

If the cache goes down, all of our requests go straight to the database.

We all know that Redis cannot cache all of our data (memory is expensive and limited), so cached data needs an expiration time, and Redis removes expired keys with two strategies: lazy deletion and periodic deletion.

 

If all the cached data is given the same expiration time, and Redis happens to delete all of it at the same moment, then during that window every one of these cache entries is gone and all requests go to the database.

 

This is the cache avalanche: the cache fails en masse (or Redis goes down), and all requests go to the database.

 

If a cache avalanche happens, it is likely to bring down our database and paralyze the entire service!

 

1.2 How do we solve the cache avalanche?

 

When writing to the cache, add a random value to the expiration time. This greatly reduces the chance of the cached data all expiring at the same moment.
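A minimal sketch of the randomized TTL, assuming the Jedis client; the class name, base TTL, and jitter range are illustrative:

```java
import java.util.concurrent.ThreadLocalRandom;
import redis.clients.jedis.Jedis;

public class CacheWriter {

    private static final int BASE_TTL_SECONDS = 600;    // base expiration: 10 minutes
    private static final int MAX_JITTER_SECONDS = 300;  // up to 5 extra minutes of jitter

    private final Jedis jedis = new Jedis("localhost", 6379);

    /** Write a value with a randomized TTL so keys do not all expire at the same moment. */
    public void setWithJitter(String key, String value) {
        int jitter = ThreadLocalRandom.current().nextInt(MAX_JITTER_SECONDS);
        jedis.setex(key, BASE_TTL_SECONDS + jitter, value);
    }
}
```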

 

For "Redis hung up, go all database requests" This situation, we can have the following ideas:

 

Before the incident: make Redis highly available (master-slave replication plus Sentinel, or Redis Cluster) so that Redis going down is as unlikely as possible.

 

During the incident: if Redis really does go down, we can fall back on a local cache (ehcache) plus rate limiting (hystrix) to keep the database from being overwhelmed, and at least keep the service partially working (a simplified sketch of this degraded read path follows after these three points).

 

After the incident: with Redis persistence enabled, a restart automatically reloads data from disk, so the cache recovers quickly.
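A simplified sketch of the "during the incident" idea. Here ehcache and hystrix are stood in for by a plain ConcurrentHashMap and a Semaphore, purely to show the shape of the degraded read path; a production setup would use the real libraries, and all names and limits below are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Semaphore;

public class DegradedReadService {

    // Simplified stand-ins: a real setup would use ehcache for the local cache
    // and Hystrix (or similar) for rate limiting and circuit breaking.
    private final Map<String, String> localCache = new ConcurrentHashMap<>();
    private final Semaphore dbPermits = new Semaphore(50);  // at most 50 concurrent DB queries

    public String read(String key) {
        // 1. Redis is down, so try the local cache first.
        String cached = localCache.get(key);
        if (cached != null) {
            return cached;
        }
        // 2. Only a limited number of requests may fall through to the database.
        if (!dbPermits.tryAcquire()) {
            return null;  // degrade: fail fast instead of killing the database
        }
        try {
            String value = queryDatabase(key);
            if (value != null) {
                localCache.put(key, value);
            }
            return value;
        } finally {
            dbPermits.release();
        }
    }

    private String queryDatabase(String key) {
        return "value-for-" + key;  // placeholder for the real DB access
    }
}
```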

 

Cache penetration

 

2.1 What is cache penetration?

 

Cache penetration refers to querying data that does not exist at all. The cache misses, and since nothing is found in the database, nothing is written back to the cache for fault-tolerance reasons; as a result, every request for this non-existent data has to go to the database, and the cache loses its purpose.

This is cache penetration:

 

A large number of requests miss the cache, so those requests go to the database.

If cache penetration happens, it may bring down our database and paralyze the entire service!

 

2.2 How do we solve cache penetration?

 

There are two options for solving cache penetration:

 

Since the requested parameters are illegal (every request asks for something that does not exist), we can use a Bloom filter (BloomFilter) or a compressed filter to intercept them up front: illegal requests never reach the database layer!

 

When we cannot find the data in the database, we also write this empty object into the cache. The next time the same request comes in, it can be served from the cache.

 

In this case we usually give the empty object a fairly short expiration time. A sketch combining both options follows below.
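A sketch of a read path that combines both options, assuming the Jedis client and Guava's BloomFilter; the sentinel value, TTLs, and sizing parameters are illustrative:

```java
import java.nio.charset.StandardCharsets;
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import redis.clients.jedis.Jedis;

public class PenetrationSafeReader {

    private static final String NULL_MARKER = "__NULL__";  // sentinel for "not in the database"
    private static final int NULL_TTL_SECONDS = 60;        // keep empty results only briefly
    private static final int NORMAL_TTL_SECONDS = 600;

    // Holds all legal keys (e.g. existing IDs); populated via register() at startup.
    private final BloomFilter<String> legalKeys =
            BloomFilter.create(Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000_000, 0.01);

    private final Jedis jedis = new Jedis("localhost", 6379);

    /** Called whenever a legal key is created, so the filter knows it may exist. */
    public void register(String key) {
        legalKeys.put(key);
    }

    public String read(String key) {
        // 1. Reject keys that cannot possibly exist before touching Redis or the DB.
        if (!legalKeys.mightContain(key)) {
            return null;
        }
        // 2. Normal cache lookup; the cached null marker also counts as a hit.
        String cached = jedis.get(key);
        if (cached != null) {
            return NULL_MARKER.equals(cached) ? null : cached;
        }
        // 3. Cache miss: query the DB and cache even an empty result, with a short TTL.
        String value = queryDatabase(key);
        if (value == null) {
            jedis.setex(key, NULL_TTL_SECONDS, NULL_MARKER);
            return null;
        }
        jedis.setex(key, NORMAL_TTL_SECONDS, value);
        return value;
    }

    private String queryDatabase(String key) {
        return null;  // placeholder for the real DB access
    }
}
```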

 

Cache and database double-write consistency

 

3.1 For read operations, the flow is as follows

 

 

If our data is in the cache, we read it directly from the cache.

If the cache does not have the data we want, we first query the database, then write the data retrieved from the database into the cache, and finally return the data to the caller.
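A minimal sketch of this read flow, again assuming Jedis; the TTL and helper names are illustrative:

```java
import redis.clients.jedis.Jedis;

public class ReadPathExample {

    private static final int TTL_SECONDS = 600;
    private final Jedis jedis = new Jedis("localhost", 6379);

    /** Read flow described above: cache first, then database, then backfill the cache. */
    public String read(String key) {
        String cached = jedis.get(key);
        if (cached != null) {
            return cached;                        // cache hit: return directly
        }
        String value = queryDatabase(key);        // cache miss: query the database
        if (value != null) {
            jedis.setex(key, TTL_SECONDS, value); // write the DB result back into the cache
        }
        return value;                             // finally return the data to the caller
    }

    private String queryDatabase(String key) {
        return "value-for-" + key;                // placeholder for the real DB access
    }
}
```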

 

 

3.2 What is the cache/database double-write consistency problem?

 

If we only ever read, the cached data and the database data stay consistent. But what about when we need to update? In all sorts of situations the database and the cache can easily end up inconsistent.

 

Inconsistent here means: the data in the database does not match the data in the cache.

In theory, as long as we set an expiration time on the key, the cache and the database will eventually be consistent: once the cached data expires it is deleted, so a subsequent read misses the cache, queries the database, and writes the database result back into the cache.

 

Besides setting an expiration time, we need to take further measures to avoid, as far as possible, the database and the cache becoming inconsistent.
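One widely used measure, not spelled out above, is the Cache-Aside write path: update the database first, then delete the cached key so the next read repopulates it from the database. A minimal sketch, assuming Jedis:

```java
import redis.clients.jedis.Jedis;

public class WritePathExample {

    private final Jedis jedis = new Jedis("localhost", 6379);

    /** Cache-Aside write: update the database, then invalidate the cache entry. */
    public void update(String key, String newValue) {
        updateDatabase(key, newValue);  // 1. persist the new value
        jedis.del(key);                 // 2. delete the stale cache entry; the next read repopulates it
    }

    private void updateDatabase(String key, String newValue) {
        // placeholder for the real DB update
    }
}
```

Deleting rather than rewriting the cache keeps the inconsistency window short and avoids writing back a value that a concurrent update may have already made stale.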

 

Finally

 

This article walked through how to handle the cache avalanche, cache penetration, and cache/database double-write consistency. Beyond these there are many other interview points worth our attention; the recommended reading lists a few of them.

I hope this article helps you.


Origin blog.csdn.net/suifeng629/article/details/94474369