Redis cache avalanche

Redis cache avalanche and cache penetration can seem like confusingly similar concepts at first glance.

A cache avalanche happens when we set the same expiration time on a large batch of cached keys, so they all become invalid at the same moment. Every request then falls through to the back-end database at once; the sudden spike in load drives up database IO and CPU until it is overwhelmed, resulting in an avalanche. If that is hard to picture, close your eyes and imagine a snow-covered mountainside collapsing all at once. Still can't picture it? Then never mind.
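The failure mode above can be sketched with a tiny in-process cache where every key gets the same absolute expiry (the key names and TTLs here are made up for illustration):

```python
# Minimal sketch of an avalanche: every key cached at the same time with
# the SAME TTL, so they all miss together and every request hits the DB.
class SimpleCache:
    def __init__(self):
        self.store = {}  # key -> (value, expires_at)

    def set(self, key, value, ttl, now):
        self.store[key] = (value, now + ttl)

    def get(self, key, now):
        entry = self.store.get(key)
        if entry is None or now >= entry[1]:
            return None  # cache miss -> caller falls through to the database
        return entry[0]

cache = SimpleCache()
# 1000 keys cached at t=0 with an identical 60-second TTL.
for i in range(1000):
    cache.set(f"item:{i}", i, ttl=60, now=0)

# At t=30 everything is still cached; at t=61 all 1000 keys have expired,
# so all 1000 requests fall through to the database at the same moment.
hits_before = sum(1 for i in range(1000) if cache.get(f"item:{i}", now=30) is None)
db_hits = sum(1 for i in range(1000) if cache.get(f"item:{i}", now=61) is None)
```

The point of the countermeasures below is to break exactly this synchrony.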


In practice the industry has many solutions, and none of them is perfect on its own; which one fits depends on your actual concurrency. The common options are:

1. Locking and serializing the rebuild. This relieves the database, but from the user's point of view making everyone wait in line is not great.
2. A mutex, for example a Redis distributed lock built on SETNX: only the caller that wins the lock loads from the database; the rest wait and reuse the result.
3. Rate limiting: for example a sliding window to throttle traffic at peak moments, or a token bucket, leaky bucket, and so on.
4. A secondary (double) cache policy. For example, keep the original cache B1 with a short expiration time and a copy B2 with a long one; when B1 misses, serve the request from B2 and rebuild B1 in the background. This is a bit like a master/slave pair of caches.
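The mutex option (2) can be sketched as follows. In a real deployment the lock would be a Redis distributed lock, e.g. `SET lock:key token NX EX seconds`; here a process-local `threading.Lock` and plain dict stand in for Redis, and `load_from_db` is a hypothetical stand-in for the expensive query:

```python
import threading

cache = {}
rebuild_lock = threading.Lock()
db_loads = 0  # counts how many times we actually hit the "database"

def load_from_db(key):
    # Pretend this is an expensive database query.
    global db_loads
    db_loads += 1
    return f"value-of-{key}"

def get_with_mutex(key):
    value = cache.get(key)
    if value is not None:
        return value
    with rebuild_lock:            # one caller wins, the rest wait here
        value = cache.get(key)    # re-check: the winner may have filled it
        if value is None:
            value = load_from_db(key)
            cache[key] = value
    return value

# 20 concurrent requests for the same hot key -> only ONE database load.
threads = [threading.Thread(target=get_with_mutex, args=("hot-key",))
           for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The double-check inside the lock is what prevents every waiting thread from reloading the key after the winner releases the lock.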

Whatever the technique, the ultimate goal is the same: spread the expiration times out so that keys do not all become invalid at the same moment.
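The simplest way to spread expirations out is to add a random jitter to every key's base TTL; a minimal sketch (the base TTL and jitter range are arbitrary example values):

```python
import random

def jittered_ttl(base_ttl, jitter=300):
    """Return base_ttl plus a random 0..jitter extra seconds, so keys
    cached at the same moment do not all expire at the same moment."""
    return base_ttl + random.randint(0, jitter)

# 1000 keys cached together now expire spread across a 5-minute window
# instead of all at the exact same second.
ttls = [jittered_ttl(3600) for _ in range(1000)]
```

With redis-py this would be used as something like `r.set(key, value, ex=jittered_ttl(3600))`.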


Reproduced from: https://www.jianshu.com/p/11c69b455d4a


Origin blog.csdn.net/weixin_33698043/article/details/91310529