Redis cache design and performance optimization

1. Multi-level cache architecture

(Figure: multi-level cache architecture — image not included.)

2. Cache design

1. Cache avalanche

(1) Description:

A cache avalanche occurs when a large amount of cached data reaches its expiration time at once, so the resulting flood of queries puts excessive pressure on the database and may even bring it down. It differs from cache breakdown: breakdown is many concurrent requests for the same piece of data, while an avalanche means many different keys have expired at the same time, so many lookups miss the cache and all fall through to the database.

Because the cache layer absorbs a large share of requests, it effectively shields the storage layer. But if the cache layer cannot serve requests for some reason (for example, concurrency so high the cache cannot sustain it, or a poor cache design such as a flood of requests hitting a bigkey, which sharply reduces the concurrency the cache can support), a large number of requests will reach the storage layer, its call volume will spike, and the storage layer can cascade into failure as well.


(2) Solution:

  • Beforehand: make Redis highly available (master-slave replication + Sentinel, or Redis Cluster) to avoid a total crash.
  • During the event: use a local cache such as Ehcache plus Hystrix rate limiting and degradation to keep MySQL from being overwhelmed.
  • After the event: enable Redis persistence so that, after a restart, data is automatically loaded from disk and the cache is quickly rebuilt.
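The "during the event" mitigation above can be sketched as follows. This is a minimal illustration, not a real Ehcache/Hystrix setup: the dicts simulate Redis, the local cache, and MySQL, and a bounded semaphore stands in for the rate limiter. All names are illustrative.

```python
import threading

class DegradingCache:
    """Sketch: when Redis is unavailable, fall back to a local cache and
    rate-limit access to the database instead of letting every request
    through (simulating Ehcache + Hystrix current limiting/degradation)."""

    def __init__(self, max_db_concurrency=2):
        self.redis = {}                    # simulated (crashed/empty) Redis layer
        self.local = {}                    # simulated local Ehcache
        self.db = {"user:1": "alice"}      # simulated MySQL
        self.db_sem = threading.BoundedSemaphore(max_db_concurrency)

    def get(self, key):
        if key in self.redis:              # normal path: Redis hit
            return self.redis[key]
        if key in self.local:              # fallback: local cache hit
            return self.local[key]
        # Current limiting: return a degraded (empty) response rather
        # than pile more load onto the database.
        if not self.db_sem.acquire(blocking=False):
            return None
        try:
            value = self.db.get(key)
            if value is not None:
                self.local[key] = value    # repopulate the local cache
            return value
        finally:
            self.db_sem.release()

cache = DegradingCache()
print(cache.get("user:1"))  # first call falls through to the DB: alice
print(cache.get("user:1"))  # second call is served from the local cache: alice
```

The semaphore here is deliberately crude; Hystrix additionally tracks failures and opens a circuit breaker, but the core idea is the same: bound how many requests may touch the database at once.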

2. Cache penetration

(1) Description:

Cache penetration refers to querying data that does not exist at all, so neither the cache layer nor the storage layer hits. Normally, data that cannot be found in the storage layer is not written to the cache layer.

As a result, every request for the non-existent data goes to the storage layer, which defeats the purpose of the cache as protection for back-end storage.

There are two basic reasons for cache penetration:

  • First, a bug in the business code or bad data.
  • Second, malicious attacks or crawlers that generate a large number of requests for non-existent keys.

(2) Solution:

Add validation at the interface layer: user authentication, basic sanity checks on the id, and direct rejection of requests with id <= 0.
When data misses both the cache and the database, write the key into the cache anyway with a null value (key-null) and give it a short TTL, such as 30 seconds (setting it too long would keep returning null after the real data appears). This prevents an attacker from repeatedly hammering the database with the same non-existent id.
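A minimal sketch of both measures, using plain dicts to simulate Redis and the storage layer (the function names, the 30-second null TTL from the text, and the sentinel object are all illustrative):

```python
import time

SENTINEL_NULL = object()   # marker meaning "we checked; this key does not exist"
NULL_TTL = 30              # seconds; keep the null entry short-lived, per the text

cache = {}                 # simulated Redis: key -> (value, expires_at)
db = {"1001": {"name": "alice"}}   # simulated storage layer

def get_user(user_id: str):
    # Interface-layer validation: reject obviously invalid ids up front.
    if not user_id.isdigit() or int(user_id) <= 0:
        return None
    entry = cache.get(user_id)
    if entry is not None:
        value, expires_at = entry
        if time.time() < expires_at:
            return None if value is SENTINEL_NULL else value
        del cache[user_id]             # entry expired; fall through
    value = db.get(user_id)
    if value is None:
        # Cache the miss as key-null with a short TTL so repeated
        # queries for the same non-existent id stop reaching the database.
        cache[user_id] = (SENTINEL_NULL, time.time() + NULL_TTL)
        return None
    cache[user_id] = (value, time.time() + 600)
    return value
```

With a real Redis client the null entry would typically be written with `SETEX key 30 ""` (or `setex` in redis-py) so the server expires it automatically.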

3. Cache breakdown

(1) Description:

Cache breakdown refers to data that is not in the cache but does exist in the database (usually because its cache entry has just expired). If many concurrent requests ask for that data at the same moment, they all miss the cache and all go to the database, so the database load spikes instantly.

Put another way, cache breakdown happens when a single very hot key, under concentrated, highly concurrent access, suddenly becomes invalid: the flood of requests breaks through the cache and hits the database directly, as if a hole had been punched in a barrier.

(2) Solution:

  • If the cached data is rarely updated, consider setting the hotspot keys to never expire.
  • If the cached data is updated infrequently and a cache rebuild is fast, use a distributed mutex (built on middleware such as Redis or ZooKeeper) or a local mutex, so that only one (or a few) requests hit the database and rebuild the cache; the remaining threads read the new cache entry once the lock is released.
  • If the cached data is updated frequently or a rebuild is slow, use a timer thread to proactively rebuild the cache before it expires, or extend the cache's expiration time, so that every request finds a valid cache entry.
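The mutex approach in the second bullet can be sketched with a local lock (a distributed lock via Redis `SET key value NX PX ttl` or a ZooKeeper ephemeral node would play the same role across processes). The dict simulates Redis and `load_from_db` simulates a slow rebuild; both names are illustrative.

```python
import threading
import time

cache = {}                     # simulated Redis: key -> value
db_calls = 0                   # counts how many times the database was hit
lock = threading.Lock()        # local mutex standing in for a distributed one

def load_from_db(key):
    global db_calls
    db_calls += 1
    time.sleep(0.05)           # simulated slow cache rebuild
    return f"value-of-{key}"

def get(key):
    value = cache.get(key)
    if value is not None:      # fast path: cache hit
        return value
    with lock:                 # only one thread may rebuild the cache
        # Double-check: another thread may have rebuilt it while we waited.
        value = cache.get(key)
        if value is None:
            value = load_from_db(key)
            cache[key] = value
    return value

# Ten concurrent requests for the same expired hot key:
threads = [threading.Thread(target=get, args=("hot-key",)) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(db_calls)  # 1 — only the first thread reached the database
```

The double-check inside the lock is what keeps the remaining nine threads from each reloading the key after the first thread finishes.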


Origin: blog.csdn.net/weixin_42201180/article/details/112982893