Redis Study Notes 8: Cache Design

Table of contents:

  • Cache update strategy
  • Cache granularity
  • Cache penetration
  • Cache avalanche
  • Cache breakdown

Cache update strategy:

1. Memory-overflow eviction strategy

When Redis's memory usage exceeds maxmemory, an eviction strategy is triggered; the specific strategy is controlled by the maxmemory-policy parameter.

There are six eviction strategies:

) noeviction: the default policy. It deletes no data; when memory is full, client writes return an OOM (out of memory) error.

) volatile-lru: evicts keys that have an expiration time set, according to the LRU algorithm; if no such keys exist, falls back to noeviction.

) volatile-random: randomly evicts keys that have an expiration time set.

) allkeys-lru: evicts from all keys according to the LRU algorithm.

) allkeys-random: randomly evicts from all keys.

) volatile-ttl: evicts keys with the soonest expiration time (smallest TTL); if no such keys exist, falls back to noeviction.

noeviction is not recommended; allkeys-lru is the usual recommendation.
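As a minimal sketch (assuming a local Redis instance and the redis-py client), the memory cap and eviction policy can be set in redis.conf or changed at runtime:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Equivalent redis.conf lines: "maxmemory 1gb" and "maxmemory-policy allkeys-lru"
r.config_set("maxmemory", "1gb")
r.config_set("maxmemory-policy", "allkeys-lru")

print(r.config_get("maxmemory-policy"))  # {'maxmemory-policy': 'allkeys-lru'}
```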

2. Deleting expired keys

) Lazy deletion: Redis keeps a dedicated expiration dictionary that stores each key's expiration time. Whenever a client accesses a key, Redis checks this dictionary; if the key has already expired, it is deleted and null is returned. On its own this strategy would let long-unused expired keys keep occupying memory, which is why there is also a periodic deletion strategy.

) Periodic deletion: Redis internally maintains a job that runs 10 times per second by default. Depending on the proportion of expired keys it finds, it reclaims keys in either a fast or a slow mode.
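A quick illustration of expiration from the client's point of view (a sketch assuming redis-py against a local instance); whichever mechanism removes the key, it is gone when accessed after its TTL:

```python
import time
import redis

r = redis.Redis(host="localhost", port=6379)

r.set("session:42", "data", ex=2)  # expire after 2 seconds
print(r.ttl("session:42"))         # remaining TTL in seconds, e.g. 2
time.sleep(3)
print(r.get("session:42"))         # None: deleted lazily or by the periodic job
```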

3. Application-side updates

) On read: return the data if it is in the cache; on a miss, read from the DB and then write the result back to the cache.

) Delete the cache first, then update the DB (not recommended): a concurrent read can repopulate the cache with the old value, producing dirty data.

) Update the DB first, then delete the cache (recommended).
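A minimal cache-aside sketch along these lines, assuming redis-py; query_db and update_db are hypothetical stand-ins for the persistence layer:

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379)
CACHE_TTL = 600  # seconds; tune for your workload

def query_db(user_id):           # hypothetical persistence-layer read
    return {"id": user_id}

def update_db(user_id, fields):  # hypothetical persistence-layer write
    pass

def read_user(user_id):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:                      # cache hit
        return json.loads(cached)
    user = query_db(user_id)                    # miss: read from the DB
    r.set(key, json.dumps(user), ex=CACHE_TTL)  # then write back to the cache
    return user

def write_user(user_id, fields):
    update_db(user_id, fields)   # 1) update the DB first
    r.delete(f"user:{user_id}")  # 2) then delete the cache
```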

Cache granularity:

Should we cache the entire object, or only the data we actually need? Let's compare the two approaches:

1. Caching the entire object

Advantages: better generality and reuse.

Disadvantages: wastes memory; higher network traffic, which in extreme cases can congest the network; larger CPU overhead for serialization and deserialization.

2. Caching only the data you need

Its advantages and disadvantages are the opposite of caching the entire object. However, when a new field later needs to be cached, you must modify the code and also refresh the existing cache entries.
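To make the trade-off concrete, a sketch of both granularities (assuming redis-py; the user dict is illustrative):

```python
import json
import redis

r = redis.Redis(host="localhost", port=6379)
user = {"id": 1, "name": "alice", "email": "a@example.com", "bio": "..."}

# 1) Entire object: one serialized blob; generic, but heavier on memory and network.
r.set("user:1", json.dumps(user), ex=600)

# 2) Only the fields this page needs, stored as a hash.
r.hset("user:1:summary", mapping={"id": 1, "name": "alice"})
r.expire("user:1:summary", 600)
print(r.hmget("user:1:summary", "id", "name"))  # [b'1', b'alice']
```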

Cache penetration:

Cache penetration means requests for a key that does not exist at all: neither the cache layer nor the persistence layer gets a hit, so every request falls straight through to the persistence layer and increases its load.

Causes of cache penetration:

1. Bugs in the application's own business logic

2. Malicious attacks, web crawlers, etc.

Solutions:

1. Caching empty objects

Caching empty objects raises two problems:

) Extra memory use: the cache layer holds more keys and therefore needs more memory; this can be mitigated by setting an expiration time on the empty objects.

) Data inconsistency: if the data is written to the persistence layer while the cached empty object is still valid, the cache is stale; this can be solved by clearing the empty object via MQ or some other mechanism.
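A sketch of empty-object caching with a short TTL, assuming redis-py; the sentinel string and query_db helper are hypothetical:

```python
import redis

r = redis.Redis(host="localhost", port=6379)
NULL_SENTINEL = "__null__"  # marks a key known to be absent from the DB
NULL_TTL = 60               # a short TTL limits both extra memory and staleness

def query_db(key):  # hypothetical DB lookup; returns None when the row is absent
    return None

def get_with_null_cache(key):
    cached = r.get(key)
    if cached is not None:
        return None if cached == NULL_SENTINEL.encode() else cached
    value = query_db(key)
    if value is None:
        r.set(key, NULL_SENTINEL, ex=NULL_TTL)  # cache the miss, briefly
        return None
    r.set(key, value, ex=600)
    return value
```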

2. Bloom filter

A Bloom filter is essentially a very long binary vector plus a set of random mapping (hash) functions. It can be used to test whether an element belongs to a set. Its advantage is that its space efficiency and query time far exceed those of general-purpose algorithms; its disadvantages are a certain false-positive rate and difficulty removing elements. Each request first goes through the Bloom filter to check whether the key can exist; only then does it proceed to the cache layer and, if needed, the storage layer.

A Redis bitmap can be used to implement the Bloom filter. This approach only suits scenarios where the hit rate is low, the data set is relatively fixed, and real-time requirements are modest, and the code is harder to maintain; its advantage is the small memory footprint of the cache.
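A minimal Bloom filter over a Redis bitmap using SETBIT/GETBIT — a sketch only: the sizes are illustrative, and k salted MD5 hashes stand in for a proper hash-function choice:

```python
import hashlib
import redis

r = redis.Redis(host="localhost", port=6379)
BITMAP_KEY = "bloom:users"
NUM_BITS = 2 ** 24  # size the bitmap from expected item count and error rate
NUM_HASHES = 5

def _offsets(item):
    # k salted hashes of the item, each mapped to a bit position
    for i in range(NUM_HASHES):
        digest = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
        yield int(digest, 16) % NUM_BITS

def bloom_add(item):
    for off in _offsets(item):
        r.setbit(BITMAP_KEY, off, 1)

def bloom_might_contain(item):
    # False: definitely absent. True: probably present (false positives possible).
    return all(r.getbit(BITMAP_KEY, off) for off in _offsets(item))
```

Requests whose key fails bloom_might_contain can be rejected before touching the cache or the DB.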

Cache avalanche:

As we all know, the cache layer carries a large share of requests and effectively protects the DB. But the cache layer may go down in certain situations, or a large number of cache entries may expire at the same time; either way, a flood of requests hits the DB directly and the system avalanches.

Solutions:

1. Use a highly available deployment, such as Redis Sentinel or Redis Cluster

2. Use multi-level caching, e.g. an in-process cache as the first level and Redis as the second level

3. Randomize cache expiration times so that many entries do not expire at the same moment (see the sketch below)
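A sketch of randomized ("jittered") expiration, assuming redis-py:

```python
import random
import redis

r = redis.Redis(host="localhost", port=6379)

def set_with_jitter(key, value, base_ttl=600, jitter=120):
    # Spread expirations across [base_ttl, base_ttl + jitter] seconds
    # so entries written together do not all expire together.
    r.set(key, value, ex=base_ttl + random.randint(0, jitter))
```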

Cache breakdown:

Cache breakdown happens when a hot key expires just as a large number of concurrent requests for that key arrive; all of those requests then fall through to the DB layer.

Causes of cache breakdown:

1. The cache contains a hot key

2. The cache cannot be rebuilt quickly, e.g. it requires a complex calculation or a complicated SQL query

Solutions:

1. Distributed mutex

Allow only one thread to rebuild the cache; the other threads wait for the rebuilding thread to finish and then read the data from the cache.
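A sketch of this mutex using SET with NX and EX, assuming redis-py; rebuild_from_db is a hypothetical slow loader, and a production version would release the lock atomically with a Lua script:

```python
import time
import uuid
import redis

r = redis.Redis(host="localhost", port=6379)

def rebuild_from_db(key):  # hypothetical: complex calculation or slow SQL
    return "fresh-value"

def get_with_mutex(key, lock_ttl=10, wait=0.05):
    while True:
        value = r.get(key)
        if value is not None:
            return value
        token = str(uuid.uuid4())
        # Only one client wins the lock and rebuilds the cache entry
        if r.set(f"lock:{key}", token, nx=True, ex=lock_ttl):
            try:
                value = rebuild_from_db(key)
                r.set(key, value, ex=600)
                return value
            finally:
                # Naive check-and-delete of our own lock; a Lua script
                # would make the check and delete atomic.
                if r.get(f"lock:{key}") == token.encode():
                    r.delete(f"lock:{key}")
        time.sleep(wait)  # losers wait briefly, then re-check the cache
```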

2. Never expire

From the cache's point of view: do not set an expiration time on the hot key.

From the application's point of view: give the hot key a logical expiration time; once that logical time has passed, use a separate thread to rebuild its cache entry.
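A sketch of logical expiration, assuming redis-py: the Redis key itself carries no TTL, and a timestamp stored next to the value tells readers when to refresh it in the background (rebuild_from_db is hypothetical):

```python
import json
import threading
import time
import redis

r = redis.Redis(host="localhost", port=6379)

def rebuild_from_db(key):  # hypothetical slow rebuild
    return "fresh-value"

def set_logical(key, value, logical_ttl=600):
    # No Redis TTL: physically the key never expires
    entry = {"value": value, "expire_at": time.time() + logical_ttl}
    r.set(key, json.dumps(entry))

def get_logical(key):
    raw = r.get(key)
    if raw is None:
        return None  # never cached yet; fall back to the DB path
    entry = json.loads(raw)
    if time.time() > entry["expire_at"]:
        # Stale: serve the old value now, refresh in a background thread
        threading.Thread(
            target=lambda: set_logical(key, rebuild_from_db(key))
        ).start()
    return entry["value"]
```

In practice this is combined with the mutex above so that only one thread actually performs the rebuild.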

 
