Cache avalanche and cache penetration

A single Redis node supports transactions; a Redis cluster does not.
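For reference, a minimal sketch of a single-node transaction using the Jedis client (key names and values here are illustrative, not from the original post): commands queued between MULTI and EXEC are executed as one atomic batch.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

public class TxExample {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            Transaction tx = jedis.multi(); // MULTI: start queuing commands
            tx.set("stock:1001", "99");     // queued, not executed yet
            tx.incr("order:count");         // queued, not executed yet
            tx.exec();                      // EXEC: run the queued commands atomically
        }
    }
}
```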

Cache avalanche

A large number of keys are cached with the same expiration time, so many of them expire at the same moment. All requests are then forwarded to the database, and the sudden spike in load can cause the database to collapse.

Solutions:

  1. Use a two-level cache (Redis + Ehcache), so the local cache can still absorb traffic when Redis entries expire.
  2. Add a random offset to each Redis key's expiration time, so the expirations are spread out and keys do not all fail at the same moment (see the sketch after this list).
  3. Use message-queue middleware to buffer requests and smooth out the concurrent load on the database.
  4. Use a distributed lock (or a local lock) so the database is not hit by a large number of concurrent read/write threads; this eases database pressure but reduces system throughput (suitable for small projects).
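A minimal sketch of option 2, assuming a Jedis client; the key names and TTL values are illustrative. The only idea is to add a random jitter to every TTL so that keys written together do not expire together.

```java
import redis.clients.jedis.Jedis;
import java.util.concurrent.ThreadLocalRandom;

public class JitteredCache {
    private static final int BASE_TTL_SECONDS = 600;   // common base expiration
    private static final int MAX_JITTER_SECONDS = 300; // random spread added per key

    private final Jedis jedis = new Jedis("localhost", 6379);

    // Cache a value with a randomized TTL so keys written at the same
    // time do not all expire at the same instant.
    public void put(String key, String value) {
        int jitter = ThreadLocalRandom.current().nextInt(MAX_JITTER_SECONDS);
        jedis.setex(key, BASE_TTL_SECONDS + jitter, value);
    }
}
```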

Cache penetration

A large number of concurrent requests ask for data that does not exist. The data is in neither the cache nor the database, so every request falls through to a database query, and the added pressure can bring the database down.

Solutions:

  1. Validate the parameters of every request at the API gateway and reject illegal requests.
  2. When a database query returns no result, cache the empty result with a short expiration time (see the sketch after this list).
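A minimal sketch of option 2, assuming a Jedis client; loadFromDb() is a hypothetical placeholder for the real database query, and the marker and TTL values are illustrative. A missing key is cached as an empty marker with a short TTL so repeated requests for it stop reaching the database.

```java
import redis.clients.jedis.Jedis;

public class PenetrationGuard {
    private static final String EMPTY_MARKER = "";    // sentinel meaning "not in DB"
    private static final int EMPTY_TTL_SECONDS = 60;  // short TTL for empty results
    private static final int NORMAL_TTL_SECONDS = 600;

    private final Jedis jedis = new Jedis("localhost", 6379);

    public String get(String key) {
        String cached = jedis.get(key);
        if (cached != null) {
            // Either a real value or the cached empty marker; both avoid the DB hit.
            return EMPTY_MARKER.equals(cached) ? null : cached;
        }
        String fromDb = loadFromDb(key); // hypothetical database lookup
        if (fromDb == null) {
            // Data does not exist: cache the empty marker briefly so repeated
            // requests for the same missing key no longer reach the database.
            jedis.setex(key, EMPTY_TTL_SECONDS, EMPTY_MARKER);
            return null;
        }
        jedis.setex(key, NORMAL_TTL_SECONDS, fromDb);
        return fromDb;
    }

    private String loadFromDb(String key) {
        return null; // placeholder: replace with a real database query
    }
}
```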

Origin www.cnblogs.com/lspkenney/p/11433288.html