Redis cache design principles

Basic principles

  • Only hot data should be placed in the cache

  • All cached information should have an expiration time set

  • Cache expiration times should be spread out to avoid centralized expiration
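One common way to spread out expirations is to add random jitter to a base TTL. A minimal sketch (the one-hour base and ±10% spread are illustrative values, not from the original text):

```python
import random

BASE_TTL = 3600  # illustrative base expiration of one hour, in seconds

def jittered_ttl(base=BASE_TTL, spread=0.1):
    """Return the base TTL plus a random offset of up to +/-10%,
    so keys written at the same time do not all expire at once."""
    offset = int(base * spread)
    return base + random.randint(-offset, offset)

# With a real client this would be something like:
#   r.set(key, value, ex=jittered_ttl())
ttl = jittered_ttl()
```

Without jitter, a batch of keys warmed at startup all expires in the same second, producing a spike of database reads.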

  • Cache keys should be readable

  • Cache keys with the same name in different services should be avoided

  • Keys can be abbreviated appropriately to save memory space
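The three key-related points above (readable, namespaced per service, abbreviated) can be combined into a simple naming helper. The segment names below are hypothetical examples, not a prescribed convention:

```python
def make_key(service, entity, entity_id):
    """Compose a readable, service-namespaced cache key from short
    but meaningful segments, separated by colons."""
    return f"{service}:{entity}:{entity_id}"

# e.g. the order service caching user 42's profile:
key = make_key("ord", "usr", 42)   # "ord:usr:42"
```

The service prefix prevents name collisions between services, while abbreviations like "usr" keep per-key memory overhead low without sacrificing readability.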

  • Choose the right data structure

  • Make sure the data written to the cache is complete and correct

  • Avoid time-consuming commands such as KEYS *

    • In the default configuration of Redis, an operation that takes more than 10ms is considered a slow query
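Instead of KEYS *, Redis offers the incremental SCAN command (in redis-py, r.scan_iter(match="user:*", count=100)), which walks the keyspace a batch at a time rather than blocking the server. A local stand-in sketching the batching pattern, with an in-memory list in place of a real keyspace:

```python
# Stand-in keyspace; real code would call r.scan_iter(...) instead.
keyspace = [f"user:{i}" for i in range(10)] + [f"order:{i}" for i in range(5)]

def scan_in_batches(keys, prefix, count=4):
    """Yield matching keys a few at a time, mimicking how SCAN walks
    the keyspace incrementally instead of returning everything in one
    blocking call the way KEYS does."""
    batch = []
    for k in keys:
        if k.startswith(prefix):
            batch.append(k)
            if len(batch) == count:
                yield batch
                batch = []
    if batch:
        yield batch

matched = [k for b in scan_in_batches(keyspace, "user:") for k in b]
```

Each batch gives the server a chance to serve other requests in between, which is exactly why SCAN stays under the slow-query threshold while KEYS * may not.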
  • The data corresponding to a key should not be too large

    • For the string type, the size of the value for a single key should be kept within 10K, preferably around 1K
    • For the hash type, the number of fields should not exceed 5,000
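The size guideline above can be enforced at write time by measuring the serialized payload before caching it. A sketch (the JSON serialization and the reject-by-returning-None behavior are illustrative choices):

```python
import json

MAX_VALUE_BYTES = 10 * 1024  # the 10K string-value guideline above

def safe_cache_value(value):
    """Serialize a value and refuse to cache it if it exceeds the
    size guideline; callers can then split the data or skip caching."""
    payload = json.dumps(value).encode("utf-8")
    if len(payload) > MAX_VALUE_BYTES:
        return None  # too large: split it up or reconsider caching it
    return payload
```

Oversized values slow down every GET/SET on that key and can saturate network bandwidth, so catching them before the write is cheaper than debugging them later.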
  • Avoid cache penetration

    • For data that does not exist in the database, a special marker can be stored in Redis so that repeated requests for the same missing data stop at the cache instead of hitting the database every time.
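A minimal sketch of the marker technique, using a dict as a stand-in for Redis (the sentinel value, TTL, and db_lookup stub are all illustrative assumptions; real code would use SETEX with a short expiration on the sentinel):

```python
NULL_SENTINEL = "__NULL__"   # illustrative marker for "known missing"

cache = {}                   # stand-in for Redis

def db_lookup(key):          # stand-in for the database query
    return None              # simulate a key that does not exist

def get_with_penetration_guard(key):
    if key in cache:
        value = cache[key]
        return None if value == NULL_SENTINEL else value
    value = db_lookup(key)
    # Cache the miss as a sentinel so repeat requests stop at Redis.
    cache[key] = value if value is not None else NULL_SENTINEL
    return value
```

Giving the sentinel a short TTL matters: if the data is later inserted into the database, the stale "missing" marker expires quickly.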
  • The cache layer should not throw exceptions

    • The cache should have a degradation plan: if the cache layer fails, requests must fall back to the database instead of surfacing the error.
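The fallback can be sketched as a small wrapper that swallows cache-layer errors and degrades to the database (the function names are hypothetical; real code should also log or alert on the failure rather than pass silently):

```python
def get_with_fallback(key, cache_get, db_get):
    """Try the cache first; if the cache layer errors, fall back to
    the database instead of propagating the exception to the caller."""
    try:
        value = cache_get(key)
        if value is not None:
            return value
    except Exception:
        pass  # degrade silently here; production code should log/alert
    return db_get(key)
```

Note that under a full cache outage every request now reaches the database, so the fallback path should be paired with rate limiting or circuit breaking for large traffic volumes.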
  • Can do proper cache warmup

    • For applications likely to receive a large volume of read requests immediately after launch, data can be written to the cache in advance of going online
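Cache warmup can be as simple as looping over the expected hot entries before the service takes traffic. A sketch with a dict standing in for Redis and a hypothetical loader function:

```python
cache = {}                             # stand-in for Redis

def warm_up(hot_ids, load_from_db):
    """Pre-load the entries expected to be hot before taking traffic,
    so the first wave of reads hits the cache rather than the DB."""
    for item_id in hot_ids:
        cache[f"item:{item_id}"] = load_from_db(item_id)
```

How the hot set is chosen (top sellers, recent activity, a replayed access log) is a business decision; the mechanism stays the same.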
  • The read order is cache first, then the database; the write order is the database first, then the cache
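The ordering rule above can be sketched with dicts standing in for the cache and the database (deleting the cache entry on write, rather than rewriting it, is a common variant of "database first, then the cache" that avoids caching a value that loses a concurrent race):

```python
cache = {}                       # stand-in for Redis
db = {"user:1": "alice"}         # stand-in for the database

def read(key):
    """Cache first, then database; backfill the cache on a miss."""
    if key in cache:
        return cache[key]
    value = db.get(key)
    if value is not None:
        cache[key] = value
    return value

def write(key, value):
    """Database first, then the cache (here: invalidate the entry)."""
    db[key] = value
    cache.pop(key, None)   # next read will backfill the fresh value
```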

  • Data Consistency Issues

    • When the data source changes, the data in the cache may be inconsistent with the data in the data source. An appropriate cache update strategy should be selected according to the actual business requirements:

      • Active update: when the data source changes, synchronously update or invalidate the cached data. Higher consistency, higher maintenance cost.

      • Passive expiration: Redis removes cached data on its own when the expiration time set on the entry elapses. Lower consistency, lower maintenance cost.

Cache Eviction Algorithms

  • LRU

    • Evict the data whose last access is furthest in the past (least recently used)
  • LFU

    • Evict the data accessed least frequently over a period of time (least frequently used)
  • FIFO

    • Evict the data that was written earliest (first in, first out)
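LRU is the easiest of the three to sketch. A minimal in-process version using an ordered map (note this is exact LRU for illustration; Redis's allkeys-lru policy uses an approximated, sampling-based variant):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU sketch: on overflow, evict the entry whose last
    access is furthest in the past."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used
```

An LFU variant would track an access counter per key and evict the minimum; FIFO would simply pop in insertion order without the move_to_end bookkeeping.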

Copyright Notice

This article is an original work by its author, Xue Feihong, who retains the copyright. Reprints must preserve the article in full and include a prominently placed link to the original text.

If you have any questions, please contact the author by email.
