Redis cache penetration and cache breakdown mind map


Foreword

Notes from following the video tutorial to build the project: a mind map of the merchant cache handling.


Mind map

Mind map of the merchant query cache


Redis

Practical section

SMS verification code login and registration feature

  • Set up the project
  • Session-based SMS login
  • Session-sharing problem in a cluster
  • Implementing shared sessions with Redis
  • Solving the session refresh problem with Redis

Merchant query cache feature

  • Understanding the cache

    • What is a cache?

      • A temporary data store with highly efficient read/write capability
    • Benefits of caching

      • Reduces backend load
      • Improves the service's read/write response speed
    • Costs of caching

      • Development cost
      • Operations and maintenance cost
      • Consistency problems
  • Add a Redis cache

  • Cache update strategies

    • Three strategies

      • Memory eviction

        • Redis's built-in memory eviction mechanism
      • Expiration eviction

        • Use the EXPIRE command to set an expiration time
      • Active update

        • The business code actively updates the database and the cache together
    • Choosing a strategy

      • Low consistency requirements

        • Memory eviction and expiration eviction
      • High consistency requirements

        • Active update
        • Expiration eviction as a fallback
    • Active update approaches

      • Cache Aside

        • The caller of the cache updates the cache itself whenever it updates the database

          • Good consistency
          • Moderate implementation difficulty
      • Read/Write Through

        • The cache and the database are wrapped into a single service that keeps the two consistent and exposes an API; the caller only calls the API and neither knows nor cares whether it is operating on the database or the cache

          • Excellent consistency
          • Complex to implement
          • Average performance
      • Write Back

        • The caller performs all CRUD operations against the cache only; an independent thread asynchronously writes the cached data back to the database, giving eventual consistency

          • Poor consistency
          • Good performance
          • Complex to implement
    • Cache Aside design choices

      • Update the cache or delete the cache?

        • Updating the cache produces many wasted updates and has serious thread-safety issues
        • Deleting the cache is essentially a lazy update: there are no wasted updates, and the thread-safety risk is much lower
      • Operate on the database or the cache first?

        • Update the database first, then delete the cache

          • As long as atomicity is guaranteed, the probability of consistency problems is low
        • Delete the cache first, then update the database

          • High probability of consistency problems
      • How to guarantee the atomicity of the database and cache operations?

        • Monolithic system

          • Use the transaction mechanism
        • Distributed system

          • Use a distributed transaction mechanism
    • Best practice

      • When querying data

        • 1. Query the cache first
        • 2. On a hit, return directly
        • 3. On a miss, query the database
        • 4. Write the database result into the cache
        • 5. Return the result
      • When modifying data

        • 1. Update the database first
        • 2. Then delete the cache
        • 3. Guarantee the atomicity of the two operations (a minimal code sketch follows this list)
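
The query and update flows above can be captured in a short Cache Aside sketch. The following is a minimal illustration assuming a Spring Boot service with Spring Data Redis's StringRedisTemplate; `Shop`, `ShopMapper`, and the `Json` helper are hypothetical stand-ins for the project's own entity, data-access, and serialization code.

```java
import java.util.concurrent.TimeUnit;
import javax.annotation.Resource;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ShopService {

    private static final String CACHE_SHOP_KEY = "cache:shop:";

    @Resource
    private StringRedisTemplate stringRedisTemplate;
    @Resource
    private ShopMapper shopMapper;   // hypothetical database access layer

    // Query flow: cache first, fall back to the database, then write the result back.
    public Shop queryById(Long id) {
        String key = CACHE_SHOP_KEY + id;
        String json = stringRedisTemplate.opsForValue().get(key);
        if (json != null && !json.isEmpty()) {
            return Json.fromJson(json, Shop.class);          // cache hit
        }
        Shop shop = shopMapper.selectById(id);               // cache miss: query the database
        if (shop != null) {
            stringRedisTemplate.opsForValue()
                    .set(key, Json.toJson(shop), 30L, TimeUnit.MINUTES);
        }
        return shop;
    }

    // Update flow: update the database first, then delete the cache.
    // @Transactional keeps the two steps atomic within a monolithic service.
    @Transactional
    public void update(Shop shop) {
        shopMapper.updateById(shop);
        stringRedisTemplate.delete(CACHE_SHOP_KEY + shop.getId());
    }
}
```
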
  • Cache penetration

    • Cause

      • The requested data exists in neither the cache nor the database, so every such request falls through to the database
    • Solutions

      • Cache empty objects

        • Idea

          • Also cache non-existent data in Redis, with an empty value and a short TTL (sketched after this section)
        • Advantages

          • Simple to implement and easy to maintain
        • Disadvantages

          • Extra memory consumption
          • Short-term data inconsistency
      • Bloom filter

        • Idea

          • Use a Bloom filter to check whether the requested data can exist at all before the request reaches Redis
        • Advantages

          • Low memory footprint
        • Disadvantages

          • Complex to implement
          • False positives are possible
      • Other measures

        • Validate the basic format of request parameters
        • Strengthen user permission checks
        • Rate-limit hotspot parameters
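
As a rough illustration of the "cache empty objects" idea, the method below extends the query flow from the earlier sketch (same hypothetical `Shop`, `ShopMapper`, and `Json` helpers): a miss in both the cache and the database is recorded as an empty string with a short TTL, so repeated requests for non-existent data no longer reach the database.

```java
// Cache-penetration-safe query: a sketch assuming it lives in the ShopService above.
public Shop queryWithPassThrough(Long id) {
    String key = "cache:shop:" + id;
    String json = stringRedisTemplate.opsForValue().get(key);
    if (json != null) {
        // An empty string is the cached marker for "this record does not exist".
        return json.isEmpty() ? null : Json.fromJson(json, Shop.class);
    }
    Shop shop = shopMapper.selectById(id);                   // not cached yet: ask the database
    if (shop == null) {
        // Cache the empty object with a short TTL to absorb repeated invalid requests.
        stringRedisTemplate.opsForValue().set(key, "", 2L, TimeUnit.MINUTES);
        return null;
    }
    stringRedisTemplate.opsForValue().set(key, Json.toJson(shop), 30L, TimeUnit.MINUTES);
    return shop;
}
```
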
  • Cache avalanche

    • Cause

      • A large number of cache keys expire at the same time, or the Redis server goes down, so a flood of requests reaches the database and puts it under enormous pressure
    • Solutions

      • Add a random offset to the TTL of different keys (sketched after this list)
      • Use Redis Cluster to improve service availability
      • Add degradation and rate-limiting policies to the caching business
      • Add multi-level caching to the business
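
A minimal sketch of the "random TTL offset" mitigation, assuming the same StringRedisTemplate setup as the earlier sketches; the jitter window of 10 minutes is an arbitrary example value.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;

// Spread expirations out so that keys written at the same moment do not all expire together.
public void setWithJitter(String key, String value, long baseTtlMinutes) {
    long ttl = baseTtlMinutes + ThreadLocalRandom.current().nextInt(10);  // base + 0..9 extra minutes
    stringRedisTemplate.opsForValue().set(key, value, ttl, TimeUnit.MINUTES);
}
```
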
  • Cache breakdown (hotspot key)

    • Cause

      • A hotspot key is one that

        • Is accessed with high concurrency for a period of time
        • Takes a long time to rebuild in the cache
      • When a hotspot key suddenly expires and the rebuild takes a long time, a large number of requests hit the database during that window and cause a huge impact

    • Solutions

      • Mutex lock

        • Idea

          • Lock the cache rebuild so that only one thread performs the rebuild while the other threads wait (sketched after this section)
        • Advantages

          • Easy to implement
          • No extra memory consumption
          • Good consistency
        • Disadvantages

          • Performance degrades because threads wait
          • Risk of deadlock
      • Logical expiration

        • Idea

          • The hotspot key never expires in Redis; a logical expiration time is stored alongside the value, and on each query this logical time is checked to decide whether the cache needs to be rebuilt
          • The rebuild is still guarded by a mutex lock so that only one thread performs it
          • The rebuild runs asynchronously in a separate thread
          • Other threads do not wait; they simply return the stale data
        • Advantages

          • Threads do not need to wait, so performance is better
        • Disadvantages

          • Consistency is not guaranteed
          • Extra memory consumption
          • Complex to implement
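
The mutex solution can be sketched with Redis's SETNX semantics (setIfAbsent in Spring Data Redis). This is a simplified illustration assuming the same ShopService fields and hypothetical helpers as the earlier sketches; the lock key name and TTLs are example values, and the retry loop is kept deliberately naive.

```java
// Cache breakdown protection with a mutex: only one thread rebuilds the hotspot key.
public Shop queryWithMutex(Long id) throws InterruptedException {
    String key = "cache:shop:" + id;
    String lockKey = "lock:shop:" + id;
    while (true) {
        String json = stringRedisTemplate.opsForValue().get(key);
        if (json != null && !json.isEmpty()) {
            return Json.fromJson(json, Shop.class);            // cache hit
        }
        // Try to take the rebuild lock; the TTL prevents a deadlock if the holder crashes.
        Boolean locked = stringRedisTemplate.opsForValue()
                .setIfAbsent(lockKey, "1", 10L, TimeUnit.SECONDS);
        if (Boolean.TRUE.equals(locked)) {
            try {
                Shop shop = shopMapper.selectById(id);         // rebuild from the database
                if (shop != null) {
                    stringRedisTemplate.opsForValue()
                            .set(key, Json.toJson(shop), 30L, TimeUnit.MINUTES);
                }
                return shop;
            } finally {
                stringRedisTemplate.delete(lockKey);           // always release the lock
            }
        }
        Thread.sleep(50);   // someone else is rebuilding: back off briefly, then re-check the cache
    }
}
```
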
  • Cache utility class

    • Generics and functional programming are used so that the utility can handle different data types (see the sketch below)
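
A rough sketch of what such a utility might look like: generics describe the cached type, and a `Function<ID, R>` callback supplies the database lookup, so one method serves any entity. The class and method names here are illustrative, and `Json` is again a hypothetical serialization helper.

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

@Component
public class CacheClient {

    private final StringRedisTemplate stringRedisTemplate;

    public CacheClient(StringRedisTemplate stringRedisTemplate) {
        this.stringRedisTemplate = stringRedisTemplate;
    }

    // Generic cache-penetration-safe query: the caller supplies the key prefix,
    // the target type, and a database fallback function.
    public <R, ID> R queryWithPassThrough(String keyPrefix, ID id, Class<R> type,
                                          Function<ID, R> dbFallback,
                                          long ttl, TimeUnit unit) {
        String key = keyPrefix + id;
        String json = stringRedisTemplate.opsForValue().get(key);
        if (json != null) {
            return json.isEmpty() ? null : Json.fromJson(json, type);  // hit (or cached empty object)
        }
        R value = dbFallback.apply(id);                // miss: delegate to the database lookup
        if (value == null) {
            stringRedisTemplate.opsForValue().set(key, "", 2L, TimeUnit.MINUTES);
            return null;
        }
        stringRedisTemplate.opsForValue().set(key, Json.toJson(value), ttl, unit);
        return value;
    }
}
```

Usage would then look like `Shop shop = cacheClient.queryWithPassThrough("cache:shop:", id, Shop.class, shopMapper::selectById, 30L, TimeUnit.MINUTES);`.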

Voucher

Distributed lock

Summary

Having sorted out this mind map, I feel that the past few days of learning have made my thinking much clearer.
