Redis cache interview summary

Cache benefits and costs

1. Benefits of caching

Faster reads and writes: caches accelerate read/write speed at many levels, e.g. CPU L1/L2/L3 caches, the Linux page cache speeding up disk reads, browser caches, and caching database query results with Ehcache.

Reduced back-end load: a cache placed in front of the back end absorbs traffic, e.g. putting Redis in front of MySQL to reduce database load.

2. Costs of caching

Data inconsistency: there is a window of time during which the cache and the storage layer disagree; the size of that window depends on the update policy.

Code maintenance cost: with MySQL alone, only read and write logic is needed; once a cache is added, the cached data must also be maintained, which increases code complexity.

An in-heap cache (such as Ehcache or Guava's LoadingCache) consumes JVM heap memory and can affect the rest of the process, for example by increasing GC pressure:

An in-heap cache is allocated from the JVM heap.

  • JVM runtime data areas: heap, Java virtual machine stacks, method area, native method stacks, program counter

Choosing between an in-heap cache and a remote cache such as Redis

  • An in-heap cache generally performs better, since a remote cache requires socket transport
  • Prefer the remote cache for user-level data
  • Prefer the remote cache for large data sets, following the principle of keeping service nodes lightweight

Characteristics of Redis

What are the characteristics of Redis?

  • Rich data types
  • Usable as a cache or message store; keys can be given an expiration time (SETEX, SET with EXPIRE) and are deleted automatically when they expire
  • Supports both RDB and AOF persistence modes
  • Supports master-replica replication with read/write splitting, and Redis Cluster mode with dynamic scaling
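The key-expiration behavior listed above can be illustrated without a live server. The sketch below simulates the semantics of SETEX, EXPIRE, and GET (including Redis-style lazy deletion on access) with an in-process store; the key names are made up for illustration, and real SETEX takes whole seconds.

```python
import time

class TTLStore:
    """Minimal sketch of Redis-style key expiry (SETEX / EXPIRE / GET)."""
    def __init__(self):
        self._data = {}  # key -> (value, expires_at or None)

    def setex(self, key, seconds, value):
        # SETEX: set the value and its expiration atomically
        self._data[key] = (value, time.monotonic() + seconds)

    def set(self, key, value):
        self._data[key] = (value, None)  # no expiration

    def expire(self, key, seconds):
        if key in self._data:
            value, _ = self._data[key]
            self._data[key] = (value, time.monotonic() + seconds)
            return True
        return False

    def get(self, key):
        item = self._data.get(key)
        if item is None:
            return None
        value, expires_at = item
        if expires_at is not None and time.monotonic() >= expires_at:
            del self._data[key]  # lazy deletion on access, as Redis does
            return None
        return value

store = TTLStore()
store.setex("session:42", 0.05, "alice")
assert store.get("session:42") == "alice"
time.sleep(0.06)
assert store.get("session:42") is None  # expired and removed
```

In a real deployment these calls map directly onto the Redis commands; Redis also runs an active expiration cycle in the background in addition to the lazy check shown here.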

Which Redis features have you used?

  • Implemented a leaderboard with a sorted set (ZSET)
  • Implemented an in-memory cache by combining the Spring Boot cache abstraction with key expiration
  • Implemented session sharing across a distributed environment with Redis
  • Solved cache penetration with a Bloom filter
  • Implemented distributed locks with Redis
  • Used Redis to prevent duplicate order submissions
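The first item above, a sorted-set leaderboard, can be sketched as follows. The real commands would be ZADD/ZINCRBY to score members and ZREVRANGE ... WITHSCORES to read the ranking; this sketch simulates them in-process, and the member names are illustrative.

```python
class Leaderboard:
    """Sketch of a Redis sorted-set (ZSET) leaderboard: member -> score."""
    def __init__(self):
        self._scores = {}

    def zincrby(self, member, delta):
        # ZINCRBY: add delta to the member's score, creating it if absent
        self._scores[member] = self._scores.get(member, 0) + delta
        return self._scores[member]

    def zrevrange_withscores(self, start, stop):
        # Highest score first, like ZREVRANGE key start stop WITHSCORES
        ranked = sorted(self._scores.items(), key=lambda kv: kv[1], reverse=True)
        return ranked[start:stop + 1]  # Redis ranges are inclusive

board = Leaderboard()
board.zincrby("alice", 30)
board.zincrby("bob", 50)
board.zincrby("alice", 40)  # alice now has 70 points
top = board.zrevrange_withscores(0, 1)
assert top == [("alice", 70), ("bob", 50)]
```

A real ZSET additionally breaks score ties by member name and keeps the ranking ordered on every write, which is what makes top-N queries cheap.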

Redis cache avalanche

1. What is a cache avalanche?

If many cached entries expire within a short period of time, all the queries that would have hit the cache fall on the database at once, causing a cache avalanche.
Between the expiry of the old cache and the filling of the new one, every request must go to the database, putting enormous CPU and memory pressure on it and, in serious cases, bringing the database down.

2. How do you prevent a cache avalanche?

1. Cache preheating

Cache preheating means loading the relevant data into the cache when the system comes online, before user traffic arrives. This avoids the pattern where the first user request queries the database and then populates the cache: users directly hit data that has already been preheated. A cache-reload mechanism can also refresh the cache ahead of time; before an expected spike in concurrent traffic, the relevant cache keys can be preloaded by a manual trigger.
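The preheating step described above amounts to one bulk pass over the hot keys at deploy time. A minimal sketch, assuming a `load_from_db` function and a list of hot keys (both hypothetical names):

```python
def preheat(cache, load_from_db, hot_keys):
    """Load hot data into the cache at startup, before any user request,
    instead of filling the cache lazily on misses."""
    for key in hot_keys:
        cache[key] = load_from_db(key)  # one bulk pass at deploy time

# Illustrative stand-ins for the database and the cache
db = {"product:1": "widget", "product:2": "gadget"}
cache = {}
preheat(cache, db.get, ["product:1", "product:2"])
assert cache["product:1"] == "widget"  # the first request is already a hit
```

In practice the same loop, pointed at different key sets, is what a manual trigger would run before an anticipated traffic spike.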

2. Double caching policy

Keep the original cache C1 and a copy C2. When C1 expires, requests can fall back to C2. Give C1 a short expiration time and C2 a long one.
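A minimal sketch of this C1/C2 policy, using in-process dicts in place of two Redis keys; the TTL values are illustrative:

```python
import time

class DoubleCache:
    """Sketch of the double-cache policy: C1 (short TTL) backed by C2 (long TTL)."""
    def __init__(self, short_ttl, long_ttl):
        self.short_ttl, self.long_ttl = short_ttl, long_ttl
        self._c1, self._c2 = {}, {}  # key -> (value, expires_at)

    def put(self, key, value):
        now = time.monotonic()
        self._c1[key] = (value, now + self.short_ttl)
        self._c2[key] = (value, now + self.long_ttl)

    def _get(self, cache, key):
        item = cache.get(key)
        if item and time.monotonic() < item[1]:
            return item[0]
        return None

    def get(self, key):
        value = self._get(self._c1, key)
        if value is not None:
            return value
        return self._get(self._c2, key)  # fall back to the long-lived copy

cache = DoubleCache(short_ttl=0.05, long_ttl=10)
cache.put("price:1", 99)
time.sleep(0.06)                   # C1 has expired...
assert cache.get("price:1") == 99  # ...but C2 still serves the value
```

In a real system, a C1 miss that is served from C2 would typically also trigger an asynchronous rebuild of C1, so the database only ever sees the rebuild traffic, never the full request load.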

3. Regularly refreshed caching policy

For data with loose freshness requirements, load the cache at container startup and use a timer task to update or remove entries.

4. Set different expiration times for different keys, so that cache misses are spread out as evenly as possible over time.
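A common way to spread out expiration times is to add random jitter to a base TTL when writing each key. A minimal sketch, with illustrative base and jitter values:

```python
import random

BASE_TTL = 3600  # nominal one-hour expiry, in seconds
JITTER = 600     # up to ten extra minutes, chosen independently per key

def ttl_with_jitter():
    # Keys written at the same moment get different expiry points,
    # so they do not all expire together and stampede the database.
    return BASE_TTL + random.randint(0, JITTER)

ttls = [ttl_with_jitter() for _ in range(1000)]
assert all(BASE_TTL <= t <= BASE_TTL + JITTER for t in ttls)
assert len(set(ttls)) > 1  # expiry points are spread out
```

The jittered value would then be passed as the seconds argument to SETEX or EXPIRE when populating the cache.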

Redis cache penetration

What is cache penetration?

Cache penetration refers to queries for data that does not exist. Because the cache is populated passively on a miss, and (for fault-tolerance reasons) missing data is not written to the cache from the storage layer, every request for the non-existent data goes to the storage layer, defeating the purpose of the cache. Under heavy traffic this can bring the DB down, and an attacker can exploit it by deliberately and frequently querying non-existent keys.

Solutions to prevent cache penetration

  • Cache null values

If a query returns empty data (whether because the data does not exist or because of a system failure), we still cache the empty result, but with a very short expiration time, no longer than five minutes. By storing this default value directly in the cache, the second request finds a value in the cache and does not go on to hit the database.
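A minimal sketch of caching null results, with a dict standing in for Redis and the database; the key names are illustrative, and the TTL is recorded but not enforced here (a real SETEX would enforce it):

```python
NULL_SENTINEL = object()  # marks "known to be absent" in the cache
NULL_TTL = 60             # short TTL for empty results, well under five minutes

cache = {}  # stands in for Redis; key -> (value, ttl_seconds)
db = {"user:1": {"name": "alice"}}
db_hits = 0

def query(key):
    global db_hits
    if key in cache:
        value, _ = cache[key]
        return None if value is NULL_SENTINEL else value
    db_hits += 1
    value = db.get(key)  # cache miss: go to the storage layer
    # Cache the empty result too, with a short TTL, so repeated
    # queries for a non-existent key stop reaching the database
    cache[key] = (NULL_SENTINEL if value is None else value, NULL_TTL)
    return value

assert query("user:999") is None and db_hits == 1
assert query("user:999") is None and db_hits == 1  # served from cache; DB untouched
```

The sentinel object distinguishes "cached absence" from "not cached at all", which is the crux of the technique: a plain `None` lookup result cannot tell the two apart.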

  • Use a Bloom filter (BloomFilter). Advantages: it takes very little memory (bit-level storage) and is extremely fast.

Hash all data that could possibly exist into a sufficiently large bitmap; a query for data that definitely does not exist will be rejected by the bitmap, avoiding query pressure on the underlying storage system.

 


Origin www.cnblogs.com/woxbwo/p/11520973.html