Common Redis problems

  • Redis expired-key deletion strategies
    1. Immediate deletion: a timer fires and the key is deleted the moment it expires (unfriendly to the CPU, friendly to memory)
    2. Lazy deletion: a key is only checked for expiration when it is about to be used, so keys that are never touched again linger in memory (CPU-friendly, not memory-friendly)
    3. Periodic deletion: at regular intervals, delete a batch of expired keys (small CPU impact, moderate memory cost; the duration and frequency of each scan must be tuned)


  • Redis data eviction policies
    1. volatile-lru: evict the least recently used key from the set of keys that have an expiration time
    2. volatile-ttl: evict the key closest to expiring from the set of keys that have an expiration time
    3. volatile-random: evict a random key from the set of keys that have an expiration time
    4. volatile-lfu: evict the least frequently used key from the set of keys that have an expiration time
    5. allkeys-lru: evict the least recently used key from all keys
    6. allkeys-random: evict a random key from all keys
    7. allkeys-lfu: evict the least frequently used (LFU) key from all keys
    8. noeviction: evict nothing; once the maximum memory limit is reached, any command that needs more memory (most write commands) simply returns an error
    Note: to try this yourself, start Redis, run the info memory command from a client to see used_memory, then set the maxmemory configuration item slightly above that value; the eviction algorithm will then be triggered by further writes, as in the sketch below.
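    A minimal sketch of the note above, assuming the Jedis Java client (the article names no client) and a local Redis on the default port; it reads used_memory via INFO and sets maxmemory plus an eviction policy at runtime instead of editing the config file (the "32mb" value is a placeholder):

```java
import redis.clients.jedis.Jedis;

public class EvictionDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Same output as running "info memory" in redis-cli; look for used_memory.
            System.out.println(jedis.info("memory"));
            // Set maxmemory a little above used_memory (32mb is a placeholder),
            // plus a policy, so that further writes trigger eviction.
            jedis.configSet("maxmemory", "32mb");
            jedis.configSet("maxmemory-policy", "allkeys-lru");
        }
    }
}
```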

  • The concept of a cache avalanche
    A large number of cached keys become invalid at the same moment, so at that instant a flood of requests goes straight to the database, which has to absorb a huge number of queries in a short time and may crash.

  • Cache avalanche solutions
    1. Give cached data a randomized expiration time so that large batches of keys do not expire at the same moment (see the sketch after this list)
    2. If concurrency is modest and performance requirements are not strict, use a lock to queue requests. This does not mean a distributed lock; Java's built-in Lock or synchronized is enough, because the goal here is not raw concurrency but preventing a burst of queries from hitting the database at once
    3. Attach a cache mark to each cached item recording whether the entry is logically expired; when the mark expires, refresh the underlying cached data
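    A minimal sketch of solution 1, assuming the Jedis client; the only point is the randomized TTL (the base and jitter values are arbitrary):

```java
import java.util.concurrent.ThreadLocalRandom;

import redis.clients.jedis.Jedis;

public class RandomTtlCache {
    private static final int BASE_TTL_SECONDS = 600; // arbitrary base expiration
    private static final int JITTER_SECONDS   = 300; // arbitrary random spread

    // Keys written together get different TTLs, so they cannot all
    // expire (and flood the database) at the same moment.
    public static void put(Jedis jedis, String key, String value) {
        int ttl = BASE_TTL_SECONDS + ThreadLocalRandom.current().nextInt(JITTER_SECONDS);
        jedis.setex(key, ttl, value);
    }
}
```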

  • The concept of cache penetration
    Requests for data that exists neither in the cache nor in the database all fall through to the database, which has to absorb a large number of queries in a short time and may crash. Generally speaking, these are malicious requests.

  • Cache penetration solutions
    1. Verify user permissions and validate input, e.g. reject the request outright if id <= 0
    2. For the key a malicious user is probing, cache value = null with a short expiration such as 30 s; the same user then cannot brute-force the database for at least 30 s (see the sketch after this list)
    3. Use a Bloom filter: hash all data that could possibly exist into a sufficiently large bitmap, so that a key which definitely does not exist is intercepted by the bitmap, sparing the underlying storage system the query pressure
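    A minimal sketch of solutions 1 and 2 combined, assuming the Jedis client; the Db interface, key names, and TTLs are hypothetical stand-ins for the real data access layer:

```java
import redis.clients.jedis.Jedis;

public class NullCachingLoader {
    private static final String NULL_SENTINEL = "__NULL__"; // marks "not in the database"

    interface Db { String queryById(long id); } // hypothetical lookup, null if the row is absent

    public static String get(Jedis jedis, Db db, long id) {
        if (id <= 0) return null;                // solution 1: reject invalid ids outright
        String key = "item:" + id;
        String cached = jedis.get(key);
        if (cached != null) {
            return NULL_SENTINEL.equals(cached) ? null : cached;
        }
        String value = db.queryById(id);
        if (value == null) {
            jedis.setex(key, 30, NULL_SENTINEL); // solution 2: cache the miss for 30 s
        } else {
            jedis.setex(key, 600, value);        // normal hit path, arbitrary TTL
        }
        return value;
    }
}
```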

  • The concept of cache breakdown
    The data is not in the cache because its expiration time has passed, but it still exists in the database. Under high concurrency, if that one piece of data is something many requests must read, the cache miss sends all of those requests straight at the database, which can crash it.

  • The difference between cache avalanche and cache breakdown
    One example is enough to understand the difference. A cache avalanche is data across many user dimensions becoming invalid at the same time, so many users' requests hit the database simultaneously (e.g. user token caches). A cache breakdown is one piece of data that every user needs becoming invalid, so all users concurrently query the database for that shared data, and the flood of requests crashes it (e.g. a configuration shared by all users).

  • Cache breakdown solutions
    1. Because the data is shared by all users (such as a global configuration), simply set it to never expire
    2. Add a mutex so that only one request rebuilds the cache while the others wait (see the sketch after this list)
    3. Warm the cache and refresh it manually on every change; most such data is configuration anyway
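    A minimal sketch of solution 2, assuming the Jedis client; the key names and the 10 s lock TTL are illustrative, and for brevity the unlock is a plain DEL rather than a check-owner-then-delete Lua script:

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class MutexRebuild {
    interface Db { String loadHotData(); } // hypothetical loader for the hot record

    public static String get(Jedis jedis, Db db) throws InterruptedException {
        while (true) {
            String value = jedis.get("hot:config");
            if (value != null) return value;
            // SET NX EX: only one caller acquires the lock and rebuilds the cache.
            String ok = jedis.set("lock:hot:config", "1", SetParams.setParams().nx().ex(10));
            if ("OK".equals(ok)) {
                try {
                    value = db.loadHotData();
                    jedis.setex("hot:config", 600, value);
                    return value;
                } finally {
                    jedis.del("lock:hot:config");
                }
            }
            Thread.sleep(50); // lost the race: wait briefly, then re-check the cache
        }
    }
}
```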

  • Cache warm-up
    Load the relevant data into the cache system ahead of time. This avoids the pattern where the first user request has to query the database and then populate the cache; users query the pre-warmed cache data directly.

  • Cache warm-up solutions
    1. Write a cache-refresh page and trigger it manually when the release goes live;
    2. If the data volume is small, load it automatically when the project starts (see the sketch after this list);
    3. Refresh the cache on a schedule, in batches;
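    A minimal sketch of solutions 2 and 3, assuming the Jedis client; the Db interface stands in for whatever bulk loader the project actually has, and the method would be called once at startup or from a scheduled job:

```java
import java.util.Map;

import redis.clients.jedis.Jedis;

public class CacheWarmer {
    interface Db { Map<String, String> loadHotEntries(); } // hypothetical bulk loader

    // Push hot data into Redis before any user asks for it,
    // either once at startup or periodically in batches.
    public static void warmUp(Jedis jedis, Db db) {
        for (Map.Entry<String, String> e : db.loadHotEntries().entrySet()) {
            jedis.setex(e.getKey(), 3600, e.getValue()); // arbitrary 1 h TTL
        }
    }
}
```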

  • The five data structures (a short example follows the list)
    1. string (stores strings, integers, and floating-point numbers)
    2. list (an ordered collection of arbitrary data; elements may repeat)
    3. set (an unordered collection of arbitrary data; elements cannot repeat)
    4. hash (a hash table of field/value pairs; fields cannot repeat)
    5. zset (an ordered collection of arbitrary data; members cannot repeat and are sorted by score)
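    A minimal sketch touching all five structures, assuming the Jedis client; the key names and values are arbitrary:

```java
import redis.clients.jedis.Jedis;

public class DataStructuresDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.set("page:title", "hello");              // string
            jedis.lpush("recent:visits", "a", "b", "a");   // list: ordered, duplicates allowed
            jedis.sadd("tags", "redis", "cache", "redis"); // set: unordered, members unique
            jedis.hset("user:1", "name", "alice");         // hash: field/value pairs, fields unique
            jedis.zadd("board", 42.0, "alice");            // zset: unique members sorted by score
        }
    }
}
```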

  • Cache and database data inconsistency problem
    To be added.

  • Usage scenarios
    1. Counters (reads and writes are fast, and single-threaded command execution avoids lost or overwritten updates; see the sketch after this list)
    2. Caching hot data: configure maxmemory and an eviction policy to improve the hit rate
    3. Session cache: store tokens and similar state, which centralizes session management and keeps application servers stateless and scalable
    4. Implementing distributed locks
    5. Using list as a message queue, although a dedicated broker such as Kafka or RocketMQ is usually the better choice
    6. Using zset to implement leaderboards (also shown in the sketch below)
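    A minimal sketch of scenarios 1 and 6, assuming the Jedis client; key names and scores are arbitrary:

```java
import redis.clients.jedis.Jedis;

public class ScenarioDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Scenario 1: INCR runs atomically on the single command thread,
            // so concurrent increments are never lost or overwritten.
            long views = jedis.incr("article:1:views");

            // Scenario 6: keep scores in a zset and read the top 3.
            jedis.zincrby("leaderboard", 10, "alice");
            jedis.zincrby("leaderboard", 7, "bob");
            System.out.println(views + " " + jedis.zrevrange("leaderboard", 0, 2));
        }
    }
}
```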

  • Differences between Redis and Memcached
    1. Redis has a single-threaded architecture built on non-blocking I/O multiplexing; Memcached is multi-threaded
    2. Redis supports rich data structures; Memcached only supports plain text/binary values
    3. Redis supports the RDB and AOF persistence strategies; Memcached does not support persistence


  • Redis persistence strategies (see the sketch below)
    1. RDB
    Configuration: in the Redis configuration file, set save m n, meaning that if n keys are written within m seconds, a snapshot is taken and the data set is written to the RDB file. Data written after the most recent snapshot can be lost.
    2. AOF
    Configuration: in the Redis configuration file, enable AOF. Redis appends every write command to the AOF log and calls fsync according to the appendfsync setting, which is usually always (fsync after every write command) or everysec (fsync once per second).
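    A minimal sketch, assuming the Jedis client, that inspects the RDB snapshot rules and switches on AOF at runtime (the same settings normally live in redis.conf as described above):

```java
import redis.clients.jedis.Jedis;

public class PersistenceConfigDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Current "save m n" snapshot rules from the running server.
            System.out.println(jedis.configGet("save"));
            // Enable AOF and fsync the command log once per second.
            jedis.configSet("appendonly", "yes");
            jedis.configSet("appendfsync", "everysec");
        }
    }
}
```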
