Redis in Practice: Solving Cache Penetration, Cache Breakdown, and Cache Avalanche

Cache Penetration

Definition

Cache penetration occurs when the data requested by a client exists neither in the cache nor in the database. The cache can never be populated for that key, so every such request falls through to the database, putting pressure on it and defeating the purpose of the cache.

Solutions

  • Cache empty objects

When a client requests data that does not exist, we first query Redis, find nothing, and then fall through to the database, which also has nothing. The request has penetrated the cache and hit the database directly. The database cannot handle nearly as much concurrency as Redis, so if a large number of requests for such non-existent data arrive at the same time, they all land on the database. The fix is to cache the key even when the database has no data for it, storing an empty value in Redis. The next time a user asks for this non-existent data it is found in Redis, and the request never reaches the database. The trade-off is that caching a large number of empty objects consumes memory, which is why such entries are given a short TTL.

  • Bloom filter

A Bloom filter uses hashing to answer the question of whether the requested data exists at all: each element is hashed into positions of a huge bit array. If the filter says the data exists, the request is let through and goes to Redis; even if the Redis entry has expired, the data should still exist in the database, so after querying the database the result is written back to Redis. If the filter says the data does not exist, the request is rejected immediately. The advantage is that it saves memory; the drawback is the possibility of false positives. The filter is accurate when it says data does not exist, but not necessarily accurate when it says data exists, because it is built on hashing, and wherever hashing is used, hash collisions are possible.
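
As an illustration only, since this project does not implement a Bloom filter, a minimal sketch built on Google Guava's BloomFilter might look like the following; the class name, expected capacity, and false-positive rate are assumptions, and Redisson's RBloomFilter would be a common alternative when the filter needs to be shared across instances.

import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

public class ShopBloomFilterDemo {

    // Expect up to 1,000,000 shop ids with roughly a 1% false-positive rate (illustrative numbers)
    private static final BloomFilter<Long> SHOP_FILTER =
            BloomFilter.create(Funnels.longFunnel(), 1_000_000, 0.01);

    // Load all existing shop ids into the filter at startup
    public static void preload(Iterable<Long> allShopIds) {
        allShopIds.forEach(SHOP_FILTER::put);
    }

    // Returns false only if the id is definitely absent; true may be a false positive
    public static boolean mightExist(Long id) {
        return SHOP_FILTER.mightContain(id);
    }
}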

 

Solutions

In the original logic, if the data is not found in MySQL we simply return 404, which leaves the door open to cache penetration.

In the new logic, if the data does not exist we do not just return 404; we still write the key to Redis with an empty value. On the next query, if we hit the cache we check whether the value is empty: an empty value is the marker we wrote earlier and identifies the request as cache-penetration traffic, otherwise the cached data is returned directly.

Coding Solution

Because a Bloom filter is comparatively complicated to implement, this project adopts the first scheme: when the database has no data, an empty object is cached directly. The method that queries shop information is modified as follows.

@Override
public Result queryById(Long id) {
    // Assemble the cache key from the business prefix and the shop id
    String key = CACHE_SHOP_KEY + id;
    // Try to read the shop from Redis
    String shopJson = stringRedisTemplate.opsForValue().get(key);
    // Cache hit with a real value: deserialize and return
    if (StrUtil.isNotBlank(shopJson)) {
        Shop shop = JSONUtil.toBean(shopJson, Shop.class);
        return Result.ok(shop);
    }
    // A non-null but blank value is the cached empty object: the shop does not exist
    if (shopJson != null) {
        return Result.fail("店铺不存在");
    }
    // Cache miss: query the database
    Shop shop = getById(id);
    if (shop == null) {
        // Cache an empty value for this key with a short TTL to block penetration
        stringRedisTemplate.opsForValue().set(key, "", CACHE_NULL_TTL, TimeUnit.MINUTES);
        return Result.fail("店铺不存在");
    }
    // Write the database result into the cache with an expiration time
    stringRedisTemplate.opsForValue().set(key, JSONUtil.toJsonStr(shop), CACHE_SHOP_TTL, TimeUnit.MINUTES);
    // Return the shop
    return Result.ok(shop);
}

Cache Avalanche

Definition

Cache avalanche occurs when a large number of cache keys expire at the same time, or the Redis service goes down, so that a flood of requests reaches the database and puts it under enormous pressure.

Solutions

  • Add a random value to the TTL of different keys so that they do not all expire at the same time (a sketch follows this list)

  • Use a Redis cluster to improve service availability

  • Add degradation and rate-limiting policies to the caching business

  • Add a multi-level cache to the business
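
For the first point, a minimal sketch of adding TTL jitter when writing a cache entry could look like the following; the helper name and the 0-300 second jitter range are assumptions, not part of the original project.

private void setWithJitter(String key, String value, long baseTtlMinutes) {
    // Base TTL plus a random 0-300 second jitter so keys written together do not expire together
    long jitterSeconds = ThreadLocalRandom.current().nextLong(0, 300);
    long ttlSeconds = baseTtlMinutes * 60 + jitterSeconds;
    stringRedisTemplate.opsForValue().set(key, value, ttlSeconds, TimeUnit.SECONDS);
}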

Cache Breakdown

Definition

Cache breakdown, also known as the hotspot key problem, happens when a key that is being accessed with high concurrency, and whose cache rebuild logic is expensive, suddenly expires. The countless requests that arrive in that instant hit the database directly and put it under enormous pressure. A typical example is the data of a popular product during a Double Eleven sale.

Scenario analysis: ideally, thread 1 misses the cache, queries the database, and reloads the data into the cache; once thread 1 finishes, all other threads can read the data from the cache. But suppose that before thread 1 is done, threads 2, 3, and 4 call the same method concurrently. None of them finds the data in the cache, so they all query the database at the same time and all execute the rebuild code, which puts far too much pressure on the database.

Solutions

  • Mutex lock

A lock provides mutual exclusion: no matter how many threads arrive, only one of them is allowed to query the database, which keeps the database from being overwhelmed. The downside is query performance, because the rebuild goes from parallel to serial. We can use a tryLock method plus a double check to solve the problem. The advantage of this scheme is data consistency: every thread ends up reading the freshly rebuilt data.

Scenario Analysis

Suppose thread 1 arrives, misses the cache, and successfully acquires the lock. Thread 1 then executes the rebuild logic alone. If thread 2 arrives while thread 1 is still working, it fails to acquire the lock and sleeps for a while. Once thread 1 releases the lock, thread 2 acquires it, re-checks, and can now read the data from the cache.

Coding Implementation

Core idea: instead of querying the database directly after a cache miss, the thread first tries to acquire a mutex. If it fails to get the lock, it sleeps briefly and retries, repeating until the cache has been filled or it obtains the lock. The thread that does obtain the lock queries the database, writes the result into Redis, releases the lock, and returns the data. The mutex guarantees that only one thread executes the database logic, which prevents cache breakdown.

Code for acquiring and releasing the lock:

The core idea is to use Redis's SETNX semantics to represent lock acquisition. If the key does not exist, the insert succeeds and Redis returns 1 (StringRedisTemplate returns true); if the key already exists, the insert fails and Redis returns 0 (StringRedisTemplate returns false). The true/false result tells us whether a thread managed to insert the key, and the thread that inserted it successfully is treated as the thread holding the lock. The idea is somewhat similar to MyBatis-Plus's optimistic locking.

private boolean tryLock(String key) {
    // SETNX with a 10-second TTL: only one thread can create the key, and the TTL
    // prevents a deadlock if the lock holder crashes before releasing it
    Boolean flag = stringRedisTemplate.opsForValue().setIfAbsent(key, "1", 10, TimeUnit.SECONDS);
    return BooleanUtil.isTrue(flag);
}

private void unlock(String key) {
    // Release the lock by deleting the key
    stringRedisTemplate.delete(key);
}

The code protected by the lock should be kept as small as possible; here the mutex is only added around the database access.

public Shop queryWithMutex(Long id) {
    // Assemble the cache key from the business prefix and the shop id
    String key = CACHE_SHOP_KEY + id;
    // Try to read the shop from Redis
    String shopJson = stringRedisTemplate.opsForValue().get(key);
    // Cache hit with a real value: deserialize and return
    if (StrUtil.isNotBlank(shopJson)) {
        return JSONUtil.toBean(shopJson, Shop.class);
    }
    // A non-null but blank value is the cached empty object: the shop does not exist
    if (shopJson != null) {
        return null;
    }
    // Assemble the lock key
    String lockKey = LOCK_SHOP_KEY + id;

    Shop shop = null;
    boolean locked = false;
    try {
        // Try to acquire the mutex
        locked = tryLock(lockKey);
        // Failed to get the lock: sleep briefly and retry, another thread may have rebuilt the cache by then
        if (!locked) {
            Thread.sleep(50);
            return queryWithMutex(id);
        }
        // Got the lock: query the database
        shop = getById(id);
        if (shop == null) {
            // Cache an empty value with a short TTL to block penetration
            stringRedisTemplate.opsForValue().set(key, "", CACHE_NULL_TTL, TimeUnit.MINUTES);
            return null;
        }
        // Write the database result into the cache with an expiration time
        stringRedisTemplate.opsForValue().set(key, JSONUtil.toJsonStr(shop), CACHE_SHOP_TTL, TimeUnit.MINUTES);
    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    } finally {
        // Only release the lock if this thread actually acquired it
        if (locked) {
            unlock(lockKey);
        }
    }
    // Return the shop
    return shop;
}

  • Logical expiration scheme

The root cause of cache breakdown is that we set an expiration time on the key; if the key never expired there would be no breakdown problem, but data that never expires would occupy memory forever. A logical expiration scheme lets us keep hot keys resident in memory while still being able to refresh them.

Scenario Analysis

The expiration time is stored inside the value in Redis. Note that this timestamp has no effect on Redis itself; it is only checked by our own logic. Suppose thread 1 queries the cache and sees from the value that the data has logically expired. Thread 1 acquires the mutex, hands the rebuild work to a newly started thread (call it thread 2), and then returns the stale data immediately instead of blocking. The lock is not released until thread 2 finishes rebuilding. If thread 3 arrives while thread 2 still holds the lock, thread 3 fails to acquire it and also returns the stale data directly. Only after thread 2 has finished rebuilding do other threads get the fresh data. In other words, unlike the mutex scheme, this scheme never blocks waiting for the update, so performance does not degrade, but it returns old data in the meantime, which introduces a data-consistency trade-off.

Coding Implementation

Idea: when a query comes in, check Redis first. On a miss, return empty directly without querying the database (hot keys are preloaded, so a miss means the data does not exist). On a hit, take out the value and check its logical expiration time. If it has not expired, return the cached data directly. If it has expired, acquire the mutex, start an independent thread to rebuild the data, return the old data immediately, and release the mutex once the rebuild is complete.

Because a logical expiration time is needed, the cached value has to carry an extra field. Here a RedisData wrapper class is used: the shop is stored in its data member, and the object also carries the expiration time.

@Data
public class RedisData {
    private LocalDateTime expireTime;
    private Object data;
}

We need to warm up the cache, that is, write the hot-key data into Redis in advance. Here a unit test is used to trigger the write; note that it is the RedisData wrapper that gets written.

@Override
public void saveShopToRedis(Long id, Long expireSeconds) {
    // Query the shop from the database
    Shop shop = getById(id);
    // Wrap it in RedisData together with the logical expiration time
    RedisData redisData = new RedisData();
    redisData.setData(shop);
    redisData.setExpireTime(LocalDateTime.now().plusSeconds(expireSeconds));
    // Write the wrapper into Redis without a TTL; expiration is handled logically
    stringRedisTemplate.opsForValue().set(CACHE_SHOP_KEY + id, JSONUtil.toJsonStr(redisData));
}

 

To rebuild the data a separate thread is started; a thread pool is used so that threads are reused and resources are saved.

private static final ExecutorService CACHE_REBUILD_EXECUTOR = Executors.newFixedThreadPool(10);

public Shop queryWithLogicalExpire(Long id) {
    String key = CACHE_SHOP_KEY + id;
    // 1. Query the shop cache from Redis
    String json = stringRedisTemplate.opsForValue().get(key);
    // 2. Check whether it exists
    if (StrUtil.isBlank(json)) {
        // 3. Cache miss: hot keys are preloaded, so return null directly
        return null;
    }
    // 4. Cache hit: deserialize the json into RedisData and extract the shop and expiration time
    RedisData redisData = JSONUtil.toBean(json, RedisData.class);
    Shop shop = JSONUtil.toBean((JSONObject) redisData.getData(), Shop.class);
    LocalDateTime expireTime = redisData.getExpireTime();
    // 5. Check whether the data has logically expired
    if (expireTime.isAfter(LocalDateTime.now())) {
        // 5.1 Not expired: return the shop directly
        return shop;
    }
    // 5.2 Expired: the cache needs to be rebuilt
    // 6. Cache rebuild
    // 6.1 Try to acquire the mutex
    String lockKey = LOCK_SHOP_KEY + id;
    boolean isLock = tryLock(lockKey);
    // 6.2 If the lock was acquired, submit the rebuild task to the thread pool
    if (isLock) {
        CACHE_REBUILD_EXECUTOR.submit(() -> {
            try {
                // 6.3 Rebuild the cache with a new logical expiration time
                this.saveShopToRedis(id, 20L);
            } catch (Exception e) {
                throw new RuntimeException(e);
            } finally {
                // Release the lock after the rebuild finishes
                unlock(lockKey);
            }
        });
    }
    // 7. Return the (possibly stale) shop information
    return shop;
}

Origin blog.csdn.net/weixin_64133130/article/details/132394447