[Business Function Chapter 87] Microservices - Spring Cloud - Local Cache - Redis Distributed Cache - Cache Penetration, Avalanche, and Breakdown

1. Cache

1. What is a cache

  Caching reduces the frequency of access to the underlying data source, thereby improving the performance of our system.

(Figure: cache flowchart)

2. Classification of cache

2.1 Local cache

  A local cache simply stores the cached data in the application's own memory (for example, a Map<String, Object>). In a monolithic architecture this works fine.

(Figure: caching in a monolithic architecture)
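
To make the idea concrete, here is a minimal sketch of such an in-memory cache: a thread-safe map plus a per-entry expiration time. The class and method names are illustrative, not code from the project:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal local cache: a thread-safe map with per-entry expiration
public class LocalCache {

    private static class CachedValue {
        final Object value;
        final long expiresAt;
        CachedValue(Object value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<String, CachedValue> cache = new ConcurrentHashMap<>();

    public void put(String key, Object value, long ttlMillis) {
        cache.put(key, new CachedValue(value, System.currentTimeMillis() + ttlMillis));
    }

    public Object get(String key) {
        CachedValue cached = cache.get(key);
        if (cached == null || System.currentTimeMillis() > cached.expiresAt) {
            cache.remove(key); // expired or missing: evict and report a miss
            return null;
        }
        return cached.value;
    }
}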

2.2 Distributed cache

  In a distributed environment, our original local cache is of limited use, because:

  • Cached data is redundant: every node keeps its own copy
  • Caching becomes inefficient: each node only sees its own subset of requests, and the copies can drift out of sync

(Figure: structure of a distributed cache)

3. Integrate Redis

  To integrate Redis, we add the corresponding dependency to the pom.xml of the SpringBoot project.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

  Then we need to add the corresponding connection configuration.

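A minimal example of that configuration in application.yml, assuming a local Redis on the default port (the host and port are placeholder values, not the project's actual ones; this is the Spring Boot 2.x property layout):

spring:
  redis:
    host: 127.0.0.1
    port: 6379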

Test Redis data operations:

    @Autowired
    StringRedisTemplate stringRedisTemplate;

    @Test
    public void testStringRedisTemplate(){
        // Get the ValueOperations object for operating on String values
        ValueOperations<String, String> ops = stringRedisTemplate.opsForValue();
        // Insert data
        ops.set("name","bobo"+ UUID.randomUUID());
        // Retrieve the stored value
        System.out.println("Value just saved: "+ops.get("name"));
    }

The stored value can be viewed through a Redis client connection.
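
For example, with the built-in redis-cli client (reading the key written by the test above):

$ redis-cli
127.0.0.1:6379> get name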


You can also view it through a GUI tool.


4. Reworking the three-level classification query

  When querying the second- and third-level classification data for the home page, we can use Redis to cache the corresponding data and improve retrieval efficiency.

@Override
public Map<String, List<Catalog2VO>> getCatelog2JSON() {
    // Try to read the classification data from Redis first
    String catalogJSON = stringRedisTemplate.opsForValue().get("catalogJSON");
    if(StringUtils.isEmpty(catalogJSON)){
        // Nothing in the cache: query the database
        Map<String, List<Catalog2VO>> catelog2JSONForDb = getCatelog2JSONForDb();
        // Store a copy of the database result in the cache as well
        String json = JSON.toJSONString(catelog2JSONForDb);
        stringRedisTemplate.opsForValue().set("catalogJSON",json);
        return catelog2JSONForDb;
    }
    // Cache hit: parse the cached JSON and return it
    Map<String, List<Catalog2VO>> stringListMap = JSON.parseObject(catalogJSON,
            new TypeReference<Map<String, List<Catalog2VO>>>() {});
    return stringListMap;
}

  Then we stress-test the three-level classification query:

| Stress test content | Threads | Throughput/s | 90% response time (ms) | 99% response time (ms) |
| --- | --- | --- | --- | --- |
| Nginx | 50 | 7,385 | 10 | 70 |
| Gateway | 50 | 23,170 | 3 | 14 |
| Service tested individually | 50 | 23,160 | 3 | 7 |
| Gateway + Service | 50 | 8,461 | 12 | 46 |
| Nginx + Gateway | 50 | | | |
| Nginx + Gateway + Service | 50 | 2,816 | 27 | 42 |
| Single menu | 50 | 1,321 | 48 | 74 |
| Three-level classification | 50 | 12 | 4,000 | 4,000 |
| Three-level classification (after business optimization) | 50 | 448 | 113 | 227 |
| Three-level classification (Redis cache) | 50 | 1,163 | 49 | 59 |

  The comparison shows that the performance improvement from adding the Redis cache is quite significant.


5. Cache penetration

  Cache penetration refers to querying data that does not exist at all. Since the cache never hits, every such request goes on to the database, which also has no matching record. Because we never cache the empty result, every request for this non-existent data ends up at the storage layer, defeating the purpose of the cache.


An attacker can exploit this with non-existent keys: the instantaneous pressure on the database spikes and eventually crashes it. The solution is relatively simple: cache the empty result as well and give it a short expiration time, as sketched below.
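
A minimal sketch of that idea, using the same sentinel value "1" and 5-second TTL as the complete code later in this chapter:

// The database returned nothing: cache a sentinel with a short TTL so that
// repeated requests for the same missing key no longer reach the DB
stringRedisTemplate.opsForValue().set("catalogJSON", "1", 5, TimeUnit.SECONDS);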


6. Cache avalanche

  A cache avalanche happens when we give a large batch of keys the same expiration time, so they all expire at the same moment. Every request is then forwarded to the DB, and the DB collapses under the instantaneous pressure.


Solution: add a random offset to the base expiration time, for example 1-5 extra minutes chosen at random. This lowers the chance that many keys share the same expiration time and makes a collective failure unlikely.
Note that the random offset must be positive: a randomly produced negative number would make the overall TTL invalid and an exception would be thrown. A sketch follows.


7. Cache breakdown

  Cache breakdown concerns keys that have an expiration time set and are extremely "hot", i.e. accessed with very high concurrency at certain points in time. If such a key expires just before a flood of concurrent requests arrives, all of those queries fall through to the DB at once. This is called cache breakdown.


Solution: locking. Under high concurrency, only one request is allowed through to query the database while the others wait. The winner writes the result to the cache and then releases the lock; when the waiting requests acquire the lock they check the cache first, find the data, and never reach the DB.


But when we ran the stress test, the output was somewhat unexpected.


The database was queried twice, because the lock was released before the result had been written to the cache: a waiting thread acquired the lock, still saw an empty cache, and queried the DB again.


We only need to move the cache write so that it happens before the lock is released.


The complete code:

/**
 * Query all of the second- and third-level classification data
 * and package it as a Map<String, List<Catalog2VO>>.
 * @return
 */
@Override
public Map<String, List<Catalog2VO>> getCatelog2JSON() {
    String key = "catalogJSON";
    // Try to read the classification data from Redis first
    String catalogJSON = stringRedisTemplate.opsForValue().get(key);
    if(StringUtils.isEmpty(catalogJSON)){
        System.out.println("Cache miss.....");
        // Nothing in the cache: query the database
        Map<String, List<Catalog2VO>> catelog2JSONForDb = getCatelog2JSONForDb();
        if(catelog2JSONForDb == null){
            // The data does not exist in the database either: cache a sentinel
            // with a short expiration time to prevent cache penetration
            stringRedisTemplate.opsForValue().set(key,"1",5, TimeUnit.SECONDS);
        }else{
            // Store a copy of the database result in the cache as well,
            // with an expiration time to prevent cache avalanche
            String json = JSON.toJSONString(catelog2JSONForDb);
            stringRedisTemplate.opsForValue().set(key,json,10,TimeUnit.MINUTES);
        }
        return catelog2JSONForDb;
    }
    if("1".equals(catalogJSON)){
        // The cached value is the penetration sentinel: the data does not exist
        return null;
    }
    System.out.println("Cache hit....");
    // Cache hit: parse the cached JSON and return it
    Map<String, List<Catalog2VO>> stringListMap = JSON.parseObject(catalogJSON,
            new TypeReference<Map<String, List<Catalog2VO>>>() {});
    return stringListMap;
}

/**
 * Query all of the second- and third-level classification data from the database
 * and package it as a Map<String, List<Catalog2VO>>.
 * In SpringBoot, beans are singletons by default, so synchronized(this)
 * serializes all callers within this service instance.
 * @return
 */
public Map<String, List<Catalog2VO>> getCatelog2JSONForDb() {
    String keys = "catalogJSON";
    synchronized (this){
        /*if(cache.containsKey("getCatelog2JSON")){
            // Return directly from the local cache
            return cache.get("getCatelog2JSON");
        }*/
        // Check the cache again after acquiring the lock: a previous lock holder
        // may already have populated it; only query the database on a miss
        String catalogJSON = stringRedisTemplate.opsForValue().get(keys);
        if(!StringUtils.isEmpty(catalogJSON)){
            // Cache hit: parse the cached JSON and return it
            Map<String, List<Catalog2VO>> stringListMap = JSON.parseObject(catalogJSON,
                    new TypeReference<Map<String, List<Catalog2VO>>>() {});
            return stringListMap;
        }
        System.out.println("-----------> querying the database");

        // Fetch all classification records in one query
        List<CategoryEntity> list = baseMapper.selectList(new QueryWrapper<CategoryEntity>());
        // Extract the first-level classifications (parent id 0)
        List<CategoryEntity> leve1Category = this.queryByParenCid(list, 0L);
        // Convert the first-level classifications into a Map: the key is the
        // first-level category id, the value is its list of second-level categories
        Map<String, List<Catalog2VO>> map = leve1Category.stream().collect(Collectors.toMap(
                key -> key.getCatId().toString()
                , value -> {
                    // Find the second-level classifications under this first-level category
                    List<CategoryEntity> l2Catalogs = this.queryByParenCid(list, value.getCatId());
                    List<Catalog2VO> catalog2VOs = null;
                    if(l2Catalogs != null){
                        catalog2VOs = l2Catalogs.stream().map(l2 -> {
                            // Fill the second-level data into a Catalog2VO
                            Catalog2VO catalog2VO = new Catalog2VO(l2.getParentCid().toString(), null, l2.getCatId().toString(), l2.getName());
                            // Find the third-level classifications under this second-level category
                            List<CategoryEntity> l3Catelogs = this.queryByParenCid(list, l2.getCatId());
                            if(l3Catelogs != null){
                                List<Catalog2VO.Catalog3VO> catalog3VOS = l3Catelogs.stream().map(l3 -> {
                                    Catalog2VO.Catalog3VO catalog3VO = new Catalog2VO.Catalog3VO(l3.getParentCid().toString(), l3.getCatId().toString(), l3.getName());
                                    return catalog3VO;
                                }).collect(Collectors.toList());
                                // Attach the third-level list to its second-level category
                                catalog2VO.setCatalog3List(catalog3VOS);
                            }
                            return catalog2VO;
                        }).collect(Collectors.toList());
                    }
                    return catalog2VOs;
                }
        ));
        // Write the database result into the cache BEFORE releasing the lock,
        // so that waiting threads see a populated cache when they enter
        //cache.put("getCatelog2JSON",map);
        if(map == null){
            // The data does not exist in the database either: cache a sentinel
            // with a short expiration time to prevent cache penetration
            stringRedisTemplate.opsForValue().set(keys,"1",5, TimeUnit.SECONDS);
        }else{
            // Store a copy of the database result with an expiration time
            String json = JSON.toJSONString(map);
            stringRedisTemplate.opsForValue().set(keys,json,10,TimeUnit.MINUTES);
        }
        return map;
    }
}

8. Limitations of local locks

  The synchronized lock used above is a local lock: it only guards a single service instance. In a distributed cluster with multiple nodes, each node's lock only protects its own container and has no effect on the other container nodes.
A local lock therefore cannot serialize the operations of other nodes, which is clearly a problem: several nodes can still query the database at the same time. In distributed cluster scenarios we need a distributed lock.


So the problem with local locks is solved by a distributed lock. Does that mean the local lock itself is no longer needed in a distributed scenario?


  Obviously not. If each node in a distributed environment did not throttle its own requests, the pressure on the distributed lock itself would become very high. We still need the local lock to serialize the requests within each node and thereby reduce the pressure on the distributed lock. In actual development we therefore use a combination of local locks and distributed locks, as sketched below.
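
A minimal sketch of that combination, using Redis SETNX (setIfAbsent) as a simple distributed lock. The lock key, value, timeouts, and method name are illustrative assumptions; a production implementation would use something like Redisson, since a plain DELETE can release a lock that has already expired and been re-acquired by another node:

import java.util.List;
import java.util.Map;
import java.util.concurrent.TimeUnit;

public Map<String, List<Catalog2VO>> getCatelog2JSONWithLock() {
    synchronized (this) { // local lock: only one thread per node competes
        while (true) {
            // Distributed lock: only one node cluster-wide wins;
            // the TTL guards against deadlock if the holder crashes
            Boolean locked = stringRedisTemplate.opsForValue()
                    .setIfAbsent("lock:catalogJSON", "1", 30, TimeUnit.SECONDS);
            if (Boolean.TRUE.equals(locked)) {
                try {
                    // Re-checks the cache under the lock, then falls back to the DB
                    return getCatelog2JSONForDb();
                } finally {
                    stringRedisTemplate.delete("lock:catalogJSON");
                }
            }
            try {
                Thread.sleep(100); // another node holds the lock: back off and retry
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return null;
            }
        }
    }
}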

Origin: blog.csdn.net/studyday1/article/details/132546591