Using Redisson to implement distributed locks: how to solve deadlocks, plus read-write locks, semaphores, CountDownLatch, and cache-consistency solutions

1. Add the Maven dependency

<!-- Use Redisson for distributed locks, distributed objects, and other features -->
<dependency>
    <groupId>org.redisson</groupId>
    <artifactId>redisson</artifactId>
    <version>3.12.0</version>
</dependency>

2. Add a configuration class

@Configuration
public class MyRedissonConfig {

    /**
     * All use of Redisson goes through the RedissonClient object
     */
    @Bean(destroyMethod = "shutdown")
    public RedissonClient redisson() throws IOException {

        // 1. Create the configuration
        Config config = new Config();
        config.useSingleServer().setAddress("redis://81.68.112.20:6379");

        // 2. Create the RedissonClient instance from the Config
        return Redisson.create(config);
    }
}

3. Redisson lock test & the watchdog principle: how Redisson avoids deadlocks

    // Stress test
    // Redisson lock test
    @GetMapping("/hello")
    public String hello() {

        // 1. Get a lock (as long as the lock name is the same, it is the same lock)
        RLock lock = redissonClient.getLock("my-lock");

        // 2. Acquire the lock
//        lock.lock(); // Blocking wait; the default lease time is 30s
        // 1) The lock renews itself automatically: if the business runs long, the lease is extended
        //    by another 30s while it runs, so a long-running business need not worry about the lock
        //    expiring and being deleted mid-execution
        // 2) Once the business finishes, the lock is no longer renewed; even without a manual unlock,
        //    the lock is deleted automatically after 30s

        lock.lock(31, TimeUnit.SECONDS); // Auto-unlock after 31s; the auto-unlock time must exceed the business execution time
        // 1) If we pass a lease time, it is sent to Redis in the lock script, and the expiry is exactly the time we specified
        // 2) If we do not specify a lease time, it uses 30 * 1000 ms (the watchdog default, lockWatchdogTimeout)
        //    As long as the lock is held, a scheduled task keeps resetting the expiry back to the watchdog
        //    default, renewing every 10 seconds (one third of the watchdog time)

        // Best practice
        // 1) Prefer lock.lock(30, TimeUnit.SECONDS): it skips the renewal machinery; unlock manually
        try {

            System.out.println("Lock acquired, running business..." + Thread.currentThread().getId());
            Thread.sleep(30000);
        } catch (InterruptedException e) {

            e.printStackTrace();
        } finally {

            System.out.println("Unlocking..." + Thread.currentThread().getId());
            // 3. Unlock. Suppose the unlock code never runs: will Redisson deadlock?
            //    No: the lease expires, so the lock is deleted automatically
            lock.unlock();
        }
        return "hello";
    }
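The watchdog renewal described in the comments can be modeled locally with plain JUC scheduling: a periodic task that resets the remaining lease back to the full watchdog timeout every third of that timeout. This is an illustrative in-JVM sketch, not Redisson's internals; the names `WatchdogDemo`, `simulate`, and `leaseRemainingMs` are ours, and the intervals are sped up for the demo.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative model of the watchdog: while the "business" runs, a scheduled
// task keeps resetting the remaining lease back to the full timeout.
public class WatchdogDemo {
    static final long WATCHDOG_TIMEOUT_MS = 30_000; // Redisson's default lockWatchdogTimeout

    // Simulates holding the lock while the watchdog renews the lease.
    static long simulate() throws InterruptedException {
        AtomicLong leaseRemainingMs = new AtomicLong(WATCHDOG_TIMEOUT_MS);
        ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();
        // Really every 10s (timeout / 3); sped up to 50ms for the demo
        ScheduledFuture<?> renewal = watchdog.scheduleAtFixedRate(
                () -> leaseRemainingMs.set(WATCHDOG_TIMEOUT_MS), 50, 50, TimeUnit.MILLISECONDS);

        leaseRemainingMs.set(5_000);  // pretend the lease is nearly expired mid-business
        Thread.sleep(300);            // business still running -> watchdog renews it
        renewal.cancel(false);        // business finished: stop renewing,
        watchdog.shutdown();          // so the lease now expires naturally
        return leaseRemainingMs.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("lease after renewal: " + simulate() + " ms");
    }
}
```

Once the renewal task is cancelled, nothing resets the lease again, which is why an unlocked-but-crashed holder cannot deadlock the system.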

4. Redisson read-write lock

Without further ado, straight to the code!

   /**
     * Guarantees reading the latest data. During modification, the write lock is an exclusive
     * (mutex) lock, and the read lock is a shared lock
     * Reads must wait until the write lock is released
     * read + read: effectively lock-free, concurrent reads; Redis just records all current read locks, and they all acquire successfully at the same time
     * write + read: the read waits for the write lock to be released
     * write + write: blocks, writes queue up
     * read + write: if a read lock is held, the write must also wait
     * As long as a write is involved, everything else must wait
     * @return String
     */
    @RequestMapping("/write")
    @ResponseBody
    public String writeValue() {

        RReadWriteLock lock = redisson.getReadWriteLock("rw_lock");
        String s = "";
        RLock rLock = lock.writeLock();
        try {

            // 1. Take the write lock to change data, the read lock to read data
            rLock.lock();
            System.out.println("Write lock acquired..." + Thread.currentThread().getId());
            s = UUID.randomUUID().toString();
            try {
                TimeUnit.SECONDS.sleep(3);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            redisTemplate.opsForValue().set("writeValue", s);
        } catch (Exception e) {

            e.printStackTrace();
        } finally {

            rLock.unlock();
            System.out.println("Write lock released..." + Thread.currentThread().getId());
        }
        return s;
    }

    @RequestMapping("/read")
    @ResponseBody
    public String readValue() {

        RReadWriteLock lock = redisson.getReadWriteLock("rw_lock");
        RLock rLock = lock.readLock();
        String s = "";
        rLock.lock();
        try {

            System.out.println("Read lock acquired..." + Thread.currentThread().getId());
            s = (String) redisTemplate.opsForValue().get("writeValue");
            try {
                TimeUnit.SECONDS.sleep(3);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        } catch (Exception e) {

            e.printStackTrace();
        } finally {

            rLock.unlock();
            System.out.println("Read lock released..." + Thread.currentThread().getId());
        }
        return s;
    }
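The lock-compatibility table in the comment (read+read concurrent; anything involving a write waits) is the same contract as `java.util.concurrent.locks.ReentrantReadWriteLock`, which `RReadWriteLock` extends across processes. A minimal single-JVM sketch of that contract, with our own class and method names:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Local (single-JVM) analogue of RReadWriteLock: read+read runs concurrently,
// while any write excludes both readers and other writers.
public class RwDemo {
    private static final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private static String value = "";

    static String write(String v) {
        rw.writeLock().lock();          // exclusive: blocks readers and writers
        try {
            value = v;
            return v;
        } finally {
            rw.writeLock().unlock();
        }
    }

    static String read() {
        rw.readLock().lock();           // shared: many readers may hold it at once
        try {
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        write("abc");
        // Two concurrent readers both succeed while no writer holds the lock
        CountDownLatch done = new CountDownLatch(2);
        Runnable reader = () -> { System.out.println("read: " + read()); done.countDown(); };
        new Thread(reader).start();
        new Thread(reader).start();
        done.await();
    }
}
```

With Redisson, the same shape works across multiple service instances because the lock state lives in Redis rather than in one JVM.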

5. Redisson CountDownLatch (closing latch) test

/**
 * Locking the doors for the holidays
 * Each class empties out one by one
 * Once all 5 classes have left, we can lock the doors
 * @return
 */
@GetMapping("/lockDoor")
@ResponseBody
public String lockDoor() throws InterruptedException {

    RCountDownLatch door = redisson.getCountDownLatch("door");
    door.trySetCount(5);
    door.await(); // Wait until all 5 classes have left; only then does the latch complete
    return "School's out....";
}

@GetMapping("/gogogo/{id}")
@ResponseBody
public String gogogo(@PathVariable("id") Long id) {

    RCountDownLatch door = redisson.getCountDownLatch("door");
    door.countDown(); // Each call decrements the counter by one
    return "Class " + id + " has left.....";
}

This works the same as JUC's CountDownLatch:

await() waits for the latch to complete

countDown() decrements the counter; once it reaches zero, await() is released
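The equivalence with JUC can be seen by replaying the "lock the door" flow in a single JVM with `java.util.concurrent.CountDownLatch`; `RCountDownLatch` behaves the same, except the count lives in Redis and is shared across services. `DoorDemo` is our illustrative name:

```java
import java.util.concurrent.CountDownLatch;

// In-JVM version of the lockDoor/gogogo flow above.
public class DoorDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch door = new CountDownLatch(5);   // same role as trySetCount(5)
        for (int id = 1; id <= 5; id++) {
            final int classId = id;
            new Thread(() -> {
                System.out.println("Class " + classId + " has left");
                door.countDown();                      // each class decrements by one
            }).start();
        }
        door.await();                                  // returns once the count hits 0
        System.out.println("School's out, locking the door");
    }
}
```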

6. Redisson semaphore test

/**
 * Garage parking
 * 3 parking spaces
 * @return
 */
@GetMapping("/park")
@ResponseBody
public String park() throws InterruptedException {

    RSemaphore park = redisson.getSemaphore("park");
    boolean b = park.tryAcquire(); // Try to acquire one permit, i.e. occupy one parking space

    return "ok=" + b;
}

@GetMapping("/go")
@ResponseBody
public String go() {

    RSemaphore park = redisson.getSemaphore("park");

    park.release(); // Free one parking space

    return "ok";
}

Similar to Semaphore in JUC
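The same parking scenario can be run locally with `java.util.concurrent.Semaphore`, which `RSemaphore` mirrors in Redis. A self-contained sketch (the class and method names here are ours):

```java
import java.util.concurrent.Semaphore;

// Local analogue of the RSemaphore example: 3 permits = 3 parking spaces.
public class ParkDemo {
    static final Semaphore park = new Semaphore(3);

    static boolean tryPark() {
        return park.tryAcquire();   // occupy a space; false when the garage is full
    }

    static void leave() {
        park.release();             // free a space
    }

    public static void main(String[] args) {
        System.out.println(tryPark()); // true
        System.out.println(tryPark()); // true
        System.out.println(tryPark()); // true
        System.out.println(tryPark()); // false: all 3 spaces taken
        leave();
        System.out.println(tryPark()); // true again after one car leaves
    }
}
```

Note that `tryAcquire()` returns immediately with `false` when no permit is free, whereas `acquire()` would block until a space opens up; the same distinction exists on `RSemaphore`.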

7. Redisson cache-consistency solutions

Cache data consistency: double-write mode

If two threads each write the database and then the cache, the cache writes may land in a different order than the database writes. Only the last cache write survives, and it can overwrite newer data with older data, leaving dirty data in the cache.

Cache data consistency: invalidation (failure) mode

Consider three connections:

Connection 1 writes the database and then deletes the cache.

Connection 2 is writing the database, but its network is slow and the write has not yet succeeded.

Connection 3 reads directly and gets the data written by connection 1. At that moment connection 2's write succeeds and it deletes the cache. Connection 3 then fills the cache with what it read, so the cache ends up holding connection 1's stale data instead of connection 2's.

Cache data consistency: solutions

Whether in double-write mode or invalidation mode, the cache can become inconsistent when multiple instances update at the same time. What should be done?

1. For user-dimension data (order data, user data), concurrent updates are very unlikely, so this problem hardly matters: give the cached data an expiration time, so a read every so often triggers an active refresh.
2. For basic data such as menus and product descriptions, you can also use canal to subscribe to the binlog.
3. Cached data plus an expiration time is enough to meet the caching requirements of most businesses.
4. Use locking to protect concurrent reads and writes: writes queue up in order, reads don't matter, so a read-write lock fits well (the business tolerates transient dirty data, which can be ignored).

Summary:

The data we put into the cache should not have extreme real-time or consistency requirements in the first place. Add an expiration time when caching so that the latest value is picked up periodically.
We should not over-design and increase the complexity of the system.
For data with high real-time and consistency requirements, query the database, even if it is a bit slower.
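The invalidation-mode write path ("write the database, then delete the cache; reads refill lazily") can be sketched with in-memory stand-ins. Everything here (`InvalidationDemo`, the `db` and `cache` maps) is illustrative; in the real system the "database" is MySQL and the "cache" is Redis:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of invalidation mode: writes update the "database" and then
// delete the cache entry; reads fill the cache lazily from the database.
public class InvalidationDemo {
    static final Map<String, String> db = new ConcurrentHashMap<>();
    static final Map<String, String> cache = new ConcurrentHashMap<>();

    static void write(String key, String value) {
        db.put(key, value);   // 1. write the database first
        cache.remove(key);    // 2. then delete (invalidate) the cache entry
    }

    static String read(String key) {
        // cache hit, else load from the database and fill the cache
        return cache.computeIfAbsent(key, db::get);
    }

    public static void main(String[] args) {
        write("color", "red");
        System.out.println(read("color"));  // loaded from db, now cached
        write("color", "blue");             // db updated, cache invalidated
        System.out.println(read("color"));  // the fresh value, not the stale one
    }
}
```

This single-threaded sketch cannot reproduce the three-connection race described above; that race is exactly why the text recommends expiration times plus a read-write lock rather than relying on invalidation alone.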

8. A real distributed-lock example

/**
     * Using a distributed lock
     * Query the category data from the database and assemble it
     * <p>
     * Cache consistency problem
     * How does the data in the cache stay consistent with the data in the database?
     * 1) Double-write mode: after updating the database, update the cache too
     * 2) Invalidation mode: after updating the database, delete the cache
     * <p>
     * Cache data consistency: solutions
     * Both double-write mode and invalidation mode can leave the cache inconsistent when multiple instances update at the same time. What to do?
     * 1. For user-dimension data (order data, user data), concurrency is very unlikely; no need to worry. Add an expiration time to the cached data, and reads trigger an active refresh every so often
     * 2. For basic data such as menus and product descriptions, you can also use canal to subscribe to the binlog
     * 3. Cached data + expiration time is enough for most businesses' caching requirements
     * 4. Use locking for concurrent reads and writes: writes queue up in order, reads don't matter, so a read-write lock fits (the business tolerates transient dirty data, which can be ignored)
     * Summary:
     * Data we put in the cache should not require extreme real-time consistency anyway, so add an expiration time when caching and pick up the latest data periodically
     * We should not over-design and increase the system's complexity
     * For data that does require real-time consistency, query the database, even if it is slower
     * <p>
     * Our system's consistency solution:
     * 1. All cached data has an expiration time; after expiry the next query triggers an active refresh
     * 2. When reading and writing data, take a distributed read-write lock
     * (this matters for data that is written often and read often)
     */
    public Map<String, List<Catelog2Vo>> getCatalogJsonFromDbWithRedissonLock() {

        // 1. The lock name determines the lock granularity: the finer, the faster
        RLock lock = redisson.getLock("catalogJson-lock");
        // Acquire the lock
        lock.lock();
        Map<String, List<Catelog2Vo>> dataFromDB;
        try {

            // Business code
            dataFromDB = getDataFromDB();
        } finally {

            lock.unlock();
        }
        return dataFromDB;
    }


    private Map<String, List<Catelog2Vo>> getDataFromDB() {

        String catalogJSON = redisTemplate.opsForValue().get("catalogJSON");
        if (!StringUtils.isEmpty(catalogJSON)) {
            // The cache is not null, so return directly
            Map<String, List<Catelog2Vo>> result = JSON.parseObject(catalogJSON, new TypeReference<Map<String, List<Catelog2Vo>>>() {
            });
            return result;
        }
        System.out.println("Queried the database.......");
        List<CategoryEntity> selectList = baseMapper.selectList(null);

        // 1. Find all level-1 categories
        List<CategoryEntity> level1Catrgorys = getParent_cid(selectList, 0L);

        // 2. Assemble the categories
        Map<String, List<Catelog2Vo>> parent_cid = level1Catrgorys.stream().collect(Collectors.toMap(k -> k.getCatId().toString(), v -> {
            // 1. For each level-1 category, find its level-2 categories
            List<CategoryEntity> categoryEntities = getParent_cid(selectList, v.getCatId());
            // 2. Wrap the result
            List<Catelog2Vo> catelog2Vos = null;
            if (categoryEntities != null) {
                catelog2Vos = categoryEntities.stream().map(l2 -> {
                    Catelog2Vo catelog2Vo = new Catelog2Vo(v.getCatId().toString(), null, l2.getCatId().toString(), l2.getName());
                    // 1. Find the level-3 categories of the current level-2 category and wrap them as VOs
                    List<CategoryEntity> level3Catelog = getParent_cid(selectList, l2.getCatId());
                    if (level3Catelog != null) {
                        List<Catelog2Vo.Catalog3Vo> collect = level3Catelog.stream().map(l3 -> {
                            // 2. Wrap in the required format
                            return new Catelog2Vo.Catalog3Vo(l2.getCatId().toString(), l3.getCatId().toString(), l3.getName());
                        }).collect(Collectors.toList());
                        catelog2Vo.setCatalog3List(collect);
                    }
                    return catelog2Vo;
                }).collect(Collectors.toList());
            }
            return catelog2Vos;
        }));

        // 3. Put the database result back into the cache: serialize the object to JSON
        String jsonString = JSON.toJSONString(parent_cid);
        redisTemplate.opsForValue().set("catalogJSON", jsonString, 1, TimeUnit.DAYS); // expires in 1 day
        return parent_cid;
    }

First check whether the cache has a value. If not, take the distributed lock and check the cache again; if it is still empty, query the database, store the result in the cache, and release the lock. If the cache has a value at either check, return it directly. This is the double-checked pattern.
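The check-lock-check-again flow can be sketched locally with a `ReentrantLock` and an in-memory map standing in for Redis; with Redisson you would swap the lock for `redisson.getLock("catalogJson-lock")` and the map for the Redis cache. The class and field names here (`DoubleCheckDemo`, `dbQueries`) are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

// Double-checked pattern: check the cache, lock, check again, only then query.
public class DoubleCheckDemo {
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static final ReentrantLock lock = new ReentrantLock();
    static final AtomicInteger dbQueries = new AtomicInteger();

    static String getCatalog() {
        String v = cache.get("catalogJSON");   // 1. first check, no lock
        if (v != null) return v;
        lock.lock();
        try {
            v = cache.get("catalogJSON");      // 2. check again under the lock:
            if (v != null) return v;           //    another thread may have filled it
            dbQueries.incrementAndGet();       // 3. only one caller reaches the "database"
            v = "catalog-data";                //    stand-in for the real DB query
            cache.put("catalogJSON", v);
            return v;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        getCatalog();
        getCatalog();
        System.out.println("db queries: " + dbQueries.get()); // 1, not 2
    }
}
```

The second check under the lock is the crucial step: without it, every thread that was waiting on the lock would re-query the database after the first one finishes.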

Origin blog.csdn.net/u014496893/article/details/113854388