Redis Learning (2): Thread Safety, Distributed Locks, Message Queues

Coupon flash sale

Global ID Generator

  1. The first bit is the sign bit, which is always 0.
  2. The next 31 bits (bits 2-32) hold a timestamp difference: the number of seconds between the current time and a chosen start time. This keeps the ids increasing, though not necessarily contiguous.
  3. The last 32 bits are a serial number; partitioning the counter (per business and per day) plus the serial number supports distributed generation.

It is essentially the same idea as the snowflake algorithm used by MyBatis-Plus.


public class RedisIdWorker {

    /**
     * Start timestamp (2022-01-01 00:00:00 UTC)
     */
    private static final long BEGIN_TIMESTAMP = 1640995200L;
    /**
     * Number of bits used for the serial number
     */
    private static final int COUNT_BITS = 32;

    private final StringRedisTemplate stringRedisTemplate;

    public RedisIdWorker(StringRedisTemplate stringRedisTemplate) {
        this.stringRedisTemplate = stringRedisTemplate;
    }

    public long nextId(String keyPrefix) {
        // 1. Generate the timestamp part
        LocalDateTime now = LocalDateTime.now();
        long nowSecond = now.toEpochSecond(ZoneOffset.UTC);
        long timestamp = nowSecond - BEGIN_TIMESTAMP;

        // 2. Generate the serial number
        // 2.1. Get the current date, accurate to the day
        String date = now.format(DateTimeFormatter.ofPattern("yyyy:MM:dd"));
        // 2.2. Auto-increment
        // The possible-null warning here can be ignored: if the key does not exist, Redis starts counting from 0.
        // Building the key with the date avoids the counter ever overflowing the serial-number range;
        // the number of orders placed in a single day will never exceed 32 bits.
        long count = stringRedisTemplate.opsForValue().increment("icr:" + keyPrefix + ":" + date);

        // 3. Combine the two parts and return
        // Shift timestamp left by 32 bits into the high bits, leaving the low 32 bits as 0,
        // then OR in count, which never exceeds 32 bits.
        return timestamp << COUNT_BITS | count;
    }
}

Test:
Write a runnable task that loops 100 times, generating an id on each iteration.

Build a fixed thread pool (500 threads here) and submit the task to it 300 times in a loop, so in total the id is generated 30,000 (100 * 300) times.

Use a CountDownLatch to help with timing: the thread pool executes tasks asynchronously, so simply computing end - begin on the main thread would finish while worker threads may still be running.

CountDownLatch lets each asynchronous task signal completion; latch.await() blocks until all of them have finished.

@Resource
private RedisIdWorker redisIdWorker;

private final ExecutorService es = Executors.newFixedThreadPool(500);

@Test
public void testIdWorker() throws InterruptedException {
    CountDownLatch latch = new CountDownLatch(300);

    Runnable task = () -> {
        for (int i = 0; i < 100; ++i) {
            long id = redisIdWorker.nextId("order");
            System.out.println("id:" + id);
        }
        latch.countDown();
    };

    long begin = System.currentTimeMillis();
    for (int i = 0; i < 300; ++i) {
        es.submit(task);
    }
    latch.await();
    long end = System.currentTimeMillis();
    System.out.println("time cost:" + (end - begin));
}

Place an order with a coupon


oversold problem

One thread queries the stock, and before it has deducted it another thread runs the same query. Because the first thread has not yet deducted the stock, the later thread also passes the check and places an order, so more orders are created than there is stock.


Use optimistic locking to solve the oversold problem: the database update only succeeds if the remaining stock is still greater than 0 at the moment the UPDATE executes.

public class IVoucherOrderServiceImpl extends ServiceImpl<VoucherOrderMapper, VoucherOrder> implements IVoucherOrderService {

    @Resource
    private ISeckillVoucherService seckillVoucherService;

    @Resource
    RedisIdWorker redisIdWorker;

    // Two tables are involved, so a transaction keeps the operations consistent
    @Transactional
    @Override
    public Result secKillVoucher(Long voucherId) {
        // 1. Query the voucher
        SeckillVoucher voucher = seckillVoucherService.getById(voucherId);
        // 2. Check whether the flash sale has started
        if (voucher.getBeginTime().isAfter(LocalDateTime.now())) {
            return Result.fail("The flash sale has not started yet!");
        }
        // 3. Check whether the flash sale has ended
        if (voucher.getEndTime().isBefore(LocalDateTime.now())) {
            return Result.fail("The flash sale has already ended!");
        }
        // 4. Check whether there is enough stock
        if (voucher.getStock() < 1) {
            // Insufficient stock
            return Result.fail("Insufficient stock!");
        }
        // 5. Deduct stock
        // Optimistic locking: the update only succeeds if the remaining stock is still greater than 0.
        boolean success = seckillVoucherService.update()
                .setSql("stock = stock - 1")
                .eq("voucher_id", voucherId)
                .gt("stock", 0)
                .update();
        if (!success) {
            return Result.fail("Insufficient stock!");
        }
        // 6. Create the order
        VoucherOrder voucherOrder = new VoucherOrder();
        // 6.1 Order id, produced by the global id generator
        long orderId = redisIdWorker.nextId("order");
        voucherOrder.setId(orderId);
        // 6.2 User id
        Long userId = UserHolder.getUser().getId();
        voucherOrder.setUserId(userId);
        // 6.3 Voucher id
        voucherOrder.setVoucherId(voucherId);
        save(voucherOrder);

        // 7. Return the order id
        return Result.ok(orderId);
    }
}

One person, one order

Requirement: modify the coupon flash-sale business so that the same user can place at most one order.
Main points:
1. To guarantee one order per person, check whether an order already exists for the given user id and coupon id. This check-then-create sequence must be locked, otherwise concurrent requests can slip through.

2. The lock object can be the string form of the user id; calling intern() returns the canonical instance from the string constant pool, so all requests of the same user lock on the same object.

3. The lock must only be released after the transaction has committed, so it is best to place the lock around the whole transactional method call rather than inside it.
4. Calling the transactional method through this bypasses the Spring proxy and makes @Transactional ineffective; the fix is to call the method through the proxy object.

(1) Add dependencies:

<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjweaver</artifactId>
</dependency>

(2) Expose the proxy object in the startup class (see the snippet below):
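
A minimal sketch, assuming a Spring Boot startup class (the class name is illustrative): exposeProxy = true makes the current proxy reachable via AopContext.

@EnableAspectJAutoProxy(exposeProxy = true)
@SpringBootApplication
public class HmDianPingApplication {

    public static void main(String[] args) {
        SpringApplication.run(HmDianPingApplication.class, args);
    }
}
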
(3) In the service method, obtain the proxy from AopContext and call the transactional method through it:

  synchronized (userId.toString().intern()) {
      // Call through the proxy so that @Transactional still takes effect
      IVoucherOrderService proxy = (IVoucherOrderService) AopContext.currentProxy();
      return proxy.createVoucherOrder(voucherId);
  }

The complete logic is as follows:

public class IVoucherOrderServiceImpl extends ServiceImpl<VoucherOrderMapper, VoucherOrder> implements IVoucherOrderService {

    @Resource
    private ISeckillVoucherService seckillVoucherService;

    @Resource
    RedisIdWorker redisIdWorker;

    @Override
    public Result secKillVoucher(Long voucherId) {
        // 1. Query the voucher
        SeckillVoucher voucher = seckillVoucherService.getById(voucherId);
        // 2. Check whether the flash sale has started
        if (voucher.getBeginTime().isAfter(LocalDateTime.now())) {
            return Result.fail("The flash sale has not started yet!");
        }
        // 3. Check whether the flash sale has ended
        if (voucher.getEndTime().isBefore(LocalDateTime.now())) {
            return Result.fail("The flash sale has already ended!");
        }
        // 4. Check whether there is enough stock
        if (voucher.getStock() < 1) {
            // Insufficient stock
            return Result.fail("Insufficient stock!");
        }
        // User id
        Long userId = UserHolder.getUser().getId();
        // intern() returns the canonical String from the constant pool, so the same
        // user id always maps to the same lock object
        synchronized (userId.toString().intern()) {
            // Call through the proxy so that @Transactional still takes effect
            IVoucherOrderService proxy = (IVoucherOrderService) AopContext.currentProxy();
            return proxy.createVoucherOrder(voucherId);
        }
    }

    // Two tables are involved, so a transaction keeps the operations consistent
    @Transactional
    public Result createVoucherOrder(Long voucherId) {
        // 5. One person, one order
        Integer count = query()
                .eq("user_id", UserHolder.getUser().getId())
                .eq("voucher_id", voucherId)
                .count();
        if (count > 0) {
            // The user has already bought this voucher
            return Result.fail("The user has already purchased this voucher once!");
        }
        // 6. Deduct stock
        // Optimistic locking: the update only succeeds if the remaining stock is still greater than 0.
        boolean success = seckillVoucherService.update()
                .setSql("stock = stock - 1")
                .eq("voucher_id", voucherId)
                .gt("stock", 0)
                .update();
        if (!success) {
            return Result.fail("Insufficient stock!");
        }

        // 7. Create the order
        VoucherOrder voucherOrder = new VoucherOrder();
        // 7.1 Order id, produced by the global id generator
        long orderId = redisIdWorker.nextId("order");
        voucherOrder.setId(orderId);
        // 7.2 User id
        Long userId = UserHolder.getUser().getId();
        voucherOrder.setUserId(userId);
        // 7.3 Voucher id
        voucherOrder.setVoucherId(voucherId);
        save(voucherOrder);

        // 8. Return the order id
        return Result.ok(orderId);
    }
}

The above solution still breaks down in cluster mode: the lock object is the interned user-id string in the JVM's string constant pool, and each server in the cluster runs its own JVM, so the lock object is not shared across servers.

The solution is to use distributed locks.

distributed lock

Distributed lock: a lock that is visible to, and mutually exclusive among, multiple processes in a distributed system or cluster.
There are three common distributed lock implementation schemes:

  • Based on MySQL's own mutual-exclusion (locking) mechanism
  • Based on Redis, using mutual-exclusion commands such as SETNX
  • Based on Zookeeper, using the uniqueness and ordering of its nodes

Implementing a distributed lock with the Redis SETNX command

Assuming the server cluster shares a single third-party Redis instance, a key-value pair on Redis (key lock, value threadId) can represent the lock object.

Simulate acquiring a lock:

  • Mutual exclusion: only one thread can acquire the lock.
  • Non-blocking: try once; return true on success and false on failure.
  • To avoid a failed release leaving the lock held forever (so that later requests could never acquire it), give the lock a TTL so that it is released automatically when it expires.
    Redis command:
    SET lock threadId NX EX 10

Simulate releasing the lock: simply delete the lock key directly:
DEL lock

The Java implementation is below. Points to note:

  1. A StringRedisTemplate is needed for the Redis operations, and different businesses should use different locks, so the business name is included in the lock key; both are passed in through the constructor.
  2. tryLock() simulates acquiring the lock: it returns a boolean indicating whether the lock was acquired, and the caller specifies the lock's TTL. setIfAbsent(KEY_PREFIX + name, threadId, timeoutSec, TimeUnit.SECONDS) writes the lock's key-value pair: the key is the lock prefix plus the business name, and the value is the thread id.
  3. unlock() simulates releasing the lock by deleting the key-value pair that represents the lock.
public class SimpleRedisLock implements ILock {

    private String name;
    private StringRedisTemplate stringRedisTemplate;

    public SimpleRedisLock(String name, StringRedisTemplate stringRedisTemplate) {
        this.name = name;
        this.stringRedisTemplate = stringRedisTemplate;
    }

    private static final String KEY_PREFIX = "lock:";

    @Override
    public boolean tryLock(long timeoutSec) {
        // Get the thread identifier (getId() returns a long, so convert it to a String)
        String threadId = String.valueOf(Thread.currentThread().getId());
        // Acquire the lock
        Boolean success = stringRedisTemplate
                .opsForValue()
                .setIfAbsent(KEY_PREFIX + name, threadId, timeoutSec, TimeUnit.SECONDS);
        return Boolean.TRUE.equals(success);
    }

    @Override
    public void unlock() {
        // Release the lock
        stringRedisTemplate.delete(KEY_PREFIX + name);
    }
}

Solving the problem of deleting the lock by mistake

The distributed lock above can delete another thread's lock by mistake. The scenario is:

  • Thread 1 acquires the lock, but its business logic blocks for longer than the lock's automatic release time.
  • After the lock is released automatically, thread 2 acquires it and starts executing. While thread 2 is still running, thread 1 finishes its business and releases the lock, thereby deleting thread 2's lock.
  • Because thread 1 deleted the lock, thread 3 can now acquire it. Thread 2 and thread 3 then execute in parallel, which violates the mutual exclusion of the lock.

The fix is to check ownership before deleting: when releasing the lock, first verify that the lock currently stored in Redis is the one this thread acquired.

There are two main modifications:

  1. When acquiring the lock, store a unique thread identifier. In a cluster, thread ids from different JVMs may collide, so a UUID is prepended to the thread id to guarantee uniqueness.
  2. When releasing the lock, compare the stored identifier with the current thread's identifier; if they do not match, do not release the lock (so another thread's lock is never deleted by mistake).
    // Hutool's UUID: toString(true) returns the id without dashes
    private static final String ID_PREFIX = UUID.randomUUID().toString(true) + "-";

    @Override
    public boolean tryLock(long timeoutSec) {
        // Get the thread identifier
        String threadId = ID_PREFIX + Thread.currentThread().getId();
        // Acquire the lock
        Boolean success = stringRedisTemplate
                .opsForValue()
                .setIfAbsent(KEY_PREFIX + name, threadId, timeoutSec, TimeUnit.SECONDS);
        return Boolean.TRUE.equals(success);
    }

    @Override
    public void unlock() {
        // Get the thread identifier
        String threadId = ID_PREFIX + Thread.currentThread().getId();
        // Get the identifier stored in the lock
        String id = stringRedisTemplate.opsForValue().get(KEY_PREFIX + name);
        // Check whether the identifiers match
        if (threadId.equals(id)) {
            // Release the lock
            stringRedisTemplate.delete(KEY_PREFIX + name);
        }
    }

Making multiple commands atomic with a Lua script

Checking whether the lock identifier matches and releasing the lock are two separate commands, so the sequence is not atomic; in the gap between them (for example, if the lock expires after the check and is acquired by another thread before the delete), the thread-safety problem reappears.
The solution is to use a Lua script so that the check and the delete execute atomically.

Calling Lua scripts from Redis

  • Redis EVAL executes a script, and inside a Lua script redis.call() executes Redis commands.
  • When using EVAL, specify how many of the following arguments are keys; the KEYS list comes first, then the ARGV list, and both can be used directly inside the script. Note that Lua array indices start at 1, so in the example below KEYS[1] is name and ARGV[1] is Rose.
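
For instance, a one-line script that sets a key, with one key argument (name) and one ordinary argument (Rose):

EVAL "return redis.call('set', KEYS[1], ARGV[1])" 1 name Rose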

Using Lua scripts in Java

1. Write the unlock.lua script under the resources directory, for example:
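
A sketch of the script (the logic described above: compare the stored identifier with the caller's identifier and delete only on a match):

-- KEYS[1]: the lock key; ARGV[1]: the identifier of the thread releasing the lock
if (redis.call('get', KEYS[1]) == ARGV[1]) then
    -- identifiers match: this thread owns the lock, so delete it
    return redis.call('del', KEYS[1])
end
-- identifiers do not match: do nothing
return 0
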
2. Configure the Redis script object DefaultRedisScript, specifying the script location and the return type:

    private static final DefaultRedisScript<Long> UNLOCK_SCRIPT;
    static {
        UNLOCK_SCRIPT = new DefaultRedisScript<>();
        // Specify the script location
        UNLOCK_SCRIPT.setLocation(new ClassPathResource("unlock.lua"));
        // Set the return type
        UNLOCK_SCRIPT.setResultType(Long.class);
    }

3. In unlock(), use stringRedisTemplate to execute UNLOCK_SCRIPT, so that the check-and-delete runs as one atomic operation.

@Override
public void unlock() {
    // Call the Lua script so the check and the delete happen atomically
    stringRedisTemplate.execute(
            UNLOCK_SCRIPT,
            Collections.singletonList(KEY_PREFIX + name),
            ID_PREFIX + Thread.currentThread().getId());
}

Redisson

The Lua-optimized Redis distributed lock already meets the needs of most scenarios, but it still has some shortcomings:

  • 1. The lock is not reentrant.
  • 2. There is no retry when acquiring the lock fails.
  • 3. The timeout release avoids deadlock, but if the business runs longer than the TTL the lock is released early, which is a safety risk.
  • 4. Master-slave consistency: with a Redis master-slave cluster (reads go to slaves, writes go to the master), replication has a delay. If the master goes down before a slave has synchronized the lock key, mutual exclusion can fail.

To get these capabilities, we can use Redisson, a Redis-based framework that provides distributed locks and other distributed objects.
Official website address

Redisson Quick Start

  1. Introduce the Redisson dependency.
  2. Configure the Redisson client: in a configuration class, register a RedissonClient bean with @Bean so it is managed by the Spring IoC container.
  3. Use Redisson's distributed locks (a sketch of steps 2 and 3 follows this list).
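
A minimal sketch of steps 2 and 3, assuming a single Redis node at redis://127.0.0.1:6379 (the address, key name, and timeouts are examples; the classes come from the org.redisson packages):

@Configuration
public class RedissonConfig {

    @Bean
    public RedissonClient redissonClient() {
        // Configure the client for a single Redis node
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        return Redisson.create(config);
    }
}

// Using the lock somewhere in a service:
@Resource
private RedissonClient redissonClient;

public void example() throws InterruptedException {
    RLock lock = redissonClient.getLock("lock:order:1");
    // Wait up to 1 second for the lock; release it automatically after 10 seconds
    boolean isLock = lock.tryLock(1, 10, TimeUnit.SECONDS);
    if (isLock) {
        try {
            // ... business logic ...
        } finally {
            lock.unlock();
        }
    }
}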

Redisson reentrant lock principle

The idea is similar to reentrant locks such as synchronized. The lock is stored in Redis as a hash: the field is the identifier of the thread holding the lock, and the value is the number of times that thread has acquired it (the reentry count).

  1. First check whether the lock exists. If not, acquire it by writing the thread identifier with a count of 1, and set the lock's validity period.
  2. If the lock already exists, check whether the identifier stored in the lock belongs to the current thread. If it does, increase the count by 1; otherwise acquiring the lock fails.
  3. When the business finishes, decrease the count by 1. Only when the count reaches 0 is the lock actually released; otherwise just reset its validity period.
  4. All of the above must be atomic, so Redisson implements it with Lua scripts, roughly like the sketch below.
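
A simplified sketch of the acquire logic (not Redisson's exact source; KEYS[1] is the lock key, ARGV[1] the TTL in milliseconds, ARGV[2] the thread identifier):

if (redis.call('exists', KEYS[1]) == 0) then
    -- lock does not exist: acquire it with a count of 1 and set the TTL
    redis.call('hset', KEYS[1], ARGV[2], '1');
    redis.call('pexpire', KEYS[1], ARGV[1]);
    return nil;
end;
if (redis.call('hexists', KEYS[1], ARGV[2]) == 1) then
    -- held by the current thread: re-enter by incrementing the count, refresh the TTL
    redis.call('hincrby', KEYS[1], ARGV[2], '1');
    redis.call('pexpire', KEYS[1], ARGV[1]);
    return nil;
end;
-- held by another thread: return the remaining TTL to the caller
return redis.call('pttl', KEYS[1]);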

Redisson's lock retry and Watchdog mechanism

  1. Redisson's distributed lock supports retrying the acquisition: when trying to acquire the lock you can pass in a maximum wait time (waitTime) and the lock's automatic release time (leaseTime).
  2. When trying to acquire the lock, success returns nil/null; otherwise the remaining TTL of the existing lock (pttl, in milliseconds) is returned. If the remaining wait time is still greater than 0, the thread subscribes to the lock-release channel and waits for the release signal.
  3. Correspondingly, releasing the lock publishes a release message, which every subscribed thread receives. On receiving it, a thread first checks whether its wait has already timed out; if so, acquisition fails, otherwise it tries to acquire the lock again.
  4. If no leaseTime is specified (i.e. it is -1), then once the lock is acquired Redisson starts its watchdog mechanism, which keeps renewing the lock's validity period: a renewal task is scheduled to run after one third of the release time, resets the TTL, and reschedules itself, so the TTL is refreshed every third of the lease. The watchdog is cancelled when the lock is released. A brief usage sketch follows.
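
A brief usage sketch of the two modes (RLock is Redisson's lock API; the key name and timeouts are examples):

RLock lock = redissonClient.getLock("lock:order:" + userId);
// No leaseTime argument: the watchdog keeps renewing the default 30-second TTL until unlock() is called.
boolean locked = lock.tryLock(1, TimeUnit.SECONDS);   // wait up to 1 second, retrying on release signals
// With an explicit leaseTime, e.g. lock.tryLock(1, 10, TimeUnit.SECONDS), the lock simply
// expires after 10 seconds and the watchdog is not started.
if (locked) {
    try {
        // ... business logic ...
    } finally {
        lock.unlock();
    }
}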

Redisson's multiLock

Use multiple independent Redis nodes and create a lock on each of them. Each time a lock is needed, it must be acquired successfully on all of the Redis nodes before the acquisition as a whole counts as successful.

This effectively joins the individual locks into one combined lock; the drawbacks are higher operation and maintenance cost and a more complex implementation.
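
A short sketch, assuming three RedissonClient beans, each pointing at an independent Redis node (client and key names are examples):

RLock lock1 = redissonClient1.getLock("lock:order");
RLock lock2 = redissonClient2.getLock("lock:order");
RLock lock3 = redissonClient3.getLock("lock:order");
// The combined lock only reports success when it is acquired on all three nodes
RLock multiLock = redissonClient1.getMultiLock(lock1, lock2, lock3);
boolean locked = multiLock.tryLock();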


@BeforeEach is an annotation from unit-testing frameworks such as JUnit; it marks setup code that runs before each test method.

Typical use: annotate a setup method (for example, one that builds the multi-lock above) with @BeforeEach so it runs before every test.

Flash-sale optimization

Redis cache decoupling

The original seckill flow first checks the flash-sale stock, then queries the order table to enforce one order per person (thereby locking in the purchase qualification), and only then updates the stock and creates the order in the database.

The whole process is a long chain of serial steps that hit the database repeatedly, so the response is slow.

In fact, the business can be split into two steps: locking in (qualifying for) the flash coupon, and generating the coupon order. The request that locks in the coupon has the strict high-concurrency requirements, and it can be implemented with the Redis cache. Once the coupon is locked in, it is like a fast-food counter handing the user a ticket: the ticket's information is saved into a blocking queue, and an asynchronous thread consumes the orders in the blocking queue and writes the corresponding order records into the database.

In the concrete implementation, a Lua script performs the Redis operations so that the checks and deductions execute atomically, and an asynchronous thread consumes the blocking queue and writes the orders to the database at a pace the database can handle.
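
A rough sketch of the asynchronous consumer side (names such as orderTasks and handleVoucherOrder are illustrative, not taken from the original code):

// Blocking queue holding the "tickets" produced by the Redis qualification step
private final BlockingQueue<VoucherOrder> orderTasks = new ArrayBlockingQueue<>(1024 * 1024);
// Single background thread that drains the queue
private static final ExecutorService SECKILL_ORDER_EXECUTOR = Executors.newSingleThreadExecutor();

@PostConstruct
private void init() {
    SECKILL_ORDER_EXECUTOR.submit(() -> {
        while (true) {
            try {
                // take() blocks until an order is available
                VoucherOrder voucherOrder = orderTasks.take();
                // write the order into the database
                handleVoucherOrder(voucherOrder);
            } catch (Exception e) {
                // log the failure and keep consuming
            }
        }
    });
}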


Redis message queue

Generating the coupon orders from a blocking queue has a big problem: the queue lives in JVM memory. Under high concurrency, when many coupons are issued, a queue whose capacity is too small overflows and orders are lost, while setting the capacity too large is likely to cause an OOM.

For this reason, a message queue should be used instead to store the coupon-order messages produced by the Redis step.

For large-scale message processing, Kafka, RabbitMQ, or RocketMQ can be used.

For small-scale scenarios, you can use the message queue service that comes with Redis:


Based on the List structure

Use BRPOP or BLPOP for a blocking pop, which turns a Redis list into a simple queue. For example:
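
A minimal example (the queue name and message are placeholders):

LPUSH queue.orders order1      (producer pushes a message onto the left of the list)
BRPOP queue.orders 20          (consumer blocks for up to 20 seconds waiting to pop from the right)
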
Advantages and disadvantages of a List-based message queue:
Advantages:

  • Using Redis storage, not limited by the upper limit of JVM memory
  • Based on the Redis persistence mechanism, data safety is guaranteed
  • The order of the messages can be guaranteed

Disadvantages:

  • If an exception occurs during message processing, the message is lost
  • Only single-consumer mode is supported.

PubSub-based message queue

Compared with the List-based queue, a PubSub-based message queue allows consumers to subscribe to one or more channels; after a producer sends a message to a channel, all subscribers of that channel receive it. For example:
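
A minimal example (channel and message names are placeholders). Note that Pub/Sub does not persist messages: if no consumer is subscribed at the moment of PUBLISH, the message is simply lost.

SUBSCRIBE order.queue            (consumer subscribes to a channel)
PSUBSCRIBE order.*               (or subscribes to a pattern of channels)
PUBLISH order.queue order1       (producer publishes a message to the channel)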

Stream-based message queue


The latest messages can be read in blocking mode using the special id $ (read only messages that arrive after the call starts).
However, there is a risk of missing messages: while one message is being read and processed, several more may arrive, and the next $-read only sees the most recently sent one. For example:
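
A minimal example (stream name and fields are placeholders):

XADD orders * voucherId 10 userId 1             (append a message to the stream "orders")
XREAD COUNT 1 BLOCK 2000 STREAMS orders $       (block up to 2 seconds and read only newly arriving messages)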



Stream-based message queue - consumer group

Consumer group: divides multiple consumers into a group that listens to the same queue. It has the following characteristics (example commands follow the list):

  • Message dispatching: the messages in the queue are distributed among the different consumers in the group instead of being consumed repeatedly, which speeds up processing.
  • Message mark: the consumer group maintains an id recording the last message delivered. Even if a consumer crashes and restarts, it continues reading after that id, so every message gets consumed.
  • Message acknowledgement: after a consumer fetches a message, the message is in a pending state and stored in a pending-list. Once processed, the consumer must confirm it with XACK, marking it as handled and removing it from the pending-list.
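
Example commands (stream, group, consumer names, and the message id are placeholders):

XGROUP CREATE orders g1 0 MKSTREAM                              (create group g1 on stream "orders", creating the stream if needed)
XREADGROUP GROUP g1 c1 COUNT 1 BLOCK 2000 STREAMS orders >      (consumer c1 reads the next message not yet delivered to the group)
XACK orders g1 1659430392-0                                     (acknowledge a processed message by id)
XPENDING orders g1 - + 10                                       (inspect up to 10 unacknowledged messages)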



Source: blog.csdn.net/baiduwaimai/article/details/131520483