Solving database consistency problems under concurrency with a CAS-style algorithm and double-checked locking

Background

Recently I ran into a database concurrency problem at work. Below I describe it in abstract form, with the irrelevant business details stripped away.

A database table named book stores the read count of each book; the table starts out empty. When a user calls the interface, we check whether a record for that book already exists in the book table. If it does not, we insert a new record and set its read count to 1; when the user reads the book again, the same interface simply increments the existing record's read count by 1 instead of inserting a new row.

Where the concurrency problems arise

Let's look at the pseudo-code first:

public void addOrUpdateBook(Book book){
    Book oldBook = this.bookMapper.selectByBookName(book.getBookName());
    if (oldBook == null){ // 1. check whether a record for this book exists
        this.bookMapper.insertSelective(book);  // 2. insert a new book record
    }else{
        Integer updateCount = this.bookMapper.updateReadFrequency(book); // 3. increment the read count
    }
}

1. Duplicate book records get inserted

Suppose thread A checks at [1] that no record for the book exists and then creates it at [2]. If thread B enters the method before thread A has committed its transaction, thread B's check at [1] also finds no record, because A's insert is not yet visible, so thread B creates the record as well. The end result: two records for the same book exist in the database.

2. The read count is updated inaccurately

Suppose the book "The Art of Java Concurrency Programming" has a read count of 10. Thread A reads the record and increments the count by 1, but before A commits its transaction, thread B also reads the record and increments the count by 1. Both threads add 1 on top of the old value 10, so after both finish the read count is 11 instead of the correct value 12.
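To make the race concrete, here is a minimal in-memory sketch of the same lost-update pattern (plain Java, not the original project code; a shared field stands in for the database row):

public class LostUpdateDemo {

    // stands in for the read_frequency column of the book row
    private static int readFrequency = 10;

    public static void main(String[] args) throws InterruptedException {
        Runnable incrementOnce = () -> {
            int oldValue = readFrequency;                 // both threads may read 10 here
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
            readFrequency = oldValue + 1;                 // both then write back 11
        };
        Thread a = new Thread(incrementOnce);
        Thread b = new Thread(incrementOnce);
        a.start();
        b.start();
        a.join();
        b.join();
        // usually prints 11 instead of the correct 12
        System.out.println("readFrequency = " + readFrequency);
    }
}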

Thinking about solutions

By now we can all see that concurrent access can break the consistency of the data in the database. So how do we fix it?

1. Lock the entire method

The simplest approach is to put the synchronized keyword on the method and lock the whole thing. Requests then execute strictly one at a time, so neither the insert nor the update has any concurrency problem.

public synchronized void addOrUpdateBook(Book book){
    Book oldBook = this.bookMapper.selectByBookName(book.getBookName());
    if (oldBook == null){ // 1. check whether a record for this book exists
        this.bookMapper.insertSelective(book);  // 2. insert a new book record
    }else{
        Integer updateCount = this.bookMapper.updateReadFrequency(book); // 3. increment the read count
    }
}

2. Double-checked locking

The method above has the worst performance: every request has to acquire the lock, and all the others are blocked outside the method, whether or not there is actually any contention. We can use double-checked locking to fix this. The idea is simple: only lock the piece of code where the concurrency problem can actually occur, i.e. turn the synchronized method into a synchronized block and stop locking the code that does not need it.

public void addOrUpdateBook(BookQuery query){
    Book b = Book.builder().bookName(query.getBookName()).build();
    Book oldBook = this.bookMapper.selectByBookName(query.getBookName());
    if (oldBook == null){ // 1. check whether a record for this book exists
        // lock only the insert path
        synchronized (this){
            // check again inside the lock
            oldBook = this.bookMapper.selectByBookName(query.getBookName());
            if (oldBook == null){
                BeanUtil.copyProperties(query,b);
                this.bookMapper.insertSelective(b); // 2. insert a new book record
            }else{
                Integer updateCount = this.bookMapper.updateReadFrequency(query); // 3. increment the read count
            }
        }
    }else{
        Integer updateCount = this.bookMapper.updateReadFrequency(query); // 4. increment the read count
    }
}

Now consider the race again: thread A inserts the new record at [2] while thread B's check at [1] also finds no record, so B blocks outside the synchronized block. Once thread A finishes and releases the lock, thread B acquires it, but the second check inside the block now finds that the record exists, so thread B updates the read count instead of inserting another record. All later requests find the record at the first check and go straight to the update.

That solves the insert concurrency problem, and it performs better than synchronizing the whole method. But the update concurrency problem is still unresolved, so why didn't I also lock the code that updates the read count? Because I don't think it's necessary: to reduce thread context switching, lock-free concurrent programming is generally preferred. So how do we do that? Next I'll borrow the idea of the CAS algorithm to prevent the update concurrency problem.

3. A CAS-style algorithm

First we add a field to the table: a version number. Yes, you probably thought of optimistic locking the moment you saw that, and a CAS-style update is indeed a form of optimistic locking. The principle is compare-and-swap: when updating, if the old value we hold matches the value currently in the database, we are allowed to replace it with our new value. We then wrap the update in a retry loop so it keeps going until it eventually succeeds. You might worry that the loop hurts performance. In practice it is fine: it at least avoids thread context switching, and with typical request volumes (at our company, anyway) the contention is rarely extreme; at truly high concurrency other approaches would likely be needed. Here is the code:

public Boolean addOrUpdateBook(BookQuery query) {
    boolean flag = true;
    // use double-checked locking to handle the insert concurrency problem
    Book b = Book.builder().bookName(query.getBookName()).build();
    Book oldBook = this.bookMapper.selectOne(b);
    if (oldBook == null){
        // lock
        synchronized (this){
            oldBook = this.bookMapper.selectOne(b);
            if (oldBook == null){
                BeanUtil.copyProperties(query,b);
                this.bookMapper.insertSelective(b);
            }else{
                updateBook(query);
            }
        }
    }else{
        updateBook(query);
    }
    return flag;
}

/**
 * Handle the update concurrency problem with a CAS-style lock-free algorithm
 * (retry loop + version number)
 * @param query
 */
private void updateBook(BookQuery query){
    // CAS-style retry
    for (;;){
        // fetch the current record's version number
        Integer version = this.bookMapper.getVersionByBookName(query.getBookName());
        query.setVersion(version);
        // update the read count by book name and version number
        Integer updateCount = this.bookMapper.updateBookByVersion(query);
        if (updateCount != null && updateCount.equals(1)){
            // break out of the loop once the update succeeds
            break;
        }
    }
}

The SQL that updates the read count:

/**
  * Update the book by book name and version number
  * @param book
  * @return
*/
@Update("update book set version = version+1,read_frequency = read_frequency+1 where book_name = #{bookName} and version = #{version}")
Integer updateBookByVersion(BookQuery book);

I won't walk through this code line by line; the comments make the idea clear enough, it really is that straightforward.
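One note for completeness: the post never shows the getVersionByBookName query that the retry loop calls, so the following declaration (meant to sit in the same BookMapper interface as updateBookByVersion) is an assumption based on how it is used:

// Hypothetical sketch: the original project's declaration is not shown,
// so the annotation and signature here are inferred from the calling code.
@Select("select version from book where book_name = #{bookName}")
Integer getVersionByBookName(@Param("bookName") String bookName);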

4. A Redis distributed lock with Redisson

Back to the locking. Nowadays, when we build microservices, each service usually runs as multiple instances rather than the single instance of a monolith. A JVM-level lock is only effective within one application instance, so with multiple instances the insert concurrency problem comes back. At that point we need a distributed lock.

The two main ways I know of to implement a distributed lock are Redis-based locks and Zookeeper-based locks. A quick comparison of their pros and cons: in terms of reliability, a Zookeeper distributed lock is better than a Redis one; in terms of performance, a Redis distributed lock beats Zookeeper, since Redis works purely in memory and is advertised as handling on the order of 100,000 reads and writes per second. So I chose the Redis-based distributed lock.

There is one catch, though. Anyone who has used redis knows that redis only guarantees that a single command executes atomically; it cannot make several commands atomic as a group. The usual practice is to give the lock a timeout, so that requests do not wait for it unconditionally and block forever, driving CPU usage up. But if we build that out of separate plain commands (for example SETNX followed by EXPIRE), acquiring the lock and setting its timeout are not atomic. There are two better-known solutions: redis plus a lua script, or the open-source framework Redisson. Since simple is best, let's see how to build the distributed lock with Redisson.
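For reference, here is a minimal sketch of the redis + lua route just mentioned: acquire with a single SET-if-absent command that carries the expiry, and release with a lua script that only deletes the key if we still own it. It assumes Spring Data Redis 2.1+ and an injected StringRedisTemplate, and it is only an illustration of the idea; the rest of this post uses Redisson instead:

import java.time.Duration;
import java.util.Collections;
import java.util.UUID;

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.core.script.DefaultRedisScript;

public class SimpleRedisLock {

    // lua makes "check the owner, then delete" a single atomic step on the server
    private static final String UNLOCK_LUA =
            "if redis.call('get', KEYS[1]) == ARGV[1] then "
          + "  return redis.call('del', KEYS[1]) "
          + "else return 0 end";

    private final StringRedisTemplate redisTemplate;

    public SimpleRedisLock(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    /** Try to acquire the lock; SET with NX + expiry is one atomic command. */
    public String tryLock(String key, Duration ttl) {
        String token = UUID.randomUUID().toString();
        Boolean ok = redisTemplate.opsForValue().setIfAbsent(key, token, ttl);
        return Boolean.TRUE.equals(ok) ? token : null;
    }

    /** Release the lock only if this caller still owns it. */
    public boolean unlock(String key, String token) {
        Long result = redisTemplate.execute(
                new DefaultRedisScript<>(UNLOCK_LUA, Long.class),
                Collections.singletonList(key), token);
        return Long.valueOf(1L).equals(result);
    }
}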

1) Add the redis and redisson dependencies

<dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
      <groupId>org.redisson</groupId>
      <artifactId>redisson</artifactId>
      <version>3.11.3</version>
</dependency>

2) Create a Redisson utility class

package com.hyf.utils;

import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

/**
 * @author Howinfun
 * @desc Redisson utility class
 * @date 2019/9/2
 */
public class RedissonUtil {

    private static RedissonClient redissonClient;

    private RedissonUtil(){
        // build the Config needed for Redisson to implement a distributed lock
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379").setPassword("123456").setDatabase(1);
        // build the RedissonClient
        redissonClient = Redisson.create(config);
    }

    public static final RedissonUtil INSTANCE = new RedissonUtil();

    /**
     * Get the lock for the given resource name
     * @param name
     * @return
     */
    public RLock getLock(String name){
        return redissonClient.getLock(name);
    }
}

3) Modify the service code

public Boolean addOrUpdateBook(BookQuery query) {
    boolean flag = true;
    // use double-checked locking to handle the insert concurrency problem
    Book b = Book.builder().bookName(query.getBookName()).build();
    Book oldBook = this.bookMapper.selectOne(b);
    if (oldBook == null){
        RLock lock = RedissonUtil.INSTANCE.getLock("addOrUpdateBook");
        try {
            // try to acquire the lock: wait at most 10000 ms, auto-release 10000 ms after acquiring
            if (lock.tryLock(10000, 10000, TimeUnit.MILLISECONDS)){
                oldBook = this.bookMapper.selectOne(b);
                if (oldBook == null){
                    BeanUtil.copyProperties(query,b);
                    this.bookMapper.insertSelective(b);
                }else{
                    updateBook(query);
                }
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
            return false;
        }finally {
            // remember to release the lock, but only if this thread actually holds it
            if (lock.isHeldByCurrentThread()){
                lock.unlock();
            }
        }
    }else{
        updateBook(query);
    }
    return flag;
}

/**
  * Handle the update concurrency problem with a CAS-style lock-free algorithm
  * (retry loop + version number)
  * @param query
*/
private void updateBook(BookQuery query){
    // CAS-style retry
    for (;;){
        Integer version = this.bookMapper.getVersionByBookName(query.getBookName());
        query.setVersion(version);
        Integer updateCount = this.bookMapper.updateBookByVersion(query);
        if (updateCount != null && updateCount.equals(1)){
            break;
        }
    }
}

With this code in place we not only prevent both the insert and the update concurrency problems in the database, it also holds up in a distributed deployment.

In case you are not entirely convinced, let's verify it with JMeter.

1) First create a thread group: 10 threads in total, each executing once.

2) Create an HTTP request: the request body is read from a CSV data file.

3) Create an HTTP header manager: add Content-Type.

4) Create and point to the CSV data file.

5) Click Start.

Looking at the View Results Tree: every request succeeds with no errors.

Then check the console: there is exactly one insert SQL and nine update SQLs:

2019-09-03 09:28:39.888 DEBUG 8020 --- [nio-8888-exec-7] com.hyf.mapper.BookMapper.selectOne      : ==> Parameters: Java并发编程的艺术(String)
2019-09-03 09:28:39.889 DEBUG 8020 --- [nio-8888-exec-7] com.hyf.mapper.BookMapper.selectOne      : <==      Total: 0
2019-09-03 09:28:39.937 DEBUG 8020 --- [nio-8888-exec-7] c.hyf.mapper.BookMapper.insertSelective  : ==>  Preparing: INSERT INTO book ( id,book_name ) VALUES( ?,? ) 
2019-09-03 09:28:39.937 DEBUG 8020 --- [nio-8888-exec-7] c.hyf.mapper.BookMapper.insertSelective  : ==> Parameters: null, Java并发编程的艺术(String)
2019-09-03 09:28:39.939 DEBUG 8020 --- [nio-8888-exec-7] c.hyf.mapper.BookMapper.insertSelective  : <==    Updates: 1
2019-09-03 09:28:39.950 DEBUG 8020 --- [nio-8888-exec-6] c.h.m.BookMapper.updateBookByVersion     : ==>  Preparing: update book set version = version+1,read_frequency = read_frequency+1 where book_name = ? and version = ? 
2019-09-03 09:28:39.951 DEBUG 8020 --- [nio-8888-exec-6] c.h.m.BookMapper.updateBookByVersion     : ==> Parameters: Java并发编程的艺术(String), 1(Integer)
2019-09-03 09:28:39.953 DEBUG 8020 --- [nio-8888-exec-6] c.h.m.BookMapper.updateBookByVersion     : <==    Updates: 1
2019-09-03 09:28:39.957 DEBUG 8020 --- [nio-8888-exec-5] c.h.m.BookMapper.updateBookByVersion     : ==>  Preparing: update book set version = version+1,read_frequency = read_frequency+1 where book_name = ? and version = ? 
2019-09-03 09:28:39.957 DEBUG 8020 --- [nio-8888-exec-5] c.h.m.BookMapper.updateBookByVersion     : ==> Parameters: Java并发编程的艺术(String), 2(Integer)
2019-09-03 09:28:39.959 DEBUG 8020 --- [nio-8888-exec-5] c.h.m.BookMapper.updateBookByVersion     : <==    Updates: 1
......

Finally, look at the database: there is only one record, and its read count is 10, exactly as expected.
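If you prefer to reproduce the same check from plain Java rather than JMeter, here is a rough sketch. BookService, BookQuery and its setBookName setter are assumptions based on the code above, not part of the original project:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConcurrentBookTest {

    public static void runTest(BookService bookService) throws InterruptedException {
        int threads = 10;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(threads);
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                try {
                    // hypothetical setter; mirrors the bookName field used by the service
                    BookQuery query = new BookQuery();
                    query.setBookName("Java并发编程的艺术");
                    bookService.addOrUpdateBook(query);
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();   // wait until all 10 calls have finished
        pool.shutdown();
        // afterwards the book table should hold one row with read_frequency = 10
    }
}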

If you are interested in this demo, you can pull the project from GitHub or Gitee (码云); it also includes the JMeter test cases:
GitHub
码云

Summary

We programmers really should read more. I haven't read that many books myself, nor have I stuck with it very well, but since last month I've made up my mind to read properly. It's precisely because I've recently been reading "Java并发编程的艺术" (The Art of Java Concurrency Programming) that the analysis and solutions above came about!

If you are interested in my reading notes and mind maps, you can find them here: Java并发编程的艺术 - reading notes and mind maps

Origin: www.cnblogs.com/Howinfun/p/11612653.html