[Java Concurrency Tool Classes - Mutual Exclusion] ReadWriteLock (read-write lock)


A scenario that comes up often in development is "read much more than write." A cache is the classic example: caching improves performance precisely because the cached data is read frequently (read many) but rarely changes (write few).

For this scenario, the Java SDK's concurrent package provides a read-write lock, ReadWriteLock, as well as an even faster lock, StampedLock.

1. ReadWriteLock (read-write lock)

1.2 What is a read-write lock?

Read-write locks follow these three rules:

  • Allow multiple threads to read shared variables at the same time;
  • Only one thread is allowed to write shared variables;
  • If a thread is performing a write operation, other threads are prohibited from reading shared variables at this time.

The difference between a read-write lock and a mutex is that a read-write lock allows multiple threads to read at the same time.
What they have in common is that while a write is in progress, a read-write lock allows no other thread to read or write.
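To make these rules concrete, here is a minimal, self-contained sketch (the class and thread names are my own, not from the original): the two reader threads can hold the read lock at the same time, while the writer thread has to wait until both readers release it.

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockDemo {
    private static final ReadWriteLock rwl = new ReentrantReadWriteLock();

    public static void main(String[] args) {
        Runnable reader = () -> {
            rwl.readLock().lock();           // many readers may hold the read lock at once
            try {
                System.out.println(Thread.currentThread().getName() + " reading");
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                rwl.readLock().unlock();
            }
        };
        Runnable writer = () -> {
            rwl.writeLock().lock();          // exclusive: blocks until all readers finish
            try {
                System.out.println(Thread.currentThread().getName() + " writing");
            } finally {
                rwl.writeLock().unlock();
            }
        };
        new Thread(reader, "reader-1").start();
        new Thread(reader, "reader-2").start();
        new Thread(writer, "writer-1").start();
    }
}

Running this, both readers print "reading" almost immediately, and the writer prints "writing" only after the readers have released the read lock about a second later.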

1.3 Using a read-write lock to quickly implement a cache

To implement the cache we provide two methods: get() to read from the cache and put() to write to it. Here is the code:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class Cache<K, V> {
  // final field: the JMM forbids reordering the write to a final field (and the first
  // write to a member of the object it references) out of the constructor, as
  // mentioned in a previous article
  final Map<K, V> m = new HashMap<>();
  final ReadWriteLock rwl = new ReentrantReadWriteLock();
  final Lock r = rwl.readLock();  // read lock
  final Lock w = rwl.writeLock(); // write lock

  V get(K key) {           // read from the cache
    r.lock();
    try { return m.get(key); }
    finally { r.unlock(); }
  }

  V put(K key, V value) {  // write to the cache
    w.lock();
    try { return m.put(key, value); }
    finally { w.unlock(); }
  }
}
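A minimal usage sketch of this class (the key, value, and type arguments are only illustrative):

Cache<String, Integer> cache = new Cache<>();
cache.put("answer", 42);            // takes the write lock
Integer v = cache.get("answer");    // takes the read lock; many threads may read concurrently
System.out.println(v);              // prints 42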

Now let's flesh out the cache:

  1. The first problem to solve is cache initialization. There are two options: load all the data at once, or load it on demand. On-demand (lazy) loading is implemented below: on a cache miss, the data is queried from the database and written into the cache (before writing we must check again whether another thread has already written it):
class Cache<K, V> {
  final Map<K, V> m = new HashMap<>();
  final ReadWriteLock rwl = new ReentrantReadWriteLock();
  final Lock r = rwl.readLock();
  final Lock w = rwl.writeLock();

  V get(K key) {
    V v = null;
    r.lock();                 // read from the cache
    try {
      v = m.get(key);
    } finally {
      r.unlock();
    }
    if (v != null) {          // cache hit: return it
      return v;
    }
    w.lock();                 // cache miss: query the database
    try {
      // check again: another thread may already have queried
      // the database and filled the cache
      v = m.get(key);
      if (v == null) {
        v = ...;              // query the database (code omitted)
        m.put(key, v);
      }
    } finally {
      w.unlock();
    }
    return v;
  }
}
  2. We also need to keep the cached data consistent with the source data. Possible solutions:
    1. Time-out mechanism: once a cached entry exceeds its time limit it is treated as invalid, and the next access reloads it from the source into the cache (see the sketch after this list).
    2. Push-based invalidation: when the source data is modified, the change is propagated to the cache immediately and the latest data is stored there.
    3. A double-write scheme for the database and the cache.
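As an illustration of the time-out idea in solution 1, here is a minimal sketch (ExpiringCache, TimedValue, and the 5-second TTL are my own illustrative choices, not from the original): each value is stored together with its load time, and get() treats an entry older than the TTL as a miss, so the next access reloads it from the source through the lazy-loading path shown above.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ExpiringCache<K, V> {
  // Wraps a cached value together with the time it was loaded.
  static class TimedValue<T> {
    final T value;
    final long loadedAt = System.currentTimeMillis();
    TimedValue(T value) { this.value = value; }
    boolean expired(long ttlMillis) {
      return System.currentTimeMillis() - loadedAt > ttlMillis;
    }
  }

  private static final long TTL_MILLIS = 5_000;   // arbitrary expiry period
  private final Map<K, TimedValue<V>> m = new HashMap<>();
  private final ReadWriteLock rwl = new ReentrantReadWriteLock();

  V get(K key) {
    rwl.readLock().lock();
    try {
      TimedValue<V> tv = m.get(key);
      // An expired entry is treated like a missing one, so the caller falls
      // back to the database and refreshes the cache via put().
      return (tv == null || tv.expired(TTL_MILLIS)) ? null : tv.value;
    } finally {
      rwl.readLock().unlock();
    }
  }

  void put(K key, V value) {
    rwl.writeLock().lock();
    try { m.put(key, new TimedValue<>(value)); }
    finally { rwl.writeLock().unlock(); }
  }
}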
1.4 Upgrade and downgrade of read-write lock
1.4.1 ReadWriteLock does not support lock upgrading!

First look at an example code:

r.lock();             // read from the cache
try {
  v = m.get(key);
  if (v == null) {
    w.lock();         // try to upgrade to the write lock
    try {
      // check again and update the cache (details omitted)
    } finally {
      w.unlock();
    }
  }
} finally {
  r.unlock();
}

In the code above, the thread acquires the read lock first and then tries to acquire the write lock without releasing the read lock. This is called lock upgrading, but **ReadWriteLock does not support it!** Because the read lock has not been released, the attempt to acquire the write lock waits forever, the thread blocks, and the symptom on the server is often suspiciously low CPU utilization.
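A small sketch of why this hangs (my own demo, not from the original): while the current thread still holds the read lock, even tryLock() on the write lock returns false, because the write lock cannot be granted while any read lock is held.

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class UpgradeDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
        rwl.readLock().lock();                        // hold the read lock
        // The write lock cannot be granted while a read lock is held,
        // even by the same thread, so tryLock() returns false here.
        boolean upgraded = rwl.writeLock().tryLock();
        System.out.println("upgraded = " + upgraded); // prints: upgraded = false
        // A plain writeLock().lock() here would block forever (deadlock).
        rwl.readLock().unlock();
    }
}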

1.4.2 ReadWriteLock supports lock downgrading

Here is another way to implement on-demand loading of cached data. Straight to the code:

class CachedData {
  Object data;
  volatile boolean cacheValid;
  final ReadWriteLock rwl = new ReentrantReadWriteLock();
  final Lock r = rwl.readLock();  // read lock
  final Lock w = rwl.writeLock(); // write lock

  void processCachedData() {
    r.lock();                     // acquire the read lock
    if (!cacheValid) {
      // The cache is not valid, so we need to write to it. Lock upgrading is not
      // allowed, so we must release the read lock before acquiring the write lock.
      r.unlock();
      w.lock();                   // acquire the write lock
      try {
        // Another thread may have filled the cache before we obtained the
        // write lock, so check the state again.
        if (!cacheValid) {
          data = ...
          cacheValid = true;
        }
        // Downgrade by acquiring the read lock before releasing the write lock.
        // Downgrading is allowed, and this read lock is released at the end.
        r.lock();
      } finally {
        w.unlock();               // release the write lock; the read lock is still held
      }
    }
    // At this point we still hold the read lock: either the one acquired at the
    // top, or the one acquired during the downgrade.
    try { use(data); }
    finally { r.unlock(); }       // release the read lock
  }
}
1.5 Notes on read-write locks
  1. ReadWriteLock is an interface; its implementation class is ReentrantReadWriteLock. As the name suggests, it is reentrant.
  2. The read lock and write lock it hands out both implement the Lock interface, so besides lock() they also support tryLock() for non-blocking acquisition, lockInterruptibly() for interruptible acquisition, and so on.
  3. Like ReentrantLock, a read-write lock supports both fair and non-fair modes.
  4. Note: the write lock supports condition variables, but the read lock does not. Calling newCondition() on the read lock throws UnsupportedOperationException (see the sketch below).
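A short sketch of point 4 (my own demo, not from the original): newCondition() works on the write lock but throws on the read lock.

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ConditionDemo {
  public static void main(String[] args) {
    ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

    // The write lock supports condition variables.
    Condition notEmpty = rwl.writeLock().newCondition();
    System.out.println("write lock condition created: " + (notEmpty != null));

    // The read lock does not: newCondition() throws UnsupportedOperationException.
    try {
      rwl.readLock().newCondition();
    } catch (UnsupportedOperationException e) {
      System.out.println("read lock newCondition() threw: " + e);
    }
  }
}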

Reference: Geek Time
More: Deng Xin
