17. ReadWriteLock: How to quickly implement a complete cache? (Concurrency tools)

The two synchronization primitives, monitor and semaphore, can theoretically solve all concurrency problems. The Java SDK nevertheless provides many other concurrency tools, for two reasons: optimizing performance for specific scenarios and improving ease of use.

A very common concurrency scenario is read-mostly, write-rarely, such as a cache. For this scenario, the Java SDK concurrent package provides a read-write lock: ReadWriteLock.

1. What is a read-write lock?

Read-write locks follow three basic principles:

  • Multiple threads are allowed to read the shared variable at the same time, so performance is better than a mutual exclusion lock;
  • Only one thread is allowed to write the shared variable;
  • While a write thread is performing a write operation, read threads are prohibited from reading the shared variable.
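These three rules can be observed directly with the JDK's ReentrantReadWriteLock (the implementation used throughout this article). A minimal sketch; the class and variable names are illustrative, not from the original article:

import java.util.concurrent.locks.ReentrantReadWriteLock;

class ReadWriteRulesDemo {
	public static void main(String[] args) throws InterruptedException {
		ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
		rwl.readLock().lock(); // the main thread is "reading"
		Thread reader = new Thread(() -> {
			boolean ok = rwl.readLock().tryLock(); // rule 1: readers share the lock
			System.out.println("second reader got the read lock: " + ok); // true
			if (ok) rwl.readLock().unlock();
		});
		Thread writer = new Thread(() -> {
			boolean ok = rwl.writeLock().tryLock(); // rule 3: a writer is shut out while readers hold the lock
			System.out.println("writer got the write lock: " + ok); // false
			if (ok) rwl.writeLock().unlock();
		});
		reader.start(); writer.start();
		reader.join(); writer.join();
		rwl.readLock().unlock();
	}
}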

2. Quickly implement a cache

Examples of implementation code are as follows:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class Cache<K, V> {
	// HashMap is not thread-safe; the read-write lock makes access to it safe
	final Map<K, V> m = new HashMap<>();
	// reentrant read-write lock
	final ReadWriteLock rwl = new ReentrantReadWriteLock();
	// read lock
	final Lock r = rwl.readLock();
	// write lock
	final Lock w = rwl.writeLock();

	// read from the cache, using the try{}finally{} idiom
	V get(K key) {
		r.lock();
		try {
			return m.get(key);
		} finally {
			r.unlock();
		}
	}

	// write to the cache
	V put(K key, V v) {
		w.lock();
		try {
			return m.put(key, v);
		} finally {
			w.unlock();
		}
	}
}
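A quick usage sketch of this class, with illustrative key and value types:

Cache<String, Integer> cache = new Cache<>();
cache.put("answer", 42);          // takes the write lock
Integer v = cache.get("answer");  // takes the read lock; concurrent reads do not block each other
System.out.println(v);            // 42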

Before the cache can be used, the cached data must be initialized. There are two ways:

  • One-time loading, suitable for small amounts of data: all data is loaded when the application starts (see the sketch after this list);
  • On-demand loading, also called lazy loading: query the cache first, and only if the value is missing query the database and put the result into the cache.
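A minimal sketch of the one-time loading approach, assuming the Cache class above; initialData is a hypothetical stand-in for the result of a full database query at start-up:

import java.util.Map;

class CacheBootstrap {
	// One-time loading: fill the cache once when the application starts.
	static <K, V> void warmUp(Cache<K, V> cache, Map<K, V> initialData) {
		// each entry goes through put(), so the write lock protects the underlying HashMap
		initialData.forEach(cache::put);
	}
}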

3. Implement on-demand loading of cache

class Cache<K, V> {
	// HashMap is not thread-safe
	final Map<K, V> m = new HashMap<>();
	// reentrant read-write lock
	final ReadWriteLock rwl = new ReentrantReadWriteLock();
	// read lock
	final Lock r = rwl.readLock();
	// write lock
	final Lock w = rwl.writeLock();

	// read from the cache, using the try{}finally{} idiom
	V get(K key) {
		V v = null;
		// read the cache
		r.lock(); // (1)
		try {
			v = m.get(key); // (2)
		} finally {
			r.unlock(); // (3)
		}
		// cache hit: return the value
		if (v != null) { // (4)
			return v;
		}
		// cache miss: query the database
		w.lock(); // (5)
		try {
			// verify again:
			// another thread may have queried the database already
			v = m.get(key);  // (6)
			if (v == null) { // (7)
				// query the database (code omitted)
				v = ...
				m.put(key, v);
			}
		} finally {
			w.unlock();
		}
		return v;
	}
}

Writing to the cache at (5) requires the write lock. But why, at (6) and (7), do we need to verify again whether the value already exists?

The reason is that, under high concurrency, multiple threads may compete for the write lock:

  • Suppose the cache is empty, nothing has been cached yet, and three threads T1, T2 and T3 call get() at the same time with the same key.
  • They all reach the code at (5), but only one of them can obtain the write lock. Suppose it is T1: T1 acquires the write lock, queries the database, updates the cache, and finally releases the write lock.
  • Now one of T2 and T3 can acquire the write lock, say T2. Without the re-verification at (6), T2 would query the database again.
  • After T2 releases the write lock, T3 would query the database yet again.
  • In fact T1 has already populated the cache, so there is no need for T2 and T3 to query the database at all.
  • Re-verifying after acquiring the write lock therefore avoids repeated database queries under high concurrency (a small test sketch follows this list).
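A minimal test sketch of that argument, with the lazily-loading logic above condensed into one class and the database query replaced by a counter so the number of "database" hits can be observed; all names here are illustrative, not from the original article:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class DoubleCheckDemo {
	static final Map<String, String> m = new HashMap<>();
	static final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
	static final Lock r = rwl.readLock(), w = rwl.writeLock();
	static final AtomicInteger dbQueries = new AtomicInteger(); // counts simulated database hits

	static String get(String key) {
		String v;
		r.lock();
		try { v = m.get(key); } finally { r.unlock(); }
		if (v != null) return v;
		w.lock();
		try {
			v = m.get(key);                  // re-verification
			if (v == null) {
				dbQueries.incrementAndGet(); // "query the database"
				v = "value-of-" + key;
				m.put(key, v);
			}
		} finally { w.unlock(); }
		return v;
	}

	public static void main(String[] args) throws InterruptedException {
		Thread t1 = new Thread(() -> get("k"));
		Thread t2 = new Thread(() -> get("k"));
		Thread t3 = new Thread(() -> get("k"));
		t1.start(); t2.start(); t3.start();
		t1.join(); t2.join(); t3.join();
		// With re-verification only one database query happens; without it, up to three would.
		System.out.println("database queries: " + dbQueries.get());
	}
}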

4. Upgrade and downgrade of read-write lock

In the on-demand loading code above, the read lock is acquired at (1) and released at (3). Could we instead put the re-verification and cache-update logic right after (2), while still holding the read lock? The code would look like this:

// read the cache
r.lock();
try {
	v = m.get(key);
	if (v == null) {
		w.lock();
		try {
			// verify again and update the cache
			// detailed code omitted
		} finally {
			w.unlock();
		}
	}
} finally {
	r.unlock();
}

Acquiring the read lock first and then trying to acquire the write lock on top of it is called lock upgrading, and ReentrantReadWriteLock does not support it.
The problem with the code above: the read lock has not been released when the write lock is requested, and the write lock can never be granted while a read lock is held. The call therefore waits forever, and the thread is blocked with no chance of ever being woken up.
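A minimal sketch showing why the upgrade cannot succeed, using tryLock() so the program does not actually hang (the plain lock() call used above would block forever):

import java.util.concurrent.locks.ReentrantReadWriteLock;

class UpgradeDemo {
	public static void main(String[] args) {
		ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
		rwl.readLock().lock();        // hold the read lock, as at (1) in the code above
		try {
			// While any read lock is held (even by this same thread), the write lock
			// cannot be acquired; tryLock() therefore returns false immediately.
			boolean upgraded = rwl.writeLock().tryLock();
			System.out.println("upgraded to write lock? " + upgraded); // prints: false
		} finally {
			rwl.readLock().unlock();
		}
	}
}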

Lock downgrading, however, is allowed. An example:

class CachedData {
	Object data;
	volatile boolean cacheValid;
	final ReadWriteLock rwl = new ReentrantReadWriteLock();
	// read lock
	final Lock r = rwl.readLock();
	// write lock
	final Lock w = rwl.writeLock();

	void processCachedData() {
		// acquire the read lock
		r.lock();
		if (!cacheValid) {
			// release the read lock, because upgrading a read lock is not allowed
			r.unlock();
			// acquire the write lock
			w.lock();
			try {
				// check the state again
				if (!cacheValid) {
					data = ...
					cacheValid = true;
				}
				// downgrade to the read lock before releasing the write lock
				// downgrading is allowed
				r.lock(); // (1)
			} finally {
				// release the write lock
				w.unlock();
			}
		}
		// the read lock is still held here
		try {
			use(data);
		} finally {
			r.unlock();
		}
	}
}