[Redis] Cache avalanche and cache penetration

1. Cache avalanche

1.1 Causes of a cache avalanche

  A cache avalanche is easy to understand: when the cache misses (or the data was never loaded into the cache) and the new cache entry has not yet been built, every request that would normally be served from Redis (as in the figure below) falls through and queries the database. The resulting CPU and memory pressure on the database is enormous; it can bring the database down and cause the whole system to collapse.

[Figure: the normal cache read path, with requests served from Redis]

[Figure: at cache invalidation time, all requests fall through to the database]

1.2 Solutions

  The avalanche effect of a mass cache invalidation is frightening for the underlying system. Is there any way to solve this problem? The basic solutions are as follows:

  1. Distributed lock: most system designers use a lock or a queue to guarantee that a large number of threads cannot read and write the database all at once, which avoids the pressure a mass cache invalidation would otherwise put on the database. This relieves the database to some extent, but it also reduces system throughput.
  2. Use messaging middleware.
  3. Use a secondary cache (Redis + Ehcache).
  4. Spread Redis key expiration times evenly: analyze user behavior and choose expiry times so that cache invalidations are distributed as evenly as possible over time.
  5. If the avalanche is caused by a cache server going down, consider a standby, for example a Redis master/slave setup. Note that a double cache raises update-transaction issues: an update might read dirty data, which needs to be handled.

1.3 locking

  • Use a lock (a distributed lock, or a local lock) to solve the avalanche effect: when a sudden flood of requests hits the database server, the lock rate-limits the database requests. The locking mechanism guarantees that only one thread (request) accesses the database at a time, while all other requests queue up and wait. (For a server cluster you need a distributed lock; on a single instance a local lock is enough.) This does solve the avalanche effect, but it lowers server throughput, so it suits small projects. A distributed-lock variant is sketched after the note below.
  • After a cache miss, use locking or a queue to control the number of threads that read the database and write the cache. For example, allow only one thread per key to query the data and write the cache while the other threads wait.
@RequestMapping("/getUsers")
public Users getByUsers(Long id) {
    // 1. Check Redis first
    String key = this.getClass().getName() + "-" + Thread.currentThread().getStackTrace()[1].getMethodName()
            + "-id:" + id;
    String userJson = redisService.getString(key);
    if (!StringUtils.isEmpty(userJson)) {
        return JSONObject.parseObject(userJson, Users.class);
    }
    Users user = null;
    lock.lock(); // acquire outside try, so unlock() never runs after a failed acquire
    try {
        // Re-check the cache: another thread may have rebuilt it while we waited
        userJson = redisService.getString(key);
        if (!StringUtils.isEmpty(userJson)) {
            return JSONObject.parseObject(userJson, Users.class);
        }
        // 2. Query the database and rebuild the cache
        user = userMapper.getUser(id);
        redisService.setString(key, JSONObject.toJSONString(user));
    } finally {
        lock.unlock(); // release the lock
    }
    return user;
}

  • Note: locking and queueing only relieve the pressure on the database; they do not improve system throughput. Suppose that under high concurrency a key is locked while its cache entry is rebuilt: of 1000 incoming requests, 999 are blocked. Users will then time out while waiting, so this is a palliative, not a cure.
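  For a clustered deployment, the local lock above can be swapped for a Redis-based distributed lock. Below is a minimal sketch using the Jedis client's SET ... NX PX command; the lock key, token, and timeout values are illustrative assumptions, not part of the original code.

import java.util.Collections;
import java.util.UUID;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class RedisDistributedLock {
    private final Jedis jedis;

    public RedisDistributedLock(Jedis jedis) {
        this.jedis = jedis;
    }

    // Try to acquire: SET key token NX PX ttl succeeds only if the key is absent.
    // Returns the token on success (needed to release), or null if the lock is taken.
    public String tryLock(String lockKey, long ttlMillis) {
        String token = UUID.randomUUID().toString();
        String result = jedis.set(lockKey, token, SetParams.setParams().nx().px(ttlMillis));
        return "OK".equals(result) ? token : null;
    }

    // Release atomically with a Lua script: only the holder of the token may delete,
    // so one node can never release a lock that another node has since acquired.
    public void unlock(String lockKey, String token) {
        String script = "if redis.call('get', KEYS[1]) == ARGV[1] then "
                + "return redis.call('del', KEYS[1]) else return 0 end";
        jedis.eval(script, Collections.singletonList(lockKey), Collections.singletonList(token));
    }
}

  The TTL guards against a crashed holder leaving the lock stuck forever; a thread that fails to acquire can retry briefly or simply wait, just as with the local lock.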

1.4 messaging middleware

[Figure: messaging middleware]

  When the cache misses, the requests that would rebuild it can be funneled through a message queue, so that the database is reloaded at a controlled rate instead of being hit by every missed request at once. A sketch follows.
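  A minimal sketch of that idea, using a java.util.concurrent.BlockingQueue as a stand-in for a real message broker (Kafka, RabbitMQ, and so on); the class and method names are assumptions for illustration.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class CacheRebuildQueue {
    // Stand-in for a broker topic: cache keys waiting to be rebuilt.
    private final BlockingQueue<String> rebuildQueue = new LinkedBlockingQueue<>();

    // Producer side: on a cache miss, enqueue a rebuild request instead of
    // querying the database directly; offer() never blocks the request thread.
    public void requestRebuild(String key) {
        rebuildQueue.offer(key);
    }

    // Consumer side: one worker drains the queue, so the database sees at most
    // one rebuild at a time no matter how many requests missed the cache.
    public void startWorker(CacheLoader loader) {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    String key = rebuildQueue.take(); // blocks until a key arrives
                    loader.loadIntoCache(key);        // query the DB, write Redis
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    public interface CacheLoader {
        void loadIntoCache(String key);
    }
}

  Note that duplicate keys can be enqueued while a rebuild is pending; a real implementation would deduplicate them or pair the queue with the per-key lock from section 1.3.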

1.5 primary and secondary cache

  Build a two-level cache: A1 is the original (primary) cache and A2 is a copy. When A1 has expired, read from A2. Give A1 a short expiration time and A2 a long one, so that A2 can keep serving requests while A1 is rebuilt. (This point is a supplement.)
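  A minimal sketch of that A1/A2 read path, reusing the document's redisService but assuming a setString overload that takes a TTL in seconds (the snippets above use a two-argument setString) and a hypothetical loadFromDb helper:

// Try the short-lived primary copy (A1) first, fall back to the long-lived
// secondary copy (A2), and only hit the database when both are gone.
public String getWithTwoLevelCache(String key) {
    String a1Key = key + ":A1"; // hypothetical suffix for the primary copy
    String a2Key = key + ":A2"; // hypothetical suffix for the secondary copy

    String value = redisService.getString(a1Key);
    if (!StringUtils.isEmpty(value)) {
        return value; // primary copy still fresh
    }
    value = redisService.getString(a2Key);
    if (!StringUtils.isEmpty(value)) {
        // A1 expired but A2 still holds the value: rebuild A1 from it
        redisService.setString(a1Key, value, 5 * 60); // short TTL, e.g. 5 minutes
        return value;
    }
    // Both copies missing: load from the database and refill both levels
    value = loadFromDb(key);                            // hypothetical DB loader
    redisService.setString(a1Key, value, 5 * 60);       // short TTL
    redisService.setString(a2Key, value, 24 * 60 * 60); // long TTL, e.g. 24 hours
    return value;
}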

1.6 spreading redis key expiration times evenly

  Set different expiry times for different keys so that the points in time at which entries expire are spread as evenly as possible, and never let a large batch of Redis keys expire at the same moment.
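  A minimal sketch of the idea: add random jitter to a base TTL so that keys written at the same moment do not expire at the same moment. The base and jitter values are illustrative.

import java.util.concurrent.ThreadLocalRandom;

// Base TTL plus random jitter, so a batch of keys cached together
// expires spread over a window instead of all at once.
public int ttlWithJitter(int baseSeconds) {
    int jitter = ThreadLocalRandom.current().nextInt(0, 300); // up to 5 minutes of spread
    return baseSeconds + jitter;
}

// Usage (assuming a TTL-taking setString overload):
// redisService.setString(key, value, ttlWithJitter(3600));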

2. Cache penetration

[Figure: how cache penetration works]

  • Cache penetration means the user queries data that does not exist in the database, so naturally it is not in the cache either. The lookup misses the cache, so every such request re-queries the database and then returns empty. These requests bypass the cache and hit the database directly, which is what the often-mentioned cache hit rate is about.
  • The simplest, crudest solution: if the database query also comes back empty, store a default value in the cache, so that the second identical request finds a value in the cache and never reaches the database.
  • In other words, cache the null result too, so the next identical request returns empty straight from the cache, avoiding the penetration caused by empty lookups. A separate cache area can also be set aside for null values, so that keys are pre-checked there before the request is released to the normal caching logic.
private static final String SIGN_KEY = "${NULL}"; // sentinel marking a cached null result

public String getByUsers2(Long id) {
    // 1. Check Redis first
    String key = this.getClass().getName() + "-" + Thread.currentThread().getStackTrace()[1].getMethodName()
            + "-id:" + id;
    String userName = redisService.getString(key);
    if (!StringUtils.isEmpty(userName)) {
        // callers should treat SIGN_KEY as "not found"
        return userName;
    }
    System.out.println("###### sending a database query ########");
    Users user = userMapper.getUser(id);
    String value = null;
    if (user == null) {
        // 2. Cache the sentinel so the next identical request skips the database
        value = SIGN_KEY;
    } else {
        value = user.getName();
    }
    redisService.setString(key, value);
    return value;
}

  • Note: when the real value for a key is later stored, the cached empty placeholder for that key must be cleared first, as sketched below.
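  A minimal sketch of that write path; the mapper method, the key-building helper, and the delete method on redisService are hypothetical:

// When the real record is created, evict the cached null sentinel first,
// otherwise readers keep getting the "${NULL}" placeholder.
public void saveUser(Users user) {
    userMapper.insertUser(user);              // hypothetical mapper method
    String key = buildCacheKey(user.getId()); // same key scheme as the read path
    redisService.delete(key);                 // hypothetical delete on the custom service
}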

Hot key

  A hot key is a key that is accessed very frequently. When it expires, many threads rush to rebuild the cache at the same time, the load spikes, and the system can crash. Solutions:

  1. Use a lock: synchronized or a Lock on a single instance, a distributed lock in a distributed deployment.
  2. Do not set a cache expiration time at all; instead store a logical expiration timestamp inside the value. If a read detects that the value has been stored longer than that logical expiration, update the cache asynchronously (see the sketch after this list).
  3. Store an auxiliary expiration time t1 in the value, smaller than the real expiration time t0. When t1 passes, extend t1 and trigger a cache update.
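  A minimal sketch of the logical-expiration idea from points 2 and 3, storing the value as JSON together with its logical deadline. The CachedValue wrapper, the 60-second refresh window, and the loadFromDb helper are assumptions; redisService is the same as in the earlier snippets.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import com.alibaba.fastjson.JSONObject;

public class HotKeyCache {
    // Single background worker that refreshes logically expired hot keys.
    private final ExecutorService refresher = Executors.newSingleThreadExecutor();

    // Value wrapper: the payload plus its logical expiration timestamp.
    public static class CachedValue {
        public String data;
        public long logicalExpireAt; // epoch millis

        public CachedValue() {
        }

        public CachedValue(String data, long logicalExpireAt) {
            this.data = data;
            this.logicalExpireAt = logicalExpireAt;
        }
    }

    public String get(String key) {
        // The Redis key itself never expires; only the embedded timestamp does.
        String json = redisService.getString(key);
        CachedValue cached = JSONObject.parseObject(json, CachedValue.class);
        if (cached.logicalExpireAt < System.currentTimeMillis()) {
            // Logically expired: serve the stale value now and rebuild in the
            // background, so no reader ever blocks on the database.
            refresher.submit(() -> {
                String fresh = loadFromDb(key); // hypothetical DB loader
                CachedValue updated = new CachedValue(fresh, System.currentTimeMillis() + 60_000);
                redisService.setString(key, JSONObject.toJSONString(updated));
            });
        }
        return cached.data;
    }
}

  In practice the refresh task would also take the per-key lock from section 1.3, so that concurrent readers do not schedule duplicate rebuilds.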


Origin www.cnblogs.com/haoworld/p/redis-huan-cun-chuan-tou-yu-huan-cun-xue-beng.html