Understand cache penetration, avalanche, and breakdown, and introduce distributed locks

Caching in practice

1. Cache penetration

Let’s start with the concept and work up to the solutions.

1.1 Concept

Cache penetration refers to querying for data that does not exist at all. The cache misses, so the request falls through to the database; the database does not have the data either, so it returns null and nothing gets cached. If every such query goes to the database, the cache loses its purpose: requests “penetrate” straight through it.


1.2 Risks posed

An attacker can deliberately request non-existent keys. Every such request hits the database, the load climbs, and the database can eventually be brought down.

1.3 Solution

Cache the null result as well, with a short expiration time, so that repeated queries for the same missing key are answered from the cache instead of the database.
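A minimal sketch of null-result caching. A `ConcurrentHashMap` with expiry timestamps stands in for Redis here, and the key name, sentinel value, and 30-second TTL are all illustrative choices, not from the original:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: cache the sentinel string "null" with a short TTL so repeated
// lookups for a non-existent id stop hitting the database.
public class NullCacheSketch {
    // value plus absolute expiry time, standing in for a Redis entry with a TTL
    record Entry(String value, long expiresAt) {}

    static final Map<String, Entry> cache = new ConcurrentHashMap<>();
    static int dbHits = 0; // counts database queries, for illustration

    static String dbQuery(String id) {
        dbHits++;
        return null; // the row does not exist
    }

    static String get(String id) {
        Entry e = cache.get(id);
        if (e != null && e.expiresAt() > System.currentTimeMillis()) {
            // cache hit; translate the sentinel back to null
            return "null".equals(e.value()) ? null : e.value();
        }
        String v = dbQuery(id);
        if (v == null) {
            // cache the miss for a short time (30 s here) instead of forever
            cache.put(id, new Entry("null", System.currentTimeMillis() + 30_000));
        } else {
            cache.put(id, new Entry(v, System.currentTimeMillis() + 86_400_000));
        }
        return v;
    }
}
```

The second lookup for the same missing id is answered from the cache, so the database is queried only once.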

2. Cache avalanche

2.1 Concept

A cache avalanche happens when many keys are cached with the same expiration time, for example all expiring at 00:00:00. When they all expire at the same moment and a burst of requests arrives, none of the data is cached, so every request queries the database; the database load spikes, and the system “avalanches”.

2.2 Risks posed

An attacker who can find the moment when a large number of keys expire together can aim a burst of requests at exactly that time; the database load spikes and can eventually crash the system.

2.3 Solution

Add a random offset to the base expiration time, for example 1–5 minutes, so that cache entries written together do not all expire at the same moment, avoiding collective cache failure.
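A sketch of the jitter calculation. The one-day base TTL and the 1–5 minute range are illustrative; the method name is my own:

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch: base TTL plus a random 1-5 minute offset, so keys written
// at the same moment do not all expire at the same second.
public class TtlJitter {
    static long ttlWithJitterSeconds(long baseSeconds) {
        // nextLong(60, 301) yields a value in [60, 300], i.e. 1 to 5 minutes
        long jitter = ThreadLocalRandom.current().nextLong(60, 301);
        return baseSeconds + jitter;
    }
}
```

With `stringRedisTemplate` this would be used as, e.g., `set(key, value, TtlJitter.ttlWithJitterSeconds(86_400), TimeUnit.SECONDS)`.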

3. Cache breakdown

3.1 Concept

An expiration time was set for a certain hot key, and at the exact moment it expires a large number of concurrent requests for that key arrive. None of them find it in the cache, so all of them query the database at once.

3.2 Solution

Under high concurrency, let only one request acquire a lock and query the database; the other requests wait, and the winner writes the result to the cache before releasing the lock. When the waiting requests acquire the lock in turn, they check the cache first; since the data is now cached, they no longer need to query the database.

4. Locking to solve cache breakdown

How do we deal with cache penetration, avalanche, and breakdown?
Cache empty results to solve cache penetration.
Set an expiration time and add a random offset to it to solve cache avalanche.
Lock around the database query to solve cache breakdown; note that locking has a performance cost.
Here is code demonstrating the cache-breakdown solution. We use synchronized for locking; note that this is a local (single-JVM) lock.

public List<TypeEntity> getTypeEntityListByLock() {
  synchronized (this) {
    // 1. Query the data from the cache
    String typeEntityListCache = stringRedisTemplate.opsForValue().get("typeEntityList");
    if (!StringUtils.isEmpty(typeEntityListCache)) {
      // 2. Cache hit: deserialize the JSON into entity objects and return them
      List<TypeEntity> typeEntityList = JSON.parseObject(typeEntityListCache, new TypeReference<List<TypeEntity>>(){});
      return typeEntityList;
    }
    // 3. Cache miss: query the database
    System.out.println("The cache is empty");
    List<TypeEntity> typeEntityListFromDb = this.list();
    // 4. Serialize the query result to a JSON string
    typeEntityListCache = JSON.toJSONString(typeEntityListFromDb);
    // 5. Store the serialized data in the cache, then return the database result
    stringRedisTemplate.opsForValue().set("typeEntityList", typeEntityListCache, 1, TimeUnit.DAYS);
    return typeEntityListFromDb;
  }
}

1. Query data from cache.

2. If there is data in the cache, take it out of the cache, deserialize it into an instance object, and return the result.

3. If there is no data in the cache, query the data from the database.

4. Serialize the data queried from the database into a JSON string.

5. Store the serialized data in the cache and return the database query results.
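One performance note on the method above: every call enters the synchronized block, even when the cache is warm. A common refinement (a sketch of my own, not from the original; a `ConcurrentHashMap` stands in for Redis) is to check the cache before locking and again after acquiring the lock, so the lock is only contended on a cold cache:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of check-lock-recheck (double-checked caching).
public class DoubleCheckCache {
    static final Map<String, String> cache = new ConcurrentHashMap<>();
    static int dbQueries = 0; // counts database loads, for illustration
    static final Object lock = new Object();

    static String dbLoad() {
        dbQueries++;
        return "[{\"id\":1}]"; // pretend this is the serialized list from the database
    }

    static String getTypeList() {
        String v = cache.get("typeEntityList");
        if (v != null) return v;          // fast path: no lock when the cache is warm
        synchronized (lock) {
            v = cache.get("typeEntityList");
            if (v != null) return v;      // another thread filled it while we waited
            v = dbLoad();
            cache.put("typeEntityList", v);
            return v;
        }
    }
}
```

Only the first caller pays for the lock and the database query; every later call returns on the unlocked fast path.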

5. The problem of local lock

A local lock only serializes the threads of the current service instance. When multiple copies of the topic microservice are deployed, each instance holds its own separate local lock.
Local locking is usually fine, but in some situations it causes problems.

For example, using a local lock to guard inventory deduction under high concurrency:

1. The current total stock is 100, cached in Redis.

2. Inventory microservice A, under its own local lock, reads 100, deducts 1, and writes the total back as 99.

3. Inventory microservice B, under its own local lock, also read 100 before A's write, deducts 1, and writes back 99.

4. After two deductions the stock is still 99: one unit has been oversold.
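The steps above can be reproduced deterministically. In this sketch a static int stands in for the value cached in Redis, and the interleaving is written out explicitly instead of racing real threads:

```java
// Sketch: reproduce the oversell interleaving deterministically.
// Each "microservice" holds its own local lock, so the locks do not
// serialize the shared read-modify-write on the (simulated) Redis value.
public class LocalLockProblem {
    static int redisStock = 100; // stands in for the stock value cached in Redis

    static void demo() {
        int readByA = redisStock; // A reads 100 (inside A's local lock)
        int readByB = redisStock; // B reads 100 (inside B's local lock)
        redisStock = readByA - 1; // A writes 99
        redisStock = readByB - 1; // B also writes 99: one deduction is lost
    }
}
```

Because each instance's lock only excludes its own threads, nothing prevents A and B from both reading 100 before either writes; serializing them requires a lock that both instances share, i.e. a distributed lock.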

So how do we solve the problem with local locks? The answer is a distributed lock; please see the next article:
https://editor.csdn.net/md?not_checkout=1&articleId=132869070


Origin blog.csdn.net/qq_36151389/article/details/132868817