Redis cache penetration and avalanche

1 Cache penetration

1.1 Description

Cache penetration refers to requests for data that exists neither in the cache nor in the database, such as an id of "-1" or an extremely large id that does not exist. Because nothing can be cached for such keys, every request bypasses the cache and goes straight to the database. When a large number of these requests arrive, they all hit the database directly. This phenomenon is called cache penetration.

1.2 Solution

1.2.1 Code level

List<String> list = demoService.getDemoData(demoID);
/*
 * Guard against cache penetration.
 * Solution: cache empty results as well, e.g. an empty string,
 * empty object, empty array, or empty list.
 */
if (list != null && list.size() > 0) {
    redisOperator.set("demo:" + demoID, JsonUtils.objectToJson(list));
} else {
    // Cache the empty result with a short expiration time of 5 minutes
    redisOperator.set("demo:" + demoID, JsonUtils.objectToJson(list), 5 * 60);
}
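
On the read path, the cached empty value is what stops repeated requests for non-existent ids from reaching the database. The fragment below is only a minimal sketch reusing the demoService, redisOperator, and JsonUtils helpers assumed above; the get and jsonToList methods are assumed counterparts, not a confirmed API.

// Read-path sketch: an empty cached list means "known to be absent"
String json = redisOperator.get("demo:" + demoID);
if (json != null) {
    // Cache hit: either real data or the previously cached empty result,
    // so the database is not queried at all
    return JsonUtils.jsonToList(json, String.class);
}
// Cache miss: query the database and cache the result (even if empty),
// exactly as in the snippet above
List<String> list = demoService.getDemoData(demoID);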

1.2.2 Using Bloom Filter

A Bloom filter can be understood simply as a bit array of 0s and 1s, where a 1 means "may exist". Every key stored in Redis is also added to the Bloom filter. As shown in the figure below, the positions set to 1 correspond to the hashed indexes of the keys.

[Figure: Bloom filter bit array; positions set to 1 mark the hashed indexes of existing keys]

When the key's indexes are found in the Bloom filter (for example demo:1), the data is returned from Redis normally.
When the indexes cannot be found (for example demo:-1), nil is returned directly without querying the database.
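
As a concrete illustration, a Bloom filter can sit in front of the cache so that requests for keys that were never written are rejected immediately. The sketch below uses Guava's in-JVM BloomFilter; in a distributed setup a shared filter (for example Redisson's RBloomFilter or the RedisBloom module) is more common. The class and method names here are only examples:

import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import java.nio.charset.StandardCharsets;

public class BloomFilterGuard {

    // Expected key count and false-positive rate are illustrative values
    private final BloomFilter<String> keyFilter = BloomFilter.create(
            Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000_000, 0.01);

    // Register every key that is written to Redis
    public void register(String key) {
        keyFilter.put(key);
    }

    // Check before touching Redis: false means the key definitely does not
    // exist, so nil can be returned without querying Redis or the database
    public boolean mightExist(String key) {
        return keyFilter.mightContain(key);
    }
}

Note that mightExist returning true only means the key may exist, which is the false-positive behavior discussed in the comparison below.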

1.2.3 Comparing the two approaches

Bloom filter

  • A Bloom filter can produce false positives; the probability is very low, but it does happen.
  • Introducing a Bloom filter increases the complexity of the code.

Code level

  • Its drawbacks are comparatively minor, and this approach is often used in real-world environments.

2 Cache avalanche

2.1 Description

Cache avalanche refers to a large number of keys in Redis expiring at the same moment. The incoming traffic then bypasses the cache and hits the database server directly, the database crashes, and repeatedly restarting the service does not help because it is overwhelmed again as soon as it comes back up.

2.2 Prevention plan

2.2.1 Never expire

Some hot data can simply be set to never expire, so that access to it always goes through the cache.

2.2.2 Stagger expiration times

When setting cached keys, different expiration times can be assigned to different batches of data, or a random offset within a range can be added so that expirations are spread across different time periods, as sketched below.
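
A minimal sketch of adding a random offset to the TTL, reusing the redisOperator helper assumed in section 1.2.1; the 30-minute base and 5-minute jitter range are illustrative values:

import java.util.concurrent.ThreadLocalRandom;

// TTL of 30 minutes plus a random 0-5 minute offset, so keys cached at the
// same moment do not all expire at the same time
public static int randomTtlSeconds() {
    int baseSeconds = 30 * 60;
    int jitterSeconds = ThreadLocalRandom.current().nextInt(5 * 60);
    return baseSeconds + jitterSeconds;
}

// Usage with the helper from section 1.2.1:
// redisOperator.set("demo:" + demoID, JsonUtils.objectToJson(list), randomTtlSeconds());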

2.2.3 Multi-level caching scheme

You can use a multi-level cache architecture, for example nginx local cache + Redis distributed cache + Tomcat in-heap cache; a simplified in-application sketch follows.
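
Within the application, the heap-cache level of such an architecture can be sketched as a local cache that is checked before Redis, so a wave of Redis expirations does not immediately become database traffic. The class below is only an illustration: the Redis lookup is passed in as a function (for example key -> redisOperator.get(key)), and a real setup would usually use a bounded local cache library such as Caffeine rather than a bare map.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class TwoLevelCache {

    // Level 1: in-heap cache (a real setup would use a bounded cache such as Caffeine)
    private final Map<String, String> heapCache = new ConcurrentHashMap<>();

    // Level 2: Redis lookup, e.g. key -> redisOperator.get(key)
    private final Function<String, String> redisLookup;

    public TwoLevelCache(Function<String, String> redisLookup) {
        this.redisLookup = redisLookup;
    }

    public String get(String key) {
        // Serve hot data from the heap first; even if many Redis keys expire
        // at once, local hits never reach Redis or the database
        String value = heapCache.get(key);
        if (value != null) {
            return value;
        }
        // Fall back to Redis and repopulate the heap cache on a hit
        value = redisLookup.apply(key);
        if (value != null) {
            heapCache.put(key, value);
        }
        return value;
    }
}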

2.2.4 Use third-party redis services

You can purchase and use a third-party Redis service. Managed Redis services generally offer better performance and scalability than a self-deployed setup.

3 Related information

  • Writing this post took a lot of effort; if it helped you, please follow and like. Thank you.

Origin blog.csdn.net/qq_15769939/article/details/113928181