Redis Study Notes #9: Cache Penetration, Cache Breakdown, Cache Avalanche, and Their Solutions

Cache penetration

Cache penetration means that a malicious user deliberately sends many requests for data that does not exist in the cache. Because the cache never contains this data, all of these requests fall directly on the database within a short period of time, and the database becomes abnormal under the load.

 

Solutions:

1. Cache empty values. Penetration happens because the cache holds no key for this data, so every query falls through to the database. We can store a null/empty value in the cache for the corresponding key, so that later queries for this key no longer need to hit the database. For robustness we should also set an expiration time on that key, so that real data added later is not hidden. A sketch of this approach appears after the Bloom filter example below.

2. Bloom filter. Put all possibly existing keys into a Bloom filter. When a request asks for a key that, according to the cache and the filter, does not exist, return quickly instead of letting it hang the DB.

 

public String getByKey(String key) {
    // Look up the value in Redis by key
    String value = redisService.get(key);
    if (StringUtil.isEmpty(value)) {
        // Cache miss: only hit the database if the Bloom filter says the key might exist
        if (bloomFilter.mightContain(key)) {
            value = userService.getById(key);
            redisService.set(key, value);
            return value;
        } else {
            // The key definitely does not exist; return without touching the database
            return null;
        }
    }
    return value;
}
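
For solution 1, a minimal sketch of caching the "missing" result might look like the following. It reuses the redisService and userService from the example above; the NULL_PLACEHOLDER constant and the set(key, value, ttlSeconds) overload are illustrative assumptions, not from the original post.

private static final String NULL_PLACEHOLDER = "NULL";

public String getByKeyCachingNull(String key) {
    String value = redisService.get(key);
    if (value != null) {
        // A cached placeholder means the database already confirmed this key
        // does not exist, so return null without querying it again
        return NULL_PLACEHOLDER.equals(value) ? null : value;
    }
    value = userService.getById(key);
    if (value == null) {
        // Cache the miss with a short TTL so real data added later is not hidden for long
        redisService.set(key, NULL_PLACEHOLDER, 60);
    } else {
        redisService.set(key, value, 300);
    }
    return value;
}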

 

Cache breakdown

Under high concurrency, a large number of requests query the same key at the same time, and that key happens to have just expired. All of these requests then go to the database at the same moment. This phenomenon is called cache breakdown.

Solution:

Use a distributed lock: only the thread that acquires the lock queries the database and then writes the result back into the cache. Of course, each time before trying to acquire the lock, check whether the value is already in the cache, as in the sketch below.
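
A minimal sketch of this pattern, assuming a redisService wrapper that also exposes setIfAbsent (SET ... NX EX), set with a TTL, and delete; the helper names, TTL values, and retry loop are illustrative assumptions rather than the original author's implementation.

public String getWithLock(String key) throws InterruptedException {
    String value = redisService.get(key);
    if (value != null) {
        return value;                                // cache hit
    }
    String lockKey = "lock:" + key;
    // Only the thread that wins the lock is allowed to hit the database
    if (redisService.setIfAbsent(lockKey, "1", 10)) {
        try {
            value = userService.getById(key);        // rebuild from the database
            redisService.set(key, value, 300);       // write back with a TTL
            return value;
        } finally {
            redisService.delete(lockKey);            // release the lock
        }
    }
    // Other threads wait briefly, then check the cache again
    Thread.sleep(50);
    return getWithLock(key);
}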

 

Cache avalanche

A large number of keys in the cache expire at the same time, and the following wave of requests falls on the database instantly, exhausting its connections.

Beforehand:
① When setting a key's expiration time, add a random value to the expiration so that large numbers of keys do not expire at once (see the sketch after this list).
② Build a highly available cache architecture, for example a master-slave setup with Sentinel, or Redis Cluster, to avoid a total cache failure.
③ Use ehcache as a small local cache inside the system, so that some cache remains even after Redis goes down.
During:
① Rate-limit and degrade requests in the system to keep the database from being crushed.
Afterwards:
Use Redis persistence to restore the data quickly and rebuild the cache.
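
A minimal sketch of point ① (spreading out expiration times), again assuming a redisService.set(key, value, ttlSeconds) overload; the base TTL and the 300-second jitter cap are illustrative values.

public void setWithJitter(String key, String value, int baseTtlSeconds) {
    // Add up to 5 minutes of random jitter so keys do not all expire together
    int jitter = java.util.concurrent.ThreadLocalRandom.current().nextInt(300);
    redisService.set(key, value, baseTtlSeconds + jitter);
}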
 
1. Use a Redis cluster to reduce the probability of the service going down.
2. ehcache local cache + Hystrix rate limiting & degradation.
The ehcache local cache is there so that, even when Redis Cluster is completely unavailable, ehcache can still absorb part of the burst.
Hystrix is used for rate limiting and degradation. For example, if 5,000 requests arrive in one second but we assume the system can only handle 2,000 per second, this component lets 2,000 requests through and sends the remaining 3,000 to the rate-limiting/fallback logic (a sketch follows).
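
As an illustration of the Hystrix point above, a minimal command with a fallback might look like the following. The GetUserCommand class name, the UserService dependency, and the null fallback are assumptions; the actual limit (for example 2,000 requests per second) would be configured through Hystrix's thread pool or semaphore settings, which are not shown here.

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

public class GetUserCommand extends HystrixCommand<String> {

    private final String key;
    private final UserService userService;

    public GetUserCommand(String key, UserService userService) {
        super(HystrixCommandGroupKey.Factory.asKey("UserGroup"));
        this.key = key;
        this.userService = userService;
    }

    @Override
    protected String run() {
        // Normal path: query the backing service
        return userService.getById(key);
    }

    @Override
    protected String getFallback() {
        // Degraded path: runs when the command is rejected, times out, or fails
        return null;
    }
}

Usage: String value = new GetUserCommand(key, userService).execute();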



 
