Cache invalidation and its solutions under high concurrency

  When Redis is used as a cache in a project, several problems can arise: queries that always miss the cache, data that cannot be found, and a cache that effectively goes unused. These problems are commonly known as cache penetration, cache avalanche, and cache breakdown.
  

Cache penetration

  Cache penetration happens when a query asks for data that does not exist. Because the data is not in the cache, the request goes to the database; the database does not have it either, and since the null result is never written back to the cache, every request for this non-existent data hits the database, defeating the purpose of caching. An attacker can exploit this by flooding the service with requests for non-existent keys, putting sudden pressure on the database and eventually crashing it. Solution: cache the null result as well, with a short expiration time.
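
  As a sketch of that fix, assuming a Jedis client: the snippet below caches a sentinel value with a short TTL whenever the database returns nothing. The key format and the UserDao interface are hypothetical, used only for illustration.

    import redis.clients.jedis.Jedis;

    // Sketch of null-result caching to stop cache penetration.
    // The key format and the UserDao interface are illustrative assumptions.
    public class PenetrationSafeCache {

        private static final String NULL_MARKER = "NULL";
        private final Jedis jedis = new Jedis("localhost", 6379);
        private final UserDao userDao;

        public PenetrationSafeCache(UserDao userDao) {
            this.userDao = userDao;
        }

        public String getUserById(String id) {
            String key = "user:" + id;
            String cached = jedis.get(key);
            if (cached != null) {
                // the sentinel means "already checked, not in the database"
                return NULL_MARKER.equals(cached) ? null : cached;
            }
            String fromDb = userDao.findById(id); // may return null
            if (fromDb == null) {
                // cache the miss with a short TTL so repeated lookups skip the database
                jedis.setex(key, 60, NULL_MARKER);
            } else {
                jedis.setex(key, 3600, fromDb);
            }
            return fromDb;
        }

        // stand-in for the real data access layer
        public interface UserDao {
            String findById(String id);
        }
    }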
  

Cache avalanche

  Cache avalanche happens when all keys are written to the cache with the same expiration time, so they all expire at the same moment and every request is forwarded to the database, whose load spikes like an avalanche. Solution: add a random offset to each expiration time so the keys do not all expire at once.
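
  A minimal sketch of that idea, again assuming a Jedis client: the TTL is a fixed base plus a random offset, so keys written in the same batch still expire at scattered times. The one-hour base and ten-minute jitter window are arbitrary choices for illustration.

    import java.util.concurrent.ThreadLocalRandom;
    import redis.clients.jedis.Jedis;

    // Sketch of expiration jitter to avoid a cache avalanche.
    public class JitteredCache {

        private final Jedis jedis = new Jedis("localhost", 6379);

        public void put(String key, String value) {
            int baseTtl = 3600;                                    // one hour base TTL
            int jitter = ThreadLocalRandom.current().nextInt(600); // 0 to 10 extra minutes
            jedis.setex(key, baseTtl + jitter, value);             // expirations are scattered
        }
    }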
  

Cache breakdown

  Cache breakdown happens with a hot key that has an expiration time: if the key expires just before a large number of concurrent requests arrive, all of those queries are forwarded to the database at once. Solution: use a lock so that under heavy concurrency only one request queries the database while the others wait. Once the first request has fetched the data and written it to the cache, it releases the lock; the waiting requests then acquire the lock, check the cache first, and find the data already there.
  
  So how do we solve these problems? Cache penetration and cache avalanche are the easier two. For cache penetration, if the database query returns null, store a marker in the cache indicating that the value for this key is empty, and give that key a short expiration time. For cache avalanche, set a different expiration time for each key when writing to the cache, for example by adding a random offset, so the keys cannot all expire at the same moment. The solution to cache breakdown is more involved, so it is the focus of this post.
  
  Cache breakdown is solved by adding a lock, but today's services tend to be distributed systems, and once a project is deployed across multiple nodes, distributed locks come into play. This article covers local locks in a single application; the next post will focus on distributed locks.
  
  Locking mainly relies on the synchronized keyword: you can either declare the whole method synchronized or add a synchronized block inside the method, as the two snippets below show.

    // Option 1: a synchronized block inside the method
    public void testDemo() {
        synchronized (this) {
            // logic to execute
        }
    }

    // Option 2: marking the whole method synchronized (also locks on this)
    public synchronized void testDemo() {
        // logic to execute
    }
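
  Putting the pieces together, here is a minimal sketch of the full cache-breakdown pattern described above: check the cache, take the lock, check the cache again inside the lock, and only then query the database. The Jedis client, the key name, and the loadFromDb() stand-in are assumptions for illustration.

    import redis.clients.jedis.Jedis;

    // Sketch of the check-lock-recheck pattern against cache breakdown.
    // The key name and loadFromDb() are illustrative assumptions.
    public class BreakdownSafeCache {

        private final Jedis jedis = new Jedis("localhost", 6379);

        public String getCatalog() {
            String cached = jedis.get("catalog");
            if (cached != null) {
                return cached; // fast path: cache hit, no lock needed
            }
            synchronized (this) {
                // double-check: another thread may have repopulated the cache
                // while this one was waiting for the lock
                cached = jedis.get("catalog");
                if (cached != null) {
                    return cached;
                }
                String fromDb = loadFromDb();         // only one thread reaches the database
                jedis.setex("catalog", 3600, fromDb); // repopulate before releasing the lock
                return fromDb;
            }
        }

        // stand-in for the real database query
        private String loadFromDb() {
            return "catalog-data";
        }
    }

  Note the second lookup inside the synchronized block: without it, every thread that was waiting on the lock would still query the database in turn once it got in.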

  A local lock is very simple to use, essentially just the synchronized keyword. But our services today are mostly distributed, and if a local lock is used in a distributed deployment, each deployed instance has its own lock. Under high concurrency, each instance can only lock its own requests, so when the data is not in the cache, every instance will query the database once. That is how local locks behave in a distributed environment. The next post will cover using a distributed lock across all instances to guarantee the database is queried only once.

Origin blog.csdn.net/weixin_45481406/article/details/113198770