Distributed lock, distributed cache

Distributed lock: my personal understanding is that it is still just a lock, but one shared across distributed applications, and it can be an ordinary (exclusive) lock or a read-write lock.
Implementation: for example via Redis using Redisson. The application obtains a lock object keyed by a custom string (one key per lock; several locks can also be acquired at the same time, but beware of deadlock), calls tryLock, and only operates on the shared resource once the lock has been acquired.
Difficulty: there are of course many details to get right, such as a holder that blocks after acquiring the lock and never releases it, and so on.
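A minimal sketch of this pattern with Redisson follows; the key name, wait time and lease time are illustrative, not prescriptive. The lease time is what guards against the "acquired but never released" problem mentioned above.

```java
import java.util.concurrent.TimeUnit;
import org.redisson.Redisson;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class OrderLockDemo {
    public static void main(String[] args) throws InterruptedException {
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient redisson = Redisson.create(config);

        // One custom string key == one lock.
        RLock lock = redisson.getLock("lock:order:1001");
        // Wait up to 3s to acquire; the 10s lease means Redis drops the lock
        // even if the holder hangs and never calls unlock().
        if (lock.tryLock(3, 10, TimeUnit.SECONDS)) {
            try {
                // ... operate on the shared resource here ...
            } finally {
                if (lock.isHeldByCurrentThread()) {
                    lock.unlock(); // always release in finally
                }
            }
        }
        redisson.shutdown();
    }
}
```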

Distributed cache in practice, how it typically evolves:

  • Performance: as the volume of read operations grows, the usual progression is:
    database read/write separation (master/slave) -> multiple slave databases -> add a cache layer (memcached) -> move to Redis
  • Reliability:
    the cache "avalanche" problem is a real headache (avalanche: a batch of cache entries expires at once, a large number of concurrent requests all go to the database at the same time, and the database gets stuck). One mitigation is to add an extra cache "mark" key whose expiry time is shorter than the expiry of the corresponding data; when the mark has expired, the first request is still answered from the not-yet-expired cached value while a background thread/queue reloads the data from the database and resets both expiry times (see the sketch after this list).
    The cache also faces the challenge of recovering quickly after a failure.
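A rough sketch of the mark-key idea using Jedis; the key prefixes, TTL values and the `loadFromDatabase` helper are assumptions for illustration only.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class AvalancheGuard {
    private final JedisPool pool = new JedisPool("127.0.0.1", 6379);
    private final ExecutorService refresher = Executors.newSingleThreadExecutor();

    private static final int MARK_TTL = 60;   // mark expires before the data
    private static final int DATA_TTL = 120;  // real value lives longer

    public String get(String key) {
        try (Jedis jedis = pool.getResource()) {
            String value = jedis.get("data:" + key);
            String mark = jedis.get("mark:" + key);
            if (mark == null && value != null) {
                // Mark expired: return the still-cached value immediately and
                // refresh in a background queue, so concurrent requests do not
                // all hit the database at the same moment.
                refresher.submit(() -> refresh(key));
            }
            return value;
        }
    }

    private void refresh(String key) {
        String fresh = loadFromDatabase(key); // placeholder for the real DB query
        try (Jedis jedis = pool.getResource()) {
            jedis.setex("data:" + key, DATA_TTL, fresh);
            jedis.setex("mark:" + key, MARK_TTL, "1");
        }
    }

    private String loadFromDatabase(String key) {
        return "value-of-" + key; // hypothetical database loader
    }
}
```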

Problems:
1. Cache penetration: the data does not exist in the database, so naturally it never gets into the cache either. Every user query misses the cache, falls through to the database, and comes back empty, meaning requests bypass the cache and hit the database directly; this is the often-mentioned cache hit-rate problem. It can be mitigated by caching a default/placeholder value when the database query returns null.

An alternative idea: if a key is not in the cache, forbid falling through to the database at all, and keep the cache and the database consistent through a background synchronization job instead; but how practical that is remains to be confirmed.
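A small sketch of the "cache the null result" mitigation with Jedis; the sentinel value, TTLs and `queryDatabase` helper are illustrative assumptions.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class PenetrationGuard {
    private static final String NULL_SENTINEL = "__NULL__"; // stands for "no such row"
    private final JedisPool pool = new JedisPool("127.0.0.1", 6379);

    public String get(String key) {
        try (Jedis jedis = pool.getResource()) {
            String cached = jedis.get(key);
            if (cached != null) {
                return NULL_SENTINEL.equals(cached) ? null : cached;
            }
            String fromDb = queryDatabase(key); // hypothetical DB lookup
            if (fromDb == null) {
                // Cache the miss with a short TTL so repeated queries for a
                // non-existent key stop hammering the database.
                jedis.setex(key, 30, NULL_SENTINEL);
                return null;
            }
            jedis.setex(key, 300, fromDb);
            return fromDb;
        }
    }

    private String queryDatabase(String key) {
        return null; // pretend the row does not exist
    }
}
```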

Distributed cache: my personal understanding is that since a single Redis instance is single-threaded, distribution has to be implemented by the application itself, usually with consistent hashing to spread keys across instances. Each instance runs in master-slave mode so that one instance becoming unavailable does not let requests punch through to the database.
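A minimal consistent-hash ring, as one way the application side could route keys to Redis instances; the node names and virtual-node count are assumptions for illustration.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;
import java.util.SortedMap;
import java.util.TreeMap;

// Keys map to the first node found clockwise on the ring; virtual nodes
// smooth out the distribution across physical instances.
public class ConsistentHashRing {
    private final TreeMap<Long, String> ring = new TreeMap<>();
    private static final int VIRTUAL_NODES = 100;

    public ConsistentHashRing(List<String> nodes) {
        for (String node : nodes) {
            for (int i = 0; i < VIRTUAL_NODES; i++) {
                ring.put(hash(node + "#" + i), node);
            }
        }
    }

    public String nodeFor(String key) {
        SortedMap<Long, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    private long hash(String s) {
        try {
            byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) {
                h = (h << 8) | (d[i] & 0xFF); // first 8 digest bytes as a ring position
            }
            return h;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

For example, `new ConsistentHashRing(List.of("redis-a:6379", "redis-b:6379", "redis-c:6379")).nodeFor("user:42")` picks the instance for that key; when a node is added or removed, only the keys on the affected arc of the ring move, which is what makes consistent hashing attractive for scaling.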

Open questions: when scaling the number of machines up or down, how should cached data be migrated? And how should batch (multi-key) queries be handled?
