The Love-Hate Relationship Between Cache Middleware and the Database

Foreword

Ma Mei D quit his job on impulse a while back, ran straight into the epidemic, and panicked. Lately he has been sending out résumés everywhere looking for work; after all, he still has to eat.

Sure enough, he received an interview invitation from a company, and an awkward interview is about to begin...

Interviewer: What are the usual solutions for relieving pressure on the database?

Ma Mei D: Caching is the most common one, including the browser cache, Nginx reverse-proxy cache, JVM-level cache, cache middleware, and the database's own cache.

Interviewer: With cache middleware in front of a database, how do you handle query and update operations?

Ma Mei D: For an update request, update the database first and then delete the cache; for a query, if the cache misses, query the database and put the result into the cache.
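The read and write paths Ma Mei D describes are the classic cache-aside pattern. A minimal sketch, using an in-memory ConcurrentHashMap as a stand-in for the cache middleware and a plain HashMap as a stand-in for the database (both names are hypothetical, not from the original):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside sketch: the cache and the "database" are in-memory stand-ins.
public class CacheAside {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final Map<String, String> db = new HashMap<>();

    // Update path: write the database first, then delete the cache entry.
    public void update(String key, String value) {
        db.put(key, value);
        cache.remove(key);
    }

    // Query path: on a cache miss, load from the database and fill the cache.
    public String query(String key) {
        String result = cache.get(key);
        if (result != null) {
            return result;
        }
        result = db.get(key);
        if (result != null) {
            cache.put(key, result);
        }
        return result;
    }
}
```

The interleaving problems the interviewer raises next come precisely from these two paths running concurrently without coordination.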

Interviewer: If a query request and an update request happen concurrently, the steps shown in Figure 1 can occur. How do you avoid that situation?

Ma Mei D: (Panicking. I've never actually thought about this; I'll just make something up...) Then delete the cache first and update the database afterwards.

Interviewer: (Is this kid trying to bluff his way through?) As shown in Figure 2, problems still occur.


Update the database first, then delete the cache

(Figure 1)
As shown in Figure 1, when an update request and a query request run concurrently, the cache and the database can end up inconsistent: the query misses the cache and reads the old value from the database, the update then writes the database and deletes the (already empty) cache, and finally the query writes the stale value back into the cache.


Delete the cache first, then update the database

(Figure 2)
As shown in Figure 2, when an update request and a query request run concurrently, the cache and the database can still end up inconsistent: the update deletes the cache, a concurrent query misses, reads the old value from the database and writes it into the cache, and only then does the update write the new value to the database, leaving the cache stale.


Solutions

Premise: using a cache at all means tolerating some temporary inconsistency; the goal is eventual consistency.

Set an expiration time

Benefit: once a cache entry expires, the application fetches the latest data from the database and puts it back into the cache, restoring consistency.

Drawback: cache expiration can trigger cache breakdown (a hot key expiring under heavy load), cache avalanche, and similar problems.
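With Redis the expiry would typically be set when writing the value (for example via an overload of redisTemplate.opsForValue().set that takes a timeout). The sketch below illustrates the same idea with a hypothetical in-memory cache whose entries expire, so stale data is at worst served until the TTL elapses:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// TTL cache sketch: entries become invisible once their deadline passes.
public class TtlCache {
    private static class Entry {
        final String value;
        final long expiresAtMillis;
        Entry(String value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<String, Entry> entries = new ConcurrentHashMap<>();

    // Store a value that is valid for ttlMillis milliseconds.
    public void put(String key, String value, long ttlMillis) {
        entries.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
    }

    // Returns null if the key is absent or its TTL has elapsed.
    public String get(String key) {
        Entry e = entries.get(key);
        if (e == null || System.currentTimeMillis() >= e.expiresAtMillis) {
            entries.remove(key);
            return null;
        }
        return e.value;
    }
}
```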

Locking

Whether with a standalone lock or a distributed lock, this approach guarantees that operations from different threads never interleave into an inconsistent result, but in practice projects rarely adopt it. Caching exists to improve performance; putting a lock around the cache does more harm than good.

Locking every operation that touches the cache and the database together drastically reduces the system's ability to handle concurrency.
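A sketch of what that locking looks like, again with hypothetical in-memory stand-ins: one lock serializes every cache-plus-database operation, so the races from Figures 1 and 2 cannot happen, at the cost of all requests queueing behind each other.

```java
import java.util.HashMap;
import java.util.Map;

// Lock sketch: a single lock serializes all cache + database access.
public class LockedStore {
    private final Map<String, String> cache = new HashMap<>();
    private final Map<String, String> db = new HashMap<>();
    private final Object lock = new Object();

    // Update and cache-delete happen atomically with respect to queries.
    public void update(String key, String value) {
        synchronized (lock) {
            db.put(key, value);
            cache.remove(key);
        }
    }

    // Query, database fallback, and cache fill happen atomically too.
    public String query(String key) {
        synchronized (lock) {
            String result = cache.get(key);
            if (result == null) {
                result = db.get(key);
                if (result != null) {
                    cache.put(key, result);
                }
            }
            return result;
        }
    }
}
```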

Cache Center

The problems above can be summed up in one sentence: there are too many entry points that modify the cache.

To solve them, we can hand the cache over to a single module to manage.
(schematic diagram)
Once a cache center is established, business systems no longer manage the cache themselves; the cache center maintains it uniformly.

Code Example:

/**
 * Before the cache center is established
 */
public String findHotData() {
    // Query the cache first
    String result = redisTemplate.opsForValue().get(HOT_DATA_KEY);
    if (StringUtils.isNotBlank(result)) {
        return result;
    }
    // On a cache miss, query the database
    result = dao.findHotData();
    if (StringUtils.isNotBlank(result)) {
        // Put the queried data into the cache
        redisTemplate.opsForValue().set(HOT_DATA_KEY, result);
    }
    return result;
}

/**
 * After the cache center is established
 */
public String findHotData() {
    return redisTemplate.opsForValue().get(HOT_DATA_KEY);
}

Data preheating

(schematic diagram)

For hot data, the cache center can perform the following operations:

  • Load the data into the cache when the cache system or the application (re)starts
  • Refresh the cache periodically via a background task
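The two steps above can be sketched together, assuming a hypothetical loadHotData() source standing in for the database query: fill the cache once at startup, then refresh it on a schedule with a background task.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Preheat sketch: fill the cache at startup, then refresh on a schedule.
public class CachePreheater {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Stand-in for loading hot rows from the database (hypothetical data).
    private Map<String, String> loadHotData() {
        return Map.of("hot:1", "value-1", "hot:2", "value-2");
    }

    public void start(long refreshSeconds) {
        cache.putAll(loadHotData());               // preheat on startup
        scheduler.scheduleAtFixedRate(
                () -> cache.putAll(loadHotData()),  // periodic refresh
                refreshSeconds, refreshSeconds, TimeUnit.SECONDS);
    }

    public String get(String key) {
        return cache.get(key);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```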

Listen to the database change log, update the cache

(Schematic)
(tool: canal)
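canal parses the MySQL binlog and delivers row-change events to subscribers. Wiring up a real canal client needs a running MySQL and canal deployment, so the sketch below only mimics the pattern with a hypothetical RowChange event type (not canal's actual API) driving an in-memory cache:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Binlog-listener sketch: row-change events drive cache maintenance.
public class BinlogCacheUpdater {
    // Hypothetical stand-in for a row-change event parsed from the binlog.
    public static class RowChange {
        public final String key;
        public final String newValue; // null means the row was deleted
        public RowChange(String key, String newValue) {
            this.key = key;
            this.newValue = newValue;
        }
    }

    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // Apply one change event to the cache: update on write, evict on delete.
    public void onRowChange(RowChange event) {
        if (event.newValue == null) {
            cache.remove(event.key);
        } else {
            cache.put(event.key, event.newValue);
        }
    }

    public String get(String key) {
        return cache.get(key);
    }
}
```

Because the cache is updated from the database's own change log, business code never touches the cache directly, which is exactly the "few entry points" property the cache center aims for.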



Origin blog.csdn.net/Code_shadow/article/details/104831013