When should you use a cache faster than Redis, and how?

I signed up for the first challenge of the Golden Stone Project to share the 100,000 prize pool. This is my 3rd article for the event.

Hello everyone, I am "Between the Sun and the Stars". Writing takes effort, so if you find this useful, please follow, like, comment, bookmark, and share. Thank you.

Preface

Imagine a scenario: a user wants to withdraw the balance he has accumulated from long hours of watching videos in the "speed" version of an app, so he taps the withdraw button and, just like that, the money lands in his bank card. An action this simple for the user often involves more than a dozen services on the back end (the larger the company and the stricter the compliance requirements, the longer the call chain). You are responsible for a cross-validation service, which checks whether the billing ID, fund-flow ID, payer account, and payee account passed in by the upstream match what was originally configured for the application.

For a good product experience, the big boss requires that the whole request take at most 1s before the user sees the result, so the owners of the other services haggled over the budget and left only 50ms for yours. You think it over and decide it's easy: just use a Redis cache. You bang out the Redis calls, everything works in the test environment, but after going live you're dumbfounded: because of network jitter and similar issues, your service keeps timing out. The team lead orders you to fix it as soon as possible, because if your timeouts get him chewed out by the big boss, you can forget about your performance review. At this point, how do you optimize?

Theory

If you want to be faster than Redis, you first have to understand why Redis is fast. The most direct reason is that all of Redis's data lives in memory, and reading data from memory is several orders of magnitude faster than reading it from disk.

So if you want to be faster than Redis, the only place left to save time is data transmission: skip the network hops between servers and even between processes, and put the data directly inside the JVM, say in a ConcurrentMap. But since this is a cache, it also needs a full set of eviction policies, maximum-size limits, refresh strategies, and so on, which is far too much to build by hand. Surely some large company has already solved this in a similar scenario and open-sourced it for the reputation; a quick search on GitHub turns up a ready-made solution. And so today's protagonist appears: Guava Cache.
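To see why "just use a ConcurrentMap" is not enough, here is a minimal, hypothetical sketch of a hand-rolled in-process cache with a crude TTL check; the class and its loader parameter are placeholders of my own, not anything from Guava. It works, but you would still have to bolt on size limits, eviction, refresh, and statistics yourself, which is exactly what Guava Cache provides out of the box.

    // A naive hand-rolled in-JVM cache (assumes java.util.concurrent.* and
    // java.util.function.Function imports): it works, but has no size limit,
    // no eviction policy, and no refresh support.
    class NaiveLocalCache<K, V> {

      private static class Entry<V> {
        final V value;
        final long expireAtMillis;

        Entry(V value, long expireAtMillis) {
          this.value = value;
          this.expireAtMillis = expireAtMillis;
        }
      }

      private final ConcurrentMap<K, Entry<V>> map = new ConcurrentHashMap<>();
      private final long ttlMillis;

      NaiveLocalCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
      }

      V get(K key, Function<K, V> loader) {
        Entry<V> e = map.get(key);
        if (e == null || e.expireAtMillis < System.currentTimeMillis()) {
          // Reload on a miss or after expiry; note there is no per-key locking,
          // so concurrent callers may all hit the loader at once.
          V value = loader.apply(key);
          e = new Entry<>(value, System.currentTimeMillis() + ttlMillis);
          map.put(key, e);
        }
        return e.value;
      }
    }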

Practice

First, a piece of code to show the overall use of Guava Cache. The cache is built from two parts: a CacheBuilder and a CacheLoader. CacheBuilder is responsible for creating the cache object, and at creation time you configure its maximum capacity, expiration mode, and removal listener; CacheLoader is responsible for loading the value for a given key.

LoadingCache<Key, Config> configs = CacheBuilder.newBuilder()
       .maximumSize(5000)                        // at most 5000 entries
       .expireAfterWrite(30, TimeUnit.MINUTES)   // entries expire 30 minutes after being written
       .removalListener(MY_LISTENER)             // callback invoked whenever an entry is removed
       .build(
           new CacheLoader<Key, Config>() {
             @Override
             public Config load(Key key) throws AnyException {
               // called on a cache miss to load the value for this key
               return loadFromRedis(key);
             }
           });
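A minimal usage sketch, reusing the Key, Config, and loadFromRedis placeholders above: reading through the cache triggers loadFromRedis only on a miss, and getIfPresent() never loads.

    // Reading through the cache: loadFromRedis runs only if the key is absent.
    // get() declares ExecutionException, so either handle it or use getUnchecked().
    Config config = configs.getUnchecked(someKey);

    // Read-only probe: returns null instead of loading when the key is absent.
    Config maybeCached = configs.getIfPresent(someKey);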

Applicable scenarios

  1. Everything has a price: you are willing to trade memory for speed.
  2. The cached data set is not too large; too much data can cause an OutOfMemoryError.
  3. The same key is accessed many times.

CacheLoader

A CacheLoader does not have to be specified at build time. If your data has several different loading paths, you can pass a Callable to get() instead.

  cache.get(key, new Callable<Value>() {
    @Override
    public Value call() throws AnyException {
      return doThingsTheHardWay(key);
    }
  });
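A minimal sketch of that, assuming the same Key/Value placeholders and a hypothetical doThingsTheHardWay helper: build a plain Cache with no CacheLoader, then supply the loading logic at each call site (with Java 8+ a lambda keeps it short).

    // No CacheLoader: build() returns a plain Cache instead of a LoadingCache.
    Cache<Key, Value> cache = CacheBuilder.newBuilder()
        .maximumSize(1000)
        .build();

    // Each call site decides how a missing value is loaded;
    // get(key, callable) throws ExecutionException if the callable fails.
    Value value = cache.get(key, () -> doThingsTheHardWay(key));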

Expiration Policy

Expiration strategies fall into two categories: size-based and time-based. The size limit can be specified with CacheBuilder.maximumSize(long) or CacheBuilder.maximumWeight(long). maximumSize(long) is appropriate when every value occupies roughly the same amount of space (or the difference is negligible), so only the number of keys matters. maximumWeight(long) instead computes a weight for each value and keeps the total weight below the configured limit, where the weight of each value is calculated by a Weigher you supply via weigher(Weigher).
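A minimal sketch of a weight-based limit, assuming the cached Config exposes a hypothetical sizeInBytes() method: the weigher is consulted once, when an entry is inserted.

    // Cap the cache by total estimated bytes rather than by entry count.
    LoadingCache<Key, Config> weighted = CacheBuilder.newBuilder()
        .maximumWeight(10 * 1024 * 1024)   // roughly 10 MB of values in total
        .weigher(new Weigher<Key, Config>() {
          @Override
          public int weigh(Key key, Config value) {
            return value.sizeInBytes();    // hypothetical size estimate
          }
        })
        .build(new CacheLoader<Key, Config>() {
          @Override
          public Config load(Key key) {
            return loadFromRedis(key);
          }
        });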

The time basis is easy to understand. It comes in two flavors, expireAfterAccess(long, TimeUnit) and expireAfterWrite(long, TimeUnit): how long after the last read, or how long after being written, an entry expires. With expireAfterAccess, every read of an entry renews its expiration timer.
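A small sketch of the two time-based settings, again with the Key/Config placeholders; the builder accepts both at once, and an entry expires as soon as either condition is met.

    // Expire entries 10 minutes after the last read and, regardless of reads,
    // 30 minutes after they were written.
    Cache<Key, Config> timed = CacheBuilder.newBuilder()
        .expireAfterAccess(10, TimeUnit.MINUTES)
        .expireAfterWrite(30, TimeUnit.MINUTES)
        .build();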

Refresh policy

CacheBuilder.refreshAfterWrite(long, TimeUnit) provides automatic refreshing. Note that refreshing is lazy: it is only actually performed when the key is queried again after the interval has passed, and the default reload implementation simply calls load synchronously. Override CacheLoader.reload if you want the refresh to run asynchronously.
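A hedged sketch of that, assuming a pre-built refreshExecutor and the loadFromRedis placeholder: refreshAfterWrite marks entries stale after 5 minutes, and the overridden reload pushes the actual reload onto a background executor, so the querying thread keeps getting the old value instead of blocking.

    // Requires com.google.common.util.concurrent.ListenableFuture/ListenableFutureTask.
    LoadingCache<Key, Config> refreshing = CacheBuilder.newBuilder()
        .maximumSize(5000)
        .refreshAfterWrite(5, TimeUnit.MINUTES)
        .build(new CacheLoader<Key, Config>() {
          @Override
          public Config load(Key key) {
            return loadFromRedis(key);   // synchronous load on a true miss
          }

          @Override
          public ListenableFuture<Config> reload(Key key, Config oldValue) {
            // Run the refresh in the background; readers see oldValue until it completes.
            ListenableFutureTask<Config> task =
                ListenableFutureTask.create(() -> loadFromRedis(key));
            refreshExecutor.execute(task);   // refreshExecutor is an assumed Executor
            return task;
          }
        });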

Summary

Having posed a question, I have to give the answer, otherwise this article would be left unfinished. So how did I solve the problem in the preface? I built a two-level cache out of Guava Cache and Redis: when the service starts it scans the configuration table and preloads all the configuration content into the Guava cache. The Guava refresh interval is set to five minutes, with the refresh operation overridden to force a reload; the Redis expiration is set to one day, and the corresponding Redis entry is deleted whenever the database content is updated. This way the local Guava cache is hit in the vast majority of cases, and data is inconsistent for at most 5 minutes (which the business can accept). Everything has a price; as a back-end developer you weigh the options and pick the solution whose cost and business impact are both acceptable.
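A minimal sketch of that two-level lookup, with loadFromRedis, loadFromDb, and writeToRedis as hypothetical helpers: Guava is refreshed every five minutes and falls back to Redis, which in turn falls back to the database and backfills itself with a one-day TTL.

    class ConfigCache {
      // Level 1: local Guava cache, preloaded at startup and refreshed every 5 minutes.
      private final LoadingCache<String, Config> localCache = CacheBuilder.newBuilder()
          .maximumSize(10000)
          .refreshAfterWrite(5, TimeUnit.MINUTES)
          .build(new CacheLoader<String, Config>() {
            @Override
            public Config load(String key) {
              return loadFromRedisOrDb(key);   // level 2 fallback
            }
          });

      // Level 2: Redis with a one-day TTL, backfilled from the database on a miss.
      private Config loadFromRedisOrDb(String key) {
        Config config = loadFromRedis(key);
        if (config == null) {
          config = loadFromDb(key);
          writeToRedis(key, config);   // hypothetical: SET with a one-day TTL
        }
        return config;
      }
    }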


Origin juejin.im/post/7146946847465013278