Spring Boot: caching is more than Redis, learn to use the local cache Ehcache

0 Preface

With the popularity of Redis, many developers are familiar with Redis as a distributed cache. In quite a few real-world scenarios, however, Redis is not actually needed: a simpler local cache can meet the caching requirement.

Today, let's take a look at the local cache component Ehcache.

1. Introduction to ehcache

1.1 Introduction

Ehcache is a local cache component written in Java. It does not need to be installed or deployed separately; introducing the jar dependency is enough to start caching.

A local cache stores temporary data in the JVM heap memory. Ehcache itself also supports an Off-Heap Store mechanism that uses off-heap memory. Compared with Redis, a local cache offers higher performance and faster response times.

Ehcache's local cache also supports features such as expiration time, maximum capacity, and persistence, making it suitable for various caching scenarios.

Official document address: https://www.ehcache.org/documentation/

1.2 The difference between a local cache and Redis

The main differences between a local cache and Redis are:

  • Architecture:

    The local cache is based on a single-node architecture: the data is only available locally and cannot be shared with other services unless exposed through a service call. Redis is built for a distributed architecture and supports cross-service access.
    So when data needs to be shared across services, Redis is the better fit; if the data is only needed locally, a local cache can be considered.

  • Performance:

    A local cache lives in local memory and involves no network IO, so it is much faster than Redis. For large data volumes, however, Redis should still be preferred: a local cache suits small, simply structured data, not complex business data.

  • Function expansion:

    Redis supports persistence, pub/sub, clustering, master-slave replication, and more, while Ehcache leans toward simple caching scenarios. Although it also supports persistence, it is not recommended as a cache for large or complex scenarios. If the scenario is simple and lightweight, with strict latency requirements, a local cache is a good choice.

2. Use of ehcache

1. Create a Spring Boot project; the Spring Boot version used here is 2.6.13

2. Introduce the ehcache component dependencies

It should be noted here that Ehcache 2.x (groupId net.sf.ehcache) and Ehcache 3.x (groupId org.ehcache) are configured differently; this article uses the 2.x line.

        <dependency>
            <groupId>net.sf.ehcache</groupId>
            <artifactId>ehcache</artifactId>
            <version>2.10.9.2</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-cache</artifactId>
            <version>2.6.13</version>
        </dependency>
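For contrast, the Ehcache 3.x line is published under different Maven coordinates (version 3.10.8 is shown here as an example; check Maven Central for the latest). Note that 3.x uses its own XML schema and the JSR-107 javax.cache API, so the ehcache.xml shown later in this article would not work with it:

        <dependency>
            <groupId>org.ehcache</groupId>
            <artifactId>ehcache</artifactId>
            <version>3.10.8</version>
        </dependency>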

3. Add the @EnableCaching annotation to the startup class to enable caching

@SpringBootApplication
@EnableCaching
public class LocalCacheDemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(LocalCacheDemoApplication.class, args);
    }

}

4. Add configuration to the application.yml configuration file

spring:
  profiles:
    active: dev
  cache:
    type: ehcache
    ehcache:
      config: classpath:ehcache.xml

5. Create an ehcache.xml configuration file under the resources folder. Note that a cache named user is created separately to hold the user cache later; if different caches need different names, create a separate cache tag for each.

Label introduction:

defaultCache: the default cache configuration tag
cache: defines a named cache; name is the cache name
diskStore: the disk path used when the cache is written to disk

Attribute introduction:

eternal: whether the cache is permanently valid; if true, timeToIdleSeconds and timeToLiveSeconds are ignored
maxElementsInMemory: the maximum number of elements to hold in memory
overflowToDisk: whether to write to disk when the in-memory cache exceeds the limit; the default is true
overflowToOffHeap: whether to use off-heap memory when heap memory exceeds the limit (paid enterprise feature)
diskPersistent: whether the disk cache survives restarts
timeToLiveSeconds: how long after creation an entry expires
timeToIdleSeconds: how long an entry may go unaccessed before it expires (0 means no idle limit)
diskExpiryThreadIntervalSeconds: the interval at which the disk-cache expiry thread runs
memoryStoreEvictionPolicy: the eviction policy. LRU: the least recently used element is evicted first; LFU: the least frequently used element is evicted first; FIFO: the element that entered first is evicted first
maxBytesLocalHeap: the maximum JVM heap memory the cache may occupy; 0 means no limit; supports K, M, or G units
maxBytesLocalOffHeap: the maximum off-heap memory the cache may occupy; 0 means no limit; supports K, M, or G units (paid enterprise feature)
maxBytesLocalDisk: the maximum disk space the cache may occupy; 0 means no limit; supports K, M, or G units
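To build intuition for the LRU policy above, here is a minimal plain-Java sketch using a LinkedHashMap in access order. This is an illustration of the eviction behavior, not Ehcache's actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruSketch {
    // A capacity-bounded map that evicts the least recently *accessed*
    // entry once the limit is exceeded, mirroring what
    // memoryStoreEvictionPolicy="LRU" does inside Ehcache.
    static <K, V> Map<K, V> lruCache(int maxEntries) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public static void main(String[] args) {
        Map<Integer, String> cache = lruCache(2);
        cache.put(1, "a");
        cache.put(2, "b");
        cache.get(1);      // touch key 1 so it becomes most recently used
        cache.put(3, "c"); // capacity exceeded: key 2 (least recent) is evicted
        System.out.println(cache.keySet()); // [1, 3]
    }
}
```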

<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd"
         updateCheck="false">

    <!-- Path used when the cache is written to disk -->
    <diskStore path="/Users/wuhanxue/Downloads/ehcache" />

    <defaultCache
            eternal="false"
            maxElementsInMemory="10000"
            overflowToDisk="false"
            diskPersistent="false"
            timeToLiveSeconds="3600"
            timeToIdleSeconds="0"
            diskExpiryThreadIntervalSeconds="120"
            memoryStoreEvictionPolicy="LRU"/>

    <cache
            name="user"
            eternal="false"
            maxElementsInMemory="10000"
            overflowToDisk="false"
            diskPersistent="false"
            timeToLiveSeconds="3600"
            timeToIdleSeconds="0"
            diskExpiryThreadIntervalSeconds="120"
            memoryStoreEvictionPolicy="LRU"/>

</ehcache>

6. Using the cache: annotate the read method with @Cacheable and the update method with @CachePut.
The example below simulates the data instead of querying a database; connect a real data source when you implement this yourself.

@RestController
@RequestMapping("user")
public class UserController {

    @GetMapping("get")
    @Cacheable(cacheNames = "user", key = "#id")
    public User getById(Integer id) {
        System.out.println("get: first fetch, not served from the cache");
        User user = new User();
        user.setId(id);
        user.setAge(18);
        user.setName("benjamin_" + id);
        user.setSex(true);
        return user;
    }

    @PostMapping("update")
    @CachePut(cacheNames = "user", key = "#search.id")
    public User update(@RequestBody User search) {
        System.out.println("update: refreshing the cache");
        User user = new User();
        Integer id = search.getId();
        user.setId(id);
        user.setAge(search.getAge() != null ? search.getAge() + 1 : 0);
        user.setName("update_benjamin_" + id);
        user.setSex(true);
        return user;
    }

}
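Conceptually, @Cacheable behaves like a look-aside wrapper around the annotated method. A plain-Java sketch of that behavior (not Spring's actual implementation, which uses proxies and a CacheManager):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

public class CacheableSketch {
    final Map<Integer, String> userCache = new ConcurrentHashMap<>();
    final AtomicInteger realCalls = new AtomicInteger();

    // Roughly what @Cacheable(cacheNames = "user", key = "#id") does:
    // return the cached value on a hit, and invoke the real method
    // (then store its result) only on a miss.
    String getById(Integer id, Function<Integer, String> realMethod) {
        return userCache.computeIfAbsent(id, key -> {
            realCalls.incrementAndGet(); // counts actual method invocations
            return realMethod.apply(key);
        });
    }
}
```

Calling getById twice with the same id invokes the underlying function only once; the second call is answered from the map, which is exactly the behavior the test section below demonstrates against the real controller.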

3. Test

1. Call the query interface: localhost:8080/user/get?id=1


2. On the first call, the "first fetch, not served from the cache" message is printed. Call the interface again and the message is not printed, yet the data is still returned normally, which shows the response was served from the cache.


3. Call the update interface


4. Call the query interface again; the updated data is returned, showing that the cache update succeeded.


4. Precautions

Use maxElementsInMemory sparingly

maxElementsInMemory sets the maximum number of cached elements, and it should be used with caution. Capacity should generally be controlled by memory footprint, not by element count: if some values are particularly large, the element count may stay within the limit while memory usage keeps growing until the JVM throws an OutOfMemoryError.

We can simulate this with an interface that generates a large amount of data; the generateMemoryString method can be found in the source repository linked at the end of the article.

1. Write the interface

    @GetMapping("build")
    @Cacheable(cacheNames = "user", key = "#id")
    public User build(Integer id) {
        System.out.println("build: first fetch, not served from the cache");
        User user = new User();
        user.setId(id);
        user.setAge(18);
        // generate a string of the requested size
        user.setName(generateMemoryString(id));
        user.setSex(true);
        return user;
    }
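The real generateMemoryString lives in the linked source repo; as a rough idea of what such a helper can look like, here is a hypothetical stand-in. The parameter name and the assumption that it is a size in MB are mine, inferred from the id=100 / 100M-heap experiment in this section:

```java
import java.util.Arrays;

public class MemoryStringHelper {
    // Hypothetical stand-in for the repo's generateMemoryString:
    // build a String occupying roughly sizeMb megabytes on the heap
    // (a Java char is 2 bytes, hence the division by 2).
    static String generateMemoryString(int sizeMb) {
        char[] chars = new char[sizeMb * 1024 * 1024 / 2];
        Arrays.fill(chars, 'x');
        return new String(chars);
    }
}
```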

2. Limit the project's JVM heap to 100m so the error is quicker to reproduce


3. Call the interface localhost:8080/user/build?id=100. Because the interface generates large data into the local cache while the JVM heap is capped at 100M, the call fails with a heap memory overflow error.


4. Therefore, use this configuration item with caution. It can be replaced by maxBytesLocalHeap and maxBytesLocalDisk, which cap how much memory and disk the cache may occupy:

<cache
            name="user"
            eternal="false"
            maxBytesLocalHeap="50M"
            maxBytesLocalDisk="200M"
            overflowToDisk="false"
            diskPersistent="false"
            timeToLiveSeconds="3600"
            timeToIdleSeconds="0"
            diskExpiryThreadIntervalSeconds="120"
            memoryStoreEvictionPolicy="LRU"
    />

If both maxBytesLocalHeap and maxElementsInMemory are configured, whichever limit is reached first triggers eviction.

A single value that is too large can still cause an OOM

Although maxBytesLocalHeap limits the maximum memory used, say to 100M, when four 30M entries arrive, earlier keys are evicted according to the configured policy to make room for the new data.

But if a newly arriving entry is itself very large, say more than 100M, it fills the heap in one shot before any earlier keys can be evicted, so an OOM can still occur.

There are two ways to deal with this. One is to guarantee at the business-code level that no single entry exceeds the threshold. The other is to set up a global error handler that catches the resulting OutOfMemoryError and returns a fallback value or a dedicated status code.
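The second approach can be sketched without Spring as a wrapper that catches the error and falls back. In a real Spring project this logic would sit in a @RestControllerAdvice exception handler instead; note also that catching OutOfMemoryError is a last resort, since the heap may already be in a bad state when the handler runs:

```java
import java.util.function.Supplier;

public class OomFallback {
    // Run a cache-backed lookup; if the heap blows up, return a
    // fallback value instead of letting the OutOfMemoryError
    // propagate and kill the request.
    static <T> T withFallback(Supplier<T> lookup, T fallback) {
        try {
            return lookup.get();
        } catch (OutOfMemoryError e) {
            // Log and return the bottom-line value / error status here.
            return fallback;
        }
    }
}
```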

Demo source code

https://gitee.com/wuhanxue/wu_study/tree/master/demo/local_cache_demo

Origin blog.csdn.net/qq_24950043/article/details/130296868