Implementing a local cache with the Redis client-side caching mechanism


Foreword

Caching has always been one of our main tools for improving a project's response time, and high-concurrency projects often introduce a multi-level caching mechanism to further improve query efficiency. So how do we implement a multi-level cache, and how do we keep the data consistent between the levels?
This article introduces how to implement a local cache on top of the Redis client-side caching mechanism.


1. Local cache and distributed cache

A cache is a key-value data structure stored in memory, and caches can generally be divided into remote caches and local caches.

  • In the remote cache solution, the application process and the cache process generally run on different servers and communicate via RPC or HTTP. This fully decouples the application service from the cache and supports storing large amounts of data. Common distributed caches include Redis, Memcache, etc.
    Note: any cache that requires a network request is a remote cache. Even if the Redis service and the application server are deployed on the same machine, it is still not a local cache.

  • In the local cache solution, the cache lives in the same process as the application, so there is no network overhead and access is very fast, but it is limited by memory and is not suitable for storing large amounts of data. Common local caches include Guava Cache, Caffeine, Ehcache, etc., and a simple local cache can also be hand-rolled with a HashMap, as sketched below.
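
As a concrete illustration of that last point, here is a minimal, hypothetical sketch of a HashMap-based local cache with a simple TTL. It only shows the idea; unlike Guava Cache or Caffeine it has no size limit and no background eviction.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal local cache sketch: entries live in the heap of the application process.
public class SimpleLocalCache<K, V> {

    private static class Entry<V> {
        final V value;
        final long expireAt; // absolute expiry time in milliseconds

        Entry(V value, long expireAt) {
            this.value = value;
            this.expireAt = expireAt;
        }
    }

    private final Map<K, Entry<V>> store = new ConcurrentHashMap<>();

    public void put(K key, V value, long ttlMillis) {
        store.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    public V get(K key) {
        Entry<V> entry = store.get(key);
        if (entry == null) {
            return null;
        }
        if (entry.expireAt < System.currentTimeMillis()) {
            store.remove(key); // lazily drop expired entries on access
            return null;
        }
        return entry.value;
    }

    public void evict(K key) {
        store.remove(key);
    }
}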

In high-concurrency scenarios, we can combine a local cache with a remote cache to build a multi-level cache architecture and further improve caching stability and performance. The multi-level cache request flow is as follows:
[Figure: multi-level cache request flow]

So, what are the advantages of using a two-level cache compared to simply using a remote cache?

  • The local cache lives in local memory, so access is very fast. Data that changes infrequently and has low real-time requirements can be placed in the local cache to speed up access.
  • A local cache reduces the interaction with a remote cache such as Redis, cutting network I/O overhead and the time spent on network communication.
  • It reduces the dependence on a third-party process and therefore improves stability.

However, there are still some design issues to consider, data consistency being the main one. First, the data in both cache levels must be consistent with the database: whenever data is modified, the local cache and the remote cache should be updated along with the database.

In addition, in a distributed environment there is also a consistency problem between the first-level (local) caches of different nodes: when the local cache on one node is modified, the other nodes need to be notified so that they also refresh their local caches, otherwise stale data will be read. This can be solved with a publish/subscribe mechanism such as the one Redis provides, as sketched below.

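As a rough illustration of that publish/subscribe approach, here is a hypothetical Lettuce-based sketch: every node subscribes to an invalidation channel and evicts the corresponding key from its local cache when a message arrives. The channel name cache:invalidate and the message format (the raw key) are assumptions made up for this example, and SimpleLocalCache is the sketch from the previous section.

import io.lettuce.core.RedisClient;
import io.lettuce.core.pubsub.RedisPubSubAdapter;
import io.lettuce.core.pubsub.StatefulRedisPubSubConnection;

public class PubSubInvalidationDemo {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create("redis://127.0.0.1:6379");
        SimpleLocalCache<String, String> localCache = new SimpleLocalCache<>();

        // Every application node subscribes to the (hypothetical) invalidation channel.
        StatefulRedisPubSubConnection<String, String> pubSub = client.connectPubSub();
        pubSub.addListener(new RedisPubSubAdapter<String, String>() {
            @Override
            public void message(String channel, String key) {
                // The message body carries the key that changed; evict it from the local cache.
                localCache.evict(key);
            }
        });
        pubSub.sync().subscribe("cache:invalidate");

        // The node that modifies the data publishes the key afterwards, e.g. with another connection:
        // client.connect().sync().set("user", "laowan");
        // client.connect().sync().publish("cache:invalidate", "user");
    }
}
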
Besides that, cache expiration times, eviction strategies, and multi-threaded access also need to be taken into consideration.

2. The Redis client-side caching mechanism

Official documentation: Client-side caching in Redis

Client-side caching is one of the more practical new features introduced in Redis 6. The official website explains it as follows:

Client-side caching is a technique used to build high-performance services. It takes advantage of the memory available on application servers (which are usually separate nodes from the database server) to store some subset of the database data directly on the application side. Since accessing local memory takes far less time than accessing a networked service such as a database, this pattern can greatly reduce the latency with which applications obtain data, while also reducing the load on the database.

So, leaving aside the question of which components need to be introduced, what advantages does the Redis client-side caching mechanism have over local caches such as Guava and Caffeine?

In a distributed deployment, the first-level (local) caches on each host must be kept consistent. Recall the original solution: this can be achieved with Redis's own publish/subscribe feature:
[Figure: cross-node cache invalidation with publish/subscribe]
The arrival of client-side caching greatly simplifies this process. Taking the default mode as an example, the flow after enabling client-side caching looks like this:
[Figure: cache flow after enabling client-side caching]
Compared with the original publish/subscribe approach, the advantage is obvious: with client-side caching we only need to modify the data in Redis, and the manual handling of publish/subscribe messages can be omitted entirely.

1. How client-side caching is implemented

Redis's client-side caching support is called Tracking. The command that controls it is:

CLIENT TRACKING ON|OFF [REDIRECT client-id] [PREFIX prefix] [BCAST] [OPTIN] [OPTOUT] [NOLOOP]

Redis 6.0 implements the Tracking feature and provides several modes: the normal mode and the broadcast mode, both of which use the RESP3 protocol, plus a redirect (forwarding) mode for clients that still use the RESP2 protocol.

Normal mode

When Tracking is enabled, Redis "remembers" the keys requested by each client, and when the value of such a key changes it sends the client an invalidation message. The invalidation message can be pushed to the requesting client over the RESP3 protocol, or forwarded to a client on a different connection (RESP2 + Pub/Sub).

  • The server stores the keys accessed by each client, together with the list of client IDs interested in each key, in a single global table (the TrackingTable). When the table is full, the oldest records are evicted and the affected clients are notified that those entries have expired.
  • Each Redis client has a unique numeric ID, and the TrackingTable stores these client IDs. When a connection is closed, the records for that ID are removed.
  • The keys recorded in the TrackingTable do not distinguish between databases: even if a client only read a key in db1, it will receive an invalidation notice when a key with the same name is modified in db2. This reduces precision, but it also reduces the complexity of the system and the amount of data the table has to store.

Redis uses the TrackingTable to store the mapping between key pointers and client IDs; the pointer of a key object is its memory address, i.e. a long integer. The client-side caching bookkeeping then boils down to inserting, deleting, modifying and querying entries in this table:
[Figure: TrackingTable operations]
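
To see the normal mode in action, you can experiment with redis-cli. The session below is a rough illustration; it assumes redis-cli from Redis 6.x started with the -3 flag so that it speaks RESP3 and can show invalidation pushes (the exact way the push is displayed may vary between versions).

$ redis-cli -3
127.0.0.1:6379> CLIENT TRACKING ON
OK
127.0.0.1:6379> GET foo
"bar"
# In another terminal: redis-cli SET foo baz
# Back on the tracking connection, the server pushes an invalidation for "foo",
# which redis-cli prints the next time it exchanges data with the server.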

Broadcast mode

When broadcast mode (broadcasting) is enabled, the server does not remember which keys a given client has accessed, so this mode consumes no extra memory on the server side.

In this mode, the server broadcasts the invalidation of every key to the clients. If keys are modified frequently, the server will send a large number of invalidation broadcasts, consuming a lot of network bandwidth.

Therefore, in practice we usually register the client to track only keys with a specified prefix. When a key matching a registered prefix is modified, the server broadcasts the invalidation message to all clients that registered interest in that prefix.

client tracking on bcast prefix user

Tracking keys by prefix fits common key-naming conventions very well: in real applications, keys belonging to the same business module usually share the same business prefix, so broadcast mode is very convenient to use. Several prefixes can even be registered at once, as shown below.

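For example, a client interested in both user-related and order-related keys (hypothetical prefixes chosen for illustration) can register several prefixes in a single command:

CLIENT TRACKING ON BCAST PREFIX user PREFIX order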

Redirect mode

Normal mode and broadcast mode both require the client to use the RESP3 protocol, which was newly introduced in Redis 6.0.
For clients that use the RESP2 protocol, implementing client-side caching requires a different pattern: the redirect mode.

RESP2 has no way to push invalidation messages directly, so another client that supports the RESP3 protocol is needed to tell the server to deliver the invalidation messages to the RESP2 client via Pub/Sub.

In redirect mode, the client that wants to receive invalidation messages executes the SUBSCRIBE command on the dedicated invalidation channel __redis__:invalidate.

At the same time, another client executes the CLIENT TRACKING command to tell the server to redirect its invalidation messages to the RESP2 client.
Suppose client B wants to receive invalidation messages but only supports the RESP2 protocol, while client A supports RESP3. We can execute SUBSCRIBE on client B and CLIENT TRACKING on client A, as follows:

// executed on client B; client B's ID is 606
SUBSCRIBE __redis__:invalidate

// executed on client A
CLIENT TRACKING ON BCAST REDIRECT 606

Client B can then receive the invalidation messages through the __redis__:invalidate channel.

2. Advantages and pitfalls

Now that we understand how client-side caching is implemented, let's compare it with the traditional approaches of using Redis purely as a remote cache or wiring up an integrated two-level cache.

  • Advantages
    When the data is already cached on the application side, it is read directly from the local cache, avoiding the network round trip and therefore speeding up access. It also reduces the number of requests sent to the Redis server, lowering its load.
    In a distributed environment, it is no longer necessary to notify the other hosts via publish/subscribe to keep the data consistent. With client-side caching, the built-in invalidation notifications keep each local cache properly invalidated, so later reads fetch the updated data.

  • A common misconception
    Although the feature is called client-side caching, Redis itself does not cache any data on the application server; that part has to be implemented by the Redis client you use.
    Simply put, the Redis server is only responsible for telling you that a key you cached locally has become invalid; how the data is cached locally is not Redis's concern, and Redis does not handle it.
    In Lettuce, the CacheFrontend abstraction implements this local-cache handling for us: when an invalidation from the Redis server is received, the corresponding entry is deleted from the local cache.

3. Client-side caching request flow

  1. Client 1 -> Server: CLIENT TRACKING ON (client 1 enables the Tracking mechanism)
  2. Client 1 -> Server: GET foo (client 1 reads foo)
  3. (The server remembers that Client 1 may have the key "foo" cached) the Redis server records that client 1 has cached foo
  4. (Client 1 may remember the value of "foo" inside its local memory) client 1 stores foo in its local cache
  5. Client 2 -> Server: SET foo SomeOtherValue (client 2 modifies foo)
  6. Server -> Client 1: INVALIDATE "foo" (the Redis server notifies client 1 that its cached foo is now invalid)

3. Hands-on practice

1. Add the dependencies

Client-side caching is only supported from Redis 6.x onwards, and the Lettuce dependency also needs to be on a 6.x version to support the client-side cache feature.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
    <version>3.1.0</version>
    <exclusions>
        <exclusion>
            <groupId>io.lettuce</groupId>
            <artifactId>lettuce-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>

<dependency>
    <groupId>io.lettuce</groupId>
    <artifactId>lettuce-core</artifactId>
    <version>6.2.4.RELEASE</version>
</dependency>

2. Redis connection properties

# Redis connection information
spring.redis.host=127.0.0.1
spring.redis.port=6379
spring.redis.password=123456
# maximum number of connections (cpu*2)
spring.redis.lettuce.pool.max-active = 8
# maximum number of idle connections (cpu*2)
spring.redis.lettuce.pool.max-idle = 8
# minimum number of idle connections
spring.redis.lettuce.pool.min-idle = 0
# maximum wait time
spring.redis.lettuce.pool.max-wait = 5s
# interval between idle-connection eviction runs
spring.redis.lettuce.pool.time-between-eviction-runs = 1s
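# Note: the spring.redis.* prefix above follows Spring Boot 2.x conventions; with Spring Boot 3.x
# the same settings are expected under the spring.data.redis.* prefix.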

3. Enable client caching

@Configuration
public class RedisConfig {

    @Bean
    public CacheFrontend<String, String> cacheFrontend(RedisProperties redisProperties) {

        RedisURI redisURI = RedisURI.builder()
                .withHost(redisProperties.getHost())
                .withPort(redisProperties.getPort())
                .withPassword(redisProperties.getPassword())
                .build();
        StatefulRedisConnection<String, String> connect = RedisClient.create(redisURI).connect();
        // the local cache container: a plain ConcurrentHashMap on the application side
        Map<String, String> clientCache = new ConcurrentHashMap<>();

        return ClientSideCaching.enable(
                CacheAccessor.forMap(clientCache),
                connect,
                // enable Tracking (default mode)
                TrackingArgs.Builder.enabled());
    }

}

The available TrackingArgs parameters are described in the official Lettuce documentation.
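
If you want broadcast mode instead of the default mode, TrackingArgs can, as far as I can tell from the Lettuce API, be configured roughly as below; the user prefix is just an illustrative choice. Pass the resulting object as the third argument to ClientSideCaching.enable in the bean above.

// Sketch: broadcast-mode tracking limited to keys starting with "user",
// ignoring invalidations caused by writes made on this same connection (NOLOOP).
TrackingArgs broadcastTracking = TrackingArgs.Builder
        .enabled()
        .bcast()
        .prefixes("user")
        .noloop();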

4. Using the local cache

@Component
@Slf4j
public class CommandLineRunnerImpl implements CommandLineRunner {

    @Autowired
    private CacheFrontend<String, String> cacheFrontend;

    @Override
    public void run(String... args) throws Exception {

        log.info("Printing the local cache value of user:");
        String key = "user";
        while (true) {
            // reads hit the local cache first; misses fall through to Redis
            String value = cacheFrontend.get(key);
            System.out.println(value);
            TimeUnit.SECONDS.sleep(5);
        }
    }
}

A look at the CacheFrontend source code:
1. Creating the local-cache CacheFrontend

    private static <K, V> CacheFrontend<K, V> create(CacheAccessor<K, V> cacheAccessor, RedisCache<K, V> redisCache) {

        ClientSideCaching<K, V> caching = new ClientSideCaching(cacheAccessor, redisCache);
        // when the Redis server reports that a cached key is invalid, notify the local cache (caching)
        redisCache.addInvalidationListener(caching::notifyInvalidate);
        // when the local cache is notified of an invalidation, evict the entry
        caching.addInvalidationListener(cacheAccessor::evict);
        return caching;
    }

2. CacheFrontend exposes only a single, simple get method for reading from the cache. Its implementation looks like this:

    public V get(K key) {
        // first look the key up in the local cache
        V value = this.cacheAccessor.get(key);
        if (value == null) {
            // on a miss, query the remote Redis cache
            value = this.redisCache.get(key);
            if (value != null) {
                // store the result in the local cache
                this.cacheAccessor.put(key, value);
            }
        }
        return value;
    }

5. Test

Start several instances of the project and then modify the value of the key user with a Redis client tool; you will see that the locally cached value is refreshed almost immediately. If you then delete the key user, the locally cached entry is removed as well.
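
For example, while one instance is running, executing the following commands in redis-cli reproduces the transition shown in the output below (the values are taken from the sample output):

SET user laowan
DEL user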

Printing the local cache value of user:
laozhang
laozhang
laozhang
laowan
laowan
null
null

Summary

Based on the new client-side caching mechanism in Redis 6.x, we can easily implement a local cache without having to worry about the details of synchronizing data between the local cache and the remote cache.

  • Redis 6.x adds a client-side caching mechanism (Client-side caching). Under the hood it is implemented with the Tracking mechanism and requires RESP3 protocol support; older RESP2 clients can only receive local-cache invalidations through Pub/Sub redirection.
  • The client-side caching mechanism has two main modes: normal mode and broadcast mode. In normal mode Redis "remembers" the keys requested by each client, which occupies some memory on the Redis server; in broadcast mode the server does not remember which keys a given client accessed, so it consumes no extra server memory, but it may send a large number of invalidation broadcasts and consume significant network bandwidth.
  • Only Lettuce 6.x and later support the client-side cache feature.
