Microservices (multi-level caching)

Table of contents

Multi-level cache

1. What is multi-level cache?

2. JVM process cache

2.2. First introduction to Caffeine

2.3. Implement JVM process cache

2.3.1. Requirements

2.3.2. Implementation

3. Introduction to Lua syntax

3.1. First introduction to Lua

3.1. HelloWorld

3.2. Variables and loops

3.2.1. Lua data types

3.2.2. Declare variables

3.2.3. Loop

3.3. Conditional control and functions

3.3.1. Function

3.3.2. Conditional control

3.3.3. Case

4. Implement multi-level caching

4.1. Install OpenResty

1. Installation

1) Install development library

2) Install the OpenResty repository

3) Install OpenResty

4) Install the opm tool

5) Directory structure

6) Configure nginx environment variables

2. Get up and running

3. Remarks

4.2. OpenResty Quick Start

4.2.1. Reverse proxy process

4.2.2. OpenResty listens for requests

4.2.3. Write item.lua

4.3. Request parameter processing

4.3.1. API for obtaining parameters

4.3.2. Get parameters and return

4.4. Query Tomcat

4.4.1. API for sending http requests

4.4.2. Encapsulating http tools

4.4.3. CJSON tool class

4.4.4. Implement Tomcat query

4.4.5. ID-based load balancing

1) Principle

2) Realize

3) Test

4.5. Redis cache warm-up

4.6. Query Redis cache

4.6.1. Encapsulating Redis tools

4.6.2. Implement Redis query

4.7. Nginx local cache

4.7.1. Local cache API

4.7.2. Implement local cache query

5. Cache synchronization

5.1. Data synchronization strategy

5.2. Install Canal

5.2.1. Get to know Canal

5.2.2. Install Canal

1. Start MySQL master-slave

1.1. Start binlog

1.2. Set user permissions

2. Install Canal

2.1. Create a network

2.3. Install Canal

5.3. Monitor Canal

5.3.1. Introduce dependencies

5.3.2. Write configuration

5.3.3. Modify the Item entity class

5.3.4. Writing a listener


Multi-level cache

1. What is multi-level cache?

With the traditional caching strategy, a request reaches Tomcat, queries Redis first, and falls back to the database on a miss, as shown in the figure:

This approach has the following problems:

  • Every request must be processed by Tomcat, so Tomcat's performance becomes the bottleneck of the entire system.
  • When the Redis cache expires or Redis becomes unavailable, the load hits the database directly.

Multi-level caching makes full use of every stage of request processing, adding a cache at each stage to reduce the pressure on Tomcat and improve service performance:

  • When the browser accesses static resources, it first reads the browser's local cache.
  • When accessing non-static resources (ajax data queries), the request goes to the server.
  • After the request reaches Nginx, the Nginx local cache is read first.
  • If the Nginx local cache misses, Redis is queried directly (without going through Tomcat).
  • If the Redis query misses, Tomcat is queried.
  • After the request enters Tomcat, the JVM process cache is queried first.
  • If the JVM process cache misses, the database is queried.
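The fall-through above can be sketched in a few lines of Java (a minimal illustration only: plain maps and a lambda stand in for the Nginx local cache, Redis, the JVM process cache, and the database; all names here are hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

public class MultiLevelLookup {
    public static void main(String[] args) {
        // Stub layers: each maps an item id to cached JSON, or null on a miss.
        Map<Long, String> nginxLocal = new LinkedHashMap<>();
        Map<Long, String> redis = new LinkedHashMap<>();
        Map<Long, String> jvmCache = new LinkedHashMap<>();
        Function<Long, String> database = id -> "{\"id\":" + id + "}";

        long id = 10001L;
        // Fall through the layers in order.
        String value = nginxLocal.get(id);
        if (value == null) value = redis.get(id);
        if (value == null) value = jvmCache.get(id);
        if (value == null) {
            value = database.apply(id);   // last resort: hit the database
            jvmCache.put(id, value);      // warm the upper layers for next time
            redis.put(id, value);
            nginxLocal.put(id, value);
        }
        System.out.println(value);
    }
}
```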

In the multi-level cache architecture, Nginx must implement the business logic of local cache query, Redis query, and Tomcat query. Such an nginx service is therefore no longer a reverse proxy server, but a web server running business logic.

This business nginx service therefore also needs to be deployed as a cluster to improve concurrency, with a dedicated nginx service acting as the reverse proxy, as shown in the figure:

In addition, our Tomcat service will also be deployed in cluster mode later:

It can be seen that there are two keys to multi-level caching:

  • One is to write business in nginx to implement nginx local cache, Redis, and Tomcat queries.

  • The other is to implement JVM process caching in Tomcat

Among them, Nginx programming uses the OpenResty framework together with the Lua language.

2. JVM process cache

2.2. First introduction to Caffeine

Caching plays a vital role in daily development. Because cached data lives in memory, reads are very fast, which greatly reduces database access and relieves pressure on the database. We divide caches into two categories:

  • Distributed cache, such as Redis:

    • Advantages: larger storage capacity, better reliability, and can be shared among clusters

    • Disadvantages: There is network overhead for accessing the cache

    • Scenario: The amount of cached data is large, reliability requirements are high, and it needs to be shared between clusters

  • Process local cache, such as HashMap, GuavaCache:

    • Advantages: Reading local memory, no network overhead, faster

    • Disadvantages: limited storage capacity, low reliability, and cannot be shared

    • Scenario: high performance requirements and small amount of cached data

Today we will use the Caffeine framework to implement JVM process caching.

Caffeine is a high-performance local cache library built on Java 8 that provides a near-optimal hit rate. Spring's internal caching currently uses Caffeine. GitHub address: GitHub - ben-manes/caffeine: A high performance caching library for Java

The performance of Caffeine is very good. The following figure is the official performance comparison:

 You can see that Caffeine's performance is far ahead!

Basic API used by cache:

@Test
void testBasicOps() {
    // build the cache object
    Cache<String, String> cache = Caffeine.newBuilder().build();

    // store data
    cache.put("gf", "迪丽热巴");

    // read data
    String gf = cache.getIfPresent("gf");
    System.out.println("gf = " + gf);

    // read data with two arguments:
    // arg 1: the cache key
    // arg 2: a lambda whose parameter is the key and whose body queries the database
    // the JVM cache is checked first by key; on a miss, the lambda in arg 2 runs
    String defaultGF = cache.get("defaultGF", key -> {
        // query the database by key
        return "柳岩";
    });
    System.out.println("defaultGF = " + defaultGF);
}

Since Caffeine is a cache, it needs an eviction strategy; otherwise memory would eventually be exhausted.

Caffeine provides three cache eviction strategies:

  • Capacity-based: set an upper limit on the number of cached entries
// create the cache object
Cache<String, String> cache = Caffeine.newBuilder()
    .maximumSize(1) // cap the cache at one entry
    .build();
  • Time-based: set a validity period for cached entries
// create the cache object
Cache<String, String> cache = Caffeine.newBuilder()
    // entries expire 10 seconds after the last write
    .expireAfterWrite(Duration.ofSeconds(10))
    .build();
  • Reference-based: store entries as soft or weak references and let GC reclaim the cached data. Performs poorly; not recommended.

Note: By default, Caffeine does not automatically evict a cache entry the moment it expires. Instead, eviction of stale data happens after a read or write operation, or during idle maintenance.
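To illustrate the expire-after-write idea (a toy sketch only, not Caffeine's actual implementation), each entry can remember its write timestamp and be evicted lazily when it is next read:

```java
import java.util.HashMap;
import java.util.Map;

public class ExpireAfterWriteSketch {
    // Each entry remembers when it was written; eviction happens lazily on read.
    static class Entry {
        final String value;
        final long writtenAt;
        Entry(String v, long t) { value = v; writtenAt = t; }
    }

    private final Map<String, Entry> map = new HashMap<>();
    private final long ttlMillis;

    ExpireAfterWriteSketch(long ttlMillis) { this.ttlMillis = ttlMillis; }

    void put(String key, String value) {
        map.put(key, new Entry(value, System.currentTimeMillis()));
    }

    String getIfPresent(String key) {
        Entry e = map.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() - e.writtenAt > ttlMillis) {
            map.remove(key); // expired: evict on access, much as Caffeine evicts lazily
            return null;
        }
        return e.value;
    }

    public static void main(String[] args) throws InterruptedException {
        ExpireAfterWriteSketch cache = new ExpireAfterWriteSketch(50); // 50 ms TTL
        cache.put("gf", "value");
        System.out.println(cache.getIfPresent("gf")); // within TTL -> value
        Thread.sleep(80);
        System.out.println(cache.getIfPresent("gf")); // past TTL -> null
    }
}
```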

2.3. Implement JVM process cache

2.3.1. Requirements

Use Caffeine to achieve the following requirements:

  • Add a cache to the business of querying products based on ID, and query the database when the cache misses

  • Add a cache to the business of querying product inventory based on ID, and query the database when the cache misses

  • The cache initial size is 100

  • The cache limit is 10000

2.3.2. Implementation

First, we need to define two Caffeine cache objects to save the cache data of products and inventory respectively.

In item-service, define a CaffeineConfig class under the com.heima.item.config package:

package com.heima.item.config;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.heima.item.pojo.Item;
import com.heima.item.pojo.ItemStock;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CaffeineConfig {

    @Bean
    public Cache<Long, Item> itemCache(){
        return Caffeine.newBuilder()
                .initialCapacity(100)
                .maximumSize(10_000)
                .build();
    }

    @Bean
    public Cache<Long, ItemStock> stockCache(){
        return Caffeine.newBuilder()
                .initialCapacity(100)
                .maximumSize(10_000)
                .build();
    }
}

Then, modify the ItemController class under the com.heima.item.web package in item-service and add caching logic:

@RestController
@RequestMapping("item")
public class ItemController {

    @Autowired
    private IItemService itemService;
    @Autowired
    private IItemStockService stockService;

    @Autowired
    private Cache<Long, Item> itemCache;
    @Autowired
    private Cache<Long, ItemStock> stockCache;

    // ... others omitted

    @GetMapping("/{id}")
    public Item findById(@PathVariable("id") Long id) {
        return itemCache.get(id, key -> itemService.query()
                .ne("status", 3).eq("id", key)
                .one()
        );
    }

    @GetMapping("/stock/{id}")
    public ItemStock findStockById(@PathVariable("id") Long id) {
        return stockCache.get(id, key -> stockService.getById(key));
    }
}


3. Introduction to Lua syntax

Nginx programming requires the use of Lua language, so we must first get started with the basic syntax of Lua.

3.1. First introduction to Lua

Lua is a lightweight, compact scripting language written in standard C and released as open source. It is designed to be embedded in applications to provide flexible extension and customization.

Official website: The Programming Language Lua

Lua is often embedded in programs developed in C language, such as game development, game plug-ins, etc.

Nginx itself is also developed in C language, so it also allows expansion based on Lua.

3.1. HelloWorld

CentOS7 has the Lua language environment installed by default, so you can run Lua code directly.

1) Create a new hello.lua file in any directory of the Linux virtual machine

 2) Add the following content

print("Hello World!")  

3) Run

3.2. Variables and loops

Learning any language is inseparable from variables, and the declaration of variables must first know the type of data.

3.2.1. Lua data types

Common data types supported in Lua include nil, boolean, number, string, function, and table.

In addition, Lua provides the type() function to determine the data type of a variable:

3.2.2. Declare variables

Lua does not need to specify a data type when declaring a variable. Instead, it uses local to declare the variable as a local variable:

-- declare a string; single or double quotes both work
local str = 'hello'
-- concatenate strings with ..
local str2 = 'hello' .. 'world'
-- declare a number
local num = 21
-- declare a boolean
local flag = true

The table type in Lua can serve both as an array and as a Java-style map. An array is just a special table whose keys are the integer indices:

-- declare an array: a table whose keys are the indices
local arr = {'java', 'python', 'lua'}
-- declare a table, similar to a Java map
local map =  {name='Jack', age=21}

Array indices in Lua start from 1, and element access looks much like Java:

-- access an array element; lua array indices start at 1
print(arr[1])

Tables in Lua can be accessed using keys:

-- access the table
print(map['name'])
print(map.name)

3.2.3. Loop

For tables, we can use a for loop to traverse. However, array traversal is slightly different from ordinary table traversal.

Traverse the array:

-- declare an array: a table whose keys are the indices
local arr = {'java', 'python', 'lua'}
-- iterate over the array
for index,value in ipairs(arr) do
    print(index, value)
end

Traverse a normal table:

-- declare a map, which is just a table
local map = {name='Jack', age=21}
-- iterate over the table
for key,value in pairs(map) do
   print(key, value)
end

3.3. Conditional control and functions

Conditional control and function declarations in Lua are similar to Java.

3.3.1. Function

Syntax for defining functions:

function function_name(argument1, argument2, ..., argumentN)
    -- function body
    return return_value
end

For example, define a function to print an array:

function printArr(arr)
    for index, value in ipairs(arr) do
        print(value)
    end
end

3.3.2. Conditional control

Java-like conditional control, such as if and else syntax:

if(boolean_expression)
then
   --[ this block runs when the expression is true --]
else
   --[ this block runs when the expression is false --]
end

Unlike Java, logical operators in Boolean expressions are the English words and, or, and not.

3.3.3. Case

Requirement: Customize a function that can print the table and print an error message when the parameter is nil.

function printArr(arr)
    if not arr then
        print('the array cannot be nil!')
        return  -- stop here; otherwise ipairs(nil) would raise an error
    end
    for index, value in ipairs(arr) do
        print(value)
    end
end

4. Implement multi-level caching

The implementation of multi-level cache is inseparable from Nginx programming, and Nginx programming is inseparable from OpenResty.

4.1. Install OpenResty

       OpenResty® is a high-performance web platform based on Nginx, used to easily build dynamic web applications, web services and dynamic gateways that can handle ultra-high concurrency and high scalability. Has the following characteristics:

  • Has the full functionality of Nginx

  • Extended with the Lua language, integrating a large number of excellent Lua libraries and third-party modules

  • Allows custom business logic and custom libraries written in Lua

Official website: OpenResty® - Open source official website

Specific installation steps:

1. Installation

First, your Linux virtual machine must be connected to the Internet

1) Install development library

First, install the dependency development library of OpenResty and execute the command:

yum install -y pcre-devel openssl-devel gcc --skip-broken

2) Install the OpenResty repository

You can add the openresty repository to your CentOS system to make future installation or updates of its packages easier (via the yum check-update command). Run the following command to add the repository:

yum-config-manager --add-repo https://openresty.org/package/centos/openresty.repo

If it says the command does not exist, run:

yum install -y yum-utils 

Then repeat the above command

3) Install OpenResty

Then you can install the openresty package:

yum install -y openresty

4) Install the opm tool

opm is a management tool of OpenResty that can help us install a third-party Lua module.

To install the opm command-line tool, install the openresty-opm package:

yum install -y openresty-opm

5) Directory structure

By default, the directory where OpenResty is installed is: /usr/local/openresty

Notice the nginx directory inside: OpenResty is essentially Nginx with a set of Lua modules integrated on top.

6) Configure nginx environment variables

Open the configuration file:

vi /etc/profile

Add two lines at the bottom:

export NGINX_HOME=/usr/local/openresty/nginx
export PATH=${NGINX_HOME}/sbin:$PATH

NGINX_HOME points to the nginx directory under the OpenResty installation directory.

Then let the configuration take effect:

source /etc/profile

2. Get up and running

The bottom layer of OpenResty is Nginx. Look at the nginx directory under the OpenResty directory: the structure is basically the same as an nginx installed on Windows:

So it runs the same way as nginx:

# start nginx
nginx
# reload the configuration
nginx -s reload
# stop
nginx -s stop

The default nginx configuration file contains many comments that would get in the way of later edits, so delete the comments in nginx.conf and keep only the effective parts.

Modify /usr/local/openresty/nginx/conf/nginx.confthe file as follows:

#user  nobody;
worker_processes  1;
error_log  logs/error.log;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       8081;
        server_name  localhost;
        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

Enter the command in the Linux console to start nginx:

nginx

Then visit the page: http://192.168.150.101:8081 . Note that the ip address is replaced with your own virtual machine IP:

3. Remarks

Load OpenResty's lua module:

#lua module
lua_package_path "/usr/local/openresty/lualib/?.lua;;";
#c module
lua_package_cpath "/usr/local/openresty/lualib/?.so;;";

common.lua

-- helper function: send an http request and parse the response
local function read_http(path, params)
    local resp = ngx.location.capture(path,{
        method = ngx.HTTP_GET,
        args = params,
    })
    if not resp then
        -- log the error and return 404
        ngx.log(ngx.ERR, "http not found, path: ", path , ", args: ", params)
        ngx.exit(404)
    end
    return resp.body
end
-- export the function
local _M = {
    read_http = read_http
}
return _M

Release the Redis connection API:

-- helper to close the redis connection; actually returns it to the connection pool
local function close_redis(red)
    local pool_max_idle_time = 10000 -- max idle time of a connection, in milliseconds
    local pool_size = 100 -- connection pool size
    local ok, err = red:set_keepalive(pool_max_idle_time, pool_size)
    if not ok then
        ngx.log(ngx.ERR, "failed to return connection to the redis pool: ", err)
    end
end

API for reading Redis data:

-- create the redis object (required by read_redis below) and set
-- connect/send/read timeouts; this setup was implied by the snippet
local redis = require('resty.redis')
local red = redis:new()
red:set_timeouts(1000, 1000, 1000)

-- query redis; ip and port are the redis address, key is the key to look up
local function read_redis(ip, port, key)
    -- get a connection
    local ok, err = red:connect(ip, port)
    if not ok then
        ngx.log(ngx.ERR, "failed to connect to redis: ", err)
        return nil
    end
    -- query redis
    local resp, err = red:get(key)
    -- handle query failure
    if not resp then
        ngx.log(ngx.ERR, "redis query failed: ", err, ", key = " , key)
    end
    -- handle an empty result
    if resp == ngx.null then
        resp = nil
        ngx.log(ngx.ERR, "redis returned no data, key = ", key)
    end
    close_redis(red)
    return resp
end

Turn on shared dictionary:

# shared dictionary, i.e. the nginx local cache, named item_cache, 150m in size
lua_shared_dict item_cache 150m;

4.2. OpenResty Quick Start

The multi-level cache architecture we hope to achieve is as follows:

Where:

  • nginx on windows is used as a reverse proxy service to proxy the front-end ajax request for product query to the OpenResty cluster.

  • OpenResty cluster is used to write multi-level cache services

4.2.1. Reverse proxy process

       Now, the product detail page uses fake product data. However, in the browser, you can see that the page initiates an ajax request to query the real product data.

The request is as follows:

The request address is localhost on port 80, so it is received by the Nginx service installed on Windows and then proxied to the OpenResty cluster:

We need to write the business in OpenResty, query the product data and return it to the browser.

But this time, we first receive the request in OpenResty and return fake product data.

Note:

Download Nginx locally (from the official website) to run under Windows, and install it into a path containing no Chinese characters:


#user  nobody;
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    
    # OpenResty cluster: upstream server ip and port
    upstream nginx-cluster{
        server 192.168.10.104:8081;
    }
    server {
        listen       80;
        server_name  localhost;
        # proxy /api requests to the OpenResty cluster
        location /api {
            proxy_pass http://nginx-cluster;
        }
        # nginx static resources
        location / {
            root   html;
            index  index.html index.htm;
        }
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

Enter the console:

nginx            # start nginx
nginx -s reload  # reload nginx
nginx -s stop    # stop nginx

(Never start nginx multiple times; make sure the nginx instance currently running uses the configuration file you edited.)

4.2.2. OpenResty listens for requests

Many functions of OpenResty depend on the Lua libraries under its directory. You need to specify the directories of these dependencies in nginx.conf and import them:

1) Add loading of OpenResty’s Lua module

Modify /usr/local/openresty/nginx/conf/nginx.confthe file and add the following code under http:

#lua module
lua_package_path "/usr/local/openresty/lualib/?.lua;;";
#c module
lua_package_cpath "/usr/local/openresty/lualib/?.so;;";

2) Monitor the /api/item path

      Modify /usr/local/openresty/nginx/conf/nginx.confthe file and add monitoring for the path /api/item under server in nginx.conf:

location /api/item {
    # default response type
    default_type application/json;
    # the response is produced by the lua/item.lua file
    content_by_lua_file lua/item.lua;
}

This monitoring is similar to @GetMapping("/api/item") path mapping in Spring MVC.

content_by_lua_file lua/item.lua is equivalent to invoking the item.lua file, executing the business logic in it, and returning the result to the user -- roughly like calling a service in Java.

4.2.3. Write item.lua

1) Create a folder named lua in the /usr/local/openresty/nginx directory

2) In the /usr/local/openresty/nginx/lua folder, create a new file: item.lua

3) Write item.lua to return fake data

In item.lua, use the ngx.say() function to write data into the Response.

ngx.say('{"id":10001,"name":"SALSA AIR","title":"RIMOWA 21寸托运箱拉杆箱 SALSA AIR系列果绿色 820.70.36.4","price":17900,"image":"https://m.360buyimg.com/mobilecms/s720x720_jfs/t6934/364/1195375010/84676/e9f2c55f/597ece38N0ddcbc77.jpg!q70.jpg.webp","category":"拉杆箱","brand":"RIMOWA","spec":"","status":1,"createTime":"2019-04-30T16:00:00.000+00:00","updateTime":"2019-04-30T16:00:00.000+00:00","stock":2999,"sold":31290}')

4) Reload configuration

nginx -s reload

Refresh the product page: http://localhost/item.html?id=1001 and you can see the effect:

4.3. Request parameter processing

We now receive front-end requests in OpenResty, but return fake data. To return real data, we must query product information by the product id passed from the front end. So how do we obtain the parameters the front end passes?

4.3.1. API for obtaining parameters

OpenResty provides APIs for obtaining the different kinds of front-end request parameters, for example:

  • Path placeholder: matched by a regular expression in the location, read via ngx.var[1]

  • Request header: ngx.req.get_headers()

  • GET request parameters: ngx.req.get_uri_args()

  • POST form parameters: ngx.req.get_post_args()

  • JSON request body: ngx.req.get_body_data()

4.3.2. Get parameters and return

The ajax request initiated on the front end is as shown in the figure:

 You can see that the product ID is passed as a path placeholder, so you can use regular expression matching to get the ID.

1) Get product id

Modify the code monitoring /api/item in the /usr/local/openresty/nginx/conf/nginx.conf file, using a regular expression to capture the ID:

location ~ /api/item/(\d+) {
    # default response type
    default_type application/json;
    # the response is produced by the lua/item.lua file
    content_by_lua_file lua/item.lua;
}

2) Splice the ID and return

Modify the /usr/local/openresty/nginx/lua/item.lua file to get the id and splice it into the returned result:

-- get the item id
local id = ngx.var[1]
-- splice it into the result and return
ngx.say('{"id":' .. id .. ',"name":"SALSA AIR","title":"RIMOWA 21寸托运箱拉杆箱 SALSA AIR系列果绿色 820.70.36.4","price":17900,"image":"https://m.360buyimg.com/mobilecms/s720x720_jfs/t6934/364/1195375010/84676/e9f2c55f/597ece38N0ddcbc77.jpg!q70.jpg.webp","category":"拉杆箱","brand":"RIMOWA","spec":"","status":1,"createTime":"2019-04-30T16:00:00.000+00:00","updateTime":"2019-04-30T16:00:00.000+00:00","stock":2999,"sold":31290}')

3) Reload and test

Run the command to reload the OpenResty configuration:

nginx -s reload

Refresh the page and you can see that the ID has been included in the results:

4.4. Query Tomcat

After getting the product ID, we should query product information from the caches, but we have not yet set up the nginx or redis cache. So for now we query product information from tomcat by product ID, implementing the part shown in the figure:

  

Note that our OpenResty runs on a virtual machine while Tomcat runs on the Windows machine; do not mix up the two IPs.

4.4.1. API for sending http requests

nginx provides an internal API to send http requests:

local resp = ngx.location.capture("/path",{
    method = ngx.HTTP_GET,   -- request method
    args = {a=1,b=2},  -- GET-style parameters
})

The response content returned includes:

  • resp.status: response status code

  • resp.header: response header, which is a table

  • resp.body: response body, which is the response data

Note: The path here is the path and does not include IP and port. This request will be monitored and processed by the server inside nginx.

But we want this request to be sent to the Tomcat server, so we also need to write a server to reverse proxy this path:

 location /path {
     # ip of the windows machine and the Java service port; make sure the windows firewall is off
     proxy_pass http://192.168.150.1:8081;
 }

The principle is as shown in the figure:

4.4.2. Encapsulating http tools

Next, based on ngx.location.capture, we encapsulate a tool for sending http requests and use it to query tomcat.

1) Add a reverse proxy to the Java service of Windows

       Because the interfaces in item-service all start with /item, we monitor the /item path and proxy to the tomcat service on Windows.

Modify /usr/local/openresty/nginx/conf/nginx.confthe file and add a location:

location /item {
    proxy_pass http://192.168.150.1:8081;
}

Note: When configuring a reverse proxy, first use ping to check that the proxied address is reachable. The local (Windows) address: keep the first three octets of the virtual machine address, set the last octet to 1, and append the local tomcat port.

      In the future, as long as we call ngx.location.capture("/item"), we will be able to send requests to the tomcat service of windows.

2) Package tool class

As we said before, OpenResty will load tool files in the following two directories when it starts:

Therefore, custom http tools also need to be placed in this directory.

In /usr/local/openresty/lualibthe directory, create a new common.lua file:

vi /usr/local/openresty/lualib/common.lua

The content is as follows:

-- helper function: send an http request and parse the response
local function read_http(path, params)
    local resp = ngx.location.capture(path,{
        method = ngx.HTTP_GET,
        args = params,
    })
    if not resp then
        -- log the error and return 404
        ngx.log(ngx.ERR, "http query failed, path: ", path , ", args: ", params)
        ngx.exit(404)
    end
    return resp.body
end
-- export the function
local _M = {
    read_http = read_http
}
return _M

This tool wraps the read_http function in a table-typed variable _M and returns it, similar to exporting a module.

To use it, call require('common') to import the function library, where common is the file name of the library.

3) Implement product query

Finally, we modify the /usr/local/openresty/nginx/lua/item.lua file and use the freshly encapsulated function library to query tomcat:

-- import the custom common module; the return value is the _M returned by common
local common = require("common")
-- take the read_http function from common
local read_http = common.read_http
-- get the path parameter
local id = ngx.var[1]
-- query the item by id
local itemJSON = read_http("/item/".. id, nil)
-- query the item stock by id
local itemStockJSON = read_http("/item/stock/".. id, nil)

The results queried here are JSON strings: one for the product and one for the stock. The page ultimately needs the two spliced into a single JSON:

This requires first decoding the JSON into Lua tables, integrating the data, and then encoding the result back into JSON.

4.4.3. CJSON tool class

OpenResty provides a cjson module to handle JSON serialization and deserialization.

Official address: GitHub - openresty/lua-cjson: Lua CJSON is a fast JSON encoding/parsing module for Lua

1) Introduce the cjson module:

local cjson = require "cjson"

2) Serialization:

local obj = {
    name = 'jack',
    age = 21
}
-- serialize the table to json
local json = cjson.encode(obj)

3) Deserialization:

local json = '{"name": "jack", "age": 21}'
-- deserialize json to a table
local obj = cjson.decode(json)
print(obj.name)

4.4.4. Implement Tomcat query

Next, we modify the business in the previous item.lua and add json processing function:

-- import the common function library
local common = require('common')
local read_http = common.read_http
-- import the cjson library
local cjson = require('cjson')

-- get the path parameter
local id = ngx.var[1]
-- query the item by id
local itemJSON = read_http("/item/".. id, nil)
-- query the item stock by id
local itemStockJSON = read_http("/item/stock/".. id, nil)

-- decode the JSON strings into lua tables
local item = cjson.decode(itemJSON)
local itemStock = cjson.decode(itemStockJSON)

-- combine the data
item.stock = itemStock.stock
item.sold = itemStock.sold

-- serialize item back to json and return the result
ngx.say(cjson.encode(item))

4.4.5. ID-based load balancing

In the code just now, our tomcat is deployed on a single machine. In actual development, tomcat must be in cluster mode:

 Therefore, OpenResty needs to load balance the tomcat cluster.

The default load balancing rule is round-robin. When we query /item/10001:

  • The first request goes to the tomcat service on port 8081, which builds a JVM process cache.

  • The second request goes to the tomcat service on port 8082, which has no JVM cache (the JVM cache cannot be shared), so it queries the database.

  • ...

Because of round-robin, the JVM cache built by the first query on 8081 is useless until 8081 happens to be hit again, so the cache hit rate is far too low. What to do? If every query for the same product hit the same tomcat instance, the JVM cache would always take effect. In other words, we need load balancing based on the product ID instead of round-robin.

1) Principle

nginx provides a load balancing algorithm based on the request path:

nginx hashes the request path and takes the result modulo the number of tomcat services; the remainder determines which service is accessed, achieving load balancing.

For example:

  • Our request path is /item/10001

  • The total number of tomcats is 2 (8081, 8082)

  • Hashing the request path /item/10001 and taking the result modulo 2 leaves remainder 1

  • So the first tomcat service, 8081, is accessed

As long as the ID stays the same, the hash result never changes, so the same product always reaches the same tomcat service and the JVM cache is guaranteed to take effect.
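The routing rule can be verified in a few lines of Java (illustrative only: nginx's hash $request_uri uses its own hash function, not Java's hashCode, but the modulo idea is identical):

```java
public class UriHashRouting {
    // Pick a backend index by hashing the request path, like `hash $request_uri`.
    static int pickServer(String uri, int serverCount) {
        // Math.floorMod keeps the index non-negative even if hashCode() is negative.
        return Math.floorMod(uri.hashCode(), serverCount);
    }

    public static void main(String[] args) {
        String[] servers = {"192.168.150.1:8081", "192.168.150.1:8082"};
        // The same id always hashes to the same backend, so its JVM cache stays warm.
        System.out.println(pickServer("/item/10001", servers.length)
                == pickServer("/item/10001", servers.length)); // always true
        System.out.println(servers[pickServer("/item/10001", servers.length)]);
    }
}
```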

2) Realize

Modify /usr/local/openresty/nginx/conf/nginx.confthe file to implement load balancing based on ID.

First, define the tomcat cluster and set up path-based load balancing:

upstream tomcat-cluster {
    hash $request_uri;
    server 192.168.150.1:8081;
    server 192.168.150.1:8082;
}

Then, modify the reverse proxy for the tomcat service so that the target points to the tomcat cluster:

location /item {
    proxy_pass http://tomcat-cluster;
}

Reload OpenResty

nginx -s reload

3) Test

Start two tomcat services:

 Start simultaneously:

 After clearing the log and visiting the page again, you can see products with different IDs and access different tomcat services:

4.5. Redis cache warm-up

Redis cache will face cold start problem:

Cold start: when the service has just started, there is no cache in Redis. If all product data were cached only at the first query, the database could come under significant pressure.

Cache warm-up: in real development, big-data statistics identify the hot data users access, and those hot entries are queried in advance and saved to Redis at project startup. Our data set is small and we have no statistics functionality, so for now we can simply load all data into the cache on startup.

1) Install Redis using Docker

docker run --name redis -p 6379:6379 -d redis redis-server --appendonly yes

2) Introduce Redis dependency into the item-service service

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

3) Configure Redis address

spring:
  redis:
    host: 192.168.150.101

4) Write initialization class

Cache preheating needs to be completed when the project is started, and must be done after getting the RedisTemplate.

Here we implement it with the InitializingBean interface, because its afterPropertiesSet method runs after Spring has created the bean and injected all member variables.


import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.heima.item.pojo.Item;
import com.heima.item.pojo.ItemStock;
import com.heima.item.service.IItemService;
import com.heima.item.service.IItemStockService;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

import java.util.List;

@Component
public class RedisHandler implements InitializingBean {

    @Autowired
    private StringRedisTemplate redisTemplate;

    @Autowired
    private IItemService itemService;
    @Autowired
    private IItemStockService stockService;

    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public void afterPropertiesSet() throws Exception {
        // initialize the cache
        // 1. query the product list
        List<Item> itemList = itemService.list();
        // 2. put it into the cache
        for (Item item : itemList) {
            // 2.1. serialize the item to JSON
            String json = MAPPER.writeValueAsString(item);
            // 2.2. store it in redis
            redisTemplate.opsForValue().set("item:id:" + item.getId(), json);
        }

        // 3. query the stock list
        List<ItemStock> stockList = stockService.list();
        // 4. put it into the cache
        for (ItemStock stock : stockList) {
            // 4.1. serialize the stock to JSON
            String json = MAPPER.writeValueAsString(stock);
            // 4.2. store it in redis
            redisTemplate.opsForValue().set("item:stock:id:" + stock.getId(), json);
        }
    }
}

4.6. Query Redis cache

       Now that the Redis cache is ready, we can implement the logic of querying Redis in OpenResty. As shown in the red box in the figure below:

After the request enters OpenResty:

  • Query Redis cache first

  • If the Redis cache misses, query Tomcat again

4.6.1. Encapsulating Redis tools

OpenResty provides a module for operating Redis, which we can use directly once it is required. For convenience, we encapsulate the Redis operations in the common.lua tool library created earlier.

Modify the /usr/local/openresty/lualib/common.lua file:

1) Introduce the Redis module and initialize the Redis object

-- import the redis module
local redis = require('resty.redis')
-- initialize the redis object
local red = redis:new()
red:set_timeouts(1000, 1000, 1000)

2) Encapsulate a function to release the Redis connection; it actually returns the connection to the pool

-- utility to close the redis connection: it actually returns the connection to the pool
local function close_redis(red)
    local pool_max_idle_time = 10000 -- max idle time of a connection, in milliseconds
    local pool_size = 100 -- connection pool size
    local ok, err = red:set_keepalive(pool_max_idle_time, pool_size)
    if not ok then
        ngx.log(ngx.ERR, "failed to return connection to the redis pool: ", err)
    end
end

3) Encapsulate a function to query Redis data by key

-- query redis; ip and port are the redis address, key is the key to look up
local function read_redis(ip, port, key)
    -- get a connection
    local ok, err = red:connect(ip, port)
    if not ok then
        ngx.log(ngx.ERR, "failed to connect to redis: ", err)
        return nil
    end
    -- query redis
    local resp, err = red:get(key)
    -- handle query failure
    if not resp then
        ngx.log(ngx.ERR, "failed to query redis: ", err, ", key = ", key)
    end
    -- handle an empty result
    if resp == ngx.null then
        resp = nil
        ngx.log(ngx.ERR, "redis data is empty, key = ", key)
    end
    close_redis(red)
    return resp
end

4) Export

-- export the functions
local _M = {
    read_http = read_http,
    read_redis = read_redis
}
return _M

Complete common.lua:

-- import the redis module
local redis = require('resty.redis')
-- initialize the redis object
local red = redis:new()
red:set_timeouts(1000, 1000, 1000)

-- utility to close the redis connection: it actually returns the connection to the pool
local function close_redis(red)
    local pool_max_idle_time = 10000 -- max idle time of a connection, in milliseconds
    local pool_size = 100 -- connection pool size
    local ok, err = red:set_keepalive(pool_max_idle_time, pool_size)
    if not ok then
        ngx.log(ngx.ERR, "failed to return connection to the redis pool: ", err)
    end
end

-- query redis; ip and port are the redis address, key is the key to look up
local function read_redis(ip, port, key)
    -- get a connection
    local ok, err = red:connect(ip, port)
    if not ok then
        ngx.log(ngx.ERR, "failed to connect to redis: ", err)
        return nil
    end
    -- query redis
    local resp, err = red:get(key)
    -- handle query failure
    if not resp then
        ngx.log(ngx.ERR, "failed to query redis: ", err, ", key = ", key)
    end
    -- handle an empty result
    if resp == ngx.null then
        resp = nil
        ngx.log(ngx.ERR, "redis data is empty, key = ", key)
    end
    close_redis(red)
    return resp
end

-- send an http request and parse the response
local function read_http(path, params)
    local resp = ngx.location.capture(path, {
        method = ngx.HTTP_GET,
        args = params,
    })
    if not resp then
        -- log the error and return 404
        ngx.log(ngx.ERR, "http query failed, path: ", path, ", args: ", params)
        ngx.exit(404)
    end
    return resp.body
end
-- export the functions
local _M = {
    read_http = read_http,
    read_redis = read_redis
}
return _M

4.6.2. Implement Redis query

Next, we can modify the item.lua file to query Redis.

The query logic is:

  • Query Redis based on id

  • If the query fails, continue to query Tomcat

  • Return query results

1) Modify the /usr/local/openresty/lua/item.lua file and add a query function:

-- import the common function library
local common = require('common')
local read_http = common.read_http
local read_redis = common.read_redis
-- encapsulate the query function
function read_data(key, path, params)
    -- query redis
    local val = read_redis("127.0.0.1", 6379, key)
    -- check the result
    if not val then
        ngx.log(ngx.ERR, "redis query failed, falling back to http, key: ", key)
        -- redis miss: query via http instead
        val = read_http(path, params)
    end
    -- return the data
    return val
end

2) Then modify the product-query and stock-query logic to use it:

3) Complete item.lua code:

-- import the common function library
local common = require('common')
local read_http = common.read_http
local read_redis = common.read_redis
-- import the cjson library
local cjson = require('cjson')

-- encapsulate the query function
function read_data(key, path, params)
    -- query redis
    local val = read_redis("127.0.0.1", 6379, key)
    -- check the result
    if not val then
        ngx.log(ngx.ERR, "redis query failed, falling back to http, key: ", key)
        -- redis miss: query via http instead
        val = read_http(path, params)
    end
    -- return the data
    return val
end

-- get the path parameter
local id = ngx.var[1]

-- query the product info
local itemJSON = read_data("item:id:" .. id, "/item/" .. id, nil)
-- query the stock info
local stockJSON = read_data("item:stock:id:" .. id, "/item/stock/" .. id, nil)

-- convert the JSON to lua tables
local item = cjson.decode(itemJSON)
local stock = cjson.decode(stockJSON)
-- combine the data
item.stock = stock.stock
item.sold = stock.sold

-- serialize item back to JSON and return the result
ngx.say(cjson.encode(item))

4.7.Nginx local cache

Now, only the last link in the entire multi-level cache is missing, which is nginx's local cache. As shown in the picture:

 4.7.1.Local cache API

OpenResty provides the shared dict feature for Nginx, which can share data among nginx's multiple workers and implement caching.

1) Enable the shared dictionary by adding this configuration under the http block in nginx.conf:

 # shared dictionary, i.e. the local cache; named item_cache, 150m in size
 lua_shared_dict item_cache 150m;

2) Manipulate the shared dictionary:

-- get the local cache object
local item_cache = ngx.shared.item_cache
-- store: specify key, value and expire time in seconds (0, the default, means never expire)
item_cache:set('key', 'value', 1000)
-- read
local val = item_cache:get('key')

4.7.2. Implement local cache query

1) Modify the /usr/local/openresty/lua/item.lua file: change the read_data query function to add local-cache logic:

-- import the shared dictionary (local cache)
local item_cache = ngx.shared.item_cache

-- encapsulate the query function
function read_data(key, expire, path, params)
    -- query the local cache
    local val = item_cache:get(key)
    if not val then
        ngx.log(ngx.ERR, "local cache miss, trying redis, key: ", key)
        -- query redis
        val = read_redis("127.0.0.1", 6379, key)
        -- check the result
        if not val then
            ngx.log(ngx.ERR, "redis query failed, falling back to http, key: ", key)
            -- redis miss: query via http instead
            val = read_http(path, params)
        end
    end
    -- on success, write the data into the local cache
    item_cache:set(key, val, expire)
    -- return the data
    return val
end

2) Modify the product-query and stock-query calls in item.lua to use the latest read_data function:

Note the extra cache-time parameter: when it expires, the nginx local cache entry is deleted automatically and refreshed on the next visit. Here the basic product information is cached for 30 minutes and the stock for 1 minute: stock is updated frequently, so if its cache time were too long it could diverge significantly from the database.

3) Complete item.lua file:

-- import the common function library
local common = require('common')
local read_http = common.read_http
local read_redis = common.read_redis
-- import the cjson library
local cjson = require('cjson')
-- import the shared dictionary (local cache)
local item_cache = ngx.shared.item_cache

-- encapsulate the query function
function read_data(key, expire, path, params)
    -- query the local cache
    local val = item_cache:get(key)
    if not val then
        ngx.log(ngx.ERR, "local cache miss, trying redis, key: ", key)
        -- query redis
        val = read_redis("127.0.0.1", 6379, key)
        -- check the result
        if not val then
            ngx.log(ngx.ERR, "redis query failed, falling back to http, key: ", key)
            -- redis miss: query via http instead
            val = read_http(path, params)
        end
    end
    -- on success, write the data into the local cache
    item_cache:set(key, val, expire)
    -- return the data
    return val
end

-- get the path parameter
local id = ngx.var[1]

-- query the product info (cached for 30 minutes)
local itemJSON = read_data("item:id:" .. id, 1800, "/item/" .. id, nil)
-- query the stock info (cached for 1 minute)
local stockJSON = read_data("item:stock:id:" .. id, 60, "/item/stock/" .. id, nil)

-- convert the JSON to lua tables
local item = cjson.decode(itemJSON)
local stock = cjson.decode(stockJSON)
-- combine the data
item.stock = stock.stock
item.sold = stock.sold

-- serialize item back to JSON and return the result
ngx.say(cjson.encode(item))

5. Cache synchronization

In most cases the browser is served cached data; if the cached data diverges too far from the database, serious consequences may follow. We must therefore keep the database and the cache consistent. This is cache synchronization.

5.1. Data synchronization strategy

There are three common ways to synchronize cache data:

Set validity period: set a TTL on the cache so entries are deleted automatically after expiring and are refreshed on the next query

  • Advantages: simple and convenient

  • Disadvantages: poor timeliness, cache may be inconsistent before expiration

  • Scenario: Business with low update frequency and low timeliness requirements

Synchronous double write : directly modify the cache while modifying the database

  • Advantages: strong timeliness, strong consistency between cache and database

  • Disadvantages: code intrusion and high coupling;

  • Scenario: Cache data with high consistency and timeliness requirements
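The synchronous double-write strategy can be sketched with a minimal, self-contained example (plain maps stand in for the database and Redis; the class and key format are hypothetical, not code from this project):

```java
import java.util.HashMap;
import java.util.Map;

public class DoubleWriteDemo {
    // Hypothetical stand-ins for the database and Redis, for illustration only
    private final Map<Long, String> database = new HashMap<>();
    private final Map<String, String> cache = new HashMap<>();

    // Synchronous double write: update the database and the cache together.
    // In a real service both writes need transaction/failure handling inside
    // the business code, which is exactly the code intrusion mentioned above.
    public void updateItem(Long id, String itemJson) {
        database.put(id, itemJson);            // 1. write the database
        cache.put("item:id:" + id, itemJson);  // 2. write the cache immediately
    }

    public String readItem(Long id) {
        // After updateItem, cache and database are always consistent
        return cache.get("item:id:" + id);
    }

    public static void main(String[] args) {
        DoubleWriteDemo demo = new DoubleWriteDemo();
        demo.updateItem(10001L, "{\"name\":\"phone\",\"price\":999}");
        System.out.println(demo.readItem(10001L)); // prints the updated JSON
    }
}
```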

Asynchronous notification: an event notification is sent when the database is modified, and the relevant services update the cached data after receiving the notification.

  • Advantages: low coupling, multiple cache services can be notified at the same time

  • Disadvantages: Average timeliness, there may be intermediate inconsistencies

  • Scenario: The timeliness requirements are average and there are multiple services that need to be synchronized.

Asynchronous implementation can be implemented based on MQ or Canal:

1) MQ-based asynchronous notification:

 Interpretation:

  • After the product service completes the modification of the data, it only needs to send a message to MQ.

  • The cache service listens to MQ messages and then completes updates to the cache

There is still a small amount of code intrusion.

2) Canal-based notifications

Interpretation:

  • After the product service completes the product modification, the business ends directly without any code intrusion.

  • Canal monitors MySQL changes and immediately notifies the cache service when a change is discovered.

  • The cache service receives the canal notification and updates the cache.

Zero code intrusion

5.2.Install Canal

5.2.1.Get to know Canal

Canal [kə'næl], meaning waterway/pipeline/canal, is an open-source project under Alibaba, developed in Java. Based on incremental database log parsing, it provides incremental data subscription & consumption. GitHub address: GitHub - alibaba/canal: Alibaba MySQL binlog incremental subscription & consumption component

Canal is implemented based on mysql's master-slave synchronization. The principle of MySQL master-slave synchronization is as follows:

  • 1) MySQL master writes data changes to the binary log (binary log), and the recorded data is called binary log events

  • 2) MySQL slave copies the master's binary log events to its relay log (relay log)

  • 3) MySQL slave replays the events in the relay log and reflects the data changes to its own data

Canal disguises itself as a MySQL slave node and monitors the master's binary log changes. It then forwards the captured change information to the Canal client, which completes the synchronization of other databases or caches.

 5.2.2.Install Canal

Next, we will enable MySQL's master-slave synchronization mechanism and let Canal pose as a slave.

1. Start MySQL master-slave

Canal is based on the master-slave synchronization function of MySQL, so the master-slave function of MySQL must be enabled first.

Here we use the mysql container previously run with Docker as an example:

1.1. Enable binlog

Open the configuration file mounted by the mysql container; mine is in the /tmp/mysql/conf directory:

Modify the file:

vi /tmp/mysql/conf/my.cnf

Add content:

log-bin=/var/lib/mysql/mysql-bin
binlog-do-db=heima

Configuration interpretation:

  • log-bin=/var/lib/mysql/mysql-bin: Set the storage address and file name of the binary log file, called mysql-bin

  • binlog-do-db=heima: Specify which database to record binary log events. The heima library is recorded here.

final effect:

[mysqld]
skip-name-resolve
character_set_server=utf8
datadir=/var/lib/mysql
server-id=1000
log-bin=/var/lib/mysql/mysql-bin
binlog-do-db=heima

1.2. Set user permissions

Next, add an account only for data synchronization. For security reasons, only the operation permissions for the heima library are provided here.

create user canal@'%' IDENTIFIED by 'canal';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT,SUPER ON *.* TO 'canal'@'%' identified by 'canal';
FLUSH PRIVILEGES;

Just restart the mysql container

docker restart mysql

Test whether the setup succeeded: in the mysql console or Navicat, enter the command:

show master status;

 2.Install Canal

2.1.Create a network

We need to create a network and put MySQL, Canal, and MQ into the same Docker network:

docker network create heima

Let mysql join this network:

docker network connect heima mysql
2.3.Install Canal

You can download the canal image tarball from the official website:

Then upload it to the virtual machine and import it with the command:

docker load -i canal.tar

Then run the command to create the Canal container:

docker run -p 11111:11111 --name canal \
-e canal.destinations=heima \
-e canal.instance.master.address=mysql:3306  \
-e canal.instance.dbUsername=canal  \
-e canal.instance.dbPassword=canal  \
-e canal.instance.connectionCharset=UTF-8 \
-e canal.instance.tsdb.enable=true \
-e canal.instance.gtidon=false  \
-e canal.instance.filter.regex=heima\\..* \
--network heima \
-d canal/canal-server:v1.1.5

Explanation:

  • -p 11111:11111: maps canal's default listening port

  • -e canal.instance.master.address=mysql:3306: the database address and port; if you don't know the mysql container's address, you can check it with docker inspect [container id]

  • -e canal.instance.dbUsername=canal: the database username

  • -e canal.instance.dbPassword=canal: the database password

  • -e canal.instance.filter.regex=heima\\..*: the names of the tables to monitor

Supported syntax for table name listening:

Mysql data parsing focuses on tables; filters are Perl regular expressions.
Multiple regular expressions are separated by commas (,), and the escape character requires double backslashes (\\).
Common examples:
1. All tables: .* or .*\\..*
2. All tables under the canal schema: canal\\..*
3. Tables under the canal schema starting with canal: canal\\.canal.*
4. A single table under the canal schema: canal.test1
5. Combine multiple rules, separated by commas: canal\\..*,mysql.test1,mysql.test2
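As a quick sanity check of how such a filter applies to our setup: the shell-escaped value heima\\..* reaches canal as the Perl-style regex heima\..*, which matches every table in the heima schema. A small Java check (for illustration only; canal evaluates the regex itself):

```java
import java.util.regex.Pattern;

public class CanalFilterDemo {
    public static void main(String[] args) {
        // The shell-escaped value heima\\..* is the regex heima\..*
        Pattern filter = Pattern.compile("heima\\..*");
        System.out.println(filter.matcher("heima.tb_item").matches()); // true
        System.out.println(filter.matcher("other.tb_item").matches()); // false
    }
}
```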

5.3. Monitor Canal

Canal provides clients in various languages. When Canal monitors binlog changes, it will notify Canal's client.

We can use the Java client provided by Canal to listen for Canal notification messages and update the cache whenever a change message is received.

But here we will use a third-party open-source canal-starter client from GitHub. Address: GitHub - NormanGyllenhaal/canal-client: spring boot canal starter, an easy-to-use canal client

It integrates with Spring Boot and auto-configures itself, making it much simpler to use than the official client.

5.3.1.Introduce dependencies:

<dependency>
    <groupId>top.javatool</groupId>
    <artifactId>canal-spring-boot-starter</artifactId>
    <version>1.2.1-RELEASE</version>
</dependency>

5.3.2.Write configuration:

canal:
  destination: heima # name of the canal cluster/instance; must match the name set when installing canal
  server: 192.168.150.101:11111 # canal server address

5.3.3. Modify the Item entity class

Complete the mapping between Item and database table fields through @Id, @Column, and other annotations:

import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableField;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;
import org.springframework.data.annotation.Id;
import org.springframework.data.annotation.Transient;

import javax.persistence.Column;
import java.util.Date;

@Data
@TableName("tb_item")
public class Item {
    @TableId(type = IdType.AUTO)
    @Id
    private Long id;           // product id
    @Column(name = "name")
    private String name;       // product name
    private String title;      // product title
    private Long price;        // price (in cents)
    private String image;      // product image
    private String category;   // category name
    private String brand;      // brand name
    private String spec;       // specification
    private Integer status;    // product status: 1 - normal, 2 - delisted
    private Date createTime;   // creation time
    private Date updateTime;   // update time
    @TableField(exist = false)
    @Transient
    private Integer stock;
    @TableField(exist = false)
    @Transient
    private Integer sold;
}

5.3.4.Writing a listener

Write a listener by implementing the EntryHandler<T> interface to listen for Canal messages. Note two points:

  • Annotate the implementation class with @CanalTable("tb_item") to specify the table to monitor

  • The generic type of EntryHandler is the entity class corresponding to the table

```java
package com.heima.item.canal;

import com.github.benmanes.caffeine.cache.Cache;
import com.heima.item.config.RedisHandler;
import com.heima.item.pojo.Item;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import top.javatool.canal.client.annotation.CanalTable;
import top.javatool.canal.client.handler.EntryHandler;

@CanalTable("tb_item")
@Component
public class ItemHandler implements EntryHandler<Item> {

    @Autowired
    private RedisHandler redisHandler;
    @Autowired
    private Cache<Long, Item> itemCache;

    @Override
    public void insert(Item item) {
        // write the data to the JVM process cache
        itemCache.put(item.getId(), item);
        // write the data to redis
        redisHandler.saveItem(item);
    }

    @Override
    public void update(Item before, Item after) {
        // write the data to the JVM process cache
        itemCache.put(after.getId(), after);
        // write the data to redis
        redisHandler.saveItem(after);
    }

    @Override
    public void delete(Item item) {
        // remove the data from the JVM process cache
        itemCache.invalidate(item.getId());
        // remove the data from redis
        redisHandler.deleteItemById(item.getId());
    }
}
```

The Redis operations here are encapsulated in the RedisHandler object, the class we wrote earlier for cache preheating. Its content is as follows:

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.heima.item.pojo.Item;
import com.heima.item.pojo.ItemStock;
import com.heima.item.service.IItemService;
import com.heima.item.service.IItemStockService;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

import java.util.List;

@Component
public class RedisHandler implements InitializingBean {

    @Autowired
    private StringRedisTemplate redisTemplate;

    @Autowired
    private IItemService itemService;
    @Autowired
    private IItemStockService stockService;

    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public void afterPropertiesSet() throws Exception {
        // initialize the cache
        // 1. query the product list
        List<Item> itemList = itemService.list();
        // 2. put it into the cache
        for (Item item : itemList) {
            // 2.1. serialize the item to JSON
            String json = MAPPER.writeValueAsString(item);
            // 2.2. store it in redis
            redisTemplate.opsForValue().set("item:id:" + item.getId(), json);
        }

        // 3. query the stock list
        List<ItemStock> stockList = stockService.list();
        // 4. put it into the cache
        for (ItemStock stock : stockList) {
            // 4.1. serialize the stock to JSON
            String json = MAPPER.writeValueAsString(stock);
            // 4.2. store it in redis
            redisTemplate.opsForValue().set("item:stock:id:" + stock.getId(), json);
        }
    }

    public void saveItem(Item item) {
        try {
            String json = MAPPER.writeValueAsString(item);
            redisTemplate.opsForValue().set("item:id:" + item.getId(), json);
        } catch (JsonProcessingException e) {
            throw new RuntimeException(e);
        }
    }

    public void deleteItemById(Long id) {
        redisTemplate.delete("item:id:" + id);
    }
}

Origin blog.csdn.net/dfdbb6b/article/details/132436118