Stage 8: Advanced service framework (Chapter 4: Redis multi-level cache case)

Redis multi-level cache case

0. Learning Objectives

Insert image description here

1. What is a multi-level cache?

In a traditional caching strategy, after a request arrives at Tomcat it first queries Redis, then falls back to the database on a miss, as shown in the figure:
Insert image description here
There are the following problems:

  • Requests must be processed by Tomcat, so Tomcat becomes the performance bottleneck of the entire system.
  • When the Redis cache misses or fails, the impact falls on the database.

A multi-level cache makes full use of every stage of request handling, adding a cache at each stage to reduce the pressure on Tomcat and improve service performance:

  • When the browser accesses static resources, it reads the browser's local cache first
  • When accessing non-static resources (ajax data queries), it accesses the server
  • After the request arrives at Nginx, the Nginx local cache is read first
  • If the Nginx local cache misses, Redis is queried directly (without going through Tomcat)
  • If the Redis query misses, Tomcat is queried
  • After the request enters Tomcat, the JVM process cache is queried first
  • If the JVM process cache misses, the database is queried

Insert image description here
  In a multi-level cache architecture, the business logic for querying the Nginx local cache, Redis, and Tomcat has to be written inside Nginx. Such an nginx service is therefore no longer a reverse proxy server, but a web server that runs business logic.
  These business nginx services therefore also need to be deployed as a cluster to improve concurrency, with a dedicated nginx instance in front acting as the reverse proxy, as shown in the figure:
Insert image description here
In addition, our Tomcat services will also be deployed in cluster mode later:
Insert image description here

As you can see, there are two keys to multi-level caching:

  • One is to write business logic in nginx, implementing the nginx local cache and the Redis and Tomcat queries
  • The other is to implement the JVM process cache in Tomcat

The nginx programming part uses the OpenResty framework together with the Lua language.
This is also the difficulty and focus of this article.

2. JVM process cache

To demonstrate the multi-level cache case, we first prepare a product query business.

2.1. Import cases

Refer to the pre-course materials: "Case Import Instructions.md"
Reference document: Stages 1 and 8 - Chapter 4 - Case Import Instructions : https://editor.csdn.net/md/?articleId=128646254
Insert image description here

2.2. First introduction to Caffeine (important)

  Caching plays a vital role in daily development. Because a cache lives in memory, data reads are very fast, which greatly reduces database access and relieves pressure on the database. We divide caches into two categories:

  • Distributed cache, for example Redis:
    • Advantages: larger storage capacity, better reliability, and can be shared among clusters
    • Disadvantages: There is network overhead for accessing the cache
    • Scenario: The amount of cached data is large, reliability requirements are high, and it needs to be shared between clusters
  • Process-local cache, for example HashMap or GuavaCache:
    • Advantages: Reading local memory, no network overhead, faster
    • Disadvantages: limited storage capacity, low reliability, and cannot be shared
    • Scenario: high performance requirements and small amount of cached data

Today we will use the Caffeine framework to implement the JVM process cache.

  Caffeine is a high-performance local cache library based on Java 8 that provides a near-optimal hit rate. Spring's internal caching currently uses Caffeine. GitHub address: https://github.com/ben-manes/caffeine

Caffeine's performance is very good. The following picture is the official performance comparison:
Insert image description here
As you can see, Caffeine is far ahead!

Basic cache API:

@Test
void testBasicOps() {
    // build the cache object
    Cache<String, String> cache = Caffeine.newBuilder().build();

    // store data
    cache.put("gf", "迪丽热巴");

    // read data
    String gf = cache.getIfPresent("gf");
    System.out.println("gf = " + gf);

    // read data with two arguments:
    //   argument 1: the cache key
    //   argument 2: a lambda whose parameter is the key and whose body queries the database
    // the JVM cache is checked by key first; on a miss, the lambda is executed
    String defaultGF = cache.get("defaultGF", key -> {
        // query the database by key
        return "柳岩";
    });
    System.out.println("defaultGF = " + defaultGF);
}

Since Caffeine is a cache, it must have a cache eviction strategy; otherwise memory would eventually be exhausted.
Caffeine provides three cache eviction strategies:

  • Capacity-based: set an upper limit on the number of cached entries

    // create the cache object
    Cache<String, String> cache = Caffeine.newBuilder()
        .maximumSize(1) // cap the number of cached entries at 1
        .build();
    
    
  • Time-based: set a validity period for cached entries

    // create the cache object
    Cache<String, String> cache = Caffeine.newBuilder()
        // entries expire 10 seconds after the last write
        .expireAfterWrite(Duration.ofSeconds(10))
        .build();
    
  • Reference-based: store entries as soft or weak references and let GC reclaim the cached data. Performance is poor; not recommended.

Note: by default, Caffeine does not automatically clean up and evict a cache element the moment it expires. Instead, eviction of stale data is performed after a read or write operation, or during idle time.

2.3. Implement the JVM process cache (important)

2.3.1.Requirements

Use Caffeine to implement the following requirements:

  • Add a cache to the business of querying products by id; query the database on a cache miss
  • Add a cache to the business of querying product stock by id; query the database on a cache miss
  • The initial cache capacity is 100
  • The cache upper limit is 10000

2.3.2. Implementation

  First, we need to define two Caffeine cache objects to hold the cached product and stock data. Define a CaffeineConfig class under the com.heima.item.config package of item-service:

package com.heima.item.config;

@Configuration
public class CaffeineConfig {

    @Bean
    public Cache<Long, Item> itemCache(){
        // products
        return Caffeine.newBuilder()
                .initialCapacity(100)  // initial cache capacity of 100
                .maximumSize(10_000)   // cache upper limit of 10000
                .build();
    }

    @Bean
    public Cache<Long, ItemStock> stockCache(){
        // stock
        return Caffeine.newBuilder()
                .initialCapacity(100)  // initial cache capacity of 100
                .maximumSize(10_000)   // cache upper limit of 10000
                .build();
    }
}

Then, modify the ItemController class under the com.heima.item.web package of item-service to add the caching logic:

@RestController
@RequestMapping("item")
public class ItemController {

    @Autowired
    private IItemService itemService;
    @Autowired
    private IItemStockService stockService;

    @Autowired
    private Cache<Long, Item> itemCache;
    @Autowired
    private Cache<Long, ItemStock> stockCache;

    // ... others omitted

    @GetMapping("/{id}")
    public Item findById(@PathVariable("id") Long id) {
        return itemCache.get(id, key -> itemService.query()
                .ne("status", 3).eq("id", key)
                .one()
        );
    }

    @GetMapping("/stock/{id}")
    public ItemStock findStockById(@PathVariable("id") Long id) {
        return stockCache.get(id, key -> stockService.getById(key));
    }
}

3. Introduction to Lua syntax (important)

Nginx programming requires the Lua language, so we must first get started with basic Lua syntax.

3.1. First introduction to Lua

  Lua is a lightweight, compact scripting language written in standard C and released in open source form. It is designed to be embedded in applications, providing flexible extension and customization. Official website: https://www.lua.org/
Insert image description here
Lua is often embedded in programs developed in C, for example in game development and game plugins.

Nginx is also developed in C, so it too can be extended with Lua.

3.1.1. HelloWorld

The Lua language environment is installed by default on CentOS 7, so you can run Lua code directly.

1) Create a new file hello.lua in any directory of the Linux virtual machine
2) Add the following content:
Insert image description here

print("Hello World!")  

3) Run
Insert image description here

3.2. Variables and loops

Learning any language starts with variables, and to declare variables you first need to know the data types.

3.2.1.Lua data types

Common data types supported in Lua include nil, boolean, number, string, function, and table:
Insert image description here
In addition, Lua provides the type() function to determine the data type of a variable:
Insert image description here
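For example, the sketch below exercises type() on a value of each of these kinds; it is plain Lua and can be run with any standard interpreter:

```lua
-- type() returns the type name of a value as a string
print(type('hello'))   --> string
print(type(21))        --> number
print(type(true))      --> boolean
print(type(nil))       --> nil
print(type({1, 2, 3})) --> table
print(type(print))     --> function
```

Note that type() always returns a string, so type(nil) prints the string "nil", not the nil value itself.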

3.2.2. Declare variables

When declaring a variable in Lua, you do not need to specify a data type; instead, the local keyword declares the variable as a local variable:

-- declare a string; single or double quotes both work
local str = 'hello'
-- strings are concatenated with ..
local str2 = 'hello' .. 'world'
-- declare a number
local num = 21
-- declare a boolean
local flag = true

Lua's table type can be used both as an array and as a map (similar to a Java object). An array is just a special table whose keys are the array indexes:

-- declare an array: a table whose keys are indexes
local arr = {'java', 'python', 'lua'}
-- declare a table, similar to a Java map
local map = {name='Jack', age=21}

Array indexes in Lua start at 1, and access is similar to Java:

-- access the array; Lua array indexes start at 1
print(arr[1])

Tables in Lua can be accessed using keys:

-- access the table
print(map['name'])
print(map.name)

3.2.3. Loop

Tables can be traversed with a for loop, but traversing an array differs slightly from traversing an ordinary table.

Traverse the array:

-- declare an array: a table whose keys are indexes
local arr = {'java', 'python', 'lua'}
-- traverse the array
for index,value in ipairs(arr) do
    print(index, value)
end

Traverse an ordinary table:

-- declare a map, i.e. a table
local map = {name='Jack', age=21}
-- traverse the table
for key,value in pairs(map) do
   print(key, value)
end

3.3. Conditional control and functions

Conditional control and function declarations in Lua are similar to Java.

3.3.1. Function

Syntax for defining functions:

function function_name(argument1, argument2, ..., argumentn)
    -- function body
    return return_value
end

For example, define a function to print an array:

function printArr(arr)
    for index, value in ipairs(arr) do
        print(value)
    end
end

3.3.2.Conditional control

Lua's conditional control is similar to Java's, for example the if/else syntax:

if (boolean_expression)
then
   --[ block executed when the expression is true --]
else
   --[ block executed when the expression is false --]
end

Unlike Java, logical operations in Boolean expressions are based on English words:
Insert image description here
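As a small illustration (a minimal sketch; the variables are made up for the example), the keywords and, or, and not take the place of Java's &&, ||, and !:

```lua
local age = 21
local name = 'Jack'
-- 'and' requires both conditions; 'not' negates; 'or' requires either
if age >= 18 and name ~= '' then
    print('adult with a name')
end
-- the same condition rewritten with 'or' and 'not'
if not (age < 18 or name == '') then
    print('adult with a name')
end
```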

3.3.3.Case

Requirement: write a custom function that prints a table, and prints an error message when the parameter is nil.

function printArr(arr)
    if not arr then
        print('the array must not be nil!')
        return
    end
    for index, value in ipairs(arr) do
        print(value)
    end
end

4. Implement multi-level caching

Implementing a multi-level cache is inseparable from nginx programming, and nginx programming is inseparable from OpenResty.

4.1. Install OpenResty

  OpenResty® is a high-performance web platform based on Nginx, used to conveniently build dynamic web applications, web services, and dynamic gateways capable of ultra-high concurrency and high scalability. It has the following characteristics:

  • Has the full functionality of Nginx
  • Extended with the Lua language, integrating a large number of fine Lua libraries and third-party modules
  • Allows custom business logic and custom libraries written in Lua

Official website: https://openresty.org/cn/
Insert image description here

To install OpenResty, refer to the "Installing OpenResty.md" provided in the pre-course materials:
Reference document: Stages 1 and 8 - Chapter 4 - Installing OpenResty : https://editor.csdn.net/md/?articleId=128646254
Insert image description here

4.2. Quick Start with OpenResty (important)

The multi-level cache architecture we hope to achieve is as shown in the figure:
Insert image description here
where:

  • nginx on Windows acts as the reverse proxy service, forwarding the front end's product ajax query requests to the OpenResty cluster

  • The OpenResty cluster is used to write the multi-level cache business

4.2.1.Reverse proxy process

  For now, the product detail page uses fake product data. However, in the browser you can see that the page initiates an ajax request to query the real product data.
  This request is as follows:
Insert image description here
  The request address is localhost and the port is 80, so it is received by the Nginx service installed on Windows, which then proxies it to the OpenResty cluster:
Insert image description here
We need to write the business logic in OpenResty to query the product data and return it to the browser.

For now, though, OpenResty will just receive the request and return fake product data.

4.2.2.OpenResty listens for requests

Many OpenResty features depend on its Lua libraries, so you need to specify the library directories in nginx.conf and import the dependencies.

1) Load OpenResty's Lua modules (on the virtual machine)
Modify the /usr/local/openresty/nginx/conf/nginx.conf file and add the following code under http:

# lua modules
lua_package_path "/usr/local/openresty/lualib/?.lua;;";
# C modules
lua_package_cpath "/usr/local/openresty/lualib/?.so;;";

2) Listen on the /api/item path (on the virtual machine)
Under the server block of /usr/local/openresty/nginx/conf/nginx.conf, add a listener for the /api/item path:

location /api/item {
    # default response type
    default_type application/json;
    # the response is produced by the lua/item.lua file
    content_by_lua_file lua/item.lua;
}

This listener is similar to path mapping in SpringMVC: @GetMapping("/api/item").

content_by_lua_file lua/item.lua is equivalent to calling the item.lua file, executing the business in it, and returning the result to the user; it is like calling a service in Java.

4.2.3.Write item.lua

1) Create a lua directory under the /usr/local/openresty/nginx directory:
Insert image description here
2) Create a new file item.lua in the /usr/local/openresty/nginx/lua folder:
Insert image description here
3) Write item.lua to return fake data
In item.lua, use the ngx.say() function to write the data into the Response:

ngx.say('{"id":10001,"name":"SALSA AIR","title":"RIMOWA 21寸托运箱拉杆箱 SALSA AIR系列果绿色 820.70.36.4","price":17900,"image":"https://m.360buyimg.com/mobilecms/s720x720_jfs/t6934/364/1195375010/84676/e9f2c55f/597ece38N0ddcbc77.jpg!q70.jpg.webp","category":"拉杆箱","brand":"RIMOWA","spec":"","status":1,"createTime":"2019-04-30T16:00:00.000+00:00","updateTime":"2019-04-30T16:00:00.000+00:00","stock":2999,"sold":31290}')

4) Reload the configuration:

nginx -s reload

Refresh the product page http://localhost/item.html?id=1001 and you can see the effect:
Insert image description here

4.3. Request parameter processing (important)

In the previous section, OpenResty received the front-end request, but returned fake data.

To return real data, we must query product information based on the product id passed from the front end.
So how do we get the product id passed by the front end?

4.3.1. API for obtaining parameters

OpenResty provides APIs for obtaining different types of front-end request parameters:
Insert image description here

4.3.2. Get parameters and return

The ajax request initiated by the front end is as shown in the figure:
Insert image description here
As you can see, the product id is passed as a path placeholder, so we can use regular-expression matching to obtain it.

1) Get the product id
  Modify the /api/item listener in the /usr/local/openresty/nginx/conf/nginx.conf file, using a regular expression to capture the id:

location ~ /api/item/(\d+) {
    # default response type
    default_type application/json;
    # the response is produced by the lua/item.lua file
    content_by_lua_file lua/item.lua;
}

2) Splice the id into the result and return it
Modify the /usr/local/openresty/nginx/lua/item.lua file to obtain the id and splice it into the returned result:

-- get the product id
local id = ngx.var[1]
-- concatenate and return
ngx.say('{"id":' .. id .. ',"name":"SALSA AIR","title":"RIMOWA 21寸托运箱拉杆箱 SALSA AIR系列果绿色 820.70.36.4","price":17900,"image":"https://m.360buyimg.com/mobilecms/s720x720_jfs/t6934/364/1195375010/84676/e9f2c55f/597ece38N0ddcbc77.jpg!q70.jpg.webp","category":"拉杆箱","brand":"RIMOWA","spec":"","status":1,"createTime":"2019-04-30T16:00:00.000+00:00","updateTime":"2019-04-30T16:00:00.000+00:00","stock":2999,"sold":31290}')

3) Reload and test
Run the command to reload the OpenResty configuration:

nginx -s reload

Refresh the page and you can see that the result now contains the id:
Insert image description here

4.4. Query Tomcat (important)

  After getting the product id, we should look up the product information in the cache, but we have not yet set up the nginx or redis caches. So for now we go to tomcat and query the product information by id. Our implementation is as shown in the figure:
Insert image description here
  Note that our OpenResty runs in the virtual machine while Tomcat runs on the Windows computer; make sure not to mix up the two IP addresses.
Insert image description here

4.4.1. API for sending http requests

nginx provides an internal API for sending http requests. For example, in the /usr/local/openresty/nginx/lua/item.lua file:

local resp = ngx.location.capture("/path",{
    method = ngx.HTTP_GET,  -- request method
    args = {a=1, b=2},      -- GET parameters
})

The returned response includes:

  • resp.status: the response status code
  • resp.header: the response headers, a table
  • resp.body: the response body, i.e. the response data

  Note: This pathis the path and does not include IPthe port. This request will be monitored and processed nginxinternally server.

  But we want this request to be sent to the Tomcat server, so we also need a location that reverse-proxies this path.
Modify the /usr/local/openresty/nginx/conf/nginx.conf file, under its server block:

 location /path {
     # the Windows machine's IP and the Java service port; make sure the Windows firewall is off
     proxy_pass http://192.168.150.1:8081;
 }

The principle is as shown in the figure:
Insert image description here

4.4.2. Encapsulating http tools

Next, we encapsulate a tool for sending http requests based on ngx.location.capture, and use it to query tomcat.

1) Add a reverse proxy to the Java service on Windows
  Because all item-service interfaces start with /item, we listen on the /item path and proxy it to tomcat on Windows.

  Modify the /usr/local/openresty/nginx/conf/nginx.conf file and add a location:
Note: when modifying this on the virtual machine, remember to change the IP to match your own environment.

location /item {
    proxy_pass http://192.168.150.1:8081;
}

  From then on, calling ngx.location.capture("/item") will send the request to the tomcat service on Windows.

2) Encapsulate the tool module
  As mentioned earlier, OpenResty loads tool files from the following two directories at startup:
Insert image description here
Therefore, our custom http tool also needs to be placed in one of these directories.

Create a new file common.lua in the /usr/local/openresty/lualib directory:

vi /usr/local/openresty/lualib/common.lua

The content is as follows:

-- encapsulated function: send an http request and parse the response
local function read_http(path, params)
    local resp = ngx.location.capture(path,{
        method = ngx.HTTP_GET,
        args = params,
    })
    if not resp then
        -- log the error and return 404
        ngx.log(ngx.ERR, "http request failed, path: ", path, ", args: ", params)
        ngx.exit(404)
    end
    return resp.body
end
-- export the function
local _M = {
    read_http = read_http
}
return _M

  This exports the read_http function as a field of the table _M and returns it, similar to exporting a module.
  To use it, call require('common') to import the library, where common is the file name of the library (the common.lua file).

3) Implement the product query
  Finally, modify the /usr/local/openresty/nginx/lua/item.lua file and use the library we just encapsulated to query tomcat:

-- import the custom common tool module; the return value is the _M returned by common
local common = require("common")
-- get the read_http function from common
local read_http = common.read_http
-- get the path parameter
local id = ngx.var[1]
-- query the product by id
local itemJSON = read_http("/item/".. id, nil)
-- query the product stock by id
local itemStockJSON = read_http("/item/stock/".. id, nil)

  The results queried here are json strings: two separate json strings, one for the product and one for the stock. What the page ultimately needs is the two jsons spliced into a single json:
Insert image description here
This requires us to first convert the JSON into Lua tables, merge the data, and then convert the result back to JSON.

4.4.3. The cjson module (json serialization and deserialization)

OpenResty provides a cjson module for handling JSON serialization and deserialization.
Official address: https://github.com/openresty/lua-cjson/

1) Import the cjson module in your .lua file. Following the steps above, this is the /usr/local/openresty/nginx/lua/item.lua file:

local cjson = require "cjson"

2) Serialization:

local obj = {
    name = 'jack',
    age = 21
}
-- serialize the table to json
local json = cjson.encode(obj)

3) Deserialization:

local json = '{"name": "jack", "age": 21}'
-- deserialize the json into a table
local obj = cjson.decode(json)
print(obj.name)

4.4.4. Implement Tomcat query

Next, we modify the previous item.lua business and add the json handling:

-- import the common function library
local common = require('common')
local read_http = common.read_http
-- import the cjson library
local cjson = require('cjson')

-- get the path parameter
local id = ngx.var[1]
-- query the product by id
local itemJSON = read_http("/item/".. id, nil)
-- query the product stock by id
local itemStockJSON = read_http("/item/stock/".. id, nil)

-- convert the JSON into Lua tables
local item = cjson.decode(itemJSON)
local stock = cjson.decode(itemStockJSON)

-- merge the data
item.stock = stock.stock
item.sold = stock.sold

-- serialize item to json and return the result
ngx.say(cjson.encode(item))

4.4.5. ID-based load balancing

In the code so far, our tomcat is deployed standalone. In real development, tomcat must run in cluster mode:
Insert image description here
Therefore, OpenResty needs to load-balance across the tomcat cluster.

The default load balancing rule is round-robin. When we query /item/10001:

  • The first request goes to the tomcat service on port 8081, and a JVM process cache is formed inside that service
  • The second request goes to the tomcat service on port 8082, which has no JVM cache (the JVM cache cannot be shared), so the database is queried

  As you can see, because of round-robin, the JVM cache built by the first query on 8081 does not take effect until 8081 is hit again, so the cache hit rate is too low.

What can we do?
  If every query for the same product went to the same tomcat service, the JVM cache would definitely take effect.
  In other words, we need to load-balance by product id instead of round-robin.

1) Principle

  nginx provides a load balancing algorithm based on the request path:

  nginx hashes the request path and takes the remainder modulo the number of tomcat services; the remainder determines which service is accessed, achieving load balancing.

For example:

  • Our request path is /item/10001
  • The total number of tomcats is 2 (8081, 8082)
  • Hashing the request path /item/10001 and taking the remainder gives 1
  • So the first tomcat service, 8081, is accessed

  As long as the id stays the same, the result of the hash operation never changes, so the same product always accesses the same tomcat service and the JVM cache stays effective.
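The hash-and-modulo selection described above can be sketched in a few lines of plain Lua. Note this is only an illustration: the simple_hash function below is made up for the example, and nginx's hash directive uses its own internal hash function, not this one:

```lua
-- toy hash: sum the byte values of the request path
local function simple_hash(uri)
    local h = 0
    for i = 1, #uri do
        h = h + uri:byte(i)
    end
    return h
end

local servers = {'192.168.150.1:8081', '192.168.150.1:8082'}
local uri = '/item/10001'
-- the same uri always hashes to the same value, so it always maps to the same server
local index = simple_hash(uri) % #servers + 1
print(servers[index])
```

Because the mapping depends only on the request path, repeated queries for the same product id land on the same tomcat instance.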

2) Implementation

  Modify the /usr/local/openresty/nginx/conf/nginx.conf file to implement id-based load balancing.

First, define the tomcat cluster and set path-based load balancing:
Note: change the IP to your own.

upstream tomcat-cluster {
    hash $request_uri;
    server 192.168.150.1:8081;
    server 192.168.150.1:8082;
}

Then, modify the reverse proxy for the tomcat service so that it points to the tomcat cluster:

location /item {
    proxy_pass http://tomcat-cluster;
}

Reload OpenResty:

nginx -s reload

3) Test

Start two tomcat services:
Insert image description here
Start at the same time:
Insert image description here
After clearing the logs, visit the pages again; you can see that products with different ids access different tomcat services:
Insert image description here
Insert image description here

4.5. Redis cache warm-up (important)

A Redis cache faces the cold start problem:

Cold start: when the service has just started, there is no cache in Redis yet. If all product data is cached only when it is first queried, this may put considerable pressure on the database.

Cache warm-up: in real development, we can use big-data statistics on the hot data users access, query this hot data in advance when the project starts, and save it to Redis.

We have a small amount of data and no data-statistics functionality, so for now we can simply load all the data into the cache at startup.

1) Install a Redis container with Docker:

docker run --name redis -p 6379:6379 -d redis redis-server --appendonly yes

2) Introduce the Redis dependency into the item-service service:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

3) Configure the Redis address
Note: the IP must be your own virtual machine address:

spring:
  redis:
    host: 192.168.150.101

4) Write an initialization class
Cache warm-up needs to run when the project starts, after the RedisTemplate has been obtained.
  Here we implement the InitializingBean interface, because InitializingBean runs after the bean has been created by Spring and all its member variables have been injected.

package com.heima.item.config;

@Component
public class RedisHandler implements InitializingBean {

    @Autowired
    private StringRedisTemplate redisTemplate;

    @Autowired
    private IItemService itemService;
    @Autowired
    private IItemStockService stockService;

    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public void afterPropertiesSet() throws Exception {
        // initialize the cache
        // 1. query the product information
        List<Item> itemList = itemService.list();
        // 2. put it into the cache
        for (Item item : itemList) {
            // 2.1. serialize the item to JSON
            String json = MAPPER.writeValueAsString(item);
            // 2.2. store it in redis
            redisTemplate.opsForValue().set("item:id:" + item.getId(), json);
        }

        // 3. query the product stock information
        List<ItemStock> stockList = stockService.list();
        // 4. put it into the cache
        for (ItemStock stock : stockList) {
            // 4.1. serialize the stock to JSON
            String json = MAPPER.writeValueAsString(stock);
            // 4.2. store it in redis
            redisTemplate.opsForValue().set("item:stock:id:" + stock.getId(), json);
        }
    }
}

4.6. Query the Redis cache (important)

  Now that the Redis cache is ready, we can implement the Redis query logic in OpenResty, as shown in the red box in the figure below:
Insert image description here

When a request reaches OpenResty:

  • Query Redis cache first
  • If the Redis cache misses, query Tomcat again

4.6.1. Encapsulating Redis tools

  OpenResty provides a module for operating Redis; we can use it directly as long as we import it. But for convenience, we add the Redis operations to the common.lua tool library from before.

Modify the /usr/local/openresty/lualib/common.lua file:

1) Import the Redis module and initialize the Redis object:

-- import redis
local redis = require('resty.redis')
-- initialize the redis object
local red = redis:new()
red:set_timeouts(1000, 1000, 1000)

2) Encapsulate a function to release the Redis connection; it actually returns the connection to the connection pool:

-- tool method to close the redis connection; it actually returns it to the connection pool
local function close_redis(red)
    local pool_max_idle_time = 10000 -- idle time of the connection, in milliseconds
    local pool_size = 100 -- connection pool size
    local ok, err = red:set_keepalive(pool_max_idle_time, pool_size)
    if not ok then
        ngx.log(ngx.ERR, "failed to return connection to the redis pool: ", err)
    end
end

3) Encapsulate a function to query Redis data by key:

-- method to query redis; ip and port are the redis address, key is the key to query
local function read_redis(ip, port, key)
    -- get a connection
    local ok, err = red:connect(ip, port)
    if not ok then
        ngx.log(ngx.ERR, "failed to connect to redis: ", err)
        return nil
    end
    -- query redis
    local resp, err = red:get(key)
    -- handle query failure
    if not resp then
        ngx.log(ngx.ERR, "redis query failed: ", err, ", key = ", key)
    end
    -- handle an empty result
    if resp == ngx.null then
        resp = nil
        ngx.log(ngx.ERR, "redis returned no data, key = ", key)
    end
    close_redis(red)
    return resp
end

The function that sends http requests and parses responses is unchanged:

-- encapsulated function: send an http request and parse the response
local function read_http(path, params)
    local resp = ngx.location.capture(path,{
        method = ngx.HTTP_GET,
        args = params,
    })
    if not resp then
        -- log the error and return 404
        ngx.log(ngx.ERR, "http query failed, path: ", path, ", args: ", params)
        ngx.exit(404)
    end
    return resp.body
end

4) Export both functions:

-- export the functions
local _M = {
    read_http = read_http,
    read_redis = read_redis
}
return _M

The complete common.lua:

-- import redis
local redis = require('resty.redis')
-- initialize the redis object
local red = redis:new()
red:set_timeouts(1000, 1000, 1000)

-- tool method to close the redis connection; it actually returns it to the connection pool
local function close_redis(red)
    local pool_max_idle_time = 10000 -- idle time of the connection, in milliseconds
    local pool_size = 100 -- connection pool size
    local ok, err = red:set_keepalive(pool_max_idle_time, pool_size)
    if not ok then
        ngx.log(ngx.ERR, "failed to return connection to the redis pool: ", err)
    end
end

-- encapsulated function: query redis; ip and port are the redis address, key is the key to query
local function read_redis(ip, port, key)
    -- get a connection
    local ok, err = red:connect(ip, port)
    if not ok then
        ngx.log(ngx.ERR, "failed to connect to redis: ", err)
        return nil
    end
    -- query redis
    local resp, err = red:get(key)
    -- handle query failure
    if not resp then
        ngx.log(ngx.ERR, "redis query failed: ", err, ", key = ", key)
    end
    -- handle an empty result
    if resp == ngx.null then
        resp = nil
        ngx.log(ngx.ERR, "redis returned no data, key = ", key)
    end
    close_redis(red)
    return resp
end

-- encapsulated function: send an http request and parse the response
local function read_http(path, params)
    local resp = ngx.location.capture(path,{
        method = ngx.HTTP_GET,
        args = params,
    })
    if not resp then
        -- log the error and return 404
        ngx.log(ngx.ERR, "http query failed, path: ", path, ", args: ", params)
        ngx.exit(404)
    end
    return resp.body
end
-- export the functions
local _M = {
    read_http = read_http,
    read_redis = read_redis
}
return _M

4.6.2. Implement Redis query

  Next, we modify the item.lua file to implement the real Redis query.
The query logic is:

  • Query Redis based on id
  • If the query fails, continue to query Tomcat
  • Return query results

1) Modify the /usr/local/openresty/nginx/lua/item.lua file and add a query function:

-- import the common function library
local common = require('common')
local read_http = common.read_http    -- functions encapsulated and exported above
local read_redis = common.read_redis
-- encapsulate the query function
function read_data(key, path, params)
    -- query redis
    local val = read_redis("127.0.0.1", 6379, key)
    -- check the query result
    if not val then
        ngx.log(ngx.ERR, "redis query failed, falling back to http, key: ", key)
        -- the redis query failed; query via http instead
        val = read_http(path, params)
    end
    -- return the data
    return val
end

2) Then modify the product query and stock query business:
Insert image description here
3) Complete item.luacode:

-- import the common function library
local common = require('common')
local read_http = common.read_http     -- function exported above
local read_redis = common.read_redis   -- function exported above
-- import the cjson library
local cjson = require('cjson')

-- wrap the query logic in one function
function read_data(key, path, params)
    -- query redis
    local val = read_redis("127.0.0.1", 6379, key)
    -- check the result
    if not val then
        ngx.log(ngx.ERR, "redis query failed, falling back to http, key: ", key)
        -- redis miss: query via http
        val = read_http(path, params)
    end
    -- return the data
    return val
end

-- get the path parameter
local id = ngx.var[1]

-- query product info
local itemJSON = read_data("item:id:" .. id, "/item/" .. id, nil)
-- query stock info
local stockJSON = read_data("item:stock:id:" .. id, "/item/stock/" .. id, nil)

-- decode the JSON into lua tables
local item = cjson.decode(itemJSON)
local stock = cjson.decode(stockJSON)
-- merge the data
item.stock = stock.stock
item.sold = stock.sold

-- serialize item back to json and return it
ngx.say(cjson.encode(item))

4.7. Nginx local cache (important)

Now only the last link in the multi-level cache is missing: the nginx local cache. As shown in the figure:

4.7.1.Local cache API

  OpenResty provides Nginx with the shared dict feature, which shares data across nginx worker processes and can be used to implement caching.

1) Enable the shared dictionary by adding the following under the http block of nginx.conf:

 # shared dictionary (the local cache), named item_cache, 150 MB in size
 lua_shared_dict item_cache 150m; 

2) Manipulate the shared dictionary:

-- get the local cache object
local item_cache = ngx.shared.item_cache
-- store: specify key, value and expiry in seconds (0, the default, means never expire)
item_cache:set('key', 'value', 1000)
-- read
local val = item_cache:get('key')

4.7.2. Implement local cache query

1) Modify the /usr/local/openresty/lua/item.lua file, changing the read_data query function to add local-cache logic:

-- import the shared dict: the local cache object
local item_cache = ngx.shared.item_cache

-- wrap the query logic in one function
function read_data(key, expire, path, params)
    -- query the local cache
    local val = item_cache:get(key)
    if not val then
        ngx.log(ngx.ERR, "local cache miss, falling back to redis, key: ", key)
        -- query redis
        val = read_redis("127.0.0.1", 6379, key)
        -- check the result
        if not val then
            ngx.log(ngx.ERR, "redis query failed, falling back to http, key: ", key)
            -- redis miss: query via http
            val = read_http(path, params)
        end
    end
    -- on success, write the data into the local cache
    item_cache:set(key, val, expire)
    -- return the data
    return val
end

2) Modify the product and stock query logic in item.lua to call the new read_data function:
Compared with the previous version, there is an extra cache-expiry parameter. After expiry the nginx cache entry is deleted automatically, and the next visit refreshes it.

Here the expiry for basic product information is set to 30 minutes, and for stock to 1 minute.

Stock is updated frequently, so a long cache time could let it drift far from the database.
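The per-key expiry used above amounts to a tiny TTL key-value store. As a rough analogue (plain Java, purely illustrative, not OpenResty's actual implementation), an entry just remembers its deadline and a read past the deadline counts as a miss:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal TTL-cache sketch mirroring the shared-dict semantics used above:
// set(key, value, ttl) stores a deadline; get() treats an expired entry as a
// miss, which makes the caller fall back to redis/tomcat, just like read_data.
public class TtlCacheSketch {
    static class Entry {
        final String value;
        final long deadlineMillis;
        Entry(String value, long ttlMillis) {
            this.value = value;
            this.deadlineMillis = System.currentTimeMillis() + ttlMillis;
        }
    }

    private final Map<String, Entry> store = new HashMap<>();

    public void set(String key, String value, long ttlMillis) {
        store.put(key, new Entry(value, ttlMillis));
    }

    public String get(String key) {
        Entry e = store.get(key);
        if (e == null || System.currentTimeMillis() > e.deadlineMillis) {
            store.remove(key);  // expired entries are dropped lazily
            return null;        // miss: caller falls back to the next level
        }
        return e.value;
    }

    public static void main(String[] args) throws InterruptedException {
        TtlCacheSketch cache = new TtlCacheSketch();
        cache.set("item:id:1", "{\"price\":100}", 50); // 50 ms TTL
        System.out.println(cache.get("item:id:1") != null); // fresh entry: hit
        Thread.sleep(80);
        System.out.println(cache.get("item:id:1") == null); // expired: miss
    }
}
```

This is why the stock entry gets the shorter TTL: the smaller the deadline, the sooner a stale value turns into a miss and gets refreshed from the database.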

3) Complete item.lua file:

-- import the common function library
local common = require('common')
local read_http = common.read_http
local read_redis = common.read_redis
-- import the cjson library
local cjson = require('cjson')
-- import the shared dict: the local cache object
local item_cache = ngx.shared.item_cache

-- wrap the query logic in one function
function read_data(key, expire, path, params)
    -- query the local cache
    local val = item_cache:get(key)
    if not val then
        ngx.log(ngx.ERR, "local cache miss, falling back to redis, key: ", key)
        -- query redis
        val = read_redis("127.0.0.1", 6379, key)
        -- check the result
        if not val then
            ngx.log(ngx.ERR, "redis query failed, falling back to http, key: ", key)
            -- redis miss: query via http
            val = read_http(path, params)
        end
    end
    -- on success, write the data into the local cache
    item_cache:set(key, val, expire)
    -- return the data
    return val
end

-- get the path parameter
local id = ngx.var[1]

-- query product info (cache for 30 minutes)
local itemJSON = read_data("item:id:" .. id, 1800, "/item/" .. id, nil)
-- query stock info (cache for 1 minute)
local stockJSON = read_data("item:stock:id:" .. id, 60, "/item/stock/" .. id, nil)

-- decode the JSON into lua tables
local item = cjson.decode(itemJSON)
local stock = cjson.decode(stockJSON)
-- merge the data
item.stock = stock.stock
item.sold = stock.sold

-- serialize item back to json and return it
ngx.say(cjson.encode(item))

5. Cache synchronization (important)

  In most cases the browser is served cached data. If the cached data diverges significantly from the database, serious consequences may follow.
  We must therefore ensure consistency between the database and the cache: this is cache synchronization.

5.1. Data synchronization strategy

There are three common cache synchronization strategies:

Set an expiry time: give the cache a TTL; entries are deleted automatically on expiry and refreshed on the next query.

  • Advantages: simple and convenient
  • Disadvantages: poor timeliness, cache may be inconsistent before expiration
  • Scenario: Business with low update frequency and low timeliness requirements

Synchronous double write: modify the cache directly whenever the database is modified.

  • Advantages: strong timeliness, strong consistency between cache and database
  • Disadvantages: code intrusion and high coupling;
  • Scenario: Cache data with high consistency and timeliness requirements

Asynchronous notification: send an event when the database is modified; services listening for the notification then update their cached data.

  • Advantages: low coupling, multiple cache services can be notified at the same time
  • Disadvantages: Average timeliness, there may be intermediate inconsistencies
  • Scenario: The timeliness requirements are average and there are multiple services that need to be synchronized.
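To make the trade-off concrete, here is a minimal in-memory sketch of the synchronous double-write strategy (plain Java; the maps stand in for MySQL and Redis, and all names are illustrative, not from the course project). Note how the business method must touch both stores itself, which is exactly the code intrusion listed as a disadvantage above:

```java
import java.util.HashMap;
import java.util.Map;

// Synchronous double write: the service updates the database and the cache in
// the same call. Strong consistency, but business code now knows about the cache.
public class DoubleWriteSketch {
    private final Map<Long, String> db = new HashMap<>();    // stand-in for MySQL
    private final Map<Long, String> cache = new HashMap<>(); // stand-in for Redis

    public void updateItem(long id, String json) {
        db.put(id, json);    // 1. write the database
        cache.put(id, json); // 2. write the cache immediately afterwards
    }

    public String queryItem(long id) {
        String val = cache.get(id);  // cache-first read
        if (val == null) {
            val = db.get(id);        // fall back to the database on a miss
            if (val != null) cache.put(id, val);
        }
        return val;
    }

    public static void main(String[] args) {
        DoubleWriteSketch svc = new DoubleWriteSketch();
        svc.updateItem(1L, "{\"price\":100}");
        // cache and database agree right after the write
        System.out.println(svc.queryItem(1L)); // prints {"price":100}
    }
}
```

The TTL-only strategy would drop the `cache.put` from `updateItem` and rely on expiry instead, trading consistency for decoupling.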

Asynchronous notification can be implemented with MQ or Canal:

1) MQ-based asynchronous notification:
Interpretation:

  • After the product service completes the modification of the data, it only needs to send a message to MQ.
  • The cache service listens to MQ messages and then completes updates to the cache

There is still a small amount of code intrusion.
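A broker-free sketch of this flow (plain Java; a listener list stands in for MQ, all names are illustrative): after writing the database, the product service only publishes an event, and the cache service refreshes itself when it receives it.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Asynchronous notification sketch: the "broker" is just a list of consumers.
// The product service publishes an item-changed event after the DB write;
// the cache service subscribes and refreshes the cache on each event.
public class AsyncNotifySketch {
    private final List<Consumer<Long>> broker = new ArrayList<>(); // stand-in for MQ
    final Map<Long, String> db = new HashMap<>();
    final Map<Long, String> cache = new HashMap<>();

    public AsyncNotifySketch() {
        // cache service: on notification, re-read the row and update the cache
        broker.add(id -> cache.put(id, db.get(id)));
    }

    // product service: write the DB, then just publish an event (the only intrusion)
    public void updateItem(long id, String json) {
        db.put(id, json);
        broker.forEach(listener -> listener.accept(id));
    }

    public static void main(String[] args) {
        AsyncNotifySketch app = new AsyncNotifySketch();
        app.updateItem(1L, "{\"stock\":99}");
        System.out.println(app.cache.get(1L)); // prints {"stock":99}
    }
}
```

With a real broker the delivery is asynchronous, which is where the brief window of inconsistency mentioned above comes from.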

2) Canal-based notification
Interpretation:

  • After the product service completes the product modification, the business ends directly without any code intrusion.
  • Canal monitors MySQL changes and immediately notifies the cache service when a change is discovered.
  • The cache service receives the canal notification and updates the cache.

Zero code intrusion

5.2.Install Canal

5.2.1.Get to know Canal

  Canal [kə'næl], meaning waterway/pipeline, is an open-source Alibaba project developed in Java. It provides incremental data subscription and consumption based on parsing the database's incremental log. GitHub address: https://github.com/alibaba/canal

Canal is built on MySQL master-slave replication, which works as follows:

  • 1) MySQL master writes data changes to the binary log (binary log), and the recorded data is called binary log events
  • 2) MySQL slave copies the master's binary log events to its relay log (relay log)
  • 3) MySQL slave replays the events in the relay log and reflects the data changes to its own data

  Canal disguises itself as a MySQL slave node to monitor the master's binary log changes. It then pushes the captured changes to Canal clients, which complete the synchronization of other data stores.

5.2.2.Install Canal

To install and configure Canal, refer to the pre-course material:
Reference documents: Stages 1 and 8 - Chapter 4 - Installing and configuring Canal : https://editor.csdn.net/md?articleId=128646254

5.3. Monitor Canal

Canal provides clients in many languages. When Canal detects a binlog change, it notifies its clients.
We can use the Java client provided by Canal to listen to Canal notification messages. When a change message is received, the cache is updated.

Here, however, we will use a third-party open-source client, canal-starter. Address: https://github.com/NormanGyllenhaal/canal-client
It integrates with Spring Boot and supports auto-configuration, making it much simpler to use than the official client.

5.3.1.Introduce dependencies:

<dependency>
    <groupId>top.javatool</groupId>
    <artifactId>canal-spring-boot-starter</artifactId>
    <version>1.2.1-RELEASE</version>
</dependency>

5.3.2.Write configuration:

canal:
  destination: heima # canal instance (destination) name; must match the name configured when installing canal
  server: 192.168.150.101:11111 # canal server address

5.3.3. Modify the Item entity class

Complete the mapping between Item and database table fields through @Id, @Column, and other annotations:

package com.heima.item.pojo;

import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableField;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;

import javax.persistence.Column;
import javax.persistence.Id;
import javax.persistence.Transient;
import java.util.Date;

@Data
@TableName("tb_item")
public class Item {
    @TableId(type = IdType.AUTO)
    @Id
    private Long id;          // item id
    @Column(name = "name")
    private String name;      // item name
    private String title;     // item title
    private Long price;       // price (in cents)
    private String image;     // item image
    private String category;  // category name
    private String brand;     // brand name
    private String spec;      // specification
    private Integer status;   // item status: 1 - normal, 2 - off the shelf
    private Date createTime;  // creation time
    private Date updateTime;  // update time
    @TableField(exist = false)
    @Transient
    private Integer stock;
    @TableField(exist = false)
    @Transient
    private Integer sold;
}

5.3.4.Writing a listener

Write a listener by implementing the EntryHandler<T> interface to receive Canal messages. Note two points:

  • The implementation class must be annotated with @CanalTable("tb_item") to specify the table to monitor
  • The generic type of EntryHandler is the entity class mapped to that table
package com.heima.item.canal;

import com.github.benmanes.caffeine.cache.Cache;
import com.heima.item.config.RedisHandler;
import com.heima.item.pojo.Item;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import top.javatool.canal.client.annotation.CanalTable;
import top.javatool.canal.client.handler.EntryHandler;

@CanalTable("tb_item")
@Component
public class ItemHandler implements EntryHandler<Item> {

    @Autowired
    private RedisHandler redisHandler;
    @Autowired
    private Cache<Long, Item> itemCache;

    @Override
    public void insert(Item item) {
        // write the data to the JVM process cache
        itemCache.put(item.getId(), item);
        // write the data to redis
        redisHandler.saveItem(item);
    }

    @Override
    public void update(Item before, Item after) {
        // write the data to the JVM process cache
        itemCache.put(after.getId(), after);
        // write the data to redis
        redisHandler.saveItem(after);
    }

    @Override
    public void delete(Item item) {
        // remove the data from the JVM process cache
        itemCache.invalidate(item.getId());
        // remove the data from redis
        redisHandler.deleteItemById(item.getId());
    }
}

  The Redis operations here are all encapsulated in the RedisHandler object, a class we wrote earlier for cache preheating. Its content is as follows:

package com.heima.item.config;

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.heima.item.pojo.Item;
import com.heima.item.pojo.ItemStock;
import com.heima.item.service.IItemService;
import com.heima.item.service.IItemStockService;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

import java.util.List;

@Component
public class RedisHandler implements InitializingBean {

    @Autowired
    private StringRedisTemplate redisTemplate;

    @Autowired
    private IItemService itemService;
    @Autowired
    private IItemStockService stockService;

    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public void afterPropertiesSet() throws Exception {
        // initialize the cache
        // 1. query product info
        List<Item> itemList = itemService.list();
        // 2. put it into the cache
        for (Item item : itemList) {
            // 2.1. serialize the item to JSON
            String json = MAPPER.writeValueAsString(item);
            // 2.2. store it in redis
            redisTemplate.opsForValue().set("item:id:" + item.getId(), json);
        }

        // 3. query stock info
        List<ItemStock> stockList = stockService.list();
        // 4. put it into the cache
        for (ItemStock stock : stockList) {
            // 4.1. serialize the stock record to JSON
            String json = MAPPER.writeValueAsString(stock);
            // 4.2. store it in redis
            redisTemplate.opsForValue().set("item:stock:id:" + stock.getId(), json);
        }
    }

    public void saveItem(Item item) {
        try {
            String json = MAPPER.writeValueAsString(item);
            redisTemplate.opsForValue().set("item:id:" + item.getId(), json);
        } catch (JsonProcessingException e) {
            throw new RuntimeException(e);
        }
    }

    public void deleteItemById(Long id) {
        redisTemplate.delete("item:id:" + id);
    }
}

Origin blog.csdn.net/weixin_52223770/article/details/128633716