[Case] Consistency solution for multi-level cache architecture for high concurrent business

High-concurrency projects are practically inseparable from caching, and as soon as a cache is introduced, the problem of data consistency between the cache and the database follows.

First, let's look at the three common Redis cache read/write patterns in high-concurrency projects.

Cache Aside

The cache-aside pattern is the most common Redis caching pattern, and the one most systems adopt.
On a read, the application checks the cache first; if the cache has no data, it loads the data from the database.
Data found in the database is then put into the cache, so subsequent reads hit the cache directly, improving access efficiency.
A write usually does not update the cache directly but deletes the cache entry instead, because for structures such as hashes and lists an in-place update would require traversal (a minimal sketch follows the list below).
  • Advantages
    • Reads are efficient and the cache hit rate is high; writes go straight to the database, so consistency is reasonably good; the implementation is relatively simple.
  • Shortcomings
    • The database and the cache can become inconsistent; you must consider the inconsistency caused by cache invalidation combined with database updates.
  • Application scenario
    • Suitable when reads are very frequent and writes are relatively rare, for example the product details page of an e-commerce site, where reads far outnumber updates and inserts.
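
To make the pattern concrete, here is a minimal Cache-Aside sketch in Java. It assumes a Spring RedisTemplate bean and a hypothetical productMapper DAO; the key format, TTL, and imports (omitted, as in the other listings) are illustrative only.

@Service
public class ProductCacheAsideService {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    @Autowired
    private ProductMapper productMapper; // hypothetical MyBatis-Plus mapper

    public ProductDO getProduct(Long id) {
        String key = "product:" + id;
        // 1. Read the cache first.
        ProductDO cached = (ProductDO) redisTemplate.opsForValue().get(key);
        if (cached != null) {
            return cached;
        }
        // 2. Cache miss: load from the database.
        ProductDO fromDb = productMapper.selectById(id);
        if (fromDb != null) {
            // 3. Populate the cache with a TTL so the next read hits it.
            redisTemplate.opsForValue().set(key, fromDb, 30, TimeUnit.MINUTES);
        }
        return fromDb;
    }

    public void updateProduct(ProductDO product) {
        // Write path: update the database, then delete (not update) the cache entry.
        productMapper.updateById(product);
        redisTemplate.delete("product:" + product.getId());
    }
}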

Read/Write Through

In the read/write-through pattern, read and write operations go through the cache, which then synchronously updates the data in the database; it is more complex to develop and is generally used less often.
Under Read/Write Through, every read and write operates on the cache and is then synchronized to the database, keeping cache and database data consistent.
The application treats the cache as its primary data source; the database is transparent to the application, and the cache takes over both updating the database and reading from it (a sketch follows the list below).
  • Advantages
    • Writes are fast and consistency is high: the cache and the database hold the same data, and the cache hit rate is high.
  • Shortcomings
    • Reads can be slow: if the cache has no usable data, a database query happens every time, which hurts performance when data volumes are large.
  • Application scenario
    • Systems with frequent writes and infrequent reads, such as the Ceph cloud storage system.
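
For comparison, a hedged sketch of a write-through style wrapper: the application talks only to this cache layer, which owns the database synchronization. ProductMapper is hypothetical and imports are omitted, as elsewhere.

public class WriteThroughProductCache {

    private final RedisTemplate<String, ProductDO> redisTemplate;
    private final ProductMapper productMapper; // hypothetical mapper

    public WriteThroughProductCache(RedisTemplate<String, ProductDO> redisTemplate,
                                    ProductMapper productMapper) {
        this.redisTemplate = redisTemplate;
        this.productMapper = productMapper;
    }

    // Read-through: the cache layer loads from the DB on a miss.
    public ProductDO read(Long id) {
        String key = "product:" + id;
        ProductDO value = redisTemplate.opsForValue().get(key);
        if (value == null) {
            value = productMapper.selectById(id);
            if (value != null) {
                redisTemplate.opsForValue().set(key, value);
            }
        }
        return value;
    }

    // Write-through: update the cache and synchronously persist to the DB,
    // so the two stay consistent.
    public void write(ProductDO product) {
        redisTemplate.opsForValue().set("product:" + product.getId(), product);
        productMapper.updateById(product);
    }
}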

Write Behind

Also called Write Back or asynchronous write-back, this pattern is used relatively rarely.
On a write, the cache records the modified data but does not synchronize it to the database immediately.
Typically the changes stay in the cache layer and are flushed to the database asynchronously, in batches, at some later point.
The advantage is that writes are very fast and the database is spared frequent updates, which improves database performance.
  • Advantages
    • Writes are fast and performance is high, though data consistency is only moderate.
  • Shortcomings
    • Reads can be slow, and because the database is updated asynchronously, its data may lag behind.
  • Application scenario
    • Write-heavy workloads, such as user activity points in games: writes are frequent and performance-critical up front, while subsequent queries are relatively few (a sketch follows the list below).
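
A hedged Write-Behind sketch: writes touch only the cache and an in-memory queue, and a background task batches them into the database. The queue, batch size, and schedule are illustrative; buffered writes are lost if the process dies before a flush, which is the consistency trade-off noted above. Assumes @EnableScheduling is configured.

@Component
public class WriteBehindBuffer {

    private final BlockingQueue<ProductDO> pending = new LinkedBlockingQueue<>();

    @Autowired
    private RedisTemplate<String, ProductDO> redisTemplate;
    @Autowired
    private ProductMapper productMapper; // hypothetical mapper

    // Fast path: only the cache and the queue are touched synchronously.
    public void write(ProductDO product) {
        redisTemplate.opsForValue().set("product:" + product.getId(), product);
        pending.offer(product);
    }

    // Background flush: drain up to 500 buffered writes every 5 seconds
    // and apply them to the database in one batch.
    @Scheduled(fixedDelay = 5000)
    public void flush() {
        List<ProductDO> batch = new ArrayList<>();
        pending.drainTo(batch, 500);
        for (ProductDO p : batch) {
            productMapper.updateById(p);
        }
    }
}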

In business development, reads basically flow from the database into the cache. For writes, then, in what order should we touch the cache and the database?

Scenario 1: Update the database first, then update the cache

  • Thread A updates the database and then updates the cache. The cache update succeeds, but thread A's database transaction fails to commit, or the method body throws an exception and rolls back. The cache and the database are now inconsistent.

Scenario 2: Delete the cache first, then update the database

  • Thread A deletes the cache and updates the database but has not yet committed. Meanwhile thread B reads the cache, finds nothing, reads the old value from the database (thread A's update is not yet committed), and puts it into the cache. Thread A then commits. The cache now holds old data while the database holds new data: inconsistent.

Scenario 3: Delete the cache first, then update the database, then delete the cache again

  • Thread A deletes the cache and updates the database but has not yet committed. Thread B reads the cache, finds nothing, reads the old value from the database (thread A's update is not yet committed), and puts it into the cache. Thread A then commits, and finally deletes the cache again (the cache is now empty, so subsequent reads fetch the latest data). Consistency is guaranteed, but extra IO is spent: every write deletes from Redis one more time. A sketch of this "delayed double delete" follows.
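
A hedged Java sketch of the delayed double delete; the 500 ms delay is an illustrative guess that should exceed a typical concurrent read-plus-cache-fill window, and the productMapper/scheduler wiring is assumed.

@Service
public class ProductWriteService {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;
    @Autowired
    private ProductMapper productMapper; // hypothetical mapper

    // Single-threaded scheduler used only for the delayed second delete.
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    @Transactional
    public void updateProduct(ProductDO product) {
        String key = "product:" + product.getId();
        // 1. First cache delete.
        redisTemplate.delete(key);
        // 2. Database update (committed when the transaction completes).
        productMapper.updateById(product);
        // 3. Delayed second delete: evicts any stale value that a concurrent
        //    reader re-cached between steps 1 and 2.
        scheduler.schedule(() -> redisTemplate.delete(key), 500, TimeUnit.MILLISECONDS);
    }
}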

OK, let's get back to the topic: what is a multi-level cache architecture?

A multi-level cache architecture is a high-availability caching technique used to optimize application performance. It consists of multiple cache layers, each of which can be chosen and tuned according to its characteristics and role. By keeping multiple copies of data across the layers, it maximizes application performance and availability while guarding against cache penetration, cache breakdown, and data inconsistency.

Here we use an Nginx + Lua + Canal + Redis + MySQL architecture. Reads query the Nginx-level cache through Lua; if the Nginx cache has no data, the Redis cache is queried; if Redis has no data either, MySQL is queried directly. For writes, Canal monitors the incremental changes of the specified tables in the database, and a Java program consumes the changes Canal captures and writes them into Redis. The Java Canal client maintains the Redis cache; whether the Nginx local cache is used, and how it is invalidated, depends on the project.

OK, what is Canal?

Canal is Alibaba's incremental log parsing and subscription/publishing system based on MySQL, mainly used to solve data subscription and consumption problems. Canal primarily supports parsing the MySQL binlog; after parsing, a Canal client processes the resulting data.

In the early days, because Alibaba ran dual data centers in Hangzhou and the United States, there was a business need for cross-data-center synchronization, implemented mainly with business triggers that captured incremental changes. From 2010 onward, the business gradually tried parsing database logs to obtain incremental changes for synchronization, and a large number of incremental database subscription and consumption businesses derived from this.

  • Services based on log incremental subscription and consumption include

    • database mirroring
    • Database real-time backup
    • Index construction and real-time maintenance (split heterogeneous index, inverted index, etc.)
    • Business cache refresh
    • Incremental data processing with business logic

Canal currently supports source MySQL versions 5.1.x, 5.5.x, 5.6.x, 5.7.x, and 8.0.x.

  • Canal emulates the MySQL slave interaction protocol: it pretends to be a MySQL slave and sends the dump protocol to the MySQL master.
  • The MySQL master receives the dump request and starts pushing the binary log to the slave (i.e., Canal).
  • Canal parses the binary log (delivered as a byte stream) into independent data-change events such as insert, update, and delete.

Environment preparation

OK, the previous part introduced the multi-level cache architecture; next, let's prepare the environment.

First we deploy MySQL, here as a Docker container. After deployment, MySQL's binlog must be enabled.

# create the data directory
mkdir -p /home/data/mysql/

# deploy MySQL
docker run \
    -p 3306:3306 \
    -e MYSQL_ROOT_PASSWORD=123456 \
    -v /home/data/mysql/conf:/etc/mysql/conf.d \
    -v /home/data/mysql/data:/var/lib/mysql:rw \
    --name mysql_test \
    --restart=always \
    -d mysql:8.0

Edit the configuration file my.cnf under the [mysqld] section, and restart the MySQL service afterwards.

# enable binlog (optional in MySQL 8.0, where it is on by default)
log-bin=mysql-bin

# use ROW mode
binlog_format=row

# server-id must not clash with Canal's slaveId
server-id=1

Here is a brief introduction to MySQL's three binlog formats.

STATEMENT format

Statement-Based Replication (SBR): every SQL statement that modifies data is recorded in the binlog.
Only the executed statement is logged, not the concrete values it changed, so performance is high: there is no need to record every row change, which greatly reduces binlog volume, avoids a lot of IO, and improves system performance.
Because Statement mode records only the SQL, statements containing certain functions may not replay to the same result.
Drawback: uuid() generates a random string on every execution, so the value recorded on the master differs from what the slave produces when it re-executes the statement; now() and reads of system parameters cause the same master/slave divergence.

ROW format (default)

Row-Based Replication (RBR): the SQL statement and its context are not recorded; only which rows were modified, and what they became, is saved.
Every row-level change is recorded in full detail, so the replication failures possible under Statement mode cannot occur; this gives the highest precision and granularity.
Drawback: the log volume is large. Bulk UPDATEs, full-table DELETEs, ALTER TABLE, and similar operations must record the change of every affected row, producing huge logs and, with them, IO pressure.

MIXED format

A mode that switches automatically between STATEMENT and ROW: STATEMENT is used when there are no large-scale changes, and ROW is used when there are, ensuring the log keeps high precision and granularity while storage space is used efficiently.

Run the following command to check whether binlog is enabled: SHOW VARIABLES LIKE 'log_bin'

Database authorization

-- create the replication user
CREATE USER 'canal'@'%';
-- set its password
ALTER USER 'canal'@'%' IDENTIFIED WITH mysql_native_password BY '123456';
-- grant replication privileges
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
-- reload privileges
FLUSH PRIVILEGES;

Next we deploy Redis, also with Docker; this is very simple.

docker run -itd --name redis -p 6379:6379 \
--privileged=true \
-v /redis/data:/data --restart always redis \
--requirepass "psd"

Then let's deploy canal-server, again with Docker.

 docker run -p 11111:11111 --name canal -d canal/canal-server:v1.1.4

Enter the container and modify the configuration file

docker exec -it canal /bin/bash

# edit the instance configuration
vi canal-server/conf/example/instance.properties

#################################################
## mysql serverId: must differ from the MySQL master's server-id
canal.instance.mysql.slaveId=2
canal.instance.gtidon=false

# address of the MySQL master
canal.instance.master.address=ip:3306
canal.instance.tsdb.enable=true

# username/password: the database account authorized earlier
canal.instance.dbUsername=canal
canal.instance.dbPassword=123456
canal.instance.connectionCharset = UTF-8
canal.instance.enableDruid=false

# tables to parse, as regular expressions; separate multiple patterns with commas,
# escape with double backslashes (\\); all tables: .* or .*\\..*
canal.instance.filter.regex=.*\\..*
canal.instance.filter.black.regex=

Restart the container, then enter it to check the logs.

docker restart canal

docker exec -it canal /bin/bash

tail -100f canal-server/logs/example/example.log

Next, let's deploy the last component, Nginx. Here we deploy OpenResty directly, which bundles Nginx and Lua.

Execute the following commands in sequence.

# add the yum repo:
wget https://openresty.org/package/centos/openresty.repo
sudo mv openresty.repo /etc/yum.repos.d/

# update the yum index:
sudo yum check-update

sudo yum install openresty

# install the resty command-line utility
sudo yum install openresty-resty

# list all packages in the openresty repository
sudo yum --disablerepo="*" --enablerepo="openresty" list available

# check the installed version
resty -V

OK, the server environment is now fully deployed; let's start coding.

First we need to create a database table and write a set of CRUD logic around it. Let's create a product table.

CREATE TABLE `product` (
  `id` bigint NOT NULL AUTO_INCREMENT,
  `title` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin DEFAULT NULL,
  `cover_img` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin DEFAULT NULL COMMENT '封面图',
  `amount` decimal(10,2) DEFAULT NULL COMMENT '现价',
  `summary` varchar(2048) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin DEFAULT NULL COMMENT '概要',
  `detail` longtext CHARACTER SET utf8mb4 COLLATE utf8mb4_bin COMMENT '详情',
  `gmt_modified` datetime DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `gmt_create` datetime DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin ROW_FORMAT=DYNAMIC;

Create a SpringBoot project, add redis, mysql, mybatis, canal dependencies, and configure the yml file.

     <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <!-- redis -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-redis</artifactId>
            <version>3.0.6</version>
        </dependency>
        <!--数据库连接-->
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
        </dependency>
        <!--mybatis plus-->
        <dependency>
            <groupId>com.baomidou</groupId>
            <artifactId>mybatis-plus-boot-starter</artifactId>
            <version>3.4.0</version>
        </dependency>
        <!--lombok-->
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.16</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba.otter</groupId>
            <artifactId>canal.client</artifactId>
            <version>1.1.4</version>
        </dependency>
    </dependencies>
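
The ProductDO entity used throughout is not shown in the original. A plausible MyBatis-Plus mapping for the product table above might look like this (field types are inferred from the DDL; imports are omitted as elsewhere):

@Data
@TableName("product")
public class ProductDO {

    @TableId(type = IdType.AUTO)
    private Long id;

    private String title;

    /** maps to cover_img via MyBatis-Plus camel-case mapping */
    private String coverImg;

    private BigDecimal amount;

    private String summary;

    private String detail;

    private Date gmtModified;

    private Date gmtCreate;
}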

Next we write the product CRUD logic. The specific code is not shown here; it is just basic CRUD, and you can download the source package, which I will upload to CSDN. A hedged sketch of a possible implementation follows the interface below.


/**
 * @author lixiang
 * @date 2023/6/25 17:20
 */
public interface ProductService {

    /**
     * Add a product
     * @param product the product to insert
     * @return the number of affected rows
     */
    int addProduct(ProductDO product);

    /**
     * Update a product
     * @param product the product to update
     * @return the number of affected rows
     */
    int updateProduct(ProductDO product);

    /**
     * Delete a product by ID
     * @param id the product ID
     */
    void deleteProductById(Long id);

    /**
     * Find a product by ID
     * @param id the product ID
     * @return the product, or null if absent
     */
    ProductDO selectProductById(Long id);

    /**
     * Query products by page
     * @param current the page number
     * @param size the page size
     * @return a map holding the page of results
     */
    Map<String, Object> selectProductList(int current, int size);
}
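
The implementation lives in the downloadable source; a minimal sketch backed by MyBatis-Plus might look like the following. It assumes a ProductMapper extends BaseMapper<ProductDO> and that the MyBatis-Plus pagination interceptor is registered.

@Service
public class ProductServiceImpl implements ProductService {

    @Autowired
    private ProductMapper productMapper;

    @Override
    public int addProduct(ProductDO product) {
        return productMapper.insert(product);
    }

    @Override
    public int updateProduct(ProductDO product) {
        return productMapper.updateById(product);
    }

    @Override
    public void deleteProductById(Long id) {
        productMapper.deleteById(id);
    }

    @Override
    public ProductDO selectProductById(Long id) {
        return productMapper.selectById(id);
    }

    @Override
    public Map<String, Object> selectProductList(int current, int size) {
        Page<ProductDO> page = productMapper.selectPage(new Page<>(current, size), null);
        Map<String, Object> result = new HashMap<>();
        result.put("total", page.getTotal());
        result.put("records", page.getRecords());
        return result;
    }
}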

The project starts and verifies without problems, so let's move on. Before developing the Canal listener, let's first understand what ApplicationRunner is.

ApplicationRunner is an interface provided by the Spring Boot framework for running tasks or code once the application has started. If you want some task or initialization to run automatically after startup, this interface is the place for it.

Usage: create a class that implements the interface and annotate it with @Component, so that Spring Boot scans the class and executes its run method.

/**
 * @author lixiang
 * @date 2023/6/29 23:08
 */
@Component
@Slf4j
public class CanalRedisConsumer implements ApplicationRunner {

    @Override
    public void run(ApplicationArguments args) throws Exception {
        log.info("CanalRedisConsumer executed");
    }
}

OK, next let's focus on the core logic. Straight to the code.

/**
 * Here we operate directly on the Redis String type.
 * @author lixiang
 * @date 2023/6/29 23:08
 */
@Component
@Slf4j
public class CanalRedisConsumer implements ApplicationRunner {

    @Autowired
    private RedisTemplate redisTemplate;

    @Override
    public void run(ApplicationArguments args) throws Exception {
        // Create a CanalConnector
        CanalConnector canalConnector = CanalConnectors.newSingleConnector(
                new InetSocketAddress("payne.f3322.net", 11111), "example", "", "");
        try {
            // Connect to the Canal server, retrying until it succeeds
            while (true) {
                try {
                    canalConnector.connect();
                    break;
                } catch (CanalClientException e) {
                    log.info("Connect to Canal Server failed, retrying...");
                }
            }
            log.info("Connect to Canal Server success");
            // Subscribe to databases/tables; this pattern listens to everything, i.e. .*\\..*
            canalConnector.subscribe(".*\\..*");
            // Roll back to the last batchId, discarding logs fetched but not yet acked
            canalConnector.rollback();

            // Continuously poll the Canal server and write the changes into Redis
            while (true) {
                Message message = canalConnector.getWithoutAck(100);
                long batchId = message.getId();

                // No new data: pause briefly, then poll again
                if (batchId == -1 || message.getEntries().isEmpty()) {
                    try {
                        Thread.sleep(1000);
                        continue;
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
                // Process the data
                for (CanalEntry.Entry entry : message.getEntries()) {
                    if (entry.getEntryType() == CanalEntry.EntryType.TRANSACTIONBEGIN || entry.getEntryType() == CanalEntry.EntryType.TRANSACTIONEND) {
                        continue;
                    }
                    CanalEntry.RowChange rowChange = null;
                    try {
                        rowChange = CanalEntry.RowChange.parseFrom(entry.getStoreValue());
                    } catch (Exception e) {
                        throw new RuntimeException("Error parsing Canal Entry.", e);
                    }

                    String table = entry.getHeader().getTableName();
                    CanalEntry.EventType eventType = rowChange.getEventType();
                    log.info("Canal detected a data change. DB: {}, Table: {}, Type: {}",
                            entry.getHeader().getSchemaName(), table, eventType);
                    // Apply each changed row
                    for (CanalEntry.RowData rowData : rowChange.getRowDatasList()) {
                        if (eventType == CanalEntry.EventType.DELETE) {
                            deleteData(table, rowData);
                        } else {
                            insertOrUpdateData(table, rowData);
                        }
                    }
                }

                try {
                    canalConnector.ack(batchId);
                } catch (Exception e) {
                    // Roll back all unacknowledged batches
                    canalConnector.rollback(batchId);
                }
            }

        } finally {
            canalConnector.disconnect();
        }
    }

    /**
     * Delete a row from Redis.
     */
    private void deleteData(String table, CanalEntry.RowData rowData) {
        List<CanalEntry.Column> columns = rowData.getBeforeColumnsList();
        JSONObject json = new JSONObject();
        columns.forEach(column -> json.put(column.getName(), column.getValue()));
        // Assumes the first column is the primary key, giving keys like "product:1"
        String key = table + ":" + columns.get(0).getValue();
        log.info("Deleting key {} from Redis", key);
        redisTemplate.delete(key);
    }

    /**
     * Insert or update a row in Redis.
     */
    private void insertOrUpdateData(String table, CanalEntry.RowData rowData) {
        List<CanalEntry.Column> columns = rowData.getAfterColumnsList();
        JSONObject json = new JSONObject();
        columns.forEach(column -> json.put(column.getName(), column.getValue()));
        // Assumes the first column is the primary key, giving keys like "product:1"
        String key = table + ":" + columns.get(0).getValue();
        log.info("Inserting/updating key {} in Redis", key);
        redisTemplate.opsForValue().set(key, json);
    }
}
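
One detail the original leaves implicit: for the Lua script shown later to read these values with a plain GET, the RedisTemplate should serialize keys and values as plain strings rather than JDK-serialized bytes. A hedged configuration sketch (one reasonable choice among several):

@Configuration
public class RedisConfig {

    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        // Keys as plain strings, e.g. "product:1".
        template.setKeySerializer(new StringRedisSerializer());
        // Values via toString(), which for a fastjson JSONObject is its JSON text.
        template.setValueSerializer(new GenericToStringSerializer<>(Object.class));
        template.afterPropertiesSet();
        return template;
    }
}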

Then we develop the REST endpoints for product CRUD.

/**
 * @author lixiang
 * @date 2023/6/30 17:48
 */
@RestController
@RequestMapping("/api/v1/product")
public class ProductController {

    @Autowired
    private ProductService productService;

    /**
     * Add a product
     */
    @PostMapping("/save")
    public String save(@RequestBody ProductDO product) {
        int flag = productService.addProduct(product);
        return flag == 1 ? "SUCCESS" : "FAIL";
    }

    /**
     * Update a product
     */
    @PostMapping("/update")
    public String update(@RequestBody ProductDO product) {
        int flag = productService.updateProduct(product);
        return flag == 1 ? "SUCCESS" : "FAIL";
    }

    /**
     * Find a product by ID
     */
    @GetMapping("/findById")
    public ProductDO findById(@RequestParam("id") Long id) {
        return productService.selectProductById(id);
    }

    /**
     * Query products by page
     */
    @GetMapping("/page")
    public Map<String, Object> page(@RequestParam("current") int current, @RequestParam("size") int size) {
        return productService.selectProductList(current, size);
    }
}

Add an item to verify.

Next, we package the SpringBoot program and run it on the server.

mvn clean package

Start it as a background daemon:

nohup java -jar multi-level-cache-1.0-SNAPSHOT.jar &

We have now verified that DB changes are synchronized to the cache; next we develop the part where Nginx reads Redis directly.

First of all, what is OpenResty, and why use it?

OpenResty, initiated by Zhang Yichun (agentzh), is a high-performance web platform based on Nginx and Lua. It bundles well-crafted Lua libraries, third-party modules, and their dependencies, so developers can easily build dynamic web applications, web services, and dynamic gateways capable of handling high concurrency with excellent scalability.

OpenResty packages the Nginx core, LuaJIT, many useful Lua libraries, and third-party Nginx modules together.

Nginx is written in C, so extending it directly is cumbersome; with OpenResty, developers can extend the Nginx core modules using the Lua programming language.

Its performance is strong: OpenResty can quickly build web systems that sustain more than 10,000 concurrent connections.

  • For some high-performance services, OpenResty can access MySQL or Redis directly.

  • There is no need to go through a third-party language (PHP, Python, Ruby, etc.) to query the database and return results, which greatly improves application performance.

So what is a Lua script?

Lua is written in standard C. It does not ship with heavyweight libraries, but it can easily be called from C/C++ code and can in turn call C/C++ functions.

It can be embedded widely in applications; however, as a scripting/dynamic language, Lua is not suited to heavy business logic and fits small, lightweight scenarios best.

OK, now let's develop the piece where Nginx reads Redis directly through a Lua script.

-- required libraries
local redis = require "resty.redis"
local redis_server = "<redis ip>"
local redis_port = 6379
local redis_pwd = "123456"

-- fetch a value from Redis
local function get_from_redis(key)
    local red = redis:new()

    local ok, err = red:connect(redis_server, redis_port)
    if not ok then
        -- connection failed: log the error to Nginx's error log
        ngx.log(ngx.ERR, "failed to connect to Redis: ", err)
        return ""
    end
    -- authenticate only after a successful connection
    local auth_ok, auth_err = red:auth(redis_pwd)
    if not auth_ok then
        ngx.log(ngx.ERR, "failed to authenticate to Redis: ", auth_err)
        return ""
    end
    local result, err = red:get(key)
    if not result then
        ngx.log(ngx.ERR, "failed to get ", key, " from Redis: ", err)
        return ""
    end
    -- put the connection back into the connection pool
    red:set_keepalive(10000, 100)
    return result
end

-- look up the cached data for the current request
local function get_cache_data()
    -- URI of the current request
    local uri = ngx.var.uri
    -- "id" query parameter of the current request
    local id = ngx.var.arg_id
    -- log the URI to Nginx's error log
    ngx.log(ngx.ERR, "URI: ", uri)
    -- log all request parameters to Nginx's error log
    ngx.log(ngx.ERR, "Args: ", ngx.var.args)

    local start_pos = string.find(uri, "/", 6) + 1
    local end_pos = string.find(uri, "/", start_pos)
    -- extract the segment between the third and fourth slashes (e.g. "product")
    local cache_prefix = string.sub(uri, start_pos, end_pos - 1)
    -- the Redis key is that segment plus the id, e.g. "product:1"
    local key = cache_prefix .. ":" .. id

    local result = get_from_redis(key)

    if result == nil or result == ngx.null or result == "" then
        -- cache miss: fall through to the backend server
        ngx.log(ngx.ERR, "not hit cache, key = ", key)
    else
        -- cache hit: return the result directly
        ngx.log(ngx.ERR, "hit cache, key = ", key)
        -- send the value stored in Redis straight back to the client
        ngx.say(result)
        -- finish the request; the client need not wait for the backend
        ngx.exit(ngx.HTTP_OK)
    end
end

-- run the cache lookup
get_cache_data()

Put the Lua script in the directory specified by the server.

Create a new lua directory and create a cache.lua file.

Then, we configure nginx.

  • Nginx configures reverse proxy, combined with lua script to read redis.
  • If the redis cache hits, the cached data is read and returned directly.
  • If the cache misses, the reverse proxy requests the backend interface to get the data back.
#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    #set the charset, otherwise the browser may show garbled text
    charset utf-8;
    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    # lua_code_cache is off here to avoid reloading Nginx after every Lua change;
    # set it to on in production.
    lua_code_cache off;

    # Lua module search paths (needed for the resty.* libraries)
    lua_package_path "$prefix/lualib/?.lua;;";
    lua_package_cpath "$prefix/lualib/?.so;;";

    # reverse proxy to the backend Spring Boot application
    upstream backend {
      server 127.0.0.1:8888;
    }

    server {
        listen       80;
        server_name  localhost;
        location /api {
            default_type 'text/plain';
            if ($request_method = GET) {
                access_by_lua_file /usr/local/openresty/lua/cache.lua;
            }
            proxy_pass http://backend;
            proxy_set_header Host $http_host;
        }
    }
}

Reload Nginx with the new configuration:

./nginx -c /usr/local/openresty/nginx/conf/nginx.conf -s reload

Nginx started successfully.

OK, next let's test and verify. First, request data that is already cached in Redis: Nginx returns it from Redis directly.

http://payne.f3322.net:8888/api/v1/product/findById?id=3

Then we request data that is not cached in Redis. Since the cache misses, the request passes straight through to the SpringBoot program.

http://payne.f3322.net:8888/api/v1/product/findById?id=2

When new data is added, it will be synchronized to Redis.

http://payne.f3322.net:8888/api/v1/product/save

{
    "title":"Mac Pro 13",
    "coverImg":"/group/4.png",
    "amount":"19999.00",
    "summary":"Mac Pro 13",
    "detail":"Mac Pro 13"
}

OK, with that the multi-level cache case is complete. If it helped, remember to like, bookmark, and comment for the blogger!

Origin: blog.csdn.net/weixin_47533244/article/details/131493468