Changgou Mall (4): Implementing Ad Caching and Synchronization with Lua, OpenResty, and Canal

Introduction to homepage advertisements

Process

On the homepage of the mall there are many advertisements, and in most cases they are fixed, so hitting MySQL for the ad content on every request is very inefficient. A better approach is to use Redis and OpenResty as a multi-level cache: if the data is in a cache, serve it from the cache; if not, fall back to MySQL. This greatly improves performance.


Table Structure

The advertisement data is stored in the changgou-content database (this database was not included in my course materials, so I created it myself). It contains two tables. One is tb_content_category (the ad category table): ads are grouped by their position on the page, such as the homepage carousel, "guess you like", and so on. The other is tb_content (the ad table), which stores the ad data itself.


CREATE TABLE `tb_content_category` (
    `id` BIGINT(20) NOT NULL AUTO_INCREMENT COMMENT '类目ID',
    `name` VARCHAR(50) DEFAULT NULL COMMENT '分类名称',
    PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=utf8 COMMENT='内容分类';
INSERT INTO `tb_content_category` VALUES (1, '首页轮播广告');
INSERT INTO `tb_content_category` VALUES (2, '今日推荐A');
INSERT INTO `tb_content_category` VALUES (3, '活动专区');
INSERT INTO `tb_content_category` VALUES (4, '猜你喜欢');

CREATE TABLE `tb_content` (
    `id` BIGINT(20) NOT NULL AUTO_INCREMENT,
    `category_id` BIGINT(20) NOT NULL COMMENT '内容类目ID',
    `title` VARCHAR(200) DEFAULT NULL COMMENT '内容标题', 
    `url` VARCHAR(500) DEFAULT NULL COMMENT '链接',
    `pic` VARCHAR(300) DEFAULT NULL COMMENT '图片绝对路径',
    `status` VARCHAR(1) DEFAULT NULL COMMENT '状态,0无效,1有效',
    `sort_order` INT(11) DEFAULT NULL COMMENT '排序',
    PRIMARY KEY (`id`),
    KEY `category_id` (`category_id`)
) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=utf8;
INSERT INTO `tb_content` VALUES (1, 1, '微信广告', 'https://blog.csdn.net/weixin_43461520', 'https://gitee.com/RobodLee/image_store/raw/master/%E5%BE%AE%E4%BF%A1%E5%85%AC%E4%BC%97%E5%8F%B7.png', '1', 1);

Lua

Introduction

Lua is a lightweight, compact scripting language written in standard C and released as open source. It is designed to be embedded in applications, providing them with flexible extension and customization capabilities.

Installation

cd /usr/local/server                                # switch to the download directory (any directory works)
curl -R -O http://www.lua.org/ftp/lua-5.3.5.tar.gz  # download Lua 5.3.5
tar zxf lua-5.3.5.tar.gz                            # extract
cd lua-5.3.5                                        # enter the extracted directory
make linux test                                     # build and test
make install                                        # install, so that lua is on the PATH
-------------------------------------------------------------------------------------
[root@localhost lua-5.3.5]# lua                     # run lua; if the following banner appears, the installation succeeded
Lua 5.3.5  Copyright (C) 1994-2018 Lua.org, PUC-Rio

Programming

Lua can be used in two ways: interactive programming and script programming.

  • Interactive programming

In interactive mode, you start the lua console and type Lua statements, which are executed immediately.

[root@localhost lua-5.3.5]# lua
Lua 5.3.5  Copyright (C) 1994-2018 Lua.org, PUC-Rio
> print("Hello World!")
Hello World!
>
  • Script programming

In script mode, you write your code in a .lua file and then run it with the command "lua filename.lua".
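For example, a minimal script (the file name is arbitrary):

```lua
-- hello.lua: run with `lua hello.lua`
local name = "World"
print("Hello " .. name .. "!")   -- prints: Hello World!
```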


Basic syntax

For details, refer to the Runoob Lua tutorial: https://www.runoob.com/lua/lua-tutorial.html

OpenResty

Introduction

OpenResty is a powerful web application server. Web developers can use the Lua scripting language to drive the various C and Lua modules supported by Nginx. More importantly, in terms of performance, OpenResty makes it easy to build high-performance web systems capable of handling more than 10K concurrent connections. In short, it packages Nginx together with an integrated Lua scripting environment: developers simply use the provided modules to implement their logic, instead of wiring Lua scripts into nginx by hand before calling them.
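To illustrate the idea, here is a hypothetical snippet (not part of this project's configuration) showing how OpenResty lets you embed Lua directly in an Nginx location:

```nginx
# inside a server block; handled by OpenResty's ngx_http_lua module
location /hello {
    default_type text/plain;
    content_by_lua_block {
        ngx.say("Hello from Lua inside Nginx")
    }
}
```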

Installation

yum install yum-utils              # install yum-utils, needed by the next command
# add the openresty repository; installation fails without this step
yum-config-manager --add-repo https://openresty.org/package/centos/openresty.repo
yum install openresty              # install openresty; just answer y to every prompt

After the installation is complete, don't forget to start it:

service openresty start

After it starts, use a browser to visit the virtual machine where openresty is installed. If the welcome page appears, the installation succeeded.


Configuration

OpenResty is now reachable, but to let it load Lua scripts from the /root directory directly, some configuration is needed.

cd /usr/local/openresty/nginx/conf        # switch to the conf directory under openresty's nginx directory
vi nginx.conf                             # edit the nginx configuration file
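The screenshot of the change is missing from the original. Based on the stated goal (loading scripts under /root), the usual edit in this setup is to run the worker processes as root so they can read /root/lua (acceptable on a lab VM, not recommended in production):

```nginx
# first line of nginx.conf: run workers as root so scripts under /root/lua are readable
user  root root;
```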


Loading and reading of ad cache

The task of this section: Nginx intercepts http://192.168.31.200/read_content?id=1 and executes a Lua script that first checks the Nginx local cache, then Redis if the local cache misses, and finally MySQL. The data then flows back MySQL -> Redis -> Nginx -> browser.

Define the Nginx cache module

cd /usr/local/openresty/nginx/conf      # nginx configuration directory
vi nginx.conf                           # edit the nginx configuration file

Configure the Nginx cache module inside the http block:
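The screenshot with the directive is missing. The Lua script below reads ngx.shared.dis_cache, so a shared dictionary with that exact name must be declared in http; the 128m size is an assumption:

```nginx
# inside the http block of nginx.conf
lua_shared_dict dis_cache 128m;   # name must match ngx.shared.dis_cache in the script
```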


Lua script

Next, prepare the Lua script: create a read_content.lua file in the /root/lua directory with the following content:

ngx.header.content_type="application/json;charset=utf8"
local uri_args = ngx.req.get_uri_args();    -- get all parameters from the uri
local id = uri_args["id"];      -- get the parameter named id
-- get the local cache
local cache_ngx = ngx.shared.dis_cache; -- load the Nginx cache module; it must be defined first
-- look up the locally cached data by ID
local contentCache = cache_ngx:get('content_cache_'..id);

--[[
If Nginx has the data cached, output it; otherwise load it from Redis
--]]
if contentCache == "" or contentCache == nil then
    local redis = require("resty.redis");   -- require the Redis module
    local red = redis:new()             -- create a Redis object
    red:set_timeout(2000)   -- connection timeout (ms)
    red:connect("192.168.31.200", 6379)     -- connect to Redis
    local rescontent=red:get("content_"..id);   -- read the data from Redis
    -- if Redis does not have it either, load it from MySQL
    if ngx.null == rescontent then
        local cjson = require("cjson");     -- require the json module
        local mysql = require("resty.mysql");   -- require the mysql module
        local db = mysql:new();     -- create a mysql object
        db:set_timeout(2000)    -- connection timeout (ms)
        -- mysql connection parameters
        local props = { 
            host = "192.168.31.200",
            port = 3306,
            database = "changgou_content",
            user = "root",
            password = "root"
        }
        local res = db:connect(props);  -- connect to mysql
        local select_sql = "select url,pic from tb_content where status ='1' and category_id="..id.." order by sort_order";
        res = db:query(select_sql); -- execute the sql
        local responsejson = cjson.encode(res); -- convert the mysql result to json
        red:set("content_"..id,responsejson);   -- store it in Redis
        ngx.say(responsejson);          -- output
        db:close()      -- close the mysql connection
    else
        cache_ngx:set('content_cache_'..id, rescontent, 10*60); -- copy the Redis data into the Nginx cache with a 10-minute expiry
        ngx.say(rescontent)     -- output
    end
    red:close()   -- close the Redis connection
else
    ngx.say(contentCache)   -- output
end

Configure Nginx

Now configure nginx so that it can execute the script. Edit the nginx.conf file mentioned above and add a location for /read_content to the server block inside http.
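The original screenshot is missing; a minimal location matching the description (the script path follows the /root/lua directory given above) would be:

```nginx
# inside the server block in http
location /read_content {
    content_by_lua_file /root/lua/read_content.lua;  # run the Lua script for every /read_content request
}
```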


This location means the Lua file is executed whenever a /read_content request arrives. Then reload the configuration:

cd /usr/local/openresty/nginx/sbin     # switch to the sbin directory under nginx
./nginx -s reload                      # reload the configuration

Finally, test it by requesting http://192.168.31.200/read_content?id=1 in a browser:


As you can see, the data loads correctly. When I was doing this, I got one of the database fields wrong and nothing was output, so be careful not to make the same mistake.

Nginx rate limiting

Nginx can limit traffic in two ways: by controlling the request rate, or by controlling the number of concurrent connections.

Control rate

Controlling the rate limits how frequently Nginx accepts requests; requests beyond the limit are rejected outright instead of being processed.


First, we need to define a rate-limit zone: edit the nginx configuration file mentioned above and add the following inside the http block:

# rate limit settings
# $binary_remote_addr is the key: limit based on remote_addr (the client IP); the binary_ prefix reduces memory usage.
# zone defines a shared memory area that stores the access state; contentRateLimit:10m means a 10MB zone named contentRateLimit. 1MB can hold the state of about 16,000 IP addresses, so 10MB holds about 160,000.
# rate sets the maximum request rate; rate=10r/s would mean at most 10 requests per second. Nginx actually tracks requests at millisecond granularity, so 10r/s really means one request per 100ms: if another request arrives within 100ms of the previous one, it is rejected (unless burst is configured).
limit_req_zone $binary_remote_addr zone=contentRateLimit:10m rate=2r/s;


After defining the zone, it must actually be used: in the nginx configuration file, add the limit_req directive to the relevant location under server in http.

# burst acts as a queue: with rate=2r/s, if 4 requests arrive at once, Nginx processes the first, puts the remaining 3 into the queue, and then takes one from the queue every 500ms. If more than burst requests arrive, the extra ones are rejected immediately with a 503.
# nodelay, used together with burst: queued requests are processed immediately instead of being paced at one per (1s/rate); the normal rate applies again once they are done.
limit_req zone=contentRateLimit burst=4 nodelay; # use the rate-limit configuration
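Putting the two directives together, the location from the caching section might look like this (a sketch, reusing the paths configured earlier):

```nginx
location /read_content {
    limit_req zone=contentRateLimit burst=4 nodelay;   # apply the zone defined in http
    content_by_lua_file /root/lua/read_content.lua;
}
```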


Finally, don't forget to reload the configuration:

cd /usr/local/openresty/nginx/sbin     # switch to the sbin directory under nginx
./nginx -s reload                      # reload the configuration

Control the amount of concurrency

Controlling the amount of concurrency means limiting how many connections a single IP (or the server as a whole) may hold at once. First, define the zones: edit the nginx.conf file and add the following configuration under http.

# limit by IP address; the zone is named perip and is 10m in size
limit_conn_zone $binary_remote_addr zone=perip:10m;
# limit by server name; the zone is named perserver and is 10m in size
limit_conn_zone $server_name zone=perserver:10m;

After that, a location has to use this configuration. Here we apply it to /brand: add the following to the location /brand block under server in nginx.conf.

limit_conn perip 10;      # at most 10 connections from a single client IP
limit_conn perserver 100; # at most 100 connections to the server in total
# forward the request to the .180 host, because the application runs on the host machine, not in the VM
proxy_pass http://192.168.31.180:18081;


Finally, just reload the configuration.

cd /usr/local/openresty/nginx/sbin     # switch to the sbin directory under nginx
./nginx -s reload                      # reload the configuration

Canal environment construction

Introduction

Canal monitors database changes in order to capture new or modified data. Whenever rows are inserted, updated, or deleted, MySQL writes the change to its log, and Canal learns which data changed by reading that log. Here, Canal hands the changed data to a Canal microservice, which then writes it to Redis.


Enable binlog mode and create MySQL user

Canal is built on MySQL's master-slave replication. It speaks the MySQL slave protocol: pretending to be a MySQL slave, it sends a dump request to the MySQL master; the master then starts pushing the binary log to the slave (that is, to Canal), and Canal parses the binary log objects (raw byte streams). Therefore, MySQL must have binlog mode enabled.

docker exec -it mysql /bin/bash     # enter the mysql container
cd /etc/mysql/mysql.conf.d          # switch to the mysql.conf.d directory
vi mysqld.cnf                       # edit the mysqld.cnf file
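The screenshot of the edit is missing. The usual additions to mysqld.cnf for Canal are the following; the server-id value here is an arbitrary example, it only has to be unique among the MySQL instances:

```ini
# append under the [mysqld] section
log-bin=/var/lib/mysql/mysql-bin   ; enable the binlog and set the file prefix
binlog-format=ROW                  ; Canal needs row-based logging
server-id=12345                    ; any id unique in the replication topology
```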


Because Canal needs to access the database, give it its own account: using the root account would be unsafe. Open Navicat or a command line and run:

-- the user name is canal, % means it can log in from any host, and the password is 123456
-- SELECT is the query privilege; REPLICATION SLAVE and REPLICATION CLIENT are the replication privileges
-- SUPER ON *.* TO 'canal'@'%': grant these privileges on every database and table to the user canal
-- FLUSH PRIVILEGES: reload the privilege tables
create user canal@'%' IDENTIFIED by '123456';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT,SUPER ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;

Finally restart the MySQL container .

docker restart mysql

Install and configure Canal container

  • Pull the image
docker pull docker.io/canal/canal-server
  • Install Canal
docker run -p 11111:11111 --name canal -d docker.io/canal/canal-server  # 11111:11111 is the port mapping

After the installation is complete, Canal needs to be configured:

docker exec -it canal /bin/bash     # enter the canal container
cd canal-server/conf                # switch to the directory containing the configuration files


Open canal.properties and have a look: it configures the Canal server id, port, and other server-level information.
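The screenshot is missing; the relevant entries in a stock canal.properties look roughly like this (shown for orientation, values are the defaults):

```properties
# canal-server/conf/canal.properties (excerpt)
canal.id = 1                      # id of this canal server
canal.port = 11111                # port the canal client connects to
canal.destinations = example      # instance list; "example" matches the instance directory name
```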


Now look at instance.properties (under the example directory); this file holds the database-related configuration.
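Again the screenshot is missing. The fields that matter here, filled in with the values used in this article (the MySQL address, and the canal user created above), would be roughly:

```properties
# canal-server/conf/example/instance.properties (excerpt)
canal.instance.master.address=192.168.31.200:3306   # the MySQL instance to watch
canal.instance.dbUsername=canal                     # the user created earlier
canal.instance.dbPassword=123456
canal.instance.filter.regex=.*\\..*                 # watch all schemas and tables
```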


After the configuration is complete, set the container to start on boot, and remember to restart canal:

docker update --restart=always canal
docker restart canal

Canal microservice construction

First, create a Module called changgou-service-canal under changgou-service as our microservice project. Once it is created you can add the required dependency, but the package is not in the Maven repository and has to be installed manually. The file was not included with my course video, so I found a copy online. After downloading and unpacking it, open the starter-canal directory inside, open a console there, and run mvn install. The process may be a bit slow; just wait patiently.


After the installation is complete, the dependency can be imported:

<dependencies>
    <!--canal依赖-->
    <dependency>
        <groupId>com.xpand</groupId>
        <artifactId>starter-canal</artifactId>
        <version>0.0.1-SNAPSHOT</version>
    </dependency>
</dependencies>

A microservice cannot do without its startup class and configuration file. The startup class:

@SpringBootApplication(exclude={DataSourceAutoConfiguration.class})
@EnableEurekaClient
@EnableCanalClient
public class CanalApplication {

    public static void main(String[] args) {
        SpringApplication.run(CanalApplication.class,args);
    }
}

The application.yml configuration file:

server:
  port: 18083
spring:
  application:
    name: canal
eureka:
  client:
    service-url:
      defaultZone: http://127.0.0.1:7001/eureka
  instance:
    prefer-ip-address: true
feign:
  hystrix:
    enabled: true
# hystrix configuration
hystrix:
  command:
    default:
      execution:
        timeout:
          # if enabled is set to false, request timeouts are handled by ribbon instead
          enabled: true
        isolation:
          strategy: SEMAPHORE
# canal configuration
canal:
  client:
    instances:
      example:
        host: 192.168.31.200
        port: 11111

Start it and see if there are any problems


Oops, something went wrong. This problem cost me a whole night's sleep and a lot of fiddling. In the end I uninstalled Canal and reinstalled it, which finally fixed things. The virtual machine already had a Canal installation whose configuration I had merely changed instead of installing fresh, so the fault was probably left over from that earlier configuration. Install it yourself to be safe.

Start again


Finally done! Canal microservice is successfully built!

Ad sync

Build microservices


Each time an ad is changed, MySQL records the operation in its log; canal picks up the log entry and sends the change to the canal microservice; the canal microservice calls the content microservice to query all ads under the modified category ID; finally, the canal microservice stores those ads in the Redis cache.

First, we need to build an ad microservice. Create a Module called changgou-service-content-api in changgou-service-api as the API project, then add two JavaBeans in the com.robod.content.pojo package: Content.java and ContentCategory.java.


Then create a changgou-service-content project under changgou-service as an advertising microservice . Add the required dependencies:

<dependencies>
    <dependency>
        <groupId>com.changgou</groupId>
        <artifactId>changgou-common</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
    <dependency>
        <groupId>com.changgou</groupId>
        <artifactId>changgou-service-content-api</artifactId>
        <version>1.0-SNAPSHOT</version>
    </dependency>
</dependencies>

Finally, add the configuration file and startup class.

server:
  port: 18084
spring:
  application:
    name: content
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://192.168.31.200:3306/changgou_content?useUnicode=true&characterEncoding=UTF-8&serverTimezone=UTC
    username: root
    password: root
eureka:
  client:
    service-url:
      defaultZone: http://127.0.0.1:7001/eureka
  instance:
    prefer-ip-address: true
feign:
  hystrix:
    enabled: true
mybatis:
  configuration:
    map-underscore-to-camel-case: true  # enable underscore-to-camelCase mapping

# hystrix configuration
hystrix:
  command:
    default:
      execution:
        timeout:
          # if enabled is set to false, request timeouts are handled by ribbon instead
          enabled: true
        isolation:
          strategy: SEMAPHORE

logging:
  level:
    com: debug  # without this, the MyBatis Log plugin does not print sql statements

The startup class:

@SpringBootApplication
@EnableEurekaClient
@MapperScan(basePackages = {"com.robod.mapper"})
public class ContentApplication {

    public static void main(String[] args) {
        SpringApplication.run(ContentApplication.class);
    }
}

Start it


Advertising query implementation

This step implements querying the list of ads that belong to a given ad category id, so we add a method called findByCategoryId and implement it at each layer.

/**
 * Get the list of all ads under the given category ID
 * Controller layer: ContentController.java
 */
@GetMapping(value = "/list/category/{id}")
public Result<List<Content>> findByCategoryId(@PathVariable long id){
   List<Content>  contents = contentService.findByCategoryId(id);
   return new Result<>(true,StatusCode.OK,"成功查询出所有的广告",contents);
}
-----------------------------------------------------------------------------
// Service layer: ContentServiceImpl.java
@Override
public List<Content> findByCategoryId(long id) {
    return contentMapper.findByCategoryId(id);
}
-------------------------------------------------------------------------------
// DAO layer: ContentMapper.java
@Select("select * from tb_content where category_id = #{id} and status = 1")
List<Content> findByCategoryId(long id);

Because the Canal microservice needs to call methods in the ad microservice, add a Feign client to the changgou-service-content-api project:

@FeignClient(name="content")    // the name of the target microservice
@RequestMapping(value = "/content")
public interface ContentFeign {

    /**
     * Query all ads under the given category ID
     * @param id
     * @return
     */
    @GetMapping(value = "/list/category/{id}")
    Result<List<Content>> findByCategoryId(@PathVariable long id);
}

Advertisement synchronization

Since we are synchronizing data into Redis, Redis must be configured: modify the application.yml of the Canal microservice and add the redis settings.
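The screenshot is missing; the addition is the standard Spring Boot Redis settings, pointing at the Redis instance used earlier in this article:

```yaml
spring:
  redis:
    host: 192.168.31.200   # the Redis used by the Lua script earlier
    port: 6379
```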


Next, enable feign in the startup class: modify CanalApplication and add the @EnableFeignClients annotation.

// first add the changgou-service-content-api dependency to changgou-service-canal
@EnableFeignClients(basePackages = {"com.robod.content.feign"})

Finally, add a listener class CanalDataEventListener in the com.robod.canal package to monitor data changes and write the changed data to Redis.

/**
 * @author Robod
 * @date 2020/7/14 10:47
 * Listens for MySQL data changes
 */
@CanalEventListener
public class CanalDataEventListener {
    private final ContentFeign contentFeign;
    private final StringRedisTemplate stringRedisTemplate;

    public CanalDataEventListener(ContentFeign contentFeign, StringRedisTemplate stringRedisTemplate) {
        this.contentFeign = contentFeign;
        this.stringRedisTemplate = stringRedisTemplate;
    }

    /**
     * Listen for data changes and write the data to Redis
     * @param eventType
     * @param rowData
     */
    @ListenPoint(
            destination = "example",
            schema = "changgou_content",
            table = {"tb_content","tb_content_category"},
            eventType = {
                    CanalEntry.EventType.INSERT,
                    CanalEntry.EventType.UPDATE,
                    CanalEntry.EventType.DELETE}
    )
    public void onEventListener(CanalEntry.EventType eventType, CanalEntry.RowData rowData) {
        String categoryId = getColumnValue(eventType,rowData);
        List<Content> contents = contentFeign.findByCategoryId(Long.parseLong(categoryId)).getData();
        stringRedisTemplate.boundValueOps("content_"+categoryId).set(JSON.toJSONString(contents));
    }

    private String getColumnValue(CanalEntry.EventType eventType, CanalEntry.RowData rowData) {
        if (eventType == CanalEntry.EventType.UPDATE || eventType == CanalEntry.EventType.INSERT) {
            for (CanalEntry.Column column : rowData.getAfterColumnsList()) {
                if ("category_id".equalsIgnoreCase(column.getName())) {
                    return column.getValue();
                }
            }
        }
        if (eventType == CanalEntry.EventType.DELETE) {
            for (CanalEntry.Column column : rowData.getBeforeColumnsList()) {
                if ("category_id".equalsIgnoreCase(column.getName())) {
                    return column.getValue();
                }
            }
        }
        return "";
    }

}

Let's test it


OK! The data in the database and Redis are synchronized.

Summary

This article introduced Lua, OpenResty, and Canal, and implemented caching and synchronization of advertisements. None of the operations are difficult, but there are plenty of pitfalls. I thought it would be done quickly, yet it took several days, so be careful when you try this yourself.


Origin blog.csdn.net/qq_17010193/article/details/114391649