Data synchronization between the database and the Redis cache based on Canal

The previous article, "How to implement data caching with Lua scripts based on OpenResty and Redis?", introduced a simple way to implement data caching. But caching brings another important problem: the cached data may become inconsistent with the database. Take the advertisements from the previous article as an example. If an advertisement's content is replaced or the advertisement is taken down, and the cache is not updated immediately but only refreshed after it expires, the stale data may cause serious consequences. The cache and the database therefore need to be brought back into consistency within a very short time. This article introduces another solution, one with a limitation: Canal. The limitation is that Canal itself only targets MySQL, so it can only synchronize MySQL with the cache; it cannot help with other databases.

Canal was born in the early days of Alibaba's dual data-center deployment across Hangzhou and the United States, where there was a business need for cross-data-center synchronization; it obtains incremental changes for synchronization by parsing the database logs. Canal therefore works by following the MySQL master-slave replication protocol: it disguises itself as a MySQL slave, continuously receives binary logs from the MySQL master, parses them into binlog objects, and then delivers the data to different destinations, such as MySQL, Kafka, Elasticsearch, and so on.

Some preparation is needed before using Canal. Because Canal relies on MySQL's master-slave interaction protocol and parses the binary log files, you must enable log-bin in the MySQL configuration file my.cnf and set a globally unique server_id:

# Enable MySQL's binlog and set the base name of the log files
log-bin=mysql-bin
# Use ROW mode so each changed row is logged
binlog-format=ROW
# server_id must be unique and must not clash with Canal's slaveId
server_id=123456

In addition, to keep the data secure, you should create a user dedicated to Canal and grant it only the privileges it needs.
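The statements below follow the usual Canal setup: a dedicated account with only the replication privileges Canal needs to read the binlog (the user name and password canal/canal are just examples):

```sql
-- create a user used only by Canal
CREATE USER 'canal'@'%' IDENTIFIED BY 'canal';
-- Canal needs to read the binlog and table definitions, nothing more
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;
```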

Canal installation

To avoid the cumbersome configuration of a manual installation, this article installs Canal with Docker.

# Pull the image
docker pull canal/canal-server:latest
# Run the container
docker run -p 11111:11111 --name canal -id canal/canal-server

After the container is running, you need to enter it to modify some of Canal's configuration files.

docker exec -it canal /bin/bash

vim /home/admin/canal-server/conf/canal.properties

canal.properties holds Canal's global configuration. canal.id must be modified here, and it must not be the same as the server_id in MySQL's my.cnf.

Besides canal.properties, there is an instance.properties under the example folder. This file configures the database instance to be synchronized: modify the canal.instance.master.address parameter to the address and port of the MySQL server.

It also holds the access user and password (canal.instance.dbUsername and canal.instance.dbPassword).

If only a specific database or table needs to be synchronized, that can be configured here as well.
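A minimal instance.properties for this setup might look like the fragment below (the address and credentials are examples; the user is the dedicated Canal account created earlier):

```properties
# MySQL server that Canal should follow
canal.instance.master.address=192.168.132.132:3306
# credentials of the dedicated Canal user
canal.instance.dbUsername=canal
canal.instance.dbPassword=canal
# synchronize all tables in all databases
canal.instance.filter.regex=.*\\..*
```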

Here are the rules for configuring the filter (canal.instance.filter.regex, a comma-separated list of regular expressions matched against the schema.table name):

.* : all databases

<schema>\\..* : all tables in the given database, e.g. changgou_content\\..*

.*\\..* : all changes in all tables in all databases
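The filter is an ordinary regular expression matched against the schema.table name (in a .properties file a single regex backslash is written as \\). A quick, self-contained check in plain Java, with made-up table names for illustration:

```java
import java.util.regex.Pattern;

public class FilterRegexDemo {

    // Returns true when the "schema.table" name matches the filter regex,
    // the same kind of match Canal applies to canal.instance.filter.regex.
    static boolean matches(String filterRegex, String schemaDotTable) {
        return Pattern.matches(filterRegex, schemaDotTable);
    }

    public static void main(String[] args) {
        // all tables in all databases
        System.out.println(matches(".*\\..*", "changgou_content.tb_content"));          // true
        // all tables of one database
        System.out.println(matches("changgou_content\\..*", "changgou_content.tb_content")); // true
        // a table from another database does not match
        System.out.println(matches("changgou_content\\..*", "other_db.tb_content"));    // false
    }
}
```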

This completes the basic configuration of Canal. After modifying the configuration, restart the Docker container (docker restart canal) so that it takes effect.

Project build

To integrate Canal, this article uses the open-source project spring-boot-starter-canal, which integrates Spring Boot with Canal and is more elegant to use than the native client. It is not in the Maven Central repository, however, so you need to download the project and run mvn install to put it into your local repository.
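The steps are roughly as follows (the GitHub address is an assumption; substitute wherever you obtained the project):

```shell
# clone the starter project (URL assumed; substitute your own source)
git clone https://github.com/chenqian56131/spring-boot-starter-canal.git
cd spring-boot-starter-canal
# install the artifact into the local Maven repository (~/.m2)
mvn clean install -DskipTests
```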

① Introduce the dependency

<dependency>
    <groupId>com.xpand</groupId>
    <artifactId>starter-canal</artifactId>
    <version>0.0.1-SNAPSHOT</version>
</dependency>

② Add properties (application.yml)

canal:
  client:
    instances:
      example:
        host: 192.168.132.132
        port: 11111

③ Create a startup class

@SpringBootApplication(exclude = {DataSourceAutoConfiguration.class})
@EnableCanalClient
public class CanalApplication {
    public static void main(String[] args) {
        SpringApplication.run(CanalApplication.class);
    }
}

④ Create a listener class

/**
 * @author SunRains
 * @date 2021/3/10
 */
@CanalEventListener
public class CanalDataEventListener {

    /**
     * Insert listener: only the data after the insert is available.
     * rowData.getAfterColumnsList(): insert and update events
     * rowData.getBeforeColumnsList(): delete and update events
     *
     * @param eventType the type of the current operation (insert)
     * @param rowData   the row of data that changed
     */
    @InsertListenPoint
    public void onEventInsert(CanalEntry.EventType eventType, CanalEntry.RowData rowData) {
        for (CanalEntry.Column column : rowData.getAfterColumnsList()) {
            System.out.println("column: " + column.getName() + " ------- changed value: " + column.getValue());
        }
    }

    /**
     * Update listener
     */
    @UpdateListenPoint
    public void onEventUpdate(CanalEntry.EventType eventType, CanalEntry.RowData rowData) {
        for (CanalEntry.Column column : rowData.getAfterColumnsList()) {
            System.out.println("column: " + column.getName() + " ------- changed value: " + column.getValue());
        }
    }

    /**
     * Delete listener: a delete only carries the data as it was before the
     * deletion, so read getBeforeColumnsList() rather than getAfterColumnsList().
     */
    @DeleteListenPoint
    public void onEventDelete(CanalEntry.EventType eventType, CanalEntry.RowData rowData) {
        for (CanalEntry.Column column : rowData.getBeforeColumnsList()) {
            System.out.println("column: " + column.getName() + " ------- deleted value: " + column.getValue());
        }
    }

    /**
     * Custom update listener, restricted to specific tables
     */
    @ListenPoint(destination = "example", schema = "changgou_content", table = {"tb_content_category", "tb_content"}, eventType = CanalEntry.EventType.UPDATE)
    public void onEventCustomUpdate(CanalEntry.EventType eventType, CanalEntry.RowData rowData) {
        System.err.println("Custom UpdateListenPoint");
        rowData.getAfterColumnsList().forEach((c) -> System.out.println("By--Annotation: " + c.getName() + " ::   " + c.getValue()));
    }
}

The result of the final run is shown in the figure:

With the example above, the cache can easily be updated as soon as the data in the database changes, keeping the two consistent.
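To actually update Redis, the listener bodies above would write the changed row to the cache instead of printing it. Below is a minimal, self-contained sketch of that step with the Canal and Redis calls left out; the key scheme and the toJson helper are hypothetical, the name/value pairs would come from rowData.getAfterColumnsList(), and the write would go through a Redis client such as StringRedisTemplate:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CacheEntryBuilder {

    // Hypothetical key scheme: one cache entry per table row, keyed by primary key.
    static String cacheKey(String table, String id) {
        return "content:" + table + ":" + id;
    }

    // Serialize the changed columns into a tiny JSON object; in the listener the
    // pairs come from rowData.getAfterColumnsList() (one name/value per column).
    static String toJson(Map<String, String> columns) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, String> e : columns.entrySet()) {
            if (!first) sb.append(",");
            sb.append("\"").append(e.getKey()).append("\":\"").append(e.getValue()).append("\"");
            first = false;
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        Map<String, String> row = new LinkedHashMap<>();
        row.put("id", "42");
        row.put("title", "new advertisement");
        // In onEventUpdate the result would be written with something like
        // redisTemplate.opsForValue().set(key, json).
        String key = cacheKey("tb_content", row.get("id"));
        System.out.println(key + " -> " + toJson(row));
    }
}
```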

Origin: blog.csdn.net/qq_35363507/article/details/115522575