09 Redis and MySQL double-write consistency: a practical project case

canal

what it is

  • canal [kə'næl] (the word translates as waterway/pipeline/ditch/canal) is mainly used to subscribe to, consume, and parse MySQL incremental log data (binlog). It was developed and open-sourced by Alibaba and is written in Java;
  • Historical background: in Alibaba's early days, the dual-datacenter deployment across Hangzhou and the United States created a business need for cross-datacenter data synchronization. The initial implementation relied mainly on database triggers to capture incremental changes. Starting in 2010, Alibaba gradually switched to parsing database logs to obtain incremental changes for synchronization, and the canal project grew out of that work;
  • In short, canal is a component for incremental subscription and consumption based on the MySQL binlog

what it can do

  • database mirroring
  • Database real-time backup
  • Index construction and real-time maintenance (split heterogeneous index, inverted index, etc.)
  • Business cache refresh
  • Incremental data processing with business logic

where to get it

working principle

Working principle of traditional MySQL master-slave replication

MySQL master-slave replication goes through the following steps:

  1. When data on the master server changes, the change is written to the binary log (binlog);
  2. The slave server polls the master's binary log at a set interval to detect changes. If the master's binary log has changed, the slave starts an I/O Thread to request the binary log events from the master;
  3. The master starts a dump Thread for each I/O Thread and uses it to send binary log events to that slave;
  4. The slave saves the received binary log events to its own local relay log file;
  5. The slave starts a SQL Thread that reads the events from the relay log and replays them locally, bringing its data into line with the master's;
  6. Finally, the I/O Thread and SQL Thread go to sleep and wait to be woken up next time.
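To see these moving parts on a live server, the standard inspection statements below can be used; the exact values returned (file names, offsets) depend on your installation, so treat the comments as illustrative rather than expected output.

```sql
-- Run on the master to verify the binlog side of the replication pipeline
SHOW VARIABLES LIKE 'log_bin';        -- ON when binlog writing is enabled
SHOW VARIABLES LIKE 'binlog_format';  -- canal requires ROW format
SHOW MASTER STATUS;                   -- current binlog file name and write offset
```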

How canal works


  • canal emulates the MySQL slave's interaction protocol: it pretends to be a MySQL slave and sends the dump protocol to the MySQL master
  • The MySQL master receives the dump request and starts pushing binary logs to the slave (i.e. canal); canal then parses the binary log objects (originally a raw byte stream)
  • Note: a distributed system like this can only guarantee eventual consistency; strong consistency is hard to achieve

mysql-canal-redis double-write consistency coding

mysql

CREATE TABLE `t_user` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `userName` varchar(100) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=10 DEFAULT CHARSET=utf8mb4;
  • Enable MySQL's binlog writing (ROW format)
  • Authorize a canal account to connect to MySQL
    • MySQL's default users live in the user table of the mysql database
    • There is no canal account by default, so create and authorize one here
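A minimal sketch of the two setup steps above. The my.cnf lines (shown as comments) enable ROW-format binlog, which canal requires; the SQL then creates and authorizes a canal account. The user name, password, and server_id here are illustrative placeholders — substitute your own, and note that `DROP USER IF EXISTS` needs MySQL 5.7+.

```sql
-- Assumed my.cnf settings (restart MySQL after editing):
--   log-bin=mysql-bin
--   binlog-format=ROW
--   server_id=1

-- Create the canal account and grant the replication privileges canal needs
DROP USER IF EXISTS 'canal'@'%';
CREATE USER 'canal'@'%' IDENTIFIED BY 'canal';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;
```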

canal server

  • download
  • Unzip it and place the whole directory under the /mycanal path
  • configuration modification
    • /mycanal/canal.deployer-1.1.5/conf/example path

    • instance.properties

    • Replace the account and password with the canal account you created in MySQL

  • start up
    • Execute under /mycanal/canal.deployer-1.1.5/bin path
    • ./startup.sh
  • View server logs
  • View instance logs
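For reference, the instance.properties edits boil down to pointing the instance at your MySQL master and supplying the canal account's credentials. The host address below is an assumption taken from the example environment used later in the client code; adjust it to your own setup.

```properties
# conf/example/instance.properties — the key entries to change
# MySQL master address (host:port) — assumed example value
canal.instance.master.address=192.168.111.147:3306
# credentials of the canal account created in MySQL
canal.instance.dbUsername=canal
canal.instance.dbPassword=canal
canal.instance.connectionCharset=UTF-8
# subscription filter (default subscribes to every schema and table)
canal.instance.filter.regex=.*\\..*
```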

canal client (business program written in Java)

  • build module

  • change pom

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
        <parent>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-parent</artifactId>
            <version>2.3.5.RELEASE</version>
            <relativePath/> <!-- lookup parent from repository -->
        </parent>
        <groupId>com.zzyy.study</groupId>
        <artifactId>canal_demo</artifactId>
        <version>0.0.1-SNAPSHOT</version>
        <name>canal_demo</name>
        <description>Demo project for Spring Boot</description>
    
    
        <properties>
            <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
            <maven.compiler.source>1.8</maven.compiler.source>
            <maven.compiler.target>1.8</maven.compiler.target>
            <junit.version>4.12</junit.version>
            <log4j.version>1.2.17</log4j.version>
            <lombok.version>1.16.18</lombok.version>
            <mysql.version>5.1.47</mysql.version>
            <druid.version>1.1.16</druid.version>
            <mybatis.spring.boot.version>1.3.0</mybatis.spring.boot.version>
        </properties>
    
        <dependencies>
            <dependency>
                <groupId>com.alibaba.otter</groupId>
                <artifactId>canal.client</artifactId>
                <version>1.1.0</version>
            </dependency>
            <!--guava-->
            <dependency>
                <groupId>com.google.guava</groupId>
                <artifactId>guava</artifactId>
                <version>23.0</version>
            </dependency>
            <!--web+actuator-->
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-web</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-actuator</artifactId>
            </dependency>
            <!-- Spring Boot + Redis integration -->
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-data-redis</artifactId>
            </dependency>
            <dependency>
                <groupId>org.apache.commons</groupId>
                <artifactId>commons-pool2</artifactId>
            </dependency>
            <!-- jedis -->
            <dependency>
                <groupId>redis.clients</groupId>
                <artifactId>jedis</artifactId>
                <version>3.1.0</version>
            </dependency>
            <!-- Spring Boot AOP -->
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-aop</artifactId>
            </dependency>
            <!-- redisson -->
            <dependency>
                <groupId>org.redisson</groupId>
                <artifactId>redisson</artifactId>
                <version>3.13.4</version>
            </dependency>
            <!-- MySQL JDBC driver -->
            <dependency>
                <groupId>mysql</groupId>
                <artifactId>mysql-connector-java</artifactId>
                <version>5.1.47</version>
            </dependency>
            <!-- Druid connection pool -->
            <dependency>
                <groupId>com.alibaba</groupId>
                <artifactId>druid-spring-boot-starter</artifactId>
                <version>1.1.10</version>
            </dependency>
            <dependency>
                <groupId>com.alibaba</groupId>
                <artifactId>druid</artifactId>
                <version>${druid.version}</version>
            </dependency>
            <!-- MyBatis + Spring Boot integration -->
            <dependency>
                <groupId>org.mybatis.spring.boot</groupId>
                <artifactId>mybatis-spring-boot-starter</artifactId>
                <version>${mybatis.spring.boot.version}</version>
            </dependency>
            <!-- Spring Boot AMQP support -->
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-amqp</artifactId>
            </dependency>
            <!-- common base dependencies -->
            <dependency>
                <groupId>junit</groupId>
                <artifactId>junit</artifactId>
                <version>${junit.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-devtools</artifactId>
                <scope>runtime</scope>
                <optional>true</optional>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-test</artifactId>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
                <version>${log4j.version}</version>
            </dependency>
            <dependency>
                <groupId>org.projectlombok</groupId>
                <artifactId>lombok</artifactId>
                <version>${lombok.version}</version>
                <optional>true</optional>
            </dependency>
            <dependency>
                <groupId>cn.hutool</groupId>
                <artifactId>hutool-all</artifactId>
                <version>RELEASE</version>
                <scope>compile</scope>
            </dependency>
            <dependency>
                <groupId>com.alibaba</groupId>
                <artifactId>fastjson</artifactId>
                <version>1.2.73</version>
            </dependency>
        </dependencies>
    
        <build>
            <plugins>
                <plugin>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-maven-plugin</artifactId>
                </plugin>
            </plugins>
        </build>
    
    </project>
    
  • write YML

    server:
      port: 5555
    
  • main boot
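The original post does not show the main boot class; a minimal Spring Boot entry point for this module would look like the following sketch. The class name is an assumption — only the `com.zzyy.study` package convention comes from the module's other classes.

```java
package com.zzyy.study;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Standard Spring Boot launcher for the canal_demo module (name is illustrative)
@SpringBootApplication
public class CanalDemoApplication
{
    public static void main(String[] args)
    {
        SpringApplication.run(CanalDemoApplication.class, args);
    }
}
```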

  • business class

    • RedisUtils
      package com.zzyy.study.util;
      
      import redis.clients.jedis.Jedis;
      import redis.clients.jedis.JedisPool;
      import redis.clients.jedis.JedisPoolConfig;
      
      /**
       * Simple Jedis connection-pool holder.
       * @author zzyy
       * @create 2020-10-11 14:33
       */
      public class RedisUtils
      {
          public static JedisPool jedisPool;
      
          static {
              JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
              jedisPoolConfig.setMaxTotal(20);
              jedisPoolConfig.setMaxIdle(10);
              jedisPool = new JedisPool(jedisPoolConfig, "192.168.111.147", 6379);
          }
      
          public static Jedis getJedis() throws Exception {
              if (null != jedisPool) {
                  return jedisPool.getResource();
              }
              throw new Exception("Jedispool is not ok");
          }
      
          /*public static void main(String[] args) throws Exception
          {
              try (Jedis jedis = RedisUtils.getJedis())
              {
                  System.out.println(jedis);
                  jedis.set("k1", "xxx2");
                  String result = jedis.get("k1");
                  System.out.println("-----result: " + result);
                  System.out.println(RedisUtils.jedisPool.getNumActive()); // 1
              } catch (Exception e) {
                  e.printStackTrace();
              }
          }*/
      }
      
    • RedisCanalClientExample
       
      package com.zzyy.study.t1;
      
      import com.alibaba.fastjson.JSONObject;
      import com.alibaba.otter.canal.client.CanalConnector;
      import com.alibaba.otter.canal.client.CanalConnectors;
      import com.alibaba.otter.canal.protocol.CanalEntry.*;
      import com.alibaba.otter.canal.protocol.Message;
      import com.zzyy.study.util.RedisUtils;
      import redis.clients.jedis.Jedis;
      
      import java.net.InetSocketAddress;
      import java.util.List;
      import java.util.concurrent.TimeUnit;
      
      /**
       * Canal client: replays t_user binlog events into Redis.
       * @author zzyy
       * @create 2020-11-11 17:13
       */
      public class RedisCanalClientExample
      {
          public static final Integer _60SECONDS = 60;
      
          public static void main(String[] args) {
              // Connect to the canal server
              CanalConnector connector = CanalConnectors.newSingleConnector(
                      new InetSocketAddress("192.168.111.147", 11111), "example", "", "");
              int batchSize = 1000;
              int emptyCount = 0;
              try {
                  connector.connect();
                  //connector.subscribe(".*\\..*");
                  connector.subscribe("db2020.t_user");
                  connector.rollback();
                  int totalEmptyCount = 10 * _60SECONDS;
                  while (emptyCount < totalEmptyCount) {
                      Message message = connector.getWithoutAck(batchSize); // fetch up to batchSize entries
                      long batchId = message.getId();
                      int size = message.getEntries().size();
                      if (batchId == -1 || size == 0) {
                          emptyCount++;
                          try {
                              TimeUnit.SECONDS.sleep(1);
                          } catch (InterruptedException e) {
                              e.printStackTrace();
                          }
                      } else {
                          emptyCount = 0;
                          printEntry(message.getEntries());
                      }
                      connector.ack(batchId); // acknowledge the batch
                      // connector.rollback(batchId); // on failure, roll the batch back
                  }
                  System.out.println("empty too many times, exit");
              } finally {
                  connector.disconnect();
              }
          }
      
          private static void printEntry(List<Entry> entrys) {
              for (Entry entry : entrys) {
                  if (entry.getEntryType() == EntryType.TRANSACTIONBEGIN || entry.getEntryType() == EntryType.TRANSACTIONEND) {
                      continue;
                  }
      
                  RowChange rowChange = null;
                  try {
                      rowChange = RowChange.parseFrom(entry.getStoreValue());
                  } catch (Exception e) {
                      throw new RuntimeException("ERROR ## parser of eromanga-event has an error,data:" + entry.toString(), e);
                  }
      
                  EventType eventType = rowChange.getEventType();
                  System.out.println(String.format("================> binlog[%s:%s] , name[%s,%s] , eventType : %s",
                          entry.getHeader().getLogfileName(), entry.getHeader().getLogfileOffset(),
                          entry.getHeader().getSchemaName(), entry.getHeader().getTableName(), eventType));
      
                  for (RowData rowData : rowChange.getRowDatasList()) {
                      if (eventType == EventType.INSERT) {
                          redisInsert(rowData.getAfterColumnsList());
                      } else if (eventType == EventType.DELETE) {
                          redisDelete(rowData.getBeforeColumnsList());
                      } else { // EventType.UPDATE
                          redisUpdate(rowData.getAfterColumnsList());
                      }
                  }
              }
          }
      
          private static void redisInsert(List<Column> columns)
          {
              JSONObject jsonObject = new JSONObject();
              for (Column column : columns)
              {
                  System.out.println(column.getName() + " : " + column.getValue() + "    update=" + column.getUpdated());
                  jsonObject.put(column.getName(), column.getValue());
              }
              if (columns.size() > 0)
              {
                  try (Jedis jedis = RedisUtils.getJedis())
                  {
                      // key by primary key (first column), value is the whole row as JSON
                      jedis.set(columns.get(0).getValue(), jsonObject.toJSONString());
                  } catch (Exception e) {
                      e.printStackTrace();
                  }
              }
          }
      
          private static void redisDelete(List<Column> columns)
          {
              JSONObject jsonObject = new JSONObject();
              for (Column column : columns)
              {
                  jsonObject.put(column.getName(), column.getValue());
              }
              if (columns.size() > 0)
              {
                  try (Jedis jedis = RedisUtils.getJedis())
                  {
                      jedis.del(columns.get(0).getValue());
                  } catch (Exception e) {
                      e.printStackTrace();
                  }
              }
          }
      
          private static void redisUpdate(List<Column> columns)
          {
              JSONObject jsonObject = new JSONObject();
              for (Column column : columns)
              {
                  System.out.println(column.getName() + " : " + column.getValue() + "    update=" + column.getUpdated());
                  jsonObject.put(column.getName(), column.getValue());
              }
              if (columns.size() > 0)
              {
                  try (Jedis jedis = RedisUtils.getJedis())
                  {
                      jedis.set(columns.get(0).getValue(), jsonObject.toJSONString());
                      System.out.println("---------update after: " + jedis.get(columns.get(0).getValue()));
                  } catch (Exception e) {
                      e.printStackTrace();
                  }
              }
          }
      }
      

Origin blog.csdn.net/m0_56709616/article/details/131033196