Huawei Cloud Yaoyun Server L Instance Evaluation | Pulling the canal image and configuring it, connecting canal to a MySQL database, and a first application of canal in a Spring project


Preface

Huawei Cloud recently released the Yaoyun Server L instance, and I set one up to experiment with. This post covers deploying canal's Docker image on a Huawei Cloud server and a first application of it in a Spring project.


Overview


1. Canal: incremental data subscription and consumption based on MySQL binlog parsing;
2. Canal usage: MySQL configuration and Docker-based canal installation;
3. Opening a cloud server port, and a way to test whether a server port is open;
4. A first application of canal in Spring.

1. Understand the Canal pipeline

1. What is canal?

https://github.com/alibaba/canal

https://github.com/alibaba/canal/wiki/ClientExample

Canal is Alibaba's open-source incremental subscription and consumption component based on the MySQL binlog. Through it you can subscribe to a database's binlog and consume the changes, for example for data mirroring, heterogeneous data replication, index building, and cache refreshing. Compared with a message queue, this mechanism preserves the ordering and consistency of the data.


canal [kə'næl], meaning waterway/pipeline/channel, mainly provides incremental data subscription and consumption based on MySQL binlog parsing.

In Alibaba's early days, the dual deployment of data centers in Hangzhou and the United States created a need for synchronization across data centers, initially implemented with business-side triggers to capture incremental changes. Starting in 2010, teams gradually switched to parsing database logs to obtain incremental changes, which gave rise to a large number of incremental subscription and consumption services.

Businesses based on log incremental subscription and consumption include

  • Database mirroring
  • Database real-time backup
  • Index construction and real-time maintenance (split heterogeneous index, inverted index, etc.)
  • Business cache refresh
  • Incremental data processing with business logic

Canal currently supports source MySQL versions 5.1.x, 5.5.x, 5.6.x, 5.7.x, and 8.0.x.

2. How canal works, in brief

How MySQL master-slave replication works:

  • The MySQL master writes data changes to its binary log (the records are called binary log events and can be viewed with show binlog events)
  • The MySQL slave copies the master's binary log events to its relay log
  • The MySQL slave replays the events in the relay log, applying the changes to its own data
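As a quick sanity check, these binary log events can be inspected directly on the master. The binlog file name below is illustrative; use the names your own server reports:

```sql
-- List the binlog files the master currently has
SHOW BINARY LOGS;

-- View the events recorded in one of them (file name is an example)
SHOW BINLOG EVENTS IN 'binlog.000002' LIMIT 10;
```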

How canal works: masquerading as a slave

  • canal implements the MySQL slave interaction protocol: it masquerades as a MySQL slave and sends the dump command to the MySQL master
  • The MySQL master receives the dump request and starts pushing its binary log to the slave (i.e. canal)
  • canal parses the binary log (delivered as a raw byte stream) into structured objects

Application prospects of canal

  • canal pulls the change data from MySQL and syncs it to Redis, so there is no need for the delayed double-delete pattern to keep MySQL and Redis consistent, which also avoids the problems that pattern introduces.

2. Create a canal user in the MySQL container

docker exec -it mysql_3306 bash
mysql -uroot -p    
show VARIABLES like 'log_%';
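What matters in that variable list is that binary logging is enabled; a more targeted check (canal requires the binlog to be on, and expects ROW format):

```sql
SHOW VARIABLES LIKE 'log_bin';         -- should be ON for canal to work
SHOW VARIABLES LIKE 'binlog_format';   -- canal expects ROW format
```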


create user 'canal'@'%' IDENTIFIED with mysql_native_password by 'canal';
grant SELECT, REPLICATION SLAVE, REPLICATION CLIENT on *.* to 'canal'@'%';
FLUSH PRIVILEGES;
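To confirm the account was created with the expected privileges (a quick optional check):

```sql
SHOW GRANTS FOR 'canal'@'%';
-- Should list: GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO `canal`@`%`
```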


3. Pull the canal image to create a container

1. Query and pull the canal image

docker pull canal/canal-server


root@hcss-ecs-52b8:~# docker pull canal/canal-server
Using default tag: latest
latest: Pulling from canal/canal-server
1c8f9aa56c90: Pull complete 
c5e21c824d1c: Pull complete 
4ba7edb60123: Pull complete 
80d8e8fac1be: Pull complete 
705a43657e98: Pull complete 
28e38bfb6fe7: Pull complete 
7d51a00deff6: Pull complete 
4f4fb700ef54: Pull complete 
Digest: sha256:0d1018759efd92ad331c7cc379afa766c8d943ef48ef8d208ade646f54bf1565
Status: Downloaded newer image for canal/canal-server:latest
docker.io/canal/canal-server:latest

2. Run the canal container and obtain the configuration file

This first run is only to obtain the default configuration files, in preparation for the mounted startup later.

docker run --name canal -itd canal/canal-server

docker run options used here:

  • -i: keep STDIN open (interactive mode)
  • -t: allocate a pseudo-TTY for the container
  • --name: the container name
  • --privileged: give the container extended privileges (default is false)
  • -p: port mapping, host port:container port (canal-server listens on 11111 by default)
  • -v: mount a host file/folder at a path inside the container
  • -e: set environment variables inside the container
  • -d: run the container in the background and print the container ID


docker exec -it canal bash


# Configuration files inside the container:
#   /home/admin/canal-server/conf/canal.properties
#   /home/admin/canal-server/conf/example/instance.properties
docker cp canal:/home/admin/canal-server/conf/canal.properties ./
docker cp canal:/home/admin/canal-server/conf/example/instance.properties ./

Copy the configuration file from canal's docker container


Result of copying configuration file


3. Edit canal configuration file

vim instance.properties 

Edit the parameters the configuration file needs: MySQL's container-internal IP address, the binlog file name, and the log position.

docker inspect mysql_3306 | grep IPA    # the MySQL container's internal IP
show master status;                     # run inside MySQL: current binlog file name and position


Be careful not to forget to append the port number to the address (e.g. 172.17.0.2:3306).


Modify the username and password for connecting to the MySQL database

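Putting the pieces together, the relevant lines of instance.properties look roughly like this; the IP, binlog file name, and position below are illustrative, so substitute the values from your own `docker inspect` and `show master status` output:

```properties
# MySQL address (container-internal IP) -- do not forget the port
canal.instance.master.address=172.17.0.2:3306
# binlog file name and position from `show master status`
canal.instance.master.journal.name=binlog.000002
canal.instance.master.position=157
# credentials of the canal user created earlier
canal.instance.dbUsername=canal
canal.instance.dbPassword=canal
```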

4. Delete the previous canal container and create one that starts with mounted configuration

docker stop canal 
docker rm canal


docker run -itd  --name canal \
-p 11111:11111 --privileged=true \
-v /usr/local/software/canal/conf/instance.properties:/home/admin/canal-server/conf/example/instance.properties \
-v /usr/local/software/canal/conf/canal.properties:/home/admin/canal-server/conf/canal.properties \
canal/canal-server


5. View logs

docker logs canal

Check the logs; canal started successfully.


6. Open the canal port

firewall-cmd --zone=public --add-port=11111/tcp --permanent
firewall-cmd --reload
firewall-cmd --zone=public --list-ports


PS: You also need to open the port in the security group in the Huawei Cloud console.


Tip: how to check whether a server port is open

nc is short for netcat, a simple Unix tool that reads and writes data across network connections using TCP or UDP.

It is designed to be a reliable back-end tool that can be used directly or simply called by other programs or scripts.

At the same time, it is also a feature-rich network debugging and exploration tool, as it can create almost any type of connection you need, and it also has several interesting features built in.

Netcat has three functional modes, namely connection mode, listening mode and tunnel mode.

General syntax of nc (netcat) command:

$ nc [-options] [HostName or IP] [PortNumber]

Taking nc -zvw3 192.168.1.8 22 as an example:

  • nc: the command itself;
  • -z: zero-I/O mode (used for scanning);
  • -v: verbose output;
  • -w3: set the timeout to 3 seconds;
  • 192.168.1.8: the IP address of the target system;
  • 22: the port to check.

Use Cases

[root@localhost conf]# nc -zvw3 124.80.139.65 3927
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connection timed out.
[root@localhost conf]# nc -zvw3 124.80.139.65 3927
Ncat: Version 7.50 ( https://nmap.org/ncat )
Ncat: Connected to 124.80.139.65:3927.
Ncat: 0 bytes sent, 0 bytes received in 0.02 seconds.

4. Spring Boot integrates canal

1. Add the dependency

        <!-- canal client -->
        <dependency>
            <groupId>com.alibaba.otter</groupId>
            <artifactId>canal.client</artifactId>
            <version>1.1.0</version>
        </dependency>

2. Adapting the sample code from the official site

package com.woniu.fresh.config.redis;

import java.net.InetSocketAddress;
import java.util.List;

import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.CanalEntry.Column;
import com.alibaba.otter.canal.protocol.CanalEntry.Entry;
import com.alibaba.otter.canal.protocol.CanalEntry.EntryType;
import com.alibaba.otter.canal.protocol.CanalEntry.EventType;
import com.alibaba.otter.canal.protocol.CanalEntry.RowChange;
import com.alibaba.otter.canal.protocol.CanalEntry.RowData;
import com.alibaba.otter.canal.protocol.Message;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

/**
 * Listen for MySQL data changes through the canal pipeline and
 * automatically refresh the redis cache.
 */
@Slf4j
@Component
public class AutoUpdateRedis {

    @Value("${canal.host}")
    private String host;

    @Value("${canal.port}")
    private Integer port;

    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    public void run() {
        // Create the connection
        final InetSocketAddress address = new InetSocketAddress(host, port);
        CanalConnector connector = CanalConnectors.newSingleConnector(address, "example", "", "");
        int batchSize = 1000;
        int emptyCount = 0;
        try {
            connector.connect();
            connector.subscribe(".*\\..*");
            connector.rollback();
            int totalEmptyCount = 120;
            while (emptyCount < totalEmptyCount) {
                Message message = connector.getWithoutAck(batchSize); // fetch up to batchSize entries
                long batchId = message.getId();
                int size = message.getEntries().size();
                if (batchId == -1 || size == 0) {
                    emptyCount++;
                    System.out.println("empty count : " + emptyCount);
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                    }
                } else {
                    emptyCount = 0;
                    // System.out.printf("message[batchId=%s,size=%s] \n", batchId, size);
                    printEntry(message.getEntries());
                }

                connector.ack(batchId); // acknowledge the batch
                // connector.rollback(batchId); // on failure, roll the batch back
            }

            System.out.println("empty too many times, exit");
        } finally {
            connector.disconnect();
        }
    }

    private void printEntry(List<Entry> entries) {
        for (Entry entry : entries) {
            if (entry.getEntryType() == EntryType.TRANSACTIONBEGIN || entry.getEntryType() == EntryType.TRANSACTIONEND) {
                continue;
            }

            RowChange rowChange;
            try {
                rowChange = RowChange.parseFrom(entry.getStoreValue());
            } catch (Exception e) {
                throw new RuntimeException("ERROR ## parser of eromanga-event has an error , data:" + entry.toString(), e);
            }

            EventType eventType = rowChange.getEventType();
            System.out.println(String.format("================> binlog[%s:%s] , name[%s,%s] , eventType : %s",
                    entry.getHeader().getLogfileName(), entry.getHeader().getLogfileOffset(),
                    entry.getHeader().getSchemaName(), entry.getHeader().getTableName(),
                    eventType));

            for (RowData rowData : rowChange.getRowDatasList()) {
                if (eventType == EventType.DELETE) {
                    printColumn(rowData.getBeforeColumnsList()); // deleted row
                } else if (eventType == EventType.INSERT) {
                    printColumn(rowData.getAfterColumnsList());  // inserted row
                } else {
                    // updated row
                    log.debug("------- before update");
                    updateBefore(rowData.getBeforeColumnsList());
                    log.debug("------- after update");
                    updateAfter(rowData.getAfterColumnsList());
                }
            }
        }
    }

    private static void printColumn(List<Column> columns) {
        for (Column column : columns) {
            System.out.println(column.getName() + " : " + column.getValue() + "    update=" + column.getUpdated());
        }
    }

    /**
     * Called with the row state before an update.
     */
    private void updateBefore(List<Column> columns) {
        for (Column column : columns) {
            System.out.println(column.getName() + " : " + column.getValue() + "    update=" + column.getUpdated());
//            // If the data changed, refresh the cached value
//            if ("username".equals(column.getName())) {
//                // remove the pre-update value from the cache
//                stringRedisTemplate.opsForSet().remove("usernames", column.getValue());
//                break;
//            }
        }
    }

    /**
     * Called with the row state after an update.
     */
    private void updateAfter(List<Column> columns) {
        for (Column column : columns) {
            System.out.println(column.getName() + " : " + column.getValue() + "    update=" + column.getUpdated());
//            // If the data changed, refresh the cached value
//            if ("username".equals(column.getName()) && column.getUpdated()) {
//                // put the post-update value into the cache
//                stringRedisTemplate.opsForSet().add("usernames", column.getValue());
//                break;
//            }
        }
    }
}


3. Start canal from the main application class

package com.woniu.fresh;

import com.woniu.fresh.config.redis.AutoUpdateRedis;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
@Slf4j
public class FreshApp implements CommandLineRunner {

    public static void main(String[] args) {
        SpringApplication.run(FreshApp.class);
    }

    @Autowired
    private AutoUpdateRedis autoUpdateRedis;

    @Override
    public void run(String... args) throws Exception {
        log.debug(">>>>> starting automatic cache refresh");
        autoUpdateRedis.run();
    }
}
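The @Value placeholders in AutoUpdateRedis assume matching entries in the Spring configuration. A minimal sketch, with an example host and the port mapped when the canal container was started (substitute your server's public IP):

```yaml
# application.yml (sketch): canal connection settings read by AutoUpdateRedis
canal:
  host: 124.80.139.65   # example: your canal server's address
  port: 11111           # port mapped when starting the canal container
```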

4. Start canal and modify the database


5. Monitor changes in the background



Summary

1. Canal: incremental data subscription and consumption based on MySQL binlog parsing;
2. Canal usage: MySQL configuration and Docker-based canal installation;
3. Opening a cloud server port, and a way to test whether a server port is open;
4. A first application of canal in Spring.

Origin blog.csdn.net/Pireley/article/details/132900906