Canal real-time monitoring case

0. Preliminaries

  • Canal version: Canal-1.1.5
  • Kafka version: Kafka-2.4.1
  • Zookeeper version: Zookeeper-3.5.7

Before extracting canal's tar.gz package, create a directory such as canal-xxx in advance to serve as the installation directory, because the canal archive has no top-level directory: it extracts its files directly into the current directory. A sketch of this step follows below.
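
For example, a minimal sketch (the download directory and the /opt/module/canal path are assumptions, chosen to match the paths used later in this article):

[zhangsan@node01 software]$ mkdir -p /opt/module/canal
[zhangsan@node01 software]$ tar -zxvf canal.deployer-1.1.5.tar.gz -C /opt/module/canal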

1. TCP mode test

1.1 Create the canal-module project in IDEA

Edit the pom.xml file and add the following dependencies:

<dependencies>
    <dependency>
        <groupId>com.alibaba.otter</groupId>
        <artifactId>canal.client</artifactId>
        <version>1.1.2</version>
    </dependency>

    <dependency>
        <groupId>com.alibaba.otter</groupId>
        <artifactId>canal.protocol</artifactId>
        <version>1.1.5</version>
    </dependency>

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>2.4.1</version>
    </dependency>
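
    <!-- Assumption: the CanalClient code below uses fastjson's JSONObject,
         so the fastjson dependency is needed as well (the version shown is illustrative) -->
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.83</version>
    </dependency>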
</dependencies>

Version 1.1.5 requires the additional canal.protocol dependency; with version 1.1.2 the canal.client artifact alone is sufficient.

1.2 General monitoring class - CanalClient

1.2.1 Canal's encapsulated data structure

Message: the information Canal captures from the binlog in one batch; a single Message can contain the results of multiple SQL statements. Each change is carried as an Entry, whose StoreValue deserializes into a RowChange holding the changed rows (RowData) with their before and after column values.

(Diagram: the Message → Entry → RowChange → RowData structure)

1.2.2 Create the cn.canal package in the canal-module module, and create CanalClient.java in that package

The code is as follows:

import com.alibaba.fastjson.JSONObject;
import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.CanalEntry;
import com.alibaba.otter.canal.protocol.Message;
import com.google.protobuf.ByteString;
import com.google.protobuf.InvalidProtocolBufferException;

import java.net.InetSocketAddress;
import java.util.List;


public class CanalClient {

    public static void main(String[] args) throws InvalidProtocolBufferException, InterruptedException {

        // TODO Get the connection
        CanalConnector canalConnector = CanalConnectors.newSingleConnector(
                new InetSocketAddress("node01", 11111), "example", "", "");

        while (true) {

            // TODO Connect
            canalConnector.connect();
            // TODO Subscribe to the test_canal database
            canalConnector.subscribe("test_canal.*");
            // TODO Fetch up to the specified number of entries
            Message message = canalConnector.get(100);
            // TODO Get the Entry list
            List<CanalEntry.Entry> entries = message.getEntries();

            // TODO If the list is empty, wait a moment and pull again
            if (entries.size() <= 0) {
                System.out.println("No data in this fetch, sleeping for a while----------------");
                Thread.sleep(1000);
            } else {
                // TODO Iterate over the entries and parse them one by one
                for (CanalEntry.Entry entry : entries) {
                    // 1. Get the table name
                    String tableName = entry.getHeader().getTableName();
                    // 2. Get the entry type
                    CanalEntry.EntryType entryType = entry.getEntryType();
                    // 3. Get the serialized data
                    ByteString storeValue = entry.getStoreValue();

                    // 4. Check whether the entry type is ROWDATA
                    if (CanalEntry.EntryType.ROWDATA.equals(entryType)) {
                        // 5. Deserialize the data
                        CanalEntry.RowChange rowChange = CanalEntry.RowChange.parseFrom(storeValue);
                        // 6. Get the operation type of the current event
                        CanalEntry.EventType eventType = rowChange.getEventType();
                        // 7. Get the row data set
                        List<CanalEntry.RowData> rowDataList = rowChange.getRowDatasList();

                        // 8. Iterate over rowDataList and print the data
                        for (CanalEntry.RowData rowData : rowDataList) {
                            // Data before the change
                            JSONObject beforeData = new JSONObject();
                            List<CanalEntry.Column> beforeColumnsList = rowData.getBeforeColumnsList();
                            for (CanalEntry.Column column : beforeColumnsList) {
                                beforeData.put(column.getName(), column.getValue());
                            }
                            // Data after the change
                            JSONObject afterData = new JSONObject();
                            List<CanalEntry.Column> afterColumnsList = rowData.getAfterColumnsList();
                            for (CanalEntry.Column column : afterColumnsList) {
                                afterData.put(column.getName(), column.getValue());
                            }
                            // Print the data (console | Kafka)
                            System.out.println("Table:" + tableName +
                                    ",EventType:" + eventType +
                                    ",Before:" + beforeData +
                                    ",After:" + afterData);
                        }
                    } else {
                        System.out.println("Current entry type is: " + entryType);
                    }
                }
            }
        }
    }
}

Start canal, run the CanalClient program, then insert, update, and delete rows in tables under the subscribed database test_canal, and observe the console output.

  • Insert data

First, insert a single row:

insert into user_info values('1001', 'zss', 'male');

(Screenshot: console output for the single-row INSERT)

One SQL statement affecting multiple rows:

insert into user_info values('1002', 'lisi', 'female'),('1001', 'zss', 'male');

(Screenshot: console output for the multi-row INSERT)

  • Update data
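
For example, a sketch (the column names id and sex are assumptions based on the INSERT statements above; adjust them to the actual user_info schema):

update user_info set sex='female' where id='1001';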

(Screenshot: console output for the UPDATE)

  • Delete data
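
For example (again assuming an id column):

delete from user_info where id='1001';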

(Screenshot: console output for the DELETE)

2. Kafka mode test

  • In canal.properties, change canal's serverMode from the default tcp to kafka:
# tcp, kafka, rocketMQ, rabbitMQ
canal.serverMode = kafka
  • Set the address of the Kafka cluster:
##################################################
#########                    Kafka                   #############
##################################################
kafka.bootstrap.servers = node01:9092,node02:9092,node03:9092
  • In instance.properties, set the topic (canal_test) and the partition that canal writes to:
# mq config
canal.mq.topic=canal_test
# dynamic topic route by schema or table regex
#canal.mq.dynamicTopic=mytest1.user,mytest2\\..*,.*\\..*
canal.mq.partition=0
# hash partition config
#canal.mq.partitionsNum=3
#canal.mq.partitionHash=test.table:id^name,.*\\..*
#canal.mq.dynamicTopicPartitionNum=test.*:4,mycanal:6

Note: by default canal still writes everything to a single partition of the specified Kafka topic, because producing to multiple partitions in parallel may break the ordering of the binlog events. To increase parallelism, first create the Kafka topic with more than one partition, then set the canal.mq.partitionHash property.
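
A minimal sketch of such a configuration in instance.properties (the partition count and hash rule are illustrative, following the commented example syntax above; $pk$ is canal's placeholder for hashing by the table's primary key, so check the canal MQ documentation for your version):

canal.mq.partitionsNum=3
canal.mq.partitionHash=.*\\..*:$pk$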

  • Start canal:
[zhangsan@node01 example]$ cd /opt/module/canal/
[zhangsan@node01 canal]$ bin/startup.sh
  • If jps shows a CanalLauncher process, the startup succeeded; the canal_test topic is created at the same time:
[zhangsan@node01 example]$ jps 
2269 Jps
2253 CanalLauncher
  • Start a Kafka console consumer to check what canal produces:
[zhangsan@node01 kafka]$ bin/kafka-console-consumer.sh --bootstrap-server node01:9092 --topic canal_test
  • Insert, update, and delete data in MySQL, then check the consumer console (a Java consumer sketch follows below)
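
Since the pom already includes kafka-clients, the same check can also be done from Java. A minimal consumer sketch, assuming the broker list and topic configured above (the group id canal_test_group is arbitrary):

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class CanalKafkaConsumer {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker list from the canal.properties configuration above
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "node01:9092,node02:9092,node03:9092");
        // Arbitrary consumer group id for this test
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "canal_test_group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Subscribe to the topic canal writes to
            consumer.subscribe(Collections.singletonList("canal_test"));
            while (true) {
                // canal delivers each binlog change as a JSON string message
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        }
    }
}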

Kafka Consumer Console

  • Insert data

First, insert a single row:

insert into user_info values('1001', 'zss', 'male');

(Screenshot: Kafka consumer output for the single-row INSERT)

One SQL statement affecting multiple rows:

insert into user_info values('1002', 'lisi', 'female'),('1001', 'zss', 'male');

(Screenshot: Kafka consumer output for the multi-row INSERT)

  • Update data (e.g., the UPDATE statement from section 1)

(Screenshot: Kafka consumer output for the UPDATE)

  • Delete data

(Screenshot: Kafka consumer output for the DELETE)

Finish!
