Maxwell monitors MySQL incremental data

Preconditions

1. Kafka is up and working

2. MySQL is up and working

Enable MySQL's binlog

How to enable binlog:

https://blog.csdn.net/qq_41489540/article/details/112709210

See the post above for how to configure binlog, then replace the content of my.cnf with the following.

my.cnf content

# How long to keep binlog files before they are purged
expire_logs_days = 5
# Maximum size of each binlog file
max_binlog_size = 50m
server-id=1
log-bin=master
binlog_format=row
# Only changes to this database generate binlog events
binlog-do-db=gmall202004

Create a Maxwell user and a database for its metadata

Create a maxwell database in MySQL to store Maxwell's metadata:

 $ mysql -uroot -proot

mysql> CREATE DATABASE maxwell;

# Create an account that can operate on that database

mysql> GRANT ALL ON maxwell.* TO 'maxwell'@'%' IDENTIFIED BY '123456';

# Give the account the privileges needed to monitor the other databases

mysql> GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO maxwell@'%';

Start Maxwell and test that it works

Start command:

bin/maxwell --user='maxwell' --password='123456' --host='zjj101' --producer=stdout
# --user: the user created above
# --password: that user's password
# --host: the MySQL host/IP
# --producer: where to write the data. stdout prints to the console; kafka writes to Kafka.

Create a table in the gmall202004 database and run some insert, delete, update, and select statements; Maxwell then prints the corresponding JSON, as shown below.

This shows that Maxwell and the binlog configuration above are both working.

[root@zjj101 maxwell-1.25.0]# bin/maxwell --user='maxwell' --password='123456' --host='zjj101' --producer=stdout
Using kafka version: 1.0.0
16:11:12,343 WARN  MaxwellMetrics - Metrics will not be exposed: metricsReportingType not configured.
16:11:12,857 INFO  SchemaStoreSchema - Creating maxwell database
16:11:12,991 INFO  Maxwell - Maxwell v1.25.0 is booting (StdoutProducer), starting at Position[BinlogPosition[master.000001:120], lastHeartbeat=0]
16:11:13,144 INFO  AbstractSchemaStore - Maxwell is capturing initial schema
16:11:13,698 INFO  BinlogConnectorReplicator - Setting initial binlog pos to: master.000001:120
16:11:13,789 INFO  BinaryLogClient - Connected to zjj101:3306 at master.000001/120 (sid:6379, cid:38)
16:11:13,789 INFO  BinlogConnectorLifecycleListener - Binlog connected.
{"database":"gmall202004","table":"z_user_info","type":"delete","ts":1610784697,"xid":1540,"commit":true,"data":{"id":30,"user_name":"zhang3","tel":"13810001010"}}

Integrating Maxwell with Kafka

Copy config.properties.example in Maxwell's root directory and rename the copy to config.properties:

 cp config.properties.example  config.properties

Modify config.properties

producer=kafka
# Maxwell sends what it reads to Kafka, so the Kafka broker addresses are needed here
kafka.bootstrap.servers=hadoop202:9092,hadoop203:9092,hadoop204:9092
# Which Kafka topic to write the data to
kafka_topic=gmall2020_db_m
# MySQL connection settings
host=zjj101
user=maxwell
password=123456
# Required; used later for initialization
client_id=maxwell_1

Note: by default the data still goes to a single partition of the specified Kafka topic, because writing to multiple partitions in parallel could break the binlog order (that is, the order in which the business data was written).

If you want more parallelism and do not care about binlog order, first set the topic's partition count > 1, then set the producer_partition_by property.

Allowed values: producer_partition_by=database|table|primary_key|random|column
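The trade-off above can be illustrated with a small simulation (this is not Maxwell's actual hashing code, and NUM_PARTITIONS is an assumed topic setting): events that share the chosen partition key always land in the same partition, so their relative order is preserved there.

```python
import random

NUM_PARTITIONS = 4  # assumes the topic was created with 4 partitions

def choose_partition(event: dict, partition_by: str) -> int:
    """Pick a partition for an event based on the configured key (illustrative)."""
    if partition_by == "random":
        return random.randrange(NUM_PARTITIONS)
    key = str(event[partition_by])
    return hash(key) % NUM_PARTITIONS

e1 = {"database": "gmall202004", "table": "z_user_info", "primary_key": "30"}
e2 = {"database": "gmall202004", "table": "z_user_info", "primary_key": "31"}

# Same table -> same partition, so per-table order is kept;
# partitioning by primary_key may split the table across partitions.
print(choose_partition(e1, "table") == choose_partition(e2, "table"))  # True
```

Choosing `database` or `table` keeps ordering within that scope; `primary_key` or `random` gives the most parallelism at the cost of cross-row ordering.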

Start maxwell

Write a startup script, maxwell.sh:

/root/soft/maxwell-1.25.0/bin/maxwell --config /root/soft/maxwell-1.25.0/config.properties >/dev/null 2>&1 &

Run the startup script

sh maxwell.sh

Check whether Maxwell started

jps shows that Maxwell is running:

[root@zjj101 bin]# jps
123016 Maxwell

Start a console consumer

 kafka-console-consumer.sh --bootstrap-server zjj101:9092 --topic gmall2020_db_m

Start testing

Create a table in the gmall202004 database and run some insert, delete, update, and select statements.

The Kafka consumer now shows the data in JSON format:

[root@zjj101 bin]# kafka-console-consumer.sh --bootstrap-server zjj101:9092 --topic gmall2020_db_m
{"database":"gmall202004","table":"z_user_info","type":"delete","ts":1610785227,"xid":2119,"commit":true,"data":{"id":31,"user_name":"li4","tel":"1389999999"}}
{"database":"gmall202004","table":"z_user_info","type":"insert","ts":1610785232,"xid":2141,"xoffset":0,"data":{"id":30,"user_name":"zhang3","tel":"13810001010"}}
{"database":"gmall202004","table":"z_user_info","type":"insert","ts":1610785232,"xid":2141,"commit":true,"data":{"id":31,"user_name":"li4","tel":"1389999999"}}

Origin blog.csdn.net/qq_41489540/article/details/113800578