A requirement came up at work: replicate MySQL changes to Kafka in real time, and later also replicate changes to one table in real time to a second MySQL instance. canal and canal-adapter solved both on short notice, though not without stepping into a number of pitfalls. The notes below summarize the setup.
Official documentation: https://github.com/alibaba/canal/wiki
I. Building the canal image and its compose file
1. Dockerfile contents
```dockerfile
FROM openjdk:8-jre-alpine
ADD [ "canal.tar.gz", "/opt/" ]
WORKDIR /opt/canal
EXPOSE 11110 11112
COPY ["entrypoint.sh", "/"]
VOLUME ["/opt/canal/logs", "/opt/canal/conf"]
ENTRYPOINT /entrypoint.sh
```
2. entrypoint.sh contents
```sh
#!/bin/sh
Base_dir=/opt/canal/conf
Log_dir=/opt/canal/logs

# The variables must be quoted: [ -n ${var} ] evaluates to true even when
# the variable is unset, so every sed below would otherwise run unconditionally.
if [ -n "${canal_instance_master_address}" ]; then
    sed -i "/^canal.instance.master.address=/ccanal.instance.master.address=${canal_instance_master_address}" ${Base_dir}/example/instance.properties
fi
if [ -n "${canal_mq_servers}" ]; then
    sed -i "/^canal.mq.servers/ccanal.mq.servers=${canal_mq_servers}" ${Base_dir}/canal.properties
fi
if [ -n "${canal_instance_filter_regex}" ]; then
    sed -i "/^canal.instance.filter.regex/ccanal.instance.filter.regex=${canal_instance_filter_regex}" ${Base_dir}/example/instance.properties
fi
if [ -n "${canal_mq_dynamicTopic}" ]; then
    sed -i "/^canal.mq.dynamicTopic/ccanal.mq.dynamicTopic=${canal_mq_dynamicTopic}" ${Base_dir}/example/instance.properties
fi

/bin/sh /opt/canal/bin/startup.sh
# Keep the container in the foreground by following the canal log
tail -F ${Log_dir}/canal/canal.log
```
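To see what these `sed` edits do, here is a standalone dry run of the first substitution against a throwaway properties file (the `/tmp` path and property values are made up for the demo): the `c` command replaces the whole matching line with the new key=value pair.

```shell
# Create a throwaway copy of the two properties we care about
printf 'canal.instance.master.address=127.0.0.1:3306\ncanal.instance.connectionCharset=UTF-8\n' > /tmp/instance.properties

# Same substitution entrypoint.sh performs when the env variable is set
canal_instance_master_address=193.168.1.136:3306
sed -i "/^canal.instance.master.address=/ccanal.instance.master.address=${canal_instance_master_address}" /tmp/instance.properties

cat /tmp/instance.properties
```

Only the matching line is rewritten; the other properties in the file are left untouched.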
3. docker-compose.yml contents
```yaml
version: "3"
services:
  canal:
    image: registry.cn-beijing.aliyuncs.com/wavewisdom-bj-registry-common/canal:1.1.4
    container_name: canal
    env_file:
      - ./wave-canal.env
    ports:
      - "11110:11110"
      - "11112:11112"
    restart: always
```
4. Contents of the wave-canal.env file used by docker-compose
```
canal_instance_master_address=193.168.1.136:3306
canal_instance_filter_regex=wavewisdom-bj-develop.odin_business_emergency_record,wavewisdom-bj-develop.odin_business_capture_record,wavewisdom-bj-develop.work_flow_mission,wavewisdom-bj-develop.odin_device_camera,wavewisdom-bj-develop.odin_device_device_position_associate,wavewisdom-bj-develop.odin_device_position,wavewisdom-bj-develop.odin_business_alarm_record,wavewisdom-bj-develop.odin_advise_info
canal_mq_servers=10.0.14.47:9092
# Note the two escape characters (backslashes) in the value below
canal_mq_dynamicTopic=.*\\..*
```
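As a rough illustration of how `canal.instance.filter.regex` is applied (canal itself matches each comma-separated entry as a regular expression against the `schema.table` name; this shell loop only mimics that behavior, it is not canal code):

```shell
# Shortened filter list and one candidate table, taken from the env file above
filter='wavewisdom-bj-develop.odin_advise_info,wavewisdom-bj-develop.odin_device_camera'
table='wavewisdom-bj-develop.odin_advise_info'

matched=''
# Split the filter on commas and test each entry as a full-line regex
for entry in $(echo "$filter" | tr ',' ' '); do
    if echo "$table" | grep -qxE "$entry"; then
        matched="$entry"
    fi
done
echo "captured by: $matched"
```

A table whose `schema.table` name matches none of the entries is simply not captured, which is why every table to be synced must be listed explicitly here.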
II. Building the canal-adapter image and its compose file
References:
https://www.jianshu.com/p/5bcf97335e71
https://blog.csdn.net/q936889811/article/details/95745721
https://blog.csdn.net/singgel/article/details/86166154
1. Dockerfile contents
```dockerfile
FROM openjdk:8-jre-alpine
RUN echo "Asia/Shanghai" > /etc/timezone
ADD [ "adapter.tar.gz", "/opt/" ]
WORKDIR /opt/adapter
COPY ["entrypoint.sh", "/"]
ENTRYPOINT /entrypoint.sh
```
2. entrypoint.sh contents
```sh
#!/bin/sh
Conf_Dir=/opt/adapter/conf

# The variables must be quoted: [ -n ${var} ] evaluates to true even when
# the variable is unset.
if [ -n "${Mq_Servers}" ]; then
    sed -i "/mqServers:/ s/:.*/: ${Mq_Servers}/" ${Conf_Dir}/application.yml
fi
if [ -n "${Mq_Topic}" ]; then
    sed -i "/- instance:/ s/:.*/: ${Mq_Topic}/g" ${Conf_Dir}/application.yml
    sed -i "/table:/ s/:.*/: ${Mq_Topic}/g" ${Conf_Dir}/rdb/mytest_user.yml
fi
if [ -n "${Src_Data_Server}" ]; then
    sed -i "/^ *url: jdbc:mysql:/ s/mysql:.*/mysql:\/\/${Src_Data_Server}\/${Src_Database}/" ${Conf_Dir}/application.yml
fi
if [ -n "${Src_User}" ]; then
    sed -i "/^ *username:/ s/:.*/: ${Src_User}/" ${Conf_Dir}/application.yml
fi
if [ -n "${Src_Password}" ]; then
    sed -i "/^ *password:/ s/:.*/: ${Src_Password}/" ${Conf_Dir}/application.yml
fi
if [ -n "${Src_Database}" ]; then
    sed -i "/^ *database:/ s/:.*/: ${Src_Database}/" ${Conf_Dir}/rdb/mytest_user.yml
fi
if [ -n "${Src_Table}" ]; then
    sed -i "/^ *table:/ s/:.*/: ${Src_Table}/" ${Conf_Dir}/rdb/mytest_user.yml
fi
if [ -n "${Dest_Data_Server}" ]; then
    sed -i "/^ *jdbc.url: jdbc:mysql:/ s/mysql:.*/mysql:\/\/${Dest_Data_Server}\/${Dest_Database}/" ${Conf_Dir}/rdb/mytest_user.yml
fi
if [ -n "${Dest_User}" ]; then
    sed -i "/^ *jdbc.username:/ s/:.*/: ${Dest_User}/" ${Conf_Dir}/application.yml
fi
if [ -n "${Dest_Password}" ]; then
    sed -i "/^ *jdbc.password:/ s/:.*/: ${Dest_Password}/" ${Conf_Dir}/application.yml
fi
if [ -n "${Dest_Database}" ] && [ -n "${Dest_Table}" ]; then
    sed -i "/^ *targetTable:/ s/:.*/: ${Dest_Database}.${Dest_Table}/" ${Conf_Dir}/rdb/mytest_user.yml
fi
if [ -n "${Target_Pk}" ]; then
    # Turn "dest:src" into the YAML pair "dest: src"
    R_Target_Pk=`echo $Target_Pk | sed -e 's/:/: /g'`
    sed -i "/^ *targetPk:/{n;s/[a-z].*/${R_Target_Pk}/g}" ${Conf_Dir}/rdb/mytest_user.yml
fi

# POSIX sh: use "=" rather than "==" inside [ ]
if [ "${Map_All}" = "true" ]; then
    sed -i "/mapAll:/c\ mapAll: true" ${Conf_Dir}/rdb/mytest_user.yml
    sed -i "/targetColumns:/c\# targetColumns:" ${Conf_Dir}/rdb/mytest_user.yml
else
    sed -i "/mapAll:/c\# mapAll: true" ${Conf_Dir}/rdb/mytest_user.yml
    sed -i "/targetColumns:/c\ targetColumns:" ${Conf_Dir}/rdb/mytest_user.yml
    for colume in ${Mapping_Columes}
    do
        R_colume=`echo $colume | sed -e 's/:/: /g'`
        sed -i "/^ *targetColumns:/a\ ${R_colume}" ${Conf_Dir}/rdb/mytest_user.yml
    done
fi

sh /opt/adapter/bin/startup.sh
tail -F /opt/adapter/logs/adapter/adapter.log
```
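The column-mapping loop at the end is the least obvious part, so here is a standalone dry run of just that loop against a throwaway YAML file (the `/tmp` path and the two mappings are demo values). Note that because each `a` (append) lands directly under the `targetColumns:` line, the entries end up in reverse order, which is harmless for YAML mappings:

```shell
# Two "dest:src" mappings, space-separated as in the env file
Mapping_Columes='clock_record_id:clock_record_id personnel_name:personnel_name'

printf 'targetColumns:\n' > /tmp/demo_user.yml
for colume in ${Mapping_Columes}
do
    # "dest:src" -> "dest: src" so the appended line is valid YAML
    R_colume=`echo $colume | sed -e 's/:/: /g'`
    sed -i "/^ *targetColumns:/a\ ${R_colume}" /tmp/demo_user.yml
done
cat /tmp/demo_user.yml
```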
3. docker-compose.yml contents
```yaml
version: "3"
services:
  canal:
    image: adapter:v1
    container_name: canal-adapter
    env_file:
      - ./wave-canal-adapter.env
    restart: always
```
4. Contents of the wave-canal-adapter.env file used by docker-compose
```
# Kafka server address
Mq_Servers=10.0.14.47:9092
# Kafka topic name
Mq_Topic=test.odin_business_clock_record
# Source MySQL server address
Src_Data_Server=193.168.1.136:3306
# Source MySQL username
Src_User=root
# Source MySQL password
Src_Password=123456
# Source database to sync
Src_Database=test
# Source table to sync
Src_Table=odin_business_clock_record
# Destination MySQL server address
Dest_Data_Server=193.168.1.136:3306
# Destination MySQL username
Dest_User=root
# Destination MySQL password
Dest_Password=123456
# Destination database name
Dest_Database=test
# Destination table name
Dest_Table=test
# Primary key mapping
Target_Pk=clock_record_id:clock_record_id
# Whether to map the whole table; set to true to enable
Map_All=false
# When full-table mapping is disabled, the columns to map, in
# "destination_column:source_column" format, separated by spaces
Mapping_Columes=clock_record_id:clock_record_id personnel_name:personnel_name personnel_num:personnel_num
```
III. Both of the images above are my own builds. Before starting the adapter, the source table must first be fully copied into the destination database; otherwise the sync may fail because of mismatched column names or types. In this setup the adapter runs in Kafka mode: it consumes change records from Kafka and applies them to the destination table. The image only covers single-database, single-table sync; to handle multiple databases or tables, entrypoint.sh must be rewritten.
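One way to do that initial full copy is to pipe a dump of the source table into the destination. This is only a sketch, using the example hosts and credentials from the env file above; the destination database must already exist, and for large tables you would want extra mysqldump options:

```shell
# Full copy of the source table into the destination database
# (hosts/credentials are the demo values from wave-canal-adapter.env)
mysqldump -h 193.168.1.136 -P 3306 -uroot -p123456 \
    test odin_business_clock_record \
  | mysql -h 193.168.1.136 -P 3306 -uroot -p123456 test
```

Only after this copy completes should the canal-adapter container be started, so that incremental changes from Kafka apply cleanly on top of a consistent baseline.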
Both images can be downloaded from Baidu Netdisk:
Link: https://pan.baidu.com/s/1yLcx2yD8DyQq_BxvqpYPEQ
Extraction code: fg0d
IV. Expression syntax of canal's dynamic-topic parameter canal.mq.dynamicTopic
| Expression | Behavior |
| --- | --- |
| test\\.test | Matches a single table; its changes are sent to a topic named test\\.test |
| .*\\..* | Matches all tables; each table's changes are sent to a topic named after that table |
| test | Matches one database; changes to all of its tables are sent to a topic named after the database |
| test\\..* | Matches tables by expression; each matched table's changes are sent to a topic named after that table |
| .*\\..* | Matched tables are sent to a topic named "database.table" |
| test,test1\\.test1 | Multiple expressions: tables in the test database go to the test topic, table test1\\.test1 goes to the corresponding test1\\.test1 topic, and all remaining tables go to the default canal.mq.topic |
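The fallback rule in the last row can be sketched in shell (illustrative only, not canal code): tables matching an expression get their own topic, everything else falls back to the default `canal.mq.topic`, whose value here is an assumption.

```shell
default_topic='example_topic'   # assumed value of canal.mq.topic
routed=''
for t in test.users other.logs; do
    # Mimic one dynamicTopic expression, "test\..*"
    if echo "$t" | grep -qxE 'test\..*'; then
        routed="$routed $t->$t"               # matched: per-table topic
    else
        routed="$routed $t->$default_topic"   # unmatched: default topic
    fi
done
routed="${routed# }"
echo "$routed"
```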