Summary of common solutions for MySQL data migration and synchronization

Table of contents

1. Introduction

2. Data Migration Scenarios

2.1 Whole database migration

2.2 Table data migration

2.3 MySQL version change

2.4 Migrating MySQL data to other storage media

2.5 Migrating a self-built database to the cloud

2.6 Migrating MySQL data to domestic databases

3. Physical database migration implementation plan

3.1 Overview of physical database migration

3.1.1 Applicable Scenarios for Physical Migration

3.1.2 Advantages and disadvantages of physical migration

3.2 Physical migration by copying data files

3.2.1 Overview of data file replication

3.2.2 Create a database on the first machine

3.2.3 Copy the data files from the first machine

3.3 Tool-assisted physical migration

3.3.1 Delete the database on IP2

3.3.2 Export the database on IP1

3.3.3 Import the SQL file into IP2

4. Logical database migration implementation plan

4.1 Disadvantages of Physical Migration

4.2 Features of logical migration

4.2.1 Advantages and disadvantages of logical migration

4.2.2 Considerations for logical migration

4.3 Logical database backup and recovery

4.3.1 Execute database backup command

4.3.2 Copy the SQL file

4.3.3 Restoring backup data

5. Application-level migration implementation plan

5.1 Business Case Requirements and Implementation 1

5.1.1 Use canal to complete data migration

5.1.2 canal implementation sample code

5.2 Business Case Requirements and Implementation 2

6. Offline data migration with middleware

6.1 Data migration or synchronization with canal

6.1.1 Using canal for data migration or synchronization

6.2 Data migration with DataX

6.2.1 Introduction to DataX

6.2.2 DataX features

6.2.3 Applicable scenarios

6.2.4 DataX data migration process

6.2.5 Sample DataX migration configuration

7. Offline data migration with client tools

7.1 Navicat

7.2 Migrating MySQL data to PostgreSQL with Navicat

7.2.1 Preparation

7.2.2 Migration configuration

7.2.3 Selecting tables for migration

7.2.4 Tick Continue when errors are encountered

7.3 Kettle

7.3.1 Kettle overview

7.3.2 Kettle data migration process

8. Closing remarks


1. Introduction

In production practice, many developers have at some point backed up, synchronized, or migrated a MySQL database or table with the mysqldump command. Database migration and synchronization in fact come up in many scenarios, such as the following:

  • Force majeure: the server hosting the database is reclaimed, or its disk is damaged, so the database has to be migrated;
  • Read and write pressure on a single-node database keeps growing, and one or more nodes need to be added to share the load;
  • A single table holds too much data and needs to be split horizontally or vertically;
  • The database needs to be migrated from MySQL to another database, such as PG or OB...

Many readers will have run into at least some of these scenarios in their own business. If you have not, that is fine; but once such a problem does occur, how should it be handled? This article discusses the options in some detail, drawing on production experience.

2. Data Migration Scenarios

Generally speaking, different business scenarios impose different modes and requirements on data migration. Some require a server-level migration of the entire database, some only need to migrate table data, and some only need to migrate a subset of the tables in a database. Based on production experience, several common data migration scenarios are summarized below.

2.1 Whole database migration

The main reasons for the need to migrate the entire database are:

  • The server is recycled;
  • Insufficient disk space;
  • Disk corruption;
  • The micro-service transformation of the project requires splitting the database...

Whole-database migration is common when the database is deployed on a physical machine: the machine may be reclaimed for some special reason, or its disk may fail, so the database on it has to be migrated as a whole.

In other scenarios, a production issue cannot be reproduced in the development or test environment, and a full copy of the production data has to be migrated to a self-built machine in order to recreate the problem.

2.2 Table data migration

I don't know if you have encountered the following scenarios:

  • The structure of a production table changes, for example new fields are added, but you do not want to stop the service for the update;
  • A single table holds so much data that it has become the system's performance bottleneck, and its data needs to be split horizontally or vertically;
  • A project change requires that the data of a table in one database be migrated in full to another database;

Table data migration usually happens when the data of a single table has to be split. Some means is then needed to move the data: a tool, a SQL script, a stored procedure, or even an external program, as sketched below.
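As a minimal illustration of the "SQL script" approach, the following sketch copies part of a hypothetical table db_main.t_order into an archive table db_archive.t_order_2022 on the same instance (all names and the date column are made up for the example):

# Hypothetical example: move 2022 orders from db_main.t_order to db_archive.t_order_2022
mysql -uroot -p -e "
  CREATE TABLE IF NOT EXISTS db_archive.t_order_2022 LIKE db_main.t_order;
  INSERT INTO db_archive.t_order_2022
  SELECT * FROM db_main.t_order WHERE order_time < '2023-01-01';
"

For very large tables the INSERT ... SELECT would normally be run in chunks by primary-key range to avoid long locks.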

2.3 MySQL version change

In my past experience, MySQL was upgraded from 5.7 to 8.x in a production environment. Because databases of different versions differ considerably in character-set encoding and other details, using the lower-version data directly on the higher version took quite a few twists and turns before it finally worked. So when upgrading MySQL, or any other relational database, across versions, there is a high probability that data migration will be involved.

2.4 Migrating MySQL data to other storage media

When the product architecture is adjusted, business data that used to be stored only in a relational database such as MySQL may afterwards need to be double-written to storage media such as ES, MongoDB or HBase. To put the new storage into use as soon as possible, the existing MySQL data has to be aligned with it first, and that alignment is a data migration. This is a common scenario in practice.

2.5 Migrating a self-built database to the cloud

In the years before public cloud was widely adopted, many companies bought their own servers to host their databases, and enterprises of a certain scale built their own machine rooms and data centers. With the rise and widespread use of public cloud, moving a service to the cloud inevitably means migrating the self-built database's data into the cloud environment.

2.6 Migrating MySQL data to domestic databases

In recent years, influenced by the external environment, security concerns have been raised by many Internet companies and a wave of domestic databases has emerged, such as GaussDB and Dameng. For systems already in production, becoming compatible with these domestic databases inevitably means migrating the MySQL data into them.

3. Physical database migration implementation plan

The sections above summarized the common scenarios in which data migration may arise. Different scenarios call for different migration strategies, and the industry has no single universal solution. Based on actual production experience, however, the following sections give some workable solutions and ideas for reference.

3.1 Overview of physical database migration

Physical migration is suitable for migrating massive amounts of data as a whole. You can copy the raw data files directly, or use a client tool such as Navicat to back up and migrate. Physical migration between servers requires that both run the same MySQL version with the same configuration and permissions.

The advantage of physical migration is speed; the disadvantage is that the new server's configuration must be exactly the same as the original's, and even then some unexpected errors may occur.

3.1.1 Applicable Scenarios for Physical Migration

It suits whole-database migration of large data volumes, for example when an entire server is being replaced.

3.1.2 Advantages and disadvantages of physical migration

Advantages

Migration is relatively fast, and you do not need to care about the details of the data.

Disadvantages

To keep the data consistent before and after migration, downtime may be required, which is inflexible, and unknown errors are hard to predict.

Next, a case will be used to demonstrate the complete steps of physical migration.

3.2 Physical migration by copying data files

Preparation

1. Two servers, referred to below as IP1 and IP2;

2. A MySQL environment prepared in advance on both servers (this can be set up quickly with Docker);

3.2.1 Overview of data file replication

Anyone who has used MySQL should be familiar with its data files. Taking 5.7 as an example, each database gets its own directory under the MySQL data directory, containing files such as the .ibd files that hold table data and the .frm files that describe table structure.

In theory, then, migrating data by copying files only requires making a complete copy of the database's data directory, while keeping the MySQL data directory location and structure on the target machine the same.

3.2.2 Create a database on the first machine

Create a database db_emp on the IP1 machine, create a table t_user in it, and insert a row into t_user, for example with the statements sketched below.
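A minimal sketch of these statements (the column definitions are invented for the demo):

mysql -uroot -p -e "
  CREATE DATABASE db_emp DEFAULT CHARACTER SET utf8mb4;
  CREATE TABLE db_emp.t_user (
    id   INT PRIMARY KEY AUTO_INCREMENT,
    name VARCHAR(64) NOT NULL
  );
  INSERT INTO db_emp.t_user (name) VALUES ('zhangsan');
"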

At this point a directory for the new database appears under the MySQL data directory.
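Assuming the default data directory /var/lib/mysql and MySQL 5.7 with file-per-table enabled, it looks roughly like this:

ls /var/lib/mysql/db_emp/
# db.opt       default character set / collation of the database
# t_user.frm   table structure definition
# t_user.ibd   InnoDB data and indexes of t_user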

3.2.3 Copy the data files from the first machine

Note that, to minimize problems during the copy and migration, keep the MySQL version on both machines the same and the data directory in the same location; if MySQL is run in Docker, keep the mounted data directory settings consistent as well.

The directories that need to be copied are shown below (some say the mysql system schema directory does not have to be copied, but in my test copying it along worked as well).

Then copy these directories to the same location on the IP2 machine. Before copying, make sure the corresponding files do not already exist in the IP2 directory.

To transfer the data files you can use an FTP tool or simply scp, and decompress on the target after the copy completes (remember to back up the previous data directory first), for example as sketched below.
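A minimal sketch of the copy, assuming the default /var/lib/mysql data directory on both machines:

# On IP1: pack the database directory
cd /var/lib/mysql
tar czf /tmp/db_emp.tar.gz db_emp

# Transfer the archive to IP2
scp /tmp/db_emp.tar.gz root@IP2:/tmp/

# On IP2: back up the old data directory first, then unpack and fix ownership
cd /var/lib/mysql
tar xzf /tmp/db_emp.tar.gz
chown -R mysql:mysql db_emp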

After decompressing, restart the MySQL service on IP2 and connect with a client; barring surprises, you will see the same database and table data.

3.3 Tool-assisted physical migration

As the steps above show, migrating by directly copying the data directory is rather cumbersome, and when the database files are large the process can take a very long time; the transfer is also easily disturbed by external factors such as the network. If client tools are allowed in your production environment, you can instead use one to assist with migrating the whole database. Continuing with the database above, the concrete steps are as follows.

3.3.1 Delete the database on IP2

In order to ensure the demonstration effect, first delete the database on IP2

3.3.2 Export the database on IP1

Use Navicat to connect to the MySQL service on IP1 and export the db_emp database as a SQL file.

Because the data volume is small, the export is fast.

3.3.3 Import the SQL file into IP2

In Navicat, run the SQL file exported above against IP2 to import it. Most readers will have done this many times, so the details are omitted.

4. Logical database migration implementation plan

Physical migration may look simple and crude, but in production practice, especially in environments with many microservices and large databases, it is not a common choice. Why?

4.1 Disadvantages of Physical Migration

Not flexible enough

To keep data consistent before and after the migration, online services may have to be shut down while the data is moved, which is inflexible.

Data volume too large

In real production environments database files start at gigabytes and easily reach terabytes; copying that across machines puts enormous demands on transfer speed and the network.

Backup is troublesome

Backing up the previous data files takes a lot of disk space, and when there are many large files the backup itself takes a long time.

Environment consistency is hard to guarantee

As noted above, it is hard to keep the data directory and the rest of the environment identical across machines before and after the copy; in reality, server environment factors cause many uncontrollable problems, which is a constant headache for development and operations engineers.

In practice, these unpredictable factors mean that physical migration is rarely the preferred data migration solution; in most cases DBAs and operations staff prefer logical migration.

The db_emp database above will again serve as the example; let's look at the specific characteristics and steps.

4.2 Features of logical migration

Compared with physical migration, logical migration has clearer advantages and can be applied to a wider range of scenarios. Specifically:

4.2.1 Advantages and disadvantages of logical migration

Advantages

High compatibility, flexible and convenient, works across versions, and the files transferred are relatively small.

Disadvantages

Migration can take a very long time. Logical migration works by converting the data and table structures in the MySQL database into SQL files; if the backup SQL file is particularly large, parsing and replaying it takes a long time.

4.2.2 Considerations for logical migration

1. Identify the databases or tables that do not need to be migrated;

2. Handle very large tables separately;

3. Verify the data as soon as the migration completes.

4.3 Logical database backup and recovery

Logical backup of MySQL databases and tables is not the focus of this article and will not be elaborated here; interested readers can refer to the article "mysql data backup and recovery". In short, the core of a logical backup is to dump the databases and tables with the mysqldump command and then restore or migrate the data from the resulting file.

4.3.1 Execute database backup command

Execute the following database backup command to back up the db_emp database

mysqldump -uroot -p'your_password' --databases db_emp > /var/lib/mysql/db_emp.sql

After the command completes, the exported SQL file can be found in the target directory (/var/lib/mysql in this example);

4.3.2 Copy the SQL file

Copy the SQL file backed up above to the same directory on the second machine, for example with scp as shown below.
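A minimal sketch, assuming SSH access from IP1 to IP2 and the same /var/lib/mysql directory on both machines:

scp /var/lib/mysql/db_emp.sql root@IP2:/var/lib/mysql/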

4.3.3 Restoring backup data

Log in to the mysql service of IP2 and delete the previous db_emp database

Execute the following command to restore data

mysql -uroot -p'your_password' < /var/lib/mysql/db_emp.sql

After the execution is complete, check the mysql of IP2 again, and you can see that the data has been migrated;

5. Application-level migration implementation plan

In actual business, the following scenarios may exist (derived from real business requirements):

  • There are no professional implementation personnel or personnel familiar with the database at the project site;
  • Some business tables in the database may increase or decrease fields from time to time;
  • Off-site high availability requires data backup, incremental or full backup;
  • Only need to back up the data of specific change events of some tables, such as addition, deletion and modification of data;
  • Data from some tables needs to be reshaped into new, heterogeneous tables for other business uses...

In short, what these scenarios have in common is that the data to be migrated has some particularity, or the scenario is highly customized. A fixed method is hard to apply in such cases, and solving the problem with a program is a good choice.

How is this done in practice? Several commonly used solutions are listed below, organized around a few typical scenarios.

5.1 Business Case Requirements and Implementation 1

Application A has a table in its database. Some fields of the data in that table now need to be saved to another database, and at the same time a certain metric needs to be aggregated and sent to application B for display on a large screen.

Requirement analysis

Through the above requirement description, the following key points can be extracted:

  • The operation targets a specific table in the database;
  • Only part of the table's data needs to be migrated;
  • Statistical calculations need to be performed on the existing table data;

Given the original requirement and the analysis above, it is hard to meet this requirement directly with a SQL migration. Commonly used implementation options include:

  • A MySQL stored procedure;
  • Logical migration, followed by manually correcting the data and computing and filling in the aggregated results;
  • An external program.

Obviously the first two options are not flexible enough and may involve downtime, so an intermediate program is worth considering. Alibaba's open-source middleware canal is recommended here.

5.1.1 Use canal to complete data migration

canal is a MySQL data synchronization tool open-sourced by Alibaba. Its main purpose is to provide incremental data subscription and consumption based on parsing MySQL's incremental logs (binlog). See the canal Git repository for details.

The principle of canal implementation is shown in the figure below

[Figure: canal's working principle]

So how, concretely, does canal help a program meet the requirement above? Simply put, once canal is in place it acts as a listener on the target tables: it observes every change event and parses the change logs. canal can be installed, deployed and configured directly on a server, and it also provides a client SDK for applications; that SDK is the entry point for implementing the business logic described above.

Mapped to the above requirements, the complete implementation idea is as follows:

  • Configure the canal server in advance;
  • Introduce canal's client SDK into the application;
  • The application listens for change events on the specified tables and writes the changed data to the new table;
  • The application aggregates the statistics into the new table (this step can also be done elsewhere);

5.1.2 canal implementation sample code

Intermediate program sample code (for complete implementation, please refer to the official demo case)

import com.alibaba.fastjson.JSONObject;
import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.CanalEntry;
import com.alibaba.otter.canal.protocol.Message;
import com.google.protobuf.ByteString;

import java.net.InetSocketAddress;
import java.util.List;

public class CanalClient {

    public static void main(String[] args) throws Exception{

        // 1. Obtain the canal connector
        CanalConnector canalConnector =
                CanalConnectors.newSingleConnector(new
                        InetSocketAddress("canal-server-IP", 11111), "example", "", "");

        System.out.println("canal client started, listening for data changes ...");

        while (true){
            canalConnector.connect();
            // Subscribe to the tables to watch
            canalConnector.subscribe("shop001.*");

            // Fetch a batch of data
            Message message = canalConnector.get(100);

            // Parse the message
            List<CanalEntry.Entry> entries = message.getEntries();
            if(entries.size() <= 0){
                System.out.println("no data detected");
                Thread.sleep(1000);
            }

            for(CanalEntry.Entry entry : entries){
                // 1. Get the table name
                String tableName = entry.getHeader().getTableName();

                // 2. Get the entry type
                CanalEntry.EntryType entryType = entry.getEntryType();

                // 3. Get the serialized payload
                ByteString storeValue = entry.getStoreValue();

                // Only handle ROWDATA entries
                if(CanalEntry.EntryType.ROWDATA.equals(entryType)){
                    // Parse the payload obtained in step 3
                    CanalEntry.RowChange rowChange = CanalEntry.RowChange.parseFrom(storeValue);
                    // Get the operation type of the current event (INSERT / UPDATE / DELETE ...)
                    CanalEntry.EventType eventType = rowChange.getEventType();

                    // Get the changed rows
                    List<CanalEntry.RowData> rowDatasList = rowChange.getRowDatasList();

                    // Iterate over the rows
                    for(CanalEntry.RowData rowData : rowDatasList){

                        // Row content before the change
                        JSONObject beforeData = new JSONObject();
                        List<CanalEntry.Column> beforeColumnsList = rowData.getBeforeColumnsList();
                        for(CanalEntry.Column column : beforeColumnsList){
                            beforeData.put(column.getName(),column.getValue());
                        }

                        // Row content after the change
                        List<CanalEntry.Column> afterColumnsList = rowData.getAfterColumnsList();
                        JSONObject afterData = new JSONObject();
                        for(CanalEntry.Column column : afterColumnsList){
                            afterData.put(column.getName(),column.getValue());
                        }

                        System.out.println("Table :" + tableName +
                                ",eventType :" + eventType +
                                ",beforeData :" + beforeData +
                                ",afterData : " + afterData);

                    }
                }else {
                    System.out.println("current entry type: " + entryType);
                }
            }
        }
    }
}

5.2 Business Case Requirements and Implementation 2

Requirement description

Business systems often need to propagate changes of a table to multiple storages, an upgraded version of the previous requirement: after a table in MySQL changes, the change must be synchronized to another database and also to ES, Redis, MongoDB and other storage.

For such a requirement there is a better option: flink-cdc.

The figure below is the business flow chart for a flink-cdc implementation given on the official website; it is easy to see that the principle is very similar to canal's.

[Figure: flink-cdc business flow (from the official website)]

To implement this requirement with flink-cdc, the rough steps are:

1. Enable and configure binlog on MySQL (a minimal my.cnf sketch follows this list);

2. Add the flink-cdc SDK dependency to the program;

3. Write a program that listens for change events on the source table and writes the data to the target stores;
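A minimal sketch of the binlog settings mentioned in step 1 (both flink-cdc and canal rely on ROW-format binlog; the config file path and server-id value are assumptions for the example):

# Add these options to the MySQL server config, then restart MySQL
cat >> /etc/mysql/my.cnf <<'EOF'
[mysqld]
log-bin=mysql-bin
binlog-format=ROW
server-id=1
EOF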

Sample code using flink-cdc is given below; interested readers can study it further.

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class CdcTest2 {

    public static void main(String[] args) throws Exception{

// 1. Create the execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.setParallelism(1);
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

// 2. Create the Flink-MySQL-CDC source table
        tableEnv.executeSql("CREATE TABLE user_info1 (" +
                " id STRING NOT NULL," +
                " name STRING NOT NULL," +
                " version INTEGER NOT NULL" +
                ") WITH (" +
                " 'connector' = 'mysql-cdc'," +
                " 'hostname' = 'IP'," +
                " 'port' = '3306'," +
                " 'username' = 'root'," +
                " 'password' = 'your_password'," +
                " 'database-name' = 'bank1'," +
                " 'table-name' = 'record'" +
                ")");

        //tableEnv.executeSql("select * from user_info1").print();

        Table table = tableEnv.sqlQuery("select * from user_info1");
        DataStream<Tuple2<Boolean, Row>> retractStream = tableEnv.toRetractStream(table, Row.class);

// Print the results
        retractStream.print();

        env.execute("flinkCdcSql");

    }

}

6. Offline data migration with middleware

As the demand for data migration keeps growing, a number of easy-to-use open-source middlewares have appeared to help migrate database tables. Some commonly used and relatively stable middlewares for data synchronization are introduced below.

6.1 Data migration or synchronization with canal

canal's features and how it works together with an application were briefly introduced above. canal is in fact quite powerful: it can synchronize data, import MySQL data into other storages, and cooperate with application code to handle some complicated, special scenarios.

6.1.1 Using canal for data migration or synchronization

Generally speaking, using canal for data migration or synchronization follows these steps:

1. Install the canal server;

2. Add a configuration file: configure the source database and table address, and the target database and table address (a sketch of the source-side configuration follows this list);

3. On top of step 2, finer-grained settings can be made, for example only synchronizing the changes of specific event types in a table, or choosing incremental versus full synchronization as required;

4. Once the configuration file is complete, start the service;

5. If a requirement cannot be met through configuration alone, more complex operations can be completed with application code.
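As a sketch of the source-side configuration from step 2, a canal instance is usually pointed at the source MySQL through conf/example/instance.properties. The property names below are standard canal settings, while the values are placeholders; writing the changes into a target database additionally requires the separate canal-adapter component with its own mapping configuration.

# Run from the canal installation directory; values are placeholders
cat > conf/example/instance.properties <<'EOF'
canal.instance.master.address=IP1:3306
canal.instance.dbUsername=canal
canal.instance.dbPassword=canal
# only capture changes of tables under the shop001 database
canal.instance.filter.regex=shop001\\..*
EOF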

6.2 Data migration with DataX

6.2.1 Introduction to DataX

DataX is an offline data synchronization tool/platform widely used inside Alibaba. It can synchronize data efficiently between heterogeneous data sources including MySQL, Oracle, HDFS, Hive, OceanBase, HBase, OTS, ODPS and more. DataX follows a framework-plus-plugin model, is open source, and its code is hosted on GitHub (see the DataX Git repository).

6.2.2 DataX features

As a data synchronization framework, DataX abstracts synchronization into Reader plugins that read from the source data source and Writer plugins that write to the target, so in theory the framework can support synchronization between any types of data source. The plugin system also acts as an ecosystem: every newly connected data source can immediately exchange data with all the existing ones.

6.2.3 Applicable scenarios

DataX is typically used while the service is suspended, to migrate a batch of data from one database to another in a short period of time, including between databases of different types.

6.2.4 DataX data migration process

The general process of migrating data with DataX is roughly as follows:

1. Download the installation package and unpack it;

2. Write a job configuration file describing the source database, table, connection and other information to be synchronized;

3. In the same file, configure the target database, table and connection information;

4. DataX does not require the source and target table names to be the same, and the fields do not even have to match exactly, as long as the data types are compatible;

5. Run DataX against the configuration file to complete the offline migration.

6.2.5 Sample DataX migration configuration

Below is a sample job configuration for a DataX migration; refer to the official documentation for more details. The reader section points at the source table user_info on IP1, the writer at the target table user_info_copy on IP2; the username and password values are placeholders to be replaced with real credentials.

{
    "job": {
        "setting": {
            "speed": {
                "channel": 3
            },
            "errorLimit": {
                "record": 0,
                "percentage": 0.02
            }
        },
        "content": [
            {
                "reader": {
                    "name": "mysqlreader",
                    "parameter": {
                        "username": "source_username",
                        "password": "source_password",
                        "column": [
                            "id",
                            "name"
                        ],
                        "connection": [
                            {
                                "table": [
                                    "user_info"
                                ],
                                "jdbcUrl": [
                                    "jdbc:mysql://IP1:3306/shop001"
                                ]
                            }
                        ]
                    }
                },
                "writer": {
                    "name": "mysqlwriter",
                    "parameter": {
                        "writeMode": "insert",
                        "username": "target_username",
                        "password": "target_password",
                        "column": [
                            "id",
                            "name"
                        ],
                        "connection": [
                            {
                                "jdbcUrl": "jdbc:mysql://IP2:3306/shop001",
                                "table": ["user_info_copy"]
                            }
                        ]
                    }
                }
            }
        ]
    }
}
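Assuming DataX has been unpacked under /opt/datax and the job above is saved as /opt/datax/job/mysql2mysql.json (both paths are assumptions for the example), the job is launched with DataX's datax.py entry script:

python /opt/datax/bin/datax.py /opt/datax/job/mysql2mysql.json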

7. Offline data migration with client tools

In daily work, when real-time requirements are low and the data volume is not particularly large, client tools can also be used for data migration. Several commonly used client tools that assist with data migration are introduced below.

7.1 Navicat

Navicat is a client tool most people are familiar with from daily development and operations. Besides providing day-to-day connections to many databases such as MySQL, PG and OB, it can also assist with routine data migration. For example, migrating a MySQL database to PostgreSQL can be done easily through the Navicat client; the concrete steps are as follows.

7.2 Migrating MySQL data to PostgreSQL with Navicat

7.2.1 Preparation

Before migrating, create connections to both MySQL and PostgreSQL and confirm that the accounts have all the permissions needed for the migration.

7.2.2 Migration configuration

Click Tools >> Data Transfer; in the window that pops up, select MySQL as the source and PostgreSQL as the target database.

7.2.3 Selecting tables for migration

Select all tables under the database

7.2.4 Tick Continue when errors are encountered

The reason is that an index name in MySQL only needs to be unique within its table, whereas in PostgreSQL an index is a schema-level object whose name must be unique across the schema. If related errors occur during the transfer, the affected indexes can be patched up with SQL afterwards.

According to the above operation steps, the data migration from mysql to pg can be completed.

7.3 Kettle

7.3.1 Kettle overview

Kettle is named after exactly that, a kettle: the project's lead programmer wanted to be able to pour all kinds of data into one pot and have it flow out again in a specified format. For data migration, Kettle is used to move all the data in one database into another database.

Kettle is an open-source ETL tool developed abroad, written in pure Java; it runs without installation and provides efficient and stable data extraction (a data migration tool).

7.3.2 Kettle data migration process

Kettle is widely used for big-data ETL. Once opened, the various operations can be configured directly in its graphical client, much as in Navicat, and the general idea is the same as the Navicat flow above.

Kettle can be used not only for offline migration between MySQL databases, but also for migration between different types of database, such as MySQL to MongoDB.

8. Closing remarks

This article has spent quite some space summarizing the common approaches to MySQL data migration and synchronization. Situations in production are far more complicated than these, for example adding new slave nodes under MySQL's master-slave mode, but the relatively general ideas given here can be consulted when similar scenarios come up in later work. That is the end of this article; thank you for reading.

Origin: blog.csdn.net/zhangcongyi420/article/details/130496538