The complete process of migrating MySQL data to SQL Server

Why migrate?

Due to a system version and database upgrade, the testing process was blocked. To keep the data consistent with the system version, I urgently needed this environment for performance testing, so I asked the leaders and developers for approval. Once I got it, I had this learning opportunity, and I'm recording the whole process here.

Migration plan:

Combine tools and scripting: back up the MySQL database, restore the backup to a local MySQL instance, and then use third-party tools to complete the data migration.

Tools used:

The first migration tool

Microsoft SQL Server Migration Assistant for MySQL: recommended. It comes from Microsoft, but it has some issues; for example, the data in some tables cannot be migrated completely.

The second migration tool

Navicat Premium 12: Not recommended, slow and prone to failure

The third migration tool

Tapdata: also a decent third-party tool, but it is unstable and tends to run out of memory (the underlying engine is written in Java). Problems encountered during use have to be resolved through its customer service, whose response time is not ideal.

Comparison tool

UltraCompare: used to compare the migration results

Tool usage

Using the first migration tool

Microsoft SQL Server Migration Assistant for MySQL is produced by Microsoft. It is genuinely easy to use, and it is relatively fast.

Download and install it from https://www.microsoft.com/en-us/download/details.aspx?id=54257.

Here is how to use the tool; the specific steps are as follows:

Step 1: Create a Migration Project

Note that you need to select the version of the SQL Server database you are migrating to. Currently it supports SQL Azure, SQL Server 2005, SQL Server 2008, SQL Server 2012, and SQL Server 2014. Choose the target version according to your actual needs.
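
If you are not sure which version the target server is actually running, a quick check on the SQL Server side settles it before you create the project:

sql

-- Returns the SQL Server edition and version string
SELECT @@VERSION;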

Step 2: Connect the source database and the target database

The top half is the source (MySQL); the bottom half is the target (SQL Server).

Step 3: Select the database to be migrated and create a migration analysis report

This step analyzes every table structure in the database to be migrated and generates a feasibility report.

The generated report is as follows:

The report analyzes the objects that need to be converted: how many tables and databases there are, whether there are objects that cannot be converted, and other information. If any check fails, the errors are listed in the output below.

Step 4: Convert the schema, that is, the database structure

The migration is divided into two steps: 1) convert the database structure; 2) migrate the data.

Step 5: After the source schema has been converted, remember to perform a schema synchronization operation on the target database

Otherwise, the converted database structure will not be applied to the target database.

After clicking sync, there will also be a sync report:

After clicking OK, the actual synchronization runs: the converted structure is synchronized to the target database, creating the corresponding tables and other objects. When it finishes, the following output is produced:
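
To spot-check that the converted objects really landed in the target, a quick catalog query on the SQL Server side lists the user tables that now exist (a minimal sketch, assuming it is run against the migrated database):

sql

-- List the user tables now present in the target database
SELECT name FROM sys.tables ORDER BY name;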

Step 6: After the structure synchronization is completed, the next step is the data migration operation

We can see several tab pages on the right. The currently selected tab is Type Map, which lists the mapping between the field types of the source database and those of the target database.

This matters because data types differ between the two databases.
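
To see how each column's type actually landed on the SQL Server side after conversion, a sketch of a catalog query (run in the target database) is:

sql

-- Show every column and its mapped data type in the target database
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
ORDER BY TABLE_NAME, ORDINAL_POSITION;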

After clicking Migrate Data, you need to re-enter the source and target database passwords, and then the actual data migration starts.

Then just wait for it to finish; a data migration report is generated when it completes. At this point, the data migration is done.
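
As a quick sanity check on the migrated volume, a sketch of a per-table row-count query on the target is below (it reads the catalog views, so counts can be slightly approximate for tables being actively written):

sql

-- Approximate row count per user table in the target database
SELECT t.name AS table_name, SUM(p.rows) AS row_count
FROM sys.tables AS t
JOIN sys.partitions AS p
  ON p.object_id = t.object_id
 AND p.index_id IN (0, 1)   -- 0 = heap, 1 = clustered index
GROUP BY t.name
ORDER BY t.name;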

Using the second migration tool

Navicat Premium 12 is easier to operate because most steps are graphical and relatively simple.

The specific operation steps are as follows:

Create the MySQL and SQL Server connections.

Double-click the MySQL connection to establish a connection

Then select the tool in the upper-left corner of Navicat

Data will be imported automatically

Note: the tool does not synchronize constraints such as default values, but NOT NULL constraints are carried over to SQL Server.
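
Since default values are dropped, any that matter have to be re-created by hand on the SQL Server side. A minimal sketch, with a hypothetical table and column:

sql

-- Hypothetical example: restore a default of 0 on your_table.status
ALTER TABLE your_table
ADD CONSTRAINT DF_your_table_status DEFAULT (0) FOR status;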

Using the third migration tool

Tapdata is permanently free and easy to use. The specific usage is as follows:

Step 1: Configure the MySQL connection

1. In the left menu bar of the Tapdata Cloud console, click [Connection Management], then click the [Create Connection] button in the upper-right corner of the [Connection List] area to open the connection type selection page, and select MySQL

2. On the connection configuration page that opens, enter the required configuration information in turn:

[Connection name]: a name for the connection; the names of multiple connections cannot repeat

[Database address]: the database IP / host

[Port]: the database port

[Database name]: a Tapdata connection uses a single database as its data source; "database" here means one database inside the MySQL instance, not the instance itself

[Account]: an account that can access the database (a sketch of creating such an account follows this list)

[Password]: the password for that account

[Time zone]: the database's own time zone is used by default; if a time zone is specified, that setting is used instead
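
For the [Account] field, I used a dedicated MySQL user. The exact privileges Tapdata requires are not something I verified against its documentation, so this is a hedged sketch of a read-only account, plus the replication privileges that CDC-style incremental sync typically needs:

sql

-- Hypothetical account for the migration; adjust host/password/database as needed
CREATE USER 'tapdata_user'@'%' IDENTIFIED BY 'your_password';
GRANT SELECT ON your_database.* TO 'tapdata_user'@'%';
-- Only if incremental sync is used (assumption: it reads the binary log)
GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'tapdata_user'@'%';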

Step 2: Configure the SQL Server connection

3. As in step 1, click [Connection Management] in the left menu bar, then click the [Create Connection] button in the upper-right corner of the [Connection List] area to open the connection type selection page, and select SQL Server

4. On the connection configuration page that opens, enter the required configuration information in turn; after the configuration is complete, test the connection and save it.

Step 3: Select the synchronization mode - full / incremental / full + incremental

Go to the task management page in the Tapdata Cloud console and click the Add Task button to enter the task setup flow.

Select the source end and the target end according to the connections just created.

According to your data requirements, select the databases and tables to be synchronized. If you need to change table names, you can set the target table names in batches using the batch rename feature on the page.

After the above options are set, the next step is to select the synchronization type. The platform provides full synchronization, incremental synchronization, and full + incremental synchronization, and lets you set the write mode and the number of reads.

If full + incremental synchronization is selected, the Tapdata Agent automatically enters the incremental synchronization state after the full load finishes. In this state, it continuously monitors data changes on the source (inserts, updates, and deletes) and writes them to the target in real time.
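
Incremental sync from MySQL generally depends on the binary log; before relying on full + incremental mode, it may be worth confirming that binlog is enabled and seeing its format (this is an assumption about how CDC tooling usually works, not a requirement I checked in Tapdata's docs):

sql

-- Check that the MySQL binary log is enabled and see its format
SHOW VARIABLES LIKE 'log_bin';
SHOW VARIABLES LIKE 'binlog_format';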

Click the task name to open the task details page, where you can view the task details.

Click Task Monitor to open the task execution details page, where you can view specific information such as task progress/milestones.

Step 4: Perform data verification

Generally, after the synchronization completes, I habitually run data verification to avoid nasty surprises later.

Tapdata has three verification modes. I usually use the fastest one, quick count verification: I only need to select the tables to verify, without setting any other complicated parameters or conditions. It is simple and convenient.
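
The same check can also be done by hand: run a count on both sides for each table you care about and compare the numbers (the table name is a placeholder):

sql

-- Run on MySQL and on SQL Server, then compare the two results
SELECT COUNT(*) AS row_count FROM your_table;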

If that is not enough, you can also choose full-field value verification for the tables. In addition to selecting the tables to verify, you also need to set an index field for each table.

Full-field value verification also supports advanced verification, which lets you add JavaScript verification logic to check the source and target data.

There is also an associated-field value verification mode. When creating one, in addition to selecting the tables to verify, you also need to set an index field for each table.

That is the whole workflow for synchronizing MySQL data to SQL Server in real time.

SQL techniques used

MySQL section

Query all table names in a database

sql

select table_name from information_schema.tables where table_schema='your_database';

Query all table names, column names, and field types/lengths in a database

sql

SELECT TABLE_NAME AS 'table_name', COLUMN_NAME AS 'column_name', COLUMN_COMMENT, DATA_TYPE AS 'data_type', COLUMN_TYPE AS 'type_with_length' FROM information_schema.`COLUMNS` WHERE TABLE_SCHEMA='your_database' ORDER BY TABLE_NAME, COLUMN_NAME

SQL Server section

Query all table names in the current SQL Server database

sql

SELECT Name FROM SysObjects Where XType='U' ORDER BY Name;

Find duplicate rows in a table, grouped by ID

sql

SELECT id FROM your_table WHERE id <> '' GROUP BY id HAVING COUNT(*) > 1

Delete rows that are exact duplicates across every field of a table, keeping only one copy

sql

-- Delete one extra copy of a given duplicate id value at a time:
DELETE TOP (1) FROM your_table WHERE id = 'duplicate_id_value';
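
Deleting one row at a time is workable for a handful of duplicates; for larger cases, a common SQL Server pattern is sketched below (hypothetical column names; the PARTITION BY list should cover every column that defines a duplicate):

sql

-- Keep one row per identical (col1, col2, col3) combination and delete the rest
;WITH numbered AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY col1, col2, col3
                              ORDER BY (SELECT NULL)) AS rn
    FROM your_table
)
DELETE FROM numbered
WHERE rn > 1;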

Shrink the transaction log

sql

USE [master]
GO
ALTER DATABASE your_database SET RECOVERY SIMPLE WITH NO_WAIT
GO
ALTER DATABASE your_database SET RECOVERY SIMPLE   -- switch to simple recovery mode
GO
USE your_database
GO
DBCC SHRINKFILE (N'your_database_log' , 2, TRUNCATEONLY)  -- shrink the log file to 2 MB; adjust as needed
GO
USE [master]
GO
ALTER DATABASE your_database SET RECOVERY FULL WITH NO_WAIT
GO
ALTER DATABASE your_database SET RECOVERY FULL  -- restore full recovery mode
GO

Modify a table column's type

sql

ALTER TABLE your_table ALTER COLUMN column_name data_type(length)

Solving the SQL Server problem: the timeout expired or the server did not respond before the operation completed.

1. Click on the menu bar: Tools -> Options

2. Set the script execution timeout (according to your own needs, 0 means no limit)

3. Set the connection string timeout (according to your needs; the range is 1-65535)

Navicat Premium 16 Unlimited Trial

bat

@echo off

echo Delete HKEY_CURRENT_USER\Software\PremiumSoft\NavicatPremium\Registration[version and language]
for /f %%i in ('"REG QUERY "HKEY_CURRENT_USER\Software\PremiumSoft\NavicatPremium" /s | findstr /L Registration"') do (
    reg delete %%i /va /f
)
echo.

echo Delete Info folder under HKEY_CURRENT_USER\Software\Classes\CLSID
for /f %%i in ('"REG QUERY "HKEY_CURRENT_USER\Software\Classes\CLSID" /s | findstr /E Info"') do (
    reg delete %%i /va /f
)
echo.

echo Finish

pause

Problems encountered after successful data migration

  1. Data in some tables is duplicated, caused by repeated migration attempts; the duplicate rows have to be deleted manually.
  2. Some field types are changed: the migration tool automatically converts them to SQL Server-supported types, which affects some application services and prevents them from starting normally. Development colleagues need to locate these columns and change them to the correct types.
  3. Some tables may arrive with no primary key or index, which have to be added manually (a query to find such tables is sketched after this list).
  4. Modifying field types, indexes, and primary keys table by table is a very large amount of work.
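
For point 3, a quick way to find the tables that arrived without a primary key is to query the SQL Server catalog. A minimal sketch, run in the migrated database (it only checks for PRIMARY KEY constraints, not other indexes):

sql

-- User tables in the current database with no PRIMARY KEY constraint
SELECT t.name AS table_without_pk
FROM sys.tables AS t
WHERE NOT EXISTS (
    SELECT 1
    FROM sys.key_constraints AS kc
    WHERE kc.parent_object_id = t.object_id
      AND kc.type = 'PK'
)
ORDER BY t.name;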

Final thoughts

The entire migration took nearly two weeks in total, which was much harder than I expected, and the problems I ran into were genuinely difficult. I have to say that when the data volume is large, working with the data really does become a huge challenge.

Original link:
https://www.cnblogs.com/longronglang/p/16165672.html

If this article is helpful to you, please like and follow!
