MySQL large data volume migration

A SQL dump exported from MySQL imports very slowly when the data volume is large: importing just 30 million rows took nearly 30 hours. Using a few mysqldump parameters sensibly at export time can speed up the import dramatically.

 

-e                        use the extended (multi-row VALUES) INSERT syntax, packing many rows into each INSERT statement; 
--max_allowed_packet=XXX  maximum size of the buffer used for client/server communication; 
--net_buffer_length=XXX   initial size of the TCP/IP and socket communication buffer; rows are packed into statements up to net_buffer_length bytes long
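For intuition, -e changes the shape of the generated statements. A sketch of the difference (table t and its rows are hypothetical):

```sql
-- Without -e: one INSERT statement per row (slow to parse and execute)
INSERT INTO t VALUES (1, 'a');
INSERT INTO t VALUES (2, 'b');

-- With -e: many rows per INSERT, far fewer statements for the server to process
INSERT INTO t VALUES (1, 'a'), (2, 'b');
```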

Note: max_allowed_packet and net_buffer_length must not be larger than the values configured on the target database, otherwise the import may fail with an error.

 

First, determine the parameter values on the target database:
mysql>show variables like 'max_allowed_packet'; 
mysql>show variables like 'net_buffer_length';

 

Then write the mysqldump command using those values. The general form (run from the shell, not the mysql> prompt) is: 
mysqldump -uroot -p <database name> -e --max_allowed_packet=1048576 --net_buffer_length=16384 > <SQL file>

For example:

mysqldump -uroot -p test_db -e --max_allowed_packet=1048576 --net_buffer_length=16384 > test_db.sql
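The dump is then loaded on the target side with the ordinary mysql client, which accepts the same buffer options. The sketch below only assembles and prints the import command so the flags are easy to check against the export (the credentials, database name test_db, and sizes are the example values from above):

```shell
#!/bin/sh
# Sketch: assemble the import command for the target server.
# These are the example sizes used above; they must not exceed the
# target server's configured max_allowed_packet / net_buffer_length.
MAX_PACKET=1048576
NET_BUF=16384

IMPORT_CMD="mysql -uroot -p --max_allowed_packet=${MAX_PACKET} --net_buffer_length=${NET_BUF} test_db < test_db.sql"
echo "${IMPORT_CMD}"
```

Printing the command first makes it easy to verify the values before running it against the target server.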
