Assignment requirements
The assignment is to insert 10 million records into the textbook table movies(title, year, length, movietype, studioname, producerC).
The primary key of movies is (title, year).
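The table already exists in the copied database, so no DDL is needed for the assignment itself. Purely to make the schema concrete, here is a minimal sketch of what movies might look like; the column types are my assumption (the real moviedb definition may differ), and pymysql is assumed to be installed.

import pymysql

# Column types below are assumptions; only the column names and the
# composite primary key (title, year) come from the assignment.
ddl = """
CREATE TABLE IF NOT EXISTS movies (
    title      VARCHAR(100) NOT NULL,
    year       INT          NOT NULL,
    length     INT,
    movietype  VARCHAR(20),
    studioname VARCHAR(30),
    producerC  INT,
    PRIMARY KEY (title, year)
)
"""

conn = pymysql.connect(host='localhost', user='root',
                       password='chouxianyu', database='newmoviedb')
with conn.cursor() as cur:
    cur.execute(ddl)
conn.commit()
conn.close()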
Analysis
Some searching turned up three common ways to speed up bulk inserts:
- Multiple INSERT statements can be merged into one, i.e., a single INSERT statement that inserts many tuples.
- Wrapping the inserts in explicit transactions avoids the overhead of starting a new transaction for every single INSERT statement.
- load data infile imports a data file into MySQL directly and appears to be the fastest (a rough sketch of this option follows the list).
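I did not end up using load data infile, but for completeness here is a rough sketch of that option, assuming pymysql and a MySQL server/client with local_infile enabled; the CSV path and row contents are only illustrative.

import csv
import pymysql

# Write a small illustrative CSV file (one row per tuple).
with open('movies.csv', 'w', newline='') as f:
    writer = csv.writer(f, lineterminator='\n')
    for year in range(1, 10001):
        writer.writerow(['mymovietitle', year, 120, 'sciFic', 'MGM', 100])

# Load it in one statement; requires local_infile on both server and client.
conn = pymysql.connect(host='localhost', user='root', password='chouxianyu',
                       database='newmoviedb', local_infile=True)
with conn.cursor() as cur:
    cur.execute(
        "LOAD DATA LOCAL INFILE 'movies.csv' INTO TABLE movies "
        "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n' "
        "(title, year, length, movietype, studioname, producerC)")
conn.commit()
conn.close()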
Although the third option seems fastest, I used the first two here: merged INSERT statements combined with transactions.
I used Python to generate the 10 million records (only the year in the primary key changes, so every tuple is distinct), organized as \(10 \times 100 \times 10000 = 10{,}000{,}000\) records added to movies: 10 transactions, each containing 100 INSERT statements, each of which inserts 10,000 tuples.
Implementation
The approach is as follows:
- Copy the original database moviedb to newmoviedb
- Set max_allowed_packet so that a single INSERT statement can hold enough tuples
- Use Python to generate one INSERT statement that inserts 10,000 tuples
- Use Python to generate a transaction containing 100 such INSERT statements and save it to a .sql file
- Run the .sql file in Navicat
This inserts 1 million records (327 s on my machine, which seems very slow!?).
Repeating the cycle, with the year values offset so that the primary key (title, year) does not collide, inserts the next 1 million records; ten such cycles give the full 10 million. A sketch of automating all ten cycles directly from Python is given below.
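If you prefer to skip Navicat entirely, here is a rough sketch of driving all ten cycles from Python with pymysql (assumed to be installed). The year-offset scheme here is only one possible choice that keeps (title, year) unique; it is not the exact numbering used by the scripts below.

import pymysql

NUM_TUPLES = 10000   # tuples per INSERT statement
NUM_STMTS = 100      # INSERT statements per transaction
NUM_CYCLES = 10      # transactions; 10 * 100 * 10000 = 10,000,000 rows

conn = pymysql.connect(host='localhost', user='root',
                       password='chouxianyu', database='newmoviedb')
cur = conn.cursor()
year = 0
for cycle in range(NUM_CYCLES):
    for stmt in range(NUM_STMTS):
        values = []
        for _ in range(NUM_TUPLES):
            year += 1    # a unique year per row keeps the primary key distinct
            values.append("('mymovietitle',%d,120,'sciFic','MGM',100)" % year)
        cur.execute(
            "INSERT INTO movies(title,year,length,movietype,studioname,producerC) VALUES "
            + ",".join(values))
    conn.commit()        # one commit per cycle: one transaction per million rows
cur.close()
conn.close()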
The steps are given below (unless otherwise noted, commands are run at the command line or in the mysql client):
Copy database
Create a new database newmoviedb
Log in to MySQL, then create the database:
mysql -u root -p
CREATE DATABASE `newmoviedb` DEFAULT CHARACTER SET UTF8 COLLATE UTF8_GENERAL_CI;
Copy moviedb into newmoviedb:
mysqldump moviedb -u root -pchouxianyu --add-drop-table | mysql newmoviedb -u root -pchouxianyu
chouxianyu above is my MySQL password.
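The same copy can also be scripted from Python if preferred; a rough equivalent of the pipeline above using subprocess, assuming mysqldump and mysql are on the PATH:

import subprocess

# mysqldump writes the dump to a pipe, and the mysql client reads it into newmoviedb.
dump = subprocess.Popen(
    ['mysqldump', 'moviedb', '-u', 'root', '-pchouxianyu', '--add-drop-table'],
    stdout=subprocess.PIPE)
subprocess.run(['mysql', 'newmoviedb', '-u', 'root', '-pchouxianyu'],
               stdin=dump.stdout, check=True)
dump.stdout.close()
dump.wait()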
Switch to newmoviedb:
use newmoviedb;
Setting max_allowed_packet
Set max_allowed_packet to 100 MB:
set global max_allowed_packet = 100*1024*1024;
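Note that SET GLOBAL only affects connections opened after the change, so reconnect before running the big inserts. A quick check from Python (again assuming pymysql is installed):

import pymysql

conn = pymysql.connect(host='localhost', user='root',
                       password='chouxianyu', database='newmoviedb')
with conn.cursor() as cur:
    cur.execute("SHOW VARIABLES LIKE 'max_allowed_packet'")
    print(cur.fetchone())   # expect ('max_allowed_packet', '104857600') for 100 MB
conn.close()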
Delete all tuples in movies (used during debugging):
delete from movies;
Generate an insert statement
Below is insert.py:
insertStr = "INSERT INTO movies(title,year,length,movietype,studioname,producerC) VALUES"
value1_str = "('mymovietitle',"          # everything before the year value j
value2_str = ",120,'sciFic','MGM',100)"  # everything after the year value
num_value = 10000                        # number of tuples in the single INSERT statement
# open the output file in write mode, clearing any previous contents
f = open(r'C:\Users\Cxy\Documents\Navicat\MySQL\Servers\MySQL\newmoviedb\insertRow.sql', 'w')
f.write(insertStr)
# tuples 1 .. num_value-1, each followed by a comma
for j in range(1, num_value):
    f.write(value1_str)
    f.write(str(j))
    f.write(value2_str)
    f.write(',')
# the last tuple, followed by a semicolon instead of a comma
f.write(value1_str)
f.write(str(num_value))
f.write(value2_str)
f.write(';')
f.close()
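As an alternative to running the file in Navicat, the generated insertRow.sql contains a single statement, so it can also be sent directly with pymysql. A sketch, using the same path and credentials as above:

import pymysql

with open(r'C:\Users\Cxy\Documents\Navicat\MySQL\Servers\MySQL\newmoviedb\insertRow.sql') as f:
    sql = f.read()

# max_allowed_packet here is pymysql's client-side send limit, raised to match the server setting.
conn = pymysql.connect(host='localhost', user='root', password='chouxianyu',
                       database='newmoviedb', max_allowed_packet=100 * 1024 * 1024)
with conn.cursor() as cur:
    cur.execute(sql)   # the file holds one INSERT statement with 10,000 tuples
conn.commit()
conn.close()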
Generating a transaction
Below is transaction.py:
transaction_begin_str = "START TRANSACTION;\n"
transaction_end_str = "COMMIT;\n"
insertStr = "INSERT INTO movies(title,year,length,movietype,studioname,producerC) VALUES"
value1_str = "('mymovietitle',"          # everything before the year value
value2_str = ",120,'sciFic','MGM',100)"  # everything after the year value
num_value = 10000   # tuples per INSERT statement
num_sql = 100       # INSERT statements in the transaction
# open the output file in write mode, clearing any previous contents
f = open(r'C:\Users\Cxy\Documents\Navicat\MySQL\Servers\MySQL\newmoviedb\transaction.sql', 'w')
# write the SQL statements to the file
f.write(transaction_begin_str)
for i in range(1, num_sql+1):
    f.write(insertStr)
    # tuples 1 .. num_value-1 of statement i, each followed by a comma;
    # i*num_value*10 offsets the year so (title, year) never repeats within this file,
    # and the offset must be changed for the next million-row cycle
    for j in range(1, num_value):
        f.write(value1_str)
        f.write(str(i*num_value*10+j))
        f.write(value2_str)
        f.write(',')
    # the last tuple of statement i, followed by a semicolon
    f.write(value1_str)
    f.write(str(i*num_value*10+num_value))
    f.write(value2_str)
    f.write(';\n')
f.write(transaction_end_str)
# close the file
f.close()
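Instead of opening the generated transaction.sql in Navicat, it can also be fed to the mysql command-line client from Python. A sketch, assuming mysql is on the PATH and using the path and password shown above:

import subprocess

sql_path = r'C:\Users\Cxy\Documents\Navicat\MySQL\Servers\MySQL\newmoviedb\transaction.sql'
with open(sql_path) as f:
    subprocess.run(['mysql', '-u', 'root', '-pchouxianyu', 'newmoviedb'],
                   stdin=f, check=True)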
Reference links
https://blog.csdn.net/qq_22855325/article/details/76087138
https://segmentfault.com/a/1190000016867644
https://mp.weixin.qq.com/s?__biz=MzUxMTgyNTQ2MA==&mid=2247483757&idx=1&sn=e8f9728ef6cd12c3c87195fcd4c7eb01&chksm=f96c8236ce1b0b20753e278ade6e58351c3c3bd8d1160d4ab1a0a3a9597bf9ab75bbf1cf2a9e&token=1803387227&lang=zh_CN#rd
https://zhuanlan.zhihu.com/p/55538088
https://zhuanlan.zhihu.com/p/39850961
Author: chouxianyu (臭咸鱼)
Please cite the source when reposting: https://www.cnblogs.com/chouxianyu/
Comments and sharing are welcome!