Foreword
This article introduces four methods for inserting large quantities of data into MySQL, shared here for your reference. Without further ado, let's look at the details.
Method 1: Insert in a loop
This is the most common approach. It is fine when the data volume is small, but every single insert consumes database connection resources.
The rough idea is as follows (pseudocode only; combine it with your own business logic or your framework's syntax):
for ($i = 1; $i <= 100; $i++) {
    $sql = 'insert...............';
    // query sql
}

foreach ($arr as $key => $value) {
    $sql = 'insert...............';
    // query sql
}

while ($i <= 100) {
    $sql = 'insert...............';
    // query sql
    $i++;
}
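To make the loop pseudocode above concrete and runnable without a MySQL server, here is a minimal sketch using an in-memory SQLite database through PDO; the table and column names are placeholder assumptions, and with MySQL only the DSN and credentials would change:

```php
<?php
// Loop insert: one statement execution per row.
// Uses sqlite::memory: so the sketch is self-contained; this is the
// per-row round-trip cost the article says makes this method slow.
$db = new PDO('sqlite::memory:');
$db->exec('CREATE TABLE tablename (username TEXT, password TEXT)');

$stmt = $db->prepare('INSERT INTO tablename (username, password) VALUES (?, ?)');
for ($i = 1; $i <= 100; $i++) {
    // Each execute() is a separate round-trip to the database.
    $stmt->execute(["user$i", "pass$i"]);
}

echo $db->query('SELECT COUNT(*) FROM tablename')->fetchColumn(); // 100
```

A prepared statement at least avoids re-parsing the SQL on every iteration, but each row still costs one round-trip.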
This approach is too common to need explanation, and it is not my main topic today, so I won't say more about it.
Method 2: Reduce connection overhead by splicing one SQL statement
Pseudocode:
// Assume the keys of $arr match the database column names;
// most PHP frameworks design their database layer this way.
$arr_keys = array_keys($arr);
$sql = 'INSERT INTO tablename (' . implode(',', $arr_keys) . ') values';
$arr_values = array_values($arr);
$sql .= " ('" . implode("','", $arr_values) . "'),";
$sql = substr($sql, 0, -1);
// After splicing, the SQL looks roughly like this:
INSERT INTO tablename (username, password) values
('xxx', 'xxx'), ('xxx', 'xxx'), ('xxx', 'xxx'),
('xxx', 'xxx'), ('xxx', 'xxx'), ('xxx', 'xxx')
.......
// query sql
Written properly, inserting ten thousand or so rows this way is basically no problem, which is enough for ordinary bulk inserts, such as batch-generating serial numbers, batch-generating random codes, and so on.
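The snippet above appends a single value group; to splice many rows into one statement, accumulate one group per row. A minimal sketch (the function name, table name, and sample data are placeholders, and real code must escape values or use prepared statements):

```php
<?php
// Build one multi-row INSERT statement from an array of rows.
// $rows: list of associative arrays whose keys match the column names.
function build_bulk_insert(string $table, array $rows): string
{
    $cols = array_keys($rows[0]);
    $sql  = 'INSERT INTO ' . $table . ' (' . implode(',', $cols) . ') VALUES ';
    $groups = [];
    foreach ($rows as $row) {
        // NOTE: values are inlined here for illustration only;
        // production code must escape them or bind parameters.
        $groups[] = "('" . implode("','", array_values($row)) . "')";
    }
    return $sql . implode(',', $groups);
}

$rows = [
    ['username' => 'a', 'password' => 'p1'],
    ['username' => 'b', 'password' => 'p2'],
];
echo build_bulk_insert('tablename', $rows);
// INSERT INTO tablename (username,password) VALUES ('a','p1'),('b','p2')
```

One statement now carries all the rows, so the connection is used once instead of once per row.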
Method 3: Use a stored procedure
Here I'll just hand over the SQL I have on hand; combine it with your specific business logic yourself.
delimiter $$$
create procedure zqtest()
begin
    declare i int default 0;
    set i = 0;
    start transaction;
    while i < 80000 do
        -- your insert sql
        set i = i + 1;
    end while;
    commit;
end
$$$
delimiter ;
call zqtest();
This is only test code; define the specific parameters yourself.
Here I inserted 80,000 rows. Although the row count is small, each row is large, with many varchar(4000) and text fields.
Time taken: 6.524 s.
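Timings like the one quoted above can be measured from PHP by wrapping the call with microtime(true); a minimal sketch where a short sleep stands in for the actual insert work:

```php
<?php
// Measure elapsed wall-clock time around a bulk-insert call.
$start = microtime(true);

// ... run the insert / stored-procedure call here ...
usleep(50000); // 50 ms placeholder standing in for the real workload

$elapsed = microtime(true) - $start;
printf("time-consuming %.3fs\n", $elapsed);
```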
Method 4: Use MySQL LOAD DATA LOCAL INFILE
This is the method I am currently using, so I've dug out the PDO code as well for your reference.
// Configure PDO with MYSQL_ATTR_LOCAL_INFILE enabled
public function pdo_local_info()
{
    global $system_dbserver;
    $dsn = 'mysql:dbname=' . $dbname . ';host=' . $ip . ';port=3306';
    $options = [PDO::MYSQL_ATTR_LOCAL_INFILE => true];
    $db = new PDO($dsn, $user, $pwd, $options);
    return $db;
}

// Pseudocode as follows
public function test()
{
    $arr_keys = array_keys($arr);
    $root_dir = $_SERVER["DOCUMENT_ROOT"] . '/';
    $fhandler = fopen($my_file, 'a+');
    if ($fhandler) {
        $sql = implode("\t", $arr);
        $i = 1;
        while ($i <= 80000) {
            $i++;
            fwrite($fhandler, $sql . "\r\n");
        }
        $sql = "LOAD DATA local INFILE '" . $my_file . "' INTO TABLE ";
        $sql .= "tablename (" . implode(',', $arr_keys) . ")";
        $pdo = $this->pdo_local_info();
        $res = $pdo->exec($sql);
        if (!$res) {
            // TODO: handle insert failure
        }
        @unlink($my_file);
    }
}
Again, each row is large, with many varchar(4000) and text fields.
Time taken: 2.160 s.
The above meets basic needs: a million rows is not much of a problem. Otherwise, if the data really is too large, you get into splitting databases and tables (sharding), or inserting through a queue.
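For volumes well beyond a million rows, a common first step toward queue-based insertion is splitting the row set into batches, each small enough for one spliced INSERT or one queue job. A minimal sketch (the function name and the batch size of 10,000 are arbitrary assumptions):

```php
<?php
// Split a large row set into fixed-size batches so each spliced
// INSERT statement (or queued job) stays a manageable size.
function chunk_rows(array $rows, int $batch_size = 10000): array
{
    return array_chunk($rows, $batch_size);
}

// 25,000 rows split into batches of 10,000 -> 10000 + 10000 + 5000.
$batches = chunk_rows(range(1, 25000));
echo count($batches); // 3
```

Each batch can then be handed to a worker, keeping any single statement or transaction bounded.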
Summary
That is all the content of this article. I hope it has some reference value for everyone's study or work, and thank you all for supporting 脚本之家.