Why MySQL batch commits are faster than single commits: an innodb_flush_log_at_trx_commit test analysis

Background

  There is a business scenario with heavy writes to the database, and it caused a problem: the log files filled up quickly, the DBA extended the storage, and it filled up again soon after... We then analyzed the code and found the culprit: rows were being committed one at a time. Committing every single row not only generates a large amount of redo log and undo log, it is also slow. Below we test why it is slow. This test only covers syncing the InnoDB log; the thread pool, binlog, index creation, and other factors are not considered.

innodb_flush_log_at_trx_commit

  Controls the strategy for flushing the log buffer to disk. There are three settings:

innodb_flush_log_at_trx_commit = 1    at each transaction commit, the log buffer is written to the log file on disk and flushed (fsync); this is the default and safest policy
innodb_flush_log_at_trx_commit = 0    the log buffer is written and flushed to disk about once per second, not at commit; a crash can lose up to 1 second of transactions
innodb_flush_log_at_trx_commit = 2    a mix of 0 and 1: the log buffer is written to the log file at each commit, but only flushed to disk about once per second, so an OS crash can lose up to 1 second of transactions
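The three settings differ in how far each committed record travels before the commit returns. A rough Python analogy (illustrative only, not MySQL internals): data sitting in an in-process buffer corresponds to policy 0, data handed to the OS page cache via flush corresponds to policy 2, and data forced onto the disk via fsync corresponds to policy 1.

```python
import os
import tempfile

# Three places a log write can stop, mirroring the three policies:
#   policy 0: data sits in a userspace buffer (lost if the process crashes)
#   policy 2: data is written to the OS page cache (lost if the OS crashes)
#   policy 1: data is fsync'ed to the physical disk (survives an OS crash)
path = os.path.join(tempfile.mkdtemp(), "redo.log")
f = open(path, "w")

f.write("transaction 1\n")   # still in Python's userspace buffer (like policy 0)
f.flush()                    # handed to the OS page cache (like policy 2)
os.fsync(f.fileno())         # forced onto the disk itself (like policy 1)
f.close()
```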

Create test table

The table definition is not mine; I was lazy and picked it up from the Internet.

drop table if exists test_flush_log;
create table test_flush_log(id int,name char(50))engine=innodb;

Create a stored procedure

The stored procedures are also taken from the Internet; one version was adapted from the other so that batch commits can be compared against single commits.

# batch commit
drop procedure if exists proc_batch;
delimiter $$
create procedure proc_batch(i int)
begin
    declare s int default 1;
    declare c char(50) default repeat('a',50);
    start transaction;
    while s<=i do
        insert into test_flush_log values(null,c);
        set s=s+1;
    end while;
    commit;
end$$
delimiter ;
# single commit
drop procedure if exists proc;
delimiter $$
create procedure proc(i int)
begin
    declare s int default 1;
    declare c char(50) default repeat('a',50);
    while s<=i do
        start transaction;
        insert into test_flush_log values(null,c);
        commit;
        set s=s+1;
    end while;
end$$
delimiter ;

 

testing scenarios

##############innodb_flush_log_at_trx_commit = 1 (the default policy)

mysql> call proc(100000);
Query OK, 0 rows affected (12.95 sec)

mysql> call proc_batch(100000);
Query OK, 0 rows affected (0.97 sec)


##############innodb_flush_log_at_trx_commit =0

mysql> set @@global.innodb_flush_log_at_trx_commit = 0;
mysql> call proc(100000);
Query OK, 0 rows affected (1.60 sec)

mysql> call proc_batch(100000);
Query OK, 0 rows affected (0.95 sec)


##############innodb_flush_log_at_trx_commit = 2

mysql> set @@global.innodb_flush_log_at_trx_commit = 2;
mysql> call proc(100000);
Query OK, 0 rows affected (2.99 sec)

mysql> call proc_batch(100000);
Query OK, 0 rows affected (0.96 sec)
Conclusions

  • Batch commits are faster than committing each row individually.
  • Under heavy writes, the = 1 policy is the slowest but the safest.
  • Every transaction commit flushes the redo log (and undo information) from the log buffer to disk by calling fsync, so each commit incurs an I/O wait.
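The cost of that per-commit fsync can be shown outside of MySQL. Below is a minimal Python sketch (an analogy, not MySQL code) that writes the same N records twice: once syncing the file to disk after every record, like proc, and once syncing only at the end, like proc_batch.

```python
import os
import tempfile
import time

N = 200  # number of simulated "transactions"
tmpdir = tempfile.mkdtemp()

def run(path, fsync_every_write):
    """Append N records; fsync after every record, or only once at the end."""
    start = time.perf_counter()
    with open(path, "w") as f:
        for i in range(N):
            f.write(f"record {i}\n")
            f.flush()                  # hand the data to the OS page cache
            if fsync_every_write:
                os.fsync(f.fileno())   # one disk sync per "commit" (like proc)
        if not fsync_every_write:
            os.fsync(f.fileno())       # one sync for the whole "batch" (like proc_batch)
    return time.perf_counter() - start

t_single = run(os.path.join(tmpdir, "single.log"), fsync_every_write=True)
t_batch = run(os.path.join(tmpdir, "batch.log"), fsync_every_write=False)
print(f"fsync per record: {t_single:.4f}s  one fsync: {t_batch:.4f}s")
```

The per-record variant does strictly more work (N fsync calls instead of one), so it is always slower; the gap grows with slower disks, which matches the proc vs proc_batch timings above.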

solution

  • Change single-row commits to batch commits, and/or set innodb_flush_log_at_trx_commit = 0 (note that it is a global variable), depending of course on the actual business requirements for durability.
  • Use solid-state disks (SSDs) to speed up I/O.
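On the application side, the usual fix is to group rows and commit once per group rather than once per row. A minimal sketch of the grouping logic is below; the cursor/conn calls in the comment are hypothetical placeholders for whatever database driver is in use, and the chunk size of 1000 is an arbitrary choice.

```python
def chunked(rows, size=1000):
    """Yield successive batches of at most `size` rows."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

# Each batch then becomes one transaction (driver calls are illustrative):
#   for batch in chunked(rows):
#       cursor.executemany("insert into test_flush_log values(%s,%s)", batch)
#       conn.commit()   # one redo-log flush per batch, not per row
```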

  



Origin www.cnblogs.com/--net/p/12748990.html