MySQL Professional Series - Part 27: How does MySQL ensure data is not lost? Borrowing from this design for high-concurrency hotspot accounts and cross-database transfers

The goal of this MySQL series: by the end, to cover all the skills a senior developer needs, starting from the basics.

Feel free to add me on WeChat (itsoku) to discuss Java, algorithms, and database technologies.

This is Part 27 of the MySQL series.

In this article we look at how MySQL ensures that data is not lost. By understanding the principles behind this guarantee and the good design ideas inside it, we can borrow those ideas in our own applications and deepen our understanding.

Preliminaries

  1. Internally MySQL organizes data on disk with B+ trees. B+ tree nodes correspond to data pages in memory; the page is the smallest unit MySQL exchanges with disk, 16KB by default. Table records live in the leaf nodes of the B+ tree, so whenever we modify, delete, or insert data, the disk is operated on in units of pages.
  2. Sequential disk writes are far more efficient than random writes. On the mechanical hard disks we commonly use, writing data involves seeking and rotational addressing before the data is actually written, which takes a long time; sequential writes eliminate the seek and rotation overhead and can be orders of magnitude faster (see the sketch after this list).
  3. Reading and writing data in memory is several orders of magnitude faster than reading and writing data on disk.
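To get a feel for point 2, here is a minimal Java micro-benchmark sketch (the file name, block count, and 16KB block size are arbitrary choices, and on a system with a large OS write cache the gap may be partly masked) that writes the same amount of data first at ascending offsets and then at random offsets:

import java.io.RandomAccessFile;
import java.util.Random;

public class SeqVsRandomWrite {
    static final int BLOCK = 16 * 1024;   // 16KB, the size of a MySQL page
    static final int BLOCKS = 4096;       // 64MB in total

    public static void main(String[] args) throws Exception {
        byte[] page = new byte[BLOCK];
        try (RandomAccessFile f = new RandomAccessFile("bench.dat", "rw")) {
            f.setLength((long) BLOCK * BLOCKS);

            long t0 = System.nanoTime();
            for (int i = 0; i < BLOCKS; i++) {          // sequential pass: ascending offsets
                f.seek((long) i * BLOCK);
                f.write(page);
            }
            f.getFD().sync();
            long seq = System.nanoTime() - t0;

            Random rnd = new Random(42);
            long t1 = System.nanoTime();
            for (int i = 0; i < BLOCKS; i++) {          // random pass: arbitrary offsets
                f.seek((long) rnd.nextInt(BLOCKS) * BLOCK);
                f.write(page);
            }
            f.getFD().sync();
            long rand = System.nanoTime() - t1;

            System.out.printf("sequential: %d ms, random: %d ms%n",
                    seq / 1_000_000, rand / 1_000_000);
        }
    }
}

On a mechanical disk the sequential pass usually wins by a wide margin, which is exactly the property the redo log design below exploits.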

How MySQL ensures data is not lost: an analysis of the principle

Let's think about what the execution of these statements involves:

start transaction;
update t_user set name = '路人甲Java' where user_id = 666;
commit;

The straightforward approach would look like this:

  1. Find the page p1 where the user_id = 666 record is located and load p1 from disk into memory
  2. Modify the user_id = 666 record on p1 in memory
  3. MySQL receives the commit instruction
  4. Write page p1 to disk
  5. Return success to the client

This process does persist the data to disk.

Now let's change the requirement a little:

start transaction;
update t_user set name = '路人甲Java' where user_id = 666;
update t_user set name = 'javacode2018' where user_id = 888;
commit;

The process now looks like this:

  1. Find the page p1 where the user_id = 666 record is located and load p1 from disk into memory
  2. Modify the user_id = 666 record on p1 in memory
  3. Find the page p2 where the user_id = 888 record is located and load p2 from disk into memory
  4. Modify the user_id = 888 record on p2 in memory
  5. MySQL receives the commit instruction
  6. Write page p1 to disk
  7. Write page p2 to disk
  8. Return success to the client

What problems does this process have?

  1. If MySQL crashes right after step 6 succeeds, p1's modification has reached disk but p2's has not: the user_id = 666 record was updated while the user_id = 888 update was lost, leaving the data inconsistent
  2. p1 and p2 may sit at different positions on disk, so the writes are random disk writes, and the whole process takes too long

In short, two problems: the reliability of the data is not guaranteed, and random writes make the operation slow.

To solve these problems, let's see how MySQL optimizes: it introduces the redo log, which is a file on disk. For the update above, MySQL proceeds as follows.

Internally MySQL keeps a redo log buffer, an area of memory that we can think of as an array of records. Content destined for the redo log file is first written into the redo log buffer and later flushed from the buffer to the redo log file on disk. The redo log buffer is a memory area shared by all MySQL connections and is reused over and over.
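As a rough illustration (this is a toy model, not InnoDB's real implementation; the record fields and file name are invented), the redo log buffer can be pictured as a shared in-memory list that is cheap to append to and whose contents are written to an append-only file at commit:

import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class RedoLogBufferDemo {
    // A change record: "on this page, the bytes from posStart1 to posStart2 become value".
    record RedoRecord(long trxId, String page, int posStart1, int posStart2, String value) {}

    static final List<RedoRecord> buffer = new ArrayList<>(); // shared across connections

    static synchronized void append(RedoRecord r) { buffer.add(r); } // cheap memory write

    // At commit: wrap this transaction's records in start/end and append sequentially.
    static synchronized void flush(long trxId, FileWriter redoLog) throws IOException {
        redoLog.write("start trx=" + trxId + ";\n");
        for (RedoRecord r : buffer)
            if (r.trxId() == trxId) redoLog.write("write " + r + "\n");
        redoLog.write("end trx=" + trxId + ";\n");
        redoLog.flush();                                   // sequential append-only I/O
        buffer.removeIf(r -> r.trxId() == trxId);          // buffer space can be reused
    }

    public static void main(String[] args) throws IOException {
        append(new RedoRecord(10, "p1", 100, 116, "v1")); // rb1
        append(new RedoRecord(10, "p2", 200, 216, "v2")); // rb2
        try (FileWriter redoLog = new FileWriter("redo.log", true)) {
            flush(10, redoLog);                            // commit of trx 10
        }
    }
}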

  1. When MySQL receives start transaction, it generates a global transaction id trx_id, say trx_id = 10

  2. Call the user_id = 666 record r1 and the user_id = 888 record r2

  3. Find the data page p1 where r1 is located and load it from disk into memory

  4. Locate r1 on p1 in memory and modify p1 (the change can be described as: on p1, the value from position pos_start1 to pos_start2 becomes v1). Call this change record rb1 (it carries the transaction id trx_id) and put rb1 into the redo log buffer array. At this point p1 is modified in memory and no longer matches p1 on disk

  5. Find the data page p2 where r2 is located and load it from disk into memory

  6. Locate r2 on p2 in memory and modify p2 (on p2, the value from position pos_start1 to pos_start2 becomes v2). Call this change record rb2 (it carries trx_id) and put rb2 into the redo log buffer array. At this point p2 is modified in memory and no longer matches p2 on disk

  7. The redo log buffer array now holds two records: [rb1, rb2]

  8. MySQL receives the commit instruction

  9. The contents of the redo log buffer array are written to the redo log file, as:

    start trx=10;
    write rb1
    write rb2
    end trx=10;

  10. Return success to the client.

When this process finishes, the state of the data is:

  1. Pages p1 and p2 are modified in memory but not yet synchronized to disk, so the in-memory pages and the on-disk pages disagree; such in-memory pages are called dirty pages
  2. The modifications to p1 and p2 have been persisted to the redo log file on disk and will not be lost

Look carefully at step 9: a transaction that committed successfully is recorded in the redo log between a start and an end. If the start and end for a trx_id appear as a pair in the redo log file, the transaction executed successfully; if only the start appears with no end, something went wrong.

So when are the changes to pages p1 and p2 synchronized to disk?

The redo log file is shared by all MySQL connections. insert and delete statements go through the same process as the update above: MySQL first modifies the data pages in memory, then persists the change records to the redo log file, then returns success. The redo log file has a limited size and needs to be reused over and over (the redo log is in fact a ring of several files coordinated by a few variables so that its space can be recycled; we won't expand on that here, but the details are easy to find online). When the redo log is getting full, or when the system is relatively idle, its contents are processed as follows (a sketch of the replay rule follows the list):

  1. Read the redo log; once the complete information for a trx_id has been read, process it

  2. For example, the entire content for trx_id = 10 is read, including both start and end; this means the transaction executed successfully, so continue

  3. Check whether p1 is present in memory. If it is, apply the redo log information to the in-memory p1 and then write p1 to disk; if p1 is not in memory, load it from disk into memory, apply the redo log changes to it, then write it to disk

    After the update above, p1 exists in memory and has already been modified, so it can be flushed to disk directly.

    If MySQL crashed after the update above and restarted, p1 would not be in memory, and the system would recover it by reading the contents of the redo log file as described.

  4. Mark the redo log file space occupied by trx_id = 10 as processed; this space is released and can be reused

  5. If the content read in step 2 has no end for its trx_id, the transaction failed partway through (perhaps the crash happened halfway through the write in step 9); the record is invalid and is skipped
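Here is a minimal sketch of that replay rule, assuming the toy log format from the sketch above (start/end lines wrapping change records; the applyToPageAndFlush helper is a stand-in for real page I/O): only transactions whose start and end both appear get replayed, and a trailing transaction without an end is simply dropped.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class RedoLogReplay {
    public static void main(String[] args) throws IOException {
        List<String> pending = new ArrayList<>();
        boolean inTrx = false;
        for (String line : Files.readAllLines(Path.of("redo.log"))) {
            if (line.startsWith("start trx=")) {        // begin collecting a transaction
                pending.clear();
                inTrx = true;
            } else if (line.startsWith("end trx=")) {   // complete pair: safe to replay
                for (String change : pending)
                    applyToPageAndFlush(change);        // load page if absent, modify, write to disk
                inTrx = false;                          // this trx's log space can now be reused
            } else if (inTrx) {
                pending.add(line);                      // a change record
            }
        }
        // If the file ends while inTrx is still true, the trailing records have no "end":
        // the transaction failed halfway, so its records are invalid and are skipped.
    }

    static void applyToPageAndFlush(String change) {
        System.out.println("replay: " + change);        // stand-in for real page I/O
    }
}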

This process achieves the goal: the data pages are eventually persisted to disk and nothing is lost, so reliability is guaranteed.

Internally, a modification first operates on the page in memory and then writes the redo log file. The redo log is written sequentially, so it uses sequential I/O, which is very efficient, and the response to the user is correspondingly faster.

Persisting the changed data pages to disk happens asynchronously: the redo log contents are read and the pages are flushed to disk in the background. This is also a very good piece of design: asynchronous page flushing.

One caveat: if a transaction commits just as the redo log runs out of space, MySQL has to stop and process the redo log contents before carrying on, and during that period the whole system responds a little more slowly.

MySQL also has the binlog, which is written during transaction operations. A word on its role: the binlog records in detail every operation performed against the database; it is the database's running ledger, and a very important one. Master-slave replication is built on it: the slave reads the binlog from the master, replays it, and ends up consistent with the master. Other systems can use the binlog too; for example, a BI/ETL system can extract business data into a data warehouse through it. Alibaba provides a Java project, Canal, which simulates a slave reading the binlog from the master, so a Java program can watch every change flowing through the database in detail. Let your imagination run: there is a lot you can do with that, and interested readers should take a look. Since the binlog matters so much to MySQL, let's see how the system guarantees that the redo log and the binlog stay consistent, that is, that both are written successfully.

Take the same update as an example:

start transaction;
update t_user set name = '路人甲Java' where user_id = 666;
update t_user set name = 'javacode2018' where user_id = 888;
commit;

A transaction may contain many operations and therefore write a lot of binlog. To speed up writing, the binlog generated during a transaction is first written to an in-memory binlog cache, and at commit the cache contents are persisted to the binlog file in one shot. A transaction's binlog cannot be split up, so however large the transaction is, it must be written in a single pass. This is where the binlog cache comes in: the system allocates one per thread, and the binlog_cache_size parameter controls how much memory a single thread's binlog cache may occupy. If a transaction exceeds that size, the cache has to spill to disk.
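A rough sketch of the per-thread binlog cache idea (binlog_cache_size is a real parameter; the StringWriter cache, the overflow behavior, and the file names here are invented stand-ins for MySQL's internals, which spill to a temporary file rather than throwing):

import java.io.FileWriter;
import java.io.IOException;
import java.io.StringWriter;

public class BinlogCacheDemo {
    static final int BINLOG_CACHE_SIZE = 32 * 1024;     // per-thread cap, like binlog_cache_size

    // One cache per thread, so concurrent transactions never interleave their events.
    static final ThreadLocal<StringWriter> cache = ThreadLocal.withInitial(StringWriter::new);

    static void logEvent(String event) throws IOException {
        StringWriter w = cache.get();
        w.write(event + "\n");
        if (w.getBuffer().length() > BINLOG_CACHE_SIZE) {
            // Real MySQL spills the overflow to a temporary disk file; omitted in this sketch.
            throw new IOException("cache overflow: would spill to a temp file here");
        }
    }

    // At commit the whole transaction is appended to the binlog file in one shot,
    // so its events are never split up or interleaved with another transaction's.
    static synchronized void commitToBinlog(FileWriter binlogFile) throws IOException {
        binlogFile.write(cache.get().toString());
        binlogFile.flush();
        cache.get().getBuffer().setLength(0);           // reuse the cache
    }

    public static void main(String[] args) throws IOException {
        logEvent("trx=10 update t_user set name='路人甲Java' where user_id=666");
        logEvent("trx=10 update t_user set name='javacode2018' where user_id=888");
        try (FileWriter binlog = new FileWriter("binlog.dat", true)) {
            commitToBinlog(binlog);
        }
    }
}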

The process is as follows:

  1. When MySQL receives start transaction, it generates a global transaction id trx_id, say trx_id = 10

  2. Call the user_id = 666 record r1 and the user_id = 888 record r2

  3. Find the data page p1 where r1 is located and load it from disk into memory

  4. Modify p1 in memory

  5. Record the modification to p1 in the redo log buffer

  6. Record the change to p1 in the binlog cache

  7. Find the data page p2 where r2 is located and load it from disk into memory

  8. Modify p2 in memory

  9. Record the modification to p2 in the redo log buffer

  10. Record the change to p2 in the binlog cache

  11. MySQL receives the commit instruction

  12. The redo log buffer records carrying trx_id = 10 are written to the redo log file and persisted to disk. This operation is called redo log prepare, and the content reads:

    start trx=10;
    write rb1
    write rb2
    prepare trx=10;

    Note that this ends with prepare, not with end as before.

  13. The binlog cache records carrying trx_id = 10 are written to the binlog file and persisted to disk

  14. Write end trx=10; to the redo log, indicating that this transaction is complete in the redo log. This operation is called redo log commit

  15. Return success to the client

Let's analyze what can happen at various points in this process.

MySQL crashes after step 10 completes

Before the crash, all the changes existed only in memory. After MySQL restarts, the in-memory modifications are gone, but nothing had touched the disk, so the on-disk data is unaffected. The net effect is as if the transaction never happened.

MySQL crashes after step 12 completes

At this point the redo log prepare record has been written to the redo log file, but the binlog write has not happened. After restarting, MySQL reads the redo log for recovery, finds that the trx_id = 10 record is in the prepare state, and looks in the binlog for operations with trx_id = 10. They are absent, which means the binlog write failed, so this transaction can be rolled back.

MySQL crashes after step 13 completes

At this point the redo log prepare record has been written to the redo log file, and the binlog has been written as well. After restarting, MySQL reads the redo log for recovery, finds trx_id = 10 in the prepare state, looks in the binlog, finds the trx_id = 10 operations there, and then carries out steps 14 and 15 above.
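The recovery rule across these scenarios boils down to a few lines. A sketch with hypothetical inputs (RedoState and binlogHasTrx stand in for what the server learns by scanning its logs on restart):

public class TwoPhaseRecovery {
    enum RedoState { NONE, PREPARE, COMMIT }

    // Decision applied to each transaction found in the redo log on restart.
    static String recover(long trxId, RedoState redo, boolean binlogHasTrx) {
        if (redo == RedoState.COMMIT) return "redo complete: keep/replay";   // end record present
        if (redo == RedoState.PREPARE) {
            // prepare was written, so consult the binlog to break the tie
            return binlogHasTrx
                    ? "binlog written: finish the commit (write end)"
                    : "binlog missing: roll the transaction back";
        }
        return "nothing durable: the transaction never happened";            // crash before step 12
    }

    public static void main(String[] args) {
        System.out.println(recover(10, RedoState.NONE, false));    // crash after step 10
        System.out.println(recover(10, RedoState.PREPARE, false)); // crash after step 12
        System.out.println(recover(10, RedoState.PREPARE, true));  // crash after step 13
    }
}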

Conclusion

Two aspects of this design stand out:

Write the log first, with sequential I/O and asynchronous processing, for efficiency

Changes are first made to the data pages in memory and then persisted to the redo log file using sequential I/O; the redo log is later processed asynchronously, persisting the modified data pages to disk. This is highly efficient. The whole scheme is, in fact, the WAL technique so often mentioned in connection with MySQL. WAL stands for Write-Ahead Logging, and its key point is: write the log first, then flush the data to disk.

Two-phase commit keeps the redo log and the binlog consistent

To guarantee consistency between the binlog and the redo log, MySQL uses two-phase commit, splitting the redo log and binlog writes into three steps:

  1. Carrying the trx_id, write the redo log prepare record to disk

  2. Carrying the trx_id, write the binlog to disk

  3. Carrying the trx_id, write the redo log commit record to disk

The trx_id shared by all three steps is what lets the redo log and the binlog be correlated with each other, and the three-step ordering is what guarantees reliability.
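A sketch of the write ordering itself, reusing the toy log format from earlier (FileWriter.flush() here stands in for a real fsync-style durable write): the same trx_id is stamped on all three writes so recovery can correlate the redo log with the binlog.

import java.io.FileWriter;
import java.io.IOException;

public class TwoPhaseCommitWrites {
    // The order of these three durable writes is the whole protocol.
    static void commit(long trxId, FileWriter redoLog, FileWriter binlog,
                       String redoRecords, String binlogEvents) throws IOException {
        redoLog.write("start trx=" + trxId + ";\n" + redoRecords
                + "prepare trx=" + trxId + ";\n");
        redoLog.flush();                          // 1. redo log prepare, durable

        binlog.write(binlogEvents);
        binlog.flush();                           // 2. binlog, durable

        redoLog.write("end trx=" + trxId + ";\n");
        redoLog.flush();                          // 3. redo log commit, durable
    }

    public static void main(String[] args) throws IOException {
        try (FileWriter redo = new FileWriter("redo.log", true);
             FileWriter binlog = new FileWriter("binlog.dat", true)) {
            commit(10, redo, binlog,
                   "write rb1\nwrite rb2\n",
                   "trx=10 update t_user ... user_id=666\ntrx=10 update t_user ... user_id=888\n");
        }
    }
}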

We can borrow both of these good designs in everyday development. The following two common cases show how.

Case 1: handling high-frequency balance changes on e-commerce accounts

An e-commerce system has an account table and an account flow (transaction history) table, structured as follows:

drop table IF EXISTS t_acct;
create table t_acct(
  acct_id int primary key NOT NULL COMMENT 'account id',
  balance decimal(12,2) NOT NULL COMMENT 'account balance',
  version INT NOT NULL DEFAULT 0 COMMENT 'version number, +1 on every update'
)COMMENT 'account table';

drop table IF EXISTS t_acct_data;
create table t_acct_data(
  id int AUTO_INCREMENT PRIMARY KEY COMMENT 'id',
  acct_id int NOT NULL COMMENT 'account id',
  price DECIMAL(12,2) NOT NULL COMMENT 'transaction amount',
  open_balance decimal(12,2) NOT NULL COMMENT 'opening balance',
  end_balance decimal(12,2) NOT NULL COMMENT 'closing balance'
) COMMENT 'account flow table';

INSERT INTO t_acct(acct_id, balance, version) VALUES (1,10000,0);

We insert one row into the account table t_acct with a balance of 10000. Whenever an order is paid or the account is recharged, both tables are touched: the t_acct row is updated and a flow record is written to t_acct_data. Each flow record carries an opening and a closing balance, related as follows:

end_balance = open_balance + price;
open_balance is the value of t_acct.balance at the moment the operation runs.

For example, recharging account 1 with 100:

t1: start transaction;
t2: R1 = (select * from t_acct where acct_id = 1);
t3: create a variable
    v_balance = R1.balance;
t4: update t_acct set balance = v_balance+100, version = version + 1 where acct_id = 1;
t5: insert into t_acct_data(acct_id,price,open_balance,end_balance)
    values (1,100,#v_balance#,#v_balance+100#)
t6: commit;

Let's analyze the problems with this process.

Suppose we start two threads [thread1, thread2] that each recharge 100 at the same time. Under normal circumstances the data should end up like this:

t_acct row:
(1,10200,2);
t_acct_data gets 2 rows:
(1,100,10000,10100);
(2,100,10100,10200);

But if the two threads both reach t2 at the same moment, they read the same R1 and therefore the same v_balance, and after both finish the data looks like this:

t_acct row: (1,10100)
t_acct_data gets 2 rows:
(1,100,10000,10100);
(2,100,10000,10100);

The two t_acct_data rows are identical and one recharge has been lost. This is clearly wrong, and it is caused by concurrency.

The previous article explained that optimistic locking can solve this kind of concurrency problem (take a look if interested). The optimistic-lock version:

t1: start transaction
t2: R1 = (select * from t_acct where acct_id = 1);
t3: create a few variables
    v_version = R1.version;
    v_open_balance = R1.balance;
    v_balance = R1.balance + 100;
t4: perform the update
    int count = (update t_acct set balance = #v_balance#, version = version + 1 where acct_id = 1 and version = #v_version#);
t5: if(count==1){
        // write the flow record to t_acct_data
        insert into t_acct_data(acct_id,price,open_balance,end_balance) values (1,100,#v_open_balance#,#v_balance#)
        // commit the transaction
        commit;
    }else{
        // roll back the transaction
        rollback;
    }

With this version, if two threads execute t2 simultaneously they see the same R1, but when they reach t4 the database locks the row: the two updates queue up inside MySQL, and only one of them still matches version = #v_version#, so only one update reports an affected-row count of 1. By the check at t5, that thread commits and the other rolls back, which avoids the concurrency problem.
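For reference, a minimal JDBC sketch of this optimistic-lock flow against the tables above (the connection URL and credentials are placeholders; in real code a false return would typically trigger a retry):

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class OptimisticRecharge {
    public static boolean recharge(int acctId, BigDecimal price) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "pass")) {
            conn.setAutoCommit(false);
            try {
                BigDecimal open; int version;
                try (PreparedStatement ps = conn.prepareStatement(
                        "select balance, version from t_acct where acct_id = ?")) {
                    ps.setInt(1, acctId);
                    ResultSet rs = ps.executeQuery();
                    if (!rs.next()) { conn.rollback(); return false; }
                    open = rs.getBigDecimal(1);
                    version = rs.getInt(2);
                }
                BigDecimal end = open.add(price);
                int count;
                try (PreparedStatement ps = conn.prepareStatement(
                        "update t_acct set balance = ?, version = version + 1 " +
                        "where acct_id = ? and version = ?")) {   // the optimistic check
                    ps.setBigDecimal(1, end);
                    ps.setInt(2, acctId);
                    ps.setInt(3, version);
                    count = ps.executeUpdate();
                }
                if (count == 1) {                                  // we won the race
                    try (PreparedStatement ps = conn.prepareStatement(
                            "insert into t_acct_data(acct_id, price, open_balance, end_balance) " +
                            "values (?, ?, ?, ?)")) {
                        ps.setInt(1, acctId);
                        ps.setBigDecimal(2, price);
                        ps.setBigDecimal(3, open);
                        ps.setBigDecimal(4, end);
                        ps.executeUpdate();
                    }
                    conn.commit();
                    return true;
                }
                conn.rollback();                                   // stale version: caller may retry
                return false;
            } catch (Exception e) {
                conn.rollback();
                throw e;
            }
        }
    }
}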

Is there still a problem with this approach?

As just described, under heavy concurrency only some attempts succeed: if 10 threads execute t2 at the same time, only one of them succeeds and the other nine fail. Under large concurrency the failure rate is high, as a load test will confirm, so let's keep optimizing.

The analysis shows the problem lies mainly in writing t_acct_data. Without that insert, a single update would complete the operation very quickly. We saw above that MySQL writes a log first and flushes pages asynchronously; borrowing that idea, we can record a transaction log first and then asynchronously generate the t_acct_data flow records from that log.

Continuing the optimization, we add an account operation log table:

drop table IF EXISTS t_acct_log;
create table t_acct_log(
  id INT AUTO_INCREMENT PRIMARY KEY COMMENT 'id',
  acct_id int NOT NULL COMMENT 'account id',
  price DECIMAL(12,2) NOT NULL COMMENT 'transaction amount',
  status SMALLINT NOT NULL DEFAULT 0 COMMENT 'status, 0: pending, 1: processed'
) COMMENT 'account operation log table';

We also rework the t_acct table, adding a new field old_balance; the new structure:

drop table IF EXISTS t_acct;
create table t_acct(
  acct_id int primary key NOT NULL COMMENT 'account id',
  balance decimal(12,2) NOT NULL COMMENT 'account balance',
  old_balance decimal(12,2) NOT NULL COMMENT 'account balance (old value)',
  version INT NOT NULL DEFAULT 0 COMMENT 'version number, +1 on every update'
)COMMENT 'account table';

INSERT INTO t_acct(acct_id, balance,old_balance,version) VALUES (1,10000,10000,0);

The new old_balance field starts out equal to balance; the job below updates it later. Keep reading for now; the explanation follows.

Suppose a transaction of amount v_price is applied to account v_acct_id:

t1. start transaction;
t2. insert into t_acct_log(acct_id,price,status) values (#v_acct_id#,#v_price#,0)
t3. int count = (update t_acct set balance = balance + #v_price#, version = version + 1 where acct_id = #v_acct_id# and balance + #v_price# >= 0);
t4. if(count==1){
        // commit the transaction
        commit;
    }else{
        // roll back the transaction
        rollback;
    }

Notice that no flow record is written here: we only insert a row into t_acct_log. Afterwards, a job asynchronously turns t_acct_log rows into t_acct_data records.

This version supports much higher concurrency; in testing it sustained around 500 operations per second, all succeeding, which is very efficient.

Add a job that queries t_acct_log for records with status 0 and processes them one by one, as follows:

Assume the t_acct_log record currently being processed is L1
t1: start transaction
t2: create variables
    v_price = L1.price;
    v_acct_id = L1.acct_id;
t3: R1 = (select * from t_acct where acct_id = #v_acct_id#);
t4: create a few variables
    v_version = R1.version;
    v_open_balance = R1.old_balance;
    v_old_balance = R1.old_balance + v_price;
t5: int count = (update t_acct set old_balance = #v_old_balance#,version = version + 1 where acct_id = #v_acct_id# and version = #v_version#);
t6: if(count==1){
        // write the flow record to t_acct_data
        insert into t_acct_data(acct_id,price,open_balance,end_balance) values (#v_acct_id#,#v_price#,#v_open_balance#,#v_old_balance#);
        // set the t_acct_log record's status to 1
        count = (update t_acct_log set status=1 where status=0 and id = #L1.id#);
    }

    if(count==1){
        // commit the transaction
        commit;
    }else{
        // roll back the transaction
        rollback;
    }

The version condition added to the update in t5 and the status=0 condition added to the update in t6 are there to guard against concurrent modifications corrupting the data.

Once every status = 0 record in t_acct_log has been processed, old_balance and balance in the t_acct table become equal again.

This approach, writing an account operation log first and then asynchronously processing the log to generate the flow records, is borrowed straight from MySQL's design, and we can reuse it in our own systems.

Case 2: cross-database transfers

Here we solve it with the two-phase commit idea described above.

Requirement: transfer 100 from an account in database A to an account in database B.

We create a database C with a new transfer order table in it:

drop table IF EXISTS t_transfer_order;
create table t_transfer_order(
  id int NOT NULL AUTO_INCREMENT primary key COMMENT 'order id',
  from_acct_id int NOT NULL COMMENT 'source account',
  to_acct_id int NOT NULL COMMENT 'destination account',
  price decimal(12,2) NOT NULL COMMENT 'transfer amount',
  addtime int COMMENT 'creation time (seconds)',
  status SMALLINT NOT NULL DEFAULT 0 COMMENT 'status, 0: pending, 1: transfer succeeded, 2: transfer failed',
  version INT NOT NULL DEFAULT 0 COMMENT 'version number, +1 on every update'
) COMMENT 'transfer order table';

Databases A and B each get the following 3 tables:

drop table IF EXISTS t_acct;
create table t_acct(
  acct_id int primary key NOT NULL COMMENT 'account id',
  balance decimal(12,2) NOT NULL COMMENT 'account balance',
  version INT NOT NULL DEFAULT 0 COMMENT 'version number, +1 on every update'
)COMMENT 'account table';

drop table IF EXISTS t_order;
create table t_order(
  transfer_order_id int primary key NOT NULL COMMENT 'transfer order id',
  price decimal(12,2) NOT NULL COMMENT 'transfer amount',
  status SMALLINT NOT NULL DEFAULT 0 COMMENT 'status, 1: transfer succeeded, 2: transfer failed',
  version INT NOT NULL DEFAULT 0 COMMENT 'version number, +1 on every update'
) COMMENT 'transfer order table';

drop table IF EXISTS t_transfer_step_log;
create table t_transfer_step_log(
  id int NOT NULL AUTO_INCREMENT primary key COMMENT 'id',
  transfer_order_id int NOT NULL COMMENT 'transfer order id',
  step SMALLINT NOT NULL COMMENT 'transfer step, 0: forward operation, 1: rollback operation',
  UNIQUE KEY (transfer_order_id,step)
) COMMENT 'transfer step log table';

The t_transfer_step_log table records the steps of a transfer. The unique constraint on (transfer_order_id, step) ensures that each step can be executed at most once, making the steps idempotent.
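That unique key lets the idempotency be enforced in code: a second attempt to log the same (transfer_order_id, step) raises a duplicate-key error, and the step's work can be skipped. A JDBC sketch of the pattern (URL, credentials, and the runOnce wrapper are illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLIntegrityConstraintViolationException;

public class IdempotentStep {
    // Returns true if this (order, step) ran for the first time, false if it already ran.
    static boolean runOnce(int transferOrderId, int step) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/c_db", "user", "pass")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "insert into t_transfer_step_log(transfer_order_id, step) values (?, ?)")) {
                ps.setInt(1, transferOrderId);
                ps.setInt(2, step);
                ps.executeUpdate();          // the unique key rejects a second insert
                // ... perform the step's real work in the same transaction ...
                conn.commit();
                return true;
            } catch (SQLIntegrityConstraintViolationException dup) {
                conn.rollback();             // step already executed: safe to skip
                return false;
            }
        }
    }
}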

Define a few variables:

v_from_acct_id: the account transferring out

v_to_acct_id: the account transferring in

v_price: the transfer amount

The entire transfer process is as follows:

Each step returns a value with one of three meanings: 0: processing (result unknown), 1: success, 2: failure.

step1: create the transfer order with status 0 (processing)
C1: start transaction;
C2: insert into t_transfer_order(from_acct_id,to_acct_id,price,addtime,status,version)
    values(#v_from_acct_id#,#v_to_acct_id#,#v_price#,unix_timestamp(now()),0,0);
C3: put the id of the order just inserted into the variable v_transfer_order_id
C4: commit;

step2: operations in database A
A1: AR1 = (select * from t_order where transfer_order_id = #v_transfer_order_id#);
A2: if(AR1!=null){
        return AR1.status==1?1:2;
    }
A3: start transaction;
A4: AR2 = (select * from t_acct where acct_id = #v_from_acct_id#);
A5: if(AR2.balance<v_price){
        // insufficient balance, so the transfer certainly fails; insert a failed transfer order
        insert into t_order (transfer_order_id,price,status) values (#v_transfer_order_id#,#v_price#,2);
        commit;
        // return failure status 2
        return 2;
    }else{
        // use the optimistic lock and the balance - #v_price# >= 0 condition to update the funds, guarding against concurrency
        int count = (update t_acct set balance = balance - #v_price#, version = version + 1 where acct_id = #v_from_acct_id# and balance - #v_price# >= 0 and version = #AR2.version#);
        // count of 1 means the update above succeeded
        if(count==1){
            // insert a successful transfer order, status 1
            insert into t_order (transfer_order_id,price,status) values (#v_transfer_order_id#,#v_price#,1);
            // insert the step log
            insert into t_transfer_step_log (transfer_order_id,step) values (#v_transfer_order_id#,1);
            commit;
            return 1;
        }else{
            // insert a failed transfer order, status 2
            insert into t_order (transfer_order_id,price,status) values (#v_transfer_order_id#,#v_price#,2);
            commit;
            return 2;
        }
    }

step3:
    if(result of step2 == 1){
        // the deduction in database A succeeded
        go to step4;
    }else if(result of step2 == 2){
        // the deduction in database A failed
        go to step6;
    }

step4: operations in database B:
B1: BR1 = (select * from t_order where transfer_order_id = #v_transfer_order_id#);
B2: if(BR1!=null){
    return BR1.status==1?1:2;
}else{
    go to B3;
}
B3: start transaction;
B4: BR2 = (select * from t_acct where acct_id = #v_to_acct_id#);
B5: int count = (update t_acct set balance = balance + #v_price#, version = version + 1 where acct_id = #v_to_acct_id# and version = #BR2.version#);
if(count==1){
    // insert the order, status 1
    insert into t_order (transfer_order_id,price,status) values (#v_transfer_order_id#,#v_price#,1);
    // insert the step log
    insert into t_transfer_step_log (transfer_order_id,step) values (#v_transfer_order_id#,1);
    commit;
    return 1;
}else{
    // reaching here means a concurrent conflict; return 0
    rollback;
    return 0;
}

step5:
    if(result of step4 == 1){
        // crediting in database B succeeded
        go to step7;
    }

step6: operations in database C (transfer failed, mark the order failed)
C1: AR1 = (select * from t_transfer_order where id = #v_transfer_order_id#);
C2: if(AR1.status==1 || AR1.status==2){
        return AR1.status==1?"transfer succeeded":"transfer failed";
    }
C3: start transaction;
C4: int count = (update t_transfer_order set status = 2,version = version+1 where id = #v_transfer_order_id# and version = #AR1.version#)
C5: if(count==1){
        commit;
        return "transfer failed";
    }else{
        rollback;
        return "processing";
    }

step7: operations in database C (transfer succeeded, mark the order successful)
C1: AR1 = (select * from t_transfer_order where id = #v_transfer_order_id#);
C2: if(AR1.status==1 || AR1.status==2){
        return AR1.status==1?"transfer succeeded":"transfer failed";
    }
C3: start transaction;
C4: int count = (update t_transfer_order set status = 1,version = version+1 where id = #v_transfer_order_id# and version = #AR1.version#)
C5: if(count==1){
        commit;
        return "transfer succeeded";
    }else{
        rollback;
        return "processing";
    }

We also need a compensation job that picks up transfer orders in database C that have been in status 0 for more than 10 minutes and processes them, as follows:

while(true){
    List list = select * from t_transfer_order where status = 0 and addtime+10*60<unix_timestamp(now());
    if(list is empty){
        // the query returned no records; exit the loop
        break;
    }
    // iterate over list and process each record
    for(Object r:list){
        // call step2 above for each record; the order status eventually becomes 1 or 2
    }
}

One remark: this job has a weakness, it can turn into an infinite loop. I'll leave working out the fix to you; feel free to leave a comment.

MySQL series directory

  1. Part 1: MySQL basics
  2. Part 2: MySQL data types in detail (important)
  3. Part 3: Essential administrator skills (must master)
  4. Part 4: Common DDL operations
  5. Part 5: DML operations summary (insert, update, delete)
  6. Part 6: select query basics
  7. Part 7: Mastering select query conditions and avoiding pitfalls
  8. Part 8: Sorting and paging in detail (order by & limit)
  9. Part 9: Grouped queries in detail (group by & having)
  10. Part 10: Dozens of common functions in detail
  11. Part 11: A deep understanding of joins and how they work
  12. Part 12: Subqueries
  13. Part 13: The elusive pitfalls caused by NULL
  14. Part 14: Transactions in detail
  15. Part 15: Views in detail
  16. Part 16: Variables in detail
  17. Part 17: Stored procedures & custom functions in detail
  18. Part 18: Flow control statements
  19. Part 19: Cursors in detail
  20. Part 20: Exception capture and handling in detail
  21. Part 21: What is an index?
  22. Part 22: How MySQL indexes work, in detail
  23. Part 23: MySQL index management in detail
  24. Part 24: How to use indexes correctly
  25. Part 25: Extraction and application of where conditions in the database
  26. Part 26: How to implement a distributed lock with MySQL?

This MySQL series will run to twenty-odd articles; stay tuned, and feel free to add me on WeChat (itsoku) or leave a message to discuss MySQL-related technology!

Original article: www.cnblogs.com/itsoku123/p/11757592.html