A summary of MySQL deadlock problems encountered in production

 
If you run into a MySQL deadlock in production, the first step is to examine the deadlock log that InnoDB records. Execute:
 
show engine innodb status
 
 
to view the status of the InnoDB engine, then look for the LATEST DETECTED DEADLOCK section, which shows the two SQL statements involved in the most recently detected deadlock:
 
------------------------
LATEST DETECTED DEADLOCK
------------------------
161020 17:58:11
*** (1) TRANSACTION:
TRANSACTION ED354BF4, ACTIVE 0 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 6 lock struct(s), heap size 1248, 3 row lock(s), undo log entries 1
MySQL thread id 2938474, OS thread handle 0x2b9ffd19b940, query id 3121991643 192.168.1.163 apitest140715 Updating
 
UPDATE xxx SET fix_stock=fix_stock+-1 WHERE aaa = 1 AND aaa=101488 AND fix_stock+-1>=0 AND stock>=fix_stock+-1
 
*** (1) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 196984 page no 743 n bits 1272 index `xxxx` of table `xxx`.`xxxx` trx id ED354BF4 lock_mode X waiting
Record lock, heap no 581 PHYSICAL RECORD: n_fields 2; compact format; info bits 0
 
0: len 4; hex 80018c70; asc p;;
1: len 4; hex 80018ce8; asc ;;
 
*** (2) TRANSACTION:
TRANSACTION ED354C8C, ACTIVE 0 sec starting index read
mysql tables in use 1, locked 1
6 lock struct(s), heap size 1248, 4 row lock(s)
MySQL thread id 2938340, OS thread handle 0x2b9ffcae8940, query id 3121991660 192.168.1.115 163test Updating
update xxx
set fix_stock=fix_stock+1
where product_spec_id=101488
and fix_stock+1>=0
and stock>=fix_stock+1
*** (2) HOLDS THE LOCK(S):
RECORD LOCKS space id 196984 page no 743 n bits 1272 index `xxx` of table `shop_zp`.`gt_goods_warehouse_index` trx id ED354C8C lock_mode X
Record lock, heap no 581 PHYSICAL RECORD: n_fields 2; compact format; info bits 0
 
0: len 4; hex 80018c70; asc p;;
1: len 4; hex 80018ce8; asc ;;
 
*** (2) WAITING FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 196974 page no 2114 n bits 176 index `PRIMARY` of table `xxxx`.`xxxx` trx id ED354C8C lock_mode X locks rec but not gap waiting
Record lock, heap no 85 PHYSICAL RECORD: n_fields 35; compact format; info bits 0
0: len 4; hex 00018c70; asc p;;
1: len 6; hex 0000ed354bf4; asc 5K ;;
2: len 7; hex 7600011487203d; asc v =;;
3: len 1; hex 00; asc ;;
4: len 4; hex 80000000; asc ;;
5: len 1; hex 00; asc ;;
6: len 0; hex ; asc ;;
7: len 12; hex 373030323335343037303836; asc 700235407086;;
8: len 0; hex ; asc ;;
9: len 4; hex 0010225f; asc "_;;
10: len 4; hex 00000338; asc 8;;
 
11: len 4; hex 800186a0; asc ;;
12: len 4; hex 80000056; asc V ;;
13: len 4; hex 80000000; asc ;;
14: len 4; hex 80000000; asc ;;
15: len 9; hex 800000000000000577; asc w;;
16: len 5; hex 8000000000; asc ;;
17: len 5; hex 8000000000; asc ;;
18: len 1; hex 81; asc ;;
19: len 9; hex 800000000000000af0; asc ;;
20: len 1; hex 80; asc ;;
21: len 4; hex 0000011b; asc ;;
22: len 4; hex 000000e0; asc ;;
23: len 4; hex 80000000; asc ;;
24: len 4; hex 80000000; asc ;;
25: len 1; hex 81; asc ;;
26: len 4; hex d5684647; asc hFG;;
27: len 4; hex 58089533; asc X 3;;
28: len 0; hex ; asc ;;
 
29: len 5; hex 800002e505; asc ;;
30: len 5; hex 8000036303; asc c ;;
31: len 4; hex 000f4240; asc B@;;
32: len 4; hex 80000000; asc ;;
33: len 4; hex 0000803f; asc ?;;
34: SQL NULL;
 
*** WE ROLL BACK TRANSACTION (2)
 
When writing application code, it is best to modify tables in the same order within every transaction (for example, number all the tables and always modify them in ascending or descending order); this avoids most deadlocks. There is a related discussion of how to avoid MySQL deadlocks on Stack Overflow: http://stackoverflow.com/questions/2332768/how-to-avoid-mysql-deadlock-found-when-trying-to-get-lock-try-restarting-trans
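A sketch of that ordering rule with two hypothetical transactions (the table and column names below are invented for illustration). Because both transactions touch `accounts` before `orders`, neither can hold a lock the other needs while waiting for a lock the other holds:

```sql
-- Transaction A
START TRANSACTION;
UPDATE accounts SET balance = balance - 10 WHERE account_id = 1;
UPDATE orders   SET status = 'paid'        WHERE order_id = 42;
COMMIT;

-- Transaction B: different business logic, but the same table order
START TRANSACTION;
UPDATE accounts SET balance = balance + 10 WHERE account_id = 2;
UPDATE orders   SET status = 'refunded'    WHERE order_id = 43;
COMMIT;
```

The same idea applies within a single table: update rows in a consistent order, for example by ascending primary key.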
 
 

MySQL lock mechanism

   
Compared with other databases, MySQL's locking mechanism is relatively simple, and its most notable feature is that different storage engines support different locking mechanisms. For example, the MyISAM and MEMORY storage engines use table-level locking; the BDB storage engine uses page-level locking but also supports table-level locking; the InnoDB storage engine supports both row-level and table-level locking, and uses row-level locking by default.
 
These three types of MySQL locks can be compared along the following dimensions: overhead, locking speed, deadlock, granularity, and concurrency:
 
  • Table-level locks: low overhead, fast locking; no deadlocks; large locking granularity, the highest probability of lock conflicts, and the lowest concurrency.
  • Row-level locks: high overhead, slow locking; deadlocks are possible; the smallest locking granularity, the lowest probability of lock conflicts, and the highest concurrency.
  • Page-level locks: overhead and locking time between those of table locks and row locks; deadlocks are possible; locking granularity between table locks and row locks, and moderate concurrency.
 
From these characteristics it is hard to say in general which kind of lock is better; which is more suitable depends on the characteristics of the specific application. From the locking perspective alone: table-level locks are better suited to applications dominated by queries, with only a small amount of data updated by index conditions, such as typical web applications; row-level locks are better suited to applications with a large number of concurrent updates by index conditions alongside concurrent queries, such as online transaction processing (OLTP) systems. The same point is made in the development chapters of the book this discussion draws on, where table-type selection is introduced. The following sections focus on MySQL table locks and InnoDB row locks. Since BDB has been replaced by InnoDB and is about to become history, it is not discussed further here.
 
We currently use InnoDB, which differs from MyISAM in two respects: it supports transactions, and it uses row-level locks. Row-level locks and table-level locks differ in many ways.
 
With transaction support in place, concurrent transaction processing (compared with serial processing) can greatly increase the utilization of database resources, improve the transaction throughput of the database system, and support more users; but concurrency also brings some problems:
 
  • Lost Update: occurs when two or more transactions select the same row and then update it based on the originally selected value; because each transaction is unaware of the others, the last update overwrites updates made by the other transactions. For example, two editors make electronic copies of the same document, each independently changes their copy and then saves it, overwriting the original document; the editor who saves last overwrites the changes made by the other editor. This problem can be avoided if the second editor cannot access the file until the first editor has completed and committed the transaction.
  • Dirty Reads: a transaction is modifying a record, and before the transaction completes and commits, the record is in an inconsistent state; at this moment another transaction reads the same record. Without additional controls, the second transaction reads this "dirty" data and does further processing based on it, creating a dependency on uncommitted data. This phenomenon is vividly called a "dirty read".
  • Non-Repeatable Reads: some time after reading data, a transaction reads the same data again and finds that it has changed, or that some records have been deleted. This phenomenon is called a "non-repeatable read".
  • Phantom Reads: a transaction re-reads previously retrieved data using the same query conditions and finds new rows inserted by other transactions that satisfy those conditions. This phenomenon is called a "phantom read".
 
Not all of these problems are solved by the database's transaction machinery. "Lost update", for example, requires the application to take the necessary locks on the data it intends to update, so preventing it is the application's responsibility. The other three are database consistency problems, and the database must provide a transaction isolation mechanism for them. Databases implement transaction isolation in basically two ways:
 
 
  • Before reading the data, lock it to prevent other transactions from modifying it;
  • Without taking any locks, generate through some mechanism a consistent snapshot of the data as of the time of the request, and use this snapshot to provide a certain level of consistent reads; this is known as multi-version concurrency control (MVCC).
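The first approach, locking before reading, is what `SELECT … FOR UPDATE` provides, and it is also how an application can prevent the lost-update problem described earlier. A sketch, with hypothetical table and column names:

```sql
START TRANSACTION;
-- Take an exclusive lock on the row before reading the value we are
-- about to modify; a second transaction issuing the same
-- SELECT ... FOR UPDATE blocks here until we commit, so its read
-- cannot be stale
SELECT stock FROM product WHERE product_id = 101488 FOR UPDATE;
-- ... the application computes the new value ...
UPDATE product SET stock = stock - 1 WHERE product_id = 101488;
COMMIT;
```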
 
The stricter a database's transaction isolation, the smaller the concurrency side effects, but the higher the cost, because transaction isolation essentially "serializes" transactions to some degree, which is at odds with concurrency.
 
 
To resolve the tension between isolation and concurrency, ISO/ANSI SQL92 defines four transaction isolation levels. Each level provides a different degree of isolation and allows different side effects, and an application can choose an isolation level according to its business requirements to balance isolation against concurrency.
 
 
 
Isolation level     Read data consistency        Dirty read  Non-repeatable read  Phantom read
Read uncommitted    Lowest                       Yes         Yes                  Yes
Read committed      Statement level              No          Yes                  Yes
Repeatable read     Transaction level            No          No                   Yes
Serializable        Highest, transaction level   No          No                   No
 
Note: a given database does not necessarily implement all four of the above isolation levels exactly as specified.
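For reference, the isolation level can be inspected and changed per session or globally. A sketch (note the system variable is named `transaction_isolation` from MySQL 5.7.20 onward; older versions use `tx_isolation`):

```sql
-- Inspect the current session's isolation level
SELECT @@transaction_isolation;   -- @@tx_isolation on older servers

-- Change it for the current session only
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Change the default for all new connections
SET GLOBAL TRANSACTION ISOLATION LEVEL REPEATABLE READ;
```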
 
 
InnoDB row lock mode and locking method
 
 
  • Shared lock (S): allows the holding transaction to read a row, and prevents other transactions from obtaining an exclusive lock on the same data set (other transactions may still acquire shared locks);
  • Exclusive lock (X): allows the holding transaction to update the data, and prevents other transactions from acquiring shared read locks or exclusive write locks on the same data set;
  • Intention shared lock (IS): the transaction intends to set shared locks on individual rows; it must first obtain the IS lock on the table before setting a shared lock on a row;
  • Intention exclusive lock (IX): the transaction intends to set exclusive locks on individual rows; it must first obtain the IX lock on the table before setting an exclusive lock on a row.
 


 
 
 
Intention locks are added automatically by InnoDB and require no user intervention. For update, insert, and delete statements, InnoDB automatically sets exclusive locks on the rows involved; for ordinary select statements, InnoDB sets no locks at all. A transaction can explicitly set shared or exclusive locks on a record set with the following statements:
 
 
select * from table_name where … lock in share mode; -- shared lock
select * from table_name where … for update;         -- exclusive lock
 
 
 
InnoDB row locks are implemented by locking index entries, unlike Oracle, which locks the corresponding data rows inside data blocks. A consequence of this implementation is that InnoDB uses row-level locks only when it retrieves data through an index condition; otherwise table locks are used!
 
Because InnoDB's row locks are locks on index entries rather than on records, two transactions that access different rows will still conflict with each other if they use the same index key.
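A sketch of such a conflict, assuming a hypothetical table `tab` with a non-unique index on `name` and two rows that share the key value 'a':

```sql
-- Session 1: locks the index entry for name='a' leading to id=1
START TRANSACTION;
SELECT * FROM tab WHERE name = 'a' AND id = 1 FOR UPDATE;

-- Session 2: wants a *different* row (id=2), but must go through the
-- same index key name='a', so it blocks until session 1 commits
START TRANSACTION;
SELECT * FROM tab WHERE name = 'a' AND id = 2 FOR UPDATE;
```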
 
When a table has multiple indexes, different transactions can lock different rows through different indexes. Whether a primary key index, a unique index, or an ordinary index is used, InnoDB takes row locks on the data.
 
There is one special case: even if the condition uses an indexed column, whether MySQL actually uses the index to retrieve the data is decided by comparing the cost of different execution plans. If MySQL decides a full table scan is more efficient, for example on some very small tables, it will not use the index, and table locks will be used instead of row locks. Therefore, when analyzing lock conflicts, check the SQL execution plan to confirm that the index is really being used.
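A quick way to confirm this, using the hypothetical `tab` table again:

```sql
-- If the `type` column shows ALL (full table scan) instead of
-- ref/range, the statement is not retrieving rows through the index
-- you expected, and will not take the row locks you expected either
EXPLAIN SELECT * FROM tab WHERE name = 'a' FOR UPDATE;
```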
 
For InnoDB tables, row-level locks should be used in most cases, because transactions and row locks are often precisely why InnoDB was chosen; but in a few special situations table-level locks can also be considered:
 
  • A transaction needs to update most or all of the data in a relatively large table. With the default row locks, not only would the transaction execute slowly, it could also cause long lock waits and lock conflicts for other transactions;
  • A transaction involves multiple tables and is relatively complex, making it likely to trigger deadlocks and cause many transaction rollbacks. In this case, consider locking all of the tables involved at once, thereby avoiding deadlocks and reducing the overhead the database incurs from rolling back transactions.
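A sketch of taking table locks on InnoDB tables (`t1` and `t2` are hypothetical). Note that autocommit must be disabled first, and COMMIT must come before UNLOCK TABLES, otherwise the locks are released before the transaction's changes are committed:

```sql
SET autocommit = 0;
LOCK TABLES t1 WRITE, t2 READ;
-- ... bulk updates on t1, lookups against t2 ...
COMMIT;          -- commit while the table locks are still held
UNLOCK TABLES;   -- then release them
```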
 
MyISAM table locks are deadlock free, because MyISAM always acquires all the locks it needs at once: either all of them are granted, or it waits; so no deadlock can arise. In InnoDB, by contrast, except for transactions consisting of a single SQL statement, locks are acquired gradually, and this is what makes deadlocks possible in InnoDB.
 
After a deadlock occurs, InnoDB can usually detect it automatically, making one transaction release its locks and roll back while the other acquires the locks and completes. However, when external locks or table locks are involved, InnoDB cannot fully detect deadlocks, and the lock wait timeout parameter innodb_lock_wait_timeout must be used to resolve them. Note that this parameter is not only for deadlocks: under high concurrency, if a large number of transactions hang because they cannot obtain the locks they need immediately, they consume a great deal of resources and cause serious performance problems, potentially dragging down the entire database. Setting an appropriate lock wait timeout threshold helps avoid this.
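The timeout is expressed in seconds (the default is 50) and can be set per session; a sketch:

```sql
-- Inspect the current value
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';

-- Fail fast in this session instead of hanging for the default 50 s
SET SESSION innodb_lock_wait_timeout = 10;
```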
 
Generally speaking, deadlocks are an application design problem. By adjusting the business flow, the database object design, the transaction sizes, and the SQL statements that access the database, most deadlocks can be avoided.
 
 
 
 
 
