Detailed Explanation of MySQL Lock Mechanism - Table Lock and Row Lock

1. Database lock theory

A lock is a mechanism by which a computer coordinates concurrent access to a resource by multiple processes or threads.

In the database, in addition to the contention of traditional computer resources, such as CPU, RAM, I/O, etc., data is also a resource shared by many users. How to ensure the consistency and validity of concurrent data access is a problem that all databases must solve. In addition, lock conflict is also an important factor affecting the performance of concurrent access to the database.

Suppose we go to Taobao to buy a product and there is only one item left in stock. If someone else is buying it at the same moment, how do we decide who gets it? This is where transactions come in: we first read the item quantity from the inventory table, then insert an order, insert the payment information after payment, and finally update the item quantity. Throughout this process, locks protect the limited resource and address the problems of isolation and concurrency.

2. Classification of locks

2.1 Classification by type of data operation

The classification of locks is divided into read locks and write locks according to the type of data operation:

  • Read lock (shared lock, Share Lock): For the same data, multiple read operations can proceed at the same time without affecting each other. If transaction T adds a read lock to data object A, then T can only read A; other transactions can also add read locks to A, but cannot add write locks, until T releases its read lock on A. This ensures that other transactions can read A but cannot make any changes to it while the read lock is held.

  • Write lock (exclusive lock, Exclusive Lock): Only one write lock can be held at a time. Until the current write operation completes, it blocks both other write locks and read locks. If transaction T adds a write lock to data object A, only transaction T may read and modify A; no other transaction can place any type of lock on A until T releases the lock. This prevents any other transaction from acquiring a lock on the resource until the original lock is released at the end of the transaction. An exclusive lock is always applied during INSERT, UPDATE, and DELETE operations.

2.2 Granularity Classification by Data Operation

In a relational database, according to the granularity of data operations, it is divided into table locks, row locks and page locks. Table locks, row locks, and page locks are compared as follows:

  • Table lock : A lock with the largest locking granularity in MySQL, which means to lock the entire table currently being operated. It is simple to implement, consumes less resources, and is supported by most MySQL engines. The most commonly used MyISAM and InnoDB both support table-level locking. Table-level locks are divided into table shared read locks (shared locks) and table exclusive write locks (exclusive locks).

Features: small overhead and fast locking; no deadlocks; large locking granularity, the highest probability of lock conflicts, and the lowest concurrency.

  • Row lock : A kind of lock with the finest locking granularity in MySQL, which means that only the currently operated row is locked. Row-level locks can greatly reduce conflicts in database operations. Its locking granularity is the smallest, but the locking overhead is also the largest. Row-level locks are divided into shared locks and exclusive locks. The InnoDB storage engine uses row locks by default. There are two biggest differences between InnoDB and MyISAM: one is to support transactions (TRANSACTION); the other is to use row-level locks

Features: Row locks are costly and slow to add; deadlocks may occur; the locking granularity is the smallest, the probability of lock conflicts is the lowest, and the concurrency is also the highest.

  • Page lock : A page lock is a lock in MySQL whose locking granularity is between row-level locks and table-level locks. Table-level locks are fast, but have more conflicts, and row-level locks have fewer conflicts, but are slower. Therefore, a compromised page level is adopted to lock a group of adjacent records at a time.

Features: The overhead and locking time are between table locks and row locks: there will be deadlocks; the locking granularity is between table locks and row locks, and the concurrency is average.

3. Application of table lock

A table lock locks the entire table currently being operated on. It is simple to implement, consumes relatively few resources, locks quickly, and cannot deadlock, but it has the highest probability of triggering lock conflicts and the lowest concurrency. Both the MyISAM and InnoDB engines support table-level locks. In the MyISAM storage engine, a shared lock is automatically added for SELECT statements, and an exclusive lock is added for UPDATE/DELETE/INSERT operations.

3.1 Table lock related commands

(1) Manually add table locks

lock table table_name read(write), table_name2 read(write);

(2) View the command of the lock added on the table

show open tables


(3) The command to release the table lock

unlock tables

(4) Commands for analyzing table locks

 show status  like 'table%';

We can analyze the table locking situation of the system by inspecting the table_locks_waited and table_locks_immediate status variables, described as follows:

  • table_locks_waited: the number of times a table lock could not be acquired immediately and a wait was required
  • table_locks_immediate: the number of times a table lock was acquired immediately

If the value of table_locks_waited is relatively high, it means there is serious table-level lock contention, and we need to examine the application further to locate the problem.

3.2 Add a table shared read lock to the table

Table shared read lock, the table with shared read lock will not block the read requests of other sessions, but will block the write requests of other sessions.

SQL to create data:

create table mylock (
id int not null primary key auto_increment,
name varchar(20) default ''
) engine myisam;

insert into mylock(name) values('a');
insert into mylock(name) values('b');
insert into mylock(name) values('c');
insert into mylock(name) values('d');
insert into mylock(name) values('e');
Add a read lock to the mylock table (an example of a read lock blocking writes).

Summary:

A table with a shared read lock will not block the read (select) requests of other sessions, but will block the write (insert, update, delete) requests of the current session and other sessions.
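The behavior above can be sketched as a two-session session transcript (assuming the mylock table created earlier; the session labels are comments, not SQL):

```sql
-- Session 1: take a shared read lock on mylock
lock table mylock read;

select * from mylock;                    -- OK: session 1 can still read mylock
-- insert into mylock(name) values('f'); -- ERROR: session 1 cannot write mylock
-- select * from other_table;            -- ERROR: session 1 may only access the locked table

-- Session 2 (a different connection):
select * from mylock;                    -- OK: reads are not blocked
insert into mylock(name) values('f');    -- BLOCKS until session 1 releases the lock

-- Session 1: release the lock; session 2's insert then completes
unlock tables;
```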

3.3 Add an exclusive write lock to the table

An exclusive write lock is the familiar exclusive lock: it blocks other sessions' read and write operations on the same table. Only after the current exclusive lock is released can the reads and writes of other sessions proceed. Note also that after adding a write lock, the current session cannot read or write other tables, while other sessions can still read and write other tables.
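A minimal two-session sketch of the write-lock behavior, again using the mylock table:

```sql
-- Session 1: take an exclusive write lock on mylock
lock table mylock write;

select * from mylock;                        -- OK in session 1
update mylock set name = 'a2' where id = 1;  -- OK in session 1
-- select * from other_table;                -- ERROR: session 1 cannot touch other tables

-- Session 2:
select * from mylock;                        -- BLOCKS until session 1 unlocks

-- Session 1:
unlock tables;                               -- session 2's select now returns
```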

3.4 Intent Shared Lock and Intent Exclusive Lock

When a transaction needs to lock a resource, if that resource already holds a shared lock, the transaction can add another shared lock, but not an exclusive lock. If the resource is already occupied by an exclusive lock, the transaction can only wait for that lock to be released before acquiring the resource and adding its own lock.

InnoDB supports multi-granularity locking, allowing row locks and table locks to coexist. Intention locks are table-level locks. The purpose of an intention lock is to indicate which type of lock (shared or exclusive) the transaction will later need on rows of the table. That is, before a transaction acquires a shared lock on a row, it first adds an intention shared lock on the table; before it acquires an exclusive lock on a row (or rows), it first adds an intention exclusive lock on the table. Intention locks do not conflict with one another, so multiple transactions may hold IS and IX locks on the same table at the same time.

Two table-level locks of the InnoDB storage engine:

  • Intention shared lock (IS): indicates that the transaction intends to add shared locks to data rows; the transaction must obtain the IS lock on the table before adding a shared lock to a row.

  • Intention exclusive lock (IX): indicates that the transaction intends to add exclusive locks to data rows; the transaction must obtain the IX lock on the table before adding an exclusive lock to a row.

Notice:

  • The intention lock here is a table-level lock. It expresses an intention: it only indicates that the transaction is about to read or write some row; whether there is a conflict is judged when the row lock is actually taken. Intention locks are added automatically by InnoDB without user intervention.
  • IX and IS are table-level locks. They do not conflict with row-level X and S locks, only with table-level X and S locks.

The lock compatibility matrix of InnoDB is as follows (rows: the lock already held; columns: the lock being requested):

|    | X        | IX         | S          | IS         |
|----|----------|------------|------------|------------|
| X  | conflict | conflict   | conflict   | conflict   |
| IX | conflict | compatible | conflict   | compatible |
| S  | conflict | conflict   | compatible | compatible |
| IS | conflict | compatible | compatible | compatible |

When the lock mode requested by a transaction is compatible with the current lock, InnoDB grants the requested lock to the transaction; otherwise, if the request is not compatible, the transaction waits for the lock to be released.

3.5 Concurrent Insertion

To reduce contention between read and write locks, the MyISAM storage engine supports concurrent inserts. We can add the LOCAL keyword when read-locking a table, and other sessions can then still append records to the table. However, the session holding the read lock will only see the results of those inserts after it releases the lock. The syntax is as follows:

lock table table_name read local;

In this way, while the table holds a read lock, other sessions can still append records to it, but this must be used together with the global variable concurrent_insert. The enumeration values of the concurrent_insert parameter have the following meanings:

  • NEVER : After adding a read lock, other sessions are not allowed to write concurrently
  • AUTO : After the read lock is added, other sessions are allowed to write concurrently under the condition that there is no hole in the table (that is, no rows have been deleted).
  • ALWAYS : After adding a read lock, other sessions are allowed to write concurrently, even for tables with holes
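A short sketch of concurrent insertion with the mylock table, assuming concurrent_insert allows it:

```sql
-- Allow concurrent inserts even for tables with holes (2 = ALWAYS, 1 = AUTO, 0 = NEVER)
set global concurrent_insert = 2;

-- Session 1: read lock with the LOCAL modifier
lock table mylock read local;

-- Session 2: appending at the end of the table is NOT blocked
insert into mylock(name) values('g');

-- Session 1: the newly inserted rows become visible only after releasing the lock
unlock tables;
```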

View the settings of the current database:

show global variables like '%concurrent_insert%';


Change database settings:

set global concurrent_insert = ALWAYS;


3.6 MyISAM lock scheduling mechanism

For storage engines that use only table-level locks (such as MyISAM, MEMORY, and MERGE), write requests have higher priority than read requests: a pending write request is served before queued read requests even if the reads arrived first. By setting the system variable low_priority_updates=1, INSERT, UPDATE, DELETE, and LOCK TABLE WRITE statements will instead wait until there are no pending SELECT or LOCK TABLE READ operations on the affected tables.


3.7 Summary

  • Shared read locks are compatible with each other, but a shared read lock and an exclusive write lock are mutually exclusive, as are two exclusive write locks; in other words, reads and writes are serial.
  • Under certain conditions, MyISAM allows queries and inserts to be executed concurrently. We can use this to solve the problem of lock contention for queries and inserts on the same table in the application.
  • The default lock scheduling mechanism of MyISAM gives writes priority, which is not necessarily suitable for every application. Users can adjust read-write lock contention by setting the LOW_PRIORITY_UPDATES parameter, or by specifying the LOW_PRIORITY option in INSERT, UPDATE, and DELETE statements.
  • Due to the large locking granularity of table locks and the serialization between reading and writing, if there are many update operations, MyISAM tables may experience serious lock waiting. You can consider using the InnoDB storage engine to reduce lock conflicts.

4. Application of row lock

4.1 Basic introduction

The InnoDB storage engine uses row locks by default. Row-level locks have the smallest locking granularity, the lowest probability of lock conflicts, and the highest concurrency; however, their overhead is high, locking is slow, and deadlocks can occur. InnoDB row locks are based on indexes. For example:

select * from tab_with_index where id = 1 for update;

The FOR UPDATE clause locks the rows matching the condition; here id is an indexed column. If id were not indexed, InnoDB would lock the whole table instead.

There are two major differences between InnoDB and MyISAM: first, support for transactions (TRANSACTION); second, the use of row-level locks.

A MySQL transaction is a logical processing unit composed of a set of SQL statements. The transaction has the following four attributes, which are usually referred to as the ACID characteristics of the transaction.

  1. Atomicity: a transaction is the smallest unit of execution and cannot be split. Atomicity ensures that its actions either all complete or have no effect at all;
  2. Consistency (Consistency): before and after a transaction executes, the data must remain in a consistent state;
  3. Isolation: the database system provides an isolation mechanism so that when the database is accessed concurrently, one user's transaction is not interfered with by other transactions; concurrent transactions are independent of one another;
  4. Durability: once a transaction is committed, its changes to the data in the database are permanent; even if the database fails, they should not be affected.

In a typical application, multiple transactions run concurrently and often operate on the same data to complete their respective tasks (multiple users operating on the same data). Concurrency is necessary, but it may cause the following problems:

  1. Dirty read: while one transaction is accessing and modifying data, before the modification is committed, another transaction reads the same data and uses it. Because this data is uncommitted, what the second transaction reads is "dirty data", and operations based on dirty data may be incorrect.
  2. Lost update (lost modification): one transaction reads a piece of data, and another transaction also accesses it; after the first transaction modifies the data, the second transaction modifies it as well. The modification made by the first transaction is lost, hence the name lost modification.
    For example: transaction 1 reads A=20 from a table and transaction 2 also reads A=20; transaction 1 sets A=A-1 and transaction 2 also sets A=A-1; the final result is A=19, and transaction 1's modification is lost.
  3. Non-repeatable read (Unrepeatable read): within one transaction, the same data is read multiple times. Before this transaction ends, another transaction modifies the data, so the two reads within the first transaction may return different values. Because the data read twice within one transaction differs, this is called a non-repeatable read.
  4. Phantom read: similar to a non-repeatable read. It occurs when one transaction (T1) reads several rows of data, and then another concurrent transaction (T2) inserts some rows. In subsequent queries, T1 finds extra records that did not exist before, as if a hallucination had occurred, hence the name phantom read.

The difference between non-repeatable reads and phantom reads: a non-repeatable read focuses on modification (reading the same record multiple times and finding that some column values have changed), while a phantom read focuses on insertion or deletion (reading multiple times and finding that the number of records has changed).

There are four transaction isolation levels in MySQL:

  • READ-UNCOMMITTED (read uncommitted): The lowest isolation level that allows reading uncommitted data changes, which may cause dirty reads, phantom reads, or non-repeatable reads.
  • READ-COMMITTED (read committed): Allows to read data that has been committed by concurrent transactions, which can prevent dirty reads, but phantom reads or non-repeatable reads may still occur.
  • REPEATABLE-READ (repeatable read): Multiple reads of the same field within a transaction return consistent results unless the data is modified by the transaction itself. This prevents dirty reads and non-repeatable reads, but phantom reads are still possible.
  • SERIALIZABLE (serializable): The highest isolation level, fully compliant with the ACID isolation level. All transactions are executed one by one, so that there is no possibility of interference between transactions, that is to say, this level can prevent dirty reads, non-repeatable reads, and phantom reads.

The default isolation level of the MySQL InnoDB storage engine is REPEATABLE-READ (repeatable read).

4.2 Use of row locks

Create table SQL:

CREATE TABLE test_innodb_lock (
a INT(11),
b VARCHAR(16)
)ENGINE=INNODB;

insert into test_innodb_lock values(1,'b2');
insert into test_innodb_lock values(2,'2000');
insert into test_innodb_lock values(3,'3000');
insert into test_innodb_lock values(4,'4000');
insert into test_innodb_lock values(5,'5000');
insert into test_innodb_lock values(6,'6000');
insert into test_innodb_lock values(7,'7000');

create index idx_a on test_innodb_lock(a);
create index idx_b on test_innodb_lock(b);

Ordinary select statements do not lock records. If you want to add row locks to records during query, you can use the following two methods:

# add a shared lock to the records read
select ... from ... where ... lock in share mode;

# add an exclusive lock to the records read
select ... from ... where ... for update;

The above two statements must be used inside a transaction, because locks are released when the transaction commits. Therefore, before using them, start a transaction with BEGIN or START TRANSACTION, or disable autocommit.
The command to disable autocommit:

set autocommit = 0;

Row lock demo: lock a row with FOR UPDATE.

When two sessions update the same row, session 2 blocks.
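A sketch of the same-row conflict, using the test_innodb_lock table created above:

```sql
-- Session 1: disable autocommit so the row lock is held until commit
set autocommit = 0;
update test_innodb_lock set b = '1001' where a = 1;  -- row a=1 is now exclusively locked

-- Session 2:
set autocommit = 0;
update test_innodb_lock set b = '1002' where a = 1;  -- BLOCKS: same row

-- Session 1:
commit;  -- releases the row lock; session 2's update now proceeds
```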

When two sessions update different rows (session 1 updates the row where a = 1 and session 2 updates another row), both updates proceed normally without blocking.

Notice: if the index becomes invalid or the query condition does not use an index, the row lock degrades to a table lock. For example, if a value compared against a varchar column is written without quotes, the system performs an implicit type conversion and the index becomes invalid.

When the index is invalidated, the row lock becomes a table lock, and session 2's update is blocked until the table lock held by session 1 is released. If the index is not invalidated, the row lock does not block session 2's updates to other rows.
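A sketch of the escalation: b is VARCHAR and idx_b is an index on it, so an unquoted comparison value forces an implicit cast and bypasses the index:

```sql
-- Session 1: the unquoted value 4000 is cast, idx_b is not used, locking escalates
set autocommit = 0;
update test_innodb_lock set a = 41 where b = 4000;

-- Session 2: updating a DIFFERENT row still blocks, because session 1
-- effectively holds a table-wide lock
set autocommit = 0;
update test_innodb_lock set b = '9001' where a = 5;  -- BLOCKS until session 1 commits

-- With the properly quoted condition ... where b = '4000' ... the index is used,
-- only the matching row is locked, and session 2 is not blocked.
```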

4.3 Algorithms for row locks

There are three algorithms for InnoDB storage engine row locks:

  • Record Lock : locks the index entry, locking the rows that satisfy the condition. Other transactions cannot modify or delete the locked records;
  • Gap Lock : locks the "gap" between index entries, i.e. a range of records (including the gap before the first record or after the last record) but not the index entries themselves. Other transactions cannot insert data within the locked range, which prevents them from adding phantom rows.
  • Next-key Lock : locks the index entry itself plus the adjacent index range, i.e. the combination of Record Lock and Gap Lock. It solves the phantom read problem.

4.3.1 Record Lock

InnoDB uses Next-key Lock for row queries. Next-key Lock is used to solve the phantom read problem (Phantom Problem).

When the queried index is unique, InnoDB downgrades the Next-key Lock to a Record Lock. For example, the following locking read:

SELECT id FROM user WHERE id = 1 FOR UPDATE;

When the id column is a unique index column, only the index record with id = 1 is locked; a Record Lock is used here.

4.3.2 Gap Lock

Gap Lock is a lock on the gap between index records, or a lock on the gap before the first index record or after the last index record.

For example, suppose column a of the test_innodb_lock table currently contains the values 1 and 5, and one session holds a lock on the range (1, 5). Another session then cannot insert the value 2 into column a, because the gap (1, 5) is covered by a gap lock.

When we retrieve data with range conditions rather than equality conditions and request shared or exclusive locks, InnoDB locks the index entries of the existing records that satisfy the condition; for key values that fall within the range but do not exist, called "gaps" (GAP), InnoDB also locks these gaps. This locking mechanism is the gap lock (combined with the record locks, a Next-Key lock).

Gap locks have a fairly serious weakness: after a range of key values is locked, even key values that do not exist are locked, and no data can be inserted anywhere in the locked range while the lock is held. In some scenarios this can be very harmful to performance.

The purpose of the Gap lock design is to prevent multiple transactions from inserting records into the same range, which can lead to phantom read problems.
There are two ways to explicitly disable gap locks (in which case only record locks are used, except for foreign-key constraint checks and duplicate-key checks):

  • Set the transaction isolation level to READ-COMMITTED (read committed)
  • Set the parameter innodb_locks_unsafe_for_binlog to 1 (note: this parameter was removed in MySQL 8.0)
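A sketch of a gap lock in action under REPEATABLE READ, reusing test_innodb_lock and assuming the rows with a = 2, 3, 4 have been deleted so that (1, 5) is a gap:

```sql
-- Create the gap: column a now contains 1, 5, 6, 7
delete from test_innodb_lock where a between 2 and 4;

-- Session 1: a range locking read takes gap locks under REPEATABLE READ
set autocommit = 0;
select * from test_innodb_lock where a between 1 and 5 for update;

-- Session 2: inserting into the locked gap blocks,
-- even though no row with a = 2 exists
insert into test_innodb_lock values(2, '2000');  -- BLOCKS until session 1 commits
```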

4.3.3 Next-key Lock

Next-key Lock locks the index item itself and the index range, that is, the combination of Record Lock and Gap Lock, which can solve the problem of phantom reading.

Example: When transaction T1 adds a shared or exclusive lock to r rows, it also adds a gap lock to the gap before r rows. At this time, another transaction T2 cannot insert new index records before r rows.
Suppose an index contains the values 10, 11, 13, and 20. The possible Next-Key Locks on this index cover the following intervals:

(-∞, 10]
(10, 11]
(11, 13]
(13, 20]
(20, +∞)
For the last interval, the Next-Key Lock locks the gap above the largest value in the index up to the "supremum" pseudo-record, whose value is higher than any actual value in the index. The supremum is not a real index record, so in effect this Next-Key Lock only locks the gap following the largest index value.

By default, InnoDB operates at the REPEATABLE READ transaction isolation level. In this case, InnoDB uses Next-Key Locks for searches and index scans, which prevent phantom reads.

Learn more about row locks: https://mp.weixin.qq.com/s/1LGJjbx_n_cvZndjM3R8mQ

4.4 Analysis of row lock contention

To analyze the row lock contention on the system by checking the InnoDB_row_lock status variable, the command is as follows:

 show status like 'innodb_row_lock%';

The description of each status quantity is as follows:

  • innodb_row_lock_current_waits: the number of locks currently being waited for;
  • innodb_row_lock_time: the total lock wait time since system startup (total waiting time)
  • innodb_row_lock_time_avg: the average time spent per wait (average waiting time)
  • innodb_row_lock_time_max: the longest single wait time since system startup
  • innodb_row_lock_waits: the total number of waits since system startup (total waiting count)

When the number of waits is high and each wait is not short, we need to analyze why the system has so many waits, and then formulate an optimization plan based on the analysis results.

4.5 Deadlocks and deadlock avoidance

A deadlock is a phenomenon in which two or more transactions hold resources the others need and each requests a lock held by the other, forming a cycle in which none can proceed.

InnoDB's row-level locks are implemented on indexes: if a query statement does not hit any index, InnoDB uses a table-level lock instead. Moreover, InnoDB's row-level locks are placed on index entries, not on the data records themselves, so even when different rows are accessed, lock conflicts still occur if the same index key is used.

Also, unlike MyISAM, which acquires all the locks it needs at once, InnoDB acquires locks gradually. When two transactions each need a lock held by the other, both sides wait, and a deadlock results. After a deadlock occurs, InnoDB can generally detect it, force one transaction to release its locks and roll back, and let the other acquire the lock and complete. We can reduce deadlocks in the following ways:

  • Use table-level locks to reduce the probability of deadlocks . For business parts that are very prone to deadlocks, you can try to use upgraded locking granularity to reduce the probability of deadlocks through table-level locking
  • Multiple programs try to agree to access tables in the same order . If different programs will access multiple tables concurrently, try to agree to access the tables in the same order, which can greatly reduce the probability of deadlock.
  • The same transaction should try to lock all the resources needed at one time , which can reduce the probability of deadlock
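A minimal sketch of how such a lock cycle arises, using the test_innodb_lock table from earlier:

```sql
-- Session 1:
begin;
update test_innodb_lock set b = 'x' where a = 1;  -- locks row a=1

-- Session 2:
begin;
update test_innodb_lock set b = 'y' where a = 2;  -- locks row a=2

-- Session 1: requests the row held by session 2 (blocks)
update test_innodb_lock set b = 'x' where a = 2;

-- Session 2: requests the row held by session 1 -> cycle
update test_innodb_lock set b = 'y' where a = 1;
-- InnoDB detects the deadlock and rolls one transaction back:
-- ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
```

Locking the rows in the same order in both sessions (a = 1 first, then a = 2) would have avoided the cycle, which is exactly the second suggestion above.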

4.6 Summary

Because the InnoDB storage engine implements row-level locking, the overhead of its locking mechanism may be higher than that of table-level locking, but in terms of overall concurrent processing capability it is far superior to MyISAM's table-level locking. When system concurrency is high, InnoDB's overall performance has clear advantages over MyISAM.

However, InnoDB's row-level locking also has its fragile side: when used improperly, InnoDB's overall performance may not only fail to exceed MyISAM's, but may even be worse.

Optimization suggestions for using the InnoDB storage engine:

  1. Whenever possible, make all data retrieval go through indexes, to avoid non-indexed access escalating row locks to table locks
  2. Design indexes reasonably to minimize the scope of locks
  3. Use as few range retrieval conditions as possible to avoid gap locks
  4. Try to control transaction size, reducing the amount of resources locked and the time they are held
  5. Use the lowest transaction isolation level the business allows

5. Page lock

Page lock is a kind of lock in MySQL whose locking granularity is between row-level lock and table-level lock. Table-level locks are fast, but have more conflicts, and row-level locks have fewer conflicts, but are slower. A compromise is made at the page level, locking a contiguous set of records at a time. The BDB storage engine supports page-level locks. The overhead and locking time are between table locks and row locks, and deadlocks will occur. The locking granularity is between table locks and row locks, and the concurrency is average.

6. The relationship between isolation level and lock

At the Read Uncommitted level, reading data does not require a shared lock, so that it will not conflict with the exclusive lock on the modified data

At the Read Committed level, the read operation needs to add a shared lock, but the shared lock is released after the statement is executed.

At the Repeatable Read level, read operations need to add a shared lock, but the shared lock is not released before the transaction is committed, that is, the shared lock must be released after the transaction is completed.

SERIALIZABLE is the most restrictive isolation level because it locks an entire range of keys and holds the lock until the transaction completes.

7. Optimistic locking and pessimistic locking of the database

The task of concurrency control in the database management system (DBMS) is to ensure that the isolation and unity of the transaction and the unity of the database are not destroyed when multiple transactions access the same data in the database at the same time. Optimistic concurrency control (optimistic lock) and pessimistic concurrency control (pessimistic lock) are the main technical means used in concurrency control.

  • Pessimistic lock: assumes that concurrency conflicts will occur and blocks all operations that might violate data integrity. The data is locked when it is queried, and the lock is held until the transaction commits.

The implementation of pessimistic locking: using the locking mechanism in the database

  • Optimistic locking: assumes that no concurrency conflicts will occur and checks for data integrity violations only when the operation is committed. Data is not locked while being read; when updating, a version check determines whether the data has been modified by others in the meantime.

Optimistic locks are generally implemented with a version-number mechanism or the CAS algorithm.
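A sketch of the version-number mechanism, using a hypothetical product table (the table and column names are illustrative, not from the source):

```sql
-- Hypothetical stock table with a version column for optimistic locking
create table product (
  id int primary key,
  stock int not null,
  version int not null default 0
);
insert into product values (1, 10, 0);

-- Step 1: read the row, remembering the version we saw (here: stock=10, version=0)
select stock, version from product where id = 1;

-- Step 2: update only if nobody changed the row in the meantime (the CAS step)
update product
   set stock = stock - 1, version = version + 1
 where id = 1 and version = 0;

-- If the affected-rows count is 0, another writer got there first:
-- re-read the row and retry, or report the conflict to the caller.
```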

Regarding usage scenarios: from the introduction above, each kind of lock has its own advantages and disadvantages, and neither is strictly better than the other. Optimistic locking suits read-heavy, write-light scenarios, where conflicts rarely occur; it saves the overhead of locking and increases the overall throughput of the system. In write-heavy scenarios, however, conflicts occur frequently, causing the upper-layer application to retry continually, which actually reduces performance; pessimistic locking is more appropriate there.

8. Summary

Locks used by MyISAM and InnoDB storage engines:

  • MyISAM uses table-level locking.
  • InnoDB supports row-level locking and table-level locking, and the default is row-level locking

Comparison between table-level locks and row-level locks, and page locks:

  • Table-level lock: A lock with the largest locking granularity in Mysql . It locks the entire table currently being operated. It is simple to implement, consumes less resources, locks quickly, and does not cause deadlocks. It has the largest locking granularity, the highest probability of triggering lock conflicts, and the lowest concurrency. Both MyISAM and InnoDB engines support table-level locks.

Features: small overhead, fast locking; no deadlock; large locking granularity, the highest probability of lock conflicts, and the lowest concurrency.

  • Row-level lock: A lock with the smallest locking granularity in Mysql , which is only locked for the currently operated row. Row-level locks can greatly reduce conflicts in database operations. Its locking granularity is the smallest and the concurrency is high, but the overhead of locking is also the largest, slow locking, and deadlocks may occur.

Features: high overhead, slow locking; there will be deadlocks; the smallest locking granularity, the lowest probability of lock conflicts, and the highest concurrency.

  • Page-level locks: The BDB storage engine supports page-level locks. A lock in MySQL whose locking granularity is between row-level locks and table-level locks. Table-level locks are fast, but have more conflicts, and row-level locks have fewer conflicts, but are slower. A compromise is made at the page level, locking a contiguous set of records at a time.

Features: The overhead and locking time are between table locks and row locks, and deadlocks will occur. The locking granularity is between table locks and row locks, and the concurrency is average.

It can be seen from the above characteristics that it is hard to say in general which lock is better; what matters is which lock suits the characteristics of the specific application. From the perspective of locks alone: table-level locks are better suited to query-dominated applications with only small amounts of data updated by index conditions, while row-level locks are better suited to applications with large numbers of concurrent updates to small amounts of different data by index conditions, combined with concurrent queries.

References:
1.https://blog.csdn.net/qq_34337272/article/details/80611486
2.https://mp.weixin.qq.com/s/rFBFwzsDvoqptTubAqyuFQ
3.https://zhuanlan.zhihu.com/p/123962424


Origin blog.csdn.net/huangjhai/article/details/119011417