The "lock" in MySQL interviews


1 What is a lock

1.1 Overview of locks

There are many examples of locks in daily life, from simple ancient door locks, to combination locks, to today's fingerprint and face-recognition locks. These are all familiar examples, so the basic idea of a lock should be easy to grasp.

Turning to locks in MySQL: locking is a very important feature of the database. Database locks exist to support concurrent access to shared resources while preserving data integrity and consistency, so that no data problems arise even under highly concurrent access.

1.2 Two concepts of lock

In the database, both lock and latch can be called locks, but they have different meanings.

A latch is generally called a lightweight lock because it is held for a very short time; if it were held for long, application performance would suffer badly. In the InnoDB engine, latches are divided into mutexes (mutual exclusion locks) and rwlocks (read-write locks). Their purpose is to guarantee the correctness of concurrent threads operating on critical resources, and there is usually no deadlock detection mechanism.

A lock, on the other hand, is held by a transaction and is applied to database objects such as tables, pages, and rows. Lock objects are generally released only when the transaction commits or rolls back (the release timing may differ across transaction isolation levels).

2 Locks in the InnoDB storage engine

2.1 Lock granularity

In a database, lock granularity can be divided into table locks, page locks, and row locks, and a lock's granularity can also be escalated. Lock escalation means making the current lock coarser: the database can escalate 1,000 row locks on a table to a page lock, or escalate a page lock to a table lock. The three granularities are introduced below (reference: https://blog.csdn.net/baolingye/article/details/102506072).

Table lock

Table-level locking is the coarsest-grained locking mechanism among MySQL storage engines. Its biggest strengths are that the implementation logic is very simple and the negative impact on the system is minimal, so locks are acquired and released very quickly. And since a table-level lock locks the entire table at once, it also avoids the deadlock problems that otherwise plague us.

Of course, the biggest downside of such coarse granularity is that contention for the locked resource is the most likely, which greatly reduces concurrency.

Table-level locking is mainly used by some non-transactional storage engines such as MyISAM, MEMORY, and CSV.

Features: low overhead, fast locking; no deadlocks; large locking granularity, the highest probability of lock conflicts, and the lowest concurrency.
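As a minimal sketch of explicit table-level locking (the table name `t` is hypothetical), a session can take a table lock with LOCK TABLES; while the READ lock is held, writes from other sessions block:

```sql
CREATE TABLE t (id INT) ENGINE = MyISAM;

LOCK TABLES t READ;      -- this session takes a table-level read lock
SELECT COUNT(*) FROM t;  -- reads are still allowed
-- a write from another session (e.g. INSERT INTO t VALUES (1)) would block here
UNLOCK TABLES;           -- release the table lock
```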

Page lock

Page-level locking is a locking level fairly unique to MySQL and not very common in other database management systems. Its characteristic is that the locking granularity sits between row-level and table-level locking, so both the resource overhead of acquiring a lock and the concurrency it can provide also sit between those two. In addition, like row-level locking, page-level locking can deadlock.
In the process of locking database resources, as the granularity of the locked resource decreases, the amount of memory needed to lock the same amount of data grows, and the implementation algorithms become more and more complicated. However, as granularity decreases, the chance that an access request has to wait for a lock also decreases, and overall system concurrency rises.
Page-level locking is mainly used by the BerkeleyDB storage engine.

Features: overhead and locking time between table locks and row locks; deadlocks can occur; locking granularity between table locks and row locks, with average concurrency.

Row lock

The biggest feature of row-level locking is that the granularity of the locked object is very small; it is also the finest locking granularity implemented by the major database systems. Because the granularity is so small, contention for locked resources is least likely, which gives applications as much concurrent processing capability as possible and improves the overall performance of high-concurrency systems.

Although it offers the greatest advantage in concurrent processing, row-level locking also brings drawbacks. Because the locked granularity is so small, more work must be done each time a lock is acquired and released, so the overhead is naturally higher. In addition, row-level locking is the most prone to deadlock.

Features: high overhead and slow locking; deadlocks can occur; the smallest locking granularity, the lowest probability of lock conflicts, and the highest concurrency.

Comparing with table locks, we can see that the characteristics of these two lock types are basically opposite. Table-level locks suit query-dominated applications in which only a small amount of data is updated by index conditions, such as web applications; row-level locks suit applications with many concurrent updates of small amounts of different data by index condition, plus concurrent queries, such as online transaction processing (OLTP) systems.

The granularity of locks supported by different MySQL engines:

MyISAM: table-level locking
MEMORY: table-level locking
BDB: page-level and table-level locking
InnoDB: row-level and table-level locking

2.2 Types of locks

There are different types of locks in the InnoDB storage engine, which are introduced one by one below.

S or X (shared lock, exclusive lock)

There are really only two kinds of data operations, reads and writes, and when a database implements locking it uses different locks for the two. InnoDB implements standard row-level locking, namely shared locks (Shared Lock) and exclusive locks (Exclusive Lock).

  • Shared lock (read lock, S Lock): allows a transaction to read a row of data.

  • Exclusive lock (write lock, X Lock): allows a transaction to delete or update a row of data.

IS or IX (shared, exclusive) intention lock

To allow row locks and table locks to coexist and implement a multi-granularity locking mechanism, the InnoDB storage engine supports an additional kind of lock called the intention lock. Intention locks are table-level locks in InnoDB, and they come in two kinds:

  • Intention shared lock (IS): indicates that a transaction intends to set shared locks on certain rows in a table.

  • Intention exclusive lock (IX): indicates that a transaction intends to set exclusive locks on certain rows in a table.

These locks are not all mutually compatible; some conflict. Compatibility means that after transaction A acquires a certain lock on a row, transaction B also requests a lock on that same row: if B can acquire it immediately, the locks are said to be compatible; otherwise they conflict.

Let's take a look at the compatibility of these two locks.

[1] Compatibility of S and X (shared and exclusive) locks:

      X           S
X     conflict    conflict
S     conflict    compatible

[2] Compatibility of IS/IX intention locks with S/X locks:

      IS          IX          S           X
IS    compatible  compatible  compatible  conflict
IX    compatible  compatible  conflict    conflict
S     compatible  conflict    compatible  conflict
X     conflict    conflict    conflict    conflict

3 Summary so far

Here is a summary of the previous concepts with a mind map.

 

4 Consistent non-locking reads and consistent locking reads

Consistent Locking Reads

When querying data in a transaction, ordinary SELECT statements will not lock the queried data, and other transactions can still perform update and delete operations on the queried data. Therefore, InnoDB provides two types of lock reads to ensure additional security:

【1】SELECT ... LOCK IN SHARE MODE: adds an S lock to the rows read. Other transactions can add S locks to these rows, but a request for an X lock will block.

【2】SELECT ... FOR UPDATE: adds X locks to the queried rows and the associated index records; S or X locks requested by other transactions will block. The locks added by both statements are released when the transaction commits or rolls back. Note: SELECT ... FOR UPDATE only locks rows when autocommit is disabled; with autocommit on, matching rows are not locked.
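A minimal two-session sketch of this blocking behavior (assuming the `lock_test` table used in the later example):

```sql
-- Session A
SET autocommit = 0;
BEGIN;
SELECT * FROM lock_test WHERE num = 1 LOCK IN SHARE MODE;  -- S lock on the row

-- Session B (concurrently)
BEGIN;
SELECT * FROM lock_test WHERE num = 1 LOCK IN SHARE MODE;  -- S lock: compatible, returns at once
UPDATE lock_test SET num = 2 WHERE num = 1;                -- needs an X lock: blocks until A finishes

-- Session A
COMMIT;  -- releases the S lock; session B's UPDATE can now proceed
```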

Consistent non-locking read

Consistent non-locking read refers to the way the InnoDB storage engine reads row data through multi-version concurrency control (MVCC). If the row being read is undergoing a DELETE or UPDATE, the read does not wait for the row lock to be released; instead, InnoDB reads a snapshot of the row. This non-locking read mechanism greatly improves database concurrency.

 

Consistent non-locking read is InnoDB's default read mode; that is, reads neither take nor wait for a lock on the row. InnoDB uses consistent non-locking reads under the READ COMMITTED and REPEATABLE READ transaction isolation levels.

However, the definition of the snapshot differs. Under the READ COMMITTED isolation level, a consistent non-locking read always reads the latest committed snapshot of the row. Under the REPEATABLE READ isolation level, it reads the version of the row as of the start of the transaction.

Let's use a simple example to illustrate the difference between these two methods.

First create a table (the column name used here is assumed, matching the single-value INSERT below):

create table lock_test (num int);

Insert a row:

insert into lock_test values(1);

Check the isolation level:

select @@tx_isolation;

The following is divided into two types of transactions to operate.

Under the REPEATABLE READ transaction isolation level:

 

Under the REPEATABLE READ isolation level, the row data as of the start of the transaction is read, so even after session B modifies the data, session A's repeated query still returns the original row.

Under the READ COMMITTED transaction isolation level:

Under the READ COMMITTED isolation level, the latest committed snapshot of the row is read. Therefore, after session B modifies the data and commits, session A can no longer read the original data.
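The two lost screenshots can be sketched as the following two-session transcript (assuming the `lock_test` table above; only the second read in session A differs between the two isolation levels):

```sql
-- Session A                         -- Session B
BEGIN;
SELECT * FROM lock_test;             -- returns 1
                                     BEGIN;
                                     UPDATE lock_test SET num = 3 WHERE num = 1;
                                     COMMIT;
SELECT * FROM lock_test;
-- REPEATABLE READ: still returns 1 (snapshot taken at transaction start)
-- READ COMMITTED:  returns 3 (latest committed snapshot)
COMMIT;
```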

5 Row lock algorithms

The InnoDB storage engine has three row lock algorithms, which are:

[1] Record Lock: a lock on a single index record.

[2] Gap Lock: a gap lock; locks a range, but not the record itself.

[3] Next-Key Lock: Gap Lock + Record Lock; locks a range and also locks the record itself.

Record Lock: always locks the index record. If the table was created without any index, the InnoDB storage engine locks using the implicit primary key (the hidden clustered index).

Next-Key Lock: a locking algorithm that combines Gap Lock and Record Lock. InnoDB uses this algorithm for row queries. For example, if an index contains the values 10, 20, and 30, the intervals that Next-Key Locking may lock are:

(-∞, 10], (10, 20], (20, 30], (30, +∞)

Besides Next-Key Locking there is also Previous-Key Locking. This technique is the mirror image of Next-Key Lock: the locked interval is the range plus the preceding value. For the same values as above, Previous-Key Locking would lock the intervals:

(-∞, 10), [10, 20), [20, 30), [30, +∞)

Not every index is given a Next-Key Lock. There is a special case: when the query is on a unique index (including the primary key index) and matches a single row, the Next-Key Lock is downgraded to a Record Lock.

Next, let's explain through an example.

CREATE TABLE test (
    x INT,
    y INT,
    PRIMARY KEY(x),    -- x is the primary key index
    KEY(y)             -- y is an ordinary (secondary) index
);
INSERT INTO test select 3, 2;
INSERT INTO test select 5, 3;
INSERT INTO test select 7, 6;
INSERT INTO test select 10, 8;

We now execute the following statement in session A;

SELECT * FROM test WHERE y = 3 FOR UPDATE;

Let's analyze the lock situation at this time.

[1] For the primary key x: only a Record Lock is added on the clustered index record with x = 5 (the row matching y = 3).

[2] For the secondary index y: a Next-Key Lock is added on the interval (2, 3], and InnoDB additionally adds a Gap Lock on the next key value, i.e. the interval (3, 6).
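Based on the standard next-key analysis (Record Lock on x = 5; Next-Key Lock on (2, 3] plus Gap Lock on (3, 6) over index y), here is a sketch of what another session can and cannot do while session A holds those locks:

```sql
-- Session B, while session A still holds the locks from SELECT ... WHERE y = 3 FOR UPDATE
SELECT * FROM test WHERE x = 5 LOCK IN SHARE MODE;  -- blocks: the row x = 5 carries an X Record Lock
INSERT INTO test SELECT 4, 2;   -- blocks: the entry (y=2, x=4) would land inside the locked gap of (2, 3]
INSERT INTO test SELECT 6, 5;   -- blocks: y = 5 falls inside the Gap Lock interval (3, 6)
INSERT INTO test SELECT 2, 0;   -- succeeds: y = 0 lies outside every locked interval
```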

The user can explicitly disable Gap Locks in the following two ways:

  • Set the transaction isolation level to READ COMMITTED.

  • Set the parameter innodb_locks_unsafe_for_binlog to 1.

The purpose of the Gap Lock is to prevent multiple transactions from inserting records into the same range; it is designed to solve the Phantom Problem (phantom reads). Under MySQL's default isolation level (REPEATABLE READ), InnoDB uses it to avoid phantom reads.

Phantom read: under the same transaction, executing the same SQL statement twice in a row can return different results; the second execution may return rows that did not exist before, because another transaction inserted new rows between the two executions.

6 Problems caused by locks

6.1 Dirty read

Dirty read: under different transactions, the current transaction can read another transaction's uncommitted data. Note that MySQL's default isolation level is REPEATABLE READ, under which dirty reads do not occur. Dirty reads require the READ UNCOMMITTED isolation level, so if a dirty read occurs, that isolation level is probably in use.

Let's take a look at it through an example.

As the example shows, when the transaction isolation level is READ UNCOMMITTED, session A can query data that session B has not yet committed.
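The missing screenshot can be sketched as follows (assuming the `lock_test` table from earlier):

```sql
-- Session A                                  -- Session B
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
BEGIN;
                                              BEGIN;
                                              INSERT INTO lock_test VALUES (2);  -- not committed
SELECT * FROM lock_test;  -- dirty read: already sees the uncommitted row 2
                                              ROLLBACK;  -- the row A just read never officially existed
COMMIT;
```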

6.2 Non-repeatable read

Non-repeatable read: reading the same set of data several times within one transaction but getting different results, which violates the consistency principle of database transactions. It differs from a dirty read, however: dirty reads see uncommitted data, whereas non-repeatable reads see committed data.

Let's take a look at the occurrence of this kind of problem through the following example.

As the example shows, within session A the two queries return inconsistent results because session B inserted data in between, so a non-repeatable read has occurred.
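A sketch of this scenario (assuming the `lock_test` table and the READ COMMITTED isolation level):

```sql
-- Session A                                  -- Session B
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN;
SELECT * FROM lock_test;                      -- returns 1
                                              BEGIN;
                                              INSERT INTO lock_test VALUES (2);
                                              COMMIT;
SELECT * FROM lock_test;                      -- returns 1 and 2: the read is not repeatable
COMMIT;
```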

Note that the data read by a non-repeatable read is committed data, and under the READ COMMITTED transaction isolation level this kind of anomaly is generally considered acceptable.

If we need to avoid non-repeatable reads, we can rely on the Next-Key Lock algorithm (by setting the transaction isolation level to REPEATABLE READ). In MySQL terminology, the non-repeatable read problem is also referred to as the Phantom Problem.

6.3 Lost update

Lost update: one transaction's update is overwritten by another transaction's update, resulting in inconsistent data. Under any isolation level, the database itself will not lose updates at the lock level; when this problem does appear, it is usually at the application level in a multi-user environment (read, modify, then write back).

To avoid lost updates, we just need to serialize the conflicting transactional operations instead of running them in parallel.

In practice we generally use a SELECT ... FOR UPDATE statement to place an exclusive X lock on the rows being operated on.
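A sketch of this pattern (the table and column names `account`/`balance` are hypothetical):

```sql
-- Each session performs the read-modify-write inside one transaction.
BEGIN;
SELECT balance FROM account WHERE id = 1 FOR UPDATE;  -- X lock: other sessions' FOR UPDATE reads block here
UPDATE account SET balance = balance - 100 WHERE id = 1;
COMMIT;  -- releases the X lock; the next session now reads the already-updated balance
```

Because the second session's SELECT ... FOR UPDATE waits until the first commits, it always sees the updated balance and cannot overwrite the first update.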

6.4 Summary

Here we make a summary, mainly comparing the problems that occur under the different transaction isolation levels, to make things clearer.

Isolation level     Dirty read  Non-repeatable read  Phantom read
READ UNCOMMITTED    possible    possible             possible
READ COMMITTED      no          possible             possible
REPEATABLE READ     no          no                   no (InnoDB avoids it via Next-Key Locks)
SERIALIZABLE        no          no                   no


Origin blog.csdn.net/qq_35860138/article/details/102677920