Database lock mechanisms and principles

Reposted from https://blog.csdn.net/C_J33/article/details/79487941

Database locks

[Figure: a tree diagram of database lock categories, compiled by the original author]

Summary

Database locks can generally be divided into two categories: pessimistic locks and optimistic locks.

An optimistic lock usually refers to a locking mechanism implemented by the user rather than by the database. It assumes that, in general, data access will not conflict, so it only checks whether the data conflicts at the moment the update is formally submitted; if a conflict is found, an error is returned and the user decides what to do. Optimistic lock implementations typically use a version number or a timestamp.
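As a minimal sketch of the version-number variant: the accounts table below and its columns are illustrative assumptions (the original post only ever names an accounts table with id and balance).

-- Hypothetical table reused by the sketches below.
CREATE TABLE accounts (
    id      INT PRIMARY KEY,
    balance DECIMAL(10, 2) NOT NULL,
    version INT NOT NULL DEFAULT 0
);

-- Step 1: read the row together with its current version number.
SELECT balance, version FROM accounts WHERE id = 1;
-- Suppose this returns balance = 1000, version = 5.

-- Step 2: submit the change only if nobody else touched the row.
UPDATE accounts
SET balance = 900, version = version + 1
WHERE id = 1 AND version = 5;
-- An affected-row count of 0 means another transaction won the race:
-- report the conflict and let the user decide whether to retry.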

A pessimistic lock is what we usually mean by the database's own lock mechanism; the discussion below is about pessimistic locks.

Pessimistic locks mainly come in the form of table locks, row locks, and page locks. MyISAM uses only table locks, so deadlocks cannot occur and the lock overhead is very small, but concurrency is correspondingly poor. InnoDB implements both row-level locks and table locks: the finer granularity gives stronger concurrency, but the lock overhead grows and deadlocks become possible; InnoDB also has to coordinate the two kinds of locks, which complicates the algorithm. InnoDB row locks are implemented by locking entries in an index, so row-level locking only takes effect when the data is retrieved through an index condition; otherwise InnoDB uses a table lock.
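A sketch of that last point, assuming the hypothetical accounts table above (id is the primary key, balance is unindexed):

-- id is indexed, so only the matching index entry (row) is locked.
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;
COMMIT;

-- balance has no index, so InnoDB cannot pinpoint an index entry;
-- the scan ends up locking every row, behaving like a table lock
-- and blocking writers to unrelated rows until COMMIT.
BEGIN;
SELECT * FROM accounts WHERE balance = 900 FOR UPDATE;
COMMIT;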

Both table locks and row locks are divided into shared locks and exclusive locks, and update locks exist to solve the deadlock caused by promoting a row lock from shared to exclusive.

Because InnoDB has table locks and row locks side by side, it also introduces intention locks (intention shared locks and intention exclusive locks) to make the combination efficient.

Why intention locks exist alongside table locks and row locks

The official documentation describes them like this:

Intention locks are table-level locks that indicate which type of lock (shared or exclusive) a transaction requires later for a row in a table

There is a very vivid explanation of this on Zhihu, paraphrased as follows:

MySQL has table locks: a table read lock blocks other transactions from modifying the table's data, while a table write lock blocks other transactions from both reading and writing it.
The InnoDB engine also supports row locks. Row locks are divided into shared locks, under which a transaction may only read the locked row, and exclusive locks, under which a single transaction has exclusive read-write access to the locked row.
The problem with the two kinds of lock coexisting: consider this example. Transaction A locks one row of a table so that the row can only be read, not written. Transaction B then requests a write lock on the whole table. If B's request succeeded, it could in theory modify any row of the table, which conflicts with the row lock A holds. The database must avoid this conflict, i.e. block B's request until A releases its row lock.
How does the database detect this conflict?

step1: check whether the table is already locked by another transaction's table lock;
step2: check whether any row of the table is locked by a row lock.

Note step2: this check is inefficient, because it has to traverse the whole table. Hence intention locks. Once intention locks exist, transaction A must first acquire an intention shared lock on the table, and only after that succeeds may it lock the row.

With intention locks, the check above becomes:

step1: unchanged;
step2: an intention shared lock is found on the table, indicating that some of its rows are locked by shared row locks; therefore transaction B's request for a table write lock is blocked.

Note: acquiring the intention lock is handled entirely by the database. That is, when transaction A requests a row lock on some row, the database automatically requests the intention lock on the table first; programmers do not need to request it in code.
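A hedged sketch of this in MySQL, with the two sessions interleaved in comments (in modern MySQL the table-level blocking also involves metadata locks, but the observable behavior matches the description):

-- Session A: request a row lock; InnoDB automatically takes an
-- intention lock on the accounts table first.
BEGIN;
SELECT * FROM accounts WHERE id = 1 FOR UPDATE;

-- Session B: a whole-table write lock conflicts with A's intention
-- lock, so it blocks without scanning every row for row locks.
LOCK TABLES accounts WRITE;   -- blocks until session A commits

-- Session A:
COMMIT;

-- Session B: now holds the table lock; release it when finished.
UNLOCK TABLES;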

Row locks in detail

Shared lock

Locking and unlocking: when a transaction executes a SELECT statement, the database system assigns it a shared lock on the data being queried. By default, the shared lock is released as soon as the data has been read. For example, when a transaction executes the query SELECT * FROM accounts, the database first locks the first row, reads it, unlocks it, then locks the second row, and so on. During a transaction's read operation, other transactions may therefore still update the rows of the accounts table that are not currently locked.

Compatibility: a resource that holds a shared lock can additionally accept shared locks and update locks.

Concurrency: good. While data holds a shared lock, further shared or update locks can still be placed on it, so read concurrency stays high.
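A sketch of explicitly holding a shared row lock in MySQL (LOCK IN SHARE MODE is the classic spelling; MySQL 8.0 also accepts FOR SHARE):

-- Session A: hold a shared lock on the row until commit.
BEGIN;
SELECT balance FROM accounts WHERE id = 1 LOCK IN SHARE MODE;

-- Session B: another shared lock on the same row succeeds...
SELECT balance FROM accounts WHERE id = 1 LOCK IN SHARE MODE;
-- ...but a write must wait for A's shared lock to be released.
UPDATE accounts SET balance = 900 WHERE id = 1;   -- blocks

-- Session A:
COMMIT;   -- session B's UPDATE can now proceed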

Exclusive lock

Locking and unlocking: when a transaction executes an INSERT, UPDATE, or DELETE statement, the database system automatically places an exclusive lock on the data the statement manipulates. If the resource already holds any other lock, the exclusive lock cannot be placed.

Compatibility: an exclusive lock is compatible with no other lock. If a resource already holds an exclusive lock, no other lock can be placed on it; conversely, if a resource already holds any other lock, an exclusive lock cannot be added.

Concurrency: the worst. Only the transaction holding the lock may access the locked data; any other transaction that needs the data must wait.
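The same experiment with the lock types reversed, again on the hypothetical accounts table:

-- Session A: the UPDATE places an exclusive lock on the row.
BEGIN;
UPDATE accounts SET balance = 900 WHERE id = 1;

-- Session B: nothing is compatible with an exclusive lock, so even
-- a shared-lock read of that row has to wait.
SELECT balance FROM accounts WHERE id = 1 LOCK IN SHARE MODE;   -- blocks

-- Session A:
COMMIT;   -- releases the exclusive lock; session B proceeds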

Update lock

An update lock is placed on a resource at the initial stage, when the resource may later be modified; this avoids the deadlock caused by using a shared lock at that stage. For example, consider the following update statement:

UPDATE accounts SET balance=900 WHERE id=1

The update takes two steps: read the accounts record with id = 1, then perform the update.

If a shared lock is used for the first step, then upgrading it to an exclusive lock in the second step can deadlock. For example: two transactions each acquire a shared lock on the same resource, then both need to upgrade to an exclusive lock; each must wait for the other to release its shared lock before its own upgrade can proceed, so neither makes progress, which is a deadlock.
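Update locks as such are a SQL Server-style notion; as a sketch of the same deadlock and its avoidance in MySQL terms, SELECT ... FOR UPDATE plays the role of claiming write intent up front:

-- Session A:
BEGIN;
SELECT balance FROM accounts WHERE id = 1 LOCK IN SHARE MODE;

-- Session B:
BEGIN;
SELECT balance FROM accounts WHERE id = 1 LOCK IN SHARE MODE;

-- Session A: tries to upgrade to an exclusive lock; waits for B.
UPDATE accounts SET balance = 900 WHERE id = 1;

-- Session B: tries the same upgrade and waits for A. InnoDB detects
-- the cycle and rolls one transaction back (ER_LOCK_DEADLOCK).
UPDATE accounts SET balance = 800 WHERE id = 1;

-- Avoidance: claim write intent at read time instead, e.g.
-- SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;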

Update locks have the following characteristics:

Locking and unlocking: when a transaction executes an update statement, the database system first assigns it an update lock; once the read is complete and the update itself is about to execute, the update lock is upgraded to an exclusive lock.

Compatibility: an update lock is compatible with shared locks. That is, a resource can hold an update lock and shared locks at the same time, but at most one update lock. So when multiple transactions update the same data, only one of them obtains the update lock and later upgrades it to an exclusive lock; the other transactions must wait until the first one finishes before they can acquire the update lock, which avoids the deadlock.

Concurrency: an update lock allows multiple transactions to read the locked resource at the same time, but prevents other transactions from modifying it.

Database isolation levels

Having understood the lock mechanism, the database isolation levels are also easier to understand: each isolation level uses locks to a different degree to meet different data-consistency requirements.

Read Uncommitted: neither reads nor writes use locks; the worst data consistency, and many logical errors can occur.

Read Committed: write locks are used, but reads can still be inconsistent between queries, i.e. non-repeatable reads.

Repeatable Read: both read and write locks are used, which solves non-repeatable reads, but phantom reads can still occur.

Serializable: transactions are scheduled serially, avoiding the inconsistencies caused by inserts, which cannot be locked in advance.
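For reference, the level is chosen per session or per transaction in MySQL:

-- Inspect the current level (5.7 variable name; MySQL 8.0 renamed it
-- to transaction_isolation).
SELECT @@tx_isolation;

-- Set it for the whole session, or for the next transaction only.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;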

Read Uncommitted (dirty reads possible)

A read may see uncommitted modifications from another transaction; if that transaction is later rolled back, errors result.

Example: A transfers 100 to B, and B checks the account. These are two transactions against the same database. If B's read sees the 100 that A's transfer added, but A's transaction is ultimately rolled back, a loss results.

To avoid this we need the write lock to separate reads from writes: while data is being read it cannot be modified, and while it is being written it cannot be read. This guarantees that while one transaction is writing, the same data cannot be written or read by another transaction.
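The dirty read itself, sketched on the hypothetical accounts table:

-- Session B: opt in to dirty reads.
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Session A: the transfer, not yet committed.
BEGIN;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;

-- Session B: already sees the extra 100 although A has not committed.
SELECT balance FROM accounts WHERE id = 2;

-- Session A: the 100 that B just saw never actually arrived.
ROLLBACK;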

Read Committed

With the write lock added, dirty reads are gone: only committed data can be read. But reads themselves are not locked, so within a single read transaction, if the same data is read twice and a write transaction modifies it between the two reads, the two results will differ. This is a non-repeatable read, and it can lead to logical errors.
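The non-repeatable read, sketched:

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Session A:
BEGIN;
SELECT balance FROM accounts WHERE id = 1;   -- e.g. returns 1000

-- Session B: commits a change between A's two reads.
UPDATE accounts SET balance = 900 WHERE id = 1;

-- Session A: same query, different answer within one transaction.
SELECT balance FROM accounts WHERE id = 1;   -- now returns 900
COMMIT;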

Repeatable Read

To solve the non-repeatable read, a transaction with multiple read operations needs all of them to return consistent results for the same data (phantom reads, by contrast, refer to the number of rows returned being inconsistent). The question comes down to whether the transaction takes a read lock and holds it until commit. If no read lock is taken, non-repeatable reads occur; if a read lock is taken but released immediately rather than held, another transaction can still modify the data and commit, so reading again within the transaction still yields a different result.

So holding read locks for the duration of the transaction prevents non-repeatable reads; write locks must of course still be held, or dirty reads come back. Repeatable Read is MySQL's default transaction isolation level; "reading while holding the lock" above describes the guarantee it provides.
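The same two-read experiment under the default level (InnoDB actually provides this guarantee through consistent snapshots rather than literal long-held read locks, but the observable behavior is the one described above):

SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- Session A:
BEGIN;
SELECT balance FROM accounts WHERE id = 1;   -- e.g. returns 1000

-- Session B: commits a change mid-flight.
UPDATE accounts SET balance = 900 WHERE id = 1;

-- Session A: the read repeats; B's committed change stays invisible.
SELECT balance FROM accounts WHERE id = 1;   -- still returns 1000
COMMIT;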

Serializable

This level solves the phantom read: within one transaction, the same query returning a different set of rows each time. Transaction A inserts a new record while transaction B runs the same query before and after A commits, and the second result contains one more row than the first. Phantom reads arise from concurrent transactions inserting records, and unlike non-repeatable reads they cannot be prevented by locking existing records, because a record that does not exist yet cannot be locked. Transactions must be serialized to avoid them.
This is the highest isolation level. By forcing transactions to be ordered it makes them unable to conflict with each other, which solves the phantom read problem; in short, it takes a shared lock on every row of data it reads. At this level, large numbers of timeouts and heavy lock contention are likely.
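A closing sketch of the phantom being blocked (under SERIALIZABLE, InnoDB implicitly turns plain SELECTs into locking reads over the scanned range):

SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- Session A: the SELECT locks the rows and gaps it scanned.
BEGIN;
SELECT COUNT(*) FROM accounts WHERE balance >= 900;

-- Session B: an INSERT into that range must wait for A, so no
-- phantom row can appear between A's two queries.
INSERT INTO accounts (id, balance, version) VALUES (3, 950, 0);   -- blocks

-- Session A: same count as before.
SELECT COUNT(*) FROM accounts WHERE balance >= 900;
COMMIT;   -- session B's INSERT now proceeds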