Locks: table locks, row locks, page locks, shared locks, and exclusive locks

                                                                       

I. Overview

        A database's locking mechanism is, in essence, a design that lets the database guarantee data consistency by serializing concurrent access to shared resources. Every database needs an appropriate locking mechanism, and MySQL is no exception. Because of MySQL's architecture, it supports multiple storage engines, each aimed at different application scenarios. To serve its particular scenarios well, each storage engine designs and optimizes its own locking mechanism, so the mechanisms differ considerably between engines. MySQL storage engines use three types (levels) of locking: table-level locking, row-level locking, and page-level locking.
1. Table-level locking (table-level)
Table-level locking is the coarsest-grained locking mechanism among MySQL storage engines. Its greatest strength is that the implementation logic is very simple and imposes minimal overhead on the system, so locks are acquired and released quickly. Because a table-level lock locks the entire table, it also avoids the deadlocks that otherwise plague us.
Of course, the biggest downside of such coarse-grained locking is that the probability of lock contention is the highest, which greatly reduces concurrency.
Table-level locking is used mainly by non-transactional storage engines such as MyISAM, MEMORY, and CSV.
2. Row-level locking (row-level)
The defining feature of row-level locking is that the locked object is very small; it is currently the finest locking granularity implemented by major database management systems. Because the granularity is so small, the probability of contention for a locked resource is minimal, which gives the application the greatest possible concurrency and improves the overall performance of systems that need high concurrency.
Although row-level locking has a clear advantage in concurrent processing, it also brings a number of drawbacks. Because each locked resource is small, more work is needed to acquire and release locks each time, so the overhead is naturally higher. In addition, row-level locking is the most prone to deadlock.
Row-level locking is used primarily by the InnoDB storage engine.
3. Page-level locking (page-level)
Page-level locking is a relatively unusual locking level in MySQL and is not very common in other database management systems either. Its granularity sits between row-level and table-level locking, so both the overhead of acquiring locks and the concurrency it allows also sit between the two. In addition, like row-level locking, page-level locking can deadlock.
As locking granularity decreases, the amount of memory needed to lock the same amount of data increases and the locking algorithms become more complex. However, as granularity decreases, the likelihood that an application request hits a lock wait also decreases, and the overall concurrency of the system increases.
Page-level locking is used mainly by the BerkeleyDB storage engine.
Overall, the characteristics of these three MySQL lock levels can be broadly summarized as follows:
Table-level locks: low overhead, fast locking; no deadlocks; coarse granularity, the highest probability of lock conflicts, and the lowest concurrency.
Row-level locks: high overhead, slow locking; deadlocks can occur; the finest granularity, the lowest probability of lock conflicts, and the highest concurrency.
Page locks: overhead and locking speed between those of table locks and row locks; deadlocks can occur; granularity between table locks and row locks, with moderate concurrency.
Applicability: from the locking point of view, table-level locking is better suited to applications that are mostly queries and update only small amounts of data by index conditions, such as web applications; row-level locking is better suited to applications with many concurrent updates of small amounts of different data by index conditions, together with concurrent queries, such as some online transaction processing (OLTP) systems.

II. Table-level locking

Since the MyISAM storage engine uses only the table-level locking provided by MySQL itself, we will use MyISAM as the example storage engine in the discussion that follows.
1. MySQL table-level lock modes
MySQL table-level locking has two modes: table shared read locks (Table Read Lock) and table exclusive write locks (Table Write Lock). Their compatibility is as follows:
A read operation on a MyISAM table does not block other users' read requests on the same table, but it does block write requests on the same table;
A write operation on a MyISAM table blocks other users' read and write operations on the same table;
Reads and writes on a MyISAM table are serialized with respect to each other, and writes are serialized with each other: when a thread obtains the write lock on a table, only the thread holding the lock may update the table, and read and write operations from other threads wait until the lock is released.
2. How table locks are acquired
Before executing a query (SELECT), MyISAM automatically takes read locks on all tables involved; before executing an update operation (UPDATE, DELETE, INSERT, etc.), it automatically takes write locks on the tables involved. This process requires no user intervention, so users generally do not need to lock MyISAM tables explicitly with the LOCK TABLES command, although the explicit syntax is sketched below for reference.
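For the occasional case where explicit locking is wanted, for example to keep several statements on related MyISAM tables consistent, LOCK TABLES / UNLOCK TABLES can be used. A minimal sketch; the table names orders and order_total are assumptions chosen purely for illustration:

-- Take a write lock on orders and a read lock on order_total;
-- while these locks are held, this session may access only the locked tables
LOCK TABLES orders WRITE, order_total READ;
SELECT SUM(amount) FROM order_total;
INSERT INTO orders (customer_id, amount) VALUES (42, 99.90);
-- Release all table locks held by this session
UNLOCK TABLES;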
3. MyISAM table lock optimization suggestions
For the MyISAM storage engine, although the cost of taking a table-level lock is lower than that of row-level or page-level locking and the resources consumed by locking itself are minimal, the relatively coarse granularity causes more lock contention than any other locking level, which considerably reduces concurrent processing capability. So when optimizing MyISAM locking, the key question is how to improve concurrency. Since the locking level cannot be changed, we need to hold locks for as short a time as possible and, where we can, let operations run concurrently.
(1) Checking table-level lock contention
MySQL has two dedicated internal status variables that record lock contention inside the system:

mysql> show status like 'table%';
+----------------------------+---------+
| Variable_name              | Value   |
+----------------------------+---------+
| Table_locks_immediate      | 100     |
| Table_locks_waited         | 11      |
+----------------------------+---------+

These two status variables record MySQL table-level lock activity as follows:
Table_locks_immediate: the number of times a table lock was acquired immediately;
Table_locks_waited: the number of times a table lock request had to wait because of contention.
Both counters start from zero when the system starts and are incremented by one each time the corresponding event occurs. If the value of Table_locks_waited is high, table-level lock contention in the system is quite serious, and further analysis is needed to find out why there is so much contention for locked resources.
(2) Shortening lock times
How do we keep lock times as short as possible? The only way is to make our queries execute as quickly as possible:
a) Reduce the size of large, complex queries by splitting them into several smaller queries executed separately;
b) Build efficient indexes wherever possible so that data retrieval is faster;
c) Keep MyISAM tables limited to the information they actually need, and control the field types;
d) Optimize MyISAM table data files at appropriate times.
(3) Separating operations that can run in parallel
Given that MyISAM table locks make reads and writes block each other, some people may conclude that operations on MyISAM tables can only be fully serialized, with no room for parallelism. But do not forget that the MyISAM storage engine has a very useful feature: Concurrent Insert.
The MyISAM storage engine has a parameter that controls whether the Concurrent Insert feature is enabled: concurrent_insert, which can be set to 0, 1, or 2. The three values mean the following:
concurrent_insert = 2: concurrent inserts at the end of the table are allowed whether or not the MyISAM table has holes;
concurrent_insert = 1: if the MyISAM table has no holes (that is, no rows have been deleted from the middle of the table), one process may read the table while another process inserts records at the end of the table. This is the MySQL default;
concurrent_insert = 0: concurrent inserts are not allowed.
You can use MyISAM's concurrent-insert feature to resolve lock contention between queries and inserts on the same table. For example, set the concurrent_insert system variable to 2 to always allow concurrent inserts, and periodically run OPTIMIZE TABLE during idle periods to defragment the table and reclaim the holes left in the middle by deleted records, as in the sketch below.
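A minimal sketch of that approach; the table name log_table is an assumption chosen purely for illustration:

-- Always allow inserts at the end of the table, even if it has holes
SET GLOBAL concurrent_insert = 2;

-- During an idle period, defragment the table to reclaim the holes
-- left by deleted rows
OPTIMIZE TABLE log_table;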
(4) Making sensible use of read/write priorities
MyISAM reads and writes block each other, so when one process requests a read lock on a MyISAM table and another process requests a write lock on the same table at the same time, how does MySQL handle it?
The answer is that the writing process gets the lock first. Moreover, even if the read request reaches the lock wait queue first, a write request arriving later is still placed ahead of it.
This is because MySQL's table-level locking gives reads and writes different priorities, and by default write priority is higher than read priority.
So, where possible, we should set the read/write priority according to the characteristics of our own environment:
Execute the command SET LOW_PRIORITY_UPDATES = 1 to make this connection give reads priority over writes. If our system is read-mostly, set this parameter; if it is write-heavy, do not;
Specify the LOW_PRIORITY attribute on individual INSERT, UPDATE, or DELETE statements to lower the priority of those statements.
Although the methods above either prioritize updates or prioritize queries, they can still be used to relieve serious read-lock waits in applications where queries are relatively important (such as a user login system).
In addition, MySQL provides a compromise for regulating read/write conflicts: set the system parameter max_write_lock_count to a suitable value. When the number of write locks taken on a table reaches this value, MySQL temporarily lowers the priority of write requests so that waiting reads have a chance to obtain the lock. See the example below.
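A brief sketch of these priority adjustments; the table and column names, and the value 100, are assumptions for illustration:

-- Give reads priority over this connection's writes (read-mostly systems)
SET LOW_PRIORITY_UPDATES = 1;

-- Or lower the priority of a single statement
UPDATE LOW_PRIORITY user_stats SET visits = visits + 1 WHERE user_id = 42;

-- Compromise: after this many write locks on a table, let waiting reads through
SET GLOBAL max_write_lock_count = 100;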
One more point worth stressing: long-running queries can also make writing processes "starve". Applications should therefore try to avoid long-running queries and should not always try to solve a problem with a single SELECT statement, because such seemingly clever SQL is often complex and slow to execute. Where possible, "decompose" the SQL, for example by using intermediate tables, so that each step of the query can finish in a short time and lock conflicts are reduced. If a complex query is unavoidable, schedule it for the database's idle periods; for example, some periodic statistics can be scheduled to run at night.

III. Row-level locking

Row-level locking is not implemented by MySQL itself but by the storage engines themselves: the widely known InnoDB storage engine and MySQL's distributed NDB Cluster storage engine both implement row-level locking. Since each storage engine implements row-level locking in its own way, with implementation-specific differences, and since InnoDB is currently the most widely used transactional storage engine, we focus here on InnoDB's locking features.
1. InnoDB lock modes and implementation mechanism

In general, InnoDB's locking mechanism has much in common with Oracle's. InnoDB's row-level locks come in two types, shared locks and exclusive locks. To allow row-level and table-level locks to coexist in its locking implementation, InnoDB also uses the concept of intention locks, which are table-level, giving us intention shared locks and intention exclusive locks.
When a transaction needs to lock a resource, if the resource is already covered by another transaction's shared lock, it can add its own shared lock but not an exclusive lock; if the resource is already held under another transaction's exclusive lock, it can only wait for that lock to be released before it can lock the resource itself. Intention locks work as follows: before a transaction locks rows, it first places a suitable intention lock on the table containing those rows. If it needs shared locks on rows, it places an intention shared lock on the table; if it needs an exclusive lock on one or more rows, it places an intention exclusive lock on the table. Intention locks are compatible with one another; they conflict only with full table-level shared or exclusive locks. So InnoDB's lock modes can in fact be divided into four categories: shared locks (S), exclusive locks (X), intention shared locks (IS), and intention exclusive locks (IX), whose coexistence rules can be summarized in the table below.
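The compatibility of these four lock modes, as documented in the MySQL manual, is summarized below; a requested lock is granted only if it is compatible with every lock already held on the resource:

+------+------------+------------+------------+------------+
|      | X          | IX         | S          | IS         |
+------+------------+------------+------------+------------+
| X    | conflict   | conflict   | conflict   | conflict   |
| IX   | conflict   | compatible | conflict   | compatible |
| S    | conflict   | conflict   | compatible | compatible |
| IS   | conflict   | compatible | compatible | compatible |
+------+------------+------------+------------+------------+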

If a transaction's lock request is compatible with the locks currently held, InnoDB grants the request; otherwise, if the two are incompatible, the transaction must wait for the existing lock to be released.
Intention locks are added by InnoDB automatically and require no user intervention. For UPDATE, DELETE, and INSERT statements, InnoDB automatically places exclusive locks (X) on the rows involved; for ordinary SELECT statements, InnoDB takes no locks at all. A transaction can explicitly place shared or exclusive locks on a record set with the following statements.

Shared lock (S): SELECT * FROM table_name WHERE ... LOCK IN SHARE MODE
Exclusive lock (X): SELECT * FROM table_name WHERE ... FOR UPDATE

SELECT ... LOCK IN SHARE MODE obtains a shared lock and is mainly used when a transaction needs to confirm, for data-dependency purposes, that a row exists and to make sure nobody else performs an UPDATE or DELETE on it.
However, if the current transaction also needs to update that record, this pattern can easily lead to deadlock. For applications that will update the row after locking it, use SELECT ... FOR UPDATE to obtain an exclusive lock instead, as sketched below.
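A minimal sketch of the two patterns, assuming a hypothetical accounts table with columns id and balance:

START TRANSACTION;
-- Deadlock-prone: two transactions can both take the shared lock on the
-- same row, then block each other when each tries to upgrade to X
SELECT balance FROM accounts WHERE id = 1 LOCK IN SHARE MODE;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
COMMIT;

START TRANSACTION;
-- Safer when an update will follow: take the exclusive lock up front
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
COMMIT;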
2. How InnoDB row locks are implemented
InnoDB row locks are implemented by locking index entries: InnoDB uses row-level locking only when data is retrieved through an index condition; otherwise, InnoDB uses table locks.
In practice, pay special attention to this feature of InnoDB row locking; otherwise it can cause a large number of lock conflicts and hurt concurrent performance. The following practical points illustrate this.
(1) When a query does not retrieve data through an index condition, InnoDB uses a table lock rather than row locks.
(2) Because MySQL row locks are placed on index entries rather than on the records themselves, lock conflicts can occur even when different rows are accessed, as long as the same index key is used.
(3) When a table has multiple indexes, different transactions can use different indexes to lock different rows. In addition, whether the primary key index, a secondary index, or a unique index is used, InnoDB uses row locks to lock the data.
(4) Even when a condition uses an indexed column, whether the index is actually used to retrieve the data is decided by MySQL's cost-based comparison of execution plans. If MySQL decides that a full table scan is more efficient, for example on some small tables, it will not use the index, and in that case InnoDB uses table locks instead of row locks. Therefore, when analyzing lock conflicts, remember to check the SQL execution plan to verify whether an index is really being used (see the sketch below).
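A rough illustration of points (1) and (4); it assumes the emp table used in the next section has an indexed primary key empid and an unindexed column name:

-- Session 1: the WHERE column has no index, so the scan locks far more
-- than the matching row, and updates from other sessions block
START TRANSACTION;
SELECT * FROM emp WHERE name = 'Alice' FOR UPDATE;

-- Session 2: blocks even though it touches a different row
UPDATE emp SET name = 'Bob' WHERE empid = 7;

-- Check the execution plan to verify whether an index is actually used
EXPLAIN SELECT * FROM emp WHERE name = 'Alice' FOR UPDATE;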
3. Gap locks (Next-Key locks)
When we retrieve data with a range condition rather than an equality condition and request shared or exclusive locks, InnoDB locks the index entries of the existing records that match the condition;
for key values that fall within the range but do not exist, the so-called "gap", InnoDB also locks the gap. This mechanism is known as gap locking (Next-Key locking).
Example:
If the emp table contains only 101 records, with empid values 1, 2, ..., 100, 101, then the following SQL:

mysql> select * from emp where empid > 100 for update;

is a range retrieval, so InnoDB locks not only the record that matches the condition (empid 101) but also the "gap" for empid values greater than 101, records that do not exist.
InnoDB uses gap locks for two purposes:
(1) To prevent phantom reads and thereby satisfy the requirements of the relevant isolation levels. In the example above, without gap locks, if another transaction inserted any record with empid greater than 100, a phantom read would occur if this transaction ran the statement again;
(2) To meet the needs of recovery and replication.
Clearly, when records are retrieved and locked with range conditions, even key values that do not exist are locked "innocently", and no data can be inserted anywhere within the locked key range while the lock is held. In some scenarios this can seriously hurt performance.
Besides the adverse performance effect of gap locks, implementing locks through indexes gives InnoDB several other notable performance problems:
(1) When a query cannot use an index, InnoDB gives up row-level locking and switches to table-level locking, reducing concurrency;
(2) When the index a query uses does not include all of the filter conditions, the data retrieved through the index key may include rows that do not belong to the query's result set, yet those rows are locked anyway, because a gap lock covers a range rather than a specific index key;
(3) When a query uses an index to locate data, rows that share the same index key are locked in the same way even though they are different data rows (when the index covers only part of the filter conditions).
Therefore, in real application development, especially in applications with many concurrent inserts, we should try to optimize the business logic so that data is accessed and updated with equality conditions, avoiding range conditions.
Note in particular that, besides locking ranges under range conditions, InnoDB also uses a gap lock when an equality condition requests a lock on a record that does not exist. See the sketch below.
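A minimal sketch continuing the emp example above (empid values 1 to 101), under the default REPEATABLE-READ isolation level:

-- empid = 102 does not exist, so InnoDB locks the gap above 101;
-- another transaction's INSERT of empid 102 (or 150) will block
START TRANSACTION;
SELECT * FROM emp WHERE empid = 102 FOR UPDATE;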
4. Deadlocks
As mentioned above, MyISAM table locks are deadlock-free, because MyISAM always acquires all of the locks it needs at once: it either gets them all or waits, so deadlock cannot occur. In InnoDB, however, apart from transactions consisting of a single SQL statement, locks are acquired gradually, and when two transactions each need an exclusive lock held by the other in order to finish, a circular lock wait arises, which is a typical deadlock.
InnoDB's transaction management and locking mechanisms include a dedicated deadlock detection mechanism that discovers a deadlock very shortly after it appears in the system. When InnoDB detects a deadlock, it uses certain criteria to judge which of the two deadlocked transactions is smaller and rolls that one back, so that the larger transaction can complete successfully.
So what criteria does InnoDB use to judge the size of a transaction? The official MySQL manual mentions this: when a deadlock is found, InnoDB compares the amount of data each transaction has inserted, updated, or deleted to determine which transaction is larger. In other words, the transaction that has changed more records is the one that is not rolled back.
One thing to note: when a deadlock scenario involves storage engines other than InnoDB, InnoDB has no way to detect it; in that case the only remedy is the lock wait timeout parameter innodb_lock_wait_timeout.
This parameter is not there only to resolve deadlocks. When concurrency is relatively high, if a large number of transactions are suspended because they cannot obtain the locks they need immediately, they consume a lot of resources and cause serious performance problems, possibly even dragging the database down. Setting an appropriate lock wait timeout threshold avoids this. See the example below.
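A brief sketch of inspecting and adjusting this timeout; the value of 50 seconds is only an illustrative assumption:

-- Current lock wait timeout, in seconds
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';

-- Fail lock waits sooner so stalled transactions release resources earlier
SET GLOBAL innodb_lock_wait_timeout = 50;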
Generally speaking, deadlocks are an application design problem. By adjusting business processes, SQL statements, database object design, transaction size, and the way the database is accessed, the vast majority of deadlocks can be avoided. Here are several common ways to avoid deadlocks, illustrated with examples:
(1) In the application, if different programs access multiple tables concurrently, they should agree to access the tables in the same order; this greatly reduces the chance of deadlock.
(2) When a program processes data in batches, pre-sorting the data so that every thread processes records in a fixed order significantly reduces the possibility of deadlock.
(3) In a transaction, if you intend to update a record, request a lock of sufficient strength directly, namely an exclusive lock, rather than first requesting a shared lock and then requesting an exclusive lock at update time; by then, other transactions may already hold a shared lock on the same record, causing lock conflicts or even deadlock.
(4) Under the REPEATABLE-READ isolation level, if two threads both use SELECT ... FOR UPDATE with the same condition to take an exclusive lock, and no record matches the condition, both threads lock successfully. Each then finds that the record does not exist and tries to insert a new one, and if both threads do this, a deadlock occurs (see the sketch after this list). In this case, changing the isolation level to READ COMMITTED avoids the problem.
(5) Under the READ COMMITTED isolation level, if two threads first execute SELECT ... FOR UPDATE to check whether a matching record exists and insert one if it does not, only one thread can insert successfully while the other waits on the lock. When the first thread commits, the second thread fails with a duplicate-key error, but despite the error it still obtains an exclusive lock. If a third thread then requests an exclusive lock, a deadlock occurs. In this case, you can simply perform the insert directly and catch the duplicate-key exception afterwards, or always issue a ROLLBACK on a duplicate-key error to release the exclusive lock.
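A minimal sketch of the pattern described in (4), assuming a hypothetical table t with a primary key column id and the REPEATABLE-READ isolation level:

-- Session 1
START TRANSACTION;
SELECT * FROM t WHERE id = 10 FOR UPDATE;   -- no row exists: takes a gap lock only

-- Session 2
START TRANSACTION;
SELECT * FROM t WHERE id = 10 FOR UPDATE;   -- also succeeds: gap locks do not conflict

-- Session 1
INSERT INTO t (id) VALUES (10);             -- blocks on session 2's gap lock

-- Session 2
INSERT INTO t (id) VALUES (10);             -- blocks on session 1's gap lock:
                                            -- InnoDB detects the deadlock and rolls one back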
5. When to use table locks
For InnoDB tables, row-level locking should be used in most cases, since transactions and row locks are usually the very reason we chose InnoDB. But for a few special transactions, table-level locks may be worth considering:
(1) The transaction needs to update most or all of the data in a relatively large table. With the default row locking, such a transaction is not only inefficient but may also cause long lock waits and lock conflicts for other transactions; in this case, consider using table locks to speed up the transaction.
(2) The transaction involves multiple tables and is complex, so it is likely to cause deadlocks and a large number of transaction rollbacks. In this case, consider locking all of the tables the transaction involves at once, to avoid deadlocks and reduce the overhead the database incurs from transaction rollbacks.
Of course, these two kinds of transactions should not make up too much of the workload in an application; otherwise, consider using MyISAM tables instead.
In InnoDB, there are two caveats when using table locks.
(1) Although LOCK TABLES can place table locks on InnoDB tables, note that these table locks are not managed by the InnoDB storage engine layer but by the layer above it, MySQL Server. Only when autocommit = 0 and innodb_table_locks = 1 (the default) does the InnoDB layer know about table locks taken by MySQL, and does MySQL Server know about InnoDB's row locks; only then can InnoDB automatically detect deadlocks that involve table locks. Otherwise, InnoDB cannot automatically detect and handle such deadlocks.
(2) When using LOCK TABLES on InnoDB tables, be careful to set AUTOCOMMIT to 0 first; otherwise MySQL will not actually lock the tables. Do not call UNLOCK TABLES before the transaction ends, because UNLOCK TABLES implicitly commits the transaction; and COMMIT or ROLLBACK does not release table locks taken with LOCK TABLES, so they must be released with UNLOCK TABLES. The correct pattern is shown in the following statements:
For example, if you need to write to table t1 and read from table t2, you can do the following:

SET AUTOCOMMIT=0;
LOCK TABLES t1 WRITE, t2 READ, ...;
[do something with tables t1 and t2 here];
COMMIT;
UNLOCK TABLES;

6. InnoDB row lock optimization suggestions
Because InnoDB implements row-level locking, its locking mechanism is likely to cost more than table-level locking, but its overall concurrent processing capability is far better than MyISAM's table-level locking. When system concurrency is high, InnoDB's overall performance has a clear advantage over MyISAM's. However, InnoDB's row-level locking also has its weak side: when used improperly, InnoDB's overall performance may not only fail to beat MyISAM's but may even be worse.
(1) To use InnoDB row-level locking well, playing to its strengths and avoiding its weaknesses, we must do the following:
a) Wherever possible, make all data retrieval go through an index, so that InnoDB does not escalate to table-level locking because it cannot lock through index keys;
b) Design indexes sensibly, so that when InnoDB locks index keys the lock range is as precise and narrow as possible, and unnecessary locking does not affect the execution of other queries;
c) Narrow the range of data retrieved by the filter conditions as much as possible, to avoid the negative effect of gap locks locking records that should not be locked;
d) Keep transactions small, to reduce the amount of resources locked and the time locks are held;
e) Where the business environment permits, use a lower transaction isolation level, to reduce the extra cost MySQL pays to implement higher isolation levels.
(2) Because InnoDB has both row-level locking and transactions, deadlocks are bound to occur. The following tips are commonly used to reduce the probability of deadlock:
a) Where possible, have similar business modules access resources in the same order, to prevent deadlocks;
b) Within the same transaction, lock all the resources needed at once where possible, to reduce the probability of deadlock;
c) For business operations that are especially prone to deadlock, consider escalating the lock granularity and using table-level locking to reduce the probability of deadlock.
(3) Lock contention in a running system can be analyzed by checking the InnoDB_row_lock status variables:

mysql> show status like 'InnoDB_row_lock%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| InnoDB_row_lock_current_waits | 0     |
| InnoDB_row_lock_time          | 0     |
| InnoDB_row_lock_time_avg      | 0     |
| InnoDB_row_lock_time_max      | 0     |
| InnoDB_row_lock_waits         | 0     |
+-------------------------------+-------+

InnoDB's row-level lock status variables record not only the number of lock waits but also the total lock wait time, the average wait time, and the maximum wait time, plus one non-cumulative value showing the number of lock waits currently in progress. The variables are described as follows:
InnoDB_row_lock_current_waits: the number of lock waits currently in progress;
InnoDB_row_lock_time: the total time spent waiting for locks since the system started;
InnoDB_row_lock_time_avg: the average time spent per wait;
InnoDB_row_lock_time_max: the longest single wait since the system started;
InnoDB_row_lock_waits: the total number of waits since the system started.
Of these five variables, the most important are InnoDB_row_lock_time_avg (average wait time), InnoDB_row_lock_waits (total number of waits), and InnoDB_row_lock_time (total wait time). In particular, when the number of waits is high and each wait is not short, we need to analyze why the system has so many waits and then work out an optimization plan based on the results.
If lock contention turns out to be serious, for example if InnoDB_row_lock_waits and InnoDB_row_lock_time_avg are relatively high, you can go further and observe which tables and rows the lock conflicts occur on by enabling the InnoDB Monitors, and analyze the causes of the contention. The specific method is as follows:

mysql> create table InnoDB_monitor(a INT) engine=InnoDB;

You can then view the monitor output with the following statement:

mysql> show engine InnoDB status;

The monitor can be stopped by issuing the following statement:

mysql> drop table InnoDB_monitor;

Once the monitor is set up, detailed information about current lock waits, including the table name, lock type, and the record being locked, is recorded, which makes it easier to analyze and pinpoint the problem. Readers may wonder why a table named InnoDB_monitor has to be created first. Creating this table tells InnoDB to start monitoring its state in detail; InnoDB then writes more detailed transaction and lock information to MySQL's error log, which we can use for later analysis. After the monitor is turned on, it writes the monitored content to the log every 15 seconds by default; if it is left on for a long time, the .err file grows very large, so once the cause of the problem has been confirmed, remember to drop the monitor table to turn the monitor off, or start the server with the --console option to stop writing to the log file.
