Database Read-Write Lock

Locks in MySQL (table locks and row locks)
    A lock is a mechanism for coordinating concurrent access to a resource by multiple processes or threads. In a database, besides contention for traditional computing resources (CPU, RAM, I/O), the data itself is a resource shared by many users. Ensuring the consistency and efficiency of concurrent data access is a problem every database must solve, and lock conflicts are an important factor affecting the performance of concurrent database access. From this perspective, locks are both especially important and rather complicated for databases.
 
Overview
    Compared with other databases, MySQL's locking mechanism is relatively simple. Its most notable feature is that different storage engines support different locking mechanisms.
MySQL locks can be summarized into the following three types:
- Table-level locks: low overhead, fast locking; no deadlocks; large locking granularity, the highest probability of lock conflicts and the lowest concurrency.
- Row-level locks: high overhead, slow locking; deadlocks are possible; the smallest locking granularity, the lowest probability of lock conflicts and the highest concurrency.
- Page-level locks: overhead and locking speed between table locks and row locks; deadlocks are possible; locking granularity between table locks and row locks, moderate concurrency.
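Since lock behavior depends on the storage engine of each table, it helps to confirm which engine a table uses before reasoning about its locks. A minimal sketch, assuming a hypothetical table named mytable in the current schema:

SHOW CREATE TABLE mytable\G          -- the ENGINE= clause shows MyISAM, InnoDB, ...
SELECT TABLE_NAME, ENGINE
  FROM information_schema.TABLES
 WHERE TABLE_SCHEMA = DATABASE();    -- engines of all tables in the current schema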
 
-----------------------------------------------------------------------
 
MySQL table-level lock modes (MyISAM)
MySQL table-level locks have two modes: shared table read locks (Table Read Lock) and exclusive table write locks (Table Write Lock). Their compatibility is shown in the table below.
 
MyISAM table lock compatibility

Requested lock mode \ Current lock mode | None | Read lock | Write lock
Read lock                               | Yes  | Yes       | No
Write lock                              | Yes  | No        | No

    As the table shows, a read operation on a MyISAM table does not block other users' read requests on the same table, but it blocks write requests to the same table; a write operation blocks both read and write operations on the same table from other users. Reads and writes are serialized with respect to each other, as are writes and writes: when a thread obtains a write lock on a table, only that thread can update the table, and read and write operations from other threads wait until the lock is released.
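A minimal two-session sketch of this behavior, assuming a MyISAM table named film_text (the table and column names are only examples):

-- Session 1: take a shared read lock
LOCK TABLES film_text READ;
SELECT * FROM film_text WHERE film_id = 1;               -- succeeds
-- UPDATE film_text SET title = 'x' WHERE film_id = 1;   -- would fail: this session holds only a READ lock

-- Session 2 (another connection):
SELECT * FROM film_text WHERE film_id = 1;               -- succeeds: read locks are shared
UPDATE film_text SET title = 'x' WHERE film_id = 1;      -- blocks until session 1 releases the lock

-- Session 1:
UNLOCK TABLES;                                           -- session 2's UPDATE can now proceed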
 
 
How table locks are acquired
    Before executing a query (SELECT), MyISAM automatically places read locks on all tables involved; before executing an update operation (UPDATE, DELETE, INSERT, etc.), it automatically places write locks on the tables involved. This process needs no user intervention, so users normally do not have to lock MyISAM tables explicitly with the LOCK TABLES command. In the examples in this book, explicit locking is mostly for demonstration and is not required.
    Explicitly locking MyISAM tables is usually done to simulate transactions to some extent, that is, to obtain a consistent point-in-time read across multiple tables. For example, suppose there is an orders table, orders, recording each order's total amount (total), and an order detail table, order_detail, recording the subtotal of each item in an order (subtotal). If we need to check that the totals of the two tables match, we may need to execute the following two SQL statements:
SELECT SUM(total) FROM orders;
SELECT SUM(subtotal) FROM order_detail;
If the two tables are not locked at this point, an incorrect result may be produced, because order_detail may change while the first statement is executing. Therefore, the correct approach is:
LOCK TABLES orders READ LOCAL, order_detail READ LOCAL;
SELECT SUM(total) FROM orders;
SELECT SUM(subtotal) FROM order_detail;
UNLOCK TABLES;
Two points deserve special attention.
- The example above uses the LOCAL option of LOCK TABLES. Its effect is that, as long as the conditions for concurrent inserts on the MyISAM table are satisfied, other users are still allowed to insert records at the end of the table.
- When explicitly locking tables with LOCK TABLES, locks on all tables involved must be obtained at the same time, and MySQL does not support lock escalation. That is, after LOCK TABLES, only the explicitly locked tables can be accessed, not the unlocked ones; likewise, if only a read lock was taken, only queries can be executed, not updates. This is in fact largely the same as the automatic locking case: MySQL acquires all the locks needed by a SQL statement at once. That is why MyISAM tables are deadlock free.
A session that adds a read lock to the table film_text with LOCK TABLES can query the locked table, but updating it or accessing other tables produces an error; meanwhile, another session can still query the table, but its updates must wait for the lock.
When using LOCK TABLES, not only must every table used be locked at once, but a table must also be locked once for each alias under which it appears in the SQL statement, otherwise an error occurs, as the sketch below shows.
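A minimal sketch of the alias rule, assuming a MyISAM table named actor with columns actor_id and first_name (example names only):

LOCK TABLES actor READ;
SELECT a.first_name, b.first_name
  FROM actor a, actor b
 WHERE a.first_name = b.first_name AND a.actor_id <> b.actor_id;
-- fails: Table 'a' was not locked with LOCK TABLES

UNLOCK TABLES;
LOCK TABLES actor AS a READ, actor AS b READ;   -- lock the table once per alias
SELECT a.first_name, b.first_name
  FROM actor a, actor b
 WHERE a.first_name = b.first_name AND a.actor_id <> b.actor_id;   -- now succeeds
UNLOCK TABLES;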
Concurrent inserts
    Under certain conditions, MyISAM also supports concurrent queries and insert operations.
    The MyISAM storage engine has a system variable, concurrent_insert, specifically for controlling concurrent insert behavior; its value may be 0, 1 or 2.
- When concurrent_insert is set to 0, concurrent inserts are not allowed.
- When concurrent_insert is set to 1, MyISAM allows one process to read a table while another process inserts records at the end of the table, provided the table has no holes (no rows deleted from the middle of the data file). This is MySQL's default setting.
- When concurrent_insert is set to 2, inserts at the end of the table are allowed concurrently with reads regardless of whether the table has holes.
The concurrent-insert feature of the MyISAM storage engine can be used to reduce lock contention between queries and inserts on the same table. For example, setting the concurrent_insert system variable to 2 always allows concurrent inserts; at the same time, periodically running OPTIMIZE TABLE during the system's idle periods defragments the table and removes the holes created by deleting records in the middle.
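A minimal sketch of this arrangement, assuming sufficient privileges and a MyISAM table named mytable (example name only):

SET GLOBAL concurrent_insert = 2;   -- always allow inserts at the end of the data file
-- ... normal workload runs; deleting rows leaves holes inside the data file ...
OPTIMIZE TABLE mytable;             -- run during idle periods to defragment and reclaim the holes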
 
Scheduling of MyISAM locks
As mentioned earlier, read and write locks on MyISAM tables are mutually exclusive, and read and write operations are serialized. So if one process requests a read lock on a MyISAM table while another process requests a write lock on the same table, how does MySQL handle it? The answer is that the writing process obtains the lock first. Moreover, even if the read request reaches the lock wait queue first and the write request arrives later, the write lock is still placed ahead of the read request. This is because MySQL generally considers write requests more important than read requests. It is also exactly why MyISAM tables are not well suited to applications that mix a large number of updates with queries: heavy updating can make it hard for queries ever to obtain a read lock, so they may block indefinitely. This can sometimes become very bad! Fortunately, MyISAM's scheduling behavior can be adjusted with a few settings.
- Starting the server with the low-priority-updates option makes the MyISAM engine give read requests priority by default.
- Executing SET LOW_PRIORITY_UPDATES = 1 lowers the priority of update requests issued by the current connection.
- Specifying the LOW_PRIORITY attribute on an INSERT, UPDATE or DELETE statement lowers the priority of that statement (see the sketch after this list).
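A minimal sketch of the three adjustments, with hypothetical table names:

-- 1. Server-wide, in the configuration file (my.cnf / my.ini):
--    [mysqld]
--    low-priority-updates

-- 2. Per connection:
SET LOW_PRIORITY_UPDATES = 1;

-- 3. Per statement:
INSERT LOW_PRIORITY INTO log_table (msg) VALUES ('...');
UPDATE LOW_PRIORITY counters SET hits = hits + 1 WHERE id = 1;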
Although the three methods above all amount to either update-first or query-first scheduling, they can still be used to solve serious read-lock waits in applications where queries are relatively important (such as user login systems).
In addition, MySQL provides a compromise for regulating read-write contention: by setting the system variable max_write_lock_count to a suitable value, after a table's write locks reach this count MySQL temporarily lowers the priority of further write requests, giving the waiting read processes a chance to obtain the lock.
    The write-priority scheduling mechanism and its workarounds have been discussed above. One more point deserves emphasis: long-running queries can also "starve" writing processes. Applications should therefore try to avoid long-running queries and should not always try to solve a problem with a single SELECT statement. Such seemingly clever SQL statements are often complex and slow; where possible, the SQL can be "decomposed", for example by using intermediate tables, so that each step runs in a shorter time and lock conflicts are reduced. If a complex query is unavoidable, it should be scheduled for the database's idle periods; for instance, periodic statistics can be scheduled to run at night.
 
 
-----------------------------------------------------------------------
InnoDB locking issues
    The two biggest differences between InnoDB and MyISAM are: first, InnoDB supports transactions (TRANSACTION); second, it uses row-level locking.
Row-level locks differ from table-level locks in many ways, and the introduction of transactions also brings some new problems.
 
1. Transactions and their ACID properties
    A transaction is a logical processing unit consisting of a group of SQL statements. A transaction has four properties, commonly referred to as its ACID properties.
- Atomicity: a transaction is an atomic unit of work; either all of its modifications to the data are performed, or none of them are.
- Consistency: data must be in a consistent state both when a transaction starts and when it completes. This means that all relevant rules must be applied to the transaction's modifications to preserve integrity, and at the end of the transaction all internal data structures (such as B-tree indexes or doubly linked lists) must also be correct.
- Isolation: the database system provides isolation mechanisms to ensure that transactions execute in an "independent" environment unaffected by external concurrent operations. This means that intermediate states during a transaction are not visible to the outside, and vice versa.
- Durability: after a transaction completes, its modifications to the data are permanent and are preserved even if a system failure occurs.
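A minimal sketch of a transaction on an InnoDB table (the accounts table and its columns are hypothetical): either both updates take effect, or neither does if the transaction is rolled back or fails before COMMIT.

START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;   -- both changes become permanent together (atomicity and durability)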
2. Problems brought by concurrent transactions
    Compared with serial processing, concurrent transactions can greatly improve the utilization of database resources and increase the transaction throughput of the database system, so that more users can be supported. But concurrent transactions also bring problems, mainly the following.
- Lost updates (Lost Update): when two or more transactions select the same row and then update it based on the originally selected value, each transaction is unaware of the others, so the last update overwrites the updates made by the other transactions and those updates are lost. For example, two editors each make an electronic copy of the same document, independently modify their copy, and then save it, overwriting the original document; the editor who saves last overwrites the other's changes. The problem is avoided if the second editor cannot access the file until the first editor has finished and committed the transaction.
- Dirty reads (Dirty Reads): a transaction modifies a record and, before it commits, the record is left in an inconsistent state; if another transaction reads the same record at this point and, without any control, processes this uncommitted data further, it creates a dependency on uncommitted data. This phenomenon is aptly called a "dirty read".
- Non-repeatable reads (Non-Repeatable Reads): a transaction re-reads data it has previously read, only to find that the data has been changed or some records have been deleted. This phenomenon is called a "non-repeatable read".
- Phantom reads (Phantom Reads): a transaction re-runs a query with the same search conditions as before and finds new rows, inserted by other transactions, that satisfy its conditions. This phenomenon is called a "phantom read".
 
3. Transaction isolation levels
Among the problems brought by concurrent transactions, "lost updates" should normally be avoided entirely. But preventing lost updates cannot be handled by the database transaction controller alone; the application must place the necessary locks on the data it wants to update, so preventing lost updates is the application's responsibility.
"Dirty reads", "non-repeatable reads" and "phantom reads" are really all database read-consistency problems, and they must be solved by the isolation mechanisms the database provides for transactions. Databases implement transaction isolation in essentially two ways.
One is to lock the data before reading it, preventing other transactions from modifying it.
The other is to generate, without taking any locks, a consistent snapshot (Snapshot) of the requested data as of a certain point in time, and to use this snapshot to provide a certain level of consistent read (statement-level or transaction-level). From the user's point of view the database seems to provide multiple versions of the same data, so this technique is called MultiVersion Concurrency Control (MVCC or MCC), and such databases are often called multi-version databases.
    The stricter a database's transaction isolation, the smaller the concurrency side effects, but the higher the cost, because transaction isolation essentially "serializes" transactions to some extent, which obviously contradicts "concurrency". At the same time, different applications have different requirements for read consistency and transaction isolation; for example, many applications are not sensitive to "non-repeatable reads" and "phantom reads" and care more about the ability to access data concurrently.
    To resolve the conflict between "isolation" and "concurrency", ISO/ANSI SQL92 defines four transaction isolation levels. Each level provides a different degree of isolation and permits different side effects, and applications can balance "isolation" against "concurrency" by choosing the isolation level that fits their business logic.
Comparison of the four transaction isolation levels

Isolation level     | Read consistency                                   | Dirty read | Non-repeatable read | Phantom read
Read uncommitted    | Lowest; only physically corrupted data is not read | Yes        | Yes                 | Yes
Read committed      | Statement-level                                    | No         | Yes                 | Yes
Repeatable read     | Transaction-level                                  | No         | No                  | Yes
Serializable        | Highest; transaction-level                         | No         | No                  | No
    Finally, note that not every database fully implements all four isolation levels. For example, Oracle provides only the two standard levels Read committed and Serializable, plus its own Read only level; SQL Server supports the four levels defined by ISO/ANSI SQL92 and an additional level called "snapshot", which strictly speaking is a Serializable level implemented with MVCC. MySQL supports all four isolation levels, but with some peculiarities in the implementation: at some isolation levels consistent reads use MVCC, and in some cases they do not.
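A minimal sketch of choosing and checking the isolation level for the current session; note that the variable name depends on the MySQL version (@@tx_isolation on older servers, @@transaction_isolation from 5.7.20 / 8.0 on):

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT @@transaction_isolation;   -- use @@tx_isolation on older servers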
 
 
Checking InnoDB row lock contention
Row lock contention on the system can be analyzed by checking the Innodb_row_lock state variables:
mysql> SHOW STATUS LIKE 'Innodb_row_lock%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| Innodb_row_lock_current_waits | 0     |
| Innodb_row_lock_time          | 0     |
| Innodb_row_lock_time_avg      | 0     |
| Innodb_row_lock_time_max      | 0     |
| Innodb_row_lock_waits         | 0     |
+-------------------------------+-------+
5 rows in set (0.00 sec)
    If contention turns out to be serious, for example if the values of Innodb_row_lock_waits and Innodb_row_lock_time_avg are relatively high, the InnoDB Monitors can be enabled to observe which tables and rows the lock conflicts occur on, and to analyze the causes of the contention.
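A sketch of ways to look at the locks behind the contention; SHOW ENGINE INNODB STATUS is standard, while the monitor-output variables are an assumption that applies to MySQL 5.6.16 and later:

SHOW ENGINE INNODB STATUS\G                  -- the TRANSACTIONS section lists current lock waits
SET GLOBAL innodb_status_output = ON;        -- periodically write the standard monitor output to the error log
SET GLOBAL innodb_status_output_locks = ON;  -- include detailed lock information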
    
    
InnoDB lock modes and how rows are locked
InnoDB implements the following two types of row locks.
- Shared lock (S): allows a transaction to read a row, and prevents other transactions from obtaining an exclusive lock on the same data set.
- Exclusive lock (X): allows the transaction that obtains it to update the data, and prevents other transactions from obtaining shared read locks or exclusive write locks on the same data set.
In addition, to allow row locks and table locks to coexist and to implement multi-granularity locking, InnoDB also uses two kinds of internal intention locks (Intention Locks), both of which are table-level locks.
- Intention shared lock (IS): the transaction intends to place shared locks on rows; it must obtain an IS lock on the table before placing a shared lock on a row.
- Intention exclusive lock (IX): the transaction intends to place exclusive locks on rows; it must obtain an IX lock on the table before placing an exclusive lock on a row.
InnoDB row lock mode compatibility

Requested lock mode \ Current lock mode | X        | IX         | S          | IS
X                                       | Conflict | Conflict   | Conflict   | Conflict
IX                                      | Conflict | Compatible | Conflict   | Compatible
S                                       | Conflict | Conflict   | Compatible | Compatible
IS                                      | Conflict | Compatible | Compatible | Compatible
 
    If the lock mode requested by a transaction is compatible with the current lock, InnoDB grants the requested lock to the transaction; if the two are incompatible, the transaction waits for the lock to be released.
    Intention locks are added by InnoDB automatically and need no user intervention. For UPDATE, DELETE and INSERT statements, InnoDB automatically places exclusive locks (X) on the data sets involved; for ordinary SELECT statements, InnoDB takes no locks at all. A transaction can explicitly place shared or exclusive locks on record sets with the following statements.
Shared lock (S): SELECT * FROM table_name WHERE ... LOCK IN SHARE MODE
Exclusive lock (X): SELECT * FROM table_name WHERE ... FOR UPDATE
    SELECT ... LOCK IN SHARE MODE obtains a shared lock and is mainly used when a data dependency requires confirming that a row exists while ensuring that nobody performs UPDATE or DELETE operations on it. However, if the current transaction also needs to update the record, this can easily lead to a deadlock; an application that needs to update a row after locking it should acquire an exclusive lock with SELECT ... FOR UPDATE instead, as in the sketch below.
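A minimal two-session sketch of the difference, using a hypothetical InnoDB table accounts:

-- Session 1:
START TRANSACTION;
SELECT * FROM accounts WHERE id = 1 LOCK IN SHARE MODE;   -- S lock on the row

-- Session 2:
START TRANSACTION;
SELECT * FROM accounts WHERE id = 1 LOCK IN SHARE MODE;   -- succeeds: S locks are compatible
UPDATE accounts SET balance = 0 WHERE id = 1;             -- blocks: an X lock is needed

-- If session 1 now also tries to UPDATE the same row, the two sessions wait on each
-- other and InnoDB reports a deadlock; requesting FOR UPDATE up front avoids this.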
    
 
How InnoDB row locks are implemented
    InnoDB row locks are implemented by locking index entries in an index. Here MySQL differs from Oracle, which locks the corresponding rows in data blocks. A consequence of this implementation is that InnoDB uses row-level locking only when data is retrieved through index conditions; otherwise, InnoDB will use table locks!
    In practice, pay special attention to this characteristic of InnoDB row locks; otherwise it may result in a large number of lock conflicts and hurt concurrent performance, as in the sketch below.
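A minimal two-session sketch of the effect, using a hypothetical table t_lock in which id is indexed (primary key) and name is not:

CREATE TABLE t_lock (id INT PRIMARY KEY, name VARCHAR(20)) ENGINE=InnoDB;

-- Session 1:
START TRANSACTION;
UPDATE t_lock SET name = 'a' WHERE id = 1;       -- locks only the index entry for id = 1

-- Session 2:
START TRANSACTION;
UPDATE t_lock SET name = 'b' WHERE name = 'x';   -- no usable index: InnoDB locks every row it scans,
                                                 -- so this blocks even though it touches a different row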
    
 
Next-Key locks (gap locks)
    When we retrieve data using a range condition rather than an equality condition and request shared or exclusive locks, InnoDB locks the index entries of the existing records that satisfy the condition; for key values that fall within the range but do not exist, called the "gap" (GAP), InnoDB also locks this gap. This locking mechanism is the so-called Next-Key lock (gap lock).
    For example, if the emp table contains only 101 records, with empid values 1, 2, ..., 100, 101, then the following SQL:
SELECT * FROM emp WHERE empid > 100 FOR UPDATE;
is a range-condition retrieval. InnoDB locks not only the record with empid 101, which satisfies the condition, but also the "gap" for empid values greater than 101, even though such records do not exist.
    InnoDB uses gap locks for two reasons. On the one hand, they prevent phantom reads, satisfying the requirements of the relevant isolation levels: in the example above, without the gap lock, another transaction could insert a record with empid greater than 100, and if this transaction then executed the statement again, a phantom read would occur. On the other hand, they satisfy the needs of recovery and replication. The effect of gap locks on recovery and replication, and InnoDB's use of gap locks under different isolation levels, are discussed elsewhere.
    Clearly, when records are locked using range conditions, InnoDB's locking mechanism blocks concurrent inserts of any key value within the range, which can easily cause serious lock waits, as the sketch below illustrates. Therefore, in actual development, and especially for applications with many concurrent inserts, we should try to optimize the business logic to access and update data with equality conditions and avoid range conditions.
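A minimal two-session sketch of the gap-lock effect under REPEATABLE READ, continuing the emp example above:

-- Session 1:
START TRANSACTION;
SELECT * FROM emp WHERE empid > 100 FOR UPDATE;   -- locks record 101 and the gap above 100

-- Session 2:
INSERT INTO emp (empid) VALUES (102);             -- blocks: the new key falls into the locked gap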
 
 
When to use table locks
    For InnoDB tables, row-level locks should be used in most cases, since transactions and row locks are usually the very reasons InnoDB was chosen. But in a few special transactions, table-level locks can also be considered.
- The first case: the transaction needs to update most or all of the data in a relatively large table. With the default row locks, such a transaction not only executes inefficiently, it may also cause long lock waits and lock conflicts for other transactions; in this case a table lock may be considered to speed up the transaction.
- The second case: the transaction involves multiple tables, is relatively complex, and is likely to cause deadlocks and therefore a large number of transaction rollbacks. Here it may be worth locking all the tables involved in the transaction at once, avoiding deadlocks and reducing the overhead that rollbacks impose on the database.
    Of course, these two kinds of transactions should not be too common in an application; otherwise MyISAM tables should be considered instead.
    When using table locks with InnoDB, two things deserve attention.
    (1) Although it is possible to add a table-level lock to an InnoDB table with LOCK TABLES, note that the table lock is not managed by the InnoDB storage engine layer but by the MySQL Server layer. Only when autocommit = 0 and innodb_table_locks = 1 (the default setting) does the InnoDB layer know about the table locks taken by MySQL, and only then can the MySQL Server perceive InnoDB's row locks; in this case InnoDB can automatically detect deadlocks involving table locks. Otherwise, InnoDB cannot automatically detect and handle such deadlocks.
    (2) When locking InnoDB tables with LOCK TABLES, AUTOCOMMIT must be set to 0, otherwise MySQL will not take the table locks; and before the end of the transaction, do not unlock the tables with UNLOCK TABLES, because UNLOCK TABLES implicitly commits the transaction. Conversely, COMMIT or ROLLBACK does not release table-level locks taken with LOCK TABLES; the table locks must be released with UNLOCK TABLES. The correct pattern is shown in the following statements.
    For example, if you need to write to table t1 and read from table t2, you can do the following:
SET AUTOCOMMIT = 0;
LOCK TABLES t1 WRITE, t2 READ, ...;
[do something with tables t1 and t2 here];
COMMIT;
UNLOCK TABLES;
 
About deadlocks
    MyISAM table locking is deadlock free, because MyISAM always acquires all the locks it needs at once: they are either all granted or it waits, so deadlocks cannot occur. In InnoDB, however, apart from transactions consisting of a single SQL statement, locks are acquired gradually, which makes deadlocks in InnoDB possible.
    After a deadlock occurs, InnoDB can generally detect it automatically, make one transaction release its locks and roll back, and let the other transaction obtain the locks and complete. However, when external locks or table locks are involved, InnoDB cannot always detect the deadlock automatically; this has to be handled by setting the lock wait timeout parameter innodb_lock_wait_timeout. Note that this parameter is not only for resolving deadlocks: when concurrency is high, if a large number of transactions hang because they cannot obtain the locks they need, they consume a great deal of resources and can cause severe performance problems or even bring the database down. Setting an appropriate lock wait timeout threshold avoids this, as in the sketch below.
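A minimal sketch of tuning the timeout (the value is in seconds; 50 is the server default):

SET GLOBAL innodb_lock_wait_timeout = 50;    -- applies to new connections
SET SESSION innodb_lock_wait_timeout = 10;   -- the current connection only
-- A transaction that waits longer than this fails with:
-- ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction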
    Generally speaking, most deadlocks can be avoided by adjusting the application's business flow, the database object design, the transaction size and the SQL statements that access the database. Several common ways of avoiding deadlocks are introduced below with examples.
    (1) In an application, if different programs access multiple tables concurrently, they should agree to access the tables in the same order; this greatly reduces the chance of deadlock. If two sessions access two tables in different orders, the chance of a deadlock is very high; if they access them in the same order, deadlocks can be avoided.
    (2) When a program processes data in batches, pre-sorting the data so that every thread processes the records in a fixed order also greatly reduces the possibility of deadlock.
    (3) In a transaction, if records are going to be updated, a sufficient lock level, namely an exclusive lock, should be requested directly, rather than requesting a shared lock first and upgrading to an exclusive lock at update time, which can lead to deadlock.
    (4) Under the REPEATABLE-READ isolation level, if two threads both place exclusive locks on records matching the same condition with SELECT ... FOR UPDATE, and no matching record exists, both threads succeed in locking. Each then finds that the record does not exist and tries to insert a new one; if both threads do this, a deadlock occurs. In this case, changing the isolation level to READ COMMITTED avoids the problem (a sketch of this scenario follows the list).
    (5) Under the READ COMMITTED isolation level, if two threads first execute SELECT ... FOR UPDATE to determine whether a matching record exists and insert it if not, only one thread can insert successfully; the other waits on the lock. When the first thread commits, the second thread fails with a duplicate-key error, but despite the error it still holds an exclusive lock on the record! If a third thread now requests an exclusive lock as well, a deadlock occurs. In this case, the insert can simply be attempted directly and the duplicate-key exception caught afterwards, or, whenever a duplicate-key error occurs, a ROLLBACK should be issued to release the exclusive lock.
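A minimal two-session sketch of scenario (4) under REPEATABLE READ, using a hypothetical table t_user with an indexed name column:

-- Session 1:
START TRANSACTION;
SELECT * FROM t_user WHERE name = 'alice' FOR UPDATE;   -- no matching row: only a gap lock is taken

-- Session 2:
START TRANSACTION;
SELECT * FROM t_user WHERE name = 'alice' FOR UPDATE;   -- also succeeds: gap locks do not conflict

-- Session 1:
INSERT INTO t_user (name) VALUES ('alice');             -- blocks on session 2's gap lock

-- Session 2:
INSERT INTO t_user (name) VALUES ('alice');             -- deadlock: InnoDB rolls back one of the sessions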
 
    Although the design and optimization measures above can greatly reduce deadlocks, deadlocks are hard to avoid entirely. It is therefore good programming practice to always catch and handle deadlock exceptions in application code.
    If a deadlock occurs, the SHOW ENGINE INNODB STATUS command can be used to determine the cause of the most recent deadlock and work out improvements.
 
 
-----------------------------------------------------------------------
 
Summary
    For MyISAM table locks, the main points are:
    (1) Shared read locks (S) are compatible with each other, but shared read locks (S) and exclusive write locks (X), as well as exclusive write locks (X) among themselves, are mutually exclusive; that is, reads and writes are serialized.
    (2) Under certain conditions, MyISAM allows queries and inserts to run concurrently, which applications can use to reduce lock contention between queries and inserts on the same table.
    (3) MyISAM's default lock scheduling gives priority to writes, which is not necessarily suitable for every application; users can adjust read-write lock contention by setting the LOW_PRIORITY_UPDATES parameter or by specifying the LOW_PRIORITY option on INSERT, UPDATE and DELETE statements.
    (4) Because table locks have large granularity and reads and writes are serialized, a MyISAM table with many updates may suffer serious lock waits; in that case, switching to InnoDB tables can be considered to reduce lock conflicts.
 
    For InnoDB tables, the main points are:
    (1) InnoDB row locks are implemented on indexes; if data is not accessed through an index, InnoDB will use table locks.
    (2) InnoDB has a gap-locking mechanism, and there are specific reasons why InnoDB uses gap locks.
    (3) Under different isolation levels, InnoDB uses different locking mechanisms and consistent-read strategies.
    (4) MySQL's recovery and replication also have a considerable influence on InnoDB's locking mechanism and consistent-read strategy.
    (5) Lock conflicts, and even deadlocks, are difficult to avoid completely.
    Once the characteristics of InnoDB locks are understood, users can reduce lock conflicts and deadlocks through design and SQL adjustments, including:
- Use a lower isolation level where possible.
- Design indexes carefully and access data through indexes as much as possible, so that locking is more precise and the chance of lock conflicts is reduced.
- Choose a reasonable transaction size; small transactions have a smaller probability of lock conflicts.
- When explicitly locking record sets, request a sufficient lock level at once. For example, to modify data, request an exclusive lock directly rather than a shared lock first and an exclusive lock at modification time, which easily leads to deadlocks.
- When different programs access a group of tables, have them agree to access the tables in the same order and, within a single table, to access rows in a fixed order as far as possible. This greatly reduces the chance of deadlock.
- Access data with equality conditions where possible, to avoid the effect of gap locks on concurrent inserts.
- Do not request lock levels higher than necessary, and do not explicitly lock queries unless it is really needed.
- For specific transactions, table locks may be used to improve processing speed or reduce the possibility of deadlock.
