Java Architecture Road - (MySQL Underlying Principles) MySQL Transaction Isolation and MVCC

  In earlier posts we talked in general terms about MySQL's underlying structures, such as B+ trees and hash indexes, and later about MySQL optimization with EXPLAIN. This time let's talk about MySQL locks.

MySQL locks

  In terms of performance, locks are divided into optimistic locks (implemented by comparing versions) and pessimistic locks; optimistic locks perform better than pessimistic locks.

  By type of database operation, locks are divided into read locks and write locks (both of which are pessimistic locks).

  Read lock (shared lock): multiple read operations on the same data can run at the same time without affecting each other. Other threads can still read the data but cannot write it.

  Write lock (exclusive lock): until the current write operation completes, it blocks other write locks and read locks. Threads other than the lock holder can do nothing with the data.

  By the granularity of the data being locked, locks are divided into table locks and row locks, plus the rarely mentioned gap lock.

We will focus mainly on table locks and row locks, and also cover gap locks.

Note: almost every pessimistic lock involves waiting for a lock.

Table lock 

  As the name suggests, a table lock locks the entire table. The overhead is small, locking is fast, and deadlocks do not occur; but the lock granularity is large, the probability of lock conflicts is highest, and concurrency is lowest.

Let's look at a few commands, using the student table as an example.

Acquire table locks: lock table tablename1 read(write), tablename2 read(write);

lock table student write;

View table status (whether locked):

show open tables;

If a table shows a non-zero In_use value, a lock already exists on it.

Unlock table: unlock tables;

unlock tables;

MyISAM automatically acquires read locks on all the tables involved before executing a query (SELECT), and automatically acquires write locks on the tables involved before executing write operations (INSERT, UPDATE, DELETE).
1. A read operation on a MyISAM table (read lock) does not block other processes' read requests on the same table, but it does block write requests to the same table. Other processes' writes run only after the read lock is released.
2. A write operation on a MyISAM table (write lock) blocks other processes' reads and writes on the same table. Other processes' reads and writes run only after the write lock is released.
3. MyISAM tables support neither row locks nor transactions.
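A minimal two-session sketch of this behavior, assuming the student table uses the MyISAM engine (the session labels are comments, not SQL):

-- Session 1: take a table-level read lock
lock table student read;
select * from student;        -- works
-- insert into student values (...);   -- would fail: the session holding a READ lock cannot write to the table

-- Session 2 (a separate connection):
select * from student;        -- works, the read lock is shared
update student set name = 'x' where id = 1;   -- blocks until session 1 releases the lock

-- Session 1: release all table locks held by this session
unlock tables;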

Row lock

  A row lock locks the individual row being operated on. The overhead is larger and locking is slower; deadlocks can occur; but the lock granularity is smallest, the probability of lock conflicts is lowest, and concurrency is highest.

Having come this far we have to mention ACID, so let's review it.

A (Atomicity):

A transaction either completes entirely or not at all; a partial result never appears. In a transfer from A to B, for example, it cannot happen that A's balance decreases while B's has not increased: either everything succeeds or everything fails (and is rolled back). The whole series of actions can be treated as a single atom.
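As a minimal sketch of the transfer example (the account table and its columns are made up for illustration):

start transaction;
update account set balance = balance - 100 where name = 'A';
update account set balance = balance + 100 where name = 'B';
commit;       -- both changes become visible together
-- rollback;  -- if anything failed, undo both changes instead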

C (Consistency):

A transaction takes the database from one consistent state to another; if A's balance decreases by 100, B's cannot increase by only 30.

I (Isolation):

Until a transaction has completed and committed, the data it works with is not disturbed by other transactions, and its changes are invisible to them. Of course, there is also the concept of isolation levels here; at different isolation levels the behavior differs.

D (Durability):

Once a transaction has been committed, the modifications it made are stored in the database permanently.

Next come the problems caused by concurrent transactions; let's first go over what they look like.

  Lost update

When two or more transactions select the same row and then update it based on the value they originally selected, each transaction is unaware of the others, so a lost update occurs: the last update overwrites the updates made by the other transactions.

Example: suppose two threads are selling tickets at the same time, one ticket each. Thread A opens a transaction, thread B opens a transaction, and both query the stock and see 10 tickets left. A sells one: 10 - 1 = 9 remaining. Thread B also sells one: 10 - 1 = 9 remaining. A commits, then B commits. We clearly sold two tickets, yet the database ends up showing 9, as if only one ticket had been sold.
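A sketch of two ways to avoid the lost update in the ticket example (the ticket table and its columns are assumed for illustration):

-- Option 1: make the decrement itself atomic; the second transaction's update
-- waits on the row lock and then sees the already-decremented value.
start transaction;
update ticket set remaining = remaining - 1 where id = 1 and remaining > 0;
commit;

-- Option 2: pessimistically lock the row while reading it.
start transaction;
select remaining from ticket where id = 1 for update;   -- the row stays locked until commit
update ticket set remaining = remaining - 1 where id = 1;
commit;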

  Dirty read

Transaction B is modifying a record; before B completes and commits, the record is in an inconsistent state. At that moment transaction A also reads the same record. Without any control, A reads this "dirty" data and does further processing based on it, creating a dependency on uncommitted data. This phenomenon is called a "dirty read." Bottom line: transaction A reads data that transaction B has modified but not yet committed, and then operates on that basis. If B subsequently rolls back, the data A read is invalid, which violates consistency.
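A sketch of how a dirty read appears at READ UNCOMMITTED (two sessions on the student table; at READ COMMITTED or above the same steps do not expose the uncommitted value):

-- Session B:
start transaction;
update student set name = 'temp' where id = 1;   -- modified but not committed

-- Session A:
set session transaction isolation level read uncommitted;
start transaction;
select name from student where id = 1;   -- returns 'temp': uncommitted, "dirty" data

-- Session B:
rollback;   -- the value session A read never officially existed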

  Non-repeatable read

At some point transaction A reads some data; when it reads the same data again later, it finds the data has changed or some records have been deleted. This phenomenon is called a "non-repeatable read." Bottom line: transaction A reads modifications that transaction B has already committed, which violates isolation.

  Phantom read

Transaction A re-runs a query with the same search condition and finds new rows, inserted by other transactions, that satisfy its condition. This phenomenon is called a "phantom read." Bottom line: transaction A reads new data that transaction B has committed, which also violates isolation.
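Both of these effects are easy to reproduce at READ COMMITTED; the values and counts below are only illustrative:

-- Session A:
set session transaction isolation level read committed;
start transaction;
select name from student where id = 1;        -- say it returns 'tom'
select count(*) from student where id < 10;   -- say it returns 3

-- Session B (autocommit on):
update student set name = 'jerry' where id = 1;
insert into student (id, name) values (5, 'new');

-- Session A, still in the same transaction:
select name from student where id = 1;        -- now 'jerry'  -> non-repeatable read
select count(*) from student where id < 10;   -- now 4        -> phantom read
commit;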

How these last two arise will make more sense once we get to the MVCC mechanism below, so we will set them aside for now.

With these problems in mind, let's look at how the database deals with them.

Transaction isolation level

The four standard isolation levels and the problems each one allows:
READ UNCOMMITTED: dirty reads, non-repeatable reads, and phantom reads can all occur.
READ COMMITTED: prevents dirty reads; non-repeatable reads and phantom reads can still occur.
REPEATABLE READ (MySQL's default): prevents dirty reads and non-repeatable reads; phantom reads can still occur in principle (InnoDB largely avoids them through MVCC and gap locks).
SERIALIZABLE: prevents all three, at the greatest cost to concurrency.

It is usually set to REPEATABLE READ. The stricter a database's transaction isolation, the smaller the side effects of concurrency, but the higher the cost, because transaction isolation essentially "serializes" transactions to some degree, which obviously conflicts with "concurrency." At the same time, different applications have different requirements for read consistency and isolation; for example, many applications are not sensitive to "non-repeatable read" and "phantom read" and care more about the ability to access data concurrently. To view the current transaction isolation level: show variables like 'tx_isolation'; To set it: set tx_isolation = 'REPEATABLE-READ';

 You can try the other levels yourself: read uncommitted (READ-UNCOMMITTED), read committed (READ-COMMITTED), and serializable (SERIALIZABLE).
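For reference, the commands above in one place (note that in MySQL 8.0 the tx_isolation variable was renamed to transaction_isolation):

show variables like 'tx_isolation';             -- MySQL 5.7 and earlier
show variables like 'transaction_isolation';    -- MySQL 8.0 and later
set tx_isolation = 'READ-COMMITTED';            -- old variable form (5.7 and earlier)
set session transaction isolation level repeatable read;   -- standard form, works on all versions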

MVCC:

  Understanding this is extremely important; once you do, almost everything above falls into place!

The full name is Multi-Version Concurrency Control. The concept sounds abstract, and at first it is not obvious what it actually controls.

Let's use an example. Suppose every row of a MySQL table carries two hidden fields: a create transaction ID and a delete transaction ID, both drawn from an auto-incrementing transaction ID. A newly opened transaction is not given an ID until it executes its first SQL statement; at that point it receives a create transaction ID (assume 0 here), while the delete transaction ID is left unset (empty). We'll use the student table as the example in the diagram.

(Diagram: the student table shown with its hidden create transaction ID and delete transaction ID columns, changing as transactions run.)

 Briefly, here is what the diagram means. Every time SQL runs inside a transaction, a snapshot version number (think of it as a timestamp) is generated. When a row is inserted or updated, its createID field is set to the current version number; when a row is deleted or replaced by an update, its deleteID field is set to the current version number. The version numbers of different transactions are independent of one another. When we query, we only return rows whose createID is less than or equal to our snapshot version number and whose deleteID is either empty or greater than our snapshot version number. MVCC is generally used at the REPEATABLE READ isolation level, and it also applies at READ COMMITTED. The drawback of MVCC is that it keeps multiple versions (snapshots) of rows, which costs extra space, but in exchange each transaction can operate independently.
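A quick way to observe the snapshot behavior yourself (InnoDB, default REPEATABLE READ, two sessions on the student table; the counts are only illustrative):

-- Session A:
start transaction;
select count(*) from student;   -- say it returns 10; the first read establishes the snapshot

-- Session B:
insert into student (id, name) values (100, 'new');
commit;

-- Session A, same transaction:
select count(*) from student;   -- still 10: the query reads from session A's snapshot
commit;
select count(*) from student;   -- now 11: a fresh snapshot once the old transaction has ended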

Gap lock

A brief word about gap locks. Suppose our table's id column is auto-incrementing. We open a transaction and run an update:

 update student set name = '1111'  where id>8 and id<22;

In other words, whether or not rows with id between 8 and 22 actually exist, InnoDB locks the whole range from the largest existing id below 8 up to the smallest existing id beyond the upper bound: inserts and updates inside that range are forbidden, while the rest of the table can still be modified. The exact range depends on what is in your table.

For example, suppose the largest id below 8 in the student table is 6 and there are no rows between 6 and 22. Then for the SQL update student set name = '1111' where id > 8 and id < 22; the locked range is actually (6, 22), and nothing inside that interval can be inserted or modified.
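A sketch of the gap lock in action (InnoDB, REPEATABLE READ, id is the primary key; the exact locked range depends on the rows actually present):

-- Session A:
start transaction;
update student set name = '1111' where id > 8 and id < 22;   -- locks the matching rows plus the gaps around them

-- Session B:
insert into student (id, name) values (10, 'x');   -- blocks: id 10 falls inside the locked gap,
                                                   -- even though no row with id 10 exists
-- (an insert well outside the locked range, with a much larger id, would proceed normally)

-- Session A:
commit;   -- session B's blocked insert can now go through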

 Lock escalation:

   InnoDB's row locks are actually placed on index entries. That means that when we update or delete, we should try to make sure the columns in the where clause are indexed; if the condition cannot use an index, the row lock escalates to a table-level lock.
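A sketch of the escalation, assuming the name column has no index while id is the primary key:

-- Session A:
start transaction;
update student set name = 'aaa' where name = 'tom';   -- no usable index: InnoDB locks every row it scans

-- Session B:
update student set name = 'bbb' where id = 99;   -- blocks even though it targets a completely different row

-- Session A:
commit;   -- only now can session B's update proceed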

Lock analysis: 

  We can use the command show status like 'innodb_row_lock%'; to check row lock contention.

The fields mean the following:

Innodb_row_lock_current_waits: the number of lock waits currently in progress

Innodb_row_lock_time: the total time spent waiting for row locks since the server started

Innodb_row_lock_time_avg: the average time spent per wait

Innodb_row_lock_time_max: the longest single wait since the server started

Innodb_row_lock_waits: the total number of waits since the server started

Deadlock

That is, two transactions each waiting for a lock the other holds: a deadlock.

To see the most recent deadlock information: show engine innodb status \G; In most cases MySQL can detect a deadlock automatically and roll back the transaction that caused it, but in some cases MySQL cannot detect the deadlock on its own.
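A minimal deadlock sketch (two sessions, rows with id 1 and id 2 assumed to exist):

-- Session A:
start transaction;
update student set name = 'a' where id = 1;   -- A locks row 1

-- Session B:
start transaction;
update student set name = 'b' where id = 2;   -- B locks row 2

-- Session A:
update student set name = 'a' where id = 2;   -- A now waits for B's lock on row 2

-- Session B:
update student set name = 'b' where id = 1;   -- B would wait for A's lock on row 1: deadlock;
                                              -- InnoDB detects it and rolls one transaction back (error 1213)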

Summary

Whenever possible, retrieve data through an index, so that row locks do not escalate to table locks when no index can be used; design indexes sensibly.

Minimize the range that gets locked: make search conditions as narrow as possible and try to avoid gap locks.

Try to keep transactions small, reducing the amount of resources locked and the time they are held; put the SQL that takes locks as late in the transaction as possible.

Use the lowest transaction isolation level your application can tolerate.

  


Source: www.cnblogs.com/cxiaocai/p/11594151.html