Transaction isolation levels, MVCC, and an introduction to common database locks

The basic elements of a transaction (ACID)

1. Atomicity: all of the operations after a transaction starts either all complete or none of them do; the transaction cannot stop partway. If an error occurs during execution, the transaction is rolled back to the state before it started, as if none of the operations had ever happened. In other words, a transaction is an indivisible whole, like the atom in high-school chemistry that was once taught to be the basic unit of matter.

2. Consistency: the database's integrity constraints are intact both before the transaction begins and after it ends. For example, if A transfers money to B, it must not happen that A is debited but B never receives the money.

3. Isolation: at any one time, only one transaction may operate on a given piece of data, and different transactions do not interfere with each other. For example, while A is withdrawing money from a bank card, B cannot transfer money to that card until A's withdrawal completes.

4. Durability: once a transaction completes, all of its updates are permanently saved in the database and cannot be rolled back.

Of these, durability is guaranteed by the redo log, while atomicity (rollback) is guaranteed by the undo log; together they keep the database consistent.

Problems that easily arise in concurrent environments

1. Lost update
T1 and T2 are two transactions modifying the same data: T1 modifies it first, T2 modifies it afterwards, and T2's change overwrites T1's.
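A minimal Python sketch of the lost update, with the two transactions' interleaved steps written out sequentially (the balance and amounts are illustrative, not from the original post):

```python
# Two "transactions" do read-modify-write on the same balance without locking.
balance = 100

t1_read = balance       # T1 reads 100
t2_read = balance       # T2 also reads 100, before T1 writes back

balance = t1_read + 50  # T1 writes 150 (deposit 50)
balance = t2_read - 30  # T2 writes 70 (withdraw 30), overwriting T1's update

# The final balance is 70, not the expected 120: T1's modification is lost.
```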

2. Dirty read
T1 modifies a piece of data and T2 then reads it. If T1 later rolls back the modification, the data T2 read is dirty data.

3. Non-repeatable read
T2 reads a piece of data, and T1 then modifies it. If T2 reads the data again, the result differs from its first read.

Note: the difference between a non-repeatable read and a dirty read is that a dirty read reads uncommitted data, whereas a non-repeatable read reads data that has already been committed.

4. Phantom read
T1 reads data in some range, T2 inserts new data into that range, and when T1 reads the range again, the result differs from its first read.
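A toy phantom-read simulation in Python (a deliberate simplification: the "table" is a list and the range query is a filter; no real database is involved):

```python
# T1 performs the same range query twice; T2 inserts a row in between.
table = [1, 3, 5]

def range_read(rows, lo, hi):
    """Simulate 'SELECT * WHERE lo <= x <= hi'."""
    return [r for r in rows if lo <= r <= hi]

first = range_read(table, 0, 10)   # T1's first read: [1, 3, 5]
table.append(4)                    # T2 inserts a new row inside the range
second = range_read(table, 0, 10)  # T1's second read: a phantom row appears
```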

The SQL standard defines four isolation levels. Each level specifies which changes made within a transaction are visible inside that transaction and to other transactions, and which are not. A lower isolation level generally allows higher concurrency and lower system overhead.

Transaction isolation levels

1. Read uncommitted (READ UNCOMMITTED)
Modifications made by a transaction are visible to other transactions even before they are committed, which leads to dirty reads. It is the level with the weakest isolation and the highest concurrency; it easily produces concurrency problems and is almost never used in real business scenarios.

2. Read committed (READ COMMITTED)
A transaction can only read changes that have already been committed. In other words, changes made by a transaction are invisible to other transactions until it commits; notice that this level already satisfies the definition of isolation given above. It effectively solves the dirty-read problem. It is worth noting, however, that during the interval between a transaction T starting and committing, many other transactions may commit; if those transactions modify data that T has read, T's repeated reads can return different values, which is exactly the non-repeatable-read problem.

3. Repeatable read (REPEATABLE READ)
Guarantees that reading the same data multiple times within the same transaction yields the same result. This solves the dirty-read problem as well as the non-repeatable-read problem.
Note, however, that REPEATABLE READ only guarantees that repeated reads of the same rows return the same data; for range reads, phantom reads can still occur.

4. Serializable (SERIALIZABLE)
This is the most heavy-handed approach: a pessimistic concurrency strategy that takes a lock on every row it reads. It can cause many timeout and lock-contention problems, but because transactions execute serially, no concurrency problems arise. It is rarely used in practice; it is only considered when data consistency must be strictly guaranteed and the lack of concurrency is acceptable.
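The pessimistic strategy can be sketched in Python with an exclusive lock held for each whole transaction (a simplification: `threading.Lock` stands in for the database's lock manager, and one global lock stands in for row locks):

```python
import threading

balance = 100
lock = threading.Lock()

def transfer(delta):
    """One 'transaction': read-modify-write under an exclusive lock."""
    global balance
    with lock:                  # lock held for the whole transaction
        read = balance
        balance = read + delta  # no other transaction can interleave here

threads = [threading.Thread(target=transfer, args=(1,)) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Serial execution: all 100 increments survive, so balance is exactly 200.
```

Without the lock, the reads and writes could interleave and lose updates; with it, execution is equivalent to some serial order, which is precisely what SERIALIZABLE promises.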

 

Figure (from the original post): an example of transaction isolation.

Why long transactions should be avoided

A long transaction means the system keeps very old read views. Since such a transaction may access any data in the database, all of the rollback records it might need must be retained until it commits, which can consume a great deal of storage space.
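The retention rule can be sketched with a toy purge model: old versions can only be discarded up to the oldest active read view, so a single long transaction pins everything committed after it started (the transaction ids below are illustrative):

```python
# Committed row versions, tagged by the transaction id that created them.
versions = [(trx_id, f"value-{trx_id}") for trx_id in range(1, 11)]

# One long transaction opened its read view long ago, at trx_id 2.
oldest_active_view = 2

# Purge may discard only versions older than the oldest view still in use.
purgeable = [v for v in versions if v[0] < oldest_active_view]
retained = [v for v in versions if v[0] >= oldest_active_view]

# 9 of the 10 versions must be kept alive for the sake of one old transaction.
```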

In MySQL 5.5 and earlier, the rollback log was stored together with the data dictionary in the ibdata file. Even after the long transactions finally commit and the rollback segments are cleaned up, the file does not shrink. The author has seen a database with only 20 GB of data but 200 GB of rollback logs; in the end there was no choice but to clean up the rollback segments by rebuilding the database.

Besides its impact on rollback logs, a long transaction also holds lock resources, which can drag down the entire database.

Common database locks

Note: a record lock locks the index entry for a record, not the record itself. If the table has no index defined, InnoDB automatically creates a hidden clustered index, so record locks can still be used.

An example of MVCC

In InnoDB's MVCC implementation, each row actually carries three extra hidden columns: a transaction id (DB_TRX_ID) recording the last transaction that modified the row, a roll pointer (DB_ROLL_PTR) to the undo record for the previous version, and a hidden row id (DB_ROW_ID) that is used as the clustered index key only when the table has no primary key and serves no other purpose.
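A toy snapshot read over such a version chain (modeled here as a newest-first list; real InnoDB follows DB_ROLL_PTR into the undo log, and its visibility check is more involved than this single comparison):

```python
# Version chain for one row, newest first: (creator trx_id, value).
versions = [
    (30, "v3"),
    (20, "v2"),
    (10, "v1"),
]

def snapshot_read(chain, view_trx_id):
    """Return the newest version visible to a read view opened at view_trx_id."""
    for trx_id, value in chain:
        if trx_id <= view_trx_id:  # created at or before the snapshot
            return value
    return None  # the row did not exist yet for this reader

early = snapshot_read(versions, 25)  # sees "v2": v3 is too new for this view
late = snapshot_read(versions, 35)   # sees "v3", the latest version
```

This is how REPEATABLE READ can return the same data on every read without blocking writers: each reader simply walks back to the version that matches its snapshot.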

 

