The four characteristics of transactions, transaction isolation levels, and the differences between dirty reads, non-repeatable reads, and phantom reads

First, what is a transaction?
A transaction is the smallest unit of work in database operations: a series of operations executed as a single logical unit of work. These operations are submitted to the system as a whole and either all execute or none execute; a transaction is a set of operations that cannot be divided any further (a logical unit of work).

Four characteristics of a transaction:
1. Atomicity: emphasizes that a transaction is indivisible.
A transaction is a logical unit of work in the database; the operations it contains are either all performed or none are performed.

2. Consistency: the integrity of the data is consistent before and after the transaction executes.
The result of executing a transaction must take the database from one consistent state to another consistent state. The database is in a consistent state only when it contains the results of successfully committed transactions. If the database system fails while running and some transactions are forcibly interrupted before completion, and part of the modifications made by these unfinished transactions has already been written to the physical database, then the database is in an incorrect, or inconsistent, state.

3. Isolation: while a transaction is executing, it should not be interfered with by other transactions.
The execution of one transaction cannot interfere with other transactions. That is, the operations and data used inside a transaction are isolated from other concurrent transactions; transactions executing concurrently cannot interfere with each other.

4. Durability: once the transaction ends, the data is persisted to the database.
Also known as permanence: once a transaction commits, its changes to the data in the database are permanent. Subsequent operations or failures should not have any effect on its results.

Native JDBC transaction handling is as follows:

try {
    connection.setAutoCommit(false);   // start the transaction
    // database operations...
    connection.commit();               // commit if everything succeeded
} catch (Exception ex) {
    connection.rollback();             // roll back on any failure
} finally {
    connection.setAutoCommit(true);    // restore auto-commit mode
}

Second, what problems can occur when there is no transaction isolation?

1. Dirty read (reading uncommitted data; concerns a single row)
2. Non-repeatable read (two reads return different data; focuses on updates and deletes; also concerns a single row)
3. Phantom read (two reads return different data; focuses on inserts; concerns multiple rows)

1. Dirty read: a transaction can read data that another transaction has not yet committed. Note that this is about the data itself and can be understood as applying to a single row.

Personal understanding: transaction A starts and queries some data; transaction B then starts and modifies one of those rows without committing. When transaction A queries again, the two reads of that row do not match. If transaction B then rolls back, transaction A has read modified data that transaction B never committed. The root cause is that data which had not been committed could still be read.
This can be solved by Read Committed.
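As an illustration only, here is a minimal JDBC sketch of this scenario, assuming a hypothetical account table with columns id and balance and placeholder connection details. Whether the uncommitted change is actually visible depends on the database: MySQL/InnoDB honors READ UNCOMMITTED, while Oracle and PostgreSQL never expose uncommitted data.

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DirtyReadDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; the 'account' table (id, balance) is hypothetical.
        String url = "jdbc:mysql://localhost:3306/test";
        try (Connection writer = DriverManager.getConnection(url, "user", "pass");
             Connection reader = DriverManager.getConnection(url, "user", "pass")) {

            // Transaction B: modify a row but do not commit yet.
            writer.setAutoCommit(false);
            try (Statement st = writer.createStatement()) {
                st.executeUpdate("UPDATE account SET balance = balance - 100 WHERE id = 1");
            }

            // Transaction A: at READ UNCOMMITTED it can see B's uncommitted change (a dirty read).
            reader.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
            try (Statement st = reader.createStatement();
                 ResultSet rs = st.executeQuery("SELECT balance FROM account WHERE id = 1")) {
                if (rs.next()) {
                    BigDecimal dirtyValue = rs.getBigDecimal("balance");
                    System.out.println("balance seen by A: " + dirtyValue);
                }
            }

            // B rolls back, so the value A read never officially existed.
            writer.rollback();
        }
    }
}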

2. Non-repeatable read: within one transaction, two reads return different data. Note that this is again about the data itself, a single row, with the focus on the modification and deletion of data, so it can be solved by locking rows.

Personal understanding: transaction A queries some data; transaction B then starts, modifies one of those rows and commits (note that here it commits). When transaction A queries again, it finds that the same row has changed, so the two reads inside transaction A returned different data for the same row. The root cause is that while transaction A was still reading, another transaction was able to modify the data.
Read Committed is not enough to solve this; Repeatable Read is needed.
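A sketch of transaction A's side of this scenario, reusing the same hypothetical account table; the readBalance helper is made up for the example. The only difference from the dirty read is that transaction B commits between A's two reads.

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

class NonRepeatableReadSketch {
    // Hypothetical helper: read one account's balance inside the caller's current transaction.
    static BigDecimal readBalance(Connection conn, int id) throws Exception {
        try (PreparedStatement ps =
                 conn.prepareStatement("SELECT balance FROM account WHERE id = ?")) {
            ps.setInt(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getBigDecimal("balance") : null;
            }
        }
    }

    static void transactionA(Connection connA) throws Exception {
        connA.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
        connA.setAutoCommit(false);

        BigDecimal first = readBalance(connA, 1);
        // ... transaction B updates the row id = 1 on another connection and commits here ...
        BigDecimal second = readBalance(connA, 1);   // may differ from 'first': a non-repeatable read

        connA.commit();
        // Switching A to Connection.TRANSACTION_REPEATABLE_READ makes both reads agree.
        System.out.println("first = " + first + ", second = " + second);
    }
}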

3. Phantom read: within one transaction, two reads return different data. Note that here the issue is the number of rows: it concerns multiple rows, i.e., a result set, and the focus is on newly inserted data, so it can be solved by locking the table.

Personal understanding: transaction A queries some data; transaction B then starts, inserts an additional row and commits (note that here it commits). When transaction A queries again, it finds that the result set is different. The "phantom" is the extra record that appears. Read Committed is not enough to solve this; Serializable is needed.
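A sketch of the phantom-read pattern, again against the hypothetical account table. Here it is the size of the result set that changes, because transaction B inserts and commits a new matching row. Whether the phantom is actually observable at Repeatable Read depends on the implementation: a purely lock-based engine can exhibit it, while MVCC snapshot reads often hide it (see the discussion of MVCC below).

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

class PhantomReadSketch {
    // Count the rows matching a range condition inside the caller's current transaction.
    static int countAccounts(Connection conn) throws Exception {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM account WHERE balance > 0")) {
            rs.next();
            return rs.getInt(1);
        }
    }

    static void transactionA(Connection connA) throws Exception {
        connA.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
        connA.setAutoCommit(false);

        int before = countAccounts(connA);
        // ... transaction B inserts a new matching row on another connection and commits here ...
        int after = countAccounts(connA);   // may be larger than 'before': the extra row is a phantom

        connA.commit();
        // Connection.TRANSACTION_SERIALIZABLE is the level that rules phantoms out entirely.
        System.out.println("before = " + before + ", after = " + after);
    }
}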

The difference between non-repeatable reads and phantom reads:
Overall, both query the data twice and get different results from the two queries.
But from the point of view of concurrency control, the difference is large:
for the former, it is enough to lock the records that satisfy the condition;
for the latter, the records satisfying the condition and the records near them must also be locked.

To avoid non-repeatable reads, row locks are needed.
To avoid phantom reads, table locks are needed.

The two are indeed somewhat similar. However, non-repeatable reads focus on update and delete, while phantom reads focus on insert.

If a locking mechanism is used to implement these two isolation levels, then under Repeatable Read the data read by the first SQL statement is locked so that other transactions cannot modify it, which achieves repeatable reads. However, this approach cannot lock rows that have not been inserted yet, so even after transaction A has read (or modified) all the existing rows, transaction B can still insert new rows and commit; transaction A will then find inexplicable extra rows that were not there before. That is a phantom read, and row locks cannot prevent it. The Serializable isolation level is needed: reads take read locks, writes take write locks, and read and write locks are mutually exclusive. This effectively prevents phantom reads, non-repeatable reads, dirty reads, and similar problems, but it greatly reduces the database's ability to handle concurrency.
So the biggest difference between non-repeatable reads and phantom reads lies in the locking mechanism each needs in order to be solved.
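To make the row-locking idea concrete, here is a sketch (not tied to any particular engine's exact locking rules) in which a transaction locks the rows it reads with SELECT ... FOR UPDATE; the account table is the same hypothetical one as above.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

class RowLockSketch {
    static void lockMatchingRows(Connection connA) throws Exception {
        connA.setAutoCommit(false);
        // SELECT ... FOR UPDATE takes row locks on the matching rows: other transactions
        // cannot UPDATE or DELETE them until A commits, which prevents non-repeatable reads.
        try (Statement st = connA.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT id, balance FROM account WHERE balance > 0 FOR UPDATE")) {
            while (rs.next()) {
                // work with the locked rows ...
            }
        }
        // A pure row lock cannot cover rows that do not exist yet, so another transaction
        // may still INSERT a new row with balance > 0 before the commit below: a phantom.
        connA.commit();
    }
}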

The above uses a pessimistic locking mechanism to handle these two problems, but mature databases such as MySQL, Oracle, and PostgreSQL, for performance reasons, avoid both problems with MVCC (multi-version concurrency control), which is built on optimistic concurrency control.

Third, transaction isolation levels

DEFAULT
Use the database's own default isolation level.
ORACLE: Read Committed; MySQL: Repeatable Read.
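To see what DEFAULT resolves to for a particular connection, JDBC can report the current isolation level; a small sketch with placeholder connection details:

import java.sql.Connection;
import java.sql.DriverManager;

class ShowDefaultIsolation {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; replace with your own database.
        try (Connection conn =
                 DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "user", "pass")) {
            int level = conn.getTransactionIsolation();
            // e.g. Connection.TRANSACTION_REPEATABLE_READ (4) on MySQL,
            //      Connection.TRANSACTION_READ_COMMITTED (2) on Oracle and PostgreSQL.
            System.out.println("current isolation level constant: " + level);
        }
    }
}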

Read uncommitted
Read uncommitted: as the name implies, a transaction can read data that another transaction has not yet committed.

Read committed
Read committed: as the name implies, a transaction must wait until another transaction commits before it can read that transaction's data.
Solves dirty reads, but cannot solve non-repeatable reads or phantom reads.

Repeatable read
Repeatable read: once a transaction has started and read some data, other transactions are no longer allowed to modify that data.
Solves non-repeatable reads, but cannot solve phantom reads.

Serializable (serialization)
Serializable is the highest transaction isolation level. At this level, transactions are executed in a serialized order, which avoids dirty reads, non-repeatable reads, and phantom reads. However, this isolation level is inefficient and consumes considerable database performance, so it is not generally used.
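The four levels map directly onto JDBC constants on java.sql.Connection; a sketch of choosing one before starting a transaction (connection details are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

class IsolationLevelSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:mysql://localhost:3306/test", "user", "pass")) {
            // The four standard levels, from weakest to strongest:
            //   Connection.TRANSACTION_READ_UNCOMMITTED
            //   Connection.TRANSACTION_READ_COMMITTED
            //   Connection.TRANSACTION_REPEATABLE_READ
            //   Connection.TRANSACTION_SERIALIZABLE
            conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
            conn.setAutoCommit(false);
            try {
                // database operations ...
                conn.commit();
            } catch (Exception ex) {
                conn.rollback();
                throw ex;
            }
        }
    }
}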

Origin www.cnblogs.com/zhangsonglin/p/11131705.html