Detailed explanation of database transactions: Spring transaction basics

Isolation levels and locks in database transactions

Database transactions play a very important role in back-end development, and how to guarantee the correctness and safety of data reads and writes is an issue we need to study.
ACID
First, a summary of the four properties (ACID) that a database transaction must satisfy to execute correctly:

Atomicity: a transaction is an indivisible, smallest unit of work; the operations in a transaction either all complete or none of them do, never only a part;
Consistency: the data in the database is in a correct state before the transaction executes and is still in a correct state after it executes; that is, data integrity constraints are not violated. For example, in a bank transfer where A transfers to B, B must receive exactly what A sent; if A is debited but B is never credited, the consistency of the data is broken, so pay attention to careful design in high-concurrency business.
Isolation: concurrently executing transactions do not affect one another, and the operations inside one transaction are isolated from other transactions; the degree of isolation is specified by the transaction isolation level;
Durability: once a transaction commits successfully, its changes to the data in the database are permanent and will not be lost or become inconsistent because of later failures.
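To make atomicity and consistency concrete, here is a minimal in-memory sketch in plain Java (the Bank class and its accounts are hypothetical, not a real database API): the transfer either applies both the debit and the credit, or restores the starting state, so the invariant "total money is constant" always holds.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory "bank" illustrating all-or-nothing semantics.
class Bank {
    private final Map<String, Integer> balances = new HashMap<>();

    Bank() {
        balances.put("A", 100);
        balances.put("B", 0);
    }

    int balanceOf(String account) {
        return balances.get(account);
    }

    // Runs as an all-or-nothing unit: if any step fails, we restore the
    // snapshot taken at the start, mimicking a transaction rollback.
    boolean transfer(String from, String to, int amount) {
        Map<String, Integer> snapshot = new HashMap<>(balances); // "begin"
        try {
            int src = balances.get(from);
            if (src < amount) {
                throw new IllegalStateException("insufficient funds");
            }
            balances.put(from, src - amount);
            balances.put(to, balances.get(to) + amount);
            return true;                                          // "commit"
        } catch (RuntimeException e) {
            balances.clear();
            balances.putAll(snapshot);                            // "rollback"
            return false;
        }
    }
}
```

A failed transfer leaves both balances exactly as they were, which is the behavior a real database guarantees via its transaction log.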
Transaction isolation level
Most database transaction operations are executed concurrently, which may encounter the following problems:

Lost update: two transactions update the same row concurrently, and the later transaction's update overwrites the earlier one's, so the first transaction's change is lost, with serious consequences. It is usually caused by missing locks.
Dirty read: transaction A reads data that transaction B has not yet committed and acts on it. If B then rolls back, the data A read was never valid, which causes problems.
Non-repeatable read: reading the same data twice within one transaction returns different results. For example, after transaction B reads the data for the first time, transaction A updates it and commits, so B's second read differs from its first.
Phantom read: transaction A reads rows newly committed by transaction B. For example, transaction A modifies every row in a table according to some rule (a whole-table operation) while transaction B inserts a fresh row into the same table; when A later looks at the table, it finds a row that was never modified, as if it had seen a phantom.
Note: the difference between non-repeatable read and phantom read is that a non-repeatable read involves an UPDATE on existing rows, while a phantom read involves an INSERT of new rows, and the countermeasures differ: for non-repeatable reads a row-level lock on the record is enough to prevent it from being updated, but for phantom reads a table-level lock is needed to prevent new rows from being inserted. Locks are discussed below.
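The lost-update case can be shown with a deterministic interleaving in plain Java (the variables stand in for SQL reads and writes; no real database is involved):

```java
// Deterministic sketch of a "lost update": two transactions read the same
// balance, each computes its own new value, and the second write silently
// overwrites the first.
class LostUpdateDemo {
    static int lostUpdate() {
        int balance = 100;

        // Both transactions read the row before either one writes it back.
        int readByTx1 = balance;   // tx1: SELECT balance  -> 100
        int readByTx2 = balance;   // tx2: SELECT balance  -> 100

        balance = readByTx1 + 10;  // tx1: UPDATE balance = 110, commit
        balance = readByTx2 + 20;  // tx2: UPDATE balance = 120, commit

        // tx1's +10 is gone: with proper locking the result would be 130.
        return balance;
    }
}
```

With a row lock held from tx1's read to its commit, tx2 would be forced to read 110 and the final balance would be 130.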

To deal with these problems, SQL defines the following four transaction isolation levels:

Read uncommitted (READ UNCOMMITTED): the lowest isolation level; a transaction can read other transactions' uncommitted updates. It is very unsafe: lost updates, dirty reads, non-repeatable reads, and phantom reads are all possible;
Read committed (READ COMMITTED): a transaction can read updates that other transactions have committed, but cannot see uncommitted updates; lost updates and dirty reads cannot occur, but non-repeatable reads and phantom reads can;
Repeatable read (REPEATABLE READ): guarantees that successive queries within the same transaction return the same result, unaffected by other transactions; lost updates, dirty reads, and non-repeatable reads cannot occur, but phantom reads can;
Serializable (SERIALIZABLE): the highest isolation level; transactions may not execute concurrently and must run one after another. It is the safest: lost updates, dirty reads, non-repeatable reads, and phantom reads are all impossible, but it is also the least efficient.
The higher the isolation level, the worse the concurrent performance of database transactions and the fewer operations that can be processed. So in general it is recommended to use the REPEATABLE READ level to guarantee read consistency, and to prevent phantom reads with explicit locking where needed.
MySQL supports all four levels, and its default transaction isolation level is REPEATABLE READ. Oracle supports only READ COMMITTED and SERIALIZABLE, so dirty reads cannot occur in Oracle; its default level is READ COMMITTED.
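For reference, the four levels also appear as constants on JDBC's java.sql.Connection; the small helper below (an illustrative mapping, not part of JDBC itself) names them:

```java
import java.sql.Connection;

// Maps JDBC's standard isolation-level constants to the SQL level names.
class IsolationLevels {
    static String name(int level) {
        switch (level) {
            case Connection.TRANSACTION_READ_UNCOMMITTED: return "READ UNCOMMITTED";
            case Connection.TRANSACTION_READ_COMMITTED:   return "READ COMMITTED";
            case Connection.TRANSACTION_REPEATABLE_READ:  return "REPEATABLE READ";
            case Connection.TRANSACTION_SERIALIZABLE:     return "SERIALIZABLE";
            default:                                      return "NONE";
        }
    }
}
```

On a live connection you would select a level with conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ).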

Various locks
Let's summarize the locks in MySQL. They fall into several categories, and other RDBMSs are similar.
The first and most important classification is optimistic locking (Optimistic Lock) versus pessimistic locking (Pessimistic Lock), which are really two locking strategies.
Optimistic locking, as the name implies, is optimistic: every time it reads data it assumes no other transaction is writing it, so it takes no lock and reads freely; only when committing does it check whether another transaction modified the data in the meantime, and it rolls back if so. Optimistic locking is a conflict-detection mechanism, typically implemented with a version number or a timestamp column on the record.
Pessimistic locking takes a conservative attitude toward other transactions: every read assumes someone else will interfere, so the data is locked for the duration of the operation. In most cases it relies on the database's own locking mechanism to guarantee exclusive access, which brings various overheads. Pessimistic locking is a conflict-avoidance mechanism.
Selection criteria: if concurrency is low, or the consequences of a data conflict are mild, use optimistic locking; if concurrency is high or conflicts are costly (unfriendly to users), use pessimistic locking.
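A version-number column is the classic way to implement optimistic locking. The sketch below (VersionedRecord is a made-up class, simulating in memory what the SQL pattern `UPDATE t SET value = ?, version = version + 1 WHERE id = ? AND version = ?` does) only commits a write if the version read earlier is still current:

```java
// Minimal optimistic-locking sketch: each record carries a version number,
// and an update only succeeds if the version the caller read is still current.
class VersionedRecord {
    private int value;
    private int version = 0;

    int getValue()   { return value; }
    int getVersion() { return version; }

    // Returns true if committed; false means another transaction got in first
    // and the caller should re-read and retry (or roll back).
    synchronized boolean update(int expectedVersion, int newValue) {
        if (version != expectedVersion) {
            return false;          // conflict detected at commit time
        }
        value = newValue;
        version++;
        return true;
    }
}
```

No lock is held between the read and the write; the conflict is only detected (and the second writer rejected) at commit time, which is exactly the optimistic strategy.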

From the perspective of reading and writing, there are shared locks (S locks, Shared Locks) and exclusive locks (X locks, Exclusive Locks), also called Read Locks and Write Locks.
To understand them:

A transaction holding an S lock on data can read it but not write it. If transaction A takes an S lock on data D, other transactions can still take S locks on D but not X locks.
A transaction holding an X lock on data can both read and write it. If transaction A takes an X lock on D, no other transaction can take any lock on D until A releases its lock.
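The S/X compatibility rules are the same rules a read-write lock enforces, so they can be demonstrated with java.util.concurrent.locks.ReentrantReadWriteLock (a JVM analogy, not a database lock):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// S/X compatibility mirrored with a read-write lock: many threads may hold the
// read (S) lock together, but the write (X) lock excludes everyone else.
class SharedExclusiveDemo {
    // "Transaction A" holds an S lock; "transaction B" (another thread) tries
    // to take the X lock. Returns whether B acquired the write lock.
    static boolean tryWriteWhileReadHeld() {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        lock.readLock().lock();                         // A takes the S lock
        boolean[] acquired = new boolean[1];
        Thread txB = new Thread(() -> acquired[0] = lock.writeLock().tryLock());
        txB.start();
        try {
            txB.join();
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        lock.readLock().unlock();                       // A releases the S lock
        return acquired[0];                             // false: X blocked by S
    }
}
```

tryLock returns false immediately because an X request is incompatible with a held S lock; a second S request would have succeeded.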
From the perspective of lock granularity, it is mainly divided into Table Lock and Row Lock.
Table-level locks lock the entire table and have the lowest overhead. Multiple users can read the table at the same time, but a writer must first obtain a write lock, which blocks all other reads and writes. Write locks have higher priority than read locks: even if read requests are already queued, a newly requested write lock can jump to the front of the queue.
Row-level locks lock only the specified records, so other sessions can still read and write other rows in the same table. Row-level locking has the finest granularity and the highest overhead; it supports high concurrency but can also lead to deadlocks.

MySQL's MyISAM engine uses table-level locks, while InnoDB supports both table-level and row-level locks and uses row-level locks by default.
The BDB engine uses page-level locks, which lock a group of records at a time; its concurrency sits between that of row-level and table-level locking.

Three-level locking protocol
The three-level locking protocols ensure the correctness of concurrent transactions; transactions must follow these locking rules when reading and writing database objects.

First-level locking protocol: transaction T must take an X lock on data R before modifying it and hold it until the end of the transaction; if T only reads R without modifying it, no lock is required. Dirty reads and non-repeatable reads are therefore still possible under the first-level protocol.
Second-level locking protocol: on top of the first-level protocol, a rule is added: T must take an S lock on R before reading it and may release it as soon as the read finishes. The second-level protocol prevents dirty reads, but non-repeatable reads can still occur.
Third-level locking protocol: on top of the first-level protocol, a rule is added: T must take an S lock on R before reading it and hold it until the end of the transaction. The third-level protocol prevents both dirty reads and non-repeatable reads.

Detailed explanation of Spring @Transactional annotation parameters

Transaction annotation: @Transactional

When placed on a class, the annotation applies to all methods of that class, for example:

@Transactional
public class TestServiceBean implements TestService {}
When some methods in the class should not be transactional:

@Transactional
public class TestServiceBean implements TestService {

    private TestDao dao;

    public void setDao(TestDao dao) {
        this.dao = dao;
    }

    @Transactional(propagation = Propagation.NOT_SUPPORTED)
    public List getAll() {
        return null;
    }
}

Introduction to transaction propagation:

  @Transactional(propagation=Propagation.REQUIRED): if a transaction exists, join it; if not, create a new one (the default)
  @Transactional(propagation=Propagation.NOT_SUPPORTED): the container does not run this method in a transaction
  @Transactional(propagation=Propagation.REQUIRES_NEW): always creates a new transaction, whether or not one already exists; the original transaction is suspended, the new one executes, and then the old transaction resumes
  @Transactional(propagation=Propagation.MANDATORY): must execute inside an existing transaction, otherwise an exception is thrown
  @Transactional(propagation=Propagation.NEVER): must execute outside any transaction; if one exists, an exception is thrown (the opposite of Propagation.MANDATORY)
  @Transactional(propagation=Propagation.SUPPORTS): if the calling bean has a transaction declared, run inside it; if not, run without one
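The rules above can be simulated in plain Java without Spring. The sketch below (PropagationDemo is an illustrative name; the enum deliberately mirrors only a subset of Spring's Propagation values) encodes what each setting does when a transactional method is entered:

```java
// Plain-Java simulation of Spring's propagation decisions: given whether a
// transaction is already active, what does each setting do on method entry?
class PropagationDemo {
    enum Propagation { REQUIRED, REQUIRES_NEW, NOT_SUPPORTED, MANDATORY, NEVER, SUPPORTS }

    static String enter(Propagation p, boolean txActive) {
        switch (p) {
            case REQUIRED:      return txActive ? "join existing" : "create new";
            case REQUIRES_NEW:  return txActive ? "suspend existing, create new" : "create new";
            case NOT_SUPPORTED: return txActive ? "suspend existing, run without tx" : "run without tx";
            case MANDATORY:
                if (!txActive) throw new IllegalStateException("no existing transaction");
                return "join existing";
            case NEVER:
                if (txActive) throw new IllegalStateException("existing transaction found");
                return "run without tx";
            case SUPPORTS:      return txActive ? "join existing" : "run without tx";
            default:            throw new AssertionError();
        }
    }
}
```

In real Spring code these decisions are made by the transaction manager when a proxied @Transactional method is invoked; this table-like sketch just makes the branching explicit.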

 

Transaction timeout settings:

  @Transactional(timeout=30) // times out after 30 seconds (the default, -1, means no timeout)

 

Transaction isolation level:

  @Transactional(isolation = Isolation.READ_UNCOMMITTED): reads uncommitted data (allows dirty reads, non-repeatable reads, and phantom reads); basically never used
  @Transactional(isolation = Isolation.READ_COMMITTED): reads only committed data (non-repeatable reads and phantom reads are still possible)
  @Transactional(isolation = Isolation.REPEATABLE_READ): repeatable reads (phantom reads are still possible)
  @Transactional(isolation = Isolation.SERIALIZABLE): serialized execution

  MySQL: defaults to REPEATABLE_READ
  SQL Server: defaults to READ_COMMITTED

Dirty read: one transaction reads another transaction's uncommitted updates.
Non-repeatable read: within the same transaction, reading the same data multiple times returns different results; in other words, a later read can see data that another transaction committed in the meantime. By contrast, "repeatable read" guarantees that repeated reads within one transaction return the same data, i.e. later reads cannot see another transaction's committed updates.
Phantom read: one transaction reads rows that another transaction has inserted and committed.

 

Description of common parameters in the @Transactional annotation (parameter: function):

readOnly: sets whether the current transaction is read-only. true means read-only, false means read-write; the default is false. For example: @Transactional(readOnly=true)

rollbackFor: sets the array of exception classes that should trigger a rollback. When the method throws an exception from the specified array, the transaction is rolled back. For example:

  a single exception class: @Transactional(rollbackFor=RuntimeException.class)
  multiple exception classes: @Transactional(rollbackFor={RuntimeException.class, Exception.class})

rollbackForClassName: sets the array of exception class names that should trigger a rollback. When the method throws an exception whose name is in the specified array, the transaction is rolled back. For example:

  a single exception class name: @Transactional(rollbackForClassName="RuntimeException")
  multiple exception class names: @Transactional(rollbackForClassName={"RuntimeException","Exception"})

noRollbackFor: sets the array of exception classes that should not trigger a rollback. When the method throws an exception from the specified array, the transaction is not rolled back. For example:

  a single exception class: @Transactional(noRollbackFor=RuntimeException.class)
  multiple exception classes: @Transactional(noRollbackFor={RuntimeException.class, Exception.class})

noRollbackForClassName: sets the array of exception class names that should not trigger a rollback. When the method throws an exception whose name is in the specified array, the transaction is not rolled back. For example:

  a single exception class name: @Transactional(noRollbackForClassName="RuntimeException")
  multiple exception class names: @Transactional(noRollbackForClassName={"RuntimeException","Exception"})

propagation: sets the propagation behavior of the transaction; see the propagation values listed above. For example: @Transactional(propagation=Propagation.NOT_SUPPORTED, readOnly=true)

isolation: sets the transaction isolation level of the underlying database. Isolation levels govern concurrent transactions; usually the database's default level is fine and no setting is needed.

timeout: sets the transaction timeout in seconds. The default is -1, meaning the transaction never times out.


A few points to note:
  1. @Transactional only takes effect on public methods. Marking a non-public method with @Transactional raises no error, but the method gets no transactional behavior.
  2. With the Spring transaction manager, Spring is responsible for opening, committing, and rolling back database transactions. By default a transaction rolls back on a runtime exception (throw new RuntimeException("comment");), that is, on unchecked exceptions; it does not roll back on checked exceptions (throw new Exception("comment");), the exceptions the compiler forces you to catch or declare. To change this, specify the rollback rules explicitly: to roll back on all exceptions, use @Transactional(rollbackFor={Exception.class, /* other exception classes */}); to keep unchecked exceptions from rolling back, use @Transactional(noRollbackFor=RuntimeException.class).
For example:

@Transactional(rollbackFor = Exception.class) // roll back whenever an Exception is thrown, checked or unchecked
public void methodName() throws Exception {
    throw new Exception("comment");
}

@Transactional(noRollbackFor = Exception.class) // do not roll back for Exception or its subclasses, including the RuntimeException thrown below
public ItimDaoImpl getItemDaoImpl() {
    throw new RuntimeException("comment");
}
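The default rule from point 2 boils down to one predicate. The helper below is a sketch of that rule, not Spring's actual implementation: unchecked exceptions (RuntimeException and Error) trigger a rollback, checked exceptions do not:

```java
// Sketch of Spring's default rollback rule for declarative transactions:
// roll back on unchecked exceptions, commit on checked ones, unless
// rollbackFor/noRollbackFor override the default.
class RollbackRule {
    static boolean rollsBackByDefault(Throwable t) {
        return t instanceof RuntimeException || t instanceof Error;
    }
}
```

This is why a thrown IOException quietly commits unless you add rollbackFor=Exception.class (or IOException.class) to the annotation.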
  3. The @Transactional annotation should only be applied to methods with public visibility. If you use it on a protected, private, or package-visible method, no error is thrown, but the annotated method will not exhibit the configured transactional settings.
  4. The @Transactional annotation can be applied to interface definitions and interface methods, class definitions, and public methods of classes. However, note that the mere presence of the annotation is not enough to enable transactional behavior: it is only metadata, consumed by infrastructure that recognizes @Transactional and configures the corresponding behavior. In XML configuration it is the <tx:annotation-driven/> element that actually switches transactional behavior on.
  5. The Spring team recommends using the @Transactional annotation on concrete classes (or their methods) rather than on the interfaces the class implements. You can place it on an interface, but that only works with interface-based (JDK dynamic) proxies. Because annotations are not inherited, with class-based proxies the transaction settings on the interface are not recognized and the object is not wrapped in a transactional proxy, which is a serious problem. So take the Spring team's advice and use @Transactional on concrete classes.
