Spring transaction propagation levels and isolation levels (reprinted)

Reprinted from: http://blog.csdn.net/edward0830ly/article/details/7569954 (well written)

 

Transactions are a means of guaranteeing the atomicity of a logical operation. With transaction control, problems such as dirty data left behind by a failed operation can largely be avoided.

The two most important characteristics of a transaction are its propagation level and its data isolation level. The propagation level defines the boundary within which a transaction is controlled, and the isolation level defines how a transaction's reads and writes of the database interact with those of other transactions.


The following are the seven propagation levels of transactions:


1) PROPAGATION_REQUIRED, the default Spring transaction propagation level. If a transaction already exists in the current context, the method joins it and executes inside it; if there is no transaction, a new one is started. This level is sufficient for most business scenarios.


2) PROPAGATION_SUPPORTS, literally "supports": if a transaction exists in the context, the method joins it; if there is none, it executes non-transactionally. In other words, not all code wrapped in transactionTemplate.execute necessarily gets transaction support. This level is usually used for non-core business logic that is not atomic, and has few application scenarios.


3) PROPAGATION_MANDATORY, this level requires that a transaction already exist in the context; otherwise an exception is thrown! Configuring this propagation level is an effective way to catch calling code that forgot to add transaction control. For example, if a piece of code must never be executed on its own, but must always be enclosed in a transaction whenever it is called, this propagation level enforces that.


4) PROPAGATION_REQUIRES_NEW, as the name suggests, requires a new transaction every time. A new transaction is created on each entry, and any transaction in the context is suspended at the same time. After the new transaction completes, the suspended context transaction is resumed and continues executing.

This is a very useful propagation level. Consider an application scenario: an operation that sends 100 red packets. Before sending, some system initialization, verification, and data-recording steps are required; then the 100 red packets are sent, and finally a send log is recorded. The log must be 100% accurate; if it is inaccurate, the entire parent transaction logic must be rolled back.
How is this business requirement handled? Through PROPAGATION_REQUIRES_NEW: the sub-transaction that sends a red packet does not directly affect the commit or rollback of the parent transaction.


5) PROPAGATION_NOT_SUPPORTED, again self-explanatory: not supported. If a transaction exists in the context, it is suspended, the current logic executes non-transactionally, and the context transaction is resumed afterwards.


What is this level good for? It helps you keep transactions as small as possible. The bigger a transaction is, the more risk it carries, so a transaction's scope should be kept as narrow as possible. For example, suppose every logical operation must call a piece of non-core business logic that loops 1000 times. Wrapping such code inside a transaction would inevitably make the transaction too large and expose it to failures that are hard to anticipate. This is where this propagation level comes in handy: simply wrap that code in a transaction template configured at this level.


6) PROPAGATION_NEVER is even stricter. Whereas the previous level merely suspends an existing transaction, PROPAGATION_NEVER requires that no transaction exist in the context at all: if one does, a runtime exception is thrown and execution is stopped! This level must have had a grudge against transactions in a past life.


7) PROPAGATION_NESTED, literally a nested-level transaction. If a transaction exists in the context, a nested transaction is executed within it; if there is none, a new transaction is created.

So what is a nested transaction? Many people do not understand it, and some blog posts on the subject contain misunderstandings.

Nesting means the child transaction executes inside the parent transaction, as part of it. Before entering the child transaction, the parent establishes a rollback point called a savepoint, then executes the child. The child's execution is still considered part of the parent transaction, and when the child finishes, the parent continues. The key is that savepoint. It becomes clear after looking at a few questions:

What happens if the child transaction is rolled back?

The parent transaction rolls back to the savepoint established before entering the child transaction, and can then try other logic. The parent's earlier operations are unaffected and are not automatically rolled back.


What happens if the parent transaction is rolled back?

If the parent transaction rolls back, the child transaction rolls back too! Why? Because the child transaction is not committed before the parent transaction ends; the child is part of the parent, which is exactly the point. Which leads to:


When is the transaction committed?

Does the parent transaction commit first and then the child, or the child first and then the parent? The answer is the second case, with a twist: the child finishes first, but its work is only actually made permanent when the parent commits. As before, the child transaction is part of the parent and is committed by the parent as a whole.
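The savepoint mechanics can be made concrete with a minimal in-memory simulation. This is purely illustrative (it is not JDBC or a Spring API; all names here are invented): a transaction buffers operations, a nested block records a savepoint, rolling back the nested block discards only the operations after that savepoint, and nothing becomes visible until the parent commits.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal in-memory simulation of savepoint-based nested transactions.
// Illustrative only; real nested transactions use JDBC Savepoint objects.
public class SavepointDemo {
    private final List<String> buffered = new ArrayList<>();   // uncommitted operations
    private final List<String> committed = new ArrayList<>();  // visible only after commit

    public void write(String op) { buffered.add(op); }

    // A savepoint here is just the current length of the operation buffer.
    public int savepoint() { return buffered.size(); }

    // Rolling back to a savepoint discards only the operations after it.
    public void rollbackTo(int sp) { buffered.subList(sp, buffered.size()).clear(); }

    // The parent transaction commits all surviving work at once.
    public void commit() { committed.addAll(buffered); buffered.clear(); }

    // Parent rollback discards everything, including the child's work.
    public void rollback() { buffered.clear(); }

    public List<String> committed() { return committed; }

    public static void main(String[] args) {
        SavepointDemo tx = new SavepointDemo();
        tx.write("parent-op-1");
        int sp = tx.savepoint();     // enter the nested "child" transaction
        tx.write("child-op-1");
        tx.rollbackTo(sp);           // child fails: only child-op-1 is discarded
        tx.write("parent-op-2");
        tx.commit();                 // parent commits its surviving work
        System.out.println(tx.committed()); // [parent-op-1, parent-op-2]
    }
}
```

Note how the child's rollback leaves the parent's earlier write intact, and how nothing reaches the committed list until the parent's single commit, matching the answers to the three questions above.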


Now revisit the word "nesting" again; it makes much more sense, doesn't it?
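The seven levels above differ only in what they do when a transaction is, or is not, already present in the calling context. As a rough summary, the decision each level makes can be sketched as a plain-Java table (this is not Spring's implementation; the strings are illustrative labels only):

```java
// Illustrative decision table for Spring's seven propagation levels.
// NOT Spring's implementation: just a summary of each level's behavior
// given whether a transaction already exists in the calling context.
public class PropagationDemo {
    public static String behavior(String level, boolean txExists) {
        switch (level) {
            case "REQUIRED":      return txExists ? "join existing" : "create new";
            case "SUPPORTS":      return txExists ? "join existing" : "run non-transactionally";
            case "MANDATORY":     if (!txExists) throw new IllegalStateException("transaction required");
                                  return "join existing";
            case "REQUIRES_NEW":  return txExists ? "suspend existing, create new" : "create new";
            case "NOT_SUPPORTED": return txExists ? "suspend existing, run non-transactionally"
                                                  : "run non-transactionally";
            case "NEVER":         if (txExists) throw new IllegalStateException("transaction not allowed");
                                  return "run non-transactionally";
            case "NESTED":        return txExists ? "create savepoint, run nested" : "create new";
            default: throw new IllegalArgumentException("unknown level: " + level);
        }
    }

    public static void main(String[] args) {
        System.out.println(behavior("REQUIRED", true));      // join existing
        System.out.println(behavior("REQUIRES_NEW", true));  // suspend existing, create new
        System.out.println(behavior("NESTED", true));        // create savepoint, run nested
    }
}
```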


Those are the seven propagation levels of transactions, and they usually satisfy everyday business needs. Beyond propagation, however, there is another question: when two transactions execute concurrently against the database, how do their reads and writes affect each other?

This brings us to another characteristic of transactions: the data isolation level.

Data isolation levels are divided into four different types:


1. SERIALIZABLE: the strictest level. Transactions execute serially, at the greatest cost in resources.

2. REPEATABLE READ: guarantees that data a transaction has read cannot be modified out from under it by another transaction, so repeated reads within the transaction return the same result. This avoids both "dirty reads" and "non-repeatable reads", at some cost in performance.

3. READ COMMITTED: the default transaction level of most mainstream databases. It guarantees that a transaction never reads data that another concurrent transaction has modified but not committed, avoiding "dirty reads". This level is suitable for most systems.

4. READ UNCOMMITTED: the loosest level. It only guarantees that physically corrupt data is not read; dirty reads are possible.
 
The definitions above are a little awkward because they involve several terms: dirty reads, non-repeatable reads, and phantom reads.
Let me explain them here:
 
Dirty read: reading another transaction's dirty data before it rolls back. For example, transaction B modifies data X during its execution; before B commits, transaction A reads X; B then rolls back, so A has performed a dirty read.
 
Non-repeatable read: the name says it all. Transaction A reads a piece of data, then goes on executing its logic; meanwhile transaction B changes that data; when A reads the data again, it no longer matches what it read the first time. That is a non-repeatable read.
 
Phantom read: as a child I would count on my fingers; the first time I got 10, the second time 11. What happened? A hallucination?
A phantom read is the same idea. Transaction A fetches 10 rows matching some condition; then transaction B changes a row in the database so that it also matches A's search condition; when A runs the same query again, it finds 11 rows. That is a phantom read.
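Of these three anomalies, the dirty read is the easiest to simulate. The sketch below uses a tiny in-memory "database" (illustrative only; all names are invented, and a real database enforces isolation internally): a writer transaction buffers its change, a READ UNCOMMITTED reader sees the buffered value, and after the writer rolls back, that value turns out never to have been real.

```java
// Tiny in-memory simulation of a dirty read. Illustrative only; a real
// database enforces isolation levels internally.
public class DirtyReadDemo {
    private int committedValue;
    private Integer uncommittedValue; // non-null while a writer tx is in flight

    public DirtyReadDemo(int initial) { committedValue = initial; }

    public void writeUncommitted(int v) { uncommittedValue = v; }

    public void commit() {
        if (uncommittedValue != null) committedValue = uncommittedValue;
        uncommittedValue = null;
    }

    public void rollback() { uncommittedValue = null; }

    // READ UNCOMMITTED: sees in-flight changes, so dirty reads are possible.
    public int readUncommitted() {
        return uncommittedValue != null ? uncommittedValue : committedValue;
    }

    // READ COMMITTED: only ever sees committed state.
    public int readCommitted() { return committedValue; }

    public static void main(String[] args) {
        DirtyReadDemo db = new DirtyReadDemo(100);
        db.writeUncommitted(50);                  // transaction B modifies X
        System.out.println(db.readUncommitted()); // 50  <- transaction A's dirty read
        System.out.println(db.readCommitted());   // 100 <- safe read
        db.rollback();                            // B rolls back
        System.out.println(db.readUncommitted()); // 100 <- the 50 was never real
    }
}
```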
 
A comparison table:
                     Dirty read   Non-repeatable read   Phantom read
SERIALIZABLE         No           No                    No
REPEATABLE READ      No           No                    Yes
READ COMMITTED       No           Yes                   Yes
READ UNCOMMITTED     Yes          Yes                   Yes

So the safest level is SERIALIZABLE, but it also carries the highest performance overhead.
In addition, transactions have two other commonly used attributes: readOnly and timeout.
The first marks a transaction as read-only to improve performance.
The second sets a timeout on the transaction, generally used to prevent long-running transactions. Again, keep transactions as small as possible!
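In the document's own XML style, both attributes can be set on a tx:method entry (the attribute names are from Spring's tx namespace; the method name pattern here is illustrative):

```xml
<tx:advice id="txAdvice" transaction-manager="transactionManager">
    <tx:attributes>
        <!-- read-only hint for queries; timeout in seconds to cap long transactions -->
        <tx:method name="get*" read-only="true" timeout="5"/>
    </tx:attributes>
</tx:advice>
```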

Finally, a question:
A logical operation needs 20 checks before it runs. To keep the transaction small, can those checks be moved outside the transaction?

Many systems open the transaction inside the DAO, perform the operation, and commit or roll back there. This is a code-design question: smaller systems can get away with it, but in larger systems and systems with more complex logic, it forces too much business logic into the DAO and reduces the DAO's reusability. So it is not a good practice.


To answer the question: can some business-logic checks be moved outside the transaction in order to shrink it? The answer: core business checks cannot be placed outside the transaction; under distributed concurrency they must themselves act as concurrency control!
Once a check runs outside the transaction, data that transaction A has already validated can be modified by transaction B, making A's check worthless and introducing concurrency problems that defeat the business control entirely.
Therefore, in a distributed, highly concurrent environment, core business checks should be protected by a locking mechanism.
For example, a transaction opens, reads a row for validation, then modifies it in the business logic, and finally commits.
In such a flow, if the read-and-validate code is moved outside the transaction, the data it read may well have been changed by another transaction in the meantime; when the current transaction commits, it overwrites the other transaction's data, corrupting it.
The row must therefore be locked on entering the transaction; SELECT ... FOR UPDATE is a good way to control this in a distributed environment.
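The same idea can be shown without a database. If "check then modify" is not atomic, concurrent callers can all pass the check and oversell; holding a lock across both steps (loosely analogous to the row lock that SELECT ... FOR UPDATE takes) keeps the check valid at modification time. This is a plain-Java sketch with invented names, not a distributed lock:

```java
import java.util.concurrent.locks.ReentrantLock;

// Check-then-modify under a lock, loosely analogous to SELECT ... FOR UPDATE:
// no other caller can pass the check between our check and our modification.
public class StockDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private int stock;

    public StockDemo(int initial) { stock = initial; }

    public boolean tryDeduct() {
        lock.lock();
        try {
            if (stock <= 0) return false; // the check...
            stock--;                      // ...and the modification, atomically
            return true;
        } finally {
            lock.unlock();
        }
    }

    public int stock() { return stock; }

    public static void main(String[] args) throws InterruptedException {
        StockDemo s = new StockDemo(100);
        Runnable buyer = () -> { for (int i = 0; i < 1000; i++) s.tryDeduct(); };
        Thread t1 = new Thread(buyer), t2 = new Thread(buyer);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(s.stock()); // 0 -- never oversold, never negative
    }
}
```

If the `if (stock <= 0)` check were performed outside the lock, two threads could both see stock == 1 and both decrement, which is exactly the "check outside the transaction" failure described above.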

A good practice, especially on larger projects, is to use programmatic rather than declarative transactions. With a very large code base, maintaining declarative transaction configuration becomes a torture, and manual review can never fully keep up with the problem.
Keep the DAO limited to basic operations on a single table, move business-logic processing into manager and service classes, and use programmatic transactions to control transaction scope more precisely.
In particular, be careful when catching exceptions that may be thrown inside a transaction: a transaction's exception must not be swallowed casually, or the transaction cannot roll back properly.

 

 

 

Configuring Spring declarative transactions:
* Configure the SessionFactory
* Configure the transaction manager
* Configure the transaction propagation characteristics
* Specify which classes and which methods use transactions

 

Writing business logic methods:
* Inherit the HibernateDaoSupport class and use HibernateTemplate for persistence; HibernateTemplate is a lightweight wrapper around the Hibernate Session
* By default, runtime exceptions trigger rollback (including subclasses of RuntimeException), while ordinary checked exceptions do not
* When writing business logic methods, it is best to throw exceptions all the way up and handle them in the presentation layer (Struts)
* Set transaction boundaries at the business layer; do not add them to the DAO

 

<!-- Configure the SessionFactory -->
<bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
    <property name="configLocation">
     <value>classpath:hibernate.cfg.xml</value>
    </property>
</bean>

<!-- Configure the transaction manager -->
<bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactory"/>
</bean>

<!-- Transaction propagation characteristics -->
<tx:advice id="txAdvice" transaction-manager="transactionManager">
    <tx:attributes>
     <tx:method name="add*" propagation="REQUIRED"/>
     <tx:method name="del*" propagation="REQUIRED"/>
     <tx:method name="modify*" propagation="REQUIRED"/>
     <tx:method name="*" propagation="REQUIRED" read-only="true"/>
    </tx:attributes>
</tx:advice>

<!-- Which classes and methods use transactions -->
<aop:config>
    <aop:pointcut expression="execution(* com.service.*.*(..))" id="transactionPC"/>
    <aop:advisor advice-ref="txAdvice" pointcut-ref="transactionPC"/>
</aop:config>

<!-- Plain IoC injection -->
<bean id="userManager" class="com.service.UserManagerImpl">
    <property name="logManager" ref="logManager"/>
    <property name="sessionFactory" ref="sessionFactory"/>
</bean>
<bean id="logManager" class="com.service.LogManagerImpl">
    <property name="sessionFactory" ref="sessionFactory"/>
</bean>
