Consistency and Atomicity Analysis of Database Transactions

This article is referenced from Zhihu.

The concept of an Oracle transaction: a transaction is the means by which the user guarantees data consistency. It consists of a group of DML statements that either all execute successfully or all fail.

1. Transaction Consistency

For example: you go to the bank and transfer 1,000 yuan to a friend. After the operation completes, you are told the transfer succeeded (assume the bank transfers immediately, with no delay), and you see that your account has decreased by 1,000 yuan. But when you call your friend to confirm, their account has not increased by 1,000 yuan. At this point we consider the data to be inconsistent.

In the application scenario of database implementation, consistency can be divided into consistency outside the database and consistency inside the database:

i. External consistency: implemented by external application code. For example, when the bank's application performs a transfer, it must operate on account A and account B within the same transaction. If something goes wrong at this level, the database cannot fix it by itself, and it is outside the scope of this discussion.

ii. Consistency within the database: a set of operations within the same transaction must all succeed (or all fail). This is the atomicity of transaction processing.
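The bank-transfer scenario can be sketched with Python's built-in sqlite3 module. This is a minimal illustration, not Oracle-specific: the `account` table and the names A and B are made up for the example. The point is that both UPDATEs commit together or not at all.

```python
import sqlite3

# Toy schema: two accounts, A and B (hypothetical names for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('A', 1000), ('B', 0)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move `amount` from src to dst; both UPDATEs succeed together or not at all."""
    try:
        conn.execute("UPDATE account SET balance = balance - ? WHERE name = ?",
                     (amount, src))
        conn.execute("UPDATE account SET balance = balance + ? WHERE name = ?",
                     (amount, dst))
        conn.commit()       # both changes become visible and durable here
    except Exception:
        conn.rollback()     # undo any partial work, keeping the data consistent
        raise

transfer(conn, "A", "B", 1000)
print(conn.execute("SELECT name, balance FROM account ORDER BY name").fetchall())
# [('A', 0), ('B', 1000)]
```

Because the debit and credit sit inside one transaction, an outside observer can never see the 1,000 yuan missing from A without it having arrived at B.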

 

2. Transaction atomicity

As mentioned above, the atomicity of a transaction guarantees that a set of operations within the transaction all succeed (or all fail). If some operations have already succeeded but the rest cannot be executed because of a system power failure, an operating-system crash, or similar, then the previously successful operations must be undone through the rollback (undo) log, achieving the effect of "all failed".
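The "undo what already succeeded" behavior can be demonstrated with sqlite3 and a CHECK constraint standing in for a mid-transaction failure (the schema is again a made-up example): the credit to B succeeds, the debit of A fails, and the rollback erases the credit too.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, "
             "balance INTEGER CHECK (balance >= 0))")
conn.execute("INSERT INTO account VALUES ('A', 50), ('B', 0)")
conn.commit()

try:
    # First statement succeeds...
    conn.execute("UPDATE account SET balance = balance + 100 WHERE name = 'B'")
    # ...second one fails: A only has 50, so CHECK(balance >= 0) rejects the debit.
    conn.execute("UPDATE account SET balance = balance - 100 WHERE name = 'A'")
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()   # the already-successful credit to B is undone as well

print(conn.execute("SELECT name, balance FROM account ORDER BY name").fetchall())
# [('A', 50), ('B', 0)]  -- both accounts are back to their starting values
```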

 

3. A common scenario that reflects transaction atomicity, database consistency, and durability

After the database crashes and restarts, it may be in an inconsistent state, so it must first perform crash recovery. The general steps are as follows:

a. Roll forward through the REDO log (replay all operations that were successfully executed but not yet written to disk)

b. Undo the transactions that had not completed before the crash (revoke the operations of transactions that were only partially executed and never committed, to guarantee transaction atomicity)

c. Once crash recovery finishes, the database is back in a consistent state and can continue to work
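The redo-then-undo steps above can be sketched with a toy write-ahead log in Python. The log format, transaction IDs, and `recover` function are purely illustrative assumptions, not how Oracle actually stores its redo/undo records; they only show the recovery logic.

```python
# Toy write-ahead log: each change record is (txid, key, old_value, new_value),
# and ("COMMIT", txid) marks a committed transaction. All names are illustrative.
log = [
    ("T1", "A", 1000, 900), ("T1", "B", 0, 100), ("COMMIT", "T1"),
    ("T2", "A", 900, 800),              # T2 never committed before the crash
]

def recover(db, log):
    committed = {rec[1] for rec in log if rec[0] == "COMMIT"}
    # a. REDO: replay every logged change, in log order
    for rec in log:
        if rec[0] != "COMMIT":
            txid, key, old, new = rec
            db[key] = new
    # b. UNDO: walk the log backwards, reverting changes of uncommitted transactions
    for rec in reversed(log):
        if rec[0] != "COMMIT":
            txid, key, old, new = rec
            if txid not in committed:
                db[key] = old
    # c. the database state is consistent again
    return db

print(recover({}, log))   # {'A': 900, 'B': 100} -- T1 kept, T2 rolled back
```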

 

4. Problems with multi-threaded transactions

With a single thread, transaction atomicity is enough to guarantee database consistency; under concurrent execution, however, atomicity alone cannot. A specific scenario:

Problem: transaction 1 transfers 100 yuan into account A. It first reads account A, then adds 100 to the value it read. In the middle of this process, transaction 2 also modifies account A, adding 100 yuan to it. The final result should therefore be an increase of 200 yuan, but when transaction 1 finishes, account A has only increased by 100, because transaction 2's modification was overwritten by transaction 1. This is the classic "lost update".
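The interleaving can be reproduced deterministically in Python by writing out the two read-modify-write sequences by hand (a plain dictionary stands in for the database row):

```python
balance = {"A": 0}

# Both "transactions" read the same starting value...
t1_read = balance["A"]            # transaction 1 reads 0
t2_read = balance["A"]            # transaction 2 also reads 0

balance["A"] = t2_read + 100      # transaction 2 writes 100 and finishes
balance["A"] = t1_read + 100      # transaction 1 overwrites it with 100

print(balance["A"])  # 100, not the expected 200 -- T2's update was lost
```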

To ensure data consistency, isolation is introduced. It not only ensures that each transaction sees consistent data, but also ensures that while a transaction is processing data, no other transaction interferes with that data, as if the other transactions did not exist; in other words, the state after concurrent execution is the same as the state after serial execution. Isolation is mainly achieved through locking.

The following shows how "locks" solve the data-inconsistency problems of concurrent transactions:

a. Pessimistic locking

That is, the transaction locks all objects involved in the current operation and releases the locks only after the operation completes. To improve performance as much as possible, locks of various granularities (database level / table level / row level) and various modes (shared lock / exclusive lock / shared intent lock / shared exclusive lock / shared exclusive intent lock, and so on) have been invented; for details, refer to the Oracle locking mechanism. To deal with deadlocks, techniques such as the two-phase locking protocol and deadlock detection have been invented.

b. Optimistic locking

That is, different transactions see different historical versions of the same object (usually a data row). If two transactions modify the same row at the same time, conflict detection is performed on the transaction that commits later. There are also two implementations: one obtains historical versions of data rows through UNDO; the other simply keeps different historical versions of the rows in memory, distinguished by timestamps.
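One common way to implement the conflict detection is a version number on each row: an update only "commits" if the version it originally read is still current, otherwise it retries. The dict-based row and the `optimistic_update` helper below are illustrative assumptions, not a real database API.

```python
# Each row carries a version number; an update succeeds only if the version
# it read is still current (compare-and-swap style conflict detection).
row = {"balance": 0, "version": 0}

def optimistic_update(row, delta):
    while True:
        read_balance = row["balance"]
        read_version = row["version"]
        new_balance = read_balance + delta
        # "commit": succeeds only if nobody changed the row since we read it
        if row["version"] == read_version:
            row["balance"] = new_balance
            row["version"] = read_version + 1
            return
        # conflict detected: loop and retry against the fresh value

optimistic_update(row, 100)
optimistic_update(row, 100)
print(row)  # {'balance': 200, 'version': 2}
```

Unlike pessimistic locking, no lock is held while the transaction works, so readers never block; the cost is that a conflicting writer must redo its work.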
