[Pessimistic Lock vs. Optimistic Lock]

1. Database isolation levels

1. Read Uncommitted: data can be read before it is committed (for example, a row that has been inserted but not yet committed is already visible to other transactions).

2. Read Committed: data becomes visible to other transactions only after it has been committed.

3. Repeatable Read: MySQL's default level; only committed data is visible, and repeated reads of the same data within one transaction return consistent results.

4. Serializable: the highest isolation level; transactions execute serially, so one transaction can only proceed after another has finished, and concurrency is poor.
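As a minimal sketch (MySQL syntax, assuming a MySQL 8.0 client), the session's isolation level can be inspected and changed like this:

```sql
-- Show the current isolation level (REPEATABLE-READ is the MySQL default).
SELECT @@transaction_isolation;

-- Switch the current session to a different level.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
```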

 

Dirty read: a transaction reads data that another transaction has modified but not yet committed.

Non-repeatable read: reading the same row again within one transaction returns different data from the previous read, because another transaction modified and committed it in between.

Phantom read: after querying the rows that match some condition, another transaction inserts or deletes rows matching that condition, so running the same query again returns a different set of rows.
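A minimal sketch of a non-repeatable read, assuming a hypothetical table accounts(id, balance) and two sessions running under READ COMMITTED:

```sql
-- Session A
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1;   -- returns 6

-- Session B, while A's transaction is still open
UPDATE accounts SET balance = 2 WHERE id = 1;
COMMIT;

-- Session A, same transaction
SELECT balance FROM accounts WHERE id = 1;   -- now returns 2: a non-repeatable read
COMMIT;

-- (If session B had instead INSERTed a new row matching A's WHERE condition,
--  re-running a range query in A would show a phantom row.)
```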

Database transactions must have the ACID properties. ACID is an abbreviation of Atomicity, Consistency, Isolation and Durability.

 

2. Why do we need locks (concurrency control)?

 

In a multi-user environment, multiple users may update the same records at the same time, which can create conflicts. This is the well-known concurrency problem.

Typical conflicts are:

- Lost update: one transaction's update overwrites the result of another transaction's update. For example: user A changes the value from 6 to 2, and user B, still working from the value it read earlier, changes it back from 2 to 6; user A's update is lost (see the sketch after this list).

- Dirty read: a dirty read occurs when a transaction reads data from another transaction that is only halfway done. For example, users A and B both see the value 6; user B changes it to 2 without committing, and user A then reads the uncommitted value 2.
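A minimal sketch of the lost update above, using the same hypothetical accounts table and no locking or version check:

```sql
-- Both users first read balance = 6 for id = 1.

-- User A writes the value it computed from 6:
UPDATE accounts SET balance = 2 WHERE id = 1;

-- User B, still working from the stale value 6, writes over it:
UPDATE accounts SET balance = 6 WHERE id = 1;   -- user A's update is lost
```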

 

To solve the problems caused by this kind of concurrency, we need to introduce a concurrency control mechanism.

3. Concurrency control mechanism

The most common way to handle concurrent access by multiple users is locking. When a user locks an object in the database, other users can no longer access that object. The impact of locking on concurrent access is determined by the granularity of the lock. For example, a lock placed on a table restricts concurrent access to the entire table; a lock placed on a data page restricts access to that entire data page; a lock placed on a row restricts concurrent access only to that row. Row locks therefore have the finest granularity and allow the best concurrency, table locks have the coarsest granularity, and page locks sit between the two.

Pessimistic locking: assume that concurrency conflicts will occur, and block every operation that might violate data integrity. Pessimistic locking assumes there is a high probability that other users will try to access or change the object you are accessing or changing, so in a pessimistic locking environment the object is locked before you start changing it, and the lock is not released until the changes are committed. The disadvantage is that, whether it is a page lock or a row lock, the lock may be held for a long time and may block other users for a long time; in other words, pessimistic locking does not allow good concurrent access.

Optimistic locking: assume that no concurrency conflicts will occur, and check for data integrity violations only when the operation is committed. (Optimistic locking does not solve the problem of dirty reads.) Optimistic locking assumes that the probability of other users trying to change the object you are changing is very small, so the object is not locked while you read and change it; it is locked only when you are ready to commit the changes. As a result, locks are held for a shorter time than with pessimistic locking, and optimistic locking can obtain better concurrent performance even with a larger lock granularity. But if a second user reads the object just before the first user commits his changes, then when the second user tries to commit, the database will detect that the object has already changed, and he has to re-read the object and apply his changes again. This means that in an optimistic locking environment, concurrent users may have to re-read objects more often.

 

 

From the database vendor's point of view, optimistic page locks are preferable, especially in batch operations that affect many rows, because fewer locks need to be placed, which reduces resource requirements and improves database performance. Consider clustered indexes as well: records are stored in the physical order of the clustered index, so if page locks are used and two users simultaneously access and change two adjacent rows on the same data page, one user must wait for the other to release the lock, which can significantly degrade system performance. Interbase, like most relational databases, uses optimistic locking, where read locks are shared and write locks are exclusive. Additional read locks can be placed on an object that already holds a read lock, but a write lock cannot be; no further locks of any kind can be placed on an object that holds a write lock. Locks are currently an effective means of handling concurrent access by multiple users.

Optimistic Lock Application

1. Use an auto-incrementing integer as the data's version number, and check that the version is still consistent when updating. For example, the record's version in the database is 6. When the update is submitted it carries version = 6 + 1 = 7, and this value is compared with the current database version + 1. If they are equal (i.e. the database version is still 6), the update can proceed; if not, another program has probably already updated the record, so an error is returned.
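A minimal sketch of this version-number check in plain SQL, assuming a hypothetical accounts table with a version column, where the application read the row while version was 6:

```sql
-- Only succeeds if nobody else has bumped the version since we read it.
UPDATE accounts
SET    balance = 2,
       version = version + 1          -- becomes 7 on success
WHERE  id = 1
  AND  version = 6;                   -- the version we originally read

-- If the statement reports 0 affected rows, another transaction updated the
-- record first, and the application should report a conflict or retry.
```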

2. Use a timestamp column instead of a version number, checked in the same way.

Note: for both of the above methods, Hibernate has a built-in implementation: add the @Version annotation to the field used for optimistic locking, and Hibernate will automatically verify that field when updating.

 

Pessimistic Lock Application

This requires the database's own lock mechanism, such as SQL Server's TABLOCKX (exclusive table lock) hint. When this option is used, SQL Server places an exclusive lock on the entire table until the end of the command or transaction, which prevents other processes from reading or modifying the data in the table.
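A minimal sketch in SQL Server's T-SQL, again assuming a hypothetical accounts table; the TABLOCKX table hint takes an exclusive lock on the whole table for the duration of the transaction:

```sql
BEGIN TRANSACTION;

-- The TABLOCKX hint places an exclusive table lock, so other sessions can
-- neither read nor modify accounts until this transaction ends.
SELECT * FROM accounts WITH (TABLOCKX);

UPDATE accounts SET balance = balance - 1 WHERE id = 1;

COMMIT TRANSACTION;
```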

 

 

 

 

Pessimistic Lock: as the name suggests, it is pessimistic. Every time it fetches data, it assumes that someone else will modify it, so it locks the data on every read, and anyone else who wants the same data blocks until the lock is released. Traditional relational databases use many such lock mechanisms, such as row locks, table locks, read locks, and write locks, all of which lock the data before operating on it.

Pessimistic locking is suitable for tables whose data changes frequently. The lock is taken as soon as the query starts and is not released until the update finishes, so performance degrades. In SQL this is typically done with SELECT ... FOR UPDATE, as sketched below.
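A minimal sketch of this pessimistic flow (MySQL/InnoDB syntax), using the hypothetical accounts table:

```sql
START TRANSACTION;

-- Lock the row at read time; any other session that tries to read it
-- FOR UPDATE (or update it) blocks until this transaction ends.
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;

UPDATE accounts SET balance = 2 WHERE id = 1;

COMMIT;   -- the row lock is released here
```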

 

 

Optimistic Lock: as the name suggests, it is optimistic. Every time it fetches data, it assumes that no one else will modify it, so it does not lock; only at update time does it check whether anyone else updated the data in the meantime, using a mechanism such as a version number. Optimistic locking is suitable for read-heavy applications and can improve throughput. A database that provides a mechanism similar to write_condition is in effect providing optimistic locking.

Optimistic locking is suitable for data that changes with low probability; the check happens only when the update is submitted. In practice this means adding a field to the table, such as a version column, to determine whether the data being operated on is still the same version.
