A few notes on MySQL transactions

I. Differences between MyISAM and InnoDB locking

① MyISAM uses table-level locking by default and does not support row-level locking.

② InnoDB uses row-level locking by default and also supports table-level locking.

③ Compatibility of shared locks and exclusive locks
| | Exclusive lock (X) | Shared lock (S) |
| --- | --- | --- |
| Exclusive lock (X) | Conflict | Conflict |
| Shared lock (S) | Conflict | Compatible |
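These compatibility rules can be observed directly with two sessions. A minimal sketch, assuming a hypothetical InnoDB table `orders` with a primary key `id`:

```sql
-- Session A: take a shared lock on one row.
BEGIN;
SELECT * FROM orders WHERE id = 1 LOCK IN SHARE MODE;

-- Session B: another shared lock on the same row is granted immediately (S + S are compatible).
BEGIN;
SELECT * FROM orders WHERE id = 1 LOCK IN SHARE MODE;

-- Session B: an exclusive lock on the same row blocks until session A commits (S + X conflict).
SELECT * FROM orders WHERE id = 1 FOR UPDATE;
```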

④ Usage scenarios
MyISAM:
A: the workload frequently runs full-table count statements.
B: data is inserted, updated and deleted infrequently, but queried very frequently.
C: no transaction support is needed.
InnoDB:
A: data is inserted, updated, deleted and queried quite frequently.
B: transaction support is required.
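A minimal sketch of how the engine choice is expressed when creating tables; the table and column names here are made up for illustration:

```sql
-- Read-heavy, rarely modified, no transactions needed: MyISAM.
CREATE TABLE article_stats (
    id    INT PRIMARY KEY,
    views INT NOT NULL
) ENGINE = MyISAM;

-- Frequent inserts/updates/deletes and transactional guarantees needed: InnoDB.
CREATE TABLE orders (
    id     INT PRIMARY KEY,
    amount DECIMAL(10, 2) NOT NULL
) ENGINE = InnoDB;
```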

II. Classification of locks

By granularity:
Table-level lock: locks an entire table.
Row-level lock: locks a single row of data.
Page lock: granularity between table-level and row-level; locks a group of adjacent rows stored in the same page.
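A rough sketch of the difference in granularity, reusing the hypothetical `orders` table:

```sql
-- Table-level lock: the whole table is read-locked; other sessions can read but not write.
LOCK TABLES orders READ;
SELECT COUNT(*) FROM orders;
UNLOCK TABLES;

-- Row-level lock (InnoDB): only the matched row is locked,
-- provided the WHERE condition can use an index.
BEGIN;
UPDATE orders SET amount = 0 WHERE id = 1;
COMMIT;
```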

By lock level:
Shared lock: for the same piece of data, multiple read operations can proceed at the same time without affecting each other.
Exclusive lock: until the current write operation completes, it blocks other write locks and read locks.

By locking method:
Automatic locks: for example intention locks, and the locks added automatically when MyISAM executes insert, delete, update or select statements; these are locks MySQL adds by itself.
Explicit locks: for example select ... for update; locks that we add ourselves are explicit locks.
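A rough illustration of the two, again with the hypothetical `orders` table: the UPDATE is locked implicitly, while the SELECT requests its lock explicitly.

```sql
BEGIN;
-- Automatic lock: InnoDB implicitly takes an exclusive row lock for the UPDATE.
UPDATE orders SET amount = amount + 1 WHERE id = 1;

-- Explicit lock: the statement itself asks for an exclusive row lock.
SELECT * FROM orders WHERE id = 2 FOR UPDATE;
COMMIT;
```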

By operation:
DML locks: locks applied when operating on the data itself.
DDL locks: locks applied when the table structure is changed.

By usage:
Optimistic locking: assumes the data will not conflict and checks only at commit time; it does not use the database's locking mechanism but is implemented with a version number or timestamp.
Pessimistic locking: conservatively assumes the outside world will interfere and keeps the data locked while it is being processed; it usually relies on the lock mechanism provided by the database, typically exclusive locks: acquire the lock first, then execute.
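A common way to implement the optimistic strategy is a version-number column; a minimal sketch, assuming a hypothetical `version` column on the `orders` table:

```sql
-- 1. Read the row together with its current version (suppose it returns version = 5).
SELECT amount, version FROM orders WHERE id = 1;

-- 2. Update only if nobody changed the row in the meantime.
UPDATE orders
SET    amount = 99.00, version = version + 1
WHERE  id = 1 AND version = 5;

-- 3. If the statement reports 0 affected rows, another transaction got there first:
--    retry the read-modify-write cycle or report a conflict.
```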

III. The four properties of database transactions

1. Atomicity
A transaction is the smallest unit of execution and cannot be split. Atomicity guarantees that its actions either all complete or have no effect at all.
2. Consistency
Before and after a transaction executes, the data remains consistent; multiple transactions reading the same data see the same result.
3. Isolation
When the database is accessed concurrently, one user's transaction is not interfered with by other transactions; concurrent transactions are independent of each other.
4. Durability
Once a transaction is committed, its changes to the database are persistent; even a database failure should not affect them.
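Atomicity and durability show up in the basic transaction pattern; a minimal sketch using the hypothetical `orders` table:

```sql
BEGIN;                                                  -- start a transaction
UPDATE orders SET amount = amount - 10 WHERE id = 1;
UPDATE orders SET amount = amount + 10 WHERE id = 2;
COMMIT;                                                 -- both changes persist together

-- If anything goes wrong before COMMIT, undo the whole unit instead:
-- ROLLBACK;
```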

IV. Problems caused by concurrent transactions

1. Dirty read
Occurs when one transaction is accessing and modifying data, the modification has not yet been committed to the database, and another transaction also accesses and uses this data. Because the data has not been committed, what the second transaction reads is "dirty data", and anything done based on that "dirty data" may be incorrect.
2. Lost update
Occurs when one transaction reads a piece of data and another transaction also accesses it; the first transaction then modifies the data, and the second transaction modifies it as well. The modification made by the first transaction is lost, hence the name lost update.
3. Non-repeatable read
Occurs when a transaction reads the same data multiple times. Before this transaction finishes, another transaction also accesses the data and modifies it between the first transaction's two reads; because of that modification, the data read twice by the first transaction may differ. Reading the same data twice inside one transaction and getting different results is called a non-repeatable read.
4. Phantom read
A phantom read is similar to a non-repeatable read. It occurs when one transaction reads several rows and another concurrent transaction then inserts some rows. In a subsequent query, the first transaction finds extra rows that did not exist before, as if it were hallucinating, hence the name phantom read.
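The non-repeatable read, for example, can be reproduced with two sessions when the first one runs below REPEATABLE-READ; a sketch, again with the hypothetical `orders` table:

```sql
-- Session A (READ COMMITTED):
BEGIN;
SELECT amount FROM orders WHERE id = 1;   -- suppose it returns 100

-- Session B:
UPDATE orders SET amount = 200 WHERE id = 1;
COMMIT;

-- Session A, same transaction, same query:
SELECT amount FROM orders WHERE id = 1;   -- now returns 200: a non-repeatable read
COMMIT;
```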

V. Database transaction isolation levels

1. READ-UNCOMMITTED (read uncommitted)
The lowest isolation level; it allows reading data changes that have not yet been committed, and may lead to dirty reads, phantom reads and non-repeatable reads.
2. READ-COMMITTED (read committed)
Allows reading data that concurrent transactions have already committed; it prevents dirty reads, but phantom reads and non-repeatable reads may still occur.
3. REPEATABLE-READ (repeatable read)
Repeated reads of the same field within a transaction return the same result, unless the data is modified by the transaction itself; it prevents dirty reads and non-repeatable reads, but phantom reads may still occur.
4. SERIALIZABLE (serializable)
The highest isolation level, fully satisfying the ACID isolation requirement. All transactions execute one after another in sequence, so they cannot interfere with each other; this level prevents dirty reads, non-repeatable reads and phantom reads.

| Isolation level | Dirty read | Non-repeatable read | Phantom read |
| --- | --- | --- | --- |
| READ-UNCOMMITTED | Possible | Possible | Possible |
| READ-COMMITTED | Prevented | Possible | Possible |
| REPEATABLE-READ | Prevented | Prevented | Possible |
| SERIALIZABLE | Prevented | Prevented | Prevented |

The default isolation level of the MySQL InnoDB storage engine is REPEATABLE-READ (repeatable read). You can check it with the @@tx_isolation variable.
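For example (the system variable is named tx_isolation in older MySQL versions and transaction_isolation in MySQL 8.0):

```sql
-- Check the current isolation level.
SELECT @@tx_isolation;               -- MySQL 5.x
SELECT @@transaction_isolation;      -- MySQL 8.0

-- Change it for the current session only.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
```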

Note that, unlike the SQL standard, the InnoDB storage engine uses the Next-Key Lock algorithm under the REPEATABLE-READ (repeatable read) isolation level, which avoids phantom reads. So InnoDB's default REPEATABLE-READ isolation level already fully guarantees the isolation requirements of transactions, i.e. it achieves the SQL standard's SERIALIZABLE (serializable) isolation level.

Because a lower isolation level means a transaction requests fewer locks, most database systems default to READ-COMMITTED; however, InnoDB's default of REPEATABLE-READ does not incur any performance loss.
The InnoDB storage engine generally uses the SERIALIZABLE isolation level for distributed transactions.

VI. Current read and snapshot read

1. Current read

select ... lock in share mode, select ... for update, update, delete and insert are all current reads.

2. Snapshot read

A non-blocking read that takes no locks; a plain select is a snapshot read, based on multi-version concurrency control.
A snapshot read relies on:
1. the hidden fields DB_TRX_ID, DB_ROLL_PTR and DB_ROW_ID in each data row.
DB_TRX_ID: the id of the transaction that most recently modified the row.
DB_ROLL_PTR: the rollback pointer, pointing to the undo log record written in the rollback segment.
DB_ROW_ID: the row id, which increases monotonically as new rows are inserted (the hidden primary key).
2. undo log
When we modify a record, an undo log is generated; the undo log keeps the old versions of the data, so when a transaction needs to read an old record it can follow the undo chain until it reaches the old version it needs. Undo logs are divided into insert undo logs and update undo logs: an insert undo log is needed only for transaction rollback and can be discarded once the transaction commits; an update undo log is generated by updates and deletes, and is needed not only for rollback but also for snapshot reads, so it cannot simply be deleted.

3. read view
Mainly used for visibility judgment. When a snapshot read (select) is executed, a read view is created for the data being read, to decide which version of the data the current transaction is allowed to see; it mainly compares the row's DB_TRX_ID with the ids of the transactions that were active when the read view was created, and if that version is not visible it follows the undo log to older versions until it finds one the transaction can see.
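The difference between the two kinds of read can be seen in one experiment under REPEATABLE-READ; a sketch with the hypothetical `orders` table:

```sql
-- Session A (REPEATABLE-READ):
BEGIN;
SELECT amount FROM orders WHERE id = 1;              -- snapshot read, suppose it returns 100

-- Session B:
UPDATE orders SET amount = 200 WHERE id = 1;
COMMIT;

-- Session A, same transaction:
SELECT amount FROM orders WHERE id = 1;              -- snapshot read: still 100
SELECT amount FROM orders WHERE id = 1 FOR UPDATE;   -- current read: 200, the latest committed version
COMMIT;
```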

VII. How RR avoids phantom reads

1. On the surface: snapshot reads (non-blocking reads) - pseudo MVCC.
2. Internally: next-key locks (row locks + gap locks).
Gap locks
A gap lock locks a range to prevent phantom reads.
1. Do primary key or unique index lookups use gap locks? If the where condition hits existing rows exactly, no gap lock is used, only row locks.
2. If the where condition hits nothing, gap locks are used.
3. If the where condition hits only part of the range, gap locks are used as well.
4. Gap locks are used in current reads on non-unique indexes, or current reads that do not use an index.
Non-unique index case:

For example, suppose a non-unique index column contains the values 6, 9 and 11, and we run a current read on the rows whose value is 9. InnoDB locks the ranges (6, 9] and (9, 11], i.e. the whole interval (6, 11] is gap-locked; inserts into that interval are not permitted, which prevents phantom reads. Gap locks are also determined together with the primary key: with (6, 11] locked and the record with index value 6 having primary key c, a new row with index value 6 and primary key a can still be inserted (it sorts before (6, c) and falls outside the locked range), but a row with index value 6 and primary key d cannot (it sorts after (6, c) and falls inside the range).
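That example can be turned into a small experiment; a sketch assuming a hypothetical InnoDB table `t` with a non-unique index on `num` and existing `num` values 2, 6, 9 and 11:

```sql
-- Hypothetical setup:
-- CREATE TABLE t (id VARCHAR(10) PRIMARY KEY, num INT, KEY idx_num (num)) ENGINE = InnoDB;

-- Session A (REPEATABLE-READ): current read on num = 9.
BEGIN;
SELECT * FROM t WHERE num = 9 FOR UPDATE;    -- next-key locks roughly cover the range (6, 11)

-- Session B: inserts that fall inside the locked range block until session A ends.
INSERT INTO t (id, num) VALUES ('x', 7);     -- blocks
INSERT INTO t (id, num) VALUES ('y', 10);    -- blocks
INSERT INTO t (id, num) VALUES ('z', 20);    -- outside the range: succeeds
```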

No index used:
A current read that does not use an index adds what amounts to a table-level lock, i.e. all the gaps are locked.

VIII. Lock optimization recommendations

1. Make all data retrieval go through an index whenever possible, to avoid row locks escalating to table locks when no index is used.
2. Design indexes sensibly to minimize the scope of locks.
3. Narrow the search conditions as much as possible to avoid gap locks.
4. Keep transactions small and short in duration to reduce the amount of resources locked.
5. Use a low transaction isolation level whenever possible.


Origin: www.cnblogs.com/jack1995/p/10947356.html