Database Review

1. MySQL Basics

Views

A view is a virtual table; to the user who queries it, it is essentially transparent. A view does not physically exist: its rows and columns come from the tables referenced in its defining query, and they are generated dynamically whenever the view is used.

Advantages of views: simplicity, security, and separation of the logical schema from the underlying tables.
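
As a quick illustration, here is a minimal sketch using Python's built-in sqlite3 module (the table, view, and column names are invented for the example); the same CREATE VIEW idea applies in MySQL:

```python
import sqlite3

# In-memory database; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary INTEGER, dept TEXT)")
conn.executemany("INSERT INTO employees (name, salary, dept) VALUES (?, ?, ?)",
                 [("alice", 90, "eng"), ("bob", 70, "eng"), ("carol", 80, "sales")])

# The view stores no rows of its own; it re-runs its defining query on every
# use, and it hides the salary column from consumers (the "security" advantage).
conn.execute("CREATE VIEW eng_staff AS SELECT name, dept FROM employees WHERE dept = 'eng'")
print(conn.execute("SELECT name FROM eng_staff ORDER BY name").fetchall())
# → [('alice',), ('bob',)]

# A later insert into the base table is visible through the view immediately,
# because the view's contents are produced dynamically.
conn.execute("INSERT INTO employees (name, salary, dept) VALUES ('dave', 60, 'eng')")
print(conn.execute("SELECT COUNT(*) FROM eng_staff").fetchone()[0])  # → 3
```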

 

Stored procedures and functions

Stored procedures and functions are sets of SQL statements that are compiled in advance and stored in the database. Calling them simplifies much of an application developer's work, reduces data transfer between the database and the application server, and improves data-processing efficiency.

The difference between stored procedures and functions: a function must have a return value, while a stored procedure need not; a stored procedure's parameters may be of type IN, OUT, or INOUT, while a function's parameters may only be of the IN type.
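
A hedged MySQL-style sketch of that contrast (the table, procedure, and function names are illustrative, not from the original text):

```sql
DELIMITER //
-- A procedure: no RETURN clause, but it may have an OUT parameter.
CREATE PROCEDURE count_users(OUT total INT)
BEGIN
  SELECT COUNT(*) INTO total FROM users;
END //

-- A function: IN-style arguments only, and it must declare RETURNS
-- and end with a RETURN statement.
CREATE FUNCTION price_with_tax(price DECIMAL(10,2))
RETURNS DECIMAL(10,2) DETERMINISTIC
BEGIN
  RETURN price * 1.10;
END //
DELIMITER ;

CALL count_users(@n);            -- procedures are invoked with CALL
SELECT price_with_tax(100.00);   -- functions can appear inside expressions
```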

 

Triggers

A trigger is a database object associated with a table: when the condition defined for the trigger is met, the set of statements defined in the trigger is executed. Triggers can help enforce data integrity on the database side.
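
A small runnable sketch of the idea using Python's sqlite3 (names are invented; MySQL trigger syntax differs slightly but the concept is identical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, amount INTEGER);
CREATE TABLE audit  (order_id INTEGER, note TEXT);

-- Fires automatically after every insert into orders; the application
-- never has to remember to write the audit row itself.
CREATE TRIGGER log_order AFTER INSERT ON orders
BEGIN
  INSERT INTO audit VALUES (NEW.id, 'created');
END;
""")

conn.execute("INSERT INTO orders (amount) VALUES (50)")
print(conn.execute("SELECT * FROM audit").fetchall())  # → [(1, 'created')]
```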

 

Lock granularity

Each MySQL storage engine can implement its own locking strategy and lock granularity.

Table locks

MySQL's most basic strategy is to lock the entire table. Write locks have a higher priority than read locks.

Row-level locks

Row-level locks are implemented in the storage engine layer, not in the MySQL server layer.

 

Transactions

A transaction is a group of atomic SQL queries, that is, an independent unit of work. The statements within a transaction either all execute successfully or all fail.

ACID:

Atomicity: a transaction is treated as a minimal, indivisible unit of work; either all of its operations succeed, or they all fail and are rolled back.

Consistency: the database always moves from one consistent state to another consistent state.

Isolation: changes made by a transaction are invisible to other transactions until it finally commits.

Durability: once a transaction commits, the changes it made are saved to the database permanently.
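
Atomicity and consistency can be demonstrated with a short sqlite3 sketch (table and names are invented): a two-step transfer where the second step violates a constraint, so the whole transaction rolls back and no partial update survives.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, "
             "balance INTEGER NOT NULL CHECK (balance >= 0))")
conn.execute("INSERT INTO accounts VALUES ('a', 100), ('b', 0)")
conn.commit()

# Transfer 150 from 'a' to 'b'. The second UPDATE would drive 'a' negative
# and violates the CHECK constraint, so the transaction is rolled back:
# either both updates persist or neither does.
try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance + 150 WHERE name = 'b'")
        conn.execute("UPDATE accounts SET balance = balance - 150 WHERE name = 'a'")
except sqlite3.IntegrityError:
    pass  # the failed transfer left no partial update behind

print(conn.execute("SELECT balance FROM accounts ORDER BY name").fetchall())
# → [(100,), (0,)]
```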

 

Isolation Levels

The four isolation levels

1. Read Uncommitted: dirty reads are allowed, i.e., a transaction may read data modified by other sessions' uncommitted transactions.

2. Read Committed: only data that has already been committed can be read. This is the default level in Oracle and most other databases (it permits non-repeatable reads).

3. Repeatable Read: reads are repeatable; queries within the same transaction see data consistent with the transaction's start time. This is InnoDB's default level. In the SQL standard this level eliminates non-repeatable reads but still permits phantom reads; InnoDB, however, solves phantom reads as well.

4. Serializable: reads are fully serialized; every read must acquire a table-level shared lock, so readers and writers block each other.

 

 

Read Uncommitted: a transaction can read data that has not yet been committed, i.e., a dirty read. If one transaction has updated a piece of data and another transaction reads the same data before the first one rolls back for some reason, the data read by the second transaction is wrong.

Read Committed: has the non-repeatable read problem. The same query issued twice within one transaction can return inconsistent data, because another transaction may have committed an update to the original data between the two queries.

Repeatable Read: still has phantom reads, meaning that while one transaction reads records within a range, another transaction inserts new records into that range; when the first transaction reads the range again, phantom rows appear. Repeatable Read is MySQL's default transaction isolation level.

Serializable: locks every row of data that is read, which can cause many timeouts and heavy lock contention.
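
To make the levels concrete, these are the MySQL statements for switching a session's isolation level (a sketch using a hypothetical table t; syntax as documented for MySQL):

```sql
-- Allow dirty reads for this session only:
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Back to InnoDB's default; inside one transaction, repeating the same
-- SELECT returns the snapshot established at the first read:
SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
START TRANSACTION;
SELECT COUNT(*) FROM t;  -- snapshot established here
SELECT COUNT(*) FROM t;  -- same result, even if another session commits inserts
COMMIT;
```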

What is the difference between a phantom read and a non-repeatable read?

A phantom read is one step worse than a non-repeatable read. With a non-repeatable read, the same row you just read comes back with different values. With a phantom read, the set of rows you just read suddenly has one more row, or one fewer, as if rows had materialized out of thin air even though you clearly just checked. It feels like a hallucination, hence the name.

 

To solve phantom reads, multi-version concurrency control must be introduced, or the entire table locked (which is obviously undesirable).

Multi-version concurrency control

To improve performance, the database provides multi-version concurrency control (MVCC) to avoid locking operations; it is implemented by saving snapshots of the data at certain points in time.

There are many MVCC implementations; the most typical are optimistic concurrency control and pessimistic concurrency control.

Pessimistic concurrency control:

A system of locks prevents users from modifying data in ways that affect other users. If an operation performed by a user causes a lock to be applied, other users cannot perform conflicting operations until the lock's owner releases it. This approach is called pessimistic concurrency control because it is mainly used in environments with intense data contention, where the cost of protecting data with locks is lower than the cost of rolling back transactions when conflicts occur.

Optimistic concurrency control:

In optimistic concurrency control, users do not lock data when reading it. When a user updates data, the system checks whether another user changed the data after it was read. If another user has updated the data, an error is raised; normally, the user receiving the error rolls back the transaction and starts over. This approach is called optimistic concurrency control because it is mainly used in environments with little data contention, where the cost of occasionally rolling back a transaction is lower than the cost of locking the data on every read.
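
The classic way to apply optimistic concurrency control in SQL is a version column with a compare-and-swap UPDATE. A runnable sqlite3 sketch (table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, qty INTEGER, version INTEGER)")
conn.execute("INSERT INTO items VALUES (1, 10, 0)")
conn.commit()

def optimistic_update(conn, item_id, new_qty, read_version):
    """Write only if nobody changed the row since we read it."""
    cur = conn.execute(
        "UPDATE items SET qty = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_qty, item_id, read_version))
    conn.commit()
    return cur.rowcount == 1  # 0 rows touched => someone got there first

# Two clients read the same row (version 0) without taking any lock.
qty, ver = conn.execute("SELECT qty, version FROM items WHERE id = 1").fetchone()

print(optimistic_update(conn, 1, qty - 1, ver))  # → True  (first writer wins)
print(optimistic_update(conn, 1, qty - 2, ver))  # → False (stale version: retry)
```

The second caller detects the conflict from the zero row count and, as described above, would roll back and start over with a fresh read.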

 

InnoDB's MVCC implementation of Repeatable Read stores two hidden columns in every row record: one holds the row's creation time (system version number), the other its expiration (deletion) time.

For a SELECT statement:

1. InnoDB looks up only rows whose version is earlier than or equal to the current transaction's version number.

2. The row's deletion version must be either undefined or greater than the current transaction's version number.

INSERT:

InnoDB saves the current system version number as the row version number of each newly inserted row.

DELETE:

InnoDB saves the current system version number for each deleted row as its deletion identifier.

UPDATE:

InnoDB inserts a new row record and saves the current system version number as the new row's version number, while also saving the current system version number in the original row as its deletion identifier.

The read view (MVCC) achieves consistent nonlocking reads (Consistent Nonlocking Reads), thereby avoiding phantom reads.
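
The two SELECT visibility rules above can be sketched as a toy model in Python. This only illustrates the bookkeeping; InnoDB's real implementation is different.

```python
class Row:
    def __init__(self, data, created, deleted=None):
        self.data = data
        self.created = created   # system version when the row was inserted
        self.deleted = deleted   # system version when it was deleted, if ever

def visible(row, txn_version):
    # Rule 1: the row was created at or before this transaction's version.
    # Rule 2: its deletion version is undefined or later than this version.
    return (row.created <= txn_version and
            (row.deleted is None or row.deleted > txn_version))

rows = [
    Row("old",     created=1),              # visible to everyone
    Row("removed", created=1, deleted=3),   # deleted at version 3
    Row("new",     created=5),              # inserted after our snapshot
]

snapshot = 4  # our transaction's version number
print([r.data for r in rows if visible(r, snapshot)])  # → ['old']
```

Because visibility depends only on the transaction's own version number, repeated reads return the same set of rows even while other transactions insert or delete concurrently.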

 

2. MySQL Engines

MySQL's commonly used engines are the InnoDB storage engine and the MyISAM storage engine.

InnoDB storage engine:

1. Data storage format:

With InnoDB, a table is stored in two files: .frm (the table structure) and .ibd (the data and indexes).

2. Lock granularity

InnoDB uses MVCC (multi-version concurrency control) to support high concurrency, implements all four isolation levels with REPEATABLE READ as the default, and prevents phantom reads through its gap-locking strategy. Its lock granularity is the row lock.

3. Transactions

InnoDB is the typical transactional storage engine, and through a number of mechanisms and tools it supports true hot backup.

4. Data storage characteristics

InnoDB tables are built on a clustered index (covered in another post). Primary-key queries via the clustered index perform very well, but secondary (non-primary-key) indexes must contain the primary key columns, so if the primary key is large, all the other indexes will be large as well.

MyISAM storage engine

1. Data storage format:

MyISAM stores the index and the data separately, using three files: .frm (the table structure), .MYD (the data), and .MYI (the index).

2. Lock granularity

MyISAM does not support row locks, so it takes a shared lock on the whole table when reading and an exclusive table lock when writing. Because the entire table is locked, it is inefficient for concurrent writes compared with InnoDB.

3. Transactions

MyISAM does not support transactions.

4. Data storage characteristics

MyISAM tables are stored using non-clustered indexes.

5. Other

MyISAM offers a number of features, including full-text indexing, compression, spatial functions, and delayed key writes.

A compressed table cannot be modified, but compression can greatly reduce disk space and therefore disk I/O, improving query performance.

A full-text index is an index built on words and can support complex queries.

With delayed key writes, index changes are not written to disk immediately when data is written; they go into an in-memory key buffer, and the corresponding index blocks are written to disk only when the buffer is flushed, which greatly improves write performance.

3. MySQL Indexes

Indexes are implemented in the MySQL storage engine layer, not in the server layer. So indexes work differently in each storage engine, and not all storage engines support all index types. MySQL currently provides the following four kinds of indexes.

  • B-Tree index: the most common index type; most engines support B-tree indexes.
  • HASH index: supported only by the Memory engine; used in simple scenarios.
  • R-Tree index (spatial index): a special index type of MyISAM, mainly for geospatial data types.
  • Full-text index: a special index type of MyISAM, mainly for full-text search; InnoDB supports full-text indexing from MySQL 5.6 onward.

Index selection principles

1. Fields that are queried frequently should have an index created on them.

2. A field with poor uniqueness (low selectivity) is not suitable for a standalone index, even if it is queried frequently.

3. Heavily updated fields are not suitable for indexing.

4. Fields that do not appear in WHERE, GROUP BY, or ORDER BY clauses should not be indexed.
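
Principle 1 can be checked empirically with the query plan. A sqlite3 sketch (MySQL's EXPLAIN plays the same role; table and index names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, num INTEGER, note TEXT)")
conn.executemany("INSERT INTO t (num, note) VALUES (?, ?)",
                 [(i % 100, "x") for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN reports whether SQLite scans the table or uses an index.
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

print(plan("SELECT id FROM t WHERE num = 7"))   # no index on num yet: a scan

conn.execute("CREATE INDEX idx_num ON t(num)")  # num is a frequent filter column
print(plan("SELECT id FROM t WHERE num = 7"))   # now a search using idx_num
```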

Index types

InnoDB's primary key index is a clustered index: the leaf nodes of the index data structure hold the row data, and a table can have only one clustered index. If there is no primary key, InnoDB selects a non-null unique index instead; if no such index exists, InnoDB implicitly defines a primary key as the clustered index.

Note: in the clustered index, non-leaf nodes hold only the index columns, while leaf nodes hold the index columns together with the row data.

 

Clustered indexes have the following advantages:

1. Faster data access: the index and the data are kept in the same B-tree, so a hit on the index is a hit on the data.

2. Covering-index scans can use the primary key values in the leaf nodes directly.

The shortcomings are as follows:

1. A clustered index maximizes performance only for I/O-intensive applications; if the data fits entirely in memory it offers little advantage.

2. Insertion speed depends heavily on the insertion order.

3. Updating a clustered index column is expensive, because the updated row may have to be moved.

4. When new data is inserted, or a primary key update forces a row to move, a "page split" may occur.

5. Non-clustered (secondary) indexes contain the clustered index key, so if the clustered index key is large, the secondary indexes will be large too (secondary index leaf nodes contain both the indexed value and the primary key columns).

 

An InnoDB secondary index stores the clustered index (primary key) value directly in its leaf nodes.

What a secondary index leaf node saves is not a pointer to the row's physical location but the row's primary key value. This way, when rows move during a page split, the secondary indexes do not need to be maintained.

In the MyISAM storage engine, the primary key index and the other indexes have the same structure: both point to the data row's physical location.

 

MySQL mainly provides two kinds of indexes: B-Tree (including B+Tree) indexes and Hash indexes.

A B-tree index supports both prefix lookup and range lookup; for a B-tree with N records, the complexity of retrieving one record is O(log N).

A hash index supports only equality lookup, but no matter how large the hash table is, the lookup complexity is O(1).

Clearly, if the values are highly distinct and lookups are mainly equality lookups, a hash index is the more efficient choice, with its O(1) lookup complexity. If the values are poorly distinguished, or lookups are mainly range-based, a B-tree is the better choice, since it supports range lookups.
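
The trade-off can be sketched with Python's built-in structures: a dict plays the role of a hash index, a sorted list (searched with bisect) stands in for a B-tree. Names and data are invented for the example.

```python
import bisect

# Hash lookup (dict): O(1) equality lookup, but the keys carry no order,
# so a range query degenerates into scanning every key.
hash_index = {num: f"row{num}" for num in range(1000)}
print(hash_index[42])  # → row42   (one hash probe, regardless of size)

# Ordered lookup (sorted list as a stand-in for a B-tree): O(log N) to
# locate a key, and a range [lo, hi) is one binary search plus a walk.
keys = sorted(hash_index)
lo, hi = bisect.bisect_left(keys, 10), bisect.bisect_left(keys, 15)
print(keys[lo:hi])  # → [10, 11, 12, 13, 14]
```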

 

4. SQL Optimization Tips

All optimization follows one principle: avoid full table scans wherever possible (this is the very reason indexes exist), avoid certain operations in SQL statements, and when necessary move the processing into the application program.

1. To optimize queries, avoid full table scans; first consider building indexes on the columns used in WHERE and ORDER BY.

2. Avoid testing fields for NULL in the WHERE clause; otherwise the engine abandons the index and performs a full table scan. For example:

select id from t where num is null

Instead, give num a default value of 0, make sure the column contains no NULLs, and query like this:

select id from t where num=0

    

3. Avoid using the != or <> operators in the WHERE clause; otherwise the engine abandons the index and performs a full table scan.

    

4. Avoid using or to join conditions in the WHERE clause; otherwise the engine abandons the index and performs a full table scan. For example:

select id from t where num=10 or num=20

can be rewritten as:

select id from t where num=10

union all

select id from t where num=20

    

5. Use in and not in with caution, since they can also cause full table scans. For example:

select id from t where num in(1,2,3)

For consecutive values, use between instead of in:

select id from t where num between 1 and 3

    

6. The following query also causes a full table scan:

select id from t where name like '%abc%'

    

7. Avoid applying expressions to fields in the WHERE clause; this makes the engine abandon the index and perform a full table scan. For example:

select id from t where num/2=100

should be rewritten as:

select id from t where num=100*2

    

8. Avoid applying functions to fields in the WHERE clause; this makes the engine abandon the index and perform a full table scan. For example:

select id from t where substring(name,1,3)='abc' -- ids whose name starts with 'abc'

should be rewritten as:

select id from t where name like 'abc%'

    

9. Do not apply functions, arithmetic, or other expressions to the left side of "=" in the WHERE clause; otherwise the system may be unable to use the index correctly.

    

10. When an indexed field is used as a condition and the index is a composite index, the first field of that index must appear in the condition for the system to use the index; otherwise the index will not be used. Keep the order of the condition fields consistent with the order of the index wherever possible.

    

11. Do not write meaningless queries. For example, to generate an empty table structure:

select col1,col2 into #t from t where 1=0

Code like this returns no result set but still consumes system resources; it should be changed to:

create table #t(...)

    

12. Replacing in with exists is often a good choice:

select num from a where num in(select num from b)

can be replaced with the following statement:

select num from a where exists(select 1 from b where num=a.num)
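
A quick sqlite3 sketch confirming the two forms are equivalent (tables and data are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE a (num INTEGER);
CREATE TABLE b (num INTEGER);
INSERT INTO a VALUES (1), (2), (3), (4);
INSERT INTO b VALUES (2), (4), (6);
""")

in_rows = conn.execute(
    "SELECT num FROM a WHERE num IN (SELECT num FROM b)").fetchall()
exists_rows = conn.execute(
    "SELECT num FROM a WHERE EXISTS (SELECT 1 FROM b WHERE b.num = a.num)").fetchall()

print(in_rows, exists_rows)  # both return [(2,), (4,)]
```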

    

13. Not every index helps every query. SQL optimizes queries based on the data in the table; when an indexed column contains many duplicate values, the query may not use the index at all. For example, if a table has a sex field that is roughly half male and half female, an index on sex does nothing for query efficiency.

    

14. More indexes are not always better. An index improves the efficiency of the corresponding selects, but it also lowers the efficiency of inserts and updates, because an insert or update may rebuild the index; so how to index needs careful, case-by-case consideration. A table should preferably have no more than six indexes; if there are more, reconsider whether indexes on rarely used columns are really necessary.

    

15. Use numeric fields where possible. A field that holds only numeric information should not be designed as a character type; that lowers query and join performance and increases storage overhead. The engine compares each character of a string one by one when processing queries and joins, whereas a numeric value needs only a single comparison.

    

16. Use varchar instead of char wherever possible: a variable-length field takes less storage, saving space, and searching within a relatively small field is clearly more efficient.

17. Never write select * from t anywhere; replace "*" with a concrete list of columns, and do not return any column you will not use.

18. Avoid frequently creating and dropping temporary tables, to reduce the consumption of system-table resources.

19. Temporary tables are not off-limits; used appropriately they can make certain routines more efficient, for example when you need to repeatedly reference a data set from a large or frequently used table. For one-off events, though, a derived (export) table is better.

    

20. When creating a temporary table, if a large amount of data is inserted in one go, use select into instead of create table to avoid generating a large log and to improve speed; if the data volume is small, create table first and then insert, to ease the pressure on the system tables.

21. If temporary tables are used, explicitly drop them all at the end of the stored procedure: truncate table first, then drop table, to avoid long locks on the system tables.

    

22. Avoid cursors where possible; cursors are inefficient, and if a cursor operates on more than 10,000 rows the code should be rewritten.

    

23. Before reaching for a cursor-based or temporary-table method, look for a set-based solution to the problem; set-based methods are usually more efficient.

24. Like temporary tables, cursors are not off-limits. Using a FAST_FORWARD cursor on a small data set is often better than other row-by-row processing methods, especially when several tables must be referenced to obtain the required data. Routines that compute "totals" in the result set are usually faster than doing it with a cursor. If development time allows, try both the cursor-based and the set-based approach and see which works better.

25. Avoid large transactions, to improve the system's concurrency.

26. Avoid returning large volumes of data to the client; if the volume is too large, reconsider whether the requirement is reasonable.

 


Origin blog.csdn.net/u011510825/article/details/92378899