Methods for optimizing SQL statements

Part one: notes on the WHERE clause:

1. Avoid testing a field for NULL in the WHERE clause; doing so causes the engine to abandon the index and perform a full table scan, for example:
select id from t where num is null
Instead, give num a default value of 0, make sure the num column never contains NULL, and query like this:
select id from t where num = 0

2. Avoid using the != or <> operator in the WHERE clause; otherwise the engine abandons the index and performs a full table scan.
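For instance, a filter such as num <> 10 can often be rewritten as two range predicates combined with union all, in the same spirit as the or rewrite in the next tip (a sketch; it assumes an index exists on num):
-- instead of: select id from t where num <> 10
select id from t where num < 10
union all
select id from t where num > 10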

3. Avoid using or to join conditions in the WHERE clause; it causes the engine to abandon the index and perform a full table scan, for example:
select id from t where num = 10 or num = 20
This query can be rewritten as:
select id from t where num = 10
union all
select id from t where num = 20

4. The following query results in a full table scan:
select id from t where name like '%abc%'
To improve efficiency, consider full-text search.
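On SQL Server, a minimal full-text sketch might look like this; it assumes a full-text index has already been created on t(name), and note that full-text matching works on word prefixes rather than arbitrary substrings:
-- assumes a full-text index has already been created on t(name)
select id from t where contains(name, '"abc*"')
By contrast, a pattern with only a trailing wildcard, such as name like 'abc%', can still use an ordinary index, as tip 8 below shows.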

5. Use in and not in with caution; they can also lead to a full table scan, for example:
select id from t where num in (1, 2, 3)
For consecutive values, use between instead of in:
select id from t where num between 1 and 3
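When the values come from a subquery rather than a short literal list, a common alternative is exists; this is only a sketch, and t2 is an illustrative second table whose id column supplies the values:
-- instead of: select id from t where num in (select id from t2)
select id from t where exists (select 1 from t2 where t2.id = t.num)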

6. Avoid performing expression operations on a field in the WHERE clause; this causes the engine to abandon the index and perform a full table scan. For example:
select id from t where num / 2 = 100
should be written as:
select id from t where num = 100 * 2

7. Using a local variable as a parameter in the WHERE clause can also lead to a full table scan,
because SQL resolves local variables only at run time, while the optimizer cannot defer the choice of access plan to run time; it must choose the plan at compile time.
Since the value of the variable is still unknown when the access plan is built at compile time, it cannot be used as input for index selection.
The following statement therefore performs a full table scan:
select id from t where num = @num
It can be changed to force the query to use the index:
select id from t with (index(index_name)) where num = @num

8. Avoid applying functions to a field in the WHERE clause; this causes the engine to abandon the index and perform a full table scan. For example:
select id from t where substring(name, 1, 3) = 'abc'    -- ids whose name begins with 'abc'
select id from t where datediff(day, createdate, '2005-11-30') = 0    -- ids generated on '2005-11-30'
should be written as:
select id from t where name like 'abc%'
select id from t where createdate >= '2005-11-30' and createdate < '2005-12-1'

9. Do not apply functions, arithmetic operations, or other expressions to the left side of the "=" in the WHERE clause, otherwise the system may not be able to use the index correctly.
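Another instance of the same rule, reusing the createdate column from tip 8 (a sketch):
-- the expression on the left of '=' wraps the column, so the index cannot be used
select id from t where year(createdate) = 2005
-- equivalent range predicate on the bare column; the index remains usable
select id from t where createdate >= '2005-01-01' and createdate < '2006-01-01'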
---------------------------
Part two: other optimization considerations:

1. For query optimization, try to avoid full table scans;
first consider creating indexes on the columns involved in where and order by.

2. When using an indexed field as a condition, if the index is a composite index,
the condition must use the first field of that index for the system to use the index;
otherwise the index will not be used, and the order of fields in the condition should match the order of the index columns as far as possible.
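A minimal sketch of the leading-column rule (the index name and the choice of columns are illustrative):
create index ix_t_num_name on t (num, name);
-- can use the composite index for a seek: the condition includes the leading column num
select id from t where num = 10 and name = 'abc';
-- cannot seek on this composite index: the leading column num is missing from the condition
select id from t where name = 'abc';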

3. Do not write queries that serve no purpose, for example generating an empty table structure with:
select col1, col2 into #t from t where 1 = 0
This code returns no result set but still consumes system resources; change it to:
create table #t (...)

4. For a JOIN of multiple tables with large amounts of data,
page the data first and then JOIN; otherwise the logical reads will be high and performance poor.
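A possible page-then-join sketch (the orders and customers tables, their columns, and the page range are purely illustrative):
-- 1. page the driving table first, keeping only the keys of the requested page
select id
into #page
from (
    select id, row_number() over (order by id) as rn
    from orders
) as o
where o.rn between 101 and 150;
-- 2. join only those paged rows to the other large tables
select o.id, o.num, c.name
from #page p
join orders o    on o.id = p.id
join customers c on c.id = o.customerid;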

5. Do not use select * from t anywhere; replace "*" with a specific list of fields, and do not return any fields you do not need.

6. Use numeric fields whenever possible: if a field contains only numeric information, try not to design it as a character field, since that reduces query and join performance and increases storage overhead.
This is because the engine compares each character of a string one by one when processing queries and joins, whereas a single comparison is enough for a numeric value.

7. More indexes are not always better.
Although an index can improve the efficiency of the corresponding select, it also reduces the efficiency of insert and update,
because the indexes may have to be rebuilt on insert or update,
so how to build indexes needs careful, case-by-case consideration.
A table should preferably have no more than six indexes;
if there are more, consider whether indexes on rarely used columns are really necessary.

8. In an update statement, if you only change a couple of fields, do not update all fields; otherwise frequent calls will cause significant performance overhead and generate a large amount of log.
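A minimal illustration, reusing the columns that appear elsewhere in this article:
-- write only the two columns that actually changed
update t
set name = 'abc',
    num  = 10
where id = 1;
-- avoid blindly re-assigning every column of the row when only these two values changed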

9. Use varchar/nvarchar instead of char/nchar whenever possible:
first, variable-length fields take less storage space, which saves space;
second, for queries, searching within a smaller field is obviously more efficient.

10. Avoid updating clustered index columns as much as possible, because the order of the clustered index columns is the physical storage order of the rows in the table;
once such a column value changes, the rows of the entire table must be reordered, which consumes considerable resources.
If the application needs to update clustered index columns frequently, reconsider whether the clustered index should be built on those columns at all.

11. When creating a new temporary table, if a large amount of data is inserted at once,
you may use select into instead of create table, to avoid generating a large amount of log and to increase speed;
if the amount of data is small, then to ease the pressure on the system tables, create the table first and then insert.
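A sketch of both patterns (#big, #small and the filters are illustrative):
-- large volume of data: select into creates and fills the temporary table in one step,
-- keeping logging down and increasing speed
select id, name into #big from t where num > 0;
-- small volume of data: create the table first, then insert, to ease the pressure on the system tables
create table #small (id int, name varchar(50));
insert into #small (id, name)
select id, name from t where num = 10;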

12. Use table variables instead of temporary tables whenever possible. Note, however, that if the table variable contains a large amount of data, its indexes are very limited (only the primary key index).
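A minimal table-variable sketch (@ids is an illustrative name); the primary key is effectively the only index available on it:
declare @ids table (id int primary key);   -- the primary key is effectively the only index
insert into @ids (id)
select id from t where num = 10;
select id from @ids;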

13. Avoid frequently creating and deleting temporary tables, to reduce the consumption of system table resources.
This is not to say temporary tables cannot be used; used appropriately, they can make certain routines more effective,
for example when a large table or a frequently used data set must be referenced repeatedly. For one-time events, however, it is better to use an export table.

14. If temporary tables are used, be sure to explicitly delete all of them at the end of the stored procedure:
first truncate table, then drop table,
so that locks on the system tables are not held for a long time.
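The cleanup pattern at the end of a stored procedure, as a sketch (#t stands for whatever temporary table the procedure created):
-- empty the temporary table first, then drop it,
-- so that locks on the system tables are held as briefly as possible
truncate table #t;
drop table #t;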

15. Set SET NOCOUNT ON at the beginning of all stored procedures and triggers, and SET NOCOUNT OFF at the end.
There is no need to send a DONE_IN_PROC message to the client after every statement in a stored procedure or trigger.
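A minimal stored-procedure skeleton following this rule (the procedure name and its body are placeholders):
create procedure usp_example
as
begin
    set nocount on;      -- stop sending a DONE_IN_PROC message after every statement
    select id from t where num = 10;   -- procedure body goes here
    set nocount off;
end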

16. Before resorting to a cursor-based or temporary-table-based method,
look for a set-based solution to the problem;
set-based methods are usually more efficient.

17. Like temporary tables, cursors are not unusable.
Using a FAST_FORWARD cursor on a small data set is often better than other row-by-row processing methods,
especially when several tables must be referenced to obtain the required data.
Routines that include a "total" in the result set are usually faster than producing it with a cursor.
If development time permits, try both the cursor-based approach and the set-based approach and see which works better.
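A minimal FAST_FORWARD cursor sketch over a small result set (the per-row work is a placeholder):
declare @id int;
declare c cursor fast_forward for
    select id from t where num = 10;
open c;
fetch next from c into @id;
while @@fetch_status = 0
begin
    print @id;                      -- per-row processing goes here
    fetch next from c into @id;
end
close c;
deallocate c;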

18. Avoid returning large amounts of data to the client; if the volume of data is too large, consider whether the corresponding requirement is reasonable.

19. Avoid large transactions as much as possible, in order to improve the system's concurrency.

Source: www.cnblogs.com/ryanace1988/p/11083145.html