What are the methods for database optimization? These 30 tips will help you.

1
Try to avoid using the != or <> operators in the where clause, otherwise the engine will give up using the index and perform a full table scan.
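For example (a sketch using the same illustrative table t and column num as the rest of this article), a <> condition can often be rewritten as two range conditions combined with union all, so that an index on num can be used:
select id from t where num<>10
can be rewritten as:
select id from t where num<10
union all
select id from t where num>10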
2
When optimizing a query, try to avoid full table scans. First consider creating indexes on the columns involved in where and order by.
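A minimal sketch, assuming the illustrative table t used throughout this article (the index name idx_t_num is made up here): create an index covering the where and order by column, and the query below can then avoid a full table scan:
create index idx_t_num on t(num)
select id from t where num=10 order by num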
3
Try to avoid checking a field for null in the where clause, otherwise the engine will give up using the index and perform a full table scan, such as:
select id from t where num is null
You can set a default value of 0 on num to make sure there are no null values for the num column in the table, then query like this:
select id from t where num=0
4
Try to avoid using or to join conditions in the where clause, otherwise the engine will give up using the index and perform a full table scan, such as:
select id from t where num=10 or num=20
You can query like this:
select id from t where num=10
union all
select id from t where num=20
5
The following query will also result in a full table scan (a like pattern with a leading percent sign cannot use the index; a pattern such as like 'abc%' with no leading percent sign can):
select id from t where name like '%abc%'
To improve efficiency, consider full-text search.
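A sketch of the full-text alternative in SQL Server; it assumes a full-text index has already been created on the name column, which this article does not cover:
select id from t where contains(name, '"abc*"')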
6
in and not in should also be used with caution, otherwise they will result in a full table scan, such as:
select id from t where num in(1,2,3)
7
For continuous values, use between instead of in if you can:
select id from t where num between 1 and 3
8
Try to avoid expression operations on fields in the where clause, which will cause the engine to give up using the index and perform a full table scan. Such as:
select id from t where num/2=100
Should be changed to:
select id from t where num=100*2
9
You should try to avoid functional operations on fields in the where clause, which will cause the engine to give up using indexes and perform full table scans. Such as:
select id from t where substring(name,1,3)='abc'    -- ids where name starts with abc
select id from t where datediff(day,createdate,'2005-11-30')=0    -- ids created on '2005-11-30'
Should be changed to:
select id from t where name like 'abc%'
select id from t where createdate>='2005-11-30' and createdate<'2005-12-1'
10
Do not perform functions, arithmetic operations, or other expressions on the left side of the "=" in the where clause, otherwise the system may not be able to use the index correctly.
11
When using an indexed field as a condition, if the index is a composite index, the first column of the index must appear in the condition for the system to use the index; otherwise the index will not be used. Where possible, keep the order of the condition fields consistent with the order of the index columns.
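For example, a sketch with a hypothetical composite index on two illustrative columns col1 and col2 (these names are not from the original article):
create index idx_t_col1_col2 on t(col1,col2)
select id from t where col1=1 and col2=2    -- can use the composite index
select id from t where col2=2    -- the leading column col1 is missing, so the composite index generally cannot be used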
12
Don't write meaningless queries, such as generating an empty table structure:
select col1,col2 into #t from t where 1=0
This kind of code does not return any result set but still consumes system resources; it should be changed to:
create table #t(…)
13
Many times, using exists instead of in is a good choice:
select num from a where num in(select num from b)
Replace it with the following statement:
select num from a where exists(select 1 from b where num=a.num)
14
Not all indexes are effective for queries. SQL optimizes queries based on the data in the table. When a large amount of data in an indexed column is repeated, the query may not use the index. For example, a table has a sex field whose values are roughly half male and half female; even if an index is built on sex, it will not improve query efficiency.
15
Indexes are not "the more the better". An index can certainly improve the efficiency of the corresponding select, but it also reduces the efficiency of insert and update, because the indexes may be rebuilt during insert or update, so how to build indexes needs to be considered carefully, depending on the specific situation. The number of indexes on a table should preferably not exceed 6; if there are more, consider whether indexes on columns that are not frequently used are really necessary.
16
Avoid updating clustered index data columns as much as possible, because the order of the clustered index data columns is the physical storage order of the table records. Once the value of such a column changes, the order of the whole table's records has to be adjusted, which consumes considerable resources. If the application needs to update clustered index data columns frequently, consider whether the index should be built as a clustered index at all.
17
Use numeric fields as much as possible, and try not to design character fields for columns that only contain numeric information, as this reduces query and join performance and increases storage overhead. This is because the engine compares each character of a string one by one when processing queries and joins, whereas only one comparison is needed for numbers.
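A sketch of this design point (the table and column names are illustrative, not from the original article):
create table orders_char (customer_no varchar(10))    -- numeric-only information stored as characters: avoid
create table orders_int (customer_no int)    -- numeric type: cheaper to compare and to store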
18
Use varchar/nvarchar instead of char/nchar as much as possible, because, first, variable-length fields take less storage space, and second, for queries, searching within a relatively small field is obviously more efficient.
19
Do not use select * from t anywhere; replace "*" with a list of specific fields, and do not return any fields that are not used.
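For example (using the same illustrative table t):
select * from t    -- avoid
select id,num from t    -- list only the fields actually needed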
20
Try to use table variables instead of temporary tables. If the table variable contains a lot of data, be aware that the indexes are very limited (only the primary key index).
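A minimal sketch of a table variable that carries only its inline primary-key index, following the limitation described above:
declare @t table (id int primary key, num int)
insert into @t select id,num from t where num between 1 and 3
select id from @t where num=2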
21
Avoid frequent creation and deletion of temporary tables to reduce the consumption of system table resources.
22
Temporary tables are not unusable, and their proper use can make certain routines more efficient, for example, when a large table or a dataset in a frequently used table needs to be repeatedly referenced. However, for one-time events, it is better to use an export table.
23
When creating a new temporary table, if a large amount of data is inserted at once, you can use select into instead of create table to avoid generating a large amount of log and to improve speed; if the amount of data is not large, in order to reduce the load on system table resources, you should create table first and then insert.
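A sketch of the two variants described above (the temporary table names are illustrative):
select id,num into #t_large from t where num>0    -- large data volume: select into
create table #t_small (id int, num int)    -- small data volume: create table first, then insert
insert into #t_small select id,num from t where num between 1 and 3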
24
If temporary tables are used, explicitly delete all temporary tables at the end of the stored procedure: first truncate table, then drop table. This avoids locking system tables for a long time.
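For example:
truncate table #t
drop table #t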
25
Try to avoid using cursors, because cursors are inefficient; if the data operated on by a cursor exceeds 10,000 rows, you should consider rewriting it.
26
Before using the cursor-based method or the temporary table method, you should look for a set-based solution to the problem, and the set-based method is usually more efficient.
27
Like temporary tables, cursors are not unusable. Using FAST_FORWARD cursors on small datasets is often preferable to other row-by-row processing methods, especially when several tables must be referenced to obtain the required data. Routines that include "totals" in the result set are usually faster than cursor-based ones. If development time allows, try both the cursor-based approach and the set-based approach to see which one works better.
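A minimal sketch of a FAST_FORWARD cursor over a small result set (the variable and cursor names are illustrative):
declare @id int
declare c cursor fast_forward for select id from t where num<100
open c
fetch next from c into @id
while @@fetch_status = 0
begin
    -- process one @id here
    fetch next from c into @id
end
close c
deallocate c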
28
Set SET NOCOUNT ON at the beginning of all stored procedures and triggers, and SET NOCOUNT OFF at the end. There is no need to send a DONE_IN_PROC message to the client after executing each statement of a stored procedure or trigger.
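A sketch of a stored procedure skeleton that follows this tip (the procedure name and parameter are illustrative):
create procedure usp_get_ids @num int
as
begin
    set nocount on
    select id from t where num=@num
    set nocount off
end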
29
Try to avoid returning a large amount of data to the client. If the amount of data is too large, you should consider whether the corresponding requirement is reasonable.
30
Try to avoid large transaction operations to improve the system's concurrency.
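One common way to follow this tip (a sketch, not from the original article) is to break a large modification into small batches so that each transaction stays short:
while 1=1
begin
    delete top (1000) from t where num is null
    if @@rowcount = 0 break
end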

What other good methods do you have? Feel free to leave a comment!

Origin blog.csdn.net/qinluyu111/article/details/123193032