[0008] Several SQL statements that cause full table scans on big-data tables, and how to optimize them

When querying, try to avoid full table scans; use index scans instead. The following SQL patterns cause full table scans.

1. Fuzzy queries (LIKE) are very inefficient

  Reason: LIKE itself is relatively slow, and queries using it should be avoided where possible. A condition of the form like '%...%' (fully fuzzy, wildcards on both sides) cannot use an index, so it naturally degrades to a very slow full table scan. In addition, because of how the matching algorithm works, the longer the field being matched, the slower the fuzzy search.

  Solution: First, try to avoid fuzzy queries altogether. If the business genuinely requires them, at least avoid the fully fuzzy form. A right-fuzzy match, like '...%', can use an index. A left-fuzzy match, like '%...', cannot use an index directly, but can be handled with REVERSE plus a function-based index, turning it back into like '...%'. A fully fuzzy match cannot be optimized this way; if it is required, consider a search engine instead. In any case, to reduce load on the database server, push fuzzy matching out of the database as much as possible.
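The REVERSE trick can be sketched with SQLite from Python. This is only an illustration: the users table and idx_users_rev index are invented, and since SQLite has no built-in REVERSE, one is registered here to stand in for Oracle's REVERSE plus a function-based index.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite has no built-in REVERSE(); register one (deterministic, so it
# may legally appear in an expression index).
conn.create_function("reverse", 1, lambda s: s[::-1], deterministic=True)

conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)",
                 [("jackson",), ("wilson",), ("smith",), ("sonata",)])

# Indexing the reversed value turns a suffix search into a prefix search.
conn.execute("CREATE INDEX idx_users_rev ON users (reverse(name))")

# Left-fuzzy LIKE '%son' (cannot use a plain index) ...
left_fuzzy = {r[0] for r in
              conn.execute("SELECT name FROM users WHERE name LIKE '%son'")}
# ... is equivalent to a right-fuzzy search on the reversed value.
rewritten = {r[0] for r in
             conn.execute("SELECT name FROM users WHERE reverse(name) LIKE 'nos%'")}
```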

 

2. SELECT statements with an is null condition execute slowly

  Reason: In Oracle 9i, a single-column index fails when the query condition is field is null, causing a full table scan.

  Solution: NULL causes a lot of trouble in SQL, and it is best to declare indexed columns NOT NULL. For is null queries you can build a composite index and run ANALYZE on the table and index; rewriting the condition with nvl(field, 0) can re-enable index lookups, although the efficiency is still nothing to boast about. The index is never used for is not null. On tables with large amounts of data, is null queries are generally best avoided.

 

3. SELECT statements using the inequality operators (<>, !=) execute slowly

  Reason: In SQL, the inequality operators restrict index use and cause a full table scan, even when the comparison is on an indexed field.

  Solution: Rewrite the inequality as an OR of two range conditions, which can use the index and avoid the full table scan.

column <> 'aaa'                   -- full table scan
column < 'aaa' OR column > 'aaa'  -- uses the index
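A quick sanity check of the rewrite in SQLite (table and values invented). Note that, exactly like <>, the OR form also skips rows where the column is NULL, so the two really are equivalent:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (col TEXT)")
conn.executemany("INSERT INTO t VALUES (?)",
                 [("aaa",), ("abc",), ("zzz",), (None,)])

# <> 'aaa': NULL <> 'aaa' evaluates to NULL, so the NULL row is excluded.
ne = [r[0] for r in conn.execute(
    "SELECT col FROM t WHERE col <> 'aaa' ORDER BY col")]
# The range-OR rewrite excludes NULL for the same reason.
rng = [r[0] for r in conn.execute(
    "SELECT col FROM t WHERE col < 'aaa' OR col > 'aaa' ORDER BY col")]
```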

 

4. Improper use of a composite index

 The query does not reference the leading column, so the index is not used:

create index skip1 on emp5(job, empno);
select count(*) from emp5 where empno = 7900;  -- full table scan
select /*+ index(emp5 skip1) */ count(*) from emp5 where empno = 7900;  -- uses the composite index (forced by the hint)

 When using a composite index, ORDER BY should list the columns in the same order as the index (even when sorting on only one column); otherwise performance suffers.

create index skip1 on emp5(job, empno, hiredate);
select job, empno from emp5 where job = 'manager' and empno = '10' order by hiredate desc;  -- poor performance
select job, empno from emp5 where job = 'manager' and empno = '10' order by job, empno, hiredate desc;  -- uses the composite index
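The leading-column rule can be observed with SQLite's EXPLAIN QUERY PLAN. This is only a sketch: the emp5 schema is made up, and the exact plan wording varies between SQLite versions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp5 (empno INTEGER, ename TEXT, job TEXT)")
conn.execute("CREATE INDEX skip1 ON emp5 (job, empno)")

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is the readable detail.
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Filtering on empno alone skips the leading column 'job': full table scan.
no_leading = plan("SELECT ename FROM emp5 WHERE empno = 7900")
# Filtering on the leading column lets the composite index be used.
with_leading = plan("SELECT ename FROM emp5 WHERE job = 'MANAGER' AND empno = 7900")
```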

 

5. Improper use of OR

 Not all of the columns in the conditions joined by OR are indexed.

 For example, when the WHERE clause compares two conditions and only one of the columns is indexed, the OR causes a full table scan. Take where A = :1 or B = :2: if there is an index on A but none on B, evaluating B = :2 forces a full table scan.
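A sketch of the same effect in SQLite (hypothetical table t): with only one side of the OR indexed, the plan is a full scan; once both sides are indexed, the optimizer can union two index searches instead.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER, payload TEXT)")
conn.execute("CREATE INDEX idx_a ON t (a)")  # a is indexed, b is not

def plan(sql):
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Only one side of the OR is indexed, so the whole predicate
# degrades to a full table scan.
or_plan = plan("SELECT payload FROM t WHERE a = 1 OR b = 2")

# With both sides indexed, SQLite can union two index searches.
conn.execute("CREATE INDEX idx_b ON t (b)")
both_plan = plan("SELECT payload FROM t WHERE a = 1 OR b = 2")
```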

 

6. UPDATE statements that update every field

  If only one or two fields change, do not UPDATE all fields; otherwise frequent calls cause significant performance overhead and generate a large amount of log.
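A minimal illustration with SQLite (invented emp table): only the changed column is written, leaving the other columns, and any indexes on them, untouched.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, "
             "salary INTEGER, dept TEXT)")
conn.execute("INSERT INTO emp VALUES (1, 'alice', 3000, 'dev')")

# Only salary changed, so only salary appears in the SET list;
# name and dept (and any indexes on them) are left alone.
conn.execute("UPDATE emp SET salary = ? WHERE id = ?", (3500, 1))

row = conn.execute("SELECT name, salary, dept FROM emp WHERE id = 1").fetchone()
```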

 

7. JOINs between multiple large tables

 Reason: paginating only after the JOIN results in high logical reads;
 Solution: paginate first, then JOIN.
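The "paginate first, then JOIN" pattern can be sketched in SQLite (the orders/customers tables are invented): the inner query limits the driving table to one page before the join touches it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, cust_id INTEGER)")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(i, "cust%d" % i) for i in range(1, 6)])
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, (i % 5) + 1) for i in range(1, 101)])

# Paginate the driving table first, then JOIN only those 10 rows,
# instead of joining all 100 orders and paginating the joined result.
page = conn.execute("""
    SELECT o.id, c.name
    FROM (SELECT id, cust_id FROM orders ORDER BY id LIMIT 10 OFFSET 20) o
    JOIN customers c ON c.id = o.cust_id
""").fetchall()
```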

 

8. select count(*) from table;

 A count without any condition causes a full table scan and usually has no business meaning; it must be eliminated.

 

9. For queries that are executed repeatedly, using bind variables in the WHERE clause reduces parse time and improves performance.
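In application code this means binding parameters rather than concatenating literals, so one statement is parsed once and re-executed (sqlite3 uses ? placeholders; Oracle would use named binds such as :num — table invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, num INTEGER)")
conn.executemany("INSERT INTO t (num) VALUES (?)", [(n,) for n in (10, 20, 30)])

# Bad:  "SELECT id FROM t WHERE num = %d" % n  -- new SQL text, fresh parse.
# Good: one parameterized statement, re-executed with different bindings.
found = []
for n in (10, 20, 30):
    row = conn.execute("SELECT id FROM t WHERE num = ?", (n,)).fetchone()
    found.append(row[0])
```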

 

10. Avoid testing a field for NULL in the WHERE clause; it makes the engine abandon the index and perform a full table scan.

 Such as:

select id from t where num is null

 Most databases need special handling for NULL, and MySQL is no exception: it requires extra code, extra checks, and special index logic. Some developers do not realize that columns are nullable by default when a table is created; most of the time, however, they should be declared NOT NULL, or a special value such as 0 or -1 should be used as the default.

 NULL values cannot be used by an index: any column containing NULLs will not be included in the index. Even in a multi-column index, a column containing NULL is excluded from it. This means that if a column holds NULL values, building an index on it will not improve performance. Any is null or is not null in the WHERE clause keeps the optimizer from using the index.

 In this case, you can set a default value of 0 on num, ensure the num column of the table never contains NULL, and then query like this:

 select id from t where num=0;
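The redesign can be illustrated with SQLite (tables invented): the nullable column needs is null, while the NOT NULL DEFAULT 0 column is found with a plain, index-friendly equality.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Nullable column: rows with no value must be found with IS NULL,
# which the article warns defeats the index.
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, num INTEGER)")
conn.executemany("INSERT INTO t (num) VALUES (?)", [(None,), (5,), (None,)])
null_ids = [r[0] for r in conn.execute(
    "SELECT id FROM t WHERE num IS NULL ORDER BY id")]

# Redesigned column: NOT NULL with default 0, so an ordinary equality
# (which can use an index) finds the same rows.
conn.execute("CREATE TABLE t2 (id INTEGER PRIMARY KEY, "
             "num INTEGER NOT NULL DEFAULT 0)")
conn.executemany("INSERT INTO t2 (id, num) VALUES (?, ?)",
                 [(1, 0), (2, 5), (3, 0)])
zero_ids = [r[0] for r in conn.execute(
    "SELECT id FROM t2 WHERE num = 0 ORDER BY id")]
```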

 

11. Avoid using or to join conditions in the WHERE clause; it makes the engine abandon the index and perform a full table scan.

 Such as:

select id from t where num=10 or num=20;

The query can be rewritten as:

select id from t where num=10 union all select id from t where num=20;
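The two forms return the same rows, which a small SQLite check confirms (table invented; the UNION ALL rewrite is safe here because no row can satisfy both branches):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, num INTEGER)")
conn.executemany("INSERT INTO t (num) VALUES (?)",
                 [(10,), (20,), (30,), (10,)])

or_ids = [r[0] for r in conn.execute(
    "SELECT id FROM t WHERE num=10 OR num=20 ORDER BY id")]
# UNION ALL keeps duplicates, but num cannot be 10 and 20 at once,
# so no row is produced twice.
union_ids = sorted(r[0] for r in conn.execute(
    "SELECT id FROM t WHERE num=10 UNION ALL "
    "SELECT id FROM t WHERE num=20"))
```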

 

12. Use in and not in with caution; they too can lead to a full table scan.

 Such as:

select id from t where num in(1,2,3);

 For continuous values, use between instead of in:

select id from t where num between 1 and 3;
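A small SQLite check that the two predicates select the same rows (table invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, num INTEGER)")
conn.executemany("INSERT INTO t (num) VALUES (?)",
                 [(n,) for n in (0, 1, 2, 3, 4)])

in_ids = [r[0] for r in conn.execute(
    "SELECT id FROM t WHERE num IN (1, 2, 3) ORDER BY id")]
# For a continuous set of values the IN list collapses
# to a single range predicate.
between_ids = [r[0] for r in conn.execute(
    "SELECT id FROM t WHERE num BETWEEN 1 AND 3 ORDER BY id")]
```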

 

13. Avoid performing expression operations on fields in the WHERE clause; this makes the engine abandon the index and perform a full table scan.

 Such as:

select id from t where num/2=100;

 Should read:

select id from t where num=100*2
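SQLite's EXPLAIN QUERY PLAN shows the effect directly (schema invented; plan wording varies by version): the expression form scans, the rewritten form searches the index.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, num INTEGER)")
conn.execute("CREATE INDEX idx_num ON t (num)")

def plan(sql):
    return " ".join(r[-1] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

# Arithmetic on the column hides it from the index: a scan results.
expr_plan = plan("SELECT id FROM t WHERE num/2 = 100")
# Moving the arithmetic to the constant side restores the index search.
rewritten_plan = plan("SELECT id FROM t WHERE num = 100*2")
```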

 

14. Avoid applying functions to fields in the WHERE clause; this makes the engine abandon the index and perform a full table scan.

 Such as:

select id from t where substring(name,1,3)='abc';  -- ids whose name starts with 'abc'
 
select id from t where datediff(day,createdate,'2005-11-30')=0;  -- ids with createdate on '2005-11-30'

 Should read:

select id from t where name like 'abc%'
 
select id from t where createdate>='2005-11-30' and createdate<'2005-12-1'
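A small SQLite check that the prefix rewrite selects the same rows (table invented; SQLite's LIKE is case-insensitive by default, so the equivalence assumes lower-case data as here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO t (name) VALUES (?)",
                 [("abcdef",), ("abcxyz",), ("xabcde",), ("zzz",)])

# Function applied to the column: the index cannot be used.
fn_ids = [r[0] for r in conn.execute(
    "SELECT id FROM t WHERE substr(name, 1, 3) = 'abc' ORDER BY id")]
# Equivalent prefix LIKE, which an index can serve.
like_ids = [r[0] for r in conn.execute(
    "SELECT id FROM t WHERE name LIKE 'abc%' ORDER BY id")]
```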

 

15. Use numeric fields wherever possible. If a field contains only numeric information, do not design it as a character type: that reduces the performance of queries and joins and increases storage overhead. The engine compares a string character by character when processing queries and joins, whereas a single comparison is enough for a numeric value.

 

16. Use varchar/nvarchar instead of char/nchar wherever possible. Variable-length fields take less space, saving storage, and searching within a smaller field is clearly more efficient.

 

17. Avoid large transactions, to improve system concurrency.

 

18. Avoid returning large amounts of data to the client; if the data volume is very large, consider whether the underlying requirement is reasonable.

 

19. Indexes are not a cure-all. An appropriate index can certainly improve the efficiency of SELECTs, but it also reduces the efficiency of INSERTs and UPDATEs, because the index may have to be rebuilt on insert or update; how to build indexes therefore needs careful, case-by-case consideration. A table is best kept to no more than six indexes; beyond that, consider whether indexes on rarely used columns are really necessary.

 

20. Not every query on an indexed column actually uses the index. SQL query optimization is based on the data in the table: when an indexed column contains a large amount of duplicated data, the query may not use the index at all. For example, if a table has a sex field that is roughly half 'male' and half 'female', an index on sex has no effect on query efficiency, however it is built.


Origin www.cnblogs.com/ljt1412451704/p/11737369.html