Oracle SQL optimization summary

 

1. Reduce the scope

Reduce unnecessary I/O, CPU, and physical/logical read consumption.

① Narrow the scope: for full scans, add filter conditions (preferably on indexed columns);

② For partitioned tables, add partition conditions wherever possible.

Execution plan difference: PARTITION RANGE ALL (all partitions scanned) vs. PARTITION RANGE SINGLE (a single partition scanned).
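A minimal sketch of the difference, assuming a sales table that is range-partitioned by sale_date (all object names here are illustrative, not from the original post):

    -- No predicate on the partition key: the plan shows PARTITION RANGE ALL.
    SELECT SUM(s.amount)
    FROM   sales s
    WHERE  s.customer_id = 42;

    -- Predicate on the partition key sale_date: the plan can show PARTITION RANGE SINGLE.
    SELECT SUM(s.amount)
    FROM   sales s
    WHERE  s.customer_id = 42
    AND    s.sale_date >= DATE '2024-01-01'
    AND    s.sale_date <  DATE '2024-02-01';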

③ Check whether there are meaningless association conditions.

Example 1: a query contains both ① A.id in (select id from a1) and ② A.id in (select id from a1 where id = 1).

Since ② is already the stricter condition, ① can be commented out to avoid the unnecessary extra association and query.

Example 2: two business functions are similar; function 1 needs 5 tables (a, b, c, d, e), while function 2 needs only 3 (a, b, c).

Because the shared tables/fields have the same requirements, developers may save effort by reusing the wider SQL for both functions and simply filtering out the unneeded columns at the outermost layer,

so the extra tables end up being joined and queried for nothing.

④ For LEFT JOIN, confirm whether it can be changed to an inner JOIN.

With a LEFT JOIN, the optimizer may be forced into a poor choice of driving/driven table, and an index on the driven table may go unused even when only a small amount of data is needed,

producing a lot of unnecessary intermediate results and physical/logical reads; an inner JOIN lets the filter conditions cut the data down.
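As an illustrative sketch (orders/customers are assumed names, not from the post): when the WHERE clause already filters on the right-hand table, the LEFT JOIN is effectively an inner join and can be rewritten as one.

    -- LEFT JOIN keeps every orders row, but the WHERE filter on c discards
    -- the NULL-extended rows again, so the outer join buys nothing.
    SELECT o.order_id, c.customer_name
    FROM   orders o
    LEFT JOIN customers c ON c.customer_id = o.customer_id
    WHERE  c.status = 'ACTIVE';

    -- Equivalent inner join: the optimizer is free to pick either table as the
    -- driver and can apply the status filter before joining.
    SELECT o.order_id, c.customer_name
    FROM   orders o
    JOIN   customers c ON c.customer_id = o.customer_id
    WHERE  c.status = 'ACTIVE';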

2. Execution plan

① Small tables/partitions that have grown into large tables/partitions can produce wrong execution plans when statistics are missing or stale;
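A hedged sketch of refreshing statistics after such growth, using the standard DBMS_STATS package (schema and table names are placeholders):

    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname => 'APP_OWNER',   -- placeholder schema
        tabname => 'ORDERS',      -- placeholder table that has grown
        cascade => TRUE);         -- also refresh its index statistics
    END;
    /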

② In special cases, hints (driving_site, full, no_index, use_hash, ...) can be added to a statement; doing so should be agreed with the DBA.

When the database statistics are in good shape, there is usually no need to pin the execution plan with hints except in special circumstances;
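For reference, hint syntax looks like the following (a sketch only; whether to use any of these should still be agreed with the DBA, and all object names are assumptions):

    -- Ask for a hash join between the two aliases.
    SELECT /*+ USE_HASH(o c) */ o.order_id, c.customer_name
    FROM   orders o
    JOIN   customers c ON c.customer_id = o.customer_id;

    -- In a distributed query, DRIVING_SITE asks Oracle to run the join at the remote site.
    SELECT /*+ DRIVING_SITE(r) */ l.id, r.val
    FROM   local_tab l, remote_tab@remote_db r
    WHERE  l.id = r.id;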

③ Make good use of bind variables;
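A small SQL*Plus-style sketch of the difference (in application code the driver's parameter binding plays the same role; names are assumptions):

    -- Literal: every distinct value produces a separate cursor and a hard parse.
    SELECT COUNT(*) FROM orders WHERE customer_id = 42;

    -- Bind variable: the cursor can be shared across values.
    VARIABLE cust_id NUMBER
    EXEC :cust_id := 42
    SELECT COUNT(*) FROM orders WHERE customer_id = :cust_id;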

④ Use scalar subqueries in moderation, within what the data volume can bear;
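A scalar subquery in the SELECT list runs once per candidate row (helped only by scalar-subquery caching), so on large row counts an equivalent join is often cheaper. A sketch with assumed names, presuming customers.customer_id is unique:

    -- Scalar subquery form.
    SELECT o.order_id,
           (SELECT c.customer_name
            FROM   customers c
            WHERE  c.customer_id = o.customer_id) AS customer_name
    FROM   orders o;

    -- Equivalent join form (LEFT JOIN keeps orders that have no matching customer).
    SELECT o.order_id, c.customer_name
    FROM   orders o
    LEFT JOIN customers c ON c.customer_id = o.customer_id;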

⑤ Pass fixed values in directly wherever possible.

If a column is always given a fixed value, pass that value directly in every place the column is used (for example, in join conditions, GROUP BY, and so on).

When Oracle generates an execution plan it has to weigh, through the join conditions, which of the two tables should drive the other; writing the fixed value directly

lets the database choose the best execution plan for reading the data more accurately, and in the GROUP BY/sort part it also reduces resource consumption.
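A sketch of the idea (table and column names are assumptions): when org_id is known to be a fixed value, repeat the literal wherever the column is used instead of relying only on the join condition to carry it across.

    -- Relying on the join alone: the optimizer has to estimate both sides.
    SELECT b.region, SUM(b.amount)
    FROM   sales a, budgets b
    WHERE  a.org_id = b.org_id
    AND    a.org_id = 100
    GROUP BY b.region;

    -- Repeating the fixed value on the other table as well gives more precise
    -- estimates and can simplify the GROUP BY/sort work.
    SELECT b.region, SUM(b.amount)
    FROM   sales a, budgets b
    WHERE  a.org_id = b.org_id
    AND    a.org_id = 100
    AND    b.org_id = 100
    GROUP BY b.region;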

The following situations in the execution plan require serious attention:

① CARTESIAN (Cartesian product): confirm whether some table has been left with no join condition to the other tables;

② FILTER: the larger the driving table, the more likely it is to cause performance problems;

③ NESTED LOOPS in which the driving table or the driven table is fully scanned.

Worst case: both the driving table and the driven table are full-scanned, and performance problems grow as the data volume grows.

Large table inside the loop --> add an index

Loop driven by a poorly selective index --> consider dropping the index

--> When a large table appears in a nested loop, weigh how often the statement runs and discuss with the DBA whether an index can reasonably be added;

--> When the nested loop uses an index with poor selectivity (a small number of distinct keys) and execution is slow, ask the DBA to analyze the index;

Before dropping an index, it is recommended to gather usage statistics first and confirm that really nobody has used it over the past few months.
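Two hedged helper queries for that discussion (index and table names are placeholders): DISTINCT_KEYS versus NUM_ROWS gives a rough feel for selectivity, and index usage monitoring shows whether anyone still uses the index before it is dropped.

    -- Rough selectivity check: DISTINCT_KEYS far below NUM_ROWS means the index
    -- filters poorly inside a nested loop.
    SELECT i.index_name, i.distinct_keys, t.num_rows
    FROM   user_indexes i
    JOIN   user_tables  t ON t.table_name = i.table_name
    WHERE  i.table_name = 'ORDERS';

    -- Track usage for a while before dropping (results appear in V$OBJECT_USAGE,
    -- or DBA_OBJECT_USAGE on newer releases).
    ALTER INDEX orders_status_idx MONITORING USAGE;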

Reference blog: http://blog.itpub.net/28602568/viewspace-1362044/

3. Experience points

① In MERGE INTO, WHERE conditions other than those on the updated fields should be placed in the ON clause.

Columns: a WHERE condition such as primary key id = 1 may not use the index, while putting it in ON does use the primary-key index;

Subqueries: if a subquery is left in WHERE instead of ON, the subquery's table may not even appear in the execution plan's joins, and the statement may never finish.
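A sketch of the column case with made-up table names: the restriction id = 1 is moved from the update branch's WHERE into ON so the join itself can use the primary-key index (safe here because there is no WHEN NOT MATCHED branch).

    -- Filter left in the update branch: applied only after the join.
    MERGE INTO accounts a
    USING staging_accounts s
    ON (a.id = s.id)
    WHEN MATCHED THEN
      UPDATE SET a.balance = s.balance
      WHERE a.id = 1;

    -- Filter moved into ON: the join can drive off the primary-key index.
    MERGE INTO accounts a
    USING staging_accounts s
    ON (a.id = s.id AND a.id = 1)
    WHEN MATCHED THEN
      UPDATE SET a.balance = s.balance;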

② Choosing between UPDATE and MERGE

The following does not cover every situation; what to change in a given slow case depends on the business and the data.

update: when changing a single table, or joining with small tables, it is faster and more stable than merge;

merge: when a column of the table being changed is also used as the filter condition, e.g. "update A set i=(select i from b where a.id=b.id) where i<>(select i from b where a.id=b.id)", the <> check costs an extra pass that merge can avoid;
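A sketch of the merge form of that update: b is joined once, and the <> test moves into the update branch so rows that would not change are skipped without a second scan of b.

    MERGE INTO a
    USING b
    ON (a.id = b.id)
    WHEN MATCHED THEN
      UPDATE SET a.i = b.i
      WHERE a.i <> b.i;   -- only touch rows whose value actually changes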

③ Do not use the form (id = 1 or id in (subquery))

Reason: the execution plan will choose FILTER, and if many rows remain after the driving table is filtered, the driven table will generate hot blocks;

Blog reference: http://blog.itpub.net/28602568/viewspace-1462937/
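One common rewrite (a sketch, not taken from the referenced post; table names are assumptions) is to split the OR into two branches so each can be driven by an index, with UNION removing any overlap:

    -- Form that tends to produce a FILTER plan.
    SELECT t.*
    FROM   big_table t
    WHERE  t.id = 1
    OR     t.id IN (SELECT s.id FROM small_list s);

    -- Split into two index-friendly branches; UNION removes duplicate rows.
    SELECT t.* FROM big_table t WHERE t.id = 1
    UNION
    SELECT t.* FROM big_table t WHERE t.id IN (SELECT s.id FROM small_list s);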

④ Choosing between EXISTS and IN (avoid FILTER)

EXISTS is more likely than IN to produce a FILTER step in the execution plan, so when a SQL statement is slow and F5 (explain plan) shows a FILTER in the plan, try changing it to IN.

If the IN subquery still full-scans its table, or the statement is still slow, consider rewriting in (subquery) A against the outer query B as an explicit join;

if problems remain, check whether the situations described in this article apply and optimize with the overall picture in mind.

Supplement: a [not] exists subquery that has no correlation condition with the outer query can end up returning no data at all in the final result.

Blog reference: http://blog.itpub.net/28602568/viewspace-1666675/
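A sketch of the EXISTS-to-IN change described above (object names are assumptions):

    -- EXISTS form that showed a FILTER step in the plan.
    SELECT o.*
    FROM   orders o
    WHERE  EXISTS (SELECT 1
                   FROM   vip_customers v
                   WHERE  v.customer_id = o.customer_id);

    -- IN form, which the optimizer can usually unnest into a semi-join.
    SELECT o.*
    FROM   orders o
    WHERE  o.customer_id IN (SELECT v.customer_id FROM vip_customers v);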

⑤ Paging: choosing between rownum and row_number()

Blog reference: http://blog.itpub.net/28602568/viewspace-1366015/
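The linked post is not reproduced here; as a reminder, the classic rownum pagination pattern looks like this (fetching rows 21-40 of a sorted result, with assumed object names):

    SELECT *
    FROM  (SELECT t.*, ROWNUM rn
           FROM  (SELECT o.*
                  FROM   orders o
                  ORDER BY o.order_date DESC) t
           WHERE ROWNUM <= 40)       -- upper bound applied early
    WHERE rn > 20;                   -- lower bound applied on the numbered rows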

⑥ Choose rowid appropriately...

Blog reference: http://blog.itpub.net/28602568/viewspace-1378912/

⑦ Try not to write IS NULL and other conditions that prevent index use ("the index does not record NULL values").

If the column has an NVL function-based index, the IS NULL check can be written as NVL(field, 0) = 0, which can use that function-based index.
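A sketch of that trick (names are assumptions, and it presumes 0 never appears as a real value in the column):

    -- Function-based index on the NVL expression.
    CREATE INDEX orders_flag_fbi ON orders (NVL(flag, 0));

    -- The "IS NULL" check rewritten against the same expression can use the index.
    SELECT o.*
    FROM   orders o
    WHERE  NVL(o.flag, 0) = 0;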

⑧ Code that checks if count > 0: if one row can be found with rownum = 1, then count > 0 already holds,

so there is no need to fetch all matching rows and only then test count > 0; such checks can be unified into rownum = 1.

After making the changes, the DBA needs to track the SQL to confirm that the shortcomings of rownum do not cause problems.

Blog reference: http://blog.itpub.net/28602568/viewspace-1366015/

MySQL uses limit 1 and Oracle uses rownum = 1 to fetch a single row from the data.
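A sketch of the existence check with assumed names: the ROWNUM = 1 predicate lets Oracle stop at the first matching row instead of counting them all.

    -- Counts every matching row just to compare with 0.
    SELECT COUNT(*) FROM orders WHERE customer_id = :cust_id;

    -- Stops after the first matching row; the result is 0 or 1.
    SELECT COUNT(*) FROM orders WHERE customer_id = :cust_id AND ROWNUM = 1;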

⑨ Use temporary tables sensibly for functions that download large volumes of data

Blog reference: http://blog.itpub.net/28602568/viewspace-1685600/

⑩ Temporary table: a table with the temporary attribute, or an ordinary table?

Temporary table: has no statistics. After a large number of create/insert operations, the optimizer may still choose the temporary table's index and cause performance problems;

Ordinary table: created with create as select..., it also has no statistics, but when it is queried again the database uses dynamic sampling to generate a correct execution plan; if the rows just inserted need to be reflected in statistics immediately, a temporary table can have problems.
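For reference, the two variants look like this (a sketch with assumed names):

    -- Global temporary table: rows are private to the session and, depending on
    -- the release/settings, it may carry no statistics of its own.
    CREATE GLOBAL TEMPORARY TABLE tmp_download_ids (
      id NUMBER
    ) ON COMMIT PRESERVE ROWS;

    -- Ordinary staging table built with CREATE TABLE ... AS SELECT: also has no
    -- statistics at first, so the next hard parse relies on dynamic sampling.
    CREATE TABLE stage_download_ids AS
    SELECT o.id FROM orders o WHERE o.status = 'READY';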


Origin: blog.csdn.net/wangshengfeng1986211/article/details/112597200