A Discussion on Performance Optimization of Oracle EBS Customized Reports

Background: the preparer (maker) and reviewer (checker) of a journal voucher are not carried up to the voucher's descriptive flexfield by the submodules' flexfields, so the report has to drill down into each submodule to retrieve them, and each submodule's extraction logic is different. The journal-line and subledger tables have grown to tens of millions of rows, so the multi-table join queries are slow. Since rewriting the SQL from scratch would cost too much, the goal was to optimize through code restructuring and parameter configuration while changing the business-logic code as little as possible.

Approaches tried, in order:

SQL tuning

Oracle's built-in SQL tuning advisor was used to generate a revised execution plan, but the rewritten plan did not bring a meaningful improvement.
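As a sketch of this step, a tuning task can be driven with the standard `DBMS_SQLTUNE` package; the `sql_id` and task name below are placeholders, not values from the original report:

```sql
DECLARE
  l_task_name VARCHAR2(64);
BEGIN
  -- Create and run a tuning task for a captured SQL statement
  -- ('9babjv8yq8ru3' is a placeholder sql_id taken from V$SQL)
  l_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
                   sql_id     => '9babjv8yq8ru3',
                   time_limit => 300,
                   task_name  => 'gl_report_tune_task');
  DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task_name);
END;
/

-- Review the advisor's findings (plan changes, SQL profiles, index advice)
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('gl_report_tune_task') FROM dual;
```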

SQL rewriting

SQL rewriting: the multiple per-subledger SQL statements were merged into one large view instead of being extracted segment by segment into temporary tables. The result ran slower than the original, so this approach was not feasible.

Dbms_Parallel_Execute

DBMS_PARALLEL_EXECUTE is mostly used on permanent (heap) tables: when the task is chunked over a session-scoped temporary table, the worker sessions cannot see its data, so this approach was not feasible.
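For reference, a minimal chunk-by-ROWID sketch of this technique is below; the staging table name and update statement are illustrative. It is this ROWID chunking that fails on a session-scoped global temporary table, because the parallel worker sessions cannot see the invoking session's rows:

```sql
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'upd_task');

  -- Chunk the work by ROWID ranges; this only works on a permanent table.
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'upd_task',
    table_owner => USER,
    table_name  => 'XX_JE_LINES_STG',   -- hypothetical staging table
    by_row      => TRUE,
    chunk_size  => 10000);

  -- Each worker session runs the statement over one :start_id..:end_id chunk
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'upd_task',
    sql_stmt       => 'UPDATE xx_je_lines_stg SET processed = ''Y''
                        WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);

  DBMS_PARALLEL_EXECUTE.DROP_TASK('upd_task');
END;
/
```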

Pipelined function

A pipelined function generally does not need to materialize the result set of its query. Much like streaming media, a traditional join query materializes the full result set and then outputs it, whereas a pipelined function starts returning rows as soon as they are produced, and can be parallel-enabled.
Using pipelined-function technology, the OBJECT and TABLE types corresponding to the view from step 1 were created, but the result was no better than before the optimization.
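A minimal sketch of the pipelined-function structure follows; the type, function, and view names are illustrative, not those of the original report. (To actually run a table function in parallel, Oracle additionally requires a REF CURSOR input with a `PARALLEL_ENABLE (PARTITION ... BY ...)` clause, omitted here for brevity.)

```sql
-- Object and collection types backing the pipelined function
CREATE OR REPLACE TYPE xx_je_row AS OBJECT (
  je_header_id NUMBER,
  je_line_num  NUMBER,
  maker        VARCHAR2(100)
);
/
CREATE OR REPLACE TYPE xx_je_tab AS TABLE OF xx_je_row;
/
CREATE OR REPLACE FUNCTION xx_je_pipe RETURN xx_je_tab PIPELINED
AS
BEGIN
  FOR r IN (SELECT je_header_id, je_line_num, created_by_name
              FROM xx_subledger_v) LOOP   -- hypothetical source view
    -- Each row is emitted as soon as it is fetched; no full materialization
    PIPE ROW (xx_je_row(r.je_header_id, r.je_line_num, r.created_by_name));
  END LOOP;
  RETURN;
END;
/

-- Consumed like a table:
-- SELECT * FROM TABLE(xx_je_pipe);
```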

Cached function

For the same input parameters, a cached function returns the stored result directly without re-executing its SQL.
The maker/checker lookup logic was wrapped in a cached function, but with journal header je_header_id plus journal line je_line_num as the input, the reuse rate was low, so the improvement was not noticeable.
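A sketch of such a wrapper using Oracle's cross-session `RESULT_CACHE` (11g+) is below; the source table and column names are assumptions. With (je_header_id, je_line_num) as the cache key, almost every call is a distinct key, which is why the hit rate stayed low:

```sql
-- For a repeated (p_header_id, p_line_num) pair, the SQL runs once
-- and subsequent calls return the value from the result cache.
CREATE OR REPLACE FUNCTION xx_get_maker (
  p_header_id NUMBER,
  p_line_num  NUMBER
) RETURN VARCHAR2
  RESULT_CACHE
AS
  l_maker VARCHAR2(100);
BEGIN
  SELECT created_by_name
    INTO l_maker
    FROM xx_subledger_detail        -- hypothetical drill-down source
   WHERE je_header_id = p_header_id
     AND je_line_num  = p_line_num;
  RETURN l_maker;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    RETURN NULL;
END;
/
```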

Materialized view

A materialized view is essentially a physical table. It is often used for join queries across a DB_LINK, avoiding the cost of network waits by trading space for time.

CREATE MATERIALIZED VIEW xxxx_MV
REFRESH FORCE ON DEMAND
START WITH TO_DATE('23-06-2021 11:04:22', 'DD-MM-YYYY HH24:MI:SS') NEXT SYSDATE+5/(24*60) -- refresh every five minutes
AS
-- SELECT logic goes here

Two refresh methods deserve attention when using materialized views. A complete (full) refresh does not depend on materialized view logs; a fast (incremental) refresh requires a materialized view log on every base table referenced by the materialized view. Both methods add some load to the database. The materialized view did help, but the data is no longer real-time: the staleness depends on the refresh interval.
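As a sketch of the two refresh paths, with illustrative base-table and column names:

```sql
-- Fast (incremental) refresh needs a materialized view log on each base table
CREATE MATERIALIZED VIEW LOG ON xx_je_lines      -- hypothetical base table
  WITH ROWID, SEQUENCE (je_header_id, je_line_num)
  INCLUDING NEW VALUES;

-- A complete refresh needs no logs and can be invoked on demand:
BEGIN
  DBMS_MVIEW.REFRESH(list => 'XXXX_MV', method => 'C');  -- 'C' = complete
END;
/
```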

Final solution

Materialized view + cached function + SQL rewrite (+ MERGE comparison)

All of the subledger query logic was encapsulated in a materialized view, which a scheduled job refreshes in full.
Rows generated during the refresh interval are back-filled by comparing against the live tables with a MERGE.
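The back-fill step can be sketched as a MERGE keyed on the journal line, restricted to rows touched since the last refresh; the table names and the `:last_refresh_time` bind are illustrative:

```sql
-- Back-fill rows created or changed since the last MV refresh
MERGE INTO xx_report_stg t                       -- hypothetical report table
USING (SELECT je_header_id, je_line_num, maker, checker
         FROM xx_subledger_v                     -- live source view
        WHERE last_update_date > :last_refresh_time) s
   ON (    t.je_header_id = s.je_header_id
       AND t.je_line_num  = s.je_line_num)
 WHEN MATCHED THEN
   UPDATE SET t.maker = s.maker, t.checker = s.checker
 WHEN NOT MATCHED THEN
   INSERT (je_header_id, je_line_num, maker, checker)
   VALUES (s.je_header_id, s.je_line_num, s.maker, s.checker);
```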
With these two measures in place, the runtime improved greatly: a request that originally took over an hour shortened to about ten minutes. Hierarchical profiling of the code with DBMS_HPROF then showed that a standard function called in the SELECT, which resolves a CCID into the Chinese descriptions of the chart-of-accounts (COA) segments, was the remaining hotspot. A customized function was built for this part; it is essentially a shell around the standard function with result caching, so for the same CCID the SQL is executed only once and later calls return straight from the cache, saving a great deal of time. The SQL step that took 12 minutes dropped to under 2 minutes with this caching shell, a saving of 10 minutes.
After the complete transformation, the SQL that used to take more than an hour now produces results in about two minutes.
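For reference, the hierarchical profiling step looks roughly like this; the directory object and file name are placeholders, and the trace is analyzed afterwards with the `plshprof` command-line tool:

```sql
-- Start hierarchical profiling (PROF_DIR must be an existing DIRECTORY object)
BEGIN
  DBMS_HPROF.START_PROFILING(location => 'PROF_DIR',
                             filename => 'report_run.trc');
END;
/

-- ... run the report SQL / procedure under test here ...

BEGIN
  DBMS_HPROF.STOP_PROFILING;
END;
/

-- Then, on the database host:
--   plshprof -output report_run report_run.trc
-- which produces HTML reports showing time per subprogram call tree.
```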

Summary

When interviewing candidates for ETL data development roles, I sometimes ask how they would optimize a slow query. Most candidates give two answers: examine the SQL execution plan, and add or adjust indexes.
SQL performance optimization is not a one-shot exercise. It has to be combined with the business context: whether the SQL can be rewritten, how much parallelism the current hardware can support, whether space can be traded for time, and so on.

Origin: blog.csdn.net/x6_9x/article/details/118156113