Oracle Performance Tuning and Optimization


Oracle database performance is analyzed and evaluated mainly through two indicators: database throughput and database user response time. User response time can be broken down into system service time and user wait time, that is: user response time = service time + wait time. There are therefore two ways to obtain a satisfactory user response time: one is to reduce system service time, i.e. to improve the throughput of the database; the other is to reduce user wait time, i.e. to reduce the rate of conflict between users accessing the same database resources.
Database performance optimization covers the following areas.

Adjusting the data structure design. This part is completed before information system development. Developers need to consider, for example, whether to use Oracle's partitioning feature and whether to build indexes on frequently accessed database tables (an illustrative sketch of a partitioned table follows at the end of this section).

Adjusting the application structure design. This part is also completed before information system development. Here developers need to decide which application architecture to use: the traditional two-tier Client/Server architecture or the three-tier Browser/Web/Database architecture. Different application architectures place different demands on database resources.

Tuning the SQL statements executed by the database. Application work ultimately comes down to SQL statements executed in the database, so the efficiency of the SQL statements ultimately determines the performance of the Oracle database. Oracle recommends using the Oracle statement optimizer (Oracle Optimizer) and the row-level lock manager to tune and optimize SQL statements.

Adjusting server memory allocation. Memory allocation is configured while the system is running. The database administrator can adjust the data buffer cache, log buffer and shared pool sizes of the System Global Area (SGA) according to the state of the database, and can also adjust the size of the Program Global Area (PGA).

Adjusting hard disk I/O. This step is completed before information system development. The database administrator can place the data files that make up a tablespace on different hard disks to balance the I/O load across disks.

Adjusting operating system parameters. For example, when running an Oracle database on a Unix operating system, the size of the Unix data buffer cache and the amount of memory each process may use can be adjusted.

In practice these optimization measures are interrelated. Deteriorating Oracle database performance basically shows up as longer user response time: users have to wait a long time. The causes of performance degradation are varied, and sometimes several factors combine to degrade performance, so the database administrator needs fairly comprehensive computer knowledge and must be able to sense quickly where the main impact on database performance lies. In addition, a good database management tool is also very important for optimizing database performance.
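As an illustration of the partitioning and indexing decisions mentioned above, the following is a hypothetical sketch (the table and column names are made up and are not part of the original text): a frequently queried history table is range-partitioned by month and given a local index on the column most queries filter on.

-- Hypothetical example: range-partition a busy history table by month.
CREATE TABLE call_history (
    service_id  NUMBER        NOT NULL,
    start_time  DATE          NOT NULL,
    charge      NUMBER(10,2)
)
PARTITION BY RANGE (start_time) (
    PARTITION p200311 VALUES LESS THAN (TO_DATE('2003-12-01', 'YYYY-MM-DD')),
    PARTITION p200312 VALUES LESS THAN (TO_DATE('2004-01-01', 'YYYY-MM-DD')),
    PARTITION pmax    VALUES LESS THAN (MAXVALUE)
);

-- A local index is partitioned the same way as the table, so each partition
-- can be scanned or maintained independently.
CREATE INDEX ix_call_history_sid ON call_history (service_id) LOCAL;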

 


Common Oracle database performance optimization tools include:


Oracle online data dictionary. The online data dictionary reflects the dynamic operation of the database, which is helpful for adjusting database performance.

Operating system tools. For example, on the Unix operating system the vmstat and iostat commands show system-level memory usage and hard disk I/O. These tools help the administrator find out where system bottlenecks occur.

SQL trace facility. The SQL trace facility records how SQL statements execute; the administrator can use virtual tables (V$ views) to tune the instance and use the SQL trace files to tune application performance. The SQL trace output is written to an operating system file, and the administrator can format these files with the TKPROF utility.

Oracle Enterprise Manager (OEM). This is a graphical administration interface; with it, users can manage an Oracle database conveniently without having to remember complex database administration commands.

EXPLAIN PLAN - the SQL optimization command. This command helps developers write efficient SQL.
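For example, the SQL trace facility can be switched on for one session and the resulting trace file formatted with TKPROF (a minimal sketch; the trace file name and the user_dump_dest directory depend on the instance):

-- Turn on SQL trace for the current session; the trace file is written
-- to the directory pointed to by the user_dump_dest parameter.
ALTER SESSION SET SQL_TRACE = TRUE;

-- ... run the application SQL that is to be analyzed ...

ALTER SESSION SET SQL_TRACE = FALSE;

The trace file is then formatted at the operating system prompt, for example with tkprof ora_12345.trc report.txt sys=no (the file name here is only illustrative).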
System Performance Evaluation

Different types of information systems need attention on different database parameters. Database administrators need to consider different database parameters according to the type of their own information system.

Online transaction processing systems (OLTP). Information systems of this type generally perform a large number of insert and update operations; typical systems include civil aviation ticketing systems and bank savings systems. OLTP systems need to guarantee database concurrency, reliability and end-user response speed. For an Oracle database serving such a system, the following need to be considered: Are the database rollback segments sufficient? Does the database need indexes, clusters or hashing? Is the System Global Area (SGA) large enough? Are the SQL statements efficient?

Data warehouse systems (Data Warehousing). The main task of this kind of information system is to query massive amounts of data in Oracle in order to discover patterns in the data. For this type of Oracle database the administrator needs to focus on the following: Should B*-tree indexes or bitmap indexes be used? Should parallel query be used to improve query efficiency? Should processing be written as stored PL/SQL procedures or functions? If necessary, parallel query should be set up to improve database efficiency. (An illustrative sketch of a bitmap index and a parallel query follows below.)
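The following sketch (table and column names are invented, not from the original text) shows a bitmap index on a low-cardinality column of a warehouse fact table and a parallel aggregation query:

-- Bitmap index on a low-cardinality column of a fact table.
CREATE BITMAP INDEX ix_sales_region ON sales (region_code);

-- Parallel full scan for a large aggregation; the degree of 4 is arbitrary.
SELECT /*+ FULL(s) PARALLEL(s, 4) */ region_code, SUM(amount)
  FROM sales s
 GROUP BY region_code;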


Adjusting Parameters

CPU Parameters


The CPU is an important server resource. A server in good working condition shows CPU utilization above 90% at peak working hours. If CPU utilization is above 90% even during idle periods, the server is short of CPU resources; if CPU utilization is still very low at peak working hours, the server has ample CPU resources. Operating system commands show CPU usage: on a typical Unix server the sar -u command reports CPU utilization, and on an NT server the Performance Monitor shows CPU usage. In the v$sysstat data dictionary view, the database administrator can look at the statistic "CPU used by this session" for the CPU time used by the Oracle database, "OS User level CPU time" for the CPU time spent in operating system user mode, and "OS System call CPU time" for the CPU time spent in system calls; the operating system's total CPU time is the sum of user-mode and system-mode time. If the CPU time used by the Oracle database accounts for more than 90% of the total operating system CPU time, the server's CPU is basically being used by the Oracle database, which is reasonable; otherwise, other programs on the server are consuming too much CPU and the Oracle database cannot obtain enough CPU time.
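The statistics named above can be read directly from v$sysstat, for example as follows (a simple sketch; the two OS-level statistics are only populated on some platforms):

SELECT name, value
  FROM v$sysstat
 WHERE name IN ('CPU used by this session',
                'OS User level CPU time',
                'OS System call CPU time');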
Memory Parameter Adjustment

Memory parameter adjustment mainly concerns the Oracle database System Global Area (SGA). The SGA is mainly composed of three parts: the shared pool, the data buffer cache, and the log buffer.

The shared pool consists of two parts: the shared SQL area and the data dictionary cache. The shared SQL area stores users' SQL statements, and the data dictionary cache stores dynamic information about the running database.
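The current sizes of these SGA components can be checked, for example, with the following queries (a minimal sketch):

-- Overall SGA layout: fixed size, variable size, database buffers, redo buffers.
SELECT * FROM v$sga;

-- Breakdown of the shared pool and the other SGA memory areas.
SELECT pool, name, bytes FROM v$sgastat ORDER BY pool, name;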

Performance Tuning Case I

I. Check CPU utilization with the top command:
#top 
The following is the output of top:
last pid: 11225; load averages: 7.95, 6.63, 6.25 17:19:35
273 processes: 259 sleeping, 3 running, 5 zombie, 3 stopped, 3 on cpu
CPU states: 10.0% idle, 75.0% user, 15.0% kernel, 0.0% iowait, 0.0% swap
Memory: 8192M real, 4839M free, 2147M swap in use, 12G swap free
PID   USERNAME THR PRI NICE SIZE  RES   STATE TIME   CPU    COMMAND
10929 oracle   1   59  0    1048M 1022M cpu/6 2:52   21.59% oracle
11224 oracle   1   59  0    1047M 1018M run   0:03   4.22%  oracle
8800  oracle   1   59  0    1048M 1022M run   1:39   3.99%  oracle
4354  oracle   1   59  0    1049M 1023M cpu/4 0:28   3.46%  oracle
3537  oracle   1   59  0    1048M 1022M sleep 1:01   1.93%  oracle
29499 oracle   1   59  0    1048M 1022M sleep 30.0H  1.84%  oracle
11185 oracle   1   59  0    1047M 1020M sleep 0:01   0.74%  oracle
11225 wacos    1   44  0    2832K 1928K cpu/0 0:00   0.65%  top
9326  oracle   1   59  0    1047M 1020M sleep 0:58   0.50%  oracle
410   root     14  59  0    7048K 6896K run   76.3H  0.42%  picld
21363 oracle   1   59  0    1047M 1019M sleep 574:35 0.36%  oracle
10782 oracle   11  59  0    1052M 1024M sleep 749:05 0.28%  oracle
13415 oracle   1   59  0    1047M 1019M sleep 6:07   0.27%  oracle
5679  oracle   11  59  0    1052M 1026M sleep 79:23  0.19%  oracle
5477  oracle   258 59  0    1056M 1021M sleep 57:32  0.14%  oracle
II. Through analysis, find the SQL statement corresponding to the process consuming the most CPU:
SQL>set line 240 
SQL>set verify off 
SQL>column sid format 999 
SQL>column pid format 999 
SQL>column S_# format 999 
SQL>column username format A9 heading "ORA User" 
SQL>column program format a29 
SQL>column SQL format a60 
SQL>COLUMN OSname format a9 Heading "OS User" 
SQL>SELECT P.pid pid,S.sid sid,P.spid spid,S.username username, 
S.osuser osname,P.serial# S_#,P.terminal,P.program program, 
P.background,S.status,RTRIM(SUBSTR(a.sql_text, 1, 80)) SQL 
FROM v$process P, v$session S,v$sqlarea A WHERE P.addr = s.paddr AND S.sql_address = a.address (+) AND P.spid LIKE '%&1%'; 
Enter value for 1:10929 
The SQL statement eventually found is as follows:
SELECT NVL(SUM(RURALCHARGE), 0.00) AS Fee FROM LOCALUSAGE WHERE ServiceID = 219987 AND StartTime >= to_date('2003/12/30 13:24:20', 'YYYY/MM/DD HH24:MI:SS');

III. Check the partition indexes on the LOCALUSAGE table:
The following query shows that there are two partition indexes on the LOCALUSAGE table:
SQL> SELECT INDEX_NAME FROM USER_PART_INDEXES WHERE TABLE_NAME = 'LOCALUSAGE';

INDEX_NAME
--------------------
I_LOCALUSAGE_SID
UI_LOCALUSAGE_ST_SEQ

Analysis of the execution plan of this statement shows that it uses the UI_LOCALUSAGE_ST_SEQ index and does not use I_LOCALUSAGE_SID. Timing both access paths shows that the UI_LOCALUSAGE_ST_SEQ index is very inefficient for this query, returning in 2 minutes 36 seconds, while the I_LOCALUSAGE_SID index returns in about one second. Oracle chooses the UI_LOCALUSAGE_ST_SEQ index by default, which consumes a great deal of CPU and keeps CPU utilization high.

The following is the autotrace analysis:
SQL> connect wacos/oss
SQL> set autotrace on
SQL> set timing on
SQL> SELECT NVL(SUM(RURALCHARGE),0.00) AS Fee FROM LOCALUSAGE WHERE ServiceID=219987 and starttime >= to_date('2003/12/30 13:24:20','YYYY/MM/DD HH24:MI:SS');
FEE
----------
107.25

Elapsed: 00:02:36.19 (elapsed time: 2 minutes 36 seconds)

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=10 Card=1 Bytes=35)
1 0 SORT (AGGREGATE)
2 1 PARTITION RANGE (ALL)
3 2 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'LOCALUSAGE' (Cost=10 Card=10035 Bytes=351225)
4 3 INDEX (RANGE SCAN) OF 'UI_LOCALUSAGE_ST_SEQ' (UNIQUE) (Cost=2 Card=10035)

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
11000821 consistent gets
349601 physical reads
0 redo size
292 bytes sent via SQL*Net to client
359 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed

Use a HINT to force Oracle to use the I_LOCALUSAGE_SID index, then check the execution plan:
SQL>connect wacos/oss 
SQL>set autotrace on 
SQL>set timing on 
SQL> SELECT /*+ INDEX(LOCALUSAGE I_LOCALUSAGE_SID)*/ NVL(SUM(RURALCHARGE),0.00) AS Fee FROM LOCALUSAGE WHERE ServiceID=219987 and  starttime >= to_date('2003/12/30 13:24:20','YYYY/MM/DD HH24:MI:SS'); 
FEE
----------
107.25
Elapsed: 00:00:01.15 (elapsed time: about 1 second)
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=15 Card=1 Bytes=35)
1 0 SORT (AGGREGATE)
2 1 PARTITION RANGE (ALL)
3 2 TABLE ACCESS (BY LOCAL INDEX ROWID) OF 'LOCALUSAGE' (Cost=15 Card=10035 Bytes=351225)
4 3 INDEX (RANGE SCAN) OF 'I_LOCALUSAGE_SID' (NON-UNIQUE) (Cost=14 Card=10035)

Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
307 consistent gets
232 physical reads
0 redo size
292 bytes sent via SQL*Net to client
359 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
It is recommended that the developers adjust this statement so that it uses the I_LOCALUSAGE_SID index by default, or use a HINT in the statement to force the I_LOCALUSAGE_SID index.
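One assumed way to steer the optimizer toward the more selective index (not spelled out in the original case) is to refresh the statistics that the cost-based optimizer relies on, or simply to keep the hint in the application SQL:

-- Refresh optimizer statistics on the partitioned table and the filter column.
ANALYZE TABLE LOCALUSAGE COMPUTE STATISTICS;
ANALYZE TABLE LOCALUSAGE COMPUTE STATISTICS FOR COLUMNS ServiceID;

-- Or force the index explicitly, as shown in the autotrace test above.
SELECT /*+ INDEX(LOCALUSAGE I_LOCALUSAGE_SID) */ NVL(SUM(RURALCHARGE), 0.00) AS Fee
  FROM LOCALUSAGE
 WHERE ServiceID = 219987
   AND StartTime >= TO_DATE('2003/12/30 13:24:20', 'YYYY/MM/DD HH24:MI:SS');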


Case II 
I. Introduction

This article describes the process of analyzing and tuning a large Oracle database Web application in a Linux environment. It is divided into three parts: database analysis, database tuning and database monitoring. In each part, the specific operations are shown with a fairly detailed analysis.

II. System Overview

2.1 Database analysis: the system is a large Web system for a particular client. Several data tables grow at a high rate, and there are not many concurrent transactions.

2.2 Bottleneck analysis: the bottlenecks the system currently shows are relatively low efficiency for some queries and rapid growth of the archived logs. As the number of users grows, concurrency efficiency is also a potential problem. A comprehensive analysis of the database server is therefore needed, covering environment configuration, network configuration, database initialization parameters and the SQL statements.

2.2.1 Environment configuration. Hardware configuration: RAM: 4 GB; disk: 500 GB. Software configuration: OS: Red Hat Linux; database: Oracle 8.1.7 Enterprise Edition.
Analysis: the hardware configuration is fairly high and can support this large OLTP application well; Oracle performs better in a Unix/Linux environment than under Windows NT.

2.2.2 Database initialization parameters. Every Oracle database instance has a corresponding initialization parameter file initsid.ora. Certain parameters in the parameter file have a great impact on database performance. Some of the core parameters are: DB_BLOCK_BUFFERS, SHARED_POOL_SIZE, LARGE_POOL_SIZE, JAVA_POOL_SIZE, LOG_BUFFER, DB_FILE_MULTIBLOCK_READ_COUNT, SORT_AREA_SIZE, SHARED_POOL_RESERVED_SIZE. Analysis result: most parameter values are still the defaults set when the database was created and need to be adjusted.

2.2.3 Database logical structure. Following the OFA (Optimal Flexible Architecture) rules, database objects should be distinguished by object type and operation characteristics, and tablespaces should be planned according to the specific business.
Analysis: the tablespace organization is not reasonable; most of the data was created in the SYSTEM tablespace; table indexes do not have a dedicated tablespace, which is not conducive to management; user quota management and privilege management need to be adjusted.
2.2.4 Database physical structure. The physical structure of the database is mainly reflected in the planning of the disk layout, so that the impact of disk I/O on database performance is minimized.
Analysis: combined with the logical structure analysis, some data files need to be stored separately from other data files to minimize I/O contention, including: separating the business table tablespaces from the index tablespaces; separating the rollback tablespace from the business table tablespaces; separating the SYSTEM tablespace from the other tablespaces. The overall recovery goal of the system was also determined: for the production database, enable archived redo log mode; for the test database, back up data by logical export or cold backup.

2.2.5 Table structure analysis results: the basic table structure design is reasonable; the column types of some tables need to be adjusted; some tables have no statistics or have stale statistics, so Oracle does not use the indexes and falls back to full table scans; the foreign keys need adjustment.

2.2.6 SQL statement analysis results: the system has no overly complex queries, but some queries have a high cost; bulk data transfer could be done in a better way; the features of the database could be used more, writing processing logic as stored procedures or functions that run on the server and reduce network traffic; bind variables are not fully used, so the shared pool is not used efficiently; a large number of DML operations in a short time causes rollback segments to surge and the redo log to switch frequently, which needs to be optimized.

2.2.7 Network analysis results: the database can be moved to a dedicated database server to obtain better performance.
2.3 Summary. The conclusion from the above analysis is that the performance of the database has not been fully exploited; many aspects can be adjusted to improve performance.

III. Database Tuning
Principle: adjust the items mentioned in the data analysis section one by one, and test the program after each adjustment.

3.1 Parameter adjustment. Description: the server has 4 GB of memory; since the server also runs other tasks, about 10% of it can be allocated to the SGA according to the actual situation (the SGA is generally 20% to 40% of physical memory and should not exceed 50% of total physical memory). Operation: calculate SGA = (db_block_buffers * db_block_size) + shared_pool_size + large_pool_size + java_pool_size + log_buffer + 1 MB ≈ 0.1 * 4 GB = 0.4 GB = 410 MB.

3.1.1 Shared pool. Before adjustment: shared_pool_size = 31457280 (about 31 MB); after adjustment: shared_pool_size = 209715200 (200 MB). Description: this value cannot be too large; it should be measured against the complexity of the actual business SQL. If the business does not use much dynamic SQL or the SQL is not complicated, it can be reduced; otherwise the shared pool accumulates fragments and causes frequent memory scheduling at the OS level, seriously affecting normal system operation. Monitoring: run the statspack or utlstat package periodically and adjust according to the monitoring data obtained. Monitoring result: the generated statspack analysis report shows that the 200 MB shared pool setting is too large and can be reduced by about half.

3.1.2 Buffer cache (the data block buffer in the SGA):
Buffer cache capacity = number of blocks in the buffer cache (DB_BLOCK_BUFFERS) * size of each block (DB_BLOCK_SIZE). DB_BLOCK_SIZE = 8 KB; this parameter is set by default when the database is installed.
Before adjustment: db_block_buffers = 2048; after adjustment: db_block_buffers = 25600 (200 MB / 8 KB).
Description: the buffer cache has a great impact on performance, because all data accessed by user processes goes through the buffer cache. After the adjustment is complete, check the hit ratio periodically; if the hit ratio is below 90%, this parameter needs to be increased.
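The buffer cache hit ratio mentioned above can be computed from v$sysstat, for example (a commonly used sketch):

SELECT 1 - (phy.value / (db.value + con.value)) AS buffer_hit_ratio
  FROM v$sysstat phy, v$sysstat db, v$sysstat con
 WHERE phy.name = 'physical reads'
   AND db.name  = 'db block gets'
   AND con.name = 'consistent gets';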
3.1.3 Log buffer (LOG_BUFFER). Not adjusted; the default parameter is kept. Monitoring: if messages are found in the alert file saying that redo log file space cannot be allocated, increasing the redo log file size or the log buffer size may solve the problem.

3.1.4 Large pool (LARGE_POOL_SIZE). The large pool allocates a large heap storage pool, which can be used as session memory for the multi-threaded server (MTS), as message buffers for parallel execution, and as disk I/O buffers for RMAN backup and recovery. Before adjustment: large_pool_size = 614400; after adjustment: large_pool_size = 1048576 (derived from the formula). Description: considering the growing volume of data, RMAN will be needed for backup and recovery once the data reaches a certain size, so the value can be increased appropriately.

3.1.5 Java pool. Not adjusted; the default parameter is kept.

3.1.6 DB_FILE_MULTIBLOCK_READ_COUNT. Before adjustment: 8 (default); after adjustment: 16 (suitable for a medium-sized system, i.e. an OLTP system whose transaction volume is not large). Description: this parameter influences the SQL query strategy. If it is set relatively large, it may cause Oracle to use full table scans instead of indexes.
3.1.7 Sort area (SORT_AREA_SIZE). Before adjustment: 65536 (64 KB); after adjustment: 1048576 (1 MB). Description: Oracle allocates memory for sort operations based on the SORT_AREA_SIZE value; if the available memory is not enough, temporary segments are used. Sort operations should therefore be done in memory as far as possible. The system performs a lot of sorting during data exchange, so this parameter deserves a somewhat larger setting. Monitoring: run statspack regularly and adjust according to the monitoring data.
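Whether SORT_AREA_SIZE is large enough can be judged, for example, from the ratio of disk sorts to memory sorts (a minimal sketch):

SELECT name, value
  FROM v$sysstat
 WHERE name IN ('sorts (memory)', 'sorts (disk)');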
3.1.8 SHARED_POOL_RESERVED_SIZE. The default value is 5% of SHARED_POOL_SIZE; after adjustment: 10485760 (200 MB * 5% = 10 MB). Description: after the shared pool has been in use for a long time, fragmentation appears; in order to flush the shared pool as rarely as possible, some memory space needs to be reserved, otherwise the CPU scheduling burden increases and other processes are affected.

3.2 Database logical structure adjustment. Objectives: create a new data tablespace USERDATA; clean up the SYSTEM tablespace, leaving only the basic data dictionary; transfer the specific user and set the new data tablespace as the default tablespace for ordinary users; move indexes to a dedicated tablespace.

3.2.1 New data tablespace USERDATA. Considering the 2 GB file size limitation on the UNIX filesystem (which may cause management problems), each data file is set to 2 GB; based on the size of the logical backup file of the whole database, the initial size is set to 6 GB. (A sketch follows below.)
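A sketch of the USERDATA tablespace creation under the 2 GB file-size constraint described above (the file names and paths are illustrative, not from the original text):

-- Three 2 GB data files provide the initial 6 GB; more files can be added later.
CREATE TABLESPACE USERDATA
    DATAFILE '/u02/oradata/orcl/userdata01.dbf' SIZE 2000M,
             '/u03/oradata/orcl/userdata02.dbf' SIZE 2000M,
             '/u04/oradata/orcl/userdata03.dbf' SIZE 2000M
    DEFAULT STORAGE (INITIAL 128K NEXT 128K PCTINCREASE 0);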
3.2.2 Transferring the specific user. Most access to the existing system tables uses the database built-in user scott. Considering security and user management, there are two options: keep the existing user and adjust the quota allocation and corresponding privileges; or create a new user and re-assign privileges. The improved scheme 1 was adopted.
STEP 1: revoke roles and privileges: Revoke resource from scott
STEP 2: cancel the user's quota on the SYSTEM tablespace: Alter user scott quota 0 on system
STEP 3: change the user's default tablespace to USERDATA: Alter user scott default tablespace USERDATA
STEP 4: adjust disk quotas and re-grant privileges:
Alter user scott quota unlimited on USERDATA
grant create procedure to scott
grant create trigger to scott
grant create type to scott
Description: it is best not to grant the UNLIMITED TABLESPACE system privilege to ordinary users. At the same time, the default database user DBSNMP has this privilege; from a security point of view, the privilege should be revoked from it.
3.2.3 Data transfer. There are three options to choose from:
1. Use a SQL statement to move the table: alter table TableName move tablespace TBSNAME
Description: because the physical storage location of the table has changed, the indexes on the table need to be rebuilt.
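For example (the schema, index and tablespace names are illustrative), after moving a table each of its indexes can be rebuilt into the dedicated index tablespace:

ALTER TABLE scott.T1 MOVE TABLESPACE USERDATA;
ALTER INDEX scott.IX_T1_SEPID REBUILD TABLESPACE INDX;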
2. Object-based export/import.
Objective: move all objects owned by the user scott (tables, indexes, constraints, triggers, etc.) to the USERDATA tablespace.
STEP 1: export
exp userid=system/manager parfile=exp_scott.par file=exp_scott.dmp log=exp_scott.log owner=(scott)
Parameter file exp_scott.par: BUFFER=4096000 COMPRESS=Y GRANTS=Y INDEXES=N ROWS=Y CONSTRAINTS=N DIRECT=Y
Note: not exporting indexes and constraints, using the direct path and increasing the buffer can greatly increase the export speed; the COMPRESS=Y option is used to consolidate the fragmented extents of the original tablespace; estimate the file size before the export and split the file if it would exceed 2 GB.
STEP 2: import
imp userid=system/manager parfile=imp_scott.par file=exp_scott.dmp log=imp_scott.log fromuser=(scott) touser=(scott)
Parameter file imp_scott.par: BUFFER=4096000 COMMIT=Y GRANTS=Y INDEXES=N IGNORE=Y ROWS=Y
Description: when the amount of data is not large, the import speed is acceptable, but as the amount of data grows the import becomes slower and slower; RMAN or cold/hot backup can then be considered for backing up the data.
3. CTAS statement: CREATE TABLE TableName TABLESPACE USERDATA AS SELECT * FROM ...
Description: in archived redo log mode, using the NOLOGGING option can significantly reduce the redo log generated. After the data transfer is complete, delete the old table and create the indexes.

3.2.4 Index transfer. Objective: allocating a dedicated tablespace for indexes reduces disk I/O contention and makes the logical structure of the database clearer. Implementation: specify the tablespace used by an index with the TABLESPACE clause of the CREATE INDEX statement; when creating a primary key on a table, use the USING INDEX TABLESPACE clause to specify the index tablespace.
For example:
Create index IX_T1_SepID ON T1 (SepID) tablespace INDX storage (initial 40K next 40K pctincrease 1);
Alter table T1 add constraint PK_T1 primary key (ID1, ID2) using index tablespace INDX storage (initial 40K next 40K pctincrease 1);
3.2.5 Cleaning up the SYSTEM tablespace. Purpose: the SYSTEM tablespace should hold only the data dictionary. Implementation: after transferring the data and indexes, clean up temporary tables, and finally coalesce the free extents:
alter tablespace SYSTEM default storage (pctincrease 1)
alter tablespace SYSTEM coalesce

3.3 Performance tuning

3.3.1 Execution plans. When a SQL statement has a performance bottleneck, first check its query plan. If the query plan is not optimal, check and adjust the following points one by one: have statistics been gathered on the table or index; when were the statistics last gathered; how much have the active tables grown since the last statistics; is an index being used; is the index access the best access strategy. Compute full statistics for the two active tables T1 and T2:
ANALYZE TABLE T1 COMPUTE STATISTICS
ANALYZE TABLE T2 COMPUTE STATISTICS
For other tables, compute statistics selectively on the indexes:
ANALYZE TABLE T3 COMPUTE STATISTICS FOR ALL INDEXES
Compute statistics for a specified column with a specified SIZE:
ANALYZE TABLE T4 COMPUTE STATISTICS FOR COLUMNS COLX SIZE XXX
Afterwards, check the USER_INDEXES and USER_TAB_COLUMNS views for the relevant information. Then some simple SQL tests were made on table T1, which has a primary key on col1 + col2 and a non-unique index on col3:
Select count (*) from T1
Select col1, col2, col3 from T1 where col1 = 1 and col2 = ... and T1.SepID = :iLowVal '||' and '|| sColumn ||' <=:
A major bottleneck of this SQL statement is access to the col5 column, because that column has only two values (1 and 0) and no index was built on it. Try building a composite index (a bitmap index should not be built on this column alone; bitmap indexes are typically used in OLAP systems):
create index ix_bind_T1 on T1 (Sep_ID, col5) tablespace indx storage (initial 64K NEXT 64K PCTINCREASE 1);
Gather statistics:
analyze table T1 compute statistics for columns Sep_ID size 8;
Finally, re-execute the query. The query plan:
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=6)
1 0 SORT (AGGREGATE)
2 1 INDEX (RANGE SCAN) OF 'IX_BIND_T1' (NON-UNIQUE) (Cost=2 Card=83672 Bytes=502032)
Statistics:
0 recursive calls
0 db block gets
392 consistent gets
391 physical reads
0 redo size
293 bytes sent via SQL*Net to client
421 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
1 rows processed
It can be seen that the index improves query efficiency considerably. For complex SQL, follow the same steps; for large queries, obtain the query plan of each subquery and identify the performance bottlenecks from it. It is important to consider indexes from the perspective of the whole system and to create and use indexes reasonably.

3.3.2 DML operations. Besides queries, the system also has other DML statements, including inserts, updates and deletes. For an OLTP system, optimizing these operations is critical to improving performance. Insert operations: when importing large amounts of data, efficiency can be optimized from the following aspects:
For a batch import operation, consider these factors: 1) preferably do not run it during the system's busy period; 2) prepare a large rollback segment; 3) disable the indexes and constraints on the target table; 4) do not generate redo log for operations on the target table; 5) use the Insert /*+ APPEND */ direct-path insert mode. Points 3 and 4 above are all the more important when running in archive log mode, because the import would otherwise produce a large number of archived redo logs, quite possibly causing the operation to fail for lack of disk space; back up the database immediately after the operation completes. When importing with the IMP command, consider these factors: 1) increase the buffer; 2) temporarily remove the indexes and constraints on the target tables and re-create them after the import completes; 3) commit in batches. Delete operations: for delete operations, consider these factors: 1) if possible, use a dedicated private rollback segment whose size is about 1.5 to 2 times the maximum amount of data; 2) commit in batches (a sketch follows after this list); 3) distinguish deletions done for maintenance from deletions required by the application: if it is regular data maintenance, create a JOB that runs automatically when the database is not busy (for example at night); if the deletion is required by the application, make sure the operation completes as quickly as possible; 4) estimate the amount of data to be deleted; if it is close to the full table, the remaining data can be exported with SQL, the table cleared with the TRUNCATE TABLE command, and the data finally re-imported.
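A minimal sketch of the batch-commit delete mentioned in point 2 of the delete list above (the table name, filter condition and batch size are hypothetical):

-- Delete in batches of 10000 rows, committing after each batch so that
-- rollback segment usage stays bounded.
BEGIN
    LOOP
        DELETE FROM t_history
         WHERE create_time < SYSDATE - 90
           AND ROWNUM <= 10000;
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;
    END LOOP;
    COMMIT;
END;
/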
IV. Database Monitoring
4.1 STATSPACK

4.1.1 Overview. STATSPACK is one of the performance monitoring tools provided by Oracle. It can provide a comprehensive set of metrics on how the database ran over a period of time, from which the biggest database performance bottlenecks can be detected.

4.1.2 TOP WAIT EVENTS. TOP WAIT EVENTS is the core section of the generated STATSPACK report; it lists the events the system waited on most. See the example below:

Event                        Waits   Wait Time (cs)  % Total Wt Time
db file scattered read       36,159  694             43.38
db file sequential read       9,900  296             18.50
log file parallel write       1,620  255             15.94
control file parallel write   1,069  198             12.38
log buffer space                 50   73              4.56

Analysis: db file scattered read: too many full table scans are being used instead of indexes;
db file sequential read: too many single-block read operations; log file parallel write and control file parallel write: related to disk I/O; log buffer space: the log buffer size needs to be adjusted.

4.1.2.1 db file scattered read. 1. Check the SQL statements (setting the snapshot level to 5 captures the information needed to monitor SQL statements) to see whether any SQL statement performs too many physical reads. Find the relevant part of the report:

Statistic                        Total    Per Second  Per Trans
physical reads                   475,420  150.4       844.4
physical reads direct            132      0.0         0.2
physical writes                  4,381    1.4         7.8
physical writes direct           132      0.0         0.2
physical writes non checkpoint   2,748    0.9         4.9
From the above data it can be concluded that, since the physical reads value is relatively large, there are certainly SQL statements that have not been well optimized. These SQL statements were found in the SQL section of the report:

Physical Reads  Executions  Reads per Exec  % Total  Hash Value
73,537          4           18,384.3        99.6     1570063194
select count (*) from T1
41,098          6           6,849.7         8.6      1127261530
DELETE /*+NESTED_TABLE_SET_REFS+*/ FROM T2_TMP
40,432          6           6,738.7         8.5      3300685046
Delete from T2_TMP
40,363          6           6,727.2         8.5      2027128933
INSERT INTO T2 SELECT * FROM T2_TMP
40,306          6           6,717.7         8.5      2047057492
delete from T2 where (col1, col2) in (select col1, col2 from T2_TMP) ...

Analysis: as can be seen, this is a typical batch data import operation: duplicate data is removed according to conditions, the data is transferred from the temporary table to the target table, and finally the temporary table is emptied. The DELETE FROM statement takes up a lot of rollback space and generates a large amount of redo log; using the TRUNCATE TABLE statement would be much more efficient. The cost of the INSERT INTO operation is relatively high because the target table has indexes; the indexes must be maintained when new data is inserted, generating a lot of redo log. If a direct-path insert is used and the index is set to NOLOGGING mode, the operation will be much faster and the redo log greatly reduced:
ALTER INDEX IX_DEST_TABLE NOLOGGING;
INSERT /*+ APPEND */ INTO ...;
In ARCHIVELOG mode, if an operation does not write redo log, the operation is unrecoverable, so if the above operations are used the database must be backed up as soon as the operation completes; in addition, these operations should be finished as quickly as possible. For the other queries, analyze their query plans and optimize the SQL statements.
2. Check whether the system's I/O subsystem has a bottleneck; see the TABLESPACE IO STATS and FILE IO STATS sections of the report. The specific indicator columns of the two sections are:
Read: Reads, Av Reads/s, Av Rd(ms), Av Blks/Rd
Write: Writes, Av Writes/s, Buffer Waits, Av Buf Wt(ms)
The main figure of interest is the average read time; if this value is too large, the disk itself has an I/O performance bottleneck.

4.1.3 Buffer Hit Ratio & Shared Pool. Besides TOP WAIT EVENTS, the statistics in the report about the Buffer (and Library) Hit ratios and the Shared Pool should also be watched.
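Taking the snapshots and producing the report can be done as follows (a sketch; the package and script names are those of the STATSPACK shipped with Oracle 8.1.7 and may differ in other releases):

-- Take a snapshot at the start and at the end of the period to be analyzed.
SQL> EXECUTE statspack.snap;

-- Generate the report between two snapshot IDs (the script prompts for them).
SQL> @?/rdbms/admin/spreport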

Oracle Database Optimization

Abstract: This article proposes a method for optimizing Oracle databases. The execution of an Oracle SQL statement can be divided into three steps: parse (Parse), execute (Execute) and fetch (Fetch). The method improves Oracle database performance by optimizing these three stages of SQL statement processing.
Keywords: database, scan, multi-table join, subquery
1 How to Optimize Parse

1.1 Parse processing steps. When a SQL statement is processed: 1) compute the hash value of the statement; 2) is this hash value the same as that of a statement already in the shared pool? 3) is there a statement in the shared pool that matches this statement exactly, character for character? 4) prepare the SQL statement for execution; 5) create space for the new statement in the shared pool; 6) store the statement in the shared pool; 7) modify the shared pool map to record the statement's hash value and its position in the shared pool; 8) execute the prepared SQL statement. Ideally a statement needs only steps 1, 2, 3 and 8. A statement that does not pass the test in step 3 has to be processed with steps 1 through 8; a SQL statement that goes through only steps 1, 2, 3 and 8 is therefore more efficient than one that goes through steps 1-8.

1.2 Reuse SQL statements in the shared pool. When a SQL statement is passed to Oracle for processing, the secret is to reuse statements already in the shared pool rather than have Oracle prepare a new statement. As suggested above, if Oracle receives a statement identical to one already in the shared pool, it reuses the statement in the shared pool. Oracle also provides the ability to store code in the database. When an application starts running, the code (compiled PL/SQL) is read from the database and passed to the shared pool for processing like any other statement; code taken from the database is already compiled and resides in the shared pool. Application systems can be designed using program code stored in the database: the transaction processing and checking routines common to all applications, and the main processing programs of existing applications, can be turned into program code stored in the database. Stored code in Oracle can be implemented as procedures, packages, functions and triggers.

2 How to Optimize Execute and Fetch

2.1 Avoid unplanned full table scans. A full table scan reads all data from the table sequentially, regardless of whether the data is relevant to the query. There are two good reasons to avoid unnecessary full table scans: 1) a full table scan is not selective; 2) data read by a full table scan is quickly aged out of the SGA data block buffer cache (unless the table is a "cached" table). Under rule-based optimization, a full table scan is used on a table if any of the following occurs in the statement: 1) the table has no index; 2) no limiting conditions on the rows are defined (for example, there is no WHERE clause); 3) no limiting condition on the rows corresponds to the leading column of any index on the table. For example, with a three-column composite index created on City-State-Zip, a query whose conditions mention only State cannot use the index, because State is not the leading column of the index. 4) A limiting condition is placed on the leading column of an index, but the condition is an OR'ed, IS NULL, IS NOT NULL or not-equal comparison. For example, if an index exists on the City column, the index will not be used under any of the following conditions:
Where city is null
Where city is not null
Where city != 'Liaoning'
5) A limiting condition is defined on the leading column of an index, but the condition uses an expression. For example, if the City column is indexed, the condition Where City = 'liaoning' can use the index, but the condition Where UPPER(City) = 'liaoning' cannot, because the UPPER function is applied to the City column. If the City column is concatenated with a text string, the index is not used either.
For example, if the condition is Where City || 'x' like 'liaoning%', the index on the City column is not used. 6) A limiting condition is placed on the leading column of an index, but the condition uses a LIKE operation whose value starts with '%' or whose value is a bind variable. For example, the index will not be used in either of the following cases:
Where City like '%aonin%'
Where City like :City_Bind_Variable
If the table is small or the indexed column is not selective, the cost-based optimizer may still decide to use a full table scan.

2.2 Index selectivity. The selectivity of an index is the ratio of the number of distinct values in the indexed column to the number of records in the table. If the table has 1000 records and the indexed column has 950 distinct values, the selectivity of the index is 950/1000, or 0.95. The best possible selectivity is 1.0. A unique index on a non-null column always has a selectivity of 1.0. With cost-based optimization, the optimizer should not use indexes with poor selectivity.

2.3 Managing multi-table joins. Oracle provides three join operations: NESTED LOOPS, HASH JOIN and MERGE JOIN. MERGE JOIN is a set operation: it does not return any record to the next operation until all rows have been processed. NESTED LOOPS and HASH JOIN are row operations, so the first records are returned to the next operation quickly. For each join method, certain steps must be taken to obtain the best join performance. If join operations are not properly optimized, the time required for the join may grow exponentially as the tables grow.

2.4 Managing SQL statements that contain views. If a query contains a view, the optimizer has two ways to execute the query: resolve the view first and then execute the query, or integrate the view text into the query. If the view is resolved first, its entire result set is produced first and the rest of the query is applied to it as a filter. Resolving the view first can degrade performance, depending on the relative sizes of the tables involved. If the view is integrated into the query, the query's conditions can also be applied inside the view and a smaller result set can be used. However, in some cases query performance can be improved by keeping the view's set operations separate. If a view contains set operations (such as GROUP BY, SUM, COUNT or DISTINCT), the view cannot be integrated into the query; views that do not use grouping or set operations can be integrated into the larger query.

2.5 Subquery optimization. Subqueries can raise some problems of their own. The potential problems are as follows: the subquery may be executed (much as a view with grouping functions is executed) before the rest of the query is executed;
the subquery may need specific hints, but those hints are not directly related to the query that calls the subquery; several different subqueries might be rewritten so that they execute as a single query; if NOT IN or NOT EXISTS clauses are used, the subquery may not be executed in the most efficient way. 1) When subqueries are executed: if a query contains a subquery, the optimizer has two ways to complete it: complete the subquery first and then the query (the "view" method), or integrate the subquery into the query (the "join" method). If the subquery is resolved first, its entire result set is computed first and used, together with the rest of the query conditions, as a filter. If the subquery is not used as an existence check, the "join" approach usually performs better than the "view" approach. If the subquery contains set operations such as GROUP BY, SUM or DISTINCT, it cannot be integrated into the rest of the query; a non-integrated subquery limits the options available to the optimizer. 2) How to combine subqueries: a query can contain multiple subqueries; the more subqueries are used, the harder it is to integrate them into, or rewrite them as joins in, the larger query. Since multiple subqueries make integration difficult, combine multiple subqueries wherever possible. 3) How to perform existence checks: sometimes a subquery returns no rows (records) but only checks the validity of data. Logic that checks for the presence or absence of related tables or records is called an existence check. The EXISTS and NOT EXISTS clauses can be used to improve the performance of existence checks.

2.6 Managing access to very large tables. As a table grows to be significantly larger than the SGA data block buffer cache, queries against the table need to be optimized from a different angle. 1) When a table and its indexes are small, a high degree of data sharing is possible in the SGA: multiple users reading the table or performing range scans of its index can repeatedly reuse the same blocks. As the table grows, its indexes grow too. Once the table and its indexes grow larger than the space provided to the SGA, the probability that a range scan will find the next required row in the SGA shrinks, and eventually every logical read requires a physical read. The optimization methods chosen for very large tables therefore focus on special indexing techniques and on keeping related data together. 2) Managing data proximity: when accessing a very large table, if you want to keep using indexes, you should pay attention to data proximity, i.e. the physical closeness of logically related records. To maximize data proximity, insert records into the table in order, sorted on the columns usually used in range scans of the table. 3) Avoid unhelpful index scans: when an index scan is used against a large table, you cannot assume that it will perform better than a full table scan. An index range scan performs better only when it accesses a small part of the table; an index range scan followed by access to the table by ROWID may perform poorly. As the table grows far beyond the data block buffer cache, the balance between full table scan and index scan eventually tips. 4) Creating fully indexed tables: if the data in the table is fairly stable, fully indexing the table can be useful.
Create a composite index that includes all the columns usually selected by queries. During queries, the index can then provide all the data the query requires without any table access. 5) The parallel query option: a database task, such as a SELECT statement, can be divided into units of work that are carried out simultaneously by multiple Oracle processes. This ability to have a single database query processed transparently by multiple coordinated processes is known as the parallel query option (PQO). The parallel option invokes multiple processes in order to use idle system resources and reduce the time required to complete a task. It does not reduce the amount of resources required for processing; it spreads the task across multiple CPUs. To get the maximum benefit from the parallel option, the CPUs and the disk I/O should not already be running at full capacity. Because the aim of parallelism is to have more CPUs and disks participate in processing a database command at the same time, a system short of CPU or I/O resources cannot benefit from the parallel option.

2.7 Using UNION ALL instead of UNION. UNION is the most common set operation; it links multiple sets of records into a single set. UNION is mathematically defined to return a single set of records with no duplicate rows, so whenever result sets are combined, Oracle returns only distinct records. Using UNION in a SQL statement forces Oracle to remove the duplicate records; Oracle does this with a SORT UNIQUE operation, similar to the operation performed when a DISTINCT clause is used. UNION ALL allows duplicates; it does not need the SORT UNIQUE operation and therefore saves that overhead. UNION ALL is a row operation: records are returned to the user as soon as they become available. UNION, which includes the SORT UNIQUE set operation, returns no records until the entire set of records has been sorted. When a UNION ALL operation produces a large result set, the fact that records are returned to the application without any sorting means that the first rows are retrieved faster, and in many cases the first rows can be returned before the whole operation completes. In some cases UNION ALL and UNION do not return the same results; if, in the application environment, the result set contains no duplicate records, UNION can be converted to UNION ALL.

2.8 Avoid PL/SQL function calls in SQL. With the increasing use of PL/SQL, many users try to benefit from PL/SQL functions to produce reusable code in SQL. One way to force the repeated use of a PL/SQL function is to call it in a SQL statement. For example, a function can be created to convert international currencies to US dollars; call this function US$. An example:
select transaction_type, US$(amount, currency)
from international_transaction
where US$(amount, currency) > 1000;
The preceding SQL statement does not perform as well as expected. In testing it produced the same result as, but was several times slower than, the following SQL statement:
select transaction_type, amount * exchange_rate US$
from exchange_rate er, international_transaction it
where er.currency = it.currency
and amount * exchange_rate > 1000;
The response times differ because, when PL/SQL and SQL are mixed, Oracle uses a different mechanism.
When a PL/SQL function is embedded in a SQL query, Oracle splits the call into two parts at execution time: the SQL statement, with the function call replaced by a bind variable whose value is unknown, and an anonymous PL/SQL block for each function call:
select transaction_type, :a1 from international_transaction where :a1 > 1000
BEGIN :a1 := US$(:amount, :currency); END;
For each row of the international_transaction table, the anonymous block shown above is executed twice. These anonymous block calls lead to a sharp increase in query response time. PL/SQL function calls in SQL statements should therefore be avoided.
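Related to the shared pool reuse discussed in section 1.2, a statement is reused only when its text matches exactly, which is why bind variables matter. A simple illustrative sketch (the column name transaction_id is made up):

-- Literal values make each statement text different, so each one is parsed
-- and stored in the shared pool separately:
SELECT amount FROM international_transaction WHERE transaction_id = 1001;
SELECT amount FROM international_transaction WHERE transaction_id = 1002;

-- With a bind variable the text is identical, so the parsed form already in
-- the shared pool is reused (SQL*Plus syntax):
VARIABLE txn_id NUMBER
EXECUTE :txn_id := 1001;
SELECT amount FROM international_transaction WHERE transaction_id = :txn_id;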

Origin blog.csdn.net/Aria_Miazzy/article/details/93204208