Oracle Database Performance Management: IO Analysis


This article was originally titled "Analysis Approach and Case Studies for High IO on Oracle Database Servers". It covers:

1. Abnormally busy IO exceeding the throughput limit of the HBA card ports, causing stalls
2. A discerning eye - the server disks are busy; who is responsible?
3. Keep in mind - the Oracle DBA's standard for judging IO performance problems
4. The right tool - an Oracle tool you must master for IO problems
5. Not hard to explain - the Oracle database in a few words
6. A worked example - illustrating how Oracle works
7. One diagram - summarizing the IO characteristics of Oracle
8. How to fix it - what invalid IO is and how to eliminate it

Foreword (by Yuanbang)
While providing third-party Oracle services for data centers, I have worked with many system/storage administrators (SAs) and found that many of them lack sufficient understanding of the Oracle database. As a result, when a problem has to be handled jointly, everyone tends to talk past each other. That gave me the idea of writing this series of articles. The intention is to explain, in plain language, the theory behind some common problems, supplemented by a few practical cases, in the hope that it helps you in your future work.

Closer to home: on servers where an Oracle database is deployed, all of us have, to a greater or lesser degree, run into situations like the following:

  1. The business system runs slowly. As a system administrator you check system resources, including IO. The DBA (database administrator) reports that IO response time is very slow, exceeding 30 milliseconds, and asks for it to be resolved. Yet the storage administrator finds no hot disks; the system's IO volume is simply very large, and apart from spreading the data over more RAID groups or moving to higher-end storage, there seems to be no good option;
  2. Using iostat and sar -d, we may observe alarming symptoms such as very busy disks, high IOPS, high read/write volume per second, and high HBA-card traffic;
  3. Long IO response time: is it the cause of the slow business, or the result?
  4. High IOPS and large IO read/write volume: cause or result?
  5. Apart from hardware expansion or upgrade, is there really no other solution?
  6. How do we identify the source of IO on an Oracle server, judge whether that IO is valid, and eliminate invalid IO?
  7. As a system or storage administrator, what simple database skills should you master so you are not left passive when IO problems occur?
  8. What standard does an Oracle DBA use to judge whether IO has a performance problem?
  9. What are the IO characteristics of the Oracle database? Which IO is most critical and must have guaranteed performance?
    We will address these questions through interleaved theory and practical cases, and hope this inspires you.
    This article is the first in the series. Note that due to limited space, some content that occurred in practice but is not closely related to this topic, such as UNDO and checkpoints, is temporarily omitted.
    Also, since readers of the AIX expert club may be mostly system/storage administrators, some introductory content is included; experienced Oracle DBAs can skip straight to the case-study sections. For readers who have not actually worked as a DBA, Oracle may seem a little difficult, but with patience and some active effort it is not hard to understand.
    A discerning eye - the server disks are busy; who is responsible?

    When a problem occurs, generate an AWR report covering the problem period, download the corresponding HTML file to your PC, open it with a browser such as IE, and find the "SQL ordered by Reads" section, shown below.

    [Figure: "SQL ordered by Reads" section of the AWR report]

  1. The top SQL statement accounts for 99.79% of the IO of the entire database server.
  2. It was executed 8 times, each execution reading 2,129,053 blocks. A block is 8K, so each execution of this SQL generates 2,129,053 * 8K ≈ 16.24G of IO.

Knowledge point:
A BLOCK is the smallest allocation unit of an Oracle data file, similar to a PP within an LV. Data is stored in blocks; one block can hold tens to hundreds of rows of user table data.
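The per-execution arithmetic above is easy to check. A minimal Python sketch, using the figures from this AWR report:

```python
# Per-execution IO of the top SQL from the "SQL ordered by Reads" section.
BLOCK_BYTES = 8 * 1024            # an 8K Oracle block
blocks_per_exec = 2_129_053       # physical reads per execution, from the report

gb_per_exec = blocks_per_exec * BLOCK_BYTES / 1024**3
print(round(gb_per_exec, 2))      # 16.24
```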

Simple, isn't it?
Of course, you can also use operating-system commands to see the per-process IO distribution. On Linux, for example, pidstat -d shows which processes consume the most IO.
The shortcomings of viewing per-process IO from the OS are obvious, though. Worse, it means you are still treating the database as a black box: what is that process doing? Why is its IO so high?
We need to keep digging.
Keep in mind - the Oracle DBA's standard for judging IO performance problems
Knowledge point:
Generally speaking, a single-IO response time within 20 milliseconds is acceptable; good performance is under 10 milliseconds, and the lower the better. A single-IO response time above 20 milliseconds can be considered poor and needs tuning. Note that for files with only single-digit IO counts, IO above 20 milliseconds is also acceptable, because such files are unlikely to be cached at the storage level.
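These thresholds are easy to encode. A small illustrative helper (the function name and return strings are ours, not any Oracle standard):

```python
def classify_io_latency(ms: float) -> str:
    """Classify a single-IO response time per the rule of thumb above."""
    if ms < 10:
        return "good"
    if ms <= 20:
        return "acceptable"
    return "poor, needs tuning"

print(classify_io_latency(0.01))  # good
print(classify_io_latency(42.0))  # poor, needs tuning
```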

Both the OS and the database AWR report can be used to determine whether IO is a problem; the OS method is recommended.

Operating system method: in the output of sar -d 2 10, the sum of the avwait and avserv columns is the IO response time (AIX environment), in milliseconds. Linux differs: there, the IO response time is in the await column.
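As a sketch, the platform difference can be captured like this (the function and the sample numbers are ours; on AIX pass both columns, on Linux pass the await value alone):

```python
def io_response_ms(avwait_ms: float, avserv_ms: float = 0.0) -> float:
    """AIX sar -d: response time is the sum of the avwait and avserv columns.
    On Linux, the await column already holds the full response time, so
    pass that value and leave avserv_ms at its default of 0."""
    return avwait_ms + avserv_ms

# Hypothetical AIX reading in the spirit of the hdisk4 case below:
print(io_response_ms(3990.0, 12.0))  # 4002.0 ms, far beyond 20 ms
```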
[Figure: sar -d output]

It can be seen that the single-IO response time on hdisk4 reached over 4000 milliseconds and 2000 milliseconds, far above 20 milliseconds. IO performance has reached an intolerable level; analyze as soon as possible whether the storage cache has been disabled, whether a disk has failed, whether there is a link problem, and so on.

Database AWR report method
In the figure below, Av Rd(ms) is the number of milliseconds per single read, i.e. the single-IO response time. Here it is 0.01 milliseconds, far below 20 milliseconds: IO performance is excellent! (To achieve this, the reads are usually being served from the file system cache.)

[Figure: file IO statistics with Av Rd(ms) = 0.01]

In the next figure, Av Rd(ms) again shows the single-IO response time. The IO response time of most data files exceeds 40 milliseconds, far above 20 milliseconds: IO performance is poor. Before expanding or upgrading the storage, first analyze whether the IO is invalid IO and whether it can be eliminated. Eliminating invalid IO through SQL optimization effectively protects the investment in storage and other hardware, and can satisfy business growth for years to come, rather than blindly expanding.

[Figure: file IO statistics with Av Rd(ms) > 40]

The right tool - an Oracle tool you must master for IO problems

How was the AWR report above obtained? What exactly is an AWR report? Allow me to elaborate.
Many of you will have heard of AWR reports. The steps for collecting one are fixed and simple:

# su - oracle
$ sqlplus / as sysdba
SQL> exec dbms_workload_repository.create_snapshot();
SQL> @?/rdbms/admin/awrrpt.sql
1) Enter the report type: html
2) Enter the number of days of snapshots to list
3) Enter the begin snap_id for the time range you want to capture
4) Enter the end snap_id for the time range you want to capture
5) Enter a file name to save the report as (any name will do)
6) The report scrolls past on screen and is written to that file

Through the AWR report, a tool built into the database, you can clearly see which SQL was executed in a given period, how much IO it generated, and how much CPU it consumed.

Some may say: I can already produce an AWR report. True, but understanding the AWR report is the key.
For that you need a somewhat deeper understanding of the database, so please read on patiently.
Not hard to explain - the Oracle database in a few words

In short, the Oracle database mainly provides data storage services, and on top of that, through its rich built-in SQL/PLSQL interfaces, provides data retrieval, comparison, association and other computing services.
Specifically, the Oracle database reads, writes and computes on the data on storage through a set of processes (background and foreground processes) and memory structures on the server.
Knowledge point:
What is the difference between an Oracle instance and the Oracle database?
Many SAs have never been clear on what an Oracle instance is, but it is actually very simple: the processes and the memory structures together are called the Oracle instance, or simply the instance.

Because the processes and memory disappear when the database or OS restarts, data held in the Oracle instance is not permanent. Our user data ultimately lives on disk, specifically in the data files.
The data files, control files and online log files, the physically existing files, make up what we traditionally call the Oracle database.

Knowledge point:
Is Oracle a typical C/S (client/server) architecture?
Answer: yes.
We know that clients and applications interact with the database through SQL statements.
When a client or application connects to the database to execute SQL for queries, inserts, updates and deletes, by default the Oracle server creates a dedicated server process for each client (visible on the operating system as a process with LOCAL=NO) to serve that client alone. This dedicated server process executes SQL on the client's behalf; once a statement finishes, the process can do nothing else but wait for the client's next SQL. It is like going to a high-end restaurant where a dedicated waiter serves your table.

Next, one picture to summarize the above, the simplified architecture of Oracle:

[Figure: simplified Oracle architecture]

As the diagram shows, each client that needs to execute SQL simply sends the SQL over the network to its corresponding server process, which executes it on the client's behalf. Multiple clients map to multiple server processes (the dedicated LOCAL=NO processes in the figure above). These server processes are also called foreground processes.
Oracle itself consists of memory structures and background processes.

The SGA shared memory can be subdivided into:
Buffer cache, which caches recently accessed data to avoid IO. The data in the buffer cache may be newer than the data on disk, for example a block read into memory and then modified. Such blocks are called dirty data, or dirty blocks.
Log buffer, which holds a set of records describing the modification process, called change vectors.
Others, such as the shared pool / large pool / java pool / streams pool, which are not covered here.

The Oracle background processes can likewise be subdivided:
The DBWR process: periodically (for example, when a checkpoint triggers it) writes the dirty data in the buffer cache back to the data files on disk. This IO is random writes.
The LGWR process: on commit, writes the modification records in the log buffer to the online log files on disk as synchronous IO, and only then can the commit complete. At that moment the data in the buffer cache may still be newer than the data files on disk, but because the modification has been safely written to an online log file, even if the database loses power the modification records in the online log can be replayed to guarantee no data loss. This is the "log first" (write-ahead logging) strategy common to all relational databases. Since LGWR appends change vectors to the end of the online log file, LGWR's IO is sequential writes.
Other Oracle background processes starting with ora_, such as pmon/smon/ckpt, are not covered here.
Knowledge point:
Can LGWR's IO return after writing only to the file system cache?
No. LGWR's IO is write-through. If it returned after writing only to the file system cache (when the data files sit on a file system), then if the system crashed before the cache was flushed to disk, the user would have been told the commit succeeded, yet the changes in the log buffer / buffer cache would never have reached disk: data loss.

Is LGWR's IO asynchronous IO?
No. To guarantee that data is not lost, LGWR must wait for each IO to return before processing the next write request.

Where is the biggest bottleneck in the Oracle architecture?
Before Oracle 12c there was only one LGWR background process, and every process must ask LGWR to write its change vectors from the log buffer to disk before its commit can complete. When a large number of processes ask LGWR to write at the same time, a queue forms. In a highly concurrent OLTP system, the single LGWR process can become a serious bottleneck, especially when online-log write performance cannot be guaranteed; waits on LGWR pile up easily. This is in fact a common trouble spot and a relatively fragile part of Oracle.

Knowledge point:
Why does Oracle eat so much memory? About half of the server's memory goes to Oracle...
Because IO is much slower than memory, many relational databases trade memory for IO to gain performance, and Oracle is no exception. Specifically, Oracle uses the buffer cache in the SGA to cache recently accessed data, so that the next access needs no IO. The buffer cache usually accounts for about 80% of the SGA, i.e. roughly 50% * 0.8 = 40% of server memory.
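As a back-of-envelope sketch of that sizing rule (the 50% and 80% fractions are this article's rough figures, not official Oracle guidance):

```python
def buffer_cache_target_gb(server_ram_gb: float,
                           sga_frac: float = 0.5,
                           bc_frac: float = 0.8) -> float:
    """SGA ~50% of server RAM, buffer cache ~80% of the SGA."""
    return server_ram_gb * sga_frac * bc_frac

# The 60G server from Case 1 below works out to the 24G mentioned there:
print(buffer_cache_target_gb(60))  # 24.0
```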

(Case 1) 800M of IO traffic per second - case sharing (1)

Problem description:
According to the customer, the IO volume of the database server is very large, reaching 800M per second, almost saturating the HBA bandwidth, and transactions are slow.

Analysis process:
Collect the Oracle AWR report for the period and find the Load Profile section, shown below.

[Figure: Load Profile section of the AWR report]

We can see that Physical Reads per second reaches 104,207 blocks, each block being 8K, which is an IO volume of 104,207 * 8K ≈ 814M per second!
IO traffic this large nearly saturates the throughput the HBA supports, and overall performance inevitably suffers.
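Checking the throughput figure:

```python
BLOCK_BYTES = 8 * 1024
physical_reads_per_sec = 104_207   # blocks per second, from the Load Profile

mb_per_sec = physical_reads_per_sec * BLOCK_BYTES / 1024**2
print(round(mb_per_sec))  # 814
```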
From the knowledge point above, we know Oracle trades memory for IO to avoid excessive IO.
So let us guess: is the buffer cache too small to cache the data?
Checking the "Cache Sizes" section of the AWR report, shown below: the buffer cache is only 128M! The shared pool is 15,232M.
Yet the server is configured with 60G of memory!
As mentioned earlier, the buffer cache can be set to about 40% of server memory, i.e. 24G.

[Figure: Cache Sizes section of the AWR report]
Looking at the "Buffer Pool Advisory" in the AWR report, shown below, we can see that merely growing the buffer cache from 128M to 256M would cut the number of IOs, i.e. physical reads, from 36 million to 19 million: nearly half!

[Figure: Buffer Pool Advisory section of the AWR report]
Cause:
Was it a configuration error? No. The customer had set the memory_target parameter, allocating 40G of memory in total to the Oracle database and letting Oracle dynamically distribute it among the SGA and PGA components, which is common practice. Due to imperfections in the dynamic memory-adjustment algorithm, too much memory went to the PGA, leaving the SGA short; and because the application made poor use of bind variables, the shared pool kept growing. Step by step, the buffer cache was squeezed down to only 128M.

Solution:
After setting a floor of 20G for the buffer cache, the high-IO problem was solved.
That was our first high-IO case. Next, we use an example to study Oracle's working process and its IO characteristics in more depth.

(D) A worked example - illustrating how Oracle works

When a client or application issues an UPDATE statement, what happens, and which IOs are involved?

[Figure: flow of an UPDATE statement]

The simplified process is as follows:
1. After the client connects to the database, a LOCAL=NO process is created on the database server to serve that client exclusively.
2. The client issues the SQL statement update T set id=5 where id=3, where table T is 1G in size and there is no index on the table.
3. The server process (LOCAL=NO) checks whether table T's blocks are already in the buffer cache in memory. If not, it issues IO and reads the 1G of data from disk into memory. Concretely, it first reads 16 blocks, i.e. 128K (a block can hold tens to hundreds of rows), then checks those blocks one by one for rows with id=3, and repeats. This IO is random reads, issued by the LOCAL=NO foreground process.

4. Eventually a block is in memory that contains a qualifying row with id=3.

5. Before the foreground process changes id=3 to id=5, it must first generate, in the log buffer, the change vector recording which position in which block changes from 3 to 5.

6. The process then changes id=3 to id=5 in the buffer cache in memory (step 5 in the figure below).

[Figure: UPDATE flow, continued]

7. The client issues a commit.

Because a commit means the data is durable, i.e. it will not be lost when the database crashes or the OS restarts, the LGWR process must at this point write the log buffer's record of the change from id=3 to id=5 to the online log file on disk. This IO can only be synchronous (not asynchronous). Only after the IO is confirmed on disk does the commit of step 6 complete and "modification successful" return to the client. At this moment the data in the buffer cache (id=5) is newer than the data file on disk (id=3), but because the record of the change has already reached the online log file on disk, the database could lose power right now and the modification record in the online log (id=3 modified to id=5) could still be replayed, restoring the data to its pre-crash state. This guarantees that data is not lost.

8. When a checkpoint or similar event occurs, the DBWR process writes the dirty data in the buffer cache (memory newer than disk) to the data files on disk. This write is asynchronous.

Knowledge point:
Why doesn't Oracle flush the dirty data in the buffer cache to disk at commit time?
The LGWR process writes the change vectors in the log buffer by appending them to the end of the online log file, so LGWR's IO is sequential writes.
Flushing the modified data blocks themselves would be random writes (each client changes rows at different physical locations each time). Random writes clearly perform worse than sequential writes, so Oracle tolerates dirty data and lets it be written down asynchronously rather than flushing at commit time; LGWR's sequential write has already guaranteed that the data cannot be lost.
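The "log first" idea described above can be sketched as a toy simulation. All names here are ours, and this is only an illustration of the principle, not Oracle's actual implementation:

```python
disk_data = {"row": 3}            # the data file on disk (stale until DBWR runs)
online_log = []                   # the online log file: change vectors, append-only
buffer_cache = dict(disk_data)    # in-memory copy of the block

def update(key, new_value):
    """Modify the block in memory and produce a change vector (pre-commit)."""
    change_vector = (key, buffer_cache[key], new_value)
    buffer_cache[key] = new_value  # block is now dirty: memory newer than disk
    return change_vector

def commit(change_vector):
    """'LGWR': sequentially append the change vector to the log before returning."""
    online_log.append(change_vector)

def recover_after_crash():
    """The buffer cache is lost; replay the log against the data file."""
    recovered = dict(disk_data)
    for key, _old, new_value in online_log:
        recovered[key] = new_value
    return recovered

cv = update("row", 5)
commit(cv)
# Crash before DBWR ever wrote the dirty block: the committed change survives.
print(recover_after_crash())  # {'row': 5}
```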

(Case 2) TPS will not rise under load testing - case sharing (2)

Problem description:
A customer's newly installed key business system could not meet its concurrency and response-time targets during the pre-launch stress test. The TPS curve during the test was very unstable, as shown below:

[Figure: TPS curve during the stress test]

Analysis process:
From the knowledge points above, we know:
Oracle has only one LGWR process, and every process must ask LGWR to write its change vectors from the log buffer to disk before its commit can complete.
When many processes ask LGWR to write at the same time, a queue forms.
In a highly concurrent OLTP system, the single LGWR process can become a serious bottleneck, especially when online-log write performance is poor.
So we need to check the state of the LGWR process. Observing the log writes of the LGWR processes on the two RAC nodes through gv$session gives the results shown below.

[Figure: gv$session output for the LGWR processes]

Cause:
Faced with this hard evidence, the customer's storage team stopped arguing and investigated item by item; the problem was finally solved by replacing a fiber-optic cable. Below are the wait events from re-running the stress test after the fiber line was replaced.

[Figure: wait events after replacing the fiber line]

(E) One diagram - summarizing the IO characteristics of Oracle

[Figure: summary of Oracle's IO characteristics]

Knowledge points:
Reading data is done by the foreground process. Its IO characteristic is random reads; data must be read into memory before it can be operated on.

The DBWR process's IO characteristic is random writes. DBWR supports multiple processes flushing data concurrently. To avoid writing a large number of dirty blocks at a single point in time, Oracle flushes dirty blocks asynchronously and gradually to reduce IO pressure; but when a checkpoint occurs or an archive log current/all command is issued, a large amount of DBWR writing is triggered at once, which can put heavy pressure on IO, hurting overall performance as well as the LGWR process's write performance and response time.

The LGWR process writes sequentially. If possible, allocate a separate RAID group for the redo logs, physically separated from other files.

(Case 3) Severe performance degradation at high concurrency - case sharing (3)

Problem description:
An email from the customer described the following problem during a performance comparison test between x86 and a minicomputer:
On a high-end PC server with 32G of memory and a 32-core CPU, running 8 statements in parallel, the slowdown on the minicomputer was not obvious, but on the PC server it was dramatic: execution time grew from a little over 1 minute to 7 minutes, while on the minicomputer it stayed at 2 minutes.
The AWR report was attached, with a request to analyze the cause.

[Figure: AWR report excerpts]

Analysis process:
With the 8-way parallel test, logical reads ≈ physical reads. A logical read is an operation on a block already in memory, and logical reads can only happen after physical reads (IO) bring the block into memory. This shows that Oracle simply failed to cache the data in the shared-memory buffer cache for reuse by other processes. Among the wait events in the AWR report, the top one is direct path read, an IO event that bypasses the BUFFER CACHE and reads directly into the PGA private memory (non-shared memory). A single IO took 42 milliseconds, far too long, indicating severe disk IO contention and poor performance. But is this the cause or the result? Let us read on.
[Figure: wait events from the AWR report]

It is no longer the case that one process reads the data into the buffer cache and the others reuse it directly from memory, which would reduce the number of IO requests and the busyness of the disks.

[Figure: buffer cache reuse under the old behavior]

Instead, each process reads into its own private PGA memory. Every process executing the same SQL must do its own reads. Obviously, many processes repeatedly reading the same data at the same time is bound to make the disks busy and degrade IO performance. That is the conclusion drawn from the AWR report.

[Figure: direct path reads into private PGA memory]

So why isn't one process reading into shared memory while the others enjoy the fruits of its labor?
This is caused by a new feature of 11g: when the optimizer judges that a large amount of physical IO is needed, it bypasses the BUFFER CACHE and reads directly into the PGA private memory.
The original intent of this feature: when Oracle reads 16 blocks at a time by default, some of the required blocks may already be in the buffer cache and the rest are not contiguous, so the 16-block read is likely to be split into multiple IOs, and what was one multiblock read becomes several single-block reads.
In reality, however, when the degree of parallelism is high, this feature easily makes the disks extremely busy and IO performance degrades severely; execution time grew from 1 minute to 7 minutes.

The original mechanism under 10g was: one session reads the data into memory, and the other sessions simply reap the benefit, reading the data directly from memory, so far fewer disk reads occur.

Solution: after temporarily disabling the feature with the commands below, the test was re-run and the problem was solved.

alter system set event='10949 trace name context forever, level 1' scope=spfile;
-- restart the database
shutdown immediate
startup
alter system register;

Therefore, it is not hard to see that the slow IO was actually the result, not the cause: the same data was read over and over, producing far too much IO.

(F) How to fix it - what invalid IO is and how to eliminate it
Let us return to the chapter that used an example to illustrate Oracle's working process and IO characteristics; it is, in fact, a living example of invalid IO.
The client issues the SQL statement update T set id=5 where id=3, where table T is 1G in size and there is no index on the table. During execution, the foreground process reads a full 1G of data from disk, through the SAN switch, into Oracle shared memory, and then filters it; in the end only one row qualifies, and it is updated from id=3 to id=5.
Imagine the table were not 1G but 100G, and not one client updating it but many sessions updating different rows at the same time: the IO of the entire system would be abnormally busy.

Consider looking up the character "喜" in a dictionary. Leafing through it page by page inevitably does a great deal of useless work; the easy way is to search by radical or pinyin, which finds "喜" quickly.
Both methods find the same character, but the former reads through much of the dictionary, generating a great deal of invalid IO, while the latter is highly efficient.

Invalid IO, in the final analysis, means the SQL statement lacks an efficient way to locate its data, so a great deal of data is read in only to be discarded for not matching the conditions. If we create an index on the id column of table T, we can locate the id=3 row quickly and precisely, reading only a handful of blocks; the total IO can be kept within about 40K instead of the previous 1G.
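A rough comparison of the two access paths (the five-block figure for the indexed lookup is our assumption: a few index branch/leaf blocks plus the one table block):

```python
BLOCK_BYTES = 8 * 1024
full_scan_bytes = 1 * 1024**3        # read all of the 1G table T
indexed_blocks = 5                   # assumed: index root/branch/leaf + table block
indexed_bytes = indexed_blocks * BLOCK_BYTES

print(indexed_bytes // 1024)               # 40 (K), vs 1G for the full scan
print(full_scan_bytes // indexed_bytes)    # roughly 26214x less IO
```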

Inefficient application SQL is the main source of invalid IO.
SQL optimization is not as simple as adding an index; an index is just one of many single-table access paths. SQL optimization also involves optimizing join methods, join order, and SQL rewriting. These techniques will be introduced in later articles.

Knowledge point:
We need to develop this awareness: disks at 100% busy and very long IO response times are quite possibly because inefficient SQL statements are generating large amounts of invalid IO, pushing IOPS beyond what the disks (array) can provide, or saturating the bandwidth with invalid IO and causing congestion.
Busy disks and long IO response times may well be the result, rather than the true cause, of slow business.
By optimizing high-IO SQL and eliminating invalid IO, we keep IO within a reasonable range and improve overall IO performance.

(Case 4) Cutting IO from 359M per second to 1M per second without understanding the business logic - case sharing (4)

Problem description:
The system's IO volume was very large, with IO throughput frequently exceeding 300M per second.
The AWR report showed the problem concentrated in a single SQL.
The question is: we are not the developers and do not know this system's business logic. Can we still optimize it?
The answer is yes.
In fact, most optimization can be completed without understanding the business logic; we only need the SQL's execution plan and execution details. The following optimization of the most IO-hungry SQL was completed within one minute.
To demonstrate that no understanding of the business logic is needed, note that from beginning to end you will not see a single SQL statement.
Optimization process:
Obtain the execution plan and execution details (which steps consume how much time and how much IO).

[Figure: execution plan with per-step time and IO]

We can see:
The SQL's execution time is 39 seconds, of which the step with Id=14 takes 38 seconds. This bottleneck step must be optimized.
Step Id=14 reads 1750K blocks, i.e. 1750K * 8K ≈ 13G per execution, read in 38 seconds, an IO rate of roughly 359M per second, yet in the end only 6 rows are returned. The step performs a full table scan of table A; clearly, after reading 13G and applying the filter conditions, only 6 rows survive. Nearly all the data is discarded after being read into memory. What is missing is an efficient way to locate the data, and an index is ideally suited to locating a small amount of data.
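Checking the numbers for step Id=14 (here 1750K is read as 1750 * 1024 blocks, and the 359M/s figure lines up with the full 39-second elapsed time):

```python
BLOCK_BYTES = 8 * 1024
blocks_read = 1750 * 1024            # 1750K blocks read by step Id=14
elapsed_sec = 39                     # total execution time of the SQL

total_bytes = blocks_read * BLOCK_BYTES
print(round(total_bytes / 1024**3, 1))             # 13.7 G per execution
print(round(total_bytes / 1024**2 / elapsed_sec))  # 359 M per second
```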
Optimization method:
In the predicate section for Id=14 in the figure above (the part boxed in red), you can see that 13G of table A is scanned and that the two columns c_captialmode and c_state filter out most of the data, so a composite index on them is enough. The command:

Create index idx_1 on A(c_captialmode,c_state) tablespace &tbs online;

Optimization effect:
After optimization, IO per execution dropped from 13G to essentially zero;
execution time dropped from 50 seconds to 50 milliseconds;
and the IO of the entire system dropped from 359M per second to less than 1M.

Knowledge point:
Even without understanding the business logic, you can quickly optimize the SQL statements that consume the most IO.
DBAs who are proficient in database internals often understand SQL optimization better than developers and programmers who do not understand database principles.

Origin blog.csdn.net/oradbm/article/details/109040420