Real-time DB time analysis

 

When we read an AWR report, one key indicator always worth noting is DB time, and the report header is usually where we check it first.
For example, from the header below we can see immediately that DB time is 1,502.06 minutes against an elapsed time of only 60.61 minutes. Pressure like this certainly needs attention.
              Snap Id      Snap Time      Sessions Curs/Sess
            --------- ------------------- -------- ---------
Begin Snap:      6219 21-Jul-15 22:00:08       583       2.5
  End Snap:      6220 21-Jul-15 23:00:44       639       2.4
   Elapsed:               60.61 (mins)
   DB Time:            1,502.06 (mins)
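As a quick check on how heavy that is: DB time / Elapsed = 1,502.06 / 60.61 ≈ 24.8, so on average almost 25 sessions were actively working in the database at any given moment during this hour.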
Of course, we are not going to generate dozens of AWR reports on the spot just to read off this one DB time value.
In a previous post I shared a shell script that pulls the database load from the snapshot history:
http://blog.itpub.net/23718752/viewspace-1168027/
Its output looks like this:
DB_NAME BEGIN_SNAP END_SNAP SNAPDATE             LVL DURATION_MINS DBTIME
------- ---------- -------- -------------------- --- ------------- ------
XXX          93464    93465 21 Aug 2015 00:00      1            30     15
             93465    93466 21 Aug 2015 00:30      1            30      5
             93466    93467 21 Aug 2015 01:00      1            30      5
             93467    93468 21 Aug 2015 01:30      1            30     13
             93468    93469 21 Aug 2015 02:00      1            30     24
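For readers who do not want to open the linked script, here is a minimal sketch of the kind of query it would run. This is my own assumption, not the script itself: it uses the standard DBA_HIST_SNAPSHOT and DBA_HIST_SYS_TIME_MODEL views and LAG() to turn the cumulative 'DB time' counter into per-interval minutes.

-- Sketch only (not the linked script): per-snapshot DB time in minutes,
-- derived from the cumulative 'DB time' counter with LAG(). A negative
-- delta would indicate an instance restart between snapshots.
SELECT s.snap_id,
       TO_CHAR(s.end_interval_time, 'DD Mon YYYY HH24:MI') snapdate,
       ROUND((CAST(s.end_interval_time AS DATE)
            - CAST(s.begin_interval_time AS DATE)) * 24 * 60) duration_mins,
       ROUND((m.value - LAG(m.value) OVER (ORDER BY s.snap_id))
             / 1000000 / 60) dbtime
  FROM dba_hist_snapshot s, dba_hist_sys_time_model m
 WHERE m.snap_id = s.snap_id
   AND m.dbid = s.dbid
   AND m.instance_number = s.instance_number
   AND m.stat_name = 'DB time'
 ORDER BY s.snap_id;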
For everyday use this is basically enough, but it has one major drawback: the indicator is built on historical snapshots. If it is now 15:30 and a snapshot is generated every hour, there is simply no way to see the approximate database load at 15:30 itself.
The job still makes sense, though. If we want orabbix to do this type of monitoring, we cannot very well wait an hour for each data point, and shortening the snapshot interval would presumably have some performance impact of its own. We only want to know the DB time situation; everything else can be set aside.
Oracle 10g introduced the time model. Its important history table is DBA_HIST_SYS_TIME_MODEL, but that holds only historical information, not the latest values; for those we turn to the corresponding dynamic performance view v$sys_time_model.
Full of confidence about this kind of monitoring, I wrote the following statement, which seemingly achieves the desired effect.
SELECT round(
         (SELECT round(e.value / 1000000, 2) dbtime
            FROM v$sys_time_model e
           WHERE e.stat_name = 'DB time') * 100 /
         (SELECT ((SYSTIMESTAMP + 0) - startup_time) * 24 * 60 * 60 dbtime_duration
            FROM v$instance)
       , 2) dbtime_per
  FROM dual;
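To be clear about what this returns: dbtime_per is the cumulative DB time since instance startup divided by the wall-clock uptime, expressed as a percentage. An instance that had kept exactly one session continuously busy since startup would read about 100.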
My idea was that the current DB time value is easy to obtain, but it needs a baseline, and with no other reference available I used the instance startup time: DB time is initialized to zero at startup and then increases gradually.
Of course there is still some error. As the database moves from nomount through mount to the open stage, DB time is already accumulating, so the startup-time reference is slightly off, but the error is relatively small.
With the database in the open state:
SQL> SELECT value/1000000, t.* FROM v$sys_time_model t WHERE stat_name = 'DB time';
VALUE/1000000    STAT_ID STAT_NAME      VALUE
------------- ---------- --------- ----------
   130.364805 3649082374 DB time    130364805

Then I shut the database down and started it up to the mount stage:
VALUE/1000000    STAT_ID STAT_NAME      VALUE
------------- ---------- --------- ----------
     6.057183 3649082374 DB time      6057183
And after opening the database:
VALUE/1000000    STAT_ID STAT_NAME      VALUE
------------- ---------- --------- ----------
    10.063956 3649082374 DB time     10063956
You can see that DB time increases gradually through the whole process, so in the general case this approach looks like a good choice.
But after configuring it as an orabbix monitor and carefully comparing it with the snapshot-based load, I found discrepancies: on some libraries the load seen in the snapshots and the load from the real-time query differed by around 40%. At that point I still trusted the snapshot-based DB time.
With some doubt, I tested it manually.
For example, take the following snapshot information as the benchmark:
BEGIN_SNAP END_SNAP SNAPDATE          BEGIN_INTERVAL_TIME       END_INTERVAL_TIME         DURATION_MINS DBTIME
---------- -------- ----------------- ------------------------- ------------------------- ------------- ------
     36202    36203 21 Aug 2015 07:00 21-AUG-15 06.00.59.625 AM 21-AUG-15 07.00.04.417 AM            59    130
     36203    36204 21 Aug 2015 08:00 21-AUG-15 07.00.04.417 AM 21-AUG-15 08.00.09.642 AM            60    139
     36204    36205 21 Aug 2015 09:00 21-AUG-15 08.00.09.642 AM 21-AUG-15 09.00.14.248 AM            60    138
We then kept running the real-time DB time script to see exactly how large the difference is. This library's load is quite regular and does not show big jitter, so the DB time around this point should still be in the range of about 130 minutes per hour.
According to the official documentation, v$sys_time_model can be delayed, i.e. in error, by up to 5 seconds:

V$SYS_TIME_MODEL displays the system-wide accumulated times for various operations. The time reported is the total elapsed or CPU time (in microseconds). Any timed operation will buffer at most 5 seconds of time data. Specifically, this means that if a timed operation (such as SQL execution) takes a long period of time to perform, the data published to this view is at most missing 5 seconds of the time accumulated for the operation.

So we tested with a wait of well over 5 seconds between two runs:
11:25:14 SQL> @aa.sql
    DBTIME   DURATION        PER
---------- ---------- ----------
 105969035 872180.533 121.498968

11:26:42 SQL> @aa.sql
    DBTIME   DURATION        PER
---------- ---------- ----------
 105969055 872180.717 121.498966
The snapshot says the load is 138/60, i.e. well over 200%, but the result calculated by this formula is considerably smaller, at about 120%. Notice also how little PER changes between the two runs even though more than a minute passed: the ratio is computed over the entire time since instance startup, so it behaves like a long-run average and is nearly insensitive to the current load.
I ran similar tests on other libraries; on some the difference was small, on others it was large. At this point the script felt erratic and untrustworthy.
So is v$sys_time_model itself wrong? Looking at the view definition, it is still consistent with dba_hist_sys_time_model:

Column     Datatype      Description
STAT_ID    NUMBER        Statistic identifier for the time statistic
STAT_NAME  VARCHAR2(64)  Name of the statistic (see Table 7-4)
VALUE      NUMBER        Amount of time (in microseconds) that the system has spent in this operation

So the view is probably not the problem. If startup_time is a poor reference, what else can serve as the baseline? The only thing I could think of was the snapshot data.
We can take a recent historical snapshot as the reference and compute the latest load from how much DB time has grown since that snapshot.
This relies on DBA_HIST_SYS_TIME_MODEL and DBA_HIST_SNAPSHOT.
The improved statement is shown below. It has not been tested under RAC, but on a single instance it works without problems.
SELECT (e.value / 1000000 / 60 - temp.dbtime) /
       (((SYSTIMESTAMP + 0) - (temp.end_interval_time + 0)) * 24 * 60) dbtime
  FROM (SELECT t.begin_interval_time, t.end_interval_time, t.snap_id,
               e.value / 1000000 / 60 dbtime, e.stat_name
          FROM dba_hist_sys_time_model e, dba_hist_snapshot t
         WHERE e.stat_name = 'DB time'
           AND t.snap_id = e.snap_id
           AND t.begin_interval_time > SYSDATE - 2/24
           AND rownum < 2) temp,
       v$sys_time_model e
 WHERE e.stat_name = 'DB time'
   AND rownum < 2;
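One caveat, noted here as my own observation rather than part of the original script: ROWNUM < 2 without an ORDER BY lets Oracle pick an arbitrary snapshot from the last two hours as the baseline. A sketch of a variant that pins the baseline to the most recent snapshot (same single-instance assumption as above):

-- Variant sketch: order snapshots descending, then take the top row,
-- so the baseline is deterministically the most recent snapshot.
SELECT (e.value / 1000000 / 60 - temp.dbtime) /
       (((SYSTIMESTAMP + 0) - (temp.end_interval_time + 0)) * 24 * 60) dbtime_workload
  FROM (SELECT *
          FROM (SELECT t.end_interval_time,
                       e.value / 1000000 / 60 dbtime
                  FROM dba_hist_sys_time_model e, dba_hist_snapshot t
                 WHERE e.stat_name = 'DB time'
                   AND t.snap_id = e.snap_id
                 ORDER BY t.snap_id DESC)
         WHERE rownum < 2) temp,
       v$sys_time_model e
 WHERE e.stat_name = 'DB time'
   AND rownum < 2;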
The critical control point here is the choice of reference time: a baseline from 3 hours ago gives a somewhat different result than one from 2 hours ago, but the difference is small and can fairly be called the error.
DB_NAME BEGIN_SNAP END_SNAP SNAPDATE             LVL DURATION_MINS DBTIME
------- ---------- -------- -------------------- --- ------------- ------
XXX          93464    93465 21 Aug 2015 00:00      1            30     15
             93465    93466 21 Aug 2015 00:30      1            30      5
             93466    93467 21 Aug 2015 01:00      1            30      5
For example, this library's load is relatively low, under 20%. Let's look at the DB time load computed against a baseline from three hours ago and from two hours ago.
With the snapshot from 3 hours ago as the baseline:

DBTIME_WORDLOAD
---------------
            18%

With the snapshot from 2 hours ago as the baseline:

DBTIME_WORDLOAD
---------------
            11%
The values from the two baselines are not far apart; both are in a reasonable range.
In fact, the root cause of the earlier discrepancy lies in the sampling range: the closer the reference baseline is to the present, the smaller the resulting error, because a short window reflects the current load while a long window averages it away. As a simple analogy, to describe today's income level you would use wage statistics from recent years; measuring against a baseline from many decades ago would make the error enormous, while recent data keeps the result within a trustworthy, usable range.
