Database Replay in Oracle Database 11g (Original)

Overview

The Database Replay functionality of Oracle 11g allows you to capture workloads on a production system and replay them exactly as they happened on a test system. This provides an accurate method to test the impact of a variety of system changes including:

  • Database upgrades.
  • Operating system upgrades or migrations.
  • Configuration changes, such as changes to initialization parameters or conversion from a single node to a RAC environment.
  • Hardware changes or migrations.

The Database Replay tool first records all workload that's directed at the RDBMS. It then exercises the RDBMS code during the replay in a way that's similar to the way the workload was exercised during the data capture phase. You achieve this by re-creating all the external client requests to the RDBMS. The ultimate objective is to replay the exact production workload as seen by the RDBMS, in the form of requests made by various external clients.

Database Replay captures all external requests made while the production database is running, including SQL queries, PL/SQL blocks, limited PL/SQL remote procedure calls, and OCI calls. It doesn't capture background jobs or requests made by internal clients such as Enterprise Manager. To be precise, Database Replay doesn't capture the following types of client requests:

  • SQL*Loader direct path load of data
  • Oracle Streams
  • Data Pump Import and Export
  • Advanced replication streams
  • Non-PL/SQL-based Advanced Queuing (AQ)
  • Flashback Database and Flashback queries
  • Distributed transactions and remote describe/commit operations
  • Shared server
  • Non-SQL-based object access

Tip: In a RAC environment, during the workload capture, the captured data is written to each instance's file system. The data is then consolidated into a single directory for the preprocessing and replay stages.

Following are the steps for using Database Replay to analyze significant changes in your system:
1.  Capture the production workload.
2.  Preprocess the captured workload.
3.  Replay the workload.
4.  Analyze the replayed workload and create a report.
Capture the production workload
Database Replay captures all requests made to the database by external clients in binary files called capture files. You can transport these capture files to another system for testing after the workload is completed. The capture files contain key information regarding client requests such as SQL queries, bind values, and transaction details.

Restart the Database 

Restarting the database, while not mandatory, ensures that you won't have needless data divergences as a result of in-progress or uncommitted transactions when you start the workload capture. To avoid partial capture of transactions and errors due to dependent transactions in the workload, restart the production database and start clean.
Restart the database in restricted mode using the STARTUP RESTRICT command, in order to prevent users from connecting and starting transactions before you start the workload capture. Once you start the workload capture, the instance automatically switches to unrestricted mode, allowing normal user connections to the database. If you're dealing with an Oracle RAC environment, you must first shut down all instances, restart one of them in restricted mode, and start the workload capture there. You can then restart the other instances after the workload capture starts.
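As a minimal sketch (assuming SYSDBA access from SQL*Plus on the production host), the restart might look like this:

    -- Restart in restricted mode so no ordinary sessions start work before
    -- the capture begins; starting the capture lifts the restriction.
    SHUTDOWN IMMEDIATE
    STARTUP RESTRICT
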
Define Workload Filters

You can use optional workload filters to restrict the workload capture to only a part of the actual production workload. For example, you can use an exclusion filter to exclude Enterprise Manager sessions, or inclusion filters to capture only specified user sessions; all other activity is then ignored by Database Replay. ADD_FILTER adds a new filter that affects the next workload capture, and whether the filters are treated as inclusion filters or exclusion filters depends on the value of the default_action input to the START_CAPTURE procedure.

Note that you can use either an inclusion filter or an exclusion filter during any workload capture, but not both.

SQL> begin
       dbms_workload_capture.add_filter(
          fname      => 'user_salapati',
          fattribute => 'USER',
          fvalue     => 'salapati');
     end;
     /

In the ADD_FILTER procedure, the various parameters are defined as follows:

  • fname specifies the filter name.
  • fattribute specifies the filter attribute, such as program, module, action, service, instance_number, or user.
  • fvalue specifies the value of the attribute corresponding to the fattribute parameter you choose.

In my example, I chose user as the fattribute parameter's value. The fvalue attribute specifies the particular username of the user (salapati) whose actions will be captured.
The ADD_FILTER procedure example shown here restricts the workload capture to external calls made by a single user, salapati. Everything else that happens in the database is completely ignored by Database Replay. You can remove a filter by using the DELETE_FILTER procedure, as shown here:
SQL> BEGIN
       DBMS_WORKLOAD_CAPTURE.DELETE_FILTER(
          fname => 'user_salapati');
     END;
     /
Note that there is only a single required parameter for the DELETE_FILTER procedure, fname, which provides the name of the filter. Use the DBA_WORKLOAD_FILTERS view to see all the workload filters defined in a database.
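For a quick check of what's currently defined, you can query the view directly; a minimal sketch (selecting all columns to avoid assuming the exact column list):

    -- List every workload filter defined in the database.
    SELECT * FROM dba_workload_filters;
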
Set Up a Capture Directory

Make sure you set up a directory on your file system that's large enough to hold the results of the workload capture process. You don't have to create a new directory specifically for the workload capture because you can use a preexisting directory path. Of course, the workload capture will stop if there isn't sufficient free space in the directory you allocate for the data capture. For an Oracle RAC environment, you can use a shared file system or a separate physical directory for each of the instances, but it's easier to use the shared file system.
After these preparations, you can start capturing the production workload by using the START_CAPTURE procedure.
The capture and replay processes can be configured and initiated using PL/SQL APIs, or Enterprise Manager, both of which are demonstrated in this article. To keep things simple, the examples presented here are performed against two servers (prod-11g and test-11g), both of which run an identical database with a SID of DB11G.
The DBMS_WORKLOAD_CAPTURE package provides a set of procedures and functions to control the capture process. Before we can initiate the capture process we need an empty directory on the "prod-11g" database server to hold the capture logs.
    mkdir /u01/app/oracle/db_replay_capture
Next, we create a directory object pointing to the new directory.
    CONN sys/password@prod AS SYSDBA
    CREATE OR REPLACE DIRECTORY db_replay_capture_dir AS '/u01/app/oracle/db_replay_capture/';
    -- Make sure existing processes are complete.
    SHUTDOWN IMMEDIATE
    STARTUP
The combination of the ADD_FILTER procedure and the DEFAULT_ACTION parameter of the START_CAPTURE procedure allows the workload to be refined by including or excluding specific work.
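For example, to capture only the work selected by the filters you added, you could pass default_action => 'EXCLUDE' to START_CAPTURE. The sketch below uses a hypothetical capture name and assumes the 11g behavior that 'EXCLUDE' turns the added filters into inclusion filters (the default, 'INCLUDE', treats them as exclusion filters).

    BEGIN
      -- Exclude everything by default and capture only what the filters match.
      DBMS_WORKLOAD_CAPTURE.start_capture (name           => 'filtered_capture_1',
                                           dir            => 'DB_REPLAY_CAPTURE_DIR',
                                           duration       => NULL,
                                           default_action => 'EXCLUDE');
    END;
    /
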
For simplicity let's assume we want to capture everything, so we can ignore this and jump straight to the START_CAPTURE procedure. This procedure allows us to name a capture run, specify the directory the capture files should be placed in, and specify the length of time the capture process should run for. If the duration is set to NULL, the capture runs until it is manually turned off using the FINISH_CAPTURE procedure.
    BEGIN
      DBMS_WORKLOAD_CAPTURE.start_capture (name     => 'test_capture_1',
                                           dir      => 'DB_REPLAY_CAPTURE_DIR',
                                           duration => NULL);
    END;
    /
Now, we need to do some work to capture. First, we create a test user.
    CREATE USER db_replay_test IDENTIFIED BY db_replay_test
      QUOTA UNLIMITED ON users;
    GRANT CONNECT, CREATE TABLE TO db_replay_test;
Next, we create a table and populate it with some data.
    CONN db_replay_test/db_replay_test@prod
    CREATE TABLE db_replay_test_tab (
      id           NUMBER,
      description  VARCHAR2(50),
      CONSTRAINT db_replay_test_tab_pk PRIMARY KEY (id)
    );
    BEGIN
      FOR i IN 1 .. 500000 LOOP
        INSERT INTO db_replay_test_tab (id, description)
        VALUES (i, 'Description for ' || i);
      END LOOP;
      COMMIT;
    END;
    /
Once the work is complete we can stop the capture using the FINISH_CAPTURE procedure.
    CONN sys/password@prod AS SYSDBA
    BEGIN
      DBMS_WORKLOAD_CAPTURE.finish_capture;
    END;
    /
If we check out the capture directory, we can see that some files have been generated there.
    $ cd /u01/app/oracle/db_replay_capture
    $ ls
    wcr_4f9rtgw00238y.rec  wcr_cr.html       wcr_scapture.wmd
    wcr_4f9rtjw002397.rec  wcr_cr.text
    wcr_4f9rtyw00239h.rec  wcr_fcapture.wmd
We can retrieve the ID of the capture run by passing the directory object name to the GET_CAPTURE_INFO function, or by querying the DBA_WORKLOAD_CAPTURES view.
    SELECT DBMS_WORKLOAD_CAPTURE.get_capture_info('DB_REPLAY_CAPTURE_DIR')
    FROM   dual;
    DBMS_WORKLOAD_CAPTURE.GET_CAPTURE_INFO('DB_REPLAY_CAPTURE_DIR')
    ---------------------------------------------------------------
                                                                 21
    1 row selected.
    COLUMN name FORMAT A30
    SELECT id, name FROM dba_workload_captures;
            ID NAME
    ---------- ------------------------------
            21 test_capture_1
    1 row selected.
The DBA_WORKLOAD_CAPTURES view contains information about the capture process. This can be queried directly, or a report can be generated in text or HTML format using the REPORT function.
    DECLARE
      l_report  CLOB;
    BEGIN
      l_report := DBMS_WORKLOAD_CAPTURE.report(capture_id => 21,
                                               format     => DBMS_WORKLOAD_CAPTURE.TYPE_HTML);
    END;
    /
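The REPORT function returns the report as a CLOB, which the block above simply discards. A minimal sketch of printing it instead, here in text format through DBMS_OUTPUT (an approach not shown in the original example):

    SET SERVEROUTPUT ON
    DECLARE
      l_report  CLOB;
      l_offset  PLS_INTEGER := 1;
      l_chunk   PLS_INTEGER := 255;
    BEGIN
      l_report := DBMS_WORKLOAD_CAPTURE.report(capture_id => 21,
                                               format     => DBMS_WORKLOAD_CAPTURE.TYPE_TEXT);
      -- Print the CLOB in small chunks to stay within DBMS_OUTPUT line limits.
      WHILE l_offset <= DBMS_LOB.getlength(l_report) LOOP
        DBMS_OUTPUT.put_line(DBMS_LOB.substr(l_report, l_chunk, l_offset));
        l_offset := l_offset + l_chunk;
      END LOOP;
    END;
    /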
The capture ID can be used to export the AWR snapshots associated with the specific capture run.
    BEGIN
      DBMS_WORKLOAD_CAPTURE.export_awr (capture_id => 21);
    END;
    /
A quick look at the capture directory shows a dump file and associated log file have been produced.
    $ cd /u01/app/oracle/db_replay_capture
    $ ls
    wcr_4f9rtgw00238y.rec  wcr_ca.dmp   wcr_cr.text
    wcr_4f9rtjw002397.rec  wcr_ca.log   wcr_fcapture.wmd
    wcr_4f9rtyw00239h.rec  wcr_cr.html  wcr_scapture.wmd
Replay using the DBMS_WORKLOAD_REPLAY Package
The DBMS_WORKLOAD_REPLAY package provides a set of procedures and functions to control the replay process. In order to replay the logs captured on the "prod-11g" system, we need to transfer the capture files to our test system. Before we can do this, we need to create a directory on the "test-11g" system to put them in.

Tip: The database version of the system where you replay the workload must match the version of the database where you captured the workload.
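A quick way to confirm this before copying anything is to compare V$VERSION on both systems (a minimal sketch):

    -- Run on both prod-11g and test-11g and compare the banners.
    SELECT banner FROM v$version;
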
For simplicity we will keep the name the same.
    mkdir /u01/app/oracle/db_replay_capture
Transfer the files from the production server to the test server.
Next, we create a directory object pointing to the new directory.
    CONN sys/password@test AS SYSDBA
    CREATE OR REPLACE DIRECTORY db_replay_capture_dir AS '/u01/app/oracle/db_replay_capture/';
Oracle recommends that you reset the system time on the test system to the time when you started the workload capture, in order to avoid encountering invalid data when processing time-sensitive data and to avoid potential failure of any scheduled jobs. The key to a successful replay is to have the application transactions access an identical version of the application data as on the system where you captured the initial workload. For this test I have ignored this step.

Before the replay, resolve all external references from the database, such as database links. If these links exist in the captured workload, you must either fully disable them or reconfigure them so they are fully functional in the test system. In addition to database links, external references include objects such as directory objects, URLs, and external tables that point to production systems. You're likely to encounter unexpected problems if you replay a workload with unresolved external references.
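For example, you might first list the database links defined on the test system so you know what to disable or repoint (a minimal sketch using the standard DBA_DB_LINKS view):

    -- Identify database links that could reach back to production during the replay.
    SELECT owner, db_link, host FROM dba_db_links;
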

Preprocessing the Workload
Before you can replay the captured workload, you must first preprocess the captured data. Preprocessing creates the replay files that you use to replay the workload on a test system. You need to preprocess the captured workload only once, no matter how many times you replay it. Files already created by the database aren't modified when you rerun the preprocessing step; the database creates new files but doesn't touch the older ones, so if you run into any errors you can repeat the step without any problem.

Use the PROCESS_CAPTURE procedure to preprocess the captured workload, as shown here:
  begin
     DBMS_WORKLOAD_REPLAY.process_capture(CAPTURE_DIR => 'DB_REPLAY_CAPTURE_DIR');
  end;
  /
The capture_dir parameter refers to the directory where the database has stored the captured workload. Preprocessing the data will produce the metadata for the captured workload and transform the captured workload data files into replay streams called replay files that you can now replay on the test system.

Set up the Replay Clients

The replay driver is a special application that consumes the captured workload by sending replay requests to the test database. It consists of one or more replay clients that connect to the test system and send requests to execute the captured workload. The replay driver thus replaces the external clients that originally interacted with the RDBMS. The replay clients in essence simulate the production system on the test database by sending requests that make the test system behave as if those requests came from the external clients during the workload capture. The replay driver distributes the replay workload streams among the multiple replay clients based on network bandwidth, CPU, and memory capabilities. The replay client is a multithreaded client capable of driving multiple workload sessions.
Each of the workload clients, which you start with the wrc executable from the command line, submits a session's workload. It's the replay client that actually connects to the database and drives the replay. The number of replay clients you'll need depends on the number of user sessions in the captured workload. If you need multiple hosts because of a large number of user sessions, you must install the wrc executable on each host.
You can start multiple clients if you want, each of which will initiate one or more replay threads with the database. Each of these replay threads represents a single stream from the workload capture.

For the wrc executable, the mode parameter is the only required parameter. If you don't specify the replaydir parameter, the replay directory defaults to the current directory. Following are three optional parameters (a combined invocation example follows the list):

  • process_per_cpu: specifies the maximum number of client processes per CPU; its default value is 4.
  • threads_per_process: specifies the maximum number of threads in a single wrc client process; its default value is 50.
  • connection_override: specifies whether wrc overrides the connection mapping stored in the DBA_WORKLOAD_CONNECTION_MAP view. The default value is FALSE, meaning all replay threads use the connection mappings in the DBA_WORKLOAD_CONNECTION_MAP view to connect.
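Putting these together, a replay client invocation might look like the following sketch (the connect string and directory are illustrative):

    $ wrc system/password@test mode=replay replaydir=/u01/app/oracle/db_replay_capture \
          connection_override=TRUE threads_per_process=50
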

Initializing the Replay Data 
Use the INITIALIZE_REPLAY procedure to initialize the data, which loads the metadata into tables required by the workload replay process.
SQL> exec DBMS_WORKLOAD_REPLAY.initialize_replay (replay_name => 'test_capture_1', replay_dir => 'DB_REPLAY_CAPTURE_DIR');
The replay_name parameter specifies the replay name, and the replay_dir parameter specifies the directory containing the captured workload. Among other things, the initialization process will load captured connection strings so they can be remapped for the database replay.

I've named the replay with the same name as the capture process (test_capture_1), but this is not necessary.

Remapping External Connections 
You can use the DBA_WORKLOAD_CONNECTION_MAP view to check the external connection mappings made by database users during the workload capture. You must remap the external connections so the individual user sessions can connect to all the external databases.
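To see which connection IDs were captured, you can query the mapping view first; a minimal sketch (selecting all columns to avoid assuming the exact column list):

    -- Review the captured connection strings and their IDs before remapping.
    SELECT * FROM dba_workload_connection_map;
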
Use the REMAP_CONNECTION procedure to remap external connections. On a single-instance system, the capture and replay system connection strings are mapped one-to-one.The following example shows how to remap external connections:
SQL> exec dbms_workload_replay.remap_connection (connection_id => 111, replay_connection => 'prod1:1522/testdb');
In the REMAP_CONNECTION procedure, the connection_id parameter shows the connection from the workload capture, and the optional replay_connection parameter specifies the new connection string you want to use during the workload replay. If the replay_connection parameter's value is set to its default value of null, all replay sessions will connect to the default host. When dealing with an Oracle RAC environment, you can map all the connection strings to a single load balancing connection string.

Setting Workload Options

After initializing the replay data and remapping necessary external connections, you must set various workload replay options. You can specify the following four options while replaying the production workload.

synchronization 

By default, the value of this parameter is TRUE, meaning that the commit order of the captured workload is preserved during the workload replay. Replay actions execute only after all dependent commit actions have completed successfully, which eliminates the data divergence that results when the commit order among dependent transactions is not followed. If the captured workload consists primarily of independent transactions, you can set the synchronization parameter to FALSE because data divergence isn't a concern in that case. Synchronized, commit-based replay ensures minimal data divergence compared with unsynchronized replay. Unsynchronized replay is useful for load or stress testing where you don't have to adhere to the original commit ordering, but it leads to high data divergence.

connect_time_scale 
This is an optional parameter that scales the elapsed time between the start of the workload capture and the time at which each session connects. The value is interpreted as a percentage and defaults to 100. It can be used to increase or decrease the number of concurrent users during the workload replay.
think_time_scale 
An optional parameter that lets you calibrate the speed at which user calls are sent to the database. It scales the elapsed time between user calls from the same session and can be used to increase or decrease the number of concurrent users during the workload replay. The default value is 100. If you set it to 0, client requests are sent to the database as fast as possible.

think_time_auto_correct 
Also an optional parameter, it automatically corrects the think time set by the think_time_scale parameter. By default, this parameter is set to FALSE, meaning there's no automatic adjustment of the think time. When you set it to TRUE, the database automatically reduces the effective think_time_scale value if the replay is progressing more slowly than the capture, and automatically increases the think time if the replay is progressing faster than the capture.
Note the difference between how elapsed time is computed during a workload capture and a workload replay. During a workload capture, elapsed time is the sum of two components: user time and user think time. User time is the time it takes to make a user call to the database, and user think time is the time the user waits between calls. Workload replay includes three components: user time, user think time, and synchronization time.

EXAMPLE

Application of the connect_time_scale Parameter
If the following was observed during the original workload capture:
12:00 : Capture was started
12:10 : First session connect  (10m after)
12:30 : Second session connect (30m after)
12:42 : Third session connect  (42m after)
If the connect_time_scale is 50, then the session connects will happen as follows:
12:00 : Replay was started with 50% connect time scale
12:05 : First session connect  ( 5m after)
12:15 : Second session connect (15m after)
12:21 : Third session connect  (21m after)
If the connect_time_scale is 200, then the session connects will happen as follows:
12:00 : Replay was started with 200% connect time scale
12:20 : First session connect  (20m after)
13:00 : Second session connect (60m after)
13:24 : Third session connect  (84m after)
Application of the think_time_scale Parameter
If the following was observed during the original workload capture:
12:00 : User SCOTT connects
12:10 : First user call issued (10m after completion of prev call)
12:14 : First user call completes in 4m
12:30 : Second user call issued (16m after completion of prev call)
12:40 : Second user call completes in 10m
12:42 : Third user call issued ( 2m after completion of prev call)
12:50 : Third user call completes in 8m
If the think_time_scale is 50 during the workload replay, then the user calls will look something like below:
12:00 : User SCOTT connects
12:05 : First user call issued 5 mins (50% of 10m) after the completion of
        previous call
12:10 : First user call completes in 5m (takes a minute longer)
12:18 : Second user call issued 8 mins (50% of 16m) after the completion of prev
        call
12:25 : Second user call completes in 7m (takes 3 minutes less)
12:26 : Third user call issued 1 min  (50% of 2m) after the completion of prev
        call
12:35 : Third user call completes in 9m (takes a minute longer)
Application of the think_time_auto_correct Parameter
If the following was observed during the original workload capture:
12:00 : User SCOTT connects
12:10 : First user call issued (10m after completion of prev call)
12:14 : First user call completes in 4m
12:30 : Second user call issued (16m after completion of prev call)
12:40 : Second user call completes in 10m
12:42 : Third user call issued ( 2m after completion of prev call)
12:50 : Third user call completes in 8m
If the think_time_scale is 100 and the think_time_auto_correct is TRUE during the workload replay, then the user calls will look something like below:
12:00 : User SCOTT connects
12:10 : First user call issued 10 mins after the completion of prev call
12:15 : First user call completes in 5m (takes 1 minute longer)
12:30 : Second user call issued 15 mins (16m minus the extra time of 1m the prev call took) after the completion of prev call
12:44 : Second user call completes in 14m (takes 4 minutes longer)
12:44 : Third user call issued immediately (2m minus the extra time of 4m the prev call took) after the completion of prev call
12:52 : Third user call completes in 8m
Preparing the Workload for Replay
To prepare the workload for replay on the test system, execute the PREPARE_REPLAY procedure, as shown here:
SQL> exec dbms_workload_replay.prepare_replay (synchronization => FALSE);
In this example, the synchronization parameter is set to FALSE (the default value is TRUE). This means that the commit order of transactions in the captured workload may not be preserved during the workload replay. This is a good strategy if you believe the workload is composed mostly of independent transactions, in which case the commit order doesn't need to be preserved.
Before we can start the replay, we need to calibrate and start a replay client using the "wrc" utility. The calibration step tells us the number of replay clients and hosts necessary to faithfully replay the workload.
    $ wrc mode=calibrate replaydir=/u01/app/oracle/db_replay_capture
    Workload Replay Client: Release 11.1.0.6.0 - Production on Tue Oct 30 09:33:42 2007
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Report for Workload in: /u01/app/oracle/db_replay_capture
    -----------------------
    Recommendation:
    Consider using at least 1 clients divided among 1 CPU(s).
    Workload Characteristics:
    - max concurrency: 1 sessions
    - total number of sessions: 3
    Assumptions:
    - 1 client process per 50 concurrent sessions
    - 4 client process per CPU
    - think time scale = 100
    - connect time scale = 100
    - synchronization = TRUE
The calibration step suggests a single client on a single CPU is enough, so we only need to start a single replay client, as shown below.
    $ wrc system/password@test mode=replay replaydir=/u01/app/oracle/db_replay_capture
    Workload Replay Client: Release 11.1.0.6.0 - Production on Tue Oct 30 09:34:14 2007
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Wait for the replay to start (09:34:14)
The replay client pauses waiting for replay to start. We initiate replay with the following command.
    BEGIN
      DBMS_WORKLOAD_REPLAY.start_replay;
    END;
    /
If you need to stop the replay before it is complete, call the CANCEL_REPLAY procedure.
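A minimal sketch of cancelling an in-progress replay:

    BEGIN
      DBMS_WORKLOAD_REPLAY.cancel_replay;
    END;
    /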
The output from the replay client includes the start and finish time of the replay operation.
    $ wrc system/password@test mode=replay replaydir=/u01/app/oracle/db_replay_capture
    Workload Replay Client: Release 11.1.0.6.0 - Production on Tue Oct 30 09:34:14 2007
    Copyright (c) 1982, 2007, Oracle.  All rights reserved.
    Wait for the replay to start (09:34:14)
    Replay started (09:34:44)
    Replay finished (09:39:15)

Analyzing Workload Capture and Replay
Once complete, we can see the DB_REPLAY_TEST_TAB table has been created and populated in the DB_REPLAY_TEST schema.
    SQL> CONN sys/password@test AS SYSDBA
    Connected.
    SQL> SELECT table_name FROM dba_tables WHERE owner = 'DB_REPLAY_TEST';
    TABLE_NAME
    ------------------------------
    DB_REPLAY_TEST_TAB
    SQL> SELECT COUNT(*) FROM db_replay_test.db_replay_test_tab;
      COUNT(*)
    ----------
        500000
Information about the replay processing is available from the DBA_WORKLOAD_REPLAYS view.
    COLUMN name FORMAT A30
    SELECT id, name FROM dba_workload_replays;
            ID NAME
    ---------- ------------------------------
            11 test_capture_1
    1 row selected.
In addition, a report can be generated in text or HTML format using the REPORT function.
    DECLARE
      l_report  CLOB;
    BEGIN
      l_report := DBMS_WORKLOAD_REPLAY.report(replay_id => 11,
                                              format     => DBMS_WORKLOAD_REPLAY.TYPE_HTML);
    END;
    /

We can also use the following PL/SQL block to generate a replay report.
declare
  cap_id   number;
  rep_id   number;
  rep_rpt  clob;
begin
  cap_id := dbms_workload_replay.get_replay_info (dir => 'test_dir');

  select max(id)
    into rep_id
    from dba_workload_replays
   where capture_id = cap_id;

  rep_rpt := dbms_workload_replay.report (replay_id => rep_id,
                                          format    => dbms_workload_replay.type_text);
end;
/
The GET_REPLAY_INFO function provides a history of the workload capture in the specified replay directory test_dir.

Here's a typical report produced by the REPORT function:
    Error Data (% of total captured actions)
        New errors:                                         12.3%
        Not reproduced old errors:                           1.0%
        Mutated errors:                                       2.0%

    Data Divergence
        Percentage of row count diffs:                        7.0%
        Average magnitude of difference (% of captured):      4.0%
        Percentage of diffs because of error (% of diffs):   20.0%
        Result checksums were generated for 10% of all actions (% of checksums)
        Percentage of failed checksums:                       0.0%
        Percentage of failed checksums on same row count:     0.0%

    Replay Specific Performance Metrics
        Total time deficit (-)/speed up (+):                -32 min
        Total time of synchronization:                       44 min
        Average elapsed time difference of calls:             0.1 sec
        Total synchronization events:                  3675119064
Following are the key types of information you must focus on in order to judge the performance of the test system:

  • Data divergence between the replay and the captured workload. If online divergence reporting reveals serious divergence, you can stop the replay; alternatively, you can use offline divergence reporting at the end of the replay to determine how successful the replay was. Your goal is to minimize all types of negative record-and-replay divergence. Data divergence shows up as differences in the number of rows returned by queries in response to identical SQL statements, and it merits your utmost scrutiny. Data divergences can be any of the following:
      • Smaller or larger result sets
      • Updates to a database state
      • A return code or an error code
  • Errors generated during the workload replay.
  • Performance deviations between workload capture and workload replay. You can see how long the replay took to perform the same amount of work as the captured workload. If the workload replay takes longer than the workload capture, it's a cause for concern and you must investigate further.
  • Performance statistics captured by AWR reports. You can also use ADDM to measure the performance difference between the workload capture system and the replay system; a sketch of exporting the replay-side AWR snapshots follows this list.
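For the AWR comparison, the replay-side snapshots can be exported much as the capture-side snapshots were earlier; a minimal sketch, assuming the replay ID of 11 shown above:

    BEGIN
      -- Export the AWR snapshots taken during the replay for later comparison.
      DBMS_WORKLOAD_REPLAY.export_awr (replay_id => 11);
    END;
    /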
You must investigate any of the data divergences listed in order to reduce the divergence between recording and replaying the database workload. Any of the following workload characteristics will increase data or error divergence between capture and replay of the workload:

  • Implicit session dependencies due to things such as the use of the DBMS_PIPE package
  • Multiple commits within PL/SQL
  • User locks
  • Using non-repeatable functions
  • Any external interaction with URLs or database links
The following data dictionary views help you manage the Database Replay feature (a combined example follows the list):

  • DBA_WORKLOAD_CAPTURES shows all workload captures you performed in a database.
  • DBA_WORKLOAD_FILTERS shows all workload filters you defined in a database.
  • DBA_WORKLOAD_REPLAYS shows all workload replays you performed in a database.
  • DBA_WORKLOAD_REPLAY_DIVERGENCE helps monitor workload divergence.
  • DBA_WORKLOAD_THREAD helps monitor the status of external replay clients.
  • DBA_WORKLOAD_CONNECTION_MAP shows all connection strings used by workload replays.
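As a combined example, the following sketch lists each replay alongside the capture it was based on, using only columns that already appear in the earlier examples (ID, NAME, and CAPTURE_ID):

    -- Match each replay run to its originating capture.
    SELECT c.id   AS capture_id,
           c.name AS capture_name,
           r.id   AS replay_id,
           r.name AS replay_name
    FROM   dba_workload_captures c
           JOIN dba_workload_replays r ON r.capture_id = c.id;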


References:

  • McGraw-Hill, OCP Oracle Database 11g: New Features for Administrators Exam Guide (April 2008)
  • http://www.oracle-base.com/articles/11g/database-replay-11gr1.php
  • http://www.stanford.edu/dept/itss/docs/oracle/10gR2/appdev.102/b14258/d_workload_capture.htm#CFHFAAFB
  • http://docs.oracle.com/cd/E11882_01/appdev.112/e40758/d_workload_replay.htm#ARPLS73997
  • http://docs.oracle.com/cd/E11882_01/appdev.112/e40758/d_workload_replay.htm#ARPLS69088

This article is original; when reprinting, please credit the source and author.

Corrections are welcome if you spot any errors.

Email: [email protected]
