Tips for data migration in Oracle database

  At the end of last year, we performed a number of system data migrations. Most of the systems were migrated logically because of platform and version differences, and a few were migrated physically. I would like to share some of the lessons learned.

  First, the migration process itself. Before migrating, write a thorough plan, especially the step-by-step implementation procedure, and then test it completely. For some of the systems we migrated, the migration was rehearsed four or five times, and the plan and process were refined through those tests.

  For physical migration, that is, restoring from an RMAN backup and then applying archived logs (cold migration via dd is not discussed here), it is important to put the database in force-logging mode. Before taking the RMAN full backup, be sure to execute:
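  The exact statement is missing from this copy of the article; given the context (force logging has just been mentioned, and the stated risk is bad blocks), it is presumably the force-logging command:

```sql
-- Force all changes, including NOLOGGING operations, into the redo stream.
-- Without this, blocks written by NOLOGGING operations after the backup
-- cannot be recovered from the archived logs and appear as corrupt
-- (logically bad) blocks on the restored database.
ALTER DATABASE FORCE LOGGING;
```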

  Otherwise, bad blocks may occur.

  For logical migration, before setting job_queue_processes to a value greater than 0, pay attention to each job's next execution time and owner. For example, if the job definitions were imported early but the jobs kept running on the source system during the migration, then after the migration completes the jobs' next run times will still be the original ones, which may cause them to run again unexpectedly. In addition, after jobs are imported with IMP, the job owner becomes the importing user, so the original owner can obviously no longer manage those jobs. This can be fixed with the following SQL:
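  The SQL statement itself is missing from this copy. One commonly cited approach (an assumption on my part, not confirmed by the source) is to update the owner columns of the job directly in the data dictionary. This touches sys.job$ and is undocumented and unsupported, so test it carefully first; the username and job number below are placeholders:

```sql
-- LOWNER/POWNER/COWNER are the logon, privilege and schema owners of a job.
-- Direct updates to sys.job$ are unsupported; verify on your version first.
UPDATE sys.job$
   SET lowner = 'SCOTT', powner = 'SCOTT', cowner = 'SCOTT'
 WHERE job = 116;   -- placeholder job number
COMMIT;
```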

  Before the migration, structural changes and releases to the system, such as changes to table structures, indexes, and stored procedure packages, should be frozen.

  If objects, including stored procedures, are imported with exp/imp, check that the imported objects are consistent with the source production database. For example, stored procedures that depend on a dblink may fail to be created after imp, so some stored procedures can be silently lost, even if they are rarely used.

  Here are some tips to speed up your migration:

  ·Move the data over a dblink, using direct-path (APPEND) inserts with parallelism. This method is faster than exp/imp.
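  A minimal sketch of this approach; the table name `orders` and the link name `src_link` are placeholders. As the article notes for partitioned tables, parallelism across a database link applies only to the query side:

```sql
-- Allow parallel DML in this session (query-side parallelism still applies
-- even though parallel transactions cannot cross a database link).
ALTER SESSION ENABLE PARALLEL DML;

-- Direct-path (APPEND) insert pulling the rows over the dblink.
INSERT /*+ APPEND */ INTO orders
SELECT * FROM orders@src_link;

COMMIT;  -- rows loaded with APPEND are not visible until commit
```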

  ·For tables with LONG columns, the insert..select method obviously does not work. You can use exp/imp, but it is very slow, because imp inserts such tables row by row. Another option is the SQL*Plus COPY command.
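  A sketch of the SQL*Plus COPY command (connection strings and the table name are placeholders); COPY fetches and inserts arrays of rows and can handle LONG columns up to the SET LONG limit:

```sql
SET LONG 2000000000   -- maximum LONG length to copy
SET ARRAYSIZE 1000    -- rows fetched per batch
SET COPYCOMMIT 100    -- commit every 100 batches

COPY FROM scott/tiger@srcdb -
INSERT emp_hist -
USING SELECT * FROM emp_hist
```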

  ·However, the SQL*Plus COPY command does not support tables with TIMESTAMP or LOB columns. For a table with a TIMESTAMP column, you can split it into several pieces by adding ROWID conditions during exp and run the exports concurrently. The same applies to tables with LOB columns (in insert..select mode, tables with LOB columns are also inserted row by row). Note that with this method you cannot use direct-path exp/imp.

  ·When dividing a table into several pieces to process concurrently, you can split not only by ROWID but also by a column on the table. For example, if the table has a created_date column and data is guaranteed to be inserted in increasing order of that column, you can use it to divide the table into ranges and export/import the ranges concurrently. Using ROWID is usually more efficient, though.
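  For example, a column-range split can look like the following, with each statement run in its own session against its own connection (table, link, and date boundary are all placeholders):

```sql
-- Session 1 of 2: the historical range.
INSERT /*+ APPEND */ INTO txn_log
SELECT * FROM txn_log@src_link
WHERE  created_date <  DATE '2023-01-01';
COMMIT;

-- Session 2 of 2 (a separate connection, running concurrently): the rest.
INSERT /*+ APPEND */ INTO txn_log
SELECT * FROM txn_log@src_link
WHERE  created_date >= DATE '2023-01-01';
COMMIT;
```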

  ·Of course, for tables with LOB columns, you can split them the same way into multiple concurrent insert..select streams, with no exp/imp at all.

  ·For particularly large partitioned tables, parallelism helps but is confined to a single process: parallel transactions cannot run across a DB LINK, only parallel queries, so in insert..select only the SELECT part can be parallelized, and throughput remains limited. Instead, insert the data into multiple intermediate tables concurrently, and then swap them into the partitions with EXCHANGE PARTITION ... WITHOUT VALIDATION. This greatly improves speed.
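  A sketch of the exchange-partition technique, with one staging table per partition (all object names and the date range are examples):

```sql
-- Empty staging table with the same structure as the partitioned table.
CREATE TABLE sales_stage_p2023 AS
  SELECT * FROM sales@src_link WHERE 1 = 0;

-- Load one partition's worth of data; each partition's staging table
-- can be loaded by its own session concurrently.
INSERT /*+ APPEND */ INTO sales_stage_p2023
SELECT * FROM sales@src_link
WHERE  sale_date >= DATE '2023-01-01'
AND    sale_date <  DATE '2024-01-01';
COMMIT;

-- Swap the loaded table into the partition: a dictionary-only operation,
-- and WITHOUT VALIDATION skips the row-by-row partition-key check.
ALTER TABLE sales EXCHANGE PARTITION p2023
  WITH TABLE sales_stage_p2023 WITHOUT VALIDATION;
```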

  ·Some readers may ask: why not insert into the partitioned table directly, in parallel? With conventional (non-direct-path, i.e. non-APPEND) inserts there is no problem, but that insertion method performs poorly. A direct-path insert holds a mode=6 (exclusive) TM lock on the table, so multiple sessions cannot insert at the same time. (Update: use insert into tablename partition (partname) select * from tablename where ... when inserting, which is simpler and more efficient, since the lock is then taken per partition.)

  ·Divide the data to be migrated into two parts: historical tables and dynamically changing tables. Import the historical tables, and build their indexes, before the migration window. This undoubtedly greatly reduces the business interruption time during the migration itself.

  ·Before migrating, consider cleaning out junk data.

  ·During the data load, make sure the tables have no indexes, no constraints (other than NOT NULL), and no triggers. Rebuild the indexes after the data import completes, again using multiple concurrent processes running the build scripts. After an index is created successfully, remove its PARALLEL attribute.
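  For example (index and table names are placeholders):

```sql
-- Build fast: parallel slaves, and NOLOGGING to minimize redo generation.
CREATE INDEX ix_orders_cust ON orders (customer_id)
  PARALLEL 8 NOLOGGING;

-- Reset the attributes afterwards so ordinary queries are not
-- unexpectedly parallelized and the index is protected by redo again.
ALTER INDEX ix_orders_cust NOPARALLEL;
ALTER INDEX ix_orders_cust LOGGING;
```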

  ·When creating constraints, create them in this order: CHECK constraints, primary keys, unique keys, and finally foreign key constraints, with the status ENABLE NOVALIDATE. This greatly reduces constraint creation time. After the migration completes, consider setting them back to ENABLE VALIDATE.
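  A sketch with placeholder table and constraint names:

```sql
-- ENABLE NOVALIDATE: enforced for new DML, but existing rows are not
-- checked, so creating the constraint is near-instant even on big tables.
ALTER TABLE orders ADD CONSTRAINT fk_orders_cust
  FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
  ENABLE NOVALIDATE;

-- After the migration window, validate the existing rows as well.
ALTER TABLE orders MODIFY CONSTRAINT fk_orders_cust VALIDATE;
```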

  ·Carry the optimizer statistics over from the source database using dbms_stats.export_schema_stats and dbms_stats.import_schema_stats, instead of re-gathering statistics on the target.
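  A sketch of the statistics transfer (schema and stat-table names are placeholders):

```sql
-- On the source database: export the schema statistics into a stat table.
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'MIG_STATS');
  DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MIG_STATS');
END;
/

-- Move SCOTT.MIG_STATS to the target (exp/imp or dblink), then on the target:
BEGIN
  DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MIG_STATS');
END;
/
```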

  As you can see, the above was written with 9i in mind, but most of it is still very relevant in 10g and even 11g environments. And these techniques are not limited to whole-database migrations; they apply equally to copying individual tables between databases.

  Not covered here are techniques such as materialized views, advanced replication, and triggers, because they require modifying the production database and have a relatively large impact on its operation. They should be considered only when the downtime requirement is especially strict and the migration cannot be completed within that window by other means.

  In my experience, only a complete process and thorough testing can guarantee success. These are just a few tips; if you are interested in the migration process as a whole, we can discuss the topic further.


Origin blog.csdn.net/caryxp/article/details/132922709