Oracle scheme for converting large tables to partitioned tables and migrating tablespaces

I. Background

The data files of the tablespace that holds the database's tables and indexes have been growing rapidly, and the number of data files is about to reach Oracle's per-tablespace limit. The tables (non-partitioned, some holding several billion rows, with files at the TB level) therefore need to be migrated to a new tablespace, and some of them converted into partitioned tables.

II. Analysis of alternatives

1. Using IMP / EXP

The import (imp) and export (exp) tools have a long history; exp dumps data to a binary file. They were deprecated after 11g R2 and are now kept mainly for importing legacy export files.

These tools can accomplish the required task, but with the following restrictions:

1) Exporting a large amount of data is very slow and can only be done in batches

2) Tablespace remapping is not supported, so a third user or database has to be involved

3) The operating user needs DBA privileges

4) Permission to run the imp/exp tools is required

5) The tools run on the client side and are therefore affected by the network

2. Using EXPDP / IMPDP

The EXPDP / IMPDP (Data Pump) tools were introduced in 10g; their parameter format is similar to EXP / IMP.

Advantages:

1) Import and export tasks can execute in parallel

2) Can export to and import from multiple files

3) No client-side staging is needed; they operate directly on the database and the dump files

4) Not affected by the network: expdp runs as a server-side program

5) Object remapping is supported, e.g. schemas and tablespaces can be remapped directly

6) A non-partitioned table can be imported into a partitioned table

Disadvantages:

1) About 5 TB of server disk space is needed to store the binary dump files; the export can be split into batches

2) A one-shot import requires a large enough UNDO tablespace
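For illustration, the export and import could be driven by Data Pump parameter files like the following sketch (the directory object, file names, schema, and tablespace names are assumptions, not taken from the original setup):

```
# expdp.par — export the source table in parallel to multiple dump files
DIRECTORY=dump_dir
DUMPFILE=s_%U.dmp
LOGFILE=expdp_s.log
TABLES=scott.s
PARALLEL=4
FILESIZE=100G

# impdp.par — import into the new tablespace via remapping
DIRECTORY=dump_dir
DUMPFILE=s_%U.dmp
LOGFILE=impdp_s.log
REMAP_TABLESPACE=tm_data:tm_data2
TABLE_EXISTS_ACTION=append
PARALLEL=4
```

These would be run on the server side as `expdp ... parfile=expdp.par` and `impdp ... parfile=impdp.par`.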

3. Using alter table ** move tablespace ***

This command moves a table to another tablespace.

Advantages:

1) For a large table move, queries on the table are not affected; only DML operations are blocked

2) The index structure is not affected; the indexes only have to be rebuilt after the move completes

3) Other dependent objects are not affected, so there is no need to worry about object dependencies before the operation

4) The move operation can run in parallel

5) The NOLOGGING option can greatly speed up the move; if the moved table should be nologging, this must be specified explicitly

Disadvantages:

1) The current user must have privileges on both the original tablespace and the new tablespace

2) The table structure cannot be changed: a normal table cannot be converted into a partitioned table

3) With a very large data volume the move may not succeed

4) In practice, the move does not release the formatted disk space of the original tablespace's data files

5) The operation fails midway with an error if the target tablespace runs out of space
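A minimal sketch of this scheme (the table, index, and tablespace names are placeholders):

```sql
-- Move the table into the new tablespace, in parallel and without redo
ALTER TABLE s MOVE TABLESPACE tm_data2 NOLOGGING PARALLEL 4;

-- The move changes ROWIDs, so every index on the table becomes UNUSABLE
-- and has to be rebuilt afterwards
ALTER INDEX s_pk REBUILD TABLESPACE tm_indx PARALLEL 4;
```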

 

4. Using online table redefinition

For a table with a very large data volume that only needs partial modification, if an offline operation can solve the problem, do not use online redefinition; for example, archiving static data or migrating historical data can be done with CTAS, alter table move, or import/export.

Applicable scenarios:

1) Changing the table's physical attributes or storage parameters

2) Migrating the table to a different tablespace

3) Eliminating table fragmentation and reclaiming free space

4) Adding, deleting, or renaming columns of the table

5) Making bulk changes to the data in the table

Principle:

It is implemented through the DBMS_REDEFINITION package: first an interim table is created (treated like a fast-refresh materialized view), then the source table's data is loaded into the interim table, and a materialized view log is created on the source table so that changed data can be synchronized by fast refresh.
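A minimal sketch of the redefinition flow (the schema name SCOTT, source table S, and interim table T are hypothetical; the interim table must be created beforehand with the desired partitioned structure in the new tablespace):

```sql
-- Check that the source table can be redefined (by primary key here)
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'S', DBMS_REDEFINITION.CONS_USE_PK);

-- Start the redefinition: loads S's data into the interim table T
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'S', 'T');

-- Copy dependent objects (indexes, triggers, constraints, grants)
VARIABLE errs NUMBER
EXEC DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'S', 'T', DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, FALSE, :errs);

-- Optionally re-sync changes made on S while the copy was running
EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'S', 'T');

-- Swap the definitions; S now has the interim table's structure
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'S', 'T');
```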

Constraints and risks:

1) With the primary-key method, the redefined table and the original table must have the same primary key

2) With the ROWID method, the table must not be an index-organized table

3) The following cannot be redefined online: tables with materialized views or materialized view logs, materialized view container tables, temporary tables, advanced queue tables, overflow tables of index-organized tables, tables with BFILE or LONG columns, clustered tables, and tables owned by sys or system

4) Horizontal subsetting of the data is not supported

5) Only deterministic expressions can be used in column mappings; subqueries, for example, will not work

6) New columns added in the interim table must not have NOT NULL constraints

7) There must be no referential integrity constraints between the original table and the interim table

8) Online redefinition cannot use nologging

9) The new tablespace must be created in advance with at least as much free space remaining as the source tablespace, at least 5 TB here

10) The impact on the business is small, but the process takes a long time; a test with twenty million rows took about twenty minutes

11) If transactions on the source table are too frequent, serious waits may occur; best used where long-running transactions do not exist

Reference: https://www.sohu.com/a/166577098_505827

 

5. Using CTAS + RENAME

CTAS (create table as select) is a DDL statement: it generates no UNDO and only a small amount of REDO. After the table is built the data is already distributed into the partitions; finally the names of the source and target tables are swapped.

Core SQL: create table t(id, time) partition by range (time) (partition t1 values less than (to_date('201311', 'yyyymm')), partition t2 values less than (maxvalue)) nologging parallel 4 as select /*+parallel*/ id, time from s;

Performance tuning:

1) Add nologging: alter table t nologging; after completion, set the table back to logging as required

2) Parallel DML: alter session enable parallel dml;

3) Parallel query (the parallel hint in the select)
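After the CTAS above finishes, the handoff reduces to a name swap (s_old is a hypothetical name for the retired source table):

```sql
-- Restore logging once the bulk load is complete
ALTER TABLE t LOGGING;

-- Swap names: the new partitioned table takes over the source table's name
RENAME s TO s_old;
RENAME t TO s;
```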

6. Using INSERT + RENAME

This method suits moving a large volume of data from the source table into the partitions of a partitioned table: build the partitioned table structure, use insert to load the data that belongs to each partition into an interim table, exchange the interim table with the corresponding partition of the target table (alter table p exchange partition p1 with table t), and after each part is done swap the names of the source and target tables.

Performance tuning:

1) Modify the table to nologging

2) Enable parallel DML: alter session enable parallel dml;

3) Insert with the append hint

4) Build the indexes after all the data has been inserted

Shortcomings:

1) Consistency problems still exist: around the rename t_new to t swap, queries, updates, and deletes may fail or be unable to see the data, although this has not occurred so far

2) The data has to be distributed into multiple partitions, which increases operational complexity and lowers efficiency
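One round of the scheme above can be sketched as follows (t_mid is a hypothetical interim table with the same structure as a single partition; the date bound follows the core SQL of Scheme 5):

```sql
ALTER SESSION ENABLE PARALLEL DML;

-- Direct-path load of the rows that belong to partition p1
INSERT /*+ append parallel(t_mid 4) */ INTO t_mid
SELECT id, time FROM s
WHERE time < TO_DATE('201311', 'yyyymm');
COMMIT;

-- Swap the loaded interim table with the target partition (a metadata-only operation)
ALTER TABLE p EXCHANGE PARTITION p1 WITH TABLE t_mid;
```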

7. Data cleaning method

This method first builds the structure of the new table, then creates a task each time that inserts the data queried from the source table within a certain time range into the new table; finally the names of the source and target tables are swapped. The operation is simple.

Performance tuning:

1) Modify the table to nologging

2) Build the indexes after the data insertion completes

3) Enable parallel DML: alter session enable parallel dml;

4) Insert with the append hint
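Each batch of the copy can be sketched like this (t_new and the month bounds are hypothetical):

```sql
-- Copy one time slice from the source table into the new table
INSERT /*+ append */ INTO t_new
SELECT id, time FROM s
WHERE time >= TO_DATE('201301', 'yyyymm')
  AND time <  TO_DATE('201302', 'yyyymm');
-- A direct-path insert must be committed before the next batch can read t_new
COMMIT;
```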

III. Migration principles

  1. All indexes go into the TM_INDX tablespace
  2. Migrated tables (or partitions) are distributed as evenly as possible across the new tablespaces
  3. Choose an appropriate table structure (clustered table, IOT, partitioned table, ...) to improve query performance and maintainability
  4. Set the table's storage parameters reasonably to improve query performance and save storage space
  5. Choose appropriate indexes (function-based, bitmap, ...) and delete unneeded or unreasonable indexes

IV. Migration steps

1. Small tables

For tables with small data volumes, or partitioned tables that need no restructuring and no tablespace migration, migrate them uniformly to the new tablespace directly with Scheme 5 (CTAS + rename).

2. Migrating large tables

First rule out Scheme 3 (it does not free up disk space) and Scheme 4 (low efficiency, and transactions are not involved here):

1) Confirm the tables that need action (migrate, rebuild, delete)

2) Delete tables whose function is duplicated

3) Migrate tables with relatively small data volumes (10-100 GB) with Schemes 1, 2, 5, 6, and 7 respectively to verify their efficiency and feasibility

4) Based on the verification results, choose the best scheme to migrate the large tables

3. Indexes

Delete the indexes that are of little use.

Rebuild all indexes in the TM_DATA tablespace into the TM_INDX tablespace.
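The rebuild of each such index can be sketched as (idx_name is a placeholder):

```sql
-- Rebuild the index out of TM_DATA into TM_INDX; parallel and nologging speed it up
ALTER INDEX idx_name REBUILD TABLESPACE tm_indx NOLOGGING PARALLEL 4;

-- Restore the default attributes afterwards
ALTER INDEX idx_name LOGGING NOPARALLEL;
```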

Origin www.cnblogs.com/muphy/p/11595264.html