The operation of converting ORACLE ordinary table into partition table

[Foreword] Oracle officially recommends managing a table as a partitioned table once it grows beyond 2GB. Compared with ordinary tables, partitioned tables offer significant advantages in both manageability and performance. This document does not go into those advantages; it mainly introduces several methods for converting an ordinary table into a partitioned table.

[Method overview] Oracle officially describes the following four methods:

 A) Export/import method

 B) Insert with a subquery method

 C) Partition exchange method

 D) DBMS_REDEFINITION (online redefinition)



The common idea behind these methods is to create a new partitioned table, transfer the data from the old table into it, transfer the corresponding dependent objects, and finally swap the names of the new and old tables.
Methods A, B, and C all interfere with normal use of the system, so this document does not cover them in detail. It focuses on method D, online redefinition, which is the most commonly used way to convert an ordinary table into a partitioned table.
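
For comparison, method B (insert with a subquery) boils down to something like the sketch below. EMP_PART is a hypothetical partitioned copy of EMP, created with the same kind of PARTITION BY RANGE clause shown in step 2 later in this document; a maintenance window is assumed, because any writes to EMP during the copy would be lost.

-- Sketch of method B: copy the rows, rebuild dependents by hand, then swap names.
INSERT /*+ APPEND */ INTO SCOTT.EMP_PART SELECT * FROM SCOTT.EMP;
COMMIT;
-- Recreate indexes, constraints and grants on EMP_PART manually, then rename:
ALTER TABLE SCOTT.EMP RENAME TO EMP_OLD;
ALTER TABLE SCOTT.EMP_PART RENAME TO EMP;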

[Online redefinition for partition table operation] The overall procedure is as follows, taking the EMP table under SCOTT as an example.

1. Check whether the table can be redefined

BEGIN

DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT','EMP',DBMS_REDEFINITION.CONS_USE_PK);

END;

/

If the block completes with "PL/SQL procedure successfully completed.", the table can be redefined using the primary-key option.
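
Note that CONS_USE_PK requires the source table to have a primary key (the standard SCOTT.EMP has one on EMPNO). A quick check, purely for illustration:

SELECT constraint_name, status
  FROM dba_constraints
 WHERE owner = 'SCOTT'
   AND table_name = 'EMP'
   AND constraint_type = 'P';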

2. Create an interim (temporary) table partitioned by range on DEPTNO
CREATE TABLE SCOTT.EMP_1
(
  EMPNO    NUMBER(4),
  ENAME    VARCHAR2(10 BYTE),
  JOB      VARCHAR2(9 BYTE),
  MGR      NUMBER(4),
  HIREDATE DATE,
  SAL      NUMBER(7,2),
  COMM     NUMBER(7,2),
  DEPTNO   NUMBER(2)
)
PARTITION BY RANGE (DEPTNO)
(
  PARTITION EMP_A1 VALUES LESS THAN (20),
  PARTITION EMP_A2 VALUES LESS THAN (30),
  PARTITION EMP_A3 VALUES LESS THAN (40),
  PARTITION EMP_A4 VALUES LESS THAN (50),
  PARTITION EMP_A5 VALUES LESS THAN (60)
);
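
After the interim table is created, the partition layout can be confirmed with a query such as the following (for illustration only):

SELECT partition_name, high_value
  FROM dba_tab_partitions
 WHERE table_owner = 'SCOTT'
   AND table_name = 'EMP_1'
 ORDER BY partition_position;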


3. Start data migration
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'EMP', 'EMP_1');
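
The same call can also be written with named parameters; this equivalent sketch simply makes the primary-key based redefinition explicit:

BEGIN
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname        => 'SCOTT',
    orig_table   => 'EMP',
    int_table    => 'EMP_1',
    options_flag => DBMS_REDEFINITION.CONS_USE_PK);
END;
/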

4. If the table holds a lot of data, step 3 can take a long time, and during that time the system may keep writing to or updating the table EMP. In that case, execute the following statement so that the interim table stays in sync and the final step does not have to hold a lock for long (this step is optional).
BEGIN
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('SCOTT', 'EMP', 'EMP_1');
END;
/

5. Migrate dependent objects (indexes, constraints, triggers, and privileges)

DECLARE
  num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'EMP', 'EMP_1',
    DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, TRUE, num_errors);
END;
/

6. Check for errors: before finishing, query DBA_REDEFINITION_ERRORS to see whether any errors were raised during the redefinition:
select object_name, base_table_name, ddl_txt from DBA_REDEFINITION_ERRORS;

7. End the whole redefinition
BEGIN
DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'EMP', 'EMP_1');
END;
/
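
After FINISH_REDEF_TABLE completes, EMP should be the partitioned table and EMP_1 should now hold the old heap data. A quick verification, and an optional cleanup once everything checks out (sketch only, assuming nothing else still needs the interim table):

SELECT table_name, partitioned
  FROM dba_tables
 WHERE owner = 'SCOTT'
   AND table_name IN ('EMP', 'EMP_1');

-- Once verified, the interim table (now holding the old, non-partitioned data) can be dropped.
DROP TABLE SCOTT.EMP_1 PURGE;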

[Summary] I tested with a table of about 2.3GB and roughly 3.6 million rows; the whole process took about 56 seconds, which is quite fast. For a specific production environment, the operation should only be run after strict testing, and that testing will also give a rough idea of how long the whole process will take.

In addition, if an error occurs during the execution, you can end the entire process with the following statement:

BEGIN
  DBMS_REDEFINITION.ABORT_REDEF_TABLE(uname      => 'SCOTT',
                                      orig_table => 'EMP',
                                      int_table  => 'EMP_1');
END;
/
