XTTS Series, Part 1: A U2L Migration Solution Using XTTS

This series is an in-depth study of XTTS and related technologies. To open the series in a practical spirit, I have distilled a real production U2L (Unix-to-Linux) migration into a simplified walkthrough, with the clear intent of explaining how to use XTTS for a U2L migration. The primary goal is that you can follow along and do what I did; if you are interested, you can then dig into the details in depth.

1.XTTS Overview

This migration uses XTTS (Cross Platform Transportable Tablespaces), or more precisely the enhanced version of XTTS, which supports cross-platform endian conversion, a full initial copy, and multiple incremental roll-forwards before the cutover. It effectively shortens the production downtime of the formal migration phase, allowing the U2L migration to complete successfully. For example, the requirements of this migration are as follows:

Item                       Source        Target
IP address                 10.6.xx.xx    10.5.xx.xx
Operating system           AIX 5.3       RHEL 6.7
RAC                        No            Yes
Database name              sourcedb      targetdb
Business user to migrate   JINGYU        JINGYU
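To see why the endian conversion mentioned above is needed at all: AIX on POWER stores multi-byte values big-endian, while x86-64 Linux is little-endian, so every multi-byte field in a data file must be byte-swapped when crossing platforms. A minimal illustration in Python (purely conceptual; RMAN's convert step does the real work on Oracle data files):

```python
import struct

# The same 32-bit value as laid out on disk by each platform:
value = 0x01020304
big_endian = struct.pack('>I', value)     # AIX / POWER byte order
little_endian = struct.pack('<I', value)  # Linux x86-64 byte order

# The two layouts are exact byte reversals of each other, which is why
# data files cannot simply be copied between the two platforms.
assert big_endian == little_endian[::-1]
print(big_endian.hex(), little_endian.hex())  # 01020304 04030201
```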

2. Migration preparation phase

2.1 Self-contained check

Only the JINGYU user is being migrated, so it is enough to check the tablespaces holding JINGYU's segments to verify that the set is self-contained:

SQL> select distinct tablespace_name from dba_segments where owner='JINGYU' order by 1; 

TABLESPACE_NAME 
------------------------------ 
DBS_D_JINGYU 
DBS_I_JINGYU

SQL> execute dbms_tts.transport_set_check('DBS_D_JINGYU, DBS_I_JINGYU');
PL/SQL procedure successfully completed. 

SQL> select * from TRANSPORT_SET_VIOLATIONS; 
no rows selected

If the query above returns no rows, the tablespace set is self-contained.
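What dbms_tts.transport_set_check verifies, conceptually, is that no dependency crosses the boundary of the tablespace set (for example, an index inside the set built on a table outside it). A toy model of that idea in Python (my own simplification, not the actual dbms_tts logic):

```python
def is_self_contained(ts_set, dependencies):
    """ts_set: tablespaces to transport; dependencies: (object_ts, referenced_ts)
    pairs, e.g. an index in object_ts built on a table in referenced_ts.
    The set is self-contained if no dependency crosses its boundary."""
    ts = set(ts_set)
    return all((a in ts) == (b in ts) for a, b in dependencies)

# Indexes in DBS_I_JINGYU reference tables in DBS_D_JINGYU: both inside, OK.
deps = [('DBS_I_JINGYU', 'DBS_D_JINGYU')]
print(is_self_contained(['DBS_D_JINGYU', 'DBS_I_JINGYU'], deps))  # True
print(is_self_contained(['DBS_D_JINGYU'], deps))                  # False
```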

2.2 Create the XTTS working directory
I use /exp/newxx as the XTTS working directory. Create the relevant directories on both the source and the target, then upload and unzip the xttconvert scripts provided by MOS (Doc ID 1389592.1).

--Create the directories on the source (AIX)
mkdir -p  /exp/newxx
mkdir -p  /exp/newxx/src_backup
mkdir -p  /exp/newxx/tmp
mkdir -p  /exp/newxx/dump
mkdir -p  /exp/newxx/backup_incre
chown -R ora103:dba /exp/newxx

--Upload rman-xttconvert_2.0.zip to /exp/newxx on the source (AIX)
cd /exp/newxx
unzip rman-xttconvert_2.0.zip

--Create the directories on the target (Linux)
mkdir -p  /exp/newxx
mkdir -p  /exp/newxx/src_backup
mkdir -p  /exp/newxx/tmp
mkdir -p  /exp/newxx/dump
mkdir -p  /exp/newxx/backup_incre
chown -R ora11g:dba /exp/newxx

--Upload rman-xttconvert_2.0.zip to /exp/newxx on the target (Linux)
cd /exp/newxx
unzip rman-xttconvert_2.0.zip

2.3 Enable BCT on the source
Enable BCT (block change tracking) on the source:

SQL> alter database enable block change tracking using file '/exp/newxx/bct2'; 
Database altered.

If during the test phase you find that BCT is not effective (the incremental backup takes very long), consider taking a manual level 0 backup of the tablespaces:

--Manually take a level 0 backup of the tablespaces to be migrated (only so that incrementals can use BCT; no other recovery is done with it)
RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 16 BACKUP TYPE TO BACKUPSET;
RMAN> backup incremental level 0 tablespace DBS_D_JINGYU, DBS_I_JINGYU format '/exp/test/%U.bck';

Note: this is a special case that you will not necessarily encounter; decide whether to do it based on your actual test results. The level 0 backup of these roughly 2 TB of tablespaces took about 2 hours.

2.4 Configure xtt.properties

Configure the xtt.properties file on the source (AIX):

cd /exp/newxx
vi xtt.properties

#Add the following configuration:
tablespaces=DBS_D_JINGYU,DBS_I_JINGYU
platformid=6
dfcopydir=/exp/newxx/src_backup
backupformat=/exp/newxx/backup_incre
backupondest=/exp/newxx/backup_incre
stageondest=/exp/newxx/src_backup
storageondest=+DG_DATA/targetdb/datafile
parallel=16
rollparallel=16
getfileparallel=6

Configure the xtt.properties file on the target (Linux):

cd /exp/newxx
vi xtt.properties

#Add the following configuration:
tablespaces=DBS_D_JINGYU,DBS_I_JINGYU
platformid=6
dfcopydir=/exp/newxx/src_backup
backupformat=/exp/newxx/backup_incre
backupondest=/exp/newxx/backup_incre
stageondest=/exp/newxx/backup_incre
storageondest=+DG_DATA/targetdb/datafile
parallel=16
rollparallel=16
getfileparallel=6
asm_home=/opt/app/11.2.0/grid
asm_sid=+ASM1

NOTE: platformid=6 is the platform id of the source OS, determined by querying the platform_id column of v$database on the source; you can also look up the id corresponding to each platform in v$transportable_platform.
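For reference, the ids for the two platforms in this migration, as listed in v$transportable_platform, captured as a small Python lookup sketch (run `select platform_id, platform_name, endian_format from v$transportable_platform;` to confirm the values in your own environment):

```python
# platform_id -> (platform_name, endian_format), per v$transportable_platform.
# Only the two platforms involved in this migration are listed here.
PLATFORMS = {
    6:  ('AIX-Based Systems (64-bit)', 'Big'),
    13: ('Linux x86 64-bit', 'Little'),
}

def needs_endian_conversion(source_id, target_id):
    """Conversion is required whenever the endian formats differ."""
    return PLATFORMS[source_id][1] != PLATFORMS[target_id][1]

# AIX (6) -> Linux x86-64 (13): conversion is required, as in this migration.
print(needs_endian_conversion(6, 13))  # True
```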

2.5 Create the user and roles on the target in advance
Create the JINGYU user on the target; the default tablespace can be corrected after the metadata import completes.
Run the following on the source to generate the statements that create the user, grant the corresponding roles, and grant object privileges, then run the generated scripts on the target (if you already know the business user's password, privileges, and other details, you can also simply create the user directly):

--Run on the source:
--create user
sqlplus -S / as sysdba
set pages 0
set feedback off
spool /exp/newxx/scripts/create_user.sql
select 'create user '||name||' identified by values '''||password||''';' from user$ where name = 'JINGYU' and type#=1;
spool off
exit

--create role
sqlplus -S / as sysdba
set pages 0
set feedback off
spool /exp/newxx/scripts/create_role.sql
select 'grant '||GRANTED_ROLE||' to '||grantee||';' from dba_role_privs where grantee = 'JINGYU';
spool off
exit

--Privileges on SYS-owned tables must be granted manually
sqlplus -S / as sysdba
set pages 0
set feedback off
spool /exp/newxx/scripts/grant_sys_privs.sql
select 'grant '||PRIVILEGE||' on '||owner||'.'||table_name||' to '||GRANTEE||';' from dba_tab_privs where owner='SYS'  and GRANTEE = 'JINGYU';
spool off
exit

--Verify the generated SQL on the source:
cat /exp/newxx/scripts/create_user.sql
cat /exp/newxx/scripts/create_role.sql
cat /exp/newxx/scripts/grant_sys_privs.sql

--Run on the target:
@/exp/newxx/scripts/create_user.sql
@/exp/newxx/scripts/create_role.sql
@/exp/newxx/scripts/grant_sys_privs.sql

2.6 Full tablespace backup
On the source (AIX), run the xttdriver script, driven by the configuration file created earlier, to take a full backup of the tablespaces being transported; the data files will later be restored and converted on the target. After each backup (full or incremental), the working files change to support the next incremental recovery, primarily the recorded SCN of each tablespace.
Increase the RMAN backup parallelism:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 16 BACKUP TYPE TO BACKUPSET;

Edit the backup script. A failed backup leaves failure files in /exp/newxx/tmp that must be removed before running the backup again.

cd /exp/newxx
--full_backup.sh contains the following
export ORACLE_SID=sourcedb
export TMPDIR=/exp/newxx/tmp
export PERL5LIB=/opt/app/ora103/10.2.0/product/perl/lib
/opt/app/ora103/10.2.0/product/perl/bin/perl /exp/newxx/xttdriver.pl -p -d

Perform full backup in the background:

cd /exp/newxx
nohup sh full_backup.sh > full_backup.log &

Check the size of the full backup generated under /exp/newxx/src_backup (in this test it was 2 TB; the backup took 4 hours 34 minutes).

2.7 Full tablespace restore and conversion
Transfer the files to the target:

cd /exp/newxx/src_backup
scp * [email protected]:/exp/newxx/src_backup
--The scp copy took 10 hours
cd /exp/newxx/tmp
scp * [email protected]:/exp/newxx/tmp

On the target (Linux), run the tablespace restore and conversion; the data files are converted into the ASM disk group. If a restore fails (see the special notes below in Section 3.2), the leftover files in /exp/newxx/tmp must be deleted before running it again.

cd /exp/newxx
--full_restore.sh contains the following
export TMPDIR=/exp/newxx/tmp
export ORACLE_SID=targetdb1
/opt/app/ora11g/product/11.2.0/perl/bin/perl /exp/newxx/xttdriver.pl -c -d

Run the full restore and conversion in the background:

nohup sh full_restore.sh > full_restore.log &

The restore and conversion took 4 hours 15 minutes.

3. Incremental roll-forward stage

3.1 Incremental tablespace backup
Take the incremental backup on the source:

cd /exp/newxx
--The incremental backup script incre_backup.sh contains the following
export ORACLE_SID=sourcedb
export TMPDIR=/exp/newxx/tmp
export PERL5LIB=/opt/app/ora103/10.2.0/product/perl/lib
/opt/app/ora103/10.2.0/product/perl/bin/perl /exp/newxx/xttdriver.pl -i -d

Run the incremental backup in the background:

cd /exp/newxx
nohup sh incre_backup.sh > incre_backup.log &

Before the incremental backup, confirm that xtt.properties is configured correctly. The incremental backup should take only a few minutes, which indicates that BCT is working.

--(Optional) Before the second incremental run, create a test table for verification:
SQL> create table JINGYU.xttstest tablespace DBS_D_JINGYU as select * from dba_objects;
SQL> select count(1) from JINGYU.xttstest;

Transfer the files to the target:

cd /exp/newxx/backup_incre
scp *_1_1 [email protected]:/exp/newxx/backup_incre
cd /exp/newxx/tmp
scp * [email protected]:/exp/newxx/tmp

3.2 Incremental tablespace recovery
Run the incremental recovery on the target:

cd /exp/newxx
--incre_recover.sh contains the following
export TMPDIR=/exp/newxx/tmp
export ORACLE_SID=targetdb1
/opt/app/ora11g/product/11.2.0/perl/bin/perl /exp/newxx/xttdriver.pl -r -d

Run the incremental recovery in the background:

nohup sh incre_recover.sh > incre_recover.log &

Special notes:
1. The incremental roll-forward above can be repeated many times before the formal migration, applying multiple incremental restores to the target tablespaces so that the target database stays almost identical to production right up to the cutover; this significantly reduces migration downtime.
2. After each successful backup (full or incremental), files are generated under /exp/newxx/tmp on the source; all files in this directory must be transferred to /exp/newxx/tmp on the target (overwriting them each time is fine).
3. After each backup (full or incremental), a new xttplan.txt.new file appears under /exp/newxx/tmp; it records the latest SCN of each tablespace. Before each incremental restore on the Linux target, rename the old file and promote the new one:
cd /exp/newxx/tmp
mv xttplan.txt xttplan.old1.txt
mv xttplan.txt.new xttplan.txt

4. Formal migration phase

4.1 Set the tablespaces read only
After the application side has stopped the business, confirm at the database level that no user sessions remain connected;
then on the source (AIX) set the tablespaces to be transported to READ ONLY:

sqlplus -S / as sysdba
set pages 0
set feedback off
spool /exp/newxx/scripts/read_only.sql
select 'alter tablespace '||name||' read only;' from v$tablespace where name in ('DBS_D_JINGYU','DBS_I_JINGYU') order by 1;
spool off
exit

cat /exp/newxx/scripts/read_only.sql 
@/exp/newxx/scripts/read_only.sql

4.2 Last incremental run
Following the same procedure as the incremental roll-forward stage, complete the last incremental backup and recovery.
In this test, the last incremental backup took 21 minutes.

4.3 Enable flashback on the target
On the target (Linux), enable flashback before importing the metadata:

SQL> alter system set db_recovery_file_dest_size=100g scope=both;
System altered.

SQL> alter system set db_recovery_file_dest='+DG_DATA' scope=both;
System altered.

SQL> alter database flashback on;
Database altered.

SQL> select flashback_on from v$database;
FLASHBACK_ON
------------------
YES

SQL> create restore point before_imp_xtts guarantee flashback database;
Restore point created.

SQL> select name from v$restore_point;
Confirm that the restore point just created is present.

4.4 Import the XTTS metadata
4.4.1 Export the XTTS metadata on the source (AIX):

create directory dump as '/exp/newxx/dump';

Export the tablespace and user metadata:

--Export the tablespace metadata (vi expdp_xtts.sh)
expdp system/oracle parfile=expdp_xtts.par
--expdp_xtts.par contains the following:
directory=dump
dumpfile=tbs_xtts.dmp
logfile=expdp_xtts.log
transport_tablespaces=('DBS_D_JINGYU','DBS_I_JINGYU')
transport_full_check=y
metrics=yes

--Export the user metadata (vi expdp_xtts_other.sh)
expdp system/oracle parfile=expdp_xtts_other.par
--expdp_xtts_other.par contains the following
directory=dump
dumpfile=tbs_xtts_other.dmp
logfile=expdp_xtts_other.log
content=metadata_only
schemas=JINGYU
metrics=yes

Run the tablespace and user metadata export scripts:

cd /exp/newxx/dump
./expdp_xtts.sh
./expdp_xtts_other.sh

After the export completes, transfer the dump files to the /exp/newxx/dump directory on the target:

cd /exp/newxx/dump
scp *.dmp [email protected]:/exp/newxx/dump

4.4.2 Import the XTTS metadata on the target (Linux):
Create the directory:

create or replace directory dump as '/exp/newxx/dump';

Import the XTTS metadata:

--Import the XTTS metadata (vi impdp_xtts.sh)
impdp system/oracle parfile=impdp_xtts.par
--impdp_xtts.par contains the following:
directory=dump
logfile=impdp_xtts.log
dumpfile=tbs_xtts.dmp
cluster=n
metrics=yes
transport_datafiles='+DG_DATA/targetdb/DATAFILE/DBS_D_JINGYU.290.976290433',
'+DG_DATA/targetdb/DATAFILE/DBS_I_JINGYU.286.976290433'

NOTE: the data file paths above must be adjusted to the actual situation before the import.

Run the XTTS metadata import script:

cd /exp/newxx/dump
./impdp_xtts.sh
SQL> select count(1) from JINGYU.xttstest;
The query returns a normal result.

After execution, verify that the import succeeded.

4.5 Set the tablespaces read write
Set the tablespaces to read write on the target (Linux):

sqlplus -S / as sysdba
set pages 0
set feedback off
spool /exp/newxx/scripts/read_write.sql
select 'alter tablespace '||name||' read write;' from v$tablespace where name in ('DBS_D_JINGYU','DBS_I_JINGYU') order by 1;
spool off
exit 

cat /exp/newxx/scripts/read_write.sql
@/exp/newxx/scripts/read_write.sql

4.6 Create a second flashback restore point
On the target (Linux), create another guaranteed restore point before importing the other metadata:

sqlplus / as sysdba
select flashback_on from v$database;
create restore point before_imp_other guarantee flashback database;
select name from v$restore_point;

4.7 Import the other metadata
Import the remaining metadata:

--Import the other metadata (vi impdp_xtts_other.sh)
impdp system/oracle parfile=impdp_xtts_other.par 
--impdp_xtts_other.par contains the following
directory=dump 
dumpfile=tbs_xtts_other.dmp
logfile=impdp_xtts_other.log 
content=metadata_only
schemas=JINGYU
cluster=n
metrics=yes

Run the other-metadata import script:

cd /exp/newxx/dump
./impdp_xtts_other.sh

4.8 Check public dblinks
Query the public dblinks in the original production environment; if any rows are returned, create them in the new production environment:

--Query in the original production environment:
select * from dba_db_links;
OWNER                          DB_LINK                        USERNAME                       HOST                           CREATED
------------------------------ ------------------------------ ------------------------------ ------------------------------ -------------------
JINGYU                          XXX_TMP                        JINGYU                          xxdb                          2008-05-20 09:51:14

select dbms_metadata.get_ddl('DB_LINK',DB_LINK,'JINGYU') FROM DBA_DB_LINKS where owner='JINGYU';

CREATE DATABASE LINK "XXX_TMP"
   CONNECT TO "JINGYU" IDENTIFIED BY VALUES '056414CFC01C4F42E2E496B913FDC0212A'
   USING 'xxdb';

--Connect as the JINGYU user to create it; the user has no privilege at first, so grant it:
grant create database link to JINGYU;

4.9 Check public synonyms
Query the public synonyms in the original production environment; if any rows are returned, create them in the new production environment:

select owner,SYNONYM_NAME,TABLE_OWNER,TABLE_NAME from dba_synonyms where owner='PUBLIC' and table_owner in ('JINGYU');

No rows were returned in this case.

4.10 Check external tables
Query the external table information in the original production environment; if any rows are returned, create them in the new production environment:

SQL> select * from dba_external_tables;

No rows were returned in this case.

4.11 Compare the data
Run the same comparison query in the source and target environments:

set linesize 200
set pagesize 9999
col owner format a15
col object_type format a15
select owner, object_type, count(*)
  from dba_objects
 where object_name not like 'BIN%'
   and owner in ('JINGYU')
 group by owner, object_type
 order by 1,2 desc;

OWNER           OBJECT_TYPE       COUNT(*)
--------------- --------------- ----------
JINGYU           VIEW                     2
JINGYU           TABLE PARTITION         25
JINGYU           TABLE                   49
JINGYU           SEQUENCE                 4
JINGYU           PROCEDURE                5
JINGYU           INDEX PARTITION        225
JINGYU           INDEX                   55
JINGYU           FUNCTION                 3
JINGYU           DATABASE LINK            1

9 rows selected.
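Eyeballing the two result sets is error-prone once the object counts grow; a small diff helper makes the comparison mechanical. A sketch in Python (the example counts are taken from the listing above, with a deliberate mismatch injected to show the output):

```python
def diff_counts(source, target):
    """Return {object_type: (source_count, target_count)} for every mismatch."""
    return {
        t: (source.get(t, 0), target.get(t, 0))
        for t in source.keys() | target.keys()
        if source.get(t, 0) != target.get(t, 0)
    }

# Counts per object type, as produced by the dba_objects query on each side.
src = {'TABLE': 49, 'VIEW': 2, 'SEQUENCE': 4, 'PROCEDURE': 5}
tgt = {'TABLE': 48, 'VIEW': 2, 'SEQUENCE': 4, 'PROCEDURE': 5}
print(diff_counts(src, tgt))  # {'TABLE': (49, 48)}
```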

4.12 Compile invalid objects
Query the number of invalid objects, grouped by object type:

sqlplus / as sysdba
set timing on
select owner, object_type, count(*)
  from dba_objects
 where status <> 'VALID'
   and owner in ('JINGYU')
 group by owner, object_type
 order by 1, 2 desc;

OWNER           OBJECT_TYPE       COUNT(*)
--------------- --------------- ----------
JINGYU           PROCEDURE                1
JINGYU           FUNCTION                 1

--List the specific invalid objects and compare source and target:
set linesize 200
set pagesize 9999
col owner format a15
col object_type format a15
col OBJECT_NAME for a32
select owner,object_name, object_type, status
  from dba_objects
 where status <> 'VALID'
   and owner in ('JINGYU') order by 2;

OWNER           OBJECT_NAME                      OBJECT_TYPE     STATUS
--------------- -------------------------------- --------------- -------
JINGYU           XXXXCITY                         FUNCTION        INVALID
JINGYU           TAB_MAINTAIN_XXX                 PROCEDURE       INVALID

Compile the invalid objects (this can be skipped if the invalid objects are the same on both sides):
exec utl_recomp.recomp_parallel(64);

4.13 Change the user's default tablespace
Because the tablespace was created later than the user, the default tablespace must be changed manually:

alter user JINGYU default tablespace DBS_D_JINGYU;
select username,default_tablespace,temporary_tablespace from dba_users where username='JINGYU';

4.14 Delete the restore points

After confirming that the migration succeeded, manually remove the restore points:

--(Optional) Drop the earlier test table first:
drop table JINGYU.xttstest;

--Manually drop the restore points:
drop restore point BEFORE_IMP_XTTS;
drop restore point before_imp_other;
select name from v$restore_point;

--Disable the flashback database feature:
SQL> alter database flashback off;

Database altered.

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
NO

SQL> alter system set db_recovery_file_dest='' sid='*' scope=both;

System altered.

4.15 Modify service parameters
Manually set the services and parameters to match the source:

--Keep the service_names parameter consistent with the source:
show parameter service_names
alter system set service_names ='dbaas','targetdb','jingyursv', 'jingyu' sid='*' scope=both;
alter system register;

Check the registration of the listener:

--This environment uses a non-default listener named targetdb:
lsnrctl status targetdb

5. Other considerations

The notes below combine colleagues' experience from other real XTTS migration projects with situations I encountered during this implementation. They list additional XTTS considerations (you are welcome to add the pitfalls you have hit with XTTS and share your own experience):

  • 1. The time an XTTS migration requires varies from case to case; assess the needed downtime for each migration.
  • 2. The XTTS migration speed is constrained mainly by the metadata export and import time (which cannot use parallelism); the more objects, the slower the export and import.
  • 3. The import checks whether other users have built indexes on the exported user's tablespaces; such indexes are not detected by the self-contained check, so check for them before the full backup and deal with these objects in advance. I did not encounter any this time:
  /* Query detailed information about the objects in the tablespaces */
  SELECT OWNER                  AS OWNER
       ,SEGMENT_NAME           AS SEGMENT_NAME
       ,SEGMENT_TYPE           AS SEGMENT_TYPE
     ,SUM(BYTES)/1024/1024   AS SEGMENT_SIZE
  FROM DBA_SEGMENTS
  WHERE TABLESPACE_NAME in ('DBS_D_JINGYU','DBS_I_JINGYU') and owner <> 'JINGYU'
  GROUP BY OWNER,SEGMENT_NAME,SEGMENT_TYPE
  ORDER BY 4;
  • 4. Every database migrated with XTTS must be tested individually, and detailed operational documentation must be produced (the object dependencies between users differ for each database, and the interdependencies can be hard to untangle when importing the metadata; testing and sorting them out ahead of time is mandatory).
  • 5. The XTTS target must be 11.2.0.4 or later (which provides the data file conversion capability); the source database must be at least 10g.
  • 6. When the target runs the full or incremental recovery, the recovery script restarts the running instance (pay special attention to this if the target database also hosts other production business users).
  • 7. During testing I hit a case where data file names containing special characters in the source database caused files to be missing from the full tablespace backup, with no error reported in the log; the missing files were only discovered during the recovery phase. It is recommended that, as part of preparation, you compare the data file counts after the backup:
  select count(1) from dba_data_files where tablespace_name in ('DBS_D_JINGYU','DBS_I_JINGYU');
  --This migration has 135 data files; compare with the number of files produced by the full tablespace backup and confirm they match.

Regarding item 7: I actually encountered this and reproduced it in a test environment; see my earlier post for details.

Origin www.cnblogs.com/jyzhao/p/11260010.html