10g+: Transportable Tablespaces Across Different Platforms (Doc ID 243304.1)

APPLIES TO:

Oracle Database - Enterprise Edition - Version 10.1.0.2 to 12.2.0.1 [Release 10.1 to 12.2]
Oracle Database - Standard Edition - Version 10.1.0.2 to 12.2.0.1 [Release 10.1 to 12.2]
Oracle Database Cloud Schema Service - Version N/A and later
Oracle Database Exadata Cloud Machine - Version N/A and later
Oracle Cloud Infrastructure - Database Service - Version N/A and later
Information in this document applies to any platform.

PURPOSE


Introduction

This bulletin explains how tablespaces can be transported between different OS platforms (cross-platform), as well as between different RDBMS versions. The cross-platform feature is available from 10g onwards. The list below is taken from 11.2.0.3; older RDBMS versions may contain fewer platforms.

SQL> -- This list taken from 11.2.0.3. Older RDBMS versions may contain fewer platforms.
SQL> -- The list will not contain the platform info for the database from which you are running the query.
SQL> col platform_name for a35
SQL> select * from v$transportable_platform order by platform_id;

PLATFORM_ID PLATFORM_NAME                       ENDIAN_FORMAT
----------- ----------------------------------- --------------
          1 Solaris[tm] OE (32-bit)             Big
          2 Solaris[tm] OE (64-bit)             Big
          3 HP-UX (64-bit)                      Big
          4 HP-UX IA (64-bit)                   Big
          5 HP Tru64 UNIX                       Little
          6 AIX-Based Systems (64-bit)          Big
          7 Microsoft Windows IA (32-bit)       Little
          8 Microsoft Windows IA (64-bit)       Little
          9 IBM zSeries Based Linux             Big
         10 Linux IA (32-bit)                   Little
         11 Linux IA (64-bit)                   Little
         12 Microsoft Windows x86 64-bit        Little
         13 Linux x86 64-bit                    Little
         14 Linux x86 32-bit                    Little
         15 HP Open VMS                         Little
         16 Apple Mac OS                        Big
         17 Solaris Operating System (x86)      Little
         18 IBM Power Based Linux               Big
         19 HP IA Open VMS                      Little
         20 Solaris Operating System (x86-64)   Little
         21 Apple Mac OS (x86-64)               Little

SQL> -- You can easily find the platform info for the database running the above query by using the following SQL:

SQL> SELECT tp.platform_id,substr(d.PLATFORM_NAME,1,30), ENDIAN_FORMAT
     FROM V$TRANSPORTABLE_PLATFORM tp, V$DATABASE d
     WHERE tp.PLATFORM_NAME = d.PLATFORM_NAME;

The output of the query can change with the version, so please use the query above to find the currently supported platforms. In previous releases, the transportable tablespace feature allowed transfers only between platforms of the same architecture.

SCOPE

- Publish structured data and distribute it for integration on other platforms
- Distribute data from a data warehouse (DW) environment to data marts, which are typically on different platforms
- Share read-only tablespaces across heterogeneous clusters
- Migrate a database from one platform to another by rebuilding only the catalog and transporting the datafiles

DETAILS

Steps

1. Check for restrictions

Review the "Limitations on Transportable Tablespace Use" section in Note 371556.1

Among other things, objects that reside in the SYSTEM tablespace and objects owned by SYS will not be transported. This includes, but is not limited to, users, privileges, PL/SQL stored procedures, and views.
If you use spatial indexes, apply the solution in Note 579136.1 "IMPDP Transportable Tablespace Fails For Spatial Index" before continuing.
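
To see whether the candidate tablespaces contain any SYS-owned segments before going further, a minimal sketch (assuming the tablespace names TBS1 and TBS2 used in the examples below):

SQL> select owner, segment_name, segment_type
     from dba_segments
     where tablespace_name in ('TBS1','TBS2')
       and owner = 'SYS';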

2. Prepare the database

Check that the tablespace set is self-contained:
SQL> execute sys.dbms_tts.transport_set_check('TBS1,TBS2', true);
SQL> select * from sys.transport_set_violations;

 

==> These violations must be resolved before the tablespaces can be transported.

Set the tablespace to READ ONLY:
SQL> alter tablespace TBS1 read only;
Tablespace altered.
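
As an optional sanity check, the tablespace status can be verified before exporting the metadata (a minimal sketch using the standard DBA_TABLESPACES view):

SQL> select tablespace_name, status
     from dba_tablespaces
     where tablespace_name = 'TBS1';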

3. Export metadata

<HP-UX> exp userid=\'/ as sysdba\' transport_tablespace=y
tablespaces=TBS1
file=tts.dmp log=exp_tts.log
statistics=none

Export: Release 10.2.0.4.0 - Mon Nov 26 11:49:49 2007
...

Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
For tablespace TBS1 ...
. exporting cluster definitions
. exporting table definitions
. . exporting table COL_CHG
. . exporting table DATABASES
....
. . exporting table SYSUSERS
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export terminated successfully without warnings.

 

Review the export log for warnings and errors and resolve any issues before continuing. Failure to do so can result in data loss.
Data Pump can also be used for this purpose:

expdp \'/ as sysdba\' directory=tts_dump dumpfile=tts1_dp.dmp logfile=tts_dump_log:tts.log
transport_tablespaces=TBS1 transport_full_check=y

Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_02": system/******** directory=tts_datafile dumpfile=tts1.dmp logfile=tts_dump_log:tts.log transport
_tablespaces=TBS1 transport_full_check=y
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_02" successfully loaded/unloaded
***********************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_02 is:
+DATA/tts1.dmp
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_02" successfully completed at 14:00:34

Movement of data and enabling TTS

4. Check the endianness of the target database and convert, if necessary

Case 1: Same Endianness (Big -> Big or Little -> Little)

The source platform is Sun SPARC Solaris: endianness Big
The target platform is HP-UX (64-bit): endianness Big

SQL> SELECT PLATFORM_ID, PLATFORM_NAME, ENDIAN_FORMAT FROM V$TRANSPORTABLE_PLATFORM;

PLATFORM_ID PLATFORM_NAME                       ENDIAN_FORMAT
----------- ----------------------------------- --------------
          1 Solaris[tm] OE (32-bit)             Big
          2 Solaris[tm] OE (64-bit)             Big
          3 HP-UX (64-bit)                      Big

 

File conversion is NOT needed for files that meet all three of the following requirements: (1) the source and target OS have the same endianness (bitness does not matter), (2) the files will be imported into an RDBMS version that contains the fix for unpublished Bug 8973825 (10.2.0.5, or 11.2.0.2 and higher), and (3) the files do not contain undo or rollback segments. If the fix for unpublished Bug 8973825 is not available for your target database version, then you need to use the RMAN convert feature as shown in Case 2 below.
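
A minimal sketch to compare the two endian formats before deciding (the target platform name 'HP-UX (64-bit)' is taken from this example; substitute your own target platform):

SQL> -- On the source database: its own platform and endianness
SQL> select d.platform_name, tp.endian_format
     from v$database d, v$transportable_platform tp
     where d.platform_name = tp.platform_name;

SQL> -- Endianness of the intended target platform
SQL> select platform_name, endian_format
     from v$transportable_platform
     where platform_name = 'HP-UX (64-bit)';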

Case 2: Different Endianness (Big -> Little or Little -> Big)

The source platform is Microsoft Windows NT: endianness Little
The target platform is HP-UX (64-bit): endianness Big

If we simply move the files and import the tablespace:

. importing SYS's objects into SYS
IMP-00017: following statement failed with ORACLE error 1565:
"BEGIN sys.dbms_plugts.beginImpTablespace('TBS_TTS',37,'SYS',1,0,8192,2,57"
"54175,1,2147483645,8,128,8,0,1,0,8,462754339,1,1,5754124,NULL,0,0,NULL,NULL"
"); END;"
IMP-00003: ORACLE error 1565 encountered
ORA-01565: error in identifying file '/database/db1/VB2/datafile/tbs1df.dbf'
ORA-27047: unable to read the header block of file
HP-UX Error: 2: No such file or directory
Additional information: 2
ORA-06512: at "SYS.DBMS_PLUGTS", line 1540
ORA-06512: at line 1
IMP-00000: Import terminated unsuccessfully

 

You have to convert the files; the files can be converted on the source OR on the target:
-> Locally on the SOURCE, before the import step, so that the files are endian compatible:

<Solaris> rman target=/

Recovery Manager: Release 10.2.0.4.0 - 64bit
connected to target database: VB2 (DBID=3287908689)

RMAN> convert tablespace 'TBS1'
2> to platform="Linux IA (32-bit)"
3> db_file_name_convert='/database/db1/VB2/datafile/tbs1df.dbf',
4> '/tmp/tbs1df.dbf';

Starting backup at 26-NOV-07
using target database controlfile instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=8 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00006 name=/database/db1/VB2/datafile/tbs1df.dbf
converted datafile=/tmp/reposit01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
Finished backup at 26-NOV-07

The converted datafile is staged in the /tmp directory until it is copied to the target server.

-> Remotely on the TARGET, after having copied the files to the target server:

Conversion on the target platform is the way forward when v$transportable_platform on the source does not list the target platform. When the conversion is done on the target platform, CONVERT DATAFILE is used instead of CONVERT TABLESPACE, i.e.:

RMAN> CONVERT DATAFILE
'/database/db1/VB2/datafile/tbs1df.dbf'
TO PLATFORM="Linux IA (32-bit)"
FROM PLATFORM="HP Tru64 UNIX"
DB_FILE_NAME_CONVERT="/database/db1/VB2/datafile/", "/tmp/";

5. Move the datafiles and the export dump file

Copy the following files to the target server via FTP (in binary mode) or an equivalent transfer tool:

    tts.dmp
    /database/db1/VB2/datafile/tbs1df.dbf   (no conversion)

or

    /tmp/tbs1df.dbf   (converted file, if conversion had been required)
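
A hedged sketch of such a transfer, assuming the hypothetical host name target_host and OS user oracle (scp is only one option; any binary-safe copy method works):

$ scp tts.dmp /tmp/tbs1df.dbf oracle@target_host:/database/db1/VB2/datafile/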

6. Import metadata

Note: Users need to be created in the target database first, with an existing tablespace as their default tablespace.
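
A minimal sketch of pre-creating such a user (OMWB is the schema owner seen in the import log below; the password placeholder and the USERS/TEMP tablespace names are assumptions, adjust to your environment):

SQL> create user omwb identified by <password>
     default tablespace users
     temporary tablespace temp;
SQL> grant create session, create table to omwb;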

$ imp userid=\'/ as sysdba\' TRANSPORT_TABLESPACE=Y
datafiles=/database/db1/VB2/datafile/tbs1df.dbf
(or /tmp/tbs1df.dbf )
file=tts.dmp log=imp_tts.log

Import: Release 10.2.0.4.0 - Mon Nov 26 03:37:20 2007

Export file created by EXPORT:V10.02.00 via conventional path
About to import transportable tablespace(s) metadata...
...
. importing SYS's objects into SYS
. importing OMWB's objects into OMWB
. . importing table "COL_CHG"
...
. . importing table "SYSUSERS"
Import terminated successfully without warnings.

 

Review the import log for warnings and errors and resolve any issues before continuing. Failure to do so can result in data loss.
If the export was done with Data Pump, the import must be done with the same tool:

impdp \'/ as sysdba\' directory=tts_dump dumpfile=tts1_dp.dmp logfile=tts_dump_log:tts.log
transport_datafiles='/database/oradata/tbs1.dbf','/database/oradata/tts2_db1.dbf'

 

It is not possible to import when the tablespace already exists in the target or when the target schema has not been created. If the users do not exist, Data Pump provides an alternative via REMAP_SCHEMA (with the original import utility, the schema must be created beforehand), i.e.:

REMAP_SCHEMA=<source_user>:<target_user> 

 

If the tablespace already exists in the target, we can use the REMAP_TABLESPACE parameter of impdp (there is no such option in the original import utility; instead, rename the tablespace at the source or rename the existing one at the target).

REMAP_TABLESPACE=(<source_tbs1>:<target_tbs1>,<source_tbs2>:<target_tbs2>,...)
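
For illustration only, the remap parameters can be combined with the transportable import shown earlier (the dump file, directory objects and datafile path come from the examples above; the target names OMWB_NEW and TBS1_NEW are hypothetical):

impdp \'/ as sysdba\' directory=tts_dump dumpfile=tts1_dp.dmp logfile=tts_dump_log:tts.log
transport_datafiles='/database/oradata/tbs1.dbf'
remap_schema=OMWB:OMWB_NEW remap_tablespace=TBS1:TBS1_NEW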

7. Set the imported tablespace to READ WRITE

Note: After the tablespaces are read write, you will want to alter your users' default tablespaces to the correct ones (see the sketch after the example below).

SQL> alter tablespace reposit read write;
Tablespace altered.
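
A minimal sketch of that adjustment (OMWB is the schema owner from the import example; substitute your own user and tablespace names):

SQL> alter user omwb default tablespace reposit;
User altered.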

8. Take a full, no-rows export of the source database and import it into the target database to create the missing objects that are not transported with TTS, such as sequences, roles, etc.

- For traditional export, the parameter ROWS is set to N:

exp FULL=y GRANTS=y CONSTRAINTS=y ROWS=n

- For Data Pump export, the parameter CONTENT is set to METADATA_ONLY:

expdp FULL=y CONTENT=metadata_only

Still have questions?

    • In case you need to move the tablespace into ASM, the steps are quite similar and Note 394798.1 can help as it describes the full process.
    • See Document 1166564.1 Master Note for Transportable Tablespaces (TTS) -- Common Questions and Issues
    • Use MOS Data Warehousing community to search for similar discussions or start a new discussion on this subject.

REFERENCES

NOTE:394798.1 - How to Create Transportable Tablespaces Where the Source and Destination are ASM-Based
NOTE:371556.1 - How to Migrate to different Endian Platform Using Transportable Tablespaces With RMAN
NOTE:100693.1 - Getting Started with Transportable Tablespaces
NOTE:733824.1 - How To Recreate A Database Using TTS (Transportable TableSpace)
NOTE:77523.1 - Transportable Tablespaces -- An Example to Setup and Use
NOTE:243245.1 - 10G New Storage Features and Enhancements
