Importing an Oracle 11g Database into Oracle 10g with the EXPDP Tool


Note that with EXPDP the dump file is written on the server side, in the directory pointed to by the DATA_PUMP_DIR directory object.

A backup taken from 11g throws an error when you try to import it into 10g and the import is blocked outright, because 10g cannot read the newer dump file format.


But sometimes you still have to load an 11g database into 10g, and that is exactly the situation I ran into today.


At first I only exported all of the tables from 11g and imported them into 10g successfully, but the views, stored procedures and so on were missing. Recreating them one SQL statement at a time is exhausting and far too clumsy. After consulting a senior DBA, I redid the export and the import, and this time it worked.


Here is the complete procedure I used, in the hope that it saves other people some trouble.


1. On the 11g server, export the data with the expdp command


EXPDP USERID='SYS/cuc2009@cuc as sysdba' schemas=sybj directory=DATA_PUMP_DIR dumpfile=aa.dmp logfile=aa.log version=10.2.0.1.0


The highlighted values (shown in red in the original post) are what you adapt to your own environment. In my case the SYS password is cuc2009, the database SID is cuc, the schema to export is sybj, and the target is a 10.2.0.1.0 Oracle database, which is why VERSION=10.2.0.1.0 is specified so that the dump can be read by 10g. aa.dmp and aa.log are created in the 11g server's dpdump directory; my 11g installation is on drive E:, so aa.dmp appears under E:/app/Administrator/admin/cuc/dpdump.
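If you are not sure where DATA_PUMP_DIR actually points on your server, the following dictionary query (the same one used in the second half of this post) shows it; the path will of course differ per installation:

SQL> select directory_path from dba_directories where directory_name = 'DATA_PUMP_DIR';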

2. On the 10g server, restore the data with the impdp command


Preparation: 1. create the database; 2. create the tablespace; 3. create the user and grant privileges; 4. copy aa.dmp into the 10g server's dpdump directory.

Steps 1-3 are covered in detail in my previous post, "Oracle数据库移植全步骤" (full steps for migrating an Oracle database), so I won't repeat them here. As for step 4: my 10g installation is under e:/tools, so I copied aa.dmp into E:/tools/admin/cucf/dpdump.
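For reference, here is a minimal sketch of steps 2 and 3, modelled on the commands used in the second half of this post; the tablespace name, datafile path, and password are placeholders to adapt:

-- step 2: create a tablespace for the imported schema (name, path and size are examples)
create tablespace sybj_ts datafile 'E:/tools/oradata/cucf/sybj_ts01.dbf' size 500m;
-- step 3: create the target user and grant basic privileges
create user sybj identified by your_password default tablespace sybj_ts;
grant connect, resource to sybj;
-- only needed if you run impdp as this user rather than as SYS
grant read, write on directory DATA_PUMP_DIR to sybj;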


IMPDP USERID='SYS/cuc2009@cucf as sysdba' schemas=sybj directory=DATA_PUMP_DIR dumpfile=aa.dmp logfile=aa.log version=10.2.0.1.0


Again, the highlighted values are the ones to adapt: my SYS password is cuc2009, the database SID is cucf, the schema being imported is sybj, and the target is a 10.2.0.1.0 database. aa.log will be created in the 10g server's dpdump directory.
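Since the whole point of redoing the export was to get views, procedures and the other object types across, a quick sanity check after the import (a standard dictionary query, not part of the original procedure) is:

SQL> select object_type, count(*) from dba_objects where owner = 'SYBJ' group by object_type;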


Every step above is something I ran myself, so it works. If you have a better approach, please share it; skills grow fastest in an open, sharing atmosphere!

==================================================================================

Oracle Data Pump export and import
2009-12-18 11:57

Data Pump export (泵出) and import (泵入)


1. Test database and test user
2. expdp command help
3. Confirm where the exported dump file goes
4. Export the user's data
5. Import the user's data
6. On import, existing tables are not overwritten by default
7. Import data exported from testuser1 into testuser2
8. Export specific tables and import them





1. Test database and test user
==============================


SQL> create tablespace yzhqts datafile '/u02/oradata/testdb/yzhqts01.dbf' size 500m;

Tablespace created.


SQL> create user testuser1 identified by sun2moon default tablespace yzhqts;

User created.

SQL> grant connect,resource to testuser1;

Grant succeeded.

SQL>

Then create some test data.




2. expdp command help
=====================

[oracle@web151 ~]$ impdp -help
[oracle@Manager ~]$ expdp -help



[oracle@web151 ~]$ impdp -help

Import: Release 10.2.0.4.0 - Production on Wednesday, 03 June, 2009 13:25:46

Copyright (c) 2003, 2007, Oracle. All rights reserved.


The Data Pump Import utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

     Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Import runs by entering the 'impdp' command followed
by various parameters. To specify parameters, you use keywords:

     Format: impdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
     Example: impdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

USERID must be the first parameter on the command line.

****** output truncated ********



[oracle@Manager ~]$ expdp -help

Export: Release 10.2.0.4.0 - Production on Wednesday, 03 June, 2009 11:26:33

Copyright (c) 2003, 2007, Oracle. All rights reserved.


The Data Pump export utility provides a mechanism for transferring data objects
between Oracle databases. The utility is invoked with the following command:

   Example: expdp scott/tiger DIRECTORY=dmpdir DUMPFILE=scott.dmp

You can control how Export runs by entering the 'expdp' command followed
by various parameters. To specify parameters, you use keywords:

   Format: expdp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
   Example: expdp scott/tiger DUMPFILE=scott.dmp DIRECTORY=dmpdir SCHEMAS=scott
               or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.

Keyword               Description (Default)
------------------------------------------------------------------------------
ATTACH                Attach to existing job, e.g. ATTACH [=job name].
COMPRESSION           Reduce size of dumpfile contents where valid
                      keyword values are: (METADATA_ONLY) and NONE.
CONTENT               Specifies data to unload where the valid keywords are:
                      (ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY             Directory object to be used for dumpfiles and logfiles.
DUMPFILE              List of destination dump files (expdat.dmp),
                      e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ENCRYPTION_PASSWORD   Password key for creating encrypted column data.
ESTIMATE              Calculate job estimates where the valid keywords are:
                      (BLOCKS) and STATISTICS.
ESTIMATE_ONLY         Calculate job estimates without performing the export.
EXCLUDE               Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE              Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN         SCN used to set session snapshot back to.
FLASHBACK_TIME        Time used to get the SCN closest to the specified time.
FULL                  Export entire database (N).
HELP                  Display Help messages (N).
INCLUDE               Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME              Name of export job to create.
LOGFILE               Log file name (export.log).
NETWORK_LINK          Name of remote database link to the source system.
NOLOGFILE             Do not write logfile (N).
PARALLEL              Change the number of active workers for current job.
PARFILE               Specify parameter file.
QUERY                 Predicate clause used to export a subset of a table.
SAMPLE                Percentage of data to be exported;
SCHEMAS               List of schemas to export (login schema).
STATUS                Frequency (secs) job status is to be monitored where
                      the default (0) will show new status when available.
TABLES                Identifies a list of tables to export - one schema only.
TABLESPACES           Identifies a list of tablespaces to export.
TRANSPORT_FULL_CHECK Verify storage segments of all tables (N).
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION               Version of objects to export where valid keywords are:
                      (COMPATIBLE), LATEST, or any valid database version.

The following commands are valid while in interactive mode.
Note: abbreviations are allowed

Command               Description
------------------------------------------------------------------------------
ADD_FILE              Add dumpfile to dumpfile set.
CONTINUE_CLIENT       Return to logging mode. Job will be re-started if idle.
EXIT_CLIENT           Quit client session and leave job running.
FILESIZE              Default filesize (bytes) for subsequent ADD_FILE commands.
HELP                  Summarize interactive commands.
KILL_JOB              Detach and delete job.
PARALLEL              Change the number of active workers for current job.
                      PARALLEL=<number of workers>.
START_JOB             Start/resume current job.
STATUS                Frequency (secs) job status is to be monitored where
                      the default (0) will show new status when available.
                      STATUS[=interval]
STOP_JOB              Orderly shutdown of job execution and exits the client.
                      STOP_JOB=IMMEDIATE performs an immediate shutdown of the
                      Data Pump job.

[oracle@Manager ~]$


3. Confirm where the exported dump file goes
============================================

SQL> set linesize 180
SQL> col directory_path format a60
SQL> select * from dba_directories;

OWNER                          DIRECTORY_NAME                 DIRECTORY_PATH
------------------------------ ------------------------------ ------------------------------------------------------------
SYS                            WORK_DIR                       /ade/aime_stamc10/oracle/work
SYS                            ADMIN_DIR                      /ade/aime_stamc10/oracle/md/admin
SYS                            ORACLE_OCM_CONFIG_DIR          /u01/app/oracle/oracle/product/10.2.0/db_1/ccr/state
SYS                            DATA_PUMP_DIR                  /u01/app/oracle/oracle/product/10.2.0/db_1/rdbms/log/

SQL>




SQL> select directory_path from dba_directories where directory_name='DATA_PUMP_DIR';

DIRECTORY_PATH
--------------------------------------------------------------------------------
/u01/app/oracle/oracle/product/10.2.0/db_1/rdbms/log/

SQL>   
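If you would rather have the dump files written somewhere other than the default DATA_PUMP_DIR path, you can create your own directory object, as is done for the 'dp' directory in section 8; the path here is only an example:

SQL> create directory dp as '/u02/backup';
SQL> grant read, write on directory dp to testuser1;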


4. Export the user's data
=========================

expdp testuser1/sun2moon DIRECTORY=data_pump_dir DUMPFILE=testuser1.dmp


[oracle@Manager ~]$ expdp testuser1/sun2moon DIRECTORY=data_pump_dir DUMPFILE=testuser1.dmp

Export: Release 10.2.0.4.0 - Production on Wednesday, 03 June, 2009 11:52:49

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-39087: directory name DATA_PUMP_DIR is invalid


[oracle@Manager ~]$


Note: an ordinary user must be granted access to DATA_PUMP_DIR before it can use it. Here is how:

SQL> grant read,write on directory DATA_PUMP_DIR to testuser1;

Grant succeeded.

SQL>


[oracle@Manager ~]$ expdp testuser1/sun2moon DIRECTORY=data_pump_dir DUMPFILE=testuser1.dmp

Export: Release 10.2.0.4.0 - Production on Wednesday, 03 June, 2009 11:54:10

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "TESTUSER1"."SYS_EXPORT_SCHEMA_01": testuser1/******** DIRECTORY=data_pump_dir DUMPFILE=testuser1.dmp
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 128 KB
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
. . exported "TESTUSER1"."T1"                            4.968 KB       2 rows
. . exported "TESTUSER1"."T2"                            17.35 KB     128 rows
Master table "TESTUSER1"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for TESTUSER1.SYS_EXPORT_SCHEMA_01 is:
/u01/app/oracle/oracle/product/10.2.0/db_1/rdbms/log/testuser1.dmp
Job "TESTUSER1"."SYS_EXPORT_SCHEMA_01" successfully completed at 11:54:41

[oracle@Manager ~]$



The log file export.log is written to /u01/app/oracle/oracle/product/10.2.0/db_1/rdbms/log/.
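If you prefer an explicit log file name and a recognizable job name instead of the defaults (export.log and SYS_EXPORT_SCHEMA_01), both LOGFILE and JOB_NAME are standard expdp parameters; a variant of the command above:

expdp testuser1/sun2moon DIRECTORY=data_pump_dir DUMPFILE=testuser1.dmp LOGFILE=exp_testuser1.log JOB_NAME=exp_testuser1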


5. Import the user's data
=========================


On the other (target) server:

impdp testuser1/sun2moon DIRECTORY=data_pump_dir DUMPFILE=testuser1.dmp


Note: create the matching tablespace, user, and privileges first, and copy testuser1.dmp into the designated directory
(/u01/app/oracle/oracle/product/10.2.0/db_1/rdbms/log/).


Create the tablespace and user:
create tablespace yzhqts datafile '/u02/oradata/zytest/yzhqts01.dbf' size 500m;
create user testuser1 identified by sun2moon default tablespace yzhqts;
grant connect,resource to testuser1;
grant read,write on directory DATA_PUMP_DIR to testuser1;



Import the data:

[oracle@web151 ~]$ impdp testuser1/sun2moon DIRECTORY=data_pump_dir DUMPFILE=testuser1.dmp

Import: Release 10.2.0.4.0 - Production on Wednesday, 03 June, 2009 13:49:40

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "TESTUSER1"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TESTUSER1"."SYS_IMPORT_FULL_01": testuser1/******** DIRECTORY=data_pump_dir DUMPFILE=testuser1.dmp
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TESTUSER1"."T1"                            4.968 KB       2 rows
. . imported "TESTUSER1"."T2"                            17.35 KB     128 rows
Job "TESTUSER1"."SYS_IMPORT_FULL_01" successfully completed at 13:49:43

[oracle@web151 ~]$



SQL> set linesize 180
SQL> col segment_name format a10
SQL> select owner, segment_name ,segment_type, tablespace_name from dba_segments where owner='TESTUSER1' order by TABLESPACE_NAME;

OWNER                          SEGMENT_NA SEGMENT_TYPE       TABLESPACE_NAME
------------------------------ ---------- ------------------ ------------------------------
TESTUSER1                      T1         TABLE              YZHQTS
TESTUSER1                      T2         TABLE              YZHQTS

SQL>



As you can see, all the data has ended up in the yzhqts tablespace, as intended.


6. On import, existing tables are not overwritten by default
=============================================================

For example:

[oracle@web151 log]$ impdp testuser1/sun2moon DIRECTORY=data_pump_dir DUMPFILE=testuser1-01.dmp

Import: Release 10.2.0.4.0 - Production on Wednesday, 03 June, 2009 14:44:21

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "TESTUSER1"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TESTUSER1"."SYS_IMPORT_FULL_01": testuser1/******** DIRECTORY=data_pump_dir DUMPFILE=testuser1-01.dmp
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39151: Table "TESTUSER1"."T1" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
ORA-39151: Table "TESTUSER1"."T2" exists. All dependent metadata and data will be skipped due to table_exists_action of skip
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TESTUSER1"."T3"                            4.937 KB       1 rows
Job "TESTUSER1"."SYS_IMPORT_FULL_01" completed with 2 error(s) at 14:44:23

[oracle@web151 log]$
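The skipping comes from the default TABLE_EXISTS_ACTION=SKIP. If you do want the existing tables refreshed, TABLE_EXISTS_ACTION is a standard impdp parameter; a sketch based on the command above (use with care, REPLACE drops and recreates the tables):

impdp testuser1/sun2moon DIRECTORY=data_pump_dir DUMPFILE=testuser1-01.dmp TABLE_EXISTS_ACTION=REPLACE

Other valid values are APPEND and TRUNCATE.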



7. Import data exported from testuser1 into testuser2
======================================================


Key point:

remap_schema=source_schema:target_schema


With REMAP_SCHEMA you can import data exported from one schema into a different schema. Even if the target schema (user) does not exist yet, it is created as part of the import, provided the importing user has sufficient privileges.

Here is the syntax for using REMAP_SCHEMA:

impdp scott/tiger@oracle directory=testdir dumpfile=bij.dmp remap_schema=scott:testscott




Example 1:

$ impdp testuser1/sun2moon DIRECTORY=data_pump_dir DUMPFILE=testuser1-01.dmp remap_schema=testuser1:testuser2



(1) First create the user testuser2

create user testuser2 identified by sun2moon default tablespace yzhqts;
grant connect,resource to testuser2;




(2) Grant testuser1 the import privilege

SQL> grant imp_full_database to testuser1;

Grant succeeded.



Run the grant above on the target database first; otherwise the import fails with the following error:


Import: Release 10.2.0.4.0 - Production on Wednesday, 03 June, 2009 16:12:58

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-31631: privileges are required
ORA-39122: Unprivileged users may not perform REMAP_SCHEMA remappings.





[oracle@web151 ~]$ impdp testuser1/sun2moon DIRECTORY=data_pump_dir DUMPFILE=testuser1-01.dmp remap_schema=testuser1:testuser2

Import: Release 10.2.0.4.0 - Production on Wednesday, 03 June, 2009 16:19:12

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "TESTUSER1"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TESTUSER1"."SYS_IMPORT_FULL_01": testuser1/******** DIRECTORY=data_pump_dir DUMPFILE=testuser1-01.dmp remap_schema=testuser1:testuser2
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TESTUSER2"."T1"                            4.976 KB       3 rows
. . imported "TESTUSER2"."T2"                            17.35 KB     128 rows
. . imported "TESTUSER2"."T3"                            4.937 KB       1 rows
Job "TESTUSER1"."SYS_IMPORT_FULL_01" successfully completed at 16:19:13

[oracle@web151 ~]$


Example 2:


You can also run the import directly as testuser2, in which case there is no need to create testuser1 on the target database at all.


create user testuser2 identified by sun2moon default tablespace yzhqts;
grant connect,resource to testuser2;
grant read,write on directory DATA_PUMP_DIR to testuser2;
grant imp_full_database to testuser2;


impdp testuser2/sun2moon DIRECTORY=data_pump_dir DUMPFILE=testuser1-01.dmp remap_schema=testuser1:testuser2


[oracle@web151 ~]$ impdp testuser2/sun2moon DIRECTORY=data_pump_dir DUMPFILE=testuser1-01.dmp remap_schema=testuser1:testuser2

Import: Release 10.2.0.4.0 - Production on Wednesday, 03 June, 2009 16:40:31

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "TESTUSER2"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "TESTUSER2"."SYS_IMPORT_FULL_01": testuser2/******** DIRECTORY=data_pump_dir DUMPFILE=testuser1-01.dmp remap_schema=testuser1:testuser2
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "TESTUSER2"."T1"                            4.976 KB       3 rows
. . imported "TESTUSER2"."T2"                            17.35 KB     128 rows
. . imported "TESTUSER2"."T3"                            4.937 KB       1 rows
Job "TESTUSER2"."SYS_IMPORT_FULL_01" successfully completed at 16:40:33

[oracle@web151 ~]$
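To confirm where the remapped objects ended up, the same dictionary query from section 5 works, just with the new owner:

SQL> select owner, segment_name, segment_type, tablespace_name from dba_segments where owner = 'TESTUSER2' order by tablespace_name;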



8. Export specific tables and import them
==========================================

Source database: export the user's tables t1 and t2
$ expdp becvx/sun2moon tables=\(t1,t2\) directory=dp dumpfile=becvx01.dmp logfile=exp_becvx.log job_name=exp_becvx

Export: Release 10.2.0.4.0 - Production on Wednesday, 10 June, 2009 11:18:13

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "BECVX"."EXP_BECVX": becvx/******** tables=(t1,t2) directory=dp dumpfile=becvx01.dmp logfile=exp_becvx.log job_name=exp_becvx
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 192 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
. . exported "BECVX"."T1"                                67.95 KB     256 rows
. . exported "BECVX"."T2"                                4.960 KB       1 rows
Master table "BECVX"."EXP_BECVX" successfully loaded/unloaded
******************************************************************************
Dump file set for BECVX.EXP_BECVX is:
/u02/backup/becvx01.dmp
Job "BECVX"."EXP_BECVX" successfully completed at 11:18:24



Target database: create the user and grant the appropriate privileges (note that this table-mode export carries no user/privilege metadata, so the user must be created from scratch).

create user becvx identified by sun2moon ;
grant connect, resource to becvx;
create directory dp as '/u02/backup';   <---- skip this if the directory was already created earlier

grant read,write on directory dp to public;



$ impdp becvx/sun2moon directory=dp dumpfile=becvx01.dmp

Import: Release 10.2.0.4.0 - Production on Wednesday, 10 June, 2009 10:55:45

Copyright (c) 2003, 2007, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "BECVX"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "BECVX"."SYS_IMPORT_FULL_01": becvx/******** directory=dp dumpfile=becvx01.dmp
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "BECVX"."T1"                                67.95 KB     256 rows
. . imported "BECVX"."T2"                                4.960 KB       1 rows
Job "BECVX"."SYS_IMPORT_FULL_01" successfully completed at 10:55:46

$
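One caveat with this table-mode import: the tables are created in the tablespace recorded in the dump (YZHQTS here), so that tablespace must already exist on the target database. If it does not, REMAP_TABLESPACE is a standard impdp parameter that redirects the segments; a sketch, with USERS as a placeholder target tablespace:

impdp becvx/sun2moon directory=dp dumpfile=becvx01.dmp remap_tablespace=yzhqts:users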
           


Reposted from blog.csdn.net/qq_43679720/article/details/87884086