expdp/impdp and exp/imp


The article by this author is very useful:

http://www.cnblogs.com/lanzi/archive/2011/01/06/1927731.html

Introduction to expdp

EXPDP command-line options
1. ATTACH
This option associates a client session with an existing export job. The syntax is as follows:
ATTACH=[schema_name.]job_name
schema_name specifies the schema name and job_name specifies the export job name. Note that when the ATTACH option is used, no option other than the connection string and ATTACH may be specified on the command line. Example:
Expdp scott/tiger ATTACH=scott.export_job

2. CONTENT
This option specifies what content to export. The default value is ALL.
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
When CONTENT is ALL, object definitions and all their data are exported. DATA_ONLY exports only object data; METADATA_ONLY exports only object definitions.
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dump CONTENT=METADATA_ONLY

3. DIRECTORY
specifies the directory where the dump file and log file are placed
DIRECTORY=directory_object
directory_object specifies a directory object name. Note that a directory object is created with the CREATE DIRECTORY statement; it is not an OS directory path.
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dump

4. DUMPFILE
is used to specify the name of the dump file; the default name is expdat.dmp
DUMPFILE=[directory_object:]file_name [,...]
directory_object specifies the directory object name and file_name specifies the dump file name. Note that if directory_object is omitted, the export tool automatically uses the directory object given by the DIRECTORY option.
Expdp scott/tiger DIRECTORY=dump1 DUMPFILE=dump2:a.dmp

5. ESTIMATE
specifies how the disk space used by the exported tables is estimated. The default value is BLOCKS.
ESTIMATE={BLOCKS | STATISTICS}
When set to BLOCKS, the space used by an object is estimated as the number of data blocks occupied by the object multiplied by the data block size. When set to STATISTICS, the space is estimated from the most recent statistics.
Expdp scott/tiger TABLES=emp ESTIMATE=STATISTICS DIRECTORY=dump DUMPFILE= a.dump

6. ESTIMATE_ONLY
specifies whether to only estimate the disk space the export job would use. The default value is N.
ESTIMATE_ONLY={Y | N}
When set to Y, the export utility only estimates the disk space the objects would occupy and does not perform the export. When set to N, it performs the export in addition to the estimate.
Expdp scott/tiger ESTIMATE_ONLY=y NOLOGFILE=y

7. EXCLUDE (for details, see "2. Exclude specified objects of a specified type from a user export" below)
This option specifies object types, or particular objects, to be excluded when the operation is performed.
EXCLUDE=object_type[:name_clause] [,...]
object_type specifies the object type to exclude and name_clause specifies the particular objects to exclude. EXCLUDE and INCLUDE cannot be used at the same time.
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp EXCLUDE=VIEW

8. FILESIZE
specifies the maximum size of each exported file. The default is 0, meaning the file size is unlimited.

9. FLASHBACK_SCN
specifies that table data as of a particular SCN be exported
FLASHBACK_SCN=scn_value
scn_value identifies the SCN. FLASHBACK_SCN and FLASHBACK_TIME cannot be used together.
Expdp scott/tiger DIRECTORY= dump DUMPFILE=a.dmp FLASHBACK_SCN=358523

10. FLASHBACK_TIME
specifies to export table data at a specific point in time
FLASHBACK_TIME="TO_TIMESTAMP(time_value)"
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp FLASHBACK_TIME="TO_TIMESTAMP('25-08-2004 14:35:00','DD-MM-YYYY HH24:MI:SS')"

11. FULL
specifies whether to perform a full database export. The default is N.
FULL={Y | N}
When set to Y, a full database export is performed.

12. HELP
specifies whether to display help for the EXPDP command-line options. The default is N.
When set to Y, help for the export options is displayed.
Expdp help=y

13. INCLUDE (for details, see "1. Include specified objects of a specified type in a user export" below)
specifies the object types and related objects to include in the export.
INCLUDE=object_type[:name_clause] [,...]

14. JOB_NAME
specifies the name of the export job. The default is SYS_XXX.
JOB_NAME=jobname_string
SELECT * FROM DBA_DATAPUMP_JOBS; -- view existing jobs

15. LOGFILE
specifies the name of the export log file. The default name is export.log.
LOGFILE=[directory_object:]file_name
directory_object specifies the directory object name and file_name specifies the log file name. If directory_object is omitted, the export utility automatically uses the value of the DIRECTORY option.
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp logfile=a.log

16. NETWORK_LINK
specifies a database link name. To export remote database objects into a dump file on the local instance, this option must be set.
For example: expdp gwm/gwm directory=dir_dp NETWORK_LINK=igisdb tables=p_street_area dumpfile=p_street_area.dmp logfile=p_street_area.log job_name=my_job
igisdb is the database link between the destination database and the source database, and dir_dp is a directory on the destination database.
If you instead connect directly with a connection string (@fgisdb), remember that expdp is a server-side tool, so the files expdp generates are stored on the server by default.

17. NOLOGFILE
This option specifies whether to suppress generation of the export log file. The default value is N.

18. PARALLEL
specifies the number of parallel processes performing the export. The default value is 1.
Note: the degree of parallelism should not exceed twice the number of CPUs. With 2 CPUs, PARALLEL can be set to 2, which is faster on import than PARALLEL=1.
    For the exported files: if PARALLEL is set to 2 but there is only one dump file, export speed improves little, because both processes write to the same file and contend for that resource. Instead, specify two dump files, as follows:
    expdp gwm/gwm directory=d_test dumpfile=gwmfile1.dp,gwmfile2.dp parallel=2

19. PARFILE
specifies the name of the export parameter file
PARFILE=[directory_path] file_name
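As a sketch of what such a parameter file might look like (the file name and contents here are illustrative, not from the original article), the options discussed above can be collected into a plain-text file and passed with PARFILE:

```
# exp_scott.par -- hypothetical expdp parameter file
DIRECTORY=dump
DUMPFILE=a.dmp
LOGFILE=a.log
CONTENT=METADATA_ONLY
```

It would then be invoked as: expdp scott/tiger PARFILE=exp_scott.par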

20. QUERY
QUERY=[schema.][table_name:]query_clause
schema specifies the schema name, table_name specifies the table name, and query_clause specifies a restricting condition clause. The QUERY option cannot be used together with options such as CONTENT=METADATA_ONLY, ESTIMATE_ONLY, and TRANSPORT_TABLESPACES.
Expdp scott/tiger directory=dump dumpfile=a.dmp Tables=emp query='WHERE deptno=20'

21. SCHEMAS
This option specifies a schema-mode export. The default is the current user's schema.

22. STATUS
specifies how often the detailed status of the export job is displayed. The default value is 0.

23. TABLES
specifies a table-mode export
TABLES=[schema_name.]table_name[:partition_name][,...]
schema_name specifies the schema name, table_name specifies the name of the table to export, and partition_name specifies the name of the partition to export.

24. TABLESPACES
specifies the list of tablespaces to be exported

25. TRANSPORT_FULL_CHECK
This option specifies how dependencies between objects inside the transported tablespace set and objects outside it are checked. The default is N.
When set to Y, the export utility checks for full two-way dependencies: an error is reported if either only the tablespace holding a table or only the tablespace holding its index is being moved. When set to N, only one-way dependencies are checked: if the tablespace holding an index is moved but the tablespace holding its table is not, an error is reported; if the tablespace holding a table is moved but the tablespace holding its index is not, no error is reported.

26. TRANSPORT_TABLESPACES
specifies a transportable-tablespace-mode export

27. VERSION
specifies the database version for exported objects. The default value is COMPATIBLE.
VERSION={COMPATIBLE | LATEST | version_string}
When set to COMPATIBLE, object metadata is generated according to the COMPATIBLE initialization parameter; when set to LATEST, it is generated according to the actual database version. version_string specifies an explicit database version string.

Steps for invoking the EXPDP Data Pump tool:
1. Create a DIRECTORY
create directory dir_dp as 'D:\oracle\dir_dp';
2. Authorize
Grant read,write on directory dir_dp to lttfm;
-- view directories and their privileges:
SELECT privilege, directory_name, directory_path FROM user_tab_privs t, all_directories d
WHERE t.table_name(+) = d.directory_name ORDER BY 2, 1;
3. Execute export
expdp lttfm/lttfm@fgisdb schemas=lttfm directory=dir_dp dumpfile =expdp_test1.dmp logfile=expdp_test1.log;

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1
With the Partitioning, OLAP and Data Mining options
Starting "LTTFM"."SYS_EXPORT_SCHEMA_01": lttfm/********@fgisdb schemas=lttfm directory=dir_dp dumpfile=expdp_test1.dmp logfile=expdp_test1.log
Remarks:
   1. directory=dir_dp must come before dumpfile. If it is placed at the end, the following errors are raised:
      ORA-39002: invalid operation
      ORA-39070: unable to open the log file
      ORA-39087: directory name DATA_PUMP_DIR is invalid
    
   2. If no JOB name is specified for the export, a default JOB name is generated; if a JOB name is specified, the job appears under that name. Change the command to:
      expdp lttfm/lttfm@fgisdb schemas=lttfm directory=dir_dp dumpfile=expdp_test1.dmp logfile=expdp_test1.log job_name=my_job1
   3. There should be no semicolon at the end of the export statement; otherwise the job table name in the export above becomes 'my_job1;' instead of my_job1, and expdp lttfm/lttfm attach=lttfm.my_job1 keeps reporting that the job table cannot be found.
   4. The directory created must be on the machine where the database resides; otherwise the following errors are also raised:
ORA-39002: invalid operation
ORA-39070: cannot open log file.
ORA-39087: directory name DATA_PUMP_DIR; invalid





Using the export-related interactive commands:
   1) Ctrl+C: during execution, press Ctrl+C to leave the current interactive mode; the export operation does not stop.
   2) Export> status -- view the status and related information of the current JOB
   3) Export> stop_job -- pause the JOB (export mode is exited after the job is suspended)
   4) Re-enter export mode: C:\Documents and Settings\Administrator>expdp lttfm/lttfm attach=lttfm.my_job1 -- no semicolon after the statement
   5) Export> start_job -- resume the suspended JOB (it does not start executing again by itself)
   6) Export> continue_client -- resume client logging; this prints Restarting "LTTFM"."MY_JOB":
   7) Export> kill_job -- cancel the current job and release the related client sessions (deletes the job and the dmp file)
   8) Export> exit_client -- exit export mode (re-enter it via step 4)
Note: after the export completes, the job is automatically dropped.

Data Pump export modes:
1. Export by table mode:
expdp lttfm/lttfm@fgisdb tables=lttfm.b$i_exch_info,lttfm.b$i_manhole_info dumpfile=expdp_test2.dmp logfile=expdp_test2.log directory=dir_dp job_name=my_job

2. Export by query condition:
expdp lttfm/lttfm@fgisdb tables=lttfm.b$i_exch_info dumpfile=expdp_test3.dmp logfile=expdp_test3.log directory=dir_dp job_name=my_job query='"where rownum<11"'
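The doubled quoting in query='"where rownum<11"' exists because the value passes through the OS shell before reaching expdp. A minimal sketch (assuming a Unix shell; Windows cmd escapes differently) shows what actually survives shell processing -- the echo here only prints the command that would run, without invoking Oracle:

```shell
# Keep the inner double quotes for Data Pump by wrapping the whole
# clause in single quotes at the shell level.
QUERY_CLAUSE='"where rownum<11"'
# Print the command expdp would receive (Oracle is not invoked here).
echo expdp lttfm/lttfm tables=lttfm.b\$i_exch_info query="$QUERY_CLAUSE"
# prints: expdp lttfm/lttfm tables=lttfm.b$i_exch_info query="where rownum<11"
```

The single quotes stop the shell from consuming the double quotes, so Data Pump still sees a quoted where-clause.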

3. Export by tablespace:
Expdp lttfm/lttfm@fgisdb dumpfile=expdp_tablespace.dmp tablespaces=GCOMM.DBF logfile=expdp_tablespace.log directory=dir_dp job_name=my_job

4. Export by schema:
Expdp lttfm/lttfm DIRECTORY=dir_dp DUMPFILE=schema.dmp SCHEMAS=lttfm,gwm

5. Export the entire database:
expdp lttfm/lttfm@fgisdb dumpfile=full.dmp full=y logfile=full.log directory=dir_dp job_name=my_job


Using exclude and include to export data
1. Include: export specified objects of a specified type from a user
--export only the tables starting with B under the lttfm user, including the indexes, comments, and other objects associated with those tables; other object types such as procedures are not included:
expdp lttfm/lttfm@fgisdb dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job include=TABLE:\"LIKE \'B%\'\"

--export all tables under the lttfm user except those starting with B$:
expdp lttfm/lttfm@fgisdb schemas=lttfm dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job include=TABLE:\"NOT LIKE \'B$%\'\"

--Export only all stored procedures under the lttfm user:
expdp lttfm/lttfm@fgisdb schemas=lttfm dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job include=PROCEDURE;  

2. Exclude: exclude specified objects of a specified type from a user export
--export all objects under the lttfm user except those of type TABLE. Since the tables are not exported, the indexes, constraints, and other objects associated with the tables are not exported either:
expdp lttfm/lttfm@fgisdb schemas=lttfm dumpfile=exclude_1.dmp logfile=exclude_1.log directory=dir_dp job_name=my_job exclude=TABLE;

--export all objects under the lttfm user except tables starting with b$:
expdp lttfm/lttfm@fgisdb dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job exclude=TABLE:\"LIKE \'b$%\'\";

--export all objects under the lttfm user, but of the TABLE type export only the tables starting with b$:
expdp lttfm/lttfm@fgisdb dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job exclude=TABLE:\"NOT LIKE \'b$%\'\";
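All of this backslash escaping can be avoided by putting INCLUDE or EXCLUDE in a parameter file, which the shell never parses. A hypothetical sketch (file name and contents are illustrative):

```
# exclude_b.par -- hypothetical parfile; no shell escaping needed here
SCHEMAS=lttfm
DIRECTORY=dir_dp
DUMPFILE=exclude_1.dmp
LOGFILE=exclude_1.log
EXCLUDE=TABLE:"LIKE 'b$%'"
```

It would be run as: expdp lttfm/lttfm@fgisdb parfile=exclude_b.par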


Introduction to IMPDP

Many IMPDP command-line options are the same as EXPDP's; the options that differ are:
1. REMAP_DATAFILE
This option renames source data files to target data file names. It may be required when moving tablespaces between different platforms.
REMAP_DATAFILE=source_datafile:target_datafile

2. REMAP_SCHEMA
This option is used to load all objects of the source schema into the target schema.
REMAP_SCHEMA=source_schema:target_schema

3. REMAP_TABLESPACE
imports all objects in the source tablespace into the target tablespace
REMAP_TABLESPACE=source_tablespace:targettablespace

If there are multiple tablespaces, separate them with commas, such as REMAP_TABLESPACE=source_tablespace1:targettablespace1,source_tablespace2:targettablespace2
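For readability, the remapping options can also be collected in a parameter file. A hypothetical sketch (the schema and tablespace names here are illustrative assumptions):

```
# remap.par -- hypothetical impdp parameter file
DIRECTORY=dir_dp
DUMPFILE=gwm.dmp
REMAP_SCHEMA=gwm:gwm2
REMAP_TABLESPACE=gmapdata:gcomm
```

It would be run as: impdp system/orcl parfile=remap.par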

4. REUSE_DATAFILES
This option specifies whether existing data files are overwritten when tablespaces are created. The default is N.
REUSE_DATAFILES={Y | N}

5. SKIP_UNUSABLE_INDEXES
specifies whether unusable indexes are skipped during import. The default is N.

6. SQLFILE
specifies that the DDL the import would execute be written to a SQL script instead of being executed
SQLFILE=[directory_object:]file_name
Impdp scott/tiger DIRECTORY=dump DUMPFILE=tab.dmp SQLFILE=a.sql

7. STREAMS_CONFIGURATION
specifies whether to import Streams metadata. The default value is Y.

8. TABLE_EXISTS_ACTION
This option specifies the action the import job takes when a table already exists. The default is SKIP.
TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}
When set to SKIP, the import job skips an existing table and moves on to the next object. When set to APPEND, data is appended. When set to TRUNCATE, the import job truncates the table and then appends new data. When set to REPLACE, the import job drops the existing table, recreates it, and appends data. Note that the TRUNCATE option does not apply to clustered tables or to the NETWORK_LINK option.

9. TRANSFORM
This option specifies whether to modify the DDL statements that create objects
TRANSFORM=transform_name:value[:object_type]
transform_name specifies the transform name: SEGMENT_ATTRIBUTES covers segment attributes (physical attributes, storage attributes, tablespace, logging, and so on), and STORAGE covers segment storage attributes. value specifies whether to include the segment attributes or segment storage attributes, and object_type specifies the object type.
Impdp scott/tiger directory=dump dumpfile=tab.dmp Transform=segment_attributes:n:table

10. TRANSPORT_DATAFILES
This option specifies the data files to be imported into the target database during a transportable-tablespace import
TRANSPORT_DATAFILES=datafile_name
datafile_name specifies the data files to be copied to the target database
Impdp system/manager DIRECTORY=dump DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/user01/data/tbs1.f'


impdp import modes:
1. Import by table the p_street_area.dmp file, which was exported by the gwm user with schemas=gwm:
impdp gwm/gwm@fgisdb dumpfile=p_street_area.dmp logfile=imp_p_street_area.log directory=dir_dp tables=p_street_area job_name=my_job

2. Import by user (user information can be imported directly; that is, even if the user does not yet exist, the import can still be run directly)
impdp gwm/gwm@fgisdb schemas=gwm dumpfile=expdp_test.dmp logfile=expdp_test.log directory=dir_dp job_name=my_job

3. Direct import from the source database to the target database, without generating a dmp file via expdp:
--import table p_street_area
impdp gwm/gwm directory=dir_dp NETWORK_LINK=igisdb tables=p_street_area logfile=p_street_area.log job_name=my_job
igisdb is the database link between the destination database and the source database, and dir_dp is a directory on the destination database.

4. Use the remap_tablespace parameter to replace the tablespace

  --Export all data under the gwm user
expdp system/orcl directory=data_pump_dir dumpfile=gwm.dmp SCHEMAS=gwm
Note: if the user's data is exported by the sys user, the export includes the user creation and grants; an export by the user itself does not include these.
-- the key step: import all data under the gwm user into tablespace gcomm (it was originally in the gmapdata tablespace)
impdp system/orcl directory=data_pump_dir dumpfile=gwm.dmp remap_tablespace=gmapdata:gcomm


exp and imp
exp keyword descriptions:
Keyword            Description (default value)
----------------------------------------------
USERID             username/password
BUFFER             size of data buffer
FILE               output file (EXPDAT.DMP)
COMPRESS           consolidate into one extent on import (Y)
GRANTS             export grants (Y)
INDEXES            export indexes (Y)
DIRECT             direct path (N) -- direct-path export is faster
LOG                log file of screen output
ROWS               export data rows (Y)
CONSISTENT         cross-table consistency (N)
FULL               export entire file (N)
OWNER              list of owner usernames
TABLES             list of table names
RECORDLENGTH       length of IO record
INCTYPE            incremental export type
RECORD             track incremental export (Y)
TRIGGERS           export triggers (Y)
STATISTICS         analyze objects (ESTIMATE)
PARFILE            parameter file name
CONSTRAINTS        export constraints (Y)
OBJECT_CONSISTENT  transaction set to read only during object export (N)
FEEDBACK           show progress every x rows (0)
FILESIZE           maximum size of each dump file
FLASHBACK_SCN      SCN used to set session snapshot back to
FLASHBACK_TIME     time used to get the SCN closest to the specified time
QUERY              select clause used to export a subset of a table
RESUMABLE          suspend when a space-related error is encountered (N)
RESUMABLE_NAME     text string used to identify the resumable statement
RESUMABLE_TIMEOUT  wait time for RESUMABLE
TTS_FULL_CHECK     perform full or partial dependency check for TTS
TABLESPACES        list of tablespaces to export
TRANSPORT_TABLESPACE  export transportable tablespace metadata (N)
TEMPLATE           template name which invokes iAS mode export

Commonly used exp keywords

1. full: exports the entire database; used together with rows=n it exports the structure of the entire database.
   For example: exp userid=gwm/gwm file=/test.dmp log=test.log full=y rows=n direct=y
2. OWNER and TABLES: define the objects exp exports; a query condition can be added to limit the rows exported.
   For example: exp userid=gwm/gwm file=/test.dmp log=test.log owner=gwm tables=(table1,table2) query="'where rownum<11'"
3. buffer and feedback: when the data volume is relatively large, consider using these two parameters.
   For example: exp userid=gwm/gwm file=/test.dmp log=test.log feedback=10000 buffer=100000000 tables=(table1,table2)
4. file and log: specify the backup dmp file name and the log file name.
5. compress: does not compress the contents of the exported data; it controls whether the data is consolidated into one initial extent on import. The default is y.
6. filesize: if the exported data file is large, use this parameter to keep each file from exceeding 2g.
   For example: exp userid=gwm/gwm file=/test1,test2,test3,test4,test5 filesize=2G log=test.log
   This creates test1.dmp, test2.dmp, and so on, each up to 2g in size.
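To decide how many file names to list after file=, the number of dump files can be estimated from the data volume and the FILESIZE setting. A small shell sketch (the 4500 MB figure is an assumed example size, not from the article):

```shell
# Hypothetical sizes: total data to export vs. FILESIZE=2G, both in MB.
segment_mb=4500
filesize_mb=2048
# Ceiling division: number of dump files exp will need.
files=$(( (segment_mb + filesize_mb - 1) / filesize_mb ))
echo "dump files needed: $files"
# prints: dump files needed: 3
```

In practice the total size would come from summing bytes in DBA_SEGMENTS for the schema being exported.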



imp keyword description
Keyword            Description (default value)
----------------------------------------------
USERID             username/password
BUFFER             size of data buffer
FILE               input file (EXPDAT.DMP)
SHOW               just list file contents (N)
IGNORE             ignore create errors (N)
GRANTS             import grants (Y)
INDEXES            import indexes (Y)
ROWS               import data rows (Y)
LOG                log file of screen output
FULL               import entire file (N)
FROMUSER           list of owner usernames
TOUSER             list of usernames
TABLES             list of table names
RECORDLENGTH       length of IO record
INCTYPE            incremental import type
COMMIT             commit array insert (N)
PARFILE            parameter file name
CONSTRAINTS        import constraints (Y)

DESTROY            overwrite tablespace data file (N)
INDEXFILE          write table/index info to the specified file
SKIP_UNUSABLE_INDEXES  skip maintenance of unusable indexes (N)
FEEDBACK           show progress every x rows (0)
TOID_NOVALIDATE    skip validation of specified type ids
FILESIZE           maximum size of each dump file
STATISTICS         always import precomputed statistics
RESUMABLE          suspend when a space-related error is encountered (N)
RESUMABLE_NAME     text string used to identify the resumable statement
RESUMABLE_TIMEOUT  wait time for RESUMABLE
COMPILE            compile procedures, packages, and functions (Y)
STREAMS_CONFIGURATION  import streams general metadata (Y)
STREAMS_INSTANTIATION  import streams instantiation metadata (N)

The following keywords apply only to transportable tablespaces:
TRANSPORT_TABLESPACE Import transportable tablespace metadata (N)
TABLESPACES        tablespaces to be transported into the database
DATAFILES          datafiles to be transported into the database

The INCTYPE parameter has been abandoned. I ran an experiment with Oracle 11g and found the parameter is no longer available, as shown below:

C:\Users\thinkpad>imp fyzh_ora/FYZH_ORA file=rm_trs_seg.dmp log=rm_trs_seg.log fromuser=ltwebgis inctype=restore

Import: Release 11.1.0.7.0 - Production on Tue Jan 10 22:18:14 2012
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Export file created by EXPORT:V10.02.01 via conventional path
Warning: these objects were exported by LTWEBGIS, not by the current user
Import done in ZHS16GBK character set and AL16UTF16 NCHAR character set
IMP-00021: INCTYPE parameter is obsolete
IMP-00083: dump file does not contain an incremental export
