Oracle impdp/expdp usage details

Most of this material is reposted from various sources and recorded here for reference.

EXPDP can export data from any database it can connect to. It can also export data from a remote database that the database it logs in to can read, because in that case the remote data can be read through a database link.

http://zalbb.itpub.net/post/980/395955

a. Create a database link to the server
create database link link_name connect to username identified by password using 'connect_string'; -- username and password are the server-side credentials
b. conn / as sysdba
create or replace directory dir as 'directory';
grant read,write on directory dir to username;

c. expdp username/password directory=dir network_link=link_name ... -- the username here is the user who created the dblink, and the directory is likewise one created in the target database
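For example, a minimal end-to-end run of these steps (the link name, account, and paths here are hypothetical):
-- as the exporting user: create the link to the source database
create database link remote_link connect to scott identified by tiger using 'remotedb';
-- as sysdba: create the dump directory and grant it
create or replace directory dump_dir as '/u01/dump';
grant read,write on directory dump_dir to scott;
-- then export the remote data through the link
expdp scott/tiger directory=dump_dir network_link=remote_link dumpfile=remote_scott.dmp logfile=remote_scott.log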

3. If you want to import into a database directly without generating a dmp file, the principle is similar to 2: use impdp with NETWORK_LINK directly, so you can impdp straight from the source and skip the expdp step.

network_link imports the data from the source database (the one the link points to) directly into the database impdp connects to, omitting the intermediate export step. For network_link=source_database_link, confirm that the value names an existing database link that points to the source database.
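A minimal sketch of such a network import (the link and table names are hypothetical); the data lands in the database impdp is connected to:
impdp scott/tiger directory=dump_dir network_link=source_link tables=emp logfile=imp_emp.log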

Adding its restrictions, which I just came across:
NETWORK_LINK parameter restrictions

Network imports do not support the use of evolved types.
a:  When the NETWORK_LINK parameter is used in conjunction with the TABLES parameter, only whole tables can be imported (not partitions of tables).
b:  If the USERID that is executing the import job has the IMP_FULL_DATABASE role on the target database, then that user must also have the EXP_FULL_DATABASE role on the source database.
c:  The only types of database links supported by Data Pump Import are public, fixed-user, and connected-user. Current-user database links are not supported.

Haha, the direction of NETWORK_LINK really is easy to confuse; I only got it straight after you explained it. The distinction is simple:

Whichever database impdp connects to is the database the data is imported into. So when impdp uses NETWORK_LINK, it pulls the remote data that the link points to into the database impdp is connected to.

Put that way, it is easy to remember.

 http://www.itpub.net/thread-943998-1-1.html

expdp introduction

EXPDP command-line options
1. ATTACH
This option is used to attach a client session to an existing export job. The syntax is as follows:
ATTACH=[schema_name.]job_name
Schema_name specifies the schema name and job_name specifies the export job name. Note that if you use the ATTACH option, you cannot specify any other options on the command line except the connection string and the ATTACH option, for example:
Expdp scott/tiger ATTACH=scott.export_job

2. CONTENT
This option is used to specify the content to be exported. The default value is ALL
CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
When CONTENT is set to ALL, object definitions and all their data are exported; when it is DATA_ONLY, only object data is exported; when it is METADATA_ONLY, only object definitions are exported
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dump CONTENT=METADATA_ONLY

3. DIRECTORY
specifies the directory where the dump files and log files are located.
DIRECTORY=directory_object
Directory_object is used to specify the name of the directory object. It should be noted that the directory object is an object created using the CREATE DIRECTORY statement, not the OS directory
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dump

4. DUMPFILE
is used to specify the name of the dump file, the default name is expdat.dmp
DUMPFILE=[directory_object:]file_name [,….]
Directory_object specifies the name of a directory object, and file_name specifies the name of the dump file. Note that if you do not specify directory_object, the export tool automatically uses the directory object specified by the DIRECTORY option
Expdp scott/tiger DIRECTORY=dump1 DUMPFILE=dump2:a.dmp

5. ESTIMATE
specifies the method for estimating the disk space occupied by the exported table. The default value is BLOCKS
ESTIMATE={BLOCKS | STATISTICS}
When set to BLOCKS, Oracle estimates the space occupied by an object as the number of data blocks occupied by the object multiplied by the data block size; when set to STATISTICS, the space is estimated from the most recent statistics
Expdp scott/tiger TABLES=emp ESTIMATE=STATISTICS DIRECTORY=dump DUMPFILE=a.dump

6. ESTIMATE_ONLY
specifies whether to only estimate the disk space the export job will occupy. The default value is N.
ESTIMATE_ONLY={Y | N}
When set to Y, the export tool only estimates the disk space occupied by the objects and does not execute the export job; when set to N, the disk space is estimated and the export is also performed.
Expdp scott/tiger ESTIMATE_ONLY=y NOLOGFILE=y

7. EXCLUDE (see 2. Exclude exporting the specified object of the specified type in the user)
This option specifies the object types, or specific objects, to exclude when the operation is performed.
EXCLUDE=object_type[:name_clause] [,….]
Object_type specifies the object type to exclude, and name_clause specifies the specific objects to exclude. EXCLUDE and INCLUDE cannot be used at the same time.
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp EXCLUDE=VIEW

8. FILESIZE
specifies the maximum size of each exported file; the default is 0 (meaning there is no limit on file size)
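For example (a sketch; the %U substitution variable makes expdp generate a_01.dmp, a_02.dmp, and so on as each file reaches the limit):
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a_%U.dmp FILESIZE=100M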

9. FLASHBACK_SCN
specifies that table data be exported as of a specific SCN
FLASHBACK_SCN=scn_value
Scn_value is used to identify the SCN value. FLASHBACK_SCN and FLASHBACK_TIME cannot be used at the same time.
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp FLASHBACK_SCN=358523

10. FLASHBACK_TIME
specifies that table data be exported as of a specific point in time
FLASHBACK_TIME="TO_TIMESTAMP(time_value)"
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp FLASHBACK_TIME="TO_TIMESTAMP('25-08-2004 14:35:00','DD-MM-YYYY HH24:MI:SS')"

11. FULL
specifies a full database export; the default is N
FULL={Y | N}
When set to Y, a full database export is performed.
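For example (a sketch; a full export requires a suitably privileged account, e.g. one with the EXP_FULL_DATABASE role):
Expdp system/manager DIRECTORY=dump DUMPFILE=full.dmp FULL=y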

12. HELP
Specifies whether to display help for the EXPDP command-line options; the default is N.
When set to Y, help for the export options is displayed.
Expdp help=y

13. INCLUDE (see 1. Include exporting the specified object of the specified type in the user)
specifies the object types, and specific objects, to include in the export
INCLUDE = object_type[:name_clause] [,…]
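For example (a sketch, following the shell escaping style used in the examples later in this article):
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp INCLUDE=TABLE:\"LIKE \'EMP%\'\"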

14. JOB_NAME
specifies the name of the export job; the default is of the form SYS_XXX
JOB_NAME=jobname_string
SELECT * FROM DBA_DATAPUMP_JOBS;--View existing jobs
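For example (a sketch with a hypothetical job name, which you can later attach to):
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp JOB_NAME=my_export_job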

15. LOGFILE
specifies the name of the exported log file, and the default name is export.log
LOGFILE=[directory_object:]file_name
Directory_object specifies the name of a directory object, and file_name specifies the name of the export log file. If you do not specify directory_object, the export tool automatically uses the directory object given by the DIRECTORY option.
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp logfile=a.log

16. NETWORK_LINK
specifies the name of a database link. If you want to export objects of a remote database to a dump file on the local instance, you must set this option.
For example: expdp gwm/gwm directory=dir_dp NETWORK_LINK=igisdb tables=p_street_area dumpfile=p_street_area.dmp logfile=p_street_area.log job_name=my_job
igisdb is the database link between the destination database and the source database,
dir_dp is a directory on the destination database.
Also note that even if you use a connection string (@fgisdb) directly, expdp is a server-side tool, so the files it generates are stored on the server by default

17. NOLOGFILE
This option specifies that no export log file be generated; the default value is N.

18. PARALLEL
specifies the number of parallel processes to perform the export operation. The default value is 1.
Note: the degree of parallelism should not exceed 2 times the number of CPUs. If there are 2 CPUs, you can set PARALLEL to 2; the import then runs faster than with PARALLEL=1.
    As for the exported files: if PARALLEL is set to 2 but only one dump file is specified, the export speed does not improve much, because both processes write to the same file and compete for it. So specify two dump files, as follows:
    expdp gwm/gwm directory=d_test dumpfile=gwmfile1.dp,gwmfile2.dp parallel=2

19. PARFILE
specifies the name of the export parameter file
PARFILE=[directory_path] file_name
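A parameter file also avoids the OS-specific quote escaping needed on the command line. A sketch with a hypothetical exp.par:
directory=dump
dumpfile=a.dmp
schemas=scott
include=TABLE:"LIKE 'EMP%'"
Then run: Expdp scott/tiger PARFILE=exp.par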

20. QUERY
is used to specify the where condition for filtering and exporting data
QUERY=[schema.][table_name:]query_clause
Schema specifies the schema name, table_name specifies the table name, and query_clause specifies the condition clause. The QUERY option cannot be used together with options such as CONTENT=METADATA_ONLY, ESTIMATE_ONLY, and TRANSPORT_TABLESPACES.
Expdp scott/tiger directory=dump dumpfile=a.dmp Tables=emp query='WHERE deptno=20'

21. SCHEMAS
This option specifies a schema-mode export; the default is the current user's schema.

22. STATUS
specifies the frequency, in seconds, at which the detailed status of the export job is displayed; the default value is 0

23. TABLES
specifies a table-mode export
TABLES=[schema_name.]table_name[:partition_name][,...]
Schema_name specifies the schema name, table_name specifies the name of the table to export, and partition_name specifies the name of the partition to export.
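For example (a sketch; sales and sales_q1 are hypothetical table and partition names):
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp TABLES=sales:sales_q1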

24. TABLESPACES
specifies the list of table spaces to be exported

25. TRANSPORT_FULL_CHECK
This option specifies how dependencies between the tablespaces being transported and those not being transported are checked. The default is N.
When set to Y, the export job checks for two-way dependencies: if a tablespace in the transport set contains a table whose index is outside the set, or an index whose table is outside the set, an error is reported. When set to N, only one-way dependencies are checked: if a tablespace in the set contains an index whose table is outside the set, an error is reported; but if it contains a table whose index is outside the set, no error is reported.

26. TRANSPORT_TABLESPACES
specifies a transportable tablespace mode export
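For example (a sketch; the tablespace must first be made read-only for a transportable export):
alter tablespace users read only;
Expdp system/manager DIRECTORY=dump DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=users TRANSPORT_FULL_CHECK=y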

27. VERSION
specifies the database version for exported objects; the default value is COMPATIBLE.
VERSION={COMPATIBLE | LATEST | version_string}
When set to COMPATIBLE, object metadata is generated according to the COMPATIBLE initialization parameter; when set to LATEST, it is generated according to the actual version of the database. version_string specifies a database version string.
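For example (a sketch; useful when the dump file must be imported into an older release):
Expdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp VERSION=10.2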

Calling EXPDP

Steps for exporting with the Data Pump tool:
1. Create DIRECTORY
create directory dir_dp as 'D:\oracle\dir_dp';
2. Authorize
Grant read,write on directory dir_dp to lttfm;
--View directory and permissions
SELECT privilege, directory_name, DIRECTORY_PATH FROM user_tab_privs t, all_directories d
 WHERE t.table_name(+) = d.directory_name ORDER BY 2, 1;
3. Execute export
expdp lttfm/lttfm@fgisdb schemas=lttfm directory=dir_dp dumpfile=expdp_test1.dmp logfile=expdp_test1.log;

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1
With the Partitioning, OLAP and Data Mining options
Starting "LTTFM"."SYS_EXPORT_SCHEMA_01": lttfm/********@fgisdb schemas=lttfm directory=dir_dp dumpfile=expdp_test1.dmp logfile=expdp_test1.log;
Remarks:
   1. directory=dir_dp must be placed before the file parameters; if you put it at the end, it prompts ORA-39002: invalid operation
                                                             ORA-39070: Unable to open the log file.
                                                             ORA-39087: directory name DATA_PUMP_DIR is invalid
     
   2. During the export, Data Pump created and used a job named SYS_EXPORT_SCHEMA_01; this is the JOB name used by the Data Pump export. If you do not specify a JOB name for the export, a default JOB name is generated. If you do specify a JOB name, the specified name is used, as
     follows:
     expdp lttfm/lttfm@fgisdb schemas=lttfm directory=dir_dp dumpfile=expdp_test1.dmp logfile=expdp_test1.log job_name=my_job1;
   3. Do not put a semicolon at the end of the export statement; otherwise the job name recorded in the job table for the statement above becomes 'my_job1;' rather than my_job1, and expdp lttfm/lttfm attach=lttfm.my_job1 then keeps reporting that the job table cannot be found.
   4. The directory you create must be on the machine where the database resides. Otherwise it also prompts:

ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-39087: directory name DATA_PUMP_DIR is invalid
 

Commands used during the export:
   1) Ctrl+C: during execution you can press Ctrl+C to exit the current logging mode and enter the interactive command mode; after exiting, the export operation does not stop
   2) Export> status -- view the status and related information of the current JOB
   3) Export> stop_job -- pause the JOB (after the job is paused you leave the Export> mode)
   4) Re-enter the Export> mode: C:\Documents and Settings\Administrator>expdp lttfm/lttfm attach=lttfm.my_job1 -- no semicolon after the statement
   5) Export> start_job -- resume the paused JOB (it does not start executing again from the beginning)
   6) Export> continue_client -- resume the logging mode with this command; output resumes with: Restarting "LTTFM"."MY_JOB"
   7) Export> kill_job -- cancel the current JOB and release the related client sessions (deletes the job and deletes the dmp file at the same time)
   8) Export> exit_client -- exit the Export> mode with this command (you can re-enter it via step 4)
 Note: after the export completes, the job is removed automatically

Various modes of data pump export:
1. Export by table mode:
expdp lttfm/lttfm@fgisdb tables=lttfm.b$i_exch_info,lttfm.b$i_manhole_info dumpfile=expdp_test2.dmp logfile=expdp_test2.log directory=dir_dp job_name=my_job

2. Export according to query conditions:
expdp lttfm/lttfm@fgisdb tables=lttfm.b$i_exch_info dumpfile=expdp_test3.dmp logfile=expdp_test3.log directory=dir_dp job_name=my_job query='"where rownum<11"'

3. Export by tablespace:
Expdp lttfm/lttfm@fgisdb dumpfile=expdp_tablespace.dmp tablespaces=GCOMM.DBF logfile=expdp_tablespace.log directory=dir_dp job_name=my_job

4. Export by schema:
Expdp lttfm/lttfm DIRECTORY=dir_dp DUMPFILE=schema.dmp SCHEMAS=lttfm,gwm

5. Export the entire database:
expdp lttfm/lttfm@fgisdb dumpfile=full.dmp full=y logfile=full.log directory=dir_dp job_name=my_job


Use exclude, include to export data
1. Include: export the specified objects of the specified type under the user
--export only the tables starting with B under the lttfm user, including the indexes and comments associated with those tables, but not other object types such as procedures:
expdp lttfm/lttfm@fgisdb dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job include=TABLE:\"LIKE \'B%\'\"

--Export all tables under the lttfm user except B$:
expdp lttfm/lttfm@fgisdb schemas=lttfm dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job include=TABLE:\"NOT LIKE \'B$%\'\"

-- Only export all stored procedures under the lttfm user:
expdp lttfm/lttfm@fgisdb schemas=lttfm dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job include=PROCEDURE

2. Exclude: exclude the specified objects of the specified type under the user from the export
--export all objects under the lttfm user except those of type TABLE. Since the tables are not exported, the indexes, constraints, and other object types associated with the tables are not exported either:
expdp lttfm/lttfm@fgisdb schemas=lttfm dumpfile=exclude_1.dmp logfile=exclude_1.log directory=dir_dp job_name=my_job exclude=TABLE

--Export all tables under the lttfm user except B$:
expdp lttfm/lttfm@fgisdb dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job exclude=TABLE:\"LIKE \'b$%\'\"

--export all objects under the lttfm user, but for the table type export only the tables starting with b$:
expdp lttfm/lttfm@fgisdb dumpfile=include_1.dmp logfile=include_1.log directory=dir_dp job_name=my_job exclude=TABLE:\"NOT LIKE \'b$%\'\"


Introduction to IMPDP

The IMPDP command line options have many similarities with EXPDP, and the differences are:
1. REMAP_DATAFILE
This option is used to change the name of the source data file into the name of the target data file. This option may be required when moving the table space between different platforms. REMAP_DATAFIEL
= source_datafile:target_datafile
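For example (a sketch with hypothetical file paths; because of the quotes, this parameter is often easier to put in a parameter file):
Impdp system/manager DIRECTORY=dump DUMPFILE=tts.dmp REMAP_DATAFILE='C:\db1\tbs1.f':'/u01/data/tbs1.f'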

2. REMAP_SCHEMA
This option is used to load all objects of the source schema into the target schema.
REMAP_SCHEMA=source_schema:target_schema
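For example (a sketch that loads scott's objects into a hypothetical tim schema):
Impdp system/manager DIRECTORY=dump DUMPFILE=a.dmp REMAP_SCHEMA=scott:tim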

3. REMAP_TABLESPACE
imports all objects of the source tablespace into the target tablespace
REMAP_TABLESPACE=source_tablespace:target_tablespace

4. REUSE_DATAFILES
This option specifies whether to overwrite existing data files when creating a table space. The default is N
REUSE_DATAFILES={Y | N}

5. SKIP_UNUSABLE_INDEXES
specifies whether to skip unusable indexes when importing, the default is N

6. SQLFILE
specifies that the DDL operations the import would execute be written to a SQL script instead
SQLFILE=[directory_object:]file_name
Impdp scott/tiger DIRECTORY=dump DUMPFILE=tab.dmp SQLFILE=a.sql

7. STREAMS_CONFIGURATION
specifies whether to import stream metadata (Stream Metadata); the default value is Y.

8. TABLE_EXISTS_ACTION
This option is used to specify the operation to be performed by the import job when the table already exists. The default is SKIP
TABLE_EXISTS_ACTION={SKIP | APPEND | TRUNCATE | REPLACE}
When this option is set to SKIP, the import job skips the existing table and processes the next object; when set to APPEND, data is appended; when set to TRUNCATE, the import job truncates the table and then appends new data to it; when set to REPLACE, the import job drops the existing table, re-creates it, and appends the data. Note that the TRUNCATE option does not apply to clustered tables or to the NETWORK_LINK option
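For example (a sketch that appends the dumped rows to an existing emp table):
Impdp scott/tiger DIRECTORY=dump DUMPFILE=a.dmp TABLES=emp TABLE_EXISTS_ACTION=APPEND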

9. TRANSFORM
This option specifies whether to modify the DDL statements that create objects.
TRANSFORM=transform_name:value[:object_type]
Transform_name specifies the transform name: SEGMENT_ATTRIBUTES covers segment attributes (physical attributes, storage attributes, tablespace, logging, and similar information), and STORAGE covers segment storage attributes. value specifies whether to include segment attributes or segment storage attributes, and object_type specifies the object type.
Impdp scott/tiger directory=dump dumpfile=tab.dmp Transform=segment_attributes:n:table

10. TRANSPORT_DATAFILES
This option specifies the data files to be imported into the target database when transporting tablespaces.
TRANSPORT_DATAFILES=datafile_name
Datafile_name specifies the data files copied to the target database.
Impdp system/manager DIRECTORY=dump DUMPFILE=tts.dmp TRANSPORT_DATAFILES='/user01/data/tbs1.f'


Calling IMPDP

impdp import modes:
1. Import by table: import the tables in the p_street_area.dmp file (the file was exported by the gwm user with schemas=gwm):
impdp gwm/gwm@fgisdb dumpfile=p_street_area.dmp logfile=imp_p_street_area.log directory=dir_dp tables=p_street_area job_name=my_job

2. Import by user (the user's definition is imported as well; that is, even if the user does not yet exist, the import can be done directly)
impdp gwm/gwm@fgisdb schemas=gwm dumpfile=expdp_test.dmp logfile=expdp_test.log directory=dir_dp job_name=my_job

3. Import directly over the network, without generating a dmp file via expdp:
-- import table p_street_area from the source database into the target database
impdp gwm/gwm directory=dir_dp NETWORK_LINK=igisdb tables=p_street_area logfile=p_street_area.log job_name=my_job
igisdb is the database link between the destination database and the source database, and dir_dp is a directory on the destination database.


  4. Use the remap_tablespace parameter to switch tablespaces
  -- export all data under the gwm user
expdp system/orcl directory=data_pump_dir dumpfile=gwm.dmp SCHEMAS=gwm
Note: if the user's data is exported by the sys user, the export includes the user creation and grants; exporting with the user's own account does not include these.
--the following imports all data under the gwm user into the gcomm tablespace (it was originally in the gmapdata tablespace)
impdp system/orcl directory=data_pump_dir dumpfile=gwm.dmp remap_tablespace=gmapdata:gcomm


Keyword description of exp and imp
exp:
keyword       description (default value)
------------------------------
USERID        username/password
BUFFER        size of data buffer
FILE          output file (EXPDAT.DMP)
COMPRESS      compress extents into one extent (Y)
GRANTS        export grants (Y)
INDEXES       export indexes (Y)
DIRECT        direct path (N) -- direct-path export is faster
LOG           log file of screen output
ROWS          export data rows (Y)
CONSISTENT    cross-table consistency (N)
FULL          export entire file (N)
OWNER         list of owner usernames
TABLES        list of table names
RECORDLENGTH  length of IO record
INCTYPE       incremental export type
RECORD        track incremental export (Y)
TRIGGERS      export triggers (Y)
STATISTICS    analyze objects (ESTIMATE)
PARFILE       parameter file name
CONSTRAINTS   export constraints (Y)
OBJECT_CONSISTENT    transaction set to read only during object export (N)
FEEDBACK      show progress every x rows (0)
FILESIZE      maximum size of each dump file
FLASHBACK_SCN     SCN used to set the session snapshot back to
FLASHBACK_TIME    time used to find the SCN closest to the specified time
QUERY         select clause used to export a subset of a table
RESUMABLE     suspend when a space-related error is encountered (N)
RESUMABLE_NAME    text string used to identify the resumable statement
RESUMABLE_TIMEOUT wait time for RESUMABLE
TTS_FULL_CHECK    perform full or partial dependency check for TTS
TABLESPACES   list of tablespaces to export
TRANSPORT_TABLESPACE  export transportable tablespace metadata (N)
TEMPLATE      template name which invokes iAS mode export


Commonly used exp keywords

1. full: used to export the entire database; used together with rows=n, it exports only the structure of the entire database.
   E.g.: exp userid=gwm/gwm file=/test.dmp log=test.log full=y rows=n direct=y
2. owner and tables: used to define the objects exp exports.
   E.g.: exp userid=gwm/gwm file=/test.dmp log=test.log owner=gwm tables=(table1,table2)
3. buffer and feedback: if a large amount of data is exported, consider using these two parameters.
   E.g.: exp userid=gwm/gwm file=/test.dmp log=test.log feedback=10000 buffer=100000000 tables=(table1,table2)
4. file and log: used to specify the names of the backup dmp file and the log file.
5. compress: whether to compress the exported extents into one initial extent; the default is y.
6. filesize: if the exported data file is large, use this parameter to keep each file no larger than 2g.
   E.g.: exp userid=gwm/gwm file=/test1,test2,test3,test4,test5 filesize=2G log=test.log
       This creates test1.dmp, test2.dmp, and so on, each 2g in size.

imp keyword descriptions
keyword       description (default value)
--------------------------------
USERID        username/password
FULL          import entire file (N)
BUFFER        size of data buffer
FROMUSER      list of owner usernames
FILE          input file (EXPDAT.DMP)
TOUSER        list of usernames
SHOW          just list file contents (N)
TABLES        list of table names
IGNORE        ignore create errors (N)
RECORDLENGTH  length of IO record
GRANTS        import grants (Y)
INCTYPE       incremental import type
INDEXES       import indexes (Y)
COMMIT        commit array insert (N)
ROWS          import data rows (Y)
PARFILE       parameter file name
LOG           log file of screen output
CONSTRAINTS   import constraints (Y)

DESTROY       overwrite tablespace data file (N)
INDEXFILE     write table/index info to the specified file
SKIP_UNUSABLE_INDEXES  skip maintenance of unusable indexes (N)
FEEDBACK      show progress every x rows (0)
TOID_NOVALIDATE   skip validation of specified type ids
FILESIZE      maximum size of each dump file
STATISTICS    always import precomputed statistics
RESUMABLE     suspend when a space-related error is encountered (N)
RESUMABLE_NAME    text string used to identify the resumable statement
RESUMABLE_TIMEOUT wait time for RESUMABLE
COMPILE       compile procedures, packages, and functions (Y)
STREAMS_CONFIGURATION    import streams general metadata (Y)
STREAMS_INSTANTIATION    import streams instantiation metadata (N)

The following keywords only apply to transportable tablespaces:
TRANSPORT_TABLESPACE  import transportable tablespace metadata (N)
TABLESPACES   tablespaces to be transported into the database
DATAFILES     datafiles to be transported into the database
TTS_OWNERS    users that own data in the transportable tablespace set

Origin: blog.csdn.net/qq_39194322/article/details/120350504