Explanation of expdp / impdp usage

Notes on using EXPDP and IMPDP:
EXP and IMP are client-side tools; they can be used on either the client or the server.
EXPDP and IMPDP are server-side tools; they can only be used on the Oracle server, not on a client.
IMP can only import files exported by EXP, not files exported by EXPDP; likewise, IMPDP can only import files exported by EXPDP, not files exported by EXP.
When running expdp or impdp, you can omit the username/password@instance credentials on the command line and enter them at the prompt instead, for example:
expdp schemas=scott dumpfile=expdp.dmp DIRECTORY=dpdata1;
1. Create a logical directory. This command does not create a real directory in the operating system; it should preferably be run by an administrator such as system.
create directory dpdata1 as 'd:\test\dump';
2. Check the administrator directories (also verify that the directory exists in the operating system: Oracle does not check whether it exists, and an error occurs later if it does not).
select * from dba_directories;
3. Grant the scott user read/write privileges on the specified directory; this should preferably be done by an administrator such as system.
grant read,write on directory dpdata1 to scott;
4. Export data
1) Export by user (schema)
expdp scott/tiger@orcl schemas=scott dumpfile=expdp.dmp DIRECTORY=dpdata1;
2) Export with parallel processes (parallel)
expdp scott/tiger@orcl directory=dpdata1 dumpfile=scott3.dmp parallel=40 job_name=scott3
3) Export by table name
expdp scott/tiger@orcl TABLES=emp,dept dumpfile=expdp.dmp DIRECTORY=dpdata1;
4) Export by query condition
expdp scott/tiger@orcl directory=dpdata1 dumpfile=expdp.dmp Tables=emp query='WHERE deptno=20';
5) Export by tablespace
expdp system/manager DIRECTORY=dpdata1 DUMPFILE=tablespace.dmp TABLESPACES=temp,example;
6) Export the entire database
expdp system/manager DIRECTORY=dpdata1 DUMPFILE=full.dmp FULL=y;
5. Restore data
1) Import to the specified user
impdp scott/tiger DIRECTORY=dpdata1 DUMPFILE=expdp.dmp SCHEMAS=scott;
2) Change the owner of the table
impdp system/manager DIRECTORY=dpdata1 DUMPFILE=expdp.dmp TABLES=scott.dept REMAP_SCHEMA=scott:system;
3) Import tablespace
impdp system/manager DIRECTORY=dpdata1 DUMPFILE=tablespace.dmp TABLESPACES=example;
4) Import database
impdp system/manager DIRECTORY=dump_dir DUMPFILE=full.dmp FULL=y;
5) Append data
impdp system/manager DIRECTORY=dpdata1 DUMPFILE=expdp.dmp SCHEMAS=system TABLE_EXISTS_ACTION=append;
(TABLE_EXISTS_ACTION accepts SKIP, APPEND, TRUNCATE, or REPLACE.)

Additional notes
Parallel operation (PARALLEL)
You can use the PARALLEL parameter to run an export with more than one thread, which can significantly speed up the job. Each thread creates a separate dump file, so the DUMPFILE parameter should have as many entries as the degree of parallelism. Instead of listing each file name explicitly, you can use a wildcard in the file name, for example:
expdp ananda/abc123 tables=CASES directory=DPDATA1 dumpfile=expCASES_%U.dmp parallel=4 job_name=Cases_Export
Note: the DUMPFILE parameter uses the wildcard %U, which indicates that files will be created as needed, named expCASES_nn.dmp, where nn starts at 01 and increases as needed.
In parallel mode, the status screen shows four worker processes. (In default mode, only one process is visible.) All worker processes extract data simultaneously and show their progress on the status screen.
It is important to put the data files and the dump directory on separate I/O channels. Otherwise, the overhead of maintaining the Data Pump job may outweigh the benefit of parallel threads and reduce performance. Parallelism is only effective when there are more tables than the PARALLEL value and the tables are large.
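If you need to check on or control a parallel job after it has started, you can reattach to it by name with the ATTACH parameter. A minimal sketch, reusing the credentials and job name from the example above:

```
expdp ananda/abc123 attach=Cases_Export
```

At the interactive prompt you can then issue commands such as STATUS, STOP_JOB, or START_JOB.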
Database Monitoring
You can also get more information about running Data Pump jobs from the database view. The main view for monitoring jobs is DBA_DATAPUMP_JOBS, which will tell you how many worker processes (column DEGREE) are working on the job.
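A minimal query against this view might look like the following (column names are from the standard Oracle data dictionary):

```
select owner_name, job_name, operation, state, degree
  from dba_datapump_jobs;
```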
Another important view is DBA_DATAPUMP_SESSIONS, which when combined with the above view and V$SESSION will give the session SID of the main foreground process.
select sid, serial# from v$session s, dba_datapump_sessions d where s.saddr = d.saddr;
This query displays the sessions of the foreground process. More useful information can be obtained from the alert log: when the process starts, the MCP and worker processes appear in it as follows:
kupprdp: master process DM00 started with pid=23, OS id=20530 to execute - SYS.KUPM$MCP.MAIN('CASES_EXPORT', 'ANANDA');
kupprdp: worker process DW01 started with worker id=1, pid=24, OS id=20532 to execute - SYS.KUPW$WORKER.MAIN('CASES_EXPORT', 'ANANDA');
kupprdp: worker process DW03 started with worker id=2, pid=25, OS id=20534 to execute - SYS.KUPW$WORKER.MAIN('CASES_EXPORT', 'ANANDA');
It shows the PID of the session started for the data pump operation. You can find the actual SID with the following query:
select sid, program from v$session where paddr in (select addr from v$process where pid in (23,24,25));
The PROGRAM column will show the process name corresponding to the alert log: DM for the master process or DW for a worker process. If a worker process, say SID 23, uses parallel query, you can see it in the view V$PX_SESSION, which will show all parallel query sessions running for the worker process with SID 23:
select sid from v$px_session where qcsid = 23;

select sid, serial#, sofar, totalwork from v$session_longops where opname = 'CASES_EXPORT' and sofar != totalwork;
The TOTALWORK column shows the total amount of work, and SOFAR shows how much has been done so far, so you can use them to estimate how much longer the job will take.
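As a sketch, you can compute the percentage complete directly in the query; V$SESSION_LONGOPS also exposes a TIME_REMAINING column (in seconds) with Oracle's own estimate:

```
select sid, serial#,
       round(sofar / totalwork * 100, 2) as pct_done,
       time_remaining
  from v$session_longops
 where opname = 'CASES_EXPORT'
   and sofar != totalwork;
```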
Importing and exporting between Oracle 10g and 11g
1) You can use a 10g client to connect to an 11g database; data exported from 11g this way can be imported into 10g.
2) Use expdp and impdp with the VERSION parameter, for example:
On the 11g server, back up the data with the expdp command:

EXPDP USERID ='SYS/cuc2009@cuc as sysdba' schemas=sybj directory=DATA_PUMP_DIR dumpfile=aa.dmp logfile=aa.log version=10.2.0.1.0

On the 10g server, restore the data with the impdp command.

Preparation:
1. Create the database
2. Create the tablespace
3. Create the user and grant privileges
4. Copy aa.dmp to the dpdump directory of the 10g installation
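Steps 2 and 3 of the preparation might be sketched as follows; the tablespace name, datafile path, and password below are illustrative placeholders, not values from the original post:

```
-- 2. create a tablespace for the imported schema (names/paths are examples)
create tablespace sybj_ts datafile '/u01/app/oradata/sybj01.dbf' size 200m;

-- 3. create the user and grant the privileges needed for the import
create user sybj identified by sybj_pwd default tablespace sybj_ts;
grant connect, resource to sybj;
grant read, write on directory DATA_PUMP_DIR to sybj;
```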

IMPDP USERID='SYS/cuc2009@cucf as sysdba' schemas=sybj directory=DATA_PUMP_DIR dumpfile=aa.dmp logfile=aa.log version=10.2.0.1.0

Reprinted from: http://www.cnblogs.com/huacw/p/3888807.html
