Importing and Exporting Oracle data

This article is based on Oracle 12c on the Windows operating system.

1. Executing a single SQL file with sqlplus

1. Run the sqlplus login command: sqlplus username/password@host:port/service_name as sysdba (an ordinary user must omit the trailing as sysdba)

2. Execute the SQL file inside sqlplus with the command: @file_path

 

2. Executing multiple SQL files with sqlplus

1. Create a new SQL file and add to it the paths of the SQL files to be executed:

@file_path1

@file_path2

...

2. Execute this new SQL file in sqlplus (again with @file_path).
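The steps above can be sketched as a small script that generates the driver file; the script names (create_tables.sql, load_seed_data.sql, run_all.sql) are placeholders, not files from the original article:

```shell
# Build a driver script run_all.sql that executes two other SQL scripts
# in order. All file names here are hypothetical examples.
cat > run_all.sql <<'EOF'
@create_tables.sql
@load_seed_data.sql
EOF

# Inside sqlplus you would then execute:  @run_all.sql
cat run_all.sql
```

Each `@` line runs one script, so the driver file controls the execution order of the whole batch.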

 

3. The Oracle export and import tools expdp and impdp

Notes on the use of expdp and impdp:

exp and imp are client-side utilities; they can be used on either the client or the server.
expdp and impdp are server-side utilities; they can only be used on the Oracle server, not on a client.
imp applies only to files exported by exp, not to files exported by expdp; impdp applies only to files exported by expdp, not to files exported by exp.

1. Export

1. Log in to sqlplus as sysdba, e.g. with the command: sqlplus / as sysdba

2. Create a logical directory (the command does not create a real directory on the operating system): create or replace directory directory_alias as 'absolute directory path on the operating system';

E.g.: create directory xyy as 'D:\cowry\cj';

3. View the administrator directories (and check that the directory exists on the operating system; Oracle does not care whether it exists, and if it does not, errors will occur later):

select * from dba_directories;

4. Grant the user to be exported read and write permission on the specified directory. Command: grant read, write on directory directory_alias to user_name;

5. Exit sqlplus and execute the following command in a cmd window:

1) Export a user

 expdp user_name/pwd@host:port/service_name dumpfile=xxx.dmp [logfile=xxx.log] directory=xxx [schemas=xxx] [compression=xxx] [parallel=xxx]

 

2) Export specified tables

 expdp user_name/pwd@host:port/service_name dumpfile=xxx.dmp directory=xxx tables=xxx,xxx1,...

 

There are other modes as well, such as exporting a tablespace or the entire database; see the relevant documentation.

dumpfile: name of the exported data file.

logfile: log file name.

directory: logical directory for the export; it must have been created in Oracle and the user granted read and write permission on it.

schemas: when exporting over a dblink, the exported user is not a local user, so schemas must be added to determine which user to export; similar to exp's owner, but with some differences.

compression: compress the dump file. Valid keyword values are ALL, DATA_ONLY, METADATA_ONLY (the default), and NONE.

parallel: degree of parallelism.

tables: the tables to export.

network_link: name of the database link to the remote source system. Required when exporting remotely over a dblink.
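expdp also accepts a parfile option, so the parameters above can be collected in a parameter file instead of being typed on the command line. A minimal sketch, assuming a schema hr, a directory alias hr_dump_dir, and a service orcl (all placeholders, not values from the article); the expdp call itself needs an Oracle server, so it is shown as a comment:

```shell
# Collect the expdp options in a parameter file. Every name here
# (hr, hr_dump_dir, hr_full.dmp, orcl) is a hypothetical example.
cat > export_hr.par <<'EOF'
dumpfile=hr_full.dmp
logfile=hr_full.log
directory=hr_dump_dir
schemas=hr
compression=ALL
parallel=2
EOF

# On a machine with the Oracle server installed you would then run:
#   expdp hr/hr_pwd@localhost:1521/orcl parfile=export_hr.par
cat export_hr.par
```

A parameter file keeps long option lists readable and reusable between exports.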

 

2. Import

1) Import a user

  impdp user_name/pwd@host:port/service_name dumpfile=xxx.dmp [logfile=xxx.log] directory=xxx [remap_schema=xxx] [parallel=xxx]

 

2) Import specified tables

  impdp user_name/pwd@host:port/service_name dumpfile=xxx.dmp directory=xxx [remap_schema=xxx:xxx1] [parallel=xxx] tables=xxx,xxx1,...

 

remap_schema: migrate one user's data to another user.
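The remap_schema usage above can likewise be sketched with a parameter file; the source:target syntax moves every object owned by the source schema into the target schema. All names (hr, hr_dev, hr_dump_dir) are hypothetical, and the impdp call is shown as a comment because it needs an Oracle server:

```shell
# Sketch of importing schema hr's dump into schema hr_dev.
# All names are placeholders; remap_schema uses source:target syntax.
cat > import_hr.par <<'EOF'
dumpfile=hr_full.dmp
logfile=hr_imp.log
directory=hr_dump_dir
remap_schema=hr:hr_dev
parallel=2
EOF

# With the Oracle server available you would run:
#   impdp system/sys_pwd@localhost:1521/orcl parfile=import_hr.par
cat import_hr.par
```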

 

4. Bulk-importing data with Oracle SQL*Loader

 

SQL*Loader features

SQL*Loader loads data from external files into tables of an Oracle database.

It has a powerful data parsing engine that puts virtually no restrictions on the format of the data in the data file. You can use SQL*Loader to do the following:

  • Load data over a network if the data file is on a different system than the database.

  • Load data from multiple data files during the same load session.

  • Load data into multiple tables during the same load session.

  • Specify the character set of the data.

  • Selectively load data (you can load records based on the records' values).

  • Manipulate the data with SQL functions before it is loaded.

  • Generate unique sequential key values in specified columns.

  • Use the operating system's file system to access the data files.

  • Load data from disk, tape, or named pipe.

  • Generate sophisticated error reports, which greatly aid troubleshooting.

  • Load arbitrarily complex object-relational data.

  • Use secondary data files to load LOBs and collections.

  • Use conventional, direct path, or external table loads.

 

You can use SQL*Loader in two ways: with or without a control file. A control file controls the behavior of SQL*Loader and lists the data files to be used. Using a control file gives you more control over the load operation, which may be ideal for more complex loads. For simple loads, however, you can use SQL*Loader without specifying a control file; this is known as SQL*Loader express mode.

The SQL*Loader parameters are described below; to view them yourself, enter sqlldr in a cmd window:

valid keywords:

  userid - ORACLE username/password
  control - control file name
  log - log file name
  bad - bad file name
  data - data file name
  discard - discard file name
  discardmax - number of discards to allow (default all)
  skip - number of logical records to skip (default 0)
  load - number of logical records to load (default all)
  errors - number of errors to allow (default 50)
  rows - number of rows in conventional path bind array or between direct path data saves
         (default: conventional path 64, direct path all)
  bindsize - size of conventional path bind array in bytes (default 256000)
  silent - suppress messages during run (header, feedback, errors, discards, partitions)
  direct - use direct path (default FALSE)
  parfile - parameter file: name of a file that contains parameter specifications
  parallel - do parallel load (default FALSE)
  file - file to allocate extents from
  skip_unusable_indexes - disallow/allow unusable indexes or index partitions (default FALSE)
  skip_index_maintenance - do not maintain indexes, mark affected indexes as unusable (default FALSE)
  commit_discontinued - commit loaded rows when load is discontinued (default FALSE)
  readsize - size of read buffer (default 1048576)
  external_table - use external table for load; NOT_USED, GENERATE_ONLY, EXECUTE
  columnarrayrows - number of rows for direct path column array (default 5000)
  streamsize - size of direct path stream buffer in bytes (default 256000)
  multithreading - use multithreading in direct path
  resumable - enable or disable resumable for current session (default FALSE)
  resumable_name - text string to help identify resumable statement
  resumable_timeout - wait time (in seconds) for RESUMABLE (default 7200)
  date_cache - size (in entries) of date conversion cache (default 1000)
  no_index_errors - abort load on any index errors (default FALSE)
  partition_memory - direct path partition memory limit to start spilling (kB) (default 0)
  table - table for express mode load
  date_format - date format for express mode load
  timestamp_format - timestamp format for express mode load
  terminated_by - terminated-by character for express mode load
  enclosed_by - enclosed-by character for express mode load
  optionally_enclosed_by - optionally enclosed-by character for express mode load
  characterset - character set for express mode load
  degree_of_parallelism - degree of parallelism for express mode load and external table load
  trim - trim type for express mode load and external table load
  csv - CSV-format data files for express mode load
  nullif - table-level nullif clause for express mode load
  field_names - field-names setting for the first record of the data file in express mode load
  dnfs_enable - enable or disable Direct NFS (dNFS) for input data files (default FALSE)
  dnfs_readbuffers - number of Direct NFS (dNFS) read buffers (default 4)
  sdf_prefix - prefix to append to the start of every LOB file and secondary data file
  help - display help messages (default FALSE)
  empty_lobs_are_null - set empty LOBs to null (default FALSE)
  defaults - direct path default value loading; EVALUATE_ONCE, EVALUATE_EVERY_ROW, IGNORE, IGNORE_UNSUPPORTED_EVALUATE_ONCE, IGNORE_UNSUPPORTED_EVALUATE_EVERY_ROW
  direct_path_lock_wait - wait for access to a table that is currently locked (default FALSE)

PLEASE NOTE: command-line parameters may be specified either by position or by keyword. An example of the former is 'sqlldr scott/tiger foo'; an example of the latter is 'sqlldr control=foo userid=scott/tiger'. Parameters specified by position must appear before, not after, parameters specified by keyword. For example, 'sqlldr scott/tiger control=foo logfile=log' is allowed, but 'sqlldr scott/tiger control=foo log' is not, even though the parameter 'log' is in the correct position.

 

1. Create a control file, named with the .ctl suffix, and add the following to it:

OPTIONS (SKIP=num,ROWS=num,DIRECT=true,BINDSIZE=num)

LOAD DATA

CHARACTERSET character set

INFILE "data file path" BADFILE "bad record file path, suffix .bad" DISCARDFILE "discard file path, suffix .dis"

If there are more data files, keep adding lines:
INFILE "xxx" BADFILE "xxx" DISCARDFILE "xxx"

......

[operation_type] INTO TABLE table_name
fields terminated by "xxx"

optionally enclosed by "xxx"
trailing nullcols
(col,col1,col2,...)

Parameter description:

SKIP: the number of lines to skip at the start, i.e. rows that are not read.

 

ROWS: for conventional path loads, the number of rows per commit.

 

BINDSIZE: the maximum buffer size per commit, in bytes (conventional path loads only); the default is 256000 bytes. An explicitly set BINDSIZE takes precedence over the buffer size computed from the ROWS parameter.

That is, BINDSIZE can constrain ROWS: if the buffer needed to commit ROWS rows is larger than BINDSIZE, the BINDSIZE setting prevails.
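The interaction can be illustrated with a small calculation; the per-row size of 4000 bytes is an assumed figure, not something from the article:

```shell
# Illustration of BINDSIZE capping ROWS (the row length is made up).
BINDSIZE=256000      # bytes per bind array (the default)
ROWS=100             # rows requested per commit
ROW_LEN=4000         # assumed bytes needed per row

# Buffer needed to hold ROWS rows: 100 * 4000 = 400000 bytes > BINDSIZE,
# so SQL*Loader effectively commits only as many rows as fit in BINDSIZE:
EFFECTIVE_ROWS=$((BINDSIZE / ROW_LEN))
echo "$EFFECTIVE_ROWS"   # prints 64
```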

 

DIRECT: Use the direct path (default FALSE).

 

CHARACTERSET: garbled Chinese characters usually occur because the Oracle character set encoding and the data file's encoding are inconsistent. A typical value is ZHS16GBK.

 

Operation type:

  insert - the default; requires the table to be empty before loading begins
  append - appends new records to the table
  replace - deletes the old records (with a delete from table statement) and replaces them with the newly loaded records
  truncate - deletes the old records (with a truncate table statement) and replaces them with the newly loaded records

 

fields terminated by: the delimiter between fields on each line.

 

optionally enclosed by: used when fields are wrapped in a character, e.g. fields enclosed in double quotes and separated by commas. If a wrapped field contains Chinese text, it may fail to import; you can try changing the source data file's encoding to ASCII.

 

trailing nullcols: if a column is empty in the source file, it is loaded into the table as null.

 

2. Run in a cmd window:

  sqlldr user_name/pwd@service_name control=control_file_path log=log_file_path

 

5. Bulk-exporting data with the Oracle UTL_FILE package

File I/O is very important in database development. If, for example, part of the data comes from a file on disk, you need an I/O interface to move the data into the database. PL/SQL has no direct I/O interface.

When debugging, you can usually get by with the put_line procedure of Oracle's built-in DBMS_OUTPUT package (i.e. screen I/O), but it can do nothing about disk file I/O.

Oracle does, however, provide a utility package, UTL_FILE, for file I/O; the functions in this package can be used to perform disk I/O operations.

 

1. Create a directory as sysdba: create or replace directory directory_alias as 'directory path';

 

2. Grant the user read and write access to the path: grant read, write on directory directory_alias to user_name;

 

3. Create the following stored procedure as the user:

create or replace PROCEDURE SQL_TO_FILE(P_QUERY IN VARCHAR2, P_DIR IN VARCHAR2, P_FILENAME IN VARCHAR2)
IS
  L_OUTPUT       UTL_FILE.FILE_TYPE;
  L_THECURSOR    INTEGER DEFAULT DBMS_SQL.OPEN_CURSOR;
  L_COLUMNVALUE  VARCHAR2(4000);
  L_STATUS       INTEGER;
  L_COLCNT       NUMBER := 0;
  L_SEPARATOR    VARCHAR2(1);
  L_DESCTBL      DBMS_SQL.DESC_TAB;
  P_MAX_LINESIZE NUMBER := 32000;
BEGIN
  -- Open the output file in the given directory for writing
  L_OUTPUT := UTL_FILE.FOPEN(P_DIR, P_FILENAME, 'W', P_MAX_LINESIZE);
  EXECUTE IMMEDIATE 'ALTER SESSION SET NLS_DATE_FORMAT=''YYYY-MM-DD HH24:MI:SS''';
  DBMS_SQL.PARSE(L_THECURSOR, P_QUERY, DBMS_SQL.NATIVE);
  DBMS_SQL.DESCRIBE_COLUMNS(L_THECURSOR, L_COLCNT, L_DESCTBL);
  -- Write a quoted header line with the column names
  FOR I IN 1 .. L_COLCNT LOOP
    UTL_FILE.PUT(L_OUTPUT, L_SEPARATOR || '"' || L_DESCTBL(I).COL_NAME || '"');
    DBMS_SQL.DEFINE_COLUMN(L_THECURSOR, I, L_COLUMNVALUE, 4000);
    L_SEPARATOR := ',';
  END LOOP;
  UTL_FILE.NEW_LINE(L_OUTPUT);
  L_STATUS := DBMS_SQL.EXECUTE(L_THECURSOR);
  -- Write each fetched row as a quoted, comma-separated line
  WHILE (DBMS_SQL.FETCH_ROWS(L_THECURSOR) > 0) LOOP
    L_SEPARATOR := '';
    FOR I IN 1 .. L_COLCNT LOOP
      DBMS_SQL.COLUMN_VALUE(L_THECURSOR, I, L_COLUMNVALUE);
      UTL_FILE.PUT(L_OUTPUT,
                   L_SEPARATOR || '"' ||
                   TRIM(BOTH ' ' FROM REPLACE(L_COLUMNVALUE, '"', '""')) || '"');
      L_SEPARATOR := ',';
    END LOOP;
    UTL_FILE.NEW_LINE(L_OUTPUT);
  END LOOP;
  DBMS_SQL.CLOSE_CURSOR(L_THECURSOR);
  UTL_FILE.FCLOSE(L_OUTPUT);
EXCEPTION
  WHEN OTHERS THEN
    RAISE;
END;

 

4. Execute the above procedure:

EXEC SQL_TO_FILE('para','para1','para2');

The first parameter is the SQL query, the second is the directory alias, and the third is the name of the exported file.

 


Origin www.cnblogs.com/nearWind/p/11518226.html