Dameng database logical backup (dexp/dimp)

 Logical backup and restore operate on the logical components of the database, such as tables, views, and stored procedures. Logical export (dexp) and logical import (dimp) are two command-line tools shipped with DM Database that implement logical backup and logical restore respectively.

Logical export and logical import work at four levels: database level, user level, schema level, and table level. The four levels are mutually exclusive and cannot be combined in a single command. The functionality provided by each level is listed below (example commands for each level follow the list):

      Database level (FULL): Export or import all objects in the entire database.

      User level (OWNER): Export or import all objects owned by one or more users.

      Schema level (SCHEMAS): Export or import all objects under one or more schemas.

      Table level (TABLES): Export or import one or more specified tables or table partitions.
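
For reference, the following hedged sketches show one command per level; the user, schema, and table names (test1, sales, sales.customer_address) are illustrative placeholders, and only parameters documented in the dexp HELP output shown later in this article are used.

Database level: dexp SYSDBA/SYSDBA file=db_full.dmp log=db_full.log full=Y
User level:     dexp SYSDBA/SYSDBA file=test1.dmp log=test1.log owner=test1
Schema level:   dexp SYSDBA/SYSDBA file=sales.dmp log=sales.log schemas=sales
Table level:    dexp SYSDBA/SYSDBA file=cust.dmp log=cust.log tables=sales.customer_address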

View the relevant parameters of logical backup and restore:

[dmdba@localhost bak]$ dexp help
dexp V8
version: 03134283904-20220630-163817-20005
format: ./dexp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)

Example: ./dexp SYSDBA/SYSDBA GRANTS=Y TABLES=(SYSDBA.TAB1,SYSDBA.TAB2,SYSDBA.TAB3)

USERID must be the first argument on the command line

Keyword Description (default)
-------------------------------------------- -------------------------------------
USERID username/password, format: {<username>[/<password>] | /}[@<connect_identifier>][<option>] [<os_auth>]
                    <connect_identifier> : [<svc_name> | host[:port] | <unixsocket_file>]
                    <option> : #{<extend_option>=<value>[,<extend_option>=<value>]...}
                               --the outer {} only encloses the parameters and must be kept when writing
                    <os_auth> : AS {SYSDBA|SYSSSO|SYSAUDITOR|USERS|AUTO}
FILE Export file name (dexp.dmp)
DIRECTORY Directory where the export file is located
FULL Full database export (N)
OWNER Export in user mode, format (user1,user2,...)
SCHEMAS Export in schema mode, format (schema1,schema2,...)
TABLES Export in table mode, format (table1,table2,...)
FUZZY_MATCH TABLES option supports fuzzy matching (N)
QUERY Clause used to select a subset of rows of the exported tables
PARALLEL Number of threads used during export
TABLE_PARALLEL Number of concurrent threads used within each table during export; falls back to a single thread in MPP mode
TABLE_POOL Number of table data buffers
EXCLUDE Ignore the specified objects,
                       format EXCLUDE=(CONSTRAINTS,INDEXES,ROWS,TRIGGERS,GRANTS) or
                            EXCLUDE=TABLES:table1,table2 or
                            EXCLUDE=SCHEMAS:sch1,sch2
INCLUDE Include only the specified objects,
                       format INCLUDE=(CONSTRAINTS,INDEXES,ROWS,TRIGGERS,GRANTS) or
                            INCLUDE=TABLES:table1,table2
CONSTRAINTS export constraints (Y)
TABLESPACE export objects with table space (N)
GRANTS export permissions (Y)
INDEXES export indexes (Y)
TRIGGERS export triggers (Y)
ROWS export data rows (Y)
LOG Specify the log file
NOLOGFILE does not use the log file (N)
NOLOG does not display the log information on the screen (N)
LOG_WRITE writes the log information to the file in real time: yes (Y), no (N)
DUMMY Interactive message handling: print (P), answer all interactions with yes (Y) or no (N)
PARFILE parameter file name
FEEDBACK display progress every x lines (0)
COMPRESS whether the exported data is compressed (N)
ENCRYPT whether the exported data is encrypted (N)
ENCRYPT_PASSWORD the encryption key of the exported data
ENCRYPT_NAME the name of the encryption algorithm
FILESIZE The maximum size of each dump file
FILENUM The number of files that can be generated by a template
DROP Drop the source tables after export, without cascading (N)
DESCRIBE Description information of the export, recorded in the dump file
COL_DEFAULT_SEPARATE Whether to export column defaults separately (Y)
HELP Print help information
 

The dexp tool can be invoked under two names: dexp and dexpdp. Both have exactly the same syntax. The only difference is that files exported by dexp are stored on the client side, while files exported by dexpdp are stored on the server side.
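
As a hedged sketch of the server-side variant, the command below reuses the same syntax with the DIRECTORY parameter; /dm8/bak is a hypothetical path that must already exist and be writable on the database server.

./dexpdp SYSDBA/SYSDBA file=full.dmp log=full.log directory=/dm8/bak full=Y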

[dmdba@localhost bak]$ dimp help
dimp V8
version: 03134283904-20220630-163817-20005
format: ./dimp KEYWORD=value or KEYWORD=(value1,value2,...,valueN)

Example: ./dimp SYSDBA/SYSDBA IGNORE=Y ROWS=Y FULL=Y

USERID must be the first argument on the command line

Keyword Description (default)
-------------------------------------------- -------------------------------------
USERID username/password, format: {<username>[/<password>] | /}[@<connect_identifier>][<option>] [<os_auth>]
                       <connect_identifier> : [<svc_name> | host[:port] | <unixsocket_file>]
                       <option> : #{<extend_option>=<value>[,<extend_option>=<value>]...}
                                  --the outer {} only encloses the parameters and must be kept when writing
                       <os_auth> : AS {SYSDBA|SYSSSO|SYSAUDITOR|USERS|AUTO}
FILE import file name (dexp.dmp)
DIRECTORY The directory where the import file is located
FULL Full database import (N)
OWNER Import in user mode, format (user1,user2,...)
SCHEMAS Import in schema mode, format (schema1,schema2,...)
TABLES Import in table mode, format (table1,table2,...)
PARALLEL Number of threads used during import
TABLE_PARALLEL Number of sub-threads used per table during import, effective when FAST_LOAD is Y
IGNORE Ignore creation errors (N)
TABLE_EXISTS_ACTION Action taken when a table to be imported already exists in the target database [SKIP | APPEND | TRUNCATE | REPLACE]
FAST_LOAD Whether to use dmfldr to import data (N)
FLDR_ORDER Use dmfldr to import data in strict order (Y)
COMMIT_ROWS Number of rows per batch commit (5000)
EXCLUDE Ignore the specified objects,
                           format EXCLUDE=(CONSTRAINTS,INDEXES,ROWS,TRIGGERS,GRANTS)
GRANTS import permissions (Y)
CONSTRAINTS import constraints (Y)
INDEXES import indexes (Y)
TRIGGERS Import triggers (Y)
ROWS Import data rows (Y)
LOG Specify the log file
NOLOGFILE Do not use the log file (N)
NOLOG Do not display log information on the screen (N)
LOG_WRITE Write log information to the file in real time (N): Yes (Y), No (N)
DUMMY Interactive message handling (P): print (P), answer all interactions with yes (Y) or no (N)
PARFILE parameter file name
FEEDBACK display progress every x lines (0)
COMPILE Compile procedures, packages and functions... (Y)
INDEXFILE Write the index/constraint information of the table into the specified file
INDEXFIRST Build the index first when importing (N)
REMAP_SCHEMA format (SOURCE_SCHEMA:TARGET_SCHEMA)
                       Import data from SOURCE_SCHEMA into TARGET_SCHEMA
ENCRYPT_PASSWORD Encryption key of the imported data
ENCRYPT_NAME Name of the encryption algorithm
SHOW/DESCRIBE Print out the information of the specified file (N)
TASK_THREAD_NUMBER Number of threads used by dmfldr to process user data
BUFFER_NODE_SIZE Size of the dmfldr file-read buffer
TASK_SEND_NODE_NUMBER Number of dmfldr send nodes [16,65535]
LOB_NOT_FAST_LOAD If a table contains large-object columns, do not use dmfldr, because dmfldr commits row by row
PRIMARY_CONFLICT Primary key conflict handling [IGNORE|OVERWRITE|OVERWRITE2], the default is to report an error
TABLE_FIRST Whether to import tables first (N): Yes (Y), No (N)
SHOW_SERVER_INFO Whether to print server information (N): Yes (Y), No (N)
IGNORE_INIT_PARA Ignore differences in database initialization parameters (0): CASE_SENSITIVE(1), LENGTH_IN_CHAR(2)
AUTO_FREE_KEY Whether to release the key after the data import completes (N): Yes (Y), No (N)
REMAP_TABLE format (SOURCE_SCHEMA.SOURCE_TABLE:TARGET_TABLE)
                       Import data from SOURCE_TABLE into TARGET_TABLE
REMAP_TABLESPACE format (SOURCE_TABLESPACE:TARGET_TABLESPACE)
                       Map the SOURCE_TABLESPACE tablespace to TARGET_TABLESPACE
HELP Print help information

Examples:

1.

dexp SYSDBA/SYSDBA file=full.dmp log=full.log full=Y

Full database export. The file and log paths are not specified here, so the exported files are written to the directory where the command is run. If you use an absolute path, make sure you have the necessary permissions on the target directory. The directory can also be specified with the DIRECTORY parameter.
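
As a hedged sketch of the DIRECTORY parameter, the command below writes the dump and log to an explicit directory; /home/dmdba/bak is a hypothetical path that the dmdba user must be able to write to.

dexp SYSDBA/SYSDBA file=full_dir.dmp log=full_dir.log directory=/home/dmdba/bak full=Y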

dexp SYSDBA/SYSDBA file=full1.dmp log=full1.log full=Y COMPRESS=Y

Full database compressed export. The test database contains almost no data, but the difference in file sizes shows that compression still has some effect:

[dmdba@localhost bak]$ ll -thr
total 208K
-rw-r--r-- 1 dmdba dinstall 130K Aug 29 02:07 full.dmp
-rw-r--r-- 1 dmdba dinstall  15K Aug 29 02:07 full.log
-rw-r--r-- 1 dmdba dinstall  43K Aug 29 02:09 full1.dmp
-rw-r--r-- 1 dmdba dinstall  15K Aug 29 02:09 full1.log

dexp SYSDBA/SYSDBA file=full2.dmp log=full2.log full=Y COMPRESS=Y PARALLEL=4

Full database compressed multi-threaded export; these parameters can be combined. Encryption parameters can also be appended, as in the sketch below.
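
As a sketch only, the following combines COMPRESS, PARALLEL, ENCRYPT, and ENCRYPT_PASSWORD (all listed in the HELP output); the key value is a placeholder, and the default encryption algorithm is assumed since ENCRYPT_NAME is omitted.

dexp SYSDBA/SYSDBA file=full3.dmp log=full3.log full=Y COMPRESS=Y PARALLEL=4 ENCRYPT=Y ENCRYPT_PASSWORD=Enc_123456

dimp exposes a matching ENCRYPT_PASSWORD parameter, so the same key must be supplied when importing an encrypted dump.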

2.

dexp SYSDBA/SYSDBA owner=test1 file=test01.dmp log=test01.log full=Y
dexp V8
[Warning] temporarily does not support multiple export modes...
[Warning] Export failed

When exporting at the user level, the whole database cannot be exported at the same time: the four levels are mutually exclusive and cannot be combined. Simply drop full=Y here. To export multiple users, list them after owner separated by commas.

dexp SYSDBA/SYSDBA owner=test1,test2,test3 file=test.dmp log=testlog

3.

dexp SYSDBA/SYSDBA file=cust.dmp log=cust.log tables=sales.customer_address query="where addressid=12"

Export a subset of a table based on a condition.
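
As a hedged sketch, such a dump could then be imported back with dimp; the APPEND action is illustrative, and only parameters listed in the dimp HELP output are used.

dimp SYSDBA/SYSDBA file=cust.dmp log=cust_imp.log tables=sales.customer_address table_exists_action=APPEND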

4.

dexp SYSDBA/SYSDBA file=sales.dmp log=sales.log schemas=sales exclude=tables:customer

Export all objects under a schema, but exclude the customer table. As the HELP output shows, indexes and constraints can also be excluded (see the sketch below). Take care to distinguish table-level exclusion from schema-level exclusion.
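
For instance, as a sketch based on the EXCLUDE format shown in HELP, the following would export the sales schema without its indexes and constraints; the file names are illustrative.

dexp SYSDBA/SYSDBA file=sales_noidx.dmp log=sales_noidx.log schemas=sales exclude=(INDEXES,CONSTRAINTS)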

5.

dimp userid=SYSDBA/SYSDBA remap_schema=SALES:SALES1 file=sales.dmp log=sales1.log parallel=4

Switch the schema during import. The target schema does not need to be created in advance; it is created automatically when the import command runs. (remap_tablespace was also tested here: the statement executes, but it has no actual effect.)
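
Along the same lines, REMAP_TABLE (documented in the HELP output) redirects a single table's data; the command below is a sketch only, with SALES.CUSTOMER and CUSTOMER_BAK as illustrative names, and has not been verified here.

dimp SYSDBA/SYSDBA file=sales.dmp log=sales_tab.log remap_table=SALES.CUSTOMER:CUSTOMER_BAK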

6.

dimp SYSDBA/SYSDBA file=sales.dmp log=sales_dimp.log schemas=sales table_exists_action=replace commit_rows=6

This controls what happens during import when a table already exists. The HELP output lists four options: [SKIP | APPEND | TRUNCATE | REPLACE]; choose according to your needs. Here the batch commit size is set to 6 rows.

Many other parameters are not tested one by one here; FILESIZE, FILENUM, and IGNORE are probably among the more commonly used ones.
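
As a final hedged sketch, IGNORE=Y (shown in the dimp example in HELP) suppresses creation errors so that an import can continue past objects that already exist; the file names are illustrative.

dimp SYSDBA/SYSDBA file=full.dmp log=full_imp.log full=Y ignore=Y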

Community address: https://eco.dameng.com

Origin: blog.csdn.net/duanpian_dba/article/details/126570721