From https://www.cnblogs.com/chjbbs/p/6480687.html — a well-written article; I tested it today and it works well.
Usage:
  pg_dump [OPTION]... [DBNAME]
The database name goes last; if it is not specified, pg_dump uses the database named by the PGDATABASE environment variable.
General options:
  -f, --file=FILENAME          output file or directory name
  -F, --format=c|d|t|p         output file format: custom, directory, tar, or plain text (the default)
  -j, --jobs=NUM               use this many parallel jobs to dump
  -v, --verbose                verbose mode
  -V, --version                output version information, then exit
  -Z, --compress=0-9           compression level for compressed formats
  --lock-wait-timeout=TIMEOUT  fail after waiting TIMEOUT for a table lock
  -?, --help                   show this help, then exit
Options controlling the output content:
  -a, --data-only              dump only the data, not the schema
  -b, --blobs                  include large objects in the dump
  -c, --clean                  clean (drop) database objects before recreating them
  -C, --create                 include commands to create the database in the dump (so the target database need not be created before importing)
  -E, --encoding=ENCODING      dump the data in encoding ENCODING
  -n, --schema=SCHEMA          dump only the named schema(s)
  -N, --exclude-schema=SCHEMA  do NOT dump the named schema(s)
  -o, --oids                   include OIDs in the dump
  -O, --no-owner               skip restoration of object ownership in plain-text format
  -s, --schema-only            dump only the schema, no data
  -S, --superuser=NAME         superuser user name to use in plain-text format
  -t, --table=TABLE            dump only the named table(s)
  -T, --exclude-table=TABLE    do NOT dump the named table(s)
  -x, --no-privileges          do not dump privileges (grant/revoke)
  --binary-upgrade             for use by upgrade utilities only
  --column-inserts             dump data as INSERT commands with column names
  --disable-dollar-quoting     disable dollar quoting, use SQL standard quoting
  --disable-triggers           disable triggers during data-only restore
  --exclude-table-data=TABLE   do NOT dump data for the named table(s)
  --inserts                    dump data as INSERT commands, rather than COPY
  --no-security-labels         do not dump security label assignments
  --no-synchronized-snapshots  do not use synchronized snapshots in parallel jobs
  --no-tablespaces             do not dump tablespace assignments
  --no-unlogged-table-data     do not dump unlogged table data
  --quote-all-identifiers      quote all identifiers, even if not key words
  --section=SECTION            dump the named section (pre-data, data, or post-data)
  --serializable-deferrable    wait until the dump can run without anomalies
  --use-set-session-authorization
                               use SET SESSION AUTHORIZATION commands instead of ALTER OWNER commands to set ownership
Connection options:
  -d, --dbname=DBNAME      name of the database to dump
  -h, --host=HOSTNAME      database server host or socket directory
  -p, --port=PORT          database server port number
  -U, --username=NAME      connect as the specified database user
  -w, --no-password        never prompt for a password
  -W, --password           force a password prompt (should happen automatically)
  --role=ROLENAME          do SET ROLE before the dump

If no database name is supplied, the value of the PGDATABASE environment variable is used.

Report bugs to <[email protected]>.
First, the plain-text script format:
Examples:
1. Export only the schema of the postgres database, without data (-s):
pg_dump -U postgres -s -f /postgres.sql postgres    (the last argument is the database name)
2. Export the postgres database, including data:
pg_dump -U postgres -f /postgres.sql postgres
3. Export the test01 table of the postgres database:
create database "test01" with owner = "postgres" encoding = 'utf-8';    (the single and double quotes must not be mixed up)
pg_dump -U postgres -t test01 -f /postgres.sql postgres
4. Export the data of table test01 of the postgres database as INSERT statements:
pg_dump -U postgres -t test01 --column-inserts -f /postgres.sql postgres
5. Restore the data into database bk01:
psql -U postgres -f /postgres.sql bk01
Second, using the archive formats:
pg_restore
pg_restore cannot restore a script in plain-text format; attempting it fails:
[root@localhost postgres-9.3.5]# pg_restore -U postgres -d bk01 /mnt/hgfs/window\ \&\ ubuntu\ Shared\ Folder/vendemo.sql
pg_restore: [archiver] input file appears to be a text format dump. Please use psql.
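A quick way to see why pg_restore rejected the file: custom-format archives begin with the magic bytes "PGDMP", while a plain-text dump is just SQL. The files below are hypothetical stand-ins, not real dumps, just to illustrate the check:

```shell
# Fabricated stand-ins: a custom-format archive starts with "PGDMP",
# a plain-text dump starts with SQL comments/statements.
printf 'PGDMP' > /tmp/fake_custom.dump
printf -- '--\n-- PostgreSQL database dump\n--\n' > /tmp/fake_plain.sql
head -c 5 /tmp/fake_custom.dump && echo ""   # prints PGDMP -> feed to pg_restore
head -n 1 /tmp/fake_plain.sql                # prints --    -> feed to psql
```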
Use pg_restore to rebuild the database from archive file formats.
1. Tar format. Back up:
pg_dump -U postgres -F t -f /vendemo.tar vendemo    (the backup is a bit over 800 kB)
Restore:
pg_restore -U postgres -d bk01 /vendemo.tar
2. Custom format. Back up:
pg_dump -U postgres -F c -f /vendemo.tar vendemo    (the backup is a bit over 300 kB)
Restore:
pg_restore -U postgres -d bk01 /vendemo.tar
Third, compressed backup and recovery (handling large databases):
1. Use compressed dumps. Use your favorite compression program, for example gzip.
Back up:
pg_dump -U postgres vendemo | gzip > /vendemo.gz    (the backup is only about 30 kB)
Restore:
gunzip -c /vendemo.gz | psql -U postgres bk02
or
cat /vendemo.gz | gunzip | psql -U postgres bk02
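A sketch of the pipeline above, with a tiny hypothetical SQL script standing in for real pg_dump output: gzip compresses on the way out, gunzip decompresses on the way back, and the stream arrives unchanged (in the real commands psql consumes it).

```shell
set -e
# Hypothetical stand-in for pg_dump output
printf 'CREATE TABLE t (i int);\nINSERT INTO t VALUES (1);\n' > /tmp/fake_dump.sql
# Compress, as in: pg_dump ... | gzip > dump.gz
gzip -c /tmp/fake_dump.sql > /tmp/fake_dump.sql.gz
# Decompress, as in: gunzip -c dump.gz | psql ... ; here we just verify
# the round-tripped stream matches the original byte for byte.
gunzip -c /tmp/fake_dump.sql.gz | cmp - /tmp/fake_dump.sql && echo "gzip round-trip OK"
```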
2. Use split. The split command lets you break the output into pieces of a size the operating system can accept. For example, to make each chunk 100 kilobytes:
Back up:
pg_dump -U postgres -d vendemo | split -b 100k - /vend/vend
The exported pieces are:
vendaa  100K
vendab  100K
vendac  100K
vendad  16K
Restore:
cat /vend/vend* | psql -U postgres bk02
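The same idea can be sketched without a database: a hypothetical 350 kB file stands in for pg_dump output, split breaks it into 100 kB chunks named vendaa, vendab, ..., and cat's shell glob reassembles them in order, byte for byte (in the real commands the cat output is piped to psql).

```shell
set -e
mkdir -p /tmp/vend_demo
# Hypothetical stand-in for pg_dump output (~350 kB)
head -c 350000 /dev/urandom > /tmp/vend_demo/dump.sql
# Split into 100 kB chunks: vendaa, vendab, vendac, vendad
split -b 100k /tmp/vend_demo/dump.sql /tmp/vend_demo/vend
# The glob expands in lexical order, so the chunks reassemble correctly
cat /tmp/vend_demo/vend* > /tmp/vend_demo/rejoined.sql
cmp /tmp/vend_demo/dump.sql /tmp/vend_demo/rejoined.sql && echo "split round-trip OK"
```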