PostgreSQL database server backup and restore ------ SQL dump

Author: Small P
from: League of Legends Community
Summary: Like anything that contains valuable data, PostgreSQL databases should be backed up frequently. There are three ways to back up a PostgreSQL database, each with its own pros and cons; this article covers the SQL dump method.

 
  1. Overview
  2. Data backup
  2.1 pg_dump
  2.2 pg_dumpall
  2.3 Scheduled tasks
  3. Restoring from a dump
  3.1 Recovering with pg_dump
  3.2 Recovering with pg_dumpall
  3.3 ANALYZE
  4. Handling large databases
  4.1 Dumping a large database
  4.2 Restoring a large database
  4.3 Using split
  4.3.1 Splitting
  4.3.2 Merging
  5. About this document
  6. Update log
  7. Reference documents
  8. Related documents
  


+++++++++++++++++++++++++++++++++++++++++++
text
+++++++++++++++++++++++++++++++++++++++++++

Have you ever accidentally deleted a database table, or had the disk storing your database fail? If that happens, you will surely feel frustrated, because weeks of work vanish in an instant. Like anything that contains valuable data, PostgreSQL databases should be backed up frequently. If your data is stored in PostgreSQL tables, you can perform scheduled database backups and so avoid the tragedy described above. PostgreSQL comes with built-in tools for performing backups; after a system crash or accident, you can use these tools to "roll back" and restore the system to its original state from a previously saved snapshot.


1. Overview

There are three completely different ways to back up data from a PostgreSQL database:

SQL dump

File system level backup

Online Backup

Each backup method has its own advantages and disadvantages; the following sections describe the SQL dump method.


2. Data backup

The SQL dump method creates a text file containing SQL commands; when that file is fed back to the server, it rebuilds a database in the same state it was in at the time of the dump.


2.1 pg_dump

PostgreSQL comes with a built-in backup tool called pg_dump. This tool reads the specified database and produces a series of SQL statements that capture its contents as a snapshot, which can later be used for data recovery. A client connection to the server is required to perform the backup. The basic usage of the command is:

pg_dump dbname > outfile

Note: before doing this, make sure you have permission to log in to the server and access the database or tables you want to back up. You can log in to the server with PostgreSQL's command-line client, psql. Supply the host name (-h), the user name (-U), and the database name (-d); psql prompts for a password when one is required. This lets you verify that you are authorized to connect.

Using pg_dump is very simple: just enter the name of the database to be exported at the command prompt, as in the following example (change the path to your own PostgreSQL installation path):

xiaop@xiaop-laptop:~$ /usr/lib/postgresql/8.2/bin/pg_dump -D -h localhost -U xiaop(username) mydb(database name) > mydb.bak

The above command creates a file named mydb.bak containing the SQL commands needed to restore the database.

As you can see, pg_dump writes its results to standard output. Below we will see how useful this is.

pg_dump is a regular PostgreSQL client application (albeit a particularly clever one). This means that you can perform the backup from any remote host that has access to the database. But remember that pg_dump does not operate with special permissions. In particular, it must have read access to every table you want to back up, so in practice you almost always have to run it as a database superuser.

To specify which server pg_dump should contact, use the command-line options -h host and -p port. The default host is the local host, or whatever your PGHOST environment variable specifies. Similarly, the default port is taken from the PGPORT environment variable or (if it is not set) the compiled-in default. (The server normally has the same defaults, so this works out conveniently.)

Like any other PostgreSQL client application, pg_dump by default connects with the database user name that matches the current operating system user name. To override this, either specify the -U option or set the PGUSER environment variable. Remember that pg_dump connections are subject to the same client authentication mechanisms as any regular client application.

Backups created by pg_dump are internally consistent; that is, updates made to the database while pg_dump is running will not be included in the dump. pg_dump does not block other operations on the database while it works. (Exceptions are operations that require an exclusive lock, such as VACUUM FULL.)

Note: if your database schema relies on OIDs (for instance, as foreign keys), you must tell pg_dump to dump the OIDs as well. To do so, use the -o command-line option. "Large objects" are not dumped by default either; if you use large objects, see the pg_dump manual page.


2.2 pg_dumpall

If you want to back up all the databases in the system (rather than just one), you can use the pg_dumpall command instead of pg_dump. This command backs up every database the PostgreSQL instance knows about (including its system databases) to a single file. An example:

xiaop@xiaop-laptop:~$ /usr/lib/postgresql/8.2/bin/pg_dumpall -D -h localhost -U xiaop(username) >  all.bak

This backs up all the databases on localhost to the file all.bak.


2.3 Scheduled tasks

To ensure that your backups are always kept up to date, you can schedule recurring backups by adding a pg_dump or pg_dumpall command to the cron table (edit it with crontab -e). Here are two example cron entries. The first backs up the mydb database every day at 3:00, and the second backs up all databases every Friday at 21:00:

0 3 * * * /usr/lib/postgresql/8.2/bin/pg_dump -D -h localhost -U xiaop(username) mydb(database name) > /home/xiaop/mydb.bak
0 21 * * 5 /usr/lib/postgresql/8.2/bin/pg_dumpall -D -h localhost -U xiaop(username) > /home/xiaop/all.bak


3. Restoring from a dump


3.1 Recovering with pg_dump

Restoring data from a backup is even easier than performing the backup: all you have to do is execute the SQL commands in the backup file to recreate the database. If you backed up a single database with pg_dump, the backup will contain CREATE TABLE statements to recreate the source tables. Of course, you must first create a new, empty database to hold the data. You can use the createdb tool for this, which is also part of the PostgreSQL suite:

xiaop@xiaop-laptop:~$ /usr/lib/postgresql/8.2/bin/createdb mydb(database name)

You can now execute the SQL commands in the backup file to restore the database; the text file generated by pg_dump can be read by the psql program. The general command form for restoring from a dump is:

psql dbname < infile

As shown in the following example:

xiaop@xiaop-laptop:~$ /usr/lib/postgresql/8.2/bin/psql -h localhost -U xiaop(username) -d mydb(database name) < mydb.bak


3.2 Recovering with pg_dumpall

If you backed up all databases with pg_dumpall, there is no need to create a new database first, because the backup file already contains the necessary CREATE DATABASE calls. Simply feed the backup file to the psql command-line client, without specifying a target database:

xiaop@xiaop-laptop:~$ /usr/lib/postgresql/8.2/bin/psql -h localhost -U xiaop(username) < all.bak

Once the data recovery is complete, you can log in to the server and inspect the restored data.


3.3 ANALYZE

Once the restore is complete, it is wise to run ANALYZE on each database so that the optimizer has useful statistics. You can simply run vacuumdb -a -z to VACUUM ANALYZE all databases; this is equivalent to running VACUUM ANALYZE manually.


4. Handling large databases


4.1 Dumping a large database

Since PostgreSQL allows tables larger than the maximum file size your system allows, dumping such a table to a file can be problematic, because the resulting file may well exceed your system's maximum file size. Because pg_dump writes to standard output, you can use standard Unix tools to work around this problem. For example, compress the dump with your favorite compression program, such as gzip:

xiaop@xiaop-laptop:~$ pg_dump mydb(database name) | gzip > mydbBACK.gz


4.2 Restoring a large database

Restore with the following commands:

xiaop@xiaop-laptop:~$ createdb mydbNEW(new database name)
xiaop@xiaop-laptop:~$ gunzip -c mydbBACK.gz | psql mydbNEW

or
xiaop@xiaop-laptop:~$ cat mydbBACK.gz | gunzip | psql mydbNEW
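The compress-and-restore pipeline above can be exercised without a live PostgreSQL server by substituting any text stream for pg_dump. A minimal sketch (file names such as dump.sql are illustrative, not from the commands above):

```shell
# Simulate a small SQL dump, compress it, then decompress it,
# mirroring the pg_dump | gzip and gunzip -c | psql pipelines.
printf 'CREATE TABLE t (id int);\nINSERT INTO t VALUES (1);\n' > dump.sql
gzip -c dump.sql > dump.sql.gz        # stands in for: pg_dump mydb | gzip > mydbBACK.gz
gunzip -c dump.sql.gz > restored.sql  # stands in for: gunzip -c mydbBACK.gz | psql mydbNEW
cmp dump.sql restored.sql && echo "roundtrip ok"
```

Because gzip and gunzip both operate on streams, the dump never has to exist uncompressed on disk.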


4.3 Using split


4.3.1 Splitting

The split command allows you to break the output into pieces of a size the underlying file system can accept. For more on its usage, see "An introduction to the file splitting and merging tools split, cut, and cat". For example, to make each chunk 1 megabyte:

xiaop@xiaop-laptop:~$ pg_dump dbname | split -b 1m - filename


4.3.2 Merging

After splitting, you can restore with the following commands:

xiaop@xiaop-laptop:~$ createdb dbname
xiaop@xiaop-laptop:~$ cat filename* | psql dbname
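The split-then-merge roundtrip can likewise be verified on an ordinary file, independent of PostgreSQL. A minimal sketch (file names bigdump.sql and piece_ are illustrative):

```shell
# Generate a stand-in for a large dump, split it into 1 KB pieces,
# then reassemble them with cat and verify the result is identical.
seq 1 500 > bigdump.sql
split -b 1k bigdump.sql piece_   # produces piece_aa, piece_ab, ...
cat piece_* > rejoined.sql       # glob order matches split's suffix order
cmp bigdump.sql rejoined.sql && echo "merge ok"
```

cat piece_* works because split names its output files with lexicographically increasing suffixes, which is exactly the order in which the shell expands the glob.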


5. About this document

The other two PostgreSQL backup and restore methods, "file system level backup" and "online backup", will be discussed later. Most of the material here is drawn from the Chinese documentation; the goal is to make it easy for everyone to find, since detailed documentation is available in Chinese. Thanks for your corrections :)


6. Update log


7. Reference documents

"PostgreSQL 8.1 Chinese Documents"


8. Related documents

"PostgreSQL installation and simple usage"
"PostgreSQL configuration files and user permissions"
"PostgreSQL database user authentication"
"Routine maintenance of PostgreSQL databases"

Reproduced from: https://www.cnblogs.com/licheng/archive/2008/01/23/1050116.html

Origin: blog.csdn.net/weixin_33893473/article/details/92631000