Postgres data backup and recovery

This article focuses on application data migration: how to migrate databases, tables, selected columns, and auto-increment sequences in PostgreSQL quickly and reliably. It covers only the native commands and does not touch third-party tools; if you know of useful third-party tools, please share them.

Postgres version: 11.4

Docker deployment

Continuing from the previous Postgres content, the following introduces how to migrate the data in database tables.

Two native commands are covered: pg_dump and COPY.

1. pg_dump

pg_dump exports the structure, data, and sequence information of a specified database or table directly from the command line. Its output can be compressed, it is relatively efficient, and it is well suited to extracting or backing up large amounts of data.
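For example, for large databases the custom format (-Fc) with compression is often preferred, since the result can be restored selectively with pg_restore. A minimal sketch (the file path is illustrative, following the directory used later in this article):

pg_dump -U postgres -d sms -Fc -Z 9 -f /var/lib/postgresql/data/dumpsql/sms.dump
pg_restore -U postgres -d sms2 /var/lib/postgresql/data/dumpsql/sms.dump

Plain-text dumps, by contrast, are replayed with psql, as the examples below show.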

The command-line options are described below:

postgres@a:/$ pg_dump --help
pg_dump dumps a database as a text file or to other formats.

Usage:
  pg_dump [OPTION]... [DBNAME]

General options:
  -f, --file=FILENAME          output file or directory name
  -F, --format=c|d|t|p         output file format (custom, directory, tar,
                               plain text (default))
  -j, --jobs=NUM               use this many parallel jobs to dump
  -v, --verbose                verbose mode
  -V, --version                output version information, then exit
  -Z, --compress=0-9           compression level for compressed formats
  --lock-wait-timeout=TIMEOUT  fail after waiting TIMEOUT for a table lock
  --no-sync                    do not wait for changes to be written safely to disk
  -?, --help                   show this help, then exit

Options controlling the output content:
  -a, --data-only              dump only the data, not the schema
  -b, --blobs                  include large objects in dump
  -B, --no-blobs               exclude large objects in dump
  -c, --clean                  clean (drop) database objects before recreating
  -C, --create                 include commands to create database in dump
  -E, --encoding=ENCODING      dump the data in encoding ENCODING
  -n, --schema=SCHEMA          dump the named schema(s) only
  -N, --exclude-schema=SCHEMA  do NOT dump the named schema(s)
  -o, --oids                   include OIDs in dump
  -O, --no-owner               skip restoration of object ownership in
                               plain-text format
  -s, --schema-only            dump only the schema, no data
  -S, --superuser=NAME         superuser user name to use in plain-text format
  -t, --table=TABLE            dump the named table(s) only
  -T, --exclude-table=TABLE    do NOT dump the named table(s)
  -x, --no-privileges          do not dump privileges (grant/revoke)
  --binary-upgrade             for use by upgrade utilities only
  --column-inserts             dump data as INSERT commands with column names
  --disable-dollar-quoting     disable dollar quoting, use SQL standard quoting
  --disable-triggers           disable triggers during data-only restore
  --enable-row-security        enable row security (dump only content user has
                               access to)
  --exclude-table-data=TABLE   do NOT dump data for the named table(s)
  --if-exists                  use IF EXISTS when dropping objects
  --inserts                    dump data as INSERT commands, rather than COPY
  --load-via-partition-root    load partitions via the root table
  --no-comments                do not dump comments
  --no-publications            do not dump publications
  --no-security-labels         do not dump security label assignments
  --no-subscriptions           do not dump subscriptions
  --no-synchronized-snapshots  do not use synchronized snapshots in parallel jobs
  --no-tablespaces             do not dump tablespace assignments
  --no-unlogged-table-data     do not dump unlogged table data
  --quote-all-identifiers      quote all identifiers, even if not key words
  --section=SECTION            dump named section (pre-data, data, or post-data)
  --serializable-deferrable    wait until the dump can run without anomalies
  --snapshot=SNAPSHOT          use given snapshot for the dump
  --strict-names               require table and/or schema include patterns to
                               match at least one entity each
  --use-set-session-authorization
                               use SET SESSION AUTHORIZATION commands instead of
                               ALTER OWNER commands to set ownership

Connection options:
  -d, --dbname=DBNAME      database to dump
  -h, --host=HOSTNAME      database server host or socket directory
  -p, --port=PORT          database server port number
  -U, --username=NAME      connect as specified database user
  -w, --no-password        never prompt for password
  -W, --password           force password prompt (should happen automatically)
  --role=ROLENAME          do SET ROLE before dump

If no database name is supplied, then the PGDATABASE environment
variable value is used.

Report bugs to <[email protected]>.

Of the options above, this article uses the following:

  • -f, --file=FILENAME : specify the output file
  • -F, --format=c|d|t|p : select the output format (custom, directory, tar, or plain text); plain text is the default
  • -a, --data-only : dump only the data (including the current values of auto-increment sequences)
  • -s, --schema-only : dump only the schema (including the definitions of auto-increment sequences)
  • -t, --table=TABLE : dump only the named table(s)
  • -T, --exclude-table=TABLE : exclude the named table(s)
  • --column-inserts : dump data as INSERT statements with column names instead of COPY
  • -d, --dbname=DBNAME : specify the database
  • -h, --host=HOSTNAME : specify the host
  • -p, --port=PORT : specify the connection port
  • -U, --username=NAME : connect as the specified user (superuser privileges are typically needed to dump all objects)

1.1 Export the userinfo table structure, data, and related sequence from the sms database

pg_dump -U postgres -d sms -t userinfo -t userinfo_userid_seq -Fp -f /var/lib/postgresql/data/dumpsql/userinfoall

The command above uses the postgres superuser to export the structure and data of the userinfo table, together with the userinfo_userid_seq sequence, from the sms database into the file /var/lib/postgresql/data/dumpsql/userinfoall.

To import the dump from the sms database into the sms2 database:

psql -U postgres -d sms2 -f /var/lib/postgresql/data/dumpsql/userinfoall

SET
SET
SET
SET
SET
 set_config 
------------
 
(1 row)

SET
SET
SET
SET
SET
SET
CREATE TABLE
ALTER TABLE
ALTER TABLE
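The output above shows the schema objects being created. To confirm that the data arrived as well, a quick check against the target database (the query is illustrative):

psql -U postgres -d sms2 -c "select userid, account, showname from userinfo;"

Note that the dump contains ALTER TABLE ... OWNER TO statements (to the role test, as shown in the dump in 1.2); if that role does not exist in the target cluster, adding -O/--no-owner to the pg_dump command skips them.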

1.2 Export the userinfo table structure and sequence from the sms database

pg_dump -U postgres -d sms -t userinfo  -Fp  -s -f /var/lib/postgresql/data/dumpsql/userinfoschema

The above exports only the structure:

--
-- PostgreSQL database dump
--

-- Dumped from database version 11.4 (Debian 11.4-1.pgdg90+1)
-- Dumped by pg_dump version 11.4 (Debian 11.4-1.pgdg90+1)

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

--
-- Name: userinfo_userid_seq; Type: SEQUENCE; Schema: public; Owner: test
--

CREATE SEQUENCE public.userinfo_userid_seq
    START WITH 1
    INCREMENT BY 1
    NO MINVALUE
    NO MAXVALUE
    CACHE 1;


ALTER TABLE public.userinfo_userid_seq OWNER TO test;

SET default_tablespace = '';

SET default_with_oids = false;

--
-- Name: userinfo; Type: TABLE; Schema: public; Owner: test
--

CREATE TABLE public.userinfo (
    userid bigint DEFAULT nextval('public.userinfo_userid_seq'::regclass) NOT NULL,
    account character varying(255) NOT NULL,
    appkey character varying(255),
    createtime timestamp(6) without time zone,
    createuserid character varying(255),
    password character varying(255),
    showname character varying(255),
    status integer,
    updatetime timestamp(6) without time zone,
    updateuserid character varying(255)
);


ALTER TABLE public.userinfo OWNER TO test;

--
-- Name: userinfo userinfo_account_key; Type: CONSTRAINT; Schema: public; Owner: test
--

ALTER TABLE ONLY public.userinfo
    ADD CONSTRAINT userinfo_account_key UNIQUE (account);


--
-- Name: userinfo userinfo_pkey; Type: CONSTRAINT; Schema: public; Owner: test
--

ALTER TABLE ONLY public.userinfo
    ADD CONSTRAINT userinfo_pkey PRIMARY KEY (userid);


--
-- PostgreSQL database dump complete
--
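If the target database may already contain an older version of these objects, the -c/--clean and --if-exists options listed in the help output can be added, so that the dump emits DROP ... IF EXISTS statements before the CREATE statements. A sketch based on the command above:

pg_dump -U postgres -d sms -t userinfo -Fp -s -c --if-exists -f /var/lib/postgresql/data/dumpsql/userinfoschema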

1.3 Export the userinfo table data and sequence value from the sms database

pg_dump -U postgres -d sms -t userinfo  -Fp  -a -f /var/lib/postgresql/data/dumpsql/userinfodata

The above exports only the data:

--
-- PostgreSQL database dump
--

-- Dumped from database version 11.4 (Debian 11.4-1.pgdg90+1)
-- Dumped by pg_dump version 11.4 (Debian 11.4-1.pgdg90+1)

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

--
-- Data for Name: userinfo; Type: TABLE DATA; Schema: public; Owner: test
--

COPY public.userinfo (userid, account, appkey, createtime, createuserid, password, showname, status, updatetime, updateuserid) FROM stdin;
1	admin	21232f********c3	2016-09-01 13:41:57.200126	1	11111111111	admin	1	2021-11-22 15:56:31.265	1
\.


--
-- Name: userinfo_userid_seq; Type: SEQUENCE SET; Schema: public; Owner: test
--

SELECT pg_catalog.setval('public.userinfo_userid_seq', 1, true);


--
-- PostgreSQL database dump complete
--
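When the receiving side cannot replay COPY ... FROM stdin blocks (for example, a generic SQL client or a different database), the --column-inserts option described earlier makes pg_dump emit one INSERT per row instead. A sketch (the output file name is illustrative):

pg_dump -U postgres -d sms -t userinfo -Fp -a --column-inserts -f /var/lib/postgresql/data/dumpsql/userinfoinserts

The resulting file contains statements of the form INSERT INTO public.userinfo (userid, account, ...) VALUES (...); these restore more slowly than COPY but are far more portable.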


2. COPY

The COPY command is used for flexible exports: selected rows or columns, heterogeneous data, tables whose structures do not match, and so on.

2.1 Export data

-- Sync all columns
COPY (select * from userinfo where userid > 8 and userid < 10) TO '{somepath}' WITH csv;

-- Sync specified columns
COPY (select userid, username, sex, mobile from userinfo where userid > 8 and userid < 10) TO '{somepath}' WITH csv;
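Note that COPY ... TO '{somepath}' writes the file on the database server and requires superuser privileges or membership in the pg_write_server_files role. To write the file on the client machine instead, psql's \copy meta-command performs the same export through the connection:

\copy (select * from userinfo where userid > 8 and userid < 10) to 'userinfo.csv' with csv

Here 'userinfo.csv' is an illustrative client-side path.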

2.2 Import data

COPY userinfo FROM '{somepath}' WITH csv;
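If the CSV holds only a subset of columns, as in the second export in 2.1, list the target columns explicitly so the remaining columns fall back to their defaults. A sketch using that same column subset:

COPY userinfo (userid, username, sex, mobile) FROM '{somepath}' WITH csv;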

2.3 Reset the sequence value

COPY does not reset the sequence value after importing data, so later inserts may generate duplicate primary keys. The sequence therefore needs to be reset manually:

SELECT setval('userinfo_userid_seq', max(userid)) FROM userinfo;
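One caveat: max(userid) is NULL when the table is empty, and setval then does nothing because it is a strict function. A variant that also handles the empty-table case (the third argument false tells the sequence the value has not been consumed yet, so the next nextval returns it directly):

SELECT setval('userinfo_userid_seq', COALESCE((SELECT max(userid) FROM userinfo), 0) + 1, false);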

Source: blog.csdn.net/c_zyer/article/details/126305191