[Backup Utility] mydumper

mydumper is a high-performance, multi-threaded backup and recovery tool for MySQL. Its developers come mainly from MySQL, Facebook, and SkySQL.


Features:
1: Lightweight, written in C
2: Roughly 10 times faster than mysqldump
3: Consistent snapshots of transactional and non-transactional tables (0.2.2 or later)
4: Fast file compression
5: Supports exporting binlogs
6: Multi-threaded recovery (0.2.1 or later)
7: Can run as a daemon for scheduled snapshots and continuous binary log collection (0.5.0 or later); see the daemon-mode sketch after this list
8: Open source (GNU GPLv3)
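
A minimal sketch of daemon mode, assuming mydumper was installed to /usr/local/bin; the output directory and log file paths below are placeholders, not values from the original post:

/usr/local/bin/mydumper --daemon --snapshot-interval 60 \
    --outputdir /data/backup --logfile /var/log/mydumper.log

In this mode mydumper keeps running, takes a new snapshot at each interval, and collects binary logs continuously between snapshots.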

Download URL and installation:
yum install glib2-devel mysql-devel zlib zlib-devel pcre pcre-devel gcc gcc-c++ cmake -y
wget https://launchpadlibrarian.net/225370879/mydumper-0.9.1.tar.gz
tar -xf mydumper-0.9.1.tar.gz
cd mydumper-0.9.1/
cmake .
make && make install
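
A quick check that the build and install succeeded (the /usr/local/bin prefix is the default cmake install location, which is an assumption here):

/usr/local/bin/mydumper --version
/usr/local/bin/myloader --version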

 


The following compares the sizes of daily percona-xtrabackup backups with a compressed mydumper backup:
[root@test-5-69 bak]# du -sh *
3.3G 2017-02-09_04-00-02
3.3G 2017-02-10_04-00-02
3.3G 2017-02-11_04-00-02
3.3G 2017-02-12_04-00-02
3.3G 2017-02-13_04-00-02
3.3G 2017-02-14_04-00-02
3.3G 2017-02-15_04-00-02
852M all_20170215


Advantage for development and test databases:
mydumper exports each table as its own file, which makes it very handy to recover a single table after an accidental change in development or testing. percona-xtrabackup backs up the whole instance and can only restore the whole instance, so restoring a single mis-operated table is cumbersome. A single-table restore from a mydumper backup is sketched below.
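
A minimal sketch of a single-table restore, using the jxorder.dsns table and the all_20170215 directory that appear in the listings below; the connection credentials are placeholders:

cd /my/bak/all_20170215
zcat jxorder.dsns-schema.sql.gz | mysql -uroot -p jxorder    # recreate the table definition
zcat jxorder.dsns.sql.gz | mysql -uroot -p jxorder           # reload the table data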


Storage layout of a mydumper backup:
[root@test-5-69 all_20170215]# ls
jxcommoninfo.orderinfo-schema.sql.gz jxorder.ord_OrderCount.sql.gz metadata
jxcommoninfo.orderinfo.sql.gz jxorder.ord_OrderExchangeCodeDetail-schema.sql.gz mysql.columns_priv-schema.sql.gz
jxcommoninfo-schema-create.sql.gz jxorder.ord_OrderExchangeCodeDetail.sql.gz mysql.db-schema.sql.gz
jxorder.checksums-schema.sql.gz jxorder.ord_OrderExt-schema.sql.gz mysql.db.sql.gz
jxorder.dsns-schema.sql.gz jxorder.ord_OrderExt.sql.gz mysql.event-schema.sql.gz
jxorder.dsns.sql.gz jxorder.ord_OrderMarkLog-schema.sql.gz mysql.func-schema.sql.gz
jxorder.dz_packingMaterialsOrder-schema.sql.gz jxorder.ord_OrderMark-schema.sql.gz mysql.help_category-schema.sql.gz
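
Each table gets a <db>.<table>-schema.sql.gz file (table definition) and a <db>.<table>.sql.gz file (table data), plus a plain-text metadata file recording when the dump ran and the binlog position it is consistent with. Roughly what that file looks like; the timestamps and positions here are illustrative, not from the original backup:

[root@test-5-69 all_20170215]# cat metadata
Started dump at: 2017-02-15 04:00:02
SHOW MASTER STATUS:
        Log: mysql-bin.000123
        Pos: 45678
Finished dump at: 2017-02-15 04:14:30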

 


[root@test-5-69 all_20170215]# mydumper --help
Application Options:
-B, --database            Database to back up; one run backs up one database, otherwise all databases (including mysql) are backed up.
-T, --tables-list         Comma-separated list of tables to back up.
-o, --outputdir           Directory to write the backup files to
-s, --statement-size      Size in bytes of the generated INSERT statements, default 1000000; do not set this too small or you will get errors like "Row bigger than statement_size for tools.t_serverinfo"
-r, --rows                Try to split tables into chunks of this many rows; turns off --chunk-filesize
-F, --chunk-filesize      Split table data into chunks of this output file size, in MB
-c, --compress            Compress the output files
-e, --build-empty-files   Create an empty file even when a table has no data
-x, --regex               Regular expression to match objects, e.g. 'db.table'
-i, --ignore-engines      Comma-separated list of storage engines to ignore
-m, --no-schemas          Do not dump table schemas
-d, --no-data             Do not dump table data
-G, --triggers            Dump triggers
-E, --events              Dump events
-R, --routines            Dump stored procedures and functions
-k, --no-locks            Do not execute the shared read lock. WARNING: this leads to inconsistent backups
--less-locking            Minimize locking time on InnoDB tables.
-l, --long-query-guard    Longest allowed running query in seconds, default 60; when exceeded the dump aborts with: "There are queries in PROCESSLIST running longer than 60s, aborting dump"
-K, --kill-long-queries   Kill long-running queries instead of aborting; otherwise the backup may fail with: "Lock wait timeout exceeded; try restarting transaction"


-D, --daemon              Enable daemon mode
-I, --snapshot-interval   Interval between dump snapshots, default 60; only used in daemon mode
-L, --logfile             Log file to use; defaults to standard output
--tz-utc                  SET TIME_ZONE='+00:00' at the top of the dump so TIMESTAMP data can be restored correctly between servers in different time zones; enabled by default, use --skip-tz-utc to disable.
--skip-tz-utc             Disable the TIME_ZONE setting above
--use-savepoints          Use savepoints to reduce metadata locking; requires the SUPER privilege
--success-on-1146         Do not increment the error count and issue a warning instead of a critical error when a table does not exist
--lock-all-tables         Use LOCK TABLE for all tables instead of FLUSH TABLES WITH READ LOCK
-U, --updated-since       Use UPDATE_TIME to dump only tables updated in the last U days
--trx-consistency-only    Transactional consistency only
-h, --host                The host to connect to
-u, --user                Username with privileges to run the dump
-p, --password            User password
-P, --port                TCP/IP port to connect to
-S, --socket              UNIX domain socket file to use for the connection
-t, --threads             Number of threads used for the backup, default 4
-C, --compress-protocol   Use compression on the MySQL connection
-V, --version             Show the program version and exit
-v, --verbose             Verbosity of output: 0 = silent, 1 = errors, 2 = warnings, 3 = info; default 2
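
Putting several of these options together, a typical compressed backup of one database could look like the following; the host, credentials, and output directory are placeholders rather than values from the original post:

mydumper -h 127.0.0.1 -u backup -p 'secret' -B jxorder -t 4 -c -G -E -R -o /my/bak/jxorder_$(date +%Y%m%d)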


[root@test-5-69 all_20170215]# myloader --help
Application Options:
-d, --directory           Directory of the backup files to import
-q, --queries-per-transaction  Number of queries per transaction, default 1000
-o, --overwrite-tables    Drop tables if they already exist; to use this, the backup must include the table schemas, otherwise the restore fails because the tables cannot be recreated
-B, --database            Database to restore into
-s, --source-db           Database to restore from
-e, --enable-binlog       Enable binary logging for the restored data
-h, --host                The host to connect to
-u, --user                Username with privileges to run the restore
-p, --password            User password
-P, --port                TCP/IP port to connect to
-S, --socket              UNIX domain socket file to use for the connection
-t, --threads             Number of threads to use, default 4
-C, --compress-protocol   Use compression on the MySQL connection
-V, --version             Show the program version and exit
-v, --verbose             Verbosity of output: 0 = silent, 1 = errors, 2 = warnings, 3 = info; default 2
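
A matching restore of the all_20170215 backup shown earlier could look like this; the credentials are placeholders:

myloader -h 127.0.0.1 -u root -p 'secret' -d /my/bak/all_20170215 -o -t 4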


A simple backup script that takes a compressed full backup into a dated directory and purges backups older than 3 days:

#!/bin/sh
. /etc/profile

# backup root directory and a timestamp for this run
DIR='/my/bak/'
DATE=`date +%Y%m%d_%H-%M`

# compressed full backup into a new dated subdirectory
/usr/local/bin/mydumper -c -o $DIR$DATE

# remove backup directories older than 3 days
find $DIR -mindepth 1 -maxdepth 1 -type d -mtime +3 -exec rm -rf {} \;
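
To run it on the 04:00 schedule implied by the directory names earlier, a crontab entry along these lines would do; the script path is an assumption:

0 4 * * * /bin/sh /root/mydumper_backup.sh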
