Compressing the Backup Set
Stream mode supports exactly two formats: tar and xbstream. The latter is XtraBackup's own format; unpacking it requires the xbstream utility of the same name.
innobackupex --defaults-file=/data/mysqldata/3306/my.cnf --user=backup --password='backup' --stream=tar /tmp | gzip - > /data/mysqldata/backup/xtra_full.tar.gz
innobackupex: Created backup directory /tmp
This line shows that the temporary files for the streamed backup go to the /tmp directory we specified.
innobackupex: You must use -i (--ignore-zeros) option for extraction of the tar stream.
The last line reminds us that tar must be given -i (--ignore-zeros) when extracting the stream.
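Why -i is needed can be seen without innobackupex at all: a streamed backup concatenates more than one tar archive, and plain tar stops at the first end-of-archive zero block. A small demo (file names made up, nothing here touches a real backup):

```shell
# Two tar archives concatenated, mimicking the streamed archive; without -i
# only the first archive's contents would be extracted.
workdir="$(mktemp -d)"; cd "$workdir"
echo one > a.txt
echo two > b.txt
tar -cf first.tar a.txt
tar -cf second.tar b.txt
cat first.tar second.tar > stream.tar   # stands in for the streamed output
mkdir out
tar -xif stream.tar -C out              # -i reads past the zero blocks
ls out
```

With -i both a.txt and b.txt appear in out/; without it, only a.txt does.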
[mysql@master backup]$ du -sh *
2.8G 2015-07-07_17-11-03
14M xtra_full.tar.gz
The difference after packing and compressing is dramatic: 2.8 G down to 14 M.
mkdir xbstream
innobackupex --defaults-file=/data/mysqldata/3306/my.cnf --user=backup --password='backup' --stream=xbstream ./ > /data/mysqldata/backup/xbstream/incremental.xbstream
Here ./ makes the current directory the temporary location for the backup.
Unpack it:
[mysql backup]$ cd xbstream/
[mysql xbstream]$ xbstream -x < incremental.xbstream
[mysql xbstream]$ ls
backup-my.cnf ibdata1 mysql sakila xtrabackup_binlog_info xtrabackup_info
fandb incremental.xbstream performance_schema test xtrabackup_checkpoints xtrabackup_logfile
Incremental Streaming Backups using xbstream and tar
Incremental streaming backups can be performed with the xbstream streaming option. Currently backups are packed in the custom xbstream format. With this feature, taking a BASE backup is needed as well.
Taking a base backup:
innobackupex /data/backups
Taking a local backup:
innobackupex --incremental --incremental-lsn=LSN-number --stream=xbstream ./ > incremental.xbstream
Unpacking the backup:
xbstream -x < incremental.xbstream
Taking a local backup and streaming it to the remote server and unpacking it:
innobackupex --incremental --incremental-lsn=LSN-number --stream=xbstream ./ | \
ssh user " cat - | xbstream -x -C > /backup-dir/"
Test:
[mysql@master xbstream]$ xbstream -x -v < incremental.xbstream
[mysql@master xbstream]$ ls
backup-my.cnf ibdata1 mysql sakila xtrabackup_binlog_info xtrabackup_info
fandb incremental.xbstream performance_schema test xtrabackup_checkpoints xtrabackup_logfile
-C can be used to choose the extraction directory:
xbstream -x -v < incremental.xbstream -C /tmp
[mysql@master xbstream]$ more xtrabackup_checkpoints
backup_type = full-backuped
from_lsn = 0
to_lsn = 143684677
last_lsn = 143684677
compact = 0
recover_binlog_info = 0
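The to_lsn value shown above is exactly what gets passed to --incremental-lsn for the next incremental. A small sketch of pulling it out with awk; the checkpoints content is inlined here so the snippet is self-contained, rather than read from a real backup directory:

```shell
# Parse to_lsn out of an xtrabackup_checkpoints file.
ckpt="$(mktemp)"
cat > "$ckpt" <<'EOF'
backup_type = full-backuped
from_lsn = 0
to_lsn = 143684677
last_lsn = 143684677
EOF
# Fields are separated by " = "; print the value where the key is to_lsn.
lsn=$(awk -F' = ' '$1 == "to_lsn" { print $2 }' "$ckpt")
echo "$lsn"   # 143684677
```

In practice the same one-liner can be pointed at the base backup's xtrabackup_checkpoints to avoid copying the LSN by hand.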
The example in the official documentation is broken; I ended up with this instead:
innobackupex --defaults-file=/data/mysqldata/3306/my.cnf --user=backup --password='backup' --incremental --incremental-lsn=143684677 --stream=xbstream ./ | ssh mysql@192.168.255.202 " xbstream -x -C /data/mysqldata/backup/"
On the remote host:
[mysql@slave backup]$ ls
backup-my.cnf ibdata1.delta mysql sakila xtrabackup_binlog_info xtrabackup_info
fandb ibdata1.meta performance_schema test xtrabackup_checkpoints xtrabackup_logfile
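The shape of that pipeline, produce the stream on one side of the pipe and unpack it on the far side, can be exercised without a second host by letting a local sh stand in for ssh (a demo only; tar plays the role of innobackupex --stream here):

```shell
workdir="$(mktemp -d)"; cd "$workdir"
mkdir src dest
echo hello > src/f.txt
# Left of the pipe: produce the stream. Right of the pipe: the "remote"
# shell unpacks it into its target directory, as xbstream -x -C would.
tar -C src -cf - . | sh -c "tar -xf - -C '$workdir/dest'"
cat dest/f.txt
```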
Compact Backups
When backing up InnoDB tables, the secondary index pages can be skipped. This shrinks the backup, at the cost of a longer prepare phase, since the secondary indexes have to be rebuilt.
Compact backups require innodb_file_per_table to be enabled.
innobackupex --defaults-file=/data/mysqldata/3306/my.cnf --user=backup --password='backup' --compact /data/mysqldata/backup/
xtrabackup_checkpoints then shows compact = 1:
[mysql@master 2016-08-22_22-50-51]$ more xtrabackup_checkpoints
backup_type = full-backuped
from_lsn = 0
to_lsn = 143789687
last_lsn = 143789687
compact = 1
recover_binlog_info = 0
Preparing Compact Backups
Preparing a compact backup requires the --rebuild-indexes option:
innobackupex --apply-log --rebuild-indexes /data/mysqldata/backup/2016-08-22_22-50-51/
Restoring Compact Backups
innobackupex --copy-back /path/to/BACKUP-DIR
Compressed Backups (--compress)
--compress compresses the backup. The option is passed straight through to the xtrabackup binary, so only the InnoDB files are backed up this way, compressed with the 'quicklz' algorithm.
--compress-threads raises compression parallelism and speeds things up.
A backup taken this way produces files ending in .qp; in my test, a 2.9 G backup compressed down to 52 M.
Start the backup:
innobackupex --defaults-file=/data/mysqldata/3306/my.cnf --user=backup --password='backup' --stream=xbstream --compress --compress-threads=4 ./ > /data/mysqldata/backup/backup.xbstream
Unpack the xbstream:
xbstream -x < backup.xbstream -C /data/mysqldata/backup/fan
After unpacking, the qpress (.qp) files are still compressed. Before XtraBackup 2.1.4 they had to be decompressed by hand:
for bf in `find . -iname "*\.qp"`; do qpress -d $bf $(dirname $bf) && rm $bf; done
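If qpress is not installed, the shape of that loop can still be sanity-checked with gzip standing in for qpress (gzip is purely a stand-in here; the real files end in .qp and need qpress -d):

```shell
workdir="$(mktemp -d)"; cd "$workdir"
echo data > t.ibd
gzip t.ibd                        # leaves t.ibd.gz, our stand-in for a .qp file
for bf in $(find . -iname '*.gz'); do
  # Real case: qpress -d "$bf" "$(dirname "$bf")" && rm "$bf"
  # gunzip removes the compressed file itself, so no rm is needed.
  gunzip "$bf"
done
ls   # t.ibd
```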
Since XtraBackup 2.1.4, the --decompress option does this for you. It calls the qpress command under the hood, so qpress must be installed first; below it is installed from the Percona yum repository.
Install the Percona yum repository automatically:
[root@master ~]# rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
Retrieving http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
Preparing...                       ################################# [100%]
Updating / installing...
   1:percona-release-0.0-1         ################################# [100%]
Or configure the Percona yum repository by hand:
[percona]
name = $releasever - Percona
baseurl=http://repo.percona.com/centos/$releasever/os/$basearch/
enabled = 1
gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-percona
gpgcheck = 1
Install:
[root@master ~]# yum install qpress
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: ftp.sjtu.edu.cn
* extras: mirrors.nwsuaf.edu.cn
* updates: mirrors.neusoft.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package qpress.x86_64 0:11-1.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
==================================================================================================================================================
Package                  Arch                  Version                  Repository                  Size
==================================================================================================================================================
Installing:
qpress                   x86_64                11-1.el7                 percona                     31 k
Transaction Summary
==================================================================================================================================================
Install  1 Package
Total download size: 31 k
Installed size: 65 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : qpress-11-1.el7.x86_64                                                                                                         1/1
  Verifying  : qpress-11-1.el7.x86_64                                                                                                        1/1
Installed:
  qpress.x86_64 0:11-1.el7
Complete!
With qpress installed, decompress:
innobackupex --decompress /data/mysqldata/backup/fan/
After that, the backup can be prepared as usual.