One. Background
The Alibaba Cloud RDS production database needs to be backed up daily, with a copy restored into a self-built read database. Using Alibaba Cloud's automatic replication to a read-only instance is too expensive, so the synchronization is implemented with a custom script instead.
Two. Prerequisites
1) Alibaba Cloud RDS is already running, with scheduled backups enabled (each backup produces a downloadable backup file).
2) MySQL is already installed on the target backup server.
3) The data recovery tool Percona XtraBackup is installed on the target server; it can be downloaded from the official Percona XtraBackup website.
- MySQL 5.6 and earlier require Percona XtraBackup 2.3; for installation instructions, see the official Percona XtraBackup 2.3 documentation.
- MySQL 5.7 requires Percona XtraBackup 2.4; for installation instructions, see the official Percona XtraBackup 2.4 documentation.
- MySQL 8.0 requires Percona XtraBackup 8.0; for installation instructions, see the official Percona XtraBackup 8.0 documentation.
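The version mapping above can be expressed as a small helper, useful when the same recovery tooling has to serve servers on different MySQL versions. This is only a sketch; `xtrabackup_version_for` is a hypothetical function name, not part of any official tool.

```shell
#!/usr/bin/env bash
# Sketch: map a MySQL server version to the required Percona XtraBackup
# major version, per the list above. "xtrabackup_version_for" is a
# hypothetical helper, not part of any official tool.
xtrabackup_version_for() {
  case "$1" in
    8.0*) echo "8.0" ;;   # MySQL 8.0 -> XtraBackup 8.0
    5.7*) echo "2.4" ;;   # MySQL 5.7 -> XtraBackup 2.4
    *)    echo "2.3" ;;   # MySQL 5.6 and earlier -> XtraBackup 2.3
  esac
}

xtrabackup_version_for "5.7.26"   # prints 2.4
```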
Three. Script writing and testing
1. Write the shell script
#!/usr/bin/env bash
##################################################
# Basic database information
# Input parameter: backup file download link
URL_PATH=$1
# Timestamp format
DATE=`date +%Y%m%d%H%M%S`
# Log file path
LOG_PATH=/data/dbbakup/bakup.log
# Number of backups to keep
BACK_NUM=3
# Backup file storage path
BAKUP_PATH=/data/dbbakup
##################################################
back_up(){
    cd ${BAKUP_PATH}

    echo "=== ${DATE} - downloading backup, URL_PATH=${URL_PATH} ===" >> ${LOG_PATH}
    wget -c "${URL_PATH}" -O database.${DATE}.tar.gz

    echo "=== creating folder for the extracted files ===" >> ${LOG_PATH}
    mkdir database.${DATE}

    echo "=== extracting database.${DATE} ===" >> ${LOG_PATH}
    tar -izxvf database.${DATE}.tar.gz -C database.${DATE}

    echo "=== applying logs to the extracted backup ===" >> ${LOG_PATH}
    innobackupex --defaults-file=${BAKUP_PATH}/database.${DATE}/backup-my.cnf --apply-log ${BAKUP_PATH}/database.${DATE}

    echo "=== stopping database ===" >> ${LOG_PATH}
    service mysql stop

    # Remove the data-directory soft link (my database is installed in /data/mysql/)
    rm -rf /data/mysql/data

    echo "=== deleting the previous database folder ===" >> ${LOG_PATH}
    rm -rf database/
    # Rename the new backup into place
    mv database.${DATE} database
    # Recreate the soft link
    ln -s ${BAKUP_PATH}/database /data/mysql/data
    # Create a version file (to identify which backup the current database came from)
    touch database/rev.database
    echo "${DATE}" >> database/rev.database

    echo "=== changing file ownership ===" >> ${LOG_PATH}
    chown -R mysql:mysql ${BAKUP_PATH}/database
    chown -R mysql:mysql /data/mysql

    echo "=== starting database ===" >> ${LOG_PATH}
    service mysql start

    # Find the oldest backup archive (candidate for deletion)
    delfile=`ls -lcrt ${BAKUP_PATH}/*.tar.gz | awk '{print $9}' | head -1`
    # Count the backups currently on disk
    count=`ls -lcrt ${BAKUP_PATH}/*.tar.gz | awk '{print $9}' | wc -l`
    if [[ $count -gt $BACK_NUM ]]; then
        # Delete the oldest backup so that only ${BACK_NUM} remain
        rm $delfile
        echo "deleted $delfile" >> ${LOG_PATH}
    fi

    echo "=== end ===" >> ${LOG_PATH}
}
back_up
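The retention logic at the end of the script can be exercised safely against a scratch directory before touching the real backup path. The sketch below isolates the same idea (remove the oldest archive once the count exceeds the limit); `prune_backups` is a hypothetical helper, and it sorts by modification time rather than the `ls -crt` ctime sort used in the script.

```shell
#!/usr/bin/env bash
# Sketch of the script's backup-retention step, isolated so it can be
# tried against a scratch directory. prune_backups is a hypothetical
# helper; it sorts archives by modification time, oldest first.
prune_backups() {
  local dir=$1 keep=$2
  local count delfile
  count=$(ls -1rt "$dir"/*.tar.gz 2>/dev/null | wc -l)
  if [ "$count" -gt "$keep" ]; then
    delfile=$(ls -1rt "$dir"/*.tar.gz | head -1)   # oldest archive
    rm "$delfile"
    echo "deleted $delfile"
  fi
}

# Demo against a scratch directory: four archives, keep three.
dir=$(mktemp -d)
for i in 1 2 3 4; do
  touch -t "20200101010$i" "$dir/database.$i.tar.gz"   # distinct mtimes
done
prune_backups "$dir" 3   # removes database.1.tar.gz, the oldest
```

Like the original script, this removes at most one archive per run, which is enough when the job runs once per backup.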
Save the script as: /data/db_bakup/back_up.sh
2. Make the script executable
chmod u+x /data/db_bakup/back_up.sh
3. Run the backup:
sh /data/db_bakup/back_up.sh "backup file download link"
Note: the "backup file download link" is copied from the Alibaba Cloud RDS management console.
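Since the goal is a daily sync, the manual run above can eventually be scheduled with cron. Note, however, that Alibaba Cloud backup download links expire, so a real scheduled job would first need a way to fetch a fresh link (for example via the Alibaba Cloud API); the URL below is only a placeholder.

```shell
# Crontab entry (placeholder URL): run the sync at 03:30 every day.
# A fresh download link must be obtained before each run, because the
# links copied from the console expire.
30 3 * * * /data/db_bakup/back_up.sh "https://example.com/backup.tar.gz" >> /data/dbbakup/bakup.log 2>&1
```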
4. Watch the backup log to follow the process:
tail -n 50 -f /data/dbbakup/bakup.log
5. Log in to check whether the database has switched to the new backup:
mysql -uroot -p
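Besides logging in, a quick way to confirm the switch is to check that the data directory symlink points into the restored backup (paths assumed from the script above: on the real server, `readlink /data/mysql/data` should print `/data/dbbakup/database`, and `database/rev.database` holds the backup timestamp). The demo below recreates the layout under a scratch directory.

```shell
#!/usr/bin/env bash
# Sanity-check sketch: on the real server you would simply run
#   readlink /data/mysql/data    # expect /data/dbbakup/database
# Here the layout is recreated under a scratch directory for the demo.
root=$(mktemp -d)
mkdir -p "$root/dbbakup/database" "$root/mysql"
ln -s "$root/dbbakup/database" "$root/mysql/data"

readlink "$root/mysql/data"      # prints the restored backup path
```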
6. Done.
References:
RDS for MySQL: recover a physical backup file to a self-built database (Alibaba Cloud documentation)