Choosing and implementing a MySQL automatic backup policy

Several backup methods are currently popular:

1. Logical backup: use the mysqldump tool that ships with MySQL to dump the database to a .sql file.
Advantages: the biggest benefit is that it can work while the MySQL server is running. To make sure the backup represents a single point in time, it automatically locks the corresponding tables so that other users cannot modify them (they can still read); write operations are blocked during the dump. The resulting .sql file is generic and easy to move around.

Disadvantages: the backup is slow. With a large amount of data it becomes very time-consuming, and if the database server is still serving users during this long operation, the tables stay locked (usually with a read lock, so they can be read but not written) and the service is affected.


Note: "working while the MySQL server is running" actually means passing a parameter that controls the server, for example locking all tables so that they can only be read, not written:

--lock-all-tables
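
For example, a dump that holds this kind of global read lock for its whole duration might look like the following (a sketch; the user name, database name, and output path are placeholders):

# Lock all tables in all databases with a read lock for the duration of the dump
mysqldump --lock-all-tables -u backup_user -p my_database > /data/backdata/my_database.sql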


2. Physical backup: directly copy the MySQL data directory.

Direct copying only works for MyISAM tables, and that table type is machine-independent. In reality, though, you cannot use MyISAM for every table when you design a database, and the fact that MyISAM is machine-independent and easy to migrate is not, by itself, a reason to choose it.

Disadvantages: you cannot copy while the MySQL server is operating (users may be updating data through the application while you copy, so the files do not represent the data at a single point in time), and the copy may not be portable to other machines.


More often you will choose the table type based on business requirements (for example, if you must support transactions you have to use InnoDB), query speed, and service performance.
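
If you are not sure which storage engines your tables actually use, a quick check is possible with SHOW TABLE STATUS (a sketch; the database name is a placeholder):

# Print the name and storage engine of every table in the database
mysql -u root -p -e "SHOW TABLE STATUS FROM my_database;" | awk '{print $1, $2}'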


We must make sure the tables are not in use.
If a table is changed while the server is copying it (a user runs an update or insert during the copy), the copied data is a meaningless backup: it cannot be restored to an exact point in time.


If a table is modified while the file system backup is running, the backed-up table files end up in an inconsistent state and are useless for later recovery.

The best way to guarantee the integrity of the copy is to shut down the server, copy the files, and then restart the server.
Alternatively, lock the corresponding tables (which causes access problems for front-end users).
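
A minimal sketch of the shut-down-and-copy approach (the service name and data directory vary by distribution and installation, so treat them as assumptions):

# Stop MySQL so no table can change while the files are copied
systemctl stop mysqld        # on some systems: service mysql stop
# Copy the whole data directory, preserving ownership and timestamps
cp -a /var/lib/mysql /data/backdata/mysql-$(date +%Y%m%d)
# Bring the server back up
systemctl start mysqld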

 

Why does a direct file copy lack portability?


The files generated by mysqldump are portable to other machines, even machines with a different hardware configuration, because they are plain text. Directly copied files cannot be moved to another machine unless the tables being copied use the MyISAM storage format. ISAM tables can only be copied between machines with the same hardware configuration; copying the files from a SPARC Solaris machine to an Intel Solaris machine (or vice versa), for example, does not work. MySQL 3.23 introduced the MyISAM storage format to solve this problem, because the format is machine-independent. So a direct file copy can be transferred to a machine with a different hardware configuration if two conditions are met: the other machine must run MySQL 3.23 or later, and the files must represent MyISAM tables, not ISAM tables.

 


3. Hot backup.

MySQL has no built-in incremental backup mechanism, so backing up very large data sets becomes a real problem. Fortunately, MySQL provides a master-slave replication mechanism (i.e. a hot standby).
Advantages: suited to large volumes of data. Large Internet companies back up MySQL data this way: they set up multiple database servers and use master-slave replication.

The problem most often encountered with master-slave replication is how to keep the replicated data from blocking or lagging behind. That problem is tolerable, and there are a number of schemes that mitigate it; everything is a trade-off. It is, however, an approach that takes a fair amount of care and effort to run.
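
A rough sketch of what setting up one slave involves (server IDs, host, replication user, and the binary log coordinates are placeholder values; the real coordinates come from SHOW MASTER STATUS on the master):

# On the master, in my.cnf: server-id = 1 and log-bin = mysql-bin, then restart MySQL.
# On the slave,  in my.cnf: server-id = 2, then restart MySQL.
# On the slave, point it at the master and start replicating:
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='192.168.1.10', MASTER_USER='repl', MASTER_PASSWORD='***', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=154; START SLAVE;"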

 

================================================

 

The backup strategy I am currently weighing:

Physical backup restores faster and is, of course, best stored on another machine. Should I use a physical or a logical backup?
After taking platform migration into account, and to keep the backups portable, I can tolerate a recovery-speed gap of about a minute (restoring a .sql file is not as fast as a physical backup, where the copied files can simply overwrite the original ones). So, for cross-platform reasons, I am more willing to use a logical backup stored as .sql files.
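
For reference, restoring from such a dump is a single command (the user, database, and file name are placeholders), which is where the roughly one-minute gap compared with a file copy comes from:

# Replay the dumped SQL into an existing database
mysql -u backup_user -p my_database < /data/backdata/mysql-20190611.sql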

Dual-machine hot standby: there is no extra hardware, and with limited technical staff it would take too much manual maintenance, so it is ruled out.

 

 

Solution:
1. Overall strategy: write a scheduled task that backs up automatically in the evening or early morning (the database server cannot be shut down while the service is running).
After each successful backup, delete the previously generated backups so that the accumulated data does not fill the disk.

2. Considering that the initial volume of data is small, mysqldump is enough. Schedule the automatic backup for the early morning, when there is hardly anyone visiting (roughly between 4 and 6 a.m.).


3. Use a logical backup: I can tolerate a restore-speed gap of about a minute, and for cross-platform portability I prefer a logical backup stored as .sql files.

4. Back up every day. Because mysqldump takes its locks in the early morning, when almost nobody is accessing the database server, the impact on the service is negligible, so a daily backup is fine. That produces one .sql file per day, which adds up to a lot of files.
So, after each successful backup, delete the older files and keep only the last week of .sql backup files.
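
The retention rule itself comes down to a single find invocation (the path and the 7-day threshold follow the policy above; the script below uses the same idea):

# List .sql backups older than 7 days so they can be removed
find /data/backdata/ -ctime +7 -type f -name "*.sql" -print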


Backup tool path: /usr/bin/mysqldump
Backup data save path: /data/backdata/

 

5. Write the backup script

Ideas:

5.1 In the shell script, call mysqldump to generate the backup file (the tool writes a .sql file to disk).

5.2 To make later checking easier, record each backup in a log: when the backup ran and which file name was generated. That way it is easy to find out if a backup did not succeed some day.
Deleted files are recorded in the log as well.

5.3 Let crontab under Linux call the script on schedule.

Command: crontab -e

Open the file and add the line: 00 5 * * * <path to the script>/mysqlback.sh
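
To confirm that the entry was saved, list the current user's cron table:

# Show the installed cron entries
crontab -l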

 

 

 

mysqlback.sh content:

 

#!/bin/bash
DB_NAME="****"
DB_USER="****"
DB_PASS="****"
BIN_DIR="/usr/bin"
BACK_DIR="/data/backdata"
DATE="mysql-`date +'%Y%m%d-%H:%M:%S'`"
LogFile="$BACK_DIR"/dbbakup.log   # path of the log file that records each backup
BackNewFile=$DATE.sql

$BIN_DIR/mysqldump --opt --force -u$DB_USER  -p$DB_PASS $DB_NAME > $BACK_DIR/$DATE.sql


echo -----------------------"$(date +"%y-%m-%d %H:%M:%S")"----------------------- >> $LogFile



echo "createFile: $BackNewFile" >> $LogFile


# For testing only: select .sql files whose change time is more than one minute ago
#find "$BACK_DIR" -cmin +1 -type f -name "*.sql" -print > "$BACK_DIR/deleted.txt"

# Select .sql backup files older than 7 days for deletion
# -ctime works on the file change time in days, so only the most recent days of backups are kept
find "$BACK_DIR" -ctime +7 -type f -name "*.sql" -print > "$BACK_DIR/deleted.txt"


echo -e "deleteFiles:\n" >> $LogFile

# Loop over the matched files, delete each one, and record it in the log
cat "$BACK_DIR/deleted.txt" | while read LINE
do
    rm -rf "$LINE"
    echo "$LINE" >> $LogFile
done


echo "----- -------------------------------------------------- -------- ">> $ LogFile

 

 


Source: www.cnblogs.com/SyncNavigator-V8-4-1/p/11015716.html