MySQL InnoDB Database Engine Data Recovery

First of all: I hope that you, reading this article, never actually have occasion to use it ...

This article covers data recovery for the MySQL InnoDB database engine; if you are dealing with other MySQL engines or other databases, please google on your own ...

If one day your hand slips and you accidentally delete production data from the database, or even drop the whole database, everything goes black for a moment and you are completely stunned. If you have a database backup, fortunately, you can restore the data from it; or, if you have MySQL binary logging enabled, you can probably recover from the binlog. But if there is no backup and binary logging was never enabled, and you feel too desperate to even start, the following content offers a workaround for exactly this situation ...

1. First, stop the MySQL service as soon as possible, and copy the entire contents of the MySQL data folder somewhere else, to keep later operations from overwriting the deleted data.
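For example, on Ubuntu the shutdown and copy might look like this (the service name and both paths are assumptions; adjust them to your installation):

sudo service mysql stop                            # stop writes to the tablespace immediately
sudo cp -a /var/lib/mysql /root/mysql-data-backup  # preserve a pristine copy of the data directory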

 

2. Prepare a Linux system; I used Ubuntu 12.04.

 

3. Prepare the data recovery tool percona-data-recovery-tool-for-innodb. A few notes about it:

  3.1. The tool only works on InnoDB/XtraDB tables; it cannot recover MyISAM tables.

  3.2. The tool works on saved copies of the MySQL data files; it does not need the MySQL server to be running.

  3.3. There is no guarantee that the data can always be recovered. For example, data that has been overwritten cannot be recovered.

  3.4. Using the tool involves manual work; the process is not automatic.

  3.5. The recovery process depends on how well you understand the lost data, and you may have to choose between different versions of the data along the way. The better you know your data, the greater the likelihood of recovery.

 

4. Download percona-data-recovery-tool-for-innodb and extract it; open a terminal and enter the following:

wget https://launchpad.net/percona-data-recovery-tool-for-innodb/trunk/release-0.5/+download/percona-data-recovery-tool-for-innodb-0.5.tar.gz
tar -zxvf percona-data-recovery-tool-for-innodb-0.5.tar.gz

  4.1. Go into the mysql-source subdirectory of the extracted directory and run the configure script:

cd percona-data-recovery-tool-for-innodb-0.5/mysql-source
./configure

  4.2. After configuration completes, go back to the root of the extracted directory and run make to compile the page_parser and constraints_parser tools (if problems such as missing package dependencies come up during compilation, resolve them according to the error messages; see the note after the commands):

cd ..
make
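On Ubuntu, the most common missing dependency is simply the basic build toolchain; this is a guess at the typical case, so follow your actual error messages:

sudo apt-get install build-essential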

 

5. Data Extraction

  InnoDB's default page size is 16 KB, and each page belongs to a particular index of a particular table. The page_parser tool reads the data file and, based on the index ID in each page header, copies every page into a separate file.

  If your MySQL server is configured with innodb_file_per_table=1, the system has already done this for you: all the required pages are in the table's .ibd file, and you usually do not need to split it. However, if the .ibd file may contain multiple indexes, splitting it into individual pages is still worthwhile. If the MySQL server is not configured with innodb_file_per_table, the data is stored in a global tablespace (usually a file named ibdata1, which is the case in this article), and then the file does need to be split into pages.
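To check which case applies to you, one way is to grep the saved configuration file, or to look for per-table .ibd files in the copied data directory (both paths below are assumptions based on the backup copy made in step 1):

grep -i innodb_file_per_table /etc/mysql/my.cnf
ls /root/mysql-data-backup/sakila/*.ibd    # per-table .ibd files present => innodb_file_per_table was on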

  5.1. Split into pages. Create a new dbfile folder in the root of the extracted directory and copy the ibdata1 file from your copied data directory into it.

    5.1.1. If your MySQL is older than version 5.0, InnoDB uses the REDUNDANT row format; run the following command in a terminal (a version-check sketch follows 5.1.2 if you are unsure which case applies):

./page_parser -4 -f ./dbfile/ibdata1

    5.1.2. If your MySQL is version 5.0 or later, InnoDB uses the COMPACT row format; run the following command in a terminal:

./page_parser -5 -f ./dbfile/ibdata1
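If you are unsure which server version produced the data files, you can ask the server binary; and since each page is 16 KB, the file size also tells you roughly how many pages page_parser will walk (the binary name and the GNU stat flag are assumptions for a typical Linux install):

mysqld --version                             # the version decides between -4 and -5
expr $(stat -c%s ./dbfile/ibdata1) / 16384   # approximate number of 16 KB pages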

  5.2. When page_parser finishes, it creates a pages-<TIMESTAMP> directory, where TIMESTAMP is a UNIX timestamp. Inside this directory it creates one subdirectory per index ID, holding that index's pages.
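The screenshot is omitted here; the layout looks roughly like this hypothetical listing (the timestamp is from this article's run, and the index IDs are examples):

ls pages-1384167811/FIL_PAGE_INDEX/
# 0-15  0-286  0-968  ...    (one subdirectory per index ID)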

  5.3. Find the required index ID

    5.3.1. If you only deleted data from the table without dropping the table itself, you can start the InnoDB table monitor, which writes a listing of all tables and indexes, with their index IDs, to the MySQL server error log. Create a table named innodb_table_monitor to make the InnoDB storage engine dump information about its tables and indexes:

mysql> CREATE TABLE innodb_table_monitor (id int) ENGINE=InnoDB;

If innodb_table_monitor already exists, drop it and re-create it. Once the listing has been written to the MySQL error log, you can drop the table again to stop further monitoring output. An example of the output follows:

TABLE: name sakila/customer, id 0 142, columns 13, indexes 4, appr.rows 0
  COLUMNS: customer_id: DATA_INT len 2 prec 0; store_id: DATA_INT len 1 prec 0; first_name: type 12 len 135 prec 0; last_name: type 12 len 135 prec 0; email: type 12 len 150 prec 0; address_id: DATA_INT len 2 prec 0; active: DATA_INT len 1 prec 0; create_date: DATA_INT len 8 prec 0; last_update: DATA_INT len 4 prec 0; DB_ROW_ID: DATA_SYS prtype 256 len 6 prec 0; DB_TRX_ID: DATA_SYS prtype 257 len 6 prec 0; DB_ROLL_PTR: DATA_SYS prtype 258 len 7 prec 0; 
  INDEX: name PRIMARY, id 0 286, fields 1/11, type 3
   root page 50, appr.key vals 0, leaf pages 1, size pages 1
   FIELDS:  customer_id DB_TRX_ID DB_ROLL_PTR store_id first_name last_name email address_id active create_date last_update
  INDEX: name idx_fk_store_id, id 0 287, fields 1/2, type 0
   root page 56, appr.key vals 0, leaf pages 1, size pages 1
   FIELDS:  store_id customer_id
  INDEX: name idx_fk_address_id, id 0 288, fields 1/2, type 0
   root page 63, appr.key vals 0, leaf pages 1, size pages 1
   FIELDS:  address_id customer_id
  INDEX: name idx_last_name, id 0 289, fields 1/2, type 0
   root page 1493, appr.key vals 0, leaf pages 1, size pages 1
   FIELDS:  last_name customer_id

Here we are recovering the customer table in the sakila database; its primary key information can be read from the output above:

INDEX: name PRIMARY, id 0 286, fields 1/11, type 3
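To pull this block out of the error log without scrolling, and to stop the monitor once you have what you need, something like the following works (the log path and credentials are assumptions; they vary by installation). Note that the id pair "0 286" maps to the 0-286 page subdirectory created by page_parser:

grep -A 20 "TABLE: name sakila/customer" /var/log/mysql/error.log   # find the dump for our table
mysql -u root -p -e "DROP TABLE innodb_table_monitor;"              # stop further monitor output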

    5.3.2. If you dropped the whole table or database, the method above cannot be used to find the required index ID. In that case there are a few ways to obtain it:

      A: If you know a special value in some field of the required table that occurs only in that table, you can search all the split pages for it with the following command:

grep -r "CP201310090001" pages-1384167811

        The search results (screenshot omitted) show that index ID 0-968 is the one we need.

      B: If you cannot find such a special value, the official tutorial apparently lets you search the split pages by field name, but its description is very general and I tested it for a long time without success. If needed, see: http://www.percona.com/docs/wiki/innodb-data-recovery-tool:mysql-data-recovery:advanced_techniques#finding_index_ids

      C: The last method is brute force. The table I wanted to recover had no special value suitable for method A, and I could not get method B to work, so the only option left was to loop over all the page files with a script and compare the output. First complete steps 5.4 and 5.5 to generate the table definition for the table you are looking for and compile the constraints_parser tool, then create a shell script in the root of the extracted directory. I named mine test.sh; its contents:

#!/bin/bash
function ergodic(){
    for file in `ls "$1"`
    do
        if [ -d "$1/$file" ]
        then
            ergodic "$1/$file"
        else
            echo "$1/$file"                        # print the file's full path
            ./constraints_parser -5 -f "$1/$file"  # extract data from this page file
            echo "$1/$file"                        # print the full path again after the data
            sleep 1                                # pause for one second
        fi
    done
}
INIT_PATH="pages-1384167811/FIL_PAGE_INDEX"        # folder containing the split page files
ergodic $INIT_PATH
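Before running it, remember to make the script executable:

chmod +x test.sh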

        Then execute the script with ./test.sh and watch the terminal output. When the terminal prints real data rather than garbage, the name of the folder containing the file being extracted at that moment is the index ID you are looking for; then carry out steps 5.6 and 5.7 (screenshots omitted).

  5.4. Generate a table definition. This requires that you know the structure of the table to be recovered. The easiest way is to create a database with the same structure as the one you want to restore on a separate test server, and then enter the following in a terminal (if you do not know the table structure, it can also be recovered from the table's .frm definition file; for details see: http://www.percona.com/docs/wiki/innodb-data-recovery-tool:mysql-data-recovery:advanced_techniques#getting_create_table_from_frm_files ):

./create_defs.pl --host=<server address> --user=<user name> --password=<password> --db=<database name> --table=<table name> > include/table_defs.h
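For the sakila.customer example used in this article, a concrete invocation might look like this (host and credentials are placeholders for your own test server):

./create_defs.pl --host=localhost --user=root --password=secret --db=sakila --table=customer > include/table_defs.h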

Then open the table_defs.h file in the include subdirectory of the extracted root directory and check that it was generated correctly; in particular, guard against an empty definition file, which happens if you got the database or table name wrong above.

5.5. Compile the constraints_parser tool (this must be redone every time a new table definition is generated):

make
gcc -DHAVE_OFFSET64_T -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -g -I include -I mysql-source/include -I mysql-source/innobase/include -c tables_dict.c -o lib/tables_dict.o
gcc -DHAVE_OFFSET64_T -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -g -I include -I mysql-source/include -I mysql-source/innobase/include -o constraints_parser constraints_parser.c lib/tables_dict.o lib/print_data.o lib/check_data.o lib/libut.a lib/libmystrings.a
gcc -DHAVE_OFFSET64_T -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE=1 -D_LARGEFILE_SOURCE=1 -g -I include -I mysql-source/include -I mysql-source/innobase/include -o page_parser page_parser.c lib/tables_dict.o lib/libut.a

5.6. Merge the pages. After splitting, the folder for the chosen index ID contains a large number of page files; to extract the records you first need to merge these files into one:

find pages-1384167811/FIL_PAGE_INDEX/0-968/ -type f -name '*.page' | sort -n | xargs cat > pages-1384167811/FIL_PAGE_INDEX/0-968/customer_pages_concatenated


5.7. Extract the data from the merged file:

./constraints_parser -5 -f pages-1384167811/FIL_PAGE_INDEX/0-968/customer_pages_concatenated > pages-1384167811/FIL_PAGE_INDEX/0-968/customer_data.tsv

The extracted .tsv file (screenshot omitted) contains the records recovered from the table. You can import it directly into the database, or open it, copy the contents into an Excel spreadsheet, and import it in whatever way you are used to.
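For example, loading the recovered rows back into a re-created customer table might look like the sketch below. constraints_parser prefixes every output row with the table name, which is why LINES STARTING BY is used; the tool also prints a suggested LOAD DATA INFILE statement on stderr that you can adapt (credentials are placeholders):

mysql -u root -p sakila --local-infile=1 -e "LOAD DATA LOCAL INFILE 'pages-1384167811/FIL_PAGE_INDEX/0-968/customer_data.tsv' REPLACE INTO TABLE customer FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '\"' LINES STARTING BY 'customer\t';"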

Reproduced from: https://my.oschina.net/secyaher/blog/274474
