There has been a lot of heated discussion lately about the 下厨房 (Xiachufang) data recovery incident, so I took the opportunity to test the Percona Data Recovery Tool for InnoDB. As it turns out, recovering from certain operations with this tool is still quite difficult.
Let's use a small table as an example for the test.
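The post never shows the table's DDL, but judging from the column types and secondary indexes that appear in the output further down it is essentially sakila's customer table; a sketch of a matching definition (my reconstruction, not from the original):

CREATE TABLE customer (
  customer_id SMALLINT UNSIGNED NOT NULL AUTO_INCREMENT,
  store_id    TINYINT UNSIGNED NOT NULL,
  first_name  VARCHAR(45) NOT NULL,
  last_name   VARCHAR(45) NOT NULL,
  email       VARCHAR(50) DEFAULT NULL,
  address_id  SMALLINT UNSIGNED NOT NULL,
  active      TINYINT(1) NOT NULL DEFAULT 1,
  create_date DATETIME NOT NULL,
  last_update TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (customer_id),
  KEY idx_fk_store_id (store_id),
  KEY idx_fk_address_id (address_id),
  KEY idx_last_name (last_name)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

With a single row in it, the test goes like this: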
mysql> select * from customer;
+-------------+----------+------------+-----------+-------------------+------------+--------+---------------------+---------------------+
| customer_id | store_id | first_name | last_name | email             | address_id | active | create_date         | last_update         |
+-------------+----------+------------+-----------+-------------------+------------+--------+---------------------+---------------------+
|           1 |        1 | liu        | yang      | www.yihaodian.com |        141 |      1 | 2013-07-09 15:34:23 | 2013-07-09 15:34:23 |
+-------------+----------+------------+-----------+-------------------+------------+--------+---------------------+---------------------+
1 row in set (0.00 sec)

mysql> TRUNCATE TABLE customer;
Query OK, 0 rows affected (0.01 sec)

mysql> exit
[root@db-42 tmp]# wget https://launchpad.net/percona-data-recovery-tool-for-innodb/trunk/release-0.5/+download/percona-data-recovery-tool-for-innodb-0.5.tar.gz
..
2013-07-09 15:41:03 (11.3 KB/s) - `percona-data-recovery-tool-for-innodb-0.5.tar.gz' saved [1960195/1960195]

[root@db-42 tmp]# tar -zxvf percona-data-recovery-tool-for-innodb-0.5.tar.gz
[root@db-42 tmp]# cd percona-data-recovery-tool-for-innodb-0.5/mysql-source
[root@db-42 mysql-source]# ./configure
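The remaining build step, spelled out (this make from the tool's root directory is also the step the note at the end of this post adds):

[root@db-42 mysql-source]# cd ..
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# make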
Compiling produces the page_parser and constraints_parser tools (just run make in the tool's root directory). Note that the create_defs.pl script depends on the Perl DBI and DBD::mysql modules, so you may hit the following error:
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# ./create_defs.pl --host=localhost --user=root --password=yihaodian --db=test --table=customer > include/table_defs.h
perl: symbol lookup error: /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/auto/DBD/mysql/mysql.so: undefined symbol: mysql_init
This can be fixed as follows:
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# mkdir /tmp/mysql-static
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# cp /usr/lib64/mysql/*.a /tmp/mysql-static
Then delete the broken DBD directory, re-extract the DBD::mysql source, and point it at the static client library when regenerating the Makefile:

perl Makefile.PL --libs="-L/tmp/mysql-static -lmysqlclient"

Recompiling and reinstalling DBD::mysql resolves the error.
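Spelled out, the fix looks roughly like this (a sketch; the DBD::mysql version and tarball location are assumptions, not from the original post, while the site_perl path comes from the error message above):

[root@db-42 tmp]# rm -rf /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/auto/DBD
[root@db-42 tmp]# tar -zxvf DBD-mysql-4.023.tar.gz && cd DBD-mysql-4.023
[root@db-42 DBD-mysql-4.023]# perl Makefile.PL --libs="-L/tmp/mysql-static -lmysqlclient"
[root@db-42 DBD-mysql-4.023]# make && make install
[root@db-42 DBD-mysql-4.023]# perl -MDBD::mysql -e 1   # no output means the module now loads cleanly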
Continuing on, obtain the table definition:
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# ./create_defs.pl --host=localhost --user=root --password=yihaodian --db=test --table=customer > include/table_defs.h
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# cat include/table_defs.h
#ifndef table_defs_h
#define table_defs_h

// Table definitions
table_def_t table_definitions[] = {
    {
        name: "customer",
        {
            { /* smallint(5) unsigned */
                name: "customer_id", type: FT_UINT, fixed_length: 2,
                has_limits: FALSE,
                limits: { can_be_null: FALSE, uint_min_val: 0, uint_max_val: 65535 },
                can_be_null: FALSE
            },
            { /*  */
                name: "DB_TRX_ID", type: FT_INTERNAL, fixed_length: 6,
                can_be_null: FALSE
            },
            { /*  */
                name: "DB_ROLL_PTR", type: FT_INTERNAL, fixed_length: 7,
                can_be_null: FALSE
            },
            { /* tinyint(3) unsigned */
                name: "store_id", type: FT_UINT, fixed_length: 1,
                has_limits: FALSE,
                limits: { can_be_null: FALSE, uint_min_val: 0, uint_max_val: 255 },
                can_be_null: FALSE
            },
            { /* varchar(45) */
                name: "first_name", type: FT_CHAR, min_length: 0, max_length: 135,
                has_limits: FALSE,
                limits: { can_be_null: FALSE, char_min_len: 0, char_max_len: 135, char_ascii_only: TRUE },
                can_be_null: FALSE
            },
            { /* varchar(45) */
                name: "last_name", type: FT_CHAR, min_length: 0, max_length: 135,
                has_limits: FALSE,
                limits: { can_be_null: FALSE, char_min_len: 0, char_max_len: 135, char_ascii_only: TRUE },
                can_be_null: FALSE
            },
            { /* varchar(50) */
                name: "email", type: FT_CHAR, min_length: 0, max_length: 150,
                has_limits: FALSE,
                limits: { can_be_null: TRUE, char_min_len: 0, char_max_len: 150, char_ascii_only: TRUE },
                can_be_null: TRUE
            },
            { /* smallint(5) unsigned */
                name: "address_id", type: FT_UINT, fixed_length: 2,
                has_limits: FALSE,
                limits: { can_be_null: FALSE, uint_min_val: 0, uint_max_val: 65535 },
                can_be_null: FALSE
            },
            { /* tinyint(1) */
                name: "active", type: FT_INT, fixed_length: 1,
                has_limits: FALSE,
                limits: { can_be_null: FALSE, int_min_val: -128, int_max_val: 127 },
                can_be_null: FALSE
            },
            { /* datetime */
                name: "create_date", type: FT_DATETIME, fixed_length: 8,
                can_be_null: FALSE
            },
            { /* timestamp */
                name: "last_update", type: FT_TIMESTAMP, fixed_length: 4,
                can_be_null: FALSE
            },
            { type: FT_NONE }
        }
    },
};
Extract the relevant pages:
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# cp /var/lib/mysql/liuyang/customer.ibd /tmp/
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# ./page_parser -5 -f /tmp/customer.ibd
Opening file: /tmp/customer.ibd:
2051            ID of device containing file
1798748         inode number
33184           protection
1               number of hard links
0               user ID of owner
0               group ID of owner
0               device ID (if special file)
147456          total size, in bytes
4096            blocksize for filesystem I/O
296             number of blocks allocated
1373360322      time of last access
1373360322      time of last modification
1373360322      time of last status change
147456          Size to process in bytes
104857600       Disk cache size in bytes
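page_parser sorts whatever it recovers into pages-<timestamp>/FIL_PAGE_INDEX/<index id>/ subdirectories, one per index. A quick check (a sketch; the timestamped directory name is whatever page_parser just created) of what, if anything, came back for the primary index, which the dictionary dump further down identifies as id 21:

[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# ls -l pages-*/FIL_PAGE_INDEX/0-21/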
In fact, no data can be dug out of this table: rows that have been truncated away cannot be recovered (with innodb_file_per_table, TRUNCATE essentially drops and re-creates the .ibd, so the old pages are simply no longer there to parse). And for a DROP, without a backup (a cold backup or a slave) even getting hold of the table definition becomes a problem. So the steps above can recover the table definition, but cannot unload any data.
After enabling the InnoDB table monitor it became clear that no data could be extracted from the 0-21 subdirectory (21 being the id of the PRIMARY index):
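The dictionary dump below can be produced with the classic innodb_table_monitor mechanism (my assumption as to how it was obtained; on MySQL 5.x creating this table makes InnoDB write its data dictionary to the error log periodically):

mysql> CREATE TABLE innodb_table_monitor (a INT) ENGINE=InnoDB;
mysql> -- wait for the dump to show up in the error log, then stop the monitor:
mysql> DROP TABLE innodb_table_monitor;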
TABLE: name liuyang/customer, id 20, flags 1, columns 12, indexes 4, appr.rows 2423
  COLUMNS: customer_id: DATA_INT DATA_UNSIGNED DATA_BINARY_TYPE DATA_NOT_NULL len 2;
           store_id: DATA_INT DATA_UNSIGNED DATA_BINARY_TYPE DATA_NOT_NULL len 1;
           first_name: DATA_VARMYSQL DATA_NOT_NULL len 135;
           last_name: DATA_VARMYSQL DATA_NOT_NULL len 135;
           email: DATA_VARMYSQL len 150;
           address_id: DATA_INT DATA_UNSIGNED DATA_BINARY_TYPE DATA_NOT_NULL len 2;
           active: DATA_INT DATA_BINARY_TYPE DATA_NOT_NULL len 1;
           create_date: DATA_INT DATA_BINARY_TYPE DATA_NOT_NULL len 8;
           last_update: DATA_INT DATA_UNSIGNED DATA_BINARY_TYPE DATA_NOT_NULL len 4;
           DB_ROW_ID: DATA_SYS prtype 256 len 6;
           DB_TRX_ID: DATA_SYS prtype 257 len 6;
           DB_ROLL_PTR: DATA_SYS prtype 258 len 7;
  INDEX: name PRIMARY, id 21, fields 1/11, uniq 1, type 3      ----- extract from here
   root page 3, appr.key vals 2423, leaf pages 11, size pages 12
   FIELDS:  customer_id DB_TRX_ID DB_ROLL_PTR store_id first_name last_name email address_id active create_date last_update
  INDEX: name idx_fk_store_id, id 22, fields 1/2, uniq 2, type 0
   root page 4, appr.key vals 1, leaf pages 2, size pages 3
   FIELDS:  store_id customer_id
  INDEX: name idx_fk_address_id, id 23, fields 1/2, uniq 2, type 0
   root page 5, appr.key vals 1, leaf pages 2, size pages 3
   FIELDS:  address_id customer_id
  INDEX: name idx_last_name, id 24, fields 1/2, uniq 2, type 0
   root page 6, appr.key vals 1, leaf pages 4, size pages 5
   FIELDS:  last_name customer_id
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# ./constraints_parser -5 -f pages-1373360911/FIL_PAGE_INDEX/0-21/0-00000003.page
LOAD DATA INFILE '/tmp/percona-data-recovery-tool-for-innodb-0.5/dumps/default/customer' REPLACE INTO TABLE `customer` FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"' LINES STARTING BY 'customer\t' (customer_id, store_id, first_name, last_name, email, address_id, active, create_date, last_update);
No data at all.
By contrast, if we recreate the table and insert data without truncating it, the data can be extracted.
mysql> use liuyang
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> insert into customer (store_id,first_name,last_name,email,address_id,create_date) values (1,'liu','yang','www.yihaodian.com','141',sysdate());
Query OK, 1 row affected (0.00 sec)

mysql> commit;
Query OK, 0 rows affected (0.00 sec)

......

mysql> insert into customer (store_id,first_name,last_name,email,address_id,create_date) values (1,'liu','yang','www.yihaodian.com','141',sysdate());
Query OK, 1 row affected (0.00 sec)

mysql> commit;
Query OK, 0 rows affected (0.00 sec)
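Typing the insert a couple of thousand times by hand is tedious; the same rows can just as well be generated from the shell (the loop below is my own illustration, not from the original post):

[root@db-42 ~]# for i in $(seq 1 2400); do
>   mysql -uroot -pyihaodian liuyang -e "insert into customer (store_id,first_name,last_name,email,address_id,create_date) values (1,'liu','yang','www.yihaodian.com','141',sysdate());"
> done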
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# ./page_parser -5 -f /var/lib/mysql/liuyang/customer.ibd
Opening file: /var/lib/mysql/liuyang/customer.ibd:
2054            ID of device containing file
1867914         inode number
33200           protection
1               number of hard links
203             user ID of owner
156             group ID of owner
0               device ID (if special file)
425984          total size, in bytes
4096            blocksize for filesystem I/O
840             number of blocks allocated
1373361248      time of last access
1373362171      time of last modification
1373362171      time of last status change
425984          Size to process in bytes
104857600       Disk cache size in bytes

[root@db-42 0-21]# ls -lrt
total 192
-rw-r--r-- 1 root root 16384 Jul  9 17:30 9-00000012.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 8-00000011.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 7-00000010.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 6-00000009.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 5-00000008.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 4-00000007.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 21-00000024.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 20-00000023.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 16-00000019.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 13-00000016.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 10-00000013.page
-rw-r--r-- 1 root root 16384 Jul  9 17:30 0-00000003.page
[root@db-42 0-21]# cat *.page > /tmp/temp.page
[root@db-42 0-21]# cd ..
[root@db-42 FIL_PAGE_INDEX]# cd ..
[root@db-42 pages-1373362201]# cd ..
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# ./constraints_parser -5 -f /tmp/temp.page
customer  1299  1  "liu"  "yang"  "www.yihaodian.com"  141  1  "2013-07-09 17:29:19"  "2013-07-09 09:29:19"
customer  1300  1  "liu"  "yang"  "www.yihaodian.com"  141  1  "2013-07-09 17:29:19"  "2013-07-09 09:29:19"
customer  1301  1  "liu"  "yang"  "www.yihaodian.com"  141  1  "2013-07-09 17:29:19"  "2013-07-09 09:29:19"
customer  1302  1  "liu"  "yang"  "www.yihaodian.com"  141  1  "2013-07-09 17:29:19"  "2013-07-09 09:29:19"
customer  1303  1  "liu"  "yang"  "www.yihaodian.com"  141  1  "2013-07-09 17:29:19"  "2013-07-09 09:29:19"
customer  1304  1  "liu"  "yang"  "www.yihaodian.com"  141  1  "2013-07-09 17:29:19"  "2013-07-09 09:29:19"
customer  1305  1  "liu"  "yang"  "www.yihaodian.com"  141  1  "2013-07-09 17:29:19"  "2013-07-09 09:29:19"
customer  1306  1  "liu"  "yang"  "www.yihaodian.com"  141  1  "2013-07-09 17:29:19"  "2013-07-09 09:29:19"
.............
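The /tmp/customer.csv file used in the LOAD DATA step below can be produced by redirecting the parser's two output streams; my assumption (in line with the tool's documentation) is that the recovered rows go to stdout and the generated LOAD DATA statement to stderr:

[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# ./constraints_parser -5 -f /tmp/temp.page > /tmp/customer.csv 2> load_customer.sql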
Load the file straight back into MySQL with the LOAD DATA statement:
mysql> show tables;
+-------------------+
| Tables_in_liuyang |
+-------------------+
| customer          |
+-------------------+
1 row in set (0.00 sec)

mysql> truncate table customer;
Query OK, 0 rows affected (0.00 sec)

mysql> LOAD DATA INFILE '/tmp/customer.csv' REPLACE INTO TABLE `customer` FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"' LINES STARTING BY 'customer\t' (customer_id, store_id, first_name, last_name, email, address_id, active, create_date, last_update);
Query OK, 2306 rows affected (0.07 sec)
Records: 2306  Deleted: 0  Skipped: 0  Warnings: 0

mysql> select count(*) from customer;
+----------+
| count(*) |
+----------+
|     2306 |
+----------+
1 row in set (0.01 sec)

mysql> select * from customer limit 1;
+-------------+----------+------------+-----------+-------------------+------------+--------+---------------------+---------------------+
| customer_id | store_id | first_name | last_name | email             | address_id | active | create_date         | last_update         |
+-------------+----------+------------+-----------+-------------------+------------+--------+---------------------+---------------------+
|           1 |        1 | liu        | yang      | www.yihaodian.com |        141 |      1 | 2013-07-09 17:28:00 | 2013-07-09 09:28:00 |
+-------------+----------+------------+-----------+-------------------+------------+--------+---------------------+---------------------+
1 row in set (0.00 sec)
As you can see, under normal circumstances the data can be unloaded without trouble, which makes the tool a real help for recovery in less extreme situations.
--------------------
One more step needs to be added above:
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# make
--------------------
[root@db-42 percona-data-recovery-tool-for-innodb-0.5]# ./constraints_parser -5 -f /tmp
Thanks