The Problem
In the past, users were unable to take full advantage of the FLUSH TABLES WITH READ LOCK statement: InnoDB simply ignored the flush-to-disk part. Copying the .ibd file to another location was only safe if, by sheer luck, the table had no dirty pages in the InnoDB buffer pool that had not yet been synced to disk. The restore was not without its limitations and complications either. The .ibd file could not be copied over to another server, because during import InnoDB did not do any of the fix-ups required for a proper import. The main problems during import were:
- If the tablespace ID on the importing instance had changed, then the import would fail.
- The table and index IDs stored in the tablespace had to match those of the importing server.
- InnoDB uses a global row ID for tables without an explicit primary key; if row IDs were present in the tablespace being imported, they had to be lower than the importing server's current maximum row ID.
- The maximum LSN of the tablespace had to be lower than the server's current maximum LSN.
- The maximum transaction ID of the tablespace had to be lower than the server's current maximum transaction ID.
- Purge and change buffer issues could also get in the way.
In short, if it worked you were lucky that the above constraints happened to be satisfied, or your tables were probably read-only.
The Solution
Do the flush (export) properly, and also the import. During the flush we disable purge, merge all pending change buffer entries into the tablespace and then flush all the dirty pages to disk. Note: purge will remain disabled until UNLOCK TABLES; is issued.

We also write out a meta-data file that contains the table schema, the hostname of the exporting server, the page size of the exporting instance, the highest autoinc value in memory, etc., so that when we do an import we can check whether the table schema matches (foreign key relationships are currently ignored) and also restore the autoinc value. This meta-data file is created in the same directory as the tablespace and has a .cfg suffix. If the table is named customer then the meta-data file will be named customer.cfg and the tablespace will be named customer.ibd.

Note: FLUSH TABLES and IMPORT only work for tables that are stored outside the system tablespace, in their own tablespace. The InnoDB config parameter innodb_file_per_table must be set when creating the table.
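As a rough sketch of that prerequisite (the shop database and customer columns here are just placeholders, not part of the feature), the setup might look like this:
SET GLOBAL innodb_file_per_table = 1; -- new tables get their own .ibd file
CREATE DATABASE shop;
CREATE TABLE shop.customer (id INT PRIMARY KEY, name VARCHAR(64)) ENGINE=InnoDB;
-- shop/customer.ibd is created under datadir; FLUSH TABLES shop.customer WITH READ LOCK
-- would then also write shop/customer.cfg next to it, until UNLOCK TABLES; is issued.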
Because InnoDB stores data on disk in big-endian format, copying data between machines with different architectures is possible. Copying between different operating systems will also work.
Example
If you want to back up a single table:
CREATE DATABASE ftwrl;
CREATE TABLE ftwrl.t (C INT) ENGINE=InnoDB;
INSERT INTO ftwrl.t VALUES(1),(2),(3);
SELECT * FROM ftwrl.t;
FLUSH TABLES ftwrl.t WITH READ LOCK;
From the command line, copy t.ibd and t.cfg to your backup directory. The files should be in the ftwrl/ sub-directory under datadir.
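If you are not sure where datadir points on the exporting server, you can ask the server itself, for example:
SELECT @@datadir;
-- or, equivalently:
SHOW VARIABLES LIKE 'datadir';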
UNLOCK TABLES; — Note: This will remove the t.cfg file.
DROP TABLE ftwrl.t;
CREATE TABLE ftwrl.t (C INT) ENGINE=InnoDB;
ALTER TABLE ftwrl.t DISCARD TABLESPACE;
The DISCARD TABLESPACE step will rename the t.ibd file to t.ibt in the same directory; for now you have to remove this file manually. Once you have removed it, copy the backed-up tablespace t.ibd and the meta-data file t.cfg into the ftwrl/ sub-directory. Then issue the following:
ALTER TABLE ftwrl.t IMPORT TABLESPACE;
CHECK TABLE ftwrl.t;
SELECT * FROM ftwrl.t;
You should be good to go. If you want to back up multiple tables together then the syntax for that is:
FLUSH TABLES t1, t2, …, tn WITH READ LOCK;
Copy as before, but this time copy all of the tablespaces and meta-data files.
UNLOCK TABLES;
Then, for each table, first discard the tablespace as above, followed by an import as above.
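Putting the multi-table case together, assuming matching definitions for t1 and t2 already exist on the importing server (the table names here are just placeholders), the flow is roughly:
FLUSH TABLES ftwrl.t1, ftwrl.t2 WITH READ LOCK;
-- copy t1.ibd, t1.cfg, t2.ibd and t2.cfg to the backup directory at this point
UNLOCK TABLES; -- removes the .cfg files
ALTER TABLE ftwrl.t1 DISCARD TABLESPACE;
ALTER TABLE ftwrl.t2 DISCARD TABLESPACE;
-- remove any leftover .ibt files, then copy the saved .ibd and .cfg files back into ftwrl/
ALTER TABLE ftwrl.t1 IMPORT TABLESPACE;
ALTER TABLE ftwrl.t2 IMPORT TABLESPACE;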
Other changes
Errors and warnings will be pushed to the client so that the user doesn’t need to poke around in the server error log files to check why an operation failed. The intention is to send enough information to the user so that the problem can be clearly identified and understood.
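As a small illustration (the exact message text is not specified here), if an IMPORT fails the diagnostics can be re-displayed from the same client session rather than dug out of the error log:
ALTER TABLE ftwrl.t IMPORT TABLESPACE;
SHOW WARNINGS; -- shows the errors and warnings produced by the last statement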
Conclusion
The syntax for “export” is not finalised yet and may change; for now the above examples using FLUSH TABLES WITH READ LOCK should work. To export tables that are in a foreign key relationship, you must export and import all of those tables together, otherwise you may end up with inconsistencies. Export/import does not work on partitioned tables yet; this limitation will be addressed later. A matching table definition must exist in the importing server instance; currently we don't automagically create the table from the saved meta-data during import. The .cfg format is not written in stone yet, and we may add more data to this file, e.g., when we add partition support.
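For instance, if a child table has a foreign key to a parent table, both would be flushed and copied in one go (parent and child are hypothetical table names):
FLUSH TABLES parent, child WITH READ LOCK;
-- copy parent.ibd, parent.cfg, child.ibd and child.cfg together
UNLOCK TABLES;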
Your feedback is important to us, so that we can improve this feature further.