
InnoDB online index add and “The table ‘t’ is full” error


While trying to add an index to a fairly large table today, on a server[1] I'd not worked on previously, I got the following error after some time (and while I was away from the computer):

mysql> ALTER TABLE t
    ->   ADD KEY `index_a` (`a`),
    ->   ADD KEY `index_b` (`b`),
    ->   ADD KEY `index_c` (`c`);
ERROR 1114 (HY000): The table 't' is full

The error log did not bring any particular enlightenment (as usual, InnoDB is extremely verbose in its logging without actually saying anything useful):

110105 16:22:30  InnoDB: Error: Write to file (merge) failed at offset 4 1387266048.
InnoDB: 1048576 bytes should have been written, only 888832 were written.
InnoDB: Operating system error number 0.
InnoDB: Check that your OS and file system support files of this size.
InnoDB: Check also that the disk is not full or a disk quota exceeded.
InnoDB: Error number 0 means 'Success'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.1/en/operating-system-error-codes.html
110105 16:23:00 [ERROR] /usr/sbin/mysqld: The table 't' is full
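
In hindsight there is one clue buried in there: if I'm reading InnoDB's two-word offset format correctly (high 32 bits first, then low), "offset 4 1387266048" works out to 4 × 2^32 + 1387266048, roughly 18.5 GB, which no root file system of mine is going to hold. A quick way to see where those temporary merge files go and how much room they have (a minimal sketch; the mount point is illustrative):

# Where does mysqld put its temporary files, and how full is that
# file system?  (Here tmpdir turned out to be /var/tmp, on /.)
mysql -e "SHOW GLOBAL VARIABLES LIKE 'tmpdir'"
df -h /var/tmp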

I had to re-run the command while keeping a close eye on things, and I discovered that it was writing significant amounts of data to the root file system (which, as usual, isn't very big). I looked in all the usual places and didn't see any files of note. On a hunch, however, I checked /proc/<pid>/fd (which can be a lifesaver) and found these:

# ls -l /proc/`pidof mysqld`/fd | grep deleted
lrwx------ 1 root root 64 Jan  5 17:33 14 -> /var/tmp/ibEnEaSj (deleted)
lrwx------ 1 root root 64 Jan  5 17:33 5 -> /var/tmp/ibEoZHQc (deleted)
lrwx------ 1 root root 64 Jan  5 17:33 6 -> /var/tmp/ibHlWZb3 (deleted)
lrwx------ 1 root root 64 Jan  5 17:33 7 -> /var/tmp/ibUtVhxT (deleted)
lrwx------ 1 root root 64 Jan  5 17:33 8 -> /var/tmp/ibt1daDR (deleted)
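
The names are gone, but the file descriptors still work, so the space each deleted file is holding can be read straight off /proc. A minimal sketch, assuming GNU coreutils (the fd numbers are the ones from the listing above):

# stat -L follows the fd link to the (unlinked) file itself, so it
# still reports each deleted temp file's current size.
for fd in 5 6 7 8 14; do
    printf 'fd %s: ' "$fd"
    stat -Lc '%s bytes' "/proc/$(pidof mysqld)/fd/$fd"
done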

I can only assume it was one of these files that was growing. Changing the setting of tmpdir (sketched after the questions below) fixed things up, and it's now writing its large temporary files to a place with significant space (and on a much bigger and faster RAID array, to boot). However, this brings with it a few questions:

  1. Why does InnoDB need significant space in tmpdir? This is a new requirement with the InnoDB plugin (due to online index addition only?), but I don't see it documented anywhere.[2]
  2. Why are the files deleted while still in use? This makes it very painful for a DBA to see what is using the space. I know it's a typical Unix paradigm, but cleaning up on start and leaving the files linked would be much easier to manage.
  3. Why are the error messages so useless? How else is a DBA supposed to track this down?
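
For reference, the fix itself is a one-line configuration change. A minimal sketch, assuming the big RAID array is mounted somewhere like /data (the path is illustrative); tmpdir is not a dynamic variable in 5.1, so mysqld has to be restarted for it to take effect:

# Give mysqld a roomier home for its temporary files.
mkdir -p /data/mysql-tmp && chown mysql:mysql /data/mysql-tmp
cat >> /etc/my.cnf <<'EOF'
[mysqld]
tmpdir = /data/mysql-tmp
EOF
# Init script name may differ on your distribution.
/etc/init.d/mysql restart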

I could also note that online index add breaks the only previous way of getting any sort of status update while adding indexes: watching the temporary file grow. Perhaps it's time to bring back my patch to show progress (MySQL Bug #26182)?
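
Ironically, the /proc trick above gives back a crude version of that status update, even though the merge files are unlinked. A rough sketch (fd 14 is from the earlier listing and purely illustrative; pick whichever descriptor is actually growing):

# Poll the merge file's size every 30 seconds as a makeshift
# progress indicator.
pid=$(pidof mysqld)
watch -n 30 "stat -Lc %s /proc/$pid/fd/14"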

[1] Running MySQL-server-percona-5.1.42-9.rhel5 with innodb_version = 1.0.6-9.

[2] Perhaps it should go on "Fast Index Creation in the InnoDB Storage Engine: Limitations", at the very least?

