
Converting all tables in a MySQL DB to InnoDB

#!/bin/bash

# Safety guard: exit immediately so nothing runs by accident.
# Remove this line once you have reviewed the generated ALTER statements.
exit 1

dbname="eorder"

# Preview: print an ALTER TABLE statement for every table in the database.
echo 'SHOW TABLES;' | mysql "$dbname" | awk '!/^Tables_in_/ {print "ALTER TABLE `"$0"` ENGINE = InnoDB;"}' | column -t

# Execute: feed the same statements back into mysql to perform the conversion.
echo 'SHOW TABLES;' | mysql "$dbname" | awk '!/^Tables_in_/ {print "ALTER TABLE `"$0"` ENGINE = InnoDB;"}' | mysql "$dbname"
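
A more selective variant of the same idea, sketched here and untested as written, asks information_schema which tables are still on MyISAM and only rewrites those (the eorder database name is simply carried over from the script above):

#!/bin/bash

dbname="eorder"

# List only the tables that still use the MyISAM engine, then convert them.
mysql -N -e "SELECT table_name FROM information_schema.tables
             WHERE table_schema = '$dbname' AND engine = 'MyISAM';" \
  | awk -v db="$dbname" '{print "ALTER TABLE `" db "`.`" $0 "` ENGINE = InnoDB;"}' \
  | mysql "$dbname"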

5 Comments

  1. Rowan Rodrik

    But why?

  2. halfgaar

    Because I don’t like MyISAM tables. Or, I like MyISAM tables less.

  3. Rowan Rodrik

    But are you actually using transactions in the database (or MySQL’s attempt at foreign keys)? Else it’s just a world of DBA pain you’re entering for no gain.

    If you’re switching to InnoDB and you expect your tables to significantly shrink in size sometime, do make sure that you have innodb_file_per_table enabled (see the sketch after the comments), or you won’t be able to recover any of your freed space. I’m experiencing this problem first-hand at the moment.

    IMNSHO, MySQL is just a glorified flat-file interface and only really good for quick and dirty hack jobs. It’s a real shame that PostgreSQL isn’t more ubiquitous.

  4. Dan

    I know the pain of which you speak 🙂

    A few years ago I ran into similar issues with InnoDB. I had a 20G partition (I’ll never need more than that, right?!) that I had put MySQL on, only to later discover the evil single ibdata1 file. It wasn’t until I couldn’t write to the database anymore that I noticed it had filled up!

    IIRC, I had tried to add some new indexes, which filled up the partition. The downer for me was that this old box had a very limited-size hard drive in it, so I couldn’t do much with the system running in runlevel 2 (Debian). Ultimately, I think I ended up playing musical partitions (LVM + nbd’s), moving stuff off to another machine, resizing one partition, shrinking others, etc. It was hell 🙂 (On a side note, LVM is a blessing!)

    I will say, though, that I was glad that I had gone to InnoDB in the end. The FKey integrity saved my butt on a few occasions. That being said, referential integrity doesn’t exactly speed things up, nor is it even the default in some of the PHP applications out there. In the context of a wiki, though, I think it just might be worthwhile, since on a large wiki with constant use a daily cronjob backup, combined with table locks on MyISAM tables, is just as painful. InnoDB with the --single-transaction switch to mysqldump (sketched after the comments) is probably worth the extra overhead. Also, if you don’t have stable power, I wouldn’t go near MyISAM with a 10 ft. pole.

    At one point, I even went through the hassle of setting up master/slave. Another fresh new hell 😉 I swear, even with the master having filesystem/hdd caching disabled, sync-mounted volumes, etc., the master would fall behind the slave (heterogeneous machines in almost every way) probably 80% of the time the power kicked out. And of course, every time the slave was ahead of the master, I had to dump/restore and replay the binlogs (as opposed to going in and hand-fixing rows, resetting autoincrement counters, etc.).

    Personally, I would just suggest backing up the entire database, test blowing away the MySQL data directory in a VM to get a feel for the commands, and then restoring everything to a nice clean setup with the file-per-table InnoDB option. It will help with table-lock issues and give you a more reliable mysqldump, IMHO. Just make sure to leave the MediaWiki tables that need MyISAM alone (i.e. the counters and fulltext tables). Another nice trick, if memory serves me, is that TRUNCATE on file-per-table InnoDB tables will actually reclaim space and be very quick compared to deleting all rows, which just marks the space as free in the tables’ on-disk files.

    Anyways, I think I am reaching a new world record for a comment 🙂 I hope things work out for you, and yay for pgsql! Cheers o/

    Dan

  5. Rowan Rodrik

    Dan, That’s an educational tale. Thanks for your (way more than) 2 cents! 😛
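
Following up on the innodb_file_per_table remark in comment 3, here is a minimal sketch of how to check and enable it (SET GLOBAL only works on MySQL versions where the setting is dynamic, and it only affects tables created or rebuilt afterwards):

# Check whether each InnoDB table gets its own .ibd file (ON) or whether
# everything lands in the shared ibdata1 file (OFF).
mysql -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"

# Enable it for tables created or rebuilt from now on; add
# innodb_file_per_table = 1 under [mysqld] in my.cnf to persist it across restarts.
mysql -e "SET GLOBAL innodb_file_per_table = ON;"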

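And for the mysqldump switch Dan mentions in comment 4, a minimal sketch of a lock-free backup (eorder is again the example database from the script above; --single-transaction only gives a consistent snapshot for transactional tables, so any remaining MyISAM tables are dumped as-is):

# Dump inside a single transaction instead of taking table locks, so InnoDB
# tables come out as a consistent snapshot while the site keeps running.
mysqldump --single-transaction eorder > eorder-$(date +%F).sql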